Dataset columns: id (int64, range 39 to 79M); url (string, lengths 32–168); text (string, lengths 7–145k); source (string, lengths 2–105); categories (list, lengths 1–6); token_count (int64, range 3–32.2k); subcategories (list, lengths 0–27)
61,950,060
https://en.wikipedia.org/wiki/KaVo%20Kerr
KaVo Kerr was a dental equipment manufacturer group that later became part of Envista. The group stemmed from a joint venture set up in 2016 between KaVo (KaVo Dental GmbH), which was established in 1909 in Berlin, Germany, and Kerr Corporation, which was founded in 1891 in Detroit, Michigan, and operated as a division of Danaher Corporation headquartered in Brea, California. In December 2019, Danaher spun off its dental segment into an independent publicly traded company, Envista Holdings Corporation, which at the time employed 12,000 people worldwide. History Kerr Kerr was established in 1891 in Detroit, Michigan by brothers Robert and John Kerr as The Detroit Dental Manufacturing Company and started to offer its products and services to the European market in 1893. The company officially changed its name to The KERR Manufacturing Company in 1939. The company established its first factory in Europe in Scafati, Italy in 1959. Kerr acquired part of the McShirley line of products in 1971. Later in 1978, the Sybron Dental Product Division was formed. In 2001, Kerr acquired the Hawe Neos company with the aim of enhancing its range of prophylaxis consumables. In 2006, Kerr became part of Danaher Corporation. In 2014, Kerr acquired DUX Dental and Vettec Inc. In 2015, Total Care, Axis SybronEndo and Kerr reorganized into a single organization: Kerr Dental. KaVo KaVo was established in 1909 in Berlin, Germany by Alois Kaltenbach as KaVo Dental GmbH. By 1919, Richard Voigt had joined KaVo, and the number of employees grew to 300 by 1939. In 1946, the headquarters were moved from Potsdam to the Upper Swabian town of Biberach an der Riss. In 1959, the company opened a dental technology factory in Leutkirch. In 2004, it was purchased by Danaher Corporation. In the same year, KaVo acquired Gendex. In 2005, KaVo acquired Pelton & Crane, a dental operatory equipment manufacturer with a 100-year history in North America, which joined the KaVo Kerr family along with DEXIS. In 2007, KaVo acquired i-CAT, followed by the Soredex imaging brands in 2009. In 2012, Aribex, which is best known for the NOMAD handheld and portable X-ray systems, was acquired by KaVo Dental Group. In September 2021, Envista announced that KaVo would be sold to Planmeca for $455 million. See also PaloDEx :de:KaVo Dental (German) References External links Official website Danaher Corporation Dental companies of the United States Companies based in Brea, California
KaVo Kerr
[ "Biology" ]
550
[ "Danaher Corporation", "Life sciences industry" ]
61,953,437
https://en.wikipedia.org/wiki/EPSG%20Geodetic%20Parameter%20Dataset
EPSG Geodetic Parameter Dataset (also EPSG registry) is a public registry of geodetic datums, spatial reference systems, Earth ellipsoids, coordinate transformations and related units of measurement, originated by a member of the European Petroleum Survey Group (EPSG) in 1985. Each entity is assigned an EPSG code between 1024 and 32767, along with a standard machine-readable well-known text (WKT) representation. The dataset is maintained by the IOGP Geomatics Committee. Most geographic information systems (GIS) and GIS libraries use EPSG codes as Spatial Reference System Identifiers (SRIDs) and EPSG definition data for identifying coordinate reference systems and projections, and for performing transformations between these systems, while some also support SRIDs issued by other organizations (such as Esri). Common EPSG codes EPSG:4326 - WGS 84, latitude/longitude coordinate system based on the Earth's center of mass, used by the Global Positioning System among others. EPSG:3857 - Web Mercator projection used for display by many web-based mapping tools, including Google Maps and OpenStreetMap. EPSG:7789 - International Terrestrial Reference Frame 2014 (ITRF2014), an Earth-fixed system that is independent of continental drift. History The dataset was created in 1985 by Jean-Patrick Girbig of Elf, to "standardize, improve and share spatial data between members of the European Petroleum Survey Group". It was made public in 1993. In 2005, the EPSG organisation was merged into the International Association of Oil & Gas Producers (IOGP) and became its Geomatics Committee. However, the name of the EPSG registry was kept to avoid confusion. Since then, the acronym "EPSG" has become increasingly synonymous with the dataset or registry itself. See also List of map projections References External links Official website Spatial databases Spatial analysis Geodesy Catalogues Geomatics Geographic coordinate systems
EPSG Geodetic Parameter Dataset
[ "Physics", "Mathematics" ]
410
[ "Applied mathematics", "Spatial analysis", "Geographic coordinate systems", "Space", "Coordinate systems", "Spacetime", "Geodesy" ]
61,956,101
https://en.wikipedia.org/wiki/Energy%20Transitions%20Commission
The Energy Transitions Commission (ETC) is an international think tank, focusing on economic growth and climate change mitigation. It was created in September 2015 and is based in London. The commission currently contains 32 commissioners from a selection of individuals and company and government leaders. Activities The primary activity of the commission is publishing reports and position papers. They are typically supported by a body of readily available or explicitly commissioned data sets provided by various independent or industry-related organizations. The findings of reports are then reviewed through a broad consultation process within and outside of the commission. Finally, the report or position paper is redacted and generally understood to constitute the collective view of the ETC commission. Although individual commissioners may disagree with particular findings or recommendations, the general direction of the arguments developed in the publications is guided by consensus. Publications Since its founding in 2015, the commission has published two extensive reports and half a dozen papers. For example, Pathways from Paris – Accessing the INDC Opportunity, is a 25-page study of INDCs (i.e. the plans developed by individual countries and submitted at the 2015 UN Climate Change Conference in Paris). This investigation highlighted the mechanisms various countries utilize in order to reduce emissions and identify opportunities for further reductions. News outlets of general interest and the specialized press reported summaries of these reports. Both reports outlined below were cited as reference to several articles in a 2018 special report edition of The Economist magazine. Better Energy, Greater Prosperity This 120-page report recognized the opportunity to halve global carbon emissions by 2040. According to the report, it is possible to simultaneously ensure economic development and access affordable, sustainable energy for all, while reducing carbon emissions by half the current output. The report suggested four strategies to be concurrently implemented: Accelerate clean electricity access. Decarbonize beyond power generation, using bioenergy, hydrogen, and carbon capture for industrial activities and transport modes which cannot be electrified in an economical fashion. Improve energy productivity by targeting a 3% energy productivity per year (compared to 1.5% currently) Optimize usage of remaining fossil fuel uses According to the report, the strategies listed above would have reduced fossil fuel consumption by 30%, but 50% of energy needs would have needed to be met with fossil fuels. This, the report explained, could be solved by optimizing usage of these sources by switching from coal to gas, by preventing methane leakages, and by stopping routine flaring. Another area of optimization would come from carbon capture or sequestration such as underground storage, and finally a decrease in fossil fuel use. The report suggested two solutions for energy policy: Increased investment, keeping in mind that the investment required by the transition is estimated to be between $300-600 billion USD annually. At this level, the cost would not cause a significant macroeconomic challenge, relative to the approximately $20 trillion in anticipated savings and investments annually. The issue is more one of a shift in the mix of investments: moving away from fossil fuels and toward low carbon technologies and energy-efficient equipment and infrastructure. 
Public governance, with the introduction of coherent and predictable policies which favour the energy transition, along with the phasing out of fossil fuel subsidies and the introduction of carbon pricing. Mission Possible This 172-page report focused on the "hard to abate sectors", namely: Heavy industry: cement, steel and plastics Heavy duty transport: heavy road transport, maritime shipping, and aviation Collectively, these sectors currently represent approximately 30% of energy emissions, with the potential to increase to 60% by 2050 (due to the reduction of the share owed to other sectors, and to the demand growth in these hard to abate sectors). The report concluded that full decarbonization of these sectors is feasible and the cost to the global economy would be less than 0.5% of GDP by 2050. It also identified cement, plastics and shipping as the most challenging sectors, due to process emissions, end-of-life emissions and the fragmented nature of the maritime industry respectively. The feasibility, if not inevitability, of some of these transitions, for example those concerning the industrial production of ammonia, is echoed by (or in some cases originates from) the respective industry sectors. Funding The ETC is funded by various businesses and organizations, including major oil and gas companies – this has been a source of concern for many observers. Current or past sponsors include Bank of America Merrill Lynch, BHP Billiton, Energy Systems Catapult, CO2 Sciences, the European Climate Foundation, the Grantham Foundation and the UN Foundation. Regardless of funding, every commissioner has an equal voice and participation in ETC activities. List of commissioners References Think tanks based in the United Kingdom Emissions reduction
Energy Transitions Commission
[ "Chemistry" ]
949
[ "Greenhouse gases", "Emissions reduction" ]
61,956,494
https://en.wikipedia.org/wiki/Unified%20scattering%20function
The unified scattering function was proposed in 1995 as a universal approach to describe small-angle X-ray and neutron scattering (and in some cases light scattering) from disordered systems that display hierarchical structure. Concept The concept of universal descriptions of scattering, that is, scattering functions that do not depend on a specific structural model but whose parameters can be related back to specific structures, has existed since about 1950. The prominent examples of universal scattering functions are Guinier's Law, I(q) = G exp(−q²Rg²/3), and Porod's Law, I(q) = Bq⁻⁴, where G, Rg, and B are constants related to the scattering contrast, structural volume, surface area, and radius of gyration. q is the magnitude of the scattering vector, which is related to the Bragg spacing, d, by q = 2π/d = (4π/λ) sin(θ/2). λ is the wavelength and θ is the scattering angle (2θ in diffraction). Both Guinier's Law and Porod's Law refer to an aspect of a single structural level. A structural level is composed of a size that can be expressed in Rg, and a structure as reflected in a power-law decay, −4 in the case of Porod's Law for solid objects with smooth, sharp interfaces. For other structures the power-law decay yields the mass-fractal dimension, df, which relates the mass and size of the object, thereby partially defining the object. For instance, a rod has df = 1 and a disk has df = 2. The prefactor to the power-law yields other details of the structure such as the surface-to-volume ratio for solid objects, the branch content for chain structures, or the convolution or crumpled-ness of various objects. The prefactor to Guinier's Law yields the mass and volume fraction under dilute conditions. Above the overlap concentration (generally 1 to 5 volume percent) structural screening must be considered. In addition to these universal functions that describe only a part of a structural level, a number of scattering functions that can describe a single structural level have been proposed for some disordered systems, most interestingly Debye's scattering function for a Gaussian polymer chain, derived during World War II, I(q) = (2G/x²)(exp(−x) − 1 + x), where x = q²Rg². This function reverts to Guinier's Law at low-q and to a power-law, I(q) = Bq⁻², at high-q, reflecting the two-dimensional nature of a random walk or a diffusion path. It refers to a single structural level, corresponding to a Guinier regime and a power-law regime, the Guinier regime reflecting the overall size of the object without reference to the internal or surface structure of the object and the power-law reflecting the details of the structure, in this case a linear (unbranched), mass-fractal object with mass-fractal dimension df = 2 (connectivity dimension of 1 reflecting a linear structure; and minimum dimension of 2 indicating a random conformation in 3d space). In the 1990s it became apparent that single structural level functions similar to Debye's would be of great use in describing complex, disordered structures such as branched mass-fractal aggregates, linear polymers in good solvents (df ~ 5/3), branched polymers (df > 2), cyclic polymers, and macromolecules of complex topology such as star, dendrimer, and comb polymers, as well as polyelectrolytes, micellar and colloidal materials such as worm-like micelles. Further, no analytically derived scattering functions could describe multiple structural levels in hierarchical materials.
The observation of multiple structural levels is extremely common even in the case of a simple linear Gaussian polymer chain described by the Debye function, which is statistically composed of rod-like Kuhn units (level 1) that follow I(q) = Bq⁻¹ at the highest-q. Common examples of hierarchical materials are silica, titania, and carbon black nano-aggregates composed of solid primary particles (level 1) displaying Porod scattering at highest q, which aggregate into fairly rigid mass-fractal structures at intermediate nanoscales (level 2), and which agglomerate into micron-scale solid or network structures (level 3). Since these structural levels overlap in a small-angle scattering pattern, it was not possible to accurately model these materials using these single-level functions and various power-law functions. For these reasons, a global scattering function that could be expanded to multiple structural levels was of interest. In 1995 Beaucage derived the Unified Scattering Function, a sum over structural levels of a Guinier term and a damped power-law term, where "i" refers to the structural level starting with the smallest size (highest q). qi* is a reduced scattering vector defined through an error-function damping of q, and k has a value of 1 for solid structural levels and approximately 1.06 for mass-fractal structural levels. The Unified Function recognizes that all structures display Guinier behavior at the largest sizes; that is, all structures exhibit a size, and if the structure is randomly arranged that size manifests as a Gaussian function in small-angle scattering governed by the radius of gyration, with larger objects displaying a smaller standard deviation, or larger Rg. At high-q the Guinier term fails to describe the structure because it reflects an object with no surface or internal structure [8]. The second term gives the missing information concerning the surface or internal structure of the object by way of the power Pi and the prefactor Bi (as well as how Pi and Bi relate to Gi and Rg,i). Beaucage realized that the problem of obtaining a generic multi-level scattering function lay in the power-law term, since a power-law could not extend infinitely to low-q and yield a finite intensity as q → 0. Also, such a function would overpower the Guinier term in the range of q where the latter is appropriate. The reference provides one of several possible derivations, using Porod's Law as an example of a power-law regime. A vector, r, can be visualized as the vector connecting interference points between an incident beam and the scattered beam. r = 2π/q, where q = (4π/λ) sin(θ/2) is the scattering vector in inverse space. Scattering occurs when two fringe points separated by r contain scattering material. If material is located at |r|/2 destructive interference occurs. So within a solid object there is always material at a position |r|/2 that negates scattering from material separated by |r|. Only at the surface do conditions of contrast occur. Porod's Law describes scattering from a smooth, sharp interface, which results in scattering that is proportional to the surface area and decays with q⁻⁴. The volume of a scattering element in this case scales with V ~ r³. Scattering involves binary interference so is proportional to (ρV)² ~ r⁶. The number of these V domains is proportional to the surface area divided by the area of a domain, N ~ S/r². So the scattering intensity follows I(q) ~ SV²/r² ~ Sr⁴ ~ Sq⁻⁴. At small size scales, at high q, for an oddly shaped object with a smooth/sharp interface, the structure appears to be a flat surface and the described approach is appropriate. As the size scale of observation, r, approaches Rg at low q this model fails because the surface is no longer planar.
That is, the scattering event in Figure 1 relies on both ends of the vector, r, being coplanar and arranged as indicated (the specular condition) with respect to the incident and scattered beams. In the absence of this orientation no scattering occurs. The curvature of the particle, which is related to the radius of gyration, extinguishes surface scattering at low-q in the Guinier regime. Incorporating this observation into Porod's Law in the original derivation is not possible, since that derivation relies on a Fourier transform of a correlation function for surface scattering. Beaucage arrived at the Unified Function through a new derivation of Guinier's Law based on randomly placed particles, and adaptation of this approach to the modification of Porod's Law. Beaucage derivation of Guinier's Law Consider a randomly placed vector r such that both ends of the vector are in the particle. If the vector were held constant in space, while the particle were translated and rotated to any position meeting this condition and an average of the structures were taken, any object would result in a Gaussian mass distribution that would display a Gaussian correlation function, and would appear as an average cloud with no surface. The Fourier transform of this Gaussian correlation function results in Guinier's Law. Limitations to power-law scattering at low-q Power-law scattering is restricted to sizes smaller than the object. For example, within a mass-fractal object such as a polymer chain, the normalized mass of the chain, z, scales with the normalized size, R ~ Reted/lk (the end-to-end distance normalized by the Kuhn length), with a scaling power of the mass-fractal dimension, df: z ~ R^df. Considering scattering elements of size r, the number of such elements in a particle scales with N ~ z/r^df, and the mass of such an element n ~ r^df, so the scattering is proportional to Nn², which scales as r^df ~ q^−df. At low-q the vector r ~ 1/q approaches the size of the particle. For this reason the power-law regime ends at low-q. One way to consider this is to think of the vector ra beginning and ending in the particle, Figure 2 (a). This vector meets the mass-fractal condition if the particle is a mass-fractal. In Figure 2 (b) the vector rb, separating two points, does not meet the mass-fractal condition, but with a translation of the particle by d the mass-fractal condition can be met for both ends of rb, as in (c). In scattering we are considering all possible translations of the particle relative to one end of the vector r being located within the mass-fractal particle. The probability of moving the particle to meet the mass-fractal condition for both ends of the vector is less than 1 if r is close to the particle size. If the particle were of infinite size this probability would always be 1. For a finite particle, Figure 2 shows that the reduction in probability for a scattering event at large sizes can be viewed as a reduction in the length of the vector r. This is the basis of the Unified Function. Rather than directly determining the scattering function, the reduction in r related to this translation is calculated. Since r is related to 2π/q, we consider an effective increase in the scattering vector from q to q*. The relationship between q and q* is determined by first considering the consequence of the translation in Figure 2 on the correlation function, based on the Gaussian derivation of Guinier's Law [8]. This analysis results in a modifying factor which, following the Debye relationship, can be incorporated into q, yielding the transform to q*, shown in Figure 2 in terms of q* = 2π/r*.
The references demonstrate that for strong power-law decays this transform is equivalent to a simpler substitution, which allows for the direct use of a modified power-law term. For mass-fractal power-laws this approximation is not perfect due to the shape of the correlation function at low-q, as described in the references. A good approximation is to include a constant k, whose value is about 1.06 for df = 2, in the definition of q*. In general for mass fractals it is found that k ~ 1.06 is a good approximation, and k = 1 for surface-fractal scattering. With this modification, power-law scattering is compatible with Guinier scattering and the two terms can be summed in a Unified Equation. The Unified Equation can describe a single structural level and can closely replicate the Debye function and the equations for polydisperse spheres, rods, sheets, good-solvent polymers, branched polymers, and cyclic polymers, as demonstrated in the related publications. A wide range of disordered materials including mass and surface fractal structures can therefore be described using the Unified Approach. For hierarchical materials with multiple structural levels, the Unified Equation can be extended using a Gaussian cutoff at high-q for the power-law function, which is common to equations for rods, disks and other simple scattering functions such as those described in Guinier and Fournet, where it is taken that Rg,0 = 0. This function has been used to describe persistence in polymer chains in good and theta solvents, branched polymers, polymers of complex topology such as star polymers, mass fractal primary particles/aggregates/agglomerates, rod diameter/length, disk thickness/width and other complex hierarchical structures. The lead cutoff term assumes that structural level i is composed of structural levels i-1. If this is not true, a free parameter can substitute for Rg,i-1, as described in the references. The Unified Function is quite flexible and has been extended as a Hybrid Unified Function for micellar systems where the local structure is a perfect cylinder or other regular structure. Implementation of Unified Function Jan Ilavsky of Argonne National Laboratory's Advanced Photon Source (USA) has provided open user code to perform fits using the Unified Function in the Igor Pro programming environment, including video tutorials and an instruction manual. References Scattering theory
Unified scattering function
[ "Chemistry" ]
2,715
[ "Scattering", "Scattering theory" ]
62,889,984
https://en.wikipedia.org/wiki/Lean%20%28proof%20assistant%29
Lean is a proof assistant and a functional programming language. It is based on the calculus of constructions with inductive types. It is an open-source project hosted on GitHub. It was developed primarily by Leonardo de Moura while employed by Microsoft Research and now Amazon Web Services, and has had significant contributions from other coauthors and collaborators during its history. Development is currently supported by the non-profit Lean Focused Research Organization (FRO). History Lean was launched by Leonardo de Moura at Microsoft Research in 2013. The initial versions of the language, later known as Lean 1 and 2, were experimental and contained features such as support for homotopy type theory – based foundations that were later dropped. Lean 3 (first released Jan 20, 2017) was the first moderately stable version of Lean. It was implemented primarily in C++ with some features written in Lean itself. After version 3.4.2 Lean 3 was officially end-of-lifed while development of Lean 4 began. In this interim period members of the Lean community developed and released unofficial versions up to 3.51.1. In 2021, Lean 4 was released, which was a reimplementation of the Lean theorem prover capable of producing C code which is then compiled, enabling the development of efficient domain-specific automation. Lean 4 also contains a macro system and improved type class synthesis and memory management procedures over the previous version. Another benefit compared to Lean 3 is the ability to avoid touching C++ code in order to modify the frontend and other key parts of the core system, as they are now all implemented in Lean and available to the end user to be overridden as needed. Lean 4 is not backwards-compatible with Lean 3. In 2023, the Lean FRO was formed, with the goals of improving the language's scalability and usability, and implementing proof automation. Overview Libraries The official lean package includes a standard library batteries, which implements common data structures that may be used for both mathematical research and more conventional software development. In 2017, a community-maintained project to develop a Lean library mathlib began, with the goal to digitize as much of pure mathematics as possible in one large cohesive library, up to research level mathematics. As of September 2024, mathlib had formalised over 165,000 theorems and 85,000 definitions in Lean. Editors integration Lean integrates with: Visual Studio Code Neovim Emacs Interfacing is done via a client-extension and Language Server Protocol server. It has native support for Unicode symbols, which can be typed using LaTeX-like sequences, such as "\times" for "×". Lean can also be compiled to JavaScript and accessed in a web browser and has extensive support for meta-programming. Examples (Lean 4) The natural numbers can be defined as an inductive type. This definition is based on the Peano axioms and states that every natural number is either zero or the successor of some other natural number. inductive Nat : Type | zero : Nat | succ : Nat → Nat Addition of natural numbers can be defined recursively, using pattern matching. 
def Nat.add : Nat → Nat → Nat
  | n, Nat.zero => n                          -- n + 0 = n
  | n, Nat.succ m => Nat.succ (Nat.add n m)   -- n + succ(m) = succ(n + m)

This is a simple proof of p ∧ q → q ∧ p for two propositions p and q (where ∧ is the conjunction and → the implication) in Lean using tactic mode:

theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h          -- assume p ∧ q with proof h, the goal is q ∧ p
  apply And.intro  -- the goal is split into two subgoals, one is q and the other is p
  · exact h.right  -- the first subgoal is exactly the right part of h : p ∧ q
  · exact h.left   -- the second subgoal is exactly the left part of h : p ∧ q

This same proof in term mode:

theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩

Usage Mathematics Lean has received attention from mathematicians such as Thomas Hales, Kevin Buzzard, and Heather Macbeth. Hales is using it for his project, Formal Abstracts. Buzzard uses it for the Xena project. One of the Xena Project's goals is to rewrite every theorem and proof in the undergraduate math curriculum of Imperial College London in Lean. Macbeth is using Lean to teach students the fundamentals of mathematical proof with instant feedback. In 2021, a team of researchers used Lean to verify the correctness of a proof by Peter Scholze in the area of condensed mathematics. The project garnered attention for formalizing a result at the cutting edge of mathematical research. In 2023, Terence Tao used Lean to formalize a proof of the Polynomial Freiman-Ruzsa (PFR) conjecture, a result published by Tao and collaborators in the same year. Artificial intelligence In 2022, OpenAI and Meta AI independently created AI models to generate proofs of various high-school-level olympiad problems in Lean. Meta AI's model is available for public use with the Lean environment. In 2023, Vlad Tenev and Tudor Achim co-founded startup Harmonic, which aims to reduce AI hallucinations by generating and checking Lean code. In 2024, Google DeepMind created AlphaProof, which proves mathematical statements in Lean at the level of a silver medalist at the International Mathematical Olympiad. This was the first AI system that achieved a medal-worthy performance on a math olympiad's problems. See also Dependent type List of proof assistants mimalloc Type theory References External links Lean Website Lean Community Website Lean FRO The Natural Number Game - an interactive tutorial to learn Lean Moogle.ai - a semantic search engine for finding theorems in mathlib Programming languages created in 2013 Proof assistants Dependently typed languages Educational math software Functional languages Free and open-source software Free software programmed in C++ Microsoft free software Microsoft programming languages Microsoft Research Software using the Apache license Theorem provers Theorem proving software systems
Lean (proof assistant)
[ "Mathematics" ]
1,294
[ "Automated theorem proving", "Free mathematics software", "Theorem proving software systems", "Educational math software", "Mathematical software" ]
62,891,333
https://en.wikipedia.org/wiki/Abelian%20Lie%20group
In geometry, an abelian Lie group is a Lie group that is an abelian group. A connected abelian real Lie group is isomorphic to a product R^k × (S^1)^h of copies of the real line and the circle group. In particular, a connected abelian (real) compact Lie group is a torus; i.e., a Lie group isomorphic to (S^1)^n. A connected complex Lie group that is a compact group is abelian, and a connected compact complex Lie group is a complex torus; i.e., a quotient of C^n by a lattice. Let A be a compact abelian Lie group with identity component A0. If the component group A/A0 is a cyclic group, then A is topologically cyclic; i.e., A has an element that generates a dense subgroup. (In particular, a torus is topologically cyclic.) See also Cartan subgroup Citations Works cited Abelian group theory Geometry Lie groups
Abelian Lie group
[ "Mathematics" ]
171
[ "Lie groups", "Mathematical structures", "Algebraic structures", "Geometry", "Geometry stubs" ]
67,245,035
https://en.wikipedia.org/wiki/Bahareque
, also spelled (also referred to in spanish as bajareque or fajina), is a traditional building technique used in the construction of housing by indigenous peoples. The constructions are developed using a system of interwoven sticks or reeds, with a covering of mud, similar to the systems of wattle and clay structures seen in Europe. This technique is primarily used in regions such as Caldas, which is one of the 32 departments of Colombia. Origin , is an ancient construction system used within the Americas. The name is said to come from the word , is an old Spanish term for walls made of bamboo ( in Spanish) and soil. Guadua is a common woody grass found in Colombia. While its exact origin is uncertain, some authors have also attributed it to Caribbean-Taino culture and written it as 'bajareque'. Similar homophonies are found in other native American languages such as Miteca, ba and balibi, bava. Pedro José Ramírez Sendoya (1897-1966), a Colombian priest and anthropologist, mentioned its use in his writings, noting that it was used to construct "good buildings with walls of clay and wood almost as wide as one of our walls, tall and whitewashed with very white clay". Construction and materials Based on Jorge Enrique Robledo's book, Muñoz points out that this traditional technique of building evolved in Caldas from the first buildings constructed during the 1840s through the introduction of new materials, creating different typologies. All of these typologies typically use stone foundations. These typologies are: 1. , 2. , 3. , and 4. . Each typology has a different structural design. For instance, uses bamboo in both the frame and the structural panels and the plaster, and according to Sarmiento, is made from a mixture of earth and cattle dung. uses wood in the frame and bamboo () in its structural panels, and the plaster is made by a kind of “reinforced cement” because of the use of steel mat between the bamboo panels and the cement plaster. In the 1840s, the first settlers of Manizales, the capital city of Caldas, used in buildings that were usually single story. At the same time, in the rural areas, some farmers used a mix of traditional building styles. This mix of traditional styles was tapia, which is a pre-Hispanic construction technique, and . The first floor, , was based on compacted earth using wood earth forms, and the second floor was . In 1993, Robledo called this variation . The name derives from the fact that this new technique of had better performance in the earthquakes (Spanish meaning 'earthquake') since the first floor, which was rigid, absorbed the seismic energy, and the second floor, which was flexible, dissipated the energy. Consequently, the , which was used in a few farms and occasionally in the city of Manizales as temporary housing, gained favor after people saw that earthquakes were destroying buildings built with other construction techniques, such as . Those built with Estilo Temblorero remained standing. Because of the materials' flammability, and after the great fires of Manizales between 1925 and 1926, the trustworthiness of was lost. After these great fires and the introduction of new construction techniques, such as reinforced concrete, new variations of the technique were introduced, leaving more trust in reinforced concrete than . These new techniques, which used concrete frames and facades and structural panels, were the most common structural designs in the reconstruction of the downtown that was swept by the great fires. 
See also Adobe Footnotes Works cited Indigenous architecture of the Americas Architecture in Colombia Building materials Sustainable building
Bahareque
[ "Physics", "Engineering" ]
749
[ "Sustainable building", "Building engineering", "Architecture", "Construction", "Materials", "Matter", "Building materials" ]
67,246,222
https://en.wikipedia.org/wiki/Babler%20oxidation
The Babler oxidation, also known as the Babler-Dauben oxidation, is an organic reaction for the oxidative transposition of tertiary allylic alcohols to enones using pyridinium chlorochromate (PCC): It is named after James Babler who first reported the reaction in 1976 and William Dauben who extended the scope to cyclic systems in 1977, thereby significantly increasing the synthetic utility: The reaction produces the desired enone product in high yield (typically >75%), is operationally simple and does not require air-free techniques or heating. It suffers, however, from the very high toxicity and environmental hazard posed by the hexavalent chromium PCC oxidising reagent. The solvent of choice is usually dry dichloromethane (DCM) or chloroform (CHCl3). The reaction has been utilised as a step in the total syntheses of various compounds, e.g. of morphine. Mechanism The reaction proceeds through the formation of a chromate ester (1) from nucleophilic attack of the chlorochromate by the allylic alcohol. The ester then undergoes a [3,3]-sigmatropic shift to create the isomeric chromate ester (2). Finally, oxidation of this intermediate yields the α,β-unsaturated aldehyde or ketone product (3). Alternative reagents Concerns about the high toxicity and carcinogenicity of the PCC oxidant, as well as the role of chromium(VI) species as environmental pollutants in groundwater, have led to investigations for the replacement of PCC in the reaction. One successful alternative reported by multiple sources involves the use of N-oxoammonium salts derived from TMP: The oxoammonium salts with non-coordinating anions are used (such as tetrafluoroborate, perchlorate, hexafluorophosphate or hexafluoroantimonate). The oxidiser is added in stoichiometric amounts, usually 1.5 eq relative to the alcohol. A different approach to minimise toxic chromium(VI) use involves performing the reaction with only a catalytic amount of PCC and an excess of another oxidant, to re-oxidise the chromium species as part of the catalytic cycle. Commonly reported stoichiometric reagents for this purpose include di-tert-butyl peroxide, 2-iodoxybenzoic acid or periodates. Secondary alcohols The Babler-Dauben oxidation of secondary allylic alcohols proves more difficult to control than that of tertiary analogues, as along with the desired product (a) a mixture with a high proportion of side-products (b) and (c) is obtained: The yield of a is found to be maximised when PCC is not used in stoichiometric quantities but as a co-oxidant; the best effect (50–70% yield of a) is achieved for orthoperiodic acid as the main oxidiser with 5 mol% PCC. Acetonitrile (MeCN) is used as the solvent instead of the usual DCM. Notably, in contrast to the general oxidation of tertiary alcohols, the secondary alcohol case only works with aromatic substrates (Ar-: an aryl group). This, along with the strongly acidic conditions due to the stoichiometric amount of periodic acid, suggests that the initially formed chromate ester isomerises through a carbocationic route rather than a sigmatropic reaction as for tertiary alcohols. See also Oxidation with chromium(VI) complexes Oxoammonium-catalyzed oxidation Other reactions of PCC References Organic oxidation reactions Name reactions
Babler oxidation
[ "Chemistry" ]
803
[ "Name reactions", "Organic oxidation reactions", "Organic reactions" ]
49,200,527
https://en.wikipedia.org/wiki/Mxparser
mXparser is an open-source mathematical expressions parser/evaluator providing abilities to calculate various expressions at a run time. Expressions definitions are given as plain text, then verified in terms of grammar / syntax, finally calculated. Library source code is maintained separately for Java and C#, providing the same API for Java/JVM, Android, .NET and Mono (Common Language Specification Compliant). Main features / usage examples mXparser delivers functionalities such as: basic calculations, implied multiplication, built-in constants and functions, numerical calculus operations, iterated operators, user defined constants, user defined functions, user defined recursion, Unicode mathematical symbols support. Basic operators mXparser supports basic operators, such as: addition '+', subtraction '-', multiplication '*', division '/', factorial '!', power '^', modulo '#'. Expression e = new Expression("2+3/(4+5)^4"); double v = e.calculate(); Implied multiplication Expression e = new Expression("2(3+4)3"); double v = e.calculate(); Expression e = new Expression("2pi(3+4)2sin(3)e"); double v = e.calculate(); Binary relations It is possible to combine typical expressions with binary relations (such as: greater than '>', less than '<', equality '=', inequality '<>', greater or equal '>=', lower or equal '<='), as each relation evaluation results in either '1' for true outcome, or '0' for false. Expression e = new Expression("(2<3)+5"); double v = e.calculate(); Boolean logic Boolean logic also operates assuming equivalence of '1 as true' and '0 as false'. Supported Boolean operators include: AND conjunction, OR disjunction, NAND Sheffer stroke, NOR, XOR Exclusive OR, IMP Implication, CIMP Converse implication, NIMP Material nonimplication, CNIMP Converse nonimplication, EQV Logical biconditional, Negation. Expression e = new Expression("1 --> 0"); double v = e.calculate(); Built-in mathematical functions Supported common mathematical functions (unary, binary and variable number of arguments), including: trigonometric functions, inverse trigonometric functions, logarithm functions, exponential function, hyperbolic functions, Inverse hyperbolic functions, Bell numbers, Lucas numbers, Stirling numbers, prime-counting function, exponential integral function, logarithmic integral function, offset logarithmic integral, binomial coefficient and others. Expression e = new Expression("sin(0)+ln(2)+log(3,9)"); double v = e.calculate(); Expression e = new Expression("min(1,2,3,4)+gcd(1000,100,10)"); double v = e.calculate(); Expression e = new Expression("if(2<1, 3, 4)"); double v = e.calculate(); Expression e = new Expression("iff(2<1, 1; 3<4, 2; 10<2, 3; 5<10, 4)"); double v = e.calculate(); Built-in math constants Built-in mathematical constants, with high precision. Expression e = new Expression("sin(pi)+ln(e)"); double v = e.calculate(); Iterated operators Iterated summation and product operators. Expression e = new Expression("sum(i, 1, 10, ln(i))"); double v = e.calculate(); Expression e = new Expression("prod(i, 1, 10, sin(i))"); double v = e.calculate(); Numerical differentiation and integration mXparser delivers implementation of the following calculus operations: differentiation and integration. 
Expression e = new Expression("der( sin(x), x )"); double v = e.calculate(); Expression e = new Expression("int( sqrt(1-x^2), x, -1, 1)"); double v = e.calculate(); Prime numbers support Expression e = new Expression("ispr(21)"); double v = e.calculate(); Expression e = new Expression("Pi(1000)"); double v = e.calculate(); Unicode mathematical symbols support Expression e = new Expression("√2"); double v = e.calculate(); Expression e = new Expression("∜16 + ∛27 + √16"); double v = e.calculate(); Expression e = new Expression("∑(i, 1, 5, i^2)"); double v = e.calculate(); Elements defined by user Library provides API for creation of user-defined objects, such as: constants, arguments, functions. User-defined constants Constant t = new Constant("t = 2*pi"); Expression e = new Expression("sin(t)", t); double v = e.calculate(); User-defined arguments Argument x = new Argument("x = 5"); Argument y = new Argument("y = 2*x", x); Expression e = new Expression("sin(x)+y", x, y); double v = e.calculate(); User-defined functions Function f = new Function("f(x, y) = sin(x)+cos(y)"); Expression e = new Expression("f(1,2)", f); double v = e.calculate(); User-defined variadic functions Function f = new Function("f(...) = sum( i, 1, [npar], par(i) )"); Expression e = new Expression("f(1,2,3,4)", f); double v = e.calculate(); User-defined recursion Function fib = new Function("fib(n) = iff( n>1, fib(n-1)+fib(n-2); n=1, 1; n=0, 0 ) )"); Expression e = new Expression("fib(10)", fib); double v = e.calculate(); Requirements Java: JDK 1.5 or higher .NET/Mono: framework 2.0 or higher Documentation Tutorial Javadoc API specification mXparser - source code Source code is maintained and shared on GitHub. See also List of numerical libraries List of numerical analysis software Mathematical software Exp4j References External links MathParser.org mXparser on NuGet mXparser on Apache Maven Scalar powered by mXparser ScalarMath.org powered by mXparser Free mathematics software Parsing 2010 software Free software programmed in Java (programming language) Free software programmed in C Sharp Software using the BSD license Free mobile software Software that uses Mono (software) Free and open-source Android software .NET Framework software Computer algebra systems
Mxparser
[ "Mathematics" ]
1,575
[ "Computer algebra systems", "Free mathematics software", "Mathematical software" ]
49,202,119
https://en.wikipedia.org/wiki/Diffusion%20bonding
Diffusion bonding or diffusion welding is a solid-state welding technique used in metalworking, capable of joining similar and dissimilar metals. It operates on the principle of solid-state diffusion, wherein the atoms of two solid, metallic surfaces intersperse themselves over time. This is typically accomplished at an elevated temperature, approximately 50-75% of the absolute melting temperature of the materials. A weak bond can also be achieved at room temperature. Diffusion bonding is usually implemented by applying high pressure, in conjunction with necessarily high temperature, to the materials to be welded; the technique is most commonly used to weld "sandwiches" of alternating layers of thin metal foil, and metal wires or filaments. Currently, the diffusion bonding method is widely used in the joining of high-strength and refractory metals within the aerospace and nuclear industries. History The act of diffusion welding is centuries old. This can be found in the form of "gold-filled," a technique used to bond gold and copper for use in jewelry and other applications. In order to create filled gold, smiths would begin by hammering out an amount of solid gold into a thin sheet of gold foil. This foil was then placed on top of a copper substrate and weighted down. Finally, using a process known as "hot-pressure welding" or HPW, the weight/copper/gold-foil assembly was placed inside an oven and heated until the gold foil was sufficiently bonded to the copper substrate. Modern methods were described by the Soviet scientist N.F. Kazakov in 1953. Characteristics Diffusion bonding involves no liquid fusion, and often no filler metal. No weight is added to the total, and the join tends to exhibit both the strength and temperature resistance of the base metal(s). The materials endure no, or very little, plastic deformation. Very little residual stress is introduced, and there is no contamination from the bonding process. It may theoretically be performed on a join surface of any size with no increase in processing time; in practice, however, the surface area tends to be limited by the pressure that must be applied and by other physical limitations. Diffusion bonding may be performed with similar and dissimilar metals, reactive and refractory metals, or pieces of varying thicknesses. Due to its relatively high cost, diffusion bonding is most often used for jobs either difficult or impossible to weld by other means. Examples include welding materials normally impossible to join via liquid fusion, such as zirconium and beryllium; materials with very high melting points such as tungsten; alternating layers of different metals which must retain strength at high temperatures; and very thin, honeycombed metal foil structures. Titanium alloys will often be diffusion bonded as the thin oxide layer can be dissolved and diffused away from the bonding surfaces at temperatures over 850 °C. Temperature Dependence Steady state diffusion is determined by the amount of diffusion flux that passes through the cross-sectional area of the mating surfaces. Fick's first law of diffusion states J = −D (dC/dx), where J is the diffusion flux, D is a diffusion coefficient, and dC/dx is the concentration gradient through the materials in question. The negative sign indicates that diffusion proceeds down the concentration gradient. Another form of Fick's law states J = M/(A·t), where M is defined as either the mass or amount of atoms being diffused, A is the cross-sectional area, and t is the time required.
Equating the two expressions and rearranging gives the required time as t = M/(A·D·|dC/dx|). As mass and area are constant for a given joint, the time required is largely dependent on the concentration gradient, which changes by only incremental amounts through the joint, and the diffusion coefficient. The diffusion coefficient is determined by the equation D = D0 exp(−Qd/(R·T)), where Qd is the activation energy for diffusion, R is the universal gas constant, T is the thermodynamic temperature experienced during the process, and D0 is a temperature-independent preexponential factor that depends on the materials being joined. For a given joint, the only term in this equation within the operator's control is temperature. Processes When joining two materials of similar crystalline structure, diffusion bonding is performed by clamping the two pieces to be welded with their surfaces abutting each other. Prior to welding, these surfaces must be machined to as smooth a finish as economically viable, and kept as free from chemical contaminants or other detritus as possible. Any intervening material between the two metallic surfaces may prevent adequate diffusion of material. Specific tooling is made for each welding application to mate the welder to the workpieces. Once clamped, pressure and heat are applied to the components, usually for many hours. The surfaces are heated either in a furnace, or via electrical resistance. Pressure can be applied using a hydraulic press at temperature; this method allows for exact measurements of load on the parts. In cases where the parts must have no temperature gradient, differential thermal expansion can be used to apply load. By fixturing parts using a low-expansion metal (e.g. molybdenum) the parts will supply their own load by expanding more than the fixture metal at temperature. Alternative methods for applying pressure include the use of dead weights, differential gas pressure between the two surfaces, and high-pressure autoclaves. Diffusion bonding must be done in a vacuum or inert gas environment when using metals that have strong oxide layers (e.g. copper). Surface treatment, including polishing, etching, and cleaning, as well as bonding pressure and temperature, are important factors in the diffusion bonding process. At the microscopic level, diffusion bonding occurs in three simplified stages: Microasperity deformation- before the surfaces completely contact, asperities (very small surface defects) on the two surfaces contact and plastically deform. As these asperities deform, they interlink, forming interfaces between the two surfaces. Diffusion-controlled mass transport- elevated temperature and pressure cause accelerated creep in the materials; grain boundaries and raw material migrate and gaps between the two surfaces are reduced to isolated pores. Interface migration- material begins to diffuse across the boundary of the abutting surfaces, blending this material boundary and creating a bond. Benefits The bonded surface has the same physical and mechanical properties as the base material. Once bonding is complete, the joint may be tested, for example using tensile testing. The diffusion bonding process is able to produce high quality joints in which no discontinuity or porosity exists at the interface; in other words, the bonded piece can be sanded, machined and heated like the base material. Diffusion bonding enables the manufacture of high precision components with complex shapes. Diffusion bonding is also flexible.
The diffusion bonding method can be used widely, joining either similar or dissimilar materials, and is also important in processing composite materials. The process is not especially difficult to set up, and the cost of performing diffusion bonding is not high. The process also keeps plastic deformation of the joined material to a minimum. Applicability Diffusion bonding is primarily used to create intricate forms for the electronics, aerospace, nuclear, and microfluidics industries. Since this form of bonding takes a considerable amount of time compared to other joining techniques such as explosion welding, parts are made in small quantities, and often fabrication is mostly automated. Depending on the requirements, however, the required time can sometimes be reduced. In an attempt to reduce fastener count, labor costs, and part count, diffusion bonding, in conjunction with superplastic forming, is also used when creating complex sheet metal forms. Multiple sheets are stacked atop one another and bonded in specific sections. The stack is then placed into a mold and gas pressure expands the sheets to fill the mold. This is often done using titanium or aluminum alloys for parts needed in the aerospace industry. Typical materials that are welded include titanium, beryllium, and zirconium. In many military aircraft, diffusion bonding helps conserve expensive strategic materials and reduce manufacturing costs. Some aircraft have over 100 diffusion-bonded parts, including fuselages, outboard and inboard actuator fittings, landing gear trunnions, and nacelle frames. References Further reading Kalpakjian, Serope, Schmid, Steven R. "Manufacturing Engineering and Technology, Fifth Edition", pp. 771-772 External links "Cast Nonferrous: Solid State Welding," at Key to Metals An excellent discussion of diffusion bonding by Amir Shirzadi for the UK Centre for Materials Education Welding Materials science
Diffusion bonding
[ "Physics", "Materials_science", "Engineering" ]
1,712
[ "Welding", "Applied and interdisciplinary physics", "Materials science", "Mechanical engineering", "nan" ]
71,550,825
https://en.wikipedia.org/wiki/Bretagnolle%E2%80%93Huber%20inequality
In information theory, the Bretagnolle–Huber inequality bounds the total variation distance between two probability distributions and by a concave and bounded function of the Kullback–Leibler divergence . The bound can be viewed as an alternative to the well-known Pinsker's inequality: when is large (larger than 2 for instance.), Pinsker's inequality is vacuous, while Bretagnolle–Huber remains bounded and hence non-vacuous. It is used in statistics and machine learning to prove information-theoretic lower bounds relying on hypothesis testing  (Bretagnolle–Huber–Carol Inequality is a variation of Concentration inequality for multinomially distributed random variables which bounds the total variation distance.) Formal statement Preliminary definitions Let and be two probability distributions on a measurable space . Recall that the total variation between and is defined by The Kullback-Leibler divergence is defined as follows: In the above, the notation stands for absolute continuity of with respect to , and stands for the Radon–Nikodym derivative of with respect to . General statement The Bretagnolle–Huber inequality says: Alternative version The following version is directly implied by the bound above but some authors prefer stating it this way. Let be any event. Then where is the complement of . Indeed, by definition of the total variation, for any , Rearranging, we obtain the claimed lower bound on . Proof We prove the main statement following the ideas in Tsybakov's book (Lemma 2.6, page 89), which differ from the original proof (see C.Canonne's note for a modernized retranscription of their argument). The proof is in two steps: 1. Prove using Cauchy–Schwarz that the total variation is related to the Bhattacharyya coefficient (right-hand side of the inequality): 2. Prove by a clever application of Jensen’s inequality that Step 1: First notice that To see this, denote and without loss of generality, assume that such that . Then we can rewrite And then adding and removing we obtain both identities. Then because Step 2: We write and apply Jensen's inequality: Combining the results of steps 1 and 2 leads to the claimed bound on the total variation. Examples of applications Sample complexity of biased coin tosses Source: The question is How many coin tosses do I need to distinguish a fair coin from a biased one? Assume you have 2 coins, a fair coin (Bernoulli distributed with mean ) and an -biased coin (). Then, in order to identify the biased coin with probability at least (for some ), at least In order to obtain this lower bound we impose that the total variation distance between two sequences of samples is at least . This is because the total variation upper bounds the probability of under- or over-estimating the coins' means. Denote and the respective joint distributions of the coin tosses for each coin, then We have The result is obtained by rearranging the terms. Information-theoretic lower bound for k-armed bandit games In multi-armed bandit, a lower bound on the minimax regret of any bandit algorithm can be proved using Bretagnolle–Huber and its consequence on hypothesis testing (see Chapter 15 of Bandit Algorithms). History The result was first proved in 1979 by Jean Bretagnolle and Catherine Huber, and published in the proceedings of the Strasbourg Probability Seminar. Alexandre Tsybakov's book features an early re-publication of the inequality and its attribution to Bretagnolle and Huber, which is presented as an early and less general version of Assouad's lemma (see notes 2.8). 
A constant improvement on Bretagnolle–Huber was proved in 2014 as a consequence of an extension of Fano's Inequality. See also Total variation for a list of upper bounds Bretagnolle–Huber–Carol Inequality in Concentration inequality References Information theory Probabilistic inequalities
Bretagnolle–Huber inequality
[ "Mathematics", "Technology", "Engineering" ]
836
[ "Telecommunications engineering", "Applied mathematics", "Theorems in probability theory", "Computer science", "Probabilistic inequalities", "Information theory", "Inequalities (mathematics)" ]
71,551,291
https://en.wikipedia.org/wiki/Marie-Louise%20Saboungi
Marie-Louise Saboungi is a Lebanese-born American condensed matter physicist at the Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie (IMPMC), Sorbonne University, Paris, France. Early life and education Saboungi was born January 1, 1948, in Lebanon. She studied Mathematics and Physics at the Lebanese University in Beirut and obtained a Doctorat d’Etat in Physics at Aix-Marseille University, France in 1973, studying the statistical thermodynamics of molten salts. Career After her doctorate, Saboungi joined Argonne National Laboratory and worked there as Senior Scientist until 2002. Following this she was a director at the Centre de Recherche sur la Matière Divisée, CNRS until 2011. From 2007 to 2011 she was also Program Officer at Agence Nationale de la Recherche. In 2011 she joined IMPMC at Sorbonne University, where she currently works. She has also been a Distinguished Professor of Physics at University of Orléans in 2002–2011, and was appointed Distinguished Visiting Professor in Soochow University in 2014. Research Saboungi's work focuses on complex soft materials, including ionic liquids and aqueous electrolytes, with a view to applications in energy and biotechnology.  She also studies silver chalcogenides, which display many fascinating phenomena including fast-ion conduction at higher temperatures, linear magnetoresistance over a broad range of magnetic fields, and topological insulator behavior. Awards and honours 1990 – Fellow of the American Association for the Advancement of Science 1991 – Award for Leadership in the Professions, YWCA of Metropolitan Chicago 1992 – Fellow of the American Physical Society 2000, 2014 – Fellow of Japan Society for the Promotion of Science 2007 – Fellow, Alexander von Humboldt Foundation 2014 – Doctor Honoris Causa, University of the Andes, Mérida, Venezuela Selected publications Large magnetoresistance in non-magnetic silver chalcogenides, Nature 390, 57–60 (1997), Electron distribution in water, J. Chem. Phys. 112, 9206 (2000), Improving reinforcement of natural rubber by networking of activated carbon nanotubes, Carbon, 46, 7, June 2008, 1037–1045, The Structure of Aqueous Guanidinium Chloride Solutions, J. Am. Chem. Soc. 2004, 126, 37, 11462–11470, References External links List of patents Living people Condensed matter physicists Women physicists Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science Aix-Marseille University alumni Pierre and Marie Curie University people 1948 births
Marie-Louise Saboungi
[ "Physics", "Materials_science" ]
541
[ "Condensed matter physicists", "Condensed matter physics" ]
71,554,558
https://en.wikipedia.org/wiki/Minoxidil%20sulfate
Minoxidil sulfate, also known as minoxidil sulfate ester or minoxidil N-O-sulfate, is an active metabolite of minoxidil (Rogaine, Loniten, others) and is the active form of this agent. Minoxidil acts as a prodrug of minoxidil sulfate. Minoxidil sulfate is formed from minoxidil via sulfotransferase enzymes, with the predominant enzyme responsible, at least in hair follicles, being SULT1A1. Minoxidil sulfate acts as a potassium channel opener, among other actions, and has vasodilating, hypotensive, and trichogenic or hypertrichotic (hair growth-promoting) effects. Its mechanism of action in terms of hair growth is still unknown, although multiple potential mechanisms have been implicated. Minoxidil sulfate is a sulfate ester of minoxidil, not a sulfate salt of the compound. However, minoxidil sulfate forms an inner salt, which makes it more hydrophobic than minoxidil. This is in contrast to most sulfate esters, which are usually more hydrophilic than their non-ester forms. The bioactivation of minoxidil into minoxidil sulfate is very unusual and is one of the few known instances of sulfation producing a more active drug form. Normally, sulfation tends to inactivate drugs by reducing their biological activity and increasing their excretion. Minoxidil sulfate is highly unstable in aqueous solutions and alcohol-containing solvents, with a half-life of 6 hours in aqueous solutions and an even shorter half-life in alcohol-containing solvents. This has served as a limiting factor in its potential pharmaceutical use and therapeutic effectiveness. Moreover, minoxidil sulfate has a 40% higher molecular weight than minoxidil, and this may reduce its absorption into the scalp. In any case, a minoxidil sulfate-based topical formulation has been investigated for the treatment of scalp hair loss. Additionally, minoxidil-sulfate-based topical formulations appear to be available for medical use in some parts of the world, for instance in Brazil. References Amine oxides Aminopyrimidines Antihypertensive agents Hair loss medications Human drug metabolites 1-Piperidinyl compounds Potassium channel openers Sulfate esters Vasodilators
Minoxidil sulfate
[ "Chemistry" ]
512
[ "Chemicals in medicine", "Amine oxides", "Functional groups", "Human drug metabolites" ]
71,555,641
https://en.wikipedia.org/wiki/Thom%E2%80%93Sebastiani%20Theorem
In complex analysis, a branch of mathematics, the Thom–Sebastiani Theorem states: given the germ f = g ⊕ h defined as f(z, w) = g(z) + h(w), where g and h are germs of holomorphic functions with isolated singularities in separate sets of variables z and w, the vanishing cycle complex of f is isomorphic to the tensor product of those of g and h. Moreover, the isomorphism respects the monodromy operators in the sense: T_f = T_g ⊗ T_h. The theorem was introduced by Thom and Sebastiani in 1971. Observing that the analog fails in positive characteristic, Deligne suggested that, in positive characteristic, the tensor product should be replaced by a (certain) local convolution product. References Theorems in complex analysis
Thom–Sebastiani Theorem
[ "Mathematics" ]
128
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "Mathematical analysis stubs" ]
71,555,642
https://en.wikipedia.org/wiki/Promethium%28III%29%20bromide
Promethium(III) bromide is an inorganic compound with the chemical formula PmBr3. It is a radioactive salt. It crystallizes in the hexagonal crystal system, with the space group P63/m (No. 176). Preparation Promethium(III) bromide can be obtained by reacting hydrogen bromide with promethium(III) oxide: Pm2O3 + 6 HBr —500 °C→ 2 PmBr3 + 3 H2O The hydrate of promethium(III) bromide cannot be dehydrated by heating to give the anhydrous salt. Instead, on heating it reacts with water vapor to form promethium oxybromide: PmBr3 + H2O(g) → PmOBr + 2 HBr References Bromides Promethium compounds Lanthanide halides
Promethium(III) bromide
[ "Chemistry" ]
175
[ "Bromides", "Salts" ]
44,337,770
https://en.wikipedia.org/wiki/Optica%20Optics%20Software
Optica is an optical design program used for the design and analysis of both imaging and illumination systems. It works by ray tracing the propagation of rays through an optical system. It performs polarization ray-tracing, non-sequential ray-tracing, energy calculations, and optimization of optical systems in three-dimensional space. It also performs symbolic modeling of optical systems, diffraction, interference, wave-front, and Gaussian beam propagation calculations. In addition to conducting simulations of optical designs, Optica is used by scientists to create illustrations of the simulated results in publications. Some examples of Optica being used in simulations and illustrations include holography, x-ray optics, spectrometers, Cerenkov radiation, microwave optics, nonlinear optics, scattering, camera design, extreme ultraviolet lithography simulations, telescope optics, laser design, ultrashort pulse lasers, eye models, solar concentrators and Ring Imaging CHerenkov (RICH) particle detectors. History Optica was originally developed by Donald Barnhart of Urbana, Illinois, USA, and has been in continual development since 1994. Wolfram Research first sold the original version as a Mathematica application. From 2005 to 2009, Optica Software was sold by iCyt Mission Technology Inc, Champaign, Illinois (renamed Sony Biotechnology Inc in 2010). At iCyt, Optica2 was renamed as Rayica, and Wavica and LensLab were also developed. Later Rayica-Wavica was combined and named back to Optica3. Since 2009, Optica Software has been a subsidiary of Barnhart Optical Research LLC. References External links Optica Software Website Wolfram Research Optics Page Wolfram Research Optica3 Optical software Physics software
Optica Optics Software
[ "Physics" ]
348
[ "Physics software", "Computational physics" ]
44,339,074
https://en.wikipedia.org/wiki/Atmospheric-pressure%20photoionization
Atmospheric pressure photoionization (APPI) is a soft ionization method used in mass spectrometry (MS) usually coupled to liquid chromatography (LC). Molecules are ionized using a vacuum ultraviolet (VUV) light source operating at atmospheric pressure (105 Pa), either by direct absorption followed by electron ejection or through ionization of a dopant molecule that leads to chemical ionization of target molecules. The sample is usually a solvent spray that is vaporized by nebulization and heat. The benefit of APPI is that it ionizes molecules across a broad range of polarity and is particularly useful for ionization of low polarity molecules for which other popular ionization methods such as electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) are less suitable. It is also less prone to ion suppression and matrix effects compared to ESI and APCI and typically has a wide linear dynamic range. The application of APPI with LC/MS is commonly used for analysis of petroleum compounds, pesticides, steroids, and drug metabolites lacking polar functional groups and is being extensively deployed for ambient ionization particularly for explosives detection in security applications. Instrument configuration The figure shows the main components of an APPI source: a nebulizer probe which can be heated to 350–500 °C, an ionization region with a VUV photon source, and an ion-transfer region under intermediate pressure that introduces ions into the MS analyzer. The analyte(s) in solution from the HPLC flows into the nebulizer at a flow rate that can range from μL/min to mL/min range. The liquid flow is vaporized by nebulization and heat. The vaporized sample then enters into the radiation zone of the VUV source. Sample ions then enter into the MS interface region, frequently a capillary through the combination of a decreasing pressure gradient and electric fields. APPI has been commercially developed as dual ionization sources more commonly with APCI, but also with ESI. Ionization mechanisms The photoionization mechanism is simplified under vacuum conditions: photon absorption by the analyte molecule, leading to electron ejection, forming a molecular radical cation, M•+. This process is similar to electron ionization common to GC/MS, except that the ionization process is soft, i.e., less fragmentation. In the atmospheric region of an LC/MS system, the ionization mechanism becomes more complex. The unpredictable fate of ions is generally detrimental to LC/MS analysis, but like most processes, once they are better understood, these properties can be exploited to enhance performance. For example, the role of dopant in APPI, first developed and patented for the atmospheric ion source of ion mobility spectrometry (IMS), was adapted to APPI for LC/MS. The basic APPI mechanisms can be summarized by the following scheme: Direct positive ion APPI Dopant or solvent-assisted positive ion APPI The fundamental process in photoionization is the absorption of a high-energy photon by the molecule and subsequent ejection of an electron. In direct APPI, this process occurs for the analyte molecule, forming the molecular radical cation M•+. The analyte radical cation can be detected as M•+ or it can react with surrounding molecules and be detected as another ion. The most common reaction is the abstraction of a hydrogen atom from the abundant solvent to form the stable [M+H]+ cation, which is usually the observed ion. 
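Whether a given species can be ionized directly by the lamp comes down to comparing the VUV photon energy with its ionization energy. The short Python sketch below illustrates that comparison; the 10.0 eV krypton-lamp line and the ionization energies listed are approximate literature values chosen here for illustration, not data taken from any particular APPI study.

photon_energy_ev = 10.0   # main emission line of a typical krypton VUV discharge lamp
# Approximate ionization energies in eV (illustrative values only)
ionization_energy_ev = {
    "toluene (dopant)": 8.83,
    "acetone (dopant)": 9.70,
    "naphthalene (analyte)": 8.14,
    "methanol (solvent)": 10.84,
    "water (solvent)": 12.62,
}
for species, ie in ionization_energy_ev.items():
    if photon_energy_ev > ie:
        verdict = "direct photoionization possible"
    else:
        verdict = "not ionized directly; relies on dopant/solvent ion chemistry"
    print(f"{species:24s} IE = {ie:5.2f} eV -> {verdict}")

Low-ionization-energy dopants such as toluene or acetone are therefore ionized efficiently by the lamp while common LC solvents are not, which motivates the dopant-assisted route described next.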
In dopant-APPI (or photoionization-induced APCI), a quantity of photoionizable molecules (e.g., toluene or acetone) is introduced into the sample stream to create a source of charge carriers. Use of a photoionizable solvent can also achieve the same effect. The dopant or solvent ions can then react with neutral analyte molecules via proton transfer or charge exchange reactions. The above table simplifies the dopant process. In fact, there may be extensive ion-molecule chemistry between dopant and solvent before the analyte becomes ionized. APPI can also produce negative ions by creating a high abundance of thermal electrons from dopant or solvent ionization or by photons striking metal surfaces in the ionization source. The cascade of reactions that can lead to M− or dissociative negative ions [M-X]− often involve O2 as an electron charge carrier. Examples of negative ionization mechanisms include: Direct or dopant-assisted negative ion APPI History Photoionization has a long history of use in mass spectrometry experiments, though mostly for research purposes and not for sensitive analytical applications. Pulsed lasers have been used for non-resonant multiphoton ionization (MPI), resonance-enhanced MPI (REMPI) using tunable wavelengths, and single-photon ionization using sum frequency generation in non-linear media (usually gas cells). Non-laser sources of photoionization include discharge lamps and synchrotron radiation. The former sources were not adaptable to high sensitivity analytical applications because of low spectral brightness in the former case and large "facility-size" in the latter case. Meanwhile, photoionization has been used for GC detection and as a source for ion mobility spectrometry for many years suggesting the potential for use in mass spectrometry. The first development of APPI for LC/MS was reported by Robb, Covey, and Bruins and by Syage, Evans, and Hanold in 2000. APPI sources were commercialized shortly thereafter by Syagen Technology and made available for most commercial MS systems and by Sciex for their line of MS instruments. Concurrent to the development of APPI was a similar use of a VUV source for low pressure photoionization (LPPI) by Syage and coworkers that accepted atmospheric pressure gas phase samples but stepped down the pressure for ionization to about 1 torr (~100 Pa) before further pressure reduction for introduction into a MS analyzer. This photoionization method is well suited as an interface between gas chromatography (GC) and MS. Advantages APPI is most used for LC/MS although it has recently found widespread use in ambient applications such as detection of explosives and narcotics compounds for security applications using ion mobility spectrometry. Compared to the more commonly used predecessor ionization sources ESI and APCI, APPI ionizes a broader range of compounds with the benefit increasing toward the non-polar end of the scale. It also has relatively low susceptibility to ion suppression and matrix effects, which makes APPI very effective in detecting compounds quantitatively in complex matrices. APPI has other advantages including a broader linear range and dynamic range than ESI as seen by the example in the left figure. It is also generally more selective than APCI with reduced background ion signals as shown in the right figure. This latter example also highlights the benefit of APPI vs. ESI in that the HPLC conditions were for non-polar normal-phase in this case using n-hexane solvent. 
ESI requires polar solvents and further hexane could pose an ignition hazard for ESI and APCI that use high voltages. APPI works well under normal-phase conditions since many of the solvents are photoionizable and serve as dopant ions, which allows specialized applications such as separation of enantiomers (right figure). Regarding applicability to a range of HPLC flow rates, the signal level of analytes by APPI has been observed to saturate and even decay at higher solvent flow rates (above 200 μl/min), and therefore, much lower flow rates are recommended for APPI than for ESI and APCI. This has been suggested to be due to absorption of photons by the increasing density of solvent molecules., However, this leads to the benefit that APPI can extend to very low flow rates (e.g., 1 μL/min domain) allowing for effective use with capillary LC and capillary-electrophoresis. Application The application of APPI with LC/MS is commonly used for analysis of low polarity compounds such as petroleums, polyatomic hydrocarbons, pesticides, steroids, lipids, and drug metabolites lacking polar functional groups. Excellent review articles can be found in the References. APPI has also been effectively applied for ambient ionization applications lending itself to several practical configurations. One configuration termed desorption APPI (DAPPI) was developed by Haapala et al. and is pictured in the figure here. This device has been applied to the analysis of drugs of abuse in various solid phases, drug metabolites and steroids in urine, pesticides in plant material, etc. APPI has also been interfaced to a DART (direct analysis in real time) source and shown for non-polar compounds such as steroids and pesticides to enhance signal by up to an order of magnitude for N2 flow, which is preferred for DART because it is significantly cheaper and easier to generate then the higher performing use of He. Commercial APPI sources have also been adapted to accept an insertable sampling probe that can deliver or liquid or solid sample to the nebulizer for vaporization and ionization. This configuration is similar to atmospheric solid analysis probe (ASAP) that is based on the use of APCI and therefore is referred to as APPI-ASAP. The benefits of APPI-ASAP vs. APCI-ASAP are similar to those observed in LC/MS, namely higher sensitivity to lower polarity compounds and less background signal for samples in complex matrices. Though ambient ionization has experienced a renaissance in the last decades, it has been used in the security industry for many decades, for example in swab detections at airports. The swabs collect condensed phase material from surfaces and are then inserted into a thermal desorber and ionizer assembly that then flows into the ion detector, which in most cases are an ion mobility spectrometer (IMS), but in later cases have been MS analyzers. A picture of a swab-APPI-IMS system used in airports and other security venues is given in the left figure In fact, a swab-APPI-MS system designed for explosives and narcotics detection for security applications performs very well for all types of ambient analysis using a sampling wand and swab (right figure). A particular demonstration (unpublished) showed excellent sensitivity and specificity for detection of pesticide compounds on a variety of fruits and vegetables showing detection limits for 37 priority pesticides ranging from 0.02 to 3.0 ng well below safe limits. 
See also Atmospheric pressure chemical ionization Chemical ionization Corona discharge Electrospray ionization Secondary electrospray ionization References Concepts in physics Mass spectrometry Ion source
Atmospheric-pressure photoionization
[ "Physics", "Chemistry" ]
2,259
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
44,340,383
https://en.wikipedia.org/wiki/Kawasaki%27s%20Riemann%E2%80%93Roch%20formula
In differential geometry, Kawasaki's Riemann–Roch formula, introduced by Tetsuro Kawasaki, is the Riemann–Roch formula for orbifolds. It can be used to compute the Euler characteristic of an orbifold. Kawasaki's original proof made use of the equivariant index theorem. Today, the formula is known to follow from the Riemann–Roch formula for quotient stacks. References Tetsuro Kawasaki. The Riemann–Roch theorem for complex V-manifolds. Osaka J. Math., 16(1):151–159, 1979 Theorems in differential geometry Theorems in algebraic geometry See also Riemann–Roch-type theorem
Kawasaki's Riemann–Roch formula
[ "Mathematics" ]
146
[ "Theorems in differential geometry", "Theorems in algebraic geometry", "Theorems in geometry" ]
59,340,793
https://en.wikipedia.org/wiki/Dallol%20%28hydrothermal%20system%29
Dallol is a unique, terrestrial hydrothermal system around a cinder cone volcano in the Danakil Depression, northeast of the Erta Ale Range in Ethiopia. It is known for its unearthly colors and mineral patterns, and the very acidic fluids that discharge from its hydrothermal springs. Etymology The term Dallol was coined by the Afar people and means dissolution or disintegration, describing a landscape of green acid ponds and geysers (pH-values less than 1) and iron oxide, sulfur and salt desert plains. Description Dallol mountain has an area of about , and rises about above the surrounding salt plains. A circular depression near the centre is probably a collapsed crater. The southwestern slopes have water-eroded salt canyons, pillars, and blocks. There are numerous saline springs and fields of small fumaroles. Numerous hot springs discharge brine and acidic liquid here. Small, widespread, temporary geysers produce cones of salt. The Dallol deposits include significant bodies of potash found directly at the surface. The yellow, ochre and brown colourings are the result of the presence of iron and other impurities. Older, inactive springs tend to be dark brown because of oxidation processes. Formation It was formed by the intrusion of basaltic magma into Miocene salt deposits and subsequent hydrothermal activity. Phreatic eruptions took place here in 1926, forming Dallol Volcano; numerous other eruption craters dot the salt flats nearby. These craters are the lowest known subaerial volcanic vents in the world, at or more below sea level. In October 2004, the shallow magma chamber beneath Dallol deflated and fed a magma intrusion southwards beneath the rift. The most recent signs of activity occurred in January 2011 in what may have been a degassing event from deep below the surface. Physical properties Dallol lies in the evaporitic plain of the Danakil Depression at the Afar Triangle, in the prolongation of the Erta Ale basaltic volcanic range. The intrusion of basaltic magma in the marine sedimentary sequence of Danakil resulted in the formation of a salt dome structure, where the hydrothermal system is hosted. The age of the hydrothermal system is unknown and the latest phreatic eruption that resulted in the formation of a diameter crater within the dome, took place in 1926. The wider area of Dallol is known as one of the driest and hottest places on the planet. It is also one of the lowest land points, lying below mean sea level. Other known hydrothermal features nearby Dallol are Gaet'Ale Pond and Black Lakes. The hydrothermal springs of Dallol discharge anoxic, hyper-acidic (pH < 0), hyper-saline (almost 10 times more saline than seawater), high temperature (hotter than ) brines that contain more than 26 g/L of iron. The main gases emitted from the springs and fumaroles are carbon dioxide, hydrogen sulfide, nitrogen, sulfur dioxide; and traces of hydrogen, argon, and oxygen. Although several other hyper-acidic (pH < 2) volcanic systems exist, mainly found in crater lakes and hydrothermal sites, the pH values of Dallol decrease far below zero. The coexistence of such extreme physicochemical characteristics (pH, salinity, high temperature, lack of oxygen, etc.) render Dallol one of the very few ‘poly-extreme’ sites on Earth. This is why Dallol is a key system for astrobiological studies investigating the limits of life. Parts of the region are nearly sterile, except for a diverse array of "ultrasmall" archaea. 
Dallol is highly dynamic; active springs go inactive and new springs emerge in new places in the range of days, and this is also reflected in the colors of the site that change with time, from white to green, lime, yellow, gold, orange, red, purple and ochre. In contrast to other hydrothermal systems known for their colorful pools (e.g. Grand Prismatic Spring), where the colors are generated by biological activity, the color palette of Dallol is produced by the inorganic oxidation of the abundant iron phases. Another fascinating feature of Dallol is the wide array of unusual mineral patterns such as salt-pillars, miniature geysers, water-lilies, flower-like crystals, egg-shaped crusts, and pearl-like spheres. The main mineral phases encountered at Dallol are halite, jarosite, hematite, akaganeite and other Fe-oxyhydroxides, gypsum, anhydrite, sylvite and carnallite. Absence of life In October 2019, a French-Spanish team of scientists published an article in Nature Ecology and Evolution that concludes that while the salt plains are teeming with halophilic microorganisms, there is no life in Dallol's multi-extreme ponds due to the combination of hyperacidic and hypersaline environments, and the abundance of magnesium (which catalyzes the denaturation of biomolecules). However another team reported for the first time evidence of life existing with these hot springs using a combination of morphological and molecular analyses. Ultra-small structures are shown to be entombed within mineral deposits, which are identified as members of the Order Nanohaloarchaea. History The Dallol area lies up to below sea level, and has been repeatedly flooded in the past when waters from the Red Sea have flowed into the depression. The last separation from the Red Sea was about 30,000 years ago. The discovery of the volcano by the first European settlers certainly dates from the first colonization and expeditions in the region, in the 17th or 18th century. But the hostility of the depression, the unbearable heat which reigns there, and the dangers of the site (acid basins, toxic fumes), did not favour the exploration of the zones close to the crater. On the contrary, the Erta Ale was much more accessible, especially because the part of the rift where it is located (called the Erta Ale Range), is significantly higher. The last eruption of this phreato-magmatic volcano dates back to 2011. Gallery See also List of volcanoes in Ethiopia Dallol – a ghost town in the Dallol crater. It had the record high average temperature for an inhabited location on Earth. References Bibliography On the Volcanoes of the World episode The Horn of Africa (2008; Science Channel) External links Photos from Dallol taken during an expedition to the Danakil in February 2008 YouTube Wired BBC BBC Afar Region Astrobiology Geologic formations of Ethiopia Geochemistry Hydrothermal vents Iron minerals
Dallol (hydrothermal system)
[ "Chemistry", "Astronomy", "Biology" ]
1,374
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
59,354,087
https://en.wikipedia.org/wiki/Trigonometric%20Rosen%E2%80%93Morse%20potential
The trigonometric Rosen–Morse potential, named after the physicists Nathan Rosen and Philip M. Morse, is among the exactly solvable quantum mechanical potentials. Definition In dimensionless units and modulo additive constants, it is defined as where is a relative distance, is an angle rescaling parameter, and is so far a matching length parameter. Another parametrization of same potential is which is the trigonometric version of a one-dimensional hyperbolic potential introduced in molecular physics by Nathan Rosen and Philip M. Morse and given by, a parallelism that explains the potential's name. The most prominent application concerns the parametrization, with non-negative integer, and is due to Schrödinger who intended to formulate the hydrogen atom problem on Albert Einstein's closed universe, , the direct product of a time line with a three-dimensional closed space of positive constant curvature, the hypersphere , and introduced it on this geometry in his celebrated equation as the counterpart to the Coulomb potential, a mathematical problem briefly highlighted below. The hypersphere is a surface in a four-dimensional Euclidean space, , and is defined as, where , , , and are the Cartesian coordinates of a vector in , and is termed to as hyper-radius. Correspondingly, Laplace operator in is given by, In now switching to polar coordinates, one finds the Laplace operator expressed as Here, stands for the squared angular momentum operator in four dimensions, while is the standard three-dimensional squared angular momentum operator. Considering now the hyper-spherical radius as a constant, one encounters the Laplace-Beltrami operator on as With that the free wave equation on takes the form The solutions, , to this equation are the so-called four-dimensional hyper-spherical harmonics defined as where are the Gegenbauer polynomials. Changing in () variables as one observes that the function satisfies the one-dimensional Schrödinger equation with the potential according to The one-dimensional potential in the latter equation, in coinciding with the Rosen–Morse potential in () for and , clearly reveals that for integer values, the first term of this potential takes its origin from the centrifugal barrier on . Stated differently, the equation (), and its version () describe inertial (free) quantum motion of a rigid rotator in the four-dimensional Euclidean space, , such as the H Atom, the positronium, etc. whose "ends" trace the large "circles" (i.e. spheres) on . Now the question arises whether the second term in () could also be related in some way to the geometry. To the amount the cotangent function solves the Laplace–Beltrami equation on , it represents a fundamental solution on , a reason for which Schrödinger considered it as the counterpart to the Coulomb potential in flat space, by itself a fundamental solution to the Laplacian. Due to this analogy, the cotangent function is frequently termed to as "curved Coulomb" potential. Such an interpretation ascribes the cotangent potential to a single charge source, and here lies a severe problem. Namely, while open spaces, as is , support single charges, in closed spaces single charge can not be defined in a consistent way. Closed spaces are necessarily and inevitably charge neutral meaning that the minimal fundamental degrees of freedom allowed on them are charge dipoles (see Fig. 1). 
For this reason, the wave equation which transforms upon the variable change, , into the familiar one-dimensional Schrödinger equation with the trigonometric Rosen–Morse potential, in reality describes quantum motion of a charge dipole perturbed by the field due to another charge dipole, and not the motion of a single charge within the field produced by another charge. Stated differently, the two equations () and () do not describe strictly speaking a Hydrogen Atom on , but rather quantum motion on of a light dipole perturbed by the dipole potential of another very heavy dipole, like the H Atom, so that the reduced mass, , would be of the order of the electron mass and could be neglected in comparison with the energy. In order to understand this decisive issue, one needs to focus attention to the necessity of ensuring validity on of both the Gauss law and the superposition principle for the sake of being capable to formulate electrostatic there. With the cotangent function in () as a single-source potential, such can not be achieved. Rather, it is necessary to prove that the cotangent function represents a dipole potential. Such a proof has been delivered in. To understand the line of arguing of it is necessary to go back to the expression for the Laplace operator in () and before considering the hyper-radius as a constant, factorize this space into a time line and . For this purpose, a "time" variable is introduced via the logarithm of the radius. Introducing this variable change in () amounts to the following Laplacian, The parameter is known as "conformal time", and the whole procedure is referred to as "radial quantization". Charge-static is now built up in setting =const in () and calculating the harmonic function to the remaining piece, the so-called conformal Laplacian, , on , which is read off from () as where we have chosen , equivalently, . Then the correct equation to be employed in the calculation of the fundamental solution is . This Green function to has been calculated for example in. Its values at the respective South and North poles, in turn denoted by , and , are reported as and From them one can now construct the dipole potential for a fundamental charge placed, say, on the North pole, and a fundamental charge of opposite sign, , placed on the antipodal South pole of . The associated potentials, and , are then constructed through multiplication of the respective Green function values by the relevant charges as In now assuming validity of the superposition principle, one encounters a Charge Dipole (CD) potential to emerge at a point on according to The electric field to this dipole is obtained in the standard way through differentiation as and coincides with the precise expression prescribed by the Gauss theorem on , as explained in. Notice that stands for dimension-less charges. In terms of dimensional charges, , related to via the potential perceived by another charge , is For example, in the case of electrostatic, the fundamental charge is taken the electron charge, , in which case the special notation of is introduced for the so-called fundamental coupling constant of electrodynamics. In effect, one finds In Fig. 2 we display the dipole potential in (). 
With that, the one-dimensional Schrödinger equation that describes on the quantum motion of an electric charge dipole perturbed by the trigonometric Rosen–Morse potential, produced by another electric charge dipole, takes the form of Because of the relationship, , with being the node number of the wave function, one could change labeling of the wave functions, , to the more familiar in the literature, . In eqs. ()-() one recognizes the one-dimensional wave equation with the trigonometric Rosen–Morse potential in () for and . In this way, the cotangent term of the trigonometric Rosen–Morse potential could be derived from the Gauss law on in combination with the superposition principle, and could be interpreted as a dipole potential generated by a system consisting of two opposite fundamental charges. The centrifugal term of this potential has been generated by the kinetic energy operator on . In this manner, the complete trigonometric Rosen–Morse potential could be derived from first principles. Back to Schrödinger's work, the hyper-radius for the H Atom has turned out to be very big indeed, and of the order of . This is by eight orders of magnitudes larger than the H Atom size. The result has been concluded from fitting magnetic dipole elements to hydrogen hyper-fine structure effects (see } and reference therein). The aforementioned radius is sufficiently large to allow approximating the hyper-sphere locally by plane space in which case the existence of single charge still could be justified. In cases in which the hyper spherical radius becomes comparable to the size of the system, the charge neutrality takes over. Such an example will be presented in section 6 below. Before closing this section, it is in order to bring the exact solutions to the equations ()-(), given by where stand for the Romanovski polynomials. Application to Coulomb fluids Coulomb fluids consist of dipolar particles and are modelled by means of direct numerical simulations. It is commonly used to choose cubic cells with periodic boundary conditions in conjunction with Ewald summation techniques. In a more efficient alternative method pursued by, one employs as a simulation cell the hyper spherical surface in (). As already mentioned above, the basic object on is the electric charge dipole, termed to as "bi-charge" in fluid dynamics, which can be visualized classically as a rigid "dumbbell" (rigid rotator) of two antipodal charges of opposite signs, and . The potential of a bi-charge is calculated by solving on the Poisson equation, Here, is the angular coordinate of a charge placed at angular position , read off from the North pole, while stands for the anti-podal to angular coordinate of the position, at which the charge of opposite signs is placed in the Southern hemisphere. The solution found, equals the potential in (), modulo conventions regarding the charge signs and units. It provides an alternative proof to that delivered by the equations ()-() of the fact that the cotangent function on has to be associated with the potential generated by a charge dipole. In contrast, the potentials in the above equations (), and (), have been interpreted in as due to so called single "pseudo-charge" sources, where a "pseudo-charge" is understood as the association of a point charge with a uniform neutralizing background of a total charge, . The pseudo-charge potential, , solves . Therefore, the bi-charge potential is the difference between the potentials of two antipodal pseudo-charges of opposite signs. 
Application to color confinement and the physics of quarks The confining nature of the cotangent potential in () finds an application in a phenomenon known from the physics of strong interaction which refers to the non-observability of free quarks, the constituents of the hadrons. Quarks are considered to possess three fundamental internal degree of freedom, conditionally termed to as "colors", red , blue , and green , while anti-quarks carry the corresponding anti-colors, anti-red , anti-blue , or anti-green , meaning that the non-observability of free quarks is equivalent to the non-observability of free color-charges, and thereby to the "color neutrality" of the hadrons. Quark "colors" are the fundamental degrees of freedom of the Quantum Chromodynamics (QCD), the gauge theory of strong interaction. In contrast to the Quantum Electrodynamics, the gauge theory of the electromagnetic interactions, QCD is a non-Abelian theory which roughly means that the "color" charges, denoted by , are not constants, but depend on the values, , of the transferred momentum, giving rise to the so-called, running of the strong coupling constant, , in which case the Gauss law becomes more involved. However, at low momentum transfer, near the so-called infrared regime, the momentum dependence of the color charge significantly weakens, and in starting approaching a constant value, drives the Gauss law back to the standard form known from Abelian theories. For this reason, under the condition of color charge constancy, one can attempt to model the color neutrality of hadrons in parallel to the neutrality of Coulomb fluids, namely, by considering quantum color motions on closed surfaces. In particular for the case of the hyper-sphere , it has been shown in, that a potential, there denoted by , and obtained from the one in () through the replacement, i.e. the potential where is the number of colors, is the adequate one for the description of the spectra of the light mesons with masses up to . Especially, the hydrogen like degeneracies have been well captured. This because the potential, in being a harmonic function to the Laplacian on , has same symmetry as the Laplacian by itself, a symmetry that is defined by the isometry group of , i.e. by , the maximal compact group of the conformal group . For this reason, the potential in (), as part of , accounts not only for color confinement, but also for conformal symmetry in the infrared regime of QCD. Within such a picture, a meson is constituted by a quark -anti-quark color dipole in quantum motion on an geometry, and gets perturbed by the dipole potential in (), generated by and other color dipole, such as a gluon -anti-gluon , as visualized in Fig. 3. The geometry could be viewed as the unique closed space-like geodesic of a four-dimensional hyperboloid of one sheet, , foliating outside of the causal Minkowski light-cone the space-like region, assumed to have one more spatial dimension, this in accord with the so-called de Sitter Special Relativity, . Indeed, potentials, in being instantaneous and not allowing for time orderings, represent virtual, i.e. acausal processes and as such can be generated in one-dimensional wave equations upon proper transformations of virtual quantum motions on surfaces located outside the causal region marked by the Light Cone. Such surfaces can be viewed as geodesics of the surfaces foliating the space like region. Quantum motions on open geodesics can give rise to barriers describing resonances transmitted through them. 
An illustrative example for the application of the color confining dipole potential in () to meson spectroscopy is given in Fig. 4. It should be pointed out that the potentials in the above equations () and () have been alternatively derived in, from Wilson loops with cusps, predicting their magnitude as , and in accord with (). The potential in () has furthermore been used in in the Dirac equation on , and has been shown to predict realistic electromagnetic nucleon form-factors and related constants such as mean square electric-charge and magnetic-dipole radii, proton and nucleon magnetic dipole moments and their ratio, etc. The property of the trigonometric Rosen-Morse potential, be it in the parametrization with in eq. (32) which is of interest to electrodynamics, or in the parametrization of interest to QCD from the previous section, qualifies it to studies of phase transitions in systems with electromagnetic or strong interactions on hyperspherical "boxes" of finite volumes . The virtue of such studies lies in the possibility to express the temperature, , as the inverse, , to the radius of the hypersphere. For this purpose, knowledge on the partition function (statistical mechanics), here denoted by , of the potential under consideration is needed. In the following we evaluate for the case of the Schrödinger equation on with linear energy (here in units of MeV), where is the reduced mass of the two-body system under consideration. The partition function (statistical mechanics) for this energy spectrum is defined in the standard way as, Here, the thermodynamic beta is defined as with standing for the Boltzmann constant. In evaluating it is useful to recall that with the increase of the second term on the right hand side in () becomes negligible compared to the term proportional , a behavior which becomes even more pronounced for the choices, , and . In both cases is much smaller compared to the corresponding dimensionless factor, , multiplying . For this reason the partition function under investigation might be well approximated by, Along same lines, the partition function for the parametrization corresponding to the Hydrogen atom on has been calculated in, where a more sophisticated approximation has been employed. When transcribed to the current notations and units, the partition function in presents itself as, The infinite integral has first been treated by means of partial integration giving, Then the argument of the exponential under the sign of the integral has been cast as, thus reaching the following intermediate result, As a next step the differential has been represented as an algebraic manipulation which allows to express the partition function in () in terms of the function of complex argument according to, where is an arbitrary path on the complex plane starting in zero and ending in . For more details and physical interpretations, see. See also Romanovski polynomials Pöschl–Teller potential References Quantum mechanical potentials Mathematical physics
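For orientation, a dimensionless form of the trigonometric Rosen–Morse potential that is standard in the literature is reproduced below; the symbols (χ for the rescaled relative distance, ℓ for the centrifugal label, b for the strength of the cotangent term) are notational choices made here for illustration and may differ from those of any particular reference:

v(\chi) = \frac{\ell(\ell+1)}{\sin^{2}\chi} - 2b\,\cot\chi, \qquad 0 < \chi < \pi,

with the csc² term supplying the centrifugal barrier on the hypersphere and the cotangent term the "curved Coulomb" dipole interaction discussed above.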
Trigonometric Rosen–Morse potential
[ "Physics", "Mathematics" ]
3,514
[ "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Quantum mechanical potentials", "Mathematical physics" ]
55,956,520
https://en.wikipedia.org/wiki/Quantum%20crystallography
Quantum crystallography is a branch of crystallography that investigates crystalline materials within the framework of quantum mechanics, with analysis and representation, in position or in momentum space, of quantities like wave function, electron charge and spin density, density matrices and all properties related to them (like electric potential, electric or magnetic moments, energy densities, electron localization function, one electron potential, etc.). Like the quantum chemistry, Quantum crystallography involves both experimental and computational work. The theoretical part of quantum crystallography is based on quantum mechanical calculations of atomic/molecular/crystal wave functions, density matrices or density models, used to simulate the electronic structure of a crystalline material. While in quantum chemistry, the experimental works mainly rely on spectroscopy, in quantum crystallography the scattering techniques (X-rays, neutrons, γ-Rays, electrons) play the central role, although spectroscopy as well as atomic microscopy are also sources of information. The connection between crystallography and quantum chemistry has always been very tight, after X-ray diffraction techniques became available in crystallography. In fact, the scattering of radiation enables mapping the one-electron distribution or the elements of a density matrix. The kind of radiation and scattering determines the quantity which is represented (electron charge or spin) and the space in which it is represented (position or momentum space). Although the wave function is typically assumed not to be directly measurable, recent advances enable also to compute wave functions that are restrained to some experimentally measurable observable (like the scattering of a radiation). The term Quantum Crystallography was first introduced in revisitation articles by L. Huang, L. Massa and Nobel Prize winner Jerome Karle, who associated it with two mainstreams: a) crystallographic information that enhances quantum mechanical calculations and b) quantum mechanical approaches to improve crystallography information. This definition mainly refers to studies started in the 1960s and 1970s, when first attempts to obtain wave functions from scattering experiments appeared, together with other methods to constrain a wavefunction to experimental observations like the dipole moment. This field has been recently reviewed, within the context of this definition. Parallel to studies on wave function determination, R. F. Stewart and P. Coppens investigated the possibilities to compute models for one-electron charge density from X-ray scattering (for example by means of pseudoatoms multipolar expansion), and later of spin density from polarized neutron diffraction, that originated the scientific community of charge, spin and momentum density. In a recent review article, V. Tsirelson gave a more general definition: "Quantum crystallography is a research area exploiting the fact that parameters of quantum-mechanically valid electronic model of a crystal can be derived from the accurately measured set of X-ray coherent diffraction structure factors". The book Modern Charge Density Analysis offers a survey of the research involving Quantum Crystallography and of the most adopted experimental or theoretical methodologies. 
The International Union of Crystallography has recently established a commission on Quantum Crystallography, as extension of the previous commission on Charge, Spin and Momentum density, with the purpose of coordinating research activities in this field. External links The Erice School of crystallography (52nd course): first course on Quantum crystallography (June 2018) The XIX Sagamore Conference (July 2018) The CECAM meeting on Quantum crystallography (June 2017) The IUCr commission on Quantum crystallography The International Union of Crystallography References Crystallography Quantum mechanics
Quantum crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
726
[ "Theoretical physics", "Materials science", "Quantum mechanics", "Crystallography", "Condensed matter physics" ]
55,957,766
https://en.wikipedia.org/wiki/Transplatin
trans-Dichlorodiammineplatinum(II) is the trans isomer of the coordination complex with the formula trans-PtCl2(NH3)2, sometimes called transplatin. It is a yellow solid with low solubility in water but good solubility in DMF. The existence of two isomers of PtCl2(NH3)2 led Alfred Werner to propose square planar molecular geometry. It belongs to the molecular symmetry point group D2h. Preparation and reactions The complex is prepared by treating [Pt(NH3)4]Cl2 with hydrochloric acid. Many of the reactions of this complex can be explained by the trans effect. It slowly hydrolyzes in aqueous solution to give the mixed aquo complex trans-[PtCl(H2O)(NH3)2]Cl. Similarly it reacts with thiourea (tu) to give colorless trans-[Pt(tu)2(NH3)2]Cl2. In contrast, the cis isomer gives [Pt(tu)4]Cl2. Oxidative addition of chlorine gives trans-PtCl4(NH3)2. Medicinal chemistry trans-Dichlorodiammineplatinum(II) has had far less impact on medicinal chemistry compared to its cis isomer, cisplatin, which is a major anticancer drug. Nonetheless, replacement of the ammonia with other ligands has led to highly active drugs that have attracted much attention. References Ammine complexes Coordination complexes Platinum(II) compounds Chloro complexes Platinum complexes
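A balanced equation consistent with the preparation described above is given below; this is the textbook stoichiometry, and the exact conditions (heating, excess hydrochloric acid) vary between procedures:

[Pt(NH3)4]Cl2 + 2 HCl → trans-PtCl2(NH3)2 + 2 NH4Cl

The selectivity for the trans isomer follows from the trans effect: chloride is a stronger trans director than ammonia, so the ammine ligand opposite the first coordinated chloride is labilized and the second chloride enters trans to it.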
Transplatin
[ "Chemistry" ]
335
[ "Coordination chemistry", "Coordination complexes" ]
55,958,085
https://en.wikipedia.org/wiki/TSD%20Desalination
TSD Desalination (Tethys Solar Desalination) is an Israeli startup company that provides solar-powered desalination technology. Jewish Business News named TSD one of 25 cool Israeli startups to watch in 2017, and CNBC mentioned TSD alongside IDE Technologies in a review of Israeli high-tech. TSD was founded in 2014. Their technology, developed by Joshua Altman and Prof. Moshe Tshuva at Afeka College of Engineering in Tel Aviv, uses solar energy directly to power desalination and water treatment. Ze'ev Emmerich, a founder of TSD, claims their method is scalable and environmentally friendly, as well as being cheaper than reverse osmosis. References External links Tethys Solar Desalination Technology companies of Israel Water desalination Israeli companies established in 2014
TSD Desalination
[ "Chemistry" ]
170
[ "Water treatment", "Water technology", "Water desalination" ]
55,964,386
https://en.wikipedia.org/wiki/NGC%20510
NGC 510 is a double star in the constellation of Pisces. The two stars are separated by 8", and the pair lies 7' ESE of NGC 499 and 9' WNW of NGC 515. The RNGC mislabels PGC 5102 as NGC 510. Observational history NGC 510 was discovered by the Swedish astronomer Herman Schultz on November 11, 1867. With the instruments of the time it initially appeared to be a "misty" object (a galaxy) and was therefore included in the NGC list; it later became clear that it is a double star. See also Double star List of NGC objects (1–1000) Pisces (constellation) References External links SEDS Double stars Pisces (constellation) 510 Astronomical objects discovered in 1867 Discoveries by Herman Schultz (astronomer)
NGC 510
[ "Astronomy" ]
165
[ "Pisces (constellation)", "Constellations" ]
55,965,344
https://en.wikipedia.org/wiki/Spin%20Nernst%20Effect
The spin Nernst effect is a phenomenon of spin current generation caused by the thermal flow of electrons or magnons in condensed matter. Under a thermal drive such as a temperature gradient or a chemical potential gradient, spin-up and spin-down carriers can flow perpendicular to the thermal current and in opposite directions without the application of a magnetic field. This effect is similar to the spin Hall effect, where a pure spin current is induced by an electrical current. The spin Nernst effect can be detected through the spatial separation of opposite spin species, typically in the form of spin polarization (imbalanced spin accumulation) on the transverse boundaries of a material. The spin Nernst effect of electrons was first experimentally observed in 2016 and published by two independent groups in 2017. The spin Nernst effect of magnons (quanta of spin wave excitations) was theoretically proposed in 2016 in collinear antiferromagnetic materials, but its experimental confirmation remains elusive. In 2017, around the same time as its electronic counterpart was experimentally observed, the spin Nernst effect of magnons was first claimed in the transition metal trichalcogenide MnPS3. However, the experiment involved ambiguities, so it could not convincingly verify the spin Nernst effect of magnons, and further experimental studies are needed. With a more accurate description accounting for the real device geometry, it was argued that optical detection should be more reliable than electronic detection. At present, optical detection of the spin Nernst effect of magnons has not been reported. See also Spin Hall effect Nernst effect References Condensed matter physics Spintronics Walther Nernst
Spin Nernst Effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
338
[ "Spintronics", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
55,966,819
https://en.wikipedia.org/wiki/Journal%20of%20Thermal%20Stresses
The Journal of Thermal Stresses is a monthly peer-reviewed scientific journal covering the theoretical and industrial applications of thermal stresses. It is published by Taylor & Francis. The journal was established in 1978 with Richard B. Hetnarski (Rochester Institute of Technology) as founding editor-in-chief. In July 2018 he was succeeded by Martin Ostoja-Starzewski (University of Illinois at Urbana-Champaign). Abstracting and indexing The journal is abstracted and indexed in: CSA databases Current Contents/Engineering, Computing, & Technology Science Citation Index According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.28. References External links Print: Online: Materials science journals Monthly journals English-language journals Taylor & Francis academic journals Academic journals established in 1978
Journal of Thermal Stresses
[ "Materials_science", "Engineering" ]
158
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
51,518,803
https://en.wikipedia.org/wiki/Steered-response%20power
Steered-response power (SRP) is a family of acoustic source localization algorithms that can be interpreted as a beamforming-based approach that searches for the candidate position or direction that maximizes the output of a steered delay-and-sum beamformer. Steered-response power with phase transform (SRP-PHAT) is a variant using a "phase transform" to make it more robust in adverse acoustic environments. Algorithm Steered-response power Consider a system of microphones, where each microphone is denoted by a subindex . The discrete-time output signal from a microphone is . The (unweighted) steered-response power (SRP) at a spatial point can be expressed as where denotes the set of integer numbers and would be the time-lag due to the propagation from a source located at to the -th microphone. The (weighted) SRP can be rewritten as where denotes complex conjugation, represents the discrete-time Fourier transform of and is a weighting function in the frequency domain (later discussed). The term is the discrete time-difference of arrival (TDOA) of a signal emitted at position to microphones and , given by where is the sampling frequency of the system, is the sound propagation speed, is the position of the -th microphone, is the 2-norm and denotes the rounding operator. Generalized cross-correlation The above SRP objective function can be expressed as a sum of generalized cross-correlations (GCCs) for the different microphone pairs at the time-lag corresponding to their TDOA where the GCC for a microphone pair is defined as The phase transform (PHAT) is an effective GCC weighting for time delay estimation in reverberant environments, that forces the GCC to consider only the phase information of the involved signals: Estimation of source location The SRP-PHAT algorithm consists in a grid-search procedure that evaluates the objective function on a grid of candidate source locations to estimate the spatial location of the sound source, , as the point of the grid that provides the maximum SRP: Modified SRP-PHAT Modifications of the classical SRP-PHAT algorithm have been proposed to reduce the computational cost of the grid-search step of the algorithm and to increase the robustness of the method. In the classical SRP-PHAT, for each microphone pair and for each point of the grid, a unique integer TDOA value is selected to be the acoustic delay corresponding to that grid point. This procedure does not guarantee that all TDOAs are associated to points on the grid, nor that the spatial grid is consistent, since some of the points may not correspond to an intersection of hyperboloids. This issue becomes more problematic with coarse grids since, when the number of points is reduced, part of the TDOA information gets lost because most delays are not anymore associated to any point in the grid. The modified SRP-PHAT collects and uses the TDOA information related to the volume surrounding each spatial point of the search grid by considering a modified objective function: where and are the lower and upper accumulation limits of GCC delays, which depend on the spatial location . Accumulation limits The accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid. 
Alternatively, they can be selected by considering the spatial gradient of the TDOA , where each component of the gradient is: For a rectangular grid where neighboring points are separated a distance , the lower and upper accumulation limits are given by: where and the gradient direction angles are given by See also Acoustic source localization Multilateration Audio signal processing References Acoustics Signal processing Digital signal processing
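Since the objective function above is a sum of PHAT-weighted generalized cross-correlations evaluated at the candidate TDOAs, the grid search is straightforward to prototype. The Python/NumPy sketch below is one way to do it under simplifying assumptions (a fixed speed of sound of 343 m/s, a small constant added to avoid division by zero in the PHAT weighting, and unoptimized looping over microphone pairs); it is an illustration of the method, not a reference implementation.

import numpy as np

def srp_phat(signals, mic_pos, grid, fs, c=343.0):
    """signals: (M, N) microphone signals; mic_pos: (M, 3) positions in metres;
    grid: (G, 3) candidate source positions; fs: sampling rate in Hz.
    Returns the candidate position with the largest steered-response power."""
    signals = np.asarray(signals, dtype=float)
    mic_pos = np.asarray(mic_pos, dtype=float)
    grid = np.asarray(grid, dtype=float)
    M, N = signals.shape
    X = np.fft.rfft(signals, axis=1)                 # per-channel spectra
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)           # frequency axis in Hz
    scores = np.zeros(len(grid))
    for m in range(M):
        for n in range(m + 1, M):
            cross = X[m] * np.conj(X[n])
            cross /= np.abs(cross) + 1e-12           # PHAT weighting: keep phase only
            # TDOA (seconds) of every candidate point for this microphone pair
            tdoa = (np.linalg.norm(grid - mic_pos[m], axis=1)
                    - np.linalg.norm(grid - mic_pos[n], axis=1)) / c
            # GCC evaluated at the candidate TDOAs, accumulated over all pairs
            scores += np.real(np.exp(2j * np.pi * np.outer(tdoa, freqs)) @ cross)
    return grid[np.argmax(scores)]

In practice the size and resolution of the candidate grid dominate the cost of the search, which is precisely the trade-off that the modified SRP-PHAT variant above addresses by accumulating GCC values over a range of lags around each grid point.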
Steered-response power
[ "Physics", "Technology", "Engineering" ]
751
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Classical mechanics", "Acoustics" ]
51,519,180
https://en.wikipedia.org/wiki/Angular%20Correlation%20of%20Electron%20Positron%20Annihilation%20Radiation
Angular Correlation of Electron Positron Annihilation Radiation (ACAR or ACPAR) is a technique of solid state physics to investigate the electronic structure of metals. It uses positrons which are implanted into a sample and annihilate with the electrons. In the majority of annihilation events, two gamma quanta are created that are, in the reference frame of the electron-positron pair, emitted in exactly opposite directions. In the laboratory frame, there is a small angular deviation from collinearity, which is caused by the momentum of the electron. Hence, measuring the angular correlation of the annihilation radiation yields information about the momentum distribution of the electrons in the solid. Investigation of the electronic structure All the macroscopic electronic and magnetic properties of a solid result from its microscopic electronic structure. In the simple free electron model, the electrons do not interact with each other nor with the atomic cores. The relation between energy and momentum is given by with the electron mass . Hence, there is an unambiguous connection between electron energy and momentum. Because of the Pauli exclusion principle the electrons fill all the states up to a maximum energy, the so-called Fermi energy. By the momentum-energy relation, this corresponds to the Fermi momentum . The border between occupied and unoccupied momentum states, the Fermi surface, is arguably the most significant feature of the electronic structure and has a strong influence on the solid's properties. In the free electron model, the Fermi surface is a sphere. With ACAR it is possible to measure the momentum distribution of the electrons. A measurement on a free electron gas for example would give a positive intensity for momenta and zero intensity for . The Fermi surface itself can easily be identified from such a measurement by the discontinuity at . In reality, there is interaction between the electrons with each other and the atomic cores of the crystal. This has several consequences: For example, the unambiguous relation between energy and momentum of an electronic state is broken and an electronic band structure is formed. Measuring the momentum of one electronic state gives a distribution of momenta which are all separated by reciprocal lattice vectors. Hence, an ACAR measurement on a solid with completely filled bands (i.e. on an insulator) gives a continuous distribution. An ACAR measurement on a metal has discontinuities where bands cross the Fermi level in all Brillouin zones in reciprocal space. This discontinuous distribution is superimposed by a continuous distribution from the entirely filled bands. From the discontinuities the Fermi surface can be extracted. Since positrons that are created by beta decay possess a longitudinal spin polarization it is possible to investigate the spin-resolved electronic structure of magnetic materials. In this way, contributions from the majority and minority spin channel can be separated and the Fermi surface in the respective spin channels can be measured. ACAR has several advantages and disadvantages compared to other, more well known techniques for the investigation of the electronic structure like ARPES and quantum oscillation: ACAR requires neither low temperatures, high magnetic fields or UHV conditions. Furthermore, it is possible to probe the electronic structure at the surface and in the bulk ( deep). 
However, ACAR is reliant on defect free samples as vacancy concentrations of up to per atom can efficiently trap positrons and distort the measurement. Theory In an ACAR measurement the angular deviation of many pairs of annihilation radiation is measured. Therefore, the underlying physical observable is often called 'two photon momentum density' (TPMD) or . Quantum mechanically, can be expressed as the squared absolute value of the Fourier transform of the multi-particle wave function of all the electron and the positron in the solid: As it is not possible to imagine or compute the multi-particle wave function , it is often written as the sum of the single particle wave functions of the electron in the th state in the th band and the positron wave function : The enhancement factor accounts for the electron-positron correlation. There exist sophisticated enhancement models to describe the electron-positron correlations, but in the following it will be assumed that . This approximation is called the independent particle model (IPM). A very illustrative form of the TPMD can be obtained by the use of the Fourier coefficients for the wave function product : These Fourier coefficients are distributed over all reciprocal vectors . If one assumes that the overlap of the electron and the positron wave function is constant for the same band , summing over all reciprocal lattice vectors gives a very instructive result: The function is the Heaviside step function and the constant . This means, if is folded back into the first Brillouin zone, the resulting density is flat except at the Fermi momentum. Therefore, the Fermi surface can be easily identified by looking for this discontinuities in . Experimental details When a positron is implanted into a solid it will quickly lose all its kinetic energy and annihilate with an electron. By this process two gamma quanta with each are created which are in the reference frame of the electron positron pair emitted in exactly anti-parallel directions. In the laboratory frame, however, there is a Doppler shift from and an angular deviation from collinearity. Although the full momentum information about the momentum of the electron is encoded in the annihilation radiation, due to technical limitations it cannot be fully recovered. Either one measures the Doppler broadening of the annihilation radiation (DBAR) or the angular correlation of the annihilation radiation (ACAR). For DBAR a detector with a high energy resolution like a high purity germanium detector is needed. Such detectors typically do not resolve the position of absorbed photons. Hence only the longitudinal component of the electron momentum can be measured. The resulting measurement is a 1D projection of . In ACAR position sensitive detectors, gamma cameras or multi wire proportional chambers, are used. Such detectors have a position resolution of typically but an energy resolution which is just good enough to sort out scattered photons or background radiation. As is discarded, a 2D projection of is measured. In order to get a high angular resolution of and better, the detectors have to be set up at distances between from each other. Although it is possible to get even better angular resolutions by placing the detectors further apart, this comes at cost of the counting rate. Already with moderate detector distances, the measurement of one projection of typically takes weeks. As ACAR measures projections of the TPMD it is necessary to reconstruct in order to recover the Fermi surface. 
For such a reconstruction similar techniques as for X-ray computed tomography are used. In contrast to a human body, a crystal has many symmetries which can be included into the reconstruction. This makes the procedure more complex but increases the quality of the reconstruction. Another way to evaluate ACAR spectra is by a quantitative comparison with ab initio calculations. History In the early years, ACAR was mainly used to investigate the physics of the electron-positron annihilation process. In the 1930s several annihilation mechanism were discussed. Otto Klemperer could show with his angular correlation setup that the electron-positron pairs annihilate mainly into two gamma quanta which are emitted anti-parallel. In the 1950s, it was realized that by measuring the deviation from collinearity of the annihilation radiation information about the electronic structure of a solid can be obtained. During this time mainly setups with 'long slit geometry' were used. They consisted of a positron source and a sample in the center, one fixed detector on one side and a second movable detector on the other side of the sample. Each detector was collimated in such a way that the active area was much smaller in one than in the other dimension (thus 'long slit'). A measurement with a long slit setup yields a 1D projection of the electron momentum density . Hence, this technique is called 1D-ACAR. The development of two-dimensional gamma cameras and multi wire proportional chambers in the 1970s and early 1980s led to the setting up of the first 2D-ACAR spectrometer. This was an improvement to 1D-ACAR in two ways: i) The detection efficiency could be improved and ii) the informational content was greatly increased as the measurement gave a 2D projection of . An important early example of the use of spin-polarized 2D-ACAR is the proof of half metallicity in the half-Heusler alloy NiMnSb. References Notes Further reading Laboratory techniques in condensed matter physics
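The folding relation that the Theory discussion above describes only in words (a per-band constant multiplied by a Heaviside step function at the Fermi surface) did not survive formatting. A hedged reconstruction, written in the notation commonly used for the Lock–Crisp–West folding procedure, is given below; the symbol choices (the folded density, the per-band constants A_j and the step function Θ) are introduced here for illustration and are not taken from the source.

```latex
% Hedged reconstruction of the folding result described (but not displayed)
% in the text: summing the two-photon momentum density over all reciprocal
% lattice vectors G leaves, for each band j, a constant A_j inside the
% Fermi surface and zero outside; Theta is the Heaviside step function.
\sum_{\mathbf{G}} \rho^{2\gamma}\!\left(\mathbf{k}+\mathbf{G}\right)
   \;=\; \sum_{j} A_j\, \Theta\!\left(E_F - E_j(\mathbf{k})\right)
```

As a concrete illustration of how a Fermi momentum can be read off an ACAR spectrum, the sketch below (plain Python/NumPy; all values are arbitrary and invented for the example) builds the ideal 2D-ACAR projection of a free electron gas and locates p_F from the break in the spectrum. A real analysis would additionally have to model the instrument's finite angular resolution and, for crystals, fold the measured data back into the first Brillouin zone.

```python
import numpy as np

def free_electron_acar(p_F=1.0, grid=201, p_max=2.0):
    """Ideal 2D-ACAR spectrum of a free electron gas (arbitrary units).

    Projecting a uniformly filled Fermi sphere of radius p_F along p_z gives
    rho(p_x, p_y) = 2*sqrt(p_F**2 - p_x**2 - p_y**2) inside the Fermi circle
    and 0 outside; the smearing by the finite angular resolution of a real
    spectrometer is ignored here.
    """
    p = np.linspace(-p_max, p_max, grid)
    px, py = np.meshgrid(p, p)
    r2 = px**2 + py**2
    rho = np.where(r2 < p_F**2, 2.0 * np.sqrt(np.clip(p_F**2 - r2, 0.0, None)), 0.0)
    return p, rho

def estimate_p_f(p, rho):
    """Locate the Fermi momentum from the steepest fall-off of the p_y = 0 cut."""
    centre = len(p) // 2
    cut = rho[centre, :]                  # profile along p_y = 0
    slope = np.gradient(cut, p)
    right = np.s_[centre:]                # only look at p_x >= 0
    return p[right][np.argmin(slope[right])]   # most negative slope ~ Fermi break

if __name__ == "__main__":
    p, rho = free_electron_acar(p_F=1.0)
    print(f"estimated p_F ≈ {estimate_p_f(p, rho):.2f} (true value 1.0)")
```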
Angular Correlation of Electron Positron Annihilation Radiation
[ "Physics", "Chemistry", "Materials_science" ]
1,782
[ "Condensed matter physics", "Laboratory techniques in condensed matter physics" ]
51,525,633
https://en.wikipedia.org/wiki/Size-asymmetric%20competition
Size-asymmetric competition refers to situations in which larger individuals exploit disproportionately greater amounts of resources when competing with smaller individuals. This type of competition is common among plants but also exists among animals. Size-asymmetric competition usually results from large individuals monopolizing the resource by "pre-emption"—i.e., exploiting the resource before smaller individuals are able to obtain it. Size-asymmetric competition has major effects on population structure and diversity within ecological communities. Definition of size asymmetry Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size-asymmetric (the largest individuals exploit all the available resource). The degree of size asymmetry can be described by the parameter θ in the following equation focusing on the partition of the resource r among n individuals of sizes Bj. where ri refers to the amount of resources consumed by individuals in the neighbourhood of j. When θ = 1, competition is perfectly size-symmetric—e.g., if a large individual is twice the size of its smaller competitor, the large individual will acquire twice the amount of that resource (i.e. both individuals will exploit the same amount of resource per biomass unit). When θ > 1, competition is size-asymmetric—e.g., if a large individual is twice the size of its smaller competitor and θ = 2, the large individual will acquire four times the amount of that resource (i.e., the large individual will exploit twice the amount of resource per biomass unit). As θ increases, competition becomes more size-asymmetric, and larger plants get larger amounts of resources per unit of biomass compared with smaller plants. Differences in size asymmetry among resources in plant communities Competition among plants for light is size-asymmetric because of the directionality of its supply. Higher leaves shade lower leaves but not vice versa. Competition for nutrients appears to be relatively size-symmetric, although it has been hypothesized that a patchy distribution of nutrients in the soil may lead to size asymmetry in competition among roots. Nothing is known about the size asymmetry of competition for water. Implication for plant communities Various ecological processes and patterns have been shown to be affected by the degree of size asymmetry—e.g., succession, biomass distribution, grazing response, population growth, ecosystem functioning, coexistence and species richness. A large body of evidence shows that species loss following nutrient enrichment (eutrophication) is related to light competition. However, there is still a debate whether this phenomenon is related to the size asymmetry of light competition or to other factors. Contrasting assumptions about size asymmetry characterise the two leading and competing theories in plant ecology, the R* theory and the CSR theory. The R* theory assumes that competition is size-symmetric and therefore predicts that competitive ability in nature results from the ability to withstand low level of resources (known as the R* rule). In contrast the CSR theory assumes that competition is size-asymmetric and therefore predicts that competitive ability in nature results from the ability to grow fast and attain a large size. 
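The partition equation referred to above is not reproduced in the text. The rule commonly used in this literature, and the one consistent with the worked examples that follow (θ = 1 giving a double-sized individual twice the resource, θ = 2 giving it four times the resource), is r_i = r·B_i^θ / Σ_j B_j^θ. The short sketch below (plain Python; the biomasses and resource total are hypothetical) implements that assumed rule and reproduces those examples.

```python
import numpy as np

def resource_shares(sizes, total_resource=1.0, theta=1.0):
    """Partition a resource among individuals of given sizes (biomasses).

    Uses the assumed partition rule r_i = R * B_i**theta / sum_j B_j**theta.
    theta = 0: complete symmetry (scramble), theta = 1: perfect size symmetry,
    theta > 1: size-asymmetric competition.
    """
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes ** theta
    return total_resource * weights / weights.sum()

if __name__ == "__main__":
    B = np.array([1.0, 2.0])          # one individual twice the size of the other
    for theta in (0.0, 1.0, 2.0):
        r = resource_shares(B, total_resource=3.0, theta=theta)
        print(f"theta={theta}: shares={r.round(2)}, per unit biomass={(r / B).round(2)}")
    # theta = 1 -> the larger plant gets twice the resource (same amount per unit biomass)
    # theta = 2 -> the larger plant gets four times the resource (twice per unit biomass)
```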
Size-asymmetric competition also affects several evolutionary processes in relation to trait selection. The evolution of plant height is highly affected by asymmetric light competition. Theory predicts that only under asymmetric light competition will plants grow upward and invest in wood production at the expense of investment in leaves or in reproductive organs (flowers and fruits). Consistent with this, there is evidence that plant height increases as water availability increases, presumably due to an increase in the relative importance of size-asymmetric competition for light. Similarly, investment in the size of seeds at the expense of their number may be more effective under size-asymmetric resource competition, since larger seeds tend to produce larger seedlings that are better competitors. Size-asymmetric competition can be exploited in managing plant communities, such as in the suppression of weeds in crop fields. Weeds are a greater problem for farmers in dry than in moist environments, in large part because crops can suppress weeds much more effectively under size-asymmetric competition for light than under more size-symmetric competition below ground. See also Competition (biology) Asymmetric competition Resource (biology) Resource partitioning Plant ecology Jacob Weiner References Ecology Biological interactions Competition
Size-asymmetric competition
[ "Biology" ]
932
[ "Behavior", "Biological interactions", "Ecology", "nan", "Ethology" ]
51,527,976
https://en.wikipedia.org/wiki/Pawe%C5%82%20Urban
Paweł Urban (also spelled Pawel L. Urban; Chinese name: 鄂本帕偉) is a chemist and a professor of chemistry at the National Tsing Hua University (Hsinchu, Taiwan). He received his Ph.D. in Chemistry from the University of York (United Kingdom). Urban's research interests include mass spectrometry and biochemical analysis. Academic activity Urban is an inventor of the hydrogel micropatch sampling method, fizzy extraction, systems for imaging chemical reactions, and micro-arrays for mass spectrometry (MAMS). He has co-authored a book on time-resolved mass spectrometry and over 100 papers. Urban is an editorial board member of Scientific Reports, HardwareX and PeerJ, and has acted as a guest editor for Philosophical Transactions of the Royal Society A. His h-index is 34. He received the Ta-You Wu Memorial Award. References Living people Alumni of the University of York Taiwanese chemists Academic staff of the National Tsing Hua University Mass spectrometrists Year of birth missing (living people)
Paweł Urban
[ "Physics", "Chemistry" ]
228
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
51,529,881
https://en.wikipedia.org/wiki/Nuclear%20calcium
The concentration of calcium in the cell nucleus can increase in response to signals from the environment. Nuclear calcium is an evolutionary conserved potent regulator of gene expression that allows cells to undergo long-lasting adaptive responses. The 'Nuclear Calcium Hypothesis’ by Hilmar Bading describes nuclear calcium in neurons as an important signaling end-point in synapse-to-nucleus communication that activates gene expression programs needed for persistent adaptations. In the nervous system, nuclear calcium is required for long-term memory formation, acquired neuroprotection, and the development of chronic inflammatory pain. In the heart, nuclear calcium is important for the development of cardiac hypertrophy. In the immune system, nuclear calcium is required for human T cell activation. Plants use nuclear calcium to control symbiosis signaling. References Neuroscience Gene expression Calcium
Nuclear calcium
[ "Chemistry", "Biology" ]
166
[ "Neuroscience", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
51,530,038
https://en.wikipedia.org/wiki/Witt%20vector%20cohomology
In mathematics, Witt vector cohomology was an early p-adic cohomology theory for algebraic varieties introduced by Jean-Pierre Serre. Serre constructed it by defining a sheaf of truncated Witt rings Wn over a variety V and then taking the inverse limit of the sheaf cohomology groups Hi(V, Wn) of these sheaves. Serre observed that though it gives cohomology groups over a field of characteristic 0, it cannot be a Weil cohomology theory because the cohomology groups vanish when i > dim(V). For Abelian varieties, Serre showed that one could obtain a reasonable first cohomology group by taking the direct sum of the Witt vector cohomology and the Tate module of the Picard variety. References Algebraic geometry Cohomology theories
Witt vector cohomology
[ "Mathematics" ]
164
[ "Fields of abstract algebra", "Algebraic geometry" ]
70,125,038
https://en.wikipedia.org/wiki/NGC%204324
NGC 4324 is a lenticular galaxy located about 85 million light-years away in the constellation Virgo. It was discovered by astronomer Heinrich d'Arrest on March 4, 1862. NGC 4324 has a stellar mass of 5.62 × 10^10 M☉ and a baryonic mass of 5.88 × 10^10 M☉. The galaxy's total mass is around 5.25 × 10^11 M☉. NGC 4324 is notable for having a ring of star formation surrounding its nucleus. It was considered a member of the Virgo II Groups until 1999, when its distance was recalculated and it was placed in the Virgo W Group. Physical characteristics The inner ring surrounding the nucleus of NGC 4324 was first noted in 1957 by the Russian astronomer Kirill Ogorodnikov, who described it as "a system of planet-like concentrations similar to beads" and as "equally-spaced bead-like concentrations of equal size and brightness similar to the annular nebula of the Kant-Laplace nebular hypothesis". The ring appears complete but is broken on opposite sides of its diameter, which led Burstein et al. to suggest that the ring is not a ring at all but instead tightly wound spiral arms, and that NGC 4324 is a misclassified spiral or lenticular galaxy. Despite this, the ring is considered to be a true ring. The ring hosts most of the molecular gas observed in NGC 4324, with roughly 1.7 × 10^9 M☉ of HI (neutral hydrogen) and 9 × 10^7 M☉ of HII (singly-ionised hydrogen). Despite this, HI detected by Duprie et al. in 1996 extends over roughly 2 optical diameters, suggesting that atomic hydrogen is not concentrated only in the ring. In ultraviolet light the ring is bright, due to the presence of star formation occurring at an estimated rate of roughly 0.052 ± 0.021 M☉ per year, with star formation being segregated in the ring. In between the ring and the bulge of NGC 4324 there are tightly wound spiral arms that are defined mostly by dust. The gas in the ring of NGC 4324 may have been accreted from filaments of galaxies or through minor mergers with gas-rich satellite galaxies. Stellar populations In the center of NGC 4324, the stellar population has a mean age of about 8 billion years, with an abundance ratio that is close to solar, at [Mg/Fe] ≈ 0, and a metallicity that is slightly supersolar, at [Z/H] ~ +0.1. This suggests continuous effective star formation in the nucleus of NGC 4324. In the bulge of NGC 4324, the mean age of the stellar population is around 13 billion years, with an abundance ratio of [Mg/Fe] = +0.15 and a metallicity of [Z/H] = −0.2 to −0.3. In the inner part of the disk of NGC 4324, the stellar population is old, with an abundance ratio of [Mg/Fe] = +0.2 and a metallicity of [Z/H] < −0.33. Such characteristics imply that a brief single starburst took place more than 10 billion years ago and formed the stellar disk of NGC 4324. In the ring-dominated area of the disk, the dominant stellar population is also old, despite being slightly younger than in the inner disk, and has chemical properties similar to the stars of the inner disk. Activity NGC 4324 is classified as a Seyfert galaxy and as a LINER galaxy. Despite being classified as a Seyfert galaxy, NGC 4324 has no detectable nuclear radio continuum emission lines, suggesting that the emission lines that led to its classification as a Seyfert come from stellar processes such as photoionization driven by supernova remnants and/or planetary nebulae, which can mimic the high-ionization nebular emission characteristic of the nuclei of other observed Seyfert galaxies.
This is despite the fact that NGC 4324 is host to a supermassive black hole with an estimated mass of 2.187 × 10^6 M☉. Group membership NGC 4324 is listed as a member of the Virgo S Cloud, which is also known as the Virgo Southern Extension or the Virgo II Groups. It was placed in the NGC 4303 Group by P. Fouque et al. and A. M. Garcia et al. in 1992 and 1993 respectively; this group is centered on the galaxy NGC 4303, which is considered part of the Virgo Southern Extension. However, later distance measurements made with the Tully-Fisher method showed that NGC 4324 was not part of the NGC 4303 Group but was instead a member of the Virgo W Group, which lies at twice the distance of the Virgo Cluster and is centered on the elliptical galaxy NGC 4261. See also List of NGC objects (4001–5000) NGC 7217 NGC 7742 External links References 4324 040179 Virgo (constellation) Astronomical objects discovered in 1862 Lenticular galaxies 07451 Seyfert galaxies LINER galaxies Ring galaxies
NGC 4324
[ "Astronomy" ]
1,052
[ "Virgo (constellation)", "Constellations" ]
70,126,230
https://en.wikipedia.org/wiki/Thrombin%20generation%20assay
A thrombin generation assay (TGA) or thrombin generation test (TGT) is a global coagulation assay (GCA) and type of coagulation test which can be used to assess coagulation and thrombotic risk. It is based on the potential of a plasma to generate thrombin over time, following activation of coagulation via addition of phospholipids, tissue factor, and calcium. The results of the TGA can be output as a thrombogram or thrombin generation curve using computer software with calculation of thrombogram parameters. TGAs can be performed with methods like the semi-automated calibrated automated thrombogram (CAT) (2003) or the fully-automated ST Genesia system (2018). TGAs were first used as manual assays in the 1950s and have since become increasingly automated. Parameters Thrombogram parameters for the TGA include: Lag time (minutes; time until thrombin first generated/thrombin concentration first increased) Time to peak or ttPeak (minutes; time to maximum concentration of thrombin generated) Start tail (minutes; time at which thrombin generation ends and all generated thrombin has been inhibited) Peak height or peak thrombin (molar concentration (e.g., nM) of thrombin; peak or maximum concentration of thrombin generated) Velocity index (slope of thrombin generation between lag time/first thrombin generation and time to peak; corresponds to first derivative of this part of curve) Endogenous thrombin potential (ETP; area under the curve of the thrombin generation curve) ETP-based APC resistance test The addition of activated protein C (APC) to a TGA results in an inhibition of thrombin generation as measured by reduction of the endogenous thrombin potential (ETP; area under the thrombin generation curve). This can be used to assess APC resistance and is termed the ETP-based APC resistance test. Results may be expressed as normalized APC sensitivity ratio (nAPCsr), which corresponds to the ratio of the ETP measured in the presence and absence of APC divided by the same ratio in reference plasma. The higher the nAPCsr value, the greater the APC resistance of the person. The ETP-based APC resistance test was developed in 1997. References Blood tests Coagulation system Medical signs
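All of the listed parameters are simple geometric features of the thrombin-versus-time curve, so they can be extracted with a few lines of code. The sketch below (plain Python/NumPy) is purely illustrative: the synthetic curve, the 5 nM detection threshold and the ETP values fed to the nAPCsr calculation are invented and do not correspond to any particular analyser or assay.

```python
import numpy as np

def thrombogram_parameters(t_min, thrombin_nM, threshold_nM=5.0):
    """Extract standard TGA parameters from a thrombin generation curve."""
    t = np.asarray(t_min, dtype=float)
    y = np.asarray(thrombin_nM, dtype=float)

    above = np.where(y > threshold_nM)[0]
    lag_time = t[above[0]]            # time at which thrombin is first generated
    start_tail = t[above[-1]]         # time at which all generated thrombin is inhibited
    i_peak = int(np.argmax(y))
    peak, tt_peak = y[i_peak], t[i_peak]
    etp = np.trapz(y, t)              # endogenous thrombin potential = area under curve
    # mean slope between first thrombin generation and the peak (velocity index)
    velocity_index = (peak - threshold_nM) / (tt_peak - lag_time)
    return dict(lag_time=lag_time, tt_peak=tt_peak, peak=peak,
                start_tail=start_tail, velocity_index=velocity_index, etp=etp)

def napcsr(etp_with_apc, etp_without_apc, etp_ref_with_apc, etp_ref_without_apc):
    """Normalized APC sensitivity ratio for the ETP-based APC resistance test."""
    return (etp_with_apc / etp_without_apc) / (etp_ref_with_apc / etp_ref_without_apc)

if __name__ == "__main__":
    # synthetic, purely illustrative curve: a bump of thrombin over 60 minutes
    t = np.linspace(0, 60, 601)
    y = 350 * np.exp(-0.5 * ((t - 12) / 5.0) ** 2) * (t > 3)
    print(thrombogram_parameters(t, y))
    print("nAPCsr =", napcsr(400, 1600, 500, 1000))   # made-up ETP values
```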
Thrombin generation assay
[ "Chemistry" ]
521
[ "Blood tests", "Chemical pathology" ]
70,126,407
https://en.wikipedia.org/wiki/Space%20ethics
Space ethics, astroethics or astrobioethics is a discipline of applied ethics that discusses the moral and ethical implications arising from astrobiological research, space exploration and space flight. It deals with practical contemporary issues like the protection of the space environment and hypothetical future issues pertaining to our interaction with extraterrestrial life forms. Specific issues of space ethics include space debris mitigation, the militarization of space and the ethics of SETI and METI, but also more theoretical topics like space colonization, terraforming, directed panspermia and space mining. The field also concerns itself with more fundamental moral questions, such as the value of abiotic environments in space, the intrinsic value of extraterrestrial life, and how humans should treat extraterrestrial non-intelligent life (like microbes) and extraterrestrial intelligent life (and whether this distinction should be made in the first place). Astroethical issues are often discussed as elements of broader issues such as general environmental protection and imperialism. Astroethics have been described as an emerging discipline gaining in attention, a "necessity for astrobiology" and a "true issue for the future of astrobiology". Ethical guidelines for space exploration Planetary Protection A guiding principle in astroethics is that of Planetary Protection (PP), which seeks to prevent the introduction of lifeforms from Earth to other celestial bodies (forward contamination) and vice versa (back contamination), and thereby possible adverse consequences on existing ecospheres resulting from such contamination. This principle is anchored in the UN Outer Space Treaty, which was established in 1967 and has since been signed and ratified by all space-faring nations. Precautionary Principle The precautionary principle was defined in the 1998 Wingspread Conference on the Precautionary Principle. This approach is supposed to guide decisions in the face of a lack of scientific knowledge or consensus on a matter. In a 2010 COSPAR workshop at Princeton University, 26 experts embraced the precautionary principle and concluded that "further investigations before interference that is likely to be harmful to Earth and other extraterrestrial bodies, including extraterrestrial life and the contamination and disturbance of celestial environments", are to be conducted. Other Astroethical Principles for SETI SETI astrobiologist Margaret Race and Methodist theologian Richard Randolph have outlined 4 principles for the search for extra-terrestrial life within our solar system: Cause no harm to Earth, its life, or its diverse ecosystems. Respect the ecosystem on the surveyed celestial body, do not irreparably alter it or its evolutionary trajectory. Follow proper scientific procedures with honesty and integrity during all phases of exploration. Ensure international participation by all interested parties. Issues A wide range of concrete issues is discussed in astroethics. Some of them are herein elaborated. Sterlility Assumptions about outer space, particularly regarding space colonization, have characterized outer space as sterile and therefore a terra nullius. This assumption does not hold true, particularly considering that Earth is part of it. Space debris Millions of pieces of space debris, defunct artificial objects in space, are orbiting Earth. On average, one cataloged piece of space debris falls back onto the planet every day, potentially posing a risk to organisms and property. 
In total, an estimated 80 tons of space debris re-enter Earth's atmosphere every year. Due to the high friction with the atmospheric gases, the debris burns up, causing the release of its chemical components, which may contribute to atmospheric pollution and ozone depletion. Additionally, space debris orbits the Earth at extremely high velocity. In Low Earth Orbit, where all crewed space stations and many satellites are located, debris typically reaches speeds of around 8 km/s (approximately 18,000 mph or 29,000 km/h). As a result, even tiny pieces of debris can severely damage or destroy satellites and spacecraft in the event of a collision. This could pose a threat to the lives of astronauts on crewed missions and lead to the phenomenon of Kessler syndrome, where a collision of objects in space produces new fragments of space debris that could set off a chain reaction of more collisions. This could render the space around Earth untraversable for space missions and unsuitable for the use of satellites. As of March 2022, there are no legally binding international laws about who is responsible for the extraction of space debris, or mandating a reduction of new space debris brought into Earth's orbit. However, space agencies of several countries have implemented their own standards and policies to reduce introduction of new space debris, and the Inter-Agency Space Debris Coordination Committee (IADC) has been founded to address issues regarding orbital debris. Additionally, JAXA is researching an electromagnetic tether that could be used to pull debris down into the atmosphere. The moral problem is that those in power (space agencies) can launch material into the Earth's orbit for their own gains without being held accountable for it, while the general public has to bear the consequences (such as atmospheric pollution or the risk of being hit by space debris). Satellite surveillance Reconnaissance satellites are used for a variety of military and intelligence purposes, such as optical imaging and signals intelligence. It has been noted that such data could infringe on people's privacy and thereby lead to ethical and legal issues. It could also turn into a source of national security threats if such information got into malevolent hands. In order to ensure ethically correct obtainment and use of satellite data, leading researchers in law, meteorology and atmospheric science have called for new policy which would lead to more transparency and security. Weaponizing space In 1967, the Outer Space Treaty was signed, spurred by the development of intercontinental ballistic missiles, the Soviet Union's launch of Sputnik, the first artificial satellite, and the following arms race with the United States. The treaty outlaws all kinds of military action (including weapon tests) in space, limits the use of space to peaceful purposes only and ensures that all nations on Earth are free to explore space. This treaty has since been called into question multiple times, especially by former President of the United States Donald Trump. On June 18, 2018, Trump announced plans to establish a space force, which would constitute a new, sixth branch of the United States military. He expressed that "When it comes to defending America, it is not enough to merely have an American presence in space. We must have American dominance in space". On December 20, 2019, the United States Space Force Act was signed into law with votes from both Democratic and Republican senators and House members. 
As a result, the United States Space Force was founded. This was seen by some as an American contestation of the Outer Space Treaty. Viktor Bondarev, chair of the Federation Council Committee on Defense and Security, responded by saying that if the US were to go further and withdraw from the 1967 treaty, there would be "a tough response aimed at ensuring world security." This is despite Russia itself having a space force branch in their military. Private spaceflight and space tourism The emergence of space tourism gives rise to a number of ethical concerns. Future frequent and large-scale landings on celestial bodies like the moon may damage or pollute landing sites and the areas around them. While scientific activity in space is benign, this cannot be guaranteed for actions by private people. If, how, by what criteria and by whom laws should be made to ensure that space tourism doesn't negatively impact other celestial bodies is a question of astroethics. Terraforming other celestial bodies Terraforming is a controversial astroethical matter. Proponents of terraforming, like Robert Zubrin, argue that humans, being the only technologically advanced and intelligent species on Earth, have a moral obligation to make other celestial bodies habitable for Earth's lifeforms to ensure their survival after the inevitable destruction of our planet. The other, ecocentrist and biocentrist side of the debate criticizes this position as anthropocentrism and argues that other celestial bodies may already contain life which always has intrinsic value, no matter how advanced it may be. They oppose the interplanetary contamination and changes to the other world that would stem from terraforming, as they could endanger the indigenous life and alter its evolutionary trajectory. Ethicality of SETI and METI SETI and especially METI (Active SETI) are not uncontroversial and come with their own ethical implications. METI has been criticized as incompatible with the precautionary principle because it could reveal the location of our planet to potentially malevolent alien species. It therefore also potentially puts all of humanity at risk without the need for their individual prior consent, which violates the basic scientific rule of informed consent that all other science must abide by. Reflecting on human history, some authors even fear the enslavement of humanity, should we be discovered by a more advanced species. Similarly, Stephen Hawking, one of the most prominent METI critics, warned of the potential consequences of a meeting with such a species, citing the near-extinction of Aboriginal Tasmanians as an equivalent case from human history. Concerns regarding the ethicality of METI might be a solution to the Fermi paradox. It is proposed that extraterrestrial life forms may abstain from attempting interstellar communication due to the potential danger it may pose to them, in line with the precautionary principle. Other astroethical considerations regarding METI are the lack of legally enforceable protocols about the steps that should be taken once extraterrestrial life is discovered, the unpredictability of cultural consequences of that discovery (potential paradigm changes in policy, nations, religions, etc.), who will get to speak for humanity in case contact is made, how and by whom that person or group of people should be selected, and what the contents of the messages should be. 
Value of extraterrestrial life A further point of contention in the field is whether extraterrestrial life has intrinsic value and therefore if humans have a moral obligation to protect it. This becomes even more difficult when considering the wide span of possible extraterrestrial life forms and whether our treatment of them should differ based on criteria such as their advancement and intelligence. As former NASA chief historian Steven J. Dick put it, "Does Mars belong to the Martians, even if the Martians are only microbes?" Dick argues that the first step in deciding how we should interact with life forms is to assess their moral status, which is complicated by our ambiguous relations with animals on earth, sheltering some species as pets while eating and exterminating others. The principle of planetary protection provides that all life on other celestial bodies is worthy of protection from harm (also in the form of contamination) and therefore confers rights even on hypothetical extraterrestrial microbes, a situation that contrasts with our treatment of microbes and even most higher-developed organisms on Earth. This difference in treatment is hardly justifiable. Therefore, according to Dick, astroethical considerations will broaden our current ethical horizon: they will unveil such inconsistencies and double standards and move humanity from an anthropocentric ethic (ascribing intrinsic value only to rationing beings) to a cosmocentric or biocentric one that values all living things. In fact, Dick says that the finding of extraterrestrial life would "necessitate" a transition away from the anthropocentric approach because it would no longer be consistently applicable to a cosmos that harbors life beyond Earth. Space burial The decision to include several grams of human cremains onboard Peregrine Lunar Lander flight 01 was criticized by the Navajo Nation, whose president, Buu Nygren, argued that the Moon is sacred to the Navajo and other American Indian nations, saying "As stewards of our culture and traditions, it is our responsibility to voice our grievances when actions are taken that could desecrate sacred spaces and disregard deeply held cultural beliefs". Celestis CEO Charles Chafer responded that "[the company] reject[s] the whole premise that this is somehow desecration" and that "nobody owns the Moon". The launch was not successful in reaching the Moon. References See also Environmental ethics Ethics of technology Astrobiology Ethics of science and technology Space applications Space
Space ethics
[ "Physics", "Astronomy", "Mathematics", "Technology", "Biology" ]
2,511
[ "Origin of life", "Outer space", "Speculative evolution", "Astrobiology", "Space applications", "Space", "Geometry", "Ethics of science and technology", "Biological hypotheses", "Spacetime", "Astronomical sub-disciplines" ]
70,131,321
https://en.wikipedia.org/wiki/Deposit%20gauge
A deposit gauge is a large, funnel-like scientific instrument used for capturing and measuring atmospheric particulates, notably soot, carried in air pollution and deposited back down to ground. Design and construction Deposit gauges are similar to rain gauges. They have a large circular funnel on top, made of stone so as not to be corroded by acid rain and mounted on a simple wooden or metal stand, which drains down into a collection bottle beneath. Typically the funnel has a wire-mesh screen around its perimeter to deter perching birds. Most are made to a standardized design, known as a standard deposit gauge, introduced in 1916 and formalized in a British Standard in 1951, which means the pollution collected in different places can be systematically studied and compared. The bottle is removed after a month and the contents taken away for analysis of water (such as rain, fog, and snow), insoluble matter (such as soot), and soluble matter. Early history The first gauges of this type were developed in the early 20th century by W.J. Russell of St Bartholomew's Hospital and the Coal Smoke Abatement Society. Between 1910 and 1916, the design was refined and standardized by the Committee for the Investigation of Atmospheric Pollution, a group of expert, volunteer scientists studying air pollution of which Sir Napier Shaw, first director of the Met Office, was chair. The first scientific paper featuring deposit gauge measurements was titled "The Sootfall of London: Its Amount, Quality, and Effects" and published in The Lancet in January 1912. Thanks to the introduction of the deposit gauge, air quality in Britain was monitored systematically from 1914 onward and this played an important role in determining the effectiveness of efforts to control pollution. By 1927, some deposit gauges were already showing 50 percent reductions in "deposited matter", although air pollution remained a major problem. Over the next few decades, deposit gauges were deployed in many British towns and cities, allowing rough comparisons to be made of pollution in different parts of the country. According to pollution historian Stephen Mosley, by 1949, some 177 gauges had been deployed across Britain, so creating the world's first large-scale pollution monitoring network, but the number increased dramatically after the Great London Smog of 1952, reaching 615 in 1954 and 1066 in 1966. Modern use Although deposit gauges were inaccurate and their limitations were well known from the start, their widespread introduction still represented a considerable advance in the study and comparison of pollution at different times of the year and in different places. In his book State, Science and the Skies: Governmentalities of the British Atmosphere, Mark Whitehead, a geography lecturer at Aberystywth University, has described the deposit gauge as "perhaps the most important technological device in the history of Britain's air pollution monitoring". Even so, from the mid-20th century, it was gradually superseded by more accurate instruments and better methods of data collection and analysis. Today, although air pollution is more likely to be measured with automated electronic sensors, deposit gauges are still occasionally used. Modern variants of the standard deposit gauge include the so-called "frisbee" gauge, in which the deposit collector is shaped like an inverted frisbee. Other variants include the directional deposit gauge, which has four tall, removable bottles to collect deposits arriving from different directions. 
See also Rain gauge Air pollution measurement References Further reading Air pollution Atmospheric chemistry Measuring instruments Scientific instruments
Deposit gauge
[ "Chemistry", "Technology", "Engineering" ]
699
[ "Scientific instruments", "nan", "Measuring instruments" ]
70,140,127
https://en.wikipedia.org/wiki/Potassium%20tetrafluoronickelate
Potassium tetrafluoronickelate is the inorganic compound with the formula K2NiF4. It features octahedral (high spin) Ni centers with Ni-F bond lengths of 2.006 Å. This green solid is a salt of tetrafluoronickelate. It is prepared by melting a mixture of nickel(II) fluoride, potassium fluoride, and potassium bifluoride. The compound adopts a perovskite-like structure consisting of layers of octahedral Ni centers interconnected by doubly bridging fluoride ligands. The layers are interconnected by potassium cations. It is one of the principal Ruddlesden-Popper phases. Early discoveries on cuprate superconductors focused on compounds with structures closely related to K2NiF4, e.g. lanthanum cuprate and derivative lanthanum barium copper oxide. References Nickel compounds Fluorides Metal halides Crystal structure types
Potassium tetrafluoronickelate
[ "Chemistry", "Materials_science" ]
208
[ "Inorganic compounds", "Crystal structure types", "Salts", "Crystallography", "Metal halides", "Fluorides" ]
53,075,730
https://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten%E2%80%93Monod%20kinetics
For Michaelis–Menten–Monod (MMM) kinetics it is intended the coupling of an enzyme-driven chemical reaction of the Michaelis–Menten type with the Monod growth of an organisms that performs the chemical reaction. The enzyme-driven reaction can be conceptualized as the binding of an enzyme E with the substrate S to form an intermediate complex C, which releases the reaction product P and the unchanged enzyme E. During the metabolic consumption of S, biomass B is produced, which synthesizes the enzyme, thus feeding back to the chemical reaction. The two processes can be expressed as where and are the forward and backward equilibrium rate constants, is the reaction rate constant for product release, is the biomass yield coefficient, and is the enzyme yield coefficient. Transient kinetics The kinetic equations describing the reactions above can be derived from the GEBIK equations and are written as where is the biomass mortality rate and is the enzyme degradation rate. These equations describe the full transient kinetics, but cannot be normally constrained to experiments because the complex C is difficult to measure and there is no clear consensus on whether it actually exists. Quasi-steady-state kinetics Equations 3 can be simplified by using the quasi-steady-state (QSS) approximation, that is, for ; under the QSS, the kinetic equations describing the MMM problem become where is the Michaelis–Menten constant (also known as the half-saturation concentration and affinity). Implicit analytic solution If one hypothesizes that the enzyme is produced at a rate proportional to the biomass production and degrades at a rate proportional to the biomass mortality, then Eqs. 4 can be rewritten as where , , , are explicit function of time . Note that Eq. (4b) and (4d) are linearly dependent on Eqs. (4a) and (4c), which are the two differential equations that can be used to solve the MMM problem. An implicit analytic solution can be obtained if is chosen as the independent variable and , , and ) are rewritten as functions of so to obtain where has been substituted by as per mass balance , with the initial value when , and where has been substituted by as per the linear relation expressed by Eq. (4d). The analytic solution to Eq. (5b) is with the initial biomass concentration when . To avoid the solution of a transcendental function, a polynomial Taylor expansion to the second-order in is used for in Eq. (6) as Substituting Eq. (7) into Eq. (5a} and solving for with the initial value , one obtains the implicit solution for as with the constants For any chosen value of , the biomass concentration can be calculated with Eq. (7) at a time given by Eq. (8). The corresponding values of and can be determined using the mass balances introduced above. See also Enzyme kinetics Michaelis–Menten kinetics Monod GEBIK equations References Enzyme kinetics
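Since the displayed equations did not survive formatting, the sketch below does not reproduce the exact GEBIK-derived system; it integrates the simplest quasi-steady-state reading of Michaelis–Menten–Monod dynamics, with Michaelis–Menten saturation of substrate uptake, an enzyme pool assumed proportional to biomass, a yield coefficient converting uptake into growth, and first-order biomass mortality. All parameter values and initial conditions are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mmm_qss(t, state, v_max, K, Y, mu_d):
    """Illustrative quasi-steady-state Michaelis–Menten–Monod system.

    S: substrate concentration, B: biomass.  Uptake saturates in S with
    half-saturation constant K and scales with B (enzyme proportional to
    biomass); a fraction Y of the consumed substrate becomes new biomass,
    which also dies at a first-order rate mu_d.
    """
    S, B = state
    uptake = v_max * B * S / (K + S)     # Michaelis–Menten–Monod uptake term
    dS = -uptake
    dB = Y * uptake - mu_d * B
    return [dS, dB]

if __name__ == "__main__":
    v_max, K, Y, mu_d = 1.0, 0.5, 0.4, 0.02          # hypothetical constants
    sol = solve_ivp(mmm_qss, (0.0, 50.0), [10.0, 0.1],   # S(0) = 10, B(0) = 0.1
                    args=(v_max, K, Y, mu_d), max_step=0.1)
    S_end, B_end = sol.y[:, -1]
    print(f"after t = 50: substrate ≈ {S_end:.3f}, biomass ≈ {B_end:.3f}")
```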
Michaelis–Menten–Monod kinetics
[ "Chemistry" ]
622
[ "Chemical kinetics", "Enzyme kinetics" ]
53,079,421
https://en.wikipedia.org/wiki/Dissimilatory%20nitrate%20reduction%20to%20ammonium
Dissimilatory nitrate reduction to ammonium (DNRA), also known as nitrate/nitrite ammonification, is the result of anaerobic respiration by chemoorganoheterotrophic microbes using nitrate (NO3−) as an electron acceptor. Under anaerobic conditions, microbes which undertake DNRA oxidise organic matter and use nitrate (rather than oxygen) as an electron acceptor, reducing it to nitrite and then to ammonium (NO3− → NO2− → NH4+). Dissimilatory nitrate reduction to ammonium is more common in prokaryotes but may also occur in eukaryotic microorganisms. DNRA is a component of the terrestrial and oceanic nitrogen cycle. Unlike denitrification, it acts to conserve bioavailable nitrogen in the system, producing soluble ammonium rather than unreactive nitrogen gas (N2). Background and process Cellular process Dissimilatory nitrate reduction to ammonium is a two-step process, reducing NO3− to NO2− and then NO2− to NH4+, though the reaction may begin with NO2− directly. Each step is mediated by a different enzyme; the first step of dissimilatory nitrate reduction to ammonium is usually mediated by a periplasmic nitrate reductase. The second step (respiratory NO2− reduction to NH4+) is mediated by cytochrome c nitrite reductase, occurring at the periplasmic membrane surface. Although DNRA does not produce nitrous oxide (N2O) as an intermediate during nitrate reduction (as denitrification does), N2O may still be released as a byproduct, and thus DNRA may also act as a sink of fixed, bioavailable nitrogen. DNRA's production of N2O may be enhanced at higher pH levels. Denitrification Dissimilatory nitrate reduction to ammonium is similar to the process of denitrification, though NO2− is reduced further to NH4+ rather than to N2, transferring eight electrons. Both denitrifiers and nitrate ammonifiers compete for NO3− in the environment. Although the redox potential of dissimilatory nitrate reduction to ammonium is lower than that of denitrification and it yields less Gibbs free energy, the energy yield of denitrification may not be efficiently conserved in its series of enzymatic reactions, and nitrate ammonifiers may achieve higher growth rates and outcompete denitrifiers. This may be especially pronounced when NO3− is limiting compared to organic carbon, as organic carbon is oxidised more 'efficiently' per NO3− (as each NO3− molecule is reduced further). The balance of denitrification and DNRA is important to the nitrogen cycle of an environment as both use NO3−, but, unlike denitrification, which produces gaseous, non-bioavailable N2 (a sink of nitrogen), DNRA produces bioavailable, soluble NH4+. Marine context Marine microorganisms As dissimilatory nitrate reduction to ammonium is an anaerobic respiration process, marine microorganisms capable of performing DNRA are most commonly found in environments low in O2, such as oxygen minimum zones (OMZs) in the water column, or sediments with steep O2 gradients. DNRA has been documented in prokaryotes inhabiting the upper layer of marine sediments. For example, benthic sulfur bacteria in genera such as Beggiatoa and Thioploca inhabit anoxic sediments on continental shelves and obtain energy by oxidizing sulfide via DNRA. These bacteria are able to carry out DNRA using intracellular nitrate stored in vacuoles.
The direct reduction of nitrate to ammonium via dissimilatory nitrate reduction, coupled with the direct conversion of ammonium to dinitrogen via Anammox, has been attributed to significant nitrogen loss in certain parts of the ocean; this DNRA-Anammox coupling by DNRA and Anammox bacteria can account for nitrate loss in areas with no detectable denitrification, such as in OMZs off the coast of Chile, Peru, and Namibia, as well as OMZs over the Omani Shelf in the Arabian Sea. While denitrification is more energetically favourable than DNRA, there is evidence that bacteria using DNRA conserve more energy than denitrifiers, allowing them to grow faster. Thus, via DNRA-Anammox coupling, bacteria using DNRA and Anammox may be stronger competitors for substrates than denitrifiers. While dissimilatory nitrate reduction to ammonium is more commonly associated with prokaryotes, recent research has found increasing evidence of DNRA in various eukaryotic microorganisms. Of the known DNRA-capable fungal species, one is found in marine ecosystems; an isolate of ascomycete Aspergillus terreus from an OMZ of the Arabian Sea has been found to be capable of performing DNRA under anoxic conditions. Evidence of DNRA has also been found in marine foraminifers. More recently, it has been discovered that using intracellular nitrate stores, diatoms can carry out dissimilatory nitrate reduction to ammonium, likely for short-term survival or for entering resting stages, thereby allowing them to persist in dark and anoxic conditions. However, their metabolism is probably not sustained by DNRA for long-term survival during resting stages, as these resting stages often can be much longer than their intracellular nitrate supply would last. The use of DNRA by diatoms is a possible explanation for how they can survive buried in dark, anoxic sediment layers on the ocean floor, without being able to carry out photosynthesis or aerobic respiration. Currently, DNRA is known to be carried out by the benthic diatom Amphora coffeaeformis, as well as the pelagic diatom Thalassiosira weissflogii. As diatoms are a significant source of oceanic primary production, the ability for diatoms to perform DNRA has major implications on their ecological role, as well as their role in the marine nitrogen cycle. Ecological role Unlike denitrification, which removes reactive nitrogen from the system under gaseous form (as N2 or N2O), dissimilatory nitrate reduction to ammonium conserves nitrogen as dissolved species within the system. Since DNRA takes nitrate and converts it into ammonium, it does not produce N2 or N2O gases. Consequently, DNRA recycles nitrogen rather than causing gaseous-N loss, which leads to more sustainable primary production and nitrification. Within an ecosystem, denitrification and DNRA can occur simultaneously. Usually DNRA is about 15% of the total nitrate reduction rate, which includes both DNRA and denitrification. However, the relative importance of each process is influenced by environmental variables. For example, DNRA is found to be three to seven times higher in sediments under fish cages than nearby sediments due to the accumulation of organic carbon. Conditions where dissimilatory nitrate reduction to ammonium is favoured over denitrification in marine coastal ecosystems include the following: High carbon loads and high sulfate reduction rates (e.g. 
areas of coastal or river runoff) Unvegetated subtidal sediment Marshes with high temperatures and sulfate reduction rates (producing high levels of sulfides), e.g. mangroves High organic matter deposition (e.g. aquacultures) Ecosystems where organic matter has a high C/N ratio High electron donor (organic carbon) to acceptor (nitrate) ratio High summer temperatures and low NO3− concentrations High sulfide concentration can inhibit the processes of nitrification and denitrification. Meanwhile, it can also enhance dissimilatory nitrate reduction to ammonium since high sulfide concentration provides more electron donors. Ecosystems where DNRA is dominant have less nitrogen loss, resulting in higher levels of preserved nitrogen in the system. Within sediments, the total dissimilatory nitrate reduction to ammonium rate is higher in spring and summer compared to autumn. Prokaryotes are the major contributors for DNRA during summer, while eukaryotes and prokaryotes contribute similarly to DNRA during spring and autumn. Potential benefits of using dissimilatory nitrate reduction to ammonium for individual organisms may include the following: Detoxification of accumulated nitrite: if an enzyme uses nitrate as an electron acceptor and produces nitrite, it can result in high levels of intracellular nitrite concentrations that can be toxic to the cell. DNRA does not store nitrite within the cell, reducing the level of toxicity. DNRA produces an electron sink that can be used for NADH re-oxidation into NAD+: the need for having an electron sink is more apparent when the environment is nitrate-limited. Changes to f-ratio calculation The balance of dissimilatory nitrate reduction to ammonium and denitrification alters the accuracy of f-ratio calculations. The f-ratio is used to quantify the efficiency of the biological pump, which reflects sequestering of carbon from the atmosphere to the deep sea. The f-ratio is calculated using estimates of 'new production' (primary productivity stimulated by nutrients entering the photic zone from outside the photic zone, for example from the deep ocean) and 'regenerated production' (primary productivity stimulated by nutrients already in the photic zone, released by remineralisation). Calculations of the f-ratio use the nitrogen species stimulating primary productivity as a proxy for the type of production occurring; productivity stimulated by NH4+ rather than NO3− is 'regenerated production'. DNRA also produces NH4+ (in addition to remineralisation) but from organic matter which has been exported from the photic zone; this may be subsequently reintroduced by mixing or upwelling of deeper water back to the surface, thereby, stimulating primary productivity; thus, in areas where high amounts of DNRA is occurring, f-ratio calculations will not be accurate. References Anaerobic digestion Cellular respiration
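To make the f-ratio point concrete, the toy calculation below (plain Python; all production terms are invented) computes the f-ratio with the usual proxy — NO3−-fueled production counted as new, NH4+-fueled production counted as regenerated — and contrasts it with a ratio based on where the nutrient actually entered the photic zone, so that NH4+ produced at depth by DNRA and mixed back up is treated as an external supply. The divergence between the two numbers is the inaccuracy described above.

```python
def f_ratio(new_production, regenerated_production):
    """f-ratio = new production / (new + regenerated) production."""
    return new_production / (new_production + regenerated_production)

if __name__ == "__main__":
    # hypothetical primary production terms (e.g. mmol C m-2 d-1)
    p_no3 = 40.0           # fueled by upwelled nitrate -> counted as "new"
    p_nh4_recycled = 50.0  # fueled by NH4+ from in-situ remineralisation -> "regenerated"
    p_nh4_dnra = 10.0      # fueled by NH4+ produced at depth by DNRA and mixed upward

    # proxy f-ratio: every NH4-fueled term is treated as regenerated production
    proxy = f_ratio(p_no3, p_nh4_recycled + p_nh4_dnra)

    # ratio by nutrient origin: DNRA-derived NH4+ entered the photic zone from below,
    # so the production it fuels behaves like new production
    by_origin = f_ratio(p_no3 + p_nh4_dnra, p_nh4_recycled)

    print(f"proxy f-ratio        = {proxy:.2f}")      # 0.40
    print(f"origin-based f-ratio = {by_origin:.2f}")  # 0.50
```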
Dissimilatory nitrate reduction to ammonium
[ "Chemistry", "Engineering", "Biology" ]
2,177
[ "Cellular respiration", "Biochemistry", "Anaerobic digestion", "Environmental engineering", "Water technology", "Metabolism" ]
53,080,832
https://en.wikipedia.org/wiki/Institute%20for%20Genetic%20Engineering%20and%20Biotechnology
Institute for Genetic Engineering and Biotechnology (INGEB) is a Bosnian public research institute, member of Sarajevo University (UNSA), and affiliate center of International Centre for Genetic Engineering and Biotechnology (ICGEB). ICGEB was established as a special project of the United Nations Industrial Development Organization (UNIDO). INGEB was founded under the name "Center for Genetic Engineering and Biotechnology", in 1988. INGEB's headquarters are located in Sarajevo. One of INGEB's most prominent founders was Professor Rifat Hadžiselimović, with the support of the Government of Socialist Republic of Bosnia and Herzegovina, ANUBiH and the biggest B&H economic systems. After the establishment document, INGEB was entrusted with the functions maker, institutional creator and carrier of the overall scientific and professional work in the development of genetic engineering and biotechnology based molecular biology in B&H. In 1993, by a legal act, the Assembly of Republic of Bosnia and Herzegovina, assumed the right of the founder of the institution, at the beginning of the Bosnian War, and later, in 1999, entitled founder of INGEB (as a "public institution that will operate within the University of Sarajevo") took over the Sarajevo Canton. Structure and activities In INGEB, there are following functional units: Laboratory for forensic genetics; Laboratory for human genetics; Laboratory for GMO and food biosafety; Laboratory for molecular genetics of natural resources; Laboratory for bioinformatics and biostatistics, and Laboratory for cytogenetics and genotoxicology. Laboratory for Forensic Genetics Laboratory for forensic genetics provides scientific approach to analysis of samples of different origin. In this laboratory DNA profiling is routinely done for skeletal remains, blood stains (on different materials), hair, semen, controversial traces on cigarette butts, controversial traces under fingernails, in urine etc. Expert activities perform in laboratory for forensic genetics include: paternity testing using samples of buccal swab (which reduces traumatic effect on children), blood, hair, bones and other baseline samples. motherless paternity testing, maternity testing without the presence of the father, biological kinship testing, forensic DNA analysis for police, prosecution, law offices, courts and private individual purposes. In laboratory for forensic genetics scientific projects, supported by the respective ministries and foreign institutions, are implemented or are under realization. The focus of scientific research is directed towards genetic analysis of archaeological skeletal samples, forensic genetic parameters testing of Bosnian population, as well as towards a target oriented expansion of previously initiated population genetic research. Laboratory for Human Genetics This laboratory represents the organizational segment of the Institute which is dealing with genetic characterisation of DNA of human origin for the purpose of basic and applied research. We use molecular-genetics approach mainly PCR based in investigating the genetic structure. Main activities of the laboratory comprise research directions: detection of circulating DNA sequences as potential markers in molecular oncology, gene expression profiling for characterization of therapeutical effects of novel and biological substances and individual genetic predisposition to complex traits (disorders). 
Other important aspect is participation in higher education programs at University of Sarajevo as well as public engagement in developing molecular-genetics methods for support of medical diagnostics. Laboratory for GMO and Food Biosafety The laboratory scope includes wide array of activities mainly focused on the issues of food biosafety and plant biotechnology. It provides qualitative and quantitative analysis of specific DNA sequences in various food matrices, provides advice and correct interpretation of GMO related data to consumers and food safety authority and promotes science based approach to biosafety. In that respect the Laboratory has established communication with JRC-EURL-GMFF and follows the published guidelines. Also, the Laboratory develops new analytical methods, where appropriate, to bridge the gaps in the available methodology. Research aspect of the Laboratory is mainly focused on endemic and endangered plant species with bioactive potential. Simultaneously with bioactive potential of a species, which is explored in in vitro and in vivo models, molecular markers are employed to evaluate its genetic diversity for the purpose of conservation. Laboratory for Cytogenetics and Genotoxicology Research activities of the Laboratory for Cytogenetics and Genotoxicology are based on: Cytogenetic and genotoxicological analysis of bioactive potential of certain physical, chemical and biological agents, and Cytogenetic and genotoxicological monitoring of human populations in Bosnia and Herzegovina. Expert activity of the Laboratory for Cytogenetics and Genotoxicology mainly includes chromosome analysis and karyotyping of human samples. The most frequently used tests in research projects of this lab are based on cell culture and include: chromosome aberrations analysis, cytokinesis-block micronucleus cytome assay and sister chromatids exchange assay. Evaluation of cytotoxic and cytostatic potential of various chemical agents includes application of colorimetric method in different cell lines. Research capacities are significantly used for academic education and the realization of final thesis of Sarajevo University students. Expertises: Karyotype analysis; Cytogenetic biodosimetry; In vitro testing of genotoxic and cytotoxic potential of chemical substances and herbal extracts using: Chromosome aberrations analysis; Cytokinesis-block micronucleus cytome assay; Sister-chromatid exchange assay; Allium assay; Alamar blue assay; Trypan blue assay. Primary cell lines establishment. Projects: Analysis of K2(B3O3F4OH) bioactive and medical potential; (ongoing project). Analysis of natural bioactive compounds potential in the inhibition of genotoxic and cytotoxic effects in vitro; (2012-2013) Financed by Federal ministry of education and science. Cytotoxicity and genotoxicity analysis of natural and synthetic food colorants in FB&H; (2011-2012) Financed by Federal ministry of education and science. Evaluation of antitumor properties of halogenated boroxine; (2010-2011) Financed by Federal ministry of education and science. Participation in international collaborative project: HUMNXL – Exfoliated cells micronucleus project; (2009-2011); Analysis of the specific chromosomal markers of basal cell carcinoma; (2007-2009) Financed by the Ministry of Education and Science of Sarajevo Canton; Cytogenetic markers in human populations of FB&H as possible bioindicators for Balcan syndrome; (2002-2003) Financed by Federal ministry of science, culture and sport. 
References External links Institut za genetičko inženjerstvo i biotehnologiju u Sarajevu Research institutes in Bosnia and Herzegovina Genetic engineering University of Sarajevo Biological research institutes Medical research institutes Biochemistry research institutes Research institutes established in 1988 Organizations based in Sarajevo
Institute for Genetic Engineering and Biotechnology
[ "Chemistry", "Engineering", "Biology" ]
1,404
[ "Biological engineering", "Biochemistry organizations", "Genetic engineering", "Biochemistry research institutes", "Molecular biology" ]
53,084,684
https://en.wikipedia.org/wiki/Canary%20Diamond
The Canary Diamond is an uncut canary-yellow 17.86 carat diamond found in 1917 at what is now the Crater of Diamonds State Park in Arkansas. It is in the collection of the Smithsonian Museum of Natural History. The diamond was in the collection of civil engineer and mineral collector Washington Roebling; his son donated it, along with the rest of Roebling's collection, to the museum in 1926 after Roebling's death. See also List of diamonds References Diamonds originating in the United States Gemstones Individual diamonds
Canary Diamond
[ "Physics" ]
107
[ "Materials", "Gemstones", "Matter" ]
53,085,145
https://en.wikipedia.org/wiki/Axonometry
Axonometry is a graphical procedure belonging to descriptive geometry that generates a planar image of a three-dimensional object. The term "axonometry" means "to measure along axes", and indicates that the dimensions and scaling of the coordinate axes play a crucial role. The result of an axonometric procedure is a uniformly-scaled parallel projection of the object. In general, the resulting parallel projection is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane), which in this context is called an orthogonal axonometry. In technical drawing and in architecture, axonometric perspective is a form of two-dimensional representation of three-dimensional objects whose goal is to preserve the impression of volume or relief. Sometimes also called rapid perspective or artificial perspective, it differs from conical perspective and does not represent what the eye actually sees: in particular parallel lines remain parallel and distant objects are not reduced in size. It can be considered a conical perspective conique whose center has been pushed out to infinity, i.e. very far from the object observed. The term axonometry is used both for the graphical procedure described below, as well as the image produced by this procedure. Axonometry should not be confused with axonometric projection, which in English literature usually refers to orthogonal axonometry. Principle of axonometry Pohlke's theorem is the basis for the following procedure to construct a scaled parallel projection of a three-dimensional object: Select projections of the coordinate axes, such that all three coordinate axes are not collapsed to a single point or line. Usually the z-axis is vertical. Select for these projections the foreshortenings, , and , where The projection of a point is determined in three sub-steps (the result is independent of the order of these sub-steps): starting at the point , move by the amount in the direction of , then move by the amount in the direction of , then move by the amount in the direction of and finally Mark the final position as point . In order to obtain undistorted results, select the projections of the axes and foreshortenings carefully (see below). In order to produce an orthographic projection, only the projections of the coordinate axes are freely selected; the foreshortenings are fixed (see :de:orthogonale Axonometrie). The choice of the images of the axes and the foreshortenings Notation: angle between -axis and -axis angle between -axis and -axis angle between -axis and -axis. The angles can be chosen so that The foreshortenings: Only for suitable choices of angles and foreshortenings does one get undistorted images. The next diagram shows the images of the unit cube for various angles and foreshortenings and gives some hints for how to make these personal choices. In order to keep the drawing simple, one should choose simple foreshortenings, for example or . If two foreshortenings are equal, the projection is called dimetric. If the three foreshortenings are equal, the projection is called isometric. If all foreshortenings are different, the projection is called trimetric. The parameters in the diagram at right (e.g. of the house drawn on graph paper) are: Hence it is a dimetric axonometry. The image plane is parallel to the y-z-plane and any planar figure parallel to the y-z-plane appears in its true shape. 
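The three-sub-step construction described above translates directly into a small program. The sketch below is only an illustration of that procedure, not code from any referenced source: the axis-image directions and foreshortening factors (here called angle_x, angle_y and vx, vy, vz) are assumed parameters that the user chooses, and the z-axis image is taken to be vertical, as the text says is usual.

```python
import math

def axonometric_projection(points, angle_x, angle_y, vx=1.0, vy=1.0, vz=1.0):
    """Project 3D points to 2D by the three-sub-step axonometric procedure.

    angle_x, angle_y: directions (degrees, measured from the horizontal image
    axis) of the images of the x- and y-axes; the z-axis image is drawn vertical.
    vx, vy, vz: foreshortening factors applied along the respective axes.
    """
    # Unit vectors of the projected coordinate axes in the image plane.
    ex = (math.cos(math.radians(angle_x)), math.sin(math.radians(angle_x)))
    ey = (math.cos(math.radians(angle_y)), math.sin(math.radians(angle_y)))
    ez = (0.0, 1.0)  # z-axis image drawn vertically

    images = []
    for (x, y, z) in points:
        # Starting from the image of the origin, move by vx*x along the x-axis
        # image, then vy*y along the y-axis image, then vz*z along the z-axis image.
        u = vx * x * ex[0] + vy * y * ey[0] + vz * z * ez[0]
        v = vx * x * ex[1] + vy * y * ey[1] + vz * z * ez[1]
        images.append((u, v))
    return images

# Example: the eight vertices of the unit cube in an isometric-style view
# (axis images 120 degrees apart, equal foreshortenings).
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(axonometric_projection(cube, angle_x=210, angle_y=330))
```

Because the mapping is linear in the point coordinates, parallel lines stay parallel in the output, which is exactly the property the text attributes to axonometric images.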
Special axonometries Engineer projection In this case the foreshortenings are: (dimetric axonometry) and the angles between the axes are: These angles are marked on many German set squares. Advantages of an engineer projection: simple foreshortenings, a uniformly scaled orthographic projection with scaling factor 1.06, the contour of a sphere is a circle (in general, an ellipse) . For more details: see :de:Axonometrie. Cavalier perspective, cabinet perspective image plane parallel to y-z-plane. In the literature the terms "cavalier perspective" and "cabinet perspective" are not uniformly defined. The above definition is the most general one. Often, further restrictions are applied. For example: cabinet perspective: additionally choose (oblique) and (dimetric), cavalier perspective: additionally choose (oblique) and (isometric). Birds eye view, military projection image plane parallel to x-y-plane. military projection: additionally choose (isometric). Such axonometries are often used for city maps, in order to keep horizontal figures undistorted. Isometric axonometry (Not to be confused with an isometry between metric spaces.) For an isometric axonometry all foreshortenings are equal. The angles can be chosen arbitrarily, but a common choice is . For the standard isometry or just isometry one chooses: (all axes undistorted) The advantage of a standard isometry: the coordinates can be taken unchanged, the image is a scaled orthographic projection with scale factor . Hence the image has a good impression and the contour of a sphere is a circle. Some computer graphic systems (for example, xfig) provide a suitable raster (see diagram) as support. In order to prevent scaling, one can choose the unhandy foreshortenings (instead of 1) and the image is an (unscaled) orthographic projection. Circles in axonometry A parallel projection of a circle is in general an ellipse. An important special case occurs, if the circle's plane is parallel to the image plane–the image of the circle is then a congruent circle. In the diagram, the circle contained in the front face is undistorted. If the image of a circle is an ellipse, one can map four points on orthogonal diameters and the surrounding square of tangents and in the image parallelogram fill-in an ellipse by hand. A better, but more time consuming method consists of drawing the images of two perpendicular diameters of the circle, which are conjugate diameters of the image ellipse, determining the axes of the ellipse with Rytz's construction and drawing the ellipse. Spheres in axonometry In a general axonometry of a sphere the image contour is an ellipse. The contour of a sphere is a circle only in an orthogonal axonometry. But, as the engineer projection and the standard isometry are scaled orthographic projections, the contour of a sphere is a circle in these cases, as well. As the diagram shows, an ellipse as the contour of a sphere might be confusing, so, if a sphere is part of an object to be mapped, one should choose an orthogonal axonometry or an engineer projection or a standard isometry. References Notes External links Orthogonal axonometry Graphical projections
Axonometry
[ "Mathematics" ]
1,456
[ "Mathematical objects", "Functions and mappings", "Graphical projections", "Mathematical relations" ]
65,773,644
https://en.wikipedia.org/wiki/Vivanti%E2%80%93Pringsheim%20theorem
The Vivanti–Pringsheim theorem is a mathematical statement in complex analysis that determines a specific singularity for a function described by a certain type of power series. The theorem was originally formulated by Giulio Vivanti in 1893 and proved in the following year by Alfred Pringsheim. More precisely, the theorem states the following: a complex function defined by a power series with non-negative real coefficients and radius of convergence R > 0 has a singularity at the point z = R. A simple example is the (complex) geometric series 1 + z + z^2 + ⋯ = 1/(1 − z), which has radius of convergence 1 and a singularity at z = 1. References Reinhold Remmert: The Theory of Complex Functions. Springer Science & Business Media, 1991, , p. 235 I-hsiung Lin: Classical Complex Analysis: A Geometric Approach (Volume 2). World Scientific Publishing Company, 2010, , p. 45 Theorems in complex analysis
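To see why the non-negativity of the coefficients matters, it helps to compare two series of radius 1; this is a standard illustration rather than material from the article:

```latex
% Non-negative coefficients: singular at z = R = 1
\sum_{n=0}^{\infty} z^{n} \;=\; \frac{1}{1-z}, \qquad \text{singular at } z = 1 .

% Mixed signs: radius of convergence still 1, but z = 1 is a regular point
\sum_{n=0}^{\infty} (-1)^{n} z^{n} \;=\; \frac{1}{1+z}, \qquad \text{singular only at } z = -1 .
```

The second series shows that without the sign condition the point z = R on the circle of convergence need not be singular.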
Vivanti–Pringsheim theorem
[ "Mathematics" ]
170
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "Mathematical analysis stubs" ]
65,778,287
https://en.wikipedia.org/wiki/Snezhana%20Abarzhi
Snezhana I. Abarzhi (also known as Snejana I. Abarji) is an applied mathematician and theoretical physicist specializing in the dynamics of fluids and plasmas and their applications in nature and technology. Her research has revealed that instabilities elucidate dynamics of supernova blasts, and that supernovae explode more slowly and less turbulently than previously thought, changing the understanding of the mechanisms by which heavy atomic nuclei are formed in these explosions. Her works have found the mechanism of interface stabilization, the special self-similar class in interfacial mixing, and the fundamentals of Rayleigh-Taylor instabilities. Education and career Abarzhi earned bachelor's degrees in physics and applied mathematics and in molecular biology in 1987 from the Moscow Institute of Physics and Technology, and earned a master's degree in physics and applied mathematics there, summa cum laude, in 1990. She completed her doctorate in 1994 through the Landau Institute for Theoretical Physics and Kapitza Institute for Physical Problems of the Russian Academy of Sciences, supervised by Sergei I. Anisimov. Abarzhi held a position as a researcher for the Russian Academy of Sciences from 1994 to 1997 (on leave in 1997-2004). She came to the US in 1997 as a visiting professor at the University of North Carolina at Chapel Hill, and then in 1998 became an Alexander von Humboldt Fellow at the University of Bayreuth in Germany. In 1999 she took a research position at Stony Brook University. In 2002 she briefly moved to a research professorship at Osaka University before returning to the US as a senior fellow in the Center for Turbulence Research at Stanford University. In 2005 she became a research faculty member at the University of Chicago and in 2006 she added a regular-rank faculty position as an associate professor at the Illinois Institute of Technology. She also worked at Carnegie Mellon University from 2013 to 2016 before moving to the University of Western Australia as professor and chair of applied mathematics. Abarzhi is a member of the Committee on Scientific Publications of the American Physical Society, and an organizer of conferences and programs on non-equilibrium dynamics of interfaces and turbulent mixing and beyond. In 2020 Abarzhi was named a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Fluid Dynamics, "for deep and abiding work on the Rayleigh-Taylor and related instabilities, and for sustained leadership in that community". Selected publications Abarzhi SI, Hill DL, Williams KC, Li JT, Remington BA, Arnett WD 2023 Fluid dynamics mathematical aspects of supernova remnants. Phys. Fluids 35, 034106. https://doi.org/10.1063/5.0123930 Abarzhi SI, Sreenivasan KR 2022 Self-similar Rayleigh-Taylor mixing with accelerations varying in time and space. Proc. Natl. Acad. Sci. USA 119, e2118589119. https://doi.org/10.1073/pnas.2118589119 Ilyin DV, Abarzhi SI 2022 Interface dynamics under thermal heat flux, inertial stabilization and destabilizing acceleration. Springer Nat. Appl. Sci. 4, 197. https://doi.org/10.1007/s42452-022-05000-4 Meshkov EE, Abarzhi SI 2019 On Rayleigh-Taylor interfacial mixing. Fluid Dyn. Res. 51, 065502. 
https://dx.doi.org/10.1088/1873-7005/ab3e83, http://arxiv.org/abs/1901.04578 References Year of birth missing (living people) Living people 20th-century American mathematicians 20th-century American women mathematicians 21st-century American mathematicians 21st-century American women mathematicians Russian mathematicians Russian women mathematicians Fluid dynamicists Moscow Institute of Physics and Technology alumni Illinois Institute of Technology faculty Carnegie Mellon University faculty Academic staff of the University of Western Australia Fellows of the American Physical Society
Snezhana Abarzhi
[ "Chemistry" ]
849
[ "Fluid dynamicists", "Fluid dynamics" ]
65,781,271
https://en.wikipedia.org/wiki/Neuropeptide%20W
Neuropeptide W or preprotein L8 is a short human neuropeptide. Neuropeptide W acts as a ligand for two neuropeptide B/W receptors, NPBWR1 and NPBWR2, which belong to the GPCR family of alpha-helical transmembrane proteins. Structure There are two forms of neuropeptide W, whose precursor is encoded by the NPW gene. The 23-amino-acid form (neuropeptide W-23) is the one that activates the receptors, whereas the C-terminally extended form (neuropeptide W-30) is less effective. These isoforms have been demonstrated in different species such as rat, human, chicken, mouse and pig. The name of neuropeptide W derives from the tryptophan residues located at both the N-terminal and C-terminal ends of its two mature forms. Location Neuropeptide W was first identified in porcine hypothalamus in 2002. In humans, it is largely confined to neurons of the substantia nigra and the spinal cord, and is expressed at lower levels in neurons of the hippocampus, hypothalamus, amygdala, parietal cortex and cerebellum. It can also be found in some peripheral tissues such as the trachea, stomach, liver, kidney, prostate, uterus and ovary. Tissue distribution information is, however, still limited. So far, differences in neuropeptide W localization between the studied species (rat, mouse, chicken, pig) are slight, even though quantities differ between organs. Function Neuropeptide W in CNS Neuropeptide W in the central nervous system is implicated in feeding activity and energy metabolism, in the adrenal axis stress response, and in the regulation of neuroendocrine functions such as hormone release from the pituitary gland, although it is not considered an inhibitory or regulatory factor in the latter. Neuropeptide W may also be involved in autonomic regulation, pain sensation, emotions, anxiety and fear. Regulation of feeding behaviour and energy metabolism appears to be the primary function of the neuropeptide W signaling system. On the one hand, neuropeptide W regulates the endocrine signals aimed at the anterior hypophysis. This stimulates both the need for water (thirst) and the need for food (hunger). On the other hand, it plays a compensatory role in energy metabolism. Regarding the adrenal axis response to stress, it plays a relevant role as a messenger in brain networks that support activation of the HPA (hypothalamic–pituitary–adrenal) axis, which produces the stress response. An example of its neuroendocrine functions is the regulation of cortisol secretion through the activation or deactivation of neuropeptide B/W receptors. Moreover, neuropeptide W is found in an area that is connected with preautonomic centers in the brainstem and spinal cord. Because of this location, it may also affect some cardiovascular functions. Infusion of neuropeptide W has been shown to suppress food intake and body weight and to increase heat production and body temperature, consistent with its role as an endogenous catabolic signaling molecule. Neuropeptide W in peripheral tissues The function and physiological role of peripheral neuropeptide W are, however, not clearly known. References Neuropeptides G proteins Genes on human chromosome 17
Neuropeptide W
[ "Chemistry" ]
771
[ "G proteins", "Signal transduction" ]
65,781,618
https://en.wikipedia.org/wiki/Iodine%20nitrate
Iodine nitrate is a chemical with formula INO3. It is a covalent molecule with a structure of I–O–NO2. Preparation The compound was first produced by the reaction of mercury(II) nitrate and iodine in ether. Other nitrate salts and solvents can also be used. As a gas it is slightly unstable, decaying with a rate constant of −3.2×10⁻² s⁻¹. The possible formation of this chemical in the atmosphere and its ability to destroy ozone have been studied. Potential reactions in this context are: IONO2 → IO + NO2; IONO2 → I + NO3; I + O3 → IO + O2. References Nitrates Iodine compounds
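If the quoted gas-phase decay is treated as a simple first-order process — an assumption made here for illustration (suggested by the s⁻¹ units but not stated in the article) — the corresponding half-life follows from t½ = ln 2 / k:

```python
import math

k = 3.2e-2                     # assumed first-order rate constant, s^-1 (magnitude from the text)
half_life = math.log(2) / k    # t_1/2 = ln 2 / k for first-order decay
print(f"half-life ≈ {half_life:.1f} s")   # ≈ 21.7 s
```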
Iodine nitrate
[ "Chemistry" ]
146
[ "Oxidizing agents", "Nitrates", "Salts" ]
65,783,919
https://en.wikipedia.org/wiki/Taylor%E2%80%93von%20Neumann%E2%80%93Sedov%20blast%20wave
Taylor–von Neumann–Sedov blast wave (or sometimes referred to as Sedov–von Neumann–Taylor blast wave) refers to a blast wave induced by a strong explosion. The blast wave was described by a self-similar solution independently by G. I. Taylor, John von Neumann and Leonid Sedov during World War II. History G. I. Taylor was told by the British Ministry of Home Security that it might be possible to produce a bomb in which a very large amount of energy would be released by nuclear fission and asked to report the effect of such weapons. Taylor presented his results on June 27, 1941. Exactly at the same time, in the United States, John von Neumann was working on the same problem and he presented his results on June 30, 1941. It was said that Leonid Sedov was also working on the problem around the same time in the USSR, although Sedov never confirmed any exact dates. The complete solution was published first by Sedov in 1946. von Neumann published his results in August 1947 in the Los Alamos scientific laboratory report on , although that report was distributed only in 1958. Taylor got clearance to publish his results in 1949 and he published his works in two papers in 1950. In the second paper, Taylor calculated the energy of the atomic bomb used in the Trinity (nuclear test) using the similarity, just by looking at the series of blast wave photographs that had a length scale and time stamps, published by Julian E Mack in 1947. This calculation of energy caused, in Taylor's own words, 'much embarrassment' (according to Grigory Barenblatt) in US government circles since the number was then still classified although the photographs published by Mack were not. Taylor's biographer George Batchelor writes This estimate of the yield of the first atom bomb explosion caused quite a stir... G.I. was mildly admonished by the US Army for publishing his deductions from their (unclassified) photographs. Mathematical description Consider a strong explosion (such as nuclear bombs) that releases a large amount of energy in a small volume during a short time interval. This will create a strong spherical shock wave propagating outwards from the explosion center. The self-similar solution tries to describe the flow when the shock wave has moved through a distance that is extremely large when compared to the size of the explosive. At these large distances, the information about the size and duration of the explosion will be forgotten; only the energy released will have influence on how the shock wave evolves. To a very high degree of accuracy, then it can be assumed that the explosion occurred at a point (say the origin ) instantaneously at time . The shock wave in the self-similar region is assumed to be still very strong such that the pressure behind the shock wave is very large in comparison with the pressure (atmospheric pressure) in front of the shock wave , which can be neglected from the analysis. Although the pressure of the undisturbed gas is negligible, the density of the undisturbed gas cannot be neglected since the density jump across strong shock waves is finite as a direct consequence of Rankine–Hugoniot conditions. This approximation is equivalent to setting and the corresponding sound speed , but keeping its density non zero, i.e., . The only parameters available at our disposal are the energy and the undisturbed gas density . The properties behind the shock wave such as are derivable from those in front of the shock wave. The only non-dimensional combination available from and is . 
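For orientation, the result this dimensional argument leads to can be stated compactly. The relations below are quoted as the standard textbook Sedov–Taylor scaling and strong-shock Rankine–Hugoniot limits, as background rather than as a reconstruction of the article's own notation:

```latex
% Shock radius and speed from dimensional analysis
% (E = released energy, rho_0 = ambient density, beta = O(1) constant):
R(t) = \beta \left(\frac{E\,t^{2}}{\rho_{0}}\right)^{1/5}, \qquad
\dot{R}(t) = \frac{2}{5}\,\frac{R(t)}{t}.

% Strong-shock (Rankine-Hugoniot) values immediately behind the front,
% for an ideal gas with specific-heat ratio gamma, in the laboratory frame:
\frac{\rho_{2}}{\rho_{0}} = \frac{\gamma+1}{\gamma-1}, \qquad
u_{2} = \frac{2}{\gamma+1}\,\dot{R}, \qquad
p_{2} = \frac{2}{\gamma+1}\,\rho_{0}\,\dot{R}^{2}.

% Cylindrical (line-source) analogue, with E_1 the energy released per unit length:
R(t) \propto \left(\frac{E_{1}\,t^{2}}{\rho_{0}}\right)^{1/4}.
```

These are the relations used below: the density jump behind the shock is constant in time, while the pressure and velocity decay as the shock decelerates.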
It is reasonable to assume that the evolution in and of the shock wave depends only on the above variable. This means that the shock wave location itself will correspond to a particular value, say , of this variable, i.e., The detailed analysis that follows will, at the end, reveal that the factor is quite close to unity, thereby demonstrating (for this problem) the quantitative predictive capability of the dimensional analysis in determining the shock-wave location as a function of time. The propagation velocity of the shock wave is With the approximation described above, Rankine–Hugoniot conditions determines the gas velocity immediately behind the shock front , and for an ideal gas as follows where is the specific heat ratio. Since is a constant, the density immediately behind the shock wave is not changing with time, whereas and decrease as and , respectively. Self-similar solution The gas motion behind the shock wave is governed by Euler equations. For an ideal polytropic gas with spherical symmetry, the equations for the fluid variables such as radial velocity , density and pressure are given by At , the solutions should approach the values given by the Rankine-Hugoniot conditions defined in the previous section. The variable pressure can be replaced by the sound speed since pressure can be obtained from the formula . The following non-dimensional self-similar variables are introduced, . The conditions at the shock front becomes Substituting the self-similar variables into the governing equations will lead to three ordinary differential equations. Solving these differential equations analytically is laborious, as shown by Sedov in 1946 and von Neumann in 1947. G. I. Taylor integrated these equations numerically to obtain desired results. The relation between and can be deduced directly from energy conservation. Since the energy associated with the undisturbed gas is neglected by setting , the total energy of the gas within the shock sphere must be equal to . Due to self-similarity, it is clear that not only the total energy within a sphere of radius is constant, but also the total energy within a sphere of any radius (in dimensional form, it says that total energy within a sphere of radius that moves outwards with a velocity must be constant). The amount of energy that leaves the sphere of radius in time due to the gas velocity is , where is the specific enthalpy of the gas. In that time, the radius of the sphere increases with the velocity and the energy of the gas in this extra increased volume is , where is the specific energy of the gas. Equating these expressions and substituting and that is valid for ideal polytropic gas leads to The continuity and energy equation reduce to Expressing and as a function of only using the relation obtained earlier and integrating once yields the solution in implicit form, where The constant that determines the shock location can be determined from the conservation of energy to obtain For air, and . The solution for is shown in the figure by graphing the curves of , , and where is the temperature. Asymptotic behavior near the central region The asymptotic behavior of the central region can be investigated by taking the limit . From the figure, it can be observed that the density falls to zero very rapidly behind the shock wave. 
The entire mass of the gas which was initially spread out uniformly in a sphere of radius is now contained in a thin layer behind the shock wave, that is to say, all the mass is driven outwards by the acceleration imparted by the shock wave. Thus, most of the region is basically empty. The pressure ratio also drops rapidly to attain the constant value . The temperature ratio follows from the ideal gas law; since density ratio decays to zero and the pressure ratio is constant, the temperature ratio must become infinite. The limiting form for the density is given as follows Remember that the density is time-independent whereas which means that the actual pressure is in fact time dependent. It becomes clear if the above forms are rewritten in dimensional units, The velocity ratio has the linear behavior in the central region, whereas the behavior of the velocity itself is given by Final stage of the blast wave As the shock wave evolves in time, its strength decreases. The self-similar solution described above breaks down when becomes comparable to (more precisely, when ). At this later stage of the evolution, (and consequently ) cannot be neglected. This means that the evolution is not self-similar, because one can form a length scale and a time scale to describe the problem. The governing equations are then integrated numerically, as was done by H. Goldstine and John von Neumann, Brode, and Okhotsimskii et al. Furthermore, in this stage, the compressing shock wave is necessarily followed by a rarafaction wave behind it; the waveform is empirically fiited by the Friedlander waveform. Cylindrical line explosion The analogous problem in cylindrical geometry corresponding to an axisymmetric blast wave, such as that produced in a lightning, can be solved analytically. This problem was solved independently by Leonid Sedov, A. Sakurai and S. C. Lin. In cylindrical geometry, the non-dimensional combination involving the radial coordinate (this is different from the in the spherical geometry), the time , the total energy released per unit axial length (this is different from the used in the previous section) and the ambient density is found to be See also Guderley–Landau–Stanyukovich problem Zeldovich–Taylor flow Becker–Morduchow–Libby solution References Fluid dynamics Equations of fluid dynamics
Taylor–von Neumann–Sedov blast wave
[ "Physics", "Chemistry", "Engineering" ]
1,845
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
65,784,120
https://en.wikipedia.org/wiki/Louisenthal%20Paper%20Mill
The Louisenthal Paper Mill (German: Papierfabrik Louisenthal, PL) is a German manufacturer of security paper. Founded in 1878, the company has been a subsidiary of Giesecke+Devrient, a German company best known as a manufacturer of banknotes, since 1964. History In 1878, a paper mill was established in Gmund am Tegernsee. Since 1964, the company has been a subsidiary of Giesecke+Devrient. The company owns a second factory at Königstein, Saxony, acquired in 1991 after the German reunification. Manufacturing The substrate bears essential security features of banknotes to protect against counterfeiting. In the early days of banknote production, security paper was equipped with real watermarks and security threads. In 1994 the world's first banknote paper with hologram stripes was produced in Louisenthal (the 2000 leva banknote for Bulgaria). After plastic banknotes failed to establish themselves on the market, the mill introduced a banknote in 2008 that combined the advantages of paper and polymer banknotes. In 2019, the company created two new technologies for a new set of banknotes issued by the Bulgarian National Bank, which won the Best New Banknote award given by the High Security Printing EMEA Conference in Malta. As of 2020, the Louisenthal paper mill claims to be a leading supplier of advanced film elements as security features that produce color shifts and three-dimensional effects depending on the viewing angle. Production Employing a workforce of around 1,100, 320 of which are located at the Königstein site near Dresden, the paper mill produces about 13,000 tons of paper per year. References Banknotes Paper products Manufacturing
Louisenthal Paper Mill
[ "Engineering" ]
354
[ "Manufacturing", "Mechanical engineering" ]
41,468,418
https://en.wikipedia.org/wiki/Evolution%20of%20photosynthesis
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. It is believed that the pigments used for photosynthesis initially were used for protection from the harmful effects of light, particularly ultraviolet light. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles were the first photochemically active pigments, the photosynthetic organisms were anaerobic and relied on without relying on H2 emitted by alkaline hydrothermal vents. The divergence of anoxygenic photosynthetic organisms at the photic zone could have led to the ability to strip electrons from more efficiently under ultraviolet radiation. There is geochemical evidence that suggests that anaerobic photosynthesis emerged 3.3 to 3.5 billion years ago. The organisms later developed a Chlorophyll F synthase. They could have also stripped electrons from soluble metal ions although it is unknown. The first oxygenic photosynthetic organisms are proposed to be -dependent. It is also suggested photosynthesis originated under sunlight, using emitted by volcanoes and hydrothermal vents which ended the need for scarce H2 emitted by alkaline hydrothermal vents. Oxygenic photosynthesis uses water as an electron donor, which is oxidized to molecular oxygen () in the photosynthetic reaction center. The biochemical capacity for oxygenic photosynthesis evolved in a common ancestor of extant cyanobacteria. The first appearance of free oxygen in the atmosphere is sometimes referred to as the oxygen catastrophe. The geological record indicates that this transforming event took place during the Paleoproterozoic era at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. 
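The overall chemistry being discussed can be summarized by the usual textbook net equations, quoted here as standard background rather than taken from this article; H2A stands for a generic electron/hydrogen donor such as hydrogen sulfide in the anoxygenic case:

```latex
% Anoxygenic photosynthesis (van Niel's general scheme):
\mathrm{CO_{2} + 2\,H_{2}A \;\xrightarrow{\;\text{light}\;}\; [CH_{2}O] + H_{2}O + 2\,A}

% Oxygenic photosynthesis, with water as the electron donor:
\mathrm{6\,CO_{2} + 6\,H_{2}O \;\xrightarrow{\;\text{light}\;}\; C_{6}H_{12}O_{6} + 6\,O_{2}}
```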
A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of blue-greens. Cyanobacteria remained principal primary producers throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined blue-greens as major primary producers on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–65 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did primary production in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic. Timeline of photosynthesis on Earth Source: Symbiosis and the origin of chloroplasts Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges and sea anemones. It is presumed that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks Elysia viridis and Elysia chlorotica also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies. This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins that they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosomes, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts still possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those in cyanobacteria. DNA in chloroplasts codes for redox proteins such as photosynthetic reaction centers. The CoRR Hypothesis proposes that this Co-location is required for Redox Regulation. Evolution of photosynthetic pathways In its simplest form, photosynthesis is adding water to to produce sugars and oxygen, but a complex chemical pathway is involved, facilitated along the way by a range of enzymes and co-enzymes. The enzyme RuBisCO is responsible for "fixing"  – that is, it attaches it to a carbon-based molecule to form a sugar, which can be used by the plant, releasing an oxygen molecule along the way. However, the enzyme is notoriously inefficient, and just as effectively will also fix oxygen instead of in a process called photorespiration. This is energetically costly as the plant has to use energy to turn the products of photorespiration back into a form that can react with . Concentrating carbon The C4 metabolic pathway is a valuable recent evolutionary innovation in plants, involving a complex set of adaptive changes to physiology and gene expression patterns. 
About 7600 species of plants use carbon fixation, which represents about 3% of all terrestrial species of plants. All these 7600 species are angiosperms. C4 plants evolved carbon concentrating mechanisms. These work by increasing the concentration of around RuBisCO, thereby facilitating photosynthesis and decreasing photorespiration. The process of concentrating around RuBisCO requires more energy than allowing gases to diffuse, but under certain conditions – i.e. warm temperatures (>25 °C), low concentrations, or high oxygen concentrations – pays off in terms of the decreased loss of sugars through photorespiration. One type of C4 metabolism employs a so-called Kranz anatomy. This transports through an outer mesophyll layer, via a range of organic molecules, to the central bundle sheath cells, where the is released. In this way, is concentrated near the site of RuBisCO operation. Because RuBisCO is operating in an environment with much more than it otherwise would be, it performs more efficiently. In C4 photosynthesis, carbon is fixed by an enzyme called PEP carboxylase, which, like all enzymes involved in C4 photosynthesis, originated from non-photosynthetic ancestral enzymes. A second mechanism, CAM photosynthesis, is a carbon fixation pathway that evolved in some plants as an adaptation to arid conditions. The most important benefit of CAM to the plant is the ability to leave most leaf stomata closed during the day. This reduces water loss due to evapotranspiration. The stomata open at night to collect , which is stored as the four-carbon acid malate, and then used during photosynthesis during the day. The pre-collected is concentrated around the enzyme RuBisCO, increasing photosynthetic efficiency. More is then harvested from the atmosphere when stomata open, during the cool, moist nights, reducing water loss. CAM has evolved convergently many times. It occurs in 16,000 species (about 7% of plants), belonging to over 300 genera and around 40 families, but this is thought to be a considerable underestimate. It is found in quillworts (relatives of club mosses), in ferns, and in gymnosperms, but the great majority of plants using CAM are angiosperms (flowering plants). Evolutionary record These two pathways, with the same effect on RuBisCO, evolved a number of times independently – indeed, C4 alone arose 62 times in 18 different plant families. A number of 'pre-adaptations' seem to have paved the way for C4, leading to its clustering in certain clades: it has most frequently developed in plants that already had features such as extensive vascular bundle sheath tissue. Whole-genome and individual gene duplication are also associated with C4 evolution. Many potential evolutionary pathways resulting in the phenotype are possible and have been characterised using Bayesian inference, confirming that non-photosynthetic adaptations often provide evolutionary stepping stones for the further evolution of . The C4 construction is most famously used by a subset of grasses, while CAM is employed by many succulents and cacti. The trait appears to have emerged during the Oligocene, around ; however, they did not become ecologically significant until the Miocene, . Remarkably, some charcoalified fossils preserve tissue organised into the Kranz anatomy, with intact bundle sheath cells, allowing the presence C4 metabolism to be identified without doubt at this time. Isotopic markers are used to deduce their distribution and significance. 
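For reference, the carbon-isotope ratio discussed in the next paragraph is conventionally reported in the per-mil δ13C notation, defined (a standard definition, not spelled out in the article) relative to a reference standard:

```latex
\delta^{13}\mathrm{C} \;=\;
\left(
\frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{sample}}}
     {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{standard}}} - 1
\right) \times 1000\ \text{‰}
```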
C3 plants preferentially use the lighter of two isotopes of carbon in the atmosphere, 12C, which is more readily involved in the chemical pathways involved in its fixation. Because C4 metabolism involves a further chemical step, this effect is accentuated. Plant material can be analysed to deduce the ratio of the heavier 13C to 12C. This ratio is denoted . C3 plants are on average around 14‰ (parts per thousand) lighter than the atmospheric ratio, while C4 plants are about 28‰ lighter. The of CAM plants depends on the percentage of carbon fixed at night relative to what is fixed in the day, being closer to C3 plants if they fix most carbon in the day and closer to C4 plants if they fix all their carbon at night. It is troublesome procuring original fossil material in sufficient quantity to analyse the grass itself, but fortunately there is a good proxy: horses. Horses were globally widespread in the period of interest, and browsed almost exclusively on grasses. There's an old phrase in isotope palæontology, "you are what you eat (plus a little bit)" – this refers to the fact that organisms reflect the isotopic composition of whatever they eat, plus a small adjustment factor. There is a good record of horse teeth throughout the globe, and their has been measured. The record shows a sharp negative inflection around , during the Messinian, and this is interpreted as the rise of C4 plants on a global scale. When is C4 an advantage? While C4 enhances the efficiency of RuBisCO, the concentration of carbon is highly energy intensive. This means that C4 plants only have an advantage over C3 organisms in certain conditions: namely, high temperatures and low rainfall. C4 plants also need high levels of sunlight to thrive. Models suggest that, without wildfires removing shade-casting trees and shrubs, there would be no space for C4 plants. But, wildfires have occurred for 400 million years – why did C4 take so long to arise, and then appear independently so many times? The Carboniferous period (~) had notoriously high oxygen levels – almost enough to allow spontaneous combustion – and very low , but there is no C4 isotopic signature to be found. And there doesn't seem to be a sudden trigger for the Miocene rise. During the Miocene, the atmosphere and climate were relatively stable. If anything, increased gradually from before settling down to concentrations similar to the Holocene. This suggests that it did not have a key role in invoking C4 evolution. Grasses themselves (the group which would give rise to the most occurrences of C4) had probably been around for 60 million years or more, so had had plenty of time to evolve C4, which, in any case, is present in a diverse range of groups and thus evolved independently. There is a strong signal of climate change in South Asia; increasing aridity – hence increasing fire frequency and intensity – may have led to an increase in the importance of grasslands. However, this is difficult to reconcile with the North American record. It is possible that the signal is entirely biological, forced by the fire- and grazer- driven acceleration of grass evolution – which, both by increasing weathering and incorporating more carbon into sediments, reduced atmospheric levels. Finally, there is evidence that the onset of C4 from is a biased signal, which only holds true for North America, from where most samples originate; emerging evidence suggests that grasslands evolved to a dominant state at least 15Ma earlier in South America. 
See also Photorespiration Evolution of plants References Evolutionary biology Photosynthesis
Evolution of photosynthesis
[ "Chemistry", "Biology" ]
2,985
[ "Biochemistry", "Evolutionary biology", "Photosynthesis" ]
41,469,136
https://en.wikipedia.org/wiki/Chloroxymorphamine
Chloroxymorphamine is an opioid and a derivative of oxymorphone which binds irreversibly as an agonist to the μ-opioid receptor. See also β-Chlornaltrexamine Naloxazone Oxymorphazone References Opioids Alkylating agents Mu-opioid receptor agonists Irreversible agonists Nitrogen mustards Chloroethyl compounds Tertiary alcohols Cyclohexanols
Chloroxymorphamine
[ "Chemistry" ]
102
[ "Alkylating agents", "Reagents for organic chemistry" ]
41,469,204
https://en.wikipedia.org/wiki/Epistemological%20Letters
Epistemological Letters (French: Lettres Épistémologiques) was a hand-typed, mimeographed "underground" newsletter about quantum physics that was distributed to a private mailing list, described by the physicist and Nobel laureate John Clauser as a "quantum subculture", between 1973 and 1984. Distributed by a Swiss foundation, the newsletter was created because mainstream academic journals were reluctant to publish articles about the philosophy of quantum mechanics, especially anything that implied support for ideas such as action at a distance. Thirty-six or thirty-seven issues of Epistemological Letters appeared, each between four and eighty-nine pages long. Several well-known scientists published their work there, including the physicist John Bell, the originator of Bell's theorem. According to John Clauser, much of the early work on Bell's theorem was published only in Epistemological Letters. Interpretations of quantum physics According to the Irish physicist Andrew Whitaker, a powerful group of physicists centred on Niels Bohr, Wolfgang Pauli and Werner Heisenberg made clear that "there was no place in physics – no jobs in physics! – for anybody who dared to question the Copenhagen interpretation" (Bohr's interpretation) of quantum theory. John Clauser writes that any inquiry into the "wonders and peculiarities" of quantum mechanics and quantum entanglement that went outside the "party line" was prohibited, in what he argues amounted to an "evangelical crusade". Samuel Goudsmit, editor of the prestigious Physical Review and Physical Review Letters until he retired in 1974, imposed a formal ban on the philosophical debate, issuing instructions to referees that they should feel free to reject material that even hinted at it. Alternative publications Articles questioning the mainstream position were therefore distributed in alternative publications, and Epistemological Letters became one of the main conduits. The newsletter was sent out by the L'Institut de la Méthode of the Association Ferdinand Gonseth, which had been established in honour of the philosopher Ferdinand Gonseth. The newsletter described itself as "an open and informal journal allowing confrontation and ripening of ideas before publishing in some adequate journal." According to Clauser, it announced that the usual stigma against discussing certain ideas, such as hidden-variable theories, was to be absent. The newsletter's editors included Abner Shimony. Several eminent physicists published their material in Epistemological Letters, including John Bell, the originator of Bell's theorem. Clauser writes that much of the early work on Bell's theorem was published only in Epistemological Letters. Bell's paper, "The Theory of Local Beables" (beable, as opposed to observable, referring to something that exists independently of any observer), appeared there in March 1976. Abner Shimony, John Clauser and Michael Horne published responses to it, also in the Letters. Henry Stapp was another prominent physicist who wrote for the Letters. H. Dieter Zeh published a paper in the Letters on the many-minds interpretation of quantum mechanics in 1981. Digitization of the Epistemological Letters Don Howard, Professor of Philosophy at the University of Notre Dame, was a Ph.D. student of Abner Shimony, one of the editors of the newsletter; as such, he had an almost complete set. 
In collaboration with Sebastian Murgueitio Ramirez (then his graduate student, now assistant professor of philosophy at Purdue University), the set was completed and digitized in 2018–2019, in order to make this very rare document available to the community of historians and philosophers of physics. The entire set is available to the public at the Epistemological Letters digital archive, and the original newsletter is in Special Collections at the University Library. See also Fundamental Fysiks Group Physics Physique Физика References Further reading Friere, Olival (2003). "A Story Without an Ending: The Quantum Physics Controversy 1950–1970", Science & Education, 12, pp. 573–586. Gusterson, Hugh (18 August 2011). "Physics: Quantum outsiders", Nature, 476, pp. 278–279. "Epistemological Letters", digital archive at the University of Notre Dame. External links Index to the Letters at Information Philosopher Contemporary philosophical literature Defunct journals English-language journals Interpretations of quantum mechanics Quantum mind Philosophy of physics Philosophy of science literature Physics journals Academic journals established in 1973 Publications disestablished in 1984 Underground press Irregular journals
Epistemological Letters
[ "Physics" ]
924
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Quantum mechanics", "Quantum mind", "Interpretations of quantum mechanics" ]
61,972,259
https://en.wikipedia.org/wiki/Photonic%20topological%20insulator
Photonic topological insulators are artificial electromagnetic materials that support topologically non-trivial, unidirectional states of light. Photonic topological phases are classical electromagnetic wave analogues of electronic topological phases studied in condensed matter physics. Similar to their electronic counterparts, they can provide robust unidirectional channels for light propagation. The field that studies these phases of light is referred to as topological photonics. History Topological order in solid state systems has been studied in condensed matter physics since the discovery of the integer quantum Hall effect. But topological matter attracted considerable interest from the physics community after the proposals for possible observation of symmetry-protected topological phases (or the so-called topological insulators) in graphene, and experimental observation of a 2D topological insulator in CdTe/HgTe/CdTe quantum wells in 2007. In 2008, Haldane and Raghu proposed that unidirectional electromagnetic states analogous to (integer) quantum Hall states can be realized in nonreciprocal magnetic photonic crystals. This prediction was first realized in 2009 in the microwave frequency regime. This was followed by the proposals for analogous quantum spin Hall states of electromagnetic waves that are now known as photonic topological insulators. It was later found that topological electromagnetic states can exist in continuous media as well; theoretical and numerical studies have confirmed the existence of topological Langmuir-cyclotron waves in continuous magnetized plasmas. Platforms Photonic topological insulators are designed using various photonic platforms including optical waveguide arrays, coupled ring resonators, bi-anisotropic meta-materials, and photonic crystals. More recently, they have been realized in 2D dielectric and plasmonic meta-surfaces. Despite the theoretical prediction, no experimental demonstration of a photonic topological insulator in continuous media has been reported. Chern number As an important figure of merit for characterizing the quantized collective behaviors of the wavefunction, the Chern number is the topological invariant of quantum Hall insulators. The Chern number also identifies the topological properties of photonic topological insulators (PTIs), and is thus of crucial importance in PTI design. A full-wave finite-difference frequency-domain (FDFD) MATLAB program for computing the Chern number has been written. Recently, the finite-difference method has been extended to analyze the topological invariant of non-Hermitian topological dielectric photonic crystals by first-principles Wilson-loop calculations. All MATLAB codes can be found on GitHub. See also Symmetry-protected topological order Metamaterial References Photonics Electromagnetism
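For a sense of how such a Chern-number computation works in practice, the sketch below implements the standard Fukui–Hatsugai–Suzuki lattice method in Python for a generic two-band Bloch Hamiltonian. It is not the MATLAB program referred to above; the Qi–Wu–Zhang-type test Hamiltonian, the grid size and the function names are all illustrative assumptions.

```python
import numpy as np

def chern_number(h_of_k, n_grid=60, band=0):
    """Fukui-Hatsugai-Suzuki lattice computation of one band's Chern number."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dim = h_of_k(0.0, 0.0).shape[0]
    u = np.empty((n_grid, n_grid, dim), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h_of_k(kx, ky))   # eigenvalues ascending
            u[i, j] = vecs[:, band]                    # Bloch eigenvector of chosen band

    def link(i, j, di, dj):
        # U(1) link variable between neighbouring grid points (only its phase is used).
        return np.vdot(u[i, j], u[(i + di) % n_grid, (j + dj) % n_grid])

    c = 0.0
    for i in range(n_grid):
        for j in range(n_grid):
            # Berry flux through one plaquette, taken on the principal branch.
            w = link(i, j, 1, 0) * link((i + 1) % n_grid, j, 0, 1) \
                / (link(i, (j + 1) % n_grid, 1, 0) * link(i, j, 0, 1))
            c += np.angle(w)
    return c / (2.0 * np.pi)

def qwz(kx, ky, m=1.0):
    """Qi-Wu-Zhang-type two-band model, used here only as a test case."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

print(round(chern_number(qwz)))   # expect |C| = 1 for 0 < m < 2 in this test model
```

The method is gauge-invariant and converges to an integer already on coarse grids, which is why lattice Wilson-loop/plaquette schemes of this kind are the usual numerical route in both electronic and photonic band-structure codes.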
Photonic topological insulator
[ "Physics" ]
540
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
60,785,404
https://en.wikipedia.org/wiki/Nuclear%20acoustic%20resonance
Nuclear acoustic resonance is a phenomenon closely related to nuclear magnetic resonance. It involves utilizing ultrasound and ultrasonic acoustic waves of frequencies between 1 MHz and 100 MHz to determine the acoustic radiation resulting from interactions of particles that experience nuclear spins as a result of magnetic and/or electric fields. The principles of nuclear acoustic resonance are often compared with nuclear magnetic resonance, specifically its usage in conjunction with nuclear magnetic resonance systems for spectroscopy and related imaging methodologies. Because of this, nuclear acoustic resonance can also be used for the imaging of objects. However, in most cases, nuclear acoustic resonance requires the presence of nuclear magnetic resonance to excite nuclear spins within specimens in order for the absorption of acoustic waves to occur. Experimental and theoretical investigations of the absorption of acoustic radiation by different materials, ranging from metals to subatomic particles, have shown that nuclear acoustic resonance has specific uses in fields other than imaging. Nuclear acoustic resonance was first observed experimentally in 1963 by Alers and Fleury in solid aluminum. History Nuclear acoustic resonance was first discussed in 1952 when Semen Altshuler proposed that the acoustic coupling to nuclear spins should be visible. This was also proposed by Alfred Kastler around the same time. Drawing on his specialization in the field, Altshuler theorized the nuclear spin-acoustic phonon interactions, which resulted in experiments in 1955. The experiments led physicists to suggest that nuclear acoustic resonance coupling in metals could be formulated and observed, and modern physicists continue to discuss the many properties of nuclear acoustic resonance, although it is not a widely known concept. Nuclear acoustic resonance in objects had been theorized and predicted by many physicists, but it was not until 1963 that the first observation of the phenomenon occurred, in solid aluminum, followed by observation of its dispersion in 1973 and, subsequently, the first experimental nuclear acoustic resonance in liquid gallium in 1975. Acoustic spin resonance had also been observed by Bolef and Menes in 1966 in samples of indium antimonide, where nuclear spins were shown to absorb acoustic energy from the sample. Theory of nuclear acoustic resonance Nuclear Spin and Acoustic Radiation Nuclei possess spin, and the associated magnetic and electric properties differ between the nuclei of different atoms. Commonly this spin is utilized within the field of nuclear magnetic resonance, where an external RF (or ultra-high frequency range) magnetic field is used to excite and resonate with the nuclear spins within the system. This in turn allows the absorption or dispersion of electromagnetic radiation to occur, and allows magnetic resonance imaging equipment to detect signals and produce images. In nuclear acoustic resonance, however, transitions between the energy levels that determine the spin orientation under internal or external fields are driven by acoustic radiation. As the acoustic waves used are often between frequencies of 1 MHz and 100 MHz, they are usually characterized as ultrasound or ultrasonic (sound at frequencies above the audible range of about 20 kHz).
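The frequency-matching condition at work here is the usual one from magnetic resonance; the relation below is quoted as standard background rather than from the article, with γ the nuclear gyromagnetic ratio and B0 the static field:

```latex
\omega_{0} = \gamma B_{0}, \qquad
\nu_{\text{acoustic}} \approx \frac{\omega_{0}}{2\pi}
```

Acoustic energy is absorbed when the sound frequency matches a nuclear spin transition frequency of this kind; with quadrupolar (electric) coupling, transitions near twice this frequency can in general also be driven, which is one reason acoustic and magnetic resonance spectra are compared directly.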
Comparison with Nuclear Magnetic Resonance Similar to nuclear magnetic resonance, both phenomena introduces and utilizes external sources such as a DC magnetic field or different frequencies, and results from both methods produce similar data sets and trends in different variables. However, there are distinct differences in the methodologies of the two concepts. Nuclear acoustic resonance involves inducing internal spin-dependent interactions while nuclear magnetic resonance denotes interactions with external magnetic fields. Due to this, nuclear acoustic resonance is not solely dependent on nuclear magnetic resonance, and can be operated independently. Such cases where nuclear acoustic resonance is a better substitute for nuclear magnetic resonance include resonance in metals where electromagnetic waves can be difficult to penetrate and resonate, such as amorphous metals and alloys, while acoustic waves can easily pass through. However, the suitability for using nuclear acoustic resonance or nuclear magnetic resonance is reliant on the material to be used in order to achieve the most efficient and evident results. Physics of Nuclear Acoustic Resonance Nuclear acoustic resonance implements physics from both nuclear magnetic resonance and acoustics, involving the use of laws of quantum mechanics to derive theory on acoustic resonance in objects with nuclei that have a nonzero angular momentum (I), with its magnitude given by . In elements where , the characteristic of the nuclei spin also includes electric moments, also known as the electric quadrupole moment (denoted as ) for the weakest electric moment. This moment () influences the electric field gradients within the nucleus as a result of surrounding charges relative to the nucleus. In effect, the results of nuclear magnetic resonance used to induce nuclear acoustic resonance is affected. By utilizing the magnetic spin of nuclei under RF magnetic fields, and their spin-lattice relaxation properties after excitement from the external field to higher energy states, it is possible for acoustic waves to interact with nuclear spins, which often involves externally generated phonon. However, interactions of acoustic waves with nuclear spins do not guarantee the observation of acoustic resonance in objects. During the interactions, the acoustic waves experience a slight change in magnitude caused by the absorption by the object under nuclear spin, and the measurement of the change is crucial to observe and detect nuclear acoustic resonance in the object. Hence due to the difficulties analyzing nuclear acoustic resonance, it is only observed indirectly. However, as further propositions are made, ultrasonic pulse-echo techniques are introduced to detect changes in acoustic attenuation in specimens during experiments due to its capability of detecting changes in solids around 1 part in , which is capable of detecting background attenuation, although not for nuclear spin-phonon coupling, in which has attenuation coefficients from to dB/cm. Hence a combination of a continuous wave (CW) ultrasonic composite-resonator technique and nuclear magnetic resonance techniques is required to actually detect nuclear acoustic resonance. Nuclear Acoustic Resonance in Metals Coherent or incoherent generated phonon entice the nuclear spins in nuclear acoustic resonance processes, and as a result is compared with the direct spin-lattice relaxation mechanism. 
Due to this, spins are de-excited through interactions with resonant thermal phonons at low frequencies, an effect that is often considered insignificant, certainly when compared with the indirect or Raman process, in which multiple phonons are involved. However, because direct spin-lattice relaxation in solids at a given temperature involves only a small fraction of the lattice vibration spectrum, it was proposed that solids could be subjected to acoustic energy using ultrasound with an energy density 10^10 to 10^12 times greater than that of the incoherent thermal phonons. From this theory, it was predicted that observations of nuclear spin could be achieved at high temperatures using nuclear acoustic resonance principles and techniques, unlike normal circumstances in which they are only visible at low temperatures. The first direct observation of nuclear acoustic resonance occurred in 1963 with samples of aluminum under an applied magnetic field, which created an electromagnetic field that only minimally affected the properties of the sound waves being used, specifically their velocity and attenuation. The experimental analysis showed that the effect of the external magnetic field on velocity and attenuation was proportional to its square, which allowed the acoustic attenuation coefficient to be calculated for any nuclear spin system undergoing absorption of acoustic energy; it is characterized as , where , with being the incident acoustic power per unit area, determined by the density of the metal, the velocity of the propagated sound wave, and the peak value of the strain. Furthermore, the power per unit volume absorbed by the system undergoing nuclear spin transitions is characterized by an expression in which N is the number of nuclear spins per unit volume of the metal, v is the frequency, and the remaining factor is the magnetic dipole coupling value. This formula does not, however, account for the effect of eddy currents induced in the metal by the magnetic fields. Nevertheless, the experimental observation of nuclear acoustic resonance in aluminum prompted further investigations in the field, such as studies of single crystals of metals with weak quadrupole moments and nuclear spins of 1/2. Nuclear Acoustic Resonance in Liquids Due to the different properties of liquids compared with solids, it is typically impossible to detect nuclear acoustic resonance in liquids because of the difficulties of inducing resonance in them. In solids, the spin transitions of nuclear acoustic resonance are induced by two different coupling mechanisms. Objects in the liquid state, however, are strongly affected by their thermal properties, which also influence the dynamic electric field gradient, making it nearly impossible to induce nuclear acoustic resonance in liquids via this coupling method. Hence, in the first experimental attempt to observe nuclear acoustic resonance in a liquid sample, a metallic specimen was used as the object of interest. Further experimentation led to the use of external aids, such as piezoelectric nanoparticles, to detect nuclear acoustic resonance in liquids, particularly in fluids. 
In the first successful experimental investigation of nuclear acoustic resonance in a liquid, a coherent electromagnetic wave inside the metal sample was produced by sound waves generated under external DC magnetic fields surrounding the metallic object; the generated sound wave resonates with the nuclear spins of the object, allowing nuclear acoustic resonance to be observed as theoretically predicted. The theoretical predictions were confirmed when samples of liquid gallium were observed and measured. From this experimental observation, it was proposed that nuclear acoustic resonance in liquid metals requires magnetic dipole interactions because of the properties of liquids, and that this creates a dependence on the distance between particles in the liquid metal rather than on the ultrasonic displacement field, as is the case in solids. Because of this, and because the total displacement field for the generated electromagnetic field is the superposition of the individual displacement fields, the electromagnetic field can be modeled as a sum of coherent and incoherent parts by way of Maxwell's equations. Unterhorst, Muller, and Schanz thus established that nuclear acoustic resonance in liquid metals can be achieved and observed if the diffusion length during the relaxation time is small compared with the ultrasonic wavelength of the sound wave. Imaging By propagating ultrasound acoustic waves onto objects such as patients, imaging is possible when resonance is achieved. The result is then computed by a system of equipment that combines techniques and concepts from both ultrasound and magnetic resonance imaging to produce images for medical purposes. However, because of the specific requirements for attaining nuclear acoustic resonance and the characteristics of ultrasound and magnetic resonance imaging, imaging via nuclear acoustic resonance, while achievable, faces experimental limitations. Typical ultrasound imaging techniques can detect acoustic attenuation differences of approximately 1 part in 1000, which is not within the detection capability required for nuclear spin systems, whose acoustic attenuation coefficients range from to dB/cm. Harmonic Correlation Although experimental nuclear acoustic resonance techniques can achieve acoustic resonance in objects such as metals, they are not a viable option for medical imaging, although they may be useful for spectroscopy of non-organic compounds. Hence the concept of harmonic correlation was introduced. It provides a new method of obtaining, amplifying, and analyzing acoustic signals, enhancing the sensitivity of the detection technique by converting broadband signals into narrow-band signals for analysis. Harmonic correlation, in general, determines the correlation between the amplitude functions of two harmonically related narrow-band signals directed towards a patient, under the assumption that they originate from the same source, so that the processing algorithm that collects and combines the data can boost the sensitivity of the signal detection. Harmonic correlation thus clarifies the consequences of the absorption process of the induced nuclear spin–phonon coupling; however, the process is very complicated and requires rigorous treatment of the collected data. 
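As a rough numerical illustration of the quantities named in the metals discussion above (incident acoustic power per unit area expressed through the density, the sound velocity and the peak strain), the short Python sketch below evaluates the commonly used plane-wave relation P = ½ ρ v^3 ε^2. The formula and the aluminum figures are textbook-order assumptions added here for illustration, not values taken from the experiments cited in this article.

# Illustrative estimate of the incident acoustic power per unit area,
# using the plane-wave intensity relation P = 0.5 * rho * v**3 * strain**2.
# All numbers are assumed, order-of-magnitude values.
rho = 2700.0        # density of aluminum, kg/m^3
v = 6400.0          # longitudinal sound velocity in aluminum, m/s (approximate)
strain = 1.0e-6     # assumed peak acoustic strain (dimensionless)

P = 0.5 * rho * v**3 * strain**2    # W/m^2
print(f"incident acoustic power per unit area ~ {P:.0f} W/m^2")   # roughly 3.5e2 W/m^2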
See also Nuclear magnetic resonance Ultrasound Magnetic resonance imaging Resonance Relaxation (NMR) Electromagnetic radiation Spectroscopy Acoustic Resonance Acoustic Resonance Spectroscopy References Nuclear magnetic resonance spectroscopy Ultrasound Acoustics
Nuclear acoustic resonance
[ "Physics", "Chemistry" ]
2,354
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Classical mechanics", "Acoustics", "Spectroscopy" ]
60,789,000
https://en.wikipedia.org/wiki/Photopyroelectric
Photopyroelectric can be read as "photo" + "pyroelectric", meaning any optical or imaging system that uses a pyroelectric detector. Pyroelectricity is the ability of certain materials to generate a transient voltage when they are heated or cooled. When the temperature changes, the positions of the atoms shift slightly within the crystal structure; this change can also be described as a change in the polarization of the material, and it gives rise to a voltage across the crystal. If the temperature then remains constant for a period of time, the photovoltage gradually disappears because of leakage current. Such leakage can arise in several ways, for example electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter connected to the crystal. Technical Base of Photopyroelectric The photopyroelectric technique refers to an optical system based mainly on an imaging system and a pyroelectric detector. Pyroelectric detector The pyroelectric detector is used as the sensor that supports the system. Because the pyroelectric crystal has a single polar axis, it is characterized by asymmetry. Polarization due to changes in temperature, the so-called pyroelectric effect, is widely used in sensor technology. Pyroelectric crystals must be prepared as very thin chips and are electroded in the direction perpendicular to the polar axis. An absorbing (blackening) layer is also required on the upper electrode. When this absorbing layer is exposed to infrared radiation, the pyroelectric chip is heated and a surface charge appears on the electrodes. If the radiation is interrupted, a charge opposite to the direction of polarization is generated. This charge is very small, so it is converted to a signal voltage by ultra-low-noise, ultra-low-leakage field-effect transistors (JFETs) or operational amplifiers (OpAmps) before it is neutralized by the internal resistance of the crystal. Pyroelectric detectors have a high signal-to-noise ratio even at 4 kHz; in a Fourier-transform infrared spectrometer, for example, a thermopile performs well only at a few hertz. Imaging system The imaging system is a general term for the various types of remote sensor systems that acquire remote-sensing images of objects without photography. Scanning is usually used for imaging, with tape recording or indirect recording on film. According to the structure of the system, the scanning method and the detector, such systems are roughly divided into: 1. Optomechanical scanning, such as multi-spectral scanners: a mirror scans the object surface, and the image data are output after the light is split, detected and photoelectrically converted. 2. Electronic scanning, for example a return-beam vidicon TV camera, an image-side scanning method: an optical image is formed on the photoconductive target, and the signal is amplified and output after being scanned by the electron beam. 3. Solid-state self-scanning, for example the photoelectric scanning sensor of the French SPOT satellite, also an image-plane scanning method: the object is imaged by an objective lens onto a detector array consisting of many charge-coupled devices (CCDs), which perform the photoelectric conversion and output. 4. Antenna scanning. 
Such as side-looking radar, an active remote-sensing imaging system and a surface scanning method: it transmits a microwave beam through the antenna and receives the echo reflected by the scene, which is demodulated and output. The Use of Photopyroelectric Photopyroelectric calorimetry of composite materials Photopyroelectric structures have been used to check the thermal effusivity of certain composite materials inserted into the detection cell as a layer. The technique relies on scanning the thickness of the coupling fluid (the TWRC method). Two particular composites were chosen for such a study: (i) a liquid, a water-based nanofluid containing gold nanoparticles, and (ii) a more solid type, a urea–fumaric acid eutectic in a 1:1 ratio. It was found that the thermal effusivity is independent of the volume and concentration of the gold nanoparticles. For the urea–fumaric acid eutectic, it can reasonably be concluded that the thermal effusivity of the compound is quite different from that of the pure raw materials, which indicates compound formation. Self-consistent photopyroelectric calorimetry for liquids This work also demonstrates the importance of the front photopyroelectric (FPPE) configuration. In addition, it describes the thermal-wave resonator cavity (TWRC) method, which is designed to measure the thermal effusivity and diffusivity of liquids. It has been demonstrated that the same type of technology can yield a variety of static and dynamic thermal parameters; two of these parameters are measured and calculated directly, while the other two are calculated indirectly. The method demonstrates its self-consistency in studies of liquids such as various oils, water, glycerin, ethylene glycol and the like. Photopyroelectric Effect and Pyroelectric Measurement Owing to the coupling fluid and to the measurement and subtraction between the sample and the detector, the photopyroelectric technique in its standard configuration systematically underestimates the thermal diffusivity of the solid sample. To overcome these negative effects of treating the coupling fluid, a new method has been proposed. It relies on the use of a transparent pyroelectric sensor and a transparent coupling fluid, together with a self-normalization procedure. In this way, it is straightforward to measure accurately the thermal diffusivity of opaque solid samples, as well as the light absorption coefficient of translucent solid samples. Photopyroelectric for Simultaneous Thermal Studies Simultaneous thermophysical studies are very important and critical in many related fields of science. The heat capacity is closely related to the microstructure of the material under study and is important in monitoring the energy content of the system. Therefore, calorimetry plays an important role in the characterization of physical systems, especially near phase transitions, where energy fluctuations are very important. This work summarizes the ability of photopyroelectric techniques to study the variation of the specific heat and other thermal parameters with temperature, which is closely related to phase transitions. 
The working principle and theoretical basis are presented, and the experimental setup and the additional benefits of the technique compared with traditional techniques are described in detail. Integration into a calorimetric setting provides the possibility of performing calorimetric studies while also revealing the complementary nature of optical, structural and electrical properties. The review covers high-temperature-resolution results for several phase-transition parameters in different systems under the various possible configurations. Optimized configuration of the pyroelectric sensor in photopyroelectric technique It has been shown that, for constant laser power, the response of the pyroelectric sensor does not depend on the spatial distribution of the intensity of the laser beam. Therefore, in the voltage model, the signal amplitude is inversely proportional to the effective area of the sensor. In addition, the pyroelectric signal may increase as the effective area decreases while the total area of the sensor remains constant. Based on this, a method is proposed to improve the PPE signal measured in voltage mode by optimizing the metal electrode structure of the sensor. Experiments show that this improved method can increase the signal amplitude by a factor of ten without increasing the electrical noise. Deficiency in the photopyroelectric Types of deficiency The so-called surface defects of optical components mainly refer to surface blemishes and surface contaminants. Surface blemishes refer to various processing defects such as pitting, scratches, open bubbles, broken edges, and broken spots on the surface of polished optical components, caused mainly by processing or subsequent handling. Scratches are linear marks on the surface of an optical component. According to their length they are divided into long scratches and short scratches, with a limit of 2 mm: if the scratch length is greater than 2 mm it is a long scratch, and if it is less than 2 mm it is a short scratch. For short scratches, the evaluation criterion is their cumulative length. Relatively speaking, scratches are easier to detect than defects such as pitting. Pitting refers to pits and digs on the surface of an optical component; the surface roughness inside the pit is large, the width and depth are approximately the same, and the edges are irregular. Typically, defects with an aspect ratio greater than 4:1 are classed as scratches, while defects below 4:1 are classed as pitting. Bubbles are formed by gases that are not removed in time during the manufacture or processing of the optical component; since the gas pressure is evenly distributed in every direction, the shape of a bubble is usually spherical. Broken edges are chips at the edge of optical components. Although they lie outside the effective aperture, they are also a source of light scattering and therefore affect optical performance. Negative impact caused by the deficiency Surface blemishes, as microscopic local defects introduced during manufacturing, influence the surface properties of optical components and may lead to serious consequences such as operational errors in optical instruments. In short, the surface defects of optical components can be detrimental to the performance of optical systems, and the root cause is the scattering of light. 
The damage that surface defects cause to an optical component itself and to the entire optical system is manifested in the following ways: (1) The quality of the beam is degraded. Surface defects scatter light, so that the energy of the beam is greatly reduced after passing through the defect, lowering the beam quality. (2) Thermal effects of defects. Since the area where a surface defect is located absorbs more energy than other areas, the resulting thermal effect may cause local deformation of the component, damage coating layers, and so on, and thus damage the entire optical system. (3) Damage to other optical components in the system. In a laser system, under illumination by a high-energy laser beam, the light scattered by the surface of the component is absorbed by other optical components in the system, resulting in uneven illumination of those components. When the damage threshold of the optical component material is reached, the quality of the transmitted light is affected and the optical components are damaged, which can lead to serious damage to the optical system. (4) Blemishes can affect the cleanliness of the field of view. When there are too many blemishes on the optical components, the microscopic appearance suffers. In addition, blemishes can trap fine dust, microorganisms, polishing powder and other impurities, which cause the components to corrode, grow mold, and fog, significantly affecting the basic performance of the component. References Spectroscopy
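As a back-of-the-envelope illustration of the pyroelectric detector response described in the technical-base section above, the Python sketch below applies the standard relation i = p·A·(dT/dt) between pyroelectric current, pyroelectric coefficient, electrode area and heating rate. The coefficient and geometry are assumed, order-of-magnitude values chosen for illustration, not data from the studies summarized in this article.

# Rough estimate of the pyroelectric current produced by a heated chip:
# i = p * A * dT/dt, with all values assumed for illustration.
p = 2.0e-4      # pyroelectric coefficient, C/(m^2*K), typical order for LiTaO3
A = 1.0e-6      # electrode area, m^2 (about 1 mm^2)
dT_dt = 0.1     # heating rate produced by the modulated light, K/s

i = p * A * dT_dt
print(f"pyroelectric current ~ {i:.1e} A")
# ~2e-11 A, which is why the low-noise JFET or OpAmp readout mentioned above is needed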
Photopyroelectric
[ "Physics", "Chemistry" ]
2,424
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
42,889,377
https://en.wikipedia.org/wiki/Generator%20%28circuit%20theory%29
A generator in electrical circuit theory is one of two ideal elements: an ideal voltage source, or an ideal current source. These are two of the fundamental elements in circuit theory. Real electrical generators are most commonly modelled as a non-ideal source consisting of a combination of an ideal source and a resistor. Voltage generators are modelled as an ideal voltage source in series with a resistor. Current generators are modelled as an ideal current source in parallel with a resistor. The resistor is referred to as the internal resistance of the source. Real world equipment may not perfectly follow these models, especially at extremes of loading (both high and low), but for most purposes, they suffice. The two models of non-ideal generators are interchangeable; either can be used for any given generator. Thévenin's theorem allows a non-ideal current source model to be converted to a non-ideal voltage source model and Norton's theorem allows a non-ideal voltage source model to be converted to a non-ideal current source model. Both models are equally valid, but the voltage source model is more applicable when the internal resistance is low (that is, much lower than the load impedance), and the current source model is more applicable when the internal resistance is high (compared to the load). Symbols The original table of symbols pairs an illustration with each caption; the symbol images are not reproduced here. The captions are: ideal voltage source; ideal current source; controlled voltage source; controlled current source; battery of cells; single cell. Symbols commonly used for ideal sources are shown in the figure. Symbols do vary from region to region and time period to time period. Another common symbol for a current source is two interlocking circles. Dependent sources A dependent source is one in which the voltage or current of the source output is dependent on another voltage or current elsewhere in the circuit. There are thus four possible types: current-dependent voltage source, voltage-dependent voltage source, current-dependent current source, and voltage-dependent current source. Non-ideal dependent sources can be modelled with the addition of an impedance in the same way as non-dependent sources. These elements are widely used to model the function of two-port networks; one generator is needed for each port, and it is dependent on either voltage or current at the other port. The models are an example of black box modelling; that is, they are quite unrelated to what is physically inside the device but correctly model the device's function. There are a number of these two-port models, differing only in the type of generator required to represent them. This kind of model is particularly useful for modelling the behaviour of transistors. The model used to represent h-parameters is shown in the figure. h-parameters are frequently used in transistor data sheets to specify the device. The h-parameters are defined as the matrix where the voltage and current variables are as shown in the figure. The circuit model using dependent generators is just an alternative way of representing this matrix. References Circuit theorems
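The interchangeability of the two non-ideal generator models can be checked numerically. The Python sketch below converts a Thévenin (voltage-source) model into its Norton (current-source) equivalent and verifies that both deliver the same current to a load; the component values are arbitrary examples chosen for illustration.

# Thevenin <-> Norton equivalence: same internal resistance,
# I_n = V_th / R and V_th = I_n * R.
def thevenin_to_norton(v_th, r):
    return v_th / r, r               # (short-circuit current, internal resistance)

v_th, r = 10.0, 2.0                  # example source: 10 V behind 2 ohms
i_n, r_n = thevenin_to_norton(v_th, r)

r_load = 8.0
i_from_thevenin = v_th / (r + r_load)           # series circuit
i_from_norton = i_n * r_n / (r_n + r_load)      # current divider
print(i_from_thevenin, i_from_norton)           # both print 1.0 (amperes)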
Generator (circuit theory)
[ "Physics" ]
730
[ "Circuit theorems", "Equations of physics", "Physics theorems" ]
42,894,552
https://en.wikipedia.org/wiki/Network%20Definition%20Language
NDL (Network Definition Language) was a compiler on Burroughs Large and Medium Systems computers used to create a network definition file for a data communications controller (DCC) and object code for a data communications processor (DCP) that interfaced between a message control program (written in DCALGOL) such as RJE, MCSII or CANDE and the computer's line adaptors and terminal network. Burroughs Network Definition Language allowed many parameters of the mainframe communications adapter, modems (where used), protocol and attached terminal to be defined. However, it treated the low-level operation of the multi-drop protocol, including the modulus of sequence numbers and the algorithm used for CRCs etc., as primitives. References External links NDL Language Reference Manual Burroughs mainframe computers Hardware description languages Mainframe computer software
Network Definition Language
[ "Engineering" ]
177
[ "Electronic engineering", "Hardware description languages" ]
42,898,403
https://en.wikipedia.org/wiki/Ether%20cleavage
Ether cleavage refers to chemical substitution reactions that lead to the cleavage of ethers. Due to the high chemical stability of ethers, cleavage of the C-O bond is uncommon in the absence of specialized reagents or extreme conditions. In organic chemistry, ether cleavage is an acid-catalyzed nucleophilic substitution reaction. Depending on the specific ether, cleavage can follow either SN1 or SN2 mechanisms. Distinguishing between the two mechanisms requires consideration of inductive and mesomeric effects that could stabilize or destabilize a potential carbocation in the SN1 pathway. The use of hydrohalic acids takes advantage of the fact that these agents are able to protonate the ether oxygen atom and also provide a halide anion as a suitable nucleophile. However, as ethers show a basicity similar to that of alcohols (pKa of approximately 16), the equilibrium of protonation lies on the side of the unprotonated ether, and cleavage is usually very slow at room temperature. Ethers can also be cleaved by strongly basic agents, e.g. organolithium compounds. Cyclic ethers are especially susceptible to such cleavage, but acyclic ethers can be cleaved as well. SN1 Ether cleavage The unimolecular SN1 mechanism proceeds via a carbocation (provided that the carbocation can be adequately stabilized). In the example, the oxygen atom in methyl tert-butyl ether is reversibly protonated. The resulting oxonium ion then decomposes into methanol and a relatively stable tert-butyl cation. The latter is then attacked by a nucleophilic halide (here bromide), yielding tert-butyl bromide. Mechanism SN2 ether cleavage If the potential carbocation cannot be stabilized, ether cleavage follows a bimolecular, concerted SN2 mechanism. In the example, the ether oxygen is reversibly protonated. The halide ion (here bromide) then nucleophilically attacks the sterically less hindered carbon atom, thereby forming methyl bromide and 1-propanol. Mechanism Other factors SN1 ether cleavage is generally faster than SN2 ether cleavage. However, reactions that would require the formation of unstable carbocations (methyl, vinyl, aryl or primary carbon) proceed via the SN2 mechanism. The hydrohalic acid also plays an important role, as the rate of reaction is greater with hydroiodic acid than with hydrobromic acid. Hydrochloric acid only reacts under more rigorous conditions. The reason lies in the higher acidity of the heavier hydrohalic acids as well as the higher nucleophilicity of the respective conjugate base. Fluoride is not nucleophilic enough to allow the use of hydrofluoric acid to cleave ethers in protic media. Regardless of which hydrohalic acid is used, the rate of reaction is comparatively low, so that heating of the reaction mixture is required. Ether cleavage with organometallic agents Mechanism Basic ether cleavage is induced by deprotonation in the α position. The ether then decomposes into an alkene and an alkoxide. Cyclic ethers allow an especially fast concerted cleavage, as seen for THF: Deprotonated acyclic ethers undergo beta-hydride elimination, forming an olefinic ether. The hydride formed then attacks the olefinic group in the α position to the ether oxygen, releasing the alkoxide. Impact Organometallic agents are often handled in ethereal solvents, which coordinate to the metallic centers and thereby enhance the reactivity of the organic groups. Here, ether cleavage poses a problem, as it not only decomposes the solvent but also consumes the organometallic agent. 
Reactions with organometallic agents are therefore typically performed at low temperatures (-78 °C). At these temperatures, deprotonation is kinetically inhibited and slow compared to many reactions that are intended to take place. Literature Paula Y. Bruice: Organic Chemistry, Prentice Hall. References Substitution reactions Chemical processes Reactions of ethers
Ether cleavage
[ "Chemistry" ]
882
[ "Chemical process engineering", "Chemical processes", "nan" ]
42,899,470
https://en.wikipedia.org/wiki/Chloroflexales
Chloroflexales is an order of bacteria in the class Chloroflexia. The clade is also known as filamentous anoxygenic phototrophic bacteria (FAP), as the order contains phototrophs that do not produce oxygen. These bacteria are facultative aerobes. They generally use chemotrophy when oxygen is present and switch to light-derived energy otherwise. Most species are heterotrophs, but a few are capable of photoautotrophy. The order can be divided into two suborders. Chloroflexineae ("Green FAP", "green non-sulfur bacteria") is the better-known one. This suborder uses chlorosomes, a specialized antenna complex, to pass light energy to the reaction center. Roseiflexineae ("Red FAP"), on the other hand, has no such ability. The named colors are not absolute, as growth conditions such as oxygen concentration will make a green FAP appear green, brown, or reddish-orange by inducing changes in pigment composition. Classification The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Phylogeny Taxonomy Suborder Roseiflexineae Gupta et al. 2013 Family Roseiflexaceae Gupta et al. 2013 ["Kouleotrichaceae" Mehrshad et al. 2018] Genus ?Heliothrix Pierson et al. 1986 Genus "Kouleothrix" Kohno et al. 2002 Genus "Candidatus Ribeiella" Petriglieri et al. 2023 Genus Roseiflexus Hanada et al. 2002 Suborder Chloroflexineae Gupta et al. 2013 Family Chloroflexaceae Gupta et al. 2013 Genus ?"Candidatus Chloranaerofilum" Thiel et al. 2016 Genus Chloroflexus Pierson & Castenholz 1974 ["Chlorocrinis" Ward et al. 1998] Family Oscillochloridaceae Gupta et al. 2013 Genus ?Chloronema Dubinina & Gorlenko 1975 Genus "Candidatus Chloroploca" Gorlenko et al. 2014 Genus Oscillochloris Gorlenko & Pivovarova 1989 Genus "Candidatus Viridilinea" Grouzdev et al. 2018 See also List of bacteria genera List of bacterial orders References External links Phototrophic bacteria Chloroflexota
Chloroflexales
[ "Chemistry", "Biology" ]
548
[ "Bacteria stubs", "Bacteria", "Photosynthesis", "Phototrophic bacteria" ]
57,699,124
https://en.wikipedia.org/wiki/P-Chiral%20phosphine
P-Chiral phosphines are organophosphorus compounds of the formula PRR′R″, where R, R′, R″ = H, alkyl, aryl, etc. They are a subset of chiral phosphines, a broader class of compounds where the stereogenic center can reside at sites other than phosphorus. P-chirality exploits the high barrier for inversion of phosphines, which ensures that enantiomers of PRR'R" do not racemize readily. The inversion barrier is relatively insensitive to substituents for triorganophosphines. By contrast, most amines of the type NRR′R″ undergo rapid pyramidal inversion. Research themes Most chiral phosphines are C2-symmetric diphosphines. Famous examples are DIPAMP and BINAP. These chelating ligands support catalysts used in asymmetric hydrogenation and related reactions. DIPAMP is prepared by coupling the P-chiral methylphenylanisylphosphine. P-Chiral phosphines are of particular interest in asymmetric catalysis. P-Chiral phosphines have been investigated for two main applications, as ligands for asymmetric homogeneous catalysts and as nucleophiles in organocatalysis. References Catalysis Coordination chemistry Stereochemistry Ligands
P-Chiral phosphine
[ "Physics", "Chemistry" ]
291
[ "Catalysis", "Chemical process stubs", "Ligands", "Stereochemistry", "Coordination chemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime", "Chemical reaction stubs", "Chemical kinetics" ]
57,701,673
https://en.wikipedia.org/wiki/CD28%20family%20receptor
CD28 family receptors are a group of regulatory cell surface receptors expressed on immune cells. The CD28 family in turn is a subgroup of the immunoglobulin superfamily. Two family members, CD28 and ICOS, act as positive regulators of T cell function, while another three, BTLA, CTLA-4 and PD-1, act as inhibitors. Ligands for the CD28 receptor family include B7 family proteins. CD28 receptors play a role in the development and proliferation of T cells. The CD28 receptors enhance signals from the T cell receptors (TCR) in order to stimulate an immune response and, on regulatory T cells, an anti-inflammatory response. Through the promotion of T cell function, CD28 receptors allow effector T cells to combat regulatory T cell-mediated suppression of adaptive immunity. CD28 receptors also help prevent spontaneous autoimmunity. Function CD28 receptors aid in other T cell processes such as cytoskeletal remodeling, production of cytokines and chemokines and intracellular biochemical reactions (i.e. phosphorylation, transcriptional signaling, and metabolism) that are key for T cell proliferation and differentiation. Ligation of CD28 receptors causes epigenetic, transcriptional and post-translational alterations in T cells. Specifically, CD28 costimulation controls many aspects of T cell behavior, one being the expression of proinflammatory cytokine genes. One such cytokine gene encodes IL-2, which influences T cell proliferation, survival, and differentiation. The absence of CD28 costimulation results in the loss of IL-2 production, causing the T cells to become anergic. Additionally, CD28 ligation causes arginine methylation of many proteins. CD28 also drives transcription within T cells and produces signals that lead to IL-2 production and to regulation of Bcl-xL, an antiapoptotic protein, which are essential for T cell survival. CD28 receptors can be seen on 80% of human CD4+ and 50% of CD8+ T cells, and these percentages decrease with age. Clinical significance Cancer Some cancer cells evade destruction by the immune system through increased expression of B7 ligands that bind to inhibitory CD28 family member receptors on immune cells. Antibodies directed against CD28 family members CTLA-4, PD-1, or their B7 ligands function as checkpoint inhibitors to overcome tumor immune tolerance and are clinically used in cancer immunotherapy. Additionally, genetically engineered T cells containing CD28 and CD137 can be used in molecularly targeted therapy against carcinomas that express the antigen mesothelin. These T cells have a high affinity for human mesothelin. Upon mesothelin stimulation, the T cells proliferate, express an antiapoptotic gene, and secrete cytokines with the help of CD28 expression. When introduced into mice with pre-existing tumors, these T cells remove the tumors completely. The presence of CD137 within the cells maintains the persistence of the engineered T cells. This combination of CD28 and CD137 in engineered T cells is essential for such immunotherapy and shows promise for directing T lymphocytes to tumor antigens and altering the tumor microenvironment of mesothelin-expressing tumors. HIV The CD28 pathway is targeted by the human immunodeficiency virus (HIV) as the virus infects large numbers of normal cells. CD28 has effects on the transcription and stability of interleukin-2 and IFN-γ, cytokines that are important for immunity and for stimulating NK cells. HIV alters CD28 signaling as well as CD8 cells. As a result, there are reduced levels of CD8 cells, which express CD28, in individuals with HIV. 
With regard to subjects infected with both Hepatitis C virus (HCV) and HIV, levels of CD8 cells are also reduced. CD28 signaling has a large role in the adaptive response to HCV and can increase morbidity in HCV/HIV coinfection. CD28 induces IL-2 secretion and increases IL-2 mRNA stability. CD28 costimulation influences the expression of key genes expressed in T cell differentiation. Tat, a regulatory protein that controls viral transcription, increases the transcription of the HIV dsDNA. CD28 costimulation together with the Tat protein can contribute to the chronic immune hyperactivation seen among HIV-infected individuals. Thus, CD28 is an important target in therapeutics for the infection and pathogenesis of HIV. Hyper-induced inflammatory cytokines Binding of CD28 to superantigens can induce an overexpression of inflammatory cytokines, which may be harmful. When CD28 interacts with its coligand B7-2, these superantigens elicit T-cell hyperactivation. Superantigens can drive this overexpression by controlling interactions between MHC-II and TCRs as well as by increasing the B7-2 and CD28 costimulatory interactions. This is dangerous because the overexpression of inflammatory cytokines can cause toxic shock in an individual. References Receptors Immunoglobulin superfamily Immunology
CD28 family receptor
[ "Chemistry", "Biology" ]
1,083
[ "Immunology", "Receptors", "Signal transduction" ]
64,283,297
https://en.wikipedia.org/wiki/UV-Vis%20absorption%20spectroelectrochemistry
Ultraviolet-visible (UV-Vis) absorption spectroelectrochemistry (SEC) is a multiresponse technique that analyzes the evolution of the absorption spectra in the UV-Vis region during an electrode process. This technique provides information from both an electrochemical and a spectroscopic point of view. In this way, it enables a better understanding of the chemical system of interest. On one hand, molecular information related to the electronic levels of the molecules is obtained from the evolution of the spectra. On the other hand, kinetic and thermodynamic information about the processes is obtained from the electrochemical signal. UV-Vis absorption SEC allows qualitative analysis, through the characterization of the different compounds present, and quantitative analysis, by determining the concentration of the analytes of interest. Furthermore, it helps to determine different electrochemical parameters such as absorptivity coefficients, standard potentials, diffusion coefficients, electron transfer rate constants, etc. Historically, reversible processes were studied with colored reagents or electrolysis products. Nowadays, it is possible to study all kinds of electrochemical processes in the entire UV-Vis spectral range, and even in the near infrared (NIR). Configuration In UV-Vis absorption SEC, depending on the configuration of the light beam with respect to the electrode/solution interface, two types of optical arrangements can be distinguished: normal and parallel configuration. Normal configuration In the normal configuration, the light beam samples the electrode surface perpendicularly. The normal configuration provides optical information related to the changes that take place in the solution adjacent to the electrode and on the electrode surface. The optical path length coincides with the diffusion layer thickness, which is usually on the order of micrometers. This arrangement is the most suitable when the compound of interest is deposited or adsorbed on the working electrode, because it provides information about all processes occurring on the electrode surface. UV-Vis absorption SEC in the normal arrangement can be performed using both transmission and reflection phenomena. Normal transmission In normal transmission, the light beam passes through an optically transparent working electrode, collecting information about the phenomena that take place on the surface of the electrode and in the solution adjacent to it. Electrodes in this configuration must be composed of materials that have high electrical conductivity and adequate optical transparency in the spectral region of interest. The external reflection mode was proposed to improve the sensitivity and to allow the use of non-transparent electrodes. Normal reflection In normal reflection, the light beam travels in a direction perpendicular to the working electrode surface, on which the reflection occurs. The reflected beam is collected and analyzed in the spectrometer. It is also possible to work with other incidence and collection angles. This configuration is an alternative when the working electrode is non-transparent. In this configuration, the optical path length in solution is on the order of twice the diffusion layer thickness. It should be noted that the growth of films on the electrode surface can cause optical interference phenomena. As this mode is based on a reflection phenomenon, reflectance is often used as the unit of measurement instead of absorbance. 
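To make the difference between the optical path lengths of the arrangements described above and below more concrete, the Python sketch below estimates the diffusion-layer thickness from the familiar relation δ ≈ √(πDt) and compares it with a typical electrode length. The diffusion coefficient, electrolysis time and electrode length are assumed illustrative values, not data from any particular SEC experiment.

import math

# Assumed values for illustration: an aqueous small-molecule diffusion
# coefficient, 10 s after the potential step, and a 5 mm long electrode.
D = 1.0e-5          # diffusion coefficient, cm^2/s
t = 10.0            # time, s
L_electrode = 0.5   # electrode length, cm

delta = math.sqrt(math.pi * D * t)      # diffusion-layer thickness, cm
print(f"normal transmission path  ~ {delta * 1e4:.0f} micrometres")
print(f"normal reflection path    ~ {2 * delta * 1e4:.0f} micrometres (beam passes twice)")
print(f"parallel arrangement path ~ {L_electrode * 1e4:.0f} micrometres")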
Parallel or long optical path-length configuration The parallel configuration or long optical path-length arrangement provides information only about the spectral changes that occur in the solution adjacent to the working electrode surface, improving the sensitivity to soluble compounds because the length of the optical pathway can be as long as the length of the electrode. The light beam travels parallel to the working electrode surface, sampling the first micrometers of the solution adjacent to the working electrode surface, and the information is collected in the spectrometer. Aligning the light beam has traditionally been a difficult task; however, simple alternatives have been developed to perform measurements in the parallel configuration. There are several advantages of this configuration with respect to the normal one: better sensitivity and lower detection limits; optically transparent electrodes are not required; and the spectral changes are related only to the diffusion layer. Instrumentation The experimental set-up used to carry out UV-Vis absorption SEC measurements depends on the chosen configuration and the characteristics of the analyte. The experimental set-up is composed of a light source, a spectrometer, a potentiostat/galvanostat, a SEC cell, a three-electrode system, optical elements to conduct the light beam, and a computer for data collection and analysis. Currently, there are commercial devices that integrate all these elements in a single instrument, significantly simplifying SEC experiments. Light source: provides the electromagnetic radiation that interacts with the sample while the electrochemical process is taking place. A specific source is required for the UV-Vis spectral region, the most common being the deuterium/halogen lamp. Spectrometer: an instrument that measures the properties of light in a certain region of the electromagnetic spectrum. It uses a monochromator to separate the different spectral wavelengths of interest emitted by the light source. A diode-array detector can be used to obtain time-resolved spectra. For UV-Vis spectroelectrochemistry, the spectrometer must be specific to the UV-Vis spectral region. Potentiostat/Galvanostat: an electronic device that controls the working electrode potential with respect to the reference electrode, or the current that flows with respect to the auxiliary electrode. Three-electrode system: consists of a working electrode, a reference electrode and an auxiliary electrode. This system can be simplified by using screen-printed electrodes that include the three electrodes on a single holder. Spectroelectrochemical cell: the device in which the solution and the three-electrode system are located, avoiding possible interference in the optical path. It is the link between the electrochemistry and the UV-Vis absorption spectroscopy. Devices to conduct the radiation beam: lenses, mirrors and/or optical fibers. The latter conduct electromagnetic radiation over great distances with hardly any losses. In addition, they simplify the optical configurations because they allow working with a small amount of solution. Optical fibers also make it easier to conduct and collect light near the electrode. Analysis and data collection devices: a computer collects the signals provided by the spectrometer and the potentiostat and, using suitable software, processes, analyzes and interprets them. Applications UV-Vis absorption SEC is a recent technique that is continuously evolving. 
However, many advantages have been observed over other techniques. The most outstanding advantages are: It generates a large amount of information about the systems. Generally, solvents are not a problem when carrying out these kinds of measurements. The wavelength selection generates specificity in the measurement of each species. Currently, there are commercial devices that allow carrying out a large number of experiments with high reproducibility. The kinetics of the reactions can be studied. It is used to determine a large number of electrochemical and optical parameters. Trilinear signals are obtained. Small amounts of sample can be analyzed. Faradaic current can be separated from non-faradaic current in an electrode process. It is more specific than electrochemistry. Quantitative information can be obtained. UV-Vis absorption SEC has been used mainly in different research fields such as: Sensor development. Reaction mechanisms. Diffusion and adsorption processes. Characterization of compounds. Study of biological interest substances. Study of optical and electrical materials properties. Study of liquid/liquid interfaces. Study and synthesis of nanomaterials. Evaluation of reaction parameters in which electron transfer occurs. References Spectroscopy Electrochemistry
UV-Vis absorption spectroelectrochemistry
[ "Physics", "Chemistry" ]
1,493
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electrochemistry", "Spectroscopy" ]
64,290,680
https://en.wikipedia.org/wiki/BAM15
BAM15 is a novel mitochondrial protonophore uncoupler capable of protecting mammals from acute renal ischemic-reperfusion injury and cold-induced kidney tubule damage. It is being studied for the treatment of obesity, sepsis, and cancer. References External links Fat-fighting molecule sees the body burn more fuel Oxadiazoles Pyrazines Bicyclic compounds 2-Fluorophenyl compounds Anilines Secondary amines Uncouplers
BAM15
[ "Chemistry" ]
98
[ "Cellular respiration", "Uncouplers" ]
64,294,715
https://en.wikipedia.org/wiki/Applied%20Nanoscience
Applied Nanoscience is a science journal specializing in nanotechnology and published by Springer Nature. It caters to areas fundamental to building sustainable progress, including water science, advanced materials, energy, electronics, environmental science and medicine. Abstracting and indexing According to the information on the journal's web site, in September 2024, Applied Nanoscience was indexed in the following databases: Astrophysics Data System (ADS) Baidu CLOCKSS CNKI CNPIEC Chemical Abstracts Service (CAS) Dimensions EBSCO EI Compendex Google Scholar INIS Atomindex Japanese Science and Technology Agency (JST) Naver OCLC WorldCat Discovery Service Portico ProQuest SCImago SCOPUS Semantic Scholar TD Net Discovery Service UGC-CARE List (India) Wanfang Controversies In March 2023, Clarivate discontinued the coverage of Applied Nanoscience (along with 81 other journals) in Web of Science. According to Chris Graf, research integrity director at Springer Nature, the publisher was "looking carefully at the journal, utilising the Web of Science criteria as well as evaluating it more holistically, to ensure that it can be relisted at the earliest opportunity." In January 2024, Retraction Watch covered mass retraction of papers from Applied Nanoscience. The retracted papers had been submitted to various special issues, subsequently raising concerns "including but not limited to compromised editorial handling and peer review process, inappropriate or irrelevant references or not being in scope of the journal or guest-edited issue." References Nanotechnology journals Chemistry journals
Applied Nanoscience
[ "Materials_science" ]
321
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Nanotechnology journals", "Nanotechnology stubs", "Nanotechnology" ]
64,295,245
https://en.wikipedia.org/wiki/Tensor%20sketch
In statistics, machine learning and algorithms, a tensor sketch is a type of dimensionality reduction that is particularly efficient when applied to vectors that have tensor structure. Such a sketch can be used to speed up explicit kernel methods, bilinear pooling in neural networks and is a cornerstone in many numerical linear algebra algorithms. Mathematical definition Mathematically, a dimensionality reduction or sketching matrix is a matrix , where , such that for any vector with high probability. In other words, preserves the norm of vectors up to a small error. A tensor sketch has the extra property that if for some vectors such that , the transformation can be computed more efficiently. Here denotes the Kronecker product, rather than the outer product, though the two are related by a flattening. The speedup is achieved by first rewriting , where denotes the elementwise (Hadamard) product. Each of and can be computed in time and , respectively; including the Hadamard product gives overall time . In most use cases this method is significantly faster than the full requiring time. For higher-order tensors, such as , the savings are even more impressive. History The term tensor sketch was coined in 2013 describing a technique by Rasmus Pagh from the same year. Originally it was understood using the fast Fourier transform to do fast convolution of count sketches. Later research works generalized it to a much larger class of dimensionality reductions via Tensor random embeddings. Tensor random embeddings were introduced in 2010 in a paper on differential privacy and were first analyzed by Rudelson et al. in 2012 in the context of sparse recovery. Avron et al. were the first to study the subspace embedding properties of tensor sketches, particularly focused on applications to polynomial kernels. In this context, the sketch is required not only to preserve the norm of each individual vector with a certain probability but to preserve the norm of all vectors in each individual linear subspace. This is a much stronger property, and it requires larger sketch sizes, but it allows the kernel methods to be used very broadly as explored in the book by David Woodruff. Tensor random projections The face-splitting product is defined as the tensor products of the rows (was proposed by V. Slyusar in 1996 for radar and digital antenna array applications). More directly, let and be two matrices. Then the face-splitting product is The reason this product is useful is the following identity: where is the element-wise (Hadamard) product. Since this operation can be computed in linear time, can be multiplied on vectors with tensor structure much faster than normal matrices. Construction with fast Fourier transform The tensor sketch of Pham and Pagh computes , where and are independent count sketch matrices and is vector convolution. They show that, amazingly, this equals – a count sketch of the tensor product! It turns out that this relation can be seen in terms of the face-splitting product as , where is the Fourier transform matrix. Since is an orthonormal matrix, doesn't impact the norm of and may be ignored. What's left is that . On the other hand, . Application to general matrices The problem with the original tensor sketch algorithm was that it used count sketch matrices, which aren't always very good dimensionality reductions. In 2020 it was shown that any matrices with random enough independent rows suffice to create a tensor sketch. 
This allows using matrices with stronger guarantees, such as real Gaussian Johnson Lindenstrauss matrices. In particular, we get the following theorem Consider a matrix with i.i.d. rows , such that and . Let be independent consisting of and . Then with probability for any vector if . In particular, if the entries of are we get which matches the normal Johnson Lindenstrauss theorem of when is small. The paper also shows that the dependency on is necessary for constructions using tensor randomized projections with Gaussian entries. Variations Recursive construction Because of the exponential dependency on in tensor sketches based on the face-splitting product, a different approach was developed in 2020 which applies We can achieve such an by letting . With this method, we only apply the general tensor sketch method to order 2 tensors, which avoids the exponential dependency in the number of rows. It can be proved that combining dimensionality reductions like this only increases by a factor . Fast constructions The fast Johnson–Lindenstrauss transform is a dimensionality reduction matrix Given a matrix , computing the matrix vector product takes time. The Fast Johnson Lindenstrauss Transform (FJLT), was introduced by Ailon and Chazelle in 2006. A version of this method takes where is a diagonal matrix where each diagonal entry is independently. The matrix-vector multiplication can be computed in time. is a Hadamard matrix, which allows matrix-vector multiplication in time is a sampling matrix which is all zeros, except a single 1 in each row. If the diagonal matrix is replaced by one which has a tensor product of values on the diagonal, instead of being fully independent, it is possible to compute fast. For an example of this, let be two independent vectors and let be a diagonal matrix with on the diagonal. We can then split up as follows: In other words, , splits up into two Fast Johnson–Lindenstrauss transformations, and the total reduction takes time rather than as with the direct approach. The same approach can be extended to compute higher degree products, such as Ahle et al. shows that if has rows, then for any vector with probability , while allowing fast multiplication with degree tensors. Jin et al., the same year, showed a similar result for the more general class of matrices call RIP, which includes the subsampled Hadamard matrices. They showed that these matrices allow splitting into tensors provided the number of rows is . In the case this matches the previous result. These fast constructions can again be combined with the recursion approach mentioned above, giving the fastest overall tensor sketch. Data aware sketching It is also possible to do so-called "data aware" tensor sketching. Instead of multiplying a random matrix on the data, the data points are sampled independently with a certain probability depending on the norm of the point. Applications Explicit polynomial kernels Kernel methods are popular in machine learning as they give the algorithm designed the freedom to design a "feature space" in which to measure the similarity of their data points. A simple kernel-based binary classifier is based on the following computation: where are the data points, is the label of the th point (either −1 or +1), and is the prediction of the class of . The function is the kernel. Typical examples are the radial basis function kernel, , and polynomial kernels such as . When used this way, the kernel method is called "implicit". 
Sometimes it is faster to do an "explicit" kernel method, in which a pair of functions are found, such that . This allows the above computation to be expressed as where the value can be computed in advance. The problem with this method is that the feature space can be very large. That is . For example, for the polynomial kernel we get and , where is the tensor product and where . If is already large, can be much larger than the number of data points () and so the explicit method is inefficient. The idea of tensor sketch is that we can compute approximate functions where can even be smaller than , and which still have the property that . This method was shown in 2020 to work even for high degree polynomials and radial basis function kernels. Compressed matrix multiplication Assume we have two large datasets, represented as matrices , and we want to find the rows with the largest inner products . We could compute and simply look at all possibilities. However, this would take at least time, and probably closer to using standard matrix multiplication techniques. The idea of Compressed Matrix Multiplication is the general identity where is the tensor product. Since we can compute a (linear) approximation to efficiently, we can sum those up to get an approximation for the complete product. Compact multilinear pooling Bilinear pooling is the technique of taking two input vectors, from different sources, and using the tensor product as the input layer to a neural network. In the authors considered using tensor sketch to reduce the number of variables needed. In 2017 another paper takes the FFT of the input features, before they are combined using the element-wise product. This again corresponds to the original tensor sketch. References Further reading Dimension reduction Tensors
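A minimal numerical sketch of the count-sketch/FFT construction and of its use for a degree-2 polynomial kernel, as described above, is given below in Python with NumPy. The sketch size, input dimension and random seed are arbitrary illustrative choices, and the code is a bare-bones illustration of the idea rather than a tuned implementation.

import numpy as np

rng = np.random.default_rng(0)
d, m = 256, 512                           # input dimension and sketch size (illustrative)

def count_sketch_params():
    h = rng.integers(0, m, size=d)        # hash bucket for each input coordinate
    s = rng.choice([-1.0, 1.0], size=d)   # random sign for each input coordinate
    return h, s

def count_sketch(x, h, s):
    out = np.zeros(m)
    np.add.at(out, h, s * x)              # accumulate signed entries into buckets
    return out

h1, s1 = count_sketch_params()            # two independent count sketches,
h2, s2 = count_sketch_params()            # one for each factor of the tensor product

def tensor_sketch(x, y):
    # TensorSketch(x tensor y) = inverse FFT of FFT(C1 x) * FFT(C2 y)  (circular convolution)
    fx = np.fft.rfft(count_sketch(x, h1, s1))
    fy = np.fft.rfft(count_sketch(y, h2, s2))
    return np.fft.irfft(fx * fy, n=m)

x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)      # a vector correlated with x

exact = (x @ y) ** 2                                # degree-2 polynomial kernel <x, y>^2
approx = tensor_sketch(x, x) @ tensor_sketch(y, y)  # inner product of the sketched tensors
print(exact, approx)                                # close in value; accuracy improves as m grows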
Tensor sketch
[ "Engineering" ]
1,741
[ "Tensors" ]
54,380,446
https://en.wikipedia.org/wiki/Euler%E2%80%93Boole%20summation
Euler–Boole summation is a method for summing alternating series. The concept is named after Leonhard Euler and George Boole. Boole published this summation method, using Euler's polynomials, but the method itself was likely already known to Euler. Euler's polynomials are defined by The periodic Euler functions modify these by a sign change depending on the parity of the integer part of : The Euler–Boole formula to sum alternating series is where and is the kth derivative. References Mathematical series Summability methods
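Because the Euler–Boole formula is built from Euler's polynomials, a short sketch of how those polynomials can be generated may be helpful. The Python/SymPy code below uses the recurrence E_n(x) = x^n - (1/2)·Σ_{k<n} C(n,k)·E_k(x), which follows from the generating function 2e^{xt}/(e^t + 1); it is an illustration only and is not tied to any particular summation example from the references.

import sympy as sp

x = sp.symbols('x')

def euler_polynomials(n_max):
    # E_n(x) from E_n(x) = x**n - (1/2) * sum_{k<n} binomial(n, k) * E_k(x),
    # a recurrence obtained by expanding the generating function 2*exp(x*t)/(exp(t) + 1).
    E = [sp.Integer(1)]
    for n in range(1, n_max + 1):
        En = x**n - sp.Rational(1, 2) * sum(sp.binomial(n, k) * E[k] for k in range(n))
        E.append(sp.expand(En))
    return E

for n, En in enumerate(euler_polynomials(3)):
    print(n, En)
# prints: 1,  x - 1/2,  x**2 - x,  x**3 - 3*x**2/2 + 1/4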
Euler–Boole summation
[ "Mathematics" ]
116
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Summability methods" ]
54,381,005
https://en.wikipedia.org/wiki/Magnus%20%28computer%20algebra%20system%29
Magnus was a computer algebra system designed to solve problems in group theory. It was designed to run on Unix-like operating systems, as well as Windows. The development process was started in 1994 and the first public release appeared in 1997. The project was abandoned in August 2005. The unique feature of Magnus was that it provided facilities for doing calculations in and about infinite groups. Almost all symbolic algebra systems are oriented toward finite computations that are guaranteed to produce answers, given enough time and resources. By contrast, Magnus was concerned with experiments and computations on infinite groups which in some cases are known to terminate, while in others are known to be generally recursively unsolvable. Features of Magnus A graphical object and method based user interface which is easy and intuitive to use and naturally reflects the underlying C++ classes; A kernel consisting of a "session manager", to communicate between the user interface or front-end and the back-end where computations are carried out, and "computation managers" which direct the computations which may involve several algorithms and "information centers" where information is stored; Facilities for performing several procedures in parallel and allocating resources to each of several simultaneous algorithms working on the same problem; Enumerators which generate sizable finite approximations to both finite and infinite algebraic objects and make it possible to carry out searches for answers even when general algorithms may not exist; Innovative genetic algorithms; A package manager to "plug in" more special purpose algorithms written by others; References Computer algebra systems
Magnus (computer algebra system)
[ "Mathematics" ]
308
[ "Computer algebra systems", "Mathematical software" ]
54,388,042
https://en.wikipedia.org/wiki/Arianna%20W.%20Rosenbluth
Arianna Wright Rosenbluth (September 15, 1927 – December 28, 2020) was an American physicist who contributed to the development of the Metropolis–Hastings algorithm. She wrote the first full implementation of the Markov chain Monte Carlo method. Early life and education Arianna Rosenbluth was born in Houston, Texas, on September 15, 1927. She attended university at the Rice Institute, now Rice University, where she received a Bachelor of Science in 1946. During her college days, she fenced competitively and won both the Texas women's championship in foil as well as the Houston men's championship. She qualified for the Olympics, but was unable to compete because the 1944 Summer Olympics were cancelled due to World War II and she could not afford to travel to the 1948 games in London. Rosenbluth obtained her Master of Arts from Radcliffe College in 1947 before beginning her PhD in physics at Harvard University under the supervision of Nobel Laureate John Hasbrouck Van Vleck. At the time Van Vleck also supervised the future Nobel Laureate P.W. Anderson and the philosopher of science Thomas Kuhn. She completed her thesis, entitled Some Aspects of Paramagnetic Relaxation, in 1949 at the age of 22. Career After completing her thesis Rosenbluth won an Atomic Energy Commission postdoctoral fellowship to Stanford University which she attended before moving to a staff position at Los Alamos National Laboratory where her research focused on atomic bomb development and statistical mechanics. Along with Marshall Rosenbluth she verified analytic calculations for the Ivy Mike test using the SEAC at the National Bureau of Standards. Once the MANIAC I had been completed at Los Alamos she collaborated with Nicholas Metropolis, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller to develop the first Markov chain Monte Carlo algorithm, in particular the prototypical Metropolis–Hastings algorithm, in the seminal paper Equation of State Calculations by Fast Computing Machines. In close collaboration with her husband Marshall, she developed the implementation of the algorithm for the MANIAC I hardware, making her the first person to ever implement the Markov chain Monte Carlo method. Over the next few years Rosenbluth and Marshall applied the method to novel studies of statistical mechanical systems, including three-dimensional hard spheres and two-dimensional Lennard-Jones molecules and two and three-dimensional molecular chains. After the birth of her first child, Rosenbluth left research to focus on raising her family. Personal life While at Stanford University she met Marshall Rosenbluth and the two married on January 26, 1951. They had four children before divorcing in 1978. In 1956, she moved from Los Alamos to San Diego, California, and then Princeton, New Jersey, before finally settling in the greater Los Angeles area. She kept her married name after the divorce. Rosenbluth died from complications of COVID-19 in the greater Los Angeles area on December 28, 2020, during the COVID-19 pandemic in California. She was 93. References 1927 births 2020 deaths American nuclear physicists Monte Carlo methodologists Computational physicists American women computer scientists American computer scientists Los Alamos National Laboratory personnel Radcliffe College alumni Rice University alumni Women nuclear physicists Deaths from the COVID-19 pandemic in California
Arianna W. Rosenbluth
[ "Physics" ]
654
[ "Computational physicists", "Computational physics" ]
54,388,603
https://en.wikipedia.org/wiki/Selenopyrylium
Selenopyrylium is an aromatic heterocyclic compound consisting of a six-membered ring with five carbon atoms and a positively charged selenium atom. Naming and numbering Formerly it was named selenapyrylium. However, this is misleading as "selena" indicates that selenium substitutes for a carbon atom, but actually selenium is substituted for the oxygen atom in pyrylium. In the Hantzsch-Widman system of nomenclature, it is called seleninium. This is the name used by Chemical Abstracts. Replacement nomenclature would call this selenoniabenzene. Numbering in selenopyrylium starts with 1 on the selenium atom and counts up to 6 on the carbon atoms. The positions adjacent to the chalcogen, numbered 2 and 6, can also be called α, the next two positions 3 and 5 can be termed "β" and the opposite carbon at position 4 can be called "γ". Occurrence Because selenopyrylium is a positively charged ion, it takes the solid form as a salt with non-nucleophilic anions such as perchlorate, tetrafluoroborate, fluorosulfate, and hexafluorophosphate. Formation Selenopyrylium and its derivatives can be made from 1,5-diketones (such as glutaraldehyde) and hydrogen selenide, along with hydrogen chloride (HCl) as a catalyst using acetic acid as a solvent. A side product is 2,6-bis-(hydroseleno)selenacyclohexane. When 5-chloro-2,4-pentadienenitrile derivatives react with sodium hydroselenide, or sodium selenite, and are then treated with perchloric acid, a 2-amino-selenopyrylium perchlorate salt results. Properties The positive charge is not confined to the selenium atom, but distributes on the ring in several resonance structures, so that the α and γ positions have some positive charge. A nucleophilic attack targets these carbon atoms. Selenopyrylium has two prominent absorption bands in the ultraviolet spectrum: band I is at 3000 Å, and band II is at 2670 Å. Band I, also known as 1Lb, is from the 1B1←1A1 transition. The wavelength is longer and the band is much stronger than that of benzene. This is a bathochromic shift. The wavelength is longer than in thiopyrylium and pyrylium, but the intensity is weaker, due to selenium being less electronegative. Band II, also called 1La, is stronger and at longer wavelength than that of benzene, thiopyrylium and pyrylium. Band II is polarized in the direction of the Se–γ axis. The nuclear magnetic resonance spectrum shows a 10.98 ppm shift for H2 and H6, 8.77 for H3 and H5 and 9.03 for H4 (BF4− salt dissolved in CD3CN). Compared to the other pyryliums, the H2,6 shift is larger than for the oxygen or sulfur analogues, the H3,5 shift lies between those of the oxygen and sulfur analogues, and the H4 shift is very similar to that of thiopyrylium, but slightly lower. NMR for 13C shows the same trends as for the attached hydrogens. Solvents include trifluoroacetic acid, methanol, dichloromethane, chloroform, and acetonitrile. Derivatives Many derivatives of selenopyrylium are known with side chains attached to carbons 2, 3, or 6. Examples include 4-(p-dimethylaminophenyl)selenopyridinium, 2,6-diphenylselenopyridinium, 4-methyl-2,6-diphenylselenopyrylium, 2,4,6-triphenylselenopyrylium, 2,6-diphenyl-4-(p-dimethylaminophenyl)selenopyrylium, and 2,6-di-tert-butylselenopyrylium. Related When the ring is fused with other aromatic rings, larger aromatic structures such as selenochromenylium, selenoflavylium, and selenoxanthylium result.
See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium References Heterocyclic compounds with 1 ring Organoselenium compounds Aromatic compounds Cations Six-membered rings Selenium(−II) compounds Selenium heterocycles
Selenopyrylium
[ "Physics", "Chemistry" ]
1,057
[ "Matter", "Aromatic compounds", "Organic compounds", "Cations", "Ions" ]
54,389,578
https://en.wikipedia.org/wiki/%CE%944-Abiraterone
Δ4-Abiraterone (D4A; code name CB-7627), also known as 17-(3-pyridyl)androsta-4,16-dien-3-one, is a steroidogenesis inhibitor and active metabolite of abiraterone acetate, a drug which is used in the treatment of prostate cancer and is itself a prodrug of abiraterone (another active metabolite of abiraterone acetate). D4A is formed from abiraterone by 3β-hydroxysteroid dehydrogenase/Δ5-4 isomerase (3β-HSD). It is said to be a more potent inhibitor of steroidogenesis than abiraterone, and is partially responsible for the activity of abiraterone acetate. D4A is specifically an inhibitor of CYP17A1 (17α-hydroxylase/17,20-lyase), 3β-HSD, and 5α-reductase. In addition, it has also been found to act as a competitive antagonist of the androgen receptor (AR), with potency reportedly comparable to that of enzalutamide. However, the initial 5α-reduced metabolite of D4A, 3-keto-5α-abiraterone, is an agonist of the AR, and has been found to stimulate prostate cancer progression. The formation of this metabolite can be blocked by the coadministration of dutasteride, a selective and highly potent 5α-reductase inhibitor, and the addition of this medication may improve the effectiveness of abiraterone acetate in the treatment of prostate cancer. References 3β-Hydroxysteroid dehydrogenase inhibitors 5α-Reductase inhibitors Androstanes CYP17A1 inhibitors Hormonal antineoplastic drugs Human drug metabolites Enones Prostate cancer 3-Pyridyl compounds Steroidal antiandrogens
Δ4-Abiraterone
[ "Chemistry" ]
442
[ "Chemicals in medicine", "Human drug metabolites" ]
49,208,894
https://en.wikipedia.org/wiki/Overflow%20metabolism
Overflow metabolism refers to the seemingly wasteful strategy in which cells incompletely oxidize their growth substrate (e.g. glucose) instead of using the respiratory pathway, even in the presence of oxygen. As a result of employing this metabolic strategy, cells excrete (or "overflow") metabolites like lactate, acetate and ethanol. Incomplete oxidation of growth substrates yields less energy (e.g. ATP) than complete oxidation through respiration, and yet overflow metabolism—known as the Warburg effect in the context of cancer and the Crabtree effect in the context of yeast—occurs ubiquitously among fast-growing cells, including bacteria, fungi and mammalian cells. Based on experimental studies of acetate overflow in Escherichia coli, recent research has offered a general explanation for the association of overflow metabolism with fast growth. According to this theory, the enzymes required for respiration are more costly than those required for partial oxidation of glucose. That is, if the cell were to produce enough of these enzymes to support fast growth with respiratory metabolism, it would consume much more energy, carbon and nitrogen (per unit time) than supporting fast growth with an incompletely oxidative metabolism (e.g. fermentation). Given that cells have limited energy resources and fixed physical volume for proteins, there is thought to be a trade-off between efficient energy capture through central metabolism (i.e. respiration) and fast growth achieved through high central-metabolic fluxes (e.g. through fermentation as in yeast). As an alternative explanation, it was suggested that cells could be limited by the rate with which they can dissipate Gibbs energy to the environment. Using combined thermodynamic and stoichiometric metabolic models in flux balance analyses with (i) growth maximization as objective function and (ii) an identified limit in the cellular Gibbs energy dissipation rate, correct predictions of physiological parameters, intracellular metabolic fluxes and metabolite concentrations were achieved. See also Stream metabolism Metabolism References Metabolism
Overflow metabolism
[ "Chemistry", "Biology" ]
422
[ "Cellular processes", "Biochemistry", "Metabolism" ]
49,209,954
https://en.wikipedia.org/wiki/Oil%20constant
The term crude oil constant (Erdölkonstante in German) has been used as an inside joke and pun in the German petroleum industry, pointing out that the reserves-to-production ratio has remained roughly constant over the past decades, whereas oil constant (Ölkonstante in German) is a term describing various material properties of (vegetable and mineral) oils. Reasons for reserve expansion The so-called crude oil constant refers to the approximately constant estimate of the available petroleum reserves-to-production ratio R/P. The estimated duration until the available petroleum reserves are depleted at current production has remained around 40 years since the late 1980s. Prewar and immediately postwar estimates were sometimes lower: in 1919 as low as 9 years (USA) and in 1948 around 20 years (world), rising to about 35 years by the 1970s. Since then, however, the static-range value T = R/P has remained rather constant for decades despite rising oil consumption. Price elasticity of reserves One factor contributing to the apparent constancy of the R/P ratio is a neglect or misunderstanding of the fact that the term "proven reserves" does not refer to some absolute quantity of remaining oil that is thought to exist, but rather to the quantity of oil that can be economically extracted given the current price of oil and current oil-extraction technologies. Thus, either an increase in the price of oil or improvements in oil-extraction technologies can lead to an increase in the estimate of "proven reserves", since more-expensive-to-mine deposits such as tight oil become economically viable at a higher oil price, and because newer or more expensive enhanced oil recovery processes such as gas injection, steam injection, and hydraulic fracturing allow continued extraction of oil from fields that would have been abandoned at a lower price or with older technologies. Thus, it is possible for the "proven reserves of oil" (i.e., economically extractable reserves of oil) to keep pace with or even pull ahead of oil consumption at the current rate. Unconventional oil On the other hand, the reserves-to-production ratio is only one mathematical indicator of the geological inventory. More important than the size of the tank is the achievable production rate (the size of the spigot of the barrel, so to speak), and for many capital-intensive technologies that extract oil from non-conventional sources this flow rate is comparatively small. A large expansion of global reserves took place in the 2000s, when the Athabasca oil sands (Canada) and the heavy oil of the Orinoco Belt (Venezuela) were reclassified from (physically in place) resource to (producible) reserve. While these oil reserves are sizeable and in the same range as the reserves of Saudi Arabia, oil production is growing slowly in Canada and declining in Venezuela. OPEC quota wars Another contributing factor to the steady R/P ratio is the large expansion of OPEC reserves that were booked in the years around 1988. The OPEC quota system had been amended, tying each member's allowed production to its reported reserves. Within a few years, OPEC members raised their reserves on paper without reporting any major new discoveries. SEC reporting rules Oil companies listed on US stock exchanges or elsewhere are obliged to report their reserves conservatively, on the principle of prudence. As a result, a new discovery was first reported at its lowest estimate (P90 = high confidence).
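As a rough numerical illustration of the reasoning above, consider how modest annual reserve additions (revisions, reclassifications and new bookings) can hold the R/P ratio at about 40 years even while consumption rises. The figures below are hypothetical and chosen only to make the arithmetic visible; they are not actual reserve data.

```python
# Hypothetical illustration of a constant R/P ratio.
# If production grows 1% per year, annual bookings of roughly 1.4x production
# make reserves grow at the same 1% rate, so R/P stays near 40 years.
reserves, production = 1000.0, 25.0   # arbitrary units; initial R/P = 40 years
for year in range(1, 31):
    production *= 1.01                # consumption rises 1% per year
    additions = 1.4 * production      # assumed revisions plus new bookings
    reserves += additions - production
    if year % 10 == 0:
        print(f"year {year:2d}: R/P = {reserves / production:.1f} years")
```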
Later, during production, when the reservoir data became more detailed, the most likely estimate (P50) was reported, but without backdating this reserve expansion to the year of the discovery. Enhanced oil recovery techniques made it possible to produce the P10 value (10% probability), but again without backdating, so it seemed as if new discoveries had been made. Analogous use A similar pun has been used about the feasibility of fusion power: since the 1950s, feasible technological means of using fusion for electricity production have constantly been predicted as being 30–40 years ahead, so the "fusion constant" exhibits a similar range to the "oil constant". References Petroleum Pseudoscience Fusion power
Oil constant
[ "Physics", "Chemistry" ]
837
[ "Plasma physics", "Fusion power", "Petroleum", "Chemical mixtures", "Nuclear fusion" ]
62,907,910
https://en.wikipedia.org/wiki/Effective%20one-body%20formalism
The effective one-body or EOB approach is an analytical approach to the gravitational two-body problem in general relativity. It was introduced by Alessandra Buonanno and Thibault Damour in 1999. It aims to describe all different phases of the two-body dynamics in a single analytical method. Classical gravity theory allows analytical calculations to be made in particular limits, such as post-Newtonian theory in the early inspiral, when the objects are at large separation, or black hole perturbation theory, when the two objects differ greatly in mass. In addition, they lead to results faster than numerical relativity. Rather than being considered distinct from these independent approaches to the two-body problem, the EOB approach is a way to resum information from other independent methods. It does so by mapping the general two-body problem to that of a test particle in an effective metric. The EOB approach was used in the data analysis of gravitational wave detectors such as LIGO and Virgo. References General relativity Gravitational-wave astronomy
Effective one-body formalism
[ "Physics", "Astronomy" ]
212
[ "Astrophysics", "General relativity", "Relativity stubs", "Theory of relativity", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
62,913,193
https://en.wikipedia.org/wiki/Tetramethyl%20bisphenol%20F
Tetramethyl bisphenol F (TMBPF) is a compound intended as a safer replacement for bisphenol A and bisphenol F in the epoxy linings of aluminium cans and steel cans. It was previously suggested as an insulator in electronic circuit boards. Tetramethyl bisphenol F is polymerized with epichlorohydrin when heated between 40 and 70 °C, using an alkali as a catalyst, to form the resin used as a coating. Health and environmental effects It causes serious eye irritation, may cause respiratory and skin irritation, and is very toxic to aquatic life. Human endocrine effects TMBPF has been reported to have no effect on the endocrine system, and it does not leach out of cans because, unlike BPA, it is fully polymerized when deposited on the metal, leaving no free monomer to leach out. Tetramethyl bisphenol F was tested in rats for hormone-like (estrogenic or androgenic) activity and showed almost none, although a different study did report such effects. References Commodity chemicals Coatings Plasticizers
Tetramethyl bisphenol F
[ "Chemistry" ]
232
[ "Coatings", "Commodity chemicals", "Products of chemical industry" ]
62,919,056
https://en.wikipedia.org/wiki/EWS/FLI
EWS/FLI1 is an oncogenic protein that is pathognomonic for Ewing sarcoma. It is found in approximately 90% of all Ewing sarcoma tumors with the remaining 10% of fusions substituting one fusion partner with a closely related family member (e.g. ERG for FLI1). Origin EWSR1 is a gene on chromosome 22 whose mRNA is translated into the protein Ewing sarcoma breakpoint region 1 (abbreviated EWS). The gene FLI1 resides on chromosome 11 where it encodes a member of the ETS transcription factor family, Friend leukemia integration 1 transcription factor (abbreviated FLI1). Most fusions between EWS and FLI1 result from a t(11;22)(q24;q12) reciprocal chromosome translocation. This translocation creates a chimeric transcript which fuses exons 1-7 of EWSR1 to exons 6-9 (or less commonly 5-9) of FLI1. It has recently been appreciated that almost half of EWS and FLI1 fusions are a result of chromoplexy. Evidence of chromoplectic looping is enriched in both metastatic and p53 mutant tumors. Chromoplectic looping appears to be the mechanism involved in forming the EWS/ERG variant transcription factor. This preference is probably due to EWSR1 and ERG being in opposite orientations on the genome precluding the production of functional EWS/ERG via a reciprocal translocation. Molecular Biology EWS/FLI1 functions as both a pioneering transcription factor and potent oncogene. Its expression leads to a complete restructuring of the transcriptome of the cell of origin to favor a tumorigenic state. EWS/FLI1 accomplishes this through a set of complementary mechanisms: The N-terminus of EWS/FLI1 retains the prion-like transactivation domain of EWSR1. This allows EWS/FLI1 to both bind RNA polymerase II and recruit the BAF complex. These interactions change heterochromatin to euchromatin at EWS/FLI1 DNA-binding sites effectively generating de novo enhancers. The C-terminus of EWS/FLI1 retains the DNA-binding domain of FLI1. While wild-type FLI1 recognizes an ACCGGAAG core sequence, EWS/FLI1 preferentially binds GGAA-repetitive regions. There is a positive correlation between the number of consecutive GGAA microsatellites, EWS/FLI1 binding, and target gene expression. The core motif of ETS transcription factors includes a GGAA sequence. EWS/FLI1 may bind to such sequences with greater affinity than the wild-type ETS member disrupting the normal regulation of ETS target genes. References Proteins Sarcoma Cancer
EWS/FLI
[ "Chemistry" ]
605
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology" ]
71,563,068
https://en.wikipedia.org/wiki/Promethium%28III%29%20phosphate
Promethium(III) phosphate is an inorganic compound, a salt of promethium and phosphate, with the chemical formula PmPO4. It is radioactive. Its hydrate can be obtained by precipitation of a soluble promethium salt with diammonium hydrogen phosphate at pH 3–4 (or by hydrothermal reaction), and heating the hydrate at 960 °C gives the anhydrous form. Its standard enthalpy of formation is −464 kcal/mol. References Promethium compounds Phosphates
Promethium(III) phosphate
[ "Chemistry" ]
115
[ "Salts", "Phosphates", "Inorganic compounds", "Inorganic compound stubs" ]
71,564,421
https://en.wikipedia.org/wiki/Kay%20Kinoshita
Kay Kinoshita is an experimental particle physicist. She is a professor at the University of Cincinnati. Kinoshita completed her undergraduate degree in physics at Harvard University in 1976 and her PhD at the University of California, Berkeley, in 1981. She then returned to work at Harvard, before becoming a full professor at Virginia Tech in 1993. She is currently a professor at the University of Cincinnati and was head of its physics department from 2009 to 2016. Her research investigates topics such as dark matter. She was elected a 2020 Fellow of the American Physical Society for "innovative contributions to the study of b-quarks and for leadership in accelerator searches for magnetic monopoles." References External links Living people Harvard University alumni University of California, Berkeley alumni University of Cincinnati faculty Experimental physicists American women physicists Fellows of the American Physical Society Year of birth missing (living people)
Kay Kinoshita
[ "Physics" ]
171
[ "Experimental physics", "Experimental physicists" ]
71,566,375
https://en.wikipedia.org/wiki/Claudia%20Cenedese
Claudia Cenedese (born 1971) is an Italian physical oceanographer and applied mathematician whose research focuses on the circulation and flow of water in the ocean, and on the theoretical fluid dynamics needed to model these flows, including phenomena such as mesoscale vortices, buoyancy-driven flow, coastal currents, dense overflows, and the melting patterns of icebergs. She is a senior scientist at the Woods Hole Oceanographic Institution. Education and career Cenedese's father, Antonio Cenedese, is also a fluid dynamics researcher at Sapienza University of Rome, and as a child, she became fascinated by the motion of water in his experimental tanks. She earned a laurea in environmental engineering from Sapienza University in 1995, and completed a Ph.D. in applied mathematics and theoretical physics at the University of Cambridge in 1998, under the supervision of Paul Linden. She came to the Woods Hole Oceanographic Institution in 1998 as a postdoctoral scholar working with John A. Whitehead. She remained there for the rest of her career, becoming an assistant scientist in 2000 and obtaining a permanent research staff position in 2004. She was promoted to senior scientist in 2015. At Woods Hole, she established an exchange program for Italian students to visit, and has been active in mentoring women in oceanography. Since 2015, she has also held an adjunct faculty position in the Department of Civil and Natural Resources Engineering of the University of Canterbury in New Zealand. Recognition In 2018, Cenedese was elected as a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Fluid Dynamics, "for fundamental contributions to the understanding of fluid-dynamical processes in the world's oceans, particularly turbulent entrainment into overflows and the melting of glaciers and icebergs, obtained through elegant and physically insightful laboratory experiments". References External links Home page 1971 births Living people Physical oceanographers Women oceanographers Italian mathematicians Italian women mathematicians Applied mathematicians Fluid dynamicists Sapienza University of Rome alumni Alumni of the University of Cambridge Woods Hole Oceanographic Institution Fellows of the American Physical Society Italian oceanographers
Claudia Cenedese
[ "Chemistry", "Mathematics" ]
436
[ "Applied mathematics", "Applied mathematicians", "Fluid dynamicists", "Fluid dynamics" ]
71,570,180
https://en.wikipedia.org/wiki/Talent%20scheduling
Talent scheduling represents a complex optimization challenge within the fields of computer science and operations research, specifically categorized under combinatorial optimization. Consider, for example, a case involving the production of multiple films, each comprising several scenes that necessitate the participation of one or more actors. Importantly, only one scene can be filmed per day, and the remuneration for the actors is calculated on a daily basis. A critical constraint in this problem is that actors must be engaged for consecutive days; for instance, an actor cannot be contracted for filming on the first and third days without also being hired on the intervening second day. Furthermore, during the entire hiring period, producers are obligated to compensate the actors, even on days when they are not actively participating in filming. The primary objective of talent scheduling is to minimize the total salary expenditure for the actors by optimizing the sequence in which scenes are filmed. Mathematical formulation Consider a film shoot composed of $n$ shooting days and involving a total of $m$ actors. The requirements for the various shooting days are represented by the day-out-of-days matrix (DODM) $T$, with entries $t_{ij} = 1$ if actor $i$ is required on shooting day $j$ and $t_{ij} = 0$ otherwise. The pay vector is $c = (c_1, \ldots, c_m)$, where the $i$th element $c_i$ is the rate of pay per day of the $i$th actor. Let $\sigma$ denote any permutation of the $n$ columns of $T$, drawn from the permutation set of the $n$ shooting days, and let $T(\sigma)$ be the matrix with its columns permuted according to $\sigma$. Let $e_i(\sigma)$ and $l_i(\sigma)$ denote respectively the earliest and the latest day in the schedule determined by $\sigma$ which require actor $i$, so actor $i$ will be hired for $l_i(\sigma) - e_i(\sigma) + 1$ days. Of these days, only $r_i = \sum_{j=1}^{n} t_{ij}$ are actually required, which means $l_i(\sigma) - e_i(\sigma) + 1 - r_i$ days are unnecessary. The total cost of unnecessary days, $K(\sigma) = \sum_{i=1}^{m} c_i \left( l_i(\sigma) - e_i(\sigma) + 1 - r_i \right)$, is the objective function to be minimized. Proof of strong NP-hardness It can be proved that the talent scheduling problem is NP-hard by a reduction from the optimal linear arrangement (OLA) problem. Even if the problem is restricted by requiring that each actor is needed for just two days and all actors' salaries are 1, the OLA problem is still polynomially reducible to it. Thus, this problem is unlikely to have a pseudo-polynomial time algorithm. Integer programming In the integer programming formulation, decision variables record, for each talent $i$, the earliest shooting day and the latest shooting day on which that talent is hired, together with variables encoding the scheduling (the ordering of the shooting days) for the project; the objective is the total cost of unnecessary days defined above. References Optimal scheduling NP-complete problems
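The objective function above is easy to evaluate for any fixed ordering of the scenes, and for tiny instances the optimum can be found by brute force over all permutations. The following is a minimal illustrative sketch (the instance data are hypothetical); practical exact methods instead use dynamic programming over subsets or branch and bound.

```python
from itertools import permutations

def schedule_cost(dodm, pay, order):
    """Cost of unnecessary hold days for a given scene order.
    dodm[i][j] == 1 iff actor i is required in scene j; pay[i] is actor i's daily rate."""
    cost = 0
    for i, row in enumerate(dodm):
        days = [pos for pos, scene in enumerate(order) if row[scene]]
        hired = days[-1] - days[0] + 1 if days else 0   # span from first to last on-set day
        cost += pay[i] * (hired - len(days))            # paid days minus required days
    return cost

# Tiny hypothetical instance: 3 actors, 4 scenes (one scene per day).
dodm = [[1, 0, 0, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 0]]
pay = [10, 1, 5]

best = min(permutations(range(4)), key=lambda o: schedule_cost(dodm, pay, o))
print(best, schedule_cost(dodm, pay, best))
```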
Talent scheduling
[ "Mathematics", "Engineering" ]
507
[ "Optimal scheduling", "Industrial engineering", "Computational problems", "Mathematical problems", "NP-complete problems" ]
71,572,562
https://en.wikipedia.org/wiki/Passau%20Glass%20Museum
The Passau Glass Museum has the largest collections in the world of European art glass, Bohemian glass, and glass made by Johann Loetz. The museum is listed as a "Nationally Valuable Cultural Property". It is located at Schrottgasse 2, D-94032 on the Rathaus or town hall square in the old town of Passau. It is connected to the Hotel Wilder Mann. The museum was founded by Georg Hoeltl. It covers five floors across four buildings which have been joined together. The top floor, the size of a soccer field, is the first exhibition hall. Hoeltl also owns the Hotel Wilder Mann, to which the museum is attached. The museum was opened on March 15, 1985, with US astronaut Neil Armstrong as the guest of honor. The museum's collection of European art glass includes over 30,000 pieces, 13,000 of which are on display. It includes the largest collection in the world of Bohemian glass from Bohemia and Silesia. The areas were rich in silica, limestone, potash and other materials used in making high quality glass. Bohemian glass was made in different styles and often involves crystal engraving, hand enameling, and iridescence. The Passau Glass Museum also includes the largest collection of glass made by Johann Loetz, a Bohemian glassmaker whose highly iridescent work rivals that of Louis Comfort Tiffany. The museum documents the history of glass in 25 rooms spanning 1650 to 1950: 1650 being considered a starting point for glass making as an art form in Europe. Among the rooms are exhibits on the Baroque era (1590-1750), the Empire periods (1650 - 1820), the Biedermeier period (mid-1800s), Classicism, the Historicism period (1850-1895), the Johann Loetz workshop (1880-1940), Ludwig Moser & Sons, Art Nouveau, Art Deco and Modern art styles. References Museums in Germany Glass museums and galleries History of glass Museums established in 1985
Passau Glass Museum
[ "Materials_science", "Engineering" ]
406
[ "Glass engineering and science", "Glass museums and galleries" ]
47,439,551
https://en.wikipedia.org/wiki/Penicillium%20roseopurpureum
Penicillium roseopurpureum is an anamorph species of fungus in the genus Penicillium which produces Carviolin. References Further reading roseopurpureum Fungi described in 1901 Fungus species
Penicillium roseopurpureum
[ "Biology" ]
47
[ "Fungi", "Fungus species" ]
51,531,969
https://en.wikipedia.org/wiki/Nonadiabatic%20transition%20state%20theory
Nonadiabatic transition state theory (NA-TST) is a powerful computational tool for predicting the rates of chemical reactions. NA-TST was introduced in 1988 by Prof. J. C. Lorquet. In general, all of the assumptions made in traditional transition state theory (TST) are also used in NA-TST, but with some corrections. First, a spin-forbidden reaction proceeds through the minimum energy crossing point (MECP) rather than through a transition state (TS). Second, unlike in TST, the probability of transition is not equal to unity during the reaction and is treated as a function of the internal energy associated with the reaction coordinate. At this stage, the couplings responsible for mixing between the states are the driving force of the transition; for example, the larger the spin-orbit coupling at the MECP, the larger the probability of transition. NA-TST reduces to traditional TST in the limit of unit transition probability. References Chemical physics
Nonadiabatic transition state theory
[ "Physics", "Chemistry" ]
205
[ "nan", "Applied and interdisciplinary physics", "Chemical physics" ]
51,534,333
https://en.wikipedia.org/wiki/IMM-101
IMM-101 is an immunomodulatory drug that is being studied to see if it is useful in chemotherapy. It consists of heat-killed Mycobacterium obuense bacteria. It may have relatively few side effects compared to other drugs. References Experimental cancer drugs Chemotherapy
IMM-101
[ "Chemistry" ]
61
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,644,033
https://en.wikipedia.org/wiki/Session%20type
In type theory, session types are used to ensure correctness in concurrent programs. They guarantee that messages sent and received between concurrent programs are in the expected order and of the expected type. Session type systems have been adapted for both channel and actor systems. Session types are used to ensure desirable properties in concurrent and distributed systems, i.e. absence of communication errors or deadlocks, and protocol conformance. Binary versus multiparty session types Interaction between two processes can be checked using binary session types, while interactions between more than two processes can be checked using multiparty session types. In multiparty session types interactions between all participants are described using a global type, which is then projected into local types that describe communication from the local view of each participant. Importantly, the global type encodes the sequencing information of the communication, which would be lost if we were to use binary session types to encode the same communication. Formal definition of binary session types Binary session types can be described using send operations (), receive operations (), branches (), selections (), recursion () and termination (). For example, represents a session type which first sends a boolean (), then receives an integer () before finally terminating (). Implementations Session types have been adapted for several existing programming languages, including: lchannels (Scala) Effpi (Scala) STMonitor (Scala) EnsembleS Session-types (Rust) sesh (Rust) Session Actors (Python) Monitored Session Erlang (Erlang) FuSe (OCaml) session-ocaml (OCaml) Priority Sesh (Haskell) Java Typestate Checker (Java) Swift Sessions (Swift) References Concurrency (computer science) Type theory Type systems
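Python has no static session-type checker, but the idea behind a binary session type such as "send a boolean, then receive an integer, then terminate" can be sketched as a runtime monitor that rejects out-of-order or ill-typed messages. This is a hypothetical illustration of the concept only; it is not the API of any of the libraries listed above, and the class and method names are invented for the example.

```python
# Runtime monitor for the binary session type  !bool . ?int . end
class SessionChannel:
    def __init__(self, protocol):
        self.protocol = list(protocol)   # e.g. [("send", bool), ("recv", int)]
        self.outbox = []                 # messages we send (stand-in for a transport)
        self.inbox = []                  # messages the peer has sent us

    def _expect(self, action, value_type):
        if not self.protocol or self.protocol[0] != (action, value_type):
            raise TypeError(f"protocol violation: expected {self.protocol[:1]}, "
                            f"got {(action, value_type)}")
        self.protocol.pop(0)

    def send(self, value):
        self._expect("send", type(value))
        self.outbox.append(value)

    def recv(self, value_type):
        self._expect("recv", value_type)
        return self.inbox.pop(0)

ch = SessionChannel([("send", bool), ("recv", int)])
ch.send(True)          # ok: matches !bool
ch.inbox.append(7)     # pretend the peer replied with an int
print(ch.recv(int))    # ok: matches ?int
# ch.send(True)        # would raise TypeError: the session has already terminated
```

A static session-type system enforces the same discipline at compile time, so the violation in the last line would be rejected before the program runs.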
Session type
[ "Mathematics", "Technology" ]
361
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Computer science stubs", "Type systems", "Type theory", "Computer science", "Computing stubs" ]
44,346,577
https://en.wikipedia.org/wiki/Eskimo%20yo-yo
An Eskimo yo-yo or Alaska yo-yo (; ) is a traditional two-balled skill toy played and performed by the Eskimo-speaking Alaska Natives, such as Inupiat, Siberian Yupik, and Yup'ik. It resembles fur-covered bolas and yo-yo. It is regarded as one of the most simple, yet most complex, cultural artifacts/toys in the world. The Eskimo yo-yo involves simultaneously swinging two sealskin balls suspended on caribou sinew strings in opposite directions with one hand. It is popular with Alaskans and tourists alike. This traditional toy is two unequal lengths of twine, joined together, with hand-made leather objects (balls, bells, hearts) at the ends of the twine. The object of the Eskimo yo-yo is to make the balls circle in opposite directions at the same time. Each cord is a different length to allow the balls to pass without striking one another, and the balls are powered by centripetal force (as they rise the performer pumps down, while they fall the performer pumps up). This basic trick may be referred to as the "Eskimo orbit", and the orbit may be performed vertically, horizontally, or (horizontally) above one's head. Other tricks or patterns include atypical beginnings and wrapping and/or bouncing the strings around a part of one's body and then continuing with the orbit. A three-ball version of the Eskimo yo-yo also exists, and this requires all three balls to be moving at the same time. The objects at the end of the string are made in a variety of shapes, ranging from seals, ptarmigan feet and dolls, to miniature mukluks and simple balls. The handle may be wood, bone, or ivory, as well as baleen. Many are plainly decorated; others display elaborate decorations, fine beadwork, and intricate details. The Eskimo yo-yo is bola, toy, and art form all rolled into one. One of their most popular forms of the Alaska Native art are yo-yos. Also, this is a popular tourist art found in gift shops across Alaska. See: Indian Arts and Crafts Act of 1990. Much like the spinning top (e.g. Maxwell's top), the yo-yo may also be used to demonstrate visual properties such as optical rotation and circular dichroism. Though the early history of the Eskimo yo-yo is not recorded, Eskimos maintain that this game originated as an important and widely used hunting tool made simply with sinew and bones, the bola. It possibly evolved on St. Lawrence Island from the similarly constructed sinew and rock bolas used in bird hunting. See also Astrojax Blanket toss Clackers Eskimo bowline Euler top Gyroscope Meteor (juggling) Poi (performance art) Whirly tube Bolas Meteor hammer Footnotes References Further reading Kiana, Chris (1986). Eskimo Yo Yo Tricks: 50 Tricks Instructional Book with Eskimo Customs & Legends Paperback. H&K. ASIN: B00P0GWUDE. Kiana, Chris (1997). Alaska Eskimo Yo-Yo. VHS. Takotna Video, Alaska Eskimo Yo-Yo Company Inc. ASIN: B000UFSP8E. Kiana, Chris (2009). Chris Kiana's Educational Eskimo Yo-yo. DVD. Takotna Video, Alaska Eskimo Yo-Yo Company Inc. External links Inupiat culture Rotation Traditional toys Yo-yos Yupik culture
Eskimo yo-yo
[ "Physics" ]
765
[ "Physical phenomena", "Motion (physics)", "Classical mechanics", "Rotation" ]
44,353,369
https://en.wikipedia.org/wiki/Induced-charge%20electrokinetics
Induced-charge electrokinetics in physics is the electrically driven fluid flow and particle motion in a liquid electrolyte. Consider a metal particle (which is neutrally charged but electrically conducting) in contact with an aqueous solution in a chamber/channel. If different voltages apply to the end of this chamber/channel, electric field will generate in this chamber/channel. This applied electric field passes through this metal particle and causes the free charges inside the particle migrate under the skin of particle. As a result of this migration, the negative charges move to the side which is close to the positive (or higher) voltage while the positive charges move to the opposite side of the particle. These charges under the skin of the conducting particle attract the counter-ions of the aqueous solution; thus, the electric double layer (EDL) forms around the particle. The EDL sign on the surface of the conducting particle changes from positive to negative and the distribution of the charges varies along the particle geometry. Due to these variations, the EDL is non-uniform and has different signs. Thus, the induced zeta potential around the particle, and consequently slip velocity on the surface of the particle, vary as a function of the local electric field. Differences in magnitude and direction of slip velocity on the surface of the conducting particle effects the flow pattern around this particle and causes micro vortices. Yasaman Daghighi and Dongqing Li, for the first time, experimentally illustrated these induced vortices around a 1.2mm diameter carbon-steel sphere under the 40V/cm direct current (DC) external electric filed. Chenhui Peng et al. also experimentally showed the patterns of electro-osmotic flow around an Au sphere when alternating current (AC) is involved (E=10mV/μm, f=1 kHz). Electrokinetics here refers to a branch of science related to the motion and reaction of charged particles to the applied electric filed and its effects on its environment. It is sometimes referred as non-linear electrokinetic phenomena as well. History Levich is one of the pioneers in induced-charge electrokinetic field. He calculated the perturbed slip profile around a conducting particle in contact with electrolyte. He also theoretically predicted that vortices induced around this particle once the electric filed is applied. Induced vortices around a conducting particle The size and strength of the induced vortices around a conducting particle have direct relationship with the applied electric filed and also the size of the conducted surface. This phenomenon is experimentally and numerically proven by several studies. The vortices grow as the external electric field increases and generate "sinkhole" at the center of the each vortex while circulates the fluid faster. It is demonstrated that increasing the size of the conducting surface forms bigger induced vortices to the point that geometry does not limits this grows. Applications The induced vortices have many applications in various aspects of electrokinetic microfluidics. There are many micro-mixers that are designed and fabricated based on the existence of their induced vortices in the microfluidics devices. Such micro-mixers which are used for biochemical, medicine, biology applications has no mechanical parts and only use conducting surfaces to generate induced vortices to mix the different fluid streams. This phenomenon even is used to trap the micron and submicron particles floating in flow inside a micro-channel. 
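The scaling behind these induced vortices can be sketched as follows: for an ideally polarizable particle of radius a in a field E0, the induced zeta potential varies roughly as ζ(θ) ≈ E0·a·cos θ around the surface, and the Helmholtz–Smoluchowski relation u_s = −ε ζ E_t / μ then gives a slip velocity of order ε a E0²/μ that reverses direction over the surface, which is what drives the vortices. The short script below is an order-of-magnitude sketch only; the geometry-dependent numerical prefactors (of order one) are deliberately omitted, and the material constants are assumed values for water.

```python
import numpy as np

# Order-of-magnitude sketch of induced-charge electro-osmotic (ICEO) slip
# around an ideally polarizable sphere; O(1) geometric prefactors omitted.
eps = 7.1e-10      # permittivity of water, F/m (assumed)
mu = 1.0e-3        # dynamic viscosity of water, Pa*s (assumed)
a = 0.6e-3         # particle radius, m (a 1.2 mm sphere)
E0 = 40e2          # applied field, V/m (40 V/cm)

theta = np.linspace(0.0, np.pi, 7)
zeta = E0 * a * np.cos(theta)        # induced zeta potential ~ E0 * a * cos(theta)
E_t = -E0 * np.sin(theta)            # tangential field component along the surface
u_slip = -eps * zeta * E_t / mu      # Helmholtz-Smoluchowski slip velocity

print("characteristic ICEO velocity eps*a*E0^2/mu =", eps * a * E0**2 / mu, "m/s")
for t, u in zip(theta, u_slip):
    print(f"theta = {t:.2f} rad, slip ~ {u:+.2e} m/s")
```

The slip varies as sin 2θ, so it points in opposite directions on adjacent quadrants of the particle surface, producing the characteristic quadrupolar vortex pattern described above.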
This method can be used to manipulate, detect, handle, and concentrate cells and viruses in the biomedical field, or for colloidal particle assembly. In addition, the induced vortices around conducting surfaces in a microfluidic system can be used as micro-valves, micro-actuators, micro-motors, and micro-regulators to direct and manipulate the flow. See also Diffusiophoresis Electro-osmosis Electrophoresis Lab-on-a-chip Surface charge References Microfluidics Fluid dynamics Biotechnology Electrochemistry
Induced-charge electrokinetics
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
821
[ "Microfluidics", "Microtechnology", "Chemical engineering", "Biotechnology", "Electrochemistry", "nan", "Piping", "Fluid dynamics" ]
44,354,911
https://en.wikipedia.org/wiki/MXenes
In materials science, MXenes are a class of two-dimensional inorganic compounds along with MBenes, that consist of atomically thin layers of transition metal carbides, nitrides, or carbonitrides. MXenes accept a variety of hydrophilic terminations. The first MXene was reported in 2011 at Drexel University's College of Engineering. Structure As-synthesized MXenes prepared via HF etching have an accordion-like morphology, which can be referred to as multi-layer MXene (ML-MXene), or few-layer MXene (FL-MXene) given fewer than five layers. Because the surfaces of MXenes can be terminated by functional groups, the naming convention Mn+1XnTx can be used, where T is a functional group (e.g. O, F, OH, Cl). Mono transition MXenes adopt three structures with one metal on the M site, as inherited from the parent MAX phases: M2C, M3C2, and M4C3. They are produced by selectively etching out the A element from a MAX phase or other layered precursor (e.g., Mo2Ga2C), which has the general formula Mn+1AXn, where M is an early transition metal, A is an element from group 13 or 14 of the periodic table, X is C and/or N, and n = 1–4. MAX phases have a layered hexagonal structure with P63/mmc symmetry, where M layers are nearly closed packed and X atoms fill octahedral sites. Therefore, Mn+1Xn layers are interleaved with the A element, which is metallically bonded to the M element. Double transition Double transition metal MXenes can take two forms, ordered double transition metal MXenes or solid solution MXenes. For ordered double transition metal MXenes, they have the general formulas: M'2M"C2 or M'2M"2C3 where M' and M" are different transition metals. Double transition metal carbides that have been synthesized include Mo2TiC2, Mo2Ti2C3, Cr2TiC2, and Mo4VC4. In some of these MXenes (such as Mo2TiC2, Mo2Ti2C3, and Cr2TiC2), the Mo or Cr atoms are on outer edges of the MXene and these atoms control electrochemical properties of the MXenes. For solid-solution MXenes, they have the general formulas: (M'2−yM"y)C, (M'3−yM"y)C2, (M'4−yM"y)C3, or (M'5−yM"y)C4, where the metals are randomly distributed throughout the structure in solid solutions leading to continuously tailorable properties. Divacancy By designing a parent 3D atomic laminate, (Mo2/3Sc1/3)2AlC, with in-plane chemical ordering, and by selectively etching the Al and Sc atoms, there is evidence for 2D Mo1.33C sheets with ordered metal divacancies. Synthesis MXenes are typically synthesized by a top-down selective etching process. This synthetic route is scalable, with no loss or change in properties as the batch size is increased. Producing a MXene by etching a MAX phase occurs mainly by using strong etching solutions that contain a fluoride ion (F−), such as hydrofluoric acid (HF), ammonium bifluoride (NH4HF2), and a mixture of hydrochloric acid (HCl) and lithium fluoride (LiF). For example, etching of Ti3AlC2 in aqueous HF at room temperature causes the A (Al) atoms to be selectively removed, and the surface of the carbide layers becomes terminated by O, OH, and/or F atoms. MXene can also be obtained in Lewis acid molten salts, such as ZnCl2, and a Cl terminal can be realized. The Cl-terminated MXene is structurally stable up to 750 °C. A general Lewis acid molten salt approach was proven viable to etch most of MAX phases members (such as MAX-phase precursors with A elements Si, Zn, and Ga) by some other melts (CdCl2, FeCl2, CoCl2, CuCl2, AgCl, and NiCl2). 
The MXene Ti4N3 was the first nitride MXene reported, and is prepared by a different procedure than those used for carbide MXenes. To synthesize Ti4N3, the MAX phase Ti4AlN3 is mixed with a molten eutectic fluoride salt mixture of lithium fluoride, sodium fluoride, and potassium fluoride and treated at elevated temperatures. This procedure etches out Al, yielding multilayered Ti4N3, which can further be delaminated into single and few layers by immersing the MXene in tetrabutylammonium hydroxide, followed by sonication. MXenes can also be synthesized directly or via CVD processes. Recently, single crystalline monolayer W5N6 has been successfully synthesized by CVD in wafer scale which shows promise of MXenes in electronic application in the future. Since their first discovery, scientists have sought a more effective and efficient synthesis process. In a 2018 report, Peng et al. described a hydrothermal etching technique. In this etching method, the MAX phase is treated in the solution of acid and salt under high pressure and temperature conditions. The method is more effective in producing MXene dots and nano-sheets. Moreover, it is safer since there is no release of HF fumes during the etching process. Types 2-1 MXenes: Ti2C, V2C, Nb2C, Mo2C Mo2N, Ti2N, (Ti2−yNby)C, (V2−yNby)C, (Ti2−yVy)C, W1.33C, Nb1.33C, Mo1.33C, Mo1.33Y0.67C 3-2 MXenes: Ti3C2 , Ti3CN, Zr3C2 and Hf3C2 4-3 MXenes: Ti4N3, Nb4C3 , Ta4C3 , V4C3, (Mo,V)4C3 5-4 MXenes: Mo4VC4 Double transition metal MXenes: 2-1-2 MXenes: Mo2TiC2, Cr2TiC2, Mo2ScC2 2-2-3 MXenes: Mo2Ti2C3 Covalent surface modification 2D transition-metal carbides surfaces can be chemically transformed with a variety of functional groups such as O, NH, S, Cl, Se, Br, and Te surface terminations as well as bare MXenes. The strategy involves installation and removal of the surface groups by performing substitution and elimination reactions in molten inorganic salts. Covalent bonding of organic molecules to MXene surfaces has been demonstrated through reaction with aryl diazonium salts. Moreover, heating and re-termination experiments of Ti3C2Tx have shown that H2O, with a strong bonding to the Ti-Ti bridge-sites, can be considered as a termination species. An O and H2O terminated Ti3C2Tx-surface restricts the CO2 adsorption to the Ti on-top sites and may reduce the ability to store positive ions, such as Li+ and Na+. On the other hand, an O and H2O terminated Ti3C2Tx-surface shows the capability to split water . Intercalation and delamination Since MXenes are layered solids and the bonding between the layers is weak, intercalation of the guest molecules in MXenes is possible. Guest molecules include dimethyl sulfoxide (DMSO), hydrazine, and urea. For example, N2H4 (hydrazine) can be intercalated into Ti3C2(OH)2 with the molecules parallel to the MXene basal planes to form a monolayer. Intercalaction increases the MXene c lattice parameter (crystal structure parameter that is directly proportional to the distance between individual MXene layers), which weakens the bonding between MX layers. Ions, including Li+, Pb2+, and Al3+, can also be intercalated into MXenes, either spontaneously or when a negative potential is applied to a MXene electrode. Delamination Ti3C2 MXene produced by HF etching has accordion-like morphology with residual forces that keep MXene layers together preventing separation into individual layers. Although those forces are quite weak, ultrasound treatment results only in very low yields of single-layer flakes. 
For large scale delamination, DMSO is intercalated into ML-MXene powders under constant stirring to further weaken the interlayer bonding and then delaminated with ultrasound treatment. This results in large scale layer separation and formation of the colloidal solutions of the FL-MXene. These solutions can later be filtered to prepare MXene "paper" (similar to Graphene oxide paper). MXene clay For the case of Ti3C2Tx and Ti2CTx, etching with concentrated hydrofluoric acid leads to open, accordion-like morphology with a compact distance between layers (this is common for other MXene compositions as well). To be dispersed in suspension, the material must be pre-intercalated with something like dimethylsulfoxide. However, when etching is conducted with hydrochloric acid and LiF as a fluoride source, morphology is more compact with a larger inter-layer spacing, presumably due to amounts of intercalated water. The material has been found to be 'clay-like': as seen in clay materials (e.g. smectite clays and kaolinite), Ti3C2Tx demonstrates the ability to expand its interlayer distance hydration and can reversibly exchange charge-balancing Group I and Group II cations. Further, when hydrated, the MXene clay becomes pliable and can be molded into desired shapes, becoming a hard solid upon drying. Unlike most clays, however, MXene clay shows high electrical conductivity upon drying and is hydrophilic, and disperses into single layer two-dimensional sheets in water without surfactants. Further, due to these properties, it can be rolled into free-standing, additive-free electrodes for energy storage applications. Material processing MXenes can be solution-processed in aqueous or polar organic solvents, such as water, ethanol, dimethyl formamide, propylene carbonate, etc., enabling various types of deposition via vacuum filtration, spin coating, spray coating, dip coating, and roll casting. There have been studies conducted on ink-jet printing of additive free Ti3C2Tx inks and inks composed of Ti3C2Tx and proteins. Lateral flake size often plays a role in the observed properties and there are several synthetic routes that produce varying degrees of flake size. For example, when HF is used as an etchant, the intercalation and delamination step will require sonication to exfoliate material into single flakes, resulting in flakes that are several hundreds of nanometers in lateral size. This is beneficial for applications such as catalysis and select biomedical and electrochemical applications. However, if larger flakes are warranted, especially for electronic or optical applications, defect-free and large area flakes are necessary. This can be achieved by Minimally Intensive Layer Delamination (MILD) method, where the quantity of LiF to MAX phase is scaled up resulting in flakes that can be delminated in situ when washing to neutral pH. Post-synthesis processing techniques to tailor the flake size have also been investigated, such as sonication, differential centrifugation, and density gradient centrifugation procedures. Post processing methods rely heavily on the as-produced flake size. Using sonication allows for a decrease in flake size from 4.4 μm (as-produced), to an average of 1.0 μm after 15 minutes of bath sonication (100 W, 40 kHz), down to 350 nm after 3 hours of bath sonication. By utilizing probe sonication (8 s ON, 2 s OFF pulse, 250 W), flakes were reduced to an average of 130 nm in lateral size. 
Differential centrifugation, also known as cascading centrifugation, can be used to select flakes based on lateral size by increasing the centrifuge speed sequentially from low speeds (e.g. 1000 rpm) to high speeds (e.g., 10000 rpm) and collecting the sediment. When this was performed, "large" (800 nm), "medium" (300 nm) and "small" (110 nm) flakes can be obtained. Density gradient centrifugation is also another method for selecting flakes based on lateral size, where a density gradient is employed in the centrifuge tube and flakes move through the centrifuge tube at different rates based on the flake density relative to the medium. In the case of sorting MXenes, a sucrose and water density gradient can be used from 10 to 66 w/v %. Using density gradients allows for more mono-disperse distributions in flake sizes and studies show the flake distribution can be varied from 100 to 10 μm without employing sonication. Properties With a high electron density at the Fermi level, MXene monolayers are predicted to be metallic. In MAX phases, N(EF) is mostly M 3d orbitals, and the valence states below EF are composed of two sub-bands. One, sub-band A, made of hybridized Ti 3d-Al 3p orbitals, is near EF, and another, sub-band B, −10 to −3 eV below EF which is due to hybridized Ti 3d-C 2p and Ti 3d-Al 3s orbitals. Said differently, sub-band A is the source of Ti-Al bonds, while sub-band B is the source of Ti-C bond. Removing A layers causes the Ti 3d states to be redistributed from missing Ti-Al bonds to delocalized Ti-Ti metallic bond states near the Fermi energy in Ti2, therefore N(EF) is 2.5–4.5 times higher for MXenes than MAX phases. Experimentally, the predicted higher N(EF) for MXenes has not been shown to lead to higher resistivities than the corresponding MAX phases. The energy positions of the O 2p (~6 eV) and the F 2p (~9 eV) bands from the Fermi level of Ti2CTx and Ti3C2Tx both depend on the adsorption sites and the bond lengths to the termination species. Significant changes in the Ti-O/F coordination are observed with increasing temperature in the heat treatment. Only MXenes without surface terminations are predicted to be magnetic. Cr2C, Cr2N, and Ta3C2 are predicted to be ferromagnetic; Ti3C2 and Ti3N2 are predicted to be anti-ferromagnetic. None of these magnetic properties have yet been demonstrated experimentally. Optical Membranes of MXenes, such as Ti3C2 and Ti2C, have dark colors, indicating their strong light absorption in the visible wavelengths. MXenes are promising photo-thermal materials due to their strong visible light absorption. More interestingly, it is reported that the optical properties of MXenes such as Ti3C2 and Ti2C in the IR region quite differ from that in the visible wavelengths. For the wavelengths above 1.4 micrometer, these materials show negative permittivity, resulting in a strong metallic response to the IR light. In other words, they are highly reflective to IR lights. From the Kirchhoff's law of radiation, a low IR absorption means a low IR emissivity. The two MXenes materials show IR emissivity as low as 0.1, which are similar to some metals. Such materials that are visible black but IR white are highly desired in many areas, such as camouflage, thermal management, and information encryption. Corrosion resistance There is a growing body of the literature that recognises MXenes as high-performance corrosion inhibitors. 
The corrosion resistance of Ti3C2Tx MXene can be attributed to the synergy of good dispersibility, barrier effect and corrosion inhibitor release. Biological properties Compared to graphene oxide, which has been widely reported as an antibacterial agent, Ti2C MXene shows a lack of antibacterial properties. Ti3C2 MXene, however, shows a higher antibacterial efficiency toward both Gram-negative E. coli and Gram-positive B. subtilis. Colony-forming unit counts and regrowth curves showed that more than 98% of both bacterial cells lost viability at 200 μg/mL Ti3C2 colloidal solution within 4 h of exposure. Damage to the cell membrane was observed, which resulted in release of cytoplasmic materials from the bacterial cells and cell death. Principal in vitro studies of the cytotoxicity of 2D MXene sheets showed promise for applications in bioscience and biotechnology. The anticancer activity of Ti3C2 MXene was determined on two normal (MRC-5 and HaCaT) and two cancerous (A549 and A375) cell lines. The cytotoxicity results indicated that the observed toxic effects were higher against cancerous cells compared to normal ones. The mechanisms of potential toxicity were also elucidated. It was shown that Ti3C2 MXene may induce oxidative stress and, in consequence, the generation of reactive oxygen species (ROS). Further studies on Ti3C2 MXene revealed the potential of MXenes as novel ceramic photothermal agents for cancer therapy. In neuronal biocompatibility studies, neurons cultured on Ti3C2 are as viable as those in control cultures, and they can adhere, grow axonal processes, and form functional networks. Water purification Recently, Ti3C2 MXenes have been used as flowing electrodes in a flow-electrode capacitive deionization cell for the removal of ammonia from simulated wastewater. MXene FE-CDI demonstrated a 100x improvement in ion absorption capacity at 10x greater energy efficiency as compared to activated carbon flowing electrodes. One-micron-thick Ti3C2 MXene membranes demonstrated ultrafast water flux (approximately 38 L/(bar·h·m2)) and differential sieving of salts depending on both the hydration radius and charge of the ions. Cations larger than the interlayer spacing of MXene do not permeate through Ti3C2 membranes. As for smaller cations, the ones with a larger charge permeate an order of magnitude slower than singly charged cations. Potential applications As conductive layered materials with tunable surface terminations, MXenes have been shown to be promising for energy storage applications (Li-ion batteries, supercapacitors, and related components), composites, photocatalysis, water purification, gas sensors, transparent conducting electrodes, neural electrodes, as a metamaterial, SERS substrate, photonic diode, electrochromic device, and triboelectric nanogenerators (TENGs). Lithium-ion batteries MXenes have been investigated experimentally in lithium-ion batteries (LIBs) (e.g. V2CTx, Nb2CTx, Ti2CTx, and Ti3C2Tx). V2CTx has demonstrated the highest reversible charge storage capacity among MXenes in multi-layer form (280 mAhg−1 at 1C rate and 125 mAhg−1 at 10C rate). Multi-layer Nb2CTx showed a stable, reversible capacity of 170 mAhg−1 at 1C rate and 110 mAhg−1 at 10C rate. Although Ti3C2Tx shows the lowest capacity among the four MXenes in multi-layer form, it can be delaminated via sonication of the multi-layer powder. 
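A note on the C-rates quoted throughout this section: an nC rate denotes a specific current that would pass the electrode's nominal capacity in 1/n hours, so the current in mA/g is simply the nominal capacity in mAh/g multiplied by n. A minimal Python sketch of this arithmetic, using the V2CTx figure quoted above purely as an illustrative nominal capacity (this is not code from any cited study):

def specific_current_mA_per_g(nominal_capacity_mAh_per_g, c_rate):
    # At an nC rate, the full nominal capacity is passed in 1/n hours,
    # so the specific current equals nominal capacity (mAh/g) multiplied by n (in 1/h).
    return nominal_capacity_mAh_per_g * c_rate

print(specific_current_mA_per_g(280, 1))   # 1C  -> 280 mA/g (full discharge in about 1 h)
print(specific_current_mA_per_g(280, 10))  # 10C -> 2800 mA/g (full discharge in about 6 min)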
By virtue of higher electrochemically active and accessible surface area, delaminated Ti3C2Tx paper demonstrates a reversible capacity of 410 mAhg−1 at 1C and 110 mAhg−1 at 36C rate. As a general trend, M2X MXenes can be expected to have greater capacity than their M3X2 or M4X3 counterparts at the same applied current, since M2X MXenes have the fewest atomic layers per sheet. In addition to high power capabilities, each MXene has a different active voltage window, which could allow their use as battery cathodes/anodes. Moreover, the experimentally measured capacity for Ti3C2Tx paper is higher than predicted from computer simulations, indicating that further investigation is required to ascertain the charge storage mechanism. Sodium-ion batteries MXenes exhibit promising performance for sodium-ion batteries. Na+ should diffuse rapidly on MXene surfaces, which is favorable for fast charging/discharging. Two layers of Na+ can be intercalated in between MXene layers. As a typical example, multilayered Ti2CTx MXene as a negative electrode material showed a capacity of 175 mA h g−1 and good rate capability. It is possible to tune the Na-ion insertion potentials of MXenes by changing the transition metal and surface functional groups. V2CTx MXene has been successfully applied as a cathode material. Porous MXene-based paper electrodes have been reported to exhibit high volumetric capacities and stable cycling performance, demonstrating promise for devices where size matters. Supercapacitors MXenes are under study to improve supercapacitor energy density. Improvements come from increased charge storage density, which can be achieved in several ways. Increasing the available surface area for redox reactions by increasing the interlayer spacing can accommodate more ions, but reduces electrode density. The synthesis route controls the surface chemistry and plays a large role in determining the intercalation reaction rate and the charge storage density. For example, molten-salt-prepared Ti3C2Tx MXenes, with chlorine surface groups, show a capacity of 142 mAh g−1 at 13C rate and 75 mAh g−1 at 128C rate, driven by full desolvation of Li+, allowing for increased charge storage density in the electrode. In comparison, Ti3C2Tx MXenes prepared through HF etching show a capacity of 107.2 mAh g−1 at 1C rate. Composite Ti3C2Tx-based electrodes, including Ti3C2Tx/polymer (e.g. PPy, polyaniline), Ti3C2Tx/TiO2, and Ti3C2Tx/Fe2O3, have been explored. Notably, Ti3C2Tx hydrogel electrodes delivered a high volumetric capacitance of up to 1500 F/cm3. Supercapacitor electrodes based on Ti3C2Tx MXene paper in aqueous solutions demonstrate excellent cyclability and the ability to store 300-400 F/cm3, which translates to three times as much energy as for activated carbon and graphene-based capacitors. Ti3C2 MXene clay showed a volumetric capacitance of 900 F/cm3, a higher capacitance per unit of volume than most other materials, without losing any of its capacitance through more than 10,000 charge/discharge cycles. In Ti3C2Tx MXene electrodes for lithium-ion electrolytes, the choice of solvent greatly affected the ion transport and intercalation kinetics. In a propylene carbonate (PC) solvent, efficient desolvation of lithium ions during intercalation led to increased volumetric charge storage, with negligible increase in electrode volume. 
The improved kinetics garnered through solvent choice led to a more-than-twofold improvement in charge storage density for the PC system compared with acetonitrile or dimethyl sulfoxide. Composites FL-Ti3C2 (the most studied MXene) nanosheets can mix intimately with polymers such as polyvinyl alcohol (PVA), forming alternating MXene-PVA layered structures. The electrical conductivities of the composites can be controlled from 4×10−4 to 220 S/cm (MXene weight content from 40% to 90%). The composites have tensile strengths up to 400% higher than pure MXene films and show improved capacitance of up to 500 F/cm3. By using electrostatic self-assembly, flexible and conductive MXene/graphene supercapacitor electrodes have been produced. The free-standing MXene/graphene electrode displays a volumetric capacitance of 1040 F/cm3, an impressive rate capability with 61% capacitance retention, and a long cycle life. A method of alternating filtration for forming MXene-carbon nanomaterial composite films has also been devised. These composites show better rate performance at high scan rates in supercapacitors. The insertion of polymers or carbon nanomaterials between MXene layers enables electrolyte ions to diffuse more easily through the MXenes, which is key to their application in flexible energy storage devices. The mechanical properties of epoxy/MXene composites are comparable with those of graphene- and CNT-based composites; the tensile strength and modulus can increase by up to 67% and 23%, respectively. MXene/C-dot nanocomposites are reported to exhibit the synergistic optical absorption and thermal properties of MXene and C-dot nanomaterials. Sensors MXene-based sensors have been studied for various applications, including gas and biological sensing. One novel sensing application of MXenes is surface-enhanced Raman spectroscopy (SERS). It was reported that Ti3C2Tx MXene substrates are applicable in sensing salicylic acid, a metabolite of acetylsalicylic acid (also known as aspirin), organic dye molecules, and biomolecules. Another promising area for applications of MXenes is gas sensing. MXene-based gas sensors have shown high sensitivity and selectivity towards various gases, including ammonia, alcohols, nitrogen dioxide, and sulfur dioxide. These sensors can be used for environmental monitoring, industrial safety, and healthcare applications. Porous materials Porous MXenes (Ti3C2, Nb2C and V2C) have been produced via a facile chemical etching method at room temperature. Porous Ti3C2 has a larger specific surface area and a more open structure, and can be filtered into flexible films with, or without, the addition of carbon nanotubes (CNTs). The as-fabricated p-Ti3C2/CNT films showed significantly improved lithium ion storage capabilities, with a capacity as high as 1250 mA·h·g−1 at 0.1 C, excellent cycling stability, and good rate performance. Antennas Scientists at Drexel University in the US have created spray-on antennas that perform as well as current antennas found in phones, routers and other gadgets by painting MXenes onto everyday objects, widening the scope of the Internet of things considerably. Optoelectronic devices MXene SERS substrates have been manufactured by spray-coating and were used to detect several common dyes, with calculated enhancement factors reaching ~10^6. Titanium carbide MXene demonstrates a SERS effect in aqueous colloidal solutions, suggesting the potential for biomedical or environmental applications, where MXene can selectively enhance positively charged molecules. 
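For context on the SERS enhancement factors mentioned above, the commonly used substrate enhancement factor compares the signal per probed molecule on the SERS substrate with the signal per molecule in a normal Raman measurement. A minimal Python sketch with purely hypothetical intensities and molecule counts (not data from the studies cited here):

def sers_enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    # Enhancement factor EF = (I_SERS / N_SERS) / (I_ref / N_ref)
    return (i_sers / n_sers) / (i_ref / n_ref)

# Equal measured intensities, but a million times fewer molecules probed on the substrate
print(sers_enhancement_factor(1.0, 1e8, 1.0, 1e14))  # -> 1e6, i.e. an enhancement of ~10^6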
Transparent conducting electrodes have been fabricated with titanium carbide MXene, which can transmit approximately 97% of visible light per nanometer of film thickness; because this figure is per nanometer, the overall transmittance compounds multiplicatively with thickness (a short numerical sketch is given below). The performance of MXene transparent conducting electrodes depends on the MXene composition as well as synthesis and processing parameters. Superconductivity Nb2C MXenes exhibit surface-group-dependent superconductivity. References Materials science Electrochemistry Physical chemistry Inorganic carbon compounds
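The transmittance sketch referenced above: if each nanometer of MXene film transmits roughly 97% of incident visible light, the total transmittance falls off geometrically with thickness (a Beer-Lambert-style estimate; the thicknesses below are arbitrary illustrative values, not measurements from the cited work):

def estimated_transmittance(thickness_nm, transmittance_per_nm=0.97):
    # Each nanometer of film transmits roughly the same fraction of incident visible light,
    # so total transmittance decays geometrically with thickness.
    return transmittance_per_nm ** thickness_nm

for d in (1, 5, 10, 20):
    print(d, "nm ->", round(estimated_transmittance(d), 2))
# 1 nm -> 0.97, 5 nm -> 0.86, 10 nm -> 0.74, 20 nm -> 0.54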
MXenes
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,945
[ "Applied and interdisciplinary physics", "Inorganic compounds", "Materials science", "Electrochemistry", "Inorganic carbon compounds", "nan", "Physical chemistry" ]
61,981,734
https://en.wikipedia.org/wiki/Germanium%28II%29%20dicationic%20complexes
Ge(II) dicationic complexes refer to coordination compounds of germanium with a +2 formal oxidation state and a +2 charge on the overall complex. In some of these coordination complexes, the coordination is strongly ionic, localizing a +2 charge on Ge, while in others the bonding is more covalent, delocalizing the cationic charge away from Ge. Examples of dicationic Ge(II) complexes are much rarer than monocationic Ge(II) complexes, often requiring the use of bulky ligands to shield the germanium center. Dicationic complexes of Ge(II) have been isolated with bulky isocyanide and carbene ligands. Much more weakly coordinated germanium(II) dications have been isolated as complexes with polyether ligands, such as crown ethers and [2.2.2]cryptand. Crown ethers and cryptands are typically known for their ability to bind metal cations; however, these ligands have also been employed in stabilizing low-valent cations of heavier p-block elements. A Ge2+ ion's valence shell consists of a filled valence s orbital but empty valence p orbitals, giving rise to atypical bonding in these complexes. Germanium is a metalloid of the carbon group, typically forming compounds with mainly covalent bonding, contrasting with the dative bonding observed in these coordination complexes. History In 2007, a Ge(II)-based dication was reported by Rupar, Staroverov, Ragogna and Baines in which a Ge(II) unit is coordinated by three bulky N-heterocyclic carbene ligands. Later in 2008, Rupar, Staroverov and Baines isolated a weakly coordinated Ge(II) dication using cryptand[2.2.2], also the first example of a non-metallic mononuclear dication complexed with a cryptand. In this report, a Ge(II) cation is encapsulated within [2.2.2]cryptand with two triflate counter ions. The crystal structure of this Ge cryptand[2.2.2] (CF3SO3)2 salt reveals a lack of coordination between the encapsulated Ge(II) cation and the triflate anions. Since these reports, similar cationic Ge(II) complexes have been prepared employing crown ethers, azamacrocycles, and bulky isocyanide ligands. Synthesis In the preparation of Ge(II) cationic complexes, triflate is often chosen as a counter anion as it is relatively weakly coordinating. GeCl2•dioxane is often used as a starting material, as it is a convenient source of Ge(II). Ge(II) cryptand[2.2.2] The Ge(II) cryptand[2.2.2] complex was prepared by the addition of cryptand to a solution of N-heterocyclic carbene-stabilized GeCl(CF3SO3) in tetrahydrofuran. The products obtained from this reaction are summarized below. The germanium cryptand salt precipitated from solution as a white powder, and the identity was established using proton NMR and crystal X-ray diffraction. The carbene-stabilized germanium chloride side products (structures given below) were identified in solution after the reaction. Ge(II) crown ethers Ge(II) cationic species have been isolated with several crown ether ligands, including [12]crown-4, [15]crown-5, and [18]crown-6. Rupar et al. reported the synthesis of various germanium crown ethers employing GeCl2•dioxane as the source of Ge(II). Trimethylsilyl trifluoromethanesulfonate (Me3SiOTf) was used to displace chloride ligands with a more weakly associating triflate ligand. The resulting germanium crown ether complexes can adopt different geometries and cation charges depending on the size of the crown ether and the nature of the anionic ligand, summarized in the figure below. 
Only the Ge complex with [12]crown-4 is able to fully exclude counter anions from coordinating to Ge to give a dicationic complex. The larger crown ethers do not form sandwich complexes with Ge, and leave room for an anion to associate with the encapsulated Ge. These complexes were characterized with NMR, X-ray crystallography, Raman spectroscopy, and mass spectrometry. Ge(II) carbene complex The carbene-stabilized Ge(II) dication reported by Rupar et al. was prepared by treating GeCl2•dioxane with an N-heterocyclic carbene (1,3-diisopropyl-4,5-dimethylimidazol-2-ylidene) to give the GeCl2 carbene complex. Upon treatment with trimethylsilyl iodide and excess carbene, the dicationic complex, consisting of three carbene ligands per Ge atom, was formed. Ge(II) 2,6-dimethylphenyl isocyanide complex A Ge(II) dication stabilized by four isocyanide ligands was prepared by mixing GeCl2•dioxane and 2,6-dimethylphenyl isocyanide in toluene (scheme given below). Three molecules of GeCl2 are required per four molecules of the isocyanide ligand, as the counter anion is GeCl3−. This complex was crystallized from toluene, and was characterized by X-ray crystallography and NMR spectroscopy. Structure and bonding The geometry of these Ge(II) complexes is not adequately described by VSEPR theory due to the nature of the lone pair on Ge(II). VSEPR theory is used to predict geometric distortions about atoms with nonbonding electrons (lone pairs), but in some cases heavier main group elements can violate VSEPR theory, displaying a stereochemically inactive or "spherically symmetric" lone pair, termed the inert-pair effect. Ge(II) complexes can possess stereochemically active or inactive lone pairs, depending on the ligand. To further assess the nature of the electronic structure of Ge(II) dicationic complexes, natural bond orbital (NBO) computational analysis is often employed. Cryptand and crown ethers The bonding in such Ge(II) polyether complexes is believed to be mainly ionic in character, differing from the expected mainly covalent character typical of most germanium compounds. This lack of a covalent interaction is exemplified in the relatively long Ge-O distances observed in crystal structures of Ge crown ether and Ge cryptand complexes. Ge-O covalent single bonds are expected to be approximately 1.8 Å in length. The crystal structure of the Ge(II) cryptand[2.2.2] complex reveals a much longer Ge-O distance of 2.49 Å; similarly, the Ge-O distances range from 2.38 to 2.49 Å in the Ge(II) ([12]crown-4)2 sandwich complex. For the Ge(II) cryptand[2.2.2] complex, NBO analysis reveals that the Ge(II) cation does not participate in any covalent bonding and that the lone pair on the Ge(II) resides in a pure s orbital, indicating a stereochemically inactive lone pair. This lone pair orbital of Ge(II) within cryptand[2.2.2] is depicted to the right. In the Ge(II) crown ether complexes presented above, only the sandwich complex with [12]crown-4 clearly bears a stereochemically inactive lone pair, as suggested by the high symmetry of the complex. The Ge(II) complexes with [15]crown-5 and [18]crown-6 show geometric distortions likely due to the activity of the Ge(II) lone pair. Carbenes and isocyanides The bonding in Ge(II) dications stabilized by carbenes and isocyanides is believed to be more covalent in nature compared with the bonding in the polyether complexes. Furthermore, the positive charge in these complexes can be quite delocalized. 
In the Ge(II) carbene dication complex reported by Rupar et al., the Ge-C bonds are 2.07 Å in length, only marginally longer than expected Ge-C bond lengths. This suggests that the Ge-carbene interaction is not dative, but more covalent in nature. Limiting resonance forms for the Ge(II) carbene dication can be drawn (shown below), with the Ge(II) bearing the full +2 charge, or with the carbenes forming covalent bonds to the Ge center giving each ligand a +1 charge and the Ge a -1 charge. Natural population analysis, a computational technique associated with NBO, assigns a charge of +0.64 to the Ge atom, indicating that charge delocalization is significant, and that the structure is best described as an intermediate between the two limiting representations. This compound adopts a pyramidal geometry, with a stereochemically active lone pair on Ge. Similar to the Ge(II) carbene complex, the Ge-C bond lengths in the Ge(II) (2,6-dimethylphenyl isocyanide)3 structure range from 2.03 to 2.07 Å, typical for expected Ge-C bonds. The ligands adopt a distorted tetrahedral structure about the germanium center in the crystal structure. NBO analysis of the Ge(II) isocyanide dication reveals a partially filled Ge p orbital as a frontier orbital of this complex, depicted to the right. The nature of the frontier orbitals changes upon consideration of the GeCl3− counter anions in the NBO analysis. The NBO analysis also reveals a charge of +0.74 on Ge, with some positive charge delocalized onto the isocyanide ligands. Geometry optimizations for both singlet and triplet electron configurations were performed for this complex, and the singlet was found to be favored by 48.6 kcal/mol. Reactivity The weakly coordinated Ge(II) cations are Lewis acids. Due to this weak coordination, such Ge(II) crown ether complexes could be useful for the preparation of other germanium compounds. Bandyopadhyay et al. have investigated the reactivity of a GeOTf+ [15]crown-5 complex, and found that the weakly coordinating triflate could be exchanged for H2O or NH3. Addition of water to a solution of GeOTf+ [15]crown-5 in dichloromethane results in the formation of the dicationic water complex, as depicted in the figure below. This water adduct was isolated and the structure was determined by X-ray crystallography, making it the first characterized Ge(II)-water adduct. Further addition of bulk water to this complex results in decomposition. Upon treatment with base, this water adduct [Ge[15]crown-5·OH2]2+ can be deprotonated to give the hydroxide adduct [Ge[15]crown-5·OH]+. Upon deprotonation to give the hydroxide adduct, the Ge-O bond becomes shorter and stronger. NBO analysis identifies the H2O-Ge[15]crown-5 interaction as a donor-acceptor interaction, while the HO-Ge[15]crown-5 interaction is identified as a polar single bond. This reactivity presents a potential strategy for the preparation of new Ge complexes. The empty p orbitals of Ge(II) dications make them potential π-acceptors for transition metal complexes. Intriguingly, dicationic Ge(II) complexes have been shown to act as ligands for Au(I) and Ag(I). Raut and Majumdar report the use of a bis(α-iminopyridine) ligand to prepare a Ge(II) dicationic complex that coordinates to the electron-rich Au(I) or Ag(I) metal centers. The bonding in such complexes is best described by σ-donation of the Ge(II) lone pair to the transition metal, and π-back donation from the filled transition metal d orbitals to the vacant Ge(II) p orbitals. 
This unusual activity for Ge(II) is under investigation for possible applications in catalysis. See also Cryptand Host–guest chemistry Organogermanium compounds References Germanium(II) compounds Coordination complexes
Germanium(II) dicationic complexes
[ "Chemistry" ]
2,673
[ "Coordination chemistry", "Coordination complexes" ]
61,982,153
https://en.wikipedia.org/wiki/Magic%20state%20distillation
Magic state distillation is a method for creating more accurate quantum states from multiple noisy ones, which is important for building fault-tolerant quantum computers. It has also been linked to quantum contextuality, a concept thought to contribute to quantum computers' power. The technique was first proposed by Emanuel Knill in 2004, and further analyzed by Sergey Bravyi and Alexei Kitaev the same year. Thanks to the Gottesman–Knill theorem, it is known that some quantum operations (operations in the Clifford group) can be perfectly simulated in polynomial time on a classical computer. In order to achieve universal quantum computation, a quantum computer must be able to perform operations outside this set. Magic state distillation achieves this, in principle, by concentrating the usefulness of imperfect resources, represented by mixed states, into states that are conducive to performing operations that are difficult to simulate classically. A variety of magic state distillation routines for qubits, and related routines for higher-dimensional qudits, with various advantages have been proposed. Stabilizer formalism The Clifford group consists of the set of n-qubit operations generated by the gates CNOT, H and S (where H is the Hadamard gate and S is the phase gate diag(1, i)), called Clifford gates. The Clifford group generates stabilizer states, which can be efficiently simulated classically, as shown by the Gottesman–Knill theorem. This set of gates combined with a non-Clifford operation is universal for quantum computation. Magic states Magic states are purified from multiple copies of a noisy mixed state ρ. These states are typically provided via an ancilla to the circuit. A standard (T-type) magic state is |M⟩ = cos(β)|0⟩ + e^(iπ/4) sin(β)|1⟩, where β = (1/2)arccos(1/√3). A non-Clifford gate can be generated by combining (copies of) magic states with Clifford gates. Since a set of Clifford gates combined with a non-Clifford gate is universal for quantum computation, magic states combined with Clifford gates are also universal. Purification algorithm for distilling |M〉 The first magic state distillation algorithm, invented by Sergey Bravyi and Alexei Kitaev, is as follows. Input: Prepare 5 imperfect copies of the magic state. Output: An almost pure magic state having a small error probability. repeat Apply the decoding operation of the five-qubit error correcting code and measure the syndrome. If the measured syndrome is the trivial (all-zero) syndrome, the distillation attempt is successful. else Discard the resulting state and restart the algorithm. until The states have been distilled to the desired purity. References Quantum computing Algorithms
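As a concrete illustration of the magic state defined above, the following numpy sketch builds the T-type state, checks the textbook property that its Bloch vector is (1, 1, 1)/√3, and shows how depolarizing noise lowers its fidelity (an illustrative check only, not code from the cited works):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# T-type magic state |M> = cos(beta)|0> + e^{i*pi/4} sin(beta)|1>, beta = arccos(1/sqrt(3))/2
beta = 0.5 * np.arccos(1 / np.sqrt(3))
M = np.array([np.cos(beta), np.exp(1j * np.pi / 4) * np.sin(beta)])
rho = np.outer(M, M.conj())

# Bloch vector of |M><M| should be (1, 1, 1)/sqrt(3), i.e. about (0.577, 0.577, 0.577)
print([round(np.real(np.trace(rho @ P)), 3) for P in (X, Y, Z)])

# A noisy copy: depolarizing error with probability eps; its fidelity with |M> is 1 - eps/2
eps = 0.1
rho_noisy = (1 - eps) * rho + eps * I2 / 2
print(round(np.real(M.conj() @ rho_noisy @ M), 3))  # 0.95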
Magic state distillation
[ "Mathematics" ]
494
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
61,984,669
https://en.wikipedia.org/wiki/Hibar%20Systems
Hibar Systems Ltd was a Canadian manufacturer of automated, precision liquid dispensing and filling systems. Hibar was started in 1974 when German-born Canadian engineer Heinz Barall developed a prototype of a precision metering pump that would dispense a small, precise amount (two microlitres) of electrolyte into button cell batteries. Barall took his prototype to one of the world's largest battery companies, where representatives were so impressed with its flawless performance that they kept the prototype and ordered more. Building on its success in the battery industry, over the next 40 years Hibar went on to build precision liquid dispensing systems for other industries, such as filling printer ink cartridges, putting pharmaceuticals into vials, and packaging cosmetics. The company also continued to supply its products to the battery production industry and developed new vacuum filling systems for lithium-ion battery applications. Hibar's technology caught the eye of Tesla, Inc., which builds battery-electric vehicles and battery energy systems. Tesla quietly acquired Hibar sometime in 2019, which was first revealed in an October 2019 filing with the Canadian government. The purchase came amid an acquisition spree where Tesla bought six other small companies with expertise in automation or battery technology. Tesla merged the company into its operation, removing the Hibar signage from in front of the Richmond Hill office building in 2020 and changing the legal name on Hibar's website to Tesla Toronto Automation ULC in 2021. References Industrial machine manufacturers Pump manufacturers Canadian companies established in 1974 Manufacturing companies established in 1974 2019 mergers and acquisitions Tesla, Inc. Companies based in Richmond Hill, Ontario Canadian brands
Hibar Systems
[ "Engineering" ]
327
[ "Industrial machine manufacturers", "Industrial machinery" ]
61,988,019
https://en.wikipedia.org/wiki/Water%20distribution%20system
A water distribution system is part of a water supply network with components that carry potable water from a centralized treatment plant or wells to consumers to satisfy residential, commercial, industrial and fire fighting requirements. Definitions Water distribution network is the term for the portion of a water distribution system up to the service points of bulk water consumers or demand nodes where many consumers are lumped together. The World Health Organization (WHO) uses the term water transmission system for a network of pipes, generally in a tree-like structure, that is used to convey water from water treatment plants to service reservoirs, and uses the term water distribution system for a network of pipes that generally has a loop structure to supply water from the service reservoirs and balancing reservoirs to consumers. Components A water distribution system consists of pipelines, storage facilities, pumps, and other accessories. Pipelines laid within the public right of way, called water mains, are used to transport water within a distribution system. Large-diameter water mains called primary feeders are used to connect water treatment plants and service areas. Secondary feeders are connected between primary feeders and distributors. Distributors are water mains that are located near the water users, which also supply water to individual fire hydrants. A service line is a small-diameter pipe used to connect from a water main through a small tap to a water meter at the user's location. There is a service valve (also known as a curb stop) on the service line, located near the street curb, to shut off water to the user's location. Storage facilities, or distribution reservoirs, provide clean drinking water storage (after the required water treatment process) to ensure the system has enough water to serve fluctuating demands (service reservoirs), or to equalize the operating pressure (balancing reservoirs). They can also be temporarily used to serve fire fighting demands during a power outage. The following are types of distribution reservoirs: Underground storage reservoir or covered finished water reservoir: An underground storage facility or large ground-excavated reservoir that is fully covered. The walls and the bottom of these reservoirs may be lined with impermeable materials to prevent ground water intrusion. Uncovered finished water reservoir: A large ground-excavated reservoir that has adequate measures or lining to prevent surface water runoff and ground water intrusion but does not have a top cover. This type of reservoir is less desirable as the water will not be further treated before distribution and is susceptible to contaminants such as bird waste, animal and human activities, algal blooms, and airborne deposition. Surface reservoir (also known as ground storage tank and ground storage reservoir): A storage facility built on the ground with walls lined with concrete, shotcrete, asphalt, or membrane. A surface reservoir is usually covered to prevent contamination. They are typically located in high elevation areas that have enough hydraulic head for distribution. When a surface reservoir at ground level cannot provide a sufficient hydraulic head to the distribution system, booster pumps will be required. Water tower (also known as elevated surface reservoir): An elevated water tank. 
A few common types are the spheroid elevated storage tank, a steel spheroid tank on top of a small-diameter steel column; the composite elevated storage tank, a steel tank on a large-diameter concrete column; and the hydropillar elevated storage tank, a steel tank on a large-diameter steel column. The space within the large column below the water tank can be used for other purposes such as multi-story office space and storage space. A main concern for using water towers in the water distribution system is the aesthetic impact on the area. Standpipe: A water tank that is a combination of ground storage tank and water tower. It is slightly different from an elevated water tower in that the standpipe allows water storage from the ground level to the top of the tank. The bottom storage area is called supporting storage, and the upper part, which is at a similar height to an elevated water tower, is called useful storage. Sump: This is a contingency water storage facility that is not used to distribute water directly. It is typically built underground in a circular shape with a dome top above ground. The water from a sump will be pumped to a service reservoir when it is needed. Storage facilities are typically located at the center of the service locations. Being at the central location reduces the length of the water mains to the service locations. This reduces the friction loss when water is transported through a water main. Topologies In general, a water distribution system can be classified as having a grid, ring, radial or dead-end layout. A grid system follows the general layout of the road grid with water mains and branches connected in rectangles. With this topology, water can be supplied from several directions, allowing good water circulation and redundancy if a section of the network has broken down. Drawbacks of this topology include difficulty sizing the system. A ring system has a water main for each road, and there is a sub-main branched off the main to provide circulation to customers. This topology has some of the advantages of a grid system, but it is easier to determine sizing. A radial system delivers water into multiple zones. At the center of each zone, water is delivered radially to the customers. A dead-end system has water mains along roads without a rectangular pattern. It is used for communities whose road networks are not regular. As there are no cross-connections between the mains, water can have less circulation and therefore stagnation may be a problem. Integrity of the systems The integrity of the systems is broken down into physical, hydraulic, and water quality integrity. Physical integrity concerns the ability of physical barriers to prevent contamination from external sources from entering the water distribution system. The deterioration can be caused by physical or chemical factors. Hydraulic integrity is the ability to maintain adequate water pressure inside the pipes throughout the distribution system. It also includes the circulation and the length of time that water travels within the distribution system, which affects the effectiveness of disinfectants. Water quality integrity is the control of degradation as water travels through the distribution system. Water quality impacts can be caused by failures of physical or hydraulic integrity. Water quality degradation can also originate within the distribution system itself, for example through microorganism growth, nitrification, and internal corrosion of the pipes. 
Network analysis and optimization Analyses are done to assist in the design, operation, maintenance and optimization of water distribution systems. There are two main types of analysis: hydraulic behavior, and water quality behavior as water flows through the distribution system (a simple single-pipe head-loss calculation is sketched below). Optimizing the design of water distribution networks is a complex task. However, a large number of methods have already been proposed, mainly based on metaheuristics. Employing mathematical optimization techniques can lead to substantial construction savings in this kind of infrastructure. Hazards Hazards in water distribution systems can be microbial, chemical or physical. Most microorganisms are harmless within water distribution systems. However, when infectious microorganisms enter the systems, they form biofilms and create microbial hazards for the users. Biofilms are usually formed near the ends of the distribution system where water circulation is low. This supports their growth and makes disinfection agents less effective. Common microbial hazards in distribution systems come from contamination by human faecal pathogens and parasites, which enter the systems through cross-connections, breaks, water main works, and open storage tanks. Chemical hazards include disinfection by-products, leaching from piping materials and fittings, and water treatment chemicals. Physical hazards include turbidity of water, odors, colors, scales (buildups of material inside the pipes from corrosion), and sediment resuspension. There are several bodies around the world that create standards to limit hazards in the distribution systems: NSF International in North America; European Committee for Standardization, British Standards Institution and Umweltbundesamt in Europe; Japanese Standards Association in Asia; Standards Australia in Australia; and Brazilian National Standards Organization in Brazil. Lead service lines Lead contamination in drinking water can come from leaching of lead that was used in old water mains, service lines, pipe joints, plumbing fittings and fixtures. According to WHO, the most significant contributor to lead in water in many countries is the lead service line. Maintenance Internal corrosion control Water quality deteriorates due to corrosion of metal pipe surfaces and connections in distribution systems. Pipe corrosion shows in water as changes in color, taste and odor, any of which may cause health concerns. Health issues relate to releases of trace metals such as lead, copper or cadmium into the water. Lead exposure can cause delays in physical and mental development in children. Long-term exposure to copper may cause liver and kidney damage. High or long-term exposure to cadmium may cause damage to various organs. Corrosion of iron pipes causes rusty or red water. Corrosion of zinc and iron pipes can cause a metallic taste. Various techniques can be used to control internal corrosion, for example, pH adjustment, adjustment of carbonate and calcium to form a calcium carbonate coating on pipe surfaces, and applying a corrosion inhibitor. For example, phosphate products that form films over pipe surfaces are one type of corrosion inhibitor. This reduces the chance of trace metals leaching from the pipe materials into the water. Hydrant flushing Hydrant flushing is the scheduled release of water from fire hydrants or special flushing hydrants to purge iron and other mineral deposits from a water main. 
Another benefit of using fire hydrants for water main flushing is to test whether water is supplied to fire hydrants at adequate pressure for fire fighting. During hydrant flushing, consumers may notice a rust color in their water as iron and mineral deposits are stirred up in the process. Water main renewals After water mains are in service for a long time, there will be deterioration in structural, water quality, and hydraulic performance. Structural deterioration may be caused by many factors. Metal-based pipes develop internal and external corrosion, causing the pipe walls to thin or degrade. They can eventually leak or burst. Cement-based pipes are subject to cement matrix and reinforced steel deterioration. All pipes are subject to joint failures. Water quality deterioration includes scaling, sedimentation, and biofilm formation. Scaling is the formation of hard deposits on the interior wall of pipes. This can be a by-product of pipe corrosion combined with calcium in the water, which is called tuberculation. Sedimentation is when solids settle within the pipes, usually at recesses between scaling build-ups. When there is a change in the velocity of water flow (such as sudden use of a fire hydrant), the settled solids will be stirred up, causing water to be discolored. Biofilms can develop in highly scaled and thus rough-surfaced pipes where bacteria are allowed to grow; the higher the roughness of the interior wall, the harder it is for disinfectant to kill the bacteria on the pipe surface. Hydraulic deterioration that affects pressures and flows can be a result of other deterioration that obstructs the water flow. When it is time for water main renewal, there are many considerations in choosing the method of renewal. This can be open-trench replacement or one of the pipeline rehabilitation methods. A few pipeline rehabilitation methods are pipe bursting, sliplining, and pipe lining. When an in-situ rehabilitation method is used, one benefit is the lower cost, as there is no need to excavate along the entire water main pipeline. Only small pits are excavated to access the existing water main. The unavailability of the water main during the rehabilitation, however, requires building a temporary water bypass system to serve as the water main in the affected area. A temporary water bypass system (known as temporary bypass piping) should be carefully designed to ensure an adequate water supply to the customers in the project area. Water is taken from a feed hydrant into a temporary pipe. When the pipe crosses a driveway or a road, a cover or a cold patch should be put in place to allow cars to cross the temporary pipe. Temporary service connections to homes can be made to the temporary pipe. Among many ways to make a temporary connection, a common one is to connect the temporary service connection to a garden hose. The temporary pipe should also include temporary fire hydrants for fire protection. As water main work can disturb lead service lines, which can result in elevated lead levels in drinking water, it is recommended that when a water utility plans a water main renewal project, it should work with property owners to replace lead service lines as part of the project. See also Water supply network District heating for hot water distribution system References Supply network Environmental engineering Hydraulics
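As a small illustration of the hydraulic analyses discussed in the network analysis section above, the following Python sketch estimates friction head loss in a single pipe with the empirical Hazen-Williams formula, a common choice in distribution-network models (the pipe length, diameter, flow and roughness coefficient below are hypothetical examples, not values taken from this article):

def hazen_williams_head_loss(length_m, flow_m3_per_s, diameter_m, c_factor):
    # Empirical Hazen-Williams friction loss in SI units:
    # h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87), with the result in metres of water
    return 10.67 * length_m * flow_m3_per_s**1.852 / (c_factor**1.852 * diameter_m**4.87)

# Hypothetical 500 m main, 300 mm diameter, roughness coefficient C of about 130, carrying 50 L/s
print(round(hazen_williams_head_loss(500, 0.05, 0.3, 130), 2), "m of head loss")  # roughly 0.9 m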
Water distribution system
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,577
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Water industry", "Environmental engineering", "Fluid dynamics" ]
53,089,666
https://en.wikipedia.org/wiki/Kappa%20Lupi
The Bayer designation κ Lupi (Kappa Lupi) is shared by two star systems in the constellation Lupus: κ1 Lupi (HD 134481) κ2 Lupi (HD 134482) According to Eggleton and Tokovinin (2008), the pair form a binary system with an angular separation of . References Lupi, Kappa Lupus (constellation)
Kappa Lupi
[ "Astronomy" ]
80
[ "Constellations", "Lupus (constellation)" ]
53,092,613
https://en.wikipedia.org/wiki/Treatise%20on%20Analysis
Treatise on Analysis is a translation by Ian G. Macdonald of the nine-volume work Éléments d'analyse on mathematical analysis by Jean Dieudonné, and is an expansion of his textbook Foundations of Modern Analysis. It is a successor to the various Cours d'Analyse by Augustin-Louis Cauchy, Camille Jordan, and Édouard Goursat. Contents and publication history Volume I The first volume was originally a stand-alone graduate textbook with a different title. It was first written in English and later translated into French, unlike the other volumes which were first written in French. It has been republished several times and is much more common than the later volumes of the series. The contents include Chapter I Sets Chapter II Real numbers Chapter III Metric spaces Chapter IV The real line Chapter V Normed spaces Chapter VI Hilbert spaces Chapter VII Spaces of continuous functions Chapter VIII Differential calculus (This uses the Cauchy integral rather than the more common Riemann integral of functions.) Chapter IX Analytic functions (of a complex variable) Chapter X Existence theorems (for ordinary differential equations) Chapter XI Elementary spectral theory Volume II The second volume includes Chapter XII Topology and topological algebra Chapter XIII Integration Chapter XIV Integration in locally compact groups Chapter XV Normed algebras and spectral theory Volume III The third volume includes Chapter XVI on differential manifolds and Chapter XVII on distributions and differential operators. Volume IV The fourth volume includes Chapter XVIII Differential systems Chapter XIX Lie groups Chapter XX Riemannian geometry Volume V Volume V consists of Chapter XXI on compact Lie groups. Volume VI Volume VI consists of Chapter XXII on harmonic analysis (mostly on locally compact groups). Volume VII Volume VII consists of the first part of Chapter XXIII on linear functional equations. This chapter is considerably more advanced than most of the other chapters. Volume VIII Volume VIII consists of the second part of Chapter XXIII on linear functional equations. Volume IX Volume IX contains Chapter XXIV on elementary differential topology. Unlike the earlier volumes, there is no English translation of it. Volume X Dieudonné planned a final volume containing Chapter XXV on nonlinear problems, but this was never published. References Mathematics books Mathematical analysis Treatises
Treatise on Analysis
[ "Mathematics" ]
437
[ "Mathematical analysis" ]
53,093,752
https://en.wikipedia.org/wiki/Wolfgang%20Fink
Wolfgang Fink is a German-American theoretical physicist. He is currently an associate professor and the inaugural Maria & Edward Keonjian Endowed Chair of Microelectronics at the University of Arizona. Fink has joint appointments in the Departments of Electrical & Computer Engineering, Biomedical Engineering, Systems & Industrial Engineering, Aerospace & Mechanical Engineering, and Ophthalmology & Vision Science at the University of Arizona. He is the current Vice President of the Prognostics and Health Management (PHM) Society. Research career & education Fink holds B.S. (Vordiplom, 1990) and M.S. (Diplom, 1993) degrees in physics and physical chemistry from the University of Göttingen, Germany, and a Ph.D. "summa cum laude" in theoretical physics from the University of Tübingen, Germany (1997). He was a senior researcher at NASA's Jet Propulsion Laboratory (2001–2009). He was also a visiting associate in physics at the California Institute of Technology (2001–2016), where he founded Caltech's Visual and Autonomous Exploration Systems Research Laboratory. He also held concurrent appointments as Voluntary Research Associate Professor of both Ophthalmology and Neurological Surgery at the University of Southern California (2005–2014). Active research areas Fink is a specialist in the areas of autonomous systems, biomedical engineering for healthcare, human/brain-machine interfaces, and smart service systems. In particular, his research focuses on autonomous robotic systems for hazardous environments, C4ISR architectures (Tier-Scalable Reconnaissance), vision prostheses for the blind, smart mobile and tele-ophthalmic platforms, ophthalmic instruments and tests, self-adapting wearable sensors, cognitive/reasoning systems, and computer-optimized design. Fink was a principal investigator of the United States Department of Energy's (USDOE's) "Artificial Retina" project (2004–2011), a multi-institutional and multi-disciplinary CRADA-based effort to develop an implantable microelectronic retinal device that restores useful vision to people blinded by retinal diseases (retinitis pigmentosa and macular degeneration). Furthermore, Fink is Caltech's founding Co-Investigator of the NSF-funded Center for Biomimetic Microelectronic Systems (2003–2010), awarded in 2003 to the University of Southern California, Caltech, and UC Santa Cruz. The center developed the only FDA-approved visual prosthesis to date (Argus retinal prosthesis or ARGUS II). Honors & awards Fellow of the National Academy of Inventors (NAI), Class of 2023 for having "demonstrated a highly prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on the quality of life, economic development, and welfare of society". Recipient of the 2023 SPIE Aden and Marjorie Meinel Technology Achievement Award "for pioneering, sustained contributions to the development of transformational opto-medical examination and device technologies, with particular focus on visual prostheses for the blind, ophthalmology, and tele-ophthalmology." Co-winner of USDOE/NREL-sponsored E-ROBOT Prize 2021 (Phase 1: $200,000), an American-Made Challenge, to devise building envelope retrofit solutions that make retrofits easier, faster, safer and more accessible for workers. The team "wall-EIFS" co-led by Fink devised "wall-EIFS: a robotically applied, 3D-sprayable exterior insulation and finish system (EIFS) for building envelope retrofits." 
Recipient of the inaugural Scott Clements Most Valuable Person (MVP) Award of the Prognostics and Health Management (PHM) Society (2020). ARVO Fellow, Class of 2023. Fellow of SPIE (inducted in 2020) "for achievements in vision science for the blind and tele-ophthalmic healthcare worldwide". Fellow of the Prognostics and Health Management (PHM) Society (inducted in 2018). ACABI Fellow 2017 recognizing "faculty that are strongly active in innovation related to ACABI", the Arizona Center for Accelerated BioMedical Innovation at the University of Arizona. da Vinci Fellow 2015 for "innovative, productive and highly recognized engineering research" at the University of Arizona. Senior Member of the IEEE. College of Fellows of the American Institute for Medical and Biological Engineering (AIMBE) (inducted in 2012) "for outstanding contributions in the field of ophthalmology and vision sciences with particular focus on diagnostics and artificial vision systems". Patents Fink has been awarded 31 US and international patents to date in the areas of autonomous systems, biomedical devices, neural stimulation, MEMS fabrication, data fusion and analysis, and multi-dimensional optimization. References University of Arizona faculty Living people California Institute of Technology faculty 21st-century German physicists Theoretical physicists Year of birth missing (living people)
Wolfgang Fink
[ "Physics" ]
1,029
[ "Theoretical physics", "Theoretical physicists" ]
59,362,417
https://en.wikipedia.org/wiki/Sustainability%20in%20construction
Sustainable construction aims to reduce the negative health and environmental impacts caused by the construction process and by the operation and use of buildings and the built environment. It can be seen as the construction industry's contribution to more sustainable development. Precise definitions vary from place to place, and are constantly evolving to encompass varying approaches and priorities. More comprehensively, sustainability can be considered from the three dimensions of planet, people and profit across the entire construction supply chain. Key concepts include the protection of the natural environment, choice of non-toxic materials, reduction and reuse of resources, waste minimization, and the use of life-cycle cost analysis. Definition of sustainable construction One definition of "Sustainable Construction" is the introduction of healthy living and workplace environments, the use of materials that are sustainable, durable and, by extension, environmentally friendly. In the United States, the Environmental Protection Agency (EPA) defines sustainable construction as "the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life-cycle from siting to design, construction, operation, maintenance, renovation and deconstruction." Agyekum-Mensah et al. note that some definitions of sustainable construction and development "seem to be vague" and they question the use of any definition of "sustainability" which suggests that sustainable or acceptable activities can be continued indefinitely, because construction projects do not run indefinitely. Evolution path In the 1970s, awareness of sustainability emerged amid the oil crises. At that time, people began to realize the necessity and urgency of energy conservation, that is, utilizing energy efficiently and finding alternatives to contemporary sources of energy. Additionally, shortages of other natural resources at that time, such as water, also raised public attention to the importance of sustainability and conservation. In the late 1960s, the construction industry began to explore ecological approaches to construction, aiming to seek harmony with nature. The concept of sustainable construction was born out of sustainable development discourse. The term sustainable development was first coined in the Brundtland report of 1987, defined as the ability to meet the needs of all people in the present without compromising the ability of future generations to meet their own. This report marked a turning point in sustainability discourse since it deviated from the earlier limits-to-growth perspective to focus more on achieving social and economic milestones, and their connection to environmental goals, particularly in developing countries. Sustainable development interconnects three socially concerned systems (environment, society and economy) in a framework seeking to achieve a range of goals as defined by the United Nations Development Program. The introduction of sustainable development into the environmental/economic discourse served as a middle ground between the limits-to-growth theory and earlier pro-growth theories that argued maintaining economic growth would not hinder long-term sustainability. 
As a result, scholars have faulted sustainable development for being too value-laden, since applications of its definition vary heavily depending on the relevant stakeholders, allowing it to be used in support of both pro-growth and pro-limitation development arguments despite their vastly different implications. In order for the concept to be effective in real-life applications, several specified frameworks for its use in various fields and industries, including sustainable construction, were developed. The construction industry's response to sustainable development is sustainable construction. In 1994, the definition of sustainable construction was given by Professor Charles J. Kibert during the Final Session of the First International Conference of CIB TG 16 on Sustainable Construction as "the creation and responsible management of a healthy built environment based on resource efficient and ecological principles". Notably, the traditional concerns in construction (performance, quality, cost) are replaced in sustainable construction by concerns about resource depletion, environmental degradation and a healthy environment. Sustainable construction addresses these criteria through the following principles set by the conference: Minimize resource consumption (Conserve) by effective procurement systems and strategies Maximize resource reuse (Reuse) Use renewable or recyclable resources (Renew/Recycle/Repurpose) Protect and incorporate the natural environment (Protect Nature) Create a healthy, non-toxic environment (Non-Toxics) Pursue quality in creating the built environment (Quality) Additional definitions and frameworks for sustainable construction practices were set out more rigorously in the 1999 Agenda 21 on Sustainable Construction, published by the International Council for Research and Innovation in Building and Construction (CIB). The same council also published an additional version of the agenda for sustainable construction in developing countries in 2001 to counteract biases present in the original report as a result of most contributors being from the developed world. Since 1994, much progress toward sustainable construction has been made all over the world. According to a 2015 Green Building Economic Impact Study released by the U.S. Green Building Council (USGBC), the green building industry contributes more than $134.3 billion in labor income to working Americans. The study also found that green construction's growth rate is rapidly outpacing that of conventional construction and will continue to rise. Goals of sustainable construction Current state According to the United Nations Environment Programme (UNEP), "the increased construction activities and urbanization will increase waste which will eventually destroy natural resources and wild life habitats over 70% of land surface from now up to 2032." Moreover, construction uses around half of the natural resources that humans consume. Production and transport of building materials consumes 25–50 percent of all energy used (depending on the country considered). Taking the UK as an example, the construction industry accounts for 47% of emissions, of which manufacturing of construction products and materials accounts for the largest amount within the process of construction. Benefits By implementing sustainable construction, benefits such as lower cost, environmental protection, sustainability promotion, and expansion of the market may be achieved during the construction phase. 
As mentioned in ConstructionExecutive, construction waste accounts for 34.7% of all waste in Europe. Implementing sustainability in construction would cut down on wasted materials substantially. Potential lower cost Although sustainable construction might result in higher investment at the construction stage of projects, competition between contractors, driven by the promotion of sustainability in the industry, would encourage the application of sustainable construction technologies, ultimately decreasing the construction cost. Meanwhile, the encouraged cooperation of designers and engineers would bring better design into the construction phase. Using more sustainable resources reduces the cost of construction, as less water and energy are used; with fewer resources used in projects, disposal costs are also lower because less waste is produced. Environmental protection By adopting sustainable construction, contractors would prepare construction plans or sustainable site plans to minimize the environmental impact of the project. According to a study conducted in Sri Lanka, considerations of sustainability may influence the contractor to choose more sustainable, locally sourced products and materials, and to minimize the amount of waste and water pollution. In another example, a case study in Singapore, the construction team implemented rainwater recycling and wastewater treatment systems that helped achieve a lower environmental impact. Promoting sustainability According to "Sustainable Construction: Reducing the Impact of Creating a Building", the contractor in collaboration with the owner would deliver the project in a sustainable way. More importantly, the contractor would have known this was a key performance indicator for the client from day one, allowing them the opportunity to not tender for the work, should this not appeal to them. Moreover, "It also sends a clear message to the industry, 'sustainability is important to us' and this, especially within the government and public sectors can significantly drive change in the way projects are undertaken, as well as up-skilling the industry to meet this growing demand." Expand market By promoting sustainable methods and products in daily work, the good results directly show the public the positive effects of sustainable construction. Consequently, there would be potential to expand the market for sustainable concepts or products. According to a report published by USGBC, "The global green building market grew in 2013 to $260 billion, including an estimated 20 percent of all new U.S. commercial real estate construction." Sustainable construction strategies Globally, construction industries are attempting to implement sustainable construction principles. Below are some examples of successful implementations of sustainable construction promotion on a national level. Also included are new technologies that could improve the application of sustainable construction. Strategic Policy and Guide Creation of a national strategy to improve the development: the government of Singapore announced a Sustainable Singapore Blueprint in April 2009, launching a long-term strategy of sustainable construction development. Another example is the Strategy for Sustainable Construction in the UK. Investing money in research and education: a S$50 million "Research Fund for the Built Environment" was launched in 2007 by the Singapore Government to kick-start R&D efforts in sustainable development. 
Guidance for sustainable application: government departments cooperate with academic institutes to produce industry guides for workers, for example the Field Guide for Sustainable Construction published in 2004. Changing Mindset in the Way of Development The Government of Singapore has developed a Sustainable Construction Master Plan with the hope of transforming the industry's development path from focusing only on the traditional concerns of "cost, time, and quality" toward construction products and materials that reduce natural resource consumption and minimize waste on site. With the escalating concern over the climate crisis, it is essential to keep in mind the importance of reducing energy consumption and toxic waste whilst moving forward with sustainable architectural plans. New Technologies The development of efficiency codes has prompted the development of new construction technologies and methods, many pioneered by academic departments of construction management that seek to improve efficiency and performance while reducing construction waste. New techniques of building construction are being researched, made possible by advances in 3D printing technology. In a form of additive building construction, similar to the additive manufacturing techniques for manufactured parts, building printing is making it possible to flexibly construct small commercial buildings and private habitations in around 20 hours, with built-in plumbing and electrical facilities, in one continuous build, using large 3D printers. Working versions of 3D-printing building technology are already extruding building material at a steady hourly rate, with next-generation printers expected to be fast enough to complete a building in a week. Dutch architect Janjaap Ruijssenaars's performative architecture 3D-printed building was scheduled to be built in 2014. Over the years, the construction industry has seen a trend toward IT adoption, an area in which it has long lagged behind other fields such as the manufacturing and healthcare industries. Nowadays, construction is starting to see the full potential of technological advancements, moving to paperless construction, using the power of automation, and adopting BIM, the internet of things, cloud storage and co-working, mobile apps, surveying drones, and more. In the current trend of sustainable construction, the recent movements of New Urbanism and New Classical architecture promote a sustainable approach to construction that appreciates and develops smart growth, architectural tradition and classical design. This is in contrast to modernist and short-lived globally uniform architecture, as well as opposing solitary housing estates and suburban sprawl. Both trends started in the 1980s. Timber is being introduced as a feasible material for skyscrapers (nicknamed "plyscrapers") thanks to new developments in engineered timber, collectively known as "mass timber", which includes cross-laminated timber. Industrial hemp is becoming increasingly recognised as an eco-friendly building material. It can be used in a range of ways, including as an alternative to concrete (known as 'hempcrete'), flooring, and insulation. King Charles is reported to have used hemp to insulate an eco-home.
In December 2022, the United Nations Conference on Trade and Development (UNCTAD) emphasised hemp's versatility and sustainability, and advocated its use as a building material, in a report entitled 'Commodities at a glance: Special issue on industrial hemp'. Sustainable construction in developing countries Specific parameters are needed for sustainable construction projects in developing countries. Scholar Chrisna Du Plessis of the Council for Scientific and Industrial Research (CSIR) defines the following key issues as specific to work in developing countries: New, non-western frameworks for development Understanding the connection between urbanization and rural development Sustainable housing solutions Education Innovative materials Innovative methods of construction Merging modern and traditional practices Promoting equity in gender roles Development of new financing systems Improving the capacity of the government and the construction industry In a later work, Du Plessis expands the definition of sustainable construction to include the importance of sustainability in social and economic contexts as well. This is especially relevant in construction projects in the Global South, where local value systems and social interactions may differ from the western context in which sustainable construction frameworks were developed. Debates surrounding sustainable construction in developing countries First, the need for sustainable development measures in developing countries is considered. Most scholars have reached a consensus on the concept of the 'double burden' placed on developing countries as a result of the interactions between development and the environment. Developing countries are uniquely vulnerable to problems of both development (resource strain, pollution, waste management, etc.) and under-development (lack of housing, inadequate water and sanitation systems, hazardous work environments) that directly influence their relationship with the surrounding environment. Additionally, scholars have defined two classes of environmental problems faced by developing countries: 'brown agendas' address issues that cause more immediate environmental health consequences for localized populations, whereas 'green agendas' address long-term, wide-scope threats to the environment. Typically, green agenda solutions are promoted by environmentalists from developed, western countries, leading them to be commonly criticized as elitist and ignorant of the needs of the poor, especially since positive results are often delayed due to their long-term scope. Scholars have argued that sometimes these efforts can even end up hurting impoverished communities; for example, conservation initiatives often lead to restrictions on resource use despite the fact that many rural communities rely on these resources as a source of income, forcing households to either find new livelihoods or find different areas for harvesting. The general consensus is that the best approach to sustainable construction in developing countries is a merging of brown and green agenda ideals. Stakeholders Foreign investors and organizations Since all of the definitions and frameworks for the major concepts outlined previously are developed by large international organizations and commissions, their research and writings directly influence the organization, procedures, and scale of rural development projects in the Global South.
Attempts at community development by foreign organizations like the ones discussed have questionable records of success. For instance, billions of dollars of aid have flowed into Africa over the past 60 years in order to address infrastructure shortcomings, yet this aid has created numerous social and economic problems without making any progress toward infrastructure development. One compelling explanation for why aid-funded infrastructure projects have failed in the past is that they are often eurocentric in design, modelled on strategies that succeeded in western countries without being adapted to local needs, environmental circumstances and cultural value systems. NGOs/Non-profits Often NGOs and development nonprofits are criticized for taking over responsibilities that are traditionally carried out by the state, causing governments to become ineffective in handling these responsibilities over time. Within Africa, NGOs carry out the majority of sustainable building and construction through donor-funded, low-income housing projects. Future development Sustainable construction has now become mainstream in the construction industry. The increasing drive to adopt better ways of building, stricter industrial standards and the improvement of technologies have lowered the cost of applying the concept, according to the Business Case for Green Building report. The current cost of sustainable construction may be 0.4% lower than the normal cost of construction. See also Alternative natural materials Autonomous building Biophilic design Development Fund of the Swedish Construction Industry Ecological design Energy-efficient landscaping Environmental engineering Environmental surveying Green building Hempcrete Natural building Sustainable architecture References Construction Sustainability
Sustainability in construction
[ "Engineering" ]
3,182
[ "Construction" ]
59,376,843
https://en.wikipedia.org/wiki/Alta%20Devices
Alta Devices was a US-based specialty gallium arsenide (GaAs) PV manufacturer, which claimed to have achieved a solar cell conversion efficiency record of 29.1%, as certified by Germany's Fraunhofer ISE CalLab. The company has ceased operations. History Alta Devices was founded in 2007 by Eli Yablonovitch and Harry Atwater and manufactured solar photovoltaic applications for mobile devices that enable the conversion of light into electricity. The firm was acquired by Hanergy Group, a privately held Chinese multinational renewable energy company, in 2013. The firm's technology was significant for unmanned aerial vehicles (UAVs), solar cars, and other electric vehicles. NASA tested its solar technology for the International Space Station. The firm had been working with Audi on 'solar roofs' for their automobiles. The firm broke efficiency records for single-junction solar cells and solar modules. In 2016, Alta Devices broke the world record for dual-junction solar cell conversion efficiency using an InGaP/GaAs tandem structure. The world record was certified by the National Renewable Energy Laboratory. This was the second time Alta Devices had broken a world record for efficiency; the first, at 30.8%, came in 2013. In 2019, the company ceased operations. Awards 2018 - Finalist in the 2019 Prism Awards for Photonics Innovation, determined by SPIE and Photonics Media. See also Solar cell efficiency Timeline of solar cells References Photovoltaics manufacturers Renewable energy companies of the United States
Alta Devices
[ "Engineering" ]
320
[ "Photovoltaics manufacturers", "Engineering companies" ]
59,383,794
https://en.wikipedia.org/wiki/Nanoparticle%20drug%20delivery
Nanoparticle drug delivery systems are engineered technologies that use nanoparticles for the targeted delivery and controlled release of therapeutic agents. The modern form of a drug delivery system should minimize side-effects and reduce both dosage and dosage frequency. Recently, nanoparticles have attracted attention due to their potential application for effective drug delivery. Nanomaterials exhibit different chemical and physical properties or biological effects compared to their larger-scale counterparts, which can be beneficial for drug delivery systems. Some important advantages of nanoparticles are their high surface-area-to-volume ratio, chemical and geometric tunability, and their ability to interact with biomolecules to facilitate uptake across the cell membrane. The large surface area also has a large affinity for drugs and small molecules, like ligands or antibodies, for targeting and controlled release purposes. Nanoparticles refer to a large family of materials, both organic and inorganic. Each material has uniquely tunable properties and thus can be selectively designed for specific applications. Despite the many advantages of nanoparticles, there are also many challenges, including but not limited to: nanotoxicity, biodistribution and accumulation, and the clearance of nanoparticles by the human body. The National Institute of Biomedical Imaging and Bioengineering has issued the following prospects for future research in nanoparticle drug delivery systems: crossing the blood-brain barrier (BBB) in brain diseases and disorders; enhancing targeted intracellular delivery to ensure the treatments reach the correct structures inside cells; combining diagnosis and treatment. The development of new drug systems is time-consuming; it takes approximately seven years to complete fundamental research and development before advancing to preclinical animal studies. Characterization Nanoparticle drug delivery focuses on maximizing drug efficacy and minimizing cytotoxicity. Fine-tuning nanoparticle properties for effective drug delivery involves addressing the following factors. The surface-area-to-volume ratio of nanoparticles can be altered to allow for more ligand binding to the surface. Increasing ligand binding efficiency can decrease dosage and minimize nanoparticle toxicity. Minimizing dosage or dosage frequency also lowers the mass of nanoparticle per mass of drug, thus achieving greater efficiency. Surface functionalization of nanoparticles is another important design aspect and is often accomplished by bioconjugation or passive adsorption of molecules onto the nanoparticle surface. By functionalizing nanoparticle surfaces with ligands that enhance drug binding, suppress immune response, or provide targeting/controlled release capabilities, both greater efficacy and lower toxicity are achieved. Efficacy is increased as more drug is delivered to the target site, and toxic side effects are lowered by minimizing the total level of drug in the body. The composition of the nanoparticle can be chosen according to the target environment or desired effect. For example, liposome-based nanoparticles can be biologically degraded after delivery, thus minimizing the risk of accumulation and toxicity after the therapeutic cargo has been released. Metal nanoparticles, such as gold nanoparticles, have optical qualities (also described in nanomaterials) that allow for less invasive imaging techniques.
Furthermore, the photothermal response of nanoparticles to optical stimulation can be directly utilized for tumor therapy. Platforms Current nanoparticle drug delivery systems can be cataloged based on their platform composition into several groups: polymeric nanoparticles, inorganic nanoparticles, viral nanoparticles, lipid-based nanoparticles, and nanoparticle albumin-bound (nab) technology. Each family has its unique characteristics. Polymeric nanoparticles Polymeric nanoparticles are synthetic polymers with a size ranging from 10 to 100 nm. Common synthetic polymeric nanoparticles include polyacrylamide, polyacrylate, and chitosan. Drug molecules can be incorporated either during or after polymerization. Depending on the polymerization chemistry, the drug can be covalently bonded, encapsulated in a hydrophobic core, or conjugated electrostatically. Common synthetic strategies for polymeric nanoparticles include microfluidic approaches, electrodropping, high pressure homogenization, and emulsion-based interfacial polymerization. Polymer biodegradability is an important aspect to consider when choosing the appropriate nanoparticle chemistry. Nanocarriers composed of biodegradable polymers undergo hydrolysis in the body, producing biocompatible small molecules such as lactic acid and glycolic acid. Polymeric nanoparticles can be created via self assembly or other methods such as particle replication in nonwetting templates (PRINT), which allows customization of the composition, size, and shape of the nanoparticle using tiny molds. Dendrimers Dendrimers are unique hyper-branched synthetic polymers with monodispersed size, well-defined structure, and a highly functionalized terminal surface. They are typically composed of synthetic or natural amino acids, nucleic acids, and carbohydrates. Therapeutics can be loaded with relative ease into the interior of the dendrimers or onto the terminal surface of the branches via electrostatic interactions, hydrophobic interactions, hydrogen bonds, chemical linkages, or covalent conjugation. Drug-dendrimer conjugation can extend the half-life of drugs. Currently, dendrimer use in biological systems is limited due to dendrimer toxicity and limitations in their synthesis methods. Dendrimers are also confined within a narrow size range (<15 nm), and current synthesis methods are subject to low yield. The surface groups will reach the de Gennes dense packing limit at high generation level, which seals the interior from the bulk solution – this can be useful for encapsulation of hydrophobic, poorly soluble drug molecules. The seal can be tuned by intramolecular interactions between adjacent surface groups, which can be varied by the conditions of the solution, such as pH, polarity, and temperature, a property which can be utilized to tailor encapsulation and controlled release properties. Inorganic Nanoparticles and Nanocrystals Inorganic nanoparticles have emerged as highly valuable functional building blocks for drug delivery systems due to their well-defined and highly tunable properties such as size, shape, and surface functionalization. Inorganic nanoparticles have been widely adopted in biological and medical applications ranging from imaging and diagnosis to drug delivery. Inorganic nanoparticles are usually composed of inert metals such as gold and titanium that form nanospheres; however, iron oxide nanoparticles have also become an option.
Quantum dots (QDs), or inorganic semiconductor nanocrystals, have also emerged as valuable tools in the field of bionanotechnology because of their unique size-dependent optical properties and versatile surface chemistry. Their diameters (2–10 nm) are on the order of the exciton Bohr radius, resulting in quantum confinement effects analogous to the "particle-in-a-box" model. As a result, the optical and electronic properties of quantum dots vary with their size: nanocrystals of larger sizes will emit lower energy light upon fluorescence excitation. Surface engineering of QDs is crucial for creating nanoparticle–biomolecule hybrids capable of participating in biological processes. Manipulation of nanocrystal core composition, size, and structure changes QD photo-physical properties. Designing coating materials that encapsulate the QD core in an organic shell makes nanocrystals biocompatible, and QDs can be further decorated with biomolecules to enable more specific interaction with biological targets. The design of an inorganic nanocrystal core coupled with a biologically compatible organic shell and surface ligands can combine useful properties of both materials, i.e. the optical properties of the QDs and the biological functions of the attached ligands. Toxicity While the application of inorganic nanoparticles in bionanotechnology shows encouraging advancements from a materials science perspective, the use of such materials in vivo is limited by issues related to toxicity, biodistribution and bioaccumulation. Because metal inorganic nanoparticle systems degrade into their constituent metal atoms, challenges may arise from the interactions of these materials with biosystems, and a considerable amount of the particles may remain in the body after treatment, leading to a buildup of metal particles potentially resulting in toxicity. Recently, however, some studies have shown that certain nanoparticle environmental toxicity effects are not apparent until nanoparticles undergo transformations to release free metal ions. Under aerobic and anaerobic conditions, it was found that copper, silver, and titanium nanoparticles released low or insignificant levels of metal ions. This is evidence that copper, silver, and titanium NPs are slow to release metal ions, and may therefore appear at low levels in the environment. Additionally, nanoshell coatings significantly protect against degradation in the cellular environment and also reduce QD toxicity by reducing metal ion leakage from the core. Organic Nanocrystals Organic nanocrystals consist of pure drugs and the surface active agents required for stabilization. They are defined as carrier-free submicron colloidal drug delivery systems with a mean particle size in the nanometer range. The primary importance of formulating drugs into nanocrystals is the increase in particle surface area in contact with the dissolution medium, thereby increasing bioavailability. A number of drug products formulated in this way are on the market. Solubility One of the issues faced by drug delivery is the solubility of the drug in the body; around 40% of newly detected chemicals found in drug discovery are poorly soluble in water. This low solubility affects the bioavailability of the drug, meaning the rate at which the drug reaches the circulatory system and thus the target site. Low bioavailability is most commonly seen in oral administration, which is the preferred choice for drug administration due to its convenience, low costs, and good patient compliance.
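Returning to the quantum-confinement point made earlier in this section (larger nanocrystals emit lower-energy light), the trend can be illustrated with the particle-in-a-box picture the text invokes. The following sketch is purely an illustration under stated assumptions: the bulk band gap and effective masses are placeholder values loosely inspired by CdSe-like quantum dots, and the confinement energy is taken from the simplest infinite-well expression, so the printed wavelengths show the qualitative size trend rather than measured emission for any real material.

# Hedged sketch of quantum confinement in a semiconductor nanocrystal ("particle in a box").
# Assumptions (illustrative only, not data for a specific material):
#   - bulk band gap and effective masses roughly in the range of CdSe-like QDs
#   - confinement energy approximated by the infinite-well term
#     E_conf = h^2 / (8 d^2) * (1/m_e + 1/m_h)  for a dot of diameter d
import math

H = 6.626e-34        # Planck constant, J*s
C = 3.0e8            # speed of light, m/s
M0 = 9.109e-31       # free electron mass, kg
EV = 1.602e-19       # joules per eV

E_BULK_EV = 1.74     # assumed bulk band gap, eV
M_E = 0.13 * M0      # assumed electron effective mass
M_H = 0.45 * M0      # assumed hole effective mass

def emission(diameter_nm):
    """Return (effective gap in eV, emission wavelength in nm) for a dot of given diameter."""
    d = diameter_nm * 1e-9
    e_conf = (H ** 2 / (8.0 * d ** 2)) * (1.0 / M_E + 1.0 / M_H)   # confinement energy, J
    gap_ev = E_BULK_EV + e_conf / EV
    wavelength_nm = H * C / (gap_ev * EV) * 1e9
    return gap_ev, wavelength_nm

for d_nm in (2.0, 4.0, 6.0, 8.0, 10.0):
    gap, lam = emission(d_nm)
    print(f"d = {d_nm:4.1f} nm  ->  gap ~ {gap:.2f} eV, emission ~ {lam:.0f} nm")

With these placeholder numbers the smallest dots emit in the blue and the largest approach the assumed bulk gap, which is the size dependence described in the paragraph above.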
A measure to improve poor bioavailability is to inject the drugs in a solvent mixture with a solubilizing agent. However, results show this solution is ineffective, with the solubilizing agent demonstrating side-effects and/or toxicity. Nanocrystals used for drug delivery can increase saturation solubility and dispersion velocity. Generally, saturation solubility is thought to be a function of temperature, but for nanocrystals it also depends on other factors, such as crystalline structure and particle size. The Ostwald–Freundlich equation shows this relationship: log(Cs/Cα) = 2σV / (2.303 R T ρ r), where Cs is the saturation solubility of the nanocrystal, Cα is the solubility of the drug at a non-nano scale, σ is the interfacial tension of the substance, V is the molar volume of the particle, R is the gas constant, T is the absolute temperature, ρ is the density of the solid, and r is the radius. The advantages of nanocrystals are that they can improve oral absorption, bioavailability and onset of action, and reduce intersubject variability. Consequently, nanocrystals are now being produced and are on the market for a variety of purposes ranging from antidepressants to appetite stimulants. Nanocrystals can be produced in two different ways: the top-down method or the bottom-up method. Bottom-up technologies are also known as nanoprecipitation. This technique involves dissolving a drug in a suitable solvent and then precipitating it with a non-solvent. On the other hand, top-down technologies use force to reduce the size of a particle to nanometers, usually by milling a drug. Top-down methods are preferred when working with poorly soluble drugs. Stability A disadvantage of using nanocrystals for drug delivery is their limited stability. Instability problems of nanocrystalline structures derive from thermodynamic processes such as particle aggregation, amorphization, and bulk crystallization. Particles at the nanoscopic scale feature a relative excess of Gibbs free energy, due to their higher surface area to volume ratio. To reduce this excess energy, it is generally favorable for aggregation to occur. Thus, individual nanocrystals are relatively unstable by themselves and will generally aggregate. This is particularly problematic in top-down production of nanocrystals. Methods such as high-pressure homogenization and bead milling tend to increase instabilities by increasing surface areas; to compensate, or as a response to high pressure, individual particles may aggregate or turn amorphous in structure. Such methods can also lead to the reprecipitation of the drug by surpassing the solubility beyond the saturation point (Ostwald ripening). One method to overcome aggregation and retain or increase nanocrystal stability is the use of stabilizer molecules. These molecules, which interact with the surface of the nanocrystals and prevent aggregation via ionic repulsion or steric barriers between the individual nanocrystals, include surfactants and are generally useful for stabilizing suspensions of nanocrystals. Concentrations of surfactants that are too high, however, may inhibit nanocrystal stability and enhance crystal growth or aggregation. It has been shown that certain surfactants, upon reaching a critical concentration, begin to self-assemble into micelles, which then compete with nanocrystal surfaces for other surfactant molecules. With fewer surface molecules interacting with the nanocrystal surface, crystal growth and aggregation are reported to occur to a greater extent.
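A numerical reading of the Ostwald–Freundlich relation quoted above makes the particle-size effect on saturation solubility concrete. The sketch below is hedged: it uses the molar-volume form ln(Cs/Cα) = 2σVm/(rRT), which matches the quoted expression when V is read as a molar mass so that V/ρ is the molar volume, and the interfacial tension, molar mass and density are placeholder assumptions chosen only to show the qualitative trend, not parameters for any real drug.

# Hedged sketch of the Ostwald-Freundlich size effect on saturation solubility.
# Uses ln(Cs/C_alpha) = 2*sigma*Vm / (r*R*T) with Vm the molar volume (= molar mass / density).
# All material parameters are illustrative assumptions, not data for a real drug.
import math

R = 8.314              # gas constant, J/(mol*K)
T = 310.0              # absolute temperature (body temperature), K
SIGMA = 0.020          # assumed crystal-water interfacial tension, J/m^2
MOLAR_MASS = 0.300     # assumed molar mass, kg/mol
RHO = 1200.0           # assumed solid density, kg/m^3
VM = MOLAR_MASS / RHO  # molar volume, m^3/mol

def solubility_ratio(radius_nm):
    """Cs / C_alpha for a nanocrystal of the given radius in nanometres."""
    r = radius_nm * 1e-9
    return math.exp(2.0 * SIGMA * VM / (r * R * T))

for r_nm in (1000.0, 100.0, 50.0, 20.0, 10.0, 5.0):
    print(f"r = {r_nm:7.1f} nm  ->  Cs/C_alpha ~ {solubility_ratio(r_nm):.3f}")

The numbers only illustrate the qualitative point made in the text: the solubility enhancement is negligible for micron-sized particles and becomes appreciable only in the tens-of-nanometres range.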
Use of surfactant at optimal concentrations reportedly allows for higher stability, larger drug capacity as a carrier, and sustained drug release. In a study using PEG as a stabilizer, it was found that nanocrystals treated with PEG showed enhanced accumulation at tumor sites and longer blood circulation than those not treated with PEG. Amorphization can occur in top-down methods of production. With different intramolecular arrangements, amorphization of nanocrystals leads to different thermodynamic and kinetic properties that affect drug delivery and kinetics. Transition to amorphous structures is reported to occur through production practices such as spray drying, lyophilization, and mechanical mechanisms, such as milling. This amorphization has reportedly been observed with or without the presence of stabilizer in a dry milling process. Using a wet milling process with surfactant, however, significantly reduced amorphization, suggesting that the solvent, in this case water, and the surfactant could inhibit amorphization for some top-down production methods that otherwise reportedly facilitate amorphization. Liposome delivery Liposomes are spherical vesicles composed of synthetic or natural phospholipids that self-assemble in aqueous solution in sizes ranging from tens of nanometers to micrometers. The resulting vesicle, which has an aqueous core surrounded by a hydrophobic membrane, can be loaded with a wide variety of hydrophobic or hydrophilic molecules for therapeutic purposes. Liposomes are typically synthesized with naturally occurring phospholipids, mainly phosphatidylcholine. Cholesterol is often included in the formulation to adjust the rigidity of the membrane and to increase stability. The molecular cargo is loaded through liposome formation in aqueous solution, solvent exchange mechanisms, or pH gradient methods. Various molecules can also be chemically conjugated to the surface of the liposome to alter recognition properties. One typical modification is conjugating polyethylene glycol (PEG) to the vesicle surface. The hydrophilic polymer prevents recognition by macrophages and decreases clearance. The size, surface charge, and bilayer fluidity also alter liposome delivery kinetics. Liposomes diffuse from the bloodstream into the interstitial space near the target site. As the cell membrane itself is composed of phospholipids, liposomes can directly fuse with the membrane and release the cargo into the cytosol, or may enter the cell through phagocytosis or other active transport pathways. Liposomal delivery has various advantages. Liposomes increase the solubility, stability, and uptake of drug molecules. Peptides, polymers, and other molecules can be conjugated to the surface of a liposome for targeted delivery. Conjugating various ligands can facilitate binding to target cells based on the receptor-ligand interaction. Vesicle size and surface chemistry can also be tuned to increase circulation time. Various FDA-approved liposomal drugs are in clinical use in the US. The anthracycline drug doxorubicin is delivered with phospholipid-cholesterol liposomes to treat AIDS-related Kaposi sarcoma and multiple myeloma with high efficacy and low toxicity. Many others are undergoing clinical trials, and liposomal drug delivery remains an active field of research today, with potential applications including nucleic acid therapy, brain targeting, and tumor therapy.
Viral vectors, virus-like particles, and biological nanocarriers Viruses can be used to deliver genes for genetic engineering or gene therapy. Commonly used viruses include adenoviruses, retroviruses, and various bacteriophages. The surface of the viral particle can also be modified with ligands to increase targeting capabilities. While viral vectors can be used to great efficacy, one concern is that they may cause off-target effects due to their natural tropism. This usually requires replacing the proteins causing virus-cell interactions with chimeric proteins. In addition to using viruses, drug molecules can also be encapsulated in protein particles derived from the viral capsid, or virus-like particles (VLPs). VLPs are easier to manufacture than viruses, and their structural uniformity allows VLPs to be produced precisely in large amounts. VLPs also have easy-to-modify surfaces, allowing the possibility of targeted delivery. There are various methods of packaging the molecule into the capsid; most take advantage of the capsid's ability to self-assemble. One strategy is to alter the pH gradient outside the capsid to create pores on the capsid surface and trap the desired molecule. Other methods use aggregators such as leucine zippers or polymer-DNA amphiphiles to induce capsid formation and capture drug molecules. It is also possible to chemically conjugate drugs directly onto reactive sites on the capsid surface, often involving the formation of amide bonds. After being introduced to the organism, VLPs often have broad tissue distribution and rapid clearance, and are generally non-toxic. Like viruses, however, they may provoke an immune response, so immune-masking agents may be necessary. Nanoparticle Albumin-bound (nab) Technology Nanoparticle albumin-bound technology utilizes the protein albumin as a carrier for hydrophobic chemotherapy drugs through noncovalent binding. Because albumin is already a natural carrier of hydrophobic particles and is able to transcytose molecules bound to itself, albumin-based nanoparticles have become an effective strategy for the treatment of many diseases in clinical research. Delivery and release mechanisms An ideal drug delivery system should have effective targeting and controlled release. The two main targeting strategies are passive targeting and active targeting. Passive targeting depends on the fact that tumors have abnormally structured blood vessels that favor accumulation of relatively large macromolecules and nanoparticles. This so-called enhanced permeability and retention (EPR) effect allows the drug carrier to be transported specifically to the tumor cells. Active targeting is, as the name suggests, much more specific and is achieved by taking advantage of receptor-ligand interactions at the surface of the cell membrane. Controlled drug release systems can be achieved through several methods. Rate-programmed drug delivery systems are tuned to the diffusivity of active agents across the membrane. Another delivery-release mechanism is activation-modulated drug delivery, where the release is triggered by environmental stimuli. The stimuli can be external, such as the introduction of chemical activators or activation by light or electromagnetic fields, or biological, such as pH, temperature, and osmotic pressure, which can vary widely throughout the body.
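The distinction between rate-programmed and activation-modulated release just described can be made concrete with a toy kinetic model. The sketch below is purely illustrative and hedged: it assumes simple first-order release from the carrier and represents the environmental trigger as a step change in the rate constant at a chosen time; the rate constants and the 8-hour trigger are arbitrary placeholder values, not parameters from any published delivery system.

# Hedged toy model contrasting the two release strategies described above:
#   (1) rate-programmed release, approximated by a constant first-order rate constant,
#   (2) activation-modulated release, where the rate constant jumps when an
#       environmental trigger (e.g. a pH or temperature change) is applied.
# The kinetic law and all constants are placeholder modelling choices.

def simulate_release(hours, dt, k_base, k_triggered, trigger_time=None):
    """Fraction of drug released vs. time for a first-order release model."""
    steps = int(round(hours / dt))
    released = 0.0
    history = [(0.0, 0.0)]
    for step in range(1, steps + 1):
        t = step * dt
        k = k_triggered if (trigger_time is not None and t >= trigger_time) else k_base
        released += k * (1.0 - released) * dt   # forward-Euler step of dF/dt = k (1 - F)
        history.append((t, released))
    return history

rate_programmed = simulate_release(24, 0.1, k_base=0.15, k_triggered=0.15)
stimulus_triggered = simulate_release(24, 0.1, k_base=0.01, k_triggered=0.60,
                                      trigger_time=8.0)   # trigger applied at t = 8 h

for t_check in (4, 8, 12, 24):
    idx = int(round(t_check / 0.1))
    t, rp = rate_programmed[idx]
    _, st = stimulus_triggered[idx]
    print(f"t = {t:5.1f} h   rate-programmed: {rp:5.2f}   stimulus-triggered: {st:5.2f}")

The printout shows the intended qualitative contrast: the rate-programmed carrier releases steadily from the start, while the activation-modulated carrier holds most of its cargo until the trigger is applied and then releases quickly.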
Polymeric nanoparticles For polymeric nanoparticles, the induction of stimuli-responsiveness has usually relied heavily upon well-known polymers that possess an inherent stimuli-responsiveness. Certain polymers that can undergo reversible phase transitions due to changes in temperature or pH have aroused interest. Arguably the most utilized polymer for activation-modulated delivery is the thermo-responsive polymer poly(N-isopropylacrylamide). It is readily soluble in water at room temperature but precipitates reversibly from solution when the temperature is raised above its lower critical solution temperature (LCST), changing from an extended chain conformation to a collapsed chain. This feature presents a way to change the hydrophilicity of a polymer via temperature. Efforts also focus on dual stimuli-responsive drug delivery systems, which can be harnessed to control the release of the encapsulated drug. For example, the triblock copolymer poly(ethylene glycol)-b-poly(3-aminopropyl-methacrylamide)-b-poly(N-isopropylacrylamide) (PEG-b-PAPMA-b-PNIPAm) can self-assemble to form micelles possessing a core–shell–corona architecture above the lower critical solution temperature. It is also pH responsive. Therefore, drug release can be tuned by changing either temperature or pH conditions. Inorganic nanoparticles Drug delivery strategies of inorganic nanoparticles are dependent on material properties. The active targeting of inorganic nanoparticle drug carriers is often achieved by surface functionalization of the nanoparticles with specific ligands. For example, the inorganic multifunctional nanovehicle (5-FU/Fe3O4/αZrP@CHI-FA-R6G) is able to accomplish tumor optical imaging and therapy simultaneously. It can be directed to the location of cancer cells with sustained release behavior. Studies have also been done on gold nanoparticle responses to local near-infrared (NIR) light as a stimulus for drug release. In one study, gold nanoparticles functionalized with double-stranded DNA encapsulating drug molecules were irradiated with NIR light. The particles generated heat and denatured the double-stranded DNA, which triggered the release of drugs at the target site. Studies also suggest that a porous structure is beneficial for attaining a sustained or pulsatile release. Porous inorganic materials demonstrate high mechanical and chemical stability within a range of physiological conditions. The well-defined surface properties, such as high pore volume, narrow pore diameter distribution, and high surface area, allow the entrapment of drugs, proteins and other biogenic molecules with predictable and reproducible release patterns. Toxicity Some of the same properties that make nanoparticles efficient drug carriers also contribute to their toxicity. For example, gold nanoparticles are known to interact with proteins through surface adsorption, forming a protein corona, which can be utilized for cargo loading and immune shielding. However, this protein-adsorption property can also disrupt normal protein function that is essential for homeostasis, especially when the protein contains exposed sulfur groups. The photothermal effect, which can be induced to kill tumor cells, may also create reactive oxygen species that impose oxidative stress on surrounding healthy cells. Gold nanoparticles of sizes below 4–5 nm fit in DNA grooves, which can interfere with transcription, gene regulation, replication, and other processes that rely on DNA-protein binding.
Lack of biodegradability for some nanoparticle chemistries can lead to accumulation in certain tissues, thus interfering with a wide range of biological processes. Currently, there is no regulatory framework in the United States for testing nanoparticles for their general impact on health and on the environment. References Nanoparticles Drug delivery devices Nanomedicine
Nanoparticle drug delivery
[ "Chemistry", "Materials_science" ]
5,021
[ "Nanomedicine", "Pharmacology", "Drug delivery devices", "Nanotechnology" ]
42,903,555
https://en.wikipedia.org/wiki/A.J.%20Drexel%20Plasma%20Institute
The Drexel Plasma Institute, in Camden, New Jersey, is the largest university-based plasma research facility in the United States. Led by Drexel University, the members of the scientific team are from the University of Illinois at Chicago, Argonne National Laboratory, Pacific Northwest National Laboratory and the Kurchatov Institute of Atomic Energy. The primary fields of research are applications in the medical, environmental control, energy, and agricultural industries. The institute actively develops and researches specific types of plasma discharges such as gliding arc, dielectric barrier discharge, gliding arc tornado, reverse vortex flow, pulsed corona discharge, and many more. Applications Plasma can be applied in many industries, so its applications are usually categorized by the context in which they are researched. For example, the dissociation of hydrogen sulfide can be labeled as an environmental application. However, its production of gaseous hydrogen can be more relevant to the energy industry. As such, each application is categorized by its researched context rather than its ultimate goal. Medicine Dr. Gregory Fridman is the laboratory director for the application of plasma in the field of medicine. Along with teaching at Drexel University, he creates and finds new applications of plasma in medicine such as blood coagulation. Environmental control Energy Alexander Rabinovich is the laboratory director for the application of plasma in the field of energy. Primarily, he studies and researches how plasma can be used in the Energy Systems, Fuel Conversion & Hydrogen Production Division. Some of his research is specialized in the conversion of certain gases or the dissociation of others: "Gliding Arc Plasma-Stimulated Conversion of Pyrogas into Synthesis Gas" "Low-Temperature Plasma Reforming of Hydrocarbon Fuels Into Hydrogen and Carbon Suboxide for Energy Generation Without CO2 Emission" "Plasma assisted dissociation of hydrogen sulfide" Agriculture References Plasma physics facilities Research institutes in New Jersey Drexel University Education in Camden, New Jersey Organizations established in 2002 2002 establishments in New Jersey
A.J. Drexel Plasma Institute
[ "Physics" ]
400
[ "Plasma physics facilities", "Plasma physics" ]
42,906,061
https://en.wikipedia.org/wiki/Total%20position%20spread
In physics, the total position-spread (TPS) tensor is a quantity originally introduced in the modern theory of electrical conductivity. In the case of molecular systems, this tensor measures the fluctuation of the electrons around their mean positions, which corresponds to the delocalization of the electronic charge within a molecular system. The total position-spread can discriminate between metals and insulators using information from the ground-state wave function. This quantity can be very useful as an indicator to characterize intervalence charge transfer processes, the bond nature of molecules (covalent, ionic, or weakly bonded), and metal–insulator transitions. Overview The localization tensor (LT) is a per-electron quantity proposed in the context of the theory of Kohn to characterize electrical conductivity properties. In 1964, Kohn realized that electrical conductivity is more related to the proper delocalization of the wave function than to a simple band gap. In fact, he proposed that a qualitative difference between insulators and conductors also manifests as a different organization of the electrons in their ground state: the wave function is strongly localized in insulators and strongly delocalized in conductors. The interesting outcomes of this theory are: i) it relates to the classical idea of localized electrons as the cause of the insulating state; ii) the needed information can be recovered from the ground-state wave function, because in the insulating regime the wave function breaks down into a sum of disconnected terms. It was not until 1999 that Resta and coworkers found a way to quantify the Kohn delocalization by proposing the aforementioned localization tensor. The LT is defined as the second-order moment cumulant of the position operator divided by the number of electrons in the system. The key property of the LT is that it diverges for metals while it takes finite values for insulators in the thermodynamic limit. Recently, the global quantity (the LT not divided by the number of electrons) has been introduced to study molecules and named the total position-spread tensor. Theory Spin-summed total position-spread (SS-TPS) The total position spread Λ is defined as the second moment cumulant of the total electron position operator, and its units are length squared (e.g. bohr²). In order to compute this quantity, one has to take into account the position operator and its tensorial square. For a system of n electrons, the total position operator and its Cartesian components are defined as R = Σ_i r_i, i.e. R_u = Σ_i u_i for u = x, y, z (total position), where the index i runs over the number of electrons. Each component of the position operator is a one-electron operator; in second quantization it can be represented as R_u = Σ_{ij} ⟨i|u|j⟩ a†_i a_j, where i, j run over orbitals. The expectation values of the position components are the first moments of the electrons' position. Now we consider the tensorial square (second moment). In this sense, there are two types of them: in quantum chemistry programs like MOLPRO or DALTON the second moment operator is a tensor defined as the sum of the tensor squares of the positions of a single electron. This is a one-electron operator s defined by its Cartesian components s_{uv} = Σ_i u_i v_i, where the index i runs over the number of electrons. There is also the square of the total position operator, R_u R_v = Σ_{ij} u_i v_j; its i ≠ j part is a two-electron operator S, also defined by its Cartesian components, S_{uv} = Σ_{i≠j} u_i v_j, where the indices i, j run over electrons.
The second moment of the position then becomes the sum of the one- and two-electron operators already defined: R_u R_v = s_{uv} + S_{uv}. Given an n-electron wave function Ψ, one wants to compute its second moment cumulant. A cumulant is a linear combination of moments, so we have Λ_{uv} = ⟨Ψ|R_u R_v|Ψ⟩ − ⟨Ψ|R_u|Ψ⟩⟨Ψ|R_v|Ψ⟩. Spin-partitioned total position-spread (SP-TPS) The position operator can be partitioned according to spin components. From the one-particle operator it is possible to define the total spin-partitioned position operators as R_{u,σ} = Σ_{i∈σ} u_i for σ = α, β, where the sum runs over the electrons of spin σ. Therefore, the total position operator can be expressed as the sum of the two spin parts R_α and R_β: R_u = R_{u,α} + R_{u,β}, and the square of the total position operator decomposes as R_u R_v = R_{u,α}R_{v,α} + R_{u,α}R_{v,β} + R_{u,β}R_{v,α} + R_{u,β}R_{v,β}. Thus, there are four joint second moment cumulants of the spin-partitioned position operator: Λ^{αα}, Λ^{αβ}, Λ^{βα} and Λ^{ββ}, each defined in analogy with the spin-summed case. Applications Model Hamiltonians Hubbard model The Hubbard model is a very simple and approximate model employed in condensed matter physics to describe the transition of materials from metals to insulators. It takes into account only two parameters: i) the kinetic energy or hopping integral, denoted by −t; and ii) the on-site repulsion between electrons, represented by U (see the example of a 1D chain of hydrogen atoms). In Figure 1, there are two limiting cases to consider: larger values of −t/U represent strong charge fluctuation (electrons free to move), whereas for small values of −t/U the electrons are completely localized. The spin-summed total position-spread is very sensitive to these changes because it increases faster than linearly when electrons start to become mobile (in the 0.0 to 0.5 range of −t/U); a minimal numerical illustration for a two-site Hubbard dimer is sketched at the end of this entry. Heisenberg model Monitor the wave function The total position-spread is a powerful tool to monitor the wave function. Figure 3 shows the longitudinal spin-summed total position-spread (Λ∥) computed at the full configuration interaction level for the H2 diatomic molecule. The Λ∥ in the highly repulsive region shows a value that is lower than in the asymptotic limit. This is a consequence of the nuclei being near each other, causing an enhancement of the effective nuclear charge that makes the electrons more localized. When stretching the bond, the total position-spread starts growing until it reaches a maximum (strong delocalization of the wave function) before the bond is broken. Once the bond is broken, the wave function becomes a sum of disconnected localized regions, and the tensor decreases until it reaches twice the value of the atomic limit (1 bohr² for each hydrogen atom). Spin delocalization When the total position-spread tensor is partitioned according to spin (spin-partitioned total position-spread), it becomes a powerful tool to describe spin delocalization in the insulating regime. Figure 4 shows the longitudinal spin-partitioned total position-spread (Λ∥) computed at the full configuration interaction level for the H2 diatomic molecule. The horizontal line at 0 bohr² separates the same-spin (positive values) and different-spin (negative values) contributions of the spin-partitioned total position-spread. Unlike the spin-summed total position-spread, which saturates to the atomic value for R > 5, the spin-partitioned total position-spread diverges as R², indicating that there is strong spin delocalization. The spin-partitioned total position-spread can also be seen as a measure of how strong the electron correlation is. Properties The total position-spread is a cumulant and thus it possesses the following properties: Cumulants can be explicitly represented only by moments of lower or equal order.
Cumulants are a linear combination of the products of these moments of lower or equal order. Cumulants are additive. This is a very important property when studying molecular systems because it means that the total position-spread tensor shows size consistency. A diagonal element of the cumulant tensor is a variance, and it is always a positive value. Cumulants are also invariant under translation of the origin when they are of order ≥ 2. The total position-spread tensor, being a second-order cumulant, is invariant under translation of the origin. The total position-spread is more sensitive to variations of the wave function than the energy is, which makes it a good indicator, for instance, in a metal–insulator transition situation. References Tensors Condensed matter physics
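As flagged in the Hubbard model discussion above, a minimal numerical illustration of the spin-summed TPS can be obtained for a two-site, two-electron Hubbard dimer. The sketch below is a hedged toy calculation, not taken from the references: it works in the standard singlet subspace (covalent singlet plus the two ionic singlets), places the sites at x = ±d/2 with d = 1 bohr, and diagonalizes the model Hamiltonian with numpy; the growth of Λ∥ with −t/U mirrors the qualitative trend described in the article.

# Hedged toy calculation: longitudinal spin-summed TPS of a two-site, two-electron
# Hubbard dimer, by exact diagonalization in the singlet subspace.
# Basis ordering (an assumed textbook reduction): covalent singlet, symmetric ionic
# singlet, antisymmetric ionic singlet; sites located at x = -d/2 and x = +d/2.
import numpy as np

def tps_hubbard_dimer(t, U, d=1.0):
    """Return Lambda_parallel = <X^2> - <X>^2 for the singlet ground state."""
    H = np.array([[0.0,      -2.0 * t, 0.0],
                  [-2.0 * t,  U,       0.0],
                  [0.0,       0.0,     U]])
    # Total position operator X = sum_i x_i in the same basis: it couples the two
    # ionic singlets (amplitude -d) and annihilates the covalent singlet.
    X = np.array([[0.0, 0.0,  0.0],
                  [0.0, 0.0, -d],
                  [0.0, -d,   0.0]])
    evals, evecs = np.linalg.eigh(H)
    psi = evecs[:, 0]                      # ground-state eigenvector
    x_mean = psi @ X @ psi
    x2_mean = psi @ (X @ X) @ psi
    return x2_mean - x_mean ** 2

U = 1.0
for t in (0.01, 0.1, 0.25, 0.5, 1.0):
    lam = tps_hubbard_dimer(t, U)
    print(f"-t/U = {t / U:4.2f}   Lambda_parallel = {lam:.4f} bohr^2 (with d = 1 bohr)")

For small −t/U the spread is tiny (electrons localized, insulating-like regime) and it grows rapidly as charge fluctuations switch on, which is the behaviour the article attributes to the metal–insulator crossover in the Hubbard model.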
Total position spread
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,599
[ "Tensors", "Phases of matter", "Materials science", "Condensed matter physics", "Matter" ]
42,909,270
https://en.wikipedia.org/wiki/Anton%E2%80%93Schmidt%20equation%20of%20state
The Anton–Schmidt equation is an empirical equation of state for crystalline solids, e.g. for pure metals or intermetallic compounds. Quantum mechanical investigations of intermetallic compounds show that the dependence of the pressure on the volume under isotropic deformation can be described empirically by p(V) = −β (V/V0)^n ln(V/V0). Integration of p(V) leads to the equation of state for the total energy. The energy required to compress a solid to volume V is E(V) = −∫ p(V) dV + E∞, which gives E(V) = [β V0/(n + 1)] (V/V0)^(n+1) [ln(V/V0) − 1/(n + 1)] + E∞. The fitting parameters β and n are related to material properties: β corresponds to the bulk modulus at the equilibrium volume V0, and n correlates with the Grüneisen parameter. However, the fitting parameter E∞ does not reproduce the total energy of the free atoms. The total energy equation is used to determine elastic and thermal material constants in quantum chemical simulation packages. The equation of state has been used in cosmological contexts to describe dark energy dynamics. However, its use has recently been criticized since it appears disfavored compared with simpler equations of state adopted for the same purpose. See also Murnaghan equation of state Rose–Vinet equation of state Birch–Murnaghan equation of state References Solid mechanics Equations of state
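To make the reconstructed expressions above concrete, the following sketch evaluates the Anton–Schmidt pressure and the corresponding total-energy curve. The numerical values of β, n, V0 and E∞ are arbitrary placeholders standing in for fitted material constants, not data for any real compound; the script also checks numerically that dE/dV = −p and that the pressure vanishes at the equilibrium volume.

# Hedged sketch of the Anton-Schmidt equation of state:
#   p(V) = -beta * (V/V0)**n * ln(V/V0)
#   E(V) = beta*V0/(n + 1) * (V/V0)**(n + 1) * (ln(V/V0) - 1/(n + 1)) + E_inf
# Parameter values below are placeholders, not fitted constants for a real material.
import math

BETA = 100.0    # pressure scale of the order of a bulk modulus (placeholder)
N = -2.0        # dimensionless exponent (placeholder)
V0 = 1.0        # equilibrium volume (arbitrary units)
E_INF = 0.0     # integration constant (placeholder)

def pressure(v):
    return -BETA * (v / V0) ** N * math.log(v / V0)

def energy(v):
    x = v / V0
    return BETA * V0 / (N + 1.0) * x ** (N + 1.0) * (math.log(x) - 1.0 / (N + 1.0)) + E_INF

for v in (0.85, 0.95, 1.00, 1.05, 1.15):
    print(f"V/V0 = {v:4.2f}   p = {pressure(v):9.3f}   E = {energy(v):9.3f}")

# Consistency checks: p(V0) should vanish, and dE/dV should equal -p.
h = 1.0e-6
v_test = 1.02
dEdV = (energy(v_test + h) - energy(v_test - h)) / (2.0 * h)
print(f"p(V0) = {pressure(V0):.6f}   dE/dV + p at V/V0 = {v_test}: {dEdV + pressure(v_test):.2e}")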
Anton–Schmidt equation of state
[ "Physics" ]
225
[ "Solid mechanics", "Equations of physics", "Statistical mechanics", "Mechanics", "Equations of state" ]
42,914,369
https://en.wikipedia.org/wiki/Infrared%20safety
In quantum field theory, and especially asymptotically free quantum field theories, an observable is infrared safe if it does not depend on the low energy/long distance physics of the theory. Such observables can therefore be calculated reliably using perturbative methods and then compared to experiment. An example of an observable which is infrared safe is the total scattering cross-section for the collision of an electron and a positron to produce hadrons. See also Asymptotic freedom Infrared divergence Kinoshita–Lee–Nauenberg theorem References Quantum field theory Quantum chromodynamics
Infrared safety
[ "Physics" ]
131
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]