Dataset schema (column: type, observed range):
id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
54,536,319
https://en.wikipedia.org/wiki/HPA-23
HPA-23, sometimes known as antimoniotungstate, is an antiretroviral drug that was used for the treatment of HIV infection. It achieved widespread publicity as an effective treatment for HIV and AIDS beginning in 1984, just one year after HIV was first identified. Later testing failed to demonstrate any efficacy, and some patients suffered serious side effects from the drug, including liver failure. History HPA-23 was developed by Rhône-Poulenc at the Pasteur Institute in the 1970s and used in France on an experimental basis to treat HIV and AIDS patients beginning in 1984. The inventors of the drug, as listed in its patent, were Jean-Claude Chermann, Dominique Dormont, Etienne Vilmer, Bruno Spire, Françoise Barré-Sinoussi, Luc Montagnier, and Willy Rozenbaum. While the drug was not presented as a cure for HIV/AIDS, it was suggested it could arrest replication and spread of the virus. The United States, which had a more stringent drug approval process than France, delayed authorizing use of HPA-23 even for clinical trials, prompting an angry outcry and an exodus of more than 100 American AIDS patients to France to seek treatment, encouraged in part by a French call for American volunteers. Bill Kraus, who received HPA-23 dosages in France as a medical tourist, "pinned his entire hope for survival" on the drug, even to the exclusion of other experimental medications then in development. After actor Rock Hudson received treatment at a Paris hospital with HPA-23, a representative of the National Gay Task Force declared that "something is wrong with the health-care system when a wealthy man and a friend of the President has to go to Europe for treatment". At the same time, however, some within the American scientific community cautioned AIDS sufferers against putting too much hope in HPA-23 and generally supported the Food and Drug Administration's (FDA) conservative approach to certification. William A. Haseltine commented that reports of the drug's success in France were based on "the crummiest kind of anecdotal stories – they don't do the scientifically controlled trials". Physicians at San Francisco General Hospital's AIDS Clinic echoed Haseltine's concerns, noting that French testing of the drug was done without any type of control group and that the drug's high toxicity made it potentially dangerous to patients already suffering serious infections. Public Citizen, which was often critical of FDA decisions, also came out in support of the agency's timeline for certification. In August 1985, under increasing public pressure to fast-track approval of the drug, the FDA permitted the use of HPA-23 in extremely limited human testing. In the ensuing clinical trials no improvement in the condition of the test subjects was observed, with some even showing increased levels of HIV replication and three patients suffering liver failure triggered by the drug. By 1986, the National Academy of Sciences had concluded that no therapeutic benefits for persons infected with HIV could be attributed to HPA-23. It was subsequently abandoned as a treatment option. See also Ammonium paratungstate References HIV/AIDS French inventions Tungstates Withdrawn drugs Antimony compounds Sodium compounds Ammonium compounds 1984 in science
HPA-23
[ "Chemistry" ]
667
[ "Ammonium compounds", "Drug safety", "Withdrawn drugs", "Salts" ]
54,537,321
https://en.wikipedia.org/wiki/Random%20sequential%20adsorption
Random sequential adsorption (RSA) refers to a process where particles are randomly introduced in a system, and if they do not overlap any previously adsorbed particle, they adsorb and remain fixed for the rest of the process. RSA can be carried out in computer simulation, in a mathematical analysis, or in experiments. It was first studied using one-dimensional models: the attachment of pendant groups in a polymer chain by Paul Flory, and the car-parking problem by Alfréd Rényi. Other early works include those of Benjamin Widom. In two and higher dimensions many systems have been studied by computer simulation, including, in 2d, disks, randomly oriented squares and rectangles, aligned squares and rectangles, and various other shapes. An important result is the maximum surface coverage, called the saturation coverage or the packing fraction. Below we list that coverage for many systems. The blocking process has been studied in detail in terms of the random sequential adsorption (RSA) model. The simplest RSA model related to deposition of spherical particles considers irreversible adsorption of circular disks. One disk after another is placed randomly at a surface. Once a disk is placed, it sticks at the same spot, and cannot be removed. When an attempt to deposit a disk would result in an overlap with an already deposited disk, this attempt is rejected. Within this model, the surface is initially filled rapidly, but the closer one approaches saturation, the slower the surface is filled. Within the RSA model, saturation is sometimes referred to as jamming. For circular disks, saturation occurs at a coverage of 0.547. When the depositing particles are polydisperse, much higher surface coverage can be reached, since the small particles can deposit into the holes between the larger deposited particles. On the other hand, rod-like particles may lead to much smaller coverage, since a few misaligned rods can block a large portion of the surface. For the one-dimensional car-parking problem, Rényi has shown that the maximum coverage is equal to the so-called Rényi car-parking constant, approximately 0.7476. Then followed the conjecture of Ilona Palásti, who proposed that the coverage of d-dimensional aligned squares, cubes and hypercubes is equal to θ1^d, the one-dimensional coverage raised to the power d. This conjecture led to a great deal of work arguing in favor of it and against it, and finally to computer simulations in two and three dimensions showing that it was a good approximation but not exact. The accuracy of this conjecture in higher dimensions is not known. For k-mers on a one-dimensional lattice, the fraction of vertices covered at saturation is θ(k) = k ∫0∞ exp(−u − 2 Σj=1..k−1 (1 − e−ju)/j) du. When k goes to infinity, this gives the Rényi result above; for k = 2, it gives the Flory result θ(2) = 1 − e−2 ≈ 0.8647. For percolation thresholds related to random sequentially adsorbed particles, see Percolation threshold. Saturation coverages are tabulated for: k-mers on 1d lattice systems (with asymptotic behavior); segments of two lengths on a one-dimensional continuum (R = size ratio of segments, assuming equal rates of adsorption); k-mers on a 2d square lattice (with asymptotic behavior); k-mers on a 2d triangular lattice; particles with neighbors exclusion on 2d lattices; and squares on a 2d square lattice (for k = ∞, see "2d aligned squares" below).
Further tabulations cover: saturation coverage for randomly oriented 2d systems; 2d oblong shapes with maximal coverage; saturation coverage for 3d systems; saturation coverages for disks, spheres, and hyperspheres; and saturation coverages for aligned squares, cubes, and hypercubes. See also Adsorption Particle deposition Percolation threshold References Chemistry Materials science Colloidal chemistry
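The disk model described above is simple enough to sketch in code. The following is a minimal illustrative Python simulation, not taken from any of the works cited: it deposits unit-diameter disks on a periodic square and stops after a fixed number of consecutive rejections, which only approximates true saturation; with a large enough rejection budget the measured coverage approaches the reported jamming value of about 0.547.

```python
import random, math

# Minimal RSA of disks of radius r on a periodic L x L square.
# Stopping after max_failures consecutive rejected attempts only
# approximates true jamming (saturation) coverage, ~0.547 for disks.
def rsa_disks(L=20.0, r=0.5, max_failures=200_000, seed=1):
    random.seed(seed)
    placed = []
    failures = 0
    contact2 = (2.0 * r) ** 2          # squared center distance at contact
    while failures < max_failures:
        x, y = random.uniform(0, L), random.uniform(0, L)
        ok = True
        for px, py in placed:
            dx = abs(x - px); dx = min(dx, L - dx)   # periodic wrap in x
            dy = abs(y - py); dy = min(dy, L - dy)   # periodic wrap in y
            if dx * dx + dy * dy < contact2:
                ok = False                           # overlap: reject attempt
                break
        if ok:
            placed.append((x, y))
            failures = 0
        else:
            failures += 1
    return len(placed) * math.pi * r * r / (L * L)   # surface coverage

print(f"estimated saturation coverage: {rsa_disks():.3f}")
```

A brute-force neighbor scan is used for clarity; a cell list would make the rejection test constant-time, but for a sketch of the jamming behavior the simple version suffices.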
Random sequential adsorption
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
789
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Materials science", "Colloids", "Surface science", "nan" ]
64,425,095
https://en.wikipedia.org/wiki/Stress%20triaxiality
In continuum mechanics, stress triaxiality is the relative degree of hydrostatic stress in a given stress state. It is often expressed as a triaxiality factor, T.F., the ratio of the hydrostatic stress σm to the von Mises equivalent stress σeq. Stress triaxiality has important applications in fracture mechanics and can often be used to predict the type of fracture (i.e. ductile or brittle) within the region defined by that stress state. A higher stress triaxiality corresponds to a stress state which is primarily hydrostatic rather than deviatoric. High stress triaxiality (> 2–3) promotes brittle cleavage fracture as well as dimple formation within an otherwise ductile fracture. Low stress triaxiality corresponds with shear slip and therefore larger ductility, and typically results in greater toughness. Ductile crack propagation is also influenced by stress triaxiality, with lower values producing steeper crack resistance curves. Several failure models, such as the Johnson–Cook (J–C) fracture criterion (often used for high strain rate behavior), the Rice–Tracey model, and the J–Q large-scale-yielding model, incorporate stress triaxiality. History In 1959 Davies and Connelly introduced the so-called triaxiality factor, defined as the ratio of the first principal invariant of the Cauchy stress, I1 = σ1 + σ2 + σ3, to the effective stress σeq, cf. formula (35) in Davies and Connelly (1959). Here I1 denotes the first invariant of the Cauchy stress tensor, σ1, σ2, σ3 denote the principal values of the Cauchy stress, σm = I1/3 denotes the mean stress, J2 is the second invariant of the Cauchy stress deviator, s1, s2, s3 denote the principal values of the Cauchy stress deviator, and σeq = (3 J2)^(1/2) denotes the effective stress. Davies and Connelly were motivated by the supposition, correct in view of their own and later research, that negative pressure (spherical tension), which they somewhat exotically called triaxial tension, has a strong influence on the loss of ductility of metals, and by the need for a parameter describing this effect. Wierzbicki and collaborators adopted a slightly modified definition of the triaxiality factor, T = σm/σeq, cf. e.g. Wierzbicki et al. (2005). The name triaxiality factor is rather unfortunate, because in physical terms the factor determines the calibrated ratio of pressure forces to shearing forces, that is, of the isotropic (spherical) part of the stress tensor to its anisotropic (deviatoric) part, both expressed in terms of their moduli. The triaxiality factor does not discern triaxial stress states from states of lower dimension. Ziółkowski proposed, as a measure of pressure relative to shearing forces, another modification of the index, not burdened with any particular strength hypothesis, cf. formula (8.2) in Ziółkowski (2022). In the context of material testing, a reasonable mnemonic name for it could be, e.g., pressure index or pressure factor. Stress triaxiality factor in biaxial tests The triaxiality factor T gained considerable attention and popularity when Wierzbicki and his collaborators pointed out that not only pressure but also the Lode angle can considerably influence ductile fracture and other properties of metals, cf. e.g. Wierzbicki et al. (2005). The class of biaxial tests is defined by the condition that one of the principal values of the stress tensor is always equal to zero. In 2005 Wierzbicki and Xue found that in the class of biaxial tests a unique constraint relation exists between the normalized third principal invariant of the deviator and the triaxiality factor, cf.
formula (23) in Wierzbicki et al. (2005): ξ = −(27/2) T (T² − 1/3). The normalized third invariant of the stress deviator is defined as ξ = 27 J3 / (2 σeq³), where J3 denotes the third invariant of the stress deviator. In presenting material testing results, the so-called Lode angle θ is at present used most frequently; it is defined through the relation cos 3θ = ξ. However, the Lode angle does not have a clear (lucid) physical interpretation. From a mathematical standpoint, the Lode angle describes the angle between the projection of the Cauchy stress on the octahedral plane and the projection of the greatest principal stress σ1 on the octahedral plane. Ziółkowski proposed to use a skewness angle, defined in formula (4.2) in Ziółkowski (2022), for the characterization of the mode of shearing forces. The skewness angle has several cogent and useful physical-statistical interpretations. It describes the departure of the actual Cauchy stress deviator from the corresponding reference pure shear, i.e., a deviator with the same modulus but with third invariant equal to zero. In a micromechanical context the skewness angle can be understood as a macroscopic measure of the magnitude of internal entropy of the (macroscopic) Cauchy stress tensor, in the sense that its value determines the degree of order of the population of micro pure shears (directional dipoles) generating the specific macroscopic stress state. The smaller the absolute value of the skewness angle, the smaller the internal entropy of the Cauchy stress tensor. The skewness angle enters as a parameter in a measure of the anisotropy factor (degree) of the stress tensor, cf. formula (4.5) in Ziółkowski (2022). The formula elucidates that the greater the internal order of the pure-shears population generating a specific macroscopic stress state, i.e. the lower its entropy, the larger the anisotropy of the macroscopic stress tensor. An isotropy angle, defined alongside, enables extraction of the spherical (isotropic) part and the deviatoric (anisotropic) part of the stress tensor in a very straightforward and convenient manner. The measure of tensor anisotropy, introduced by Rychlewski (1985) and actually applicable to tensors of any degree, is defined through the diameter of the tensor orbit, i.e. the maximum distance, in the usual tensorial norm, between any two members of the orbit of a tensor under proper orthogonal (rotation) tensors. A very simple (linear) connection exists between the Lode angle and the skewness angle. The Wierzbicki constraint relation, valid for biaxial stress states, can be solved with respect to the skewness angle to obtain explicit relations linking the triaxiality factor and the skewness angle, cf. Ziółkowski (2022). These relations are three bijections (one-to-one relations) on three subdomains that share edges but are otherwise separate, and that together make up the entire two-parameter domain (half-plane) of biaxial-test stress states. The explicit reverse relations, easily obtainable from the above formulae, are very convenient for numerical computations, because they enable determination of the value of the skewness (Lode) angle, i.e. the shearing mode of stress, from the value of the triaxiality factor alone, without the need to compute the determinant of the stress deviator, which delivers large computational savings.
Selection of the correct subformula is very easy, because it can be decided from the value of T alone, which falls into exactly one of three specific ranges. The relations allowed for the formulation and proof of the following important theorems and corollary, cf. Ziółkowski (2022). Theorem I. The radial lines (rays) coming out from the origin of the coordinate frame of the biaxial-tests domain (a half-plane) are lines of constant value of the triaxiality factor and, at the same time, lines of constant value of the skewness angle. Theorem II. The relations, valid for plane stress conditions, are bijections (one-to-one relations) on three subdomains that share edges but are otherwise separate, covering the whole domain of biaxial-test stress states, except on one line, on which the correspondence degenerates. Corollary. In the case of a convex critical surface, with the aid of any type of biaxial (plane) stress test, for any fixed value of the mean stress (pressure) the critical effective stress can be determined for only a single value of the skewness (Lode) angle, and thus for the single corresponding value of the triaxiality factor. Likewise, for any fixed value of the skewness (Lode) angle, critical effective stresses can be determined for only three values of the mean stress (pressure), and thus for the three corresponding values of the triaxiality factor in the three subdomains. The corollary indicates limitations of the class of biaxial (plane) tests in the experimental examination of the influence of the skewness (Lode) angle and of pressure on the behavior of materials under multiaxial loadings: upon executing only biaxial tests, no adequate experimental data can be collected to reliably separate the influence of mean stress and/or skewness angle on possible variations of the critical effective stresses. One value of the skewness angle for any fixed pressure, and/or three values of pressure for any fixed skewness angle, deliver too little information for that purpose. Triaxiality factor as a convenient indicator of the transition from a two-dimensional (plane) stress state to a full three-dimensional stress state The relations valid for biaxial (plane) stress states show that in such a case the values of the triaxiality factor must always remain in the range from −2/3 to +2/3, while in the general case of three-dimensional multiaxial tests the triaxiality factor can take any value from −∞ to +∞. In many experimental-mechanics publications presenting results from biaxial tests, values of the triaxiality factor exceeding two-thirds can be encountered, which may seem incorrect. However, experimental observation of a triaxiality factor greater than 2/3 rather indicates that the biaxiality condition of the plane stress test was lost and that a three-dimensional stress state developed in the sample, cf. Ziółkowski (2022). References Continuum mechanics
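The basic quantities above are standard, and a short numerical sketch may help fix ideas. The following Python fragment is illustrative only; the stress values are invented and nothing here comes from the cited papers. It computes the Wierzbicki-style triaxiality factor T = σm/σeq for a Cauchy stress tensor and reproduces the textbook values T = 1/3 for uniaxial tension and T = 2/3 for equibiaxial tension, the upper limit of the biaxial range discussed above.

```python
import numpy as np

def triaxiality(stress):
    """Triaxiality factor T = sigma_m / sigma_eq for a symmetric 3x3 Cauchy stress."""
    sigma_m = np.trace(stress) / 3.0                 # mean (hydrostatic) stress
    dev = stress - sigma_m * np.eye(3)               # deviatoric part
    sigma_eq = np.sqrt(1.5 * np.sum(dev * dev))      # von Mises equivalent stress
    return sigma_m / sigma_eq

uniaxial = np.diag([100.0, 0.0, 0.0])       # uniaxial tension (MPa)
equibiaxial = np.diag([100.0, 100.0, 0.0])  # equibiaxial tension (MPa)
print(triaxiality(uniaxial))     # 1/3
print(triaxiality(equibiaxial))  # 2/3, the biaxial (plane stress) maximum
```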
Stress triaxiality
[ "Physics" ]
2,190
[ "Classical mechanics", "Continuum mechanics" ]
64,428,950
https://en.wikipedia.org/wiki/Nora%20Brambilla
Nora Brambilla (born February 19, 1964) is an Italian and German theoretical particle physicist known for her research on quarkonium, mesons composed of a heavy quark and its own antiquark, rather than the three quarks that make up baryons. She is a professor of theoretical particle and nuclear physics at the Technical University of Munich. Education and career Brambilla is originally from Milan and holds both Italian and German citizenship. She studied particle physics at the University of Milan, completing her PhD there in 1993. In 1999, she earned a habilitation in theoretical physics at the University of Vienna. After various research positions, she became a tenured faculty member at the University of Milan in 2002, before moving to Munich in 2008. She is currently the head of a research group at the Physik-Department of the Technical University of Munich. Recognition In 2012, Brambilla was named a Fellow of the American Physical Society "for her contributions to the theory of heavy-quark-antiquark-systems, including the development of new effective field theories, and for contributions to the field of heavy-quarkonium physics through the founding and leadership of the Quarkonium Working Group". References External links Research group home page 20th-century Italian women scientists 20th-century German women scientists 20th-century German physicists 20th-century Italian physicists 21st-century Italian women scientists 21st-century German women scientists 21st-century German physicists 21st-century Italian physicists Living people German women physicists Italian women physicists University of Milan alumni Academic staff of the University of Milan Academic staff of the Technical University of Munich Fellows of the American Physical Society Particle physicists Scientists from Milan 1964 births University of Vienna alumni
Nora Brambilla
[ "Physics" ]
340
[ "Particle physicists", "Particle physics" ]
64,435,183
https://en.wikipedia.org/wiki/Reticulocyte%20binding%20protein%20homologs
Reticulocyte binding protein homologs (RHs) are a superfamily of proteins found in Plasmodium that are responsible for cell invasion. Together with the family of erythrocyte binding-like proteins (EBLs), they make up the two families of invasion proteins universal to Plasmodium. The two families function cooperatively. This family is named after the reticulocyte binding proteins of P. vivax, a parasite that only infects reticulocytes (immature red blood cells) expressing the Duffy antigen. Homologs have since been identified in P. yoelii and P. reichenowi. A P. falciparum protein complex called PfRH5-PfCyRPA-PfRipr (RCR) is known to bind basigin via the tip of RH5. The trimeric complex forms an elongated structure with RH5 and Ripr on distal ends and CyRPA in the middle. The RCR complex has been identified as a promising malaria vaccine target, with each individual component capable of inducing strain-transcending immunity in in vitro assays of parasite growth. Of the entire family of RHs, only RH5 appears to be essential for invasion, and it functions downstream of the other RHs during invasion. PfRH4 is known to bind complement receptor 1. RHs do not display any significant sequence features indicating specific domains, except for a set of transmembrane helices at the C-terminus. From experimentation on partial proteins, RHs are known to contain erythrocyte-binding and nucleotide-sensing domains (EBD and NBD) that may partially overlap. The structure of the EBD was experimentally observed in 2011 by small-angle X-ray scattering. A much better crystal structure of an N-terminal receptor-binding domain (presumably the same as the EBD) was published in 2014. References Proteins Plasmodium
Reticulocyte binding protein homologs
[ "Chemistry" ]
404
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology stubs", "Molecular biology" ]
59,501,534
https://en.wikipedia.org/wiki/Lie%20operad
In mathematics, the Lie operad is an operad whose algebras are Lie algebras. The notion (in at least one version) was introduced by Victor Ginzburg and Mikhail Kapranov in their formulation of Koszul duality. Definition à la Ginzburg–Kapranov Fix a base field k and let FreeLie(x1, …, xn) denote the free Lie algebra over k with generators x1, …, xn, and Lie(n) the subspace spanned by all the bracket monomials containing each generator exactly once. The symmetric group Sn acts on FreeLie(x1, …, xn) by permutations of the generators and, under that action, Lie(n) is invariant. The operadic composition is given by substituting expressions (with renumbered variables) for variables. Then Lie = {Lie(n)} is an operad. Koszul-Dual The Koszul-dual of the Lie operad is the commutative-ring operad, an operad whose algebras are the commutative rings over k. Notes References External links Todd Trimble, Notes on operads and the Lie operad https://ncatlab.org/nlab/show/Lie+operad Abstract algebra Category theory
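A small worked example may make the definition concrete; the facts below are standard, stated here for illustration rather than taken from the cited notes.

```latex
% The arity-n component Lie(n) is spanned by bracket monomials in
% x_1, ..., x_n with each generator occurring exactly once, and
\[
  \dim \operatorname{Lie}(n) = (n-1)!\,.
\]
% For n = 3 the monomials [[x_1,x_2],x_3], [[x_2,x_3],x_1], [[x_3,x_1],x_2]
% span Lie(3), and the Jacobi identity
\[
  [[x_1,x_2],x_3] + [[x_2,x_3],x_1] + [[x_3,x_1],x_2] = 0
\]
% imposes the single relation among them, so dim Lie(3) = 2 = (3-1)!.
```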
Lie operad
[ "Mathematics" ]
208
[ "Functions and mappings", "Mathematical structures", "Algebra stubs", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Abstract algebra", "Algebra" ]
59,508,090
https://en.wikipedia.org/wiki/Washington%20Glass%20School
The Washington Glass School was founded in 2001 by Washington, DC area artists Tim Tate and Erwin Timmers. The school teaches classes on how to make kiln-cast, fused, and cold-worked glass sculptures and art. It is the second largest warm glass school in the United States. History Co-founder Tim Tate's glass sculpture at the 2000 Artomatic art event was acquired by the Smithsonian American Art Museum for the Renwick Gallery's permanent collection. That sale also provided the funds that started the Washington Glass School. Erwin Timmers' artwork was also on exhibit at Artomatic; after the show, the two began to collaborate, later teaming up to start the Washington Glass School & Studio. Michael Janis joined the school in 2003 and became a Co-Director of the Washington Glass School in 2005. The school was initially located in the neighborhood where Nationals Park now stands and, as a result of the construction of the park, had to relocate to its current location in Mount Rainier, Maryland, just over the border with Washington, D.C. In 2008, Artomatic organized an exhibit, hosted by the Washington Glass School, that focused on how three "glass" cities approach the sculptural medium. The collaborative show was titled "Glass 3", referencing the invited glass centers of Washington, D.C., Toledo, Ohio, and Sunderland, England. The exhibit featured nearly 50 glass artists and created an international partnership and strong relationships that led to more international collaborative interactions. Tim Tate's and Michael Janis' Fulbright Scholarships were both completed at the University of Sunderland and the UK's National Glass Centre. Washington Glass Studio The Washington Glass Studio was established as part of the school in 2001 to create site-specific art for architectural and landscape environments. The studio draws on the Washington Glass School Co-Directors' educational backgrounds in steel and glass sculpture, electronics and video media, architectural design, and ecological sustainability. Notable public art projects by the Washington Glass Studio include the monumental glass doors for the John Adams Building at the Library of Congress. Under the auspices of the Architect of the Capitol, the bronze doors of the John Adams Building were replaced in 2013 with code-compliant sculpted glass panels mirroring the original bronze door sculptures by American artist Lee Lawrie, designed to commemorate the history of the written word, depicting gods of writing as well as the real-life Native American Sequoyah. The public art commission for artwork at the entrance to the Laurel Branch Library was awarded to the Washington Glass Studio in 2016. The glass-and-steel sculpture was made with the involvement of the surrounding community and library groups: in a series of glass-making workshops at the Washington Glass School, images of books and stories, education and learning, and shared aspirations were created to be incorporated into the internally illuminated tower. In 2023, a second piece of public art for the Prince George's County Memorial Library system, "Reading the Waters," a fused glass mural, was installed at the Bladensburg Branch Library as part of the facility's renovation. Faculty Directors Michael Janis Tim Tate Erwin Timmers Glass Secessionism The Washington Glass School championed a new art movement dubbed Glass Secessionism to "underscore and define the 21st Century Sculptural Glass Movement and to illustrate the differences and strengths compared to late 20th century technique-driven glass.
While the 20th century glass artists contributions have been spectacular and ground breaking, this group focuses on the aesthetic of the 21st century. The object of the Glass-Secession is to advance glass as applied to sculptural expression; to draw together those glass artists practicing or otherwise interested in the arts, and to discuss from time to time examples of the Glass-Secession or other narrative work." Reflecting the evolving nature of glass art, the name of the Facebook group was amended in 2017 to "21st Century Glass : Conversations and Images / Glass Secessionism". References External links "Capitol Improvements", American Craft Magazine reviews the process in the school's creation of the new cast glass doors for the US Library of Congress Adams Building. June/July 2013. "All Things Considered - Interview with Tim Tate: A Tiny Digital Arts Revolution, Encased In Glass." National Public Radio. August 3, 2009. WETA TV - "Around Town Visits the Washington Glass School." Aired July 16, 2007. Glassmaking schools
Washington Glass School
[ "Materials_science", "Engineering" ]
879
[ "Glass engineering and science", "Glassmaking schools" ]
60,929,882
https://en.wikipedia.org/wiki/Flux%20%28machine-learning%20framework%29
Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v. It has a layer-stacking-based interface for simpler models, and has strong support for interoperability with other Julia packages in place of a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some other machine learning frameworks which are implemented in other languages with Julia bindings, such as TensorFlow.jl (the unofficial wrapper, now deprecated), and thus are more limited by the functionality present in the underlying implementation, which is often in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021. Flux's focus on interoperability has enabled, for example, support for neural differential equations, by fusing Flux.jl and DifferentialEquations.jl into DiffEqFlux.jl. Flux supports recurrent and convolutional networks. It is also capable of differentiable programming through its source-to-source automatic differentiation package, Zygote.jl. Julia is a popular language in machine learning, and Flux.jl is its most highly regarded machine-learning repository (Lux.jl is a more recent alternative that shares much of its code with Flux.jl). A demonstration compiling Julia code to run on Google's tensor processing unit (TPU) received praise from Google Brain AI lead Jeff Dean. Flux has been used as a framework to build neural networks that work with homomorphically encrypted data without ever decrypting it. This kind of application is envisioned to be central to privacy in future APIs using machine-learning models. Flux.jl is an intermediate representation for running high-level programs on CUDA hardware. It was the predecessor to CUDAnative.jl, which is also a GPU programming language. See also Differentiable programming Comparison of deep-learning software References Machine learning Free software programmed in Julia Software using the MIT license
Flux (machine-learning framework)
[ "Engineering" ]
428
[ "Artificial intelligence engineering", "Machine learning" ]
60,930,640
https://en.wikipedia.org/wiki/Human%20Space%20Flight%20Centre
The Human Space Flight Centre (HSFC) is a body under the Indian Space Research Organisation (ISRO) to coordinate the Indian Human Spaceflight Programme. The agency will be responsible for implementation of the Gaganyaan project. The first crewed flight is planned for 2024 on a home-grown LVM3 rocket. Before the Gaganyaan mission announcement in August 2018, human spaceflight was not a priority for ISRO, though most of the required capability for it had been realised. ISRO has already developed most of the technologies for crewed flight, and it performed a Crew Module Atmospheric Re-entry Experiment and a Pad Abort Test for the mission. The project will cost less than Rs. 10,000 crore. In December 2018, the government approved a further ₹100 billion (US$1.5 billion) for a 7-day crewed flight of 3 astronauts to take place in December 2021, later delayed to 2023. If completed on schedule, India will become the world's fourth nation to conduct independent human spaceflight after the Soviet Union/Russia, the United States and the People's Republic of China. As part of an integrated lunar exploration and outer space strategy, the agency plans to continue working on the Bharatiya Antariksha Station program, future crewed lunar landings, and a moonbase habitat after completing crewed spaceflights. The Human Space Flight Centre's founder is S Unnikrishnan Nair. The director of the Human Space Flight Centre is Dinesh Kumar Singh, Distinguished Scientist. History The trials for crewed space missions began in 2007 with the 600 kg Space Capsule Recovery Experiment (SRE), launched using the Polar Satellite Launch Vehicle (PSLV) rocket and safely returned to Earth 12 days later. The Defence Food Research Laboratory (DFRL) has worked on the space food for crewed spaceflight and has been conducting trials on a G-suit for astronauts as well. A prototype 'Advanced Crew Escape Suit' weighing 13 kg, built by Sure Safety (India) Limited based on ISRO's requirements, has been tested and its performance verified. On 28 December 2018, the Indian Union cabinet approved the funding for the Indian Space Research Organisation's (ISRO's) human spaceflight programme, under which a three-member crew will be sent to space for seven days, at an expected cost of Rs 9,023 crore. The testing phase is expected to begin in 2022 and the mission will be undertaken by 2023. Spacecraft development The first phase of this programme is to develop and fly the 3.7-ton spaceship called Gaganyaan that will carry a 3-member crew to low Earth orbit and safely return to Earth after a mission duration of a few orbits to two days. The first uncrewed launch is planned for 2022. The extendable version of the spaceship will allow flights of up to seven days, with rendezvous and docking capability. Enhancements to the spacecraft will lead to the development of a space habitat allowing spaceflight durations of 30–40 days in the next phase. Further advances from this experience will subsequently lead to the development of a space station. On 7 October 2016, Vikram Sarabhai Space Centre Director K. Sivan stated that ISRO was gearing up to conduct a critical 'crew bailout test', the ISRO Pad Abort Test, to see how fast and effectively the crew module could be released safely in the event of an emergency. The tests were conducted successfully on 5 July 2018 at the Satish Dhawan Space Centre, Sriharikota. This was the first in a series of tests to qualify crew escape system technology. India will not use animals for life-support-system testing; instead, robots resembling humans will be used.
ISRO is targeting more than 99.8% reliability for its crew escape system. As of August 2018, ISRO plans to launch its crewed orbiter Gaganyaan atop an LVM3 rocket. About 16 minutes after lift-off, the rocket will inject the orbital vehicle into an orbit 300 to 400 km above Earth. The capsule would return for a splashdown in the Arabian Sea near the Gujarat coastline. As of May 2019, the design of the crew module has been completed. The spacecraft will be flown twice uncrewed for validation before the actual human spaceflight is conducted. Infrastructure development Human-rating of LVM3 Human-rating certifies that a system is capable of safely transporting humans. ISRO will build and launch 2 missions to validate the human rating of LVM3. Existing launch facilities will be upgraded to enable them to carry out launches under the Indian human spaceflight campaign. Escape system The escape system will feature a recently introduced geometry. Work on parachute enlargement and a new architecture is also ongoing. Astronaut training Training for the Gaganyaan programme ISRO Chairman K. Sivan announced in January 2019 the creation of India's Human Space Flight Centre in Bangalore for training astronauts, also called vyomanauts (vyoma means 'space' or 'sky' in Sanskrit). The centre will train the selected astronauts in rescue and recovery operations, operating in a zero-gravity environment, and monitoring of the radiation environment. In spring 2009, a full-scale mock-up of the crew capsule was built and delivered to the Satish Dhawan Space Centre for the training of astronauts. India will be shortlisting 200 Indian Air Force pilots for this purpose. The selection process would begin with the candidates completing an ISRO questionnaire, after which they would be subjected to physical and psychological analyses. Only 4 of the 200 applicants will be selected for the first space mission training. While two will fly, two will act as reserves. ISRO signed a memorandum of understanding in 2009 with the Indian Air Force's Institute of Aerospace Medicine (IAM) to conduct preliminary research on the psychological and physiological needs of the crew and the development of training facilities. ISRO is also discussing an agreement with Russia regarding some aspects of astronaut training. As of January 2020, a crew of 4 has been selected for the mission, with astronaut training scheduled to begin in the third week of January. NASA administrator Bill Nelson visited India in November 2023 and said he was ready to back the country's goal of constructing a commercial space station by 2040, provided that India asked for NASA's assistance. By combining the knowledge and experience of the two nations, this potential collaboration might promote innovation and increase human presence in space for the two parties of the Artemis Accords. He stated that during a prior state visit there was discussion about the Indian proposal to send an astronaut to the International Space Station (ISS). Planned facilities within India An astronaut training facility will be established on a proposed site near Kempegowda International Airport in Devanahalli, Karnataka. Another such facility is proposed to be constructed in Challakere; it will be the primary facility for astronaut training and other related activities. As of January 2020, it is planned to be completed in 3 years. Once completed, all activities related to the Indian Human Spaceflight Programme will be undertaken there.
In order to provide appropriate interplanetary conditions for astronaut training, the Human Space Flight Centre worked with AAKA Space Studio, the University of Ladakh, the Ladakh Autonomous Hill Development Council, Leh, and IIT Bombay on the Ladakh Human Analogue Mission (LHAM), intended to understand the challenges that future astronauts might face when venturing beyond Earth. Hab-1 is a small, inflatable habitat that is part of the mission. In addition to testing life support systems, the expedition will gather biometric data, recreate an extraterrestrial environment, examine circadian lighting, and evaluate human health and endurance in isolation. Experiments and objectives On 7 November 2018, ISRO released an Announcement of Opportunity seeking proposals from the Indian science community for microgravity experiments that could be carried out during the first two robotic flights of Gaganyaan. The scope of the experiments is not restricted, and other relevant ideas will be entertained. The proposed orbit for the microgravity platform is expected to be an Earth-bound orbit at approximately 400 km altitude. All the proposed internal and external experimental payloads will undergo thermal, vacuum and radiation tests under the required temperature and pressure conditions. To carry out microgravity experiments over a long duration, a satellite may be placed in orbit. International collaboration On 1 July 2019, the Human Space Flight Centre and Glavkosmos signed a contract for the medical evaluation, training, and selection assistance of Indian astronauts for the Gaganyaan mission. The Russian Academy of Sciences' Institute of Biomedical Problems, the Yuri Gagarin Cosmonaut Training Center, and the Federal State Budget Organization will all contribute to the executed contract. An ISRO Technical Liaison Unit (ITLU) will be set up in Moscow to facilitate the development of some key technologies and the establishment of special facilities essential to support life in space. The Human Space Flight Centre signed a deal with Glavkosmos in October 2019 for Energia to equip the Gaganyaan crew with a life support system and to supply a thermal control system for the spacecraft. In addition to supplying food, water, and oxygen and assisting in regulating body temperature, the life support system will also handle the waste products of the crew members. Throughout the mission, the thermal control system will keep the spacecraft's components within permissible temperature limits. A comprehensive framework for cooperation activities in human space exploration was signed by ISRO and the European Space Agency (ESA) on 21 December 2024. It focuses on research projects and astronaut training programs, including access to ESA's facilities on the ISS. Beginning with Axiom Mission 4, the agreement will be put into effect. Indian astronauts will take part in ESA's technology demonstration projects and human physiological investigations. See also Indian Human Spaceflight Programme Indian Space Research Organisation References External links President Kalam's vision: India will land on the Moon in August 2025 Hindustan Aeronautics Ltd (HAL) hands over the first 'Crew Module Structural Assembly' to ISRO. 13 February 2014. Indian Space Research Organisation 2019 establishments in Karnataka Transport in Bengaluru Human spaceflight programs
Human Space Flight Centre
[ "Engineering" ]
2,037
[ "Space programs", "Human spaceflight programs" ]
70,275,297
https://en.wikipedia.org/wiki/Faroe-Bank%20Channel%20overflow
Cold and dense water from the Nordic Seas is transported southwards as Faroe-Bank Channel overflow. This water flows from the Arctic Ocean into the North Atlantic through the Faroe-Bank Channel between the Faroe Islands and Scotland. The overflow transport is estimated to contribute one-third (2.1±0.2 Sv, on average) of the total overflow over the Greenland-Scotland Ridge. The remaining two-thirds of the overflow water pass through the Denmark Strait (the strongest overflow branch, with an estimated transport of 3.5 Sv), the Wyville Thomson Ridge (0.3 Sv), and the Iceland-Faroe Ridge (1.1 Sv). Faroe-Bank Channel overflow (FBCO) contributes to a large extent to the formation of North Atlantic Deep Water. FBCO is therefore important for water transport towards the deep parts of the North Atlantic, playing a significant role in Earth's climate system. Faroe-Bank Channel The Faroe-Bank Channel (FBC) is a deeply eroded channel in the Greenland-Scotland Ridge (GSR). Its primary sill, located south of the Faroe Islands, has a width of about 15 km and a maximum depth of 840 m, with very steep walls on both sides of the channel. 100 km north-west of this sill there is a secondary sill with a maximum depth of 850 m. Faroe-Bank Channel overflow enters the FBC from the northeast, turns towards the west between the Faroe Islands and the Faroe Bank, and leaves the GSR in a southwestern direction, west-southwest of the Faroe Islands. Hydrography The water flowing over the Greenland-Scotland Ridge through the Faroe-Bank Channel consists of a very well-mixed bottom layer, with a stratified water layer on top. The temperature of this stratified layer can reach 11 °C in the upper 100 m of the channel, with a salinity around 35.1 g/kg; between 100 and 400 m depth the temperature of the water in the stratified layer is around 8 °C, with a salinity of 35.2 g/kg. The water below 400 m, in the well-mixed layer, can be characterised as overflow water. Definition of overflow The mixed bottom layer of the FBC is where the actual overflow takes place, being fed by inflow of cold and fresh North Atlantic Water, Modified North Atlantic Water, Norwegian Sea Deep Water and Norwegian Sea Arctic Intermediate Water. These water masses have different temperatures (between -0.5 and 7.0 °C) and salinities (between 34.7 and 35.4 g/kg). It may therefore be complicated to define exactly which water entering the FBC contributes to the actual overflow. Four definitions are possible: two depending on the overflow velocity, one depending on the overflow flux, and one depending on the overflow water properties. The simplest definition is in terms of velocities: water with a velocity in the northwestern direction is then termed Faroe-Bank Channel overflow. At the sill, velocities can reach 1.2 m/s, the flow accelerating as it descends the deepening bathymetry. In this respect, high velocities are associated with strong mixing and highly turbulent flows. In the stratified layer at the top of the channel, velocities become negative (i.e., directed southeastward), which makes these waters no part of the overflow. Another option is to take into account the barotropic (i.e., currents determined by horizontal sea-surface height gradients) and baroclinic (i.e., currents determined by horizontal density gradients) pressure gradients at the overflow depth between the two sides of the GSR, expressed in terms of the decrease in sea-surface height and the decrease in interface height from the upstream areas to the sill.
Processes like mixing, circulation and convection contribute to these pressure gradients. The overflow velocity then scales with the pressure gradient between the basins north and south of the ridge, and this velocity can be used to define the total overflow flux in the FBC. A third definition is the so-called kinematic overflow: the water flux from the bottom of the channel up to the interface height, the level where the velocity in the northwestern direction measures one half of the maximum velocity in the profile. The overflow flux is then calculated from the average profile velocity, the interface height, and the height of the layer below the lowest measurement station in the channel, yielding the volume flux per unit width of the channel. Lastly, overflow can also be defined on the basis of hydrographical properties: namely, as water that flows through the FBC having a temperature lower than 3 °C, or having a potential density higher than 27.8 kg/m3. This definition is most often used when estimating values for the magnitude of the FBCO. Periodicity Temperature and salinity profiles as well as current speeds in the FBC vary strongly on a day-to-day basis. The dense water forms domes that move through the channel with a period of 2.5 to 6 days. At the ocean surface, this periodicity can be observed in the form of topographic Rossby waves, which are caused by mesoscale oscillations in the velocity field. The resulting eddies are the consequence of baroclinic instabilities within the overflow water, which induce the observed periodicity. On longer timescales, atmospheric forcing also causes periodic changes in the FBCO. When the atmospheric circulation governing the Nordic Seas is in a cyclonic (anticyclonic) regime, the source of the deep water predominantly comes via a western (eastern) inflow path, and the FBCO is weaker (stronger). The eastern inflow path is called the Faroe-Strait Channel Jet. The transition from a cyclonic to an anticyclonic regime takes place on an interannual timescale, but the atmospheric forcing also shows a seasonal cycle. During summer, the weakened cyclonic winds are associated with a higher FBCO transport, indicating a fast barotropic response to the wind forcing. Outflow Faroe-Strait Channel Jet water is much colder than the water flowing into the Faroe-Bank Channel via its western entrance path. Within the FBC, water always flows along the eastern rather than the western boundary, regardless of the different inflow pathways from the Nordic Seas. Moreover, at times when the eastern inflow path is dominant, overflow waters are denser and larger in volume. After passing the primary Faroe-Bank Channel sill, the overflow bifurcates into two branches that flow on top of each other, each with a maximum velocity of 1.35 m/s. The average thickness of the total outflow plume along its descent is 160±70 m, showing high lateral variability, and each branch carries a transport of ~1 Sv. A transverse circulation actively dilutes the bottom branch of the plume. The shallow, intermediate branch transports warmer, less dense outflow water along the ridge slope towards the west. This branch mixes with oxygen-poor, fresh Modified East Icelandic Water. The deep (deeper than 1000 m) branch transports the densest, coldest water towards the deep parts of the North Atlantic. This branch entrains warmer and more saline water, mixes, and consequently obtains higher temperatures and salinity.
Both branches ultimately contribute to the formation of North Atlantic Deep Water. North Atlantic overturning The Atlantic meridional overturning circulation (AMOC) is important for Earth's climate because of its distribution of heat and salinity over the globe. The strength of the Faroe-Bank Channel overflow is an important indicator for the stability of the AMOC, since the overflow produces dense waters that contribute to a large extent to the total overturning in the North Atlantic. Parameters that can affect the AMOC are the kinematic overflow (i.e., the magnitude of the overflow transport) and the overflow density (the AMOC being a density-driven circulation). In this respect, the density characteristics of the overflow could vary even if the kinematic overflow does not. Measurements From 1995 onwards, FBCO has been monitored by a continuous Acoustic Doppler current profiler (ADCP) mooring, measuring volume transport, hydrographic properties and the density of the overflow. The kinematic overflow, derived from the velocity field, showed a non-significant positive linear trend of 0.01±0.013 Sv/yr between 1995 and 2015, whereas the coldest part of the FBCO warmed over that same period by 0.1±0.06 °C (which decreased density), causing increasing transport of heat into the AMOC. This warming, however, is accompanied by an observed salinity (and therefore density) increase, which results in no net change in density. Model simulations Climate models have shown an overall decreasing trend in the baroclinic component of the overflow between 1948 and 2005; the barotropic pressure gradient, however, shows an increasing trend of equal magnitude. These processes compensate each other; as a result, the pressure difference at depth does not show a significant trend over time. Global inverse modelling, ocean hydrographic surveys, chlorofluorocarbon (CFC) inventories, and monitoring of the AMOC from 2004 to the present have shown that the AMOC has slowed down in the past decades. As explained, the density of FBCO waters did not change significantly in that period, so changes in the FBCO cannot (fully) explain the changes in the AMOC. See also Atlantic meridional overturning circulation Nordic Seas Mesoscale ocean eddies References Currents of the Atlantic Ocean Physical oceanography
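The kinematic-overflow definition above lends itself to a short numerical sketch. The following Python fragment uses a toy Gaussian velocity profile rather than observations, and assumes the flux per unit width is simply the vertical integral of the along-channel velocity up to the half-maximum interface:

```python
import numpy as np

# Hedged sketch of the "kinematic overflow" definition: the interface is
# the level where the north-westward velocity first drops to half of its
# profile maximum; the overflow is the flux below that level.
z = np.linspace(0.0, 800.0, 81)                # height above channel bottom (m)
v = np.exp(-(((z - 100.0) / 250.0) ** 2))      # toy NW velocity profile (m/s)

core = int(np.argmax(v))                       # level of maximum velocity
above_half = np.nonzero(v < 0.5 * v.max())[0]
k = int(above_half[above_half > core][0])      # first half-maximum level above the core
h_interface = z[k]

# Volume flux per unit channel width: trapezoidal integral of v up to the interface
vv, zz = v[: k + 1], z[: k + 1]
q = float(np.sum(0.5 * (vv[1:] + vv[:-1]) * np.diff(zz)))
print(f"interface height: {h_interface:.0f} m, flux per unit width: {q:.0f} m^2/s")
```

Multiplying such a per-unit-width flux by an effective channel width is what yields transport estimates in sverdrups, comparable to the 2.1 Sv figure quoted above.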
Faroe-Bank Channel overflow
[ "Physics" ]
2,062
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
70,276,139
https://en.wikipedia.org/wiki/Efik%20calendar
The Efik calendar () is the traditional calendar system of the Efik people located in present-day Nigeria. The calendar consisted of an 8-day week (urua). Each day was dedicated to a god or goddess greatly revered in the Efik religion. The calendar also included festivals, many of which were indefinite: definite festivals were assigned to specific periods of the year, while indefinite festivals or ceremonies occurred in response to certain social or political circumstances. Days of the week The names of the eight-day week in the traditional Efik calendar are: Akwa ederi Akwa eyibio Akwa ikwọ Akwa ọfiọñ Ekpri ederi Ekpri eyibio Ekpri ikwọ Ekpri ọfiọñ Donald C. Simmons, an anthropologist who undertook several studies of Efik society, asserts that the presence of the adjectives Akwa "big" and Ekpri "small" suggests that the Efik may once have possessed a four-day week. Each Efik day was of great importance in the religious life of the Efik. On Akwa ederi, also known as Usen Ibet, the day of rest, the Efik did not work but spent the day resting and feasting. Europeans nicknamed Akwa ederi "Calabar Sunday". The 8-day week had an adverse effect on the routine of European traders who often visited Old Calabar. Savage attests that the day was also dedicated to Eka ndem, the mother of Ndem. The Christian Sunday, in turn, came to be known by the same name due to the Christian prohibition of work on Sunday. It was common for families, houses and towns to have their separate deities. These communal deities were worshipped on Akwa eyibio. Akwa eyibio was originally known as Akwa ibibio, but the name was changed in 1967 by Chief Efiong Ukpong Aye; the use of Akwa ibibio has since become redundant. Akwa ikwọ was set aside for the display of the Ekpe masquerade. On this day, women and non-Ekpe initiates were allowed to watch Ekpe displays. The last day of the Efik week was Akwa ọfiọñ. According to Savage, the national deity and patron of Nsibidi, Ekpenyong Obio Ndem, was also worshipped on Akwa eyibio. Akwa ọfiọñ was also a day dedicated to grand Ekpe or Nyamkpe. On this day, slaves, women and non-Ekpe initiates were not allowed to watch the Ekpe display. Anyone who was prohibited from watching this display would usually not leave the door of their house open, and would go through a bush path away from the ceremonies if they needed to undertake an errand. Festivals The timing of festivals in Efik society was mainly indefinite. Definite festivals occurred at particular periods of the year at Old Calabar. Among such festivals were Ndọk and Usukabia. Ndọk was a biennial purification ceremony that occurred sometime between November and December. Usukabia was the ceremony of first partaking of the new yams of the year; the festival occurred at the beginning of the harvest season. Environmental factors were the main determinant in setting the time and day of these festivals. Indefinite ceremonies included victory-in-war celebrations, purification carried out after war or illness, the coronation of an Edidem, and the funeral rites of an Edidem. See also Ekpe References Bibliography Calendars Efik
Efik calendar
[ "Physics" ]
730
[ "Spacetime", "Calendars", "Physical quantities", "Time" ]
70,285,102
https://en.wikipedia.org/wiki/Relative%20wind%20stress
Relative wind stress is a shear stress produced by wind blowing over the surface of the ocean, or another large body of water. Relative wind stress is related to wind stress but takes the difference between the surface ocean current velocity and the wind velocity into account. Its units are newtons per square metre (N/m2), i.e. pascals (Pa). Wind stress over the ocean is important as it is a major source of kinetic energy input to the ocean, which in turn drives the large-scale ocean circulation. The use of relative wind stress instead of wind stress, for which the ocean current is assumed to be stationary, reduces the stress felt over the ocean in models. This leads to a 20–35% decrease in the calculated power input into the ocean and thus results in a different simulation of the large-scale ocean circulation. Mathematical formulation The wind stress acting on the ocean surface is usually parameterized using the turbulent drag formula τ = ρa CD |U10| U10, where CD is the turbulent drag coefficient (usually determined empirically), ρa is the air density, and U10 is the wind velocity vector, usually taken at 10 m above sea level. This parameterization is commonly referred to as the resting ocean approximation; from here on, wind stress in the resting ocean approximation is referred to simply as resting ocean wind stress. Relative wind stress, on the other hand, makes use of the velocity of the surface wind relative to the velocity at the ocean surface: τrel = ρa CD |U10 − uo| (U10 − uo), where uo is the surface ocean velocity, so that the terms with U10 − uo represent the wind velocity relative to the surface ocean velocity. The difference between wind stress and relative wind stress is therefore that relative wind stress takes into account the relative motion of the wind with respect to the surface ocean current. Work done by the wind on the ocean The work the wind does on the ocean can be computed as P = τ · uo, where τ is the chosen parameterization for the wind stress. Thus, in the resting ocean approximation, the work done on the ocean by the wind is Prest = ρa CD |U10| U10 · uo, and if the relative wind stress parameterization is used, the work done on the ocean is Prel = ρa CD |U10 − uo| (U10 − uo) · uo. Then, assuming uo is the same in both cases, the difference Prest − Prel can be analysed term by term: whether the wind velocity and the ocean current are aligned or opposed, the resting ocean term is the larger one, so every contribution to the difference is positive. Therefore it is always the case that Prest > Prel, meaning the calculation of the work done is always larger when using the resting ocean wind stress. This overestimate is referred to in the literature as a "positive bias". Note that this may not be the case if the uo used in the calculation of Prest differs from the uo used in the calculation of Prel (see section: Ocean currents as output of ocean models). Wind mechanical damping effect The mathematical explanation for the positive bias in the calculation of work using the resting ocean wind stress can also be interpreted physically through the mechanical damping effect. As seen in Figure 2, when the wind velocity and the ocean current velocity are in the same direction, the relative wind stress is smaller than the resting ocean wind stress. In other words, less positive work is done using relative wind stress. When the wind and the ocean velocities are in opposite directions, the relative wind stress does more negative work than the resting ocean wind stress.
Consequently, in both scenarios less work is done on the ocean when the relative wind stress is used in the calculation. This physical interpretation can also be adapted to a scenario with an ocean eddy. As illustrated in the top part of Figure 3, in the eddy situation the relative wind stress is smaller where the wind and ocean velocities are aligned, a situation similar to the top part of Figure 2. In the bottom part of Figure 3 we have a situation analogous to the bottom part of Figure 2, where more negative work is performed on the system than in the resting ocean case. Therefore, at the top of the eddy less energy is put in and at the bottom more energy is taken out, which means the eddy is damped more strongly in the relative wind case. The two situations depicted in Figures 2 and 3 are the physical reason why there is a positive bias in estimates of the power (work per unit time) input to the ocean when using the resting ocean stress rather than the relative wind stress. Impact on models for large-scale ocean circulation For the computation of surface currents, a general circulation model is forced with surface winds. A study by Pacanowski (1987) shows that including the ocean current velocity through relative wind stress in an Atlantic circulation model reduces the surface currents by 30%. This decrease in surface current can impact sea surface temperature and upwelling along the equator. However, the greatest impact of including ocean currents in the air-sea stress is on the calculation of power input to the general circulation, through the mechanism described above. An additional effect of computing with relative wind stress instead of resting ocean wind stress is a lower Residual Meridional Overturning Circulation in models. Power input Figure 4 shows the difference between relative wind stress and resting ocean wind stress. Data for relative wind stress are obtained from scatterometers. These accurately represent the relative wind stress, as they measure backscatter from small-scale structures on the ocean surface, which respond to the sea surface-air interface and not to the wind speed alone. Overestimations of the power input into the ocean, ranging between 20 and 35%, have been identified in models using wind stress calculated from the zonal mean wind instead of relative wind stress. The effect is greatest in regions where wind speeds are relatively low and current speeds relatively high. An example is the tropical Pacific Ocean, where the trade winds blow at 5–9 m/s and ocean current velocities can exceed 1 m/s. In this region, depending on whether an El Niño or La Niña state prevails, the wind stress difference (resting ocean wind stress minus relative wind stress) can vary between negative and positive, respectively. Residual Meridional Overturning Circulation In the Southern Ocean, the use of relative wind stress is important because eddies are crucial in the Antarctic Circumpolar Current, and the damping of these eddies by relative wind stress affects the overturning circulation. The Residual Meridional Overturning Circulation (RMOC) is a streamfunction that quantifies the transport of tracers across isopycnals. Wind stress enters the formulation of the RMOC, which is the sum of the Eulerian mean MOC and the eddy-induced bolus overturning. The Eulerian mean MOC depends on the zonal winds that drive the meridional Ekman transport.
The eddy-induced bolus overturning, which is produced by the eddies, acts to restore the sloping isopycnals to the horizontal. The formulation of the RMOC is given by

\psi_{res} = \bar{\psi} + \psi^* = -\frac{\bar{\tau}}{\rho_0 f} + K s,

with \bar{\tau} the zonal mean wind stress, \rho_0 the reference density, f the Coriolis parameter (negative in the Southern Hemisphere), K the quasi-Stokes eddy diffusivity field, equal to K = L V with L and V the length and velocity scale of the eddies, respectively, and s the slope of the isopycnals. Inserting a lower wind stress, by using relative wind stress instead of resting ocean wind stress, directly leads to a lower residual overturning by reducing the Eulerian mean MOC (\bar{\psi}). Furthermore, it affects the eddy-induced bolus overturning (\psi^*) by damping eddies, which reduces their length and velocity scales (L and V). The sum of these effects thus leads to a lower \psi_{res}.

Ocean currents as output of ocean models
As briefly mentioned in the section on the impact on models for large-scale ocean circulation, the surface currents can be calculated by forcing a general circulation model with surface winds. The case of a model that is also forced by relative wind stress is visualized in Figure 5. First, satellite data are used to supply the 10 m wind velocity \mathbf{u}_{10} for the calculation of the relative wind stress. However, if the parameterization for relative wind stress is used, this results in a coupled problem: the ocean model requires the relative wind stress in order to output the ocean current velocity \mathbf{u}_o, on which the calculation of \tau_{rel} in turn relies. This coupled system needs to be formulated as an inverse problem. Another consequence is that, depending on the parameterization used for the wind stress, a different stress field is input into the ocean model and, consequently, a different \mathbf{u}_o is output by the model. Therefore, if a different current field is used for the calculation of P_{ro} than for P_{rel}, it could be that P_{ro} < P_{rel}; in other words, there may be a negative bias when calculating the work done on the ocean using the resting ocean approximation. On the global scale, however, the literature has found an over- rather than an underestimation, as previously mentioned.

References

Fluid dynamics
Physical oceanography
Relative wind stress
[ "Physics", "Chemistry", "Engineering" ]
1,818
[ "Applied and interdisciplinary physics", "Chemical engineering", "Physical oceanography", "Piping", "Fluid dynamics" ]
57,840,089
https://en.wikipedia.org/wiki/Albedometer
An albedometer is an instrument used to measure the albedo (the fraction of incident radiation that is reflected) of a surface. An albedometer is mostly used to measure the reflectance of the Earth's surface. It is also useful for evaluating thermal effects in buildings and the generation capacity of bifacial solar photovoltaic panels. It often consists of two pyranometers: one facing up towards the sky and one facing down towards the surface. The albedo can be calculated from the ratio of incoming to reflected radiation.

Measurement principle
The surface albedo of the Earth is measured using two pyranometers. The upward-facing pyranometer measures the incoming global solar radiation. The downward-facing pyranometer measures the reflected global solar radiation. The ratio of the reflected to the global radiation is the solar albedo; it depends on the properties of the surface and on the directional distribution of the incoming solar radiation. Typical values range from 4% for asphalt to 90% for fresh snow.

Designs for a low-cost albedometer have been released under an open source hardware license. This design measures the reflection in 8 spectral bands in the visible light spectrum. Additionally, the system is equipped with a global navigation satellite system receiver to georeference its position, and with an inertial measurement unit to determine its absolute orientation, make corrections in real time, or detect errors.

Standards
ISO 9060
WMO No.8
ISO 9847
ASTM G207-11.

References

Meteorological instrumentation and equipment
Measuring instruments
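As a minimal illustration of the measurement principle (the pyranometer readings below are hypothetical values, not data from any instrument):

```python
def albedo(incoming_wm2: float, reflected_wm2: float) -> float:
    """Solar albedo: ratio of reflected to incoming global solar radiation."""
    if incoming_wm2 <= 0:
        raise ValueError("incoming radiation must be positive")
    return reflected_wm2 / incoming_wm2

# Hypothetical simultaneous readings from the two pyranometers, in W/m^2:
print(albedo(800.0, 32.0))   # 0.04, the typical value quoted for asphalt
print(albedo(800.0, 720.0))  # 0.90, the typical value quoted for fresh snow
```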
Albedometer
[ "Technology", "Engineering" ]
302
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
57,840,163
https://en.wikipedia.org/wiki/Rational%20thermodynamics
Rational thermodynamics is a school of thought in statistical thermodynamics developed in the 1960s. Its introduction is attributed to Clifford Truesdell and Walter Noll. The aim was to develop a mathematical model of thermodynamics that would go beyond the traditional "thermodynamics of irreversible processes" (TIP) developed in the late 19th and early 20th centuries. Truesdell's "flamboyant style" and "satirical verve" caused controversy between proponents of rational thermodynamics and proponents of traditional thermodynamics.

References
Clifford A. Truesdell, Rational Thermodynamics: A Course of Lectures on Selected Topics, Springer (1969, 2nd ed. 1984).
Ingo Müller, Tommaso Ruggeri, Extended Rational Thermodynamics, Springer (1998), doi:10.1007/978-1-4612-2210-1.

See also
Archive for Rational Mechanics and Analysis

Thermodynamics
Rational thermodynamics
[ "Physics", "Chemistry", "Mathematics" ]
213
[ "Thermodynamics", "Dynamical systems" ]
57,843,630
https://en.wikipedia.org/wiki/Alloy%20broadening
Alloy broadening is a mechanism by which the spectral lines of an alloy are broadened by the random distribution of atoms within the alloy. It is one of a number of spectral line broadening mechanisms. Alloy broadening occurs because the random distribution of atoms in an alloy produces a different material composition at different positions. In semiconductors and insulators the different material composition leads to different band gap energies, which in turn give different exciton recombination energies. Therefore, depending on the position where an exciton recombines, the emitted light has a different energy. Alloy broadening is an inhomogeneous line broadening, meaning that its line shape is Gaussian.

Binary alloy
In the mathematical description it is assumed that no clustering occurs within the alloy. Then, for a binary alloy of the form A_{1-x}B_x, e.g. Si_{1-x}Ge_x, the standard deviation of the composition is given by

\sigma_x = \sqrt{\frac{x(1-x)}{N}},

where N is the number of atoms within the exciton's volume V_{exc}, i.e. N = n V_{exc} with n being the number of atoms per volume. In general, the band gap energy E_g of a semiconducting alloy depends on the composition, i.e. E_g = E_g(x). The band gap energy can be considered to be the fluorescence energy. Therefore, the standard deviation in fluorescence is

\sigma_E = \left| \frac{dE_g}{dx} \right| \sigma_x.

As alloy broadening belongs to the group of inhomogeneous broadenings, the line shape of the fluorescence intensity is Gaussian:

I(E) \propto \exp\!\left( -\frac{(E - E_g(x))^2}{2\sigma_E^2} \right).

References

Physical chemistry
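A short numerical sketch of these formulas (the composition, atomic density, exciton radius, and band-gap slope below are illustrative assumptions, not values from the article):

```python
import math

x = 0.3         # composition fraction of B atoms (assumed)
n = 5.0e28      # atoms per m^3, typical order for a semiconductor (assumed)
r_exc = 5e-9    # exciton radius in m (assumed)
dEg_dx = 0.8    # band-gap variation with composition in eV (assumed)

V_exc = 4.0 / 3.0 * math.pi * r_exc**3   # exciton volume
N = n * V_exc                            # atoms inside the exciton volume
sigma_x = math.sqrt(x * (1 - x) / N)     # composition fluctuation
sigma_E = abs(dEg_dx) * sigma_x          # Gaussian linewidth (std dev) in eV
print(f"sigma_E = {sigma_E * 1000:.2f} meV")
```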
Alloy broadening
[ "Physics", "Chemistry" ]
314
[ "Physical chemistry", "Applied and interdisciplinary physics", "Physical chemistry stubs", "nan" ]
77,481,293
https://en.wikipedia.org/wiki/Planar%20reentry%20equations
The planar reentry equations are the equations of motion governing the unpowered reentry of a spacecraft, based on the assumptions of planar motion and constant mass, in an Earth-fixed reference frame:

\frac{dV}{dt} = -\frac{\rho V^2}{2\beta} - g \sin\gamma,
\qquad
\frac{d\gamma}{dt} = \left( \frac{V}{r} - \frac{g}{V} \right) \cos\gamma + \frac{\rho V}{2\beta} \left( \frac{L}{D} \right) \cos\sigma,
\qquad
\frac{dh}{dt} = V \sin\gamma,

where the quantities in these equations are:
V is the velocity
\gamma is the flight path angle
h is the altitude
\rho is the atmospheric density
\beta is the ballistic coefficient
g is the gravitational acceleration
r = R_E + h is the radius from the center of a planet with equatorial radius R_E
L/D is the lift-to-drag ratio
\sigma is the bank angle of the spacecraft.

Simplifications
Allen-Eggers solution
Harry Allen and Alfred Eggers, based on their studies of ICBM trajectories, were able to derive an analytical expression for the velocity as a function of altitude. They made several assumptions:
The spacecraft's entry was purely ballistic (L/D = 0).
The effect of gravity is small compared to drag, and can be ignored.
The flight path angle \gamma and ballistic coefficient \beta are constant.
An exponential atmosphere, where \rho = \rho_0 e^{-h/H}, with \rho_0 being the density at the planet's surface and H being the scale height.
These assumptions are valid for hypersonic speeds, where the Mach number is greater than 5. Then the planar reentry equations for the spacecraft reduce to

\frac{dV}{dt} = -\frac{\rho V^2}{2\beta}, \qquad \frac{dh}{dt} = V \sin\gamma.

Rearranging terms and integrating from the atmospheric interface conditions (velocity V_{atm} at altitude h_{atm}) at the start of reentry leads to the expression

\ln\frac{V}{V_{atm}} = \frac{H}{2\beta \sin\gamma} \left( \rho(h) - \rho(h_{atm}) \right).

The term \rho(h_{atm}) is small and may be neglected, leading to the velocity

V(h) = V_{atm} \exp\!\left( \frac{\rho(h)\, H}{2\beta \sin\gamma} \right).

Allen and Eggers were also able to calculate the deceleration along the trajectory, in terms of the number of g's experienced, n = \rho V^2 / (2\beta g_0), where g_0 is the gravitational acceleration at the planet's surface. The altitude and velocity at maximum deceleration are

h_{n_{max}} = H \ln\!\left( -\frac{\rho_0 H}{\beta \sin\gamma} \right), \qquad V_{n_{max}} = V_{atm}\, e^{-1/2} \approx 0.607\, V_{atm},

with peak value n_{max} = -V_{atm}^2 \sin\gamma / (2 e H g_0). It is also possible to compute the maximum stagnation point convective heating with the Allen-Eggers solution and a heat transfer correlation; the Sutton-Graves correlation is commonly chosen. The heat rate at the stagnation point, with units of watts per square meter, is assumed to have the form

\dot{q} = k \sqrt{\frac{\rho}{r_n}}\, V^3,

where r_n is the effective nose radius. The constant k \approx 1.74 \times 10^{-4} for Earth, in SI units. Then the altitude and value of peak convective heating may be found:

h_{q_{max}} = H \ln\!\left( -\frac{3 \rho_0 H}{\beta \sin\gamma} \right), \qquad \dot{q}_{max} = k \sqrt{\frac{-\beta \sin\gamma}{3 H r_n}} \left( V_{atm}\, e^{-1/6} \right)^3.

Equilibrium glide condition
Another commonly encountered simplification is a lifting entry with a shallow, slowly-varying flight path angle. The velocity as a function of altitude can be derived from two assumptions:
The flight path angle is shallow, meaning that \cos\gamma \approx 1.
The flight path angle changes very slowly, such that d\gamma/dt \approx 0.
From these two assumptions, we may infer from the second equation of motion that

V^2 = \frac{g r}{1 + \dfrac{\rho r}{2\beta} \left( \dfrac{L}{D} \right) \cos\sigma}.

See also
Atmospheric entry
Hypersonic flight

References

Further reading
Regan, F.J.; Anandakrishnan, S.M. (1993). Dynamics of Atmospheric Re-Entry. AIAA Education Series. pp. 180-184.

Atmospheric entry
Differential equations
Aerospace engineering
Classical mechanics
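A brief numerical sketch of the Allen-Eggers solution (the entry velocity, flight path angle, ballistic coefficient, and atmosphere values below are illustrative assumptions):

```python
import numpy as np

# Assumed entry conditions and atmosphere (illustrative only):
V_atm = 7500.0               # entry velocity [m/s]
gamma = np.radians(-6.0)     # constant flight path angle (negative: descending)
beta = 400.0                 # ballistic coefficient m/(C_D A) [kg/m^2]
H = 7200.0                   # scale height [m]
rho0 = 1.225                 # surface density [kg/m^3]
g0 = 9.81                    # surface gravity [m/s^2]

def velocity(h):
    """Allen-Eggers velocity as a function of altitude h [m]."""
    rho = rho0 * np.exp(-h / H)
    return V_atm * np.exp(rho * H / (2.0 * beta * np.sin(gamma)))

h_peak = H * np.log(-rho0 * H / (beta * np.sin(gamma)))    # altitude of max decel
n_max = -V_atm**2 * np.sin(gamma) / (2.0 * np.e * H * g0)  # peak load in g's
print(h_peak / 1000.0, "km,", n_max, "g")
print(velocity(h_peak) / V_atm)   # ~0.607, i.e. exp(-1/2)
```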
Planar reentry equations
[ "Physics", "Mathematics", "Engineering" ]
552
[ "Mathematical objects", "Differential equations", "Classical mechanics", "Equations", "Atmospheric entry", "Mechanics", "Aerospace engineering" ]
77,489,015
https://en.wikipedia.org/wiki/Maris%E2%80%93Tandy%20model
Within the Schwinger-Dyson equation approach to calculating the structure of bound states under quantum field theory dynamics, one applies truncation schemes so that the tower of integral equations for Green's functions becomes finite and manageable. For hadrons (mesons and baryons) as relativistic bound states of quarks and gluons interacting via the strong nuclear force, a well-adopted scheme is the rainbow-ladder truncation. In particular, the bound state amplitude (Bethe-Salpeter amplitude) of mesons is determined from the homogeneous Bethe-Salpeter equation, while the amplitude for baryons is solved from the Faddeev equation. Information on the structure of hadrons is contained within these amplitudes. The established quantum field theory of the strong interaction is quantum chromodynamics (QCD). The Maris-Tandy model is a practical case of the rainbow-ladder truncation that yields a reasonable description of hadrons with up quarks, down quarks, and strange quarks as their valence quarks.

Description of the model
Within the Maris-Tandy model of the QCD interaction for quarks and gluons, the quark-gluon proper vertex in combination with the dressed gluon propagator in the Landau gauge is replaced, schematically, by the bare vertex multiplied by a scalar dressing function:

g^2 D_{\mu\nu}(q)\, \Gamma_\nu(k, p) \;\to\; G(q^2)\, D^{free}_{\mu\nu}(q)\, \gamma_\nu,

where k and p are the momenta of the quarks and q = k - p is the momentum of the gluon, all in Euclidean space. The matrix \gamma_\nu is the Dirac matrix. The scalar function is given, in the form commonly quoted in the literature, by

\frac{G(q^2)}{q^2} = \frac{4\pi^2 D}{\omega^6}\, q^2\, e^{-q^2/\omega^2} + \frac{4\pi^2 \gamma_m\, \mathcal{F}(q^2)}{\tfrac{1}{2} \ln\!\left[ \tau + \left( 1 + q^2/\Lambda_{QCD}^2 \right)^2 \right]}.

The parameters D and \omega specify the strength and the scale of the infrared term, respectively. The elementary color charge is denoted by g. The second term on the right-hand side is the ultraviolet (UV) term constructed in agreement with perturbative QCD, within which the parameter \Lambda_{QCD} is the characteristic scale of QCD. Other parameters in the UV term are explained in the references.

Applications
The Maris-Tandy model can be applied to solve for the structure of pions, kaons, and a selection of vector mesons from the homogeneous Bethe-Salpeter equation. It can also be used to solve for the quark-photon vertex from the inhomogeneous Bethe-Salpeter equation, for the elastic form factors of pseudoscalar mesons, and for the radiative transitions of mesons. Meanwhile, the mass spectrum and structure of nucleons can be solved within this model from the Faddeev equation.

References

Quantum chromodynamics
Quantum field theory
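A small sketch evaluating this dressing function (the parameter values below, and the overall normalization of the formula, should be treated as assumptions; ω ≈ 0.4 GeV and D ≈ 0.93 GeV² are fits often quoted in the literature):

```python
import math

# Assumed parameter values (commonly quoted fits; treat as illustrative):
omega = 0.4                        # infrared scale [GeV]
D = 0.93                           # infrared strength [GeV^2]
gamma_m = 12.0 / (33.0 - 2 * 4)    # anomalous mass dimension for N_f = 4
m_t = 0.5                          # UV regulator mass [GeV]
tau = math.e**2 - 1.0
Lambda_QCD = 0.234                 # [GeV]

def G_over_q2(q2):
    """Effective interaction G(q^2)/q^2: infrared Gaussian plus UV log tail."""
    ir = (4.0 * math.pi**2 * D / omega**6) * q2 * math.exp(-q2 / omega**2)
    F = (1.0 - math.exp(-q2 / (4.0 * m_t**2))) / q2
    uv = (4.0 * math.pi**2 * gamma_m * F
          / (0.5 * math.log(tau + (1.0 + q2 / Lambda_QCD**2)**2)))
    return ir + uv

for q2 in (0.01, 0.1, 1.0, 10.0):   # q^2 in GeV^2
    print(q2, G_over_q2(q2))
```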
Maris–Tandy model
[ "Physics" ]
526
[ "Quantum field theory", "Quantum mechanics" ]
77,490,433
https://en.wikipedia.org/wiki/Dicalcium%20ruthenate
Dicalcium ruthenate, with the chemical formula Ca2RuO4, is a stoichiometric oxide compound that hosts a multi-orbital (band) Mott insulating ground state. For this reason, Ca2RuO4 serves as an important "meeting point" between conceptual developments of strongly correlated multi-band physics and advanced experimental spectroscopies. Its electronic structure and its orbital magnetism are therefore subjects of experimental and theoretical scrutiny.

Electronic properties
Around 350 K, Ca2RuO4 undergoes a metal-insulator transition which involves a crystal structure transition leading to a strong c-axis compression. Negative thermal expansion has also been reported in conjunction with this c-axis compression. The metal-insulator transition is sensitive to electrical current. Below 80 K, an antiferromagnetic ordering emerges.

Related materials
Ca1.8Sr0.2RuO4 has been proposed as a candidate system for orbital-selective Mott physics. The bilayer compound Ca3Ru2O7 is metallic, but displays a sequence of electronic transitions below 60 K. Finally, Sr2RuO4 hosts an unconventional superconducting state.

References

Calcium compounds
Ruthenates
Condensed matter physics
Dicalcium ruthenate
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
248
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
47,576,594
https://en.wikipedia.org/wiki/HP%20Lyrae
HP Lyrae (HP Lyr) is a variable star in the constellation Lyra, with a visual magnitude varying between 10.2 and 10.8. It is likely an RV Tauri variable, an unstable post-AGB star losing mass before becoming a white dwarf.

Discovery
HP Lyr was first reported to be variable in 1935 by Otto Morgenroth of the Sonneberg Observatory. The range was given as 9.5 - 10.5 and the variability type only as long-period. In 1961, it was formally designated as a β Lyr eclipsing variable with two A-type supergiants in a close orbit producing smooth continuous variations with alternating minima of different depths. The period was given as 140.75 days, covering two maxima, and both a deep primary minimum and a slightly less deep secondary minimum. In 2001 a request was made for observations of HP Lyr. Shortly afterwards it was reported that HP Lyr was likely to be an RV Tauri variable rather than an eclipsing binary. This was confirmed by a more detailed study published in 2002. Some authors still maintain that the spectral type and nature of variation mean HP Lyr is more likely to be an eclipsing variable.

Variability
HP Lyr varies by about 0.5 magnitude over a "half-period" of 68.4 days. The formal period, defined for an RV Tauri variable as running from deep minimum to deep minimum, is twice that length. Its spectrum changes from A2-3 at maximum to F2 at the deepest minima. The radial velocity changes are typical of the pulsations of an RV Tauri variable, but are not compatible with a binary orbit. The spectral type and colour indicated that it was likely to be the hottest known RV Tauri star. Until 1960, the period of HP Lyr was very consistent at 140.75 days. Since then it has been observed to fall below 140 days, probably quite suddenly. A survey of historic photography including the star showed that the period changed in 1962 or 1963, taking no more than four cycles to reach a new value of 138.66 days.

Properties
A 2005 study of the elemental abundances of RV Tauri stars calculated that HP Lyr had a temperature around and typical abundances for an RV Tauri variable. It also revealed that the abundances were altered by dust-gas separation in circumstellar material. HP Lyr has been included in a catalog of confirmed post-AGB stars, highly evolved and on their way to becoming white dwarfs. In 2017, the temperature was calculated to be , still one of the hottest known RV Tau variables. The distance is uncertain, although large. Gaia Data Release 2 contains a parallax indicating a distance of around . Using luminosities derived from a period-luminosity-colour relationship, together with interstellar extinctions, gives a distance of around . From the luminosity and effective temperature, the radius is calculated to be . HP Lyrae is a post-AGB star, one that has completed its evolution along the asymptotic giant branch (AGB) and is now rapidly shedding its outer layers prior to becoming a white dwarf. During this process it becomes hotter and crosses the instability strip, which causes it to become unstable and pulsate.

Binary
Many RV Tauri stars are found to be in binary systems, and HP Lyrae has an invisible companion in an orbit. Its properties are not known, but the mass is estimated to be a little under , leaving open the possibility that it is a white dwarf.

References

External links
ASAS-3 information
INTEGRAL-OMC catalogue

Lyra
RV Tauri variables
A-type supergiants
F-type supergiants
J19213906+3956080
IRAS catalogue objects
Lyrae, HP
Beta Lyrae variables
HP Lyrae
[ "Astronomy" ]
788
[ "Lyra", "Constellations" ]
47,578,523
https://en.wikipedia.org/wiki/Adaptive%20Redaction
Adaptive Redaction is an alternative form of redaction whereby sensitive parts of a document are automatically removed based on policy. It is primarily used in next-generation Data Loss Prevention (DLP) solutions.

Content and context
The policy is a set of rules based on content and context. Context can include:
Who is sending (or uploading) the information.
Who is receiving the information (including a website, if uploading or downloading).
The communication channel (e.g. email, web, copy to removable media).
The content can be 'visible' information, such as that seen on the screen. It can also be 'invisible' information, such as that in document properties and revision history, or 'active' content embedded in an electronic document, such as a macro.

Purpose
Adaptive Redaction is designed to alleviate "false positive" events created by data loss prevention (DLP) security solutions. False positives occur when a DLP policy triggers and prevents legitimate outgoing communication. In the majority of cases this is caused by an oversight on the part of the sender.

Examples
Sending unprotected credit card information outside an organisation breaches the Payment Card Industry Data Security Standard (PCI DSS). Many organisations accept credit card information through email; however, a reply to an email containing such information would send the prohibited information back out, causing a breach of policy. Adaptive Redaction can be used to remove just the credit card number but allow the email to be sent. 'Invisible' information can be found in documents and has created embarrassment for several governments.

See also
Data masking
Redaction
Tokenization (data security)

References

Cryptography
Data security
Information technology
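A minimal sketch of the credit card example (an illustrative regex-plus-Luhn filter written for this article; it does not represent the implementation of any particular DLP product):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to reduce false matches on arbitrary digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens:
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def redact_cards(text: str) -> str:
    """Replace probable card numbers while leaving the rest of the email intact."""
    def repl(match):
        digits = re.sub(r"[ -]", "", match.group())
        return "[REDACTED]" if luhn_ok(digits) else match.group()
    return CARD_RE.sub(repl, text)

print(redact_cards("Please charge 4111 1111 1111 1111 and confirm."))
# -> Please charge [REDACTED] and confirm.
```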
Adaptive Redaction
[ "Mathematics", "Technology", "Engineering" ]
349
[ "Information and communications technology", "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Information technology", "Data security" ]
62,146,673
https://en.wikipedia.org/wiki/Glauber%20dynamics
In statistical physics, Glauber dynamics is a way to simulate the Ising model (a model of magnetism) on a computer.

The algorithm
In the Ising model, we have, say, N particles that can spin up (+1) or down (-1). Say the particles are on a 2D grid, and we label each with an x and y coordinate. Glauber's algorithm becomes:
Choose a particle at random.
Sum its four neighboring spins: S = s(x+1, y) + s(x-1, y) + s(x, y+1) + s(x, y-1).
Compute the change in energy if the spin at (x, y) were to flip. This is \Delta E = 2 J s(x, y) S (see the Hamiltonian for the Ising model).
Flip the spin with probability p = 1 / (1 + e^{\Delta E / (k_B T)}), where T is the temperature.
Display the new grid. Repeat the above N times.
In the Glauber algorithm, if the energy change in flipping a spin is zero, \Delta E = 0, then the spin flips with probability p = 1/2.

Comparison to Metropolis
In the Glauber dynamic, every spin has an equal chance of being chosen at each time step, regardless of having been chosen before. The Metropolis acceptance criterion also includes the Boltzmann weight, e^{-\Delta E / (k_B T)}, but it always flips a spin in favor of lowering the energy, such that the spin-flip probability is

p = \min\!\left( 1,\; e^{-\Delta E / (k_B T)} \right).

Although both of the acceptance probabilities approximate a step curve and they are almost indistinguishable at very low temperatures, they differ when the temperature gets high. For an Ising model on a 2D lattice, the critical temperature is T_c = 2J / (k_B \ln(1 + \sqrt{2})) \approx 2.27\, J/k_B.

In practice, the main difference between the Metropolis–Hastings algorithm and the Glauber algorithm is in choosing the spins and how to flip them (step 4). However, at thermal equilibrium, these two algorithms should give identical results. In general, at equilibrium, any MCMC algorithm should produce the same distribution, as long as the algorithm satisfies ergodicity and detailed balance. In both algorithms, the flip probability is nonzero for any change in energy, meaning that transitions between the states of the system are always possible, despite being very unlikely at some temperatures. So, the condition for ergodicity is satisfied for both of the algorithms.

Detailed balance, which is a requirement of reversibility, states that if you observe the system for a long enough time, the system goes from state A to B with the same frequency as going from B to A. In equilibrium, the probability of observing the system in state A is given by the Boltzmann weight, P(A) \propto e^{-E_A / (k_B T)}. So, the amount of time the system spends in low energy states is larger than in high energy states, and there is more chance that the system is observed in states where it spends more time. This means that when the transition from A to B is energetically unfavorable, the system happens to be at A more frequently, counterbalancing the lower intrinsic probability of transition. Therefore, both the Glauber and Metropolis–Hastings algorithms exhibit detailed balance.

History
The algorithm is named after Roy J. Glauber.

Software
Simulation package IsingLenzMC provides simulation of Glauber dynamics on 1D lattices with external field. CRAN.

Related pages
Metropolis algorithm
Ising model
Monte Carlo algorithm
Simulated annealing

References

Monte Carlo methods
Spin models
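A compact runnable sketch of the algorithm (units with J = k_B = 1; the lattice size, temperature, and step count are arbitrary choices for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, steps = 32, 2.0, 200_000          # lattice size, temperature, updates
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(steps):
    x, y = rng.integers(0, L, size=2)   # choose a particle at random
    S = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y] +
         spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
    dE = 2.0 * spins[x, y] * S          # energy change if the spin flips (J = 1)
    if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):   # Glauber flip probability
        spins[x, y] *= -1

print("magnetization per spin:", spins.mean())
```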
Glauber dynamics
[ "Physics" ]
629
[ "Spin models", "Monte Carlo methods", "Quantum mechanics", "Computational physics", "Statistical mechanics" ]
62,149,090
https://en.wikipedia.org/wiki/Hisashi%20Okamoto
Hisashi Okamoto (岡本 久, Okamoto Hisashi, born 23 November 1956) is a Japanese applied mathematician, specializing in mathematical fluid mechanics and computational fluid dynamics. Okamoto graduated from the University of Tokyo in March 1979. In April 1981 he became a research associate to Hiroshi Fujita (known for the Fujita-Kato theorem) at the University of Tokyo. There in 1985 he received his Doctorate of Science with Fujita as advisor. For the academic year 1986–1987 Okamoto was a visiting fellow at the University of Minnesota's Institute for Mathematics and Its Applications. In August 1987 Okamoto became an associate professor in the University of Tokyo's Department of Applied Science. In 1988 he visited the National University of Singapore. At Kyoto University's Research Institute for Mathematical Sciences (RIMS), he became an associate professor in April 1990 and a full professor in April 1994. At RIMS he was Head of the Computer Science Research Laboratory from 2004 to 2005 and deputy director in 2006, 2009, and 2011. He is editor-in-chief of the Japan Journal of Industrial and Applied Mathematics (JJIAM). Okamoto is the author or co-author of over 100 articles in refereed journals or in books of conference proceedings. He wrote, with Mayumi Shōji, the 2001 monograph The mathematical theory of permanent progressive water-waves. Awards and honors 1998 — Invited Speaker, International Congress of Mathematicians, Berlin 1998 2002 — Inoue Science Award 2011 — President of the East Asia SIAM, 2011–2012 2013 — Fellow of the Japan Society of Fluid Mechanics 2013 — Fellow of the Japan Society for Industrial and Applied Mathematics 2015 — Plenary Lecturer, Mathematical Society of Japan, September 2015. 2016 — Hiroshi Fujiwara Prize on Mathematical Science References 1956 births Living people 20th-century Japanese mathematicians 21st-century Japanese mathematicians Applied mathematicians Numerical analysts Fluid dynamicists University of Tokyo alumni Academic staff of Kyoto University Academic staff of the University of Tokyo
Hisashi Okamoto
[ "Chemistry", "Mathematics" ]
397
[ "Applied mathematics", "Applied mathematicians", "Fluid dynamicists", "Fluid dynamics" ]
62,153,058
https://en.wikipedia.org/wiki/Polytopological%20space
In general topology, a polytopological space consists of a set X together with a family {\tau_i}_{i \in I} of topologies on X that is linearly ordered by the inclusion relation, where I is an arbitrary index set. It is usually assumed that the topologies are in non-decreasing order. However, some authors prefer the associated closure operators {k_i}_{i \in I} to be in non-decreasing order, where k_i \leq k_j if and only if k_i(A) \subseteq k_j(A) for all A \subseteq X. This requires non-increasing topologies.

Formal definitions
An L-topological space (X, \tau) is a set X together with a monotone map \tau: L \to Top(X), where (L, \leq) is a partially ordered set and Top(X) is the set of all possible topologies on X, ordered by inclusion. When the partial order \leq is a linear order, then (X, \tau) is called a polytopological space. Taking L to be the ordinal number n = {0, 1, \dots, n-1}, an n-topological space (X, \tau_0, \dots, \tau_{n-1}) can be thought of as a set X with n topologies on it. More generally, a multitopological space (X, T) is a set X together with an arbitrary family T of topologies on it.

History
Polytopological spaces were introduced in 2008 by the philosopher Thomas Icard for the purpose of defining a topological model of Japaridze's polymodal logic (GLP). They were later used to generalize variants of Kuratowski's closure-complement problem. For example, Taras Banakh et al. proved that under operator composition the closure operators and complement operator on an arbitrary n-topological space can together generate at most a certain finite number of distinct operators, depending on n. In 1965 the Finnish logician Jaakko Hintikka found this bound for the case n = 2 and claimed it "does not appear to obey any very simple law as a function of" n.

See also
Bitopological space

References

Topology
Polytopological space
[ "Physics", "Mathematics" ]
322
[ "Spacetime", "Topology", "Space", "Geometry" ]
63,066,808
https://en.wikipedia.org/wiki/Adrian%20Mathias
Adrian Richard David Mathias (born 12 February 1944) is a British mathematician working in set theory. The forcing notion Mathias forcing is named for him. Career Mathias was educated at Shrewsbury and Trinity College, Cambridge, where he read mathematics and graduated in 1965. After graduation, he moved to Bonn in Germany where he studied with Ronald Jensen, visiting UCLA, Stanford, the University of Wisconsin, and Monash University during that period. In 1969, he returned to Cambridge as a research fellow at Peterhouse and was admitted to the Ph.D. at Cambridge University in 1970. From 1969 to 1990, Mathias was a fellow of Peterhouse; during this period, he was the editor of the Mathematical Proceedings of the Cambridge Philosophical Society from 1972 to 1974, spent one academic year (1978/79) as Hochschulassistent to Jensen in Freiburg and another year (1989/90) at the MSRI in Berkeley. After leaving Peterhouse in 1990, Mathias had visiting positions in Warsaw, at the Mathematisches Forschungsinstitut Oberwolfach, at the CRM in Barcelona, and in Bogotá, before becoming Professor at the Université de la Réunion. He retired from his professorship in 2012 and was admitted to the higher degree of Doctor of Science at the University of Cambridge in 2015. Work Mathias became mathematically active soon after the introduction of forcing by Paul Cohen, and Kanamori credits his survey of forcing that was eventually published as Surrealist landscape with figures as being a "vital source" on forcing in its early days. His paper Happy families, extending his 1968 Cambridge thesis, proves important properties of the forcing now known as Mathias forcing. In the same paper he shows that no (infinite) maximal almost disjoint family can be analytic. Mathias also used forcing to separate two weak forms of the Axiom of choice, showing that the ordering principle, which states that any set can be linearly ordered, does not imply the Boolean Prime Ideal Theorem. His more recent work on forcing includes the study of the theory PROVI of provident sets, a minimalist axiom system that still allows the forcing construction to proceed. Mathias is also known for his writings around sociological aspects of logic. These include The ignorance of Bourbaki and Hilbert, Bourbaki and the scorning of logic, in which Mathias criticises Bourbaki's approach to logic; in A Term of Length 4,523,659,424,929 he shows that the number in the title is the number of symbols required for Bourbaki's definition of the number 1. Mathias has also considered claims that standard ZFC is stronger than necessary for "mainstream" mathematics; his paper What is Mac Lane missing? on this topic appeared alongside Saunders Mac Lane's response Is Mathias an ontologist?. Mathias also conducted a detailed study of the strength of a weakened system suggested by Mac Lane. References External links Home page Adrian Richard David Mathias at the Mathematics Genealogy Project 20th-century English mathematicians 21st-century English mathematicians 1944 births Living people Mathematical logicians Set theorists Fellows of Peterhouse, Cambridge Alumni of Trinity College, Cambridge Cambridge mathematicians
Adrian Mathias
[ "Mathematics" ]
643
[ "Mathematical logic", "Mathematical logicians" ]
63,067,144
https://en.wikipedia.org/wiki/Boltzmann%20sampler
A Boltzmann sampler is an algorithm intended for random sampling of combinatorial structures. If the object size is viewed as its energy, and the argument of the corresponding generating function is interpreted in terms of the temperature of the physical system, then a Boltzmann sampler returns an object from a classical Boltzmann distribution.

The concept of the Boltzmann sampler was proposed by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer in 2004.

Description
The concept of Boltzmann sampling is closely related to the symbolic method in combinatorics. Let \mathcal{C} be a combinatorial class with an ordinary generating function C(z) which has a nonzero radius of convergence \rho, i.e. C(z) is complex analytic. Formally speaking, if each object c \in \mathcal{C} is equipped with a non-negative integer size |c|, then the generating function is defined as

C(z) = \sum_{n \geq 0} c_n z^n,

where c_n denotes the number of objects of size n. The size function is typically used to denote the number of vertices in a tree or in a graph, the number of letters in a word, etc. A Boltzmann sampler for the class \mathcal{C} with a parameter x such that 0 < x < \rho, denoted as \Gamma \mathcal{C}(x), returns an object c \in \mathcal{C} with probability

\mathbb{P}(c) = \frac{x^{|c|}}{C(x)}.

Construction
Finite sets
If \mathcal{C} is finite, then an element c \in \mathcal{C} is drawn with probability proportional to x^{|c|}.

Disjoint union
If the target class \mathcal{C} is a disjoint union of two other classes, \mathcal{C} = \mathcal{A} + \mathcal{B}, and the generating functions A(z) and B(z) of \mathcal{A} and \mathcal{B} are known, then the Boltzmann sampler for \mathcal{C} can be obtained as

\Gamma \mathcal{C}(x) := \text{if } \mathrm{Bern}\!\left( A(x)/C(x) \right) \text{ then } \Gamma \mathcal{A}(x) \text{ else } \Gamma \mathcal{B}(x),

where "if Bern(p) then f else g" stands for "if a Bernoulli random variable with parameter p is 1, then execute f, else execute g". More generally, if the disjoint union is taken over a finite set, the resulting Boltzmann sampler can be represented using a random choice with probabilities proportional to the values of the generating functions.

Cartesian product
If \mathcal{C} = \mathcal{A} \times \mathcal{B} is a class constructed of ordered pairs c = (a, b) where a \in \mathcal{A} and b \in \mathcal{B}, then the corresponding Boltzmann sampler can be obtained as

\Gamma \mathcal{C}(x) := (\Gamma \mathcal{A}(x), \Gamma \mathcal{B}(x)),

i.e. by forming a pair (a, b) with a and b drawn independently from \Gamma \mathcal{A}(x) and \Gamma \mathcal{B}(x).

Sequence
If \mathcal{C} = \mathrm{Seq}(\mathcal{A}) is composed of all the finite sequences of elements of \mathcal{A}, with the size of a sequence additively inherited from the sizes of its components, then the generating function of \mathcal{C} is expressed as C(z) = 1/(1 - A(z)), where A(z) is the generating function of \mathcal{A}. Alternatively, the class \mathcal{C} admits the recursive representation \mathcal{C} = 1 + \mathcal{A} \mathcal{C}. This gives two possibilities for \Gamma \mathcal{C}(x):

\Gamma \mathcal{C}(x) := \text{if } \mathrm{Bern}(A(x)) \text{ then } (\Gamma \mathcal{A}(x), \Gamma \mathcal{C}(x)) \text{ else } (),
\Gamma \mathcal{C}(x) := \text{draw } k \sim \mathrm{Geom}(A(x)) \text{ and return } k \text{ independent copies of } \Gamma \mathcal{A}(x),

where the second line stands for "draw a random variable k; if the value k is returned, then execute \Gamma \mathcal{A}(x) independently k times and return the sequence obtained". Here, Geom(\lambda) stands for the geometric distribution \mathbb{P}(k) = \lambda^k (1 - \lambda).

Recursive classes
As the first construction of the sequence operator suggests, Boltzmann samplers can be used recursively. If the target class is part of a system of classes, each defined by an expression involving only disjoint union, cartesian product and the sequence operator, then the corresponding Boltzmann sampler is well defined. Given the argument value x, the numerical values of the generating functions can be obtained by Newton iteration. A sketch of these combinators as code is given below.
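The basic unlabelled combinators can be sketched as follows (hypothetical helper names; the generating-function values at the chosen x are assumed to have been precomputed, e.g. by Newton iteration):

```python
import random

# Each sampler is a zero-argument callable returning a random object.
# Generating-function values at the chosen parameter x are passed as floats.

def union(sampler_a, A_x, sampler_b, B_x):
    """Disjoint union A + B: Bernoulli choice with probability A(x)/C(x)."""
    def sample():
        if random.random() < A_x / (A_x + B_x):
            return sampler_a()
        return sampler_b()
    return sample

def product(sampler_a, sampler_b):
    """Cartesian product A x B: two independent components."""
    return lambda: (sampler_a(), sampler_b())

def sequence(sampler_a, A_x):
    """Seq(A): geometric number of independent components (requires A(x) < 1)."""
    def sample():
        out = []
        while random.random() < A_x:   # continue with probability A(x)
            out.append(sampler_a())
        return tuple(out)
    return sample
```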
Labelled structures
Boltzmann sampling can be applied to labelled structures. For a labelled combinatorial class \mathcal{C}, the exponential generating function is used instead:

C(z) = \sum_{n \geq 0} c_n \frac{z^n}{n!},

where c_n denotes the number of labelled objects of size n. The operations of cartesian product and sequence need to be adjusted to take labelling into account, and the principle of construction remains the same. In the labelled case, the Boltzmann sampler for a labelled class \mathcal{C} is required to output an object c \in \mathcal{C} with probability

\mathbb{P}(c) = \frac{x^{|c|}}{|c|!\, C(x)}.

Labelled sets
In the labelled universe, a class \mathcal{C} = \mathrm{Set}(\mathcal{A}) can be composed of all the finite sets of elements of a class \mathcal{A}, with order-consistent relabellings. In this case, the exponential generating function of the class \mathcal{C} is written as

C(z) = e^{A(z)},

where A(z) is the exponential generating function of the class \mathcal{A}. The Boltzmann sampler for \mathcal{C} can be described as

\Gamma \mathcal{C}(x) := \text{draw } k \sim \mathrm{Poisson}(A(x)) \text{ and return the set of } k \text{ independent copies of } \Gamma \mathcal{A}(x),

where Poisson(\lambda) stands for the standard Poisson distribution \mathbb{P}(k) = e^{-\lambda} \lambda^k / k!.

Labelled cycles
In the cycle construction, a class \mathcal{C} = \mathrm{Cyc}(\mathcal{A}) is composed of all the finite sequences of elements of a class \mathcal{A}, where two sequences are considered equivalent if they can be obtained from one another by a cyclic shift. The exponential generating function of the class \mathcal{C} is written as

C(z) = \log \frac{1}{1 - A(z)},

where A(z) is the exponential generating function of the class \mathcal{A}. The Boltzmann sampler for \mathcal{C} can be described as

\Gamma \mathcal{C}(x) := \text{draw } k \sim \mathrm{Loga}(A(x)) \text{ and return the cycle of } k \text{ independent copies of } \Gamma \mathcal{A}(x),

where Loga(\lambda) describes the logarithmic (log-law) distribution \mathbb{P}(k) = \frac{1}{\log (1-\lambda)^{-1}} \cdot \frac{\lambda^k}{k}.

Properties
Let N denote the random size of an object generated by \Gamma \mathcal{C}(x). Then the size has first and second moments satisfying

\mathbb{E}_x(N) = x \frac{C'(x)}{C(x)}, \qquad \mathbb{E}_x(N^2) = \frac{x^2 C''(x) + x C'(x)}{C(x)}.

Examples
Binary trees
The class \mathcal{B} of binary trees can be defined by the recursive specification

\mathcal{B} = \mathcal{Z} + \mathcal{Z} \times \mathcal{B} \times \mathcal{B},

and its generating function satisfies the equation B(z) = z + z B(z)^2 and can be evaluated as a solution of the quadratic equation:

B(z) = \frac{1 - \sqrt{1 - 4z^2}}{2z}.

The resulting Boltzmann sampler can be described recursively by

\Gamma \mathcal{B}(x) := \text{if } \mathrm{Bern}\!\left( x/B(x) \right) \text{ then a single leaf, else a root with subtrees } (\Gamma \mathcal{B}(x), \Gamma \mathcal{B}(x));

a runnable sketch of this sampler is given below, after the discussion of differential specifications.

Set partitions
Consider the various partitions of the set {1, \dots, n} into several non-empty classes, disordered between themselves. Using the symbolic method, the class of set partitions can be expressed as

\mathcal{S} = \mathrm{Set}(\mathrm{Set}_{\geq 1}(\mathcal{Z})).

The corresponding generating function is equal to S(z) = e^{e^z - 1}. Therefore, the Boltzmann sampler can be described as

\Gamma \mathcal{S}(x) := \text{draw } k \sim \mathrm{Poisson}(e^x - 1) \text{ and return } k \text{ classes, each of independent positive Poisson}(x) \text{ size},

where the positive Poisson distribution is a Poisson distribution with a parameter conditioned to take only positive values.

Further generalisations
The original Boltzmann samplers described by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer only support the basic unlabelled operations of disjoint union, cartesian product and sequence, and two additional operations for labelled classes, namely the set and the cycle construction. Since then, the scope of combinatorial classes for which a Boltzmann sampler can be constructed has expanded.

Unlabelled structures
The admissible operations for unlabelled classes include such additional operations as the multiset, cycle and powerset constructions. Boltzmann samplers for these operations have been described by Philippe Flajolet, Éric Fusy and Carine Pivoteau.

Differential specifications
Let \mathcal{C} be a labelled combinatorial class. The derivative operation is defined as follows: take a labelled object c and replace the atom with the largest label by a distinguished atom without a label, thereby reducing the size of the resulting object by 1. If C(z) is the exponential generating function of the class \mathcal{C}, then the exponential generating function of the derivative class \mathcal{C}' is given by

C'(z) = \frac{d}{dz} C(z).

A differential specification is a recursive specification of type

\mathcal{C}' = \mathcal{H}(\mathcal{Z}, \mathcal{C}),

where the expression \mathcal{H} involves only the standard operations of union, product, sequence, cycle and set, and does not involve differentiation. Boltzmann samplers for differential specifications have been constructed by Olivier Bodini, Olivier Roussel and Michèle Soria.
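Returning to the binary tree example above, the recursive sampler can be sketched as follows (x must lie in (0, 1/2) so that the closed form for B(x) is real; values close to 1/2 yield large trees):

```python
import math
import random

def B(x):
    """Generating function of binary trees: B(x) = (1 - sqrt(1 - 4x^2)) / (2x)."""
    return (1.0 - math.sqrt(1.0 - 4.0 * x * x)) / (2.0 * x)

def sample_tree(x):
    """Boltzmann sampler for B = Z + Z*B*B: a leaf, or a node with two subtrees."""
    if random.random() < x / B(x):    # probability of producing a single leaf
        return "leaf"
    return (sample_tree(x), sample_tree(x))

random.seed(1)
print(sample_tree(0.4))
```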
Multi-parametric Boltzmann samplers
A multi-parametric Boltzmann distribution for multiparametric combinatorial classes is defined similarly to the classical case. Assume that each object c is equipped with a composition size |c| = (|c|_1, \dots, |c|_d), a vector of non-negative integer numbers. Each of the size functions can reflect one of the parameters of a data structure, such as the number of leaves of a certain colour in a tree, the height of the tree, etc. The corresponding multivariate generating function is then associated with the multi-parametric class, and is defined as

C(z_1, \dots, z_d) = \sum_{c \in \mathcal{C}} z_1^{|c|_1} \cdots z_d^{|c|_d}.

A Boltzmann sampler for the multiparametric class \mathcal{C} with a vector parameter (x_1, \dots, x_d) inside the domain of analyticity of C, denoted as \Gamma \mathcal{C}(x_1, \dots, x_d), returns an object c \in \mathcal{C} with probability

\mathbb{P}(c) = \frac{x_1^{|c|_1} \cdots x_d^{|c|_d}}{C(x_1, \dots, x_d)}.

Multiparametric Boltzmann samplers have been constructed by Olivier Bodini and Yann Ponty. A polynomial-time algorithm for finding the numerical values of the parameters, given the target parameter expectations, can be obtained by formulating an auxiliary convex optimisation problem.

Applications
Boltzmann sampling can be used to generate algebraic data types for the sake of property-based testing.

Software
Random Discrete Objects Suite (RDOS): http://lipn.fr/rdos/
Combstruct package in Maple: https://www.maplesoft.com/support/help/Maple/view.aspx?path=combstruct
Haskell package Boltzmann Brain: https://github.com/maciej-bendkowski/boltzmann-brain

References

Combinatorial algorithms
Boltzmann sampler
[ "Mathematics" ]
1,566
[ "Combinatorial algorithms", "Computational mathematics", "Combinatorics" ]
63,067,942
https://en.wikipedia.org/wiki/Austrian%20Lightning%20Detection%20%26%20Information%20System
ALDIS (Austrian Lightning Detection & Information System) is a sensor network in Austria for the detection and localization of lightning discharges occurring during thunderstorms. In addition to the location of the strike point, the associated peak current is also estimated. ALDIS is a member of the pan-European lightning detection project EUCLID (EUropean Cooperation for LIghtning Detection).

ALDIS was initiated in 1991. Project partners are the OVE (Austrian Electrotechnical Association) and APG (Austrian Power Grid). The detection of lightning, either from cloud to ground or within the cloud, is accomplished by eight sensors of type LS7002 (Vaisala), which are distributed across the Austrian territory.

The performance of a lightning location system is best described by the main performance parameters detection efficiency (DE), location accuracy (LA), and classification accuracy (CA). A study by Schwalt et al. (2020), based on data from a high speed video camera and an electric field recording system, shows that the DE of flashes (any group of cloud-to-ground, CG, and intracloud, IC, discharges belonging to the same origin in the cloud) exceeds 96%. The LA of the detected cloud-to-ground discharges is about 100 m on average. The accuracy of correctly classifying individual lightning events as cloud-to-ground (CG) or intracloud (IC) events is 80-90% (classification accuracy, CA) for the sensor system LS7002.

Since 1998 a radio tower located on top of the Gaisberg (a mountain near Salzburg, Austria) has been equipped with instruments to record lightning current waveforms and to obtain parameters of the lightning strokes to the tower. The data obtained there are also used for performance analyses and calibration of the ALDIS lightning location system and for lightning research in general.

The main goals of the ALDIS project group are:
to provide lightning data to a number of lightning-sensitive businesses and organizations in Austria (meteorological services, insurance companies, etc.)
to perform research into the origins and effects of lightning, which has an impact on the development of lightning protection systems
thunderstorm warning by automatically monitoring first indications of approaching thunderstorms; warning messages can be sent to critical industries (e.g. handling of explosives)
long-term archiving of located lightning for statistical evaluations in connection with the determination of the local lightning hazard, or for risk management according to the valid international lightning protection standards (IEC/EN 62305 series).

Some historical lightning data can be accessed via HORA (Natural Hazard Overview & Risk Assessment Austria, https://hora.gv.at). An overview of the current lightning activity in Austria is shown on ALDIS mobile.

References

Lightning
Science and technology in Austria
Austrian Lightning Detection & Information System
[ "Physics" ]
583
[ "Physical phenomena", "Electrical phenomena", "Lightning" ]
63,069,997
https://en.wikipedia.org/wiki/The%20Mathematical%20Coloring%20Book
The Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of Its Creators is a book on graph coloring, Ramsey theory, and the history of the development of these areas, concentrating in particular on the Hadwiger–Nelson problem and on the biography of Bartel Leendert van der Waerden. It was written by Alexander Soifer and published by Springer-Verlag in 2009. The book has since been updated: The New Mathematical Coloring Book: Mathematics of Coloring and the Colorful Life of Its Creators was published in 2024.

Topics
The book "presents mathematics as a human endeavor" and "explores the birth of ideas and moral dilemmas of the times between and during the two World Wars". As such, as well as covering the mathematics of its topics, it includes biographical material and correspondence with many of the people who created it, including in-depth coverage of Issai Schur, Pierre Joseph Henry Baudet, and Bartel Leendert van der Waerden, in particular studying the question of van der Waerden's complicity with the Nazis during his war-time service as a professor in Nazi Germany. It also includes biographical material on Paul Erdős, Frank P. Ramsey, Emmy Noether, Alfred Brauer, Richard Courant, Kenneth Falconer, Nicolaas de Bruijn, Hillel Furstenberg, and Tibor Gallai, among others, as well as many historical photos of these subjects.

Mathematically, the book considers problems "on the boundary of geometry, combinatorics, and number theory", involving graph coloring problems such as the four color theorem, and generalizations of coloring in Ramsey theory where the use of too small a number of colors leads to monochromatic structures larger than a single graph edge. Central to the book is the Hadwiger–Nelson problem, the problem of coloring the points of the Euclidean plane in such a way that no two points of the same color are a unit distance apart. Other topics covered by the book include Van der Waerden's theorem on monochromatic arithmetic progressions in colorings of the integers and its generalization to Szemerédi's theorem, the happy ending problem, Rado's theorem, and questions in the foundations of mathematics involving the possibility that different choices of foundational axioms will lead to different answers to some of the coloring questions considered here.

Reception and audience
As a work in graph theory, reviewer Joseph Malkevitch suggests caution over the book's intuitive treatment of graphs that may in many cases be infinite, in comparison with much other work in this area that makes an implicit assumption that every graph is finite. William Gasarch is surprised by the book's omission of some closely related topics, including the proof of the Heawood conjecture on coloring graphs on surfaces by Gerhard Ringel and Ted Youngs. Günter M. Ziegler complains that many claims are presented without proof. Although Soifer has called the Hadwiger–Nelson problem "the most important problem in all of mathematics", Ziegler disagrees, and suggests that it and the four color theorem are too isolated to be fruitful topics of study.

As a work in the history of mathematics, Malkevitch finds the book too credulous of first-person recollections of troubled political times (the lead-up to World War II) and of priority in mathematical discoveries. Ziegler points to several errors of fact in the book's history, takes issue with its insistence that each contribution should be attributed to only one researcher, and doubts Soifer's objectivity with respect to van der Waerden. And reviewer John J.
Watkins writes that "Soifer's book is indeed a treasure trove filled with valuable historical and mathematical information, but a serious reader must also be prepared to sift through a considerable amount of dross" to reach the treasure. And although Watkins is convinced by Soifer's argument that the first conjectural versions of van der Waerden's theorem were due to Schur and Baudet, he finds Soifer's insistence that this updated credit necessitates a change in the name of the theorem idiosyncratic, concluding that "This is a book that needed far better editing." Ziegler agrees, writing "Someone should have also forced him to cut the manuscript, at the long parts and chapters where the investigations into the colorful lives of the creators get out of hand."

According to Malkevitch, the book is written for a broad audience, and does not require a graduate-level background in its material, but nevertheless contains much that is of interest to experts as well as beginners. And despite his negative review, Ziegler concurs, writing that it "has interesting parts and a lot of valuable material". Gasarch is much more enthusiastic, writing "This is a Fantastic Book! Go buy it Now!".

References

Graph coloring
Ramsey theory
Books about the history of mathematics
2009 non-fiction books
The Mathematical Coloring Book
[ "Mathematics" ]
1,026
[ "Graph coloring", "Graph theory", "Combinatorics", "Mathematical relations", "Ramsey theory" ]
63,070,573
https://en.wikipedia.org/wiki/Derived%20noncommutative%20algebraic%20geometry
In mathematics, derived noncommutative algebraic geometry, the derived version of noncommutative algebraic geometry, is the geometric study of derived categories and related constructions of triangulated categories using categorical tools. Some basic examples include the bounded derived category of coherent sheaves on a smooth variety X, D^b(X), called its derived category, or the derived category of perfect complexes on an algebraic variety, denoted D_{perf}(X). For instance, the derived category of coherent sheaves on a smooth projective variety can be used as an invariant of the underlying variety in many cases (when X has an ample (anti-)canonical sheaf). Unfortunately, studying derived categories as geometric objects in themselves does not have a standardized name.

Derived category of projective line
The derived category of P^1 is one of the motivating examples for derived non-commutative schemes due to its easy categorical structure. Recall that the (twisted) Euler sequence of P^1 is the short exact sequence

0 \to \mathcal{O}(-2) \to \mathcal{O}(-1)^{\oplus 2} \to \mathcal{O} \to 0.

If we consider the two terms on the right as a complex, then we get the distinguished triangle

\mathcal{O}(-1)^{\oplus 2} \to \mathcal{O} \to \mathcal{O}(-2)[1].

Since \mathcal{O}(-2)[1] \cong \mathrm{Cone}(\mathcal{O}(-1)^{\oplus 2} \to \mathcal{O}), we have constructed the sheaf \mathcal{O}(-2) using only categorical tools. We could repeat this by tensoring the Euler sequence by further line bundles and applying the cone construction again, and by taking the duals of the sheaves we can construct all of the line bundles in D^b(P^1) using only its triangulated structure. It turns out the correct way of studying derived categories from their objects and triangulated structure is with exceptional collections.

Semiorthogonal decompositions and exceptional collections
The technical tools for encoding this construction are semiorthogonal decompositions and exceptional collections. A semiorthogonal decomposition of a triangulated category \mathcal{T} is a collection of full triangulated subcategories \mathcal{T}_1, \dots, \mathcal{T}_n such that the following two properties hold:

(1) For objects T_i \in \mathcal{T}_i we have \mathrm{Hom}(T_i, T_j) = 0 for i > j.
(2) The subcategories generate \mathcal{T}, meaning every object T \in \mathcal{T} admits a tower of morphisms whose successive cones T_i lie in the subcategories \mathcal{T}_i.

Notice this is analogous to a filtration of an object in an abelian category whose successive quotients live in specified subcategories.

We can specialize this a little further by considering exceptional collections of objects, which generate their own subcategories. An object E in a triangulated category is called exceptional if the following property holds:

\mathrm{Hom}(E, E[\ell]) = \begin{cases} k & \ell = 0 \\ 0 & \ell \neq 0, \end{cases}

where k is the underlying field of the vector spaces of morphisms. A collection of exceptional objects E_1, \dots, E_r is an exceptional collection of length r if for any i > j and any \ell we have

\mathrm{Hom}(E_i, E_j[\ell]) = 0,

and it is a strong exceptional collection if in addition, for any \ell \neq 0 and any i, j, we have

\mathrm{Hom}(E_i, E_j[\ell]) = 0.

We can then decompose our triangulated category into the semiorthogonal decomposition

\mathcal{T} = \langle \mathcal{T}', E_1, \dots, E_r \rangle,

where \mathcal{T}' is the subcategory of objects T \in \mathcal{T} such that \mathrm{Hom}(E_i, T[\ell]) = 0 for all i and \ell. If in addition \mathcal{T}' = 0, then the strong exceptional collection is called full.

Beilinson's theorem
Beilinson provided the first example of a full strong exceptional collection. In the derived category D^b(P^n) the line bundles \mathcal{O}, \mathcal{O}(1), \dots, \mathcal{O}(n) form a full strong exceptional collection. He proves the theorem in two parts: first showing these objects are an exceptional collection, and second by showing the diagonal \mathcal{O}_\Delta of P^n \times P^n has a resolution whose terms are external tensor products of pullbacks of the exceptional objects.

Technical Lemma
An exceptional collection of sheaves E_1, \dots, E_r on X is full if there exists a resolution

0 \to p_1^* E_1 \otimes p_2^* F_1 \to \cdots \to p_1^* E_r \otimes p_2^* F_r \to \mathcal{O}_\Delta \to 0

in D^b(X \times X), where the F_i are arbitrary coherent sheaves on X.
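For orientation, the resolution of the diagonal that Beilinson used on P^n is usually written as follows (standard form from the literature; \boxtimes denotes the external tensor product p_1^*(-) \otimes p_2^*(-)):

```latex
% Beilinson's resolution of the diagonal on P^n x P^n (standard form):
\[
0 \to \mathcal{O}(-n) \boxtimes \Omega^n(n) \to \cdots \to
\mathcal{O}(-1) \boxtimes \Omega^1(1) \to
\mathcal{O} \boxtimes \mathcal{O} \to \mathcal{O}_\Delta \to 0.
\]
```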
Another way to reformulate this lemma for X = P^n is by looking at the Koszul complex associated to the evaluation map

\bigoplus_{i=0}^{n} \mathcal{O}(-D_i) \to \mathcal{O},

where the D_i are hyperplane divisors of P^n. This gives the exact complex

0 \to \mathcal{O}\!\left( -\textstyle\sum_{i=0}^{n} D_i \right) \to \cdots \to \bigoplus_{i=0}^{n} \mathcal{O}(-D_i) \to \mathcal{O} \to 0,

which gives a way to construct \mathcal{O}(-n-1) using the sheaves \mathcal{O}(-n), \dots, \mathcal{O}, since these are the sheaves used in all the other terms of the above exact sequence; this gives a derived equivalence of \mathcal{O}(-n-1) with the complex formed by the remaining terms. For P^1 the Koszul complex above is the exact complex

0 \to \mathcal{O}(-2) \to \mathcal{O}(-1)^{\oplus 2} \to \mathcal{O} \to 0,

giving the quasi-isomorphism of \mathcal{O}(-2) with the complex [\mathcal{O}(-1)^{\oplus 2} \to \mathcal{O}].

Orlov's reconstruction theorem
If X is a smooth projective variety with ample (anti-)canonical sheaf, and there is an equivalence of derived categories D^b(X) \cong D^b(Y), then there is an isomorphism of the underlying varieties.

Sketch of proof
The proof starts out by analyzing two induced Serre functors on D^b(Y) and finding an isomorphism between them. In particular, it shows there is an object \omega_Y which acts like the dualizing sheaf on Y. The isomorphism between these two functors gives an isomorphism of the sets of underlying points of the derived categories. Then, what needs to be checked is an isomorphism on the spaces of sections of each power of the (anti-)canonical sheaf, giving an isomorphism of the canonical rings of X and Y. If \omega_Y can be shown to be (anti-)ample, then the Proj of these rings will give an isomorphism X \cong Y. All of the details are contained in Dolgachev's notes.

Failure of reconstruction
This theorem fails in the case where X is Calabi-Yau, since then \omega_X \cong \mathcal{O}_X, or where X is the product of a variety which is Calabi-Yau. Abelian varieties are a class of examples where a reconstruction theorem could never hold. If X is an abelian variety and \hat{X} is its dual, the Fourier–Mukai transform with kernel the Poincaré bundle \mathcal{P} gives an equivalence of derived categories

D^b(X) \cong D^b(\hat{X}).

Since an abelian variety is generally not isomorphic to its dual, there are derived equivalent derived categories without isomorphic underlying varieties. There is an alternative theory of tensor triangulated geometry where we consider not only a triangulated category, but also a monoidal structure, i.e. a tensor product. This geometry has a full reconstruction theorem using the spectrum of categories.

Equivalences on K3 surfaces
K3 surfaces are another class of examples where reconstruction fails, due to their Calabi-Yau property. There is a criterion for determining whether or not two K3 surfaces are derived equivalent: the derived category of a K3 surface S is derived equivalent to that of another K3 surface S' if and only if there is a Hodge isometry of their transcendental lattices, that is, an isomorphism of Hodge structures. Moreover, this theorem is reflected in the motivic world as well, where the Chow motives are isomorphic if and only if there is an isometry of Hodge structures.

Autoequivalences
One nice application of the proof of this theorem is the identification of the autoequivalences of the derived category of a smooth projective variety with ample (anti-)canonical sheaf. This is given by

\mathrm{Auteq}(D^b(X)) \cong \mathbb{Z} \times \left( \mathrm{Aut}(X) \ltimes \mathrm{Pic}(X) \right),

where an autoequivalence is given by an automorphism of the variety, then tensoring by a line bundle, and finally composing with a shift. Note that \mathrm{Aut}(X) acts on \mathrm{Pic}(X) via the polarization map.

Relation with motives
The bounded derived category was used extensively in SGA6 to construct an intersection theory with perfect complexes and the Grothendieck group. Since these objects are intimately related to the Chow ring of X and its Chow motive M(X), Orlov asked the following question: given a fully-faithful functor

F: D^b(X) \to D^b(Y),

is there an induced map on the Chow motives such that M(X) is a summand of M(Y)? In the case of K3 surfaces, a similar result has been confirmed, since derived equivalent K3 surfaces have an isometry of Hodge structures, which gives an isomorphism of motives.

Derived category of singularities
On a smooth variety there is an equivalence between the bounded derived category D^b(X) and the thick full triangulated subcategory of perfect complexes \mathrm{Perf}(X).
For separated, Noetherian schemes of finite Krull dimension (satisfying the ELF condition) this is not the case, and Orlov defines the derived category of singularities as their difference, using a quotient of categories. For an ELF scheme X its derived category of singularities is defined as

D_{sg}(X) := D^b(X) / \mathrm{Perf}(X),

for a suitable definition of localization of triangulated categories.

Construction of localization
Although localization of categories is defined for a class of morphisms in the category closed under composition, we can construct such a class from a triangulated subcategory. Given a full triangulated subcategory \mathcal{N} \subset \mathcal{D}, the class of morphisms \Sigma consists of the morphisms s: X \to Y in \mathcal{D} which fit into a distinguished triangle

X \xrightarrow{s} Y \to N \to X[1],

with X, Y \in \mathcal{D} and N \in \mathcal{N}. It can be checked that this forms a multiplicative system, using the octahedral axiom for distinguished triangles: given composable morphisms in \Sigma whose cones N, N' lie in \mathcal{N}, the octahedral axiom provides a distinguished triangle for their composition whose cone N'' is an extension of N' by N, and hence lies in \mathcal{N}, since \mathcal{N} is closed under extensions. The new category \mathcal{D}/\mathcal{N} has the following properties:

It is canonically triangulated, where a triangle in \mathcal{D}/\mathcal{N} is distinguished if it is isomorphic to the image of a triangle in \mathcal{D}.
The category \mathcal{D}/\mathcal{N} has the following universal property: any exact functor F: \mathcal{D} \to \mathcal{D}' such that F(N) \cong 0 for every N \in \mathcal{N} factors uniquely through the quotient functor Q: \mathcal{D} \to \mathcal{D}/\mathcal{N}, so there exists an exact functor \tilde{F}: \mathcal{D}/\mathcal{N} \to \mathcal{D}' such that \tilde{F} \circ Q \cong F.

Properties of singularity category
If X is a regular scheme, then every bounded complex of coherent sheaves is perfect. Hence the singularity category is trivial.
Any coherent sheaf which has support away from the singular locus of X is perfect. Hence nontrivial coherent sheaves in D_{sg}(X) have support on the singular locus.
In particular, objects in D_{sg}(X) are isomorphic to a shift \mathcal{F}[k] of some coherent sheaf \mathcal{F}.

Landau–Ginzburg models
Kontsevich proposed a model for Landau–Ginzburg models which was worked out to the following definition: a Landau–Ginzburg model is a smooth variety X together with a flat morphism W: X \to \mathbb{A}^1. There are three associated categories which can be used to analyze the D-branes in a Landau–Ginzburg model using matrix factorizations from commutative algebra.

Associated categories
With this definition, there are three categories which can be associated to any point w_0 \in \mathbb{A}^1: a \mathbb{Z}/2-graded category DG_{w_0}(W), an exact category \mathrm{Pair}_{w_0}(W), and a triangulated category DB_{w_0}(W), each of which has as objects pairs

\bar{P} = (p_1: P_1 \to P_0,\; p_0: P_0 \to P_1),

where the compositions p_0 \circ p_1 and p_1 \circ p_0 are multiplication by W - w_0. There is also a shift functor [1] sending \bar{P} to the pair with the two components interchanged and the signs of the maps changed.

The difference between these categories is their definition of morphisms. The most general of these is DG_{w_0}(W), whose morphisms form the \mathbb{Z}/2-graded complex

\mathrm{Hom}(\bar{P}, \bar{Q}) = \bigoplus_{i,j} \mathrm{Hom}(P_i, Q_j),

where the grading is given by (i - j) \bmod 2 and the differential acts on degree-homogeneous elements f by

D f = q \circ f - (-1)^{\deg f} f \circ p.

In \mathrm{Pair}_{w_0}(W) the morphisms are the closed degree-zero morphisms in DG_{w_0}(W). Finally, DB_{w_0}(W) has the morphisms of \mathrm{Pair}_{w_0}(W) modulo null-homotopies. Furthermore, DB_{w_0}(W) can be endowed with a triangulated structure through a graded cone construction in DG_{w_0}(W): given a closed degree-zero morphism f: \bar{P} \to \bar{Q}, there is a mapping cone whose components combine those of \bar{P}[1] and \bar{Q} together with the maps p, q and f. Then, a diagram in DB_{w_0}(W) is a distinguished triangle if it is isomorphic to a cone coming from DG_{w_0}(W).

D-brane category
Using the construction of DB_{w_0}(W) we can define the category of D-branes of type B on X with superpotential W as the product category

DB(W) = \prod_{w_0 \in \mathbb{A}^1} DB_{w_0}(W).

This is related to the singularity category as follows. Given a superpotential W with isolated singularities only at w_0, denote by X_{w_0} = W^{-1}(w_0) its fiber. Then there is an exact equivalence of categories

DB_{w_0}(W) \cong D_{sg}(X_{w_0}),

given by a functor induced from the cokernel functor sending a pair \bar{P} to \mathrm{coker}(p_1). In particular, since X is regular, Bertini's theorem shows DB(W) is only a finite product of categories.

Computational tools
Knörrer periodicity
There is a Fourier-Mukai transform on the derived categories of two related varieties giving an equivalence of their singularity categories. This equivalence is called Knörrer periodicity.
This can be constructed as follows: given a flat morphism W: X \to \mathbb{A}^1 from a separated regular Noetherian scheme of finite Krull dimension, there is an associated scheme Y = X \times \mathbb{A}^2 and a morphism W': Y \to \mathbb{A}^1 with

W' = W + xy,

where x, y are the coordinates of the \mathbb{A}^2-factor. Consider the zero fibers X_0 = W^{-1}(0) and Y_0 = (W')^{-1}(0), together with the locus in Y_0 where the second coordinate vanishes; the induced injection into Y_0 and the projection onto X_0 make this locus an \mathbb{A}^1-bundle over X_0. The associated Fourier-Mukai transform induces an equivalence of categories

D_{sg}(X_0) \cong D_{sg}(Y_0),

called Knörrer periodicity. There is another form of this periodicity where W is replaced by the polynomial W + x^2 + y^2. These periodicity theorems are the main computational techniques, because they allow a reduction in the analysis of the singularity categories.

Computations
If we take a Landau–Ginzburg model whose superpotential W has an isolated singularity, then the only singular fiber of W is at the origin, and the D-brane category of the Landau–Ginzburg model is equivalent to the singularity category of that fiber. Over the corresponding algebra there are indecomposable objects whose morphisms can be completely understood: for suitable pairs of indecomposables the morphisms are the natural projections in one direction and multiplication by a power of the coordinate in the other, and every other morphism is a composition and linear combination of these morphisms. There are many other cases which can be explicitly computed, using the table of singularities found in Knörrer's original paper.

See also
Derived category
Triangulated category
Perfect complex
Semiorthogonal decomposition
Fourier–Mukai transform
Bridgeland stability condition
Homological mirror symmetry
Derived Categories notes - http://www.math.lsa.umich.edu/~idolga/derived9.pdf

References
Research articles
A noncommutative version of Beilinson's theorem
Derived Categories of Toric Varieties
Derived Categories of Toric Varieties II

Algebraic geometry
Noncommutative geometry
Derived noncommutative algebraic geometry
[ "Mathematics" ]
2,498
[ "Fields of abstract algebra", "Algebraic geometry" ]
73,139,878
https://en.wikipedia.org/wiki/Multicover%20bifiltration
The multicover bifiltration is a two-parameter sequence of nested topological spaces derived from the covering of a finite set in a metric space by growing metric balls. It is a multidimensional extension of the offset filtration that captures density information about the underlying data set by filtering the points of the offsets at each index according to how many balls cover each point. The multicover bifiltration has been an object of study within multidimensional persistent homology and topological data analysis. Definition Following the notation of Corbet et al. (2022), given a finite set $A \subset \mathbb{R}^n$, the multicover bifiltration on $A$ is a two-parameter filtration indexed by $\mathbb{R}_{\geq 0} \times \mathbb{N}^{\mathrm{op}}$ defined index-wise as $\mathrm{Cov}_{r,k} := \{ x \in \mathbb{R}^n \mid \lVert x - a \rVert \leq r \text{ for at least } k \text{ points } a \in A \}$, where $\mathbb{N}$ denotes the non-negative integers. Note that when $k = 1$ is fixed we recover the offset filtration. Properties The multicover bifiltration admits a topologically equivalent polytopal model of polynomial size, called the "rhomboid bifiltration." The rhomboid bifiltration is an extension of the rhomboid tiling introduced by Edelsbrunner and Osang in 2021 for computing the persistent homology of the multicover bifiltration along one axis of the indexing set. The rhomboid bifiltration on a set of points in a Euclidean space can be computed in polynomial time. The multicover bifiltration is also topologically equivalent to a multicover nerve construction due to Sheehy called the subdivision-Čech bifiltration, which considers the barycentric subdivision on the nerve of the offsets. In particular, the subdivision-Čech and multicover bifiltrations are weakly equivalent, and hence have isomorphic homology modules in all dimensions. However, the subdivision-Čech bifiltration has an exponential number of simplices in the size of the data set, and hence is not amenable to efficient direct computations. References Computational geometry Topology Geometry
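A minimal computational sketch of the index-wise definition (the point set, query point, and function name below are hypothetical, chosen only to illustrate the membership test):

```python
import numpy as np

def in_multicover(x, points, r, k):
    """Is x covered by at least k closed balls of radius r centred
    at the data points? points is an (m, d) array."""
    dists = np.linalg.norm(points - np.asarray(x), axis=1)
    return np.count_nonzero(dists <= r) >= k

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
print(in_multicover([0.5, 0.3], pts, r=0.7, k=2))   # True
```

Increasing $r$ enlarges the set while increasing $k$ shrinks it, which is exactly the two-parameter nesting structure described above.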
Multicover bifiltration
[ "Physics", "Mathematics" ]
397
[ "Computational mathematics", "Topology", "Space", "Computational geometry", "Geometry", "Spacetime" ]
73,140,291
https://en.wikipedia.org/wiki/The%20Ocean%20Frontier%20Institute
The Ocean Frontier Institute (OFI) is a non-profit research and higher education organization dedicated to ocean-based research and data. Established in 2016, the institute focuses its research on achieving net zero, protecting ocean biodiversity and sustaining ocean bioresources. OFI is based at Dalhousie University in the Ocean Sciences Building in Halifax, Nova Scotia, Canada. History OFI was established in 2016 by Dalhousie University with partnerships with Memorial University of Newfoundland and the University of Prince Edward Island. The institute also partners with international ocean research institutes, other Canadian universities, governments, Indigenous communities, and industry ranging from local small businesses to international corporations. In announcing the creation of OFI, Dalhousie University noted that it was set to become “one of the world’s most significant international ocean science collaborations.” The initial funding for the organization included a $93.7 million commitment from the Canadian government through the Canada First Research Excellence Fund (CFREF). At the time of the announcement, it was the largest research grant in the history of Dalhousie University. An additional $125 million in cash and in-kind contributions was also provided by provincial governments and partners, most notably a $25 million gift from business leader and philanthropist John Risley. The Institute administers the Canada First Research Excellence Fund, The Safe and Sustainable Development of the Ocean Frontier; Ocean School; and the North Atlantic Carbon Observatory (NACO); and hosts the Canadian project office of the Integrated Marine Biosphere Research (IMBeR) program. OFI's first CEO was Wendy Watson-Wright, who served in the role until December 2019. During this period, OFI saw more than sixteen ocean-focused research projects reviewed by internal and external experts, which included scientific analysis of the changing ocean ecosystems as well as studies to strengthen marine safety, ocean data and technology, and the fishing and aquaculture industries. A further six large-scale research projects were launched in 2020 focused on the North Atlantic Ocean Climate and Coastal Communities and the Ocean. In March 2018 OFI launched its first round of Seed Funding in partnership with Canada's Ocean Supercluster and Innovacorp, providing financial support to ideas with the potential for advancing research, commercial or social concepts relating to the ocean. The Seed Fund has supported over 100 ocean-related research projects ranging from studies on Non-Toxic Marine Anti-fouling Paint, to 4D Ocean Sensing Strategy, to collaboration efforts on the blending of Indigenous and Western knowledge. In 2018 OFI and Dalhousie University also invested over $2 million in the creation of DeepSense, a partnership between industry, academia and government focused on using ocean-related data and artificial intelligence to better support commercial enterprises. In January 2020, the CEO position was taken up by Dr. Anya Waite, who had previously served as the organization's Scientific Director. 
Throughout 2021 and 2022, OFI dramatically expanded its engagement with public and research groups, with a series of webinars to inform ocean and coastal governance and to introduce social sciences and humanities-led research, and with the introduction of an annual Carbon Workshop to discuss the ocean's changing ability to absorb carbon, in particular the importance of ‘Deep Blue Carbon.’ Training and Education The Ocean Graduate Excellence Network (OGEN) was launched by OFI in 2021, with current funding partners including the National Research Council of Canada, Mitacs, Graphite Innovation Technologies, and Fisheries and Oceans Canada. The program provides individualized training and research opportunities to graduate students. OFI's International Postdoctoral Fellowships offer opportunities for early-career PhDs to conduct collaborative research at Dalhousie University, with travel to one of OFI's partner institutions in Europe or the United States. OFI has supported fellowships with international partners such as WHOI, GEOMAR, AWI, and ISBlue. OFI's Visiting Fellowships program helps to develop ocean leaders by providing opportunities for early-career PhDs to conduct research at one of OFI's Canadian institutions or at one of OFI's eight international academic partner institutions. In spring of 2022 OFI sponsored undergraduate students from Dalhousie University in Halifax and Memorial University of Newfoundland on a 16-week expedition aboard the research sailing vessel Statsraad Lehmkuhl. The students participated as crew members as they sailed the Pacific Ocean, following an ocean sustainability course offered through Norway's University of Bergen. Conferences OFI hosts a broad open conference on a biennial basis, gathering experts, researchers, and leaders from around the world to discuss and debate topical issues in ocean research. 2018: First OFI Conference: The first OFI conference was held in St. John's, Newfoundland and attended by over 330 delegates from across the globe. The conference covered areas such as: How Science, Partnerships and Innovation Will Secure a Future for the Ocean; Our Changing Ocean; Identifying Ocean Solutions; and Industry Perspectives on the Importance of Ocean Research. 2022: Second OFI Conference: The second OFI Conference took place in Halifax, Nova Scotia with more than 200 delegates engaged in five main themes: Achieving Net Zero and Ocean Carbon, People and the Ocean, Imperative of Ocean Based Carbon Dioxide Removal (CDR), Food from the Ocean, and Innovation and Commercialisation. OFI and United Nations Conference of the Parties (COP) Beginning with COP26, OFI has been an active participant in the UN COP meetings, bringing attention to the role the ocean plays in sustainable development efforts and advocating for the importance of integrated ocean carbon observations. COP26 – At the 26th Conference of the Parties to the UNFCCC in 2021, an OFI delegation advocated for including ocean chemistry variables in the climate targets planned to be set at the Conference. COP27 – At the 27th COP Meeting in 2022, OFI saw its role expand as CEO Dr. Waite spoke at or moderated 10 events including: An event on Ocean Observations for Climate Change in partnership with the Partnership for Observation of the Global Ocean (POGO) and GOOS; A National Oceanography Centre event on Blue Carbon; The ocean's role in fighting climate change; and An Egyptian Space Agency event on The interplay of machine learning and earth sciences in assessing coral reefs and other marine habitats. 
Indigenous Engagement Like many Canadian institutions, OFI incorporates Local Traditional Knowledge, Place Based Knowledge and Traditional Ecological Knowledge as a central part of its research efforts. In collaboration with Indigenous (Métis and First Nations) groups and the OFI research community, the Indigenous Engagement Guide, an evolving document, assists the OFI community in identifying how research programs may impact Indigenous groups and provides guidance on respectful engagement and developing meaningful research relationships with Indigenous governments, communities, and organisations. Research Programs funded through OFI are required to meet the expectations set out in the Guide. References 2016 establishments Dalhousie University Oceanography
The Ocean Frontier Institute
[ "Physics", "Environmental_science" ]
1,377
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
73,147,509
https://en.wikipedia.org/wiki/Glide%20%28docking%29
Glide is molecular modeling software for docking small molecules into proteins and other biopolymers. It was developed by Schrödinger, Inc. References Further reading Molecular modelling software Computational chemistry software
Glide (docking)
[ "Chemistry" ]
42
[ "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Chemistry software", "Molecular modelling", "Computational chemistry", "Molecular physics stubs" ]
73,148,269
https://en.wikipedia.org/wiki/RDKit
RDKit is an open-source toolkit for cheminformatics. It was developed by Greg Landrum with numerous additional contributions from the RDKit open-source community. It has an application programming interface (API) for Python, Java, C++, and C#. References External links Python (programming language) scientific libraries Computational chemistry software
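As a brief illustration of the Python API (a minimal sketch using a handful of well-known entry points; ethanol is an arbitrary example molecule):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Parse a molecule from a SMILES string and compute a simple descriptor.
mol = Chem.MolFromSmiles('CCO')      # ethanol
print(Chem.MolToSmiles(mol))         # canonical SMILES: 'CCO'
print(Descriptors.MolWt(mol))        # molecular weight, ~46.07
```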
RDKit
[ "Chemistry" ]
70
[ "Computational chemistry", "Computational chemistry software", "Chemistry software" ]
73,153,690
https://en.wikipedia.org/wiki/Persistence%20module
A persistence module is a mathematical structure in persistent homology and topological data analysis that formally captures the persistence of topological features of an object across a range of scale parameters. A persistence module often consists of a collection of homology groups (or vector spaces if using field coefficients) corresponding to a filtration of topological spaces, and a collection of linear maps induced by the inclusions of the filtration. The concept of a persistence module was first introduced in 2005 as an application of graded modules over polynomial rings, thus importing well-developed algebraic ideas from classical commutative algebra theory to the setting of persistent homology. Since then, persistence modules have been one of the primary algebraic structures studied in the field of applied topology. Definition Single Parameter Persistence Modules Let $T$ be a totally ordered set and let $K$ be a field. The set $T$ is sometimes called the indexing set. Then a single-parameter persistence module is a functor $M \colon T \to \mathrm{Vect}_K$ from the poset category of $T$ to the category of vector spaces over $K$ and linear maps. A single-parameter persistence module indexed by a discrete poset such as the integers can be represented intuitively as a diagram of spaces: $\cdots \to M_{-1} \to M_0 \to M_1 \to M_2 \to \cdots$ To emphasize the indexing set being used, a persistence module indexed by $T$ is sometimes called a $T$-persistence module, or simply a $T$-module. Common choices of indexing sets include $\mathbb{Z}$, $\mathbb{N}$, $\mathbb{R}$, $\mathbb{R}_{\geq 0}$, etc. One can alternatively use a set-theoretic definition of a persistence module that is equivalent to the categorical viewpoint: A persistence module is a pair $(\{M_t\}_{t \in T}, \{\varphi_{s,t}\}_{s \leq t})$ where $\{M_t\}$ is a collection of $K$-vector spaces and $\{\varphi_{s,t}\}$ is a collection of linear maps where $\varphi_{s,t} \colon M_s \to M_t$ for each $s \leq t$, such that $\varphi_{r,t} = \varphi_{s,t} \circ \varphi_{r,s}$ for any $r \leq s \leq t$ (i.e., all the maps commute). Multiparameter Persistence Modules Let $P$ be a product of totally ordered sets, i.e., $P = T_1 \times \cdots \times T_n$ for some totally ordered sets $T_1, \ldots, T_n$. Then by endowing $P$ with the product partial order given by $(s_1, \ldots, s_n) \leq (t_1, \ldots, t_n)$ only if $s_i \leq t_i$ for all $i$, we can define a multiparameter persistence module indexed by $P$ as a functor $M \colon P \to \mathrm{Vect}_K$. This is a generalization of single-parameter persistence modules, and in particular, this agrees with the single-parameter definition when $n = 1$. In this case, a $P$-persistence module is referred to as an $n$-dimensional or $n$-parameter persistence module, or simply a multiparameter or multidimensional module if the number of parameters is already clear from context. Multidimensional persistence modules were first introduced in 2009 by Carlsson and Zomorodian. Since then, there has been a significant amount of research into the theory and practice of working with multidimensional modules, since they provide more structure for studying the shape of data. Namely, multiparameter modules can have greater density sensitivity and robustness to outliers than single-parameter modules, making them a potentially useful tool for data analysis. One downside of multiparameter persistence is its inherent complexity. This makes performing computations related to multiparameter persistence modules difficult. In the worst case, the computational complexity of multidimensional persistent homology is exponential. The most common way to measure the similarity of two multiparameter persistence modules is using the interleaving distance, which is an extension of the bottleneck distance. Examples Homology Modules When using homology with coefficients in a field, a homology group has the structure of a vector space. 
Therefore, given a filtration of spaces $F = \{F_t\}_{t \in T}$, by applying the homology functor at each index we obtain a persistence module $H_i(F)$ for each $i = 0, 1, 2, \ldots$ called the ($i$th-dimensional) homology module of $F$. The vector spaces of the homology module can be defined index-wise as $H_i(F)_t = H_i(F_t)$ for all $t \in T$, and the linear maps are induced by the inclusion maps of $F$. Homology modules are the most ubiquitous examples of persistence modules, as they encode information about the number and scale of topological features of an object (usually derived from building a filtration on a point cloud) in a purely algebraic structure, thus making understanding the shape of the data amenable to algebraic techniques, imported from well-developed areas of mathematics such as commutative algebra and representation theory. Interval Modules A primary concern in the study of persistence modules is whether modules can be decomposed into "simpler pieces", roughly speaking. In particular, it is algebraically and computationally convenient if a persistence module can be expressed as a direct sum of smaller modules known as interval modules. Let $J$ be a nonempty subset of a poset $P$. Then $J$ is an interval in $P$ if For every $s, t \in J$ and $r \in P$, if $s \leq r \leq t$ then $r \in J$ For every $s, t \in J$ there is a sequence of elements $r_0, r_1, \ldots, r_m \in J$ such that $r_0 = s$, $r_m = t$, and $r_i, r_{i+1}$ are comparable for all $i$. Now given an interval $J$ we can define a persistence module $\mathbb{I}^J$ index-wise as follows: $\mathbb{I}^J_t = K$ if $t \in J$, and $0$ otherwise; $\varphi_{s,t} = \mathrm{id}_K$ if $s, t \in J$, and $0$ otherwise. The module $\mathbb{I}^J$ is called an interval module. Free Modules Let $a \in P$. Then we can define a persistence module $Q(a)$ with respect to $a$ where the spaces are given by $Q(a)_t = K$ if $t \geq a$ and $0$ otherwise, and the maps defined via $\varphi_{s,t} = \mathrm{id}_K$ whenever $a \leq s \leq t$ and $0$ otherwise. Then $Q(a)$ is known as a free (persistence) module. One can also define a free module in terms of decomposition into interval modules. For each $a \in P$ define the interval $\langle a \rangle := \{ t \in P \mid t \geq a \}$, sometimes called a "free interval." Then a persistence module $F$ is a free module if there exists a multiset of indices $\mathcal{A}$ such that $F \cong \bigoplus_{a \in \mathcal{A}} \mathbb{I}^{\langle a \rangle}$. In other words, a module is a free module if it can be decomposed as a direct sum of free interval modules. Properties Finite Type Conditions A persistence module $M$ indexed over $\mathbb{N}$ is said to be of finite type if the following conditions hold: Each vector space $M_t$ is finite-dimensional. There exists an integer $m$ such that the map $\varphi_{t, t+1}$ is an isomorphism for all $t \geq m$. If $M$ satisfies the first condition, then $M$ is commonly said to be pointwise finite-dimensional (p.f.d.). The notion of pointwise finite-dimensionality immediately extends to arbitrary indexing sets. The definition of finite type can also be adapted to continuous indexing sets. Namely, a module indexed over $\mathbb{R}$ is of finite type if it is p.f.d., and contains a finite number of unique vector spaces. Formally speaking, this requires that for all but a finite number of points $t \in \mathbb{R}$ there is a neighborhood $U$ of $t$ such that $M_s \cong M_t$ for all $s \in U$, and also that the module is eventually constant: there is some $t_0$ such that $M_s \cong M_t$ for all $s, t \geq t_0$. A module satisfying only the former property is sometimes labeled essentially discrete, whereas a module satisfying both properties is known as essentially finite. An $\mathbb{R}$-persistence module is said to be semicontinuous if for any $r \in \mathbb{R}$ and any $s \leq r$ sufficiently close to $r$, the map $\varphi_{s,r} \colon M_s \to M_r$ is an isomorphism. Note that this condition is redundant if the other finite type conditions above are satisfied, so it is not typically included in the definition, but is relevant in certain circumstances. Structure Theorem One of the primary goals in the study of persistence modules is to classify modules according to their decomposability into interval modules. A persistence module that admits a decomposition as a direct sum of interval modules is often simply called "interval decomposable." One of the primary results in this direction is that any p.f.d. 
persistence module indexed over a totally ordered set is interval decomposable. This is sometimes referred to as the "structure theorem for persistence modules." The case when $T$ is finite is a straightforward application of the structure theorem for finitely generated modules over a principal ideal domain. For modules indexed over $\mathbb{Z}$, the first known proof of the structure theorem is due to Webb. The theorem was extended to the case of $\mathbb{R}$ (or any totally ordered set containing a countable subset that is dense in it with the order topology) by Crawley-Boevey in 2015. The generalized version of the structure theorem, i.e., for p.f.d. modules indexed over arbitrary totally ordered sets, was established by Botnan and Crawley-Boevey in 2019. References Commutative algebra Representation theory Computational topology Homological algebra Data analysis
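A minimal computational sketch of the set-theoretic definition above (the dimensions and matrices below are hypothetical, chosen only to illustrate the structure): a pointwise finite-dimensional $\mathbb{Z}$-indexed persistence module over $K = \mathbb{R}$ can be stored as a list of matrices, with the maps between non-adjacent indices recovered by composition.

```python
import numpy as np

# Structure maps phi_{t,t+1} of a toy persistence module
# M_0 -> M_1 -> M_2 with dimensions 1, 2, 1 (hypothetical example).
phi_01 = np.array([[1.0],
                   [0.0]])        # M_0 -> M_1
phi_12 = np.array([[1.0, 1.0]])   # M_1 -> M_2

# Functoriality forces the composite: phi_{0,2} = phi_{1,2} o phi_{0,1}.
phi_02 = phi_12 @ phi_01

# rank(phi_{s,t}) counts the features alive at s that still survive
# at t (a persistent Betti number).
print(np.linalg.matrix_rank(phi_02))  # -> 1
```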
Persistence module
[ "Mathematics" ]
1,534
[ "Computational topology", "Mathematical structures", "Computational mathematics", "Fields of abstract algebra", "Topology", "Category theory", "Representation theory", "Commutative algebra", "Homological algebra" ]
67,396,618
https://en.wikipedia.org/wiki/Patera%20Building
The Patera Building prototype, a significant example of British high-tech architecture, was manufactured in Stoke-on-Trent in 1982 by Patera Products Ltd. In 1980, Michael Hopkins architects and Anthony Hunt Associates engineers were instructed by LIH (Properties) Ltd to design a relocatable building 216 square metres in size. Longton Industrial Holdings Plc (LIH), an industrial group based in Stoke-on-Trent, Staffordshire, commissioned designs for an “off the peg” relocatable industrial building made from steel. They sought to expand their interests in steel fabrication, intending to sell the buildings as a product. The Patera Products Ltd factory where the Patera buildings were made and where the first two were erected was in Victoria Road, Fenton, Stoke-on-Trent, Staffordshire. Clarification This article traces the history of the prototype Patera Building completed in 1982 under the ownership and direction of Longton Industrial Holdings Plc through their wholly-owned subsidiary companies LIH (Properties) Ltd, and Patera Products Ltd. The article does not cover 'Patera Building System' (a later development of the Patera concept using several of the fabrication techniques such as the innovative panels, but with alternative traditional structural frames). The article does not cover the period during which the Patera concept was promoted under a trading name 'Patera Products' ('Patera Products' was an acquired name, unrelated to the original company Patera Products Ltd), nor during the period in which the Patera concept was promoted under the trading name of Patera Engineering Ltd (established 1988), also an acquired name. Patera Engineering Ltd did not manufacture any Patera Buildings. History The first prototype Patera Building was manufactured by Patera Products Ltd in 1982 by a workforce of experienced hands-on engineers and craftsmen drawn from industries in the area then in decline, such as coal-mining. As almost every component was designed anew for the prototype, a high degree of accuracy was required, as these prototype components formed a standard model to which future components were manufactured. The idea of the Patera project was to supply a factory-finished industrial workshop. The buildings were standardised, 18m long by 12m wide, with an internal height of 3.85m throughout. They were fully finished in the factory ready for bolting together at the desired location. Three men with a forklift truck could erect one in a matter of days. It was seen in the context of vehicle or boatbuilding technologies in terms of its lightweight construction. Each building needed a reinforced concrete raft slab as a base to which the structure was fixed using specially designed steel castings. All the buildings' services — power, telephone cabling, water, etc. — were distributed within the depth of the building envelope. Alongside panels struck by automotive-industry hydraulic presses, the constituent parts of the Patera Building structure were pin-jointed for ease of handling and assembly. At the centres of the spans of the frames were unique 'tension-only' links — special fittings able to respond to varying structural loads. Under normal conditions the structure acted as a three-pin arch. In other conditions, such as wind up-lift, it acted as a rigid frame. This innovation meant that very slender lightweight steel tubes could be used for the portal frame trusses. 
The 'Patera Building Stoke-on-Trent for Longton Industrial Holdings (Properties) Ltd' received a commendation in the British Constructional Steelwork Association's Structural Steel Design Awards 1983, sponsored by the British Steel Corporation and the British Constructional Steelwork Association Ltd. The Judges' Comments: 'The creative thought that lies behind this design breaks new ground in the excellence of its parts and their skilful integration in the making of an architectural whole. It is a delight to see such innovation and care being applied to the production of precisely fabricated, economical, small buildings.' Use of reclaimed land The Berry Hill area of Stoke-on-Trent had a history of coal mining and brick-making. The Patera Building prototype was built on drained and reclaimed land there, circumstances that informed the design - requiring lightness of weight and raft foundations. In the 1960s visionary architect Cedric Price had proposed a Potteries Thinkbelt design which sought to make use of decommissioned railway routes following the Beeching Cuts and the scarred landscape of coal mining to provide linked learning centres for a technical industry-based curriculum. The first design studies for the Patera project in 1981 were for a managed industrial estate consisting of thirty or so standard Patera Buildings sited at the former Mossfield colliery in Longton, Stoke-on-Trent. Structural innovation Anthony Hunt Associates devised an all-steel lightweight structure, a hybrid three-pin arch. Made in easily transportable component form, once assembled it offered significant advantages: The elimination of cross-bracing elements to the roof and wall trusses The use of panel assemblies as a diaphragm to prevent buckling of the lower (innermost) truss boom during compression Introduction of a 'tension-only' link at midspan to prevent outer roof truss booms from buckling under compression Use of line bracing and secondary high tensile steel cross-bracing at the knee-joint position to prevent 'flipping' of the structure under certain wind-loading Introduction of steel castings for ease of fabrication of pin joint connections Development of distinctive cast steel base plates to allow structural bolted connection to a flat concrete slab base Wind loading analysis which allowed use anywhere within the UK mainland and climates where a similar pattern of wind speeds might prevail. Innovation in manufacturing techniques With steel panels pressed and factory finished rather than being cold-rolled, and with all components accurately sized and with their fixings prepositioned, the following advantages ensued: All components sized to fit efficiently within a standard 40 ft shipping container Ease of site assembly Interchangeable components within a single building or between others allowed flexibility of layout and use Standard buildings made available ex stock Fully finished externally and internally Services such as power, water and communications routed within the building shell Commercial implementation The Patera Building was launched in November 1981 at "Interbuild", a building exhibition at the National Exhibition Centre (NEC) Birmingham, with the wording: Patera Building A new concept in building design to provide efficient working units which combine good looks with engineering quality at sensible prices. The first two buildings were erected at the site adjacent to the Patera Products Ltd factory in Stoke-on-Trent, where they stayed in place for some two years. 
They were used as demonstration buildings, part of the marketing of the project. Sites where other buildings were erected include Barrow-in-Furness, Canary Wharf and the Royal Docks in London. LIH Plc were proud to have hosted a Royal visit by the Duke of Gloucester, an architect himself, during which he was shown around the workshops and the buildings. 1984-85: After the manufacturing company Patera Products Ltd was closed down, the two stock buildings, that is the prototype and another similarly sized building, were each extended from five bays to six and moved to London's Canary Wharf to be used as BT exhibition space provided by the London Docklands Development Corporation. Neighbours were the now demolished Limehouse TV Studios and the giant dishes of a satellite receiving station established for improved business communication. The site was on the late 1980s route of the London Marathon, between the fifteenth and sixteenth mile marks. In 1989, to make way for the much heralded high-rise commercial development planned for Canary Wharf, Limehouse TV Studios was compulsorily purchased and demolished, and one of the two Patera Buildings (the original prototype) was moved to its third location on Albert Island. It was until recently used as a workshop on a boat repair yard and marina by Gallions Point Marina Ltd.; the company faced eviction from the site in October 2018 to make way for development of the Royal Docks Enterprise Zone. The other, the second-ever standard Patera Building, was moved from its Canary Wharf site in c. 1989 to become a part of the LDDC temporary offices adjacent to the Docklands Light Railway close to Royal Victoria Dock. Future Through a multi-agency initiative led by the Twentieth Century Society, application was made to Historic England for the building to be listed. If the application had been successful, the Docklands Patera Building would have been carefully stabilised, conserved and moved once more to make way for development in the London Royal Docks Enterprise Zone. Interested parties associated with the listing process have accepted that the Docklands Patera Building is in fact the original 1982 prototype manufactured and first assembled in Stoke-on-Trent. Dismantlement of the building in its Albert Island location started in Autumn 2021, but then for over a year the building was left in a semi-dismantled state pending the decision, made in April 2022, not to list the building. Further, requests made to DCMS for a review of the Historic England decision were denied in October 2022, leaving the decision (not to list) to stand. References High-tech architecture Prefabricated buildings 1980s architecture
Patera Building
[ "Engineering" ]
1,824
[ "Building engineering", "Prefabricated buildings" ]
68,763,182
https://en.wikipedia.org/wiki/Pendell%C3%B6sung
The Pendellösung effect, or Pendellösung phenomenon, is a beating in the intensity of electromagnetic waves travelling within a crystal lattice during diffraction. It was predicted by P. P. Ewald in 1916 and first observed in electron diffraction of magnesium oxide in 1942 by Robert D. Heidenreich, and in X-ray diffraction by Norio Kato and Andrew Richard Lang in 1959. At the exit surface of a photonic crystal (PhC), the intensity of the diffracted wave can be periodically modulated, showing a maximum in the "positive" (forward diffracted) or in the "negative" (diffracted) direction, depending on the crystal slab thickness. The Pendellösung effect in photonic crystals can be understood as a beating phenomenon due to the phase modulation between coexisting plane wave components propagating in the same direction. This thickness dependence is a direct result of the so-called Pendellösung phenomenon, consisting of the periodic exchange inside the crystal of the energy between direct and diffracted beams. The Pendellösung interference effect was predicted by dynamical diffraction theory and also by the analogous theories developed for visible light. References Condensed matter physics Metamaterials Photonics
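As a quantitative illustration, in the standard two-beam approximation of dynamical diffraction (a textbook result quoted here at zero excitation error; the symbols are conventional and not taken from this article), the transmitted and diffracted intensities exchange periodically with crystal thickness $t$:

$$I_g(t) = \sin^2\!\left(\frac{\pi t}{\xi_g}\right), \qquad I_0(t) = 1 - I_g(t),$$

where $\xi_g$ is the extinction distance of the operating reflection $g$. The period of the Pendellösung beating is therefore set by $\xi_g$, which is why the maximum appears in either the forward-diffracted or the diffracted direction depending on the slab thickness.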
Pendellösung
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
265
[ "Materials science stubs", "Metamaterials", "Phases of matter", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
68,767,084
https://en.wikipedia.org/wiki/Journal%20of%20Posthuman%20Studies
The Journal of Posthuman Studies is a biannual peer-reviewed academic journal published by the Penn State University Press and hosted by the Ewha Institute for the Humanities. Established in 2017, the journal seeks to address questions such as what it is to be human in this age of technological, scientific, cultural, and social evolution. Drawing on theories from critical posthumanism and transhumanism, the journal encourages constructive and critical dialogue through research articles, discussion papers, and forums. References External links Biannual journals English-language journals Academic journals established in 2017 Penn State University Press academic journals Transhumanism
Journal of Posthuman Studies
[ "Technology", "Engineering", "Biology" ]
129
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
76,055,341
https://en.wikipedia.org/wiki/Chandrasekhar%E2%80%93Friedman%E2%80%93Schutz%20instability
Chandrasekhar–Friedman–Schutz instability, or shortly CFS instability, refers to an instability that can occur in rapidly rotating stars, arising in cases where the gravitational radiation reaction is unable to cope with the change in angular momentum associated with the perturbations. The instability was discovered by Subrahmanyan Chandrasekhar in 1970, and later a simple intuitive explanation for the instability was provided by John L. Friedman and Bernard F. Schutz. Specifically, the instability arises when a non-axisymmetric perturbation mode that appears co-rotating in the inertial frame (from which gravitational waves are observed) is in fact counter-rotating with respect to the rotating star. Roberts–Stewartson instability and CFS instability Although it was anticipated long ago (1883) by William Thomson (later Lord Kelvin) and Peter Guthrie Tait in their book Treatise on Natural Philosophy that a small presence of viscosity in a rotating, self-gravitating, otherwise ideal fluid mass would cause it to lose its stability, this was shown to be true only much later, by Paul H. Roberts and Keith Stewartson in 1963. Similar to how energy dissipation by viscosity leads to loss of stability, Chandrasekhar showed that dissipation by the gravitational radiation reaction would also lead to a loss of stability, although such an instability is unprecedented in a non-rotating star. An instability that arises only when there is dissipation, but disappears in the absence of dissipation, is referred to as a secular instability. Both the Roberts–Stewartson instability and the CFS instability are secular instabilities, although they do not both correspond to the same modes, in the following sense: In the absence of radiation reaction and viscosity, the Maclaurin spheroid (a model for a rotating, self-gravitating body) becomes marginally or neutrally stable when its eccentricity reaches a critical value, with two possible neutral modes, but it does not become unstable after this bifurcation. It is only in the presence of dissipation that the Maclaurin spheroid becomes unstable when its eccentricity exceeds the bifurcation value. The Roberts–Stewartson instability stems from one of the neutral modes, whereas the CFS instability stems from the other neutral mode. References Astrophysics Fluid dynamics
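In one common convention (standard notation from the literature, not taken from this article), the criterion can be stated compactly. For a perturbation mode varying as $e^{i(m\varphi - \omega t)}$ on a star rotating with angular velocity $\Omega$, the CFS-unstable regime is

$$0 < \frac{\omega}{m} < \Omega \quad\Longleftrightarrow\quad \omega\,(\omega - m\Omega) < 0,$$

i.e. the mode pattern is prograde as seen by a distant inertial observer but retrograde in the frame co-rotating with the star. Such a mode has negative angular momentum relative to the star while radiating positive angular momentum to infinity as gravitational waves, so the radiation reaction drives the perturbation amplitude to grow rather than decay.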
Chandrasekhar–Friedman–Schutz instability
[ "Physics", "Chemistry", "Astronomy", "Engineering" ]
480
[ "Chemical engineering", "Astrophysics", "Piping", "Astronomical sub-disciplines", "Fluid dynamics" ]
76,067,366
https://en.wikipedia.org/wiki/Fiveling
A fiveling, also known as a decahedral nanoparticle, a multiply-twinned particle (MTP), a pentagonal nanoparticle, a pentatwin, or a five-fold twin is a type of twinned crystal that can exist at sizes ranging from nanometers to millimetres. It contains five different single crystals arranged around a common axis. In most cases each unit has a face centered cubic (fcc) arrangement of the atoms, although they are also known for other types of crystal structure. They nucleate at quite small sizes in the nanometer range, but can be grown much larger. They have been found in mineral crystals excavated from mines such as pentagonite or native gold from Ukraine, in rods of metals grown via electrochemical processes and in nanoparticles produced by the condensation of metals either onto substrates or in inert gases. They have been investigated for their potential uses in areas such as improving the efficiency of solar cells or heterogeneous catalysis for more efficient production of chemicals. Information about them is distributed across a diverse range of scientific disciplines, mainly chemistry, materials science, mineralogy, nanomaterials and physics. Because many different names have been used, sometimes the information in the different disciplines or within any one discipline is fragmented and overlapping. At sizes from the nanometer range up to millimetres, fivelings of fcc metals often have a combination of {111} and {100} facets, a low energy shape called a Marks decahedron. Relative to a single crystal, at small sizes a fiveling can be a lower energy structure due to having more low energy surface facets. Balancing this there is an energy cost due to elastic strains to close an angular gap (disclination), which makes them higher in energy at larger sizes. They can be the most stable structure in some intermediate sizes, but they can be one among many in a population of different structures due to a combination of coexisting nanoparticles and kinetic growth factors. The temperature, gas environment and chemisorption can play an important role in both their thermodynamic stability and growth. While they are often symmetric, they can also be asymmetric with the disclination not in the center of the particle. History Dating back to the nineteenth century there are reports of these particles by authors such as Jacques-Louis Bournon in 1813 for marcasite, and Gustav Rose in 1831 for gold. In mineralogy and the crystal twinning literature they are referred to as a type of cyclic twin where a number of identical single crystal units are arranged in a ring-like pattern where they all join at a common point or line. The name comes from them having five members (single crystals). Fivelings have also been described as a type of macle twinning. The older literature was mainly observational, with information on many materials documented by Victor Mordechai Goldschmidt in his Atlas der Kristallformen. Drawings are available showing their presence in marcasite, gold, silver, copper and diamond. New mineral forms with a fiveling structure continue to be found, for instance pentagonite, whose structure was first decoded in 1973, is named because it is often found with the five-fold twinning. Most modern analysis started with the observation of these particles by Shozo Ino and Shiro Ogawa in 1966-67, and independently but slightly later (which they acknowledged) in work by John Allpress and John Veysey Sanders. 
In both cases these were for vacuum deposition of metal onto substrates in very clean (ultra-high vacuum) conditions, where nanoparticle islands of size 10-50 nm were formed during thin film growth. Using transmission electron microscopy and diffraction these authors demonstrated the presence of the five single crystal units in the particles, and also the twin relationships. They also observed single crystals and a related type of icosahedral nanoparticle. They called the five-fold and icosahedral crystals multiply twinned particles (MTPs). In the early work near-perfect decahedron (pentagonal bipyramid) and icosahedron shapes were formed, so they were called decahedral MTPs or icosahedral MTPs, the names connecting to the decahedral ($D_{5h}$) and icosahedral ($I_h$) point group symmetries. In parallel, and apparently independently, there was work on larger metal whiskers (nanowires) which sometimes showed a very similar five-fold structure, an occurrence reported in 1877 by Gerhard vom Rath. There was fairly extensive analysis following this, particularly for the nanoparticles, both of their internal structure by some of the first electron microscopes that could image at the atomic scale, and by various continuum or atomic models as cited later. Following this early work there was a large effort, mainly in Japan, to understand what were then called "fine particles", but would now be called nanoparticles. By heating up different elements so atoms evaporated and were then condensed in an inert argon atmosphere, fine particles of almost all the elemental solids were made and then analyzed using electron microscopes. The decahedral particles were found for all face centered cubic materials and a few others, often together with other shapes. While there was some continuing work over the following decades, it was with the National Nanotechnology Initiative that substantial interest was reignited. At the same time terms such as pentagonal nanoparticle, pentatwin, or five-fold twin became common in the literature, together with the earlier names. A large number of different methods have now been published for fabricating fivelings, sometimes with a high yield but often as part of a larger population of different shapes. These range from colloidal solution methods to different deposition approaches. It is documented that fivelings occur frequently for diamond, gold and silver, sometimes for copper or palladium and less often for some of the other face-centered cubic (fcc) metals such as nickel. There are also cases such as pentagonite where the crystal structure allows for five-fold twinning with minimal to no elastic strain (see later). There is work where they have been observed in colloidal crystals consisting of ordered arrays of nanoparticles, and single crystals composed of individual decahedral nanoparticles. There has been extensive modeling by many different approaches such as embedded atom, many body, molecular dynamics, tight binding approaches, and density functional theory methods as discussed by Francesca Baletto and Riccardo Ferrando and also discussed for energy landscapes later. Disclination strain These particles consist of five different (single crystal) units which are joined together by twin boundaries. The simplest form shown in the figure has five tetrahedral crystals which most commonly have a face centered cubic structure, but there are other possibilities such as diamond cubic and a few others as well as more complex shapes. 
The angle between two twin planes is approximately 70.5 degrees in fcc (the angle between adjacent {111} planes, $\arccos(1/3) \approx 70.53$ degrees), so five of these sum to about 352.6 degrees rather than 360 degrees, leaving an angular gap of roughly 7.4 degrees. At small sizes this gap is closed by an elastic deformation, which Roland de Wit pointed out could be described as a wedge disclination, a type of defect first discussed by Vito Volterra in 1907. With a disclination the strains to close the gap vary radially and are distributed throughout the particle. With other structures the angle can be different; marcasite has a twin angle of 74.6 degrees, so instead of closing a missing wedge, one of angle 13 degrees has to be opened, which would be termed a negative disclination of 13 degrees. It has been pointed out by Chao Liang and Yi Yu that when intermetallics are included there is a range of different angles, some similar to fcc where there is a deficiency (positive disclination), others such as AuCu where there is an overlap (negative disclination) similar to marcasite, while pentagonite has probably the smallest overlap at 3.5 degrees. Early experimental high-resolution transmission electron microscopy data supported the idea of a distributed disclination strain field in the nanoparticles, as did dark field and other imaging modes in electron microscopes. In larger particles dislocations have been detected to relieve some of the strain. The disclination deformation requires an energy which scales with the particle volume, so dislocations or grain boundaries are lower in energy for large sizes. More recently there has been detailed analysis of the atomic positions first by Craig Johnson et al, followed up by a number of other authors, providing more information on the strains and showing how they are distributed in the particles. While the classic disclination strain field is a reasonable first approximation model, there are differences when more complete elastic models are used such as finite element methods; in particular, as pointed out by Johnson et al, anisotropic elasticity needs to be used. One further complication is that the strain field is three dimensional, and more complex approaches are needed to measure the full details as detailed by Bart Goris et al, who also mention issues with strain from the support film. In addition, as pointed out by Srikanth Patala, Monica Olvera de la Cruz and Marks and shown in the figure, the von Mises stresses are different for (kinetic growth) pentagonal bipyramids versus the minimum energy shape. As of 2024 the strains are consistent with finite element calculations and a disclination strain field, with the possible addition of a shear component at the twin boundaries to accommodate some of the strains. An alternative to the disclination strain model, which was proposed by B G Bagley in 1965 for whiskers, is that there is a change in the atomic structure away from face-centered cubic: the hypothesis is that a tetragonal crystal structure is lower in energy than fcc, and that this lower-energy atomic structure leads to the decahedral particles. This view was expanded upon by Cary Y. Yang and can also be found in some of the early work of Miguel José Yacamán. There have been measurements of the average structure using X-ray diffraction which it has been argued support this view. However, these x-ray measurements only see the average, which necessarily shows a tetragonal arrangement, and there is extensive evidence for inhomogeneous deformations dating back to the early work of Allpress and Sanders, Tsutomu Komoda, Marks and David J. 
Smith and more recently by high resolution imaging of details of the atomic structure. As mentioned above, as of 2024 experimental imaging supports a disclination model with anisotropic elasticity. Three-dimensional shape The three-dimensional shape depends upon how the fivelings are formed, including the environment such as gas pressure and temperature. In the very early work only pentagonal bipyramids were reported. In 1970 Ino tried to model the energetics, but found that these bipyramids were higher in energy than single crystals with a Wulff construction shape. He found a lower energy form where he added {100} facets, what is now commonly called the Ino decahedron. The surface energy of this form and a related icosahedral twin scale as the two-thirds power of the volume, so they can be lower in energy than a single crystal as discussed further below. However, while Ino was able to explain the icosahedral particles, he was not able to explain the decahedral ones. Later Laurence D. Marks proposed a model using both experimental data and a theoretical analysis, which is based upon a modified Wulff construction which includes more surface facets, including Ino's {100} as well as re-entrant {111} surfaces at the twin boundaries with the possibility of others such as {110}, while retaining the decahedral point group symmetry. This approach also includes the effect of gas and other environmental factors via how they change the surface energy of different facets. By combining this model with de Wit's elasticity, Archibald Howie and Marks were able to rationalize the stability of the decahedral particles. Other work soon confirmed the shape reported by Marks for annealed particles. This was further confirmed in detailed atomistic calculations a few years later by Charles Cleveland and Uzi Landman who coined the term Marks decahedra for these shapes, this name now being widely used. The minimum energy or thermodynamic shape for these particles depends upon the relative surface energies of different facets, similar to a single crystal Wulff shape; they are formed by combining segments of a conventional Wulff construction with two additional internal facets to represent the twin boundaries. An overview of codes to calculate these shapes was published in 2021 by Christina Boukouvala et al. Considering just {111} and {100} facets, the shape depends on the ratio of the {100} and {111} surface energies: the Ino decahedron occurs when the surface energy of the {100} facets is comparatively small; common is the Marks decahedron with {100} facets and a re-entrant surface at the twin boundaries, which occurs for intermediate values of the ratio; for large values of the {100} surface energy there is no {100} faceting, and the particles have been called nanostars; for very low values the equilibrium shape is a long rod along the common five-fold axis. The photograph of a 0.5 cm gold fiveling from Miass is a Marks decahedron, while the sketch of Rose corresponds to a different value of the surface-energy ratio. The 75 atom cluster shown above corresponds to the same shape for a small number of atoms. Experimentally, in fcc crystals fivelings with only {111} and {100} facets are common, but many other facets can be present in the Wulff construction leading to more rounded shapes, for instance {113} facets for silicon. It is known that the surface can reconstruct to a different atomic arrangement in the outermost atomic plane, for instance a dimer reconstruction for the {100} facets of silicon particles, or a hexagonal overlayer on the {100} facets of gold decahedra. What shape is present depends not just on the surface energy of the different facets, but also upon how the particles grow. 
The thermodynamic shape is determined by the Wulff construction, which considers the energy of each possible surface facet and yields the lowest energy shape. The original Marks decahedron was based upon a form of Wulff construction that takes into account the twin boundaries. There is a related kinetic Wulff construction where the growth rate of different surfaces is used instead of the energies. This type of growth matters when the formation of a new island on a flat facet limits the growth rate. If the {100} surfaces of Ino grow faster, then they will not appear in the final shape, and similarly for the re-entrant surfaces at the twin boundaries; this leads to the pentagonal bipyramids often observed. Alternatively, if the {111} surfaces grow fast and {100} slow, the kinetic shape will be a long rod along the common five-fold axis as shown in the figure. Another different set of shapes can occur when diffusion of atoms to the particles dominates, a growth regime called diffusion controlled growth. In such cases surface curvature can play a major role, for instance leading to spikes originating at the sharp corners of a pentagonal bipyramid, sometimes leading to pointy stars, as shown in the figure. Energy versus size The most common approach to understand the formation of these particles, first used by Ino in 1969, is to look at the energy as a function of size comparing icosahedral twins, decahedral nanoparticles and single crystals. The total energy for each type of particle can be written schematically as the sum of three terms: $E_T(V) = E_s(V) + E_d(V) + E_c(V)$ for a volume $V$, where $E_s$ is the surface energy (scaling as the two-thirds power of the volume), $E_d$ is the disclination strain energy to close the gap (or overlap for marcasite and others, scaling with the volume), and $E_c$ is a coupling term for the effect of the strain on the surface energy via the surface stress, which can be a significant contribution. The sum of these three terms is compared to the total surface energy of a single crystal (which has no strain), and to similar terms for an icosahedral particle. Because the decahedral particles have a lower total surface energy than single crystals due (approximately, in fcc) to more low energy {111} surfaces, they are lower in total energy for an intermediate size regime, with the icosahedral particles more stable at very small sizes. (The icosahedral particles have even more {111} surfaces, but also more strain.) At large sizes the strain energy can become very large, so it is energetically favorable to have dislocations and/or a grain boundary instead of a distributed strain. The very large mineral samples are almost certainly trapped in metastable higher energy configurations. There is no general consensus on the exact sizes when there is a transition in which type of particle is lowest in energy, as these vary with material and also the environment such as gas and temperature; the coupling surface stress term and also the surface energies of the facets are very sensitive to these. In addition, as first described by Michael Hoare and P Pal and R. Stephen Berry and analyzed for these particles by Pulickel Ajayan and Marks as well as discussed by others such as Amanda Barnard, David J. Wales, Kristen Fichthorn and Baletto and Ferrando, at very small sizes there will be a statistical population of different structures, so many different ones will coexist. In many cases nanoparticles are believed to grow from a very small seed without changing shape, and reflect the distribution of coexisting structures. 
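To make the size argument concrete, here is a schematic comparison (the coefficients are illustrative placeholders, not values from the literature). Writing the decahedral and single-crystal energies as

$$E_{\mathrm{Dh}}(V) = a_{\mathrm{Dh}}\,V^{2/3} + b\,V, \qquad E_{\mathrm{sc}}(V) = a_{\mathrm{sc}}\,V^{2/3},$$

with $a_{\mathrm{Dh}} < a_{\mathrm{sc}}$ because of the larger fraction of low-energy {111} surface, and $b > 0$ an effective disclination strain-energy density, the decahedron is favored when

$$a_{\mathrm{Dh}}\,V^{2/3} + b\,V < a_{\mathrm{sc}}\,V^{2/3} \quad\Longleftrightarrow\quad V < \left(\frac{a_{\mathrm{sc}} - a_{\mathrm{Dh}}}{b}\right)^{3},$$

which shows why the volume-scaling strain term always dominates at large sizes while the surface term controls the small-size regime, giving the intermediate window of decahedral stability described above.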
For systems where icosahedral and decahedral morphologies are both relatively low in energy, the competition between these structures has implications for structure prediction and for the global thermodynamic and kinetic properties. These result from a double funnel energy landscape where the two families of structures are separated by a relatively high energy barrier at the temperature where they are in thermodynamic equilibrium. This situation arises for a cluster of 75 atoms with the Lennard-Jones potential, where the global potential energy minimum is decahedral, and structures based upon incomplete Mackay icosahedra are also low in potential energy, but higher in entropy. The free energy barrier between these families is large compared to the available thermal energy at the temperature where they are in equilibrium. An example is shown in the figure, with probability in the lower part and energy above, with axes of an order parameter and temperature. At low temperature the 75 atom decahedral cluster (Dh) is the global free energy minimum, but as the temperature increases the higher entropy of the competing structures based on incomplete icosahedra (Ic) causes the finite system analogue of a first-order phase transition; at even higher temperatures a liquid-like state is favored. There has been experimental support based upon work where single nanoparticles are imaged using electron microscopes either as they grow or as a function of time. One of the earliest works was that of Yagi et al., who directly observed changes in the internal structure with time during growth. More recent work has observed variations in the internal structure in liquid cells, or changes between different forms due to either (or both) heating or the electron beam in an electron microscope, including substrate effects. Successive twinning Allpress and Sanders proposed an alternative approach to energy minimization to understanding these particles called "successive twinning". Here one starts with a single tetrahedral unit, which then forms a twin either by accident during growth or by collision with another tetrahedron. It was proposed that this could continue to eventually have five units join. The term "successive twinning" has now come to mean a related concept: motion of the disclination either to or from a symmetric position as sketched in the atomistic simulation in the figure; see also Haiqiang Zhao et al for very similar experimental images. While in many cases experimental images show symmetric structures, sometimes they are less so and the five-fold center is quite asymmetric. There are asymmetric cases which can be metastable, and asymmetry can also be a strain relief process or involved in how the particles convert to single crystals or from single crystals. During growth there may be changes, as directly observed by Katsumichi Yagi et al for growth inside an electron microscope, and migration of the disclination from the outside has been observed in liquid-cell studies in electron microscopes. Extensive details about the atomic processes involved in motion of the disclination have been given using molecular dynamics calculations supported by density functional theory as shown in the figure. Connections There are a number of related concepts and applications of decahedral particles. Quasicrystals Soon after the discovery of quasicrystals it was suggested by Linus Pauling that five-fold cyclic twins such as these were the source of the electron diffraction data observed by Dan Shechtman. 
While there are similarities, quasicrystals are now considered to be a class of packing which is different from fivelings and the related icosahedral particles. Heterogeneous catalysts There are possible links to heterogeneous catalysis, with the decahedral particles displaying different performance. The first study by Avery and Sanders did not find them in automobile catalysts. Later work by Marks and Howie found them in silver catalysts, and there have been other reports. It has been suggested that the strain at the surface can change reaction rates, and since there is evidence that surface strain can change the adsorption of molecules and catalysis there is circumstantial support for this. As of 2024, there is some experimental evidence for different catalytic reactivity. Plasmonics It is known that the response of the surface plasmon polaritons in nanoparticles depends upon their shape. As a consequence decahedral particles have specific optical responses. One suggested use is to improve light absorption using their plasmonic properties by adding them to polymer solar cells. Thin films and mechanical deformation Most observations of fivelings have been for isolated particles. Similar structures can occur in thin films when particles merge to form a continuous coating, but do not recrystallize immediately. They can also form during annealing of films, which molecular dynamics simulations have indicated correlates to the motion of twin boundaries and a disclination, similar to the case of isolated nanoparticles described earlier. There is experimental evidence in thin films for interactions between partial dislocations and disclinations, as discussed in 1971 by de Wit. They can also be formed by mechanical deformation. The formation of a local fiveling structure by annealing or deformation has been attributed to a combination of stress relief and twin motion, which is different from the surface energy driven formation of isolated particles described above. See also Notes References External links Code from the group of Emilie Ringe which calculates thermodynamic and kinetic shapes for decahedral particles and also does optical simulations. Code from J M Rahm and P Erhart which calculates thermodynamic shapes, both continuum and atomistic; the code can be used to generate thermodynamic Wulff shapes including twinning. Chemical physics Condensed matter physics Crystallography Materials science Mineralogy Nanoparticles Physical chemistry Solid-state chemistry
Fiveling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,819
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Chemical physics", "Crystallography", "Condensed matter physics", "nan", "Physical chemistry", "Matter", "Solid-state chemistry" ]
76,067,772
https://en.wikipedia.org/wiki/Extended%20Wulff%20constructions
Extended Wulff constructions refers to a number of different ways to model the structure of nanoparticles as well as larger mineral crystals, and as such can be used to understand both the shape of certain gemstones or crystals with twins, as well as other areas such as how nanoparticles play a role in the commercial production of chemicals using heterogeneous catalysts. They are variants of the Wulff construction, which is used for a solid single crystal in isolation. They include cases for a solid particle on a substrate, those with twins, and also when growth is important. Depending upon whether there are twins or a substrate there are different cases, as indicated in the decision tree figure. The simplest forms of these constructions yield the lowest Gibbs free energy (thermodynamic) shape, or the stable growth form for an isolated particle; it can be difficult to differentiate between the two in experimental data. The thermodynamic cases involve the surface energy of different facets; the term surface tension refers to liquids, not solids. The shapes found due to growth kinetics involve the growth velocity of the different surface facets. While the thermodynamic and kinetic constructions are relevant for free-standing particles, often in technological applications particles are on supports. An important case is heterogeneous catalysis, where typically the surface of metal nanoparticles is where chemical reactions are taking place. To optimize the reactions a large metal surface area is desirable, but for stability the nanoparticles need to be supported on a substrate. The problem of the shape on a flat substrate is solved via the Winterbottom construction. All the above are for single crystals, but it is common to have twins in the crystals. These can occur either by accident (growth twins), or can be an integral part of the structure, as in decahedral or icosahedral particles. To understand the shape of particles with twin boundaries a modified Wulff construction is used. All these add some additional terms to the base Wulff construction. There are related constructions which have been proposed for other cases, such as with alloying or when the interface between a nanoparticle and substrate is not flat. General form The thermodynamic Wulff construction describes the relationships between the shape of a single crystal and the surface free energy of different surface facets. It has the form that the perpendicular distance from a common center to all the external facets is proportional to the surface free energy of each one. This can be viewed as a relationship $h_j = \lambda \gamma_j$ between the different surface energies and the distance from a Wulff center, where $h_j$ is the "height" of the $j$th face, drawn from the center to the face with a surface free energy of $\gamma_j$, and $\lambda$ is a scale. A common approach is to construct the planes normal to the vectors from the center to the surface free energy curve, with the Wulff shape the inner envelope. This is represented in the Wulff construction figure where the surface free energy is in red, and the single crystal shape would be in blue. In a more mathematical formalism it can be written describing the shape as a set of points given by $\{\mathbf{x} : \mathbf{x} \cdot \hat{\mathbf{n}} \le \lambda \gamma(\hat{\mathbf{n}})\}$ for all unit vectors $\hat{\mathbf{n}}$. For the extended constructions, one or more additional terms are included for interface free energies, for instance the interface term marked in purple with dashes in the figure. 
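As a concrete illustration of the inner-envelope definition above, the following sketch samples the two-dimensional Wulff shape on a grid of candidate points, keeping those that satisfy the half-plane condition for a dense set of facet normals. The four-fold surface energy function is an arbitrary illustrative choice, not data for any particular material.

```python
import numpy as np

def wulff_shape_2d(gamma, n_dirs=360, radius=2.0, n_grid=200):
    """Grid points inside the 2D Wulff shape
    W = {x : x . n <= gamma(n) for every unit normal n},
    the inner envelope of the planes normal to the gamma-plot."""
    thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    g = np.array([gamma(t) for t in thetas])
    xs = np.linspace(-radius, radius, n_grid)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    inside = np.all(pts @ normals.T <= g, axis=1)  # every half-plane test
    return pts[inside]

# Illustrative four-fold anisotropy: gives a rounded-square crystal.
shape = wulff_shape_2d(lambda t: 1.0 + 0.15 * np.cos(4 * t))
print(f"{len(shape)} grid points lie inside the Wulff shape")
```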
The dashed interface is included, which may be a solid interface for the Winterbottom case, two interfaces for the Summertop case, or one, two or three twin boundaries for the modified Wulff construction. Comparable cases are generated when the surface free energy is replaced by a growth velocity; these apply for kinetic shapes. Winterbottom construction The Winterbottom construction, named after Walter L. Winterbottom, is the solution for the shape of a solid particle on a fixed substrate, where the substrate is forced to remain flat. It is sometimes called the Kaischew-Winterbottom or Kaischew construction, since it was first analyzed for polyhedral shapes in a less general fashion by Kaischew and later Ernst G. Bauer. However, the proof by Winterbottom is more general. The Winterbottom construction adds an extra term for the free energy of the interface between a particle and the substrate, the substrate being assumed to stay flat. These shapes are found for nanoparticles supported on substrates, such as in heterogeneous catalysis and also nanoparticle superlattices; they look similar to a truncated single particle, as shown in the figure for a gold nanoparticle on ceria, and can also resemble the shape of a liquid drop on a surface. If the energy for the interface is very high then the particle has the same shape as it would have in isolation, and effectively dewets the substrate. If the energy is very low then a thin raft is formed on the substrate; it effectively wets the substrate. The configuration found depends upon the orientation of the substrate, that of the particle, as well as the relative orientation of the two. It is not uncommon to have more than one particle orientation and shape, each being a metastable energy minimum. There is also some dependence upon whether there are steps, strain and anisotropy at the interface. A related form has also been used for precipitates at boundaries, with semi-Wulff construction shapes on both sides. Summertop construction This form was proposed as an extension of the Winterbottom construction (and a play on words) by Jean Taylor. It applies to the case of a nanoparticle at a corner. Instead of just using one extra facet for the interface, two are included. There are other related extensions, such as solutions in two dimensions for a crystal between two parallel planes. Modified Wulff construction In many materials there are twins, which often correspond to a mirroring on a specific plane. For instance, a {111} plane for a face-centered cubic material such as gold is the normal twin plane. They often have re-entrant surfaces at the twin boundaries, a phenomenon reported in the 19th century and described in encyclopedias of crystal shapes. The cases with one twin boundary are also called macle twins, although there can be more than one twin boundary. An example of this, called Spinel law contact twinning, is shown in the figure. There can also be a series of parallel twins forming what are called Lamellar Twinned Particles, which have been found in experimental samples both large and small. For an odd number of boundaries these all resemble the macle twins; for an even number they are closer to single crystals. There can also be two non-parallel twin boundaries on each segment (a total of five twins in the composite particle), which leads to a shape that Cleveland and Uzi Landman called a Marks decahedron when it occurs in face-centered cubic materials, with five units forming a fiveling cyclic twin. 
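A minimal sketch of how the Winterbottom case modifies the construction is to add a single extra half-plane for the substrate, whose effective "energy" is the interface free energy minus the substrate surface energy; when this difference is low the truncation plane cuts deep into the particle and it spreads (wets), and when it is high the particle is barely truncated (dewets). The numbers used are illustrative, not measured energies.

```python
import numpy as np

def winterbottom_shape_2d(gamma, gamma_int, gamma_sub,
                          n_dirs=360, radius=2.0, n_grid=200):
    """2D Winterbottom shape: the Wulff shape truncated by one extra
    plane for the substrate.  The effective height of the interface
    facet is gamma_int - gamma_sub, which may be negative (wetting)."""
    thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    g = np.array([gamma(t) for t in thetas])
    normals = np.vstack([normals, [0.0, -1.0]])   # substrate below
    g = np.append(g, gamma_int - gamma_sub)
    xs = np.linspace(-radius, radius, n_grid)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    return pts[np.all(pts @ normals.T <= g, axis=1)]

gamma = lambda t: 1.0 + 0.15 * np.cos(4 * t)      # illustrative
strong = winterbottom_shape_2d(gamma, gamma_int=0.2, gamma_sub=0.6)
weak = winterbottom_shape_2d(gamma, gamma_int=0.9, gamma_sub=0.1)
print(len(strong), "points (strong truncation) vs", len(weak), "(weak)")
```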
There can also be three twin boundaries per segment, where twenty units assemble to form an icosahedral structure. Both the decahedral and icosahedral forms can be the most stable ones at the nanoscale. These forms occur for elemental nanoparticles as well as alloys and colloidal crystals. The approach to model these is similar to the Winterbottom construction, now adding an extra facet whose energy per unit area is half that of the twin boundary (half, so that the contributions of the two adjacent segments sum to a full twin boundary energy); the facets that form the twin boundary are identical for the two segments. Mathematically this is similar to the Wulff construction, with the shape of each segment given by $\{\mathbf{x} : (\mathbf{x} - \mathbf{w}_i) \cdot \hat{\mathbf{n}} \le \lambda \gamma(\hat{\mathbf{n}})\}$ for all unit vectors $\hat{\mathbf{n}}$. Here $\mathbf{w}_i$ is the origin of the Wulff construction for each segment. In many cases the twin boundary energy is small compared to the external surface energy, so a particle with a single twin is close to two halves of a single crystal, one rotated by 180 degrees, with all the origins the same; this is often observed experimentally. Five units then form a fiveling, which has re-entrant surfaces at the twin boundaries and is shown in the figure of a gold fiveling by Rose, while for three boundaries per unit a close-to-perfect icosahedron is formed. (An image of a 0.5 cm gold mineral crystal is shown later.) The construction also predicts more complicated shapes composed of combinations of decahedra, icosahedra, and other complex twin-connected shapes, which have been observed experimentally in nanoparticles and were called polyparticles. Other recent examples include bi-decahedra and bi-icosahedra. Extended combinations can lead to complex structures of overlapping five-fold structures in wires. While the earlier work was for crystals of materials such as silver and gold, more recently there has been work on colloidal clusters of nanoparticles where similar shapes have been observed, although nonequilibrium shapes also occur. Kinetic Wulff construction The thermodynamic Wulff construction and the others above describe the relationship between the shape of a single crystal and the surface free energy of different surface facets. It is named after Georg Wulff, but his paper was not in fact on thermodynamics, but rather on growth kinetics. In many cases growth occurs via the nucleation of small islands on the surface and then their sideways growth, either step-flow or layer-by-layer growth. The variant where this type of growth dominates is the kinetic Wulff construction. In the kinetic Wulff case, the distance from the origin to each surface facet is proportional to the growth rate of the facet. This means that fast-growing facets are often not present, for instance often {100} for a face-centered cubic material; the external shape may be dominated by the slowest-growing faces. Note that other facets will reappear if the crystal is annealed, when surface diffusion changes the shape towards the equilibrium shape. Most of the shapes in larger mineral crystals are a consequence of kinetic control. Both the surface free energy and growth rate of different surfaces depend strongly upon the presence of adsorbates, so can vary substantially. Similar to the original work by Wulff, it is often unclear whether single crystals have a thermodynamic or kinetic Wulff shape. For reference, the form of the kinetic Wulff construction is given by $\{\mathbf{x} : \mathbf{x} \cdot \hat{\mathbf{n}} \le \lambda v(\hat{\mathbf{n}})\}$ for all unit vectors $\hat{\mathbf{n}}$, where $v(\hat{\mathbf{n}})$ is the growth velocity of the facet. This is equivalent to $h_j = \lambda v_j$, where, as above, the index $j$ refers to the facet and $h_j$ is the height from the Wulff center. 
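The kinetic form $h_j = \lambda v_j$ uses exactly the same inner-envelope machinery, simply with growth velocities in place of surface free energies. A sketch with an illustrative two-fold velocity anisotropy is given below, in which the fast-growing directions grow out of the final shape so that only the slow facets survive; the velocity function is an arbitrary assumption for demonstration.

```python
import numpy as np

def kinetic_wulff_2d(velocity, n_dirs=360, radius=2.0, n_grid=200):
    """Kinetic Wulff shape: the same construction as the thermodynamic
    one, but the facet distance is proportional to growth velocity."""
    thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    v = np.array([velocity(t) for t in thetas])
    xs = np.linspace(-radius, radius, n_grid)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    return pts[np.all(pts @ normals.T <= v, axis=1)]

# Toy anisotropy: axis directions grow 50% faster than the diagonals,
# so the fast facets disappear and the slow ones bound the shape.
fast_axes = lambda t: 1.0 + 0.5 * np.cos(2 * t) ** 2
print(len(kinetic_wulff_2d(fast_axes)), "grid points in the kinetic shape")
```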
There are analogues of all the earlier cases when kinetic control dominates: Kinetic Winterbottom: the velocity replaces the surface energies for all the external facets, with the growth rate at the interface zero. Kinetic Summertop: similar to the Winterbottom case, with zero growth rate at the interfaces. Kinetic modified Wulff: the velocity replaces the surface energies for all the external facets, with zero growth velocity at the twin boundaries. When kinetic growth dominates, the velocity of the buried twin boundaries is zero. This can lead to cyclic twins with very sharp shapes. There can also be faster growth at re-entrant surfaces around twin boundaries, at the interface for a Winterbottom case, at dislocations and possibly at disclinations, all of which can lead to different shapes. For instance, faster growth at twin boundaries leads to regular polyhedra such as pentagonal bipyramids, with sharp corners and edges, for the fivelings, and sharp icosahedra for the particles made of twenty subunits. The pentagonal bipyramids have been frequently observed in growth experiments, dating back to the early work by Shozo Ino and Shiro Ogawa in 1966-67, and are not the thermodynamically stable shape but the kinetic one. Similar to the misinterpretation of the original paper by Wulff mentioned above, these sharp shapes have been misinterpreted as being part of the equilibrium shape. For completeness, there is a different type of kinetic control of shapes called diffusion control, which can lead to more complex shapes such as dendrites and others, for instance the star-shaped decahedral nanoparticle shown in the figure. Related constructions There are quite a few extensions and related constructions. Most of these to date are for relatively specialized cases. In particular: Strain at the particle-substrate interface can lead to changes which have been described in more generalized Winterbottom models or by including a triple-line energy term; the latter has been observed experimentally. Modified forms have been developed when there are steps, as this can introduce strain. A more complex variational approach can be used to model alloy nanoparticles or when combining the twin-variant and a substrate. While the most common use of these constructions is in three dimensions for particles, they can also be used to understand two-dimensional growth shapes, grain boundary faceting, voids when the interface is anisotropic, and dislocations. Caveats These variants of the Wulff construction correlate well with many shapes found experimentally, but do not explain everything. Sometimes the shapes with multiple different units are due to coalescence, sometimes they are less symmetric, and sometimes, as in Janus particles (named for the two-faced god), they contain two materials, as illustrated in the figure. There are also some assumptions, such as that the substrate remains flat in the Winterbottom construction. This does not have to be the case; the particle can be partially or completely buried by the substrate. It can also be the case that metastable structures are formed. For instance, during growth at elevated temperatures a neck can form between two particles, and they can start to merge. If the temperature is decreased then diffusion can become slow, so this shape can persist. Finally, the descriptions here work well for particles of size about 5 nm and larger. At smaller sizes more local bonding can become important, so nanoclusters of smaller sizes can be more complex. 
Application relevance Heterogeneous catalysts These contain nanoparticles on a support, where either the nanoparticles or the combination plays a key role in speeding up a chemical reaction. The support can also play a role in reducing sintering by stabilizing the particles, so there is less reduction in their surface area with extended use; larger particles produced by sintering small ones have less surface area for the same total number of atoms. In addition, the substrate can determine the orientation of the nanoparticles, and combined with which surfaces are exposed in the Winterbottom construction there can be different reactivities, which has been exploited for prototype catalysts. Minerals As alluded to earlier, many minerals have crystal twins, and these approaches provide methods to explain the morphologies, under either kinetic or thermodynamic control, for shapes found in the literature for marcasite, and by Gustav Rose in 1831 for gold. An image of a rather large one from Miass is shown in the figure. Nucleation At small sizes, particularly for face-centered cubic materials, cyclic twins called multiply twinned particles are often of lower energy than single crystals. The main reason is that they have more low-energy surfaces, mainly (111). This is balanced by elastic deformation, which raises the energy. At small sizes the surface energy dominates, so icosahedral particles are lowest in energy. As the size increases the decahedral ones become lowest in energy, then at the largest sizes single crystals are lowest. The decahedral particles and, to a lesser extent, the icosahedral ones have shapes determined by the modified Wulff construction. Note that due to the discrete nature of atoms there can be deviations from the continuum shapes at very small sizes. Plasmonics The optical response of nanoparticles depends upon their shape, size and the materials. For instance, rod shapes, which are very anisotropic, can be grown using decahedral seeds if the growth on (100) facets is slow, a kinetic Wulff shape. These have quite different optical responses than icosahedra, which are close to spherical, while cubes can also be produced if the (111) growth rate is very fast, and these have yet further optical responses. See also References External links Code from the group of Emilie Ringe which calculates thermodynamic and kinetic shapes for decahedral particles and also does optical simulations, see also Code for Wulff and Winterbottom shapes. Updated information on Wulffman, including double Winterbottom shapes. Code from J M Rahm and P Erhart which calculates thermodynamic shapes, both continuum and atomistic, see also . The code can be used to generate thermodynamic Wulff shapes including twinning. Web page using the WulffPack code. Chemical physics Condensed matter physics Crystallography Materials science Mineralogy Nanoparticles
Extended Wulff constructions
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,470
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Chemical physics", "Crystallography", "Condensed matter physics", "nan", "Matter" ]
74,553,020
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28torque%29
The following are examples of orders of magnitude for torque. Examples References Orders of magnitude Torque
Orders of magnitude (torque)
[ "Physics", "Mathematics" ]
19
[ "Force", "Physical quantities", "Quantity", "Wikipedia categories named after physical quantities", "Orders of magnitude", "Units of measurement", "Torque" ]
74,555,797
https://en.wikipedia.org/wiki/Artificial%20intelligence%20in%20pharmacy
Artificial intelligence in pharmacy is the application of artificial intelligence (AI) to the discovery, development, and the treatment of patients with medications. AI in pharmacy practice has the potential to revolutionize all aspects of pharmaceutical research as well as to improve the clinical application of pharmaceuticals to prevent, treat, or cure disease. AI, a technology that enables machines to simulate human intelligence, has found applications in pharmaceutical research, drug manufacturing, drug delivery systems, clinical trial optimization, treatment plans, and patient-centered services. Drug discovery and development AI algorithms analyze vast datasets with greater speed and accuracy than traditional methods. This has enabled the identification of potential drug candidates, prediction of their interactions, and optimization of formulations. AI-driven simulations and modeling assist researchers in understanding molecular interactions, thus expediting the drug development timeline. Drug delivery systems AI is transforming drug delivery systems. AI technology can assist in identifying biological targets for pharmaceuticals, evaluating the pharmacological profiles of potential drugs, and analyzing genetic information; in the future, this could lead to drugs personalized to an individual, targeted cancer treatments, and edible vaccines. References Applications of artificial intelligence Pharmacy Pharmaceutical industry
Artificial intelligence in pharmacy
[ "Chemistry", "Biology" ]
235
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry", "Pharmacy" ]
56,105,123
https://en.wikipedia.org/wiki/IONIS-GCCRRx
{{DISPLAYTITLE:IONIS-GCCRRx}} IONIS-GCCRRx, also known as ISIS-426115, is an antiglucocorticoid which is under development by Ionis Pharmaceuticals (formerly Isis Pharmaceuticals) for the treatment of diabetes mellitus type 2. It has also been under investigation for the treatment of Cushing's syndrome, but no development has been reported. The drug is an antisense oligonucleotide against the glucocorticoid receptor. As of December 2017, it is in phase II clinical trials for diabetes mellitus type 2. References Experimental diabetes drugs Antiglucocorticoids Antisense RNA Drugs with undisclosed chemical structures Experimental drugs Therapeutic gene modulation
IONIS-GCCRRx
[ "Biology" ]
158
[ "Therapeutic gene modulation" ]
56,109,450
https://en.wikipedia.org/wiki/Yutaka%20Tokiwa
Yutaka Tokiwa is a Senior Researcher at Okinawa Industrial Technology Center, who has published extensively on the biodegradability of plastics. He has an h-index of 61 according to Google Scholar. References Year of birth missing (living people) Living people Polymer scientists and engineers
Yutaka Tokiwa
[ "Chemistry", "Materials_science" ]
58
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
56,114,444
https://en.wikipedia.org/wiki/Chandrasekhar%20polarization
Chandrasekhar polarization is a partial polarization of emergent radiation at the limb of rapidly rotating early-type stars or binary star systems with purely electron-scattering atmospheres, named after the Indian American astrophysicist Subrahmanyan Chandrasekhar, who first predicted its existence theoretically in 1946. Chandrasekhar published a series of 26 papers in The Astrophysical Journal titled On the Radiative Equilibrium of a Stellar Atmosphere from 1944 to 1948. In the 10th paper, he predicted, using the Thomson law of scattering, that a purely electron-scattering stellar atmosphere emits polarized light. The theory predicted that 11 percent polarization could be observed at maximum. But when this was applied to a spherical star, the net polarization effect was found to be zero, because of the spherical symmetry. It took another 20 years to explain under what conditions this polarization can be observed. J. Patrick Harrington and George W. Collins, II showed that this symmetry is broken if we consider a rapidly rotating star (or a binary star system), in which the star is not exactly spherical, but slightly oblate due to extreme rotation (or tidal distortion in the case of a binary system). The symmetry is also broken in an eclipsing binary star system. Discovery Attempts made to predict this polarization effect were initially unsuccessful, but rather led to the prediction of interstellar polarization. In 1983, scientists found the first evidence of this polarization effect in the star Algol, an eclipsing binary star system. The polarization of a rapidly rotating star was not found until 2017, since it required a high-precision polarimeter. In September 2017, a team of scientists from Australia observed this polarization in the star Regulus, which rotates at 96.5 percent of its critical angular velocity for breakup. See also Polarization in astronomy References Polarization (waves) Astrophysics
Chandrasekhar polarization
[ "Physics", "Astronomy" ]
376
[ "Astronomical sub-disciplines", "Polarization (waves)", "Astrophysics" ]
56,115,043
https://en.wikipedia.org/wiki/Chandrasekhar%E2%80%93Kendall%20function
Chandrasekhar–Kendall functions are the eigenfunctions of the curl operator derived by Subrahmanyan Chandrasekhar and P. C. Kendall in 1957 while attempting to solve force-free magnetic fields. The functions were independently derived by both, and the two decided to publish their findings in the same paper. If the force-free magnetic field equation is written as $\nabla\times\mathbf{H} = \lambda\mathbf{H}$, where $\mathbf{H}$ is the magnetic field and $\lambda$ is the force-free parameter, with the assumption of a divergence-free field, $\nabla\cdot\mathbf{H} = 0$, then the most general solution for the axisymmetric case is $\mathbf{H} = \frac{1}{\lambda}\nabla\times(\nabla\times\psi\hat{\mathbf{n}}) + \nabla\times\psi\hat{\mathbf{n}}$, where $\hat{\mathbf{n}}$ is a unit vector and the scalar function $\psi$ satisfies the Helmholtz equation, i.e., $\nabla^2\psi + \lambda^2\psi = 0$. The same equation also appears in Beltrami flows from fluid dynamics, where the vorticity vector is parallel to the velocity vector, i.e., $\nabla\times\mathbf{v} = \lambda\mathbf{v}$. Derivation Taking the curl of the equation $\nabla\times\mathbf{H} = \lambda\mathbf{H}$ and using this same equation, we get $\nabla\times(\nabla\times\mathbf{H}) = \lambda^2\mathbf{H}$. In the vector identity $\nabla\times(\nabla\times\mathbf{H}) = \nabla(\nabla\cdot\mathbf{H}) - \nabla^2\mathbf{H}$, we can set $\nabla\cdot\mathbf{H} = 0$ since it is solenoidal, which leads to a vector Helmholtz equation, $\nabla^2\mathbf{H} + \lambda^2\mathbf{H} = 0$. Not every solution of the above equation is a solution of the original equation, but the converse is true. If $\psi$ is a scalar function which satisfies the equation $\nabla^2\psi + \lambda^2\psi = 0$, then the three linearly independent solutions of the vector Helmholtz equation are given by $\mathbf{L} = \nabla\psi$, $\mathbf{T} = \nabla\times(\psi\hat{\mathbf{n}})$ and $\mathbf{S} = \frac{1}{\lambda}\nabla\times\mathbf{T}$, where $\hat{\mathbf{n}}$ is a fixed unit vector. Since $\nabla\times\mathbf{S} = \lambda\mathbf{T}$, it can be found that $\nabla\times(\mathbf{S}+\mathbf{T}) = \lambda(\mathbf{S}+\mathbf{T})$. But this is the same as the original equation, therefore $\mathbf{H} = \mathbf{S}+\mathbf{T}$, where $\mathbf{S}$ is the poloidal field and $\mathbf{T}$ is the toroidal field. Thus, substituting $\mathbf{T}$ in $\mathbf{S}$, we get the most general solution as $\mathbf{H} = \frac{1}{\lambda}\nabla\times(\nabla\times\psi\hat{\mathbf{n}}) + \nabla\times\psi\hat{\mathbf{n}}$. Cylindrical polar coordinates Taking the unit vector in the $z$ direction, i.e., $\hat{\mathbf{n}} = \mathbf{e}_z$, with a periodicity $L$ in the $z$ direction and vanishing boundary conditions at $r = a$, the solution is given by $\psi = J_m(\mu_j r)\,e^{im\theta + ikz}$, where $J_m$ is the Bessel function, $k = \pm 2\pi n/L$, $m$ and $n$ are integers, and $\mu_j$ is determined by the boundary condition at $r = a$. The eigenvalues for $m = n = 0$ have to be dealt with separately. Since here $\hat{\mathbf{n}} = \mathbf{e}_z$, we can think of the $z$ direction as toroidal and the $\theta$ direction as poloidal, consistent with the convention. See also Poloidal–toroidal decomposition Woltjer's theorem References Astrophysics Plasma theory and modeling
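The separable structure of the Helmholtz equation above can be checked numerically: the radial part of $\psi = J_m(\mu r)e^{im\theta + ikz}$ must satisfy Bessel's equation with $\mu^2 = \lambda^2 - k^2$. The sketch below, with arbitrary illustrative mode numbers, verifies this with SciPy's Bessel functions; it is a consistency check, not a reproduction of Chandrasekhar and Kendall's derivation.

```python
import numpy as np
from scipy.special import jv, jvp

# Check that f(r) = J_m(mu r) satisfies the radial Helmholtz equation
#   f'' + f'/r + (mu^2 - m^2/r^2) f = 0,   with mu^2 = lam^2 - k^2,
# which is what makes psi = J_m(mu r) exp(i(m*theta + k*z)) a solution
# of laplacian(psi) + lam^2 psi = 0.  Mode numbers are illustrative.
m, k, lam = 2, 1.3, 2.0
mu = np.sqrt(lam**2 - k**2)

r = np.linspace(0.1, 5.0, 50)
f = jv(m, mu * r)
df = mu * jvp(m, mu * r, n=1)        # first radial derivative
d2f = mu**2 * jvp(m, mu * r, n=2)    # second radial derivative

residual = d2f + df / r + (mu**2 - m**2 / r**2) * f
print("max |residual| =", np.abs(residual).max())  # ~1e-14 or smaller
```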
Chandrasekhar–Kendall function
[ "Physics", "Astronomy" ]
423
[ "Plasma theory and modeling", "Astronomical sub-disciplines", "Astrophysics", "Plasma physics" ]
53,247,133
https://en.wikipedia.org/wiki/European%20Lead%20Factory
The European Lead Factory is a public-private partnership that aims to accelerate early drug discovery in Europe. The European Lead Factory is funded by the Innovative Medicines Initiative and consists of a pan-European consortium that includes 7 pharmaceutical companies as well as partners from academia and small and medium-sized enterprises (SMEs). Drug discovery platform The European Lead Factory has been operational since 2013 and consists of two main components: the Joint European Compound Library and the European Screening Centre. Together these elements provide a platform for pharmaceutical researchers in Europe to identify drug discovery starting points, by connecting innovative drug targets to high-quality small molecules. The result is defined in a ‘hit list’: a number of compounds that show affinity for the target. The compounds on those lists can either be used as probes to better understand biological pathways or as starting points for lead discovery efforts for novel drugs. These hits can be further optimised outside of the European Lead Factory, for affinity but also for drug-like properties such as selectivity, solubility and metabolism in the human body. The ultimate goal is that these candidate drugs will address unmet medical needs once fully approved as drugs by the authorities. Open innovation The Joint European Compound Library has a collection of around 500,000 chemical compounds selected from private company collections and complemented by the novel molecules synthesised by the European Lead Factory chemistry partners. European researchers from academia as well as SMEs and patient organisations submit their biological target to be screened against the compound collection by the European Lead Factory researchers by means of industry-standard high-throughput screening. References 2013 establishments in Europe Biology in Europe Consortia in Europe Crowdsourcing Drug discovery companies Research projects
European Lead Factory
[ "Chemistry" ]
334
[ "Drug discovery companies", "Drug discovery" ]
53,257,288
https://en.wikipedia.org/wiki/HSH2D
Hematopoietic SH2 Domain Containing (HSH2D) protein is a protein encoded by the hematopoietic SH2 domain containing (HSH2D) gene. Gene HSH2D is located on chromosome 19 at 19p13.11. Common aliases of the gene include HSH2 (Hematopoietic SH2 Protein) and ALX (Adaptor in Lymphocytes of Unknown Function X). The mRNA encodes two main isoforms. Isoform 1, the longest isoform, contains seven exons. The gene spans from 16134028 to 16158575. mRNA Two main isoforms of HSH2D exist. Isoform 1 has seven exons and is 2,403 bp in length. Isoform 2 has six exons and is 2,936 bp long. Although isoform 2 has the longer mRNA, it still produces the smaller isoform of the mature protein. Isoform 2 has a variant 5’ UTR and a different start codon, as well as a shorter N-terminus. The mRNA has a short 5’ UTR and a long 3’ UTR. Protein The protein has a molecular weight of 39.0 kilodaltons (kDa) and a pI of 6.678. The main feature of the protein is the SH2 (Src homology 2) domain, a region that binds phosphotyrosine and is important in many signaling molecules. This domain is located from residues 26-127. The secondary structure of the protein contains a helical section around residues 40-50, a sheet between 60-70, helices between 100-110, 135-145, 175-180, 200-225, and additional sheets between 235-240 and 295-300, shown in the figure at the bottom of the section (helices are purple arrows and sheets are red arrows). The protein has several locations of post-translational modifications, especially phosphorylation and GalNAc O-glycosylation, which have been shown to play a role in cancers. The tertiary structure of the protein has not been confirmed through research; however, predictions using I-TASSER software are useful in visualizing the protein. Expression Based on NCBI GEO expression profiles and EST analyses, the protein appears to be narrowly expressed throughout human tissues. It is highly expressed in bone marrow, CD4+ and CD8+ T cells, lymph node, mammary gland, spleen, stomach, thyroid, and small intestine tissue. Expression is elevated in cases of early T-cell precursor acute lymphoblastic leukemia and lowered in breast cancer cells that are treated with estrogen, suggesting an interaction between the protein and estrogen. Function The function of the HSH2D protein is still not fully understood; however, it has been shown to play a role in various cellular functions such as apoptosis, wound healing, vascular endothelial growth factors, membrane-associated intracellular trafficking, biogenesis of lipid droplets and collagen remodeling. It is also thought to play a role in T-cell activation. Interacting proteins HSH2D interacts with several proto-oncogenes, including FES proto-oncogene (FES) and CRK proto-oncogene (CRK). It also has suspected interactions with other proteins such as tyrosine kinase non-receptor 2 (TRK2), PTEN-induced putative kinase (PINK1), and Interleukin 2 (IL2). A summary of these proteins is shown below with their suspected functions. Clinical significance The HSH2D protein has been studied along with other human genes predicted to be involved in the human immune system. HSH2D was found to be highly expressed in patients with ulcerative colitis. The protein is also associated with alpha-interferon activity. Homology HSH2D has four distant paralogs and several orthologs in other species that have high levels of conservation. Paralogs The four paralogs of HSH2D in humans are other proteins containing SH2 domains. 
They do not have a high level of conservation other than this domain. All paralogs were found through GeneCards. Orthologs HSH2D has several orthologous proteins that span several orders of species. The protein is well conserved across mammals, as well as a few reptiles, amphibians, and invertebrates. The following list is not exhaustive; rather, it shows the wide range of organisms in which the protein may be found. All orthologous proteins were found with the BLAST or BLAT programs. References Proteins
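The kind of physical parameters quoted in the Protein section (molecular weight, isoelectric point) can be computed directly from a sequence, for instance with Biopython's ProtParam module. The short sequence below is a made-up placeholder, not the real HSH2D sequence; substituting the full HSH2D sequence from NCBI or UniProt should give figures close to the 39.0 kDa and pI 6.678 quoted above.

```python
# Sketch of computing sequence-derived protein parameters with
# Biopython.  The sequence here is an arbitrary placeholder, NOT the
# HSH2D sequence; replace it with the real one to reproduce the
# figures quoted in the article.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
analysis = ProteinAnalysis(sequence)

print(f"molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point (pI): {analysis.isoelectric_point():.3f}")
```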
HSH2D
[ "Chemistry" ]
977
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
51,711,705
https://en.wikipedia.org/wiki/Approximate%20measures
Approximate measures are units of volumetric measurement which are not defined by a government or government-sanctioned organization, or which were previously defined and are now repealed, yet which remain in use. It may be that all English-unit derived capacity measurements are derived from one original approximate measurement: the mouthful, consisting of about half an ounce, called the ro in ancient Egypt (their smallest recognized unit of capacity). The mouthful was still a unit of liquid measure during Elizabethan times. (The principal Egyptian standards from small to large were the ro, hin, hekat, and khar.) Because of the lack of official definitions, many of these units will not have a consistent value. United Kingdom glass-tumbler breakfast-cup tea-cup wine-glass table-spoon dessert-spoon tea-spoon black-jack demijohn (dame-jeanne) goblet pitcher gyllot (about equal to 1/2 gill) noggin (1/4 pint) nipperkin (measure for liquor, containing no more than 1/2 pint) tumblerful (10 fl oz or 2 gills or 2 teacupsful) apothecaries' approximate measures teacupful = about 4 fl oz wineglassful = about 2 fl oz tablespoonful = about 1/2 fl oz dessertspoonful = about 2 fl dr teaspoonful = about 1 fl dr drop = about 1 minim teacupful (5 fl oz, or 1 gill ibid) wineglassful (2-1/2 fl oz or 1/2 gill or 1/2 teacupful or 1/4 tumblerful) dessertspoonful (1/4 fl oz or 2 fl dr and equal to 2 teaspoonful or 1/2 tablespoonful) teaspoonful (1/8 fl oz or 1 fl dr and also equal to 1/2 dessertspoonful or 1/4 tablespoonful) United States The vagueness of how these measures have been defined, redefined, and undefined over the years, both through written and oral history, is best exemplified by the large number of sources that need to be read and cross-referenced in order to paint even a reasonably accurate picture. So far, the list includes the United States Pharmacopoeia, U.S. FDA, NIST, A Manual of Weights, Measures, and Specific Gravity, State Board Questions and Answers, MediCalc, MacKenzie's Ten Thousand Receipts, Approximate Practical Equivalents, When is a Cup not a Cup?, Cook's Info, knitting-and.com, and Modern American Drinks. Dashes, pinches, and smidgens are all traditionally very small amounts well under a teaspoon, but not more uniformly defined. In the early 2000s some companies began selling measuring spoons that defined a dash as 1/8 teaspoon, a pinch as 1/16 teaspoon, and a smidgen as 1/32 teaspoon. Based on these spoons, there are two smidgens in a pinch and two pinches in a dash. However, the 1954 Angostura “Professional Mixing Guide” states that “a dash” is 1/6th of a teaspoon, or 1/48 of an ounce, and Victor Bergeron (a.k.a. Trader Vic, famous saloonkeeper) said that for bitters it was teaspoon, but fl oz for all other liquids. References Measurement
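The early-2000s measuring-spoon definitions quoted above form a simple binary ladder below the teaspoon, which can be captured in a few lines; the millilitre conversion uses the modern US teaspoon (about 4.93 mL) and is included only for illustration.

```python
# The measuring-spoon values quoted in the text, with the teaspoon as
# the base unit.  These are the early-2000s spoon-set definitions, not
# official ones; the mL figures assume the modern US teaspoon.
US_TSP_ML = 4.92892

measures_tsp = {
    "smidgen": 1 / 32,
    "pinch": 1 / 16,
    "dash": 1 / 8,
    "teaspoon": 1.0,
}

for name, tsp in measures_tsp.items():
    print(f"{name:>8}: {tsp:7.4f} tsp = {tsp * US_TSP_ML:6.3f} mL")

# Consistency with the text: two smidgens to a pinch, two pinches to a dash.
assert measures_tsp["pinch"] == 2 * measures_tsp["smidgen"]
assert measures_tsp["dash"] == 2 * measures_tsp["pinch"]
```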
Approximate measures
[ "Physics", "Mathematics" ]
706
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
51,714,214
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20direct%20Xa%20inhibitors
Four drugs from the class of direct Xa inhibitors are marketed worldwide. Rivaroxaban (Xarelto) was the first approved FXa inhibitor to become commercially available, in Europe and Canada in 2008. The second one was apixaban (Eliquis), approved in Europe in 2011 and in the United States in 2012. The third one, edoxaban (Lixiana, Savaysa), was approved in Japan in 2011 and in Europe and the US in 2015. Betrixaban (Bevyxxa) was approved in the US in 2017. History Heparin Heparin was discovered by Jay McLean and William Henry Howell in 1916; it was first isolated from canine liver, and its name derives from the Greek word for liver, hepar. Heparin targets multiple factors in the blood coagulation cascade, one of them being FXa. At first it had many side effects, but over the next twenty years investigators worked on heparin to make it better and safer. It entered clinical trials in 1935 and the first drug was launched in 1936. Chains of natural heparin can vary from 5,000 to 40,000 daltons. In the 1980s low molecular weight heparins (LMWHs) were developed; they only contain chains with an average molecular weight of less than 8,000 Da. Warfarin In the 1920s there was an outbreak of a mysterious haemorrhagic cattle disease in Canada and the northern United States. The disease was named sweet clover disease because the cattle had grazed on sweet clover hay. It wasn't until ten years after the outbreak that a local investigator, Karl P. Link, and his student Wilhelm Schoeffel started an intense investigation to find the substance causing the internal bleeding. It took them 6 years to discover dicoumarol, the causative agent. They patented the substance, and in 1945 Link started selling a coumarin derivative as a rodenticide. He and his colleagues worked on several variations and ended up with a substance they named warfarin in 1948. It wasn't until 1954 that it was approved for medicinal use in humans, making warfarin the first oral anticoagulant drug. Need for newer and better oral drugs Warfarin treatment requires regular blood monitoring and dose adjustments due to its narrow therapeutic window. If supervision is inadequate, warfarin poses a threat of all too frequent haemorrhagic events, and it has multiple interactions with food and other drugs. Currently, the main problem with low molecular weight heparin (LMWH) is the administration route, as it has to be given subcutaneously. Because of these disadvantages there has been an urgent need for better anticoagulant drugs. For a modern society, convenient and fast drug administration is the key to good drug compliance. In 2008 the first direct Xa inhibitor was approved for clinical use. Direct Xa inhibitors are just as efficacious as LMWH and warfarin, but they are given orally and do not need such strict monitoring. Other advantages of Xa inhibitors are rapid onset/offset, few drug interactions and predictable pharmacokinetics. The rapid onset/offset effect greatly reduces the need for “bridging” with parenteral anticoagulants after surgeries. Today there are four factor Xa inhibitors marketed: rivaroxaban, apixaban, edoxaban and betrixaban. Antistasin and tick anticoagulant peptide (TAP) Factor Xa was identified as a promising target for the development of new anticoagulants in the early 1980s. In 1987 the first factor Xa inhibitor, the naturally occurring compound antistasin, was isolated from the salivary glands of the Mexican leech Haementeria officinalis. Antistasin is a polypeptide and a potent Xa inhibitor. 
In 1990 another naturally occurring Xa inhibitor, the tick anticoagulant peptide (TAP), was isolated from extracts of the tick Ornithodoros moubata. TAP and antistasin were used to evaluate factor Xa as a drug target. Mechanism of action Blood coagulation is a complex process by which the blood forms clots. It is an essential part of hemostasis and works by stopping blood loss from damaged blood vessels. At the site of injury, where blood is exposed to tissue under the endothelium, the platelets gather and immediately form a plug. That process is called primary hemostasis. Simultaneously, secondary hemostasis occurs. It is defined as the formation of insoluble fibrin by activated coagulation factors, specifically thrombin. These factors activate each other in a blood coagulation cascade that occurs through two separate pathways that interact, the intrinsic and extrinsic pathways. After activation of various proenzymes, thrombin is formed in the last steps of the cascade; it then converts fibrinogen to fibrin, which leads to clot formation. Factor Xa is an activated serine protease that occupies a key role in the blood coagulation pathway by converting prothrombin to thrombin. Inhibition of factor Xa leads to antithrombotic effects by decreasing the amount of thrombin. Directly targeting factor Xa is suggested to be an effective approach to anticoagulation. Development In 1987 antistasin was tested as the first direct Xa inhibitor. Antistasin is a protein made up of 119 amino acid residues, of which 20 are cysteines involved in 10 disulfide bonds. It acts as a slow, tight-binding inhibitor of factor Xa with a Ki value of 0.3–0.6 nM, but it also inhibits trypsin. Recombinant antistasin can be produced by genetically modified yeast, Saccharomyces cerevisiae. Another naturally occurring direct Xa inhibitor, the tick anticoagulant peptide (TAP), was discovered in 1990. It is a single-chain, 60 amino acid peptide and, like antistasin, it is a slow, tight-binding inhibitor with a similar Ki value (~0.6 nM). These two proteins were mostly used to validate factor Xa as a drug target. Animal studies suggested direct Xa inhibition to be a more efficient approach to anticoagulation compared to direct thrombin inhibitors, especially offering a wider therapeutic window and reducing the risk of rebound thrombosis (an increase in thromboembolic events occurring shortly after the withdrawal of an antithrombotic medication) compared to direct and indirect thrombin inhibitors. During the 1990s several low-molecular-weight substances were developed, such as DX-9065a and YM-60828. DX-9065a was the first synthetic compound that inhibited FXa without inhibiting thrombin. That was attained by inserting a carboxyl group, which seemed to be the most important moiety for selective binding to FXa. However, those early small molecules still had amidine groups or even more basic functions, which were thought to be necessary as mimics for an arginine residue in prothrombin, the natural substrate of factor Xa. Nevertheless, these basic functions are also related to very poor oral bioavailability (e.g. 2–3% for DX-9065a). In 1998 Bayer Healthcare, a pharmaceutical company, started searching for low-molecular-weight direct factor Xa inhibitors with higher oral bioavailability. High-throughput screening and further optimisation at first led to several substances from the class of isoindolinones, demonstrating that much less basic substances can also act as potent Xa inhibitors, with IC50 values down to 2 nM. 
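The potencies above are quoted sometimes as Ki values (antistasin, TAP) and sometimes as IC50 values (DX-9065a, the isoindolinones); for a competitive inhibitor the two are related by the Cheng-Prusoff equation. The sketch below uses illustrative substrate concentration and Km values, not the actual FXa assay conditions.

```python
# Cheng-Prusoff conversion for a competitive inhibitor:
#   Ki = IC50 / (1 + [S] / Km)
# The substrate concentration and Km are illustrative placeholders,
# not the conditions of the FXa assays cited in the text.
def ki_from_ic50(ic50_nM: float, substrate_nM: float, km_nM: float) -> float:
    return ic50_nM / (1.0 + substrate_nM / km_nM)

ic50_nM = 2.0  # isoindolinone IC50 quoted above
print(f"Ki ~ {ki_from_ic50(ic50_nM, substrate_nM=100.0, km_nM=50.0):.2f} nM")
```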
Although the isoindolinones have better oral bioavailability than the original compounds, it was still insufficient. However, the project later led to the class of N-aryloxazolidinones, which provides substances with both high potency in inhibiting factor Xa and high bioavailability. One compound of this class, rivaroxaban (IC50 = 0.7 nM, bioavailability: 60%), was granted marketing authorization for the prevention of venous thromboembolism in Europe and Canada in September 2008. Chemistry Factor Xa: Structure and binding sites Factors IIa, Xa, VIIa, IXa and XIa are all proteolytic enzymes that have a specific role in the coagulation cascade. Factor Xa (FXa) is the most promising one due to its position at the intersection of the intrinsic and extrinsic pathways, as well as the amplification it provides: around 1,000 thrombin molecules are generated for each Xa molecule, which makes its inhibition a potent anticoagulant effect. FXa is generated from FX by cleavage of a 52 amino acid activation peptide; the "a" in factor Xa means activated. FXa consists of a 254 amino acid catalytic domain and is also linked to a 142 amino acid light chain. The chain contains both a Gla domain and two epidermal growth factor (EGF)-like domains. The active site of FXa is structured to catalyze the cleavage of physiological substrates and cleaves PhePheAsnProArg-ThrPhe and TyrIleAspGlyArg-IleVal in prothrombin. FXa has four so-called pockets which are targets for substrates binding to factor Xa. These pockets are lined by different amino acids, and Xa inhibitors target these pockets when binding to factor Xa. The two most relevant pockets regarding affinity and selectivity for the Xa inhibitors are S1 and S4. S1: The S1 pocket is a hydrophobic pocket and contains an aspartic acid residue (Asp-189) which can serve as a recognition site for a basic group. FXa has a residual space in the S1 pocket, which is lined by residues Tyr-228, Asp-189 and Ser-195. S2: The S2 pocket is a small and shallow pocket. It merges with the S4 pocket and has room for small amino acids. Tyr-99 seems to block access to this pocket, so it is not as important as S1 and S4. S3: The S3 pocket is located on the rim of the S1 pocket and is flat and exposed to the solvent. This pocket is not as important as S1 and S4. S4: The S4 pocket is hydrophobic in nature and the floor of the pocket is formed by the Trp-215 residue. The residues Phe-174 and Tyr-99 of FXa join Trp-215 to form an aromatic box that is able to bind aliphatic, aromatic and positively charged fragments. Because of the binding to positively charged entities, it can be described as a cation hole. Chemical structure and properties of direct Xa inhibitors Binding of Xa inhibitors to factor Xa The Xa inhibitors all bind in a so-called L-shaped fashion within the active site of factor Xa. The key constituents of factor Xa are the S1 and S4 binding sites. It was first noted that the natural compounds, antistasin and TAP, which possess highly polar and therefore charged components, bind to the target with some specificity. Newer drugs were therefore designed with positively charged groups, but those resulted in poor bioavailability. The Xa inhibitors marketed nowadays therefore contain an aromatic ring with various moieties attached for different interactions with the S1 and S4 binding sites. This also ensures good bioavailability as well as maintaining firm binding strength. The Xa inhibitors currently on the market thus rely on hydrophobic and hydrogen bonding instead of highly polar interactions. 
Antistasin binding to factor Xa Antistasin contains an N- and a C-terminal domain which are similar in their amino acid sequences, with ~40% identity and ~56% homology. Each of them contains a short β-sheet structure and 5 disulfide bonds. Only the N-terminal domain is necessary to inhibit Xa, while the C-terminal domain does not contribute to the inhibitory properties due to differences in the three-dimensional structure, even though the C-terminal domain has a strongly analogous pattern to the actual active site. The interaction of antistasin with FXa involves both the active site and the inactive surface of FXa. The reactive site of antistasin, formed by Arg-34 and Val-35 in the N-terminal domain, fits the binding site of FXa, most likely the S1 pocket. At the same time, Glu-15, located outside the reactive site of antistasin, fits to positively charged residues on the surface of FXa. The multiple binding is thermodynamically advantageous and leads to sub-nanomolar inhibition (Ki = 0.3–0.6 nM). DX-9065a binding to factor Xa DX-9065a, the first small molecule direct Xa inhibitor, is an amidinoaryl derivative with a molecular weight of 571.07 g/mol. Its positively charged amidinonaphthalene group forms a salt bridge to the Asp-189 residue in the S1 pocket of FXa. The pyrrolidine ring fits between Tyr-99, Phe-174 and Trp-215 in the S4 pocket of FXa. Unlike older drugs, e.g. heparin, DX-9065a is selective for FXa compared to thrombin, even though FXa and thrombin are similar in their structure. This is caused by a difference in the amino acid residue in the homologous position 192. While FXa has a glutamine residue in that position, thrombin has a glutamic acid that causes electrostatic repulsion with the carboxyl group of DX-9065a. In addition, a salt bridge between Glu-97 of thrombin and the amidine group fixed in the pyrrolidine ring of DX-9065a reduces the flexibility of the DX-9065a molecule, which now cannot rotate enough to avoid the electrostatic clash. That is why the IC50 value for thrombin is >1,000 μM while the IC50 value for FXa is 0.16 μM. Rivaroxaban binding to factor Xa Rivaroxaban binding to FXa is mediated through two hydrogen bonds to the amino acid Gly-219. These two hydrogen bonds serve an important role in directing the drug into the S1 and S4 subsites of FXa. The first hydrogen bond is a strong interaction from the carbonyl oxygen of the oxazolidinone core of rivaroxaban. The second hydrogen bond is a weaker interaction from the amino group of the chlorothiophene carboxamide moiety. These two hydrogen bonds result in the drug forming an L-shape that fits in the S1 and S4 pockets. The amino acid residues Phe-174, Tyr-99, and Trp-215 form a narrow hydrophobic channel that is the S4 binding pocket. The morpholinone part of rivaroxaban is “sandwiched” between amino acids Tyr-99 and Phe-174, and the aryl ring of rivaroxaban is oriented perpendicularly across Trp-215. The morpholinone carbonyl group does not have a direct interaction with the FXa backbone; instead, it contributes to a planarization of the morpholinone ring and therefore helps rivaroxaban to be sandwiched between the two amino acids. The interaction between the chlorine substituent of the thiophene moiety and the aromatic ring of Tyr-228, which is located at the bottom of the S1 pocket, is very important because it obviates the need for strongly basic groups for high affinity for FXa. This enables rivaroxaban, which is non-basic, to achieve good oral bioavailability and potency. 
Apixaban binding to factor Xa Apixaban shows a similar binding mode to rivaroxaban and forms a tight inhibitor-enzyme complex when connected to FXa. The p-methoxy group of apixaban connects to the S1 pocket of FXa but does not appear to have any interaction with any residues in this region of FXa. The pyrazole N-2 nitrogen atom of apixaban interacts with Gln-192 and the carbonyl oxygen interacts with Gly-216. The phenyl lactam group of apixaban is positioned between Tyr-99 and Phe-174 and, due to its orientation, it is able to interact with Trp-215 of the S4 pocket. The carbonyl oxygen group of the lactam moiety interacts with a water molecule and does not seem to interact with any residues in the S4 pocket. Structure-activity relationship (SAR) An important part of designing a compound that is an ideal inhibitor of a certain target is to understand the amino acid sequence of the target site to which the compound binds. Modelling both prothrombin and FXa makes it possible to deduce the differences and identify the amino acids at each binding site. At the bottom of the S1 pocket on FXa the binding amino acid is Asp-189, to which amidine moieties can bind. X-ray studies of the binding site of FXa revealed that the S1 pocket has a planar shape, meaning that a flat amidinoaryl group should bind to it without steric hindrance. Modern direct Xa inhibitors are L-shaped molecules whose ends fit perfectly in the S1 and S4 pockets. The long side of the L-shape has to conform to a highly specific tunnel within the target's active site. To accomplish that, this part of the molecules is designed to have little formal interaction with FXa in that region. As there is no specific bonding, the fit of these agents between the pockets of FXa increases the total specificity of the drugs for the FXa molecule. The interaction between the S1 pocket of FXa and the inhibitor can be either ionic or non-ionic, which is important because it allows the design of the moiety to be adjusted to increase oral bioavailability. Previously designed compounds were charged molecules that are not absorbed well in the gastrointestinal tract and therefore did not reach high serum concentrations. The newer drugs have better bioavailability as they are not charged and have a non-ionic interaction with the S1 pocket. Rivaroxaban During the SAR development of rivaroxaban, researchers realized that adding a 5-chlorothiophene-2-carboxamide group to the oxazolidinone core could increase the potency 200-fold; the potency had previously been too weak for medical use. In addition to this discovery, a clear preference for the (S)-configuration was confirmed. This compound had a promising pharmacokinetic profile and did not contain a highly basic amidine group, which had previously been considered important for the interaction with the S1 pocket. These findings led to extensive SAR (structure-activity relationship) studies. During the SAR testing, R1 was identified as the most important group for potency. Pyrrolidinone was the first R1 functional group to significantly increase the potency, but further research revealed even higher potency with a morpholinone group instead. Groups R2 and R3 had hydrogen or fluorine attached, and it was quickly assessed that having hydrogen resulted in the highest potency. Groups R2 and R3 were then substituted with various other groups, which were all less potent than hydrogen, so hydrogen was the final result. Apixaban During the SAR development of apixaban there were three groups that needed to be tested to attain maximum potency and bioavailability. The first group to be tested was the non-active site, as it needed to be stabilized before SAR testing on the p-methoxyphenyl group (the S1 binding moiety). There are several groups that increase the potency of the compound, mostly amides, amines and tetrazoles, but also methylsulfonyl and trifluoromethyl groups. Of these groups, carboxamide has the greatest binding and showed clotting activity similar to comparable compounds. In dog testing, this compound with a carboxamide group, called 13F, showed a good pharmacokinetic profile, with low clearance and adequate half-life and volume of distribution. Due to the success of finding a stabilizing group, SAR research for the S1 binding moiety (p-methoxyphenyl) was discontinued. In the S4 binding group, N-methylacetyl and lactam analogues proved to have very high binding affinity for FXa, and showed good clotting activity and selectivity versus other proteases. Orientation turned out to be important, as N-methylacetyl, compared to acetamide, had a 300-fold lower binding ability to FXa due to unfavorable planarity close to the S4 region binding site. 
As the chlorothiophene moiety had inadequate water solubility, substituting it with another group was attempted but was unsuccessful. The chlorothiophene moiety binds to Tyr-228 at the bottom of the S1 pocket, making it a key factor in binding to FXa. Rivaroxaban has both high affinity and good bioavailability. Synthesis Rivaroxaban Rivaroxaban chemically belongs to the group of N-aryloxazolidinones. Other drugs of that group are linezolid and tedizolid, both of which are antibiotics. A synthesis of N-aryloxazolidinones starting with an O-silyl protected ethyl (2,3-dihydroxypropyl)carbamate was published in 2016. In a one-pot reaction the carbamate cyclizes to a 2-oxazolidinone ring under slightly basic conditions, while simultaneously the oxazolidinone nitrogen is arylated under copper catalysis. For rivaroxaban in particular, 3-morpholinone substitutes the iodine in the para position of the benzene ring, again under copper catalysis. Afterwards, the silyl protecting group is removed and the resulting alcohol is replaced by an amino group, which is then acylated in the last step. An industrial preparation of rivaroxaban was registered as a patent by Bayer Healthcare in 2005. It starts from N-(4-aminophenol)-morpholinone, which is alkylated by a propylene oxide derivative that also contains a primary amine masked by a phthalimide protecting group. Next, a phosgene equivalent is added to form the 2-oxazolidinone ring and the phthalimide is removed. The free amine can now be acylated, which leads to rivaroxaban. However, according to the patent the synthesis has "various disadvantages in the reaction management which has particularly unfavourable effects for preparation". The patent also describes another synthesis starting from a chlorothiophene derivative that would be more suitable for the industrial process, but points out that toxic solvents or reagents have to be removed from the final product. Therefore, this route is not an alternative. Various other synthetic pathways to rivaroxaban have been described. Apixaban The first full synthesis of apixaban was published in 2007. The key step of this reaction is a (3+2) cycloaddition of a p-methoxyphenyl chlorohydrazone derivative and a p-iodophenyl-morpholino-dihydropyridine derivative. 
After the subsequent elimination of HCl and morpholine, the iodine is substituted by 2-piperidinone under copper catalysis, and the ethyl ester is converted to an amide (aminolysis). This reaction was registered as a patent in 2009. Clinical use Direct factor Xa inhibitors are being used clinically and their usage is constantly increasing. They are gradually taking over from warfarin and low molecular weight heparins (LMWHs). The indication for Xa inhibitors is the prevention of deep vein thrombosis (DVT), which can lead to pulmonary embolism. They are also used in atrial fibrillation to lower the risk of stroke caused by a blood clot. Another indication is prophylactic treatment for blood clotting (thrombosis) due to atherosclerosis. Rivaroxaban was the first FXa inhibitor on the market, followed by apixaban, edoxaban and betrixaban. Pharmacokinetics Future perspectives Direct Xa inhibitors in clinical trials Rivaroxaban, apixaban, edoxaban and betrixaban are already on the market. As of October 2016, several new direct Xa inhibitors have entered clinical trials. These are letaxaban from Takeda and eribaxaban from Pfizer. Antidotes Andexxa (andexanet alfa) from Portola Pharmaceuticals is a recombinant protein that is given intravenously. It works as an antidote to all direct and indirect FXa inhibitors. Andexxa acts as a decoy receptor for Xa inhibitors. References Direct Xa inhibitors Drug discovery
Discovery and development of direct Xa inhibitors
[ "Chemistry", "Biology" ]
5,368
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
51,719,045
https://en.wikipedia.org/wiki/Propylene%20chlorohydrin
Propylene chlorohydrin usually refers to the organic compound with the formula CH3CH(OH)CH2Cl. A related compound, an isomer, is CH3CH(Cl)CH2OH. Both isomers are colorless liquids that are soluble in organic solvents. They are classified as chlorohydrins. Both are generated on a large scale as intermediates in the production of propylene oxide. The reaction of an aqueous solution of chlorine with propene gives a 10:1 ratio of CH3CH(OH)CH2Cl and CH3CH(Cl)CH2OH. These compounds are treated with lime to give propylene oxide, which is useful in the production of plastics and other polymers. References Organochlorides Commodity chemicals
Propylene chlorohydrin
[ "Chemistry" ]
166
[ "Commodity chemicals", "Products of chemical industry" ]
51,719,806
https://en.wikipedia.org/wiki/Parallel%20redrawing
In geometric graph theory, and the theory of structural rigidity, a parallel redrawing of a graph drawing with straight edges in the Euclidean plane or higher-dimensional Euclidean space is another drawing of the same graph such that all edges of the second drawing are parallel to their corresponding edges in the first drawing. A parallel morph of a graph is a continuous family of drawings, all parallel redrawings of each other. Parallel redrawings include translations, scaling, and other modifications of the drawing that change it more locally. For instance, for graphs drawn as the vertices or edges of a simple polyhedron, a parallel redrawing can be obtained by translating the plane of one of the polyhedron's faces, and adjusting the positions of the vertices and edges that border that face. A polyhedron is said to be tight if its only parallel redrawings are similarities (combinations of translation and scaling); among the Platonic solids, the cube and dodecahedron are not tight (because of the possibility of translating one face while keeping the others fixed), but the tetrahedron, octahedron, and icosahedron are tight. In three dimensions, even for drawings where all edges are axis-parallel and the drawing forms the boundary of a polyhedron, there may exist parallel redrawings that cannot be connected by a parallel morph. For two-dimensional planar drawings, with parallel edges required to preserve their orientation, a morph always exists when the slope number is two, but it is NP-hard to determine the existence of a morph for three or more slopes. Any parallel morph can be parameterized so that each point moves with constant speed along a line. The graphs that remain planar throughout such a motion can be derived from pseudotriangulations. In structural rigidity, the existence of (infinitesimal) parallel redrawings of a structural framework is dual to the existence of an infinitesimal motion, one that preserves its edge lengths but not their orientations. Thus, a framework has one kind of motion if it has the other kind, but detecting the existence of a parallel redrawing may be easier than detecting the existence of an infinitesimal motion. References Geometric graph theory Graph drawing Mathematics of rigidity
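The parallel-redrawing condition is linear, which makes it easy to test computationally. The following minimal sketch (an illustration, not taken from the cited literature; the function name and example are invented here) encodes, for each edge of a 2D drawing, the constraint that the redrawn edge stay parallel to the original one, and reports the dimension of the resulting null space; a drawing whose only parallel redrawings are similarities yields dimension 3 (two translations plus scaling).

```python
# Sketch: the space of parallel redrawings of a 2D straight-line drawing.
# For each edge (u, v), a redrawing q must satisfy
#     cross(q[v] - q[u], p[v] - p[u]) = 0,
# one linear equation per edge in the 2n unknown coordinates of q.
import numpy as np

def parallel_redrawing_dimension(points, edges):
    """Dimension of the linear space of parallel redrawings."""
    n = len(points)
    rows = []
    for u, v in edges:
        d = points[v] - points[u]              # direction of original edge
        row = np.zeros(2 * n)
        # cross(q_v - q_u, d) = (q_v - q_u)_x * d_y - (q_v - q_u)_y * d_x
        row[2 * v] += d[1]; row[2 * v + 1] -= d[0]
        row[2 * u] -= d[1]; row[2 * u + 1] += d[0]
        rows.append(row)
    A = np.array(rows)
    return 2 * n - np.linalg.matrix_rank(A)

# A triangle admits only similarities as parallel redrawings:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(parallel_redrawing_dimension(pts, [(0, 1), (1, 2), (2, 0)]))  # 3
```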
Parallel redrawing
[ "Physics", "Mathematics" ]
464
[ "Mathematics of rigidity", "Graph theory", "Mathematical relations", "Mechanics", "Geometric graph theory" ]
62,161,313
https://en.wikipedia.org/wiki/Blastoid%20%28embryoid%29
A blastoid is an embryoid, a stem cell-based embryo model which morphologically and transcriptionally resembles the early, pre-implantation, mammalian conceptus, called the blastocyst. The first blastoids were created by the Nicolas Rivron laboratory by combining mouse embryonic stem cells and mouse trophoblast stem cells. Upon in vitro development, blastoids generate analogs of the primitive endoderm cells, thus comprising analogs of the three founding cell types of the conceptus (epiblast, trophoblast and primitive endoderm), and recapitulate aspects of implantation upon being introduced into the uterus of a compatible female. Mouse blastoids have not shown the capacity to support the development of a foetus and are thus generally not considered an embryo but rather a model. As compared to other stem cell-based embryo models (e.g., gastruloids), blastoids model the preimplantation stage and the integrated development of the conceptus, including the embryo proper and the two extraembryonic tissues (trophectoderm and primitive endoderm). The blastoid is a model system for the study of mammalian development and disease. It might be useful for the identification of therapeutic targets and preclinical modelling. References Stem cells Tissue engineering Animal developmental biology
Blastoid (embryoid)
[ "Chemistry", "Engineering", "Biology" ]
281
[ "Biological engineering", "Cloning", "Chemical engineering", "Biotechnology stubs", "Tissue engineering", "Medical technology" ]
54,547,057
https://en.wikipedia.org/wiki/EPS%20Statistical%20and%20Nonlinear%20Physics%20Prize
The EPS Statistical and Nonlinear Physics Prize is a biennial award of the European Physical Society (EPS), given since 2017. Its aim is to recognize outstanding research contributions in the areas of statistical physics, nonlinear physics, complex systems, and complex networks. Early Career Recipients Senior Recipients See also List of physics awards References Awards of the European Physical Society Statistical mechanics
EPS Statistical and Nonlinear Physics Prize
[ "Physics" ]
72
[ "Statistical mechanics" ]
54,550,458
https://en.wikipedia.org/wiki/Trophic%20coherence
Trophic coherence is a property of directed graphs (or directed networks). It is based on the concept of trophic levels used mainly in ecology, but which can be defined for directed networks in general and provides a measure of hierarchical structure among nodes. Trophic coherence is the tendency of nodes to fall into well-defined trophic levels. It has been related to several structural and dynamical properties of directed networks, including the prevalence of cycles and network motifs, ecological stability, intervality, and spreading processes like epidemics and neuronal avalanches. Definition Consider a directed network defined by the adjacency matrix $A = (a_{ij})$, with $a_{ij} = 1$ if there is an edge from node $j$ to node $i$, and $a_{ij} = 0$ otherwise. Each node $i$ can be assigned a trophic level $s_i$ according to $$s_i = 1 + \frac{1}{k_i^{\mathrm{in}}} \sum_j a_{ij} s_j,$$ where $k_i^{\mathrm{in}} = \sum_j a_{ij}$ is $i$'s in-degree, and nodes with $k_i^{\mathrm{in}} = 0$ (basal nodes) have $s_i = 1$ by convention. Each edge has an associated trophic difference, defined as $x_{ij} = s_i - s_j$. The trophic coherence of the network is a measure of how tightly peaked the distribution of trophic differences, $p(x)$, is around its mean value, which is always $\langle x \rangle = 1$. This can be captured by an incoherence parameter $q$, equal to the standard deviation of $p(x)$: $$q = \sqrt{\frac{1}{L} \sum_{ij} a_{ij} x_{ij}^2 - 1},$$ where $L$ is the number of edges in the network. The figure shows two networks which differ in their trophic coherence. The position of the nodes on the vertical axis corresponds to their trophic level. In the network on the left, nodes fall into distinct (integer) trophic levels, so the network is maximally coherent ($q = 0$). In the one on the right, many of the nodes have fractional trophic levels, and the network is more incoherent ($q > 0$). Trophic coherence in nature The degree to which empirical networks are trophically coherent (or incoherent) can be investigated by comparison with a null model. This is provided by the basal ensemble, which comprises networks in which all non-basal nodes have the same proportion of basal nodes for in-neighbours. Expected values in this ensemble converge to those of the widely used configuration ensemble in the limit of large $N$ and $L$ (with $N$ and $L$ the numbers of nodes and edges), and can be shown numerically to be a good approximation for finite random networks. The basal ensemble expectation for the incoherence parameter is $$\tilde{q} = \sqrt{\frac{L}{L_B} - 1},$$ where $L_B$ is the number of edges connected to basal nodes. The ratio $q/\tilde{q}$ measured in empirical networks reveals whether they are more or less coherent than the random expectation. For instance, Johnson and Jones find in a set of networks that food webs are significantly coherent ($q/\tilde{q} < 1$), metabolic networks are significantly incoherent ($q/\tilde{q} > 1$), and gene regulatory networks are close to the random expectation ($q/\tilde{q} \approx 1$). Trophic levels and node function There is as yet little understanding of the mechanisms which might lead to particular kinds of networks becoming significantly coherent or incoherent. However, in systems which present correlations between trophic level and other features of nodes, processes which tended to favour the creation of edges between nodes with particular characteristics could induce coherence or incoherence. In the case of food webs, predators tend to specialise on consuming prey with certain biological properties (such as size, speed or behaviour) which correlate with their diet, and hence with trophic level. This has been suggested as the reason for food-web coherence. However, food-web models based on a niche axis do not reproduce realistic trophic coherence, which may mean either that this explanation is insufficient, or that several niche dimensions need to be considered. The relation between trophic level and node function can be seen in networks other than food webs. 
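Before turning to examples, the definitions above can be made concrete with a short computational sketch (an illustration, not taken from the cited papers; the helper names are invented here). It solves the linear system for the trophic levels and then evaluates the incoherence parameter $q$.

```python
# Sketch: trophic levels and incoherence parameter of a directed network.
# Edge (u, v) means "energy flows from u to v" (v receives the edge).
import numpy as np

def trophic_levels(n, edges):
    """Solve s_i = 1 + (1/k_i) * sum of s over i's in-neighbours."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[v, u] = 1.0                     # v has an in-edge from u
    k_in = A.sum(axis=1)
    M = np.eye(n)
    for i in range(n):
        if k_in[i] > 0:
            M[i] -= A[i] / k_in[i]        # s_i - (1/k_i) sum_j a_ij s_j = 1
    return np.linalg.solve(M, np.ones(n))  # basal rows reduce to s_i = 1

def incoherence(n, edges):
    s = trophic_levels(n, edges)
    x = np.array([s[v] - s[u] for u, v in edges])  # trophic differences
    return np.sqrt(np.mean(x**2) - 1.0)            # the mean of x is 1

# A perfectly coherent chain, then the same chain with a "shortcut" edge:
print(incoherence(4, [(0, 1), (1, 2), (2, 3)]))          # 0.0
print(incoherence(4, [(0, 1), (1, 2), (2, 3), (0, 2)]))  # about 0.354
```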
The figure shows a word adjacency network derived from the book Green Eggs and Ham, by Dr. Seuss. The height of nodes represents their trophic levels (according here to the edge direction which is the opposite of that suggested by the arrows, which indicate the order in which words are concatenated in sentences). The syntactic function of words is also shown with node colour. There is a clear relationship between syntactic function and trophic level: the mean trophic level of common nouns (blue) is clearly separated from that of verbs (red). This example illustrates how trophic coherence or incoherence might emerge from node function, and also that the trophic structure of networks provides a means of identifying node function in certain systems. Generating trophically coherent networks There are various ways of generating directed networks with specified trophic coherence, all based on gradually introducing new edges to the system in such a way that the probability of each new candidate edge being accepted depends on the expected trophic difference it would have. The preferential preying model is an evolving network model similar to the Barabási–Albert model of preferential attachment, but inspired by an ecosystem that grows through immigration of new species. One begins with $B$ basal nodes and proceeds to introduce new nodes up to a total of $N$. Each new node $i$ is assigned a first in-neighbour $j$ (a prey species in the food-web context) and a new edge is placed from $j$ to $i$. The new node is given a temporary trophic level $s_i^{\mathrm{tmp}} = s_j + 1$. Then a further $\kappa_i - 1$ new in-neighbours are chosen for $i$ from among those in the network according to their trophic levels. Specifically, for a new candidate in-neighbour $l$, the probability of being chosen is a function of the trophic distance $s_i^{\mathrm{tmp}} - s_l$. Johnson et al use a probability which decays with the magnitude of $s_i^{\mathrm{tmp}} - s_l - 1$, controlled by a parameter $T$ which tunes the trophic coherence: for $T \to 0$ maximally coherent networks are generated, and $q$ increases monotonically with $T$ for $T > 0$. The choice of $\kappa_i$ is arbitrary. One possibility is to set $\kappa_i$ to $x_i n_i$, where $n_i$ is the number of nodes already in the network when $i$ arrives, and $x_i$ is a random variable drawn from a Beta distribution with parameters chosen so that the expected number of edges equals the desired number $L$. This way, the generalised cascade model is recovered in the limit $T \to \infty$, and the degree distributions are as in the niche model and generalised niche model. This algorithm, as described above, generates networks with no cycles (except for self-cycles, if the new node is itself considered among its candidate in-neighbours). In order for cycles of all lengths to be possible, one can consider new candidate edges in which the new node is the in-neighbour as well as those in which it would be the out-neighbour. The probability of acceptance of these edges then depends, in the same way, on the corresponding trophic distance. The generalised preferential preying model is similar to the one described above, but has certain advantages. In particular, it is more analytically tractable, and one can generate networks with a precise number of edges $L$. The network begins with $B$ basal nodes, and then a further $N - B$ new nodes are added in the following way. When each enters the system, it is assigned a single in-neighbour randomly from among those already there. Every node then has an integer temporary trophic level $\tilde{s}_i$. The remaining $L - (N - B)$ edges are introduced as follows. Each pair of nodes $(i, j)$ has two temporary trophic distances associated, $\tilde{x}_{ij} = \tilde{s}_i - \tilde{s}_j$ and $\tilde{x}_{ji} = \tilde{s}_j - \tilde{s}_i$. Each of these candidate edges is accepted with a probability that depends on this temporary distance. 
Klaise and Johnson use a Gaussian acceptance probability, $$P_{ij} \propto \exp\left(-\frac{(\tilde{x}_{ij} - 1)^2}{2 T^2}\right),$$ because they find the distribution of trophic distances in several kinds of networks to be approximately normal, and this choice leads to a range of the parameter $T$ in which the resulting incoherence is approximately $q \simeq T$. Once all the edges have been introduced, one must recalculate the trophic levels of all nodes, since these will differ from the temporary ones originally assigned unless $T \to 0$. As with the preferential preying model, the average incoherence parameter $q$ of the resulting networks is a monotonically increasing function of $T$ for $T > 0$. The figure above shows two networks with different trophic coherence generated with this algorithm. References External links Why don't large ecosystems just collapse? Looplessness Trophic Coherence Could Help Solve the Mystery of Coexistence within Complex Ecosystems Trophic coherence explains why networks have few feedback loops and high stability Samuel Johnson's website Nick Jones's website Ecology Directed graphs Graph theory
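A rough generative sketch in the spirit of the model just described (a simplification under stated assumptions, not the authors' exact algorithm; in particular the acceptance weight and sampling scheme here are illustrative) assigns temporary levels as nodes arrive and then samples the remaining edges with a Gaussian weight centred on trophic distance 1, with T tuning the coherence.

```python
# Sketch of a generalised preferential-preying style generator.
import numpy as np

def gppm(n_basal, n_total, n_edges, T, rng=np.random.default_rng(0)):
    s_tmp = [1.0] * n_basal                  # basal nodes at level 1
    edges = set()
    for i in range(n_basal, n_total):
        j = int(rng.integers(0, i))          # one random in-neighbour
        edges.add((j, i))
        s_tmp.append(s_tmp[j] + 1.0)         # temporary level: prey + 1
    s = np.array(s_tmp)
    # Candidate edges (u, v): Gaussian weight on the temporary trophic
    # distance s_v - s_u (the exact form used by Klaise and Johnson may
    # differ in detail from this sketch).
    cand = [(u, v) for u in range(n_total) for v in range(n_total)
            if u != v and (u, v) not in edges]
    w = np.array([np.exp(-((s[v] - s[u]) - 1.0) ** 2 / (2 * T ** 2))
                  for u, v in cand])
    picks = rng.choice(len(cand), size=n_edges - len(edges),
                       replace=False, p=w / w.sum())
    edges.update(cand[k] for k in picks)
    return sorted(edges)

# Small T gives nearly layered (coherent) networks; large T, incoherent ones.
print(len(gppm(3, 15, 40, T=0.1)))   # 40 edges, as requested
```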
Trophic coherence
[ "Mathematics", "Biology" ]
1,610
[ "Discrete mathematics", "Ecology", "Graph theory", "Combinatorics", "Mathematical relations" ]
77,506,133
https://en.wikipedia.org/wiki/3-Phenoxymandelonitrile
3-Phenoxymandelonitrile (also 3-phenoxy-α-cyanobenzyl alcohol) is an organic compound belonging to the group of cyanohydrins. It is primarily used in the synthesis of pyrethroids, a class of insecticides. Production The synthesis of 3-phenoxymandelonitrile begins with the reaction of 3-phenoxybenzaldehyde with sodium cyanide and acetic anhydride in a water/dichloromethane mixture, using benzyltriethylammonium chloride as a phase transfer catalyst. This reaction initially produces the acetate, which can be hydrolyzed enzymatically with a suitable lipase to yield enantiomerically pure (S)-3-phenoxymandelonitrile through chiral resolution. The desired product can be extracted at this stage. The remaining enantiomeric acetate can undergo racemization via reaction with triethylamine in toluene or diisopropyl ether to improve yield. An alternative synthesis involves transferring a cyano group from acetone cyanohydrin to 3-phenoxybenzaldehyde. Again, enzymatic reactions through an ester can be used to produce the enantiomerically pure compound. Use (S)-3-Phenoxymandelonitrile serves as an important intermediate in the production of various pyrethroids, which are carboxylic acid esters incorporating the compound as an alcohol component, and are employed as insecticides. Notable examples within this group include deltamethrin and esfenvalerate. The presence of the 3-phenoxy group and the nitrile group enhances the efficacy of these compounds compared to other pyrethroids. References Benzyl compounds Nitriles Secondary alcohols
3-Phenoxymandelonitrile
[ "Chemistry" ]
386
[ "Nitriles", "Functional groups" ]
77,509,282
https://en.wikipedia.org/wiki/NGC%207769
NGC 7769 is an unbarred spiral galaxy in the constellation of Pegasus. Its velocity with respect to the cosmic microwave background is 3855 ± 25 km/s, which corresponds to a Hubble distance of 56.85 ± 4 Mpc (∼185 million light-years). It was discovered by the German-British astronomer William Herschel on 18 September 1784. The galaxies NGC 7769, NGC 7770 and NGC 7771 are listed together as Holm 820 in Erik Holmberg's A Study of Double and Multiple Galaxies Together with Inquiries into some General Metagalactic Problems, published in 1937. NGC 7769 is also listed as part of the five-member NGC 7771 Group (also known as LGG 483), which contains the three galaxies from Holm 820, NGC 7786, and UGC 12828. NGC 7769 is a LINER galaxy, i.e. it has a type of nucleus that is defined by its spectral line emission, which comes from weakly ionized or neutral atoms, while the spectral line emission from strongly ionized atoms is relatively weak. Supernovae Three supernovae have been observed in NGC 7769: SN 2019iex (type II, mag. 17.6) was discovered by the Searches After Gravitational-waves using ARizona Observatories (SAGUARO) project on 26 June 2019. SN 2022mxv (type II, mag. 18.249) was discovered by ATLAS on 18 June 2022. SN 2024grb (type II, mag. 18.2) was discovered by ATLAS on 16 April 2024. See also List of NGC objects (7001–7840) References External links Pegasus (constellation) Discoveries by William Herschel Unbarred spiral galaxies LINER galaxies Markarian galaxies
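As a quick consistency check (back-of-envelope arithmetic, not taken from the article's sources), the quoted distance follows from Hubble's law d = v/H0; the Hubble-constant value below is inferred from the two quoted numbers, not stated by the article.

```python
# Hubble's law: d = v / H0 links the article's two quoted numbers.
v = 3855.0        # km/s, velocity relative to the CMB
d = 56.85         # Mpc, quoted Hubble distance
print(v / d)      # ~67.8 km/s/Mpc, the implied Hubble constant
print(d * 3.262)  # ~185 million light-years, matching the article
```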
NGC 7769
[ "Astronomy" ]
394
[ "Pegasus (constellation)", "Constellations" ]
77,512,918
https://en.wikipedia.org/wiki/Kieron%20Burke
Kieron Burke is a professor known for his work in the field of quantum mechanics, particularly in developing and advancing density functional theory (DFT). He holds joint appointments as a distinguished professor in the Departments of Chemistry and Physics at the University of California, Irvine (UCI). Career and research Density functional theory Burke's primary research focus is on density functional theory (DFT), a computational quantum mechanical modeling method used to investigate the electronic structure of many-body systems, particularly atoms, molecules, and condensed phases. DFT has become an essential tool in chemistry and materials science due to its balance of accuracy and computational efficiency. Burke has been instrumental in developing formalism, new approximations, and extensions of DFT to various scientific applications. Key contributions Kieron Burke has contributed significantly to several areas within DFT, including: PBE Functional Development: Contributed to the development of the Perdew-Burke-Ernzerhof (PBE) functional, which is widely used in computational chemistry and materials science. Adiabatic Connection Arguments: Played a role in developing the PBE0 hybrid functional, which combines DFT with Hartree-Fock theory. Thermal DFT: Advanced the understanding of DFT under thermal conditions, which is crucial for studying matter under extreme environments such as planetary interiors and fusion reactors. Machine Learning: Integrated machine learning techniques to improve the accuracy and efficiency of DFT calculations. Academic and professional recognition Burke is a fellow of several prestigious organizations, including the American Physical Society, the British Royal Society of Chemistry, and the American Association for the Advancement of Science. He has received numerous awards, including the International Journal of Quantum Chemistry Young Investigator Award and the Bourke Lectureship from the Royal Society of Chemistry. Outreach and education Kieron Burke is also known for his educational efforts and outreach activities. He has delivered lectures and tutorials on DFT around the world and is actively involved in mentoring students and postdoctoral researchers from various scientific disciplines, including chemistry, physics, mathematics, and computer science. Selected publications Burke has authored over 180 research papers in theoretical chemistry, physical chemistry, condensed matter physics, and surface and interface science. His work is highly cited, reflecting its impact on the scientific community. Some notable publications include: "Thermal Density Functional Theory in Context" - A comprehensive overview of thermal DFT and its applications. "Exact Conditions and Approximations in Density Functional Theory" - Discusses the theoretical foundations and practical approximations in DFT. References Fellows of the American Physical Society Living people Fellows of the American Association for the Advancement of Science 21st-century American chemists Theoretical chemists Computational chemists American academics University of California, Irvine faculty Year of birth missing (living people)
Kieron Burke
[ "Chemistry" ]
594
[ "Theoretical chemists", "American theoretical chemists" ]
77,513,494
https://en.wikipedia.org/wiki/C19H20O7
The molecular formula C19H20O7 (molar mass: 360.362 g/mol) may refer to: Barbatic acid Elephantopin
C19H20O7
[ "Chemistry" ]
50
[ "Isomerism", "Set index articles on molecular formulas" ]
77,513,732
https://en.wikipedia.org/wiki/Elastocaloric%20materials
Elastocaloric materials are a class of advanced materials that show a large change in temperature when mechanical stress is applied and then removed. This phenomenon, known as the elastocaloric effect, is the reversible thermal response of the material to mechanical loading and unloading. The effect is often caused by changes in entropy within the material's structure, which can be due to phase transformations or to reorientation of crystalline domains. Unlike conventional materials, elastocaloric materials can experience substantial temperature changes under mechanical stress, which makes them promising for solid-state refrigeration and heating applications. The relevance of elastocaloric materials lies in their potential to revolutionize the cooling and heating systems that are integral to modern life. Traditional cooling technologies, such as vapor-compression refrigeration, rely on harmful refrigerants that contribute to global warming and have significant energy consumption. Elastocaloric materials can potentially replace conventional systems, leading to reduced greenhouse gas emissions and lower energy usage. Elastocaloric Effect The elastocaloric effect is a complex thermomechanical phenomenon in which a material experiences a temperature change as a result of mechanical stress. When mechanical stress is applied to an elastocaloric material—through stretching, compressing, or bending—the material can either absorb heat from its surroundings (resulting in cooling) or release heat (resulting in heating). This effect arises primarily from a change in the material's entropy, often linked to a phase transition or to the reorientation of the material's crystalline structure. Shape memory alloys (SMAs) exhibit the elastocaloric effect, which in their case is closely connected to the reversible phase transition between different crystal structures, for example the transition from austenite to martensite. During this transition the entropy of the system changes, owing to the rearrangement of atoms and changes in internal energy. The transformation from a high-symmetry austenitic phase to a low-symmetry martensitic phase can either absorb or release latent heat, depending on whether the process is endothermic or exothermic. The temperature change (ΔT) depends on several factors: material composition - the specific elements and their concentrations in the alloy can significantly influence the phase transition temperature and the associated entropy change; microstructure - the size, distribution, and orientation of grains within the material can affect the mechanical properties and the efficiency of the phase transition; mechanical load - the type and magnitude of the applied stress determine the extent of the phase transition and, consequently, the temperature change. By controlling these factors, the elastocaloric effect can be finely tuned, allowing for the design of materials with tailored thermal responses for specific applications. Materials Elastocaloric materials are diverse and include a range of shape memory alloys (SMAs), which are among the most widely studied due to their pronounced phase transition properties. Notable examples include: Nickel-Titanium (NiTi) Alloys: Known for their excellent mechanical properties and significant temperature changes during the austenite-martensite phase transformation, NiTi alloys are highly efficient elastocaloric materials. 
Copper-Based Alloys: Alloys such as Cu-Zn-Al and Cu-Al-Ni have also shown promising elastocaloric properties, with the added benefit of being less expensive than NiTi. Iron-Based Alloys: These materials, including Fe-Pd and Fe-Ni, offer potential for elastocaloric applications, especially at lower temperatures. Elastomers and Ceramics: Some polymer-based materials and ceramics exhibit elastocaloric effects due to entropy changes associated with stretching or bending. These materials can provide unique advantages, such as flexibility and lower weight. The choice of material for elastocaloric applications depends on several criteria, including the desired operating temperature range, the required mechanical strength, the material's durability under cyclic loading (fatigue resistance), and cost considerations. Comparison with Other Caloric Effects The elastocaloric effect is part of a broader category of caloric effects that can be utilized for solid-state cooling technologies. Other notable caloric effects include: Magnetocaloric effect (MCE): This effect involves a temperature change in a material due to a change in magnetic field. It is based on the ability of magnetocaloric materials to undergo an entropy change when subjected to a magnetic field, which aligns magnetic domains and reduces entropy, leading to heating or cooling. Electrocaloric effect (ECE): Involves temperature changes due to the application or removal of an electric field. Electrocaloric materials experience a change in polarization and entropy when an electric field is applied, causing the material to either absorb or release heat. Compared to the magnetocaloric and electrocaloric effects, the elastocaloric effect offers several distinct advantages: No Need for External Fields: Elastocaloric materials do not need external magnetic or electric fields, which can be energy-intensive to generate and control; this makes elastocaloric systems potentially simpler and more cost-effective. Higher Temperature Changes: Elastocaloric materials can show larger temperature changes under applied mechanical stress than those produced by the magnetocaloric or electrocaloric effects, which can lead to higher cooling efficiencies. Material Diversity: A wide range of materials can exhibit elastocaloric properties, offering more options for specific applications and potentially lower material costs. These unique attributes make elastocaloric materials a promising avenue for developing next-generation cooling technologies that are more energy-efficient and environmentally friendly than current systems. References Refrigerants Phase transitions
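The size of the temperature change described above can be estimated with a standard thermodynamic relation for caloric materials (a textbook approximation, not a formula quoted by this article):

```latex
% Adiabatic temperature change driven by an isothermal specific-entropy
% change \Delta s at operating temperature T, for specific heat c_p:
\Delta T_{\mathrm{ad}} \approx -\frac{T \, \Delta s}{c_p}
```

Under this approximation, a large transition entropy and a small heat capacity favour a large elastocaloric temperature change.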
Elastocaloric materials
[ "Physics", "Chemistry" ]
1,178
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
47,587,093
https://en.wikipedia.org/wiki/Adult%20polyglucosan%20body%20disease
Adult polyglucosan body disease (APBD) is a rare monogenic glycogen storage disorder (GSD type IV) caused by an inborn error of metabolism. Symptoms can emerge any time after the age of 30. Early symptoms include trouble controlling urination, trouble walking, and lack of sensation in the legs. People eventually develop dementia. A person inherits loss-of-function mutations in the GBE1 gene from each parent, and the lack of glycogen branching enzyme (the protein encoded by GBE1) leads to buildup of unbranched glycogen in cells, which harms neurons more than other kinds of cells. Most people first go to the doctor due to trouble with urination. The condition is diagnosed by gathering symptoms, a neurological examination, laboratory tests including genetic testing, and medical imaging. As of 2024, there was no cure or treatment, but the symptoms could be managed. People diagnosed with APBD can live a long time after diagnosis, but will probably die earlier than people without the condition. Signs and symptoms Adult polyglucosan body disease is a condition that affects the nervous system. People with this condition have problems walking due to reduced sensation in their legs (peripheral neuropathy) and progressive muscle weakness and stiffness (spasticity). Damage to the nerves that control bladder function (neurogenic bladder) causes progressive difficulty in controlling the flow of urine. About half of people with adult polyglucosan body disease experience dementia. Most people with the condition first complain of bladder issues. People with adult polyglucosan body disease typically first experience signs and symptoms related to the condition between ages 30 and 60. Causes APBD is an autosomal recessive disorder caused when a person inherits from both parents copies of the gene GBE1 containing one or more loss-of-function mutations; GBE1 encodes glycogen branching enzyme, also called 1,4-alpha-glucan-branching enzyme. Mechanism The GBE1 gene provides instructions for making the glycogen branching enzyme. This enzyme is involved in the production of a complex sugar called glycogen, which is a major source of stored energy in the body. Most GBE1 gene mutations result in a shortage (deficiency) of the glycogen branching enzyme, which leads to the production of abnormal glycogen molecules. These abnormal glycogen molecules, called polyglucosan bodies, accumulate within cells and cause damage. Neurons appear to be particularly vulnerable to the accumulation of polyglucosan bodies in people with this disorder, leading to impaired neuronal function. Some mutations in the GBE1 gene that cause adult polyglucosan body disease do not result in a shortage of glycogen branching enzyme. In people with these mutations, the activity of this enzyme is normal. How mutations cause the disease in these individuals is unclear. Other people with adult polyglucosan body disease do not have identified mutations in the GBE1 gene. In these individuals, the cause of the disease is unknown. Diagnosis Along with evaluation of the symptoms and a neurological examination, a diagnosis can be made based on genetic testing. Whether or not a person is making sufficient amounts of functional glycogen branching enzyme can be determined by taking a skin biopsy and testing for activity of the enzyme. Examination of tissue biopsied from the sural nerve under a microscope can reveal the presence of polyglucosan bodies. There will also be white matter changes visible in magnetic resonance imaging scans. 
Classification Adult polyglucosan body disease is an orphan disease and a glycogen storage disorder that is caused by an inborn error of metabolism and affects the central and peripheral nervous systems. The condition in newborns caused by the same mutations is called glycogen storage disease type IV. Prevention APBD can only be prevented if parents undergo genetic screening to understand their risk of producing a child with the condition; if in vitro fertilization is used, then preimplantation genetic diagnosis can be done to identify fertilized eggs that do not carry two copies of mutated GBE1. Management As of 2024, there is no cure for APBD; instead symptoms are managed. There are various approaches to managing neurogenic bladder dysfunction, physical therapy and mobility aids can help with walking, and dementia can be managed with occupational therapy, counseling and drugs. Outcomes The rate of progression varies significantly from person to person. There is not good data on outcomes; it appears that APBD likely leads to earlier death, but people with APBD can live many years after diagnosis with relatively good quality of life. Epidemiology The prevalence is unknown; about 70 cases had been reported in the medical literature as of 2016. As of 2016, the largest set of case studies included 50 people; about 70% of them were of Ashkenazic Jewish descent. Society and culture Gregory Weiss, a person with APBD, created the Adult Polyglucosan Body Disease Research Foundation to fund research into the disease and its management. Research directions In 2015 the first transgenic mouse that appeared to be a useful model organism for studying APBD was published. See also GBE1 Glycogen storage disease type IV Neurogenic bladder dysfunction Peripheral neuropathy References External links Adult Polyglucosan Body Disease Research Foundation Inborn errors of carbohydrate metabolism Rare diseases
Adult polyglucosan body disease
[ "Chemistry" ]
1,127
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
47,589,429
https://en.wikipedia.org/wiki/Norgesterone
Norgesterone, also known as norvinodrel or vinylestrenolone and sold under the brand name Vestalin, is a progestin medication which was formerly used in birth control pills for women but is now no longer marketed. It was used in combination with the estrogen ethinylestradiol. It is taken by mouth. Norgesterone is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has no androgenic activity. Norgesterone was first described in 1962. It is no longer available. Medical uses Norgesterone was used in combination with ethinylestradiol in birth control pills to prevent pregnancy. It is no longer available. Pharmacology Pharmacodynamics Norgesterone is a progestogen, and hence is an agonist of the progesterone receptor. Unlike related progestins, it is virtually devoid of androgenic activity in animal assays. Chemistry Norgesterone, also known as 17α-vinyl-δ5(10)-19-nortestosterone or as 17α-vinylestr-5(10)-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone and 19-nortestosterone. Analogues of norgesterone include norvinisterone (17α-vinyl-19-nortestosterone) and vinyltestosterone (17α-vinyltestosterone). History Norgesterone was first described in 1962. Society and culture Generic names Norgesterone is the generic name of the drug and its INN (International Nonproprietary Name). Brand names Norgesterone was marketed in combination with ethinylestradiol, an estrogen, as a birth control pill under the brand name Vestalin. Availability Norgesterone is no longer marketed and hence is no longer available in any country. References Abandoned drugs Alkene derivatives Estranes Hormonal contraception Ketones Progestogens Vinyl compounds
Norgesterone
[ "Chemistry" ]
478
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
71,738,191
https://en.wikipedia.org/wiki/Hydrogen%20evolution%20reaction
Hydrogen evolution reaction (HER) is a chemical reaction that yields H2. The conversion of protons to H2 requires reducing equivalents and usually a catalyst. In nature, HER is catalyzed by hydrogenase enzymes. Commercial electrolyzers typically employ supported platinum as the catalyst at the cathode of the electrolyzer. HER is useful for producing hydrogen gas, providing a clean-burning fuel. HER, however, can also be an unwelcome side reaction that competes with other reductions such as nitrogen fixation, the electrochemical reduction of carbon dioxide, or chrome plating. HER in electrolysis HER is a key reaction in the electrolysis of water for the production of hydrogen, both for industrial energy applications and for small-scale laboratory research. Due to the abundance of water on Earth, hydrogen production by electrolysis poses a potentially scalable process for fuel generation. It is an alternative to steam methane reforming for hydrogen production, which has significant greenhouse gas emissions, and as such scientists are looking to improve and scale up electrolysis processes that have fewer emissions. Electrolysis Mechanism In acidic conditions, the hydrogen evolution reaction follows the equation: 2 H+ + 2 e− → H2 In neutral or alkaline conditions, the reaction follows the equation: 4 H2O + 4 e− → 2 H2 + 4 OH− Both of these mechanisms occur in industrial practice at the cathode side of the electrolyzer, where hydrogen evolution takes place. In acidic conditions the process is referred to as proton exchange membrane (PEM) electrolysis, while in alkaline conditions it is referred to simply as alkaline electrolysis. Historically, alkaline electrolysis has been the dominant method of the two, though PEM electrolysis has recently begun to grow due to the higher current density that can be achieved. Catalysts for HER The HER process is driven by electricity and requires a large energy input without a highly efficient catalyst, a chemical which lowers the activation energy of a reaction without being consumed. In alkaline electrolyzers, nickel- and iron-based catalysts for HER are typically used at the cathode. The alkalinity of the electrolyte in these processes enables the use of less expensive catalysts. In PEM electrolyzers, the standard catalyst for HER is platinum supported on carbon (Pt/C), used at the cathode. The performance of a catalyst can be characterized by the strength of hydrogen adsorption at binding sites on the metal surface, as well as by the overpotential of the reaction as current density increases. Challenges The high cost and energy input of water electrolysis pose a challenge to the large-scale implementation of hydrogen power. While alkaline electrolysis is commonly used, its limited current density requires a large electrical input, which poses both cost and environmental concerns due to the high carbon content of electricity in many countries, including the United States. The electrocatalysts used in PEM electrolyzers currently account for about 5% of the total process cost; however, as this process is scaled up, it is predicted that catalyst costs will rise due to scarcity and become a major factor in the cost of producing hydrogen. As such, low-cost, high-efficiency, and scalable alternative materials for the HER catalysts in PEM electrolyzers are a point of research interest for scientists. References Electrolysis Energy engineering
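As a small worked example of how the acidic and alkaline mechanisms relate (standard electrochemistry, not specific to this article), the equilibrium potential of the hydrogen couple shifts with pH according to the Nernst equation, E = E0 − (RT/F)·ln(10)·pH:

```python
# Nernst shift of the hydrogen equilibrium potential with pH.
R, T, F = 8.314, 298.15, 96485.0   # J/(mol K), K, C/mol
E0 = 0.0                           # V, standard hydrogen electrode
for pH in (0, 7, 14):
    E = E0 - (R * T / F) * 2.302585 * pH
    print(f"pH {pH:2d}: E = {E:+.3f} V vs SHE")
# pH 0: 0.000 V (acidic HER), pH 7: -0.414 V, pH 14: -0.828 V (alkaline HER)
```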
Hydrogen evolution reaction
[ "Chemistry", "Engineering" ]
702
[ "Electrochemistry", "Energy engineering", "Electrolysis" ]
71,739,731
https://en.wikipedia.org/wiki/Countable%20Borel%20relation
In descriptive set theory, specifically invariant descriptive set theory, countable Borel relations are a class of relations between standard Borel spaces which are particularly well behaved. This concept encapsulates various more specific concepts, such as that of a hyperfinite equivalence relation, but is of interest in and of itself. Motivation A main area of study in invariant descriptive set theory is the relative complexity of equivalence relations. An equivalence relation $F$ on a set $Y$ is considered at least as complex as an equivalence relation $E$ on a set $X$ if one can "compute $E$ using $F$" - formally, if there is a function $f : X \to Y$ which is well behaved in some sense (for example, one often requires that $f$ is Borel measurable) such that $x_1 \mathrel{E} x_2 \iff f(x_1) \mathrel{F} f(x_2)$ for all $x_1, x_2 \in X$. Such a function is called a reduction of $E$ to $F$. If this holds in both directions, so that one can both "compute $E$ using $F$" and "compute $F$ using $E$", then $E$ and $F$ have a similar level of complexity. When one talks about Borel equivalence relations and requires $f$ to be Borel measurable, this is often denoted by $E \leq_B F$. Countable Borel equivalence relations, and relations of similar complexity in the sense described above, appear in various places in mathematics (see the examples below). In particular, the Feldman-Moore theorem described below proved useful in the study of certain von Neumann algebras. Definition Let $X$ and $Y$ be standard Borel spaces. A countable Borel relation between $X$ and $Y$ is a subset $R$ of the cartesian product $X \times Y$ which is a Borel set (as a subset in the product topology) and satisfies that for any $x \in X$, the section $R_x = \{ y \in Y : (x, y) \in R \}$ is countable. Note that this definition is not symmetric in $X$ and $Y$, and thus it is possible that a relation $R$ is a countable Borel relation between $X$ and $Y$ but the converse relation $R^{-1} = \{ (y, x) : (x, y) \in R \}$ is not a countable Borel relation between $Y$ and $X$. Examples A countable union of countable Borel relations is also a countable Borel relation. The intersection of a countable Borel relation with any Borel subset of $X \times Y$ is a countable Borel relation. If $f : X \to Y$ is a function between standard Borel spaces, the graph of the function is a countable Borel relation between $X$ and $Y$ if and only if $f$ is Borel measurable (this is a consequence of the Luzin-Suslin theorem). The converse relation of the graph, $\{ (f(x), x) : x \in X \}$, is a countable Borel relation if and only if $f$ is Borel measurable and has countable fibers. If $E$ is an equivalence relation on a standard Borel space, it is a countable Borel relation if and only if it is a Borel set and all equivalence classes are countable. In particular hyperfinite equivalence relations are countable Borel relations. The equivalence relation induced by the continuous action of a countable group is a countable Borel relation. As a concrete example, let $X$ be the set of subgroups of $F_2$, the free group of rank 2, with the topology generated by basic open sets of the form $\{ H : g \in H \}$ and $\{ H : g \notin H \}$ for some $g \in F_2$ (this is the product topology on the set of subsets of $F_2$). The equivalence relation identifying conjugate subgroups, induced by the action of $F_2$ on $X$ by conjugation, is then a countable Borel relation. Let $2^{\mathbb{N}}$ be the space of subsets of the naturals, again with the product topology (a basic open set is of the form $\{ A : n \in A \}$ or $\{ A : n \notin A \}$) - this is known as the Cantor space. The equivalence relation of Turing equivalence is a countable Borel equivalence relation. The isomorphism equivalence relations between various classes of models, while not being countable Borel equivalence relations, are of similar complexity to a countable Borel equivalence relation in the sense described above. Examples include: The class of countable graphs where the degree of each vertex is finite. The class of field extensions of finite transcendence degree over the rationals. 
The Luzin–Novikov theorem This theorem, named after Nikolai Luzin and his doctoral student Pyotr Novikov, is an important result used in many proofs about countable Borel relations. Theorem. Suppose $X$ and $Y$ are standard Borel spaces and $R$ is a countable Borel relation between $X$ and $Y$. Then the set $\operatorname{proj}_X(R) = \{ x \in X : R_x \neq \emptyset \}$ is a Borel subset of $X$. Furthermore, there is a Borel function $f : \operatorname{proj}_X(R) \to Y$ (known as a Borel uniformization) such that the graph of $f$ is a subset of $R$. Finally, there exist Borel subsets $A_n$ of $X$ and Borel functions $f_n : A_n \to Y$ such that $R$ is the union of the graphs of the $f_n$, that is, $R = \bigcup_n \operatorname{graph}(f_n)$. This has a couple of easy consequences: If $f : X \to Y$ is a Borel measurable function with countable fibers, the image of $f$ is a Borel subset of $Y$ (since the image is exactly $\{ y : R_y \neq \emptyset \}$, where $R$ is the converse relation of the graph of $f$). Assume $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Assume $A$ is a Borel subset of $X$. Then the saturation $[A]_E = \{ x \in X : x \mathrel{E} y \text{ for some } y \in A \}$ is also a Borel subset of $X$ (since this is precisely $\{ x : R_x \neq \emptyset \}$ where $R = E \cap (X \times A)$, and $R$ is a countable Borel relation). Below are two more results which can be proven using the Luzin–Novikov theorem, concerning countable Borel equivalence relations: Feldman–Moore theorem The Feldman–Moore theorem, named after Jacob Feldman and Calvin C. Moore, states: Theorem. Suppose $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Then there exists a countable group $G$ and an action of $G$ on $X$ such that for every $g \in G$ the function $x \mapsto g \cdot x$ is Borel measurable, and for any $x \in X$, the equivalence class of $x$ with respect to $E$ is exactly the orbit of $x$ under the action. That is to say, countable Borel equivalence relations are exactly those generated by Borel actions of countable groups. Marker lemma This lemma is due to Theodore Slaman and John R. Steel, and can be proven using the Feldman–Moore theorem: Lemma. Suppose $E$ is a Borel equivalence relation on a standard Borel space $X$ which has countable equivalence classes. Let $Z = \{ x \in X : \text{the equivalence class of } x \text{ is infinite} \}$. Then there is a decreasing sequence of Borel sets $S_0 \supseteq S_1 \supseteq \cdots$ such that $[S_n]_E \supseteq Z$ for all $n$ and $\bigcap_n S_n = \emptyset$. Less formally, the lemma says that the infinite equivalence classes can be approximated by "arbitrarily small" sets (for instance, if we have a Borel probability measure $\mu$ on $X$, the lemma implies that $\mu(S_n) \to 0$ by the continuity of the measure). References Descriptive set theory Binary relations
Countable Borel relation
[ "Mathematics" ]
1,207
[ "Mathematical relations", "Binary relations" ]
71,740,001
https://en.wikipedia.org/wiki/Random%20polytope
In mathematics, a random polytope is a structure commonly used in convex analysis and the analysis of linear programs in d-dimensional Euclidean space $\mathbb{R}^d$. Depending on the construction and definition used, random polytopes may differ. Definition There are multiple non-equivalent definitions of a random polytope. For the following definitions, let $K$ be a bounded convex set in a Euclidean space: The convex hull of random points selected with respect to a uniform distribution inside $K$. The nonempty intersection of random half-spaces in $\mathbb{R}^d$; a parameterization of the half-spaces by their defining linear constraints has been used (note: these polytopes can be empty). Properties definition 1 Let $\mathcal{K}$ be the set of convex bodies in $\mathbb{R}^d$. Assume $K \in \mathcal{K}$ and consider a set of $n$ uniformly distributed points $x_1, \dots, x_n$ in $K$. The convex hull of these points, $K_n = [x_1, \dots, x_n]$, is called a random polytope inscribed in $K$, where $[\,\cdot\,]$ stands for the convex hull of the set. We define $E(K, n)$ to be the expected volume of $K_n$. For $n$ large enough and a given $K$, the missed volume $\operatorname{vol}(K) - E(K, n)$ is of the same order of magnitude as the volume of the wet part of $K$ with parameter $1/n$. Note: One can determine the volume of the wet part to obtain the order of magnitude of the missed volume, instead of determining $E(K, n)$ directly. For the unit ball, the wet part is an annulus whose width $h$ is of order $n^{-2/(d+1)}$. Given points $x_1, \dots, x_d$ in $K$, one can consider the volume of the smaller cap cut off from $K$ by their affine hull $\operatorname{aff}(x_1, \dots, x_d)$; these points span a facet of $K_n$ if and only if the remaining random points are all on one side of $\operatorname{aff}(x_1, \dots, x_d)$. Note: If $f_{d-1}(K_n)$ denotes the number of $(d-1)$-dimensional faces of $K_n$, its expectation can be expressed in these terms, and the resulting formula can be evaluated for smooth convex sets and for polygons in the plane. Properties definition 2 Assume we are given a multivariate probability distribution on the constraint data that: Is absolutely continuous with respect to Lebesgue measure. Generates either 0 or 1 for the right-hand sides, with equal probability for each. Assigns a measure of 0 to the set of constraint data that correspond to empty polytopes. Given this distribution, and our assumptions, the following properties hold: A formula can be derived for the expected number of $k$-dimensional faces of a polytope in $\mathbb{R}^d$ with $n$ constraints. The upper bound, or worst case, for the number of vertices with $n$ constraints is much larger. The probability that a new constraint is redundant can be computed; as we add more constraints, the probability that a new constraint is redundant approaches 100%. A corresponding formula gives the expected number of non-redundant constraints. Example uses Minimal caps Macbeath regions Approximations (approximations of convex bodies; see properties of definition 1) Economic cap covering theorem (see relation from properties of definition 1 to floating bodies) References Metric geometry Convex analysis Computational geometry
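Definition 1 is easy to explore numerically. The following Monte Carlo sketch (an illustration, not taken from the article's references; the function name is invented here) estimates E(K, n) for K the unit square and shows the missed volume shrinking with n; for the square, classical results give the missed area the order log(n)/n.

```python
# Monte Carlo estimate of E(K, n) for K = the unit square in the plane.
import numpy as np
from scipy.spatial import ConvexHull

def expected_hull_volume(n, trials=500, rng=np.random.default_rng(1)):
    vols = []
    for _ in range(trials):
        pts = rng.random((n, 2))             # n uniform points in [0,1]^2
        vols.append(ConvexHull(pts).volume)  # in 2D, .volume is the area
    return np.mean(vols)

# Missed volume vol(K) - E(K, n) for increasing n:
for n in (10, 100, 1000):
    print(n, 1.0 - expected_hull_volume(n))
```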
Random polytope
[ "Mathematics" ]
527
[ "Computational geometry", "Computational mathematics" ]
71,742,331
https://en.wikipedia.org/wiki/VFTS%20243
VFTS 243 (2MASS J05380840-6909190) is an O7V-type main-sequence star that orbits a stellar-mass black hole. The black hole is around nine times the mass of the Sun, and the blue star is 25 times the mass of the Sun, making the star roughly 200,000 times larger than the black hole. VFTS 243 is located in the Large Magellanic Cloud, inside NGC 2070 (the Tarantula Nebula), around 160,000 light years from Earth. The binary has an orbital period of 10.4 days. References O-type main-sequence stars Stellar black holes Dorado Binary stars Stars in the Large Magellanic Cloud
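As a back-of-envelope illustration (arithmetic not taken from the article), Kepler's third law in solar units gives the binary's approximate separation from the quoted masses and period:

```python
# Kepler's third law in solar units: a^3 = (M1 + M2) * P^2,
# with a in AU, masses in solar masses, and P in years.
M_total = 25.0 + 9.0                 # star + black hole, solar masses
P = 10.4 / 365.25                    # orbital period in years
a = (M_total * P**2) ** (1 / 3)
print(f"separation a ~ {a:.2f} AU")  # roughly 0.3 AU
```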
VFTS 243
[ "Physics", "Astronomy" ]
145
[ "Black holes", "Stellar black holes", "Dorado", "Unsolved problems in physics", "Constellations" ]
57,855,428
https://en.wikipedia.org/wiki/Continent%20of%20stability
The continent of stability is a hypothesised large group of nuclides with masses greater than 300 daltons that are stable against radioactive decay, consisting of freely flowing up quarks and down quarks rather than up and down quarks bound into protons and neutrons. Matter containing these nuclides is termed up-down quark matter (udQM). The continent of stability is named in analogy with the island of stability. However, if it exists, the range of charge and mass will be much greater than in the island. Quark matter composed of up quarks and down quarks is predicted to be a lower energy state than that which contains strange quarks (strange quark matter), and also lower than the combination of quarks in the form of hadrons found in normal atomic nuclei if there are over 300 protons and neutrons. The lower limit of 300 was calculated based on a surface tension model, where the surface has a higher energy than the interior of the piece of quark matter. In order to be the absolutely stable form, the energy must be lower than that of the most stable normal matter, that is, 930 MeV per baryon. If these quark matter nuclides exist, they would be stable against fission, as fission would increase the surface. The quark matter nuclide could absorb neutrons, resulting in an increase in its mass. The boundary of the continent of stability is determined by the situations where the Coulomb energy due to electric charge overcomes the binding energy, or where decay into atomic nuclei results in lower energy. The lowest-energy mass number is proportional to the cube of the charge (atomic number). However, a range of charges is stable for each mass, and the range increases as the mass increases. This can result in very heavy nuclides with atomic numbers the same as existing known elements, and even zero-charge pieces of quark matter. A proposed alternative form of quark matter known as strangelets contains strange quarks in addition to the up and down quarks. This would be neutral in charge, and thus not form atoms. udQM is probably lower energy than strangelets (uds-matter). At the Large Hadron Collider, the ATLAS Collaboration is attempting to observe this kind of matter. Other properties Electron-positron pairs will form in the high charge field via the Schwinger mechanism when the electric charge of udQM is larger than 163, at which point the baryon number is 609. The smallest udQM stable against neutron emission would be at baryon number 39. Formation in nature udQM could possibly be formed during supernova core collapse by conversion of superheavy nuclei. In this environment there is a high density of electrons and electron neutrinos present. The udQM would then end up in neutron stars. udQM nuclides may be detectable in cosmic rays. A star containing a large proportion of udQM is called a ud quark star (or udQS). Heavy neutron stars may convert into this star type. Whether they do so may be verified by detecting collisions of binary compact stars via gravitational waves. References Periodic table Isotopes Hypothetical chemical elements Nuclear physics Quantum chromodynamics Quark matter
Continent of stability
[ "Physics", "Chemistry" ]
677
[ "Periodic table", "Quark matter", "Astrophysics", "Isotopes", "Nuclear physics" ]
57,855,538
https://en.wikipedia.org/wiki/Pregnanediol%20glucuronide
Pregnanediol glucuronide, or 5β-pregnane-3α,20α-diol 3α-glucuronide, is the major metabolite of progesterone and the C3α glucuronide conjugate of pregnanediol (5β-pregnane-3α,20α-diol). Approximately 15 to 30% of a parenteral dose of progesterone is metabolized into pregnanediol glucuronide. While this specific isomer is referred to as pregnanediol glucuronide and is the major form, there are actually many possible isomers of the metabolite. References Diols Glucuronide esters Human metabolites Pregnanes
Pregnanediol glucuronide
[ "Chemistry", "Biology" ]
169
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
57,855,710
https://en.wikipedia.org/wiki/Tungsten%28II%29%20chloride
Tungsten(II) chloride is the inorganic compound with the formula W6Cl12. It is a polymeric cluster compound. The material dissolves in concentrated hydrochloric acid, forming (H3O)2[W6Cl14](H2O)x. Heating this salt gives yellow-brown W6Cl12. The structural chemistry resembles that observed for molybdenum(II) chloride. Tungsten(II) chloride is prepared by reduction of the hexachloride. Bismuth is a typical reductant: 6 WCl6 + 8 Bi → W6Cl12 + 8 BiCl3 . References Tungsten halides Chlorides Octahedral compounds
Tungsten(II) chloride
[ "Chemistry" ]
148
[ "Chlorides", "Inorganic compounds", "Salts" ]
57,855,810
https://en.wikipedia.org/wiki/Tungsten%28III%29%20chloride
Tungsten(III) chloride is the inorganic compound with the formula W6Cl18. It is a cluster compound. It is a brown solid, obtainable by chlorination of tungsten(II) chloride. Featuring twelve doubly bridging chloride ligands, the cluster adopts a structure related to the corresponding chlorides of niobium and tantalum. In contrast, W6Cl12 features eight triply bridging chlorides. A related mixed valence W(III)-W(IV) chloride is prepared by reduction of the hexachloride with bismuth: 9 WCl6 + 8 Bi → 3 W3Cl10 + 8 BiCl3 References Tungsten halides Chlorides Octahedral compounds
Tungsten(III) chloride
[ "Chemistry" ]
157
[ "Chlorides", "Inorganic compounds", "Salts" ]
63,071,801
https://en.wikipedia.org/wiki/Wigner%E2%80%93Araki%E2%80%93Yanase%20theorem
The Wigner–Araki–Yanase theorem, also known as the WAY theorem, is a result in quantum physics establishing that the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. It is named for the physicists Eugene Wigner, Huzihiro Araki and Mutsuo Yanase. The theorem can be illustrated with a particle coupled to a measuring apparatus. If the position operator of the particle is $x$ and its momentum operator is $p$, and if the position and momentum of the apparatus are $X$ and $P$ respectively, assuming that the total momentum $p + P$ is conserved implies that, in a suitably quantified sense, the particle's position itself cannot be measured. The measurable quantity is its position relative to the measuring apparatus, represented by the operator $x - X$. The Wigner–Araki–Yanase theorem generalizes this to the case of two arbitrary observables $A$ and $L_1$ for the system and an observable $L_2$ for the apparatus, satisfying the condition that $L_1 + L_2$ is conserved. Mikko Tukiainen gave a generalized version of the WAY theorem, which makes no use of conservation laws, but uses quantum incompatibility instead. Yui Kuramochi and Hiroyasu Tajima proved a generalized form of the theorem for possibly unbounded and continuous conserved observables. References Quantum measurement
Wigner–Araki–Yanase theorem
[ "Physics" ]
277
[ "Quantum measurement", "Quantum mechanics", "Quantum physics stubs" ]
63,072,640
https://en.wikipedia.org/wiki/De%20numeris%20triangularibus%20et%20inde%20de%20progressionibus%20arithmeticis%3A%20Magisteria%20magna
De numeris triangularibus et inde de progressionibus arithmeticis: Magisteria magna is a 38-page mathematical treatise written in the early 17th century by Thomas Harriot, lost for many years, and finally published in facsimile form in 2009 in the book Thomas Harriot's Doctrine of Triangular Numbers: the "Magisteria Magna". Harriot's work dates from before the invention of calculus, and uses finite differences to accomplish many of the tasks that would later be made more easy by calculus. De numeris triangularibus Thomas Harriot wrote De numeris triangularibus et inde de progressionibus arithmeticis: Magisteria magna in the early 1600s, and showed it to his friends. By 1618 it was complete, but in 1621 Harriot died before publishing it. Some of its material was published posthumously, in 1631, as Artis analyticae praxis, but the rest languished in the British Library among many other pages of Harriot's works, and became forgotten until its rediscovery in the late 1700s. It was finally published in its entirety, as part of the 2009 book Thomas Harriot’s Doctrine of Triangular Numbers: the "Magisteria Magna". The title can be translated as "The Great Doctrine of triangular numbers and, through them, of arithmetic progressions". Harriot's work concerns finite differences, and their uses in interpolation for calculating mathematical tables for navigation. Harriot forms the triangular numbers through the inverse process to finite differencing, partial summation, starting from a sequence of constant value one. Repeating this process produces the higher-order binomial coefficients, which in this way can be thought of as generalized triangular numbers, and which give the first part of Harriot's title. Harriot's results were only improved 50 years later by Isaac Newton, and prefigure Newton's use of Newton polynomials for interpolation. As reviewer Matthias Schemmel writes, this work "shows what was possible in dealing with functional relations before the advent of the calculus". The work was written as a 38-page manuscript in Latin, and Harriot wrote it up as if for publication, with a title page. However, much of its content consists of calculations and formulas with very little explanatory text, leading at least some of Harriot's contemporaries such as Sir Charles Cavendish to complain of the difficulty of understanding it. Thomas Harriot’s Doctrine The monograph Thomas Harriot’s Doctrine of Triangular Numbers: the "Magisteria Magna", edited by Janet Beery and Jackie Stedall, was published in 2009 by the European Mathematical Society in their newly created Heritage of European Mathematics series. Its subject is De numeris triangularibus, and the third of its three sections consists of a facsimile reproduction of Harriot's manuscript, with each page facing a page of commentary by the editors, including translations of its Latin passages. The earlier parts of Beery and Stedall's book survey the material in Harriot's work, the context for this work, the chronology of its loss and recovery, and the effect of this work on the 17th-century mathematicians who read it. Although reviewer Matthias Schemmel suggests that the 2009 monograph is primarily aimed at historians of mathematics, who "will welcome this book as providing new insights into the development of mathematics", he suggests that it may also be of interest to other mathematicians and could pique their interest in the history of mathematics. 
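Harriot's constructions translate directly into a few lines of code. The sketch below (an illustration, not taken from the book) builds his generalized triangular numbers by repeated partial summation of a constant sequence, and inverts the process by finite differencing:

```python
# Partial summation of the constant sequence 1, 1, 1, ... yields the
# columns of binomial coefficients (generalized triangular numbers);
# finite differencing recovers the previous row.
from itertools import accumulate

def partial_sums(seq):
    return list(accumulate(seq))

def differences(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

row = [1] * 8
for order in range(4):
    print(order, row)
    row = partial_sums(row)
# 0: constants, 1: naturals, 2: triangular numbers (1, 3, 6, 10, ...),
# 3: tetrahedral numbers (1, 4, 10, 20, ...)

# Differencing inverts the construction, as in a finite-difference table:
print(differences([1, 3, 6, 10, 15]))  # [2, 3, 4, 5]
```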
References Factorial and binomial topics Finite differences Mathematics manuscripts Books about the history of mathematics 2009 non-fiction books
De numeris triangularibus et inde de progressionibus arithmeticis: Magisteria magna
[ "Mathematics" ]
770
[ "Mathematical analysis", "Factorial and binomial topics", "Finite differences", "Combinatorics" ]
63,077,136
https://en.wikipedia.org/wiki/Cartier%20isomorphism
In algebraic geometry, the Cartier isomorphism is a certain isomorphism between the cohomology sheaves of the de Rham complex of a smooth algebraic variety over a field of positive characteristic, and the sheaves of differential forms on the Frobenius twist of the variety. It is named after Pierre Cartier. Intuitively, it shows that de Rham cohomology in positive characteristic is a much larger object than one might expect. It plays an important role in the approach of Deligne and Illusie to the degeneration of the Hodge–de Rham spectral sequence. Statement Let $k$ be a field of characteristic $p > 0$, and let $f \colon X \to S$ be a morphism of $k$-schemes. Let $X^{(p)}$ denote the Frobenius twist and let $F \colon X \to X^{(p)}$ be the relative Frobenius. The Cartier map is defined to be the unique morphism
$$C^{-1} \colon \bigoplus_i \Omega^i_{X^{(p)}/S} \longrightarrow \bigoplus_i \mathcal{H}^i\big(F_* \Omega^\bullet_{X/S}\big)$$
of graded $\mathcal{O}_{X^{(p)}}$-algebras such that
$$C^{-1}\big(d(x \otimes 1)\big) = \big[x^{p-1}\,dx\big]$$
for any local section $x$ of $\mathcal{O}_X$. (Here, for the Cartier map to be well-defined in general it is essential that one takes cohomology sheaves for the codomain.) The Cartier isomorphism is then the assertion that the map $C^{-1}$ is an isomorphism if $f$ is a smooth morphism. In the above, we have formulated the Cartier isomorphism in the form it is most commonly encountered (e.g., in the 1970 paper of Katz). In his original paper, Cartier actually considered the inverse map in a more restrictive setting, whence the notation $C^{-1}$ for the Cartier map. The smoothness assumption is not essential for the Cartier map to be an isomorphism. For instance, one has it for ind-smooth morphisms since both sides of the Cartier map commute with filtered colimits. By Popescu's theorem, one then has the Cartier isomorphism for a regular morphism of noetherian $k$-schemes. Ofer Gabber has also proven a Cartier isomorphism for valuation rings. In a different direction, one can dispense with such assumptions entirely if one instead works with derived de Rham cohomology (now taking the associated graded of the conjugate filtration) and the exterior powers of the cotangent complex. References Algebraic geometry
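The defining formula hides a classical computation: that $x \mapsto [x^{p-1}\,dx]$ is additive modulo exact forms, which is the key check that the Cartier map is well defined. The identity below is a standard verification stated here in coordinates as an illustration (it is not quoted from the source); it is obtained by differentiating the integral polynomial $\big((x+y)^p - x^p - y^p\big)/p$ and reducing mod $p$:

```latex
% The coefficients \binom{p}{i}/p are integers for 0 < i < p, so the
% right-hand side is a well-defined exact form in characteristic p.
\[
  (x+y)^{p-1}\,d(x+y) \;-\; x^{p-1}\,dx \;-\; y^{p-1}\,dy
  \;=\; d\!\left(\sum_{i=1}^{p-1} \frac{1}{p}\binom{p}{i}\, x^{i} y^{p-i}\right).
\]
```

Consequently $[(x+y)^{p-1}\,d(x+y)] = [x^{p-1}\,dx] + [y^{p-1}\,dy]$ as classes in the degree-one cohomology sheaf, as the definition of $C^{-1}$ requires.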
Cartier isomorphism
[ "Mathematics" ]
455
[ "Fields of abstract algebra", "Algebraic geometry" ]
63,079,473
https://en.wikipedia.org/wiki/The%20Pursuit%20of%20Perfect%20Packing
The Pursuit of Perfect Packing is a book on packing problems in geometry. It was written by physicists Tomaso Aste and Denis Weaire and published in 2000 by Institute of Physics Publishing (doi:10.1887/0750306483), with a second edition published in 2008 by Taylor & Francis. Topics The mathematical topics described in the book include sphere packing (including the Tammes problem, the Kepler conjecture, and higher-dimensional sphere packing), the Honeycomb conjecture and the Weaire–Phelan structure, Voronoi diagrams and Delaunay triangulations, Apollonian gaskets, random sequential adsorption, and the physical realizations of some of these structures by sand, soap bubbles, the seeds of plants, and columnar basalt. A broader theme involves the contrast between locally ordered and locally disordered structures, and the interplay between local and global considerations in optimal packings. As well, the book includes biographical sketches of some of the contributors to this field, and histories of their work in this area, including Johannes Kepler, Stephen Hales, Joseph Plateau, Lord Kelvin, Osborne Reynolds, and J. D. Bernal. Audience and reception The book is aimed at a general audience rather than at professional mathematicians. Therefore, it avoids mathematical proofs and is otherwise not very technical. However, it contains pointers to the mathematical literature where readers more expert in these topics can find more detail. Avoiding proofs may have been a necessary decision, as some proofs in this area defy summarization: the proof by Thomas Hales of the Kepler conjecture on optimal sphere packing in three dimensions, announced shortly before the publication of the book and one of its central topics, is hundreds of pages long. Reviewer Johann Linhart complains that (in the first edition) some figures are inaccurately drawn. And although finding the book "entertaining and easy to read", William Satzer finds it "frustrating" for the lack of detail in its stories. Nevertheless, Linhart and reviewer Stephen Blundell highly recommend the book, and reviewer Charles Radin calls it "a treasure trove of intriguing examples" and "a real gem". And despite complaining about a format that mixes footnote markers into mathematical formulas, and the illegibility of some figures, Michael Fox recommends it to "any mathematics or science library". References Packing problems Mathematics books 2000 non-fiction books 2008 non-fiction books
The Pursuit of Perfect Packing
[ "Mathematics" ]
496
[ "Mathematical problems", "Packing problems" ]
63,083,789
https://en.wikipedia.org/wiki/Dupin%27s%20theorem
In differential geometry Dupin's theorem, named after the French mathematician Charles Dupin, is the statement: The intersection curve of any pair of surfaces of different pencils of a threefold orthogonal system is a curvature line. A threefold orthogonal system of surfaces consists of three pencils of surfaces such that any pair of surfaces out of different pencils intersect orthogonally. The simplest example of a threefold orthogonal system consists of the coordinate planes and their parallels. But this example is of no interest, because a plane has no curvature lines. A simple example with at least one pencil of curved surfaces: 1) all right circular cylinders with the z-axis as axis, 2) all planes which contain the z-axis, 3) all horizontal planes (see diagram). A curvature line is a curve on a surface which has at any point the direction of a principal curvature (maximal or minimal curvature). The set of curvature lines of a right circular cylinder consists of the set of circles (maximal curvature) and the lines (minimal curvature). A plane has no curvature lines, because any normal curvature is zero. Hence, only the curvature lines of the cylinder are of interest: a horizontal plane intersects a cylinder in a circle, and a vertical plane through the axis meets the cylinder in a pair of lines. The idea of threefold orthogonal systems can be seen as a generalization of orthogonal trajectories. Special examples are systems of confocal conic sections. Application Dupin's theorem is a tool for determining the curvature lines of a surface by intersection with suitable surfaces (see examples), without time-consuming calculation of derivatives and principal curvatures. The next example shows that the embedding of a surface into a threefold orthogonal system is not unique. Examples Right circular cone Given: A right circular cone, green in the diagram. Wanted: The curvature lines. 1. pencil: Shifting the given cone C with apex S along its axis generates a pencil of cones (green). 2. pencil: Cones with apexes on the axis of the given cone such that their lines are orthogonal to the lines of the given cone (blue). 3. pencil: Planes through the cone's axis (purple). These three pencils of surfaces form an orthogonal system of surfaces. The blue cones intersect the given cone C in a circle (red). The purple planes intersect it in the lines of cone C (green). Alternative with spheres The points of space can be described by spherical coordinates $(r, \theta, \varphi)$. Set $S = M =$ origin. 1. pencil: Cones with point S as apex whose axes are the axis of the given cone C (green): $\theta = \text{const}$. 2. pencil: Spheres centered at M = S (blue): $r = \text{const}$. 3. pencil: Planes through the axis of cone C (purple): $\varphi = \text{const}$. Torus 1. pencil: Tori with the same directrix (green). 2. pencil: Cones containing the directrix circle of the torus with apexes on the axis of the torus (blue). 3. pencil: Planes containing the axis of the given torus (purple). The blue cones intersect the torus in horizontal circles (red). The purple planes intersect it in vertical circles (green). The curvature lines of a torus generate a net of orthogonal circles. A torus contains more circles: the Villarceau circles, which are not curvature lines. Surface of revolution Usually a surface of revolution is determined by a generating plane curve (meridian). Rotating the meridian around the axis generates the surface of revolution. The method used for a cone and a torus can be extended to a surface of revolution: 1. pencil: Parallel surfaces to the given surface of revolution. 2. pencil: Cones with apices on the axis of revolution with generators orthogonal to the given surface (blue). 3. pencil: Planes containing the axis of revolution (purple). The cones intersect the surface of revolution in circles (red). The purple planes intersect it in meridians (green). Hence: The curvature lines of a surface of revolution are the horizontal circles and the meridians. Confocal quadrics The article confocal conic sections deals with confocal quadrics, too. They are a prominent example of a non-trivial orthogonal system of surfaces. Dupin's theorem shows that the curvature lines of any of the quadrics can be seen as the intersection curves with quadrics out of the other pencils (see diagrams). Confocal quadrics are never rotational quadrics, so the result on surfaces of revolution (above) cannot be applied. The curvature lines are in general curves of degree 4. (Curvature lines of rotational quadrics are always conic sections!) Ellipsoid (see diagram) Semi-axes $a > b > c$. The curvature lines are sections with one-sheeted (blue) and two-sheeted (purple) hyperboloids. The red points are umbilic points. Hyperboloid of one sheet (see diagram) Semi-axes $a, b, c$. The curvature lines are intersections with ellipsoids (blue) and hyperboloids of two sheets (purple). Dupin cyclides A Dupin cyclide and its parallels are determined by a pair of focal conic sections. The diagram shows a ring cyclide together with its focal conic sections (ellipse: dark red, hyperbola: dark blue). The cyclide can be seen as a member of an orthogonal system of surfaces: 1. pencil: parallel surfaces of the cyclide. 2. pencil: right circular cones through the ellipse (their apexes are on the hyperbola). 3. pencil: right circular cones through the hyperbola (their apexes are on the ellipse). The special feature of a cyclide is the property: The curvature lines of a Dupin cyclide are circles. Proof of Dupin's theorem Any point of consideration is contained in exactly one surface of each pencil of the orthogonal system. The three parameters $u, v, w$ describing these three surfaces can be considered as new coordinates. Hence any point can be represented by
$$\mathbf{x}(u,v,w) = \big(x(u,v,w),\, y(u,v,w),\, z(u,v,w)\big),$$
or shortly $\mathbf{x}(u,v,w)$. For the example (cylinder) in the lead, the new coordinates are the radius $u$ of the actual cylinder, the angle $v$ between the vertical plane and the x-axis, and the height $w$ of the horizontal plane. Hence $(u,v,w)$ can be considered as the cylinder coordinates of the point of consideration. The condition "the surfaces intersect orthogonally" at a point means that the surface normals are pairwise orthogonal. This is true if the partial derivatives $\mathbf{x}_u, \mathbf{x}_v, \mathbf{x}_w$ are pairwise orthogonal (the normals are the cross products $\mathbf{x}_u \times \mathbf{x}_v$, etc., so this property can be checked with the help of Lagrange's identity). Hence
$$(1)\qquad \mathbf{x}_u \cdot \mathbf{x}_v = 0, \qquad \mathbf{x}_v \cdot \mathbf{x}_w = 0, \qquad \mathbf{x}_w \cdot \mathbf{x}_u = 0.$$
Deriving each of these equations with respect to the variable which is not contained in the equation, one gets
$$\mathbf{x}_{uw}\cdot\mathbf{x}_v + \mathbf{x}_u\cdot\mathbf{x}_{vw} = 0, \qquad \mathbf{x}_{uv}\cdot\mathbf{x}_w + \mathbf{x}_v\cdot\mathbf{x}_{uw} = 0, \qquad \mathbf{x}_{vw}\cdot\mathbf{x}_u + \mathbf{x}_w\cdot\mathbf{x}_{uv} = 0.$$
Solving this linear system for the three appearing scalar products yields:
$$(2)\qquad \mathbf{x}_{uv}\cdot\mathbf{x}_w = 0, \qquad \mathbf{x}_{vw}\cdot\mathbf{x}_u = 0, \qquad \mathbf{x}_{uw}\cdot\mathbf{x}_v = 0.$$
From (1) and (2): The three vectors $\mathbf{x}_u, \mathbf{x}_v, \mathbf{x}_{uv}$ are orthogonal to the vector $\mathbf{x}_w$ and hence are linearly dependent (are contained in a common plane), which can be expressed by:
$$(3)\qquad \det(\mathbf{x}_u, \mathbf{x}_v, \mathbf{x}_{uv}) = 0.$$
Now consider the surface $w = \text{const}$ with parameters $u, v$. From equation (1) one gets $F = \mathbf{x}_u \cdot \mathbf{x}_v = 0$ (coefficient of the first fundamental form) and from equation (3): $M = \mathbf{x}_{uv} \cdot \dfrac{\mathbf{x}_u \times \mathbf{x}_v}{|\mathbf{x}_u \times \mathbf{x}_v|} = 0$ (coefficient of the second fundamental form) of the surface $w = \text{const}$. Consequence: The parameter curves are curvature lines. The analogous result for the other two surfaces through the point is true, too. References H.S.M. Coxeter: Introduction to geometry, Wiley, 1961, pp. 11, 258. Ch. Dupin: Développements de géométrie, Paris 1813. F. Klein: Vorlesungen über Höhere Geometrie, Springer-Verlag, 2013, p. 9.
Ludwig Schläfli: Über die allgemeinste Flächenschar zweiten Grades, die mit irgend zwei anderen Flächenscharen ein orthogonales System bildet, in L. Schläfli: Gesammelte mathematische Abhandlungen, p. 163, Springer-Verlag, 2013. J. Weingarten: Über die Bedingung, unter welcher eine Flächenfamilie einem orthogonalen Flächensystem angehört, Journal für die reine und angewandte Mathematik, Band 1877, Heft 83, pp. 1–12, ISSN (Online) 1435-5345, ISSN (Print) 0075-4102. T. J. Willmore: An Introduction to Differential Geometry, Courier Corporation, 2013, p. 295. Surfaces
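As a quick check of the "alternative with spheres" example above, one can verify directly that the spherical coordinate system is orthogonal; this is a routine computation, using the standard parametrization assumed in our reconstruction:

```latex
\[
\mathbf{x}(r,\theta,\varphi) =
  \big(r\sin\theta\cos\varphi,\; r\sin\theta\sin\varphi,\; r\cos\theta\big),
\]
\[
\mathbf{x}_r \cdot \mathbf{x}_\theta = 0, \qquad
\mathbf{x}_r \cdot \mathbf{x}_\varphi = 0, \qquad
\mathbf{x}_\theta \cdot \mathbf{x}_\varphi = 0 .
\]
% The spheres r = const, the cones theta = const and the half-planes
% phi = const therefore intersect pairwise orthogonally, and by Dupin's
% theorem their pairwise intersections (circles and lines) are curvature lines.
```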
Dupin's theorem
[ "Mathematics" ]
1,742
[ "Theorems in differential geometry", "Theorems in geometry" ]
63,083,999
https://en.wikipedia.org/wiki/C9H20O2
The molecular formula C9H20O2 may refer to: Dibutoxymethane 1,9-Nonanediol
C9H20O2
[ "Chemistry" ]
43
[ "Isomerism", "Set index articles on molecular formulas" ]
64,439,436
https://en.wikipedia.org/wiki/Deficiency%20%28graph%20theory%29
Deficiency is a concept in graph theory that is used to refine various theorems related to perfect matching in graphs, such as Hall's marriage theorem. This was first studied by Øystein Ore. A related property is surplus. Definition of deficiency Let $G = (V, E)$ be a graph, and let U be an independent set of vertices, that is, U is a subset of V in which no two vertices are connected by an edge. Let $N_G(U)$ denote the set of neighbors of U, which is formed by all vertices from V that are connected by an edge to one or more vertices of U. The deficiency of the set U is defined by:
$$\mathrm{def}_G(U) := |U| - |N_G(U)|.$$
Suppose G is a bipartite graph, with bipartition V = X ∪ Y. The deficiency of G with respect to one of its parts (say X) is the maximum deficiency of a subset of X:
$$\mathrm{def}(G;X) := \max_{U \subseteq X} \mathrm{def}_G(U).$$
Sometimes this quantity is called the critical difference of G. Note that defG of the empty subset is 0, so def(G;X) ≥ 0. Deficiency and matchings If def(G;X) = 0, it means that for all subsets U of X, |NG(U)| ≥ |U|. Hence, by Hall's marriage theorem, G admits a matching that saturates X. In contrast, if def(G;X) > 0, it means that for some subsets U of X, |NG(U)| < |U|. Hence, by the same theorem, G does not admit such a matching. Moreover, using the notion of deficiency, it is possible to state a quantitative version of Hall's theorem: G admits a matching in which at most def(G;X) vertices of X are unmatched. Proof. Let d = def(G;X). This means that, for every subset U of X, |NG(U)| ≥ |U|-d. Add d dummy vertices to Y, and connect every dummy vertex to all vertices of X. After the addition, for every subset U of X, |NG(U)| ≥ |U|. By Hall's marriage theorem, the new graph admits a matching in which all vertices of X are matched. Now, restore the original graph by removing the d dummy vertices; this leaves at most d vertices of X unmatched. This theorem can be equivalently stated as:
$$\nu(G) = |X| - \mathrm{def}(G;X),$$
where ν(G) is the size of a maximum matching in G (called the matching number of G). Properties of the deficiency function In a bipartite graph G = (X+Y, E), the deficiency function is a supermodular set function: for every two subsets X1, X2 of X:
$$\mathrm{def}_G(X_1 \cup X_2) + \mathrm{def}_G(X_1 \cap X_2) \;\geq\; \mathrm{def}_G(X_1) + \mathrm{def}_G(X_2).$$
A tight subset is a subset of X whose deficiency equals the deficiency of the entire graph (i.e., equals the maximum). The intersection and union of tight sets are tight; this follows from properties of upper-bounded supermodular set functions. In a non-bipartite graph, the deficiency function is, in general, not supermodular. Strong Hall property A graph G has the Hall property if Hall's marriage theorem holds for that graph, namely, if G has either a perfect matching or a vertex set with a positive deficiency. A graph has the strong Hall property if def(G) = |V| - 2 ν(G). Obviously, the strong Hall property implies the Hall property. Bipartite graphs have both of these properties, however there are classes of non-bipartite graphs that have these properties. In particular, a graph has the strong Hall property if and only if it is stable - its maximum matching size equals its maximum fractional matching size. Surplus The surplus of a subset U of V is defined by:
$$\mathrm{sur}_G(U) := |N_G(U)| - |U| = -\mathrm{def}_G(U).$$
The surplus of a graph G w.r.t. a subset X is defined by the minimum surplus of non-empty subsets of X:
$$\mathrm{sur}(G;X) := \min_{\emptyset \neq U \subseteq X} \mathrm{sur}_G(U).$$
Note the restriction to non-empty subsets: without it, the surplus of all graphs would always be 0. Note also that
$$\mathrm{def}(G;X) = \max\big[0, -\mathrm{sur}(G;X)\big].$$
In a bipartite graph G = (X+Y, E), the surplus function is a submodular set function: for every two subsets X1, X2 of X:
$$\mathrm{sur}_G(X_1 \cup X_2) + \mathrm{sur}_G(X_1 \cap X_2) \;\leq\; \mathrm{sur}_G(X_1) + \mathrm{sur}_G(X_2).$$
A surplus-tight subset is a subset of X whose surplus equals the surplus of the entire graph (i.e., equals the minimum).
The intersection and union of tight sets with non-empty intersection are tight; this follows from properties of lower-bounded submodular set functions. For a bipartite graph G with def(G;X) = 0, the number sur(G;X) is the largest integer s satisfying the following property for every vertex x in X: if we add s new vertices to X and connect them to the vertices in NG(x), the resulting graph has a non-negative surplus. If G is a bipartite graph with a positive surplus, such that deleting any edge from G decreases sur(G;X), then every vertex in X has degree sur(G;X) + 1. A bipartite graph has a positive surplus (w.r.t. X) if and only if it contains a forest F such that every vertex in X has degree 2 in F. Graphs with a positive surplus play an important role in the theory of graph structures; see the Gallai–Edmonds decomposition. In a non-bipartite graph, the surplus function is, in general, not submodular. References Graph theory
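The quantitative version of Hall's theorem stated earlier, ν(G) = |X| − def(G;X), can be checked mechanically on small graphs: brute-force the deficiency over all subsets of X and compare it with |X| minus a maximum matching. A minimal sketch (our own illustration, using Kuhn's augmenting-path algorithm for the matching):

```python
from itertools import combinations

def deficiency(X, adj):
    """max over subsets U of X of |U| - |N(U)| (the empty set contributes 0)."""
    best = 0
    for r in range(1, len(X) + 1):
        for U in combinations(X, r):
            nbrs = set().union(*(adj[u] for u in U))
            best = max(best, len(U) - len(nbrs))
    return best

def matching_number(X, adj):
    """Size of a maximum matching, via augmenting paths (Kuhn's algorithm)."""
    match = {}  # vertex in Y -> matched vertex in X

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in X)

X = ['a', 'b', 'c']
adj = {'a': {1}, 'b': {1}, 'c': {1, 2}}   # N({a, b}) = {1}, so deficiency is 1
assert deficiency(X, adj) == len(X) - matching_number(X, adj) == 1
```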
Deficiency (graph theory)
[ "Mathematics" ]
1,189
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
64,439,520
https://en.wikipedia.org/wiki/A%20Treatise%20on%20the%20Circle%20and%20the%20Sphere
A Treatise on the Circle and the Sphere is a mathematics book on circles, spheres, and inversive geometry. It was written by Julian Coolidge, and published by the Clarendon Press in 1916. The Chelsea Publishing Company published a corrected reprint in 1971, and after the American Mathematical Society acquired Chelsea Publishing it was reprinted again in 1997. Topics As is now standard in inversive geometry, the book extends the Euclidean plane to its one-point compactification, and considers Euclidean lines to be a degenerate case of circles, passing through the point at infinity. It identifies every circle with the inversion through it, and studies circle inversions as a group, the group of Möbius transformations of the extended plane. Another key tool used by the book is the "tetracyclic coordinates" of a circle, quadruples of complex numbers describing the circle in the complex plane as the solutions $z$ of an equation of the form $x_0 z\bar{z} + x_1 z + x_2 \bar{z} + x_3 = 0$. It applies similar methods in three dimensions to identify spheres (and planes as degenerate spheres) with the inversions through them, and to coordinatize spheres by "pentacyclic coordinates". Other topics described in the book include: Tangent circles and pencils of circles Steiner chains, rings of circles tangent to two given circles Ptolemy's theorem on the sides and diagonals of quadrilaterals inscribed in circles Triangle geometry, and circles associated with triangles, including the nine-point circle, Brocard circle, and Lemoine circle The Problem of Apollonius on constructing a circle tangent to three given circles, and the Malfatti problem of constructing three mutually-tangent circles, each tangent to two sides of a given triangle The work of Wilhelm Fiedler on "cyclography", constructions involving circles and spheres The Mohr–Mascheroni theorem, that in straightedge and compass constructions, it is possible to use only the compass Laguerre transformations, analogues of Möbius transformations for oriented projective geometry Dupin cyclides, shapes obtained from cylinders and tori by inversion Legacy At the time of its original publication this book was called encyclopedic, and "likely to become and remain the standard for a long period". It has since been called a classic, in part because of its unification of aspects of the subject previously studied separately in synthetic geometry, analytic geometry, projective geometry, and differential geometry. At the time of its 1971 reprint, it was still considered "one of the most complete publications on the circle and the sphere", and "an excellent reference". References External links A Treatise on the Circle and the Sphere (1916 edition) at the Internet Archive Circles Spherical geometry Inversive geometry Mathematics books 1916 non-fiction books Treatises Clarendon Press books
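To see how a concrete circle acquires tetracyclic coordinates under the Hermitian form assumed in the reconstruction above (our illustration, not taken from the book):

```latex
% Expanding |z - c|^2 = r^2 over the complex plane:
\[
  z\bar{z} \;-\; \bar{c}\,z \;-\; c\,\bar{z} \;+\; \big(|c|^2 - r^2\big) \;=\; 0
  \quad\Longleftrightarrow\quad |z - c|^2 = r^2 ,
\]
% so the circle of center c and radius r has coordinates
%   (x_0, x_1, x_2, x_3) = (1, -\bar{c}, -c, |c|^2 - r^2),
% and the unit circle |z| = 1 corresponds to (1, 0, 0, -1).
% Setting x_0 = 0 yields a line, i.e., a "circle" through infinity.
```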
A Treatise on the Circle and the Sphere
[ "Mathematics" ]
553
[ "Circles", "Pi" ]
64,440,132
https://en.wikipedia.org/wiki/L25%20ribosomal%20protein%20leader
L25 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein L25. Known examples were predicted with bioinformatic approaches in Gammaproteobacteria, including in Enterobacteria. The structure is located in the 5′ untranslated regions of mRNAs encoding the ribosomal protein L25 (rplY). See also Ribosomal protein leader References External links Ribosomal protein leader
L25 ribosomal protein leader
[ "Chemistry" ]
110
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
64,440,137
https://en.wikipedia.org/wiki/S10%20ribosomal%20protein%20leader
S10 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein S10. Known examples were predicted with bioinformatic approaches in Clostridia and other lineages of Bacillota. The structure is located in the 5′ untranslated regions of mRNAs encoding the ribosomal proteins S10 (rpsJ), L3 (rplC) and L4 (rplD). The identity of the ligand is uncertain because of a lack of experimental investigation. See also Ribosomal protein leader References External links Ribosomal protein leader
S10 ribosomal protein leader
[ "Chemistry" ]
143
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
64,440,565
https://en.wikipedia.org/wiki/Miro%20Erkintalo
Miro Erkintalo is a New Zealand physicist specialising in nonlinear optics and laser physics, based at the University of Auckland. Education Erkintalo was born and grew up in Pori, Finland, with an interest in science and maths. He attended the Tampere University of Technology intending to get his MSc and become a teacher or technologist, but after interning in a research lab decided to become a physicist. He completed three degrees in succession: a BSc (March 2009), an MSc (November 2009) and Doctor of Science in Physics (January 2012). After his PhD, Erkintalo came to New Zealand in 2012 to take up a postdoctoral fellowship at the University of Auckland, at the suggestion of his mentor John Dudley. He had intended to just stay for two years, but enjoyed New Zealand so much he became a permanent resident. He became a Lecturer in the Department of Physics in 2014, Senior Lecturer in February 2017 and Associate Professor in February 2021. He is a principal investigator at the Dodd-Walls Centre for Photonic and Quantum Technologies. Areas of research Erkintalo studies laser light and how it interacts with matter, both fundamental physics and technological applications. He developed the theoretical model for microresonator frequency combs, which can convert a single laser beam into hundreds or thousands of different-coloured beams. Currently fibre-optic communications systems use hundreds of lasers with different wavelengths to increase the amount of information transmitted; a microresonator frequency comb could allow a single beam to do this work, greatly improving performance and energy efficiency. His work on temporal cavity solitons has potential for the development of light-based computer memory. Erkintalo has also been part of the development of inexpensive ultrashort pulsed lasers with potential applications in microscopy and micro-machining. These lasers have extremely short pulses of hundreds of femtoseconds, which have very high peak energy and can be used in environments where they would have to work under extreme noise, temperature, and vibration. Honours and awards Erkintalo was awarded a Rutherford Discovery Fellowship in 2015 and two Marsden Fund grants. He won the Hamilton Award, the Royal Society Te Apārangi's Early Career Research Excellence Award for Science, in 2016 for his work in nonlinear optics and laser physics. On 30 June 2020 Erkintalo was presented with the 2019 Prime Minister’s MacDiarmid Emerging Scientist Prize for his contributions to new laser technologies. Most of the $200,000 prize will go towards exploring microresonator frequency comb architecture. Selected publications References External links 2016 Hamilton Award winner: Dr Miro Erkintalo (YouTube) University of Auckland staff profile Research website Living people Optical physicists Theoretical physicists Tampere University of Technology alumni Year of birth missing (living people) Recipients of Marsden grants 20th-century New Zealand physicists 21st-century New Zealand physicists Expatriates in New Zealand Finnish expatriates
Miro Erkintalo
[ "Physics" ]
599
[ "Theoretical physics", "Theoretical physicists" ]
64,442,240
https://en.wikipedia.org/wiki/Xiangdong%20Ji
Xiangdong Ji (; born 1962) is a Chinese theoretical nuclear and elementary particle physicist. He is a Distinguished University Professor at the University of Maryland, College Park. Ji received his bachelor's degree from Tongji University in 1982 and his PhD from Drexel University in 1987. He was a postdoctoral researcher at Caltech and MIT. In 1991, he became Assistant Professor at MIT, and in 1996 he moved to the University of Maryland, where he was the founding director of the Maryland Center for Fundamental Physics from 2007 to 2009. He was the Dean of the Physics and Astronomy Department at Shanghai Jiao Tong University from 2009 to 2013. Ji's main research interest has been in the quark and gluon structure of the proton and neutron in Quantum Chromodynamics (QCD). He formulated the spin structure of the proton in terms of local and gauge-invariant spin and orbital angular momentum contributions of quarks and gluons (the Ji spin decomposition), and showed that these can be obtained (the Ji sum rule) from a class of physical quantities called Generalized Parton Distributions (GPDs), which he introduced independently. GPDs are special cases of Wigner distributions, which provide simultaneous space and momentum information about partons. Ji found a new class of QCD hard scattering processes, called Deep Exclusive Processes, in lepton-nucleon collisions, which allow the GPDs to be probed experimentally. The simplest example is the production of a high-energy photon and a recoil nucleon in hard scattering, which he named Deeply Virtual Compton Scattering (DVCS). Deep Exclusive Processes have been an important part of the experimental program at the Jefferson Lab 12 GeV facility and the Electron-Ion Collider at Brookhaven National Laboratory. In 2013, Ji found that the fundamental quantities characterizing the high-energy properties of the nucleon, the parton distributions introduced by R. Feynman, can be directly calculated in Euclidean lattice field theory. He developed this into the Large-Momentum Effective Theory, or LaMET, which makes parton physics, or light-cone correlations, computable from the large-momentum expansion of time-independent observables in lattice QCD. Ji was elected a fellow of the American Physical Society in 2000, "[f]or fundamental contributions to the understanding of the structure of the nucleon and the process of deeply virtual Compton scattering." In 2014 he won the Humboldt Prize and in 2015 he won the Outstanding Nuclear Physicists Award from the Jefferson Sciences Associates. In 2016 he won the Herman Feshbach Prize in Theoretical Nuclear Physics for pioneering work in developing tools to characterize the structure of the nucleon within QCD and for showing how its properties can be probed through experiments; this work not only illuminates the nucleon theoretically but also acts as a driver of experimental programs worldwide. Ji is also engaged in elementary particle physics. He was the founder and the first spokesperson (2009-2018) of the PandaX project, one of the three most advanced deep underground liquid xenon experiments in the world (the other two are XENON and LZ), which aims to elucidate the nature of dark matter and fundamental properties of neutrinos. References External links Chinese nuclear physicists 21st-century Chinese physicists Particle physicists 1962 births Living people Fellows of the American Physical Society
Xiangdong Ji
[ "Physics" ]
678
[ "Particle physicists", "Particle physics" ]
64,442,755
https://en.wikipedia.org/wiki/Valerii%20Vinokur
Valerii Vinokur (also spelled as Vinokour, or Valery Vinokour; born 26 April 1949) is a condensed matter physicist who works on superconductivity, the physics of vortices, disordered media and glasses, nonequilibrium physics of dissipative systems, quantum phase transitions, quantum thermodynamics, and topological quantum matter. He is a senior scientist and Argonne Distinguished Fellow at Argonne National Laboratory and a senior scientist at the Consortium for Advanced Science and Engineering, Office of Research and National Laboratories, The University of Chicago. He is a Foreign Member of the Norwegian Academy of Science and Letters and a Fellow of the American Physical Society. Career Vinokur earned his BSc in physics of metals at the Moscow Institute of Steel and Alloys in 1972 and moved to the Institute of Solid State Physics, Chernogolovka, Russia, where he received a Ph.D. in physics in 1979. He has held appointments as a visiting scientist at CNRS, Grenoble (1987), a visiting scientist at Leiden University (1989), a visiting scientist at ETH (Zurich) (1990), and as visiting director of research at Ecole Normale Superieure (Paris) (1996). From 1990 until January 2021, Vinokur worked at the Argonne National Laboratory, becoming a Distinguished Argonne Fellow in 2009. From 2018 until January 2021, he was a senior scientist at the Consortium for Advanced Science and Engineering, Office of Research and National Laboratories, The University of Chicago. Since January 2021 Vinokur has been working for Terra Quantum AG as Chief Technology Officer US. Since January 2021 Vinokur has also been an adjunct professor at City College of the City University of New York. Honors, awards and fellowships Fellow of the American Physical Society, 1998 University of Chicago Distinguished Performance Award, 1998 , 2003 Alexander von Humboldt Research Award, 2003 Foreign Member of the Norwegian National Academy of Letters and Science, 2013 Alexander von Humboldt Research Award, 2013 International Abrikosov Prize, 2017 Fritz London Memorial Prize, 2020 References 1949 births Fellows of the American Physical Society Argonne National Laboratory people American physicists Living people Russian physicists Jewish physicists Theoretical physicists
Valerii Vinokur
[ "Physics" ]
459
[ "Theoretical physics", "Theoretical physicists" ]
64,445,673
https://en.wikipedia.org/wiki/F.%20Riesz%27s%20theorem
F. Riesz's theorem (named after Frigyes Riesz) is an important theorem in functional analysis that states that a Hausdorff topological vector space (TVS) is finite-dimensional if and only if it is locally compact. The theorem and its consequences are used ubiquitously in functional analysis, often without being explicitly mentioned. Statement Recall that a topological vector space (TVS) $X$ is Hausdorff if and only if the singleton set $\{0\}$ consisting entirely of the origin is a closed subset of $X$. A map between two TVSs is called a TVS-isomorphism or an isomorphism in the category of TVSs if it is a linear homeomorphism. Consequences Throughout, $F$, $X$, $Y$ are TVSs (not necessarily Hausdorff) with $F$ a finite-dimensional vector space. Every finite-dimensional vector subspace of a Hausdorff TVS is a closed subspace. All finite-dimensional Hausdorff TVSs are Banach spaces and all norms on such a space are equivalent. Closed + finite-dimensional is closed: If $M$ is a closed vector subspace of a TVS $X$ and if $F$ is a finite-dimensional vector subspace of $X$ ($X$, $M$, and $F$ are not necessarily Hausdorff) then $M + F$ is a closed vector subspace of $X$. Every vector space isomorphism (i.e. a linear bijection) between two finite-dimensional Hausdorff TVSs is a TVS isomorphism. Uniqueness of topology: If $X$ is a finite-dimensional vector space and if $\tau_1$ and $\tau_2$ are two Hausdorff TVS topologies on $X$, then $\tau_1 = \tau_2$. Finite-dimensional domain: A linear map $L \colon F \to Y$ between Hausdorff TVSs, with $F$ finite-dimensional, is necessarily continuous. In particular, every linear functional of a finite-dimensional Hausdorff TVS is continuous. Finite-dimensional range: Any continuous surjective linear map $L \colon X \to Y$ with a Hausdorff finite-dimensional range $Y$ is an open map and thus a topological homomorphism. In particular, the range of $L$ is TVS-isomorphic to $X / \ker L$. A TVS $X$ (not necessarily Hausdorff) is locally compact if and only if $X / \overline{\{0\}}$ is finite-dimensional. The convex hull of a compact subset of a finite-dimensional Hausdorff TVS is compact. This implies, in particular, that the closed convex hull of a compact set is equal to the convex hull of that set. A Hausdorff locally bounded TVS with the Heine-Borel property is necessarily finite-dimensional. See also References Bibliography Theorems in functional analysis Lemmas Topological vector spaces
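Putting the lead and the definitions together, the statement itself can be displayed as follows (our formatting of what the article asserts):

```latex
\[
  \textbf{F. Riesz's theorem.}\quad
  \text{A Hausdorff TVS } X \text{ is locally compact}
  \;\Longleftrightarrow\; \dim X < \infty .
\]
% In the normed-space case this is the familiar corollary: the closed unit
% ball of a normed space is compact if and only if the space is
% finite-dimensional.
```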
F. Riesz's theorem
[ "Mathematics" ]
495
[ "Theorems in mathematical analysis", "Mathematical theorems", "Vector spaces", "Topological vector spaces", "Space (mathematics)", "Theorems in functional analysis", "Mathematical problems", "Lemmas" ]
73,163,203
https://en.wikipedia.org/wiki/Shokken
Shokken (食券 "food ticket") are a type of Japanese ticket machine/vending machine, usually used at restaurants for ordering food. Information Shokken machines were first seen in 1926 at Tokyo Station. There are currently over 43,000 shokken machines in Japan. Shokken are often found in restaurants, cafes, fast-food restaurants and other establishments. A typical shokken machine features buttons where the customer can select an item, a coin slot where the customer can pay for the item, and a printer where the customer can receive their receipt. Upon receiving their receipt, the customer can then exchange it for their purchased item. Shokken machines can be standalone machines and are sometimes located on countertops and tables. Companies often use shokken machines as they can reduce the amount of staff needed, reduce theft, reduce the turnover rate and help reduce ordering errors. While useful, shokken machines are not associated with a fine-dining atmosphere, as they are often seen in inexpensive restaurants such as Matsuya, Yoshinoya and Sukiya. Shokken machines can also break down, and they limit customized orders. References Dispensers Vending machines Retail formats Commercial machines
Shokken
[ "Physics", "Technology", "Engineering" ]
237
[ "Machines", "Commercial machines", "Vending machines", "Automation", "Physical systems" ]
73,164,006
https://en.wikipedia.org/wiki/4%20Draconis
4 Draconis, also known as HR 4765 and CQ Draconis, is a star about 570 light years from the Earth, in the constellation Draco. It is a 5th magnitude star, so it will be faintly visible to the naked eye of an observer far from city lights. It is a variable star, whose brightness varies slightly from 4.90 to 5.12 over a period of 4.66 years. In 1967, Olin Eggen discovered that 4 Draconis is a variable star, during a multicolor photometric survey of red stars. In 1973 it was given the variable star designation CQ Draconis. Until the year 1985, 4 Draconis was thought to be a normal red giant star. In 1985, Dieter Reimers announced that the International Ultraviolet Explorer had detected a hot companion to the red giant, which itself appeared to be a binary cataclysmic variable star, making the complete system a triple star. However a 2003 study by Peter Wheatley et al., who examined ROSAT X-ray data for the star, concluded that the hot companion was more apt to be a single white dwarf, rather than a binary, and that the white dwarf is accreting material from the red giant. There does not yet appear to be a consensus about the multiplicity; some later studies consider 4 Draconis to be a binary, and some a triple. In 1987, Alexander Brown announced that 6 cm wavelength radio emission had been detected by the Very Large Array. The strength of the radio emission was variable on a timescale of weeks to months. It is possible that an outburst of 4 Draconis was the "guest star" reported by Chinese astronomers in the year 369 CE, in the constellation Zigong. References Draco (constellation) 060998 108907 Draconis, CQ Z Andromedae variables Draconis, 4 M-type giants
4 Draconis
[ "Astronomy" ]
397
[ "Constellations", "Draco (constellation)" ]
73,167,663
https://en.wikipedia.org/wiki/Tcr-seq
TCR-Seq (T-cell Receptor Sequencing) is a method used to identify and track specific T cells and their clones. TCR-Seq utilizes the unique nature of a T-cell receptor (TCR) as a ready-made molecular barcode. This technology can be applied both with single cell sequencing technologies and in high-throughput screens. Background T-cell Receptor (TCR) T cells are a part of the adaptive immune system and play a critical role in protecting the body from foreign pathogens. T-cell receptors (TCRs) are a group of membrane proteins found on the surface of T cells which can bind to foreign antigens. TCRs interact with major histocompatibility complexes (MHC) on cell surfaces to recognize antigens. They are heterodimers made up of predominantly α and β chains (or more rarely δ and γ chains) and consist of a variable region and a constant region. Variable regions are produced through a process called VDJ recombination, which results in unique amino acid sequences for α, β, and γ chains. The result is that each TCR is unique and recognizes a specific antigen. Complementarity Determining Regions (CDRs) Complementarity determining regions (CDRs) are a part of the TCR and play an essential role in TCR-MHC interactions. CDR1 and CDR2 are encoded by V genes, while CDR3 is made from the region between V and J genes or between D and J genes (termed "VDJ genes" when referred to together). CDR3 is the most variable of the CDRs, and is in direct contact with the antigen. As such, CDR3 is used as the "barcode region" to identify unique T cell populations, as it is highly unlikely for two T cells to have the same CDR3 sequence unless they came from the same parental T cell. Clonality VDJ recombination produces such a vast number of unique TCRs that many receptors never encounter the antigen they are best suited for. When a foreign antigen is present in the body, the few T cells that recognize that antigen are positively selected for, so that the body has an adequate number of T cells to mount an effective immune response. The selected T cells rapidly divide and differentiate into effector T-cells through a process called clonal expansion, which retains the TCR sequence (including the CDR3 sequence) that originally recognized the antigen. TCR-Seq uses the unique nature of the TCR - in particular CDR3 - as a molecular barcode to track T cells through a variety of processes like differentiation and proliferation, which can be used for a wide variety of purposes. Methods Bulk vs Single-Cell Sequencing TCR sequencing can be performed on pooled cell populations ("bulk sequencing") or single cells ("single cell sequencing"). Bulk sequencing is useful to explore entire TCR repertoires - all the TCRs within an individual or a sample - and to generate comparisons between repertoires of different individuals. This method can sequence millions of cells in a single experiment. However, one major disadvantage is that bulk sequencing cannot determine which TCR chains pair together, only their frequency within the repertoire. The large number of TCRs sampled also means that lower-abundance TCRs may not be detected. Single cell sequencing can determine TCR chain pairs, making it more useful for identifying specific TCRs. Some major disadvantages of this technique are its high costs, its limited capacity of a few thousand cells, and the necessity of live cells, which may be more challenging to obtain. Target Sequences Any TCR chain can be sequenced, although the α and β chains are more commonly chosen due to their abundance in the T cell population.
In particular, the β chain is of interest due to its higher diversity and specificity compared to other chains. The presence of a D gene component in the β chain, which is not present in the α chain, allows more diverse combinations. As well, β chains are unique to each T cell, which can be used to identify distinct T cell populations within a sample. To perform TCR sequencing, polymerase chain reaction (PCR) amplification is performed on the CDR3 region as a measure of unique T cells within a population. The CDR3 region is chosen over CDR1 and CDR2 as it is directly responsible for antigen interactions and is generally unique to TCRs from the same lineage, which allows identification of distinct populations. Library Preparation The goal of this step is to generate a library of transcripts to be sequenced. There are three major ways of generating a library for TCR sequencing. Multiplex DNA Multiplex PCR can be employed on either genomic DNA (gDNA) or RNA that has been converted to double-stranded complementary DNA (cDNA). Primer pools with primer pairs targeting J and V alleles are used to amplify the CDR3 region of the TCR transcript. The transcript goes through two or more rounds of PCR to amplify the region of interest, then adaptors are ligated onto either end of the resulting transcript. This method is among the most used in the generation of libraries for TCR-seq as it can capture a great deal of the diversity of the TCR through the primer pool. However, as it is near-impossible to optimize PCR conditions for all the primers in the pool, multiplex PCR can result in amplification bias, where some CDR3 regions with primers that bind poorly may not be amplified. This means the abundance of amplified segments may not correspond with the actual abundance within the cell. Target Enrichment In-Solution This method can use gDNA or RNA converted to cDNA. The starting material is first processed to generate DNA or cDNA transcripts with indexed adaptors on the 5′ and 3′ ends. These transcripts are then incubated with RNA baits designed to bind to regions of interest, generally the CDR3 region. These baits, which are normally bound to magnetic beads, can be isolated using a magnet. This allows the isolation of transcripts of the CDR3 region, which can then be amplified using PCR. Target enrichment using RNA baits requires fewer PCR amplification steps, which may decrease amplification bias. However, the efficiency of the capture by magnets may affect the diversity of the amplified transcripts. 5′-RACE Rapid Amplification of cDNA Ends (RACE) is a method that uses RNA transcripts for generation of the library. Although RACE can be applied at the 3′ or the 5′ end, the 5′ end is more commonly used for TCR-seq. This method revolves around the addition of a common 5′ adaptor sequence to the transcript, which can be done in a few different ways. One method is to add on the adapter following reverse transcription. During the generation of the reverse DNA strand from the RNA template, a forward primer adds a sequence complementary to the 5′ adapter, leading to template switching. This allows a 5′ adapter to be incorporated into the cDNA when the complementary sequence is generated. Primers can be designed to amplify the entire region from the adaptor to the constant region, then adaptor ligation can be performed in a second PCR reaction. As all the different transcripts now share an identical adapter, they can be amplified using a single primer pair.
As such, this method decreases amplification bias and improves the ability to detect more uncommon TCR populations with greater certainty. However, as TCR transcription levels differ between cells, this method cannot provide an accurate measurement of the number of different T cell types in the sample based on the level of RNA transcripts alone. Sequencing Following generation of the library, the products can be sequenced, generally via Next Generation Sequencing (NGS). Machines capable of longer reads that maintain read quality at the 3′ end are important, as the CDR3 region is at the 3′ end of an approximately 500 base pair transcript. The error rate of NGS presents a challenge for analysis of TCR repertoires. Small variations in the TCR can change its specificity towards antigens, and as such may be of interest to researchers. However, errors in sequencing can generate a minor change that may be interpreted as a low-frequency, distinct TCR population, which is a problem when analyzing changes in TCR repertoires. Efforts have been made to establish thresholds to remove low-abundance reads from analysis, as well as to develop algorithms to correct these errors. Applications Generally, the data collected from TCR-seq are used to compare TCR repertoires, either within the same patient at different timepoints or between different patients. Recent studies examined the characteristics of a healthy repertoire, and found a high degree of variation in TCR β chain levels and types, though a subset is shared across different individuals. However, this diversity has yet to be shown to strongly correlate with any conditions of interest, such as rates of infection or chance of cancer relapse, suggesting further research is necessary. Infectious Diseases Clonal expansion of T cells allows the immune system to deal with a variety of infectious diseases with high specificity. Thus, understanding changes that occur to the T cell repertoire following infection can aid early diagnosis, disease monitoring, and therapeutic development. Acquired Immunodeficiency Syndrome (AIDS) is a devastating disease caused by Human Immunodeficiency Virus (HIV) infection, which results in the death of CD4+ T cells and dysfunction of CD8+ T cells. Recent studies have suggested that increased TCR diversity may decrease HIV diversity and limit disease progression. Sequencing of the TCR would also increase understanding of the progression of AIDS and help predict morbidity. Additionally, sequencing the TCR repertoire of individuals with natural defense against HIV infection could help the development of a vaccine to limit further spread of the disease. Cancer Cancer is the uncontrolled proliferation of malignant cells which can spread throughout the body. This is caused by mutations within the cancer cell, which often lead to expression of mutant proteins termed neoantigens. Identification of these neoantigens has great therapeutic benefit, as they can be exploited to target cancer cells without harming normal cells. As CD8+ T cells can recognize some neoantigens via their TCR, sequencing of TCR repertoires can help identify potential cancer biomarkers. In addition to biomarker identification, sequencing of the TCR repertoire can also track changes in cancer progression, assess responses to immunotherapy, and evaluate the tumour microenvironment for conditions that may make it permissive to cancer growth. See also NOMe-seq PLAC-Seq References DNA sequencing Molecular biology techniques
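As a toy illustration of the analysis side — turning sequenced CDR3 reads into repertoire statistics, not any specific published pipeline — one can collapse reads into clonotype frequencies and compare two samples:

```python
from collections import Counter

def clonotype_table(cdr3_reads):
    """Collapse raw CDR3 amino-acid reads into clonotype frequencies."""
    counts = Counter(cdr3_reads)
    total = sum(counts.values())
    return {cdr3: n / total for cdr3, n in counts.most_common()}

# Hypothetical CDR3 reads from two samples (illustrative sequences only).
pre  = ["CASSLGTDTQYF", "CASSLGTDTQYF", "CASRPGLAGYEQYF", "CSARDGGEQFF"]
post = ["CASSLGTDTQYF"] * 5 + ["CSARDGGEQFF"]

for name, sample in [("pre", pre), ("post", post)]:
    print(name, clonotype_table(sample))
# A clonally expanding T cell shows up as a CDR3 whose frequency rises
# between the two samples, using the CDR3 as the molecular barcode.
```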
Tcr-seq
[ "Chemistry", "Biology" ]
2,202
[ "Molecular biology techniques", "DNA sequencing", "Molecular biology" ]
73,169,134
https://en.wikipedia.org/wiki/HD%20189080
HD 189080, also known as HR 7621 or rarely 74 G. Telescopii, is a solitary orange-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.18, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements place it at a distance of 357 light years, and it is currently receding rapidly from the Sun. At its current distance, HD 189080's brightness is diminished by 0.17 magnitudes due to extinction from interstellar dust. It has an absolute magnitude of +1.1. This is an evolved red giant with a stellar classification of K0 III. It is currently on the red giant branch, fusing hydrogen in a shell around an inert helium core. It has 119% of the mass of the Sun, but at the age of 4.83 billion years it has expanded to 9.9 times the radius of the Sun. It radiates 43.6 times the luminosity of the Sun from its enlarged photosphere. HD 189080 is slightly metal deficient with [Fe/H] = −0.11 and spins too slowly for its rotation to be measured accurately. References K-type giants Telescopium Telescopii, 74 CD-49 12949 189080 098482 7621
HD 189080
[ "Astronomy" ]
288
[ "Telescopium", "Constellations" ]
73,170,392
https://en.wikipedia.org/wiki/Anne-Christine%20Hladky
Anne-Christine Hladky-Hennion (born 1965) is a French researcher in acoustic metamaterials. She is a director of research for the French National Centre for Scientific Research (CNRS), and deputy scientific director of the CNRS Institute for Engineering and Systems Sciences (INSIS). Education and career Hladky is originally from Lille, where she was born in 1965. After earning a diploma in 1987 from the Institut supérieur de l'électronique et du numérique in Lille, she continued her education at the Lille University of Science and Technology, where she earned a doctorate in materials science in 1990. Her doctoral dissertation, Application de la méthode des éléments finis à la modélisation de structures périodiques utilisées en acoustique, was supervised by Jean-Noël Decarpigny. She joined CNRS in 1992, and became a director of research in 2015. Recognition Hladky was the 1990 winner of the Young Researcher Prize of the French Acoustical Society. In 2018 she received the CNRS Silver Medal. References 1965 births Living people Scientists from Lille French materials scientists Women materials scientists and engineers Metamaterials scientists Research directors of the French National Centre for Scientific Research Acousticians
Anne-Christine Hladky
[ "Materials_science", "Technology" ]
246
[ "Metamaterials", "Materials scientists and engineers", "Metamaterials scientists", "Women materials scientists and engineers", "Women in science and technology" ]
67,409,235
https://en.wikipedia.org/wiki/Polar%20factorization%20theorem
In optimal transport, a branch of mathematics, polar factorization of vector fields is a basic result due to Brenier (1987), with antecedents by Knott and Smith (1984) and Rachev (1985), that generalizes many existing results, among which are the polar decomposition of real matrices and the rearrangement of real-valued functions. The theorem Notation. Denote by $T_\#\mu$ the image measure of $\mu$ through the map $T$. Definition: Measure preserving map. Let $(X,\mu)$ and $(Y,\nu)$ be some probability spaces and $\sigma \colon X \to Y$ a measurable map. Then, $\sigma$ is said to be measure preserving iff $\sigma_\#\mu = \nu$, where $\sigma_\#\mu$ is the pushforward measure. Spelled out: for every $\nu$-measurable subset $B$ of $Y$, $\sigma^{-1}(B)$ is $\mu$-measurable, and $\mu(\sigma^{-1}(B)) = \nu(B)$. The latter is equivalent to:
$$\int_X f(\sigma(x))\,\mu(dx) = \int_Y f(y)\,\nu(dy)$$
where $f$ is $\nu$-integrable and $f \circ \sigma$ is $\mu$-integrable. Theorem. Consider a map $\xi \colon \Omega \to \mathbb{R}^d$ where $\Omega$ is a convex subset of $\mathbb{R}^d$, and $\mu$ a measure on $\Omega$ which is absolutely continuous. Assume that $\xi_\#\mu$ is absolutely continuous. Then there is a convex function $\varphi \colon \Omega \to \mathbb{R}$ and a map $\sigma \colon \Omega \to \Omega$ preserving $\mu$ such that
$$\xi = \nabla\varphi \circ \sigma.$$
In addition, $\nabla\varphi$ and $\sigma$ are uniquely defined almost everywhere. Applications and connections Dimension 1 In dimension 1, and when $\mu$ is the Lebesgue measure over the unit interval, the result specializes to Ryff's theorem. When $\Omega = [0,1]$ and $\mu$ is the uniform distribution over $\Omega$, the polar decomposition boils down to
$$\xi(t) = F_\xi^{-1}(\sigma(t)),$$
where $F_\xi$ is the cumulative distribution function of the random variable $\xi$ and $\sigma(t) = F_\xi(\xi(t))$ has a uniform distribution over $[0,1]$. $F_\xi$ is assumed to be continuous, and $\sigma$ preserves the Lebesgue measure on $[0,1]$. Polar decomposition of matrices When $\xi$ is a linear map and $\mu$ is the Gaussian normal distribution, the result coincides with the polar decomposition of matrices. Assuming $\xi(x) = Mx$ where $M$ is an invertible matrix and considering the standard Gaussian probability measure, the polar decomposition boils down to $M = SO$, where $S$ is a symmetric positive definite matrix and $O$ an orthogonal matrix. The connection with the polar factorization is $\varphi(x) = \tfrac{1}{2} x^\top S x$, which is convex, and $\sigma(x) = Ox$, which preserves the measure. Helmholtz decomposition The results also allow one to recover the Helmholtz decomposition. Letting $u$ be a smooth vector field, it can then be written in a unique way as
$$u = w + \nabla p,$$
where $p$ is a smooth real function defined on $\Omega$, unique up to an additive constant, and $w$ is a smooth divergence-free vector field, parallel to the boundary of $\Omega$. The connection can be seen by assuming $\mu$ is the Lebesgue measure on a compact set $\Omega$ and by writing $\xi$ as a perturbation of the identity map
$$\xi_\varepsilon = \mathrm{id} + \varepsilon u,$$
where $\varepsilon$ is small. The polar decomposition of $\xi_\varepsilon$ is given by $\xi_\varepsilon = \nabla\varphi_\varepsilon \circ \sigma_\varepsilon$. Then, for any test function $f \colon \mathbb{R}^d \to \mathbb{R}$ the following holds:
$$\int_\Omega f\big(x + \varepsilon u(x)\big)\,dx = \int_\Omega f\big(\nabla\varphi_\varepsilon(\sigma_\varepsilon(x))\big)\,dx = \int_\Omega f\big(\nabla\varphi_\varepsilon(x)\big)\,dx,$$
where the fact that $\sigma_\varepsilon$ was preserving the Lebesgue measure was used in the second equality. In fact, as $\varepsilon \to 0$, one can expand $\varphi_\varepsilon(x) = \tfrac{1}{2}\|x\|^2 + \varepsilon\,p(x) + o(\varepsilon)$, and therefore $\nabla\varphi_\varepsilon(x) = x + \varepsilon\,\nabla p(x) + o(\varepsilon)$ and $\sigma_\varepsilon(x) = x + \varepsilon\,w(x) + o(\varepsilon)$. As a result, for any smooth function $f$,
$$\int_\Omega \nabla f(x) \cdot w(x)\,dx = 0,$$
which implies that $w$ is divergence-free. See also References Measures (measure theory) Theorems involving convexity
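In the spirit of the dimension-1 case above, a discrete analogue can be checked numerically: sorting plays the role of the monotone map $F_\xi^{-1}$ (the gradient of a convex function on the line), and the rank permutation plays the role of the measure-preserving $\sigma$. A minimal numpy sketch (our illustration, not part of the source):

```python
import numpy as np

n = 1000
t = (np.arange(n) + 0.5) / n           # uniform grid on (0, 1)
u = np.sin(7 * t) + 0.3 * t**2         # some non-monotone map on [0, 1]

order = np.argsort(u)
monotone = u[order]                    # nondecreasing rearrangement: discrete F^{-1}
ranks = np.empty(n, dtype=int)
ranks[order] = np.arange(n)            # discrete sigma: a permutation of the grid,
                                       # hence (counting-)measure preserving

assert np.allclose(u, monotone[ranks])  # u = (monotone rearrangement) o sigma
```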
Polar factorization theorem
[ "Physics", "Mathematics" ]
546
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
67,411,784
https://en.wikipedia.org/wiki/Computational%20models%20in%20epilepsy
Computational models in epilepsy mainly focus on describing the electrophysiological manifestations of epilepsy called seizures. For this purpose, computational neuroscience uses differential equations to reproduce the temporal evolution of the signals recorded experimentally. A book published in 2008, Computational Neuroscience in Epilepsy, summarizes the different works done up to that time. The goals of using these models are diverse, from prediction to comprehension of the underlying mechanisms. The seizure phenomenon exists, and shares certain dynamical properties, across different scales and different organisms. It is possible to distinguish different approaches: phenomenological models focus on the observed dynamics, generally reduced to a few dimensions, which facilitates their study from the point of view of the theory of dynamical systems; more mechanistic models explain the biophysical interactions underlying seizures. It is also possible to use these approaches to model and analyse the interactions between different regions of the brain (in this case the notion of network plays an important role) and the transition to the ictal state. These large-scale approaches have the advantage that they can be related to recordings made in humans with electroencephalography (EEG). This offers new directions for clinical research, particularly as an additional tool in the treatment of refractory epilepsy. Other approaches use models to try to understand the mechanisms underlying these seizures through biophysical descriptions starting from the neuron scale. This makes it possible to understand the role of homeostasis and the link between physical quantities (such as the concentration of potassium, for example) and the pathological dynamics observed. This area of research has evolved rapidly in recent years and continues to show promise for our understanding and treatment of the epilepsies, whether for direct clinical application in the case of refractory epilepsy or for fundamental research to guide experimental work. References Computational biology Epilepsy
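To make the point about differential equations reproducing temporal evolution concrete, here is a deliberately generic sketch — the classical FitzHugh–Nagumo neuron equations with a slowly ramping drive, used purely to illustrate how a low-dimensional model can be carried across an onset of oscillations; it is not any specific epilepsy model from the article:

```python
import numpy as np

# FitzHugh-Nagumo equations integrated with forward Euler steps. A slow
# drift in the input current I carries the system from a resting state
# to sustained oscillations, a caricature of the kind of transition that
# phenomenological seizure models aim to capture.
dt, T = 0.01, 400.0
steps = int(T / dt)
x, y = -1.2, -0.6                      # fast (membrane-like) and slow variables
trace = np.empty(steps)
for k in range(steps):
    I = 0.8 * (k * dt) / T             # drive ramping slowly from 0 to 0.8
    dx = x - x**3 / 3.0 - y + I
    dy = 0.08 * (x + 0.7 - 0.8 * y)
    x, y = x + dt * dx, y + dt * dy
    trace[k] = x
# Early in `trace` the system sits near a fixed point; once I is large
# enough, it oscillates.
```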
Computational models in epilepsy
[ "Biology" ]
380
[ "Computational biology" ]
67,412,446
https://en.wikipedia.org/wiki/Predictive%20methods%20for%20surgery%20duration
Predictions of surgery duration (SD) are used to schedule planned/elective surgeries so that the utilization rate of operating theatres is optimized (maximized subject to policy constraints). An example of a constraint is that a pre-specified tolerance for the percentage of postponed surgeries (due to unavailable operating room (OR) or recovery room space) not be exceeded. The tight linkage between SD prediction and surgery scheduling is the reason that scientific research related to scheduling methods most often also addresses SD predictive methods, and vice versa. Durations of surgeries are known to have large variability. Therefore, SD predictive methods attempt, on the one hand, to reduce variability (via stratification and covariates, as detailed later), and on the other to employ the best available methods to produce SD predictions. The more accurate the predictions, the better the scheduling of surgeries (in terms of the required OR utilization optimization). An SD predictive method would ideally deliver a predicted SD statistical distribution (specifying the distribution and estimating its parameters). Once the SD distribution is completely specified, various desired types of information could be extracted thereof, for example, the most probable duration (mode), or the probability that SD does not exceed a certain threshold value. In less ambitious circumstances, the predictive method would at least predict some of the basic properties of the distribution, like location and scale parameters (mean, median, mode, standard deviation or coefficient of variation, CV). Certain desired percentiles of the distribution may also be the objective of estimation and prediction. Expert estimates, empirical histograms of the distribution (based on historical computer records), data mining and knowledge discovery techniques often replace the ideal objective of fully specifying the SD theoretical distribution. Reducing SD variability prior to prediction (as alluded to earlier) is commonly regarded as part and parcel of an SD predictive method. Most probably, SD has, in addition to random variation, also a systematic component; namely, the SD distribution may be affected by various related factors (like medical specialty, patient condition or age, professional experience and size of medical team, number of surgeries a surgeon has to perform in a shift, type of anesthetic administered). Accounting for these factors (via stratification or covariates) would diminish SD variability and enhance the accuracy of the predictive method. Incorporating expert estimates (like those of surgeons) in the predictive model may also contribute to diminishing the uncertainty of data-based SD prediction. Often, statistically significant covariates (also referred to as factors, predictors or explanatory variables) are first identified (for example, via simple techniques like linear regression and knowledge discovery), and only later are more advanced big-data techniques employed, like Artificial Intelligence and Machine Learning, to produce the final prediction. Literature reviews of studies addressing surgery scheduling most often also address related SD predictive methods. Here are some examples (latest first). The rest of this entry reviews various perspectives associated with the process of producing SD predictions — SD statistical distributions, methods to reduce SD variability (stratification and covariates), predictive models and methods, and surgery as a work-process.
The latter addresses surgery characterization as a work-process (repetitive, semi-repetitive or memoryless) and its effect on the shape of the SD distribution. SD Statistical Distributions Theoretical models A most straightforward SD predictive method comprises specifying a set of existing statistical distributions and, based on available data and distribution-fitting criteria, selecting the best-fitting distribution. There is a large volume of comparative studies that attempt to select the best-fitting models for the SD distribution. Distributions most frequently addressed are the normal, the three-parameter lognormal, the gamma (including the exponential) and the Weibull. Less frequent "trial" distributions (for fitting purposes) are the loglogistic model, the Burr, the generalized gamma and the piecewise-constant hazard model. Attempts to present the SD distribution as a mixture distribution have also been reported (normal-normal, lognormal-lognormal and Weibull-gamma mixtures). Occasionally, predictive methods are developed that are valid for a general SD distribution, or more advanced techniques, like kernel density estimation (KDE), are used instead of the traditional methods (like distribution-fitting or regression-oriented methods). There is broad consensus that the three-parameter lognormal describes most SD distributions best. A new family of SD distributions, which includes the normal, lognormal and exponential as exact special cases, has recently been developed. Using historical records to specify an empirical distribution As an alternative to specifying a theoretical distribution as a model for SD, one may use records to construct a histogram of the available data, and use the related empirical distribution function (the cumulative plot) to estimate various required percentiles (like the median or the third quartile). Historical records/expert estimates may also be used to specify location and scale parameters, without specifying a model for the SD distribution. Data mining methods These methods have recently gained traction as an alternative to specifying in advance a theoretical model to describe the SD distribution for all types of surgeries. Examples are detailed below ("Predictive models and methods"). Reducing SD variability (stratification and covariates) To enhance SD prediction accuracy, two major approaches are pursued to reduce SD data variability: stratification and covariates (incorporated in the predictive model). Covariates are often also referred to in the literature as factors, effects, explanatory variables or predictors. Stratification The term means that available data are divided (stratified) into subgroups, according to a criterion statistically shown to affect the SD distribution. The predictive method then aims to produce SD predictions for specified subgroups, having SD with appreciably reduced variability. Examples of stratification criteria are medical specialty, procedure code systems, patient-severity condition or hospital/surgeon/technology (with the resulting models referred to as hospital-specific, surgeon-specific or technology-specific). Examples of implementation are Current Procedural Terminology (CPT) and ICD-9-CM Diagnosis and Procedure Codes (International Classification of Diseases, 9th Revision, Clinical Modification). Covariates (factors, effects, explanatory variables, predictors) This approach to reducing variability incorporates covariates in the prediction model. 
The same predictive method may then be more generally applied, with covariates assuming different values for different levels of the factors shown to affect the SD distribution (usually by affecting a location parameter, like the mean, and, more rarely, also a scale parameter, like the variance). A most basic method to incorporate covariates into a predictive method is to assume that SD is lognormally distributed. The logged data (taking the log of the SD data) then represent a normally distributed population, allowing the use of multiple linear regression to detect statistically significant factors. Other regression methods, which do not require data normality or are robust to its violation (generalized linear models, nonlinear regression), and artificial intelligence methods have also been used (a minimal numerical sketch of the basic log-linear approach is given at the end of this entry). Predictive models and methods Following is a representative (non-exhaustive) list of models and methods employed to produce SD predictions (in no particular order). These, or a mixture thereof, may be found in the sample of representative references below: Linear regression (LR); Multivariate adaptive regression splines (MARS); Random forests (RF); Machine learning; Data mining (rough sets, neural networks); Knowledge discovery in databases (KDD); Data warehouse model (used to extract data from various, possibly non-interacting, databases); Kernel density estimation (KDE); Jackknife; Monte Carlo simulation. Surgery as work-process (repetitive, semi-repetitive, memoryless) Surgery is a work process, and like any work process it requires inputs to achieve the desired output, a recuperating post-surgery patient. Examples of work-process inputs, from production engineering, are the five M's — "money, manpower, materials, machinery, methods" (where "manpower" refers to the human element in general). Like all work-processes in industry and the services, surgeries also have a certain characteristic work-content, which may be unstable to various degrees (within the defined statistical population at which the prediction method aims). This generates a source of SD variability that affects the shape of the SD distribution (from the normal distribution, for purely repetitive processes, to the exponential, for purely memoryless processes). Ignoring this source may confound its variability with that due to covariates (as detailed earlier). Therefore, as all work-processes may be partitioned into three types (repetitive, semi-repetitive, memoryless), surgeries may be similarly partitioned. A stochastic model that takes account of work-content instability has recently been developed, which delivers a family of distributions, with the normal/lognormal and exponential as exact special cases. This model was applied to construct a statistical process control scheme for SD. References Prediction Health care management Surgery Hospitals Health informatics Health Resources and Services Administration
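To make the log-linear covariate approach described above concrete, here is a minimal sketch; the covariates (team size, patient age), the coefficients and the synthetic records are illustrative assumptions, not values from any cited study:

```python
# Hedged sketch: predict surgery duration (SD) by assuming SD is lognormal
# and regressing log(SD) on covariates, as described in the entry above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Synthetic "historical records": team size and patient age as covariates
n = 500
team_size = rng.integers(2, 8, size=n)
age = rng.integers(20, 90, size=n)
log_sd = 3.5 + 0.10 * team_size + 0.005 * age + rng.normal(0.0, 0.3, size=n)
duration_min = np.exp(log_sd)

# Multiple linear regression on the logged durations
X = sm.add_constant(np.column_stack([team_size, age]))
fit = sm.OLS(np.log(duration_min), X).fit()

# Prediction for a new case: exp(mu) is the median of the lognormal SD;
# the mean requires the usual exp(mu + sigma^2 / 2) correction
x_new = np.array([[1.0, 5.0, 60.0]])  # constant, team size, age
mu = fit.predict(x_new)[0]
sigma2 = fit.mse_resid
print(f"median SD ~ {np.exp(mu):.1f} min, mean SD ~ {np.exp(mu + sigma2 / 2):.1f} min")
```

The same fitted model also yields any desired percentile of the predicted SD distribution, which is exactly the kind of information the scheduling methods discussed above consume.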
Predictive methods for surgery duration
[ "Biology" ]
1,902
[ "Health informatics", "Medical technology" ]
67,414,125
https://en.wikipedia.org/wiki/List%20of%20heritage%20railways%20and%20funiculars%20in%20Switzerland
This is a list of heritage railways in Switzerland. For convenience, the list includes any pre-World War II railway in the broad sense of the term (adhesion railway, rack railway or funicular) currently operated with at least several original or historical carriages. Switzerland has a very dense rail network, both standard and narrow gauge. The overwhelming majority of railways, built between the mid-19th and early 20th century, are still in regular operation today and were electrified earlier than in the rest of Europe. The major exception is the partially rack-and-pinion-operated Furka Steam Railway, the longest unelectrified line in the country. However, numerous rail operators, notably SBB Historic, provide services with well-maintained historical rolling stock. List Blonay–Chamby museum railway (adhesion) Brienz Rothorn Railway (rack) Dampfbahn-Verein Zürcher Oberland (adhesion) Etzwilen–Singen railway (adhesion) Furka Steam Railway (rack and adhesion) Giessbachbahn (funicular) Heimwehfluhbahn (funicular) International Rhine Regulation Railway (adhesion) La Traction (adhesion) Les Avants–Sonloup (funicular) Montreux–Glion–Rochers-de-Naye railway (rack) Montreux–Lenk im Simmental line (adhesion) Pilatus Railway (rack) Reichenbachfall Funicular Rhaetian Railway, notably on the Albula and Bernina lines (adhesion) Riffelalp tram (adhesion) Rigi Railways (rack) Rorschach–Heiden railway (rack) SBB Historic (adhesion) Schynige Platte Railway (rack) Sonnenberg (funicular) Vapeur Val-de-Travers (adhesion) Zürcher Museums-Bahn (adhesion) See also List of railway museums in Switzerland List of narrow-gauge railways in Switzerland List of mountain railways in Switzerland List of funiculars in Switzerland Lists of tourist attractions in Switzerland Swiss Museum of Transport References Switzerland
List of heritage railways and funiculars in Switzerland
[ "Engineering" ]
456
[ "Lists of heritage railways", "Engineering preservation societies" ]
67,415,899
https://en.wikipedia.org/wiki/Microbial%20pathogenesis
Microbial pathogenesis is a field of microbiology that started at least as early as 1988, with the identification of the three Falkow criteria, also known as the molecular Koch's postulates. In 1996, Fredricks and Relman proposed a seven-point list of "Molecular Guidelines for Establishing Microbial Disease Causation," because of "the discovery of nucleic acids" by Watson and Crick "as the source of genetic information and as the basis for precise characterization of an organism." The subsequent development of the "ability to detect and manipulate these nucleic acid molecules in microorganisms has created a powerful means for identifying previously unknown microbial pathogens and for studying the host-parasite relationship." Postulates for the detection of microbial pathogens In 1996, Fredricks and Relman suggested the following postulates for the novel field of microbial pathogenesis. (i) A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased, and not in those organs that lack pathology. (ii) Fewer, or no, copies of pathogen-associated nucleic acid sequences should occur in hosts or tissues without disease. (iii) With resolution of disease, the copy number of pathogen-associated nucleic acid sequences should decrease or become undetectable. With clinical relapse, the opposite should occur. (iv) When sequence detection predates disease, or sequence copy number correlates with severity of disease or pathology, the sequence-disease association is more likely to be a causal relationship. (v) The nature of the microorganism inferred from the available sequence should be consistent with the known biological characteristics of that group of organisms. (vi) Tissue-sequence correlates should be sought at the cellular level: efforts should be made to demonstrate specific in situ hybridization of microbial sequence to areas of tissue pathology and to visible microorganisms or to areas where microorganisms are presumed to be located. (vii) These sequence-based forms of evidence for microbial causation should be reproducible. References Microbiology Diseases and disorders Epidemiology Cause (medicine)
Microbial pathogenesis
[ "Chemistry", "Biology", "Environmental_science" ]
472
[ "Epidemiology", "Microbiology", "Environmental social science", "Microscopy" ]
67,417,478
https://en.wikipedia.org/wiki/Mie%20potential
The Mie potential is an interaction potential describing the interactions between particles on the atomic level. It is mostly used for describing intermolecular interactions, but at times also for modeling intramolecular interactions, i.e. bonds. The Mie potential is named after the German physicist Gustav Mie, yet the history of intermolecular potentials is more complicated. The Mie potential is the generalized case of the Lennard-Jones (LJ) potential, which is perhaps the most widely used pair potential. The Mie potential is a function of $r$, the distance between two particles, and is written as $V(r) = C \varepsilon \left[ \left( \frac{\sigma}{r} \right)^{n} - \left( \frac{\sigma}{r} \right)^{m} \right]$ (1), with $C = \frac{n}{n-m} \left( \frac{n}{m} \right)^{m/(n-m)}$. The Lennard-Jones potential corresponds to the special case where $n = 12$ and $m = 6$ in Eq. (1). In Eq. (1), $\varepsilon$ is the dispersion energy, and $\sigma$ indicates the distance at which $V(r) = 0$, which is sometimes called the "collision radius." The parameter $\sigma$ is generally indicative of the size of the particles involved in the collision. The parameters $n$ and $m$ characterize the shape of the potential: $n$ describes the character of the repulsion and $m$ describes the character of the attraction. The attractive exponent $m = 6$ is physically justified by the London dispersion force, whereas no justification for a particular value of the repulsive exponent is known. The repulsive steepness parameter $n$ has a significant influence on the modeling of thermodynamic derivative properties, e.g. the compressibility and the speed of sound. Therefore, the Mie potential is a more flexible intermolecular potential than the simpler Lennard-Jones potential. The Mie potential is used today in many force fields in molecular modeling. Typically, the attractive exponent is chosen to be $m = 6$, whereas the repulsive exponent $n$ is used as an adjustable parameter during the model fitting (a minimal numerical sketch of Eq. (1) is given at the end of this entry). Thermophysical properties of the Mie substance As for Lennard-Jonesium, a theoretical substance defined by particles interacting through the Lennard-Jones potential, a substance class of Mie substances exists, defined as single-site spherical particles interacting through a given Mie potential. Since an infinite number of Mie potentials exist (using different $n, m$ parameters), equally many Mie substances exist, as opposed to Lennard-Jonesium, which is uniquely defined. For practical applications in molecular modelling, the Mie substances are mostly relevant for modelling small molecules, e.g. noble gases, and for coarse-grain modelling, where larger molecules, or even a collection of molecules, are simplified in their structure and described by a single Mie particle. However, more complex molecules, such as long-chained alkanes, have successfully been modelled as homogeneous chains of Mie particles. As such, the Mie potential is useful for modelling far more complex systems than those whose behaviour is accurately captured by "free" Mie particles. Thermophysical properties of both the Mie fluid and chain molecules built from Mie particles have been the subject of numerous papers in recent years. Investigated properties include virial coefficients and interfacial, vapor-liquid equilibrium, and transport properties. Based on such studies, the relation between the shape of the interaction potential (described by $n$ and $m$) and the thermophysical properties has been elucidated. Also, many theoretical (analytical) models have been developed for describing the thermophysical properties of Mie substances and of chain molecules formed from Mie particles, such as several thermodynamic equations of state and models for transport properties. 
It has been observed that many combinations of different $(n, m)$ pairs can yield similar phase behaviour, and that this degeneracy is captured by a single corresponding-states parameter: fluids with different exponents but the same value of that parameter will exhibit the same phase behavior. Mie potential used in molecular modeling Due to its flexibility, the Mie potential is a popular choice for modelling real fluids in force fields. It is used as an interaction potential in many molecular models today. Several (reliable) united-atom transferable force fields are based on the Mie potential, such as that developed by Potoff and co-workers. The Mie potential has also been used for coarse-grain modeling. Electronic tools are available for building Mie force-field models for both united-atom force fields and transferable force fields. The Mie potential has also been used for modeling small spherical molecules (i.e. directly the Mie substance - see above), where the molecular models have only the parameters of the Mie potential itself. References Thermodynamics Intermolecular forces Computational chemistry Quantum mechanical potentials
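As a concrete illustration of Eq. (1), here is a minimal Python sketch; it is an assumption-level example, not code from any published Mie force field:

```python
# Hedged sketch of the Mie pair potential of Eq. (1):
#   V(r) = C * eps * [(sigma/r)**n - (sigma/r)**m],
#   C    = n/(n - m) * (n/m)**(m/(n - m)),
# so n = 12, m = 6 recovers the Lennard-Jones potential exactly.
import numpy as np

def mie(r, eps=1.0, sigma=1.0, n=12.0, m=6.0):
    c = (n / (n - m)) * (n / m) ** (m / (n - m))
    sr = sigma / np.asarray(r, dtype=float)
    return c * eps * (sr**n - sr**m)

r = np.linspace(0.95, 3.0, 1000)
v_lj = mie(r)              # Lennard-Jones special case (n = 12, m = 6)
v_stiff = mie(r, n=20.0)   # steeper repulsive wall, same attraction

# The well depth is eps in both cases; the minimum of the Mie curve sits
# at r = sigma * (n/m)**(1/(n-m)), i.e. 2**(1/6) for Lennard-Jones.
print(f"LJ minimum at r ~ {r[np.argmin(v_lj)]:.3f} (theory: {2**(1/6):.3f})")
```

Raising `n` while keeping `m = 6` steepens the repulsive wall, which is the lever the model fitting described above uses to tune derivative properties such as compressibility and speed of sound.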
Mie potential
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
940
[ "Molecular physics", "Quantum mechanics", "Intermolecular forces", "Materials science", "Quantum mechanical potentials", "Computational chemistry", "Theoretical chemistry", "Thermodynamics", "Dynamical systems" ]
67,419,030
https://en.wikipedia.org/wiki/Target%202035
Target 2035 is a global effort or movement to discover open-science pharmacological modulators for every protein in the human proteome by the year 2035. The effort is led by the Structural Genomics Consortium, with the intention that the movement evolve organically. Target 2035 was born out of the success that chemical probes have had in elevating or de-prioritizing the therapeutic potential of protein targets. The availability of open-access pharmacological tools is a largely unmet aspect of drug discovery, especially for the dark proteome. The first five years will include building mechanisms (Phase 1 below) which allow researchers to find collaborators with like-minded goals towards discovering a pharmacological tool for a specific protein or protein family, and making it open access (without encumbrances due to intellectual property). One strategic goal is seeding new open-science programs on components of the drug discovery pipeline, with the aim of bringing medicines to the bedside equitably, affordably and rapidly. Phase 1 will also build a framework that welcomes new and (re-)emerging enabling technologies in hit-finding and characterization. Target 2035 will draw on successes from past and current publicly funded programs, including the National Institutes of Health (NIH) Illuminating the Druggable Genome initiative for under-explored kinases, GPCRs and ion channels, the Innovative Medicines Initiative's RESOLUTE project on human SLCs, the Innovative Medicines Initiative's Enabling and Unlocking Biology in the Open (EUbOPEN), and the Innovative Medicines Initiative's Unrestricted Leveraging of Targets for Research Advancement and Drug Discovery. The NIH recently reiterated its commitment to making its data open, to mitigate the tens of billions of dollars lost to irreproducible data. Target 2035 will collaborate with the Chemical Probes Portal and open-science platforms, e.g. Just One Giant Lab, in order to spread awareness and education of best practices for chemical modulators and the benefits of open science, respectively. The following draft plan has been outlined in a white paper. Phase 1 The first phase, from 2020 to 2025, would be structured to build the foundation for a concerted global effort, and would aim to collect, characterize and make available existing pharmacological modulators for key representatives of all protein families in the current druggable genome (~4,000 proteins), as well as to develop critical and centralized infrastructure to facilitate the data collection, curation, dissemination, and mining that will power the scientific community worldwide. This phase might also create centralized facilities to provide quantitative genome-scale biochemical and cell-based profiling assays to the federated community, as well as to coordinate the development of new technologies to extend the definition of druggability. This first phase will complement and extend ongoing efforts to create chemical tools and chemogenomic libraries to blanket priority gene families, such as the kinase and epigenetics families. The first year of Target 2035 yielded infrastructure to house data on chemogenomic compounds reported in the literature, and a progress update was published. Towards the development of new technologies, Target 2035 started a new initiative, Critical Assessment of Computational Hit-finding Experiments (CACHE), aimed at benchmarking computational methods for hit-finding. 
The first competition - finding ligands for the WD40 domain of LRRK2 - started in March 2022, and the first round of predictions has been submitted. In the meantime, the call for the second CACHE benchmark - predicting ligands for the RNA-binding domain of Nsp13 - has been posted. Phase 2 The second phase, from 2025 to 2035, will apply the new technologies and infrastructure to generate a complete set of pharmacological modulators for > 90% of the ~20,000 proteins encoded by the genome. "Target 2035" sounds ambitious, but its concept and practicality are on firm ground, based on a number of pilot studies which revealed the following success parameters: Collaborate with the pharmaceutical sector to access unparalleled expertise, experience, materials, and logistics Establish clear and quantitative quality criteria for the output (target chemical tool profiles) to provide focus Organize the project around protein families – it is the most efficient, practical and scientifically sound way to divide this large project into teams Establish clear open science principles to eliminate or reduce conflicts of interest, to reduce legal encumbrances, and to encourage participation by the community. References External links Drug discovery Open science Chemical biology
Target 2035
[ "Chemistry", "Biology" ]
940
[ "Life sciences industry", "Drug discovery", "nan", "Medicinal chemistry", "Chemical biology" ]
67,421,377
https://en.wikipedia.org/wiki/Connecting%20Organizations%20for%20Regional%20Disease%20Surveillance
The Connecting Organizations for Regional Disease Surveillance (CORDS) is a "regional infectious disease surveillance network that neighboring countries worldwide are organizing to control cross-border outbreaks at their source." In 2012, CORDS was registered as a legal, non-profit international organization in Lyon, France. As of 2021, CORDS was composed of "six regional member networks, working in 28 countries in Africa, Asia, the Middle East and Europe." Synopsis CORDS is "distinct from more formal networks in geographic regions designated by the World Health Organization (WHO)". Some of these regional networks existed before the sudden 2003 outbreak of SARS, for example: the Pacific Public Health Surveillance Network (PPHSN) (1996), the Mekong Basin Disease Surveillance (MBDS) network (1999), and the East African Integrated Disease Surveillance Network (EAIDSNet) (2000) the Southeastern European Health Network (SEEHN) (2001) the Asia Partnership on Emerging Infectious Diseases Research (APEIR) (2006) the SACIDS Foundation for One Health (SACIDS) of the Southern African Development Community (2008) the Southeast European Center for Surveillance and Control of Infectious Diseases (SECID) (2013) History CORDS grew out of the 1960s-era Organisations de Coordination et de Cooperation pour la lutte contre les Grandes Endemies (OCCGE), an African network, reformed in 1987 to add the West African Health Community (WAHC) and give birth to the West African Health Organisation (WAHO). The PPHSN was formed in 1996 in order to "streamline" members' "disease reporting and response". In 1997, the PPHSN set up PacNet in order to "share timely information on disease outbreaks" and "to ensure appropriate action was taken in response to public health threats." In 2000, the Global Outbreak Alert and Response Network was formalized by the WHO. In 2001, the Southeastern European Health Network (SEEHN) was formed, grouping the governments of Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Moldova, Montenegro, Romania, and the Former Yugoslav Republic of Macedonia. In 2003, Israel, Jordan and the Palestinian Authority established the Middle East Consortium on Infectious Disease Surveillance (MECIDS). The growth of CORDS can be categorised into several overlapping phases: from 1996 to 2007, the effort was to train and connect people to contain local epidemics from 2003 to 2009, the effort aimed to enhance "cross-border and national surveillance systems to address regional threats", including a particular focus of EAIDSNet on zoonotic diseases from 2006 to at least 2017, the focus was to strengthen "preparedness for pandemics and other public health threats of regional and global scale". In 2005, the International Health Regulations (IHR) mandated official reporting of certain types of disease outbreaks to the WHO. In 2007, the Rockefeller Foundation (RF) used funds from the Nuclear Threat Initiative (NTI) to convene, in Bellagio, "regional surveillance networks from across the globe to initiate a dialogue about how to harness lessons learned, emerging technologies, and nascent support." In 2009, the RF used funds from the NTI to "create a community of practice" named CORDS, which in 2012 was formally established in Lyon, France as a legal, non-profit international organization. CORDS convened the 1st Global Conference on Regional Disease Surveillance Networks at the Prince Mahidol Award Conference in 2013. 
References Public health Epidemiology 2012 establishments in France Public health organizations Infectious disease organizations Bioinformatics organizations Disaster management tools Emergency communication Warning systems Organizations established in 2012 Organizations based in Lyon Non-profit organizations based in France European medical and health organizations
Connecting Organizations for Regional Disease Surveillance
[ "Technology", "Engineering", "Biology", "Environmental_science" ]
750
[ "Epidemiology", "Bioinformatics organizations", "Safety engineering", "Measuring instruments", "Bioinformatics", "Warning systems", "Environmental social science" ]
62,168,097
https://en.wikipedia.org/wiki/Bivariant%20theory
In mathematics, a bivariant theory was introduced by Fulton and MacPherson in order to put a ring structure on the Chow group of a singular variety, the resulting ring being called an operational Chow ring. On a technical level, a bivariant theory is a mix of a homology theory and a cohomology theory. In general, a homology theory is a covariant functor from the category of spaces to the category of abelian groups, while a cohomology theory is a contravariant functor from the category of (nice) spaces to the category of rings. A bivariant theory is a functor that is both covariant and contravariant; hence the name "bivariant". Definition Unlike a homology theory or a cohomology theory, a bivariant class is defined for a map, not a space. Let $f: X \to Y$ be a map. For such a map, we can consider the fiber squares obtained by base change along maps $g: Y' \to Y$, with corners $X' = X \times_Y Y'$, $Y'$, $X$ and $Y$ (for example, a blow-up). Intuitively, the consideration of all the fiber squares like the above can be thought of as an approximation of the map $f$. Now, a bivariant class of $f$ is a family of group homomorphisms between the Chow groups of the corners, indexed by the fiber squares and satisfying certain compatibility conditions (a hedged sketch of the standard formulation is given at the end of this entry). Operational Chow ring The basic question was whether there is a cycle map from the operational Chow ring to ordinary (singular) cohomology. If X is smooth, such a map exists, since the operational Chow ring of a smooth X is the usual Chow ring of X. It has been shown that rationally there is no such map with good properties even if X is a linear variety, roughly a variety admitting a cell decomposition. The same work also notes that Voevodsky's motivic cohomology ring is "probably more useful" than the operational Chow ring for a singular scheme (§ 8 of loc. cit.). References Dan Edidin and Matthew Satriano, Towards an intersection Chow cohomology for GIT quotients The last two lectures of Vakil, Math 245A Topics in algebraic geometry: Introduction to intersection theory in algebraic geometry External links nLab- bivariant cohomology theory Abelian group theory Algebraic geometry Cohomology theories Functors Homology theory
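The family of homomorphisms mentioned above can be sketched as follows; this is a hedged rendering of the standard formulation in Fulton's Intersection Theory, and grading conventions may vary between sources:

```latex
% Sketch: a bivariant class c of degree p on a map f: X -> Y assigns,
% to every g: Y' -> Y and every integer k, a group homomorphism of
% Chow groups over the induced fiber square, compatible with proper
% pushforward, flat pullback and refined Gysin homomorphisms.
c \in A^{p}\!\left(X \xrightarrow{\;f\;} Y\right):\qquad
c^{(k)}_{g}\colon A_{k}(Y') \longrightarrow A_{k-p}\!\left(X \times_{Y} Y'\right)
```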
Bivariant theory
[ "Mathematics" ]
445
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory", "Functors", "Algebraic geometry" ]
53,264,410
https://en.wikipedia.org/wiki/Multipole%20density%20formalism
The Multipole Density Formalism (also referred to as the Hansen-Coppens Formalism) is an X-ray crystallography method of electron density modelling proposed by Niels K. Hansen and Philip Coppens in 1978. Unlike the commonly used Independent Atom Model, the Hansen-Coppens Formalism presents an aspherical approach, allowing one to model the electron distribution around a nucleus separately in different directions and therefore describe numerous chemical features of a molecule inside the unit cell of an examined crystal in detail. Theory Independent Atom Model The Independent Atom Model (abbreviated to IAM), upon which the Multipole Model is based, is a method of charge density modelling. It relies on the assumption that the electron distribution around an atom is isotropic, and that charge density therefore depends only on the distance from the nucleus. The choice of the radial function used to describe this electron density is arbitrary, granted that its value at the origin is finite. In practice either Gaussian- or Slater-type 1s-orbital functions are used. Due to its simplistic approach, this method provides a straightforward model which requires no additional parameters (other than positional and Debye–Waller factors) to be refined. This allows the IAM to perform satisfactorily when a relatively low amount of data from the diffraction experiment is available. However, the fixed shape of the singular basis function prevents any detailed description of aspherical atomic features. Kappa Formalism In order to adjust some valence shell parameters, the Kappa formalism was proposed. It introduces two additional refinable parameters: an outer (valence) shell population, denoted $P_v$, and its expansion/contraction, $\kappa$. The electron density is then formulated as $\rho(r) = \rho_{\text{core}}(r) + P_v \kappa^3 \rho_{\text{valence}}(\kappa r)$. While $P_v$, being responsible for the charge-flow part, is linearly coupled with partial charge, the normalised parameter $\kappa$ scales the radial coordinate $r$. Therefore, lowering $\kappa$ results in expansion of the outer shell and, conversely, raising it results in contraction. Although the Kappa formalism is still, strictly speaking, a spherical method, it is an important step towards understanding modern approaches, as it allows one to distinguish chemically different atoms of the same element. Multipole description In the multipole model description, the charge density around a nucleus is given by the following equation: $\rho(\mathbf{r}) = P_c \rho_{\text{core}}(r) + P_v \kappa^3 \rho_{\text{valence}}(\kappa r) + \sum_{l=0}^{l_{\max}} \kappa'^3 R_l(\kappa' r) \sum_{m=0}^{l} P_{lm\pm} d_{lm\pm}(\theta, \varphi)$. The spherical part remains almost indistinguishable from the Kappa formalism, the only difference being the parameter $P_c$ corresponding to the population of the inner shell. The real strength of the Hansen-Coppens formalism lies in the right, deformational part of the equation. Here $\kappa'$ fulfils a role similar to $\kappa$ in the Kappa formalism (expansion/contraction of the aspherical part), whereas the individual $R_l$ are fixed radial functions, analogous to $\rho_{\text{valence}}$. Spherical harmonic functions $d_{lm\pm}$ (each with its population parameter $P_{lm\pm}$) are, however, introduced to simulate the electrically anisotropic charge distribution. In this approach, a fixed coordinate system for each atom needs to be applied. Although at first glance it seems practical to arbitrarily and indiscriminately make it contingent on the unit cell for all atoms present, it is far more beneficial to assign each atom its own local coordinates, which allows for focusing on hybridisation-specific interactions. 
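To illustrate the pseudoatom expression above numerically, here is a hedged Python sketch with a single dipole ($l = 1$) deformation term; the Slater exponents, populations and normalization shortcuts are illustrative assumptions, not values from any refinement program:

```python
# Hedged sketch of a Hansen-Coppens pseudoatom density with one dipole term:
#   rho(r) = Pc*rho_core(r) + Pv*k^3*rho_val(k*r)
#            + k'^3 * R1(k'*r) * P10 * d10(theta)
# Exponents and populations are made up for illustration; the density
# normalization constant of the spherical harmonic is folded into P10.
import math
import numpy as np

def slater(r, zeta, n=0):
    """Slater-type radial function r**n * exp(-zeta*r), normalized so that
    its integral over all space (weight 4*pi*r**2 dr) equals 1."""
    norm = zeta ** (n + 3) / (4.0 * np.pi * math.factorial(n + 2))
    return norm * r**n * np.exp(-zeta * r)

def rho_pseudoatom(r, theta, Pc=2.0, Pv=4.1, kappa=1.02, kappa_p=0.93,
                   P10=0.25, z_core=12.0, z_val=3.2):
    core = Pc * slater(r, z_core)
    valence = Pv * kappa**3 * slater(kappa * r, z_val)
    # d10 ~ cos(theta): a dipole lobe along the atom's local z axis
    deform = kappa_p**3 * slater(kappa_p * r, z_val, n=1) * P10 * np.cos(theta)
    return core + valence + deform

r = np.linspace(0.1, 2.0, 5)
print(rho_pseudoatom(r, theta=0.0))    # density along +z (bond direction)
print(rho_pseudoatom(r, theta=np.pi))  # opposite lobe: aspherical by design
```

The two printed profiles differ along the local z axis, which is exactly the directional flexibility the spherical IAM and Kappa models cannot express.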
While the singular sigma bond of hydrogen can be described well using certain z-parallel pseudoorbitals, xy-plane-oriented multipoles with a 3-fold rotational symmetry will prove more beneficial for flat aromatic structures. Applications The primary advantage of the Hansen-Coppens formalism is its ability to free the model from spherical restraints and describe the surroundings of a nucleus far more accurately. In this way it becomes possible to examine some molecular features which would normally be only roughly approximated or completely ignored. Hydrogen positioning X-ray crystallography allows the researcher to precisely determine the position of peak electron density and to reason about the placement of nuclei based on this information. This approach works without any problems for heavy (non-hydrogen) atoms, whose inner shell electrons contribute to the density function to a far greater degree than the outer shell electrons. However, hydrogen atoms possess a feature unique among all the elements: they possess exactly one electron, which is located in their valence shell and is therefore involved in creating strong covalent bonds with atoms of various other elements. While a bond is forming, the maximum of the electron density function moves significantly away from the nucleus and towards the other atom. This prevents any spherical approach from determining the hydrogen position correctly by itself. Therefore, usually the hydrogen position is estimated based on neutron crystallography data for similar molecules, or it is not modelled at all in the case of low-quality diffraction data. It is possible (albeit disputable) to freely refine hydrogen atoms' positions using the Hansen-Coppens formalism, after releasing the bond lengths from any restraints derived from neutron measurements. The bonding orbital simulated with adequate multipoles describes the density distribution neatly while preserving believable bond lengths. It may be worth approximating hydrogen atoms' anisotropic displacement parameters, e.g. using SHADE, before introducing the formalism and, possibly, discarding bond distance constraints. Bonding modelling In order to analyse the length and strength of various interactions within the molecule, Richard Bader's "atoms in molecules" theory may be applied. Due to the complex description of the electron field provided by this aspherical model, it becomes possible to establish realistic bond paths between interacting atoms as well as to find and characterise their critical points. Deeper insight into these data yields useful information about bond strength, type, polarity or ellipticity, and, when compared with other molecules, brings greater understanding of the actual electron structure of the examined compound. Charge flow Because the population of each multipole of every atom is refined independently, individual charges will rarely be integers. In real cases, electron density flows freely through the molecule and is not bound by the restrictions of the outdated Bohr atom model that are built into the IAM. Therefore, through e.g. an accurate Bader analysis, net atomic charges may be estimated, which again is beneficial for deepening the understanding of the systems under investigation. Drawbacks and limitations Although the Multipole Formalism is a simple and straightforward alternative means of structure refinement, it is definitely not flawless. 
While usually either three or nine parameters are to be refined for each atom, depending on whether an anisotropic displacement is taken into account or not, a full multipole description of heavy atoms belonging to the fourth and subsequent periods (such as chlorine, iron or bromine) requires refinement of up to 37 parameters. This proves problematic for any crystals possessing large asymmetric units (especially macromolecular compounds) and renders a refinement using the Hansen-Coppens Formalism unachievable for low-quality data with an unsatisfactory ratio of independent reflections to refined parameters. Caution should be taken while refining some of the parameters simultaneously (e.g. $\kappa$ and $\kappa'$, multipole populations and thermal parameters), as they may correlate strongly, resulting in an unstable refinement or unphysical parameter values. Applying additional constraints resulting from local symmetry for each atom in a molecule (which decreases the number of refined multipoles), or importing populational parameters from existing databases, may also be necessary to achieve a passable model. On the other hand, the aforementioned approaches significantly reduce the amount of information required from experiments, while preserving some level of detail concerning the aspherical charge distribution. Therefore, even macromolecular structures with satisfactory X-ray diffraction data can be modelled aspherically in a similar fashion. Despite their similarity, individual multipoles do not correspond to atomic projections of the molecular orbitals of a wavefunction as resulting from quantum calculations. Nevertheless, as brilliantly summarized by Stewart, "The structure of the model crystal density, as a superposition of pseudoatoms [...] does have quantitative features which are close to many results based on quantum chemical calculations". If the overlap between the atomic wavefunctions is small enough, as occurs for example in transition metal complexes, the atomic multipoles may be correlated with the atomic valence orbitals, and multipolar coefficients may be correlated with populations of the metal d-orbitals. A stronger correlation between the X-ray measured diffracted intensities and quantum mechanical wavefunctions is possible using the wavefunction-based methods of quantum crystallography, for example the X-ray atomic orbital model, the so-called experimental wavefunction, or Hirshfeld atom refinement. References Theoretical chemistry X-ray crystallography Crystallography Diffraction
Multipole density formalism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,774
[ "Spectrum (physical sciences)", "Materials science", "Theoretical chemistry", "Crystallography", "Diffraction", "Condensed matter physics", "nan", "X-ray crystallography", "Spectroscopy" ]
53,264,857
https://en.wikipedia.org/wiki/Probability%20management
The discipline of probability management communicates and calculates uncertainties as data structures that obey both the laws of arithmetic and the laws of probability, while preserving statistical coherence. The simplest approach is to use vector arrays of simulated or historical realizations and metadata called Stochastic Information Packets (SIPs). A set of SIPs which preserves the statistical relationships between variables is said to be coherent and is referred to as a Stochastic Library Unit with Relationships Preserved (SLURP). SIPs and SLURPs allow stochastic simulations to communicate with one another; examples include Analytica, Oracle Crystal Ball, Frontline Solvers, and Autobox. The first large documented application of SIPs involved the exploration portfolio of Royal Dutch Shell in 2005, as reported by Savage, Scholtes, and Zweidler, who formalized the discipline of probability management in 2006. The topic has also been explored at length in subsequent publications. Vectors of simulated realizations of probability distributions have been used to drive stochastic optimization since at least 1991. Andrew Gelman described such arrays of realizations as random variable objects in 2007. A recent approach does not store the actual realizations, but delivers formulas known as virtual SIPs that generate identical simulation trials in the host environment regardless of platform. This is accomplished through inverse transform sampling, also known as the F-inverse method, coupled to a portable pseudorandom number generator, which produces the same stream of uniform random numbers across platforms. Quantile-parameterized distributions (QPDs) are convenient for inverse transform sampling in this context. In particular, the Metalog distribution is a flexible continuous probability distribution that has simple closed-form equations and can be parameterized directly by data, using only a handful of parameters. An ideal pseudorandom number generator for driving inverse transforms is the HDR generator developed by Douglas W. Hubbard. It is a counter-based generator with a four-dimensional seed plus an iteration index that runs on virtually all platforms, including Microsoft Excel. This allows simulation results derived in R, Python, or other readily available platforms to be delivered identically, trial by trial, to a wide audience, in terms of a few parameters for a Metalog distribution accompanied by the five inputs to the HDR generator. In 2013, ProbabilityManagement.org was incorporated as a 501(c)(3) nonprofit that supports this approach through education, tools, and open standards. Executive Director Sam Savage is the author of The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty and is an adjunct professor at Stanford University. Harry Markowitz, Nobel Laureate in Economics, was a co-founding board member. The nonprofit has received financial support from Chevron Corporation, General Electric, Highmark Health, Kaiser Permanente, Lockheed Martin, PG&E, and Wells Fargo Bank. The SIPmath 2.0 Standard supports XLSX, CSV, and XML formats. The SIPmath 3.0 Standard uses JSON objects to convey virtual SIPs based on the Metalog distribution and the HDR generator. References Stochastic simulation Monte Carlo methods Probability distributions Risk analysis
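As an illustration of SIPs and the inverse transform (F-inverse) idea described above, here is a minimal Python sketch; it substitutes NumPy's counter-based Philox generator for the HDR generator (whose exact recurrence is not reproduced here), and `make_sip` and the example distributions are illustrative assumptions, not part of any SIPmath standard library:

```python
# Hedged sketch of SIP-based arithmetic driven by inverse transform sampling.
import numpy as np
from scipy import stats

N_TRIALS = 10_000

def make_sip(dist, seed):
    """A SIP: a vector of simulated realizations of `dist`, produced by
    feeding reproducible uniforms from a counter-based generator through
    the distribution's quantile function (F-inverse)."""
    u = np.random.Generator(np.random.Philox(seed)).uniform(size=N_TRIALS)
    return dist.ppf(u)

revenue = make_sip(stats.lognorm(s=0.4, scale=100.0), seed=1)
cost = make_sip(stats.norm(loc=60.0, scale=10.0), seed=2)

# SIPs obey the laws of arithmetic trial by trial...
profit = revenue - cost
# ...and of probability: estimate P(loss) directly from the vector
print("P(profit < 0) ~", float(np.mean(profit < 0)))
```

Because the uniform stream is fully determined by the seed, the same trials are regenerated identically on any platform with the same generator, which is the portability property the virtual-SIP approach relies on.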
Probability management
[ "Physics", "Mathematics" ]
640
[ "Functions and mappings", "Probability distributions", "Monte Carlo methods", "Mathematical objects", "Computational physics", "Mathematical relations" ]
53,267,005
https://en.wikipedia.org/wiki/Design%20for%20verification
Design for verification (DfV) is a set of engineering guidelines to aid designers in ensuring right-first-time manufacturing and assembly of large-scale components. The guidelines were developed as a tool to inform and direct designers during early-stage design phases, to trade off estimated measurement uncertainty against tolerance, cost, assembly, measurability and product requirements. Background Increased competition in the aerospace market has placed additional demands on aerospace manufacturers to reduce costs, increase product flexibility and improve manufacturing efficiency. There is a knowledge gap within the sphere of digital-to-physical dimensional verification and in how to successfully achieve dimensional specifications within real-world assembly factories that are subject to varying environmental conditions. The DfV framework is an engineering principle to be used within low-rate, high-value, high-complexity manufacturing industries to aid in achieving high productivity in assembly via the effective dimensional verification of large-volume structures during final assembly. The DfV framework has been developed to enable engineers to design and plan the effective dimensional verification of large-volume, complex structures in order to reduce failure rates and end-product costs, improve process integrity and efficiency, optimise metrology processes, decrease tooling redundancy and increase product quality and conformance to specification. The theoretical elements of the DfV methods were published in 2016, together with their testing using industrial case studies of representative complexity. The industrial tests, published on ScienceDirect, showed that using the new design-for-verification methods alongside the traditional 'design for X' toolbox resulted in improved tolerance analysis and synthesis, optimized large-volume metrology and assembly processes, and more cost-effective tool and jig design. See also Design for assembly Design for inspection Design for manufacturability Design for X References Quality control
Design for verification
[ "Engineering" ]
351
[ "Design stubs", "Design", "Design for X" ]