id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
73,041,236 | https://en.wikipedia.org/wiki/Poly%28phthalaldehyde%29 | Poly(phthalaldehyde), abbreviated as PPA, is a metastable stimuli-responsive polymer first synthesized in 1967. It has garnered significant attention in recent years due to its ease of synthesis and outstanding transient and mechanical properties. For this reason, it has been exploited for a variety of applications including sensing, drug delivery, and EUV lithography. As of 2023, it is considered the only aromatic aldehyde polymerized through a living chain-growth polymerization.
Discovery and history
Poly(phthalaldehyde) was first reported in 1967 by Chuji Aso and Sanae Tagami from the Department of Organic Synthesis at Kyushu University, via an addition homopolymerization of aromatic o-phthalaldehyde. This polymer, consisting of a polyacetal main chain, is to date still the only aromatic aldehyde that can be homopolymerized through a chain-growth polymerization method. It is a white, brittle solid with a low ceiling temperature and significant self-immolative properties. It has gathered significant attention in recent years, especially in the development of novel responsive materials and applications.
Synthesis techniques
Since its inception in 1967, many synthesis techniques have been developed and employed for the polymerization of o-phthalaldehyde. Most notably, living polymerization methods are among the most common and promising techniques used, as reflected in the large number of publications in the literature describing their use in poly(phthalaldehyde) preparation.
Living cationic polymerization (LCP)
History and main idea
Aso and Tagami were the first to report the polymerization of o-phthalaldehyde in 1967, using the living cationic polymerization technique. This technique, which was initially thought to require the use of a strong Brønsted acid to initiate polymerization in addition to a strong nucleophile to terminate polymerization and endcap the polymer chain, had been proven successful in a number of polymerization processes reported earlier. Interestingly, the authors were able to produce this polymer without using either an initiator or a terminator; the polymer's structure was later determined to be cyclic. In fact, they worked at liquid-nitrogen temperature and relied on a boron trifluoride etherate catalyst, which was sufficient to produce a polymer stable enough at room temperature for a few days.
Current trends
In the following years, polymer chemists started studying the characteristics of this polymer and worked on enhancing its thermal stability and mechanical properties. In particular, Moore and coworkers conducted rigorous mechanistic studies on poly(phthalaldehyde) by modifying the type of catalyst used, as well as the starting monomer concentration in an effort to control the molar mass, decrease the polydispersity index, and increase the polymer's purity. Among the catalysts used were triethyloxonium borofluoride, tin chloride, and triphenylmethylium tetrafluoroborate.
Limitations
While LCP was the first and sole method used to produce poly(phthalaldehyde), its usage nowadays has dramatically decreased in favor of other polymerization techniques which allow a better control over the polymer properties including molar mass and thermal stability.
Living anionic polymerization (LAP)
History and main idea
Although this polymerization technique did not gain popularity until around 2010, it too was reported by Aso and Tagami, in 1969. In general, LAP involves the use of a strong nucleophile to initiate polymerization, in addition to the employment of an electrophile as a terminator to endcap the polymer chain. In Tagami's article, PPA was prepared by utilizing tert-butyllithium as an initiator and acetic anhydride as a terminator. However, the drawbacks faced when utilizing LCP (high polydispersity index (PDI), low yield, and no control over molecular weight) were also encountered in this polymerization technique.
Current trends
It was not until 1987 that two chemists, Hedrick and Schlemper, from the University of Freiburg proposed the use of phosphazene bases to speed up the reaction and lower the polydispersity index. As of 2023, three different phosphazene bases have been used in PPA polymerization. Moreover, most of the published research articles describing PPA synthesis between 2008 and 2023 revolve around the use of LAP, rendering it the most common and effective polymerization technique.
Advantages
The major advantage this polymerization technique presents over LCP lies in the fact that the polymer can be end-capped on both sides of the chain with stimuli-responsive groups. The tuning of PPA by these functional groups has not only expanded the set of applications this polymer can be used in, but has also improved its properties and attributes. For instance, by controlling the o-phthalaldehyde monomer/alcohol initiator concentration ratio, ultra-high molecular weight (50-150 kDa) PPA can be obtained. Furthermore, PPA synthesized through LAP is more thermally and mechanically stable. Generally, the presence of endcaps on both ends stabilizes the polymer and results in a more flexible chain with a high thermal stability. Because linear polymers synthesized by the LAP method can be end-capped, whereas cyclic polymers prepared via the LCP method cannot be end-capped with functional groups, LAP results in more thermally stable polymers. LAP-derived PPA also has a much lower PDI, ranging between 1.3 and 1.9, as opposed to PPA synthesized through LCP, which has a PDI ranging between 2 and 4.5. This is because of the ability to control the character, molecular weight, and end group of the polymer. Furthermore, the initiator used in the LAP synthesis method, which is a strong nucleophile, acts as the first endcap, and hence by controlling the amount of initiator used, control over the molar mass and PDI can be obtained. This is in contrast to cyclic PPA synthesized through LCP, where the initiator (a Lewis acid) does not become part of the final PPA product, and hence controlling the amount of Lewis acid used has little to no effect on the final molar mass and PDI of the cyclic PPA polymer.
Coordinative polymerization (CP)
Although a lesser-known polymerization technique, coordinative polymerization has been used a few times in PPA preparation. It mostly requires the activation of transition metal catalysts with trimethylaluminum or diethylaluminum chloride and allows control over the stereoselectivity of the compound. Another advantage of this technique lies in the use of water as a co-catalyst in PPA synthesis, which is deemed impossible in other polymerization methods. Professor Hisaya Tani from the Department of Polymer Science at Osaka University was the first to report a stereospecific polymerization of o-phthalaldehyde by employing dimeric dimethylaluminumoxybenzylideneaniline [Me2AlOCMeNPh]2 as a catalyst and water as a co-catalyst. He was able to synthesize a fibrous PPA in exclusively trans-configuration, which had never been reported before. Nonetheless, due to the inability to endcap the polymer with functional groups, this technique is rarely utilized at present, and the mechanism of formation of PPA by this route remains ambiguous and not well studied.
Types of poly(phthalaldehyde)
Depending on the polymerization technique applied, two different types of poly(phthalaldehyde) can be obtained: linear and cyclic.
Linear PPA
Linear PPA is produced by anionic polymerization methods using a strong nucleophile as an initiator. This technique prevents the cyclization of the polymer chain, as the propagating species have only one charged terminus that cannot backbite the other terminus, which, in turn, is neutral in charge. Although processing linear PPA requires highly sensitive reaction conditions and is more time-consuming, this type of polymer has many advantages over its cyclic counterpart. For instance, control over the polymer's molar mass can easily be achieved by controlling the monomer and alcohol initiator ratios. Furthermore, it has been proven to be more thermally stable than its cyclic counterpart, due to the presence of functionalized endcaps that stabilize the polymer chain against depolymerization. For these reasons, it has been studied to a far greater extent than cyclic PPA. Various linear PPAs with distinct end groups have been reported and studied for a variety of applications including sensing, drug delivery, and lithography. For instance, once these end groups are cleaved in response to the exposure of PPA to a specific stimulus, the polymer sequentially disassembles from head to tail through an unzipping reaction to re-form the monomer, in times that can be as short as a few minutes.
Cyclic PPA
Cyclic PPA is obtained through a cationic polymerization of o-phthalaldehyde using a Lewis acid, typically boron trifluoride etherate, as an initiator. When Aso and Tagami first reported the successful synthesis of PPA using this technique in 1967, they were unaware of the fact that the polymer they prepared was cyclic and instead reported the structure as linear in their research paper. It was not until 2013 that polymer chemists proved that the structure is cyclic, using a combination of characterization techniques including nuclear magnetic resonance (NMR), Fourier-transform infrared spectroscopy (FT-IR), gel permeation chromatography (GPC), and mass spectrometry (MS). Cyclic PPA is easy to synthesize; Prof. Jeffrey Moore reported that the cationic polymerization of o-phthalaldehyde is very fast, yielding cyclic PPA within a few minutes. Furthermore, the polymer can be isolated without the addition of pyridine, methanol, or a strong base terminator, which in general makes this polymerization technique easy, fast, and cheap. Nevertheless, a known issue of this technique is that the molecular weight cannot be controlled based on the initial concentration of the monomer used, which typically leads to cyclic PPA with a wide variety of molecular weights, ranging from 3 kDa to 100 kDa under the same starting conditions. Furthermore, because of its cyclic structure, no end caps are used or needed. The absence of functionalized end caps in the structure has limited the usage of cyclic PPA, especially in stimuli-responsive applications.
Properties and characteristics
PPA is a metastable polymer known for its ease of synthesis and rapid depolymerization. In addition, its properties can be easily influenced and manipulated by either functionalizing the phthalaldehyde monomer with different groups, most effectively electron-withdrawing groups, or employing different functional groups as end caps.
Mechanical properties
PPA is known to have a rigid and brittle backbone, which limits its flexibility and usage in some applications. However, it can be easily tuned by adding additives, rendering it a soft material. The mechanical properties of cyclic PPA films drop-cast using different solvents have recently been investigated. The study showed the polymer to possess a large elastic modulus of 2.5-3 GPa, which was also previously reported in another study, in addition to tensile strength values ranging between 25 and 35 MPa and a failure strain of 1-1.5% that is highly dependent on the solvent used.
Plasticizers as additives
With the surge in the usage of PPA during the past few years for various applications, the need to improve the transient properties and enhance the mechanical features of this polymer has come to the surface. PPA is known to be brittle; it possesses a large storage modulus and a glass transition temperature that is above its thermal degradation point, which renders the polymer unsuitable for a broad range of applications. One way to ameliorate its intrinsic properties is via the addition of a plasticizing agent that can disrupt the polymer's intermolecular packing, thus making it more flexible, decreasing its storage modulus, depressing its glass transition temperature, and increasing its shear strength. A few examples of plasticizers that have been used with PPA include dimethyl phthalate, bis(2-ethylhexyl) phthalate, diethyl adipate, and tri-isononyl trimellitate (TINTM). In a recent study, the effect of two ether-ester plasticizers on the mechanical flexibility and photo-transience speed of cyclic PPA was investigated. The authors were able to show that the addition of these additives broadened the storage modulus range and decreased it from 2300 MPa for pure PPA down to 19 MPa for the PPA/plasticizer mixture, hence making the polymer more flexible and requiring less energy to be distorted. In another study published by the same research group, the effect of diethyl adipate (DEA) plasticizer on the glass transition temperature of cyclic PPA was investigated. After determining the glass transition temperature of pure PPA to be 187 °C, PPA films with various DEA concentrations were prepared. By varying the DEA concentration, the authors were able to depress Tg to 12.5 °C, demonstrating the importance of plasticizers in enhancing the mechanical flexibility and thermal properties of PPA. Similar results were previously observed where the thermal transitions were depressed from 95 °C for cPPA to 24 °C for diethyl phthalate (DEP)-plasticized cPPA. Among the few studies that have been reported on the usage of plasticizers with PPA, it has been noted that the use of plasticizers results in a decrease in the tensile stress of the polymers, which indicates that PPA becomes more flexible and hence the film can fold more easily. Nevertheless, control over the amount of plasticizer used is important. For instance, in the study discussed above, it has been reported that the usage of a large amount of plasticizer (more than 50% w/w relative to the PPA polymer) results in phase segregation and a decrease in the flexibility of the PPA film. Furthermore, the nature of the solvent used can strongly affect the mechanical properties of PPA as well. In particular, in another study published in 2019, both the elastic modulus and tensile strength increased when dichloromethane was used as a solvent to drop-cast PPA, in comparison to dioxane and chloroform.
Thermal properties
The thermal stability of PPA is highly dependent on whether the polymer is end-capped or isolated without end groups. Cyclic PPA, in addition to functionalized linear PPA chains, is known to be thermally stable up to 150 °C, as determined by both differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). Moreover, the polymer is known for its long shelf life, wherein it can be stored at room temperature for a significant amount of time. Various chemists have studied substitution effects on the thermal stability of PPA. For instance, scientists at the International Business Machines Corporation (IBM) concluded, after extensive studies, that o-phthalaldehyde monomers functionalized with chloro, bromo, and 4-trimethylsilyl functional groups result in highly stable PPA compared to the unsubstituted polymer. Similarly, Phillips et al. showed that the substituted and end-capped poly(4,5-dichlorophthalaldehyde) possesses higher thermal degradation temperatures than its unsubstituted counterpart.
Chemical properties
By controlling the identity and reactivity of the endcaps, PPA can withstand harsh chemical conditions with no significant changes in its structure. For instance, while functionalizing PPA with allyl acetate and tert-butyldimethylsilyl ether functional groups can lead to its rapid depolymerization in the presence of Pd(0) and F−, respectively, a simple change in the nature of the endcaps will preserve the chain even in the presence of both corrosive agents. On a separate note, while PPA is insoluble in aqueous solvents and alcohols, it is highly soluble in organic solvents such as THF, DCM, and DMSO, where it can remain dissolved for days without triggering depolymerization.
Applications
Due to its unique stability, chemical properties, and outstanding tunability and reactivity, PPA has been employed in a variety of applications.
Photoresist
The high solubility and stability of PPA in organic solvents allowed its investigation as a base material in first-generation chemically amplified photoresists for lithography in the early 1980s by three scientists, Grant Willson, Jean Fréchet, and Hiroshi Ito, who were working at IBM at the time. The story of how this successful achievement started and progressed can be found in the review paper written by Hiroshi Ito. Because PPA by itself does not undergo complete depolymerization upon exposure to light, it is usually end-capped or used along with photoacid generators (PAGs) for enhanced sensitivity. In this case, depolymerization is triggered upon irradiation either by end-cap removal and self-immolation or by the generated acid. Ober et al. stated that the use of PPA as a photoresist under extreme ultraviolet (EUV) irradiation is yet to be successful due to the instability of PPA and the volatility of its monomers. However, they were able to report one of the first PPA derivatives with enhanced photoresist properties upon EUV exposure without the use of PAGs.
Drug release
Owing to its high reactivity and the ability to tune its endcap groups, PPA has lately been utilized in drug delivery applications. In one recent study, UV-sensitive PPA microcapsules containing different types of drugs were prepared. Once the capsules were subjected to a UV-light trigger, an unzipping reaction took place and the shell ruptured, which led to the release of the core contents of these microcapsules. A unique advantage of these microcapsules is that they allow the immediate release of the drug upon exposure to the trigger, rather than its continuous release over a period of time ranging from minutes to hours, as other common microcapsules function. In an earlier publication, DiLauro et al. reported the ability to predesign and control the thickness of the microcapsule shells and the length of the PPA used to form the shell, which has stimuli-responsive endcaps allowing head-to-tail fluoride-triggered depolymerization.
Sensing through depolymerization
PPA is known as a self-immolative material which depolymerizes through endcap cleavage in response to a specific stimulus. For this reason, several PPA polymers with different endcaps have been synthesized and used as self-immolative materials for sensing toxic and specific compounds.
Acid-triggered depolymerization
Due to the presence of two types of oxygen atoms in the PPA backbone, and the fact that H+ readily protonates oxygen atoms, depolymerization can occur both through endcap cleavage and through protonation of oxygen atoms present in the backbone. For this reason, polymer chemists tend to use endcaps rich in oxygen atoms to accelerate the depolymerization rate. For example, Moore and co-workers reported the use of a specific ion coactivation (SICA) effect that allowed ion- and acid-coactivated triggered depolymerization of cyclic PPA microcapsules at the solid/liquid interface of the polymer and solution.
Fluoride-triggered depolymerization
Silyl groups can be deprotected by fluoride ions, resulting in a strong Si–F bond that is hard and challenging to break. For this reason, different polymer chemists started to employ PPA in fluoride sensing by using tert-butyldimethylsilyl (TBS)-containing initiators and terminators. The fluoride-sensing ability of PPA has previously been used in applications such as drug release, as reported by DiLauro et al. Another application, studied by Phillips and co-workers, includes the use of fluoride-triggered PPA depolymerization to change the structure of plastics in a predetermined way.
UV-light triggered depolymerization
To demonstrate its capability of rapidly depolymerizing in the presence of UV light, DiLauro et al. synthesized a PPA polymer with two UV-sensitive endcaps, 2-nitro-4,5-dimethoxybenzyl alcohol and 1-[[(chlorocarbonyl)oxy]methyl]-4,5-dimethoxy-2-nitrobenzene, and were able to achieve complete depolymerization in a few minutes. In a practical application in organic electronics, cyclic PPA in the presence of 2-(4-methoxystyryl)-4,6-bis(trichloromethyl)-1,3,5-triazine (MBTT, used as a PAG) undergoes depolymerization upon exposure to UV light, which in turn deactivates the transient electronics. Another similar application in transient electronics was reported, in which an organic light-emitting diode (OLED) was integrated on the PPA substrate and can cause depolymerization in the presence of a PAG.
Pd(0)-triggered depolymerization
Apart from its usage in sensing acids and fluoride anions, PPA has been used in sensing Pd(0) metal by employing allyl chloroformate as a terminating end cap. This was reported by Phillips and his research group, who used an allyl formate endcap that depolymerized stoichiometrically within minutes upon exposure to a catalytic amount of tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4).
Health and safety
According to the safety data sheet of PPA, it should not be allowed to come into contact with the skin or eyes, as it may lead to skin, eye, and respiratory irritation or allergic reactions. In addition, as some unfunctionalized PPAs are unstable at temperatures even lower than room temperature, it is important to note that PPA should be stored at temperatures below -10 °C under an inert atmosphere and away from sunlight, moisture, and heat, but with proper ventilation.
Since the depolymerization of PPA is widely studied for its applications, it is important to also note the possible safety concerns of its monomer. In addition to the abovementioned hazards of PPA, phthalaldehyde is very toxic if swallowed and is toxic to aquatic life.
References
Polymer chemistry
Smart materials
Soft matter | Poly(phthalaldehyde) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,752 | [
"Soft matter",
"Materials science",
"Condensed matter physics",
"Polymer chemistry",
"Smart materials"
] |
73,052,423 | https://en.wikipedia.org/wiki/Double%20operator%20integral | In functional analysis, double operator integrals (DOI) are integrals of the form
$$ Q_\phi = \int_M \int_N \phi(x, y) \, dE(x) \, T \, dF(y), $$
where $T : H_2 \to H_1$ is a bounded linear operator between two separable Hilbert spaces, $E : (M, \mathcal{A}) \to P(H_1)$ and $F : (N, \mathcal{B}) \to P(H_2)$ are two spectral measures, where $P(H)$ stands for the set of orthogonal projections over a Hilbert space $H$, and $\phi$ is a scalar-valued measurable function called the symbol of the DOI. The integrals are to be understood in the form of Stieltjes integrals.
Double operator integrals can be used to estimate the differences of two operators and have applications in perturbation theory. The theory was mainly developed by Mikhail Shlyomovich Birman and Mikhail Zakharovich Solomyak in the late 1960s and 1970s; however, the integrals appeared earlier in a paper by Daletskii and Krein.
Double operator integrals
The map
$$ J_\phi^{E,F} : T \mapsto Q_\phi $$
is called a transformer. We simply write $J_\phi := J_\phi^{E,F}$ when it is clear which spectral measures we are looking at.
Originally Birman and Solomyak considered a Hilbert–Schmidt operator $T$ and defined a spectral measure $\mathcal{G}$ by
$$ \mathcal{G}(\Lambda \times \Delta) T = E(\Lambda) \, T \, F(\Delta) $$
for measurable sets $\Lambda \times \Delta \subseteq M \times N$; the double operator integral can then be defined as
$$ Q_\phi = \left( \int_{M \times N} \phi \, d\mathcal{G} \right) (T) $$
for bounded and measurable functions $\phi$. However, one can look at more general operators $T$ as long as $Q_\phi$ stays bounded.
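In the finite-dimensional case this construction reduces to an entrywise (Schur) multiplication; the following is an illustrative sketch in the notation introduced above, rather than a statement taken from the cited literature. If $H_1 = H_2 = \mathbb{C}^n$ and $A = \sum_i \lambda_i P_i$, $B = \sum_j \mu_j Q_j$ are self-adjoint matrices with spectral projections $P_i$, $Q_j$, then
$$ Q_\phi(T) = \sum_{i,j} \phi(\lambda_i, \mu_j) \, P_i \, T \, Q_j , $$
so that, in bases of eigenvectors of $A$ and $B$, the double operator integral simply multiplies each matrix entry $T_{ij}$ by the scalar $\phi(\lambda_i, \mu_j)$.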
Examples
Perturbation theory
Consider the case where $H$ is a Hilbert space and let $A$ and $B$ be two bounded self-adjoint operators on $H$. Let $T := \operatorname{Id}_H$ and let $f$ be a function on a set $S$, such that the spectra $\sigma(A)$ and $\sigma(B)$ are in $S$. As usual, $\operatorname{Id}_H$ is the identity operator. Then by the spectral theorem $f(A) = \int_{\sigma(A)} f(\lambda) \, dE_A(\lambda)$ and $f(B) = \int_{\sigma(B)} f(\mu) \, dE_B(\mu)$ and $\operatorname{Id}_H = \int dE_A(\lambda) = \int dE_B(\mu)$, hence
$$ f(A) - f(B) = \iint \bigl( f(\lambda) - f(\mu) \bigr) \, dE_A(\lambda) \, dE_B(\mu) $$
and so
$$ f(A) - f(B) = \iint \frac{f(\lambda) - f(\mu)}{\lambda - \mu} \, dE_A(\lambda) \, (A - B) \, dE_B(\mu) = J_\phi (A - B), \qquad \phi(\lambda, \mu) := \frac{f(\lambda) - f(\mu)}{\lambda - \mu}, $$
where $E_A$ and $E_B$ denote the corresponding spectral measures of $A$ and $B$.
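For instance, for $f(x) = x^2$ the symbol is the divided difference
$$ \phi(\lambda, \mu) = \frac{\lambda^2 - \mu^2}{\lambda - \mu} = \lambda + \mu , $$
and the double operator integral reproduces the elementary identity
$$ A^2 - B^2 = \iint (\lambda + \mu) \, dE_A(\lambda) \, (A - B) \, dE_B(\mu) = A(A - B) + (A - B)B , $$
since $\int \lambda \, dE_A(\lambda) = A$ acts from the left and $\int \mu \, dE_B(\mu) = B$ acts from the right.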
Literature
References
Functional analysis
Definitions of mathematical integration | Double operator integral | [
"Mathematics"
] | 315 | [
"Functional analysis",
"Mathematical objects",
"Functions and mappings",
"Mathematical relations"
] |
60,866,579 | https://en.wikipedia.org/wiki/Trisporic%20acid | Trisporic acids (TSAs) are C-18 terpenoid compounds synthesized via β-carotene and retinol pathways in the zygomycetes. They are pheromone compounds responsible for sexual differentiation in those fungal species. TSAs and related compounds make up the trisporoid group of chemicals.
History
Trisporic acid was discovered in 1964 as a metabolite that caused enhanced carotene production in Blakeslea trispora. It was later shown to be the hormone that brought about zygophore production in Mucor mucedo. The American mycologist and geneticist Albert Francis Blakeslee discovered that some species of Mucorales were self-sterile (heterothallic), with interactions of two strains, designated (+) and (-), being necessary for the initiation of sexual activity. This interaction was found by Hans Burgeff of the University of Goettingen to be due to the exchange of low molecular weight substances that diffused through the substratum and atmosphere. This work constituted the first demonstration of sex hormone activity in any fungus. The elucidation of the hormonal control of sexual interaction in the Mucorales extended over 60 years and involved mycologists and biochemists from Germany, Italy, the Netherlands, the UK and the USA.
Functions in Mucorales
Recognition of compatible sexual partners in zygomycota is based on a cooperative biosynthesis pathway of trisporic acid. Early trisporoid derivatives and trisporic acid induce swelling of two potential hyphae, hence called zygophores, and a chemical gradient of these inducer molecules results in growth towards each other. These progametangia come into contact with each other and build a strong connection. In the next stage, septa are established to delimit the developing zygospore from the vegetative mycelium; in this way the zygophores become suspensor hyphae and gametangia are formed. After dissolution of the fusion wall, the cytoplasm and a high number of nuclei from both gametangia are mixed. A selection process (unstudied) results in a reduction of nuclei, and meiosis takes place (also unstudied to date). Several cell wall modifications, as well as incorporation of sporopollenin (responsible for the dark colour of spores), take place, resulting in a mature zygospore.
Trisporic acid, as the endpoint of this recognition pathway, can only be produced in the presence of both compatible partners, which enzymatically produce trisporoid precursors to be further utilized by the potential sexual partner. Species specificity of these reactions is obtained by, among other factors, spatial segregation, physicochemical features of the derivatives (volatility and light sensitivity), chemical modifications of trisporoids, and transcriptional/posttranscriptional regulation.
Biosynthesis
Parasexualism
Trisporoids are also used in the mediation of recognition between parasite and host. An example is the host-parasite interaction of a parasexual nature observed between Parasitella parasitica, a facultative mycoparasite of zygomycetes, and Absidia glauca. This interaction is an example of biotrophic fusion parasitism, because genetic information is transferred into the host. Many morphological similarities to zygospore formation are seen, but the mature spore is called a sikyospore and is parasitic. During this process, gall-like structures are produced by the host Absidia glauca. This, coupled with further evidence, has led to the assumption that trisporoids are not strictly species specific and that trisporoids represent the general principle of mating recognition in Mucorales.
References
Carotenoids
Carboxylic acids | Trisporic acid | [
"Chemistry",
"Biology"
] | 803 | [
"Biomarkers",
"Carboxylic acids",
"Functional groups",
"Carotenoids"
] |
51,599,932 | https://en.wikipedia.org/wiki/Norclostebol%20acetate | Norclostebol acetate (brand name Anabol 4-19), or norchlorotestosterone acetate (NClTA), also known as 4-chloro-19-nortestosterone 17β-acetate or as 4-chloroestr-4-en-17β-ol-3-one, is a synthetic, injectable anabolic-androgenic steroid (AAS) and derivative of 19-nortestosterone (nandrolone). It is an androgen ester – specifically, the C17β acetate ester of norclostebol (4-chloro-19-nortestosterone).
See also
Clostebol
Clostebol acetate
Clostebol caproate
Clostebol propionate
Oxabolone
Oxabolone cipionate
References
Acetate esters
Androgen esters
Anabolic–androgenic steroids
Estranes
Organochlorides
Prodrugs | Norclostebol acetate | [
"Chemistry"
] | 214 | [
"Chemicals in medicine",
"Prodrugs"
] |
51,600,173 | https://en.wikipedia.org/wiki/Ofeq-11 | Ofeq-11, also known as Ofek 11 (Horizon in Hebrew), is part of the Ofeq family of reconnaissance satellites designed and built by Israel Aerospace Industries (IAI) for the Israeli Ministry of Defense.
Launch
Ofek-11 was launched on 13 September 2016, at 14:38 UTC from the Palmachim Airbase in Israel, two years after the launch of Ofeq-10. It was delivered using IAI's Shavit 2 launcher. Compared to its predecessor, the new satellite features an improved version of El-Op's "Jupiter High-Resolution Imaging System", with resolution increased to 0.5 meter, and uses a new satellite bus - OPSAT-3000 - which is a derivative of the satellite bus used in TecSAR-1.
Mission
According to reports, the launch initially looked like a success, but about 90 minutes later, engineers realized that while the satellite had entered orbit, not all systems were functioning or responding to instructions. However, after several days of remote repairs, the satellite was operational and taking high-quality pictures. It has been reported that South Korea is considering utilizing the satellite to obtain reconnaissance on North Korean activities.
References
Reconnaissance satellites of Israel
2016 in Israel
Israel Aerospace Industries satellites
Spacecraft launched in 2016
Spacecraft launched by Shavit rockets | Ofeq-11 | [
"Astronomy"
] | 268 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
51,600,966 | https://en.wikipedia.org/wiki/Dynamical%20energy%20analysis | Dynamical energy analysis (DEA) is a method for numerically modelling structure-borne sound and vibration in complex structures. It is applicable in the mid-to-high frequency range and in this regime is computationally more efficient than traditional deterministic approaches (such as finite element and boundary element methods). In comparison to conventional statistical approaches such as statistical energy analysis (SEA), DEA provides more structural detail and is less problematic with respect to subsystem division. The DEA method predicts the flow of vibrational wave energy across complex structures in terms of (linear) transport equations. These equations are then discretized and solved on meshes.
Key point summary of DEA
High frequency method in numerical acoustics.
The flow of energy is tracked across a mesh. Can be thought of as ray tracing using density of rays instead of individual rays.
Can use existing FEM meshes. No remodelling necessary.
Computational time is independent of frequency.
The necessary mesh resolution does not depend on frequency and can be chosen coarser than in FEM. It only needs to resolve the geometry.
Fine structural details can be resolved, in contrast to SEA which gives only one number per subsystem.
Greater flexibility for the models usable by DEA. No implicit assumptions (equilibrium in weakly coupled subsystems) as in SEA.
Introduction
Simulations of the vibro-acoustic properties of complex structures (such as cars, ships, airplanes, ...) are routinely carried out in various design stages. For low frequencies, the established method of choice is the finite element method (FEM). But high-frequency analysis using FEM requires very fine meshes of the body structure to capture the shorter wavelengths and is therefore computationally extremely costly. Furthermore, the structural response at high frequencies is very sensitive to small variations in material properties, geometry and boundary conditions. This makes the output of a single FEM calculation less reliable and makes ensemble averages necessary, further increasing the computational cost. Therefore, at high frequencies, other numerical methods with better computational efficiency are preferable.
The statistical energy analysis (SEA) has been developed to deal with high frequency problems and leads to relatively small and simple models. However, SEA is based on a set of often hard to verify assumptions, which effectively require diffuse wave fields and quasi-equilibrium of wave energy within weakly coupled (and weakly damped) sub-systems.
One alternative to SEA is to instead consider the original vibrational wave problem in the high-frequency limit, leading to a ray tracing model of the structural vibrations. The tracking of individual rays across multiple reflections is not computationally feasible because of the proliferation of trajectories. Instead, a better approach is tracking densities of rays propagated by a transfer operator. This forms the basis of the dynamical energy analysis (DEA) method. DEA can be seen as an improvement over SEA in which the diffuse-field and well-separated-subsystem assumptions are lifted. One uses an energy density which depends both on position and momentum. DEA can work with relatively fine meshes where energy can flow freely between neighbouring mesh cells. This allows far greater flexibility for the models used by DEA in comparison with the restrictions imposed by SEA. No remodelling as for SEA is necessary, as DEA can use meshes created for an FE analysis. As a result, finer structural details than in SEA can be resolved by DEA.
Method
The implementation of DEA on meshes is called Discrete Flow Mapping (DFM). We will here briefly describe the idea behind DFM; for details see the references below. Using DFM it is possible to compute vibro-acoustic energy densities in complex structures at high frequencies, including multi-modal propagation and curved surfaces. DFM is a mesh-based technique where a transfer operator is used to describe the flow of energy through boundaries of subsystems of the structure; the energy flow is represented in terms of a density of rays $\rho(s, p)$, that is, the energy flux through a given surface is given through the density of rays passing through the surface at point $s$ with direction $p$. Here, $s$ parametrises the surface and $p$ is the direction component tangential to the surface. In what follows, the surface is represented by the union of all boundaries of the mesh cells of the FE mesh describing the structure (for example, the car floor panel discussed below). The density $\rho$, with phase-space coordinate $X = (s, p)$, is transported from one boundary to the adjacent boundary intersection via the boundary integral operator
$$ (\mathcal{B}\rho)(X') = \int w(X) \, \delta\bigl(X' - \varphi(X)\bigr) \, \rho(X) \, dX, \qquad (1) $$
where $\varphi$ is the map determining where a ray starting on a boundary segment at point $s$ with direction $p$ passes through another boundary segment, and $w$ is a factor containing damping and reflection/transmission coefficients (akin to the coupling loss factors in SEA). It also governs the mode conversion probabilities in the case of both in-plane and flexural waves, which are derived from wave scattering theory. This allows DEA to take curvature and varying material parameters into account. Equation (1) is a way to write ray tracing across one single mesh cell in terms of an integral equation transferring an energy density from one surface to an adjacent surface.
In a next step, the transfer operator (1) is discretised using a set of basis functions on the phase space. Once the resulting matrix $B$ has been constructed, the final energy density $\rho$ on the boundary phase space of each element is given in terms of the initial density $\rho_0$ by the solution of a linear system of the form
$$ (I - B)\, \rho = \rho_0 . $$
The initial density $\rho_0$ models some source distribution for vibrational excitations, for example the engine in a ship. Once the final density $\rho$ (describing the energy density on all cell boundaries) has been computed, the energy density at any location inside the structure may be computed as a post-processing step.
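The linear system above can also be read as a sum over reflections: provided the damping and transmission factors contained in $w$ make the spectral radius of $B$ less than one (an assumption on the dissipation in the structure), the solution can be expanded in a Neumann series
$$ \rho = (I - B)^{-1} \rho_0 = \sum_{n=0}^{\infty} B^n \rho_0 , $$
where the $n$-th term is the energy density carried by rays that have crossed $n$ cell boundaries (or undergone $n$ reflections) since leaving the source.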
Concerning the terminology, there is some ambiguity concerning the terms "Discrete Flow Mapping(DFM)" and "Dynamical Energy Analysis". To some extent, one can use one term in place of the other. For example, consider a plate. In DFM, one would subdivide the plate into many small triangles and propagate the flow of energy from triangle to (neighbouring) triangle. In DEA, one would not subdivide the plate, but use some high order basis functions (both in position and momentum) on the boundary of the plate. But in principle it would be admissible to describe both procedures as either DFM or DEA.
Examples
As an example application, a simulation of a car floor panel is shown here. A point excitation at 2500 Hz with 0.04 hysteretic damping was applied. The results from a frequency averaged FEM simulation are compared with a DEA simulation (for DEA, no frequency averaging is necessary). The results also show a good quantitative agreement. In particular, we see the directional dependence of the energy flow, which is predominantly in the horizontal direction as plotted. This is caused by several horizontally extended out-of-plane bulges. It is only in the lower right part of the panel, with negligible energy content, that deviations between the FEM and DFM predictions are visible. The total kinetic energy given by the DFM prediction is within 12% of the FEM prediction.
As a more applied example, the result of a DEA simulation on a Yanmar tractor model (body in blue: chassis/cabin steel frame and windows) is shown here to the left. The numerical DEA results are compared with experimental measurements at frequencies between 400 Hz and 4000 Hz for an excitation on the back of the gear casing. Both results agree favorably. The DEA simulation can be extended to predict the sound pressure level at driver's ear.
Notes
References
External links
University of Nottingham Wave Modelling Research Group . One of the foci of this research group is on DEA.
Mechanical vibrations
Acoustics | Dynamical energy analysis | [
"Physics",
"Engineering"
] | 1,547 | [
"Structural engineering",
"Classical mechanics",
"Acoustics",
"Mechanics",
"Mechanical vibrations"
] |
51,603,270 | https://en.wikipedia.org/wiki/Scoot-Mobile | The Scoot-Mobile prototype was built in 1946 by the Norman Anderson Co in Corunna, Michigan, mostly from aircraft parts. The Scoot-Mobile was a 3-wheeler with automatic gear change and brakes on all three wheels, and could reach a top speed of 40 mph.
References
Defunct motor vehicle manufacturers of the United States
Motor vehicle manufacturers based in Michigan
Defunct manufacturing companies based in Michigan | Scoot-Mobile | [
"Physics"
] | 80 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
51,609,856 | https://en.wikipedia.org/wiki/Mass-assignment%20protection | In the computing world, where software frameworks make the life of developers easier, there are problems associated with them which the developer does not intend. Software frameworks use an object-relational mapping (ORM) tool or the active record pattern for converting data of different types, and if the software framework does not have a strong mechanism to protect the fields of a class (the types of data), then it becomes easily exploitable by attackers. These frameworks allow developers to bind HTTP request parameters to object attributes, which allows the data to be manipulated externally. The HTTP request that is generated carries the parameters that are used to create or manipulate objects in the application program.
The phrase mass assignment or overposting refers to assigning values to multiple attributes in a single go. It is a feature available in frameworks like Ruby on Rails that allows the modification of multiple object attributes at once using a modified URL. For example:
@person = Person.new(params[:person]) # params contains multiple fields like name, email, isAdmin and contact
This mass assignment saves a substantial amount of work for developers, as they need not set each value individually.
Threats
In mass assignment, a malicious agent can attack and manipulate the data in various ways. It can send parameters which grant it permissions that would otherwise be forbidden. For example, suppose a database schema has a table "users" with a field "admin" which specifies whether the corresponding user is an admin or not. A malicious agent can easily send a value for this field to the server through an HTTP request and mark himself as an admin. This is called a mass assignment vulnerability: a security breach carried out using mass assignment.
GitHub was hacked in 2012 by exploiting the mass assignment feature. Homakov, who attacked GitHub, gained private access to Rails by replacing the SSH key of one of the members of the Rails GitHub organization with his own.
Protection
ASP.NET Core
In ASP.NET Core, use the Bind attribute to specify which properties may be set through model binding.
[HttpPost]
public IActionResult OnPost(
[Bind("LastName,FirstMidName,HireDate")] Instructor instructor)
Ruby
We can make some changes in the Active Record models to ensure the protection of our data.
To use attr_protected: We specify the attributes which need to be protected. If the user tries mass assignment, the user will get an error page which says "Mass Assignment Security error" and the attribute value will not be changed. This is also called blacklisting. In this method, keeping track of all the attributes we want to protect is sometimes difficult. For example, in the code below, the assign_project attribute is protected.
class Person < ActiveRecord::Base
  has_many :projects
  attr_protected :assign_project
end
This method optionally takes a role option using :as, which enables defining multiple mass-assignment groupings. Attributes will have the :default role in case no role is assigned. Here is an example which illustrates that assign_project will only be accessible to an admin:
attr_protected :assign_project, :as => :admin
To use attr_accessible: We add attributes that are accessible to everyone and need not be protected. This is easier to manage, as the attributes that can be mass-assigned are explicitly selected. All others are considered protected. This is sometimes referred to as whitelisting.
attr_accessible :name, :email, :contact
To use Sanitize method: Another configuration which we can use to avoid mass assignment problems is the mass assignment sanitizer, a method called sanitize. This method filters the incoming requests and ensures that there are no malicious tags; it only allows those tags that are whitelisted by the user. If the config setting config.active_record.mass_assignment_sanitizer is set to :strict, it will raise ActiveModel::MassAssignmentSecurity::Error when mass assignment is not as intended.
To use Require and Permit: These methods are used in Rails 4 and provide functionality for checking the incoming requests and parameters. The require method checks whether all the required parameters are present; if not, it throws an error. The permit method checks whether a particular parameter is permitted to be passed in mass assignment and returns the list of the permitted parameters. This is also referred to as strong parameters, as sketched in the example below.
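A minimal sketch of how require and permit are typically combined in a Rails 4 controller (the Person model and its attribute names here are illustrative, not taken from a specific application):
class PeopleController < ApplicationController
  def create
    # Build the record only from the whitelisted attributes below;
    # an attacker-supplied "admin" or "isAdmin" field is simply dropped.
    @person = Person.new(person_params)
    @person.save
  end

  private

  # require raises an error if params[:person] is missing;
  # permit returns only the whitelisted attributes.
  def person_params
    params.require(:person).permit(:name, :email, :contact)
  end
end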
Sometimes a developer might forget to add attributes as accessible. To avoid this, recent versions of Rails have the config setting config.active_record.whitelist_attributes = true, which creates a blank whitelist of attributes and protects against the mass assignment vulnerability. Models still need to explicitly whitelist or blacklist accessible parameters.
References
Software development | Mass-assignment protection | [
"Technology",
"Engineering"
] | 971 | [
"Software engineering",
"Computer occupations",
"Software development"
] |
57,763,698 | https://en.wikipedia.org/wiki/EC508 | EC508, also known as estradiol 17β-(1-(4-(aminosulfonyl)benzoyl)-L-proline), is an estrogen which is under development by Evestra for use in menopausal hormone therapy and as a hormonal contraceptive for the prevention of pregnancy in women. It is an orally active estrogen ester – specifically, a C17β sulfonamide–proline ester of the natural and bioidentical estrogen estradiol – and acts as a prodrug of estradiol in the body. However, unlike oral estradiol and conventional oral estradiol esters such as estradiol valerate, EC508 undergoes little or no first-pass metabolism, has high oral bioavailability, and does not have disproportionate estrogenic effects in the liver. As such, it has a variety of desirable advantages over oral estradiol, similarly to parenteral estradiol, but with the convenience of oral administration. EC508 is a candidate with the potential to replace not only oral estradiol in clinical practice, but also ethinylestradiol in oral contraceptives. Evestra intends to seek Investigational New Drug status for EC508 in the second quarter of 2018.
Pharmacokinetics
Relative to parenteral routes of estradiol like vaginal, transdermal, and injection, oral estradiol is characterized by low bioavailability and disproportionate effects on liver protein synthesis. Due to extensive metabolism during the first-pass into estrone and estrogen conjugates like estrone sulfate, the oral bioavailability of estradiol and conventional estradiol esters like estradiol valerate is only about 5%, and there is high interindividual variability in estradiol levels achieved. Moreover, because of the first-pass, estradiol levels in the liver are 4 or 5 times higher with oral estradiol than those in the circulation. As a result, oral estradiol has disproportionate estrogenic effects on the hepatic production of lipids, hemostatic factors, growth hormone/insulin-like growth factor 1 axis proteins, angiotensinogen, and other proteins. This is unfavorable and may result in an increased risk of venous thromboembolism, cardiovascular events, and other adverse effects.
The pharmacokinetics of EC508 were assessed in rats; its bioavailability was found to be complete (100%), its clearance in blood was low, and its biological half-life was prolonged at about 5 hours. A single oral dose of 5.0 mg/kg EC508 in rats resulted in peak levels of estradiol of 6,104 ng/mL (6,104,000 pg/mL). EC508 itself showed poor activity as an agonist of the estrogen receptor, with a potency value of 432 nM relative to 2.3 nM for estradiol (a 188-fold difference), indicating that the estrogenic activity of the compound is solely due to hydrolysis into estradiol. EC508 showed very high oral estrogenic potency, around 100 times that of estradiol and 10 times that of ethinylestradiol in rats. This was determined by uterotrophic effect in ovariectomized rats; an oral dosage of 10 μg/day resulted in uterine weight doubling with EC508, while no effect was observed with estradiol and only a small effect on uterine weight was measured with ethinylestradiol. Conversely, across the dosages assessed, oral estradiol and ethinylestradiol showed marked effects on cholesterol and angiotensinogen levels, while oral EC508 and parenteral estradiol showed no effect at all on these hepatic proteins. These findings are in accordance with the notion that oral EC508, unlike oral estradiol and ethinylestradiol but similarly to parenteral estradiol, bypasses first-pass metabolism and the liver.
The absence of first-pass metabolism and lack of disproportionate liver exposure with EC508 is thought to be due to reversible binding of the sulfonamide moiety of EC508 to an enzyme called carbonic anhydrase II (CAII). EC508 shows moderate affinity for human CAII, with a binding inhibition value of 110 nM. CAII is highly concentrated in erythrocytes (red blood cells), which are present in large quantity in the blood of the hepatic portal vein. It is believed that following its absorption in the intestines and its entrance into the hepatic portal vein, EC508 is taken up by and massively accumulated in erythrocytes, which prevents it from entering the liver and results in it being transported by erythrocytes straight into the circulation. From circulating erythrocytes, EC508 is thought to be slowly released and then hydrolyzed into estradiol. However, one sulfonamide ester of estradiol related to EC508 known as EC518 showed similar properties with very low or absent binding to CAII and hence probably lacking erythrocyte binding, which raises questions about the necessity of CAII binding for such properties and the mechanism by which they occur.
History
Estradiol sulfamate (E2MATE) is a C3 sulfamate ester of estradiol which was developed in the 1990s and was a predecessor of EC508. It binds to CAII, is taken up into and stored within erythrocytes, and shows similar properties to EC508. As a result, E2MATE was under development for potential clinical use as an oral estrogen. However, it showed no increase in estradiol levels and no estrogenic effects in human clinical trials. It appears that there are species differences with E2MATE between rats and primates and that the lack of activity in humans is because E2MATE additionally acts as a highly potent inhibitor of steroid sulfatase. This enzyme is responsible for the hydrolysis of sulfur-based estradiol esters like E2MATE and EC508 into estradiol. By inhibiting steroid sulfatase, E2MATE prevents its own activation into estradiol, which effectively abolishes its estrogenic activity. In addition, it was found that E2MATE was substantially transformed into estrone sulfamate (EMATE) in erythrocytes, which may have further impeded its capacity to be activated into estradiol. In contrast to E2MATE, EC508 is not thought to be a steroid sulfatase inhibitor and cannot be transformed into the corresponding estrone equivalent.
A C17β sulfonamide–proline testosterone ester known as EC586, which has similar properties to those of EC508, is also under development by Evestra, specifically as an androgen and potent oral testosterone prodrug for use in androgen replacement therapy in men.
Clinical trials for EC586 and EC508 are ongoing as of 2023.
See also
List of estrogen esters § Estradiol esters
List of investigational sex-hormonal agents § Estrogenics
References
External links
R&D Research / Research Pipeline - Evestra, Inc.
Amino acids
Carboxylic acids
Diols
Estradiol esters
Estranes
Experimental sex-hormone agents
Pyrrolidines
Sulfonamides
Synthetic estrogens | EC508 | [
"Chemistry"
] | 1,596 | [
"Amino acids",
"Biomolecules by chemical classification",
"Carboxylic acids",
"Functional groups"
] |
57,766,017 | https://en.wikipedia.org/wiki/DTaP-IPV%20vaccine | DTaP-IPV vaccine is a combination vaccine whose full generic name is diphtheria and tetanus toxoids and acellular pertussis adsorbed and inactivated poliovirus vaccine (IPV).
It is also known as DTaP/IPV, dTaP/IPV, DTPa-IPV, or DPT-IPV. It protects against the infectious diseases diphtheria, tetanus, pertussis, and poliomyelitis.
Branded formulations marketed in the USA are Kinrix from GlaxoSmithKline and Quadracel from Sanofi Pasteur.
Repevax is available in the UK.
In Japan, the formulation is called 四種混合(shishukongou - "mixture of 4").
Astellas markets it under the クアトロバック ('Quattro-back') formulation, while another is available from Mitsubishi Tanabe Pharma named テトラビック ('Tetrabic').
A previous product by Takeda Pharmaceutical Company has been withdrawn by the company.
References
Combination vaccines
Diphtheria
Whooping cough
Tetanus
Polio
Vaccines | DTaP-IPV vaccine | [
"Biology"
] | 249 | [
"Vaccination",
"Vaccines"
] |
57,766,938 | https://en.wikipedia.org/wiki/DPT-Hib%20vaccine | DPT-Hib vaccine is a combination vaccine whose generic name is diphtheria and tetanus toxoids and whole-cell pertussis vaccine adsorbed with Hib conjugate vaccine, sometimes abbreviated to DPT-Hib. It protects against the infectious diseases diphtheria, tetanus, pertussis, and Haemophilus influenzae type B.
A branded formulation was marketed in the US as Tetramune by Lederle Praxis Biologicals (subsequently acquired by Wyeth). Tetramune has since been discontinued.
References
Vaccines
Combination vaccines
Diphtheria
Tetanus
Whooping cough
Haemophilus | DPT-Hib vaccine | [
"Biology"
] | 144 | [
"Vaccination",
"Vaccines"
] |
57,769,929 | https://en.wikipedia.org/wiki/Articulated%20soft%20robotics | The term “soft robots” designates a broad class of robotic systems whose architecture includes soft elements, with much higher elasticity than traditional rigid robots. Articulated Soft Robots are robots with both soft and rigid parts, inspired by the musculoskeletal system of vertebrate animals – from reptiles to birds to mammals to humans. Compliance is typically concentrated in actuators, transmissions and joints (corresponding to muscles, tendons and articulations), while structural stability is provided by rigid or semi-rigid links (corresponding to bones in vertebrates).
The other subgroup in the broad family of soft robots includes continuum soft robots, i.e. robots whose body is a deformable continuum, including its structural, actuating and sensing elements, and take inspiration from invertebrate animals such as octopuses or slugs, or parts of animals, such as an elephant trunk.
Soft robots are often designed to exhibit natural behaviours, robustness and adaptivity, and sometimes mimic the mechanical characteristics of biological systems.
Characteristics and Design
Articulated Soft Robots are built taking inspiration from the intrinsic properties of the musculoskeletal system of vertebrates, whose compliant nature enables humans and animals to effectively and safely perform a large variety of tasks, ranging from walking on uneven terrain, running, and climbing, to grasping and manipulating. It also makes them resilient to highly dynamic, unexpected events such as impacts with the environment. The interplay of the physical properties of vertebrates with neural sensory-motor control makes motion very energy-efficient, safe and effective.
Robots able to co-exist and co-operate with people and reach or even surpass their performance require a technology of actuators, responsible for moving and controlling the robot, which can reach the functional performance of the biological muscle and its neuro-mechanical control.
The most promising class of actuators for soft robots is the class of Variable Impedance Actuators (VIA), with its subclass of Variable Stiffness Actuators (VSA): complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. VSAs can vary their impedance directly at the physical level, without the need for active control capable of simulating different stiffness values. The idea of varying the mechanical impedance of actuation comes directly from natural musculo-skeletal systems, which often exhibit this feature.
One class of Variable Stiffness Actuators achieves simultaneous control of the robot by using two motors antagonistically to manipulate a non-linear spring which acts as an elastic transmission between each of the motors and the moving part, so as to control both the equilibrium point of the robot and its rigidity or compliance.
Such a control model is very similar in philosophy to the Equilibrium-Point hypothesis of human motor control. This similarity makes soft robotics an interesting field of research, capable of exchanging ideas and insights with the research community in motor neuroscience.
Variable Impedance Actuators increase the performance of soft robotic systems in comparison with traditional rigid robots in three key aspects: safety, resilience and energy efficiency.
Safety in physical human-robot interaction
One of the most revolutionary and challenging features of the class of articulated soft robots is Physical Human-Robot Interaction. Soft robots designed to physically interact with people are designed to coexist and cooperate with humans in applications such as assisted industrial manipulation, collaborative assembly, domestic work, entertainment, rehabilitation or medical applications.
Clearly, such robots must fulfill different requirements from those typically met in conventional industrial applications: while it might be possible to relax requirements on velocity of execution and absolute accuracy, concerns such as safety and dependability become of great importance when robots have to interact with humans.
Safety can be increased in different ways. The classical methods include control and sensorization, e.g. proximity-sensitive skins, or the addition of external soft elements (soft and compliant coverings or airbags placed around the arm for increasing the energy-absorbing properties of protective layers).
Advanced sensing and control can realize a “soft” behavior via software. Articulated soft robotics takes a different approach to increasing the safety level of robots interacting with humans, by introducing mechanical compliance and damping directly at the mechanical design level: “By this approach, researchers tend to replace the sensor-based computation of a behavior and its error-prone realization using active actuator control, by its direct physical embodiment, as in the natural example. Having compliance and damping in the robot structure is by no means sufficient to ensure its safety, as it might indeed be even counterproductive for the elastic energy potentially stored: just like a human arm, a soft robot arm will need intelligent control to make it behave softly as when caressing a baby, or strongly as when punching”.
Resilience
Physical interaction of a robot with its environment can also be dangerous to the robot itself. Indeed, the number of times a robot is damaged because of impacts or force overexertion is rather large.
Resilience to shocks is not only instrumental to achieve viable applications of robots in everyday life, but would also be very useful in industrial environments, substantially enlarging the scope of applicability of robot technology.
Soft robotics technologies can provide solutions that are effective in absorbing shocks and reducing accelerations: soft materials can be used as coverings or even as structural elements in robot limbs, but the main technological challenge remains with soft actuators and transmissions.
Performance and Energy Efficiency
The dynamic behavior of the actuators with controllable compliance guarantees high-performance, lifelike motion and higher energy efficiency than rigid robots.
The natural dynamics of the robot can adapt to the environment, and thus the intrinsic physical behavior of the resulting system is close to the desired motion. In these circumstances, actuators would only have to inject and extract energy into and out of the system for small corrective actions, thus reducing energy consumption.
The idea of embodying desirable dynamics in the physical properties of soft robots finds its natural application in humanoid robots, which have to resemble the movements of humans, or in robotic systems realized for prosthetic uses, e.g. anthropomorphic artificial hands. A relevant example of use is in walking/running robots: indeed, the fact that natural systems change the compliance of their muscular system depending on the gait and environmental conditions, and even during the different phases of the gait, seems to indicate the potential usefulness of Variable Impedance Actuators (VIA) for locomotion. An emerging trend in the use of VIA technologies is connected to the growth of a novel category of industrial robots associated with Industry 4.0, the co-bots (collaborative robots).
Exploration of soft robots’ full potential is leading to more and more applications in which the robots surpass conventional-robot performance, and it is widely believed that more applications are yet to come.
Related European Projects and Initiatives
SOMA (Soft Manipulation)
SOFTPRO (Synergy-based Open-source Foundations and Technologies for Prosthetics and Rehabilitation)
SOFTHANDS
Natural Machine Motion Initiative (NMMI)
SAPHARI (Safe and Autonomous Physical Human-Aware Robot Interaction)
VIACTORS (Variable Impedance Actuation systems embodying advanced interaction behaviors)
ROBLOG (Cognitive Robot for Automation of Logistic Processes)
THE (The Hand Embodied)
PHRIENDS (Physical Human-Robot Interaction, dependability and safety)
STIFF (Enhancing biomorphic agility through variable stiffness)
See also
Continuum soft robotics
Collaborative Robotics
Human-Robot Interaction
Humanoid robots
Autonomous robots
Personal robot
References
Robots | Articulated soft robotics | [
"Physics",
"Technology"
] | 1,530 | [
"Physical systems",
"Machines",
"Robots"
] |
57,774,517 | https://en.wikipedia.org/wiki/Progesterone%20dioxime | Progesterone dioxime, or progesterone 3,20-dioxime (P4-3,20-DO), also known as 3,20-di(hydroxyimino)pregn-4-en-3-one, is a progesterone derivative which was never marketed. It is a progestogen oxime – specifically, the C3 and C20 dioxime of the progestogen progesterone. Progesterone C3 and C20 oxime conjugates have been found to be water-soluble prodrugs of progesterone and pregnane neurosteroids.
See also
List of progestogen esters § Oximes of progesterone derivatives
References
Abandoned drugs
Ketones
Pregnanes
Progestogens
Steroid oximes | Progesterone dioxime | [
"Chemistry"
] | 180 | [
"Ketones",
"Functional groups",
"Drug safety",
"Abandoned drugs"
] |
57,774,849 | https://en.wikipedia.org/wiki/Haemophilus%20B%20and%20hepatitis%20B%20vaccine | Haemophilus B and hepatitis B vaccine is a combination vaccine whose generic name is Haemophilus b conjugate and hepatitis B recombinant vaccine. It protects against the infectious diseases Haemophilus influenzae type B and hepatitis B.
A branded formulation, Comvax, was marketed in the US by Merck. It was discontinued in 2014.
References
Vaccines
Combination vaccines
Haemophilus
Hepatitis B
Withdrawn drugs | Haemophilus B and hepatitis B vaccine | [
"Chemistry",
"Biology"
] | 93 | [
"Vaccines",
"Vaccination",
"Drug safety",
"Withdrawn drugs"
] |
70,190,096 | https://en.wikipedia.org/wiki/Price-Jones%20curve | A Price-Jones curve is a graph showing the distribution of diameters of red blood cells. Higher diameter may be seen in pernicious anaemia, while lower diameter may be seen after haemorrhage.
Medical uses
A Price-Jones curve can be used in the diagnosis of anaemia. Price-Jones curves usually vary both by average red blood cell size, and the distribution of sizes.
Interpretation of results
Higher red blood cell diameter and wider variation in size are often seen in pernicious anaemia. Lower diameter with normal variation in size is often seen after haemorrhage. A higher variation in size is known as anisocytosis.
Procedure
A blood smear can be used to view individual red blood cells. The diameter of each red blood cell can be measured, and diameter is usually analogous to volume. This is usually performed automatically by particle counters. The data are then converted into a histogram, which can be used to assess red blood cell distribution width (RDW).
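As an illustration of this procedure, the sketch below builds a Price-Jones-style histogram from a set of measured red-cell diameters and reports the mean and the coefficient of variation of the distribution (a spread measure analogous to how RDW summarizes width); the diameter values, bin width and library choice are illustrative assumptions rather than part of any standard analyzer.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical red-cell diameters in micrometres (normal cells are roughly 6-8 um).
diameters_um = np.random.normal(loc=7.2, scale=0.45, size=500)

# Summary statistics: mean diameter and coefficient of variation of the distribution.
mean_diameter = diameters_um.mean()
cv_percent = 100.0 * diameters_um.std() / mean_diameter  # spread measure, analogous to RDW-CV

# Price-Jones curve: frequency distribution of cell diameters.
plt.hist(diameters_um, bins=np.arange(5.0, 10.0, 0.25), edgecolor="black")
plt.xlabel("Red cell diameter (µm)")
plt.ylabel("Number of cells")
plt.title(f"Price-Jones curve (mean {mean_diameter:.2f} µm, CV {cv_percent:.1f}%)")
plt.show()
```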
History
Cecil Price-Jones first proposed using the Price-Jones curve in a 1922 paper. It has been used for assessing red blood cells since then.
References
Blood tests
Statistical charts and diagrams
Estimation of densities
Frequency distribution
Nonparametric statistics | Price-Jones curve | [
"Chemistry",
"Mathematics"
] | 251 | [
"Blood tests",
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Frequency distribution",
"Chemical pathology"
] |
70,193,952 | https://en.wikipedia.org/wiki/Lattice%20confinement%20fusion | Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion.
History
In 2020, a team of NASA researchers seeking a new energy source for deep-space exploration missions published the first paper describing a method for triggering nuclear fusion in the space between the atoms of a metal solid, an example of screened fusion. The experiments did not produce self-sustaining reactions, and the electron source itself was energetically expensive.
Technique
The reaction is fueled with deuterium, a widely available non-radioactive hydrogen isotope composed of one proton, one neutron, and one electron. The deuterium is confined in the space between the atoms of a metal solid such as erbium or titanium. Erbium can indefinitely maintain 10²³ cm⁻³ deuterium atoms (deuterons) at room temperature. The deuteron-saturated metal forms an overall neutral plasma. The electron density of the metal reduces the likelihood that two deuterium nuclei will repel each other as they get closer together.
A dynamitron electron-beam accelerator generates an electron beam that hits a tantalum target and produces gamma rays, irradiating titanium deuteride or erbium deuteride. A gamma ray of about 2.2 megaelectron volts (MeV) strikes a deuteron and splits it into proton and neutron. The neutron collides with another deuteron. This second, energetic deuteron can experience screened fusion or a stripping reaction.
Although the lattice is notionally at room temperature, LCF creates an energetic environment inside the lattice where individual atoms achieve fusion-level energies. Heated regions are created at the micrometer scale.
Screened fusion
The energetic deuteron fuses with another deuteron, yielding either a helium-3 nucleus and a neutron or a tritium (hydrogen-3) nucleus and a proton. These fusion products may fuse with other deuterons, creating an alpha particle, or with another helium-3 or tritium nucleus. Each releases energy, continuing the process.
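Written out, the reaction chain described in the two preceding sections corresponds to the following standard nuclear reactions; the particle energies and Q-values quoted are textbook figures rather than values reported by the experiments above:

```latex
% Photodisintegration of a deuteron by a ~2.2 MeV gamma ray
% (the deuteron binding energy is about 2.224 MeV):
\gamma + {}^{2}\mathrm{H} \;\rightarrow\; p + n
% The two roughly equally probable branches of deuteron-deuteron fusion:
{}^{2}\mathrm{H} + {}^{2}\mathrm{H} \;\rightarrow\; {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + n\,(2.45\ \mathrm{MeV}) \qquad Q \approx 3.27\ \mathrm{MeV}
{}^{2}\mathrm{H} + {}^{2}\mathrm{H} \;\rightarrow\; {}^{3}\mathrm{H}\,(1.01\ \mathrm{MeV}) + p\,(3.02\ \mathrm{MeV}) \qquad Q \approx 4.03\ \mathrm{MeV}
```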
Stripping reaction
In a stripping reaction, the metal strips a neutron from accelerated deuteron and fuses it with the metal, yielding a different isotope of the metal. If the produced metal isotope is radioactive, it may decay into another element, releasing energy in the form of ionizing radiation in the process.
Palladium-silver
A related technique pumps deuterium gas through the wall of a palladium-silver alloy tubing. The palladium is electrolytically loaded with deuterium. In some experiments this produces fast neutrons that trigger further reactions. Other experimenters (Fralick et al.) also made claims of anomalous heat produced by this system.
Comparison to other fusion techniques
Pyroelectric fusion has previously been observed in erbium hydrides. A high-energy beam of deuterium ions generated by pyroelectric crystals was directed at a stationary, room-temperature target, and fusion was observed.
In previous fusion research, such as inertial confinement fusion (ICF), fuel such as the rarer tritium is subjected to high pressure for a nanosecond interval, triggering fusion. In magnetic confinement fusion (MCF), the fuel is heated in a plasma to temperatures much higher than those at the center of the Sun. In LCF, conditions sufficient for fusion are created in a metal lattice that is held at ambient temperature during exposure to high-energy photons. ICF devices momentarily reach densities of 10²⁶ cc⁻¹, while MCF devices momentarily achieve 10¹⁴.
Lattice confinement fusion requires energetic deuterons and is therefore not cold fusion.
See also
Inertial confinement fusion
Magnetized target fusion
Pyroelectric fusion
References
Nuclear fusion
Nuclear fusion reactions
NASA research centers
Space exploration | Lattice confinement fusion | [
"Physics",
"Chemistry",
"Astronomy"
] | 809 | [
"Outer space",
"Space exploration",
"Nuclear fusion reactions",
"Nuclear physics",
"Nuclear fusion"
] |
70,197,321 | https://en.wikipedia.org/wiki/NcRNA%20therapy | A majority of the human genome is made up of non-protein-coding DNA, meaning that such sequences are not used to encode proteins. However, even though these regions do not code for protein, they have other functions and carry necessary regulatory information. Non-coding RNAs (ncRNAs) can be classified based on size: small non-coding RNA is usually categorized as being under 200 bp in length, whereas long non-coding RNA is greater than 200 bp. In addition, they can be categorized by their function within the cell: infrastructural and regulatory ncRNAs. Infrastructural ncRNAs seem to have a housekeeping role in translation and splicing and include species such as rRNA, tRNA and snRNA. Regulatory ncRNAs are involved in the modification of other RNAs.
RNA Classification
Long non-coding RNA
Long non-coding RNAs (lncRNAs) are a type of RNA usually defined as transcripts greater than 200 base pairs in length that are not translated into proteins. This limitation distinguishes lncRNA from small non-coding RNAs, which encompass microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. Long non-coding RNAs include lincRNAs, intronic ncRNAs, and circular and linear ncRNAs.
Long intergenic Non-coding RNA
Long intergenic non-coding RNAs (lincRNAs) are defined as RNA transcripts longer than 200 nucleotides that do not have open reading frames encoding proteins. The term "intergenic" refers to the identification of these transcripts from regions of the genome that do not contain protein-encoding genes. LncRNAs also include promoter- or enhancer-associated RNAs that are gene-proximal and can be in either the sense or antisense orientation.
Circular RNA
Circular RNA (circRNA) is a novel class of endogenous noncoding RNA characterized by covalently closed loop structures. This class of ncRNA does not have a 5' cap or 3' poly(A) tail. It has been hypothesized that circRNAs may function as potential molecular markers for disease diagnosis and treatment and play an important role in the initiation and progression of human diseases.
Small non-coding RNA
Small non-coding RNA (sncRNA) is a type of RNA usually defined as transcripts that are less than 200 base pairs in length and not translated into proteins. This limitation distinguishes sncRNA from lncRNA. This class includes, but is not limited to, microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs.
microRNA
microRNA (miRNA) plays an important role in regulating gene expression. The majority of miRNAs are transcribed from DNA sequences into primary miRNAs, which are further processed into precursor miRNAs and finally into mature miRNAs. In most cases, miRNAs interact with the 3' UTR of target mRNAs to induce mRNA degradation and translational repression. Interactions of miRNAs with other regions, including the 5' UTR, coding sequence, and gene promoters, have also been reported. Under certain conditions, miRNAs are also able to activate translation or regulate transcription, but this is dependent on factors such as the location of the effect. This process of interaction is very dynamic and dependent on multiple factors.
Ribosomal RNA
Ribosomal RNA (rRNA) includes non-coding RNAs that play essential roles in rRNA regulation. Ribosomal RNA takes part in protein synthesis. Some RNA molecules act catalytically, as RNA enzymes (ribozymes), or take part in protein export. The most important ribozyme is the major rRNA of the large subunit of the ribosome (28S rRNA in eukaryotes). It is now accepted that 28S rRNA catalyzes the critical step in polypeptide synthesis in addition to playing a major structural role.
Small nuclear RNA and small nucleolar RNA
Small nuclear RNA (snRNA) and small nucleolar RNA (snoRNA) are widely known to guide the nucleotide modifications and processing of rRNA. Both snRNA and snoRNA belong to a class of small RNA molecules present in the nucleus; however, they vary considerably by function. snRNAs are 80–350 nucleotides long, while snoRNAs are 80–1000 nucleotides long in yeast. snRNA plays a critical role in pre-mRNA splicing. On the other hand, snoRNAs are involved in mRNA editing, modification of rRNA and tRNA, and genome imprinting. A major function of snoRNAs is the maturation of rRNA during ribosome formation. Small nuclear and small nucleolar RNAs are critical components of snRNPs and snoRNPs and play an essential role in the maturation of, respectively, mRNAs and rRNAs within the nucleus of eukaryotic cells. Both snRNA and snoRNA are involved in modifying RNA just after transcription. snRNA can be found in splicing speckles and Cajal bodies of the cell nucleus. Both snRNA and snoRNA require a phosphorylated adaptor for nuclear export (PHAX) to be transported to their site of action within the nucleus.
Transfer RNA
Transfer RNA (tRNA) helps decode a messenger RNA sequence into a protein. tRNAs function at specific sites within the ribosome during translation (the process going from code to protein). Within the mRNA molecule, codons are three nucleotides in length, and each codon represents a particular amino acid according to the universal genetic code. tRNAs can be classified as adaptor molecules and are typically 76 to 90 nucleotides in length.
History
Non-coding RNA
Friedrich Miescher's purification of DNA in 1869, from salmon sperm and pus cells, pointed scientists towards the presence of molecules in the cell other than proteins. Miescher identified a highly acidic molecule that he isolated from the pus cells and labeled it "nuclein"; the term was coined because the DNA isolated by Miescher was not protein and was derived from the nucleus of the cell. It wasn't until 1944, when Oswald Avery proposed DNA as the carrier of genetic information, that Miescher's discovery was brought back to light.
X-ray crystallography by Rosalind Franklin and the determination of the DNA double helix by Watson and Crick in 1953 further enhanced the understanding of DNA structure and allowed for the establishment of the central dogma of molecular biology. However, one flaw of the central dogma was the postulation that information flow proceeds only from DNA to RNA to protein, which hinders the understanding of different regulatory mechanisms.
In 1955, George Palade identified the first ncRNA as part of the large ribonucleoprotein complex (RNP). The second class of ncRNA to be discovered was transfer RNA (tRNA), in 1957. The first regulatory ncRNA, labeled micF, was discovered in 1988 in E. coli. The first eukaryotic microRNA, on the other hand, was discovered in C. elegans in 1993. It was derived from the gene lin-4 and was identified as a small RNA molecule (compared to longer mRNA molecules) forming stem-loop structures. This structure is further modified to generate a shorter RNA that is complementary to the 3' UTR region of the lin-14 transcript. This pathway allowed for a better understanding of different post-transcriptional gene silencing pathways. Since then, many other miRNAs have been discovered.
A detailed understanding of the mechanism behind this post-transcriptional silencing pathway was established in 2001 by Thomas Tuschl. It was discovered that double-stranded RNA is processed into shorter fragments of about 25 nucleotides, which are modified into short hairpin-like structures by the Drosha complex. The molecules are then diced by Dicer enzymes into functional double-stranded RNA (dsRNA). These are loaded onto the RISC complex, which then finds and cleaves the targeted mRNA of interest in the cytoplasm.
It wasn't until 1989 that imprinted genes were discovered and genomic imprinting was established. The first two imprinted genes identified were Igf2r and H19. These were both discovered independently in mice and were localized to chromosome 7. H19 is peculiar in that it functions as an lncRNA but undergoes modifications similar to pre-mRNA processing, such as splicing and 3' polyadenylation, and is transcribed by RNA polymerase II. This lncRNA plays a significant role in mouse embryonic development and can be lethal if expressed during prenatal stages. More lncRNAs have been discovered in eukaryotes over time. One such discovery that allowed for a better understanding of H19 function was an lncRNA called XIST (X-inactive specific transcript).
ncRNA drugs and therapy
The first ncRNA therapeutic drug, approved by the Food and Drug Administration (FDA) in 1998 and the European Medicines Agency (EMA) in 1999, is called Fomivirsen or Vitravene. The target organ is the eye, and the drug works against cytomegalovirus (CMV) retinitis in immunocompromised patients. The drug functions as an antisense oligonucleotide and binds to the complementary sequence of the mRNA, inhibiting the replication of human cytomegalovirus. This therapy can also be categorized as antisense oligonucleotide (ASO) therapy.
There have been many ASO RNA therapeutics that have been approved by FDA and/or EMA over the years, but it wasn’t until 2018 that the EMA approved the drug called Patisiran/Onpattro. The drug uses ds-siRNA as a mechanism of action and is deemed effective against hereditary transthyretin amyloidosis. The mechanism specifically targets the Transthyretin (TTR) mRNA.
RNA therapeutic targets are not limited to mature mRNA; mRNA has also been targeted at different stages of maturation. One example is Nusinersen (Spinraza), which functions as an ASO and targets the pre-mRNA, before splicing, corresponding to the survival of motor neuron 2 (SMN2) gene. This drug therapy was approved by the FDA and EMA in 2016 and 2017, respectively.
Some drugs have been approved by the FDA but not by the EMA. This can be seen in the case of an ASO therapeutic called Eteplirsen (Exondys 51), which was approved by the FDA in 2016 but not by the EMA. It targets the pre-mRNA corresponding to dystrophin (DMD) and works against Duchenne muscular dystrophy.
There are many additional therapeutics that have been developed and are either in phase I or II of the clinical trials. Current RNA therapeutics in clinical trials range from a variety of target organs and diseases ranging from skin (potential treatment for disease such as keloid) to tumors (squamous cell lung cancer).
To date, for both the FDA and the EMA, ncRNAs are considered "simple" medicinal products because they are produced by chemical synthesis. When some of them, produced biologically (known as bioengineered ncRNA agents, or BERAs), are put on the market, the status of biological medicinal products will apply, which could lead to inconsistencies in the legislation.
Applications
Antisense oligonucleotides
Antisense oligonucleotides (ASOs) are single-stranded DNA molecules with full complementarity to one selected target mRNA; they may act by blocking protein translation (via steric hindrance), causing mRNA degradation (via RNase H cleavage), or changing pre-mRNA splicing. These short oligonucleotides have already been approved by the FDA for ten genetic disorders, and many are currently in the pipeline to be approved or tested. Using oligonucleotide technology, it is now possible to control protein expression via RNA interference and to affect proteins previously considered "undruggable". Even though this therapy has a lot of promise and potential, it comes with many limitations.
Compared to siRNA and microRNA, ASOs are more versatile: in addition to reducing protein expression, they have the ability to enhance target translation. ASOs can also be customized with ease and accuracy, allowing for the targeting of virtually any mutated gene (see the sketch after this paragraph). This allows for a greater level of application in the field of precision and personalized medicine.
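As a toy illustration of the complementarity described above, the sketch below derives an antisense DNA sequence for a given mRNA fragment by taking its reverse complement (U pairs with A, and the DNA strand uses T); the target sequence is made up, and real ASO design additionally involves length, chemistry and off-target considerations not captured here.

```python
# Toy ASO design: the antisense strand is the reverse complement of the mRNA target,
# written as DNA (A pairs with T, U pairs with A, G pairs with C).
PAIRING = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_dna(mrna_fragment: str) -> str:
    """Return the DNA oligonucleotide complementary to an mRNA fragment, written 5'->3'."""
    complement = [PAIRING[base] for base in mrna_fragment.upper()]
    return "".join(reversed(complement))  # reverse so both strands read 5'->3'

# Hypothetical 20-nucleotide target region of an mRNA (for illustration only).
target_mrna = "AUGGCUUACGGAUCCUUAGC"
print(antisense_dna(target_mrna))  # -> GCTAAGGATCCGTAAGCCAT
```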
The main challenge of ASO therapies is delivery to specific tissues and cellular uptake, which poses a great limitation. Liposomal delivery is one way to overcome such issues, but liposomal delivery systems come with their own share of limitations. Serum proteins in the bloodstream destabilize the lipoprotein; this destabilization leads to the depletion of protein and exposes the cargo to an unstable environment. This hindrance can be overcome by using PEG (poly(ethylene glycol)). However, PEG is not biodegradable, causing it to accumulate within the body, leading to adverse effects and causing hypersensitivity. In addition, multiple rounds of therapy with PEG can lead to the formation of PEG antibodies, which can reduce its efficiency in preventing the rupture of the liposome to which it is attached.
Immunoliposomes have been shown to make targeting more specific: by using antibodies specific to a protein expressed in the target area, the ASO drug acts directly at the target site and nowhere else. Moreover, immunoliposomes are slow to dissociate, leading to precise release of the ASO drug that they encapsulate.
LncRNA as a therapeutic approach
Long noncoding RNAs (lncRNAs) are large transcripts (more than 200 nucleotides long) that are synthesized much like mRNAs but, unlike mRNAs, are not translated into protein. An lncRNA contains interactor elements and structural elements: interactor elements directly interact with other nucleic acids or proteins, while structural elements reflect the ability of some lncRNAs to form secondary and/or tertiary structures. The ability of lncRNAs to interact with nucleic acids through their interactor elements, and with proteins through their secondary structures, allows them to function in a more diverse manner than other ncRNAs such as miRNAs (microRNAs). lncRNAs have been established to play a role in gene regulation by influencing the ability of specific regions of a gene to bind transcriptional elements and by influencing epigenetic modifications. One example can be seen in the case of the X-inactive specific transcript (XIST). In humans, 46,XX females carry an extra X chromosome (155 Mb of DNA) compared to 46,XY males. To overcome this dosage imbalance, one X chromosome is randomly inactivated in human females at around the 2-8 cell stage of embryo development. This inactivation is very stable across cell divisions due to epigenetic contributions both during the initial silencing and during the subsequent maintenance of the inactive X chromosome (Xi). The inactivation is carried out by the lncRNA XIST, which is produced in cis and inactivates the X chromosome from which it is generated. The inactive X chromosome can be observed as a condensed heterochromatin structure called a "Barr body".
A study in 2013 utilized this ability of XIST as a potential therapeutic approach for the treatment of trisomy 21. Trisomy 21 is commonly known as Down syndrome and is caused by the presence of an additional copy of chromosome 21. The study was one of a kind, as no other studies had been able to incorporate the XIST gene into a chromosome due to its large size. The study incorporated XIST into one of the copies of chromosome 21 in cells gathered from patients with Down syndrome and observed the inactivation of that chromosome 21 in the form of condensed heterochromatin, labeled a chromosome 21 Barr body. Such experiments have been shown to work in cells in the lab setting, although no lncRNA-based therapeutics are in clinical trials. The implications of such work can bring trisomy 21 and other chromosomal disorders into the realm of consideration for future gene therapy research.
Challenges
One of the major issues that hinders ncRNA therapy is the stability of the single-stranded RNA molecule. RNA is typically single stranded and therefore somewhat unstable compared to dsDNA molecules. This, however, can be overcome by converting the single-stranded RNA into double-stranded RNA (dsRNA), which is more stable at room temperature and has a longer shelf life.
A second major issue is cell-, tissue- or organ-specific targeting of the RNA molecules. Generally, this is addressed by containing the dsRNA in a lipid nanoparticle and using the particle as a ligand to bind a receptor on the surface of the target cell. The lipid particles are taken into liver cells through their specific receptors, and this mechanism seems to be effective at targeting liver cells and liver cancer.
Another organ with a relatively easy delivery mechanism is the eye, although this requires the invasive technique of injecting the ncRNA of interest directly into the eye. These techniques are not only invasive but also do not ensure that all the cells in the target organ are reached by the ncRNA of interest.
Additional issues arise once the RNA molecule enters the cell, one of them being the immune system. The immune system can recognize RNA using intracellular pathogen-associated molecular pattern (PAMP) receptors and extracellular Toll-like receptors (TLRs). Activation of these receptors leads to a cytokine-mediated immune response (IFN-γ, interferon gamma). Common approaches to overcoming the immune response include second-generation chemical modifications, in which small chemical modifications are introduced one at a time to avoid the immune response. However, there are some reports of adverse immune responses in clinical trials employing such modified reagents. There is no definitive solution to the problem of immunogenicity in ncRNA therapy.
Modified adenovirus vectors have been used extensively in many clinical trials as an ncRNA delivery mechanism. In particular, the adenovirus vector is considered an efficient delivery system due to its stability within live cells and its non-pathogenicity. Even though viral transfections have achieved significant results in basic research, one issue is non-specificity leading to off-target transfections. Further research needs to be done to improve the accuracy of viral transfections for future tests and clinical trials.
ASO Guidelines
In December 2021, the FDA issued a draft guidance for the use of ASO drug products. This draft guidance was directed towards sponsor-investigators developing individualized investigational antisense oligonucleotide (ASO) drug products for severely debilitating or life-threatening diseases. "Severely debilitating" corresponds to a disease or condition that causes major irreversible morbidity, whereas "life-threatening" means that the disease or condition carries a likelihood of death unless the course of treatment leads to an endpoint of survival. Individuals who have a severely debilitating, life-threatening disease usually have no alternative treatment options, and their diseases will progress rapidly, leading to early death and/or devastating or irreversible morbidity within a short time frame without treatment.
Drug development is usually targeted at a large number of individuals; in this case that is not possible because of the specificity of the mechanism of action of the ASO combined with the rarity of the treatment-amenable patient population. Under FDA regulations, a protocol under which an individual ASO product is administered to a human subject must be reviewed and approved by an institutional review board (IRB) before it can be administered. When the individual is a child, additional safeguards need to be identified in order to prevent any developmental issues that may affect the life of the individual. The sponsor-investigator needs to obtain informed consent from the individual or from the person responsible for the individual. The consent needs to include a description of reasonably foreseeable risks or discomforts associated with the use of the ASO drug. The sponsor also needs to obtain the individual's clinical and genetic diagnosis to confirm that the ASO will be beneficial. The analysis may be through gene sequencing, enzymatic analysis, biochemical testing, or imaging evaluations, and all results need to be included in the application. The sponsor also needs to include evidence that establishes the role of the gene variant targeted by the ASO drug, and to provide evidence that the identified gene variant or variants are unique to the individual.
The guidance suggests that the starting dose should be based on available non-clinical data collected from model organisms or in vitro studies and should be in line with other available ASO drug product dosing information. At the starting dose, pharmacological effects are expected. Furthermore, it is advised that a dose-escalation method be utilized, in which the dose is escalated from its initial level based on pharmacodynamic effects and/or the trial participant's response to the ASO.
In addition, protocols submitted to the FDA need to have a clear dosing plan and justification for the selection of the starting dose, the dosing interval, and the plan for dose escalation or dose reduction based on the clinical pharmacodynamic effects of the drug on the individual. All anticipated outcomes should also be included in the drug plan when it is submitted to the FDA. It is extremely important for the investigators to monitor the patient closely during dose escalation, and adequate time should be provided during the escalation period in order to see therapeutic results. It is advised that the investigator not make concurrent changes to the dosing interval and the dose without justification. The submitted plan should include a de-escalation/discontinuation plan if toxicity is observed. All drug administration needs to take place in an inpatient setting initially, in order to understand the adverse effects the drug may have. Once drug toxicity, benefit, and adverse effects are identified, the drug can be administered in an outpatient manner as long as the same concentration of the drug is administered.
See also
RNA therapeutics
Messenger RNA
RNA editing
References
RNA
Gene expression
Protein biosynthesis
Molecular genetics
Spliceosome
RNA splicing
Life sciences industry | NcRNA therapy | [
"Chemistry",
"Biology"
] | 4,772 | [
"Protein biosynthesis",
"Life sciences industry",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
70,197,345 | https://en.wikipedia.org/wiki/Christoph%20Weder | Christoph Weder is the former director of the Adolphe Merkle Institute (AMI) at the University of Fribourg, Switzerland, and a professor of polymer chemistry and materials. He is best known for his work on stimuli-responsive polymers, polymeric materials that change one or more of their properties when exposed to external cues. His research is focused on the development, investigation, and application of functional materials, in particular stimuli-responsive and bio-inspired polymers.
Education and career
Christoph Weder was born on July 30, 1966. He began elementary school in Mülheim a. Main, Germany, in 1972 before moving to Thalwil, Switzerland, in 1974, where he completed elementary and secondary school. He then attended the high school Kantonsschule Enge in Zurich, from which he graduated in 1985. Following in the footsteps of his father, who was also a polymer chemist, Weder studied chemistry at the Swiss Federal Institute of Technology (ETH Zurich) in Zurich, where he received his diploma in chemistry in 1990. He then joined the research group of Professor Ulrich W. Suter as a doctoral student and in 1994 was awarded the degree of Doctor of Natural Sciences for his dissertation “New Polyamides with Stable Nonlinear Optical Properties.” While at ETH, Weder was also trained as a chemistry teacher and received his teaching certification in 1992. With a fellowship from the Swiss National Science Foundation, Weder then spent one year as a postdoctoral research fellow in the Department of Chemistry at the Massachusetts Institute of Technology, where he worked under the guidance of then-provost Mark S. Wrighton.
Weder returned to the Department of Materials of ETH Zurich in 1995, where he joined the group of Professor Paul Smith and continued to work on photofunctional polymers. Based on his habilitation thesis entitled “Polarizing Light with Polymers,” Weder received his habilitation and, bestowed with the venia legendi, became an independent lecturer in 1999. In 2001, Weder left ETH and joined Case Western Reserve University in Cleveland, Ohio, as an associate professor in the Department of Macromolecular Science and Engineering. He was promoted to professor in 2007, and in 2008 was named the F. Alex Nason Professor.
In 2009, Weder returned to Switzerland and joined the Adolphe Merkle Institute (AMI) as Professor of Polymer Chemistry and Materials. AMI, which was founded in 2008 thanks to a gift from Adolphe Merkle, is an interdisciplinary research center that focuses on fundamental and application-oriented research in soft nano- and materials sciences. In January 2010, Weder was appointed as the institute’s director, serving until April 2022.
Weder led a team that was awarded a grant from the Swiss National Science Foundation (SNSF) to establish the National Competence Center in Research (NCCR) Bio-Inspired Materials. He served as the center’s director from its launch in 2014 until 2020. The NCCRs are a research instrument of the Swiss National Science Foundation (SNSF) that aim to strengthen research in areas of strategic importance for the future of Swiss science, business and society.
Weder remains an adjunct professor at CWRU and has served as a visiting professor at Chulalongkorn University in Bangkok, Thailand since 2003. He serves as an Associate Editor of ACS Macro Letters and was a co-editor of the RSC Book Series Polymer Chemistry from 202-2021.
Weder has co-authored more than 300 peer-reviewed articles in scientific journals and over twenty book chapters. He also edited two books. As of March 2022, Weder has an h-index of 87 and his works have been cited more than 27,000 times.
Weder is a co-inventor of more than twenty patent families that protect technologies such as light-polarizing security features, mechanochromic materials, sea-cucumber inspired dynamic mechanical polymer nanocomposites, stimuli-responsive supramolecular polymers, materials for optical upconversion, shape memory polymers, and optical data storage systems. He was a co-founding board member of the ETH-spinoff company Omlidon Technologies, LLC (1999–2002), and served on the board of directors of Gel Instrumente AG (1994–2006).
Weder is the recipient of a 3M Non-Tenured Faculty Award, a DuPont Young Professor Award, an NSF Special Creativity Award, and the Case School of Engineering Award. He was awarded a prestigious European Research Council (ERC) Advanced Grant, and is a Fellow of the American Chemical Society’s Division of Polymer Chemistry. In 2017, he was nominated as a member of the Swiss Academy of Technical Sciences "in recognition of his pioneering work in the development of nanomaterials through combination of fundamental research and practical applications as well as his contribution to the successful establishment of the Adolphe Merkle Institute".
Weder is married and has three children.
Research
Weder’s early research activities in the 1990s focused on polymers with special optical properties. This involved the development of nonlinear optical polymers and investigations of the structure-property relationships of photoluminescent poly(p-phenylene ethynylene)s. He demonstrated the usefulness of these semiconducting polymers as the active layer in polymer-based light-emitting diodes. His group also exploited the possibility to orient such rod-like molecules to create fluorescent materials that display linearly polarized absorption and emission. Such materials formed the basis of security features that Weder’s group developed, which were used as an anti-counterfeiting element in security paper. His team also discovered a light-polarizing energy transfer effect that can be used to produce highly efficient fluorescent polarizers. Such elements are useful in display and other applications.
Weder's research focus turned to stimuli-responsive polymers shortly after he moved to CWRU in 2001. In 2002, Weder's research lab developed a novel method to create polymeric materials that change their fluorescence color upon deformation. Recognizing the potential for practical applications this effect had, Weder established a research program to develop polymers that translate mechanical forces into optical signals, which is still active today, and shortly thereafter, mechanochromic polymers began to attract widespread interest. Most of the mechanochromic materials reported by Weder's group in the following two decades operate on the basis of the same general transduction principle, which involves changing the interactions among optically active motifs in response to mechanical deformation. Recent discoveries include the development of new mechanically responsive motifs or "mechanophores" based on rotaxanes and loop-forming dye pairs.
Controlling the interactions between molecular or nanoscale building blocks through an external stimulus has become one of Weder’s main design tools for the creation of stimuli-responsive polymers. In 2008, in collaboration with his colleague Stuart Rowan, Weder introduced stimuli-responsive mechanically adaptive polymer nanocomposites whose architecture and function was inspired by sea cucumbers. The mechanical properties of these materials, which were made by incorporating nanocellulose crystals as a reinforcing filler into polymer matrices, depend on the interactions among the cellulose nanocrystals (CNCs), and can be regulated by an external stimulus. The approach was initially used to create mechanically morphing implant materials, which soften upon exposure to physiological conditions. This work led to sustained research efforts in Weder’s group on bio-inspired mechanically morphing polymers, the development of protocols for the processing of CNC/polymer nanocomposites, and the development of new cellulose-based nanocomposites. Adaptive polymers that show such mechanical morphing upon exposure to physiological conditions were reported to increase the functionality of cortical implants.
The possibility to heal defects in polymeric materials can increase the reliability and durability of polymer products. In 2011, also in collaboration with Rowan, Weder demonstrated that the UV-light induced temporary disassembly of metallosupramolecular polymers can be used to heal defects in these materials. Expanding on this concept, Weder’s team introduced light healable nanocomposites, and modified the structure to include different binding motifs and architectures, for example glassy hydrogen-bonded supramolecular polymer networks. His group also used this approach to develop adhesives with the capability to bond or debond on demand. Weder’s group sought to push the mechanical properties of supramolecular polymers towards those of conventional thermoplastics. In 2019, his team demonstrated that it is possible to toughen stiff but brittle glassy supramolecular polymer networks by forming blends with a rubbery component. More recent versions of such materials were shown to be healable and to display property combinations that are comparable to some conventional plastics.
References
1966 births
Living people
21st-century Swiss chemists
Massachusetts Institute of Technology alumni
20th-century Swiss chemists
Polymer chemistry
Polymer scientists and engineers
ETH Zurich alumni
Christoph Weder | Christoph Weder | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,856 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry",
"Materials science"
] |
70,201,444 | https://en.wikipedia.org/wiki/Glass%20instrument | This family of musical instruments (also called crystallophones) includes those whose primary material is glass. They may be played using percussive techniques, such as striking the glass to produce a sound, or by utilizing friction to generate a resonant sound (a playing technique used for friction idiophones). Many glass instruments produce an ethereal, otherworldly timbre. A well-known glass instrument is Ben Franklin's glass harmonica.
History
First glass instruments
Historical records suggest that early versions of glass and porcelain instruments were first developed around the 14th century in China, Japan, and Persia. An encyclopedia of Chinese instruments in 1300 A.D. mentions an instrument consisting of "nine cups, struck with a stick". Similar musical bowls were recorded in Japan, taking the form of a porcelain gong, and in Persia as a set of earthenware water-filled cups which were tapped to produce notes. These percussive instruments spread to Europe in the following centuries, but they may also have developed independently. Records published at Milan in 1492 contain a woodcut depicting playing glasses as part of an experiment, and an inventory of the in Vienna compiled in 1596 includes descriptions of a glasswork instrument with various octaves and semitones. Publications in 1677 describe using a wet finger to create resonant sounds by stroking the rim of eight glasses with various quantities of wine or other liquids, described as "making cheerful wine music". The music produced by different liquids of the glasses was thought to correspond to emotions relating to the four "humours" of the body, and even was attributed to curing various medical conditions such as "thickness of the blood". These descriptions resemble what is now known as the glass harp, an instrument consisting of various wine glasses filled with water to varying amounts and played by running a wet finger along the rim.
Popularization in the 18th century
The earliest instance of a glass instrument being used in concert music was noted in 1732, with records stating that a series of partially filled wine glasses were tapped with a muffled stick to perform concertos with supporting instrumentation from bass and violins. Instructions on how to construct a glass harp, referred to by the French name Verrillon, were included in a 1738 text published in Germany.
The invention of the glass harp is often attributed to Richard Pockrich, an Irish inventor who popularized the instrument in 1741. His invention, which he referred to as an "angelic organ," consisted of large glass bells which he would strike with a muffled stick and may have later played by running a moist finger along their rims. He popularized the instrument by going on tour in England and Ireland and performing famous pieces such as Handel's Water Music. In 1759, both Pockrich and his angelic organ were destroyed in a fire while staying in London. Performances using the glass harp grew in popularity in the following decades, accompanied by publications on instruction, maintenance, and repertoire.
In 1763, Benjamin Franklin applied the principles behind the glass harp to invent his own glass instrument, the glass harmonica. This instrument places the glass bowls horizontally along a rotating axis. The glasses were originally kept wet by a sponge, but later improvements to the invention by other inventors redesigned the instrument so that the bowls rested in a trough of water, ensuring that their surface is always moist and improving tonal quality and ease of play. The player may rotate the bowls using a treadle operated by their foot, and rest their hands along the bowls to produce a ringing sound. Popularized by its ethereal sound, over 100 composers wrote works to feature this instrument.
The glass harmonica grew very popular in Germany, and is attributed to supporting the romanticism movement in the late 18th century. By 1830, however, the popularity of the glass harmonica declined. The glass harp continued to have widespread popularity in England, in part due to its ease of accessibility compared to the glass harmonica.
The glass dulcimer was also performed during this time. Accounts of this instrument, similar in design to a glass harmonica but struck with soft mallets instead of rubbed with one's hands, appear in books and plays around the 1770s. It is possible this is the same instrument as the glasschord, an instrument consisting of glass bars struck by padded hammers activated by a keyboard, similar to a celesta.
The glass flute (or crystal flute) was patented in 1806 in France by Claude Laurent. These crystal flutes grew in popularity and were owned by emperors, kings, and heads of state, including James Madison. Only 185 of Laurent's glass flutes have survived to the modern day.
Post 18th century and modern
While records of glass instruments persist into the 20th century, evidence indicates that their general popularity and performances declined past the 1850s. Nowadays, glass instruments are used in compositions looking to capture an ethereal atmosphere. Some famous examples include using the glass harmonica in the 1982 film Star Trek II: The Wrath of Khan, as well as Marco Beltrami's film scores for The Minus Man (1999) and The Faculty (1998).
The Library of Congress invited the singer Lizzo to play a crystal flute from their collection in September 2022.
Various modern glassblowers, instrumentalists, and performers have invented and utilized contemporary glass instruments. Some examples include:
Glass marimba, utilized by the Brazilian percussion ensemble, Uakti
Verrophone, invented in 1983 by Sascha Reckert
Cristal Baschet, invented in 1952 by the brothers Bernard and François Baschet
Acoustics and functionality
Glass instruments are typically played as either percussion instruments or friction idiophones, with a few exceptions such as the crystal flute and glass violin.
Percussion
When played using percussive techniques, the exposed glass would be struck with a mallet, finger, or hammer to produce a sound. Due to the fragility of glass as a material, the hammers or mallets used are often dampened with cloth or other material to mitigate the risk of fracturing the instruments. The glass may also be rubbed with the mallet to produce sound.
Friction idiophones
Friction idiophones produce sound by being rubbed or scraped with a non-sounding object. In the case of glass instruments, the glass is often rubbed with a moistened finger to gradually draw out a note. In the example of the glass harp, "as the player's finger moves around the rim of the glass, the nodes and antinodes move with it, resulting in a pulsating sound."
List of glass instruments
References
Glass
Crystallophones | Glass instrument | [
"Physics",
"Chemistry"
] | 1,326 | [
"Homogeneous chemical mixtures",
"Amorphous solids",
"Unsolved problems in physics",
"Glass"
] |
53,163,128 | https://en.wikipedia.org/wiki/Space-based%20measurements%20of%20carbon%20dioxide | Space-based measurements of carbon dioxide (CO2) are used to help answer questions about Earth's carbon cycle. There are a variety of active and planned instruments for measuring carbon dioxide in Earth's atmosphere from space. The first satellite mission designed to measure CO2 was the Interferometric Monitor for Greenhouse Gases (IMG) on board the ADEOS I satellite in 1996. This mission lasted less than a year. Since then, additional space-based measurements have begun, including those from two high-precision (better than 0.3% or 1 ppm) satellites (GOSAT and OCO-2). Different instrument designs may reflect different primary missions.
Purposes and highlights of findings
There are outstanding questions in carbon cycle science that satellite observations can help answer. The Earth system absorbs about half of all anthropogenic CO2 emissions. However, it is unclear exactly how this uptake is partitioned among different regions across the globe. It is also uncertain how different regions will behave in terms of CO2 flux under a different climate. For example, a forest may increase CO2 uptake due to the CO2 fertilization or β-effect, or it could release CO2 due to increased metabolism by microbes at higher temperatures. These questions are difficult to answer with historically spatially and temporally limited data sets.
Even though satellite observations of CO2 are somewhat recent, they have been used for a number of different purposes, some of which are highlighted here:
Megacity CO2 enhancements were observed with the GOSAT satellite and minimum observable space-based changes in CO2 emissions were estimated.
Satellite observations have been used for visualizing how CO2 is distributed globally, including studies that have focused on anthropogenic emissions.
Flux estimates were made of CO2 into and out of different regions.
Correlations were observed between anomalous temperatures and CO2 measurements in boreal regions.
Zonal asymmetric patterns of CO2 were used to observe fossil fuel signatures.
CO2 emission ratios with methane were measured from forest fires.
CO2 emission ratios with carbon monoxide (a marker of incomplete combustion) measured by the MOPITT instrument were analyzed over major urban regions across the globe to measure developing/developed status.
OCO-2 observations were used to estimate CO2 emissions from wildfires in Indonesia in 2015.
OCO-2 observations were also used to estimate the excess land–ocean CO2 flux due to the 2014–16 El Niño event.
GOSAT observations were used to assess the impact of the 2010–2011 El Niño Modoki on the Brazilian carbon balance.
OCO-2 observations were used to quantify CO2 emissions from individual power plants, demonstrating the potential for future space-based emission monitoring.
Challenges
Remote sensing of trace gases has several challenges. Most techniques rely on observing infrared light reflected off Earth's surface. Because these instruments use spectroscopy, a spectrum is recorded at each sounding footprint, which means there is significantly (about 1000×) more data to transfer than would be required for just an RGB pixel. Changes in surface albedo and viewing angles may affect measurements, and satellites may employ different viewing modes over different locations; these may be accounted for in the algorithms used to convert raw spectra into final measurements. As with other space-based instruments, space debris must be avoided to prevent damage.
Water vapor can dilute other gases in air and thus change the amount of CO2 in a column above the surface of the Earth, so column-average dry-air mole fractions (XCO2) are often reported instead. To calculate this, instruments may also measure O2, which is diluted similarly to other gases, or the algorithms may account for water and surface pressure from other measurements. Clouds may interfere with accurate measurements, so platforms may include instruments to measure clouds. Because of measurement imperfections and errors in fitting signals to obtain XCO2, space-based observations may also be compared with ground-based observations such as those from the TCCON.
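A common way of expressing this dilution correction, for example in TCCON-style retrievals, is to take the ratio of the retrieved CO2 column to the dry-air column, with the O2 column (and its known dry-air mole fraction of 0.2095) standing in for the dry-air column; this is a generic formulation rather than the algorithm of any particular instrument:

```latex
X_{\mathrm{CO_2}}
  = \frac{\text{column } \mathrm{CO_2}}{\text{column dry air}}
  \approx 0.2095 \times \frac{\text{column } \mathrm{CO_2}}{\text{column } \mathrm{O_2}}
```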
List of instruments
Partial column measurements
In addition to the total column measurements of CO2 down to the ground, there have been several limb sounders that have measured CO2 through the edge of Earth's upper atmosphere, and thermal instruments that measure the upper atmosphere during the day and night.
Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) onboard TIMED launched 7 December 2001 makes measurements in the mesosphere and lower thermosphere in thermal bands.
ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) onboard SCISAT-1, launched 13 August 2003, measures solar spectra, from which profiles of CO2 can be calculated.
SOFIE (Solar Occultation for Ice Experiment) is a limb sounder on board the AIM satellite launched 25 April 2007.
Conceptual Missions
There have been other conceptual missions which have undergone initial evaluations but have not been chosen to become a part of space-based observing systems. These include:
Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) is a lidar-based mission
Geostationary Fourier Transform Spectrometer (GeoFTS)
Atmospheric Imaging Mission for Northern regions (AIM-North) would involve a constellation of two satellites in elliptical orbits to focus on northern regions. The concept is undergoing a Phase 0 study in 2019–2020.
Carbon Monitoring Satellite (CarbonSat) was a concept for an imaging satellite with global coverage approximately every 6 days. This mission never proceeded beyond the concept phase.
References
Satellite meteorology
Atmosphere of Earth
Carbon dioxide
Satellites monitoring GHG emissions | Space-based measurements of carbon dioxide | [
"Chemistry"
] | 1,066 | [
"Greenhouse gases",
"Carbon dioxide"
] |
53,163,575 | https://en.wikipedia.org/wiki/Avinash%20Kumar%20Agarwal | Avinash Kumar Agarwal (born 22 August 1972) is the director of the Indian Institute of Technology, Jodhpur. He is an Indian mechanical engineer, tribologist and a professor at the Department of Mechanical Engineering of the Indian Institute of Technology, Kanpur. He is known for his studies on internal combustion engines, emissions, alternate fuels and CNG engines and is an elected fellow of the American Society of Mechanical Engineers (2013), Society of Automotive Engineers, US (2012), National Academy of Sciences, Allahabad (2018), Royal Society of Chemistry, UK (2018), International Society for Energy, Environment and Sustainability (2016), and Indian National Academy of Engineering (2015). The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, for his contributions to engineering sciences in 2016. Agarwal has been awarded the prestigious J. C. Bose Fellowship of the Science and Engineering Research Board, Government of India (August 2019). Agarwal is among the top ten highly cited researchers (HCR) of 2018 from India, according to Clarivate Analytics, which operates the Web of Science.
Biography
Avinash Kumar Agarwal, born on 22 August 1972 at Karauli, in the Indian state of Rajasthan, earned his graduate degree in mechanical engineering (BE) from Malaviya Regional Engineering College (MREC) Jaipur (present-day Malaviya National Institute of Technology, Jaipur) of the University of Rajasthan in 1994 and did his master's degree at the Centre for Energy Studies of the Indian Institute of Technology, Delhi, from where he obtained an MTech in energy studies in 1996. Immediately after this, he pursued his PhD at IIT Delhi, at the Centre for Energy Studies under the guidance of L. M. Das, which he successfully defended in 1999 for his thesis, Performance evaluation and tribological studies on a biodiesel-fuelled compression ignition engine. Thereafter he moved to the US for his postdoctoral work, which he completed at the Engine Research Center of the University of Wisconsin-Madison between 1999 and 2001. On his return to India in March 2001, he joined the Indian Institute of Technology, Kanpur as an assistant professor. He was promoted to associate professor in 2007 and has been serving the institute since 2012 as a professor at the Department of Mechanical Engineering. He had seven short stints abroad as a visiting professor during this period: first at the Wolfson School of Mechanical and Manufacturing Engineering of Loughborough University in 2002; second and third at the Photonics Institute of the Technical University of Vienna in 2004 and 2013; fourth, fifth and sixth at Hanyang University, South Korea, in 2013, 2014 and 2015; and the last at the Korea Advanced Institute of Science and Technology (KAIST) in 2016. On 19 April 2024, he was appointed as the director of IIT Jodhpur.
Agarwal is married to Dr. Rashmi A. Agarwal and the couple has two children, Aditya (b. 2003) and Rithwik (b. 2006). The family lives in Kanpur in Uttar Pradesh.
Legacy
Agarwal's research has covered the fields of engine combustion, alternate fuels, emission and particulate control, optical diagnostics, methanol engine development, fuel spray optimization and tribology, and his work has assisted in the development of low-cost diesel oxidation catalysts and homogeneous charge compression ignition engines. His studies of laser ignition of methane–air and hydrogen–air mixtures and of biodiesel based on Indian feedstocks have widened the understanding of the subjects; he carried out a project on biodiesels during 2010–13 for the Department of Science and Technology of India. He has documented his research in over 280 articles; Google Scholar and ResearchGate, online repositories of scientific articles, have listed many of them. In addition, he has edited forty books, most of which are published by Springer, including Combustion for Power Generation and Transportation and Novel Combustion Concepts for Sustainable Energy Development, and has contributed forty-two chapters to books. He is also a co-editor of a five-volume reference text, Handbook of Combustion, published by Wiley-VCH in 2010.
Agarwal is the Associate Principal Editor of the journal Fuel, Editor-in-Chief of the Journal of Energy and Environmental Sustainability (JEES) and is Associate Editor of two other journals, the ASME Journal of Energy Resources Technology and the Journal of the Institution of Engineers (India): Series C. He is a member of the editorial board of several prestigious journals such as the International Journal of Engine Research (published by SAE International and IMechE, London, UK), Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, and Recent Patents on Mechanical Engineering of Bentham Science. He is also a former associate editor of the International Journal of Vehicle Systems Modelling and Testing of Inderscience Publishers and of the International Journal of Oil, Gas and Coal Technology (IJOGCT), published by Inderscience Publishers, and guest-edited a special issue of the Journal of Automobile Engineering on alternative fuels in 2007. He has been a member of the Methanol Task Force of the Department of Science and Technology since 2017, is a former member of the Technology Systems Group of the Department of Science and Technology, and a former member of the Experts Group on Biofuels and Retrofitting of Engines of the Government of India. He is a member of the board of associates of the Internal Combustion Engines Division of the American Society of Mechanical Engineers and is associated with SAE International, sitting on many of their review committees.
He was the session organizer for 2005, 2006, 2007, 2008 and 2009 editions of SAE World Congress and chaired the 2004, 2005 and 2006 sessions on alternative fuel and internal combustion engines.
Awards and honors
Agarwal received the Young Scientist Award of the Department of Science and Technology in 2002, followed by the Career Award for Young Teachers of the All India Council for Technical Education (AICTE) in 2004. He received the Young Engineer Award of the Indian National Academy of Engineering in 2005 and the Young Scientists Medal of the Indian National Science Academy in 2007. He received the Alkyl Amine Young Scientist Award of the Institute of Chemical Technology the same year and, a year later, SAE International selected him for the 2008 Ralph R. Teetor Educational Award. He received the C. V. Raman Young Teachers Award of the IES Group in 2011 and the NASI-Reliance Industries Platinum Jubilee Award of the National Academy of Sciences, India in 2012. The Indian National Academy of Engineering honored him again in 2012 with the Silver Jubilee Young Engineer Award, which was followed by the Rajib Goyal Prize in Physical Sciences (2015) from Kurukshetra University; the Council of Scientific and Industrial Research then awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2016. Afterwards, he was conferred the Er. M P Baya National Award (2017) in Mechanical Engineering by the Institution of Engineers, Udaipur, and the Clarivate Analytics India Research Excellence – Citation Award 2017, the sixth edition of the prize for highly cited, high-impact work from India, given by Clarivate Analytics.
Agarwal, who held the BOYSCAST Fellowship of the Department of Science and Technology in 2002 and the Devendra Shukla Research Fellowship of IIT Kanpur in 2009, was elected as a fellow by the Indian National Academy of Engineering in 2015. He is also a fellow of the American Society of Mechanical Engineers, SAE International, the Royal Society of Chemistry, the Indian National Academy of Engineering, the National Academy of Sciences and the International Society for Energy, Environment and Sustainability. He was listed in several editions of Marquis Who's Who in Science and Engineering, Who's Who (Emerging Leaders) and Who's Who in the World. Agarwal was the Poonam and Prabhu Goyal Chair Professor at IIT Kanpur from 2012 to 2016. He is currently the SBI Endowed Chair Professor at the same institution (2018–2021).
Selected bibliography
Books
Environmental Contaminants, 431 pages, Published by Springer, Singapore (2018), (Eds.) Tarun Gupta, Avinash K Agarwal, Rashmi A Agarwal, Nitin K Labhsetwar () DOI: 10.1007/978-981-10-7332-8.
Air Pollution and Control, 260 pages, Published by Springer, Singapore (2018), (Eds.) Nikhil Sharma, Avinash K Agarwal, Peter Eastwood, Tarun Gupta, Akhilendra P Singh () DOI: 10.1007/978-981-10-7185-0.
Coal and Biomass Gasification, 521 pages, Published by Springer, Singapore (2018), (Eds.) Santanu De, Avinash K Agarwal, V S Moholkar, Thallada Bhaskar () DOI: 10.1007/978-981-10-7335-9.
Droplets and Sprays, 430 pages, Published by Springer, Singapore (2018), (Eds.) Saptarshi Basu, Avinash K Agarwal, Achintya Mukhopadhyay, Chetan Patel () DOI: 10.1007/978-981-10-7449-3.
Advances in Internal Combustion Engine Research, 345 pages, Published by Springer, Singapore (2018), (Eds.) Dhananjay K Srivastava, Avinash K Agarwal, Amitava Datta, Rakesh K Maurya () DOI: 10.1007/978-981-10-7575-9.
Modeling and Simulations of Turbulent Combustion, 661 pages, Published by Springer, Singapore (2018), (Eds.) Santanu De, Avinash K Agarwal, Swetoprovo Chaudhuri, Swarnendu Sen () DOI: 10.1007/978-981-10-7410-3.
Prospects of Alternative Transportation Fuels, 405 pages, Published by Springer, Singapore (2018), (Eds.) Akhilendra P Singh, Avinash K Agarwal, Rashmi A Agarwal, Atul Dhar, Mritunjay Kumar Shukla () DOI: 10.1007/978-981-10-7518-6.
Environmental, Chemical and Medical Sensors, 409 pages, Published by Springer, Singapore (2018), (Eds.) Shantanu Bhattacharya, Avinash K Agarwal, Nripen Chanda, Ashok Pandey, Ashis Kumar Sen () DOI: 10.1007/978-981-10-7751-7.
Applications of Solar Energy, 364 pages, Published by Springer, Singapore (2018), (Eds.) Himanshu Tyagi, Avinash K Agarwal, Prodyut R Chakraborty, Satvasheel Powar () DOI: 10.1007/978-981-10-7206-2.
Bioremediation: Applications for Environmental Protection and Management, 411 pages, Published by Springer, Singapore (2018), (Eds.) Sunita J Varjani, Avinash K Agarwal, Edgard Ghansounou, Baskar Gurunathan () DOI: 10.1007/978-981-10-7485-1.
Applications Paradigms of Droplet and Spray Transport: Paradigms and Applications, 379 pages, Published by Springer, Singapore (2018), (Eds.) Saptarshi Basu, Avinash K Agarwal, Achintya Mukhopadhyay, Chetan Patel () DOI: 10.1007/978-981-10-7233-8.
Combustion for Power Generation and Transportation: Technology, Challenges and Prospects, 451 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Santanu De, Ashok Pandey, Akhilendra Pratap Singh (). DOI: 10.1007/978-981-10-3785-6.
Locomotives and Rail Road Transportation: Technology, Challenges and Prospects, 247 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Atul Dhar, Anirudh Gautam, Ashok Pandey (). DOI: 10.1007/978-981-10-3788-7.
Biofuels: Technology, Challenges and Prospects, 245 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Rashmi Avinash Agarwal, Tarun Gupta, Bhola Ram Gurjar (). DOI: 10.1007/978-981-10-3791-7.
Technology Vision 2015: Technology Roadmap Transportation, 237 pages, (Eds.) Avinash Kumar Agarwal, S S Thipse, Akhilendra P Singh, Gautam Goswami, Mukti Prasad, Published by TIFAC, New Delhi, December 2016.
Energy, Combustion and Propulsion: New Perspectives, 609 pages, Published by Athena Academic, London, UK (2016), (Eds.) Avinash K Agarwal, Suresh K. Aggarwal, Ashwani K. Gupta, Abhijit Kushari, Ashok Pandey ()
Novel Combustion Concepts for Sustainable Energy Development, 562 pages, Published by Springer, Singapore (2014), (Eds.) Avinash K. Agarwal, Ashok Pandey, Ashwani K. Gupta, Suresh K. Aggarwal, Abhijit Kushari. (). DOI: 10.1007/978-81-322-2211-8-18.
Handbook of Combustion, 5 Volumes, 3168 pages, Hardcover, April 2010, Published by Wiley VCH, (Eds.) Maximilian Lackner, Franz Winter, Avinash K. Agarwal ().
CI Engine Performance for Use with Alternative Fuels, 2009(SP-2237), 185 pages, Published by SAE International, US, 2009, (Eds.) Amiyo K Basu, Avinash Kumar Agarwal, Paul Richards, G. J. Thompson, Scott A Miers, Sundar Rajan Krishnan ().
Combustion Science and Technology: Recent Trends Published by Narosa Publishing House, New Delhi, 2009 (Eds.) A. K. Agarwal, A. Kushari, S. K. Aggarwal, A. K. Runchal, 300 Pages ().
CI Engine Performance for use with Alternative Fuels (SP-2176), Published by SAE International, US, 2008, (Eds.) Avinash K. Agarwal, G. J. Thompson, Scott A. Miers, Sundar R. Krishnan ().
Alternative Fuels and CI Engine Performance (SP-2067), 160 Pages, Published by SAE International, US, 2007, (Eds.) Avinash K. Agarwal, G. J. Thompson ().
New Diesel Engines and Components and CI Engine performance for Use with Alternative Fuels (SP-2014), 171 Pages, Published by SAE International, US, 2006, (Eds.) A. Jain, J. E. Mossberg, Avinash K. Agarwal, G. J. Thompson ().
CI Engine performance for Use with Alternative Fuels, and New Diesel Engines and Components (SP-1978), 196 Pages, Published by SAE International, US, 2005, (Eds.) J. E. Mossberg, A. Jain, G. J. Thompson, Avinash K. Agarwal ().
Articles
Avinash K Agarwal*, Bushra Ateeq, Tarun Gupta, Akhilendra P. Singh, Swaroop K Pandey, Nikhil Sharma, Rashmi A Agarwal, Neeraj K. Gupta, Hemant Sharma, Ayush Jain, Pravesh C Shukla, "Toxicity and mutagenicity of exhaust from compressed natural gas: Could this be a clean solution for megacities with mixed-traffic conditions?”, Environmental Pollution. 239, 499–511, 2018.doi: 10.1016/j.envpol.2018.04.028
See also
Diesel fuel
Vegetable oil
References
Further reading
Recipients of the Shanti Swarup Bhatnagar Award in Engineering Science
1972 births
Indian technology writers
People from Rajasthan
Indian mechanical engineers
Tribologists
University of Rajasthan alumni
IIT Delhi alumni
Academic staff of IIT Delhi
Academic staff of IIT Kanpur
Academics of Loughborough University
Living people
Fellows of the Indian National Academy of Engineering
University of Wisconsin–Madison fellows | Avinash Kumar Agarwal | [
"Materials_science"
] | 3,545 | [
"Tribology",
"Tribologists"
] |
42,987,364 | https://en.wikipedia.org/wiki/Bott%E2%80%93Samelson%20resolution | In algebraic geometry, the Bott–Samelson resolution of a Schubert variety is a resolution of singularities. It was introduced by in the context of compact Lie groups. The algebraic formulation is independently due to and .
Definition
Let G be a connected reductive complex algebraic group, B a Borel subgroup and T a maximal torus contained in B.
Let $w \in W = N_G(T)/T$ be an element of the Weyl group. Any such w can be written as a product of reflections by simple roots. Fix a minimal such expression:
$\underline{w} = (s_{i_1}, s_{i_2}, \ldots, s_{i_\ell})$ so that $w = s_{i_1} s_{i_2} \cdots s_{i_\ell}$. (ℓ is the length of w.) Let $P_{i_j}$ be the subgroup generated by B and a representative of $s_{i_j}$. Let $Z_{\underline{w}}$ be the quotient:
$Z_{\underline{w}} = P_{i_1} \times P_{i_2} \times \cdots \times P_{i_\ell} / B^{\ell}$ with respect to the action of $B^{\ell}$ by $(b_1, \ldots, b_\ell) \cdot (p_1, \ldots, p_\ell) = (p_1 b_1,\; b_1^{-1} p_2 b_2,\; \ldots,\; b_{\ell-1}^{-1} p_\ell b_\ell)$.
It is a smooth projective variety. Writing $X_w = \overline{B w B / B}$ for the Schubert variety for w, the multiplication map $\pi : Z_{\underline{w}} \to X_w$, $[p_1, \ldots, p_\ell] \mapsto p_1 p_2 \cdots p_\ell\, B$,
is a resolution of singularities called the Bott–Samelson resolution. It has the property $\pi_* \mathcal{O}_{Z_{\underline{w}}} = \mathcal{O}_{X_w}$ and $R^i \pi_* \mathcal{O}_{Z_{\underline{w}}} = 0$ for $i > 0$. In other words, $X_w$ has rational singularities.
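As a simple illustration (not part of the original text, but following the conventions above), consider the smallest nontrivial case $G = SL_2(\mathbb{C})$ with B the Borel subgroup of upper-triangular matrices. The Weyl group has a single simple reflection s, the minimal parabolic $P_s$ is all of $SL_2(\mathbb{C})$, and the Bott–Samelson variety of the length-one word is
$Z_{s} = P_{s}/B = SL_2(\mathbb{C})/B \cong \mathbb{P}^1 = X_{s},$
so the multiplication map is an isomorphism; in this case the Schubert variety is already smooth and needs no resolution.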
There are also some other constructions; see, for example, .
Notes
References
.
.
.
.
.
.
Algebraic geometry
Singularity theory | Bott–Samelson resolution | [
"Mathematics"
] | 206 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
42,989,614 | https://en.wikipedia.org/wiki/Laplace%20equation%20for%20irrotational%20flow | Irrotational flow occurs where the curl of the velocity of the fluid is zero everywhere. That is when $\nabla \times \mathbf{u} = 0$.
Similarly, if it is assumed that the fluid is incompressible: $\nabla \cdot \mathbf{u} = 0$.
Then, starting with the continuity equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$.
The condition of incompressibility means that the time derivative of the density is 0, and that the density can be pulled out of the divergence and divided out, thus leaving the continuity equation for an incompressible system: $\nabla \cdot \mathbf{u} = 0$.
Now, the Helmholtz decomposition can be used to write the velocity as the sum of the gradient of a scalar potential and the curl of a vector potential. That is: $\mathbf{u} = \nabla \phi + \nabla \times \mathbf{A}$.
Note that imposing the condition that $\nabla \times \mathbf{u} = 0$ implies that $\nabla \times (\nabla \phi) + \nabla \times (\nabla \times \mathbf{A}) = 0$.
The curl of the gradient is always 0. Note that the curl of the curl of a function is only uniformly 0 for the vector potential being 0 itself. So, by the condition of irrotational flow: $\mathbf{u} = \nabla \phi$.
And then using the continuity equation $\nabla \cdot \mathbf{u} = 0$, the scalar potential can be substituted back in to find Laplace's equation for irrotational flow: $\nabla^2 \phi = 0$.
Note that the Laplace equation is a well-studied linear partial differential equation. Its solutions are infinite; however, most solutions can be discarded when considering physical systems, as boundary conditions completely determine the velocity potential.
Examples of common boundary conditions include the velocity of the fluid, determined by , being 0 on the boundaries of the system.
There is a great amount of overlap with electromagnetism when solving this equation in general, as the Laplace equation also models the electrostatic potential in a vacuum.
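Because Laplace's equation is linear, simple iterative schemes suffice to solve it numerically once boundary conditions are fixed. The following Python sketch is not part of the original text; the grid size, boundary values, and tolerance are illustrative assumptions. It relaxes the velocity potential on a unit square with a Jacobi iteration and then recovers the velocity from its gradient.

```python
# Minimal sketch: Jacobi relaxation of Laplace's equation for the velocity potential phi.
import numpy as np

n = 51                      # grid points per side (assumed)
phi = np.zeros((n, n))      # initial guess for the potential

# Example Dirichlet boundary conditions, chosen to be consistent with a
# uniform flow phi = x (an assumed illustrative case).
x = np.linspace(0.0, 1.0, n)
phi[:, 0] = 0.0
phi[:, -1] = 1.0
phi[0, :] = x
phi[-1, :] = x

for _ in range(5000):       # Jacobi relaxation of the interior points
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2])
    if np.max(np.abs(new - phi)) < 1e-6:
        phi = new
        break
    phi = new

# The velocity field is recovered from the potential: u = grad(phi).
u_y, u_x = np.gradient(phi)
print(phi[n // 2, n // 2])  # potential at the centre of the domain
```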
There are many reasons to study irrotational flow, among them;
Many real-world problems contain large regions of irrotational flow.
It can be studied analytically.
It shows us the importance of boundary layers and viscous forces.
It provides us tools for studying concepts of lift and drag.
See also
Irrotational vector fields
Irrotational vortices
Potential flow around a circular cylinder
Potential flow around an airfoil section
References
Fluid dynamics | Laplace equation for irrotational flow | [
"Chemistry",
"Engineering"
] | 406 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
42,991,435 | https://en.wikipedia.org/wiki/Two-dimensional%20flow | In fluid mechanics, a two-dimensional flow is a form of fluid flow where the flow velocity at every point is parallel to a fixed plane. The velocity at any point on a given normal to that fixed plane should be constant.
Flow velocity in two dimensional flows
Flow velocity in Cartesian co-ordinates
Considering a two dimensional flow in the $xy$-plane, the flow velocity at any point $(x, y)$ at time $t$ can be expressed as – $\mathbf{u} = u(x, y, t)\, \hat{\mathbf{x}} + v(x, y, t)\, \hat{\mathbf{y}}$
Velocity in cylindrical co-ordinates
Considering a two dimensional flow in the $r\theta$-plane, the flow velocity at a point $(r, \theta)$ at a time $t$ can be expressed as – $\mathbf{u} = u_r(r, \theta, t)\, \hat{\mathbf{e}}_r + u_\theta(r, \theta, t)\, \hat{\mathbf{e}}_\theta$
Vorticity in two dimensional flows
Vorticity in Cartesian co-ordinates
Vorticity in two dimensional flows in the $xy$-plane can be expressed as – $\boldsymbol{\omega} = \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) \hat{\mathbf{z}}$
Vorticity in cylindrical co-ordinates
Vorticity in two dimensional flows in the $r\theta$-plane can be expressed as – $\boldsymbol{\omega} = \frac{1}{r} \left( \frac{\partial (r u_\theta)}{\partial r} - \frac{\partial u_r}{\partial \theta} \right) \hat{\mathbf{z}}$
Two dimensional sources and sinks
Line/point source
A line source is a line from which fluid appears and flows away on planes perpendicular to the line. When we consider 2-D flows on the perpendicular plane, a line source appears as a point source.
By symmetry, we can assume that the fluid flows radially outward from the source.
The strength of a source can be given by the volume flow rate that it generates.
Line/point sink
Similar to a line source, a line sink is a line which absorbs fluid flowing towards it, from planes perpendicular to it. When we consider 2-D flows on the perpendicular plane, it appears as a point sink.
By symmetry, we assume the fluid flows radially inwards towards the sink.
The strength of a sink is given by the volume flow rate of the fluid it absorbs.
Types of two-dimensional flows
Uniform source flow
A radially symmetrical flow field directed outwards from a common point is called a source flow. The central common point is the line source described above. Fluid is supplied at a constant rate from the source. As the fluid flows outward, the area of flow increases. As a result, to satisfy continuity equation, the velocity decreases and the streamlines spread out. The velocity at all points at a given distance from the source is the same.
The velocity of fluid flow can be given as – $\mathbf{u} = \frac{q}{2 \pi r}\, \hat{\mathbf{e}}_r$, where $q$ is the volume flow rate per unit depth (the source strength).
We can derive the relation between flow rate and velocity of the flow. Consider a cylinder of unit height, coaxial with the source. The rate at which the source emits fluid should be equal to the rate at which fluid flows out of the surface of the cylinder, so that $q = 2 \pi r\, u_r$ and hence $u_r = \frac{q}{2 \pi r}$.
The stream function associated with source flow is – $\psi = \frac{q}{2 \pi}\, \theta$
The steady flow from a point source is irrotational, and can be derived from velocity potential. The velocity potential is given by – $\phi = \frac{q}{2 \pi} \ln r$
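The relation between source strength and velocity can be checked numerically. The following Python sketch is illustrative only (the strength q and the radii are assumed values, not taken from the article): it integrates the radial velocity around circles of different radii and confirms that the volume flow rate per unit depth equals q, independent of the radius.

```python
# Minimal check: flow rate through any circle around a 2-D point source equals q.
import numpy as np

q = 2.5                                  # source strength (assumed value)

def radial_velocity(r):
    """Radial velocity of a point source at distance r."""
    return q / (2.0 * np.pi * r)

for r in (0.5, 1.0, 3.0):
    theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    dtheta = theta[1] - theta[0]
    v_r = radial_velocity(r) * np.ones_like(theta)   # v_r is uniform on the circle
    flow_rate = np.sum(v_r * r * dtheta)             # integral of v_r over the circumference
    print(f"r = {r}: flow rate = {flow_rate:.6f}  (source strength q = {q})")
```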
Uniform sink flow
Sink flow is the opposite of source flow. The streamlines are radial, directed inwards to the line source. As we get closer to the sink, area of flow decreases. In order to satisfy the continuity equation, the streamlines get bunched closer and the velocity increases as we get closer to the source. As with source flow, the velocity at all points equidistant from the sink is equal.
The velocity of the flow around the sink can be given by – $\mathbf{u} = -\frac{q}{2 \pi r}\, \hat{\mathbf{e}}_r$, where $q$ is the volume flow rate absorbed per unit depth.
The stream function associated with sink flow is – $\psi = -\frac{q}{2 \pi}\, \theta$
The flow around a line sink is irrotational and can be derived from velocity potential. The velocity potential around a sink can be given by – $\phi = -\frac{q}{2 \pi} \ln r$
Irrotational vortex
A vortex is a region where the fluid flows around an imaginary axis.
For an irrotational vortex, the flow at every point is such that a small particle placed there undergoes pure translation and does not rotate.
Velocity varies inversely with radius in this case. The velocity tends to infinity at $r = 0$, which is the reason the center is a singular point. The velocity is mathematically expressed as – $u_\theta = \frac{\Gamma}{2 \pi r}$, where $\Gamma$ is the circulation of the vortex.
Since the fluid flows around an axis, $u_r = 0$.
The stream function for irrotational vortices is given by – $\psi = -\frac{\Gamma}{2 \pi} \ln r$
While the velocity potential is expressed as – $\phi = \frac{\Gamma}{2 \pi}\, \theta$
For a closed curve enclosing the origin, the circulation (line integral of the velocity field) is $\oint \mathbf{u} \cdot d\mathbf{l} = \Gamma$, and for any other closed curve it is zero.
Doublet
A doublet can be thought of as a combination of a source and a sink of equal strengths kept at an infinitesimally small distance apart. Thus the streamlines can be seen to start and end at the same point.
The strength of a doublet made by a source and sink of strength kept a distance is given by –
The velocity of fluid flow can be expressed as –
The equations and the plot are for the limiting condition of
The concept of a doublet is very similar to that of electric dipoles and magnetic dipoles in electrodynamics.
References
External links
Two-dimensional sources and sinks
Flow regimes
Fluid dynamics
Planes (geometry) | Two-dimensional flow | [
"Chemistry",
"Mathematics",
"Engineering"
] | 950 | [
"Chemical engineering",
"Infinity",
"Mathematical objects",
"Flow regimes",
"Piping",
"Planes (geometry)",
"Fluid dynamics"
] |
42,992,666 | https://en.wikipedia.org/wiki/Command%20Query%20Responsibility%20Segregation | In information technology, Command Query Responsibility Segregation (CQRS) is a system architecture that extends the idea behind command–query separation (CQS) to the level of services. Such a system will have separate interfaces to send queries and to send commands. As in CQS, fulfilling a query request will only retrieve data and will not modify the state of the system (with some exceptions like logging access), while fulfilling a command request will modify the state of the system.
Many systems push the segregation to the data models used by the system. The models used to process queries are usually called read models and the models used to process commands write models.
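The separation of read and write models can be made concrete with a small sketch. The following Python example is illustrative only; the class and method names are assumptions, not a standard CQRS library API. Commands go through a handler that mutates the write model and deliberately returns nothing, while queries go through a service that only reads.

```python
# Minimal CQRS-style sketch (illustrative): commands mutate state, queries only read.
from dataclasses import dataclass

# --- Write side -----------------------------------------------------------
@dataclass
class AddItemCommand:
    order_id: str
    sku: str
    quantity: int

class OrderCommandHandler:
    def __init__(self, write_store: dict):
        self._store = write_store

    def handle(self, cmd: AddItemCommand) -> None:
        # Mutates state; returns nothing to the caller.
        items = self._store.setdefault(cmd.order_id, [])
        items.append({"sku": cmd.sku, "quantity": cmd.quantity})

# --- Read side ------------------------------------------------------------
class OrderQueryService:
    def __init__(self, read_store: dict):
        self._store = read_store

    def item_count(self, order_id: str) -> int:
        # Pure read: no state is modified.
        return sum(i["quantity"] for i in self._store.get(order_id, []))

# In this toy example both sides share one dict; real systems often keep
# separate, asynchronously synchronised read and write stores.
store: dict = {}
OrderCommandHandler(store).handle(AddItemCommand("o-1", "widget", 3))
print(OrderQueryService(store).item_count("o-1"))  # -> 3
```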
Although its origin is usually attributed to Greg Young in 2010, the precursor of CQRS appears to have been Udi Dahan, who in August 2008 published on his blog a training course that aimed to apply CQRS together with SOA, and who described the approach in more detail in December 2009 in the article Clarified CQRS.
References
External links
CQRS Journey by Microsoft patterns & practices
DDD/CQRS/Event Sourcing List
The CQRS Frequently Asked Questions
CQRS - a new architecture precept based on segregation of commands and queries
Systems architecture
Software architecture | Command Query Responsibility Segregation | [
"Engineering"
] | 251 | [
"Systems engineering",
"Design",
"Systems architecture"
] |
42,993,306 | https://en.wikipedia.org/wiki/Are%20Quanta%20Real%3F | Are Quanta Real?: A Galilean Dialogue (1973) is a book by Swiss-American physicist J.M. Jauch, in which the three main characters meet over the period of several days to discuss various interpretations and philosophical consequences of quantum mechanics. Are Quanta Real? was inspired by and written in the style of Galileo's Dialogue Concerning the Two Chief World Systems. In the book, Jauch "resurrects" Galileo's three characters, Salviati, Sagredo, and Simplicio, centuries after their deaths to resume their previous dialogue in light of new developments in natural philosophy, specifically, quantum mechanics. The three characters engage in a series of debates and dialectic discussions to better their understanding of quantum phenomena using a series of thought experiments.
In a foreword to the 1989 edition, Douglas Hofstadter explains how the book initially "electrified" him and offered a sense of encouragement while he was in the initial stages of writing Gödel, Escher, Bach: an Eternal Golden Braid.
Are Quanta Real? received positive reviews from scientific journals and popular science magazines, has inspired essays on philosophy and science and was a finalist for a National Book Award.
References
Popular physics books
Quantum mechanics
1973 books | Are Quanta Real? | [
"Physics"
] | 252 | [
"Quantum mechanics",
"Works about quantum mechanics"
] |
42,993,331 | https://en.wikipedia.org/wiki/Time-dependent%20viscosity | In continuum mechanics, time-dependent viscosity is a property of fluids whose viscosity changes as a function of time. The most common type of this is thixotropy, in which the viscosity of fluids under continuous shear decreases with time; the opposite is rheopecty, in which viscosity increases with time.
Thixotropic fluids
Some non-Newtonian pseudoplastic fluids show a time-dependent change in viscosity and a non-linear stress-strain behavior in which the longer the fluid undergoes shear stress, the lower its viscosity becomes. A thixotropic fluid is one that takes time to attain viscosity equilibrium when introduced to a step change in shear rate. When shearing in a thixotropic fluid exceeds a certain threshold, it results in a breakdown of the fluid's microstructure and the exhibition of a shear thinning property.
Certain gels or fluids that are thick (viscous) under static conditions will begin to thin and flow as they are shaken, agitated, or otherwise stressed. When stress ceases, they regress to their more viscous state after a passage of time. Some thixotropic fluids return to a gel state almost instantly, such as ketchup, and are called pseudoplastic fluids. Others, such as yogurt, take much longer and can become nearly solid. Many gels and colloids are thixotropic materials, exhibiting a stable form at rest but becoming increasingly fluid when agitated.
Examples and applications
Cytoplasm, synovial fluid (found in joints between some bones), and the ground substance in the human body are all thixotropic, as is semen. Some varieties of honey (e.g. heather honey) can exhibit thixotropy under certain conditions.
Some clays (including bentonite and montmorillonite) exhibit thixotropy, as do certain clay deposits found in caves (slow flowing underground streams tend to layer fine-grained sediment into mudbanks that initially appear dry and solid but then become moist and soupy when dug into or otherwise disturbed). Drilling muds used in geotechnical applications can be thixotropic.
Semi-solid casting processes such as thixomoulding use the thixotropic property of some alloys (mostly light metals, e.g. bismuth) to great advantage. Within certain temperature ranges and with appropriate preparation, these alloys can be injected into molds in a semi-solid state, resulting in a cast with less shrinkage and other superior properties than those cast in normal injection molding processes.
Solder pastes used in electronics manufacturing printing processes are thixotropic.
Many kinds of paints and inks (e.g. the plastisols used in silkscreen textile printing) exhibit thixotropic qualities. In many cases it is desirable for an ink or paint to flow sufficiently fast to form a uniform layer, but then resist further flow (which on vertical surfaces can result in sagging). Thixotropic inks that quickly regain a high viscosity are used in CMYK-type printing processes; this is necessary to protect the structure of the dots for accurate color reproduction.
Thread-locking fluid is a thixotropic adhesive that cures anaerobically.
Thixotropy has been proposed as a scientific explanation of blood liquefaction miracles such as that of Saint Januarius in Naples.
Other examples of thixotropic fluids are gelatine, shortening, cream, xanthan gum solutions, aqueous iron oxide gels, pectin gels, hydrogenated castor oil, carbon black suspension in molten tire rubber, many floc suspensions, and many colloidal suspensions.
Rheopectic fluids
Basically the mirror of thixotropy, rheopectic fluids are an even rarer class of non-Newtonian fluids that exhibit a time-dependent increase in viscosity; they thicken or solidify when shaken or agitated. The longer they undergo a shearing force, the higher their viscosity becomes, as the microstructure of a rheopectic fluid builds under continuous shearing (possibly due to shear-induced crystallization).
Examples and Applications
Examples of rheopectic fluids include some gypsum pastes, printer inks, and lubricants.
There is also aggressive ongoing research into rheopectic materials especially with regard to potential uses in shock absorption. In addition to obvious potential military applications, rheopectic padding and armor could offer significant advantages over alternative materials currently in use in a wide range of fields from sporting goods and athletic footwear to skydiving and automobile safety.
Additional insights into rheopecty and the possible uses of rheopectic fluids can be gained through further research into the physics of hysteresis.
See also
Fluid dynamics
Viscosity
Rheopecty: The longer the fluid is subjected to a shear force, the higher the viscosity. Time-dependent shear thickening behavior.
Thixotropy: The longer a fluid is subjected to a shear force, the lower its viscosity. It is a time-dependent shear thinning behavior.
Shear thickening: Similar to rheopecty, but independent of the passage of time.
Shear thinning: Similar to thixotropy, but independent of the passage of time.
Notes
References
J. R. Lister and H. A. Stone (1996). Time-dependent viscous deformation of a drop in a rapidly rotating denser fluid. Journal of Fluid Mechanics, 317, pp 275–299 doi:10.1017/S0022112096000754
Reiner, M., and Scott Blair, Rheology terminology, in Rheology, Vol. 4 pp. 461, (New York: Achedemic Press, 1967)
Viscosity | Time-dependent viscosity | [
"Physics"
] | 1,248 | [
"Physical phenomena",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties"
] |
42,993,804 | https://en.wikipedia.org/wiki/Rational%20series | In mathematics and computer science, a rational series is a generalisation of the concept of formal power series over a ring to the case when the basic algebraic structure is no longer a ring but a semiring, and the indeterminates adjoined are not assumed to commute. They can be regarded as algebraic expressions of a formal language over a finite alphabet.
Definition
Let R be a semiring and A a finite alphabet.
A non-commutative polynomial over A is a finite formal sum of words over A. They form a semiring $R\langle A\rangle$.
A formal series is an R-valued function c, on the free monoid A*, which may be written as $\sum_{w \in A^*} (c, w)\, w.$
The set of formal series is denoted $R\langle\langle A\rangle\rangle$ and becomes a semiring under the operations $(c + d, w) = (c, w) + (d, w)$ and $(c \cdot d, w) = \sum_{uv = w} (c, u)(d, v).$
A non-commutative polynomial thus corresponds to a function c on A* of finite support.
In the case when R is a ring, then this is the Magnus ring over R.
If L is a language over A, regarded as a subset of A*, we can form the characteristic series of L as the formal series $\sum_{w \in L} w,$
corresponding to the characteristic function of L.
In $R\langle\langle A\rangle\rangle$ one can define an operation of iteration, for series whose constant term is zero, expressed as $c^{*} = \sum_{n \ge 0} c^{n}$
and formalised as $(c^{*}, w) = \sum_{k \ge 0} \; \sum_{w = u_1 u_2 \cdots u_k} (c, u_1)(c, u_2) \cdots (c, u_k).$
The rational operations are the addition and multiplication of formal series, together with iteration.
A rational series is a formal series obtained by rational operations from the non-commutative polynomials in $R\langle A\rangle$.
See also
Formal power series
Rational language
Rational set
Hahn series (Malcev–Neumann series)
Weighted automaton
References
Further reading
Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28.
Sakarovitch, J. Rational and Recognisable Power Series. Handbook of Weighted Automata, 105–174 (2009).
W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997
Abstract algebra
Formal languages
Mathematical series | Rational series | [
"Mathematics"
] | 424 | [
"Sequences and series",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Mathematical logic",
"Formal languages",
"Abstract algebra",
"Algebra"
] |
74,479,627 | https://en.wikipedia.org/wiki/Reversible%20Michaelis%E2%80%93Menten%20kinetics | Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. Enzymes act on small molecules called substrates, which an enzyme converts into products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. The study of how fast an enzyme can transform a substrate into a product is called enzyme kinetics.
The rate of many chemical reactions shows a linear response as a function of the concentration of substrate molecules. Enzymes, however, display a saturation effect: as the substrate concentration is increased, the reaction rate approaches a maximum value. Standard approaches to describing this behavior are based on models developed by Michaelis and Menten as well as by Briggs and Haldane. Most elementary formulations of these models assume that the enzyme reaction is irreversible, that is, product is not converted back to substrate. However, this is unrealistic when describing the kinetics of enzymes in an intact cell, because product is present. Reversible Michaelis–Menten kinetics, using the reversible form of the Michaelis–Menten equation, is therefore important when developing computer models of cellular processes involving enzymes.
In enzyme kinetics, the Michaelis–Menten rate law, which describes the conversion of one substrate to one product, is commonly depicted in its irreversible form as: $v = \frac{V_{\max}\, s}{K_m + s}$
where $v$ is the reaction rate, $V_{\max}$ is the maximum rate when saturating levels of the substrate are present, $K_m$ is the Michaelis constant and $s$ the substrate concentration.
In practice, this equation is used to predict the rate of reaction when little or no product is present. Such situations arise in enzyme assays. When used to model enzyme rates in vivo, for example, to model a metabolic pathway, this representation is inadequate because under these conditions product is present. As a result, when building computer models of metabolism or other enzymatic processes, it is better to use the reversible form of the Michaelis–Menten equation.
To model the reversible form of the Michaelis–Menten equation, the following reversible mechanism is considered:
$\mathrm{E} + \mathrm{S} \; \underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}} \; \mathrm{ES} \; \underset{k_{-2}}{\overset{k_{2}}{\rightleftharpoons}} \; \mathrm{E} + \mathrm{P}$
To derive the rate equation, it is assumed that the concentration of enzyme-substrate complex is at steady-state, that is $\frac{d(es)}{dt} = 0$.
Following current literature convention, we will be using lowercase Roman lettering to indicate concentrations (this avoids cluttering the equations with square brackets). Thus indicates the concentration of enzyme-substrate complex, ES.
The net rate of change of product (which is equal to $v = dp/dt$) is given by the difference in forward and reverse rates:
$v = k_2\, es - k_{-2}\, e\, p$
The total level of enzyme moiety is the sum total of free enzyme and enzyme-complex, that is $e_t = e + es$. Hence the level of free enzyme is given by the difference between the total enzyme concentration and the concentration of complex, that is: $e = e_t - es$
Using mass conservation we can compute the rate of change of $es$ using the balance equation:
$\frac{d(es)}{dt} = k_1 (e_t - es)\, s + k_{-2} (e_t - es)\, p - (k_{-1} + k_2)\, es = 0$
where $e$ has been replaced using $e = e_t - es$. This leaves $es$ as the only unknown. Solving for $es$ gives:
$es = \frac{e_t\, (k_1 s + k_{-2} p)}{k_1 s + k_{-2} p + k_{-1} + k_2}$
Inserting $es$ into the rate equation and rearranging gives:
$v = \frac{e_t\, (k_1 k_2\, s - k_{-1} k_{-2}\, p)}{k_1 s + k_{-2} p + k_{-1} + k_2}$
The following substitutions are now made:
$V_f = k_2\, e_t, \qquad K_S = \frac{k_{-1} + k_2}{k_1}$
and
$V_r = k_{-1}\, e_t, \qquad K_P = \frac{k_{-1} + k_2}{k_{-2}}$
after rearrangement, we obtain the reversible Michaelis–Menten equation in terms of four constants:
$v = \frac{V_f\, s / K_S - V_r\, p / K_P}{1 + s / K_S + p / K_P}$
Haldane relationship
This is not the usual form in which the equation is used. Instead, the equation is set to zero, meaning $v = 0$, indicating we are at equilibrium and the concentrations $s$ and $p$ are now equilibrium concentrations, hence:
$0 = \frac{V_f\, s_{eq} / K_S - V_r\, p_{eq} / K_P}{1 + s_{eq} / K_S + p_{eq} / K_P}$
Rearranging this gives the so-called Haldane relationship:
$K_{eq} = \frac{p_{eq}}{s_{eq}} = \frac{V_f\, K_P}{V_r\, K_S}$
The advantage of this is that one of the four constants can be eliminated and replaced with the equilibrium constant, which is more likely to be known. In addition, it allows one to make a useful interpretation in terms of the thermodynamic and saturation effects (see next section). Most often the reverse maximum rate is eliminated to yield the final equation:
$v = \frac{(V_f / K_S)\, \left( s - p / K_{eq} \right)}{1 + s / K_S + p / K_P}$
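A minimal numerical sketch (not from the original text) of the final equation above, using the constant definitions given earlier; the parameter values are arbitrary illustrative assumptions. It also checks that the net rate vanishes when p/s equals the equilibrium constant, as the Haldane relationship requires.

```python
# Reversible Michaelis-Menten rate with the reverse Vmax eliminated via the
# Haldane relationship. Parameter values are assumed for illustration only.
def reversible_mm(s, p, Vf, Ks, Kp, Keq):
    """Net rate v for substrate s and product p."""
    return (Vf / Ks) * (s - p / Keq) / (1.0 + s / Ks + p / Kp)

Vf, Ks, Kp, Keq = 10.0, 0.5, 2.0, 4.0    # assumed values

# Far from equilibrium the net rate is positive (substrate -> product):
print(reversible_mm(s=1.0, p=0.1, Vf=Vf, Ks=Ks, Kp=Kp, Keq=Keq))

# At equilibrium (p/s = Keq) the net rate is zero:
print(reversible_mm(s=1.0, p=4.0, Vf=Vf, Ks=Ks, Kp=Kp, Keq=Keq))  # -> 0.0
```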
Decomposition of the rate law
The reversible Michaelis–Menten law, as with many enzymatic rate laws, can be decomposed into a capacity term, a thermodynamic term, and an enzyme saturation level. This is more easily seen when we write the reversible rate law as:
$v = V_f \cdot \left( 1 - \frac{\Gamma}{K_{eq}} \right) \cdot \frac{s / K_S}{1 + s / K_S + p / K_P}$
where $V_f$ is the capacity term, $\left( 1 - \Gamma / K_{eq} \right)$ the thermodynamic term and $\frac{s / K_S}{1 + s / K_S + p / K_P}$
the saturation term. The separation can be even better appreciated if we look at the elasticity coefficient . According to elasticity algebra, the elasticity of a product is the sum of the sub-term elasticities, that is:
Hence the elasticity of the reversible Michaelis–Menten rate law can easily be shown to be:
Since the capacity term is a constant, the first elasticity is zero. The thermodynamic term can be easily shown to be:
where $\rho$ is the disequilibrium ratio, which equals $\Gamma / K_{eq}$, and $\Gamma = p / s$ is the mass–action ratio.
The saturation term becomes:
References
Enzyme kinetics
Chemical kinetics
Catalysis
Biochemistry methods
Metabolism
Mathematical and theoretical biology
Systems biology | Reversible Michaelis–Menten kinetics | [
"Chemistry",
"Mathematics",
"Biology"
] | 1,047 | [
"Catalysis",
"Biochemistry methods",
"Chemical reaction engineering",
"Enzyme kinetics",
"Mathematical and theoretical biology",
"Applied mathematics",
"Cellular processes",
"Biochemistry",
"Chemical kinetics",
"Metabolism",
"Systems biology"
] |
62,989,733 | https://en.wikipedia.org/wiki/Dental%20cermet | Dental cermets, or silver cermets, are a type of restorative material dentists use to fill tooth cavities.
Silver cermets were created to improve the wear resistance and hardness of another type of filling material, glass ionomer cements, through the addition of silver. While the incorporation of silver achieved this, cermets have poorer aesthetics, appearing metallic rather than white. Cermets also have a similar compressive strength, flexural strength, and solubility as glass ionomer cements, some of the main limiting factors for both materials. Therefore, silver cermets are not a popular choice of restorative material.
Composition
Silver cermets can come in two forms:
Two bottles, one containing a powder and the other containing a liquid. The powder and liquid must be measured out individually and mixed together by hand.
A capsule in which the cermet components have already been proportioned correctly. The capsule is shaken in an amalgamator to mix the components.
The powder contains silver and fluoroaluminosilicate glass particles; the silver and glass particles may be fused together or separate. Other components in the powder include titanium oxide which acts as a whitening agent to improve aesthetics.
The liquid is an aqueous solution of a co-polymer of either 37% acrylic or maleic acid, or both, and 9% tartaric acid.
When the liquid and powder are mixed, an acid-base reaction occurs, initiating setting of the cermet.
Properties
Fluoride release
Like glass ionomer cements and dental compomers, silver cermets are able to release fluoride over a sustained period of time. However, the evidence suggests the fluoride releasing abilities of cermets are poorer than glass ionomer cements.
Adhesion
Cermets are able to bond to tooth tissue similar to glass ionomer cements. Like glass ionomer cements, it is recommended that the tooth tissue is conditioned with polyacrylic acid (a weak acid) before application.
Wear resistance
There is evidence that cermets have poor wear resistance when used to restore a large surface area. Therefore, it is advisable to limit their use to small restorations, particularly Class I cavities (see the Greene Vardiman Black classification of dental restorations).
Radiopacity
The added silver imparts radio-opacity to cermets which aids radiographic detection of recurrent caries at a future date.
Clinical application
Silver cermets have performed poorly in clinical practice despite their theorised advantages over glass ionomer cements. As such, they are no longer a popular choice of material and it is unclear whether cermets will continue to be used.
Summary
References
Dental materials | Dental cermet | [
"Physics"
] | 579 | [
"Materials",
"Dental materials",
"Matter"
] |
62,990,925 | https://en.wikipedia.org/wiki/B%C3%A9la%20Paizs | Béla Paizs is a Hungarian bioinformatician.
His research interests revolve around fragmentation of peptides in mass spectrometry. In top-down proteomics, the interpretation of fragment ion spectra of peptides is a crucial step. The research of Béla Paizs has led to detailed characterization of peptide fragment ion structures and dissociation mechanisms, and has revealed underlying fundamental physical and chemical principles. His work was recognized with the American Society for Mass Spectrometry Biemann Medal in 2011.
Paizs received his Ph.D. in Chemistry in 1998 from Eötvös University in Budapest and graduated with summa cum laude honors. He worked as postdoctoral fellow there and later at the DKFZ in Heidelberg. He held a position as group leader since 2004 at the German Cancer Research Center in Heidelberg until 2013 when he moved to Bangor University.
References
21st-century chemists
Mass spectrometrists
Living people
Year of birth missing (living people) | Béla Paizs | [
"Physics",
"Chemistry"
] | 203 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
71,651,194 | https://en.wikipedia.org/wiki/Silver%27s%20dichotomy | In descriptive set theory, a branch of mathematics, Silver's dichotomy (also known as Silver's theorem) is a statement about equivalence relations, named after Jack Silver.
Statement and history
A relation is said to be coanalytic if its complement is an analytic set. Silver's dichotomy is a statement about the equivalence classes of a coanalytic equivalence relation, stating any coanalytic equivalence relation either has countably many equivalence classes, or else there is a perfect set of reals that are each incomparable to each other. In the latter case, there must be continuum many equivalence classes of the relation.
The first published proof of Silver's dichotomy was by Jack Silver, appearing in 1980 in order to answer a question posed by Harvey Friedman. One application of Silver's dichotomy in recursive set theory is that, since equality restricted to a set is coanalytic, there is no Borel equivalence relation such that , where denotes Borel equivalence relation. Some later results motivated by Silver's dichotomy founded a new field known as invariant descriptive set theory, which studies definable equivalence relations. Silver's dichotomy also admits several weaker recursive versions, which have been compared in strength with subsystems of second-order arithmetic from reverse mathematics, while Silver's dichotomy itself is provably equivalent to over .
References
Set theory | Silver's dichotomy | [
"Mathematics"
] | 295 | [
"Mathematical logic",
"Set theory"
] |
71,652,129 | https://en.wikipedia.org/wiki/Weak%20stability%20boundary | Weak stability boundary (WSB), including low-energy transfer, is a concept introduced by Edward Belbruno in 1987. The concept explained how a spacecraft could change orbits using very little fuel.
Weak stability boundary is defined for the three-body problem. This problem considers the motion of a particle P of negligible mass moving with respect to two larger bodies, P1, P2, modeled as point masses, where these bodies move in circular or elliptical orbits with respect to each other, and P2 is smaller than P1.
The force between the three bodies is the classical Newtonian gravitational force. For example, P1 is the Earth, P2 is the Moon and P is a spacecraft; or P1 is the Sun, P2 is Jupiter and P is a comet, etc. This model is called the restricted three-body problem. The weak stability boundary defines a region about P2 where P is temporarily captured. This region is in position-velocity space. Capture means that the Kepler energy between P and P2 is negative. This is also called weak capture.
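Weak capture can be checked directly from the definition above: P is weakly captured by P2 when its two-body Kepler energy with respect to P2 is negative. The following Python sketch is illustrative only; the mass, positions, and velocities are assumed toy values, not data from any mission or from the article.

```python
# Illustrative check of weak capture: sign of the two-body Kepler energy of P
# relative to P2 (specific energy v^2/2 - G*m2/r). Toy values assumed.
import numpy as np

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
m2 = 7.35e22                      # mass of P2 (roughly the Moon), kg

def kepler_energy(rel_pos, rel_vel):
    """Specific two-body energy of P relative to P2."""
    r = np.linalg.norm(rel_pos)
    v = np.linalg.norm(rel_vel)
    return 0.5 * v**2 - G * m2 / r

# Toy state vectors of P relative to P2 (metres, metres per second):
slow_pass = kepler_energy(np.array([2.0e7, 0.0, 0.0]), np.array([0.0, 300.0, 0.0]))
fast_pass = kepler_energy(np.array([2.0e7, 0.0, 0.0]), np.array([0.0, 900.0, 0.0]))

print("slow pass:", "weakly captured" if slow_pass < 0 else "not captured")
print("fast pass:", "weakly captured" if fast_pass < 0 else "not captured")
```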
Background
This boundary was defined for the first time by Edward Belbruno of Princeton University in 1987. He described a low-energy transfer which would allow a spacecraft to change orbits using very little fuel. It was defined for motion about the Moon (P2) with P1 = Earth. It is defined algorithmically by monitoring cycling motion of P about the Moon and finding the region where cycling motion transitions between stable and unstable after one cycle. Stable motion means P can completely cycle about the Moon for one cycle relative to a reference section, starting in weak capture. P needs to return to the reference section with negative Kepler energy. Otherwise, the motion is called unstable, where P does not return to the reference section within one cycle or, if it returns, has non-negative Kepler energy.
The set of all transition points about the Moon comprises the weak stability boundary, W. The motion of P is sensitive or chaotic as it moves about the Moon within W. A mathematical proof that the motion within W is chaotic was given in 2004. This is accomplished by showing that the set W about an arbitrary body P2 in the restricted three-body problem contains a hyperbolic invariant set of fractional dimension consisting of the infinitely many intersections of hyperbolic invariant manifolds.
The weak stability boundary was originally referred to as the fuzzy boundary. This term was used since the transition between capture and escape defined in the algorithm is not sharply defined and is limited by numerical accuracy. This defines a "fuzzy" location for the transition points. It is also due to the inherent chaos in the motion of P near the transition points. It can be thought of as a fuzzy chaos region. As is described in an article in Discover magazine, the WSB can be roughly viewed as the fuzzy edge of a region, referred to as a gravity well, about a body (the Moon), where its force of gravity becomes small enough to be dominated by the force of gravity of another body (the Earth) and the motion there is chaotic.
A much more general algorithm defining was given in 2007. It defines relative to -cycles, where = 1,2,3,..., yielding boundaries of order n. This gives a much more complex region consisting of the union of all the weak stability boundaries of order n. This definition was explored further in 2010. The results suggested that W consists, in part, of the hyperbolic network of invariant manifolds associated to the Lyapunov orbits about the L1, L2 Lagrange points near P2. The explicit determination of the set about P2 = Jupiter, where P1 is the Sun, is described in "Computation of Weak Stability Boundaries: Sun-Jupiter Case". It turns out that a weak stability region can also be defined relative to the larger mass point, P1. A proof of the existence of the weak stability boundary about P1 was given in 2012, but a different definition is used. The chaos of the motion is analytically proven in "Geometry of Weak Stability Boundaries". The boundary is studied in "Applicability and Dynamical Characterization of the Associated Sets of the Algorithmic Weak Stability Boundary in the Lunar Sphere of Influence".
Applications
There are a number of important applications for the weak stability boundary (WSB). Since the WSB defines a region of temporary capture, it can be used, for example, to find transfer trajectories from the Earth to the Moon that arrive at the Moon within the WSB region in weak capture, which is called ballistic capture for a spacecraft. No fuel is required for capture in this case. This was numerically demonstrated in 1987. This is the first reference for ballistic capture for spacecraft and definition of the weak stability boundary. The boundary was operationally demonstrated to exist in 1991 when it was used to find a ballistic capture transfer to the Moon for Japan's Hiten spacecraft. Other missions have used the same transfer type as Hiten, including Grail, Capstone, Danuri, Hakuto-R Mission 1 and SLIM. The WSB for Mars is studied in "Earth-Mars Transfers with Ballistic Capture" and ballistic capture transfers to Mars are computed. The BepiColombo mission of ESA will achieve ballistic capture at the WSB of Mercury in 2025.
The WSB region can be used in the field of Astrophysics. It can be defined for stars within open star clusters. This is done in "Chaotic Exchange of Solid Material Between Planetary Systems: Implications for the Lithopanspermia Hypothesis" to analyze the capture of solid material that may have arrived on the Earth early in the age of the Solar System to study the validity of the lithopanspermia hypothesis.
Numerical explorations of trajectories for P starting in the WSB region about P2 show that after the particle P escapes P2 at the end of weak capture, it moves about the primary body, P1, in a near resonant orbit, in resonance with P2 about P1. This property was used to study comets that move in orbits about the Sun in orbital resonance with Jupiter, which change resonance orbits by becoming weakly captured by Jupiter. An example of such a comet is 39P/Oterma.
This property of change of resonance of orbits about P1 when P is weakly captured by the WSB of P2 has an interesting application to the field of quantum mechanics to the motion of an electron about the proton in a hydrogen atom. The transition motion of an electron about the proton between different energy states described by the Schrödinger equation is shown to be equivalent to the change of resonance of P about P1 via weak capture by P2 for a family of transitioning resonance orbits. This gives a classical model using chaotic dynamics with Newtonian gravity for the motion of an electron.
References
Further reading
Belbruno, E.; Green, J (2022). “When Leaving the Solar System: Dark Matter Makes a Difference”, Monthly Notices of the Royal Astronomical Society, V510, 5154.
Belbruno, Edward (2007) Fly Me to the Moon. Princeton University Press. ISBN 9780691128221
Adler, Robert (Nov. 30, 2000) “To the Planets on a Shoe String”, Nature, V408, No. 6812, 510-512
Osserman, J (April 2005) “Mathematics of the Heavens”, Notices of the American Mathematical Society, V52, No. 4
Ross, Shane (April 2008) Book Review of Fly me to the Moon, Notices of American Mathematical Society, Volume 55, No. 4, 478-430
Casselman, R (April 2008). “Chaos in the Weak Stability Boundary”, Cover of Notices of American Mathematical Society, p549
Mathematics of Planet Earth "Low Fuel Spacecraft Trajectories to the Moon"
Physics theorems
Algorithms | Weak stability boundary | [
"Physics",
"Mathematics"
] | 1,598 | [
"Equations of physics",
"Algorithms",
"Mathematical logic",
"Applied mathematics",
"Physics theorems"
] |
54,474,095 | https://en.wikipedia.org/wiki/Contained%20earth | Contained earth (CE) is a structurally designed natural building material that combines containment, inexpensive reinforcement, and strongly cohesive earthen walls. CE is earthbag construction that can be calibrated for several seismic risk levels based on building soil strength and plan standards for adequate bracing.
There is a recognized need for structural understanding of alternative building materials. Construction guidelines for CE are currently under development, based on the New Zealand's performance-based code for adobe and rammed earth.
CE is differentiated from contained gravel (CG) or contained sand (CS) by the use of damp, tamped, cured cohesive fill. CE can be modular, built in poly-propylene rice bag material containers, or solid, built in mesh tubing that allows earthen fill to solidify between courses.
CG, filled with pumice or ordinary gravel and/ or small stones, is often used as water-resistant base walls under CE, which also provides an effective capillary break. Soil bags used mostly in horizontal applications by civil engineers contain loose fill which includes both CG and CS. CG courses, like soil bags, may contribute base isolation and/or vibration damping qualities, although out-of-plane strength needs research.
For clarity, earthbag built with a low cohesion fill, or filled with dry soil that does not solidify, is not CE but CS. Uncured CE also performs structurally like CS.
Earthbag variations
Builders used to working without engineers are proud of earthbag's unlimited variations. Few trainers discuss risk levels of building sites, or recommend accurate tests of soil strength, even though soil strength is a key factor of improved seismic performance for earthen walls.
The need for and use of metal components are disputed, including rebar hammered into walls and barbed wire between courses, although the static friction coefficient of smooth bag-to-bag surfaces of heavy modular CE walls is 0.4 with no adhesion.
Engineering knowledge of earthbag has been growing. More is known about the performance of walls made with sand or dry or uncured soil than about the overwhelming majority of earthbag buildings, which have used damp, cohesive soil fill. Reports based on tests of soil bags and loose or granular fill (or uncured fill) assume that soil strength is less important to wall strength than bag fabric strength. However, shear tests show clearly that stronger cured, cohesive fill increases contained earth wall strength substantially.
Earthbag for high risk environments
Earthbag developed gradually without structural analysis, first for small domes, then for vertical wall buildings of many shapes. Although domes passed structural testing in California, no structural information was extracted from tests of the inherently stable shapes. Builders borrowed guidelines for adobe to recommend plan details, but code developed in low seismic risk New Mexico does not address issues for higher risk areas. California's seismic risk levels are almost three times as high as New Mexico's, and risk worldwide rises much higher.
Earthbag is often tried after disasters in the developing world, including Sri Lanka's 2004 tsunami, Haiti's 2010 earthquake and Nepal's 2015 earthquake.
CE walls fail in shear tests when barbs flex or bend back or (with weak soil fill) by chipping cured bag fill. CS walls or uncured CE walls fail differently, by slitting bag fabric as barbs move through loose fill.
Because no earthbag buildings were seriously damaged by seismic motion up to 0.8 g in Nepal's 2015 quakes, Nepal's building code recognizes earthbag, although the code does not discuss soil strengths or improved reinforcement. Nepal requires buildings to resist 1.5 g risk although hazard maps show higher values. Better trainers assume the use of cohesive soil and barbed wire, and recommend vertical rebar, buttresses, and bond beams, but rule of thumb earthbag techniques should be differentiated from contained earth that follows more complete guidelines.
CE compared to New Zealand wall strengths
Earthquake damage results confirm the validity of New Zealand's detailed standards for non-engineered adobe and rammed earth which allow unreinforced buildings to 0.6 g force levels.
Although earthbag without specific guidelines may often be this strong, conventional adobe can have severe damage at levels below 0.2 g forces. Non-traditional earthbag built with barbed wire, barely cohesive soil and no rebar can have half the shear strength of NZ's unreinforced adobe. Somewhere between 0.3 and 0.6 g forces, CE guidelines become important.
Based on static shear testing (Stouter, P. May 2017):
The following approximate guidelines assume a single story of wide walls with 2 strands of 4 point barbed wire per course. Check NZS 4299 for bracing wall spacing and size of bracing walls and/ or buttresses. Vertical rebar must be spaced on center average and embedded in wall fill while damp. Follow NZS 4299 restrictions on building size, site slope, climate, and uses.
Discuss foundation concerns with an engineer, since NZS 4299 assumes a full reinforced concrete footing.
For comparison to NZS 4299 the following risk levels are based roughly on 0.2 second spectral acceleration (Ss) from 2% probability of exceedance in 50 years. Builders may refer to the Unified Facilities Handbook online for these values for some cities worldwide. These risk levels are based on ultimate strength, but deformation limits may require stiffer detailing or lower risk levels.
Medium strength soil: unconfined compressive strength
±0.75 g risk if 2 separate pieces of rebar are inserted, overlapped
1.6 g risk if an entire internal rebar extends from base to bond beam
Strong soil: unconfined compressive strength
±1.6 g risk if 2 separate pieces of rebar are inserted, overlapped
±2.1 g risk if a single rebar extends from base to bond beam
Additional research and engineering analysis is needed to create valid CE manuals.
References
Natural materials
Building materials | Contained earth | [
"Physics",
"Engineering"
] | 1,206 | [
"Natural materials",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
44,431,852 | https://en.wikipedia.org/wiki/Cyclooctatetraenide%20anion | In chemistry, the cyclooctatetraenide anion or cyclooctatetraenide, more precisely cyclooctatetraenediide, is an aromatic species with a formula of [C8H8]2− and abbreviated as COT2−. It is the dianion of cyclooctatetraene. Salts of the cyclooctatetraenide anion can be stable, e.g., Dipotassium cyclooctatetraenide or disodium cyclooctatetraenide. More complex coordination compounds are known as cyclooctatetraenide complexes, such as the actinocenes.
The structure is a planar symmetric octagon stabilized by resonance, meaning each carbon atom bears a formal charge of −1/4. The length of the bond between carbon atoms is 1.432 Å. There are 10 π electrons. The structure can serve as a ligand with various metals.
List of salts
See also
Tropylium ion
Cyclopentadienyl anion
References
Simple aromatic rings
Anions
Non-benzenoid aromatic carbocycles | Cyclooctatetraenide anion | [
"Physics",
"Chemistry"
] | 237 | [
"Ions",
"Matter",
"Anions"
] |
44,433,500 | https://en.wikipedia.org/wiki/Gas%20hydrate%20stability%20zone | Gas hydrate stability zone, abbreviated GHSZ, also referred to as methane hydrate stability zone (MHSZ) or hydrate stability zone (HSZ), refers to a zone and depth of the marine environment at which methane clathrates naturally exist in the Earth's crust.
Description
Gas hydrate stability primarily depends upon temperature and pressure, however other variables such as gas composition and ionic impurities in water influence stability boundaries. The existence and depth of a hydrate deposit is often indicated by the presence of a bottom-simulating reflector (BSR). A BSR is a seismic reflection indicating the lower limit of hydrate stability in sediments due to the different densities of hydrate saturated sediments, normal sediments and those containing free gas.
Limits
The upper and lower limits of the HSZ, as well as its thickness, depend upon the local conditions in which the hydrate occurs. The conditions for hydrate stability generally restrict natural deposits to polar regions and deep oceanic regions. In polar regions, due to low temperatures, the upper limit of the hydrate stability zone occurs at a depth of approximately 150 meters. The maximal depth of the hydrate stability zone is limited by the geothermal gradient. Along continental margins the average thickness of the HSZ is about 500 m. The upper limit in oceanic sediments occurs when bottom water temperatures are at or near 0 °C, and at a water depth of approximately 300 meters. The lower limit of the HSZ is bounded by the geothermal gradient. As depth below seafloor increases, the temperature eventually becomes too high for hydrates to exist. In areas of high geothermal heat flow, the lower limit of the HSZ may become shallower, therefore decreasing the thickness of the HSZ. Conversely, the thickest hydrate layers and widest HSZ are observed in areas of low geothermal heat flow. Generally, the maximum depth of HSZ extension is 2000 meters below the Earth's surface. Using the location of a BSR, as well as the pressure-temperature regimen necessary for hydrate stability, the HSZ may be used to determine geothermal gradients.
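The controlling idea, that the base of the HSZ sits where the local geotherm first exceeds the hydrate stability temperature, can be sketched in a few lines of Python. The sketch assumes a linear geotherm; the function illustrative_stability_temperature is a placeholder with made-up coefficients, not a published methane-hydrate phase-boundary correlation, and a real study would substitute one appropriate to the local gas composition and salinity.

import math

def hsz_base(seafloor_depth_m, bottom_water_temp_c, gradient_c_per_km,
             stability_temperature, max_subseafloor_m=3000.0, step_m=1.0):
    # Walk down from the seafloor until the geotherm exceeds the stability
    # temperature; that depth is the base of the hydrate stability zone.
    z = 0.0
    while z < max_subseafloor_m:
        t_local = bottom_water_temp_c + gradient_c_per_km * z / 1000.0
        if t_local > stability_temperature(seafloor_depth_m + z):
            return z
        z += step_m
    return None  # geotherm never crosses the stability curve in this window

def illustrative_stability_temperature(total_depth_m):
    # Placeholder phase boundary (illustrative coefficients only): the stability
    # temperature rises with hydrostatic pressure, i.e. with total depth.
    return 10.0 * math.log10(total_depth_m / 100.0)

print(hsz_base(2000.0, 2.0, 35.0, illustrative_stability_temperature))
# 334.0 metres below the seafloor for these illustrative numbers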
Transport
If processes such as sedimentation or subduction transport hydrates below the lower limit of the HSZ, the hydrate becomes unstable and disassociates, releasing gas. This free gas may become trapped beneath the overlying hydrate layer, forming gas pockets, or reservoirs. The pressure from the presence of gas reservoirs impacts the stability of the hydrate layer. If this pressure is substantially changed, the stability of the methane layer above will be altered and may result in significant destabilization and disassociation of the hydrate deposit. Landslides of rock or sediment above the hydrate stability zone may also impact the hydrate stability. A sudden decrease in pressure can release gasses or destabilize portions of the hydrate deposit. Changing atmospheric and oceanic temperatures may impact the presence and depth of the hydrate stability zone; however, it is still uncertain to what extent. In oceanic sediments, increasing pressure due to a rise in sea level may offset some of the impact of increasing temperature upon the hydrate stability equilibrium.
References
Clathrate hydrates
Hydrates
Hydrocarbons
Methane
Oceanographical terminology
Physical oceanography | Gas hydrate stability zone | [
"Physics",
"Chemistry"
] | 669 | [
"Hydrocarbons",
"Applied and interdisciplinary physics",
"Methane",
"Hydrates",
"Organic compounds",
"Physical oceanography",
"Clathrates",
"Clathrate hydrates",
"Greenhouse gases"
] |
44,436,862 | https://en.wikipedia.org/wiki/Quadrature%20based%20moment%20methods | Quadrature-based moment methods (QBMM) are a class of computational fluid dynamics (CFD) methods for solving the equations of kinetic theory and are well suited to simulating phases such as rarefied gases or dispersed phases of a multiphase flow. The smallest "particle" entities which are tracked may be molecules of a single phase or granular "particles" such as aerosols, droplets, bubbles, precipitates, powders, dust, soot, etc. Moments of the Boltzmann equation are solved to predict the phase behavior as a continuous (Eulerian) medium, and the approach is applicable for arbitrary Knudsen number and arbitrary Stokes number. Source terms for collision models such as Bhatnagar-Gross-Krook (BGK) and models for evaporation, coalescence, breakage, and aggregation are also available. By retaining a quadrature approximation of a probability density function (PDF), a set of abscissas and weights retain the physical solution and allow for the construction of moments that generate a set of partial differential equations (PDE's). QBMM has shown promising preliminary results for modeling granular gases or dispersed phases within carrier fluids and offers an alternative to Lagrangian methods such as Discrete Particle Simulation (DPS). The Lattice Boltzmann Method (LBM) shares some strong similarities in concept, but it relies on fixed abscissas whereas quadrature-based methods are more adaptive. Additionally, the Navier–Stokes equations (N-S) can be derived from the moment method approach.
Method
QBMM is a relatively new simulation technique for granular systems and has attracted interest from researchers in computational physics, chemistry, and engineering. QBMM is similar to traditional CFD methods, which solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy) numerically, but QBMM accomplishes this by modeling the fluid as consisting of fictive particles, or nodes, that constitute a discretized PDF. A node consists of an abscissa/weight pair and the weight defines the probability of finding a particle that has the value of its abscissa. This quadrature approximation may also be adaptive, meaning that the number of nodes can increase/decrease to accommodate appropriately complex/simple PDF's. Due to its statistical nature, QBMM has several advantages over other conventional Lagrangian methods, especially in dealing with complex boundaries, incorporating microscopic interactions (such as collisions), parallelization of the algorithm, and computational costs being largely independent of particle population. The numerical methods for solving the system of partial differential equations can be interpreted as the propagation (with a flux term) and interactions (source terms) of fictitious particle probabilities in an Eulerian framework.
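As a concrete picture of these fictive particles, the following minimal Python sketch (weights, abscissas and units chosen purely for illustration) represents a PDF by three nodes and constructs its raw moments, each of which is just a weighted sum over the abscissas:

import numpy as np

weights = np.array([0.2, 0.5, 0.3])     # probability carried by each node
abscissas = np.array([0.5, 1.0, 2.0])   # e.g. particle sizes, arbitrary units

def moment(k, w, x):
    # k-th raw moment of the discretized PDF: m_k = sum_i w_i * x_i**k
    return np.sum(w * x**k)

m0 = moment(0, weights, abscissas)  # zeroth moment: total weight (here 1.0)
m1 = moment(1, weights, abscissas)  # first moment: the mean, since m0 = 1
print(m0, m1)                       # 1.0 and 1.2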
Implementations
QBMM is a family of methods encompassing a variety of models, some of which are designed specifically to handle PDF's of passive variables, and others more complex, capable of multidimensional PDF's of active variables (such as velocity). Note that the full representation of the PDF is n(t, x, ξ), where the parameters t and x represent the external coordinates of time and space respectively, while the internal coordinate vector, ξ, may contain any additional desired degrees of freedom to represent the particles, e.g., temperature, diameter, velocity, angular velocity, etc.
The applicability of these methods depends upon which particle parameters are important (velocity, diameter, temperature, etc.), and importantly upon two dimensionless numbers of the phase: the Knudsen number and the Stokes number. For example, a monokinetic fluid will have a single velocity vector at each point in space; therefore, its corresponding velocity PDF is a Dirac Delta function at every point in space. Similarly, a monodisperse phase has a constant diameter for all particles, so that the diameter PDF is also a Delta function at every point in space. In those cases a PDF is superfluous and the phase can instead be modeled by just tracking a single value corresponding to the abscissa of the Delta function, and the Navier-Stokes equations may be far more practical to implement.
QMOM
One of the earliest applications of QBMM was the Quadrature Method of Moments (QMOM) by McGraw in 1997. This method was used mainly for aerosol sprays and droplets by tracking their diameters through phenomenon such as breakage, coalescence, evaporation, etc.
DQMOM
Direct QMOM (DQMOM) is a mathematical simplification of QMOM that works best for dispersed phases with low Stokes numbers. DQMOM is a very efficient model because the weights and abscissas appear directly in the transport equations, alleviating any need for moment construction and inversion. When dealing with low inertia particles where tracking a few passive variables is of concern, DQMOM is very advantageous; however, because a large set of unknowns (abscissas and weights) is solved simultaneously, the matrix inversions cannot guarantee realizable results in some circumstances, even with expensive iterative processes.
CQMOM
In 2011 the Conditional QMOM (CQMOM) method was published by Yuan and Fox, and this comprehensive method is applicable to modeling very general problems by tracking moments of the PDF with an arbitrary number of internal parameters. This requires a moment construction and inversion process that converts the set of moments into nodes, and vice versa. The inversion process is the main source of computational costs, but overall CQMOM offers realizable results that DQMOM cannot guarantee.
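One standard way to perform the inversion for a one-dimensional internal coordinate is the Wheeler (modified Chebyshev) algorithm, which turns 2N moments into N weights and abscissas through a tridiagonal Jacobi matrix. The NumPy sketch below is a minimal illustration under that assumption, not code taken from the CQMOM papers, and it omits the adaptivity and safeguards a production moment-inversion routine would need.

import numpy as np

def wheeler_inversion(moments):
    # Recover N weights and abscissas from 2N raw moments (Wheeler algorithm).
    m = np.asarray(moments, dtype=float)
    n = len(m) // 2
    sigma = np.zeros((n + 1, 2 * n))   # row 0 plays the role of the "-1" level
    sigma[1, :] = m
    a = np.zeros(n)
    b = np.zeros(n)
    a[0] = m[1] / m[0]
    for k in range(1, n):
        for j in range(k, 2 * n - k):
            sigma[k + 1, j] = (sigma[k, j + 1] - a[k - 1] * sigma[k, j]
                               - b[k - 1] * sigma[k - 1, j])
        a[k] = sigma[k + 1, k + 1] / sigma[k + 1, k] - sigma[k, k] / sigma[k, k - 1]
        b[k] = sigma[k + 1, k] / sigma[k, k - 1]
    # Eigenvalues of the Jacobi matrix are the abscissas; weights follow from
    # the first components of the normalized eigenvectors.
    jacobi = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    eigval, eigvec = np.linalg.eigh(jacobi)
    return eigval, m[0] * eigvec[0, :] ** 2

# Round trip: moments of two nodes (weights 0.3 and 0.7 at abscissas 1.0 and 2.0)
mom = [0.3 * 1.0**k + 0.7 * 2.0**k for k in range(4)]
print(wheeler_inversion(mom))   # abscissas ~[1, 2], weights ~[0.3, 0.7]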
Polykinetic
CQMOM has the ability to model a fully 3D velocity PDF, known as a polykinetic approach, in which the velocity distribution is not assumed to be a single Delta function. The method is computationally expensive, but very cost-effective when collisions are considered or in dense particle regimes, which cannot be modeled using N-S and where DPS is computationally restrictive. CQMOM is also applicable for a dispersed phase.
The specialized Boltzmann equation for the velocity PDF f(t, x, v) is

∂f/∂t + v · ∂f/∂x + ∂/∂v · (A f) = C,

where A is the acceleration source term (drag, gravity, etc.) and C is the collision source term. The velocity moment of f in 3D space is defined as

M(γ1, γ2, γ3) = ∫ v1^γ1 v2^γ2 v3^γ3 f(t, x, v) dv,

where vd is the velocity in the d'th dimension, the exponents γd are the multiplicities (arbitrary integer exponents) used to "weight" the PDF integration, and γ = γ1 + γ2 + γ3 is the order of the moment. Similarly, by taking moments of the entire Boltzmann equation, any number of arbitrary integro-differential equations may be generated,

∂M(γ)/∂t + Σd ∂M(γ + ed)/∂xd = A(γ) + C(γ),

where γ = (γ1, γ2, γ3) is a vector of the arbitrary integer indices and ed is the unit vector along the d'th dimension. The convective term involves moments of order γ + 1 and therefore requires closure. Moment closure is achieved using the quadrature approximation of the moments,

M(γ1, γ2, γ3) ≈ Σα wα U1α^γ1 U2α^γ2 U3α^γ3,   α = 1, …, β,

where the Uα are the velocity abscissas, wα is the weight for the α'th node, and β is the total number of nodes in the quadrature approximation.
EQMOM
Extended QMOM (EQMOM) gives the quadrature representation of the PDF more flexibility. Instead of relying solely on Dirac delta functions as the basis functions, it uses a Gaussian distribution, thus allowing more complex PDF's to be represented with fewer quadrature nodes.
Limitations
Despite the increasing popularity of QBMM in solving the kinetic equations of granular gases, this novel approach has some limitations. At present, CQMOM's computational costs are significantly higher than those of the N-S equations or of DPS in the regimes where those methods remain applicable. Additionally, the finite-volume flux methods introduce errors that can lead to instabilities in the moment-inversion process. Nevertheless, the wide applications of this method show its potential in computational physics, including microfluidics. QBMM demonstrates promising results in the area of high Knudsen number and high Stokes number flows.
Further reading
Notes
External links
: ECQMOM presentation
: QMOM presentation
Computational fluid dynamics | Quadrature based moment methods | [
"Physics",
"Chemistry"
] | 1,602 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
67,321,510 | https://en.wikipedia.org/wiki/Nivaflex | Nivaflex is an octavariant alloy important in watchmaking, used primarily for the mainspring. The name was registered as a trademark in 1957 by Reinhard Straumann, a Swiss metallurgist. Nivaflex is "wholly non-magnetic" and displays a very low coefficient of thermal expansion. Its composition is of 45% cobalt, 21% nickel, 18% chromium, 5% iron, 4% tungsten, 4% molybdenum, 1% titanium and 0.2% beryllium; carbon content is less than 0.1 percent of the alloy's weight.
References
Horology
Timekeeping components
Springs (mechanical)
Cobalt alloys
Nickel alloys
Ferroalloys
Chromium alloys
Tungsten alloys
Molybdenum alloys
Titanium alloys
Beryllium alloys
Antiferromagnetic alloys | Nivaflex | [
"Physics",
"Chemistry",
"Technology"
] | 178 | [
"Nickel alloys",
"Titanium alloys",
"Components",
"Physical quantities",
"Horology",
"Time",
"Molybdenum alloys",
"Tungsten alloys",
"Alloys",
"Timekeeping components",
"Antiferromagnetic alloys",
"Spacetime",
"Beryllium alloys",
"Chromium alloys",
"Cobalt alloys"
] |
67,325,019 | https://en.wikipedia.org/wiki/Andrei%20Gritsan | Andrei V. Gritsan is an American-Siberian particle physicist. He was a member of a team of researchers at the Large Hadron Collider, who, in 2012, announced the discovery of a new subatomic particle, a Higgs boson.
Early life and education
Gritsan was born in Russia and graduated from Novosibirsk State University with his Bachelor of Science degree and master's degree in physics. He then enrolled at the University of Colorado, Boulder in the United States for his PhD.
Career
Gritsan joined the faculty at Johns Hopkins University in 2005 after working at the Lawrence Berkeley National Laboratory. As an assistant professor in the department of physics and astronomy, Gritsan won both the National Science Foundation's Faculty Early Career Development Award and a Sloan Research Fellowship in 2007. A few years later, he worked alongside more than 2,000 other scientists and researchers on the search for the Higgs boson; the particle's theoretical prediction was recognized with the Nobel Prize in Physics in 2013, following its discovery at the Large Hadron Collider.
In recognition of his "significant contributions to the discovery and to the characterization of the Higgs Boson at the CERN Large Hadron Collider, and for significant contributions to the measurement of sin2alpha at the SLAC PEP II collider," Gritsan was elected a Fellow of the American Physical Society.
References
External links
Living people
Year of birth missing (living people)
Novosibirsk State University alumni
University of Colorado Boulder alumni
Johns Hopkins University faculty
Fellows of the American Physical Society
21st-century American physicists
Particle physicists
People associated with CERN | Andrei Gritsan | [
"Physics"
] | 311 | [
"Particle physicists",
"Particle physics"
] |
67,326,725 | https://en.wikipedia.org/wiki/Zolt%C3%A1n%20Spakovszky | Zoltán S. Spakovszky is an aerospace engineer, academic and researcher. He is best known for his work on fluid system instabilities and internal flow in turbomachinery. He is T. Wilson (1953) Professor in Aeronautics at the Massachusetts Institute of Technology, and the Director of the MIT Gas Turbine Laboratory.
Education
Spakovszky received his Diplom Ingenieur degree in Mechanical Engineering from the Swiss Federal Institute of Technology (ETH Zurich) in 1997. He then moved to United States and earned his Master's and Doctoral Degree in Aeronautics and Astronautics from Massachusetts Institute of Technology in 1999 and 2001, respectively.
Career
Following his doctoral studies, Spakovszky joined the Department of Aeronautics and Astronautics at MIT as professorial faculty in 2001. He became the Director of the MIT Gas Turbine Laboratory in 2008.
Spakovszky's research focuses on solving complex, real-world, high-relevance technological issues related to aeroengines, power and propulsion systems. He has conducted work in compressor aerodynamics, aeroengine instabilities, rotordynamics, thermodynamics, aero-acoustics, propulsion and energy conversion, and aircraft design for the environment. He investigated and explained the mechanisms of flow instabilities leading to in-flight aeroengine shutdowns. The results helped improve an engine diagnostic and health monitoring test employed by airlines for fleet management purposes and required by an FAA airworthiness directive. Spakovszky worked as chief engineer on the Silent Aircraft Initiative, a joint project between the University of Cambridge, MIT, and industry partners, that took a step beyond the aviation industry's noise reduction goals by delivering a credible conceptual aircraft design inaudible on take-off and landing. He also led a team to develop ultra high-speed gas bearings that enabled operation of multi-wafer rotating MEMS machines for power and propulsion applications at the micro scale.
Spakovszky is a Fellow of American Society of Mechanical Engineers (ASME), an Associate Fellow of American Institute of Aeronautics and Astronautics (AIAA), and the Leader of the ASME Gas Turbine Segment Leadership Team,
Awards and honors
1997 - Georg Fischer Award, ETH Zurich
2003 - NASA Group Achievement Award
2003 - ASME Melville Medal
2009 - Ruth and Joel Spira Award for Excellence in Teaching, Massachusetts Institute of Technology
2012 - ASME Gas Turbine Award, International Gas Turbine Institute
2016 - ASME John P. Davis Gas Turbine Applications Award
2021 - ASME Scholar Award
Bibliography
Z. Spakovszky, “Analysis of Aerodynamically Induced Whirling Forces in Axial Flow Compressors”. ASME Journal of Turbomachinery 122, pp. 761 – 768, October 2000.
Z. Spakovszky "Backward Traveling Rotating Stall Waves in Centrifugal Compressors". ASME Journal of Turbomachinery 126.1 (2004): 1. January 2004.
Z. Spakovszky, “Stamp of Authenticity”, Mechanical Engineering 128, pp. 8, April 2006.
V. Lei, Z. Spakovszky, E. Greitzer "A Criterion for Axial Compressor Hub-Corner Stall" Journal of Turbomachinery 130.3 (2008): 031006. January 2008.
Z. Spakovszky, C. Roduner "Spike and Modal Stall Inception in an Advanced Turbocharger Centrifugal Compressor" Journal of Turbomachinery 131.3 (2009): 031012. January 2009
Z. Spakovszky, “High-Speed Gas Bearings for Micro-Turbomachinery”, in Multi-Wafer Rotating MEMS Machines, Lang, J., ed., Springer, MEMS Reference Shelf Series, January 2009.
A. Peters, Z. Spakovszky, W. Lord, B. Rose "Ultrashort Nacelles for Low Fan Pressure Ratio Propulsors" Journal of Turbomachinery [0889504X] 137.2 (2014): 021001. September 2014.
G. Pullan, A. Young, I. Day, E. Greitzer, Z. Spakovszky "Origins and Structure of Spike-Type Rotating Stall" Journal of Turbomachinery [0889504X] 137.5 (2015): 051007. May 2015.
N. Shah, G. Pfeiffer, R. Davis, T. Hartley, Z. Spakovszky, Full-Scale Turbofan Demonstration of a Deployable Engine Air-Brake for Drag Management Applications," J. Eng. Gas Turbines Power. 2017; 139(11):111202-111202-13. August 2017.
C. Lettieri, D. Paxson, Z. Spakovszky, P. Bryanston-Cross "Characterization of Nonequilibrium Condensation of Supercritical Carbon Dioxide in a de Laval Nozzle" Journal of Engineering for Gas Turbines and Power, 140.4 (2018): 041701, April 2018
A. Kiss, Z. Spakovszky, “Effects of Transient Heat Transfer on Compressor Stability”, ASME J. Turbomach. 2018; 140(12):121003-121003-9. December 2018.
Z. Spakovszky, "Advanced Low-Noise Aircraft Configurations and Their Assessment: Past, Present, and Future", CEAS Aeronautical Journal, Volume: 10, Issue Number: 1, April 2019.
References
Living people
1972 births
Aerospace engineers
Massachusetts Institute of Technology alumni
ETH Zurich alumni | Zoltán Spakovszky | [
"Engineering"
] | 1,145 | [
"Aerospace engineers",
"Aerospace engineering"
] |
67,327,667 | https://en.wikipedia.org/wiki/2-Methylpentamethylenediamine | 2-Methylpentamethylenediamine is an organic compound part of the amine family with the formula H2NCH2CH2CH2CH(CH3)CCH2NCH2. A colorless liquid, this diamine is obtained by the hydrogenation of 2-methylglutaronitrile. It is better known by the trade name "Dytek A".
Uses
2-Methylpentamethylenediamine can serve as a curing agent for epoxy resin systems. It gives good adhesion to metals and resistance against corrosion and other chemicals. It provides toughness, low blush, uniform finish, high gloss, and improves UV stability. It reduces gel time and is compatible with epoxy resins. It is suitable for marine, industrial, and decorative coatings.
2-Methylpentamethylenediamine can also be used as a chain extender for polyurethane applications, and in particular with PUDs. Its derivatives like aspartic esters, secondary amines, aldimines and ketoimines serve as curatives in polyurea systems. In polyamides, 2-Methylpentamethylenediamine acts as a crystallinity disruptor. This makes polymers amorphous in structure and slows down crystallization. It lowers melting point, improves surface appearance, increases abrasion resistance, and dye uptake. It also reduces water absorption, gelling, melt and quench temperatures.
In summary, its uses are:
Corrosion inhibitor
Polyamide adhesive and ink resins.
Polyamide films, plastics, and fibers
Epoxy curing agents
Metalworking Fluids
Chain extenders
Water treatment chemicals
Isocyanates
Hazards
2-Methylpentamethylenediamine has many uses, but is a hazardous chemical. It can cause burns, is corrosive to skin, harmful when swallowed, and it can cause pulmonary edema and acute pneumonitis when inhaled in high concentrations.
See also
1,2-Diaminocyclohexane
Hexamethylenediamine
References
External links
http://www.chemspider.com/Chemical-Structure.77450.html
https://webbook.nist.gov/cgi/cbook.cgi?ID=15520-10-2
Amines | 2-Methylpentamethylenediamine | [
"Chemistry"
] | 490 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
56,036,030 | https://en.wikipedia.org/wiki/Fission%20barrier | In nuclear physics and nuclear chemistry, the fission barrier is the activation energy required for a nucleus of an atom to undergo fission. This barrier may also be defined as the minimum amount of energy required to deform the nucleus to the point where it is irretrievably committed to the fission process. The energy to overcome this barrier can come from either neutron bombardment of the nucleus, where the additional energy from the neutron brings the nucleus to an excited state and undergoes deformation, or through spontaneous fission, where the nucleus is already in an excited and deformed state.
Efforts to understand fission processes are ongoing and have been a very difficult problem since fission was first discovered by Lise Meitner, Otto Hahn, and Fritz Strassmann in 1938. While nuclear physicists understand many aspects of the fission process, there is currently no encompassing theoretical framework that gives a satisfactory account of the basic observations.
Scission
The fission process can be understood when a nucleus with some equilibrium deformation absorbs energy (through neutron capture, for example), becomes excited and deforms to a configuration known as the "transition state" or "saddle point" configuration. As the nucleus deforms, the nuclear Coulomb energy decreases while the nuclear surface energy increases. At the saddle point, the rate of change of the Coulomb energy is equal to the rate of change of the nuclear surface energy. The formation and eventual decay of this transition state nucleus is the rate-determining step in the fission process and corresponds to the passage over an activation energy barrier to the fission reaction. When this occurs, the neck between the nascent fragments disappears and the nucleus divides into two fragments. The point at which this occurs is called the "scission point".
Liquid drop model
From the description of the beginning of the fission process to the "scission point," it is apparent that the change of the shape of the nucleus is associated with a change of energy of some kind. In fact, it is the change of two types of energies: (1) the macroscopic energy related to the nuclear bulk properties as given by the liquid drop model and (2) the quantum mechanical energy associated with filling the shell model orbitals. For the nuclear bulk properties with small distortions, the surface, E_S, and Coulomb, E_C, energies are given by:

E_S = E_S⁰ (1 + (2/5) ε²)

E_C = E_C⁰ (1 − (1/5) ε²)

where E_S⁰ and E_C⁰ are the surface and Coulomb energies of the undistorted spherical drops, respectively, and ε is the quadrupole distortion parameter. When the changes in the Coulomb and surface energies (ΔE_C, ΔE_S) are equal, the nucleus becomes unstable with respect to fission. At that point, the relationship between the undistorted surface and Coulomb energies becomes:

x = E_C⁰ / (2 E_S⁰) = 1

where x is called the fissionability parameter. If x > 1, the liquid drop energy decreases with increasing ε, which leads to fission. If x < 1, then the liquid drop energy decreases with decreasing ε, which leads to spherical shapes of the nucleus.
The Coulomb and surface energies of a uniformly charged sphere can be approximated by the following expressions:

E_C⁰ = (3/5) Z² e² / R

E_S⁰ = 4π R² σ

where Z is the atomic number of the nucleus, A is the mass number of the nucleus, e is the charge of an electron, R is the radius of the undistorted spherical nucleus, σ is the surface tension per unit area of the nucleus, and R = r₀ A^(1/3). The equation for the fissionability parameter then becomes:

x = (Z²/A) / (Z²/A)_critical

where the ratio of the constants is referred to as (Z²/A)_critical. The fissionability of a given nucleus can then be categorized relative to (Z²/A)_critical. As an example, plutonium-239 has a Z²/A value of 36.97, while less fissionable nuclei like bismuth-209 have a value of 32.96.
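The quoted values are simply Z²/A for each nuclide, which is easy to verify (a minimal Python check):

for name, Z, A in [("plutonium-239", 94, 239), ("bismuth-209", 83, 209)]:
    print(name, round(Z**2 / A, 2))
# plutonium-239 36.97
# bismuth-209 32.96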
For all stable nuclei, the fissionability parameter x must be less than 1. In that case, the total deformation energy of nuclei undergoing fission will increase by a finite amount as the nucleus deforms towards fission. This increase in potential energy can be thought of as the activation energy barrier for the fission reaction. However, modern calculations of the potential energy of deformation for the liquid drop model involve many deformation coordinates aside from ε and represent major computational tasks.
Shell corrections
In order to get more reasonable values for the nuclear masses in the liquid drop model, it is necessary to include shell effects. Soviet physicist Vilen Strutinsky proposed such a method using "shell correction" and corrections for nuclear pairing to the liquid drop model. In this method, the total energy of the nucleus is taken as the sum of the liquid drop model energy, E_LDM, the shell, δE_shell, and pairing, δE_pairing, corrections to this energy as:

E = E_LDM + δE_shell + δE_pairing
The shell corrections, just like the liquid drop energy, are functions of the nuclear deformation. The shell corrections tend to lower the ground state masses of spherical nuclei with magic or near-magic numbers of neutrons and protons. They also tend to lower the ground state mass of mid shell nuclei at some finite deformation thus accounting for the deformed nature of the actinides. Without these shell effects, the heaviest nuclei could not be observed, as they would decay by spontaneous fission on a time scale much shorter than we can observe.
This combination of macroscopic liquid drop and microscopic shell effects predicts that for nuclei in the U-Pu region, a double-humped fission barrier with equal barrier heights and a deep secondary minimum will occur. For heavier nuclei, like californium, the first barrier is predicted to be much larger than the second barrier and passage over the first barrier is rate determining. In general, there is ample experimental and theoretical evidence that the lowest energy path in the fission process corresponds to having the nucleus, initially in an axially symmetric and mass (reflection) symmetric shape pass over the first maximum in the fission barrier with an axially asymmetric but mass symmetric shape and then to pass over the second maximum in the barrier with an axially symmetric but mass (reflection) asymmetric shape. Because of the complicated multidimensional character of the fission process, there are no simple formulas for the fission barrier heights. However, there are extensive tabulations of experimental characterizations of the fission barrier heights for various nuclei.
See also
Cold fission
Nuclear fusion
References
Nuclear physics
Nuclear fission
Nuclear chemistry
Otto Hahn
1938 in science
1938 in Germany | Fission barrier | [
"Physics",
"Chemistry"
] | 1,234 | [
"Nuclear fission",
"Nuclear chemistry",
"nan",
"Nuclear physics"
] |
56,040,144 | https://en.wikipedia.org/wiki/Coinduction%20%28anesthetics%29 | Coinduction in anesthesia is a pharmacological tool whereby a combination of sedative drugs may be used to greater effect than a single agent, achieving a smoother onset of general anesthesia. The use of coinduction allows lower doses of the same anesthetic agents to be used which provides enhanced safety, faster recovery, fewer side-effects, and more predictable pharmacodynamics. Coinduction is used in human medicine and veterinary medicine as standard practice to provide optimum anesthetic induction. The onset or induction phase of anesthesia is a critical period involving the loss of consciousness and reactivity in the patient, and is arguably the most dangerous period of a general anesthetic. A great variety of coinduction combinations are in use and selection is dependent on the patient's age and health, the specific situation, and the indication for anesthesia. As with all forms of anesthesia the resources available in the environment are a key factor.
Commonly used coinduction regimens
A standard coinduction regimen for an adult might consist of a benzodiazepine sedative amnesic such as midazolam, followed by an opioid analgesic with further sedating properties such as fentanyl, which has a fast onset, then an intravenous induction agent: propofol. A muscle relaxant such as atracurium would be administered after this, though this would not strictly be a part of coinduction. For a child, on the other hand, a commonly used regimen would be fentanyl, ketamine and rocuronium. In all cases the choice of agents would be tailored to the situation; for a neonatal intubation the aforementioned regimens would be inappropriate as sedation and especially amnesia are less important. Fentanyl alone would be used, followed by the short-acting muscle relaxant suxamethonium: coinduction is typically not used in neonatal anesthesia.
References
Anesthesia
Pharmacology
Sedatives | Coinduction (anesthetics) | [
"Chemistry"
] | 408 | [
"Pharmacology",
"Medicinal chemistry"
] |
56,045,552 | https://en.wikipedia.org/wiki/Quantum%20dot%20single-photon%20source | A quantum dot single-photon source is based on a single quantum dot placed in an optical cavity. It is an on-demand single-photon source. A laser pulse can excite a pair of carriers known as an exciton in the quantum dot. The decay of a single exciton due to spontaneous emission leads to the emission of a single photon. Due to interactions between excitons, the emission when the quantum dot contains a single exciton is energetically distinct from that when the quantum dot contains more than one exciton. Therefore, a single exciton can be deterministically created by a laser pulse and the quantum dot becomes a nonclassical light source that emits photons one by one and thus shows photon antibunching. The emission of single photons can be proven by measuring the second order intensity correlation function. The spontaneous emission rate of the emitted photons can be enhanced by integrating the quantum dot in an optical cavity. Additionally, the cavity leads to emission in a well-defined optical mode increasing the efficiency of the photon source.
History
With the growing interest in quantum information science since the beginning of the 21st century, research into different kinds of single-photon sources has grown. Early single-photon sources such as heralded photon sources, first reported in 1985, are based on non-deterministic processes. Quantum dot single-photon sources are on-demand. A single-photon source based on a quantum dot in a microdisk structure was reported in 2000. Sources were subsequently embedded in different structures such as photonic crystals or micropillars. Adding distributed Bragg reflectors (DBRs) allowed emission in a well-defined direction and increased emission efficiency. Most quantum dot single-photon sources need to work at cryogenic temperatures, which is still a technical challenge. The other challenge is to realize high-quality quantum dot single-photon sources at telecom wavelengths for fiber telecommunication applications. The first report on Purcell-enhanced single-photon emission of a telecom-wavelength quantum dot in a two-dimensional photonic crystal cavity with a quality factor of 2,000 showed enhancements of the emission rate and the intensity by factors of five and six, respectively.
Theory of realizing a single-photon source
Exciting an electron in a semiconductor from the valence band to the conduction band creates an excited state, a so-called exciton. The spontaneous radiative decay of this exciton results in the emission of a photon. Since a quantum dot has discrete energy levels, it can be achieved that there is never more than one exciton in the quantum dot simultaneously. Therefore, the quantum dot is an emitter of single photons. A key challenge in making a good single-photon source is to make sure that the emission from the quantum dot is collected efficiently. To do that, the quantum dot is placed in an optical cavity. The cavity can, for instance, consist of two DBRs in a micropillar (Fig. 1). The cavity enhances the spontaneous emission in a well-defined optical mode (Purcell effect), facilitating efficient guiding of the emission into an optical fiber. Furthermore, the reduced exciton lifetime (see Fig. 2) reduces the significance of linewidth broadening due to noise.
The system can then be approximated by the Jaynes-Cummings model. In this model, the quantum dot interacts with only a single mode of the optical cavity. The frequency of the optical mode is well defined. This makes the photons indistinguishable if their polarization is aligned by a polarizer. The solution of the Jaynes-Cummings Hamiltonian is a vacuum Rabi oscillation, and the coupled excitation of a photon and an exciton is known as an exciton-polariton.
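A minimal numerical sketch of this model (not drawn from any particular reference; the coupling value is an arbitrary assumption) diagonalizes the one-excitation block of the Jaynes-Cummings Hamiltonian in the basis {|exciton, 0 photons>, |ground, 1 photon>}; on resonance the two polariton energies are split by twice the coupling, the vacuum Rabi splitting:

import numpy as np

def polariton_energies(g, delta):
    # One-excitation Jaynes-Cummings block (hbar = 1, energies measured from
    # the mean of exciton and cavity energies); g = coupling, delta = detuning.
    h = np.array([[delta / 2.0, g],
                  [g, -delta / 2.0]])
    return np.linalg.eigvalsh(h)

g = 0.05   # coupling strength in arbitrary units (assumed value)
print(polariton_energies(g, 0.0))   # [-0.05  0.05]: splitting of 2g on resonance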
To eliminate the probability of the simultaneous emission of two photons, it has to be made sure that there can only be one exciton in the cavity at one time. The discrete energy states in a quantum dot allow only one excitation. Additionally, the Rydberg blockade prevents the excitation of two excitons at the same space. The electromagnetic interaction with the already existing exciton changes the energy for creating another exciton at the same space slightly. If the energy of the pump laser is tuned on resonance, the second exciton cannot be created.
Still, there is a small probability of having two excitations in the quantum dot at the same time. Two excitons confined in a small volume are called biexcitons. They interact with each other and thus slightly change their energy. Photons resulting from the decay of biexcitons have a different energy than photons resulting from the decay of excitons. They can be filtered out by letting the outgoing beam pass an optical filter.
The quantum dots can be excited both electrically and optically. For optical pumping, a pulsed laser can be used for excitation of the quantum dots. In order to have the highest probability of creating an exciton, the pump laser is tuned on resonance. This resembles a π-pulse on the Bloch sphere. However, this way the emitted photons have the same frequency as the pump laser. A polarizer is needed to distinguish between them. As the direction of polarization of the photons from the cavity is random, half of the emitted photons are blocked by this filter.
Experimental realization
There are several ways to realize a quantum dot-cavity system that can act as a single-photon source. Typical cavity structures are micro-pillars, photonic crystal cavities, or tunable micro-cavities. Inside the cavity, different types of quantum dots can be used. The most widely used type are self-assembled InAs quantum dots grown in the Stranski-Krastanov growth mode, but other materials and growth methods such as local droplet etching have been used. A list of different experimental realizations is shown below:
Micropillars: In this approach, quantum dots are grown between two distributed Bragg reflectors (DBR mirrors). The DBRs are typically both grown by molecular beam epitaxy (MBE). For the mirrors, two materials with different indices of refraction are grown in alternating order. Their lattice parameters should match to prevent strain. A possible combination is alternating layers of aluminum arsenide and gallium arsenide. After the first DBR, a material with a smaller band gap is used to grow the quantum dot above the first DBR. The second layer of DBRs can now be grown on top of the layer with the quantum dots. The diameter of the pillar is only a few microns. To prevent the optical mode from exiting the cavity, the micropillar must act as a waveguide. Semiconductors usually have relatively high indices of refraction of about n ≅ 3. Therefore, their extraction cone is small. On a smooth surface the micropillar works as an almost perfect waveguide. However, losses increase with the roughness of the walls and decreasing diameter of the micropillar. The edges thus must be as smooth as possible to minimize losses. This can be achieved by structuring the sample with electron beam lithography and processing the pillars with reactive ion etching.
Tunable micro-cavities hosting quantum dots can be also used as single-photon source. Different compared to micro-pillars, only a single DBR is grown below the quantum dots. The second part of the cavity is a curved top mirror that is physically detached from the semiconductor. The top-mirror can be moved with respect to the quantum dot position which allows tuning the cavity quantum dot coupling as needed. A further advantage over micro-pillars is that the charge-environment of the quantum dots can be stabilized by using diode structures. A disadvantage of the micro-cavity system is that it requires additional mechanical components to tune the cavity which increases the overall system size.
Microlens and solid immersion lens: To increase the brightness of a quantum dot single-photon source, also microlens structures have been used. The concept is to reduce losses due to total internal reflection similar to what can be achieved with a solid immersion lens.
Other single-photon sources are nanobeam or photonic crystal waveguides that contain quantum dots. For such structures, no DBRs are needed but can be used to improve the outcoupling efficiency. Compared to micropillars, this architecture has the advantage that on-chip routing of photons is possible. On the other side, the structure sizes are much smaller requiring more advanced nano-fabrication techniques. The close proximity of quantum dots to the surface is a further challenge.
Verification of emission of single photons
Single photon sources exhibit antibunching. As photons are emitted one at a time, the probability of seeing two photons at the same time for an ideal source is 0. To verify the antibunching of a light source, one can measure the second-order autocorrelation function g(2)(τ). A photon source is antibunched if g(2)(0) ≤ g(2)(τ). For an ideal single photon source, g(2)(0) = 0. Experimentally, g(2)(τ) is measured using the Hanbury Brown and Twiss effect. Using resonant excitation schemes, experimental values for g(2)(0) are typically in the regime of just a few percent. Low values have also been reached without resonant excitation.
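The distinction between light sources can be made concrete through their photon-number statistics, since g(2)(0) = <n(n−1)>/<n>², which gives 0 for an ideal single-photon (Fock) state, 1 for coherent laser light and 2 for thermal light. A small Python check with illustrative distributions:

import numpy as np
from math import exp, factorial

def g2_zero(p):
    # g2(0) = <n(n-1)> / <n>**2 for a photon-number distribution p[n]
    n = np.arange(len(p))
    mean_n = np.sum(n * p)
    return np.sum(n * (n - 1) * p) / mean_n**2

nmax = 40
fock1 = np.zeros(nmax); fock1[1] = 1.0                                # ideal single photons
coherent = np.array([exp(-1.0) / factorial(k) for k in range(nmax)])  # Poisson, mean 1
thermal = np.array([0.5 ** (k + 1) for k in range(nmax)])             # Bose-Einstein, mean 1

for name, p in [("Fock |1>", fock1), ("coherent", coherent), ("thermal", thermal)]:
    print(name, round(g2_zero(p), 3))   # 0.0, ~1.0, ~2.0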
Indistinguishability of the emitted photons
For applications the photons emitted by a single photon source must be indistinguishable. The theoretical solution of the Jaynes-Cummings Hamiltonian is a well-defined mode in which only the polarization is random. After aligning the polarization of the photons, their indistinguishability can be measured. For that, the Hong-Ou-Mandel effect is used. Two photons of the source are prepared so that they enter a 50:50 beam splitter at the same time from the two different input channels. A detector is placed on both exits of the beam splitter. Coincidences between the two detectors are measured. If the photons are indistinguishable, no coincidences should occur. Experimentally, almost perfect indistinguishability is found.
Applications
Single-photon sources are of great importance in quantum communication science. They can be used for truly random number generators. Single photons entering a beam splitter exhibit inherent quantum indeterminacy. Random numbers are used extensively in simulations using the Monte Carlo method.
Furthermore, single photon sources are essential in quantum cryptography. The BB84 scheme is a provable secure quantum key distribution scheme. It works with a light source that perfectly emits only one photon at a time. Due to the no-cloning theorem, no eavesdropping can happen without being noticed. The use of quantum randomness while writing the key prevents any patterns in the key that can be used to decipher the code.
Apart from that, single photon sources can be used to test some fundamental properties of quantum field theory.
See also
Optical microcavity
Quantum dot
Single-photon source
References
Quantum optics
Condensed matter physics | Quantum dot single-photon source | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,267 | [
"Quantum optics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Matter"
] |
56,046,803 | https://en.wikipedia.org/wiki/Near%20letter-quality%20printing | Near letter-quality (NLQ) printing is a process where dot matrix printers produce high-quality text by using multiple passes to produce higher dot density. The tradeoff for the improved print quality is reduced printing speed. Software can also be used to produce this effect. The term was coined in the 1980s to distinguish NLQ printing from true letter-quality printing, as produced by a printer based on traditional typewriter technology such as a daisy wheel, or by a laser printer.
In 1985 The New York Times described the marketing of printers with the terms "near letter-quality, or N.L.Q." as "just a neat little bit of hype", but acknowledged that they "really show their stuff in the area of fonts, print enhancements and graphics".
Technology overview
Near letter-quality is a form of impact dot matrix printing. What The New York Times called "dot-matrix impact printing" was deemed almost good enough to be used in a business letter. Reviews in the later 1980s ranged from "good but not great" to "endowed with a simulated typewriter-like quality".
By using multiple passes of the carriage, and higher dot density, the printer could increase the effective resolution. For example, the Epson FX-86 could achieve a theoretical addressable dot-grid of 240 by 216 dots/inch using a print head with a vertical dot density of only 72 dots/inch, by making multiple passes of the print head for each line. For 240 by 144 dots/inch, the print head would make one pass, printing 240 by 72 dots/inch, then the printer would advance the paper by half of the vertical dot pitch (1/144 inch), then the print head would make a second pass. For 240 by 216 dots/inch, the print head would make three passes with smaller paper movement (1/3 vertical dot pitch, or 1/216 inch) between the passes. To cut hardware costs, some manufacturers merely used a double strike (doubly printing each line) to increase the printed text's boldness, resulting in bolder but still jagged text. In all cases, NLQ mode incurred a severe speed penalty.
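The pass arithmetic is simple enough to check directly; a small Python sketch using the Epson figures above:

head_dpi = 72        # vertical dot density of the print head (dots/inch)
passes = 3           # print-head passes per text line in the highest NLQ mode

effective_dpi = head_dpi * passes        # 216 dots/inch vertically
paper_advance_in = 1.0 / effective_dpi   # paper feed between passes, in inches
print(effective_dpi, paper_advance_in)   # 216 and about 0.00463 (1/216 inch)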
Because of the slow speed of NLQ printing, all NLQ printers have at least one "draft mode", in which the same fonts are used, but with only one pass of the print head per line. This produces lower-resolution printing, but at higher speed.
Expensive NLQ printers had multiple fonts built-in, and some had a slot where a font cartridge could be inserted to add more fonts. Printer utility software could be used to print with multiple fonts on less-expensive printers. Not all of these utilities worked with all printers and applications, however.
References
Dot matrix printers
History of computing hardware | Near letter-quality printing | [
"Technology"
] | 572 | [
"History of computing hardware",
"History of computing"
] |
47,524,120 | https://en.wikipedia.org/wiki/Hastatic%20order | Hastatic order is a fundamental way of breaking double "time-reversal" symmetry. It is present in the heavy-fermion compound URu2Si2. This order was dubbed hastatic from hasta, the Latin word for "spear". Its order parameter is spinorial, so its cycle is twice that of ordinary magnetism: a rotation of 720°, rather than 360°, is needed to return it to its starting state.
Discovery
Hastatic order was first reported in January 2013 as a proposed explanation for the "hidden order" phase that the heavy-fermion uranium compound URu2Si2 enters when cooled below about 17.5 K. The transition into this phase releases a large amount of entropy, seen as an anomaly in the specific heat, yet no conventional magnetic order is detected, and the nature of the ordered state had long been a mystery. In the hastatic-order picture, the particles arrange into a spinorial pattern below the transition, accounting for the entropy that is lost there.
References
Fermions
Uranium | Hastatic order | [
"Physics",
"Materials_science"
] | 141 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Particle physics",
"Particle physics stubs",
"Matter"
] |
47,524,831 | https://en.wikipedia.org/wiki/United%20States%20Drought%20Monitor | The United States Drought Monitor is a collection of measures that allows experts to assess droughts in the United States. The monitor is not an agency but a partnership between the National Drought Mitigation Center at the University of Nebraska-Lincoln, the United States Department of Agriculture, and the National Oceanic and Atmospheric Administration. Different experts provide their best judgment to outline a single map every week that shows droughts throughout the United States. The effort started in 1999 as a federal, state, and academic partnership, growing out of an initiative by the Western Governors Association to provide timely and understandable scientific information on water supply and drought for policymakers.
The monitor is produced by a rotating group of authors and incorporates review from a group of 250 climatologists, extension agents, and others across the nation. Each week the authors revise the previous map based on rainfall, snowfall, and other events, and observers' reports of how drought is affecting crops, wildlife, and other indicators. Authors balance conflicting data and reports to come up with a new map every Wednesday afternoon. The map is then released on the following Thursday morning.
See also
Palmer drought index
References
External links
Drought Monitor summary at the U.S. Drought Portal
Droughts
Droughts in the United States
Hydrology
Meteorological quantities
Climate change and agriculture | United States Drought Monitor | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 259 | [
"Hydrology",
"Physical quantities",
"Quantity",
"Meteorological quantities",
"Environmental engineering"
] |
47,529,684 | https://en.wikipedia.org/wiki/Hilton%20Cleveland%20Downtown%20Hotel | The Hilton Cleveland Downtown Hotel (HCDH) is a skyscraper on the corner of Ontario Street and Lakeside Avenue along The Mall in downtown Cleveland, Ohio, United States. It opened in 2016, has 600 rooms and is 32 stories high. It is one of four Hilton properties in downtown Cleveland, the other three being Hilton Garden Inn, the DoubleTree Hotel Cleveland, and Hampton Inn.
The building was constructed under a partnership between the city of Cleveland and Cuyahoga County to attract larger conventions to the city of Cleveland. The agreement was entered into under the first chief executive of Cuyahoga County, Ed FitzGerald's administration and the Cleveland mayor Frank G. Jackson.
The hotel is the tallest and largest in the city. Previously, the largest hotel in the city was the Renaissance Cleveland Hotel which has 500 rooms. This is the first major hotel constructed in the city since the building of the Marriott at Key Center in 1991 at a height of 320 feet with 385 rooms. The new Hilton is managed by Teri Agosta.
Impetus for hotel
Following the completion of the new Global Center for Health Innovation, and spurred by a tax overrun that was raised by the county to construct that facility, the first chief executive of Cuyahoga County, Ed FitzGerald, proposed the county mount a hotel project to meet demand for conventions that would otherwise overlook Cleveland, which had had no hotel able to accommodate over 500 guests at a time since the 1990s, when the Stouffer's company renovated its 1000-room Hotel Cleveland at Public Square (connected to the Terminal Tower) down to 500 rooms. The Hilton Hotel project was considered instrumental in landing the 2016 Republican National Convention.
Financing
The Hotel cost $272 million.
The city of Cleveland passed legislation that led to the financing structure for the hotel in December 2013.
Cuyahoga County followed suit by passing approval for the project in April 2014.
Design
The lead architect on the project was the Atlanta firm of Cooper Carry. The project is LEED certified and uses glass extensively in three slender towers jutting up from a four-story concrete pedestal base. It contains two ballrooms. The hotel's bar is called the Burnham in honor of Daniel Burnham, whose Cleveland Group Plan was instituted in 1903. The skyscraper was erected by the New York City firm Turner Construction.
See also
List of tallest buildings in Cleveland
References
Skyscraper hotels in Cleveland
Cleveland
Buildings and structures in Cleveland
Leadership in Energy and Environmental Design certified buildings
Hotel buildings completed in 2016
Downtown Cleveland
2016 establishments in Ohio | Hilton Cleveland Downtown Hotel | [
"Engineering"
] | 492 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
64,380,035 | https://en.wikipedia.org/wiki/Sphere%20packing%20in%20a%20cylinder | Sphere packing in a cylinder is a three-dimensional packing problem with the objective of packing a given number of identical spheres inside a cylinder of specified diameter and length. For cylinders with diameters on the same order of magnitude as the spheres, such packings result in what are called columnar structures.
These problems are studied extensively in the context of biology, nanoscience, materials science, and so forth due to the analogous assembly of small particles (like cells and atoms) into cylindrical crystalline structures.
The book "Columnar Structures of Spheres: Fundamentals and Applications" serves as a notable contributions to this field of study. Authored by Winkelmann and Chan, the book reviews theoretical foundations and practical applications of densely packed spheres within cylindrical confinements.
Appearance in science
Columnar structures appear in various research fields on a broad range of length scales from metres down to the nanoscale. On the largest scale, such structures can be found in botany where seeds of a plant assemble around the stem. On a smaller scale bubbles of equal size crystallise to columnar foam structures when confined in a glass tube. In nanoscience such structures can be found in man-made objects which are on length scales from a micron to the nanoscale.
Botany
Columnar structures were first studied in botany due to their diverse appearances in plants. D'Arcy Thompson analysed such arrangement of plant parts around the stem in his book "On Growth and Form" (1917). But they are also of interest in other biological areas, including bacteria, viruses, microtubules, and the notochord of the zebra fish.
One of the largest flowers where the berries arrange in a regular cylindrical form is the titan arum. This flower can be up to 3m in height and is natively solely found in western Sumatra and western Java.
On smaller length scales, the berries of the Arum maculatum form a columnar structure in autumn. Its berries are similar to those of the corpse flower, since the titan arum is its larger relative. However, the cuckoo-pint is much smaller in height (height ≈ 20 cm). The berry arrangement varies with the ratio of stem size to berry size.
Another plant that can be found in many gardens of residential areas is the Australian bottlebrush. It assembles its seed capsules around a branch of the plant. The structure depends on the ratio of seed capsule size to branch size.
Foams
A further occurrence of ordered columnar arrangement on the macroscale are foam structures confined inside a glass tube. They can be realised experimentally with equal-sized soap bubbles inside a glass tube, produced by blowing air of constant gas flow through a needle dipped in a surfactant solution. By putting the resulting foam column under forced drainage (feeding it with surfactant solution from the top), the foam can be adjusted to either a dry (bubbles shaped as polyhedrons) or wet (spherical bubbles) structure.
Due to this simple experimental set-up, many columnar structures have been discovered and investigated in the context of foams with experiments as well as simulation. Many simulations have been carried out using the Surface Evolver to investigate dry structure or the hard sphere model for the wet limit where the bubbles are spherical.
In the zigzag structure the bubbles are stacked on top of each other in a continuous w-shape. For this particular structure a moving interface with increasing liquid fraction was reported by Hutzler et al. in 1997. This included an unexpected 180° twist interface, whose explanation is still lacking.
The first experimental observation of a line-slip structure was discovered by Winkelmann et al. in a system of bubbles.
Further discovered structures include complex structures with internal spheres/foam cells. Some dry foam structures with interior cells were found to consist of a chain of pentagonal dodecahedra or Kelvin cells in the centre of the tube. For many more arrangements of this type, it was observed using X-ray tomography that the outside bubble layer is ordered, with each internal layer resembling a different, simpler columnar structure.
Nanoscience
Columnar structures have also been studied intensively in the context of nanotubes. Their physical or chemical properties can be altered by trapping identical particles inside them. This is usually done by self-assembling fullerenes such as C60, C70, or C78 into carbon nanotubes, but also into boron nitride nanotubes.
Such structures also assemble when particles are coated on the surface of a spherocylinder as in the context of pharmaceutical research. Lazáro et al. examined the morphologies of virus capsid proteins self-assembled around metal nanorods. Drug particles were coated as densely as possible on a spherocylinder to provide the best medical treatment.
Wu et al. built rods of the size of several microns. These microrods are created by densely packing silica colloidal particles inside cylindrical pores. By solidifying the assembled structures the microrods were imaged and examined using scanning electron microscopy (SEM).
Columnar arrangements are also investigated as a possible candidate for optical metamaterials (i.e. materials with a negative refractive index), which find applications in super lenses or optical cloaking. Tanjeem et al. are constructing such a resonator by self-assembling nanospheres on the surface of a cylinder. The nanospheres are suspended in an SDS solution together with a cylinder whose diameter is much larger than the diameter of the nanospheres. The nanospheres then stick to the surface of the cylinders by a depletion force.
Classification using phyllotactic notation
The most common way of classifying ordered columnar structures uses the phyllotactic notation, adopted from botany. It is used to describe arrangements of leaves of a plant, pine cones, or pineapples, but also planar patterns of florets in a sunflower head. While the arrangement in the former are cylindrical, the spirals in the latter are arranged on a disk. For columnar structures phyllotaxis in the context of cylindrical structures is adopted.
The phyllotactic notation describes such structures by a triplet of positive integers (l, m, n) with l = m + n. Each of the numbers l, m, and n describes a family of spirals in the 3-dimensional packing. They count the number of spirals in each direction until the spiral repeats. This notation, however, only applies to triangular lattices and is therefore restricted to the ordered structures without internal spheres.
Types of ordered columnar structures without internal spheres
Ordered columnar structures without internal spheres are categorised into two separate classes: uniform and line-slip structures. For each structure that can be identified with a triplet (l, m, n), there exists a uniform structure and at least one line slip.
Uniform structure
A uniform structure is identified by each sphere having the same number of contacting neighbours. This gives each sphere an identical neighbourhood. In the example image on the side each sphere has six neighbouring contacts.
The number of contacts is best visualised in the rolled-out contact network. It is created by rolling out the contact network into a plane of height and azimuthal angle of each sphere. For a uniform structure such as the one in the example image, this leads to a regular hexagonal lattice. Each dot in this pattern represents a sphere of the packing and each line a contact between adjacent spheres.
For all uniform structures above a certain diameter ratio, the regular hexagonal lattice is their characterising feature, since this lattice type has the maximum number of contacts. For different uniform structures the rolled-out contact pattern only varies by a rotation in the plane. Each uniform structure is thus distinguished by its periodicity vector, which is defined by the phyllotactic triplet (l, m, n).
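A rolled-out contact network can be generated numerically from the sphere centres. The minimal Python sketch below (an illustration, not code from the cited studies) maps each centre, given as an angle, a height and a radial distance from the cylinder axis, to the plane point (r·θ, z) and lists the pairs of spheres whose 3D separation equals one sphere diameter:

import numpy as np

def rolled_out_contacts(centres, sphere_d, tol=1e-3):
    # centres: rows of (theta, z, r); returns rolled-out 2D points and the
    # index pairs of spheres in contact (3D centre distance ~ sphere_d).
    theta, z, r = centres[:, 0], centres[:, 1], centres[:, 2]
    plane = np.column_stack([r * theta, z])
    xyz = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
    contacts = [(i, j)
                for i in range(len(xyz)) for j in range(i + 1, len(xyz))
                if abs(np.linalg.norm(xyz[i] - xyz[j]) - sphere_d) < tol * sphere_d]
    return plane, contacts

# Two unit-diameter spheres stacked along the axis direction are in contact:
centres = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(rolled_out_contacts(centres, sphere_d=1.0)[1])   # [(0, 1)]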
Line-slip structure
For each uniform structure, there also exists a related but different structure, called a line-slip arrangement.
The differences between uniform and line-slip structures are marginal and difficult to spot from images of the sphere packings. However, by comparing their rolled-out contact networks, one can spot that certain lines (which represent contacts) are missing.
All spheres in a uniform structure have the same number of contacts, but the number of contacts for spheres in a line slip may differ from sphere to sphere. For the example line slip in the image on the right side, some spheres have five and others six contacts. Thus a line-slip structure is characterised by these gaps or losses of contacts.
Such a structure is termed line slip because the losses of contacts occur along a line in the rolled-out contact network. It was first identified by Picket et al., but not termed line slip.
The direction in which the loss of contacts occurs can be denoted in the phyllotactic notation, since each number represents one of the lattice vectors in the hexagonal lattice. This is usually indicated by a bold number.
By shearing the row of spheres below the loss of contact against a row above the loss of contact, one can regenerate two uniform structures related to this line slip. Thus, each line slip is related to two adjacent uniform structures, one at a higher and one at a lower diameter ratio.
Winkelmann et al. were the first to experimentally realise such a structure using soap bubbles in a system of deformable spheres.
Dense sphere packings in cylinders
Columnar structures arise naturally in the context of dense hard sphere packings inside a cylinder. Mughal et al. studied such packings using simulated annealing over a range of cylinder-to-sphere diameter ratios. This includes some structures with internal spheres that are not in contact with the cylinder wall.
They calculated the packing fraction for all these structures as a function of the diameter ratio. At the peaks of this curve lie the uniform structures. In-between these discrete diameter ratios are the line slips at a lower packing density. Their packing fraction is significantly smaller than that of an unconfined lattice packing such as fcc, bcc, or hcp due to the free volume left by the cylindrical confinement.
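As a simple illustration of how a packing fraction of this kind is evaluated, the following sketch computes the volume fraction occupied by N spheres of diameter d inside a cylindrical segment of diameter D and length L; the numbers are invented example values, not data from the studies cited above.

```python
# Packing fraction of N spheres of diameter d inside a cylinder segment of
# diameter D and length L (example values only).
import math

def packing_fraction(n_spheres: int, d: float, D: float, L: float) -> float:
    sphere_volume = n_spheres * math.pi * d**3 / 6.0
    cylinder_volume = math.pi * D**2 * L / 4.0
    return sphere_volume / cylinder_volume

# e.g. 40 unit spheres in a cylinder segment of diameter 2 and length 20
print(f"packing fraction: {packing_fraction(40, 1.0, 2.0, 20.0):.3f}")
```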
The rich variety of such ordered structures can also be obtained by sequentially depositing the spheres into the cylinder. Chan reproduced all of these dense sphere packings using an algorithm in which the spheres are sequentially dropped inside the cylinder.
Mughal et al. also discovered that such structures can be related to disk packings on a surface of a cylinder. The contact network of both packings are identical. For both packing types, it was found that different uniform structures are connected with each other by line slips.
Fu et al. extended this work to higher diameter ratios using linear programming and discovered 17 new dense structures with internal spheres that are not in contact with the cylinder wall.
A similar variety of dense crystalline structures have also been discovered for columnar packings of spheroids through Monte Carlo simulations. Such packings include achiral structures with specific spheroid orientations and chiral helical structures with rotating spheroid orientations.
Columnar structures created by rapid rotations
A further dynamic method to assemble such structures was introduced by Lee et al. Here, polymeric beads are placed together with a fluid of higher density inside a rotating lathe.
When the lathe is static, the beads float on top of the liquid. With increasing rotational speed, the denser fluid is pushed outwards and the beads are driven toward the central axis. Hence, the beads are essentially confined by a potential given by the rotational energy E(r) = (1/2) m ω² r², where m is the mass of a bead, r the distance from the central axis, and ω the rotational speed. Due to the proportionality to r², the confining potential resembles that of a cylindrical harmonic oscillator.
Depending on the number of spheres and the rotational speed, a variety of ordered structures that are comparable to the dense sphere packings were discovered.
A comprehensive theory to this experiment was developed by Winkelmann et al. It is based on analytic energy calculations using a generic sphere model and predicts peritectoid structure transitions.
See also
Sphere packing
Close-packing of equal spheres
Packing problems
References
External links
Becker, Aaron T. and Huang, L. "Packing spheres into a Thin Cylinder". MathWorld.
Packing problems
Spheres
Discrete geometry
Crystallography | Sphere packing in a cylinder | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,489 | [
"Discrete mathematics",
"Packing problems",
"Discrete geometry",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Mathematical problems"
] |
64,384,635 | https://en.wikipedia.org/wiki/Forward%20problem%20of%20electrocardiology | The forward problem of electrocardiology is a computational and mathematical approach to study the electrical activity of the heart through the body surface. The principal aim of this study is to computationally reproduce an electrocardiogram (ECG), which has important clinical relevance to define cardiac pathologies such as ischemia and infarction, or to test pharmaceutical intervention. Given their important functionality and relatively small invasiveness, electrocardiography techniques are used quite often as clinical diagnostic tests. Thus, it is natural to proceed to computationally reproduce an ECG, which means to mathematically model the cardiac behaviour inside the body.
The three main parts of a forward model for the ECG are:
a model for the cardiac electrical activity;
a model for the diffusion of the electrical potential inside the torso, which represents the extracardiac region;
some specific heart-torso coupling conditions.
Thus, to obtain an ECG, a mathematical electrical cardiac model must be considered, coupled with a diffusive model in a passive conductor that describes the electrical propagation inside the torso.
The coupled model is usually a three-dimensional model expressed in terms of partial differential equations. Such a model is typically solved by means of the finite element method for the solution's space evolution and semi-implicit numerical schemes involving finite differences for the solution's time evolution. However, the computational costs of such techniques, especially with three-dimensional simulations, are quite high. Thus, simplified models are often considered, solving for example the heart electrical activity independently from the problem on the torso. To provide realistic results, three-dimensional anatomically realistic models of the heart and the torso must be used.
Another possible simplification is a dynamical model made of three ordinary differential equations.
Heart tissue models
The electrical activity of the heart is caused by the flow of ions across the cell membrane, between the intracellular and extracellular spaces, which determines a wave of excitation along the heart muscle that coordinates the cardiac contraction and, thus, the pumping action of the heart that enables it to push blood through the circulatory system. The modeling of cardiac electrical activity is thus related to the modelling of the flow of ions on a microscopic level, and on the propagation of the excitation wave along the muscle fibers on a macroscopic level.
Among the mathematical models on the macroscopic level, Willem Einthoven and Augustus Waller defined the ECG through the conceptual model of a dipole rotating around a fixed point, whose projection on the lead axis determined the lead recordings. Then, a two-dimensional reconstruction of the heart activity in the frontal plane was possible using Einthoven's limb leads I, II and III as theoretical basis.
Later on, the rotating cardiac dipole was considered inadequate and was substituted by multipolar sources moving inside a bounded torso domain. The main shortcoming of the methods used to quantify these sources is their lack of details, which are however very relevant to realistically simulate cardiac phenomena.
On the other hand, microscopic models try to represent the behaviour of single cells and to connect them considering their electrical properties. These models present some challenges related to the different scales that need to be captured, in particular considering that, especially for large scale phenomena such as re-entry or body surface potential, the collective behaviour of the cells is more important than that of every single cell.
The third option to model the electrical activity of the heart is to consider a so-called "middle-out approach", where the model incorporates both lower and higher level of details. This option considers the behaviour of a block of cells, called a continuum cell, thus avoiding scale and detail problems. The model obtained is called bidomain model, which is often replaced by its simplification, the monodomain model.
Bidomain model
The basic assumption of the bidomain model is that the heart tissue can be divided into two ohmic conducting continuous media, connected but separated through the cell membrane. These media are called the intracellular and extracellular regions, the former representing the cellular tissue, and the latter representing the space between cells.
The standard formulation of the bidomain model, including a dynamical model for the ionic current, is the following
where the principal unknowns are the transmembrane and extracellular potentials; the model further involves the ionic current, which also depends on a so-called gating variable (accounting for cellular-level ionic behaviour), an external current applied to the domain, the intracellular and extracellular conductivity tensors, the surface-to-volume ratio of the cell membrane and the membrane capacitance per unit area. Here the domain represents the heart muscle.
The boundary conditions for this version of the bidomain model are obtained through the assumption that there is no flow of intracellular potential outside of the heart, which means that
the intracellular current flux across the boundary of the heart domain, measured along its outward unit normal, vanishes.
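For reference, a widely used parabolic-elliptic form of the bidomain system, including the boundary condition just described, reads as follows. The symbol names (v for the transmembrane potential, u_e for the extracellular potential, w for the gating variables, Σ_i and Σ_e for the conductivity tensors, χ for the surface-to-volume ratio, C_m for the membrane capacitance, I_ion and I_app for the ionic and applied currents, Ω_H for the heart domain) are conventional choices assumed here, so this should be read as an indicative sketch rather than the exact notation of the original references.

```latex
\[
\begin{aligned}
\chi C_m\,\partial_t v + \chi\, I_{\mathrm{ion}}(v,w)
  - \nabla\cdot\big(\Sigma_i \nabla (v + u_e)\big) &= I_{\mathrm{app}}
  && \text{in } \Omega_H,\\
-\nabla\cdot\big((\Sigma_i + \Sigma_e)\nabla u_e\big)
  - \nabla\cdot\big(\Sigma_i \nabla v\big) &= 0
  && \text{in } \Omega_H,\\
\partial_t w &= F(v,w) && \text{in } \Omega_H,\\
\big(\Sigma_i \nabla (v + u_e)\big)\cdot \mathbf{n} &= 0 && \text{on } \partial\Omega_H.
\end{aligned}
\]
```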
Monodomain model
The monodomain model is a simplification of the bidomain model that, in spite of some unphysiological assumptions, is able to represent realistic electrophysiological phenomena at least for what concerns the transmembrane potential .
The standard formulation is the following partial differential equation, whose only unknown is the transmembrane potential:
where a parameter relating the intracellular and extracellular conductivity tensors appears.
The boundary condition used for this model is again a zero-flux (homogeneous Neumann) condition on the heart boundary.
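A common way of writing the resulting monodomain problem, again with the assumed symbol names above and under the additional assumption of equal anisotropy ratios (Σ_e = λΣ_i for a scalar λ), is the following sketch:

```latex
\[
\chi C_m\,\partial_t v + \chi\, I_{\mathrm{ion}}(v,w)
  - \nabla\cdot\Big(\tfrac{\lambda}{1+\lambda}\,\Sigma_i \nabla v\Big) = I_{\mathrm{app}}
  \quad \text{in } \Omega_H,
\qquad
\Big(\tfrac{\lambda}{1+\lambda}\,\Sigma_i \nabla v\Big)\cdot\mathbf{n} = 0
  \quad \text{on } \partial\Omega_H .
\]
```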
Torso tissue model
In the forward problem of electrocardiography, the torso is seen as a passive conductor and its model can be derived starting from the Maxwell's equations under quasi-static assumption.
The standard formulation consists of a partial differential equation with one unknown scalar field, the torso potential . Basically, the torso model is the following generalized Laplace equation
where the conductivity tensor of the torso appears, and the equation holds in the domain surrounding the heart, i.e. the human torso.
Derivation
As for the bidomain model, the torso model can be derived from the Maxwell's equations and the continuity equation after some assumptions. First of all, since the electrical and magnetic activity inside the body is generated at low level, a quasi-static assumption can be considered. Thus, the body can be viewed as a passive conductor, which means that its capacitive, inductive and propagative effect can be ignored.
Under the quasi-static assumption, Maxwell's equations reduce to the condition that the electric field is irrotational (its curl vanishes),
and the continuity equation states that the total current density is divergence-free.
Since its curl is zero, the electrical field can be represented as the negative gradient of a scalar potential field, the torso potential,
where the negative sign means that the current flows from higher to lower potential regions.
Then, the total current density can be expressed in terms of the conduction current and other applied currents, so that the continuity equation, combined with the gradient representation of the electric field, yields an equation for the torso potential whose source term is the applied current per unit volume.
Finally, since aside from the heart there is no current source inside the torso, the current per unit volume can be set to zero, giving the generalized Laplace equation, which represents the standard formulation of the diffusive problem inside the torso.
Boundary condition
The boundary condition accounts for the properties of the medium surrounding the torso, i.e. of the air around the body. Generally, air has null conductivity, which means that the current cannot flow outside the torso. This is translated into a zero-flux condition
imposed along the unit outward normal on the torso boundary, i.e. the torso surface.
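Collecting the equation and its boundary condition, the diffusive torso problem can be summarised as in the following sketch, where u_T, σ_T, Ω_T and Γ_ext are assumed names for the torso potential, the torso conductivity tensor, the torso domain and its external surface:

```latex
\[
\nabla\cdot\big(\sigma_T \nabla u_T\big) = 0 \quad \text{in } \Omega_T,
\qquad
\big(\sigma_T \nabla u_T\big)\cdot\mathbf{n}_T = 0 \quad \text{on } \Gamma_{\mathrm{ext}}.
\]
```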
Torso conductivity
Usually, the torso is considered to have isotropic conductivity, which means that the current flows in the same way in all directions. However, the torso is not an empty or homogeneous envelope, but contains different organs characterized by different conductivity coefficients, which can be experimentally obtained. A simple example of conductivity parameters in a torso that considers the bones and the lungs is reported in the following table.
Heart-torso models
The coupling between the electrical activity model and the torso model is achieved by means of suitable boundary conditions at the epicardium, i.e. at the interface surface between the heart and the torso.
The heart-torso model can be fully coupled, if a perfect electrical transmission between the two domains is considered, or can be uncoupled, if the heart electrical model and the torso model are solved separately with a limited or imperfect exchange of information between them.
Fully coupled heart-torso models
The complete coupling between the heart and the torso is obtained by imposing a perfect electrical transmission condition between the heart and the torso. This is done by considering two equations which establish a relationship between the extracellular potential and the torso potential on the epicardium.
These equations ensure the continuity of both the potential and the current across the epicardium.
Using these boundary conditions, it is possible to obtain two different fully coupled heart-torso models, considering either the bidomain or the monodomain model for the heart electrical activity. From the numerical viewpoint, the two models are computationally very expensive and have similar computational costs.
Alternative boundary conditions
Boundary conditions that represent a perfect electrical coupling between the heart and the torso are the most used and the classical ones. However, between the heart and the torso there is the pericardium, a sac with a double wall that contains a serous fluid which has a specific effect on the electrical transmission. Considering the capacitance and the resistive effect that the pericardium has, alternative boundary conditions that take into account this effect can be formulated as follows
Formulation with the bidomain model
The fully coupled heart-torso model, considering the bidomain model for the heart electrical activity, in its complete form is
where the first four equations are the partial differential equations representing the bidomain model, the ionic model and the torso model, while the remaining ones represent the boundary conditions for the bidomain and torso models and the coupling conditions between them.
Formulation with the monodomain model
The fully coupled heart-torso model considering the monodomain model for the electrical activity of the heart is more complicated than the bidomain problem. Indeed, the coupling conditions relate the torso potential to the extracellular potential, which is not computed by the monodomain model. Thus, it is necessary to also use the second equation of the bidomain model (under the same assumptions under which the monodomain model is derived), yielding:
This way, the coupling conditions do not need to be changed, and the complete heart-torso model is composed of two different blocks:
First the monodomain model with its usual boundary condition must be solved:
Then, the coupled model that includes the computation of the extracellular potential, the torso model and the coupling conditions must be solved:
Uncoupled heart-torso models
The fully coupled heart-torso models are very detailed models but they are also computationally expensive to solve. A possible simplification is provided by the so-called uncoupled assumption, in which the heart is considered completely electrically isolated from the torso. Mathematically, this is done by imposing that the current cannot flow across the epicardium, from the heart to the torso, namely
Applying this condition to the boundary conditions of the fully coupled models, it is possible to obtain two uncoupled heart-torso models, in which the electrical models can be solved separately from the torso model, reducing the computational costs.
Uncoupled heart-torso model with the bidomain model
The uncoupled version of the fully coupled heart-torso model that uses the bidomain to represent the electrical activity of the heart is composed of two separated parts:
The bidomain model in its isolated form
The torso diffusive model in its standard formulation, with the potential continuity condition
Uncoupled heart-torso model with the monodomain model
As in the case of the fully coupled heart-torso model which uses the monodomain model, in the corresponding uncoupled model the extracellular potential also needs to be computed. In this case, three different and independent problems must be solved:
The monodomain model with its usual boundary condition:
The problem to compute the extracellular potential with a boundary condition on the epicardium prescribing no intracellular current flow:
The torso diffusive model with the potential continuity boundary condition at the epicardium:
Electrocardiogram computation
Solving the fully coupled or the uncoupled heart-torso models makes it possible to obtain the electrical potential generated by the heart at every point of the human torso, and in particular on the whole surface of the torso. Defining the electrode positions on the torso, it is possible to find the time evolution of the potential at such points. Then, the electrocardiograms can be computed, for example according to the 12 standard leads, considering the following formulas
where the potentials are evaluated at the standard locations of the electrodes.
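As an indication, writing φ for the computed torso potential evaluated at the standard electrode sites (right arm R, left arm L, left foot F and chest positions V1-V6), the classical lead definitions take the form sketched below; the exact notation varies between references.

```latex
\[
\begin{aligned}
\mathrm{I} &= \varphi_L-\varphi_R, \qquad
\mathrm{II} = \varphi_F-\varphi_R, \qquad
\mathrm{III} = \varphi_F-\varphi_L,\\
\mathrm{aVR} &= \varphi_R-\tfrac{1}{2}(\varphi_L+\varphi_F), \qquad
\mathrm{aVL} = \varphi_L-\tfrac{1}{2}(\varphi_R+\varphi_F), \qquad
\mathrm{aVF} = \varphi_F-\tfrac{1}{2}(\varphi_L+\varphi_R),\\
\mathrm{V}_i &= \varphi_{V_i}-\tfrac{1}{3}(\varphi_L+\varphi_R+\varphi_F), \qquad i=1,\dots,6.
\end{aligned}
\]
```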
Numerical methods
The heart-torso models are expressed in terms of partial differential equations whose unknowns are function of both space and time. They are in turn coupled with an ionic model which is usually expressed in terms of a system of ordinary differential equations. A variety of numerical schemes can be used for the solution of those problems. Usually, the finite element method is applied for the space discretization and semi-implicit finite-difference schemes are used for the time discretization.
Uncoupled heart-torso models are the simplest to treat numerically because the heart electrical model can be solved separately from the torso one, so that classic numerical methods to solve each of them can be applied. This means that the bidomain and monodomain models can be solved, for example, with a backward differentiation formula for the time discretization, while the problems to compute the extracellular potential and the torso potential can be easily solved by applying only the finite element method, because they are time independent.
The fully coupled heart-torso models, instead, are more complex and need more sophisticated numerical models. For example, the fully heart-torso model that uses the bidomain model for the electrical simulation of the cardiac behaviour can be solved considering domain decomposition techniques, such as a Dirichlet-Neumann domain decomposition.
Geometric torso model
To simulate an electrocardiogram using the fully coupled or uncoupled models, a three-dimensional reconstruction of the human torso is needed. Today, diagnostic imaging techniques such as MRI and CT can provide sufficiently accurate images that allow anatomical human parts to be reconstructed in detail and, thus, a suitable torso geometry to be obtained.
For example, the Visible Human Data is a useful dataset to create a three-dimensional torso model detailed with internal organs including the skeletal structure and muscles.
Dynamical model for the electrocardiogram
Even if the results are quite detailed, solving a three-dimensional model is usually quite expensive. A possible simplification is a dynamical model based on three coupled ordinary differential equations.
The quasi-periodicity of the heart beat is reproduced by a three-dimensional trajectory around an attracting limit cycle in the plane. The principal peaks of the ECG, namely the P, Q, R, S and T waves, are described at fixed angles along this limit cycle, which give the following three ODEs
with parameters specifying the angular position, amplitude and width of each of the P, Q, R, S and T events.
The equations can be easily solved with classical numerical algorithms like Runge-Kutta methods for ODEs.
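As a rough illustration (not code from the references above), the following sketch integrates a dynamical model of this type with an explicit Runge-Kutta scheme; the angles, amplitudes and widths assigned to the P, Q, R, S and T events are example values chosen only to produce a plausible waveform.

```python
# Dynamical ECG model: a limit cycle in the (x, y) plane plus a z equation whose
# Gaussian-shaped events placed at fixed angles generate the P, Q, R, S, T peaks.
import numpy as np
from scipy.integrate import solve_ivp

theta_i = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])  # event angles (assumed)
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])                              # event amplitudes (assumed)
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])                                 # event widths (assumed)
omega = 2.0 * np.pi                                                        # one revolution (beat) per second

def ecg_rhs(t, state):
    x, y, z = state
    alpha = 1.0 - np.sqrt(x**2 + y**2)                            # attraction towards the unit limit cycle
    theta = np.arctan2(y, x)
    dtheta = (theta - theta_i + np.pi) % (2.0 * np.pi) - np.pi    # wrapped phase differences
    dx = alpha * x - omega * y
    dy = alpha * y + omega * x
    dz = -np.sum(a_i * dtheta * np.exp(-dtheta**2 / (2.0 * b_i**2))) - z
    return [dx, dy, dz]

sol = solve_ivp(ecg_rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="RK45", max_step=1e-3)
synthetic_ecg = sol.y[2]        # the z component plays the role of the ECG trace
print(synthetic_ecg.min(), synthetic_ecg.max())
```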
See also
Monodomain model
Bidomain model
Electrocardiography
References
Cardiac electrophysiology
Cardiac procedures
Electrodiagnosis
Electrophysiology
Mathematics in medicine
Medical tests
Partial differential equations
Mathematical modeling
Numerical analysis | Forward problem of electrocardiology | [
"Mathematics"
] | 3,122 | [
"Mathematical modeling",
"Applied mathematics",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Mathematics in medicine",
"Approximations"
] |
62,069,340 | https://en.wikipedia.org/wiki/De%20Brouckere%20mean%20diameter | The De Brouckere mean diameter is the mean of a particle size distribution weighted by the volume (also called volume-weighted mean diameter, volume moment mean diameter, or volume-weighted mean size). It is the mean diameter that is directly obtained in particle size measurements where the measured signal is proportional to the volume of the particles. The most prominent examples are laser diffraction and acoustic spectroscopy (Coulter counter).
The De Brouckere mean is defined in terms of the moment-ratio system as
D[4,3] = (Σi ni Di^4) / (Σi ni Di^3),
where ni is the frequency of occurrence of particles in size class i, having mean diameter Di. Usually, for logarithmically spaced classes, the geometric mean size of the size class is taken.
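For illustration, the following sketch evaluates D[4,3] (and, for comparison, the Sauter mean D[3,2]) for a small made-up size distribution; the class diameters and counts are example values only.

```python
# Volume-weighted (De Brouckere) mean of a binned particle size distribution.
import numpy as np

class_diameters = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # mean diameter of each size class (micrometres)
class_counts = np.array([500, 300, 150, 40, 10])          # number of particles in each class

d43 = np.sum(class_counts * class_diameters**4) / np.sum(class_counts * class_diameters**3)
d32 = np.sum(class_counts * class_diameters**3) / np.sum(class_counts * class_diameters**2)

print(f"De Brouckere mean D[4,3]: {d43:.2f}")
print(f"Sauter mean       D[3,2]: {d32:.2f}")
```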
Applications
The De Brouckere mean has the advantage of being more sensitive to the larger particles, which take up the largest volume of the sample, therefore giving crucial information about the product in the mining and milling industries. It was also used in combustion analysis, as the D[4,3] is less affected by the presence of very small particulate residuals, which enabled the evaluation of the primary diesel spray.
Further reading
See also
Sauter mean diameter
References
Fluid dynamics | De Brouckere mean diameter | [
"Chemistry",
"Engineering"
] | 240 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
62,078,466 | https://en.wikipedia.org/wiki/Journal%20of%20Materials%20in%20Civil%20Engineering | The Journal of Materials in Civil Engineering is a monthly peer-reviewed scientific journal established in 1989 by the American Society of Civil Engineers. It covers research and best-practice concerns regarding the development, processing, evaluation, applications, and performance of construction materials in civil engineering. It consists of four sections: cementitious material, asphalt, geo-materials, and hybrids (which encompass steel, timber, masonry, and composite materials).
Abstracting and indexing
The journal is abstracted and indexed in Ei Compendex, ProQuest databases, Civil engineering database, Inspec, Scopus, and EBSCO databases.
References
External links
Civil engineering journals
American Society of Civil Engineers academic journals
Monthly journals
Materials science journals
English-language journals
Academic journals established in 1989 | Journal of Materials in Civil Engineering | [
"Materials_science",
"Engineering"
] | 154 | [
"Civil engineering journals",
"Civil engineering",
"Materials science journals",
"Materials science"
] |
73,060,426 | https://en.wikipedia.org/wiki/Hydroxyethyl%20acrylate | Hydroxyethyl acrylate is an organic chemical and an aliphatic compound. It has the formula C5H8O3 and the CAS Registry Number 818-61-1. It is REACH registered with an EU number of 212-454-9. It has dual functionality, containing a polymerizable acrylic group and a terminal hydroxy group. It is used to make emulsion polymers along with other monomers and the resultant resins are used in coatings, sealants, adhesives and elastomers and other applications.
Synthesis
There are a number of patents and synthesis papers to produce the material mostly aimed at reducing or removing heavy metals as catalysts. The traditional manufacturing process calls for the reaction of ethylene oxide with acrylic acid in the presence of a metal catalyst.
Properties
The material is a clear water-white liquid with a mild but pungent ester like odor. It has a low freezing point.
Applications
The most common use for the material is to be copolymerized with other acrylate and methacrylate monomers to make emulsion and other polymers including hydrogels. Modification of rubbers and similar compounds is also a use for the material. The resultant polymers may be used to manufacture pressure-sensitive adhesives.
Toxicity
The toxicity of the material has been studied and is fairly well understood.
See also
(Hydroxyethyl)methacrylate
Synthetic resin
References
Monomers
Acrylate esters | Hydroxyethyl acrylate | [
"Chemistry",
"Materials_science"
] | 305 | [
"Monomers",
"Polymer chemistry"
] |
73,065,907 | https://en.wikipedia.org/wiki/Contingency%20%28electrical%20grid%29 | In an electrical grid, contingency is an unexpected failure of a single principal component (e.g., an electrical generator or a power transmission line) that causes the change of the system state large enough to endanger the grid security. Some protective relays are set up in a way that multiple individual components are disconnected due to a single fault, in this case, taking out of all the units in a group counts as a single contingency. A scheduled outage (like maintenance) is not a contingency.
The choice of term emphasizes the fact that a single fault can cause severe damage to the system so quickly that the operator will not have time to intervene, and therefore a reaction to the fault has to be defensively pre-built into the system configuration. Some sources use the term interchangeably with "disturbance" and "fault".
Contingency analysis
The contingency analysis application periodically runs on the computers at the operations centers, providing suggestions to the operators based on the current state of the grid and the contingency selection. The software provides answers to the "what if" scenarios in the form of "alarms": "Loss of component X will result in overload of Y by Z%". By the 1990s, analysis of a large interconnected system involved testing of many thousands of contingency events (millions if double contingencies were considered). Evaluating the effect of each contingency requires performing a power flow calculation. Due to the rapid change of the state of a power system, the run of the application must complete within minutes (up to 30) for the results to be useful. Typically only selected contingencies, mostly single ones with some double ones, are considered to speed up the process. The selection of contingencies relies on engineering judgment to choose the ones most likely to cause problems.
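The following sketch illustrates the idea of such an N-1 screening loop on a toy three-bus network, using a DC power-flow approximation; the network data, limits and injections are invented for the example, and a real application would use full AC models, far larger contingency lists and checks for islanding.

```python
# Toy "what if" N-1 screening: drop each line in turn, re-solve a DC power flow
# and raise an alarm for every resulting branch overload.
import numpy as np

# line: (from bus, to bus, susceptance [p.u.], flow limit [p.u.])
lines = [(0, 1, 10.0, 1.0), (1, 2, 10.0, 1.0), (0, 2, 10.0, 1.0)]
injections = np.array([1.5, -0.7, -0.8])   # net injection per bus [p.u.], bus 0 is the slack
slack, n_bus = 0, 3

def dc_power_flow(active_lines):
    """Solve the DC power flow and return the flow on every active line."""
    B = np.zeros((n_bus, n_bus))
    for i, j, b, _ in active_lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n_bus) if k != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], injections[keep])
    return {(i, j): b * (theta[i] - theta[j]) for i, j, b, _ in active_lines}

for outage in [None] + list(range(len(lines))):            # base case plus single outages
    active = [l for k, l in enumerate(lines) if k != outage]
    flows = dc_power_flow(active)
    label = "base case" if outage is None else f"loss of line {lines[outage][:2]}"
    for (i, j, b, limit) in active:
        if abs(flows[(i, j)]) > limit:
            overload = 100 * (abs(flows[(i, j)]) / limit - 1)
            print(f"ALARM: {label} overloads line {(i, j)} by {overload:.0f}%")
```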
Credible contingencies
The foreseen and analyzed contingencies are called credible. Examples of these are failures of:
a transmission line or tie line / HVDC link;
a generator;
a transformer;
a variable renewable energy cluster;
a voltage compensation device.
In continental Europe these contingencies are considered "normal", with "exceptional" credible contingencies being the failures of:
a double circuit transmission line;
two generators;
a bus bar.
Non-credible (also called "out-of-range") contingencies are not used in planning, as they are rare and their effects are hard to predict, for example, failures of:
an entire electrical substation;
a transmission tower that carries more than two lines.
N-X contingency planning
Reliability of the energy supply usually requires that any single major unit failure leaves the system with enough resources to supply the current load. The system that satisfies this requirement is described as meeting the N-1 contingency criterion (N designates the number of pieces of equipment). The N-2 and N-3 contingency refers to planning for a simultaneous loss of, respectively, 2 or 3 major units; this is sometimes done for the critical area (e.g. downtown).
The N-1 requirement is used throughout the network, from generation to substations. At the distribution level, however, the planners frequently allow a more relaxed interpretation: a single failure should ensure uninterrupted delivery of power to almost all the customers at least at the "emergency level" (Range B of the ANSI C84.1), but a small section of the network that contains the original fault might require manual switching with a service interruption for about an hour.
The popularity of contingency planning is based on its advantages:
each of the N elements in the system is analyzed separately, limiting the amount of work to be done and simplifying the failure options (e.g., generator failure, short circuit);
the process inherently provides a way to deal with the contingency if and when it will happen.
The N-1 contingency planning is typically sufficient for the systems with the usual ratio of peak load to capacity (below 70%). For a system with a substantially higher ratio, the N-1 planning will not deliver satisfactory reliability, and even N-2 and N-3 criteria might not be sufficient; therefore the reliability-based planning is used that considers the probabilities of the individual contingencies.
N-1-1 contingency is defined as a single fault followed by manual recovery procedures, with another fault occurring after the successful recovery from the first failure. Normal operating conditions are sometimes referred to as N-0.
References
Sources
Power engineering | Contingency (electrical grid) | [
"Engineering"
] | 934 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
73,069,302 | https://en.wikipedia.org/wiki/Magnetoactive%20phase%20transitional%20matter | Magnetoactive phase transitional matter (MPTM) are miniature robotic machines that can change their shape by switching between liquid and solid state.
Description
MPTMs consist of liquid metal embedded with magnetic neodymium particles. MPTMs can be programmed to change shape when needed, by using heating and ambient cooling. Heat is generated from an incorporated heating element, or by use of magnetic pulses, switching the robot into liquid mode. Ambient temperature provides cooling to change the robot into a solid state. The magnetism of the metal holds the machine together while in liquid mode.
History
MPTMs were first created by a collaboration of scientists from Sun Yat-sen University, Carnegie Mellon University, Chinese University of Hong Kong, and Zhejiang University. Their robot incorporated a heating element, and was able to melt itself to change shape. The first MPTM incorporated neodymium, iron, and boron microparticles in gallium and had a melting point of 29.8 °C.
Potential uses
A January 2023 academic paper demonstrated the potential to use MPTMs for mechanical assembly in hard to reach locations, and in medical procedures. Medical use cases were delivery of drugs in the human stomach and the removal of foreign objects.
See also
Shapeshifting
References
2022 in science
Matter
Shapeshifting
2020s robots
Medical robots | Magnetoactive phase transitional matter | [
"Physics"
] | 265 | [
"Matter"
] |
73,074,374 | https://en.wikipedia.org/wiki/Yan%27s%20theorem | In probability theory, Yan's theorem is a separation and existence result. It is of particular interest in financial mathematics where one uses it to prove the Kreps-Yan theorem.
The theorem was published by Jia-An Yan. It was proven for the L1 space and later generalized by Jean-Pascal Ansel to Lp spaces.
Yan's theorem
Notation:
the closure of a set;
the indicator function of a set;
the conjugate index q of the exponent p, defined by 1/p + 1/q = 1.
Statement
Consider a probability space and the space of non-negative and bounded random variables on it. Further, let a convex subset be given.
Then the following three conditions are equivalent:
For every non-zero non-negative bounded random variable there exists a constant such that the corresponding separation condition holds.
For every indicator function of an event of positive probability there exists a constant such that the corresponding separation condition holds.
There exists a random variable that is strictly positive almost surely and for which the supremum of the expectations over the convex set is finite.
Literature
Freddy Delbaen and Walter Schachermayer: The Mathematics of Arbitrage (2005). Springer Finance
References
Probability theorems | Yan's theorem | [
"Mathematics"
] | 195 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
73,075,588 | https://en.wikipedia.org/wiki/Community%20Notes | Community Notes, formerly known as Birdwatch, is a feature on X (formerly Twitter) where contributors can add context such as fact-checks under a post, image or video. It is a community-driven content moderation program, intended to provide helpful and informative context, based on a crowd-sourced system. Notes are applied to potentially misleading content by a bridging-based algorithm not based on majority rule, but instead agreement from users on different sides of the political spectrum.
The program launched in 2021 and became widespread on X in 2023. Initially shown to U.S. users only, notes were popularized in March 2022 over misinformation in the Russian invasion of Ukraine followed by COVID-19 misinformation in October. Birdwatch was then rebranded to Community Notes and expanded in November 2022. As of November 2023, it had approximately 133,000 contributors; notes reportedly receive tens of millions of views per day, with its goal being to counter propaganda and misinformation. According to investigations and studies, the vast majority of users do not see notes correcting content. In May 2024, a study found that COVID-19 vaccine notes were accurate 96% of the time.
Critics have also highlighted how it has spread disinformation, is vulnerable to manipulation, and has been inconsistent in its application of notes, as well as in its efforts to combat misinformation. Elon Musk, the owner of X, considers the program a game changer with considerable potential. After a post by Musk received a Community Note, he claimed the program had been manipulated by state actors.
History
In February 2020, Twitter began introducing labels and warning messages intended to limit potentially harmful and misleading content. In August 2020, development of Birdwatch was announced, initially described as a moderation tool. Twitter first launched the Birdwatch program in January 2021, intended as a way to debunk misinformation and propaganda, with a pilot program of 1,000 contributors, weeks after the January 6 United States Capitol attack. The aim was to "build Birdwatch in the open, and have it shaped by the Twitter community." In November 2021, Twitter updated the Birdwatch moderation tool to limit the visibility of contributors' identities by creating aliases for their accounts, in an attempt to limit bias towards the author of notes.
Twitter then expanded access to notes made by the Birdwatch contributors in March 2022, giving a randomized set of US users the ability to view notes attached to tweets and rate them, with a pilot of 10,000 contributors. On average, contributors were noting 43 times a day in 2022 prior to the Russian invasion of Ukraine. This then increased to 156 on the day of the invasion, estimated to be a very small portion of the misleading posts on the platform. By March 1, only 359 of 10,000 contributors had proposed notes in 2022, while a Twitter spokeswoman described plans to scale up the program, with the focus on "ensuring that Birdwatch is something people find helpful and can help inform understanding".
By September 2022, the program had expanded to 15,000 users. In October 2022, the most commonly published notes were related to COVID-19 misinformation based on historical usage. In November 2022, at the request of new owner Elon Musk, Birdwatch was rebranded to Community Notes, taking an open-source approach to deal with misinformation, and expanded to Europe and countries outside of the US.
Community Notes was then extended to include notes on misleading images in May 2023 and in September 2023 further extended to videos, but only for a group of power-users referred to as "Top Writers". Twitter subsequently ended the ability to report misleading posts, instead relying exclusively on Community Notes, with contributors proposing over 21,200 notes on the platform.
In October 2023, Elon Musk announced that posts "corrected" by Community Notes would no longer be eligible for ad revenue in order to "maximize the incentive for accuracy over sensationalism" and in order to discourage the spread of misinformation and disinformation on the platform. The move was criticised by some users and applauded by others. As of November 2023, it has expanded to over 50 countries, with approximately 133,000 contributors.
In January 2025, Mark Zuckerberg announced that Meta will remove fact-checkers for Facebook, Instagram, and Threads, replacing them with a community-orientated system, similar to Community Notes. According to Meta, the feature will initially be launched for U.S. users.
Operation
The Community Notes algorithm publishes notes based on agreement from contributors who have a history of disagreeing. Rather than based on majority rule, the program's algorithm prioritizes notes that receive ratings from a "diverse range of perspectives". Programmer Vitalik Buterin has described the open-source algorithm as "insanely complicated". For a note to be published, a contributor must first propose a note under a tweet. The program assigns different values to contributors' ratings, categorising users with similar rating histories as a form of "opinion classification", determined by a vague alignment with the left and right-wing political spectrum. The bridging-based machine-learning algorithm requires ratings from both sides of the spectrum in order to publish notes, that can have the intended effect of decreasing interaction with such content.
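As a rough, purely illustrative sketch of the bridging idea, and not the production Community Notes algorithm (whose open-source implementation is considerably more involved), ratings can be modelled as a global intercept plus user and note intercepts plus a product of viewpoint factors; agreement that is explained by a shared viewpoint is then absorbed by the factor term rather than by the note's "helpfulness" intercept, which, roughly speaking, is what is compared against a fixed threshold to decide whether a note is shown. All data, dimensions and hyperparameters below are invented.

```python
# Toy matrix-factorisation scoring: rating ≈ mu + user_intercept + note_intercept
# + user_factor * note_factor, fitted by plain gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(0)
ratings = [  # (user, note, rating: 1 = helpful, 0 = not helpful) -- invented toy data
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),   # note 0: rated helpful across the board
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),   # note 1: rated helpful by one "side" only
]
n_users, n_notes, dim = 4, 2, 1
mu, u_int, n_int = 0.0, np.zeros(n_users), np.zeros(n_notes)
u_fac = rng.normal(0, 0.1, (n_users, dim))
n_fac = rng.normal(0, 0.1, (n_notes, dim))
lr, reg, reg_int = 0.05, 0.03, 0.15               # heavier regularisation on the intercepts

for _ in range(2000):
    for u, n, r in ratings:
        err = (mu + u_int[u] + n_int[n] + u_fac[u] @ n_fac[n]) - r
        mu -= lr * err
        u_int[u] -= lr * (err + reg_int * u_int[u])
        n_int[n] -= lr * (err + reg_int * n_int[n])
        u_fac[u], n_fac[n] = (u_fac[u] - lr * (err * n_fac[n] + reg * u_fac[u]),
                              n_fac[n] - lr * (err * u_fac[u] + reg * n_fac[n]))

for n in range(n_notes):   # the note supported across perspectives ends with the higher intercept
    print(f"note {n}: helpfulness intercept {n_int[n]:+.2f}")
```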
Contributors are volunteers with access to an interface from which they have the ability to monitor tweets and replies that may be misleading. Notes in need of ratings by contributors are located under a "Needs your help" section of the interface. Other contributors then give their opinion on the usefulness of the note, identifying notes as "Helpful" or "Not Helpful". The contributor gets points if their note is validated, known as "Rating Impact", that reflects how helpful a contributors' ratings have been. X users are able to vote on whether they find notes helpful or not, but must apply to become contributors in order to write notes, the latter being restricted by "Rating Impact" as well as the Community Notes guidelines.
Application
Since 2023, Community Notes are often attached to shared articles missing context, misleading advertisements or political tweets with false arguments, from content receiving widespread attention.
Notes have appeared on posts by government accounts and various politicians: the White House, the Federal Bureau of Investigation, and U.S. President Joe Biden; UK Prime Ministers Rishi Sunak and Liz Truss; former U.S. speakers of the House and presidential candidates Ron DeSantis and Vivek Ramaswamy; U.S. representatives, senators, and Australian ministers; as well as X owner Elon Musk multiple times, that in February 2024 led to Musk arguing with the program.
The feature does not directly mention fact-checking but instead indicates that "readers added context". They can also note when an image is digitally altered or AI-generated. X allows contributors to add Community Notes to adverts, which the Financial Times noted was good for consumers but not for advertisers. This resulted in brands such as Apple, Samsung, Uber and Evony receiving notes on their adverts and being accused of false or misleading posts, advertisers deleting certain posts that received notes, as well as modifying content for future advertisements.
A source is attached to the note so the information can be verified, in a similar manner to Wikipedia, and notes reportedly received tens of millions of views per day. Elon Musk, the owner of X, considers the program as a "gamechanger for combating wrong information" and having "incredible potential for improving information accuracy". In December 2023, after receiving a note on one of his posts, Musk thanked contributors for "jumping in the honey pot" after stating that the system had been "gamed by state actors", with the intent of detecting so-called bad actors.
In July 2024, as part of a pilot program, X announced the ability for eligible users to request Community Notes for certain posts, that would be directed to "Top Writers" of the software. The threshold of five requests within 24 hours would determine a note being published.
Analysis
Former head of Twitter's Trust and Safety, Yoel Roth, has since expressed concern over the effectiveness of the system in the early stages of the program, stating that Birdwatch was never supposed to replace the curation team, but instead intended to complement it. Another former employee said it was "an imperfect replacement for Trust and Safety staff". In April 2022, a study presented by MIT researchers subsequently found users overwhelmingly prioritised political content, even though 80% were correctly considered misleading.
Wired noted that in the backend of the database most notes remain unpublished, and that numerous contributors engage in "conspiracy-fueled" discussions. According to Musk, anyone trying to "weaponize Community Notes to demonetize people will be immediately obvious", due to the open-source nature of the code and data.
Regarding the situation in Israel and Gaza, with the difficulty of identifying accurate information and the number of unknown factors, MIT professor David Rand said "what I expect the crowd to produce is a lot of noise", regarding the crowd-sourced system. A contributor otherwise described that the system is "not really scalable for the amount of media that's being consumed or posted in any given day", while X states that the program is having a "significant impact on tackling disinformation on the platform".
Studies
In October 2023, Community Notes experienced multi-day delays in publishing notes on misinformation in the 2023 Israel-Hamas war or failed to do so. One study by NBC News found that in the case of a fake White House press release claiming the destruction of the St. Porphyrius Orthodox Church – a week before the destruction – only 8% of posts had notes published, 26% had unpublished notes, while the majority had no proposed notes. Analysis from NewsGuard of 250 of the most-engaged posts, spreading the most common unsubstantiated claims about the Israel-Hamas war and viewed more than 100 million times, failed to receive notes 68% of the time. The report found Community Notes were "inconsistently applied to top myths relating to the conflict." The fact-checking website Snopes discovered three posts from verified users, who had shared a video of a hospitalized man from Gaza with false captions claiming it showed "crisis actors", had failed to receive any Community Notes after 24 hours. Bellingcat found the program spread false information, in reference to Taylor Swift's bodyguard due to misinformation. Wired has documented that Community Notes is susceptible to disinformation, after a graphic Hamas video shared by Donald Trump Jr. was falsely flagged as being a year old, but was instead found to be part of the recent conflict. The original note was later replaced with another citing the report from Wired.
In November 2023, the Atlantic Council conducted an interactive study of Community Notes highlighting how the system operated slowly and inconsistently regarding Israel and Gaza misinformation. In one example, an image originally received a Community Note but continued to spread regardless receiving over 3 million views after a week. Hundreds of viral posts from the notes public database were analyzed and according to researchers fast-moving breaking news wasn't labeled. Across 400 posts of misinformation, a note took on average 7 hours to appear, while others took 70 hours. The analysis however did show that over 50% of the posts received a note within 8 hours, with only a few taking longer than 2 days. The study included 100 tweets from 83 users who had signed up to X Premium in the past 4 months, along with 42 tweets from 25 accounts that were reinstated by Elon Musk, including Laura Loomer. The study also included Jackson Hinkle, who appeared multiple times.
Another NewsGuard report found advertising appearing on 15 posts with Community Notes attached in the week of November 13, 2023, indicating that "misinformation super-spreaders" may still be eligible for ad revenue, despite posts with notes attached being ineligible according to Musk. On November 30, a Mashable investigation found most users never see published notes, with examples of notes seen by less than 1% to 5% of users who viewed misinformation content, and overall, a disproportionate number of views on posts compared to the attached notes.
In May 2024, John W. Ayers, a behavioural scientist from the University of California, San Diego, published a study in the Journal of the American Medical Association based on fact-checking of COVID-19 vaccines. In the sample of 205 Community Notes, according to Ayers and other researchers, the information was accurate in 96% of notes, and 87% of sources were of high quality. The lead author, according to Bloomberg UK, stated that only a small percentage of misinformation received a note, while published notes were among the most viral content.
In July 2024, after the attempted assassination of Donald Trump, the Center for Countering Digital Hate (CCDH) published a report that of the 100 most popular conspiratorial posts on X about the shooting, only five Community Notes were published to counter the false claim. In October 2024, the CCDH reported that 74% of misinformation about the 2024 United States elections failed to receive notes, based on a sample of 283 posts. Where notes were published, they received 13 times less views than the original post, according to the group.
See also
List of Twitter features
Virtual volunteering
Notes
References
2021 software
Twitter
Software features
Crowdsourcing
Volunteering
Misinformation
Disinformation | Community Notes | [
"Technology"
] | 2,836 | [
"Software features"
] |
68,721,613 | https://en.wikipedia.org/wiki/Nina%20Buchmann | Nina Buchmann is a German ecologist known for her research on the physiology of plants and the impact of plants on biogeochemical cycling. She is a member of the German National Academy of Sciences Leopoldina and an elected fellow of the American Geophysical Union.
Education and career
Buchmann has an undergraduate degree from the University of Bayreuth (1989). In 1993 she finished her Ph.D. there with a research project tracking the incorporation of inorganic nitrogen into trees. Following this, she spent three years at the University of Utah working with James Ehleringer. In 1996 she returned to the University of Bayreuth and finished her habilitation working on the exchange of carbon dioxide between soils and the atmosphere. Starting in 1993, she worked at the Max Planck Institute for Biogeochemistry until she moved to ETH Zurich in 2003, where she is a full professor.
In 2018, she was elected a fellow of the American Geophysical Union who cited "her pioneering work to understand ecophysiological mechanisms regulating ecosystem carbon dynamics locally, regionally and across diverse ecosystems".
Research
Buchmann's research centers on the role of plants in biogeochemical cycling. Some examples of her research include investigations into the ecophysiology of plants and ecosystems, the flux of carbon and water in terrestrial ecosystems, and biogeochemical processes such as the carbon dynamics of the Amazonian rainforest. Buchmann's early research tracked inorganic nitrogen uptake by trees using stable isotopes, and examined the carbon isotopic signature of C-4 grasses and forests, and soils.
Selected publications
Awards
Founding member of the Junge Akademie (Young Academy of Sciences) (2000–2005)
Member of the German National Academy of Sciences Leopoldina (2007)
Fellow, American Geophysical Union (2018)
References
Fellows of the American Geophysical Union
Members of the German National Academy of Sciences Leopoldina
University of Bayreuth alumni
Academic staff of ETH Zurich
Plant ecologists
Biogeochemists
Women ecologists
1965 births
Living people | Nina Buchmann | [
"Chemistry"
] | 421 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
68,723,848 | https://en.wikipedia.org/wiki/Century%20common%20year | A century common year is a common year in the Gregorian calendar that is divisible by 100 but not by 400. Like all common years, these years do not get an extra day in February, meaning they have 365 days instead of 366. These years are the only common years that are divisible by 4.
In the obsolete Julian Calendar, all years that were divisible by 4 were leap years, meaning no century years could be common years. However, this rule adds too many leap days, resulting in the calendar drifting with respect to the seasons, a drift that would also occur (more quickly) if there were no leap years at all. So, in 1582, Pope Gregory XIII introduced a slightly modified version of the Julian Calendar, the Gregorian Calendar, where century years would not be leap years if they are not divisible by 400. Therefore, 1700 was the first century year in the Gregorian Calendar to be a common year. The years 1800 and 1900 were also century common years, and so will be 2100, 2200, 2300, 2500, 2600, 2700, 2900, and 3000.
The Gregorian Calendar repeats itself every 400 years, so century common years start on a Friday if the remainder obtained when dividing the year by 400 is 100 (dominical letter C), Wednesday if the remainder is 200 (dominical letter E), and Monday if the remainder is 300 (dominical letter G). This means that century leap years always begin on a Saturday (dominical letter BA).
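The divisibility rule and the weekday pattern described above can be checked with the standard library, as in the following sketch:

```python
# Gregorian leap-year rule and the weekday of 1 January for century years.
from datetime import date

def is_leap(year: int) -> bool:
    """Divisible by 4, except century years that are not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_century_common_year(year: int) -> bool:
    return year % 100 == 0 and not is_leap(year)

for year in (1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300):
    kind = "century common year" if is_century_common_year(year) else "century leap year"
    print(year, kind, "- 1 January falls on a", date(year, 1, 1).strftime("%A"))
```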
References
External links
An Introduction to Calendars courtesy of the United States Naval Observatory
Frequently Asked Questions about Calendars
History of Gregorian Calendar
Units of time
Calendars
Gregorian calendar | Century common year | [
"Physics",
"Mathematics"
] | 342 | [
"Calendars",
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
68,726,215 | https://en.wikipedia.org/wiki/THC%20morpholinylbutyrate | THC morpholinylbutyrate (SP-111, Δ9-THC-O-[4-(morpholin-4-yl)butyrate]) is a synthetic derivative of tetrahydrocannabinol, developed in the 1970s. It is a prodrug which is converted into THC inside the body, and was one of the first derivatives of THC that is able to form water-soluble salts, giving it a significant advantage over THC for some applications. However, it is less potent than THC and the metabolic conversion to THC is relatively slow and variable, giving it unpredictable pharmacokinetics which has limited its research applications.
See also
THC-O-acetate
THC-O-phosphate
THC hemisuccinate
THC-VHS
Nabitan
O-1057
References
Benzochromenes
Cannabinoids
Prodrugs
4-Morpholinyl compounds | THC morpholinylbutyrate | [
"Chemistry"
] | 199 | [
"Chemicals in medicine",
"Prodrugs"
] |
68,730,378 | https://en.wikipedia.org/wiki/Conformal%20prediction | Conformal prediction (CP) is a machine learning framework for uncertainty quantification that produces statistically valid prediction regions (prediction intervals) for any underlying point predictor (whether statistical, machine, or deep learning) only assuming exchangeability of the data. CP works by computing nonconformity scores on previously labeled data, and using these to create prediction sets on a new (unlabeled) test data point. A transductive version of CP was first proposed in 1998 by Gammerman, Vovk, and Vapnik, and since, several variants of conformal prediction have been developed with different computational complexities, formal guarantees, and practical applications.
Conformal prediction requires a user-specified significance level for which the algorithm should produce its predictions. This significance level restricts the frequency of errors that the algorithm is allowed to make. For example, a significance level of 0.1 means that the algorithm can make at most 10% erroneous predictions. To meet this requirement, the output is a set prediction, instead of a point prediction produced by standard supervised machine learning models. For classification tasks, this means that predictions are not a single class, for example 'cat', but instead a set like {'cat', 'dog'}. Depending on how good the underlying model is (how well it can discern between cats, dogs and other animals) and the specified significance level, these sets can be smaller or larger. For regression tasks, the output is prediction intervals, where a smaller significance level (fewer allowed errors) produces wider intervals which are less specific, and vice versa – more allowed errors produce tighter prediction intervals.
History
The conformal prediction first arose in a collaboration between Gammerman, Vovk, and Vapnik in 1998; this initial version of conformal prediction used what are now called E-values though the version of conformal prediction best known today uses p-values and was proposed a year later by Saunders et al. Vovk, Gammerman, and their students and collaborators, particularly Craig Saunders, Harris Papadopoulos, and Kostas Proedrou, continued to develop the ideas of conformal prediction; major developments include the proposal of inductive conformal prediction (a.k.a. split conformal prediction), in 2002.
A book on the topic was written by Vovk and Shafer in 2005, and a tutorial was published in 2008.
Theory
The data has to conform to some standards, such as data being exchangeable (a slightly weaker assumption than the standard IID assumption imposed in standard machine learning). For conformal prediction, an n% prediction region is said to be valid if the truth is in the output n% of the time. The efficiency is the size of the output. For classification, this size is the number of classes; for regression, it is interval width.
In the purest form, conformal prediction is made in an online (transductive) setting. That is, after a label is predicted, its true label is known before the next prediction. Thus, the underlying model can be re-trained using this new data point and the next prediction will be made on a calibration set containing n + 1 data points, where the previous model had n data points.
Classification algorithms
The goal of standard classification algorithms is to classify a test object into one of several discrete classes. Conformal classifiers instead compute and output the p-value for each available class by performing a ranking of the nonconformity measure (α-value) of the test object against examples from the training data set. Similar to standard hypothesis testing, the p-value together with a threshold (referred to as significance level in the CP field) is used to determine whether the label should be in the prediction set. For example, for a significance level of 0.1, all classes with a p-value of 0.1 or greater are added to the prediction set. Transductive algorithms compute the nonconformity score using all available training data, while inductive algorithms compute it on a subset of the training set.
Inductive conformal prediction (ICP)
Inductive Conformal Prediction was first known as inductive confidence machines, but was later re-introduced as ICP. It has gained popularity in practical settings because the underlying model does not need to be retrained for every new test example. This makes it interesting for any model that is heavy to train, such as neural networks.
Mondrian inductive conformal prediction (MICP)
In MICP, the alpha values are class-dependent (Mondrian) and the underlying model does not follow the original online setting introduced in 2005.
Training algorithm:
Train a machine learning model (MLM)
Run a calibration set through the MLM, save output from the chosen stage
In deep learning, the softmax values are often used
Use a non-conformity function to compute α-values
A data point in the calibration set will result in an α-value for its true class
Prediction algorithm:
For a test data point, generate a new α-value
Find a p-value for each class of the data point
If the p-value is greater than the significance level, include the class in the output
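The steps above can be made concrete with a minimal sketch. The softmax-based nonconformity function, the three-class toy data, and the particular p-value convention (count of calibration scores at least as nonconforming, plus one, over the calibration size plus one) are illustrative assumptions rather than part of the description above.

```python
import numpy as np

def nonconformity(softmax_row, label):
    """Nonconformity = 1 - predicted probability of the class (one common choice)."""
    return 1.0 - softmax_row[label]

def micp_calibrate(cal_softmax, cal_labels, n_classes):
    """Class-conditional (Mondrian) calibration: one list of alpha-values per class."""
    scores = {c: [] for c in range(n_classes)}
    for row, y in zip(cal_softmax, cal_labels):
        scores[y].append(nonconformity(row, y))
    return {c: np.sort(np.array(s)) for c, s in scores.items()}

def micp_predict(test_softmax, scores, significance):
    """Return the prediction set (list of class indices) for one test example."""
    prediction_set = []
    for c, cal_scores in scores.items():
        alpha = nonconformity(test_softmax, c)
        # p-value: fraction of calibration examples of class c (plus the test
        # example itself) that are at least as nonconforming as the test example
        p_value = (np.sum(cal_scores >= alpha) + 1) / (len(cal_scores) + 1)
        if p_value > significance:   # include the class if the p-value exceeds the threshold
            prediction_set.append(c)
    return prediction_set

# Toy usage with made-up softmax outputs for a 3-class problem
rng = np.random.default_rng(1)
cal_softmax = rng.dirichlet(np.ones(3), size=100)
cal_labels = np.array([row.argmax() for row in cal_softmax])
scores = micp_calibrate(cal_softmax, cal_labels, n_classes=3)
print(micp_predict(np.array([0.7, 0.2, 0.1]), scores, significance=0.1))
```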
Regression algorithms
Conformal prediction was initially formulated for the task of classification, but was later modified for regression. Unlike classification, which outputs p-values without a given significance level, regression requires a fixed significance level at prediction time in order to produce prediction intervals for a new test object. For classic conformal regression, there is no transductive algorithm, because it is impossible to postulate all possible labels for a new test object when the label space is continuous. The available algorithms are all formulated in the inductive setting, which computes a prediction rule once and applies it to all future predictions.
Inductive conformal prediction (ICP)
All inductive algorithms require splitting the available training examples into two disjoint sets: one set used for training the underlying model (the proper training set) and one set for calibrating the prediction (the calibration set). In ICP, this split is done once, thus training a single ML model. If the split is performed randomly and the data are exchangeable, the ICP model is proven to be automatically valid (i.e. the error rate corresponds to the required significance level).
Training algorithm:
Split the training data into proper training set and calibration set
Train the underlying ML model using the proper training set
Predict the examples from the calibration set using the derived ML model → ŷ-values
Optional: if using a normalized nonconformity function
Train the normalization ML model
Predict normalization scores → 𝜺 -values
Compute the nonconformity measures (α-values) for all calibration examples, using ŷ- and 𝜺-values
Sort the nonconformity measure and generate nonconformity scores
Save underlying ML model, normalization ML model (if any) and nonconformity scores
Prediction algorithm:
Required input: significance level (s)
Predict the test object using the ML model → ŷt
Optional: if using a normalized nonconformity function
Predict the test object using normalization model → 𝜺t
Pick the nonconformity score from the list of scores produced by the calibration set in training, corresponding to the significance level s → αs
Compute the prediction interval half width (d) from rearranging the nonconformity function and input αs (and optionally 𝜺) → d
Output prediction interval (ŷ − d, ŷ + d) for the given significance level s
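A minimal sketch of the non-normalized variant of the steps above; the underlying model (a scikit-learn linear regression), the synthetic data, and the absolute-residual nonconformity function are illustrative assumptions rather than prescribed choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=600)

# Split into a proper training set and a calibration set (done once)
X_train, y_train = X[:400], y[:400]
X_cal, y_cal = X[400:], y[400:]

model = LinearRegression().fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals, sorted
alphas = np.sort(np.abs(y_cal - model.predict(X_cal)))

def icp_interval(x_new, significance):
    """Prediction interval (y_hat - d, y_hat + d) for one test object."""
    y_hat = model.predict(np.asarray(x_new).reshape(1, -1))[0]
    # Pick the calibration score corresponding to the significance level
    k = int(np.ceil((1 - significance) * (len(alphas) + 1))) - 1
    d = alphas[min(k, len(alphas) - 1)]
    return y_hat - d, y_hat + d

print(icp_interval([0.5], significance=0.1))
```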
Split conformal prediction (SCP)
The SCP, often called aggregated conformal predictor (ACP), can be considered an ensemble of ICPs. SCP usually improves the efficiency of predictions (that is, it creates smaller prediction intervals) compared to a single ICP, but loses the automatic validity in the generated predictions.
A common type of SCPs is the cross-conformal predictor (CCP), which splits the training data into proper training and calibration sets multiple times in a strategy similar to k-fold cross-validation. Regardless of the splitting technique, the algorithm performs n splits and trains an ICP for each split. When predicting a new test object, it uses the median ŷ and the median d from the n ICPs to create the final prediction interval (median ŷ − median d, median ŷ + median d).
Applications
Types of learning models
Several machine learning models can be used in conjunction with conformal prediction. Studies have shown that it can be applied to, for example, convolutional neural networks and support-vector machines, among others.
Use case
Conformal prediction is used in a variety of fields and is an active area of research. For example, it has been used to predict uncertainties in breast cancer diagnosis and stroke risk, as well as in data storage and disk drive scrubbing. In the domain of hardware security it has been used to detect evolving hardware trojans. Within language technology, conformal prediction papers are routinely presented at the Symposium on Conformal and Probabilistic Prediction with Applications (COPA).
Conferences
Conformal prediction is one of the main subjects discussed during the COPA conference each year. Both theory and applications of conformal predictions are presented by leaders of the field. The conference has been held since 2012. It has been hosted in several different European countries including Greece, Great Britain, Italy and Sweden.
Books
Published books on conformal prediction include Algorithmic Learning in a Random World, Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications, Practical Guide to Applied Conformal Prediction in Python: Learn and Apply the Best Uncertainty Frameworks to Your Industry Applications, Conformal Prediction: A Gentle Introduction (Foundations and Trends in Machine Learning), and Conformal Prediction for Inventors.
See also
Calibration (statistics)
Bootstrap method
Quantile regression
References
External links
Video Lecture on YouTube
Computational statistics | Conformal prediction | [
"Mathematics"
] | 2,030 | [
"Computational statistics",
"Computational mathematics"
] |
68,732,225 | https://en.wikipedia.org/wiki/HyCOM | The Hybrid Coordinate Ocean Model (HyCOM) is an open-source ocean general circulation modeling system. HyCOM is a primitive equation type of ocean general circulation model. The vertical levels of this modeling system are slightly different than other models, because the vertical coordinates remain isopycnic in the open stratified ocean, smoothly transitioning to z-level coordinates in the weakly stratified upper-ocean mixed layer, to terrain-following sigma coordinates in shallow water regions, and back to z-level coordinates in very shallow water. Therefore, the setup is a “hybrid” between z-level and terrain-following vertical levels. HyCOM outputs are provided online for the global ocean at a spatial resolution of 0.08 degrees (approximately 9 km) from 2003 to present. HyCOM uses netCDF data format for model outputs.
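As a hedged illustration of working with such output, the sketch below opens a HyCOM-style netCDF file with xarray; the file name and the variable and coordinate names are assumptions that depend on the specific product actually downloaded from hycom.org.

```python
# Illustrative only: the file name and the variable/coordinate names ("water_temp",
# "depth") are assumptions and vary between HyCOM products.
import xarray as xr

ds = xr.open_dataset("hycom_global_0p08_sample.nc")       # hypothetical local file
print(ds)                                                  # list variables and coordinates
sst = ds["water_temp"].sel(depth=0.0, method="nearest")   # near-surface temperature field
print(float(sst.mean()))
```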
Applications
HyCOM model experiments are used to study the interactions between the ocean and atmosphere, including short-term and long-term processes. This modeling system has also been used to create forecasting tools. For example, HyCOM has been used to:
Assimilate data and provide operational oceanographic forecasting for the United States Navy
Determine the ideal way to parametrize how the sun heats the upper ocean (solar radiation and heat flux) in darker waters like the Black Sea
Study mesoscale variability in sea surface height and temperature in the Gulf of Mexico
Simulate drifting patterns of loggerhead sea turtles of the North American east coast
Predict the extent of Arctic sea ice for naval operations
See also
Climate model
Computational geophysics
General circulation model (GCM)
Ocean general circulation model (OGCM)
Oceanography
List of ocean circulation models
Physical oceanography
ROMS
Sigma coordinate system
References
External links
https://www.hycom.org/
Physical oceanography
Oceanography
Earth system sciences
Computational science
Geophysics
Numerical climate and weather models | HyCOM | [
"Physics",
"Mathematics",
"Environmental_science"
] | 382 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Applied mathematics",
"Computational science",
"Physical oceanography",
"Geophysics"
] |
51,614,413 | https://en.wikipedia.org/wiki/Non-Hermitian%20quantum%20mechanics | In physics, non-Hermitian quantum mechanics describes quantum mechanical systems where Hamiltonians are not Hermitian.
History
The first paper that has "non-Hermitian quantum mechanics" in the title was published in 1996 by Naomichi Hatano and David R. Nelson. The authors mapped a classical statistical model of flux-line pinning by columnar defects in high-Tc superconductors to a quantum model by means of an inverse path-integral mapping and ended up with a non-Hermitian Hamiltonian with an imaginary vector potential in a random scalar potential. They further mapped this into a lattice model and came up with a tight-binding model with asymmetric hopping, which is now widely called the Hatano-Nelson model. The authors showed that there is a region where all eigenvalues are real despite the non-Hermiticity.
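A minimal numerical sketch of such a lattice model: a ring with asymmetric hopping t·e^(±g) and a random on-site potential, diagonalized for two values of the asymmetry. The parameter values and boundary conditions are illustrative assumptions only; for sufficiently small g relative to the disorder the spectrum typically remains real despite the non-Hermiticity, while larger g produces complex eigenvalues.

```python
import numpy as np

def hatano_nelson(n_sites, t, g, disorder, rng):
    """Tight-binding ring with asymmetric hopping t*exp(+/-g) and a random
    on-site potential; parameter values here are illustrative only."""
    v = disorder * (rng.random(n_sites) - 0.5)       # random scalar potential
    h = np.diag(v).astype(complex)
    for j in range(n_sites):
        k = (j + 1) % n_sites                        # periodic boundary conditions
        h[k, j] += t * np.exp(+g)                    # enhanced hopping one way
        h[j, k] += t * np.exp(-g)                    # suppressed hopping the other way
    return h

rng = np.random.default_rng(0)
for g in (0.1, 1.5):
    energies = np.linalg.eigvals(hatano_nelson(200, 1.0, g, 6.0, rng))
    # Report how far the spectrum departs from the real axis
    print(f"g = {g}: max |Im(E)| = {np.abs(energies.imag).max():.3f}")
```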
Parity–time (PT) symmetry was initially studied as a specific system in non-Hermitian quantum mechanics. In 1998, physicist Carl Bender and former graduate student Stefan Boettcher published a paper where they found non-Hermitian Hamiltonians endowed with an unbroken PT symmetry (invariance with respect to the simultaneous action of the parity-inversion and time reversal symmetry operators) also may possess a real spectrum. Under a correctly-defined inner product, a PT-symmetric Hamiltonian's eigenfunctions have positive norms and exhibit unitary time evolution, requirements for quantum theories. Bender won the 2017 Dannie Heineman Prize for Mathematical Physics for his work.
A closely related concept is that of pseudo-Hermitian operators, which were considered by physicists Paul Dirac, Wolfgang Pauli, and Tsung-Dao Lee and Gian Carlo Wick. Pseudo-Hermitian operators were discovered (or rediscovered) almost simultaneously by mathematicians Mark Krein and collaborators as G-Hamiltonian in the study of linear dynamical systems. The equivalence between pseudo-Hermiticity and G-Hamiltonian is easy to establish.
In the early 1960s, Olga Taussky, Michael Drazin, and Emilie Haynsworth demonstrated that the necessary and sufficient criteria for a finite-dimensional matrix to have real eigenvalues is that said matrix is pseudo-Hermitian with a positive-definite metric.
In 2002, Ali Mostafazadeh showed that diagonalizable PT-symmetric Hamiltonians belong to the class of pseudo-Hermitian Hamiltonians. In 2003, it was proven that in finite dimensions, PT-symmetry is equivalent to pseudo-Hermiticity regardless of diagonalizability,
thereby applying to the physically interesting case of non-diagonalizable Hamiltonians at exceptional points. This indicates that the mechanism of PT-symmetry breaking at exceptional points, where the Hamiltonian is usually not diagonalizable, is the Krein collision between two eigenmodes with opposite signs of actions.
In 2005, PT symmetry was introduced to the field of optics by the research group of Gonzalo Muga by noting that PT symmetry corresponds to the presence of balanced gain and loss. In 2007, the physicist Demetrios Christodoulides and his collaborators further studied the implications of PT symmetry in optics. The coming years saw the first experimental demonstrations of PT symmetry in passive and active systems. PT symmetry has also been applied to classical mechanics, metamaterials, electric circuits, and nuclear magnetic resonance.
In 2017, a non-Hermitian PT-symmetric Hamiltonian was proposed by Dorje Brody and Markus Müller that "formally satisfies the conditions of the Hilbert–Pólya conjecture."
References
Quantum optics | Non-Hermitian quantum mechanics | [
"Physics"
] | 731 | [
"Quantum optics",
"Quantum mechanics"
] |
51,619,662 | https://en.wikipedia.org/wiki/Directed%20information | Directed information is an information theory measure that quantifies the information flow from the random string X^n = (X_1, X_2, …, X_n) to the random string Y^n = (Y_1, Y_2, …, Y_n). The term directed information was coined by James Massey and is defined as I(X^n → Y^n) = ∑_{i=1}^{n} I(X^i; Y_i | Y^{i−1}),
where I(X^i; Y_i | Y^{i−1}) is the conditional mutual information between the input prefix X^i and the output symbol Y_i given the past outputs Y^{i−1}.
Directed information has applications to problems where causality plays an important role such as the capacity of channels with feedback, capacity of discrete memoryless networks, capacity of networks with in-block memory, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics.
Causal conditioning
The essence of directed information is causal conditioning. The probability of causally conditioned on is defined as
.
This is similar to the chain rule for conventional conditioning, except one conditions on "past" and "present" symbols x^i rather than all symbols x^n. To include "past" symbols only, one can introduce a delay by prepending a constant symbol:
P(y^n ∥ (0, x^{n−1})) ≜ ∏_{i=1}^{n} P(y_i | y^{i−1}, x^{i−1}).
It is common to abuse notation by writing P(y^n ∥ x^{n−1}) for this expression, although formally all strings should have the same number of symbols.
One may also condition on multiple strings: P(y^n ∥ x^n, z^n).
Causally conditioned entropy
The causally conditioned entropy is defined as:
H(Y^n ∥ X^n) = E[−log P(Y^n ∥ X^n)] = ∑_{i=1}^{n} H(Y_i | Y^{i−1}, X^i).
Similarly, one may causally condition on multiple strings and write
H(Y^n ∥ X^n, Z^n).
Properties
A decomposition rule for causal conditioning is
P(x^n, y^n) = P(x^n ∥ y^{n−1}) P(y^n ∥ x^n).
This rule shows that any product of the form P(x^n ∥ y^{n−1}) P(y^n ∥ x^n) gives a joint distribution P(x^n, y^n).
The causal conditioning probability P(y^n ∥ x^n) is a probability vector, i.e.,
∑_{y^n} P(y^n ∥ x^n) = 1 for every x^n.
Directed information can be written in terms of causal conditioning:
I(X^n → Y^n) = E[ log ( P(Y^n ∥ X^n) / P(Y^n) ) ] = H(Y^n) − H(Y^n ∥ X^n).
The relation generalizes to three strings: the directed information flowing from X^n to Y^n causally conditioned on Z^n is
I(X^n → Y^n ∥ Z^n) = H(Y^n ∥ Z^n) − H(Y^n ∥ X^n, Z^n).
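As an illustrative sketch (not drawn from the cited literature), the quantity can be evaluated directly from an explicit joint pmf of short binary sequences using the causal-conditioning form above; this brute-force enumeration is only practical for very small n.

```python
import itertools
from math import log2

def prob(pmf, a, b, x, y):
    """P(X^a = x[:a], Y^b = y[:b]), obtained by marginalizing the joint pmf."""
    return sum(q for (xx, yy), q in pmf.items()
               if xx[:a] == x[:a] and yy[:b] == y[:b])

def directed_information(pmf, n):
    """I(X^n -> Y^n) evaluated as E[ log P(Y^n || X^n) - log P(Y^n) ]."""
    di = 0.0
    for (x, y), q in pmf.items():
        if q == 0.0:
            continue
        log_ratio = -log2(prob(pmf, 0, n, x, y))     # - log P(y^n)
        for i in range(1, n + 1):
            num = prob(pmf, i, i, x, y)              # P(x^i, y^i)
            den = prob(pmf, i, i - 1, x, y)          # P(x^i, y^{i-1})
            log_ratio += log2(num / den)             # + log P(y_i | y^{i-1}, x^i)
        di += q * log_ratio
    return di

# Toy example: a noiseless identity channel Y_i = X_i with i.i.d. fair-coin input.
n = 2
pmf = {(x, x): 0.25 for x in itertools.product((0, 1), repeat=n)}
print(directed_information(pmf, n))   # 2.0 bits: all the information flows from X to Y
```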
Conservation law of information
This law, established by James Massey and his son Peter Massey, gives intuition by relating directed information and mutual information. The law states that for any pair of random strings (X^n, Y^n), the following equality holds:
I(X^n; Y^n) = I(X^n → Y^n) + I(Y^{n−1} → X^n), where Y^{n−1} is understood as the delayed string (0, Y_1, …, Y_{n−1}).
Two alternative forms of this law are
where .
Estimation and optimization
Estimating and optimizing the directed information is challenging because its definition involves n terms, where n may be large. In many cases, one is interested in optimizing the limiting average, that is, the limit as n grows to infinity, termed a multi-letter expression.
Estimation
Estimating directed information from samples is a hard problem, since the directed information expression does not depend on the samples directly but on the joint distribution, which may be unknown. There are several algorithms based on context tree weighting, empirical parametric distributions, and long short-term memory networks.
Optimization
Maximizing directed information is a fundamental problem in information theory. For example, given the channel conditional distributions P(y_i | x^i, y^{i−1}), the objective might be to optimize the directed information I(X^n → Y^n) over the causally conditioned input distributions P(x^n ∥ y^{n−1}).
There are algorithms to optimize the directed information based on Blahut-Arimoto, Markov decision processes, recurrent neural networks, reinforcement learning, and graphical methods (the Q-graphs).
For the Blahut-Arimoto algorithm, the main idea is to start with the last mutual information of the directed information expression and go backward. For the Markov decision process, the main idea is to transform the optimization into an infinite horizon average reward Markov decision process. For a recurrent neural network, the main idea is to model the input distribution using a recurrent neural network and optimize the parameters using gradient descent. For reinforcement learning, the main idea is to solve the Markov decision process formulation of the capacity using reinforcement learning tools, which lets one deal with large or even continuous alphabets.
Marko's theory of bidirectional communication
Massey's directed information was motivated by Marko's early work (1966) on developing a theory of bidirectional communication. Marko's definition of directed transinformation differs slightly from Massey's in that, at time , one conditions on past symbols only and one takes limits:
Marko defined several other quantities, including:
Total information: and
Free information: and
Coincidence:
The total information is usually called an entropy rate. Marko showed the following relations for the problems he was interested in:
and
He also defined quantities he called residual entropies:
and developed the conservation law and several bounds.
Relation to transfer entropy
Directed information is related to transfer entropy, which is a truncated version of Marko's directed transinformation .
The transfer entropy at time t and with memory d is
T = I(X_{t−d}^{t−1}; Y_t | Y_{t−d}^{t−1}),
where one does not include the present symbol X_t or the past symbols before time t − d.
Transfer entropy usually assumes stationarity, i.e., it does not depend on the time t.
References
Information theory | Directed information | [
"Mathematics",
"Technology",
"Engineering"
] | 864 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
70,211,075 | https://en.wikipedia.org/wiki/Oncology%20information%20system | Oncology Information System (OIS) is a software solution that manages departmental, administrative and clinical activities in cancer care. It aggregates information into a complete oncology-specific electronic health record to support medical information management. The OIS allows the capture of patient history information, the documentation of the treatment response, medical prescription of the treatment, the storage of patient documentation and the capture of activities for billing purposes.
Unlike a hospital information system (HIS), which is intended to manage patient records more generally, or radiological information system (RIS), intended to track and manage radiology requests and workflow, the OIS supports the delivery of integrated care and long-term treatment for cancer patients by collecting data during various phases of treatment, maintaining a history of treatment fractions, screening, prevention, diagnosis, image reviews, palliative care and end-of-life care. An OIS will be designed around the specific requirements of chemotherapy, radiotherapy and other supportive activities.
Basic features of an OIS
OIS generally support the following features:
Treatment workflow
Doctor's prescription
Patient register
Management of the treatment schedule
Management of patient documents
Financial control
Health Level 7 (HL7) and DICOM RT interoperability
References
Radiation therapy
Medical physics
Electronic health records | Oncology information system | [
"Physics",
"Technology"
] | 257 | [
"Electronic health records",
"Information technology",
"Applied and interdisciplinary physics",
"Medical physics"
] |
70,212,637 | https://en.wikipedia.org/wiki/Innovative%20Clean%20Transit%20rule | The Innovative Clean Transit Rule (ICT) is a regulation promulgated by the California Air Resources Board which requires public transit agencies in the state of California to shift their bus fleets to zero emissions buses (ZEB), either electric buses or fuel cell buses. By 2029, only ZEBs will be allowed for new bus purchases, and the entire fleet must use ZEBs by 2040.
History
CARB's first regulation to control transit fleet emissions was the Fleet Rule for Transit Agencies, Section 2023 under Title 13 of the California Code of Regulations (CCR); 13 CCR §2023 was adopted in February 2000 after diesel particulate matter was identified as a toxic air contaminant. The Fleet Rule effectively shifted most agencies off diesel fuel. A similar regulation (13 CCR §2022) was issued in 2005 to cover trucks owned by public agencies and utilities, and expanded via 13 CCR 2025/2027 as the 2008 California Statewide Truck and Bus Rule to all diesel-fueled trucks and buses in California.
The ICT rule was adopted in December 2018. ICT amends the existing Fleet Rule. It is the first such ZEB mandate in the United States, and was supported unanimously by CARB's sixteen-member board, led by then-chair Mary D. Nichols.
Fleet Rule
Under the previous Fleet Rule, transit agencies were required to meet emissions requirements for urban buses under a "diesel path" or "alternative fuel path", with the exception of agencies in the South Coast Air Quality Management District (SCAQMD), which were required to follow the "alternative fuel path". SCAQMD separately mandated that diesel-fueled buses would no longer be purchased (Rule 1192, adopted June 2000), and later amendments to the Fleet Rule required transit agencies in the SCAQMD to choose the "alternative fuel path" by October 7, 2006. Urban buses were defined as vehicles that have a capacity of at least 15 passengers and were intended for intra-city operation. The regulations were extended in 2005 to apply to smaller vehicles operated by transit agencies, including the maintenance fleet.
The Fleet Rule required that transit agencies choose their path by January 31, 2001. Under the "alternative fuel path", at least 85% of urban buses purchased were required to use alternative fuels or be equipped with engines that met the emissions requirements of 13 CCR 1956.1. Under the "diesel path", average fleet emissions for oxides of nitrogen (NOx) and diesel particulate matter (PM) were gradually tightened. For both paths, diesel PM emissions were calculated as a fleet total and compared to the fleet diesel PM emissions in 2002; starting in 2004, diesel PM emissions were required to be 60% or less (diesel path) or 80% or less (alternative fuel path) of the 2002 values, followed by ≤40% (diesel) or ≤60% (alternative) by 2005, and continuing to decrease in future years.
In addition, under the Fleet Rule, agencies with large fleets (more than 200 buses) were required to participate in the Zero Emission Bus (ZEB) demonstration program. ZEBs were defined as buses with electric motor drivetrains that drew from traction batteries, hydrogen fuel cell, or overhead wire via trolley poles. The Initial Demonstration Project was required to have at least three ZEBs in revenue service for one calendar year, to start no later than February 28, 2006. In addition, large transit agencies on the "diesel path" were required to implement an Advanced ZEB Demonstration Project, using a minimum of six ZEBs in revenue service for one calendar year, to start no later than January 1, 2009. Starting in 2011 (diesel path) or 2012 (alternative fuel path), transit agencies were required to make ZEBs a minimum of 15% of their new purchases/leases through 2026, with additional credits earned for early implementation.
Pilot ZEB programs
CARB funded a pilot program for the San Joaquin Valley Air Pollution Control District to help transit agencies including Visalia Transit, FCRTA (Fresno County), San Joaquin RTD, and MAX (Modesto) purchase battery-electric buses from Proterra starting in 2016. However, the ICT rule was much broader than the individual regional programs, eliminating all transit vehicle emissions and applying to all transit agencies state-wide.
Requirements
Under ICT all public transit agencies in the state will gradually transition their fleets to zero emissions buses, with the goal of having all operating buses on the road by zero-emissions by 2040. ICT applies to all agencies in the state that own, operate, or lease buses with a Gross Vehicle Weight greater than . Individual transit agencies have varying requirements under the rule, depending on their size, but by the year 2029, all new transit bus purchases must by zero-emissions buses.
CARB estimated the rule would reduce greenhouse gas emissions by 19 million metric tons, the equivalent of taking four million cars off the road.
Transition schedules and plans
Large transit agencies are required to have 25% of new bus purchases as zero-emission buses (ZEBs) starting in 2023, 50% of new purchases as ZEBs starting in 2026, and 100% of new purchases as ZEBs starting in 2029. Small transit agencies are required to make 25% of new purchases as ZEBs in 2026 and 100% of new purchases as ZEBs in 2029 and all years thereafter. An agency is considered large if it operates at least 100 buses, or if it operates at least 65 buses in the San Joaquin Valley or the SCAQMD.
Under ICT, agencies are required to develop and submit rollout plans for their operations to transition to zero-emissions. Large agencies must complete their plans by July 1, 2020, and small agencies must complete their plans by July 1, 2023.
Scope
Per the regulation, ZEBs are defined to include battery electric buses and fuel cell buses, but do not include electric trolleybuses which draw power from overhead lines. Those are exempt from the regulation as they are already electric. The rule does not apply to any vehicle operated by Caltrans, Caltrain, Amtrak, or any local school district. It also does not apply to trolleybuses or any vehicle that operates on rails or a fixed guideway.
Implementation
The Antelope Valley Transit Authority (AVTA) has set a goal to be the first all-electric fleet by the end of 2018, ahead of the tightened regulations. The Los Angeles Department of Transportation also plans to complete its transition well in advance of the state mandate, by 2026. The San Francisco Municipal Transit Agency plans to purchase only electric buses starting in 2025, to complete the transition by 2035.
In April 2020, AVTA decommissioned its last diesel transit bus; in September 2020, AVTA began replacing its microtransit (demand-responsive) fleet with battery-electric vans, and in August 2021, AVTA began replacing its commuter/highway coach fleet with battery-electric buses, completing their transition to an all-electric fleet in March 2022. This made AVTA the first all-electric transit agency in North America.
References
External links
Electric buses
Sustainable transport
Environment of California
California Environmental Protection Agency
Air pollution in California
Public transportation in California | Innovative Clean Transit rule | [
"Physics"
] | 1,449 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
70,215,499 | https://en.wikipedia.org/wiki/Electron%20bifurcation | In biochemistry, electron bifurcation (EB) refers to a system that enables an unfavorable (endergonic) transformation by coupling it to a favorable (exergonic) transformation. Two electrons are involved: one flows to an acceptor with a higher reduction potential than the donor, and the other flows to an acceptor with a lower reduction potential. The process is suspected of being common in bioenergetics.
Two versions of EB are recognized. One involves redox of quinones and the other involves flavins. Quinones and flavins are cofactors that are capable of undergoing two-electron, two-proton redox chemistry.
A pervasive example of electron bifurcation is the Q cycle, which is part of the machinery that results in oxidative phosphorylation. In that case one electron from ubiquinol is directed to a Rieske cluster and the other electron is directed to a cytochrome b.
References
Thermodynamic processes
Chemical thermodynamics
Iron–sulfur proteins | Electron bifurcation | [
"Physics",
"Chemistry"
] | 215 | [
"Chemical thermodynamics",
"Thermodynamic processes",
"Thermodynamics"
] |
60,876,593 | https://en.wikipedia.org/wiki/Nilsson%20model | The Nilsson model is a nuclear shell model treating the atomic nucleus as a deformed sphere. In 1953, the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was nonspherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to the large number of valence particles—and this intractability was even greater in the 1950s, when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is the one now known as the Nilsson model. It is essentially a nuclear shell model using a harmonic oscillator potential, but with anisotropy added, so that the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z.
Hamiltonian
For an axially symmetric shape with the axis of symmetry being the z axis, the Hamiltonian is
H = p²/(2m) + (1/2)m[ω_z² z² + ω_⊥²(x² + y²)] + C ℓ·s + D(ℓ² − ⟨ℓ²⟩_N).
Here m is the mass of the nucleon, N is the total number of harmonic oscillator quanta in the spherical basis, ℓ is the orbital angular momentum operator, ℓ² is its square (with eigenvalues ħ²ℓ(ℓ + 1)), ⟨ℓ²⟩_N = ħ²N(N + 3)/2 is the average value of ℓ² over the N shell, and s is the intrinsic spin.
The anisotropy of the potential is such that the length of an equipotential along the z is greater than the length on the transverse axes in the ratio . This is conventionally expressed in terms of a deformation parameter δ so that the harmonic oscillator part of the potential can be written as the sum of a spherically symmetric harmonic oscillator and a term proportional to δ. Positive values of δ indicate prolate deformations, like an American football. Most nuclei in their ground states have equilibrium shapes such that δ ranges from 0 to 0.2, while superdeformed states have (a 2-to-1 axis ratio).
The mathematical details of the deformation parameters are as follows. Considering the success of the nuclear liquid drop model, in which the nucleus is taken to be an incompressible fluid, the harmonic oscillator frequencies are constrained so that ω_⊥²ω_z remains constant with deformation, preserving the volume of equipotential surfaces. Reproducing the observed density of nuclear matter requires ħω₀ ≈ 41 A^(−1/3) MeV, where A is the mass number. The relation between δ and the anisotropy is ω_⊥²/ω_z² = (1 + 2δ/3)/(1 − 4δ/3), while the relation between δ and the axis ratio is R_z/R_⊥ = ω_⊥/ω_z.
The remaining two terms in the Hamiltonian do not relate to deformation and are present in the spherical shell model as well. The spin-orbit term represents the spin-orbit dependence of the strong nuclear force; it is much larger than, and has the opposite sign compared to, the special-relativistic spin-orbit splitting. The purpose of the D(ℓ² − ⟨ℓ²⟩_N) term is to mock up the flat profile of the nuclear potential as a function of radius. For nuclear wavefunctions (unlike atomic wavefunctions) states with high angular momentum have their probability density concentrated at greater radii. The term prevents this from shifting a major shell up or down as a whole. The two adjustable constants are conventionally parametrized as C = −2κħω₀ and D = −κμħω₀. Typical values of κ and μ for heavy nuclei are 0.06 and 0.5. With this parametrization, ħω₀ occurs as a simple scaling factor throughout all the calculations.
Choice of basis and quantum numbers
For ease of computation using the computational resources of the 1950s, Nilsson used a basis consisting of eigenstates of the spherical Hamiltonian. The Nilsson quantum numbers are conventionally written Ω^π[N n_z Λ]. The difference between the spherical and deformed Hamiltonian is proportional to δ r² Y₂₀, and this has matrix elements that are easy to calculate in this basis. They couple the different N shells. Eigenstates of the deformed Hamiltonian have good parity (corresponding to even or odd N) and Ω, the projection of the total angular momentum along the symmetry axis. In the absence of a cranking term (see below), time-reversal symmetry causes states with opposite signs of Ω to be degenerate, so that in the calculations only positive values of Ω need to be considered.
Interpretation
In an odd, well-deformed nucleus, the single-particle levels are filled up to the Fermi level, and the odd particle's Ω and parity give the spin and parity of the ground state.
Cranking
Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier term −ω·J, known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level then produces states whose expected angular momentum along the cranking axis has the desired value set by the Lagrange multiplier.
Total energy
Often one wants to calculate a total energy as a function of deformation. Minima of this function are predicted equilibrium shapes. Adding the single-particle energies does not work for this purpose, partly because kinetic and potential terms are out of proportion by a factor of two, and partly because small errors in the energies accumulate in the sum. For this reason, such sums are usually renormalized using a procedure introduced by Strutinsky.
Plots of energy levels
Single-particle levels can be shown in a "spaghetti plot," as functions of the deformation. A large gap between energy levels at zero deformation indicates a particle number at which there is a shell closure: the traditional "magic numbers." Any such gap, at a zero or nonzero deformation, indicates that when the Fermi level is at that height, the nucleus will be stable relative to the liquid drop model.
External links
An open-source software implementation
References
Nilsson, S.G. "Binding states of individual nucleons in strongly deformed nuclei," doctoral thesis, 1955
Olivius, P., "Extending the nuclear cranking model to tilted axis rotation and alternative mean field potentials," doctoral thesis, Lund University, 2004, http://www.matfys.lth.se/staff/Peter.Olivius/thesis.pdf — describes a modern implementation of the model
Strutinsky, Nucl. Phys. A122 (1968) 1 -- original paper on the Strutinsky method
Salamon and Kruppa, "Curvature Correction in the Strutinsky's Method," http://arxiv.org/abs/1004.0079 — an open-access description of the Strutinsky method
Unknown author, "Appendix Nuclear Structure", with a full array of Nilsson charts for both proton and neutron shells, as well as an equivalent diagram for simple harmonic oscillator nuclei at different deformations: https://application.wiley-vch.de/books/info/0-471-35633-6/toi99/www/struct/struct.pdf ***
Nuclear physics | Nilsson model | [
"Physics"
] | 1,568 | [
"Nuclear physics"
] |
60,881,090 | https://en.wikipedia.org/wiki/Tidal%20strait | A tidal strait is a strait connecting two oceans or seas through which a tidal current flows. Tidal currents are usually unidirectional but sometimes are bidirectional. Tidal straits, though they are narrow seaways, are technically not rivers. They are frequently of tectonic origin. In them, currents develop because of elevation differences between the water basins at both ends.
Tides sometimes allow sediments to collect in tidal straits.
See also
Sediment trap (geology)
Tidal circularization
References
External links
A facies‐based depositional model for ancient and modern, tectonically–confined tidal straits
Deltas sourcing tidal straits: observations from some field case studies
Notes on shipwrecks in the Arthur Kill ship graveyard
Oceanography | Tidal strait | [
"Physics",
"Environmental_science"
] | 148 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
54,476,668 | https://en.wikipedia.org/wiki/Australian%20wormy%20chestnut | Australian wormy chestnut or firestreak is a common name for lumber of Eucalyptus obliqua, Eucalyptus sieberi and Eucalyptus fastigata grown in Victoria, southern New South Wales, and Tasmania in Australia. It is a hardwood species commonly used in flooring applications.
References
Trees of Australia
Eucalyptus
Wood
Plant common names | Australian wormy chestnut | [
"Biology"
] | 66 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
54,476,844 | https://en.wikipedia.org/wiki/Boolean%20algebra | In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.
History
A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.
Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first order logic.
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
Values
Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y + xy and negation ¬x as 1 + x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented).
Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the elements of a Boolean algebra, with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables.
Operations
Basic operations
While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators. Variables in Boolean algebra that store the logical value of 0 and 1 are called Boolean variables. They are used to store either true or false values. The basic operations on Boolean variables x and y are defined as follows:
{| class="wikitable" style="text-align: center"
|-
!Logical operation
!Operator
!Notation
!Alternative notations
!Definition
|-
|Conjunction
|AND
|x ∧ y
|x AND y, Kxy
|x ∧ y = 1 if x = y = 1; otherwise x ∧ y = 0
|-
|Disjunction
|OR
|x ∨ y
|x OR y, Axy
|x ∨ y = 0 if x = y = 0; otherwise x ∨ y = 1
|-
|Negation
|NOT
|¬x
|NOT x, Nx, x̅, !x
|¬x = 0 if x = 1; ¬x = 1 if x = 0
|}
Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:
{| class="wikitable" style="text-align: center"
|-
!x !!y !!x ∧ y !!x ∨ y !!¬x
|-
|0 ||0 ||0 ||0 ||1
|-
|0 ||1 ||0 ||1 ||1
|-
|1 ||0 ||0 ||1 ||0
|-
|1 ||1 ||1 ||1 ||0
|}
When used in expressions, the operators are applied according to the precedence rules. As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules.
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions: x ∧ y = xy = min(x, y); x ∨ y = x + y − xy = max(x, y); ¬x = 1 − x.
One might consider that only negation and one of the two other operations are basic because of the following identities that allow one to define conjunction in terms of negation and the disjunction, and vice versa (De Morgan's laws): x ∧ y = ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y).
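As a quick brute-force check (an illustrative sketch, not part of the article), the arithmetic and min/max expressions and De Morgan's laws can be verified by enumerating all inputs in {0, 1}:

```python
from itertools import product

AND = lambda x, y: x * y              # x ∧ y = xy = min(x, y)
OR  = lambda x, y: x + y - x * y      # x ∨ y = x + y − xy = max(x, y)
NOT = lambda x: 1 - x                 # ¬x = 1 − x

for x, y in product((0, 1), repeat=2):
    assert AND(x, y) == min(x, y)
    assert OR(x, y) == max(x, y)
    # De Morgan's laws
    assert AND(x, y) == NOT(OR(NOT(x), NOT(y)))
    assert OR(x, y) == NOT(AND(NOT(x), NOT(y)))
print("all identities hold on {0, 1}")
```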
Secondary operations
Operations composed from the basic operations include, among others, the following: the material conditional x → y = ¬x ∨ y, the exclusive or x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y), and logical equivalence x ≡ y = ¬(x ⊕ y).
These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.
{| class="wikitable" style="text-align: center"
|+Secondary operations. Table 1
|-
!x
!y
!x → y
!x ⊕ y
!x ≡ y
|-
!0
!0
| 1 || 0 || 1
|-
!1
!0
| 0 || 1 || 0
|-
!0
!1
| 1 || 1 || 0
|-
!1
!1
| 1 || 0 || 1
|}
Material conditional
The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of expression x → y is taken to be that of y (e.g. if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false).
Exclusive OR (XOR)
The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true (e.g. see table): if both are true then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0.
Logical equivalence
The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. Thus, its counterpart in arithmetic mod 2 is x + y. Equivalence's counterpart in arithmetic mod 2 is x + y + 1.
Laws
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in the section on axiomatization below).
Monotone laws
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:
{|
|-
| Associativity of ∨: || x ∨ (y ∨ z) = (x ∨ y) ∨ z
|-
| Associativity of ∧: || x ∧ (y ∧ z) = (x ∧ y) ∧ z
|-
| Commutativity of ∨: || x ∨ y = y ∨ x
|-
| Commutativity of ∧: || x ∧ y = y ∧ x
|-
| Distributivity of ∧ over ∨: || x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
|-
| Identity for ∨: || x ∨ 0 = x
|-
| Identity for ∧: || x ∧ 1 = x
|-
| Annihilator for ∧: || x ∧ 0 = 0
|}
The following laws hold in Boolean algebra, but not in ordinary algebra:
{|
|-
| Annihilator for ∨: || x ∨ 1 = 1
|-
| Idempotence of ∨: || x ∨ x = x
|-
| Idempotence of ∧: || x ∧ x = x
|-
| Absorption 1: || x ∧ (x ∨ y) = x
|-
| Absorption 2: || x ∨ (x ∧ y) = x
|-
| Distributivity of ∨ over ∧: || x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
|}
Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 · 2 = 4 ≠ 2. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on).
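The contrast can be checked mechanically (an illustrative sketch, not from the article): the laws above hold for every assignment over {0, 1}, while their ordinary-algebra counterparts fail for other integer values.

```python
from itertools import product

AND = lambda x, y: x * y            # Boolean conjunction on {0, 1}
OR  = lambda x, y: x + y - x * y    # Boolean disjunction on {0, 1}

# Idempotence, absorption and the annihilator for disjunction hold over {0, 1} ...
for x, y in product((0, 1), repeat=2):
    assert OR(x, x) == x and AND(x, x) == x
    assert AND(x, OR(x, y)) == x and OR(x, AND(x, y)) == x
    assert OR(x, 1) == 1

# ... but the corresponding ordinary-algebra identities fail for other integers:
x = 2
print(x * x, "!=", x)        # idempotence of conjunction fails: 4 != 2
print(1 * (1 + 1), "!= 1")   # absorption 1 fails when every variable is 1
```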
All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.
Nonmonotone laws
The complement operation is defined by the following two laws.
Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1
All properties of negation including the laws below follow from the above two laws alone.
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law): ¬(¬x) = x.
But whereas ordinary algebra satisfies the two laws (−x)(−y) = xy and (−x) + (−y) = −(x + y),
Boolean algebra satisfies De Morgan's laws: ¬x ∧ ¬y = ¬(x ∨ y) and ¬x ∨ ¬y = ¬(x ∧ y).
Completeness
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in the section on Boolean algebras below.
Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural given that attention was not paid as to whether some of the axioms followed from others, but there was simply a choice to stop when enough laws had been noticed, treated further below. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent.
Duality principle
Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set.
There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.
But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used.
But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for and in the truth tables have changed places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.
One change that did not need to be made as part of this interchange was to complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.
Diagrammatic representations
Venn diagrams
A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x corresponds respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction , disjunction , and complement ¬x.
For conjunction, the region inside both circles is shaded to indicate that is 1 when both variables are 1. The other regions are left unshaded to indicate that is 0 for the other three combinations.
The second diagram represents disjunction by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, , start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, , start with the left diagram for and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
To visualize the first De Morgan's law, , start with the middle diagram for and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes.
The second De Morgan's law, , works the same way with the two diagrams interchanged.
The first complement law, , says that the interior and exterior of the x circle have no overlap. The second complement law, , says that everything is either inside or outside the x circle.
Digital logic gates
Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows:
The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.
The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged.
More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.
Boolean algebras
The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.
Concrete Boolean algebras
A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.
(Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations— 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.
Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.
Subsets as bit vectors
A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c} where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.
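The identification of subsets with bit vectors can be made concrete in a few lines of code. The following Python sketch uses the three-element example X = {a, b, c} from above; the encoding (a as the leftmost bit) is a choice made here for illustration.

```python
X = ["a", "b", "c"]
FULL = 0b111  # the bit vector of X itself

def encode(subset):
    # subset of X -> 3-bit integer, leftmost bit for "a"
    return sum(1 << (len(X) - 1 - i) for i, e in enumerate(X) if e in subset)

def decode(bits):
    return {e for i, e in enumerate(X) if bits & (1 << (len(X) - 1 - i))}

s, t = encode({"a", "c"}), encode({"b", "c"})         # 101 and 011
print(format(s | t, "03b"), decode(s | t))            # union:        111 -> {a, b, c}
print(format(s & t, "03b"), decode(s & t))            # intersection: 001 -> {c}
print(format(FULL & ~s, "03b"), decode(FULL & ~s))    # complement:   010 -> {b}
```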
Prototypical Boolean algebra
The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation.
The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.
This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.
Boolean algebras: the definition
The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra.
Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.
A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws.
For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition.
A Boolean algebra is a complemented distributive lattice.
The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.
Representable Boolean algebras
Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.
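This divisor algebra is small enough to verify by brute force. The following Python sketch checks a sample of the Boolean laws for n = 30, with greatest common divisor as meet, least common multiple as join, and division into n as complement; the particular laws tested (De Morgan and the two complement laws) are chosen here only as examples.

```python
from math import gcd
from itertools import product

n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]   # 1, 2, 3, 5, 6, 10, 15, 30

meet = gcd                                   # plays the role of conjunction
join = lambda x, y: x * y // gcd(x, y)       # least common multiple, disjunction
comp = lambda x: n // x                      # complement

# De Morgan's law: not(x and y) == (not x) or (not y)
assert all(comp(meet(x, y)) == join(comp(x), comp(y))
           for x, y in product(divisors, repeat=2))

# complement laws, with 1 as bottom and n as top
assert all(meet(x, comp(x)) == 1 and join(x, comp(x)) == n for x in divisors)

print("sampled Boolean laws hold on the divisors of", n)
```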
However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion.
A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra.
The next question is answered positively as follows.
Every Boolean algebra is representable.
That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability.
The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.
It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
Axiomatizing Boolean algebra
The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold.
In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.
Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.
By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, a single axiom in that operation is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.
Propositional logic
Propositional logic is a logical system that is intimately connected to Boolean algebra. Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ... Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or T. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra).
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used.
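This translation can be tested by exhaustive evaluation over the two-element Boolean algebra. The following Python sketch uses De Morgan's law as an arbitrary sample theorem and checks that the equation holds identically exactly when the corresponding biconditional formula is a tautology.

```python
from itertools import product

def is_tautology(formula, nvars):
    # a tautology evaluates to 1 under every truth assignment over {0, 1}
    return all(formula(*vals) == 1 for vals in product((0, 1), repeat=nvars))

# sample theorem: phi = NOT(x AND y), psi = (NOT x) OR (NOT y)
phi = lambda x, y: 1 - (x & y)
psi = lambda x, y: (1 - x) | (1 - y)

equation_holds = all(phi(x, y) == psi(x, y) for x, y in product((0, 1), repeat=2))

# (phi OR NOT psi) AND (NOT phi OR psi), the biconditional built from the theorem
biconditional = lambda x, y: (phi(x, y) | (1 - psi(x, y))) & ((1 - phi(x, y)) | psi(x, y))

print(equation_holds)                  # True
print(is_tautology(biconditional, 2))  # True
```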
Applications
One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P.
Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.)
Deductive systems for propositional logic
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.
Sequent calculus
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A₁, ..., Aₘ ⊢ B₁, ..., Bₙ. The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent.
Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∧ y = x. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.
Applications
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.
Computers
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
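The distinction between numeric and logical operations can be seen directly on small bit patterns. The following Python sketch assumes an 8-bit register width and arbitrary example values; the bitwise operations act on each position independently, while addition lets carries propagate between positions.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1            # keeps results inside the assumed register width

a, b = 0b01101100, 0b01010101      # arbitrary example bit patterns

print(format(a & b, "08b"))           # 01000100  logical AND, bit by bit
print(format(a | b, "08b"))           # 01111101  logical OR
print(format(a ^ b, "08b"))           # 00111001  logical XOR
print(format(~a & MASK, "08b"))       # 10010011  logical NOT within the register
print(format((a + b) & MASK, "08b"))  # 11000001  numeric add: carries propagate
```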
Two-valued logic
Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.
A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
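The following Python sketch illustrates these [0, 1]-valued operations; the sample truth degrees are arbitrary, and OR is derived from NOT and AND via De Morgan's law as described above.

```python
from itertools import product

f_not = lambda x: 1.0 - x
f_and = lambda x, y: x * y
f_or = lambda x, y: f_not(f_and(f_not(x), f_not(y)))   # 1 - (1 - x)(1 - y)

print(f_and(0.8, 0.5))   # 0.4
print(f_or(0.8, 0.5))    # 0.9
print(f_not(0.8))        # about 0.2

# at the endpoints 0 and 1 the operations agree with two-valued Boolean logic
print(all(f_and(x, y) == (x and y) and f_or(x, y) == (x or y)
          for x, y in product((0.0, 1.0), repeat=2)))   # True
```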
Boolean operations
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas.
Natural language
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). The phrase "but not" is synonymous with "and not". When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these words often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P," the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Digital logic
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0,1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on.
Video cards
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. Constants representing the bit patterns of the source, destination, and mask allow a Boolean operation such as (SRC XOR DST) AND MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time. At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.
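The following Python sketch illustrates the idea of encoding a ternary Boolean operation as a single byte. The particular constants used here for the source, destination, and mask are one common truth-table convention assumed for illustration, not values prescribed by any specific video-card interface.

```python
# assumed convention: each byte lists a variable's value in the eight rows of
# the three-variable truth table (these constants are illustrative only)
SRC = 0b10101010   # 0xAA
DST = 0b11001100   # 0xCC
MSK = 0b11110000   # 0xF0

# (SRC XOR DST) AND MSK: "XOR the source and destination, then AND with the mask"
ROP = (SRC ^ DST) & MSK
print(hex(ROP))    # 0x60, the whole expression collapses to one byte

# the byte is the truth table of the expression: bit i holds the result for
# the i-th combination of (source, destination, mask) input bits
for i in range(8):
    s, d, m = (SRC >> i) & 1, (DST >> i) & 1, (MSK >> i) & 1
    assert (ROP >> i) & 1 == ((s ^ d) & m)
```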
Modeling and CAD
Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y, which in set theory is set difference: remove the elements of y from those of x. Thus given two shapes one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.
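A toy version of this machining-by-set-difference idea can be written with ordinary Python sets standing in for voxel sets; the shapes below are arbitrary examples.

```python
from itertools import product

stock = set(product(range(4), repeat=3))        # a 4 x 4 x 4 block of voxels
drill = {(1, 1, z) for z in range(4)}           # a vertical hole to be removed

machined = stock - drill                        # x AND NOT y as set difference
print(len(stock), len(drill), len(machined))    # 64 4 60
```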
Boolean searches
Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set." The following examples use a syntax supported by Google.
Doublequotes are used to combine whitespace-separated words into a single search term.
Whitespace is used to specify logical AND, as it is the default operator for joining search terms:
"Search term 1" "Search term 2"
The OR keyword is used for logical OR:
"Search term 1" OR "Search term 2"
A prefixed minus sign is used for logical NOT:
"Search term 1" −"Search term 2"
See also
Boolean algebras canonically defined
Boolean differential calculus
Booleo
Cantor algebra
Heyting algebra
List of Boolean algebra topics
Logic design
Principia Mathematica
Three-valued logic
Vector logic
Notes
References
Further reading
Bocheński, Józef Maria (1959). A Précis of Mathematical Logic''. Translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel.
Historical perspective
, several relevant chapters by Hailperin, Valencia, and Grattan-Guinness
External links
1847 introductions
Algebraic logic
Articles with example code | Boolean algebra | [
"Mathematics"
] | 11,081 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic",
"Algebraic logic"
] |
54,477,794 | https://en.wikipedia.org/wiki/Resistive%20plate%20chamber | A resistive plate chamber (RPC) is a particle detector widely used in high energy physics. RPCs are used for detecting muons in many modern experiments, including ATLAS, CMS, and BES III.
References
Particle detectors
Science experiments | Resistive plate chamber | [
"Technology",
"Engineering"
] | 51 | [
"Particle detectors",
"Measuring instruments"
] |
54,477,872 | https://en.wikipedia.org/wiki/Young%27s%20inequality%20for%20products | In mathematics, Young's inequality for products is a mathematical inequality about the product of two numbers. The inequality is named after William Henry Young and should not be confused with Young's convolution inequality.
Young's inequality for products can be used to prove Hölder's inequality. It is also widely used to estimate the norm of nonlinear terms in PDE theory, since it allows one to estimate a product of two terms by a sum of the same terms raised to a power and scaled.
Standard version for conjugate Hölder exponents
The standard form of the inequality, which can be used to prove Hölder's inequality, is the following: if a and b are nonnegative real numbers and p and q are real numbers greater than 1 such that 1/p + 1/q = 1, then ab ≤ a^p/p + b^q/q.
A second proof is via Jensen's inequality.
Yet another proof is to first prove the inequality in a normalized special case and then apply the result to general a and b. Such a proof also illustrates why the Hölder conjugate exponent is the only possible parameter that makes Young's inequality hold for all non-negative values.
Young's inequality may equivalently be written as a^α b^β ≤ αa + βb, where α = 1/p, β = 1/q, and α + β = 1.
In this form it is just the concavity of the logarithm function.
Equality holds if and only if a = b.
This also follows from the weighted AM-GM inequality.
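The inequality and its equality case are easy to check numerically. The following Python sketch samples random nonnegative values and conjugate exponents; the ranges, seed, and tolerances are arbitrary.

```python
import random

random.seed(0)
for _ in range(10_000):
    p = random.uniform(1.01, 10.0)
    q = p / (p - 1.0)                      # conjugate exponent, 1/p + 1/q = 1
    a, b = random.uniform(0, 5), random.uniform(0, 5)
    assert a * b <= a**p / p + b**q / q + 1e-12   # small slack for rounding

# equality case: choose b with b**q == a**p, i.e. b = a**(p - 1)
p, a = 3.0, 1.7
q, b = p / (p - 1.0), a**(p - 1.0)
print(abs(a * b - (a**p / p + b**q / q)) < 1e-12)   # True
```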
Generalizations
Elementary case
An elementary case of Young's inequality is the inequality with exponent 2, ab ≤ a^2/2 + b^2/2,
which also gives rise to the so-called Young's inequality with ε (valid for every ε > 0), namely ab ≤ a^2/(2ε) + εb^2/2, sometimes called the Peter–Paul inequality.
This name refers to the fact that tighter control of the second term is achieved at the cost of losing some control of the first term – one must "rob Peter to pay Paul".
Proof: Young's inequality with exponent 2 is the special case p = q = 2. However, it has a more elementary proof.
Start by observing that the square of every real number is zero or positive. Therefore, for every pair of real numbers a and b we can write: 0 ≤ (a − b)^2.
Work out the square of the right hand side: 0 ≤ a^2 − 2ab + b^2.
Add 2ab to both sides: 2ab ≤ a^2 + b^2.
Divide both sides by 2 and we have Young's inequality with exponent 2: ab ≤ a^2/2 + b^2/2.
Young's inequality with ε follows by substituting a/√ε for a and b√ε for b into Young's inequality with exponent 2.
Matricial generalization
T. Ando proved a generalization of Young's inequality for complex matrices ordered
by Loewner ordering. It states that for any pair of complex matrices of order there exists a unitary matrix such that
where denotes the conjugate transpose of the matrix and
Standard version for increasing functions
For the standard version of the inequality, let f denote a real-valued, continuous and strictly increasing function on [0, c] with c > 0 and f(0) = 0, and let g denote the inverse function of f. Then, for all a ∈ [0, c] and b ∈ [0, f(c)],
ab ≤ ∫₀^a f(x) dx + ∫₀^b g(y) dy,
with equality if and only if b = f(a).
With f(x) = x^(p−1) and g(y) = y^(q−1) this reduces to the standard version for conjugate Hölder exponents.
For details and generalizations we refer to the paper of Mitroi & Niculescu.
Generalization using Fenchel–Legendre transforms
By denoting the convex conjugate of a real function f by f*, we obtain ab ≤ f(a) + f*(b).
This follows immediately from the definition of the convex conjugate. For a convex function f this also follows from the Legendre transformation.
More generally, if f is defined on a real vector space X and its convex conjugate is denoted by f* (and is defined on the dual space X*), then ⟨u, v⟩ ≤ f(u) + f*(v),
where ⟨·, ·⟩ is the dual pairing.
Examples
The convex conjugate of f(a) = a^p/p is f*(b) = b^q/q with q such that 1/p + 1/q = 1, and thus Young's inequality for conjugate Hölder exponents mentioned above is a special case.
The Legendre transform of f(a) = e^a is f*(b) = b ln b − b, hence ab ≤ e^a + b ln b − b for all non-negative a and b. This estimate is useful in large deviations theory under exponential moment conditions, because b ln b appears in the definition of relative entropy, which is the rate function in Sanov's theorem.
See also
Notes
References
External links
Young's Inequality at PlanetMath
Articles containing proofs
Inequalities | Young's inequality for products | [
"Mathematics"
] | 749 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
54,480,612 | https://en.wikipedia.org/wiki/DMTMM | DMTMM (4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methyl-morpholinium chloride) is an organic triazine derivative commonly used for activation of carboxylic acids, particularly for amide synthesis. Amide coupling is one of the most common reactions in organic chemistry and DMTMM is one reagent used for that reaction. The mechanism of DMTMM coupling is similar to other common amide coupling reactions involving activated carboxylic acids. Its precursor, 2-chloro-4,6-dimethoxy-1,3,5-triazine (CDMT), has also been used for amide coupling. DMTMM has also been used to synthesize other carboxylic functional groups such as esters and anhydrides. DMTMM is usually used in the chloride form but the tetrafluoroborate salt is also commercially available.
Synthesis
DMTMM is prepared by reaction of 2-chloro-4,6-dimethoxy-1,3,5-triazine (CDMT) with N-methylmorpholine (NMM). It was first reported in 1999. CDMT spontaneously reacts with NMM to form the quaternary ammonium chloride salt of DMTMM.
Reactions
Amides
Amides can be readily prepared from the corresponding carboxylic acid and amine using DMTMM coupling. DMTMM has been shown to be preferable to other coupling agents in several cases, such as for sterically hindered amines and for ligation of polysaccharides such as hyaluronic acid.
Other carboxylic derivatives
Despite primarily being used for amide synthesis, DMTMM can also be used to make esters from the corresponding alcohol and carboxylic acid. DMTMM has also been applied to anhydride synthesis. The synthesis of each carboxylic derivative is similar, relying on the activation of the starting carboxylic acid followed by nucleophilic attack by another molecule.
Coupling mechanism
DMTMM uses a typical mechanism to form carboxylic acid derivatives. First, the carboxylic acid reacts with DMTMM to form the active ester, releasing a molecule of N-methylmorpholine (NMM). The resulting ester is highly reactive and can undergo a nucleophilic attack by an amine, an alcohol, or another nucleophile. A molecule of 4,6-dimethoxy-1,3,5-triazin-2-ol is released and the corresponding carboxylic derivative is formed.
Safety
DMTMM can cause damage to the skin and eyes and may be toxic if ingested. Protective gloves, lab coats, and eye protection should be employed to reduce exposure while using DMTMM. DMTMM should be stored at -20 °C and kept dry.
References
Morpholines
Triazines
Peptide coupling reagents
Chlorides
Quaternary ammonium compounds | DMTMM | [
"Chemistry",
"Biology"
] | 654 | [
"Chlorides",
"Inorganic compounds",
"Peptide coupling reagents",
"Salts",
"Reagents for organic chemistry",
"Reagents for biochemistry"
] |
54,483,847 | https://en.wikipedia.org/wiki/Chevalley%20restriction%20theorem | In the mathematical theory of Lie groups, the Chevalley restriction theorem describes functions on a Lie algebra which are invariant under the action of a Lie group in terms of functions on a Cartan subalgebra.
Statement
Chevalley's theorem requires the following notation: G is a complex semisimple Lie group with Lie algebra 𝔤, 𝔥 ⊂ 𝔤 is a Cartan subalgebra, and W is the corresponding Weyl group.
Chevalley's theorem asserts that restriction of polynomial functions from 𝔤 to 𝔥 induces an isomorphism
between the algebra ℂ[𝔤]^G of G-invariant polynomial functions on 𝔤 and the algebra ℂ[𝔥]^W of W-invariant polynomial functions on 𝔥.
Proofs
One proof uses properties of representations of highest weight; another proof exploits the geometric properties of the restriction map.
References
Lie groups
Lie algebras
Representation theory
Algebraic geometry | Chevalley restriction theorem | [
"Mathematics"
] | 118 | [
"Lie groups",
"Mathematical structures",
"Fields of abstract algebra",
"Algebraic structures",
"Algebraic geometry"
] |
54,484,502 | https://en.wikipedia.org/wiki/Thomas%E2%80%93Fermi%20equation | In mathematics, the Thomas–Fermi equation for the neutral atom is a second order non-linear ordinary differential equation, named after Llewellyn Thomas and Enrico Fermi, which can be derived by applying the Thomas–Fermi model to atoms. The equation reads d²y/dx² = y^(3/2)/√x,
subject to the boundary conditions y(0) = 1 and y → 0 as x → ∞.
If y approaches zero as x becomes large, this equation models the charge distribution of a neutral atom as a function of radius x. Solutions where y becomes zero at finite x model positive ions. For solutions where y becomes large and positive as x becomes large, it can be interpreted as a model of a compressed atom, where the charge is squeezed into a smaller space. In this case the atom ends at the value of x for which x·y′(x) = y(x).
Transformations
Introducing the transformation converts the equation to
This equation is similar to the Lane–Emden equation with polytropic index 3/2 except for the sign difference.
The original equation is invariant under the transformation . Hence, the equation can be made equidimensional by introducing into the equation, leading to
so that the substitution reduces the equation to
Treating as the dependent variable and as the independent variable, we can reduce the above equation to
But this first order equation has no known explicit solution, hence, the approach turns to either numerical or approximate methods.
Sommerfeld's approximation
The equation has a particular solution, which satisfies the boundary condition that y → 0 as x → ∞, but not the boundary condition y(0) = 1. This particular solution is y = 144/x³.
Arnold Sommerfeld used this particular solution and provided an approximate solution which can satisfy the other boundary condition in 1932. If the transformation is introduced, the equation becomes
The particular solution in the transformed variable is then . So one assumes a solution of the form and if this is substituted in the above equation and the coefficients of are equated, one obtains the value for , which is given by the roots of the equation . The two roots are , where we need to take the positive root to avoid the singularity at the origin. This solution already satisfies the first boundary condition (), so, to satisfy the second boundary condition, one writes to the same level of accuracy for an arbitrary
The second boundary condition will be satisfied if as . This condition is satisfied if and since , Sommerfeld found the approximation as . Therefore, the approximate solution is
This solution predicts the correct solution accurately for large x, but still fails near the origin.
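The particular solution can be verified symbolically. The following sketch assumes the SymPy library and checks that y = 144/x³ satisfies the equation for positive x.

```python
import sympy as sp

x = sp.symbols("x", positive=True)
y = 144 / x**3

# residual of y'' - y^(3/2)/sqrt(x); it should simplify to zero
residual = sp.diff(y, x, 2) - y**sp.Rational(3, 2) / sp.sqrt(x)
print(sp.simplify(residual))   # 0
```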
Solution near origin
Enrico Fermi provided the solution for small x, which was later extended by Edward B. Baker. Hence for small x,
where .
It has been reported by Salvatore Esposito that the Italian physicist Ettore Majorana found in 1928 a semi-analytical series solution to the Thomas–Fermi equation for the neutral atom, which however remained unpublished until 2001. Using this approach it is possible to compute the constant B mentioned above to practically arbitrarily high accuracy; for example, its value has been computed to 100 digits.
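The constant B, the initial slope of the neutral-atom solution, can be estimated numerically with a simple shooting method. The following Python sketch assumes NumPy and SciPy; the starting offset, integration range, and bracketing interval are arbitrary choices, and the scheme aims only at a few correct digits rather than the high-precision values discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):
    y, yp = u
    return [yp, max(y, 0.0) ** 1.5 / np.sqrt(x)]   # y'' = y^(3/2) / sqrt(x)

def classify(B, x0=1e-6, x_max=60.0):
    # start slightly off x = 0 using the leading behaviour y ~ 1 + B x + (4/3) x^(3/2)
    y0 = 1.0 + B * x0 + (4.0 / 3.0) * x0 ** 1.5
    yp0 = B + 2.0 * np.sqrt(x0)
    hit_zero = lambda x, u: u[0]           # y reaches zero: slope too negative
    hit_zero.terminal, hit_zero.direction = True, -1
    blow_up = lambda x, u: u[0] - 10.0     # y grows large: slope not negative enough
    blow_up.terminal, blow_up.direction = True, 1
    sol = solve_ivp(rhs, (x0, x_max), [y0, yp0], events=[hit_zero, blow_up],
                    rtol=1e-9, atol=1e-11)
    return "too_negative" if sol.t_events[0].size else "not_negative_enough"

lo, hi = -2.0, -1.0                        # brackets the neutral-atom slope
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if classify(mid) == "too_negative":
        lo = mid
    else:
        hi = mid
print("B ~", 0.5 * (lo + hi))              # expected to be close to -1.588
```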
References
Eponymous equations of physics
Ordinary differential equations | Thomas–Fermi equation | [
"Physics"
] | 585 | [
"Eponymous equations of physics",
"Equations of physics"
] |
54,484,655 | https://en.wikipedia.org/wiki/List%20of%20nonlinear%20ordinary%20differential%20equations | Differential equations are prominent in many scientific areas. Nonlinear ones are of particular interest for their commonality in describing real-world systems and how much more difficult they are to solve compared to linear differential equations. This list presents nonlinear ordinary differential equations that have been named, sorted by area of interest.
Mathematics
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Application
!Reference
|-
|Abel's differential equation of the first kind
|1
|
|Class of differential equation which may be solved implicitly
|
|-
|Abel's differential equation of the second kind
|1
|
|Class of differential equation which may be solved implicitly
|
|-
|Bernoulli equation
|1
|
|Class of differential equation which may be solved exactly
|
|-
|Binomial differential equation
|
|
|Class of differential equation which may sometimes be solved exactly
|
|-
|Briot-Bouquet Equation
|1
|
|Class of differential equation which may sometimes be solved exactly
|
|-
|Cherwell-Wright differential equation
|1
| or the related form
|An example of a nonlinear delay differential equation; applications in number theory, distribution of primes, and control theory
|
|-
|Chrystal's equation
|1
|
|Generalization of Clairaut's equation with a singular solution
|
|-
|Clairaut's equation
|1
|
|Particular case of d'Alembert's equation which may be solved exactly
|
|-
|d'Alembert's equation or Lagrange's equation
|1
|
|May be solved exactly
|
|-
|Darboux equation
|1
|
|Can be reduced to a Bernoulli differential equation; a general case of the Jacobi equation
|
|-
|Elliptic function
|1
|
|Equation for which the elliptic functions are solutions
|
|-
|Euler's differential equation
|1
|
|A separable differential equation
|
|-
|Euler's differential equation
|1
|
|A differential equation which may be solved with Bessel functions
|
|-
|Jacobi equation
|1
|
|Special case of the Darboux equation, integrable in closed form
|
|-
|Loewner differential equation
|1
|
|Important in complex analysis and geometric function theory
|
|-
|Logistic differential equation (sometimes known as the Verhulst model)
|1
|
|Special case of the Bernoulli differential equation; many applications including in population dynamics
|
|-
|Lorenz attractor
|1
|
|Chaos theory, dynamical systems, meteorology
|
|-
|Nahm equations
|1
|
|Differential geometry, gauge theory, mathematical physics, magnetic monopoles
|
|-
|Painlevé I transcendent
|2
|
|One of fifty classes of differential equation of the form ; the six Painlevé transcendents required new special functions to solve
|
|-
|Painlevé II transcendent
|2
|
|One of fifty classes of differential equation of the form ; the six Painlevé transcendents required new special functions to solve
|
|-
|Painlevé III transcendent
|2
|
|One of fifty classes of differential equation of the form ; the six Painlevé transcendents required new special functions to solve
|
|-
|Painlevé IV transcendent
|2
|
|One of fifty classes of differential equation of the form ; the six Painlevé transcendents required new special functions to solve
|
|-
|Painlevé V transcendent
|2
|
|One of fifty classes of differential equation of the form ; the six Painlevé transcendents required new special functions to solve
|
|-
|Painlevé VI transcendent
|2
|
|All of the other Painlevé transcendents are degenerations of the sixth
|
|-
|Rabinovich–Fabrikant equations
|1
|
|Chaos theory, dynamical systems
|
|-
|Riccati equation
|1
|
|Class of first order differential equations that is quadratic in the unknown. Can reduce to Bernoulli differential equation or linear differential equation in certain cases
|
|-
|Rössler attractor
|1
|
|Chaos theory, dynamical systems
|
|}
Physics
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Applications
!Reference
|-
|Bellman's equation or Emden-Fowler's equation
|2
| (Emden-Fowler) which reduces to if (Bellman)
|Diffusion in a slab
|
|-
|Besant-Rayleigh-Plesset equation
|2
|
|Spherical bubble in fluid dynamics
|
|-
|Blasius equation
|3
|
|Blasius boundary layer
|
|-
|Chandrasekhar's white dwarf equation
|2
|
|Gravitational potential of white dwarf in astrophysics
|
|-
|De Boer-Ludford equation
|2
|
|Plasma physics
|
|-
|Emden–Chandrasekhar equation
|2
|
|Astrophysics
|
|-
|Ermakov-Pinney equation
|2
|
|Electromagnetism, oscillation, scalar field cosmologies
|
|-
|Falkner–Skan equation
|3
|
|Falkner–Skan boundary layer
|
|-
|Friedmann equations
|2
| and
|Physical cosmology
|
|-
|Heisenberg equation of motion
|1
|
|Quantum mechanics
|
|-
|Ivey's equation
|2
|
|Space charge theory
|
|-
|Kidder equation
|2
|
|Flow through porous medium
|
|-
|Krogdahl equation
|2
|
|Stellar pulsation in astrophysics
|
|-
|Lagerstrom equation
|2
|
|One dimensional viscous flow at low Reynolds numbers
|
|-
|Lane–Emden equation or polytropic differential equation
|2
|
|Astrophysics
|
|-
|Liñán's equation
|2
|
|Combustion
|
|-
|Pendulum equation
|2
|
|Mechanics
|
|-
|Poisson–Boltzmann equation (1d case)
|2
|
|Inflammability and the theory of thermal explosions
|
|-
|Stuart–Landau equation
|1
|
|Hydrodynamic stability
|
|-
|Taylor–Maccoll equation
|2
| where
|Flow behind a conical shock wave
|
|-
|Thomas–Fermi equation
|2
|
|Quantum mechanics
|
|-
|Toda lattice
|1
|where
|Model of one-dimensional crystal in solid-state physics, Langmuir oscillations in plasma, quantum cohomology; notable for being a completely integrable system
|
|}
Engineering
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Applications
!Reference
|-
|Duffing equation
|2
|
|Oscillators, hysteresis, chaotic dynamical systems
|
|-
|Lewis regulator
|2
|
|Oscillators
|
|-
|Liénard equation
|2
| with odd and even
|Oscillators, electrical engineering, dynamical systems
|
|-
|Rayleigh equation
|2
|
|Oscillators (especially auto-oscillation), acoustics; the Van der Pol equation is a Rayleigh equation
|
|-
|Van der Pol equation
|2
|
|Oscillators, electrical engineering, chaotic dynamical systems
|
|}
Chemistry
Biology and medicine
Economics and finance
See also
List of linear ordinary differential equations
List of nonlinear partial differential equations
List of named differential equations
List of stochastic differential equations
References
differential, ordinary, nonlinear
Nonlinear systems | List of nonlinear ordinary differential equations | [
"Mathematics"
] | 1,633 | [
"Mathematical objects",
"Equations",
"Nonlinear systems",
"Mathematical tables",
"Lists of equations",
"Dynamical systems"
] |
53,178,099 | https://en.wikipedia.org/wiki/Color%20Genomics | Color Health, Inc. (founded as Color Genomics) makes population-scale cancer detection and care accessible, convenient, and cost-effective for employers, health plans, and unions. With nearly a decade of experience, Color has served millions of patients and partnered with innovators such as the National Institutes of Health, the CDC, large public health departments, and more.
Color’s cancer detection and management solution, built in partnership with the American Cancer Society, is a comprehensive, integrated care model that supports individuals from screening to diagnosis and care. Color provides risk education and assessment, accessible screenings, a nationwide clinical care network, and ongoing educational programming to help individuals and organizations take control of cancer.
History
The company was co-founded in 2015 by Elad Gil, Nish Bhat, Taylor Sittler, and Othman Laraki, who is the CEO, in Burlingame, California.
In November 2021, the company had a valuation of $4.6B and collected $100 million in a series E financing round.
Expansion to population health
Color provides technology, software, and clinical services for population health programs.
Color has partnered with health systems including NorthShore University Health System, Ochsner Health System, and Jefferson Health.
COVID-19 testing programs
In early 2020, recognizing the growing threat that the COVID-19 pandemic presented, Color mobilized its existing software, logistics expertise and lab operations to focus on mass COVID-19 testing.
Color operates a high-throughput CLIA-certified COVID-19 testing laboratory and processes thousands of samples a day. The integrated process includes sign up, the self-collection kit, and results returned via text and email to patients, clinicians, and public health authorities. Color returns results, on average, within 24–48 hours.
In August 2020, Color was running some of the highest-capacity test sites in the country. Color was also responsible for the majority of San Francisco’s COVID-19 testing with an average turnaround time within 24 hours, and most results returned in under 48 hours. Color worked with San Francisco’s CityTestSF program, Alameda County Health Services and federally qualified health centers in Alameda County, Marin County and others. The company has also partnered with a wide variety of universities, employers and public health entities, including USC and United Airlines.
In January 2022, Color had computer difficulties that resulted in delayed test results and closed testing stations.
Testing system
Genetics
Color’s physician-ordered test can be initiated by individuals online, and a sample collection kit is sent in the mail. Individuals provide a saliva sample and return the kit in a pre-paid package. Color's CLIA-certified and CAP-approved lab analyzes for variants in the breast cancer genes BRCA1 and BRCA2, as well as 28 other genes associated with breast, prostate, colon, uterine, stomach, melanoma, pancreatic, and ovarian cancers. The test also identifies variants in 30 genes related to hereditary heart conditions as well as genes that may impact medication response. Genetic counseling with board-certified genetic counselors is available for free to all individuals who use Color.
COVID-19
Color’s FDA Emergency Use Authorization (EUA) COVID-19 test can be accessed as a part of testing programs initiated by a public health entity, university, employer or other organization. The test is a dry anterior nasal swab, approved for use either in an on-site or at-home setting without the need for a healthcare provider to monitor sample collection, which eases the burden on the healthcare system and reduces testing costs.
The company has received an FDA EUA for the testing assay, which is a nucleic acid amplification method called LAMP, or loop-mediated isothermal amplification. This enables processing test results 50% faster than RT-PCR, the amplification method used at most other labs. LAMP relies on a different set of chemical reagents than standard PCR tests, which helps the process avoids supply chain scarcity.
Research
In 2018, Color was selected, alongside the Broad Institute of MIT and Harvard, and the Laboratory for Molecular Medicine (LMM) at Partners HealthCare, to establish one of three genome centers around the country for the National Institutes of Health’s historic All of Us Research Program. All of Us will sequence one million or more people across the US, with the goal of accelerating health research and enabling individualized prevention, treatment, and care. The program has a focus on recruitment from populations that have been historically underrepresented in clinical science and genomic medicine, in order to build a diverse biomedical data resource that provides a foundation for better insights into the biological, environmental, and behavioral factors that influence health.
In 2019, Color was named the sole awardee to deliver all of the genetic counseling for All of Us. As the awardee, Color will customize software and tools to integrate data from all the genome centers, standardize reporting across the program, and ensure all results are returned in a unified way. This is a first year $4.6 million grant as part of a multi-year $25 million project.
In collaboration with the Women’s Health Initiative and Dr. Mary-Claire King at the University of Washington, Color provided genetic sequencing for the cohort of 10,000 Fabulous Ladies Over Seventy (FLOSSIES). This is the largest publicly available dataset of genetic variants associated with hereditary cancer in healthy, older individuals.
Color Data, a database containing aggregated genetic and clinical information from 50,000 individuals who took a Color test, helps researchers and scientists identify genotype-phenotype correlations and novel variants for functional analysis, as well as enables data-driven drug discovery and development. It is the largest public database of its kind.
As a part of the MAGENTA Study, which aims to improve availability of genetic testing for hereditary cancer syndromes to at-risk individuals through the use of an online genetic testing service, Color is working with a Stand Up to Cancer Dream Team that includes physicians, scientists and researchers from the MD Anderson Cancer Center and the University of Washington to provide genetic counseling to high-risk individuals through delivery models such as tele-counseling.
In collaboration with Dr. Laura Esserman at University of California and Sanford Health, Color is providing genetic testing for WISDOM, a 100,000-woman study that is comparing annual screenings with personalized, risk-based breast cancer screenings.
As part of the GENtleMEN Study, Color is working with Dr. Heather Cheng at the Fred Hutchinson Cancer Research Center and the University of Washington to provide genetic testing and counseling to men with advanced prostate cancer.
Color contributes anonymized variants to ClinVar, a free database managed by the National Center for Biotechnology Information (NCBI) at the National Institutes of Health (NIH) that helps researchers identify links between genes and disease.
Color's research collaborators include:
Mary-Claire King: advisor, scientific collaborator
Sekar Kathiresan: advisor, scientific collaborator
Heidi Rehm: scientific collaborator
References
Genomics companies
Medical tests
Biotechnology
Cancer research
Companies based in Burlingame, California
American companies established in 2015
2015 establishments in California
Privately held companies based in California | Color Genomics | [
"Biology"
] | 1,467 | [
"nan",
"Biotechnology"
] |
53,181,475 | https://en.wikipedia.org/wiki/NMR%20line%20broadening%20techniques | In chemistry, NMR line broadening techniques (or NMR line broadening experiments) can be used to determine the rate constant and the Gibbs free energy of exchange reactions of two different chemical compounds. If the two species are in equilibrium and exchange to each other, peaks of both species get broadened in the spectrum. This observation of broadened peaks can be used to obtain kinetic and thermodynamic information of the exchange reaction.
Determining bond rotational energies
A basic NMR line broadening experiment is to determine the rotational energy barrier of a certain chemical bond. If the bond rotates slowly enough compared to the NMR time scale (e.g., amide bond), two different species can be detected by the NMR spectrometer. Considering that the time scale of NMR spectroscopy is about a few seconds, this technique can be used to examine the kinetics and/or thermodynamics of chemical exchange reactions on the order of seconds.
In general, the energy barrier to rotate a bond is low enough at room temperature, which means that the rotation is fast, making the two different species indistinguishable. At low temperatures, however, it is harder for a bond to overcome the energy barrier to rotate, resulting in two separate peaks in the spectrum. With these principles, NMR spectra of a molecule with a high rotational barrier should be obtained at several different temperatures (i.e., variable temperature NMR) to distinguish two different peaks at low temperature in slow exchange and to find the temperature at which the two peaks merge.
Especially at the coalescence temperature (T_c), where the two peaks coalesce, the rate constant of rotation at T_c and the energy barrier of the rotation can be easily calculated. As the temperature increases, the exchange reaction gets faster, and at a certain temperature, which is T_c, the appearance of the peaks changes from two separate peaks in slow exchange to a single peak in fast exchange. The rate constant at T_c can be calculated with the following equation: k_c = π(ν_A − ν_B)/√2, where ν_A and ν_B are the chemical shifts (in Hz) of each species at lower temperatures where they are in slow exchange.
By using the Eyring equation, the Gibbs free energy of rotation, ΔG‡, can be determined: k = (kB·Tc/h)·exp(−ΔG‡/(R·Tc)) (Eyring equation), which rearranges to ΔG‡ = R·Tc·ln(kB·Tc/(h·k(Tc))), where R is the gas constant, kB is the Boltzmann constant, and h is the Planck constant.
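A minimal numerical sketch of these two relations (not part of the source article); the peak separation and coalescence temperature below are made-up inputs used only to show the arithmetic.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
k_B = 1.381e-23  # Boltzmann constant, J/K
h = 6.626e-34    # Planck constant, J*s

def coalescence_rate(delta_nu_hz):
    """Rate constant at the coalescence temperature, k(Tc) = pi * dnu / sqrt(2)."""
    return math.pi * delta_nu_hz / math.sqrt(2)

def gibbs_barrier_kj(T_c, k_c):
    """Gibbs free energy of activation from the Eyring equation, in kJ/mol."""
    return R * T_c * math.log(k_B * T_c / (h * k_c)) / 1000.0

# Hypothetical example: two peaks 60 Hz apart that coalesce at 298 K.
k_c = coalescence_rate(60.0)
print(f"k(Tc) = {k_c:.1f} s^-1")                            # about 133 s^-1
print(f"dG(activation) = {gibbs_barrier_kj(298.0, k_c):.1f} kJ/mol")  # about 61 kJ/mol
```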
Determining electron transfer self-exchange rates
Electron transfer self-exchange rates can also be determined from the experimental values of line-width and chemical shift. Sharp peaks of a diamagnetic compound can be broadened during electron transfer with its partner paramagnetic compound (the one-electron oxidized species), since paramagnetic compounds exhibit broader peaks at a different chemical shift. If the self-exchange rate is sufficiently fast compared to the NMR timescale, line-broadening of the peaks is observed at shifted chemical shifts in the spectrum. To determine the self-exchange rate, one chooses a characteristic peak of the diamagnetic compound and examines its broadening in the mixture with the partner paramagnetic compound. The broadened line-widths are proportional to the mole fractions, and the self-exchange rate can be determined from the mole fractions, chemical shifts, and line-widths using the equation k = 4π·fd·fp·(Δν)² / [(Wdp − fd·Wd − fp·Wp)·C], where k is the rate constant of electron transfer self-exchange, fd and fp are the mole fractions of the diamagnetic and paramagnetic compounds, Δν is the chemical shift difference (in Hz) between the pure diamagnetic and paramagnetic compounds, and Wdp is the half-width (width at half height) of the selected peak in the mixture. Wd and Wp are the peak widths of the pure diamagnetic and paramagnetic species, and C is the total concentration of the exchanging species in M (mol/L).
For a more accurate calculation of each mole fraction, the following equations can be used: fp = (δobs − δd)/Δν and fd = 1 − fp, where δobs is the shifted chemical shift of the selected peak and δd is the original chemical shift of the diamagnetic species, based on the assumption that the change in chemical shift is linearly correlated with the mole fraction of the paramagnetic species.
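A short Python sketch of this calculation (not from the source); all numerical inputs are hypothetical and chosen only to illustrate the formula.

```python
import math

def self_exchange_rate(f_d, f_p, delta_nu, W_dp, W_d, W_p, C):
    """Fast-exchange estimate of the electron-transfer self-exchange rate
    constant (M^-1 s^-1) from NMR line widths, as in the relation above."""
    return 4 * math.pi * f_d * f_p * delta_nu**2 / ((W_dp - f_d * W_d - f_p * W_p) * C)

# Hypothetical inputs: 70/30 diamagnetic/paramagnetic mixture, 5 mM total.
k = self_exchange_rate(f_d=0.7, f_p=0.3, delta_nu=2000.0,  # Hz
                       W_dp=45.0, W_d=5.0, W_p=60.0,        # Hz
                       C=5e-3)                               # mol/L
print(f"k_ex = {k:.2e} M^-1 s^-1")
```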
References
Nuclear magnetic resonance | NMR line broadening techniques | [
"Physics",
"Chemistry"
] | 853 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
53,181,502 | https://en.wikipedia.org/wiki/Polymer%20solution | Polymer solutions are solutions containing dissolved polymers. These may be liquid solutions (e.g. a polymer dissolved in a solvent), or solid solutions (e.g. a substance which has been plasticized).
The introduction into the polymer of small amounts of a solvent (plasticizer) reduces the temperature of glass transition, the yield temperature, and the viscosity of a melt.
An understanding of the thermodynamics of a polymer solution is critical to predicting its behavior in manufacturing processes: for example, its shrinkage or expansion in injection molding, or whether pigments and solvents will mix evenly with a polymer in the manufacture of paints and coatings. A recent theory on the viscosity of polymer solutions gives a physical explanation for various well-known empirical relations and numerical values, including the Huggins constant, but also reveals a novel and simple dependence on concentration and molar mass.
Applications
Polymer solutions are used in producing fibers, films, glues, lacquers, paints, and other items made of polymer materials. Thin layers of polymer solution can be used to produce light-emitting devices. Guar polymer solution gels can be used in hydraulic fracturing ("fracking").
See also
Flory–Huggins solution theory
Colloid systems
Gel
Solution polymerization
References
Further reading
Polymer chemistry | Polymer solution | [
"Chemistry",
"Materials_science",
"Engineering"
] | 261 | [
"Polymer stubs",
"Materials science",
"Homogeneous chemical mixtures",
"Polymer chemistry",
"Solutions",
"Organic chemistry stubs"
] |
57,780,625 | https://en.wikipedia.org/wiki/Diffraction-limited%20storage%20ring | Diffraction-limited storage rings (DLSR), or ultra-low emittance storage rings, are synchrotron light sources where the emittance of the electron beam in the storage ring is smaller than or comparable to the emittance of the X-ray photon beam they produce at the end of their insertion devices.
These facilities operate in the soft to hard X-ray range (100 eV–100 keV) with extremely high brilliance (on the order of 10²¹–10²² photons/s/mm²/mrad²/0.1% BW).
Together with X-ray free-electron lasers, they constitute the fourth generation of light sources, characterized by a relatively high coherent flux (on the order of 10¹⁴–10¹⁵ photons/s/0.1% BW for DLSRs), and enable extended physical and chemical characterization at the nano-scale.
Existing diffraction-limited storage rings
MAX IV Laboratory, in Lund, Sweden.
Sirius, in Campinas, Brazil
European Synchrotron Radiation Facility, Extremely Brilliant Source (ESRF-EBS), in Grenoble, France
DLSR upgrade or facilities under construction
Advanced Photon Source Upgrade (APS-U), in Argonne, Illinois, USA
Swiss Light Source 2, Upgrade (SLS 2.0), in Villigen, Switzerland
Planned or projected DLSR upgrades or new facilities
Upgrades
PETRA IV, Upgrade (PETRA IV), at DESY, Hamburg, Germany
Advanced Light Source, Upgrade (ALS-U), in Berkeley, California, USA
Diamond II (Diamond II), in Didcot, Oxfordshire, UK
ELETTRA 2.0 (Elettra 2.0), in Trieste, Italy
ALBA II, in Barcelona, Spain
SOLEIL II, in Saint-Aubin, France
New facilities
High Energy Photon Source (HEPS), in Beijing, China
See also
X-Ray Free Electron Lasers
References
X-rays
Photons
Accelerator physics | Diffraction-limited storage ring | [
"Physics"
] | 410 | [
"Applied and interdisciplinary physics",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Experimental physics",
"Accelerator physics"
] |
57,787,323 | https://en.wikipedia.org/wiki/Three-Axis%20Acceleration%20Switch | The three-axis acceleration switch is a micromachined microelectromechanical systems (MEMS) sensor that detects whether an acceleration event has exceeded a predefined threshold. It is a small, compact device, only 5mm by 5mm, and measures acceleration in the x, y, and z axes. It was developed by the Army Research Laboratory for the purposes of traumatic brain injury (TBI) research and was first introduced in 2012 at the 25th International Conference on Micro Electro Mechanical Systems (MEMS).
The three-axis acceleration switch was designed to obtain acceleration data more effectively than a conventional accelerometer in order to more accurately characterize the forces and shocks responsible for TBI. While miniature accelerometers require a constant power draw, the three-axis acceleration switch only draws current when it senses an acceleration event, using less energy and allowing the use of smaller batteries. The three-axis acceleration switch has been shown to exhibit an expected battery lifetime about 100 times longer than that of a digital accelerometer. In return, however, the acceleration switch has a lower resolution than a digital or analog accelerometer.
One potential application of the three-axis acceleration switch is in studying the head impacts of players in high-risk contact sports. Due to the size of conventional accelerometers, measuring the acceleration requires the device to be implemented inside the player's helmet, which is designed to mitigate the collision forces and thus may not accurately reflect the true level of injury potential. In contrast, the miniature nature of the acceleration switch makes it easier for the switch to be affixed directly onto the participant's head.
References
Accelerometers
Transducers
Microelectronic and microelectromechanical systems | Three-Axis Acceleration Switch | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 371 | [
"Accelerometers",
"Physical quantities",
"Microtechnology",
"Acceleration",
"Materials science",
"Measuring instruments",
"Microelectronic and microelectromechanical systems"
] |
75,981,281 | https://en.wikipedia.org/wiki/Graphic%20statics | In a broad sense, the term graphic statics is used to describe the technique of solving particular practical problems of statics using graphical means. Actively used in the architecture of the 19th century, the methods of graphic statics were largely abandoned in the second half of the 20th century, primarily due to widespread use of frame structures of steel and reinforced concrete that facilitated analysis based on linear algebra. The beginning of the 21st century was marked by a "renaissance" of the technique driven by its addition to the computer-aided design tools thus enabling engineers to instantly visualize form and forces.
History
Markou and Ruan trace the origins of graphic statics to da Vinci and Galileo, who used graphical means to calculate the sum of forces, to Simon Stevin's parallelogram of forces, and to the 1725 introduction of the force polygon and funicular polygon by Pierre Varignon. Giovanni Poleni used graphical calculations (and Robert Hooke's analogy between the hanging chain and the standing structure) while studying the dome of Saint Peter's Basilica in Rome (1748). Gabriel Lamé and Émile Clapeyron studied the dome of Saint Isaac's Cathedral with the help of the force and funicular polygons (1823).
Finally, Carl Culmann had established the new discipline (and gave it a name) in his 1864 work Die Graphische Statik. Culmann was inspired by preceding work by Jean-Victor Poncelet on earth pressure and Lehrbuch der Statik by August Möbius. The next twenty years saw rapid development of methods that involved, among others, major physicists like James Clerk Maxwell and William Rankine. In 1872 Luigi Cremona introduced the Cremona diagram to calculate trusses, in 1873 Robert H. Bow established the "Bow's notation" that is still in use. It fell out of use, especially since construction methods, such as concrete post and beam, allowed for familiar numerical calculations. Access to powerful computation gave structural engineers new tools to compute stresses for shell structures such as Finite element method.
While the method is not commonly used for construction today, graphic statics has been proposed as an educational tool to teach structural intuition. It is employed in classes for architecture and structural engineering students at MIT and ETH Zurich.
Concepts
Polygon of forces
To graphically determine the resultant force of multiple forces, the acting forces can be arranged as edges of a polygon by attaching the beginning of one force vector to the end of another in an arbitrary order. Then the vector value of the resultant force would be determined by the missing edge of the polygon. In the diagram, the forces P1 to P6 are applied to the point O. The polygon is constructed starting with P1 and P2 using the parallelogram of forces (vertex a). The process is repeated (adding P3 yields the vertex b, etc.). The remaining edge of the polygon O-e represents the resultant force R.
In the case of two applied forces, their sum (resultant force) can be found graphically using a parallelogram of forces.
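As a minimal numerical sketch (not part of the historical method), the same resultant that the polygon of forces yields graphically can be obtained by summing the force vectors; the force values below are arbitrary illustrative inputs.

```python
import numpy as np

# Hypothetical applied forces (N) acting at a single point, as 2D vectors.
forces = np.array([
    [10.0,  0.0],
    [ 5.0,  8.0],
    [-3.0,  4.0],
    [-6.0, -2.0],
])

# Chaining the vectors head-to-tail builds the force polygon; the resultant
# is the vector that closes it, i.e. simply the sum of all applied forces.
resultant = forces.sum(axis=0)
magnitude = np.linalg.norm(resultant)
angle_deg = np.degrees(np.arctan2(resultant[1], resultant[0]))

print(f"Resultant R = {resultant}, |R| = {magnitude:.2f} N at {angle_deg:.1f} deg")
```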
Digital adaptations of graphic statics
With the advent of computational tools and parametric design, graphic statics has undergone significant evolution, transitioning from manual drawing techniques to digital workflows. These adaptations have enhanced its precision, accessibility, and integration into modern architectural and engineering practices.
Software tools
Several software platforms have integrated graphic statics principles, enabling designers to explore equilibrium-based forms and optimize structures efficiently. A few examples include:
RhinoVAULT: A plug-in for Rhinoceros 3D developed by the Block Research Group at ETH Zurich. RhinoVAULT uses Thrust Network Analysis (TNA) to apply graphic statics for the design of compression-only structures, including shell vaults and domes.
Grasshopper Add-ons: Extensions such as Kangaroo Physics and Millipede in Grasshopper 3D have incorporated elements of graphic statics to facilitate form-finding and structural analysis within parametric design frameworks.
3D Graphic Statics Tools: Emerging tools like Compass and eQuilibrium allow for the visualization and manipulation of 3D graphic statics diagrams, broadening its applications in three-dimensional design.
Applications
Digital adaptations have expanded the scope of graphic statics, making it a valuable tool for:
Material Optimization: By visualizing force flows, designers can reduce material usage while maintaining structural efficiency.
Complex Geometries: Parametric tools enable the exploration of intricate geometries that would be challenging to model manually.
Interactive Design: Real-time manipulation of force diagrams in software provides immediate feedback on structural behavior, fostering intuitive decision-making during the design process.
Educational impact
The digitization of graphic statics has also influenced its role in education. Many universities such as MIT now teach graphic statics through interactive software, enabling students to experiment with equilibrium concepts in a hands-on manner.
Limitations
Despite its advantages, digital graphic statics faces challenges such as scalability for highly complex systems and integration with advanced analytical tools like Finite Element Method (FEM). However, ongoing research continues to address these limitations.
References
Sources
Structural analysis | Graphic statics | [
"Engineering"
] | 1,035 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
75,985,917 | https://en.wikipedia.org/wiki/Avionics%20bay | An avionics bay, also known as an E&E bay or electronic equipment bay in aerospace engineering, is a compartment in an aircraft that houses the avionics and other electronic equipment essential for operation, such as flight control computers, navigation systems, and communication systems. It is designed to be modular, with individual components that can be easily removed and replaced in case of failure, and to be highly reliable and fault-tolerant with various backup systems.
In larger commercial airplanes, the main avionics compartment is typically located in the forward section of the aircraft under the cockpit. This location provides easy access to the avionics and other electronic equipment for maintenance and repair.
For example, on larger aircraft such as the Boeing 747-400, the avionics bays are divided into three parts: the main equipment center (MEC), the center equipment center (CEC), and the aft equipment center (AEC).
Components
Typically, the avionics bay contains plug-in modules for:
Flight Control Computer (FCC)
Autopilot
Automatic flight director system (AFDS)
Autothrottle system (A/T)
Mode control panel (MCP)
Flight management computer (FMC)
Primary flight computers (PFC)
Actuator control electronics (ACE)
Flight data recorder
Cockpit voice recorder
Battery and battery charger
The avionics bay also contains the oxygen tanks for the pilots in case of a cabin depressurization
Thermal management in spacecraft
In spacecraft, smoke detection is not practical for avionics bays as there is no forced airflow in the compartment. Suppressants, such as Halon, operate by either chemically interrupting the combustion process or by reducing the oxygen concentration within the bay's atmosphere.
In popular culture
In the film Executive Decision, the avionics bay of a 747-200 is used as a way to deploy a military team into the aircraft.
References
Avionics
Aircraft instruments
Aircraft systems
Aerospace engineering
Aircraft | Avionics bay | [
"Technology",
"Engineering"
] | 397 | [
"Systems engineering",
"Avionics",
"Measuring instruments",
"Aircraft systems",
"Aircraft instruments",
"Aerospace engineering"
] |
67,331,224 | https://en.wikipedia.org/wiki/Anderson%27s%20bridge | In electronics, Anderson's bridge is a bridge circuit used to measure the self-inductance of a coil. It enables the measurement of inductance by using other circuit components such as resistors and capacitors.
Anderson's bridge was invented by Alexander Anderson in 1891. He modified Maxwell's inductance–capacitance bridge so that it gives a very accurate measurement of self-inductance.
Balance conditions
The balance conditions for Anderson's bridge or, equivalently the values of the self-inductance and resistance of the given coil can be found using basic circuit analysis techniques such as KCL, KVL and using phasors.
Consider the circuit diagram of Anderson's bridge in the given figure. Let L1 be the self-inductance and R1 be the electrical resistance of the coil under consideration. Since the voltmeter is ideally assumed to have nearly infinite impedance, the currents in branches ab and bc and those in the branches de and ec are taken to be equal. Applying Kirchhoff's current law at node d, it can be shown that-
Since the analysis is being made under the balanced condition of the bridge, it can be said that the voltage drop across the voltmeter is essentially zero. On applying Kirchhoff's voltage law to the appropriate loops(in the anti-clockwise direction), the following relations hold-
On solving these sets of equations, one can finally obtain the self-inductance and resistance of the coil as-
Advantages
Anderson's bridge can also be used the other way round: that is, to measure the capacitance of an unknown capacitor using an inductor coil whose self-inductance and electrical resistance have been pre-determined to a high degree of precision. Notably, the measured self-inductance of the coil does not change even when dielectric loss within the capacitor is taken into account. Another advantage of this modified bridge is that, unlike the variable capacitor used in the Maxwell bridge, it makes use of a fixed capacitor, which is considerably cheaper.
Disadvantages
One of the obvious difficulties associated with Anderson's bridge is the relatively complex balance-equation calculation compared to the Maxwell bridge. The circuit connections and computations are likewise more cumbersome.
References
Measuring instruments
Bridge circuits
Analog circuits
Irish inventions
Impedance measurements | Anderson's bridge | [
"Physics",
"Technology",
"Engineering"
] | 499 | [
"Physical quantities",
"Analog circuits",
"Measuring instruments",
"Electronic engineering",
"Impedance measurements",
"Electrical resistance and conductance"
] |
67,332,262 | https://en.wikipedia.org/wiki/Aida%20El-Khadra | Aida Xenia El-Khadra is a particle physicist who is a professor of high energy physics at the University of Illinois at Urbana–Champaign. She is the co-chair of the Muon g-2 Theory Initiative, which reported hints of new physics beyond the Standard Model in 2021. She is a fellow of the American Physical Society and the Alfred P. Sloan Foundation.
Early life and education
El-Khadra was an undergraduate student at the Free University of Berlin, where she earned a master's degree in physics. She moved to the University of California, Los Angeles for her doctoral research, where she studied semi-leptonic decays. El-Khadra was a postdoctoral researcher at the Brookhaven National Laboratory, Fermilab, and the Ohio State University.
Research and Pathways
In 1995, El-Khadra joined the faculty at the University of Illinois at Urbana–Champaign, where she was promoted to professor in 2008. Her research makes use of quantum chromodynamics to understand processes in flavor physics. She spent 1998 as a Fellow at the Center for Advanced Study, where she developed and tested new lattice actions. El-Khadra directs the Fermilab Lattice collaboration and was named a distinguished scholar at Fermilab in 2016.
El-Khadra oversaw the theoretical aspects of the Muon g-2 experiments. The collaboration measured the magnetic moment of the muon with unparalleled precision. El-Khadra has been involved with several theoretical attempts to predict the anomalous magnetic moment based on the Standard Model. In 2021, the experimental component of the collaboration reported a magnetic moment that was considerably larger than the value predicted by the Standard Model. These findings hint at new particles or forces beyond the Standard Model.
Achievements
El-Khadra was elected fellow of the American Physical Society in 2011 "for contributions to lattice QCD and flavor physics including pioneering studies of heavy quarks on the lattice, semi-leptonic and leptonic heavy-light meson decays, the strong coupling constant, and quark masses". She was named to the 2021 class of fellows of the American Association for the Advancement of Science. In 2022 she was awarded a Simons Fellowship.
Selected publications
References
Living people
Year of birth missing (living people)
Free University of Berlin alumni
University of Illinois Urbana-Champaign faculty
University of California, Los Angeles alumni
Particle physicists
Brookhaven National Laboratory staff
Ohio State University staff
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
21st-century women physicists | Aida El-Khadra | [
"Physics"
] | 522 | [
"Particle physicists",
"Particle physics"
] |
67,337,770 | https://en.wikipedia.org/wiki/Hydroxymethylation | Hydroxymethylation is a chemical reaction that installs the CH2OH group. The transformation can be implemented in many ways and applies to both industrial and biochemical processes.
Hydroxymethylation with formaldehyde
A common method for hydroxymethylation involves the reaction of formaldehyde with active C-H and N-H bonds:
R3C-H + CH2O → R3C-CH2OH
R2N-H + CH2O → R2N-CH2OH
A typical active C-H bond is provided by a terminal acetylene or the alpha protons of an aldehyde. In industry, hydroxymethylation of acetaldehyde with formaldehyde is used in the production of pentaerythritol:
P-H bonds are also prone to reaction with formaldehyde. Tetrakis(hydroxymethyl)phosphonium chloride ([P(CH2OH)4]Cl) is produced in this way from phosphine (PH3).
Hydroxymethylation in demethylation
5-Methylcytosine is a common epigenetic marker. Its methyl group can be modified by oxidation in a process called hydroxymethylation:
RCH3 + O → RCH2OH
This oxidation is thought to be a prelude to removal, regenerating cytosine.
Representative reactions
A two-step hydroxymethylation of aldehydes involves methylenation followed by hydroboration-oxidation:
RCHO + Ph3P=CH2 → RCH=CH2 + Ph3PO
RCH=CH2 + R2BH → RCH2-CH2BR2
RCH2-CH2BR2 + H2O2 → RCH2-CH2OH + "HOBR2"
Silylmethyl Grignard reagents are nucleophilic reagents for hydroxymethylation of ketones:
R2C=O + ClMgCH2SiR'3 → R2C(OMgCl)CH2SiR'3
R2C(OMgCl)CH2SiR'3 + H2O + H2O2 → R2C(OH)CH2OH + "HOSiR'3"
Reactions of hydroxymethylated compounds
A common reaction of hydroxymethylated compounds is further reaction with a second equivalent of an active X-H bond:
hydroxymethylation: X-H + CH2O → X-CH2OH
crosslinking: X-H + X-CH2OH → X-CH2-X + H2O
This pattern is illustrated by the use of formaldehyde in the production of various polymers and resins from phenol-formaldehyde condensations (Bakelite, Novolak, and calixarenes). Similar crosslinking occurs in urea-formaldehyde resins.
The hydroxymethylation of N-H and P-H bonds can often be reversed by base. This reaction is illustrated by the preparation of tris(hydroxymethyl)phosphine:
[P(CH2OH)4]Cl + NaOH → P(CH2OH)3 + H2O + H2C=O + NaCl
When conducted in the presence of chlorinating agents, hydroxymethylation leads to chloromethylation as illustrated by the Blanc chloromethylation.
Related
Hydroxyethylation involves the installation of the CH2CH2OH group, as practiced in ethoxylation.
Aminomethylation is often effected with Eschenmoser's salt, [(CH3)2NCH2]OTf
References
Carbon-carbon bond forming reactions | Hydroxymethylation | [
"Chemistry"
] | 786 | [
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
44,439,173 | https://en.wikipedia.org/wiki/Bradley%E2%80%93Terry%20model | The Bradley–Terry model is a probability model for the outcome of pairwise comparisons between items, teams, or objects. Given a pair of items i and j drawn from some population, it estimates the probability that the pairwise comparison i > j turns out true, as P(i > j) = p_i / (p_i + p_j),
where p_i is a positive real-valued score assigned to individual i. The comparison i > j can be read as "i is preferred to j", "i ranks higher than j", or "i beats j", depending on the application.
For example, p_i might represent the skill of a team i in a sports tournament and P(i > j) the probability that i wins a game against j. Or p_i might represent the quality or desirability of a commercial product and P(i > j) the probability that a consumer will prefer product i over product j.
The Bradley–Terry model can be used in the forward direction to predict outcomes, as described, but is more commonly used in reverse to infer the scores p_i given an observed set of outcomes. In this type of application p_i represents some measure of the strength or quality of i, and the model lets us estimate the strengths from a series of pairwise comparisons. In a survey of wine preferences, for instance, it might be difficult for respondents to give a complete ranking of a large set of wines, but relatively easy for them to compare sample pairs of wines and say which they feel is better. Based on a set of such pairwise comparisons, the Bradley–Terry model can then be used to derive a full ranking of the wines.
Once the values of the scores p_i have been calculated, the model can then also be used in the forward direction, for instance to predict the likely outcome of comparisons that have not yet actually occurred. In the wine survey example, for instance, one could calculate the probability that someone will prefer wine i over wine j, even if no one in the survey directly compared that particular pair.
History and applications
The model is named after Ralph A. Bradley and Milton E. Terry, who presented it in 1952, although it had already been studied by Ernst Zermelo in the 1920s. Applications of the model include the ranking of competitors in sports, chess, and other competitions, the ranking of products in paired comparison surveys of consumer choice, analysis of dominance hierarchies within animal and human communities, ranking of journals, ranking of AI models, and estimation of the relevance of documents in machine-learned search engines.
Definition
The Bradley–Terry model can be parametrized in various ways. The parametrization above is perhaps the most common, but there are a number of others. Bradley and Terry themselves defined exponential score functions p_i = exp(β_i), so that P(i > j) = exp(β_i) / [exp(β_i) + exp(β_j)].
Alternatively, one can use a logit, such that logit P(i > j) = ln[P(i > j) / P(j > i)] = β_i − β_j, i.e. P(i > j) = 1 / [1 + exp(−(β_i − β_j))] for β_i = ln p_i.
This formulation highlights the similarity between the Bradley–Terry model and logistic regression. Both employ essentially the same model but in different ways. In logistic regression one typically knows the parameters and attempts to infer the functional form of P(i > j); in ranking under the Bradley–Terry model one knows the functional form and attempts to infer the parameters.
With a scale factor of 400, this is equivalent to the Elo rating system for players with Elo ratings R_i and R_j.
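As an illustrative aside (not from the source), one conversion consistent with this equivalence is R = 400·log10 p, up to an additive constant; the scores below are arbitrary.

```python
import math

def bt_prob(p_i, p_j):
    """Bradley-Terry win probability."""
    return p_i / (p_i + p_j)

def elo_prob(R_i, R_j):
    """Elo expected score with the usual scale factor of 400."""
    return 1.0 / (1.0 + 10 ** ((R_j - R_i) / 400.0))

# Hypothetical scores; converting via R = 400*log10(p) reproduces the same probability.
p_i, p_j = 2.5, 1.0
R_i, R_j = 400 * math.log10(p_i), 400 * math.log10(p_j)
print(bt_prob(p_i, p_j), elo_prob(R_i, R_j))  # both approximately 0.714
```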
Estimating the parameters
The most common application of the Bradley–Terry model is to infer the values of the parameters p_i given an observed set of outcomes, such as wins and losses in a competition. The simplest way to estimate the parameters is by maximum likelihood estimation, i.e., by maximizing the likelihood of the observed outcomes given the model and parameter values.
Suppose we know the outcomes of a set of pairwise competitions between a certain group of individuals, and let w_ij be the number of times individual i beats individual j. Then the likelihood of this set of outcomes within the Bradley–Terry model is L = ∏_{i,j} [p_i / (p_i + p_j)]^(w_ij), and the log-likelihood of the parameter vector p = (p_1, …, p_n) is ln L(p) = Σ_{i,j} w_ij [ln p_i − ln(p_i + p_j)].
Zermelo showed that this expression has only a single maximum, which can be found by differentiating with respect to p_i and setting the result to zero, which leads to Σ_j w_ij / p_i − Σ_j (w_ij + w_ji) / (p_i + p_j) = 0.
This equation has no known closed-form solution, but Zermelo suggested solving it by simple iteration. Starting from any convenient set of (positive) initial values for the p_i, one iteratively performs the update p_i ← [Σ_j w_ij] / [Σ_j (w_ij + w_ji) / (p_i + p_j)]
for all i in turn. The resulting parameters are arbitrary up to an overall multiplicative constant, so after computing all of the new values they should be normalized by dividing by their geometric mean: p_i ← p_i / (∏_k p_k)^(1/n).
This estimation procedure improves the log-likelihood on every iteration, and is guaranteed to eventually reach the unique maximum. It is, however, slow to converge. More recently it has been pointed out that the maximum-likelihood condition above can also be rearranged as p_i = [Σ_j w_ij p_j / (p_i + p_j)] / [Σ_j w_ji / (p_i + p_j)],
which can be solved by iterating p_i ← [Σ_j w_ij p_j / (p_i + p_j)] / [Σ_j w_ji / (p_i + p_j)],
again normalizing after every round of updates by dividing by the geometric mean as above. This iteration gives identical results to Zermelo's original one but converges much faster and hence is normally preferred.
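A minimal Python sketch of this fast iteration (not part of the source article). The win-count matrix is an assumed illustration: its Team A entries match the description in the worked example below, while the remaining pairings are invented for completeness.

```python
import numpy as np

def bradley_terry(w, iters=100):
    """Estimate Bradley-Terry scores from a win-count matrix w, where w[i, j]
    is the number of times i beat j, using the fast iteration described above
    and normalizing by the geometric mean after every round."""
    n = w.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            s = p[i] + p                                # p_i + p_j for every j
            num = np.sum(np.delete(w[i] * p / s, i))    # sum_j w_ij p_j / (p_i + p_j)
            den = np.sum(np.delete(w[:, i] / s, i))     # sum_j w_ji / (p_i + p_j)
            p[i] = num / den                            # update in place, as described
        p /= np.prod(p) ** (1.0 / n)                    # divide by the geometric mean
    return p

# Assumed win counts among four teams (rows beat columns).
w = np.array([[0, 2, 0, 1],
              [3, 0, 5, 0],
              [0, 3, 0, 1],
              [4, 0, 3, 0]], dtype=float)
print(bradley_terry(w))   # higher score = stronger team
```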
Worked example of solution procedure
Consider a sporting competition between four teams, who play a total of 22 games among themselves. Each team's wins are given in the rows of the table below and the opponents are given as the columns:
For example, Team A has beaten Team B twice and lost to Team B three times; it has not played Team C at all; and it has won once and lost four times against Team D.
We would like to estimate the relative strengths of the teams, which we do by calculating the parameters p_i, with higher parameters indicating greater prowess. To do this, we initialize the four entries in the parameter vector p arbitrarily, for example assigning the value 1 to each team. Then we apply the update equation to compute a new value of p_A.
Next, we apply the update again to compute p_B, making sure to use the new value of p_A that we just calculated.
We proceed in the same way for p_C and p_D.
Then we normalize all the parameters by dividing by their geometric mean to obtain the first-round estimates.
To improve the estimates further, we repeat the process using the new values, starting again with p_A.
Repeating the updates for the remaining parameters and normalizing, and then iterating a further 10 times, gives rapid convergence toward a final solution. This indicates that Team D is the strongest and Team B the second strongest, while Teams A and C are nearly equal in strength but below Teams B and D. In this way the Bradley–Terry model lets us infer the relationship between all four teams, even though not all teams have played each other.
Variations
Crowd-BT
The Crowd-BT model, developed in 2013 by Chen et al, attempts to extend the standard Bradley–Terry model for crowdsourced settings while reducing the number of comparisons needed by taking into account the reliability of each judge. In particular, it identifies and excludes judges presumed to be spammers (selecting choices at random) or malicious (selecting always the wrong choice). In a crowdsourced task of ranking documents by reading difficulty with 624 judges contributing up to 40 pairwise comparisons each, Crowd-BT was shown to outperform both standard Bradley–Terry as well as ranking system TrueSkill. It has been recommended for use when quality results are valued over efficiency and the number of comparisons is high.
See also
Ordinal regression
Rasch model
Scale (social sciences)
Elo rating system
Thurstonian model
References
Machine learning
Statistical models
Logistic regression
Regression models | Bradley–Terry model | [
"Engineering"
] | 1,435 | [
"Artificial intelligence engineering",
"Machine learning"
] |
44,441,142 | https://en.wikipedia.org/wiki/Robert%20R.%20Squires | Robert Reed Squires (January 11, 1953 – September 30, 1998) was an American chemist known for his work in gas phase ion chemistry and flowing afterglow mass spectrometry.
Early life and education
Squires was born in Northern California and grew up in Los Angeles. He received an A.A. degree at El Camino College in 1973 and then returned to Northern California where he received a B.A. at California State University, Chico. He then went on to Yale University where he worked in the laboratory of Kenneth B. Wiberg on the thermochemistry of organic compounds. He received his M.Phil. degree in 1977 and a Ph.D. in 1980. He took a postdoctoral position with Charles DePuy and Veronica Bierbaum at the University of Colorado, Boulder where he studied the reactions of gas-phase ions using the flowing afterglow technique.
Academic career
Squires took a position as an assistant professor at Purdue University in 1981 where he constructed two unique mass spectrometers: a flowing afterglow triple quadrupole mass spectrometer and a flowing afterglow selected ion flow tube triple quadrupole mass spectrometer. In 1986, he was promoted to Associate Professor and in 1990 to Professor.
Awards
Among his awards were an Alfred P. Sloan Foundation Fellowship in 1987, the American Chemical Society Nobel Laureate Signature Award for Graduate Education in Chemistry (with Susan Graul) in 1991, and the American Society for Mass Spectrometry Biemann Medal in 1998. The Purdue University Department of Chemistry Robert R. Squires Scholarship was established in his honor.
References
1953 births
1998 deaths
20th-century American chemists
Mass spectrometrists | Robert R. Squires | [
"Physics",
"Chemistry"
] | 347 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
44,442,470 | https://en.wikipedia.org/wiki/Shoolery%27s%20rule | Shoolery's rule, which is named after James Nelson Shoolery, is a good approximation of the chemical shift δ of methylene groups in proton nuclear magnetic resonance. The shift of the CH2 protons in an A–CH2–B structure can be calculated using the formula δ = 0.23 ppm + σA + σB,
where 0.23 ppm is the chemical shift of methane and the empirical substituent constants σA and σB depend on the identities of the A and B groups.
Shoolery's rule is a particular instance of a general class of additivity rules of the form
δ = δ0 + Σi σi,
with the two substituents on methylene resulting in two parameters σA and σB.
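A small Python sketch of the additivity calculation (not from the source); the substituent constants are approximate illustrative values and should be checked against a published Shoolery table before use.

```python
# Illustrative (approximate) Shoolery substituent constants, in ppm.
SIGMA = {
    "C6H5": 1.85,
    "Cl": 2.53,
    "Br": 2.33,
    "alkyl": 0.47,
}

def shoolery_shift(a, b, base=0.23):
    """Predicted 1H shift (ppm) of the CH2 protons in A-CH2-B."""
    return base + SIGMA[a] + SIGMA[b]

# Example: benzyl chloride, C6H5-CH2-Cl.
print(f"{shoolery_shift('C6H5', 'Cl'):.2f} ppm")  # roughly 4.6 ppm with these values
```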
References
External links
Organic Spectroscopy: Principles and Applications, page 206
Nuclear magnetic resonance | Shoolery's rule | [
"Physics",
"Chemistry"
] | 138 | [
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
44,443,296 | https://en.wikipedia.org/wiki/2-Octyne | 2-Octyne, also known as methylpentylethyne and oct-2-yne, is an alkyne with a triple bond at its second carbon (the '2-' indicates the location of the triple bond in the chain). Its formula is C8H14. Its density at 25 °C is 0.759 g/mL, its boiling point is 137 °C, and its average molar mass is 110.20 g/mol.
It is formed by isomerization of 1-octyne catalyzed by a Yb(II) complex.
References
Alkynes | 2-Octyne | [
"Chemistry"
] | 129 | [
"Organic compounds",
"Alkynes"
] |
44,443,349 | https://en.wikipedia.org/wiki/Americium%28IV%29%20fluoride | Americium(IV) fluoride is the inorganic compound with the formula AmF4. It is a tan solid. In terms of its structure, solid AmF4 features 8-coordinate Am centers interconnected by doubly bridging fluoride ligands.
References
Americium compounds
Fluorides
Actinide halides | Americium(IV) fluoride | [
"Chemistry"
] | 72 | [
"Inorganic compounds",
"Fluorides",
"Inorganic compound stubs",
"Salts"
] |
44,443,365 | https://en.wikipedia.org/wiki/Americium%28III%29%20fluoride | Americium(III) fluoride or americium trifluoride is the chemical compound composed of americium and fluorine with the formula AmF3. It is a water soluble, pink salt.
References
Americium compounds
Fluorides
Actinide halides | Americium(III) fluoride | [
"Chemistry"
] | 61 | [
"Inorganic compounds",
"Fluorides",
"Inorganic compound stubs",
"Salts"
] |
44,443,368 | https://en.wikipedia.org/wiki/Americium%28III%29%20bromide | Americium(III) bromide or americium tribromide is the chemical compound composed of americium and bromine with the formula AmBr3, with americium in a +3 oxidation state. The compound is a crystalline solid.
References
Americium compounds
Bromides
Actinide halides | Americium(III) bromide | [
"Chemistry"
] | 67 | [
"Bromides",
"Salts",
"Inorganic compounds",
"Inorganic compound stubs"
] |
44,443,379 | https://en.wikipedia.org/wiki/Americium%28III%29%20iodide | Americium(III) iodide or americium triiodide is the chemical compound, a salt composed of americium and iodine with the formula AmI3.
Preparation
Americium(III) iodide can be prepared by reacting americium(III) chloride with ammonium iodide:
Properties
Americium(III) iodide takes the form of yellow crystals. The crystal form is orthorhombic. It melts around 960 °C. The density is 6.9 g/cm³. The compound consists of Am³⁺ and I⁻ ions. It crystallizes in the trigonal crystal system in the space group R3̄ (space group no. 148) with the lattice parameters a = 742 pm and c = 2055 pm and six formula units per unit cell. Its crystal structure is isotypic with bismuth(III) iodide.
References
Americium compounds
Iodides
Actinide halides | Americium(III) iodide | [
"Chemistry"
] | 200 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
44,443,385 | https://en.wikipedia.org/wiki/Americium%28II%29%20bromide | Americium(II) bromide or americium dibromide is the chemical compound, a salt composed of an americium cation in the +2 oxidation state and 2 bromide ions in each formula unit, with the formula AmBr2.
References
Americium compounds
Bromides
Actinide halides | Americium(II) bromide | [
"Chemistry"
] | 68 | [
"Bromides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
44,443,391 | https://en.wikipedia.org/wiki/Americium%28II%29%20iodide | Americium(II) iodide is the inorganic compound with the formula AmI2. It is a black solid which crystallizes in the same motif as strontium bromide.
References
Americium compounds
Iodides
Actinide halides | Americium(II) iodide | [
"Chemistry"
] | 54 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
44,449,911 | https://en.wikipedia.org/wiki/Right2Water | Right2Water is a campaign to commit the European Union and member states to implement the human right to water and sanitation.
It has three stated goals:
Guaranteed water and sanitation for all in Europe.
No liberalisation of water services.
Universal (Global) access to water and sanitation.
The European Citizens' Initiative (ECI) represented more than 120 NGOs and was supported by the German and Austrian trade unions. The backbone of the ECI was the European Federation of Public Service Unions (EPSU), whose president, Anne-Marie Perret, was also the president of the citizens' committee. On 21 March 2013, it became the first ECI to collect more than a million signatures, and it reached the minimum quota of signatures in seven countries on 7 May 2013. It stopped the signature collection on 7 September 2013, with a total of 1,857,605 signatures. The initiative was submitted to the European Commission in December 2013, and the public hearing took place on 17 February 2014 at the European Parliament. In March 2014, the Commission adopted its Communication in response to the Right2Water initiative. On 1 July 2015, the Roadmap for the evaluation of the Drinking Water Directive was published by the European Commission.
In response, the European Parliament criticised the Commission for failing to meet the initiative's demands. The report by Sinn Féin MEP Lynn Boylan called on the Commission "to recognise that affordable access to water is a basic human right."
In 2010, three years before the petition, Paris was the first European local entity to have concluded the remunicipalization process of water and sanitation, entrusted to Eau de Paris.
From 2010 until 2022, many other cities and regions (such as Slovenia and Andalusia) declared their support for the human right to water, and many remunicipalisations have also taken place.
The commission’s answer
On 19 March 2014, the Commission partly met the proposals. Its response included the following:
A reinforcement of the implementation of the water quality legislation, building on the commitments presented in the 7th EAP and the Water Blueprint;
Launching of an EU-wide public consultation on the Drinking Water Directive, notably in view of improving access to quality water in the EU;
Improvement of the transparency for urban wastewater and drinking water data management and explore the idea of benchmarking water quality;
Set-up of a more structured dialogue between stakeholders on transparency in the water sector;
Cooperation with existing initiatives to provide a wider set of benchmarks for water services;
Stimulation of innovative approaches for development assistance (e.g. support to partnerships between water operators and to public-public partnerships); promote sharing of best practices between Member States (e.g. on solidarity instruments) and identify new opportunities for cooperation.
Advocation of universal access to safe drinking water and sanitation as a priority area for future Sustainable Development Goals.
Revision of the Drinking Water Directive
On 1 February 2018, Karmenu Vella and Frans Timmermans announced the revision of the Drinking Water Directive 'because this is a topic which is close to Europeans' hearts. Water was the subject of the first ever successful European Citizens' Initiative, with over 1.6 million people supporting the Right2Water Initiative before it was submitted to the Commission.' EPSU, the main organiser of the ECI, reacted by saying that "the European Commission missed an opportunity to implement the human right to water". On 16 December 2020, the European Parliament formally adopted the revised Drinking Water Directive. The Directive will enter into force on 12 January 2021, and Member States will have two years to transpose it into national legislation.
See also
Human rights
References
European Union
Initiatives
Political organisations based in the Republic of Ireland
Water supply
Sanitation
de:Wasser ist ein Menschenrecht! | Right2Water | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 750 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
71,659,624 | https://en.wikipedia.org/wiki/PU%20Vulpeculae | PU Vulpeculae is a very slowly evolving symbiotic nova in the northern constellation of Vulpecula, abbreviated PU Vul. It is too faint to be visible to the naked eye, reaching a maximum apparent visual magnitude of 8.7 following a minimum of 16.6. The system is located at a distance of approximately 17,000 light years from the Sun based on parallax measurements.
The brightening of this object during April 1979 was independently discovered by Y. Kuwano and M. Honda. At detection, it had a visual magnitude of 9.1 and was initially designated Nova Vulpeculae 1979. Photographic plates taken since November 1977 showed a dramatic increase of five magnitudes at the time of discovery. In September, 1978, it had been catalogued as a stellar class of M4. A search of Harvard Observatory archival plates taken since 1898 showed several smaller eruptions of this star.
For much of 1979 the object had a brightness of magnitude 8.9 while varying by 0.15 magnitudes with a period of about 80 days. It then began to fade rapidly in 1980, reaching a minimum magnitude of 13.65 in August. At this minimum, the spectrum showed bands of the TiO molecule, which is typical of lower temperature M-type stars. It began to brighten again at about the same rate as the decrease, reaching magnitude 8.5 in August 1981. The star remained mostly stable at this level for about a year, displaying a pair of brief dips in brightness during 1982. Polarization of the light indicated the formation of large dust particles, which was suggested as a cause of the brightness decrease in 1980.
A soft X-ray halo was detected around the object in 1980, as well as a weaker ring-like structure. Infrared observations in 1980 suggested this is a symbiotic binary star system consisting of a variable, evolved star that has expanded to fill its Roche lobe and is periodically transferring mass to a faint, compact companion. However, the system did not show the expected emission lines from the infalling material. The spectrum at the minimum indicated the evolved star is a giant of class M6. The hot component showed a supergiant or bright giant spectrum that changed from a class of F5 in 1983 to A2 in 1986, while the brightness remained near magnitude 8.7. During this time the hot component changed from resembling a supergiant with a temperature similar to the Sun into a white dwarf smaller than the sun with a temperature in excess of . Emission lines became visible in 1988 as the outer layers were shed and became a nebula surrounding the white dwarf remnant.
The brightness of this object finally began to steadily decrease in 1987. By September 1989, it had declined to magnitude 10.5. The spectrum began to resemble a nebula, which came from a hot stellar wind expanding at a velocity of or more. In 1993, the emission features from the wind temporarily disappeared, which suggested the system was undergoing an eclipse. The data indicated this is an eclipsing binary with an orbital period of , which meant the orbital plane is nearly aligned with the line of sight from the Earth. An eclipse would explain the unusual minimum during 1980. The cool component was determined to be on the asymptotic giant branch and is pulsating with a period of 217 days, making it a Mira variable. The compact companion is a white dwarf with mass estimated at 60% of the mass of the Sun. The system displays an "illumination effect" caused by the ionization of the stellar wind from the giant by the dwarf. The light curve of this variation suggests an orbital eccentricity of at least 0.16.
References
Further reading
Mira variables
White dwarfs
Symbiotic novae
Eclipsing binaries
Vulpecula
Vulpeculae, PU | PU Vulpeculae | [
"Astronomy"
] | 774 | [
"Vulpecula",
"Constellations"
] |
71,665,529 | https://en.wikipedia.org/wiki/LY-2109761 | LY-2109761 is a synthetic compound which acts as a potent and selective inhibitor for the growth factor receptor TGF beta receptor 1. It is used for research into conditions such as pulmonary fibrosis and cancer.
See also
Galunisertib
GW788388
References
Kinase inhibitors | LY-2109761 | [
"Chemistry"
] | 66 | [
"Pharmacology",
"Growth factors",
"Neurochemistry",
"Medicinal chemistry stubs",
"Signal transduction",
"Receptor antagonists",
"Pharmacology stubs"
] |