61,768,338
https://en.wikipedia.org/wiki/Stromquist%E2%80%93Woodall%20theorem
The Stromquist–Woodall theorem is a theorem in fair division and measure theory. Informally, it says that, for any cake, for any n people with different tastes, and for any fraction w, there exists a subset of the cake that all people value at exactly a fraction w of the total cake value, and it can be cut using at most $2n-2$ cuts. The theorem is about a circular 1-dimensional cake (a "pie"). Formally, it can be described as the interval [0,1] in which the two endpoints are identified. There are n continuous measures over the cake: $v_1,\ldots,v_n$; each measure represents the valuations of a different person over subsets of the cake. The theorem says that, for every weight $w \in [0,1]$, there is a subset $C_w$, which all people value at exactly $w$: $\forall i: v_i(C_w)=w$, where $C_w$ is a union of at most $n-1$ intervals. This means that $2n-2$ cuts are sufficient for cutting the subset $C_w$. If the cake is not circular (that is, the endpoints are not identified), then $C_w$ may be the union of up to $n$ intervals, in case one interval is adjacent to 0 and one other interval is adjacent to 1.

Proof sketch Let $W$ be the subset of all weights for which the theorem is true. Then: (1) $1 \in W$. Proof: take $C_1 := C$ (recall that the value measures are normalized such that all partners value the entire cake as 1). (2) If $w \in W$, then also $1-w \in W$. Proof: take $C_{1-w} := C \setminus C_w$. If $C_w$ is a union of $n-1$ intervals in a circle, then $C \setminus C_w$ is also a union of $n-1$ intervals. (3) $W$ is a closed set. This is easy to prove, since the space of unions of $n-1$ intervals is a compact set under a suitable topology. (4) If $w \in W$, then also $w/2 \in W$. This is the most interesting part of the proof; see below. From (1)-(4), it follows that $W = [0,1]$. In other words, the theorem is valid for every possible weight.

Proof sketch for part 4 Assume that $C_w$ is a union of $n-1$ intervals and that all $n$ partners value it as exactly $w$. Define the following function on the cake, mapping each point to the moment curve in $\mathbb{R}^n$: $f(t) = (t, t^2, \ldots, t^n)$. Define the following measures on $\mathbb{R}^n$: $v'_i(Y) := v_i(f^{-1}(Y) \cap C_w)$. Note that $v'_i(\mathbb{R}^n) = v_i(C_w) = w$. Hence, for every partner $i$: $v'_i(\mathbb{R}^n) = w$. Hence, by the Stone–Tukey theorem, there is a hyper-plane that cuts $\mathbb{R}^n$ into two half-spaces, $H_1$ and $H_2$, such that $\forall i: v'_i(H_1) = v'_i(H_2) = w/2$. Define $C_{w/2} := f^{-1}(H_1) \cap C_w$ and $C'_{w/2} := f^{-1}(H_2) \cap C_w$. Then, by the definition of the $v'_i$: $\forall i: v_i(C_{w/2}) = v_i(C'_{w/2}) = w/2$. The set $C_w$ has $n-1$ connected components (intervals). Hence, its image $f(C_w)$ also has $n-1$ connected components (1-dimensional curves in $\mathbb{R}^n$). The hyperplane that forms the boundary between $H_1$ and $H_2$ intersects $f(C_w)$ in at most $n$ points. Hence, the total number of connected components (curves) in $H_1 \cap f(C_w)$ and $H_2 \cap f(C_w)$ is at most $2n-1$. Hence, one of these must have at most $n-1$ components. Suppose it is $H_1$ that has at most $n-1$ components (curves). Hence, $C_{w/2}$ has at most $n-1$ components (intervals). Hence, we can take the required subset to be $C_{w/2}$. This proves that $w/2 \in W$.

Tightness proof Stromquist and Woodall prove that the number $n-1$ is tight if the weight $w$ is either irrational, or rational with a reduced fraction $r/s$ such that $s \geq n$. Proof sketch for $w = 1/n$: Choose $n^2-1$ equally-spaced points along the circle; call them $P_1, \ldots, P_{n^2-1}$. Define $n-1$ measures in the following way. Measure $i$ is concentrated in small neighbourhoods of the following $n+1$ points: $P_i, P_{i+(n-1)}, \ldots, P_{i+n(n-1)}$. So, near each point $P_{i+k(n-1)}$, there is a fraction $1/(n+1)$ of the measure $v_i$. Define the $n$-th measure as proportional to the length measure. Every subset whose consensus value is $1/n$ must touch at least two points for each of the first $n-1$ measures (since the value near each single point is $1/(n+1)$, which is slightly less than the required $1/n$). Hence, it must touch at least $2(n-1)$ points. On the other hand, every subset whose consensus value is $1/n$ must have total length $1/n$ (because of the $n$-th measure). The number of "gaps" between the points is $n^2-1$; hence the subset can contain at most $n-1$ gaps. The consensus subset must touch $2(n-1)$ points but contain at most $n-1$ gaps; hence it must contain at least $n-1$ intervals.
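A compact restatement of the theorem, in the notation reconstructed above (a summary only, not additional material from the source):

```latex
% Stromquist–Woodall theorem (circular cake C, measures v_1,...,v_n normalized so v_i(C) = 1)
\forall w \in [0,1] \;\; \exists C_w \subseteq C:\qquad
v_i(C_w) = w \ \ (i = 1,\dots,n), \qquad
C_w \ \text{is a union of at most } n-1 \ \text{arcs},
```

so at most $2n-2$ cuts are needed on the circle.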
See also Fair cake-cutting Fair pie-cutting Exact division Stone–Tukey theorem References Cake-cutting Theorems in measure theory
Stromquist–Woodall theorem
[ "Mathematics" ]
778
[ "Theorems in mathematical analysis", "Theorems in measure theory" ]
41,310,612
https://en.wikipedia.org/wiki/Academy%20Color%20Encoding%20System
The Academy Color Encoding System (ACES) is a color image encoding system created under the auspices of the Academy of Motion Picture Arts and Sciences. ACES is characterised by a color accurate workflow, with "seamless interchange of high quality motion picture images regardless of source". The system defines its own color primaries based on the spectral locus as defined by the CIE xyY specification. The white point approximates the chromaticity of CIE Daylight with a Correlated Color Temperature (CCT) of about 6000 K. Most ACES-compliant image files are encoded in 16-bit half-floats, thus allowing ACES OpenEXR files to encode 30 stops of scene information. The ACESproxy format uses integers with a log encoding. ACES supports both high dynamic range (HDR) and wide color gamut (WCG). The version 1.0 release occurred in December 2014. ACES received a Primetime Engineering Emmy Award in 2012. The system is standardized in part by the Society of Motion Picture and Television Engineers (SMPTE) standards body. History Background The ACES project began its development in 2004 in collaboration with 50 industry technologists. The project began due to the recent incursion of digital technologies into the motion picture industry. The traditional motion picture workflow had been based on film negatives; with the digital transition came scanning of negatives and digital camera acquisition. The industry lacked a color management scheme for diverse sources coming from a variety of digital motion picture cameras and film. The ACES system is designed to control the complexity inherent in managing a multitude of file formats, image encoding, metadata transfer, color reproduction, and image interchanges that are present in the current motion picture workflow. Versions The following versions are available for the reference implementation: A number of pre-release versions were tagged from 0.1 (March 1, 2012) to 0.7.1 (February 26, 2014). ACES 1.0 (December 2014) is the first release version. Three small patches followed. ACES 1.1 (June 21, 2018) adds some ODTs for P3, Rec. 2020, and DCDM. ACES 1.2 (April 1, 2020) introduces three new specification documents: ACES Metadata File (AMF), updated Common LUT Format, new ACES Project Organization and Development Procedure. It also adds some transformations. ACES 1.3 (April 30, 2021) adds a colorspace conversion for Sony Venice, a gamut compression method for saturated objects, and some AMF refinements. System overview The system comprises several components which are designed to work together to create a uniform workflow: Academy Color Encoding Specification (ACES): The specification that defines the ACES color space, allowing half-float high-precision encoding in scene linear light as exposed in a camera, and archival storage in files. Input Device Transform (IDT): This name was deprecated in ACES version 1.0 and replaced by Input Transform. Input Transform (IT): The process that takes captured images from any ingested source material and transforms the content into the ACES color space and encoding specifications. There are many ITs, which are specific to each class of capture device and likely specified by the manufacturer using ACES guidelines. It is recommended that a different IT be used for tungsten versus daylight lighting conditions. Look Modification Transform (LMT): A specific change in look that is applied systematically in combination with the RRT and ODTs. 
(part of the ACES Viewing Transform) Output Transform: As per the ACES version 1.0 naming convention, this is the overall mapping from the standard scene-referred ACES colorimetry (SMPTE 2065-1 color space) to the output-referred colorimetry of a specific device or family of devices. It is always the concatenation of the Reference Rendering Transform (RRT) and a specific Output Device Transform (ODT), as defined below. For this reason the Output Transform is usually shortened to "RRT+ODT". Reference Rendering Transform (RRT): Converts the scene-referred colorimetry to display-referred, and resembles traditional film image rendering with an S-shaped curve. It has a larger gamut and dynamic range available to allow for rendering to any output device (even ones not yet in existence). Output Device Transform (ODT): A guideline for rendering the large gamut and wide dynamic range of the RRT to a physically realized output device with limited gamut and dynamic range. There are many ODTs, which will likely be generated by the manufacturers according to the ACES guidelines. Academy Viewing Transform: A combined reference of an LMT and an Output Transform, i.e. "LMT+RRT+ODT". Academy Printing Density (APD): A reference printing density defined by the AMPAS for calibrating film scanners and film recorders. Academy Density Exchange (ADX): A densitometric encoding similar to Kodak's Cineon used for capturing data from film scanners. ACES color space SMPTE Standard 2065-1 (ACES2065-1): The principal scene-referred color space used in the ACES framework for storing images. Standardized by SMPTE as document ST 2065-1. Its gamut includes the full CIE standard observer's gamut, with radiometrically linear transfer characteristics. ACEScc (ACES color correction space): A color space definition that is slightly larger than the ITU Rec. 2020 color space, with logarithmic transfer characteristics for improved use within color correctors and grading tools. ACEScct (ACES color correction space with toe): A color space definition that is slightly larger than the ITU Rec. 2020 color space and logarithmically encoded for improved use within color correctors and grading tools, with a curve that resembles the toe behavior of Cineon files. ACEScg (ACES computer graphics space): A color space definition that is slightly larger than the ITU Rec. 2020 color space and linearly encoded for improved use within computer graphics rendering and compositing tools. ACESproxy (ACES proxy color space): A color space definition that is slightly larger than the ITU Rec. 2020 color space, logarithmically encoded (like ACEScc, not like ACEScct) and represented with either a 10-bit/channel or 12-bit/channel integer-arithmetic digital representation. This encoding is designed exclusively for transport of code values across digital devices that don't support floating-point arithmetic encodings, like SDI cables, monitors, and infrastructure in general. ACES Color Spaces ACES 1.0 is a color encoding system, defining one core archival color space, four additional working color spaces, and additional file protocols. The ACES system is designed to cover the needs of film and television production, relating to the capture, generation, transport, exchange, grading, processing, and short & long term storage of motion picture and still image data. These color spaces all have a few common characteristics: They are based on the RGB color model. The image data is scene-referred, i.e. 
the numerical values are related to the original scene lighting, as reflected or emitted from the real objects & lights on the set at the time of filming. The space refers to a "standard reference camera", an imaginary camera that can capture all of human visual perception. Scene-referred code values captured by a real camera are directly related to luminous exposure. They are capable of holding 30 stops of exposure. The reference white point is sometimes, and incorrectly, referred to as "D60" though there is no such thing as a CIE D60 standard illuminant. Further, the white point is not on the CIE Daylight Locus nor the Planckian Locus, and does not define the neutral axis. Filmmakers are allowed to choose whatever effective white point they need for technical or artistic reasons. The white point serves only as a mathematical reference for transforms, and should not be confused with a scene or display reference. It was chosen through an experiment, projecting film containing a LAD test patch onto a theater screen, using a projector with a xenon bulb. That measured white point was then adjusted to be close to, but not on, the CIE daylight locus. The CCT is close to 6000 K, with CIE 1931 xy chromaticities of approximately (0.32168, 0.33767). The five color spaces use one of two defined sets of RGB color primaries called AP0 and AP1 (“ACES Primaries” #0 and #1). The chromaticity coordinates are approximately: AP0 – red (0.7347, 0.2653), green (0.0000, 1.0000), blue (0.0001, −0.0770); AP1 – red (0.7130, 0.2930), green (0.1650, 0.8300), blue (0.1280, 0.0440); white point for both (0.32168, 0.33767). AP0 is defined as the smallest set of primaries that encloses the entire CIE 1931 standard-observer spectral locus; thus theoretically including, and exceeding, all the color stimuli that can be seen by the average human eye. The concept of using non-realizable or imaginary primaries is not new, and is often employed with color systems that wish to render a larger portion of the visible spectral locus. The ProPhoto RGB (developed by Kodak) and the ARRI Wide Gamut (developed by Arri) are two such color spaces. Values outside the spectral locus are maintained with the assumption that they will later be manipulated through color timing or in other cases of image interchange to eventually lie within the locus. This results in color values not being “clipped” or “crushed” as a result of post-production manipulation. The AP1 gamut is smaller than that of the AP0 primaries, but is still considered “wide gamut”. The AP1 primaries are much closer to realizable primaries and, unlike AP0, none of their chromaticity coordinates are negative. This is important for use as a working space, for a number of practical reasons: color-imaging and color-grading operations acting independently on the three RGB channels produce variations naturally perceived as changes in the red, green, and blue components. This might not be the case when operating on the “unbent” RGB axes of the AP0 primaries. All the code values contained in the [0,1] range represent colors that, converted into output-referred colorimetry via their respective Output Transforms (see above), can be displayed with either present or future projection/display technologies. ACES2065-1 This is the core ACES color space, and the only one using the AP0 RGB primaries. It uses photometrically linear transfer characteristics (i.e. gamma of 1.0), and is the only ACES space intended for interchange among facilities, and most importantly, archiving image/video files. ACES2065-1 code values are linear values scaled in an Input Transform so that: a perfectly white diffuser would map to RGB code value (1.0, 1.0, 1.0); a photographic exposure of an 18% grey card would map to RGB code value (0.18, 0.18, 0.18). 
ACES2065-1 code values often exceed 1.0 for ordinary scenes, and a very high range of speculars and highlights can be maintained in the encoding. The internal processing and storage of ACES2065-1 code values must be in floating-point arithmetic with at least 16 bits per channel. Pre-release versions of ACES, i.e. those prior to 1.0, defined ACES2065-1 as the only color space. Legacy applications might therefore refer to ACES2065-1 when referring to “the ACES color space”. Furthermore, because of its importance and linear characteristics, and being the one based on AP0 primaries, it is also improperly referred to as either “Linear ACES”, “ACES.lin”, “SMPTE2065-1” or even “the AP0 color space”. Standards are defined for storing images in the ACES2065-1 color space, particularly on the metadata side of things, so that applications honoring the ACES framework can recognize the color space encoding from the metadata rather than inferring it from other cues. For example: SMPTE ST 2065-4 defines the correct encoding of ACES2065-1 still images within OpenEXR files and file sequences and their mandatory metadata flags/fields. SMPTE ST 2065-5 defines the correct embedding of ACES2065-1 video sequences within MXF files and their mandatory metadata fields. ACEScg ACEScg is a scene-linear encoding, like ACES2065-1, but it uses the AP1 primaries, which are closer to realizable primaries. ACEScg was developed for use in visual effects work, when it became clear that ACES2065-1 was not a useful working space due to the negative blue primary, and the extreme distance of the other imaginary primaries. The AP1 primaries are much closer to the chromaticity diagram of real colors, and importantly, none of them are negative. This is important for rendering and compositing image data as needed for visual effects. ACEScc & ACEScct Like ACEScg, ACEScc and ACEScct use the AP1 primaries. What sets them apart is that instead of a scene-linear transfer encoding, ACEScc and ACEScct use logarithmic curves, which makes them better suited to color-grading. The grading workflow has traditionally used log-encoded image data, in large part because the physical film used in cinematography has a logarithmic response to light. ACEScc is a pure log function, but ACEScct has a "toe" near black, to simulate the minimum density of photographic negative film, and the legacy DPX or Cineon log curve. 
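A minimal sketch of the ACEScc forward and inverse transfer functions may help illustrate the "pure log" behaviour described above. The constants 9.72 and 17.52 and the 2^-15 cut-off are taken from the commonly published ACEScc formula (S-2014-003) and should be verified against that document; the real specification adds a dedicated low-end segment at and below the cut-off, which this sketch simply clamps away:

```python
import numpy as np

def lin_to_acescc(lin):
    """Sketch of the ACEScc forward encoding for linear AP1 values.
    Values at or below 2**-15 are clamped here; the published spec
    instead switches to a special low-end segment."""
    lin = np.asarray(lin, dtype=float)
    return (np.log2(np.maximum(lin, 2.0 ** -15)) + 9.72) / 17.52

def acescc_to_lin(cc):
    """Inverse of the sketch above (valid on the pure-log segment)."""
    return 2.0 ** (np.asarray(cc, dtype=float) * 17.52 - 9.72)

print(lin_to_acescc(0.18))                 # mid grey (0.18) lands near 0.41 in ACEScc
print(acescc_to_lin(lin_to_acescc(0.18)))  # round-trips back to 0.18
```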
Converting ACES2065-1 RGB values to CIE XYZ values Converting CIE XYZ values to ACES2065-1 values Standards ACES is defined by several Standards by SMPTE (ST2065 family) and documentations by AMPAS, which include: SMPTE ST 2065-1:2012 - Academy Color Encoding Specification (ACES) SMPTE ST 2065-2:2012 - Academy Printing Density (APD): Spectral Responsivities, Reference Measurement Device and Spectral Calculation SMPTE ST 2065-3:2012 - Academy Density Exchange Encoding (ADX): Encoding Academy Printing Density (APD) Values SMPTE ST 2065-4:2013 - ACES Image Container File Layout SMPTE ST 2065-5:2016 - Material Exchange Format: Mapping ACES Image Sequences into the MXF Generic Container S-2013-001 - ACESproxy: An Integer Log Encoding of ACES Image Data S-2014-003 - ACEScc: A Logarithmic Encoding of ACES Data for use within Color Grading Systems S-2014-004 - ACEScg: A Working Space for CGI Render and Compositing S-2016-001 - ACEScct: A Quasi-Logarithmic Encoding of ACES Data for use within Color Grading Systems P-2013-001 - Recommended Procedures for the Creation and Use of Digital Camera System Input Device Transforms (IDTs) TB-2014-001 - Academy Color Encoding System (ACES) Documentation Guide TB-2014-002 - Academy Color Encoding System (ACES) Version 1.0 User Experience Guidelines TB-2014-004 - Informative Notes on SMPTE ST 2065-1 - Academy Color Encoding Specification (ACES) TB-2014-005 - Informative Notes on SMPTE ST 2065-2 - Academy Printing Density (APD) – Spectral Responsivities, Reference Measurement Device and Spectral Calculation and SMPTE ST 2065-3 Academy Printing Density Exchange Encoding (ADX) - Encoding Printing Density (APD) Values TB-2014-006 - Informative Notes on SMPTE ST 2065-4 - ACES Image Container File Layout TB-2014-007 - Informative Notes on SMPTE ST 268:2014 – File Format for Digital Moving Picture Exchange (DPX) TB-2014-009 - Academy Color Encoding System (ACES) Clip-level Metadata File Format Definition and Usage TB-2014-010 - Design, Integration and Use of ACES Look Modification Transforms TB-2014-012 - Academy Color Encoding System (ACES) Version 1.0 Component Names TB-2018-001 - Derivation of the ACES White Point CIE Chromaticity Coordinates A SMPTE standard is also under development to allow ACES code streams to be mapped to the Material Exchange Format (MXF) container. See also Academy of Motion Picture Arts and Sciences Color Decision List Color Management Society of Motion Picture and Television Engineers References External links List of ACES productions - ACES Central ACEScg - A Common Color Encoding for Visual Effects Applications ACEScg - A Common Color Encoding for Visual Effects Applications - DigiPro 2015, Slideshare Academy of Motion Picture Arts and Sciences Color space Film and video technology High dynamic range SMPTE standards
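The "Converting ACES2065-1 RGB values to CIE XYZ values" and "Converting CIE XYZ values to ACES2065-1 values" sections above correspond to a pair of mutually inverse 3×3 linear transforms. A hedged sketch that derives such a matrix from the AP0 chromaticities and the ACES white point by the standard colorimetric construction; the chromaticity values are the commonly published ones assumed earlier in the article and should be checked against SMPTE ST 2065-1 / TB-2014-004:

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Standard construction of an RGB->XYZ matrix from the xy chromaticities
    of the three primaries and of the white point (white maps to Y = 1)."""
    def to_xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([to_xyz(*xy_r), to_xyz(*xy_g), to_xyz(*xy_b)])
    scale = np.linalg.solve(P, to_xyz(*xy_w))   # so that RGB = (1, 1, 1) maps to the white point
    return P * scale

# AP0 primaries and ACES white point (assumed, commonly published values)
M = rgb_to_xyz_matrix((0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770), (0.32168, 0.33767))
M_inv = np.linalg.inv(M)                  # CIE XYZ -> ACES2065-1

xyz = M @ np.array([0.18, 0.18, 0.18])    # ACES2065-1 -> CIE XYZ for mid grey
print(M)
print(np.allclose(M_inv @ xyz, [0.18, 0.18, 0.18]))
```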
Academy Color Encoding System
[ "Mathematics", "Engineering" ]
3,426
[ "Space (mathematics)", "Metric spaces", "Color space", "Electrical engineering", "High dynamic range" ]
41,312,173
https://en.wikipedia.org/wiki/Linear%20optical%20quantum%20computing
Linear optical quantum computing or linear optics quantum computation (LOQC), also photonic quantum computing (PQC), is a paradigm of quantum computation, allowing (under certain conditions, described below) universal quantum computation. LOQC uses photons as information carriers, mainly uses linear optical elements, or optical instruments (including reciprocal mirrors and waveplates), to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information. Overview Although there are many other implementations for quantum information processing (QIP) and quantum computation, optical quantum systems are prominent candidates, since they link quantum computation and quantum communication in the same framework. In optical systems for quantum information processing, the unit of light in a given mode—or photon—is used to represent a qubit. Superpositions of quantum states can be easily represented, encrypted, transmitted and detected using photons. Besides, linear optical elements of optical systems may be the simplest building blocks to realize quantum operations and quantum gates. Each linear optical element equivalently applies a unitary transformation on a finite number of qubits. The system of finite linear optical elements constructs a network of linear optics, which can realize any quantum circuit diagram or quantum network based on the quantum circuit model. Quantum computing with continuous variables is also possible under the linear optics scheme. The universality of 1- and 2-bit gates to implement arbitrary quantum computation has been proven. Up to N×N unitary matrix operations (U(N)) can be realized by only using mirrors, beam splitters and phase shifters (this is also a starting point of boson sampling and of computational complexity analysis for LOQC). It points out that each operator with N inputs and N outputs can be constructed via linear optical elements. For reasons of universality and complexity, LOQC usually only uses mirrors, beam splitters, phase shifters and their combinations such as Mach–Zehnder interferometers with phase shifts to implement arbitrary quantum operators. If using a non-deterministic scheme, this fact also implies that LOQC could be resource-inefficient in terms of the number of optical elements and time steps needed to implement a certain quantum gate or circuit, which is a major drawback of LOQC. Operations via linear optical elements (beam splitters, mirrors and phase shifters, in this case) preserve the photon statistics of input light. For example, a coherent (classical) light input produces a coherent light output; a superposition of quantum states input yields a quantum light state output. For this reason, the single-photon-source case is usually used to analyze the effect of linear optical elements and operators. Multi-photon cases can then be inferred through statistical transformations. An intrinsic problem in using photons as information carriers is that photons hardly interact with each other. This potentially causes a scalability problem for LOQC, since nonlinear operations are hard to implement, which can increase the complexity of operators and hence can increase the resources required to realize a given computational function. One way to solve this problem is to bring nonlinear devices into the quantum network. For instance, the Kerr effect can be applied in LOQC to make a single-photon controlled-NOT and other operations. 
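As a small illustration of the universality claim above, the 2×2 unitaries of a beam splitter and a phase shifter can be multiplied to reproduce familiar single-qubit gates in a dual-rail (two-mode) encoding. The parametrization below is one common convention chosen for this sketch, not the notation of any particular reference:

```python
import numpy as np

def beam_splitter(theta, phi=0.0):
    """2x2 unitary mixing two optical modes; cos(theta) is the transmission amplitude."""
    return np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                     [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second mode only."""
    return np.diag([1.0, np.exp(1j * phi)])

# A pi phase shift on the lower mode followed by a 50:50 beam splitter implements a Hadamard
# on the dual-rail qubit; with a fully "reflecting" setting (theta = pi/2) it implements Pauli-X.
# (Operators compose right to left, so the element applied first appears on the right.)
H = beam_splitter(np.pi / 4) @ phase_shifter(np.pi)
X = beam_splitter(np.pi / 2) @ phase_shifter(np.pi)

print(np.allclose(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2)))   # True
print(np.allclose(X, np.array([[0, 1], [1, 0]])))                 # True
print(np.allclose(H.conj().T @ H, np.eye(2)))                     # the network is unitary
```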
KLM protocol It was believed that adding nonlinearity to the linear optical network was necessary to realize efficient quantum computation. However, implementing nonlinear optical effects is a difficult task. In 2000, Knill, Laflamme and Milburn proved that it is possible to create universal quantum computers solely with linear optical tools. Their work has become known as the "KLM scheme" or "KLM protocol", which uses linear optical elements, single photon sources and photon detectors as resources to construct a quantum computation scheme involving only ancilla resources, quantum teleportations and error corrections. It offers another route to efficient quantum computation with linear optical systems, and implements effective nonlinear operations solely with linear optical elements. At its root, the KLM scheme induces an effective interaction between photons by making projective measurements with photodetectors, which falls into the category of non-deterministic quantum computation. It is based on a non-linear sign shift between two qubits that uses two ancilla photons and post-selection. It is also based on the demonstrations that the probability of success of the quantum gates can be made close to one by using entangled states prepared non-deterministically and quantum teleportation with single-qubit operations. Otherwise, without a high enough success rate of a single quantum gate unit, it may require an exponential amount of computing resources. Meanwhile, the KLM scheme is based on the fact that proper quantum coding can reduce the resources for obtaining accurately encoded qubits efficiently with respect to the accuracy achieved, and can make LOQC fault-tolerant for photon loss, detector inefficiency and phase decoherence. As a result, LOQC can be robustly implemented through the KLM scheme with a low enough resource requirement to suggest practical scalability, making it as promising a technology for QIP as other known implementations. Boson sampling The more limited boson sampling model was suggested and analyzed by Aaronson and Arkhipov in 2010. It is not believed to be universal, but can still solve problems that are believed to be beyond the ability of classical computers, such as the boson sampling problem. On 3 December 2020, a team led by the Chinese physicists Pan Jianwei (潘建伟) and Lu Chaoyang (陆朝阳) from the University of Science and Technology of China in Hefei, Anhui Province, submitted their results to Science, in which they solved a problem that is virtually unassailable by any classical computer, thereby demonstrating the quantum supremacy of their photon-based quantum computer, called the Jiu Zhang quantum computer (九章量子计算机). The boson sampling problem was solved in 200 seconds; they estimated that China's Sunway TaihuLight supercomputer would take 2.5 billion years to solve it - a quantum-supremacy factor of around 10^14. 
Jiu Zhang was named in honor of China's oldest surviving mathematical text, Jiǔ zhāng suàn shù (The Nine Chapters on the Mathematical Art). Ingredients DiVincenzo's criteria for quantum computation and QIP give that a universal system for QIP should satisfy at least the following requirements: (1) a scalable physical system with well characterized qubits, (2) the ability to initialize the state of the qubits to a simple fiducial state, such as $|000\ldots\rangle$, (3) long relevant decoherence times, much longer than the gate operation time, (4) a "universal" set of quantum gates (this requirement cannot be satisfied by a non-universal system), (5) a qubit-specific measurement capability; if the system is also aiming for quantum communication, it should also satisfy at least the following two requirements: (6) the ability to interconvert stationary and flying qubits, and (7) the ability to faithfully transmit flying qubits between specified locations. As a result of using photons and linear optical circuits, in general LOQC systems can easily satisfy conditions 3, 6 and 7. The following sections mainly focus on the implementations of quantum information preparation, readout, manipulation, scalability and error corrections, in order to discuss the advantages and disadvantages of LOQC as a candidate for QIP. Qubits and modes A qubit is one of the fundamental QIP units. A qubit state, which can be represented by $\alpha|0\rangle + \beta|1\rangle$, is a superposition state which, if measured in the orthonormal basis $\{|0\rangle, |1\rangle\}$, has probability $|\alpha|^2$ of being in the $|0\rangle$ state and probability $|\beta|^2$ of being in the $|1\rangle$ state, where $|\alpha|^2 + |\beta|^2 = 1$ is the normalization condition. An optical mode is a distinguishable optical communication channel, which is usually labeled by subscripts of a quantum state. There are many ways to define distinguishable optical communication channels. For example, a set of modes could be different polarizations of light which can be picked out with linear optical elements, various frequencies, or a combination of the two cases above. In the KLM protocol, each of the photons is usually in one of two modes, and the modes are different between the photons (the possibility that a mode is occupied by more than one photon is zero). This is not the case only during implementations of controlled quantum gates such as CNOT. When the state of the system is as described, the photons can be distinguished, since they are in different modes, and therefore a qubit state can be represented using a single photon in two modes, vertical (V) and horizontal (H): for example, $|0\rangle := |1,0\rangle_{VH}$ (one photon in the vertical mode) and $|1\rangle := |0,1\rangle_{VH}$ (one photon in the horizontal mode). It is common to refer to the states defined via occupation of modes as Fock states. In boson sampling, photons are not distinguished, and therefore cannot directly represent the qubit state. Instead, we represent the qubit state of the entire quantum system by using the Fock states of the modes which are occupied by indistinguishable single photons (this is a quantum system whose number of levels equals the number of ways of distributing the photons among the modes). State preparation To prepare a desired multi-photon quantum state for LOQC, a single-photon state is first required. Therefore, non-linear optical elements, such as single-photon generators and some optical modules, will be employed. For example, optical parametric down-conversion can be used to conditionally generate the single-photon state $|1\rangle$ in the vertical polarization channel at a given time (subscripts are ignored for this single-qubit case). By using a conditional single-photon source, the output state is guaranteed, although this may require several attempts (depending on the success rate). A joint multi-qubit state can be prepared in a similar way. 
In general, an arbitrary quantum state can be generated for QIP with a proper set of photon sources. Implementations of elementary quantum gates To achieve universal quantum computing, LOQC should be capable of realizing a complete set of universal gates. This can be achieved in the KLM protocol but not in the boson sampling model. Ignoring error correction and other issues, the basic principle in implementations of elementary quantum gates using only mirrors, beam splitters and phase shifters is that by using these linear optical elements, one can construct any arbitrary 1-qubit unitary operation; in other words, those linear optical elements support a complete set of operators on any single qubit. The unitary matrix associated with a beam splitter acting on two modes is $U_{\mathrm{BS}} = \begin{pmatrix} \cos\theta & -e^{\mathrm{i}\phi}\sin\theta \\ e^{-\mathrm{i}\phi}\sin\theta & \cos\theta \end{pmatrix}$, where $\theta$ and $\phi$ are determined by the reflection amplitude $r$ and the transmission amplitude $t$ (the relationship will be given later for a simpler case). For a symmetric beam splitter, which has a phase shift $\phi = \pi/2$ under the unitary transformation conditions $|t|^2 + |r|^2 = 1$ and $t^* r + t r^* = 0$, one can show that $U_{\mathrm{BS}} = \exp(-\mathrm{i}\theta\sigma_x)$, which is a rotation of the single qubit state about the $x$-axis by $2\theta$ in the Bloch sphere. A mirror is a special case where the reflecting rate is 1, so that the corresponding unitary operator is a rotation matrix given by $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. For most cases of mirrors used in QIP, the incident angle is $\theta = 45^{\circ}$. Similarly, a phase shifter is associated with a unitary operator described by $P(\phi) = e^{\mathrm{i}\phi}$ on a single mode, or, if written in a 2-mode format, $P(\phi) = \begin{pmatrix} 1 & 0 \\ 0 & e^{\mathrm{i}\phi} \end{pmatrix}$, which is equivalent to a rotation of $\phi$ about the $z$-axis. Since any two rotations along orthogonal rotating axes can generate arbitrary rotations in the Bloch sphere, one can use a set of symmetric beam splitters and mirrors to realize arbitrary single-qubit operators for QIP. The figures below are examples of implementing a Hadamard gate and a Pauli-X gate (NOT gate) by using beam splitters (illustrated as rectangles connecting two sets of crossing lines with parameters $\theta$ and $\phi$) and mirrors (illustrated as rectangles connecting two sets of crossing lines with parameter $\theta$). In the above figures, a qubit is encoded using two mode channels (horizontal lines): $|0\rangle$ represents a photon in the top mode, and $|1\rangle$ represents a photon in the bottom mode. Using integrated photonic circuits In reality, assembling a whole bunch (possibly a very large number) of beam splitters and phase shifters on an optical table is challenging and unrealistic. To make LOQC functional, useful and compact, one solution is to miniaturize all linear optical elements, photon sources and photon detectors, and to integrate them onto a chip. If using a semiconductor platform, single photon sources and photon detectors can be easily integrated. To separate modes, there are integrated arrayed waveguide gratings (AWGs), which are commonly used as optical (de)multiplexers in wavelength-division multiplexing (WDM) systems. In principle, beam splitters and other linear optical elements can also be miniaturized or replaced by equivalent nanophotonic elements. Some progress in these endeavors can be found in the literature. In 2013, the first integrated photonic circuit for quantum information processing was demonstrated using a photonic crystal waveguide to realize the interaction between the guided field and atoms. Implementations comparison Comparison of the KLM protocol and the boson sampling model The advantage of the KLM protocol over the boson sampling model is that while the KLM protocol is a universal model, boson sampling is not believed to be universal. 
On the other hand, it seems that the scalability issues in boson sampling are more manageable than those in the KLM protocol. In boson sampling only a single measurement is allowed, a measurement of all the modes at the end of the computation. The only scalability problem in this model arises from the requirement that all the photons arrive at the photon detectors within a short-enough time interval and with close-enough frequencies. In the KLM protocol, there are non-deterministic quantum gates, which are essential for the model to be universal. These rely on gate teleportation, where multiple probabilistic gates are prepared offline and additional measurements are performed mid-circuit. Those two factors are the cause for additional scalability problems in the KLM protocol. In the KLM protocol the desired initial state is one in which each of the photons is in one of two modes, and the possibility that a mode is occupied by more than one photon is zero. In boson sampling, however, the desired initial state is specific, requiring that the first $n$ modes are each occupied by a single photon ($n$ is the number of photons and $m$ is the number of modes) and all the other modes are empty. Earlier models Another, earlier model which relies on the representation of several qubits by a single photon is based on the work of C. Adami and N. J. Cerf. By using both the location and the polarization of photons, a single photon in this model can represent several qubits; however, as a result, a CNOT gate can only be implemented between the two qubits represented by the same photon. The figures below are examples of making an equivalent Hadamard gate and CNOT gate using beam splitters (illustrated as rectangles connecting two sets of crossing lines with parameters $\theta$ and $\phi$) and phase shifters (illustrated as rectangles on a line with parameter $\phi$). In the optical realization of the CNOT gate, the polarization and location are the control and target qubit, respectively. References External links Quantum information science Quantum optics Quantum gates
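As background to the boson-sampling model discussed in the article above: in the collision-free case, the probability of detecting one photon in each of a chosen set of output modes is the squared modulus of the permanent of a submatrix of the interferometer's unitary. The helper below is a naive sketch written for this article (the random unitary, mode choices and function names are illustrative, not from the source):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Naive O(n * n!) matrix permanent; adequate for the tiny examples here."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def output_probability(U, in_modes, out_modes):
    """Probability of one photon in each of out_modes, given single photons injected
    into in_modes of the interferometer U (collision-free boson sampling)."""
    return abs(permanent(U[np.ix_(out_modes, in_modes)])) ** 2

# toy example: 4 modes, photons injected into modes 0 and 1, random unitary from a QR decomposition
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(output_probability(Q, [0, 1], [2, 3]))
```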
Linear optical quantum computing
[ "Physics" ]
3,083
[ "Quantum optics", "Quantum mechanics" ]
41,314,653
https://en.wikipedia.org/wiki/C13H8O4
{{DISPLAYTITLE:C13H8O4}} The molecular formula C13H8O4 (molar mass: 228.20 g/mol, exact mass: 228.0423 u) may refer to: Euxanthone, a naturally occurring xanthonoid Urolithin A, a metabolite compound
C13H8O4
[ "Chemistry" ]
74
[ "Isomerism", "Set index articles on molecular formulas" ]
41,317,926
https://en.wikipedia.org/wiki/N-Methyl-L-glutamic%20acid
{{DISPLAYTITLE:N-Methyl-L-glutamic acid}} N-Methyl-L-glutamic acid (methylglutamate) is a chemical derivative of glutamic acid in which a methyl group has been added to the amino group. It is an intermediate in methane metabolism. Biosynthetically, it is produced from methylamine and glutamic acid by the enzyme methylamine—glutamate N-methyltransferase. It can also be demethylated by methylglutamate dehydrogenase to regenerate glutamic acid. References Amino acid derivatives Dicarboxylic acids Secondary amino acids
N-Methyl-L-glutamic acid
[ "Chemistry" ]
144
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
44,172,431
https://en.wikipedia.org/wiki/Eukaryotic%20Promoter%20Database
EPD (Eukaryotic Promoter Database) is a biological database and web resource of eukaryotic RNA polymerase II promoters with experimentally defined transcription start sites. Originally, EPD was a manually curated resource relying on transcript mapping experiments (mostly primer extension and nuclease protection assays) targeted at individual genes and published in academic journals. More recently, automatically generated promoter collections derived from electronically distributed high-throughput data produced with the CAGE or TSS-Seq protocols were added as part of a special subsection named EPDnew. The EPD web server offers additional services, including an entry viewer which enables users to explore the genomic context of a promoter in a UCSC Genome Browser window, and direct links for uploading EPD-derived promoter subsets to associated web-based promoter analysis tools of the Signal Search Analysis (SSA) and ChIP-Seq servers. EPD also features a collection of position weight matrices (PWMs) for common promoter sequence motifs. History and Impact EPD was created in 1986 as an electronic version of a eukaryotic promoter compilation published in an article and has been regularly updated since then. The database was initially distributed on magnetic tapes as part of the EMBL data library and later via the Internet. The collaboration between EPD and the EMBL library was cited as a pioneering example of remote nucleotide sequence annotation by domain experts. EPD has played an instrumental role in the development and evaluation of promoter prediction algorithms as it is broadly considered the most accurate promoter resource. As of November 2014, it had been cited about 2,500 times in the scientific literature. EPD has also received ample coverage in bioinformatics and systems biology textbooks. References External links SIB - Swiss Institute of Bioinformatics SSA Signal Search Analysis server ChIP-Seq ChIP-seq On-line Analysis Tools PWMTools Position Weight Matrix model generation and evaluation tools Biological databases Genetics databases Genomics Science and technology in Switzerland
Eukaryotic Promoter Database
[ "Biology" ]
420
[ "Bioinformatics", "Biological databases" ]
44,172,803
https://en.wikipedia.org/wiki/Lieb%E2%80%93Thirring%20inequality
In mathematics and physics, Lieb–Thirring inequalities provide an upper bound on the sums of powers of the negative eigenvalues of a Schrödinger operator in terms of integrals of the potential. They are named after E. H. Lieb and W. E. Thirring. The inequalities are useful in studies of quantum mechanics and differential equations and imply, as a corollary, a lower bound on the kinetic energy of quantum mechanical particles that plays an important role in the proof of stability of matter.

Statement of the inequalities For the Schrödinger operator $-\Delta + V(x) = -\nabla^2 + V(x)$ on $\mathbb{R}^d$ with real-valued potential $V(x)$, the numbers $\lambda_1 \le \lambda_2 \le \dots \le 0$ denote the (not necessarily finite) sequence of negative eigenvalues. Then, for $\gamma$ and $d$ satisfying one of the conditions
$$\gamma \ge \tfrac12,\ d = 1; \qquad \gamma > 0,\ d = 2; \qquad \gamma \ge 0,\ d \ge 3,$$
there exists a constant $L_{\gamma,d}$, which only depends on $\gamma$ and $d$, such that
$$\sum_j |\lambda_j|^{\gamma} \;\le\; L_{\gamma,d} \int_{\mathbb{R}^d} V(x)_-^{\gamma + d/2}\, \mathrm{d}^d x \qquad (1)$$
where $V(x)_- := \max(-V(x), 0)$ is the negative part of the potential $V$. The cases $\gamma > \tfrac12,\ d = 1$ as well as $\gamma > 0,\ d \ge 2$ were proven by E. H. Lieb and W. E. Thirring in 1976 and used in their proof of stability of matter. In the case $\gamma = 0,\ d \ge 3$ the left-hand side is simply the number of negative eigenvalues, and proofs were given independently by M. Cwikel, E. H. Lieb and G. V. Rozenbljum. The resulting inequality is thus also called the Cwikel–Lieb–Rosenbljum bound. The remaining critical case $\gamma = \tfrac12,\ d = 1$ was proven to hold by T. Weidl. The conditions on $\gamma$ and $d$ are necessary and cannot be relaxed.

Lieb–Thirring constants Semiclassical approximation The Lieb–Thirring inequalities can be compared to the semi-classical limit. The classical phase space consists of pairs $(p, x) \in \mathbb{R}^{2d}$. Identifying the momentum operator $-\mathrm{i}\nabla$ with $p$ and assuming that every quantum state is contained in a volume $(2\pi)^d$ in the $2d$-dimensional phase space, the semi-classical approximation
$$\sum_j |\lambda_j|^{\gamma} \approx \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d}\!\int_{\mathbb{R}^d} \big(p^2 + V(x)\big)_-^{\gamma}\, \mathrm{d}^d p\, \mathrm{d}^d x = L^{\mathrm{cl}}_{\gamma,d} \int_{\mathbb{R}^d} V(x)_-^{\gamma + d/2}\, \mathrm{d}^d x$$
is derived with the constant
$$L^{\mathrm{cl}}_{\gamma,d} = (4\pi)^{-d/2}\, \frac{\Gamma(\gamma+1)}{\Gamma(\gamma + 1 + d/2)}.$$
While the semi-classical approximation does not need any assumptions on $\gamma$, the Lieb–Thirring inequalities only hold for suitable $\gamma$.

Weyl asymptotics and sharp constants Numerous results have been published about the best possible constant $L_{\gamma,d}$ in (1), but this problem is still partly open. The semiclassical approximation becomes exact in the limit of large coupling, that is for potentials $\beta V$ the Weyl asymptotics
$$\lim_{\beta \to \infty} \frac{1}{\beta^{\gamma + d/2}} \operatorname{tr}\big({-\Delta} + \beta V\big)_-^{\gamma} = L^{\mathrm{cl}}_{\gamma,d} \int_{\mathbb{R}^d} V(x)_-^{\gamma + d/2}\, \mathrm{d}^d x$$
hold. This implies that $L_{\gamma,d} \ge L^{\mathrm{cl}}_{\gamma,d}$. Lieb and Thirring were able to show that $L_{\gamma,1} = L^{\mathrm{cl}}_{\gamma,1}$ for $\gamma \ge 3/2$. M. Aizenman and E. H. Lieb proved that for fixed dimension $d$ the ratio $L_{\gamma,d}/L^{\mathrm{cl}}_{\gamma,d}$ is a monotonic, non-increasing function of $\gamma$. Subsequently $L_{\gamma,d} = L^{\mathrm{cl}}_{\gamma,d}$ was also shown to hold for all $d$ when $\gamma \ge 3/2$ by A. Laptev and T. Weidl. For $\gamma = 1/2$ and $d = 1$, D. Hundertmark, E. H. Lieb and L. E. Thomas proved that the best constant is given by $L_{1/2,1} = 2\,L^{\mathrm{cl}}_{1/2,1} = 1/2$. On the other hand, it is known that $L_{\gamma,1} > L^{\mathrm{cl}}_{\gamma,1}$ for $\gamma < 3/2$ and that $L_{\gamma,d} > L^{\mathrm{cl}}_{\gamma,d}$ for $\gamma < 1$. In the former case Lieb and Thirring conjectured that the sharp constant is the one attained by a potential with exactly one bound state, $L_{\gamma,1} = L^{(1)}_{\gamma,1}$. The best known value for the physically relevant constant $L_{1,3}$ and the smallest known constant in the Cwikel–Lieb–Rosenbljum inequality have been improved repeatedly; a complete survey of the presently best known values for $L_{\gamma,d}$ can be found in the literature.

Kinetic energy inequalities The Lieb–Thirring inequality for $\gamma = 1$ is equivalent to a lower bound on the kinetic energy of a given normalised $N$-particle wave function $\psi \in L^2(\mathbb{R}^{dN})$ in terms of the one-body density. For an anti-symmetric wave function such that
$$\psi(x_1, \dots, x_i, \dots, x_j, \dots, x_N) = -\psi(x_1, \dots, x_j, \dots, x_i, \dots, x_N)$$
for all $i \ne j$, the one-body density is defined as
$$\rho_\psi(x) = N \int_{\mathbb{R}^{d(N-1)}} |\psi(x, x_2, \dots, x_N)|^2\, \mathrm{d}^d x_2 \cdots \mathrm{d}^d x_N.$$
The Lieb–Thirring inequality (1) for $\gamma = 1$ is equivalent to the statement that
$$\sum_{i=1}^N \int_{\mathbb{R}^{dN}} |\nabla_i \psi|^2\, \mathrm{d}^d x_1 \cdots \mathrm{d}^d x_N \;\ge\; K_d \int_{\mathbb{R}^d} \rho_\psi(x)^{1 + 2/d}\, \mathrm{d}^d x \qquad (2)$$
where the sharp constant $K_d$ is related to $L_{1,d}$ via
$$K_d = \frac{d}{d+2}\left(\frac{2}{(d+2)\,L_{1,d}}\right)^{2/d}.$$
The inequality can be extended to particles with spin states by replacing the one-body density by the spin-summed one-body density. The constant $K_d$ then has to be replaced by $K_d / q^{2/d}$, where $q$ is the number of quantum spin states available to each particle ($q = 2$ for electrons). 
If the wave function is symmetric, instead of anti-symmetric, such that
$$\psi(x_1, \dots, x_i, \dots, x_j, \dots, x_N) = \psi(x_1, \dots, x_j, \dots, x_i, \dots, x_N)$$
for all $i \ne j$, the constant $K_d$ has to be replaced by $K_d / N^{2/d}$. Inequality (2) describes the minimum kinetic energy necessary to achieve a given density $\rho_\psi$ with $N$ particles in $d$ dimensions. If $L_{1,3} = L^{\mathrm{cl}}_{1,3}$ was proven to hold, the right-hand side of (2) for $d = 3$ would be precisely the kinetic energy term in Thomas–Fermi theory. The inequality can be compared to the Sobolev inequality. M. Rumin derived the kinetic energy inequality (2) (with a smaller constant) directly without the use of the Lieb–Thirring inequality.

The stability of matter (for more information, read the Stability of matter page) The kinetic energy inequality plays an important role in the proof of stability of matter as presented by Lieb and Thirring. The Hamiltonian under consideration describes a system of $N$ particles with $q$ spin states and $K$ fixed nuclei at locations $R_k$ with charges $Z_k$. The particles and nuclei interact with each other through the electrostatic Coulomb force and an arbitrary magnetic field can be introduced. If the particles under consideration are fermions (i.e. the wave function $\psi$ is antisymmetric), then the kinetic energy inequality (2) holds with the constant $K_d / q^{2/d}$ (not $K_d / N^{2/d}$). This is a crucial ingredient in the proof of stability of matter for a system of fermions. It ensures that the ground state energy of the system can be bounded from below by a constant depending only on the maximum of the nuclei charges, $Z_{\max}$, times the number of particles,
$$E(N, K) \ge -C(Z_{\max})\,(N + K).$$
The system is then stable of the first kind since the ground-state energy is bounded from below and also stable of the second kind, i.e. the energy decreases linearly with the number of particles and nuclei. In comparison, if the particles are assumed to be bosons (i.e. the wave function $\psi$ is symmetric), then the kinetic energy inequality (2) holds only with the constant $K_d / N^{2/d}$ and for the ground state energy only a bound of the form $-C N^{5/3}$ holds. Since the power $5/3$ can be shown to be optimal, a system of bosons is stable of the first kind but unstable of the second kind.

Generalisations If the Laplacian $-\Delta = -\nabla^2$ is replaced by $(\mathrm{i}\nabla + A(x))^2$, where $A(x)$ is a magnetic field vector potential in $\mathbb{R}^d$, the Lieb–Thirring inequality (1) remains true. The proof of this statement uses the diamagnetic inequality. Although all presently known constants $L_{\gamma,d}$ remain unchanged, it is not known whether this is true in general for the best possible constant. The Laplacian can also be replaced by other powers of $-\Delta$. In particular for the operator $\sqrt{-\Delta}$, a Lieb–Thirring inequality similar to (1) holds with a different constant $L_{\gamma,d}$ and with the power on the right-hand side replaced by $\gamma + d$. Analogously a kinetic inequality similar to (2) holds, with $1 + 2/d$ replaced by $1 + 1/d$, which can be used to prove stability of matter for the relativistic Schrödinger operator under additional assumptions on the charges $Z_k$. In essence, the Lieb–Thirring inequality (1) gives an upper bound on the distances of the eigenvalues $\lambda_j$ to the essential spectrum $[0, \infty)$ in terms of the perturbation $V$. Similar inequalities can be proved for Jacobi operators. References Literature Inequalities
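A rough numerical illustration of the gamma = 1 inequality in one dimension for the article above (my own sketch; a finite-difference check of a single Gaussian well, not a proof, using the valid but non-sharp constant 2·L^cl_{1,1} = 4/(3π) that follows from the Aizenman–Lieb monotonicity together with the sharp gamma = 1/2 result quoted above):

```python
import numpy as np

# Check  sum_j |lambda_j|  <=  L * integral V_-(x)^{3/2} dx  with L = 4/(3*pi)
# for  -d^2/dx^2 + V(x),  V(x) = -25 * exp(-x^2),  discretized on a finite grid.
Lbox, N = 15.0, 1500
x = np.linspace(-Lbox, Lbox, N)
h = x[1] - x[0]
V = -25.0 * np.exp(-x ** 2)

# Dirichlet finite-difference Hamiltonian
H = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2 + np.diag(V)

eigs = np.linalg.eigvalsh(H)
lhs = np.sum(np.abs(eigs[eigs < 0]))                        # sum of |negative eigenvalues|
rhs = (4.0 / (3.0 * np.pi)) * np.sum(np.maximum(-V, 0.0) ** 1.5) * h
print(lhs, "<=", rhs, bool(lhs <= rhs))
```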
Lieb–Thirring inequality
[ "Mathematics" ]
1,432
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
44,174,021
https://en.wikipedia.org/wiki/Excitatory%20amino%20acid%20reuptake%20inhibitor
An excitatory amino acid reuptake inhibitor (EAARI) is a type of drug which inhibits the reuptake of the excitatory neurotransmitters glutamate and aspartate by blocking one or more of the excitatory amino acid transporters (EAATs). Examples of EAARIs include dihydrokainic acid (DHK) and WAY-213,613, selective blockers of EAAT2 (GLT-1), and L-trans-2,4-PDC, a non-selective blocker of all five EAATs. Amphetamine is a selective noncompetitive reuptake inhibitor of presynaptic EAAT3 (via transporter endocytosis) in dopamine neurons. L-Theanine is reported to competitively inhibit reuptake at EAAT1 (GLAST) and EAAT2 (GLT-1). See also Reuptake inhibitor Glutamatergic GABA reuptake inhibitor Glycine reuptake inhibitor Excitatory amino acid receptor agonist Excitatory amino acid receptor antagonist References External links
Excitatory amino acid reuptake inhibitor
[ "Chemistry" ]
240
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,056,721
https://en.wikipedia.org/wiki/Borate%20bromide
The borate bromides are mixed anion compounds that contain borate and bromide anions. They are in the borate halide family of compounds which also includes borate fluorides, borate chlorides, and borate iodides. List References Borates Bromides Mixed anion compounds
Borate bromide
[ "Physics", "Chemistry" ]
66
[ "Matter", "Mixed anion compounds", "Salts", "Bromides", "Ions" ]
67,065,244
https://en.wikipedia.org/wiki/Fluoride%20nitrate
Fluoride nitrates are mixed anion compounds that contain both fluoride ions and nitrate ions. Compounds are known for some amino acids and for some heavy elements. Some transition metal fluorido complexes that are nitrates are also known. There are also fluorido nitrato complex ions known in solution. List References Nitrates Fluorides Mixed anion compounds
Fluoride nitrate
[ "Physics", "Chemistry" ]
77
[ "Matter", "Mixed anion compounds", "Nitrates", "Salts", "Oxidizing agents", "Fluorides", "Ions" ]
67,065,630
https://en.wikipedia.org/wiki/Coral%20reef%20restoration
Coral reef restoration strategies use natural and anthropogenic processes to restore damaged coral reefs. Reefs suffer damage from a number of natural and man-made causes, and efforts are being made to rectify the damage and restore the reefs. This involves the fragmentation of mature corals, the placing of the living fragments on lines or frames, the nurturing of the fragments as they recover and grow, and the transplantation of the pieces into their final positions on the reef when they are large enough. Background Coral reefs are important buffers between the land and water and help to reduce storm damage and coastal erosion. They provide employment and recreational opportunities, and they are a major source of food for coastal communities. It is estimated that $375 billion in ecosystem services is provided by coral reefs each year. The most prevalent corals in tropical reefs are the stony corals (Scleractinia), which build hard skeletons of calcium carbonate that provide protection and structure to the reef. Coral polyps have a mutualistic relationship with single-celled algae referred to as zooxanthellae. These algae live in the tissue of coral polyps and provide energy to the coral through photosynthesis. In turn, the coral provides shelter and nutrients to the zooxanthellae. Half the world's coral has disappeared since 1970, and all reefs are threatened with extinction by 2050. In order to ensure the existence of coral reefs in the future, new methods for restoring their ecosystems are being investigated. Fragmentation is the most common strategy for restoring reefs; it is often used to establish artificial reefs like coral trees, line nurseries, and fixed structures. Threats to coral reefs Some anthropogenic activities, such as coral mining, bottom trawling, canal digging, and blast fishing, cause physical disruption to coral reefs by damaging the corals' hard calcium carbonate skeletal structure. Another major threat to coral reefs comes from chemical degradation. Marine pollution from sunscreens, paints, and inland mining can introduce chemicals that are toxic to corals, leading to their decay. Coral disease is often prevalent in areas where corals are stressed, and has increased in severity in recent decades. Often a result of pollution, eutrophication can occur in coral reef ecosystems, limiting the nutrients available to the corals. With changes happening on coastal lands such as deforestation, mining, farm soil tilling and erosion, much more sediment is entering the water column. This is known as sediment loading, which can directly smother the coral, or block UV light, effectively blocking the coral from photosynthesizing. Additionally, increased CO2 emissions from human activities such as fossil fuel burning can affect the acidity of ocean waters. Ocean acidification occurs when excess CO2 reacts with ocean water and lowers the pH. Under acidic conditions, corals cannot produce their calcium carbonate skeletons, and certain zooxanthellae are not able to survive. Perhaps the biggest threat to coral reefs comes from rising global temperatures. Most corals can only tolerate a 4–5 °C range in water temperatures. Under these adverse conditions, corals may expel their zooxanthellae and become bleached. As ocean waters warm beyond the tolerated temperature range, corals are dying. One study of the Great Barrier Reef found the reef mortality rate to be 50% after an extreme heatwave with a 3–4 °C temperature increase. 
Following bleaching events similar to this one, injured corals continue to die due to increased disease susceptibility, it takes decades for the reef to recover, and the slow-growing corals are put under an immense amount of stress. The rising global temperature is a consequence of releasing high amounts of greenhouse gases into the atmosphere. A study showed that about 655 million people live close to coral reefs, accounting for 91% of the world's population who are part of developed countries such as the United States of America, the Middle East and China. The same study also revealed that, of the 655 million people, 75% of the population living in close proximity to coral reefs are from poor, developing countries, and even though these developing countries depend on the coral reef ecosystem, they contribute only a small fraction of greenhouse emissions. Emission statistics have shown that developed countries contribute about 11 times more greenhouse gas emissions than poor developing countries. Propagation methods Marine Based The process of cultivating coral polyps to aid in the regeneration of reefs worldwide is known as coral gardening. Growing small coral fragments through asexual reproduction until they are fully mature is the fundamental technique of coral gardening, with ocean-based or land-based nurseries being the two primary methods utilized. Coral reefs are being restored through the use of ocean-based and land-based nurseries. Ocean-based nurseries involve growing coral fragments underwater, attaching them to steel structures and monitoring their growth for 6–12 months until they reach maturity. Once mature, the new polyp colonies can be transferred to damaged reefs. Land-based nurseries, on the other hand, grow coral fragments in laboratories or farms, which allows for faster processes like micro-fragmenting. Since most corals grow only about an inch per year, faster-growing practices are important for the restoration of the reefs. Additionally, growing corals on land protects them from changing temperatures, predators, and other problems that can interfere with the restoration process. Additionally, with the help of NOAA, over 40,000 corals have been restored throughout the Caribbean region. Fragmentation is a method used to divide a wild colony of coral into smaller fragments, and these smaller pieces are grown into additional coral colonies. These fragmented colonies are genetically identical to the host colony. Up to 75% of the host colony may be removed without negative effect on its growth rate. This allows researchers to move forward with restoration projects with minimal impact, if any at all, on the growth rate or survivorship of the original colony. Fragmentation practices are used in virtually every kind of coral restoration strategy used today. Several different methods of growing fragmented corals are outlined below. Fragmentation allows for about an 8x increase in productivity compared to that of the original donor coral. The amount of fragmentation done to the donor coral is determined based on the amount of space available for attachment. Although fragmentation has great potential, it should be avoided when risks of disease and storms are high, as it increases the potential harm from these stressors. This strategy may not be optimal for certain species that are less adapted to fragmentation or have slower growth rates. In vertical line nurseries, coral fragments are tied to a line suspended in the water. 
One end of the line is attached to a buoy while the other is anchored to the seafloor. The corals in this type of nursery are linked directly to the vertical line in the water column. In suspended line nurseries, two vertical line nurseries are placed apart from each other so they are parallel vertically in the water column. They are then connected together with rope tied perpendicularly between the two. Coral is then attached to this rope, but it is partially dangling off the lines so there is less contact with the rope itself. Less contact between the coral and the suspension lines leads to lower partial mortality of the corals. Although these structures have some partial mortality, studies show high survival of the whole nursery (in both vertical and suspended designs). Raising corals on line structures increases the distance between the coral colonies and potential predators and benthic diseases, and there is less competition for space. Corals grown in line nurseries need to be moved to fixed substrates after an initial growth period, while those propagated on fixed structures can grow indefinitely. Fixed structure nurseries are frames attached to the seafloor. These nurseries are often made from materials like PVC, plastic mesh, and cinder blocks. There are likely no differences in growth rates between corals grown horizontally in fixed nurseries and those grown vertically in line nurseries. However, the survival rate of these nurseries is lower than that of line nurseries. A 2008 study found that fixed structure nurseries had a 43% survival rate, while line nurseries had a 100% survival rate. Initial mortality of fixed structure nurseries is also likely dependent on the time of year that the corals are transplanted. It is important to limit stressors that newly grafted corals are exposed to. A "coral tree" is the first type of nursery in which the coral is completely suspended in the water column. Low cost and availability of materials to create these coral trees make them an ideal method for propagation. These nurseries are less susceptible to damage from wave action, there is less interaction with benthic predators and disease, and there is a reduced entanglement risk for other marine life (compared to line nurseries). Because these nurseries are only anchored in one place, there is minimal impact to the seafloor, they are portable and easily transported by one person, and they can be easily adjusted if depth is an issue. Land-based Land-based coral nurseries allow coral to grow to a reasonable size before out-planting. Tanks filled with circulating sea water provide an artificial place for coral seedlings to grow. Similar to plant nurseries, a coral nursery provides protection from storms, predation and other stressors as they grow. It is also a place to selectively breed for resistant genotypes. Techniques in growing coral on land can involve sexual and asexual reproduction of coral. When used together, coral specimens can be grown with higher resilience to stressors and fast growth rates. Asexual Coral Reproduction Coral are able to reproduce asexually when one polyp undergoes budding to produce another clonal polyp. A technique called micro-fragmentation was developed by Dr. David Vaughan in 2006, which uses the coral's ability to clone itself for coral production. Micro-fragmentation is the process of creating small (about 1 cm) pieces of live coral from a parent coral colony. These pieces are then affixed to a ceramic or cement base called a plug and placed in land nursery tanks. 
Massive reef-building corals are the prime species used in this method because it speeds up their growth rate. Rather than waiting decades for a coral to grow to a robust size, only months are needed to see viable specimens. This is due to the quick healing response of coral. During micro-fragmentation, wounded edges are created where the colony is severed. These heal quickly by expanding their size radially outward, colonizing their plugs and eventual out-planting sites in the ocean. Fusion of multiple fragments of the same genotype can result in a larger area of coral cover. Sexual Coral Reproduction Corals reproduce sexually through broadcast spawning. Coral larvae are formed in the water column through the fertilization of suspended gamete bundles. In a land-based nursery, control over which specimens reproduce can allow for selective breeding of more resilient coral. Availability of coral gametes in the wild is highly dependent on environmental factors. Studies have shown that most spawning happens at the same time of evening and depends on lunar cycles. Recent work has been attempting to trigger coral spawning in the nursery environment by mimicking these environmental controls. Restoration strategies Coral restoration has been occurring for over 40 years. When determining which restoration strategy is best for a given location, it is important to compare and contrast all methods. The effectiveness of a strategy can depend on the habitat a nursery resides in, the conditions of the environment, how the conditions vary annually, and the structure of the nursery chosen. Coral gardening for reef restoration, on any scale, may not be capable of saving a depleted species. Instead, restoration strategies should be used to aid natural recovery in the re-establishment of a larger genetic pool of a species of coral. This allows corals to sexually reproduce and recover naturally with time. Coral gardening and propagation of corals are important because it is much easier for a fragment of coral to survive than it is for the early life-stage of coral to establish itself in reef environments. Creating repositories for corals can aid in species reintroduction after coral die-off events. Not only do these repositories serve as a method for recovery, but they can also greatly enhance the genetic pool of isolated populations of corals. Through enhancing these genetic pools, we can expect higher future survival rates for the corals. One study used an Acropora cervicornis nursery as a repository after an extreme cold-water event wiped out roughly 43% of the species' population in the area. The reintroduction of corals from these repositories returned healthy coral tissue to the coral population, aiding in natural reproduction. These practices should be used simultaneously with measures such as watershed management, sustainable fishing practices, and the establishment of Marine Protected Areas. Coral gardening also offers indirect benefits, like the rapid creation of new fish and invertebrate habitat on depleted reefs. These reef restoration methods also create citizen science opportunities, getting the community involved in coral restoration and conservation. See also Coral reef Marine protected area Biorock References Coral reefs Wildlife conservation Ecological restoration
Coral reef restoration
[ "Chemistry", "Engineering", "Biology" ]
2,648
[ "Ecological restoration", "Coral reefs", "Wildlife conservation", "Biogeomorphology", "Biodiversity", "Environmental engineering" ]
68,478,511
https://en.wikipedia.org/wiki/Philip%20Bunker
Philip R. Bunker (born 29 June 1941) is a British-Canadian scientist and author, known for his work in theoretical chemistry and molecular spectroscopy. Education and early work Philip Bunker was educated at Battersea Grammar School in Streatham. He received a bachelor's degree at King's College in 1962 and earned a Ph.D. in theoretical chemistry from Cambridge University in 1965, advised by H.C. Longuet-Higgins. The subject of his Ph.D. thesis was the spectrum of the dimethylacetylene molecule and its torsional barrier. During Bunker's Ph.D. work in 1963, Longuet-Higgins published the paper that introduced molecular symmetry groups consisting of feasible nuclear permutations and permutation-inversions. Under the guidance of Longuet-Higgins, Bunker applied these new symmetry ideas and introduced the notations G36 and G100 for the molecular symmetry groups of dimethylacetylene and ferrocene, respectively. After obtaining his Ph.D. degree, he was a postdoctoral fellow with Jon T. Hougen in the spectroscopy group of Gerhard Herzberg at the National Research Council of Canada. He then spent his entire career at the National Research Council of Canada, eventually rising to the position of principal research officer in 1997. Career and important contributions Philip Bunker's published scientific work has focused on the use of fundamental quantum mechanics to predict and interpret the spectral properties of polyatomic molecules due to their combined rotational, vibrational, electronic and nuclear-spin states, and their symmetries. He has been particularly concerned with the study of the energy levels and spectra of molecules that undergo large amplitude vibrational motions. Applications of this work to the methylene (CH2) molecule proved to be important in determining the separation between the singlet and triplet electronic states, and in determining which singlet and triplet rotational levels interact. In the 1990s, he returned to the problem of determining the torsional barrier in dimethylacetylene after Robert McKellar and John Johns, experimentalists at the National Research Council of Canada, had obtained a very high resolution infrared spectrum of the molecule. Bunker is a well-known expert in the use of the molecular symmetry group. At the end of Longuet-Higgins' paper in which he introduced permutation and permutation-inversion molecular symmetry groups, Longuet-Higgins wrote: "In conclusion it should be added that the present definition can be extended to linear molecules, and to molecules where spin-orbit coupling is strong; but these topics are best dealt with separately." However, a few years later (in 1967) Longuet-Higgins left the field of theoretical chemistry; he wrote nothing more about molecular symmetry and did not make these extensions. Bunker then developed the extensions of these principles to linear molecules as well as to molecules with strong spin-orbit coupling. Bunker is also known for his work in the quantitative description of non-adiabatic effects in quantum molecular dynamics. Together with Per Jensen (1956-2022), who was a theoretical chemist at Bergische Universität Wuppertal, Bunker has written two books on theoretical chemistry and molecular spectroscopy; Molecular Symmetry and Spectroscopy (1998) and Fundamentals of Molecular Symmetry (2005). Currently, Bunker is Researcher Emeritus at the National Research Council of Canada and a guest scientist at the Fritz-Haber Institute of the Max Planck Society. 
He has also held visiting scientist positions at universities and institutions around the world during the course of his career, including ETH-Zurich, Massey University, Kyushu University and University of Florence. During the course of his career he has delivered over 400 invited lectures. Awards and honors Bunker received the Humboldt Prize (1995), the Medaili Jana Marca Marci of the Czech Spectroscopy Society (2002), and the 2002 Sir Harold Thompson Memorial Award, which is sponsored by Pergamon Press (now Elsevier) for the most significant advance in spectroscopy published in Spectrochimica Acta each year. He is a fellow of the International Union of Pure and Applied Chemistry. Personal life Bunker married Eva Cservenits in 1966. Their son, Alex E. Bunker, is a computational biophysicist at the University of Helsinki. References Selected presentations External links Medaili Jana Marca Marci. https://lcms.cz/companies/7 Prof. Bunker's personal web site at https://chemphys.ca/pbunker/ From Battersea Grammar School Magazine, 1959 Autumn Term. Courtesy of the Old Grammarians website. Living people Canadian chemists Spectroscopists National Research Council (Canada) Alumni of King's College London 1941 births
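The subscripts in the G36 and G100 notation mentioned above simply record the order of the molecular symmetry group, i.e. the number of feasible permutations and permutation-inversions. The following minimal sketch counts those orders using a standard textbook factorization (internal rotation of each of the two equivalent tops or rings, the operation exchanging them, and the inversion E*); this factorization is assumed here for illustration and is not quoted from the article above.

```python
# Illustrative only: orders of the molecular symmetry (MS) groups G36 and G100,
# counted as products of feasible operations (a standard textbook factorization,
# assumed here rather than taken from the article above).

def ms_group_order(rotations_top_1: int, rotations_top_2: int) -> int:
    """Order = (orientations of top 1) x (orientations of top 2)
             x 2 (exchange of the two equivalent tops) x 2 (inversion E*)."""
    return rotations_top_1 * rotations_top_2 * 2 * 2

# Dimethylacetylene: two CH3 tops, each with 3 equivalent orientations -> G36.
print(ms_group_order(3, 3))   # 36

# Ferrocene: two C5H5 rings, each with 5 equivalent orientations -> G100.
print(ms_group_order(5, 5))   # 100
```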
Philip Bunker
[ "Physics", "Chemistry" ]
959
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
50,539,177
https://en.wikipedia.org/wiki/Navigation%20%28journal%29
Navigation is an open access academic journal about navigation published by the Institute of Navigation in cooperation with HighWire Press. Its editor-in-chief is Richard B. Langley; its 2021 impact factor is 2.1. The Journal Citation Reports categorizes the journal under aerospace engineering, remote sensing, and telecommunications. The journal publishes original, peer-reviewed papers in an open access (OA) environment on all areas related to the art, science, and engineering of positioning, navigation and timing (PNT) covering land (including indoor use), sea, air, and space applications. PNT technologies of interest encompass navigation satellite systems (both global and regional); inertial navigation, electro-optical systems including LiDAR and imaging sensors; and radio-frequency ranging and timing systems, including those using signals of opportunity from communication systems and other non-traditional PNT sources. Papers about PNT algorithms and methods, such as for error characterization and mitigation, integrity analysis, PNT signal processing, and multi-sensor integration are welcome. The journal also accepts papers on non-traditional applications of PNT systems, including remote sensing of the Earth’s surface or atmosphere, as well as selected historical and survey articles. References Navigation Aerospace engineering journals Remote sensing journals Wiley (publisher) academic journals Academic journals associated with learned and professional societies of the United States
Navigation (journal)
[ "Engineering" ]
276
[ "Aerospace engineering journals", "Aerospace engineering" ]
50,540,012
https://en.wikipedia.org/wiki/Nusinersen
Nusinersen, marketed as Spinraza, is a medication used in treating spinal muscular atrophy (SMA), a rare neuromuscular disorder. In December 2016, it became the first approved drug used in treating this disorder. Since the condition it treats is so rare, Nusinersen has so-called "orphan drug" designation in the United States and the European Union. Medical uses The drug is used to treat spinal muscular atrophy associated with a mutation in the SMN1 gene. It is administered directly to the central nervous system (CNS) using intrathecal injection. In clinical trials, the drug halted the disease progression. In around 60% of infants affected by type 1 spinal muscular atrophy, it improves motor function. Side effects People treated with nusinersen had an increased risk of upper and lower respiratory infections and congestion, ear infections, constipation, pulmonary aspiration, teething, and scoliosis. There is a risk that growth of infants and children might be stunted. In older clinical trial subjects, the most common adverse events were headache, back pain, and other adverse effects from the spinal injection, such as post-dural-puncture headache. Although not observed in the trial patients, a reduction in platelets as well as a risk of kidney damage are theoretical risks for antisense drugs and therefore platelets and kidney function should be monitored during treatment. In 2018, several cases of communicating hydrocephalus in children and adults treated with nusinersen emerged; it remains unclear whether this was drug related. Pharmacology Spinal muscular atrophy is caused by loss-of-function mutations in the SMN1 gene which codes for survival motor neuron (SMN) protein. People survive owing to low amounts of the SMN protein produced from the SMN2 gene. Nusinersen modulates alternative splicing of the SMN2 gene, functionally converting it into SMN1 gene, thus increasing the level of SMN protein in the CNS. The drug distributes to CNS and peripheral tissues. The half-life is estimated to be 135 to 177 days in cerebrospinal fluid (CSF) and 63 to 87 days in blood plasma. The drug is metabolized via exonuclease (3′- and 5′)-mediated hydrolysis and does not interact with CYP450 enzymes. The primary route of elimination is likely by urinary excretion for nusinersen and its metabolites. Chemistry Nusinersen is an antisense oligonucleotide in which the 2'-hydroxy groups of the ribofuranosyl rings are replaced with 2'-O-2-methoxyethyl groups and the phosphate linkages are replaced with phosphorothioate linkages. History Nusinersen was developed in a collaboration between Adrian Krainer at Cold Spring Harbor Laboratory and Ionis Pharmaceuticals (formerly called Isis Pharmaceuticals). Initial work of target discovery of nusinersen was done by Ravindra N. Singh and co-workers at the University of Massachusetts Medical School funded by Cure SMA. Starting in 2012, Ionis partnered with Biogen on development and, in 2015, Biogen acquired an exclusive license to the drug for a license fee, milestone payments up to , and tiered royalties thereafter; Biogen also paid the costs of development subsequent to taking the license. The license to Biogen included licenses to intellectual property that Ionis had acquired from Cold Spring Harbor Laboratory and University of Massachusetts. 
In November 2016, the new drug application was accepted under the FDA's priority review process on the strength of the Phase III trial and the unmet need, and was also accepted for review at the European Medicines Agency (EMA) at that time. It was approved by the FDA in December 2016 and by EMA in May 2017 as the first drug to treat SMA. Subsequently, nusinersen was approved to treat SMA in Canada (July 2017), Japan (July 2017), Brazil (August 2017), Switzerland (September 2017), and China (February 2019). In 2023, additional clinical trials continued to validate the efficacy of nusinersen, particularly emphasizing the benefits of early intervention. The trials demonstrated significant improvements in motor function and survival rates among infants with SMA Type 1, underscoring the importance of prompt treatment to achieve optimal clinical outcomes. Society and culture Economics Nusinersen list price in the USA is per injection which puts the treatment cost at in the first year and annually after that. According to The New York Times, this places nusinersen "among the most expensive drugs in the world". In October 2017, the authorities in Denmark recommended nusinersen for use only in a small subset of people with SMA type 1 (young babies) and refused to offer it as a standard treatment for all other people with SMA quoting an "unreasonably high price" compared to the benefit. Norwegian authorities rejected the funding in October 2017 because the price of the medicine was "unethically high". In February 2018, the funding was approved for people under 18 years old. In April 2023 funding was expanded to include adults. In August 2018, the National Institute for Health and Care Excellence (NICE), which weighs the cost-effectiveness of therapies for the NHS in England and Wales, recommended against offering nusinersen to people with SMA. Children with SMA type 1 were treated in the UK under a Biogen-funded expanded access programme; after enrolling 80 children, the scheme closed to new people in November 2018. In May 2019, however, NICE reversed its stance and announced its decision to recommend nusinersen for use across a wide spectrum of SMA for a 5-year period. The Irish Health Service Executive decided in February 2019 that nusinersen was too expensive to fund, saying the cost would be about €600,000 per patient in the first year and around €380,000 a year thereafter "with an estimated budget impact in excess of €20 million over a five-year period" for the 25 children with SMA living in Ireland. Both the manufacturer and patient groups disputed the numbers and pointed out that actual pricing arrangements for Ireland are in line with the negotiated price for the BeneluxA initiative which Ireland has been a member of since June 2018. As of May 2019, nusinersen was available in public healthcare in more than 40 countries. In December 2021, nusinersen was included in the extended insurance coverage of China, and the price was reduced from ¥697,000 per vial to around ¥33,000 (~US$5,100) per vial. References Further reading Antisense RNA Drugs acting on the nervous system Orphan drugs Spinal muscular atrophy Therapeutic gene modulation Muscle protectors Muscle stabilizers
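The half-life figures quoted in the Pharmacology section above (135 to 177 days in CSF, 63 to 87 days in plasma) imply a very slow washout between doses. A minimal sketch of that arithmetic is given below; the simple first-order exponential model and the 120-day time point are assumptions made for illustration, not statements about the actual dosing schedule.

```python
# Illustrative first-order decay using the half-life ranges quoted above.
# The exponential model and the 120-day interval are assumptions for this sketch.

def fraction_remaining(days_elapsed: float, half_life_days: float) -> float:
    """Fraction of an initial amount left after days_elapsed, given a half-life."""
    return 0.5 ** (days_elapsed / half_life_days)

for label, half_life in [("CSF (135 d)", 135.0), ("CSF (177 d)", 177.0),
                         ("plasma (63 d)", 63.0), ("plasma (87 d)", 87.0)]:
    print(f"{label}: {fraction_remaining(120, half_life):.0%} remaining after 120 days")
```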
Nusinersen
[ "Biology" ]
1,422
[ "Therapeutic gene modulation" ]
64,142,099
https://en.wikipedia.org/wiki/William%20F.%20Egan
William F. Egan (1936 – December 16, 2012) was a well-known expert and author in the area of phase-locked loops (PLLs). The first and second editions of his book Frequency Synthesis by Phase Lock, as well as his book Phase-Lock Basics, are standard references among electrical engineers specializing in areas involving PLLs. Egan's conjecture on the pull-in range of type II APLL In 1981, while describing high-order PLLs, William Egan conjectured that the type II APLL has theoretically infinite hold-in and pull-in ranges. From a mathematical point of view, this means that the loss of global stability in the type II APLL is caused by the birth of self-excited oscillations and not hidden oscillations (i.e., the boundary of global stability and the pull-in range in the space of parameters is trivial). The conjecture can be found in various later publications, including for the type II CP-PLL. For given parameters, the hold-in and pull-in ranges of the type II APLL may be either (theoretically) infinite or empty; thus, since the pull-in range is a subrange of the hold-in range, the question is whether an infinite hold-in range implies an infinite pull-in range (the Egan problem). Although it is known that the conjecture is valid for the second-order type II APLL, the work by Kuznetsov et al. shows that the Egan conjecture may not be valid in some cases. A similar statement for the second-order APLL with a lead-lag filter arises in Kapranov's conjecture on the pull-in range and in Viterbi's problem on the coincidence of the APLL ranges. In general, Kapranov's conjecture is not valid, and the global stability and pull-in range of the type I APLL with lead-lag filters may be limited by the birth of hidden oscillations (a hidden boundary of global stability and the pull-in range). For control systems, a similar conjecture was formulated by R. Kalman in 1957 (see Kalman's conjecture). References 1936 births 2012 deaths American electrical engineers Hidden oscillation
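To make the hold-in part of the statement above concrete, the following sketch gives a common textbook baseband model of a second-order type II APLL with a sinusoidal phase detector and a perfect-integrator (PI) loop filter. The notation is chosen for this illustration and is not taken from Egan's books, so it should be read as a generic model rather than his exact formulation.

```latex
% Baseband phase-error model (illustrative notation, not Egan's own):
%   theta_e - phase error,  omega_e^free - constant free-running frequency offset,
%   K_v     - VCO gain,     F(s) = (1 + tau_2 s)/(tau_1 s) - PI loop filter.
\[
\dot\theta_e = \omega_e^{\mathrm{free}} - K_v\,u(t), \qquad
u = F(p)\,[\sin\theta_e], \qquad
F(s) = \frac{1 + \tau_2 s}{\tau_1 s}.
\]
% Eliminating u gives an autonomous equation in which the frequency offset
% enters only through the initial conditions:
\[
\ddot\theta_e + \frac{K_v \tau_2}{\tau_1}\cos(\theta_e)\,\dot\theta_e
              + \frac{K_v}{\tau_1}\sin(\theta_e) = 0 .
\]
% Because F(s) contains an ideal integrator, its state can settle at
% u = omega_e^free / K_v with sin(theta_e) = 0 for any constant offset, so locked
% equilibria exist for every offset and the hold-in range is (theoretically)
% infinite. Whether the pull-in range is then also infinite is exactly the Egan
% problem discussed above.
```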
William F. Egan
[ "Mathematics" ]
450
[ "Hidden oscillation", "Dynamical systems" ]
64,143,095
https://en.wikipedia.org/wiki/Thousands%20of%20Problems%20for%20Theorem%20Provers
TPTP (Thousands of Problems for Theorem Provers) is a freely available collection of problems for automated theorem proving. It is used to evaluate the efficacy of automated reasoning algorithms. Problems are expressed in a simple text-based format for first-order logic or higher-order logic. TPTP is used as the source of some problems in CASC. References External links Automated theorem proving
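To illustrate the text-based problem format mentioned above, the sketch below writes a classic syllogism as a TPTP-style first-order (FOF) problem; the formula names, the problem itself, and the output file name are invented for this example and are not drawn from the actual TPTP library.

```python
# Illustrative only: a tiny first-order problem in TPTP's FOF syntax.
# The axioms, names, and file path below are made up for this sketch.

problem = """\
fof(all_humans_mortal, axiom, ! [X] : (human(X) => mortal(X))).
fof(socrates_is_human, axiom, human(socrates)).
fof(socrates_is_mortal, conjecture, mortal(socrates)).
"""

# Provers that read TPTP input would typically be pointed at a file like this.
with open("toy_problem.p", "w") as handle:   # .p is the customary TPTP suffix
    handle.write(problem)

print(problem)
```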
Thousands of Problems for Theorem Provers
[ "Mathematics" ]
79
[ "Mathematical logic stubs", "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
39,923,620
https://en.wikipedia.org/wiki/Lithium%20%28medication%29
Certain lithium compounds, also known as lithium salts, are used as psychiatric medication, primarily for bipolar disorder and for major depressive disorder. Lithium is taken orally (by mouth). Common side effects include increased urination, shakiness of the hands, and increased thirst. Serious side effects include hypothyroidism, diabetes insipidus, and lithium toxicity. Blood level monitoring is recommended to decrease the risk of potential toxicity. If levels become too high, diarrhea, vomiting, poor coordination, sleepiness, and ringing in the ears may occur. Lithium is teratogenic and can cause birth defects at high doses, especially during the first trimester of pregnancy. The use of lithium while breastfeeding is controversial; however, many international health authorities advise against it, and the long-term outcomes of perinatal lithium exposure have not been studied. The American Academy of Pediatrics lists lithium as contraindicated for pregnancy and lactation. The United States Food and Drug Administration categorizes lithium as having positive evidence of risk for pregnancy and possible hazardous risk for lactation. Lithium salts are classified as mood stabilizers. Lithium's mechanism of action is not known. In the nineteenth century, lithium was used in people who had gout, epilepsy, and cancer. Its use in the treatment of mental disorders began with Carl Lange in Denmark and William Alexander Hammond in New York City, who used lithium to treat mania from the 1870s onwards, based on now-discredited theories involving its effect on uric acid. Use of lithium for mental disorders was re-established (on a different theoretical basis) in 1948 by John Cade in Australia. Lithium carbonate is on the World Health Organization's List of Essential Medicines, and is available as a generic medication. In 2022, it was the 212th most commonly prescribed medication in the United States, with more than 1million prescriptions. It appears to be underused in older people, and in certain countries, for reasons including patients’ negative beliefs about lithium. Medical uses In 1970, lithium was approved by the United States Food and Drug Administration (FDA) for the treatment of bipolar disorder, which remains its primary use in the US. It is sometimes used when other treatments are not effective in a number of other conditions, including major depression, schizophrenia, disorders of impulse control, and some psychiatric disorders in children. Because the FDA has not approved lithium for the treatment of other disorders, such use is off-label. Bipolar disorder Lithium is primarily used as a maintenance drug in the treatment of bipolar disorder to stabilize mood and prevent manic episodes, but it may also be helpful in the acute treatment of manic episodes. Although recommended by treatment guidelines for the treatment of depression in bipolar disorder, the evidence that lithium is superior to placebo for acute depression is low-quality; atypical antipsychotics are considered more effective for treating acute depressive episodes. Lithium carbonate treatment was previously considered to be unsuitable for children; however, more recent studies show its effectiveness for treatment of early-onset bipolar disorder in children as young as eight. The required dosage is slightly less than the toxic level (representing a low therapeutic index), requiring close monitoring of blood levels of lithium carbonate during treatment. Within the therapeutic range there is a dose-response relationship. 
A limited amount of evidence suggests lithium carbonate may contribute to the treatment of substance use disorders for some people with bipolar disorder. Although it is believed that lithium prevents suicide in people with bipolar disorder, a 2022 systematic review found that "Evidence from randomised trials is inconclusive and does not support the idea that lithium prevents suicide or suicidal behaviour." Schizophrenic disorders Lithium is recommended for the treatment of schizophrenic disorders only after other antipsychotics have failed; it has limited effectiveness when used alone. The results of different clinical studies of the efficacy of combining lithium with antipsychotic therapy for treating schizophrenic disorders have varied. Major depressive disorder Lithium is widely prescribed as an adjunct treatment for depression. Augmentation If therapy with antidepressants (such as selective serotonin reuptake inhibitors [SSRIs]) does not fully resolve the symptoms of major depressive disorder (MDD), a situation known as refractory depression or treatment-resistant depression (TRD), then a second agent is sometimes added to augment the therapy. Lithium is one of the few augmentation agents for antidepressants to demonstrate efficacy in treating MDD in multiple randomized controlled trials, and it has been prescribed (off-label) for this purpose since the 1980s. A 2019 systematic review found some evidence of the clinical utility of adjunctive lithium, but the majority of supportive evidence is dated. While SSRIs have been mentioned above as a drug class that lithium is used to augment, there are other classes to which lithium is added to increase effectiveness. Such classes are antipsychotics (used for bipolar disorder) as well as antiepileptic drugs (used for both psychiatric and epileptic cases). Lamotrigine and topiramate are two specific antiepileptic drugs that lithium is used to augment. Monotherapy A few older studies indicate efficacy of lithium for acute depression, with lithium having the same efficacy as tricyclic antidepressants. A recent study concluded that lithium works best on chronic and recurrent depression when compared to a modern antidepressant (citalopram), but not for patients with no history of depression. A 2019 systematic review found no evidence to support the use of lithium as monotherapy. Prevention of suicide Lithium is widely believed to prevent suicide and is often used in clinical practice towards that end. However, meta-analyses, faced with evidence base limitations, have yielded differing results, and it therefore remains unclear whether or not lithium is efficacious in the prevention of suicide. Nonetheless, some evidence suggests it is effective in significantly reducing the risk of self-harm and unintentional injury in bipolar disorder in comparison to no treatment and to antipsychotics or valproate. According to meta-analyses, the increased presence of lithium in drinking water is correlated with lower overall suicide rates, especially among men. It is noted that further testing is needed to confirm this benefit. Alzheimer's disease Alzheimer's disease affects forty-five million people and is the fifth leading cause of death in the 65-plus population. There is currently no cure for the disease. However, lithium is being evaluated for its effectiveness as a potential therapeutic measure.
One of the leading causes of Alzheimer's is the hyperphosphorylation of the tau protein by the enzyme GSK-3, which leads to the overproduction of amyloid peptides that cause cell death. To combat this toxic amyloid aggregation, lithium upregulates the production of neuroprotectors and neurotrophic factors, as well as inhibiting the GSK-3 enzyme. Lithium also stimulates neurogenesis within the hippocampus, making it thicker. Yet another cause of Alzheimer's disease is the dysregulation of calcium ions within the brain. Too much or too little calcium within the brain can lead to cell death. Lithium can restore intracellular calcium homeostasis by inhibiting the wrongful influx of calcium upstream. It also promotes the redirection of the influx of calcium ions into the lumen of the endoplasmic reticulum of the cells to reduce the oxidative stress within the mitochondria. In 2009, a study was performed by Hampel and colleagues that asked patients with Alzheimer's to take a low dose of lithium daily for three months; it resulted in a significant slowing of cognitive decline, benefitting patients being in the prodromal stage the most. Upon a secondary analysis, the brains of the Alzheimer's patients were studied and shown to have an increase in BDNF markers, meaning they had actually shown cognitive improvement. Another study, a population study this time by Kessing et al., showed a negative correlation between Alzheimer's disease deaths and the presence of lithium in drinking water. Areas with increased lithium in their drinking water showed less dementia overall in their population. Monitoring Those who use lithium should receive regular serum level tests and should monitor thyroid and kidney function for abnormalities, as it interferes with the regulation of sodium and water levels in the body, and can cause dehydration. Dehydration, which is compounded by heat, can result in increasing lithium levels. The dehydration is due to lithium inhibition of the action of antidiuretic hormone, which normally enables the kidney to reabsorb water from urine. This causes an inability to concentrate urine, leading to consequent loss of body water and thirst. Lithium concentrations in whole blood, plasma, serum, or urine may be measured using instrumental techniques as a guide to therapy, to confirm the diagnosis in potential poisoning victims, or to assist in the forensic investigation in a case of fatal overdosage. Serum lithium concentrations are usually in the range of 0.5–1.3 mmol/L (0.5–1.3 mEq/L) in well-controlled people, but may increase to 1.8–2.5 mmol/L in those who accumulate the drug over time and to 3–10 mmol/L in acute overdose. Lithium salts have a narrow therapeutic/toxic ratio, so should not be prescribed unless facilities for monitoring plasma concentrations are available. Doses are adjusted to achieve plasma concentrations of 0.4 to 1.2 mmol/L on samples taken 12 hours after the preceding dose. Given the rates of thyroid dysfunction, thyroid parameters should be checked before lithium is instituted and monitored after 3–6 months and then every 6–12 months. Given the risks of kidney malfunction, serum creatinine, and eGFR should be checked before lithium is instituted and monitored after 3–6 months at regular intervals. 
Patients who have a rise in creatinine on three or more occasions, even if their eGFR is > 60 ml/min/ 1.73m2 require further evaluation, including a urinalysis for haematuria, and proteinuria, a review of their medical history with attention paid to cardiovascular, urological, and medication history, and blood pressure control and management. Overt proteinuria should be further quantified with a urine protein-to-creatinine ratio. Discontinuation For patients who have achieved long-term remission, it is recommended to discontinue lithium gradually and in a controlled fashion. Discontinuation symptoms may occur in patients stopping the medication including irritability, restlessness, and somatic symptoms like vertigo, dizziness, or lightheadedness. Symptoms occur within the first week and are generally mild and self-limiting within weeks. Cluster headaches, migraine, and hypnic headache Studies testing prophylactic use of lithium in cluster headaches (when compared to verapamil), migraine attacks, and hypnic headache indicate good efficacy. Adverse effects The adverse effects of lithium include: Very Common (> 10% incidence) adverse effects Confusion Constipation (usually transient, but can persist in some) Decreased memory Diarrhea (usually transient, but can persist in some) Dry mouth EKG changes – usually benign changes in T waves Hand tremor (usually transient, but can persist in some) with an incidence of 27%. If severe, psychiatrist may lower lithium dosage, change lithium salt type or modify lithium preparation from long to short-acting (despite lacking evidence for these procedures) or use pharmacological help Headache Hyperreflexia — overresponsive reflexes Leukocytosis — elevated white blood cell count Muscle weakness (usually transient, but can persist in some) Myoclonus — muscle twitching Nausea (usually transient) Polydipsia — increased thirst Polyuria — increased urination Renal (kidney) toxicity which may lead to chronic kidney failure, although some cases may be misattributed Vomiting (usually transient, but can persist in some) Vertigo Common (1–10%) adverse effects Acne Extrapyramidal side effects — movement-related problems such as muscle rigidity, parkinsonism, dystonia, etc. Euthyroid goitre — i.e. the formation of a goitre despite normal thyroid functioning Hypothyroidism — a deficiency of thyroid hormone, though this condition is already common among patients with bipolar disorder. Hair loss/hair thinning Weight gain — 5% incidence, tends to start fast and then plateau. Usually ends at 1–2 kg. Unknown incidence Sexual dysfunction Hypoglycemia Glycosuria In addition to tremors, lithium treatment appears to be a risk factor for development of parkinsonism-like symptoms, although the causal mechanism remains unknown. In the average bipolar patient, chronic lithium use is not associated with cognitive decline. Most side effects of lithium are dose-dependent. The lowest effective dose is used to limit the risk of side effects. Hypothyroidism The rate of hypothyroidism is around six times higher in people who take lithium. Low thyroid hormone levels in turn increase the likelihood of developing depression. People taking lithium thus should routinely be assessed for hypothyroidism and treated with synthetic thyroxine if necessary. Because lithium competes with the antidiuretic hormone in the kidney, it increases water output into the urine, a condition called nephrogenic diabetes insipidus. 
Clearance of lithium by the kidneys is usually successful with certain diuretic medications, including amiloride and triamterene. It increases the appetite and thirst ("polydypsia") and reduces the activity of thyroid hormone (hypothyroidism). The latter can be corrected by treatment with thyroxine and does not require the lithium dose to be adjusted. Lithium is also believed to cause renal dysfunction, although this does not appear to be common. Lambert et al. (2016), comparing the rate of hypothyroidism in patients with bipolar disorder treated with 9 different medications, found that lithium users do not have a particularly high rate of hypothyroidism (8.8%) among BD patients – only 1.39 times the rate in oxcarbazepine users (6.3%). Lithium and quetiapine are not statistically different in terms of hypothyroidism rates. However, lithium users are tested much more frequently for hypothyroidism than those using other drugs. The authors write that there may be an element of surveillance bias in understanding lithium's effects on the thyroid glands, as lithium users are tested 2.3–3.1 times as often. Furthermore, the authors argue that because hypothyroidism is common among BD patients regardless of lithium treatment, regular thyroid testing should be applied to all BD patients, not just those on lithium. Pregnancy Lithium is a teratogen, which can cause birth defects in a small number of newborns. Case reports and several retrospective studies have demonstrated possible increases in the rate of a congenital heart defects including Ebstein's anomaly if taken during pregnancy. Teratogenicity is affected by trimester and dose of Lithium. Most significantly affecting first-trimester cardiac development with greater effects at higher doses. As the risks of stopping Lithium can be significant, patients are sometimes recommended to stay on this medicine while pregnant. Careful weighing of the risks and benefits should be made in consultation with a psychiatric physician. For patients who are exposed to lithium, or plan to stay on the medication throughout their pregnancy, fetal echocardiography is routinely performed to monitor for cardiac anomalies. While lithium is typically the most effective treatment, possible alternatives to Lithium include Lamotrigine and Second generation Antipsychotics for the treatment of acute bipolar depression or for the management of bipolar patients with normal mood during pregnancy. Breastfeeding While only small amounts of Lithium are transmitted to the infant in breastmilk, there is limited data on the safety of Breastfeeding while on Lithium. Medical evaluation and monitoring of infants consuming breastmilk during maternal prescription may be indicated. Kidney damage Lithium has been associated with several forms of kidney injury. It is estimated that impaired urinary concentrating ability is present in at least half of individuals on chronic lithium therapy, a condition called lithium-induced nephrogenic diabetes insipidus. Continued use of lithium can lead to more serious kidney damage in an aggravated form of diabetes insipidus. In rare cases, some forms of lithium-caused kidney damage may be progressive and lead to end-stage kidney failure with a reported incidence of 0.2% to 0.7%. Some reports of kidney damage may be wrongly attributed to lithium, increasing the apparent rate of this adverse effect. Nielsen et al. 
(2018), citing 6 large observational studies since 2010, argue that findings of decreased kidney function are partially inflated by surveillance bias. Furthermore, modern data does not show that lithium increases the risk of end-stage kidney disease. Davis et al. (2018), using literature from a wider timespan (1977–2018), also found that lithium's association with chronic kidney disease is unproven with various contradicting results. They also find contradicting results regarding end-stage kidney disease. A 2015 nationwide study suggests that chronic kidney disease can be avoided by maintaining the serum lithium concentration at a level of 0.6–0.8 mmol/L and by monitoring serum creatinine every 3–6 months. Hyperparathyroidism Lithium-associated hyperparathyroidism is the leading cause of hypercalcemia in lithium-treated patients. Lithium may lead to exacerbation of pre-existing primary hyperparathyroidism or cause an increased set-point of calcium for parathyroid hormone suppression, leading to parathyroid hyperplasia. Interactions Lithium plasma concentrations are known to be increased with concurrent use of diuretics—especially loop diuretics (such as furosemide) and thiazides—and non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen. Lithium concentrations can also be increased with concurrent use of ACE inhibitors such as captopril, enalapril, and lisinopril. Lithium is primarily cleared from the body through glomerular filtration, but some is then reabsorbed together with sodium through the proximal tubule. Its levels are therefore sensitive to water and electrolyte balance. Diuretics act by lowering water and sodium levels; this causes more reabsorption of lithium in the proximal tubules so that the removal of lithium from the body is less, leading to increased blood levels of lithium. ACE inhibitors have also been shown in a retrospective case-control study to increase lithium concentrations. This is likely due to constriction of the afferent arteriole of the glomerulus, resulting in decreased glomerular filtration rate and clearance. Another possible mechanism is that ACE inhibitors can lead to a decrease in sodium and water. This will increase lithium reabsorption and its concentrations in the body. Some drugs can increase the clearance of lithium from the body, which can result in decreased lithium levels in the blood. These drugs include theophylline, caffeine, and acetazolamide. Additionally, increasing dietary sodium intake may also reduce lithium levels by prompting the kidneys to excrete more lithium. Lithium is known to be a potential precipitant of serotonin syndrome in people concurrently on serotonergic medications such as antidepressants, buspirone and certain opioids such as pethidine (meperidine), tramadol, oxycodone, fentanyl and others. Lithium co-treatment is also a risk factor for neuroleptic malignant syndrome in people on antipsychotics and other antidopaminergic medications. High doses of haloperidol, fluphenazine, or flupenthixol may be hazardous when used with lithium; irreversible toxic encephalopathy has been reported. Indeed, these and other antipsychotics have been associated with an increased risk of lithium neurotoxicity, even with low therapeutic lithium doses. Classical psychedelics such as psilocybin and LSD may cause seizures if taken while using lithium, although further research is needed. Overdose Lithium toxicity, which is also called lithium overdose and lithium poisoning, is the condition of having too much lithium in the blood. 
This condition can also occur in persons taking lithium whose lithium levels are affected by drug interactions in the body. In acute toxicity, people have primarily gastrointestinal symptoms such as vomiting and diarrhea, which may result in volume depletion. During acute toxicity, lithium distributes into the central nervous system later, resulting in mild neurological symptoms, such as dizziness. In chronic toxicity, people have primarily neurological symptoms which include nystagmus, tremor, hyperreflexia, ataxia, and change in mental status. During chronic toxicity, the gastrointestinal symptoms seen in acute toxicity are less prominent. The symptoms are often vague and nonspecific. If the lithium toxicity is mild or moderate, the lithium dosage is reduced or stopped entirely. If the toxicity is severe, lithium may need to be removed from the body. Mechanism of action The specific biochemical mechanism of lithium action in stabilizing mood is unknown. Upon ingestion, lithium becomes widely distributed in the central nervous system and interacts with a number of neurotransmitters and receptors, decreasing norepinephrine release and increasing serotonin synthesis. Unlike many other psychoactive drugs, lithium typically produces no obvious psychotropic effects (such as euphoria) in normal individuals at therapeutic concentrations. Lithium may also increase the release of serotonin by neurons in the brain. In vitro studies performed on serotonergic neurons from rat raphe nuclei have shown that when these neurons are treated with lithium, serotonin release during depolarization is enhanced compared to the same depolarization without lithium treatment. Lithium both directly and indirectly inhibits GSK3β (glycogen synthase kinase 3β), which results in the activation of mTOR. This leads to an increase in neuroprotective mechanisms by facilitating the Akt signaling pathway. GSK-3β is a downstream target of monoamine systems. As such, it is directly implicated in cognition and mood regulation. During mania, GSK-3β is activated via dopamine overactivity. GSK-3β inhibits the transcription factors β-catenin and cyclic AMP (cAMP) response element binding protein (CREB) by phosphorylation. This results in a decrease in the transcription of important genes encoding for neurotrophins. In addition, several authors proposed that pAp-phosphatase could be one of the therapeutic targets of lithium. This hypothesis was supported by the low Ki of lithium for human pAp-phosphatase, which is compatible with the range of therapeutic concentrations of lithium in the plasma of people (0.8–1 mM). The Ki of human pAp-phosphatase is ten times lower than that of GSK3β (glycogen synthase kinase 3β). Inhibition of pAp-phosphatase by lithium leads to increased levels of pAp (3′-5′ phosphoadenosine phosphate), which was shown to inhibit PARP-1. Another mechanism proposed in 2007 is that lithium may interact with the nitric oxide (NO) signaling pathway in the central nervous system, which plays a crucial role in neural plasticity. The NO system could be involved in the antidepressant effect of lithium in the Porsolt forced swimming test in mice. It was also reported that NMDA receptor blockade augments antidepressant-like effects of lithium in the mouse forced swimming test, indicating the possible involvement of NMDA receptor/NO signaling in the action of lithium in this animal model of learned helplessness. Lithium possesses neuroprotective properties by preventing apoptosis and increasing cell longevity.
Although the search for a novel lithium-specific receptor is ongoing, the high concentration of lithium compounds required to elicit a significant pharmacological effect leads mainstream researchers to believe that the existence of such a receptor is unlikely. Oxidative metabolism Evidence suggests that mitochondrial dysfunction is present in patients with bipolar disorder. Oxidative stress and reduced levels of anti-oxidants (such as glutathione) lead to cell death. Lithium may protect against oxidative stress by up-regulating complexes I and II of the mitochondrial electron transport chain. Dopamine and G-protein coupling During mania, there is an increase in neurotransmission of dopamine that causes a secondary homeostatic down-regulation, resulting in decreased neurotransmission of dopamine, which can cause depression. Additionally, the post-synaptic actions of dopamine are mediated through G-protein coupled receptors. Once dopamine is coupled to the G-protein receptors, it stimulates other secondary messenger systems that modulate neurotransmission. Studies found that in autopsies (which do not necessarily reflect living people), people with bipolar disorder had increased G-protein coupling compared to people without bipolar disorder. Lithium treatment alters the function of certain subunits of the dopamine-associated G-protein, which may be part of its mechanism of action. Glutamate and NMDA receptors Glutamate levels are observed to be elevated during mania. Lithium is thought to provide long-term mood stabilization and have anti-manic properties by modulating glutamate levels. It is proposed that lithium competes with magnesium for binding to NMDA glutamate receptor, increasing the availability of glutamate in post-synaptic neurons, leading to a homeostatic increase in glutamate re-uptake which reduces glutamatergic transmission. The NMDA receptor is also affected by other neurotransmitters such as serotonin and dopamine. Effects observed appear exclusive to lithium and have not been observed by other monovalent ions such as rubidium and cesium. GABA receptors GABA is an inhibitory neurotransmitter that plays an important role in regulating dopamine and glutamate neurotransmission. It was found that patients with bipolar disorder had lower GABA levels, which results in excitotoxicity and can cause apoptosis (cell loss). Lithium has been shown to increase the level of GABA in plasma and cerebral spinal fluid. Lithium counteracts these degrading processes by decreasing pro-apoptotic proteins and stimulating release of neuroprotective proteins. Lithium's regulation of both excitatory dopaminergic and glutamatergic systems through GABA may play a role in its mood-stabilizing effects. Cyclic AMP secondary messengers Lithium's therapeutic effects are thought to be partially attributable to its interactions with several signal transduction mechanisms. The cyclic AMP secondary messenger system is shown to be modulated by lithium. Lithium was found to increase the basal levels of cyclic AMP but impair receptor-coupled stimulation of cyclic AMP production. It is hypothesized that the dual effects of lithium are due to the inhibition of G-proteins that mediate cyclic AMP production. Over a long period of lithium treatment, cyclic AMP and adenylate cyclase levels are further changed by gene transcription factors. 
Inositol depletion hypothesis Lithium treatment has been found to inhibit the enzyme inositol monophosphatase, involved in degrading inositol monophosphate to inositol required in PIP2 synthesis. This leads to lower levels of inositol triphosphate, created by decomposition of PIP2. This effect has been suggested to be further enhanced with an inositol triphosphate reuptake inhibitor. Inositol disruptions have been linked to memory impairment and depression. It is known with good certainty that signals from the receptors coupled to the phosphoinositide signal transduction are affected by lithium. myo-inositol is also regulated by the high affinity sodium mI transport system (SMIT). Lithium is hypothesized to inhibit mI entering the cells and mitigate the function of SMIT. Reductions of cellular levels of myo-inositol results in the inhibition of the phosphoinositide cycle. Neurotrophic factors Lithium's actions on Gsk3 result in activation of CREB, leading to higher expression of BDNF. (Valproate, another mood stabilizer, also increases the expression of BDNF.) As expected of increased BDNF expression, chronic lithium treatment leads to increased grey matter volume in brain areas implicated in emotional processing and cognitive control. Bipolar patients treated with lithium also have higher white matter integrity compared to those taking other drugs. Lithium also increases the expression of mesencephalic astrocyte-derived neurotrophic factor (MANF), another neurotrophic factor, via the AP-1 transcription factor. MANF is able to regulate proteostasis by interacting with GRP78, a protein involved in the unfolded protein response. History Lithium was first used in the 19th century as a treatment for gout after scientists discovered that, at least in the laboratory, lithium could dissolve uric acid crystals isolated from the kidneys. The levels of lithium needed to dissolve urate in the body, however, were toxic. Because of prevalent theories linking excess uric acid to a range of disorders, including depressive and manic disorders, Carl Lange in Denmark and William Alexander Hammond in New York City used lithium to treat mania from the 1870s onwards. By the turn of the 20th century, as theory regarding mood disorders evolved and so-called "brain gout" disappeared as a medical entity, the use of lithium in psychiatry was largely abandoned; however, several lithium preparations were still produced for the control of renal calculi and uric acid diathesis. As accumulating knowledge indicated a role for excess sodium intake in hypertension and heart disease, lithium salts were prescribed to patients for use as a replacement for dietary table salt (sodium chloride). This practice and the sale of lithium itself were both banned in the United States in February 1949, following the publication of reports detailing side effects and deaths. Also in 1949, the Australian psychiatrist John Cade and Australian biochemist Shirley Andrews rediscovered the usefulness of lithium salts in treating mania while working at the Royal Park Psychiatric Hospital in Victoria. They were injecting rodents with urine extracts taken from manic patients in an attempt to isolate a metabolic compound which might be causing mental symptoms. Since uric acid in gout was known to be psychoactive, (adenosine receptors on neurons are stimulated by it; caffeine blocks them), they needed soluble urate for a control. 
They used lithium urate, already known to be the most soluble urate compound, and observed that it caused the rodents to become tranquil. Cade and Andrews traced the effect to the lithium-ion itself, and after Cade ingested lithium himself to ensure its safety in humans, he proposed lithium salts as tranquilizers. He soon succeeded in controlling mania in chronically hospitalized patients with them. This was one of the first successful applications of a drug to treat mental illness, and it opened the door for the development of medicines for other mental problems in the next decades. The rest of the world was slow to adopt this treatment, largely because of deaths that resulted from even relatively minor overdosing, including those reported from the use of lithium chloride as a substitute for table salt. Largely through the research and other efforts of Denmark's Mogens Schou and Paul Baastrup in Europe, and Samuel Gershon and Baron Shopsin in the U.S., this resistance was slowly overcome. Following the recommendation of the APA Lithium Task Force (William Bunney, Irvin Cohen (Chair), Jonathan Cole, Ronald R. Fieve, Samuel Gershon, Robert Prien, and Joseph Tupin), the application of lithium in manic illness was approved by the United States Food and Drug Administration in 1970, becoming the 50th nation to do so. Lithium has now become a part of Western popular culture. Characters in Pi, Premonition, Stardust Memories, American Psycho, Garden State, and An Unmarried Woman all take lithium. It's the chief constituent of the calming drug in Ira Levin's dystopian This Perfect Day. Sirius XM Satellite Radio in North America has a 1990s alternative rock station called Lithium, and several songs refer to the use of lithium as a mood stabilizer. These include: "Equilibrium met Lithium" by South African artist Koos Kombuis, "Lithium" by Evanescence, "Lithium" by Nirvana, "Lithium and a Lover" by Sirenia, "Lithium Sunset", from the album Mercury Falling by Sting, and "Lithium" by Thin White Rope. 7 Up As with cocaine in Coca-Cola, lithium was widely marketed as one of several patent medicine products popular in the late 19th and early 20th centuries and was the medicinal ingredient of a refreshment beverage. Charles Leiper Grigg, who launched his St. Louis-based company The Howdy Corporation, invented a formula for a lemon-lime soft drink in 1920. The product, originally named "Bib-Label Lithiated Lemon-Lime Soda", was launched two weeks before the Wall Street Crash of 1929. It contained the mood stabilizer lithium citrate, and was one of many patent medicine products popular in the late-19th and early-20th centuries. Its name was soon changed to 7 Up. All American beverage makers were forced to remove lithium from beverages in 1948. Despite the ban, in 1950, the Painesville Telegraph still carried an advertisement for a lithiated lemon beverage. Salts and product names Lithium carbonate () is the most commonly used form of lithium salts, a carbonic acid involving the lithium element and a carbonate ion. Other lithium salts are also used as medication, such as lithium citrate (), lithium sulfate, lithium chloride, and lithium orotate. Nanoparticles and microemulsions have also been invented as drug delivery mechanisms. As of 2020, there is a lack of evidence that alternate formulations or salts of lithium would reduce the need for monitoring serum lithium levels or lower systemic toxicity. 
As of 2017 lithium was marketed under many brand names worldwide, including Cade, Calith, Camcolit, Carbolim, Carbolit, Carbolith, Carbolithium, Carbolitium, Carbonato de Litio, Carboron, Ceglution, Contemnol, Efadermin (Lithium and Zinc Sulfate), Efalith (Lithium and Zinc Sulfate), Elcab, Eskalit, Eskalith, Frimania, Hypnorex, Kalitium, Karlit, Lalithium, Li-Liquid, Licarb, Licarbium, Lidin, Ligilin, Lilipin, Lilitin, Limas, Limed, Liskonum, Litarex, Lithane, Litheum, Lithicarb, Lithii carbonas, Lithii citras, Lithioderm, Lithiofor, Lithionit, Lithium, Lithium aceticum, Lithium asparagicum, Lithium Carbonate, Lithium Carbonicum, Lithium Citrate, Lithium DL-asparaginat-1-Wasser, Lithium gluconicum, Lithium-D-gluconat, Lithiumcarbonaat, Lithiumcarbonat, Lithiumcitrat, Lithiun, Lithobid, Lithocent, Lithotabs, Lithuril, Litiam, Liticarb, Litijum, Litio, Litiomal, Lito, Litocarb, Litocip, Maniprex, Milithin, Neurolepsin, Plenur, Priadel, Prianil, Prolix, Psicolit, Quilonium, Quilonorm, Quilonum, Téralithe, and Theralite. Research Tentative evidence in Alzheimer's disease showed that lithium may slow progression. It has been studied for its potential use in the treatment of amyotrophic lateral sclerosis (ALS), but a study showed lithium had no effect on ALS outcomes. Notes References Further reading External links Biology and pharmacology of chemical elements Drugs with unknown mechanisms of action Lithium Lithium in biology Metal-containing drugs Mood stabilizers Nephrotoxins World Health Organization essential medicines Wikipedia medicine articles ready to translate
Lithium (medication)
[ "Chemistry", "Biology" ]
7,661
[ "Pharmacology", "Lithium in biology", "Properties of chemical elements", "Biology and pharmacology of chemical elements", "Biochemistry" ]
39,924,732
https://en.wikipedia.org/wiki/Sphingosine-1-phosphate%20receptor
The sphingosine-1-phosphate receptors are a class of G protein-coupled receptors that are targets of the lipid signalling molecule Sphingosine-1-phosphate (S1P). They are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5. Discovery In 1990, S1PR1 was the first member of the S1P receptor family to be cloned from endothelial cells. Later, S1PR2 and S1PR3 were cloned from rat brain and a human genomic library respectively. Finally, S1P4 and S1PR5 were cloned from in vitro differentiated human dendritic cells and rat cDNA library. Function The sphingosine-1-phosphate receptors regulate fundamental biological processes such as cell proliferation, angiogenesis, migration, cytoskeleton organization, endothelial cell chemotaxis, immune cell trafficking and mitogenesis. Sphingosine-1-phosphate receptors are also involved in immune-modulation and directly involved in suppression of innate immune responses from T cells. Subtypes Sphingosine-1-phosphate (S1P) receptors are divided into five subtypes: S1PR1, S1PR2, S1PR3, S1PR4 and S1PR5. They are expressed in a wide variety of tissues, with each subtype exhibiting a different cell specificity, although they are found at their highest density on leukocytes. S1PR1, 2 and 3 receptors are expressed ubiquitously. The expression of S1PR4 and S1PR5 are less widespread. S1PR4 is confined to lymphoid and hematopoietic tissues whereas S1PR5 primarily located in the white matter of the central nervous system (CNS) and spleen. G protein interactions and selective ligands The sphingosine-1-phosphate (S1P) is the endogenous agonist for the five subtypes. References G protein-coupled receptors
Sphingosine-1-phosphate receptor
[ "Chemistry" ]
437
[ "G protein-coupled receptors", "Signal transduction" ]
39,925,201
https://en.wikipedia.org/wiki/Hamburg%20Aviation
Hamburg Aviation, formerly the "Luftfahrtcluster Metropolregion Hamburg e.V." (Aviation Cluster Hamburg Metropolitan Region), is an association of aviation organizations in Hamburg, Germany. Its goal is to promote the aviation industry in the Hamburg Metropolitan Region. Hamburg Metropolitan Region Companies based in the Hamburg Metropolitan Region include the aircraft manufacturer Airbus and Lufthansa Technik. Hamburg Airport, which first opened in 1912, is one of the world's oldest operational airports to still be based at its original location. There are over 300 specialist suppliers, including branches of Diehl Aerospace. As of 2012, the region's aviation industry had over 40,000 employees, making it one of the largest sites for civil aviation in the world. Educational institutions Hamburg University of Applied Sciences (HAW Hamburg) Helmut Schmidt University / University of the German Federal Armed Forces Hamburg Hamburg University of Technology (TUHH) University of Hamburg Also based in Hamburg are the German Aerospace Center's Institute of Aerospace Medicine and Institute of Air Transportation Systems. Crystal Cabin Award Hamburg is the host city of the annual Aircraft Interiors Expo, a trade show for the aircraft cabin industry. The Crystal Cabin Award was launched in 2007 to honour innovation in the field of cabin design. The prize is funded by sponsors from the aviation industry. Hamburg Aerospace Cluster In 2001, companies, universities and government bodies collaborated to form Hamburg Aviation. This developed into the "Luftfahrtcluster Metropolregion Hamburg e.V." association, with 15 founding members, officially established in 2011. Its mission statement is to promote the aviation industry in the Hamburg business cluster. Recognitions and projects Leading-Edge Cluster competition Center of Applied Aeronautical Research European Aerospace Cluster Partnership Faszination Technik Klub Founding members Commercial enterprises Airbus Lufthansa Technik AG Hamburg Airport Associations Hanse-Aerospace e.V. HECAS – Hanseatic Engineering & Consulting Association German Aerospace Industries Association (BDLI) Research facilities German Aerospace Center (DLR) Hamburg Centre of Aviation Training (HCAT) Center for Applied Aeronautical Research (ZAL) Universities Hamburg University of Applied Sciences (HAW Hamburg) Hamburg University of Technology (TUHH) Helmut Schmidt University (HSU) University of Hamburg Public sector HWF Hamburgische Gesellschaft für Wirtschaftsförderung mbH (Hamburg Business Development Corporation) Department of the Economy, Transport and Innovation (BWVI) See also Aviation Notes External links http://www.hamburg-aviation.de http://www.faszination-fuer-technik.de http://www.eacp-aero.eu https://web.archive.org/web/20130829020312/http://care-aero.eu/ http://www.crystal-cabin-award.com Consortia in Germany Engineering university associations and consortia Regional science Business organisations based in Germany Aeronautics organizations Economy of Hamburg Organisations based in Hamburg
Hamburg Aviation
[ "Engineering" ]
595
[ "Aeronautics organizations" ]
62,745,033
https://en.wikipedia.org/wiki/Center%20for%20the%20Fundamental%20Laws%20of%20Nature
The Center for the Fundamental Laws of Nature is a research center at Harvard University that focuses on theoretical particle physics and cosmology. About The Center for the Fundamental Laws of Nature is the high-energy theory group in Harvard's Physics Department. At last count, it had 12 faculty and affiliate faculty, 18 postdoctoral, and 19 graduate student members, in addition to multiple affiliates, visiting scholars, and staff. A number of prominent particle theorists have earned degrees or worked at Harvard, including Nobel Laureates David Politzer (PhD 1974), Sheldon Glashow (PhD 1959), David Gross, Steven Weinberg, and Julian Schwinger. Research Current areas of research listed include: Quantum gravity String theory Black holes Applications of AdS/CFT Physics beyond the Standard Model Dark matter Effective field theories References External links Official Website Theoretical physics institutes Harvard University
Center for the Fundamental Laws of Nature
[ "Physics" ]
170
[ "Theoretical physics", "Theoretical physics institutes", "Particle physics", "Theoretical physics stubs", "Particle physics stubs" ]
65,593,498
https://en.wikipedia.org/wiki/Vibroacoustic%20therapy
Vibroacoustic therapy (VAT) is a type of sound therapy that involves passing low frequency sine wave vibrations into the body via a device with embedded speakers. This therapy was developed in Norway by Olav Skille in the 1980s. The Food and Drug Administration determined that vibroacoustic devices, such as the Next Wave® PhysioAcoustic therapeutic vibrator, are "substantially equivalent" to other therapeutic vibrators, which are "intended for various uses, such as relaxing muscles and relieving minor aches and pains"; thus, vibroacoustic devices (therapeutic vibrators) are "exempt from clinical investigations, Good Guidance Practices (GGPs), and premarket notification and approval procedures." Frequencies Vibroacoustic therapy uses low frequency sinusoidal vibrations between 0 and 500 Hz, depending on the product's frequency response and capabilities. This is similar to the range of subwoofers or vibrating theater seating. Human mechanoreceptors, such as Pacinian corpuscles, can detect vibrations up to 1,000 Hz. Frequencies between 30 Hz and 120 Hz are generally considered to have a calming and relaxing effect, which is why they are often used in therapeutic contexts. 40 Hz in particular has been widely studied in vibroacoustic therapy and other fields due to its potential benefits, such as promoting relaxation and improving focus. In addition to sinusoidal waves, vibroacoustic music is specifically composed for vibroacoustic therapy. These compositions incorporate low-frequency musical instruments and advanced audio engineering techniques to create an immersive and enjoyable therapeutic experience. The combination of carefully engineered music and vibroacoustic technology enhances the physical and emotional benefits of the therapy. Devices Vibroacoustic devices come in a range of forms including beds, chairs, pillows, mats, wristbands, wearable backpacks, and simple DIY platforms. They generally function by playing sound files through transducers, bass shakers, or exciters, which then transfer the vibrations into the body. Some devices attempt to target very specific parts of the body, such as the wrist or the spine. Proposed mechanisms of action Pallesthesia, the ability to perceive vibration, plays a crucial role in vibroacoustic therapy. This form of therapy relies on the body's sensitivity to mechanical vibrations. By stimulating vibratory perception through therapeutic sound waves, vibroacoustic therapy aims to promote physical and emotional well-being. Another proposed mechanism of action for vibroacoustic therapy is brainwave entrainment. Entrainment suggests that brainwaves will synchronize with rhythms from sensory input. This further suggests that some brainwave frequencies are preferable to others in given situations. Current practice Vibroacoustic therapy is available at a number of spas, resorts, and clinics around the world, as well as from a number of professional and holistic practitioners. Related therapies Vibroacoustic Therapy is closely related to Physio Acoustic Therapy (PAT), which was developed by Petri Lehikoinen in Finland. Both are examples of low frequency sound stimulation (LFSS). More broadly, they are subsets of Rhythmic Sensory Stimulation (RSS), which is being studied across a range of sensory modalities. Criticism The science behind vibroacoustic therapy has been questioned by multiple sources. Some sources refer to it as pseudoscience, and the TEDx talk by prominent vibroacoustic researcher Lee Bartel has been tagged as falling outside of the TED talk guidelines. 
Practitioners of VAT do agree that more research is needed as VAT has been a largely clinical practice since its inception. Academic research published in peer reviewed journals and meeting higher scientific standards is being pursued at the University of Toronto and other institutions to address these objections. References Music therapy Wave mechanics
Vibroacoustic therapy
[ "Physics" ]
807
[ "Waves", "Wave mechanics", "Physical phenomena", "Classical mechanics" ]
65,595,160
https://en.wikipedia.org/wiki/General%20relativity%20priority%20dispute
Albert Einstein's discovery of the gravitational field equations of general relativity and David Hilbert's almost simultaneous derivation of the theory using an elegant variational principle, during a period when the two corresponded frequently, has led to numerous historical analyses of their interaction. The analyses came to be called a priority dispute. Einstein and Hilbert The events of interest to historians of the dispute occurred in late 1915. At that time Albert Einstein, now perhaps the most famous modern scientist, had been working on gravitational theory since 1912. He had "developed and published much of the framework of general relativity, including the ideas that gravitational effects require a tensor theory, that these effects determine a non-Euclidean geometry, that this metric role of gravitation results in a redshift and in the bending of light passing near a massive body." While David Hilbert never became a celebrity, he was seen as a mathematician unequaled in his generation, with an especially wide impact on mathematics. When he met Einstein in the summer of 1915, Hilbert had started working on an axiomatic system for a unified field theory, combining the ideas of Gustav Mie's on electromagnetism with Einstein's general relativity. As the historians referenced below recount, Einstein and Hilbert corresponded extensively throughout the fall of 1915, culminating in lectures by both men in late November that were later published. The historians debate consequences of this friendly correspondence on the resulting publications. Undisputed facts The following facts are well established and referable: The proposal to describe gravity by means of a pseudo-Riemannian metric was first made by Einstein and Marcel Grossmann in the so-called Entwurf theory published 1913. Grossmann identified the contracted Riemann tensor as the key for the solution of the problem posed by Einstein. This was followed by several attempts of Einstein to find valid field equations for this theory of gravity. David Hilbert invited Einstein to the University of Göttingen for a week to give six two-hour lectures on general relativity, which he did in June–July 1915. Einstein stayed at Hilbert's house during this visit. Hilbert started working on a combined theory of gravity and electromagnetism, and Einstein and Hilbert exchanged correspondence until November 1915. Einstein gave four lectures on his theory on 4, 11, 18 and 25 November in Berlin, published as [Ein15a], [Ein15b], [Ein15c], [Ein15d]. 4 November: Einstein published non-covariant field equations and on 11 November returned to the field equations of the "Entwurf" papers, which he now made covariant by the assumption that the trace of the energy-momentum tensor was zero, as it was for electromagnetism. Einstein sent Hilbert proofs of his papers of 4 and 11 November. (Sauer 99, notes 63, 66) 15 November: Invitation issued for the 20 November meeting at the academy in Göttingen. "Hilbert legt vor in die Nachrichten: Grundgleichungen der Physik". (Sauer 99, note 73) 16 November: Hilbert spoke at the Göttingen Mathematical Society "Grundgleichungen der Physik" (Sauer 99, note 68). Talk not published. 16 or 17 November: Hilbert sent Einstein some information about his talk of 16 November (letter lost). 18 November: Einstein replied to Hilbert's letter (received by Hilbert on 19 November), saying as far as he (Einstein) could tell, Hilbert's system was equivalent to the one he (Einstein) had found in the preceding weeks. (Sauer 99, note 72). 
Einstein also told Hilbert in this letter that he (Einstein) had "considered the only possible generally covariant field equations three years earlier", adding that "The difficulty was not to find generally covariant equations for the gμν; this is easy with the help of the Riemann tensor. What was difficult instead was to recognize that these equations form a generalization, that is, a simple and natural generalization, of Newton's law" (A. Einstein to D. Hilbert, 18 November, Einstein Archives Call No. 13-093). Einstein also told Hilbert in that letter that he (Einstein) had calculated the correct perihelion advance for Mercury, using covariant field equations based on the assumption that the trace of the energy momentum tensor vanished as it did for electromagnetism. 18 November: Einstein presented the calculation of the perihelion advance to the Prussian Academy. 20 November: Hilbert lectured at the Göttingen Academy. The content of his presentation and of the proofs of the paper later published on the presentation are at the heart of the dispute among historians (see below). 25 November: In his last lecture, Einstein submitted the correct field equations. The published paper (Einstein 1915d) appeared on 2 December and did not mention Hilbert. Hilbert starts his paper by citing Einstein: "The vast problems posed by Einstein as well as his ingeniously conceived methods of solution, and the far-reaching ideas and formation of novel concepts by means of which Mie constructs his electrodynamics, have opened new paths for the investigation into the foundations of physics." Hilbert's paper took considerably longer to appear. He had galley proofs that were marked "December 6" by the printer in December 1915. Most of the galley proofs have been preserved, but about a quarter of a page is missing. The extant part of the proofs contains Hilbert's action from which the field equations can be obtained by taking a variational derivative, and using the contracted Bianchi identity derived in theorem III of Hilbert's paper, though this was not done in the extant proofs. Hilbert rewrote his paper for publication (in March 1916), changing the treatment of the energy theorem, dropping a non-covariant gauge condition on the coordinates to produce a covariant theory, and adding a new credit to Einstein for introducing the gravitational potentials into the theory of gravity. In the final paper, he said his differential equations seemed to agree with the "magnificent theory of general relativity established by Einstein in his later papers". Hilbert nominated Einstein for the third Bolyai prize in 1915 'for the high mathematical spirit behind all his achievements'. The 1916 paper was rewritten and republished in 1924 [Hil24], where Hilbert wrote: Einstein [...] kehrt schließlich in seinen letzten Publikationen geradewegs zu den Gleichungen meiner Theorie zurück. (Einstein [...] in his most recent publications, returns directly to the equations of my theory.) Historians on Hilbert's point of view Historians have discussed Hilbert's view of his interaction with Einstein. Walter Isaacson points out that Hilbert's publication on his derivation of the equations of general relativity included the text: "The differential equations of gravitation that result are, as it seems to me, in agreement with the magnificent theory of general relativity established by Einstein." Wuensch points out that Hilbert refers to the field equations of gravity as "meine Theorie" ("my theory") in his 6 February 1916 letter to Schwarzschild. 
This, however, is not at issue, since no one disputes that Hilbert had his own "theory", which Einstein criticized as naive and overly ambitious. Hilbert's theory was based on the work of Mie combined with Einstein's principle of general covariance, but applied to matter and electromagnetism as well as gravity. Mehra and Bjerknes point out that Hilbert's 1924 version of the article contained the sentence "... und andererseits auch Einstein, obwohl wiederholt von abweichenden und unter sich verschiedenen Ansätzen ausgehend, kehrt schließlich in seinen letzten Publikationen geradenwegs zu den Gleichungen meiner Theorie zurück" - "Einstein [...] in his last publications ultimately returns directly to the equations of my theory.". These statements of course do not have any particular bearing on the matter at issue. No one disputes that Hilbert had "his" theory, which was a very ambitious attempt to combine gravity with a theory of matter and electromagnetism along the lines of Mie's theory, and that his equations for gravitation agreed with those that Einstein presented beginning in Einstein's 25 November paper (which Hilbert refers to as Einstein's later papers to distinguish them from previous theories of Einstein). None of this bears on the precise origin of the trace term in the Einstein field equations (a feature of the equations that, while theoretically significant, does not have any effect on the vacuum equations, from which all the empirical tests proposed by Einstein were derived). Sauer says "the independence of Einstein's discovery was never a point of dispute between Einstein and Hilbert ... Hilbert claimed priority for the introduction of the Riemann scalar into the action principle and the derivation of the field equations from it," (Sauer mentions a letter and a draft letter where Hilbert defends his priority for the action functional) "and Einstein admitted publicly that Hilbert (and Lorentz) had succeeded in giving the equations of general relativity a particularly lucid form by deriving them from a single variational principle". Sauer also stated, "And in a draft of a letter to Weyl, dated 22 April 1918, written after he had read the proofs of the first edition of Weyl's 'Raum-Zeit-Materie' Hilbert also objected to being slighted in Weyl's exposition. In this letter again 'in particular the use of the Riemannian curvature [scalar] in the Hamiltonian integral' ('insbesondere die Verwendung der Riemannschen Krümmung unter dem Hamiltonschen Integral') was claimed as one of his original contributions. SUB Cod. Ms. Hilbert 457/17." Did Einstein develop the field equations independently? While Hilbert's paper was submitted five days earlier than Einstein's, it only appeared in 1916, after Einstein's field equations paper had appeared in print. For this reason, there was no good reason to suspect plagiarism on either side. In 1978, an 18 November 1915 letter from Einstein to Hilbert resurfaced, in which Einstein thanked Hilbert for sending an explanation of Hilbert's work. This was not unexpected to most scholars, who were well aware of the correspondence between Hilbert and Einstein that November, and who continued to hold the view expressed by Albrecht Fölsing in his Einstein biography: In November, when Einstein was totally absorbed in his theory of gravitation, he essentially only corresponded with Hilbert, sending Hilbert his publications and, on November 18, thanking him for a draft of his article. 
Einstein must have received that article immediately before writing this letter. Could Einstein, casting his eye over Hilbert's paper, have discovered the term which was still lacking in his own equations, and thus 'nostrified' Hilbert? In the very next sentence, after asking the rhetorical question, Folsing answers it with "This is not really probable...", and then goes on to explain in detail why [Einstein's] eventual derivation of the equations was a logical development of his earlier arguments—in which, despite all the mathematics, physical principles invariably predominated. His approach was thus quite different from Hilbert's, and Einstein's achievements can, therefore, surely be regarded as authentic. In their 1997 Science paper, Corry, Renn and Stachel quote the above passage and comment that "the arguments by which Einstein is exculpated are rather weak, turning on his slowness in fully grasping Hilbert's mathematics", and so they attempted to find more definitive evidence of the relationship between the work of Hilbert and Einstein, basing their work largely on a recently discovered pre-print of Hilbert's paper. A discussion of the controversy around this paper is given below. Those who contend that Einstein's paper was motivated by the information obtained from Hilbert have referred to the following sources: The correspondence between Hilbert and Einstein mentioned above. More recently, it became known that Einstein was also given notes of Hilbert's 16 November talk about his theory. Einstein's 18 November paper on the perihelion motion of Mercury, which still refers to the incomplete field equations of 4 and 11 November. (The perihelion motion depends only on the vacuum equations, which are unaffected by the trace term that was added to complete the field equations.) Reference to the final form of the equations appears only in a footnote added to the paper, indicating that Einstein had not known the final form of the equations on 18 November. This is not controversial, and is consistent with the well-known fact that Einstein did not complete the field equations (with the trace term) until 25 November. Letters of Hilbert, Einstein, and other scientists may be used in attempts to make guesses about the content of Hilbert's letter to Einstein, which is not preserved, or of Hilbert's lecture in Göttingen on 16 November. Those who contend that Einstein's work takes priority over Hilbert's, or that both authors worked independently have used the following arguments: Hilbert modified his paper in December 1915, and the 18 November version sent to Einstein did not contain the final form of the field equations. The extant part of the printer proofs does not have the explicit field equations. This is the point of view defended by Corry, Renn, Stachel, and Sauer. Sauer (1999) and Todorov (2005) agree with Corry, Renn and Satchel that Hilbert's proofs show that Hilbert had originally presented a non-covariant theory, which was dropped from the revised paper. Corry et al. quote from the proofs: "Since our mathematical theorem ... can provide only ten essentially independent equations for the 14 potentials [...] and further, maintaining general covariance makes quite impossible more than ten essential independent equations [...] then, in order to keep the deterministic characteristic of the fundamental equations of physics [...] four further non-covariant equations ... [are] unavoidable." (proofs, pages 3 and 4. Corry et al.) 
Hilbert derives these four extra equations and continues "these four differential equations [...] supplement the gravitational equations [...] to yield a system of 14 equations for the 14 potentials , : the system of fundamental equations of physics". (proofs, page 7. Corry et al.). Hilbert's first theory (16 November lecture, 20 November lecture, 6 December proofs) was titled "The fundamental equations of Physics". In proposing non-covariant fundamental equations, based on the Ricci tensor but restricted in this way, Hilbert was following the causality requirement that Einstein and Grossmann had introduced in the Entwurf papers of 1913. One may attempt to reconstruct the way in which Einstein arrived at the field equations independently. This is, for instance, done in the paper of Logunov, Mestvirishvili and Petrov quoted below. Renn and Sauer investigate the notebook used by Einstein in 1912 and claim he was close to the correct theory at that time. Scholars This section cites notable publications where people have expressed a view on the issues outlined above. Albrecht Fölsing on the Hilbert-Einstein interaction (1993) From Fölsing's 1993 (English translation 1998) Einstein biography " Hilbert, like all his other colleagues, acknowledged Einstein as the sole creator of relativity theory." Corry/Renn/Stachel and Friedwardt Winterberg (1997/2003) In 1997, Corry, Renn and Stachel published a three-page article in Science entitled "Belated Decision in the Hilbert-Einstein Priority Dispute" concluding that Hilbert had not anticipated Einstein's equations. Friedwardt Winterberg, a professor of physics at the University of Nevada, Reno, disputed these conclusions, observing that the galley proofs of Hilbert's articles had been tampered with - part of one page had been cut off. He goes on to argue that the removed part of the article contained the equations that Einstein later published, and he wrote that "the cut off part of the proofs suggests a crude attempt by someone to falsify the historical record". Science declined to publish this; it was printed in revised form in Zeitschrift für Naturforschung, with a dateline of 5 June 2003. Winterberg criticized Corry, Renn and Statchel for having omitted the fact that part of Hilbert's proofs was cut off. Winterberg wrote that the correct field equations are still present on the existing pages of the proofs in various equivalent forms. In this paper, Winterberg asserted that Einstein sought the help of Hilbert and Klein to help him find the correct field equation, without mentioning the research of Fölsing (1997) and Sauer (1999), according to which Hilbert invited Einstein to Göttingen to give a week of lectures on general relativity in June 1915, which however does not necessarily contradict Winterberg. Hilbert at the time was looking for physics problems to solve. A short reply to Winterberg's article can be found at ; the original long reply can be accessed via the Internet Archive at . In this reply, Winterberg's hypothesis is called "paranoid" and "speculative". Corry et al. offer the following alternative speculation: "it is possible that Hilbert himself cropped off the top of p. 7 to include it with the three sheets he sent Klein, in order that they not end in mid-sentence." As of September 2006, the Max Planck Institute of Berlin has replaced the short reply with a note saying that the Max Planck Society "distances itself from statements published on this website [...] concerning Prof. 
Friedwart Winterberg" and stating that "the Max Planck Society will not take a position in [this] scientific dispute". Ivan Todorov, in a paper published on ArXiv, says of the debate: Their [CRS's] attempt to support on this ground Einstein's accusation of "nostrification" goes much too far. A calm, non-confrontational reaction was soon provided by a thorough study of Hilbert's route to the "Foundations of Physics" (see also the relatively even handed survey (Viz 01)). In the paper recommended by Todorov as calm and non-confrontational, Tilman Sauer concludes that the printer's proofs show conclusively that Einstein did not plagiarize Hilbert, stating any possibility that Einstein took the clue for the final step toward his field equations from Hilbert's note [Nov 20, 1915] is now definitely precluded. Max Born's letters to David Hilbert, quoted in Wuensch, are quoted by Todorov as evidence that Einstein's thinking towards general covariance was influenced by the competition with Hilbert. Todorov ends his paper by stating: Einstein and Hilbert had the moral strength and wisdom - after a month of intense competition, from which, in a final account, everybody (including science itself) profited - to avoid a lifelong priority dispute (something in which Leibniz and Newton failed). It would be a shame to subsequent generations of scientists and historians of science to try to undo their achievement. Anatoly Alexeevich Logunov on general relativity (2004) Anatoly Logunov (a former vice president of the Soviet Academy of Sciences and at the time the scientific advisor of the Institute for High Energy Physics), is author of a book about Poincaré's relativity theory and coauthor, with Mestvirishvili and Petrov, of an article rejecting the conclusions of the Corry/Renn/Stachel paper. They discuss both Einstein's and Hilbert's papers, claiming that Einstein and Hilbert arrived at the correct field equations independently. Specifically, they conclude that: Their pathways were different but they led exactly to the same result. Nobody "nostrified" the other. So no "belated decision in the Einstein–Hilbert priority dispute", about which [Corry, Renn, and Stachel] wrote, can be taken. Moreover, the very Einstein–Hilbert dispute never took place. All is absolutely clear: both authors made everything to immortalize their names in the title of the gravitational field equations. But general relativity is Einstein's theory. Wuensch and Sommer (2005) Daniela Wuensch, a historian of science and a Hilbert and Kaluza expert, responded to Bjerknes, Winterberg and Logunov's criticisms of the Corry/Renn/Stachel paper in a book which appeared in 2005, where in she defends the view that the cut to Hilbert's printer proofs was made in recent times. Moreover, she presents a theory about what might have been on the missing part of the proofs, based upon her knowledge of Hilbert's papers and lectures. She defends the view that knowledge of Hilbert's 16 November 1915 letter was crucial to Einstein's development of the field equations: Einstein arrived at the correct field equations only with Hilbert's help ("nach großer Anstrengung mit Hilfe Hilberts"), but nevertheless calls Einstein's reaction (his negative comments on Hilbert in the 26 November letter to Zangger) "understandable" ("Einsteins Reaktion ist verständlich") because Einstein had worked on the problem for a long time. According to her publisher, Klaus Sommer, Wuensch concludes though that: This comprehensive study concludes with a historical interpretation. 
It shows that while it is true that Hilbert must be seen as the one who first discovered the field equations, the general theory of relativity is indeed Einstein's achievement, whereas Hilbert developed a unified theory of gravitation and electromagnetism. In 2006, Wuensch was invited to give a talk at the annual meeting of the German Physics Society (Deutsche Physikalische Gesellschaft) about her views about the priority issue for the field equations. Wuensch's publisher, Klaus Sommer, in an article in Physik in unserer Zeit, supported Wuensch's view that Einstein obtained some results not independently but from the information obtained from Hilbert's 16 November letter and from the notes of Hilbert's talk. While he does not call Einstein a plagiarist, Sommer speculates that Einstein's conciliatory 20 December letter was motivated by the fear that Hilbert might comment on Einstein's behaviour in the final version of his paper. Sommer claimed that a scandal caused by Hilbert could have done more damage to Einstein than any scandal before ("Ein Skandal Hilberts hätte ihm mehr geschadet als jeder andere zuvor"). David E. Rowe (2006) The contentions of Wuensch and Sommer have been strongly contested by the historian of mathematics and natural sciences David E. Rowe in a detailed review of Wuensch's book published in Historia Mathematica in 2006. Rowe argues that Wuensch's book offers nothing but tendentious, unsubstantiated, and in many cases highly implausible, speculations. In popular works by famous physicists Wolfgang Pauli's Encyclopedia entry for the theory of relativity pointed out two reasons physicists did not consider Hilbert's derivation equivalent to Einstein's: 1) it required accepting the stationary-action principle as a physical axiom and more important 2) it was based on Mie unified field theory. In his 1999 article for Time Magazine which featured Einstein Man of the Century Stephen Hawking wrote: Kip Thorne concludes, in remarks based on Hilbert's 1924 paper, that Hilbert regarded the general theory of relativity as Einstein's: However, Kip Thorne also stated, "Remarkably, Einstein was not the first to discover the correct form of the law of warpage [. . . .] Recognition for the first discovery must go to Hilbert" based on "the things he had learned from Einstein's summer visit to Göttingen." This last point is also mentioned by Corry et al. Insignificance of the dispute As noted by the historians John Earman and Clark Glymour, "questions about the priority of discoveries are often among the least interesting and least important issues in the history of science." There was no real controversy between Einstein and Hilbert themselves: And: See also History of Lorentz transformations History of general relativity List of scientific priority disputes Multiple discovery Notes Citations References Works of physics (primary sources) [Ein05c] : Albert Einstein: Zur Elektrodynamik bewegter Körper, Annalen der Physik 17(1905), 891–921. Received 30 June, published 26 September 1905. Reprinted with comments in [Sta89], pp. 
276–306 English translation, with footnotes not present in the 1905 paper, available on the net [Ein05d] : Albert Einstein: Ist die Trägheit eines Körpers von seinem Energiegehalt abhängig?, Annalen der Physik 18(1905), 639–641, Reprinted with comments in [Sta89], Document 24 English translation available on the net [Ein06] : Albert Einstein: Das Prinzip von der Erhaltung der Schwerpunktsbewegung und die Trägheit der Energie Annalen der Physik 20(1906):627-633, Reprinted with comments in [Sta89], Document 35 [Ein15a]: Einstein, A. (1915) "Die Feldgleichungun der Gravitation". Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 844–847. [Ein15b]: Einstein, A. (1915) "Zur allgemeinen Relativatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 778-786 [Ein15c]: Einstein, A. (1915) "Erklarung der Perihelbewegung des Merkur aus der allgemeinen Relatvitatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 799-801 [Ein15d]: Einstein, A. (1915) "Zur allgemeinen Relativatstheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 831-839 [Ein16]: Einstein, A. (1916) "Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik, 49 [Hil24]: Hilbert, D., Die Grundlagen der Physik - Mathematische Annalen, 92, 1924 - "meiner theorie" quote on page 2 - online at Uni Göttingen - index of journal [Lan05]:Langevin, P. (1905) "Sur l'origine des radiations et l'inertie électromagnétique", Journal de Physique Théorique et Appliquée, 4, pp. 165–183. [Lan14]:Langevin, P. (1914) "Le Physicien" in Henri Poincaré Librairie (Felix Alcan 1914) pp. 115–202. [Lor99]:Lorentz, H. A. (1899) "Simplified Theory of Electrical and Optical Phenomena in Moving Systems", Proc. Acad. Science Amsterdam, I, 427–43. [Lor04]: Lorentz, H. A. (1904) "Electromagnetic Phenomena in a System Moving with Any Velocity Less Than That of Light", Proc. Acad. Science Amsterdam, IV, 669–78. [Lor11]:Lorentz, H. A. (1911) Amsterdam Versl. XX, 87 [Lor14]:. [Pla07]:Planck, M. (1907) Berlin Sitz., 542 [Pla08]:Planck, M. (1908) Verh. d. Deutsch. Phys. Ges. X, p218, and Phys. ZS, IX, 828 [Poi89]:Poincaré, H. (1889) Théorie mathématique de la lumière, Carré & C. Naud, Paris. Partly reprinted in [Poi02], Ch. 12. [Poi97]:Poincaré, H. (1897) "The Relativity of Space", article in English translation [Poi00] : . See also the English translation [Poi02] : [Poi04] : English translation as The Principles of Mathematical Physics, in "The value of science" (1905a), Ch. 7–9. [Poi05] : [Poi06] : [Poi08] : [Poi13] : [Ein20]: Albert Einstein: "Ether and the Theory of Relativity", An Address delivered on May 5, 1920, in the University of Leyden. [Sta89] : John Stachel (Ed.), The collected papers of Albert Einstein, volume 2, Princeton University Press, 1989 Further reading Nándor Balázs (1972) "The acceptability of physical theories: Poincaré versus Einstein", pages 21–34 in General Relativity: Papers in Honour of J.L. Synge, L. O'Raifeartaigh editor, Clarendon Press. Albert Einstein Theory of relativity Discovery and invention controversies
General relativity priority dispute
[ "Physics" ]
6,270
[ "Theory of relativity" ]
65,596,601
https://en.wikipedia.org/wiki/JT-010
JT-010 is a chemical compound which acts as a potent, selective activator of the TRPA1 channel, and has been used to study the role of this receptor in the perception of pain, as well as other actions such as promoting repair of dental tissue after damage. See also ASP-7663 PF-4840154 References Nitrogen mustards Thiazoles Amides Phenyl compounds Ethers Transient receptor potential channel agonists
JT-010
[ "Chemistry" ]
98
[ "Organic compounds", "Amides", "Functional groups", "Ethers" ]
65,596,802
https://en.wikipedia.org/wiki/HC-030031
HC-030031 is a drug which acts as a potent and selective antagonist for the TRPA1 receptor, and has analgesic and antiinflammatory effects. References Xanthines Acetamides
HC-030031
[ "Chemistry" ]
47
[ "Pharmacology", "Xanthines", "Medicinal chemistry stubs", "Alkaloids by chemical classification", "Pharmacology stubs" ]
65,601,334
https://en.wikipedia.org/wiki/Overcategory
In mathematics, specifically category theory, an overcategory (also called a slice category), as well as an undercategory (also called a coslice category), is a distinguished class of categories used in multiple contexts, such as with covering spaces (espace étalé). They were introduced as a mechanism for keeping track of data surrounding a fixed object X in some category C. There is a dual notion of undercategory, which is defined similarly. Definition Let C be a category and X a fixed object of C (pg. 59). The overcategory (also called a slice category) C/X is an associated category whose objects are pairs (A, π) where π : A → X is a morphism in C. Then, a morphism between objects f : (A, π) → (A′, π′) is given by a morphism f : A → A′ in the category C such that the evident triangle over X commutes, i.e. π′ ∘ f = π. There is a dual notion called the undercategory (also called a coslice category) X/C whose objects are pairs (B, ψ) where ψ : X → B is a morphism in C. Then, morphisms in X/C are given by morphisms g : B → B′ in C such that the evident triangle under X commutes, i.e. g ∘ ψ = ψ′. These two notions have generalizations in 2-category theory and higher category theory (pg. 43), with definitions either analogous or essentially the same. Properties Many categorical properties of C are inherited by the associated over- and undercategories for an object X. For example, if C has finite products and coproducts, it is immediate that the categories C/X and X/C have these properties, since the product and coproduct can be constructed in C, and through universal properties, there exists a unique morphism either to X or from X. In addition, this applies to limits and colimits as well. Examples Overcategories on a site Recall that a site is a categorical generalization of a topological space first introduced by Grothendieck. One of the canonical examples comes directly from topology, namely the category Open(X) whose objects are open subsets U of some topological space X, and whose morphisms are given by inclusion maps. Then, for a fixed open subset U, the overcategory Open(X)/U is canonically equivalent to the category Open(U) for the induced topology on U. This is because every object in Open(X)/U is an open subset V contained in U. Category of algebras as an undercategory The category of commutative R-algebras is equivalent to the undercategory R/CRing for the category CRing of commutative rings. This is because the structure of an R-algebra on a commutative ring A is directly encoded by a ring morphism R → A. If we consider the opposite category, it is the overcategory of affine schemes over Spec(R), written Aff/Spec(R). Overcategories of spaces Another common class of overcategories considered in the literature are overcategories of spaces, such as schemes, smooth manifolds, or topological spaces. These categories encode objects relative to a fixed object, such as the category of schemes over S, Sch/S. Fiber products in these categories can be considered intersections, given that the objects are subobjects of the fixed object. See also Comma category References Category theory
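As the "See also" entry suggests, both constructions are special cases of comma categories. The following identification is a standard fact added here for orientation (it is not spelled out in the article): writing X also for the functor from the terminal category that selects the object X,

```latex
C/X \;\simeq\; (\mathrm{Id}_{C} \downarrow X),
\qquad
X/C \;\simeq\; (X \downarrow \mathrm{Id}_{C}).
```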
Overcategory
[ "Mathematics" ]
610
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
65,602,236
https://en.wikipedia.org/wiki/RBC%20EXT8
RBC EXT8 is a globular cluster in the galaxy Messier 31, about 27 kpc from that galaxy's center. Its spectral lines reveal an iron abundance about 800 times lower than that of the Sun. Its position is right ascension 00h 53m 14.53s, declination +41°33′24′′ (J2000 equinox), according to the Revised Bologna Catalogue (10). Its magnitude is 15.79, and it is 15.5″ across. References Andromeda (constellation) Globular clusters Andromeda Galaxy
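For readers used to the logarithmic bracket notation, the quoted iron deficiency translates roughly as follows (our own arithmetic, assuming "800 times lower" refers to the iron-to-hydrogen ratio relative to the solar value):

```latex
[\mathrm{Fe}/\mathrm{H}] \approx \log_{10}\!\left(\tfrac{1}{800}\right) \approx -2.9
```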
RBC EXT8
[ "Astronomy" ]
115
[ "Andromeda (constellation)", "Constellations" ]
55,794,265
https://en.wikipedia.org/wiki/Hexafluoroisobutylene
Hexafluoroisobutylene is an organofluorine compound with the formula (CF3)2C=CH2. This colorless gas is structurally similar to isobutylene. It is used as a comonomer in the production of modified polyvinylidene fluoride. It is produced in a multistep process starting with the reaction of acetic anhydride with hexafluoroacetone. It is oxidized by sodium hypochlorite to hexafluoroisobutylene oxide. As expected, it is a potent dienophile. See also Perfluoroisobutene References Trifluoromethyl compounds Fluoroalkenes Gases Vinylidene compounds Hydrofluoroolefins
Hexafluoroisobutylene
[ "Physics", "Chemistry" ]
166
[ "Statistical mechanics", "Phases of matter", "Gases", "Matter" ]
55,796,356
https://en.wikipedia.org/wiki/Voltage-controlled%20resistor
A voltage-controlled resistor (VCR) is a three-terminal active device with one input port and two output ports. The input-port voltage controls the value of the resistor between the output ports. VCRs are most often built with field-effect transistors (FETs). Two types of FETs are often used: the JFET and the MOSFET. There are both floating voltage-controlled resistors and grounded voltage-controlled resistors. Floating VCRs can be placed between two passive or active components. Grounded VCRs, the more common and less complicated design, require that one port of the voltage-controlled resistor be grounded. Usages Voltage-controlled resistors are one of the most commonly used analog design blocks: adaptive analog filters, automatic gain-control circuits, clock generators, compressors, electrometers, energy harvesters, expanders, hearing aids, light dimmers, modulators (mixers), artificial neural networks, programmable-gain amplifiers, phased arrays, phase-locked loops, phase-controlled dimming circuits, phase-delay and -advance circuits, tunable filters, variable attenuators, voltage-controlled oscillators, voltage-controlled multivibrators, as well as waveform generators, all include voltage-controlled resistors. The JFET is one of the more common active devices used for the design of voltage-controlled resistors. So much so, that JFET devices are packaged and sold as voltage-controlled resistors. Typically, JFETs when they are packaged as VCRs often have high pinch-off voltages, which result in a greater dynamic resistance range. JFETs for VCRs are often packaged in pairs, which allows VCR designs that require matched transistor parameters. For VCR applications that involve sensor signal amplification or audio, discrete JFETs are often used. One reason is that JFETs and circuit topologies built with JFETs feature low-noise (specifically low 1/f flicker noise and low burst noise). In these applications, low-noise JFETs allow more reliable and accurate measurements and heightened levels of sound purity. Another reason discrete JFETs are used is that JFETs are better suited for rugged environments. JFETs can withstand electrical, electromagnetic interference (EMI) and other high radiation shocks better than MOSFET circuits. JFETs can even serve as an input surge-protection device. JFETs are also less susceptible to electrostatic discharge than MOSFETs. Voltage-controlled resistor design Two of the more common and most cost-effective designs for JFET VCR are the non-linearized and linearized VCR design. The non-linearized design only requires one JFET, The linearized design also uses one JFET, but has two linearization resistors. The linearized designs are used for VCR applications that require high input-signal voltage levels. The non-linearized designs are used in low input signal level and cost-driven DC applications. Non-linearized VCR design In the circuit on the figure, a non-linearized VCR design, the voltage-controlled resistor, the LSK489C JFET, is used as a programmable voltage divider. The VGS supply sets the level of the output resistance of the JFET. The drain-to-source resistance of the JFET (RDS) and the drain resistor (R1) form the voltage-divider network. The output voltage can be determined from the equation Vout = VDC · RDS / (R1 + RDS). An LTSpice simulation of the non-linearized VCR design verifies that the JFET resistance changes with a change in gate-to-source voltage (VGS). 
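Before walking through the simulation results, a minimal numeric sketch of the divider equation just quoted may be helpful. The 4 V supply and 300-ohm drain resistor match the simulation described next; the list of trial RDS values is our own illustration.

```python
# Non-linearized JFET VCR used as a programmable voltage divider:
#   Vout = VDC * RDS / (R1 + RDS)   (equation quoted above)

def divider_vout(vdc, r1, rds):
    """Output voltage of the divider formed by R1 and the JFET's drain-to-source resistance."""
    return vdc * rds / (r1 + rds)

VDC, R1 = 4.0, 300.0                 # values used in the simulation discussed below
for rds in (200.0, 350.0, 500.0):    # illustrative drain-to-source resistances
    print(f"RDS = {rds:5.0f} ohm -> Vout = {divider_vout(VDC, R1, rds):.2f} V")
```

The two extreme values reproduce the output voltages reported for the simulation below (about 1.6 V and 2.5 V); this is simply the divider equation read forward.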
In the simulation (below), a constant input voltage is applied (the VDC supply is set to 4 volts), and the gate-to-source voltage is reduced in steps, which increases the JFET drain-to-source resistance. The resistance between the drain and source terminals of the JFET increases as the gate-to-source voltage becomes more negative and decreases as the gate-to-source voltage approaches 0 volts. The simulation below bears this out. The output voltage is about 2.5 volts with a gate-to-source voltage of −1 volt. Conversely, the output voltage drops to about 1.6 volts when the gate-to-source voltage is 0 volts. With a 4-volt input signal and R1 of 300 ohms, the range of resistance for the JFET VCR can be calculated from the simulation results as VGS varies between −1 volt and 0 volts using the equation RDS = V0 · R1 / (VDS − V0). Using the above equation, at VGS = −1 V, the VCR resistance is about 500 ohms, and at VGS = 0 V, the VCR resistance is about 200 ohms. Applying a ramp voltage to the input of a similar VCR circuit (the load resistor has been changed to 3000 ohms) allows one to determine the exact value of the resistance of the JFET as the input voltage is varied. The ramp simulation, below, reveals that the drain-to-source resistance of the JFET is fairly constant (about 280 ohms) up until the input sweep voltage, Vsweep (Vsignal), reaches about 2 V. At this point the drain-to-source resistance starts to rise slowly until the input voltage reaches 8 V. At around 8 V, for this bias condition (VGS = 0 V and R = 3 kΩ), the JFET drain current (ID(J1)) saturates, and the resistance is no longer constant and changes with an increase in input voltage. The ramp simulation also indicates that even below 2 V, the VCR's resistance is not completely independent of the input voltage level. That is, the VCR resistance does not represent a perfectly linear resistor. Because the resistance is not constant above 2 V, this non-linearized VCR design is most often used when the input voltage signal is below 1 V, such as in sensor applications or in applications where distortion is not a concern at higher input voltage levels, or in other cases when a constant resistor value is not required (for example, in LED dimmer applications and musical pedal-effect circuits). Linearized VCR design To increase the dynamic range of the input voltage, maintain a constant resistance over the input signal range, and to improve the signal-to-noise ratio and total harmonic distortion specifications, linearization resistors are used. A fundamental limitation of voltage-controlled resistors is that the input signal must be kept below the linearization voltage (approximately the point when the JFET enters saturation). If the linearization voltage is exceeded, the voltage-controlled resistor value will change both with the level of the input voltage signal and the gate-to-source voltage. For the evaluation of this design's ability to handle larger input signals, a ramp is applied to the VCR input. From the results of the ramp simulation, how closely the VCR emulates a real resistor and over what range of input voltages the VCR behaves as a resistor is determined. The linearized VCR ramp simulation, below, indicates that the VCR resistance is constant at approximately 260 ohms for an input signal range from about −6 V to 6 V (the V(Vout)/I(R1) curve). The sweep also indicates that the VCR resistance starts to dramatically increase, as it does in the non-linearized design, once the JFET enters its saturation region. 
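Returning briefly to the non-linearized divider numbers quoted above, the resistance-extraction equation can be evaluated directly as a sanity check. This is only a sketch of the arithmetic, using the supply and resistor values stated in the text.

```python
# RDS = V0 * R1 / (VDS - V0), with the 4 V supply and 300-ohm drain resistor
# quoted above for the non-linearized divider (V0 is the measured output voltage).

def rds_from_vout(v0, vds, r1):
    """Recover the JFET drain-to-source resistance from the divider output voltage."""
    return v0 * r1 / (vds - v0)

VDS, R1 = 4.0, 300.0
print(rds_from_vout(2.5, VDS, R1))  # ~500 ohm, the value quoted for VGS = -1 V
print(rds_from_vout(1.6, VDS, R1))  # ~200 ohm, the value quoted for VGS =  0 V
```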
Because of the linearized VCR's wider constant resistance region, much larger input signals than the non-linearized designs can be applied to the VCR without distortion. However, it is also important to consider that the drain resistor value will slightly affect the range of drain-to-source voltages that the VCR resistance is constant. Because of the increased linearization range, the linearized circuit is able to handle AC signals that are in the order of 8 V peak-to-peak before visual levels of distortion set in. The simulation below, which uses a 3000-ohm drain resistor, illustrates that the VCR can be successfully used at fairly high input voltage input signals. For this design, the 8 V peak-to-peak input voltage signal can be attenuated from 2.2 volts peak to 0.5 volts peak when the control voltage is varied from −2.5 volts to 0.5 volts. What is important to note about the linearized VCR design, as opposed to the non-linearized design, is that the output signal does not have any significant offset. It stays centered at 0 V as the control voltage is changed. Simulations of the non-linearized design indicate a significant offset voltage at the output. Another important characteristic of the linearized VCR design is that it has a higher output current than the non-linearized design. The effect of the linearization resistors is to effectively increase the transconductance gain of the VCR. Resistance range selection Different JFETs can be used to obtain different VCR resistance ranges. Typically, the higher the IDSS value for a JFET, the lower the resistance value obtained. Similarly, JFETs with lower values of IDSS have higher values of resistance. With a bank of JFETs, with different IDSS values (and hence, RDS values), banks of programmable automatic gain-control circuits can be constructed that offer a wide range of resistance ranges. For example, the LSK489A and LSK489C, graded IDSS JFETS, show a 3:1 resistance variation. Distortion considerations Distortion is a major concern with voltage-controlled resistors. When an AC or non-DC input signal is applied that results in the VCR resistor moving out of the linear triode region (or operated in a less than perfectly linear triode region), uneven amplification of the input signal results (as a direct result of a non-linear increase in resistance). This results in distortion of the output signal. In order to overcome this problem, non-linearized VCRs are simply operated at fairly low signal levels. Linearized VCR designs, on the other hand, will have significantly less distortion at much higher input voltage signal levels and allow an improvement in total harmonic distortion specification. For example, the simulation below shows a significant amount of visual distortion when the input signal of 5 V peak-to-peak is applied to a non-linearized VCR design. On the other hand, a simulation of a linearized VCR design shows very little distortion when a 8 V peak-to-peak input signal is applied (Figure 7). Other VCR topologies and designs Besides these more basic VCR designs, there are numerous more sophisticated designs. These designs often include a differential difference conveyor current (DDCC) circuit, a differential amplifier, two or more matched JFET transistors or one or two operational amplifiers. These designs offer improvements in dynamic range, distortion, signal-to-noise ratio and sensitivity to temperature variations. 
Design theory – IV analysis The current–voltage (IV) transfer characteristics determine how the JFET VCR will perform. Specifically, the linear regions of the IV curves determine the input signal range where the VCR will behave as a resistor. The curves of a specific JFET also dictate the range of resistor values that the VCR can be programmed to. The mathematical function that defines a JFET IV curve is not linear. However, there are regions of these curves that are very linear. These include the triode region (also known as the ohmic or linear region) and the saturation region (also known as the active region or constant-current-source region). In the triode region, the JFET acts like a resistor, however, in the saturation region it behaves like a constant-current source. The point that separates the triode region and the saturation region is roughly the point where VDS is equal to VGS on each of the IV curves. In the triode region, changes in the drain-to-source voltage will not change (or change very little) the resistance between the JFET's drain and source terminals. In the saturation region, or more appropriately the constant-current region, changes in the drain-to-source voltage will require the drain-to-source resistance to change such that the current remains at a constant value for different drain-to-source voltage levels. For values of VGS near zero, the drain-to-source voltage linearization voltage or triode breakpoint is much higher than when VGS levels are near the pinch-off voltage. This means in order to maintain constant resistor behavior for different values of VGS, the maximal linearization value would be set according to the highest value of VGS used. The linear triode region actually includes negative values of VGS. The figure below, shows an LTSPICE (LTSPICE) simulation of the IV curves in the triode region. As can be seen, a non-linearized LSK489 is approximately linear from about −0.1 V to 0.1 V. For VGS levels near 0 V, the triode linear range extends from about −0.2 V to 0.2 V. As the value of VGS is increased, the linear triode region is significantly reduced. Conversely, when linearization resistors are used, a similar IV curve swept simulation indicates that the linear triode region is significantly extended. From the IV curves, one can see that the linearization region for the linearized design extends easily from −6 V to 6 V (the IDS versus VDS versus Vin curves). Far above the approximately 200 mV range the non-linearized design produces. Of further interest is that the linearization results in linearization of the gate-to-source voltage even though the input voltage (Vin) is held at a constant DC level during each of the sweeps. This is because as the input voltage changes, the value of the VGS voltage changes such that VGS is always equal to one-half VDS. The change in VGS for changes in VDS is such that the JFET behaves as a resistor up until the point where the JFET saturates. The mathematics of linearization The mathematics behind linearization resistors is directly related to the cancellation of the second degree VDS term in the JFET triode equation. This equation relates the drain current to VGS and VDS. Kleinfeld applies Kirchhoff's current law to prove that the VDS non-linear term cancels with linearization resistors. The linearization resistors, in order to effect cancellation of the second-degree (quadratic) term must be equal. 
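A quick symbolic check of this cancellation (restated just below) can be run with sympy. The Shichman–Hodges triode-region expression used here is a standard JFET model assumed for the sketch; the article does not write the equation out explicitly.

```python
import sympy as sp

V_C, V_DS, V_P, beta = sp.symbols('V_C V_DS V_P beta')

# Standard Shichman-Hodges triode (ohmic) region drain current (assumed model)
def drain_current(v_gs, v_ds):
    return beta * (2 * (v_gs - V_P) * v_ds - v_ds**2)

# Gate driven directly by the control voltage: the quadratic V_DS term remains
i_plain = sp.expand(drain_current(V_C, V_DS))

# Two equal linearization resistors hold the gate at the average of the
# control voltage and the drain voltage, so V_GS = (V_C + V_DS) / 2
i_linear = sp.expand(drain_current((V_C + V_DS) / 2, V_DS))

print(i_plain)    # contains a -beta*V_DS**2 term  -> nonlinear resistance
print(i_linear)   # beta*V_C*V_DS - 2*beta*V_P*V_DS -> linear in V_DS
print(sp.simplify(sp.diff(i_linear, V_DS)))  # conductance independent of V_DS
```

With the linearization resistors in place, the drain current is proportional to V_DS alone, so the conductance set by the control voltage no longer depends on the signal across the device; this is the behaviour the linearized simulations above display.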
Equal valued linearization resistors divide the drain-to-source voltage by 2, effectively cancelling out the non-linear VDS term in the JFET triode equation. The future of voltage-controlled resistors Everyday and high-performance VCRs are essential to the successful design of many analog electronic circuit designs and will continue to be so. VCR designs are expected to play a central role in the advancement of artificial intelligence (neural) based sensor networks. The VCR, basically the heart of the synaptic cells in a neural network, is necessary to enable high-speed analog data processing and control of information that microcontrollers, digital-to-analog converters and analog-to-digital converters presently do. Low-noise JFETs because of their low-signal sensitivity, electromagnetic and radiation resilience, and their ability to be configured both as a VCR in a synaptic cell and as a low-noise high-performance sensor preamplifier, offer a solution to the implementation of artificial-intelligent-based sensor nodes. This is a natural extension of the fact that low-noise JFETs and low-noise JFET circuit topologies are extensively used in the design of low-noise VCRs and low-noise preamplifiers in sensor measurement applications. References Resistive components
Voltage-controlled resistor
[ "Physics" ]
3,411
[ "Resistive components", "Physical quantities", "Electrical resistance and conductance" ]
49,021,215
https://en.wikipedia.org/wiki/L%20band%20%28NATO%29
The NATO L band is the obsolete designation given to the radio frequencies from 40 to 60 GHz (equivalent to wavelengths between 7.5 and 5 mm) during the Cold War period. Since 1992, frequency allocations, allotments and assignments have been in line with the NATO Joint Civil/Military Frequency Agreement (NJFA). However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use. References Radio spectrum
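The bracketed wavelength figures follow directly from the relation λ = c/f; a quick check of the arithmetic (ours, using c ≈ 3×10^8 m/s):

```python
c = 3e8  # approximate speed of light in m/s
for f_ghz in (40, 60):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz} GHz -> {wavelength_mm:.1f} mm")  # 7.5 mm and 5.0 mm
```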
L band (NATO)
[ "Physics" ]
106
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
49,021,419
https://en.wikipedia.org/wiki/PET%20response%20criteria%20in%20solid%20tumors
PET response criteria in solid tumors (PERCIST) is a set of rules that define when tumors in cancer patients improve ("respond"), stay the same ("stabilize"), or worsen ("progress") during treatment, using positron emission tomography (PET). The criteria were published in May 2009 in the Journal of Nuclear Medicine (JNM). A pooled analysis from 2016 concluded that its application may give rather different results from RECIST, and that it might be a more suitable tool for understanding tumor response to treatment. Details Complete metabolic response (CMR) Complete resolution of 18F-FDG uptake within the measurable target lesion so that it is less than mean liver activity and at the level of surrounding background blood-pool activity. Disappearance of all other lesions to background blood-pool levels. No new suspicious 18F-FDG avid lesions. If there is progression by RECIST, it must be verified with follow-up. Partial metabolic response (PMR) Reduction of a minimum of 30% in target measurable tumor 18F-FDG SUL peak, with an absolute drop in SUL of at least 0.8 SUL units. No increase of >30% in SUL or size in any other lesion. No new lesions. Stable metabolic disease (SMD) Not CMR, PMR, or progressive metabolic disease (PMD). No new lesions. Progressive metabolic disease (PMD) >30% increase in 18F-FDG SUL peak, with a >0.8 SUL unit increase in tumor SUV peak from the baseline scan, in a pattern typical of tumor and not of infection/treatment effect; or a visible increase in the extent of 18F-FDG tumor uptake; or new 18F-FDG avid lesions which are typical of cancer and not related to treatment effect or infection. See also Response evaluation criteria in solid tumors References Cancer research Nuclear medicine PET radiotracers Positron emission tomography
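As a rough illustration of how the SUL-peak thresholds above combine, here is a deliberately simplified sketch (a hypothetical helper, not part of PERCIST itself): it classifies a single target lesion from baseline and follow-up SUL peak values only, and ignores the CMR criteria, new-lesion checks and verification requirements that the full criteria impose.

def percist_sul_category(sul_baseline: float, sul_followup: float) -> str:
    """Simplified PERCIST-style call from the SUL peak alone (illustrative only)."""
    delta = sul_followup - sul_baseline
    pct = delta / sul_baseline * 100.0
    if pct <= -30.0 and delta <= -0.8:
        return "PMR"   # at least a 30% and at least a 0.8 SUL-unit reduction
    if pct > 30.0 and delta > 0.8:
        return "PMD"   # more than a 30% and more than a 0.8 SUL-unit increase
    return "SMD"       # neither threshold pair met

print(percist_sul_category(6.0, 3.5))  # large drop -> "PMR"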
PET response criteria in solid tumors
[ "Physics", "Chemistry" ]
392
[ "Antimatter", "Medicinal radiochemistry", "Positron emission tomography", "PET radiotracers", "Chemicals in medicine", "Matter" ]
49,022,545
https://en.wikipedia.org/wiki/IBM%20Journal%20of%20Research%20and%20Development
IBM Journal of Research and Development was a peer-reviewed bimonthly scientific journal covering research on information systems. The journal ceased publication in 2020. According to the Journal Citation Reports, the journal had a 2019 impact factor of 1.27. IBM also published the IBM Systems Journal starting in 1962; it ceased publication in 2008 and was absorbed in part by the IBM Journal of Research and Development. References External links English-language journals IBM Information systems journals
IBM Journal of Research and Development
[ "Technology" ]
97
[ "Information systems journals", "Information systems" ]
49,024,305
https://en.wikipedia.org/wiki/Lepiota%20erminea
Lepiota erminea, commonly known as the dune dapperling, is a species of agaric fungus in the family Agaricaceae. It is found in Europe and North America. See also List of Lepiota species References External links erminea Fungi described in 1821 Fungi of Europe Fungi of North America Taxa named by Elias Magnus Fries Fungus species
Lepiota erminea
[ "Biology" ]
76
[ "Fungi", "Fungus species" ]
52,889,231
https://en.wikipedia.org/wiki/Testosterone%E2%80%93cortisol%20ratio
In human biology, the testosterone–cortisol ratio describes the ratio between testosterone, the primary male sex hormone and an anabolic steroid, and cortisol, another steroid hormone, in the human body. The ratio is often used as a biomarker of physiological stress in athletes during training, during athletic performance, and during recovery, and has been explored as a predictor of performance. At least among weight-lifters, the ratio tracks linearly with increases in training volume over the first year of training but the relationship breaks down after that. A lower ratio in weight-lifters just prior to performance appears to predict better performance. The ratio has been studied as a possible biomarker for criminal aggression, but as of 2009 its usefulness was uncertain. References Testosterone Athletic training Biomarkers
Testosterone–cortisol ratio
[ "Biology" ]
165
[ "Biomarkers" ]
52,890,895
https://en.wikipedia.org/wiki/Expanded%20crater
An expanded crater is a type of secondary impact crater. Large impacts often create swarms of small secondary craters from the debris that is blasted out as a consequence of the impact. Studies of a type of secondary crater, called expanded craters, have given insights into places where abundant ice may be present in the ground. Expanded craters have lost their rims; this may be because any rim that was once present collapsed into the crater during expansion or, if the rim was composed of ice, because the rim lost its ice. Excess ice (ice in addition to what is in the pores of the ground) is widespread throughout the Martian mid-latitudes, especially in Arcadia Planitia. In this region there are many expanded secondary craters that probably form from impacts that destabilize a subsurface layer of excess ice, which subsequently sublimates. With sublimation the ice changes directly from a solid to a gaseous form. In the impact, the excess ice is broken up, resulting in an increase in surface area. Ice will sublimate much more if there is more surface area. After the ice disappears into the atmosphere, dry soil material will collapse and cause the crater diameter to become larger. Since this region still has abundant expanded craters, the area between the expanded craters should still have abundant ice under the surface. If all the ice were gone, all the expanded craters would also be gone. Expanded craters are more frequent in the inner layer of a type of crater called double-layer ejecta craters (formerly called rampart craters). Double-layer ejecta craters are believed to form in ice-rich ground. Research published in 2015 mapped expanded craters in Arcadia Planitia, found in the northern mid-latitudes, and the research team concluded that the ice may be tens of millions of years old. The age was determined from the age of four primary craters that produced the secondary craters that later expanded when ice sublimated. The craters were Steinheim, Gan, Domoni, and an unnamed crater with a diameter of 6 km. Based on measurements and models, the researchers calculated that at least 6,000 km3 of ice is still preserved in non-cratered portions of Arcadia Planitia. Places on Mars that display expanded craters may indicate where future colonists can find water ice. See also Diacria quadrangle References Impact craters
Expanded crater
[ "Astronomy" ]
464
[ "Astronomical objects", "Impact craters" ]
52,891,920
https://en.wikipedia.org/wiki/Regional%20associations%20of%20road%20authorities
This article lists the main regional associations for road authorities from around the world. Many of these are associated with the World Road Association. Africa The Association des Gestionnaires et Partenaires Africains de la Route (AGEPAR), or African Road Managers and Partners Association, is the association for road authorities predominantly in North and West Africa. The Association of Southern Africa National Road Agencies (ASANRA) is an association of national roads agencies or authorities in the Southern African Development Community. Asia and Australasia The Road Engineering Association of Asia and Australasia was established in 1973 as a regional body to promote and advance the science and practice of road engineering and related professions. Europe and Asia The Baltic Roads Association was established for the cooperation of the Estonian, Latvian and Lithuanian Road Administrations. The Conference of European Directors of Roads, or Conférence Européenne des Directeurs des Routes, is a Brussels-based organisation for the directors of national road authorities in Europe. Межправительственный совет дорожников (MSD), or the Intergovernmental Council of Roads, is the road authority organisation of the Commonwealth of Independent States. MSD was founded in 1992 as the Interstate Council of Roads; in 1998 the Council of Roads was given intergovernmental organisation status. It assists in the cooperation between member road administrations in the fields of design, construction, maintenance and scientific and technological policy in the road sector. The Nordic Road Association (NVF) was established in 1935. The founding members were Denmark, Finland, Iceland, Norway and Sweden; the Faroe Islands became a member in 1975. North and South America The Consejo de Directores de Carreteras de Iberia e Iberoamérica (DIRCAIBEA), or Board of Directors of Iberia and Latin America Roads, was created in 1995. Twenty-two countries have representation in DIRCAIBEA: the two Iberian countries, Spain and Portugal, and 20 countries of the Americas and the Caribbean (Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Puerto Rico, the Dominican Republic, Uruguay and Venezuela). American Association of State Highway and Transportation Officials (AASHTO): although technically a national association of state authorities, AASHTO activities also include most Canadian provinces. References Civil engineering organizations Road authorities
Regional associations of road authorities
[ "Engineering" ]
503
[ "Civil engineering", "Civil engineering organizations" ]
52,897,329
https://en.wikipedia.org/wiki/Multi-time-step%20integration
In numerical analysis, multi-time-step integration, also referred to as multiple-step or asynchronous time integration, is a numerical time-integration method that uses different time-steps or time-integrators for different parts of the problem. There are different approaches to multi-time-step integration. They are based on domain decomposition and can be classified into strong (monolithic) or weak (staggered) schemes. Using different time-steps or time-integrators in the context of a weak algorithm is rather straightforward, because the numerical solvers operate independently. However, this is not the case in a strong algorithm. In the past few years a number of research articles have addressed the development of strong multi-time-step algorithms. In either case, strong or weak, the numerical accuracy and stability need to be carefully studied. Other approaches to multi-time-step integration in the context of operator splitting methods have also been developed, e.g. the multi-rate GARK method and multi-step methods for molecular dynamics simulations. References Numerical analysis Applied mathematics
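To illustrate the basic idea in the weak (staggered) setting, here is a minimal sketch; it is entirely hypothetical and not tied to any particular scheme from the literature. Two coupled scalar ODEs are advanced with explicit Euler: the slow component takes one large step while the fast component is sub-cycled with a smaller step and the slow value is held frozen.

def staggered_step(x_slow, x_fast, dt_slow, n_sub):
    """One macro step of a weak (staggered) two-rate explicit Euler scheme.

    Model problem (made up for illustration):
        x_slow' = -x_slow + x_fast
        x_fast' = -50*x_fast + x_slow
    The fast equation is sub-cycled n_sub times per slow step.
    """
    dt_fast = dt_slow / n_sub
    # Sub-cycle the fast component while the slow value is held frozen.
    for _ in range(n_sub):
        x_fast += dt_fast * (-50.0 * x_fast + x_slow)
    # Advance the slow component once, using the updated fast value.
    x_slow += dt_slow * (-x_slow + x_fast)
    return x_slow, x_fast

xs, xf = 1.0, 1.0
for _ in range(100):  # integrate to t = 1 with dt_slow = 0.01
    xs, xf = staggered_step(xs, xf, dt_slow=0.01, n_sub=10)
print(xs, xf)

A strong (monolithic) scheme would instead advance both components as one coupled system in each step, which is what makes mixing different step sizes or integrators technically harder in that setting.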
Multi-time-step integration
[ "Mathematics" ]
223
[ "Applied mathematics", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations" ]
54,221,153
https://en.wikipedia.org/wiki/Network%20Performance%20Monitoring%20Solution
Network Performance Monitor (NPM) in Operations Management Suite, a component of Microsoft Azure, monitors network performance between office sites, data centers, clouds and applications in near real-time. It helps a network administrator locate and troubleshoot bottlenecks such as network delay, data loss and availability of any network link across on-premises networks, Microsoft Azure VNets, Amazon Web Services VPCs, hybrid networks, VPNs or even public internet links. Network Performance Monitor Network Performance Monitor (NPM) is a network monitoring feature of the Operations Management Suite. NPM monitors the availability and quality of connectivity between multiple locations within and across campuses, private clouds and public clouds. It uses synthetic transactions to test for reachability and can be used on any IP network irrespective of the make and model of the network routers or switches deployed. Features A dashboard is generated to display summarized information about the network, including network health events, network links suspected to be unhealthy, and the subnetwork links with the most loss and most latency. Custom dashboards can also be created to find the state of the network at a point in time in the past. An interactive topology map is also generated to show the routes between nodes. A network administrator can use it to identify the unhealthy path and find the root cause of the issue. Alerts can be configured to send e-mails to stakeholders when a threshold is reached. Use Cases Two on-premises networks: Monitor connectivity between two office sites which could be connected using an MPLS WAN link or VPN. Multiple sites: Monitor connectivity to a central site from multiple sites, for example, scenarios where users from multiple office locations are accessing applications hosted at a central location. Hybrid networks: Monitor connectivity between on-premises networks and Azure VNets that could be connected using S2S VPN or ExpressRoute. Multiple virtual networks in the cloud: Monitor connectivity between multiple VNets in the same or different Azure regions; these could be peered VNets or VNets connected using a VPN. Any cloud: Monitor connectivity between Amazon Web Services and on-premises networks, and also between Amazon Web Services and Azure VNets. Operation It does not require any access to network devices. The Microsoft Monitoring Agent (MMA) or the OMS extension (valid only for virtual machines hosted in Azure) is to be installed on the servers in the subnetworks that are to be monitored. The OMS agent automatically downloads the Network Monitoring Intelligence Pack, which spawns an NPM agent that detects the subnets it is connected to; this information is sent to OMS. The NPM agent obtains the list of IP addresses of other agents from OMS and then starts active probes using Internet Control Message Protocol (ICMP) or Transmission Control Protocol (TCP) ping, and the roundtrip time for a ping between two nodes is used to calculate network performance metrics such as packet loss and link latency. This data is pushed to OMS, where it is used to create a customizable dashboard. A video-based demo of NPM is available online. Synthetic transactions NPM uses synthetic transactions to test for reachability and calculate network performance metrics across the network. Tests are performed using either TCP or ICMP, and users have the option of choosing between these protocols. Users must evaluate their environments and weigh the pros and cons of the protocols.
The following is a summary of the differences. TCP provides more accurate results compared to ICMP ECHO because routers and switches assign lower priority to ICMP ECHO packets compared to TCP ping. TCP needs configuration of the network firewall and the local firewall on the computers where agents are installed to allow traffic on default port 8084; other ports can also be chosen for this. ICMP does not need firewall configuration, but it needs more agents to provide information about all the paths between two subnets. Consequently, the OMS agent must be installed on more machines in the subnet as compared to when TCP is used. Timeline February 27, 2017 NPM Solution became generally available (GA). The launch was picked up by eWeek July 27, 2016 NPM solution was announced in the Public Preview Operating systems supported Windows Server 2008 SP 1 or later Linux distributions CentOS Linux 7 Red Hat Enterprise Linux 7.2 Ubuntu 14.04 LTS, 15.04, 16.04 LTS Debian 8 SUSE Linux Enterprise Server 12 Client operating systems Windows 7 SP1 or later Availability in regions Network Performance Monitor is available in the following Azure regions: Eastern US Western Europe South East Asia South East Australia West Central US South UK US Gov Virginia Data collection frequency TCP handshakes every 5 seconds, data sent every 3 minutes References Servers (computing) Network performance Network software Computer performance
Network Performance Monitoring Solution
[ "Technology", "Engineering" ]
981
[ "Network software", "Computer networks engineering", "Computer performance" ]
54,224,602
https://en.wikipedia.org/wiki/Open%20Pluggable%20Specification
Open Pluggable Specification (OPS) is a computing module plug-in format available for adding computing capability to flat panel displays. The format was first announced by NEC, Intel, and Microsoft in 2010. Computing modules in the OPS format are available on Intel- and ARM-based CPUs, running operating systems including Microsoft Windows and Google Android. The main benefit of using OPS in digital signage is to reduce downtime and maintenance cost by making it extremely easy to replace the computing module in case of a failure. Technical specification A computing module fully enclosed in a 180mm x 119mm x 30mm box JAE TX25 plug connector and TX24 receptacle 80-pin contacts Supported interfaces: Power HDMI/DVI and DisplayPort Audio USB 2.0/3.0 UART OPS control signals Pin definition Succession The OPS format is planned to be succeeded by the Smart Display Module (SDM) format. References Display technology
Open Pluggable Specification
[ "Engineering" ]
188
[ "Electronic engineering", "Display technology" ]
54,225,122
https://en.wikipedia.org/wiki/Trafermin
Trafermin (brand name Fiblast), also known as recombinant human basic fibroblast growth factor (rhbFGF), is a recombinant form of human basic fibroblast growth factor (bFGF) which is marketed in Japan as a topical spray for the treatment of skin ulcers. It is also currently in preregistration for the treatment of periodontitis. As a recombinant form of bFGF, trafermin is a potent agonist of the FGFR1, FGFR2, FGFR3, and FGFR4. The drug has been marketed in Japan since June 2001. References External links Trafermin - AdisInsight Growth factors Human proteins Recombinant proteins
Trafermin
[ "Chemistry", "Biology" ]
165
[ "Protein stubs", "Biotechnology products", "Growth factors", "Recombinant proteins", "Signal transduction", "Biochemistry stubs" ]
54,226,663
https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20Africa
Genetically modified (GM) crops have been commercially cultivated in four African countries: South Africa, Burkina Faso, Egypt and Sudan. South Africa, which began cultivation in 1998, is the major grower of GM crops, with Burkina Faso and Egypt starting in 2008. Sudan grew GM cotton in 2012. Other countries, with the aid of international governments and foundations, are conducting trials and research on crops important for Africa. Crops under research for use in Africa include cotton, maize, cassava, cowpea, sorghum, potato, banana, sweet potato, sugar cane, coconut, squash and grape. As well as disease, insect and virus resistance, some of the research projects focus on traits particularly crucial for Africa, such as drought resistance and biofortification. In 2010, after nine years of talks, the Common Market for Eastern and Southern Africa (COMESA) produced a draft policy on GM technology, which was sent to all 19 national governments for consultation in September 2010. Under the proposed policy, new GM crops would be scientifically assessed by COMESA. If the GM crop was deemed safe for the environment and human health, permission would be granted for the crop to be grown in all 19 member countries, although the final decision would be left to each individual country. Kenya passed laws in 2011, and Ghana and Nigeria passed laws in 2012, which allowed the production and importation of GM crops. By 2013 Cameroon, Malawi and Uganda had approved trials of genetically altered crops. Ethiopia has also revised its biosafety laws and in 2015 was trying to source GM cotton seeds for trials. A study investigating voluntary labeling in South Africa found that 31% of products labeled GMO-free had a GM content above 1.0%. Studies from 2011 for Uganda showed that transgenic bananas had a high potential to reduce rural poverty, but that urban consumers with a relatively higher income might reject their introduction. In 2002, Zambia cut off the flow of genetically modified food (mostly maize) from the UN's World Food Programme on the basis of the Cartagena Protocol. This left the population without food aid during a famine. In December 2005 the Zambian government changed its position in the face of further famine and allowed the importation of GM maize. However, the Zambian Minister for Agriculture Mundia Sikatana insisted in 2006 that the ban on genetically modified maize remained, saying "We do not want GM (genetically modified) foods and our hope is that all of us can continue to produce non-GM foods." References Genetic engineering by country
Genetically modified food in Africa
[ "Engineering", "Biology" ]
505
[ "Genetic engineering", "Genetic engineering by country", "Biotechnology by country" ]
54,230,166
https://en.wikipedia.org/wiki/Cyber%20PHA
A cyber PHA or cyber HAZOP is a safety-oriented methodology to conduct a cybersecurity risk assessment for an industrial control system (ICS) or safety instrumented system (SIS). It is a systematic, consequence-driven approach that is based upon industry standards such as ISA 62443-3-2, ISA TR84.00.09, ISO/IEC 27005:2018, ISO 31000:2009 and NIST Special Publication (SP) 800-39. The names cyber PHA and cyber HAZOP were given to this method because it is similar to the process hazard analysis (PHA) or hazard and operability (HAZOP) studies that are popular in process safety management, particularly in industries that operate highly hazardous industrial processes (e.g. oil and gas, chemical, etc.). The cyber PHA or cyber HAZOP methodology reconciles the process safety and cybersecurity approaches and requires the instrumentation, operations and engineering disciplines to collaborate. Modeled on the process safety PHA/HAZOP methodology, a cyber PHA/HAZOP enables cyber hazards to be identified and analyzed in the same manner as any other process risk, and, because it can be conducted as a separate follow-on activity to a traditional HAZOP, it can be used in both existing brownfield sites and newly constructed greenfield sites without unduly meddling with well-established process safety processes. The technique is typically used in a workshop environment that includes a facilitator and a scribe with expertise in the cyber PHA/HAZOP process, as well as multiple subject matter experts who are familiar with the industrial process, the industrial automation and control system (IACS) and related IT systems. The workshop team typically includes representatives from operations, engineering, IT and health and safety. A multidisciplinary team is important in developing realistic threat scenarios, assessing impacts and achieving consensus on how realistic the threat is, on the known vulnerabilities and on the existing countermeasures. The facilitator and scribe are typically responsible for gathering and organizing all of the information required to conduct the workshop (e.g. system architecture diagrams, vulnerability assessments, and previous PHA/HAZOPs) and training the workshop team on the method, if necessary. A worksheet is commonly used to document the cyber PHA/HAZOP assessment. Various spreadsheet templates, databases and commercial software tools have been developed to support the cyber method. The organization's risk matrix is typically integrated directly into the worksheet to facilitate assessment of severity and likelihood and to look up the resulting risk score. The workshop facilitator guides the team through the process and strives to gather all input, reach consensus and keep the process proceeding smoothly. The workshop proceeds until all zones and conduits have been assessed. The results are then consolidated and reported to the workshop team and appropriate stakeholders. Another popular safety-oriented methodology for conducting ICS cybersecurity risk assessments is the cyber bowtie method. Cyber bowtie is based on the proven bow-tie diagram technique but adapted to assess cybersecurity risk.
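As a purely illustrative sketch of the risk-matrix lookup step mentioned above (the matrix size, score bands and labels here are hypothetical and not taken from any standard or from the article), a worksheet row's severity and likelihood ratings can be mapped to a risk score and band like this:

# Hypothetical 5x5 risk matrix: severity and likelihood are each rated 1..5.
RISK_BANDS = [(1, 4, "Low"), (5, 9, "Medium"), (10, 15, "High"), (16, 25, "Critical")]

def risk_score(severity: int, likelihood: int):
    """Look up the risk score and band for one cyber PHA worksheet row."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    score = severity * likelihood
    band = next(label for lo, hi, label in RISK_BANDS if lo <= score <= hi)
    return score, band

# Example row: a scenario rated severity 4 (major) and likelihood 3 (possible).
print(risk_score(4, 3))  # -> (12, 'High')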
References External links Safety requires cybersecurity Security process hazard analysis review Cyber Security Risk Analysis for Process Control Systems Using Rings of Protection Analysis Building Cybersecurity into a Greenfield ICS Project Intro to Cyber PHA Video: Cyber PHA Overview Video Video: Cyber Process Hazards Analysis (PHA) to Assess ICS Cybersecurity Risk presentation at S4x17 Video: Consequence Based ICS Risk Management presentation at S4x19 How Secure are your Process Safety Systems? Process Safety & Cybersecurity Securing ICS Safety Requires Cybersecurity The Familial Relationship between Cybersecurity and Safety Cybersecurity Depends on Up-to-Date Intelligence Cybersecurity Risk Assessment Dale Peterson Unsolicited Response Podcast: Truth or Consequences Impact assessment Evaluation methods Process safety Risk analysis methodologies Management cybernetics
Cyber PHA
[ "Chemistry", "Engineering" ]
825
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
41,319,353
https://en.wikipedia.org/wiki/International%20Facility%20for%20Food%20Irradiation%20Technology
The International Facility for Food Irradiation Technology (IFFIT) was a research and training centre at the Institute of Atomic Research in Agriculture in Wageningen, Netherlands, sponsored by the Food and Agriculture Organization (FAO) of the United Nations, the International Atomic Energy Agency (IAEA) and the Dutch Ministry of Agriculture and Fisheries. Aims The organisation's aim was to address food loss and food safety in developing countries by speeding up the practical introduction of the food irradiation process. They achieved this by training initiatives, research and feasibility studies. It was founded in 1978 and was operational until 1990, and during those twelve years over four hundred key personnel from over fifty countries were trained in aspects of food irradiation, making a significant contribution to the development and use of the radiation process. The Facility also co-ordinated research into the technology, economics and implementation of food irradiation, assisted in the assessment of the feasibility of using radiation to preserve foodstuffs, and evaluated trial shipments of irradiated material. Facilities The Facility had a pilot plant with a cobalt-60 source whose activity was , which was stored underwater. Drums or boxes containing products were placed on rotating tables or conveyor belts, and irradiation took place by raising the source out of the pool. Details During IFFIT's first five years of operation, 109 scientists from 40 countries attended six training courses, five of them being general training courses on food irradiation and the sixth being a specialised course on public health aspects. IFFIT also evaluated shipments of irradiated mangoes, spices, avocado, shrimp, onions and garlic, and produced 46 reports. The publications are available on WorldCat. One trainee noted that Professor D. A. A. Mossel (1918–2004) assisted with the training courses with what he described as "remarkably suggestive lectures and his phenomenal foreign language abilities". From 1988 onwards, Ari Brynjolfsson was director of IFFIT. References External links List of publications produced by the International Facility for Food Irradiation Technology Food preservation International organizations based in Europe Radiation Wageningen History of agriculture in the Netherlands
International Facility for Food Irradiation Technology
[ "Physics", "Chemistry" ]
440
[ "Transport phenomena", "Waves", "Physical phenomena", "Radiation" ]
41,321,254
https://en.wikipedia.org/wiki/Polar%20point%20group
In geometry, a polar point group is a point group in which there is more than one point that every symmetry operation leaves unmoved. The unmoved points will constitute a line, a plane, or all of space. While the simplest point group, C1, leaves all points invariant, most polar point groups will move some, but not all, points. To describe the points which are unmoved by the symmetry operations of the point group, we draw a straight line joining two unmoved points. This line is called a polar direction. The electric polarization must be parallel to a polar direction. In polar point groups of high symmetry, the polar direction can be a unique axis of rotation, but if the symmetry operations do not allow any rotation at all, such as mirror symmetry, there can be an infinite number of such axes: in that case the only restriction on the polar direction is that it must be parallel to any mirror planes. A point group with more than one axis of rotation or with a mirror plane perpendicular to an axis of rotation cannot be polar. Polar crystallographic point group Of the 32 crystallographic point groups, 10 are polar: C1, C2, Cs, C2v, C3, C3v, C4, C4v, C6 and C6v (in Hermann–Mauguin notation: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6 and 6mm). The space groups associated with a polar point group do not have a discrete set of possible origin points that are unambiguously determined by symmetry elements. When materials having a polar point group crystal structure are heated or cooled, they may temporarily generate a voltage called pyroelectricity. Molecular crystals which have symmetry described by one of the polar space groups, such as sucrose, may exhibit triboluminescence. References Symmetry Crystallography Group theory
Polar point group
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
329
[ "Materials science", "Group theory", "Fields of abstract algebra", "Crystallography", "Condensed matter physics", "Geometry", "Symmetry" ]
41,323,011
https://en.wikipedia.org/wiki/Regularization%20by%20spectral%20filtering
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart. Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems, focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator by a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter, a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise. The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues". Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD). As for choosing the regularization parameter, examples of candidate methods to compute this parameter include the discrepancy principle, generalized cross-validation, and the L-curve criterion. It is of note that the notion of spectral filtering studied in the context of machine learning is closely connected to the literature on function approximation (in signal processing). Notation The training set is defined as S = {(x1, y1), …, (xn, yn)}, where X is the n × d input matrix and Y = (y1, …, yn) is the output vector. Where applicable, the kernel function is denoted by k, and the n × n kernel matrix is denoted by K, which has entries Kij = k(xi, xj); H denotes the reproducing kernel Hilbert space (RKHS) with kernel k. The regularization parameter is denoted by λ. (Note: For g ∈ G and f ∈ F, with G and F being Hilbert spaces, given a linear, continuous operator L, assume that Lf = g holds. In this setting, the direct problem would be to solve for g given f, and the inverse problem would be to solve for f given g. If the solution exists, is unique and is stable, the inverse problem (i.e. the problem of solving for f) is well-posed; otherwise, it is ill-posed.) Relation to the theory of ill-posed inverse problems The connection between the regularized least squares (RLS) estimation problem (Tikhonov regularization setting) and the theory of ill-posed inverse problems is an example of how spectral regularization algorithms are related to the theory of ill-posed inverse problems. The RLS estimator solves min over f ∈ H of (1/n) Σi (yi − f(xi))² + λ‖f‖H², and the RKHS allows for expressing this RLS estimator as f(x) = Σi ci k(x, xi), where the coefficient vector c = (c1, …, cn) satisfies (K + nλI)c = Y, with I the identity matrix. The penalization term λ‖f‖H² is used for controlling smoothness and preventing overfitting.
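As a small numerical sketch of this setting (illustrative only; the kernel, data and value of λ below are arbitrary), the RLS/Tikhonov coefficients can be computed either directly from (K + nλI)c = Y or, equivalently, by applying the filter G(σ) = 1/(σ + nλ) to each eigenvalue of K, which is exactly the spectral-filtering view developed below:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)                      # inputs
y = np.sin(3 * x) + 0.1 * rng.standard_normal(20)    # noisy outputs
n, lam = len(x), 1e-2

# Gaussian kernel matrix K_ij = k(x_i, x_j).
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.5)

# (1) Direct Tikhonov / RLS solution of (K + n*lam*I) c = Y.
c_direct = np.linalg.solve(K + n * lam * np.eye(n), y)

# (2) The same solution via spectral filtering of the eigenvalues of K.
sigma, Q = np.linalg.eigh(K)          # K = Q diag(sigma) Q^T
G = 1.0 / (sigma + n * lam)           # Tikhonov filter applied to each eigenvalue
c_filtered = Q @ (G * (Q.T @ y))

print(np.allclose(c_direct, c_filtered))  # True

Swapping the filter G above for a different function of the eigenvalues (for instance the truncated or iterative filters discussed below) yields the other algorithms in this family.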
Since the solution of empirical risk minimization can be written as f(x) = Σj cj k(x, xj) such that Kc = Y, adding the penalty function amounts to the following change in the system that needs to be solved: (K + nλI)c = Y. In this learning setting, the kernel matrix can be decomposed as K = QΣQᵀ, with Σ = diag(σ1, …, σn) containing the eigenvalues of K and q1, …, qn the corresponding eigenvectors. Therefore, in the initial learning setting, the following holds: c = K⁻¹Y = QΣ⁻¹QᵀY = Σi (1/σi)⟨qi, Y⟩qi. Thus, for small eigenvalues, even small perturbations in the data can lead to considerable changes in the solution. Hence, the problem is ill-conditioned, and solving this RLS problem amounts to stabilizing a possibly ill-conditioned matrix inversion problem, which is studied in the theory of ill-posed inverse problems; in both problems, a main concern is to deal with the issue of numerical stability. Implementation of algorithms Each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function, denoted here by Gλ(·). If the kernel matrix is denoted by K, then Gλ should control the magnitude of the smaller eigenvalues of K. In a filtering setup, the goal is to find estimators f(x) = Σj cj k(x, xj) where c = Gλ(K)Y. To do so, a scalar filter function Gλ(σ) is defined using the eigen-decomposition of the kernel matrix: Gλ(K) = QGλ(Σ)Qᵀ, which yields c = Gλ(K)Y = Σi Gλ(σi)⟨qi, Y⟩qi. Typically, an appropriate filter function should have the following properties: As λ goes to zero, Gλ(σ) → 1/σ. The magnitude of Gλ on the (smaller) eigenvalues of K is controlled by λ. While the above items give a rough characterization of the general properties of filter functions for all spectral regularization algorithms, the derivation of the filter function (and hence its exact form) varies depending on the specific regularization method that spectral filtering is applied to. Filter function for Tikhonov regularization In the Tikhonov regularization setting, the filter function for RLS is described below. As shown above, in this setting, c = (K + nλI)⁻¹Y. Thus, c = Σi (1/(σi + nλ))⟨qi, Y⟩qi. The undesired components are filtered out using regularization: If σ ≫ nλ, then 1/(σ + nλ) ≈ 1/σ. If σ ≪ nλ, then 1/(σ + nλ) ≈ 1/(nλ). The filter function for Tikhonov regularization is therefore defined as: Gλ(σ) = 1/(σ + nλ). Filter function for Landweber iteration The idea behind the Landweber iteration is gradient descent: c0 := 0 for i = 1, ..., t − 1 ci := ci−1 + η(Y − Kci−1) end In this setting, if κ is larger than the largest eigenvalue of K, the above iteration converges by choosing η = 2/κ as the step size. The above iteration is equivalent to minimizing the empirical risk ‖Y − Kc‖²/n via gradient descent; using induction, it can be proved that at the t-th iteration, the solution is given by ct = η Σi=0…t−1 (I − ηK)ⁱY. Thus, the appropriate filter function is defined by: Gλ(σ) = η Σi=0…t−1 (1 − ησ)ⁱ. It can be shown that this filter function corresponds to a truncated power expansion of 1/σ; to see this, note that the relation 1/σ = η Σi≥0 (1 − ησ)ⁱ would still hold if σ is replaced by a matrix; thus, if K (the kernel matrix), or rather ηK, is considered, the following holds: K⁻¹ ≈ η Σi=0…t−1 (I − ηK)ⁱ. In this setting, the number of iterations gives the regularization parameter; roughly speaking, λ ≈ 1/t. If t is large, overfitting may be a concern. If t is small, oversmoothing may be a concern. Thus, choosing an appropriate time for early stopping of the iterations provides a regularization effect. Filter function for TSVD In the TSVD setting, given the eigen-decomposition K = QΣQᵀ and using a prescribed threshold nλ, a regularized inverse can be formed for the kernel matrix by discarding all the eigenvalues that are smaller than this threshold.
Thus, the filter function for TSVD can be defined as Gλ(σ) = 1/σ if σ ≥ nλ, and Gλ(σ) = 0 otherwise. It can be shown that TSVD is equivalent to the (unsupervised) projection of the data using (kernel) principal component analysis (PCA), and that it is also equivalent to minimizing the empirical risk on the projected data (without regularization). Note that the number of components kept for the projection is the only free parameter here. References Mathematical analysis Inverse problems Computer engineering
Regularization by spectral filtering
[ "Mathematics", "Technology", "Engineering" ]
1,491
[ "Mathematical analysis", "Computer engineering", "Applied mathematics", "Inverse problems", "Electrical engineering" ]
57,540,823
https://en.wikipedia.org/wiki/Advanced%20Propulsion%20Centre
The Advanced Propulsion Centre (APC) is a non-profit organisation that facilitates funding to UK-based research and development projects developing net-zero emission technologies. It is headquartered at the University of Warwick in Coventry, England. The APC manages a £1 billion investment fund, which is jointly supplied by the automotive industry – via the Automotive Council – and the UK government through the Department for Business and Trade (DBT) and managed by Innovate UK. History The APC was founded in 2013 as a joint venture between the automotive industry and UK government to "research, develop and commercialise technologies for vehicles of the future". Both government and the automotive industry committed to investing £500 million each, totalling £1 billion over a ten year period. The creation of the APC was part of the coalition government's automotive industrial strategy. In January 2014, Gerhard Schmidt was appointed as Chair and Tony Pixton as Chief Executive. It announced its first round of funding in April 2014, awarding £28.8 million funding to projects worth £133 million, led by Cummins, Ford, GKN and JCB. The Advanced Propulsion Centre was officially opened by Vince Cable in November 2014. Ian Constance was appointed Chief Executive in September 2015. In the 2015 Autumn Statement, the Chancellor, George Osborne, announced that an additional £225 million budget for automotive research and development would be facilitated by the APC. Funding competitions The Advanced Propulsion Centre awards funding to consortia of organizations including vehicle manufacturers, tier 1 automotive suppliers, SMEs and academic institutions, which are developing low carbon powertrain technology. The APC has several kinds of funding mechanisms available: Advanced Route to Market Demonstrator (ARMD) Automotive Transformation Fund Collaborative R&D Competitions Production Readiness Competition Technology Demonstrator Accelerator Programme (TDAP) Spokes The Advanced Propulsion Centre operates a 'hub and spoke' model, where the 'hub' is its headquarters at the University of Warwick, and the 'spokes' are universities across the UK with specialisms in particular areas of net-zero emission vehicle technology. Spoke locations: Newcastle University - Newcastle upon Tyne, England – Electric Machines University of Nottingham – Nottingham, England – Power Electronics University of Warwick – Coventry, England – Electrical Energy Storage University of Bath – Bath, England – TPS System Efficiency Loughborough University – London, England – Institute of Digital Engineering University of Brighton – Brighton, England – TPS Thermal Efficiency Activities In April 2018, APC announced that an APC-funded project has enabled Ford to develop new low emissions technology, which will go into production on its 1.0-litre EcoBoost engine. In February 2018, Nissan completed an APC-funded project with Hyperdrive, the Newcastle University, Warwick Manufacturing Group and Zero Carbon futures, to develop a new production process for its 40kWh battery cells. The cells are produced in Sunderland, England, and are fitted to the Nissan Leaf. In January 2018, Yasa, an electric motor manufacturer based in Oxford, England, opened a new factory to produce 100,000 motors per year, using APC funding. The facility created 150 jobs, with 80% of production expected to be exported. In September 2017, the Metropolitan Police trialled a fleet of hydrogen-powered Suzuki Burgman scooters, which were developed as part of an APC-funded project. 
In January 2017, an APC grant allowed Ford to begin a 12-month pilot of its Transit Custom Plug-in Hybrid in London, England. See also Automotive Council Innovate UK Department for Business, Energy and Industrial Strategy Society of Motor Manufacturers and Traders References External links Official website Automotive industry in the United Kingdom College and university associations and consortia in the United Kingdom Emissions reduction Engineering education in the United Kingdom Engineering research institutes Engineering university associations and consortia Innovation in the United Kingdom Non-profit organisations based in the United Kingdom Research institutes in the West Midlands (county) University of Warwick Vehicle emission controls
Advanced Propulsion Centre
[ "Chemistry", "Engineering" ]
799
[ "Greenhouse gases", "Engineering research institutes", "Emissions reduction" ]
61,775,878
https://en.wikipedia.org/wiki/Simulation%20of%20Urban%20MObility
Simulation of Urban MObility (Eclipse SUMO or simply SUMO) is an open source, portable, microscopic and continuous multi-modal traffic simulation package designed to handle large networks. SUMO is developed by the German Aerospace Center and community users. It has been freely available as open source since 2001, and since 2017 it has been an Eclipse Foundation project. Purpose Traffic simulation within SUMO uses software tools for simulation and analysis of road traffic and traffic management systems. New traffic strategies can be implemented via a simulation for analysis before they are used in real-world situations. SUMO has also been proposed as a toolchain component for the development and validation of automated driving functions via various X-in-the-Loop and digital twin approaches. SUMO is used for research purposes such as traffic forecasting, evaluation of traffic lights, route selection, or in the field of vehicular communication systems. SUMO users are able to make changes to the program source code through the open-source license to experiment with new approaches. Projects SUMO was used in the following national and international projects: AMITRAN, an assessment methodology for ICT applied to the transport sector via intelligent transportation systems (ITS). COLOMBO CityMobil, a project for the integration of automated transport systems in the urban environment. Completed in 2011. DRIVE C2X iTETRIS Soccer traffic data collection from the air during the 2006 FIFA World Cup football championship VABENE project to improve safety at mass events. See also Intelligent transportation system Traffic optimization Traffic estimation and prediction system References Notes External links SUMO website SUMO Documentation Repository on GitHub Traffic simulation Transportation engineering Free simulation software
Simulation of Urban MObility
[ "Engineering" ]
320
[ "Civil engineering", "Transportation engineering", "Industrial engineering" ]
61,780,076
https://en.wikipedia.org/wiki/MAZ-529
The MAZ-529 (МАЗ-529) is a uniaxial tractor designed by the Soviet vehicle manufacturer Minsky Automobilny Zavod (MAZ), which started production in 1959. From 1958, production of this type was relocated to MoAZ as part of the specialization of the Soviet automobile industry and continued there until 1973 under the name MoAZ-529. Background Prior to 1955 American models of heavy uniaxial tractors had been introduced to the Soviet Union. For example, from 1955 on there were tests with these very vehicles in order to examine them for their possible applications. In 1956 the first indigenous prototype of such a tractor was built at MAZ. This too was extensively tested, with the goal of universal applicability. Trailers were built and tested in the form of scrapers, cement mixers, simple flatbeds and even an artificial ice rink. After completing the tests, MAZ quickly started series production of the machines. Even in 1956, the Soviet automobile industry was already so specialized that the complete train was no longer manufactured by MAZ. Only the tractor was made in Minsk, the trailer (in the standard production version a scraper) was already built at MoAZ. Since the tractor is not able to drive on its own without a trailer, a much smaller support wheel was mounted at the front to prevent the vehicle tipping over. Before normal operation, this jockey wheel was removed. These vehicles should not be confused with single axle tractors, even if they also have only one axis and are used as a tractor for attachments. A special feature of the vehicle is that no suspension was installed. Only the big tires are used for damping. In 1965 the improved MAZ-529E was built at MoAZ. It had an increased output of and was also manufactured until 1969, before being replaced by the MoAZ-546P. Production of the MoAZ-529 continued until 1973. Technical data The information refers to the basic version of the MAZ-529. Engine: six-cylinder two-stroke diesel engine Engine type: JaAZ -206 (according to other data also a version with JaAZ-204 four-cylinder diesel engine) Power: 121 kW (165 hp) (or 120 hp with JaAZ-204 engine) Transmission: manual five-speed gearbox Top speed: 40 km / h Tire dimension: 21.00-28 (1790 mm diameter) Trailer as a scraper: D-357 Total permissible weight of the train: 34 tons Drive formula (tractor): (2 × 2) References Military vehicles of the Soviet Union Bulldozers
MAZ-529
[ "Engineering" ]
539
[ "Engineering vehicles", "Bulldozers" ]
61,784,960
https://en.wikipedia.org/wiki/Ethylene%20bis%28iodoacetate%29
Ethylene bis(iodoacetate), also known as S-10, is the iodoacetate ester of ethylene glycol. It is an alkylating agent that has been studied as an anticancer drug. See also Ethylene glycol Iodoacetic acid Ethyl iodoacetate Iodoacetamide References Alkylating agents Iodoacetates Glycol esters
Ethylene bis(iodoacetate)
[ "Chemistry" ]
94
[ "Alkylating agents", "Reagents for organic chemistry" ]
44,181,442
https://en.wikipedia.org/wiki/Arcadis
Arcadis NV is a global design, engineering and management consulting company based in the Zuidas, Amsterdam, Netherlands. It currently operates in excess of 350 offices across 40 countries. The company is a member of the Next 150 index. Arcadis was founded as the land reclamation specialist Nederlandsche Heidemaatschappij in 1888. Over the following decades, the firm became involved in various development projects, initially with a rural focus. As a consequence of a restructuring in 1972 that divided the company, it became Heidemij. During 1993, the firm merged with the North American business Geraghty & Miller, resulting in its listing on the Nasdaq index. During 1997, the company adopted its current name, Arcadis; subsidiaries were similarly rebranded. Since 1990, the company has largely expanded itself via a series of acquisitions and mergers, which have allowed it to both expand its presence in existing markets as well as to enter new ones. It performs design and consultancy services on a wide variety of undertakings. Arcadis (either directly or via subsidiaries) has been involved in several high profile construction projects, including London City Airport and the A2 motorway. History 19th century: foundation The company has its origins in the Nederlandsche Heidemaatschappij (English: Association for Wasteland Redevelopment), a land reclamation company founded in the Netherlands in 1888. As such, its original business focus was on the encouragement of agricultural development in the Dutch heather lands. 20th century: domestic, US and Brazil expansion In the early 20th century the Heidemaatschappij was involved in all aspects of rural development including irrigation and forestry. In 1972, it was decided to heavily restructure the company, resulting in its division into two separate entities, the Association (or KNHM), which continued the company's traditional activities, and Heidemij, which became the commercial arm. In 1993, the company merged with Geraghty & Miller, granting the new Arcadis a presence in the North American market and an initial listing on the Nasdaq index. Geraghty and Miller, headquartered in Long Island, New York, was subsequently rebranded as Arcadis North America. Two years later, Heidemij became a listed company on the Next 150 index. During October 1997, the company opted to rebrand itself, changing its name to Arcadis. Two years later, it established a presence in the Brazilian market via the firm's acquisition of Logos Engenharia. 2000s: UK expansion In June 2005, Axtell Yates Hallett was acquired and became a subsidiary of Arcadis NV, being rebranded as Arcadis UK. AYH was a British quantity surveying firm, founded by Stanley Axtell and his colleagues Messrs Yates and Hallett in the City of London in 1946. Since 1946, Axtell Yates Hallett firm grew steadily and broadened both its service base to include firstly project management and subsequently building surveying and facilities consultancy and its area of operation with the opening of regional and overseas offices in the United Kingdom. Following a period of retrenchment during the economic recession of the beginning of the 1990s, the firm was incorporated as the AYH Partnership in February 1994. One of the firm's last projects was Arsenal F.C.'s Emirates Stadium. In April 2006, Arcadis acquired Summerfield Robb Clark, a practice in Scotland. Four months later, it also bought Berkeley Consulting, a practice specialising in infrastructure work. 
Arcadis UK merged with EC Harris on 2 November 2011, after a vote of EC Harris' 183 partners on 31 October 2011. In April 2012, the company acquired Langdon & Seah, an international construction consultancy company. 2010s–2020s: EU and North American expansion In October 2011, Arcadis acquired EC Harris, an international built asset consultancy firm headquartered in the United Kingdom. In 2012, Arcadis purchased Langdon & Seah, an Asia-based cost and project management consultancy. In October 2013, Princess Beatrix of the Netherlands visited Arcadis as part of the celebrations for its 125th anniversary. In 2014, Arcadis purchased Hyder Consulting, a multi-national advisory and design consultancy specializing in the transport, property, utilities and environmental sectors. In October 2014, the company acquired Hyder Consulting for £296 million. Hyder Consulting, now an integrated component of Arcadis, can trace its roots back to as early as 1739. In 2014, Arcadis purchased Callison, an international architecture firm based in Seattle, Washington. In October 2015, a new subsidiary, CallisonRTKL, was formed through the merger of two existing Arcadis subsidiaries, Callison and RTKL. In March 2017, Arcadis announced that its Supervisory Board had nominated Peter Oosterveer as CEO and Chairman of the Arcadis Executive Board. In 2020, Arcadis acquired Over Morgen, a Dutch development and energy transition consulting firm. In 2020, Arcadis acquired EAMS Group, a provider of enterprise asset management systems. In July 2022, Arcadis announced it was acquiring IBI Group, a global architectural and engineering services company based in Canada, to be effective in September 2022. In October 2022, Arcadis announced CEO Peter Oosterveer's retirement, and that current Chief Operating Officer Alan Brookes would be nominated as his successor in 2023. In December 2022, Arcadis acquired DPS Group, an Ireland-based international construction consultancy specializing in life sciences and semiconductor facilities. Projects Projects in which Arcadis was involved include: London City Airport, completed in 1987 Millau Viaduct, France, completed in 2004 Tietê River Project, Brazil, completed in 2015 Skip Spann Connector Bridge, Georgia, completed in 2016 Tunnel section of A2 motorway, Netherlands, completed in 2019 Ten stations on the Doha Metro Gold Line, completed in 2019 Long Beach International Gateway, California, completed in 2020 Port of Calais Breakwater, France, completed in 2020 Six stations on the Sydney Metro City & Southwest, Australia, due to be completed in 2024 Rail systems for Old Oak Common railway station, due to be completed in 2032 Publications The Arcadis Sustainable Cities Index 2022 2022 Global Construction Disputes Report International Construction Costs 2022 References Companies based in Amsterdam Companies listed on Euronext Amsterdam Engineering companies of the Netherlands International engineering consulting firms Multinational companies headquartered in the Netherlands Dutch companies established in 1888
Arcadis
[ "Engineering" ]
1,323
[ "Engineering consulting firms", "International engineering consulting firms" ]
44,182,725
https://en.wikipedia.org/wiki/Composite%20Higgs%20models
In particle physics, composite Higgs models (CHM) are speculative extensions of the Standard Model (SM) where the Higgs boson is a bound state of new strong interactions. These scenarios are models for physics beyond the SM presently tested at the Large Hadron Collider (LHC) in Geneva. In all composite Higgs models the Higgs boson is not an elementary particle (or point-like) but has finite size, perhaps around 10−18 meters. This dimension may be related to the Fermi scale (100 GeV) that determines the strength of the weak interactions such as in β-decay, but it could be significantly smaller. Microscopically the composite Higgs will be made of smaller constituents in the same way as nuclei are made of protons and neutrons. History Often referred to as "natural" composite Higgs models, CHMs are constructions that attempt to alleviate fine-tuning or "naturalness" problem of the Standard Model. These typically engineer the Higgs boson as a naturally light pseudo-Goldstone boson or Nambu-Goldstone field, in analogy to the pion (or more precisely, like the K-mesons) in QCD. These ideas were introduced by Georgi and Kaplan as a clever variation on technicolor theories to allow for the presence of a physical low mass Higgs boson. These are forerunners of Little Higgs theories. In parallel, early composite Higgs models arose from the heavy top quark and its renormalization group infrared fixed point, which implies a strong coupling of the Higgs to top quarks at high energies. This formed the basis of top quark condensation theories of electroweak symmetry breaking in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks. This was described by Yoichiro Nambu and subsequently developed by Miransky, Tanabashi, and Yamawaki and Bardeen, Hill, and Lindner, who connected the theory to the renormalization group and improved its predictions. While these ideas are still compelling, they suffer from a "naturalness problem", a large degree of fine-tuning. To remedy the fine tuning problem, Chivukula, Dobrescu, Georgi and Hill introduced the "Top See-Saw" model in which the composite scale is reduced to the several TeV (trillion electron volts, the energy scale of the LHC). A more recent version of the Top Seesaw model of Dobrescu and Cheng has an acceptable light composite Higgs boson. Top Seesaw models have a nice geometric interpretation in theories of extra dimensions, which is most easily seen via dimensional deconstruction (the latter approach does away with the technical details of the geometry of the extra spatial dimension and gives a renormalizable D-4 field theory). These schemes also anticipate "partial compositeness". These models are discussed in the extensive review of strong dynamical theories of Hill and Simmons. CHMs typically predict new particles with mass around a TeV (or tens of TeV as in the Little Higgs schemes) that are excitations or ingredients of the composite Higgs, analogous to the resonances in nuclear physics. The new particles could be produced and detected in collider experiments if the energy of the collision exceeds their mass or could produce deviations from the SM predictions in "low energy observables" – results of experiments at lower energies. Within the most compelling scenarios each Standard Model particle has a partner with equal quantum numbers but heavier mass. For example, the photon, W and Z bosons have heavy replicas with mass determined by the compositeness scale, expected around 1 TeV. 
Though naturalness requires that new particles exist with mass around a TeV which could be discovered at the LHC or future experiments, as of 2018 no direct or indirect signs that the Higgs or other SM particles are composite have been detected. From the LHC discovery of 2012, it is known that there exists a physical Higgs boson (a weak iso-doublet) that condenses to break the electro-weak symmetry. This differs from the prediction of ordinary technicolor theories, where new strong dynamics directly breaks the electro-weak symmetry without the need for a physical Higgs boson. The CHM proposed by Georgi and Kaplan was based on known gauge theory dynamics that produces the Higgs doublet as a Goldstone boson. It was later realized, as with the case of Top Seesaw models described above, that this can naturally arise in five-dimensional theories, such as the Randall–Sundrum scenario, or by dimensional deconstruction. These scenarios can also be realized in hypothetical strongly coupled conformal field theories (CFT) and the AdS-CFT correspondence. This spurred activity in the field. At first the Higgs was a generic scalar bound state. In influential later work the Higgs as a Goldstone boson was realized in CFTs. Detailed phenomenological studies showed that within this framework agreement with experimental data can be obtained with a mild tuning of parameters. The more recent work on the holographic realization of CHM, which is based on the AdS/QCD correspondence, provided an explicit realization of the strongly coupled sector of CHM and the computation of meson masses, decay constants and the top-partner mass. Examples CHM can be characterized by the mass (m) of the lightest new particles and their coupling (g). The latter is expected to be larger than the SM couplings for consistency. Various realizations of CHM exist that differ in the mechanism that generates the Higgs doublet. Broadly they can be divided into two categories: the Higgs is a generic bound state of strong dynamics, or the Higgs is a Goldstone boson of spontaneous symmetry breaking. In both cases the electro-weak symmetry is broken by the condensation of a Higgs scalar doublet. In the first type of scenario there is no a priori reason why the Higgs boson is lighter than the other composite states, and moreover larger deviations from the SM are expected. Higgs as Goldstone boson These are essentially Little Higgs theories. In this scenario the existence of the Higgs boson follows from the symmetries of the theory. This makes it possible to explain why this particle is lighter than the rest of the composite particles, whose mass is expected from direct and indirect tests to be around a TeV or higher. It is assumed that the composite sector has a global symmetry G spontaneously broken to a subgroup H, where G and H are compact Lie groups. Contrary to technicolor models, the unbroken symmetry must contain the SM electro-weak group SU(2)×U(1). According to Goldstone's theorem the spontaneous breaking of a global symmetry produces massless scalar particles known as Goldstone bosons. By appropriately choosing the global symmetries it is possible to have Goldstone bosons that correspond to the Higgs doublet in the SM. This can be done in a variety of ways and is completely determined by the symmetries. In particular, group theory determines the quantum numbers of the Goldstone bosons.
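To make the group-theoretic counting just described concrete, the following minimal sketch (an addition, not part of the article) counts broken generators for the coset SO(5)/SO(4), which is commonly cited in the composite-Higgs literature as the minimal pattern delivering exactly one Higgs doublet; the choice of groups here is an illustrative assumption, not a statement about any specific model discussed in the text.

```python
# Minimal sketch: counting Goldstone bosons for a symmetry-breaking pattern G -> H.
# Assumption: the coset SO(5)/SO(4), used here purely as an illustration.

def dim_so(n):
    """Number of generators (dimension) of the Lie group SO(n)."""
    return n * (n - 1) // 2

dim_G = dim_so(5)            # 10 generators of SO(5)
dim_H = dim_so(4)            # 6 generators of SO(4)
n_goldstone = dim_G - dim_H  # one Goldstone boson per broken generator

print(f"dim SO(5) = {dim_G}, dim SO(4) = {dim_H}")
print(f"broken generators (Goldstone bosons) = {n_goldstone}")
# 4 real scalars: exactly the field content of one complex Higgs doublet.
```

One Goldstone boson appears for each broken generator, so the four broken generators of SO(5)/SO(4) match the four real components of a single Higgs doublet.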
From the decomposition of the adjoint representation one finds the representation of the Goldstone bosons under the unbroken group. The phenomenological request that a Higgs doublet exists selects the possible symmetries. A typical example is a pattern that contains a single Higgs doublet as a Goldstone boson. The physics of the Higgs as a Goldstone boson is strongly constrained by the symmetries and determined by the symmetry breaking scale that controls their interactions. An approximate relation exists between the mass and the coupling of the composite states. In CHM one finds that deviations from the SM are proportional to a parameter ξ built from the electro-weak vacuum expectation value and the symmetry breaking scale. By construction these models approximate the SM to arbitrary precision if ξ is sufficiently small. For example, for the model above with global symmetry the coupling of the Higgs to W and Z bosons is modified by a factor depending on ξ. Phenomenological studies suggest small ξ, and thus a symmetry breaking scale at least a factor of a few larger than the electro-weak vacuum expectation value. However, the tuning of parameters required to achieve this is inversely proportional to ξ, so that viable scenarios require some degree of tuning. Goldstone bosons generated from the spontaneous breaking of an exact global symmetry are exactly massless. Therefore, if the Higgs boson is a Goldstone boson, the global symmetry cannot be exact. In CHM the Higgs potential is generated by effects that explicitly break the global symmetry. Minimally these are the SM Yukawa and gauge couplings that cannot respect the global symmetry, but other effects can also exist. The top coupling is expected to give the dominant contribution to the Higgs potential, as this is the largest coupling in the SM. In the simplest models one finds a correlation between the Higgs mass and the mass of the top partners. In models with the compositeness scale suggested by naturalness, this indicates fermionic resonances with mass around a TeV. Spin-1 resonances are expected to be somewhat heavier. This is within the reach of future collider experiments. Partial compositeness One ingredient of modern CHM is the hypothesis of partial compositeness proposed by D.B. Kaplan. This is similar to a (deconstructed) extra dimension, in which every Standard Model particle has a heavy partner (or partners) that can mix with it. In practice, the SM particles are linear combinations of elementary and composite states, weighted by a mixing angle. Partial compositeness is naturally realized in the gauge sector, where an analogous phenomenon happens in quantum chromodynamics and is known as γ–ρ mixing (after the photon and rho meson – two particles with identical quantum numbers which engage in similar intermingling). For fermions it is an assumption that in particular requires the existence of heavy fermions with equal quantum numbers to S.M. quarks and leptons. These interact with the Higgs through the mixing. One schematically finds a formula for the S.M. fermion masses in terms of the left and right mixings (marked by the subscripts L and R) and a composite sector coupling. The composite particles are multiplets of the unbroken symmetry H. For phenomenological reasons this should contain the custodial symmetry SU(2)×SU(2) extending the electro-weak symmetry SU(2)×U(1). Composite fermions often belong to representations larger than those of the SM particles. For example, a strongly motivated representation for left-handed fermions is the (2,2), which contains particles with exotic electric charge or with special experimental signatures. Partial compositeness ameliorates the phenomenology of CHM, providing a rationale for why no deviations from the S.M.
have been measured so far. In the so-called anarchic scenarios the hierarchies of S.M. fermion masses are generated through the hierarchies of mixings and anarchic composite sector couplings. The light fermions are almost elementary while the third generation is strongly or entirely composite. This leads to a structural suppression of all effects that involve the first two generations, which are the most precisely measured. In particular, flavor transitions and corrections to electro-weak observables are suppressed. Other scenarios are also possible with different phenomenology. Experiments The main experimental signatures of CHM are: new heavy partners of Standard Model particles, with SM quantum numbers and masses around a TeV; modified SM couplings; and new contributions to flavor observables. Supersymmetric models also predict that every Standard Model particle will have a heavier partner. However, in supersymmetry the partners have a different spin: they are bosons if the SM particle is a fermion, and vice versa. In composite Higgs models the partners have the same spin as the SM particles. All the deviations from the SM are controlled by the tuning parameter ξ. The mixing of the SM particles determines the coupling with the known particles of the SM. The detailed phenomenology depends strongly on the flavor assumptions and is in general model-dependent. The Higgs and the top quark typically have the largest coupling to the new particles. For this reason third generation partners are the easiest to produce, and top physics has the largest deviations from the SM. Top partners also have special importance given their role in the naturalness of the theory. After the first run of the LHC, direct experimental searches exclude third generation fermionic resonances up to 800 GeV. Bounds on gluon resonances are in the multi-TeV range, and somewhat weaker bounds exist for electro-weak resonances. Deviations from the SM couplings are proportional to the degree of compositeness of the particles. For this reason the largest departures from the SM predictions are expected for the third generation quarks and the Higgs couplings. The former have been measured with per-mille precision by the LEP experiment. After the first run of the LHC, the couplings of the Higgs with fermions and gauge bosons agree with the SM with a precision around 20%. These results pose some tension for CHM but are compatible with a compositeness scale f ~ TeV. The hypothesis of partial compositeness makes it possible to suppress flavor violation beyond the SM, which is severely constrained experimentally. Nevertheless, within anarchic scenarios sizable deviations from the SM predictions exist in several observables. Particularly constrained are CP violation in the kaon system and lepton flavor violation, for example the rare decay μ → eγ. Overall, flavor physics provides the strongest indirect bounds on anarchic scenarios. This tension can be avoided with different flavor assumptions.
At present, we have no idea what mass / energy scale will reveal additional information about the Higgs boson that may shed useful light on these issues. While theorists remain busy concocting explanations, this limited insight poses a major challenge to experimental particle physics: We have no clear idea whether feasible accelerators might provide new useful information beyond the S.M. It is hoped that upgrades in luminosity and energy at the LHC may possibly provide new clues. See also Alternatives to the Standard Higgs Model Two-Higgs-doublet model Preon References Physics beyond the Standard Model Hypothetical composite particles
Composite Higgs models
[ "Physics" ]
3,017
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
44,184,942
https://en.wikipedia.org/wiki/Austin%20L.%20Wahrhaftig
Austin Levy Wahrhaftig (May 5, 1917 – November 11, 1997) was an American chemist and mass spectrometrist known for his development of the quasi-equilibrium theory of fragmentation of molecular ions. The Wahrhaftig diagram that illustrates the relationship between internal energy and unimolecular ion decomposition is named after him. Early life and education Wahrhaftig was born in Sacramento, California, where he attended grade school, high school, and two years at Sacramento Junior College. He attended the University of California, Berkeley where he did undergraduate research with Joel Hildebrand and received an A.B. in chemistry in 1938. He went to graduate school at the California Institute of Technology where he worked under Richard M. Badger and Verner Schomaker . He received his Ph.D. in 1941. He was a research fellow at Caltech from 1941 to 1945. He then worked at the Wright Air Development Center in Pasadena, California, and as a University Fellow at the Ohio State University with Herrick L. Johnston. Academic career Wahrhaftig joined the faculty at the Chemistry Department at the University of Utah in 1947 where he rose through the ranks and spent the rest of his career. He retired to become an emeritus professor in 1987. References Further reading 1917 births 1997 deaths 20th-century American chemists Mass spectrometrists University of California, Berkeley alumni California Institute of Technology alumni California Institute of Technology fellows University of Utah faculty
Austin L. Wahrhaftig
[ "Physics", "Chemistry" ]
299
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
44,185,220
https://en.wikipedia.org/wiki/Wilf%20equivalence
In the study of permutations and permutation patterns, Wilf equivalence is an equivalence relation on permutation classes. Two permutation classes are Wilf equivalent when they have the same numbers of permutations of each possible length, or equivalently if they have the same generating functions. The equivalence classes for Wilf equivalence are called Wilf classes; they are the combinatorial classes of permutation classes. The counting functions and Wilf equivalences among many specific permutation classes are known. Wilf equivalence may also be described for individual permutations rather than permutation classes. In this context, two permutations are said to be Wilf equivalent if the principal permutation classes formed by forbidding them are Wilf equivalent. References Enumerative combinatorics Permutation patterns Equivalence (mathematics)
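As a concrete illustration of the definition, the following brute-force sketch (an addition, not part of the article) counts the permutations of each length that avoid the patterns 123 and 132 and finds equal counts at every length, reflecting the classical fact that these two principal classes are Wilf equivalent and are both enumerated by the Catalan numbers.

```python
# Sketch: checking a Wilf equivalence by brute force.
# A permutation contains a pattern if some subsequence has the same relative order;
# two patterns are Wilf equivalent if, for every n, the same number of length-n
# permutations avoid each of them.
from itertools import permutations, combinations

def contains(perm, pattern):
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        window = [perm[i] for i in idx]
        # standardize the chosen subsequence to its relative order (1..k)
        order = sorted(range(k), key=lambda i: window[i])
        std = [0] * k
        for rank, i in enumerate(order):
            std[i] = rank + 1
        if tuple(std) == pattern:
            return True
    return False

def count_avoiders(n, pattern):
    return sum(1 for p in permutations(range(1, n + 1)) if not contains(p, pattern))

for n in range(1, 7):
    a = count_avoiders(n, (1, 2, 3))
    b = count_avoiders(n, (1, 3, 2))
    print(n, a, b)   # equal for every n: 1, 2, 5, 14, 42, 132 (Catalan numbers)
```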
Wilf equivalence
[ "Mathematics" ]
173
[ "Enumerative combinatorics", "Combinatorics" ]
44,187,456
https://en.wikipedia.org/wiki/Tactical%20urbanism
Tactical urbanism, also commonly referred to as guerrilla urbanism, pop-up urbanism, city repair, D.I.Y. urbanism, planning-by-doing, urban acupuncture, and urban prototyping, is a low-cost, temporary change to the built environment, usually in cities, intended to improve local neighbourhoods and city gathering places. Tactical urbanism is often citizen-led but can also be initiated by government entities. Community-led temporary installations are often intended to pressure government agencies into installing a more permanent or expensive version of the improvement. Terminology The term was popularized around 2010 to refer to a range of existing techniques. The Street Plans Collaborative defines "tactical urbanism" as an approach to urban change that features the following five characteristics: A deliberate, phased approach to instigating change; The offering of local solutions for local planning challenges; Short-term commitment as a first step towards longer-term change; Lower-risk, with potentially high rewards; and The development of social capital between citizens and the building of organizational capacity between public and private institutions, non-profits, and their constituents. While the 1984 English translation of The Practice of Everyday Life by French author Michel de Certeau used the term tactical urbanism, this was in reference to events occurring in Paris in 1968; the "tactical urbanism" that Certeau described was in opposition to "strategic urbanism", which modern concepts of tactical urbanism tend not to distinguish. The modern sense of the term is attributed to New York-based urban planner Mike Lydon. The Project for Public Spaces uses the phrase "Lighter, Quicker, Cheaper", coined by urban designer Eric Reynolds, to describe the same basic approach expressed by tactical urbanism. Origin The tactical urbanist movement takes inspiration from urban experiments including Ciclovía, Paris-Plages, and the introduction of plazas and pedestrian malls in New York City during the tenure of Janette Sadik-Khan as Commissioner of the New York City Department of Transportation. Tactical urbanism formally emerged as a movement following a meeting of the Next Generation of New Urbanist (CNU NextGen) group in November 2010 in New Orleans. A driving force of the movement is to put the onus back on individuals to take personal responsibility in creating sustainable buildings, streets, neighborhoods, and cities. Following the meeting, an open-source project called Tactical Urbanism: Short TermAction | Long Term Change was developed by a group from NextGen to define tactical urbanism and to promote various interventions to improve urban design and promote positive change in neighbourhoods and communities. Examples Honolulu, Hawaii has some of the highest pedestrian fatality rates in the United States (Wong 2012). Many of their busiest intersections reflect city standards from years past without modification as the quantity of vehicular traffic and associated speeds have changed dramatically. Some residents chose to take a stand in 2014. Within the crosswalk of one of these busy intersections, residents altered the crosswalk lines so that they spelled out "Aloha," the traditional Hawaiian salutation. While the perpetrators sought to introduce a level of humanity to the dangerous location, city officials stated that the change was a "deviation from the standard." In spring of 2016, the city of Chicago posted unique "no right turn" signage to an intersection. 
To call attention to this new condition, an unknown person installed two small planter boxes within the crosswalk with flowering plants. Many responded positively while local businesses expressed concern for the traffic pattern change and its effect on their business. Types of interventions Tactical urbanism projects vary significantly in scope, size, budget, legality, and support. Projects often begin as grassroots interventions and spread to other cities, and are in some cases later adopted by municipal governments as best practices. Some common interventions are listed below: Improving public spaces Better block initiatives: Temporarily transforming retail streets using cheap or donated materials and volunteers. Spaces are transformed by introducing food carts, sidewalk tables, temporary bike lanes and narrowing of streets; Chair bombing: The act of removing salvageable materials and using it to build public seating. The chairs are placed in areas that either are quiet or lack comfortable places to sit. Food carts/trucks: Food carts and trucks are used to attract people to underused public spaces and offer small business opportunities for entrepreneurs; Open streets: To temporarily provide safe spaces for walking, bicycling, skating, and social activities; promote local economic development; and raise awareness about the impact of cars in urban spaces. "Open Streets" is an anglicized term for the South American 'Ciclovia', which originated in Bogota Park(ing) Day: An annual event where on street parking is converted into park-like spaces. Park(ing) Day was launched in 2005 by Rebar art and design studio; Pavement To Plazas: Popularized in New York City, Pavement to Plazas involve converting space on streets to usable public space. The closure of Times Square to vehicular traffic, and its low-cost conversion to a pedestrian plaza, is a primary example of a pavement plaza; Pop-up cafes: Temporary patios or terraces built in parking spots to provide overflow seating for a nearby cafe or for passersby. Most common in cities where sidewalks are narrow and where there otherwise is not room for outdoor sitting or eating areas; Pop-up parks: Temporary or permanent transformations of underused spaces into community gathering areas through beautification; Pop-up retail: Temporary retail stores that are set up in vacant stores or property. Infrastructure Crosswalk painting: Guerrilla crosswalks are zebra crossings painted by the community on roadways and at intersections where the city government has failed to provide a marked pedestrian crossing; Practical walkways: Desire paths are footpaths or other paths that form via natural use rather than paths designed for use by humans in urban environments. Some of these paths are later improved or paved to offer a more practical route to a particular destination. Protected bike lanes: Pop-up bicycle lanes are usually done by placing potted plants or other physical barriers to make painted bike lanes feel safer. Sometimes there is no pre-existing bike lane, and the physical protection is the only delineator. 
Removal De-fencing: The act of removing unnecessary fences to break down barriers between neighbours, beautify communities, and encourage community building; Depaving: The act of removing unnecessary pavement to transform driveways and parking into green space so that rainwater can be absorbed and neighbourhoods beautified; Sabotaging hostile architecture: The act of obstructing, defacing, or removing hostile architecture, usually anti-homeless spikes or armrests, to undermine their intended effects, often to protest anti-homelessness legislation. These actions in particular are often considered acts of vandalism. Nature Guerrilla gardening: Cultivating land that the gardeners do not have the legal rights to utilize, such as abandoned sites, areas not being cared for, or private property; Guerrilla grafting: Grafting fruitbearing branches onto sterile street trees to make an edible city. See also Road diet Sneckdown Street reclamation Urban Interventionism References Further reading The Street Plans Collaborative, Inc. (dba Street Plans) in collaboration with Ciudad Emergente and Codesign studio, produces a series of free tactical urbanism e-books. Volumes 1 and 2 focus on North American case studies, Volume 3 is a Spanish-language guide to Latin American projects, and Volume 4 covers Australia and New Zealand, including responses to the 2011 Christchurch earthquake. Street Plans' Mike Lydon and Anthony Garcia published a tactical urbanism book in March 2015. New Urbanism Urban design Environmentalism Sustainable transport Sustainable urban planning Urban planning Urban studies and planning terminology Cultural activism
Tactical urbanism
[ "Physics", "Engineering" ]
1,565
[ "Physical systems", "Transport", "Sustainable transport", "Urban planning", "Architecture" ]
67,070,255
https://en.wikipedia.org/wiki/Dual-rotor%20motor
A dual-rotor motor is a motor having two rotors within the same motor housing. This rotor arrangement can increase volumetric power density and efficiency and reduce cogging torque. Stator on the outside In one arrangement, the motor has an ordinary stator. A squirrel-cage rotor connected to the output shaft rotates within the stator at slightly less than the speed of the rotating field from the stator. Within the squirrel-cage rotor is a freely rotating permanent magnet rotor, which is locked in with the rotating field from the stator. The effect of the inner rotor is to reinforce the field from the stator. Because the squirrel-cage rotor slips behind the rotating magnetic field, inducing a current in it, this type of motor meets the definition of an induction motor. Stator between rotors In another arrangement, one rotor is inside the stator with a second rotor on the outside of the stator. The photo labelled FIG. 8 is from a patent application. It shows two rotors assembled into a single unit, with eight permanent magnets attached to the outer surface of the inner rotor, and eight to the inner surface of the outer rotor. Vendors are working on both axial and radial flux configurations. In one axial flux design, the stator is a disk that sits between two symmetric rotor disks. References Electric motors
Dual-rotor motor
[ "Technology", "Engineering" ]
265
[ "Engines", "Electric motors", "Mechanical engineering", "Electrical engineering", "Mechanical engineering stubs" ]
67,072,520
https://en.wikipedia.org/wiki/Anelasticity
Anelasticity is a property of materials that describes their behaviour when undergoing deformation. Its formal definition does not include the physical or atomistic mechanisms but still interprets the anelastic behaviour as a manifestation of internal relaxation processes. It is a behaviour differing (usually very slightly) from elastic behaviour. Definition and elasticity Considering first an ideal elastic material, Hooke's law defines the relation between stress σ and strain ε as σ = M ε. The constant M is called the modulus of elasticity (or just modulus), while its reciprocal J = 1/M is called the modulus of compliance (or just compliance). There are three postulates that define the ideal elastic behaviour: (1) the strain response to each level of applied stress (or vice versa) has a unique equilibrium value; (2) the equilibrium response is achieved instantaneously; (3) the response is linear. These conditions may be lifted in various combinations to describe different types of behaviour, summarized in the following table: Anelasticity is therefore characterized by the existence of a time-dependent part of the response, in addition to the elastic one, in the material considered. It is also usually a very small fraction of the total response, and so, in this sense, the usual meaning of "anelasticity" as "without elasticity" is improper in a physical sense. The formal definition of linearity is: "If a given stress history produces a certain strain, and a second stress history gives rise to a second strain, then the sum of the two stresses will give rise to the sum of the two strains." The postulate of linearity is used because of its practical usefulness. The theory would become much more complicated otherwise, but in cases of materials under low stress this postulate can be considered true. In general, the change of an external variable of a thermodynamic system causes a response from the system called thermal relaxation that leads it to a new equilibrium state. In the case of mechanical changes, the response is known as anelastic relaxation, and in the same formal way one can also describe, for example, dielectric or magnetic relaxation. The internal variables are coupled to stress and strain through kinetic processes such as diffusion. The external manifestation of the internal relaxation behaviour is thus the stress–strain relation, which in this case is time dependent. Static response functions Experiments can be made where either the stress or the strain is held constant for a certain time. These are called quasi-static, and in this case anelastic materials exhibit creep, elastic aftereffect, and stress relaxation. In these experiments a stress is applied and held constant while the strain is observed as a function of time. This response function is called the creep function, defined as the ratio of the time-dependent strain to the constant applied stress, and it characterizes the properties of the solid. Its initial value is called the unrelaxed compliance, the equilibrium value is called the relaxed compliance, and their difference is called the relaxation of the compliance. After a creep experiment has been run for a while, when the stress is released the elastic spring-back is in general followed by a time-dependent decay of the strain. This effect is called the elastic aftereffect or "creep recovery". The ideal elastic solid returns to zero strain immediately, without any aftereffect, while in the case of anelasticity total recovery takes time, and that is the aftereffect. The linear viscoelastic solid only recovers partially, because the viscous contribution to strain cannot be recovered.
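The quasi-static relations described above can be collected compactly; the following is a hedged sketch in conventional notation (σ, ε, M, J, J_U, J_R, δJ), an assumption consistent with standard treatments such as Nowick and Berry rather than a reproduction of the article's own formulas.

```latex
% Ideal elasticity and its compliance form:
\sigma = M\,\varepsilon, \qquad \varepsilon = J\,\sigma, \qquad J = 1/M
% Creep function for a constant stress \sigma_0 applied at t = 0:
J(t) = \frac{\varepsilon(t)}{\sigma_0}, \qquad
J_U = J(0)\ \text{(unrelaxed)}, \qquad
J_R = J(t \to \infty)\ \text{(relaxed)}, \qquad
\delta J = J_R - J_U
```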
In a stress relaxation experiment the stress σ is observed as a function of time while keeping the strain constant, and a stress relaxation function is defined similarly to the creep function, with unrelaxed and relaxed moduli MU and MR. At equilibrium the relaxed modulus is the reciprocal of the relaxed compliance, and at short timescales, when the material behaves as if ideally elastic, the same relation holds between the unrelaxed modulus and the unrelaxed compliance. Dynamic response functions and loss angle To get information about the behaviour of a material over short periods of time, dynamic experiments are needed. In this kind of experiment a periodic stress (or strain) is imposed on the system, and the phase lag of the strain (or stress) is determined. The stress can be written as a complex periodic function of time, characterized by its amplitude and its frequency of vibration. Then the strain is periodic with the same frequency, with its own amplitude, and lags the stress by an angle φ called the loss angle. For ideal elasticity the loss angle is zero. For the anelastic case it is in general not zero, so the ratio of strain to stress is complex. This quantity is called the complex compliance, and its absolute value is called the absolute dynamic compliance. In this way two real dynamic response functions are defined, the absolute dynamic compliance and the loss angle. Two other real response functions can also be introduced by writing the complex compliance in terms of its real and imaginary parts: the real part J1 is called the "storage compliance" and the imaginary part J2 is called the "loss compliance". These names are significant because calculating the energy stored and the energy dissipated in a cycle of vibration shows that the energy dissipated in a full cycle per unit volume is proportional to J2, while the maximum stored energy per unit volume is proportional to J1. The ratio of the energy dissipated to the maximum stored energy is called the "specific damping capacity". This ratio can be written as a function of the loss angle alone (see the sketch below). This shows that the loss angle gives a measure of the fraction of energy lost per cycle due to anelastic behaviour, and so it is known as the internal friction of the material. Resonant and wave propagation methods The dynamic response functions can only be measured directly in an experiment at frequencies below any resonance of the system used. While theoretically easy to do, in practice the loss angle is difficult to measure when very small, for example in crystalline materials. Therefore, subresonant methods are not generally used. Instead, methods where the inertia of the system is considered are used. These can be divided into two categories: methods employing resonant systems at a natural frequency (forced vibration or free decay), and wave propagation methods. Forced vibrations The response of a system in a forced-vibration experiment with a periodic force has a maximum of the displacement at a certain frequency of the force. This is known as resonance, and the corresponding frequency is the resonant frequency. The resonance equation simplifies in the case of small damping. In this case the dependence of the displacement amplitude on frequency takes the form of a Lorentzian curve. If the two frequencies at which the amplitude falls to half of its maximum value are identified, then the loss angle that measures the internal friction can be obtained directly from the plot, since it is determined by the width of the resonance peak at half-maximum. With this and the resonant frequency it is then possible to obtain the primary response functions. By changing the inertia of the sample the resonant frequency changes, and so the response functions at different frequencies can be obtained.
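The dynamic response functions and the energy relations discussed above can be summarized as follows; this is a hedged reconstruction in conventional notation (the article's own formulas were not preserved), with φ the loss angle and σ0 the stress amplitude.

```latex
% Periodic stress and the lagging strain:
\sigma(t) = \sigma_0 e^{i\omega t}, \qquad
\varepsilon(t) = \varepsilon_0 e^{i(\omega t - \phi)}
% Complex compliance, storage and loss parts, and the loss angle:
J^*(\omega) = \frac{\varepsilon(t)}{\sigma(t)} = J_1(\omega) - i\,J_2(\omega),
\qquad \tan\phi = \frac{J_2}{J_1}
% Dissipated and maximum stored energy per unit volume in one cycle:
\Delta W = \pi J_2 \sigma_0^2, \qquad W = \tfrac{1}{2} J_1 \sigma_0^2,
\qquad \frac{\Delta W}{W} = 2\pi \tan\phi
```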
Free vibrations The more common way of obtaining the anelastic response is measuring the damping of the free vibrations of a sample. Solving the equation of motion for this case introduces a constant called the logarithmic decrement. It represents the natural logarithm of the ratio of the amplitudes of successive vibrations, and its value is constant in time. It is a convenient and direct way of measuring the damping, as it is directly related to the internal friction. Wave propagation Wave propagation methods utilize a wave traveling down the specimen in one direction at a time to avoid any interference effects. If the specimen is long enough and the damping high enough, this can be done by continuous wave propagation. More commonly, for crystalline materials with low damping, a pulse propagation method is used. This method employs a wave packet whose length is small compared to the specimen. The pulse is produced by a transducer at one end of the sample, and the velocity of the pulse is determined either by the time it takes to reach the end of the sample, or the time it takes to come back after a reflection at the end. The attenuation of the pulse is determined by the decrease in amplitude after successive reflections. Boltzmann superposition principle Each response function constitutes a complete representation of the anelastic properties of the solid. Therefore, any one of the response functions can be used to completely describe the anelastic behaviour of the solid, and every other response function can be derived from the chosen one. The Boltzmann superposition principle states that every stress applied at a different time deforms the material as if it were the only one. This can be written generally for a series of stresses that are applied at successive times: the total strain is the sum of the responses to the individual stresses, or, in integral form, if the stress is varied continuously, an integral over the stress history (see the sketch after this section). The roles of the controlled variables can always be exchanged, expressing the stress in terms of the strain history in a similar way. These integral expressions are a generalization of Hooke's law to the case of anelasticity, and they show that materials act almost as if they had a memory of their history of stress and strain. These two equations imply that there is a relation between J(t) and M(t). To obtain it, the method of Laplace transforms can be used, or the two functions can be related implicitly. Either way they are correlated in a complicated manner, and it is not easy to evaluate one of these functions knowing the other. However, it is still possible in principle to derive the stress relaxation function from the creep function, and vice versa, thanks to the Boltzmann principle. Mechanical models It is possible to describe anelastic behaviour by considering a set of parameters of the material. Since the definition of anelasticity includes linearity and a time-dependent stress–strain relation, it can be described by using a differential equation with terms including stress, strain, and their derivatives. To better visualize the anelastic behaviour, appropriate mechanical models can be used. The simplest one contains three elements (two springs and a dashpot), since that is the least number of parameters necessary for a stress–strain equation describing a simple anelastic solid. This specific basic behaviour is of such importance that a material that exhibits it is called the standard anelastic solid.
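Returning to the Boltzmann superposition principle of the previous section, its integral form can be sketched as follows; this is a hedged reconstruction in standard notation, not the article's own expression.

```latex
% Strain as a memory integral over the stress history, and the dual relation:
\varepsilon(t) = \int_{-\infty}^{t} J(t - t')\, \frac{d\sigma(t')}{dt'}\, dt',
\qquad
\sigma(t) = \int_{-\infty}^{t} M(t - t')\, \frac{d\varepsilon(t')}{dt'}\, dt'
```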
Differential stress–strain equations Since from the definition of anelasticity linearity is required, all differential stress–strain equations of anelasticity must be of first degree. These equations can contain many different constants to describe the specific solid. The most general one is linear in the stress, the strain, and their time derivatives of arbitrary order. For the specific case of anelasticity, which requires the existence of an equilibrium relation, additional restrictions must be placed on this equation. Each stress–strain equation can be accompanied by a mechanical model to help visualize the behaviour of materials. Mechanical models In the case where only the lowest-order constants are not zero, the body is ideally elastic and is modelled by the Hookean spring. To add internal friction to a model, the Newtonian dashpot is used, represented by a piston moving in an ideally viscous liquid. Its velocity is proportional to the applied force, therefore entirely dissipating work as heat. These two mechanical elements can be combined in series or in parallel. In a series combination the stresses are equal, while the strains are additive. Similarly, for a parallel combination of the same elements the strains are equal and the stresses additive. Having said that, the two simplest models that combine more than one element are the following: a spring and dashpot in parallel, called the Voigt (or Kelvin) model, and a spring and dashpot in series, called the Maxwell model. The Voigt model allows for no instantaneous deformation, therefore it is not a realistic representation of a crystalline solid. The Maxwell model, in turn, displays steady viscous creep rather than recoverable creep and is therefore again not suited to describe an anelastic material. Standard anelastic solid Considering the Voigt model, what it lacks is the instantaneous elastic response characteristic of crystals. To obtain this missing feature, a spring is attached in series with the Voigt model; in this context the spring-and-dashpot combination is called the Voigt unit. A spring in series with a Voigt unit shows all the characteristics of an anelastic material despite its simplicity. Its differential stress–strain equation is therefore of particular interest and can be calculated explicitly. The solid whose properties are defined by this equation is called the standard anelastic solid. The solution of this equation for the creep function is an exponential approach from the unrelaxed to the relaxed compliance, governed by a constant called the relaxation time at constant stress. To describe the stress relaxation behaviour, one can also consider another three-parameter model more suited to the stress relaxation experiment, consisting of a Maxwell unit placed in parallel with a spring. Its differential stress–strain equation is the same as that of the other model considered, therefore the two models are equivalent. The Voigt-type is more convenient in the analysis of creep, while the Maxwell-type is more convenient for stress relaxation. Dynamic properties of the standard anelastic solid The dynamic response functions J1 and J2 of the standard anelastic solid take a simple single-relaxation-time form, often called the Debye equations, since they were first derived by P. Debye for the case of dielectric relaxation phenomena (they are sketched numerically below). The width of the loss peak at half its maximum value is fixed by this form. The equation for the internal friction may also be expressed as a Debye peak in the case where the relaxation strength is small. The relaxation strength can be obtained from the height of such a peak, while the relaxation time can be obtained from the frequency at which the peak occurs.
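To make the Debye equations concrete, here is a small numerical sketch; the functional forms are the conventional ones for a single relaxation time, and the parameter values are arbitrary illustrative assumptions, not data from the article.

```python
# Debye equations for the standard anelastic solid (conventional notation):
#   J1(w) = J_U + dJ / (1 + (w*tau)^2)        storage compliance
#   J2(w) = dJ * w*tau / (1 + (w*tau)^2)      loss compliance
# The loss peak of tan(phi) sits near w*tau = 1 when the relaxation strength is small.
import numpy as np

J_U, dJ, tau = 1.0, 0.05, 1e-3      # arbitrary illustrative values
w = np.logspace(0, 6, 601)           # angular frequencies [rad/s]

J1 = J_U + dJ / (1 + (w * tau) ** 2)
J2 = dJ * (w * tau) / (1 + (w * tau) ** 2)
tan_phi = J2 / J1                    # internal friction

i_peak = np.argmax(tan_phi)
print(f"peak of tan(phi) near w*tau = {w[i_peak] * tau:.2f}")   # close to 1
print(f"peak height              = {tan_phi[i_peak]:.4f}")      # roughly dJ/(2*J_U)
```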
Dynamic properties as functions of temperature The dynamic properties plotted as functions of frequency are considered keeping the relaxation time constant while varying the frequency. However, taking a sample through a Debye peak by varying the frequency continuously is not possible with the more common resonance methods. It is however possible to trace out the peak by varying the relaxation time, through the temperature, while keeping the frequency constant. The basis of why this is possible is that in many cases the relaxation rate is expressible by an Arrhenius equation involving the absolute temperature, a frequency factor, the activation energy, and the Boltzmann constant (see the sketch after this section). Therefore, where this equation applies, the relevant quantity may be varied over a wide range simply by changing the temperature. It then becomes possible to treat the dynamic response functions as functions of temperature. Discrete spectra The next level of complexity in the description of an anelastic solid is a model containing n Voigt units in series with each other and with a spring. This corresponds to a differential stress–strain equation which contains all terms up to order n in both the stress and the strain. Similarly, a model containing n Maxwell units all in parallel with each other and with a spring is also equivalent to a differential stress–strain equation of the same form. In order to have both elastic and anelastic behaviour, the differential stress–strain equation must be of the same order in the stress and strain and must start from terms of order zero. A solid described by such a function shows a "discrete spectrum" of relaxation processes, or simply a "discrete relaxation spectrum". Each "line" of the spectrum is characterized by a relaxation time and a magnitude. The standard anelastic solid considered before is just the particular case of a one-line spectrum, which can also be described as having a "single relaxation time". Mechanical spectroscopy applications A technique that measures internal friction and the modulus of elasticity is called mechanical spectroscopy. It is extremely sensitive and can give information not attainable with other experimental methodologies. Despite being historically uncommon, it has great utility in solving practical problems regarding industrial production, where knowledge and control of the microscopic structure of materials is becoming more and more important. Some of these applications are the following. Measurement of the quantity of C, N, O and H in solution in metals Unlike other chemical methods of analysis, mechanical spectroscopy is the only technique that can determine the quantity of interstitial elements in a solid solution. In body-centered cubic structures, such as that of iron, interstitial atoms position themselves in octahedral sites. In an undeformed lattice all octahedral positions are the same, having the same probability of being occupied. Applying a certain tensile stress in one direction parallel to a side of the cube dilates that side while compressing the orthogonal ones. Because of this, the octahedral positions stop being equivalent, and the larger ones will be occupied in preference to the smaller ones, making the interstitial atom jump from one to the other. Inverting the direction of the stress obviously has the opposite effect. By applying an alternating stress, the interstitial atom will keep jumping from one site to the other, in a reversible way, causing dissipation of energy and producing a so-called Snoek peak. The more atoms take part in this process, the more intense the Snoek peak will be.
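The Arrhenius argument above can be illustrated numerically: sweeping the temperature at a fixed measurement frequency moves the Debye (Snoek-type) peak through the measurement window. All parameter values below are illustrative assumptions of roughly the right order of magnitude, not values from the article.

```python
# Arrhenius dependence of the relaxation time and a temperature-swept Debye peak:
#   tau(T) = tau0 * exp(Q / (kB * T))                  relaxation time
#   tan(phi) ~ Delta * w*tau / (1 + (w*tau)^2)         internal friction (small Delta)
import numpy as np

kB = 8.617e-5            # Boltzmann constant [eV/K]
tau0 = 1e-14             # attempt time [s]           (illustrative)
Q = 0.8                  # activation energy [eV]     (illustrative, Snoek-like order)
Delta = 0.01             # relaxation strength        (illustrative)
w = 2 * np.pi * 1.0      # fixed measurement frequency, 1 Hz

T = np.linspace(200, 500, 1201)      # temperature sweep [K]
tau = tau0 * np.exp(Q / (kB * T))
tan_phi = Delta * (w * tau) / (1 + (w * tau) ** 2)

T_peak = T[np.argmax(tan_phi)]
print(f"internal-friction peak at T ~ {T_peak:.0f} K (where w*tau ~ 1)")
```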
Knowing the energy dissipation of a single event and the height of the Snoek peak makes it possible to determine the concentration of atoms involved in the process. Structural stability in nanocrystalline materials Grain boundaries in nanocrystalline materials are significant enough to be responsible for some specific properties of these types of materials. Both their size and their structure are important in determining the mechanical effects they have. High-resolution microscopy shows that materials subjected to severe plastic deformation are characterized by significant distortions and dislocations over and near the grain boundaries. Using mechanical spectroscopy techniques, one can determine whether nanocrystalline metals under thermal treatments change their mechanical behaviour by changing their grain boundary structure. One example is nanocrystalline aluminium. Determination of critical points in martensitic transformations Mechanical spectroscopy makes it possible to determine the critical points, martensite start and martensite finish, in martensitic transformations for steel and other metals and alloys. They can be identified by anomalies in the trend of the modulus. Using steel AISI 304 as an example, an anomaly in the distribution of the elements in the alloy can cause a local increase of the martensite start temperature, especially in areas with less nickel, and although martensite formation can usually only be induced by plastic deformation, around 9% can form anyway during cooling. Magnetoelastic effects in ferromagnetic materials Ferromagnetic materials have specific anelastic effects that influence the internal friction and the dynamic modulus. A non-magnetized ferromagnetic material forms Weiss domains, each one possessing a spontaneous and randomly directed magnetization. The boundary zones, called Bloch walls, are about one hundred atoms across, and within them the orientation of one domain gradually changes into that of the adjacent one. Applying an external magnetic field makes domains with the same orientation increase in size, until all Bloch walls are removed and the material is magnetized. Crystalline defects tend to anchor the domains, opposing their movement. So, materials can be divided into magnetically soft or hard based on how strongly the walls are anchored. In these kinds of materials magnetic and elastic phenomena are correlated, as in the case of magnetostriction, which is the property of changing size under a magnetic field, or the opposite case, changing magnetic properties when a mechanical stress is applied. These effects depend on the Weiss domains and their ability to re-orient. When a magnetoelastic material is put under stress, the deformation is the sum of the elastic and magnetoelastic contributions. The presence of the latter changes the internal friction by adding an additional dissipation mechanism.
Anelasticity
[ "Physics", "Materials_science" ]
4,055
[ "Deformation (mechanics)", "Physical phenomena", "Physical properties", "Elasticity (physics)" ]
50,543,273
https://en.wikipedia.org/wiki/DENIS%20J082303.1%E2%88%92491201
DENIS-P J082303.1-491201 (also known as DENIS J082303.1-491201, DE0823-49), is a binary system of two brown dwarfs, located from Earth. The system is located in the constellation Vela. The primary has a spectral class of L1.5, a mass of and a temperature of . The secondary is also a brown dwarf but with a spectral type of L5.5, a mass of , and a temperature of . The mass ratio is around 0.64 to 0.74. The system has an orbital period of 248 days. The age of the system is estimated to be around 80 to 500 million years old, a relatively young object in the solar neighbourhood, however it does not seem to have any association with any moving groups. DENIS J082303.1-491201 was discovered in 2007 by Ngoc Phan-Bao et al as part of the Deep Near Infrared Survey of the Southern Sky or DENIS for short. Planetary system A substellar companion, DENIS-P J082303.1−491201 b was discovered in 2013 and included in the NASA Exoplanet Archive as the first exoplanet discovered by the Astrometry exoplanet detection method. References Brown dwarfs L-type brown dwarfs Binary stars J08230313-4912012 Vela (constellation) Planetary systems with one confirmed planet
DENIS J082303.1−491201
[ "Astronomy" ]
308
[ "Vela (constellation)", "Constellations" ]
50,543,416
https://en.wikipedia.org/wiki/Hyperpolarized%20carbon-13%20MRI
Hyperpolarized carbon-13 MRI is a functional medical imaging technique for probing perfusion and metabolism using injected substrates. It is enabled by techniques for hyperpolarization of carbon-13-containing molecules using dynamic nuclear polarization and rapid dissolution to create an injectable solution. Following the injection of a hyperpolarized substrate, metabolic activity can be mapped based on enzymatic conversion of the injected molecule. In contrast with other metabolic imaging methods such as positron emission tomography, hyperpolarized carbon-13 MRI provides chemical as well as spatial information, allowing this technique to be used to probe the activity of specific metabolic pathways. This has led to new ways of imaging disease. For example, metabolic conversion of hyperpolarized pyruvate into lactate is increasingly being used to image cancerous tissues via the Warburg effect. Hyperpolarization While hyperpolarization of inorganic small molecules (like 3He and 129Xe) is generally achieved using spin-exchange optical pumping (SEOP), compounds useful for metabolic imaging (such as 13C or 15N) are typically hyperpolarized using dynamic nuclear polarization (DNP). DNP can be performed at operating temperatures of 1.1-1.2 K, and high magnetic fields (~4T). The compounds are then thawed and dissolved to yield a room temperature solution containing hyperpolarized nuclei which can be injected. Dissolution and injection Hyperpolarized samples of 13C pyruvic acid are typically dissolved in some form of aqueous solution containing various detergents and buffering reagents. For example, in a study detecting tumor response to etoposide treatment, the sample was dissolved in 40 mM HEPES, 94 mM NaOH, 30 mM NaCl, and 50 mg/L EDTA. Preclinical models Hyperpolarized carbon-13 MRI is currently being developed as a potentially cost effective diagnostic and treatment progress tool in various cancers, including prostate cancer. Other potential uses include neuro-oncological applications such as the monitoring of real-time in vivo metabolic events. Clinical trials The majority of clinical studies utilizing 13C hyperpolarization are currently studying pyruvate metabolism in prostate cancer, testing reproducibility of the imaging data, as well as feasibility of acquiring time. Imaging methods Spectroscopic imaging Spectroscopic imaging techniques enable chemical information to be extracted from hyperpolarized carbon-13 MRI experiments. The distinct chemical shift associated with each metabolite can be exploited to probe the exchange of magnetization between pools corresponding to each of the metabolites. Metabolite-selective excitation Using techniques for simultaneous spatial and spectral selective excitation, RF pulses can be designed to perturb metabolites individually. This enables the encoding of metabolite-selective images without the need for spectroscopic imaging. This technique also allows different flip angles to be applied to each metabolite, which enables pulse sequences to be designed that make optimal use of the limited polarization available for imaging. Dynamic imaging models In contrast with conventional MRI, hyperpolarized experiments are inherently dynamic as images must be acquired as the injected substrate spreads through the body and is metabolized. This necessitates dynamical system modelling and estimation for quantifying metabolic reaction rates. A number of approaches exist for modeling the evolution of magnetization within a single voxel. 
Two-species model with unidirectional flux The simplest model of metabolic flux assumes unidirectional conversion of the injected substrate S to a product P. The rate of conversion is assumed to be governed by a reaction rate constant. Exchange of magnetization between the two species can then be modeled using a linear ordinary differential equation, in which an additional rate constant describes how the magnetization decays to its thermal equilibrium polarization, here written for the product species P (a numerical sketch of this model appears below). Two-species model with bidirectional flux The unidirectional flux model can be extended to account for bidirectional metabolic flux, with a forward rate and a backward rate; the differential equation describing the magnetization exchange then contains both terms. Effect of radio-frequency excitation Repeated radio-frequency (RF) excitation of the sample causes additional decay of the magnetization vector. For constant flip angle sequences, this effect can be approximated using a larger effective rate of decay, computed from the flip angle and the repetition time. Time-varying flip angle sequences can also be used, but require that the dynamics be modeled as a hybrid system with discrete jumps in the system state. Metabolism mapping The goal of many hyperpolarized carbon-13 MRI experiments is to map the activity of a particular metabolic pathway. Methods of quantifying the metabolic rate from dynamic image data include temporally integrating the metabolic curves, computing the definite integral referred to in pharmacokinetics as the area under the curve (AUC), and taking the ratio of integrals as a proxy for rate constants of interest. Area-under-the-curve ratio Comparing the definite integrals under the substrate and product metabolite curves has been proposed as an alternative to model-based parameter estimates as a method of quantifying metabolic activity. Under specific assumptions, the ratio AUC(P)/AUC(S) of the area under the product curve AUC(P) to the area under the substrate curve AUC(S) is proportional to the forward metabolic rate. Rate parameter mapping When the assumptions under which this ratio is proportional to the forward rate are not met, or there is significant noise in the collected data, it is desirable to compute estimates of the model parameters directly. When the noise is independent and identically distributed and Gaussian, parameters can be fit using non-linear least squares estimation. Otherwise (for example if magnitude images with Rician-distributed noise are used), parameters can be estimated by maximum likelihood estimation. The spatial distribution of metabolic rates can be visualized by estimating the metabolic rates corresponding to the time series from each voxel, and plotting a heat map of the estimated rates. See also Carbon-13 nuclear magnetic resonance Dynamic nuclear polarization Functional imaging Magnetic resonance spectroscopic imaging References Magnetic resonance imaging
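As a rough illustration of the two-species model and of the AUC ratio described above, here is a small simulation sketch. The unidirectional pyruvate-to-lactate form, the forward Euler integration, and every parameter value and symbol name (kPL, T1s, T1p) are illustrative assumptions, not quantities taken from the article.

```python
# Sketch: unidirectional two-site exchange for a hyperpolarized substrate S -> product P,
#   dS/dt = -kPL*S - S/T1s
#   dP/dt = +kPL*S - P/T1p
# followed by the area-under-the-curve (AUC) ratio used as a proxy for kPL.
import numpy as np

kPL = 0.05                 # forward conversion rate [1/s]      (illustrative)
T1s, T1p = 30.0, 25.0      # apparent relaxation times [s]      (illustrative)

dt, t_end = 0.1, 120.0
t = np.arange(0.0, t_end, dt)
S = np.zeros_like(t)
P = np.zeros_like(t)
S[0] = 1.0                 # bolus of hyperpolarized substrate at t = 0 (arbitrary units)

for i in range(1, len(t)):             # simple forward-Euler integration
    dS = -kPL * S[i-1] - S[i-1] / T1s
    dP = +kPL * S[i-1] - P[i-1] / T1p
    S[i] = S[i-1] + dt * dS
    P[i] = P[i-1] + dt * dP

auc_S = np.sum(S) * dt                 # rectangle-rule AUC of the substrate curve
auc_P = np.sum(P) * dt                 # rectangle-rule AUC of the product curve
print(f"AUC(P)/AUC(S) = {auc_P / auc_S:.3f}  (increases monotonically with kPL)")
```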
Hyperpolarized carbon-13 MRI
[ "Chemistry" ]
1,262
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
39,932,177
https://en.wikipedia.org/wiki/Shannon%20capacity%20of%20a%20graph
In graph theory, the Shannon capacity of a graph is a graph invariant defined from the number of independent sets of strong graph products. It is named after American mathematician Claude Shannon. It measures the Shannon capacity of a communications channel defined from the graph, and is upper bounded by the Lovász number, which can be computed in polynomial time. However, the computational complexity of the Shannon capacity itself remains unknown. Graph models of communication channels The Shannon capacity models the amount of information that can be transmitted across a noisy communication channel in which certain signal values can be confused with each other. In this application, the confusion graph or confusability graph describes the pairs of values that can be confused. For instance, suppose that a communications channel has five discrete signal values, any one of which can be transmitted in a single time step. These values may be modeled mathematically as the five numbers 0, 1, 2, 3, or 4 in modular arithmetic modulo 5. However, suppose that when a value is sent across the channel, the value that is received is (mod 5) where represents the noise on the channel and may be any real number in the open interval from −1 to 1. Thus, if the recipient receives a value such as 3.6, it is impossible to determine whether it was originally transmitted as a 3 or as a 4; the two values 3 and 4 can be confused with each other. This situation can be modeled by a graph, a cycle of length 5, in which the vertices correspond to the five values that can be transmitted and the edges of the graph represent values that can be confused with each other. For this example, it is possible to choose two values that can be transmitted in each time step without ambiguity, for instance, the values 1 and 3. These values are far enough apart that they can't be confused with each other: when the recipient receives a value between 0 and 2, it can deduce that the value that was sent must have been 1, and when the recipient receives a value in between 2 and 4, it can deduce that the value that was sent must have been 3. In this way, in steps of communication, the sender can communicate up to different messages. Two is the maximum number of values that the recipient can distinguish from each other: every subset of three or more of the values 0, 1, 2, 3, 4 includes at least one pair that can be confused with each other. Even though the channel has five values that can be sent per time step, effectively only two of them can be used with this coding scheme. However, more complicated coding schemes allow a greater amount of information to be sent across the same channel, by using codewords of length greater than one. For instance, suppose that in two consecutive steps the sender transmits one of the five code words "11", "23", "35", "54", or "42". (Here, the quotation marks indicate that these words should be interpreted as strings of symbols, not as decimal numbers.) Each pair of these code words includes at least one position where its values differ by two or more modulo 5; for instance, "11" and "23" differ by two in their second position, while "23" and "42" differ by two in their first position. Therefore, a recipient of one of these code words will always be able to determine unambiguously which one was sent: no two of these code words can be confused with each other. 
By using this method, in n steps of communication (for even n), the sender can communicate up to 5^(n/2) different messages, significantly more than the 2^n that could be transmitted with the simpler one-digit code. The effective number of values that can be transmitted per unit time step is (5^(n/2))^(1/n) = √5. In graph-theoretic terms, this means that the Shannon capacity of the 5-cycle is at least √5. As Lovász showed, this bound is tight: it is not possible to find a more complicated system of code words that allows even more different messages to be sent in the same amount of time, so the Shannon capacity of the 5-cycle is exactly √5. Relation to independent sets If a graph G represents a set of symbols and the pairs of symbols that can be confused with each other, then a subset of symbols avoids all confusable pairs if and only if it is an independent set in the graph, a subset of vertices that does not include both endpoints of any edge. The maximum possible size of a subset of the symbols that can all be distinguished from each other is the independence number of the graph, the size of its maximum independent set. For instance, the independence number of the 5-cycle is two: it has independent sets of two vertices, but not larger. For codewords of longer lengths, one can use independent sets in larger graphs to describe the sets of codewords that can be transmitted without confusion. For instance, for the same example of five symbols whose confusion graph is the 5-cycle, there are 25 strings of length two that can be used in a length-2 coding scheme. These strings may be represented by the vertices of a graph with 25 vertices. In this graph, each vertex has eight neighbors, the eight strings that it can be confused with. A subset of length-two strings forms a code with no possible confusion if and only if it corresponds to an independent set of this graph. The set of code words {"11", "23", "35", "54", "42"} forms one of these independent sets, of maximum size. If G is a graph representing the signals and confusable pairs of a channel, then the graph representing the length-two codewords and their confusable pairs is the strong product of G with itself. This is a graph that has a vertex for each pair of a vertex in the first argument of the product and a vertex in the second argument of the product. Two distinct pairs (u, v) and (u′, v′) are adjacent in the strong product if and only if u and u′ are identical or adjacent, and v and v′ are identical or adjacent. More generally, the codewords of length k can be represented by the k-fold strong product of G with itself, and the maximum number of codewords of this length that can be transmitted without confusion is given by the independence number of this product graph. The effective number of signals transmitted per unit time step is the kth root of this number. Using these concepts, the Shannon capacity may be defined as the limit (as k becomes arbitrarily large) of the effective number of signals per time step of arbitrarily long confusion-free codes.
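The length-two construction can be checked directly by brute force. The following sketch (an addition, not from the article) builds the strong product of the 5-cycle with itself, verifies that the five code words above form an independent set, and confirms by exhaustive branching that no larger independent set exists, so that the capacity is at least 5^(1/2).

```python
# Sketch: independent sets in the strong product C5 ⊠ C5.
# Vertices are pairs (a, b) with a, b in {0,...,4}; two distinct pairs are adjacent
# when each coordinate is equal or adjacent on the 5-cycle.
from itertools import product

def equal_or_adjacent_on_c5(x, y):
    return (x - y) % 5 in (0, 1, 4)

vertices = set(product(range(5), repeat=2))          # 25 vertices
adj = {v: set() for v in vertices}
for u, v in product(vertices, repeat=2):
    if u != v and all(equal_or_adjacent_on_c5(a, b) for a, b in zip(u, v)):
        adj[u].add(v)

# "11","23","35","54","42" written 0-based:
code = {(0, 0), (1, 2), (2, 4), (4, 3), (3, 1)}
assert all(v not in adj[u] for u in code for v in code)   # pairwise non-confusable

def max_independent_set(remaining):
    """Exact maximum independent set size by branching (fine for 25 vertices)."""
    if not remaining:
        return 0
    v = max(remaining, key=lambda u: len(adj[u] & remaining))
    if not (adj[v] & remaining):
        return len(remaining)            # no edges left: every vertex can be taken
    without_v = max_independent_set(remaining - {v})
    with_v = 1 + max_independent_set((remaining - {v}) - adj[v])
    return max(without_v, with_v)

alpha = max_independent_set(frozenset(vertices))
print(f"alpha(C5 x C5) = {alpha}")       # 5, so the Shannon capacity is at least 5**0.5
```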
However (even ignoring the computational difficulty of computing the independence numbers of these graphs, an NP-hard problem), the unpredictable behavior of the sequence of independence numbers of powers of G implies that this approach cannot be used to accurately approximate the Shannon capacity. Upper bounds In part because the Shannon capacity is difficult to compute, researchers have looked for other graph invariants that are easy to compute and that provide bounds on the Shannon capacity. Lovász number The Lovász number ϑ(G) is a different graph invariant that can be computed numerically to high accuracy in polynomial time by an algorithm based on the ellipsoid method. The Shannon capacity of a graph G is bounded from below by α(G), and from above by ϑ(G). In some cases, ϑ(G) and the Shannon capacity coincide; for instance, for the graph of a pentagon, both are equal to √5. However, there exist other graphs for which the Shannon capacity and the Lovász number differ. Haemers' bound Haemers provided another upper bound on the Shannon capacity, which is sometimes better than the Lovász bound: Θ(G) ≤ min rank(B), where the minimum is taken over n × n matrices B over some field such that b_ii ≠ 0 and b_ij = 0 if vertices i and j are not adjacent. References Graph invariants Information theory
Shannon capacity of a graph
[ "Mathematics", "Technology", "Engineering" ]
1,676
[ "Telecommunications engineering", "Applied mathematics", "Graph theory", "Graph invariants", "Information theory", "Mathematical relations", "Computer science" ]
39,937,659
https://en.wikipedia.org/wiki/Eigenstate%20thermalization%20hypothesis
The eigenstate thermalization hypothesis (or ETH) is a set of ideas which purports to explain when and why an isolated quantum mechanical system can be accurately described using equilibrium statistical mechanics. In particular, it is devoted to understanding how systems which are initially prepared in far-from-equilibrium states can evolve in time to a state which appears to be in thermal equilibrium. The phrase "eigenstate thermalization" was first coined by Mark Srednicki in 1994, after similar ideas had been introduced by Josh Deutsch in 1991. The principal philosophy underlying the eigenstate thermalization hypothesis is that instead of explaining the ergodicity of a thermodynamic system through the mechanism of dynamical chaos, as is done in classical mechanics, one should instead examine the properties of matrix elements of observable quantities in individual energy eigenstates of the system. Motivation In statistical mechanics, the microcanonical ensemble is a particular statistical ensemble which is used to make predictions about the outcomes of experiments performed on isolated systems that are believed to be in equilibrium with an exactly known energy. The microcanonical ensemble is based upon the assumption that, when such an equilibrated system is probed, it is equally likely to be found in any of the microscopic states with the same total energy. With this assumption, the ensemble average of an observable quantity is found by averaging the value of that observable over all microstates with the correct total energy. Importantly, this quantity is independent of everything about the initial state except for its energy. The assumptions of ergodicity are well-motivated in classical mechanics as a result of dynamical chaos, since a chaotic system will in general spend equal time in equal areas of its phase space. If we prepare an isolated, chaotic, classical system in some region of its phase space, then as the system is allowed to evolve in time, it will sample its entire phase space, subject only to a small number of conservation laws (such as conservation of total energy). If one can justify the claim that a given physical system is ergodic, then this mechanism will provide an explanation for why statistical mechanics is successful in making accurate predictions. For example, the hard sphere gas has been rigorously proven to be ergodic. This argument cannot be straightforwardly extended to quantum systems, even ones that are analogous to chaotic classical systems, because time evolution of a quantum system does not uniformly sample all vectors in Hilbert space with a given energy. Given the state at time zero, expanded in a basis of energy eigenstates |n⟩ as |ψ(0)⟩ = Σ_n c_n |n⟩, the expectation value of any observable A is ⟨A(t)⟩ = Σ_{m,n} c_m* c_n e^{i(E_m − E_n)t/ħ} A_{mn}, where A_{mn} = ⟨m|A|n⟩. Even if the energies E_n are incommensurate, so that this expectation value is given for long times by Σ_n |c_n|^2 A_{nn}, the expectation value permanently retains knowledge of the initial state in the form of the coefficients c_n. In principle it is thus an open question as to whether an isolated quantum mechanical system, prepared in an arbitrary initial state, will approach a state which resembles thermal equilibrium, in which a handful of observables are adequate to make successful predictions about the system. However, a variety of experiments in cold atomic gases have indeed observed thermal relaxation in systems which are, to a very good approximation, completely isolated from their environment, and for a wide class of initial states.
The task of explaining this experimentally observed applicability of equilibrium statistical mechanics to isolated quantum systems is the primary goal of the eigenstate thermalization hypothesis. Statement Suppose that we are studying an isolated, quantum mechanical many-body system. In this context, "isolated" refers to the fact that the system has no (or at least negligible) interactions with the environment external to it. If the Hamiltonian of the system is denoted , then a complete set of basis states for the system is given in terms of the eigenstates of the Hamiltonian, where is the eigenstate of the Hamiltonian with eigenvalue . We will refer to these states simply as "energy eigenstates." For simplicity, we will assume that the system has no degeneracy in its energy eigenvalues, and that it is finite in extent, so that the energy eigenvalues form a discrete, non-degenerate spectrum (this is not an unreasonable assumption, since any "real" laboratory system will tend to have sufficient disorder and strong enough interactions as to eliminate almost all degeneracy from the system, and of course will be finite in size). This allows us to label the energy eigenstates in order of increasing energy eigenvalue. Additionally, consider some other quantum-mechanical observable , which we wish to make thermal predictions about. The matrix elements of this operator, as expressed in a basis of energy eigenstates, will be denoted by We now imagine that we prepare our system in an initial state for which the expectation value of is far from its value predicted in a microcanonical ensemble appropriate to the energy scale in question (we assume that our initial state is some superposition of energy eigenstates which are all sufficiently "close" in energy). The eigenstate thermalization hypothesis says that for an arbitrary initial state, the expectation value of will ultimately evolve in time to its value predicted by a microcanonical ensemble, and thereafter will exhibit only small fluctuations around that value, provided that the following two conditions are met: The diagonal matrix elements vary smoothly as a function of energy, with the difference between neighboring values, , becoming exponentially small in the system size. The off-diagonal matrix elements , with , are much smaller than the diagonal matrix elements, and in particular are themselves exponentially small in the system size. These conditions can be written as where and are smooth functions of energy, is the many-body Hilbert space dimension, and is a random variable with zero mean and unit variance. Conversely if a quantum many-body system satisfies the ETH, the matrix representation of any local operator in the energy eigen basis is expected to follow the above ansatz. Equivalence of the diagonal and microcanonical ensembles We can define a long-time average of the expectation value of the operator according to the expression If we use the explicit expression for the time evolution of this expectation value, we can write The integration in this expression can be performed explicitly, and the result is Each of the terms in the second sum will become smaller as the limit is taken to infinity. 
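For concreteness, the result referred to in the preceding sentence presumably takes the standard form (a sketch written here in the notation introduced above, with |ψ(0)⟩ = Σ_n c_n |n⟩ and A_{mn} the matrix elements of the observable; the exact notation of the original presentation may differ):

$$
\overline{\langle A \rangle}
\equiv \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\langle A(t)\rangle\,dt
= \sum_{n}\lvert c_{n}\rvert^{2} A_{nn}
\;+\;\lim_{T\to\infty}\sum_{m\neq n} c_{m}^{*} c_{n} A_{mn}\,
\frac{e^{\,i(E_{m}-E_{n})T/\hbar}-1}{\,i(E_{m}-E_{n})T/\hbar\,}.
$$

Each term of the second sum is bounded in magnitude by 2|c_m c_n A_{mn}| ħ/(|E_m − E_n| T), which is why every individual term decays as the averaging time T grows.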
Assuming that the phase coherence between the different exponential terms in the second sum does not ever become large enough to rival this decay, the second sum will go to zero, and we find that the long-time average of the expectation value is given by This prediction for the time-average of the observable is referred to as its predicted value in the diagonal ensemble, The most important aspect of the diagonal ensemble is that it depends explicitly on the initial state of the system, and so would appear to retain all of the information regarding the preparation of the system. In contrast, the predicted value in the microcanonical ensemble is given by the equally-weighted average over all energy eigenstates within some energy window centered around the mean energy of the system where is the number of states in the appropriate energy window, and the prime on the sum indices indicates that the summation is restricted to this appropriate microcanonical window. This prediction makes absolutely no reference to the initial state of the system, unlike the diagonal ensemble. Because of this, it is not clear why the microcanonical ensemble should provide such an accurate description of the long-time averages of observables in such a wide variety of physical systems. However, suppose that the matrix elements are effectively constant over the relevant energy window, with fluctuations that are sufficiently small. If this is true, this one constant value A can be effectively pulled out of the sum, and the prediction of the diagonal ensemble is simply equal to this value, where we have assumed that the initial state is normalized appropriately. Likewise, the prediction of the microcanonical ensemble becomes The two ensembles are therefore in agreement. This constancy of the values of over small energy windows is the primary idea underlying the eigenstate thermalization hypothesis. Notice that in particular, it states that the expectation value of in a single energy eigenstate is equal to the value predicted by a microcanonical ensemble constructed at that energy scale. This constitutes a foundation for quantum statistical mechanics which is radically different from the one built upon the notions of dynamical ergodicity. Tests Several numerical studies of small lattice systems appear to tentatively confirm the predictions of the eigenstate thermalization hypothesis in interacting systems which would be expected to thermalize. Likewise, systems which are integrable tend not to obey the eigenstate thermalization hypothesis. Some analytical results can also be obtained if one makes certain assumptions about the nature of highly excited energy eigenstates. The original 1994 paper on the ETH by Mark Srednicki studied, in particular, the example of a quantum hard sphere gas in an insulated box. This is a system which is known to exhibit chaos classically. For states of sufficiently high energy, Berry's conjecture states that energy eigenfunctions in this many-body system of hard sphere particles will appear to behave as superpositions of plane waves, with the plane waves entering the superposition with random phases and Gaussian-distributed amplitudes (the precise notion of this random superposition is clarified in the paper). 
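Schematically, the random superposition invoked here can be written as follows (an illustrative sketch in commonly used notation rather than a quotation of the original paper, with X denoting the 3N-dimensional vector of all particle coordinates and m the particle mass):

$$
\psi(X)\;\simeq\;\mathcal{N}\sum_{j} A_{j}\,e^{\,i\,\mathbf{k}_{j}\cdot X},
\qquad \frac{\hbar^{2}\lvert\mathbf{k}_{j}\rvert^{2}}{2m}=E,
$$

where the amplitudes A_j are independent Gaussian random variables, the wavevectors k_j are distributed uniformly over the energy shell, and 𝒩 is a normalization constant.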
Under this assumption, one can show that, up to corrections which are negligibly small in the thermodynamic limit, the momentum distribution function for each individual, distinguishable particle is equal to the Maxwell–Boltzmann distribution where is the particle's momentum, m is the mass of the particles, k is the Boltzmann constant, and the "temperature" is related to the energy of the eigenstate according to the usual equation of state for an ideal gas, where N is the number of particles in the gas. This result is a specific manifestation of the ETH, in that it results in a prediction for the value of an observable in one energy eigenstate which is in agreement with the prediction derived from a microcanonical (or canonical) ensemble. Note that no averaging over initial states whatsoever has been performed, nor has anything resembling the H-theorem been invoked. Additionally, one can also derive the appropriate Bose–Einstein or Fermi–Dirac distributions, if one imposes the appropriate commutation relations for the particles comprising the gas. Currently, it is not well understood how high the energy of an eigenstate of the hard sphere gas must be in order for it to obey the ETH. A rough criterion is that the average thermal wavelength of each particle be sufficiently smaller than the radius of the hard sphere particles, so that the system can probe the features which result in chaos classically (namely, the fact that the particles have a finite size ). However, it is conceivable that this condition may be able to be relaxed, and perhaps in the thermodynamic limit, energy eigenstates of arbitrarily low energies will satisfy the ETH (aside from the ground state itself, which is required to have certain special properties, for example, the lack of any nodes ). Alternatives Three alternative explanations for the thermalization of isolated quantum systems are often proposed: For initial states of physical interest, the coefficients exhibit large fluctuations from eigenstate to eigenstate, in a fashion which is completely uncorrelated with the fluctuations of from eigenstate to eigenstate. Because the coefficients and matrix elements are uncorrelated, the summation in the diagonal ensemble is effectively performing an unbiased sampling of the values of over the appropriate energy window. For a sufficiently large system, this unbiased sampling should result in a value which is close to the true mean of the values of over this window, and will effectively reproduce the prediction of the microcanonical ensemble. However, this mechanism may be disfavored for the following heuristic reason. Typically, one is interested in physical situations in which the initial expectation value of is far from its equilibrium value. For this to be true, the initial state must contain some sort of specific information about , and so it becomes suspect whether or not the initial state truly represents an unbiased sampling of the values of over the appropriate energy window. Furthermore, whether or not this were to be true, it still does not provide an answer to the question of when arbitrary initial states will come to equilibrium, if they ever do. For initial states of physical interest, the coefficients are effectively constant, and do not fluctuate at all. In this case, the diagonal ensemble is precisely the same as the microcanonical ensemble, and there is no mystery as to why their predictions are identical. However, this explanation is disfavored for much the same reasons as the first. 
Integrable quantum systems are proved to thermalize under condition of simple regular time-dependence of parameters, suggesting that cosmological expansion of the Universe and integrability of the most fundamental equations of motion are ultimately responsible for thermalization. Temporal fluctuations of expectation values The condition that the ETH imposes on the diagonal elements of an observable is responsible for the equality of the predictions of the diagonal and microcanonical ensembles. However, the equality of these long-time averages does not guarantee that the fluctuations in time around this average will be small. That is, the equality of the long-time averages does not ensure that the expectation value of will settle down to this long-time average value, and then stay there for most times. In order to deduce the conditions necessary for the observable's expectation value to exhibit small temporal fluctuations around its time-average, we study the mean squared amplitude of the temporal fluctuations, defined as where is a shorthand notation for the expectation value of at time t. This expression can be computed explicitly, and one finds that Temporal fluctuations about the long-time average will be small so long as the off-diagonal elements satisfy the conditions imposed on them by the ETH, namely that they become exponentially small in the system size. Notice that this condition allows for the possibility of isolated resurgence times, in which the phases align coherently in order to produce large fluctuations away from the long-time average. The amount of time the system spends far away from the long-time average is guaranteed to be small so long as the above mean squared amplitude is sufficiently small. If a system poses a dynamical symmetry, however, it will periodically oscillate around the long-time average. Quantum fluctuations and thermal fluctuations The expectation value of a quantum mechanical observable represents the average value which would be measured after performing repeated measurements on an ensemble of identically prepared quantum states. Therefore, while we have been examining this expectation value as the principal object of interest, it is not clear to what extent this represents physically relevant quantities. As a result of quantum fluctuations, the expectation value of an observable is not typically what will be measured during one experiment on an isolated system. However, it has been shown that for an observable satisfying the ETH, quantum fluctuations in its expectation value will typically be of the same order of magnitude as the thermal fluctuations which would be predicted in a traditional microcanonical ensemble. This lends further credence to the idea that the ETH is the underlying mechanism responsible for the thermalization of isolated quantum systems. General validity Currently, there is no known analytical derivation of the eigenstate thermalization hypothesis for general interacting systems. However, it has been verified to be true for a wide variety of interacting systems using numerical exact diagonalization techniques, to within the uncertainty of these methods. It has also been proven to be true in certain special cases in the semi-classical limit, where the validity of the ETH rests on the validity of Shnirelman's theorem, which states that in a system which is classically chaotic, the expectation value of an operator in an energy eigenstate is equal to its classical, microcanonical average at the appropriate energy. 
Whether or not it can be shown to be true more generally in interacting quantum systems remains an open question. It is also known to explicitly fail in certain integrable systems, in which the presence of a large number of constants of motion prevents thermalization. It is also important to note that the ETH makes statements about specific observables on a case-by-case basis; it does not make any claims about whether every observable in a system will obey ETH. In fact, this certainly cannot be true. Given a basis of energy eigenstates, one can always explicitly construct an operator which violates the ETH, simply by writing down the operator as a matrix in this basis whose elements explicitly do not obey the conditions imposed by the ETH. Conversely, it is always trivially possible to find operators which do satisfy ETH, by writing down a matrix whose elements are specifically chosen to obey ETH. In light of this, one may be led to believe that the ETH is somewhat trivial in its usefulness. However, the important consideration to bear in mind is that the operators thus constructed may not have any physical relevance. While one can construct these matrices, it is not clear that they correspond to observables which could be realistically measured in an experiment, or bear any resemblance to physically interesting quantities. An arbitrary Hermitian operator on the Hilbert space of the system need not correspond to something which is a physically measurable observable. Typically, the ETH is postulated to hold for "few-body operators," observables which involve only a small number of particles. Examples of this would include the occupation of a given momentum in a gas of particles, or the occupation of a particular site in a lattice system of particles. Notice that while the ETH is typically applied to "simple" few-body operators such as these, these observables need not be local in space; the momentum number operator in the above example does not represent a local quantity. There has also been considerable interest in the case where isolated, non-integrable quantum systems fail to thermalize, despite the predictions of conventional statistical mechanics. Disordered systems which exhibit many-body localization are candidates for this type of behavior, with the possibility of excited energy eigenstates whose thermodynamic properties more closely resemble those of ground states. It remains an open question as to whether a completely isolated, non-integrable system without static disorder can ever fail to thermalize. One intriguing possibility is the realization of "Quantum Disentangled Liquids." It is also an open question whether all eigenstates must obey the ETH in a thermalizing system. The eigenstate thermalization hypothesis is closely connected to the quantum nature of chaos (see quantum chaos). Furthermore, since a classically chaotic system is also ergodic, almost all of its trajectories eventually explore uniformly the entire accessible phase space, which would imply the eigenstates of the quantum chaotic system fill the quantum phase space evenly (up to random fluctuations) in the semiclassical limit ħ → 0. In particular, there is a quantum ergodicity theorem showing that the expectation value of an operator converges to the corresponding microcanonical classical average as ħ → 0. However, the quantum ergodicity theorem leaves open the possibility of non-ergodic states such as quantum scars.
In addition to the conventional scarring, there are two other types of quantum scarring, which further illustrate the weak-ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars. Since the former arise as a combined effect of special nearly-degenerate unperturbed states and the localized nature of the perturbation (potential bumps), the scarring can slow down the thermalization process in disordered quantum dots and wells, which is further illustrated by the fact that these quantum scars can be used to propagate quantum wave packets in a disordered nanostructure with high fidelity. On the other hand, the latter form of scarring has been speculated to be the culprit behind the unexpectedly slow thermalization of cold atoms observed experimentally. See also Equilibrium thermodynamics Fluctuation dissipation theorem Important Publications in Statistical Mechanics Non-equilibrium thermodynamics Quantum thermodynamics Statistical physics Configuration entropy Chaos Theory Hard spheres Quantum statistical mechanics Microcanonical Ensemble H-theorem Adiabatic theorem Footnotes References External links "Overview of Eigenstate Thermalization Hypothesis" by Mark Srednicki, UCSB, KITP Program: Quantum Dynamics in Far from Equilibrium Thermally Isolated Systems "The Eigenstate Thermalization Hypothesis" by Mark Srednicki, UCSB, KITP Rapid Response Workshop: Black Holes: Complementarity, Fuzz, or Fire? "Quantum Disentangled Liquids" by Matthew P. A. Fisher, UCSB, KITP Conference: From the Renormalization Group to Quantum Gravity Celebrating the science of Joe Polchinski Hypotheses Quantum mechanics Statistical mechanics Thermodynamics
Eigenstate thermalization hypothesis
[ "Physics", "Chemistry", "Mathematics" ]
4,363
[ "Theoretical physics", "Quantum mechanics", "Thermodynamics", "Statistical mechanics", "Dynamical systems" ]
39,937,914
https://en.wikipedia.org/wiki/Convention%20on%20Early%20Notification%20of%20a%20Nuclear%20Accident
The Convention on Early Notification of a Nuclear Accident is a 1986 International Atomic Energy Agency (IAEA) treaty whereby states have agreed to provide notification of any nuclear accident that occurs within their jurisdiction and could affect other states. It, along with the Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency, was adopted in direct response to the April 1986 Chernobyl disaster. By agreeing to the Convention, a state acknowledges that when any nuclear or radiation accident with the potential of affecting another state occurs within its territory, it will promptly notify the IAEA and the other states that could be affected. The information to be reported includes the incident's time, location, and the suspected amount of radioactivity released. The Convention was concluded and signed at a special session of the IAEA general conference on 26 September 1986; the special session was called because of the Chernobyl disaster, which had occurred five months before. Significantly, the Soviet Union and the Ukrainian SSR—the states that were responsible for the Chernobyl disaster—both signed the treaty at the conference and quickly ratified it. It was signed by 69 states and the Convention entered into force on 27 October 1986 after the third ratification. As of 2021, 115 state parties are full participants in the Convention, along with the European Atomic Energy Community, the Food and Agriculture Organization, the World Health Organization, and the World Meteorological Organization. A further 8 states have signed the treaty but not ratified it: Afghanistan, Democratic Republic of the Congo, Holy See, Niger, North Korea, Sierra Leone, Sudan, and Zimbabwe. Technical Implementation To implement the agreement, the IAEA operates the web portal USIE. National competent authorities can use this web portal to fulfill their information and reporting obligations in case of an emergency. Alternative reporting methods and a description of further details are outlined in the corresponding IAEA Operations Manual for Incident and Emergency Communication. See also European Community Urgent Radiological Information Exchange References External links Convention on Early Notification of a Nuclear Accident, IAEA information page. Text of the Convention. Signatures and ratifications.
1986 in Austria Aftermath of the Chernobyl disaster International Atomic Energy Agency treaties Treaties concluded in 1986 Treaties entered into force in 1986 Treaties of Albania Treaties of Algeria Treaties of Angola Treaties of Argentina Treaties of Armenia Treaties of Australia Treaties of Austria Treaties of Bahrain Treaties of Bangladesh Treaties of the Byelorussian Soviet Socialist Republic Treaties of Belgium Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of Brazil Treaties of Burkina Faso Treaties of Cambodia Treaties of Cameroon Treaties of Canada Treaties of Chile Treaties of the People's Republic of China Treaties of Colombia Treaties of Costa Rica Treaties of Croatia Treaties of Cuba Treaties of Cyprus Treaties of the Czech Republic Treaties of Czechoslovakia Treaties of Denmark Treaties of the Dominican Republic Treaties of Egypt Treaties of El Salvador Treaties of Estonia Treaties of Finland Treaties of France Treaties of Gabon Treaties of Georgia (country) Treaties of West Germany Treaties of East Germany Treaties of Greece Treaties of Guatemala Treaties of Iceland Treaties of India Treaties of Indonesia Treaties of Iran Treaties of Ba'athist Iraq Treaties of Ireland Treaties of Israel Treaties of Italy Treaties of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of South Korea Treaties of Kuwait Treaties of Laos Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of the Libyan Arab Jamahiriya Treaties of Liechtenstein Treaties of Lithuania Treaties of Luxembourg Treaties of Malaysia Treaties of Mali Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of Monaco Treaties of Montenegro Treaties of Morocco Treaties of Mozambique Treaties of Myanmar Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Nigeria Treaties of Norway Treaties of Oman Treaties of Pakistan Treaties of Panama Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of Portugal Treaties of Qatar Treaties of Moldova Treaties of Romania Treaties of the Soviet Union Treaties of Saint Vincent and the Grenadines Treaties of Saudi Arabia Treaties of Senegal Treaties of Serbia and Montenegro Treaties of Singapore Treaties of Slovakia Treaties of Slovenia Treaties of South Africa Treaties of Spain Treaties of Sri Lanka Treaties of Sweden Treaties of Switzerland Treaties of Tajikistan Treaties of Thailand Treaties of North Macedonia Treaties of Tunisia Treaties of Turkey Treaties of the Ukrainian Soviet Socialist Republic Treaties of the United Arab Emirates Treaties of the United Kingdom Treaties of Tanzania Treaties of the United States Treaties of Uruguay Treaties of Venezuela Treaties of Vietnam Treaties entered into by the European Atomic Energy Community Treaties of Yugoslavia Treaties extended to the Faroe Islands Treaties extended to Greenland Treaties extended to Aruba Treaties extended to the Netherlands Antilles Convention on Early Notification of a Nuclear Accident Treaties entered into by the World Health Organization Treaties entered into by the Food and Agriculture Organization Treaties entered into by the World Meteorological Organization Treaties of Poland Treaties of Hungary Treaties of Mongolia Treaties of Bulgaria
Convention on Early Notification of a Nuclear Accident
[ "Chemistry", "Technology", "Engineering" ]
933
[ "Nuclear accidents and incidents", "Aftermath of the Chernobyl disaster", "Safety engineering", "Measuring instruments", "Environmental impact of nuclear power", "Warning systems", "Radioactivity" ]
42,726,277
https://en.wikipedia.org/wiki/Beryllium%20sulfide
Beryllium sulfide (BeS) is an ionic compound from the sulfide group with the formula BeS. It is a white solid with a sphalerite structure that is decomposed by water and acids. Preparation Beryllium sulfide powders can be prepared by the reaction of sulfur and beryllium in a hydrogen atmosphere, heating the mixture for 10–20 minutes at temperatures from 1000 to 1300 °C. If the reaction is carried out at 900 °C, the product contains beryllium metal impurities. Alternatively, it can be prepared by the reaction of beryllium chloride and hydrogen sulfide at 900 °C. References Beryllium compounds Monosulfides II-VI semiconductors Zincblende crystal structure
Beryllium sulfide
[ "Chemistry" ]
144
[ "Semiconductor materials", "II-VI semiconductors", "Inorganic compounds", "Inorganic compound stubs" ]
42,726,919
https://en.wikipedia.org/wiki/Deferred%20measurement%20principle
The deferred measurement principle is a result in quantum computing which states that delaying measurements until the end of a quantum computation doesn't affect the probability distribution of outcomes. A consequence of the deferred measurement principle is that measuring commutes with conditioning. The choice of whether to measure a qubit before, after, or during an operation conditioned on that qubit will have no observable effect on a circuit's final expected results. Thanks to the deferred measurement principle, measurements in a quantum circuit can often be shifted around so they happen at better times. For example, measuring qubits as early as possible can reduce the maximum number of simultaneously stored qubits; potentially enabling an algorithm to be run on a smaller quantum computer or to be simulated more efficiently. Alternatively, deferring all measurements until the end of circuits allows them to be analyzed using only pure states. References Quantum information science
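A minimal numerical check of the principle can be written with plain state vectors. The sketch below (a hypothetical two-qubit example written in NumPy for illustration, not code from any cited reference) compares measuring a control qubit early and then applying a classically conditioned X with applying a coherent CNOT and deferring both measurements to the end; the sampled joint-outcome statistics agree.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def measure_early():
    # Apply H to qubit 0, measure it, then apply X to qubit 1 only if the
    # result is 1 (qubit 1 starts in |0>, so it simply copies the result).
    amplitudes = H @ np.array([1.0, 0.0])
    m0 = int(rng.random() < abs(amplitudes[1]) ** 2)
    q1 = m0
    return (m0, q1)

def measure_late():
    # Same circuit with a coherent CNOT; both measurements deferred to the end.
    state = np.zeros(4)                    # basis |q1 q0>: indices 0..3 = 00, 01, 10, 11
    state[0] = 1.0
    state = np.kron(np.eye(2), H) @ state  # H on qubit 0
    cnot = np.array([[1, 0, 0, 0],         # control qubit 0, target qubit 1
                     [0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0]])
    state = cnot @ state
    probs = np.abs(state) ** 2
    k = int(rng.choice(4, p=probs / probs.sum()))
    return (k & 1, (k >> 1) & 1)           # (qubit 0 result, qubit 1 result)

shots = 100_000
early = [measure_early() for _ in range(shots)]
late = [measure_late() for _ in range(shots)]
for outcome in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(outcome, early.count(outcome) / shots, late.count(outcome) / shots)
```

Both versions produce the outcomes (0, 0) and (1, 1) with probability 1/2 each, up to sampling noise, illustrating that the choice of when to measure the control qubit does not change the final statistics.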
Deferred measurement principle
[ "Physics" ]
186
[ "Quantum mechanics", "Quantum physics stubs" ]
42,727,389
https://en.wikipedia.org/wiki/Kuhn%27s%20theorem
In game theory, Kuhn's theorem relates perfect recall, mixed and unmixed strategies and their expected payoffs. It is named after Harold W. Kuhn. The theorem states that in a game where players may remember all of their previous moves/states of the game available to them, for every mixed strategy there is a behavioral strategy that has an equivalent payoff (i.e. the strategies are equivalent). The theorem does not specify what this strategy is, only that it exists. It is valid both for finite games, as well as infinite games (i.e. games with continuous choices, or iterated infinitely). References Game theory Mathematical economics Economics theorems
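A minimal example may help (constructed here for illustration, not taken from Kuhn's original paper). Suppose a single player with perfect recall first picks a or b and then, remembering that choice, picks c or d, so the pure strategies are ac, ad, bc, bd. The mixed strategy

$$
\mu=\tfrac{1}{2}\,[ac]+\tfrac{1}{2}\,[bd]
$$

is outcome-equivalent to the behavioral strategy that plays a and b each with probability 1/2 at the first information set, then plays c with probability 1 after a and d with probability 1 after b: both assign probability 1/2 to each of the plays (a, c) and (b, d), and hence yield the same expected payoff for any assignment of payoffs to the terminal nodes.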
Kuhn's theorem
[ "Mathematics" ]
140
[ "Applied mathematics", "Game theory", "Mathematical economics" ]
42,730,010
https://en.wikipedia.org/wiki/Tritrophic%20interactions%20in%20plant%20defense
Tritrophic interactions in plant defense against herbivory describe the ecological impacts of three trophic levels on each other: the plant, the herbivore, and its natural enemies. They may also be called multitrophic interactions when further trophic levels, such as soil microbes, endophytes, or hyperparasitoids (higher-order predators) are considered. Tritrophic interactions join pollination and seed dispersal as vital biological functions which plants perform via cooperation with animals. Natural enemies—predators, pathogens, and parasitoids that attack plant-feeding insects—can benefit plants by hindering the feeding behavior of the harmful insect. It is thought that many plant traits have evolved in response to this mutualism to make themselves more attractive to natural enemies. This recruitment of natural enemies functions to protect against excessive herbivory and is considered an indirect plant defense mechanism. Traits attractive to natural enemies can be physical, as in the cases of domatia and nectaries; or chemical, as in the case of induced plant volatile chemicals that help natural enemies pinpoint a food source. Humans can take advantage of tritrophic interactions in the biological control of insect pests. Chemical mechanisms of enemy attraction Plants produce secondary metabolites known as allelochemicals. Rather than participating in basic metabolic processes, they mediate interactions between a plant and its environment, often attracting, repelling, or poisoning insects. They also help produce secondary cell wall components such as those that require amino acid modification. In a tritrophic system, volatiles, which are released into the air, are superior to surface chemicals in drawing foraging natural enemies from afar. Plants also produce root volatiles which will drive tritrophic interactions between below-ground herbivores and their natural enemies. Some plant volatiles can be smelled by humans and give plants like basil, eucalyptus, and pine their distinctive odors. The mixture and ratios of individual volatiles emitted by a plant under given circumstances (also referred to as synomones in the context of natural enemy attraction) is referred to as a volatile profile. These are highly specific to certain plant species and are detectable meters from the source. Predators and parasitoids exploit the specificity of volatile profiles to navigate the complex infochemical signals presented by plants in their efforts to locate a particular prey species. The production of volatiles is likely to be beneficial given two conditions: that they are effective in attracting natural enemies and that the natural enemies are effective in removing or impeding herbivores. However, volatile chemicals may not have evolved initially for this purpose; they act in within-plant signaling, attraction of pollinators, or repulsion of herbivores that dislike such odors. Induced defenses When an herbivore starts eating a plant, the plant may respond by increasing its production of volatiles or changing its volatile profile. This plasticity is controlled by either the jasmonic acid pathway or the salicylic acid pathway, depending largely on the herbivore; these substances are often called herbivore-induced plant volatiles (HIPVs). The plant hormone jasmonic acid increases in concentration when plants are damaged and is responsible for inducing the transcription of enzymes that synthesize secondary metabolites. 
This hormone also aids in the production of defensive proteins such as α-amylase inhibitors, as well as lectins. Since α-amylase breaks down starch, α-amylase inhibitors prevent insects from deriving nutrition from starch. Lectins likewise interfere with insect nutrient absorption as they bind to carbohydrates.  Though volatiles of any kind have an attractive effect on natural enemies, this effect is stronger for damaged plants than for undamaged plants, perhaps because induced volatiles signal definitive and recent herbivore activity. The inducibility gives rise to the idea that plants are sending out a "distress call" to the third trophic level in times of herbivore attack. Natural enemies can distinguish between mechanical tissue damage, which might occur during events other than herbivory, and damage that is the direct result of insect feeding behavior. The presence of herbivore saliva or regurgitant mediates this differentiation, and the resulting chemical pathway leads to a stronger natural enemy response than mechanical damage could. The reliability of HIPVs in broadcasting the location of prey means that, for many foraging enemies, induced plant volatiles are more attractive than even the odors emitted by the prey insect itself. Plants are able to determine what types of herbivore species are present, and will react differently given the herbivore's traits. If certain defense mechanisms are not effective, plants may turn to attracting natural enemies of herbivore populations. For example, wild tobacco plants use nicotine, a neurotoxin, to defend against herbivores. However, when faced with nicotine-tolerant herbivores, they will attract natural enemies. Local and systemic signals When herbivores trigger an inducible chemical defense pathway, the resulting HIPVs may be emitted either from the site of feeding damage (local induction) or from undamaged tissues belonging to a damaged plant (systemic induction). For example, when an herbivore feeds on a single corn seedling leaf, the plant will emit volatiles from all its leaves, whether or not they too have been damaged. Locally induced defenses aid parasitoids in targeting their foraging behaviors to the exact location of the herbivore on the plant. Systemic defenses are less spatially specific and may serve to confuse the enemy once the source plant is located. A plant might employ both local and systemic responses simultaneously. Morphological mechanisms of enemy attraction Domatia Natural enemies must survive long enough and respond quickly enough to plant volatiles in order to benefit the plant through predatory behavior. Certain plant structures, called domatia, can selectively reinforce mutualisms with natural enemies and increase the fitness benefit they receive from that mutualism by ensuring the survival and proximity of natural enemies. Domatia provide a kind of housing or refuge for predators from both abiotic stressors, such as desiccation, and biotic stressors, such as predation from higher-order predators. Therefore, they not only ensure better survival, but eliminate the time required for natural enemies to locate and travel to the damaged plant. Natural enemies that make use of domatia are often said to serve as "bodyguards" for the plant on or in which they live. 
Domatia may be as well-developed as acacia tree thorns or as simple and incidental as a depression or crevice in a leaf stem, but they are distinguishable from galls and other similar structures in that they are not induced by the insect but formed constitutively by the plant. Nutritional rewards As long as natural enemies have some potential to be omnivorous, plants can provide food resources to encourage their retention and increase the impact they have on herbivore populations. This potential, however, can hinge on a number of the insect's traits. For example, hemipteran predators can use their sucking mouthparts to feed on leaves, stems, and fruits, but spiders with chelicerae cannot. Still, insects widely considered to be purely carnivorous have been observed to diverge from expected feeding behavior. Some plants simply tolerate a low level of herbivory by natural enemies for the service they provide in ridding the plant of more serious herbivores. Others, however, have structures thought to serve no purpose other than attracting and provisioning natural enemies. These structures derive from a long history of coevolution between the first and third trophic levels. A good example is the extrafloral nectaries that many myrmecophytes and other angiosperms sport on leaves, bracts, stems, and fruits. Nutritionally, extrafloral nectaries are similar to floral nectaries, but they do not lead the visiting insect to come into contact with pollen. Their existence is therefore not the product of a pollinator–plant mutualism, but rather a tritrophic, defensive interaction. Herbivore sequestration of plant defensive compounds The field of chemical ecology has elucidated additional types of plant multitrophic interactions that entail the transfer of defensive compounds across multiple trophic levels. For example, certain plant species in the Castilleja and Plantago genera have been found to produce defensive compounds called iridoid glycosides that are sequestered in the tissues of the Taylor's checkerspot butterfly larvae that have developed a tolerance for these compounds and are able to consume the foliage of these plants. These sequestered iridoid glycosides then confer chemical protection against bird predators to the butterfly larvae. Another example of this sort of multitrophic interaction in plants is the transfer of defensive alkaloids produced by endophytes living within a grass host to a hemiparasitic plant that is also using the grass as a host. Human uses Exploitation of tritrophic interactions can benefit agricultural systems. Biocontrol of crop pests can be exerted by the third trophic level, given an adequate population of natural enemies. However, the widespread use of pesticides or Bt crops can undermine natural enemies' success. In some cases, populations of predators and parasitoids are decimated, necessitating even greater use of insecticide because the ecological service they provided in controlling herbivores has been lost. Even when pesticides are not widely used, monocultures often have difficulty supporting natural enemies in great enough numbers for them to diminish pest populations. A lack of diversity in the first trophic level is linked to low abundance in the third because alternative resources that are necessary for stable, large natural enemy populations are missing from the system. Natural enemy diets can be subsidized by increasing landscape diversity through companion planting, border crops, cover crops, intercropping, or tolerance of some weed growth.
When nectar or other sugar-rich resources are provided, the natural enemy population thrives. Biological control Morphological plant characteristics and natural enemy success Beyond domatia and nutritional rewards, other plant characteristics influence the colonization of plants by natural enemies. These can include the physical size, shape, density, maturity, colour, and texture of a given plant species. Specific plant features such as the hairiness or glossiness of vegetation can have mixed effects on different natural enemies. For example, trichomes decrease hunting efficiency of many natural enemies, as trichomes tend to slow or prevent movement due to the physical obstacles they present or the adhesive secretions they produce. However, sometimes the prey species may be more impeded than the predator. For example, when the whitefly prey of the parasitoid Encarsia formosa is slowed by plant hairs, the parasitoid can detect and parasitize a higher number of juvenile whiteflies. Many predatory coccinelid beetles have a preference for the type of leaf surface they frequent. Presented with the opportunity to land on glossy or hairy Brassica oleracea foliage, the beetles prefer the glossy foliage as they are better able to cling to these leaves. Studies are evaluating the effect of various plant genotypes on natural enemies. Volatile organic compounds Two ways the release of volatile organic compounds (VOCs) may benefit plants are the deterrence of herbivores and the attraction of natural enemies. Synthetic products could replicate the distinct VOC profiles released by different plants; these products could be applied to plants suffering from pests that are targeted by the attracted natural enemy. This could cause natural enemies to enter crops that are occupied by pest populations that would otherwise likely remain undetected by the natural enemies. The four elements that must be considered before manipulating VOCs are as follows: The VOCs must effectively aid the natural enemy in finding the prey; the pest must have natural enemies present; the fitness cost of potentially attracting more herbivores must be exceeded by attracting natural enemies; and the natural enemies must not be negatively affected by direct plant defenses that may be present. Extrafloral nectaries The level of domestication of cotton plants correlates to indirect defense investment in the form of extrafloral nectaries. Wild varieties produce higher volumes of nectar and attract a wider variety of natural enemies. Thus, the process of breeding new cotton varieties has overlooked natural resistance traits in the pursuit of high-yielding varieties that can be protected by pesticides. Plants bearing extrafloral nectaries have lower pest levels along with greater levels of natural enemies. These findings illustrate the potential benefits that could be gained through incorporating the desirable genetics of wild varieties into cultivated varieties. Domatia Certain tropical plants host colonies of ants in their hollow domatia and provide the ants with nutrition delivered from nectaries or food bodies. These ant colonies have become dependent on the host plants for their survival and therefore actively protect the plant; this protection can take the form of killing or warding off pests, weeds, and certain fungal pathogens. Chinese citrus farmers have capitalized on this mutualistic relationship for many years by incorporating artificial ant nests into their crops to suppress pests. 
Parasitoids Parasitoids have successfully been incorporated into biological pest control programs for many years. Plants can influence the effect of parasitoids on herbivores by releasing chemical cues that attract parasitoids and by providing food sources or domatia. Certain parasitoids may be dependent on this plant relationship. Therefore, in agricultural areas where parasitoid presence is desired, ensuring the crops being grown meet all of these requirements is likely to promote higher parasitoid populations and better pest control. In a sugar beet crop, when only beets were grown, few aphids were parasitized. However, when collard crops were grown next to the sugar beets, parasitism of aphids increased. Collard crops release more VOCs than sugar beets. As a result, the companion collard plants attract more aphid parasitoids, which kill aphids in both the collard and the nearby sugar beets. In a related study, ethylene and other compounds released by rice plants in response to brown planthopper feeding attracted a facultative parasitoid that parasitizes brown planthopper eggs. In another study, the presence of plant extrafloral nectaries in cotton crops caused parasitoids to spend more time in the cotton and led to the parasitization of more moth larvae than in cotton crops with no nectaries. Since the publication of this study, most farmers have switched to cotton varieties with nectaries. A separate study found that a naturalized cotton variety emitted seven times more VOCs than cultivated cotton varieties when experiencing feeding damage. It is unknown whether this generalizes to other crops; there are cases of other crops that do not show the same trend. These findings reveal the specific variables a farmer can manipulate to influence parasitoid populations and illustrate the potential impact parasitoid habitat management can have on pest control. In the case of cotton and other similar high-VOC crop scenarios, there is interest in genetically engineering the chemical pathways of cultivated varieties to selectively produce the high VOCs that were observed in the naturalized varieties in order to attract greater natural enemy populations. This presents challenges but could produce promising pest control opportunities. Insect pathogens Entomopathogens are another group of organisms that are influenced by plants. The extent of the influence largely depends on the evolutionary history shared between the two and the pathogens' method of infection and survival duration outside of a host. Different insect host plants contain compounds that modulate insect mortality when certain entomopathogens are simultaneously injected. Increases in mortality of up to 50-fold have been recorded. However, certain plants influence entomopathogens in negative ways, reducing their efficacy. It is primarily the leaf surface of the plant that influences the entomopathogen; plants can release various exudates, phytochemicals, and allelochemicals through their leaves, some of which have the ability to inactivate certain entomopathogens. In contrast, in other plant species, leaf characteristics can increase the efficacy of entomopathogens. For example, the mortality of pea aphids was higher in the group of aphids that were found on plants with fewer wax exudates than in those on plants with more wax exudates. This reduced waxiness increases the transmission of Pandora neoaphidus conidia from the plant to the aphids.
Feeding-induced volatiles emitted by different plants increase the amount of spores released by certain entomopathogenic fungi, increasing the likelihood of infection of some herbivores but not others. Plants can also influence pathogen efficacy indirectly, and this typically occurs either by increasing the susceptibility of the herbivore hosts or by changing their behavior. This influence can often take the form of altered growth rates, herbivore physiology, or feeding habits. Thus, there are various ways that host plant species can influence entomopathogenic interactions. In one study, brassicas were found to defend themselves by acting as a vector for entomopathogens. Virus-infected aphids feeding on the plants introduce a virus into the phloem. The virus is passively transported in the phloem and carried throughout the plant. This causes aphids feeding apart from the infected aphids to become infected as well. This finding offers the possibility of injecting crops with compatible entomopathogenic viruses to defend against susceptible insect pests. Below-ground tritrophic interactions Less studied than above-ground interactions, but proving to be increasingly important, are the below-ground interactions that influence plant defense. There is a complex network of signal transduction pathways involved in plant responses to stimuli, and soil microbes can influence these responses. Certain soil microbes aid plant growth, producing increased tolerance to various environmental stressors, and can protect their host plants from many different pathogens by inducing systemic resistance. Organisms in above- and below-ground environments can interact indirectly through plants. Many studies have shown both the positive and negative effects that one organism in one environment can have on other organisms in the same or opposite environment, with the plant acting as the intermediary. The colonization of plant roots with mycorhizae typically results in a mutualistic relationship between the plant and the fungus, inducing a number of changes in the plant. Such colonization has a mixed impact on herbivores; insects with different feeding methods are affected differently, some positively and others negatively. The mycorhizal species involved also matters. One common species, Rhizophagus irregularis, has been observed to have a negative effect on the feeding success of chewing herbivores, whereas other species studied have positive effects. The roots of some maize plants produce a defense chemical when roots are damaged by leaf beetle larvae; this chemical attracts the entomopathogenic nematode species Heterorhabditis megidis. Only certain maize varieties produce this chemical; plants that release the chemical see up to five times as much parasitization of leaf beetle larvae as those that do not. Incorporating these varieties or their genes into commercial maize production could increase the efficacy of nematode treatments. Further studies suggest that the plant-emitted chemicals act as the primary source of attractant to the nematodes. Herbivores are believed to have evolved to evade detection on the part of the nematodes, whereas the plants have evolved to release highly attractive chemical signals. A high degree of specificity is involved; species that make up these tritrophic interactions have evolved with one another over a long period of time and as a result have close interrelationships. Microorganisms can also influence tritrophic interactions. 
The bacterium Klebsiella aerogenes produces the volatile 2,3-butanediol, which modulates interactions between plants, pathogens, and insects. When maize plants are grown in a soil culture containing the bacterium or the plants are inoculated with the bacterium, the maize is more resistant to the fungus Setosphaeria turcica. The bacterium does not deter insect herbivory; it actually increases weight gain and leaf consumption in the caterpillar Spodoptera littoralis. However, the parasitic wasp Cotesia marginiventris is attracted more readily to maize plants grown in soil cultures containing either the volatile-producing bacterium or pure 2,3-butanediol. Considerations in utilizing tritrophic interactions in biological control Sustainable crop production is becoming increasingly important, if humans are to support a growing population and avoid a collapse of production systems. While the understanding and incorporation of tritrophic interactions in pest control offers a promising control option, the sustainable biological control of pests requires a dynamic approach that involves diversity in all of the species present, richness in natural enemies, and limited adverse activity (i.e., minimal pesticide use). This approach is especially important in conservation biological control efforts. There are typically more than three trophic levels at play in a given production setting, so the tritrophic interaction model may represent an oversimplification. Furthermore, ecological complexity and interactions between species of the same trophic level can come into play. Research thus far has had a relatively narrow focus, which may be suitable for controlled environments such as greenhouses but which has not yet addressed multi-generational plant interactions with dynamic communities of organisms. References Antipredator adaptations Herbivory Biological pest control Chemical ecology
Tritrophic interactions in plant defense
[ "Chemistry", "Biology" ]
4,409
[ "Chemical ecology", "Biological defense mechanisms", "Herbivory", "Antipredator adaptations", "Biochemistry", "Eating behaviors" ]
42,732,027
https://en.wikipedia.org/wiki/Transition%20metal%20dichalcogenide%20monolayers
Transition-metal dichalcogenide (TMD or TMDC) monolayers are atomically thin semiconductors of the type MX2, with M a transition-metal atom (Mo, W, etc.) and X a chalcogen atom (S, Se, or Te). One layer of M atoms is sandwiched between two layers of X atoms. They are part of the large family of so-called 2D materials, named so to emphasize their extraordinary thinness. For example, a MoS2 monolayer is only 6.5 Å thick. The key feature of these materials is the interaction of large atoms in the 2D structure as compared with first-row transition-metal dichalcogenides, e.g., WTe2 exhibits anomalous giant magnetoresistance and superconductivity. The discovery of graphene shows how new physical properties emerge when a bulk crystal of macroscopic dimensions is thinned down to one atomic layer. Like graphite, TMD bulk crystals are formed of monolayers bound to each other by van-der-Waals attraction. TMD monolayers have properties that are distinctly different from those of the semimetal graphene: TMD monolayers MoS2, WS2, MoSe2, WSe2, MoTe2 have a direct band gap, and can be used in electronics as transistors and in optics as emitters and detectors. The TMD monolayer crystal structure has no inversion center, which allows access to a new degree of freedom of charge carriers, namely the k-valley index, and opens up a new field of physics: valleytronics. The strong spin–orbit coupling in TMD monolayers leads to a spin–orbit splitting of hundreds of meV in the valence band and a few meV in the conduction band, which allows control of the electron spin by tuning the excitation laser photon energy and handedness. Their 2D nature and strong spin–orbit coupling make TMD layers promising materials for spintronic applications. Work on TMD monolayers has been an emerging research and development field since the discovery of the direct bandgap and the potential applications in electronics and valley physics. TMDs are often combined with other 2D materials like graphene and hexagonal boron nitride to make van der Waals heterostructures. These heterostructures need to be optimized to be possibly used as building blocks for many different devices such as transistors, solar cells, LEDs, photodetectors, fuel cells, photocatalytic and sensing devices. Some of these devices are already used in everyday life and can become smaller, cheaper and more efficient by using TMD monolayers. Crystal structure Transition-metal dichalcogenides (TMDs) are composed of three atomic planes and often two atomic species: a metal and two chalcogens. The honeycomb, hexagonal lattice has threefold symmetry and can permit mirror plane symmetry and/or inversion symmetry. In the macroscopic bulk crystal, or more precisely, for an even number of monolayers, the crystal structure has an inversion center. In the case of a monolayer (or any odd number of layers), the crystal may or may not have an inversion center. Broken inversion symmetry Two important consequences of that are: nonlinear optical phenomena, such as second-harmonic generation. When the crystal is excited by a laser, the output frequency can be doubled. an electronic band structure with direct energy gaps, where both conduction and valence band edges are located at the non-equivalent K points (K+ and K−) of the 2D hexagonal Brillouin zone. The interband transitions in the vicinity of the K+ (or K−) point are coupled to right (or left) circular photon polarization states. These so-called valley dependent optical selection rules arise from inversion symmetry breaking.
This provides a convenient method to address specific valley states (K+ or K−) by circularly polarized (right or left) optical excitation. In combination with strong spin-splitting, the spin and valley degree of freedom are coupled, enabling stable valley polarization. These properties indicate that TMD monolayers represent a promising platform to explore spin and valley physics with the corresponding possible applications. Properties Transport properties At submicron scales, 3D materials no longer have the same behavior as their 2D form, which can be an advantage. For example, graphene has a very high carrier mobility, and accompanying lower losses through the Joule effect. But graphene has zero bandgap, which results in a disqualifyingly low on/off ratio in transistor applications. TMD monolayers might be an alternative: they are structurally stable, display a band gap and show electron mobilities comparable to those of silicon, so they can be used to fabricate transistors. Although thin-layer TMDs have been found to have a lower electron mobility than bulk TMDs, most likely because their thinness makes them more susceptible to damage, it has been found that coating the TMDs with HfO2 or hexagonal boron nitride (hBN) increases their effective carrier mobility. Optical properties A semiconductor can absorb photons with energy larger than or equal to its bandgap. This means that light with a shorter wavelength is absorbed. Semiconductors are typically efficient emitters if the minimum of the conduction band energy is at the same position in k-space as the maximum of the valence band, i.e., the band gap is direct. The band gap of bulk TMD material down to a thickness of two monolayers is still indirect, so the emission efficiency is lower compared to monolayered materials. The emission efficiency is about 104 greater for TMD monolayer than for bulk material. The band gaps of TMD monolayers are in the visible range (between 400 nm and 700 nm). The direct emission shows two excitonic transitions called A and B, separated by the spin–orbit coupling energy. The lowest energy and therefore most important in intensity is the A emission. Owing to their direct band gap, TMD monolayers are promising materials for optoelectronics applications. Atomic layers of MoS2 have been used as a phototransistor and ultrasensitive detectors. Phototransistors are important devices: the first with a MoS2 monolayer active region shows a photoresponsivity of 7.5 mA W−1 which is similar to graphene devices that reach 6.1 mA W−1. Multilayer MoS2 show higher photoresponsivities, about 100 mA W−1, which is similar to silicon devices. Making a gold contact at the far edges of a monolayer allows an ultrasensitive detector to be fabricated. Such a detector has a photoresponsivity reaching , 106 greater than the first graphene photodetectors. This high degree of electrostatic control is due to the thin active region of the monolayer. Its simplicity and the fact that it has only one semiconductor region, whereas the current generation of photodetectors is typically a p–n junction, makes possible industrial applications such as high-sensitivity and flexible photodetectors. The only limitation for currently available devices is the slow photoresponse dynamics. 
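As a rough illustration of the optical figures quoted above, the band-gap range and the photoresponsivities can be turned into photon energies and photocurrents with a few lines of Python; the 1 microwatt incident optical power used below is an assumed illustrative value, not a number from the literature.

```python
# Back-of-the-envelope conversions for the optical figures quoted above.
# The incident optical power is an assumed illustrative value.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, expressed in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Visible-range band gaps quoted for TMD monolayers (400-700 nm)
for wl in (400, 700):
    print(f"{wl} nm corresponds to {photon_energy_ev(wl):.2f} eV")

# Photoresponsivity R converts incident optical power P into photocurrent I = R * P.
responsivities_a_per_w = {
    "MoS2 monolayer phototransistor": 7.5e-3,
    "graphene device": 6.1e-3,
    "multilayer MoS2": 100e-3,
}
p_incident = 1e-6  # assumed incident power: 1 microwatt
for device, r in responsivities_a_per_w.items():
    print(f"{device}: {r * p_incident * 1e9:.1f} nA of photocurrent")
```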
Utilizing WSe2 the photoresponse was improved to a bandwidth of over 230 MHz by device symmetry optimization Mechanical properties Interest in the use of TMD monolayers such as MoS2, WS2, and WSe2 for the use in flexible electronics due to a change from an indirect band gap in 3D to a direct band gap in 2D emphasizes the importance of the mechanical properties of these materials. Unlike in bulk samples it is much more difficult to uniformly deform 2D monolayers of material and as a result, taking mechanical measurements of 2D systems is more challenging. A method that was developed to overcome this challenge, called atomic force microscopy (AFM) nanoindentation, involves bending a 2D monolayer suspended over a holey substrate with an AFM cantilever and measuring the applied force and displacement. Through this method, defect free mechanically exfoliated monolayer flakes of MoS2 were found to have a Young's modulus of 270 GPa with a maximum experienced strain of 10% before breaking. In the same study, it was found that bilayer mechanically exfoliated MoS2 flakes have a lower Young's modulus of 200 GPa, which is attributed to interlayer sliding and defects in the monolayer. With increasing flake thickness the bending rigidity of the flake plays a dominant role and it is found that the Young's modulus of multilayer, 5- 25 layers, mechanically exfoliated MoS2 flakes is 330 GPa. The mechanical properties of other TMDs such as WS2 and WSe2 have also been determined. The Young's modulus of multilayer, 5-14 layers, mechanically exfoliated WSe2 is found to be 167 GPa with a maximum strain of 7%. For WS2, the Young's modulus of chemical vapor deposited monolayer flakes is 272 GPa. From this same study the Young's modulus of CVD-grown monolayer flakes of MoS2 is found to be 264 GPa. This is an interesting result as the Young's modulus of the exfoliated MoS2 flake is nearly the same as that of the CVD grown MoS2 flake. It is generally accepted that chemically vapor deposited TMDs will include more defects when compared with the mechanically exfoliated films that are obtained from bulk single crystals, which implies that defects (points defects, etc.) that are included in the flake do not drastically affect the strength of the flake itself. Under the application of strain, a decrease in the direct and indirect band gap is measured that is approximately linear with strain. Importantly, the indirect bandgap decreases faster with applied strain to the monolayer than the direct bandgap, resulting in a crossover from direct to indirect band gap at a strain level of around 1%. As a result, the emission efficiency of monolayers is expected to decrease for highly strained samples. This property allows mechanical tuning of the electronic structure and also the possibility of fabrication of devices on flexible substrates. Fabrication of TMD monolayers Exfoliation Exfoliation is a top down approach. In the bulk form, TMDs are crystals made of layers, which are coupled by Van-der-Waals forces. These interactions are weaker than the chemical bonds between the Mo and S in MoS2, for example. So TMD monolayers can be produced by micromechanical cleavage, just as graphene. The crystal of TMD is rubbed against the surface of another material (any solid surface). In practice, adhesive tape is placed on the TMD bulk material and subsequently removed. The adhesive tape, with tiny TMD flakes coming off the bulk material, is brought down onto a substrate. 
On removing the adhesive tape from the substrate, TMD monolayer and multilayer flakes are deposited. This technique produces small samples of monolayer material, typically about 5–10 micrometers in diameter. Large quantities of exfoliated material can also be produced using liquid-phase exfoliation by blending TMD materials with solvents and polymers. Chemical vapor deposition Chemical vapor deposition (CVD) is another approach used to synthesize transition-metal dichalcogenides. It has been used broadly to synthesize many different TMDs because it can be easily adapted for different TMD materials. Generally, CVD growth of TMDs is achieved by putting precursors to the material, typically a transition-metal oxide and pure chalcogen, into a furnace with the substrate on which the material will form. The furnace is heated to high temperatures (anywhere from 650 to 1000 °C) with an inert gas, typically N2 or Ar, flowing through the tube. Some materials require H2 gas as a catalyst for formation, so it may be flowed through the furnace in smaller quantities than the inert gas. Outside of traditional CVD, metal organic chemical vapor deposition (MOCVD) has been used to synthesize TMDs. Unlike traditional CVD described above, MOCVD uses gaseous precursors, as opposed to solid precursors and MOCVD is usually carried out at lower temperatures, anywhere from 300 to 900 °C. MOCVD has been shown to provide more consistent wafer-scale growth than traditional CVD. CVD is often used over mechanical exfoliation despite its added complexity because it can produce monolayers ranging anywhere from 5 to 100 microns in size as opposed to the surface areas of roughly 5-10 microns produced using the mechanical exfoliation method. Not only do TMD monolayers produced by CVD have a larger surface area than those flakes produced by mechanical exfoliation, they are often more uniform. Monolayer TMD flakes with very little or no multilayer areas can be produced by chemical vapor deposition, in contrast to samples produced by mechanical exfoliation, which often have many multilayered areas. An alternative method has demonstrated that transition metal sulfides, including those of Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, and W, can be synthesized through sulfurization of metal oxides in CS₂ vapor, achieving gram-scale production with simpler equipment and precursors. Geometrically confined-growth techniques are also recently applied to realize wafer-scale single-domain TMD monolayer arrays and their heterostructures. Molecular-beam epitaxy Molecular-beam epitaxy (MBE) is an established technique for growing semiconductor devices with atomic monolayer thickness control. MBE has been used to grow different TMDs, such as MoSe2, WSe2, and early transition metals, including titanium, vanadium, and chromium, tellurides, resulting in extremely clean samples with a thickness of only 0.5 monolayer. The growth takes place in ultra-high vacuum (UHV). Precursors for the target materials are placed into evaporation cells, usually as powder (for example selenium), or as a rod (for example molybdenum). Some elements, such as selenium and tellurium, both of which are chalcogens, can be used in pure solid form as precursors. Some elements, however, can only be used when extracted from solid compounds, such as sulfur from FeS2. The compound materials are broken down by heating up the material at UHV pressures. 
The evaporation cells are either Knudsen cells or electron beam evaporation based, depending on the materials; electron beam evaporation works with rods and can be used to reach high temperatures without overheating heating filaments, while Knudsen cells are suitable for powders and materials with a lower evaporation point. The evaporated materials are then directed towards the substrate; some common ones are MoS2, HOPG, mica, or a sapphire substrate, such as Al2O3. A specific substrate is chosen to fit the targeted growth the best. The substrate is kept heated during the process to enhance the growth, with the temperatures ranging from 300 °C to 700 °C. The temperature of the substrate is one key factor of the growth, and altering it can be used to grow different phases, such as 1T and 2H, of the same material. MBE holds some advantages in regards to both manual exfoliation and CVD. Use of reflection high-energy electron diffraction (RHEED) enables the in-situ monitoring of the growth, and this additionally with UHV and slow growth speed allows one to create clean, atomically thin monolayers. The improvement in sample quality is considerable when compared to exfoliation, as MBE is more effective in getting rid of the large flakes and impurities. In contrast to CVD, MBE proves beneficial when single-layerd TMDs are required. The disadvantage of MBE is that it is a relatively complicated process that requires large amounts of specialized equipment. Maintaining UHV can be difficult, and the preparation of samples is slower than in the other two methods. Electrochemical Deposition Electrodeposition is among the techniques that have emerged to produce TMDC semiconductors such as MoS2, WS2 and WSe2. Several reports have shown controlled electrodeposition of TMDC layers down to a monolayer. The materials have so far shown continuous films of good uniformity but typically require annealing temperatures > 500 °C. Electrodepositions of TMDC films have been successfully reported over conducting films such as graphene and TiN, and over a SiO2 insulator by growing the TMDC laterally starting from a conductive film. Electronic band structure Band gap In the bulk form, TMD have an indirect gap in the center of the Brillouin zone, whereas in monolayer form the gap becomes direct and is located in the K points. Spin–orbit coupling For TMDs, the atoms are heavy and the outer layers electronic states are from d-orbitals that have a strong spin–orbit coupling. This spin orbit coupling removes the spins degeneracy in both the conduction and valence band i.e. introduces a strong energy splitting between spin up and down states. In the case of MoS2, the spin splitting in conduction band is in the meV range, it is expected to be more pronounced in other material like WS2. The spin orbit splitting in the valence band is several hundred meV. Spin-valley coupling and the electron valley degree of freedom By controlling the charge or spin degree of freedom of carriers, as proposed by spintronics, novel devices have already been made. If there are different conduction/valence band extrema in the electronic band structure in k-space, the carrier can be confined in one of these valleys. This degree of freedom opens up a new field of physics: the controlling of carriers k-valley index, also called valleytronics. For TMD monolayers crystals, the parity symmetry is broken, there is no more inversion center. K valleys of different directions in the 2D hexagonal Brillouin zone are no longer equivalent. 
So there are two kinds of K valley called K+ and K−. Also there is a strong energy degeneracy of different spin states in valence band. The transformation of one valley to another is described by the time reversal operator. Moreover, crystal symmetry leads to valley dependent optical selection rules: a right circular polarized photon (σ+) initializes a carrier in the K+ valley and a left circular polarized photon (σ-) initializes a carrier in the K− valley. Thanks to these two properties (spin-valley coupling and optical selection rules), a laser of specific polarization and energy allows to initialize the electron valley states (K+ or K−) and spin states (up or down). Emission and absorption of light: excitons A single layer of TMD can absorb up to 20% of incident light, which is unprecedented for such a thin material. When a photon of suitable energy is absorbed by a TMD monolayer, an electron is created in the conduction band; the electron now missing in the valence band is assimilated by a positively charged quasi-particle called a hole. The negatively charged electron and the positively charged hole are attracted via the Coulomb interaction, forming a bound state called an exciton which can be thought as a hydrogen atom (with some difference). This Bosonic-like quasi-particle is very well known and studied in traditional semiconductors, such as GaAs and ZnO but in TMD it provides exciting new opportunities for applications and for studying fundamental physics. Indeed, the reduced dielectric screening and the quantum size effect present in these ultrathin materials make the binding energy of excitons much stronger than those in traditional semiconductors. Binding energies of several hundreds of meV are observed for all the four principal members of the TMD family. As mentioned before, we can think about an exciton as if it were a hydrogen atom, with an electron bound to a hole. The main difference is that this system is not stable and tends to relax to the vacuum state, which is here represented by an electron in the valence band. The energy difference between the exciton 'ground state' (n=1) and the 'vacuum state' is called optical gap and is the energy of the photon emitted when an exciton recombines. This is the energy of the photons emitted by TMD monolayers and observed as huge emission peaks in photoluminescence (PL) experiments, such as the one labelled X0 in the figure. In this picture the binding energy EB is defined as the difference between the free particle band gap and the optical band gap and represent, as usual, the energy needed to take the hole and the electron apart. The existence of this energy difference is called band gap renormalization. The analogy with hydrogen atom doesn't stop here as excitonic excited states were observed at higher energies and with different techniques. Because of the spin–orbit splitting of the valence band two different series of excitons exist in TMD, called A- and B-excitons. In the A series the hole is located in the upper branch of the Valence band while for the B-exciton the hole is in the lower branch. As a consequence the optical gap for B-exciton is larger and the corresponding peak is found at higher energy in PL and reflectivity measurements. Another peak usually appears in the PL spectra of TMD monolayers, which is associated to different quasi-particles called trions. These are excitons bound to another free carrier which can be either an electron or a hole. As a consequence a trion is a negative or positively charged complex. 
The presence of a strong trion peak in a PL spectrum, sometimes even stronger than the peak associated with exciton recombination, is a signature of a doped monolayer. It is now believed that this doping is extrinsic, which means that it arises from charged trap states present in the substrate (generally SiO2). Positioning a TMD monolayer between two flakes of hBN removes this extrinsic doping and greatly increases the optical quality of the sample. At higher excitation powers, biexcitons have also been observed in monolayer TMDs. These complexes are formed by two bound excitons. Theory predicts that even larger charge-carrier complexes, such as charged biexcitons (quintons) and ion-bound biexcitons, are stable and should be visible in the PL spectra. Additionally, quantum light has been observed to originate from point defects in these materials in a variety of configurations. Radiation effects of TMD monolayers Common forms of radiation used to create defects in TMDs are particle and electromagnetic irradiation, impacting the structure and electronic performance of these materials. Scientists have been studying the radiation response of these materials for use in high-radiation environments, such as space or nuclear reactors. Damage to this unique class of materials occurs mainly through sputtering and displacement for metals, or radiolysis and charging for insulators and semiconductors. To sputter away an atom, the electron must be able to transfer enough energy to overcome the threshold for knock-on damage. However, this threshold energy has not yet been precisely determined for TMDs. Taking MoS2 as an example, TEM exposure creates vacancies in the lattice via sputtering; these vacancies are then observed to collect together into lines. Additionally, when looking at the radiation response of these materials, the three parameters shown to matter most are the choice of substrate, the sample thickness, and the sample preparation process. Janus TMD monolayers A new type of asymmetric transition-metal dichalcogenide, the Janus TMD monolayer, has been synthesized by breaking the out-of-plane structural symmetry via plasma-assisted chemical vapor deposition. Janus TMD monolayers have an asymmetric structure MXY (M = Mo or W, X/Y = S, Se or Te), exhibiting an out-of-plane optical dipole and piezoelectricity due to the imbalance of the electronic wave function between the two chalcogen layers, both of which are absent in a non-polar TMD monolayer, MX2. In addition, the asymmetric structure of Janus MoSSe provides an enhanced Rashba spin–orbit interaction, which suggests that Janus TMD monolayers can be promising candidates for spintronic applications. Janus TMD monolayers have also been considered excellent materials for electrocatalysis and photocatalysis. Janus MoSSe can be synthesized by inductively coupled plasma CVD (ICP-CVD). The top layer of sulfur atoms on MoS2 is stripped using hydrogen ions, forming an intermediate state, MoSH. Afterward, the intermediate state is selenized by thermal annealing at 250 °C in an environment of hydrogen and argon gases. Aspirational uses Electronics A field-effect transistor (FET) made of monolayer MoS2 showed an on/off ratio exceeding 10^8 at room temperature owing to electrostatic control over the conduction in the 2D channel. FETs made from MoS2, MoSe2, WS2, and WSe2 have been made. 
All show promise not just because of their electron mobility and band gap, but because their very thin structure makes them promising for use in thin, flexible electronics. Sensing The band gap TMDs possess makes them attractive for sensors as a replacement for graphene. FET-based biosensors rely on receptors attached to the monolayer TMD. When target molecules attach to the receptors, it affects the current flowing through the transistor. However, it has been shown that one can detect nitrogenous bases in DNA when they pass through nanopores made in MoS2. Nanopore sensors are based upon measuring ionic current through a nanopore in a material. When a single strand of DNA passes through the pore, there is a marked decrease in ionic current for each base. By measuring the current flowing through the nanopore, the DNA can then be sequenced. To this date, most sensors have been created from MoS2, although WS2 has been explored as well. Specific examples Molybdenum disulfide Molybdenum disulfide monolayers consist of a unit of one layer of molybdenum atoms covalently bonded to two layers of sulfur atoms. While bulk molybdenum sulfide exists as 1T, 2H, or 3R polymorphs, molybdenum disulfide monolayers are found only in the 1T or 2H form. The 2H form adopts a trigonal prismatic geometry while the 1T form adopts an octahedral or trigonal antiprismatic geometry. Molybdenum monolayers can also be stacked due to Van der Waals interactions between each layer. Electrical The electrical properties of molybdenum sulfide in electrical devices depends on factors such as the number of layers, the synthesis method, the nature of the substrate on which the monolayers are placed on, and mechanical strain. As the number of layers decrease, the band gap begins to increase from 1.2eV in the bulk material up to a value of 1.9eV for a monolayer. Odd number of molybdenum sulfide layers also produce different electrical properties than even numbers of molybdenum sulfide layers due to cyclic stretching and releasing present in the odd number of layers. Molybdenum sulfide is a p-type material, but it shows ambipolar behavior when molybdenum sulfide monolayers that were 15 nm thick were used in transistors. However, most electrical devices containing molybdenum sulfide monolayers tend to show n-type behavior. The band gap of molybdenum disulfide monolayers can also be adjusted by applying mechanical strain or an electrical field. Increasing mechanical strain shifts the phonon modes of the molybdenum sulfide layers. This results in a decrease of the band gap and metal-to-insulator transition. Applying an electric field of 2-3Vnm−1 also decreases the indirect bandgap of molybdenum sulfide bilayers to zero. Solution phase lithium intercalation and exfoliation of bulk molybdenum sulfide produces molybdenum sulfide layers with metallic and semiconducting character due to the distribution of 1T and 2H geometries within the material. This is due to the two forms of molybdenum sulfide monolayers having different electrical properties. The 1T polymorph of molybdenum sulfide is metallic in character while the 2H form is more semiconducting. However, molybdenum disulfide layers produced by electrochemical lithium intercalation are predominantly 1T and thus metallic in character as there is no conversion to the 2H form from the 1T form. Thermal The thermal conductivity of molybdenum disulfide monolayers at room temperature is 34.5W/mK while the thermal conductivity of few-layer molybdenum disulfide is 52W/mK. 
The thermal conductivity of graphene, on the other hand, is 5300W/mK. Due to the rather low thermal conductivity of molybdenum disulfide nanomaterials, it is not as promising material for high thermal applications as some other 2D materials. Synthesis Exfoliation Exfoliation techniques for the isolating of molybdenum disulfide monolayers include mechanical exfoliation, solvent assisted exfoliation, and chemical exfoliation. Solvent assisted exfoliation is done by sonicating bulk molybdenum disulfide in an organic solvent such as isopropanol and N-methyl-2-pyrrolidone, which disperses the bulk material into nanosheets as the Van der Waals interactions between the layers in the bulk material are broken. The amount of nanosheets produced is controlled by the sonication time, the solvent-molybdenum disulfide interactions, and the centrifuge speed. Compared to other exfoliation techniques, solvent assisted exfoliation is the simplest method for large scale production of molybdenum disulfide nanosheets. The micromechanical exfoliation of molybdenum disulfide was inspired by the same technique used in the isolation of graphene nanosheets. Micromechanical exfoliation allows for low defect molybdenum disulfide nanosheets but is not suitable for large scale production due to low yield. Chemical exfoliation involves functionalizing molybdenum difsulfide and then sonicating to disperse the nanosheets. The most notable chemical exfoliation technique is lithium intercalation in which lithium is intercalated into bulk molybdenum disulfide and then dispersed into nanosheets by the addition of water. Chemical vapor deposition Chemical vapor deposition of molybdenum disulfide nanosheets involves reacting molybdenum and sulfur precursors on a substrate at high temperatures. This technique is often used in the preparing electrical devices with molybdenum disulfide components because the nanosheets are applied directly on the substrate; unfavorable interactions between the substrate and the nanosheets that would have occurred had they been separately synthesized are decreased. In addition, since the thickness and area of the molybdenum disulfide nanosheets can be controlled by the selection of specific precursors, the electrical properties of the nanosheets can be tuned. Electroplating Among the techniques that have been used to deposit molybdenum disulfide is electroplating. Ultra-thin films consisting of few-layers have been produced via this technique over graphene electrodes. In addition, other electrode materials were also electroplated with MoS2, such as Titanium Nitride (TiN), glassy carbon and polytetrafluoroethylene. The advantage that this technique offers in producing 2D materials is its spatial growth selectivity and its ability to deposit over 3D surfaces. Controlling the thickness of electrodeposited materials can be achieved by adjusting the deposition time or current. Laser ablation Pulsed laser deposition involves the thinning of bulk molybdenum disulfide by laser to produce single or multi-layer molybdenum disulfide nanosheets. This allows for synthesis of molybdenum disulfide nanosheets with a defined shape and size. The quality of the nanosheets are determined by the energy of the laser and the irradation angle. Lasers can also be used to form molybdenum disulfide nanosheets from molybdenum disulfide fullerene-like molecules. 
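To put the thermal conductivities quoted in the Thermal section into perspective, the sketch below applies Fourier's law to a simple in-plane heat-flow geometry; the ribbon dimensions and temperature difference are assumed purely for illustration, only the conductivities come from the text above.

```python
# Rough comparison of in-plane heat conduction using Fourier's law, q = k * A * dT / L.
# The geometry (1 um wide ribbon, 0.65 nm thick, 10 um long, 10 K difference) is an
# assumed illustrative choice; only the conductivities are taken from the article.

conductivities = {            # W/(m*K)
    "MoS2 monolayer": 34.5,
    "few-layer MoS2": 52.0,
    "graphene": 5300.0,
}

width = 1e-6         # m
thickness = 0.65e-9  # m, roughly one monolayer
length = 10e-6       # m
delta_T = 10.0       # K
area = width * thickness

for name, k in conductivities.items():
    q = k * area * delta_T / length   # heat current in watts
    print(f"{name}: {q * 1e9:.1f} nW")
```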
Hafnium disulfide Hafnium disulfide () has a layered structure with strong covalent bonding between the Hf and S atoms in a layer and weak van der Waals forces between layers. The compound has type structure and is an indirect band gap semiconducting material. The interlayer spacing between the layers is 0.56 nm, which is small compared to group VIB TMDs like , making it difficult to cleave its atomic layers. However, recently its crystals with large interlayer spacing has grown using a chemical vapor transport route. These crystals exfoliate in solvents like N-Cyclohexyl-2-pyrrolidone (CHP) in a time of just some minutes resulting in a high-yield production of its few-layers resulting in increase of its indirect bandgap from 0.9 eV to 1.3 eV. As an application in electronics, its field-effect transistors has been realised using its few layers as a conducting channel material offering a high current modulation ratio larger than 10000 at room temperature. Therefore, group IVB TMDs also holds potential applications in the field of opto-electronics. Tungsten diselenide Tungsten diselenide is an inorganic compound with the formula . The compound adopts a hexagonal crystalline structure similar to molybdenum disulfide. Every tungsten atom is covalently bonded to six selenium ligands in a trigonal prismatic coordination sphere, while each selenium is bonded to three tungsten atoms in a pyramidal geometry. The tungsten – selenium bond has a bond distance of 2.526 Å and the distance between selenium atoms is 3.34 Å. Layers stack together via van der Waals interactions. is a stable semiconductor in the group-VI transition-metal dichalcogenides. The electronic bandgap of can be tuned by mechanical strain which can also allow for conversion of the band type from indirect-to-direct in a bilayer. References External links Semiconductor analysis Transition metal dichalcogenides Monolayers
Transition metal dichalcogenide monolayers
[ "Physics" ]
7,410
[ "Monolayers", "Atoms", "Matter" ]
42,732,399
https://en.wikipedia.org/wiki/Thermal%20history%20coating
A thermal history coating (THC) is a robust coating containing various non-toxic chemical compounds whose crystal structures irreversibly change at high temperatures. This allows temperature measurements and thermal analysis to be performed on intricate and inaccessible components which operate in harsh environments. Like thermal barrier coatings, THCs provide protection from intense heat to the surfaces on which they are applied. THCs provide accurate temperature measurements over the range 900 °C to 1400 °C, with an accuracy of ±10 °C. Application of THCs THCs are applied by atmospheric plasma spraying, which is a thermal spraying technique. This ensures that the coatings are robust enough to allow long lifetimes in harsh environments, such as on jet engine components, which experience temperatures in excess of 1000 °C and rotational speeds of up to 10,000 rpm (revolutions per minute). Temperature Measurement Phosphorescent Properties THCs are composed of phosphor materials, whose luminescent characteristics are temperature- and duration-dependent. Phosphor thermometry is the measurement technique used for determining the past temperatures of THCs, whereby the luminescent characteristics of the coatings are exploited and matched to calibration tables. Instrumentation The phosphorescence of THCs is excited by an external light source such as a laser pen. An optical system then collects the emitted light signal, whose characteristics provide information on the crystal structure of the THC. Crystal structure properties are then converted into the temperatures that the coatings had previously experienced. This allows point measurements to be made across the coated surfaces of components and enables thermal analysis to be carried out. Applications R&D THCs are used in high-temperature applications where temperature knowledge is essential in research and development programmes, for example in identifying hot spots which could lead to structural damage of components. Warranty As THCs provide historic temperature information, they can be used as warranty tools where certain components, such as valves or particular engine or machinery components, must not exceed certain temperatures. Other High-Temperature Detection Technologies Thermocouple Thermocrystal Pyrometer References Materials science Thermal protection
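The calibration-table lookup at the heart of phosphor thermometry can be sketched as a simple interpolation; the calibration data below are invented placeholder values, since real tables are measured for the specific phosphor used in the coating.

```python
# Sketch of the calibration-table lookup used in phosphor thermometry: a measured
# luminescence property (here a decay lifetime) is matched against a calibration
# curve to recover the peak temperature the coating experienced.
# All numbers below are invented placeholders.

CALIBRATION = [  # (decay lifetime in microseconds, past peak temperature in deg C)
    (950.0, 900.0),
    (600.0, 1000.0),
    (310.0, 1100.0),
    (140.0, 1200.0),
    (55.0, 1300.0),
    (20.0, 1400.0),
]

def temperature_from_lifetime(tau_us):
    """Linearly interpolate between calibration points (lifetime falls as temperature rises)."""
    if not CALIBRATION[-1][0] < tau_us < CALIBRATION[0][0]:
        raise ValueError("lifetime outside calibrated range")
    for (tau_hi, temp_lo), (tau_lo, temp_hi) in zip(CALIBRATION, CALIBRATION[1:]):
        if tau_lo <= tau_us <= tau_hi:
            frac = (tau_hi - tau_us) / (tau_hi - tau_lo)
            return temp_lo + frac * (temp_hi - temp_lo)

print(f"Estimated past peak temperature: {temperature_from_lifetime(450.0):.0f} deg C")
```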
Thermal history coating
[ "Physics", "Materials_science", "Engineering" ]
441
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
42,734,050
https://en.wikipedia.org/wiki/Complex%20Projects%20Contract
The CIOB Complex Projects Contract 2013 was a form of construction and engineering contract, developed by the Chartered Institute of Building (CIOB). Its formal name was the 'Contract for Use with Complex Projects, First Edition 2013'. In November 2015, the Complex Projects Contract was updated by the Chartered Institute of Building in response to feedback from the industry. Despite effectively being a second edition, it was renamed the Time and Cost Management Contract 2015 (abbreviated to TCM15) to reflect the core strengths of the contract. Launch Based upon extensive research carried out by the CIOB, the contract was formally launched on 23 April 2013. The contract was billed as the world's first contract specifically aimed at the management of time in complex construction and engineering projects. The authors also stated that it was the first form to follow the Society of Construction Law Delay and Disruption Protocol, and that it was also the first standard form contract to cater for Building Information Modelling (BIM) and the future of collaborative design. Purpose It is suited for works of high value or complexity, major real estate projects and engineering or infrastructure projects. It is not suited to simpler works, those of short duration or with inexperienced clients /contractors. It anticipates Special Conditions for the particular requirements of each project and if using construction management as the procurement method, it cannot be used without the appropriate terms being included in the Special Conditions. CPC2013 is designed for use by companies and public authorities in the UK and in any other country where works comprise complex building and / or engineering, which cannot reasonably be expected to be managed intuitively. It can be used where the contractor is expected to construct only that designed by or under the direction of the client with traditional drawings, specification and/or bills of quantities, or BIM, or on projects which require a contractor's design in part, or for design-build projects in which the contractor designs the whole of the works with or without an employer's reference design. The contract requires a collaborative approach to the management of design, quality, time and cost. The working schedule, planning method statement and progress records (which are to be inspected and accepted by a competent project time manager and independently audited for quality assurance) are at the core of management. They are the tools by which all time and time-related cost issues are to be determined and are to conform with the standards required by the conditions, appendices and the CIOB's Guide to Good Practice in the Management of Time in Complex Projects. Collaboration In order to promote collaboration and to ensure transparency of data, schedule and database submittals are to be made in native file format either by maintenance of the material in a common data environment or transfer by a file transfer protocol to all having a continuing design, administrative, or supervisory role, who are identified as Listed Persons. The contract also requires the appointment of a Contract Administrator to carry out administrative functions during the course of construction, a Project Time Manager to advise on time related matters, a Valuer to advise on cost matters, a Design Coordination Manager to manage the integration of the Contractor's design, and a Data Security Manager to supervise and maintain the integrity and security of digital communications. 
Risk management Central to the philosophy of the Complex Projects Contract is its approach to transparency in risk management. CPC2013 provides for both the owner and contractor each to identify one or more time contingency allowance, which each can use as it wishes to manage its own risks. Unusually for a standard form construction contract, it defines "float" and provides that if either party creates free float or total float as a result of its own improvement of progress, that party may keep the created float as its own contingency. Additional powers are provided in CPC2013 to enable the construction project's developer, following consultation, to instruct acceleration, recovery and changes in resources sequences and logic in order to manage its risks contemporaneously. Time management CPC2013 is distinctive in taking a prescriptive approach to the management of time and associated cost risk and combining critical path network techniques with resource-based planning. The time model, referred to as the Working Schedule, combines a high-density, short-term look-ahead similar in concept to that used in agile software development with medium and long-term lower density schedule along the lines of that used in the waterfall model planning technique, the whole being revised regularly on the Rolling Wave planning principle. In the short-term look-ahead, the logic is to be resource and location-related, instead of activity based, as it is in waterfall. The agile part of the schedule is to have its activity durations calculated by reference to the resources to be applied and their expected productivity. Cost management The activities in the Working Schedule are also to be valued so that the Working Schedule also functions as the pricing schedule to predict current value for the purposes of interim payment and the ultimate out-turn cost for cost risk management purposes. Progress records Progress is required to be recorded in a database identifying, at specified intervals, the resources used, productivity achieved and earned value. Apart from being the source data for subsequent progress update of the schedule, the database also serves for benchmarking productivity achievable for quality assurance of the schedule and future planning. Dispute resolution The emphasis of CPC2013 is on contemporaneous Issue Resolution by an appropriate expert. CPC2013 it also provides for an escalating Dispute Resolution procedure, embracing negotiation, mediation, adjudication and arbitration. Where issues are required to be determined by Issue Resolution within a certain timescale, if the procedure is not invoked there are deeming provisions which determine the outcome of an issue. If Dispute Resolution is required, the expert, adjudicator or arbitrator are either those named in the contract or, in default, they are appointed by the Academy of Experts in London or the CIOB. The default adjudication rules are those of the Scheme for Construction Contracts and the default arbitration procedure is that of the London Court of International Arbitration. Ancillary publications A back-to-back Consultancy Appointment and Subcontract, both of which followed the same principles of time and cost risk management, were published in November 2015 as part of the Time and Cost Management Contract 2015 suite. Industry reception A number of reviews and commentaries have been written on the subject of the Complex Projects Contract. Some have criticized the complexity of the contract itself . 
whereas others have noted the benefits of the clear language, and have commented on the incorporation of its positive features. Reviews published have consistently commented that it will be necessary for the contract to be tested on a live project before the effectiveness can be proven. External links CIOB Time Management's website CIOB Time and Cost Management Contract 2015's website References Construction
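The Risk management and Time management sections above lean on critical-path scheduling concepts such as total float. A minimal sketch of how total float is computed on an activity network follows; the activities, durations and logic links are invented for illustration and do not come from the contract or any project.

```python
# Minimal sketch of the total-float calculation that underpins critical-path-based
# schedules such as a CPM Working Schedule. The network below is invented.

# activity: (duration in days, list of predecessors)
network = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

order = ["A", "B", "C", "D"]          # a topological order of the activities

# Forward pass: earliest start/finish
es, ef = {}, {}
for act in order:
    dur, preds = network[act]
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_end = max(ef.values())

# Backward pass: latest finish/start
lf, ls = {}, {}
for act in reversed(order):
    succs = [a for a in order if act in network[a][1]]
    lf[act] = min((ls[s] for s in succs), default=project_end)
    ls[act] = lf[act] - network[act][0]

for act in order:
    total_float = ls[act] - es[act]
    flag = " (critical)" if total_float == 0 else ""
    print(f"{act}: ES={es[act]} EF={ef[act]} LS={ls[act]} LF={lf[act]} float={total_float}{flag}")
```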
Complex Projects Contract
[ "Engineering" ]
1,372
[ "Construction" ]
42,734,157
https://en.wikipedia.org/wiki/CoCalc
CoCalc (formerly called SageMathCloud) is a web-based cloud computing (SaaS) and course management platform for computational mathematics. It supports editing of Sage worksheets, LaTeX documents and Jupyter notebooks. CoCalc runs an Ubuntu Linux environment that can be accessed through a terminal, additionally giving access to most of the capabilities of Linux. CoCalc offers both free and paid accounts. Subscriptions starting at $14/month provide internet access and more storage and computing resources. One subscription can be used to increase quotas for one project used by multiple accounts. There are subscription plans for courses. Over 200 courses have used CoCalc. Features CoCalc directly supports Sage worksheets, which interactively evaluate Sage code. The worksheets support Markdown and HTML for decoration, and R, Octave, Cython, Julia and others for programming in addition to Sage. CoCalc supports Jupyter notebooks, which are enhanced with real-time synchronization for collaboration and a history recording function. Additionally, there is a full LaTeX editor with collaboration support, a preview of the resulting document and support for SageTeX. With its online Linux terminal, CoCalc also indirectly supports editing and running many other languages, including Java, C/C++, Perl, Ruby, and other popular languages that can be run on Linux. Other packages can be installed on request. Users can have multiple projects on CoCalc, and each project has separate disk space and may be on an entirely different server. Many users can collaborate on a single project, and documents are synced, so multiple users can edit the same file at once, similar to Google Docs. All the data on projects is automatically backed up about every five minutes with bup, and snapshots of previous versions are accessible. Through the terminal, files can be tracked using revision control systems like Git. Development CoCalc is operated by SageMath Inc. The creator and lead developer of CoCalc is William Stein, a former professor of mathematics at the University of Washington who also created the Sage software system. Initial development was funded by the University of Washington and grants from the National Science Foundation and Google. Now CoCalc is mostly funded by paying users. It is intended as a replacement for sagenb, which also lets users edit and share Sage worksheets online. References External links CoCalc homepage CoCalc documentation CoCalc FAQ Google Chrome extension Source code used for running CoCalc Collaborative real-time editors Free mathematics software Free software programmed in Python Free software websites Mathematical software
CoCalc
[ "Mathematics", "Technology" ]
531
[ "Computing websites", "Free software websites", "Free mathematics software", "Collaborative real-time editors", "Mathematical software" ]
47,291,523
https://en.wikipedia.org/wiki/Ingression%20coast
An ingression coast or depressed coast is a generally level coastline that is shaped by the penetration of the sea as a result of crustal movements or a rise in the sea level. Such coasts are characterised by a subaerially formed relief that has previously experienced little deformation by littoral (tidal) processes, because the sea level, which had fallen by more than 100 metres during the last glacial period, did not reach its current level until about 6,000 years ago. Depending on the geomorphological shaping of the flooded landform – e.g. glacially or fluvially formed relief – various types of ingression coast emerge, such as rias, skerry and fjard coasts as well as förde and bodden coasts. See also Marine transgression References Geomorphology Hydrology Coastal and oceanic landforms
Ingression coast
[ "Chemistry", "Engineering", "Environmental_science" ]
177
[ "Hydrology", "Environmental engineering" ]
44,192,006
https://en.wikipedia.org/wiki/Lieb%E2%80%93Oxford%20inequality
In quantum chemistry and physics, the Lieb–Oxford inequality provides a lower bound for the indirect part of the Coulomb energy of a quantum mechanical system. It is named after Elliott H. Lieb and Stephen Oxford. The inequality is of importance for density functional theory and plays a role in the proof of stability of matter. Introduction In classical physics, one can calculate the Coulomb energy of a configuration of charged particles in the following way. First, calculate the charge density , where is a function of the coordinates . Second, calculate the Coulomb energy by integrating: In other words, for each pair of points and , this expression calculates the energy related to the fact that the charge at is attracted to or repelled from the charge at . The factor of corrects for double-counting the pairs of points. In quantum mechanics, it is also possible to calculate a charge density , which is a function of . More specifically, is defined as the expectation value of charge density at each point. But in this case, the above formula for Coulomb energy is not correct, due to exchange and correlation effects. The above, classical formula for Coulomb energy is then called the "direct" part of Coulomb energy. To get the actual Coulomb energy, it is necessary to add a correction term, called the "indirect" part of Coulomb energy. The Lieb–Oxford inequality concerns this indirect part. It is relevant in density functional theory, where the expectation value ρ plays a central role. Statement of the inequality For a quantum mechanical system of particles, each with charge , the -particle density is denoted by The function is only assumed to be non-negative and normalized. Thus the following applies to particles with any "statistics". For example, if the system is described by a normalised square integrable -particle wave function then More generally, in the case of particles with spin having spin states per particle and with corresponding wave function the -particle density is given by Alternatively, if the system is described by a density matrix , then is the diagonal The electrostatic energy of the system is defined as For , the single particle charge density is given by and the direct part of the Coulomb energy of the system of particles is defined as the electrostatic energy associated with the charge density , i.e. The Lieb–Oxford inequality states that the difference between the true energy and its semiclassical approximation is bounded from below as where is a constant independent of the particle number . is referred to as the indirect part of the Coulomb energy and in density functional theory more commonly as the exchange plus correlation energy. A similar bound exists if the particles have different charges . No upper bound is possible for . The optimal constant While the original proof yielded the constant , Lieb and Oxford managed to refine this result to . Later, the same method of proof was used to further improve the constant to . It is only recently that the constant was decreased to . With these constants the inequality holds for any particle number . The constant can be further improved if the particle number is restricted. In the case of a single particle the Coulomb energy vanishes, , and the smallest possible constant can be computed explicitly as . The corresponding variational equation for the optimal is the Lane–Emden equation of order 3. For two particles () it is known that the smallest possible constant satisfies . 
In general it can be proved that the optimal constants increase with the number of particles, i.e. , and converge in the limit of large to the best constant in the inequality (). Any lower bound on the optimal constant for fixed particle number is also a lower bound on the optimal constant . The best numerical lower bound was obtained for where . This bound has been obtained by considering an exponential density. For the same particle number a uniform density gives . The largest proved lower bound on the best constant is , which was first proven by Cotar and Petrache. The same lower bound was later obtained in using a uniform electron gas, melted in the neighborhood of its surface, by Lewin, Lieb & Seiringer. Hence, to summarise, the best known bounds for are . The Dirac constant Historically, the first approximation of the indirect part of the Coulomb energy in terms of the single particle charge density was given by Paul Dirac in 1930 for fermions. The wave function under consideration is With the aim of evoking perturbation theory, one considers the eigenfunctions of the Laplacian in a large cubic box of volume and sets where forms an orthonormal basis of . The allowed values of are with . For large , , and fixed , the indirect part of the Coulomb energy can be computed to be with . This result can be compared to the lower bound (). In contrast to Dirac's approximation the Lieb–Oxford inequality does not include the number of spin states on the right-hand side. The dependence on in Dirac's formula is a consequence of his specific choice of wave functions and not a general feature. Generalisations The constant in () can be made smaller at the price of adding another term to the right-hand side. By including a term that involves the gradient of a power of the single particle charge density , the constant can be improved to . Thus, for a uniform density system . References Further reading Inequalities Density functional theory
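For orientation, the inequality is normally written in the literature in the form below, using the notation defined earlier (ρ_P for the single-particle charge density, D for the direct Coulomb energy, E_P for the electrostatic energy); C denotes the constant whose bounds are discussed above. This is the standard textbook form rather than a quotation from this article.

```latex
% Standard form of the Lieb--Oxford bound, in the notation defined earlier.
E_P - D(\rho_P) \;\ge\; -\,C \int_{\mathbb{R}^3} \rho_P(x)^{4/3}\,\mathrm{d}^3x ,
\qquad
D(\rho_P) = \frac{1}{2}\iint \frac{\rho_P(x)\,\rho_P(y)}{|x-y|}\,\mathrm{d}^3x\,\mathrm{d}^3y .
```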
Lieb–Oxford inequality
[ "Physics", "Chemistry", "Mathematics" ]
1,111
[ "Mathematical theorems", "Density functional theory", "Quantum chemistry", "Quantum mechanics", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
44,194,851
https://en.wikipedia.org/wiki/Beta-sandwich
Beta-sandwich or β-sandwich domains, consisting of 80 to 350 amino acids, occur commonly in proteins. They are characterized by two opposing antiparallel beta sheets (β-sheets). The number of strands found in such domains may differ from one protein to another. β-sandwich domains are subdivided into a variety of different folds. The immunoglobulin-type fold found in antibodies (Ig-fold) consists of a sandwich arrangement of 7-9 antiparallel β-strands arranged in two β-sheets with a Greek-key topology. The Greek-key topology is also found in human transthyretin. The jelly-roll topology is found in carbohydrate-binding proteins such as concanavalin A and various lectins, in the collagen-binding domain of Staphylococcus aureus adhesin, and in modules that bind fibronectin as found in Tenascin (Third Fibronectin Type III Repeat). The L-type lectin domain is a variation of the jelly-roll fold. The C2 domain in its typical version (PKC-C2) is a β-sandwich composed of eight β-strands. References Protein structural motifs
Beta-sandwich
[ "Chemistry", "Biology" ]
256
[ "Biochemistry stubs", "Protein structural motifs", "Protein stubs", "Protein classification" ]
44,195,454
https://en.wikipedia.org/wiki/Cinnamosma%20fragrans
Cinnamosma fragrans is a species of flowering plant in the family Canellaceae. It is endemic to Madagascar, where it is commonly known as saro. Description Cinnamosma fragrans is a shrub or medium-sized tree, growing up to 8 meters tall. It can be distinguished from the other species of Cinnamosma by its oval-shaped fruits; the fruits of C. macrocarpa and C. madagascariensis are globose. Range and habitat Cinnamosma fragrans is native to the provinces of Antsiranana and Mahajanga in northern and western Madagascar. It is widespread in dry deciduous forests between sea level and 500 meters elevation. It typically grows on unconsolidated sands, sandstone, or limestone substrates. There are dense populations in the Melaky and Diana regions. The species' estimated extent of occurrence (EOO) is 151,773 km2. Specimens collected from higher-elevation subhumid forests are misidentified specimens of C. madagascariensis or C. macrocarpa. Uses Cinnamosma fragrans is a traditional medicinal plant used to treat respiratory problems and gastrointestinal infections. The leaves of the plant are harvested in Mahajanga Province to make essential oil for national and international trade. References Canellaceae Endemic flora of Madagascar Taxa named by Henri Ernest Baillon Flora of the Madagascar dry deciduous forests Essential oils
Cinnamosma fragrans
[ "Chemistry" ]
292
[ "Essential oils", "Natural products" ]
44,198,675
https://en.wikipedia.org/wiki/Binomial%20sum%20variance%20inequality
The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters. In probability theory and statistics, the sum of independent binomial random variables is itself a binomial random variable if all the component variables share the same success probability. If success probabilities differ, the probability distribution of the sum is not binomial. The lack of uniformity in success probabilities across independent trials leads to a smaller variance, and the inequality is a special case of a more general theorem involving the expected value of convex functions. In some statistical applications, the standard binomial variance estimator can be used even if the component probabilities differ, though with a variance estimate that has an upward bias. Inequality statement Consider the sum, Z, of two independent binomial random variables, X ~ B(m0, p0) and Y ~ B(m1, p1), where Z = X + Y. Then, the variance of Z is less than or equal to its variance under the assumption that p0 = p1 = , that is, if Z had a binomial distribution with the success probability equal to the average of X and Y's probabilities. Symbolically, . Proof We wish to prove that We will prove this inequality by finding an expression for Var(Z) and substituting it on the left-hand side, then showing that the inequality always holds. If Z has a binomial distribution with parameters n and p, then the expected value of Z is given by E[Z] = np and the variance of Z is given by Var[Z] = np(1 – p). Letting n = m0 + m1 and substituting E[Z] for np gives The random variables X and Y are independent, so the variance of the sum is equal to the sum of the variances, that is In order to prove the theorem, it is therefore sufficient to prove that Substituting E[X] + E[Y] for E[Z] gives Multiplying out the brackets and subtracting E[X] + E[Y] from both sides yields Multiplying out the brackets yields Subtracting E[X] and E[Y] from both sides and reversing the inequality gives Expanding the right-hand side gives Multiplying by yields Deducting the right-hand side gives the relation or equivalently The square of a real number is always greater than or equal to zero, so this is true for all independent binomial distributions that X and Y could take. This is sufficient to prove the theorem. Although this proof was developed for the sum of two variables, it is easily generalized to sums of more than two variables. Additionally, if the individual success probabilities are known, then the variance is known to take the form where is the average probability and . This expression also implies that the variance is always less than that of the binomial distribution with , because the standard expression for the variance is decreased by ns2, a positive number. Applications The inequality can be useful in the context of multiple testing, where many statistical hypothesis tests are conducted within a particular study. Each test can be treated as a Bernoulli variable with a success probability p. Consider the total number of positive tests as a random variable denoted by S. This quantity is important in the estimation of false discovery rates (FDR), which quantify uncertainty in the test results. If the null hypothesis is true for some tests and the alternative hypothesis is true for other tests, then success probabilities are likely to differ between these two groups. 
However, the variance inequality theorem states that if the tests are independent, the variance of S will be no greater than it would be under a binomial distribution. References Probability theorems Theorems in statistics Statistical inequalities
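A direct numerical check of the inequality is straightforward. In the sketch below the pooled success probability is taken to be the weighted mean E[Z]/n implied by the proof's substitution; this is an interpretation of the statement rather than a quotation.

```python
# Numerical check of the binomial sum variance inequality. Following the proof's
# substitution E[Z] = n * p_bar, the pooled probability is taken here as the
# weighted mean p_bar = (m0*p0 + m1*p1) / (m0 + m1).

def var_sum(m0, p0, m1, p1):
    """Exact variance of Z = X + Y for independent X ~ B(m0, p0), Y ~ B(m1, p1)."""
    return m0 * p0 * (1 - p0) + m1 * p1 * (1 - p1)

def var_binomial_pooled(m0, p0, m1, p1):
    """Variance of a single binomial with n = m0 + m1 and the weighted-mean probability."""
    n = m0 + m1
    p_bar = (m0 * p0 + m1 * p1) / n
    return n * p_bar * (1 - p_bar)

cases = [(10, 0.2, 10, 0.8), (50, 0.1, 5, 0.9), (30, 0.5, 30, 0.5)]
for m0, p0, m1, p1 in cases:
    lhs = var_sum(m0, p0, m1, p1)
    rhs = var_binomial_pooled(m0, p0, m1, p1)
    print(f"m0={m0}, p0={p0}, m1={m1}, p1={p1}: Var(Z)={lhs:.3f} <= {rhs:.3f} ({lhs <= rhs})")
```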
Binomial sum variance inequality
[ "Mathematics" ]
806
[ "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
49,031,124
https://en.wikipedia.org/wiki/Nitrokey
Nitrokey is an open-source USB key used to enable the secure encryption and signing of data. The secret keys are always stored inside the Nitrokey which protects against malware (such as computer viruses) and attackers. A user-chosen PIN and a tamper-proof smart card protect the Nitrokey in case of loss and theft. The hardware and software of Nitrokey are open-source. The free software and open hardware enables independent parties to verify the security of the device. Nitrokey is supported on Microsoft Windows, macOS, Linux, and BSD. History In 2008 Jan Suhr, Rudolf Böddeker, and another friend were travelling and found themselves looking to use encrypted emails in internet cafés, which meant the secret keys had to remain secure against computer viruses. Some proprietary USB dongles existed at the time, but lacked in certain ways. Consequently, they established as an open source project - Crypto Stick - in August 2008 which grew to become Nitrokey. It was a spare-time project of the founders to develop a hardware solution to enable the secure usage of email encryption. The first version of the Crypto Stick was released on 27 December 2009. In late 2014, the founders decided to professionalize the project, which was renamed Nitrokey. Nitrokey's firmware was audited by German cybersecurity firm Cure53 in May 2015, and its hardware was audited by the same company in August 2015. The first four Nitrokey models became available on 18 September 2015. Technical features Several Nitrokey models exist which each support different standards. For reference S/MIME is an email encryption standard popular with businesses while OpenPGP can be used to encrypt emails and also certificates used to login to servers with OpenVPN or OpenSSH. One-time passwords are similar to TANs and used as a secondary security measure in addition to ordinary passwords. Nitrokey supports the HMAC-based One-time Password Algorithm (HOTP, RFC 4226) and Time-based One-time Password Algorithm (TOTP, RFC 6238), which are compatible with Google Authenticator. The Nitrokey Storage product has the same features as the Nitrokey Pro 2 and additionally contains an encrypted mass storage. Characteristics Nitrokey's devices store secret keys internally. As with earlier technologies including the trusted platform module they are not readable on demand. This reduces the likelihood of a private key being accidentally leaked which is a risk with software-based public key cryptography. The keys stored in this way are also not known to the manufacturer. Supported algorithms include AES-256 and RSA with key lengths of up to 2048 bits or 4096 bits depending on the model. For accounts that accept Nitrokey credentials, a user-chosen PIN can be used to protect these against unauthorized access in case of loss or theft. However, loss of or damage to a Nitrokey (which is designed to last for 5-10 years) can also prevent the key's owner from being able to access his or her accounts. To guard against this, it is possible to generate keys in software so that they may be securely backed up to the best of the user's ability before they undergo a one-way transfer to the secure storage of a Nitrokey. Nitrokey is published as open source software and free software which ensures a wide range of cross platform support including Microsoft Windows, macOS, Linux, and BSD. It is designed to be usable with popular software such as Microsoft Outlook, Mozilla Thunderbird, and OpenSSH. 
It is also open hardware, to enable independent reviews of the source code and hardware layout and to ensure the absence of backdoors and other security flaws. Philosophy Nitrokey's developers believe that proprietary systems cannot provide strong security and that security systems need to be open source. For instance, there have been cases in which the NSA has intercepted security devices being shipped and implanted backdoors into them. In 2011, RSA was hacked and the secret keys of SecurID tokens were stolen, which allowed attackers to circumvent their authentication. As revealed in 2010, many FIPS 140-2 Level 2 certified USB storage devices from various manufacturers could easily be cracked by using a default password. Because it is open source and transparent, Nitrokey aims to provide a highly secure system and to avoid the security issues that its proprietary rivals face. Nitrokey's mission is to provide the best open-source security key to protect the digital lives of its users. References External links Authentication methods Computer access control Open hardware organizations and companies Open hardware electronic devices Open-source hardware Cryptographic hardware
Nitrokey
[ "Engineering" ]
968
[ "Cybersecurity engineering", "Computer access control" ]
49,031,352
https://en.wikipedia.org/wiki/Single-parameter%20utility
In mechanism design, an agent is said to have single-parameter utility if his valuation of the possible outcomes can be represented by a single number. For example, in an auction for a single item, the utilities of all agents are single-parametric, since they can be represented by their monetary evaluation of the item. In contrast, in a combinatorial auction for two or more related items, the utilities are usually not single-parametric, since they are usually represented by their evaluations to all possible bundles of items. Notation There is a set of possible outcomes. There are agents which have different valuations for each outcome. In general, each agent can assign a different and unrelated value to every outcome in . In the special case of single-parameter utility, each agent has a publicly known outcome proper subset which are the "winning outcomes" for agent (e.g., in a single-item auction, contains the outcome in which agent wins the item). For every agent, there is a number which represents the "winning-value" of . The agent's valuation of the outcomes in can take one of two values: for each outcome in ; 0 for each outcome in . The vector of the winning-values of all agents is denoted by . For every agent , the vector of all winning-values of the other agents is denoted by . So . A social choice function is a function that takes as input the value-vector and returns an outcome . It is denoted by or . Monotonicity The weak monotonicity property has a special form in single-parameter domains. A social choice function is weakly-monotonic if for every agent and every , if: and then: I.e, if agent wins by declaring a certain value, then he can also win by declaring a higher value (when the declarations of the other agents are the same). The monotonicity property can be generalized to randomized mechanisms, which return a probability-distribution over the space . The WMON property implies that for every agent and every , the function: is a weakly-increasing function of . Critical value For every weakly-monotone social-choice function, for every agent and for every vector , there is a critical value , such that agent wins if-and-only-if his bid is at least . For example, in a second-price auction, the critical value for agent is the highest bid among the other agents. In single-parameter environments, deterministic truthful mechanisms have a very specific format. Any deterministic truthful mechanism is fully specified by the set of functions c. Agent wins if and only if his bid is at least , and in that case, he pays exactly . Deterministic implementation It is known that, in any domain, weak monotonicity is a necessary condition for implementability. I.e, a social-choice function can be implemented by a truthful mechanism, only if it is weakly-monotone. In a single-parameter domain, weak monotonicity is also a sufficient condition for implementability. I.e, for every weakly-monotonic social-choice function, there is a deterministic truthful mechanism that implements it. This means that it is possible to implement various non-linear social-choice functions, e.g. maximizing the sum-of-squares of values or the min-max value. The mechanism should work in the following way: Ask the agents to reveal their valuations, . Select the outcome based on the social-choice function: . Every winning agent (every agent such that ) pays a price equal to the critical value: . Every losing agent (every agent such that ) pays nothing: . 
This mechanism is truthful, because the net utility of each agent is his winning-value minus the critical value if he wins, and 0 if he loses. Hence, the agent prefers to win whenever his winning-value is above the critical value and to lose whenever it is below, which is exactly what happens when he tells the truth. Randomized implementation A randomized mechanism is a probability-distribution on deterministic mechanisms. A randomized mechanism is called truthful-in-expectation if truth-telling gives the agent the largest expected utility. In a randomized mechanism, every agent has a probability of winning (as a function of his declared value) and an expected payment. In a single-parameter domain, a randomized mechanism is truthful-in-expectation if-and-only-if: The probability of winning is a weakly-increasing function of the agent's declared value; The expected payment of an agent is determined by the winning-probability function, in the same way that the critical value determines the payment in the deterministic case. Note that in a deterministic mechanism the winning probability is either 0 or 1, so the first condition reduces to weak monotonicity of the outcome function and the second condition reduces to charging each winning agent his critical value. Single-parameter vs. multi-parameter domains When the utilities are not single-parametric (e.g. in combinatorial auctions), the mechanism design problem is much more complicated. The VCG mechanism is one of the only mechanisms that works for such general valuations. See also Single peaked preferences References Mechanism design
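A concrete instance of the deterministic construction described above is a single-item auction: the allocation rule is monotone (the highest bidder wins), and the winner's critical value is the second-highest bid, which is exactly what he pays. The sketch below is our own illustration; the function names are not from any standard library.

```python
from typing import Dict, Tuple

def truthful_single_item_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Monotone allocation plus critical-value payment (a second-price auction).

    The winner is the highest bidder; raising a winning bid keeps it winning
    (weak monotonicity). The winner's critical value is the smallest bid with
    which he would still win, i.e. the second-highest bid, and that is what he
    pays. Losing agents pay nothing.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    critical_value = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, critical_value

if __name__ == "__main__":
    print(truthful_single_item_auction({"alice": 10.0, "bob": 7.0, "carol": 3.0}))
    # ('alice', 7.0): alice wins and pays her critical value.
```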
Single-parameter utility
[ "Mathematics" ]
1,022
[ "Game theory", "Mechanism design" ]
49,032,906
https://en.wikipedia.org/wiki/David%20Benney
David John Benney (8 April 1930 – 9 October 2015) was a New Zealand applied mathematician, known for work on the nonlinear partial differential equations of fluid dynamics. Education and early life Born in Wellington, New Zealand, on 8 April 1930 to Cecil Henry (Matt) Benney and Phyllis Marjorie Jenkins, Benney was educated at Wellington College. He graduated BSc from Victoria University College in 1950, and MSc from the same institution in 1951. He then went to Emmanuel College, Cambridge, from where he graduated BA in the Mathematical Tripos in 1954. He was at Canterbury University College for two years as a lecturer, before taking leave of absence in August 1957 to undertake doctoral studies at Massachusetts Institute of Technology (MIT), graduating PhD in 1959. Career and research Benney joined the mathematics faculty at MIT in 1960. He spent the rest of his career there, as a prolific researcher in fluid dynamics and supervisor of students, becoming emeritus professor. He received a Guggenheim Fellowship in 1964. Notes References 1930 births 2015 deaths New Zealand mathematicians Fluid dynamicists Victoria University of Wellington alumni Academic staff of the University of Canterbury Alumni of Emmanuel College, Cambridge Massachusetts Institute of Technology School of Science faculty People educated at Wellington College, Wellington Massachusetts Institute of Technology School of Science alumni
David Benney
[ "Chemistry" ]
251
[ "Fluid dynamicists", "Fluid dynamics" ]
49,037,112
https://en.wikipedia.org/wiki/Q-plate
A q-plate is an optical device that can form a light beam with orbital angular momentum (OAM) from a beam with well-defined spin angular momentum (SAM). Q-plates are based on the SAM-OAM coupling that may occur in media that are both anisotropic and inhomogeneous, such as an inhomogeneous anisotropic birefringent waveplate. Q-plates are also currently realized using total internal reflection devices, liquid crystals, metasurfaces based on polymers, and sub-wavelength gratings. The sign of the OAM is controlled by the input beam's polarization. References Optical components Nonlinear optics
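The SAM-to-OAM coupling can be summarized compactly. The relation below is the commonly quoted action of a tuned (half-wave) q-plate of topological charge q on circularly polarized light carrying orbital angular momentum ℓ per photon; the notation is ours and is meant only as a schematic summary of the effect described above, not a quotation from a specific source.

```latex
% Action of a tuned q-plate of charge q on circular polarization states.
% |L>, |R>: left/right circular polarization; \ell: input OAM per photon (units of \hbar).
\begin{aligned}
  |L,\ \ell\rangle &\;\longrightarrow\; |R,\ \ell + 2q\rangle,\\
  |R,\ \ell\rangle &\;\longrightarrow\; |L,\ \ell - 2q\rangle.
\end{aligned}
```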
Q-plate
[ "Materials_science", "Technology", "Engineering" ]
141
[ "Glass engineering and science", "Optical components", "Components" ]
49,039,963
https://en.wikipedia.org/wiki/Anagestone
Anagestone, also known as 3-deketo-6α-methyl-17α-hydroxyprogesterone or as 6α-methyl-17α-hydroxypregn-4-en-20-one, is a progestin which was never marketed. An acylated derivative, anagestone acetate, was formerly used clinically as a pharmaceutical drug. When the name anagestone is used as a synonym for anagestone acetate, it refers to the acetate ester rather than to anagestone itself. References Abandoned drugs Ketones Pregnanes Progestogens
Anagestone
[ "Chemistry" ]
130
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
49,039,968
https://en.wikipedia.org/wiki/Anagestone%20acetate
Anagestone acetate, sold under the brand names Anatropin and Neo-Novum, is a progestin medication which was withdrawn from medical use due to carcinogenicity observed in animal studies. Medical uses Anagestone acetate was used in combination with the estrogen mestranol as a combined birth control pill. Pharmacology Based on its chemical structure, namely the lack of a C3 ketone, it is probable that anagestone acetate is a prodrug of medroxyprogesterone acetate (the 3-keto analogue). Chemistry Anagestone acetate, also known as 3-deketo-6α-methyl-17α-acetoxyprogesterone or as 6α-methyl-17α-acetoxypregn-4-en-20-one, is a synthetic pregnane steroid and a derivative of progesterone and 17α-hydroxyprogesterone. It is the C17α acetate ester of anagestone, which, in contrast to anagestone acetate, was never marketed. Anagestone acetate is closely related structurally to medroxyprogesterone acetate (6α-methyl-17α-acetoxyprogesterone). History Anagestone acetate was introduced in combination with mestranol as a birth control pill in 1968 by Ortho Pharmaceutical. It was withdrawn in 1969. In 1969, along with a variety of other progestogens including progesterone, chlormadinone acetate, megestrol acetate, medroxyprogesterone acetate, ethynerone, and chloroethynyl norgestrel, anagestone acetate was found to induce the development of mammary gland tumors in Beagle dogs after extensive treatment (2–7 years) with very high doses (10–25 times the recommended human dose), though notably not with 1–2 times the human dosage. In contrast, the non-halogenated 19-nortestosterone derivatives norgestrel, norethisterone, noretynodrel, and etynodiol diacetate were not found to produce such nodules. Because of these findings, anagestone acetate was voluntarily withdrawn from the market by the manufacturer in 1969. The findings also led to the virtual disappearance of most 17α-hydroxyprogesterone derivatives as hormonal contraceptives from the market (though medroxyprogesterone acetate, cyproterone acetate, and chlormadinone acetate have continued to be used). According to Hughes et al., "It is still doubtful how much relevance these findings have for humans as the dog mammary gland seems to be the only one which can be directly maintained by progestogens." Subsequent research revealed species differences between dogs and humans and established that there is no similar risk in humans. Society and culture Generic names Anagestone acetate is the generic name of the drug and its . It is also known by its developmental code name ORF-1658. Brand names Anagestone acetate was marketed under the brand names Anatropin and Neo-Novum, the latter in combination with the estrogen mestranol. Availability Anagestone acetate was withdrawn from the market and is no longer available. See also Acetomepregenol References Acetate esters Ketones Pregnanes Prodrugs Progestogen esters Progestogens Withdrawn drugs
Anagestone acetate
[ "Chemistry" ]
745
[ "Ketones", "Functional groups", "Drug safety", "Prodrugs", "Chemicals in medicine", "Withdrawn drugs" ]
49,040,272
https://en.wikipedia.org/wiki/Molecular%20Biology%20Reports
Molecular Biology Reports is a monthly peer-reviewed scientific journal covering research on normal and pathological molecular processes. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.316. References External links English-language journals Molecular and cellular biology journals Springer Science+Business Media academic journals Monthly journals Academic journals established in 1974
Molecular Biology Reports
[ "Chemistry" ]
82
[ "Molecular and cellular biology journals", "Molecular biology" ]
49,040,767
https://en.wikipedia.org/wiki/Spacer%20patterning
Spacer patterning is a technique employed for patterning features with linewidths smaller than can be achieved by conventional lithography. In the most general sense, the spacer is a layer that is deposited over a pre-patterned feature, often called the mandrel. The spacer is subsequently etched back so that the spacer portion covering the mandrel is etched away while the spacer portion on the sidewall remains. The mandrel may then be removed, leaving two spacers (one for each edge) for each mandrel. The spacers may be further trimmed to narrower widths, especially to act as mandrels for a subsequent 2nd spacer formation. Hence this is a readily practiced form of multiple patterning. Alternatively, one of the two spacers may be removed and the remaining one trimmed to a much smaller final linewidth. Whereas immersion lithography has a resolution of ~40 nm lines and spaces, spacer patterning may be applied to attain 20 nm. This resolution improvement technique is also known as Self-Aligned Double Patterning (SADP). SADP may be re-applied for even higher resolution, and has already been demonstrated for 15 nm NAND flash memory. Spacer patterning has also been adopted for sub-20 nm logic nodes, e.g., 14 nm and 10 nm. At advanced nodes, spacer-based patterning can reduce the number of masks used for some cases by a factor of two. Spacer Patterning Without Mandrel Removal In the case where the mandrel is the MOSFET gate stack, the mandrel is not removed after the spacer is etched back to leave only the sidewall portion. The silicon nitride sidewall spacer is retained to protect the gate stack and underlying gate oxide during subsequent processing. Self-Aligned Anti-Spacer Double Patterning A related approach derived from self-aligned spacer double patterning is so-called "anti-spacer" double patterning. In this approach a first layer coating the mandrel is eventually removed, while a second coated layer over the first layer is planarized and retained. A purely spin-on and wet-processed approach has been demonstrated. Spacer-Is-Dielectric (SID) Spacers which define conducting features need to be cut to avoid forming loops. In the alternative spacer-is-dielectric (SID) approach, the spacers define dielectric spaces between conducting features, and so no longer need cuts. The mandrel definition becomes more strategic in the layout, and there is no longer a preference for 1D line-like features. The SID approach has gained popularity due to its flexibility with minimal additional mask exposures. The anti-spacer double patterning approach described above naturally fits the SID approach since an additional layer is deposited after the spacer before its removal. References Lithography (microfabrication)
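The pitch-division arithmetic behind self-aligned double patterning is simple: each spacer pass doubles the line density of the previous pattern, halving the half-pitch. The short sketch below is our own illustration of that arithmetic (not taken from any process design kit), using the ~40 nm immersion-lithography lines/spaces figure quoted above as the starting point.

```python
def half_pitch_after_spacer_passes(litho_half_pitch_nm: float, passes: int) -> float:
    """Each self-aligned spacer pass doubles line density, halving the half-pitch."""
    return litho_half_pitch_nm / (2 ** passes)

if __name__ == "__main__":
    # ~40 nm lines and spaces from immersion lithography, as noted in the article:
    print(half_pitch_after_spacer_passes(40, 1))  # 20.0 nm after one SADP pass
    print(half_pitch_after_spacer_passes(40, 2))  # 10.0 nm after a second pass (SAQP)
```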
Spacer patterning
[ "Materials_science" ]
603
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
49,041,186
https://en.wikipedia.org/wiki/R7%20%28drug%29
R7 is a small-molecule flavonoid and orally active, potent, and selective agonist of the tropomyosin receptor kinase B (TrkB) – the main signaling receptor for the neurotrophin brain-derived neurotrophic factor (BDNF) – which is under development for the treatment of Alzheimer's disease. It is a structural modification and prodrug of tropoflavin (7,8-DHF) with improved potency and pharmacokinetics, namely oral bioavailability and duration. Discovery R7 was synthesized by the same researchers who were involved in the discovery of tropoflavin. A patent was filed for R7 in 2013 and was published in 2015. In 2016, it was reported to be in the preclinical stage of development. R7 was superseded by R13 because while R7 had a good drug profile in animals, it showed almost no conversion into tropoflavin in human liver microsomes. Tropoflavin, a naturally occurring flavonoid, was found to act as an agonist of the TrkB with nanomolar affinity (Kd ≈ 320 nM). Due to the presence of a vulnerable catechol group on its 2-phenyl-4H-chromene ring, tropoflavin is extensively conjugated via glucuronidation, sulfation, and methylation during first-pass metabolism in the liver and has a poor oral bioavailability of only 5% in mice upon oral administration. As such, tropoflavin itself is a poor candidate for clinical development as an oral medication. R7 is a derivative of tropoflavin with carbamate moieties on its hydroxyl groups, thereby protecting it from metabolism. Pharmacokinetics As R7 is a slightly larger molecule than tropoflavin, 72.5 mg R7 is molecularly equivalent to 50 mg tropoflavin. Relative to a roughly molecularly equivalent dose of tropoflavin, the area-under-curve levels of R7 were found to be 7.2-fold higher upon oral administration to mice, and R7 hence has a greatly improved oral bioavailability in mice of approximately 35%. Moreover, whereas tropoflavin itself is mostly metabolized in mice within 30 minutes, tropoflavin as a metabolite was still detectable in plasma at 8 hours after administration with R7, indicating that R7 sustainably releases tropoflavin into circulation. In accordance, the terminal half-life of R7 is about 195 minutes (3.25 hours) in mice. The Tmax of R7 is about 60 minutes in mice, and its Cmax for a 78 mg/kg dose was 262 ng/mL, whereas that for a 50 mg/kg dose of tropoflavin was 70 ng/mL. Animal studies Like tropoflavin, administration of R7 has been found to activate the TrkB in vivo in the mouse brain. Moreover, R7 was found to potently activate the TrkB and the downstream Akt signaling pathway upon oral administration, an action that was tightly correlated with plasma concentrations of tropoflavin. As such, R7 has shown in vivo efficacy as an agonist of the TrkB, including central activity, similarly to tropoflavin. See also List of investigational antidepressants Tropomyosin receptor kinase B § Agonists References External links 7,8-Dihydoxyflavone and 7,8-substituted flavone derivatives, compositions, and methods related thereto (US 20150274692 A1) Antidementia agents Carbamates Esters Experimental drugs Flavones Neuroprotective agents Nootropics Prodrugs TrkB agonists
R7 (drug)
[ "Chemistry" ]
822
[ "Esters", "Functional groups", "Prodrugs", "Organic compounds", "Chemicals in medicine" ]
49,042,024
https://en.wikipedia.org/wiki/Committee%20on%20the%20Biological%20Effects%20of%20Ionizing%20Radiation
The Committee on the Biological Effects of Ionizing Radiation (BEIR) is a committee of the American National Research Council. It publishes reports on the effects of ionizing radiation. Reports BEIR I 1972: “The Effects on Populations of Exposure to Low Levels of Ionizing Radiation” BEIR II 1977: “Considerations of Health Benefit-Cost Analysis for Activities Involving Ionizing Radiation Exposure and Alternatives” BEIR III 1980: “The Effects on Populations of Exposure to Low Levels of Ionizing Radiation” BEIR IV 1988: “Health Effects of Radon and Other Internally Deposited Alpha-Emitters” BEIR V 1990: “Health Effects of Exposure to Low Levels of Ionizing Radiation” BEIR VI 1999: “The Health Effects of Exposure to Indoor Radon” BEIR VII, Phase 1 1998: “Health Risks from Exposure to Low Levels of Ionizing Radiation, Phase 1” BEIR VII, Phase 2 2006: “Health Risks from Exposure to Low Levels of Ionizing Radiation, Phase 2” References Medical imaging organizations Nuclear medicine organizations
Committee on the Biological Effects of Ionizing Radiation
[ "Engineering" ]
210
[ "Nuclear medicine organizations", "Nuclear organizations" ]
49,043,127
https://en.wikipedia.org/wiki/Profit%20at%20risk
Profit-at-Risk (PaR) is a risk management quantity most often used for electricity portfolios that contain some mixture of generation assets, trading contracts and end-user consumption. It is used to provide a measure of the downside risk to profitability of a portfolio of physical and financial assets, analysed by time periods in which the energy is delivered. For example, the expected profitability and associated downside risk (PaR) might be calculated and monitored for each of the forward-looking 24 months. The measure considers both price risk and volume risk (e.g. due to uncertainty in electricity generation volumes or consumer demand). Mathematically, the PaR is defined in terms of a lower quantile of the profit distribution of a portfolio. Example If the confidence level for evaluating the PaR is 95%, there is a 5% probability that, due to changing commodity volumes and prices, the profit outcome for a specific period (e.g. December next year) will fall short of the expected profit result by more than the PaR value. Note that the concept of a set 'holding period' does not apply since the period is always up until the realisation of the profit outcome through the delivery of energy. That is, the holding period is different for each of the specific delivery time periods being analysed; e.g., it might be six months for December and therefore seven months for January. History The PaR measure was originally pioneered at Norsk Hydro in Norway as part of an initiative to prepare for deregulation of the electricity market. Petter Longva and Greg Keers co-authored a paper "Risk Management in the Electricity Industry" (IAEE 17th Annual International Conference, 1994) which introduced the PaR method. This led to it being adopted as the basis for electricity market risk management at Norsk Hydro and later by most of the other electricity generating utilities in the Nordic region. The approach was based on Monte Carlo simulations of paired reservoir inflow and spot price outcomes to produce a distribution of expected profit in future reporting periods. This tied in directly with the focus of management reporting on profitability of operations, unlike the Value-at-Risk approach that had been pioneered by JP Morgan for banks focused on their balance sheet risks. Criticism As is the case with Value at Risk, for risk measures like PaR, Earnings-at-Risk (EaR), Liquidity-at-Risk (LaR) or Margin-at-Risk (MaR), the exact implementation rules vary from firm to firm. See also Value at risk Margin at risk Liquidity at risk References Mathematical finance Financial risk management Monte Carlo methods in finance
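A minimal Monte Carlo sketch of one common reading of the definition above: profit for a single delivery month is simulated from uncertain volume and price, and PaR at 95% confidence is the shortfall of the 5th-percentile profit relative to the expected profit. All distributions and numbers below are toy assumptions of ours, not values from the article.

```python
import numpy as np

def profit_at_risk(profits: np.ndarray, confidence: float = 0.95) -> float:
    """PaR = expected profit minus the (1 - confidence) quantile of the profit distribution."""
    return float(np.mean(profits) - np.quantile(profits, 1.0 - confidence))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100_000
    volume_mwh = rng.normal(100_000, 15_000, n).clip(min=0.0)   # uncertain delivered volume
    price = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=n)  # uncertain spot price (per MWh)
    cost_per_mwh = 30.0                                          # assumed production cost
    profits = volume_mwh * (price - cost_per_mwh)
    print(f"expected profit: {profits.mean():,.0f}")
    print(f"PaR (95%):       {profit_at_risk(profits, 0.95):,.0f}")
```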
Profit at risk
[ "Mathematics" ]
528
[ "Applied mathematics", "Mathematical finance" ]
52,909,236
https://en.wikipedia.org/wiki/Biryukov%20equation
In the study of dynamical systems, the Biryukov equation (or Biryukov oscillator), named after Vadim Biryukov (1946), is a non-linear second-order differential equation used to model damped oscillators. In the equation, referred to below as Eq. (1), the damping coefficient f(y) is a piecewise constant function of y which is positive except for small values of y. Eq. (1) is a special case of the Liénard equation; it describes auto-oscillations. On each separate time interval on which f(y) is constant, the solution of (1) is expressed in terms of exponential functions; this expression, Expression (2), can be used for real and complex exponents. The first and second half-period solutions are written in this form. The solution contains four constants of integration; the period and the boundary between the two half-periods also need to be found. A boundary condition is derived from the continuity of y(t) and its derivative. The solution of (1) in the stationary mode is thus obtained by solving a system of algebraic equations. The integration constants are obtained by the Levenberg–Marquardt algorithm. With the quadratic damping f(y) = μ(y² − 1) in place of a piecewise constant function, Eq. (1) is the Van der Pol oscillator, whose solution cannot be expressed by elementary functions in closed form. References Differential equations Analog circuits
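The displayed equations did not survive extraction, so the following is only a numerical sketch of the structure the text describes: a Liénard-type oscillator, assumed here to have the form y'' + f(y)·y' + y = 0 with a unit linear restoring term, whose damping f(y) is piecewise constant, negative for small |y| and positive otherwise. The threshold and damping magnitudes are arbitrary illustrations, not Biryukov's parameters.

```python
import numpy as np

def f_piecewise(y: float, threshold: float = 1.0, strength: float = 0.5) -> float:
    """Piecewise-constant damping: negative (energy input) for small |y|, positive otherwise."""
    return -strength if abs(y) <= threshold else strength

def integrate(y0: float = 0.1, v0: float = 0.0, dt: float = 1e-3, steps: int = 60_000) -> np.ndarray:
    """Fixed-step RK4 integration of y'' + f(y) y' + y = 0."""
    def rhs(state: np.ndarray) -> np.ndarray:
        y, v = state
        return np.array([v, -f_piecewise(y) * v - y])
    state = np.array([y0, v0], dtype=float)
    trajectory = np.empty((steps, 2))
    for i in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        trajectory[i] = state
    return trajectory

if __name__ == "__main__":
    traj = integrate()
    # After the transient, the motion settles onto a limit cycle (an auto-oscillation).
    print("approximate limit-cycle amplitude:", np.abs(traj[-10_000:, 0]).max())
```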
Biryukov equation
[ "Mathematics", "Engineering" ]
267
[ "Analog circuits", "Mathematical objects", "Differential equations", "Equations", "Electronic engineering" ]
52,913,121
https://en.wikipedia.org/wiki/Goldreich-Kylafis%20effect
The Goldreich-Kylafis (GK) effect is a quantum mechanical effect with applications in astrophysics. The theoretical background of the work was published by Peter Goldreich and his then-postdoc Nick Kylafis in a series of two papers in The Astrophysical Journal. The GK effect predicts that, under special conditions, the spectral lines emitted by interstellar molecules should be linearly polarized and the linear polarization vector should reveal the magnetic field direction in the molecular cloud. Even a μG magnetic field is enough for this effect. The lines arise from rotational transitions of molecules, say J=1 to J=0, where J is the rotational quantum number. If the magnetic sublevels of the J=1 level are equally populated, as is usually the case, then the line is unpolarized. However, if the magnetic sublevels are unequally populated, then the line is polarized. Goldreich & Kylafis (1981) showed that, if the radiation field (their own plus external) in which the molecules are embedded is anisotropic, then the magnetic sublevels are unequally populated. Since isotropic radiation fields are practically non-existent in nature (occurring, e.g., only at the center of an isolated, perfectly spherical molecular cloud), the effect should be easily detectable. This is, however, not the case, as some specific conditions are required for detection. These are that the line optical depth of the molecular cloud should be of order unity and that the radiative rates should be comparable to or larger than the collisional rates. Since the observed lines from molecular clouds are broad, due to velocity gradients in the cloud, the GK effect has the potential to reveal the magnetic field direction along the line of sight. It has been reported in star-forming regions, in thermally pulsating (TP-) AGB stars, and recently in the disk around the T Tauri star TW Hya. References Polarization (waves) Magnetism in astronomy Quantum mechanics
Goldreich-Kylafis effect
[ "Physics", "Astronomy" ]
424
[ "Theoretical physics", "Quantum mechanics", "Astrophysics", "Magnetism in astronomy", "Polarization (waves)" ]
60,634,470
https://en.wikipedia.org/wiki/Resonance%20escape%20probability
In nuclear physics, resonance escape probability is the probability that a neutron will slow down from fission energy to thermal energies without being captured by a nuclear resonance. A resonance absorption of a neutron in a nucleus does not produce nuclear fission. The probability of resonance absorption is called the resonance factor , and the sum of the two factors is . Generally, the higher the neutron energy, the lower the probability of absorption, but for some energies, called resonance energies, the resonance factor is very high. These energies depend on the properties of heavy nuclei. Resonance escape probability is highly determined by the heterogeneous geometry of a reactor, because fast neutrons resulting from fission can leave the fuel and slow to thermal energies in a moderator, skipping over resonance energies before reentering the fuel. Resonance escape probability appears in the four factor formula and the six factor formula. To compute it, neutron transport theory is used. Resonant neutron absorption The nucleus can capture a neutron only if the kinetic energy of the neutron is close to the energy of one of the energy levels of the new nucleus formed as a result of capture. The capture cross section of such a neutron by the nucleus increases sharply. The energy at which the neutron-nucleus interaction cross section reaches a maximum is called the resonance energy. The resonance energy range is divided into two parts, the region of resolved and unresolved resonances. The first region occupies the energy interval from 1 eV to Egr. In this region, the energy resolution of the instruments is sufficient to distinguish any resonance peak. Starting from the energy Egr, the distance between resonance peaks becomes smaller than the energy resolution. Subsequently, the resonance peaks are not separated. For heavy elements, the boundary energy Egr≈1 keV. In thermal neutron reactors, the main resonant neutron absorber is Uranium-238. In the table for 238U, several resonance neutron energies Er, the maximum absorption cross sections σa, r in the peak, and the width G of these resonances are given. Effective resonance integral Let us assume that the resonant neutrons move in an infinite system consisting of a moderator and 238U. When colliding with the moderator nuclei, the neutrons are scattered, and with the 238U nuclei, they are absorbed. The former collisions favor the retention and removal of resonant neutrons from the danger zone, while the latter lead to their loss. The probability of avoiding resonance capture (coefficient φ) is related to the density of nuclei NS and the moderating power of the medium ξΣS by the relationship below, The JeFF value is called the effective resonance integral. It characterizes the absorption of neutrons by a single nucleus in the resonance region and is measured in barnes. The use of the effective resonance integral simplifies quantitative calculations of resonance absorption without detailed consideration of neutron interaction at deceleration. The effective resonance integral is usually determined experimentally. It depends on the concentration of 238U and the mutual arrangement of uranium and the moderator. Homogeneous mixtures In a homogeneous mixture of moderator and 238U, the effective resonance integral is found with a good accuracy by the empirical formula below, where N3/N8 is the ratio of moderator and 238U nuclei in the homogeneous mixture, σ3S is the microscopic scattering cross section of the moderator. 
As can be seen from the empirical formula above, the effective resonance integral decreases with increasing 238U concentration. The more 238U nuclei in the mixture, the less likely absorption of the moderating neutrons by a single nucleus will take place. The effect of absorption in some 238U nuclei on absorption in others is called resonance level shielding. It increases with increasing concentration of resonance absorbers. As an example, we can calculate the effective resonance integral in a homogeneous natural uranium-graphite mixture with the ratio N3/N8=215. With the scattering cross section of graphite σCS=4.7 barns, the effective resonance integral is approximately 68 barns. Heterogeneous mixtures In a homogeneous environment, all 238U nuclei are in the same conditions with respect to the resonant neutron flux. In a heterogeneous environment uranium is separated from the moderator, which significantly affects the resonant neutron absorption. Firstly, some of the resonant neutrons become thermal neutrons in the moderator without colliding with uranium nuclei; secondly, resonant neutrons hitting the surface of the fuel elements are almost all absorbed by the thin surface layer. The inner 238U nuclei are shielded by the surface nuclei and participate less in the resonant neutron absorption, and the shielding increases with the fuel element diameter d. Therefore, the effective 238U resonance integral in a heterogeneous reactor depends on the fuel element diameter d. The constant a characterizes the absorption of resonance neutrons by surface nuclei and the constant b by inner 238U nuclei. For each type of nuclear fuel (natural uranium, uranium dioxide, etc.) the constants a and b are measured experimentally. For natural uranium rods a=4.15, b=12.35. For a rod of natural uranium with diameter d=3 cm, the effective resonance integral is approximately 11.3 barns. Comparison of the last two examples shows that the separation of uranium and moderator noticeably decreases neutron absorption in the resonance region. Moderator influence The coefficient φ depends on the ratio of resonance absorption to the moderating power, which reflects the competition of two processes in the resonance region: absorption of neutrons and their deceleration. The cross section Σ, by definition, is analogous to the macroscopic absorption cross section with the microscopic cross section replaced by the effective resonance integral JeFF. It also characterizes the loss of slowing neutrons in the resonance region. As the 238U concentration increases, the absorption of resonant neutrons increases and hence fewer neutrons are slowed down to thermal energies. The resonance absorption is also influenced by the slowing down of neutrons. Collisions with the moderator nuclei take neutrons out of the resonance region and are more intense the greater the moderating power. So, for the same concentration of 238U, the probability of avoiding resonance capture in a uranium-water medium is greater than in a uranium-carbon medium. Let us calculate the probability of avoiding resonance capture in homogeneous and heterogeneous natural uranium-graphite environments. In both media the ratio of carbon and 238U nuclei is NC/N8=215. The diameter of the uranium rod is d=3 cm. Taking into account that ξC=0.159 and σCS=4.7 barns, we obtain for the ratio N8/(ξΣS) a value of about 0.00625 barn−1. Calculating the coefficients φ in the homogeneous and heterogeneous mixtures, we get φhom = e−0.00625·68 = e−0.425 ≈ 0.65 and φhet = e−0.00625·11.3 = e−0.0705 ≈ 0.93. The transition from a homogeneous to a heterogeneous medium slightly reduces the thermal neutron absorption in uranium. 
However, this loss is considerably overlapped by the decrease of the resonance neutron absorption, and the propagation properties of the medium improve. References Literature Nuclear technology Radioactivity
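The relationship between φ, the nuclide density and the moderating power referred to in the "Effective resonance integral" section did not survive extraction. It is presumably the standard exponential form for a homogeneous mixture, reproduced below as a reconstruction (not a quotation) in the notation of the text, with N8 the density of 238U nuclei, ξΣS the moderating power, and Jeff the effective resonance integral; the numerical examples above (0.00625 barn−1, 68 and 11.3 barns, φ ≈ 0.65 and 0.93) are consistent with this form.

```latex
% Probability of avoiding resonance capture (homogeneous mixture, reconstructed form)
\varphi = \exp\!\left( - \frac{N_8 \, J_{\mathrm{eff}}}{\xi \Sigma_S} \right)
```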
Resonance escape probability
[ "Physics", "Chemistry" ]
1,435
[ "Nuclear technology", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radioactivity" ]
55,801,087
https://en.wikipedia.org/wiki/Crime%20harm%20index
A crime harm index is a measurement of crime rates in which crimes are weighted based on how much "harm" they cause. The most simple and most common method of measuring an area's crime rate is to count the number of crimes. In this case, one minor crime (e.g. a shoplifting incident) counts for the same as a single very serious crime (e.g. murder). Leading criminologists have argued in favour of creating a weighted measurement. Lawrence W. Sherman and two other researchers wrote in 2016 that "All crimes are not created equal. Counting them as if they are fosters distortion of risk assessments, resource allocation, and accountability." Most crime harm indices use prison sentencing policies to decide what the "harm score" of an offence should be. The harm score of an offence is the default length of the prison sentence that an offender would receive, if the crime was committed by a single offender. Cambridge Crime Harm Index The Cambridge Crime Harm Index was unveiled in 2016. It was developed by Lawrence W. Sherman, Peter Neyroud and Eleanor Neyroud. It uses sentencing guidelines of England and Wales to calculate the harm score of each crime. The system has already been adopted by several UK police forces. According to the CCHI, the harm score for a crime is the default prison sentence that an offender would receive for committing it, if the crime was committed by a single offender with no prior convictions. For minor crimes that would instead result in a fine, the harm score is the number of days it would take someone with a minimum wage job to earn the money to pay the fine. The Cambridge Crime Harm Index has inspired other crime harm indices for New Zealand, Denmark and Western Australia. It has also been evaluated for use in Scotland, though officers of Police Scotland have noted that it does not reflect Scottish sentencing guidelines. References Harm reduction Crime statistics Index numbers
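A toy illustration of the difference between a raw count and a harm-weighted index, following the Cambridge CHI convention of weighting each offence by its default sentence in days. The offence names and weights below are invented for the example and are not taken from any actual sentencing guideline.

```python
# Hypothetical harm weights: default prison sentence in days for a single first-time offender.
HARM_DAYS = {"shoplifting": 2, "burglary": 180, "robbery": 365, "homicide": 5475}

def crime_count(offences):
    """The conventional measure: every offence counts as one."""
    return len(offences)

def crime_harm_index(offences):
    """Sum of sentence-day weights: one serious crime outweighs many minor ones."""
    return sum(HARM_DAYS[o] for o in offences)

if __name__ == "__main__":
    area_a = ["shoplifting"] * 200                 # many minor crimes
    area_b = ["homicide", "robbery", "burglary"]   # few serious crimes
    print(crime_count(area_a), crime_harm_index(area_a))  # 200 offences, 400 harm-days
    print(crime_count(area_b), crime_harm_index(area_b))  # 3 offences, 6020 harm-days
```

By the raw count, area A looks far worse; by the harm index, area B does, which is the distortion the weighted measure is meant to correct.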
Crime harm index
[ "Mathematics" ]
384
[ "Index numbers", "Mathematical objects", "Numbers" ]
55,804,131
https://en.wikipedia.org/wiki/Ross%20128%20b
Ross 128 b is a confirmed Earth-sized exoplanet, likely rocky, that is orbiting near the inner edge of the habitable zone of the red dwarf star Ross 128, at a distance of from Earth in the constellation of Virgo. The exoplanet was found using a decade's worth of radial velocity data using the European Southern Observatory's HARPS spectrograph (High Accuracy Radial velocity Planet Searcher) at the La Silla Observatory in Chile. Ross 128 b is the nearest exoplanet around a quiet red dwarf, and is considered one of the best candidates for habitability. The planet is only 35% more massive than Earth, receives only 38% more starlight, and is expected to be a temperature suitable for liquid water to exist on the surface, if it has an atmosphere. The planet does not transit its host star, which makes atmospheric characterization very difficult, but this may be possible with the advent of larger telescopes like the James Webb Space Telescope. Physical characteristics Mass, radius, and temperature Due to it being discovered by the radial velocity method, the only known physical parameter for Ross 128 b is its minimum possible mass. The planet is at least , or 1.35 times the mass of Earth (about kg). This is slightly more massive than the similar and nearby Proxima Centauri b, with a minimum mass of . The low mass of Ross 128 b implies that it is most likely a rocky Earth-sized planet with a solid surface. However, its radius, and therefore its density, is not known as no transits of this planet have been observed. Ross 128 b would be (Earth radii) for a pure-iron composition and 3.0 for a pure hydrogen-helium composition, both implausible extremes. For a more plausible Earth-like composition, the planet would need to be about - i.e., 1.1 times the radius of Earth (approximately ). With that radius, Ross 128 b would be slightly denser than Earth, due to how a rocky planet would become more compact as it increases in size. It would give the planet a gravitational pull around , or about 1.12 times that of Earth. A 2019 study predicts a true mass about 1.8 times that of Earth and a radius about 1.6 times that of Earth, with large margins of error. Ross 128 b is calculated to have a temperature similar to that of Earth and potentially conducive to the development of life. The discovery team modelled the planet's potential equilibrium temperature using albedos of 0.100, 0.367, and 0.750. Albedo is the portion of the light that is reflected instead of absorbed by a celestial object. With these three albedo parameters, Ross 128 b would have a Teq of either , , or . For an Earth-like albedo of 0.3, the planet would have an equilibrium temperature of , about 8 Kelvins lower than Earth's average temperature. The actual temperature of Ross 128 b depends on yet-unknown atmospheric parameters, if it has an atmosphere. Host star Ross 128 b orbits the small red dwarf star known as Ross 128. The star is 17% the mass and 20% the radius of that of the Sun. It has a temperature of , a luminosity of , and an age of . For comparison, the Sun has a temperature of and age of , making Ross 128 half the temperature and over twice the age. The star is only 11.03 light-years away, making it one of the 20 closest stars known. In 2018, astronomers, based on near-infrared, high-resolution spectra (APOGEE Spectra), determined the chemical abundances of several elements (C, O, Mg, Al, K, Ca, Ti, and Fe) present in Ross 128, finding that the star has near solar metallicity. 
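The equilibrium-temperature modelling described above can be reproduced with the standard blackbody formula, using the stellar flux of roughly 1.38 times Earth's quoted in the article and the three albedos listed (0.100, 0.367, 0.750). The formula and the solar-constant and Stefan–Boltzmann values below are standard physics; the sketch is our own illustration rather than the discovery team's calculation.

```python
SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0         # solar constant at Earth, W m^-2

def equilibrium_temperature(flux_rel_earth: float, albedo: float) -> float:
    """Equilibrium temperature of a rapidly rotating planet: T = [S(1 - A) / (4 sigma)]^(1/4)."""
    absorbed = flux_rel_earth * S_EARTH * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

if __name__ == "__main__":
    for albedo in (0.100, 0.367, 0.750):
        print(f"A = {albedo:.3f}  ->  T_eq ≈ {equilibrium_temperature(1.38, albedo):.0f} K")
    # With these assumptions this prints roughly 294 K, 269 K and 213 K, and an
    # Earth-like albedo of 0.3 gives about 276 K, broadly consistent with the
    # near-Earth temperature discussed above.
```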
Orbit Ross 128 b is a closely orbiting planet, with a year (orbital period) lasting about 9.9 days. Its semi-major axis is . According to some models of the planet's orbit, its orbit is quite circular, with an eccentricity of around 0.03, but also with a large error range as well. However, if all the orbital models are brought together then the eccentricity is higher at about 0.116, and again this is subject to a large error range. Compared to the Earth's average distance from the Sun of 149 million km, Ross 128 b orbits 20 times closer. At that close distance from its host star, the planet is most likely tidally locked, meaning that one side of the planet would have eternal daylight and the other would be in darkness. A 2024 study of the radial velocity data found an eccentricity of about 0.21 for Ross 128 b, higher than previous estimates and similar to that of Mercury. Given the planet's orbit near the inner edge of the habitable zone, such a high eccentricity would significantly decrease its potential for habitability. Habitability Stellar flux properties Ross 128 b is not confirmed to be orbiting exactly within the habitable zone. It appears to reside within the inner edge, as it receives approximately 38% more sunlight than Earth. The habitable zone is defined as the region around a star where temperatures are just right for a planet with a thick enough atmosphere to support liquid water, a key ingredient in the development of life as we know it. With its moderately high stellar flux, Ross 128 b is likely more prone to water loss, mainly on the side directly facing the star. However, an Earth-like atmosphere, assuming one exists, would be able to distribute the energy received from the star around the planet and allow more areas to potentially hold liquid water. In addition, study author Xavier Bonfils noted the possibility of significant cloud cover on the star-facing side, which would block out much incoming stellar energy and help keep the planet cool. Solar flare potential The planet is considered one of the most Earth-like worlds ever found in relation to its temperature, size and rather quiet host star. Ross 128 b is very close in mass to Earth, only about 35% more massive, and is likely around 10% larger in radius. Gravity on the planet would be only slightly higher. Also, its host star Ross 128 is an evolved star with a stable stellar activity. Many red dwarfs like Proxima Centauri and TRAPPIST-1 are prone to releasing potentially deadly flares caused by powerful magnetic fields. Billions of years of exposure to these flares can potentially strip a planet of its atmosphere and render it sterile with possibly dangerous amounts of radiation. While Ross 128 is known to produce such flares, they are currently much less common and less powerful than those of the previously mentioned stars. Atmospheric potential As of 2017, it is not yet possible to determine if Ross 128 b has an atmosphere because it does not transit the star. However, the James Webb Space Telescope and upcoming massive ground-based telescopes, like the Thirty Meter Telescope and the European Extremely Large Telescope, could analyze the atmosphere of Ross 128 b if it has an atmosphere without the need of transit. This would enable scientists to find biosignatures in the planet's atmosphere, which are chemicals like oxygen, ozone, and methane that are created by known biological processes. See also Kepler-438b, Earth-sized habitable zone planet with a very active host star. 
LHS 1140 b, a huge rocky habitable zone planet around another quiet M-dwarf. List of potentially habitable exoplanets Luyten b, a potentially habitable planet orbiting Luyten's Star. Proxima Centauri b, a similarly sized potentially habitable exoplanet found by the same team in August 2016. TRAPPIST-1, has 7 confirmed planets, 4 that are potentially habitable. TRAPPIST-1d TRAPPIST-1e TRAPPIST-1f TRAPPIST-1g References External links Ross 128 b at NASA Exoplanets discovered in 2017 Exoplanets detected by radial velocity Near-Earth-sized exoplanets Near-Earth-sized exoplanets in the habitable zone Virgo (constellation)
Ross 128 b
[ "Astronomy" ]
1,674
[ "Virgo (constellation)", "Constellations" ]
55,806,018
https://en.wikipedia.org/wiki/Controlled%20Access%20Protection%20Profile
The Controlled Access Protection Profile, also known as CAPP, is a Common Criteria security profile that specifies a set of functional and assurance requirements for information technology products. Software and systems that conform to CAPP standards provide access controls that are capable of enforcing access limitations on individual users and data objects. CAPP-conformant products also provide an audit capability which records the security-relevant events which occur within the system. CAPP is intended for the protection of software and systems where users are assumed to be non-hostile and well-managed, requiring protection primarily against threats of inadvertent or casual attempts to breach the security protections. It is not intended to be applicable to circumstances in which protection is required against determined attempts by hostile and well-funded attackers. It does not fully address the threats posed by malicious system development or administrative personnel, who generally have a higher level of access. The CAPP was derived from the requirements of the C2 class of the U.S. Department of Defense Trusted Computer System Evaluation Criteria and the material upon which those requirements are based. Computer security models
Controlled Access Protection Profile
[ "Technology", "Engineering" ]
217
[ "Computer security stubs", "Computing stubs", "Computer security models", "Cybersecurity engineering" ]
55,809,839
https://en.wikipedia.org/wiki/GW170608
GW170608 was a gravitational wave event that was recorded on 8 June 2017 at 02:01:16.49 UTC by Advanced LIGO. It originated from the merger of two black holes with masses of and . The resulting black hole had a mass around 18 solar masses. About one solar mass was converted to energy in the form of gravitational waves. Event detection The signal was not detected by automated analyses, as the Hanford instrument was undergoing tests at specific frequencies and data from the instrument was not being analyzed. The signal was initially identified by visual inspection of triggers from the Livingston detector. Manual follow-up with the Hanford data revealed a coincident signal. Subsequent investigations determined that the ongoing tests of the Hanford instrument did not affect the recovery of the signal from the Hanford data. Announcement This was the first gravitational wave detection where the scientific article announcing the discovery was posted on the electronic preprint arXiv server before the paper was accepted for publication by the journal. References Gravitational waves June 2017 2017 in science 2017 in outer space Stellar black holes
GW170608
[ "Physics" ]
216
[ "Black holes", "Physical phenomena", "Stellar black holes", "Unsolved problems in physics", "Waves", "Gravitational waves" ]
54,236,356
https://en.wikipedia.org/wiki/Molecular%20phenotyping
Molecular phenotyping describes the technique of quantifying pathway reporter genes, i.e. pre-selected genes that are modulated specifically by metabolic and signaling pathways, in order to infer activity of these pathways. In most cases, molecular phenotyping quantifies changes of pathway reporter gene expression to characterize modulation of pathway activities induced by perturbations such as therapeutic agents or stress in a cellular system in vitro. In such contexts, measurements at early time points are often more informative than later observations because they capture the primary response to the perturbation by the cellular system. Integrated with quantified changes of phenotype induced by the perturbation, molecular phenotyping can identify pathways that contribute to the phenotypic changes. Currently molecular phenotyping uses RNA sequencing and mRNA expression to infer pathway activities. Other technologies and readouts such as mass spectrometry and protein abundance or phosphorylation levels can be potentially used as well. Application in early drug discovery Current data suggest that by quantifying pathway reporter gene expression, molecular phenotyping is able to cluster compounds based on pathway profiles and dissect associations between pathway activities and disease phenotypes simultaneously. Furthermore, molecular phenotyping can be applicable to compounds with a range of binding specificities and is able to triage false positives derived from high-content screening assays. Furthermore, molecular phenotyping allows integration of data derived from in vitro and in vivo models as well as patient data into the drug discovery process. References Molecular biology RNA Gene expression Drug discovery Pharmaceutical industry
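A hypothetical sketch of the basic scoring idea: pathway activity is summarized from the expression changes of pre-selected reporter genes measured after a perturbation. The pathway-to-gene mapping, the placeholder gene names, and the aggregation by mean log2 fold-change are illustrative choices of ours, not a published molecular-phenotyping algorithm.

```python
from statistics import mean
from typing import Dict, List

# Hypothetical reporter-gene panels for two pathways (gene names are placeholders).
PATHWAY_REPORTERS: Dict[str, List[str]] = {
    "pathway_1": ["GENE_A", "GENE_B", "GENE_C"],
    "pathway_2": ["GENE_D", "GENE_E"],
}

def pathway_scores(log2_fold_changes: Dict[str, float]) -> Dict[str, float]:
    """Aggregate perturbation-induced expression changes into per-pathway activity scores."""
    scores: Dict[str, float] = {}
    for pathway, genes in PATHWAY_REPORTERS.items():
        observed = [log2_fold_changes[g] for g in genes if g in log2_fold_changes]
        scores[pathway] = mean(observed) if observed else float("nan")
    return scores

if __name__ == "__main__":
    # Log2 fold-changes measured at an early time point after the perturbation (toy numbers).
    lfc = {"GENE_A": 1.2, "GENE_B": 0.8, "GENE_C": 1.0, "GENE_D": -0.5, "GENE_E": -0.7}
    print(pathway_scores(lfc))  # approximately {'pathway_1': 1.0, 'pathway_2': -0.6}
```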
Molecular phenotyping
[ "Chemistry", "Biology" ]
330
[ "Pharmacology", "Life sciences industry", "Drug discovery", "Pharmaceutical industry", "Gene expression", "Molecular genetics", "Cellular processes", "Medicinal chemistry", "Molecular biology", "Biochemistry" ]
54,241,127
https://en.wikipedia.org/wiki/Hexafluorothioacetone
Hexafluorothioacetone is an organic perfluoro thione compound with formula CF3CSCF3. At standard conditions it is a blue gas. Production Hexafluorothioacetone was first produced by Middleton in 1961 by boiling bis-(perfluoroisopropyl)mercury with sulfur. Properties Hexafluorothioacetone boils at 8 °C. Below this it is a blue liquid. Colour The blue colour is due to absorption in the visible light range with bands at 800–675 nm and 725–400 nm. These bands are due to T1–S0 and S1–S0 transitions. There is also a strong absorption in ultraviolet around 230-190 nm. Reactions Hexafluorothioacetone acts more like a true thiocarbonyl (C=S) than many other thiocarbonyl compounds, because it is not able to form thioenol compounds (=C-S-H), and the sulfur is not in a negative ionized state (C-S−). Hexafluorothioacetone is not attacked by water or oxygen at standard conditions as are many other thiocarbonyls. Bases trigger the formation of a dimer 2,2,4,4-tetrakis-(trifluoromethyl)-1,3-dithietane. Bases includes amines. The dimer can be heated to regenerate the hexafluorothioacetone monomer. The dimer is also produced in a reaction with hexafluoropropene and sulfur with some potassium fluoride. Hexafluorothioacetone reacts with bisulfite to form a Bunte salt CH(CF3)2SSO2−. Thiols reacting with hexafluorothioacetone yield disulfides or a dithiohemiketal: R-SH + C(CF3)2S → R-S-S-CH(CF3)2. R-SH + C(CF3)2S → RSC(CF3)2SH (for example in methanethiol or ethanethiol). With mercaptoacetic acid, instead of a thiohemiketal, water elimination yields a ring shaped molecule called a dithiolanone -CH2C(O)SC(CF3)2S- (2,2-di(trifluoromethyl)-1,3-dithiolan-4-one). Aqueous hydrogen chloride results in the formation of a dimeric disulfide CH(CF3)2SSC(CF3)2Cl. Hydrogen bromide with water yields the similar CH(CF3)2SSC(CF3)2Br. Dry hydrogen iodide does something different and reduces the sulfur making CH(CF3)2SH. Wet hydrogen iodide only reduces to a disulfide CH(CF3)2SSC(CF3)2H. Strong organic acids add water to yield a disulfide compound CH(CF3)2SSC(CF3)2OH. Chlorine and bromine add to hexafluorothioacetone to make CCl(CF3)2SCl and CBr(CF3)2SBr. With diazomethane hexafluorothioacetone produces 2,2,5,5-tetrakis(trifluoromethyl)-l,3-dithiolane, another substituted dithiolane. Diphenyldiazoniethane reacts to form a three membered ring called a thiirane (di-2,2-trifluoromethyl-di-3,3-phenyl-thiirane) Trialkylphosphites (P(OR)3) react to make a trialkoxybis(trifluoromethyl)methylenephosphorane (RO)3P=C(CF3)2 and a thiophosphate (RO)3PS. Hexafluorothioacetone can act as a ligand on nickel. Hexafluorothioacetone is highly reactive to alkenes and dienes combining via addition reactions. With butadiene it reacts even as low as -78 °C to yield 2,2-bis-(trifluoromethyl)-3,6-dihydro-2H-l-thiapyran. See also Hexafluoroacetone References External links Thioketones Perfluorinated compounds Trifluoromethyl compounds Gases with color
Hexafluorothioacetone
[ "Chemistry" ]
992
[ "Functional groups", "Thioketones" ]
62,766,022
https://en.wikipedia.org/wiki/2218%20aluminium%20alloy
2218 aluminium alloy is an alloy in the wrought aluminium-copper family (2000 or 2xxx series). It is one of the most complex grades in the 2000 series, with at least 88.4% aluminium by weight. Unlike most other aluminium-copper alloys, 2218 is a high-workability alloy, with a yield strength of 255 MPa that is relatively low for a 2xxx-series alloy. Despite being highly alloyed, it has good corrosion and oxidation resistance due to the sacrificial anode effect of magnesium inclusions, similar to marine-grade 5xxx-series alloys. Although 2218 is a wrought alloy, owing to its granular structure it can be used in casting and can be precisely machined after casting. It is easy to weld, coat, or glue. Good workability, thermal conductivity and dimensional stability make 2218 alloy a material of choice whenever high-precision parts subject to thermal shocks (especially piston engine cylinders and cylinder heads) are needed. 2218 alloy can be heat treated to increase tensile strength at the expense of workability, with the most common grades being F, T61, T71 and T72. Alternative names for 2218 alloy are A2218 and A92218. Chemical Composition The chemical composition of 2218 alloy is poorly standardized, with several variants in production. All variants include both copper (4%) and magnesium (1.5%) as major alloying elements. Common alloy variants also include 2% of nickel. The alloy composition of 2218 aluminium is: Aluminium: 91.35 to 92.95% Copper: 3.5 to 4.5% Magnesium: 1.2 to 1.8% Nickel: 1.7 to 2.3% Iron: 1% max Silicon: 0.9% max Zinc: 0.25% max Manganese: 0.5% max Tin: 0.25% max Chromium: 0.1% max See also Y alloy (precursor of A2218 with the same major alloying elements) References Aluminium alloy table Aluminium alloys Aluminium–copper alloys
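A small sketch that checks a measured composition against the nominal limits listed above; the limits are copied from this article, while the checking function and the sample composition are our own illustration.

```python
# (min %, max %) weight-percentage limits for 2218 aluminium alloy, as listed above.
LIMITS = {
    "Cu": (3.5, 4.5), "Mg": (1.2, 1.8), "Ni": (1.7, 2.3),
    "Fe": (0.0, 1.0), "Si": (0.0, 0.9), "Zn": (0.0, 0.25),
    "Mn": (0.0, 0.5), "Sn": (0.0, 0.25), "Cr": (0.0, 0.1),
}

def check_2218(composition: dict) -> list:
    """Return the elements whose weight-percentages fall outside the 2218 limits."""
    out_of_spec = []
    for element, (lo, hi) in LIMITS.items():
        value = composition.get(element, 0.0)
        if not lo <= value <= hi:
            out_of_spec.append((element, value, (lo, hi)))
    return out_of_spec

if __name__ == "__main__":
    sample = {"Cu": 4.0, "Mg": 1.5, "Ni": 2.0, "Fe": 0.4, "Si": 0.3}  # balance aluminium
    print(check_2218(sample) or "within 2218 limits")
```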
2218 aluminium alloy
[ "Chemistry" ]
422
[ "Alloys", "Aluminium alloys" ]
62,769,306
https://en.wikipedia.org/wiki/Molecular%20fragmentation%20methods
Molecular fragmentation (mass spectrometry), or molecular dissociation, occurs both in nature and in experiments. It occurs when a complete molecule is rendered into smaller fragments by some energy source, usually ionizing radiation. The resulting fragments can be far more chemically reactive than the original molecule, as in radiation therapy for cancer, and are thus a useful field of inquiry. Different molecular fragmentation methods have been built to break apart molecules, some of which are listed below. Background A major objective of theoretical chemistry and computational chemistry is the calculation of the energy and properties of molecules so that chemical reactivity and material properties can be understood from first principles. As a practical matter, the aim is to complement the knowledge we gain from experiments, particularly where experimental data may be incomplete or very difficult to obtain. High-level ab-initio quantum chemistry methods are known to be an invaluable tool for understanding the structure, energy, and properties of small up to medium-sized molecules. However, the computational time for these calculations grows rapidly with increased size of molecules. One way of dealing with this problem is the molecular fragmentation approach which provides a hierarchy of approximations to the molecular electronic energy. In this approach, large molecules are divided in a systematic way to small fragments, for which high-level ab-initio calculation can be performed with acceptable computational time. The defining characteristic of an energy-based molecular fragmentation method is that the molecule (also cluster of molecules, or liquid or solid) is broken up into a set of relatively small molecular fragments, in such a way that the electronic energy, , of the full system is given by a sum of the energies of these fragment molecules: where is the energy of a relatively small molecular fragment,. The are simple coefficients (typically integers), and is the number of fragment molecules. Some of the methods also require a correction to the energies evaluated from the fragments. However, where necessary, this correction, , is easily computed. Methods Different methods have been devised to fragment molecules. Among them you can find the following energy-based methods: Electrostatically Embedded Generalized Molecular Fractionation with Conjugate Caps (EE-GMFCC) Generalized Energy-Based Fragmentation (GEBF) Molecular Tailoring Approach (MTA) Systematic Molecular Fragmentation (SMF) Combined Fragmentation Method (CFM) Kernel Energy Method (KEM) Many-Overlapping-Body (MOB) Expansion Generalized Many-Body Expansion (GMBE) Method References Molecular biology Molecular physics Molecular genetics
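The defining energy expression of the energy-based fragmentation approach did not survive extraction; in the notation of the text it is presumably the linear combination below, with the optional correction term mentioned at the end of the passage. This is offered as a reconstruction rather than a quotation of the original formula.

```latex
% Energy-based fragmentation: total electronic energy assembled from fragment energies.
% c_i: simple (typically integer) coefficients; E_i: energy of fragment i;
% N_frag: number of fragment molecules; E_corr: optional, easily computed correction.
E \;\approx\; \sum_{i=1}^{N_{\mathrm{frag}}} c_i \, E_i \;+\; E_{\mathrm{corr}}
```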
Molecular fragmentation methods
[ "Physics", "Chemistry", "Biology" ]
506
[ "Molecular physics", "Biochemistry", "Molecular biology stubs", "Molecular genetics", " molecular", "nan", "Molecular biology", "Atomic", "Molecular physics stubs", " and optical physics" ]
61,791,014
https://en.wikipedia.org/wiki/Standards%20for%20Reporting%20Enzymology%20Data
Standards for Reporting Enzymology Data (STRENDA) is an initiative as part of the Minimum Information Standards which specifically focuses on the development of guidelines for reporting (describing metadata) enzymology experiments. The initiative is supported by the Beilstein Institute for the Advancement of Chemical Sciences. STRENDA establishes both publication standards for enzyme activity data and STRENDA DB, an electronic validation and storage system for enzyme activity data. Launched in 2004, the foundation of STRENDA is the result of a detailed analysis of the quality of enzymology data in written and electronic publications. Organization The STRENDA project is driven by 15 scientists from all over the world forming the STRENDA Commission and supporting the work with expertise in biochemistry, enzyme nomenclature, bioinformatics, systems biology, modelling, mechanistic enzymology and theoretical biology. Reporting guidelines The STRENDA Guidelines propose the minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities, including the corresponding experimental conditions. This minimum information should be addressed in a scientific publication reporting enzymology research data, to ensure that data sets are comprehensively described. This allows scientists not only to review, interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways. In addition, the guidelines support researchers in making their experimental data reproducible and transparent. As of March 2020, more than 55 international biochemistry journals have included the STRENDA Guidelines in their authors' instructions as recommendations for reporting enzymology data. The STRENDA project is registered with FAIRsharing.org and the Guidelines are part of the FAIRDOM Community standards for Systems Biology. Applications STRENDA DB STRENDA DB is a web-based storage and search platform that has incorporated the Guidelines and automatically checks the submitted data for compliance with the STRENDA Guidelines, thus ensuring that the manuscript data sets are complete and valid. A valid data set is awarded a STRENDA Registry Number (SRN) and a fact sheet (PDF) is created containing all submitted data. Each dataset is registered at DataCite and assigned a DOI so that the data can be referenced and tracked. After the publication of the manuscript in a peer-reviewed journal, the data in STRENDA DB are made openly accessible. STRENDA DB is a repository recommended by re3data and OpenDOAR. It is harvested by OpenAIRE. The database service is recommended in the authors' instructions of more than 10 biochemistry journals, including Nature, The Journal of Biological Chemistry, eLife, and PLoS. It has been referred to as a standard tool for the validation and storage of enzyme kinetics data in multiple publications. A recent study examining eleven publications, including Supporting Information, from two leading journals revealed that at least one omission was found in every one of these papers. The authors concluded that using STRENDA DB in its current version would ensure that about 80% of the relevant information would be made available. Data Management STRENDA DB is considered a tool for research data management by the research community (e.g. EU project CARBAFIN). 
References External links Record in FAIRSharing.org for STRENDA DB, https://fairsharing.org/FAIRsharing.ekj9zx Biochemistry Proteins Enzymes Standards Biological databases
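As a purely illustrative sketch of the kind of completeness check described above, the snippet below gathers a kinetic data set into a plain Python dictionary and reports which of a few core fields are missing. The field names and the checklist are invented for demonstration; they are not the actual STRENDA DB schema, submission form or API.

```python
# Hypothetical field names, not the real STRENDA DB schema.
REQUIRED_FIELDS = {
    "enzyme_name", "ec_number", "assay_temperature_celsius",
    "assay_ph", "buffer", "substrate", "kinetic_parameters",
}

def missing_fields(dataset):
    """Return the required fields that are absent from a reported data set."""
    return sorted(REQUIRED_FIELDS - dataset.keys())

example = {
    "enzyme_name": "example dehydrogenase",
    "ec_number": "1.1.1.1",
    "assay_temperature_celsius": 25.0,
    "assay_ph": 7.5,
    "buffer": "100 mM phosphate",
    "substrate": "ethanol",
    "kinetic_parameters": {"Km_mM": 0.5, "kcat_per_s": 12.0},
}
print(missing_fields(example))  # [] means the illustrative checklist is satisfied
```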
Standards for Reporting Enzymology Data
[ "Chemistry", "Biology" ]
678
[ "Biomolecules by chemical classification", "Bioinformatics", "nan", "Molecular biology", "Biochemistry", "Proteins", "Biological databases" ]
61,791,659
https://en.wikipedia.org/wiki/Almadena%20Chtchelkanova
Almadena Yurevna Chtchelkanova is a Russian-American scientist. She is a program director in the Division of Computing and Communication Foundations at the National Science Foundation. Education Chtchelkanova completed a Ph.D. in physics at Moscow State University in 1988. In 1996, she earned an M.A. from the Department of Computer Sciences at the University of Texas at Austin. Her master's thesis was titled The application of object-oriented analysis to sockets system calls library testing. James C. Browne was her advisor. Career She worked as a senior scientist for Strategic Analysis, Inc., which provided support to DARPA. She provided support and oversight of the Spintronics, Quantum Information Science and Technology (QuIST) and Molecular Observation and Imaging programs. She worked at the United States Naval Research Laboratory for four years in the Laboratory for Computational Physics and Fluid Dynamics. Chtchelkanova joined the National Science Foundation in 2005. She is a program director in the Division of Computing and Communication Foundations and oversees programs involving high-performance computing. References External links United States National Science Foundation officials University of Texas at Austin College of Natural Sciences alumni Moscow State University alumni 20th-century Russian women scientists 21st-century American women scientists Russian women computer scientists Women physicists Computational physicists 21st-century American physicists 20th-century Russian physicists Year of birth missing (living people) Living people
Almadena Chtchelkanova
[ "Physics" ]
282
[ "Computational physicists", "Computational physics" ]
67,082,723
https://en.wikipedia.org/wiki/Rhodium%28III%29%20nitrate
Rhodium(III) nitrate is an inorganic compound, a salt of rhodium and nitric acid with the formula Rh(NO3)3. This anhydrous complex has been the subject of theoretical analysis but has not been isolated. However, a dihydrate and an aqueous solution are known with similar stoichiometry; they contain various hexacoordinated rhodium(III) aqua and nitrate complexes. A number of other rhodium nitrates have been characterized by X-ray crystallography: Rb4[trans-Rh(H2O)2(NO3)4][Rh(NO3)6] and Cs2[Rh(NO3)5]. Rhodium nitrates are of interest because nuclear wastes, which contain rhodium, are recycled by dissolution in nitric acid. Uses Rhodium(III) nitrate is used as a precursor to synthesize rhodium. References Rhodium(III) compounds Nitrates Hypothetical chemical compounds Hydrates
Rhodium(III) nitrate
[ "Chemistry" ]
224
[ "Inorganic compounds", "Hydrates", "Nitrates", "Salts", "Inorganic compound stubs", "Oxidizing agents", "Hypotheses in chemistry", "Theoretical chemistry", "Hypothetical chemical compounds" ]
67,093,097
https://en.wikipedia.org/wiki/Gas%20chromatography-olfactometry
Gas chromatography-olfactometry (GC-O) is a technique that integrates the separation of volatile compounds using a gas chromatograph with the detection of odour using an olfactometer (human assessor). It was first developed and applied in 1964 by Fuller and co-workers. While GC separates volatile compounds from an extract, human olfaction detects the odour activity of each eluting compound. In this olfactometric detection, a human assessor may qualitatively determine whether a compound has odour activity or describe the odour perceived, or quantitatively evaluate the intensity of the odour or the duration of the odour activity. The olfactometric detection of compounds allows the assessment of the relationship between a quantified substance and the human perception of its odour, without the instrumental detection limits present in other kinds of detectors. Compound identification still requires use of other detectors, such as mass spectrometry, with analytical standards. Olfactory perception The properties of a compound relating to human olfactory perception include its odour quality, threshold and intensity as a function of its concentration. The odour quality of an (odour-active) compound is assessed using odour descriptors in sensory descriptive analyses. It shows the sensory–chemical relationship in volatile compounds. The odour quality of a compound may change with its concentration. The absolute threshold of a compound is the minimum concentration at which it can be detected. In a mixture of volatile compounds, only the compounds present at concentrations above their thresholds contribute to the odour. This property can be represented by the odour threshold (OT), the minimum concentration at which the odour is perceived by 50% of a human panel without determining its quality, or the recognition threshold, the minimum concentration at which the odour is perceived and can be described by 50% of a human panel. The intensity of perception of a compound is positively correlated with its concentration. It is represented by the unique psychometric or concentration-response function of the compound. A psychometric function plotted as log concentration against perceived intensity is characterised by its sigmoidal shape: an initial baseline representing the compound at concentrations below its threshold, a slow rise in response around the inflection point representing the threshold, an exponential rise in response as the concentration exceeds the threshold, and a deceleration of the response to a flat region as the zone of saturation is reached, the point at which a change in intensity is no longer perceived. On the other hand, a plot of log concentration against log perceived intensity, following Stevens' power law, gives a straight line whose exponent (slope) characterises the relationship between the two variables. Experimental design The apparatus consists of a gas chromatograph equipped with an odour port (ODP), in place of or in addition to conventional detectors, from which human assessors sniff the eluates. The odour port is characterised by its nose-cone design and is connected to the GC instrument by a transfer line. The odour port is commonly made of glass or polytetrafluoroethylene. It is generally placed 30–60 cm away from the instrument, extending from the side so that it is not affected by the hot GC oven. The deactivated silica transfer line is generally heated to prevent the condensation of less-volatile compounds. 
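The log–log relationship just described can be illustrated with a short numerical sketch. The data below are invented (generated from an assumed exponent of 0.6), and the function simply fits the slope of log intensity against log concentration; it is a minimal illustration of Stevens' power law, not a model of any particular odorant.

```python
import math

def stevens_exponent(concentrations, intensities):
    """Estimate the exponent n of Stevens' power law I = k * C**n by a
    least-squares fit of log(I) against log(C)."""
    xs = [math.log(c) for c in concentrations]
    ys = [math.log(i) for i in intensities]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return slope

# Invented data generated from I = 2 * C**0.6; the fit recovers about 0.6.
concentrations = [1, 2, 4, 8, 16]
intensities = [2 * c ** 0.6 for c in concentrations]
print(round(stevens_exponent(concentrations, intensities), 3))
```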
The transfer line is flexible so that the assessor can adjust it according to their comfortable sitting position. As traditional warm and dry carrier gases may dehydrate the mucous membrane of the nose, volatiles are delivered via auxiliary gas or humidified carrier gas, with a relative humidity (RH) of 50–75%, to ease the dehydration. The olfactometric detector may be coupled with, or connected in parallel to, a flame ionization detector (FID) or mass spectrometer (MS). Moreover, multiple odour ports may be set up. In these cases, the eluate is generally split evenly between the detectors so that it reaches them simultaneously. Methods of detection In a GC-O analysis, various methods are used to determine the odour contribution of a compound or the relative importance of each odorant. The methods can be categorised as (i) detection frequency, (ii) dilution to threshold and (iii) direct intensity. Detection frequency The GC-O analysis is carried out by a panel of 6–12 assessors to count the number of participants who perceive an odour at each retention time. This frequency is then used to represent the relative importance of an odorant in the extract. It is also presumed to relate to the intensity of the odorant at the particular concentration, based on the assumption that individual detection thresholds are normally distributed. Two different kinds of data can be reported by this method depending on the data collected. First, if only frequency data are available, the result is reported as the nasal impact frequency (NIF), the peak height of the olfactometric signal. It is zero if no assessor senses the odour and is incremented by one for each assessor who senses an odour. Second, if both the frequency of detection and the duration of the odour are collected, the surface of NIF (SNIF), the peak area corresponding to the product of the frequency of detection (%) and the duration of the odour (s), can be interpreted. SNIF allows further interpretation of odour compounds beyond peak height alone. The detection frequency method benefits from its simplicity and lack of requirement for trained assessors, as the signal recorded is binary (presence/absence of odour). On the other hand, a drawback of this method is its reliance on the assumed relationship between detection frequency and perceived odour intensity. Odour-active compounds in food samples are often present at concentrations above their detection thresholds. This means that a compound may be detected by all assessors and therefore reach the limit of 100% detection in spite of increases in intensity. Dilution to threshold A dilution series of a sample or extract is prepared and assessed for the presence of odour. The result can be described as the odour potency of a compound. One kind of analysis is to measure the maximum dilution in the series at which odour is still perceived. The resulting value is called the flavour dilution (FD) factor in the aroma extraction dilution analysis (AEDA) developed in 1987 by Schieberle and Grosch. Another kind of analysis also measures the duration of the perceived odour to compute peak areas. The peak areas are known as Charm values in the CharmAnalysis developed in 1984 by Acree and co-workers. The former can then be interpreted as the peak height of the latter. Because the odour threshold of a compound is intended to be measured from a prepared dilution series (commonly by a factor of 2–3 with 8–10 dilutions), the precision and variation in the data can be determined from the dilution factors used. 
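As a minimal sketch of the dilution-to-threshold bookkeeping described above: a hypothetical AEDA-style series is diluted stepwise by a fixed factor, and the FD factor is taken as the highest dilution at which the odour is still perceived. The data and the simple stopping rule are illustrative only, not taken from any published analysis.

```python
def fd_factor(perceived, dilution_step=3):
    """Return the flavour dilution (FD) factor from an AEDA-style series.

    `perceived` lists, for successive dilutions 1, step, step**2, ...,
    whether the odour was still detected at the sniffing port.
    """
    fd = 0
    for i, detected in enumerate(perceived):
        if not detected:
            break  # once the odour is lost, further dilutions are not counted
        fd = dilution_step ** i
    return fd

# Hypothetical series diluted 1:3 at each step; the odour is lost at the fifth step.
print(fd_factor([True, True, True, True, False], dilution_step=3))  # prints 27
```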
Because the dilution-to-threshold method is time-consuming and generally requires multiple assessors to minimise errors, splitting the column effluent between multiple odour ports is beneficial for the method. Direct intensity This method adds to the dilution-to-threshold method by also considering the perceived intensity of the compounds. Assessors can report this based on a predetermined scale. The posterior intensity method measures the maximum intensity perceived for each eluting compound. A panel of assessors is recommended in order to obtain an averaged signal. On the other hand, the dynamic time-intensity method measures the intensity at different points in time starting from the time of elution, allowing a continuous measurement of the onset, maximum and decline of the odour intensity. This is used in the Osme (Greek word for odour) method developed in 1992 by Da Silva. An aromagram can then be constructed in a similar way to an FID chromatogram, with intensity plotted as a function of retention time. The peak height corresponds to the maximum intensity perceived, whereas the peak width corresponds to the duration of the odour perceived. The time requirement may be high for this particular method because assessor training is essential; a lack of training may result in inconsistent use of the scale. However, with a trained panel of assessors, the analysis can be done in a relatively short amount of time with high precision. Variations Gas chromatography/mass spectrometry-olfactometry (GC/MS-O) GC-recomposition-olfactometry (GC-R) Multi-gas chromatography-olfactometry References External links Gas Chromatography–Olfactometry: Principles, Practical Aspects and Applications in Food Analysis Analytical chemistry Laboratory techniques
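Returning to the detection frequency method described earlier, the sketch below computes NIF and a simplified SNIF for one retention-time region from a hypothetical panel. The panel responses are invented, and taking SNIF as detection frequency multiplied by the mean detection duration is one simple reading of the frequency-times-duration product, not a prescribed formula.

```python
def nif_snif(detections):
    """Compute NIF (%) and a simplified SNIF for one retention-time region.

    `detections` has one entry per assessor: None if no odour was perceived,
    otherwise the duration (in seconds) over which the odour was perceived.
    """
    perceived = [d for d in detections if d is not None]
    nif = 100.0 * len(perceived) / len(detections)         # detection frequency in %
    mean_duration = sum(perceived) / len(perceived) if perceived else 0.0
    snif = nif * mean_duration                             # frequency x mean duration
    return nif, snif

# Hypothetical panel of eight assessors: six perceive the odour, two do not.
panel = [2.1, 1.8, None, 2.5, 1.9, None, 2.2, 2.0]
print(nif_snif(panel))  # NIF = 75.0, SNIF = 75.0 times the mean duration
```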
Gas chromatography-olfactometry
[ "Chemistry" ]
1,853
[ "Chromatography", "Gas chromatography" ]
41,331,720
https://en.wikipedia.org/wiki/Weyr%20canonical%20form
In mathematics, in linear algebra, a Weyr canonical form (or, Weyr form or Weyr matrix) is a square matrix which (in some sense) induces "nice" properties in matrices that commute with it. It also has a particularly simple structure and the conditions for possessing a Weyr form are fairly weak, making it a suitable tool for studying classes of commuting matrices. A square matrix is said to be in the Weyr canonical form if it has the structure defining the Weyr canonical form. The Weyr form was discovered by the Czech mathematician Eduard Weyr in 1885. The Weyr form did not become popular among mathematicians, and it was overshadowed by the closely related, but distinct, canonical form known as the Jordan canonical form. The Weyr form has been rediscovered several times since Weyr's original discovery in 1885. This form has been variously called the modified Jordan form, reordered Jordan form, second Jordan form, and H-form. The current terminology is credited to Shapiro, who introduced it in a paper published in the American Mathematical Monthly in 1999. Several applications have recently been found for the Weyr matrix. Of particular interest is an application of the Weyr matrix in the study of phylogenetic invariants in biomathematics. Definitions Basic Weyr matrix Definition A basic Weyr matrix with eigenvalue λ is an n × n matrix W of the following form: there is an integer partition n = n1 + n2 + ... + nr of n, with n1 ≥ n2 ≥ ... ≥ nr ≥ 1, such that, when W is viewed as an r × r block matrix (Wij), where the (i, j) block Wij is an ni × nj matrix, the following three features are present: The main diagonal blocks Wii are the scalar matrices λI for i = 1, ..., r. The first superdiagonal blocks Wi,i+1 are full column rank ni × ni+1 matrices in reduced row-echelon form (that is, an identity matrix followed by zero rows) for i = 1, ..., r − 1. All other blocks of W are zero (that is, Wij = 0 when j ≠ i, i + 1). In this case, we say that W has Weyr structure (n1, n2, ..., nr). Example The following is an example of a basic Weyr matrix. General Weyr matrix Definition Let W be a square matrix and let λ1, ..., λk be the distinct eigenvalues of W. We say that W is in Weyr form (or is a Weyr matrix) if W is a block diagonal matrix of the form W = diag(W1, W2, ..., Wk), where Wi is a basic Weyr matrix with eigenvalue λi for i = 1, ..., k. Example The following image shows an example of a general Weyr matrix consisting of three basic Weyr matrix blocks. The basic Weyr matrix in the top-left corner has the structure (4,2,1) with eigenvalue 4, the middle block has structure (2,2,1,1) with eigenvalue -3 and the one in the lower-right corner has the structure (3, 2) with eigenvalue 0. Relation between Weyr and Jordan forms The Weyr canonical form is related to the Jordan form by a simple permutation for each Weyr basic block as follows: the first index of each Weyr subblock forms the largest Jordan chain; after crossing out these rows and columns, the first index of each new subblock forms the second largest Jordan chain, and so forth. The Weyr form is canonical That the Weyr form is a canonical form of a matrix is a consequence of the following result: each square matrix A over an algebraically closed field is similar to a Weyr matrix W which is unique up to permutation of its basic blocks. The matrix W is called the Weyr (canonical) form of A. Computation of the Weyr canonical form Reduction to the nilpotent case Let A be a square matrix of order n over an algebraically closed field, and let the distinct eigenvalues of A be λ1, λ2, ..., λk. 
The Jordan–Chevalley decomposition theorem states that A is similar to a block diagonal matrix of the form where is a diagonal matrix, is a nilpotent matrix, and , justifying the reduction of A into subblocks . So the problem of reducing A to the Weyr form reduces to the problem of reducing the nilpotent matrices to the Weyr form. This leads to the generalized eigenspace decomposition theorem. Reduction of a nilpotent matrix to the Weyr form Given a nilpotent square matrix of order over an algebraically closed field , the following algorithm produces an invertible matrix and a Weyr matrix such that . Step 1 Let Step 2 Compute a basis for the null space of . Extend the basis for the null space of to a basis for the -dimensional vector space . Form the matrix consisting of these basis vectors. Compute . is a square matrix of size − nullity . Step 3 If is nonzero, repeat Step 2 on . Compute a basis for the null space of . Extend the basis for the null space of to a basis for the vector space having dimension − nullity . Form the matrix consisting of these basis vectors. Compute . is a square matrix of size − nullity − nullity. Step 4 Continue the processes of Steps 1 and 2 to obtain increasingly smaller square matrices and associated invertible matrices until the first zero matrix is obtained. Step 5 The Weyr structure of is where = nullity. Step 6 Compute the matrix (here the 's are appropriately sized identity matrices). Compute . is a matrix of the following form: . Step 7 Use elementary row operations to find an invertible matrix of appropriate size such that the product is a matrix of the form . Step 8 Set diag and compute . In this matrix, the -block is . Step 9 Find a matrix formed as a product of elementary matrices such that is a matrix in which all the blocks above the block contain only 's. Step 10 Repeat Steps 8 and 9 on column converting -block to via conjugation by some invertible matrix . Use this block to clear out the blocks above, via conjugation by a product of elementary matrices. Step 11 Repeat these processes on columns, using conjugations by . The resulting matrix is now in Weyr form. Step 12 Let . Then . Applications of the Weyr form Some well-known applications of the Weyr form are listed below: The Weyr form can be used to simplify the proof of Gerstenhaber's theorem, which asserts that the subalgebra generated by two commuting n × n matrices has dimension at most n. A finite set of matrices is said to be approximately simultaneously diagonalizable if they can be perturbed to simultaneously diagonalizable matrices. The Weyr form is used to prove approximate simultaneous diagonalizability of various classes of matrices. The approximate simultaneous diagonalizability property has applications in the study of phylogenetic invariants in biomathematics. The Weyr form can be used to simplify the proofs of the irreducibility of the variety of all k-tuples of commuting complex matrices. References Linear algebra Matrix theory Matrix normal forms Matrix decompositions
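As a small illustration of the block description in the Definitions section above, the sketch below builds a basic Weyr matrix from an eigenvalue and a Weyr structure. It uses the structure (4, 2, 1) with eigenvalue 4 mentioned in the general example; the function itself is an illustrative construction, not code taken from the sources.

```python
def basic_weyr_matrix(eigenvalue, structure):
    """Build a basic Weyr matrix for `eigenvalue` with Weyr structure
    (n1, n2, ..., nr), n1 >= n2 >= ... >= nr, returned as a list of rows."""
    if any(a < b for a, b in zip(structure, structure[1:])):
        raise ValueError("Weyr structure must be non-increasing")
    n = sum(structure)
    W = [[0.0] * n for _ in range(n)]
    offsets = [sum(structure[:k]) for k in range(len(structure))]
    for k, size in enumerate(structure):
        r = offsets[k]
        # Diagonal block: eigenvalue times an identity of size n_k.
        for i in range(size):
            W[r + i][r + i] = float(eigenvalue)
        # First superdiagonal block: an identity on top of zero rows (n_k x n_{k+1}).
        if k + 1 < len(structure):
            c = offsets[k + 1]
            for i in range(structure[k + 1]):
                W[r + i][c + i] = 1.0
    return W

# The structure from the general example above: eigenvalue 4, structure (4, 2, 1).
for row in basic_weyr_matrix(4, (4, 2, 1)):
    print(row)
```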
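The nullity bookkeeping in Steps 1–5 of the algorithm determines the Weyr structure of the nilpotent matrix. A minimal sketch, assuming the equivalent rank identity w_k = rank(N^(k-1)) − rank(N^k), is given below; it computes only the structure, not the transforming matrix of the later steps, and it uses NumPy floating-point ranks rather than exact field arithmetic.

```python
import numpy as np

def weyr_structure(N, tol=1e-10):
    """Weyr structure (w1, w2, ...) of a nilpotent matrix N, using the
    identity w_k = rank(N^(k-1)) - rank(N^k). This mirrors the nullity
    bookkeeping of Steps 1-5 without forming the basis matrices."""
    n = N.shape[0]
    structure = []
    power = np.eye(n)
    prev_rank = n
    for _ in range(n):                      # a nilpotent matrix satisfies N**n = 0
        if prev_rank == 0:
            break
        power = power @ N
        rank = np.linalg.matrix_rank(power, tol=tol)
        structure.append(prev_rank - rank)
        prev_rank = rank
    return tuple(structure)

# A nilpotent matrix made of Jordan blocks of sizes 3 and 1;
# its Weyr structure is the conjugate partition (2, 1, 1).
N = np.zeros((4, 4))
N[0, 1] = N[1, 2] = 1.0
print(weyr_structure(N))
```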
Weyr canonical form
[ "Mathematics" ]
1,398
[ "Linear algebra", "Algebra" ]