Columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
39,474,050
https://en.wikipedia.org/wiki/Consumer%20network
The notion of consumer networks expresses the idea that people's embeddedness in social networks affects their behavior as consumers. Interactions within consumer networks such as information exchange and imitation can affect demand and market outcomes in ways not considered in the neoclassical theory of consumer choice. Economics Economic research on the topic is not ample. In attempts to incorporate consumer networks into standard microeconomic models, some interesting implications have been found concerning market structure, market dynamics and the firm's profit-maximizing decision. It has been shown that under certain assumptions the structure of the consumer network can affect market structure. In certain scenarios, where consumers have a higher inclination to compare their habitually consumed product to that of their acquaintances, the equilibrium market structure can switch from oligopoly to monopoly. In another model, which incorporates small-world consumer networks into the profit function of the firm, it has been demonstrated that the density of the network significantly affects the optimal price the firm should charge and the optimal referral fee (paid to consumers who can convince another one to buy). On the other hand, the size of the network does not have an important effect on these. A 2007 laboratory experiment found that increased density of consumer networks can reduce market inefficiencies caused by moral hazard. The ability of consumers to exchange information with more neighbors increases firms' incentives to build a reputation through selling high-quality products. Even a low level of density was found to be beneficial compared with isolated consumers, who can rely only on their own experience. Marketing Exploiting consumer networks for marketing purposes, through techniques such as viral marketing, word-of-mouth marketing, or network marketing, is increasingly experimented with by marketers, to the extent that "some developments in customer networking are ahead of empirical research, and a few seem ahead even of accepted theory". These techniques can often be more effective than traditional forms of advertising. A key task of such forms of marketing is to target the people who are opinion leaders regarding consumption, who have many contacts and a positive reputation. They are, in network science language, the hubs of consumer networks. See also Viral marketing Word-of-mouth marketing Notes and references Network Network theory
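As an illustrative sketch of the network quantities mentioned above (not a reproduction of the cited economic models), the snippet below builds a small-world "consumer network" and computes its density, the property the profit-function model relates to optimal prices and referral fees. It assumes the networkx library is available; the network size and rewiring probability are arbitrary illustrative choices.

```python
# Illustrative small-world consumer network and its density.
# Network parameters are hypothetical; this is not the cited model.
import networkx as nx

n_consumers = 500      # assumed number of consumers
k_neighbors = 6        # each consumer initially linked to 6 acquaintances
rewire_p = 0.1         # rewiring probability -> small-world structure

G = nx.connected_watts_strogatz_graph(n_consumers, k_neighbors, rewire_p)

density = nx.density(G)                          # share of possible links that exist
avg_path = nx.average_shortest_path_length(G)    # "small world" distance between consumers

print(f"density = {density:.4f}, average path length = {avg_path:.2f}")
```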
Consumer network
[ "Mathematics" ]
430
[ "Network theory", "Mathematical relations", "Graph theory" ]
39,474,341
https://en.wikipedia.org/wiki/Water%20remote%20sensing
Water Remote Sensing is the observation of water bodies such as lakes, oceans, and rivers from a distance in order to describe their color, state of ecosystem health, and productivity. Water remote sensing studies the color of water through the observation of the spectrum of water-leaving radiance. From the spectrum of color coming from the water, the concentration of optically active components of the upper layer of the water body can be estimated via specific algorithms. Water quality monitoring by remote sensing and close-range instruments has received considerable attention since the adoption of the EU Water Framework Directive. Overview Water remote sensing instruments (sensors) allow scientists to record the color of a water body, which provides information on the presence and abundance of optically active natural water components (plankton, sediments, detritus, or dissolved substances). The water color spectrum as seen by a satellite sensor is defined as an apparent optical property (AOP) of the water. This means that the color of the water is influenced by the angular distribution of the light field and by the nature and quantity of the substances in the medium, in this case, water. Thus, the values of remote sensing reflectance, an AOP, will change with changes in the optical properties and concentrations of the optically active substances in the water. Properties and concentrations of substances in the water are known as the inherent optical properties or IOPs. IOPs are independent of the angular distribution of light (the "light field") but they are dependent on the type and amount of substances that are present in the water. For instance, the diffuse attenuation coefficient of downwelling irradiance, Kd (often used as an index of water clarity or ocean turbidity), is defined as an AOP (or quasi-AOP), while the absorption coefficient and the scattering coefficient of the water are defined as IOPs. There are two different approaches to determine the concentration of optically active water components by the study of spectra, distributions of light energy over a range of wavelengths or colors. The first approach consists of empirical algorithms based on statistical relationships. The second approach consists of analytical algorithms based on the inversion of calibrated bio-optical models. Accurate calibration of the relationships and/or models used is an important condition for successful inversion in water remote sensing and for the determination of concentrations of water quality parameters from observed spectral remote sensing data. Thus, these techniques depend on the ability to record changes in the spectral signature of light backscattered from the water surface and relate these recorded changes to water quality parameters via empirical or analytical approaches. Depending on the water constituents of interest and the sensor used, different parts of the spectrum will be analyzed. History The gradual development of understanding of the transparency of natural waters and of the reasons for their variability in clarity and coloration has been sketched from the times of Henry Hudson (1600) to those of Chandrasekhara Raman (1930). However, the development of water remote sensing techniques (by the use of satellite imaging, aircraft or close range optical devices) did not start until the early 1970s. These first techniques measured the spectral and thermal differences in the emitted energy from water surfaces.
In general, empirical relationships were established between the spectral properties and the water quality parameters of the water body. Ritchie et al. (1974) developed an empirical approach to determine suspended sediments. Empirical models of this kind can only be used to determine water quality parameters of water bodies with similar conditions. An analytical approach was used by Schiebe et al. (1992). This approach was based on the optical characteristics of water and water quality parameters to elaborate a physically based model of the relationship between the spectral and physical properties of the surface water studied. This physically based model was successfully applied in order to estimate suspended sediment concentrations. Applications By the use of optical close range devices (e.g. spectrometers, radiometers), airplanes or helicopters (airborne remote sensing) and satellites (space-borne remote sensing), the light energy radiating from water bodies is measured. For instance, algorithms are used to retrieve parameters such as chlorophyll-a (Chl-a) and Suspended Particulate Matter (SPM) concentration, the absorption by colored dissolved organic matter at 440 nm (aCDOM) and Secchi depth. The measurement of these values will give an idea about the water quality of the water body being studied. A very high concentration of green pigments like chlorophyll might indicate the presence of an algal bloom, for example, due to eutrophication processes. Thus, the chlorophyll concentration could be used as a proxy or indicator for the trophic condition of a water body. In the same manner, other optical quality parameters such as suspended particles or Suspended Particulate Matter (SPM), Colored Dissolved Organic Matter (CDOM), Transparency (Kd), and chlorophyll-a (Chl-a) can be used to monitor water quality. See also Ocean color Ocean optics References External links EULAKES project, water quality by remote sensing technique CoastColour project: remote sensing of the coastal zone Revamp project: Regional Validation of MERIS Chlorophyll products in North Sea Coastal Waters The Great Lakes Web Site: Michigan Tech's Large Lakes Remote Sensing program CoastWatch project International Ocean Colour Coordinating Group ESA European Space Agency activities: Observing the Earth Ocean Color Web Assessing remotely sensed chlorophyll-a for the implementation of the Water Framework Directive in European perialpine lakes Remote sensing Geographical technology Satellite meteorology Applications of computer vision Earth sciences Physical oceanography Hydrology Water
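As a minimal sketch of the empirical (band-ratio) approach described above, the snippet below estimates chlorophyll-a from the ratio of blue to green remote-sensing reflectance through a fitted polynomial. The band choice and coefficients are placeholders for illustration only; operational algorithms (for example the NASA OCx family) publish their own calibrated coefficients.

```python
# Empirical band-ratio retrieval sketch: chlorophyll-a from blue/green
# remote-sensing reflectance.  Coefficients below are placeholders, not a
# calibrated operational algorithm.
import numpy as np

def chl_band_ratio(rrs_blue, rrs_green, coeffs=(0.3, -2.8, 1.4, -0.5)):
    """Return an illustrative chlorophyll-a estimate (mg m^-3) from a blue/green ratio."""
    x = np.log10(np.asarray(rrs_blue) / np.asarray(rrs_green))
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

# Greener water (blue reflectance low relative to green) -> higher estimate:
print(chl_band_ratio(rrs_blue=0.004, rrs_green=0.006))
# Clearer water (blue reflectance high relative to green) -> lower estimate:
print(chl_band_ratio(rrs_blue=0.012, rrs_green=0.006))
```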
Water remote sensing
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,151
[ "Hydrology", "Applied and interdisciplinary physics", "Physical oceanography", "Environmental engineering", "Water" ]
39,483,107
https://en.wikipedia.org/wiki/Tessellation%20%28computer%20graphics%29
In computer graphics, tessellation is the dividing of datasets of polygons (sometimes called vertex sets) representing objects in a scene into suitable structures for rendering. Especially for real-time rendering, data is tessellated into triangles, for example in OpenGL 4.0 and Direct3D 11. In graphics rendering A key advantage of tessellation for realtime graphics is that it allows detail to be dynamically added and subtracted from a 3D polygon mesh and its silhouette edges based on control parameters (often camera distance). In previously leading realtime techniques such as parallax mapping and bump mapping, surface details could be simulated at the pixel level, but silhouette edge detail was fundamentally limited by the quality of the original dataset. In the Direct3D 11 pipeline (a part of DirectX 11), the graphics primitive is the patch. The tessellator generates a triangle-based tessellation of the patch according to tessellation parameters such as the TessFactor, which controls the degree of fineness of the mesh. The tessellation, along with shaders such as a Phong shader, allows for producing smoother surfaces than would be generated by the original mesh. By offloading the tessellation process onto the GPU hardware, smoothing can be performed in real time. Tessellation can also be used for implementing subdivision surfaces, level of detail scaling and fine displacement mapping. OpenGL 4.0 uses a similar pipeline, where tessellation into triangles is controlled by the Tessellation Control Shader and a set of four tessellation parameters. In computer-aided design In computer-aided design the constructed design is represented by a boundary representation topological model, where analytical 3D surfaces and curves, limited to faces, edges, and vertices, constitute a continuous boundary of a 3D body. Arbitrary 3D bodies are often too complicated to analyze directly. So they are approximated (tessellated) with a mesh of small, easy-to-analyze pieces of 3D volume, usually either irregular tetrahedra or irregular hexahedra. The mesh is used for finite element analysis. The mesh of a surface is usually generated per individual face and edge (with edges approximated as polylines) so that the original limit vertices are included in the mesh. To ensure that the approximation of the original surface suits the needs of further processing, three basic parameters are usually defined for the surface mesh generator: The maximum allowed distance between the planar approximation polygon and the surface (known as "sag"). This parameter ensures that the mesh is similar enough to the original analytical surface (or that the polyline is similar to the original curve). The maximum allowed size of the approximation polygon (for triangulations it can be the maximum allowed length of triangle sides). This parameter ensures enough detail for further analysis. The maximum allowed angle between two adjacent approximation polygons (on the same face). This parameter ensures that even very small humps or hollows that can have a significant effect on the analysis will not disappear in the mesh. An algorithm generating a mesh is typically controlled by the above three and other parameters. Some types of computer analysis of a constructed design require an adaptive mesh refinement, which is a mesh made finer (using stricter parameters) in regions where the analysis needs more detail.
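As a small worked example of the "sag" parameter described above: for a circular edge of radius R approximated by straight chords, a chord spanning central angle theta deviates from the arc by R*(1 - cos(theta/2)), so the sag tolerance fixes the number of segments needed. The sketch below follows only from that geometry; the tolerance values are illustrative, not taken from any particular CAD system.

```python
# Sketch: how a sag tolerance controls tessellation density for a circular edge.
# sag = R * (1 - cos(theta / 2))  for one chord spanning central angle theta.
import math

def segments_for_sag(radius: float, sag: float) -> int:
    """Number of equal chords so that chord-to-arc deviation stays <= sag."""
    if sag >= radius:
        return 3  # tolerance looser than the radius: coarsest sensible polygon
    max_angle = 2.0 * math.acos(1.0 - sag / radius)
    return max(3, math.ceil(2.0 * math.pi / max_angle))

# A 50 mm radius circle meshed with a 0.1 mm sag tolerance needs ~50 segments:
print(segments_for_sag(50.0, 0.1))
# Tightening the tolerance tenfold roughly triples the segment count:
print(segments_for_sag(50.0, 0.01))
```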
See also ATI TruForm – brand for their hardware tessellation unit from 2001 – newer unit from June 2007 – most current unit from January 2011 Tessellation shader Progressive mesh Mesh generation Tiled rendering External links GPUOpen: OpenGL sample that demonstrates terrain tessellation on the GPU References Computer graphics Computer-aided design Mesh generation
Tessellation (computer graphics)
[ "Physics", "Engineering" ]
741
[ "Computer-aided design", "Mesh generation", "Design engineering", "Tessellation", "Symmetry" ]
39,483,851
https://en.wikipedia.org/wiki/Self-organized%20criticality%20control
In applied physics, the concept of controlling self-organized criticality refers to the control of processes by which a self-organized system dissipates energy. The objective of the control is to reduce the probability of occurrence and the size of energy-dissipation bursts, often called avalanches, in self-organized systems. Dissipation of energy in a self-organized critical system into a lower energy state can be costly for society, since it proceeds through avalanches of all sizes, usually following a kind of power-law distribution, and large avalanches can be damaging and disruptive. Schemes Several strategies have been proposed to deal with the issue of controlling self-organized criticality: The design of controlled avalanches. Daniel O. Cajueiro and Roberto F. S. Andrade show that if well-formulated small and medium avalanches are exogenously triggered in the system, the energy of the system is released in such a way that large avalanches become rarer. The modification of the degree of interdependence of the network where the avalanche spreads. Charles D. Brummitt, Raissa M. D'Souza and E. A. Leicht show that the dynamics of self-organized critical systems on complex networks depend on the connectivity of the complex network. They find that while some connectivity is beneficial (since it suppresses the largest cascades in the system), too much connectivity gives space for the development of very large cascades and increases the size of the largest cascades the system can sustain. The modification of the deposition process of the self-organized system. Pierre-Andre Noel, Charles D. Brummitt and Raissa M. D'Souza show that it is possible to control the self-organized system by modifying the natural deposition process of the self-organized system, adjusting the place where the avalanche starts. Dynamically modifying the local thresholds of cascading failures. In a model of an electric transmission network, Heiko Hoffmann and David W. Payton demonstrated that either randomly upgrading lines (akin to preventive maintenance) or upgrading broken lines to a random breakage threshold suppresses self-organized criticality. Apparently, these strategies undermine the self-organization of large critical clusters. Here, a critical cluster is a collection of transmission lines that are near the failure threshold and that collapse entirely if triggered. Applications There are several events that arise in nature or society and that these ideas of control may help to avoid: Floods caused by systems of dams and reservoirs or interconnected valleys. Snow avalanches on snow-covered slopes. Forest fires in areas susceptible to ignition by lightning or a match. Cascades of load shedding that take place in power grids (a type of power outage). The OPA model is used to study different techniques for criticality control. Cascading failure in the internet switching fabric. Ischemic cascades, a series of biochemical reactions releasing toxins during moments of inadequate blood supply. Systemic risk in financial systems. Excursions in nuclear energy systems. Earthquakes and induced seismicity. The failure cascades in electrical transmission and financial sectors occur because economic forces that push for efficiency cause these systems to operate near a critical point, where avalanches of indeterminate size become possible. Financial investments that are vulnerable to this kind of failure may exhibit a Taleb distribution.
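For concreteness, the sketch below simulates a minimal Bak-Tang-Wiesenfeld sandpile, the standard toy model of self-organized criticality whose broad avalanche-size distribution the control schemes above try to reshape (for example, by deliberately triggering small avalanches). Grid size and grain count are arbitrary illustrative choices, and this is the uncontrolled baseline model, not an implementation of any of the cited control strategies.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile: the uncontrolled baseline whose
# avalanche statistics control schemes aim to tame.  Parameters are illustrative.
import random

SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain_and_relax(grid):
    """Add one grain at a random site, topple until stable, return avalanche size."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)] if grid[i][j] >= THRESHOLD else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < THRESHOLD:
            continue                     # already relaxed by an earlier toppling
        grid[x][y] -= THRESHOLD
        topplings += 1
        for a, b in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= a < SIZE and 0 <= b < SIZE:   # grains pushed over the edge dissipate
                grid[a][b] += 1
                if grid[a][b] >= THRESHOLD:
                    unstable.append((a, b))
        if grid[x][y] >= THRESHOLD:
            unstable.append((x, y))
    return topplings

avalanches = [drop_grain_and_relax(grid) for _ in range(20000)]
print("largest avalanche (topplings):", max(avalanches))
```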
See also Abelian sandpile model Complex networks Self-organized criticality References Applied and interdisciplinary physics Control theory Chaos theory Self-organization Critical phenomena
Self-organized criticality control
[ "Physics", "Materials_science", "Mathematics" ]
701
[ "Self-organization", "Physical phenomena", "Applied and interdisciplinary physics", "Applied mathematics", "Control theory", "Critical phenomena", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
53,737,832
https://en.wikipedia.org/wiki/Thermic%20fluid%20heater
A thermic fluid heater (TFH), also known as a thermal oil heater, is a device used for indirect heat transfer through a thermic fluid. It heats the fluid to a desired temperature and then transfers that heat to various processes without any direct contact between the heating source and the product. This type of heater is commonly used in industries where precise temperature control is essential and where high temperatures are required, such as in chemical processing, textile, pharmaceuticals, oil, gas, and food processing. Working principle The basic working principle of a thermic fluid heater is indirect heating. It uses a heating medium, typically a thermic fluid or heat transfer oil, which circulates through a closed-loop system. The thermic fluid absorbs heat generated by the combustion of fuel and then transfers this heat to the required processes or equipment via heat exchangers. Combustion: The fuel is burned in the combustion chamber, generating heat. Heat transfer: This heat is transferred to the thermic fluid flowing in coils or pipes surrounding the combustion area. Circulation: The heated fluid is pumped to the heat exchanger, where it transfers its heat to the process. Return: The cooled fluid returns to the heater for reheating, creating a continuous cycle. The advantage of using thermic fluid heaters is that they can achieve high temperatures without the need for high pressure, as is required in steam boilers. Types of thermic fluid heaters Thermic fluid heaters can be classified based on their design, fuel type, and applications. The most common classification is based on the type of fuel used for heating. 1. Solid fuel thermic fluid heaters Solid fuel thermic fluid heaters use materials such as coal, wood, biomass, or agricultural waste as the primary fuel source. These heaters are commonly employed in regions where solid fuels are abundant and economical. Advantages: Cost-effective in areas with easy access to solid fuel. Ideal for industries with specific environmental policies regarding waste utilization. Disadvantages: Requires more maintenance due to ash handling and emissions. Less efficient compared to liquid or gas fuel systems. 2. Liquid fuel thermic fluid heaters Liquid fuel thermic fluid heaters run on petroleum-based fuels like furnace oil (FO), light diesel oil (LDO), or heavy fuel oil (HFO). These heaters are more common in industries that have access to refined oil products. Advantages: High efficiency and ease of operation. Lower emissions compared to solid fuel systems. Disadvantages: High operational cost due to fuel price volatility. Requires proper storage and handling of liquid fuels. 3. Gas-fired thermic fluid heaters Gas-fired thermic fluid heaters use natural gas, liquefied petroleum gas (LPG), or other gaseous fuels for heating. These are highly efficient systems often used in industries requiring a cleaner and more environmentally friendly heat source. Advantages: High efficiency and low emissions. Minimal maintenance compared to solid and liquid fuel heaters. Fast heating time. Disadvantages: Dependent on gas availability and infrastructure. Higher initial investment compared to some other systems. 4. Electric thermic fluid heaters Electric thermic fluid heaters operate using electricity and are commonly employed in areas where alternative fuels are scarce or where environmental regulations prohibit the use of combustion-based heating methods. Advantages: Clean and environmentally friendly. No emissions or combustion by-products. 
Precise temperature control. Disadvantages: High operational costs due to electricity consumption. Limited use in industries requiring very high heating capacities. Advantages of thermic fluid heaters High Temperature at Low Pressure: These heaters can reach high temperatures (around 300 °C) without pressurization, which reduces safety concerns compared to steam boilers. Energy Efficiency: The closed-loop system minimizes heat loss, leading to better energy efficiency. Low Maintenance: Fewer components (no boiler, drum, or water treatment) result in reduced maintenance requirements. Flexibility: Suitable for a wide range of industrial applications and can be fueled by various energy sources. Applications Thermic fluid heaters are widely used in industries such as: Chemical industry: For heating reactors, distillation columns, and other chemical processes. Textile industry: For dyeing, printing, and other fabric processing tasks. Pharmaceutical industry: For sterilization and production of pharmaceuticals. Food industry: For processes like frying, baking, and dehydration. Oil & Gas industry: For heating crude oil, regeneration of catalysts, and other petrochemical processes. Safety and Environmental Considerations While thermic fluid heaters are generally safer than steam-based systems due to the low-pressure operation, there are still important safety considerations: Emission Control: Heaters using solid, liquid, or gaseous fuels need proper emission control systems, such as scrubbers or filters, to minimize environmental impact. Leakage: Since the thermic fluid operates in a closed-loop system, any leakage can lead to safety concerns, including fire hazards. Proper operation, maintenance, and the use of high-quality thermic fluids can mitigate many of these risks. References Boilers Heating
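As a back-of-the-envelope sketch of the circulation loop described in the working-principle section above, the heat duty carried by the circulating thermic fluid is the product of mass flow, specific heat, and the supply-to-return temperature drop. The fluid properties and temperatures below are illustrative assumptions, not manufacturer data.

```python
# Heat balance for the thermic fluid circulation loop: duty = m_dot * cp * dT.
# Property values below are assumed for illustration only.
def heater_duty_kw(mass_flow_kg_s: float, cp_kj_per_kg_k: float,
                   t_supply_c: float, t_return_c: float) -> float:
    """Heat duty (kW) delivered between heater supply and process return."""
    return mass_flow_kg_s * cp_kj_per_kg_k * (t_supply_c - t_return_c)

# Example: 8 kg/s of thermal oil (cp ~ 2.3 kJ/kg.K) supplied at 280 C and
# returning at 250 C delivers roughly 550 kW of usable process heat.
print(heater_duty_kw(8.0, 2.3, 280.0, 250.0))   # -> 552.0
```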
Thermic fluid heater
[ "Chemistry" ]
1,052
[ "Boilers", "Pressure vessels" ]
53,738,820
https://en.wikipedia.org/wiki/Sequence%20saturation%20mutagenesis
Sequence saturation mutagenesis (SeSaM) is a chemo-enzymatic random mutagenesis method applied for the directed evolution of proteins and enzymes. It is one of the most common saturation mutagenesis techniques. In four PCR-based reaction steps, phosphorothioate nucleotides are inserted in the gene sequence, cleaved and the resulting fragments elongated by universal or degenerate nucleotides. These nucleotides are then replaced by standard nucleotides, allowing for a broad distribution of nucleic acid mutations spread over the gene sequence with a preference for transversions and with a unique focus on consecutive point mutations, both difficult to generate by other mutagenesis techniques. The technique was developed by Professor Ulrich Schwaneberg at Jacobs University Bremen and RWTH Aachen University. Technology, development and advantages SeSaM has been developed in order to overcome several of the major limitations encountered when working with standard mutagenesis methods based on simple error-prone PCR (epPCR) techniques. These epPCR techniques rely on the use of polymerases and thus encounter limitations which mainly result from the fact that only single, and very rarely consecutive, nucleic acid substitutions are performed and that these substitutions usually occur only at specific, favored positions. In addition, transversions of nucleic acids are much less likely than transitions and require specifically designed polymerases with an altered bias. These characteristics of epPCR-catalyzed nucleic acid exchanges, together with the fact that the genetic code is degenerate, decrease the resulting diversity at the amino acid level. Synonymous substitutions lead to amino acid preservation, and conservative mutations to amino acids with similar physico-chemical properties, such as size and hydrophobicity, are strongly prevalent. By non-specific introduction of universal bases at every position in the gene sequence, SeSaM overcomes the polymerase bias favoring transition substitutions at specific positions and opens the complete gene sequence to a diverse array of amino acid exchanges. During the development of the SeSaM method, several modifications were introduced that allowed for the introduction of several mutations simultaneously. Another advancement of the method was achieved by the introduction of degenerate bases instead of universal inosine and the use of optimized DNA polymerases, further increasing the ratio of introduced transversions. This modified SeSaM-TV+ method in addition allows for and favors the introduction of two consecutive nucleotide exchanges, strongly broadening the spectrum of amino acids that may be substituted. Through several optimizations, including the application of an improved chimeric polymerase in Step III of the SeSaM-TV-II method and the addition of an alternative degenerate nucleotide for efficient substitution of thymine and cytosine bases and increased mutation frequency in SeSaM-P/R, the generated libraries were further improved with regard to transversion number, and the number of consecutive mutations was raised to 2–4 consecutive mutations with a rate of consecutive mutations of up to 30%. Procedure The SeSaM method consists of four PCR-based steps which can be executed within two to three days. Major parts include the incorporation of phosphorothioate nucleotides, the chemical fragmentation at these positions, the introduction of universal or degenerate bases and their replacement by natural nucleotides inserting point mutations.
Initially, universal “SeSaM”-sequences are inserted by PCR with gene-specific primers binding in front of and behind the gene of interest. The gene of interest with its flanking regions is amplified to introduce these SeSaM_fwd and SeSaM_rev sequences and to generate template for consecutive PCR steps. The resulting so-called fwd and rev templates are then amplified in a PCR reaction with a pre-defined mixture of phosphorothioate and standard nucleotides to ensure an even distribution of inserted mutations over the full length of the gene. PCR products of Step 1 are cleaved specifically at the phosphorothioate bonds, generating a pool of single-stranded DNA fragments of different lengths starting from the universal primer. In Step 2 of SeSaM, the DNA single strands are elongated by one to several universal or degenerate bases (depending on the modification of SeSaM applied), catalyzed by terminal deoxynucleotidyl transferase (TdT). This is the key step for introducing the characteristic consecutive mutations that randomly mutate entire codons. Subsequently, in Step 3 a PCR is performed recombining the single-stranded DNA fragments with the corresponding full-length reverse template, generating the full-length double-stranded gene including universal or degenerate bases in its sequence. By replacement of the universal/degenerate bases in the gene sequence by random standard nucleotides in SeSaM Step 4, a diverse array of full-length gene sequences with substitution mutations is generated, including a high load of transversions and consecutive substitution mutations. Applications SeSaM is used to directly optimize proteins at the amino acid level, but also to preliminarily identify amino acid positions to test in saturation mutagenesis for the ideal amino acid exchange. SeSaM has been successfully applied in numerous directed evolution campaigns of different classes of enzymes for their improvement towards selected properties, such as cellulase for ionic liquid resistance, protease with increased detergent tolerance, glucose oxidase for analytical application, phytase with increased thermostability and monooxygenase with improved catalytic efficiency using alternative electron donors. SeSaM is patent protected by US770374 B2 in over 13 countries and is one of the platform technologies of SeSaM-Biotech GmbH. References Molecular genetics Mutagenesis Protein engineering
Sequence saturation mutagenesis
[ "Chemistry", "Biology" ]
1,184
[ "Molecular genetics", "Molecular biology" ]
53,739,018
https://en.wikipedia.org/wiki/Liquid%20crystalline%20elastomer
Liquid crystal elastomers (LCEs) are slightly crosslinked liquid crystalline polymer networks. These materials combine the entropy elasticity of an elastomer with the self-organization of the liquid crystalline phase. In liquid crystalline elastomers, the mesogens can either be part of the polymer chain (main-chain liquid crystalline elastomers) or are attached via an alkyl spacer (side-chain liquid crystalline elastomers). Due to their actuation properties, liquid crystalline elastomers are attractive candidates for use as artificial muscles or microrobots. History LCEs were predicted by Pierre-Gilles de Gennes in 1975 and first synthesized by Heino Finkelmann. Properties In the temperature range of the liquid crystalline phase, the mesogens' orientation forces the polymer chains into a stretched conformation. Heating the sample above the clearing temperature destroys this orientation and the polymer backbone can relax into (the more favored) random coil conformation. That can lead to a macroscopic, reversible deformation. Good actuation requires a good alignment of the domains' directors before cross-linking. This can be achieved by: stretching of the prepolymerized sample, photo-alignment layers, magnetic or electric fields and microfluidics. Mechanical Properties Soft Elasticity Because of their anisotropy, the mechanical response of aligned nematic LCEs varies depending upon the direction of applied stress. When stress is applied along the direction of alignment (parallel to the director), the strain responds in a linear fashion, with a slope dictated by the material's Young's modulus. This linear stress-strain behavior continues until the material reaches its yield stress, at which point it may neck or strain harden before eventually failing. The shape of the stress-strain curve for LCEs stretched parallel to their aligned direction matches that of most classical rubbers and can be described using treatments such as rubber elasticity. In contrast, when stress is applied perpendicular to the direction of alignment, the strain behavior exhibits a drastically different response. For an unconstrained LCE, after an initial region where the stress-strain response matches that of classical rubbers, the material exhibits a large plateau where near-constant stress leads to ever-increasing strain. The term “soft elasticity” describes this large plateau region. After a critical strain is reached in this region, the stress-strain response returns to that of LCEs stretched in a direction parallel to their director. The theory used to describe soft elasticity first arose to explain experimental observations of the phenomenon in unconstrained LCEs that reoriented in the presence of an external electric field. The theory of soft elasticity states that when an LCE is stretched in a direction perpendicular to its alignment direction, its chains rotate and reorient to align in the direction of applied stress. Assuming that the LCE chains are allowed to freely move in all three dimensions, this reorientation occurs without a change in the elastic free energy of the system. This implies that there is no energy barrier to the rotation of the LCE chains, meaning that zero stress would be required to fully reorient them. Experimentally, a small but non-zero stress is required to induce soft elasticity and achieve this chain rotation.
This deviation from the theoretical prediction arises due to the fact that real LCEs are not truly free in all three dimensions, and are instead geometrically restricted by neighboring chains. As a result, some small, finite stress is necessary in experimental systems to induce chain reorientation. Once the chain has fully rotated and is aligned parallel to the direction of applied stress, the subsequent stress-strain response is again described by that of rubber elasticity. Soft elasticity has also been exploited to develop materials with unique and useful properties. By controlling the local liquid crystal alignment in an LCE, films with spatially varying mechanical anisotropy can be fabricated. When strained, different regions of these chemically homogeneous films stretch to different extents as a result of the relative orientation of the director to the applied stress. This has the effect of localizing deformation to predetermined regions. This predictable deformation is useful because it allows for the design of soft electronic devices that are globally compliant but locally stiff, ensuring important components do not break when the film is deformed. Actuation Upon transitioning from a liquid crystalline phase to an isotropic (orientationally disordered) phase, or vice versa, an LCE sample will spontaneously deform into a different shape. For example, if a nematic LCE transitions to its isotropic state, it will undergo contraction parallel to its director and expansion in the perpendicular plane. Any stimulus that drives the ordered ⇔ disordered phase transition can induce such actuation (or 'activation'). A patterned director field thus allows an LCE sample to morph into a radically different shape upon stimulation, returning to its original shape when the stimulus is removed. Due to its reversibility, large strain, and the potential to prescribe extremely complex shape changes, this shape-morphing effect has attracted much interest as a potential tool for creating soft machines such as actuators or robots. As a simple example, consider a thin disk-shaped LCE sheet with a 'concentric-circles' (everywhere azimuthal) in-plane director pattern. Upon heating to the isotropic state, the disk will rise into a cone, which can be used to lift a weight thousands of times the weight of the LCE itself. Azobenzenes Besides the thermal deformation of a sample, light-responsive actuation can be obtained by incorporating azobenzenes in the liquid crystalline phase. The phase transition temperature of an azo-liquid crystalline elastomer can be reduced due to the trans-cis isomerization of the azobenzenes during UV irradiation, and thus the liquid crystalline phase can be destroyed isothermally. For liquid crystalline elastomers with a high azo concentration, a light-responsive change of the sample's length of up to 40% could be observed. Applications LCEs have been examined for use as a lightweight energy-absorbing material. Tilted slabs of LCE were attached to stiff materials, approximating a honeycomb lattice. Arranging these in multiple layers allowed the material to buckle at different rates on impact, efficiently dissipating energy across the structure. Increasing the number of layers increased absorption capacity. References Polymers
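As a point of reference for the stress-strain discussion above (a standard textbook relation, not a result from the cited LCE studies): for an incompressible neo-Hookean rubber stretched uniaxially by a ratio $\lambda$, classical rubber elasticity gives the nominal stress $\sigma_{\mathrm{nom}} = G\left(\lambda - \lambda^{-2}\right)$, where $G$ is the shear modulus. Stretching a nematic LCE parallel to its director follows this kind of monotonic response, whereas stretching perpendicular to the director shows the near-constant-stress soft-elasticity plateau instead.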
Liquid crystalline elastomer
[ "Chemistry", "Materials_science" ]
1,327
[ "Polymers", "Polymer chemistry" ]
53,740,291
https://en.wikipedia.org/wiki/Kantrowitz%20limit
In gas dynamics, the Kantrowitz limit refers to a theoretical concept describing choked flow at supersonic or near-supersonic velocities. When an initially subsonic fluid flow experiences a reduction in cross-section area, the flow speeds up in order to maintain the same mass-flow rate, per the continuity equation. If a near-supersonic flow experiences an area contraction, the velocity of the flow will decrease until it reaches the local speed of sound, and the flow will be choked. This is the principle behind the Kantrowitz limit: it is the maximum amount of contraction a flow can experience before the flow chokes, and the flow speed can no longer be increased above this limit, independent of changes in upstream or downstream pressure. Derivation of Kantrowitz limit Assume a fluid enters an internally contracting nozzle at cross-section 0, and passes through a throat of smaller area at cross-section 4. A normal shock is assumed to start at the beginning of the nozzle contraction, and this point in the nozzle is referred to as cross-section 2. Due to conservation of mass within the nozzle, the mass flow rate at each cross section must be equal: $\dot{m}_0 = \dot{m}_4$. For an ideal compressible gas, the mass flow rate at each cross-section can be written as $\dot{m} = \frac{A p_t}{\sqrt{T_t}} \sqrt{\frac{\gamma}{R}}\, M \left(1 + \frac{\gamma - 1}{2} M^2\right)^{-\frac{\gamma + 1}{2(\gamma - 1)}}$, where $A$ is the cross-section area at the specified point, $\gamma$ is the isentropic expansion factor of the gas, $M$ is the Mach number of the flow at the specified cross-section, $R$ is the ideal gas constant, $p_t$ is the stagnation pressure, and $T_t$ is the stagnation temperature. Setting the mass flow rates equal at the inlet and throat, and recognizing that the total temperature, ratio of specific heats, and gas constant are constant, the conservation of mass simplifies to $A_0 p_{t0} M_0 \left(1 + \frac{\gamma - 1}{2} M_0^2\right)^{-\frac{\gamma + 1}{2(\gamma - 1)}} = A_4 p_{t4} M_4 \left(1 + \frac{\gamma - 1}{2} M_4^2\right)^{-\frac{\gamma + 1}{2(\gamma - 1)}}$. Solving for $A_4/A_0$: $\frac{A_4}{A_0} = \frac{p_{t0}}{p_{t4}} \frac{M_0}{M_4} \left(\frac{1 + \frac{\gamma - 1}{2} M_4^2}{1 + \frac{\gamma - 1}{2} M_0^2}\right)^{\frac{\gamma + 1}{2(\gamma - 1)}}$. Three assumptions will be made: the flow from behind the normal shock in the inlet is isentropic, or $p_{t4} = p_{t2}$; the flow at the throat (point 4) is sonic, such that $M_4 = 1$; and the pressures between the various points are related through normal shock relations, resulting in the following relation between inlet and throat stagnation pressures: $\frac{p_{t4}}{p_{t0}} = \frac{p_{t2}}{p_{t0}} = \left[\frac{(\gamma + 1) M_0^2}{(\gamma - 1) M_0^2 + 2}\right]^{\frac{\gamma}{\gamma - 1}} \left[\frac{\gamma + 1}{2 \gamma M_0^2 - (\gamma - 1)}\right]^{\frac{1}{\gamma - 1}}$. And since $M_4 = 1$, the flow terms at the throat simplify to $1 + \frac{\gamma - 1}{2} M_4^2 = \frac{\gamma + 1}{2}$. Substituting for $p_{t4}/p_{t0}$ and $M_4$ in the area ratio expression gives $\frac{A_4}{A_0} = M_0 \left(\frac{\gamma + 1}{2 + (\gamma - 1) M_0^2}\right)^{\frac{\gamma + 1}{2(\gamma - 1)}} \left[\frac{(\gamma + 1) M_0^2}{(\gamma - 1) M_0^2 + 2}\right]^{-\frac{\gamma}{\gamma - 1}} \left[\frac{\gamma + 1}{2 \gamma M_0^2 - (\gamma - 1)}\right]^{-\frac{1}{\gamma - 1}}$, which can also be rearranged into equivalent algebraic forms. Applications The Kantrowitz limit has many applications in gas dynamics of inlet flow, including jet engines and rockets operating at high-subsonic and supersonic velocities, and high-speed transportation systems such as the Hyperloop. Hypersonic Engine Inlets The Kantrowitz limit demonstrates the amount of contraction, or change in two-dimensional cross-section area, that a hypersonic inlet can employ while successfully starting an engine inlet (or avoiding the expelling of the hypersonic inlet shock wave). Hyperloop The Kantrowitz limit is a fundamental concept in the Hyperloop, a proposed high-speed transportation system. The Hyperloop moves passengers in sealed pods through a partial-vacuum tube at high-subsonic speeds. As the air in the tube moves into and around the smaller cross-sectional area between the pod and tube, the air flow must speed up due to the continuity principle. If the pod is travelling through the tube fast enough, the air flow around the pod will reach the speed of sound, and the flow will become choked, resulting in large air resistance on the pod. The condition that determines if the flow around the pod chokes is the Kantrowitz limit.
The Kantrowitz limit therefore acts as a "speed limit": for a given ratio of tube area and pod area, there is a maximum speed that the pod can travel before flow around the pod chokes and air resistance sharply increases. In order to break through the speed limit set by the Kantrowitz limit, there are two possible approaches. The first would increase the diameter of the tube in order to provide more bypass area for the air around the pod, preventing the flow from choking. This solution is not very practical, however, as the tube would have to be built very large, and the logistical costs of such a large tube would be impractical. As an alternative, it was found during the main study of the Swissmetro project (1993–1998) that a turbine can be installed on board the vehicle to push the displaced air across the vehicle body (TurboSwissMetro) and hence reduce far-field impacts. This would avoid the continuous increase of vehicle drag due to choking of the flow, at the cost of the power required to drive the turbine, and hence enable higher speeds. The computer program NUMSTA has been developed in this context; it allows simulation of the dynamic interaction of several high-speed vehicles in complex tunnel networks, including the choking effect. This idea has also been proposed by Elon Musk in his 2013 Hyperloop Alpha paper, where a compressor is placed at the front of the pod. The compressor actively draws in air from the front of the pod and transfers it to the rear, bypassing the gap between pod and tube while diverting a fraction of the flow to power a low-friction air-bearing suspension system. The inclusion of a compressor in the Hyperloop pod circumvents the Kantrowitz limit, allowing the pod to travel at speeds over 700 mph (about 1126 km/h) in a relatively narrow tube. For a pod travelling through a tube, the Kantrowitz limit is expressed in terms of the ratio of tube area to bypass area, counting both the area around the outside of the pod and the flow passed through any air-bypass compressor. See also Arthur Kantrowitz References Fluid dynamics Hyperloop
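As a small numerical sketch of the area-ratio expression derived above, the function below evaluates the Kantrowitz limit A4/A0 at supersonic approach Mach numbers, where the normal-shock assumption of the derivation applies. The value gamma = 1.4 for air and the example Mach numbers are illustrative choices.

```python
# Kantrowitz limit area ratio A4/A0 versus approach Mach number M0,
# following the expression derived above (gamma = 1.4 assumed for air).
def kantrowitz_area_ratio(m0: float, gamma: float = 1.4) -> float:
    """Maximum throat-to-inlet area contraction before the flow chokes."""
    g = gamma
    term1 = ((g + 1.0) / (2.0 + (g - 1.0) * m0**2)) ** ((g + 1.0) / (2.0 * (g - 1.0)))
    term2 = (((g + 1.0) * m0**2) / ((g - 1.0) * m0**2 + 2.0)) ** (-g / (g - 1.0))
    term3 = ((g + 1.0) / (2.0 * g * m0**2 - (g - 1.0))) ** (-1.0 / (g - 1.0))
    return m0 * term1 * term2 * term3

for mach in (1.5, 2.0, 3.0):
    print(f"M0 = {mach}: A4/A0 >= {kantrowitz_area_ratio(mach):.3f}")
# The allowable contraction shrinks from ~0.91 at M0 = 1.5 toward ~0.6 at high Mach.
```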
Kantrowitz limit
[ "Chemistry", "Technology", "Engineering" ]
1,154
[ "Transport systems", "Chemical engineering", "Piping", "Vacuum systems", "Hyperloop", "Fluid dynamics" ]
53,741,333
https://en.wikipedia.org/wiki/NetScaler
NetScaler is a line of networking products owned by Cloud Software Group. The products consist of NetScaler, an application delivery controller (ADC), NetScaler AppFirewall, an application firewall, NetScaler Unified Gateway, NetScaler Application Delivery Management (ADM), and NetScaler SD-WAN, which provides software-defined wide-area networking management. NetScaler was initially developed in 1997 by Michel K. Susai and acquired by Citrix Systems in 2005. Citrix consolidated all of its networking products under the NetScaler brand in 2016. On September 30, 2022, when Citrix was taken private as part of the merger with TIBCO Software, NetScaler was formed as a business unit under the Cloud Software Group. Overview The NetScaler line of products forms the networking business unit of Cloud Software Group. It includes NetScaler ADCs, NetScaler Unified Gateway, NetScaler AppFirewall, NetScaler Intelligent Traffic Management, and NetScaler Application Delivery Manager. The products can work in conjunction with other Cloud Software Group offerings, including its Citrix and Xen lines of products. NetScaler is integrated with OpenStack as part of Cloud Software Group's sponsorship of the OpenStack Foundation. Products NetScaler is Cloud Software Group's core networking product. It is an application delivery controller (ADC), a tool that improves the delivery speed and quality of applications to an end user. The product is aimed at business customers and it performs tasks such as traffic optimization, L4-L7 load balancing, and web app acceleration while maintaining data security. NetScaler monitors server health and allocates network and application traffic to additional servers for efficient use of resources. It also performs several kinds of caching and compression. It can act as a server proxy, process SSL requests, and offer VPN and micro-app VPN operations. It also includes NetScaler application firewall and SSL encryption capabilities. NetScaler ADC can manage traffic during DDoS attacks, making sure traffic gets to critical applications. Additionally, NetScaler's logs of network activity feed into Citrix's cloud-based analytics service and are used to analyze and identify security risks. There are five versions of NetScaler: NetScaler MPX, a hardware-based appliance for use in data centers; NetScaler SDX, a hardware-based appliance intended for service providers that provides virtualization delivering multitenancy for virtual and cloud-based data centers; NetScaler VPX, a software-based application that is implemented as a virtual machine and intended for small business use; NetScaler CPX, a NetScaler ADC packaged in a container and designed for cloud and microservices applications; and NetScaler BLX, a bare-metal version that can run on top of any Linux distribution while offering line-rate performance. In addition, the NetScaler line of products includes Citrix SD-WAN, formerly CloudBridge SD-WAN, which provides software-defined wide-area networking and branch networking. The SD-WAN product reached end of sale on December 31, 2022. NetScaler Unified Gateway offers secure remote access to virtual desktops and a variety of applications from a single point of entry and with single sign-on (SSO). The NetScaler Application Delivery Management (ADM) is a platform designed for the organization and automation of policy management across devices and applications. The tool is intended for IT professionals to manage the various NetScaler products from a single dashboard.
This dashboard is applicable to all five versions (SDX/MPX/VPX/CPX/BLX) and works regardless of whether the device is deployed in the cloud or on-premises. The platform also provides real-time analytics. NetScaler ADM is also available as a service; this cloud solution can be used to manage, monitor, and troubleshoot the entire global application delivery infrastructure from a single, unified, and centralized cloud-based console. History 1997 – Entrepreneur Michel K. Susai founded NetScaler on December 1, 1997, in San Jose, California. He created NetScaler as a solution for reducing infrastructure during the growth of the Internet in the late 1990s. 2000 – NetScaler ships its first product, the WebScaler 3000, a transmultiplexer. 2001 – The company repositioned NetScaler as a security and optimization tool. NetScaler releases the NetScaler 6000, a load balancer. 2002 – NetScaler releases the NetScaler 9000, a load balancer with integrated SSL and compression offload. The NetScaler 9000 goes on to a position of market dominance within three years. 2004 – NetScaler offers a complete load balancer with integrated SSL VPN. Security Weekly tests the NetScaler RS9800HA-T and states "This was the fastest unit in the test. If you need the performance then this is the unit to choose." By 2005, NetScaler estimated 75 percent of Internet users used its systems through clients including Google and Amazon. Citrix acquired NetScaler in 2005 for approximately $300 million in cash and stock. 2006 – NetScaler offers the 11000 series systems, the most successful and ubiquitous hardware load balancer of its time. 2007 – NetScaler offers the MPX-17xxx series systems, the first NetScaler with 10G connectivity. 2008/2009 – NetScaler transitions from the uniprocessor to the multiprocessor packet engine implementation, known as nCore. nCore allowed NetScaler to take advantage of the new Intel multicore chips to continue to increase performance. NetScaler offers the MPX-10500-FIPS platform for high-security markets. 2009 – NetScaler introduces the VPX edition. 2011 – NetScaler offers the MPX-115xx series of systems, an enormously popular platform. NetScaler also releases the first multi-tenant ADC hardware platform, the SDX, which combined the flexibility of virtualization with powerful, purpose-built hardware. 2016 – Citrix transitioned all of its delivery products under the NetScaler brand. Citrix CloudBridge SD-WAN became NetScaler SD-WAN. The company also introduced "NetScaler Management and Analytics System", a console for users to manage all NetScaler products, including the ADCs and SD-WAN, and a containerized version of NetScaler called NetScaler CPX. Citrix released a free developer version of NetScaler CPX called NetScaler CPX Express in August 2016. Reception Reviewing NetScaler ADC in 2007, InfoWorld gave it a score of 8.6 out of 10. The reviewer noted that it was easy to set up and administer, and provided performance improvements in load balancing and Web application speed. However, there were variable results with features such as TCP session buffering and TCP session consolidation, as these would depend on other factors. As well, InfoWorld said that NetScaler is best suited for "organizations making corporate applications available over the Web for internal or external customers" and "large, heavily trafficked Web sites" but was more costly than other available solutions for a "small, three-node Web farm that will be lightly loaded".
GCN wrote in 2011 that NetScaler is "much more than a load balancer; it’s really an all-in-one Web application delivery system". The site gave NetScaler an A+ rating for features, B− for ease of use, A+ for performance and a C for value. The same article noted that it was difficult to learn and expensive. , annual net revenue from sales of NetScaler products and services was . References Citrix Systems Load balancing (computing) Cloud applications Servers (computing) Networking hardware Configuration management
NetScaler
[ "Engineering" ]
1,636
[ "Systems engineering", "Configuration management", "Computer networks engineering", "Networking hardware" ]
53,741,891
https://en.wikipedia.org/wiki/Occupational%20exposure%20banding
Occupational exposure banding, also known as hazard banding, is a process intended to quickly and accurately assign chemicals into specific categories (bands), each corresponding to a range of exposure concentrations designed to protect worker health. These bands are assigned based on a chemical’s toxicological potency and the adverse health effects associated with exposure to the chemical. The output of this process is an occupational exposure band (OEB). Occupational exposure banding has been used by the pharmaceutical sector and by some major chemical companies over the past several decades to establish exposure control limits or ranges for new or existing chemicals that do not have formal OELs. Furthermore, occupational exposure banding has become an important component of the Hierarchy of Occupational Exposure Limits (OELs). The U.S. National Institute for Occupational Safety and Health (NIOSH) has developed a process that could be used to apply occupational exposure banding to a broader spectrum of occupational settings. The NIOSH occupational exposure banding process utilizes available, but often limited, toxicological data to determine a potential range of chemical exposure levels that can be used as targets for exposure controls to reduce risk among workers. An OEB is not meant to replace an OEL, rather it serves as a starting point to inform risk management decisions. Therefore, the OEB process should not be applied to a chemical with an existing OEL. Purpose Occupational exposure limits (OELs) play a critical role in protecting workers from exposure to dangerous concentrations of hazardous material. In the absence of an OEL, determining the controls needed to protect workers from chemical exposures can be challenging. According to the U.S. Environmental Protection Agency, the Toxic Substances Control Act Chemical Substance Inventory as of 2014 contained over 85,000 chemicals that are commercially available, but a quantitative health-based OEL has been developed for only about 1,000 of these chemicals. Furthermore, the rate at which new chemicals are being introduced into commerce significantly outpaces OEL development, creating a need for guidance on thousands of chemicals that lack reliable exposure limits. The NIOSH occupational exposure banding process has been created to provide a reliable approximation of a safe exposure level for potentially hazardous and unregulated chemicals in the workplace. Occupational exposure banding uses limited chemical toxicity data to group chemicals into one of five bands. Occupational exposure bands: Define a set range of exposures expected to protect worker health Identify potential health effects and target organs with 9 toxicological endpoints Provide critical information on chemical potency Inform decisions on control methods, hazard communication, and medical surveillance Identify areas where health effects data is lacking Require less time and data than developing an OEL Assignment process The NIOSH occupational exposure banding process utilizes a three-tiered approach. Each tier of the process has different requirements for data sufficiency, which allows stakeholders to use the occupational exposure banding process in many different situations. Selection of the most appropriate tier for a specific banding situation depends on the quantity and quality of the available data and the training and expertise of the user. The process places chemicals into one of five bands, designated A through E. 
Each band is associated with a specific range of exposure concentrations. Band E represents the lowest range of exposure concentrations, while Band A represents the highest range. Assignment of a chemical to a band is based on both the potency of the chemical and the severity of the health effect. Bands A and B include chemicals that have reversible health effects or that produce adverse effects only at high concentrations. Bands C, D, and E include chemicals with serious or irreversible effects and those that cause problems at low concentrations. The resulting airborne concentration target ranges for each band are shown in the accompanying graphic. Tier 1, the qualitative tier, produces an occupational exposure band (OEB) assignment based on qualitative data from the Globally Harmonized System of Classification and Labeling of Chemicals (GHS); it involves assigning the OEB based on criteria aligned with specific GHS hazard codes and categories. These hazard codes are typically pulled from GESTIS, ECHA Annex VI, or safety data sheets. The Tier 1 process can be performed by a health and safety generalist, and takes only minutes to complete with the NIOSH OEB e-tool. The e-tool is free to use and can be accessed through the NIOSH website. Tier 2, the semi-quantitative tier, produces an OEB assignment based on quantitative and qualitative data from secondary sources; it involves assigning the OEB on the basis of key findings from prescribed literature sources, including use of data from specific types of studies. Tier 2 focuses on nine toxicological endpoints. The Tier 2 process can be performed by an occupational hygienist but requires some formal training. Tier 2 banding is also incorporated into the NIOSH OEB e-tool but can take hours instead of minutes to complete for a given chemical. However, the resulting band is considered more robust than a Tier 1 band due to the in-depth retrieval of published data. NIOSH recommends users complete at least the Tier 2 process to produce reliable OEBs. Tier 3, the expert judgement tier, relies on expert judgement to produce a band based on primary and secondary data that is available to the user. This level of OEB would require the advanced knowledge and experience held by a toxicologist or veteran occupational hygienist. The Tier 3 process allows the professional to incorporate their own raw data in conjunction with data drawn from published literature. Reliability Since unveiling the occupational exposure banding technique in 2017, NIOSH has sought feedback from its users and has evaluated the reliability of this tool. The response has been overwhelmingly positive. Users have described Tier 1 as a helpful screening tool, Tier 2 as a basic assessment for a new chemical on the worksite, and Tier 3 as a personalized in-depth analysis. During pilot testing, NIOSH evaluated the Tier 1 and Tier 2 protocols using chemicals with OELs and compared the banding results to the OELs. For more than 90% of these chemicals, the resulting Tier 1 and Tier 2 bands were found to be as stringent as, or more stringent than, the OELs. This demonstrates the confidence health and safety professionals can have in the OEB process when making risk management decisions for chemicals without OELs. Limitations Although occupational exposure banding holds a great deal of promise for the occupational hygiene profession, there are potential limitations that should be considered.
As with any analysis, the outcome of the NIOSH occupational exposure banding process – the OEB – is dependent upon the quantity and the quality of data used and the expertise of the individual using the process. In order to maximize data quality, NIOSH has compiled a list of NIOSH-recommended sources that can provide data suitable for banding. Furthermore, for some chemicals the amount of quality data may not be sufficient to derive an OEB. It is important to note that the lack of data does not indicate that the chemical is safe. Other risk management strategies, such as control banding, can then be applied. Control banding versus exposure banding The NIOSH occupational exposure banding process guides a user through the evaluation and selection of critical health hazard information to select an OEB from among five categories of severity. For OEBs, the process uses only hazard-based data (e.g., studies on human health effects or toxicology studies) to identify an overall level of hazard potential and associated airborne concentration range for chemicals with similar hazard profiles. While the output of this process can be used by informed occupational safety and health professionals to make risk management and exposure control decisions, the process does not supply such recommendations directly. In contrast, control banding is a strategy that groups workplace risks into control categories or bands based on combinations of both hazard and exposure information. Control banding combines hazard banding with exposure risk management to directly link hazards to specific control measures. Various toolkit models for control banding have been developed in the UK, Germany, and the Netherlands. COSHH Essentials was the first widely adopted banding scheme. Other banding schemes are also available, such as Stoffenmanager, EMKG, and the International Chemical Control Toolkit of the ILO. Evaluations of these and other control banding systems have yielded varying results. Occupational exposure banding has emerged as a helpful supplementary exposure assessment tool. When conducting a workplace hazard assessment, occupational hygienists may find it useful to start with occupational exposure banding to identify potential hazards and exposure ranges, before moving on to control banding. Together, these tools will aid health and safety professionals in selecting appropriate risk mitigation strategies. See also Health Hazards Evaluation Program, NIOSH Occupational hygiene References External links The NIOSH Occupational Exposure Banding Process: Guidance for the Evaluation of Chemical Hazards Current Intelligence Bulletin The NIOSH Occupational Exposure Banding Topic Page The NIOSH Occupational Exposure Banding e-Tool Occupational Exposure Banding – A Conversation with Lauralynn Taylor McKernan, ScD CIH The NIOSH Control Banding Topic Page Hands-on Activity Demonstration: Identifying Occupational Exposure Bands Occupational Exposure Control Banding Pharmaceuticals Control Recommendations by Esco Pharma based on OEB Classification Occupational safety and health Chemical safety Risk management Industrial hygiene Hazard analysis Occupational hazards
Occupational exposure banding
[ "Chemistry", "Engineering" ]
1,877
[ "Chemical accident", "Safety engineering", "Hazard analysis", "nan", "Chemical safety" ]
53,744,787
https://en.wikipedia.org/wiki/Polymer%20matrix%20composite
In materials science, a polymer matrix composite (PMC) is a composite material composed of a variety of short or continuous fibers bound together by a matrix of organic polymers. PMCs are designed to transfer loads between fibers of a matrix. Some of the advantages with PMCs include their light weight, high resistance to abrasion and corrosion, and high stiffness and strength along the direction of their reinforcements. Matrix materials The function of the matrix in PMCs is to bond the fibers together and transfer loads between them. PMCs matrices are typically either thermosets or thermoplastics. Thermosets are by far the predominant type in use today. Thermosets are subdivided into several resin systems including epoxies, phenolics, polyurethanes, and polyimides. Of these, epoxy systems currently dominate the advanced composite industry. Thermosets Thermoset resins require addition of a curing agent or hardener and impregnation onto a reinforcing material, followed by a curing step to produce a cured or finished part. Once cured, the part cannot be changed or reformed, except for finishing. Some of the more common thermosets include epoxy, polyurethanes, phenolic and amino resins, bismaleimides (BMI, polyimides), polyamides. Of these, epoxies are the most commonly used in the industry. Epoxy resins have been in use in U.S. industry for over 40 years. Epoxy compounds are also referred to as glycidyl compounds. The epoxy molecule can also be expanded or cross-linked with other molecules to form a wide variety of resin products, each with distinct performance characteristics. These resins range from low-viscosity liquids to high-molecular weight solids. Typically they are high-viscosity liquids. The second of the essential ingredients of an advanced composite system is the curing agent or hardener. These compounds are very important because they control the reaction rate and determine the performance characteristics of the finished part. Since these compounds act as catalysts for the reaction, they must contain active sites on their molecules. Some of the most commonly used curing agents in the advanced composite industry are the aromatic amines. Two of the most common are methylene-dianiline (MDA) and sulfonyldianiline (DDS). SiC–SiC matrix composites are a high-temperature ceramic matrix processed from preceramic polymers (polymeric SiC precursors) to infiltrate a fibrous preform to create a SiC matrix. Several other types of curing agents are also used in the advanced composite industry. These include aliphatic and cycloaliphatic amines, polyaminoamides, amides, and anhydrides. Again, the choice of curing agent depends on the cure and performance characteristics desired for the finished part. Polyurethanes are another group of resins used in advanced composite processes. These compounds are formed by reacting the polyol component with an isocyanate compound, typically toluene diisocyanate (TDI); methylene diisocyanate (MDI) and hexamethylene diisocyanate (HDI) are also widely used. Phenolic and amino resins are another group of PMC resins. The bismaleimides and polyamides are relative newcomers to the advanced composite industry and have not been studied to the extent of the other resins. Thermoplastics Thermoplastics currently represent a relatively small part of the PMC industry. They are typically supplied as nonreactive solids (no chemical reaction occurs during processing) and require only heat and pressure to form the finished part. 
Unlike the thermosets, the thermoplastics can usually be reheated and reformed into another shape, if desired. Dispersed materials Fibers Fiber-reinforced PMCs contain about 60 percent reinforcing fiber by volume. The fibers commonly used within PMCs include fiberglass, graphite and aramid. Fiberglass has a relatively low stiffness while at the same time exhibiting a tensile strength competitive with that of other fibers. The cost of fiberglass is also dramatically lower than that of the other fibers, which is why fiberglass is one of the most widely used fibers. The reinforcing fibers have their highest mechanical properties along their lengths rather than their widths. Thus, the reinforcing fibers may be arranged and oriented in different forms and directions to provide different physical properties and advantages based on the application. Carbon Nanotubes Unlike fiber-reinforced PMCs, nanomaterial-reinforced PMCs are able to achieve significant improvements in mechanical properties at much lower loadings (less than 2% by volume). Carbon nanotubes in particular have been intensely studied due to their exceptional intrinsic mechanical properties and low densities. Carbon nanotubes have some of the highest measured tensile stiffnesses and strengths of any material, owing to the strong covalent sp2 bonds between carbon atoms. However, in order to take advantage of the exceptional mechanical properties of the nanotubes, the load transfer between the nanotubes and the matrix must be very large. As in fiber-reinforced composites, the size dispersion of the carbon nanotubes significantly affects the final properties of the composite. Stress-strain studies of single-walled carbon nanotubes in a polyethylene matrix using molecular dynamics showed that long carbon nanotubes lead to an increase in tensile stiffness and strength due to long-range stress transfer and the prevention of crack propagation. Short carbon nanotubes, on the other hand, do not enhance properties when there is no interfacial adhesion. Once modified, however, short carbon nanotubes can further improve the stiffness of the composite, although they still do little to counter crack propagation. In general, long and high-aspect-ratio carbon nanotubes lead to greater enhancement of mechanical properties, but are more difficult to process. Aside from size, the interface between the carbon nanotubes and the polymer matrix is of exceptional importance. In order to achieve better load transfer, a number of different methods have been used to bond the carbon nanotubes more effectively to the matrix by functionalizing the surface of the carbon nanotube with various polymers. These methods can be divided into non-covalent and covalent strategies. Non-covalent CNT modification involves the adsorption or wrapping of polymers onto the carbon nanotube surface, usually via van der Waals or π-stacking interactions. In contrast, covalent functionalization involves direct bonding onto the carbon nanotube. This can be achieved in a number of ways, such as oxidizing the surface of the carbon nanotube and reacting with the oxygenated site, or using a free radical to react directly with the carbon nanotube lattice. Covalent functionalization can be used to attach the polymer directly to the carbon nanotube, or to add an initiator molecule which can then be used for further reactions. The synthesis of carbon nanotube reinforced PMCs is dependent on the choice of matrix and the functionalization of the carbon nanotubes. 
For thermoset polymers, solution processing is used where the polymer and nanotubes are placed in an organic solvent. The mixture is then sonicated and mixed until the nanotubes are evenly dispersed, then cast. While this method is widely used, the sonication can damage the carbon nanotubes, the polymer must be soluble in the solvent of choice, and the rate of evaporation can often lead to undesirable structures like nanotube bundling or polymer voids. For thermoplastic polymers, melt-processing can be used, where the nanotube is mixed into the melted polymer, then cooled. However, this method cannot tolerate high carbon nanotube loading due to viscosity increases. In-situ polymerization can be used for polymers that are not solvent or heat compatible. In this method, the nanotubes are mixed with the monomer, which is then reacted to form the polymer matrix. This method can lead to especially good load transfer if monomers are also attached to the carbon nanotube surface. Graphene Like carbon nanotubes, pristine graphene also possesses exceptionally good mechanical properties. Graphene PMCs are typically processed in the same manner as carbon nanotube PMCs, using either solution processing, melt-processing, or in-situ polymerization. While the mechanical properties of graphene PMCs are typically worse than their carbon nanotube equivalents, graphene oxide is much easier to functionalize due to the inherent defects present. Additionally, 3D graphene polymer composites show some promise for the isotropic enhancement of mechanical properties. References Composite materials Fibre-reinforced polymers
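The stiffness benefit of continuous fibers loaded along their length can be approximated with the standard rule of mixtures. The sketch below runs that estimate for the roughly 60 percent fiber volume fraction cited above, using representative textbook moduli for E-glass fiber and cured epoxy rather than values taken from this article.

# Rule-of-mixtures (iso-strain) estimate of the longitudinal composite modulus:
#   E_c = Vf * E_fiber + (1 - Vf) * E_matrix
# The moduli below are representative textbook figures, not data from the article.
def longitudinal_modulus(e_fiber_gpa, e_matrix_gpa, fiber_volume_fraction):
    vf = fiber_volume_fraction
    return vf * e_fiber_gpa + (1.0 - vf) * e_matrix_gpa

E_GLASS_FIBER = 72.0   # GPa, typical E-glass fiber
E_EPOXY = 3.5          # GPa, typical cured epoxy matrix

print(longitudinal_modulus(E_GLASS_FIBER, E_EPOXY, 0.60))   # about 44.6 GPa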
Polymer matrix composite
[ "Physics" ]
1,825
[ "Materials", "Composite materials", "Matter" ]
53,744,975
https://en.wikipedia.org/wiki/Geotextiles%20and%20Geomembranes
Geotextiles and Geomembranes is a bimonthly peer-reviewed scientific journal. It is the official journal of the International Geosynthetics Society and is published on the society's behalf by Elsevier. The journal covers all topics relating to geosynthetics, including research, behaviour, performance analysis, testing, design, construction methods, case histories, and field experience. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.292. See also Geotechnical engineering References External links Bimonthly journals Elsevier academic journals Materials science journals Academic journals established in 1984 English-language journals Textile journals
Geotextiles and Geomembranes
[ "Materials_science", "Engineering" ]
147
[ "Materials science", "Materials science journals", "Textile journals" ]
42,240,565
https://en.wikipedia.org/wiki/Karl%20Spencer%20Lashley%20Award
The Karl Spencer Lashley Award is awarded by The American Philosophical Society as a recognition of research on the integrative neuroscience of behavior. The award was established in 1957 by a gift from Dr. Karl Spencer Lashley. Recipients 2024 Margaret Livingstone 2023 Silvia Arber 2022 Nicholas Spitzer 2021 Patricia K. Kuhl «in recognition of her fundamental discoveries concerning how human infants acquire language, and how brain structure and activity changes during language learning in both monolingual and bilingual children» 2020 Winrich Freiwald and Doris Tsao - "In recognition of their ground-breaking discoveries of primate cortical areas that selectively encode visual information about faces, the computational principles underlying face encoding in these areas, and the implications of these discoveries for social cognition." 2019 Wolfram Schultz 2018 Catherine Dulac - "In recognition of her incisive studies of the molecular and circuit basis of instinctive behaviors mediated through olfactory systems in the mammalian brain" 2017 Michael Shadlen - "In recognition of his pioneering experimental and theoretical studies of decision-making, identifying neural mechanisms that accumulate and convert sensory information toward behavioral choices" 2016 Charles G. Gross - "In recognition of his pioneering studies of the neurophysiology of higher visual functions and the neural basis of face recognition and object perception" 2015 David W. Tank - "In recognition of his pioneering application of intracellular recording and two-photon microscopy in awake animals, which has revealed new insights into the neural circuits underlying cognition" 2014 Edvard and May-Britt Moser - "In recognition of their discovery of grid cells in entorhinal cortex, and their pioneering physiological studies of hippocampus, which have transformed understanding of the neural computations underlying spatial memory" 2013 J. Anthony Movshon - "In recognition of his studies of how neurons in the cerebral cortex process visual information and how cortical information processing enables seeing" 2012 Eve Marder - "In recognition of her comprehensive work with a small nervous system, demonstrating general principles by which neuromodulatory substances reconfigure the operation of neuronal networks" 2011 Joseph E. LeDoux - "In recognition of his seminal studies of the neural mechanisms of emotional learning, particularly fear learning and fear memory" 2010 William T. Newsome - "In recognition of his pioneering studies of the primate visual system demonstrating the relation between perception and the activity of individual neurons" 2009 James L. McGaugh - "In recognition of his comprehensive study of the biological processes that modulate the formation and consolidation of memory" 2008 Eric Knudsen - "In recognition of his comprehensive study of visual and auditory perception in the owl and for his elucidation of how the auditory map is calibrated by the visual system during development" 2007 Richard F. Thompson - "In recognition of his distinguished contributions to understanding the brain substrates of learning and memory" 2006 Jon H. 
Kaas - "In recognition of his comprehensive analyses of the primate cerebral cortex, its evolution, functional organization, and plastic response to injury" 2005 Bruce McEwen - "In recognition of his extensive demonstrations of the role of circulating steroid hormones as regulators of neuroplasticity and behavioral adaption" 2004 Masakazu Konishi and Fernando Nottebohm - "In recognition of their fundamental contributions in identifying the organization and function of the avian brain systems for learning and executing birdsong" 2003 Horace B. Barlow - "In recognition of his fundamental contributions to understanding how the eye and brain accomplish vision" 2002 Jean-Pierre Changeux - "In recognition of his pioneering, comprehensive studies into the fundamental molecular mechanisms underlying interneuronal communication and their role in network formation, learning, and reward" 2001 Edward G. Jones - "In recognition of his comprehensive determination of the organization of the thalamus and the basis for the dynamic regulation of cortical excitability" 2000 Charles Stevens - "In recognition of his penetrating contributions to synaptic transmission and synaptic plasticity" 1999 Michael Merzenich - "In recognition of his original contributions to cortical plasticity" 1998 Michael I. Posner and Marcus E. Raichle - "Jointly, for their pioneering contributions to brain imaging" 1996 Patricia S. Goldman-Rakic - "For seminal contributions to the current understanding of prefrontal cortex and its role in working memory and for effectively applying insights from basic biological sciences to mental health" 1996 Mortimer Mishkin - "For his pioneering analysis of the memory and the perceptual systems of the brain, and his seminal contributions to the understanding of the higher nervous system function" 1995 Larry R. Squire - "For his seminal contribution to the delineation of implicit and explicit memory systems in the brain" 1994 Robert H. Wurtz - "For brilliant technical innovations in recording the activity of single visual neurons of alert, behaviorally-trained monkeys that made possible salient scientific discoveries relating individual nerve cells to visual perception and to the generation of eye movement" 1993 Paul Greengard - "For his pioneering work on the molecular basis of signal transduction and vesicle mobilization in nerve cells" 1992 Seymour Kety - "For major contributions to understanding the genetics of schizophrenia and depression, and for developing reliable methods for studying cerebral blood flow which paved the way for PET imaging of brain activity" 1991 Sanford L. 
Palay - "For pioneering the study of the nervous system on the ultrastructural level, for revolutionizing understanding, and especially for his seminal contribution - characterization of the chemical synapse in the central nervous system" 1990 Viktor Hamburger - "For pioneering the study of neuroembryology, and especially the landmark contributions to understanding neural cell death, nerve growth factor, and the developmental program for motor behavior" 1989 Bela Julesz - "For his illuminating discoveries concerning the human visual capacity, particularly for stereoscopic vision, depth perception, and pattern recognition" 1989 Gian Franco Poggio - "For discoveries of visual cortical mechanisms in stereopsis and depth perception which have significantly influenced modern studies of the brain mechanisms in vision" 1988 Seymour Benzer - "A pioneer in using genetic techniques to study the genetic code and the transfer of information from DNA to proteins. By a brilliant selection of suitable experimental systems, he has succeeded over the last twenty years in advancing these techniques and applying them to the analysis of development and behavior. These contributions have greatly expanded the power of the genetic approach in neurobiology and fostered a merger between molecular biology and neurobiology that is having profound consequences on every aspect of the field" 1987 Louis Sokoloff - "For his elucidation of the physiological and biochemical processes involved in the metabolism of the brain and the application of these discoveries to the measurement of functional activity within that organ" 1986 Pasko Rakic - "For his seminal contributions to the field of developmental neurobiology through research on the development of the central nervous system" 1985 David Bodian - "In recognition of his fundamental neurobiological studies studies that laid the foundation for the successful development of a vaccine against poliomyelitis. He has continued to make important discoveries in the development and structure of the nervous system" 1984 W. Maxwell Cowan - "For his long record of important contributions to understanding the embryological development of the brain" 1983 Edward V. Evarts 1982 Herbert H. Jasper 1981 Eric R. Kandel 1980 Curt P. Richter 1979 Brenda Milner 1978 Victor Percy Whittaker 1977 Torsten Nils Wiesel and David Hunter Hubel 1976 Roger Wolcott Sperry 1975 Paul Weiss 1974 Vernon Benjamin Mountcastle 1973 Janos Szentagothai 1972 Paul D. MacLean 1971 Sir Wilfrid Le Gros Clark 1970 Horace Winchell Magoun 1969 Elizabeth C. Crosby 1968 Theodore H. Bullock 1967 George H. Bishop 1966 Hans-Lukas Teuber 1965 Giuseppe Moruzzi 1964 Walle H . J. Nauta 1963 Alexander Forbes 1962 Philip Bard 1961 Edgar Douglas Adrian 1960 Heinrich Kluver 1959 Rafael Lorente de Nó See also List of neuroscience awards Kavli Prize Golden Brain Award Gruber Prize in Neuroscience W. Alden Spencer Award The Brain Prize Mind & Brain Prize Ralph W. Gerard Prize in Neuroscience References External links American Philosophical Society, Lashley Award Awards established in 1957 Neuroscience awards
Karl Spencer Lashley Award
[ "Technology" ]
1,680
[ "Science and technology awards", "Neuroscience awards" ]
42,241,060
https://en.wikipedia.org/wiki/Lightning%20activity%20level
Lightning activity level (LAL) is a scale that describes degrees and types of lightning activity. Values are labeled 1–6. References Lightning Electrical phenomena Weather hazards Storm
Lightning activity level
[ "Physics", "Astronomy" ]
34
[ "Physical phenomena", "Plasma physics", "Weather", "Weather hazards", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Electrical phenomena", "Plasma physics stubs", "Lightning" ]
42,241,725
https://en.wikipedia.org/wiki/Power-over-fiber
Power-over-fiber, or PoF, is a technology in which a fiber-optic cable carries optical power, which is used as an energy source rather than, or as well as, carrying data. This allows a device to be remotely powered, while providing electrical isolation between the device and the power supply. Such systems can be used to protect the power supply from dangerous voltages such as from lightning, or to prevent voltage from the supply from igniting explosives. Power over fiber may also be useful in applications or environments where it is important to avoid the electromagnetic fields created by electricity flowing through copper wire, such as around delicate sensors or in sensitive military applications. See also Phantom power Power over Ethernet (PoE) References Networking hardware Network appliances Electric power Power supplies Fiber to the premises
Power-over-fiber
[ "Physics", "Engineering" ]
158
[ "Physical quantities", "Computer networks engineering", "Power (physics)", "Electric power", "Networking hardware", "Electrical engineering" ]
42,243,800
https://en.wikipedia.org/wiki/Car%20hydraulics
Car hydraulics are equipment installed in an automobile that allow the height of the vehicle to be adjusted dynamically. These suspension modifications are often fitted to a lowrider, i.e., a vehicle modified to lower its ground clearance below that of its original design. With these modifications, the body of the car can be raised by remote control. The number and kind of hydraulic pumps used and the specifications of the subject vehicle determine how much such systems can change the height and orientation of the vehicle. With sufficient pumps, an automobile can jump and hop upwards of six feet off the ground. Enthusiasts hold car jumping contests nationwide, which are judged on how high an automobile is able to bounce. Origin Lowrider automobiles originated in the California custom car community. Hydraulics first came on the scene after the 1958 California lowered-vehicle law went into effect. The first documented custom car with hydraulics appeared in 1958, when Jim Logue of Long Beach, California, installed them in his custom 1954 Ford, the “Fab X”. In 1959 Ron Aguirre of Rialto, California, installed hydraulics in his custom 1956 Corvette, the X Sonic, and shortly thereafter Aguirre began installing hydraulic lifts in many custom cars in the Inland Empire. During the early 1960s front hydraulic lifts became a very popular upgrade on many semi-custom cars in California. War-surplus aircraft hydraulic components were used to raise and lower the ride height. In 1962 the first Chevrolet Impala to feature hydraulics debuted: Tats Gotanda’s 1959 Impala, the “Buddha Buggy”, which featured hydraulics by Bill Hines. Throughout the 1960s a large majority of mild custom cars in Southern California were equipped with hydraulic lifts, and by the early 1970s this style of car had become known as the lowrider. Today, lowriders can be found worldwide. International In 1979, Japan received a shipment of Low rider magazines, which showed on the cover a lowered Chevy in front of Mount Fuji. This magazine, Orlie's Lowriding Magazine, was a profitable publication that advertised lowriders and hydraulic kits to consumers. Along with these magazines came mail-order forms for purchasing automotive hydraulics kits. By the 1980s, these kits, along with the cars themselves, had made Japan Orlie's top purchaser. Interior The pumps, valves and cylinders used for these modifications were originally designed for operations performed in aircraft. Using these materials required a great deal of engineering ability to get the cars back into working condition after being stripped. For many automobile owners, it was too expensive to have an auto shop install the hydraulics for them. In the early 1960s, owners were left to do the mechanical work on their own cars because the kits were not sold in stores until the late 1960s. These hydraulic kits were known as "trays" to many consumers. Because the pumps and valves had originally been made for large aircraft, extra batteries were often needed to run the hydraulics. This drained the batteries faster than their original duty had, so owners had to charge their automobiles' batteries more frequently. Components from truck liftgates were later found to be more manageable on the car than the aircraft hardware, and they also eased maintenance of the car. 
These cylinders, two or more, are connected to one pipe that is filled with oil, the basic working fluid of a hydraulic system. The cylinders use the pressure of the oil, supplied by the pump, to push the automobile up. The motion of the car is determined by the number of cylinder pumps installed in the vehicle. The number and placement of the pumps determine the range of motion the automobile has. A hydraulic valve called a "dump" is used to control the downward movement of the car. See also Automobile Custom car Hydraulics Lowrider Los Angeles References External links Lowrider Magazine Lowrider Magazine- Japan Super Show Auto parts Control devices
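The lifting action described above follows from Pascal's law: the force on each piston is the fluid pressure times the piston area. The sketch below runs that arithmetic with made-up but representative numbers; they are not specifications of any actual hydraulic kit.

# Illustrative Pascal's-law estimate of the force one hydraulic cylinder can
# exert on the suspension. Pressure and bore are representative, made-up values.
import math

def cylinder_force_newtons(pressure_pa, bore_m):
    piston_area = math.pi * (bore_m / 2.0) ** 2
    return pressure_pa * piston_area

pressure = 20e6    # 20 MPa (about 2,900 psi), illustrative pump pressure
bore = 0.025       # 25 mm piston diameter, illustrative
force = cylinder_force_newtons(pressure, bore)
print(f"{force / 1000:.1f} kN per cylinder (about {force / 9.81:.0f} kgf)")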
Car hydraulics
[ "Engineering" ]
808
[ "Control devices", "Control engineering" ]
42,243,853
https://en.wikipedia.org/wiki/Stan%20%28software%29
Stan is a probabilistic programming language for statistical inference written in C++. The Stan language is used to specify a (Bayesian) statistical model with an imperative program calculating the log probability density function. Stan is licensed under the New BSD License. Stan is named in honour of Stanislaw Ulam, pioneer of the Monte Carlo method. Stan was created by a development team consisting of 52 members that includes Andrew Gelman, Bob Carpenter, Daniel Lee, Ben Goodrich, and others. Example A simple linear regression model can be described as y_n = α + β·x_n + ε_n, where ε_n ~ normal(0, σ). This can also be expressed as y_n ~ normal(α + β·x_n, σ). The latter form can be written in Stan as the following:
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  y ~ normal(alpha + beta * x, sigma);
}
Interfaces The Stan language itself can be accessed through several interfaces: CmdStan – a command-line executable for the shell, CmdStanR and rstan – R software libraries, CmdStanPy and PyStan – libraries for the Python programming language, CmdStan.rb – library for the Ruby programming language, MatlabStan – integration with the MATLAB numerical computing environment, Stan.jl – integration with the Julia programming language, StataStan – integration with Stata, Stan Playground – an online, in-browser interface. In addition, higher-level interfaces are provided with packages using Stan as a backend, primarily in the R language: rstanarm provides a drop-in replacement for frequentist models provided by base R and lme4 using the R formula syntax; brms provides a wide array of linear and nonlinear models using the R formula syntax; prophet provides automated procedures for time series forecasting. Algorithms Stan implements gradient-based Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference, stochastic, gradient-based variational Bayesian methods for approximate Bayesian inference, and gradient-based optimization for penalized maximum likelihood estimation. MCMC algorithms: Hamiltonian Monte Carlo (HMC) No-U-Turn sampler (NUTS), a variant of HMC and Stan's default MCMC engine Variational inference algorithms: Automatic Differentiation Variational Inference Pathfinder: Parallel quasi-Newton variational inference Optimization algorithms: Limited-memory BFGS (Stan's default optimization algorithm) Broyden–Fletcher–Goldfarb–Shanno algorithm Laplace's approximation for classical standard error estimates and approximate Bayesian posteriors Automatic differentiation Stan implements reverse-mode automatic differentiation to calculate gradients of the model, which is required by HMC, NUTS, L-BFGS, BFGS, and variational inference. The automatic differentiation within Stan can be used outside of the probabilistic programming language. Usage Stan is used in fields including social science, pharmaceutical statistics, market research, and medical imaging. See also PyMC, a probabilistic programming language in Python ArviZ, a Python library for Exploratory Analysis of Bayesian Models References Further reading Gelman, Andrew, Daniel Lee, and Jiqiang Guo (2015). Stan: A probabilistic programming language for Bayesian inference and optimization, Journal of Educational and Behavioral Statistics. Hoffman, Matthew D., Bob Carpenter, and Andrew Gelman (2012). Stan, scalable software for Bayesian modeling, Proceedings of the NIPS Workshop on Probabilistic Programming. 
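As a rough sketch of how the model above might be driven from one of the Python interfaces listed earlier, the following assumes CmdStanPy is installed and that the Stan program has been saved as linreg.stan; the file name and the synthetic data are made up for illustration.

# Minimal sketch: compile and sample the linear-regression program above with
# CmdStanPy. Assumes the Stan code is saved as "linreg.stan"; data are synthetic.
import numpy as np
from cmdstanpy import CmdStanModel

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)

model = CmdStanModel(stan_file="linreg.stan")            # compiles the program
fit = model.sample(data={"N": len(x), "x": x, "y": y},   # runs NUTS by default
                   chains=4, iter_sampling=1000)

print("posterior mean of beta:", fit.stan_variable("beta").mean())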
External links Stan web site Stan source, a Git repository hosted on GitHub Computational statistics Free Bayesian statistics software Monte Carlo software Numerical programming languages Domain-specific programming languages Probabilistic software
Stan (software)
[ "Mathematics" ]
759
[ "Probabilistic software", "Computational statistics", "Computational mathematics", "Mathematical software" ]
40,812,823
https://en.wikipedia.org/wiki/Frecency
In computing, frecency is any heuristic that combines frequency and recency into a single measure. Heuristic In its simplest form, a frequency rating and a recency rating can be added to form a frecency rating. The ratings can be found by sorting items by most recent and by most frequent, respectively. A decayed calculation using logarithms can also be used. Examples Some web browsers use frecency to predict the likelihood of revisiting a given web page or reusing a given HTTP cache entry. Firefox, for example, describes it as follows: "Frecency is a score given to each unique URI in Places, encompassing bookmarks, history and tags. This score is determined by the amount of revisitation, the type of those visits, how recent they were, and whether the URI was bookmarked or tagged." Frecency can be computed from a list of use dates, either proactively while a user browses the web or on demand. Some frecency measures can also be computed in a rolling manner without storing such a list. The ZFS filesystem uses this concept in its adaptive replacement cache (ARC) with most recently used (MRU) and most frequently used (MFU) lists. References Heuristics Measurement Web browsers Internet terminology External links Frecency implementation in Firefox
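A minimal sketch of the decayed calculation mentioned above, scoring an item from its list of visit timestamps, might look like the following; the 30-day half-life is an arbitrary example value, not the parameter any particular browser uses.

# Illustrative frecency score: every visit contributes a weight that decays
# exponentially with age, so items that are both frequent and recent rank high.
# The 30-day half-life is an arbitrary example value.
import math
import time

HALF_LIFE_SECONDS = 30 * 24 * 3600   # 30 days, illustrative only

def frecency(visit_timestamps, now=None):
    now = time.time() if now is None else now
    decay = math.log(2) / HALF_LIFE_SECONDS
    return sum(math.exp(-decay * (now - t)) for t in visit_timestamps)

now = time.time()
visited_once_an_hour_ago = [now - 3600]
visited_ten_times_90_days_ago = [now - 90 * 24 * 3600] * 10
print(frecency(visited_once_an_hour_ago, now))        # close to 1.0
print(frecency(visited_ten_times_90_days_ago, now))   # 10 * 0.125 = 1.25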
Frecency
[ "Physics", "Mathematics", "Technology" ]
275
[ "Computing terminology", "Physical quantities", "Internet terminology", "Quantity", "Measurement", "Size" ]
40,816,967
https://en.wikipedia.org/wiki/Iron%20tetraboride
Iron tetraboride (FeB4) is a superhard superconductor (Tc < 3K) consisting of iron and boron. Iron tetraboride does not occur in nature and can be created synthetically. Its molecular structure was predicted using computer models. See also Binghamton University European Synchrotron Radiation Facility References External links First fully computer-designed superconductor First computer-designed superconductor created Scientists create first computer-designed superconductor X-rays reveal the first designer superconductor Viewpoint: Materials Prediction Scores a Hit Borides Iron compounds Superconductors Superhard materials
Iron tetraboride
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
131
[ "Materials science stubs", "Superconductivity", "Materials science", "Materials", "Superhard materials", "Superconductors", "Matter" ]
32,420,836
https://en.wikipedia.org/wiki/Sulfur%20concrete
Sulfur concrete, sometimes named thioconcrete or sulfurcrete, is a composite construction material, composed mainly of sulfur and aggregate (generally a coarse aggregate made of gravel or crushed rocks and a fine aggregate such as sand). Cement and water, important compounds in normal concrete, are not part of sulfur concrete. The concrete is heated above the melting point of elemental sulfur () at ca. in a ratio of between 12% and 25% sulfur, the rest being aggregate. Low-volatility (i.e., with a high boiling point) organic admixtures (sulfur modifiers), such as dicyclopentadiene (DCPD), styrene, turpentine, or furfural, are also added to the molten sulfur to inhibit its crystallization and to stabilize its polymeric structure after solidification. In the absence of modifying agents, elemental sulfur crystallizes in its most stable allotropic (polymorphic) crystal phase at room temperature. With the addition of some modifying agents, elemental sulfur forms a copolymer (linear chains with styrene, cross-linking structure with DCPD) and remains plastic. Sulfur concrete then achieves high mechanical strength within of cooling. It does not require a prolonged curing period like conventional cement concrete, which after setting (a few hours) must still harden to reach its expected nominal strength at 28 days. The rate of hardening of sulfur concrete depends on its cooling rate and also on the nature and concentration of modifying agents (cross-linking process). Its hardening is governed by the fairly rapid liquid/solid state change and associated phase transition processes (the added modifiers maintaining the plastic state while avoiding its recrystallization). It is a thermoplastic material whose physical state depends on temperature. It can be recycled and reshaped in a reversible way, simply by remelting it at high temperature. A sulfur concrete patent was already registered in 1900 by McKay. Sulfur concrete was studied in the 1920s and 1930s and received renewed interest in the 1970s because of the accumulation of large quantities of sulfur as a by-product of the hydrodesulfurization process of oil and gas production and its low cost. Characteristics Sulfur concrete has a low porosity and is a poorly permeable material. Its low hydraulic conductivity slows down water ingress in its low porosity matrix and so decreases the transport of harmful chemical species, such as chloride (pitting corrosion), towards the steel reinforcements (physical protection of steel as long as no microcracks develop in the sulfur concrete matrix). It is resistant to some compounds like acids which attack normal concrete. Beside its impermeability, Loov et al. (1974) also consider amongst the beneficial characteristics of sulfur concrete its low thermal and electrical conductivities. Sulfur concrete does not cause adverse reaction with glass (no alkali–silica reaction), does not produce efflorescences, and also presents a smooth surface finish. They also mention amongst its main limitations, its high coefficient of thermal expansion, the possible formation of acid under the action of water and sunlight. It also reacts with copper and produces a smell when melted. Uses Sulfur concrete was developed and promoted as a building material to get rid of large amounts of stored sulfur produced by hydrodesulfurization of gas and oil (Claus process). As of 2011, sulfur concrete has only been used in small quantities when fast curing or acid resistance is necessary. 
The material has been suggested by researchers as a potential building material on Mars, where water and limestone are not easily available, but sulfur is. Advantages and benefits More recently, it has been proposed as a near-carbon-neutral construction material. Its waterless and less energy-intensive production (in comparison with ordinary cement and regular concrete) makes it a potential alternative for high--emission portland-cement-based materials. Due to improvements in fabrication techniques, it can be produced in high quality and large quantities. Recyclable sulfur concrete sleepers are used in Belgium for the railways infrastructure, and are mass-produced locally. THIOTUBE is the brand name for certified acid-resistant DWF (dry weather flow) discharge pipes used in Belgium. Long-term scientific and technical challenges Sulfate-reducing bacteria (SRB) and sulfur-oxidizing bacteria (SOB) produce hydrogen sulfide () and sulfuric acid () respectively. When the sulfur cycle is active in sewers and emanations from the effluent waters are oxidized in by atmospheric oxygen at the moist surface of tunnel walls, sulfuric acid can attack the hydrated Portland cement paste of cementitious materials, especially in the non-totally immersed sections of sewers (non-completely water-filled vadose zone). It causes extensive damages to masonry mortar and concrete in older sewage infrastructures. Sulfur concrete, if proven resistant to long-term chemical and bacterial attacks, could provide an effective and long-lasting solution to this problem. However, since elemental sulfur itself participates in redox reactions used by some autotrophic bacteria to produce the energy they need from the sulfur cycle, elemental sulfur could contribute directly fueling the bacterial activity. Biofilms adhering to the surface of sewer walls could harbor autotrophic microbial colonies that can degrade sulfur concrete if they are able to use elemental sulfur directly as an electron donor to reduce nitrate (autotrophic denitrification process), or sulfate, present in wastewater. Studies and real life tests have shown that only bio sulfur is accessible to these bacteria. The very long-term durability of sulfur concrete also depends on physicochemical factors such as those controlling, among other things, the diffusion of modifying agents (if not completely chemically fixed) out of the elemental sulfur matrix and their leaching by water. The resulting changes in the physical properties of the material will determine its long-term mechanical strength and chemical behavior. The biodegradability of the organic admixtures (sulfur modifiers), or their resistance to microbial activity, and their possible biocidal properties (which may protect the sulfur concrete from microbial attack) are important aspects in assessing the durability of the material. This could also depend on the progressive recrystallization of elemental sulfur over time, or on the rate of plastic deformation of its structure modified by the different types of organic admixtures. Disadvantages and limitations Swamy and Jurjees (1986) have pointed out the limitations of sulfur concrete. They questioned the stability and the long-term durability of sulfur concrete beams with steel reinforcement, especially for sulfur concrete modified with dicyclopentadiene and dipentene. Even when dry, modified concrete beams show strength loss with ageing. Ageing in a wet environment leads to softening of sulfur concrete and loss of strength. 
It causes structural damages in sulfur concrete beams leading to shear failures and cracking. Swamy and Jurjees (1986) also observed severe corrosion of steel reinforcements. They concluded that the stability of reinforced sulfur concrete beams can only be guaranteed when they are unmodified and kept dry. Being based on the use of elemental sulfur (S, or S) as a binder, sulfur concrete applications are expected to suffer the same limitations as those of elemental sulfur which is not a really inert material, can burn, and is also known to be a potent corrosive agent. In case of fire, this concrete is flammable and will generate toxic and corrosive fumes of sulfur dioxide (), and sulfur trioxide (), ultimately leading to the formation of sulfuric acid (). According to Maldonado-Zagal and Boden (1982), the hydrolysis of elemental sulfur (octa-atomic sulphur, S) in water is driven by its disproportionation into oxidised and reduced forms in the ratio / = 3/1. Hydrogen sulfide () causes sulfide stress cracking (SSC) and in contact with air is also easily oxidized into thiosulfate (), responsible for pitting corrosion. Like pyrite (, iron(II) disulfide), in the presence of moisture, sulfur is also sensitive to oxidation by atmospheric oxygen and could ultimately produce sulfuric acid (), sulfate (), and intermediate chemical species such as thiosulfates (), or tetrathionates (), which are also strongly corrosive substances (pitting corrosion), as all the reduced species of sulfur. Therefore, long-term corrosion problems of steels and other metals (aluminium, copper...) need to be anticipated, and correctly addressed, before selecting sulfur concrete for specific applications. The formation of sulfuric acid could also attack and dissolve limestone () and concrete structures while also producing expansive gypsum (), aggravating the formation of cracks and fissures in these materials. If the local physico-chemical conditions are conducive (sufficient space and water available for their growth), sulfur-oxidizing bacteria (microbial oxidation of sulfur) could also thrive at the expense of concrete sulfur and contribute to aggravate potential corrosion problems. The degradation rate of elemental sulfur depends on its specific surface area. The degradation reactions are the fastest with sulfur dust, or crushed powder of sulfur, while intact compact blocks of sulfur concrete are expected to react more slowly. The service life of components made of sulfur concrete depends thus on the degradation kinetics of elemental sulfur exposed to atmospheric oxygen, moisture and microorganisms, on the density/concentration of microcracks in the material, and on the accessibility of the carbon-steel surface to the corrosive degradation products present in aqueous solution in case of macrocracks or technical voids exposed to water ingress. All these factors need to be taken into account when designing structures, systems and components (SSC) based on sulfur concrete, certainly if they are reinforced, or pre-stressed, with steel elements (rebar or tensioning cables respectively). While the process of elemental sulfur oxidation will also lower the pH value, aggravating carbon steel corrosion, in contrast to ordinary Portland cement and classical concrete, fresh sulfur concrete does not contain alkali hydroxides (KOH, NaOH), nor calcium hydroxide (), and therefore does not provide any buffering capacity to maintain a high pH passivating the steel surface. 
In other words, intact sulfur concrete does not chemically protect steel reinforcement bars (rebar) against corrosion. The corrosion of steel elements embedded into sulfur concrete will thus depends on water ingress through cracks and to their exposure to aggressive chemical species of sulfur dissolved in the seeping water. The presence of microorganisms fuelled by elemental sulfur could also play a role and accelerate the corrosion rate. See also Asphalt concrete, similar aggregate material using 'bitumen' as a binder Sulfur-based lunarcrete, a proposed lunar construction material Cenocell, a concrete material using fly ash cenospheres (hollow spheres) in place of cement Rubber vulcanisation and cross-linking made by disulfide bridges formed after the reaction of elemental sulfur with natural rubber terpenoids (polyisoprene) (process discovered by Charles Goodyear) Sulfur vulcanization, produced by the reaction of elemental sulfur with allyl groups (-CH=CH-CH2-) of natural rubber (latex extracted from hevea) heated at elevated temperature Notes References Further reading — also: External links Concrete Sulfur Corrosion Geomicrobiology Building materials Sustainable building Sustainable development
Sulfur concrete
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,388
[ "Structural engineering", "Sustainable building", "Building engineering", "Metallurgy", "Corrosion", "Architecture", "Construction", "Electrochemistry", "Materials", "Concrete", "Materials degradation", "Matter", "Building materials" ]
32,422,777
https://en.wikipedia.org/wiki/Bernstein%E2%80%93von%20Mises%20theorem
In Bayesian inference, the Bernstein–von Mises theorem provides the basis for using Bayesian credible sets for confidence statements in parametric models. It states that under some conditions, a posterior distribution converges in total variation distance to a multivariate normal distribution centered at the maximum likelihood estimator with covariance matrix given by , where is the true population parameter and is the Fisher information matrix at the true population parameter value: The Bernstein–von Mises theorem links Bayesian inference with frequentist inference. It assumes there is some true probabilistic process that generates the observations, as in frequentism, and then studies the quality of Bayesian methods of recovering that process, and making uncertainty statements about that process. In particular, it states that asymptotically, many Bayesian credible sets of a certain credibility level will act as confidence sets of confidence level , which allows for the interpretation of Bayesian credible sets. Statement Let be a well-specified statistical model, where the parameter space is a subset of . Further, let data be independently and identically distributed from . Suppose that all of the following conditions hold: The model admits densities with respect to some measure . The Fisher information matrix is nonsingular. The model is differentiable in quadratic mean. That is, there exists a measurable function such that as . For every , there exists a sequence of test functions such that and as . The prior measure is absolutely continuous with respect to the Lebesgue measure in a neighborhood of , with a continuous positive density at . Then for any estimator satisfying , the posterior distribution of satisfies as . Relationship to maximum likelihood estimation Under certain regularity conditions, the maximum likelihood estimator is an asymptotically efficient estimator and can thus be used as in the theorem statement. This then yields that the posterior distribution converges in total variation distance to the asymptotic distribution of the maximum likelihood estimator, which is commonly used to construct frequentist confidence sets. Implications The most important implication of the Bernstein–von Mises theorem is that the Bayesian inference is asymptotically correct from a frequentist point of view. This means that for large amounts of data, one can use the posterior distribution to make, from a frequentist point of view, valid statements about estimation and uncertainty. History The theorem is named after Richard von Mises and S. N. Bernstein, although the first proper proof was given by Joseph L. Doob in 1949 for random variables with finite probability space. Later Lucien Le Cam, his PhD student Lorraine Schwartz, David A. Freedman and Persi Diaconis extended the proof under more general assumptions. Limitations In case of a misspecified model, the posterior distribution will also become asymptotically Gaussian with a correct mean, but not necessarily with the Fisher information as the variance. This implies that Bayesian credible sets of level cannot be interpreted as confidence sets of level . In the case of nonparametric statistics, the Bernstein–von Mises theorem usually fails to hold with a notable exception of the Dirichlet process. 
A remarkable result was found by Freedman in 1965: the Bernstein–von Mises theorem does not hold almost surely if the random variable has an infinite countable probability space; however, this depends on allowing a very broad range of possible priors. In practice, the priors used typically in research do have the desirable property even with an infinite countable probability space. Different summary statistics such as the mode and mean may behave differently in the posterior distribution. In Freedman's examples, the posterior density and its mean can converge on the wrong result, but the posterior mode is consistent and will converge on the correct result. References Further reading Bayesian inference Theorems in statistics
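For reference, the convergence asserted by the theorem is usually written in the following standard textbook form, with the estimator, true parameter and Fisher information denoted as below; the notation is supplied here as a restatement rather than quoted from the article's original formulas.

% Standard form of the Bernstein--von Mises limit (notation assumed, not quoted
% from this article): the posterior law of the parameter approaches a normal
% distribution centred at an efficient estimator, in total variation distance.
\[
  \bigl\| \, P\bigl(\theta \in \cdot \mid X_1,\dots,X_n\bigr)
      \;-\; \mathcal{N}\!\bigl(\hat{\theta}_n,\; \tfrac{1}{n}\,\mathcal{I}(\theta_0)^{-1}\bigr)
  \bigr\|_{\mathrm{TV}} \;\xrightarrow[\,n\to\infty\,]{P_{\theta_0}}\; 0
\]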
Bernstein–von Mises theorem
[ "Mathematics" ]
771
[ "Mathematical theorems", "Mathematical problems", "Theorems in statistics" ]
32,423,093
https://en.wikipedia.org/wiki/PackML
PackML (Packaging Machine Language) is an industry technical standard for the control of packaging machines, as an aspect of industrial automation. PackML was created by the Organization for Machine Automation and Control (OMAC) in conjunction with the International Society of Automation (ISA). The primary objective of PackML is to bring a common “look and feel” and operational consistency to all machines that make up a packing line (it can also be used for other types of discrete processes). Description The Manufacturing Automation Industry is broken down into three main categories: continuous control, batch control and discrete control. The batch control industry and the packaging industry (discrete control of packaging machines) are the focus of a set of standards and guidelines that are similar but have differences driven by equipment functionality. PackML provides: standard defined machine states and operational flow; Overall Equipment Effectiveness (OEE) data; Root Cause Analysis (RCA) data; and flexible recipe schemes and common SCADA or MES inputs. These provisions are enabled by the “Line Types” definitions (“Guidelines for Packaging Machinery Automation v3.1”) created by the OMAC Packaging Workgroup, and by leveraging the ISA-88 State Model concepts. PackML definitions are intended to make machines more serviceable and easier to redeploy. PackML concepts are also finding application in other discrete control environments such as converting, assembled products, machine tools, and robotics. In an effort to gain industry acceptance, Procter & Gamble (P&G) developed a “PackML Implementation Guide” with a software template and help files that were provided to OMAC under a royalty-free, non-exclusive license. The guide is an implementation of ANSI/ISA-TR88.00.02-2015, borrows concepts from ANSI/ISA-88 Part 1 and embraces the ANSI/ISA-88 Part 5 draft concepts of the hierarchical model (Machine/Unit, Station/Equipment Module, Control Device/Control Module). The OMAC Implementation Guide provides PackML implementation guidelines, data structures and a minimum set of recommended PackTags (i.e. those typically needed for commercial MES packages). The implementation guideline provides a method to deliver State Control, Machine-to-Machine Communications and Machine-to-Information System Communications. The PackML Implementation Guide is software (ladder-based) and is oriented towards Rockwell control systems. It is structured such that PackML “States” can directly drive “ANSI/ISA88 Part 5 Equipment & Control Modules”. Many control suppliers (including Siemens, Lenze, Bosch, Rockwell, Mitsubishi, B&R, ELAU, Beckhoff) have developed their own PackML software templates. As control suppliers provide their implementations, links are posted on the OMAC web site. 
Standards ANSI/ISA-88 Batch Control Part 1 – Batch Control Models and Terminology (IEC 61512-1) Part 2 – Data Structures and Guidelines for Languages (IEC 61512-2) Part 3 – General and Site Recipe Models and Representations (IEC 61512-3) Part 4 – Batch Production Records (IEC 61512-4) Part 5 – (Make2Pack) Equipment Modules and Control Modules ANSI/ISA-TR88.00.02-2015 Machine and Unit States: An implementation example of ANSI/ISA-88.00.01 ISBN 978-1-941546-65-9 ANSI/ISA-95 Integration of Enterprise and Control Systems Part 1 – Models and Terminologies (IEC 62264-1) Part 2 – Object Model Attributes (IEC 62264-2) Part 3 – Activity Models of Manufacturing Operations Management (IEC 62264-3) Part 4 – Object Models & Attributes for Manufacturing Operations Management Part 5 – Business to Manufacturing Transactions IEC - International Electrotechnical Commission IEC 60848: 2002, GRAFCET specification language for sequential function charts IEC 60050-351: 2006, International Electrotechnical Vocabulary – Part 351: Control technology ANSI/ISA-95.00.01-2010 (IEC 62264-1 Mod), Enterprise-Control System Integration – Part 1: Models and Terminology ANSI/ISA-95.00.02-2010 (IEC 62264-2 Mod), Enterprise-Control System Integration – Part 2: Object Model Attributes ANSI/ISA–95.00.03 Enterprise-Control System Integration Part 3: Activity models of manufacturing operations management IEC/ISO 62264-1, Enterprise-Control System Integration - Part 1: Models and Terminology History The ISA-88 Committee started work in the 1980s and has developed a series of standards and technical reports with the intent of providing a broadly accepted set of concepts, models and definitions for the batch control industry. ISA 88 Part 1, Batch Control Models and Terminology, introduces the concepts of a hierarchical model, a state model and modular software design. In the late 1980s the ISA began an effort to develop a set of standards for the Batch Control Industry with the intent of providing improved system performance and programming efficiencies by way of a standard set of models and procedures. ANSI/ISA-88 Part 5 (Make2Pack) was written to provide a standard specifically for Equipment Modules and Control Modules. Starting in the early 2000s OMAC began work on a similar standard that embraced some of the basic concepts developed for the Batch Control Industry with the intent of providing the same benefits to the Machine Control Industry, specifically for Packaging Machines. These standards continued in parallel development until 2008 when an ISA sanctioned technical report was written to harmonize these standards. ANSI/ISA TR88.00.02-2008 Machine and Unit States: An Implementation Example of ISA-88 became the basis of the Packaging Standard PackML. In the early 2000s the OMAC Packaging Work Group formed 3 technical sub-committees to help unify the way machines are introduced into the packaging market. Each committee had a specific focus area: PackSoft: Research applicable programming languages to the packaging industry PackConnect: Research applicable field bus networks to the packaging industry PackML: Bridge the gap between PackSoft and PackConnect The PackML sub-committee's focus was to develop a method to quickly integrate a line of machines without concern on what field bus (protocol & media-the domain of the PackConnect sub-committee) was going to carry the data set between machines, SCADA and MES. 
After several iterations the approach taken was to extend the ANSI/ISA-88 Part 1 State Model concept to the Packaging Industry. Later in the development process, the concept of PackTags was introduced to provide a uniform set of naming conventions for data elements used within the state model. PackTags are used for machine-to-machine communications; for example between a Filler and a Capper. In addition, PackTags were designed to address OEE (Overall Equipment Effectiveness) calculations. PackTags can be used to provide data exchange between machines and higher level information systems like Manufacturing Operations Management and Enterprise Information Systems. In 2004 the WBF (WBF - The Organization for Production Technology) formed the Make2Pack workgroup, which was chartered to evaluate the similarities between OMAC's PackML and WBF's automation efforts. Based upon the workgroups determination the WBF expanded the Make2Pack Effort in 2006 to develop a new Batch Control Standard titled “Batch Control – Part 5: Implementation Models & Terminology for Modular Equipment Control” with the intent of providing a guideline for modular control for all automation industries. This effort was then chartered by ISA under “ISA-TR88.00.05-Machine and Unit States” but was later designated as TR88.00.02. ISA-TR88.00.02 was approved in 2008 and is the basis document for the OMAC PackML Implementation Guide. OMAC later became affiliated with ISA in 2005. OMAC is an independent, self-funded organization. It gets additional non-monetary support from PMMI (Packaging Machinery Manufacturers Institute) and ARC (Automation Research Corporation). The PackML and PackTags guideline documents have gone through several versions (v1, v2, v3). During the PackML development process, PackTags were combined into the guideline documents. In 2008 the final version (v3), which contains both PackML and PackTags, were updated and harmonized with the ANSI/ISA-88.00.01 standard terms and definitions to produce the technical report ANSI/ISA-TR88.00.02-2008 Machine and Unit States: An Implementation Example of ISA-88. ANSI/ISA-TR88.00.02 defines ISA-S88 Part 1 and Part 5 concepts of Modes, States and data structures (PackTags) in a Package Machine environment and provides example implementations. PackML has previously released versions 1, 2 & 3, with several implementations of version 2 in existence. The PackML version 2 implementation had the disadvantage of being memory intensive for PLC processors, unnecessary unused code as well as having an incomplete state/mode model for some machines. PackML v3 corrected these disadvantages. It was superseded when it was harmonized with the S88 Part 5 efforts to become ISA-TR88.00.02. References Industrial automation Packaging
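A minimal sketch of the machine-state idea at the heart of PackML is shown below. The state names follow the commonly published ANSI/ISA-TR88.00.02 model, but the transition table here is deliberately abbreviated and illustrative; it is not a conforming implementation.

# Abbreviated, illustrative PackML-style state machine in Python. State names
# follow the commonly published ANSI/ISA-TR88.00.02 model; the transition map
# is intentionally incomplete and exists only to show the shape of the logic.
from enum import Enum, auto

class State(Enum):
    STOPPED = auto(); RESETTING = auto(); IDLE = auto(); STARTING = auto()
    EXECUTE = auto(); COMPLETING = auto(); COMPLETE = auto()
    HOLDING = auto(); HELD = auto(); UNHOLDING = auto()
    SUSPENDING = auto(); SUSPENDED = auto(); UNSUSPENDING = auto()
    STOPPING = auto(); ABORTING = auto(); ABORTED = auto(); CLEARING = auto()

# command -> {current state: next state}; heavily abbreviated for illustration
TRANSITIONS = {
    "reset": {State.STOPPED: State.RESETTING, State.COMPLETE: State.RESETTING},
    "start": {State.IDLE: State.STARTING},
    "hold":  {State.EXECUTE: State.HOLDING},
    "stop":  {State.EXECUTE: State.STOPPING, State.HELD: State.STOPPING},
    "abort": {State.EXECUTE: State.ABORTING, State.STOPPED: State.ABORTING},
}

def command(state, cmd):
    """Return the next state, or raise if the command is not allowed here."""
    try:
        return TRANSITIONS[cmd][state]
    except KeyError:
        raise ValueError(f"command {cmd!r} not allowed in state {state.name}")

print(command(State.IDLE, "start"))   # State.STARTING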
PackML
[ "Engineering" ]
1,901
[ "Industrial automation", "Automation", "Industrial engineering" ]
32,425,616
https://en.wikipedia.org/wiki/Phoenix%20International%20Holdings
Phoenix International Holdings, Inc. (Phoenix) is a marine services company that performs manned and unmanned underwater operations worldwide. Notable projects Notable projects in which Phoenix has participated include: 2022-Location and Recovery of a downed U.S. Navy F-35C Lightning II jetplane in South China Sea 2021-Location and Recovery of the fuselage of a downed MH-60S Seahawk helicopter near Okinawa 2019-Deep ocean salvage of a C-2A plane (location and recovery) 2014-The search for Malaysia Airlines Flight 370; Air France Flight 447, Yemenia Flight 626, Adam Air Flight 574, and Tuninter Flight 1153 black box recoveries; 2012-Search for Amelia Earhart airplane under a contract with TIGHAR 2011-Submarine rescue readiness exercise (Bold Monarch) 2010-Forensic inspection of the Deepwater Horizon control room; 2010-The design, fabrication, and testing of a Saturation Fly-Away Diving System (SAT FADS) for the U.S. Navy; 2003-The search for Space Shuttle Columbia debris; 2003- documentary investigations and mapping projects; 2002- turret recovery; 2000-The discovery and forensic survey of the Israeli submarine ; References External links Phoenix International Home Page Underwater diving engineering Engineering companies of the United States
Phoenix International Holdings
[ "Engineering" ]
257
[ "Underwater diving engineering", "Marine engineering" ]
49,854,427
https://en.wikipedia.org/wiki/Epoxyeicosatetraenoic%20acid
Epoxyeicosatetraenoic acids (EEQs or EpETEs) are a set of biologically active epoxides that various cell types make by metabolizing the omega 3 fatty acid, eicosapentaenoic acid (EPA), with certain cytochrome P450 epoxygenases. These epoxygenases can metabolize EPA to as many as 10 epoxides that differ in the site and/or stereoisomer of the epoxide formed; however, the formed EEQs, while differing in potency, often have similar bioactivities and are commonly considered together. Structure EPA is a straight-chain, 20 carbon omega-3 fatty acid containing cis double bonds between carbons 5 and 6, 8 and 9, 11 and 12, 14 and 15, and 17 and 18; each of these double bonds is designated with the notation Z to indicate its cis configuration in the IUPAC Chemical nomenclature used here. EPA is therefore 5Z,8Z,11Z,14Z,17Z-eicosapentaenoic acid. Certain cytochrome P450 epoxygenases metabolize EPA by converting one of these double bonds to an epoxide, thereby forming one of 5 possible eicosatetraenoic acid epoxide regioisomers. These regioisomers are: 5,6-EEQ (i.e. 5,6-epoxy-8Z,11Z,14Z,17Z-eicosatetraenoic acid), 8,9-EEQ (i.e. 8,9-epoxy-5Z,11Z,14Z,17Z-eicosatetraenoic acid), 11,12-EEQ (i.e. 11,12-epoxy-5Z,8Z,14Z,17Z-eicosatetraenoic acid), 14,15-EEQ (i.e. 14,15-epoxy-5Z,8Z,11Z,17Z-eicosatetraenoic acid), and 17,18-EEQ (i.e. 17,18-epoxy-5Z,8Z,11Z,14Z-eicosatetraenoic acid). The epoxygenases typically make both enantiomers of each epoxide. For example, they metabolize EPA at its 17,18 double bond to a mixture of 17R,18S-EEQ and 17S,18R-EEQ. The EEQ products therefore consist of as many as ten isomers. Production Cellular cytochrome P450 epoxygenases metabolize various polyunsaturated fatty acids to epoxide-containing products. They metabolize the omega-6 fatty acid arachidonic acid, which possesses four double bonds, to 8 different epoxide isomers which are termed epoxyeicosatrienoic acids or EETs, and the omega-6 fatty acid linoleic acid, which possesses two double bonds, to 4 different epoxide isomers, i.e. two different 9,10-epoxide isomers termed coronaric acids or leukotoxins and two different 12,13-epoxide isomers termed vernolic acids or isoleukotoxins. They metabolize the omega-3 fatty acid, docosahexaenoic acid, which possesses six double bonds, to twelve different epoxydocosapentaenoic acid (EDPs) isomers. In general, the same epoxygenases that accomplish these metabolic conversions also metabolize the omega-3 fatty acid, EPA, to 10 epoxide isomers, the EEQs. These epoxygenases fall into several subfamilies including the cytochrome P4501A (i.e. CYP1A), CYP2B, CYP2C, CYP2E, and CYP2J subfamilies, and within the CYP3A subfamily, CYP3A4. In humans, CYP1A1, CYP1A2, CYP2C8, CYP2C9, CYP2C18, CYP2C19, CYP2E1, CYP2J2, CYP3A4, and CYP2S1 metabolize EPA to EEQs, in most cases forming principally 17,18-EEQ with smaller amounts of 5,6-EEQ, 8,9-EEQ, 11,12-EEQ, and 14,15-EEQ isomers. However, CYP2C11, CYP2C18, and CYP2S1 also form 14,15-EEQ isomers while CYP2C19 also forms 11,12-EEQ isomers. The isomers formed by these CYPs vary greatly with, for example, the 17,18-EEQs made by CYP1A2 consisting of 17R,18S-EEQ but no detectable 17S,18R-EEQ and those made by CYP2D6 consisting principally of 17R,18S-EEQ with far smaller amounts of 17S,18R-EEQ. In addition to the cited CYPs, CYP4A11, CYP4F8, CYP4F12, CYP1A1, CYP1A2, and CYP2E1, which are classified as CYP monooxygenases rather than CYP epoxygenases because they metabolize arachidonic acid to monohydroxy eicosatetraenoic acid products (see 20-Hydroxyeicosatetraenoic acid), i.e. 
19-hydroxyeicosatetraenoic acid and/or 20-hydroxyeicosatetraenoic acid, take on epoxygenase activity in converting EPA primarily to 17,18-EEQ isomers (see Epoxyeicosatrienoic acid). 5,6-EEQ isomers are generally either not formed or formed in undetectable amounts while 8,9-EEQ isomers are formed in relatively small amounts by the cited CYPs. The EET-forming CYP epoxygenases often metabolize EPA to EEQs (as well as DHA to EDPs) at rates that exceed their rates in metabolizing arachidonic acid to EETs; that is, EPA (and DHA) appear to be preferred over arachidonic acid as substrates for many CYP epoxygenases. The EEQ-forming cytochromes are widely distributed in the tissues of humans and other mammals, including blood vessel endothelium, blood vessel atheroma plaques, heart muscle, kidneys, pancreas, intestine, lung, brain, monocytes, and macrophages. These tissues are known to metabolize arachidonic acid to EETs; it has been shown or is presumed that they also metabolize EPA to EEQs. Note, however, that the CYP epoxygenases, similar to essentially all CYP450 enzymes, are involved in the metabolism of xenobiotics as well as endogenously-formed compounds; since many of these same compounds also induce increases in the levels of the epoxygenases, CYP oxygenase levels and consequently EEQ levels in humans vary widely and are highly dependent on recent consumption history; numerous other factors, including individual genetic differences, also contribute to the variability in CYP450 epoxygenase expression. EEQ metabolism In cells, EEQs are rapidly metabolized by the same enzyme that similarly metabolizes other epoxy fatty acids including the EETs, viz., cytosolic soluble epoxide hydrolase [EC 3.3.2.10] (also termed sEH or EPHX2), to form their corresponding vicinal diol dihydroxyeicosatetraenoic acids (diHETEs). The omega-3 fatty acid epoxides, EEQs and EDPs, appear to be preferred over EETs as substrates for sEH. sEH converts 17,18-EEQ isomers to 17,18-dihydroxy-eicosatetraenoic acid isomers (17,18-diHETEs), 14,15-EEQ isomers to 14,15-diHETE isomers, 11,12-EEQ isomers to 11,12-diHETE isomers, 8,9-EEQ isomers to 8,9-diHETE isomers, and 5,6-EEQ isomers to 5,6-diHETE isomers. The product diHETEs, like their epoxy precursors, are enantiomer mixtures; for instance, sEH converts 17,18-EEQ to a mixture of 17(R),18(R)-diHETE and 17(S),18(S)-diHETE. Since the diHETE products are as a rule generally far less active than their epoxide precursors, the sEH pathway of EEQ metabolism is regarded as a critical EEQ-inactivating pathway. Membrane-bound microsomal epoxide hydrolase (mEH or epoxide hydrolase 1 [EC 3.3.2.9]) can metabolize EEQs to their dihydroxy products but is regarded as not contributing significantly to EEQ inactivation in vivo except possibly in rare tissues where the sEH level is exceptionally low while the mEH level is high. In addition to the sEH pathway, EEQs may be acylated into phospholipids in an acylation-like reaction. This pathway may serve to limit the action of EEQs or store them for future release. EEQs are also inactivated by being further metabolized through three other pathways: beta oxidation, omega oxidation, and elongation by enzymes involved in fatty acid synthesis. Clinical significance EEQs, similar to EDPs, have not been studied nearly as well as the epoxyeicosatrienoic acids (EETs). 
In comparison to the many activities attributed to the EETs in animal model studies, a limited set of studies indicate that EEQs (and EDPs) mimic EETs in their abilities to dilate arterioles, reduce hypertension, inhibit inflammation (the anti-inflammatory actions of EEQ are less potent than those of the EETs) and thereby reduce occlusion of arteries to protect the heart and prevent strokes (see sections on a) Regulation of blood pressure, b) Heart disease, c) Strokes and seizures, and d) Inflammation); they also mimic EETs in possessing analgesic properties in relieving certain types of pain. Often, the EEQs (and EDPs) exhibit greater potency and/or effectiveness than EETs in these actions. In human studies potentially relevant to one or more of these activities, consumption of a long-chain omega-3 fatty acid (i.e. EPA- and DHA-rich) diet produced significant reductions in systolic blood pressure and increased peripheral arteriole blood flow and reactivity in patients at high to intermediate risk for cardiovascular events; an EPA/DHA-rich diet also reduced the risk, while high serum levels of DHA and EPA were associated with a low risk, of neovascular age-related macular degeneration. Since such diets lead to large increases in the serum and urine levels of EDPs, EEQs, and the dihydroxy metabolites of these epoxides but relatively little or no increases in EETs or lipoxygenase/cyclooxygenase-derived metabolites of arachidonic acid, DHA, and/or EPA, it is suggested that the diet-induced increases in EDPs and/or EEQs are responsible for these beneficial effects. In direct contrast to the EETs, which have stimulating effects in the following activities, EEQs (and EDPs) inhibit new blood vessel formation (i.e. angiogenesis), human tumor cell growth, and human tumor metastasis in animal models implanted with certain types of human cancer cells. The possible beneficial effects of omega-3 fatty acid-rich diets in pathological states involving inflammation, hypertension, blood clotting, heart attacks and other cardiac diseases, strokes, brain seizures, pain perception, acute kidney injury, and cancer are suggested to result, at least in part, from the conversion of dietary EPA and DHA to EEQs and EDPs, respectively, and the cited subsequent actions of these metabolites. References Metabolic intermediates Docosanoids Fatty acids Epoxides Cell biology Immunology Inflammations Blood pressure Human physiology Animal physiology
Epoxyeicosatetraenoic acid
[ "Chemistry", "Biology" ]
2,711
[ "Animals", "Animal physiology", "Cell biology", "Immunology", "Metabolic intermediates", "Biomolecules", "Metabolism" ]
49,854,673
https://en.wikipedia.org/wiki/Euplotid%20nuclear%20code
The euplotid nuclear code (translation table 10) is the genetic code used by Euplotidae. The euplotid code is a so-called "symmetrical code", which results from the symmetrical distribution of the codons. This symmetry allows for arithmetic exploration of the codon distribution. In 2013, shCherbak and Makukov reported that "the patterns are shown to match the criteria of an intelligent signal." The code    AAs = FFLLSSSSYY**CCCWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG Starts = -----------------------------------M----------------------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) Differences from the standard code Systematic range Ciliata: Euplotidae See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
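As an illustration only (not drawn from the cited references), the table above can be turned into a lookup dictionary in a few lines of Python; the sketch below assumes the TCAG codon ordering used in the table, and the input sequence is hypothetical.

```python
from itertools import product

# Amino-acid line copied from the table above (translation table 10, TCAG codon order).
AAS = "FFLLSSSSYY**CCCWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

# Build the codon -> amino acid mapping; '*' marks a stop codon.
CODON_TABLE = {"".join(c): aa for c, aa in zip(product("TCAG", repeat=3), AAS)}

def translate(seq: str) -> str:
    """Translate a coding sequence using the euplotid nuclear code."""
    seq = seq.upper().replace("U", "T")
    protein = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        aa = CODON_TABLE[seq[i:i + 3]]
        if aa == "*":            # stop codon (TAA or TAG in this table)
            break
        protein.append(aa)
    return "".join(protein)

# TGA encodes cysteine (C) in the euplotid code, unlike the standard code where it is a stop.
print(CODON_TABLE["TGA"])          # -> C
print(translate("ATGTGATTTTAA"))   # -> MCF
```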
Euplotid nuclear code
[ "Chemistry", "Biology" ]
576
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
49,856,639
https://en.wikipedia.org/wiki/Alternative%20yeast%20nuclear%20code
The alternative yeast nuclear code (translation table 12) is a genetic code found in certain yeasts. However, other yeasts, including Saccharomyces cerevisiae, Candida azyma, Candida diversa, Candida magnoliae, Candida rugopelliculosa, Yarrowia lipolytica, and Zygoascus hellenicus, definitely use the standard (nuclear) code. The code    AAs = FFLLSSSSYY**CC*WLLLSPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG Starts = -------------------M---------------M----------------------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V). Differences from the standard code Alternative initiation codons CTG may be used in Candida albicans, consistent with the Starts line of the table above. Systematic range Endomycetales (yeasts): Candida albicans, Candida cylindracea, Candida melibiosica, Candida parapsilosis, and Candida rugosa. See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
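As a brief illustration (not taken from the cited sources), the single reassignment in this table can be located programmatically by comparing its amino-acid line against that of the standard code; the standard-code string below is reproduced from the common translation-table layout and should be double-checked before reuse.

```python
from itertools import product

STANDARD  = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"  # table 1
ALT_YEAST = "FFLLSSSSYY**CC*WLLLSPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"  # table 12

codons = ["".join(c) for c in product("TCAG", repeat=3)]

# Report every codon whose assignment differs between the two tables.
for codon, std, alt in zip(codons, STANDARD, ALT_YEAST):
    if std != alt:
        print(f"{codon}: standard={std}  alternative yeast={alt}")
# -> CTG: standard=L  alternative yeast=S
```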
Alternative yeast nuclear code
[ "Chemistry", "Biology" ]
621
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
49,857,031
https://en.wikipedia.org/wiki/Candidate%20division%20SR1%20and%20gracilibacteria%20code
The candidate division SR1 and gracilibacteria code (translation table 25) is used in two groups of (so far) uncultivated bacteria found in marine and fresh-water environments and in the intestines and oral cavities of mammals among others. The difference to the standard and the bacterial code is that UGA represents an additional glycine codon and does not code for termination. A survey of many genomes with the codon assignment software Codetta, analyzed through the GTDB taxonomy system (release 220) shows that this genetic code is limited to the Patescibacteria order BD1-5, not what are now termed Gracilibacteria, and that the SR1 genome assembly GCA_000350285.1 for which the table 25 code was originally defined is actually using the Absconditibacterales genetic code and has the associated three special recoding tRNAs. Thus this code may now be better named the "BD1-5 code". The code    AAs = FFLLSSSSYY**CCGWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG Starts = ---M-------------------------------M---------------M------------  Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG  Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG  Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V). Difference from the standard code Initiation codons AUG, GUG, UUG Systematic range Candidate Division SR1 Gracilibacteria See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
Candidate division SR1 and gracilibacteria code
[ "Chemistry", "Biology" ]
709
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
49,861,393
https://en.wikipedia.org/wiki/Sequence%20graph
A sequence graph, also called an alignment graph, breakpoint graph, or adjacency graph, is a bidirected graph used in comparative genomics. The structure represents one or more genomes as a set of vertices and edges, where vertices represent DNA segments and edges represent adjacencies between segments in a genome. Traversing a connected component of segments and adjacency edges (called a thread) yields a sequence, which typically represents a genome or a section of a genome. The segments can be thought of as synteny blocks, with the edges dictating how to arrange these blocks in a particular genome, and the labelling of the adjacency edges representing bases that are not contained in synteny blocks. Construction Before constructing a sequence graph, there must be at least two genomes represented as directed graphs with edges as threads (adjacency edges) and vertices as DNA segments. The genomes should be labeled P and Q, while the sequence graph is labeled as BreakpointGraph(P, Q). The directional vertices of Q and their edges are arranged in the order of P. Once completed, the edges of Q are reconnected to their original vertices. After all edges have been matched the vertex directions are removed and instead each vertex is labeled as vh (vertex head) and vt (vertex tail). Similarity between genomes is represented by the number of cycles (independent systems) within the sequence graph. The number of cycles is equal to cycles(P, Q). The maximum number of cycles possible is equal to the number of vertices in the sequence graph. Example Upon receiving genomes P (+a +b -c) and Q (+a +b -c), Q should be realigned to follow the direction edges (red) of P. The vertices should be renamed from a, b, c to ah at, bh bt, ch ct and the edges of P and Q should be connected to their original vertices (P edges = black, Q edges = green). Remove the directional edges (red). The number of cycles in G(P, Q) is 1 while the maximum possible is 3. Applications Reconstruction of ancestral genomes Alekseyev and Pevzner use sequence graphs to create their own algorithm to study the genome rearrangement history of several mammals, as well as a way to overcome problems with current ancestral reconstruction of genomes. Multiple sequence alignment Sequence graphs can be used to represent multiple sequence alignments with the addition of a new kind of edge representing homology between segments. For a set of genomes, one can create an acyclic breakpoint graph with a thread for each genome. For two segments $a$ and $b$, whose endpoints are $a_1$, $a_2$, $b_1$, and $b_2$, homology edges can be created either from $a_1$ to $b_1$ and from $a_2$ to $b_2$, or from $a_1$ to $b_2$ and from $a_2$ to $b_1$ - representing the two possible orientations of the homology. The advantage of representing a multiple sequence alignment this way is that it is possible to include inversions and other structural rearrangements that would not be allowable in a matrix representation. Representing variation If there are multiple possible paths when traversing a thread in a sequence graph, multiple sequences can be represented by the same thread. This means it is possible to create a sequence graph that represents a population of individuals with slightly different genomes - with each genome corresponding to one path through the graph. These graphs have been proposed as a replacement for the reference human genome. References Bioinformatics Genomics Evolutionary biology Application-specific graphs
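As a minimal sketch of the construction and cycle counting described above (not taken from the cited work), the snippet below builds the breakpoint graph of two signed genomes and counts its alternating cycles; it assumes each genome is a single circular chromosome over the same block set, and the block names are hypothetical.

```python
from itertools import product

def breakpoint_cycles(P, Q):
    """Count cycles in the breakpoint graph of two signed circular genomes.

    Each genome is a list of signed blocks, e.g. ["+a", "+b", "-c"].  Every
    block x contributes two vertices, its tail "xt" and head "xh"; +x is
    traversed tail->head and -x head->tail.  Adjacency edges join the
    outgoing end of one block to the incoming end of the next block.
    """
    def ends(block):
        sign, name = block[0], block[1:]
        return (name + "t", name + "h") if sign == "+" else (name + "h", name + "t")

    def adjacency_edges(genome):
        edges = []
        for i in range(len(genome)):
            _, out_end = ends(genome[i])
            in_end, _ = ends(genome[(i + 1) % len(genome)])   # circular genome
            edges.append((out_end, in_end))
        return edges

    # Union of P-edges and Q-edges; each vertex carries one edge of each colour,
    # so the graph decomposes into alternating cycles.
    neighbours = {}
    for colour, genome in (("P", P), ("Q", Q)):
        for u, v in adjacency_edges(genome):
            neighbours.setdefault(u, {})[colour] = v
            neighbours.setdefault(v, {})[colour] = u

    seen, cycles = set(), 0
    for start in neighbours:
        if start in seen:
            continue
        cycles += 1
        v, colour = start, "P"
        while v not in seen:                 # walk the alternating cycle
            seen.add(v)
            v = neighbours[v][colour]
            colour = "Q" if colour == "P" else "P"
    return cycles

# One inversion of block b between the two genomes -> 2 cycles.
print(breakpoint_cycles(["+a", "+b", "-c"], ["+a", "-b", "-c"]))
```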
Sequence graph
[ "Engineering", "Biology" ]
724
[ "Bioinformatics", "Biological engineering", "Evolutionary biology" ]
49,861,510
https://en.wikipedia.org/wiki/Paul%20Linden
Paul Frederick Linden (born 29 January 1947) is a mathematician specialising in fluid dynamics. He was the third G. I. Taylor Professor of Fluid Mechanics at the University of Cambridge, inaugural Blasker Distinguished Professor Emeritus of Environmental Science and Engineering at UC San Diego, and a fellow of Downing College. Education Linden earned his PhD from the University of Cambridge in 1972, under the supervision of Stewart Turner. His thesis was entitled The Effect of Turbulence and Shear on Salt Fingers. Awards and honours He was elected a Fellow of the American Physical Society in 2003. Linden was elected a Fellow of the Royal Society (FRS) in 2007. His certificate of election reads: References External links 1947 births Living people Fluid dynamicists Fellows of Downing College, Cambridge Fellows of the Royal Society Fellows of the American Physical Society 20th-century British mathematicians 21st-century British mathematicians University of California, San Diego faculty G. I. Taylor Professors of Fluid Mechanics
Paul Linden
[ "Chemistry" ]
186
[ "Fluid dynamicists", "Fluid dynamics" ]
49,863,574
https://en.wikipedia.org/wiki/Freddy%20Cachazo
Freddy Alexander Cachazo is a Venezuelan-born theoretical physicist who holds the Gluskin Sheff Freeman Dyson Chair in Theoretical Physics at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada. He is known for his contributions to quantum field theory through the study of scattering amplitudes, in particular in quantum chromodynamics, N = 4 supersymmetric Yang–Mills theory and quantum gravity. His contributions include BCFW recursion relations, the CSW vertex expansion and the amplituhedron. In 2014, Cachazo was awarded the New Horizons Prize for uncovering numerous structures underlying scattering amplitudes in gauge theories and gravity. Academic career After graduating from Simón Bolívar University in 1996, Cachazo attended a year-long Postgraduate Diploma Programme at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy. He was admitted to Harvard University, where he completed his Ph.D. under the supervision of Cumrun Vafa in 2002. Cachazo was a post-doctoral member of the Institute for Advanced Study (IAS) in Princeton, New Jersey in 2002-05 and 2009-10. In 2005, he became a faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, as well as adjunct faculty at the nearby University of Waterloo. He currently holds the Gluskin Sheff Freeman Dyson Chair in Theoretical Physics. Cachazo's research concerns quantum field theory, the underlying theory describing fundamental interactions of particles and space-time itself. His research program aims to understand its deep structure through the study of scattering amplitudes. Such understanding allows for both efficient computation of the probabilities of physical processes occurring and insights into the unknown structures of gauge theories and gravity. Together with Ruth Britto, Bo Feng and Edward Witten, he introduced the recursion relations for the computation of scattering amplitudes, which opened a new window for computations required at particle accelerators, such as the Large Hadron Collider. With Nima Arkani-Hamed and collaborators, he studied N = 4 supersymmetric Yang–Mills theory and showed how to compute amplitudes at any order in perturbation theory. He co-discovered a new formalism unifying gauge theory and gravity in any space-time dimension, known as the Cachazo-He-Yuan formulation. Awards and honors In 2009, he was awarded the European Physical Society's Gribov Medal for outstanding work by a young physicist. Two years later he won the Rutherford Medal, an equivalent prize awarded by the Royal Society of Canada. In 2012, the Canadian Association of Physicists awarded Cachazo the Herzberg Medal. Finally, he won the 2014 New Horizons Prize, which is regarded by many as the most prestigious award for young theoretical physicists. Selected publications References Living people Harvard University alumni 21st-century Canadian physicists Theoretical physicists Year of birth missing (living people)
Freddy Cachazo
[ "Physics" ]
604
[ "Theoretical physics", "Theoretical physicists" ]
46,846,453
https://en.wikipedia.org/wiki/Non-linear%20preferential%20attachment
In network science, preferential attachment means that nodes of a network tend to connect to those nodes which have more links. If the network is growing and new nodes connect to existing ones with probability linear in the degree of the existing nodes, then preferential attachment leads to a scale-free network. If this probability is sub-linear then the network’s degree distribution is stretched exponential and hubs are much smaller than in a scale-free network. If this probability is super-linear then almost all nodes are connected to a few hubs. According to Kunegis, Blattner, and Moser several online networks follow a non-linear preferential attachment model. Communication networks and online contact networks are sub-linear while interaction networks are super-linear. The co-author network among scientists also shows signs of sub-linear preferential attachment. Types of preferential attachment For simplicity it can be assumed that the probability with which a new node connects to an existing one follows a power function of the existing node’s degree k, $\pi(k) \propto k^{\alpha}$, where α > 0. This is a good approximation for a lot of real networks such as the Internet, the citation network or the actor network. If α = 1 then the preferential attachment is linear. If α < 1 then it is sub-linear while if α > 1 then it is super-linear. In measuring preferential attachment from real networks, the above log-linear functional form $k^{\alpha}$ can be relaxed to a free-form function, i.e. $\pi(k)$ can be measured for each k without any assumptions on the functional form of $\pi(k)$. This is believed to be more flexible, and allows the discovery of non-log-linearity of preferential attachment in real networks. Sub-linear preferential attachment In this case the new nodes still tend to connect to the nodes with higher degree but this effect is smaller than in the case of linear preferential attachment. There are fewer hubs and their size is also smaller than in a scale-free network. The degree of the largest hub depends logarithmically on the number of nodes, $k_{\max} \sim (\ln N)^{1/(1-\alpha)}$, so it is smaller than the polynomial dependence obtained for linear preferential attachment. Super-linear preferential attachment If α > 1 then a few nodes tend to connect to every other node in the network. For α > 2 this process is even more extreme: the number of connections between the other nodes remains finite in the limit as n goes to infinity. In this regime the degree of the largest hub is proportional to the system size, $k_{\max} \sim N$. References Networks Network theory
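As a rough illustration (not taken from the cited studies), the growth process described above can be simulated directly; the sketch below uses arbitrary network sizes, seeds and one link per new node, and larger exponents should produce markedly larger hubs.

```python
import random

def grow_network(n, alpha, m=1, seed=0):
    """Grow a network by non-linear preferential attachment.

    Each new node makes m links; an existing node of degree k is chosen
    as a target with probability proportional to k**alpha.
    """
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}            # start from a single edge between nodes 0 and 1
    edges = [(0, 1)]
    for new in range(2, n):
        weights = {node: k ** alpha for node, k in degree.items()}
        total = sum(weights.values())
        targets = set()
        while len(targets) < min(m, len(degree)):
            r, acc = rng.uniform(0, total), 0.0
            for node, w in weights.items():   # roulette-wheel selection
                acc += w
                if acc >= r:
                    targets.add(node)
                    break
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = len(targets)
    return degree

for alpha in (0.5, 1.0, 1.5):        # sub-linear, linear, super-linear
    deg = grow_network(5000, alpha)
    print(alpha, "max degree:", max(deg.values()))
```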
Non-linear preferential attachment
[ "Mathematics" ]
509
[ "Network theory", "Mathematical relations", "Graph theory" ]
46,849,118
https://en.wikipedia.org/wiki/Magnesium%20formate
Magnesium formate is the magnesium salt of formic acid. It is an inorganic compound consisting of magnesium cations and formate anions in a 1:2 ratio. It can be prepared by reacting magnesium oxide with formic acid. The dihydrate is formed when crystallizing from solution. The dihydrate dehydrates at 105 °C to form the anhydrous salt, which then decomposes at 500 °C to produce magnesium oxide. Magnesium formate can be used for organic syntheses. References Formates Magnesium compounds
Magnesium formate
[ "Chemistry" ]
108
[ "Inorganic compounds", "Inorganic compound stubs" ]
46,849,458
https://en.wikipedia.org/wiki/Sediment%20quality%20triad
In aquatic toxicology, the sediment quality triad (SQT) approach has been used as an assessment tool to evaluate the extent of sediment degradation resulting from contaminants released into aquatic environments by human activity (Chapman, 1990). This evaluation focuses on three main components: 1.) sediment chemistry, 2.) sediment toxicity tests using aquatic organisms, and 3.) the field effects on the benthic organisms (Chapman, 1990). Often used in risk assessment, the combination of three lines of evidence can lead to a comprehensive understanding of the possible effects on the aquatic community (Chapman, 1997). Although the SQT approach does not provide a cause-and-effect relationship linking concentrations of individual chemicals to adverse biological effects, it does provide an assessment of sediment quality commonly used to explain sediment characteristics quantitatively. The information provided by each portion of the SQT is unique and complementary, and the combination of these portions is necessary because no single characteristic provides comprehensive information regarding a specific site (Chapman, 1997). Components Sediment chemistry Sediment chemistry provides information on contamination; however, it does not provide information on biological effects (Chapman, 1990). Sediment chemistry is used as a screening tool to determine the contaminants that are most likely to be destructive to organisms present in the benthic community at a specific site. During analysis, sediment chemistry data does not depend strictly on comparisons to sediment quality guidelines when utilizing the triad approach. Rather, sediment chemistry data, once collected for the specific site, is compared to the most relevant guide values, based on site characteristics, to assess which chemicals are of the greatest concern. This technique is used because no one set of data is adequate for all situations. This allows identification of the chemicals of concern, i.e. those that most frequently exceed effects-based guidelines. Once the chemical composition of the sediment is determined and the most concerning contaminants have been identified, toxicity tests are conducted to link environmental concentrations to potential adverse effects. Sediment toxicity Sediment toxicity is evaluated based on bioassay analysis. Standard bioassay toxicity tests are utilized and are not organism restricted (Chapman, 1997). Differences in mechanisms of exposure and organism physiology must be taken into account when selecting test organisms, and the use of each organism must be adequately justified. These bioassay tests evaluate effects based on different toxicological endpoints. The toxicity tests are conducted with respect to the chemicals of concern at environmentally relevant concentrations identified by the sediment chemistry portion of the triad approach. Chapman (1990) lists typically used endpoints, which include lethal endpoints such as mortality, and sublethal endpoints such as growth, behavior, reproduction, cytotoxicity and optionally bioaccumulation. Often pilot studies are utilized to assist in the selection of the appropriate test organism and endpoints. Multiple endpoints are recommended and each of the selected endpoints must adequately complement each of the others (Chapman, 1997). Effects are evaluated using statistical methods that allow for the distinction between responses that are significantly different from negative controls. 
If sufficient data is generated, minimum significant differences (MSDs) are calculated using power analyses and applied to toxicity tests to distinguish between statistical significance and ecological relevance. The function of the toxicity portion of the triad approach is to allow estimation of the effects in the field. While laboratory-based experiments simplify a complex and dynamic environment, toxicity results allow the potential for field extrapolation. This creates a link between exposure and effect and allows the determination of an exposure-response relationship. When combined with the other two components of the Sediment Quality Triad it allows for a holistic understanding of cause and effect. Field effects on benthic organisms The analysis of field effects on benthic organisms functions to assess the potential for community-based effects resulting from the identified contaminants. This is done because benthic organisms are sessile and location specific, allowing them to be used as accurate markers of contaminant effect (Chapman, 1990). This is done through conducting field-based tests, which analyze changes in benthic community structures focusing on changes in number of species, abundance, and percentage of major taxonomic groups (Chapman, 1997). Changes in benthic communities are typically quantified using a principal component analysis and classification (Chapman, 1997). There is no one specifically defined method for conducting these field assessments; however, the different multivariate analyses typically produce results identifying relationships between variables when a robust correlation exists. Knowledge of the site-specific ecosystem and the ecological roles of dominant species within that ecosystem are critical to producing biological evidence of alteration in the benthic community resulting from contaminant exposure. When possible, it is recommended to observe changes in community structure that directly relate to the test species used during the sediment toxicity portion of the triad approach in order to produce the most reliable evidence. Bioaccumulation Bioaccumulation should be considered during the utilization of the triad approach depending on the study goals. In preparation for measuring bioaccumulation, it must be specified if the test will serve to assess secondary poisoning or biomagnification (Chapman, 1997). Bioaccumulation analysis should be conducted appropriately based on the contaminants of concern (for example, most metals do not biomagnify). This can be done with field-collected, caged organisms, or laboratory-exposed organisms (Chapman, 1997). While the bioaccumulation portion is recommended, it is not required. However, it serves an important role with the purpose of quantifying effects due to trophic transfer of contaminants through consumption of contaminated prey. Pollution-induced degradation Site-specific pollution-induced degradation is measured through the combination of the three portions of the sediment quality triad. The sediment chemistry, sediment toxicity, and the field effects on benthic organisms are compared quantitatively. Data is most useful when it has been normalized to reference site values by converting them to ratio-to-reference values (Chapman et al. 1986; Chapman 1989). The reference site is chosen to be the site with the least contamination with respect to the other sites sampled. 
Once normalized, data between portions of the triad can be compared even when large differences in measurements or units exist (Chapman, 1990). From the combination of the results from each portion of the triad a multivariate figure is developed and used to determine the level of degradation. Methods and interpretation No single method can assess the impact of contamination-induced degradation of sediment across aquatic communities. Methods of each component of the triad should be selected for efficacy and relevance in lab and field tests. Application of the SQT is typically location-specific and can be used to compare differences in sediment quality temporally or across regions (Chapman, 1997). Multiple lines of evidence The SQT incorporates three lines of evidence (LOE) to provide direct assessment of sediment quality. The chemistry, toxicity, and benthic components of the triad each provide an LOE, which are then integrated into a weight of evidence. Criteria In order to qualify for SQT assessment, chemistry, toxicity, and in situ measurements must be collected synoptically using standardized methods of sediment quality assessment. A control sample is necessary to evaluate the impact of contaminated sites. An appropriate reference is a whole sediment sample (particles and associated pore water) collected near the area of concern that is representative of background conditions in the absence of contaminants. Evidence of contaminant exposure and biological effect is required in order to assign a site as chemically impacted. Framework The chemistry component incorporates both bioavailability and potential effects on the benthic community. The potential of sediment toxicity for a given site is based on a linear regression model (LRM). A chemical score index (CSI) of the contaminant describes the magnitude of exposure relative to benthic community disturbance. An optimal set of index-specific thresholds is selected for the chemistry component by statistically comparing several candidates to evaluate which set exhibited greatest overall agreement (Bay and Weisberg, 2012). The magnitude of sediment toxicity is determined by multiple toxicity tests conducted in the lab to complement the chemistry component. The toxicity LOE is determined by the mean toxicity category score from all relevant tests. Development of the LOE for the benthic component is based on community metrics and abundance. Several indices such as the benthic response index (BRI), benthic index of biotic integrity (IBI), and relative biotic index (RBI) are utilized to assess the biological response of the benthic community. The median score of all individual indices establishes the benthic LOE. Each component of the triad is assigned a response category: minimal, low, moderate, or high disturbance relative to background conditions. Individual LOEs are ranked into categories by comparing test results of each component to established thresholds (Bay and Weisberg, 2012). Integration of the benthos and toxicity LOEs classifies the severity and effects of contamination. The LOEs of chemistry and toxicity are combined to assign the potential of chemically-mediated effects. A site is assigned an impact category by integrating the severity of effect and the potential of chemically mediated effects. The conditions of individual sites of concern are assigned an impact category between 1 and 5 (with 1 being unimpacted and 5 being clearly impacted by contamination). 
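As a loose illustration only of the ratio-to-reference normalization and line-of-evidence scoring described above (the station values, thresholds, and category cut-offs below are invented, and real SQT assessments use calibrated indices such as the CSI, BRI, IBI, or RBI), the scoring logic might be sketched as follows:

```python
# Hypothetical measurements for one test station and its reference station.
reference = {"chemistry": 12.0, "toxicity": 95.0, "benthos": 34.0}
station   = {"chemistry": 48.0, "toxicity": 60.0, "benthos": 14.0}

# Ratio-to-reference (RTR) values: each component scaled to its reference site.
rtr = {k: station[k] / reference[k] for k in reference}

def category(ratio, higher_is_worse=True):
    """Assign an illustrative disturbance category (1 = minimal ... 4 = high)."""
    x = ratio if higher_is_worse else 1.0 / ratio
    if x < 1.2:
        return 1
    if x < 2.0:
        return 2
    if x < 4.0:
        return 3
    return 4

loe = {
    "chemistry": category(rtr["chemistry"]),                        # contamination above reference
    "toxicity":  category(rtr["toxicity"], higher_is_worse=False),  # survival below reference
    "benthos":   category(rtr["benthos"],  higher_is_worse=False),  # abundance below reference
}
print(rtr, loe)
```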
The SQT can also classify impact as inconclusive in cases where the LOEs of different components disagree or additional information is required (Bay and Weisberg, 2012). Triaxial graphs SQT measurements are scaled proportionately by relative impact and visually represented on triaxial graphs. Evaluation of sediment integrity and interrelationships between components can be determined by the size and morphology of the triangle. The magnitude of the triangle is indicative of the relative impact of contamination. Equilateral triangles imply agreement among components. (USEPA, 1994) Evaluation Advantages of triad approach The SQT approach has been praised for a variety of reasons as a technique for characterizing sediment conditions. Relative to the depth of information it provides and its inclusive nature, it is very cost-effective. It can be applied to all sediment classifications, and even adapted to soil and water column assessments (Chapman and McDonald 2005). A decision matrix can be employed so that all three measures can be analyzed simultaneously and a deduction of possible ecological impacts made (USEPA 1994). Other advantages of the SQT include information on the potential bioaccumulation and biomagnification effects of contaminants, and its flexibility in application, resulting from its design as a framework rather than a formula or standard method. By using multiple lines of evidence, there are a host of ways to manipulate and interpret SQT data (Bay and Weisberg 2012). It has been accepted on an international scale as the most comprehensive approach to assessing sediment (Chapman and McDonald 2005). The SQT approach to sediment testing has been used in North America, Europe, Australia, South America, and the Antarctic. Application to sediment management standards Point and nonpoint discharges regulated under the EPA's National Pollutant Discharge Elimination System (NPDES) permitting guidelines may adversely affect sediment quality. As per state regulatory criteria, information on point and nonpoint source contamination and its effects on sediment quality may be required for assessment of compliance. For example, Washington State Sediment Management Standards, Part IV, mandates sediment control standards which allow for establishment of discharge sediment monitoring requirements, and criteria for creation and maintenance of sediment impact zones (WADOE 2013). In this instance, the SQT could be particularly useful, encompassing multiple relevant analyses simultaneously. Limitations and criticisms Although there are numerous benefits in using the SQT approach, drawbacks in its use have been identified. The major limitations include a lack of statistical criteria development within the framework, large database requirements, difficulties in application to chemical mixtures, and laboratory-intensive data interpretation (Chapman 1989). The SQT does not evidently consider the bioavailability of complexed or sediment-associated contaminants (FDEP 1994). Lastly, it is difficult to translate laboratory toxicity results to biological effects seen in the field (Kamlet 1989). References Sedimentology Soil contamination
Sediment quality triad
[ "Chemistry", "Environmental_science" ]
2,451
[ "Environmental chemistry", "Soil contamination" ]
48,534,266
https://en.wikipedia.org/wiki/Optothermal%20stability
Optothermal stability describes the rate at which an optical element distorts due to a changing thermal environment. A changing thermal environment can cause an optic to bend due to either 1) changing thermal gradients on the optic and a non-zero coefficient of thermal expansion, or 2) coefficient of thermal expansion gradients in an optic and an average temperature change. Therefore, optothermal stability is an issue for optics that are present in a changing thermal environment. For example, a space telescope will experience variable heat loads from changes in spacecraft attitude, solar flux, planetary albedo, and planetary infrared emissions. Optothermal stability can be important when measuring the surface figure of optics, because thermal changes are typically low frequency (diurnal or HVAC cycling), which makes it difficult to use measurement averaging (commonly used for other error types) to remove errors. Also, optothermal stability is important for optical systems which require a high level of stability, such as those that use a coronagraph. Material characterization Material characterization numbers have been mathematically derived to describe the rate at which a material deforms due to an external thermal input. It is important to note the distinction between wavefront stability (dynamic) and wavefront error (static). A higher Massive Optothermal Stability (MOS) or Optothermal Stability (OS) number corresponds to greater stability, and by definition MOS increases with density. Because added weight is undesirable for non-thermal reasons, especially in spaceflight applications, separate MOS and OS figures of merit are defined in terms of ρ, cp, and α, the density, specific heat, and coefficient of thermal expansion respectively. See also Athermalization References Optics Temperature
Optothermal stability
[ "Physics", "Chemistry" ]
353
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Optics", "SI base quantities", "Intensive quantities", "Thermodynamics", " molecular", "Atomic", "Wikipedia categories named after physical quantities", "...
48,534,652
https://en.wikipedia.org/wiki/Integral%20length%20scale
The integral length scale measures the correlation distance of a process in terms of space or time. In essence, it looks at the overall memory of the process and how it is influenced by previous positions and parameters. An intuitive example is a very low Reynolds number flow (e.g., a Stokes flow), where the flow is fully reversible and thus fully correlated with previous particle positions. This concept may be extended to turbulence, where it may be thought of as the time during which a particle is influenced by its previous position. The mathematical expressions for the integral scales are $T = \int_0^\infty \rho(\tau)\,d\tau$ and $L = \int_0^\infty \rho(r)\,dr$, where $T$ is the integral time scale, $L$ is the integral length scale, and $\rho(\tau)$ and $\rho(r)$ are the normalized autocorrelation functions with respect to time and space respectively. In isotropic homogeneous turbulence, the integral length scale is defined as the weighted average of the inverse wavenumber, i.e., $L = \frac{\int_0^\infty k^{-1}E(k)\,dk}{\int_0^\infty E(k)\,dk}$, where $E(k)$ is the energy spectrum. References Physical quantities
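As a practical sketch (not from the cited references), the integral time scale of a measured signal can be estimated by integrating its normalized autocorrelation; the example below uses a synthetic signal with a known correlation time, and the first-zero-crossing cutoff is a common practical convention rather than part of the definition above.

```python
import numpy as np

def integral_time_scale(u, dt):
    """Estimate the integral time scale of a fluctuating signal u(t).

    The normalized autocorrelation rho(tau) is integrated up to its first
    zero crossing (a common practical convention).
    """
    u = np.asarray(u, dtype=float) - np.mean(u)
    n = len(u)
    # Biased autocorrelation estimate, normalized so that rho(0) = 1.
    rho = np.correlate(u, u, mode="full")[n - 1:] / (np.var(u) * n)
    crossings = np.where(rho <= 0)[0]
    cutoff = crossings[0] if crossings.size else n
    return np.sum(rho[:cutoff]) * dt

# Synthetic example: an AR(1) / Ornstein-Uhlenbeck-like signal with correlation time T = 0.5 s.
rng = np.random.default_rng(1)
dt, T = 0.01, 0.5
u = np.zeros(100_000)
for i in range(1, len(u)):
    u[i] = u[i - 1] * (1 - dt / T) + rng.normal(scale=np.sqrt(dt))
print(integral_time_scale(u, dt))   # should be close to 0.5
```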
Integral length scale
[ "Physics", "Mathematics" ]
191
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
48,538,875
https://en.wikipedia.org/wiki/List%20of%20French%20astronomers
The following are list of French astronomers, astrophysicists and other notable French people who have made contributions to the field of astronomy. They may have won major prizes or awards, developed or invented widely used techniques or technologies within astronomy, or are directors of major observatories or heads of space-based telescope projects. The list The following is a list of notable French astronomers. A Abba Mari ben Eligdor Jacques d'Allonville Marie Henri Andoyer Voituret Anthelme Pierre Antonini François Arago Henri Arnaut de Zwolle Jean Audouze Adrien Auzout B Benjamin Baillaud Jules Baillaud Jean Sylvain Bailly Paul Baize Fernand Baldet Odette Bancilhon Daniel Barbier Joseph-Émile Barbier Aurélien Barrau Maria A. Barucci Aymar de la Baume Pluvinel Michel Benoist Bernard of Verdun Guillaume Bigourdan Immanuel Bonfils Jean-Marc Bonnet-Bidaud Alphonse Borrelly Jean Bosler Joseph Bossert François Bouchet Alexis Bouvard Louis Boyer P. Briault Ismaël Bullialdus Johann Karl Burckhardt C Michel Cassé César-François Cassini de Thury Dominique, comte de Cassini Jacques Cassini Roger Cayrel Catherine Cesarsky Joseph Bernard de Chabert Jean Chacornac Merieme Chadid Daniel Chalonge Jean-Baptiste Chappe d'Auteroche Auguste Charlois Sébastien Charnoz Jean Chazy Olivier Chesneau Henri Chrétien Jean-Pierre Christin Alexis Clairaut Jérôme Eugène Coggia Françoise Combes Janine Connes Eugène Cosserat Pablo Cottenot André Couder Fernand Courty D Joseph Lepaute Dagelet Michel Ferdinand d'Albert d'Ailly Marie-Charles Damoiseau André-Louis Danjon Antoine Darquier de Pellepoix Jean Baptiste Joseph Delambre Charles-Eugène Delaunay Joseph-Nicolas Delisle Gabriel Delmotte Audrey C. Delsanti Jules Alfred Pierrot Deseilligny Henri-Alexandre Deslandres Audouin Dollfus Jean Dufay Jeanne Dumée Noël Duret E Ernest Esclangon F Louis Fabry Pierre Fatou Hervé Faye Charles Fehrenbach Louis Feuillée Agnès Fienga Oronce Finé Camille Flammarion Gabrielle Renaudot Flammarion Honoré Flaugergues Jean Focas Georges Fournier G Jean Baptiste Aimable Gaillot Jean-Félix Adolphe Gambart Pierre Gassendi Casimir Marie Gaudibert Gersonides Michel Giacobini Louis Godin François Gonnessiat H Maurice Hamy Michel Hénon Paul Henry and Prosper Henry Pierre Hérigone Gustave-Adolphe Hirn J Pierre Janssen Odette Jasse René Jarry-Desloges Stéphane Javelle Edme-Sébastien Jeaurat Benjamin Jekhowsky Robert Jonckhèere K Samuel Kansi Dorothea Klumpke L Philippe de La Hire Antoine Émile Henry Labeyrie Nicolas-Louis de Lacaille Joseph-Louis Lagrange Joanny-Philippe Lagrula Jérôme Lalande Marie-Jeanne de Lalande Michel Lefrançois de Lalande André Lallemand Félix Chemla Lamèch Pierre-Simon Laplace Jacques Laskar Marguerite Laugier Paul-Auguste-Ernest Laugier Joseph Jean Pierre Laurent Jean Le Fèvre Guillaume Le Gentil Pierre Charles Le Monnier Urbain Le Verrier Nicole-Reine Lepaute Edmond Modeste Lescarbault Emmanuel Liais Jean-Baptiste Lislet Geoffroy Maurice Loewy Jean-Pierre Luminet Bernard Lyot M Louis Maillard Jean-Jacques d'Ortous de Mairan Giacomo F. 
Maraldi Giovanni Domenico Maraldi Claude-Louis Mathieu Pierre Louis Maupertuis Alain Maury Victor Mauvais Pierre Méchain Jean-Claude Merlin Charles Messier François Mignard Gaston Millochau Henri Mineur Antonio Mizauld Théophile Moreux Jean-Baptiste Morin (mathematician) Ernest Mouchez N Charles Nordmann P André Patry Jean-Claude Pecker Nicolas-Claude Fabri de Peiresc Julien Peridier Henri Joseph Anastase Perrotin Frédéric Petit Pierre Petit (engineer) Jean Picard Louise du Pierry Alexandre Guy Pingré Christian Pollas Jean-Louis Pons Philippe Gustave le Doulcet, Comte de Pontécoulant Jean-Loup Puget Pierre Puiseux Q Ferdinand Quénisset R Georges Rayet Jean Richer Édouard Roche Pierre Rousseau Augustin Royer Lucien Rudaux S Nicolas Sarrabat Félix Savary Évry Schatzman Alexandre Schaumasse Alfred Schmitt Jean-François Séguier Achille Pierre Dionis du Séjour Édouard Stephan Frédéric Sy Pope Sylvester II T Agop Terzan Louis Thollon Félix Tisserand Étienne Léopold Trouvelot V Jacques Vallée Joseph Gaultier de la Vallette Benjamin Valz Gérard de Vaucouleurs Philippe Véron Pierre-Antoine Véron Yvon Villarceau W Charles Wolf References Astronomical Society of the Pacific: Women in Astronomy See also List of women astronomers List of Russian astronomers and astrophysicists French astronomers French Astronomers
List of French astronomers
[ "Astronomy" ]
1,087
[ "Astronomy-related lists", "Lists of astronomers by nationality" ]
48,539,824
https://en.wikipedia.org/wiki/Channel%20sounding
Channel sounding is a technique that evaluates a radio environment for wireless communication, especially MIMO systems. Because of the effect of terrain and obstacles, wireless signals propagate in multiple paths (the multipath effect). To minimize or use the multipath effect, engineers use channel sounding to process the multidimensional spatial–temporal signal and estimate channel characteristics. This helps simulate and design wireless systems. Motivation & applications Mobile radio communication performance is significantly affected by the radio propagation environment. Blocking by buildings and natural obstacles creates multiple paths between the transmitter and the receiver, with different time delays, phases and attenuations. In a single-input, single-output (SISO) system, multiple propagation paths can create problems for signal optimization. However, with the development of multiple-input, multiple-output (MIMO) systems, multipath can be exploited to enhance channel capacity and improve QoS. In order to evaluate the effectiveness of these multiple-antenna systems, a measurement of the radio environment is needed. Channel sounding is such a technique: it can estimate the channel characteristics for the simulation and design of antenna arrays. Problem statement & basics In a multipath system, the wireless channel is frequency dependent, time dependent, and position dependent. Therefore, the following parameters describe the channel: Direction of departure (DOD) Direction of arrival (DOA) Time delay Doppler shift Complex polarimetric path weight matrix To characterize the propagation path between each transmitter element and each receiver element, engineers transmit a broadband multi-tone test signal. The transmitter's continuous periodic test sequence arrives at the receiver and is correlated with the original sequence. This impulse-like correlation function is called the channel impulse response (CIR). From the CIR and its transfer function, the channel environment can be estimated and system performance improved. Description of existing approaches MIMO vector channel sounder Based on multiple antennas at both transmitters and receivers, a MIMO vector channel sounder can effectively capture the propagation directions at both ends of the link and significantly improve the resolution of the multipath parameters. K–D model of wave propagation Engineers model wave propagation as a finite sum of discrete, locally planar waves instead of a ray tracing model. This reduces computation and lowers requirements for optics knowledge. The waves are considered planar between the transmitters and the receivers. Two other important assumptions are: Relative bandwidth is small enough so that the time delay can be simply transformed to a phase shift among the antennas. The array aperture is small enough that there is no observable magnitude variation. Based on such assumptions, the basic signal model is a finite sum of $K$ planar wave-fronts, $h(t,\tau,\varphi_R,\varphi_T)=\sum_{k=1}^{K}\gamma_k\,e^{j2\pi\nu_k t}\,\delta(\tau-\tau_k)\,\delta(\varphi_R-\varphi_{R,k})\,\delta(\varphi_T-\varphi_{T,k})$, where $\tau_k$ is the TDOA (time difference of arrival) of wave-front $k$, $\varphi_{R,k}$ is its DOA at the receiver, $\varphi_{T,k}$ is its DOD at the transmitter, $\nu_k$ is its Doppler shift, and $\gamma_k$ is its complex path weight.
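As a narrowband illustration of the plane-wave model just described (path parameters, array sizes and carrier frequency below are invented, Doppler and polarization are ignored, and simple uniform-linear-array steering vectors stand in for a real sounder's calibrated array responses), a MIMO channel matrix can be synthesized from a few discrete wave-fronts:

```python
import numpy as np

# Hypothetical discrete plane-wave parameters: complex weight, delay (s),
# DOA and DOD (radians) -- purely illustrative values.
paths = [(1.0 + 0.0j, 0.2e-6, 0.3, -0.1),
         (0.4 - 0.2j, 0.9e-6, 1.1,  0.7)]

f = 2.4e9                      # carrier frequency (Hz)
lam = 3e8 / f
d = lam / 2                    # half-wavelength element spacing
n_rx, n_tx = 4, 2              # uniform linear arrays at both link ends

# Narrowband MIMO channel matrix as a sum of planar wave-fronts:
# H = sum_k gamma_k * exp(-j*2*pi*f*tau_k) * a_rx(phi_rx,k) * a_tx(phi_tx,k)^T
H = np.zeros((n_rx, n_tx), complex)
for gamma, tau, phi_rx, phi_tx in paths:
    a_rx = np.exp(1j * 2 * np.pi * d / lam * np.arange(n_rx) * np.sin(phi_rx))
    a_tx = np.exp(1j * 2 * np.pi * d / lam * np.arange(n_tx) * np.sin(phi_tx))
    H += gamma * np.exp(-1j * 2 * np.pi * f * tau) * np.outer(a_rx, a_tx)
print(np.round(np.abs(H), 3))
```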
Excitation signal A multitone signal is chosen as the excitation signal, $x(t)=\sum_{i=1}^{N}\cos(2\pi f_i t+\varphi_i)$, with the tones $f_i$ centered on the center frequency $f_c$ and separated by the tone spacing $\Delta f = B/N$ ($B$ is the bandwidth and $N$ is the number of multitones); $\varphi_i$ is the phase of the $i$-th tone, typically chosen so that the excitation has a low crest factor. Data post-processing A DFT over K-1 waveforms measured in each channel is performed (K is the number of waveforms per channel; one waveform is lost due to array switching). The frequency-domain samples at the multitone frequencies are then extracted. An estimated channel transfer function is obtained from these samples and a reference signal, taking the noise power into account and applying a scaling factor c. RUSK channel sounder A RUSK channel sounder excites all frequencies simultaneously, so that the frequency response at all frequencies can be measured. The test signal is periodic in time with period $t_p$. The period must be longer than the duration of the channel's impulse response in order to capture all delayed multipath components at the receiver. For a RUSK sounder, a secondary time variable is introduced so that the channel impulse response $h(\tau,t)$ is a function of both the delay time $\tau$ and the observation time $t$; a delay-Doppler spectrum is obtained by Fourier transformation over $t$. See also Channel estimation References Radio resource management Radio frequency propagation
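As an illustrative sketch of the sounding principle (not the processing chain of any particular sounder), the snippet below drives a toy multipath channel with a flat-magnitude, random-phase multitone excitation and recovers the channel transfer function by frequency-domain division; all parameters are invented, and the simple FFT-division estimator stands in for the noise-weighted estimator described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from any particular sounder).
N = 256                      # number of multitones / FFT length
h = np.zeros(N, complex)     # toy channel impulse response with three taps
h[[0, 7, 19]] = [1.0, 0.5 * np.exp(1j * 0.8), 0.25 * np.exp(-1j * 1.9)]
H_true = np.fft.fft(h)

# Periodic multitone excitation: flat magnitude, random phases (low crest-factor
# phase schemes such as Newman phases could be used instead).
X = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
x = np.fft.ifft(X)

# Received signal = circular convolution with the channel plus a little noise.
y = np.fft.ifft(np.fft.fft(x) * H_true)
y += 0.001 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Frequency-domain estimate of the channel transfer function.
H_est = np.fft.fft(y) / X
print(np.max(np.abs(H_est - H_true)))   # small residual error due to noise
```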
Channel sounding
[ "Physics" ]
911
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
48,540,695
https://en.wikipedia.org/wiki/Multidimensional%20seismic%20data%20processing
Multidimensional seismic data processing forms a major component of seismic profiling, a technique used in geophysical exploration. The technique itself has various applications, including mapping ocean floors, determining the structure of sediments, mapping subsurface currents and hydrocarbon exploration. Since geophysical data obtained in such techniques is a function of both space and time, multidimensional signal processing techniques may be better suited for processing such data. Data acquisition There are a number of data acquisition techniques used to generate seismic profiles, all of which involve measuring acoustic waves by means of a source and receivers. These techniques may be further classified into various categories, depending on the configuration and type of sources and receivers used. For example, zero-offset vertical seismic profiling (ZVSP), walk-away VSP etc. The source (which is typically on the surface) produces a wave travelling downwards. The receivers are positioned in an appropriate configuration at known depths. For example, in case of vertical seismic profiling, the receivers are aligned vertically, spaced approximately 15 meters apart. The vertical travel time of the wave to each of the receivers is measured and each such measurement is referred to as a “check-shot” record. Multiple sources may be added or a single source may be moved along predetermined paths, generating seismic waves periodically in order to sample different points in the sub-surface. The result is a series of check-shot records, where each check-shot is typically a two or three-dimensional array representing a spatial dimension (the source-receiver offset) and a temporal dimension (the vertical travel time). Data processing The acquired data has to be rearranged and processed to generate a meaningful seismic profile: a two-dimensional picture of the cross section along a vertical plane passing through the source and receivers. This consists of a series of processes: filtering, deconvolution, stacking and migration. Multichannel filtering Multichannel filters may be applied to each individual record or to the final seismic profile. This may be done to separate different types of waves and to improve the signal-to-noise ratio. There are two well-known methods of designing velocity filters for seismic data processing applications. Two-dimensional Fourier transform design The two-dimensional Fourier transform is defined as $S(k_x,\omega)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x,t)\,e^{-j(k_x x+\omega t)}\,dx\,dt$, where $k_x$ is the spatial frequency (also known as wavenumber) and $\omega$ is the temporal frequency. The two-dimensional equivalent of the frequency domain is also referred to as the $(k_x,\omega)$ domain. There are various techniques to design two-dimensional filters based on the Fourier transform, such as the minimax design method and design by transformation. One disadvantage of Fourier transform design is its global nature; it may filter out some desired components as well. τ-p transform design The τ-p transform is a special case of the Radon transform, and is simpler to apply than the Fourier transform. It allows one to study different wave modes as a function of their slowness values $p$. Application of this transform involves summing (stacking) all traces in a record along a slope (slant), which results in a single trace; the slope is called the p value, slowness or the ray parameter. It transforms the input data from the space-time domain to the intercept time-slowness (τ-p) domain.
Each value on the trace for a given slowness $p$ is the sum of all the samples along the line $t = \tau + px$. The transform is defined by $S(\tau, p) = \int s(x, \tau + px)\, dx$. The τ-p transform converts seismic records into a domain where events with different slowness values are separated. Simply put, each point in the τ-p domain is the sum of all the points in the x-t plane lying across a straight line with a slope p and intercept τ (a numerical sketch of this summation is given below). That also means a point in the x-t domain transforms into a line in the τ-p domain, hyperbolae transform into ellipses and so on. Similar to the Fourier transform, a signal in the τ-p domain can also be transformed back into the x-t domain. Deconvolution During data acquisition, various effects have to be accounted for, such as near-surface structure around the source, noise, wavefront divergence and reverberations. It has to be ensured that a change in the seismic trace reflects a change in the geology and not one of the effects mentioned above. Deconvolution negates these effects to an extent and thus increases the resolution of the seismic data. Seismic data, or a seismogram, may be considered as a convolution of the source wavelet, the reflectivity and noise. Its deconvolution is usually implemented as a convolution with an inverse filter. Various well-known deconvolution techniques already exist for one dimension, such as predictive deconvolution, Kalman filtering and deterministic deconvolution. In multiple dimensions, however, the deconvolution process is iterative due to the difficulty of defining an inverse operator. The output data sample may be represented as $y(\mathbf{x}, t) = w(\mathbf{x}, t) * r(\mathbf{x}, t)$, where $w$ represents the source wavelet, $r$ is the reflectivity function, $\mathbf{x}$ is the space vector and $t$ is the time variable. The iterative equation for deconvolution is of the form $\hat r_{i+1}(\mathbf{x}, t) = \hat r_i(\mathbf{x}, t) + \lambda\,[\,y(\mathbf{x}, t) - w(\mathbf{x}, t) * \hat r_i(\mathbf{x}, t)\,]$ and $\hat r_0(\mathbf{x}, t) = \lambda\, y(\mathbf{x}, t)$, where $\lambda$ is a scalar gain factor chosen to ensure convergence. Taking the Fourier transform of the iterative equation gives $\hat R_{i+1}(\mathbf{k}, \omega) = (1 - \lambda W(\mathbf{k}, \omega))\,\hat R_i(\mathbf{k}, \omega) + \lambda\, Y(\mathbf{k}, \omega)$. This is a first-order one-dimensional difference equation with index $i$, input $\lambda Y(\mathbf{k}, \omega)$, and coefficients that are functions of $(\mathbf{k}, \omega)$. The impulse response is $h(i) = (1 - \lambda W(\mathbf{k}, \omega))^{i}\, u(i)$, where $u(i)$ represents the one-dimensional unit step function. The output then becomes $\hat R_i(\mathbf{k}, \omega) = \lambda Y(\mathbf{k}, \omega) \sum_{l=0}^{i} (1 - \lambda W(\mathbf{k}, \omega))^{l} = \frac{Y(\mathbf{k}, \omega)}{W(\mathbf{k}, \omega)}\left[1 - (1 - \lambda W(\mathbf{k}, \omega))^{i+1}\right]$. The above equation can be approximated as $\hat R_i(\mathbf{k}, \omega) \approx Y(\mathbf{k}, \omega) / W(\mathbf{k}, \omega)$, if $|1 - \lambda W(\mathbf{k}, \omega)| < 1$ and $i \to \infty$. Note that the output is the same as the output of an inverse filter. An inverse filter does not actually have to be realized and the iterative procedure can be easily implemented on a computer. Stacking Stacking is another process used to improve the signal-to-noise ratio of the seismic profile. This involves gathering seismic traces from points at the same depth and summing them. This is referred to as "common depth-point stacking" or "common midpoint stacking". Simply speaking, when these traces are merged, the background noise cancels out and the seismic signals add up, thus improving the SNR. Migration Assume a seismic wave $s(x, z, t)$ travelling upwards towards the surface, where $x$ is the position on the surface and $z$ is the depth. The wave's propagation is described by the scalar wave equation $\frac{\partial^2 s}{\partial x^2} + \frac{\partial^2 s}{\partial z^2} = \frac{1}{c^2}\frac{\partial^2 s}{\partial t^2}$. Migration refers to this wave's backward propagation. The two-dimensional Fourier transform of the wave at depth $z$ is given by $S(k_x, z, \omega) = \iint s(x, z, t)\, e^{-j(\omega t - k_x x)}\, dx\, dt$. To obtain the wave profile at $z = z_0$, the wave field can be extrapolated to $z_0$ using a linear filter with an ideal response given by $H(k_x, \omega) = e^{\,j k_z z_0}$ for $|k_x| \le |\omega / c|$ and $H(k_x, \omega) = 0$ otherwise, where $k_x$ is the x component of the wavenumber, $\omega$ is the temporal frequency and $k_z = \sqrt{\omega^2/c^2 - k_x^2}$. For implementation, a complex fan filter is used to approximate the ideal filter described above. It must allow propagation in the region $|k_x| \le |\omega/c|$ (called the propagating region) and attenuate waves in the region $|k_x| > |\omega/c|$ (called the evanescent region). 
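To make the slant-stack (τ-p) summation described above concrete, the following is a minimal NumPy sketch. It is illustrative only: the array shapes, sampling interval, trace spacing and the use of linear interpolation for the time shift are assumptions made for the example, not details taken from the article.

```python
import numpy as np

def tau_p_transform(d, dt, offsets, p_values):
    """Slant stack (tau-p transform) of a seismic record.

    d        : 2-D array, shape (n_traces, n_samples); d[i, j] is the sample
               recorded at offset offsets[i] and time j * dt.
    dt       : time sampling interval in seconds.
    offsets  : 1-D array of source-receiver offsets in metres.
    p_values : 1-D array of slowness values (s/m) to scan.

    Returns an array of shape (len(p_values), n_samples) whose entry [k, j]
    is the sum of all traces along the line t = tau + p*x, with intercept
    tau = j * dt and slope p = p_values[k].
    """
    n_traces, n_samples = d.shape
    t = np.arange(n_samples) * dt               # sample times of each trace
    out = np.zeros((len(p_values), n_samples))
    for k, p in enumerate(p_values):
        for i, x in enumerate(offsets):
            # sample this trace at the shifted times t + p*x (zero outside range)
            shifted = np.interp(t + p * x, t, d[i], left=0.0, right=0.0)
            out[k] += shifted                    # stack along the slant
    return out

# Purely illustrative usage with synthetic data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    record = rng.standard_normal((24, 500))      # 24 traces, 500 samples
    taup = tau_p_transform(record,
                           dt=0.004,
                           offsets=np.arange(24) * 15.0,       # 15 m spacing
                           p_values=np.linspace(-1e-3, 1e-3, 51))
    print(taup.shape)                            # (51, 500)
```

The double loop keeps the correspondence with the definition obvious; in practice the inner loop over traces would normally be vectorised or replaced by a frequency-domain Radon transform for speed.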
References External links Tau-P Processing of Seismic Refraction Data Reflections on the Deconvolution of Land Seismic Data Seismic profiling COMMON-MIDPOINT STACKING Geophysics
Multidimensional seismic data processing
[ "Physics" ]
1,435
[ "Applied and interdisciplinary physics", "Geophysics" ]
48,541,925
https://en.wikipedia.org/wiki/Hill%20reaction
The Hill reaction is the light-driven transfer of electrons from water to Hill reagents (non-physiological oxidants) in a direction against the chemical potential gradient as part of photosynthesis. Robin Hill discovered the reaction in 1937. He demonstrated that the process by which plants produce oxygen is separate from the process that converts carbon dioxide to sugars. History The evolution of oxygen during the light-dependent steps in photosynthesis (Hill reaction) was proposed and proven by British biochemist Robin Hill. He demonstrated that isolated chloroplasts would make oxygen (O2) but not fix carbon dioxide (CO2). This is evidence that the light and dark reactions occur at different sites within the cell. Hill's finding was that the origin of oxygen in photosynthesis is water (H2O), not carbon dioxide (CO2) as previously believed. Hill's observation of chloroplasts in dark conditions and in the absence of CO2 showed that the artificial electron acceptor was oxidized but not reduced, terminating the process without production of oxygen or sugar. This observation allowed Hill to conclude that oxygen is released during the light-dependent steps (Hill reaction) of photosynthesis. Hill also discovered Hill reagents, artificial electron acceptors that participate in the light reaction, such as dichlorophenolindophenol (DCPIP), a dye that changes color when reduced. These dyes permitted the identification of electron transport chains during photosynthesis. Further studies of the Hill reaction were made in 1957 by plant physiologist Daniel I. Arnon. Arnon studied the Hill reaction using a natural electron acceptor, NADP+. He demonstrated the light-independent reaction, observing the reaction under dark conditions with an abundance of carbon dioxide. He found that carbon fixation was independent of light. Arnon effectively separated the light-dependent reaction, which produces ATP, NADPH, H+ and oxygen, from the light-independent reaction that produces sugars. Biochemistry Photosynthesis is the process in which light energy is absorbed and converted to chemical energy. This chemical energy is eventually used in the conversion of carbon dioxide to sugar in plants. Natural electron acceptor During photosynthesis, the natural electron acceptor NADP+ is reduced to NADPH in chloroplasts. The following equilibrium reaction takes place. A reduction reaction that stores energy as NADPH: NADP+ + 2H+ + 2e- -> NADPH + H+ (Reduction) An oxidation reaction as NADPH's energy is used elsewhere: NADP+ + 2H+ + 2e- <- NADPH + H+ (Oxidation) Ferredoxin–NADP+ reductase is an enzyme that catalyzes the reduction reaction. It is easy to oxidize NADPH but difficult to reduce NADP+, hence a catalyst is beneficial. Cytochromes are conjugate proteins that contain a haem group. The iron atom from this group undergoes redox reactions: Fe3+ + e- -> Fe2+ (Reduction) Fe2+ -> Fe3+ + e- (Oxidation) The light-dependent redox reaction takes place before the light-independent reaction in photosynthesis. Chloroplasts in vitro Isolated chloroplasts placed under light conditions but in the absence of CO2 reduce and then oxidize artificial electron acceptors, allowing the process to proceed. Oxygen (O2) is released as a byproduct, but not sugar (CH2O). Chloroplasts placed under dark conditions and in the absence of CO2 oxidize the artificial acceptor but do not reduce it, terminating the process without production of oxygen or sugar. 
Relation to phosphorylation The rates of phosphorylation and of reduction of an electron acceptor such as ferricyanide increase similarly with the addition of phosphate, magnesium (Mg), and ADP. The presence of all three components is important for maximal reductive and phosphorylative activity. Similar increases in the rate of ferricyanide reduction can be stimulated by a dilution technique. Dilution does not cause a further increase in the rate at which ferricyanide is reduced once ADP, phosphate, and Mg have been added to a treated chloroplast suspension. ATP inhibits the rate of ferricyanide reduction. Studies of light intensities revealed that the effect was largely on the light-independent steps of the Hill reaction. These observations are explained in terms of a proposed mechanism in which phosphate is esterified during the electron transport reactions that reduce ferricyanide, while the rate of electron transport is limited by the rate of phosphorylation. An increase in the rate of phosphorylation increases the rate at which electrons are transported in the electron transport system. Hill reagent It is possible to introduce an artificial electron acceptor into the light reaction, such as a dye that changes color when it is reduced. These are known as Hill reagents. These dyes permitted the identification of electron transport chains during photosynthesis. Dichlorophenolindophenol (DCPIP), an example of these dyes, is widely used by experimenters. In solution, DCPIP is dark blue and becomes lighter as it is reduced. It provides experimenters with a simple visual test and an easily observable light reaction. In another approach to studying photosynthesis, light-absorbing pigments such as chlorophyll can be extracted from chloroplasts. Like so many important biological systems in the cell, the photosynthetic system is ordered and compartmentalized in a system of membranes. See also Cell biology Photophosphorylation Daniel I. Arnon References Name reactions Photosynthesis
Hill reaction
[ "Chemistry", "Biology" ]
1,194
[ "Name reactions", "Biochemistry", "Photosynthesis" ]
60,072,697
https://en.wikipedia.org/wiki/TMEM125
Transmembrane protein 125 is a protein that, in humans, is encoded by the TMEM125 gene. It has 4 transmembrane domains and is expressed in the lungs, thyroid, pancreas, intestines, spinal cord, and brain. Though its function is currently poorly understood by the scientific community, research indicates it may be involved in colorectal and lung cancer networks. Additionally, it was identified as a cell adhesion molecule in oligodendrocytes, suggesting it may play a role in neuron myelination. Gene The TMEM125 gene has no aliases, except for its encoded protein’s name. Its cytogenetic location is at 1p34.2 on the plus strand and it spans from bases 43,272,723 to 43,273,379. TMEM125 comprises four exons. Gene-level regulation Five TMEM125 promoters were identified by Genomatix Gene2Promoter. The primary promoter (NM_001320244) is 1881 bp in length. It consists of binding sites for fork head domain factors and zinc finger transcription factors. Transcript TMEM125 has two variant transcripts that differ only in the 5' untranslated region (UTR), but both encode the same protein. mRNA variant 1 represents the longer of the two variants and is 1898 base pairs (bp) in length; variant 2 is 1797 bp long. Transcript-level regulation TMEM125 microarray-assessed expression patterns in normal human tissue demonstrate that the primary tissues of expression are the pancreas, lungs, salivary glands, trachea, brain, prostate, spinal cord, and thyroid. Additionally, RNA-seq data illustrates transcript expression in the following additional tissues: colon, small intestines, prostate, and stomach. Protein Predicted structure TMEM125 comprises 219 amino acids with four transmembrane domains. Its predicted isoelectric point is 8.32 and its predicted molecular weight is 22.1 kDa. It is primarily leucine-rich, and secondarily alanine- and glycine-rich; TMEM125 is also arginine- and lysine-deficient. It has two core repeat blocks, VALL and TTSS, which both appear twice within the protein. The secondary structure of TMEM125 is predicted to consist of α-helices and small segments of β-sheets. The tertiary structure and topology of TMEM125 were predicted and visualized through Phyre2. Post-translational modifications and localization TMEM125 has 1 predicted phosphorylation site (CK2 Phos), 5 predicted N-myristoylation sites (N-myr), 2 predicted palmitoylation sites (Pal), and 1 predicted amidation site (Amid). It also contains the domain of unknown function 66 (DUF66). TMEM125 is predicted to be subcellularly localized in the plasma membrane. It is secondarily predicted to be localized in the endoplasmic reticulum. Protein interactions No scientifically verified protein interactions have been identified for TMEM125. The STRING protein interaction database predicted 10 functional protein partners for TMEM125, but all were determined through text mining. Homology/evolution TMEM125 is conserved in species as distantly related to humans as cartilaginous fish, whose most recent common ancestor with humans existed about 465 million years ago. TMEM125 is highly conserved in primates, mammals, birds, reptiles, bony fish, and cartilaginous fish, but is not observed in invertebrates. TMEM125 does not have any paralogs. Clinical significance In a topological transcriptome analysis, researchers profiled important proteins of the non-small cell lung cancer regulatory network and determined that TMEM125 exhibited different topological characteristics across cancerous and normal conditions, suggesting its criticality in lung cancer networks. 
This is consistent with post-translational modification analysis; TMEM125 phosphorylation suggests it may be involved in a signal transduction pathway or act as a receptor protein. Additionally, its myristoylation sites suggest its involvement in signal transduction, apoptosis, and alternative extracellular protein export. TMEM125 was identified as a tetraspanin cell adhesion molecule enriched in oligodendrocytes, suggesting it may play a role in myelination. Additionally, its expression was not observed in differentiating oligodendrocytes in vitro, but was detected in oligodendrocytes from treated rat brains, which suggests its expression is regulated by the presence of axons. References Proteins
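Predicted protein parameters of the kind cited above (isoelectric point, molecular weight, amino-acid composition) can be reproduced for any sequence with standard tools. The sketch below uses Biopython, which is not named in the article, and a placeholder sequence rather than the actual 219-residue TMEM125 sequence; both are assumptions for the example.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence -- NOT the real TMEM125 sequence; substitute the
# actual sequence (e.g. retrieved from UniProt) to reproduce the reported
# values of pI ~8.3 and molecular weight ~22.1 kDa.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

analysis = ProteinAnalysis(sequence)
print("length:", len(sequence))
print("molecular weight (Da):", round(analysis.molecular_weight(), 1))
print("isoelectric point:", round(analysis.isoelectric_point(), 2))
print("leucine fraction:", round(analysis.get_amino_acids_percent()["L"], 3))
```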
TMEM125
[ "Chemistry" ]
988
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
60,073,157
https://en.wikipedia.org/wiki/Tanja%20Stadler
Tanja Stadler is a mathematician and professor of computational evolution at the Swiss Federal Institute of Technology (ETH Zurich). She is the current president of the Swiss Scientific Advisory Panel COVID-19 and Vice-Chair of the Department of Biosystems Science and Engineering at ETH Zürich. Career Tanja Stadler studied applied mathematics and statistics at the Technical University of Munich, University of Cardiff, and the University of Canterbury. She continued at the Technical University of Munich to obtain a PhD in 2008 on the topic 'Evolving Trees – Models for Speciation and Extinction in Phylogenetics' (with Prof. Anusch Taraz and Prof. Mike Steel). After a postdoctoral period with Prof. Sebastian Bonhoeffer in the Department of Environmental Systems Sciences at ETH Zürich, she was promoted to Junior Group Leader at ETH Zürich in 2011. In 2014, she became an assistant professor at the Department of Biosystems Science and Engineering of ETH Zürich, where she was promoted to associate professor in 2017 and to full professor in 2021. Research Scientific contributions Tanja's research addresses core questions in the life sciences through an evolutionary perspective, in particular in macroevolution, epidemiology, developmental biology and immunology. Her research questions include fundamental aspects such as how speciation processes led to the current biodiversity, as well as questions directly relevant to human societies, such as the spread of pathogens like COVID-19 or Ebola. Tanja assesses these questions by developing and applying statistical phylodynamic tools to estimate evolutionary and population dynamics from genomic sequencing data, while in parallel leading consortia to produce such data. Her approach is a mix of mathematics, computer science and biology. Tanja made major theoretical contributions to the field of phylodynamics by developing statistical frameworks that use birth-death processes in the context of phylogenetic trees. In particular, she laid the foundations for accounting for sampling through time in birth-death models, enabling coherent analysis of genetic sequencing data collected through time during epidemics as well as coherent analysis of fossil (collected sequentially through time) and present-day species data. Tanja used this framework, for example, to quantify HCV spread and the spread of Ebola during the 2014 outbreak, to assess Zika spread, to show that influenza waves in a city are largely driven by travel patterns, and to provide real-time information during the COVID-19 pandemic. In macroevolution, Tanja explored in particular the impact of dinosaur extinction on mammal diversification. Most recently, she has been introducing statistical tree thinking into developmental biology. Her group founded “Taming the BEAST” in 2016. BEAST 2 is a widely used Bayesian phylogenetic software platform that allows users to infer evolutionary and population dynamics from genomic sequencing data, and to which Tanja's team has contributed many packages. “Taming the BEAST” is both an international workshop series and an online resource for teaching the use of BEAST 2. In the field of epidemiology, Tanja is currently spearheading the use of wastewater information to understand pathogen spread. She is principal investigator of a project between ETH Zürich and Eawag. Her team estimates the reproductive number for SARS-CoV-2 and influenza from wastewater and contributes to understanding variant dynamics. 
Outreach and political engagement During the COVID-19 pandemic, Tanja was president of the Swiss National COVID-19 Science Task Force, advising the authorities and decision makers of Switzerland from August 2021 until the termination of the task force in March 2022. She started the presidency after having been a member and later chair of the data & modelling group of the task force. She was responsible for the weekly communication of the pandemic situation to the Swiss Federal Government and the corresponding authorities. In addition, she presented scientific insights in briefings with the complete Federal Government and with members of the executive branches of the Federal and Cantonal Governments, as well as with different divisions of the Swiss Parliament. Tanja actively contributed core scientific insights to the task force. Her daily calculations of the reproductive number became a key part of the epidemic monitoring. The reproductive numbers were employed in the national "Ordinance of 19 June 2020 on Measures during the Special Situation to combat the COVID-19 Epidemic". Further, the reproductive number dashboard was highlighted when the South African health department informed the world about the new variant Omicron. Tanja also led the most extensive Swiss-based SARS-CoV-2 sequencing effort, providing results on the emergence and spread of new variants. Through this effort, the first Beta, Gamma, and Delta variant cases in Switzerland were detected. The platform cov-spectrum was developed by Tanja's team and became essential in SARS-CoV-2 variant tracking. It is widely used to facilitate SARS-CoV-2 lineage designation and has been used in policy, such as at the FDA advisory committee meeting discussing possible SARS-CoV-2 strains for a vaccine update. During the mpox outbreak, the team launched mpox-spectrum within days to track the newly spreading virus. In addition to advising the government and informing policy makers, she became actively involved in informing the public about the state of the pandemic. Tanja often communicated scientific insights on national news and TV shows in Switzerland, as well as through federal press conferences. Personal life Stadler lives with her partner and their two daughters in Basel. Awards and honors 2008: TUM PhD award 2012: John Maynard Smith Prize of the European Society for Evolutionary Biology 2013: ERC starting grant 2013: ETH Latsis Prize 2013: Zonta prize 2016: ETH Golden Owl for teaching 2021: SMBE Mid-Career Excellence Award 2021: Carus Prize of the German National Academy of Sciences Leopoldina 2022: Rössler Prize 2022: Highly cited researcher by Clarivate 2023: Member German National Academy of Sciences Leopoldina References External links ETH Zürich Department of Biosystems Science and Engineering – Computational Evolution Group GenSpectrum Wastewater RE Taming the BEAST Selection of press coverage of Tanja Stadler's work Living people German women academics Academic staff of ETH Zurich Phylogenetics 20th-century German mathematicians German women mathematicians 1981 births Scientists from Stuttgart 20th-century German women
Tanja Stadler
[ "Biology" ]
1,277
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
57,128,153
https://en.wikipedia.org/wiki/Gregory%20Stephanopoulos
Greg N. Stephanopoulos (born 1950) is an American chemical engineer and the Willard Henry Dow Professor in the department of chemical engineering at the Massachusetts Institute of Technology. He has worked at MIT, Caltech, and the University of Minnesota in the areas of biotechnology, bioinformatics, and metabolic engineering, especially in the areas of bioprocessing for biochemical and biofuel production. Stephanopoulos is the author of over 400 scientific publications with more than 35,000 citations (h index = 97) as of April 2018. In addition, Greg has supervised more than 70 graduate students and 50 post-docs whose research has led to more than 50 patents. He was elected a fellow of the American Association for the Advancement of Science (2005), a member of the National Academy of Engineering (2003), and received the ENI Prize on Renewable Energy in 2011. Early life and education He completed his Ph.D. in chemical engineering at the University of Minnesota in 1975, with advisors Arnold Fredrickson and Rutherford Aris, on the topic of modeling of population dynamics. His thesis was published in 1978 with the title "Mathematical Modelling of the Dynamics of Interacting Microbial Populations. Extinction Probabilities in a Stochastic Competition and Predation". Career Stephanopoulos began his career as an assistant professor of chemical engineering at the California Institute of Technology in 1978. He was promoted to associate professor in 1978. In 1985, he was hired by the Massachusetts Institute of Technology as professor of chemical engineering. During his time at MIT, he has held the following positions: associate director, Biotechnology Center (1990–1997), professor of the MIT-Harvard Division of Health Science and Technology – HST (2000–present), Bayer Professor of Chemical Engineering and Biotechnology (2000–2005), and the W. H. Dow Professor of Chemical Engineering and Biotechnology (2006–present). From 2006 to 2007, he was a visiting professor at the Institute for Chemical and Bioengineering in Zürich, Switzerland. As noted in the citation for his ENI Prize, Stephanopoulos's research has addressed the advancement of multiple aspects of bioengineering: Works Books H. W. Blanch, E. T. Papoutsakis and Gregory Stephanopoulos (eds.) Foundations of Biochemical Engineering, Kinetics and Thermodynamics of Biological Systems. ACS Symposium Series, 207 (1983). M. N. Karim and G. Stephanopoulos (eds.). Modelling and Control of Biotechnical Processes. IFAC Symposia Series No. 10, Proceedings of the 5th Int. Conf. of Computer App. in Ferm. Tech., Keystone, CO, 29 March – 2 April 1992, Pergamon Press (1992). G. Stephanopoulos (ed.), Bioprocessing, Vol. 3 of Biotechnology, H. J. Rehm, G. Reed, A. Puhler, P. Stadler (series eds.), VCH, Weinheim (1993). G. Stephanopoulos, Jens Nielsen, and A. Aristidou. Metabolic Engineering. Principles and Methodologies. Academic Press (1998). G. Stephanopoulos (ed.) Proceedings of the 1st Conference on Metabolic Engineering. Special Issue, Biotechnology & Bioengineering, Issues 2 & 3, (1998). Isidore Rigoutsos and G. Stephanopoulos (eds.), Systems Biology. Volume 1 and 2, Oxford University Press, (2006) Journal articles Stephanopoulos has authored more than 400 journal articles on the topics of biotechnology, bioinformatics, and metabolic engineering. These include: Gregory Stephanopoulos, R. Aris, A. G. Fredrickson. 
"A stochastic analysis of the growth of competing microbial populations in a continuous biochemical reactor", Mathematical Biosciences 45, 99-135, (1979). Gregory Stephanopoulos, R. Aris, A. G. Fredrickson. "The growth of competing microbial populations in a CSTR with periodically varying inputs", AIChE Journal 25, 863-872, (1979). G. Stephanopoulos, A. G. Fredrickson. ""Coexistence of Photosynthetic Microorganisms with Growth Rates Depending on the Spectral Quality of Light", Bulletin of Mathematical Biology 41, 525-542, (1979). G. Stephanopoulos, A. G. Fredrickson. "The Effect of Spatial Inhomogeneities on the Coexistence of Competing Microbial Populations", Biotechnology and Bioengineering 21, 1491-1498, (1979). Rahul Singhvi, Amit Kumar, Gabriel P. Lopez, Gregory N. Stephanopoulos, D. I. Wang, George M. Whitesides, Donald E. Ingber "Engineering cell shape and function", Science, 264(5159), 696-698, (1994). Hal Alper, Curt Fischer, Elke Nevoigt, Gregory Stephanopoulos. "Tuning genetic control through promoter engineering", Proceedings of the National Academy of Sciences, 102(36), 12678, (2005). Parayil Kumaran Ajikumar, Wen-Hai Xiao, Keith E. J. Tyo, Yong Wang, Fritz Simeon, Effendi Leonard, Oliver Mucha, Too Heng Phon, Blaine Pfeifer, Gregory Stephanopoulos. "Isoprenoid pathway optimization for Taxol precursor overproduction in Escherichia coli", Science, 330(6000), 70-74, (2010). Christian M Metallo, Paulo A. Gameiro, Eric L. Bell, Katherine R. Mattaini, Juanjuan Yang, Karsten Hiller, Christopher M Jewell, Zachary R Johnson, Darrell J. Irvine, Leonard Guarente, Joanne K. Kelleher, Matthew G. Vander Heiden, Othon Iliopoulos, Gregory Stephanopoulos. "Reductive glutamine metabolism by IDH1 mediates lipogenesis under hypoxia", Nature, 481(7381), 380, (2012). Honors In 2003, Stephanopoulos was elected a member of the American National Academy of Engineering (NAE). His NAE election citation noted: Other awards and honors include: AIChE R.H. Wilhelm Award in Chemical Reaction Engineering (2001) Elected to the National Academy of Engineering (2002) Elected Fellow of AAAS (2005) AIChE Founders Award (2007) Amgen Biochemical Engineering Award (2009) ENI Prize in Renewable and Non-Conventional Energy (2011) Elected Fellow, American Academy of Microbiology (2013) Elected President of the American Institute of Chemical Engineers (2015) References External links MIT Chemical Engineering Google Scholar - Greg Stephanopoulos Academic Tree - Greg Stephanopoulos American chemical engineers American materials scientists MIT School of Engineering faculty University of Minnesota College of Science and Engineering alumni 1950 births Biochemical engineering Minnesota CEMS Greek emigrants to the United States Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Engineering National Technical University of Athens alumni Greek engineers Living people Scientists from Kalamata University of Florida College of Engineering alumni
Gregory Stephanopoulos
[ "Chemistry", "Engineering", "Biology" ]
1,512
[ "Biochemistry", "Chemical engineering", "Biological engineering", "Biochemical engineering" ]
57,131,412
https://en.wikipedia.org/wiki/Outline%20of%20bridges
The following outline is provided as an overview of and topical guide to bridges: Bridges – a structure built to span physical obstacles without closing the way underneath such as a body of water, valley, or road, for the purpose of providing passage over the obstacle. What type of thing is a bridge? Bridges can be described as all of the following: A structure – An arrangement and organization of interrelated elements in a material object or system, or the object or system so organized. A thoroughfare – A road connecting one location to another. Types of bridges Beam Bridge Truss Bridge Truss arch bridge Cantilever Bridge Stressed ribbon bridge Arch Bridge Tied Arch Bridge Through arch bridge Skew arch Suspension Bridge Cable-stayed bridge Simple suspension bridge Inca rope bridge Tubular bridge Extradosed bridge Moveable Bridge Drawbridge (British English definition) – the bridge deck is hinged on one end Bascule bridge – a drawbridge hinged on pins with a counterweight to facilitate raising ; road or rail Rolling bascule bridge – an unhinged drawbridge lifted by the rolling of a large gear segment along a horizontal rack Folding bridge – a drawbridge with multiple sections that collapse together horizontally Curling bridge – a drawbridge with transverse divisions between multiple sections that curl vertically Fan Bridge - a drawbridge with longitudinal divisions between multiple bascule sections that rise to various angles of elevation, forming a fan arrangement. Vertical-lift bridge – the bridge deck is lifted by counterweighted cables mounted on towers ; road or rail Table bridge – a lift bridge with the lifting mechanism mounted underneath it Retractable bridge (Thrust bridge) – the bridge deck is retracted to one side Submersible bridge – also called a ducking bridge, the bridge deck is lowered into the water Tilt bridge – the bridge deck, which is curved and pivoted at each end, is lifted at an angle Swing bridge – the bridge deck rotates around a fixed point, usually at the centre, but may resemble a gate in its operation ; road or rail Transporter bridge – a structure high above carries a suspended, ferry-like structure Jet bridge – a passenger bridge to an airplane. One end is mobile with height, yaw, and tilt adjustments on the outboard end Guthrie rolling bridge Vlotbrug, a design of retractable floating bridge in the Netherlands Locks are implicitly bridges as well allowing ship traffic to flow when open and at least foot traffic on top when closed Rigid-frame bridge Side-spar cable-stayed bridge Segmental bridge Multi-Level Bridges Viaduct Vierendeel bridge Toll bridge Footbridge Clapper bridge Moon bridge Step-stone bridge Zig-zag bridge Plank Boardwalk Joist Multi-way bridge Three-Way Bridge Four-Way Bridge Five-Way Bridge Trestle bridge Coal trestle Transporter bridge Log bridge Packhorse bridge Aqueduct Military Bridges AM 50 Armoured vehicle-launched bridge Bailey bridge Callender-Hamilton bridge Mabey Logistic Support Bridge Medium Girder Bridge Pontoon bridge History of bridges History of bridges General bridges concepts Bending The behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. Compression (physics) The application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. Shear stress The component of stress coplanar with a material cross section. 
Span (engineering) The distance between two intermediate supports for a structure. Tension (physics) The pulling force transmitted axially by the means of a string, cable, chain, or similar one-dimensional continuous object, or by each end of a rod, truss member, or similar three-dimensional object; tension might also be described as the action-reaction pair of forces acting at each end of said elements. Torsion (mechanics) The twisting of an object due to an applied torque. Torque The rate of change of angular momentum of an object. Bridges companies Alabama Department of Transportation (ALDOT) Alaska Department of Transportation and Public Facilities (DOT&PF) Arizona Department of Transportation (ADOT) Arkansas State Highway and Transportation Department (AHTD) California Department of Transportation (Caltrans) Colorado Department of Transportation (CDOT) Connecticut Department of Transportation (CONNDOT) Delaware Department of Transportation (DelDOT) Florida Department of Transportation (FDOT) Georgia Department of Transportation (GDOT) Hawaii Department of Transportation (HDOT) Idaho Transportation Department (ITD) Illinois Department of Transportation (IDOT) Indiana Department of Transportation (INDOT) Iowa Department of Transportation (Iowa DOT) Kansas Department of Transportation (KDOT) Kentucky Transportation Cabinet (KYTC) Louisiana Department of Transportation and Development (DOTD) Maine Department of Transportation (MaineDOT) Maryland Department of Transportation (MDOT) Massachusetts Department of Transportation (MassDOT) Michigan Department of Transportation (MDOT) Minnesota Department of Transportation (Mn/DOT) Mississippi Department of Transportation (MDOT) Missouri Department of Transportation (MoDOT) Montana Department of Transportation (MDT) Nebraska Department of Transportation (NDOT) Nevada Department of Transportation (NDOT) New Hampshire Department of Transportation (NHDOT) New Jersey Department of Transportation (NJDOT) New Mexico Department of Transportation (NMDOT) New York New York State Bridge Authority New York State Department of Transportation (NYSDOT) New York State Thruway Authority (NYSTA) North Carolina Department of Transportation (NCDOT) North Dakota Department of Transportation (NDDOT) Ohio Department of Transportation (ODOT) Oklahoma Department of Transportation (ODOT) Oregon Department of Transportation (ODOT) Pennsylvania Department of Transportation (PennDOT) Puerto Rico Department of Transportation and Public Works (DTOP) Rhode Island Department of Transportation (RIDOT) South Carolina Department of Transportation (SCDOT) South Dakota Department of Transportation (SDDOT) Tennessee Department of Transportation (TDOT) Texas Department of Transportation (TxDOT) Utah Department of Transportation (UDOT) Vermont Agency of Transportation (VTrans) Virginia Department of Transportation (VDOT) Washington State Department of Transportation (WSDOT) West Virginia Department of Transportation (WVDOT) Wisconsin Department of Transportation (WisDOT) Wyoming Department of Transportation (WYDOT) Notable bridges Akashi Kaikyō Bridge Alcantara Bridge Brooklyn Bridge Chapel Bridge Charles Bridge Chengyang Bridge Chesapeake Bay Bridge Gateshead Millennium Bridge George Washington Bridge Golden Gate Bridge Great Belt Bridge Hangzhou Bay Bridge Mackinac Bridge Millau Viaduct Ponte Vecchio Rainbow Bridge (Niagara Falls) Rialto Bridge Royal Gorge Bridge Seri Wawasan Bridge Seven Mile Bridge Stari Most Sunshine Skyway Bridge Sydney Harbour Bridge Tacoma Narrows Bridges The 
Confederation Bridge The Helix Bridge Tower Bridge Verrazano-Narrows Bridge Tsing Ma Bridge See also List of bridges References External links Bridges
Outline of bridges
[ "Engineering" ]
1,380
[ "Structural engineering", "Bridges" ]
57,133,818
https://en.wikipedia.org/wiki/BOQA
Bridge Operations (or Operational) Quality Assurance (BOQA) is a methodology utilised in shipping which originates from the similar FOQA/FDM (Flight Operations Quality Assurance/Flight Data Monitoring) concept in aviation. BOQA is a methodology with which ship owners/operators, ship captains, and other associated shipping stakeholders can automatically and systematically monitor, track, trend and analyse the operational quality of (seagoing) vessels. The main target of BOQA is to provide a non-punitive platform for proactive analysis of vessel data to enable the enhancement of maritime safety. The BOQA methodology can be used in both conventional crewed ships and in autonomous or uncrewed vessels, provided that adequate data sources are available. History The original template for BOQA was laid out when Royal Caribbean approached Aerobytes Limited (a market leader in FOQA) to collaborate on a similar product for the maritime industry (see https://www.aerobytes.co.uk/boqa/). Discussions were held as to what breaches of performance should be detected, two recorders were installed on RCCL vessels, and discussions were also held with the Carnival group to develop the BOQA concept. Aerobytes decided to focus on its core aviation business, and RCCL and Carnival went on to develop their own systems, along with a few other companies that saw the potential. At present BOQA is not mandated and therefore there are no strict rules as to what an effective BOQA system should contain, but once the enormous potential is realised it is entirely possible that this might change. Description BOQA is best developed as a non-punitive, company-internal methodology or process, which has the overall target of assisting the ship captain and the ship operator in maintaining a high level of safety and operational quality. BOQA has been described as: A system which delivers 24/7 electronic monitoring and electronic alarms when set operational parameters are deviated from During normal operations, BOQA will collect and analyse digital operational data directly from ships' operational equipment BOQA data is unique because it can provide objective information that is not available through other methods A BOQA program can identify operational situations in which there is increased risk, allowing operators to take early corrective action before that risk results in an incident or accident The BOQA program is a tool in the operator's overall operational risk assessment and prevention program BOQA, being proactive in identifying and addressing risk, will enhance safety The Oil Companies International Marine Forum (OCIMF) published in June 2013 the "Recommendations on the Proactive Use of Voyage Data Recorder Information" and submitted these recommendations as an information paper to the IMO (NAV 59/INF.9, 12/07/2013) with the title "The proactive use of Voyage Data Recorder (VDR) information". The main aim of this paper was to use the ship's VDR as a data source for proactive safety monitoring. This paper called for: Routine transmission of VDR data ashore Auto-analysis of data Alerts on non-conformance ...and as such the paper described, in essence, a BOQA solution. Methodology BOQA is used to objectively monitor human operational performance and external factors, such as weather and other traffic around the vessel, by utilising various types of sensor and external data and comparing this data with defined best practices and standard operating procedures (SOPs). 
BOQA is usually not used to monitor the internal technical performance of the vessel machinery and equipment base, which in most cases already has a wide range of proprietary monitoring and alarm functions. BOQA is often a software system consisting of shipboard real-time data collection and sensing, automated data transmission between the vessel and shore, and a shore-based system which receives, combines, analyses, alerts on and stores the data according to defined rules and logic. BOQA usually includes three "time domains", i.e.: a historical database of data and events, which is used for trend and pattern analysis a real-time reactive stream-analytics component which can react to events or deviations in real or near-real time (NRT) a predictive and proactive component, which assists the operator in taking action before an event occurs Event types BOQA is a continuously developing methodology, much in the same way as FOQA. At present, BOQA solutions are known to be able to monitor event types such as: Cross-track error or deviation from a defined navigational route (the Costa Concordia disaster is known to have been caused by such a deviation) Safety corridor, i.e. whether the ship is sailing within its defined safe waters Collision risk based on proximity (another ship too close to own ship), CPA (closest point of approach, which measures the time and distance to a potential collision; a simple CPA/TCPA computation is sketched below) and BCR (Bow Cross Range) Entry into a restricted or protected area Severe list, heel or roll Excessive accelerations Excessive turns (high ROT) Crash stop Heavy weather at the present position and along a planned route (this is a predictive component) Sudden change in atmospheric pressure (which is an indicator of an oncoming change in weather) Speed monitoring, which can be used for example in charterparty compliance monitoring Unscheduled stops along the route, which may lead to schedule deviations and unwanted changes in arrival time Proximity to a shore line, ice edge or iceberg Rescue boat launch AIS (Automatic identification system) status change Black-out Port inactivity Excessive use of engine controls Under keel clearance Excessive rudder angles Traffic Separation Scheme violations Inputs BOQA relies on data from various sources. Some data sources include, but are not restricted to: Ship's AIS transponders (Automatic identification system), which give location, speed and course of the vessel itself and the surrounding traffic Ship's GPS (Global Positioning System), which also gives location, speed and course Ship's VDR (Voyage Data Recorder) Motion sensors Barometric pressure sensors Real-time weather data from sensors Weather forecast data Ship's navigational ECDIS (Electronic Chart Display and Information System) route External geographic data, such as restricted areas, PSSAs (Marine protected area) or TSS (Traffic Separation Scheme) Ship's electronic logbook Applications BOQA is not yet known to be officially mandated or regulated by any official maritime bodies, such as the International Maritime Organization, classification societies or flag state administrations. Royal Caribbean Cruise Line stated in its 2012 stewardship report that it was evaluating a BOQA system. Carnival Corporation & plc is known to have a large-scale, in-house developed BOQA solution, which consists of a data system (Neptune) and a 24/7 staffed Fleet Operations Centre in three locations around the world. AS Tallink Group is known to be using a BOQA solution, following successful trials in 2018. 
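The collision-risk parameters mentioned in the event-type list above, CPA and the associated time to closest point of approach (TCPA), can be derived from AIS position and velocity data with simple vector geometry. The following is a minimal sketch under simplifying assumptions (a flat local coordinate frame in metres, both vessels holding constant course and speed); it is illustrative only and not taken from any particular BOQA product.

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach between two vessels.

    Positions are (x, y) in metres in a local flat-earth frame and
    velocities are (vx, vy) in metres per second; both vessels are
    assumed to keep constant course and speed.

    Returns (cpa_distance_m, tcpa_seconds). A negative TCPA means the
    closest approach already lies in the past.
    """
    # relative position and velocity of the target with respect to own ship
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                      # identical velocities: range never changes
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2   # time at which the separation is minimal
    cpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return cpa, tcpa

# Illustrative example: target 2 km ahead and slightly to starboard,
# both ships closing at 5 m/s -> CPA of 100 m in 200 s.
print(cpa_tcpa((0, 0), (0, 5.0), (100, 2000), (0, -5.0)))
```

In a monitoring system of this kind, an alert would typically be raised when the computed CPA falls below a configured distance threshold while TCPA is positive and below a configured time threshold; those thresholds are operator policy, not part of the calculation itself.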
Literature NTSB Forum, 25–26 March 2014: Cruise Ships: Examining Safety, Operations and Oversight Forum, Capt. David Christie – Carnival Corporation Innovative Techniques to Enhance Safety OCIMF (Oil Companies International Marine Forum). Recommendations on the Proactive Use of Voyage Data Recorder Information (Ver 2).pdf AWS Public sector blog "Maritime Operations – Automating Operational Quality Assurance with AWS and Open Data" References Marine engineering Maritime safety
BOQA
[ "Engineering" ]
1,482
[ "Marine engineering" ]
45,522,513
https://en.wikipedia.org/wiki/Electro%20sinter%20forging
Electro sinter forging (ESF) is an industrial single electromagnetic pulse sintering technique used to rapidly produce a wide range of small components in metals, alloys, intermetallics, semiconductors, and composites. ESF was invented by Alessandro Fais, an Italian metallurgical engineer and scientist. ESF is carried out by inserting loose, binder-less powders into an automatic dosing system, or by inserting them manually into the mold. The automatic procedure applies a pre-pressure to the powders to ensure electrical contact and then superimposes an intense electromagnetic pulse on a mechanical pulse. The two pulses last 30 to 100 ms. After a brief holding time, the sintered component is extracted by the lower plunger and pushed out by the extractor to leave room for the next sintering. Each sintering round lasts less than one second and is carried out entirely in air (even with pyrophoric materials). References Nanotechnology Metallurgical facilities
Electro sinter forging
[ "Chemistry", "Materials_science", "Engineering" ]
206
[ "Materials science stubs", "Metallurgy", "Materials science", "Nanotechnology", "Metallurgical facilities" ]
45,523,985
https://en.wikipedia.org/wiki/Organization%20and%20expression%20of%20immunoglobulin%20genes
Antibody (or immunoglobulin) structure is made up of two heavy chains and two light chains. These chains are held together by disulfide bonds. The arrangements and processes that put together different parts of this antibody molecule play an important role in antibody diversity and in the production of different classes or subclasses of antibodies. The organization and processes take place during the development and differentiation of B cells. That is, the controlled gene expression during transcription and translation, coupled with the rearrangement of immunoglobulin gene segments, results in the generation of the antibody repertoire during the development and maturation of B cells. B-cell development During the development of B cells, the immunoglobulin gene undergoes a sequence of rearrangements that leads to the formation of the antibody repertoire. For example, in the lymphoid cell, a partial rearrangement of the heavy-chain gene occurs, which is followed by complete rearrangement of the heavy-chain gene. At this pre-B-cell stage, the membrane μ heavy chain and the surrogate light chain are formed. The final rearrangement of the light-chain gene generates an immature B cell and mIgM. The process explained here occurs only in the absence of antigen. The mature B cell, formed as RNA processing changes, leaves the bone marrow; when stimulated by antigen, it differentiates into IgM-secreting plasma cells. Also, at first the mature B cell expresses membrane-bound IgD and IgM. These two classes can switch to secretory IgD and IgM during the processing of mRNAs. Finally, further class switching follows as the cells keep dividing and differentiating. For instance, IgM switches to IgG, which switches to IgA, which eventually switches to IgE. The multigene organization of immunoglobulin genes Studies and predictions such as Dreyer and Bennett's show that the light chains and heavy chains are encoded by separate multigene families on different chromosomes. They are referred to as gene segments and are separated by non-coding regions. The rearrangement and organization of these gene segments during the maturation of B cells produces functional proteins. The entire process of rearrangement and organization of these gene segments is the vital source of the immune system's capability to recognize and respond to a variety of antigens. Light chain multigene family The light-chain gene has three gene segments: the light-chain variable region (V), joining region (J), and constant region (C) gene segments. The variable region of the light chain is therefore encoded by the rearrangement of V and J segments. The light chain can be either kappa (κ) or lambda (λ). This process takes place at the level of mRNA processing. Random rearrangement and recombination of the gene segments at the DNA level to form one kappa or lambda light chain occurs in an orderly fashion. As a result, "a functional variable region gene of a light chain contains two coding segments that are separated by a non-coding DNA sequence in unrearranged germ-line DNA" (Barbara et al., 2007). Heavy-chain multigene family The heavy chain contains similar gene segments, such as VH, JH and CH, but also has another gene segment called D (diversity). Unlike in the light-chain multigene family, VDJ gene segments code for the variable region of the heavy chain. The rearrangement and reorganization of gene segments in this multigene family is more complex. 
The rearranging and joining of segments produce different end products because these are carried out by different RNA processes. This is also why IgM and IgD are generated at the same time. Variable-region rearrangements The variable-region rearrangements happen in an orderly sequence in the bone marrow. Usually, the assortment of these gene segments occurs during B-cell maturation. Light chain DNA The kappa and lambda light chains undergo rearrangements of the V and J gene segments. In this process, a functional Vλ segment can combine with any of four functional Jλ–Cλ combinations. On the other hand, Vκ gene segments can join with any one of the functional Jκ gene segments. The overall rearrangements result in a gene-segment order, from the 5′ to the 3′ end, of a short leader (L) exon, a noncoding sequence (intron), a joined VJ segment, a second intron, and the constant region. There is a promoter upstream from each leader gene segment. The leader exon is important in the transcription of the light chain by RNA polymerase. To leave only the coding sequence, the introns are removed during RNA processing. Heavy chain DNA The rearrangements of heavy chains are different from those of the light chains because the DNA undergoes rearrangement of V-D-J gene segments in the heavy chains. These reorganizations of gene segments produce a gene sequence, from the 5′ to the 3′ end, of a short leader exon, an intron, a joined VDJ segment, a second intron and several constant-region gene segments. The final product of the rearrangement is transcribed by RNA polymerase. Mechanism of variable region rearrangements It is understood that rearrangement occurs between specific sites on the DNA called recombination signal sequences (RSSs). The signal sequences are composed of a conserved palindromic heptamer and a conserved AT-rich nonamer. These signal sequences are separated by non-conserved spacers of 12 or 23 base pairs, called one-turn and two-turn spacers respectively. RSSs occur in the lambda-chain, kappa-chain and heavy-chain loci, and the rearrangement processes in these regions are catalyzed by the products of two recombination-activating genes, RAG-1 and RAG-2, together with other enzymes and proteins. The segments are joined according to the signals generated by the RSSs that flank each V, D, and J segment. During the rearrangements and combinations, only gene segments flanked by a 12-bp spacer join to gene segments flanked by a 23-bp spacer, which maintains VL-JL and VH-DH-JH joining. Generation of antibody diversity Antibody diversity is produced by genetic rearrangement after shuffling and rejoining one of each of the various gene segments for the heavy and light chains. Due to the mixing and random recombination of the gene segments, errors can occur at the sites where gene segments join with each other. These errors are one of the sources of the antibody diversity that is commonly observed in both the light and heavy chains. Moreover, when B cells continue to proliferate, mutations accumulate at the variable regions through a process called somatic hypermutation. The high concentration of these mutations at the variable region also produces high antibody diversity. Class-switching When B cells get activated, class switching can occur. Class switching involves switch regions that are made up of multiple copies of short repeats (GAGCT and TGGGG). These switches occur at the level of DNA rearrangement: a looping event excises the constant regions for IgM and IgD and forms the IgG mRNAs. Any further looping will produce IgE or IgA mRNAs. 
In addition, cytokines are factors that have great effects on class switching to different classes of antibodies. Their interaction with B cells provides the appropriate signals needed for B-cell differentiation and the eventual occurrence of class switching. For example, interleukin-4 induces the rearrangement of heavy-chain immunoglobulin genes. That is, IL-4 induces the switching of Cμ to Cγ and then to Cε. References Barbara A. Osborne, Richard A. Goldsby, and Thomas J. Kindt (2007). Kuby Immunology. W. H. Freeman and Company, pp. 111–142 Notes Antibodies Gene expression
Organization and expression of immunoglobulin genes
[ "Chemistry", "Biology" ]
1,616
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
45,524,498
https://en.wikipedia.org/wiki/Catalogue%20of%20Spectroscopic%20Binary%20Orbits
The catalogue of spectroscopic binary orbits (SB) is a compilation of orbital data for spectroscopic binary stars which has been produced since 1969 by Alan Henry Batten of the Dominion Astrophysical Observatory and various collaborators. At the 24th International Astronomical Union general assembly, in 2000, a working group was established to take responsibility for maintenance of the catalogue, and to take it from a paper-based system to an online database. The 9th catalogue was published in 2004. As of 7 August 2009, the catalogue database contained information on over 2940 binary systems, increasing to 3722 in March 2019. The main components of the current SB9 catalogue, as a work in progress, can be downloaded in gzipped tarball format. Applications The catalogue is used for a variety of purposes: Completeness assessments and statistical analysis Generation of H–R diagrams and definition of shortest period Computation of period & eccentricity relationships References Applied and interdisciplinary physics Astronomical catalogues of stars Spectroscopic binaries
Catalogue of Spectroscopic Binary Orbits
[ "Physics", "Astronomy" ]
197
[ "Astronomical catalogue stubs", "Applied and interdisciplinary physics", "Astronomy stubs" ]
45,527,564
https://en.wikipedia.org/wiki/3D%20Print%20Canal%20House
The 3D Print Canal House is a three-year, publicly accessible "Research & Design by Doing" project in which an international team of partners from various sectors works together on 3D printing a canal house in Amsterdam. By building the house, all parties research the possibilities of 3D printing architecture and form connections between design, science, culture, building, software, communities and the city. The project serves both as an exhibition of 3D printing technology and as a research site into 3D printing architecture. The project was initiated by DUS architects, and the site, in Amsterdam North, opened to the public on March 1, 2014. Kamermaker The house is constructed by a fused deposition modeling printer developed by DUS: the Kamermaker ("Room builder"), able to print elements of up to 2.2×2.2×3.5 metres. It is a movable pavilion about the size of a shipping container; the machine itself is 6 meters tall. The Kamermaker can be moved by truck or by ship. See also Construction 3D printing References Canal house Houses in the Netherlands Buildings and structures in Amsterdam 2014 establishments in the Netherlands Building research Building engineering
3D Print Canal House
[ "Engineering" ]
236
[ "Building engineering", "Civil engineering", "Architecture" ]
45,532,115
https://en.wikipedia.org/wiki/N-Acetyllactosamine
N-Acetyllactosamine (LacNAc) (also known as CD75) is a nitrogen-containing disaccharide, a lactosamine derivative that is substituted with an acetyl group on its glucosamine component. N-Acetyllactosamine is a component of many glycoproteins and functions as a carbohydrate antigen that is thought to play roles in normal cellular recognition as well as in malignant transformation and metastasis. It is also found in the structure of human milk oligosaccharides and has prebiotic effects. References External links Amino sugars Disaccharides Acetamides
N-Acetyllactosamine
[ "Chemistry" ]
142
[ "Amino sugars", "Carbohydrates" ]
45,532,600
https://en.wikipedia.org/wiki/Matrix-assisted%20ionization
In mass spectrometry, matrix-assisted ionization (also inlet ionization) is a low-fragmentation (soft) ionization technique which involves the transfer of particles of the analyte and matrix sample from atmospheric pressure (AP) to the heated inlet tube connecting the AP region to the vacuum of the mass analyzer. Initial ionization occurs as the pressure drops within the inlet tube. Inlet ionization is similar to electrospray ionization in that a reverse-phase solvent system is used and the ions produced are highly charged; however, a voltage or a laser is not always needed. It is a highly sensitive process for small and large molecules such as peptides, proteins and lipids, and it can be coupled to a liquid chromatograph. Inlet ionization techniques can be used with an Orbitrap mass analyzer, an Orbitrap Fourier transform mass spectrometer, a linear trap quadrupole and MALDI-TOF. Types of inlet ionization Matrix-assisted inlet ionization In matrix-assisted inlet ionization (MAII), a matrix, which can be a solvent, is used at ambient temperature with the analyte of interest as a mixture. The matrix/analyte mixture is inserted into the heated inlet tube by tapping the mixture at the opening end of the tube. For the highly charged ions of the analyte to be produced from ionization, desolvation of the matrix molecules needs to occur. Matrices that can be used include: 2,5-dihydroxybenzoic acid, 2,5-dihydroxyacetophenone, 2-aminobenzyl alcohol, anthranilic acid, and 2-hydroxyacetophenone. Laserspray inlet ionization Laserspray inlet ionization (LSII) is a subset of MAII and uses a matrix-assisted laser desorption/ionization (MALDI) method. It was originally called atmospheric pressure matrix-assisted laser desorption/ionization, but was renamed LSII to avoid confusion with MALDI and because it was found to be a type of inlet ionization. As with all inlet ionization techniques, highly multiply charged ions are produced. A nitrogen laser is used to ablate the solid matrix/analyte into the heated inlet tube; the observed ions are generated at the surface of the matrix/analyte, and so the laser is not directly involved in the ionization as was originally thought. LSII can determine protein molecular weights and has been found to detect masses of proteins up to 20,000 Da. The sensitivity of LSII for protein detection is higher by an order of magnitude compared to ESI. Solvent assisted inlet ionization Solvent-assisted inlet ionization (SAII) is similar to matrix-assisted inlet ionization; however, the matrix is a solvent such as water, acetonitrile or methanol. This ionization technique is highly sensitive to small molecules, peptides and proteins. The analyte is dissolved in the solvent and can either be introduced to the heated inlet tube by a capillary column or directly injected into the inlet tube with a syringe or by pipetting. The capillary column is made of fused silica, with one end submerged in the sample solvent and the other in the end of the heated inlet tube. The solvent flows through the capillary column without the use of a pump due to the pressure difference between ambient pressure and the vacuum. The temperature in the inlet tube can vary from 50 °C to 450 °C, with the lower temperature being used if the results obtained from a higher temperature are of good resolution. Solvent-assisted inlet ionization can be coupled not only to liquid chromatography (LC) but also to nano LC. 
Advantages of inlet ionization Ionization at atmospheric pressure often leads to a loss of ions during the transfer of the ions from the ambient pressure region to the vacuum of the mass analyzer. Ions are lost due to dispersion of the analyte spray and 'rim loss', causing fewer ions to reach the vacuum where m/z separation occurs. In inlet ionization, initial ionization occurs in the sub-atmospheric pressure region of the heated inlet tube, which is directly attached to the vacuum of the mass analyzer, and so ion loss is reduced as this transfer does not occur. In LSII the use of the laser increases the image quality of the results by producing better spatial resolution: more pixels are created, and so a clearer image is obtained. Multiply charged ions are produced, further extending the mass range. Multiple methods can be used to fragment the molecules for structural information: electron transfer dissociation (ETD), collision-induced dissociation (CID), and electron capture dissociation (ECD). When using a laser, only small volumes are needed. References Ion source
Matrix-assisted ionization
[ "Physics" ]
977
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)" ]
52,361,732
https://en.wikipedia.org/wiki/IFRS%2017
IFRS 17 is an International Financial Reporting Standard that was issued by the International Accounting Standards Board in May 2017. It will replace IFRS 4 on accounting for insurance contracts and has an effective date of 1 January 2023. The original effective date was meant to be 1 January 2021. In November 2018 the International Accounting Standards Board proposed to delay the effective date by one year to 1 January 2022. In March 2020, the International Accounting Standards Board further deferred the effective date to 1 January 2023. List of insurance contracts to which IFRS 17 applies: Insurance and reinsurance contracts issued by an insurer; Reinsurance contracts held by an insurer; Investment contracts with discretionary participation features (DPF) issued by an insurer, provided the insurer also issues insurance contracts. Under the IFRS 17 general model, insurance contract liabilities will be calculated as the expected present value of future insurance cash flows with a risk adjustment for non-financial risk. The discount rate will reflect the current time value of money, adjusted for financial risk. If the risk-adjusted expected present value of future cash flows would produce a gain at the time a contract is recognized, the model would also require a "contractual service margin" to offset the day 1 gain. The contractual service margin would be released to insurance revenue over the life of the contract. There would also be a new income statement presentation for insurance contracts, including a conceptual definition of revenue, and additional disclosure requirements. For short-duration insurance contracts, insurers are permitted to use a simplified method, known as the Premium Allocation Approach ('PAA'). Under this simplified method, the insurance liability is similar to unearned premium (less insurance acquisition cash flows). Some insurance contracts include participation features where the entity shares the performance of underlying items with policyholders to such an extent that the remaining profit of the insurer has the character of a contractual fee. IFRS 17 has a specific accounting approach for such participating contracts, defined as ‘insurance contracts with direct participation features’. That approach is referred to as the variable fee approach (‘VFA’). Criticism Several features of IFRS 17 have been criticized by preparers. One example is the volatility caused by applying current rates for time value of money. IFRS 17 permits presenting the effects of changes in the discount rate under Other Comprehensive Income to eliminate the volatility from the P&L. Former IASB chairman Hans Hoogervorst regarded the use of a current discount rate as one of the benefits of the new standard, stating that by doing otherwise "the devastating impact of the current low-interest-rate environment on long-term obligations is not nearly as visible in the insurance industry as it is in the defined benefit pension schemes of many companies." He also stated that current discount rates would "increase comparability between insurance companies and between insurance and other parts of the financial industry, such as banks and asset management." Other benefits Hoogervorst saw in the new standard were increased consistency across companies in accounting for insurance contracts and a more theoretically valid measurement of revenue. 
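To make the general model described above concrete, the sketch below runs a toy day-1 measurement: fulfilment cash flows as the discounted expected outflows less inflows plus a risk adjustment, with any day-1 gain deferred into a contractual service margin. All figures, the flat 3% discount rate and the single deterministic cash-flow scenario are invented for illustration; an actual IFRS 17 valuation uses full yield curves and probability-weighted scenarios.

```python
# Minimal, illustrative sketch of the IFRS 17 general model at initial
# recognition. Every number here is made up.

def present_value(cash_flows, rate):
    """Discount a list of year-end cash flows at a flat annual rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Expected future cash flows of a group of contracts (outflows positive).
expected_claims_and_expenses = [400.0, 420.0, 450.0]   # per year
expected_premiums = [500.0, 500.0, 500.0]               # per year
discount_rate = 0.03
risk_adjustment = 60.0   # compensation required for non-financial risk

pv_outflows = present_value(expected_claims_and_expenses, discount_rate)
pv_inflows = present_value(expected_premiums, discount_rate)

# Fulfilment cash flows = PV(outflows) - PV(inflows) + risk adjustment.
fulfilment_cash_flows = pv_outflows - pv_inflows + risk_adjustment

# A negative value would be a day-1 gain, so a contractual service margin
# (CSM) of the same size is set up to defer it; a positive value is an
# onerous-contract loss recognised immediately.
csm = max(0.0, -fulfilment_cash_flows)
day1_loss = max(0.0, fulfilment_cash_flows)

print(f"PV outflows:                {pv_outflows:10.2f}")
print(f"PV inflows:                 {pv_inflows:10.2f}")
print(f"Fulfilment cash flows:      {fulfilment_cash_flows:10.2f}")
print(f"Contractual service margin: {csm:10.2f}")
print(f"Day-1 loss (onerous):       {day1_loss:10.2f}")
```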
Adoption In November 2021 the EU adopted IFRS 17 with an exemption regarding the limitation on aggregating contracts for purposes of subsequent measurement of the contractual service margin, the so-called groups of insurance contracts; under IFRS 17 contracts may only be aggregated into groups that were issued not more than one year apart. Applying this limitation is optional in the EU. 2020 Amendments On 26 June 2019, the IASB released an exposure draft proposing several amendments. Comments on the amendments were open for three months, closing on 25 September 2019. In total, 123 submissions were received. In June 2020 the IASB adopted the final set of amendments and deferred the effective date of the standard to January 1, 2023. External links IFRS17 text IFRS17 on ifrs.org References International Financial Reporting Standards Actuarial science Insurance
IFRS 17
[ "Mathematics" ]
793
[ "Applied mathematics", "Actuarial science" ]
34,017,528
https://en.wikipedia.org/wiki/Tension%20control%20bolt
A tension control bolt (TC bolt) is a heavy duty bolt used in steel frame construction. The head is usually domed and is not designed to be driven. The end of the shank has a spline on it which is engaged by a special power wrench which prevents the bolt from turning while the nut is tightened. When the appropriate tension is reached the spline shears off. See also Screw list Shear pin References Metalworking Threaded fasteners Torque
Tension control bolt
[ "Physics" ]
94
[ "Wikipedia categories named after physical quantities", "Force", "Physical quantities", "Torque" ]
34,022,102
https://en.wikipedia.org/wiki/Water%20retention%20on%20random%20surfaces
Water retention on random surfaces is the simulation of the trapping of water in ponds on a surface of cells of various heights on a regular array such as a square lattice, where water is rained down on every cell in the system. The boundaries of the system are open and allow water to flow out. Water will be trapped in ponds, and eventually all ponds will fill to their maximum height, with any additional water flowing over spillways and out the boundaries of the system. The problem is to find the amount of water trapped or retained for a given surface. This has been studied extensively for random surfaces. Random surfaces One system in which the retention question has been studied is a surface of random heights. Here one can map the random surface to site percolation, and each cell is mapped to a site on the underlying graph or lattice that represents the system. Using percolation theory, one can explain many properties of this system. It is an example of the invasion percolation model in which fluid is introduced in the system from any random site. In hydrology, one is concerned with runoff and formation of catchments. The boundary between different drainage basins (watersheds in North America) forms a drainage divide with a fractal dimension of about 1.22. The retention problem can be mapped to standard percolation. For a system of five equally probable levels, for example, the amount of water stored R5 is just the sum of the water stored in two-level systems R2(p) with varying fractions of levels p in the lowest state: R5 = R2(1/5) + R2(2/5) + R2(3/5) + R2(4/5) Typical two-level systems with p = 0.2, 0.4, 0.6, 0.8 (blue: wet, green: dry, yellow: spillways bordering wet sites) illustrate this decomposition. The net retention of a five-level system is the sum of all these. The top level traps no water because it is far above the percolation threshold for a square lattice, 0.592746. The retention of a two-level system R2(p) is the amount of water connected to ponds that do not touch the boundary of the system. When p is above the critical percolation threshold pc, there will be a percolating cluster or pond that visits the entire system. The probability that a point belongs to the percolating or "infinite" cluster is written as P∞ in percolation theory, and it is related to R2(p) by R2(p)/L^2 = p − P∞ where L is the size of the square. Thus, the retention of a multilevel system can be related to a well-known quantity in percolation theory. To measure the retention, one can use a flooding algorithm in which water is introduced from the boundaries and floods through the lowest spillway as the level is raised. The retention is just the difference in the water level at which a site was flooded minus the height of the terrain below it. Besides the systems of discrete levels described above, one can make the terrain variable continuous, say from 0 to 1. Likewise, one can make the surface height itself be a continuous function of the spatial variables. In all cases, the basic concept of the mapping to an appropriate percolation system remains. A curious result is that a square system of n discrete levels can retain more water than a system of n+1 levels, for sufficiently large order L > L*. This behavior can be understood through percolation theory, which can also be used to estimate L* ≈ (p − pc)^(−ν) where ν = 4/3, p = i*/n where i* is the largest value of i for which i/n < pc, and pc = 0.592746 is the site percolation threshold for a square lattice. 
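Such simulations rest on the flooding algorithm described above. A minimal Python sketch of it follows, using a priority-flood: water levels propagate inward from the open boundary, each cell floods up to the lowest spillway along its best path out, and the retained volume is summed cell by cell. The 50×50 five-level example at the end is arbitrary.

```python
import heapq
import random

def water_retention(height):
    """Water retained on a grid of cell heights with open boundaries,
    computed by a priority-flood over the lowest spillways."""
    n, m = len(height), len(height[0])
    visited = [[False] * m for _ in range(n)]
    heap = []
    # Seed the flood with all boundary cells (open boundaries drain freely).
    for i in range(n):
        for j in range(m):
            if i in (0, n - 1) or j in (0, m - 1):
                heapq.heappush(heap, (height[i][j], i, j))
                visited[i][j] = True
    retained = 0
    while heap:
        level, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and not visited[a][b]:
                visited[a][b] = True
                # Water stands up to the lowest spillway level reached so far.
                retained += max(0, level - height[a][b])
                heapq.heappush(heap, (max(level, height[a][b]), a, b))
    return retained

# Example: retention of a 50x50 five-level random surface (levels 1..5).
random.seed(1)
L = 50
surface = [[random.randint(1, 5) for _ in range(L)] for _ in range(L)]
print(water_retention(surface))
```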
Numerical simulations give values of L* (extrapolated to non-integer values); for example, R2 < R3 for L ≤ 51, but R2 > R3 for L ≥ 52. As n gets larger, crossings become less and less frequent, and the value of L* where a crossing occurs is no longer a monotonic function of n. The retention when the surface is not entirely random but correlated with a Hurst exponent H is discussed in Schrenk et al. See also Drainage divide References Further reading External links https://commons.wikimedia.org/wiki/Category:Associative_magic_squares_of_order_4 Hugo Pfoertner, with links to magic square pictures Hugo Pfoertner, discussion site for Al Zimmermann's Programming Contests Item on Futility Closet Polyomino enumeration and lake patterns Upgrade the model from 2D to 3D Nature 2018 Water retention histogram as a computing problem http://oeis.org/A331507/ Maximum number of ponds Random matrices Critical phenomena Percolation theory
Water retention on random surfaces
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,049
[ "Random matrices", "Physical phenomena", "Phase transitions", "Critical phenomena", "Percolation theory", "Mathematical objects", "Combinatorics", "Matrices (mathematics)", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
34,022,823
https://en.wikipedia.org/wiki/Binary%20black%20hole
A binary black hole (BBH), or black hole binary, is a system consisting of two black holes in close orbit around each other. Like black holes themselves, binary black holes are often divided into binary stellar black holes, formed either as remnants of high-mass binary star systems or by dynamic processes and mutual capture; and binary supermassive black holes, believed to be a result of galactic mergers. For many years, proving the existence of binary black holes was made difficult because of the nature of black holes themselves and the limited means of detection available. However, in the event that a pair of black holes were to merge, an immense amount of energy should be given off as gravitational waves, with distinctive waveforms that can be calculated using general relativity. Therefore, during the late 20th and early 21st century, binary black holes became of great interest scientifically as a potential source of such waves and a means by which gravitational waves could be proven to exist. Binary black hole mergers would be one of the strongest known sources of gravitational waves in the universe, and thus offer a good chance of directly detecting such waves. As the orbiting black holes give off these waves, the orbit decays, and the orbital period decreases. This stage is called binary black hole inspiral. The black holes will merge once they are close enough. Once merged, the single hole settles down to a stable form, via a stage called ringdown, where any distortion in the shape is dissipated as more gravitational waves. In the final fraction of a second the black holes can reach extremely high velocity, and the gravitational wave amplitude reaches its peak. The existence of stellar-mass binary black holes (and gravitational waves themselves) was finally confirmed when the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected GW150914 (detected September 2015, announced February 2016), a distinctive gravitational wave signature of two merging stellar-mass black holes of around 30 solar masses each, occurring about 1.3 billion light-years away. In its final 20 ms of spiraling inward and merging, GW150914 released around 3 solar masses as gravitational energy, peaking at a rate of 3.6×10^49 watts, more than the combined power of all light radiated by all the stars in the observable universe put together. Supermassive binary black hole candidates have been found, but not yet categorically proven. Occurrence Stellar-mass binary black holes have been demonstrated to exist, by the first detection of a black-hole merger event GW150914 by LIGO. Supermassive black-hole (SMBH) binaries are believed to form during galaxy mergers. Some likely candidates for binary black holes are galaxies with double cores still far apart. An example active double nucleus is NGC 6240. Much closer black-hole binaries are likely in single-core galaxies with double emission lines. Examples include SDSS J104807.74+005543.5 and EGSD2 J142033.66 525917.5. Other galactic nuclei have periodic emissions suggesting large objects orbiting a central black hole, for example, in OJ287. Measurements of the peculiar velocity of the mobile SMBH in the galaxy J0437+2456 indicate that it is a promising candidate for hosting either a recoiling or binary SMBH, or an ongoing galaxy merger. The quasar PKS 1302-102 appears to have a binary black hole with an orbital period of 1900 days. 
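As a quick sanity check on the energy figures quoted above (and the peak rate of roughly 200 solar masses per second cited later in the article), the sketch below converts them to SI units; it is simple arithmetic, not a result taken from the detection papers.

```python
# Back-of-the-envelope conversion of the GW150914 energy figures to SI units.
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s

radiated_energy = 3 * M_SUN * C**2     # ~3 solar masses radiated -> joules
peak_luminosity = 200 * M_SUN * C**2   # ~200 solar masses per second -> watts

print(f"energy radiated : {radiated_energy:.2e} J")   # ~5.4e47 J
print(f"peak luminosity : {peak_luminosity:.2e} W")   # ~3.6e49 W
```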
Final parsec problem When two galaxies collide, the supermassive black holes at their centers are very unlikely to hit head-on and would most likely shoot past each other on hyperbolic trajectories, unless some mechanism brings them together. The most important mechanism is dynamical friction, which transfers kinetic energy from the black holes to nearby matter. As a black hole passes a star, the gravitational slingshot accelerates the star while decelerating the black hole. This slows the black holes enough that they form a bound binary system, and further dynamical friction steals orbital energy from the pair until they are orbiting within a few parsecs of each other. However, this process also ejects matter from the orbital path, and as the orbits shrink, the volume of space the black holes pass through reduces, until there is so little matter remaining that it could not cause a merger within the age of the universe. Gravitational waves can cause significant loss of orbital energy, but not until the separation shrinks to a much smaller value, roughly 0.01–0.001 parsec. Nonetheless, supermassive black holes appear to have merged, and what appears to be a pair in this intermediate range has been observed in PKS 1302-102. The question of how this happens is the "final parsec problem". A number of solutions to the final parsec problem have been proposed. Most involve mechanisms to bring additional matter, either stars or gas, close enough to the binary pair to extract energy from the binary and cause it to shrink. If enough stars pass close by to the orbiting pair, their gravitational ejection can bring the two black holes together in an astronomically plausible time. Dark matter is also being considered, although it appears that self-interacting dark matter is required to avoid the same problem of it all being ejected before the merger occurs. One mechanism that is known to work, although infrequently, is a third supermassive black hole from a second galactic collision. While it is possible that one of the three is ejected, their large masses make it more likely that one is not ejected, but instead the three have repeated interactions. The resultant chaotic orbits allow two additional energy loss mechanisms: the black holes orbit through a substantially larger volume of the galaxy, interacting with (and losing energy to) a much greater amount of matter, and the orbits can become highly eccentric, allowing energy loss by gravitational radiation at the point of closest approach. Lifecycle Inspiral The first stage of the life of a binary black hole is the inspiral, a gradually shrinking orbit. The first stages of the inspiral take a very long time, as the gravitational waves emitted are very weak when the black holes are distant from each other. In addition to the orbit shrinking due to the emission of gravitational waves, extra angular momentum may be lost due to interactions with other matter present, such as other stars. As the black holes’ orbit shrinks, the speed increases, and gravitational wave emission increases. When the black holes are close the gravitational waves cause the orbit to shrink rapidly. The last stable orbit or innermost stable circular orbit (ISCO) is the innermost complete orbit before the transition from inspiral to merger. Merger This is followed by a plunging orbit, in which the two black holes meet, followed by the merger. Gravitational wave emission peaks at this time. Ringdown Immediately following the merger, the now single black hole will "ring". 
This ringing is damped in the next stage, called the ringdown, by the emission of gravitational waves. The ringdown phase starts when the black holes approach each other within the photon sphere. In this region most of the emitted gravitational waves go towards the event horizon, and the amplitude of those escaping reduces. Remotely detected gravitational waves have an oscillation with fast-reducing amplitude, as echoes of the merger event result from tighter and tighter spirals around the resulting black hole. Observation The first observation of stellar-mass binary black holes merging, GW150914, was performed by the LIGO detector. As observed from Earth, a pair of black holes with estimated masses around 36 and 29 times that of the Sun spun into each other and merged to form an approximately 62-solar-mass black hole on 14 September 2015, at 09:50 UTC. Three solar masses were converted to gravitational radiation in the final fraction of a second, with a peak power of 3.6×10^56 erg/s (200 solar masses per second), which is 50 times the total output power of all the stars in the observable universe. The merger took place about 1.3 billion light-years from Earth, between 600 million and 1.8 billion years ago. The observed signal is consistent with the predictions of numerical relativity. Dynamics modelling Some simplified algebraic models can be used for the case where the black holes are far apart, during the inspiral stage, and also to solve for the final ringdown. Post-Newtonian approximations can be used for the inspiral. These approximate the general-relativity field equations by adding extra terms to equations in Newtonian gravity. Orders used in these calculations may be termed 2PN (second-order post-Newtonian), 2.5PN, or 3PN (third-order post-Newtonian). The effective-one-body (EOB) approximation solves the dynamics of the binary black-hole system by transforming the equations to those of a single object. This is especially useful where mass ratios are large, such as a stellar-mass black hole merging with a galactic-core black hole, but can also be used for equal-mass systems. For the ringdown, black-hole perturbation theory can be used. The final Kerr black hole is distorted, and the spectrum of frequencies it produces can be calculated. Description of the entire evolution, including merger, requires solving the full equations of general relativity. This can be done in numerical relativity simulations. Numerical relativity models space-time and simulates its change over time. In these calculations it is important to have enough fine detail close to the black holes, and yet have enough volume to determine the gravitational radiation that propagates to infinity. In order to reduce the number of points such that the numerical problem is tractable in a reasonable time, special coordinate systems can be used, such as Boyer–Lindquist coordinates or fish-eye coordinates. Numerical-relativity techniques steadily improved from the initial attempts in the 1960s and 1970s. Long-term simulations of orbiting black holes, however, were not possible until three groups independently developed groundbreaking new methods to model the inspiral, merger, and ringdown of binary black holes in 2005. In the full calculations of an entire merger, several of the above methods can be used together. It is then important to fit the different pieces of the model that were worked out using different algorithms. The Lazarus Project linked the parts on a spacelike hypersurface at the time of the merger. 
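At the lowest (Newtonian quadrupole) order underlying these approximations, the decay of a circular orbit has a closed-form solution, and the time to coalescence scales as the fourth power of the separation (Peters 1964). The sketch below evaluates that leading-order estimate; the masses and separations are arbitrary examples, chosen to show why parsec-scale supermassive binaries stall (the final parsec problem discussed above) while sub-parsec ones merge within the age of the universe.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres per parsec
YEAR = 3.156e7       # seconds per year

def coalescence_time(m1, m2, a0):
    """Leading-order (quadrupole) merger time, in seconds, for a circular
    binary of masses m1, m2 (kg) starting at separation a0 (m), driven by
    gravitational-wave emission alone (Peters 1964)."""
    return (5.0 / 256.0) * C**5 * a0**4 / (G**3 * m1 * m2 * (m1 + m2))

# Two 1e8 solar-mass black holes: stalled at 1 pc, quick at 0.01 pc.
m = 1e8 * M_SUN
for sep_pc in (1.0, 0.01):
    t = coalescence_time(m, m, sep_pc * PARSEC)
    print(f"separation {sep_pc:5.2f} pc -> merger in {t / YEAR:.1e} years")
```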
Results from the calculations can include the binding energy. In a stable orbit the binding energy is a local minimum relative to parameter perturbation. At the innermost stable circular orbit the local minimum becomes an inflection point. The gravitational waveform produced is important for observation prediction and confirmation. When inspiralling reaches the strong zone of the gravitational field, the waves scatter within the zone producing what is called the post-Newtonian tail (PN tail). In the ringdown phase of a Kerr black hole, frame-dragging produces a gravitational wave with the horizon frequency. In contrast, the Schwarzschild black-hole ringdown looks like the scattered wave from the late inspiral, but with no direct wave. The radiation reaction force can be calculated by Padé resummation of gravitational wave flux. A technique to establish the radiation is the Cauchy-characteristic extraction technique CCE, which gives a close estimate of the flux at infinity, without having to calculate at larger and larger finite distances. The final mass of the resultant black hole depends on the definition of mass in general relativity. The Bondi mass is calculated from the Bondi–Sachs mass-loss formula, m_B(t) = m_ADM − ∫ f(u) du (integrated over retarded time from 0 to t), with f(u) being the gravitational wave flux at retarded time u. f is a surface integral of the news function at null infinity varied by solid angle. The Arnowitt–Deser–Misner (ADM) energy, or ADM mass, is the mass as measured at infinite distance and includes all the gravitational radiation emitted: m_ADM = m_B(t) + ∫ f(u) du. Angular momentum is also lost in the gravitational radiation. This is primarily in the z axis of the initial orbit. It is calculated by integrating the product of the multipolar metric waveform with the news function complement over retarded time. Shape One of the problems to solve is the shape or topology of the event horizon during a black-hole merger. In numerical models, test geodesics are inserted to see whether they encounter an event horizon. As two black holes approach each other, a "duckbill" shape protrudes from each of the two event horizons towards the other one. This protrusion extends longer and narrower until it meets the protrusion from the other black hole. At this point in time the event horizon has a very narrow X-shape at the meeting point. The protrusions are drawn out into a thin thread. The meeting point expands to a roughly cylindrical connection called a bridge. Simulations had not produced any event horizons with toroidal topology (ring-shaped). Some researchers suggested that it would be possible if, for example, several black holes in the same nearly circular orbit coalesce. Black-hole merger recoil An unexpected result can occur with binary black holes that merge, in that the gravitational waves carry momentum, and the merging black-hole pair accelerates, seemingly violating Newton's third law. The center of gravity can add over 1000 km/s of kick velocity. The greatest kick velocities (approaching 5000 km/s) occur for equal-mass and equal-spin-magnitude black-hole binaries, when the spin directions are optimally oriented to be counter-aligned, parallel to the orbital plane or nearly aligned with the orbital angular momentum. This is enough to escape large galaxies. With more likely orientations, a smaller effect takes place, perhaps only a few hundred kilometers per second. This sort of speed can eject merging binary black holes from globular clusters, thus preventing the formation of massive black holes in globular-cluster cores. 
This, in turn, reduces the chances of subsequent mergers, and thus the chance of detecting gravitational waves. For non-spinning black holes a maximum recoil velocity of 175 km/s occurs for masses in the ratio of five to one. When spins are aligned in the orbital plane, a recoil of 5000 km/s is possible with two identical black holes. Parameters that may be of interest include the point at which the black holes merge, the mass ratio that produces maximum kick, and how much mass/energy is radiated via gravitational waves. In a head-on collision this fraction is calculated at 0.002, or 0.2%. One of the best candidates of the recoiled supermassive black holes is CXO J101527.2+625911. See also List of most massive black holes References External links + Articles containing video clips
Binary black hole
[ "Physics", "Astronomy" ]
3,021
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
50,911,158
https://en.wikipedia.org/wiki/KS%20Steel
KS Steel is a permanent magnetic steel with three times the magnetic resistance of tungsten steel, which was developed in 1917 by the Japanese scientist and inventor Kotaro Honda. "KS" stands for Kichizaemon Sumitomo, the head of the family-run conglomerate, who provided financial support for the research leading to KS Steel's invention. Honda went on to invent NKS steel in 1933, whose magnetic resistance is several times higher than that of KS Steel. History After World War I, when Japan had to cope with painful restrictions on imports of materials from foreign countries such as Germany, physicist Kotaro Honda was motivated to study alloys due to the need for domestic steel production. He opened his RIKEN-Honda Laboratory at Tohoku Imperial University in 1922 after he invented KS steel in 1917; it is a permanent magnetic steel with three times the magnetic resistance of tungsten steel. The initials KS in the name of the steel come from Kichizaemon Sumitomo, who was the head of the family that provided financial support for the research leading to the invention. Material properties The composition of KS steel is 0.4–0.8 percent carbon; 30–40 percent cobalt; 5–9 percent tungsten; and 1.5–3 percent chromium. KS steel is best tempered when heated to 950 °C and then quenched in heavy oil. The residual magnetism is reduced by only 6 percent when artificially aged. The yield strength of KS steel is above 500, the tensile strength above 620, and the elongation above 14. The maximum energy product (BH)max of KS steel is 30 kJ/m^3. See also MKM steel RIKEN Ten Japanese Great Inventors References Steels Japanese inventions 1917 introductions Magnetic alloys Ferromagnetic materials
KS Steel
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
368
[ "Alloy stubs", "Ferromagnetic materials", "Steels", "Electric and magnetic fields in matter", "Materials science", "Magnetic alloys", "Materials", "Alloys", "Matter" ]
56,683,458
https://en.wikipedia.org/wiki/Brennan%20conjecture
The Brennan conjecture is a mathematical hypothesis (in complex analysis) for estimating (under specified conditions) the integral powers of the moduli of the derivatives of conformal maps into the open unit disk. The conjecture was formulated by James E. Brennan in 1978. Let W be a simply connected open subset of the complex plane with at least two boundary points in the extended complex plane. Let φ be a conformal map of W onto the open unit disk. The Brennan conjecture states that the integral of |φ′|^p over W with respect to area measure is finite whenever 4/3 < p < 4. Brennan proved the result when 4/3 < p < 3 + δ for some small constant δ > 0. Bertilsson proved in 1999 that the result holds for p up to approximately 3.42, but the full result remains open. References Conjectures Unsolved problems in mathematics
Brennan conjecture
[ "Mathematics" ]
131
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
56,683,615
https://en.wikipedia.org/wiki/Mass%20mortality%20event
A mass mortality event (MME) is an incident that kills a vast number of individuals of a single species in a short period of time. The event may put a species at risk of extinction or upset an ecosystem. This is distinct from the mass die-off associated with short-lived and synchronous emergent insect taxa, which is a regular and non-catastrophic occurrence. Causes of MMEs include disease and human-related activities such as pollution. Climatic extremes and other environmental influences such as oxygen stress in aquatic environments play a role, as does starvation. In many MMEs there are multiple stressors. An analysis of such events from 1940 to 2012 found that these events have become more common for birds, fish and marine invertebrates, but have declined for amphibians and reptiles and not changed for mammals. Known mass mortality events Migratory birds (1904), Minnesota and Iowa In March 1904, 1.5 million migrating birds died in Minnesota and Iowa during a strong snowstorm. According to The Guardian, this was the largest avian mortality event on record in the region. Records of MMEs have been kept since the 1880s. MMEs of this size are rare, however, and few before or since have been as big as the 1904 event. According to the records, MMEs "are always associated with extreme weather events such as a drop in temperature, snowstorm or hailstorm". George River caribou (1984), Canada In 1984, about 10,000 caribou of the George River caribou herd—one of Canada's migratory woodland caribou herds—drowned during their bi-annual crossing of the Caniapiscau River when the James Bay Hydro Project flooded the region. Harbour seals (1988), North Sea In 1988, the deaths of 20,000 harbour seals in the North Sea were found to be caused by phocine distemper virus. Sea lions (1998), New Zealand Ten years later, two strains of bacteria were implicated in the deaths of approximately 1,600 New Zealand sea lions. Fur seals (2007), Prince Edward Islands On Marion Island in 2007, some 250–300 adult male subantarctic fur seals died in a two-week period. It was suggested, though not proven, that this gender-biased mortality was caused by Streptococcus sanguinis, a bacterium carried by the house mouse, an alien species accidentally introduced to the island in the 1800s. Muskoxen (2003), Canada In 2003, a rain-on-snow event encased the ground in ice, resulting in the starvation of 20,000 muskoxen on Banks Island in the Canadian Arctic. Birds (2010), Arkansas Shortly before midnight on New Year's Eve 2010, between 3,000 and 5,000 red-winged blackbirds fell from the sky in Beebe, Arkansas. Most died upon hitting the ground, but some were alive, though dazed. Laboratory tests were performed and the Arkansas Livestock and Poultry Commission, the National Wildlife Health Center in Madison, Wisconsin, and the University of Georgia's wildlife disease study group procured specimens of the dead birds. In addition to the blackbirds, a few grackles and starlings also fell from the sky in the same incident. A test report from the state poultry lab concluded that the birds had died from blunt trauma, with an unlicensed fireworks discharge being the likely cause. Seabirds and marine life (2010–2013), Gulf of Mexico The months-long Deepwater Horizon oil spill that began in April 2010 in the coastal waters of the Gulf of Mexico resulted in about 600,000 to 800,000 bird mortalities. Dolphins and other species of marine life continued to die in record numbers into 2013. 
Birds (2011), Arkansas The Beebe, Arkansas bird deaths were repeated on New Year's Eve of the following year, 2011, with the reported number of dead birds being 5,000. On 3 January 2011, more than five hundred starlings, red-winged blackbirds, and sparrows fell dead in Pointe Coupee Parish, Louisiana. On 5 January, "hundreds" of dead turtle doves were found at Faenza, Italy. According to Italian news agencies, a huge number of the birds were found to have blue stains on their beaks that may have been caused by paint or hypoxia. Over the weekend of 8–9 January, "over a hundred" dead birds were found clustered together on a California highway, while "thousands of dead gizzard shad" (a species of fish) turned up in the harbors of Chicago. Fish (2011), Brazil Between 28 December 2010 and 3 January 2011, 100 tons of dead fish washed ashore on the Brazilian coast. On 3 January, an estimated two million dead fish were found floating in the Chesapeake Bay in Maryland. On 7 March, millions of small fish, including anchovies, sardines, and mackerel, were found dead in the area of King Harbor at Redondo Beach, California. An investigation by the authorities within the area concluded that the sardines had become trapped within the harbor and depleted the ambient oxygen, which resulted in the deaths. The authorities stated that the event was "unusual, but not unexplainable". Cows (2011), Wisconsin On 14 January, approximately two hundred cows were found dead in a field in Stockton, Wisconsin. The owner of the cattle told deputies that he suspected the animals died of infectious bovine rhinotracheitis (IBR), or bovine virus diarrhea (BVD). Authorities in Wisconsin sent samples from the carcasses to labs in Madison in order to determine cause of death. Saiga antelope (2015), Kazakhstan In 2015, some 200,000 saiga antelope died within a period of one week in an area of the Betpak-Dala desert region of Kazakhstan. They had gathered in large groups for their annual calving. It was determined that warm and humid temperatures had caused Pasteurella multocida, a strain of bacteria that normally lives harmlessly in their tonsils, to cross into their bloodstream and cause hemorrhagic septicemia. This event wiped out 60% of the population of this critically endangered species. Mass mortality events are not uncommon for saiga. In 1981, 70,000 died; in 1988 there were 200,000 deaths; and more recently, in 2010, 12,000 died. Seabirds (2015–2016), Pacific Ocean beaches Starting in the summer of 2015 and continuing into the spring of 2016, about 62,000 dead or dying birds were found on Pacific Ocean beaches from California to Alaska. Some researchers believe that as many as one million common murres may have died in the massive die-off. Fish (2016), Vietnam In May 2016, the Los Angeles Times reported that millions of fish had washed ashore along stretches of beach on the coast of north-central Vietnam. This included the shoreline in the Phu Loc district, in Thua Thien Hue province. Possible causes include industrial pollution, as government researchers had found that "toxic elements" had caused the "unprecedented" fish mortalities. Concerns were raised about a "massive Taiwanese-owned steel plant" that was allegedly "pumping untreated wastewater" into the ocean. Mule deer (2017), California In the Inyo National Forest in California, there are several records of large numbers of migrating mule deer falling to their deaths by slipping on ice while crossing mountain passes. 
This has occurred when heavy snowfalls have persisted until fall, and have been turned to ice by frequent thawing and refreezing. Brumby (2019), Australia In 2019, an extreme heatwave in central Australia led to the death of approximately 40 brumbies. Bats (2014, 2018), Australia In 2014 and 2018, heatwaves in Australia killed significant portions of local bat populations. Migratory birds (2020), New Mexico In August 2020, observers reported that hundreds of dead migratory birds heading south for the winter had been found at the White Sands Missile Range in New Mexico. By September, the number had increased to tens of thousands, and the die-off had spread across at least New Mexico, Colorado, Texas, Arizona, and farther north into Nebraska. The birds were migrating species, including "owls, warblers, hummingbirds, loons, flycatchers, and woodpeckers". They seemed to be emaciated, as if they had just kept on flying until they dropped. Possible causes of the deaths include the climate crisis and wildfires, according to The Guardian. Fish (2022), River Oder In 2022, a mass die-off of fish, beaver and other wildlife occurred in the Oder River, between Poland and Germany. Fish (2023), Darling River In March 2023, millions of fish were reported dead along the Darling River at Menindee, following a heatwave. Initially, police attributed the cause to (naturally occurring) hypoxic blackwater. Subsequently it was announced that the New South Wales government would treat the deaths as a "pollution incident", thus giving the Environmental Protection Authority (EPA) greater investigative powers. Dairy cattle (2023), Texas explosion In April 2023, an explosion and subsequent fire at South Fork Dairy, near Dimmitt, Texas, resulted in the deaths of an estimated 18,000 dairy cattle. 
The event in Arkansas was attributed primarily to an unexpected temperature change causing atmospheric turbulence (visible on NEXRAD Doppler weather radar images) above the birds' roosting areas, which likely disoriented them. Apocalypse Some Christians asserted that the cluster of cow deaths in 2011 was a sign of the Apocalypse. They reference a passage in the Book of Hosea in the Hebrew Bible which reads: "By swearing, and lying, and killing, and stealing, and committing adultery, they break out, and blood toucheth blood," and the prophecy continues "Therefore shall the land mourn, and every one that dwelleth therein shall languish, with the beasts of the field, and with the fowls of heaven; yea, the fishes of the sea also shall be taken away." The term aflockalypse was adopted by some media commentators in reference to the 2010–2011 bird deaths. Aflockalypse is a portmanteau of the words "flock" and "apocalypse". See also Fish kill Harmful algal bloom References Natural disasters Biological events Animal death
Mass mortality event
[ "Physics" ]
2,466
[ "Weather", "Physical phenomena", "Natural disasters" ]
56,687,142
https://en.wikipedia.org/wiki/Volumetric%20capture
Volumetric capture or volumetric video is a technique that captures a three-dimensional space, such as a location or performance. This type of volumography acquires data that can be viewed on flat screens as well as using 3D displays and VR headset. Consumer-facing formats are numerous and the required motion capture techniques lean on computer graphics, photogrammetry, and other computation-based methods. The viewer generally experiences the result in a real-time engine and has direct input in exploring the generated volume. History Recording talent without the limitation of a flat screen has been depicted in science-fiction for a long time. Holograms and 3D real-world visuals have featured prominently in Star Wars, Blade Runner, and many other science-fiction productions over the years. Through the growing advancements in the fields of computer graphics, optics, and data processing, this fiction has slowly evolved into a reality. Volumetric video is the logical next step after stereoscopic movies and 360° videos in that it combines the visual quality of photography with the immersion and interactivity of spatialized content and could prove to be the most important development in the recording of human performance since the creation of contemporary cinema. Computer graphics and VFX Creating 3D models from video, photography, and other ways of measuring the world has always been an important topic in computer graphics. The ultimate goal is to imitate reality in minute detail while giving creatives the power to build worlds atop this foundation to match their vision. Traditionally, artists create these worlds using modeling and rendering techniques developed over decades since the birth of computer graphics. Visual effects in movies and video games paved the way for advances in photogrammetry, scanning devices, and the computational backend to handle the data received from these new intensive methods. Generally, these advances have come as a result of creating more advanced visuals for entertainment and media, but have not been the goal of the field itself. LIDAR LIDAR scanning describes a survey method that uses laser-sampled points densely packed to scan static objects into a point cloud. This requires physical scanners and produces enormous amounts of data. In 2007 the band Radiohead used it extensively to create a music video for "House of Cards", capturing point cloud performances of the singer's face and of select environments in one of the first uses of this technology for volumetric capture. Director James Frost collaborated with media artist Aaron Koblin to capture 3D point-clouds used for this music clip, and while the final output of this work was still a rendered flat representation of the data, the capture and mindset of the authors was already ahead of its time. Point clouds, being distinct samples of three-dimensional space with position and color, create a high fidelity representation of the real world with a huge amount of data. However, viewing this data in real-time was not yet possible. Structured light In 2010 Microsoft brought the Kinect to the market, a consumer product that used structured light in the infrared spectrum to generate a 3D mesh from its camera. While the intent was to facilitate and innovate in user input and gameplay, it was very quickly adapted as a generic capture device for 3D data in the volumetric capture community. 
By projecting a known pattern onto the space and capturing the distortion by objects in the scene, the resulting capture can then be computed into different outputs. Artists and hobbyists started to make tools and projects around the affordable device, sparking a growing interest in volumetric capture as a creative medium. Researchers at Microsoft then constructed an entire capture stage using multiple cameras, Kinect devices, and algorithms that generated a full volumetric capture from the combined optical and depth information. This is now the Microsoft Mixed Reality Capture Studio, used today as part of both their research division and in certain select commercial experiences such as the Blade Runner 2049 VR experience. There are currently three studios in operation: Redmond, WA; San Francisco, CA; and London, England. While this remains a very interesting setup for the high-end market, the affordable price of a single Kinect device led more experimental artists and independent directors to become active in the volumetric capture field. Two results from this activity are Depthkit and EF EVE™. EF EVE™ supports an unlimited number of Azure Kinect sensors on one PC, giving full volumetric capture with easy setup. It also has automatic sensor calibration and VFX functionality. Depthkit is a software suite that allows the capture of geometry data with one structured light sensor including the Azure Kinect, as well as high quality color detail from an attached witness camera. Photogrammetry Photogrammetry describes the process of measuring data based on photographic reference. While the technique is as old as photography itself, only through advances over the years in volumetric capture research has it become possible to capture more and more geometry and texture detail from a large number of input images. The result is usually split into two composited sources, static geometry and full performance capture. For static geometry, sets that are captured with a large number of overlapping digital images are then aligned to each other using similar features in the images and used as a base for triangulation and depth estimation. This information is interpreted as 3D geometry, resulting in a near-perfect replica of the set. Full performance capture, however, uses an array of video cameras to capture real-time information. Those synchronized cameras are then used frame-by-frame to generate a set of points or geometry that can be played back at speed, resulting in the full volumetric performance capture that can be composited into any environment. In 2008, 4DViews installed a first volumetric video capture system at DigiCast studio in Tokyo (JP). Later in 2015, 8i contributed to the field, and recently Intel, Microsoft, and Samsung have joined in by creating their own capture stages for performance capture and photogrammetry. Virtual reality As volumetric video developed into a commercially applicable approach to environment and performance capture, the ability to move about the results with six degrees of freedom and true stereoscopy necessitated a new type of display device. With the rise of consumer-facing VR in 2016 through devices such as the Oculus Rift and HTC Vive, this was suddenly possible. Stereoscopic viewing and the ability to rotate and move the head as well as move in a small space allows immersion into environments well beyond what was possible in the past. 
The photographic nature of the captures combined with this immersion and the resulting interactivity is one giant step closer to being the holy grail of true virtual reality. With the rise of 360° video content, the demand for 6-DOF capture is rising, and VR in particular drives the applications for this technology, slowly fusing cinema, games and art with the field of volumetric capture research. Volumetric video is currently being used to deliver virtual concerts via the Scenez application on Meta Quest and Apple Vision Pro devices. Light fields Light fields describe at a given sample point the incoming light from all directions. This is then used in post processing to generate effects such as depth of field as well as allowing the user to move their head slightly. Since 2006 Lytro has been creating consumer-facing cameras to allow the capture of light fields. Fields can be captured inside-out in camera or outside-in from renderings of 3D geometry, representing a huge amount of information ready to be manipulated. Currently data rates are still a large issue, and the technique has large potential for the future as it samples light and displays the result in a variety of ways. Another by-product of this technique is a reasonably accurate depth map of the scene, meaning that each pixel has information about its distance from the camera. Facebook is using this idea in its Surround360 camera family to capture 360° video footage that is stitched with the help of distance maps. Extracting this raw data is possible and allows a high-resolution capture of any stage. Again, the data rates combined with the fidelity of the depth maps are huge bottlenecks, but these may soon be overcome with more advanced depth estimation techniques, compression, as well as parametric light fields. Workflows Different workflows to generate volumetric video are currently available. These are not mutually exclusive and are used effectively in combinations. Here are some examples that show a couple of them: Mesh-based This approach generates a more traditional 3D triangle mesh similar to the geometry used for computer games and visual effects. The data volume is usually smaller, but the quantization of real-world data into lower-resolution data limits the resolution and visual fidelity. Trade-offs are generally made between mesh density and final experience performance. Photogrammetry is usually used as a base for static meshes, and is then augmented with performance capture of talent via the same underlying technology of videogrammetry. Intense cleanup is required to create the final set of triangles. To extend beyond the physical world, CG techniques can be deployed to further enhance the captured data, employing artists to build onto and into the static mesh as necessary. The playback is usually handled by a real-time engine and resembles a traditional game pipeline in implementation, allowing interactive lighting changes and creative and archivable ways of compositing static and animated meshes together. Point-based Recently the spotlight has shifted towards point-based volumetric capture. The resulting data is represented as points or particles in 3D space carrying attributes such as color and point size with them. This allows for more information density and higher resolution content. The data rates required are large, and current graphics hardware is not optimized to render this, being optimized for a mesh-based render pipeline. The main advantage of points is the potential for higher spatial resolution. 
Points can either be scattered on triangle meshes with pre-computed lighting, or used directly from a LIDAR scanner. Performance of talent is captured the same way as per the mesh-based approach, but more time and computational power can be used at production time to further improve the data. At playback, 'level of detail' can be utilized to manage the computational load on the playback device, increasing or decreasing the number of polygons. Interactive light changes are harder to realize as the bulk of the data is pre-baked. This means that while the lighting information stored with the points is very accurate and high-fidelity, it lacks the ability to easily change in any given situation. Another benefit of point capture is that computer graphics can be rendered with very high quality and also stored as points, opening the door for a perfect blend of real and imagined elements. After capturing and generating the data, editing and compositing is done within a realtime engine, connecting recorded actions to tell the intended story. The final product can then be viewed either as a flat rendering of the captured data, or interactively in a VR headset. One goal with the point-based approach to volumetric capture is to stream point data from the cloud to the user at home, allowing the creation and dissemination of realistic virtual worlds on demand; a second goal more recently considered would be a real-time data stream of live events. This requires very high bandwidth, as the pixel information includes depth data (i.e. pixels become voxels). Promises With the general understanding of the technology in mind, this chapter will describe the advances on the horizon for entertainment and other industries, as well as the potential this technology has to change the media landscape. True immersion As volumetric video evolves into global capture and the display hardware evolves to match, we will enter into an era of true immersion where the nuances of captured environments combined with those of captured performances will convey emotionality in a whole new medium, blurring the boundaries between real and virtual worlds. This breakthrough in the world of sensory trickery will spark an evolution in the way we consume media, and while technologies for other senses like scent, smell, and proprioception are still in the research and development stage, one day in the not-so-distant future we will travel convincingly to new locales, both real and imagined. Industries in tourism and journalism will find new life in the ability to transport a viewer or visitor safely to a location, while others such as architectural visualization and civil engineering will find ways to build entire structures and cities and explore them without the need for a single swing of a hammer. Full capture and re-use Once a capture is created and saved, it can be re-used and even possibly re-purposed ad nauseam for circumstances beyond the initial envisioned scope. Creating a virtual set enables volumetric videographers and cinematographers to create stories and plan for shots without needing a crew or to even be present at the physical set itself, and a proper visualization can help an actor or performer block out a scene or action with the comfort that their practice isn't at the expense of the rest of production. 
Old sets can be captured digitally before being torn down, allowing them to persist eternally as a place to revisit and explore for entertainment and inspiration, and multiple sets can be kit-bashed in such a way as to tighten the iteration loops of set design, sound design, coloring, and many other aspects of production. Traditional skillsets One area of concern with the growing field of volumetric capture is the shrinking of demand for traditional skillsets like modeling, lighting, animation, etc. However, while in the future the stack of production-oriented volumetric capture technologies will grow and grow, so too will the demand for traditional skillsets. Volumetric capture excels at capturing static data or pre-rendered animated footage. It cannot, however, create an imaginary environment or natively allow for any level of interactivity. This is where skilled artists and developers will be in highest demand, creating seamless interactive events and assets to complement the existing geometry data, or using the existing data as a base on which to build, similar to how a digital painter might paint over a basic 3D render. The onus will be on the artisan to ensure they keep up with the tools and workflows that best suit their skillsets, but the prudent will find that the production pipeline of the future will involve many opportunities to streamline the creation of labor-intensive content and allow for investment in bigger creative challenges. Most importantly, skills currently rendered semi-obsolete by advances in computer graphics and off-line rendering will once again be made relevant, as the fidelity of things like real, hand-built sets and quality tailored costumes rendered as high-volume captures will almost always be far more immersive than anything completely CG. By combining these real-life set captures with the volumetric captures of additional CG elements, we will be able to blend real-life and our imagination in a way that we have only previously been able to do on a flat-screen, creating new fields in areas like compositing and VFX. Challenges The capture and creation process of volumetric data is full of challenges and unsolved problems. It is the next step in cinematography and comes with issues that will be resolved over time. Visual language As every medium creates its own visual language, rules and creative approaches, volumetric video is still in its infancy in this respect. This compares to the addition of sound to moving pictures. New design philosophies had to be created and tested. Currently the language of film, the art of directing, has been battle-hardened over 100 years. In a fully six-degrees-of-freedom, interactive and non-linear world, many of the traditional approaches can't function. The more experiences are created and analyzed, the quicker the community can come to a conclusion about this language of experiences. Pipeline disruption Current video and film making production pipelines are not immediately ready to transition to volumetric production. Every step in the film making process needs to be rethought and reinvented. On-set capture, directing of talent on set, editing, photography, story telling, and much more are all fields that need to spend time to adapt to the volumetric workflows. Currently each production is using a variety of technologies as well as trying out the rules of engagement. Data rates In order to store and play back the captured data, enormous sets need to be streamed to the consumer. 
Currently the most effective way is to build bespoke apps that are delivered. There is no standard yet that takes generated volumetric video and makes it experienceable at home. Compression of this data is starting to become available, with the Moving Picture Experts Group in search of a reasonable way to stream the data. This would allow truly interactive immersive projects to be distributed and worked on more efficiently, and needs to be solved before the medium becomes mainstream. Future applications Besides the application in entertainment, several other industries have a vested interest in the capture of scenes to the detail described above. Sports events would benefit greatly from a detailed replay of the state of a game. This is already happening in American football and baseball, as well as British soccer. Those 360° replays will enable viewers in the future to analyze a match from multiple perspectives. Documenting spaces for historical events, captured live or recreated, will benefit the educational sector greatly. Virtual lectures depicting big events in history with an immersive component will help future generations imagine spaces and learn collaboratively about events. This can be abstracted and used to visualize micro-scale scenarios on a cellular level as much as epic events that changed the course of the human experiment. The main advantage of virtual field trips is the democratisation of high-end educational scenarios. Being able to take part in visiting a museum without having to physically be there allows a broader audience and also enables institutions to show their entire inventory rather than the subsection currently on display. Real estate and tourism could preview destinations accurately and make the retail industry much more customised for the individual. Capturing products has already been done for shoes, and magic mirrors can be used in stores to visualize this. Shopping malls have started to embrace this to repopulate them by attracting customers with VR arcades as well as presenting merchandise virtually. References List of experiences contributing House of Cards, Radiohead, Music video Carne Y Arena, Alejandro G. Inarritu, LACMA Art Exhibit Blade Runner 2049: Memory Lab, VR Experience (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA) William Patrick Corgan: Aeronaut, VR Experience and Music Video (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA) Awake: Episode One, Start VR & Animal Logic, Interactive Cinematic VR experience (filmed at Microsoft Mixed Reality Capture Studio, Redmond, WA) Scenez on Meta Quest and Apple Vision Pro (captured at 4D Fun Studios, Culver City, CA) Scenez XR on Meta Quest and Apple Vision Pro (captured at 4D Fun Studios, Culver City, CA) Video Film and video technology Display 3D computer graphics Motion in computer vision Telepresence
Volumetric capture
[ "Physics" ]
3,813
[ "Physical phenomena", "Motion (physics)", "Motion in computer vision" ]
39,484,598
https://en.wikipedia.org/wiki/Super-Poissonian%20distribution
In mathematics, a super-Poissonian distribution is a probability distribution that has a larger variance than a Poisson distribution with the same mean. Conversely, a sub-Poissonian distribution has a smaller variance. An example of a super-Poissonian distribution is the negative binomial distribution. The Poisson distribution is a result of a process where the time (or an equivalent measure) between events has an exponential distribution, representing a memoryless process. Mathematical definition In probability theory it is common to say a distribution, D, is a sub-distribution of another distribution E if D's moment-generating function M_D(t) is bounded by E's up to a constant. In other words, M_D(t) ≤ C·M_E(t) for some C > 0. This implies that if X and Y are independent and both from sub-E distributions, then so is X + Y. A distribution is strictly sub-E if C ≤ 1. From this definition a distribution, D, is sub-Poissonian if M_D(t) ≤ C·exp(λ(e^t − 1)) for all t > 0, where exp(λ(e^t − 1)) is the moment-generating function of a Poisson distribution with mean λ. An example of a sub-Poissonian distribution is the Bernoulli distribution, since its moment-generating function satisfies 1 − p + p·e^t ≤ exp(p(e^t − 1)). Because sub-Poissonianism is preserved by sums, we get that the binomial distribution is also sub-Poissonian. References Poisson point processes Types of probability distributions
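A quick numeric check (an illustration, not content from the article) makes the three cases concrete via the variance-to-mean ratio (Fano factor): it equals 1 for a Poisson distribution, exceeds 1 for the super-Poissonian negative binomial, and stays below 1 for the binomial, which is a sum of sub-Poissonian Bernoulli variables:

import math

def poisson_stats(lam):
    return lam, lam                                  # mean, variance

def negative_binomial_stats(r, p):
    # failures before the r-th success, success probability p
    return r * (1 - p) / p, r * (1 - p) / p**2

def binomial_stats(n, p):
    return n * p, n * p * (1 - p)

cases = {
    "Poisson(lambda=4)":    poisson_stats(4.0),
    "NegBin(r=4, p=0.5)":   negative_binomial_stats(4, 0.5),
    "Binomial(n=8, p=0.5)": binomial_stats(8, 0.5),
}
for name, (mean, var) in cases.items():
    print(f"{name:22s} mean={mean:.2f} var={var:.2f} var/mean={var/mean:.2f}")

# The Bernoulli bound quoted above: 1 - p + p*e^t <= exp(p*(e^t - 1)) for t > 0.
p, t = 0.3, 1.0
assert 1 - p + p * math.exp(t) <= math.exp(p * (math.exp(t) - 1))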
Super-Poissonian distribution
[ "Mathematics" ]
251
[ "Point processes", "Point (geometry)", "Poisson point processes" ]
39,486,416
https://en.wikipedia.org/wiki/Clifton%20Furnace
Clifton Furnace is a historic cold blast charcoal furnace located near Clifton Forge, Alleghany County, Virginia. It was built in 1846 of large, rough-hewn, rectangular stones. It measures 34 feet square at the base and the sides and face taper towards the top. The furnace went out of blast in 1854 and was revamped in 1874. It was abandoned in 1877. It was added to the National Register of Historic Places in 1982. References Industrial buildings and structures on the National Register of Historic Places in Virginia Industrial buildings completed in 1846 Buildings and structures in Alleghany County, Virginia National Register of Historic Places in Alleghany County, Virginia Charcoal Industrial furnaces
Clifton Furnace
[ "Chemistry" ]
141
[ "Metallurgical processes", "Industrial furnaces" ]
39,488,905
https://en.wikipedia.org/wiki/Bohn%E2%80%93Schmidt%20reaction
The Bohn–Schmidt reaction, a named reaction in chemistry, introduces a hydroxy group onto an anthraquinone system. The anthraquinone must already have at least one hydroxy group. The reaction was first described in 1889 by René Bohn (1862–1922) and in 1891 by Robert Emanuel Schmidt (1864–1938), two German industrial chemists. René Bohn is one of the few industrial chemists after whom a reaction is named. In 1901, he made indanthrone from 2-aminoanthraquinone and thus laid the basis for a new group of dyes. Reaction mechanism The postulated reaction mechanism is explained below for the example of 2-hydroxyanthraquinone: The sulfuric acid protonates the keto group of the anthraquinone 1. This causes a shift of the electrons to the oxonium ion in molecule 2. This shift enables the sulfuric acid to attack the carbenium ion 3 which is formed. The sulfuric acid oxidizes the resulting hydroxyanthracenone 5, which is then protonated and the reaction starts all over again. Finally, polyhydroxyanthraquinones with different numbers of hydroxy groups are obtained. The reaction proceeds best at 25–50 °C and takes up to several weeks to complete. The presence of a catalyst such as selenium or mercury accelerates the reaction. By adding boric acid, sulfuric acid can be used instead of fuming sulfuric acid. If boric acid is used, it has a regulating effect as ester formation occurs, which prevents further oxidation. Atom economy The reaction is ideally suited for the general production of multi-hydroxylated anthraquinones due to the good atom economy. Sulfuric acid can be reused, as it is split off at the very end. The reaction is therefore used in many dye production processes. The only disadvantage is that, in case boric acid is used, esterification occurs, which must then be reversed (hydrolyzed). See also Wolffenstein–Böters reaction References Name reactions Organic redox reactions
Bohn–Schmidt reaction
[ "Chemistry" ]
430
[ "Name reactions", "Organic redox reactions", "Organic reactions" ]
39,490,943
https://en.wikipedia.org/wiki/Elimination%20%28pharmacology%29
In pharmacology, the elimination or excretion of a drug is understood to be any one of a number of processes by which a drug is eliminated (that is, cleared and excreted) from an organism either in an unaltered form (unbound molecules) or modified as a metabolite. The kidney is the main excretory organ although others exist such as the liver, the skin, the lungs or glandular structures, such as the salivary glands and the lacrimal glands. These organs or structures use specific routes to expel a drug from the body, these are termed elimination pathways: Urine Tears Perspiration Saliva Respiration Milk Faeces Bile Drugs are excreted from the kidney by glomerular filtration and by active tubular secretion following the same steps and mechanisms as the products of intermediate metabolism. Therefore, drugs that are filtered by the glomerulus are also subject to the process of passive tubular reabsorption. Glomerular filtration will only remove those drugs or metabolites that are not bound to proteins present in blood plasma (free fraction) and many other types of drugs (such as the organic acids) are actively secreted. In the proximal and distal convoluted tubules, non-ionised acids and weak bases are reabsorbed both actively and passively. Weak acids are excreted when the tubular fluid becomes too alkaline and this reduces passive reabsorption. The opposite occurs with weak bases. Poisoning treatments use this effect to increase elimination, by alkalizing the urine causing forced diuresis which promotes excretion of a weak acid, rather than it getting reabsorbed. As the acid is ionised, it cannot pass through the plasma membrane back into the blood stream and instead gets excreted with the urine. Acidifying the urine has the same effect for weakly basic drugs. On other occasions drugs combine with bile juices and enter the intestines. In the intestines the drug will join with the unabsorbed fraction of the administered dose and be eliminated with the faeces or it may undergo a new process of absorption to eventually be eliminated by the kidney. The other elimination pathways are less important in the elimination of drugs, except in very specific cases, such as the respiratory tract for alcohol or anaesthetic gases. The case of mother's milk is of special importance. The liver and kidneys of newly born infants are relatively undeveloped and they are highly sensitive to a drug's toxic effects. For this reason it is important to know if a drug is likely to be eliminated from a woman's body if she is breast feeding in order to avoid this situation. Pharmacokinetic parameters of elimination Pharmacokinetics studies the manner and speed with which drugs and their metabolites are eliminated by the various excretory organs. This elimination will be proportional to the drug's plasmatic concentrations. In order to model these processes a working definition is required for some of the concepts related to excretion. Half life The plasma half-life or half life of elimination is the time required to eliminate 50% of the absorbed dose of a drug from an organism. Or put another way, the time that it takes for the plasma concentration to fall by half from its maximum levels. Clearance The difference in a drug's concentration in arterial blood (before it has circulated around the body) and venous blood (after it has passed through the body's organs) represents the amount of the drug that the body has eliminated or cleared. 
Although clearance may also involve organs other than the kidney, it is almost synonymous with renal clearance or renal plasma clearance. Clearance is therefore expressed as the plasma volume totally free of the drug per unit of time, and it is measured in units of volume per unit of time. Clearance can be determined on an overall, organism level («systemic clearance») or at an organ level (hepatic clearance, renal clearance etc.). The equation that describes this concept is: CL_organ = Q · (C_A − C_V) / C_A, where CL_organ is the organ's clearance rate, C_A is the drug's plasma concentration in arterial blood, C_V is the drug's plasma concentration in venous blood, and Q is the organ's blood flow. Each organ will have its own specific clearance conditions, which will relate to its mode of action. The «renal clearance» rate will be determined by factors such as the degree of plasma protein binding as the drug will only be filtered out if it is in the unbound free form, the degree of saturation of the transporters (active secretion depends on transporter proteins that can become saturated) or the number of functioning nephrons (hence the importance of factors such as kidney failure). As «hepatic clearance» is an active process, it is determined by factors that alter an organism's metabolism, such as the number of functioning hepatocytes; this is the reason that liver failure has such clinical importance. Steady state The steady state or stable concentration is reached when the drug's supply to the blood plasma is the same as the rate of elimination from the plasma. It is necessary to calculate this concentration in order to decide the period between doses and the amount of drug supplied with each dose in prolonged treatments. Other parameters Other parameters of interest include a drug's bioavailability and the apparent volume of distribution. References For elimination via bile please see: Estimation of Biliary Excretion of Foreign Compounds Using Properties of Molecular Structure. 2014. Sharifi M., Ghafourian T. AAPS J. 16(1) 65–78. External links UAlberta.ca, Animation of excretion Digestive system Excretion Pharmacokinetics Pharmacy
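As a worked sketch of the two relationships above, the short Python example below computes an organ clearance from the arterio-venous concentration difference and then the half-life implied by that clearance. All numbers are hypothetical, and the half-life relation t1/2 = ln(2) · Vd / CL assumes simple first-order, one-compartment elimination, which is an added assumption rather than something stated in this article:

import math

def organ_clearance(q_blood_flow, c_arterial, c_venous):
    # CL_organ = Q * (C_A - C_V) / C_A  (blood flow times extraction ratio)
    return q_blood_flow * (c_arterial - c_venous) / c_arterial

def half_life(clearance, volume_of_distribution):
    # t1/2 = ln(2) * Vd / CL, assuming first-order one-compartment elimination
    return math.log(2) * volume_of_distribution / clearance

# Hypothetical drug: renal plasma flow 650 mL/min, 20% extracted per pass.
cl_renal = organ_clearance(q_blood_flow=650.0, c_arterial=10.0, c_venous=8.0)
print(f"renal clearance  ≈ {cl_renal:.0f} mL/min")

# Hypothetical apparent volume of distribution of 42 L (42,000 mL).
t_half_min = half_life(cl_renal, 42_000.0)
print(f"plasma half-life ≈ {t_half_min / 60:.1f} h")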
Elimination (pharmacology)
[ "Chemistry", "Biology" ]
1,176
[ "Digestive system", "Pharmacology", "Pharmacokinetics", "Pharmacy", "Excretion", "Organ systems" ]
39,491,679
https://en.wikipedia.org/wiki/Transition%20metal%20thiolate%20complex
Transition metal thiolate complexes are metal complexes containing thiolate ligands. Thiolates are ligands that can be classified as soft Lewis bases. Therefore, thiolate ligands coordinate most strongly to metals that behave as soft Lewis acids as opposed to those that behave as hard Lewis acids. Most complexes contain other ligands in addition to thiolate, but many homoleptic complexes are known with only thiolate ligands. The amino acid cysteine has a thiol functional group; consequently, many cofactors in proteins and enzymes feature cysteinate-metal cofactors. Synthesis Metal thiolate complexes are commonly prepared by reactions of metal complexes with thiols (RSH), thiolates (RS−), and disulfides (R2S2). The salt metathesis reaction route is common. In this method, an alkali metal thiolate is treated with a transition metal halide to produce an alkali metal halide and the metal thiolate complex: LiSC6H5 + CuI → Cu(SC6H5) + LiI Lithium tert-butylthiolate reacts with MoCl4 to give the tetrathiolate complex: MoCl4 + 4 t-BuSLi → Mo(t-BuS)4 + 4 LiCl Mo(t-BuS)4 is a dark red diamagnetic complex that is sensitive to air and moisture. The molybdenum center has a distorted tetrahedral coordination to four sulfur atoms, with overall D2 symmetry. Nickelocene and ethanethiol give a dimeric thiolate, one cyclopentadienyl ligand serving as a base: 2 HSC2H5 + 2 Ni(C5H5)2 → [Ni(SC2H5)(C5H5)]2 + 2 C5H6 Regarding their mechanism of formation from thiols, metal thiolate complexes can arise via deprotonation of thiol complexes. Redox routes Many thiolate complexes are prepared by redox reactions. Organic disulfides oxidize low valence metals, as illustrated by the oxidation of titanocene dicarbonyl: Some metal centers are oxidized by thiols, the coproduct being hydrogen gas: These reactions may proceed by the oxidative addition of the thiol to Fe(0). Thiols and especially thiolate salts are reducing agents. Consequently, they induce redox reactions with certain transition metals. This phenomenon is illustrated by the synthesis of cuprous thiolates from cupric precursors: 4 HSC6H5 + 2 CuO → 2 Cu(SC6H5) + (C6H5S)2 + 2 H2O Thiolate clusters of the type [Fe4S4(SR)4]2− occur in iron–sulfur proteins. Synthetic analogues can be prepared by combined redox and salt metathesis reactions: 4 FeCl3 + 6 NaSR + 6 NaSH → Na2[Fe4S4(SR)4] + 10 NaCl + 4 HCl + H2S + R2S2 Structure Divalent sulfur exhibits bond angles approaching 90°. Such acute angles are also seen in the M-S-C angles of metal thiolates. Having filled p-orbitals of suitable symmetry, thiolates are pi-donor ligands. This property plays a role in the stabilization of Fe(IV) states in the enzyme cytochrome P450. Reactions Thiolates are relatively basic ligands, being derived from conjugate acids with pKa's of 6.5 (thiophenol) to 10.5 (butanethiol). Consequently, thiolate ligands often bridge pairs of metals. One example is Fe2(SCH3)2(CO)6. Thiolate ligands, especially when nonbridging, are susceptible to attack by electrophiles including acids, alkylating agents, and oxidants. Occurrence and applications Metal thiolate functionality is pervasive in metalloenzymes. Iron-sulfur proteins, blue copper proteins, and the zinc-containing enzyme liver alcohol dehydrogenase feature thiolate ligands. Commonly, the thiolate ligand is provided by a cysteine residue. All molybdoproteins feature thiolates in the form of cysteinyl and/or molybdopterin. References Thiolates Biochemistry Inorganic chemistry Coordination complexes
Transition metal thiolate complex
[ "Chemistry", "Biology" ]
936
[ "Coordination complexes", "Coordination chemistry", "Functional groups", "Thiolates", "nan", "Biochemistry" ]
39,493,834
https://en.wikipedia.org/wiki/Three-dimensional%20losses%20and%20correlation%20in%20turbomachinery
Three-dimensional losses and correlation in turbomachinery refers to the measurement of flow-fields in three dimensions, where measuring the loss of smoothness of flow, and resulting inefficiencies, becomes difficult, unlike two-dimensional losses where mathematical complexity is substantially less. Three-dimensionality takes into account large pressure gradients in every direction, design/curvature of blades, shock waves, heat transfer, cavitation, and viscous effects, which generate secondary flow, vortices, tip leakage vortices, and other effects that interrupt smooth flow and cause loss of efficiency. Viscous effects in turbomachinery block flow by the formation of viscous layers around blade profiles, which affects pressure rise and fall and reduces the effective area of a flow field. Interaction between these effects increases rotor instability and decreases the efficiency of turbomachinery. In calculating three-dimensional losses, every element affecting a flow path is taken into account—such as axial spacing between vane and blade rows, end-wall curvature, radial distribution of pressure gradient, hub/tip ratio, dihedral, lean, tip clearance, flare, aspect ratio, skew, sweep, platform cooling holes, surface roughness, and off-take bleeds. Associated with blade profiles are parameters such as camber distribution, stagger angle, blade spacing, blade camber, chord, surface roughness, leading- and trailing-edge radii, and maximum thickness. Two-dimensional losses are easily evaluated using the Navier-Stokes equations, but three-dimensional losses are difficult to evaluate; so, correlation is used, which is difficult with so many parameters. So, correlation based on geometric similarity has been developed in many industries, in the form of charts, graphs, data statistics, and performance data. Types of losses Three-dimensional losses are generally classified as: Three-dimensional profile losses Three-dimensional shock losses Secondary flow Endwall losses in axial turbomachinery Tip leakage flow losses Blade boundary layer losses Three-dimensional profile losses The main points to consider are: Profile losses that occur due to the curvature of blades, which includes span-wise mixing of the flow field, in addition to two-dimensional mixing losses (which can be predicted using the Navier-Stokes equations). Major losses in rotors that are caused by the radial pressure gradient from midspan to tip (flow ascending to tip). Reduction in high losses between the annulus wall and tip clearance region, which includes the trailing edge of a blade profile. This is due to flow mixing and flow redistribution at the inner radius as flow proceeds downstream. Between the hub and annulus wall, losses are prominent due to three-dimensionality. In single-stage turbomachinery, large radial pressure gradient losses at exit of flow from the rotor. Platform cooling increases the endwall flow loss and coolant air increases profile loss. Navier-Stokes analysis identifies many of the losses when some assumptions are made, such as unseparated flow. Here correlation is no longer justified. Three-dimensional shock losses The main points to consider are: Shock losses continuously increase from the hub to tip of the blade in both supersonic and transonic rotors. Shock losses are accompanied by shock-boundary-layer interaction losses, boundary-layer losses in profile secondary flow, and tip clearance effects. From the Mach number perspective, the fluid inside the rotor is in the supersonic phase except at the initial hub entry.
The Mach number increases gradually from midspan to tip. At the tip, the effect is smaller than those of secondary flow, the tip clearance effect, and the annulus wall boundary-layer effect. In a turbofan, shock losses increase overall efficiency by 2% because of the absence of tip clearance effect and secondary flow being present. Correlation depends on many parameters and is difficult to calculate. Correlation based on geometric similarity is used. Secondary flow The main points to consider are: The rotation of a blade row causes non-uniformity in radial velocity, stagnation pressure, stagnation enthalpy, and stagnation temperature. Distribution in both tangential and radial directions generates secondary flow. Secondary flow generates two velocity components Vy, Vz, hence introducing three-dimensionality in the flow field. The two components of velocity result in flow-turning at the trailing end of the blade profile, which directly affects pressure rise-and-fall in turbomachinery. Hence efficiency decreases. Secondary flow generates vibration, noise, and flutter because of the unsteady pressure field between blades and rotor–stator interaction. Secondary flow introduces vortex cavitation, which diminishes flow rate, decreases performance, and damages the blade profile. The temperature in turbomachinery is affected. The correlation for secondary flow, given by Dunham (1970), is: ζs = (0.0055 + 0.078(δ1/C)^1/2) CL^2 (cos^3 α2 / cos^3 αm) (C/h) (C/S)^2 (1/cos α1′) where ζs = average secondary flow loss coefficient; α2, αm = flow angles; δ1/C = inlet boundary layer; and C, S, h = blade geometry. Endwall losses in axial flow in turbomachinery The main points to consider are: In a turbine, secondary flow forces the wall boundary layer toward the suction side of the rotor, where mixing of blade and wall boundary layers takes place, resulting in endwall losses. The secondary flow carries core losses away from the wall and blade boundary layer, through formation of vortices. So, peak loss occurs away from the endwall. Endwall losses are high in the stator (Francis turbine/Kaplan turbine) and nozzle vane (Pelton turbine), and the loss distribution is different for turbine and compressor, due to flows being opposite to each other. Due to the presence of vortices, large flow-turning and secondary flow combine to form a complex flow field, and interaction between these effects increases endwall losses. In total loss, endwall losses form the fraction of secondary losses given by Gregory-Smith, et al., 1998. Hence secondary flow theory for small flow-turning fails. The correlation for endwall losses in an axial-flow turbine is given by: ζ = ζp + ζew ζ = ζp[ 1 + ( 1 + ( 4ε / ( ρ2V2/ρ1V1 )^1/2 ) ) ( S cos α2 − tTE )/h ] where ζ = total losses, ζp = blade profile losses, ζew = endwall losses. The expression for endwall losses in an axial-flow compressor is given by: η = η′ ( 1 − ( δh* + δt* )/h ) / ( 1 − ( Fθh + Fθt ) / h ) where η′ = efficiency in the absence of the endwall boundary layer, h refers to the hub and t refers to the tip. The values of Fθ and δ* are derived from the graph or chart.
Leakage, and its interaction with other losses in the flow field, is complex; and hence, at the tip, it has a more pronounced effect than secondary flow. Leakage flow induces three-dimensionality, like the mixing of leakage flow with vortex formation, the entrainment process, diffusion and convection. This results in aerodynamic losses and inefficiency. Tip leakage and clearance loss account for 20–40% of total losses. The effects of cooling in turbines cause vibration, noise, flutter, and high blade stress. Leakage flow causes low static pressure in the core area, increasing the risk of cavitation and blade damage. The leakage velocity is given as: QL = 2 ( ( Pp − Ps ) / ρ )^1/2 The leakage flow sheet due to velocity induced by the vortex is given in Rains, 1954: a/τ = 0.14 ( d/τ ( CL )^1/2 )^0.85 Total loss in the clearance volume is given by two equations: ζL ~ ( CL^2 * C * τ * cos^2 β1 ) / ( A * S * S * cos^2 βm ) ζW ~ ( δS* + δP* / S ) * ( 1 / A ) * ( CL )^3/2 * ( τ / S )^3/2 * Vm^3 / ( V2 * V1^2 ) See also Axial compressor Centrifugal Centrifugal compressor Centrifugal fan Centrifugal pump Francis turbine Kaplan turbine Mechanical fan Secondary flow Turbomachinery References Chapter 4,5,6 In Fluid dynamics and Heat Transfer by Budugur Lakshminarayana Fluid dynamics and Heat Transfer by James George Knudsen, Donald La Verne Katz Turbomachinery: Design and Theory (Marcell Dekker) by Rama S.R. Gorla Handbook of Turbomachinery, 2nd Edition (Mechanical Engineering, No. 158) by Earl Logan, Jr; Ramendra Turbines Compressors and Fans by S M Yahya Principles of Turbomachinery by R K Turton Turbomachinery Flow Physics and Dynamic Performance by Meinhard Schobeiril Torsional Vibration of Turbo-Machinery by Duncan Walker Turbomachinery Performance Analysis by R. I. Lewis Fluid Machinery: Performance, Analysis, and Design by Terry Wright Fluid Mechanics and Thermodynamics of Turbomachinery by S L Dixon and C.A Hall Turbo-Machinery Dynamics by A. S. Rangwala Journals External links Turbomachinery Fluid dynamic instabilities Fluid mechanics
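The Dunham (1970) correlation quoted in the secondary-flow section lends itself to a small worked example. The Python sketch below implements the expression as transcribed above (taking the final 1/cos term as the inlet flow angle), and every input value is a made-up, merely plausible cascade number rather than data from any real machine:

import math

def dunham_secondary_loss(delta1_over_c, cl, alpha1_deg, alpha2_deg,
                          alpha_m_deg, chord, span, pitch):
    # zeta_s = (0.0055 + 0.078*sqrt(delta1/C)) * CL^2
    #          * cos^3(alpha2)/cos^3(alpha_m) * (C/h) * (C/S)^2 / cos(alpha1)
    a1, a2, am = (math.radians(a) for a in (alpha1_deg, alpha2_deg, alpha_m_deg))
    return ((0.0055 + 0.078 * math.sqrt(delta1_over_c))
            * cl ** 2
            * math.cos(a2) ** 3 / math.cos(am) ** 3
            * (chord / span)
            * (chord / pitch) ** 2
            / math.cos(a1))

# Illustrative (hypothetical) cascade values.
zeta_s = dunham_secondary_loss(delta1_over_c=0.02, cl=1.1,
                               alpha1_deg=30.0, alpha2_deg=60.0,
                               alpha_m_deg=48.0,
                               chord=0.05, span=0.10, pitch=0.04)
print(f"secondary-flow loss coefficient ≈ {zeta_s:.4f}")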
Three-dimensional losses and correlation in turbomachinery
[ "Chemistry", "Engineering" ]
2,071
[ "Fluid dynamic instabilities", "Turbomachinery", "Chemical equipment", "Civil engineering", "Mechanical engineering", "Fluid mechanics", "Fluid dynamics" ]
39,494,260
https://en.wikipedia.org/wiki/Kharasch%E2%80%93Sosnovsky%20reaction
The Kharasch–Sosnovsky reaction is a method that involves using a copper or cobalt salt as a catalyst to oxidize olefins at the allylic position, subsequently condensing a peroxy ester (e.g. tert-butyl peroxybenzoate) or a peroxide, resulting in the formation of allylic benzoates or alcohols via radical oxidation. This method is noteworthy for being the first allylic functionalization to utilize first-row transition metals and has found numerous applications in chemical and total synthesis. Chiral ligands can be used to render the reaction asymmetric, constructing chiral C–O bonds via C–H bond activation. This is notable as asymmetric addition to allylic groups tends to be difficult due to the transition state being highly symmetric. The reaction is named after Morris S. Kharasch and George Sosnovsky who first reported it in 1958. Modifications Substituted oxazolines and thiazolines can be oxidized to the corresponding oxazoles and thiazoles via a modification of the classic reaction. Mechanism Although the mechanism of Kharasch–Sosnovsky oxidation is not fully understood, the general aspects have been established. The reaction is known to undergo a radical mechanism. Taking the most representative reaction as an example, most of the studies suggest that the Cu(I) and perester complex can go through a homolytic dissociation of the perester through coordination of a Cu(I) salt, leading to the formation of a Cu(II) complex and a tert-butoxyl radical. However, the mechanism of Cu(II) to Cu(III) remains unknown. Several mechanistic studies hypothesize it can undergo multiple steps to generate the key allyl–Cu(III) intermediate. In the final step, the C-O bond formation between the alkenyl and benzoate occurs through the reductive elimination of the copper(III) complex. The last step, a reductive elimination of an organocopper(III) intermediate to regenerate the Cu(I) catalyst and form the product, is proposed to take place via a seven-membered ring transition state. Regioselectivity In the original work on Kharasch–Sosnovsky oxidation, Kharasch and Sosnovsky observed the selective formation of the branched product over the linear product with 1-octene in a ratio of 99:1. It is notable that the reaction favors the thermodynamically less stable terminal alkene. Mechanistic investigations later suggested that the reaction proceeds through a 7-membered ring organo-copper(III) species in a pericyclic reaction, resulting in an unrearranged terminal alkene product. Stereoselectivity Since the reaction usually generates a stereogenic center, multiple asymmetric variants of this transformation have been developed. To achieve stereoselectivity, employing a bidentate chiral ligand in the reaction is the most common strategy; inducing the asymmetric formation of the benzoate often relies on the ability of the ligand and Cu(III). Some examples of frequently used ligands are oxazolines, pyridines, and C3-symmetric oxazoles. Applications in total synthesis Since the early 20th century, the scientific community has been aware of the oxidation of allylic C-H bonds. This reactivity can be attributed to allylic and benzylic C-H bonds being weaker by approximately 16.4–16.7 kcal/mol than a regular C-H bond. In the late 1950s, the Kharasch–Sosnovsky oxidation was developed.
Since then, there have been multiple studies employing first-row transition metal (especially copper)-mediated reactions to install functional groups in the allylic position. Corey's Synthesis of Oleanolic Acid One of the examples is from Corey and his co-workers' synthesis of oleanolic acid in 1993. They employed the Kharasch–Sosnovsky oxidation in a novel manner to access the OBz intermediate. Initially, vinylcyclopropane was treated with CuBr and tert-butyl perbenzoate, resulting in the abstraction of a hydrogen atom, leading to the formation of an allylic radical. Subsequently, the allylic radical underwent a transformation through the homolytic cleavage of the cyclopropane ring, followed by the recombination of the resulting primary and benzyloxy radicals. This unique combination of the Kharasch reaction and the Simmons−Smith cyclopropanation facilitated the introduction of the cyclopropyl group, enabling the efficient and stereoselective installation of an oxidized methyl group. Mukaiyama's Synthesis of Taxol Another example is from Mukaiyama's Taxol synthesis in 1999. Mukaiyama's group utilized the Kharasch reaction to introduce an oxidation on the Taxol C-ring. By treating with an excess of CuBr and tert-butyl perbenzoate, a mixture was obtained. After separating the two bromides, Mukaiyama and his colleagues were able to convert the side product into the desired one through isomerization using CuBr in MeCN at 50 °C. The efficient conversion of the relatively inert alkene to the reactive allylic bromide played a crucial role in the success of Mukaiyama's synthesis, as the allylic bromide served as the necessary component to construct the oxetane D ring. References Catalysis Name reactions Carbon-heteroatom bond forming reactions
Kharasch–Sosnovsky reaction
[ "Chemistry" ]
1,195
[ "Catalysis", "Organic reactions", "Name reactions", "Carbon-heteroatom bond forming reactions", "Chemical reaction stubs", "Chemical kinetics", "Chemical process stubs" ]
36,625,367
https://en.wikipedia.org/wiki/N%20%3D%204%20supersymmetric%20Yang%E2%80%93Mills%20theory
N = 4 supersymmetric Yang–Mills (SYM) theory is a relativistic conformally invariant Lagrangian gauge theory describing the interactions of fermions via gauge field exchanges. In D=4 spacetime dimensions, N=4 is the maximal number of supersymmetries or supersymmetry charges. SYM theory is a toy theory based on Yang–Mills theory; it does not model the real world, but it is useful because it can act as a proving ground for approaches for attacking problems in more complex theories. It describes a universe containing boson fields and fermion fields which are related by four supersymmetries (this means that transforming bosonic and fermionic fields in a certain way leaves the theory invariant). It is one of the simplest (in the sense that it has no free parameters except for the gauge group) and one of the few ultraviolet finite quantum field theories in 4 dimensions. It can be thought of as the most symmetric field theory that does not involve gravity. Like all supersymmetric field theories, SYM theory may equivalently be formulated as a superfield theory on an extended superspace in which the spacetime variables are augmented by a number of Grassmann variables which, for the case N=4, consist of 4 Dirac spinors, making a total of 16 independent anticommuting generators for the extended ring of superfunctions. The field equations are equivalent to the geometric condition that the supercurvature 2-form vanish identically on all super null lines. This is also known as the super-ambitwistor correspondence. A similar super-ambitwistor characterization holds for D=10, N=1 dimensional super Yang–Mills theory, and the lower dimensional cases D=6, N=2 and D=4, N=4 may be derived from this via dimensional reduction. Meaning of N and numbers of fields In N supersymmetric Yang–Mills theory, N denotes the number of independent supersymmetric operations that transform the spin-1 gauge field into spin-1/2 fermionic fields. In an analogy with symmetries under rotations, N would be the number of independent rotations, N = 1 in a plane, N = 2 in 3D space, etc... That is, in a N = 4 SYM theory, the gauge boson can be "rotated" into N = 4 different supersymmetric fermion partners. In turns, each fermion can be rotated into four different bosons: one corresponds to the rotation back to the spin-1 gauge field, and the three others are spin-0 boson fields. Because in 3D space one may use different rotations to reach a same point (or here the same spin-0 boson), each spin-0 boson is superpartners of two different spin-1/2 fermions, not just one. So in total, one has only 6 spin-0 bosons, not 16. Therefore, N = 4 SYM has 1 + 4 + 6 = 11 fields, namely: one vector field (the spin-1 gauge boson), four spinor fields (the spin-1/2 fermions) and six scalar fields (the spin-0 bosons). N = 4 is the maximum number of independent supersymmetries: starting from a spin-1 field and using more supersymmetries, e.g., N = 5, only rotates between the 11 fields. To have N > 4 independent supersymmetries, one needs to start from a gauge field of spin higher than 1, e.g., a spin-2 tensor field such as that of the graviton. This is the N = 8 supergravity theory. Lagrangian The Lagrangian for the theory is where and are coupling constants (specifically is the gauge coupling and is the instanton angle), the field strength is with the gauge field and indices i,j = 1, ..., 6 as well as a, b = 1, ..., 4, and represents the structure constants of the particular gauge group. 
The are left Weyl fermions, are the Pauli matrices, is the gauge covariant derivative, are real scalars, and represents the structure constants of the R-symmetry group SU(4), which rotates the four supersymmetries. As a consequence of the nonrenormalization theorems, this supersymmetric field theory is in fact a superconformal field theory. Ten-dimensional Lagrangian The above Lagrangian can be found by beginning with the simpler ten-dimensional Lagrangian where I and J are now run from 0 through 9 and are the 32 by 32 gamma matrices , followed by adding the term with which is a topological term. The components of the gauge field for i = 4 to 9 become scalars upon eliminating the extra dimensions. This also gives an interpretation of the SO(6) R-symmetry as rotations in the extra compact dimensions. By compactification on a T6, all the supercharges are preserved, giving N = 4 in the 4-dimensional theory. A Type IIB string theory interpretation of the theory is the worldvolume theory of a stack of D3-branes. S-duality The coupling constants and naturally pair together into a single coupling constant The theory has symmetries that shift by integers. The S-duality conjecture says there is also a symmetry which sends as well as switching the group to its Langlands dual group. AdS/CFT correspondence This theory is also important in the context of the holographic principle. There is a duality between Type IIB string theory on AdS5 × S5 space (a product of 5-dimensional AdS space with a 5-dimensional sphere) and N = 4 super Yang–Mills on the 4-dimensional boundary of AdS5. However, this particular realization of the AdS/CFT correspondence is not a realistic model of gravity, since gravity in our universe is 4-dimensional. Despite this, the AdS/CFT correspondence is the most successful realization of the holographic principle, a speculative idea about quantum gravity originally proposed by Gerard 't Hooft, who was expanding on work on black hole thermodynamics, and was improved and promoted in the context of string theory by Leonard Susskind. Integrability There is evidence that N = 4 supersymmetric Yang–Mills theory has an integrable structure in the planar large N limit (see below for what "planar" means in the present context). As the number of colors (also denoted N) goes to infinity, the amplitudes scale like , so that only the genus 0 (planar graph) contribution survives. Planar Feynman diagrams are graphs in which no propagator cross over another one, in contrast to non-planar Feynman graphs where one or more propagator goes over another one. A non-planar graph has a smaller number of possible gauge loops compared to a similar planar graph. Non-planar graphs are thus suppressed by factors compared to planar ones which therefore dominate in the large N limit. Consequently, a planar Yang–Mills theory denotes a theory in the large N limit, with N usually the number of colors. Likewise, a planar limit is a limit in which scattering amplitudes are dominated by Feynman diagrams which can be given the structure of planar graphs. In the large N limit, the coupling vanishes and a perturbative formalism is therefore well-suited for large N calculations. Therefore, planar graphs are associated to the domain where perturbative calculations converge well. Beisert et al. give a review article demonstrating how in this situation local operators can be expressed via certain states in spin chains (in particular the Heisenberg spin chain), but based on a larger Lie superalgebra rather than for ordinary spin. 
These spin chains are integrable in the sense that they can be solved by the Bethe ansatz method. They also construct an action of the associated Yangian on scattering amplitudes. Nima Arkani-Hamed et al. have also researched this subject. Using twistor theory, they find a description (the amplituhedron formalism) in terms of the positive Grassmannian. Relation to 11-dimensional M-theory N = 4 super Yang–Mills can be derived from a simpler 10-dimensional theory, and yet supergravity and M-theory exist in 11 dimensions. The connection is that if the gauge group U(N) of SYM becomes infinite as it becomes equivalent to an 11-dimensional theory known as matrix theory. See also 4D N = 1 global supersymmetry 6D (2,0) superconformal field theory Extended supersymmetry N = 1 supersymmetric Yang–Mills theory N = 8 supergravity Seiberg–Witten theory References Citations Sources Supersymmetric quantum field theory Conformal field theory
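For concreteness, the complexified coupling and the duality action referred to in the S-duality section are usually written, in the standard textbook convention (supplied here as background rather than quoted from this article), as τ = θ/(2π) + 4πi/g^2, with the shift symmetry T: τ → τ + 1 corresponding to integer shifts of the instanton angle, and the S-duality transformation S: τ → −1/τ accompanied by exchanging the gauge group with its Langlands dual.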
N = 4 supersymmetric Yang–Mills theory
[ "Physics" ]
1,887
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry" ]
58,568,168
https://en.wikipedia.org/wiki/Klaus%20Immelmann
Klaus Immelmann (May 6, 1935 – September 8, 1987) was a German ethologist and ornithologist. He undertook field research in Africa and Australia, and published works in German and English. His second and third visit to South Africa were in 1969 and 1971. Immelmann became a permanent executive member of the International Ornithological Union, and its president in 1986. He is the author of Australian finches in bush and aviary (1965), regarded as the first standard text on the subject, and a study of comparative biology of estrildid finches in Australia. His first visit to Australia was in the late 1950s, shortly after receiving his PhD. His 1976 book Einführung in die Verhaltensforschung, which brought together much of his scientific thinking, was translated into English in 1980. References 1935 births 1987 deaths German ornithologists Ethologists Ornithological writers 20th-century German zoologists
Klaus Immelmann
[ "Biology" ]
201
[ "Ethology", "Behavior", "Ethologists" ]
58,574,158
https://en.wikipedia.org/wiki/Illegitimate%20recombination
Illegitimate recombination, or nonhomologous recombination, is the process by which two unrelated double stranded segments of DNA are joined. This insertion of genetic material which is not meant to be adjacent tends to lead to genes being broken, causing the protein which they encode to not be properly expressed. One of the primary pathways by which this will occur is the repair mechanism known as non-homologous end joining (NHEJ). Discovery Illegitimate recombination is a natural process which was first found to be present within E. coli. A 700-1400 base pair segment of DNA was found to have inserted itself into the gal and lac operons resulting in a strong polar mutation. This mechanism was then found to have the ability to insert other short genetic sequences into other locations within the bacterial genome, often leading to a change in the expression of neighboring genes. Oftentimes it causes the neighboring genes to simply shut off. However some of these segments also had strong start and stop signals which changed the regulation of neighboring genes, leading to changes in the amount of transcription. What differentiated this form of genetic recombination from those dependent on genetic homology was that the process observed as illegitimate did not require the use of homologous segments of DNA. While not being entirely understood at the time, it was recognized to hold potential in generating changes in chromosomal evolution. Mechanism In prokaryotes In prokaryotes, illegitimate recombination results in a mutation of the genetic sequence of the prokaryote. This process takes different forms in prokaryotes, one of which is deletions. In a deletion mutation the prokaryotic organism undergoes illegitimate recombination resulting in the removal of a continuous segment of genetic code. However this form of mutation occurs infrequently among mutants of natural origin rather than those that have been induced. Another form of illegitimate recombination in prokaryotes is that of duplication mutations of a genome. In this case a portion of the parental genome is inserted multiple times into the genome. This duplication inserts the genetic material either in the same or the opposite orientation as the original parental segments, as it is non-homology driven. In eukaryotes The mechanism of illegitimate recombination is that of non-homologous end joining, in which two strands of DNA not sharing homology will be joined together by the gene repair machinery. Upon recognition of a double strand break a protein complex will keep the two strands within close enough proximity in order to allow for repair of the strands. Next the ends of the DNA are repaired such that any incorrect or damaged nucleotides are removed. Once this happens the strands are able to be ligated together such that they form a single strand of DNA from segments which previously had not been adjacent. This process is common for eukaryotic cells and tends to act as a repair mechanism, but can lead to these mutations if illegitimate recombination occurs. The illegitimate recombination will often take the form of large chromosomal aberrations within a eukaryotic organism as it has much larger segments of DNA than prokaryotic cells. As such, non-homologous end joining can cause illegitimate recombination which creates insertion and deletion mutations in chromosomes as well as translocation of one chromosomal segment to that of another chromosome.
These large-scale changes in the chromosome in eukaryotic organisms tend to have deleterious effects on the organism rather than conferring a type of genetic advantage. Deleterious effects on organisms Illegitimate recombination oftentimes has deleterious effects on an organism as it results in a large-scale change in the genetic sequence of an organism. These changes will result in mutations as the joining of DNA not based on homology will most often place genetic elements in locations in which they previously had not been placed. This can disrupt the function of genes which may be essential to the function of an organism. In the case of cancer it has been found that tumors can be a result of illegitimate recombination resulting in hairpin formation which alters the gene function within the genome of tumor cells. Applications Illegitimate recombination can also be used as a research tool in the laboratory. It can be used for random mutagenesis, generating a random alteration of the genetic sequence of an organism. The induction of this mutagenesis allows for the study of a genetic sequence by creating a mutation in a genetic segment, altering the function of that genetic segment. This allows for the study of gene function through the analysis of differences between mutants and natural organisms to interpret what process a gene is linked to. References DNA repair Molecular genetics Mutated genes
Illegitimate recombination
[ "Chemistry", "Biology" ]
981
[ "Molecular genetics", "Cellular processes", "DNA repair", "Molecular biology" ]
58,577,604
https://en.wikipedia.org/wiki/Magnesocene
Magnesocene, also known as bis(cyclopentadienyl)magnesium(II) and sometimes abbreviated as MgCp2, is an organometallic compound with the formula Mg(η5-C5H5)2. It is an example of an s-block main group sandwich compound, structurally related to the d-block element metallocenes, and consists of a central magnesium atom sandwiched between two cyclopentadienyl rings. Properties Magnesocene is a white solid at room temperature. It has a melting point of 176 °C, though at atmospheric pressures it sublimes at 100 °C. Unlike ferrocene, magnesocene displays slight dissociation and subsequent ion association in polar, electron-donating solvents (such as ether and THF). MgCp2 <=> MgCp+ + Cp- MgCp2 + MgCp+ <=> Mg2Cp3+ MgCp2 + Cp- <=> MgCp3- While ferrocene is stable at ambient conditions, magnesocene decomposes rapidly on exposure to oxygen or moisture, and as such must be synthesized and stored under inert conditions. Structure and bonding As revealed by X-ray crystallographic refinement, solid-phase magnesocene exhibits an average Mg-C and C-C bond distance of 2.30 Å and 1.39 Å, respectively, and the Cp rings adopt a staggered conformation (point group D5d). Gas-phase electron diffraction has shown similar bond lengths, albeit with the Cp rings in an eclipsed conformation (point group D5h). The nature of Mg-Cp bonding has been hotly contested as to whether the interaction is primarily ionic or covalent in character. Gas-phase electron diffraction measurements have been invoked to argue for a covalent model, while vibrational spectroscopy measurements have offered evidence for both. Hartree-Fock calculations have shown that, in contrast to transition metal metallocenes, the Mg 3d orbitals play no role in metal-ring bonding; instead, favorable bonding interactions with the Cp π system are accomplished by promotion of the two 3s electrons to the 3px,y orbitals. Further stabilization is afforded by back-donation from the Cp rings to the Mg 3s orbital. Such interactions afford a lesser degree of orbital overlap as compared to ferrocene, resulting in a comparatively weak metal-ring bond and a fairly high effective local charge on Mg. Experimental evidence in favor of an ionic bonding model can thus be explained by the very weak, highly polar Mg-Cp interactions. The weak nature of this bonding mode is responsible for magnesocene's relative instability and vigorous reactivity when compared to ferrocene. Synthesis High-temperature synthesis The first synthesis of magnesocene, as reported by F. A. Cotton and Geoffrey Wilkinson in 1954, involved the thermal decomposition of the cyclopentadienyl Grignard reagent. A similar procedure was offered by W. A. Barber in which cyclopentadiene is directly reacted with solid magnesium at 500-600 °C. Under water- and oxygen-free conditions, freshly distilled monomeric cyclopentadiene is directed through a tube furnace by an inert carrier gas (such as helium, argon, or nitrogen) and passed over magnesium turnings or powder. Magnesocene deposits on cooler surfaces past the exit end of the furnace. The product of this process is typically a white, fluffy mass of fine microcrystals, but large, colorless single crystals can be obtained by adjusting temperature and flow rate. If solid magnesocene is not needed, the receiving flask can instead be filled with solvent and the product collected in solution, which Barber noted as much safer to handle than the pure solid. 
Mg + 2 C5H6 → Mg(C5H5)2 + H2 (at 500–600 °C) This procedure is capable of producing a gram of product every two minutes under ideal conditions, and that with a vertical setup (in which cyclopentadiene is directed downwards and the product collected below) nearly pure product can be obtained at >80% yield (by cyclopentadiene). A horizontal setup was shown to be possible but at the expense of product purity, due to gas flow restriction by product accumulation. Liquid-phase methods Magnesocene can be produced from magnesium turnings in THF at mild conditions with cyclopentadienyltitanium trichloride (CpTiCl3) acting as a catalyst. Maslennikov et al. later showed similar catalytic activity with Cp2TiCl2, TiCl3, TiCl4, and VCl3. The mechanism, as shown by electron spin resonance, proceeds through a Cp2TiH2MgCl intermediate. Magnesocene formation from elemental magnesium has not been observed in THF without a catalyst present. Attempts to substitute THF with diethyl ether, diglyme, or benzene resulted only in polymerization of cyclopentadiene. The syntheses of magnesocene and its derivatives have also been carried out in hydrocarbon solvents, such as heptane, from Cp and (nBu)(sBu)Mg. Metallation of cyclopentadiene can also be accomplished by Mg-Al alkyl complexes with a final magnesocene yield of 85%. Reactivity and potential applications Magnesocene serves as an intermediate in the preparation of transition metal metallocenes: MgCp2 + MCl2 → MCp2 + MgCl2 Magnesocene also undergoes ligand exchange reactions with MgX2 (X = halide) to form CpMgX half-sandwich compounds in THF: MgCp2 + MgX2 <=> 2 CpMgX The resulting half-sandwich halides can serve as starting materials for synthesizing substituted cyclopentadienes from organic halides. Because of its high reactivity, magnesocene is an attractive target for semiconductor research as a starting material for chemical vapor deposition and doping applications. Magnesocene has also been investigated for its potential use as an electrolyte in next-generation magnesium ion batteries. References Cyclopentadienyl complexes Magnesium compounds Substances discovered in the 1950s Metallocenes
Magnesocene
[ "Chemistry" ]
1,303
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
38,040,677
https://en.wikipedia.org/wiki/Nitrocefin
Nitrocefin is a chromogenic cephalosporin substrate routinely used to detect the presence of beta-lactamase enzymes produced by various microbes. Beta-lactamase mediated resistance to beta-lactam antibiotics such as penicillin is a widespread mechanism of resistance for a number of bacteria including members of the family Enterobacteriaceae, a major group of enteric Gram-negative bacteria. Other methods for beta-lactamase detection exist including PCR; however, nitrocefin allows for rapid beta-lactamase detection using few materials and inexpensive equipment. Structure As a cephalosporin, nitrocefin contains a beta-lactam ring which is susceptible to beta-lactamase mediated hydrolysis. Once hydrolyzed, the degraded nitrocefin compound rapidly changes color from yellow to red. Although nitrocefin is considered a cephalosporin, it does not appear to have antimicrobial properties. Degradation and chromogenic properties Intact beta-lactam antibiotics act by binding to penicillin binding proteins (PBPs) involved in peptidoglycan synthesis. Beta-lactamases hydrolyze the amide bond between the carbonyl carbon and the nitrogen in the beta-lactam ring of susceptible beta-lactams and members of beta-lactam subclasses (including certain cephalosporins). After hydrolysis of the amide bond, the antibiotic lacks the ability to bind bacterial PBPs and is rendered useless. Visual detection of this process is essentially impossible with most cephalosporins because the shift of ultraviolet absorption from the intact versus hydrolyzed product occurs outside of the visible spectrum. Hydrolysis of nitrocefin however, produces a shift of ultraviolet absorption inside the visible light spectrum from intact (yellow) nitrocefin (~380 nm) to degraded (red) nitrocefin (~500 nm) allowing visual detection of beta-lactamase activity on a macroscopic level. Detection assays The following assays describe methods in which nitrocefin can be used to detect beta-lactamase enzymes using inexpensive materials and equipment. Working solutions of nitrocefin lie within 0.5 mg/mL to 1.0 mg/mL. Slide Surface Assay Add one drop of 0.5 mg/ml Nitrocefin to the surface of a clean glass slide. Select a colony from an agar surface using a sterile loop and mix with the drop. Appearance of red color within 20-30 min. indicates beta-lactamase activity. Direct Contact Assay Place one drop of 0.5 mg/ml Nitrocefin directly on the surface of an isolated colony. Appearance of red color within 20-30 min. indicates beta-lactamase activity. Broth Suspension Assay Add 3-5 drops of 0.5 mg/ml Nitrocefin to 1 ml of broth suspension. Appearance of red color within 20-30 min. indicates beta-lactamase activity. Lysed Cell Assay Lyse 1ml of cell suspension by sonication. Add 3-5 drops of 0.5 mg/ml Nitrocefin to lysed cell suspension. Appearance of red color within 20-30 min. indicates beta-lactamase activity. Filter Paper Assay Place a small piece of filter paper (~3 x 3 cm) in a clean petri dish or another clean isolated surface and saturate (3-5 ml) with 0.5 mg/ml Nitrocefin Select an isolated colony and smear over the surface of the impregnated filter paper. Appearance of red color within 20-30 min. indicates beta-lactamase activity See also Antibiotic resistance Antimicrobial Beta-lactam β-Lactam antibiotic Beta-lactamase Cephalosporin Peptidoglycan References Biochemistry detection reactions Cephalosporin antibiotics
Nitrocefin
[ "Chemistry", "Biology" ]
815
[ "Biochemistry detection reactions", "Microbiology techniques", "Biochemical reactions" ]
38,041,270
https://en.wikipedia.org/wiki/Mercury%20Systems
Mercury Systems, Inc. is a technology company serving the aerospace and defense industry. It designs, develops and manufactures open architecture computer hardware and software products, including secure embedded processing modules and subsystems, avionics mission computers and displays, rugged secure computer servers, and trusted microelectronics components, modules and subsystems. Mercury sells its products to defense prime contractors, the US government and original equipment manufacturer (OEM) commercial aerospace companies. Mercury is based in Andover, Massachusetts, with more than 2300 employees and annual revenues of approximately US$988 million for its fiscal year ended June 30, 2022. History Founded on July 14, 1981 as Mercury Computer Systems by Jay Bertelli. Went public on the Nasdaq stock exchange on January 30, 1998, listed under the symbol MRCY. In July 2005, Mercury Computer Systems acquired Echotek Corporation for approximately US$49 million. In January 2011, Mercury Computer Systems acquired LNX Corporation. In December, 2011, Mercury Computer Systems acquired KOR Electronics for US$70 million, In August 2012, Mercury Computer Systems acquired Micronetics for US$74.9 million. In November 2012, the company changed its name from Mercury Computer Systems to Mercury Systems. In December 2015, Mercury Systems acquired Lewis Innovative Technologies, Inc. (LIT). In November 2016, Mercury Systems acquired Creative Electronic Systems for US$38 million. In April 2017, Mercury Systems acquired Delta Microwave, LLC (“Delta”) for US$40.5 million, enabling the Company to expand into the satellite communications (SatCom), datalinks and space launch markets. In July 2017, Mercury Systems acquired Richland Technologies, LLC (RTL), increasing the Company's market penetration in commercial aerospace, defense platform management, C4I, and mission computing. In January 2019, Mercury Systems acquired GECO Avionics, LLC for US$36.5 million. In December 2020, Mercury Systems acquired Physical Optics Corporation (POC) for $310 million. In May 2021, Mercury Systems acquired Pentek for $65.0 million. In November, 2021 Mercury Systems acquired Avalex Technologies Corporation and Atlanta Micro, Inc. In August, 2023 Mercury Systems appointed William L. Ballhaus as the president and CEO. Facilities Manufacturing centers Mercury manufactures in New England, New York Metro-area, Southern California and a trusted DMEA facility in the Southwest, which has Missile Defense Agency approval and AS9100 certification. Four Mercury sites have been awarded the James S. Cogswell Award for Outstanding Industrial Security Achievement Award by the Defense Counterintelligence and Security Agency (DCSA). References External links Manufacturing companies based in Massachusetts Companies based in Essex County, Massachusetts Andover, Massachusetts Signal processing Signals intelligence Radio frequency propagation Middleware Cell BE architecture Aerospace companies of the United States Defense companies of the United States American companies established in 1981 Technology companies established in 1981 1981 establishments in Massachusetts Companies listed on the Nasdaq
Mercury Systems
[ "Physics", "Technology", "Engineering" ]
600
[ "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Computer engineering", "Radio frequency propagation", "Signal processing", "IT infrastructure", "Electromagnetic spectrum", "Software engineering", "Waves", "Middleware" ]
38,045,209
https://en.wikipedia.org/wiki/Despeciation
Despeciation is the loss of a unique species of animal due to its combining with another previously distinct species. It is the opposite of speciation and is much more rare. It is similar to extinction in that there is a loss of a unique species but without the associated loss of a biological lineage. Despeciation has been noted in species of butterflies, sunflowers, mosquitoes, fish, wolves, and even humans. Examples North American ravens Holarctic ravens and Californian ravens had been two separate species for 1.5 million years until tens of thousands of years ago when their regions overlapped and they began to merge into a new species. This new raven species contains genes coming from both the Holarctic and Californian raven. This was possible because they occupy the same area of the Western United States. Three-spined sticklebacks Another possible cause for despeciation is increased gene flow and hybridization due to changes in the environment. One of these changes could include the loss of essential nourishment resources for each individual species. For example, Taylor et al.'s genetic analysis of three-spined sticklebacks across six lakes in southwestern British Columbia found two distinct species in 1977 and 1988 but only one combined species in data from 1997, 2000, and 2002. The new species is a hybrid and shows an intermediate form of the parental genotype. They concluded that external factors had imperiled the living conditions of the two species, thus eliminating the evolutionary specializations that had kept them unique. Humans Anatomically modern humans (Homo sapiens) are considered to have undergone despeciation due to the genomes associated with remains of the closest archaic human species to modern humans, such as the Denisovans and Neanderthals, among others. These genomes show that human ancestors interbred with these other hominins. Genes identified as being of Neanderthal and Denisovan origin have been located in the genome of modern humans in varying amounts dependent on location, and one Neanderthal-Denisovan hybrid nicknamed Denny has been identified, indicating that interbreeding took place where populations of the three species met. References Evolution Speciation
Despeciation
[ "Biology" ]
441
[ "Evolutionary processes", "Speciation" ]
38,046,595
https://en.wikipedia.org/wiki/Evolution%20of%20tetrapods
The evolution of tetrapods began about 400 million years ago in the Devonian Period with the earliest tetrapods evolved from lobe-finned fishes. Tetrapods (under the apomorphy-based definition used on this page) are categorized as animals in the biological superclass Tetrapoda, which includes all living and extinct amphibians, reptiles, birds, and mammals. While most species today are terrestrial, little evidence supports the idea that any of the earliest tetrapods could move about on land, as their limbs could not have held their midsections off the ground and the known trackways do not indicate they dragged their bellies around. Presumably, the tracks were made by animals walking along the bottoms of shallow bodies of water. The specific aquatic ancestors of the tetrapods, and the process by which land colonization occurred, remain unclear. They are areas of active research and debate among palaeontologists at present. Most amphibians today remain semiaquatic, living the first stage of their lives as fish-like tadpoles. Several groups of tetrapods, such as the snakes and cetaceans, have lost some or all of their limbs. In addition, many tetrapods have returned to partially aquatic or fully aquatic lives throughout the history of the group (modern examples of fully aquatic tetrapods include cetaceans and sirenians). The first returns to an aquatic lifestyle may have occurred as early as the Carboniferous Period whereas other returns occurred as recently as the Cenozoic, as in cetaceans, pinnipeds, and several modern amphibians. The change from a body plan for breathing and navigating in water to a body plan enabling the animal to move on land is one of the most profound evolutionary changes known. It is also one of the best understood, largely thanks to a number of significant transitional fossil finds in the late 20th century combined with improved phylogenetic analysis. Origin Evolution of fish The Devonian period is traditionally known as the "Age of Fish", marking the diversification of numerous extinct and modern major fish groups. Among them were the early bony fishes, who diversified and spread in freshwater and brackish environments at the beginning of the period. The early types resembled their cartilaginous ancestors in many features of their anatomy, including a shark-like tailfin, spiral gut, large pectoral fins stiffened in front by skeletal elements and a largely unossified axial skeleton. They did, however, have certain traits separating them from cartilaginous fishes, traits that would become pivotal in the evolution of terrestrial forms. With the exception of a pair of spiracles, the gills did not open singly to the exterior as they do in sharks; rather, they were encased in a gill chamber stiffened by membrane bones and covered by a bony operculum, with a single opening to the exterior. The cleithrum bone, forming the posterior margin of the gill chamber, also functioned as anchoring for the pectoral fins. The cartilaginous fishes do not have such an anchoring for the pectoral fins. This allowed for a movable joint at the base of the fins in the early bony fishes, and would later function in a weight bearing structure in tetrapods. As part of the overall armour of rhomboid cosmin scales, the skull had a full cover of dermal bone, constituting a skull roof over the otherwise shark-like cartilaginous inner cranium. Importantly, they also had a pair of ventral paired lungs, a feature lacking in sharks and rays. 
It was assumed that fishes to a large degree evolved around reefs, but since their origin about 480 million years ago, they lived in near-shore environments like intertidal areas or permanently shallow lagoons and didn't start to proliferate into other biotopes before 60 million years later. A few adapted to deeper water, while solid and heavily built forms stayed where they were or migrated into freshwater. The increase of primary productivity on land during the late Devonian changed the freshwater ecosystems. When nutrients from plants were released into lakes and rivers, they were absorbed by microorganisms which in turn were eaten by invertebrates, which served as food for vertebrates. Some fish also became detritivores. Early tetrapods evolved a tolerance to environments which varied in salinity, such as estuaries or deltas. Lungs before land The lung/swim bladder originated as an outgrowth of the gut, forming a gas-filled bladder above the digestive system. In its primitive form, the air bladder was open to the alimentary canal, a condition called physostome and still found in many fish. The primary function of swim bladder is not entirely certain. One consideration is buoyancy. The heavy scale armour of the early bony fishes would certainly weigh the animals down. In cartilaginous fishes, lacking a swim bladder, the open sea sharks need to swim constantly to avoid sinking into the depths, the pectoral fins providing lift. Another factor is oxygen consumption. Ambient oxygen was relatively low in the early Devonian, possibly about half of modern values. Per unit volume, there is much more oxygen in air than in water, and vertebrates (especially nektonic ones) are active animals with a higher energy requirement compared to invertebrates of similar sizes. The Devonian saw increasing oxygen levels which opened up new ecological niches by allowing groups able to exploit the additional oxygen to develop into active, large-bodied animals. Particularly in tropical swampland habitats, atmospheric oxygen is much more stable, and may have prompted a reliance of proto-lungs (performing essentially an evolved type of enteral respiration) rather than gills for primary oxygen uptake. In the end, both buoyancy and breathing may have been important, and some modern physostome fishes do indeed use their bladders for both. To function in gas exchange, lungs require a blood supply. In cartilaginous fishes and teleosts, the heart lies low in the body and pumps blood forward through the ventral aorta, which splits up in a series of paired aortic arches, each corresponding to a gill arch. The aortic arches then merge above the gills to form a dorsal aorta supplying the body with oxygenated blood. In lungfishes, bowfin and bichirs, the swim bladder is supplied with blood by paired pulmonary arteries branching off from the hindmost (6th) aortic arch. The same basic pattern is found in the lungfish Protopterus and in terrestrial salamanders, and was probably the pattern found in the tetrapods' immediate ancestors as well as the first tetrapods. In most other bony fishes the swim bladder is supplied with blood by the dorsal aorta. The breath In order for the lungs to allow gas exchange, the lungs first need to have gas in them. In modern tetrapods, three important breathing mechanisms are conserved from early ancestors, the first being a CO2/H+ detection system. In modern tetrapod breathing, the impulse to take a breath is triggered by a buildup of CO2 in the bloodstream and not a lack of O2. 
A similar CO2/H+ detection system is found in all Osteichthyes, which implies that the last common ancestor of all Osteichthyes had a need of this sort of detection system. The second mechanism for a breath is a surfactant system in the lungs to facilitate gas exchange. This is also found in all Osteichthyes, even those that are almost entirely aquatic. The highly conserved nature of this system suggests that even aquatic Osteichthyes have some need for a surfactant system, which may seem strange as there is no gas underwater. The third mechanism for a breath is the actual motion of the breath. This mechanism predates the last common ancestor of Osteichthyes, as it can be observed in Lampetra camtshatica, the sister clade to Osteichthyes. In Lampreys, this mechanism takes the form of a "cough", where the lamprey shakes its body to allow water flow across its gills. When CO2 levels in the lamprey's blood climb too high, a signal is sent to a central pattern generator that causes the lamprey to "cough" and allow CO2 to leave its body. This linkage between the CO2 detection system and the central pattern generator is extremely similar to the linkage between these two systems in tetrapods, which implies homology. External and internal nares The nostrils in most bony fish differ from those of tetrapods. Normally, bony fish have four nares (nasal openings), one naris behind the other on each side. As the fish swims, water flows into the forward pair, across the olfactory tissue, and out through the posterior openings. This is true not only of ray-finned fish but also of the coelacanth, a fish included in the Sarcopterygii, the group that also includes the tetrapods. In contrast, the tetrapods have only one pair of nares externally but also sport a pair of internal nares, called choanae, allowing them to draw air through the nose. Lungfish are also sarcopterygians with internal nostrils, but these are sufficiently different from tetrapod choanae that they have long been recognized as an independent development. The evolution of the tetrapods' internal nares was hotly debated in the 20th century. The internal nares could be one set of the external ones (usually presumed to be the posterior pair) that have migrated into the mouth, or the internal pair could be a newly evolved structure. To make way for a migration, however, the two tooth-bearing bones of the upper jaw, the maxilla and the premaxilla, would have to separate to let the nostril through and then rejoin; until recently, there was no evidence for a transitional stage, with the two bones disconnected. Such evidence is now available: a small lobe-finned fish called Kenichthys, found in China and dated at around 395 million years old, represents evolution "caught in mid-act", with the maxilla and premaxilla separated and an aperture—the incipient choana—on the lip in between the two bones. Kenichthys is more closely related to tetrapods than is the coelacanth, which has only external nares; it thus represents an intermediate stage in the evolution of the tetrapod condition. The reason for the evolutionary movement of the posterior nostril from the nose to lip, however, is not well understood. Into the shallows The relatives of Kenichthys soon established themselves in the waterways and brackish estuaries and became the most numerous of the bony fishes throughout the Devonian and most of the Carboniferous. 
The basic anatomy of the group is well known thanks to the very detailed work on Eusthenopteron by Erik Jarvik in the second half of the 20th century. The bones of the skull roof were broadly similar to those of early tetrapods and the teeth had an infolding of the enamel similar to that of labyrinthodonts. The paired fins had a build with bones distinctly homologous to the humerus, ulna, and radius in the fore-fins and to the femur, tibia, and fibula in the pelvic fins. There were a number of families: Rhizodontida, Canowindridae, Elpistostegidae, Megalichthyidae, Osteolepidae and Tristichopteridae. Most were open-water fishes, and some grew to very large sizes; adult specimens are several meters in length. The Rhizodontid Rhizodus is estimated to have grown to , making it the largest freshwater fish known. While most of these were open-water fishes, one group, the Elpistostegalians, adapted to life in the shallows. They evolved flat bodies for movement in very shallow water, and the pectoral and pelvic fins took over as the main propulsion organs. Most median fins disappeared, leaving only a protocercal tailfin. Since the shallows were subject to occasional oxygen deficiency, the ability to breathe atmospheric air with the swim bladder became increasingly important. The spiracle became large and prominent, enabling these fishes to draw air. Skull morphology The tetrapods have their root in the early Devonian tetrapodomorph fish. Primitive tetrapods developed from an osteolepid tetrapodomorph lobe-finned fish (sarcopterygian-crossopterygian), with a two-lobed brain in a flattened skull. The coelacanth group represents marine sarcopterygians that never acquired these shallow-water adaptations. The sarcopterygians apparently took two different lines of descent and are accordingly separated into two major groups: the Actinistia (including the coelacanths) and the Rhipidistia (which include extinct lines of lobe-finned fishes that evolved into the lungfish and the tetrapodomorphs). From fins to feet The oldest known tetrapodomorph is Kenichthys from China, dated at around 395 million years old. Two of the earliest tetrapodomorphs, dating from 380 Ma, were Gogonasus and Panderichthys. They had choanae and used their fins to move through tidal channels and shallow waters choked with dead branches and rotting plants. Their fins could have been used to attach themselves to plants or similar while they were lying in ambush for prey. The universal tetrapod characteristics of front limbs that bend forward from the elbow and hind limbs that bend backward from the knee can plausibly be traced to early tetrapods living in shallow water. Pelvic bone fossils from Tiktaalik shows, if representative for early tetrapods in general, that hind appendages and pelvic-propelled locomotion originated in water before terrestrial adaptations. Another indication that feet and other tetrapod traits evolved while the animals were still aquatic is how they were feeding. They did not have the modifications of the skull and jaw that allowed them to swallow prey on land. Prey could be caught in the shallows, at the water's edge or on land, but had to be eaten in water where hydrodynamic forces from the expansion of their buccal cavity would force the food into their esophagus. It has been suggested that the evolution of the tetrapod limb from fins in lobe-finned fishes is related to expression of the HOXD13 gene or the loss of the proteins actinodin 1 and actinodin 2, which are involved in fish fin development. 
Robot simulations suggest that the necessary nervous circuitry for walking evolved from the nerves governing swimming, utilizing the sideways oscillation of the body with the limbs primarily functioning as anchoring points and providing limited thrust. This type of movement, as well as changes to the pectoral girdle are similar to those seen in the fossil record, can be induced in bichirs by raising them out of water. A 2012 study using 3D reconstructions of Ichthyostega concluded that it was incapable of typical quadrupedal gaits. The limbs could not move alternately as they lacked the necessary rotary motion range. In addition, the hind limbs lacked the necessary pelvic musculature for hindlimb-driven land movement. Their most likely method of terrestrial locomotion is that of synchronous "crutching motions", similar to modern mudskippers. (Viewing several videos of mudskipper "walking" shows that they move by pulling themselves forward with both pectoral fins at the same time (left & right pectoral fins move simultaneously, not alternatively). The fins are brought forward and planted; the shoulders then rotate rearward, advancing the body & dragging the tail as a third point of contact. There are no rear "limbs"/fins, and there is no significant flexure of the spine involved.) Denizens of the swamp The first tetrapods probably evolved in coastal and brackish marine environments, and in shallow and swampy freshwater habitats. Formerly, researchers thought the timing was towards the end of the Devonian. In 2010, this belief was challenged by the discovery of the oldest known tetrapod tracks named the Zachelmie trackways, preserved in marine sediments of the southern coast of Laurasia, now Świętokrzyskie (Holy Cross) Mountains of Poland. They were made during the Eifelian age, early Middle Devonian. The tracks, some of which show digits, date to about 395 million years ago—18 million years earlier than the oldest known tetrapod body fossils. Additionally, the tracks show that the animal was capable of thrusting its arms and legs forward, a type of motion that would have been impossible in tetrapodomorph fish like Tiktaalik. The animal that produced the tracks is estimated to have been up to long with footpads up to wide, although most tracks are only wide. The new finds suggest that the first tetrapods may have lived as opportunists on the tidal flats, feeding on marine animals that were washed up or stranded by the tide. Currently, however, fish are stranded in significant numbers only at certain times of year, as in alewife spawning season; such strandings could not provide a significant supply of food for predators. There is no reason to suppose that Devonian fish were less prudent than those of today. According to Melina Hale of University of Chicago, not all ancient trackways are necessarily made by early tetrapods, but could also be created by relatives of the tetrapods who used their fleshy appendages in a similar substrate-based locomotion. Palaeozoic tetrapods Devonian tetrapods Research by Jennifer A. Clack and her colleagues showed that the very earliest tetrapods, animals similar to Acanthostega, were wholly aquatic and quite unsuited to life on land. This is in contrast to the earlier view that fish had first invaded the land — either in search of prey (like modern mudskippers) or to find water when the pond they lived in dried out — and later evolved legs, lungs, etc. 
By the late Devonian, land plants had stabilized freshwater habitats, allowing the first wetland ecosystems to develop, with increasingly complex food webs that afforded new opportunities. Freshwater habitats were not the only places to find water filled with organic matter and dense vegetation near the water's edge. Swampy habitats like shallow wetlands, coastal lagoons and large brackish river deltas also existed at this time, and there is much to suggest that this is the kind of environment in which the tetrapods evolved. Early fossil tetrapods have been found in marine sediments, and because fossils of primitive tetrapods in general are found scattered all around the world, they must have spread by following the coastal lines — they could not have lived in freshwater only. One analysis from the University of Oregon suggests no evidence for the "shrinking waterhole" theory — transitional fossils are not associated with evidence of shrinking puddles or ponds — and indicates that such animals would probably not have survived short treks between depleted waterholes. The new theory suggests instead that proto-lungs and proto-limbs were useful adaptations to negotiate the environment in humid, wooded floodplains. The Devonian tetrapods went through two major bottlenecks during what is known as the Late Devonian extinction; one at the end of the Frasnian stage, and one twice as large at the end of the following Famennian stage. These events of extinctions led to the disappearance of primitive tetrapods with fish-like features like Ichthyostega and their primary more aquatic relatives. When tetrapods reappear in the fossil record after the Devonian extinctions, the adult forms are all fully adapted to a terrestrial existence, with later species secondarily adapted to an aquatic lifestyle. Lungs It is now clear that the common ancestor of the bony fishes (Osteichthyes) had a primitive air-breathing lung—later evolved into a swim bladder in most actinopterygians (ray-finned fishes). This suggests that crossopterygians evolved in warm shallow waters, using their simple lung when the oxygen level in the water became too low. Fleshy lobe-fins supported on bones rather than ray-stiffened fins seem to have been an ancestral trait of all bony fishes (Osteichthyes). The lobe-finned ancestors of the tetrapods evolved them further, while the ancestors of the ray-finned fishes (Actinopterygii) evolved their fins in a different direction. The most primitive group of actinopterygians, the bichirs, still have fleshy frontal fins. Fossils of early tetrapods Nine genera of Devonian tetrapods have been described, several known mainly or entirely from lower jaw material. All but one were from the Laurasian supercontinent, which comprised Europe, North America and Greenland. The only exception is a single Gondwanan genus, Metaxygnathus, which has been found in Australia. The first Devonian tetrapod identified from Asia was recognized from a fossil jawbone reported in 2002. The Chinese tetrapod Sinostega pani was discovered among fossilized tropical plants and lobe-finned fish in the red sandstone sediments of the Ningxia Hui Autonomous Region of northwest China. This finding substantially extended the geographical range of these animals and has raised new questions about the worldwide distribution and great taxonomic diversity they achieved within a relatively short time. These earliest tetrapods were not terrestrial. 
The earliest confirmed terrestrial forms are known from the early Carboniferous deposits, some 20 million years later. Still, they may have spent very brief periods out of water and would have used their legs to paw their way through the mud. Why they went to land in the first place is still debated. One reason could be that the small juveniles who had completed their metamorphosis had what it took to make use of what land had to offer. Already adapted to breathe air and move around in shallow waters near land as a protection (just as modern fish and amphibians often spend the first part of their life in the comparative safety of shallow waters like mangrove forests), two very different niches partially overlapped each other, with the young juveniles in the diffuse line between. One of them was overcrowded and dangerous while the other was much safer and much less crowded, offering less competition over resources. The terrestrial niche was also a much more challenging place for primarily aquatic animals, but because of the way evolution and selection pressure work, those juveniles who could take advantage of this would be rewarded. Once they gained a small foothold on land, thanks to their pre-adaptations, favourable variations in their descendants would gradually result in continuing evolution and diversification. At this time the abundance of invertebrates crawling around on land and near water, in moist soil and wet litter, offered a food supply. Some were even big enough to eat small tetrapods, but the land was free from dangers common in the water. From water to land Initially making only tentative forays onto land, tetrapods adapted to terrestrial environments over time and spent longer periods away from the water. It is also possible that the adults started to spend some time on land (as the skeletal modifications in early tetrapods such as Ichthyostega suggests) to bask in the sun close to the water's edge, while otherwise being mostly aquatic. However, recent microanatomical and histological analysis of tetrapod fossil specimens found that early tetrapods like Acanthostega were fully aquatic, suggesting that adaptation to land happened later. Research by Per Ahlberg and colleagues suggest that tides could have been a driving force for the evolution of tetrapods. The hypothesis proposes that as "the tide retreated, fishes became stranded in shallow water tidal-pool environments, where they would be subjected to raised temperatures and hypoxic conditions" and then fishes that developed "efficient air-breathing organs, as well as for appendages adapted for land navigation" would be selected. Carboniferous tetrapods Until the 1990s, there was a 30 million year gap in the fossil record between the late Devonian tetrapods and the reappearance of tetrapod fossils in recognizable mid-Carboniferous amphibian lineages. It was referred to as "Romer's Gap", which now covers the period from about 360 to 345 million years ago (the Devonian-Carboniferous transition and the early Mississippian), after the palaeontologist who recognized it. During the "gap", tetrapod backbones developed, as did limbs with digits and other adaptations for terrestrial life. Ears, skulls and vertebral columns all underwent changes too. The number of digits on hands and feet became standardized at five, as lineages with more digits died out. Thus, those very few tetrapod fossils found in this "gap" are all the more prized by palaeontologists because they document these significant changes and clarify their history. 
The transition from an aquatic, lobe-finned fish to an air-breathing amphibian was a significant and fundamental one in the evolutionary history of the vertebrates. For an organism to live in a gravity-neutral aqueous environment, then colonize one that requires an organism to support its entire weight and possess a mechanism to mitigate dehydration, required significant adaptations or exaptations within the overall body plan, both in form and in function. Eryops, an example of an animal that made such adaptations, refined many of the traits found in its fish ancestors. Sturdy limbs supported and transported its body while out of water. A thicker, stronger backbone prevented its body from sagging under its own weight. Also, through the reshaping of vestigial fish jaw bones, a rudimentary middle ear began developing to connect to the piscine inner ear, allowing Eryops to amplify, and so better sense, airborne sound. By the Visean (mid early-Carboniferous) stage, the early tetrapods had radiated into at least three or four main branches. Some of these different branches represent the ancestors to all living tetrapods. This means that the common ancestor of all living tetrapods likely lived in the early Carboniferous. Under a narrow cladistic definition of Tetrapoda (also known as crown-Tetrapoda), which only includes descendants of this common ancestor, tetrapods first appeared in the Carboniferous. Recognizable early tetrapods (in the broad sense) are representative of the temnospondyls (e.g. Eryops) lepospondyls (e.g. Diplocaulus), anthracosaurs, which were the relatives and ancestors of the Amniota, and possibly the baphetids, which are thought to be related to temnospondyls and whose status as a main branch is yet unresolved. Depending on which authorities one follows, modern amphibians (frogs, salamanders and caecilians) are most probably derived from either temnospondyls or lepospondyls (or possibly both, although this is now a minority position). The first amniotes (clade of vertebrates that today includes reptiles, mammals, and birds) are known from the early part of the Late Carboniferous. By the Triassic, this group had already radiated into the earliest mammals, turtles, and crocodiles (lizards and birds appeared in the Jurassic, and snakes in the Cretaceous). This contrasts sharply with the (possibly fourth) Carboniferous group, the baphetids, which have left no extant surviving lineages. Carboniferous rainforest collapse Amphibians and reptiles were strongly affected by the Carboniferous rainforest collapse (CRC), an extinction event that occurred ~307 million years ago. The Carboniferous period has long been associated with thick, steamy swamps and humid rainforests. Since plants form the base of almost all of Earth's ecosystems, any changes in plant distribution have always affected animal life to some degree. The sudden collapse of the vital rainforest ecosystem profoundly affected the diversity and abundance of the major tetrapod groups that relied on it. The CRC, which was a part of one of the top two most devastating plant extinctions in Earth's history, was a self-reinforcing and very rapid change of environment wherein the worldwide climate became much drier and cooler overall (although much new work is being done to better understand the fine-grained historical climate changes in the Carboniferous-Permian transition and how they arose). 
The ensuing worldwide plant reduction resulting from the difficulties plants encountered in adjusting to the new climate caused a progressive fragmentation and collapse of rainforest ecosystems. This reinforced and so further accelerated the collapse by sharply reducing the amount of animal life which could be supported by the shrinking ecosystems at that time. The outcome of this animal reduction was a crash in global carbon dioxide levels, which impacted the plants even more. The aridity and temperature drop which resulted from this runaway plant reduction and decrease in a primary greenhouse gas caused the Earth to rapidly enter a series of intense Ice Ages. This impacted amphibians in particular in a number of ways. The enormous drop in sea level due to greater quantities of the world's water being locked into glaciers profoundly affected the distribution and size of the semiaquatic ecosystems which amphibians favored, and the significant cooling of the climate further narrowed the amount of new territory favorable to amphibians. Given that among the hallmarks of amphibians are an obligatory return to a body of water to lay eggs, a delicate skin prone to desiccation (thereby often requiring the amphibian to be relatively close to water throughout its life), and a reputation of being a bellwether species for disrupted ecosystems due to the resulting low resilience to ecological change, amphibians were particularly devastated, with the Labyrinthodonts among the groups faring worst. In contrast, reptiles - whose amniotic eggs have a membrane that enables gas exchange out of water, and which thereby can be laid on land - were better adapted to the new conditions. Reptiles invaded new niches at a faster rate and began diversifying their diets, becoming herbivorous and carnivorous, rather than feeding exclusively on insects and fish. Meanwhile, the severely impacted amphibians simply could not out-compete reptiles in mastering the new ecological niches, and so were obligated to pass the tetrapod evolutionary torch to the increasingly successful and swiftly radiating reptiles. Permian tetrapods In the Permian period: early "amphibia" (labyrinthodonts) clades included temnospondyl and anthracosaur; while amniote clades included the Sauropsida and the Synapsida. Sauropsida would eventually evolve into today's reptiles and birds; whereas Synapsida would evolve into today's mammals. During the Permian, however, the distinction was less clear—amniote fauna being typically described as either reptile or as mammal-like reptile. The latter (synapsida) were the most important and successful Permian animals. The end of the Permian saw a major turnover in fauna during the Permian–Triassic extinction event: probably the most severe mass extinction event of the phanerozoic. There was a protracted loss of species, due to multiple extinction pulses. Many of the once large and diverse groups died out or were greatly reduced. Mesozoic tetrapods Life on Earth seemed to recover quickly after the Permian extinctions, though this was mostly in the form of disaster taxa such as the hardy Lystrosaurus. Specialized animals that formed complex ecosystems with high biodiversity, complex food webs, and a variety of niches, took much longer to recover. Current research indicates that this long recovery was due to successive waves of extinction, which inhibited recovery, and to prolonged environmental stress to organisms that continued into the Early Triassic. 
Recent research indicates that recovery did not begin until the start of the mid-Triassic, 4M to 6M years after the extinction; and some writers estimate that the recovery was not complete until 30M years after the P-Tr extinction, i.e. in the late Triassic. A small group of reptiles, the diapsids, began to diversify during the Triassic, notably the dinosaurs. By the late Mesozoic, the large labyrinthodont groups that first appeared during the Paleozoic such as temnospondyls and reptile-like amphibians had gone extinct. All current major groups of sauropsids evolved during the Mesozoic, with birds first appearing in the Jurassic as a derived clade of theropod dinosaurs. Many groups of synapsids such as anomodonts and therocephalians that once comprised the dominant terrestrial fauna of the Permian also became extinct during the Mesozoic; during the Triassic, however, one group (Cynodontia) gave rise to the descendant taxon Mammalia, which survived through the Mesozoic to later diversify during the Cenozoic. Cenozoic tetrapods The Cenozoic era began with the end of the Mesozoic era and the Cretaceous epoch; and continues to this day. The beginning of the Cenozoic was marked by the Cretaceous-Paleogene extinction event during which all non-avian dinosaurs became extinct. The Cenozoic is sometimes called the "Age of Mammals". During the Mesozoic, the prototypical mammal was a small nocturnal insectivore something like a tree shrew. Due to their nocturnal habits, most mammals lost their color vision, and greatly improved their sense of olfaction and hearing. All mammals of today are shaped by this origin. Primates and some Australian marsupials later re-evolved color-vision. During the Paleocene and Eocene, most mammals remained small (under 20 kg). Cooling climate in the Oligocene and Miocene, and the expansion of grasslands favored the evolution of larger mammalian species. Ratites run, and penguins swim and waddle: but the majority of birds are rather small, and can fly. Some birds use their ability to fly to complete epic globe-crossing migrations, while others such as frigate birds fly over the oceans for months on end. Bats have also taken flight, and along with cetaceans have developed echolocation or sonar. Whales, seals, manatees, and sea otters have returned to the ocean and an aquatic lifestyle. Vast herds of ruminant ungulates populate the grasslands and forests. Carnivores have evolved to keep the herd-animal populations in check. Extant (living) tetrapods Following the great faunal turnover at the end of the Mesozoic, only seven groups of tetrapods were left, with one, the Choristodera, becoming extinct 11 million years ago due to unknown reasons. The other six persisting today also include many extinct members: Lissamphibia: frogs and toads, salamanders, and caecilians Testudines: turtle, tortoises and terrapins Lepidosauria: tuataras, lizards, amphisbaenians and snakes Crocodilia: crocodiles, alligators, caimans and gharials Neornithes: extant birds Mammalia: mammals References Works cited External links
Evolution of tetrapods
[ "Biology" ]
7,386
[ "Phylogenetics", "Evolution of tetrapods" ]
38,048,848
https://en.wikipedia.org/wiki/Teduglutide
Teduglutide, sold under the brand names Revestive (EU) and Gattex (US), is a 33-membered polypeptide and glucagon-like peptide-2 (GLP-2) analog that is used for the treatment of short bowel syndrome. It works by promoting mucosal growth and possibly restoring gastric emptying and secretion. It was approved in both the European Union and the United States in 2012. Medical uses Up to a certain point, the gut can adapt to partial resections that result in short bowel syndrome. Still, parenteral substitution of water, minerals and vitamins (depending on which part of the gut has been removed) is often necessary. Teduglutide may reduce or shorten the necessity of such infusions by improving the intestinal mucosa and possibly by other mechanisms. Adverse effects Common adverse effects in clinical studies included abdominal discomfort (49% of patients), respiratory infections (28%), nausea (27%) and vomiting (14%), local reactions at the injection site (21%), and headache (17%). Chemistry and mechanism of action Teduglutide differs from natural GLP-2 by a single amino acid: an alanine is replaced with a glycine. This blocks breaking down of the molecule by dipeptidyl peptidase and increases its half-life from seven minutes (GLP-2) to about two hours, while retaining its biological actions. These include maintenance of the intestinal mucosa, increasing intestinal blood flow, reducing gastrointestinal motility and secretion of gastric acid. Society and culture Legal status It was approved in both the European Union (brand name Revestive) and the United States (brand name Gattex) in 2012. It was granted orphan drug designation by the European Medicines Agency (EMA). References Drugs acting on the gastrointestinal system and metabolism Orphan drugs Peptides Drugs developed by Takeda Pharmaceutical Company
Teduglutide
[ "Chemistry" ]
420
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
38,050,156
https://en.wikipedia.org/wiki/RiAFP
RiAFP refers to an antifreeze protein (AFP) produced by the Rhagium inquisitor longhorned beetle. It is a type V antifreeze protein with a molecular weight of 12.8 kDa; this type of AFP is noted for its hyperactivity. R. inquisitor is a freeze-avoidant species, meaning that, due to its AFP, R. inquisitor prevents its body fluids from freezing altogether. This contrasts with freeze-tolerant species, whose AFPs simply depress levels of ice crystal formation in low temperatures. Whereas most insect antifreeze proteins contain cysteines at least every sixth residue, as well as varying numbers of 12- or 13-mer repeats of 8.3-12.5kDa, RiAFP is notable for containing only one disulfide bridge. This property of RiAFP makes it particularly attractive for recombinant expression and biotechnological applications. AFPs AFPs work through an interaction with small ice crystals that is similar to an enzyme-ligand binding mechanism which inhibits recrystallization of ice. This explanation of the interruption of the ice crystal structure by the AFP has come to be known as the adsorption-inhibition hypothesis. According to this hypothesis, AFPs disrupt the thermodynamically favourable growth of an ice crystal via kinetic inhibition of contact between solid ice and liquid water. In this manner, the nucleation sites of the ice crystal lattice are blocked by the AFP, inhibiting the rapid growth of the crystal that could be fatal for the organism. In physical chemistry terms, the AFPs adsorbed onto the exposed ice crystal force the growth of the ice crystal in a convex fashion as the temperature drops, which elevates the ice vapour pressure at the nucleation sites. Ice vapour pressure continues to increase until it reaches equilibrium with the surrounding solution (water), at which point the growth of the ice crystal stops. The aforementioned effect of AFPs on ice crystal nucleation is lost at the thermal hysteresis point. At a certain low temperature, the maximum convexity of the ice nucleation site is reached. Any further cooling will actually result in a "spreading" of the nucleation site away from this convex region, causing rapid, uncontrollable nucleation of the ice crystal. The temperature at which this phenomenon occurs is the thermal hysteresis point. The adsorption-inhibition hypothesis is further supported by the observation that antifreeze activity increases with increasing AFP concentration – the more AFPs adsorb onto the forming ice crystal, the more 'crowded' these proteins become, making ice crystal nucleation less favourable. In the R. inquisitor beetle, AFPs are found in the haemolymph, a fluid that bathes all the cells of the beetle and fills a cavity called the haemocoel. The presence of AFPs in R. inquisitor allows the tissues and fluids within the beetle to withstand freezing up to -30 °C (the thermal hysteresis point for this AFP). This strategy provides an obvious survival benefit to these beetles, who are endemic to cold climates, such as Scandinavia, Siberia, and Alaska. RiAFP Ice Binding The primary structure of RiAFP (the sequence may be found here) determined by Mass Spectroscopy, Edman degradation and by constructing a partial cDNA sequence and PCR have shown that a TxTxTxT internal repeat exists. Sequence logos constructed from the RiAFP internal repeats, have been particularly helpful in the determination of the consensus sequence of these repeats. 
The TxTxTxT domains are irregularly spaced within the protein and have been shown to be conserved from the TxT binding motif of other AFPs. The hydroxyl moiety of the T residues fits well, when spaced as they are in the internal repeats, with the hydroxyl moieties of externally facing water molecules in the forming ice lattice. This mimics the formation of the growth cone at a nucleation site in the absence of AFPs. Thus, the binding of RiAFP inhibits the growth of the crystal in the basal and prism planes of the ice. RiAFP Predicted Structure The fact that the binding motif appears as a "triplet" of the conserved TxT repeat, as well as the observation that blastp queries have returned no viable matches, has led some researchers to suggest that RiAFP represents a new type of AFP – one that differs from the heavily studied TmAFP (from T. molitor), DcAFP (from D. canadensis), and CfAFP (from C. fumiferana). On the basis of these observations, it has been predicted that the need for insect AFPs came about after insect evolutionary divergence, much like the evolution of fish AFPs; thus, different AFPs most likely evolved in parallel from adaptations to cold (environmental) stress. As a result, homology modelling with TmAFP, DcAFP, or CfAFP would prove to be fruitless. Secondary structure modelling algorithms have determined that the internal repeats are spaced sufficiently to tend towards β-strand configuration; no helical regions include the conserved repeats; and all turn regions are located at the ends of β-strand regions. These data suggest that RiAFP is a well-folded β-helical protein, having six β-strand regions consisting of 13-amino acids (including one TxTxTxT binding motif) per strand. Primary crystallographic studies, have been published on a RiAFP crystal (which diffracted to 1.3Å resolution) in the trigonal space group P3121 (or P3221), with unit-cell parameters a = b = 46.46, c = 193.21Å. References Further reading Proteins Cryobiology
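The TxTxTxT repeats described above lend themselves to a simple computational scan. The following Python sketch is a minimal illustration of locating every overlapping occurrence of the motif in a protein string; the example sequence is a made-up placeholder, not the actual RiAFP sequence, so the reported positions are purely illustrative.

import re

# Pattern for the TxTxTxT ice-binding motif: threonines at alternating positions,
# with any residue (x) in between. A lookahead is used so overlapping hits are found.
MOTIF = re.compile(r"(?=(T.T.T.T))")

def find_txtxtxt(seq):
    """Return (start, motif) pairs for every TxTxTxT occurrence in a protein sequence."""
    return [(m.start(), m.group(1)) for m in MOTIF.finditer(seq)]

# Placeholder sequence for illustration only -- NOT the real RiAFP sequence.
example = "GASTATATATGKAAATCTVTSTATATSTA"
for start, motif in find_txtxtxt(example):
    print(f"motif {motif!r} found at position {start}")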
RiAFP
[ "Physics", "Chemistry", "Biology" ]
1,213
[ "Biomolecules by chemical classification", "Physical phenomena", "Phase transitions", "Cryobiology", "Molecular biology", "Biochemistry", "Proteins" ]
38,050,347
https://en.wikipedia.org/wiki/Network%20scheduler
A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. Several network schedulers are available for different operating systems; they implement many of the existing network scheduling algorithms. The network scheduler logic decides which network packet to forward next. The network scheduler is associated with a queuing system, storing the network packets temporarily until they are transmitted. Systems may have a single queue or multiple queues, in which case each queue may hold the packets of one flow, classification, or priority. In some cases it may not be possible to schedule all transmissions within the constraints of the system. In these cases the network scheduler is responsible for deciding which traffic to forward and what gets dropped. Terminology and responsibilities A network scheduler may be responsible for implementing specific network traffic control initiatives. Network traffic control is an umbrella term for all measures aimed at reducing network congestion, latency and packet loss. Specifically, active queue management (AQM) is the selective dropping of queued network packets to achieve the larger goal of preventing excessive network congestion. The scheduler must choose which packets to drop. Traffic shaping smooths the bandwidth requirements of traffic flows by delaying the transmission of packets when they arrive in bursts. The scheduler decides the timing for the transmitted packets. Quality of service (QoS) is the prioritization of traffic based on service class (Differentiated services) or reserved connection (Integrated services). Algorithms In the course of time, many network queueing disciplines have been developed. Each of these provides specific reordering or dropping of network packets inside various transmit or receive buffers. Queuing disciplines are commonly used as attempts to compensate for various networking conditions, like reducing the latency for certain classes of network packets, and are generally used as part of QoS measures. Classful queueing disciplines allow the creation of classes, which work like branches on a tree. Rules can then be set to filter packets into each class. Each class can itself be assigned another classful or classless queueing discipline. Classless queueing disciplines, by contrast, do not allow further queueing disciplines to be attached to them. Examples of algorithms suitable for managing network traffic include first-in, first-out (FIFO) queueing, fair queueing and round-robin schemes, token bucket based traffic shapers, and active queue management algorithms such as random early detection (RED) and CoDel. Several of these have been implemented as Linux kernel modules and are freely available. Bufferbloat Bufferbloat is a phenomenon in packet-switched networks in which excess buffering of packets causes high latency and packet delay variation. Bufferbloat can be addressed by a network scheduler that strategically discards packets to avoid an unnecessarily high buffering backlog. Examples include CoDel, FQ-CoDel and random early detection. Implementations Linux kernel The Linux kernel packet scheduler is an integral part of the Linux kernel's network stack and manages the transmit and receive ring buffers of all NICs. The packet scheduler is configured using the utility called tc (short for traffic control).
As the default queuing discipline, the packet scheduler uses a FIFO implementation called pfifo_fast, although systemd since its version 217 changes the default queuing discipline to fq_codel. The ifconfig and ip utilities enable system administrators to configure the buffer sizes txqueuelen and rxqueuelen for each device separately in terms of number of Ethernet frames regardless of their size. The Linux kernel's network stack contains several other buffers, which are not managed by the network scheduler. Berkeley Packet Filter filters can be attached to the packet scheduler's classifiers. The eBPF functionality brought by version 4.1 of the Linux kernel in 2015 extends the classic BPF programmable classifiers to eBPF. These can be compiled using the LLVM eBPF backend and loaded into a running kernel using the tc utility. BSD and OpenBSD ALTQ is the implementation of a network scheduler for BSDs. As of OpenBSD version 5.5 ALTQ was replaced by the HFSC scheduler. Cell-Free Network Scheduling Schedulers in communication networks manage resource allocation, including packet prioritization, timing, and resource distribution. Advanced implementations increasingly leverage artificial intelligence to address the complexities of modern network configurations. For instance, a supervised neural network (NN)-based scheduler has been introduced in cell-free networks to efficiently handle interactions between multiple radio units (RUs) and user equipment (UEs). This approach reduces computational complexity while optimizing latency, throughput, and resource allocation, making it a promising solution for beyond-5G networks. See also Queueing theory Statistical time-division multiplexing Type of service Notes References Linux kernel features Network performance Network scheduling algorithms Network theory
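To make the forwarding and dropping decisions described above concrete, here is a minimal Python sketch of a toy classful queueing discipline with strict-priority dequeueing and tail drop. The class names, queue limits and packet format are invented for the example, and it does not correspond to any particular kernel qdisc.

from collections import deque

class PriorityScheduler:
    """Toy classful queueing discipline: one FIFO queue per traffic class,
    strict-priority dequeueing, tail drop when a class queue is full."""

    def __init__(self, classes):
        # classes: list of (name, max_queue_len), highest priority first (assumed ordering)
        self.queues = {name: deque() for name, _ in classes}
        self.limits = dict(classes)
        self.order = [name for name, _ in classes]
        self.dropped = 0

    def classify(self, packet):
        # Stand-in for a real filter/classifier; here we simply read a packet field.
        return packet.get("class", self.order[-1])

    def enqueue(self, packet):
        name = self.classify(packet)
        queue = self.queues[name]
        if len(queue) >= self.limits[name]:   # the scheduler decides what gets dropped
            self.dropped += 1
            return False
        queue.append(packet)
        return True

    def dequeue(self):
        # Decide which packet to forward next: highest-priority non-empty queue wins.
        for name in self.order:
            if self.queues[name]:
                return self.queues[name].popleft()
        return None

# Usage sketch: "voice" traffic is served before "bulk" even though it was enqueued later.
sched = PriorityScheduler([("voice", 10), ("bulk", 100)])
sched.enqueue({"class": "bulk", "payload": "b1"})
sched.enqueue({"class": "voice", "payload": "v1"})
print(sched.dequeue()["payload"])   # -> v1
print(sched.dequeue()["payload"])   # -> b1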
Network scheduler
[ "Mathematics" ]
989
[ "Network theory", "Mathematical relations", "Graph theory" ]
40,823,382
https://en.wikipedia.org/wiki/Reshetnyak%20gluing%20theorem
In metric geometry, the Reshetnyak gluing theorem gives information on the structure of a geometric object built by using as building blocks other geometric objects belonging to a well defined class. Intuitively, it states that a space obtained by joining (i.e. "gluing") together, in a precisely defined way, other spaces having a given curvature property inherits that very same property. The theorem was first stated and proved by Yurii Reshetnyak in 1968. Statement Theorem: Let $\{X_i\}_{i \in I}$ be a family of complete locally compact geodesic metric spaces of CAT($\kappa$) curvature, and let $C_i \subset X_i$ be convex subsets which are pairwise isometric. Then the space $X$, obtained by gluing all the $X_i$ along all the $C_i$ (identified with one another via the given isometries), is also a space of CAT($\kappa$) curvature. For an exposition and a proof of the Reshetnyak gluing theorem, see the references below. Notes References Theorems in geometry Metric geometry
Reshetnyak gluing theorem
[ "Mathematics" ]
181
[ "Mathematical theorems", "Mathematical problems", "Geometry", "Theorems in geometry" ]
43,676,249
https://en.wikipedia.org/wiki/Branchpoint%20Binding%20Protein
Branchpoint Binding Protein (BBP) is a pre-mRNA processing (splicing) factor. The protein complex binds to the branchpoint sequence (BPS) in the pre-mRNA, aiding in splice site recognition. The role of the protein has been studied in yeast cells, as it has been in other eukaryotic cells. The BPS that the protein binds to in yeast is UACUAAC. References Proteins RNA splicing
Branchpoint Binding Protein
[ "Chemistry" ]
90
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
43,677,277
https://en.wikipedia.org/wiki/Mean-field%20particle%20methods
Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depends on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size the system tends to infinity; that is, finite blocks of particles reduces to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model. History The theory of mean-field interacting particle models had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. The mathematical foundations of these classes of models were developed from the mid-1980s to the mid-1990s by several mathematicians, including Werner Braun, Klaus Hepp, Karl Oelschläger, Gérard Ben Arous and Marc Brunaud, Donald Dawson, Jean Vaillancourt and Jürgen Gärtner, Christian Léonard, Sylvie Méléard, Sylvie Roelly, Alain-Sol Sznitman and Hiroshi Tanaka for diffusion type models; F. Alberto Grünbaum, Tokuzo Shiga, Hiroshi Tanaka, Sylvie Méléard and Carl Graham for general classes of interacting jump-diffusion processes. We also quote an earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies. Mean-field genetic type particle methods are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. The Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field particle approximation of Feynman-Kac path integrals. 
The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984 In molecular chemistry, the use of genetic heuristic-like particle methods (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall. N. Rosenbluth and Arianna. W. Rosenbluth. The first pioneering articles on the applications of these heuristic-like particle methods in nonlinear filtering problems were the independent studies of Neil Gordon, David Salmon and Adrian Smith (bootstrap filter), Genshiro Kitagawa (Monte Carlo filter) , and the one by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut published in the 1990s. The term interacting "particle filters" was first coined in 1996 by Del Moral. Particle filters were also developed in signal processing in the early 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems. The foundations and the first rigorous analysis on the convergence of genetic type models and mean field Feynman-Kac particle methods are due to Pierre Del Moral in 1996. Branching type particle methods with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. The first uniform convergence results with respect to the time parameter for mean field particle models were developed in the end of the 1990s by Pierre Del Moral and Alice Guionnet for interacting jump type processes, and by Florent Malrieu for nonlinear diffusion type processes. New classes of mean field particle simulation techniques for Feynman-Kac path-integration problems includes genealogical tree based models, backward particle models, adaptive mean field particle models, island type particle models, and particle Markov chain Monte Carlo methods Applications In physics, and more particularly in statistical mechanics, these nonlinear evolution equations are often used to describe the statistical behavior of microscopic interacting particles in a fluid or in some condensed matter. In this context, the random evolution of a virtual fluid or a gas particle is represented by McKean-Vlasov diffusion processes, reaction–diffusion systems, or Boltzmann type collision processes. As its name indicates, the mean field particle model represents the collective behavior of microscopic particles weakly interacting with their occupation measures. The macroscopic behavior of these many-body particle systems is encapsulated in the limiting model obtained when the size of the population tends to infinity. Boltzmann equations represent the macroscopic evolution of colliding particles in rarefied gases, while McKean Vlasov diffusions represent the macroscopic behavior of fluid particles and granular gases. 
In computational physics and more specifically in quantum mechanics, the ground state energies of quantum systems are associated with the top of the spectrum of Schrödinger's operators. The Schrödinger equation is the quantum mechanics version of Newton's second law of motion of classical mechanics (the mass times the acceleration is the sum of the forces). This equation represents the wave function (a.k.a. the quantum state) evolution of some physical system, including molecular, atomic or subatomic systems, as well as macroscopic systems like the universe. The solution of the imaginary time Schrödinger equation (a.k.a. the heat equation) is given by a Feynman-Kac distribution associated with a free evolution Markov process (often represented by Brownian motions) in the set of electronic or macromolecular configurations and some potential energy function. The long time behavior of these nonlinear semigroups is related to top eigenvalues and ground state energies of Schrödinger's operators. The genetic type mean field interpretation of these Feynman-Kac models is termed Resampled Monte Carlo, or Diffusion Monte Carlo methods. These branching type evolutionary algorithms are based on mutation and selection transitions. During the mutation transition, the walkers evolve randomly and independently in a potential energy landscape on particle configurations. The mean field selection process (a.k.a. quantum teleportation, population reconfiguration, resampled transition) is associated with a fitness function that reflects the particle absorption in an energy well. Configurations with low relative energy are more likely to duplicate. In molecular chemistry and statistical physics, mean field particle methods are also used to sample Boltzmann-Gibbs measures associated with some cooling schedule, and to compute their normalizing constants (a.k.a. free energies, or partition functions). In computational biology, and more specifically in population genetics, spatial branching processes with competitive selection and migration mechanisms can also be represented by mean field genetic type population dynamics models. The first moments of the occupation measures of a spatial branching process are given by Feynman-Kac distribution flows. The mean field genetic type approximation of these flows offers a fixed population size interpretation of these branching processes. Extinction probabilities can be interpreted as absorption probabilities of some Markov process evolving in some absorbing environment. These absorption models are represented by Feynman-Kac models. The long time behavior of these processes conditioned on non-extinction can be expressed in an equivalent way by quasi-invariant measures, Yaglom limits, or invariant measures of nonlinear normalized Feynman-Kac flows. In computer science, and more particularly in artificial intelligence, these mean field type genetic algorithms are used as random search heuristics that mimic the process of evolution to generate useful solutions to complex optimization problems. These stochastic search algorithms belong to the class of evolutionary models. The idea is to propagate a population of feasible candidate solutions using mutation and selection mechanisms. The mean field interaction between the individuals is encapsulated in the selection and the cross-over mechanisms. 
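As a rough illustration of the last point, the following minimal sketch shows how a fitness-weighted resampling step — the interaction of each individual with the empirical measure of the population — combined with independent Gaussian mutations drives a population toward the minimiser of a simple objective. The objective function, Boltzmann-type weights and all tuning constants below are assumptions made purely for this example, not a prescription from the literature cited above.

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):
    return (x - 3.0) ** 2          # toy objective; its minimiser is x = 3

N, sigma, beta = 500, 0.3, 5.0     # population size, mutation scale, selection strength
population = rng.normal(0.0, 2.0, size=N)

for _ in range(100):
    # Selection: resample from the current empirical measure, weighted by fitness.
    weights = np.exp(-beta * objective(population))
    weights /= weights.sum()
    population = rng.choice(population, size=N, p=weights)
    # Mutation: independent random exploration around the selected individuals.
    population = population + rng.normal(0.0, sigma, size=N)

print(population.mean())           # concentrates near the minimiser x = 3
```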
In mean field games and multi-agent interacting systems theories, mean field particle processes are used to represent the collective behavior of complex systems with interacting individuals. In this context, the mean field interaction is encapsulated in the decision process of interacting agents. The limiting model as the number of agents tends to infinity is sometimes called the continuum model of agents. In information theory, and more specifically in statistical machine learning and signal processing, mean field particle methods are used to sample sequentially from the conditional distributions of some random process with respect to a sequence of observations or a cascade of rare events. In discrete time nonlinear filtering problems, the conditional distributions of the random states of a signal given partial and noisy observations satisfy a nonlinear updating-prediction evolution equation. The updating step is given by Bayes' rule, and the prediction step is a Chapman-Kolmogorov transport equation. The mean field particle interpretation of these nonlinear filtering equations is a genetic type selection-mutation particle algorithm. During the mutation step, the particles evolve independently of one another according to the Markov transitions of the signal. During the selection stage, particles with small relative likelihood values are killed, while the ones with high relative values are multiplied. These mean field particle techniques are also used to solve multiple-object tracking problems, and more specifically to estimate association measures. The continuous time versions of these particle models are mean field Moran type particle interpretations of the robust optimal filter evolution equations or the Kushner-Stratonovich stochastic partial differential equation. These genetic type mean field particle algorithms, also termed particle filters and sequential Monte Carlo methods, are extensively and routinely used in operations research and statistical inference. The term "particle filters" was first coined in 1996 by Del Moral, and the term "sequential Monte Carlo" by Liu and Chen in 1998. Subset simulation and Monte Carlo splitting techniques are particular instances of genetic particle schemes and Feynman-Kac particle models equipped with Markov chain Monte Carlo mutation transitions. Illustrations of the mean field simulation method Countable state space models To motivate the mean field simulation algorithm, we start with S, a finite or countable state space, and let P(S) denote the set of all probability measures on S. Consider a sequence of probability distributions on S satisfying an evolution equation: for some, possibly nonlinear, mapping These distributions are given by vectors that satisfy: Therefore, is a mapping from the -unit simplex into itself, where s stands for the cardinality of the set S. When s is too large, solving equation () is intractable or computationally very costly. One natural way to approximate these evolution equations is to reduce sequentially the state space using a mean field particle model. One of the simplest mean field simulation schemes is defined by the Markov chain on the product space , starting with N independent random variables with probability distribution and elementary transitions with the empirical measure where is the indicator function of the state x. In other words, given the samples are independent random variables with probability distribution . 
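The scheme just described can be sketched concretely on a toy three-state space, assuming a Feynman-Kac type map (selection by a fitness function G followed by mutation with a stochastic matrix M); the state space, G, M and all constants are illustrative assumptions chosen only so the code is self-contained and runnable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite state space {0, 1, 2}, an assumed fitness function G and an assumed
# stochastic matrix M; the nonlinear map is the Feynman-Kac type update
#   Phi(eta)(y) = sum_x eta(x) G(x) M(x, y) / sum_x eta(x) G(x).
S = np.arange(3)
G = np.array([0.5, 1.0, 2.0])
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def Phi(eta):
    selected = eta * G
    selected /= selected.sum()          # Boltzmann-Gibbs (selection) step
    return selected @ M                 # mutation (Markov transport) step

def mean_field_step(particles):
    # Empirical measure of the current particle cloud ...
    eta = np.bincount(particles, minlength=len(S)) / len(particles)
    # ... plugged into the evolution equation, then sampled N times i.i.d.
    return rng.choice(S, size=len(particles), p=Phi(eta))

N = 10_000
particles = rng.choice(S, size=N, p=[1/3, 1/3, 1/3])
for _ in range(25):
    particles = mean_field_step(particles)

# Exact (deterministic) flow of measures, for comparison with the particle cloud.
eta = np.full(3, 1/3)
for _ in range(25):
    eta = Phi(eta)
print(np.bincount(particles, minlength=3) / N, eta)
```

For large N the empirical frequencies of the particle cloud stay close to the exact flow, which is the convergence property stated below.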
The rationale behind this mean field simulation technique is the following: We expect that when is a good approximation of , then is an approximation of . Thus, since is the empirical measure of N conditionally independent random variables with common probability distribution , we expect to be a good approximation of . Another strategy is to find a collection of stochastic matrices indexed by such that This formula allows us to interpret the sequence as the probability distributions of the random states of the nonlinear Markov chain model with elementary transitions A collection of Markov transitions satisfying the equation () is called a McKean interpretation of the sequence of measures . The mean field particle interpretation of () is now defined by the Markov chain on the product space , starting with N independent random copies of and elementary transitions with the empirical measure Under some weak regularity conditions on the mapping for any function , we have the almost sure convergence These nonlinear Markov processes and their mean field particle interpretation can be extended to time non homogeneous models on general measurable state spaces. Feynman-Kac models To illustrate the abstract models presented above, we consider a stochastic matrix and some function . We associate with these two objects the mapping and the Boltzmann-Gibbs measures defined by We denote by the collection of stochastic matrices indexed by given by for some parameter . It is readily checked that the equation () is satisfied. In addition, we can also show (cf. for instance) that the solution of () is given by the Feynman-Kac formula with a Markov chain with initial distribution and Markov transition M. For any function we have If is the unit function and , then we have And the equation () reduces to the Chapman-Kolmogorov equation The mean field particle interpretation of this Feynman-Kac model is defined by sampling sequentially N conditionally independent random variables with probability distribution In other words, with a probability the particle evolves to a new state randomly chosen with the probability distribution ; otherwise, it jumps to a new location randomly chosen with a probability proportional to and evolves to a new state randomly chosen with the probability distribution If is the unit function and , the interaction between the particles vanishes and the particle model reduces to a sequence of independent copies of the Markov chain . When the mean field particle model described above reduces to a simple mutation-selection genetic algorithm with fitness function G and mutation transition M. These nonlinear Markov chain models and their mean field particle interpretation can be extended to time non homogeneous models on general measurable state spaces (including transition states, path spaces and random excursion spaces) and continuous time models. Gaussian nonlinear state space models We consider a sequence of real valued random variables defined sequentially by the equations with a collection of independent standard Gaussian random variables, a positive parameter σ, some functions and some standard Gaussian initial random state . We let be the probability distribution of the random state ; that is, for any bounded measurable function f, we have with The integral is the Lebesgue integral, and dx stands for an infinitesimal neighborhood of the state x. 
The Markov transition of the chain is given for any bounded measurable functions f by the formula with Using the tower property of conditional expectations we prove that the probability distributions satisfy the nonlinear equation for any bounded measurable functions f. This equation is sometimes written in the more synthetic form The mean field particle interpretation of this model is defined by the Markov chain on the product space by where stand for N independent copies of and respectively. For regular models (for instance for bounded Lipschitz functions a, b, c) we have the almost sure convergence with the empirical measure for any bounded measurable functions f (cf. for instance ). In the above display, stands for the Dirac measure at the state x. Continuous time mean field models We consider a standard Brownian motion (a.k.a. Wiener process) evaluated on a time mesh sequence with a given time step . We choose in equation (), we replace and σ by and , and we write instead of the values of the random states evaluated at the time step Recalling that are independent centered Gaussian random variables with variance the resulting equation can be rewritten in the following form When h → 0, the above equation converges to the nonlinear diffusion process The mean field continuous time model associated with these nonlinear diffusions is the (interacting) diffusion process on the product space defined by where are N independent copies of and For regular models (for instance for bounded Lipschitz functions a, b) we have the almost sure convergence , with and the empirical measure for any bounded measurable functions f (cf. for instance.). These nonlinear Markov processes and their mean field particle interpretation can be extended to interacting jump-diffusion processes. References External links Feynman-Kac models and interacting particle systems, theoretical aspects and a list of application domains of Feynman-Kac particle methods Sequential Monte Carlo method and particle filters resources Interacting Particle Systems resources QMC in Cambridge and around the world, general information about Quantum Monte Carlo EVOLVER Software package for stochastic optimisation using genetic algorithms CASINO Quantum Monte Carlo program developed by the Theory of Condensed Matter group at the Cavendish Laboratory in Cambridge Biips is a probabilistic programming software for Bayesian inference with interacting particle systems. Telecommunication theory Statistical data types Monte Carlo methods Statistical mechanics Sampling techniques Stochastic simulation Randomized algorithms Risk analysis methodologies
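Complementing the Gaussian nonlinear state space and continuous time models above, the following sketch simulates an interacting N-particle approximation of a McKean-Vlasov type recursion in which the drift depends on the law of the state. Since the article's own display formulas were lost in extraction, the functions a, b, c and all constants here are assumptions chosen only for illustration, not a restoration of the original model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Mean-field particle approximation of a nonlinear recursion of the form
#   X_{n+1} = a(X_n) + b( E[c(X_n)] ) + sigma * W_n,
# where the drift depends on the law of X_n itself (an assumed toy model).
a = lambda x: 0.9 * x
b = lambda m: 0.5 * np.tanh(m)
c = lambda x: x
sigma, N, steps = 0.3, 50_000, 60

xi = rng.normal(0.0, 1.0, size=N)          # N independent copies of the initial state
for _ in range(steps):
    mean_field = np.mean(c(xi))            # empirical measure replaces E[c(X_n)]
    xi = a(xi) + b(mean_field) + sigma * rng.normal(size=N)

print(np.mean(xi), np.std(xi))             # empirical law of X_n after `steps` iterations
```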
Mean-field particle methods
[ "Physics" ]
3,709
[ "Monte Carlo methods", "Statistical mechanics", "Computational physics" ]
43,680,319
https://en.wikipedia.org/wiki/Cecilia%20Jarlskog
Cecilia Jarlskog (born in 1941) is a Swedish theoretical physicist, working mainly on elementary particle physics. Jarlskog obtained her doctorate in 1970 in theoretical particle physics at the Technical University of Lund. She is known for her work on CP violation in the electroweak sector of the Standard Model, introducing what is known as the Jarlskog invariant, and for her work on grand unified theories (see Georgi–Jarlskog mass relation). Research interests Cecilia Jarlskog is mainly known for her study and expertise in theoretical particle physics. Her studies include research on the ways that sub-atomic and electronic constituents of matter cohere or lose their symmetry, matter and antimatter asymmetry, mathematical physics, neutrino physics, and grand unification. The Jarlskog invariant, or rephasing-invariant CP violation parameter, is an invariant quantity in particle physics, which is of the order of ±2.8 x 10−5. This parameter is related to the unitarity conditions of the Cabibbo–Kobayashi–Maskawa matrix, which can be expressed as triangles whose sides are products of different elements of the matrix. As such, the Jarlskog invariant can be written as J = ±Im(VusVcbV*ubV*cs), where the asterisk denotes complex conjugation; it amounts to twice the area of the unitarity triangle. Because the area vanishes for the specific parameters in the Standard Model for which there would be no CP violation, this invariant is thus very useful to quantify the non-conservation of the CP-symmetry in elementary particle physics. It is one of Jarlskog's foremost contributions to physics, the other being the many years that she was an active member of CERN. She recalls her appreciation of the international atmosphere at CERN (the European Organization for Nuclear Research). Being a part of this community gave her great opportunities to meet and talk with inspiring physicists from across the world. She noted that she felt fortunate to have 'lived in a period when the amount of information revealed about the nature of the elementary constituents of matter and their interactions has been mind-boggling'. At CERN, physicists and engineers probe the fundamental structure of the universe. The world's largest and most complex scientific instruments are employed to study the basic constituents of matter – fundamental particles. The particles are caused to collide at close to the speed of light, which affords physicists clues about the interactions of particles, and insights into the fundamental laws of nature. Career Jarlskog was appointed professor at the University of Bergen, Norway, in 1976. In 1985 she switched to the University of Stockholm, Sweden, staying there until 1994. Since then, Jarlskog has been a professor at Lund University, her alma mater, where she had graduated in 1970 with a PhD in theoretical particle physics. Jarlskog worked as a member of CERN from 1970 to 1972. In addition, she served on the CERN Scientific Policy Committee from 1982 to 1988. From 1998 to 2004, she served as Advisor to the Director General of CERN on Member States. Jarlskog was also appointed by the Royal Swedish Academy of Sciences as one of the five members of the Nobel Committee for Physics from 1989 to 2000, serving as chairman of that committee in 1999, when the prize was awarded to Gerard 't Hooft and Martinus J. G. Veltman. 
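For illustration of the invariant described above, the following sketch computes J numerically from the standard parametrization of the CKM matrix and checks it against the closed-form expression. The mixing angles and CP phase used are rough, assumed input values chosen only to reproduce the quoted order of magnitude, not authoritative measurements:

```python
import numpy as np

# Illustrative (assumed) CKM mixing angles and CP-violating phase, roughly PDG-like.
th12, th13, th23, delta = np.radians(13.0), np.radians(0.21), np.radians(2.4), np.radians(68.0)
s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)
eid = np.exp(1j * delta)                     # e^{i delta}

# CKM matrix in the standard parametrization (rows u, c, t; columns d, s, b).
V = np.array([
    [ c12*c13,                    s12*c13,                    s13/eid ],
    [-s12*c23 - c12*s23*s13*eid,  c12*c23 - s12*s23*s13*eid,  s23*c13 ],
    [ s12*s23 - c12*c23*s13*eid, -c12*s23 - s12*c23*s13*eid,  c23*c13 ],
])

# J = Im(Vus Vcb Vub* Vcs*), i.e. rows u, c and columns s, b of the matrix above.
J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
# Closed form for cross-checking: J = s12 s13 s23 c12 c23 c13^2 sin(delta).
J_closed = s12 * s13 * s23 * c12 * c23 * c13**2 * np.sin(delta)
print(f"J = {J:.2e}  (closed form {J_closed:.2e})")   # both ~3e-5 for these angles
```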
In 2023, Cecilia Jarlskog received the EPS High Energy and Particle Physics Prize, which is awarded by the European Physical Society for outstanding contributions in experimental, theoretical or technological achievements. The prize recognized her "discovery of an invariant measure of CP violation in both quark and lepton sectors." Jarlskog is an Honorary Professor at three universities in China and received an honorary degree from University College Dublin. She was also elected a member of the Swedish Academy of Sciences (1984), a member of the Norwegian Academy of Sciences (1987), a member of the Board of Trustees of the Nobel Foundation (1996) and a member of the Academia Europaea (2005). Books and articles Cecilia Jarlskog wrote the book, Portrait of Gunnar Källén: A Physics Shooting Star and Poet of Early Quantum Field Theory, while a member of CERN. Here she relates the accomplishments of a comparatively unknown physicist in quantum physics. Jarlskog has written many articles in her lifetime, among them "Invariants of Lepton Mass Matrices and CP and T violation in Neutrino Oscillations", "On the Wings of Physics" and "Ambiguities Pertaining to Quark-Lepton Complementarity." External links Scientific publications of Cecilia Jarlskog on INSPIRE-HEP References 1941 births People associated with CERN Living people Lund University alumni Particle physicists Swedish physicists Theoretical physicists Members of Academia Europaea Swedish women physicists Presidents of the International Union of Pure and Applied Physics Members of the Royal Swedish Academy of Sciences
Cecilia Jarlskog
[ "Physics" ]
1,038
[ "Theoretical physics", "Theoretical physicists", "Particle physics", "Particle physicists" ]
53,750,726
https://en.wikipedia.org/wiki/Combinatorial%20mirror%20symmetry
A purely combinatorial approach to mirror symmetry was suggested by Victor Batyrev using the polar duality for -dimensional convex polyhedra. The most famous examples of the polar duality are provided by the Platonic solids: e.g., the cube is dual to the octahedron and the dodecahedron is dual to the icosahedron. There is a natural bijection between the -dimensional faces of a -dimensional convex polyhedron and -dimensional faces of the dual polyhedron and one has . In Batyrev's combinatorial approach to mirror symmetry the polar duality is applied to special -dimensional convex lattice polytopes which are called reflexive polytopes. It was observed by Victor Batyrev and Duco van Straten that the method of Philip Candelas et al. for computing the number of rational curves on Calabi–Yau quintic 3-folds can be applied to arbitrary Calabi–Yau complete intersections using the generalized -hypergeometric functions introduced by Israel Gelfand, Michail Kapranov and Andrei Zelevinsky (see also the talk of Alexander Varchenko), where is the set of lattice points in a reflexive polytope . The combinatorial mirror duality for Calabi–Yau hypersurfaces in toric varieties has been generalized by Lev Borisov to the case of Calabi–Yau complete intersections in Gorenstein toric Fano varieties. Using the notions of dual cone and polar cone one can consider the polar duality for reflexive polytopes as a special case of the duality for convex Gorenstein cones and of the duality for Gorenstein polytopes. For any fixed natural number there exists only a finite number of -dimensional reflexive polytopes up to a -isomorphism. The number is known only for : , , , The combinatorial classification of -dimensional reflexive simplices up to a -isomorphism is closely related to the enumeration of all solutions of the diophantine equation . The classification of 4-dimensional reflexive polytopes up to a -isomorphism is important for constructing many topologically different 3-dimensional Calabi–Yau manifolds using hypersurfaces in 4-dimensional toric varieties which are Gorenstein Fano varieties. The complete list of 3-dimensional and 4-dimensional reflexive polytopes has been obtained by physicists Maximilian Kreuzer and Harald Skarke using special software in Polymake. A mathematical explanation of the combinatorial mirror symmetry has been obtained by Lev Borisov via vertex operator algebras, which are algebraic counterparts of conformal field theories. See also Toric variety Homological mirror symmetry Mirror symmetry (string theory) References Algebraic geometry Mathematical physics Duality theories String theory
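The cube/octahedron instance of polar duality mentioned above can be checked numerically. The sketch below is only an illustration of polar duality for a polytope containing the origin in its interior (not part of Batyrev's construction itself): it recovers the vertices of the polar dual from the facet inequalities of the input polytope.

```python
import numpy as np
from scipy.spatial import ConvexHull

def polar_dual_vertices(vertices):
    """Vertices of the polar dual P* = {y : <x, y> <= 1 for all x in P},
    assuming the origin lies in the interior of P."""
    hull = ConvexHull(vertices)
    duals = []
    for eq in hull.equations:
        # Each facet satisfies normal . x + offset <= 0, i.e. normal . x <= -offset.
        normal, offset = eq[:-1], eq[-1]
        duals.append(normal / (-offset))       # facet n.x <= c maps to dual vertex n/c
    # Qhull triangulates facets, so duplicate planes can appear; deduplicate them.
    return np.unique(np.round(duals, 9), axis=0)

# 3-cube with vertices (+-1, +-1, +-1): its polar dual is the octahedron (+-e_i).
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
print(polar_dual_vertices(cube))
```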
Combinatorial mirror symmetry
[ "Physics", "Astronomy", "Mathematics" ]
570
[ "Astronomical hypotheses", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Fields of abstract algebra", "Category theory", "Duality theories", "Geometry", "Algebraic geometry", "String theory", "Mathematical physics" ]
53,751,966
https://en.wikipedia.org/wiki/Michael%20B%C3%BChl
Michael Bühl is a professor of Computational and Theoretical Chemistry in the School of Chemistry, University of St. Andrews. He has published work on the performance of various density functionals, modelling thermal and medium effects, transition-metal NMR of metalloenzymes, modelling of homogeneous catalysis, and molecular dynamics of transition metal complexes. Biography Bühl was born in 1962. He earned his PhD at the University of Erlangen-Nuremberg's Institute for Organic Chemistry (Institut für organische Chemie), where his thesis advisor was Paul von Ragué Schleyer. In 1992, he worked as a post-doctoral researcher with Henry F. Schaefer III (University of Georgia). He was an Oberassistent at the Institute of Organic Chemistry, University of Zürich between 1993 and 1999. In 1999, he also worked at Max-Planck-Institut für Kohlenforschung, Mülheim. He was on the faculty at the University of Zürich from 1998 to 2000 and then at University of Wuppertal from 2000 to 2008. He is Chair of Computational Chemistry at the University of St. Andrews since 2008. Research interests Bühl's group applies the tools of computational quantum chemistry to study a variety of chemical and biochemical systems and their properties, focussing on transition-metal and f-element chemistry, homogeneous and bio-catalysis, and NMR properties. The methods employed are mostly rooted in density-functional theory (DFT), including quantum-mechanical/molecular-mechanical (QM/MM) calculations and first-principles molecular dynamics simulations. References External links Buehl's research group Scottish chemists Academics of the University of St Andrews Living people 1962 births Computational chemistry
Michael Bühl
[ "Chemistry" ]
353
[ "Theoretical chemistry", "Computational chemistry" ]
55,267,887
https://en.wikipedia.org/wiki/JASPAR
JASPAR is an open access and widely used database of manually curated, non-redundant transcription factor (TF) binding profiles stored as position frequency matrices (PFM) and transcription factor flexible models (TFFM) for TFs from species in six taxonomic groups. From the supplied PFMs, users may generate position-specific weight matrices (PWM). The JASPAR database was introduced in 2004. Major updates and new releases followed in 2006, 2008, 2010, 2014, 2016, 2018, 2020 and 2022, the latest release of JASPAR. Availability The JASPAR database is open-source and freely available to the scientific community at http://jaspar.genereg.net/. Similar databases TRANSFAC – Transcription Factor Database HOCOMOCO - HOmo sapiens COmprehensive MOdel COllection PAZAR - Database with focus on experimentally validated transcription factor binding sites TFe – the transcription factor encyclopedia AnimalTFDB – Animal transcription factor database PlantCARE – cis-regulatory elements and transcription factors in plants (2002) RegTransBase - transcription factor binding sites in a diverse set of bacteria. RegulonDB – Primary database on transcriptional regulation in Escherichia coli TRRD – Transcription Regulatory Regions Database, mainly about regulatory regions and TF-binding sites PlantRegMap Plant Transcriptional Regulatory Map MethMotif An integrative database of cell-specific transcription factor binding site motifs coupled with DNA methylation profiles. References Transcription factors
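As a rough illustration of how a PWM is typically derived from a PFM, the sketch below converts a toy count matrix into log2-odds scores against a uniform background. The toy matrix, the background frequencies and the pseudocount are assumptions made for the example, not values taken from JASPAR itself:

```python
import numpy as np

# Toy position frequency matrix (rows A, C, G, T; columns = motif positions).
# This PFM is made up for illustration; real profiles come from the JASPAR download.
pfm = np.array([
    [ 8,  0, 20,  1],   # A
    [ 2,  1,  0,  0],   # C
    [ 9, 19,  0,  1],   # G
    [ 1,  0,  0, 18],   # T
], dtype=float)

background = np.array([0.25, 0.25, 0.25, 0.25])   # assumed uniform background
pseudocount = 0.8                                  # assumed small regularising constant

counts_per_column = pfm.sum(axis=0)
# Probability matrix, with the pseudocount distributed according to the background.
ppm = (pfm + pseudocount * background[:, None]) / (counts_per_column + pseudocount)
# Position weight matrix: log2 odds of each base versus the background.
pwm = np.log2(ppm / background[:, None])
print(np.round(pwm, 2))
```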
JASPAR
[ "Chemistry", "Biology" ]
304
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
55,270,130
https://en.wikipedia.org/wiki/Pschorr%20cyclization
The Pschorr cyclization is a name reaction in organic chemistry, which was named after its discoverer, the German chemist Robert Pschorr (1868-1930). It describes the intramolecular substitution of aromatic compounds via aryldiazonium salts as intermediates and is catalyzed by copper. The reaction is a variant of the Gomberg-Bachmann reaction. The following reaction scheme shows the Pschorr cyclization for the example of phenanthrene: Reaction mechanism In the course of the Pschorr cyclization, a diazotization of the starting compound occurs, so that an aryldiazonium salt is formed as an intermediate. For this, sodium nitrite is added to hydrochloric acid to obtain nitrous acid. The nitrous acid is protonated and reacts with another equivalent of nitrous acid to form intermediate 1, which is later used for the diazotization of the aromatic amine: Intermediate 1 reacts with the starting compound in the following way: it replaces a hydrogen atom of the amino group of the starting compound. A nitroso group is thereby introduced as a new substituent, producing intermediate 2 with release of nitrous acid. Intermediate 2 then reacts via tautomerism and dehydration to give the aryldiazonium cation 3. Nitrogen is then cleaved from the aryldiazonium cation 3 with the help of the copper catalyst. The aryl radical thus formed undergoes ring closure to give intermediate 4. Finally, rearomatization takes place, again mediated by the copper catalyst, and phenanthrene is formed. Atom economy The Pschorr cyclization has a relatively good atom economy, since essentially only nitrogen is produced as a waste material. For the diazotization, two equivalents of nitrous acid are used, of which one equivalent is re-formed in the course of the reaction. The copper is used in catalytic amounts only and therefore does not affect the atom economy of the reaction. However, it should be noted that the Pschorr cyclization often gives only low yields. References Name reactions
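The atom economy statement above can be made concrete with a small calculation. The sketch below assumes, purely for illustration, a 2-aminostilbene-type substrate for the phenanthrene example and counts only the net equivalent of nitrous acid as co-reactant (as in the discussion above); with a different choice of reactants the figure would change:

```python
# Atom economy = MW(desired product) / sum of MW(all reactants) * 100 %
# Assumed balanced equation for the phenanthrene example:
#   C14H13N (2-aminostilbene) + HNO2  ->  C14H10 (phenanthrene) + N2 + 2 H2O
masses = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def mw(formula):
    """Molar mass from a dict of element counts."""
    return sum(masses[el] * n for el, n in formula.items())

aminostilbene = mw({"C": 14, "H": 13, "N": 1})    # ~195.3 g/mol
nitrous_acid  = mw({"H": 1, "N": 1, "O": 2})      # ~47.0 g/mol
phenanthrene  = mw({"C": 14, "H": 10})            # ~178.2 g/mol

atom_economy = phenanthrene / (aminostilbene + nitrous_acid) * 100
print(f"Atom economy: {atom_economy:.1f} %")      # roughly 74 %
```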
Pschorr cyclization
[ "Chemistry" ]
451
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
55,271,019
https://en.wikipedia.org/wiki/Arthur%20V.%20Tobolsky
Arthur Victor Tobolsky (1919–1972) was a professor in the chemistry department at Princeton University known for teaching and research in polymer science and rheology. Personal Tobolsky was born in New York City in 1919. He died unexpectedly at the age of 53 on September 7, 1972, while attending a conference in Utica, N.Y. Education Tobolsky graduated from Columbia in 1940, and received his PhD from Princeton in 1944. He studied under Henry Eyring and Hugh Stott Taylor. Career Early in his career, he spent one year at the Brooklyn Polytechnic Institute. After that, he spent the rest of his career in the Chemistry Department at Princeton. He served on the Editorial Boards of American Scientist, the Journal of Polymer Science, and the Journal of Applied Physics. In 1966, Tobolsky was elected a Fellow of the American Physical Society. His most cited work proposed a molecular theory of relaxing media. References Polymer scientists and engineers 1919 births 1972 deaths 20th-century American chemists Fellows of the American Physical Society Princeton University faculty
Arthur V. Tobolsky
[ "Chemistry", "Materials_science" ]
219
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
55,274,673
https://en.wikipedia.org/wiki/Single%20colour%20reflectometry
Single colour reflectometry (SCORE), formerly known as imaging Reflectometric Interferometry (iRIf) and 1-lambda Reflectometry, is a physical method based on the interference of monochromatic light at thin films, which is used to investigate (bio-)molecular interactions. The binding curves obtained using SCORE provide detailed information on the kinetics and thermodynamics of the observed interaction(s) as well as on the concentrations of the used analytes. These data can be relevant for pharmaceutical screening and drug design, biosensors and other biomedical applications, diagnostics, and cell-based assays. Principle The underlying principle corresponds to that of the Fabry-Pérot interferometer, which is also the underlying principle of white-light interferometry. Realisation / setup Monochromatic light is directed vertically onto the rear side of a transparent multi-layer substrate. The partial beams of the monochromatic light are transmitted and reflected at each interface of the multi-layer system. Superimposition of the reflected beams results in destructive or constructive interference (depending on the wavelength of the used light and the used substrate/multi-layer system materials) that can be detected as an intensity change of the reflected light using a photodiode, CCD, or CMOS element. The sensitive layer on top of the multi-layer system can be (bio-)chemically modified with receptor molecules, e.g. antibodies. Binding of specific ligands to the immobilised receptor molecules results in a change of the refractive index n and the physical thickness d of the sensitive layer. The product of n and d gives the optical thickness (n*d) of the sensitive layer. Monitoring the change of the reflected intensity of the used light over time results in binding curves that provide information on: concentration of the used ligand binding kinetics (association and dissociation rate constants) between receptor and ligand binding strength (affinity) between receptor and ligand specificity of the interaction between receptor and ligand Compared to bio-layer interferometry, which monitors the change of the interference pattern of reflected white light, SCORE only monitors the intensity change of the reflected light using a photodiode, CCD, or CMOS element. Thus, it is possible to analyse not only a single interaction but high-density arrays with up to 10,000 interactions per cm2. Compared to surface plasmon resonance (SPR), whose penetration depth is limited by the evanescent field, SCORE is limited by the coherence length of the light source, which is typically a few micrometers. This is especially relevant when investigating whole cell assays. Also, SCORE (as well as BLI) is not influenced by temperature fluctuations during the measurement, while SPR needs thermostabilisation. Application SCORE is especially used as a detection method in bio- and chemosensors. It is a label-free technique like Reflectometric interference spectroscopy (RIfS), Bio-layer Interferometry (BLI) and Surface plasmon resonance (SPR), which allows time-resolved observation of binding events on the sensor surface without the use of fluorescence or radioactive labels. The SCORE technology was commercialised by Biametrics GmbH, a service provider and instrument manufacturer with headquarters in Tübingen, Germany. In January 2020, Biametrics GmbH and its technology were acquired by BioCopy Holding AG, headquartered in Aadorf, Switzerland. 
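A common way to extract kinetic constants from binding curves such as those described above is to fit a simple 1:1 interaction model. The sketch below uses a generic mono-exponential association model and synthetic data; both the model and the numbers are assumptions for illustration, not the evaluation procedure of any particular SCORE instrument:

```python
import numpy as np
from scipy.optimize import curve_fit

def association(t, R_eq, k_obs):
    # Observed mono-exponential association phase of a 1:1 (Langmuir-type) model:
    #   R(t) = R_eq * (1 - exp(-k_obs * t)),  with  k_obs = ka * C + kd
    # for analyte concentration C.
    return R_eq * (1.0 - np.exp(-k_obs * t))

# Synthetic data standing in for one measured association phase (all values assumed).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 300.0, 200)                       # time in seconds
signal = association(t, R_eq=1.0, k_obs=0.011) + rng.normal(scale=0.01, size=t.size)

(R_eq, k_obs), _ = curve_fit(association, t, signal, p0=[0.5, 0.05])
print(f"R_eq = {R_eq:.3f}, k_obs = {k_obs:.4f} 1/s")

# Repeating the fit at several analyte concentrations C and regressing k_obs
# against C yields ka (slope) and kd (intercept), and hence KD = kd / ka.
```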
See also Reflectometric interference spectroscopy (RIfS) Bio-layer Interferometry (BLI) Surface Plasmon Resonance (SPR) References Literature Ewald, M., Fechner, P. & Gauglitz, G. Anal Bioanal Chem (2015) 407: 4005. doi:10.1007/s00216-015-8562-0 Bleher, O., Schindler, A., Yin, MX. et al. Anal Bioanal Chem (2014) 406: 3305. doi:10.1007/s00216-013-7504-y Schindler, A., Bleher, O., Thaler, M., et al. (2014). Diagnostic performance study of an antigen microarray for the detection of antiphospholipid antibodies in human serum. Clinical Chemistry and Laboratory Medicine, 53(5), pp. 801–808. Retrieved 2 Mar. 2017, from doi:10.1515/cclm-2014-0569 Ewald, M., Le Blanc, A.F., Gauglitz, G. et al. Anal Bioanal Chem (2013) 405: 6461. doi:10.1007/s00216-013-7040-9 Rüdiger Frank ; Bernd Möhrle ; Dieter Fröhlich and Günter Gauglitz, "A transducer-independent optical sensor system for the detection of biochemical binding reactions", Proc. SPIE 5993, Advanced Environmental, Chemical, and Biological Sensing Technologies III, 59930A (November 8, 2005); doi:10.1117/12.633881; https://dx.doi.org/10.1117/12.633881 SLAS Technol. 2017 Aug;22(4):437-446. doi: 10.1177/2211068216657512. Low-Volume Label-Free Detection of Molecule-Protein Interactions on Microarrays by Imaging Reflectometric Interferometry. Burger J, Rath C, Woehrle J, Meyer PA, Ben Ammar N, Kilb N, Brandstetter T, Pröll F, Proll G, Urban G, Roth G. External links SCORE Technology Spectroscopy Biophysics methods Forensic techniques Nanotechnology Biochemistry methods Protein–protein interaction assays
Single colour reflectometry
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,225
[ "Biochemistry methods", "Protein–protein interaction assays", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Materials science", "Biochemistry", "Nanotechnology", "Spectroscopy", "Biophysics methods" ]
42,245,935
https://en.wikipedia.org/wiki/Chromatic%20homotopy%20theory
In mathematics, chromatic homotopy theory is a subfield of stable homotopy theory that studies complex-oriented cohomology theories from the "chromatic" point of view, which is based on Quillen's work relating cohomology theories to formal groups. In this picture, theories are classified in terms of their "chromatic levels"; i.e., the heights of the formal groups that define the theories via the Landweber exact functor theorem. Typical theories it studies include: complex K-theory, elliptic cohomology, Morava K-theory and tmf. Chromatic convergence theorem In algebraic topology, the chromatic convergence theorem states that the homotopy limit of the chromatic tower (defined below) of a finite p-local spectrum is the spectrum itself. The theorem was proved by Hopkins and Ravenel. Statement Let denote the Bousfield localization with respect to the Morava E-theory and let be a finite, -local spectrum. Then there is a tower associated to the localizations called the chromatic tower, such that its homotopy limit is homotopic to the original spectrum . The stages in the tower above are often simplifications of the original spectrum. For example, is the rational localization and is the localization with respect to p-local K-theory. Stable homotopy groups In particular, if the -local spectrum is the stable -local sphere spectrum , then the homotopy limit of this sequence is the original -local sphere spectrum. This is a key observation for studying stable homotopy groups of spheres using chromatic homotopy theory. See also Elliptic cohomology Redshift conjecture Ravenel conjectures Moduli stack of formal group laws Chromatic spectral sequence Adams-Novikov spectral sequence References External links http://ncatlab.org/nlab/show/chromatic+homotopy+theory Homotopy theory Cohomology theories
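The displayed formulas of the statement above were lost in extraction. In standard notation (a reconstruction, so the symbols should be treated as an assumption rather than a verbatim restoration of the article's formulas), the chromatic tower of a finite p-local spectrum X and the convergence statement read:

```latex
\cdots \longrightarrow L_{E(n)}X \longrightarrow L_{E(n-1)}X \longrightarrow \cdots
\longrightarrow L_{E(1)}X \longrightarrow L_{E(0)}X,
\qquad
X \;\simeq\; \operatorname*{holim}_{n}\, L_{E(n)}X,
```

where L_{E(n)} denotes Bousfield localization at the n-th Morava E-theory, L_{E(0)}X is the rationalization of X and L_{E(1)}X is the localization with respect to p-local complex K-theory.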
Chromatic homotopy theory
[ "Mathematics" ]
399
[ "Topology stubs", "Topology" ]
42,246,122
https://en.wikipedia.org/wiki/Thermanaeromonas%20toyohensis
Thermanaeromonas toyohensis is a species of bacteria within the family Thermoanaerobacteraceae. This species is thermophilic, anaerobic, and can reduce thiosulfate. It was originally isolated from a geothermal aquifer more than 500 m below the surface of the Earth. References External links StrainInfo: Thermanaeromonas toyohensis Type strain of Thermanaeromonas toyohensis at BacDive - the Bacterial Diversity Metadatabase Thermoanaerobacterales Thermophiles Anaerobes Bacteria described in 2002
Thermanaeromonas toyohensis
[ "Biology" ]
126
[ "Bacteria", "Anaerobes" ]
42,246,223
https://en.wikipedia.org/wiki/HRG%20gyrocompass
An HRG gyrocompass is a compass and instrument of navigation. It is the latest generation of maintenance-free instruments. It uses a hemispherical resonator gyroscope, accelerometers and computers to compute true north. The HRG gyrocompass is a complete unit which, unlike a conventional gyrocompass, has no rotating or other moving parts. It has outstanding reliability. Its operational Mean Time Between Failures (MTBF) values are improved over those of fibre optic gyrocompasses and conventional mechanical gyrocompasses. It is also immune to severe environmental conditions. See also Fibre optic gyrocompass Gyrocompass References http://www.raytheon-anschuetz.com/products-solutions/product-range/standard-30-mf-gyro-compass http://www.raytheon-anschuetz.com/product-range/product-detail/69/Horizon-MF-Gyro-Compass-%28HRG%29 http://www.sagem.com/media/20121024_bluenautetm-revolution-maritime-navigation-has-been-waiting http://www.sagem.com/media/20150916_sagems-bluenaute-navigation-system-chosen-canadas-new-offshore-patrol-vessels http://www.sagem.com/media/20150729_us-coast-guard-chooses-sagems-bluenauter-navigation-system-modernize-its-medium-cutters https://www.shephardmedia.com/news/digital-battlespace/uscg-patrol-boat-gets-safrans-bluenaute-system/ Aircraft instruments Aerospace engineering Spacecraft components Navigational equipment Navigational aids Avionics
HRG gyrocompass
[ "Technology", "Engineering" ]
400
[ "Avionics", "Aircraft instruments", "Measuring instruments", "Aerospace engineering" ]
42,248,073
https://en.wikipedia.org/wiki/Bell%20roof
A bell roof (bell-shaped roof, ogee roof, Philibert de l'Orme roof) is a roof form resembling the shape of a bell. Shapes Bell roofs may be round, multi-sided or square. A similar-sounding feature added to other roof forms at the eaves or walls is the bell-cast, sprocketed or flared eave, in which the roof flares upward, resembling the common shape of the bottom of a bell. Gallery See also Roof pitch Bochka roof References Roofs
Bell roof
[ "Technology", "Engineering" ]
106
[ "Structural system", "Structural engineering", "Roofs" ]
42,249,120
https://en.wikipedia.org/wiki/Batter%20%28walls%29
In architecture, batter is a receding slope of a wall, structure, or earthwork. A wall sloping in the opposite direction is said to overhang. When used in fortifications it may be called a talus. The term is used with buildings and non-building structures to identify when a wall or element is intentionally built with an inward slope. A battered corner is an architectural feature using batters. A batter is sometimes used in foundations, retaining walls, dry stone walls, dams, lighthouses, and fortifications. Other terms that may be used to describe battered walls are "tapered" and "flared". Typically in a battered wall, the taper provides a wide base to carry the weight of the wall above, with the wall gradually thinning toward the top so as to ease the weight carried by the wall below. The batter angle is typically described as a ratio of the offset and height or as a degree angle, depending on the building materials and application. For example, typical dry-stone construction of retaining walls utilizes a 1:6 ratio, that is, for every 1 inch that the wall steps back, it increases 6 inches in height. Historical uses Walls may be battered to provide structural strength or for decorative reasons. In military architecture, they made walls harder to undermine or tunnel under, and provided some defense against artillery, especially early siege engine projectiles and cannon, where the energy of the projectile might be largely deflected, on the same principle as modern sloped armor. Siege towers could not be pushed next to the top of a strongly battered wall. Types of fortification using batters included the talus and glacis. Regional examples Asia Architectural styles that often include battered walls as a stylistic feature include Indo-Islamic architecture, where they were used in many tombs and some mosques, as well as many forts in India. Tughlaqabad Fort in Delhi is a good example, built by Ghiyath al-Din Tughluq, whose tomb opposite the fort (illustrated above) also has a strong batter. In Hindu temple architecture, the walls of the large gopurams of South India are usually battered, often with a slight concave curve. In the Himalayan region, battered walls are one of the typifying characteristics of traditional Tibetan architecture. With minimal foreign influence over the centuries, the region's use of battered walls is considered to be an indigenous creation and part of Tibet's vernacular architecture. This style of battered wall architecture was the preferred style of construction for much of Inner Asia, and has been used from Nepal to Siberia. The 13-story Potala Palace in Lhasa is one of the best-known examples of this style and was named a UNESCO World Heritage Site in 1994. Middle East Battered walls are a common architectural feature found in Ancient Egyptian architecture. Residential walls were usually constructed from mud brick, while limestone, sandstone, or granite was used mainly in the construction of temples and tombs. In terms of monumental architecture, the Giza pyramid complex in Cairo utilized different grades of battered walls to achieve great heights with relative stability. The Pyramid of Djoser, an archeological remain in the Saqqara necropolis northwest of the city of Memphis, is a quintessential example of battered walls used in sequence to produce a step pyramid. New World In the Americas, battered walls are seen as a fairly common aspect of Mission style architecture, where Spanish design was hybridized with Native American adobe building techniques. 
As exemplified by the San Estevan del Rey Mission Church in Acoma, New Mexico, c.1629-42, the heights desired in Spanish Catholic Mission design were achieved through battered adobe brick walls, which provided structural stability. Gallery References Building engineering Types of wall Architectural terminology
Batter (walls)
[ "Engineering" ]
751
[ "Structural engineering", "Building engineering", "Types of wall", "Civil engineering", "Architectural terminology", "Architecture" ]
42,251,979
https://en.wikipedia.org/wiki/Root%20microbiome
The root microbiome (also called rhizosphere microbiome) is the dynamic community of microorganisms associated with plant roots. Because they are rich in a variety of carbon compounds, plant roots provide unique environments for a diverse assemblage of soil microorganisms, including bacteria, fungi, and archaea. The microbial communities inside the root and in the rhizosphere are distinct from each other, and from the microbial communities of bulk soil, although there is some overlap in species composition. Different microorganisms, both beneficial and harmful, affect the development and physiology of plants. Beneficial microorganisms include bacteria that fix nitrogen, various microbes that promote plant growth, mycorrhizal fungi, mycoparasitic fungi, protozoa, and certain biocontrol microorganisms. Pathogenic microorganisms can also include certain bacteria, fungi, and nematodes that can colonize the rhizosphere. Pathogens are able to compete with protective microbes and break through innate plant defense mechanisms. Some pathogenic bacteria that can be carried over to humans, such as Salmonella, enterohaemorhagic Escherichia coli, Burkholderia cenocepacia, Pseudomonas aeruginosa, and Stenotrophomonas maltophilia, can also be detected in root microbiomes and other plant tissues. Root microbiota affect plant host fitness and productivity in a variety of ways. Members of the root microbiome benefit from plant sugars or other carbon rich molecules. Individual members of the root microbiome may behave differently in association with different plant hosts, or may change the nature of their interaction (along the mutualist-parasite continuum) within a single host as environmental conditions or host health change. Despite the potential importance of the root microbiome for plants and ecosystems, our understanding of how root microbial communities are assembled is in its infancy. This is in part because, until recent advances in sequencing technologies, root microbes were difficult to study due to high species diversity, the large number of cryptic species, and the fact that most species have yet to be retrieved in culture. Evidence suggests both biotic (such as host identity and plant neighbor) and abiotic (such as soil structure and nutrient availability) factors affect community composition. Function Types of symbioses Root associated microbes include fungi, bacteria, and archaea. In addition, other organisms such as viruses, algae, protozoa, nematodes, and arthropods are part of root microbiota. Symbionts associated with plant roots subsist off of photosynthetic products (carbon rich molecules) from the plant host and can exist anywhere on the mutualist/parasite continuum. Root symbionts may improve their host's access to nutrients, produce plant-growth regulators, improve environmental stress tolerance of their host, induce host defenses and systemic resistance against pests or pathogens, or be pathogenic. Parasites consume carbon from the plant without providing any benefit or providing insufficient benefit relative to their carbon consumption, thereby compromising host fitness. Symbionts may be biotrophic (subsisting off of living tissue) or necrotrophic (subsisting off of dead tissue). Mutualist-parasite continuum While some microbes may be purely mutualistic or parasitic, many may behave differently depending on the host species with which it is associated, environmental conditions, and host health. A host's immune response controls symbiont infection and growth rates. 
If a host's immune response is not able to control a particular microbial species, or if host immunity is compromised, the microbe-plant relationship will likely reside somewhere nearer the parasitic side of the mutualist-parasite continuum. Similarly, high nutrients can push some microbes into parasitic behavior, encouraging unchecked growth at a time when symbionts are no longer needed to aid with nutrient acquisition. Composition Roots are colonized by fungi, bacteria, and archaea. Because they are multicellular, fungi can extend hyphae from nutrient exchange organs within host cells into the surrounding rhizosphere and bulk soil. Fungi that extend beyond the root surface and engage in nutrient-carbon exchange with the plant host are commonly considered to be mycorrhizal, but external hyphae can also include other endophytic fungi. Mycorrhizal fungi can extend a great distance into bulk soil, thereby increasing the root system's reach and surface area, enabling mycorrhizal fungi to acquire a large percentage of its host plant's nutrients. In some ecosystems, up to 80% of plant nitrogen and 90% of plant phosphorus is acquired by mycorrhizal fungi. In return, plants may allocate ~20–40% of their carbon to mycorrhizae. Fungi Mycorrhizae Mycorrhizal (from Greek) literally means "fungus roots" and defines symbiotic interaction between plants and fungi. Fungi are important for decomposing and recycling organic material. However, the boundaries between the pathogenic and symbiotic lifestyles of fungi are not always clear-cut. Most of the time, the association is symbiotic, with the fungus improving nutrient and water acquisition or increasing stress tolerance for the plant and benefiting from the carbohydrates produced by the plant in return. Mycorrhizae include a wide variety of root-fungi interactions characterized by the mode of colonization. Essentially all plants form mycorrhizal associations, and there is evidence that some mycorrhizae transport carbon and other nutrients not only from soil to plant, but also between different plants in a landscape. The main groups include ectomycorrhizae, arbuscular mycorrhizae, ericoid mycorrhizae, orchid mycorrhizae, and monotropoid mycorrhizae. Monotropoid mycorrhizae are associated with plants in the monotropaceae, which lack chlorophyll. Many Orchids are also achlorophyllous for at least part of their life cycle. Thus, these mycorrhizal-plant relationships are unique because the fungus provides the host with carbon and other nutrients, often by parasitizing other plants. Achlorophyllous plants forming these types of mycorrhizal associations are called mycoheterotrophs. Endophytes Endophytes grow inside plant tissue—roots, stems, leaves—mostly symptomless. However, when plants age, they can become slightly pathogenic. They may colonize inter-cellular spaces, the root cells themselves, or both. Rhizobia and dark septate endophytes (which produce melanin, an antioxidant that may provide resilience against a variety of environmental stresses) are examples. Bacteria The zone of soil surrounding the roots is rich in nutrients released by plants and is, therefore, an attractive growth medium for both beneficial and pathogenic bacteria. Root associated beneficial bacteria promote plant growth and provide protection from pathogens. They are mostly rhizobacteria that belong to Pseudomonadota and Bacillota, with many examples from Pseudomonas and Bacillus genera. Rhizobium species colonize legume roots forming nodule structures. 
In response to root exudates, rhizobia produce Nod signalling factors that are recognized by legumes and induce the formation of nodules on plant roots. Within these structures, Rhizobium fixes atmospheric nitrogen into ammonia that is then used by the plant. In turn, plants provide the bacteria with a carbon source to energize the nitrogen fixation. In addition to nitrogen fixation, Azospirillum species promote plant growth through the production of growth phytohormones (auxins, cytokinins, gibberellins). Due to these phytohormones, root hairs expand to occupy a larger area and better acquire water and nutrients. Pathogenic bacteria that infect plant roots are most commonly from the Pectobacterium, Ralstonia, Dickeya and Agrobacterium genera. Among the most notorious are Pectobacterium carotovorum, Pectobacterium atrosepticum, Ralstonia solanacearum, Dickeya dadantii, Dickeya solani, and Agrobacterium tumefaciens. Bacteria attach to roots in a biphasic mechanism: first weak, non-specific binding, then a strong, irreversible residence phase. Both beneficial and pathogenic bacteria attach in this fashion. Bacteria can stay attached to the outer surface or colonize the inner root. Primary attachment is governed by chemical forces or extracellular structures such as pili or flagella. Secondary attachment is mainly characterized by the synthesis of cellulose, extracellular fibrils, and specific attachment factors such as surface proteins that help bacteria aggregate and form colonies. Archaea Though archaea are often thought of as extremophiles, microbes belonging to extreme environments, advances in metagenomics and gene sequencing have revealed that archaea are found in nearly any environment, including the root microbiome. For example, root-colonizing archaea have been discovered in maize, rice, wheat, and mangroves. Methanogens and ammonium-oxidizing archaea are prevalent members of the root microbiome, especially in anaerobic soils and wetlands. Archaeal phyla found in the root microbiome include Euryarchaeota, Nitrososphaerota (formerly Thaumarchaeota), and Thermoproteota (formerly Crenarchaeota). The presence and relative abundance of archaea in various environments suggest that they likely play an important role in the root microbiome. Archaea have been found to promote plant growth and development, provide stress tolerance, improve nutrient uptake, and protect against pathogens. For example, Arabidopsis thaliana colonized with an ammonia-oxidizing soil archaeon, Nitrosocosmicus oleophilius, exhibited increased shoot weight, photosynthetic activity, and immune response. Examination of microbial communities in soil and roots has identified archaeal organisms and genes with functions similar to those of bacteria and fungi, such as auxin synthesis, protection against abiotic stress, and nitrogen fixation. In some cases, key genes for plant growth and development, such as metabolism and cell wall synthesis, are more prevalent in archaea than in bacteria. Archaeal presence in the root microbiome can also be affected by plant hosts, which can change the diversity, presence, and health of archaeal communities. Viruses Viruses also infect plants via the roots; however, to penetrate the root tissues, they typically use vectors such as nematodes or fungi. Assembly mechanisms There is an ongoing debate regarding what mechanisms are responsible for assembling individual microbes into communities. There are two primary competing hypotheses. 
One is that "everything is everywhere, but the environment selects," meaning biotic and abiotic factors pose the only constraints, through natural selection, to which microbes colonize what environments. This is called the niche hypothesis. Its counterpart is the hypothesis that neutral processes, such as distance and geographic barriers to dispersal, control microbial community assembly when taxa are equally fit within an environment. In this hypothesis, differences between individual taxa in modes and reach of dispersal explain the differences in microbial communities of different environments. Most likely, both natural selection and neutral processes affect microbial community assembly, though certain microbial taxa may be more restricted by one process or the other depending on their physiological restrictions and mode of dispersion. Microbial dispersal mechanisms include wind, water, and hitchhiking on more mobile macrobes. Microbial dispersion is difficult to study, and little is known about its effect on microbial community assembly relative to the effect of abiotic and biotic assembly mechanisms, particularly in roots. For this reason, only assembly mechanisms that fit within the niche hypothesis are discussed below. The taxa within root microbial communities seem to be drawn from the surrounding soil, though the relative abundance of various taxa may differ greatly from those found in bulk soil due to unique niches in the root and rhizosphere. Biotic assembly mechanisms Different parts of the root are associated with different microbial communities. For example, fine roots, root tips, and the main root are all associated with different communities, and the rhizosphere, root surface, and root tissue are all associated with different communities, likely due to the unique chemistry and nutrient status of each of these regions. Additionally, different plant species, and even different cultivars, harbor different microbial communities, probably due to host specific immune responses and differences in carbon root exudates. Host age affects root microbial community composition, likely for similar reasons as host identity. The identity of neighboring vegetation has also been shown to impact a host plant's root microbial community composition. Abiotic assembly mechanisms Abiotic mechanisms also affect root microbial community assembly because individual taxa have different optima along various environmental gradients, such as nutrient concentrations, pH, moisture, temperature, etc. In addition to chemical and climatic factors, soil structure and disturbance impact root biotic assembly. Succession The root microbiome is dynamic and fluid within the constraints imposed by the biotic and abiotic environment. As in macroecological systems, the historical trajectory of the microbiotic community may partially determine the present and future community. Due to antagonistic and mutualistic interactions between microbial taxa, the taxa colonizing a root at any given moment could be expected to influence which new taxa are acquired, and therefore how the community responds to changes in the host or environment. While the effect of initial community on microbial succession has been studied in various environmental samples, human microbiome, and laboratory settings, it has yet to be studied in roots. See also Mangrove root microbiome References Microbiology Microbiomes Plant roots Soil biology
Root microbiome
[ "Chemistry", "Biology", "Environmental_science" ]
2,938
[ "Microbiology", "Soil biology", "Microscopy", "Microbiomes", "Environmental microbiology" ]
42,257,199
https://en.wikipedia.org/wiki/C20H22N2O3
The molecular formula C20H22N2O3 (molar mass: 338.400 g/mol, exact mass: 338.1630 u) may refer to: Picrinine URB597 (KDS-4103) Molecular formulas
C20H22N2O3
[ "Physics", "Chemistry" ]
70
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
42,257,371
https://en.wikipedia.org/wiki/Wahlquist%20fluid
In general relativity, the Wahlquist fluid is an exact rotating perfect fluid solution to Einstein's equation with equation of state corresponding to constant gravitational mass density. Introduction The Wahlquist fluid was first discovered by Hugo D. Wahlquist in 1968. It is one of the few known exact rotating perfect fluid solutions in general relativity. The solution reduces to the static Whittaker metric in the limit of zero rotation. Metric The metric of a Wahlquist fluid is given by where and is defined by . It is a solution with equation of state ρ + 3p = μ₀, where μ₀ is a constant. Properties The pressure and density of the Wahlquist fluid are given by The vanishing pressure surface of the fluid is prolate, in contrast to physical rotating stars, which are oblate. It has been shown that the Wahlquist fluid cannot be matched to an asymptotically flat region of spacetime. References General relativity
Wahlquist fluid
[ "Physics" ]
184
[ "General relativity", "Theory of relativity" ]
49,874,045
https://en.wikipedia.org/wiki/Yue%20Qi
Yue Qi is a Chinese-born American nanotechnologist and physicist who specializes in computational materials science at Brown University. She won the 1999 Feynman Prize in Nanotechnology for Theory along with William Goddard and Tahir Cagin for "work in modeling the operation of molecular machine designs." Education Qi graduated from Tsinghua University with a double B.S. in materials science and computer science in 1996. As a graduate student in William Goddard's laboratory at the California Institute of Technology, she worked on molecular modelling, including of nanowires and binary liquid metals. She earned her Ph.D. in materials science in 2001. Her dissertation was entitled "Molecular dynamics (MD) studies on phase transformation and deformation behaviors in FCC metals and alloys." Career In 2001, Qi became a research scientist at General Motors Research and Development, where she was recognized for work in interfacial tribology and multiscale modeling of aluminum plasticity. Her research focused on using computational analysis of grain boundaries to improve the strength and flexibility of aluminum panels, as well as on energy applications such as modelling proton exchange membranes in fuel cells and studying lithium-ion batteries. Qi had previously completed an internship at General Motors as a graduate student and joined the company right after graduating; over the next decade, she engineered multiple models used in processes involving metal alloys, energy systems, and batteries. She said that she was one of the few people in General Motors with a physics rather than an engineering background. Beginning in 2009, she also taught classes at the University of Windsor. In 2013, she joined the faculty of the Department of Chemical Engineering and Materials Science at Michigan State University. Her research program focuses on materials simulation for clean energy, including density functional theory studies of diffusion and of the effects of mechanical deformation in lithium-ion batteries. During this time she also became vice chair of the Michigan chapter of the American Vacuum Society. She moved to Brown University as a Joan Wernig Sorensen Professor of Engineering on July 1, 2020. Qi has also been involved with numerous science outreach programs for young people such as the Sally Ride Science Festival for Girls. In June 2018 she was appointed as the first Associate Dean for Inclusion and Diversity in the College of Engineering at Michigan State University. Awards and honors Qi has received multiple awards in recognition of her work on computational materials both during her time at General Motors and in academia. Her first award came in 1999, when she won the Feynman Prize in Nanotechnology alongside William Goddard and Tahir Cagin. She subsequently won multiple GM Campbell prizes during her time at General Motors. In 2017, Qi won the Brimacombe Medal from The Minerals, Metals & Materials Society in recognition of her scientific contributions to the computational materials field. 
Other awards 2013 FMD Young Leaders Professional Development References Living people 20th-century American scientists 20th-century American women scientists 21st-century American scientists 21st-century American women scientists California Institute of Technology alumni Chinese emigrants to the United States Chinese nanotechnologists Computational physicists General Motors people Michigan State University faculty Scientists from Michigan Tsinghua University alumni Academic staff of University of Windsor Place of birth missing (living people) Year of birth missing (living people) Brown University faculty American women academics
Yue Qi
[ "Physics" ]
654
[ "Computational physicists", "Computational physics" ]
49,883,150
https://en.wikipedia.org/wiki/Centrifugal%20partition%20chromatography
Centrifugal partition chromatography is a special chromatographic technique in which both the stationary and the mobile phase are liquid, and the stationary phase is immobilized by a strong centrifugal force. Centrifugal partition chromatography consists of a series-connected network of extraction cells, which operate as elemental extractors, and the efficiency is guaranteed by the cascade. History In the 1940s Craig invented the first apparatus to conduct countercurrent partitioning; he called this the countercurrent distribution Craig apparatus. The apparatus consists of a series of glass tubes that are designed and arranged such that the lighter liquid phase is transferred from one tube to the next. The next major milestone was droplet countercurrent chromatography (DCCC). It uses only gravity to move the mobile phase through the stationary phase, which is held in long vertical tubes connected in series. The modern era of CCC began with the development of the planetary centrifuge by Ito, which was first introduced in 1966 as a closed helical tube rotated on a "planetary" axis as it turned on a "sun" axis. Centrifugal partition chromatography was introduced in Japan in 1982; the first instrument was built at Sanki Eng. Ltd. in Kyoto. The first instrument consisted of twelve cartridges arranged around the rotor of a centrifuge; the inner volume of each cartridge was about 15 mL for 50 channels. In 1999 Kromaton developed the first FCPC with radial cells. During cell development, the Z cell was completed in 2005 and the twin cell in 2009. In 2017 RotaChrom designed its top-performing CPC cells using computational fluid dynamics simulation software. After thousands of simulations, this tool revealed the drawbacks of conventional CPC cell designs and highlighted the unparalleled load capacity and scalable cell design of RotaChrom. Operation The extraction cells consist of hollow bodies with inlets and outlets for liquid connection. The cells are first filled with the liquid chosen to be the stationary phase. Under rotation, pumping of the mobile phase is started, and the mobile phase enters the cells through the inlet. On entering, the flow of mobile phase forms small droplets according to Stokes' law, which is called atomization. These droplets fall through the stationary phase, creating a high interfacial area, which is called the extraction. At the end of the cells, these droplets unite due to the surface tension, which is called settling. When a sample mixture is injected as a plug into the flow of mobile phase, the compounds of the mixture elute according to their partition coefficients, i.e. the ratio of each compound's concentration in the stationary phase to its concentration in the mobile phase. Centrifugal partition chromatography requires only a biphasic mixture of solvents, so by varying the constitution of the solvent system it is possible to tune the partition coefficients of different compounds so that separation is guaranteed by the high selectivity. Comparison with countercurrent chromatography Countercurrent chromatography and centrifugal partition chromatography are two different instrumental realizations of the same liquid–liquid chromatographic theory. Countercurrent chromatography usually uses a planetary gear motion without rotary seals, while centrifugal partition chromatography uses circular rotation with rotary seals for liquid connection. CCC has alternating mixing and settling zones in the coil tube, so atomization, extraction and settling are separated in time and zone. Inside centrifugal partition chromatography, all three steps happen continuously and at the same time, inside the cells. 
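The role of the partition coefficient described above can be illustrated with a short sketch. It uses the standard liquid–liquid chromatography retention relation V_R = V_M + K_D·V_S, where V_M and V_S are the mobile- and stationary-phase volumes held in the column and K_D is the stationary-to-mobile concentration ratio; the column volume, retention fraction and K_D values below are hypothetical examples, not taken from any instrument or vendor software.

```python
def retention_volume(k_d, v_mobile, v_stationary):
    """Retention volume of a solute in liquid-liquid chromatography (CPC/CCC):
    V_R = V_M + K_D * V_S, with K_D = C_stationary / C_mobile."""
    return v_mobile + k_d * v_stationary

# Hypothetical 250 mL column with 65% stationary-phase retention.
v_column = 250.0                     # mL
v_stationary = 0.65 * v_column       # mL of retained stationary phase
v_mobile = v_column - v_stationary   # mL occupied by mobile phase

# Hypothetical partition coefficients for three compounds in a chosen solvent system.
for name, k_d in [("compound A", 0.4), ("compound B", 1.0), ("compound C", 2.5)]:
    print(f"{name}: elutes after about {retention_volume(k_d, v_mobile, v_stationary):.0f} mL")
```

Compounds with K_D near 1 elute around one column volume; choosing the solvent system so that the target compounds have well-separated K_D values is what produces the selectivity described above.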
Advantages of centrifugal partition chromatography: Higher flow rate for the same column volume Laboratory scale example: a 250 mL centrifugal partition chromatograph has an optimal flow rate of 5–15 mL/min, while a 250 mL countercurrent chromatograph has an optimal flow rate of 1–3 mL/min. Process scale example: a 25 L countercurrent chromatograph has an optimal flow rate of 100–300 mL/min, while a 25 L centrifugal partition chromatograph has an optimal flow rate of 1000–3000 mL/min. Higher productivity (due to higher flow rate and faster separation time) Scalable up to tonnes per month Better stationary phase retention for most phases Disadvantages of centrifugal partition chromatography: Higher pressure than CCC (typical operating pressures of 40–160 bar vs 5–25 bar) Rotary seal wear over time Laboratory scale Centrifugal partition chromatography has been extensively used for the isolation and purification of natural products for 40 years. Due to its very high selectivity and its ability to tolerate samples containing particulate matter, it is possible to work with direct extracts of biomass, as opposed to traditional liquid chromatography, where impurities degrade the solid stationary phase so that separation becomes impossible. There are numerous laboratory-scale centrifugal partition chromatography manufacturers around the world, such as Gilson (Armen Instrument), Kromaton (Rousselet Robatel), and AECS-QUIKPREP. These instruments operate at flow rates of 1–500 mL/min with stationary phase retentions of 40–80%. Production scale Centrifugal partition chromatography does not use any solid stationary phase, so it offers a cost-effective separation even at the largest industrial scales. As opposed to countercurrent chromatography, it is possible to reach very high flow rates (for example 10 liters/min) with an active stationary phase ratio of >80%, which guarantees good separation and high productivity. Because in centrifugal partition chromatography material is dissolved and loaded onto the column in mass/volume units, loading capability can be much higher than in standard solid–liquid chromatographic techniques, where material is loaded onto the active surface area of the stationary phase, which takes up less than 10% of the column. Industrial instruments such as Gilson (Armen Instrument), Kromaton (Rousselet Robatel) and RotaChrom Technologies (RotaChrom) differ from laboratory-scale instruments in the applicable flow rate with satisfactory stationary phase retention (70–90%). Industrial instruments have flow rates of multiple liters per minute and are able to purify materials from 10 kg to tonnes per month. Operating the production-scale equipment requires industrial-volume solvent preparation (mixer/settler) and solvent recovery equipment. See also Radial chromatography References Centrifugal Partition Chromatography, Chromatographic Science Series, Volume 68, Editor: Alain P. Foucault, Marcel Dekker Inc. Chromatography
Centrifugal partition chromatography
[ "Chemistry" ]
1,353
[ "Chromatography", "Separation processes" ]
49,884,670
https://en.wikipedia.org/wiki/Multiple%20models
In control theory, multiple model control is an approach to ensure stability in cases of large model uncertainty or changing plant dynamics. It uses a number of models, which are distributed to give a suitable cover of the region of uncertainty, and adapts control based on the responses of the plant and the models. A model is chosen at every instant, depending on which is closest to the plant according to some metric, and this is used to determine the appropriate control input. The method offers satisfactory performance when no restrictions are put on the number of available models. Approaches There are a number of multiple model methods, including: "Switching", where the control input to the plant is based on the fixed model chosen at that instant. It is discontinuous, fast, but coarse. However, it does have the advantage of verifiable stability bounds. "Switching and tuning", where an adaptive model is initialized from the location of the fixed model chosen, and the parameters of the best model determine the control to be used. It is continuous, slow, but accurate. "Blending", where the control input is chosen based on a weighted combination of a number of suitable models. Applications The multiple model method can be used for: controlling an unknown plant - parameter estimates and identification errors can be used collectively to determine the control input to the overall system, applying multiple observers - significantly improving transients and reducing observer overshoot. See also State observer Adaptive control References General references Control theory
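A minimal sketch of the "switching" approach described above, for a scalar first-order plant: at every step each candidate model's prediction error is accumulated, the model with the smallest error is selected, and a control input is computed from that model. The plant parameters, the three-model bank and the dead-beat control law are illustrative assumptions, not part of any published multiple-model scheme or library.

```python
# Switching among multiple fixed models for a scalar plant x[k+1] = a*x[k] + b*u[k].
# The true parameters are unknown to the controller; each candidate model predicts
# the next state, and the model with the smallest accumulated squared prediction
# error supplies the parameters used to compute the control input.

true_a, true_b = 1.8, 0.5                       # unknown, open-loop unstable plant
models = [(0.5, 1.0), (1.0, 0.8), (2.0, 0.5)]   # hypothetical fixed model bank (a_i, b_i)

x, u = 1.0, 0.0
errors = [0.0] * len(models)                    # accumulated identification errors

for k in range(20):
    x_next = true_a * x + true_b * u            # plant response to the applied input
    # Update each model's identification error against the observed response.
    for i, (a_i, b_i) in enumerate(models):
        errors[i] += (x_next - (a_i * x + b_i * u)) ** 2
    best = min(range(len(models)), key=lambda i: errors[i])
    a_b, b_b = models[best]
    x = x_next
    u = -a_b * x / b_b                          # dead-beat law based on the chosen model
    print(f"k={k:2d}  x={x:+.4f}  chosen model={best}")
```

With a model bank that covers the uncertainty region well, the switching scheme quickly locks onto the candidate closest to the plant and stabilises it, even though the true parameters are never identified exactly.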
Multiple models
[ "Mathematics" ]
296
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
49,885,288
https://en.wikipedia.org/wiki/Dirac%20membrane
In quantum mechanics, a Dirac membrane is a model of a charged membrane introduced by Paul Dirac in 1962. Dirac's original motivation was to explain the mass of the muon as an excitation of the ground state corresponding to an electron. Anticipating the birth of string theory by almost a decade, he was the first to introduce what is now called a type of Nambu–Goto action for membranes. In the Dirac membrane model the repulsive electromagnetic forces on the membrane are balanced by the contracting ones coming from the positive tension. In the case of the spherical membrane, classical equations of motion imply that the balance is met for the radius , where is the classical electron radius. Using Bohr–Sommerfeld quantisation condition for the Hamiltonian of the spherically symmetric membrane, Dirac finds the approximation of the mass corresponding to the first excitation as , where is the mass of the electron, which is about a quarter of the observed muon mass. Action principle Dirac chose a non-standard way to formulate the action principle for the membrane. Because closed membranes in provide a natural split of space into the interior and the exterior there exists a special curvilinear system of coordinates in spacetime and a function such that defines a membrane , describe a region outside or inside the membrane Choosing and the following gauge , , where () is the internal parametrization of the membrane world-volume, the membrane action proposed by Dirac is where the induced metric and the factors J and M are given by In the above are rectilinear and orthogonal. The space-time signature used is (+,-,-,-). Note that is just a usual action for the electromagnetic field in a curvilinear system while is the integral over the membrane world-volume i.e. precisely the type of the action used later in string theory. Equations of motion There are 3 equations of motion following from the variation with respect to and . They are: variation w.r.t. for - this results in sourceless Maxwell equations variation w.r.t. for - this gives a consequence of Maxwell equations variation w.r.t. for The last equation has a geometric interpretation: the r.h.s. is proportional to the curvature of the membrane. For the spherically symmetric case we get Therefore, the balance condition implies where is the radius of the balanced membrane. The total energy for the spherical membrane with radius is and it is minimal in the equilibrium for , hence . On the other hand, the total energy in the equilibrium should be (in units) and so we obtain . Hamiltonian formulation Small oscillations about the equilibrium in the spherically symmetric case imply frequencies - . Therefore, going to quantum theory, the energy of one quantum would be . This is much more than the muon mass but the frequencies are by no means small so this approximation may not work properly. To get a better quantum theory one needs to work out the Hamiltonian of the system and solve the corresponding Schroedinger equation. For the Hamiltonian formulation Dirac introduces generalised momenta for : and - momenta conjugate to and respectively (, coordinate choice ) for : - momenta conjugate to Then one notices the following constraints for the Maxwell field for membrane momenta where - reciprocal of , . These constraints need to be included when calculating the Hamiltonian, using the Dirac bracket method. The result of this calculation is the Hamiltonian of the form where is the Hamiltonian for the electromagnetic field written in the curvilinear system. 
Quantisation For spherically symmetric motion the Hamiltonian is however the direct quantisation is not clear due to the square-root of the differential operator. To get any further Dirac considers the Bohr–Sommerfeld method: and finds for . See also Brane References P. A. M. Dirac, An Extensible Model of the Electron, Proc. Roy. Soc. A268 (1962) 57–67. Quantum models Electron
Dirac membrane
[ "Physics", "Chemistry" ]
829
[ "Quantum models", "Molecular physics", "Quantum mechanics", "Electron" ]
46,864,821
https://en.wikipedia.org/wiki/Sparse%20network
In network science, a sparse network has much fewer links than the possible maximum number of links within that network (the opposite is a dense network). The study of sparse networks is a relatively new area primarily stimulated by the study of real networks, such as social and computer networks. The notion of much fewer links is, of course, colloquial and informal. While a threshold for a particular network may be invented, there is no universal threshold that defines what much fewer actually means. As a result, there is no formal sense of sparsity for any finite network, despite widespread agreement that most empirical networks are indeed sparse. There is, however, a formal sense of sparsity in the case of infinite network models, determined by the behavior of the number of edges (M) and/or the average degree (⟨k⟩) as the number of nodes (N) goes to infinity. Definitions A simple unweighted network of size N is called sparse if the number of links M in it is much smaller than the maximum possible number of links N(N − 1)/2: M ≪ N(N − 1)/2. In any given (real) network, the number of nodes N and links M are just two numbers, therefore the meaning of the much smaller sign (≪ above) is purely colloquial and informal, and so are statements like "many real networks are sparse." However, if we deal with a synthetic graph sequence, or a network model that is well defined for networks of any size N = 1, 2, ..., then the ≪ sign attains its usual formal meaning: the ratio of M to the maximum possible number of links tends to zero as N goes to infinity. In other words, a network sequence or model is called dense or sparse depending on whether the (expected) average degree ⟨k⟩ in it scales linearly or sublinearly with N: it is dense if ⟨k⟩ scales linearly with N; it is sparse if ⟨k⟩/N tends to zero. An important subclass of sparse networks are networks whose average degree is either constant or converges to a constant. Some authors call only such networks sparse, while others reserve special names for them: a model is truly sparse or extremely sparse or ultrasparse if ⟨k⟩ converges to a constant as N goes to infinity. There also exist alternative, stricter definitions of network sparsity requiring the convergence of the degree distribution in the network sequence to a well-defined limit as N goes to infinity. According to this definition, the N-star graph, for example, is not sparse. Node degree distribution The node degree distribution changes with increasing connectivity. Different link densities in complex networks correspond to different node-degree distributions, as an analysis of the Flickr network suggests. Sparsely connected networks have a scale-free, power-law distribution. With increasing connectivity, the networks show increasing divergence from a power law. One of the main factors influencing network connectivity is node similarity. For instance, in social networks, people are likely to be linked to each other if they share a common social background, interests, tastes, beliefs, etc. In the context of biological networks, proteins or other molecules are linked if they have an exact or complementary fit of their complex surfaces. Common terminology If the nodes in a network are not weighted, the structural components of the network can be shown through an adjacency matrix. If most elements in the matrix are zero, such a matrix is referred to as a sparse matrix. In contrast, if most of the elements are nonzero, then the matrix is dense. The sparsity or density of the matrix is identified by the fraction of zero elements relative to the total number of elements in the matrix. Similarly, in the context of graph theory, if the number of links is close to its maximum, then the graph is known as a dense graph. 
If the number of links is much lower than the maximum number of links, the graph is referred to as a sparse graph. Applications Sparse networks can be found in social, computer and biological networks, and their applications include transportation, power-line and citation networks, among others. Since most real networks are large and sparse, several models have been developed to understand and analyze them. These networks have inspired sparse network-on-chip design in multiprocessor embedded computer engineering. Sparse networks also allow cheaper computations by making it efficient to store the network as an adjacency list rather than an adjacency matrix. For example, when using an adjacency list, iterating over a node's neighbors can be achieved in O(M/N) on average, whereas it takes O(N) with an adjacency matrix. References Networks Network theory Network topology
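The storage and iteration argument above can be made concrete with a short sketch. The five-node graph is an arbitrary example, and the complexity figures in the comments are the average-case values quoted in the text.

```python
# Sparse graphs are cheaper to store and traverse as adjacency lists than as
# adjacency matrices: the list needs O(N + M) memory and visiting a node's
# neighbours costs O(deg(node)) (about O(M/N) on average), while the matrix
# needs O(N^2) memory and a full O(N) row scan per node.

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # arbitrary example graph
n = 5

# Adjacency matrix: N*N entries, mostly zero for a sparse network.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: one short list per node.
adj = {i: [] for i in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

density = 2 * len(edges) / (n * (n - 1))            # M / M_max for an undirected graph
print(f"average degree = {2 * len(edges) / n:.1f}, density = {density:.2f}")
print("neighbours of node 3 (list):  ", adj[3])                                  # O(deg(3))
print("neighbours of node 3 (matrix):", [j for j in range(n) if matrix[3][j]])   # O(N)
```

For a genuinely sparse network, where M grows much more slowly than N squared, the matrix representation wastes memory on zeros while the list representation stays proportional to the number of links.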
Sparse network
[ "Mathematics" ]
893
[ "Network topology", "Graph theory", "Network theory", "Topology", "Mathematical relations" ]
46,867,955
https://en.wikipedia.org/wiki/Attila%20Grandpierre
Attila Grandpierre (born 4 July 1951) is a Hungarian musician, astrophysicist, physicist, self-taught historian, writer and poet. He is best known as the leader and vocalist of the Galloping Coroners, an original shamanic music band. Personal worldview From his childhood on he was very interested in the Sun, the cosmos, music and the nature of life. As an adult he seeks to answer whether the Universe has a physical, biological or psychological nature. Life Ancestry According to family tradition, Grandpierre's family name comes from French Huguenot ancestors, a line said to include some French bishops; one ancestor, Louis Grandpierre, was a Swiss politician and president of the Swiss Appeal Court, and another, Károly Grandpierre, a writer and consultant to Lajos Kossuth, settled in Hungary. Young years, family Grandpierre was born on 4 July 1951 in Budapest, Hungary, under the Soviet regime. His father, Endre Grandpierre K., was a writer and historian whose historical studies greatly influenced the young Attila. He was five years old when he stated that he wanted to become an astronomer studying the Sun, and seven years old when he stated that he wanted to become a singer. He graduated from ELTE as a physicist-astronomer in 1974 and received his Ph.D. in 1977. Physicist career He studied theoretical biology, focusing on Ervin Bauer's works. In 2009 his field of interest concerned the relation between astronomy and civilization. During 1995-1998 he worked with Professor Ervin László studying the physics of collective consciousness and quantum-vacuum interactions. In 2011 he was an invited professor of Computational Biology at Chapman University, California, for six months. As a physicist he had a strong interest in the problem of bringing the sciences and metaphysics together. He paid special attention to interdisciplinary science, to comprehensive science unifying the sciences of matter, life and mind and deepening the explanatory structure of the sciences, to the complexity of living systems in 2008, the relations between astronomy and civilization in 2011, the living nature of the Sun in 2017, the ancient history of the Silk Road in 2021, and life-centred economics in 2022. Books Grandpierre has published 21 books, 7 book chapters, more than 100 scientific papers and over 400 popular science articles, and has edited several books. He wrote the Fundamental Complexity Measures of Life and Cosmic Life Forms chapters in the book From Fossils to Astrobiology (2008, Springer), edited the book Astronomy and Civilization (2011, Analecta Husserliana), and wrote The Helios Theory – The Sun as a Self-Regulating System and as a Cosmic Living Organism (2018, Process Studies), Limits to Growth and the Philosophy of Life-Centred Economics (2022, World Futures), Extending Whiteheadian Organic Cosmology to a Comprehensive Science of Nature (Chapter 2 in Process Cosmology: New Integrations in Science and Philosophy, in the "Palgrave Perspectives in Process Philosophy" series) and Generalization of Quantum Theory into Biology (book chapter in Process-Philosophical Perspectives on Biology: Intuiting Life), and wrote two chapters of and edited "The Cosmic Life Instinct Shows the Way for the Healthy Civilization" (2023, Springer). He was co-editor of the book Astronomy and Civilization in the New Enlightenment (2011, Springer). 
Important publications In astrophysics he wrote an article on the variable nature of the Sun's core, which was mentioned in New Scientist in 2007 as giving the best fit to explain the periodicities of terrestrial Ice Ages. Working with Katalin Martinás, he wrote on "natural" thermodynamics. Since 2020, he has been the Research President of the Budapest Centre for Long-Term Sustainability. He generalised the principle of least action, which plays a fundamental role in physics, to a principle of biology, the principle of greatest action. Grandpierre, A. 2024, The epoch-making importance of Ervin Bauer's theoretical biology. BioSystems 238: 105179. Grandpierre, A. 2023a, Generalization of Quantum Theory into Biology. In: Process-Philosophical Perspectives on Biology: Intuiting Life, ed. Spyridon Koutroufinis. Cambridge Scholars Publishing, 149-174. Grandpierre, A. 2023b, The Cosmic Life Instinct Shows the Way for the Healthy Civilization. In: Towards a Philosophy of Cosmic Life - New Discussions and Interdisciplinary Views. Springer, 35-67. Grandpierre, A. 2022, Limits to Growth and the Philosophy of Life-Centred Economics. World Futures, DOI: 10.1080/02604027.2022.2072160 Grandpierre, A. 2022, Extending Whiteheadian Organic Cosmology to a Comprehensive Science of Nature. Chapter 2 in Process Cosmology: New Integrations in Science and Philosophy, "Palgrave Perspectives in Process Philosophy" series, ed. Andrew M. Davis, 59-91. Grandpierre, A. 2021, Cosmic Roots of Human Nature and Our Culturally Conditioned Self-Image. In: International Communication of Chinese Culture. Spectra of Cultural Understanding 8: 47–63. Musician career As a musician Attila Grandpierre is best known as the leader and vocalist of Galloping Coroners (Vágtázó Halottkémek in Hungarian, Die Rasenden Leichenbeschauer in German, where they were a cult band in the 1980s and 1990s), a shamanic band inspired by the cosmic life force, active from 1975 to the present, and later also of the acoustic band Galloping Wonder Stag, active from 2005 to the present. By his high school years, before he had started to sing, he had a certain degree of countrywide fame among youngsters as a mysterious, unconventional boy who did crazy things with his friends, e.g. creating homemade rockets. References Hungarian scientists Hungarian rock musicians 1951 births Alternative Tentacles artists Living people Quantum physicists Hungarian people of French descent
Attila Grandpierre
[ "Physics" ]
1,260
[ "Quantum physicists", "Quantum mechanics" ]