id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
50,223,751 | https://en.wikipedia.org/wiki/MCDRAM | Multi-Channel DRAM or MCDRAM (pronounced em cee dee ram) is a 3D-stacked DRAM that is used in the Intel Xeon Phi processor codenamed Knights Landing. It is a version of Hybrid Memory Cube developed in partnership with Micron Technology, and a competitor to High Bandwidth Memory.
The many cores in the Xeon Phi processors, along with their associated vector processing units, enable them to consume many more gigabytes per second than traditional DRAM DIMMs can supply. The "Multi-channel" part of the MCDRAM full name reflects the cores having many more channels available to access the MCDRAM than processors have to access their attached DIMMs.
This high channel count leads to MCDRAM's high bandwidth, up to 400+ GB/s, although the latencies are similar to a DIMM access.
Its physical placement on the processor imposes some limits on capacity – up to 16 GB at launch, although speculated to go higher in the future.
Programming
The memory can be partitioned at boot time, with some used as cache for more distant DDR, and the remainder mapped into the physical address space.
The application can request pages of virtual memory to be assigned either to the distant DDR directly, to the portion of DDR that is cached by the MCDRAM, or to the portion of the MCDRAM that is not being used as cache. One way to do this is via the memkind API.
When used as cache, the latency of a miss accessing both the MCDRAM and DDR is slightly higher than going directly to DDR, and so applications may need to be tuned to avoid excessive cache misses.
References
External links
MCDRAM (High Bandwidth Memory) on Knights Landing – Analysis Methods & Tools
An Intro to MCDRAM (High Bandwidth Memory) on Knights Landing
High Bandwidth Memory (HBM): how will it benefit your application?
Micron HMC Webinar July 2017 slides
Computer architecture
Computer-related introductions in 2016
Intel
Parallel computing
Computer memory | MCDRAM | [
"Technology",
"Engineering"
] | 417 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
50,226,212 | https://en.wikipedia.org/wiki/Breit%20frame | In particle physics, the Breit frame (also known as infinite-momentum frame or IMF) is a frame of reference used to describe scattering experiments of the form , that is experiments in which particle A scatters off particle B, possibly producing particles in the process. The frame is defined so that the particle A has its momentum reversed in the scattering process.
Another way of understanding the Breit frame is to consider elastic scattering, for which the Breit frame is defined as the frame in which the incoming and outgoing three-momenta of the scattered particle are equal and opposite. There are different occasions when the Breit frame can be useful: when measuring the electromagnetic form factor of a hadron, the hadron itself plays the role of the scattered particle, while for a deep inelastic scattering process it is the elastically scattered parton that should be considered. It is only in the latter case that the Breit frame is related to the infinite-momentum frame.
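One standard way to write the Breit ("brick wall") frame explicitly for elastic scattering is the following component form; it is a conventional textbook parametrization given here for illustration (not taken from the original text), with Q² = −q² the photon virtuality and M the mass of the scattered particle.

```latex
% Breit ("brick wall") frame: the exchanged photon carries no energy,
% and the scattered particle's three-momentum is simply reversed.
q^{\mu} = (0,\,0,\,0,\,-Q), \qquad
p^{\mu}_{\mathrm{in}} = \left(E,\,0,\,0,\,\tfrac{Q}{2}\right), \qquad
p^{\mu}_{\mathrm{out}} = p^{\mu}_{\mathrm{in}} + q^{\mu}
                       = \left(E,\,0,\,0,\,-\tfrac{Q}{2}\right), \qquad
E = \sqrt{M^{2} + Q^{2}/4}.
```

Since the energy transfer vanishes, the particle behaves as if it bounced off a brick wall: its energy is unchanged and its momentum is reversed, which is exactly the defining property stated above.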
It is named after the American physicist Gregory Breit.
See also
Center-of-momentum frame
Laboratory frame of reference
References
Frames of reference
Kinematics | Breit frame | [
"Physics",
"Mathematics",
"Technology"
] | 206 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Coordinate systems",
"Frames of reference",
"Classical mechanics stubs",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Theory of relativity"
] |
50,229,484 | https://en.wikipedia.org/wiki/Photon%20scanning%20microscopy | The operation of a photon scanning tunneling microscope (PSTM) is analogous to the operation of an electron scanning tunneling microscope, with the primary distinction being that PSTM involves tunneling of photons instead of electrons from the sample surface to the probe tip. A beam of light is focused on a prism at an angle greater than the critical angle of the refractive medium in order to induce total internal reflection within the prism. Although the beam of light is not propagated through the surface of the refractive prism under total internal reflection, an evanescent field of light is still present at the surface.
The evanescent field is a standing wave which propagates along the surface of the medium and decays exponentially with increasing distance from the surface. The surface wave is modified by the topography of the sample, which is placed on the surface of the prism. By placing a sharpened, optically conducting probe tip very close to the surface (at a distance <λ), photons are able to propagate through the space between the surface and the probe (a space which they would otherwise be unable to occupy) through tunneling, allowing detection of variations in the evanescent field and thus variations in the surface topography of the sample. In this manner, PSTM is able to map the surface topography of a sample in much the same way as an electron scanning tunneling microscope.
One major advantage of PSTM is that an electrically conductive surface is no longer necessary. This makes imaging of biological samples much simpler and eliminates the need to coat samples in gold or another conductive metal. Furthermore, PSTM can be used to measure the optical properties of a sample and can be coupled with techniques such as photoluminescence, absorption, and Raman spectroscopy.
History
Conventional optical microscopy utilizing far-field illumination achieves resolution that is restricted by the Abbe diffraction limit. Modern optical microscopes with diffraction limited resolution are therefore capable of resolving features as small as λ/2.3. Researchers have long sought to break the diffraction limit of conventional optical microscopy in order to achieve super-resolution microscopes. One of the first major advances toward this goal was the development of scanning optical microscopy (SOM) by Young and Roberts in 1951. SOM involves scanning individual regions of the sample with a very small field of light illuminated through a diffraction limited aperture. Individual features as small as λ/3 are observed at each scanned point, and the image collected at each point is then compiled together into one image of the sample.
The resolution of these devices was extended beyond the diffraction limit in 1972 by Ash and Nicholls, who first demonstrated the concept of near-field scanning optical microscopy. In NSOM, the object is illuminated through a sub-wavelength sized aperture located at a distance <λ from the sample surface. The concept was first demonstrated using microwaves, however the technique was extended into the field of optical imaging in 1984 by Pohl, Denk, and Lanz, who developed a near-field scanning optical microscope capable of achieving a resolution of λ/20. Along with the development of electron scanning tunneling microscopy in 1982 by Binnig et al., this led to the development of the photon scanning tunneling microscope by Reddick and Courjon (independently) in 1989. PSTM combines the techniques of STM and NSOM by creating an evanescent field using total internal reflection in a prism under the sample and detecting sample-induced variations in the evanescent field by tunneling photons into a sharpened optical fiber probe.
Theory
Total internal reflection
A beam of light travelling through a medium of refractive index n1 incident on an interface with a second medium of refractive index n2 (with n1>n2) will be partially transmitted through the second medium and partially reflected back through the first medium if the angle of incidence is less than the critical angle. At the critical angle, the incident beam will be refracted tangent to the interface (i.e. it will travel along the boundary between the two media). At an angle greater than the critical angle (when the incident beam is nearly parallel to the interface) the light will be completely reflected within the first medium, a condition known as total internal reflection. In the case of PSTM, the first medium is a prism, typically made of glass, and the second medium is the air above the prism.
Evanescent field coupling
Under total internal reflection, although no energy is propagated through the second medium, a non-zero electric field is still present in the second medium near the interface. This field decays exponentially with increasing distance from the interface and is known as the evanescent field. Figure 1 shows that the optical component of the evanescent field is modulated by the presence of a dielectric sample placed on the interface (the surface of the prism), hence the field contains detailed optical information about the sample surface. Although this image is lost in the diffraction limited far field, a detailed optical image may be constructed by probing the near field region (at a distance <λ) and detecting sample-induced modulation of the evanescent field.
This is accomplished through frustrated total internal reflection, also known as evanescent field coupling. This occurs when a third medium (in this case the sharpened fiber probe) of refractive index n3 (with n3>n2) is brought near the interface at a distance <λ. At this distance the third medium overlaps the evanescent field, disrupting the total reflection of light in the first medium and allowing propagation of the wave in the third medium. This process is analogous to quantum tunneling; the photons confined within the first medium are able to tunnel through the second medium (where they cannot exist) into the third medium. In PSTM, the tunneled photons are conducted through the fiber probe into a detector where a detailed image of the evanescent field can then be reconstructed. The degree of coupling between the probe and surface is highly distance dependent, as the evanescent field is an exponentially decaying function of distance from the interface. Hence, the degree of coupling is used to measure the tip to surface distance in order to obtain topographical information about the sample placed on the surface.
Probe-field interaction
The intensity of the evanescent field at a distance z from the surface is given by the relation
I ~ exp(−γz)
where γ is the decay constant of the field and is represented by
γ = 2k2 (n12^2 sin^2(θi) − 1)^(1/2)
where n12 = n1/n2, n1 is the refractive index of the first medium, n2 is the refractive index of the second medium, k2 is the magnitude of the wave vector in the second medium, and θi is the angle of incidence. The decay constant is used in determining the transmittance of photons from the surface to the probe tip, however the degree of coupling is also highly dependent on the properties of the probe tip such as the length of the probe tip region in contact with the evanescent field, the probe tip geometry, and the size of the aperture (in apertured probes). The degree of optical coupling to the probe tip as a function of height must therefore be determined individually for a given instrument and probe tip. In practice, this is usually determined during instrument calibration by scanning the probe perpendicular to the surface and monitoring the detector signal as a function of tip height. Thus the decay constant is found empirically and is used to interpret the signal obtained during the lateral scan and to set a feedback point for the piezoelectric transducer during constant signal scanning.
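As a numerical illustration of the calibration idea, the sketch below evaluates the decay constant and the relative evanescent intensity at a few tip heights, assuming the relation I ~ exp(−γz) given above; the wavelength, refractive indices and angle of incidence are illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative values (assumed, not from the text)
wavelength = 633e-9          # vacuum wavelength in metres
n1, n2 = 1.5, 1.0            # glass prism / air
theta_i = np.radians(70.0)   # angle of incidence, above the critical angle (~41.8 deg)

# Decay constant gamma = 2 * k2 * sqrt(n12^2 * sin^2(theta_i) - 1)
k2 = 2 * np.pi * n2 / wavelength         # wave vector magnitude in the second medium
n12 = n1 / n2
gamma = 2 * k2 * np.sqrt(n12**2 * np.sin(theta_i)**2 - 1)

# Relative evanescent-field intensity I(z)/I(0) at several probe heights
for z in (10e-9, 50e-9, 100e-9, 200e-9):
    print(f"z = {z * 1e9:5.0f} nm   I/I0 = {np.exp(-gamma * z):.3f}")
```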
Although the decay constant is typically determined through empirical methods, detailed mathematical models of probe–sample coupling interactions that account for probe tip geometry and sample distance have been published by Goumri-Said et al. In many cases the evanescent field is primarily modulated by sample surface topography, hence the detected optical signal can be interpreted as the topography of the sample. However, the refractive index and absorption properties of the sample can cause further changes to the detected evanescent field, making it necessary to separate optical data from topographical data. This is often accomplished by coupling PSTM to other techniques such as AFM (see below). Theoretical models have also been developed by Reddick to account for modulation of the evanescent field by secondary effects such as scattering and absorbance at the sample surface.
Procedure
Figure 2 shows the operation and principle of PSTM. An evanescent field is attained using a laser beam at an attenuated total reflection geometry for total internal reflection within a triangular prism. The sample is placed on a glass or quartz slide, which is affixed to the prism with an index matching gel. The sample then becomes the surface at which total internal reflection occurs. The probe consists of the sharpened tip of an optical fiber attached to a piezoelectric transducer to control fine motion of the probe tip during scanning. The end of the optical fiber is coupled to a photomultiplier tube, which acts as the detector. The probe tip and piezoelectric transducer are housed within a scanner cartridge mounted above the sample. The position of this assembly is manually adjusted to bring the probe tip within tunneling distance of the evanescent field.
As photons tunnel from the evanescent field into the probe tip, they are conducted along the optical fiber to the photomultiplier tube, where they are converted into an electrical signal. The amplitude of the electrical output of the photomultiplier tube is directly proportional to the number of photons collected by the probe, thus allowing measurement of the degree of interaction of the probe with the evanescent field at the sample surface. Since this field exponentially decays with increasing distance from the surface, the degree of intensity of the field corresponds to the height of the probe from the sample surface. The electrical signals are sent to a computer where the topography of the surface is mapped based on the corresponding changes in the detected evanescent field intensity.
The electrical output from the photomultiplier tube is used as constant feedback to the piezoelectric transducer to adjust the height of the tip according to variations in surface topography. The probe must be scanned perpendicular to the sample surface in order to calibrate the instrument and determine the decay constant of the field intensity as a function of probe height. During this scan, a feedback point is set so that the piezoelectric transducer can maintain constant signal intensity during the lateral scan.
Fiber probe tips
The resolution of a PSTM instrument is highly dependent on probe tip geometry and diameter. Probes are typically fabricated via chemical etching of an optical fiber in a solution of HF and can be apertured or apertureless. Using chemical etching, fiber tips with a curvature radius as low as 20 nm have been made. In apertured tips, the sides of the sharpened fiber are sputter coated in a metal or other material. This helps to limit tunneling of photons into the side of the probe in order to maintain more consistent and accurate evanescent field coupling. Due to the rigidity of the fiber probe, even brief contact with the surface will destroy the probe tip.
Larger probe tips have a greater degree of coupling to the evanescent field and will therefore have greater collection efficiency due to a larger area of the optical fiber interacting with the field. The primary limitation of a large tip is the increased probability of collision with rougher surface features as well as photon tunneling into the side of the probe. A narrower probe tip is necessary to resolve more abrupt surface features without collision, however the collection efficiency will be reduced.
Figure 3 shows a fiber probe with a metal coating. In metal coated fiber probes, the diameter and geometry of the aperture, or uncoated area at the tip of the probe, determines the collection efficiency. Wider cone angles result in larger aperture diameters and shorter probe lengths, while narrower cone angles result in smaller aperture diameters and longer probes. Double tapered probe tips have been developed in which a long, narrow region of the probe tapers into a tip with a wider cone angle. This provides a wider aperture for greater collection efficiency while still maintaining a long narrow probe tip capable of resolving abrupt surface features with low risk of collision.
PSTM coupled spectroscopy techniques
Photoluminescence
It has been demonstrated that photoluminescence spectra can be recorded utilizing a modified PSTM instrument. Coupling photoluminescence spectroscopy to PSTM allows the observation of emission from local nanoscopic regions of a sample and provides an understanding of how the photoluminescent properties of a material change due to surface morphology or chemical differences in an inhomogeneous sample. In this experiment, a 442 nm He-Cd laser beam under total internal reflection was used as an excitation source. The signal from the optical fiber was first passed through a monochromator before reaching a photomultiplier tube to record the signal. Photoluminescence spectra were recorded from local regions of a ruby crystal sample. A subsequent publication successfully demonstrated the use of PSTM to record the fluorescence spectrum of a Cr3+ ion implanted sapphire cryogenically cooled under liquid nitrogen. This technique allows characterization of individual surface features of semiconductor samples whose photoluminescent properties are highly temperature dependent and must be studied at cryogenic temperatures.
Infrared
PSTM has been modified to record spectra in the infrared range. Utilizing both cascade arc and free electron laser CLIO as infrared light sources, infrared absorbance spectra were recorded from a diazoquinone resin. This mode of operation requires a fluoride glass fiber and HgCdTe detector in order to effectively collect and record the infrared wavelengths used. Furthermore, the fiber tip must be metal coated and oscillated during collection in order to sufficiently reduce background noise. The surface must first be imaged using a wavelength that will not be absorbed by the sample. Next, the light source is stepped through the infrared wavelengths of interest at each point during collection. The spectrum is acquired by analysis of the differences in the images recorded at different wavelengths.
Atomic force microscopy
Figure 4 shows the combination of a PSTM, AFM, and conventional microscope. A silicon nitride cantilever can be used as the optical probe tip in order to perform atomic force microscopy (AFM) and PSTM simultaneously. This allows comparison of the recorded optical signal with the higher resolution topography data obtained by AFM. Silicon nitride is a suitable material for an optical probe tip as it is optically transparent down to 300 nm. However, since it is not optically conducting, the photons collected by the probe tip must be focused through a lens to the detector instead of travelling through an optical fiber. The instrument can be operated in constant height or constant force mode and resolution is limited to 10–50 nm due to tip convolution. Since the optical signal obtained in PSTM is affected by the optical properties of the sample as well as topography, comparison of the PSTM data with AFM data allows determination of the absorbance of the sample. In one study, the 514 nm absorbance of a Langmuir-Blodgett film of 10,12-pentacosadiynoic acid (PCA) was recorded using this method.
Photo-conductive imaging with atomic force/electron scanning tunneling microscopy
PSTM can be combined with both an electron scanning tunneling microscope and AFM in order to simultaneously record optical, conductive, and topographical information of a sample. This experimental apparatus, published by Iwata et al., allows the characterization of semiconductors such as photovoltaics, as well as other photo-conductive materials. The experimental configuration utilizes a cantilever consisting of a bent optical fiber sharpened to a tip diameter of less than 100 nm, coated with an ITO layer and a thin Au layer. Hence, the fiber probe acts as the AFM cantilever for force sensing, is optically conductive to record optical data, and is electrically conductive to record current from the sample. The signals from the three detection methods are recorded simultaneously and independently in order to separate topographical, optical, and electrical information from the signals.
This apparatus was used to characterize copper phthalocyanine deposited over an array of gold squares patterned on an ITO substrate affixed to a prism. The prism was illuminated under total internal reflection at 636 nm, 533 nm, and 441 nm (selected from a white light laser using optical filters), allowing photo-conductive imaging at different excitation wavelengths. Copper phthalocyanine is a semiconducting organometallic compound. The conductivity of this compound is high enough for the electric current to travel through the film and tunnel into the probe tip. The photo-conductive properties of this material cause the conductivity to increase under irradiation due to an increase in the number of photo-generated charge carriers. Optical and topographical images of the sample were obtained utilizing the novel imaging technique described above. The changes in photo-conductivity of point-contact areas of the film were observed under different excitation wavelengths.
References
Photonics
Scanning probe microscopy | Photon scanning microscopy | [
"Chemistry",
"Materials_science"
] | 3,526 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
50,232,027 | https://en.wikipedia.org/wiki/Nereda | Nereda is a wastewater treatment technology invented by Mark van Loosdrecht of the Delft University of Technology in the Netherlands. The technology is based on aerobic granulation and is a modification of the activated sludge process.
Aerobic granular sludge can be formed by applying specific process conditions that favour slow growing organisms such as PAOs (polyphosphate accumulating organisms) and GAOs (glycogen accumulating organisms). Another key part of granulation is selective washing, whereby slow settling floc-like sludge is discharged as waste sludge and faster settling biomass is retained. At full scale, the Nereda system consists of a cyclical process with three main cycle components or phases, namely: simultaneous fill and draw, aeration/reaction, and settling. The aerobic granules have excellent settling properties, allowing for higher biomass concentrations (8 g/L) and eliminating the need for secondary clarifiers and for major sludge recycle pumping in the Nereda system – the result is a compact (reduced plant footprint), simple system that requires significantly less chemicals and energy than conventional activated sludge (CAS) systems. The technology has proven to work in the field and currently more than 90 wastewater treatment plants worldwide are operational, under construction or under design, varying in size from 500 up to 2,400,000 person equivalents. The Nereda technology is an invention of the Delft University of Technology and engineering consultancy Royal HaskoningDHV.
The name Nereda derives from the Greek word “Neraida”. Nereda was a water nymph and one of the daughters of Nereus, the wise and benevolent Greek god of the sea. In Greek mythology Nereda is linked with the terms “pure” and “immaculate”, a reference to the water quality produced by the new technology.
See also
List of waste water treatment technologies
Water purification
Sequencing batch reactor
References
External links
Official website
Sewerage | Nereda | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 411 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
41,054,127 | https://en.wikipedia.org/wiki/Coastal%20sediment%20transport | Coastal sediment transport (a subset of sediment transport) is the interaction of coastal land forms to various complex interactions of physical processes. The primary agent in coastal sediment transport is wave activity (see Wind wave), followed by tides and storm surge (see Tide and Storm surge), and near shore currents (see Sea#Currents) . Wind-generated waves play a key role in the transfer of energy from the open ocean to the coastlines. In addition to the physical processes acting upon the shore, the size distribution of the sediment is a critical determination for how the beach will change (see Grain size determination). These various interactions generate a wide variety of beaches. (see Beach). Other than the interactions between coastal land forms and physical processes there is also the addition of modification of these landforms through anthropogenic sources (see human modifications). Some of the anthropogenic sources of modification have been put in place to halt erosion or prevent harbors from filling up with sediment. In order to assist community planners, local governments, and national governments a variety of models have been developed to predict the changes of beach sediment transport at coastal locations. Typically, during large wave events, the sediment gets transported off the beach face and deposited offshore generating a sandbar. Once the significant wave event has diminished, the sediment then gets slowly transported back onshore.
History
In the mid-1970s a significant amount of attention was paid to coastal sediment transport, in part due to the National Sea Grant College Program and the U.S. Congress-mandated Sea Grant Act of 1976. One of the research areas included "the development and the experimental verification of hydrodynamic laws governing the transport of marine sediments in the flow fields occurring in coastal waters." From this request for research, the Office of Sea Grant reviewed, accepted, and funded the Nearshore Sediment Transport Study (NSTS). Due to unforeseen complications the NSTS conducted only two major field experiments and a validation experiment. This was nonetheless a significant contribution to the field of coastal sediment transport and helped initialize a great deal of future research.
Glossary
shore – zone between the water's edge at normal low tide and the landward limit of effective wave action.
shoreline – the water's edge, migrating up and down with the tide.
foreshore – exposed at low tide and submerged at high tide.
backshore – extending above normal high tide level.
nearshore – zone between the shoreline and the line where the waves begin to break.
beach – an accumulation of loose sediment sometimes confined to the backshore but often extending across the foreshore as well.
Beach profile measurements
A variety of measurements are used to determine the beach profile, sediment grain size, and various other important parameters to determine what is influencing coastal sediment transport. Below are a few of the multitude.
Coastal research amphibious buggy (CRAB)
A three-wheeled vehicle deployed at the beach to measure the beach profile. (more information can be found at http://frf.usace.army.mil/vehicles2.stm)
Emory beach profile measurement
One method for determining the profile of a beach is the Emory beach profiling method. The researcher first establishes a benchmark, a control point from which the surveys start; typically this is far enough away from the swash zone that large changes in elevation will not occur during the sampling time. Once the benchmark is established, the researcher uses the Emory sampling device to measure the change in elevation over the distance the device covers, then moves the device to the end point of the last measurement, and so on until reaching the shoreline. Typically this is done during neap tide (see Tide for more information on neap tide).
Grain size determination
Since sand grain diameters vary across the entire beach, the median grain size is used to determine the sediment fall velocity. Determining the sediment fall velocity in turn allows one to estimate where sediment of a given size ends up, as illustrated in the sketch below.
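The text does not state which fall-velocity formula is intended; as a sketch under that caveat, the example below uses Stokes' law, which is valid only for very fine grains settling in the laminar regime, with assumed quartz-in-water values for the densities and viscosity.

```python
def stokes_fall_velocity(d, rho_s=2650.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Stokes fall velocity (m/s) of a small sphere of diameter d (m) in still water.

    Only valid for fine grains (roughly d < 0.1 mm); coarser sand requires
    empirical formulas that are not shown here.
    """
    return (rho_s - rho_w) * g * d ** 2 / (18.0 * mu)

# Example with an assumed median grain size d50 of 0.06 mm (fine sand / coarse silt)
d50 = 0.06e-3
print(f"w_s = {stokes_fall_velocity(d50) * 1000:.2f} mm/s")
```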
Human modifications
Sea walls
Groynes
Breakwaters
Dredging of harbor entrances
Dumping of material on the coast and offshore
Reduction of coastal vegetation (cutting, burning, grazing, pollution)
Models
Models for the prediction of sediment transport in coastal regions have been used since the mid-1970s. One of the first formulas to calculate coastal sediment transport was developed by Eco Bijker at the end of the 1960s. Some transport models are:
XBeach (http://oss.deltares.nl/web/xbeach/)
Profile Parameter P
Engineering tools and databases on Sediment Transport and Morphology (http://www.leovanrijn-sediment.com/page4.html)
DHI's MIKE software (http://www.mikepoweredbydhi.com/products/mike-21/sediments)
DELFT3D (http://oss.deltares.nl/web/delft3d/home)
TELEMAC-MASCARET: SISYPHE - Sediment transport and bed evolution (http://www.opentelemac.org/index.php/modules-list/164-sysiphe-sediment-transport-and-bed-evolution)
CoastalME (Coastal Modelling Environment - https://earthwise.bgs.ac.uk/index.php/Category:Coastal_Modeling_Environment)
References
Fluid mechanics
Geomorphology
Sedimentology
Hydrology
Geological processes
Deposition (geology)
Coastal engineering | Coastal sediment transport | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,101 | [
"Hydrology",
"Coastal engineering",
"Civil engineering",
"Environmental engineering",
"Fluid mechanics"
] |
41,060,071 | https://en.wikipedia.org/wiki/Pandora%20%28fungus%29 | Pandora is a genus of fungi within the order Entomophthorales. This has been supported by molecular phylogenetic analysis (Gryganskyi et al. 2012).
It was initially erected by Polish mycologist Andrzej Batko (1933–1997) as a subgenus of Zoophthora, and was later raised to genus level by American mycologist Richard A. Humber. The genus name Pandora is derived from the Latin word pando, which means “to become curved” or “to sag”, and the generic suffix “-ra”, describing the conidia, which often show weakly outlined bilateral symmetry: they are slightly flattened on one (abdominal) side, more convex on the opposite (dorsal) side, and in lateral view somewhat curved towards the abdominal side and slightly asymmetrical.
It has a cosmopolitan distribution.
It is best known by its representative Pandora neoaphidis, which acts as an obligate pathogen of various species of aphids. It is a widespread species that is often found to be the most common fungal insect pathogen in local aphid communities (e.g. in surveys from Argentina, Slovakia, and China). It has therefore been studied for biological control, including use against the green peach aphid, Myzus persicae (Homoptera: Aphididae), a pest of spinach (Spinacia oleracea) in Arkansas, United States. Up to 95 aphid species worldwide have been found to be infected by the fungus, in places such as France (Rabasse et al. 1983), Mexico (Remaudiere and Hennebert, 1980), Portugal and Spain (Humber, 1986) and Japan (Kobayashi et al., 1984). Panicum miliaceum (broomcorn millet) was trialled in 2003 as a laboratory production base for the fungus. However, as of 2012, difficulties with mass production of infectious spores in vitro and with formulation and storage as an easily applicable commercial product had halted its direct use as a biological control.
There is limited evidence that the ladybird Harmonia axyridis, which is invasive in America and Europe, has an advantage over native ladybird species because it feeds more on Pandora-infested aphid cadavers.
Pandora formicae is a rare example of an entomophthoralean fungus that has adapted to exclusively infect social insects, such as the wood ant Formica polyctena. The proportion of dead ant bodies with resting spores increased from late summer throughout autumn, which suggests that these fungal spores are the main overwintering fungal structures.
Pandora sp. nov. inedit. (ARSEF13372) is a recently isolated fungus species with high potential for use in psyllid pest control. Experiments in biomass production are underway to assess its usefulness.
Species
As accepted by Species Fungorum;
Pandora aleurodis
Pandora bibionis
Pandora blunckii
Pandora borea
Pandora brahminae
Pandora bullata
Pandora dacnusae
Pandora delphacis
Pandora dipterigena
Pandora echinospora
Pandora formicae
Pandora gloeospora
Pandora guangdongensis
Pandora heteropterae
Pandora kondoiensis
Pandora lipae
Pandora longissima
Pandora minutispora
Pandora muscivora
Pandora myrmecophaga
Pandora neoaphidis
Pandora nouryi
Pandora phalangicida
Pandora philonthi
Pandora phyllobii
Pandora poloniae-majoris
Pandora psocopterae
Pandora sciarae
Pandora shaanxiensis
Pandora terrestris
Pandora uroleuconii
Former species;
P. americana = Furia americana, Entomophthoraceae
P. athaliae = Zoophthora athaliae, Entomophthoraceae
P. calliphorae = Entomophthora calliphorae, Entomophthoraceae
P. chironomi = Erynia chironomi, Entomophthoraceae
P. cicadellis = Erynia cicadellis, Entomophthoraceae
P. suturalis = Zoophthora suturalis, Entomophthoraceae
References
Entomophthorales
Zygomycota genera | Pandora (fungus) | [
"Biology"
] | 891 | [
"Fungus stubs",
"Fungi"
] |
32,662,418 | https://en.wikipedia.org/wiki/Anomalous%20diffraction%20theory | Anomalous diffraction theory (also van de Hulst approximation, eikonal approximation, high energy approximation, soft particle approximation) is an approximation developed by Dutch astronomer van de Hulst describing light scattering for optically soft spheres.
The anomalous diffraction approximation for extinction efficiency is valid for optically soft particles and large size parameter, x = 2πa/λ:
Qext = 2 − (4/p) sin p + (4/p^2)(1 − cos p),
where the refractive index is assumed to be real in this derivation, and thus there is no absorption. Qext is the efficiency factor of extinction, which is defined as the ratio of the extinction cross section to the geometrical cross section πa^2. p = 4πa(n – 1)/λ has the physical meaning of the phase delay of the wave passing through the center of the sphere; a is the sphere radius, n is the ratio of refractive indices inside and outside of the sphere, and λ the wavelength of the light.
This set of equations was first described by van de Hulst. There are extensions to more complicated geometries of scattering targets.
The anomalous diffraction approximation offers a very approximate but computationally fast technique to calculate light scattering by particles. The ratio of refractive indices has to be close to 1, and the size parameter should be large. However, semi-empirical extensions to small size parameters and larger refractive indices are possible. The main advantage of the ADT is that one can (a) calculate, in closed form, extinction, scattering, and absorption efficiencies for many typical size distributions; (b) find solution to the inverse problem of predicting size distribution from light scattering experiments (several wavelengths); (c) for parameterization purposes of single scattering (inherent) optical properties in radiative transfer codes.
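A minimal sketch of that computational advantage, assuming the van de Hulst expression for Qext reconstructed above and illustrative (made-up) values for the particle radius, relative refractive index and wavelength:

```python
import numpy as np

def q_ext_adt(a, n, wavelength):
    """Extinction efficiency of a non-absorbing sphere in the anomalous
    diffraction approximation (van de Hulst form), valid for n close to 1
    and large size parameter x = 2*pi*a/wavelength."""
    p = 4 * np.pi * a * (n - 1) / wavelength          # phase delay through the centre
    return 2 - (4 / p) * np.sin(p) + (4 / p**2) * (1 - np.cos(p))

# Illustrative values: a 1 micron "soft" particle in visible light
for n in (1.05, 1.10, 1.20):
    print(f"n = {n:.2f}   Q_ext = {q_ext_adt(a=1.0e-6, n=n, wavelength=0.5e-6):.3f}")
```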
Another limiting approximation for optically soft particles is Rayleigh scattering, which is valid for small size parameters.
Notes and references
Scattering, absorption and radiative transfer (optics) | Anomalous diffraction theory | [
"Chemistry"
] | 400 | [
"Scattering",
" absorption and radiative transfer (optics)"
] |
32,668,461 | https://en.wikipedia.org/wiki/Susanna%20S.%20Epp | Susanna Samuels Epp (born 1943) is an author, mathematician, and professor. Her interests include discrete mathematics, mathematical logic, cognitive psychology, and mathematics education, and she has written numerous articles, publications, and textbooks. She is currently professor emerita at DePaul University, where she chaired the Department of Mathematical Sciences and was Vincent de Paul Professor in Mathematics.
Education and career
Epp holds degrees in mathematics from Northwestern University and the University of Chicago, where she completed her doctorate in 1968 under the supervision of Irving Kaplansky. She taught at Boston University and at the University of Illinois at Chicago before becoming a professor at DePaul University.
Contributions
Initially researching commutative algebra, Epp became interested in cognitive psychology, especially as it relates to the teaching of mathematics, logic, proof, and the language of mathematics. She has written several articles about teaching logic and proof in the American Mathematical Monthly and in the Mathematics Teacher, a journal of the National Council of Teachers of Mathematics.
She is the author of several books including Discrete Mathematics with Applications (4th ed., Brooks/Cole, 2011), the third edition of which earned a Textbook Excellence Award from the Textbook and Academic Authors Association.
"By combining discussion of theory and practice, I have tried to show that mathematics has engaging and important applications as well as being interesting and beautiful in its own right" - Susanna S. Epp wrote in the Preface of the 4th Edition of Discrete Mathematics.
Recognition
In 2005, she received the Louise Hay Award from the Association for Women in Mathematics in recognition for her contributions to mathematics education.
Selected publications
Epp, S.S., Variables in Mathematics Education. In Tools for Teaching Logic. Blackburn, P., van Ditmarsch, H., et al., eds. Springer Publishing, 2011. (Reprinted in Best Writing on Mathematics 2012, M. Pitici, Ed. Princeton Univ. Press, Nov. 2012.)
Epp, S.S., V. Durand-Guerrier, et al. Argumentation and proof in the mathematics classroom. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers Eds. Springer Publishing. (co-authors: V. Durand-Guerrier, P. Boero, N. Douek, D. Tanguay), 2012.
Epp, S.S., V. Durand-Guerrier, et al. Examining the role of logic in teaching proof. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers Eds. Springer Publishing, 2012.
Epp, S.S., Proof Issues with Existential Quantification. In Proof and Proving in Mathematics Education: ICMI Study 19 Conference Proceedings, F. L. Lin et al. eds., National Taiwan Normal University, 2009.
Epp, S.S., The Use of Logic in Teaching Proof. In Resources for Teaching Discrete Mathematics. B. Hopkins, ed. Washington, DC: Mathematical Association of America, 2009, pp. 313–322.
Epp, S.S., The Role of Logic in Teaching Proof, American Mathematical Monthly (110)10, Dec. 2003, 886-899
Epp, S.S., The Language of Quantification in Mathematics Instruction. In Developing Mathematical Reasoning in Grades K-12. Lee V. Stiff, Ed. Reston, VA: NCTM Publications, 1999, 188-197.
Epp, S.S., The Role of Proof in Problem Solving. In Mathematical Thinking and Problem Solving. Alan H. Schoenfeld, Ed. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., Publishers, 1994, 257-269.
References
External links
Susanna Epp's webpage at De Paul
Fifteenth Annual Louise Hay Award, contains a brief biography of Susanna S. Epp.
1943 births
20th-century American mathematicians
21st-century American mathematicians
Living people
DePaul University faculty
Mathematical logicians
Women logicians
21st-century American women mathematicians
20th-century American women mathematicians | Susanna S. Epp | [
"Mathematics"
] | 826 | [
"Mathematical logic",
"Mathematical logicians"
] |
32,670,698 | https://en.wikipedia.org/wiki/Continuous%20q-Hahn%20polynomials | In mathematics, the continuous q-Hahn polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. give a detailed list of their properties.
Definition
The polynomials are given in terms of basic hypergeometric functions and the q-Pochhammer symbol by
Gallery
References
Orthogonal polynomials
Q-analogs
Special hypergeometric functions | Continuous q-Hahn polynomials | [
"Mathematics"
] | 71 | [
"Q-analogs",
"Combinatorics"
] |
32,671,041 | https://en.wikipedia.org/wiki/T-matrix%20method | The Transition Matrix Method (T-matrix method, TMM) is a computational technique of light scattering by nonspherical particles originally formulated by Peter C. Waterman (1928–2012) in 1965.
The technique is also known as null field method and extended boundary condition method (EBCM). In the method, matrix elements are obtained by matching boundary conditions for solutions of Maxwell equations. It has been greatly extended to incorporate diverse types of linear media occupying the region enclosing the scatterer.
T-matrix method proves to be highly efficient and has been widely used in computing electromagnetic scattering of single and compound particles.
Definition of the T-matrix
The incident and scattered electric fields are expanded into spherical vector wave functions (SVWF), which are also encountered in Mie scattering. They are the fundamental solutions of the vector Helmholtz equation and can be generated from the scalar fundamental solutions in spherical coordinates, the spherical Bessel functions of the first kind and the spherical Hankel functions. Accordingly, there are two linearly independent sets of solutions, called regular and outgoing SVWFs, respectively. With this, the incident field can be written as an expansion in the regular SVWFs (written out below).
The scattered field is expanded into radiating (outgoing) SVWFs.
The T-matrix relates the expansion coefficients of the incident field to those of the scattered field.
The T-matrix is determined by the scatterer shape and material and for a given incident field allows one to calculate the scattered field.
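In one common notation (the specific symbols below are an assumption here, with Ψ⁽¹⁾ denoting the regular and Ψ⁽³⁾ the outgoing SVWFs, and n a multi-index over degree, order and polarization), the two expansions and the defining relation of the T-matrix read:

```latex
\mathbf{E}_{\mathrm{inc}}(\mathbf{r}) = \sum_{n} a_{n}\,\boldsymbol{\Psi}^{(1)}_{n}(\mathbf{r}),
\qquad
\mathbf{E}_{\mathrm{scat}}(\mathbf{r}) = \sum_{n} b_{n}\,\boldsymbol{\Psi}^{(3)}_{n}(\mathbf{r}),
\qquad
b_{n} = \sum_{n'} T_{n n'}\, a_{n'}.
```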
Calculation of the T-matrix
The standard way to calculate the T-matrix is the null-field method, which relies on the Stratton–Chu equations. They basically state that the electromagnetic fields outside a given volume can be expressed as integrals over the surface enclosing the volume involving only the tangential components of the fields on the surface. If the observation point is located inside this volume, the integrals vanish.
By making use of the boundary conditions for the tangential field components on the scatterer surface,
n × (E_outside − E_inside) = 0
and
n × (H_outside − H_inside) = 0,
where n is the normal vector to the scatterer surface, one can derive an integral representation of the scattered field in terms of the tangential components of the internal fields on the scatterer surface. A similar representation can be derived for the incident field.
By expanding the internal field in terms of SVWFs and exploiting their orthogonality on spherical surfaces, one arrives at an expression for the T-matrix. The T-matrix can also be computed from far field data. This approach avoids numerical stability issues associated with the null-field method.
Several numerical codes for the evaluation of the T-matrix can be found online.
The T-matrix can be found with methods other than the null field method and the extended boundary condition method (EBCM); therefore, the term "T-matrix method" is infelicitous.
Improvements of the traditional T-matrix method include the invariant-imbedding T-matrix method (IITM) by B. R. Johnson. The numerical code of IITM was developed by Lei Bi, based on Mishchenko's EBCM code. It is more powerful than EBCM as it is more efficient and raises the upper limit on particle size in the computation.
References
Computational physics
Electromagnetism
Electrodynamics
Scattering, absorption and radiative transfer (optics)
Computational electromagnetics | T-matrix method | [
"Physics",
"Chemistry",
"Mathematics"
] | 679 | [
"Electromagnetism",
"Physical phenomena",
"Computational electromagnetics",
" absorption and radiative transfer (optics)",
"Computational physics",
"Scattering",
"Fundamental interactions",
"Electrodynamics",
"Dynamical systems"
] |
32,672,803 | https://en.wikipedia.org/wiki/GenoCAD | GenoCAD is one of the earliest computer assisted design tools for synthetic biology. The software is a bioinformatics tool developed and maintained by GenoFAB, Inc.. GenoCAD facilitates the design of protein expression vectors, artificial gene networks and other genetic constructs for genetic engineering and is based on the theory of formal languages.
History
GenoCAD originated as an offshoot of an attempt to formalize functional constraints of genetic constructs using the theory of formal languages. In 2007, the website genocad.org (now retired) was set up as a proof of concept by researchers at Virginia Bioinformatics Institute, Virginia Tech. Using the website, users could design genes by repeatedly replacing high-level genetic constructs with lower level genetic constructs, and eventually with actual DNA sequences.
On August 31, 2009, the National Science Foundation granted a three-year $1,421,725 grant to Dr. Jean Peccoud, an associate professor at the Virginia Bioinformatics Institute at Virginia Tech, for the development of GenoCAD. GenoCAD was and continues to be developed by GenoFAB, Inc., a company founded by Peccoud (currently CSO and acting CEO), who was also one of the authors of the originating study.
Source code for GenoCAD was originally released on SourceForge in December 2009.
GenoCAD version 2.0 was released in November 2011 and included the ability to simulate the behavior of the designed genetic code. This feature was a result of a collaboration with the team behind COPASI.
In April, 2015, Peccoud and colleagues published a library of biological parts, called GenoLIB, that can be incorporated into the GenoCAD platform.
Goals
The four aims of the project are to develop a:
computer language to represent the structure of synthetic DNA molecules used in E.coli, yeast, mice, and Arabidopsis thaliana cells
compiler capable of translating DNA sequences into mathematical models in order to predict the encoded phenotype
collaborative workflow environment which allows users to share parts, designs, and fabrication resources
means to forward the results to the user community through an external advisory board, an annual user conference, and outreach to industry
Features
The main features of GenoCAD can be organized into three main categories.
Management of genetic sequences: The purpose of this group of features is to help users identify, within large collections of genetic parts, the parts needed for a project and to organize them in project-specific libraries.
Genetic parts: Parts have a unique identifier, a name and a more general description. They also have a DNA sequence. Parts are associated with a grammar and assigned to a parts category such as promoter, gene, etc.
Parts libraries: Collections of parts are organized in libraries. In some cases part libraries correspond to parts imported from a single source such as another sequence database. In other cases, libraries correspond to the parts used for a particular design project. Parts can be moved from one library to another through a temporary storage area called the cart (analogous to e-commerce shopping carts).
Searching parts: Users can search the parts database using the Lucene search engine. Basic and advanced search modes are available. Users can develop complex queries and save them for future reuse.
Importing/Exporting parts: Parts can be imported and exported individually or as entire libraries using standard file formats (e.g., GenBank, tab delimited, FASTA, SBML).
Combining sequences into genetic constructs: The purpose of this group of features is to streamline the process of combining genetic parts into designs compliant with a specific design strategy.
Point-and-click design tool: This wizard guides the user through a series of design decisions that determine the design structure and the selection of parts included in the design.
Design management: Designs can be saved in the user workspace. Design statuses are regularly updated to warn users of the consequences of editing parts on previously saved designs.
Exporting designs: Designs can be exported using standard file formats (e.g., GenBank, tab delimited, FASTA).
Design safety: Designs are protected from some types of errors by forcing the user to follow the appropriate design strategy.
Simulation: Sequences designed in GenoCAD can be simulated to display chemical production in the resulting cell.
User workspace: Users can personalize their workspace by adding parts to the GenoCAD database, creating specialized libraries corresponding to specific design projects, and saving designs at different stages of development.
Theoretical foundation
GenoCAD is rooted in the theory of formal languages; in particular, the design rules describing how to combine different kinds of parts form context-free grammars.
A context free grammar can be defined by its terminals, variables, start variable and substitution rules. In GenoCAD, the terminals of the grammar are sequences of DNA that perform a particular biological purpose (e.g. a promoter). The variables are less homogeneous: they can represent longer sequences that have multiple functions or can represent a section of DNA that can contain one of multiple different sequences of DNA but perform the same function (e.g. a variable represents the set of promoters). GenoCAD includes built in substitution rules to ensure that the DNA sequence is biologically viable. Users can also define their own sets of rules for other purposes.
Designing a sequence of DNA in GenoCAD is much like creating a derivation in a context free grammar. The user starts with the start variable and repeatedly selects a variable and a substitution for it until only terminals are left.
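The toy sketch below illustrates this derivation process; the grammar rules, part names and DNA sequences are invented for illustration only and are not GenoCAD's actual grammar or parts database.

```python
import random

# Invented toy grammar: variables expand into sequences of variables/terminals.
rules = {
    "CASSETTE":   [["PROMOTER", "RBS", "GENE", "TERMINATOR"]],
    "PROMOTER":   [["pLac"], ["pTet"]],
    "RBS":        [["rbsA"]],
    "GENE":       [["gfp"], ["rfp"]],
    "TERMINATOR": [["t1"]],
}
# Terminal parts mapped to (made-up) DNA sequences.
parts = {
    "pLac": "TTGACA", "pTet": "TCCCTA", "rbsA": "AGGAGG",
    "gfp": "ATGGTG", "rfp": "ATGGCC", "t1": "TTTTTT",
}

def derive(symbol, rng):
    """Expand a symbol into a list of terminal parts by repeated substitution."""
    if symbol in parts:                    # terminal: a concrete DNA part
        return [symbol]
    expansion = rng.choice(rules[symbol])  # pick one substitution rule
    return [part for s in expansion for part in derive(s, rng)]

rng = random.Random(1)
design = derive("CASSETTE", rng)
print(design)                              # e.g. ['pTet', 'rbsA', 'gfp', 't1']
print("".join(parts[p] for p in design))   # the concatenated DNA sequence
```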
Alternatives
The most common alternatives to GenoCAD are Proto, GEC and EuGene.
References
External links
GenoCAD.com
Project page on SourceForge
Tutorials and FAQs
Peccoud Lab
Synthetic biology
Free bioinformatics software
Systems biology
Biotechnology | GenoCAD | [
"Engineering",
"Biology"
] | 1,180 | [
"Synthetic biology",
"Biological engineering",
"Biotechnology",
"Bioinformatics",
"Molecular genetics",
"nan",
"Systems biology"
] |
34,289,077 | https://en.wikipedia.org/wiki/Euclidean%20random%20matrix | Within mathematics, an N×N Euclidean random matrix  is defined with the help of an arbitrary deterministic function f(r, r′) and of N points {ri} randomly distributed in a region V of d-dimensional Euclidean space. The element Aij of the matrix is equal to f(ri, rj): Aij = f(ri, rj).
History
Euclidean random matrices were first introduced in 1999. The authors of that work studied a special case of functions f that depend only on the distances between the pairs of points: f(r, r′) = f(r - r′) and imposed an additional condition on the diagonal elements Aii,
Aij = f(ri - rj) - u δijΣkf(ri - rk),
motivated by the physical context in which they studied the matrix.
A Euclidean distance matrix is a particular example of Euclidean random matrix with either f(ri - rj) = |ri - rj|2 or f(ri - rj) = |ri - rj|.
For example, in many biological networks, the strength of interaction between two nodes depends on the physical proximity of those nodes. Spatial interactions between nodes can be modelled as a Euclidean random matrix, if nodes are placed randomly in space.
Properties
Because the positions of the points {ri} are random, the matrix elements Aij are random too. Moreover, because the N×N elements are completely determined by only N points and, typically, one is interested in N≫d, strong correlations exist between different elements.
Hermitian Euclidean random matrices
Hermitian Euclidean random matrices appear in various physical contexts, including supercooled liquids, phonons in disordered systems, and waves in random media.
Example 1: Consider the matrix Â generated by the function f(r, r′) = sin(k0|r-r′|)/(k0|r-r′|), with k0 = 2π/λ0. This matrix is Hermitian and its eigenvalues Λ are real. For N points distributed randomly in a cube of side L and volume V = L^3, one can show that the probability distribution of Λ is approximately given by the Marchenko-Pastur law, if the density of points ρ = N/V obeys ρλ0^3 ≤ 1 and 2.8N/(k0L)^2 < 1 (see figure).
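A minimal numerical sketch of Example 1 (the values of N, the box size and λ0 below are assumed for illustration and chosen to satisfy the two validity conditions just quoted): generate N random points in a cube, build the sinc-kernel matrix, and compute its real eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters: rho*lambda0^3 = 0.5 <= 1 and
# 2.8*N/(k0*L)^2 ~ 0.35 < 1, so the Marchenko-Pastur regime applies.
N = 500
L = 10.0                      # cube side, in units of lambda0
lam0 = 1.0
k0 = 2 * np.pi / lam0

# N points uniformly distributed in the cube
points = rng.uniform(0.0, L, size=(N, 3))

# Pairwise distances and A_ij = sin(k0*r_ij) / (k0*r_ij), with A_ii = 1
r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
A = np.sinc(k0 * r / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x)

# A is real symmetric (Hermitian), so its eigenvalues are real
eigenvalues = np.linalg.eigvalsh(A)
print("eigenvalue range:", eigenvalues.min(), eigenvalues.max())
```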
Non-Hermitian Euclidean random matrices
A theory for the eigenvalue density of large (N≫1) non-Hermitian Euclidean random matrices has been developed and has been applied to study the problem of random laser.
Example 2: Consider the matrix Â generated by the function f(r, r′) = exp(ik0|r-r′|)/(k0|r-r′|), with k0 = 2π/λ0 and f(r = r′) = 0. This matrix is not Hermitian and its eigenvalues Λ are complex. The probability distribution of Λ can be found analytically if the density of points ρ = N/V obeys ρλ0^3 ≤ 1 and 9N/(8k0R)^2 < 1 (see figure).
References
Random matrices
Mathematical physics | Euclidean random matrix | [
"Physics",
"Mathematics"
] | 672 | [
"Random matrices",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Matrices (mathematics)",
"Statistical mechanics",
"Mathematical physics"
] |
34,289,679 | https://en.wikipedia.org/wiki/Smooth%20completion | In algebraic geometry, the smooth completion (or smooth compactification) of a smooth affine algebraic curve X is a complete smooth algebraic curve which contains X as an open subset. Smooth completions exist and are unique over a perfect field.
Examples
An affine form of a hyperelliptic curve may be presented as y^2 = f(x), where f(x) has distinct roots and has degree at least 5. The Zariski closure of the affine curve in the projective plane is singular at the unique infinite point added. Nonetheless, the affine curve can be embedded in a unique compact Riemann surface called its smooth completion. The projection of the Riemann surface to the projective line is 2-to-1 over the singular point at infinity if f has even degree, and 1-to-1 (but ramified) otherwise.
This smooth completion can also be obtained as follows. Project the affine curve to the affine line using the x-coordinate. Embed the affine line into the projective line, then take the normalization of the projective line in the function field of the affine curve.
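Concretely, the smooth completion in the hyperelliptic example can be written down with two affine charts glued along x ≠ 0; the presentation below is the standard gluing, sketched here for f of degree 2g+1 or 2g+2.

```latex
U:\; y^{2} = f(x), \qquad
V:\; v^{2} = u^{2g+2}\, f\!\left(\tfrac{1}{u}\right), \qquad
u = \tfrac{1}{x}, \quad v = \tfrac{y}{x^{\,g+1}}.
```

The points of V with u = 0 are the smooth points "at infinity": there is one such point (with the double cover ramified there) when f has odd degree, and two such points when f has even degree, matching the behaviour of the projection described above.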
Applications
A smooth connected curve over an algebraically closed field is called hyperbolic if 2 − 2g − r < 0, where g is the genus of the smooth completion and r is the number of added points.
Over an algebraically closed field of characteristic 0, the fundamental group of X is free with 2g + r − 1 generators if r > 0.
(Analogue of Dirichlet's unit theorem) Let X be a smooth connected curve over a finite field. Then the group of units of the ring of regular functions O(X) on X is a finitely generated abelian group of rank r − 1.
Construction
Suppose the base field is perfect. Any affine curve X is isomorphic to an open subset of an integral projective (hence complete) curve. Taking the normalization (or blowing up the singularities) of the projective curve then gives a smooth completion of X. Their points correspond to the discrete valuations of the function field that are trivial on the base field.
By construction, the smooth completion is a projective curve which contains the given curve as an everywhere dense open subset, and the added new points are smooth. Such a (projective) completion always exists and is unique.
If the base field is not perfect, a smooth completion of a smooth affine curve doesn't always exist. But the above process always produces a regular completion if we start with a regular affine curve (smooth varieties are regular, and the converse is true over perfect fields). A regular completion is unique and, by the valuative criterion of properness, any morphism from the affine curve to a complete algebraic variety extends uniquely to the regular completion.
Generalization
If X is a separated algebraic variety, a theorem of Nagata says that X can be embedded as an open subset of a complete algebraic variety. If X is moreover smooth and the base field has characteristic 0, then by Hironaka's theorem X can even be embedded as an open subset of a complete smooth algebraic variety, with boundary a normal crossing divisor. If X is quasi-projective, the smooth completion can be chosen to be projective.
However, contrary to the one-dimensional case, there is no uniqueness of the smooth completion, nor is it canonical.
See also
Hyperelliptic curve
Bolza surface
References
Bibliography
Algebraic geometry
Riemann surfaces
Algebraic curves
Birational geometry | Smooth completion | [
"Mathematics"
] | 682 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
55,539,147 | https://en.wikipedia.org/wiki/Boolean%20differential%20calculus | Boolean differential calculus (BDC) (German: (BDK)) is a subject field of Boolean algebra discussing changes of Boolean variables and Boolean functions.
Boolean differential calculus concepts are analogous to those of classical differential calculus, notably studying the changes in functions and variables with respect to another/others.
The Boolean differential calculus allows various aspects of dynamical systems theory such as
automata theory on finite automata
Petri net theory
supervisory control theory (SCT)
to be discussed in a unified and closed form, with their individual advantages combined.
History and applications
Originally inspired by the design and testing of switching circuits and the utilization of error-correcting codes in electrical engineering, the roots for the development of what later would evolve into the Boolean differential calculus were initiated by works of Irving S. Reed, David E. Muller, David A. Huffman, Sheldon B. Akers Jr. and others between 1954 and 1959, and of Frederick F. Sellers Jr., Mu-Yue Hsiao and Leroy W. Bearnson in 1968.
Since then, significant advances were accomplished in both, the theory and in the application of the BDC in switching circuit design and logic synthesis.
Works of Marc Davio and others in the 1970s formed the basics of BDC, which was later further developed into a self-contained mathematical theory.
A complementary theory of Boolean integral calculus has been developed as well.
BDC has also found uses in discrete event dynamic systems (DEDS) in digital network communication protocols.
Meanwhile, BDC has seen extensions to multi-valued variables and functions as well as to lattices of Boolean functions.
Overview
Boolean differential operators play a significant role in BDC. They allow the application of differentials as known from classical analysis to be extended to logical functions.
The differential dx of a Boolean variable x models the relation between the value of x and its changed value: dx = 1 indicates that x changes, while dx = 0 indicates that x does not change.
There are no constraints in regard to the nature, the causes and consequences of a change.
The differentials are binary. They can be used just like common binary variables.
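To make the notion of a Boolean differential operator concrete, here is a small illustrative sketch (not code from the BDC literature; the function f below is an arbitrary example) that computes the simple Boolean derivative ∂f/∂x_i = f(…, x_i = 0, …) ⊕ f(…, x_i = 1, …), which equals 1 exactly for those assignments where changing x_i changes the value of f.

```python
from itertools import product

def boolean_derivative(f, i, n):
    """Simple Boolean derivative of an n-variable Boolean function f
    with respect to variable i: df/dx_i = f(x_i=0) XOR f(x_i=1).

    Returns a dict keyed by the assignment with x_i set to 0 (a placeholder),
    whose value is 1 iff toggling x_i changes f for that assignment.
    """
    result = {}
    for bits in product((0, 1), repeat=n):
        if bits[i] == 0:                 # visit each cofactor pair exactly once
            low = list(bits)
            high = list(bits)
            high[i] = 1
            result[tuple(low)] = f(*low) ^ f(*high)
    return result

# Example: f(x0, x1, x2) = (x0 AND x1) OR x2
f = lambda x0, x1, x2: (x0 & x1) | x2
print(boolean_derivative(f, 0, 3))
# df/dx0 is 1 only where x1 = 1 and x2 = 0, i.e. exactly where a change of x0
# propagates to a change of f.
```

Other derivative operations of the calculus (for example the minimum and maximum derivatives) combine the same two cofactors with AND and OR instead of XOR.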
See also
Boolean Algebra
Boole's expansion theorem
Ramadge–Wonham framework
References
External links
Boolean algebra
Automata (computation)
Mathematical logic
Order theory
Set theory | Boolean differential calculus | [
"Mathematics"
] | 557 | [
"Boolean algebra",
"Set theory",
"Mathematical logic",
"Fields of abstract algebra",
"Order theory"
] |
55,540,036 | https://en.wikipedia.org/wiki/IC%202714 | IC 2714 is an open cluster in the constellation Carina. It was discovered by James Dunlop in 1826. It is located approximately 4,000 light years away from Earth, in the Carina–Sagittarius Arm.
Characteristics
It is a rich to moderately rich, intermediate-brightness, detached cluster with Trumpler type II2r or II3m. There are 494 probable member stars within the angular radius of the cluster and 215 within the central part of the cluster. The tidal radius of the cluster is 6.3 - 8.7 parsecs (21 - 28 light years) and represents the average outer limit of IC 2714, beyond which a star is unlikely to remain gravitationally bound to the cluster core. The core of the cluster is estimated to be 5.9 light years across.
The brightest stars of the cluster are of 11th magnitude and the brightest main sequence stars are of late B or A type. Two blue stragglers have been detected in the cluster, as well as one variable star and eleven red giants. The turn-off mass of the cluster is estimated to be 3.1 solar masses. The cluster has the same metallicity as the Sun.
See also
List of open clusters
Open cluster family
Open cluster remnant
References
External links
2714
Carina (constellation)
Open clusters | IC 2714 | [
"Astronomy"
] | 262 | [
"Carina (constellation)",
"Constellations"
] |
55,541,007 | https://en.wikipedia.org/wiki/Kinematically%20complete%20experiment | In accelerator physics, a kinematically complete experiment is an experiment in which all kinematic parameters of all collision products are determined. If the final state of the collision involves n particles 3n momentum components (3 Cartesian coordinates for each particle) need to be determined. However, these components are linked to each other by momentum conservation in each direction (3 equations) and energy conservation (1 equation) so that only 3n-4 components are linearly independent. Therefore, the measurement of 3n-4 momentum components constitutes a kinematically complete experiment.
If the final state involves only two particles (e.g. in the Rutherford experiment on elastic scattering) then only one particle needs to be detected. However, for processes leading to three collision products, like e.g. single ionization of the target atom, then two particles need to be momentum-analyzed (for one of them it is sufficient to measure two momentum components) and measured in coincidence. Any pair of the three final state particles (i.e. the scattered projectile, the ejected electron, and the recoiling target ion) can be detected. The first kinematically complete experiment on single ionization was performed for electron impact. There, the scattered projectile electron and the ejected electron were momentum-analyzed. For ion impact, such an experiment is much more challenging because of the much larger projectile mass. As a result, the projectile scattering as well as the projectile energy loss relative to the initial energy are by many orders of magnitude smaller than for electron impact and are not measurable with standard techniques for fast heavy ions. Furthermore, only with the advent of cold target recoil-ion momentum spectroscopy (COLTRIMS) could the recoil ions be measured with sufficient momentum resolution. The first kinematically complete experiment on single ionization by ion impact was performed by momentum analyzing the recoil ions and the ejected electrons. For proton impact at much smaller energy kinematically complete experiments were also performed by momentum-analyzing the scattered projectiles and the recoil ions. These studies play an important role in the context of the few-body problem (see the article on few-body systems).
Other processes involving more than two final state particles for which kinematically complete experiments were performed include double ionization of the target by electron impact, transfer-ionization (i.e. one target electron is ejected to the continuum while a second electron is captured by the projectile) by ion impact and dissociative capture in p + H2 collisions, where the capture of an electron to the projectile leads to a fragmentation of the target molecule. Studies on double ionization and transfer-ionization revealed the important role of electron-electron correlation effects in processes involving multiple electrons. In dissociative capture pronounced quantum-mechanical interference was observed, from which detailed information about the phase angle, which in turn provides sensitive information on the few-body dynamics, was obtained.
References
Accelerator physics
Physics experiments | Kinematically complete experiment | [
"Physics"
] | 593 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics",
"Physics experiments"
] |
56,926,089 | https://en.wikipedia.org/wiki/Variational%20asymptotic%20method | Variational Asymptotic Method (VAM) is a powerful mathematical approach to simplify the process of finding stationary points for a described functional by taking advantage of small parameters. VAM is the synergy of variational principles and asymptotic approaches. Variational principles are applied to the defined functional as well as the asymptotes are applied to the same functional instead of applying on differential equations which is more prone error. This methodology is applicable for a whole range of physics problems, where the problem has to be defined in a variational form and should be able to identify the small parameters within the problem definition.
In other words, VAM is applicable where the functional is too complex for its stationary points to be determined analytically, or only by computationally expensive numerical analysis, but small parameters can be exploited. Thus, approximate stationary points of the functional can be utilized to obtain solutions of the original functional.
Introduction
VAM was first initiated by Berdichevsky in 1979 for shell analysis. He applied VAM to develop nonlinear shell theory in 1980 and beam theory in 1982. This method can construct accurate models for dimensionally reducible structures and can analyze geometrically and materially nonlinear models. Berdichevsky elucidated the VAM procedure thoroughly and applied it to shell structures to obtain the in-plane and out-of-plane warping functions, where the introduced warping function is a kind of bridge between the 1-D and 3-D fields, and derived analytical expressions to attain the three-dimensional displacements, stresses and strains.
In the beginning, asymptotic methods were used to develop cross-sectional analysis of anisotropic beams with finite-element based solutions. The development of the formulation in the Variational Asymptotic Beam Sectional Analysis (VABS) started in 1988, and various former students of Hodges made contributions to this project, including Hodges, Cesnik and Hodges, and Yu et al. A more detailed account of the VABS history can be found in Hodges's book. Thereafter, linear cross-sectional problems were solved for materials with anisotropic and inhomogeneous properties using VABS, a novel finite-element-based code for beam cross-sectional analysis, and this work was extended to piezoelectric materials, which ultimately led to the development of VABS and UM/VABS. Hodges and his co-workers introduced many generalizations to cross-sectional analysis. Subsequently, using VAM, an effective plate model unifying the homogenization procedure and the dimensional reduction process was established; to deal with realistic heterogeneous plates, VAPAS, which is based on the finite element technique, was implemented. This work has been extended to the analysis of laminated composite plates. VAM is also used to develop the Variational Asymptotic Method for Unit Cell Homogenization (VAMUCH) for heterogeneous materials.
Procedure
For structural applications such as beams, the procedure begins with the 3-dimensional analysis and mathematically divides it into a 2-dimensional cross-sectional analysis and a 1-dimensional beam analysis. From the cross-sectional analysis, the 1-dimensional constitutive law is obtained and provided as an input to the 1-dimensional beam analysis. Closed-form analytical expressions for the warping functions, along with a set of recovery relations, can then be obtained to express the 3-D displacements, strains and stresses. For plates and shells, the 3-dimensional problem splits into a 1-dimensional through-the-thickness analysis and a 2-dimensional plate/shell analysis. The 2-D constitutive law obtained from the thickness analysis is then provided as input to the 2-dimensional analysis. Subsequently, recovery relations can be formed which give the 3-D displacements, strains and stresses.
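Schematically, the reduction can be pictured as an expansion of the strain energy functional in the small parameter (this is an illustrative sketch of the general idea, not a formula quoted from the VAM literature); for a beam the small parameter is the ratio of the cross-sectional size h to the characteristic wavelength of deformation l, and the unknown warping w is found order by order by making the truncated functional stationary:

```latex
\[
  \Pi[w] \;=\; \Pi_0[w] \;+\; \frac{h}{l}\,\Pi_1[w]
          \;+\; \Bigl(\frac{h}{l}\Bigr)^{2}\Pi_2[w] \;+\; \cdots,
  \qquad \frac{h}{l}\ll 1,
\]
\[
  \delta_w\,\Pi_0 = 0 \;\Rightarrow\; w^{(0)},\qquad
  \delta_w\!\left(\Pi_0 + \tfrac{h}{l}\,\Pi_1\right) = 0 \;\Rightarrow\; w^{(1)},\;\dots
\]
% Substituting the asymptotically correct warping back into the functional yields
% the reduced 1-D (beam) or 2-D (plate/shell) constitutive law used above.
```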
Advantages
• No ad hoc kinematic assumptions are required
• VAM is fully physics based and developed by neglecting the smaller energy contributions
• This method is capable of capturing non-classical nonlinear effects automatically
• Asymptotic expansions are applied to the functional instead of to the differential equations, which leads to fewer errors
• Mathematically rigorous theory, yet engineer-friendly end results
• VAM is an efficient method and the obtained results are accurate
• Implementation of VAM allows the use of analytical and/or numerical approaches
• Right tool for the validation of asymptotic correctness by comparison with other theories
Applications
VAM is extensively applied to structural problems such as beams, plates and shells to find the stresses and strains as stationary points of a strain energy functional based on small parameters. In those structural problems, width and height are the small parameters for beams and thickness is the small parameter for plates and shells. In fact, small parameters (geometric and/or physical) are not limited to the above-mentioned ones and can be chosen based on the specific application of the defined problem.
In macro mechanics, VAM is applied to the dimensional reduction of beams, plates, shells and multifunctional structures, where a considerable number of small parameters exists. In micro mechanics, VAM is capable of supporting the design and analysis of composites, where fiber and matrix are involved. This methodology is applied not only to linear elastic materials that are isotropic in nature but also to different kinds of hyperelastic materials that are orthotropic in nature, where hyperelastic materials play an important role in the application of bio-implants, the study of soft tissue behavior, high-altitude airships etc., and the materials have geometric and material nonlinearities. In addition, this method is applicable to different types of materials such as dielectric materials, multi-functional composite materials, energy harvesting materials etc. This approach can be used in aerospace structural analysis, fabric design and analysis, automotive industries etc. It can handle various types of analyses such as static, dynamic, multi-physics, buckling and modal problems.
Subsequently, various computer codes have been developed on the basis of the Variational Asymptotic Method, such as Variational Asymptotic Beam Sectional Analysis (VABS), Variational Asymptotic Plate and Shell Analysis (VAPAS), Dynamic Variational Asymptotic Plate and Shell Analysis (DVAPAS) etc. These computer-based programs are well established and validated for commercial applications and are extensively used to analyze the behavior of composite structures. These various VAM-based developments culminated in the formalization of the mechanics of the structure genome (MSG) as a general framework for multiscale constitutive modeling of composite structures and materials, embodied in the code SwiftComp.
References
Continuum mechanics | Variational asymptotic method | [
"Physics"
] | 1,322 | [
"Classical mechanics",
"Continuum mechanics"
] |
43,927,242 | https://en.wikipedia.org/wiki/Hierarchical%20closeness | Hierarchical closeness (HC) is a structural centrality measure used in network theory or graph theory. It is extended from closeness centrality to rank how centrally located a node is in a directed network. While the original closeness centrality of a directed network considers the most important node to be that with the least total distance from all other nodes, hierarchical closeness evaluates the most important node as the one which reaches the most nodes by the shortest paths. The hierarchical closeness explicitly includes information about the range of other nodes that can be affected by the given node. In a directed network where is the set of nodes and is the set of interactions, hierarchical closeness of a node ∈ called was proposed by Tran and Kwon as follows:
where:
N_R(i) is the reachability of node i, i.e. the number of nodes j ∈ V for which a path from i to j exists, and
C_clo(i) is the normalized form of the original closeness (Sabidussi, 1966). It can use a variant definition of closeness as follows: C_clo(i) = (1/(N − 1)) Σ_{j ∈ V, j ≠ i} 1/d(i, j), where d(i, j) is the distance of the shortest path, if any, from i to j; otherwise, d(i, j) is specified as an infinite value, so that its reciprocal contributes 0.
In the formula, N_R(i) represents the number of nodes in V that are reachable from i. It can also represent the hierarchical position of a node in a directed network. Note that if N_R(i) = 0, then C_hc(i) = 0 because C_clo(i) is 0. In cases where N_R(i) > 0, the reachability is a dominant factor because N_R(i) ≥ 1 but C_clo(i) ≤ 1. In other words, the first term indicates the level of the global hierarchy and the second term presents the level of the local centrality.
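A short sketch of how the measure can be computed for a directed graph (illustrative code only; it assumes the combination of a reachability count with a normalized reciprocal-distance closeness as described above, uses the networkx library for graph handling, and the example graph is arbitrary):

```python
import networkx as nx

def hierarchical_closeness(G):
    """Reachability count plus normalized closeness for each node of a DiGraph.

    Assumes C_hc(i) = N_R(i) + C_clo(i), where N_R(i) is the number of other
    nodes reachable from i and C_clo(i) averages the reciprocal shortest-path
    distances (unreachable nodes contribute 0), so C_clo(i) lies between 0 and 1.
    """
    N = G.number_of_nodes()
    hc = {}
    for i in G.nodes:
        lengths = nx.single_source_shortest_path_length(G, i)  # hop distances from i
        lengths.pop(i, None)                                    # exclude the node itself
        n_r = len(lengths)                                      # reachability N_R(i)
        c_clo = sum(1.0 / d for d in lengths.values()) / (N - 1)
        hc[i] = n_r + c_clo
    return hc

G = nx.DiGraph([(1, 2), (2, 3), (1, 3), (3, 4)])
print(hierarchical_closeness(G))  # node 1 reaches the most nodes, so it ranks highest
```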
Application
Hierarchical closeness can be used in biological networks to rank the risk of genes to carry diseases.
References
Graph theory
Graph algorithms
Algebraic graph theory
Networks
Network analysis
Network theory | Hierarchical closeness | [
"Mathematics"
] | 328 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Network theory",
"Mathematical relations",
"Algebra",
"Algebraic graph theory"
] |
43,930,593 | https://en.wikipedia.org/wiki/C3H4S2 | {{DISPLAYTITLE:C3H4S2}}
The molecular formula C3H4S2 may refer to:
Dithioles
1,2-Dithiole, a type of heterocycle with the parent 1,2-dithiacyclopentene
1,3-Dithiole, a type of heterocycle with the parent 1,3-dithiacyclopentene | C3H4S2 | [
"Chemistry"
] | 88 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
43,931,157 | https://en.wikipedia.org/wiki/Cross-reactive%20carbohydrate%20determinants | Cross-reactive carbohydrate determinants (CCDs) play a role in the context of allergy diagnosis. The terms CCD or CCDs describe protein-linked carbohydrate structures responsible for the phenomenon of cross-reactivity of sera from allergic patients towards a wide range of allergens from plants and insects. In serum-based allergy diagnosis, antibodies of the IgE class directed against CCDs therefore give the impression of polysensitization. Anti-CCD IgE, however, does not seem to elicit clinical symptoms. Diagnostic results caused by CCDs are therefore regarded as false positives.
Structural basis
When in 1981 Rob Aalberse from the University of Amsterdam noticed the enormous cross-reactivity of some patients' sera against virtually any plant and even insects, notably insect venoms, it took ten years to arrive at a possible structural explanation of this phenomenon. In 1991, Japanese researchers determined the structure of the epitope common to horseradish peroxidase and Drosophila neurons as being an asparagine-linked oligosaccharide (N-glycan) containing a xylose and a core-linked α1,3-linked fucose residue. These structural features are not present in humans and animals. Core α1,3-fucose was then found to be relevant for the binding of patients' IgE to honeybee venom allergens, which contain N-glycans with structural similarities to plant N-glycans. Ever since then, core α1,3-fucose emerged as the structural element most relevant as a CCD in plants and insect allergens. Much later, both xylose and core α1,3-fucose were revealed as the central elements of two independent glycan epitopes for rabbit IgG. The occurrence of human anti-xylose IgE, however, has not been verified so far. Still, because of the two possible epitopes and the different carrier structures, the plural CCDs is in frequent use even though core α1,3-fucose appears to be the single culprit.
Clinical and diagnostic relevance
IgE antibodies against plant/insect CCD determinants were shown to have both strict specificity and high affinity, so in principle they might be expected to lead to clinical symptoms just as is usual for anti-peptide IgE. In vitro experiments (histamine-release tests) with polyvalent glyco-allergens corroborated this view. Provocation tests with patients as well as empirical evidence, however, indicate that CCDs never cause any ponderable allergic symptoms. It is assumed that the frequent contact with CCD-containing foods induces tolerance akin to a specific immune therapy.
Other CCDs
While α-galactose as a part of glycoprotein glycans from vertebrates other than higher apes was known for a long time as being a prominent xeno-antigen, its implication in allergy only began to materialize when complications during treatment with a recombinant monoclonal antibody (Erbitux) were attributed to IgE directed against α-Gal-containing N-glycans on this antibody. The incidences of anaphylaxis due to Erbitux were confined to a certain area in the eastern United States, which raised speculations about the involvement of a particular type of tick endemic in this area. However, IgE antibodies against the α-Gal epitope should be taken into account in the diagnosis of milk and meat allergy. It is currently largely unexplored whether this type of CCD is also generally clinically irrelevant like the plant/insect CCDs. The very localized case of Erbitux complications points at a possible, if rare, clinical significance of α-Gal.
Yet other potentially immunogenic carbohydrates with widespread occurrence, such as N-glycolylneuraminic acid, which does not occur in humans, or plant O-glycans (arabinogalactans and arabinans), may be mentioned but have so far not qualified as either IgE targets or as cross-reactive determinants.
References
Allergology
Carbohydrate chemistry
Immune system | Cross-reactive carbohydrate determinants | [
"Chemistry",
"Biology"
] | 881 | [
"Immune system",
"Organ systems",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
43,932,267 | https://en.wikipedia.org/wiki/Staub-Traugott%20Phenomenon | The Staub-Traugott Phenomenon (or Staub-Traugott Effect) is the premise that a normal subject fed glucose will rapidly return to normal levels of blood glucose after an initial spike, and will see improved reaction to subsequent glucose feedings.
History
A. T. B. Jacobson determined in 1913 that carbohydrate ingestion results in blood glucose fluctuations. Hamman and Hirschman first reported improvement of carbohydrate tolerance following repeated glucose administration in 1919. H. Staub in 1921 and K. Traugott in 1922 subsequently confirmed the improved reaction in healthy subjects and the phenomenon was named for them. As this effect does not occur in diabetic subjects, it became the basis for the Glucose Tolerance Test.
Mechanism
Abraira and Lawrence describe the original discovery as being that "when glucose loads are given in succession, orally or intravenously, significant and progressive improvement in glucose tolerance will occur in normal and nonketotic diabetic subjects. This facilitated disposal of a glucose load is known as the Staub-Traugott phenomenon."
This phenomenon drew considerable interest as it was demonstrated that the ingested glucose was still being processed by the gut at the same rate while being cleared much more rapidly in the bloodstream. "It is not surprising that when a large amount of readily diffusible glucose is suddenly introduced into the alimentary tract the rate of absorption should exceed the rate at which the tissues can abstract it from the blood. But it is not so clear why the curve should again fall to normal as rapidly as it often does at a time when the rate of absorption from the gut can be scarcely diminished."
Various mechanisms were hypothesized involving the liver and insulin. It was determined in 2009 that "enhanced potentiation of insulin response and increased suppression of hepatic glucose production are the main mechanisms underlying the Staub-Traugott effect", meaning that the liver slows its release of glucose into the bloodstream and the existing insulin becomes better at clearing glucose from the bloodstream with each dose of glucose administered.
Exceptions and limitations
This effect has been observed to disappear under conditions of starvation and in hypopituitary patients.
Attempts to base dietary and nutrition advice on this effect have met with limited success.
References
Blood tests
Diabetes-related tests | Staub-Traugott Phenomenon | [
"Chemistry"
] | 485 | [
"Blood tests",
"Chemical pathology"
] |
43,933,585 | https://en.wikipedia.org/wiki/Passive%20survivability | Passive survivability refers to a building's ability to maintain critical life-support conditions in the event of extended loss of power, heating fuel, or water. This idea proposes that designers should incorporate ways for a building to continue sheltering inhabitants for an extended period of time during and after a disaster situation, whether it be a storm that causes a power outage, a drought which limits water supply, or any other possible event.
The term was coined by BuildingGreen President and EBN Executive editor Alex Wilson in 2005 in the wake of Hurricane Katrina. Passive survivability is suggested to become a standard in the design criteria for houses, apartment buildings, and especially buildings used as emergency shelters. While many of the strategies considered to achieve the goals of passive survivability are not new concepts and have been widely used in green building over the decades, the distinction comes from the motivation for moving towards resilient and safe buildings.
Current issues
The increase in duration, frequency, and intensity of extreme weather events due to climate change exacerbates the challenges that passive survivability tries to address. Climates that did not previously need cooling are now seeing warmer temperatures and a need for air conditioning. Sea level rise and storm surge increases the risk of flooding in coastal locations, while precipitation-based flooding is an issue in low-lying areas. In order for buildings to provide livable conditions at all times, potential threats must be realized.
Power outages
In much of the developed world, there is a heavy reliance on a grid for power and gas. These grids are the main source of energy for many societies, and while they generally do not get interrupted, they are constantly prone to events that may cause disruption, such as natural disasters. In California, there have even been intentional power outages as a preventative measure in response to wildfires caused by power lines. When a power outage occurs, most mechanical heating and cooling can no longer operate. The aim of passive survivability is to be prepared for when such an event may occur, and maintain safe indoor temperatures. While back-up generators can provide some power during an outage, it is often not enough for heating and cooling needs or adequate lighting.
Extreme temperature
Heat is the leading cause of weather-related death in the US. Heat waves coinciding with power outages put many lives at risk due to the inability of a building to keep temperatures down. Even without a power outage, lack of access to air conditioning or lack of funds to pay for electricity also highlights the need for passive ways to maintain a livable thermal environment. One of the issues that passive survivability addresses is the many ways to maintain the thermal resistance of a building skin so that a room does not become unbearably hot when access to standard temperature-regulating systems is lost.
In the winter months, power outages or lack of a fuel source for heat pose a threat when there are cold fronts. Leaky construction and poor insulation result in rapid heat loss, causing indoor temperatures to fall.
Drought
During a drought, the limited water supply means a community must get by using less, which may mean mandatory restrictions on water use. Extended dry spells can instigate wildfires, which add a heightened level of devastation. Drying clay soil can cause critical water mains to burst and damage homes and infrastructure. Droughts can also cause power-outages in areas where thermo-electric power plants are the main source of electricity. Water-efficient appliances and landscaping is crucial in water-scarce locations.
Natural disasters
Natural disasters such as hurricanes, earthquakes, tornadoes, and other storm events can result in destruction of infrastructure that provides key electricity, water, and energy sources. Flooding after extreme precipitation is a major threat to buildings and utilities. The resulting electricity or water shortages can pose more of a threat than the event itself, often lasting longer than the initial disaster.
Terrorist threats
Terrorist threats and cyberterrorism can also cause an interruption in power supply. Attacks on central plants or major distribution segments, or hacking of a utility grid’s control system are possible threats that could cut off electricity, water, or fuel.
Passive design strategies
There are many passive strategies that require no electricity but instead can provide heating, cooling, and lighting for a building through proper design. In envelope-dominated buildings, the climate and surroundings have a greater effect on the interior of the structure due to a high surface area to volume ratio and minimal internal heat sources. Internally dominated buildings, such as the typical office building, are more affected by internal heat sources like equipment and people, however the building envelope still plays an important role, especially during a power outage.
While the distinction between the two types of buildings can sometimes be unclear, all buildings have a balance point temperature that is a result of building design and function. Balance point temperature is the outdoor temperature under which a building requires heating. An internally dominated structure will have a lower balance point temperature because of more internal heat sources, which means a longer overheated period and shorter under-heated period. Achieving a livable thermal environment during a power outage is dependent on the balance point temperature, as well as the interaction with the surrounding environment. A key aspect of all design for passive survivability is climate-responsive design. Passive strategies should be chosen based on climate and local conditions, in addition to building function.
Thermal envelope
When a building has leaky construction or poor insulation, desired heat is lost in the winter and conditioned air is lost in the summer. This loss is accounted for by pumping more mechanical heating or cooling into the building to make up the difference. Since this strategy is obsolete during a power outage, the building should be able to maintain internal temperatures for longer periods of time. To avoid heat loss by infiltration, the thermal envelope should be constructed with minimal breaks and joints, and cracks around windows and doors should be sealed. The air tightness of a building can be tested using a blower-door test.
Heat is also lost by transmission through the many surfaces in a room, including walls, windows, floors, ceilings, and doors. The area and thermal resistance of the surface, as well as the temperature difference between indoors and outdoors, determines the rate of heat loss. Continuous insulation with high R-values reduces heat loss by transmission in walls and ceilings. Double and triple-pane windows with special coatings reduce loss through windows. The practice of superinsulation greatly reduces heat loss through high levels of thermal resistance and air tightness.
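As a rough illustration of the transmission-loss relationship described above (a simplified steady-state sketch; the areas, R-values, and temperatures below are made-up illustrative numbers, not data from any particular building), the heat loss of each surface is its area times the indoor–outdoor temperature difference divided by its thermal resistance:

```python
# Steady-state transmission heat loss: Q = A * dT / R  (imperial units: ft^2, deg F,
# and R in ft^2*degF*h/BTU, so Q is in BTU/h).  All values below are illustrative only.
surfaces = {
    # name: (area_ft2, R_value)
    "walls":   (1200, 20),
    "roof":    (1000, 49),
    "windows": (200, 3),
    "doors":   (40, 5),
}

indoor_f, outdoor_f = 70, 10
dT = indoor_f - outdoor_f

total = 0.0
for name, (area, r_value) in surfaces.items():
    q = area * dT / r_value          # BTU/h lost through this surface
    total += q
    print(f"{name:8s} {q:8.0f} BTU/h")

print(f"total    {total:8.0f} BTU/h")
# Higher R-values (superinsulation) and smaller temperature differences both
# reduce Q, which is why a tight, well-insulated envelope cools down slowly
# during a winter power outage.
```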
Passive solar
The ability to passively heat a building is beneficial during the colder winter months to help keep temperature levels up. Passive solar systems collect and distribute energy from the sun without the use of mechanical equipment such as fans or pumps. Passive solar heating consists of equator-facing glazing (south-facing in the northern hemisphere) to collect solar energy and thermal mass to store the heat. A direct-gain system allows short-wave radiation from the sun to enter a room through the window, where the floor and wall surfaces then act as thermal mass to absorb the heat, and the long-wave radiation is trapped inside due to the greenhouse effect. Proper glazing to thermal mass ratios should be used to prevent overheating and provide adequate heating. A Trombe wall or indirect gain system places the thermal mass right inside the glazing to collect heat during the day for night-time use due to time-lag of mass. This method is useful if daylighting is not required, or can be used in combination with direct-gain. A third technique is a sunspace or isolated gain system, which collects solar energy in a separate space attached to the building, and which can double as a living area for most of the year.
Heat avoidance
Heat avoidance strategies can be used to reduce cooling needs during the overheated periods of the year. This is achieved largely though shading devices and building orientation. In the northern hemisphere, windows should primarily be placed on southern facades which receive the most sun during the winter, while windows on east and west facades should be avoided due to difficulty to shade and high solar radiation during the summer. Fixed overhangs can be designed that block the sun during the overheated periods and allow the sun during the under-heated periods. Movable shading devices are most appropriate due to their ability to respond to the environment and building needs. Using light colors on roofs and walls is another effective strategy to reduce heat gain by reflecting the sun.
Natural ventilation
Natural ventilation can be used to increase thermal comfort during warmer periods. There are two main types of natural ventilation: comfort ventilation and night-flush cooling. Comfort ventilation brings in outside air to move over skin and increase the skin’s evaporative cooling, creating a more comfortable thermal environment. The temperature does not necessarily decrease unless the outdoor temperature is lower than the indoor temperature, however the air movement increases comfort. This technique is especially useful in humid climates. When the wind is not blowing, a solar chimney can increase ventilation flow by using the sun to increase buoyancy of air.
Night-flush cooling utilizes the cool nighttime air to flush the warm air out of the building and lower the indoor temperature. The cooled structure then acts as a heat sink during the day, when bringing the warm outdoor air in is avoided. Night-flush cooling is most effective in locations that have large diurnal temperature ranges, such as in hot and dry climates. With both techniques, providing operable windows alone does not result in adequate natural ventilation; the building must be designed for proper airflow.
Daylighting
When the power goes out, rooms at the center of a building typically receive little to no light. Designing a building to take advantage of natural daylight instead of relying on electric lighting will make it more resilient to power outages and other events. Daylighting and passive solar gain often go hand in hand, but in the summer there is a desire for “cool” daylight. Daylighting design should therefore provide adequate lighting without adding undesired heat. Direct sunlight and reflected light from the sky have different levels of radiation. The daylighting design should reflect the needs of the building in both its climate and function, and different methods can achieve that. Southern and northern windows are generally best for daylighting, and clerestories or monitors on the roof can bring daylight into the center of a building. Placing windows higher up on a wall will bring the light further into the room, and other methods like light shelves can bring light deeper into a building by reflecting light off the ceiling.
Other design strategies
The overarching goal of passive survivability is to reduce discomfort or suffering in the event that a key resource supplying a building is cut off. There are several different solutions to any one design problem. While many of the solutions presented by advocates of passive survivability have been universally accepted by passive design and other standard sustainability practices, it is important to examine these measures and apply the appropriate strategies to developing and existing buildings in order to minimize the risk of discomfort or death.
Back-up Power
Buildings should be designed to maintain survivable thermal conditions without air conditioning or supplemental heat. Providing back-up generators and adequate fuel to maintain the critical functions of a building during outages are conventional solutions to power-supply interruptions. However, unless they are very large, generators support only basic needs for a short amount of time and may not power systems such as air conditioning, lighting, or even heating or ventilation during extended outages. Back-up generators are also expensive both to buy and maintain. Storing significant quantities of fuel on-site to power generators during extended outages has inherent environmental and safety risks, particularly during storms.
Renewable energy systems can provide power during an extreme event. For example, photovoltaic (or solar electric) power systems, when coupled with on-site battery storage can provide electricity when the grid loses power. Other fuel sources like wood can provide heat if buildings are equipped with wood-burning stoves or fireplaces.
Water
Emergency water supply systems such as rooftop rainwater harvesting systems can provide water for toilet flushing, bathing, and other building needs in the event of water supply interruptions. Rain barrels or larger cisterns store water from runoff that can often use a gravity-feed to obtain the water for use. Installing composting toilets and waterless urinals ensure those facilities can continue to function regardless of the circumstance, while reducing water consumptions on a daily basis. Having backup sources of potable water on-site is also a necessity in the case of water interruption.
Passive survivability in rating systems
Leadership in Energy and Environmental Design
Leadership in Energy and Environmental Design (LEED) is a widely used green building certification in the United States. As of LEED version 4, there is a pilot credit called “Passive Survivability and Backup Power During Disruptions” under LEED BD+C: New Construction. The credit is worth up to two points, with one point awarded for providing for passive survivability and thermal safety, and one point awarded for providing backup power for critical loads. For the passive survivability point, the building must maintain thermally safe conditions during a four-day power outage during both peak summer and peak winter conditions. LEED lists three paths to compliance for thermal safety, two of which consist of thermal modelling, and the remaining path being Passive House certification.
Passive house certification
While passive survivability is not mentioned by name in the two major passive house standards, Passive House Institute and Passive House Institute US (PHIUS), the passive strategies that make these buildings so energy efficient are the same strategies outlined for passive survivability. Buildings that achieve passive house certification are hitting some of the main criteria for passive survivability, including airtight construction and superinsulation. Many buildings will also have on-site photovoltaics to offset energy consumption. These buildings that rely very little on energy will be more resilient in power outages and extreme weather.
RELi
RELi is a building and community rating system completely based on resilient design. It has been adopted by the US Green Building Council, the same body that developed LEED. The Hazard Adaptation and Mitigation category has several credits related to passive survivability. One required credit is “Fundamental Emergency Operations: Thermal Safety During Emergencies” which requires indoor temperatures to be at or below outdoor temperatures in the summer, and above 50 °F in the winter for up to four days. Another way to comply is to provide a thermal safe zone with adequate space for all building occupants. There is an optional poly-credit, “Advanced Emergency Operations: Back-Up Power, Operations, Thermal Safety & Operating Water,” that incorporates other passive survivability measures such as water storage. Another poly-credit, “Passive Thermal Safety, Thermal Comfort, & Lighting Design Strategies,” outlines more passive strategies including passive cooling, passive heating, and daylighting.
References
Further reading
Committee on the Effect of Climate Change on Indoor Air Quality and Public Health. Climate Change, the Indoor Environment, and Health. Washington, D.C.: National Academies, 2011. Print.
Kibert, Charles J. Sustainable Construction: Green Building Design and Delivery. Vol. 3rd. Hoboken, NJ: John Wiley & Sons, 2008. Print.
Pearce, Walter. "Environmental Building News Calls for "Passive Survivability"" BuildingGreen. N.p., 25 Dec. 2005. Web. 30 Sept. 2014.
Pearson, Forest. "Old Way of Seeing." : Designing Homes for Passive Survivability. Blogspot, 12 Nov. 2012. Web. 30 Sept. 2014.
Perkins, Broderick. "'Passive Survivability' Builds In Disaster Preparedness, Sustainability." RealtyTimes. N.p., 04 Jan. 2006. Web. 30 Sept. 2014.
"Passive Survivability Possible using the 'Hurriquake' Nail." Nelson Daily News: 20. Jan 07 2009. ProQuest. Web. 30 Sep. 2014 .
Wilson, Alex, and Andrea Ward. "Design for Adaptation: Living in a ClimateChanging World." Buildgreen. Web.
Wilson, Alex. "A Call for Passive Survivability." Heating/Piping/Air Conditioning Engineering : HPAC 78.1 (2006): 7,7,10. ProQuest. Web. 30 Sep. 2014.
Wilson, Alex. "Making Houses Resilient to Power Outages." GreenBuildingAdvisor.com. N.p., 23 Dec. 2008. Web. 30 Sept. 2014.
Wilson, Alex. "Passive Survivability." - GreenSource Magazine. N.p., June 2006. Web. 30 Sept. 2014.
Building engineering
Construction standards
Sustainable building | Passive survivability | [
"Engineering"
] | 3,438 | [
"Sustainable building",
"Construction standards",
"Building engineering",
"Construction",
"Civil engineering",
"Architecture"
] |
43,934,042 | https://en.wikipedia.org/wiki/Disk%20Detective | Disk Detective is the first NASA-led and funded-collaboration project with Zooniverse. It is NASA's largest crowdsourcing citizen science project aiming at engaging the general public in search of stars, which are surrounded by dust-rich circumstellar disks, where planets usually dwell and are formed. Initially launched by NASA Citizen Science Officer, Marc Kuchner, the principal investigation of the project was turned over to Steven Silverberg.
Details
Disk Detective was launched in January 2014, and was expected to continue until 2017. In April 2019, Disk Detective uploaded its partly classified subjects again, as Zooniverse stopped supporting the old platform for projects; this work was completed in May 2019. The project team began working on Disk Detective 2.0, which was then launched May 24, 2020, utilizing Zooniverse's new platform.
The project invites the public to search through images captured by NASA's Wide-field Infrared Survey Explorer (WISE) and other sky surveys. Disk Detective 1.0 compared images from the WISE mission to the Two Micron All Sky Survey (2MASS), the Digitized Sky Survey (DSS) and the Sloan Digital Sky Survey (SDSS). Version 2.0 compares WISE images to 2MASS, Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), Australia's SkyMapper telescope, and the unblurred coadds of WISE imaging (unWISE).
The images in Disk Detective have all been pre-selected to be extra bright at wavelengths where circumstellar dust emits thermal radiation. They are at mid-infrared, near-infrared and optical wavelengths. Disks are not the only heavenly objects that appear bright at infrared wavelengths; active galactic nuclei, galaxies, asteroids and interstellar dust clouds also emit at these wavelengths. Computer algorithms cannot distinguish the difference, so it is necessary to examine all images by "eye" to make sure that the selected candidates are stars with disks, and not other celestial objects.
After the initial and subsequent discovery of several Peter Pan disks—M dwarf primordial gas-rich circumstellar disk systems that retain their gas 2 to 10 times longer than that of other disks—by the Disk Detective science team, research began to understand how these unusual systems fit into disk development. On September 29, 2022, NASA announced version 2.1 of the project, releasing new data containing thousands of images of nearby stars located in young star-forming regions and to provide a better view of "extreme" debris disks—circumstellar disks that have brighter than expected luminosity—in the galactic plane. The 2.1 dataset targets stars with brightness at a wavelength of 12 μm in an effort to discover more Peter Pan disks.
Classification
At the Disk Detective website, the images are presented in animated forms which are called flip books. Each image of the flip book is formatted to focus on the subject of interest within a series of circles and crosshairs.
Website visitors—whether or not they are registered member users of Zooniverse—examine the flip book images and classify the target subjects based on simple criteria. Disk Detective 2.0 elimination criteria include whether the subject "moves" off the center crosshairs in 2MASS images only, if it moves off of crosshairs in two or more images, if the subject is not round in Pan-STARRS, SkyMapper, or 2MASS images, if it becomes extended beyond the outer circle in WISE images, and if two or more images show objects between the inner and outer circles. The ideal target is classified as a "good candidate," and is further vetted by the advanced research group into a list of "debris disk of interest" (DDOI) candidates. Particular interest is paid to good candidates that have two or more images where objects other than the subject are present within the inner circle only.
The selected disk candidates will eventually become the future targets for NASA's Hubble Space Telescope and its successor, the James Webb Space Telescope. They will also be the topic for future publications in scientific literature.
Seeking objects
The disks that NASA's scientists at the Goddard Space Flight Center aim to find are debris disks, which are older than 5 million years; and young stellar object (YSO) disks, which are younger than 5 million years.
Advanced user group
Volunteers who have registered as citizen scientists with Zooniverse can join an exclusive group on the Disk Detective project, called "advanced users" or "super users," after they have done 300 classifications. Advanced users might then further vet candidates marked as "good," compare candidate subjects with literature, or analyze follow-up data. This advanced user group is similar to other groups that have formed in citizen science projects, such as the Peas Corps in Galaxy Zoo.
Discoveries
The Disk Detective project discovered the first example of a Peter Pan disk. At the 235th meeting of the American Astronomical Society the discovery of four new Peter Pan disks was presented. Three objects are high-probability members of the Columba and Carina stellar associations. The fourth object has an intermediate likelihood of being part of a moving group. All four objects are young M dwarfs.
The project has also discovered the first debris disk with a white dwarf companion (HD 74389) and a new kind of M dwarf disk (WISE J080822.18-644357.3) in a moving group. The project found 37 new disks (including HD 74389) and four Be stars in the first paper and 213 newly-identified disk candidates in the third paper. Together with WISE J080822.18-644357.3, the Disk Detective project found 251 new disks or disk candidates. The third paper also found HD 150972 (WISEA J164540.79-310226.6) as a likely member of the Scorpius–Centaurus moving group, 12 candidates that are co-moving binaries and 31 that are closer than 125 parsec, making them possible targets for direct imaging of exoplanets.
Additionally, the project published the discovery a nearby young brown dwarf with a warm class-II type circumstellar disk, WISEA J120037.79−784508.3 (W1200−7845), located in the ε Chamaeleontis association. Found 102 parsecs (~333 lightyears) from the Sun, this puts it within the solar neighborhood, making it ideal for study since brown dwarfs are very faint due to their low masses of about 13-80 MJ. Therefore, it is within distance to observe greater details if viewed with large telescope arrays or space telescopes. W1200-7845 is also very young, with measurements putting it at about 3.7 million years old, meaning that—along with its relatively close proximity—it could serve as a benchmark for future studies of brown dwarf system formation.
A study with JWST MIRI found that the disk around WISEA J044634.16-262756.1B, which was first discovered by the Disk Detective project, has a carbon-rich disk. The study found clear evidence that the disk has long-lived primordial gas. 14 molecules were found within the disk, many of them being hydrocarbons.
False positive rate and applications
The project made estimates of the number of high-quality disk candidates in AllWISE and lower-limit false-positive rates for several catalogs, based on classification false-positive rates, follow-up imaging and literature review. Out of the 149,273 subjects on the Disk Detective website, 7.9±0.2% are likely candidates. 90.2% of the subjects are eliminated by website evaluation, 1.35% were eliminated by literature review and 0.52% were eliminated by high-resolution follow-up imaging (Robo-AO + Dupont/Retrocam). From this result, AllWISE might contain ~21,600 high-quality disk candidates, and 4-8% of the disk candidates from high-quality surveys might show background objects in high-resolution images that are bright enough to affect the infrared excess.
The project also has a database that is available through the Mikulski Archive for Space Telescopes (MAST). It contains the "goodFraction", describing how often a source was voted as a good source on the website, as well as other information about the source, such as comments from the science team, machine learned classification, cross-matched catalog information and SED fits.
A group at MIT used the Disk Detective classifications to train a machine-learning system. They found that their machine-learning system agreed with user identifications of debris disks 97% of the time. The group has found 367 promising candidates for follow-up observations with this method.
See also
Exoplanet
Kuiper belt
Planetesimal
Protoplanetary disk
T Tauri star
WISEA J120037.79-784508.3
Zooniverse projects:
Asteroid Zoo
Backyard Worlds: Planet 9
Backyard Worlds
Galaxy Zoo
The Milky Way Project
Old Weather
Planet Hunters
SETILive
References
External links
NASA's Disk Detective page
Disk Detective official website
Disk Detective Facebook page
Disk Detective Twitter page
Disk Detective project blog
Astronomy websites
Astronomy projects
Citizen science
Human-based computation
Internet properties established in 2014 | Disk Detective | [
"Astronomy",
"Technology"
] | 1,914 | [
"Works about astronomy",
"Information systems",
"Astronomy projects",
"Human-based computation",
"Astronomy websites"
] |
43,937,224 | https://en.wikipedia.org/wiki/Pan%20Jianwei | Pan Jianwei (; born 11 March 1970) is a Chinese academic administrator and quantum physicist. He is a university administrator and professor of physics at the University of Science and Technology of China. Pan is known for his work in the field of quantum entanglement, quantum information and quantum computers. In 2017, he was named one of Nature's 10, which labelled him "Father of Quantum". He is an academician of the Chinese Academy of Sciences and the World Academy of Sciences and Executive Vice President of the University of Science and Technology of China. He also serves as one of the Vice Chairman of Jiusan Society.
Early life and education
Pan was born in Dongyang, Jinhua, Zhejiang province in 1970. In 1987, he entered the University of Science and Technology of China (USTC), from which he received his bachelor's and master's degrees. He received his PhD from the University of Vienna in Austria, where he studied and worked in the group led by Nobel prize winning physicist Anton Zeilinger.
Contributions
Pan's team demonstrated five-photon entanglement in 2004. Under his leadership, the world's first quantum satellite launched successfully in August 2016 as part of the Quantum Experiments at Space Scale, a Chinese research project. In June 2017, Pan's team used their quantum satellite to demonstrate entanglement with satellite-to-ground total summed lengths between 1600km and 2400km and entanglement distribution over 1200km between receiver stations.
In 2021, Pan led a team which built quantum computers. One of the devices, named "Zuchongzhi 2.1", was claimed to be one million times faster than its nearest competitor, Google's Sycamore.
Awards and recognition
Pan was elected to the Chinese Academy of Sciences in 2011 at the age of 41, making him one of the youngest CAS academicians. He was then elected to the World Academy of Sciences in 2012 and won the International Quantum Communication Award in the same year.
In April 2014, he was appointed Vice President of the University of Science and Technology of China.
His team's work on double quantum-teleportation was selected as the Physics World "Top Breakthrough of the Year" in 2015. His team, whose members include Peng Chengzhi, Chen Yu'ao, Lu Chaoyang, and Chen Zengbing, won the State Natural Science Award (First Class) in 2015.
In 2017, the journal Nature named Pan, along with such figures as Ann Olivarius and Scott Pruitt, one of the top 10 people who made "a significant impact in science either for good or for bad", with the label "Father of Quantum" given to Pan. The same year he won the Future Science Prize.
Pan was included in Time magazine's 100 Most Influential People of 2018.
In 2019, Pan was appointed as lead editor of Physical Review Research. He also received The Optical Society's R. W. Wood Prize.
In 2020, Pan received the ZEISS Research Award.
References
1970 births
Living people
21st-century Chinese physicists
Academic staff of the University of Science and Technology of China
Chinese academic administrators
Educators from Jinhua
Members of the Chinese Academy of Sciences
Members of the Jiusan Society
Quantum physicists
People from Dongyang
Physicists from Zhejiang
Scientists from Jinhua
TWAS fellows
University of Science and Technology of China alumni
University of Vienna alumni
Fellows of the American Physical Society
Westlake University
Fellows of Optica (society) | Pan Jianwei | [
"Physics"
] | 703 | [
"Quantum physicists",
"Quantum mechanics"
] |
42,497,844 | https://en.wikipedia.org/wiki/Xbra | Xbra is a homologue of Brachyury (T) gene for Xenopus. It is a transcription activator involved in vertebrate gastrulation which controls posterior mesoderm patterning and notochord differentiation by activating transcription of genes expressed throughout mesoderm. The effects of Xbra is concentration dependent where concentration gradient controls the development of specific types of mesoderm in Xenopus. Xbra results of the expression of the FGF transcription factor, synthesized by the ventral endoderm. So while ventral mesoderm is characterized by a high concentration of FGF and Xbra, the dorsal mesoderm is characterized by a reunion of two others transcription factors, Siamois and XnR, which activates the synthesis of Goosecoid Transcription Factor. Goosecoid enables the depletion of Xbra. In a nutshell, high concentrations of Xbra induce ventral mesoderm while low concentration stimulates the formation of a back.
Posterior mesoderm development presents two types of cell behaviors, cell migration and convergent extension, in prechordal mesoderm and chordamesoderm cells, respectively. Cell migration is exhibited by the prechordal mesoderm cells, resulting in the formation of the future anterior end. Xbra induces convergent extension which inhibits cell migration and rearranges the chordamesoderm cells into a structure that will later differentiate into notochord. As a result, Xbra acts as a switch to convert between these two behaviors.
Xbra is able to activate itself indirectly, specifically for dorsal mesoderm, through FGF signaling while eFGF maintains Xbra expression, creating an autoregulatory loop.
Inhibition of Xbra leads to abnormal patterning of the mesoderm, such as a shortened trunk. In a previous study, the activation domain of Xbra was replaced by the repressor domain of the Drosophila engrailed protein in order to form a dominant-interfering Xbra construct that would help to study the function and regulation of Xbra. The injection of RNA encoding this construct led to various developmental defects such as defective blastopore closure and abnormal notochord differentiation in the developing embryo.
References
Transcription factors
Xenopus | Xbra | [
"Chemistry",
"Biology"
] | 467 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
42,500,615 | https://en.wikipedia.org/wiki/Acoustic%20harassment%20device | Acoustic harassment and acoustic deterrents are technologies used to keep animals and in some cases humans away from an area. Applications of the technology are used to keep marine mammals away from aquaculture facilities and to keep birds away from certain areas (for instance in the vicinity of airports and blueberry fields). The devices have also been employed to keep marine mammals away from fishing nets. The devices are known as acoustic harassment devices (AHDs) and acoustic deterrent devices, which are smaller AHDs or intended as an awareness tool to warn species to the presence of danger rather than as a tool of harassment at a much louder level.
While they have proven effective over the short-term, animals tend to become conditioned over time and can even be drawn to the sounds once they habituate to the lack of real danger and the presence of sustenance. Only acoustic harassment devices that cause actual pain have been found to be effective over the longer term. The devices can cause hearing damage in non-targeted species and design changes in the fishing gear, fishing methods, and fish farm design to provide a permanent solution are preferable.
History
Primitive harassment methods included firecrackers, rubber bullets, chasing animals by boat, banging pipes and seal bombs (incendiary devices). Devices emitting loud noises have also been used, including broadcasts of killer whale sounds, pingers, and acoustic buzzers. These often employ shrill-sounding screams broadcast between 12 and 17 kHz. Acoustic deterrent devices normally broadcast near 10 kHz at high volume. The intensity level of acoustic harassment devices has been measured at up to 194 dB re 1 μPa at 1 m, and the noise can be audible up to 50 kilometers away.
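To give a rough sense of how such source levels fall off with distance, the sketch below applies an idealized spherical-spreading loss of 20·log10(range). The 194 dB re 1 μPa source level is the measurement quoted above; neglecting absorption, refraction and bottom effects is a simplifying assumption made only for illustration.

```python
import math

def received_level(source_level_db, range_m):
    """Received level under idealized spherical spreading (20*log10(r) transmission loss),
    ignoring absorption and refraction -- a simplification for illustration only."""
    return source_level_db - 20 * math.log10(range_m)

source_level = 194.0  # dB re 1 uPa at 1 m, as measured for some harassment devices

for r in (1, 100, 1_000, 10_000, 50_000):  # metres
    print(f"{r:>6} m : {received_level(source_level, r):6.1f} dB re 1 uPa")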
Assessments
Studies of long-term effects on the marine environment have not been carried out, including damage to non-targeted species. Results of the devices are mixed, and they have proved ineffective in some circumstances, especially over the long term, while design improvements such as electric fences to keep seals from climbing into enclosures, gear modification to exclude certain species, and keeping aquaculture plants clean of dead fish have often been effective at solving the problem of keeping predatory species away. Reports indicate that in contrast to the harassment devices, the deterrent devices have been very effective in dealing with cetacean bycatch.
Recent research shows that acoustic deterrent devices intended to scare off seals do not work, but they do scare off porpoises.
A new technique called "startle technology" is currently in development. Preliminary trials conducted by the University of St. Andrews show great promise as a substitute for ADDs.
Acoustic devices and acoustic weapon use on humans
Acoustic devices have been used for military purposes including to stress enemies, as an aid in interrogation, and to create "an infrasonic sound barrier". The British Army used "Squawk Boxes" to emit ultrasonic frequencies causing various discomforts. Audio harassment was also used by the U.S. military in the Vietnam War and was famously depicted in the fictional movie Apocalypse Now as helicopters descend on the enemy with loud speakers. Operation Wandering Soul broadcast voices purported to be dead Vietcong. Other examples include the 350 watt HPS-1 Sound System that could be heard 2.5 miles away and was used on the Vatican embassy in Panama where ousted president Manuel Noriega was in refuge. At the Branch Davidian siege in Waco, Texas, loud music was broadcast.
Devices utilising the deterioration of hearing with age have been deployed to drive younger people away, e.g. The Mosquito.
See also
Acoustic Hailing Device
The Mosquito
Long Range Acoustic Device (LRAD)
Sonic weapon
Directional sound
References
Further reading
A Study Into the Effectiveness of Acoustic Harassment Devices-AHDs in Deterring Seals from Salmon Farms Around Shetland by Rachel Beacham Aberdeen University: Dissertation. M. Sc Marine and Fisheries Science
Non-lethal weapons
Acoustics | Acoustic harassment device | [
"Physics"
] | 780 | [
"Classical mechanics",
"Acoustics"
] |
42,504,282 | https://en.wikipedia.org/wiki/Heel%20effect | In X-ray tubes, the heel effect or, more precisely, the anode heel effect is a variation of the intensity of X-rays emitted by the anode depending on the direction of emission along the anode-cathode axis. X-rays emitted toward the anode are less intense than those emitted perpendicular to the cathode–anode axis or toward the cathode. The effect stems from the absorption of X-ray photons before they leave the anode in which they are produced. The probability of absorption depends on the distance the photons travel within the anode material, which in turn depends on the angle of emission relative to the anode surface.
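The geometry can be illustrated with a toy model: if photons are produced at a fixed depth below the anode surface, the path they must travel through the target before escaping lengthens as the emission direction tilts toward the anode, and the transmitted fraction falls off exponentially (Beer–Lambert attenuation). The depth, attenuation coefficient and angles below are arbitrary illustrative values, not measured data for any particular tube.

```python
import math

mu = 60.0      # linear attenuation coefficient of the target material, 1/cm (illustrative)
depth = 5e-4   # assumed photon production depth below the anode surface, cm (illustrative)

def transmitted_fraction(exit_angle_deg):
    """Fraction of photons escaping when they leave at exit_angle_deg above the anode
    surface: path length = depth / sin(angle), attenuated as exp(-mu * path)."""
    path = depth / math.sin(math.radians(exit_angle_deg))
    return math.exp(-mu * path)

# Small exit angles (rays emitted toward the anode side) see longer paths and more absorption.
for angle in (2, 5, 10, 20, 45, 90):
    print(f"exit angle {angle:>2} deg -> transmitted fraction {transmitted_fraction(angle):.3f}")
```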
Factors
Source-to-image distance (SID)
The distance from the anode (the source of X-rays) to the image receptor influences the apparent magnitude of the anode heel effect. At a short SID, the image receptor captures a wider range of X-ray intensities than an image receptor of the same size at a larger source-to-image distance.
Beam width and receptor size
Wide X-ray beams can be cropped to narrow X-ray beams by use of a beam restricting device (variable-aperture X-ray "collimator", fixed aperture, or cone). A beam that is wide along the cathode–anode axis contains a wider range of X-ray intensities than a narrow beam. In a wide beam, a large image receptor captures a wider range of X-ray intensities than a small receptor (at the same SID). Both of these factors influence the visibility of the anode heel effect. A smaller field size results in a less pronounced heel effect.
Anode angle
When the angle of the anode is large, the usable X-ray photons do not have to travel through as much of the anode material to exit the tube. This results in a much less apparent anode heel effect, though the effective focal spot size is increased.
Solutions
Almost all modern diagnostic X-ray generators exhibit heel effect. Graded intensity across the beam, generally a drawback, can be turned to advantage in some techniques by positioning the object or patient relative to the X-ray tube. For example, when imaging a foot, which is thicker at the ankle end than the toes, the toes should be positioned toward the anode and the ankle toward the cathode. Arbitrary intensity gradients can also be produced by placing a beam compensator (a wedge of homogeneous material) across the X-ray beam before exposure.
There have also been some efforts to eliminate the heel effect through automatic software adjustment of pixel values.
References
X-rays
Medical physics | Heel effect | [
"Physics"
] | 546 | [
"Applied and interdisciplinary physics",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Medical physics"
] |
54,003,498 | https://en.wikipedia.org/wiki/Axicabtagene%20ciloleucel | Axicabtagene ciloleucel, sold under the brand name Yescarta, is a medication used for the treatment for large B-cell lymphoma that has failed conventional treatment. T cells are removed from a person with lymphoma and genetically engineered to produce a specific T-cell receptor. The resulting chimeric antigen receptor T cells (CAR-Ts) that react to the cancer are then given back to the person to populate the bone marrow. Axicabtagene treatment carries a risk for cytokine release syndrome (CRS) and neurological toxicities.
Because CD19 is a pan-B-cell marker, the T cells engineered to target CD19 receptors on the cancerous B cells also affect normal B cells, with the exception of some plasma cells.
Adverse effects
Because treatment with axicabtagene carries a risk of cytokine release syndrome and neurological toxicities, the Food and Drug Administration (FDA) has mandated that hospitals be certified for its use prior to treatment of any patients.
In April 2024, the FDA label boxed warning was expanded to include T cell malignancies.
History
It was developed by California-based Kite Pharma.
Axicabtagene ciloleucel was awarded U.S. FDA breakthrough therapy designation in October 2017, for diffuse large B-cell lymphoma, transformed follicular lymphoma, and primary mediastinal B-cell lymphoma. It also received priority review and orphan drug designation.
Based on the ZUMA-1 trial, Kite submitted a biologics license application for axicabtagene in March 2017, for the treatment of non-Hodgkin lymphoma.
The FDA granted approval in October 2017 for the treatment of relapsed or refractory diffuse large B-cell lymphoma after two or more lines of systemic therapy.
In April 2022, the FDA approved axicabtagene ciloleucel for adults with large B-cell lymphoma (LBCL) that is refractory to first-line chemoimmunotherapy or relapses within twelve months of first-line chemoimmunotherapy. It is not indicated for the treatment of patients with primary central nervous system lymphoma.
Approval was based on ZUMA-7, a randomized, open-label, multicenter trial in adults with primary refractory LBCL or relapse within twelve months following completion of first-line therapy. Participants had not yet received treatment for relapsed or refractory lymphoma and were potential candidates for autologous hematopoietic stem cell transplantation (HSCT). A total of 359 participants were randomized 1:1 to receive a single infusion of axicabtagene ciloleucel following fludarabine and cyclophosphamide lymphodepleting chemotherapy or to receive second-line standard therapy, consisting of two or three cycles of chemoimmunotherapy followed by high-dose therapy and autologous HSCT in participants who attained complete remission or partial remission. In the ZUMA-7 trial, patients treated with axicabtagene ciloleucel had superior clinical outcomes compared with the previous standard of care, including improved overall survival with an estimated 4-year overall survival rate of 54.6% for axicabtagene ciloleucel, compared with 46% for the previous standard of care.
In January 2023, the National Institute for Health and Care Excellence (NICE) recommended axicabtagene ciloleucel to treat adult patients with diffuse large B-cell lymphoma (DLBCL) or primary mediastinal large B-cell lymphoma (PMBCL) who have already been treated with two or more systemic therapies.
Society and culture
Names
Axicabtagene ciloleucel is the international nonproprietary name.
References
External links
Cancer treatments
Approved gene therapies
CAR T-cell therapy
Gilead Sciences
Orphan drugs
Antineoplastic drugs | Axicabtagene ciloleucel | [
"Biology"
] | 849 | [
"Cell therapies",
"CAR T-cell therapy"
] |
52,657,328 | https://en.wikipedia.org/wiki/Bayesian%20model%20of%20computational%20anatomy | Computational anatomy (CA) is a discipline within medical imaging focusing on the study of anatomical shape and form at the visible or gross anatomical scale of morphology.
The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, including medical imaging, neuroscience, physics, probability, and statistics. It focuses on the anatomical structures being imaged, rather than the medical imaging devices.
The central focus of the sub-field of computational anatomy within medical imaging is mapping information across anatomical coordinate systems, most often dense information measured within a magnetic resonance image (MRI). The introduction of flows into CA, which are akin to the equations of motion used in fluid dynamics, exploits the notion that dense coordinates in image analysis follow the Lagrangian and Eulerian equations of motion. In models based on Lagrangian and Eulerian flows of diffeomorphisms, the constraint is associated with topological properties, such as open sets being preserved, coordinates not crossing (implying uniqueness and existence of the inverse mapping), and connected sets remaining connected. The use of diffeomorphic methods grew quickly to dominate the field of mapping methods following Christensen's original paper, with fast and symmetric methods becoming available.
The main statistical model
The central statistical model of Computational Anatomy in the context of medical imaging has been the source-channel model of Shannon theory; the source is the deformable template of images , the channel outputs are the imaging sensors with observables (see Figure). The importance of the source-channel model is that the variation in the anatomical configuration is modelled separately from the sensor variations of the medical imagery. The Bayes theory dictates that the model is characterized by the prior on the source, on , and the conditional density on the observable
conditioned on .
In deformable template theory, the images are linked to the templates, with the deformations a group which acts on the template;
see group action in computational anatomy
For the image action, the prior on the group induces the prior on images; written as densities, the log-posterior takes the form
The random orbit model which follows specifies how to generate the group elements and therefore the random spray of objects which form the prior distribution.
The random orbit model of computational anatomy
The random orbit model of Computational Anatomy first appeared in modelling the change in coordinates associated to the randomness of the group acting on the templates, which induces the randomness on the source of images in the anatomical orbit of shapes and forms and resulting observations through the medical imaging devices. Such a random orbit model in which randomness on the group induces randomness on the images was examined for the Special Euclidean Group for object recognition in which the group element
was the special Euclidean group in.
For the study of deformable shape in CA, the high-dimensional diffeomorphism groups used in computational anatomy are generated via smooth flows which satisfy the Lagrangian and Eulerian specification of the flow fields satisfying the ordinary differential equation:

dφ_t/dt = v_t(φ_t), φ_0 = id,
with the vector fields on termed the Eulerian velocity of the particles at position of the flow. The vector fields are functions in a function space, modelled as a smooth Hilbert space with the vector fields having 1-continuous derivative . For , the inverse of the flow is given by
and the Jacobian matrix for flows in given as
To ensure smooth flows of diffeomorphisms with inverse, the vector fields must be at least 1-time continuously differentiable in space which are modelled as elements of the Hilbert space using the Sobolev embedding theorems so that each element has 3-square-integrable derivatives. Thus embed smoothly in 1-time continuously differentiable functions. The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm:
where with a linear operator defining the norm of the RKHS. The integral is calculated by integration by parts when is a generalized function in the dual space .
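As a purely illustrative aside (not part of the model above), the following one-dimensional sketch integrates a flow of the form dφ_t/dt = v_t(φ_t), φ_0 = id with a forward Euler scheme. The particular velocity field, grid and step count are arbitrary choices made only to show that a smooth, small velocity field yields an order-preserving (invertible) map on the grid; it is not an implementation of the LDDMM machinery discussed in this article.

```python
import numpy as np

# Grid on [0, 1]; a smooth, boundary-vanishing velocity field (illustrative choice).
x = np.linspace(0.0, 1.0, 201)

def velocity(points, t):
    """Smooth time-varying velocity field v_t evaluated at particle positions (toy example)."""
    return 0.2 * np.sin(np.pi * points) * (1.0 - t)

# Forward Euler integration of d(phi)/dt = v_t(phi_t), phi_0 = identity.
phi = x.copy()
n_steps = 100
dt = 1.0 / n_steps
for k in range(n_steps):
    phi = phi + dt * velocity(phi, k * dt)

# Monotonicity of phi indicates the map stayed invertible (diffeomorphic) on the grid.
print("monotone (invertible on grid):", bool(np.all(np.diff(phi) > 0)))
print("endpoints preserved:", float(phi[0]), float(phi[-1]))
```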
Riemannian exponential
In the random orbit model of computational anatomy, the entire flow is reduced to the initial condition which forms the coordinates encoding the diffeomorphism. From the initial condition then geodesic positioning with respect to the Riemannian metric of Computational anatomy solves for the flow of the Euler-Lagrange equation.
Solving the geodesic from the initial condition is termed the Riemannian-exponential, a mapping at identity to the group.
The Riemannian exponential satisfies for initial condition , vector field dynamics ,
for classical equation diffeomorphic shape momentum , , then
for generalized equation, then ,
It is extended to the entire group,
The accompanying figure depicts the random orbits around each exemplar, generated by randomizing the flow by generating the initial tangent space vector field at the identity, and then generating the random object.
Shown in the figure on the right is a cartoon orbit: a random spray of the subcortical manifolds generated by randomizing the vector fields supported over the submanifolds. The random orbit model induces the prior on shapes and images conditioned on a particular atlas . For this, the generative model generates the mean field as a random change in coordinates of the template, where the diffeomorphic change in coordinates is generated randomly via the geodesic flows.
MAP estimation in the multiple-atlas orbit model
The random orbit model induces the prior on shapes and images conditioned on a particular atlas . For this the generative model generates the mean field as a random change in coordinates of the template according to , where the diffeomorphic change in coordinates is generated randomly via the geodesic flows. The prior on random transformations on is induced by the flow , with constructed as a Gaussian random field prior . The density on the random observables at the output of the sensor are given by
Maximum a posteriori (MAP) estimation is central to modern statistical theory. Parameters of interest take many forms including (i) disease type, such as neurodegenerative or neurodevelopmental diseases, (ii) structure type, such as cortical or subcortical structures in problems associated with segmentation of images, and (iii) template reconstruction from populations. Given the observed image , MAP estimation maximizes the posterior:
This requires computation of the conditional probabilities . The multiple-atlas orbit model randomizes over the denumerable set of atlases . The model on images in the orbit takes the form of a multi-modal mixture distribution
The conditional Gaussian model has been examined heavily for inexact matching in dense images and for landmark matching.
Dense image matching
The image is modelled as a conditionally Gaussian random field with mean field . For uniform variance the endpoint error term plays the role of the log-conditional (only a function of the mean field), giving the endpoint term:
Landmark matching
Model as conditionally Gaussian with mean field , constant noise variance independent of landmarks. The log-conditional (only a function of the mean field) can be viewed as the endpoint term:
MAP segmentation based on multiple atlases
The random orbit model for multiple atlases models the orbit of shapes as the union over multiple anatomical orbits generated from the group action of diffeomorphisms, , with each atlas having a template and a predefined segmentation field , incorporating the parcellation into anatomical structures of the coordinates of the MRI. The pairs are indexed over the voxel lattice with an MRI image and a dense labelling of every voxel coordinate. The anatomical labelling of parcellated structures consists of manual delineations by neuroanatomists.
The Bayes segmentation problem is: given the measurement with mean field and parcellation, the anatomical labelling must be estimated for the measured MRI image. The mean-field of the observable image is modelled as a random deformation from one of the templates, which is also randomly selected. The optimal diffeomorphism is hidden and acts on the background space of coordinates of the randomly selected template image. Given a single atlas , the likelihood model for inference is determined by the joint probability ; with multiple atlases, the fusion of the likelihood functions yields the multi-modal mixture model with the prior averaging over models.
The MAP estimator of segmentation is the maximizer given , which involves the mixture over all atlases.
The quantity is computed via a fusion of likelihoods from multiple deformable atlases, with being the prior probability that the observed image evolves from the specific template image .
The MAP segmentation can be iteratively solved via the expectation–maximization algorithm
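The following is a schematic sketch of multi-atlas label fusion in the spirit of the mixture described above: each already-registered atlas votes for a label at every voxel, weighted by a per-atlas posterior weight, and the MAP label is the argmax of the fused votes. The function name, the toy label maps and the weight values are illustrative assumptions; this is not the EM-based procedure of the cited work.

```python
import numpy as np

def fuse_labels(atlas_labels, atlas_weights, n_classes):
    """Schematic multi-atlas label fusion: weighted voting over registered atlas
    parcellations, returning the per-voxel argmax label.

    atlas_labels  : list of int arrays (one label map per atlas, same shape)
    atlas_weights : list of floats, e.g. normalized values of p(atlas | observed image)
    """
    votes = np.zeros(atlas_labels[0].shape + (n_classes,))
    for labels, w in zip(atlas_labels, atlas_weights):
        for c in range(n_classes):
            votes[..., c] += w * (labels == c)
    return np.argmax(votes, axis=-1)

# Toy example: three 4x4 atlas parcellations with two structures (0 = background, 1 = structure).
rng = np.random.default_rng(0)
atlases = [rng.integers(0, 2, size=(4, 4)) for _ in range(3)]
weights = [0.5, 0.3, 0.2]   # illustrative p(atlas | observed image) values
print(fuse_labels(atlases, weights, n_classes=2))
```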
MAP estimation of volume templates from populations and the EM algorithm
Generating templates empirically from populations is a fundamental operation ubiquitous to the discipline.
Several methods based on Bayesian statistics have emerged for submanifolds and dense image volumes.
For the dense image volume case, given the observable the problem is to estimate the template in the orbit of dense images . Ma's procedure takes an initial hypertemplate as the starting point and models the template in the orbit under an unknown, to-be-estimated diffeomorphism , with the parameters to be estimated being the log-coordinates determining the geodesic mapping of the hyper-template .
In the Bayesian random orbit model of computational anatomy the observed MRI images are modelled as a conditionally Gaussian random field with mean field , with a random unknown transformation of the template. The MAP estimation problem is to estimate the unknown template given the observed MRI images.
Ma's procedure for dense imagery takes an initial hypertemplate as the starting point, and models the template in the orbit under the unknown to be estimated diffeomorphism . The observables are modelled as conditional random fields, a random field with mean field . The unknown variable to be estimated explicitly by MAP is the mapping of the hyper-template , with the other mappings considered as nuisance or hidden variables which are integrated out via the Bayes procedure. This is accomplished using the expectation–maximization algorithm.
The orbit model is exploited by associating the unknown, to-be-estimated flows to their log-coordinates via the Riemannian geodesic logarithm and exponential for computational anatomy: the initial vector field in the tangent space at the identity, so that , with the mapping of the hyper-template.
The MAP estimation problem becomes
The EM algorithm takes as complete data the vector-field coordinates parameterizing the mapping and computes iteratively the conditional expectation
Compute new template maximizing Q-function, setting
Compute the mode-approximation for the expectation updating the expected-values for the mode values:
References
Bayesian estimation
Computational anatomy
Geometry
Fluid mechanics
Neural engineering
Biomedical engineering | Bayesian model of computational anatomy | [
"Mathematics",
"Engineering",
"Biology"
] | 2,174 | [
"Biological engineering",
"Biomedical engineering",
"Civil engineering",
"Geometry",
"Fluid mechanics",
"Medical technology"
] |
39,723,835 | https://en.wikipedia.org/wiki/Atmospheric%20optics%20ray-tracing%20codes | Atmospheric optics ray tracing codes - this article lists codes for light scattering that use the ray-tracing technique to study atmospheric optics phenomena such as rainbows and halos. The scattering particles can be large raindrops or hexagonal ice crystals. Such codes are one of many approaches to calculations of light scattering by particles.
Geometric optics (ray tracing)
Ray tracing techniques can be applied to study light scattering by spherical and non-spherical particles under the condition that the size of the particle is much larger than the wavelength of light. The light can be considered as a collection of separate rays, with the width of the rays much larger than the wavelength but smaller than the particle. Rays hitting the particle undergo reflection, refraction and diffraction. These rays exit in various directions with different amplitudes and phases. Such ray tracing techniques are used to describe optical phenomena such as rainbows or halos on hexagonal ice crystals for large particles.
Review of several mathematical techniques is provided in series of publications.
The 46° halo was first explained in 1679 by the French physicist Edmé Mariotte (1620–1684) as being caused by light refraction through ice crystals.
Jacobowitz in 1971 was the first to apply the ray-tracing technique to hexagonal ice crystals. Wendling et al. (1979) extended Jacobowitz's work from hexagonal ice particles of infinite length to finite length and combined the Monte Carlo technique with ray-tracing simulations.
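The ray-tracing picture can be checked against the classical prism formula: for a prism of apex angle A and refractive index n, the minimum deviation is D = 2·arcsin(n·sin(A/2)) − A. The sketch below evaluates this for the 60° and 90° face pairs of a hexagonal ice crystal, using n ≈ 1.31 as an assumed visible-light value for ice, and reproduces approximately the 22° and 46° halo radii; it is a back-of-the-envelope check, not one of the scattering codes listed here.

```python
import math

def minimum_deviation(apex_deg, n):
    """Minimum deviation (degrees) for refraction through a prism of given apex angle."""
    apex = math.radians(apex_deg)
    return math.degrees(2.0 * math.asin(n * math.sin(apex / 2.0)) - apex)

n_ice = 1.31  # approximate refractive index of ice at visible wavelengths (assumed)
for apex in (60.0, 90.0):  # face pairs of a hexagonal ice prism
    print(f"apex {apex:.0f} deg -> minimum deviation {minimum_deviation(apex, n_ice):.1f} deg")
```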
Classification
The compilation contains information about the electromagnetic scattering by hexagonal ice crystals, large raindrops, and relevant links and applications.
Codes for light scattering by hexagonal ice crystals
Relevant scattering codes
Discrete dipole approximation codes
Codes for electromagnetic scattering by cylinders
Codes for electromagnetic scattering by spheres
External links
Scatterlib - Google Code repository of light scattering codes
References
Science-related lists
Computational science
Scattering, absorption and radiative transfer (optics)
Scattering
Electromagnetic simulation software | Atmospheric optics ray-tracing codes | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 385 | [
" absorption and radiative transfer (optics)",
"Applied mathematics",
"Computational science",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
41,065,295 | https://en.wikipedia.org/wiki/Satellite%20surface%20salinity | Satellite surface salinity refers to measurements of surface salinity made by remote sensing satellites. The radiative properties of the ocean surface are exploited in order to estimate the salinity of the water's surface layer.
The depth of the water column that a satellite surface salinity measurement is sensitive to depends on the frequency (or wavelength) of the radiance that is being measured. For instance, the optical depth for seawater at the 1.413 GHz microwave frequency, used for the Aquarius mission, is about 1–2 cm.
Background
As with many passive remote sensing satellite products, satellites measure surface salinity by initially taking radiance measurements emitted by the Earth's atmosphere and ocean. If the object emitting the measured radiance is considered to be a black body, then the relationship between the object's temperature and the measured radiance can be related, at a given frequency, through the Planck function (or Planck's law).
B_ν(T) = (2hν³ / c²) · 1 / (exp(hν / (k_B·T)) − 1)

where
B_ν (the intensity or brightness) is the amount of energy emitted per unit surface per unit time per unit solid angle in the frequency range between ν and ν + dν; T is the temperature of the black body; h is the Planck constant; ν is frequency; c is the speed of light; and k_B is the Boltzmann constant.
This equation can be rewritten to express the temperature, T, in terms of the measured radiance at a particular frequency. The temperature derived from the Planck function is referred to as the brightness temperature (which see, for derivation).
For ideal black bodies, the brightness temperature is also the directly measurable temperature. For objects in nature, often called Gray Bodies, the actual temperature is only a fraction of the brightness temperature. The fraction of brightness temperature to actual temperature is defined as the emissivity. The relationship between brightness temperature and temperature can be written as:
T_b = e · T

where Tb is the brightness temperature, e is the emissivity, and T is the temperature of the surface sea water. The emissivity describes the ability of an object to emit energy by radiation. Several factors can affect the emissivity of water, including temperature, emission angle, wavelength, and chemical composition. The emissivity of sea water has been modeled as a function of its temperature, salinity, and radiant energy frequency.
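To make the gray-body relation concrete, the short sketch below computes brightness temperatures for a few emissivity values at a fixed sea surface temperature. The emissivity values are placeholders chosen only to illustrate the sensitivity; in practice they would come from a dielectric model such as Klein and Swift, given salinity and temperature.

```python
# Gray-body relation T_b = e * T for L-band emission from the sea surface.
# The emissivity values below are placeholders for illustration, not model output.
sea_surface_temp_k = 293.15   # 20 degrees Celsius

for emissivity in (0.280, 0.285, 0.290):
    t_b = emissivity * sea_surface_temp_k
    print(f"e = {emissivity:.3f} -> brightness temperature {t_b:6.2f} K")
```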
Measurement technique
Studies have shown that measurements of seawater brightness temperature at the 1.413 GHz (L-band) are sufficient to make reasonably accurate measurements of seawater surface salinity. The emissivity of seawater can be described in terms of its polarized components of emissivity as:
The above equations are governed by the Fresnel equations, the instrument viewing angle from nadir θ, and the dielectric coefficient ε. Microwave radiometers can be further equipped to measure the vertical and horizontal components of the surface seawater's brightness temperature, which relates to the horizontal and vertical components of the emissivity as:
T_b,h = e_h · T and T_b,v = e_v · T,
where refers to the brightness temperature and is simply the temperature of the surface seawater. Since the viewing angle from nadir is typically set by the remote sensing instrument, measurements of the polarized components of the brightness temperature can be related to the surface seawater's temperature and dielectric coefficient.
Several models have been proposed to estimate the dielectric constant of sea water given its salinity and temperature. The "Klein and Swift" dielectric model function is a common and well-tested model used to compute the dielectric coefficient of seawater at a given salinity, temperature, and frequency. The Klein and Swift model is based on the Debye equation and fitted with laboratory measurements of the dielectric coefficient.
Using this model, if the temperature of the seawater is known from external sources, then measurements of the brightness temperature can be used to compute the salinity of surface seawater directly. Figure 1 shows an example of the brightness temperature curves associated with sea surface salinity, as a function of sea surface temperature.
When looking at the polarized components of the brightness temperature, the spread of the brightness temperature curves will be different depending on the component. The vertical component of the brightness temperature shows a greater spread in constant salinity curves than the horizontal component. This implies a greater sensitivity to salinity in the vertical component of brightness temperature than in the horizontal.
Sources of measurement error
There are many sources of error associated with measurements of sea surface salinity:
Radiometer
Antenna
System pointing
Roughness (of sea surface)
Solar
Galactic
Rain (total liquid water)
Ionosphere
Atmosphere(other)
Sea surface temperature
Antenna gain near land and ice
Model function
Most of the error sources in the previous list stem from either standard instrument errors (antenna, system pointing, etc.) or noise from external sources in the measurement signal (solar, galactic, etc.). However, the largest error source comes from the effect of ocean surface roughness. A rough ocean surface tends to cause an increase in the measured brightness temperature as a result of multiple scattering and shadowing effects. Quantifying the influence of ocean roughness on the measured brightness temperature is crucial to making an accurate measurement. Some instruments use radar scatterometers to measure the surface roughness to account for this source of error.
List of satellite instruments measuring sea surface salinity
Soil Moisture and Ocean Salinity satellite
Aquarius (SAC-D instrument)
References
Satellite surface salinity | Satellite surface salinity | [
"Physics",
"Environmental_science"
] | 1,099 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
41,067,118 | https://en.wikipedia.org/wiki/String%20phenomenology | String phenomenology is a branch of theoretical physics that uses tools from mathematics and computer science to study the implications of string theory for particle physics and cosmology. In cosmology, string phenomenology studies, among others, implications of string theory for inflation, dark matter and dark energy. In particle physics, efforts include finding realistic or semi-realistic models of particle physics within the string theory landscape. The term "realistic" is usually taken to mean that the low energy limit of string theory yields a model which bears a resemblance to the Minimal Supersymmetric Standard Model (MSSM) or the Standard Model (SM). The latter is obtained after supersymmetry breaking or by starting from a string theory without (target space) supersymmetry. A complementary approach to studying the landscape of string theory solutions is to look at the swampland, which consists of low-energy theories that are not compatible with string theory or sometimes even any quantum theory of gravity.
See also
String cosmology
String theory landscape
Swampland
References
String theory
Physics beyond the Standard Model
Physical cosmology | String phenomenology | [
"Physics",
"Astronomy"
] | 224 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Particle physics",
"String theory",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
41,067,402 | https://en.wikipedia.org/wiki/Numerical%20solution%20of%20the%20convection%E2%80%93diffusion%20equation | The convection–diffusion equation describes the flow of heat, particles, or other physical quantities in situations where there is both diffusion and convection or advection. For information about the equation, its derivation, and its conceptual importance and consequences, see the main article convection–diffusion equation. This article describes how to use a computer to calculate an approximate numerical solution of the discretized equation, in a time-dependent situation.
In order to be concrete, this article focuses on heat flow, an important example where the convection–diffusion equation applies. However, the same mathematical analysis works equally well to other situations like particle flow.
A general discontinuous finite element formulation is needed. The unsteady convection–diffusion problem is considered; first, the known temperature T is expanded into a Taylor series with respect to time, taking into account its three components. Next, using the convection–diffusion equation, an equation is obtained by differentiating it.
Equation
General
The following convection diffusion equation is considered here
In the above equation, the four terms represent transience, convection, diffusion and a source term, respectively, where
is the temperature in particular case of heat transfer otherwise it is the variable of interest
is time
is the specific heat
is velocity
is porosity that is the ratio of liquid volume to the total volume
is mass density
is thermal conductivity
is source term representing the capacity of internal sources
The equation above can be written in the form
where is the diffusion coefficient.
Solving the convection–diffusion equation using the finite difference method
A solution of the transient convection–diffusion equation can be approximated through a finite difference approach, known as the finite difference method (FDM).
Explicit scheme
An explicit scheme of FDM has been considered and stability criteria are formulated. In this scheme, temperature is totally dependent on the old temperature (the initial conditions) and , a weighting parameter between 0 and 1. Substitution of gives the explicit discretization of the unsteady conductive heat transfer equation.
where
is the uniform grid spacing (mesh step)
Stability criteria
These inequalities set a stringent maximum limit on the time step size and represent a serious limitation for the explicit scheme. This method is not recommended for general transient problems because the maximum possible time step has to be reduced as the square of the grid spacing.
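A minimal one-dimensional sketch of the explicit discretization (forward Euler in time, central differences in space) is given below. The velocity, diffusion coefficient, grid spacing and time step are illustrative values chosen to satisfy the usual stability restrictions (D·Δt/Δx² ≤ 1/2 and a cell Péclet number below 2); they are not taken from any particular application.

```python
import numpy as np

# Illustrative parameters for dT/dt + u dT/dx = D d2T/dx2 on [0, 1] with fixed ends.
nx, u, D = 101, 1.0, 0.01
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / D            # keeps D*dt/dx^2 <= 0.5 (explicit stability limit)

x = np.linspace(0.0, 1.0, nx)
T = np.exp(-((x - 0.3) / 0.05) ** 2)   # initial temperature pulse

for _ in range(100):
    Tn = T.copy()
    # central differences for convection and diffusion, forward Euler in time
    T[1:-1] = (Tn[1:-1]
               - u * dt * (Tn[2:] - Tn[:-2]) / (2 * dx)
               + D * dt * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dx ** 2)
    T[0], T[-1] = 0.0, 0.0           # Dirichlet boundary conditions

print("peak value:", round(float(T.max()), 3), "at x =", round(float(x[np.argmax(T)]), 2))
```

The pulse is both advected downstream and smeared out by diffusion, as expected; if dt is raised above the stability limit, the solution develops growing oscillations.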
Implicit scheme
In the implicit scheme, the temperature is dependent on the new time level . When the implicit scheme is used, all coefficients are positive, which makes the implicit scheme unconditionally stable for any size of time step. This scheme is preferred for general-purpose transient calculations because of its robustness and unconditional stability. The disadvantage of this method is that more procedures are involved and, with a larger time step, the truncation error is also larger.
Crank–Nicolson scheme
In the Crank–Nicolson method, the temperature is equally dependent on and . It is a second-order method in time and this method is generally used in diffusion problems.
Stability criteria
This time step limitation is less restrictive than that of the explicit method. The Crank–Nicolson method is based on central differencing and hence is second-order accurate in time.
Finite element solution to convection–diffusion problem
Unlike the conduction equation (where a finite element solution is used), a numerical solution for the convection–diffusion equation has to deal with the convection part of the governing equation in addition to diffusion. When the Péclet number (Pe) exceeds a critical value, spurious oscillations arise in space; this problem is not unique to finite elements, as all other discretization techniques have the same difficulty. In a finite difference formulation, the spatial oscillations are reduced by a family of discretization schemes such as the upwind scheme. In this method, the basic shape function is modified to obtain the upwinding effect. This method is an extension of the Runge–Kutta discontinuous Galerkin method to the convection–diffusion equation.
For time-dependent equations, a different kind of approach is followed. The finite difference scheme has an equivalent in the finite element method (Galerkin method). Another similar method is the characteristic Galerkin method (which uses an implicit algorithm). For scalar variables, the above two methods are identical.
See also
Advanced Simulation Library
Convection–diffusion equation
Double diffusive convection
An Album of Fluid Motion
Lagrangian and Eulerian specification of the flow field
Fluid animation
Finite volume method for unsteady flow
References
Diffusion
Parabolic partial differential equations
Stochastic differential equations
Transport phenomena
Equations of physics | Numerical solution of the convection–diffusion equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 908 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Equations of physics",
"Chemical engineering",
"Mathematical objects",
"Equations"
] |
41,069,188 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20phosphodiesterase%205%20inhibitors | Phosphodiesterases (PDEs) are a superfamily of enzymes. This superfamily is further classified into 11 families, PDE1 - PDE11, on the basis of regulatory properties, amino acid sequences, substrate specificities, pharmacological properties and tissue distribution. Their function is to degrade intracellular second messengers such as cyclic adenine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP) which leads to several biological processes like effect on intracellular calcium level by the Ca2+ pathway.
Phosphodiesterase 5 (PDE5) is widely expressed in several tissues in the body, for example the brain, lung, kidney, urinary bladder, smooth muscle and platelets. It is possible to prevent cGMP hydrolysis by inhibiting PDE5 and therefore treat diseases associated with low cGMP levels; because of this, PDE5 is an ideal target for the development of inhibitors. The therapeutic effects of PDE5 inhibition have been demonstrated in several cardiovascular conditions, chronic kidney disease and diabetes mellitus.
The major PDE5 inhibitors (a subset of the phosphodiesterase inhibitors) are sildenafil, tadalafil, vardenafil, and avanafil, and although all share the same mechanism of action each has unique pharmacokinetic and pharmacodynamic properties which dictate their suitability in various conditions and their side effect profile.
General
The human genome contains at least 21 genes involved in determining the intracellular levels of cAMP and cGMP by the expression of phosphodiesterase proteins, or PDEs. These PDEs are grouped into at least 11 functional subfamilies, named PDE1-PDE11. PDEs are enzymes that hydrolyze cyclic adenosine 3′,5′-monophosphate (cAMP) and cyclic guanosine 3′,5′-monophosphate (cGMP), which are intracellular second messengers, into AMP and GMP. These second messengers control many physiological processes.
The cAMP is formed from ATP by the enzyme adenylyl cyclase and cGMP is formed from GTP by the enzyme guanylyl cyclase which are either membrane bound or soluble in the cytosol. When soluble it functions as a receptor for nitric oxide (NO) (see figure 1).
Formation of cGMP initiates several reactions in the body including influence on cGMP ion channels, cGMP binding proteins and protein kinase G (PKG). The effect on PKG reduces levels of calcium leading to relaxation of smooth muscles (see figure 2).
The PDE5 enzyme is specific for cGMP which means it only hydrolyzes cGMP but not cAMP. The selectivity is mediated through an intricate network of hydrogen bonding which is favorable for cGMP but unfavorable for cAMP in PDE5.
By inhibiting the PDE5 enzyme, the cGMP concentration is raised, which increases the relaxation of smooth muscle. PDE5 has only one subtype, PDE5A, of which there are 4 isoforms in humans, called PDE5A1-4. The difference between the PDE5A1-3 isoforms is only in the 5′ end of the mRNA and the corresponding N-terminal of the protein.
Distribution of PDE5 in the body
In humans, the PDE5A1 and PDE5A2 isoforms share the same distribution and can be found in the brain, lung tissue, heart, liver, kidneys, bladder, prostate, urethra, penis, uterus and skeletal muscles. PDE5A2 is more common than PDE5A1. PDE5A3 is not as widespread as the other two isoforms and is only found in smooth muscle tissues: the heart, bladder, prostate, urethra, penis and uterus. The exact distribution of the PDE5A4 isoform was not found in the literature. The PDE5 enzyme in humans has also been reported in platelets, gastrointestinal epithelial cells, Purkinje cells of the cerebellum, corpus cavernosum, pancreas, placenta, colon, clitoral corpus cavernosum, as well as vaginal smooth muscle and epithelium.
PDE Structure and SAR
PDE enzymes are composed of 3 functional domains: an N-terminal cyclin fold domain, a linker helical domain and a C-terminal helical bundle domain (see figure 3). The active site is a deep pocket at the junction of the 3 subdomains and is lined with highly conserved residues between isotypes of PDE. The pocket is approximately 15 Å deep and the opening is approximately 20 by 10 Å. The volume of the active site has been calculated to be between 875 and 927 Å3. The active site of PDE5 has been described as subdivided into 3 main regions based on its crystal structure in complex with sildenafil:
M site: contains both a zinc and magnesium ion. The role of the ions is to stabilize the structure and activation of hydroxide to mediate the reaction. Current PDE5 inhibitors do not interact with the metal ions, in contrast with cGMP. Direct or indirect interactions may improve the potency of future inhibitors.
Q pocket: it is believed that the guanine group of cGMP binds in this region, as the Q pocket accommodates the pyrazolopyrimidinone group (see figure 4) of sildenafil. The pyrazolopyrimidinone of sildenafil mimics the guanine of cGMP and has the same H-bond donor and acceptor features, forming a bidentate H-bond with Q817. Card et al. describe the Q pocket as subdivided into 3 parts:
A saddle formed by the conserved glutamine (Q817 in PDE5A, Q443 in PDE4B and Q369 in PDE4D) and the P clamp (a hydrophobic clamp at the narrow side of the active sites pocket, formed of invariant purine-selective glutamine and a pair of conserved residues).
2 narrow, hydrophobic pockets, Q1 and Q2, composed mainly of hydrophobic residues flanking the saddle.
L region: the methyl piperazine group (see figure 4) of sildenafil is surrounded by Tyr 664, Met 816, Ala 823 and Gly 819 residues, and residues 662-664 form a lid over the pocket narrowing the entrance to the active site of PDE5.
Jeon et al. also describe a fourth pocket called the H pocket which is hydrophobic and accommodates the ethoxy phenyl group of sildenafil
The 3 PDE5 inhibitors already on the market, sildenafil, tadalafil and vardenafil, occupy part of the active site, mainly around the Q pocket and sometimes the M pocket as well and all 3 interact with the active site in 3 important manners:
interaction between the metal ions mediated through water
hydrogen bonding with the saddle of the Q pocket
hydrophobic interaction with hydrophobic residues lining the cavity of the active site.
It has also been described that the hydrophobic interaction with the Q1 and Q2 pockets are important for inhibitor potency and differences between isotypes of PDE in the Q2 pocket can be exploited for selectivity between isotypes.
Role in diseases
Erectile dysfunction
Drugs that inhibit PDE5 — sildenafil, tadalafil and vardenafil — have been used as treatment for erectile dysfunction. These inhibitors increase cGMP and smooth muscle relaxation and consequently cause penile erection during sexual stimulation.
Pulmonary arterial hypertension
Upregulation of PDE5 gene expression has been observed in animal models of pulmonary hypertension, and is thought to contribute to vasoconstriction in the lung. Several randomised controlled trials investigating PDE5 inhibitors use in pulmonary arterial hypertension, a subtype of pulmonary hypertension, have demonstrated their potent effects in reducing pulmonary hypertension and vascular remodelling and improving symptoms and mortality in patients with the condition. Long-term treatment with a PDE5 inhibitor has been shown to enhance natriuretic peptide-cGMP pathway, downregulate Ca2+ signaling pathway and alter vascular tone in pulmonary arteries in rat models.
Benign prostatic hyperplasia
As of 2011, the long-acting agent tadalafil is licensed for the treatment of urinary symptoms resulting from benign prostatic hyperplasia.
Future indications for PDE5 inhibitors
Cardiovascular diseases
PDE5 inhibitors have broad-ranging effects on the cardiovascular system beyond their acute haemodynamic influence. For example, PDE5 inhibitors have been shown to improve several parameters of endothelial function. Increasingly, their use in the management of systemic hypertension (including treatment-resistant hypertension), cardioprotection, heart failure, and peripheral arterial disease are being evaluated.
Heart failure
PDE5 inhibitors have shown promise in the treatment of heart failure with reduced ejection fraction through several beneficial effects on lung vasculature, cardiac remodelling and diastolic function. A study showed that effective treatment of pulmonary arterial hypertension with sildenafil improved functional capacity and reduced right ventricular mass in patients. The effects on right ventricular remodeling were significantly greater in comparison with the non-selective endothelial receptor antagonist bosentan. However, PDE5 inhibitors may be harmful in patients with heart failure with preserved ejection fraction due to potential negative inotropic effects.
Chronic kidney disease
Experimental studies in animals have shown that PDE5 inhibitors may reverse kidney damage independently of their effects on blood pressure through intra-renal mechanisms. In humans, PDE5 inhibitors have also been shown to reduce proteinuria, a marker of kidney damage. However, the successful introduction of SGLT2 inhibitors and endothelin receptor antagonists to the field of renal therapeutics makes the development of PDE5 inhibitors for this purpose unlikely.
Diabetes mellitus
PDE5 inhibitors have been shown to have various macrovascular, microvascular and metabolic benefits in diabetes mellitus, and in a large study of men with type 2 diabetes mellitus the agents were found to significantly reduce patients' risk of death from any cause. It is unclear to what extent this observation reflects the protective effects of PDE5 inhibitors against cardiovascular and renal disease.
Raynaud's phenomenon
Sildenafil has been shown to be at least as effective as calcium channel blockers in treating severe Raynaud's phenomenon (RP) associated with systemic sclerosis and digital ulceration. When given sildenafil for 4 weeks subjects had reduced mean frequency and duration of Raynaud attacks and a significantly lowered mean Raynaud's condition score. The capillary blood flow velocity also increased in each individual patient and the mean capillary flow velocity of all patients increased significantly. These results came without significant reductions of the systemic blood pressure. However, the therapeutic effects of PDE5 inhibitors in primary (idiopathic) RP are less well defined.
Stroke
Sildenafil has been shown to significantly improve neurovascular coupling without affecting overall cerebral blood flow by increasing brain levels of cGMP, evoking neurogenesis and reducing neurological deficits in rats 2 or 24 hours after stroke. These experimental data suggest that PDE5 inhibitors may have a role in promoting recovery from stroke. However, studies in humans remain inconclusive.
Premature ejaculation
Adding PDE5 inhibitors to SSRI drugs (e.g. paroxetine) for the treatment of premature ejaculation could result in better ejaculatory control according to recent studies. Possible mechanism is based on nitric oxide (NO)/cGMP transduction system as a central and peripheral mediator of inhibitory non-adrenergic, non-cholinergic nitrergic neurotransmission in the urogenital system.
Female sexual arousal disorder
PDE5 is expressed in clitoral corpus cavernosum and in vaginal smooth muscle and epithelium. Therefore, it is possible that PDE5 inhibitors could affect female sexual arousal disorder but further research is needed. Increased levels of cGMP have been shown to occur in human-cultured vaginal smooth muscle cells treated with a PDE5 inhibitor suggesting involvement of the NO/cGMP axis in the female sexual response.
Sexual Exhaustion Disorder
The similarity of many PDE5 inhibitors to the structure of caffeine analogs that are also adenosine antagonists suggests that, in the future, it may be possible to design a PDE5 inhibitor that, like caffeine, is also an adenosine antagonist.
Discovery
PDE5 is an enzyme that was first purified in 1980 from rat lung. PDE5 converts intracellular cGMP to the nucleotide GMP. Many tissues contain PDE5, such as the lungs, kidneys, brain, platelets, liver, prostate, urethra, bladder and smooth muscle. Because of the localization of PDE5 in smooth muscle tissue, inhibitors were developed for the treatment of erectile dysfunction and pulmonary hypertension.
Sildenafil was initially introduced for clinical trial in 1989. It was the result of extensive research on chemical agents targeting PDE5 that could be effective in treatment of coronary heart disease. Sildenafil did not prove effective for coronary heart disease but an interesting side effect was discovered, a penile erection. That side effect soon became the main field of investigation. The inhibitor is highly selective for the PDE5 family.
Sildenafil is a prototype of PDE5 inhibitors that Pfizer launched as Viagra. It was approved by the Food and Drug Administration (FDA) in 1998 as the first oral medicine for erectile dysfunction. Later, in the year 2005, it was approved for the treatment of pulmonary arterial hypertension. Vardenafil and tadalafil were discovered in 1990. These drugs came out of research programs focusing on finding PDE5 inhibitors for the treatment of cardiovascular diseases and erectile dysfunction. The two PDE5 inhibitors soon became treatments for these conditions.
Tadalafil is the most versatile inhibitor and has the longest half-life, 17.5 hours. This allows for a longer therapeutic window and therefore often makes it a more convenient drug than others with a shorter therapeutic window. Tadalafil is more bioavailable (80%) than sildenafil (40%) and vardenafil (15%), but its absorption is slower, about 2 hours compared with 50 minutes for sildenafil. Vardenafil is best known for its potency.
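The practical consequence of the different half-lives can be illustrated with a simple first-order elimination model, C(t) = C0 · (1/2)^(t / t_half). The 17.5-hour figure for tadalafil is taken from the text above; the roughly 4-hour value used for sildenafil and the normalized starting concentration are assumptions made only for illustration, not clinical dosing guidance.

```python
# First-order elimination: the concentration halves every t_half hours.
def remaining_fraction(hours, t_half):
    return 0.5 ** (hours / t_half)

half_lives = {"tadalafil": 17.5,   # half-life quoted in the text above
              "sildenafil": 4.0}   # approximate value, assumed for illustration

for drug, t_half in half_lives.items():
    fractions = ", ".join(f"{t} h: {remaining_fraction(t, t_half):.2f}"
                          for t in (4, 12, 24, 36))
    print(f"{drug:>10} -> fraction of initial concentration remaining at {fractions}")
```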
Because of severe adverse effects and patients' dissatisfaction with current therapy choices, other inhibitors have recently been approved for clinical use. These inhibitors are udenafil, avanafil, lodenafil and mirodenafil.
Development
Biological activity
Penile erection
Penile erection is a hemodynamic event in the smooth muscle of the corpus cavernosum. PDE5 is the main cGMP-hydrolysing enzyme found in the penile corpus cavernosum. Erection is triggered by the release of the neurotransmitter nitric oxide (NO) from non-adrenergic, non-cholinergic neurons at nerve endings in the penis, as well as from endothelial cells. NO activates soluble guanylyl cyclase in smooth muscle cells in the penis, which results in increased production of 3′,5′-cyclic guanosine monophosphate from guanosine-5′-triphosphate (GTP). Cyclic GMP binds to the cGMP-dependent protein kinase (PKG1), which phosphorylates several proteins, resulting in decreased intracellular calcium. Lower intracellular calcium leads to smooth muscle relaxation and ultimately penile erection. This pathway is demonstrated in figure 1.
Erectile dysfunction
PDE5 degrades cGMP and therefore inhibits erection. As demonstrated in figure 1, inhibition of PDE5 reduces degradation of cGMP and leads to penile erection.
Because of this action PDE5 inhibitors have been developed for the treatment of penile erectile dysfunction.
The phosphodiesterase 5 enzyme
The PDE5 enzyme has a molecular mass of 200 kDa and its active state is a homodimer. PDE5 consists of monomers and each contains two major functional domains: the regulatory domain (R domain) which is located in the N-terminal portion of the protein and the catalytic domain (C domain) located in the more C-terminal portion of the protein.
The R domain contains specific allosteric cGMP binding site that controls the enzymes function. This specific binding site consists of subdomain GAF (cGMP-specific cGMP-stimulated PDE, adenylate cyclase, and FhlA) which is located in the N-terminal section of the specific proteins. The allosteric binding site GAF consists of GAFa and GAFb where GAFa has a higher binding affinity. The importance and functional role of the two homologous binding sites are unknown.
A conformational change occurs when cGMP binds to the allosteric site, which exposes serine and permits phosphorylation. Phosphorylation of serine leads to increased cGMP hydrolysis at the catalytic domain. The affinity of the catalytic domain for cGMP increases, further increasing the PDE5 catalytic activity.
Through the C domain, intracellular cGMP is degraded rapidly by PDE5 which minimizes the activity of cGMP on its PKG1 substrate by cleaving the cyclic phosphate part of cGMP to GMP. GMP is an inactive molecule with no second messenger activity.
Phosphorylation of a single serine by PKG1 and the allosteric cGMP binding site activates the PDE5 catalytic activity and the result is a negative feedback regulation of cGMP/NO/PKG1 signalling. cGMP therefore interacts with both allosteric and catalytic domain of the PDE5 enzyme and PDE5 inhibitors compete with cGMP for binding at the catalytic domain resulting in higher cGMP levels. PDE5 domains are demonstrated in figure 2.
PDE5 Inhibitors
The PDE5 inhibitors sildenafil, vardenafil and tadalafil are competitive and reversible inhibitors of cGMP hydrolysis at the catalytic site of PDE5. The structures of vardenafil and sildenafil are similar; both contain a ring system resembling the purine ring of cGMP, which enables them to act as competitive inhibitors of PDE5. The differences in molecular structure account for how the compounds interact with the catalytic site of PDE5 and improve their affinity relative to cGMP.
Pharmacophore
The pharmacophore model of PDE5 usually consists of one hydrogen bond acceptor, one hydrophobic aliphatic carbon chain and two aromatic rings. A small hydrophobic pocket and the H-loop of the PDE5 enzyme are important for the binding affinity of PDE5 inhibitors. Positional and conformational changes are also observed upon inhibitor binding in many cases.
The active site of PDE5 is located at a helical bundle domain at the center of the C domain (catalytic domain). The substrate pocket is composed of four subsites: M site (metal-binding site), Q pocket (core pocket), H pocket (hydrophobic pocket) and L region (lid region), as demonstrated in figure 3. The Q pocket accommodates the pyrazolopyrimidinone group of sildenafil, which suggests that other chemical groups similar to the guanine group of cGMP can also bind at this region. The amino acid residues Gln817, Phe820, Val782 and Tyr612 line the Q pocket; they are highly conserved in all PDEs. The amide moiety of the pyrazolopyrimidinone group forms a bidentate hydrogen bond with the γ-amide group of Gln817. The 3D structure of sildenafil is demonstrated in figure 4.
Side effects
PDE5 inhibitors are generally well tolerated, with side effects including transient headaches, flushing, dyspepsia, congestion and dizziness. There have also been reports of temporary vision disturbances with sildenafil and, to a lesser extent, vardenafil, and back and muscle pain with tadalafil. These side effects may be attributed to the unintended effects of PDE5 inhibitors against other PDE isozymes, such as PDE1, PDE6 and PDE11. It is theorised that improved selectivity of PDE5 inhibitors may lead to fewer side effects. For example, vardenafil and tadalafil have demonstrated reduced adverse effects probably due to improved selectivity for PDE5. However, no highly selective PDE5 inhibitors are currently in development.
Patients who take nitrates, alpha blockers or sGC stimulators within 24 hours of PDE5 inhibitor administration (or 48 hours for tadalafil) may experience symptomatic hypotension, so concurrent use is contraindicated. PDE5 inhibitors are also contraindicated in patients with hereditary eye conditions such as retinitis pigmentosa due to the small increased risk of nonarteritic ischaemic optic neuropathy in patients taking the medication.
Hearing impairment is one risk for those using PDE5 inhibitors and has been reported for all available drugs on the market. This problem may be due to the effect of high cGMP levels on cochlear hair cells. It has been reported that PDE5 inhibitors (sildenafil and vardenafil) cause transient visual disturbances, likely due to PDE6 inhibition.
Several reports describe approaches to improving PDE5 inhibitors in which chemical groups are exchanged to increase potency and selectivity, which should potentially lead to drugs with fewer side effects.
Structure–activity relationship (SAR)
Sildenafil, the first PDE5 inhibitor, was discovered through a rational drug design programme. The compound was potent and selective for PDE5 but lacked desirable pharmacological properties.
The structure–activity relationship (SAR) is demonstrated in figures 5, 6 and 7. Figure 5 shows the three main groups of sildenafil: R1, R2 and R3. R1 is the pyrazolopyrimidinone ring, R2 the ethoxyphenyl ring and R3 the methylpiperazine ring. The R1 group is responsible for binding of the drug to the active site of PDE5.
Solubility was one of the pharmacological properties that was improved. A group was substituted for a hydrogen atom as shown in figure 6; the sulfonamide group was chosen to lower lipophilicity and increase solubility, as seen in figure 7.
Solubility was further increased by placing a methyl group at the R position, as shown in figure 7. Other phosphodiesterase-5 inhibitors were developed from the structure in figure 7.
Other research
Although the main use of PDE5 inhibitors has been for erectile dysfunction, there has been great interest in PDE5 inhibitors as promising new therapeutic agents for the treatment of other diseases, such as Alzheimer's disease. Elevation of cGMP levels through inhibition of PDE5 provides a way of improving memory and learning.
PDE5 inhibitors have also been considered as potential therapeutic agents for parasitic diseases such as African sleeping sickness. Strategic changes were made to the structure of sildenafil so that the molecule could project into a parasite-specific pocket (the p-pocket). A similar approach has been used to design therapeutic agents against Plasmodium falciparum.
PDE5-inhibitors in clinical trials
See also
Erectile dysfunction
Sildenafil
PDE5
PDE5 inhibitor
Molecular modelling
References
phosphodiesterase 5 inhibitors
EC 3.1.4
Molecular biology
Medicinal chemistry | Discovery and development of phosphodiesterase 5 inhibitors | [
"Chemistry",
"Biology"
] | 5,083 | [
"Life sciences industry",
"Drug discovery",
"Medicinal chemistry",
"nan",
"Biochemistry",
"Molecular biology"
] |
41,069,208 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20beta-blockers | β adrenergic receptor antagonists (also called beta-blockers or β-blockers) were initially developed in the 1960s, for the treatment of angina pectoris but are now also used for hypertension, congestive heart failure and certain arrhythmias. In the 1950s, dichloroisoproterenol (DCI) was discovered to be a β-antagonist that blocked the effects of sympathomimetic amines on bronchodilation, uterine relaxation and heart stimulation. Although DCI had no clinical utility, a change in the compound did provide a clinical candidate, pronethalol, which was introduced in 1962.
Development
History
The β-blockers are an immensely important class of drugs due to their high prevalence of use. The discovery of β-blockers reaches back more than 100 years, when early investigators came up with the idea that catecholamines bind selectively to receptor-like structures and that this is the cause of their pharmacological actions. In 1948, Raymond P. Ahlquist published a seminal paper concluding from his findings that there were two distinct receptors for catecholamine drugs and that they caused different responses in the heart muscle. He labeled them α- and β-adrenoceptors. These findings soon became a foundation for further research into drug development.
In the early 1960s, James Black, a Scottish pharmacologist, and his associates at the Imperial Chemical Industries (ICI) in Great Britain were working on a series of β-adrenergic blocking compounds, pronethalol and propranolol. Dr. Black focused on developing a drug that would relieve the pain of angina pectoris, which results from oxygen deprivation in the heart. His plan was to create a drug that would decrease the heart's requirement for oxygen. He hypothesized that these compounds would lower the heart's oxygen consumption by interfering with the effects of catecholamines. In 1958, the pharmacological properties of dichloroisoproterenol (DCI) were described, a β-antagonist discovered a few years earlier by the Eli Lilly group. It was the synthesis of DCI that established that β-receptors could be chemically blocked and thus that their existence could be confirmed. DCI had no clinical utility, but replacement of the 3,4-dichloro substituents with a carbon bridge, to form a naphthylethanolamine derivative, afforded a clinical candidate, pronethalol.
In April 1963, toxicity tests of pronethalol revealed thymic tumours in mice. Nevertheless, it was launched under the trade name Alderlin as the first clinically useful β-blocker. The launch took place in November 1963, after many small-scale clinical trials had proved its effectiveness in angina and certain types of arrhythmias. Pronethalol was only marketed for use in life-threatening situations. Dr. James Black went on to create another β-blocker, propranolol, a non-selective β-blocker. Clinical trials started in the summer of 1964 and a year later propranolol was launched under the trade name Inderal, only two and a half years after it had first been tested. It turned out to have higher potency than pronethalol, with fewer side effects. Propranolol became the first major drug in the treatment of angina pectoris since the introduction of coronary vasodilators (such as nitroglycerin) almost 100 years earlier. Propranolol became a best-selling drug, used to treat a wide range of cardiovascular diseases such as arrhythmia, hypertension and hypertrophic cardiomyopathy.
The evolution of non-selective and selective β-blockers
By the time propranolol was launched, ICI was beginning to experience competition from other companies. This potential threat led to ongoing refinements in the pharmacologic structure of β-blockers and subsequent advances in drug delivery. ICI studied analogues further and in 1970 launched practolol (figure 4) under the trade name Eraldin. It was withdrawn from the market a few years later because of the severe side effects it caused; nevertheless, it played a large role in the fundamental study of β-blockade and β-receptors.
The withdrawal of Eraldin gave ICI the nudge to launch another β-blocker, atenolol, which was introduced in 1976 under the trade name Tenormin. Atenolol is a selective β1-receptor antagonist and was developed with the aim of obtaining the “ideal β-blocker”. It soon became one of the best-selling heart drugs. ICI's β-blocker project was based on Ahlquist's dual receptor theory, and the drugs that were the outcome of this project, from propranolol to atenolol, helped to establish the receptor theory among scientists and pharmaceutical companies.
The progress in β-blocker development led to the introduction of drugs with a variety of properties. β-blockers were developed with relative selectivity for cardiac β1-receptors (for example metoprolol and atenolol), partial adrenergic agonist activity (pindolol), concomitant α-adrenergic blocking activity (for example labetalol and carvedilol) and additional direct vasodilator activity (nebivolol). In addition, long-acting and ultra-short-acting formulations of β-blockers were developed. In 1988, Sir James Black was awarded the Nobel Prize in Physiology or Medicine for his work on drug development.
Mechanism of action
Pharmacokinetics
The β-adrenergic receptor antagonists all have similar therapeutic and pharmacodynamic actions in patients with cardiovascular disorders. They vary greatly in their pharmacokinetic properties, showing a wide range of values for plasma protein binding, the percentage of drug eliminated by metabolism or excreted unchanged in the urine, and hepatic extraction ratio. Each of the β-blockers possesses at least one chiral centre and shows a high degree of enantioselectivity in binding to the β-adrenergic receptor. For those β-blockers containing a single chiral centre, the (−) enantiomer has a much higher binding affinity for the β-adrenergic receptor than the (+) enantiomer. All β-blockers used systemically are delivered as racemates, except for timolol.
Binding to β adrenergic receptors
Three different types of β-adrenergic receptors have been identified by molecular pharmacology. β1-receptors are located in the heart and make up about 75% of all β-receptors. β2-receptors can be found in the smooth muscle of vessels and the bronchi. β3-receptors are presumed to be involved in fatty acid metabolism and are found in adipocytes. β-blockers cause a competitive inhibition of the β-receptor, which counters the effects of catecholamines. β1- and β2-receptors are G-protein coupled receptors, which couple to Gαs-proteins. When activated, the receptor stimulates an increase in intracellular cAMP via adenylyl cyclase. cAMP, the second messenger, then activates protein kinase A, which phosphorylates the membrane's calcium channel and increases the entry of calcium into the cytosol. Protein kinase A also increases the release of calcium from the sarcoplasmic reticulum, which causes a positive inotropic effect. Phosphorylation of troponin I and phospholamban by protein kinase A causes the lusitropic effect of β-receptor stimulation, increasing the re-uptake of calcium by the sarcoplasmic reticulum; β-blockers blunt these downstream effects by preventing receptor activation.
β-blockers are sympatholytic drugs. Some β-blockers partially activate the receptor while preventing catecholamines from binding to the receptor, making them partial agonists. They provide a background of sympathetic activity, while preventing normal and enhanced sympathetic activity. These β-blockers possess intrinsic sympathomimetic activity (ISA). Some of them also possess what is called membrane-stabilizing activity (MSA) on myocardial muscle fibers.
Selectivity
β-blockers can be selective for either the β1 or the β2 adrenergic receptor, or non-selective. Blocking the β1 receptor makes it possible to reduce heart rate, conduction velocity and contractility. Blocking the β2 receptor promotes vascular smooth muscle contraction, which results in an increase in peripheral resistance. Blockade of the β2 receptor also effectively reduces sympathetic activity, which reduces the associated platelet and coagulation activation. This is why non-selective β-blocker treatment may result in a lower risk of both arterial and venous embolic events.
Synthesis
Synthesis for a standard β-blocker begins with the mono-alkylation of catechol to give an ether (see figure 4).
The fundamental step, and usually the last, in the synthesis of β-blockers consists of adding a propanolamine side chain. This can be done by following two paths, both of which involve alkylation of an appropriate phenoxide with epichlorohydrin (ECH). The first, shown as the upper route in figure 5, consists of the phenoxide reacting at the oxirane, giving an alkoxide that displaces the adjacent chloride to form a new epoxide ring. The second, shown as the lower route in figure 5, consists of direct displacement of the halogen in an SN2 reaction to give the same glycidic ether. In both pathways the central chiral carbon preserves its configuration, which is important to consider when synthesizing enantiomerically defined drugs. Ring opening of the epoxide in the glycidic ether with an appropriate amine, such as isopropylamine or tert-butylamine, leads to the aryloxypropanolamine compound, which contains a secondary amine. This amine is typically regarded as the structural requirement for β-adrenergic blocking activity.
(S)-propranolol
Propranolol exists as two enantiomers, the (S)-(−)- and (R)-(+)-enantiomers. The (S)-isomer is 100-fold more potent than the (R)-isomer, which is the general rule for most β-blockers. The (S)-propranolol enantiomer can be produced from α-naphthol and 3-bromopropanol as shown in figure 6. α-Naphthol and 3-bromopropanol are refluxed for 6 hours to give an alcohol. The alcohol is oxidized with 2-iodoxybenzoic acid (IBX) to give an aldehyde. The aldehyde is subjected to L-proline-catalyzed asymmetric α-aminoxylation, and a reduction is carried out with NaBH4 in methanol. A diol is obtained by Pd/C-catalyzed hydrogenolysis. Finally, the diol is converted to an epoxide using the Mitsunobu reaction and stirred with isopropylamine in CH2Cl2 to give (S)-propranolol.
Structure-activity relationship (SAR)
β-blockers bind to the receptor at the same site as the endogenous catecholamines, such as noradrenaline and adrenaline. This binding is based on hydrogen bonds between the β-blocker and the receptor rather than covalent bonds, which makes it reversible. A significant step in the development of β-adrenergic antagonists was the discovery that an oxymethylene bridge (—OCH2—, figure 7) could be inserted into the arylethanolamine structure of pronethalol to produce propranolol. Propranolol is an aryloxypropanolamine; aryloxypropanolamines are more potent β-blockers than arylethanolamines, and today most of the β-blockers used clinically are aryloxypropanolamines. The length of the side chain is increased when an oxymethylene bridge is introduced, but it has been shown that the side chains of aryloxypropanolamines can adopt a conformation that puts the hydroxyl and amine groups in more or less the same position as in β-blockers that do not have this group as part of the side chain.
After the release of propranolol, the relative lipophilicity of β-blockers became recognized as a significant factor in their varied and complex pharmacology. It was suspected that propranolol's centrally induced side effects could be due to its high lipophilicity. Work therefore focused on synthesizing analogues with favourably placed hydrophilic moieties, to see whether the side effects would decrease. Selecting para-acylamino groups as the hydrophilic moiety, scientists synthesized a group of para-acylphenoxyethanol- and propanolamines and selected practolol for clinical trials. Practolol had one property not previously seen with β-blockers: it exhibited cardioselectivity (β1 selectivity). Studies of practolol showed that moving the acylamino group to the meta or ortho position on the benzene ring caused a loss of selectivity but not a loss of the β-blockade itself. This illustrated the significance of para-substitution for the β1 selectivity of β-blockers.
Figure 8 shows the structure-activity relationship (SAR) for β-blockers. For the function of a β-blocker it's essential for the compound to contain an aromatic ring and a β-ethanolamine. The aromatic ring can either be benzoheterocyclic (such as indole) or heterocyclic (such as thiadiazole). This is mandatory. The side chains can be variable:
The X part of the side chain can either be directly linked to the aromatic ring or linked through a —OCH2— group
When X is —CH2CH2—, —CH=CH—, —SCH2— or —NCH2—, there is little or no activity
The R1 group can only be a secondary substitution and branched is the optimal choice
Alkyl (—CH3) substituents on the α, β or γ carbon (if X = —OCH2—) lower beta blockade, especially at the α carbon
The general rule for aromatic substitution is: ortho > meta > para. This gives non-selective β-blockers. Large para-substituents usually decrease activity but large ortho-groups retain some activity. Polysubstitution on carbon 2 and 6 makes the compound inactive but when the substitution is on carbon 3 and 5 there's some activity. For the highest cardioselectivity, the substituents should be as following: para > meta > ortho. All the β-blockade is in one isomer, (S)-aryloxypropylamine and (R)-ethanolamine.
Clinical use
Cardiovascular indications
For decades β-blockers have been used in cardiovascular medicine and have been shown to reduce morbidity and mortality. In acute coronary syndrome, β-blockers have been recommended as a class I-A indication in clinical practice guidelines, because the treatment decreases the mortality rate. β-blockers, along with calcium channel blockers, reduce the workload of the heart and its oxygen requirement. β-blockers are sometimes used in combination therapy to treat angina if a β-blocker does not work well enough on its own. They are used as anti-arrhythmic drugs in patients with hyperthyroidism, cardiac dysrhythmia, atrial fibrillation, atrial flutter and ventricular tachycardia. Treatment with β-blockers reduces the incidence of sudden heart failure in patients who have already had a myocardial infarction, probably because of their anti-arrhythmic and anti-ischemic effects. β-blocker therapy is also useful after myocardial infarction independently of heart failure, and has been very helpful for high-risk patients. Although β-blockers effectively lower blood pressure, they are not recommended as first-line agents in the treatment of hypertension, as thiazide diuretics, ACE inhibitors and calcium channel blockers show greater benefit. Therefore, β-blockers are usually used alongside other blood pressure medications such as calcium channel blockers. They also have an effect on cardiomyopathy, postural orthostatic tachycardia syndrome and portal hypertension, to name a few.
Other indications
There are few diseases, other than cardiovascular diseases, that β-blockers have a clinical effect on. These diseases are mentioned in the following sub-chapters. In addition, there are diseases which β-blockers have a clinical effect but are not the first choice of treatment. They won't be mentioned in the sub-chapters.
Essential tremor
When the symptoms of essential tremor are considerable, non-selective β-blockers are an important treatment option and usually the first choice. Studies have shown that propranolol reduces symptoms the most in this class. The β-blockers can be used alone or in combination.
Glaucoma
Glaucoma is caused by high intra-ocular pressure (IOP). β-blockers reduce IOP and are the most common therapy. Most patients who use topical β-blockers need adjunctive therapy to achieve a target IOP lowering. One of the most commonly used drugs in adjunctive therapy is dorzolamide.
Teratogenicity
Hypertension is reported to complicate one in ten pregnancies, which makes it the most common medical disorder in pregnancy. It is important to have a correct diagnosis of hypertension during pregnancy, with the emphasis on differentiating pre-existing hypertension from pregnancy-induced hypertension (gestational hypertension and the syndrome of pre-eclampsia). During pregnancy, the challenge is to determine when to use antihypertensive medications and which level of blood pressure to target. A balance has to be found between the potential risk to the health of the baby related to drug exposure and the risk to the mother and baby due to an untreated medical condition (severe hypertension).
Antihypertensive drug use during pregnancy is relatively common and increasing. Only a small proportion of available antihypertensive drugs have been tested in pregnant women, and many are contraindicated. It is important to make the exposure of medications to the baby as small as possible. It is not clear if treating women who have mild or moderate hypertension during pregnancy with anti-hypertensive medication is beneficial.
The most common first-trimester antihypertensives are β-blockers. The consequences of treatment with β-blockers during pregnancy are disputed. Some studies report a connection between β-blocker treatment and small-for-gestational-age (SGA) newborns and pre-term birth, while others do not. Based on meta-analyses, first-trimester oral β-blocker use showed no increase in the odds of major congenital anomalies. However, analyses examining organ-specific malformations observed increased odds of cardiovascular defects, cleft lip and neural tube defects. The U.S. Food and Drug Administration (FDA) categorises β-blockers into different pregnancy categories depending on the safety of the drugs, ranging from category B to D; that is, no β-blocker is completely safe for use during pregnancy.
See also
α blockers
β blockers
β2 adrenergic receptors
Catecholamine
Cardiovascular disease
Hypertension
References
Beta blockers
beta-adrenergic receptor antagonists | Discovery and development of beta-blockers | [
"Chemistry",
"Biology"
] | 4,280 | [
"Drug discovery",
"Life sciences industry",
"Medicinal chemistry"
] |
41,071,610 | https://en.wikipedia.org/wiki/Theia%20%28planet%29 | Theia () is a hypothesized ancient planet in the early Solar System which, according to the giant-impact hypothesis, collided with the early Earth around 4.5 billion years ago, with some of the resulting ejected debris coalescing to form the Moon. Collision simulations support the idea that the large low-shear-velocity provinces in the lower mantle may be remnants of Theia. Theia is hypothesized to have been about the size of Mars, and may have formed in the outer Solar System and provided much of Earth's water, though this is debated.
Name
In Greek mythology, Theia was one of the Titans, the sister of Hyperion whom she later married, and the mother of Selene, the goddess of the Moon: this story parallels the planet Theia's theorized role in creating the Moon.
Orbit
Theia is hypothesized to have orbited in the L4 or L5 configuration presented by the Earth–Sun system, where it would tend to remain. If this were the case it might have grown to a size comparable to Mars, with a diameter of about . Gravitational perturbations by Venus could have put it onto a collision course with the early Earth.
Size
Theia is often suggested to be around the size of Mars, with a mass about 10% that of current Earth; however, its size is not definitively settled, with some authors suggesting that Theia may have been considerably larger, perhaps 30% or even 40-45% the mass of current Earth making it nearly equal to the mass of proto-Earth.
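As a rough consistency check (the planetary masses used here are standard reference values, not given in the text), the commonly quoted 10% figure does correspond to a Mars-sized body:

$$0.10\,M_\oplus \approx 0.10 \times 5.97\times10^{24}\,\mathrm{kg} \approx 6.0\times10^{23}\,\mathrm{kg} \approx M_{\mathrm{Mars}}\ (\approx 6.4\times10^{23}\,\mathrm{kg}),$$

while the larger estimates of 40–45% of Earth's present mass would correspond to roughly $2.4$–$2.7\times10^{24}$ kg.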
Collision
According to the giant impact hypothesis, Theia orbited the Sun, nearly along the orbit of the proto-Earth, by staying close to one or the other of the Sun-Earth system's two more stable Lagrangian points (i.e., either L4 or L5). Theia was eventually perturbed away from that relationship, most likely by the gravitational influence of Jupiter, Venus, or both, resulting in a collision between Theia and Earth.
Initially, the hypothesis supposed that Theia had struck Earth with a glancing blow and ejected many pieces of both the proto-Earth and Theia, those pieces either forming one body that became the Moon or forming two moons that eventually merged to form the Moon. Such accounts assumed that a head-on impact would have destroyed both planets, creating a short-lived second asteroid belt between the orbits of Venus and Mars.
In contrast, evidence published in January 2016 suggests that the impact was indeed a head-on collision and that Theia's remains are on Earth and the Moon.
Simulations suggest that Theia would be responsible for around 70-90% of the total mass of the Moon under a classic giant impact scenario where Theia is considerably smaller than proto-Earth.
Hypotheses
From the beginning of modern astronomy, there have been at least four hypotheses for the origin of the Moon:
A single body split into Earth and Moon
The Moon was captured by Earth's gravity (as most of the outer planets' smaller moons were captured)
The Earth and Moon formed at the same time when the protoplanetary disk accreted
The Theia-impact scenario described above
The lunar rock samples retrieved by Apollo astronauts were found to be very similar in composition to Earth's crust, and so were likely removed from Earth in some violent event.
It is possible that the large low-shear-velocity provinces detected deep in Earth's mantle may be fragments of Theia. In 2023, computer simulations reinforced that hypothesis.
Composition
The composition of Theia, and how different it was from Earth, is disputed and subject to debate. It is considered unlikely that Theia had exactly the same isotopic composition as the proto-Earth. A key constraint has been that many isotope ratios of rocks retrieved from the Moon are nearly identical to those from Earth, implying either that the two bodies were extensively homogenized by the collision or that the isotopic composition of Theia was very similar to Earth's. However, a 2020 study showed that lunar rocks are more variable in oxygen isotope composition than previously thought, some differing more from Earth than others; the more divergent values, probably originating deeper in the lunar mantle, were suggested to be a truer reflection of Theia, and may indicate that Theia formed further from the Sun than Earth.
See also
Disrupted planet
Phaeton (hypothetical planet)
Synestia
References
Lunar science
Hypothetical impact events
Hypothetical bodies of the Solar System
Hypothetical planets
Water
Space
Solar System | Theia (planet) | [
"Physics",
"Astronomy",
"Mathematics",
"Biology",
"Environmental_science"
] | 928 | [
"Astronomical hypotheses",
"Hydrology",
"Water",
"Outer space",
"Astronomical myths",
"Hypothetical impact events",
"Hypothetical astronomical objects",
"Space",
"Geometry",
"Biological hypotheses",
"Spacetime",
"Astronomical objects",
"Solar System"
] |
51,132,519 | https://en.wikipedia.org/wiki/C29H46O3 | The molecular formula C29H46O3 (molar mass: 442.674 g/mol, exact mass: 442.3447 u) may refer to:
Nandrolone undecanoate
Testosterone decanoate
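The quoted molar mass can be checked from standard atomic weights (the atomic weight values below are conventional reference values, not given in the text):

$$M(\mathrm{C_{29}H_{46}O_3}) = 29(12.011) + 46(1.008) + 3(15.999) \approx 348.3 + 46.4 + 48.0 \approx 442.7\ \mathrm{g\,mol^{-1}},$$

in agreement with the quoted 442.674 g/mol to within rounding of the atomic weights.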
Molecular formulas | C29H46O3 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
51,135,583 | https://en.wikipedia.org/wiki/Von%20Stahel%20und%20Eysen | Von Stahel und Eysen (English: On Steel and Iron) is the first printed book on metallurgy, published in 1532 by several publishers: Kunegunde Hergot in Nuremberg, Melchior Sachs in Erfurt, and Peter Jordan in Mainz. It has been suggested that Hergot was probably the first to publish the text, as the material seems to come from Nuremberg: its material on tempering and quenching is similar to the short treatise on hardening iron beginning 'Von dem herten. Nu spricht meister Alkaym' in the late fourteenth- or early fifteenth-century Nuremberg manuscript Nürnberger Handschrift GNM 3227a.
About half the text is on how to harden iron and steel through tempering and quenching, mentioning water but also a range of recipes of varying degrees of elaborateness. The recipe 'take clarified honey, fresh urine of a he-goat, alum, borax, olive oil, and salt; mix everything well together and quench therein' might, through the urea content of the urine (H2NCONH2), have helped to produce nitrided, 'case-hardened' iron. Less likely to have been efficacious is: 'take varnish, dragon's blood, horn scrapings, half as much salt, juice made from earthworms, radish juice, tallow, and vervain and quench therein. It is also very advantageous in hardening if a piece that is to be hardened is first thoroughly cleaned and well polished'.
A modern commentator on some of the more outlandish techniques in the book noted: "There isn't really much to say...except that perhaps it was meant to trip up rivals. However, this may not be the case because similar instructions were circulated in 1708 in Nuremberg."
The text also includes techniques for colouring, soldering, and etching. Etching was quite a new technology at the time, and Von Stahel und Eysen provides the first attested recipes.
Translations
Williams, H. (trans.), 'A sixteenth-century German treatise: Von Stahel und Eysen. 1532', Technical studies in the field of the fine arts, 4.2 (October, 1935), 63-92.
Smith, Cyril Stanley (ed.), Sources for the History of the Science of Steel, 1532-1786, Society for the History of Technology, 4 (Cambridge, Mass.: Society for the History of Technology, 1968), pp. 7–19.
References
Alchemical documents
Engineering textbooks
1532 books
German books
German non-fiction books
History of metallurgy | Von Stahel und Eysen | [
"Chemistry",
"Materials_science"
] | 563 | [
"Metallurgy",
"History of metallurgy"
] |
51,138,703 | https://en.wikipedia.org/wiki/The%20Chemical%20Engineer | The Chemical Engineer is a monthly chemical engineering technical and news magazine published by the Institution of Chemical Engineers (IChemE). It has technical articles of interest to practitioners and educators, and also addresses current events in world of chemical engineering including research, international business news and government policy as it affects the chemical engineering community. The magazine is sent to all members of the IChemE and is included in the cost of membership. Some parts of the magazine are available free online, including recent news and a series of biographies “Chemical Engineers who Changed the World”, although the core and the archive magazine is available only with a subscription. The online magazine also has freely available podcasts.
History
The formal journal of the IChemE was the “Transactions” which was initially an annual publication. In order to keep members informed a “Quarterly Bulletin- Institution of Chemical Engineers” was issued. When the Transactions became quarterly, the Bulletin was issued as a supplement. In 1956 both changed to bi-monthly and the title was changed to “The Chemical Engineer” with the sub-title “Bulletin of the Institution of Chemical Engineers". It kept the same numbering, so was issue 125. According to the editorial it would contain news and “articles and comments by members, handled less formally than in Transactions, relating both to practical matters arising from experience and to broader aspects of professional life.”
From 2002 it was published as “TCE” but reverted to its original title with issue 894 in December 2015.
References
External links
Official website
Monthly magazines published in the United Kingdom
Science and technology magazines published in the United Kingdom
Chemical industry in the United Kingdom
Chemical engineering journals
Magazines established in 1956
Professional and trade magazines
Institution of Chemical Engineers | The Chemical Engineer | [
"Chemistry",
"Engineering"
] | 339 | [
"Chemical engineering journals",
"Chemical engineering",
"Chemical engineering organizations",
"Institution of Chemical Engineers"
] |
51,139,305 | https://en.wikipedia.org/wiki/JJ%20Electronic | JJ Electronic, s.r.o is a Slovak electronic component manufacturer, and one of the world's remaining producers of vacuum tubes. It is based in Čadca, in the Kysuce region of Slovakia.
Most of its products are audio receiving tubes, mainly used for guitar and hi-fi amplifiers. In technical terms, JJ produces triodes, beam tetrodes and power pentodes. Double diode vacuum tubes for full-wave AC-to-DC rectifiers are also produced. JJ also produces electrolytic capacitors for higher-voltage purposes, generally for use in audio amplifiers. JJ also manufactures its own line of high-end audio amplifiers and guitar amplifiers.
In 2015, the company's sales amounted to EUR 8.5 million and net income came to EUR 3.8 million. Most production is exported to the United States.
History
Before 1989, Tesla was the main Czechoslovak producer of electron tubes. While Tesla vacuum tubes were exported all over the world, and were known for their quality, the company did not survive the change of economic system after 1989 in combination with the downturn in the vacuum tube market. JJ Electronic was founded in 1993 by Jan Jurco, using the old Tesla machinery for the manufacture of vacuum tubes. Eventually, JJ Electronic started to produce its own line of vacuum tubes and electrolytic capacitors, mainly targeted at high-end audiophile and guitar amplifier applications.
Products
Small signal vacuum tubes
ECC81/12AT7, dual triode
ECC82/12AU7, dual triode
ECC83/12AX7, dual triode
ECC88/6DJ8, dual triode
12BH7, dual triode
5751, low-noise dual triode
EF86, sharp-cutoff pentode
6SN7, dual triode
Power vacuum tubes
300B, directly heated power triode
2A3, directly heated power triode
EL34, power pentode
EL84, power pentode
6V6, beam tetrode
6L6GC, beam tetrode
5881, beam tetrode
6550, beam tetrode
6CA7, beam tetrode
KT66, beam tetrode
KT77, beam tetrode
KT88, beam tetrode
Rectifiers
GZ34, indirectly heated full-wave rectifier
5U4GB, directly heated full-wave rectifier
5Y3, directly heated full-wave rectifier
EZ81, indirectly heated full-wave rectifier (noval base)
References
External links
Guitar amplification tubes
Manufacturing companies of Slovakia
Companies of Slovakia
Companies established in 1993
Slovak brands
Vacuum tubes | JJ Electronic | [
"Physics"
] | 567 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
47,083,767 | https://en.wikipedia.org/wiki/Leukotriene%20receptor | The leukotriene (LT) receptors are G protein-coupled receptors that bind and are activated by the leukotrienes. They include the following proteins:
Leukotriene B4 receptors (BLTRs) – bind to and are activated by LTB4:
BLT1 (Leukotriene B4 receptor 1) –
BLT2 (Leukotriene B4 receptor 2) –
Cysteinyl leukotriene receptors (CysLTRs) – bind to and are activated by LTC4, LTD4, and LTE4:
CysLT1 (Cysteinyl leukotriene receptor 1) –
CysLT2 (Cysteinyl leukotriene receptor 2) –
The recently elucidated CysLTE, represented by GPR99/OXGR1, may constitute a third CysLTR.
See also
Eicosanoid receptor
Oxoeicosanoid receptor
Prostaglandin receptor
Thromboxane receptor
References
External links
IUPHAR GPCR Database – Leukotriene receptors
Eicosanoids
G protein-coupled receptors | Leukotriene receptor | [
"Chemistry",
"Biology"
] | 237 | [
"Biotechnology stubs",
"Signal transduction",
"G protein-coupled receptors",
"Biochemistry stubs",
"Biochemistry"
] |
47,084,051 | https://en.wikipedia.org/wiki/PI-RADS | PI-RADS is an acronym for Prostate Imaging Reporting and Data System, defining standards of high-quality clinical service for multi-parametric magnetic resonance imaging (mpMRI), including image creation and reporting.
History
In 2007, the AdMeTech Foundation's International Prostate MRI Working Group convened the key global experts, including members of the European Society of Urogenital Radiology (ESUR) and the American College of Radiology (ACR). In March 2009 an ESUR Prostate MRI Committee was formed in Vienna, with the aim of producing minimal and maximal standards for the acquisition and reporting of prostate MRI. This standardization was endorsed by the results of a consensus meeting held in London in December 2009.
Dr. Jelle Barentsz and the ESUR Prostate MRI Committee published the first version of PI-RADS (v1) in December 2011. Following this initiative, the ACR, ESUR and the AdMeTech Foundation formed a Joint Steering Committee and by 2016 had published a second version of PI-RADS (v2) in European Urology. This paper helped prostate MRI gain acceptance among urologists and was awarded “Best clinical scientific paper of 2016 in European Urology”. In 2019 the PI-RADS Steering Committee published an updated version, PI-RADS v2.1.
Purpose
The aim of prostate MRI using PI-RADS is to assess the risk of clinically significant prostate cancer being present. Furthermore, the PI-RADS v2 system is designed to standardize prostate MRI.
Performance
Various studies have compared the predictive performance of PI-RADS v1 for detecting significant prostate cancer against image-guided biopsy results (definitive pathology) and/or prostatectomy specimens (histopathology). In a 2015 article in the Journal of Urology, Thompson reported that multi-parametric MRI detection of significant prostate cancer had a sensitivity of 96%, specificity of 36%, and negative and positive predictive values of 92% and 52%; when PI-RADS was incorporated into a multivariate analysis (PSA, digital rectal exam, prostate volume, patient age), the area under the curve (AUC) improved from 0.776 to 0.879, p<0.001. A similar paper in European Radiology found that, when correlated with histopathology, PI-RADS v2 correctly identified 94–95% of prostate cancer foci ≥0.5 mL but was limited for the assessment of GS ≥4+3 (significant) tumors ≤0.5 mL; in their series, DCE-MRI offered limited added value to T2WI+DW-MRI. Other applications for which PI-RADS may be useful include prediction of termination of Active Surveillance due to tumor progression/aggressiveness, detection of extraprostatic extension of prostate cancer, and supplemental information when considering whether to re-biopsy patients with a history of previous negative biopsy.
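For readers unfamiliar with how these summary statistics interrelate, the predictive values follow from sensitivity (Se), specificity (Sp) and disease prevalence $p$; the prevalence used below is an assumption chosen for illustration and is not reported in the passage above:

$$\mathrm{PPV} = \frac{\mathrm{Se}\,p}{\mathrm{Se}\,p + (1-\mathrm{Sp})(1-p)}, \qquad \mathrm{NPV} = \frac{\mathrm{Sp}\,(1-p)}{\mathrm{Sp}\,(1-p) + (1-\mathrm{Se})\,p}.$$

With Se = 0.96, Sp = 0.36 and an assumed prevalence of about 0.42, these give PPV ≈ 0.52 and NPV ≈ 0.93, consistent with the figures quoted above.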
PI-RADS v2 is designed to improve detection, characterization and risk stratification in patients suspected of prostate cancer with a goal of better treatment decisions, improved outcomes and simplified reporting. However, multi-center validation trials are needed and expected to lead to modifications in the scoring system.
Interactive Calculators
Calculators designed to assist with PI-RADS criteria application have been developed to streamline the evaluation of prostate MRI. These tools, while not officially endorsed by the ACR, are becoming more popular among radiologists for their ability to reduce variability and improve diagnostic efficiency. Examples include PI-RADS v. 2.1 calculators available on independent online platforms, which helps in systematically applying the PI-RADS scoring system. These tools are increasingly recognized for their potential to enhance clinical workflow and reporting accuracy.
References
Medical imaging
Prostate cancer
Magnetic resonance imaging
Radiology | PI-RADS | [
"Chemistry"
] | 782 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
36,858,969 | https://en.wikipedia.org/wiki/C17H25NO3 | The molecular formula C17H25NO3 (molar mass: 291.38 g/mol, exact mass: 291.1834 u) may refer to:
Cyclopentolate
EA-3834
Levobunolol
Mesembranol
Pecilocin
Molecular formulas | C17H25NO3 | [
"Physics",
"Chemistry"
] | 61 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,861,734 | https://en.wikipedia.org/wiki/Outer%20membrane%20porin%20D | Outer membrane porin D is a protein family containing bacterial outer membrane porins which are involved in transport of cationic amino acids, peptides, antibiotics and other compounds.
It was also described as having some serine protease activity. However, many of these proteins are not peptidases and are classified as non-peptidase homologues, as they either have been found experimentally to lack peptidase activity or lack amino acid residues believed to be essential for the catalytic activity of peptidases in the S43 family.
References
Outer membrane proteins
Protein families | Outer membrane porin D | [
"Biology"
] | 124 | [
"Protein families",
"Protein classification"
] |
36,862,865 | https://en.wikipedia.org/wiki/Contractor%20management | Contractor management is the managing of outsourced work performed for an individual company. Contractor management implements a system that manages contractors' health and safety information, insurance information, training programs and specific documents that pertain to the contractor and the owner client. Most modern contracts require the effective use of contract management software to aid administration between multiple parties.
Risk and control
Risk increases with the loss of control from outsourcing work. Keeping work in-house gives an Owner Client complete control over the production or services provided including quality, durability, and consistency. Outsourcing the work reduces the amount of control held over these aspects. While contracts and agreements can be set in place to control the end product, the Owner Client cannot have complete assurance that their requirements are being met.
With the continuing outsourcing of production, companies struggle to standardize their contractor management processes. Requirements and regulations from the U.S. Occupational Safety and Health Administration and other governing bodies are constantly changing. Companies need to have full visibility into the quality of work their hired contractors have performed in the past and are performing now, and this often proves difficult.
There are tools that may measure the contractor's level of performance. For example, many large refineries have integrated their gate access control system to contract management software. This provides real-time access to the performance of the contractor workforce within the refinery.
Components
Effective contractor management relies first on a standardized prequalification form (PQF). A quality prequalification form will also allow for customized functionality, as needed. The prequalification form ensures that the necessary steps are in place for a contractor to work safely and sustainably, prior to establishing an agreement, or allowing a vendor to come on-site.
A pre-qualification form (or an explanation of the requirements) is provided before bidding or quoting to assure and include the requirements in work plans and budgets.
Specifically, the prequalification form will allow the organization to track the most important aspect of contractor management – contractor prequalification – across these essential dynamics:
Financial stability
Regulatory citation history
Safety and Health statistics and programs
Environmental protection programs
Background checks and Security programs
Sustainability/ Social Responsibility background and programs, including Human Rights.
Site-specific requirements (as needed)
Major projects performed, including references
The length of time the contractor has been in business
Services performed, and a risk ranking based on the contractor's trade
Insurance coverage and limits, additional insured, and waiver of subrogation
A thorough prequalification form with each of these components is used to verify incidence rates and ensures that the contractor's insurance certification is in line with company requirements.
The prequalification form is then reviewed for OSHA logs and Experience Modification Ratings (EMR) to unearth any inconsistencies and to verify the contractor license status. Finally, references are contacted to provide actual work history and experience to further certify that the contractor is prequalified for performing work at that location.
Auditing
Once the prequalification form has been filled out, the contractor must be monitored for compliance; this is a significant part of contractor management. An audit is conducted based on the services performed by the contractor and the risk associated with those services.
For contractors involved in a higher-risk trade (e.g. electricians, lockout-tagout and confined space workers), the audit is an essential part of reviewing the contractor's safety program. Many of these high-risk workers are involved in potentially life-threatening situations and must demonstrate their ability to protect employees and their clients from harmful situations.
In order to determine that the contractor possesses an adequate understanding of both the work to be performed and the safety standards that must be followed, the following questions are asked:
Is the Safety Program that is currently in place adequate?
Has the written program Safety Program been implemented?
Is the inspection of critical pieces of equipment being done?
Does the program allow for customization in the program based on risk?
Is the Safety Program specific to the contractor and the services provided?
Are the essential programs addressed, based on the services that will be performed?
Is there a method to ensure the training based on written programs is being conducted?
Are Job Hazard Analyses (JHAs), job-site inspections and other hazard identification / mitigation techniques being used and documented?
These questions are then used as indicators to determine the contractor's level of performance. The audit ensures that the contractor has prepared for, and is enforcing, safety guidelines in their everyday practices. Essentially, the audit provides a third-party confirmation that the information supplied in the prequalification form is both accurate and up-to-date.
Database
A database can be used to record and access vendor data within a contractor management program. The database needs to be updated regularly to ensure that all stakeholders are kept informed of any changes, particularly if the contractor management program is being used to eliminate subpar performers. The use of an online contractor management database facilitates the sharing of contractor data in a secure format with all necessary users, with 24/7/365 availability.
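A minimal sketch of how such a vendor record and a simple compliance check might look in code is given below; the field names, thresholds and dates are illustrative assumptions and do not describe any particular contractor management product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContractorRecord:
    name: str
    trade: str                  # used for service/risk ranking
    emr: float                  # experience modification rating
    insurance_expiry: date      # certificate of insurance expiry date
    prequalified: bool          # prequalification form reviewed and approved

def needs_review(record: ContractorRecord, today: date, emr_limit: float = 1.0) -> bool:
    """Flag a contractor whose prequalification is missing, whose insurance
    has lapsed, or whose EMR exceeds the owner client's limit."""
    return (not record.prequalified
            or record.insurance_expiry < today
            or record.emr > emr_limit)

# Illustrative usage with invented data
vendor = ContractorRecord(name="Example Electrical Co.", trade="electrical",
                          emr=0.85, insurance_expiry=date(2025, 6, 30),
                          prequalified=True)
print(needs_review(vendor, today=date(2025, 7, 15)))  # True: insurance expired
```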
Benefits
The purpose of a contractor management program is to better centralize, qualify and monitor a contingent workforce. As a result of implementing a contractor management program, an organization can expect to experience some, or all, of the following advantages:
Cost savings
Better supplier/client relationships
Higher quality contractors and suppliers
Less paperwork for both owners and contractors. Some contractor management programs are cloud based software platforms that allow contractors to manage their own permits, licenses, inductions, training records and other issues that facilitate organisational and regulatory compliance.
Instant information sharing & evergreen qualification
Reduced risk – continuous improvement in loss control
Moving towards using leading indicators vs. lagging indicators
Contractor awareness of regulations and best practices such as VPP and PSM
These advantages are both immediate and long-standing. A comprehensive web-based contractor management program from a reputable firm can provide a reliable basis for prequalifying contractors, vendors and other suppliers of goods and services.
Mitigating risk
There are two major considerations when managing contractors. First is deciding on the criteria for evaluation and second is developing an effective management process to evaluate these criteria. There are a number of criteria on which a contractor's safety can be evaluated, such as historical and future trend information.
See also
Contingent labor
Contingent workforce
Vendor management system
References
Business software
Human resource management
Outsourcing
Process safety | Contractor management | [
"Chemistry",
"Engineering"
] | 1,284 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
36,868,208 | https://en.wikipedia.org/wiki/Y-Set%20%28intravenous%20therapy%29 | In intravenous therapy a Y-Set, T-Set and V-Sets are Y-, T- and V-shaped three-way connector sets made of connecting plastic tubes used for delivering intravenous drugs into the body from multiple fluid sources. As Y-Sets are the most common shaped sets, Y-Set is a name that is sometimes used to represent the family of connector sets (sometimes called Y-tubes). The majority of these infusion sets have a left and right hand line that deliver fluid and drugs (often via a valve) to a short common limb attached to the female fitting on the intravenous cannulae.
Applications
3-way connectors allow for "piggybacking", that is, putting a second infusion set onto the same line, such as adding a dose of antibiotics to a continuous volume expander drip, with the etymology being to refer to the second infusion as "riding on the back" of the first one.
Most 3+ way connectors can be opened to allow an infusion limb and a vertical limb to deliver fluid via a common limb to the female fitting of an IV cannula. V-shaped fittings allow multiple limbs to flow directly to the patient with no common space. Because the different tubes of these infusion sets usually carry different fluids at different flow rates, there is a risk that the common space (dead volume) of Y-Sets and T-Sets fills with a high-concentration drug and accidentally gets flushed out at a high flow rate.
References
Intravenous fluids
Medical equipment
Routes of administration | Y-Set (intravenous therapy) | [
"Chemistry",
"Biology"
] | 325 | [
"Pharmacology",
"Medical technology",
"Medical equipment",
"Routes of administration"
] |
36,869,441 | https://en.wikipedia.org/wiki/Sigma%20Piscium | Sigma Piscium (Sigma Psc, σ Piscium, σ Psc) is a main-sequence star in the zodiac constellation of Pisces. It has an apparent magnitude of +5.50, meaning it is barely visible to the naked eye, according to the Bortle scale. While parallax measurements by the Hipparcos spacecraft give a distance of approximately 430 light years (133 parsecs), dynamical parallax measurements put it slightly closer, at 368 light-years (113 parsecs) from Earth.
Sigma Piscium is a spectroscopic binary system, meaning the components of the system have been detected from periodic Doppler shifts in their spectra. In this case, light from both stars can be detected and it is double-lined. It has an orbital period of 81 days, and the orbit is relatively eccentric, at about 0.9. Both components are B-type main-sequence stars.
Sigma Piscium is moving through the Milky Way at a speed of 23.5 km/s relative to the Sun. Its projected galactic orbit carries it between 24,300 and 29,400 light years from the center of the galaxy.
Sigma Piscium was a latter designation of 40 Andromedae.
Naming
In Chinese, 奎宿 (Kuí Sù), meaning Legs, refers to an asterism consisting of σ Piscium, η Andromedae, 65 Piscium, ζ Andromedae, ε Andromedae, δ Andromedae, π Andromedae, ν Andromedae, μ Andromedae, β Andromedae, τ Piscium, 91 Piscium, υ Piscium, φ Piscium, χ Piscium and ψ1 Piscium. Consequently, the Chinese name for σ Piscium itself is (, .) Sigma Piscium, however, is also identified with Kuísùzēngshíwǔ (奎宿增十五), the 15th additional star in the Legs asterism.
References
Piscium, Sigma
Pisces (constellation)
B-type main-sequence stars
Spectroscopic binaries
Piscium, 069
004889
0291
006918
Durchmusterung objects | Sigma Piscium | [
"Astronomy"
] | 467 | [
"Pisces (constellation)",
"Constellations"
] |
38,282,497 | https://en.wikipedia.org/wiki/Gutmann%E2%80%93Beckett%20method | In chemistry, the Gutmann–Beckett method is an experimental procedure used by chemists to assess the Lewis acidity of molecular species. Triethylphosphine oxide (, TEPO) is used as a probe molecule and systems are evaluated by 31P-NMR spectroscopy. In 1975, used 31P-NMR spectroscopy to parameterize Lewis acidity of solvents by acceptor numbers (AN). In 1996, Michael A. Beckett recognised its more generally utility and adapted the procedure so that it could be easily applied to molecular species, when dissolved in weakly Lewis acidic solvents. The term Gutmann–Beckett method was first used in chemical literature in 2007.
Background
The 31P chemical shift (δ) of Et3PO is sensitive to its chemical environment but is usually found between +40 and +100 ppm. The O atom in Et3PO is a Lewis base, and its interaction with Lewis acid sites causes deshielding of the adjacent P atom. Gutmann, a chemist renowned for his work on non-aqueous solvents, described an acceptor-number scale for solvent Lewis acidity with two reference points relating to the 31P NMR chemical shift of Et3PO in the weakly Lewis acidic solvent hexane (δ = 41.0 ppm, AN 0) and in the strongly Lewis acidic solvent SbCl5 (δ = 86.1 ppm, AN 100). Acceptor numbers can be calculated from AN = 2.21 x (δsample – 41.0), and higher AN values indicate greater Lewis acidity. It is generally known that there is no single universal order of Lewis acid strengths (or Lewis base strengths) and that two parameters (or two properties) are needed (see HSAB theory and ECW model) to define acid and base strengths, so single-parameter or single-property scales are limited to a smaller range of acids (or bases). The Gutmann–Beckett method is based on a single-parameter NMR chemical shift scale but is commonly used due to its experimental convenience.
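As a worked example of the acceptor-number formula, the SbCl5 reference point quoted above reproduces AN ≈ 100, and the chemical shift implied for B(C6F5)3 (AN 82, mentioned below) can be back-calculated; the 78.1 ppm value is derived from the published AN rather than quoted in this text:

$$\mathrm{AN} = 2.21\,(\delta_{\text{sample}} - 41.0):\qquad 2.21\,(86.1 - 41.0) \approx 99.7 \approx 100\ (\mathrm{SbCl_5}); \qquad \mathrm{AN} = 82 \;\Rightarrow\; \delta_{\text{sample}} = \frac{82}{2.21} + 41.0 \approx 78.1\ \mathrm{ppm}.$$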
Application to boranes
Boron trihalides are archetypal Lewis acids and have AN values between 89 (BF3) and 115 (BI3). The Gutmann–Beckett method has been applied to fluoroarylboranes such as B(C6F5)3 (AN 82), and borenium cations, and its application to these and various other boron compounds has been reviewed.
Application to other compounds
The Gutmann–Beckett method has been successfully applied to alkaline earth metal complexes, p-block main group compounds (e.g. AlCl3, AN 87; silylium cations; [E(bipy)2]3+ (E = P, As, Sb, Bi) cations; cationic 4-coordinate Pv and Sbv derivatives) and transition-metal compounds (e.g. TiCl4, AN 70).
References
Acid–base chemistry
Nuclear magnetic resonance experiments | Gutmann–Beckett method | [
"Chemistry"
] | 612 | [
"Acid–base chemistry",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance experiments",
"Equilibrium chemistry",
"nan"
] |
38,289,515 | https://en.wikipedia.org/wiki/Extracellular%20RNA | Extracellular RNA (exRNA) describes RNA species present outside of the cells in which they were transcribed. Carried within extracellular vesicles, lipoproteins, and protein complexes, exRNAs are protected from ubiquitous RNA-degrading enzymes. exRNAs may be found in the environment or, in multicellular organisms, within the tissues or biological fluids such as venous blood, saliva, breast milk, urine, semen, menstrual blood, and vaginal fluid. Although their biological function is not fully understood, exRNAs have been proposed to play a role in a variety of biological processes including syntrophy, intercellular communication, and cell regulation. The United States National Institutes of Health (NIH) published in 2012 a set of Requests for Applications (RFAs) for investigating extracellular RNA biology. Funded by the NIH Common Fund, the resulting program was collectively known as the Extracellular RNA Communication Consortium (ERCC). The ERCC was renewed for a second phase in 2019.
Background
Both prokaryotic and eukaryotic cells are known to release RNA, and this release can be passive or active. The Endosomal Sorting Complex Required for Transport (ESCRT) machinery was previously considered as a possible mechanism for RNA secretion from the cell, but more recently research studying microRNA secretion in human embryonic kidney cells and Cercopithecus aethiops kidney cells identified neutral sphingomyelinase 2 (nSMase2), an enzyme involved in ceramide biosynthesis, as a regulator of microRNA secretion levels. ExRNAs are often found packaged within vesicles such as exosomes, ectosomes, prostasomes, microvesicles, and apoptotic bodies. Although RNAs can be excreted from the cell without an enveloping container, ribonucleases present in extracellular environments would eventually degrade the molecule.
Types
Extracellular RNA should not be viewed as a category describing a set of RNAs with a specific biological function or belonging to a particular RNA family. Similar to the term "non-coding RNA", "extracellular RNA" defines a group of several types of RNAs whose functions are diverse, yet they share a common attribute which, in the case of exRNAs, is existence in an extracellular environment. The following types of RNA have been found outside the cell:
Messenger RNA (mRNA)
Transfer RNA (tRNA)
MicroRNA (miRNA)
Small interfering RNA (siRNA)
Long non-coding RNA (lncRNA)
Though prevalent inside the cell, ribosomal RNA (rRNA) does not seem to be a common exRNA. Efforts by Valadi et al. to characterize exosomal RNA using Agilent Bioanalyzer technology showed little to no trace of 18S and 28S rRNA in exosomes secreted by MC/9 murine mast cells, and similar conclusions were reached by Skog et al. for rRNA in glioblastoma microvesicles.
Function
To function or even survive as full-length RNA in extracellular environments, exRNA must be protected from digestion by RNases. This requirement does not apply to prokaryotic syntrophy, where digested nucleotides are recycled. exRNA can be shielded from RNases by RNA binding proteins (RBPs), on their own or within/associated with lipoprotein particles and extracellular vesicles. Extracellular vesicles in particular are thought to be a way to transport RNA between cells, in a process that may be general or highly specific, for example, due to incorporation of markers of the parent cell that may be recognized by receptors on the recipient cell. Biochemical evidence supports the idea that exRNA uptake is a common process, suggesting new pathways for intercellular communication. As a result, the presence, absence, and relative abundance of certain exRNAs can be correlated with changes in cellular signaling and may indicate specific disease states.
Despite a limited understanding of exRNA biology, current research has shown the role of exRNAs to be multi-faceted. Extracellular miRNAs are capable of targeting mRNAs in the recipient cell through RNA interference pathways. In vitro experiments have shown that transfer of specific exRNAs into recipient cells can inhibit protein expression and prevent cancer cell growth. In addition to mRNAs being regulated by exRNAs, mRNAs can act as exRNAs to carry genetic information between cells. Messenger RNA contained in microvesicles secreted from glioblastoma cells was shown to generate a functional protein in recipient (human brain microvascular endothelial) cells in vitro. In another study of extracellular mRNAs, mRNAs transported by microvesicles from endothelial progenitor cells (EPCs) to human microvascular and macrovascular endothelial cells triggered angiogenesis in both the in vitro and in vivo settings. Work by Hunter et al. used Ingenuity Pathway Analysis (IPA) software to associate exRNAs found in human blood microvesicles with pathways involved in blood cell differentiation, metabolism, and immune function. These experimental and bioinformatics analyses favor the hypothesis that exRNAs play a role in numerous biological processes.
Detection
Several methods have been developed or adapted to detect, characterize, and quantify exRNA from biological samples. RT-PCR, cDNA microarrays, and RNA sequencing are common techniques for RNA analysis. Applying these methods to study exRNAs mainly differs from cellular RNA experiments in the RNA isolation and/or extraction steps.
RT-PCR
For known exRNA nucleotide sequences, RT-PCR can be applied to detect their presence within a sample as well as to quantify their abundance. This is done by first reverse transcribing the RNA sequence into cDNA; the cDNA then serves as a template for PCR amplification. The major benefits of RT-PCR are its quantitative accuracy over a wide dynamic range and its increased sensitivity compared to methods such as RNase protection assays and dot blot hybridization. The disadvantages of RT-PCR are the requirement for costly supplies and the need for sound experimental design and an in-depth understanding of normalization techniques in order to obtain accurate results and conclusions.
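A common way to handle the normalization step is the comparative threshold-cycle (2^−ΔΔCt) method, in which the target's Ct value is first normalized to a reference gene and then to a calibrator sample. The sketch below is a minimal illustration of that calculation only; the Ct values are hypothetical and the method assumes roughly equal, near 100% amplification efficiencies for target and reference.

```python
def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by the comparative 2^-ddCt (Livak) method.

    Normalizes the target gene's Ct to a reference gene in both the test
    and the control (calibrator) sample, assuming ~100% PCR efficiency.
    """
    delta_ct_test = ct_target_test - ct_ref_test    # normalize the test sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize the calibrator
    ddct = delta_ct_test - delta_ct_ctrl
    return 2.0 ** (-ddct)

# Hypothetical Ct values for an exRNA target and a reference RNA in two samples
print(relative_expression(24.1, 18.0, 26.3, 18.2))  # -> 4.0-fold higher in the test sample
```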
Microfluidics
Microfluidic platforms such as the Agilent Bioanalyzer are useful in assessing the quality of exRNA samples. The Agilent Bioanalyzer is a lab-on-a-chip technology that takes a sample of isolated RNA and measures the length and quantity of the RNA it contains; the results of the experiment can be represented as a digital electrophoresis gel image or an electropherogram. Because a diverse range of RNAs can be detected by this technology, it is an effective method for determining, through size characterization, what types of RNAs are present in exRNA samples.
cDNA microarrays
Microarrays allow for larger-scale exRNA characterization and quantification. Microarrays used for RNA studies first generate different cDNA oligonucleotides (probes) that are attached to the microarray chip. An RNA sample can then be added to the chip, and RNAs with sequence complementarity to the cDNA probe will bind and generate a fluorescent signal that can be quantified. Micro RNA arrays have been used in exRNA studies to generate miRNA profiles of bodily fluids.
RNA sequencing
The advent of massively parallel sequencing (next-generation sequencing) led to variations in DNA sequencing that allowed for high-throughput analyses of many genomic properties. Among these DNA sequencing-derived methods is RNA sequencing. The main advantage of RNA sequencing over other methods for exRNA detection and quantification is its high-throughput capability. Unlike microarrays, RNA sequencing is not constrained by factors such as oligonucleotide generation and the number of probes that can be added to a chip. Indirect RNA sequencing of exRNA samples involves generating a cDNA library from the exRNAs followed by PCR amplification and sequencing. In 2009, Helicos Biosciences published a method for directly sequencing RNA molecules called Direct RNA sequencing (DRS™). Regardless of the RNA sequencing platform, inherent biases exist at various steps in the experiment, but methods have been proposed to correct for these biases with promising results.
Clinical significance
As growing evidence supports the function of exRNAs as intercellular communicators, research efforts are investigating the possibility of utilizing exRNAs in disease diagnosis, prognosis, and therapeutics.
Biomarkers
The potential of extracellular RNAs to serve as biomarkers is significant not only because of their role in intercellular signaling but also due to developments in next generation sequencing that enable high throughput profiling. The simplest form of an exRNA biomarker is the presence (or absence) of a specific extracellular RNA. These biological signatures have been discovered in exRNA studies of cancer, diabetes, arthritis, and prion-related diseases. Recently, a bioinformatics analysis of extracellular vesicles extracted from Trypanosoma cruzi, in which SNPs were mined from transcriptomic data, suggested that exRNAs could be biomarkers of neglected diseases such as Chagas disease.
Cancer
A major research area of interest for exRNA has been its role in cancer. The table below (adapted from Kosaka et al.) lists several types of cancer in which exRNAs have been shown to be associated:
See also
Environmental DNA
non-coding RNA
International Society for Extracellular Vesicles
Journal of Extracellular Vesicles
References
External links
NIH request for applications for 'Reference Profiles of Human Extracellular RNA'
International Society for Extracellular Vesicles
miRandola: Extracellular Circulating microRNAs Database
RNA
Molecular genetics
Non-coding RNA | Extracellular RNA | [
"Chemistry",
"Biology"
] | 2,038 | [
"Molecular genetics",
"Molecular biology"
] |
46,242,766 | https://en.wikipedia.org/wiki/Rainbow%20gravity%20theory | Rainbow gravity (or "gravity's rainbow") is a theory that different wavelengths of light experience different gravity levels and are separated in the same way that a prism splits white light into the rainbow. This phenomenon would be imperceptible in areas of relatively low gravity, such as Earth, but would be significant in areas of extremely high gravity, such as a black hole. As such the theory claims to disprove that the universe has a beginning or Big Bang, as the big bang theory calls for all wavelengths of light to be impacted by gravity to the same extent. The theory was first proposed in 2003 by physicists Lee Smolin and João Magueijo, and claims to bridge the gap between general relativity and quantum mechanics. Scientists are currently attempting to detect rainbow gravity using the Large Hadron Collider.
Background
Rainbow gravity theory's origin is largely the product of the disparity between general relativity and quantum mechanics. More specifically, "locality," or the concept of cause and effect that drives the principles of general relativity, is mathematically irreconcilable with quantum mechanics. This issue is due to incompatible functions between the two fields; in particular, the fields apply radically different mathematical approaches in describing the concept of curvature in four-dimensional space-time. Historically, this mathematical split begins with the disparity between Einstein's theories of relativity, which saw physics through the lens of causality, and classical physics, which interpreted the structure of space-time to be random and inherent.
The prevailing notion about cosmic change is that the universe is expanding at a constantly accelerating rate; moreover, it is understood that as one traces the universe's history backwards one finds that it was, at one point, far denser. If true, the Rainbow gravity theory prohibits a singularity such as that which is postulated in the Big Bang. This indicates that, when viewed in reverse, the universe slowly approaches a point of terminal density without ever reaching it, implying that the universe does not possess a point of origin.
Criticism
There are stringent constraints on energy-dependent speed-of-light scenarios. Based on these, Sabine Hossenfelder has strongly criticised the rainbow gravity concept, stating that "It is neither a theory nor a model, it is just an idea that, despite more than a decade of work, never developed into a proper model. Rainbow gravity has not been shown to be compatible with the standard model. There is no known quantization of this approach and one cannot describe interactions in this framework at all. Moreover, it is known to lead to non-localities which are ruled out already. For what I am concerned, no papers should get published on the topic until these issues have been resolved."
See also
Steady-state model
Eternal inflation
Cyclic model
References
Physical cosmology
Astrophysics theories
Theories of gravity | Rainbow gravity theory | [
"Physics",
"Astronomy"
] | 578 | [
"Astrophysics theories",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Theories of gravity",
"Physical cosmology"
] |
46,244,530 | https://en.wikipedia.org/wiki/Extended%20interaction%20oscillator | The extended interaction oscillator (EIO) is a linear-beam vacuum tube designed to convert direct current to RF power. The conversion mechanism is the space charge wave process whereby velocity modulation in an electron beam transforms to current or density modulation with distance.
The tubes contain a single resonator. The complete cavity is a rectangular box containing a ladder-like structure through which the electron beam passes. Such a cavity has a large number of resonances but in the resonant mode used, large RF fields are developed in the gaps between the rungs. The phase advance from gap to gap is selected in such a way that an electron sees the same field at every gap, and it is described as being synchronous. In this context, the same field means a field of the same phase but not necessarily the same magnitude.
An electron beam which enters an RF excited cavity with approximately synchronous velocity will receive cumulative velocity modulation at each gap. After some distance into the resonator, repeatedly accelerated electrons will be catching up with electrons repeatedly decelerated, and bunches will form. These bunches will have a velocity close to the beam velocity. If the electron velocity is somewhat greater than synchronous, the bunches will start to cross gaps when the field is retarding, rather than zero. When this happens, the electrons are slowed; their lost energy is gained by the cavity and sustained oscillations become possible. As the velocity of the beam entering the cavity is increased further, more energy is transferred to the cavity and the frequency of oscillation rises somewhat. Eventually, however, the bunches punch through the retarding fields and oscillations cease abruptly. Reducing the beam velocity (voltage) will cause the tube to resume oscillation. However, it is necessary to reduce the beam velocity below the value at which oscillations ceased before oscillation will start again. This phenomenon is known as hysteresis and is similar to that observed in many reflex klystrons.
The frequency change which occurs as the beam voltage is raised is referred to as electronic tuning, and is typically 0.2% of the operating frequency measured from half power to cessation of oscillation. For larger frequency changes mechanical tuning is used which is obtained by moving one wall of the cavity. The moveable wall is, in fact, a piston which can be moved in a tunnel whose cross-section is that of the wall which it replaces. The range of mechanical tuning is usually limited by parasitic resonances which occur when the oscillating frequency and the frequency of one of the many other cavity resonances coincide. When this happens, serious loss is introduced, often enough to suppress oscillation completely. Typically, a mechanical tuning range of 4% can be obtained but greater ranges have been demonstrated.
Apart from the resonant cavity, the Extended Interaction Oscillator is very similar to more conventional klystrons. An electron gun produces a narrow beam of electrons which is maintained at the required diameter by a magnetic field while it passes through the RF section. Thereafter, the beam enters a relatively field-free region where it spreads out and is collected by an appropriately cooled collector. Many of these oscillators have electrically isolated anodes and in these cases, the voltage between the cathode and anode determines the tube current which in turn determines the maximum power output.
Vacuum tubes
Electron beam
Electronic oscillators | Extended interaction oscillator | [
"Physics",
"Chemistry"
] | 708 | [
"Electron",
"Electron beam",
"Vacuum tubes",
"Vacuum",
"Matter"
] |
46,250,924 | https://en.wikipedia.org/wiki/Cubical%20set | In topology, a branch of mathematics, a cubical set is a set-valued contravariant functor on the category of (various) n-cubes.
Cubical sets have been often considered as an alternative to simplicial sets in combinatorial topology, including in the early work of Daniel Kan and Jean-Pierre Serre. They have also been developed in computer science, in particular in concurrency theory and in homotopy type theory.
See also
Simplicial presheaf
References
nLab, Cubical set.
Rick Jardine, Cubical sets, Lecture 12 in "Lectures on simplicial presheaves" https://web.archive.org/web/20110104053206/http://www.math.uwo.ca/~jardine/papers/sPre/index.shtml
Topology | Cubical set | [
"Physics",
"Mathematics"
] | 185 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
34,296,445 | https://en.wikipedia.org/wiki/Atlas%20of%20UTR%20Regulatory%20Activity | The Atlas of UTR Regulatory Activity (AURA), a biological database, now at its second version, is a manually curated and comprehensive catalog of human 5' and 3' untranslated sequences (UTR) and UTR regulatory annotations.
It includes basic annotation, phylogenetic conservation, binding sites for RNA-binding proteins and miRNA, cis-elements, RNA methylation and editing data, and more, for human and mouse. Through its intuitive web interface, it furthermore provides full access to a wealth of information that integrates RNA sequence and structure data, variation sites, gene synteny, gene and protein expression and gene functional descriptions from scientific literature and specialized databases. Finally, it provides several tools for batch analysis of gene lists, allowing the tracing of post-transcriptional regulatory networks.
See also
Five prime untranslated region
MiRNA
RNA-binding protein
Three prime untranslated region
Untranslated region
References
External links
http://aura.science.unitn.it/.
Biological databases
RNA-binding proteins
RNA | Atlas of UTR Regulatory Activity | [
"Biology"
] | 217 | [
"Bioinformatics",
"Biological databases"
] |
34,296,669 | https://en.wikipedia.org/wiki/Exciton-polariton | In physics, the exciton–polariton is a type of polariton: a hybrid light and matter quasiparticle arising from the strong coupling of the electromagnetic dipolar oscillations of excitons (either in bulk or quantum wells) and photons. Because the light component consists of photons, which are massless particles, exciton–polaritons do not have a rest mass like an ordinary physical particle; this hybrid character is what makes them quasiparticles.
Theory
The coupling of the two oscillators, photons modes in the semiconductor optical microcavity and excitons of the quantum wells, results in the energy anticrossing of the bare oscillators, giving rise to the two new normal modes for the system, known as the upper and lower polariton resonances (or branches). The energy shift is proportional to the coupling strength (dependent, e.g., on the field and polarization overlaps). The higher energy or upper mode (UPB, upper polariton branch) is characterized by the photonic and exciton fields oscillating in-phase, while the LPB (lower polariton branch) mode is characterized by them oscillating with phase-opposition. Microcavity exciton–polaritons inherit some properties from both of their roots, such as a light effective mass (from the photons) and a capacity to interact with each other (from the strong exciton nonlinearities) and with the environment (including the internal phonons, which provide thermalization, and the outcoupling by radiative losses). In most cases the interactions are repulsive, at least between polariton quasi-particles of the same spin type (intra-spin interactions) and the nonlinearity term is positive (increase of total energy, or blueshift, upon increasing density).
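The anticrossing of the bare modes can be made explicit with the standard two-coupled-oscillator picture: diagonalizing a 2×2 Hamiltonian containing the bare cavity photon energy, the bare exciton energy and their coupling yields the upper and lower polariton branches. The energies and coupling strength in the sketch below are purely illustrative and are not tied to any particular microcavity.

```python
import numpy as np

hbar_omega = 10.0                      # Rabi (coupling) energy in meV, illustrative
E_x = 1500.0                           # bare exciton energy in meV, illustrative
detunings = np.linspace(-30, 30, 7)    # cavity-exciton detuning in meV

for d in detunings:
    E_c = E_x + d                                   # bare cavity photon energy
    H = np.array([[E_c, hbar_omega / 2],
                  [hbar_omega / 2, E_x]])           # two-coupled-oscillator Hamiltonian
    lpb, upb = np.linalg.eigvalsh(H)                # lower and upper polariton branches
    print(f"detuning {d:+6.1f} meV   LPB {lpb:8.2f}   UPB {upb:8.2f}   splitting {upb - lpb:6.2f}")
```

At zero detuning the two branches are separated by the coupling (Rabi) energy, with the photon and exciton fields oscillating in phase in one normal mode and in phase opposition in the other.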
Researchers have also studied long-range transport in organic materials coupled to optical microcavities, demonstrating that exciton-polaritons propagate over several microns and that the interplay between the molecular disorder and the long-range correlations induced by coherent mixing with light leads to a mobility transition between diffusive and ballistic transport.
Other features
Polaritons are also characterized by non-parabolic energy–momentum dispersion relations, which limit the validity of the parabolic effective-mass approximation to a small range of momenta.
They also have a spin degree-of-freedom, making them spinorial fluids able to sustain different polarization textures. Exciton-polaritons are composite bosons which can be observed to form Bose–Einstein condensates,
and sustain polariton superfluidity and quantum vortices
and are prospected for emerging technological applications.
Many experimental works currently focus on polariton lasers, optically addressed transistors, nonlinear states such as solitons and shock waves, long-range coherence properties and phase transitions, quantum vortices and spinorial patterns. Modelization of exciton-polariton fluids mainly rely on the use of GPE (Gross–Pitaevskii equations) which are in the form of nonlinear Schrödinger equations.
See also
Bose–Einstein condensation of polaritons
Bose–Einstein condensation of quasiparticles
Polariton
Polariton superfluid
References
External links
YouTube animation explaining what a polariton is in a semiconductor micro-resonator.
Description of experimental research on polariton fluids at the Institute of Nanotechnologies within the Italian CNR.
Quasiparticles | Exciton-polariton | [
"Physics",
"Materials_science"
] | 770 | [
"Quasiparticles",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
34,298,214 | https://en.wikipedia.org/wiki/Dragontrail | Dragontrail is an alkali-aluminosilicate sheet glass manufactured by AGC Inc. It is engineered for a combination of thinness, lightness and damage-resistance, similarly to Corning's proprietary Gorilla Glass. The material's primary properties are its strength, allowing thin glass without fragility; its high scratch resistance; and its hardness with a Vickers hardness test rating of 595 to 673.
Cell phones and tablets
To date, some of the cell phone and tablet models that have incorporated Dragontrail protection are:
Alcatel 7 (By Metro PCS)
Alcatel Idol Alpha
Alcatel Hero 2
Alcatel One Touch Conquest
Alcatel One Touch Idol 3
Alcatel One Touch Idol 4 Pro
Alcatel Tetra
Alcatel TCL LX
Allview P8 Energy mini
Allview V2 Viper i
Allview V1 Viper i4G
Allview Viper E
Allview W1i
Allview W1s
Allview X2 Soul Lite
Allview X2 Soul Style
Allview X2 Soul Style + Platinum
BlackBerry DTEK 50/60
BlackBerry Motion
Bq Aquaris
Cherry Mobile 4/S4/S4 Plus/G1
Crosscall Trekker-S1
Doro 8080
Elephone P8000
eSTAR X45
Flipkart Billion Capture+
Galaxy Nexus
Getnord Onyx
Gionee Marathon M5 lite
Google Pixel 3a and 3a XL
Haier Esteem I70
Highscreen Power Rage Evo
i-mobile IQ 6
InnJoo One
InnJoo Two
Kruger&Matz DRIVE 3
Lava Iris 504q
Lava Pixel V1
Lava Pixel V2
Lava X8
Lava Agni 3 5G (2024)
Lenovo K3 Note (Also known as Lenovo A7000 Turbo in India, Model Number: K50a40)
Lenovo ThinkPad Yoga 12
LYF WATER 5
Meizu M2
Meizu M2 note
Oplus XonPhone 5
Oukitel K10
Philips Xenium I908
Polytron Prime 7
Samsung Galaxy J3 (2016)
Samsung Galaxy M10, M20, M30s
Samsung Galaxy On5, On7, On5 Pro, On7 Pro
Sony Ericsson Xperia Active
Sony Ericsson Xperia Acro S
Sony Xperia X Performance
Sony Xperia Z
Sony Xperia Z1
Sony Xperia Z2
Sony Xperia Z3
Sony Xperia Z5
Sony Xperia Z5 Premium
Stonex One
TrekStor WinPhone 4.7 HD
UMi Z
V341U
Videocon Krypton3 V50JG
WE T1
Wileyfox Spark/Spark+/SparkX
Xiaomi Redmi 1S/2/2 Prime
XOLO 8X-1000
XOLO BLACK 1X
XOLO Q1000
XOLO Q1010i
XOLO Win Q900s
ZTE Avid Plus
ZTE Obsidian
ZTE Sonata 2
References
External links
Patent
Dragontrail Glass comparison with Gorilla Glass
Materials science
Glass engineering and science
Glass applications
Glass trademarks and brands
Japanese brands | Dragontrail | [
"Physics",
"Materials_science",
"Engineering"
] | 615 | [
"Glass engineering and science",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
34,298,507 | https://en.wikipedia.org/wiki/Sweep%20frequency%20response%20analysis | Sweep frequency response analysis (SFRA) is a method to evaluate the mechanical integrity of core, windings and clamping structures within power transformers by measuring their electrical transfer functions over a wide frequency range.
Methods
SFRA is a comparative method, meaning an evaluation of the transformer condition is done by comparing an actual set of SFRA results to reference results. Three methods are commonly used to assess the measured traces:
Time-based – current SFRA results will be compared to previous results of the same unit.
Type-based – SFRA of one transformer will be compared to an equal type of transformer.
Phase comparison – SFRA results of one phase will be compared to the results of the other phases of the same transformer.
Process
Transformers generate a unique signature when tested at discrete frequencies and the response is plotted as a curve. The distance between conductors of the transformer forms a capacitance, and any movement of the conductors or windings will change this capacitance. Because this capacitance is part of a complex network of inductance (L), resistance (R) and capacitance (C), any change in it will be reflected in the curve, or signature.
An initial SFRA test is carried out to obtain the signature of the transformer's frequency response by injecting various discrete frequencies. This reference is then used for future comparisons. A change in winding position, degradation of the insulation, etc. will result in a change in capacitance or inductance, thereby affecting the measured curves.
Tests are carried out periodically or after major external events such as short circuits, and the results are compared against the initial signature to check for any problems.
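In practice, the comparison between a measured trace and its reference is often reduced to simple statistics, such as correlation coefficients computed over a few frequency sub-bands. The sketch below illustrates the idea on synthetic data; the traces, band limits and pass/check threshold are illustrative assumptions rather than values prescribed by any particular standard.

```python
import numpy as np

def band_correlation(freq, trace_ref, trace_new, bands):
    """Correlation coefficient between two SFRA magnitude traces (in dB),
    computed separately for each frequency band (f_low, f_high) in Hz."""
    results = {}
    for f_lo, f_hi in bands:
        mask = (freq >= f_lo) & (freq < f_hi)
        results[(f_lo, f_hi)] = np.corrcoef(trace_ref[mask], trace_new[mask])[0, 1]
    return results

# Synthetic example: identical traces except for a disturbance above 100 kHz
freq = np.logspace(1, 6, 2000)                           # 10 Hz to 1 MHz
ref = -40.0 + 20.0 * np.sin(np.log10(freq))              # made-up reference signature
new = ref.copy()
new[freq > 1e5] += 3.0 * np.sin(freq[freq > 1e5] / 2e4)  # simulated winding movement

bands = [(10, 2e3), (2e3, 2e4), (2e4, 1e5), (1e5, 1e6)]  # illustrative sub-bands
for band, r in band_correlation(freq, ref, new, bands).items():
    verdict = "investigate" if r < 0.98 else "ok"        # illustrative threshold
    print(band, round(r, 4), verdict)
```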
Problem detection
SFRA analysis can detect problems in transformers such as:
winding deformation – axial & radial, like hoop buckling, tilting, spiraling
displacements between high and low voltage windings
partial winding collapse
shorted or open turns
faulty grounding of core or screens
core movement
broken clamping structures
problematic internal connections
Uses
SFRA can be used in the following contexts:
To obtain initial signature of healthy transformer for future comparisons
Periodic checks as part of regular maintenance
Immediately after a major external event like short circuit
Transportation or relocation of transformer
Studying earthquakes
Pre-commissioning check
References
External links
Experimental Investigations to Identify SFRA Measurement Sensitivity for Detecting Faults in Transformers
Experiences with the practical application of Sweep Frequency Response Analysis (SFRA)on power transformers
DIAGNOSIS OF POWER TRANSFORMER THROUGH SWEEP FREQUENCY RESPONSE ANALYSIS AND COMPARISON METHODS
Review of Sweep Frequency Response Analysis -SFRA for Assessment Winding Displacements and Deformation in Power Transformers
Sweep frequency response analysis (SFRA) for the assessment of winding displacements and deformation in power transformers
TRANSFORMER DIAGNOSTICS
Power Transformer Frequency Response Analysis: Examination of Resonance Influences on Frequency Response Analysis Signals as The Traveling Wave is Transmitted Through a Power Transformer
Electrical engineering | Sweep frequency response analysis | [
"Engineering"
] | 570 | [
"Electrical engineering"
] |
34,299,760 | https://en.wikipedia.org/wiki/Parasite%20experiment | In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established.
Further reading
Experimental particle physics | Parasite experiment | [
"Physics"
] | 82 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
34,302,118 | https://en.wikipedia.org/wiki/Balanced%20module | In the subfield of abstract algebra known as module theory, a right R module M is called a balanced module (or is said to have the double centralizer property) if every endomorphism of the abelian group M which commutes with all R-endomorphisms of M is given by multiplication by a ring element. Explicitly, for any additive endomorphism f, if fg = gf for every R endomorphism g, then there exists an r in R such that f(x) = xr for all x in M. In the case of non-balanced modules, there will be such an f that is not expressible this way.
In the language of centralizers, a balanced module is one satisfying the conclusion of the double centralizer theorem, that is, the only endomorphisms of the group M commuting with all the R endomorphisms of M are the ones induced by right multiplication by ring elements.
A ring is called balanced if every right R module is balanced. It turns out that being balanced is a left-right symmetric condition on rings, and so there is no need to prefix it with "left" or "right".
The study of balanced modules and rings is an outgrowth of the study of QF-1 rings by C.J. Nesbitt and R. M. Thrall. This study was continued in V. P. Camillo's dissertation, and later it became fully developed. The paper gives a particularly broad view with many examples. In addition to these references, K. Morita and H. Tachikawa have also contributed published and unpublished results. A partial list of authors contributing to the theory of balanced modules and rings can be found in the references.
Examples and properties
Examples
Semisimple rings are balanced.
Every nonzero right ideal over a simple ring is balanced.
Every faithful module over a quasi-Frobenius ring is balanced.
The double centralizer theorem for right Artinian rings states that any simple right R module is balanced.
The paper contains numerous constructions of nonbalanced modules.
It has been established that uniserial rings are balanced. Conversely, a balanced ring which is finitely generated as a module over its center is uniserial.
Among commutative Artinian rings, the balanced rings are exactly the quasi-Frobenius rings.
Properties
Being "balanced" is a categorical property for modules, that is, it is preserved by Morita equivalence. Explicitly, if F(–) is a Morita equivalence from the category of R modules to the category of S modules, and if M is balanced, then F(M) is balanced.
The structure of balanced rings has also been completely determined and is outlined in the literature.
In view of the last point, the property of being a balanced ring is a Morita invariant property.
The question of which rings have all finitely generated right R modules balanced has already been answered. This condition turns out to be equivalent to the ring R being balanced.
Notes
References
Module theory
Ring theory | Balanced module | [
"Mathematics"
] | 629 | [
"Fields of abstract algebra",
"Ring theory",
"Module theory"
] |
34,304,924 | https://en.wikipedia.org/wiki/Quasiregular%20map | In the mathematical field of analysis, quasiregular maps are a class of continuous maps between Euclidean spaces Rn of the same dimension or, more generally, between Riemannian manifolds of the same dimension, which share some of the basic properties with holomorphic functions of one complex variable.
Motivation
The theory of holomorphic (=analytic) functions of one complex variable is one of the most beautiful and most useful parts of the whole of mathematics.
One drawback of this theory is that it deals only with maps between two-dimensional spaces (Riemann surfaces). The theory of functions
of several complex variables has a different character, mainly because analytic functions of several variables are not conformal. Conformal maps can be defined between Euclidean spaces of arbitrary dimension, but when the dimension is greater than 2, this class of maps is very small: it consists of Möbius transformations only.
This is a theorem of Joseph Liouville; relaxing the smoothness assumptions does not help, as proved by Yurii Reshetnyak.
This suggests the search of a generalization of the property of conformality which would give a rich and interesting class of maps in higher dimension.
Definition
A differentiable map f of a region D in Rn to Rn is called K-quasiregular if the following inequality holds at all points in D:
||Df(x)||^n ≤ K Jf(x).
Here K ≥ 1 is a constant, Jf is the Jacobian determinant, Df is the derivative, that is the linear map defined by the Jacobi matrix, and ||·|| is the usual (Euclidean) norm of the matrix.
The development of the theory of such maps showed that it is unreasonable to restrict oneself to differentiable maps in the classical sense, and that the "correct" class of maps consists of continuous maps in the Sobolev space W^{1,n}_{loc} whose partial derivatives in the sense of distributions have locally summable n-th power, and such that the above inequality is satisfied almost everywhere. This is a formal definition of a K-quasiregular map. A map is called quasiregular if it is K-quasiregular with some K. Constant maps are excluded from the class of quasiregular maps.
Properties
The fundamental theorem about quasiregular maps was proved by Reshetnyak:
Quasiregular maps are open and discrete.
This means that the images of open sets are open and that preimages of points consist of isolated points. In dimension 2, these two properties give a topological characterization of the class of non-constant analytic functions:
every continuous open and discrete map of a plane domain to the plane can be pre-composed with a homeomorphism, so that the result is an analytic function. This is a theorem of Simion Stoilov.
Reshetnyak's theorem implies that all pure topological results about analytic functions (such that the Maximum Modulus Principle, Rouché's theorem etc.) extend to quasiregular maps.
Injective quasiregular maps are called quasiconformal. A simple example of non-injective quasiregular map is given in cylindrical coordinates in 3-space by the formula
(r, θ, z) → (r, 2θ, z).
This map is 2-quasiregular. It is smooth everywhere except the z-axis. A remarkable fact is that all smooth quasiregular maps are local homeomorphisms. Even more remarkable is that every quasiregular local homeomorphism Rn → Rn, where n ≥ 3, is a homeomorphism (this is a theorem of Vladimir Zorich).
This explains why in the definition of quasiregular maps it is not reasonable to restrict oneself to smooth maps: all smooth quasiregular maps of Rn to itself are quasiconformal.
Rickman's theorem
Many theorems about geometric properties of holomorphic functions of one complex variable have been extended to quasiregular maps. These extensions are usually highly non-trivial.
Perhaps the most famous result of this sort is the extension of Picard's theorem which is due to Seppo Rickman:
A K-quasiregular map Rn → Rn can omit at most a finite set.
When n = 2, this omitted set can contain at most one point (this is a simple extension of Picard's theorem). But when n > 2, the omitted set can contain more than one point, and its cardinality can be estimated from above in terms of n and K. In fact, any finite set
can be omitted, as shown by David Drasin and Pekka Pankka.
Connection with potential theory
If f is an analytic function, then log |f| is subharmonic, and harmonic away from the zeros of f. The corresponding fact for quasiregular maps is that log |f| satisfies a certain non-linear partial differential equation of elliptic type.
This discovery of Reshetnyak stimulated the development of non-linear potential theory, which treats this kind of equations
as the usual potential theory treats harmonic and subharmonic functions.
See also
Yurii Reshetnyak
Vladimir Zorich
References
Mathematical analysis | Quasiregular map | [
"Mathematics"
] | 1,045 | [
"Mathematical analysis"
] |
34,307,312 | https://en.wikipedia.org/wiki/Isoflavonoid%20biosynthesis | The biosynthesis of isoflavonoids involves several enzymes. These are:
Liquiritigenin,NADPH:oxygen oxidoreductase (hydroxylating, aryl migration), also known as Isoflavonoid synthase, is an enzyme that uses liquiritigenin (a flavanone), O2, NADPH and H+ to produce 2,7,4'-trihydroxyisoflavanone (an isoflavonoid), H2O and NADP+.
Biochanin-A reductase
Flavone synthase
2'-hydroxydaidzein reductase
2-hydroxyisoflavanone dehydratase
2-hydroxyisoflavanone synthase
Isoflavone 4'-O-methyltransferase
Isoflavone 7-O-methyltransferase
Isoflavone 2'-hydroxylase
Isoflavone 3'-hydroxylase
Isoflavone-7-O-beta-glucoside 6"-O-malonyltransferase
Isoflavone 7-O-glucosyltransferase
4'-methoxyisoflavone 2'-hydroxylase
Pterocarpans biosynthesis
3,9-dihydroxypterocarpan 6a-monooxygenase
Glyceollin synthase
Pterocarpin synthase
See also
Flavonoid biosynthesis
References
External links
http://www.genome.jp/kegg/pathway/map/map00943.html
Isoflavonoids metabolism
Biosynthesis | Isoflavonoid biosynthesis | [
"Chemistry"
] | 359 | [
"Biosynthesis",
"Metabolism",
"Chemical synthesis"
] |
34,307,566 | https://en.wikipedia.org/wiki/MimoDB | MimoDB is a database of peptides that have been selected from random peptide libraries based on their ability to bind small compounds, nucleic acids, proteins, cells, and tissues through phage display.
See also
Mimotope
Phage display
References
External links
https://web.archive.org/web/20121116054757/http://immunet.cn/mimodb/.
Biological databases
Biochemistry methods | MimoDB | [
"Chemistry",
"Biology"
] | 92 | [
"Biochemistry methods",
"Bioinformatics",
"Biochemistry",
"Biological databases"
] |
58,876,827 | https://en.wikipedia.org/wiki/Many-body%20localization | Many-body localization (MBL) is a dynamical phenomenon occurring in isolated many-body quantum systems. It is characterized by the system failing to reach thermal equilibrium, and retaining a memory of its initial condition in local observables for infinite times.
Thermalization and localization
Textbook quantum statistical mechanics assumes that systems go to thermal equilibrium (thermalization). The process of thermalization erases local memory of the initial conditions. In textbooks, thermalization is ensured by coupling the system to an external environment or "reservoir," with which the system can exchange energy. What happens if the system is isolated from the environment, and evolves according to its own Schrödinger equation? Does the system still thermalize?
Quantum mechanical time evolution is unitary and formally preserves all information about the initial condition in the quantum state at all times. However, a quantum system generically contains a macroscopic number of degrees of freedom, but can only be probed through few-body measurements which are local in real space. The meaningful question then becomes whether accessible local measurements display thermalization.
This question can be formalized by considering the quantum mechanical density matrix of the system. If the system is divided into a subregion A (the region being probed) and its complement B (everything else), then all information that can be extracted by measurements made on A alone is encoded in the reduced density matrix ρ_A. If, in the long time limit, ρ_A approaches a thermal density matrix at a temperature set by the energy density in the state, then the system has "thermalized," and no local information about the initial condition can be extracted from local measurements. This process of "quantum thermalization" may be understood in terms of B acting as a reservoir for A. In this perspective, the entanglement entropy of a thermalizing system in a pure state plays the role of thermal entropy. Thermalizing systems therefore generically have extensive or "volume law" entanglement entropy at any non-zero temperature. They also generically obey the eigenstate thermalization hypothesis (ETH).
In contrast, if ρ_A fails to approach a thermal density matrix even in the long time limit, and remains instead close to its initial condition ρ_A(0), then the system retains forever a memory of its initial condition in local observables. This latter possibility is referred to as "many body localization," and involves B failing to act as a reservoir for A. A system in a many body localized phase exhibits MBL, and continues to exhibit MBL even when subject to arbitrary local perturbations. Eigenstates of systems exhibiting MBL do not obey the ETH, and generically follow an "area law" for entanglement entropy (i.e. the entanglement entropy scales with the surface area of subregion A). A brief list of properties differentiating thermalizing and MBL systems is provided below.
In thermalizing systems, a memory of initial conditions is not accessible in local observables at long times. In MBL systems, memory of initial conditions remains accessible in local observables at long times.
In thermalizing systems, energy eigenstates obey ETH. In MBL systems, energy eigenstates do not obey ETH.
In thermalizing systems, energy eigenstates have volume law entanglement entropy. In MBL systems, energy eigenstates have area law entanglement entropy.
Thermalizing systems generically have non-zero thermal conductivity. MBL systems have zero thermal conductivity.
Thermalizing systems have continuous local spectra. MBL systems have discrete local spectra.
In thermalizing systems, entanglement entropy grows as a power law in time starting from low entanglement initial conditions. In MBL systems, entanglement entropy grows logarithmically in time starting from low entanglement initial conditions.
In thermalizing systems, the dynamics of out-of-time-ordered correlators forms a linear light cone which reflects the ballistic propagation of information. In MBL systems, the light cone is logarithmic.
History
MBL was first proposed by P.W. Anderson in 1958 as a possibility that could arise in strongly disordered quantum systems. The basic idea was that if particles all live in a random energy landscape, then any rearrangement of particles would change the energy of the system. Since energy is a conserved quantity in quantum mechanics, such a process can only be virtual and cannot lead to any transport of particle number or energy.
While localization for single particle systems was demonstrated already in Anderson's original paper (coming to be known as Anderson localization), the existence of the phenomenon for many particle systems remained a conjecture for decades. In 1980 Fleishman and Anderson demonstrated the phenomenon survived the addition of interactions to lowest order in perturbation theory. In a 1998 study, the analysis was extended to all orders in perturbation theory, in a zero-dimensional system, and the MBL phenomenon was shown to survive. In 2005 and 2006, this was extended to high orders in perturbation theory in high dimensional systems. MBL was argued to survive at least at low energy density. A series of numerical works
provided further evidence for the phenomenon in one dimensional systems, at all energy densities (“infinite temperature”). Finally, in 2014 Imbrie presented a proof of MBL for certain one dimensional spin chains with strong disorder, with the localization being stable to arbitrary local perturbations – i.e. the systems were shown to be in a many body localized phase.
It is now believed that MBL can arise also in periodically driven "Floquet" systems where energy is conserved only modulo the drive frequency.
Emergent integrability
Many body localized systems exhibit a phenomenon known as emergent integrability. In a non-interacting Anderson insulator, the occupation number of each localized single particle orbital is separately a local integral of motion. It was conjectured (and proven by Imbrie) that a similar extensive set of local integrals of motion should also exist in the MBL phase. Consider for specificity a one dimensional spin-1/2 chain with Hamiltonian
H = J ∑_i (σ^x_i σ^x_{i+1} + σ^y_i σ^y_{i+1} + σ^z_i σ^z_{i+1}) + ∑_i h_i σ^z_i

where σ^x_i, σ^y_i and σ^z_i are Pauli operators, and the fields h_i are random variables drawn from a distribution of some width W. When the disorder is strong enough (W ≫ J) that all eigenstates are localized, then there exists a local unitary transformation to new variables τ^z_i such that

H = ∑_i h̃_i τ^z_i + ∑_{i,j} J_{ij} τ^z_i τ^z_j + ...

where the τ^z_i are Pauli operators that are related to the physical Pauli operators by a local unitary transformation, the ... indicates additional terms which only involve τ^z operators, and the coefficients fall off exponentially with distance. This Hamiltonian manifestly contains an extensive number of localized integrals of motion or "l-bits" (the operators τ^z_i, which all commute with the Hamiltonian). If the original Hamiltonian is perturbed, the l-bits get redefined, but the integrable structure survives.
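A common numerical probe of these two regimes, complementary to the l-bit picture, is exact diagonalization of a small disordered spin chain followed by level-spacing statistics: thermalizing spectra show level repulsion with a mean adjacent-gap ratio near the Gaussian-orthogonal-ensemble value of about 0.53, while localized spectra approach the Poisson value of about 0.39. The sketch below uses the random-field Heisenberg chain, a standard model in numerical MBL studies; the chain length, disorder strengths and number of realizations are kept deliberately small so that it runs in seconds, so the computed values only approximate those benchmarks.

```python
import numpy as np

# Pauli matrices and the single-site identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op_at(op, site, L):
    """Embed a single-site operator at position `site` in an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == site else id2)
    return out

def hamiltonian(L, W, rng, J=1.0):
    """Random-field Heisenberg chain (open boundaries), fields uniform in [-W, W]."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += J * op_at(s, i, L) @ op_at(s, i + 1, L)
    for i, h in enumerate(rng.uniform(-W, W, size=L)):
        H += h * op_at(sz, i, L)
    return H

def zero_magnetization_sector(L):
    """Basis states with equal numbers of up and down spins (total S^z is conserved)."""
    return [s for s in range(2 ** L) if bin(s).count("1") == L // 2]

def mean_gap_ratio(L=8, W=2.0, realizations=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = zero_magnetization_sector(L)
    ratios = []
    for _ in range(realizations):
        Hs = hamiltonian(L, W, rng)[np.ix_(idx, idx)]    # restrict to one symmetry sector
        E = np.linalg.eigvalsh(Hs)
        E = E[len(E) // 4 : 3 * len(E) // 4]             # middle of the spectrum
        gaps = np.diff(E)
        r = np.minimum(gaps[1:], gaps[:-1]) / np.maximum(gaps[1:], gaps[:-1])
        ratios.append(r.mean())
    return float(np.mean(ratios))

print("weak disorder   W =  2:", round(mean_gap_ratio(W=2.0), 3))   # drifts toward ~0.53 (ergodic)
print("strong disorder W = 16:", round(mean_gap_ratio(W=16.0), 3))  # drifts toward ~0.39 (MBL)
```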
Exotic orders
MBL enables the formation of exotic forms of quantum order that could not arise in thermal equilibrium, through the phenomenon of localization-protected quantum order. A form of localization-protected quantum order, arising only in periodically driven systems, is the Floquet time crystal.
Experimental realizations
A number of experiments have been reported observing the MBL phenomenon. Most of these experiments involve synthetic quantum systems, such as assemblies of ultracold atoms or trapped ions. Experimental explorations of the phenomenon in solid state systems are still in their infancy.
See also
Quantum scar
Thermalization
Time crystal
References
Quantum mechanics
Quantum chaos theory | Many-body localization | [
"Physics"
] | 1,532 | [
"Theoretical physics",
"Quantum mechanics"
] |
58,881,336 | https://en.wikipedia.org/wiki/CICE%20%28sea%20ice%20model%29 | CICE () is a computer model that simulates the growth, melt and movement of sea ice. It has been integrated into many coupled climate system models as well as global ocean and weather forecasting models and is often used as a tool in Arctic and Southern Ocean research. CICE development began in the mid-1990s by the United States Department of Energy (DOE), and it is currently maintained and developed by a group of institutions in North America and Europe known as the CICE Consortium. Its widespread use in Earth system science in part owes to the importance of sea ice in determining Earth's planetary albedo, the strength of the global thermohaline circulation in the world's oceans, and in providing surface boundary conditions for atmospheric circulation models, since sea ice occupies a significant proportion (4-6%) of Earth's surface. CICE is a type of cryospheric model.
Development
Development of CICE began in 1994 by Elizabeth Hunke at Los Alamos National Laboratory (LANL). Since its initial release in 1998 following development of the Elastic-Viscous-Plastic (EVP) sea ice rheology within the model, it has been substantially developed by an international community of model users and developers. Enthalpy-conserving thermodynamics and improvements to the sea ice thickness distribution were added to the model between 1998 and 2005. The first institutional user outside of LANL was Naval Postgraduate School in the late-1990s, where it was subsequently incorporated into the Regional Arctic System Model (RASM) in 2011. The National Center for Atmospheric Research (NCAR) was the first to incorporate CICE into a global climate model in 2002, and developers of the NCAR Community Earth System Model (CESM) have continued to contribute to CICE innovations and have used it to investigate polar variability in Earth's climate system. The United States Navy began using CICE shortly after 2000 for polar research and sea ice forecasting and it continues to do so today. Since 2000, CICE development or coupling to oceanic and atmospheric models for weather and climate prediction has occurred at the University of Reading, University College London, the U.K. Met Office Hadley Centre, Environment and Climate Change Canada, the Danish Meteorological Institute, the Commonwealth Science and Industrial Research Organisation, and Beijing Normal University, among other institutions. As a result of model development in the global community of CICE users, the model's computer code now includes a comprehensive saline ice physics and biogeochemistry library that incorporates mushy-layer thermodynamics, anisotropic continuum mechanics, Delta-Eddington radiative transfer, melt-pond physics and land-fast ice. CICE version 6 is open-source software and was released in 2018 on GitHub.
Keystone Equations
There are two main physics equations solved using numerical methods in CICE that underpin the model's predictions of sea ice thickness, concentration and velocity, as well as predictions made with many equations not shown here giving, for example, surface albedo, ice salinity, snow cover, divergence, and biogeochemical cycles. The first keystone equation is Newton's second law for sea ice:
m ∂u/∂t = ∇·σ + τ_a + τ_w − m f k̂ × u − m g ∇H

where m is the mass per unit area of saline ice on the sea surface, u is the drift velocity of the ice, f is the Coriolis parameter, k̂ is the upward unit vector normal to the sea surface, τ_a and τ_w are the wind and water stress on the ice, respectively, g is the acceleration due to gravity, H is the sea surface height and σ is the two-dimensional internal ice stress tensor. Each of the terms requires information about the ice thickness, roughness, and concentration, as well as the state of the atmospheric and oceanic boundary layers. Ice mass per unit area is determined using the second keystone equation in CICE, which describes the evolution of the sea ice thickness distribution g(h) for ice of the different thicknesses h spread over the area for which the sea ice velocity is calculated above:
∂g/∂t = −∇·(g u) − ∂(f g)/∂h + ψ

where −∂(f g)/∂h is the change in the thickness distribution due to thermodynamic growth and melt (f being the growth rate), ψ is the redistribution function due to sea ice mechanics and is associated with the internal ice stress σ, and −∇·(g u) describes advection of sea ice (this term is absorbed into the time derivative in a Lagrangian reference frame). From this, ice mass is given by:

m = ρ_i ∫ h g(h) dh

for ρ_i the density of sea ice.
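As a small numerical illustration of the last relation, the ice mass per unit area in a grid cell can be obtained by summing over discrete thickness categories, much as CICE integrates its thickness distribution over a small number of categories. The category limits and area fractions below are invented for the example; 917 kg/m^3 is a typical value for the density of sea ice.

```python
import numpy as np

rho_ice = 917.0                                         # sea ice density, kg/m^3

# Illustrative discrete thickness distribution g(h): five ice categories
h_bounds = np.array([0.0, 0.6, 1.4, 2.4, 3.6, 5.0])     # category limits, m
h_mid = 0.5 * (h_bounds[:-1] + h_bounds[1:])            # representative thicknesses, m
area_frac = np.array([0.10, 0.25, 0.30, 0.20, 0.05])    # area fraction in each category
# the remaining 0.10 of the cell is open water, which contributes no ice mass

# m = rho_i * integral( h g(h) dh )  ->  a discrete sum over the categories
mean_thickness = np.sum(h_mid * area_frac)              # m, averaged over the whole cell
mass_per_area = rho_ice * mean_thickness                # kg per m^2 of ocean surface

print(f"grid-cell mean thickness: {mean_thickness:.2f} m")
print(f"ice mass per unit area:   {mass_per_area:.0f} kg/m^2")
```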
Code Design
CICE version 6 is coded in FORTRAN90. It is organized into a dynamical core (dycore) and a separate column physics package called Icepack, which is maintained as a CICE submodule on GitHub. The momentum equation and thickness advection described above are time-stepped on a quadrilateral Arakawa B-grid within the dynamical core, while Icepack solves diagnostic and prognostic equations necessary for calculating radiation physics, hydrology, thermodynamics, and vertical biogeochemistry, including terms necessary to calculate , , , , and defined above. CICE can be run independently, as in the first figure on this page, but is frequently coupled with earth systems models through an external flux coupler, such as the CESM Flux Coupler from NCAR for which results are shown in the second figure for the CESM Large Ensemble. The column physics were separated into Icepack for the version 6 release to permit insertion into earth system models that use their own sea ice dynamical core, including the new DOE Energy Exascale Earth System Model (E3SM), which uses an unstructured grid in the sea ice component of the Model for Prediction Across Scales (MPAS), as demonstrated in the final figure.
See also
Sea ice
Sea ice microbial communities
Sea ice emissivity modeling
Sea ice growth processes
Sea ice concentration
Sea ice thickness
Sea ice physics and ecosystem experiment
Arctic Ocean
Southern Ocean
Climate model
Weather forecasting
Northern Sea Route
Northwest Passage
Antarctica
References
External links
CICE Consortium GitHub Information Page
CICE Consortium Model for Sea-Ice Development
Icepack: Essential Physics for Sea Ice Models
Community-Driven Sea Ice Modeling with the CICE Consortium (Witness the Arctic)
NOAA press release
Oceans Deeply
Pacific Standard
phys.org: Arctic ice model upgrade to benefit polar research, industry and military
Sea ice: More than just frozen water (Santa Fe New Mexican)
Energy Exascale Earth System Model (E3SM)
Community Earth System Model (CESM)
Sea ice
Numerical climate and weather models
Physical oceanography
Physics software | CICE (sea ice model) | [
"Physics"
] | 1,302 | [
"Physical phenomena",
"Earth phenomena",
"Applied and interdisciplinary physics",
"Sea ice",
"Computational physics",
"Physical oceanography",
"Physics software"
] |
58,884,841 | https://en.wikipedia.org/wiki/Level%20control%20valve | A level control valve or altitude control valve is a type of valve that automatically responds to changes in the height of a liquid in some storage system. A common example is the set of ballcocks in a flush toilet, where each stage of the flush cycle is actuated by the emptying or filling of the tank. Another example is in reservoirs and other tank storage systems, where the tank is refilled from another source when the tank runs low and overfilling is prevented as it refills. In all cases, the valve itself is attached to a sensor, such as a float switch or similar system in which a float is attached to a cable of the desired length, or a spring whose strength is calibrated for the desired head pressure. They can be modulating, where the flow is proportional to the difference between the actual depth and the desired set-point, or non-modulating, where the valve is either open or closed.
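The difference between modulating and non-modulating behaviour can be seen in a small simulation of a tank that is refilled through the valve while a constant demand draws water out. Every number below is an illustrative assumption rather than data for a real installation.

```python
# Tank refill simulation comparing a modulating and an on/off (non-modulating) valve.
SETPOINT = 3.0     # desired level, m
AREA = 10.0        # tank cross-section, m^2
DEMAND = 0.05      # outflow drawn by consumers, m^3/s
Q_MAX = 0.12       # flow with the valve fully open, m^3/s
GAIN = 0.08        # modulating valve: flow per metre of level error, (m^3/s)/m
DT = 10.0          # time step, s

def valve_flow(level, modulating):
    error = SETPOINT - level
    if modulating:
        return min(Q_MAX, max(0.0, GAIN * error))   # flow proportional to the error
    return Q_MAX if error > 0 else 0.0              # simply open or closed

for modulating in (True, False):
    level = 1.0                                     # start with a nearly empty tank
    for _ in range(360):                            # one hour of simulated time
        inflow = valve_flow(level, modulating)
        level = max(0.0, level + (inflow - DEMAND) * DT / AREA)
    kind = "modulating" if modulating else "on/off    "
    print(f"{kind} valve: level after 1 h = {level:.2f} m")
```

The proportional valve settles smoothly but with a steady-state offset that depends on its gain and on the demand, while the on/off valve chatters around the set-point.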
References
Valves | Level control valve | [
"Physics",
"Chemistry"
] | 192 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
58,887,785 | https://en.wikipedia.org/wiki/Toehold%20mediated%20strand%20displacement | Toehold mediated strand displacement (TMSD) is an enzyme-free molecular tool to exchange one strand of DNA or RNA (output) with another strand (input). It is based on the hybridization of two complementary strands of DNA or RNA via Watson-Crick base pairing (A-T/U and C-G) and makes use of a process called branch migration. Although branch migration has been known to the scientific community since the 1970s, TMSD was not introduced to the field of DNA nanotechnology until 2000, when Yurke et al. were the first to take advantage of TMSD. They used the technique to open and close a set of DNA tweezers made of two DNA helices using an auxiliary strand of DNA as fuel. Since its first use, the technique has been modified for the construction of autonomous molecular motors, catalytic amplifiers, reprogrammable DNA nanostructures and molecular logic gates. It has also been used in conjunction with RNA for the production of kinetically-controlled ribosensors. TMSD starts with a double-stranded DNA complex composed of the original strand and the protector strand. The original strand has an overhanging region, the so-called “toehold”, which is complementary to a third strand of DNA referred to as the “invading strand”. The invading strand is a sequence of single-stranded DNA (ssDNA) which is complementary to the original strand. The toehold regions initiate the process of TMSD by allowing the complementary invading strand to hybridize with the original strand, creating a DNA complex composed of three strands of DNA. This initial endothermic step is rate limiting and can be tuned by varying the strength (length and sequence composition e.g. G-C or A-T rich strands) of the toehold region. The ability to tune the rate of strand displacement over a range of 6 orders of magnitude forms the backbone of this technique and allows the kinetic control of DNA or RNA devices.
After the binding of the invading strand and the original strand occurred, branch migration of the invading domain then allows the displacement of the initial hybridized strand (protector strand). The protector strand can possess its own unique toehold and can, therefore, turn into an invading strand itself, starting a strand-displacement cascade. The whole process is energetically favored and although a reverse reaction can occur its rate is up to 6 orders of magnitude slower.
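Because the overall exchange behaves as an effectively bimolecular reaction whose rate constant is set by the toehold, its time course can be sketched with a simple second-order rate law. The rate constants below are placeholders spanning a few of the orders of magnitude mentioned above, not measured values for any particular toehold sequence.

```python
import numpy as np

def fraction_displaced(k, c0=1e-8, t=3600.0):
    """Fraction of substrate displaced after time t (s) for an irreversible
    bimolecular reaction invader + substrate -> products with equal initial
    concentrations c0 (M) and second-order rate constant k (1/M/s).
    Integrating d[A]/dt = -k*[A]**2 gives x(t) = c0*k*t / (1 + c0*k*t)."""
    return c0 * k * t / (1.0 + c0 * k * t)

# Placeholder rate constants illustrating how toehold strength reshapes the kinetics
for k in (1e2, 1e4, 1e6):   # 1/M/s
    print(f"k = {k:.0e} /M/s -> {100 * fraction_displaced(k):.1f}% displaced after 1 h at 10 nM")
```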
Additional control over the system of toehold mediated strand displacement can be introduced by toehold sequestering.
A slightly different variant of strand displacement has also been introduced using a strand-displacing polymerase enzyme. Unlike TMSD, it uses the polymerase enzyme as a source of energy and is referred to as polymerase-based strand displacement.
Toehold sequestering
Toehold sequestering is a technique to “mask” the toehold region, controlling its accessibility. There are several ways to do so, but the most common approaches are hybridizing the toehold with a complementary strand or designing the toehold region to form a hairpin loop. Masking and unmasking of the toehold domains, together with the ability to precisely control the kinetics of the reaction, make toehold mediated strand displacement a valuable tool in the field of DNA nanotechnology. Moreover, biosensors based on the toehold mediated strand displacement reaction are useful in single-molecule detection of DNA targets and SNP discrimination.
References
Genetic engineering | Toehold mediated strand displacement | [
"Chemistry",
"Engineering",
"Biology"
] | 701 | [
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
60,373,549 | https://en.wikipedia.org/wiki/Phase%20separation | Phase separation is the creation of two distinct phases from a single homogeneous mixture. The most common type of phase separation is between two immiscible liquids, such as oil and water. This type of phase separation is known as liquid-liquid equilibrium. Colloids are formed by phase separation, though not all phase separations form colloids - for example oil and water can form separated layers under gravity rather than remaining as microscopic droplets in suspension.
A common form of spontaneous phase separation is termed spinodal decomposition; it is described by the Cahn–Hilliard equation. Regions of a phase diagram in which phase separation occurs are called miscibility gaps. There are two boundary curves of note: the binodal coexistence curve and the spinodal curve. On one side of the binodal, mixtures are absolutely stable. In between the binodal and the spinodal, mixtures may be metastable: staying mixed (or unmixed) absent some large disturbance. The region beyond the spinodal curve is absolutely unstable, and (if starting from a mixed state) will spontaneously phase-separate.
The upper critical solution temperature (UCST) and the lower critical solution temperature (LCST) are two critical temperatures, above which or below which the components of a mixture are miscible in all proportions. It is rare for systems to have both, but some exist: the nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C.
Physical basis
Mixing is governed by the Gibbs free energy, with phase separation or mixing occurring for whichever case lowers the Gibbs free energy. The free energy can be decomposed into two parts: G = H − TS, with H the enthalpy, T the temperature, and S the entropy. Thus, the change of the free energy in mixing combines an enthalpy of mixing term and an entropy of mixing term. The enthalpy of mixing is zero for ideal mixtures, and ideal mixtures are enough to describe many common solutions. Thus, in many cases, mixing (or phase separation) is driven primarily by the entropy of mixing. It is generally the case that the entropy will increase whenever a particle (an atom, a molecule) has a larger space to explore; and thus, the entropy of mixing is generally positive: the components of the mixture can increase their entropy by sharing a larger common volume.
Phase separation is then driven by several distinct processes. In one case, the enthalpy of mixing is positive, and the temperature is low: the increase in entropy is insufficient to lower the free energy. In another, considerably more rare case, the entropy of mixing is "unfavorable", that is to say, it is negative. In this case, even if the change in enthalpy is negative, phase separation will occur unless the temperature is low enough. It is this second case which gives rise to the idea of the lower critical solution temperature.
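These competing enthalpy and entropy contributions can be made concrete with the regular-solution model, in which the free energy of mixing per mole is ΔGmix = RT[x ln x + (1 − x) ln(1 − x)] + Ωx(1 − x) for an interaction parameter Ω. The sketch below locates the spinodal compositions, where the curvature of this free energy changes sign, and the associated upper critical solution temperature; the value of Ω is illustrative.

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
OMEGA = 6000.0     # regular-solution interaction parameter, J/mol (illustrative)

def spinodal_points(T):
    """Compositions where the curvature of the mixing free energy vanishes.

    For G_mix = R*T*(x*ln(x) + (1-x)*ln(1-x)) + OMEGA*x*(1-x), setting
    d2G/dx2 = 0 gives the quadratic 2*OMEGA*x**2 - 2*OMEGA*x + R*T = 0.
    """
    disc = (2 * OMEGA) ** 2 - 4 * (2 * OMEGA) * R * T
    if disc <= 0:
        return None                                  # above the UCST: stable at all x
    root = np.sqrt(disc)
    return (2 * OMEGA - root) / (4 * OMEGA), (2 * OMEGA + root) / (4 * OMEGA)

print(f"UCST of this model: {OMEGA / (2 * R):.0f} K")
for T in (300.0, 340.0, 370.0):
    pts = spinodal_points(T)
    if pts is None:
        print(f"T = {T:.0f} K: no spinodal, the mixture is stable at every composition")
    else:
        print(f"T = {T:.0f} K: spinodal between x = {pts[0]:.3f} and x = {pts[1]:.3f}")
```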
Phase separation in cold gases
A mixture of two helium isotopes (helium-3 and helium-4) in a certain range of temperatures and concentrations separates into parts. The initial mix of the two isotopes spontaneously separates into helium-4-rich and helium-3-rich regions. Phase separation also exists in ultracold gas systems. It has been shown experimentally in a two-component ultracold Fermi gas. The phase separation can compete with other phenomena such as vortex lattice formation or an exotic Fulde-Ferrell-Larkin-Ovchinnikov phase.
See also
Biomolecular condensate
Colloid
Phase diagram
Phase rule
UNIQUAC
References
Further reading
Equilibrium chemistry
Solvents
Condensed matter physics | Phase separation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 802 | [
"Phases of matter",
"Materials science",
"Equilibrium chemistry",
"Condensed matter physics",
"Matter"
] |
55,561,303 | https://en.wikipedia.org/wiki/N-Methyl-2-thiazolidinethione | N-Methyl-2-thiazolidinethione is the organosulfur compound with the formula C2H4S(NCH3)CS. It is classified as a heterocycle called a thiazolidine. It is a colorless or off-white solid. It has gained attention as a proposed low toxicity replacement for ethylenethioureas, which are used as accelerators for the vulcanization of chloroprene rubbers. The compound is prepared by reaction of N-methylethanolamine and carbon disulfide.
See also
Mercaptobenzothiazole - a structurally similar, but aromatic, vulcanization accelerator
References
Thiazolidines
Dithiocarbamates | N-Methyl-2-thiazolidinethione | [
"Chemistry"
] | 153 | [
"Dithiocarbamates",
"Functional groups"
] |
55,561,850 | https://en.wikipedia.org/wiki/Golden%20binary | In gravitational wave astronomy, a golden binary is a binary black hole collision event whose inspiral and ringdown phases have been measured accurately enough to provide separate measurements of the initial and final black hole masses.
Testing general relativity
Current LIGO/Virgo protocol relies on its library of several hundred thousand precomputed templates of black hole collisions conceivably detectable in their frequency range. A putative binary black hole collision signal consists of inspiral, merger, and ringdown phases. The complete signal is compared with the template library, and event parameters and significance are based on an analysis of such matches.
This allows for self-consistency checks of general relativity. In order to test certain competing theories of gravity, one faces the problem that only general relativity has been studied enough that the complete merger phase is known. Therefore, only a signal that can be matched separately in the inspiral and ringdown phases can be used to allow or contradict such theories.
Identified golden binaries
GW150914 was a golden binary; indeed, this led to additional internal checks by LIGO. GW151226 and LVT151012 were not.
References
Black holes
Gravitational-wave astronomy
Binary stars | Golden binary | [
"Physics",
"Astronomy"
] | 249 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
"Relativity stubs",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects",
"Gravitational... |
55,566,398 | https://en.wikipedia.org/wiki/Nanotechnology%20for%20water%20purification | There are many water purifiers available in the market which use different techniques like boiling, filtration, distillation, chlorination, sedimentation and oxidation. Nanotechnology now plays a vital role in water purification techniques. Nanotechnology is the process of manipulating atoms on a nanoscale. In nanotechnology, nanomembranes are used to soften water and to remove physical, biological and chemical contaminants. A variety of nanotechnology-based techniques use nanoparticles to provide safe drinking water with a high level of effectiveness. Some techniques have become commercialized.
For better water purification or treatment processes nanotechnology is preferred. Many different types of nanomaterials or nanoparticles are used in water treatment processes. Nanotechnology is useful in regards to remediation, desalination, filtration, purification and water treatment.
The main features that make nanoparticles effective for water treatment are
More surface area
Small volume
The higher the surface-area-to-volume ratio, the stronger, more stable and more durable the particles become
Materials may change electrical, optical, physical, chemical, or biological properties at the nano level
Makes chemical and biological reactions easier
Current commercial water purifiers using nanotechnology include the LifeSaver bottle, Lifesaver Jerrycan, Lifesaver Cube, Nanoceram, and NanoH2O.
Nanocellulose based water purification system
Nanocellulose is a renewable material that combines high surface area with high material strength. It is chemically inert and possesses versatile hydrophilic surface chemistry. These properties make it one of the most promising nanomaterials for use as a membrane or filter in water purification systems to remove bacterial and chemical contaminants from polluted water. The types of nanocellulose available for water purification include cellulose nanocrystals (CNC) and cellulose nanofibrils (CNF). These are rod-like nanomaterials whose length ranges from 100 to 2000 nm, with diameters of 2 to 20 nm; the length and diameter depend mostly on the origin and preparation route of the nanocellulose. Nanocellulose materials are used to remove organic pollutants such as dyes, oils and pesticide traces present in water. Fully bio-based membranes using nanocellulose have been fabricated to remove metal ions such as Cu2+ and Fe2+, as well as sulfates, fluorides and other compounds, and these filters have advantages over conventional filters. Nanocellulose is prepared by various methods such as sulphuric acid hydrolysis and mechanical grinding. Purification with nanocellulose is mainly based on the principle of adsorption. For the adsorption of anionic metal species, the nanocellulose is functionalized with positively charged cationic groups; similarly, for the adsorption of cationic metal species, it is functionalized with negatively charged anionic groups. Nanocellulose-based materials are limited by their cost for large-scale production and by their specificity. Current research focuses on the synthesis of hybrid nanocellulose materials combined with several other nanomaterials to improve adsorption capacity.
Graphene coated nanofilter
Graphene is chemically inert, mechanically sturdy, and impermeable to gases and liquids, so carbon plays a major role in the fabrication of porous nanomaterials. Graphene membranes, formed from graphene oxide molecules or chemically converted graphene adhered to 2D nano-mediated arrays, can efficiently separate molecules in the gas or liquid phase. Graphene-coated nanomembranes are considered particularly applicable to water treatment because of these properties. Graphene membranes are obtained by vacuum filtration or by coating a graphene oxide solution as graphene oxide sheets. Graphene-coated nanofiltration membranes show a high water flux. Graphene embedded with carbon nanotubes to serve as a nanofilter is useful for dye rejection in effluent water and removal of salt ions, and also acts as an antifouling layer; the strong bonding between graphene sheets and proteins makes such membranes effective against fouling. Graphene oxide-coated nanofilter membranes also help in the dechlorination of water. In addition, ultrathin graphene-coated nanofilters are among the most promising candidates for commercial water purification. Graphene oxide membranes can be used in various forms - free-standing, surface-modified, or cast into membranes - in the micro-, nano- or ultrafiltration range; among these, nanofilters are the most efficient for water desalination because of the mechanical strength and physiochemical properties of the membrane. There remain challenges in fabricating and applying graphene oxide-based nanofilters for water desalination, including mechanical instability of nanosheet-form filters, cost, surface flaws, and assembly. There is therefore considerable scope for further research in this area.
Electrochemical Carbon nanotube filter
Carbon nanotubes (CNTs) have gained much attention for use in wastewater and water filters. Their mechanical, electrical and chemical properties have made them ideal candidates for research since the 1990s. Combining carbon nanotubes with electrochemistry has proved to be an effective method for water and wastewater purification: electrochemistry reduces the fouling rate of the CNT, and electrochemically modified CNT-based ultrafilters can roughly halve the energy required compared with unmodified CNT-based filters. Electrochemical carbon nanotube filters have thus been developed from advances in nanotechnology and electrochemistry, exploiting the electrochemical activity of the CNT. The first electrochemical CNT electrode was developed by P. J. Britto et al., with results first reported in 1996. An electrochemical CNT filter contains electrodes and CNTs arranged so that the electrodes attract, according to their charge, the wastes that would otherwise clog the CNT, resulting in high filtering efficiency and an extended lifetime of the CNT. Electrochemical CNTs can readily remove amino-group-based dyes from wastewater; Chen et al. first reported the binding of dyes to CNT walls by strong covalent bonds. Electrochemical CNTs are typically used for filtering and recycling wastewater. Ongoing research on CNT-based electrochemical sensors aims to bring their applications into biomedical systems.
Health and safety
See also
Nanostructure
Nanotopography
LifeStraw
Nanofiltration
Tata Swach
Slingshot (water vapor distillation system)
Millbank bag
Nanoremediation
List of nanotechnology applications
Nanomaterials
Nanotechnology
Ultrafiltration
Reverse Osmosis
References
External links
Nanotechnology-Enabled Water Treatment (NEWT) - NSF-funded Nanosystems Engineering Research Center
Project ETAP-ERN, that uses renewable energies for desalinization.
Nano based methods to improve water quality - Hawk's Perch Technical Writing, LLC
Safety of Manufactured Nanomaterials: OECD Environment Directorate
Assessing health risks of nanomaterials summary by GreenFacts of the European Commission SCENIHR assessment
Textiles Nanotechnology Laboratory at Cornell University
IOP.org Article
Nano Structured Material
Online course MSE 376-Nanomaterials by Mark C. Hersam (2006)
Drinking water
Waterborne diseases
Water filters
Nanotechnology and the environment | Nanotechnology for water purification | [
"Chemistry",
"Materials_science"
] | 1,657 | [
"Water filters",
"Water treatment",
"Filters",
"Nanotechnology and the environment",
"Nanotechnology",
"Nanomaterials"
] |
55,567,666 | https://en.wikipedia.org/wiki/Newlab | Newlab opened in June 2016, as a multi-disciplinary technology center. Housed in Building 128 of the Brooklyn Navy Yard, the $35 million project serves as a hardware-focused shared workspace, research lab, and hatchery for socially oriented tech manufacturing.
Using the MIT Media Lab as a model, the impetus for the independent organization was to provide space and services to new manufacturing enterprises. Current members work in fields such as robotics, connected devices, energy, nanotechnology, life sciences, and urban tech.
Media coverage of Newlab has focused on the company's role in revitalizing the Brooklyn Navy Yard, its public-private partnership lease structure, and Urban Tech initiative with the New York City Economic Development Corporation.
History and formation
David Belt and partner Scott Cohen formed the concept for Newlab in 2011 after prospecting the decaying Building 128 with Navy Yard president David Ehrenberg. The partners found the maritime manufacturing history of the structure, specifically the manufacturing innovations that took place there, synchronous with their aim to provide a platform for emerging hardware technologies in New York City. The city was abundant in resources and opportunities for entrepreneurs working in software, Belt said in a recent interview, but space, tools and resources for those working in the new manufacturing hardware community were lacking.
Belt leveraged his development firm, Macro Sea, a company that specializes "in bringing historic properties back into cultural relevance," to obtain funding, architectural expertise, and begin constructing a lease with the city of New York. The Navy Yard was in the initial stages of its current revitalization at the time and, because the city owned the property, special arrangements were needed to develop there.
Cohen began scouting the companies who would comprise their core members and helped work to capitalize them. To date, venture capitalists have invested approximately $250 million in Newlab and its members.
Historical significance of Building 128
Lineage of the site
The land predating Newlab has a rich historical cultural lineage and narrative of experimental and innovative breakthroughs. That past was a major factor in the decision to develop Newlab in the Navy Yard. Before colonial settlement, the area that would become the Navy Yard served as a clamming site for the Lenape Native Americans. It was then settled by the Dutch and sold to a developer, thus beginning its employment as a center for manufacturing. Among the technological advancements that took place at the Navy Yard are: the first use of the steam-powered pile driver; construction of the first undersea telegraph cable; development of a commercialized form of anesthetic ether by E.R. Squibb; and a broadcast of the first person to sing over the radio, Eugenia Farrar performing "I Love You Truly", was heard at the yard in 1907.
Construction of navy ships like the Fulton II, a first-of-its-kind steam-powered warship, and fabrication of the USS Arizona, state-of-the-art among its peers, induced many influential manufacturing process refinements and advancements.
In interviews, Belt and Cohen both cite this maritime and technological history as inspiration for Newlab, both in guiding the renovation of the facility and in shaping its mission.
Building 128
According to a Brooklyn Navy Yard Development Corporation document, 128 was raised in 1899 as a "steel structure... used to assemble large boiler engines and fabricated sections of naval vessels." It served as the primary machine shop for every major ship launched during World Wars One and Two. Designed to accommodate the significant height of a warship, the sequence of its hulking steel girders resembles an airport hangar. 128 has been slated for, but avoided, plans for non-naval readaptation. The City of New York sought to adapt it for reuse as a "food complex at one point," but the effort was not sustained.
Renovation and architecture
Marvel Architects, Newlab's architect of record, along with DBI Projects, Belt's project management firm, worked together to craft and execute the renovation. Press regarding Newlab often states that the company occupies Building 128 of the Navy Yard, but this is slightly misleading in that 128 is a complex of warehouses and Newlab occupies the southernmost portion.
Recladding the building's armature and repurposing of the 51,000 ft² machine shop into an 84,000 ft² multidisciplinary design, prototyping, and advanced manufacturing space took approximately 5 years and continued until the company's full opening in September 2016. The undertaking utilized approximately 9,000 lbs of steel in total according to the developer.
A guiding principle of the redesign was to harmonize of the needs of the forthcoming lab environment with the original structural features. Modern workplace design elements were fused with the 19th century industrial characteristics of the building's centerline.
Floor plan
Newlab's open floor design was intended, spatially, to reinforce its mission, the layout meant to encourage member companies to collaborate and cross-pollinate ideas. Communal meeting rooms, office pods, and interior plazas on both floors emphasize the developer's intention to create a collaborative design and fabrication center.
Upon completion, the rebuild subdivided Building 128's usable space into: Private studios = 31,664 ft²; Open private studios = 6,226 ft²; Fabrication lab = 6,834 ft²; Cafe kitchen = 600 ft²; Conference rooms = 2,014 ft²; Coworking desks = 144; Flex space = 66 desks.
There is an additional 6,174 ft² of event space which hosts talks, hackathons, and new manufacturing events such as the recent Urban Tech Hub launch.
Prototyping and fabrication lab
Additive manufacturing (3D printing) technology is a component of the design process for many Newlab residents. Prototyping shops are a distinguishing feature of the hardware-centric facility. Newlab leverages partnerships with firms like AutoDesk, Stratasys, BigRep, Haas, Ultimaker, and others to provide and maintain equipment and filament for printing. The organization has amassed several million dollars of digital fabrication and manufacturing machine assets such as 3D Printers, electronic workbenches, fabrication tools, and CNC equipment since its opening.
Companies
As of September 2017 eighty companies and 400 people worked at Newlab. By 2018, the number of companies had increased to over 100. Members are typically growth-stage companies with anywhere between 3-20 employees.
References
External links
Newlab Official Website
Further reading
NPR: How an Old Shipyard Became a Home for Hardware Startups
CBS News: The Brooklyn Navy Yard's Rebirth as a High-tech Center
Brooklyn’s New Lab Is an Inventor’s Paradise
Inside New Lab, an 84,000-Square-Foot Tech Paradise in Brooklyn
New Lab is a New Home for Hardware Startups in Brooklyn’s Navy Yard
Artificial intelligence laboratories
Computational neuroscience
Cybernetics
DIY culture
Industrial design
Laboratories in the United States
Nanotechnology
Robotics organizations
United States Navy shipyards | Newlab | [
"Materials_science",
"Engineering"
] | 1,417 | [
"Industrial design",
"Design engineering",
"Materials science",
"Nanotechnology",
"Design"
] |
48,793,269 | https://en.wikipedia.org/wiki/Fambrini%20%26%20Daniels | Fambrini and Daniels were artificial stone and architectural terracotta manufacturers in Canwick Road, Lincoln, England. The company was probably founded in 1838. About 1913 it became the Lindum Stone Company which ceased trading after 1949.
History
Joseph Fambrini, was born in Italy in 1815, possibly in Florence. He is first noted as a plaster manufacturer and landlord of the Packet Inn on Waterside North in Lincoln. The workshops in the 1860s were in Waterside South and then Newton Street, in what is now Sippers (formerly the Crown and Cushion) Public House and the adjoining property. In 1872 he is described as a modeller, and manufacturer of Plaster of Paris, Roman, Parian and other cements, enrichments etc. In 1878 he built a workshop at 85 Canwick Road, later developing it into the show-yard and offices of the company of Fambrini and Daniels. The surviving office building on Canwick Road of 1889, by the Lincoln architect William Mortimer, is a two-storied building of red brick, with many decorative features in brick and terracotta, including the city crest on the north elevation and an 1889 date-stone on the north elevation. The building was listed Grade II in 1999.
In 1888 they also were manufacturing at the Excelsior Works in Monks Road, Lincoln an Imperishable Concrete Stone which they had invented. This was said to be a material resembling Portland stone and was used for embellishing the new Lincoln Hospital in that year. and also for the New Grand Opera House in Hull in 1893 After Fambrini's death in 1890, Daniel entered into a partnership with a Mr Webster.
Examples of Fambrini and Daniels' work
Fambrini appears to have benefited from the rapid growth of Lincoln in the latter half of the 19th century. Many houses of the professional classes in the growing suburbs, as well as commercial buildings have artificial stone mouldings, often in the gothic revival style. However, only a few examples of their work can be definitely identified. In 1876 Fambrini had built for himself the large house on the corner of Monks Road (95 Monks Road) and Baggholme Road, and naturally had artificial stone to decorate it. Fambrini called this Florence villa, but after he died in 1890, his house was renamed Villa Firenze.
The Company's offices in Canwick Road were designed to exhibit many of the company's products. The eaves cornice have decorative corbels and banding, with above in parapet a projecting panel decorated with a pendant flag and wreath. Rainwater heads are in the form of monstrous heads. The side entrance facade has similar elaborate architectural detail. Topped with panel bearing Lincoln City coat of arms surmounted by segmental pediment bearing date 1889. Some red terracotta mouldings are used.
The Lincolnshire Chronicle in March 1894 reported that Fambrini and Daniels, have just erected an exceptionally large fountain at the Bridge of Weir, near Glasgow, for the trustees of the Orphanage Asylum of Scotland. The fountain is in red concrete, and stands 18 feet high. One of the special features of the structure is the large basin at the base, which, although seven feet in diameter, has been successfully cast in one piece. Basins of such dimensions are usually cast in sections; to cast them in one piece is a task seldom attempted, and still less seldom accomplished. In this case the casting has been entirely successful, the completed basin weighing a ton and a half. The second basin of the fountain is supported by three huge Dolphins intertwined, and the top one by the kneeling figure of a Nubian boy.
The company is known to have provided mouldings for the domes, finials and decorative panels on the stepped gables of Southport Opera House, Lord Street, Southport (1890–91) by the well known theatre architect Frank Matcham. They were described in The Builder as being of imperishable red concrete masonry. The Opera House, with a capacity for 2000, opened on 7 September 1891 and was destroyed by fire 1929.
Possible Examples of Artificial Stone and Terracotta in Lincoln
There are many examples of decorative artificial stonework and terracotta decorative features on the later Victorian buildings in Lincoln. It is likely that William Mortimer would have patronised the company, particularly for his terracotta revival buildings such the Lincoln Liberal Club and the Oddfellows Hall. The terracotta used is a deeper reddish hue than that coming from other sources such as Ruabon and Doulton, used by another Lincoln architect William Watkins. This terracotta is of noticeably lower quality than that produced elsewhere and often tends to flake.
References
Bibliography
Michael Stratton, (1993) The Terracotta Revival: Building Innovation and the Image of the Industrial City in Britain and North America, Gollanz, London.
Brian Walker (ed) (1980) Frank Matcham; Theatre Architect Belfast
Building materials companies of the United Kingdom
Companies based in Lincoln, England
Terracotta
Manufacturer of architectural terracotta | Fambrini & Daniels | [
"Engineering"
] | 1,020 | [
"Manufacturer of architectural terracotta",
"Architecture"
] |
48,794,726 | https://en.wikipedia.org/wiki/Mosely%20snowflake | The Mosely snowflake (after Jeannine Mosely) is a Sierpiński–Menger type of fractal obtained in two variants either by the operation opposite to creating the Sierpiński-Menger snowflake or Cantor dust i.e. not by leaving but by removing eight of the smaller 1/3-scaled corner cubes and the central one from each cube left from the previous recursion (lighter) or by removing only corner cubes (heavier).
In one dimension this operation (i.e. the recursive removal of the two outer line segments) is trivial and converges only to a single point. The shape resembles a natural water snowflake. By construction, the Hausdorff dimension of the lighter snowflake (which keeps 18 of the 27 one-third-scale sub-cubes at each step) is
log 18 / log 3 ≈ 2.6309,
and that of the heavier snowflake (which keeps 19 sub-cubes) is
log 19 / log 3 ≈ 2.6801.
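These values follow from the standard similarity-dimension formula log N / log(1/r) for a self-similar set made of N copies each scaled by a factor r. A minimal Python check (the Menger sponge line is included only for comparison):

```python
from math import log

def similarity_dimension(n_copies: int, scale: float) -> float:
    """Similarity (Hausdorff) dimension of a self-similar set of n_copies pieces, each scaled by `scale`."""
    return log(n_copies) / log(1 / scale)

# Each surviving sub-cube is scaled by 1/3.
print("lighter snowflake (18 of 27 kept):", similarity_dimension(18, 1 / 3))  # ~2.6309
print("heavier snowflake (19 of 27 kept):", similarity_dimension(19, 1 / 3))  # ~2.6801
print("Menger sponge (20 of 27 kept):    ", similarity_dimension(20, 1 / 3))  # ~2.7268
```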
See also
Menger sponge
References
.
Fractals
Curves
Topological spaces
Cubes
Eponymous curves | Mosely snowflake | [
"Mathematics"
] | 183 | [
"Functions and mappings",
"Mathematical analysis",
"Mathematical structures",
"Mathematical analysis stubs",
"Mathematical objects",
"Fractals",
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology",
"Mathematical relations"
] |
48,795,897 | https://en.wikipedia.org/wiki/Tanin%20Industrial%20Company | Tanin () was a leading consumer electric device company in Thailand, focused on products such as televisions, fans, rice cookers and radios. It went out of business a few years prior to the 1997 financial crisis in Thailand.
History
Tanin was established in 1946 in Thailand as a family business.
However, with entrepreneurial thinking and determination of Mr. Udom Wittayasirinun, Tanin became a giant radio manufacturer. Tanin experienced success and expanded the company in 1962 making its radio a recognized product.
During the 1980s, Tanin was one of the top players in the electric appliance industry in Thailand with estimated net sales of $33,000,000. Tanin was also the sole Thai brand in electric device industry that had sub-branches.
However, during the period of the 1980s and 1990s, there were many emerging players in the global market such as South Korea, Singapore, and so on. These countries were the top exporters in Asia thereby negatively impacting the ability of Tanin to be competitive.
References
Home appliance manufacturers of Thailand
Manufacturing companies based in Bangkok
Radio manufacturers | Tanin Industrial Company | [
"Engineering"
] | 232 | [
"Radio electronics",
"Radio manufacturers"
] |
48,796,678 | https://en.wikipedia.org/wiki/Princeton%20field-reversed%20configuration | The Princeton Field Reversed Configuration (PFRC) is a series of experiments in plasma physics, an experimental program to evaluate a configuration for a fusion power reactor, at the Princeton Plasma Physics Laboratory (PPPL). The experiment probes the dynamics of long-pulse, collisionless, low s-parameter field-reversed configurations (FRCs) formed with odd-parity rotating magnetic fields. FRCs are an evolution of the Greek engineer Nicholas C. Christofilos's original idea of E-layers, which he developed for the Astron fusion reactor. The PFRC program aims to experimentally verify the physics predictions that such configurations are globally stable and have transport levels comparable with classical magnetic diffusion. It also aims to apply this technology to the Direct Fusion Drive concept for spacecraft propulsion.
History
The PFRC was initially funded by the United States Department of Energy. Early in its operation it was contemporary with such RMF-FRCs as the Translation Confinement Sustainment experiment (TCS) and the Prairie View Rotamak (PV Rotamak).
At PPPL, the experiment PFRC-1 ran from 2008 through 2011. PFRC-2 is currently running. PFRC-3 is scheduled next. PFRC-4 is scheduled for the late 2020s.
Fusion has not yet been achieved in these experiments.
Experiments and results
The PFRC-1 and PFRC-2 experiments have heated electrons to energies in excess of 100 eV and plasma durations to 300 ms, more than 10⁴ times longer than the predicted tilt instability growth time.
PFRC-1
PFRC-2
Odd-parity rotating magnetic field
The electric current that forms the field-reversed configuration (FRC) in the PFRC is driven by a rotating magnetic field (RMF). This method has been well-studied and produced favorable results in the Rotamak series of experiments. However, rotating magnetic fields as applied in these and other experiments (so-called even parity RMFs) induce opening of the magnetic field lines. When a transverse magnetic field is applied to the axisymmetric equilibrium FRC magnetic field, rather than magnetic field lines closing on themselves and forming a closed region, they spiral around in the azimuthal direction and ultimately cross the separatrix surface which contains the closed FRC region.
The PFRC uses RMF antennae that produce a magnetic field which flips direction about a symmetry plane oriented with its normal along the axis, half-way along the length of the axis of the machine. This configuration is called an odd parity rotating magnetic field (RMFo). Such magnetic fields, when added in small magnitude to axisymmetric equilibrium magnetic fields, do not cause opening of the magnetic field lines and overall topology is preserved. The critical threshold magnitude of 'odd parity' rotating magnetic field which opens up the axisymmetric equilibrium magnetic field lines and fundamentally changes field topology is rather high. Thus, the RMF is not expected to contribute to transport of particles and energy out of the core of the PFRC.
Low s-parameter
In an FRC, the name s-parameter is given to the ratio of the distance between the magnetic null and the separatrix, and the thermal ion Larmor radius. That is how many ion orbits can fit between the core of the FRC and where it meets the bulk plasma. A high-s FRC would have very small ion gyroradii compared to the size of the machine. Thus, at high s-parameter, the model of magnetohydrodynamics (MHD) applies. MHD predicts that the FRC is unstable to the "n=1 tilt mode," in which the reversed field tilts 180 degrees to align with the applied magnetic field, destroying the FRC.
A low-s FRC is predicted to be stable to the tilt mode. An s-parameter less than or equal to 2 is sufficient for this effect. However, only two ion radii between the hot core and the cool bulk means that on average only two scattering periods (velocity changes of on average 90 degrees) are sufficient to remove a hot, fusion-relevant ion from the core of the plasma. Thus the choice is between high s-parameter ions that are classically well confined but convectively poorly confined, and low s-parameter ions that are classically poorly confined but convectively well confined.
The PFRC has an s-parameter between 1 and 2. Stabilizing the tilt-mode is predicted to aid confinement more than the small number of tolerable collisions will hurt confinement.
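As an illustration of the definition, the s-parameter can be estimated from the null-to-separatrix distance and the thermal ion gyroradius. The numbers in the sketch below (ion temperature, magnetic field, null-to-separatrix distance, hydrogen ions) are assumptions chosen only to land in the quoted range of 1–2; they are not published PFRC parameters.

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
M_PROTON = 1.673e-27   # proton mass, kg

def ion_gyroradius(temp_eV: float, b_field_T: float, mass_amu: float = 1.0) -> float:
    """Thermal ion Larmor radius rho_i = m * v_th / (e * B), using v_th = sqrt(2 * T / m)."""
    mass = mass_amu * M_PROTON
    v_th = math.sqrt(2.0 * temp_eV * E_CHARGE / mass)
    return mass * v_th / (E_CHARGE * b_field_T)

def s_parameter(null_to_separatrix_m: float, temp_eV: float, b_field_T: float) -> float:
    """s = (distance from magnetic null to separatrix) / (thermal ion gyroradius)."""
    return null_to_separatrix_m / ion_gyroradius(temp_eV, b_field_T)

# Assumed, illustrative values: 100 eV hydrogen ions, 0.05 T field, 5 cm null-to-separatrix distance.
print(f"s ~ {s_parameter(0.05, 100.0, 0.05):.1f}")   # ~1.7, i.e. in the low-s regime
```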
Spacecraft propulsion
Scientists from Princeton Satellite Systems are working on a new concept called Direct Fusion Drive (DFD) that is based on the PFRC but has one open end through which exhaust flows to generate thrust. It would produce electric power and propulsion from a single compact fusion reactor. The first concept study and modeling (Phase I NASA NIAC) was published in 2017, and the concept was proposed to power the propulsion system of a Pluto orbiter and lander. Adding propellant to the cool plasma flow results in a variable thrust when channeled through a magnetic nozzle. Modeling suggests that the DFD might produce about 5 newtons of thrust per megawatt of generated fusion power. About 35% of the fusion power goes to thrust, 30% to electric power, 25% is lost as heat, and 10% is recirculated for the radio-frequency (RF) heating. The concept was awarded a Phase II to further advance the design and shielding.
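Taking the quoted scaling at face value, the implied effective exhaust velocity and specific impulse can be backed out from the relation P_jet = ½ F v_e. This is only a consistency sketch based on the figures in the paragraph above (5 N and 35% of 1 MW going to thrust), not a statement of the actual DFD design.

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(thrust_N: float, jet_power_W: float) -> float:
    """Effective exhaust velocity from jet power P = 0.5 * F * v_e."""
    return 2.0 * jet_power_W / thrust_N

fusion_power_W = 1.0e6               # 1 MW of fusion power (the quoted scaling unit)
jet_power_W = 0.35 * fusion_power_W  # ~35% of fusion power goes to thrust
thrust_N = 5.0                       # ~5 N of thrust per MW of fusion power

v_e = exhaust_velocity(thrust_N, jet_power_W)
print(f"effective exhaust velocity ~ {v_e / 1e3:.0f} km/s")   # ~140 km/s
print(f"specific impulse ~ {v_e / G0:.0f} s")                 # roughly 14,000 s
```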
References
External links
, Princeton Plasma Physics Laboratory
Professor Samuel A. Cohen
Magnetic confinement fusion devices
Princeton Plasma Physics Laboratory | Princeton field-reversed configuration | [
"Chemistry"
] | 1,156 | [
"Particle traps",
"Magnetic confinement fusion devices"
] |
48,798,515 | https://en.wikipedia.org/wiki/Human%20thermoregulation | As in other mammals, human thermoregulation is an important aspect of homeostasis. In thermoregulation, body heat is generated mostly in the deep organs, especially the liver, brain, and heart, and in contraction of skeletal muscles. Humans have been able to adapt to a great diversity of climates, including hot humid and hot arid. High temperatures pose serious stress for the human body, placing it in great danger of injury or even death. For humans, adaptation to varying climatic conditions includes both physiological mechanisms resulting from evolution and behavioural mechanisms resulting from conscious cultural adaptations.
There are four avenues of heat loss: convection, conduction, radiation, and evaporation. If skin temperature is greater than that of the surroundings, the body can lose heat by radiation and conduction. But, if the temperature of the surroundings is greater than that of the skin, the body actually gains heat by radiation and conduction. In such conditions, the most efficient means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise. During sports activities, evaporation becomes the main avenue of heat loss. Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss.
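The weight of evaporation in this budget is easy to quantify: evaporating water absorbs roughly 2.4 MJ per kilogram (the latent heat of vaporization near skin temperature). The short sketch below is illustrative; the sweat rate and the fraction that actually evaporates (rather than dripping off) are assumed values.

```python
LATENT_HEAT_J_PER_KG = 2.43e6  # approximate latent heat of vaporization of water near skin temperature

def evaporative_cooling_watts(sweat_rate_l_per_h: float, fraction_evaporated: float = 1.0) -> float:
    """Average heat removed by sweat evaporation, in watts (1 L of sweat is taken as 1 kg of water)."""
    kg_per_second = sweat_rate_l_per_h * fraction_evaporated / 3600.0
    return kg_per_second * LATENT_HEAT_J_PER_KG

# 1 L/h of sweat, fully evaporated, removes about 675 W;
# if humid air lets only half of it evaporate, only about 340 W is removed.
print(evaporative_cooling_watts(1.0))
print(evaporative_cooling_watts(1.0, fraction_evaporated=0.5))
```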
Humans cannot survive prolonged exposure to a wet-bulb temperature above about 35 °C. Such a temperature used to be thought not to occur on Earth's surface, but it has been recorded in some parts of the Indus Valley and the Persian Gulf. The occurrence of conditions too hot and humid for human life is expected to increase in the future due to global warming.
Control system
The core temperature of a human is regulated and stabilized primarily by the hypothalamus, a region of the brain linking the endocrine system to the nervous system, and more specifically by the anterior hypothalamic nucleus and the adjacent preoptic area regions of the hypothalamus. As core temperature varies from the set point, endocrine production initiates control mechanisms to increase or decrease energy production/dissipation as needed to return the temperature toward the set point (see figure).
In hot conditions
Eccrine sweat glands under the skin secrete sweat (a fluid containing mostly water with some dissolved ions), which travels up the sweat duct, through the sweat pore and onto the surface of the skin. This causes heat loss via evaporative cooling; however, a lot of essential water is lost.
The hairs on the skin lie flat, preventing heat from being trapped by a layer of still air between them. This is caused by tiny muscles under the surface of the skin, called arrector pili muscles, relaxing so that their attached hair follicles are not erect. These flat hairs increase the flow of air next to the skin, increasing heat loss by convection. When the environmental temperature is above core body temperature, sweating is the only physiological way for humans to lose heat.
Arteriolar vasodilation occurs. The smooth muscle walls of the arterioles relax allowing increased blood flow through the artery. This redirects blood into the superficial capillaries in the skin increasing heat loss by convection and conduction.
In hot and humid conditions
In general, humans appear physiologically well adapted to hot dry conditions. However, effective thermoregulation is reduced in hot, humid environments such as the Red Sea and Persian Gulf (where moderately hot summer temperatures are accompanied by unusually high vapor pressures), tropical environments, and deep mines where the atmosphere can be water-saturated. In hot-humid conditions, clothing can impede efficient evaporation. In such environments, it helps to wear light clothing such as cotton, that is pervious to sweat but impervious to radiant heat from the sun. This minimizes the gaining of radiant heat, while allowing as much evaporation to occur as the environment will allow. Clothing such as plastic fabrics that are impermeable to sweat and thus do not facilitate heat loss through evaporation can actually contribute to heat stress.
In cold conditions
Heat is lost mainly through the hands and feet.
Sweat production is decreased.
The minute muscles under the surface of the skin called arrector pili muscles (attached to an individual hair follicle) contract (piloerection), lifting the hair follicle upright. This makes the hairs stand on end, which acts as an insulating layer, trapping heat. This is what also causes goose bumps since humans do not have very much hair and the contracted muscles can easily be seen.
Arterioles carrying blood to superficial capillaries under the surface of the skin can shrink (constrict), thereby rerouting blood away from the skin and towards the warmer core of the body. This prevents blood from losing heat to the surroundings and also prevents the core temperature dropping further. This process is called vasoconstriction. It is impossible to prevent all heat loss from the blood, only to reduce it. In extremely cold conditions, excessive vasoconstriction leads to numbness and pale skin. Frostbite occurs only when water within the cells begins to freeze. This destroys the cell causing damage.
Muscles can also receive messages from the thermoregulatory center of the brain (the hypothalamus) to cause shivering. This increases heat production as respiration is an exothermic reaction in muscle cells. Shivering is more effective than exercise at producing heat because the animal (includes humans) remains still. This means that less heat is lost to the environment through convection. There are two types of shivering: low-intensity and high-intensity. During low-intensity shivering, animals shiver constantly at a low level for months during cold conditions. During high-intensity shivering, animals shiver violently for a relatively short time. Both processes consume energy, however high-intensity shivering uses glucose as a fuel source and low-intensity tends to use fats. This is a primary reason why animals store up food in the winter.
Brown adipocytes are also capable of producing heat via a process called non-shivering thermogenesis. In this process, triglycerides are burned into heat, thereby increasing body temperature.
Related factors
Fitness
The more physically fit a person is, the greater their ability to adjust to temperature variation. This includes adapting for heat (keeping cool) and for cold (keeping warm).
Age
Age can be a factor in a person's ability to adapt to temperature variations. Studies have shown that younger people adapt more efficiently to contact with cold surfaces than elderly people. Notably, a good level of fitness allowed the elderly subjects to cope better and somewhat offset the decline in their ability to thermoregulate due to old age.
Body mass
A high body mass has been found to help with thermoregulation in regard to adapting for hot environments. This is considered on the basis that the levels of body fat were within healthy ranges i.e. the person's muscle-to-fat ratio was healthy. However, extra body fat has been shown to offer some benefit in terms of keeping warm, especially during immersion in cold water. For this reason long distance outdoor swimmers often have a generous layer of body fat. This is not necessarily always the case though, and high levels of physical fitness can allow thinner swimmers to also perform effectively in cold water environments.
Uses of hypothermia
Adjusting the human body temperature downward has been used therapeutically, in particular, as a method of stabilizing a body following trauma. It has been suggested that adjusting the adenosine A1 receptor of the hypothalamus may allow humans to enter a hibernation-like state of reduced body temperature, which could be useful for applications such as long-duration space flight.
Related testing
The thermoregulatory sweat test (TST) can be used to diagnose certain conditions that cause abnormal temperature regulation and defects in sweat production in the body.
To perform the test, the patient is placed in a chamber that slowly rises in temperature. Before the chamber is heated, the patient is coated with a special kind of indicator powder that will change in color when sweat is produced. This powder, when changing color, will be useful in visualizing which skin is sweating versus not sweating. Results of the patient's sweat pattern will be documented by digital photography, and abnormal TST patterns can indicate if there is dysfunction in the autonomic nervous system. Certain differentials can be made depending on the type of sweat pattern found from the TST (along with history and clinical presentation) including hyperhidrosis, small fiber and autonomic neuropathies, multiple system atrophy, Parkinson disease with autonomic dysfunction, and pure autonomic failure.
Related physiological processes, diseases and syndromes
Hypothermia
Hyperthermia
Heat stroke
Raynaud's phenomenon (Raynaud's disease)
Endocrine system disorders (hyperthyroidism, hypothyroidism)
Induced hypothermia
Erythromelalgia (hyperthermia)
Hypohidrotic ectodermal dysplasia
Thermogenesis
Poikilothermia
References
Thermoregulation
Human homeostasis
Heat transfer | Human thermoregulation | [
"Physics",
"Chemistry",
"Biology"
] | 1,890 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Human homeostasis",
"Thermoregulation",
"Thermodynamics",
"Homeostasis"
] |
50,241,048 | https://en.wikipedia.org/wiki/LEFTY2 | Left-right determination factor 2 is a protein that in humans is encoded by the LEFTY2 gene.
Function
This gene encodes a member of the TGF-beta family of proteins. The encoded protein is secreted and plays a role in left-right asymmetry determination of organ systems during development. The protein may also play a role in endometrial bleeding. Mutations in this gene have been associated with left-right axis malformations, particularly in the heart and lungs. Some types of infertility have been associated with dysregulated expression of this gene in the endometrium. Alternative processing of this protein can yield three different products. This gene is closely linked to both a related family member and a related pseudogene. Alternate splicing of this gene results in multiple transcript variants.
References
Further reading
Proteins | LEFTY2 | [
"Chemistry"
] | 173 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
50,250,122 | https://en.wikipedia.org/wiki/Nanomechanical%20resonator | A nanomechanical resonator is a nanoelectromechanical systems ultra-small resonator that oscillates at a specific frequency depending on its mass and stiffness.
See also
Quartz crystal microbalance
Atomic force microscopy
References
Further reading
Nanoelectronics | Nanomechanical resonator | [
"Materials_science"
] | 60 | [
"Nanotechnology",
"Nanoelectronics"
] |
50,252,496 | https://en.wikipedia.org/wiki/Polar%20metal | A polar metal, metallic ferroelectric, or ferroelectric metal is a metal that contains an electric dipole moment. Its components have an ordered electric dipole. Such metals should be unexpected, because the charge should conduct by way of the free electrons in the metal and neutralize the polarized charge. However they do exist. Probably the first report of a polar metal was in single crystals of the cuprate superconductors YBa2Cu3O7−δ. A polarization was observed along one (001) axis by pyroelectric effect measurements, and the sign of the polarization was shown to be reversible, while its magnitude could be increased by poling with an electric field. The polarization was found to disappear in the superconducting state. The lattice distortions responsible were considered to be a result of oxygen ion displacements induced by doped charges that break inversion symmetry. The effect was utilized for fabrication of pyroelectric detectors for space applications, having the advantage of large pyroelectric coefficient and low intrinsic resistance.
Another substance family that can produce a polar metal is the nickelate perovskites. One example interpreted to show polar metallic behavior is lanthanum nickelate, LaNiO3. A thin film of LaNiO3 grown on the (111) crystal face of lanthanum aluminate (LaAlO3) was interpreted to be both a conductor and a polar material at room temperature. The resistivity of this system, however, shows an upturn with decreasing temperature, so it does not strictly adhere to the definition of a metal. Also, when grown 3 or 4 unit cells thick (1-2 nm) on the (100) crystal face of LaAlO3, the LaNiO3 can be a polar insulator or polar metal depending on the atomic termination of the surface. Lithium osmate, LiOsO3, also undergoes a ferroelectric transition when it is cooled below 140 K. The symmetry changes from the centrosymmetric space group R-3c to the polar group R3c, losing its centrosymmetry. At room temperature and below, lithium osmate is an electric conductor, in single crystal, polycrystalline or powder forms, and the ferroelectric form only appears below 140 K. Above 140 K the material behaves like a normal metal. An artificial two-dimensional polar metal, formed by charge transfer to a ferroelectric insulator, has been realized in LaAlO3/Ba0.8Sr0.2TiO3/SrTiO3 complex oxide heterostructures.
Native metallicity and ferroelectricity have been observed at room temperature in bulk single-crystalline tungsten ditelluride (WTe2), a transition metal dichalcogenide (TMDC). It has bistable and electrically switchable spontaneous polarization states, indicating ferroelectricity. The coexistence of metallic behavior and switchable electric polarization in WTe2, which is a layered material, has been observed in the low-thickness limit of two and three layers. Calculations suggest this originates from vertical charge transfer between the layers, which is switched by interlayer sliding. In April 2022 another room-temperature polar metal was reported that was also magnetic; skyrmions and the Rashba–Edelstein effect were observed in it.
P. W. Anderson and E. I. Blount predicted that a ferroelectric metal could exist in 1965. They were inspired to make this prediction based on superconducting transitions, and the ferroelectric transition in barium titanate. The prediction was that atoms do not move far and only a slight crystal non-symmetrical deformation occurs, say from cubic to tetragonal. This transition they called martensitic. They suggested looking at sodium tungsten bronze and InTl alloy. They realised that the free electrons in the metal would neutralise the effect of the polarization at a global level, but that the conduction electrons do not strongly affect transverse optical phonons, or the local electric field inherent in ferroelectricity.
References
Metals
Ferroelectric materials | Polar metal | [
"Physics",
"Chemistry",
"Materials_science"
] | 843 | [
"Physical phenomena",
"Metals",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Hysteresis",
"Matter"
] |
52,662,393 | https://en.wikipedia.org/wiki/ProSTEP%20iViP | pr is an association with its headquarters in Darmstadt, Germany. Founded in 1993 as the ProSTEP Association for the Promotion of Product Data Standards and later renamed to ProSTEP iViP Association in 2002, and since May 2017 the association's name has been written as "prostep ivip". Prostep ivip is a globally active, independent association of 180 member companies from industry, IT and research. It is an industry-driven association and its main focuses are on the digital transformation in product creation and production. By designing digital transformation in the manufacturing industry prostep ivip defines and aggregates the requirements of manufacturers and suppliers, intending to define standards and interfaces primarily for the digitalization of the entire product creation process – from idea to implementation.
History
After the end of the ProSTEP Initiative of the German Federal Ministry for Economic Affairs and Energy (German acronym: BMWi), the ProSTEP Association was founded in 1993. Leading IT managers at BMW, Bosch, Continental, Daimler, Delphi, Opel, Siemens, Volkswagen and 30 other companies realized that the development of modern processes for efficient product data management was crucial to ensuring the ability of German companies to compete in the global marketplace, and that they could best address their common aims by joining together under the neutral umbrella of an association.
The starting point for this endeavor was the joint development of the STEP data format (ISO 10303). In 2002, it merged with the initiative "Integrated Virtual Product Creation (German acronym: iViP)" of the German Federal Ministry of Education and Research (German acronym: BMBF), which led to a massive scope extension. Up to today, the prostep ivip association remains committed to developing new approaches to end-to-end process, system and data integration for its members and providing digital support for all the phases of the product creation process.
Organization
40% of today's 180 member companies in prostep ivip are manufacturing companies (manufacturers and suppliers), 40% are IT companies and service providers and 20% are research institutions and other standardization bodies. This tripartism is also reflected within the by-annually elected board of the association: one representative of the manufacturers, one of the suppliers, one of the IT and one of the research institutions. Prostep ivip's Technical Programme, with its currently over 20 running project groups, is governed by the Technical Steering Committee (TSC).
Cooperations
Prostep ivip maintains and continuously expands its network toward like-minded organizations. Examples for these organizations are AIA, ISO, OMG as well as associations like the French GALIA, the Japanese JAMA, the US-based PDES, Inc., the German VDA .
Publications
ProSTEP iViP publishes Standards, Recommendations, White Paper and Best Practices together with its partner organizations. For example:
ISO 10303-242:2014: STEP AP 242 - Managed model-based 3D engineering
ISO 14306:2012: JT file format specification for 3D visualization
OMG Requirements Interchange Format
ASD-STAN EN 9300 / AIA NAS 9300 Long-term Archiving and Retrieval (LOTAR)
VDA 4965 - Engineering Change Management (ECM)
VDA 4968 - Vehicle Electric Container (VEC)
PSI 11 - Smart Systems Engineering (SmartSE)
PSI 12 - Manufacturing Change Management (MCM)
PSI 13 - OEM-OEM and OEM-Joint Venture Collaboration (PDM to PDM)
PSI 14 - JT Industrial Application Package, Format and Use Cases (JTIAP)
PSI 15 - Enterprise Rights Management (ERM)
PSI 16 - Code of Openness (CPO)
DIN SPEC 91383 - JT Industrial Application Package (JTIAP)
DIN SPEC 91372 - Code of PLM Openness (CPO) - IT Offenheitskriterien (CPO)
Events
Each year in spring prostep ivip conducts one of the world's largest neutral PLM Congresses: the prostep ivip symposium. Beside this, it invites to smaller topic-specific events and Webinars.
References
External links
Homepage
prostep ivip Symposium
YouTube
Motor trade associations
Aerospace
Standards organisations in Germany | ProSTEP iViP | [
"Physics"
] | 863 | [
"Spacetime",
"Space",
"Aerospace"
] |
52,667,139 | https://en.wikipedia.org/wiki/National%20Concrete%20Masonry%20Association | The National Concrete Masonry Association (NCMA) is a United States trade association of manufacturers of concrete and masonry products. The association was founded in 1918.
NCMA publishes methods and specifications, which are used by the industry, and are cited within professional manuals.
NCMA published a monthly magazine, Concrete Masonry Designs, from 2004 until 2010, when it became bi-monthly until 2012. The last edition of the magazine was published in 2015. Beginning in 2015, NCMA began publishing eNews on an almost weekly basis. NCMA holds an annual convention called ICON-Xchange.
NCMA offers certification programs and educational courses that are centered around the practical application of industry knowledge. These programs and courses are designed to address common issues that occur everyday in the field of Masonry.
NCMA operates an ISO/IEC 17025 accredited testing laboratory.
The association once worked with the United States Office of Civil Defense to create a video on how to build a family fallout shelter.
References
External links
Free NCMA promotional and educational materials to advance concrete and masonry
Trade associations based in the United States
Masonry
Stonemasonry
Building materials
Building stone
Construction industry of the United States
Monumental masonry companies | National Concrete Masonry Association | [
"Physics",
"Engineering"
] | 236 | [
"Matter",
"Building engineering",
"Architecture",
"Construction",
"Stonemasonry",
"Materials",
"Masonry",
"Building materials"
] |
41,077,022 | https://en.wikipedia.org/wiki/Earth%27s%20internal%20heat%20budget | Earth's internal heat budget is fundamental to the thermal history of the Earth. The flow of heat from Earth's interior to the surface is estimated at 47±2 terawatts (TW) and comes from two main sources in roughly equal amounts: the radiogenic heat produced by the radioactive decay of isotopes in the mantle and crust, and the primordial heat left over from the formation of Earth.
Earth's internal heat travels along geothermal gradients and powers most geological processes. It drives mantle convection, plate tectonics, mountain building, rock metamorphism, and volcanism. Convective heat transfer within the planet's high-temperature metallic core is also theorized to sustain a geodynamo which generates Earth's magnetic field.
Despite its geological significance, Earth's interior heat contributes only 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. This external energy source powers most of the planet's atmospheric, oceanic, and biologic processes. Nevertheless on land and at the ocean floor, the sensible heat absorbed from non-reflected insolation flows inward only by means of thermal conduction, and thus penetrates only a few dozen centimeters on the daily cycle and only a few dozen meters on the annual cycle. This renders solar radiation minimally relevant for processes internal to Earth's crust.
Global data on heat-flow density are collected and compiled by the International Heat Flow Commission of the International Association of Seismology and Physics of the Earth's Interior.
Heat and early estimate of Earth's age
Based on calculations of Earth's cooling rate, which assumed constant conductivity in the Earth's interior, in 1862 William Thomson, later Lord Kelvin, estimated the age of the Earth at 98 million years, which contrasts with the age of 4.5 billion years obtained in the 20th century by radiometric dating. As pointed out by John Perry in 1895 a variable conductivity in the Earth's interior could expand the computed age of the Earth to billions of years, as later confirmed by radiometric dating. Contrary to the usual representation of Thomson's argument, the observed thermal gradient of the Earth's crust would not be explained by the addition of radioactivity as a heat source. More significantly, mantle convection alters how heat is transported within the Earth, invalidating Thomson's assumption of purely conductive cooling.
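Thomson's estimate can be reproduced with the conductive half-space model he assumed: a body initially at a uniform temperature T0 that cools only by conduction has a surface gradient dT/dz = T0 / √(π κ t), so the inferred age is t = T0² / (π κ (dT/dz)²). The input values in the sketch below are approximations of those Kelvin is usually reported to have used and are included only for illustration.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def conductive_halfspace_age(surface_gradient_K_per_m: float,
                             initial_temp_K: float,
                             diffusivity_m2_per_s: float) -> float:
    """Age (s) of a conductively cooling half-space given its present surface temperature gradient."""
    return initial_temp_K ** 2 / (math.pi * diffusivity_m2_per_s * surface_gradient_K_per_m ** 2)

# Assumed Kelvin-like inputs: gradient ~36.5 K/km, initial temperature ~3900 K, kappa ~1.2e-6 m^2/s.
age_s = conductive_halfspace_age(36.5e-3, 3900.0, 1.18e-6)
print(f"~{age_s / SECONDS_PER_YEAR / 1e6:.0f} million years")  # on the order of 100 Myr, far below 4.5 Gyr
```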
Global internal heat flow
Estimates of the total heat flow from Earth's interior to surface span a range of 43 to 49 terawatts (TW) (a terawatt is 10¹² watts). One recent estimate is 47 TW, equivalent to an average heat flux of 91.6 mW/m², and is based on more than 38,000 measurements. The respective mean heat flows of continental and oceanic crust are 70.9 and 105.4 mW/m².
While the total internal Earth heat flow to the surface is well constrained, the relative contribution of the two main sources of Earth's heat, radiogenic and primordial heat, are highly uncertain because their direct measurement is difficult. Chemical and physical models give estimated ranges of 15–41 TW and 12–30 TW for radiogenic heat and primordial heat, respectively.
The structure of Earth is a rigid outer crust that is composed of thicker continental crust and thinner oceanic crust, solid but plastically flowing mantle, a liquid outer core, and a solid inner core. The fluidity of a material is proportional to temperature; thus, the solid mantle can still flow on long time scales, as a function of its temperature and therefore as a function of the flow of Earth's internal heat. The mantle convects in response to heat escaping from Earth's interior, with hotter and more buoyant mantle rising and cooler, and therefore denser, mantle sinking. This convective flow of the mantle drives the movement of Earth's lithospheric plates; thus, an additional reservoir of heat in the lower mantle is critical for the operation of plate tectonics and one possible source is an enrichment of radioactive elements in the lower mantle.
Earth heat transport occurs by conduction, mantle convection, hydrothermal convection, and volcanic advection. Earth's internal heat flow to the surface is thought to be 80% due to mantle convection, with the remaining heat mostly originating in the Earth's crust, with about 1% due to volcanic activity, earthquakes, and mountain building. Thus, about 99% of Earth's internal heat loss at the surface is by conduction through the crust, and mantle convection is the dominant control on heat transport from deep within the Earth. Most of the heat flow from the thicker continental crust is attributed to internal radiogenic sources; in contrast the thinner oceanic crust has only 2% internal radiogenic heat. The remaining heat flow at the surface would be due to basal heating of the crust from mantle convection. Heat fluxes are negatively correlated with rock age, with the highest heat fluxes from the youngest rock at mid-ocean ridge spreading centers (zones of mantle upwelling), as observed in the global map of Earth heat flow.
Sources of heat
Radiogenic heat
The radioactive decay of elements in the Earth's mantle and crust results in production of daughter isotopes and release of geoneutrinos and heat energy, or radiogenic heat. About 50% of the Earth's internal heat originates from radioactive decay. Four radioactive isotopes are responsible for the majority of radiogenic heat because of their enrichment relative to other radioactive isotopes: uranium-238 (238U), uranium-235 (235U), thorium-232 (232Th), and potassium-40 (40K). Due to a lack of rock samples from below 200 km depth, it is difficult to determine precisely the radiogenic heat throughout the whole mantle, although some estimates are available.
For the Earth's core, geochemical studies indicate that it is unlikely to be a significant source of radiogenic heat due to an expected low concentration of radioactive elements partitioning into iron. Radiogenic heat production in the mantle is linked to the structure of mantle convection, a topic of much debate, and it is thought that the mantle may either have a layered structure with a higher concentration of radioactive heat-producing elements in the lower mantle, or small reservoirs enriched in radioactive elements dispersed throughout the whole mantle.
Geoneutrino detectors can detect the decay of 238U and 232Th and thus allow estimation of their contribution to the present radiogenic heat budget, while 235U and 40K are not thus detectable. Regardless, 40K is estimated to contribute 4 TW of heating. However, due to the short half-lives the decay of 235U and 40K contributed a large fraction of radiogenic heat flux to the early Earth, which was also much hotter than at present. Initial results from measuring the geoneutrino products of radioactive decay from within the Earth, a proxy for radiogenic heat, yielded a new estimate of half of the total Earth internal heat source being radiogenic, and this is consistent with previous estimates.
Primordial heat
Primordial heat is the heat lost by the Earth as it continues to cool from its original formation, and this is in contrast to its still actively-produced radiogenic heat. The Earth core's heat flow—heat leaving the core and flowing into the overlying mantle—is thought to be due to primordial heat, and is estimated at 5–15 TW. Estimates of mantle primordial heat loss range between 7 and 15 TW, which is calculated as the remainder of heat after removal of core heat flow and bulk-Earth radiogenic heat production from the observed surface heat flow.
The early formation of the Earth's dense core could have caused superheating and rapid heat loss, and the heat loss rate would slow once the mantle solidified. Heat flow from the core is necessary for maintaining the convecting outer core and the geodynamo and Earth's magnetic field; therefore primordial heat from the core enabled Earth's atmosphere and thus helped retain Earth's liquid water.
Primordial heat energy comes from the potential energy released by collapsing a large amount of matter into a gravity well, and the kinetic energy of accreted matter.
Heat flow and tectonic plates
Controversy over the exact nature of mantle convection makes the linked evolution of Earth's heat budget and the dynamics and structure of the mantle difficult to unravel. There is evidence that the processes of plate tectonics were not active in the Earth before 3.2 billion years ago, and that early Earth's internal heat loss could have been dominated by advection via heat-pipe volcanism. Terrestrial bodies with lower heat flows, such as the Moon and Mars, conduct their internal heat through a single lithospheric plate, and higher heat flows, such as on Jupiter's moon Io, result in advective heat transport via enhanced volcanism, while the active plate tectonics of Earth occur with an intermediate heat flow and a convecting mantle.
See also
Geothermal energy
Geothermal gradient
Planetary differentiation
Thermal history of the Earth
Anthropogenic heat
External links
References
Earth
Geodynamics
Plate tectonics
Heat transfer
Geothermal energy | Earth's internal heat budget | [
"Physics",
"Chemistry"
] | 1,904 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
41,080,840 | https://en.wikipedia.org/wiki/Gut%E2%80%93brain%20axis | The gut–brain axis is the two-way biochemical signaling that takes place between the gastrointestinal tract (GI tract) and the central nervous system (CNS). The term "microbiota–gut–brain axis" highlights the role of gut microbiota in these biochemical signaling. Broadly defined, the gut–brain axis includes the central nervous system, neuroendocrine system, neuroimmune systems, the hypothalamic–pituitary–adrenal axis (HPA axis), sympathetic and parasympathetic arms of the autonomic nervous system, the enteric nervous system, vagus nerve, and the gut microbiota.
Chemicals released by the gut microbiome can influence brain development, starting from birth. A review from 2015 states that the gut microbiome influences the CNS by "regulating brain chemistry and influencing neuro-endocrine systems associated with stress response, anxiety and memory function". The gut, sometimes referred to as the "second brain", may use the same type of neural network as the CNS, suggesting why it could have a role in brain function and mental health.
The bidirectional communication is done by immune, endocrine, humoral and neural connections between the gastrointestinal tract and the central nervous system. More research suggests that the gut microbiome influences the function of the brain by releasing the following chemicals: cytokines, neurotransmitters, neuropeptides, chemokines, endocrine messengers and microbial metabolites such as "short-chain fatty acids, branched chain amino acids, and peptidoglycans". These chemical signals are then transported to the brain via the blood, neuropod cells, nerves and endocrine cells, where they impact different metabolic processes. Studies have confirmed that the gut microbiome contributes to a range of brain functions controlled by the hippocampus, prefrontal cortex and amygdala (responsible for emotions and motivation) and acts as a key node in the gut-brain behavioral axis.
While Irritable bowel syndrome (IBS) is the only disease confirmed to be directly influenced by the gut microbiome, many disorders (such as anxiety, autism, depression and schizophrenia) have been reportedly linked to the gut-brain axis as well. According to a study from 2017, "probiotics have the ability to restore normal microbial balance, and therefore have a potential role in the treatment and prevention of anxiety and depression".
The first of the brain–gut interactions shown, was the cephalic phase of digestion, in the release of gastric and pancreatic secretions in response to sensory signals, such as the smell and sight of food. This was first demonstrated by Pavlov through Nobel prize winning research in 1904.
As of October 2016, most of the work done on the role of gut microbiota in the gut–brain axis had been conducted in animals, or on characterizing the various neuroactive compounds that gut microbiota can produce. Studies with humans – measuring variations in gut microbiota between people with various psychiatric and neurological conditions or when stressed, or measuring effects of various probiotics (dubbed "psychobiotics" in this context) – had generally been small and were just beginning to be generalized. Whether changes to the gut microbiota are a result of disease, a cause of disease, or both in any number of possible feedback loops in the gut–brain axis, remain unclear.
Enteric nervous system
The enteric nervous system is one of the main divisions of the nervous system and consists of a mesh-like system of neurons that governs the function of the gastrointestinal system; it has been described as a "second brain" for several reasons. The enteric nervous system can operate autonomously. It normally communicates with the central nervous system (CNS) through the parasympathetic (e.g., via the vagus nerve) and sympathetic (e.g., via the prevertebral ganglia) nervous systems. However, vertebrate studies show that when the vagus nerve is severed, the enteric nervous system continues to function.
In vertebrates, the enteric nervous system includes efferent neurons, afferent neurons, and interneurons, all of which make the enteric nervous system capable of carrying reflexes in the absence of CNS input. The sensory neurons report on mechanical and chemical conditions. Through intestinal muscles, the motor neurons control peristalsis and churning of intestinal contents. Other neurons control the secretion of enzymes. The enteric nervous system also makes use of more than 30 neurotransmitters, most of which are identical to the ones found in CNS, such as acetylcholine, dopamine, and serotonin. More than 90% of the body's serotonin lies in the gut, as well as about 50% of the body's dopamine; the dual function of these neurotransmitters is an active part of gut–brain research.
The first of the gut–brain interactions was shown to be between the sight and smell of food and the release of gastric secretions, known as the cephalic phase, or cephalic response of digestion.
Gut microbiota
The gut microbiota is the complex community of microorganisms that live in the digestive tracts of humans and other animals. The gut metagenome is the aggregate of all the genomes of gut microbiota. The gut is one niche that human microbiota inhabit.
In humans, the gut microbiota has the largest quantity of bacteria and the greatest number of species, compared to other areas of the body. In humans, the gut flora is established at one to two years after birth; by that time, the intestinal epithelium and the intestinal mucosal barrier that it secretes have co-developed in a way that is tolerant to, and even supportive of, the gut flora and that also provides a barrier to pathogenic organisms.
The relationship between gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. Human gut microorganisms benefit the host by collecting the energy from the fermentation of undigested carbohydrates and the subsequent absorption of short-chain fatty acids (SCFAs), acetate, butyrate, and propionate. Intestinal bacteria also play a role in synthesizing vitamin B and vitamin K as well as metabolizing bile acids, sterols, and xenobiotics. The SCFAs and other compounds they produce act like hormones, giving them systemic importance, and the gut flora itself appears to function like an endocrine organ; dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions.
The composition of human gut microbiota changes over time, when the diet changes, and as overall health changes. In general, the average human has over 1000 species of bacteria in their gut microbiome, with Bacteroidetes and Firmicutes being the dominant phyla. Diets higher in processed foods and unnatural chemicals can negatively alter the ratios of these species, while diets high in whole foods can positively alter the ratios. Additional health factors that may skew the composition of the gut microbiota are antibiotics and probiotics. Antibiotics have severe impacts on the gut microbiota, ridding it of both good and bad bacteria. Without proper rehabilitation, it can be easy for harmful bacteria to become dominant. Probiotics may help to mitigate this by supplying healthy bacteria into the gut and replenishing the richness and diversity of the gut microbiota. There are many strains of probiotics that can be administered depending on the needs of a specific individual.
Gut–brain integration
The gut–brain axis, a bidirectional neurohumoral communication system, is important for maintaining homeostasis and is regulated through the central and enteric nervous systems and the neural, endocrine, immune, and metabolic pathways, and especially including the hypothalamic–pituitary–adrenal axis (HPA axis). That term has been expanded to include the role of the gut microbiota as part of the "microbiome-gut-brain axis", a linkage of functions including the gut microbiota.
Interest in the field was sparked by a 2004 study (Nobuyuki Sudo and Yoichi Chida) showing that germ-free mice (genetically homogeneous laboratory mice, birthed and raised in an antiseptic environment) showed an exaggerated HPA axis response to stress, compared to non-GF laboratory mice.
The gut microbiota can produce a range of neuroactive molecules, such as acetylcholine, catecholamines, γ-aminobutyric acid, histamine, melatonin, and serotonin, which are essential for regulating peristalsis and sensation in the gut. Changes in the composition of the gut microbiota due to diet, drugs, or disease correlate with changes in levels of circulating cytokines, some of which can affect brain function. The gut microbiota also release molecules that can directly activate the vagus nerve, which transmits information about the state of the intestines to the brain.
Likewise, chronic or acutely stressful situations activate the hypothalamic–pituitary–adrenal axis, causing changes in the gut microbiota and intestinal epithelium, and possibly having systemic effects. Additionally, the cholinergic anti-inflammatory pathway, signaling through the vagus nerve, affects the gut epithelium and microbiota. Hunger and satiety are integrated in the brain, and the presence or absence of food in the gut and types of food present also affect the composition and activity of gut microbiota.
Most of the work that has been done on the role of gut microbiota in the gut–brain axis has been conducted in animals, including the highly artificial germ-free mice. As of 2016, studies with humans measuring changes to gut microbiota in response to stress, or measuring effects of various probiotics, have generally been small and cannot be generalized; whether changes to gut microbiota are a result of disease, a cause of disease, or both in any number of possible feedback loops in the gut–brain axis, remains unclear.
The concept is of special interest in autoimmune diseases such as multiple sclerosis. This process is thought to be regulated via the gut microbiota, which ferment indigestible dietary fibre and resistant starch; the fermentation process produces short chain fatty acids (SCFAs) such as propionate, butyrate, and acetate.
The history of ideas about a relationship between the gut and the mind dates from the nineteenth century.
Clinical significance
While Irritable bowel syndrome (IBS) is the only disease confirmed to be directly influenced by the gut microbiome, many disorders such as anxiety, autism, depression and schizophrenia have been linked to the gut-brain axis as well.
Skin conditions
Skin conditions such as acne were proposed, as early as 1930, to be related to emotional states which altered the gut microbiome, leading to systemic inflammation. Such conditions have been improved by the use of probiotics. Studies have shown overlapping mechanisms in psoriasis and depression: psoriasis causes disturbances in the gut microbiota that are reflected in the brain, causing depression, which in turn can cause the stress that affects the microbiome. Probiotics may reduce symptoms of depression through the vagus nerve and sympathetic pathways.
Irritable bowel syndrome
Irritable bowel syndrome (IBS) can cause many abdominal issues such as symptoms of constipation, diarrhea, gas, bloating, and abdominal pain. IBS can be stress-induced and flare-ups are associated with bouts of stress. The gut-brain axis may explain this. The use of probiotics has been shown to help to restore a balance of helpful and harmful bacteria.
Anxiety
Brain function is dependent on multiple neuropeptides and neurotransmitters, including dopamine, GABA and serotonin, that are regulated by the gut microbiota. Imbalances in the gut microbiota intensify anxiety, as both the immune and metabolic pathways are affected. Specific microbes can lead to increased anxiety due to the activation of c-Fos proteins. These proteins serve as indicators of neuronal activation. Probiotics have beneficial impacts on anxiety.
Autism
Studies have shown that children with autism are four times more likely to develop gastrointestinal disorders. The severity of their behavioral symptoms is proportional to the severity of their gastrointestinal issues. Many children with autism have high fecal levels of HMGB1.
Schizophrenia
Different neurotrophins play a role in schizophrenia. One of the main ones is called Brain-Derived Neurotrophic Factor (BDNF). BDNF has been associated with schizophrenia and is believed to be a part of the molecular mechanism that has to do with cognitive dysfunction during neurodevelopmental changes. Those who have been diagnosed with schizophrenia tend to exhibit lower levels of BDNF in blood and levels of BDNF are also lower in the cortex and hippocampus. Levels of butyric acid have also been shown to be different between schizophrenic patients and non-schizophrenic patients. It is important to note that studies regarding the link between the gut-brain axis and schizophrenia are limited and further studies are underway.
Parkinson's disease
Braak's theory proposed that gut dysbiosis in Parkinson's causes the aggregation of alpha-synuclein in the gastrointestinal tract before its spreading to the brain.
The gut-brain microbiota abnormalities that contribute to Parkinson's disease support the idea that it originates in the gut and spreads from there to the central nervous system through the vagus nerve. Gastrointestinal syndromes such as dysphagia, gastroparesis, and constipation, among others, contribute to the risk of Parkinson's disease. Based on this understanding, disease-modifying therapies that focus on the gut-brain axis aim to help prevent the progression of the disease. Relevant therapies include vagus nerve stimulation, fecal microbiota transplantation, and the use of rifaximin and other drugs directed towards the gut.
Bile acids and cognitive function
Microbial derived secondary bile acids produced in the gut may influence cognitive function. Altered bile acid profiles occur in cases of mild cognitive impairment and Alzheimer's disease with an increase in cytotoxic secondary bile acids and a decrease in primary bile acids. These findings suggest a role of the gut microbiome in the progression to Alzheimer's disease. In contrast to the cytotoxic effect of secondary bile acids, the bile acid tauroursodeoxycholic acid may be beneficial in the treatment of neurodegenerative diseases.
As more bile acids are absorbed via apical sodium-bile acid transporters, there is a significant increase in age-related cognitive impairment. Levels of serum conjugated primary bile acids were monitored and increased levels revealed ammonia accumulation in the brain. These increased levels of ammonia led to hippocampal synapse loss. Because the hippocampus is largely responsible for memory, the loss of these synapses can have profound impacts on the memories of those affected.
References
External links
Gut flora
Digestive system
Bacillota
Environmental microbiology
Brain
Microbiomes
Parkinson's disease | Gut–brain axis | [
"Biology",
"Environmental_science"
] | 3,292 | [
"Digestive system",
"Organ systems",
"Microbiomes",
"Environmental microbiology",
"Gut flora"
] |
41,083,964 | https://en.wikipedia.org/wiki/Genetically%20modified%20tree | A genetically modified tree (GMt, GM tree, genetically engineered tree, GE tree or transgenic tree) is a tree whose DNA has been modified using genetic engineering techniques. In most cases the aim is to introduce a novel trait to the plant which does not occur naturally within the species. Examples include resistance to certain pests, diseases, environmental conditions, and herbicide tolerance, or the alteration of lignin levels in order to reduce pulping costs.
Genetically modified forest trees are not yet approved ("deregulated") for commercial use with the exception of insect-resistant poplar trees in China and one case of GM Eucalyptus in Brazil. Several genetically modified forest tree species are undergoing field trials for deregulation, and much of the research is being carried out by the pulp and paper industry, primarily with the intention of increasing the productivity of existing tree stock. Certain genetically modified orchard tree species have been deregulated for commercial use in the United States including the papaya and plum. The development, testing and use of GM trees remains at an early stage in comparison to GM crops.
Research
Research into genetically modified trees has been ongoing since 1988. Concerns surrounding the biosafety implications of releasing genetically modified trees into the wild have held back regulatory approval of GM forest trees. This concern is exemplified in the Convention on Biological Diversity's stance:
A precondition for further commercialization of GM forest trees is likely to be their complete sterility. Plantation trees remain phenotypically similar to their wild cousins in that most are the product of no more than three generations of artificial selection, therefore, the risk of transgene escape by pollination with compatible wild species is high. One of the most credible science-based concerns with GM trees is their potential for wide dispersal of seed and pollen. The fact that pine pollen travels long distances is well established, moving up to 3,000 kilometers from its source. Additionally, many tree species reproduce for a long time before being harvested. In combination these factors have led some to believe that GM trees are worthy of special environmental considerations over GM crops. Ensuring sterility for GM trees has proven elusive, but efforts are being made. While tree geneticist Steve Strauss predicted that complete containment might be possible by 2020, many questions remain.
Proposed uses
GM trees under experimental development have been modified with traits intended to provide benefit to industry, foresters or consumers. Due to high regulatory and research costs, the majority of genetically modified trees in silviculture consist of plantation trees, such as eucalyptus, poplar, and pine.
Lignin alteration
Several companies and organizations (including ArborGen, GLBRC, ...) in the pulp and paper industry are interested in utilizing GM technology to alter the lignin content of plantation trees (particularly eucalyptus and poplar trees). It is estimated that reducing lignin in plantation trees by genetic modification could reduce pulping costs by up to $15 per cubic metre. Lignin removal from wood fibres conventionally relies on costly and environmentally hazardous chemicals. By developing low-lignin GM trees it is hoped that pulping and bleaching processes will require fewer inputs; therefore, mills supplied by low-lignin GM trees may have a reduced impact on their surrounding ecosystems and communities. However, it is argued that reductions in lignin may compromise the structural integrity of the plant, thereby making it more susceptible to wind, snow, pathogens and disease, which could necessitate pesticide use exceeding that on traditional plantations. This has proven correct, and an alternative approach was developed at the University of British Columbia. This approach was to introduce chemically labile linkages instead (by inserting a gene from the plant Angelica sinensis), which allows the lignin to break down much more easily. Due to this new approach, the lignin from the trees not only easily breaks apart when treated with a mild base at temperatures of 100 degrees C, but the trees also maintain their growth potential and strength.
Frost tolerance
Genetic modification can allow trees to cope with abiotic stresses such that their geographic range is broadened. Freeze-tolerant GM eucalyptus trees for use in southern US plantations are currently being tested in open air sites with such an objective in mind. ArborGen, a tree biotechnology company and joint venture of pulp and paper firms Rubicon (New Zealand), MeadWestvaco (US) and International Paper (US) is leading this research. Until now the cultivation of eucalyptus has only been possible on the southern tip of Florida, freeze-tolerance would substantially extend the cultivation range northwards.
Reduced vigour
Orchard trees require a rootstock with reduced vigour to allow them to remain small.
Genetic modification could allow the elimination of the rootstock, by making the tree less vigorous, hence reducing its height when fully mature. Research is being done into which genes are responsible for the vigour in orchard trees (such as apples, pears, ...).
Accelerated growth
In Brazil, field trials of fast growing GM eucalyptus are currently underway, they were set to conclude in 2015–2016 with commercialization to result. FuturaGene, a biotechnology company owned by Suzano, a Brazilian pulp and paper company, has been leading this research. Stanley Hirsch, chief executive of FuturaGene has stated: "Our trees grow faster and thicker. We are ahead of everyone. We have shown we can increase the yields and growth rates of trees more than anything grown by traditional breeding." The company is looking to reduce harvest cycles from 7 to 5.5 years with 20-30% more mass than conventional eucalyptus. There is concern that such objectives may further exacerbate the negative impacts of plantation forestry. Increased water and soil nutrient demand from faster growing species may lead to irrecoverable losses in site productivity and further impinge upon neighbouring communities and ecosystems. Researchers at the University of Manchester's Faculty of Life Sciences modified two genes in poplar trees, called PXY and CLE, which are responsible for the rate of cell division in tree trunks. As a result, the trees are growing twice as fast as normal, and also end up being taller, wider and with more leaves.
Disease resistance
Ecologically motivated research into genetic modification is underway. There are ongoing schemes that aim to foster disease resistance in trees such as the American chestnut (see Chestnut blight) and the English elm (see Dutch elm disease) for the purpose of their reintroduction to the wild. Specific diseases have reduced the populations of these emblematic species to the extent that they are mostly lost in the wild. Genetic modification is being pursued concurrently with traditional breeding techniques in an attempt to endow these species with disease resistance.
Current uses
Poplars in China
In 2002 China's State Forestry Administration approved GM poplar trees for commercial use. Subsequently, 1.4 million Bt (insecticide) producing GM poplars were planted in China. They were planted both for their wood and as part of China's 'Green Wall' project, which aims to impede desertification. Reports indicate that the GM poplars have spread beyond the area of original planting and that contamination of native poplars with the Bt gene is occurring. There is concern with these developments, particularly because the pesticide producing trait may impart a positive selective advantage on the poplar, allowing it a high level of invasiveness.
Living Carbon in the USA
Living Carbon, an American biotechnology company founded in 2019, has developed genetically engineered hybrid poplar trees aimed at enhancing carbon sequestration. These trees have been modified to improve photosynthetic efficiency, enabling them to capture more carbon dioxide (CO₂) and produce greater woody biomass than conventional trees. Living Carbon’s mission is to leverage technology to combat climate change while promoting biodiversity and restoring degraded ecosystems.
Development and Deployment
Living Carbon’s genetically modified trees were first planted in a bottomland forest in Georgia, USA, in February 2023. Early field trials indicated that these trees achieved a 53% increase in above-ground biomass compared to control groups, enabling them to absorb 27% more carbon. The company generates revenue by selling carbon credits derived from these forests to individuals and businesses seeking to offset greenhouse gas emissions.
Benefits and Potential
Supporters of Living Carbon’s approach highlight its potential to contribute to global climate solutions, particularly if deployed on a large scale. The modified trees are targeted for use in afforestation and reforestation projects on degraded land, where they can aid in carbon capture and ecosystem restoration without displacing native species. These projects also aim to enhance biodiversity while addressing environmental degradation.
Controversies and Challenges
The deployment of genetically modified trees has been met with skepticism. Critics, including some forestry and genetic experts, question whether the trees will meet carbon absorption expectations outside controlled laboratory settings. Concerns have also been raised about the potential ecological risks, such as the unintended spread of genetically modified traits to wild tree populations, which could disrupt native ecosystems.
Maddie Hall, co-founder of Living Carbon, has addressed these concerns, emphasizing the urgency of climate action and the limitations of waiting for natural evolutionary processes to improve tree resilience. However, experts note that achieving success in lab or greenhouse trials does not guarantee similar outcomes in complex, natural environments.
See also
Genetically modified crops
Genetically modified food
Genetically modified organisms
Plantations
Regulation of the release of genetic modified organisms
Tree breeding
References
Genetically modified organisms
Environmental issues with forests
Trees | Genetically modified tree | [
"Engineering",
"Biology"
] | 1,906 | [
"Genetic engineering",
"Genetically modified organisms"
] |
43,939,378 | https://en.wikipedia.org/wiki/Pressure-induced%20hydration | Pressure-induced hydration (PIH), also known as “super-hydration”, is a special case of pressure-induced insertion whereby water molecules are injected into the pores of microporous materials. In PIH, a microporous material is placed under pressure in the presence of water in the pressure-transmitting fluid of a diamond anvil cell.
Early physical characterization and initial diffraction experiments in zeolites were followed by the first unequivocal structural characterization of PIH in the small-pore zeolite natrolite (Na16Al16Si24O80·16H2O), which in its fully super-hydrated form, Na16Al16Si24O80·32H2O, doubles the amount of water it contains in its pores.
PIH has now been demonstrated in natrolites containing Li, K, Rb and Ag as monovalent cations as well as in large-pore zeolites, pyrochlores, clays and graphite oxide.
Using the noble gases Ar, Kr, and Xe as well as CO2 as pressure-transmitting fluids, researchers have prepared and structurally characterized the products of reversible, pressure-induced insertion of Ar, Kr, and CO2 as well as the irreversible insertion of Xe and water.
References
Chemistry
Molecules | Pressure-induced hydration | [
"Physics",
"Chemistry"
] | 282 | [
"Molecular physics",
"Molecules",
"Physical objects",
"nan",
"Atoms",
"Matter"
] |
43,940,392 | https://en.wikipedia.org/wiki/Nano%20flake | In a general meaning a Nano flake is a flake (that is, an uneven piece of material with one dimension substantially smaller than the other two) with at least one nanometric dimension (that is, between 1 and 100 nm). A flake is not necessarily perfectly flat but it is characterized by a plate-like form or structure. There are nanoflakes of all sorts of materials.
In a more restricted meaning, in the context of solar energy, Nano flakes are a type of semiconductor with potential for solar energy generation, although the product itself is still only in the prototype phase. Thanks to their crystalline structure, the crystals are able to absorb light and harvest 30 percent of the solar energy directed at their surface.
Structure
Nano flakes have a structure made of tiny crystals, millions of which could fit into a single square centimeter. The tiny crystals absorb the sunlight and use the solar energy to convert it to electricity. This perfect crystalline structure is why this product can revolutionize solar energy. The large surface-to-volume ratio and the texture of the surface of this nano structure provide a larger absorption rate of the sun's light energy. Researchers are also working on combining it with different semiconducting materials, since the usual requirement of a similar crystal structure for the carrier substrate is less critical in the Nano flake structure. The purpose of the carrier substrate in Nano flakes is to permit growth of the nano structures, and it works as a contact for the nano structures when they are actively absorbing the sun's energy.
Purpose
Solar energy obtained from the Nano flakes can help benefit in a couple of ways. Nano flakes can potentially help lower the cost of solar energy. Also since more solar energy can theoretically be obtained from Nano flakes, their use can potentially keep the earth's environment cleaner by reducing the need for fossil fuels.
Cost
The high cost of solar energy stems from the difficulty of converting the solar energy into electricity for use, and less than 1 percent of the world's electricity comes from the sun because of this process. Nano flakes can potentially help with the economic issues of solar energy by lowering the cost, thanks to an easier process and a better energy yield. Nano flake technology can potentially make it easier to convert solar energy into electricity, with an estimated yield of twice the amount that today's solar cells can harvest. This new technology can also potentially lower the cost of solar energy because it allows for a reduction in expensive semiconducting silicon. Energy loss is also potentially reduced because the solar energy is transported over shorter distances across the smaller Nano flakes.
Environment
Nano flake technology can also help keep the environment cleaner: with the sun as the source, it produces clean, pure, sustainable energy that can be converted into electricity. While fossil fuel is the primary energy source for electricity, using solar energy obtained from Nano flakes will lower dependence on fossil fuels. When fossil fuels are burned for use they release toxic gases, which have a huge impact on the earth's pollution. Also, the process of obtaining these fossil fuels is not good for the environment, whether it be mining for coal, drilling for oil, or hydraulic fracturing of the earth's surface to reach the oil and gas.
Research
One researcher working on Nano flake technology is Dr. Martin Aagesen at the Niels Bohr Institute at the University of Copenhagen, who holds a PhD from the Nano Science Center. Aagesen discovered and published information about the science of Nano flakes in 2007. Aagesen is Chief Executive Officer of SunFlake, launched from the Nano Science Center. Funding for Nano flake science came from Danish Venture Capital fund SEED capital and University of Copenhagen.
See also
Nano-Science Center (Copenhagen University)
References
Nanotechnology | Nano flake | [
"Materials_science",
"Engineering"
] | 751 | [
"Nanotechnology",
"Materials science"
] |
43,942,620 | https://en.wikipedia.org/wiki/Sanatogen | Sanatogen was a "brain tonic" invented by the Bauer Chemical Company, in Germany in 1898 and sold worldwide
In the US it was advertised as a "nerve revitaliser". The medicine was prohibited in Australia in 1915 during World War I and a British-made substitute "Sanagen" was introduced to the Australian market the following year, claiming to be "identical to Sanatogen". The product became fashionable in China in the early 20th century and won the favour of many renowned people.
The indications or uses for this product provided by the manufacturer were: "Food tonic. A concentrated nutrient with tonic properties... easily digested and absorbed and is recommended as an effective means of reinforcing the daily diet of anaemic and convalescent patients."
Product information
The ingredients have been described as:
pure milk protein 95% (drug active ingredients)
sodium glycerophosphate 5% (drug active ingredients)
Marketing
The product was marketed extensively in Europe and the US with such slogans as "Endorsed by over 20,000 Physicians".
Court case
In 1913 the product was at the centre of a United States Supreme Court case: Bauer & Cie. v. O'Donnell. The product was also the subject of intellectual property litigation worldwide.
Other products
The Sanatogen brand persists in the UK as a fortified wine, with an alcohol content of 15%.
Sanatogen is also the name of a modern multivitamin product manufactured by Fisons before being sold to Roche, and later Bayer.
References
Products introduced in 1898
Pharmaceutical industry
Patent medicines | Sanatogen | [
"Chemistry",
"Biology"
] | 322 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
43,947,283 | https://en.wikipedia.org/wiki/Interference%20of%20the%20footings | The Interference of the footings is a phenomenon that is observed when two footings are closely spaced. The buildings when are to be constructed nearby to each other, the architectural requirements or the less availability of space for the construction forces the engineers to place the foundation footings close to each other, and when foundations are placed close to each other with similar soil conditions, the Ultimate Bearing Capacity of each foundation may change due to the interference effect of the failure surface in the soil.
Introduction
Foundations or groups of foundations are important components of a structure, through which the superstructure loads are transmitted to the underlying foundation soil or bed on which the foundations are laid. The structural loads must be transmitted to the foundation soil safely, such that neither the foundation fails nor the foundation soil fails, either in shear or in excessive settlement. Foundations are basically designed based on two criteria, namely the bearing capacity and settlement criteria. Many classical theories have been postulated for isolated foundations by pioneers such as Terzaghi (1943), Meyerhof (1963), Hansen (1970) and Vesic (1973). In general, as per Terzaghi (1943), when an isolated shallow foundation is loaded, the stress or failure zone in the foundation soil extends in the horizontal direction on either side of the footing to about twice the width of the footing, and in the vertical downward direction to about three times the width of the footing. As long as the stress or failure zones of individual footings do not interfere, the individual footings behave as isolated footings. However, in many situations, such as lack of construction space, structural restrictions, rapid urbanization, the architecture of the building, or structures close to each other, the foundations or groups of foundations may be placed close to each other. In such cases the stress isobars or failure zones of closely spaced isolated footings may interfere with each other, leading to the phenomenon called interference. Owing to the phenomenon of footing interference, the failure mechanism, load-settlement behaviour, bearing capacity, settlement and rotational characteristics of an isolated footing may be altered, and therefore the classical theories postulated in the literature for isolated footings cannot be applied. Due to interference, the stress isobars of individual interacting footings coalesce to form a single isobar of larger dimensions, altering the characteristic behaviour of an isolated footing. Therefore, the study of interference of closely spaced footings is of significant practical importance.
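The ~2B lateral extent quoted above gives a simple geometric rule of thumb for when two equal footings might start to interact. The sketch below only encodes that overlap criterion; it is illustrative, not a substitute for an actual bearing-capacity analysis, and the function name and default factor are assumptions made for the example:

```python
def footings_may_interfere(width_b: float, clear_spacing: float,
                           lateral_extent_factor: float = 2.0) -> bool:
    """Rough geometric check for two equal strip footings.

    Each footing's stress (failure) zone is assumed to extend
    lateral_extent_factor * width_b horizontally from its edge, following the
    ~2B rule quoted in the text. The zones overlap, and interference effects
    may arise, when the clear spacing is smaller than the combined extents.
    """
    return clear_spacing < 2.0 * lateral_extent_factor * width_b

# Example: 1.5 m wide footings with a 4 m clear spacing
print(footings_may_interfere(width_b=1.5, clear_spacing=4.0))   # True: 4 m < 6 m combined extent
```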
Previous studies
Stuart was the first to study the interference phenomenon of closely spaced surface strip footings. He examined the effect of footing interference on the ultimate bearing capacity of strip footings by theoretical analysis using the limit equilibrium method, assuming a non-linear failure surface whose cross-section is composed of a logarithmic spiral and a straight-line portion tangent to the curvilinear portion. Stuart (1962) further carried out a few small-scale laboratory experiments, compared the theoretical results with the experimental ones, and concluded that the ultimate bearing capacity of two interfering footings increases with decreasing spacing between the footings and attains a peak magnitude at some spacing termed the critical spacing. The study of Stuart (1962) was further extended by West and Stuart (1965), who performed a series of small-scale laboratory tests to examine the effect of interference on the bearing capacity of strip footings resting on the surface of a cohesionless soil bed. Moreover, West and Stuart (1965) carried out a few theoretical analyses using the method of stress characteristics to observe the eccentricity of load and the reactions at the base of the footing resulting from the interference effect for footings resting on the surface of sand. The results obtained from this theory were smaller than those observed by Stuart (1962) using the limit equilibrium method; however, the trend was similar to the variation observed by Stuart (1962), and the experimental results reasonably matched those of the theoretical analysis. Later researchers have studied interfering footings by theoretical or numerical techniques, making use of the following methods: the method of stress characteristics, analytical methods, probabilistic approaches, upper bound limit analysis, lower bound limit analysis, the finite element method, the finite difference method, and the distinct element method.
Observed results
The bearing capacity of soil is influenced by many factors, for instance soil strength, foundation width and depth, soil weight and surcharge, and spacing between foundations. These factors are related to the loads exerted on the soil and considerably affect the bearing capacity.
References
Soil mechanics
Civil engineering
Shallow foundations | Interference of the footings | [
"Physics",
"Engineering"
] | 894 | [
"Soil mechanics",
"Civil engineering",
"Applied and interdisciplinary physics",
"Construction"
] |
54,013,249 | https://en.wikipedia.org/wiki/Day%E2%80%93evening%E2%80%93night%20noise%20level | The day–evening–night noise level or L is a 2002 European standard to express noise level over an entire day. It imposes a penalty on sound levels during evening and night and it is primarily used for noise assessments of airports, busy main roads, main railway lines and in cities over 100,000 residents. The penalty for sound production during evenings and nights is due to higher nuisance perception during quieter hours and to prevent sleep deprivation for nearby residents.
Definition
Lden is calculated as:

Lden = 10 · log10[ (1/24) · ( 12 · 10^(Lday/10) + 4 · 10^((Levening + 5)/10) + 8 · 10^((Lnight + 10)/10) ) ]

where Lday, Levening and Lnight are the A-weighted long-term average sound levels determined over all the day, evening and night periods of a year, respectively.
The exact hours of the three periods may be chosen differently by individual EU member states.
The formula for Lden can be considered a weighted average of the yearly individual noise levels during day, evening and night.
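A minimal sketch of the calculation, assuming the default 12 h day / 4 h evening / 8 h night split (member states may define the periods differently, as noted above); the function name and example levels are purely illustrative:

```python
import math

def l_den(l_day: float, l_evening: float, l_night: float,
          hours=(12.0, 4.0, 8.0)) -> float:
    """Day-evening-night level in dB, with +5 dB evening and +10 dB night penalties."""
    h_day, h_eve, h_night = hours
    total_hours = h_day + h_eve + h_night            # normally 24 h
    energy_sum = (h_day   * 10 ** (l_day / 10.0)
                + h_eve   * 10 ** ((l_evening + 5.0) / 10.0)
                + h_night * 10 ** ((l_night + 10.0) / 10.0))
    return 10.0 * math.log10(energy_sum / total_hours)

# Equal 60 dB long-term levels in all three periods still give an Lden above 60 dB,
# because of the evening and night penalties.
print(round(l_den(60.0, 60.0, 60.0), 1))   # about 66.4
```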
See also
Environmental noise directive
Day-night average sound level, the US equivalent
References
Noise
Noise pollution
Audiology
Sounds by type
Urbanization
Urban planning
Acoustics | Day–evening–night noise level | [
"Physics",
"Engineering"
] | 181 | [
"Urban planning",
"Classical mechanics",
"Acoustics",
"Architecture"
] |
54,016,907 | https://en.wikipedia.org/wiki/Time-translation%20symmetry | Time-translation symmetry or temporal translation symmetry (TTS) is a mathematical transformation in physics that moves the times of events through a common interval. Time-translation symmetry is the law that the laws of physics are unchanged (i.e. invariant) under such a transformation. Time-translation symmetry is a rigorous way to formulate the idea that the laws of physics are the same throughout history. Time-translation symmetry is closely connected, via Noether's theorem, to conservation of energy. In mathematics, the set of all time translations on a given system form a Lie group.
There are many symmetries in nature besides time translation, such as spatial translation or rotational symmetries. These symmetries can be broken and explain diverse phenomena such as crystals, superconductivity, and the Higgs mechanism. However, it was thought until very recently that time-translation symmetry could not be broken. Time crystals, a state of matter first observed in 2017, break time-translation symmetry.
Overview
Symmetries are of prime importance in physics and are closely related to the hypothesis that certain physical quantities are only relative and unobservable. Symmetries apply to the equations that govern the physical laws (e.g. to a Hamiltonian or Lagrangian) rather than the initial conditions, values or magnitudes of the equations themselves and state that the laws remain unchanged under a transformation. If a symmetry is preserved under a transformation it is said to be invariant. Symmetries in nature lead directly to conservation laws, something which is precisely formulated by Noether's theorem.
Newtonian mechanics
To formally describe time-translation symmetry we say the equations, or laws, that describe a system at times t and t + τ are the same for any values of t and τ.
For example, considering Newton's equation for a particle of mass m in a time-independent potential V(x):

m d²x/dt² = −dV(x)/dx,

one finds for its solutions the combination:

(1/2) m (dx/dt)² + V(x)

does not depend on the variable t. Of course, this quantity describes the total energy whose conservation is due to the time-translation invariance of the equation of motion. By studying the composition of symmetry transformations, e.g. of geometric objects, one reaches the conclusion that they form a group and, more specifically, a Lie transformation group if one considers continuous, finite symmetry transformations. Different symmetries form different groups with different geometries. Time independent Hamiltonian systems form a group of time translations that is described by the non-compact, abelian Lie group ℝ (the real numbers under addition). TTS is therefore a dynamical or Hamiltonian dependent symmetry rather than a kinematical symmetry which would be the same for the entire set of Hamiltonians at issue. Other examples can be seen in the study of time evolution equations of classical and quantum physics.
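A numerical sketch of the statement above: integrating Newton's equation for a time-independent potential and checking that the combination (1/2)m(dx/dt)² + V(x) stays constant. The harmonic potential, step size and tolerance are arbitrary choices made for the illustration:

```python
# Harmonic oscillator: V(x) = 0.5*k*x**2, so m * d2x/dt2 = -dV/dx = -k*x
m, k = 1.0, 4.0
x, v = 1.0, 0.0        # initial position and velocity
dt = 1e-4

def energy(x, v):
    return 0.5 * m * v ** 2 + 0.5 * k * x ** 2

e0 = energy(x, v)
for _ in range(100_000):            # semi-implicit Euler keeps the energy error bounded
    v += (-k * x / m) * dt
    x += v * dt

print(abs(energy(x, v) - e0) < 1e-3 * e0)   # True: total energy is (numerically) conserved
```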
Many differential equations describing time evolution equations are expressions of invariants associated to some Lie group and the theory of these groups provides a unifying viewpoint for the study of all special functions and all their properties. In fact, Sophus Lie invented the theory of Lie groups when studying the symmetries of differential equations. The integration of a (partial) differential equation by the method of separation of variables or by Lie algebraic methods is intimately connected with the existence of symmetries. For example, the exact solubility of the Schrödinger equation in quantum mechanics can be traced back to the underlying invariances. In the latter case, the investigation of symmetries allows for an interpretation of the degeneracies, where different configurations to have the same energy, which generally occur in the energy spectrum of quantum systems. Continuous symmetries in physics are often formulated in terms of infinitesimal rather than finite transformations, i.e. one considers the Lie algebra rather than the Lie group of transformations
Quantum mechanics
The invariance of a Hamiltonian H of an isolated system under time translation implies its energy does not change with the passage of time. Conservation of energy implies, according to the Heisenberg equations of motion, that

dH/dt = 0,

or:

[H, T(τ)] = 0,

where T(τ) = exp(−iHτ/ħ) is the time-translation operator; this expresses the invariance of the Hamiltonian under the time-translation operation and leads to the conservation of energy.
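A small numerical check of the commutation relation above for a finite-dimensional example. The 4×4 random Hermitian matrix stands in for a Hamiltonian purely for illustration (units with ħ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian "Hamiltonian"

tau = 0.7
w, V = np.linalg.eigh(H)                    # spectral decomposition H = V diag(w) V^dagger
T = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T   # time-translation operator exp(-i H tau)

print(np.allclose(H @ T, T @ H))            # True: [H, T(tau)] = 0
print(np.allclose(T.conj().T @ H @ T, H))   # True: H is invariant under the time translation
```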
Nonlinear systems
In many nonlinear field theories like general relativity or Yang–Mills theories, the basic field equations are highly nonlinear and exact solutions are only known for ‘sufficiently symmetric’ distributions of matter (e.g. rotationally or axially symmetric configurations). Time-translation symmetry is guaranteed only in spacetimes where the metric is static: that is, where there is a coordinate system in which the metric coefficients contain no time variable. Many general relativity systems are not static in any frame of reference so no conserved energy can be defined.
Time-translation symmetry breaking (TTSB)
Time crystals, a state of matter first observed in 2017, break discrete time-translation symmetry.
See also
Absolute time and space
Mach's principle
Spacetime
Time reversal symmetry
References
External links
The Feynman Lectures on Physics – Time Translation
Concepts in physics
Conservation laws
Energy (physics)
Laws of thermodynamics
Quantum field theory
Spacetime
Symmetry
Time in physics
Theory of relativity
Thermodynamics | Time-translation symmetry | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,021 | [
"Physical phenomena",
"Physical quantities",
"Quantum mechanics",
"Space (mathematics)",
"Thermodynamics",
"Dynamical systems",
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Laws of thermodynamics",
"Time in physics",
"Equations of physics",
"Quantity",
"Geomet... |
54,021,404 | https://en.wikipedia.org/wiki/Charge%20transport%20mechanisms | Charge transport mechanisms are theoretical models that aim to quantitatively describe the electric current flow through a given medium.
Theory
Crystalline solids and molecular solids are two opposite extreme cases of materials that exhibit substantially different transport mechanisms. While in atomic solids transport is intra-molecular, also known as band transport, in molecular solids the transport is inter-molecular, also known as hopping transport. The two different mechanisms result in different charge mobilities.
In disordered solids, disordered potentials result in weak localization effects (traps), which reduce the mean free path, and hence the mobility, of mobile charges. Carrier recombination also decreases mobility.
Starting with Ohm's law and using the definition of conductivity, it is possible to derive the following common expression for current as a function of carrier mobility μ and applied electric field E:

I = n q μ E A,

where n is the concentration of charge carriers, q is the elementary charge and A is the cross-sectional area through which the current flows (equivalently, the current density is J = n q μ E).
The relationship holds when the concentration of localized states is significantly higher than the concentration of charge carriers, and assuming that hopping events are independent from each other.
Generally, the carrier mobility μ depends on temperature T, on the applied electric field E, and the concentration of localized states N. Depending on the model, increased temperature may either increase or decrease carrier mobility, applied electric field can increase mobility by contributing to thermal ionization of trapped charges, and increased concentration of localized states increases the mobility as well. Charge transport in the same material may have to be described by different models, depending on the applied field and temperature.
Concentration of localized states
Carrier mobility strongly depends on the concentration of localized states in a non-linear fashion. In the case of nearest-neighbour hopping, which is the limit of low concentrations, the following expression can be fitted to the experimental results:

μ ∝ exp(−α / (N^(1/3) a)),

where N is the concentration and a is the localization length of the localized states, and α is a numerical coefficient. This equation is characteristic of incoherent hopping transport, which takes place at low concentrations, where the limiting factor is the exponential decay of hopping probability with inter-site distance.
Sometimes this relation is expressed for conductivity, rather than mobility:

σ = σ0 exp(−α / (N0^(1/3) ξ)),

where N0 is the concentration of randomly distributed sites, σ0 is concentration independent, ξ is the localization radius, and α is a numerical coefficient.
At high concentrations, a deviation from the nearest-neighbour model is observed, and variable-range hopping is used instead to describe transport. Variable range hopping can be used to describe disordered systems such as molecularly-doped polymers, low molecular weight glasses and conjugated polymers. In the limit of very dilute systems, the nearest-neighbour dependence is valid, but only with .
Temperature dependence
At low carrier densities, the Mott formula for temperature-dependent conductivity is used to describe hopping transport. In variable-range hopping it is given by:

σ = σ0 exp(−(T0/T)^(1/4)),

where T0 is a parameter signifying a characteristic temperature. For low temperatures, assuming a parabolic shape of the density of states near the Fermi level, the conductivity is given by:

σ = σ0 exp(−(T0/T)^(1/2)).
At high carrier densities, an Arrhenius dependence is observed:

μ ∝ exp(−Ea / (kB T)),

where Ea is an activation energy and kB is the Boltzmann constant. In fact, the electrical conductivity of disordered materials under DC bias has a similar form for a large temperature range, also known as activated conduction:

σ = σ0 exp(−Ea / (kB T)).
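A sketch contrasting the two temperature laws quoted above. The prefactors, characteristic temperature and activation energy are arbitrary illustrative values rather than material data:

```python
import numpy as np

def mott_vrh(T, sigma0=1.0, T0=1.0e6):
    """Mott variable-range hopping: sigma = sigma0 * exp(-(T0/T)**(1/4))."""
    return sigma0 * np.exp(-(T0 / T) ** 0.25)

def activated(T, sigma0=1.0, Ea_eV=0.3):
    """Arrhenius / activated conduction: sigma = sigma0 * exp(-Ea / (kB*T))."""
    kB_eV = 8.617e-5   # Boltzmann constant in eV/K
    return sigma0 * np.exp(-Ea_eV / (kB_eV * T))

for T in (100.0, 200.0, 300.0):
    print(f"T = {T:5.0f} K   VRH: {mott_vrh(T):.3e}   activated: {activated(T):.3e}")
```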
Applied electric field
High electric fields cause an increase in the observed mobility:

μ ∝ exp(γ √E),

where γ is a coefficient that depends on the material and on temperature.
It was shown that this relationship holds for a large range of field strengths.
AC conductivity
The real and imaginary parts of the AC conductivity for a large range of disordered semiconductors have the following form:

σ(ω) = C ω^s,

where C is a constant and s is usually smaller than unity.
Ionic conduction
Similar to electron conduction, the electrical resistance of thin-film electrolytes depends on the applied electric field, such that when the thickness of the sample is reduced, the conductivity improves due to both the reduced thickness and the field-induced conductivity enhancement. The field dependence of the current density j through an ionic conductor, assuming a random walk model with independent ions under a periodic potential, is given by:

j ∝ sinh(q α E / (2 kB T)),

where α is the inter-site separation, q is the ion charge, E is the applied electric field, kB is the Boltzmann constant and T is the temperature.
Experimental determination of transport mechanisms
Characterization of transport properties requires fabricating a device and measuring its current-voltage characteristics. Devices for transport studies are typically fabricated by thin film deposition or break junctions. The dominant transport mechanism in a measured device can be determined by differential conductance analysis. In the differential form, the transport mechanism can be distinguished based on the voltage and temperature dependence of the current through the device.
It is common to express the mobility as a product of two terms, a field-independent term and a field-dependent term:

μ = μ0 exp(−(Ea − β √E) / (kB T)),

where Ea is the activation energy and β is model-dependent. For Poole–Frenkel hopping, for example,

β = (q³ / (π ε_r ε_0))^(1/2),

where q is the elementary charge, ε_r is the relative permittivity and ε_0 is the vacuum permittivity.
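A sketch evaluating the field-dependent mobility with the Poole–Frenkel form of β quoted above. All material parameters (zero-field mobility, activation energy, relative permittivity, temperature) are illustrative assumptions, not measured values:

```python
import numpy as np

def poole_frenkel_mobility(E_field, mu0=1e-6, Ea_eV=0.5, eps_r=3.5, T=300.0):
    """mu = mu0 * exp(-(Ea - beta*sqrt(E)) / (kB*T)) with beta = sqrt(q**3/(pi*eps_r*eps0))."""
    q = 1.602e-19      # elementary charge, C
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    kB_eV = 8.617e-5   # Boltzmann constant, eV/K
    beta = np.sqrt(q ** 3 / (np.pi * eps_r * eps0))   # in J per sqrt(V/m)
    beta_eV = beta / q                                # convert to eV per sqrt(V/m)
    return mu0 * np.exp(-(Ea_eV - beta_eV * np.sqrt(E_field)) / (kB_eV * T))

print(poole_frenkel_mobility(1e7))   # mobility at 10 MV/m
print(poole_frenkel_mobility(1e8))   # higher field -> exponentially larger mobility
```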
Tunneling and thermionic emission are typically observed when the barrier height is low.
Thermally-assisted tunneling is a "hybrid" mechanism that attempts to describe a range of simultaneous behaviours, from tunneling to thermionic emission.
See also
Electron transfer
Further reading
References
Electrical resistance and conductance
Charge carriers
Electrical phenomena | Charge transport mechanisms | [
"Physics",
"Materials_science",
"Mathematics"
] | 980 | [
"Physical phenomena",
"Physical quantities",
"Charge carriers",
"Quantity",
"Electrical phenomena",
"Condensed matter physics",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
57,340,621 | https://en.wikipedia.org/wiki/Odd%20cycle%20transversal | In graph theory, an odd cycle transversal of an undirected graph is a set of vertices of the graph that has a nonempty intersection with every odd cycle in the graph. Removing the vertices of an odd cycle transversal from a graph leaves a bipartite graph as the remaining induced subgraph.
Relation to vertex cover
A given n-vertex graph G has an odd cycle transversal of size k, if and only if the Cartesian product of G and K2 (a graph consisting of two copies of G, with corresponding vertices of each copy connected by the edges of a perfect matching) has a vertex cover of size n + k. The odd cycle transversal can be transformed into a vertex cover by including both copies of each vertex from the transversal and one copy of each remaining vertex, selected from the two copies according to which side of the bipartition contains it. In the other direction, a vertex cover of the product can be transformed into an odd cycle transversal by keeping only the vertices for which both copies are in the cover. The vertices outside of the resulting transversal can be bipartitioned according to which copy of the vertex was used in the cover.
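A brute-force sketch of this correspondence: construct the two-copies-plus-matching product, find a minimum vertex cover by exhaustive search, and keep the vertices whose both copies are in the cover. The exhaustive search is exponential and only meant for tiny graphs; the function names are illustrative:

```python
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Smallest vertex set touching every edge (exhaustive search, tiny graphs only)."""
    vertices = list(vertices)
    for size in range(len(vertices) + 1):
        for cover in combinations(vertices, size):
            chosen = set(cover)
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen
    return set(vertices)

def odd_cycle_transversal(vertices, edges):
    """OCT of G via a minimum vertex cover of the product of G with a single edge K2."""
    vertices = list(vertices)
    prod_vertices = [(v, side) for v in vertices for side in (0, 1)]
    prod_edges = ([((u, s), (v, s)) for u, v in edges for s in (0, 1)]   # the two copies of G
                  + [((v, 0), (v, 1)) for v in vertices])                # the perfect matching
    cover = min_vertex_cover(prod_vertices, prod_edges)
    # vertices with both copies in the cover form the odd cycle transversal
    return {v for v in vertices if (v, 0) in cover and (v, 1) in cover}

# A 5-cycle is an odd cycle: removing any single vertex makes it bipartite.
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(odd_cycle_transversal(range(5), C5))   # a one-vertex transversal, e.g. {0}
```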
Algorithms and complexity
The problem of finding the smallest odd cycle transversal, or equivalently the largest bipartite induced subgraph, is also called odd cycle transversal, and abbreviated as OCT. It is NP-hard, as a special case of the problem of finding the largest induced subgraph with a hereditary property (as the property of being bipartite is hereditary). All such problems for nontrivial properties are NP-hard.
The equivalence between the odd cycle transversal and vertex cover problems has been used to develop fixed-parameter tractable algorithms for odd cycle transversal, meaning that there is an algorithm whose running time can be bounded by a polynomial function of the size of the graph multiplied by a larger function of $k$. The development of these algorithms led to the method of iterative compression, a more general tool for many other parameterized algorithms. The parameterized algorithms known for these problems take nearly-linear time for any fixed value of $k$. Alternatively, with polynomial dependence on the graph size, the dependence on $k$ can be made as small as $2.3146^k$.
In contrast, the analogous problem for directed graphs does not admit a fixed-parameter tractable algorithm under standard complexity-theoretic assumptions.
See also
Maximum cut, equivalent to asking for a minimum set of edges whose removal leaves a bipartite graph
References
Graph theory objects
Computational problems in graph theory
NP-complete problems | Odd cycle transversal | [
"Mathematics"
] | 518 | [
"Computational problems in graph theory",
"Graph theory objects",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems"
] |
56,937,054 | https://en.wikipedia.org/wiki/5%CE%B1-Dihydronormethandrone | 5α-Dihydronormethandrone (5α-DHNMT; developmental code name RU-575), also known as 17α-methyl-4,5α-dihydro-19-nortestosterone or as 17α-methyl-5α-estran-17β-ol-3-one, is an androgen/anabolic steroid and a likely metabolite of normethandrone formed by 5α-reductase. Analogously to nandrolone and its 5α-reduced metabolite 5α-dihydronandrolone, 5α-DHNMT shows reduced affinity for the androgen receptor relative to normethandrone. Its affinity for the androgen receptor is specifically about 33 to 60% of that of normethandrone.
See also
5α-Dihydronorethandrolone
5α-Dihydronandrolone
5α-Dihydronorethisterone
5α-Dihydrolevonorgestrel
References
5α-Reduced steroid metabolites
1-Methylcyclopentanols
Anabolic–androgenic steroids
Human drug metabolites
Estranes
Ketones
Progestogens | 5α-Dihydronormethandrone | [
"Chemistry"
] | 252 | [
"Pharmacology",
"Ketones",
"Functional groups",
"Medicinal chemistry stubs",
"Chemicals in medicine",
"Human drug metabolites",
"Pharmacology stubs"
] |
56,940,695 | https://en.wikipedia.org/wiki/Howarth%E2%80%93Dorodnitsyn%20transformation | In fluid dynamics, Howarth–Dorodnitsyn transformation (or Dorodnitsyn-Howarth transformation) is a density-weighted coordinate transformation, which reduces variable-density flow conservation equations to simpler form (in most cases, to incompressible form). The transformation was first used by Anatoly Dorodnitsyn in 1942 and later by Leslie Howarth in 1948. The transformation replaces the coordinate $y$ (usually taken as the coordinate normal to the predominant flow direction) with a stretched coordinate $\eta$, as sketched below,
where $\rho$ is the density and $\rho_\infty$ is the density at infinity. The transformation is extensively used in boundary layer theory and other gas dynamics problems.
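The formula itself was lost in extraction; in the conventional notation used above it reads

$$ \eta = \int_0^{y} \frac{\rho}{\rho_\infty}\,\mathrm{d}y', $$

i.e. the wall-normal coordinate is stretched by the local-to-freestream density ratio.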
Stewartson–Illingworth transformation
Keith Stewartson and C. R. Illingworth independently introduced, in 1949, a transformation that extends the Howarth–Dorodnitsyn transformation to compressible flows. The transformation maps the streamwise coordinate $x$ and the normal coordinate $y$ to new variables built from the local sound speed $c$ and the pressure $p$; one commonly quoted form is sketched below. For an ideal gas the pressure ratio can itself be written in terms of the sound-speed ratio, so the transformation involves only the sound speed and $\gamma$, the specific heat ratio.
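The explicit formulas were dropped in extraction. One commonly quoted form of the Stewartson–Illingworth variables (the choice of reference state, denoted here by the subscript $\infty$, and the exact normalisation vary between authors, so this should be read as a sketch rather than as the article's own statement) is

$$ \xi = \int_0^{x} \frac{c}{c_\infty}\,\frac{p}{p_\infty}\,\mathrm{d}x', \qquad \eta = \frac{c}{c_\infty}\int_0^{y} \frac{\rho}{\rho_\infty}\,\mathrm{d}y', $$

which reduces to the Howarth–Dorodnitsyn transformation when the sound speed and pressure are uniform.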
References
Fluid dynamics | Howarth–Dorodnitsyn transformation | [
"Chemistry",
"Engineering"
] | 219 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
47,093,327 | https://en.wikipedia.org/wiki/Interactive%20architecture | Interactive architecture refers to the branch of architecture which deals with buildings, structures, surfaces and spaces that are designed to change, adapt and reconfigure in real-time response to people (their activity, behaviour and movements), as well as the wider environment. This is usually achieved by embedding sensors, processors and effectors as a core part of a building's nature and functioning in such a way that the form, structure, mood or program of a space can be altered in real-time. Interactive architecture encompasses building automation but goes beyond it by including forms of interaction engagements and responses that may lie in pure communication purposes as well as in the emotive and artistic realm, thus entering the field of interactive art. It is also closely related to the field of Responsive architecture and the terms are sometimes used interchangeably, but the distinction is important for some.
Examples of interactive architecture
While now quite common (most large-scale new buildings are built around environmentally responsive technologies, sustainability systems and user-configurable environments), earlier notable examples of interactive architecture include:
Tower of Winds (Yokohama, Japan, 1987) – Toyo Ito
Kunsthaus (Graz, Austria, 2003) – Peter Cook and Colin Fournier
Galleria Centercity (Cheonan, South Korea, 2010) – UNStudio
The Shed (New York City, USA, 2019) – Diller Scofidio + Renfro
Conceptual development of interactive architecture
Early contributions to the ideas behind interactive architecture include New Babylon (Constant Nieuwenhuys) (a massive global city formed from "a series of linked transformable structures") and Cedric Price's Fun Palace ("Designed as a flexible framework into which programmable spaces can be plugged, the structure has as its ultimate goal the possibility of change at the behest of its users"), later given form in his Inter-Action Centre.
Nicholas Negroponte's book Soft Architecture Machines (1975) proposed architecture machines "not simply used as aids in the design of buildings—they serve as buildings in themselves. Man will live in living, intelligent machines or cognitive physical environments that can immediately respond to his needs or wishes or whims". He had earlier founded the Architecture Machine Group at MIT in 1968, creating the lab "as a test bed for interactive computers, sensors and programs that sought to change the manner in which computers and humans interacted with each other" which later grew into MIT Media Lab.
Other notable contributors to the conceptual development of the field include:
Gordon Pask, who offered "a paradigm for an intelligent environment that not only adapts to its use but also actively puts this use in question, requiring new actions from its users".
John Frazer, who posits that architecture should be living and evolving, "to achieve in the built environment the symbiotic behaviour and metabolic balance that are characteristic of the natural environment".
Ranulph Glanville, who proposed that "intelligent architecture will join with us in a debate, the subject of which will be how we might live so we (the architecture and the inhabitant) gain effectiveness and delight in living (forging lives) together."
Stephen Gage, Professor at the Bartlett School of Architecture who founded the Interactive Architecture Workshop, and wrote "The 21st Century designer will have to be fluent in automatic, reactive & interactive design, i.e. Time based design in its three forms. Designers & architects are faced with an essentially new extension to their craft"
Usman Haque, who makes the distinction between multi-loop 'interactive' and 'merely reactive' environments, encouraging the "goal of authentic multi-loop interaction in actual built architectural projects, forsaking the easier route of creating merely reactive works" and extended this to explore "connected environments" and the internet of things. He has also extrapolated from Gordon Pask's work to propose architecture that "can choose what it senses, either by having ill-defined sensors or by dynamically determining its own perceptual categories, then it moves a step closer to true autonomy which would be required in an authentically interactive system".
Molly Wright Steenson, who has written about how computational, cybernetic, and artificial intelligence researchers engage with architects and architectural problems
Ben Sweeting, who uses cybernetics to explore the connections between architecture, epistemology and ethics, connecting it directly to interactive architecture
Rebecca Parsons, who defined evolutionary architecture as supporting guided, incremental change across multiple dimensions
Technologies used in interactive architecture
Interactive architecture, as part of the Internet of things (a term first coined in 1999 by Kevin Ashton, then of Procter & Gamble and later of MIT's Auto-ID Center), can include both interior and exterior elements. Within the interior, many technologies are competing to emerge as the dominant communication medium. 4G LTE, eventually to be replaced by 5G, is the obvious solution; however, visible light communication, or Li-Fi, a term first introduced by Harald Haas during a 2011 TEDGlobal talk in Edinburgh, is gaining ground as research into this type of data transfer increases. Interactive architecture, in the sense of designing buildings with this technology embedded in them, is essential to the development of smart cities.
Another essential element in the development of a smart city is the landscape architecture: the space between buildings used by the public, more commonly termed the public realm. There are two levels of communication within the public realm, and the difference between the two is commonly accepted as the differentiation between IoT and IoE. IoE, or the Internet of Everything, was a phrase first used by Cisco in an attempt to achieve parity with competitors that had embraced the term IoT. In its definition, however, Cisco highlighted interaction with the human node as one main difference between IoT and IoE.
The two public realm communication protocols that make that space a smart space are:
The Intelligent Realm, or i-realm, defined as a realm designed with embedded information and communication technology, which allows the silo elements of that space (lighting, ventilation, traffic signals, transportation, waste management) to communicate with one another for the purpose of making that urban area more efficient and effective.
The second communication protocol is the Interactive Realm, defined as incorporating all of the technology needed to create an intelligent realm but in addition, using communication methods such as Global Positioning System, geo-fence, near-field communication and embedded Bluetooth Low Energy, to allow communication between the architecture of the space and the consumers of it. Sometimes referred to as the physical web by Google, an interactive realm uses exterior lighting, bollards, street furniture, bus stops and other elements to communicate to the public via their smartphone or tablet.
Whilst IoT concerns itself with communication between objects in order to make the design more efficient and interactive from an operational standpoint, IoE also incorporates communication between embedded objects and user devices. The applications include wayfinding, safety, anti-terrorism, targeted advertising, and general information such as the history of the space, or simply making the space more enjoyable.
References
Architecture | Interactive architecture | [
"Engineering"
] | 1,442 | [
"Construction",
"Architecture"
] |
47,093,634 | https://en.wikipedia.org/wiki/Two-dimensional%20liquid | A two-dimensional liquid (2D liquid) is a collection of objects constrained to move in a planar space or other two-dimensional space in a liquid state.
Relations with 3D liquids
The movement of the particles in a 2D liquid is similar to that in 3D, but with limited degrees of freedom. For example, rotational motion can be limited to rotation about only one axis, in contrast to a 3D liquid, where rotation of molecules about two or three axes would be possible.
The same is true for the translational motion. The particles in 2D liquids can move in a 2D plane, whereas the particles in a 3D liquid can move in three directions inside the 3D volume.
Vibrational motion is in most cases not constrained in comparison to 3D.
The relations with other states of aggregation (see below) are also analogous in 2D and 3D.
Relation to other states of aggregation
2D liquids are related to 2D gases. If the density of a 2D liquid is decreased, a 2D gas is formed. This was observed by scanning tunnelling microscopy under ultra-high vacuum (UHV) conditions for molecular adsorbates.
2D liquids are related to 2D solids. If the density of a 2D liquid is increased, the rotational degree of freedom is frozen and a 2D solid is created.
References
Liquids
Non-equilibrium thermodynamics
Statistical mechanics
Planes (geometry) | Two-dimensional liquid | [
"Physics",
"Chemistry",
"Mathematics"
] | 272 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Planes (geometry)",
"Non-equilibrium thermodynamics",
"Phases of matter",
"Mathematical objects",
"Infinity",
"Thermodynamics",
"Dynamical systems",
"Statistical mechanics",
"Physical chemistry stubs",
"Matter",
"Liquids"
] |
42,508,883 | https://en.wikipedia.org/wiki/Ana%20Maria%20Rey | Ana Maria Rey is a Colombian theoretical physicist, professor at University of Colorado at Boulder, a JILA fellow, a fellow at National Institute of Standards and Technology and a fellow of the American Physical Society. Rey was the first Hispanic woman to win the Blavatnik Awards for Young Scientists in 2019.
In 2023, she was elected to the National Academy of Sciences. She is currently the chair of DAMOP, the American Physical Society's Division of Atomic, Molecular and Optical Physics (AMO).
Education
Rey earned a bachelor's degree in physics at Universidad de los Andes in Bogotá in 1999 with a magna cum laude distinction. She received her Ph.D. in physics at the University of Maryland in 2004. She was a postdoctoral researcher at the National Institute of Standards and Technology from 2004 to 2005 in the group of Charles W. Clark. She went on to work as a postdoctoral fellow at the Institute of Theoretical Atomic, Molecular and Optical Physics (ITAMP) at Harvard University from 2005 to 2008.
Research and career
After her postdoctoral position at ITAMP, she joined the University of Colorado Boulder Physics Department as an assistant research professor and JILA as an associate fellow in 2008. She was promoted to JILA Fellow in 2012 and shifted her position in the Department of Physics to adjoint professor in 2017.
Rey is a theoretical quantum physicist who studies new techniques for controlling quantum systems and their applications ranging from quantum simulations and quantum information to time and frequency standards. Her research is often directly applicable to state-of-the-art experiments, particularly to atomic clocks, quantum computing, and precision measurements. Her contributions to the understanding of out-of-equilibrium quantum phenomena have led to pioneering measurements of quantum information scrambling, and to the synthesis of magnetic and topological quantum materials. Her publications have been cited more than 11,000 times as of 2020.
Awards and honours
2013 MacArthur Fellowship
2013 Presidential Early Career Award for Scientists and Engineers
2013 “Great Minds in STEM” Most Promising Scientist Award
2014 Early Career National Hispanic Scientist of the Year
2014 Maria Goeppert-Mayer Award of the American Physical Society.
2014 Fellow of the American Physical Society
2019 Blavatnik Awards for Young Scientists. Rey was the first Hispanic woman to win this award.
2023 Elected Member of the National Academy of Sciences
2023 Vannevar Bush Faculty Fellowship from the Department of Defense
Personal life
On July 29, 2000, Rey got married. Two days later, she immigrated to the United States.
Selected publications
The most cited publications by Rey to date are:
S Trotzky, P Cheinet, S Fölling, M Feld, U Schnorrberger, AM Rey, A. Polkovnikov, E. A. Demler, M. D. Lukin, I. Bloch. Time-resolved observation and control of superexchange interactions with ultracold atoms in optical lattices. (2008) Science 319 (5861), 295-299
AV Gorshkov, M Hermele, V Gurarie, C Xu, PS Julienne, J Ye, P Zoller. Two-orbital SU (N) magnetism with ultracold alkaline-earth atoms. (2010) Nature physics 6 (4), 289-295
B Yan, SA Moses, B Gadway, JP Covey, KRA Hazzard, AM Rey, DS Jin. Observation of dipolar spin-exchange interactions with lattice-confined polar molecules. (2013) Nature 501 (7468), 521-525
JG Bohnet, BC Sawyer, JW Britton, ML Wall, AM Rey, M Foss-Feig. Quantum spin dynamics and entanglement generation with hundreds of trapped ions. (2016) Science 352 (6291), 1297-1301
M Gärttner, JG Bohnet, A Safavi-Naini, ML Wall, JJ Bollinger, AM Rey. Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet. (2017) Nature Physics 13 (8), 781-786
X Zhang, M Bishof, SL Bromley, CV Kraus, MS Safronova, P Zoller, A. M. Rey, J. Ye. Spectroscopic observation of SU (N)-symmetric interactions in Sr orbital magnetism. (2014) Science 345 (6203), 1467-1473
References
External links
Interview with Ana Maria Rey on La W Radio
Interview of Ana Maria Rey by David Zierler on April 6, 2021, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA, www.aip.org/history-programs/niels-bohr-library/oral-histories/47007
Curriculum Vitae at JILA
Colombian emigrants to the United States
Living people
People from Bogotá
MacArthur Fellows
Theoretical physicists
Colombian women physicists
1970s births
Harvard University alumni
21st-century physicists
21st-century American scientists
21st-century American women scientists
University System of Maryland alumni
Hispanic and Latino American physicists
Fellows of the American Physical Society
Members of the United States National Academy of Sciences
Hispanic and Latino American women scientists
Recipients of the Presidential Early Career Award for Scientists and Engineers | Ana Maria Rey | [
"Physics"
] | 1,075 | [
"Theoretical physics",
"Theoretical physicists"
] |
42,515,713 | https://en.wikipedia.org/wiki/Yoshimura%20buckling | In mechanical engineering, Yoshimura buckling is a triangular mesh buckling pattern found in thin-walled cylinders under compression along the axis of the cylinder, producing a corrugated shape resembling the Schwarz lantern. The same pattern can be seen on the sleeves of the Mona Lisa.
This buckling pattern is named after Yoshimaru Yoshimura (吉村慶丸), the Japanese researcher who provided an explanation for its development in a paper first published in Japan in 1951, and later republished in the United States in 1955. Unknown to Yoshimura, the same phenomenon had previously been studied by Theodore von Kármán and Qian Xuesen in 1941.
The crease pattern for folding the Schwarz lantern from a flat piece of paper, a tessellation of the plane by isosceles triangles, has also been called the Yoshimura pattern based on the same work by Yoshimura. The Yoshimura creasing pattern is related to both the Kresling and Hexagonal folds, and can be framed as a special case of the Miura fold. Unlike the Miura fold which is rigidly deformable, both the Yoshimura and Kresling patterns require panel deformation to be folded to a compact state.
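As a rough illustration of that crease pattern (a sketch of my own construction, not taken from Yoshimura's paper or the folding literature), the following Python function lists the vertices, the horizontal creases and the zig-zag diagonal creases of a flat pattern with a chosen number of facets and rows; in one common mountain/valley assignment the horizontals fold one way and the diagonals the other:

```python
def yoshimura_pattern(n, m, width=1.0, height=1.0):
    """Vertices and creases of a flat Yoshimura-style pattern with n facets around
    the circumference and m rows of triangles along the axis."""
    def P(i, j):
        # every second row is shifted by half a facet width, making the triangles isosceles
        return (i * width + (j % 2) * width / 2.0, j * height)

    vertices = [P(i, j) for j in range(m + 1) for i in range(n + 1)]
    horizontals = [(P(i, j), P(i + 1, j)) for j in range(m + 1) for i in range(n)]
    diagonals = []
    for j in range(m):
        for i in range(n):
            if j % 2 == 0:       # triangle apexes sit on the shifted (odd) row above
                diagonals.append((P(i, j), P(i, j + 1)))
                diagonals.append((P(i + 1, j), P(i, j + 1)))
            else:                # triangle apexes sit on the unshifted (even) row above
                diagonals.append((P(i, j), P(i, j + 1)))
                diagonals.append((P(i, j), P(i + 1, j + 1)))
    return vertices, horizontals, diagonals

# Example: 8 facets around and 4 rows gives 8 * 4 * 2 = 64 triangular panels.
verts, valleys, mountains = yoshimura_pattern(8, 4)
print(len(verts), len(valleys), len(mountains))    # -> 45 40 64
```

Each interior vertex of the generated pattern has degree six (two horizontal and four diagonal creases), which is the characteristic vertex of the Yoshimura tessellation.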
References
Mechanical failure modes
Structural analysis
Paper folding
Mechanics
Mona Lisa | Yoshimura buckling | [
"Physics",
"Materials_science",
"Mathematics",
"Technology",
"Engineering"
] | 261 | [
"Structural engineering",
"Mechanical failure modes",
"Recreational mathematics",
"Structural analysis",
"Technological failures",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering",
"Mechanical failure",
"Paper folding"
] |
51,146,427 | https://en.wikipedia.org/wiki/11C-UCB-J | 11C-UCB-J is a PET tracer for imaging the synaptic vesicle glycoprotein 2A in the human brain.
It is used to study the brain changes associated with several diseases including Alzheimer's disease, schizophrenia, and depression.
References
PET radiotracers
Pyridines
Pyrrolidones | 11C-UCB-J | [
"Chemistry"
] | 72 | [
"Chemicals in medicine",
"Medicinal radiochemistry",
"PET radiotracers"
] |
51,147,036 | https://en.wikipedia.org/wiki/Model-theoretic%20grammar | Model-theoretic grammars, also known as constraint-based grammars, contrast with generative grammars in the way they define sets of sentences: they state constraints on syntactic structure rather than providing operations for generating syntactic objects. A generative grammar provides a set of operations such as rewriting, insertion, deletion, movement, or combination, and is interpreted as a definition of the set of all and only the objects that these operations are capable of producing through iterative application. A model-theoretic grammar simply states a set of conditions that an object must meet, and can be regarded as defining the set of all and only the structures of a certain sort that satisfy all of the constraints. The approach applies the mathematical techniques of model theory to the task of syntactic description: a grammar is a theory in the logician's sense (a consistent set of statements) and the well-formed structures are the models that satisfy the theory.
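To make the contrast concrete, here is a toy Python sketch (entirely illustrative and my own, using strings rather than syntactic trees): the first function generates a language by iterating a rewrite operation, while the second merely checks candidate objects against a set of declarative constraints, in the spirit of a model-theoretic grammar:

```python
# Generative style: the language {"", "ab", "abab", ...} is *built* by repeatedly
# applying a rewrite operation (append "ab") to what has been generated so far.
def generate(max_length):
    strings, frontier = set(), {""}
    while frontier:
        s = frontier.pop()
        strings.add(s)
        if len(s) + 2 <= max_length:
            frontier.add(s + "ab")          # the only available generative operation
    return strings

# Model-theoretic style: the same language is *described* by constraints that any
# candidate must satisfy; nothing is generated, candidates are only checked.
CONSTRAINTS = [
    lambda s: all(c in "ab" for c in s),                              # vocabulary
    lambda s: len(s) % 2 == 0,                                        # no dangling symbol
    lambda s: all(s[i:i + 2] == "ab" for i in range(0, len(s), 2)),   # symbols pair up as "ab"
]

def well_formed(s):
    return all(constraint(s) for constraint in CONSTRAINTS)

candidates = ["", "ab", "abab", "aba", "ba", "abb"]
accepted = {s for s in candidates if well_formed(s)}
assert accepted == generate(4)              # both characterise the same (finite) sample
```

A graded notion of well-formedness, of the kind discussed under "Strengths" below, could then be obtained simply by counting how many constraints a candidate violates.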
History
David E. Johnson and Paul M. Postal introduced the idea of model-theoretic syntax in their 1980 book Arc Pair Grammar.
Examples of model-theoretic grammars
The following is a sample of grammars falling under the model-theoretic umbrella:
the non-procedural variant of Transformational grammar (TG) of George Lakoff, that formulates constraints on potential tree sequences
Johnson and Postal's formalization of Relational grammar (RG) (1980)
Generalized phrase structure grammar (GPSG) in the variants developed by Gazdar et al. (1988), Blackburn et al. (1993) and Rogers (1997)
Lexical functional grammar (LFG) in the formalization of Ronald Kaplan (1995)
Head-driven phrase structure grammar (HPSG) in the formalization of King (1999)
Constraint Handling Rules (CHR) grammars
The implicit model underlying The Cambridge Grammar of the English Language
Strengths
One benefit of model-theoretic grammars over generative grammars is that they allow for gradience in grammaticality. A structure may deviate only slightly from a theory or it may be highly deviant. Generative grammars, in contrast "entail a sharp boundary between the perfect and the nonexistent, and do not even permit gradience in ungrammaticality to be represented."
References
Grammar
Grammar frameworks
Mathematical logic
Model theory | Model-theoretic grammar | [
"Mathematics"
] | 488 | [
"Mathematical logic",
"Model theory"
] |
51,147,390 | https://en.wikipedia.org/wiki/Kazhdan%E2%80%93Margulis%20theorem | In Lie theory, an area of mathematics, the Kazhdan–Margulis theorem is a statement asserting that a discrete subgroup of a semisimple Lie group cannot be too dense in the group. More precisely, in any such Lie group there is a uniform neighbourhood of the identity element such that every lattice in the group has a conjugate whose intersection with this neighbourhood contains only the identity. This result was proven in the 1960s by David Kazhdan and Grigory Margulis.
Statement and remarks
The formal statement of the Kazhdan–Margulis theorem is as follows.
Let $G$ be a semisimple Lie group: there exists an open neighbourhood $U$ of the identity in $G$ such that for any discrete subgroup $\Gamma < G$ there is an element $g \in G$ satisfying $g \Gamma g^{-1} \cap U = \{e\}$.
Note that in general Lie groups this statement is far from being true; in particular, in a nilpotent Lie group, for any neighbourhood of the identity there exists a lattice in the group which is generated by its intersection with the neighbourhood: for example, in $\mathbb{R}^n$, the lattice $\varepsilon \mathbb{Z}^n$ satisfies this property for $\varepsilon > 0$ small enough.
Proof
The main technical result of Kazhdan–Margulis, which is interesting in its own right and from which the better-known statement above follows immediately, is the following.
Given a semisimple Lie group without compact factors endowed with a norm , there exists , a neighbourhood of in , a compact subset such that, for any discrete subgroup there exists a such that for all .
The neighbourhood is obtained as a Zassenhaus neighbourhood of the identity in $G$: the theorem then follows by standard Lie-theoretic arguments.
There also exist other proofs. There is one proof which is more geometric in nature and which can give more information, and there is a third proof, relying on the notion of invariant random subgroups, which is considerably shorter.
Applications
Selberg's hypothesis
One of the motivations of Kazhdan–Margulis was to prove the following statement, known at the time as Selberg's hypothesis (recall that a lattice is called uniform if its quotient space is compact):
A lattice in a semisimple Lie group is non-uniform if and only if it contains a unipotent element.
This result follows from the more technical version of the Kazhdan–Margulis theorem and the fact that only unipotent elements can be conjugated arbitrarily close (for a given element) to the identity.
Volumes of locally symmetric spaces
A corollary of the theorem is that the locally symmetric spaces and orbifolds associated to lattices in a semisimple Lie group cannot have arbitrarily small volume (given a normalisation for the Haar measure).
For hyperbolic surfaces this is due to Siegel, and there is an explicit lower bound of $\pi/21$ for the smallest covolume of a quotient of the hyperbolic plane by a lattice in $\mathrm{PSL}_2(\mathbb{R})$ (see Hurwitz's automorphisms theorem). For hyperbolic three-manifolds the lattice of minimal volume is known and its covolume is about 0.0390. In higher dimensions the problem of finding the lattice of minimal volume is still open, though it has been solved when restricting to the subclass of arithmetic groups.
Wang's finiteness theorem
Together with local rigidity and finite generation of lattices, the Kazhdan–Margulis theorem is an important ingredient in the proof of Wang's finiteness theorem.
If $G$ is a simple Lie group not locally isomorphic to $\mathrm{SL}_2(\mathbb{R})$ or $\mathrm{SL}_2(\mathbb{C})$, endowed with a fixed Haar measure, then for every $v > 0$ there are only finitely many lattices in $G$ of covolume less than $v$.
See also
Margulis lemma
Notes
References
Algebraic groups
Geometric group theory
Lie groups | Kazhdan–Margulis theorem | [
"Physics",
"Mathematics"
] | 753 | [
"Lie groups",
"Geometric group theory",
"Mathematical structures",
"Group actions",
"Algebraic structures",
"Symmetry"
] |