id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
70,925,428 | https://en.wikipedia.org/wiki/Tetracenomycin%20C | Tetracenomycin C is an antitumor anthracycline-like antibiotic produced by Streptomyces glaucescens GLA.0.
The pale-yellow antibiotic is active against some gram-positive bacteria, and is notable for its broad activity against actinomycetes, especially streptomycetes. Gram-negative bacteria and fungi are not inhibited. Considering the differences in biological activity and the functional groups of the molecule, tetracenomycin C is not a member of the tetracycline or anthracyclinone group of antibiotics.
Structure and properties
The structure of tetracenomycin C was established by chemical and spectroscopic methods. The three hydroxy groups, at C-4, C-4a, and C-12a, are cis to each other. The two at C-4a and C-12a are involved in intramolecular hydrogen bonding to the carbonyl oxygen atoms at C-5 and C-1, respectively. The carboxymethyl group at C-9 is almost perpendicular to the planar rings C and D. The crystal packing is stabilized by intermolecular hydrogen bonds with participation of methanol molecules.
Biosynthesis
As in other anthracycline antibiotics, the framework is synthesized by a polyketide synthase and subsequently modified by other enzymes. Early studies of tetracenomycin C biosynthesis utilized mutants that were blocked in its production to describe many of the pathway's intermediates.
Complementation of the mutations allowed the cloning of a large gene cluster that included all of the genes required for production, as well as resistance genes. Transformation of the cluster into heterologous streptomycete hosts like Streptomyces lividans resulted in the overproduction of several intermediates of the pathway. Sequence analysis of the polyketide synthase genes showed that they included two β-ketoacyl synthases (tcmK and tcmL), an acyl carrier protein (tcmM), and several cyclases.
Streptomyces glaucescens protects itself from the deleterious effect of tetracenomycin C by the action of the tcmA and tcmR gene products. TcmA has several transmembrane loops and is believed to act as a tetracenomycin C exporter. Its expression is controlled by the TcmR repressor. TcmR binds to operator sites in the tcmA promoter. When tetracenomycin C is present, it binds to TcmR, releasing it from the DNA and initiating tcmA expression.
References
Naphthalenes
Carboxylic acids
Alcohols
Triketones
Methoxy compounds
Antibiotics | Tetracenomycin C | [
"Chemistry",
"Biology"
] | 615 | [
"Biotechnology products",
"Carboxylic acids",
"Functional groups",
"Antibiotics",
"Biocides"
] |
70,925,431 | https://en.wikipedia.org/wiki/Nano-Micro%20Letters | Nano-Micro Letters is a peer-reviewed open-access scientific journal covering nanotechnology. It is published by Springer Science+Business Media on behalf of Shanghai Jiao Tong University. The editor-in-chief is Yafei Zhang (Shanghai Jiao Tong University). The journal was established in 2009.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded and Scopus. According to the Journal Citation Reports, the journal has a 2023 impact factor of 31.6.
References
External links
English-language journals
Springer Science+Business Media academic journals
Creative Commons Attribution-licensed journals
Academic journals established in 2009
Shanghai Jiao Tong University
Nanotechnology journals | Nano-Micro Letters | [
"Materials_science"
] | 143 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology journals",
"Nanotechnology stubs",
"Nanotechnology"
] |
70,926,268 | https://en.wikipedia.org/wiki/Sysbench | In computing, sysbench is an open-source software tool. Specifically, it is a scriptable multi-threaded benchmarking tool designed for Linux systems. It is a C binary and uses LuaJIT scripts to execute benchmarks. It is most frequently used for database benchmarks, for example MySQL, but can also be used to create arbitrarily complex workloads that do not involve a database server for general testing. It is a multi-purpose benchmark that features tests for CPU, memory, I/O, and database performance testing. It is a basic command line utility that offers a direct way to benchmark computer hardware. It now comes packaged in most major Linux distribution repositories such as Debian, Ubuntu, CentOS and Arch Linux.
History
Sysbench was originally created by Peter Zaitsev in 2004. Soon after, Alexey Kopytov took over its development.
Design
Sysbench generates load by running multiple threads concurrently; the number of threads is specified by the user. Depending on the testing mode, a run can be limited by a total number of requests, by the amount of time the benchmark is allowed to run, or by both.
Usage
Sysbench can run benchmark tests specified in command line flags or in shell scripts. The type of test to run is specified in the command options and is one of:
cpu: CPU performance test
fileio: File I/O test
memory: Memory speed test
mutex: Mutex performance test
threads: Threads subsystem performance test
Sample Command Usage
A commonly used invocation of sysbench may look like the following: sysbench --test=cpu --cpu-max-prime=20000 --threads=32 run. The --test= option is the legacy syntax; sysbench 1.0 and later also accept the test name as a positional argument.
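Where scripted runs are wanted, a minimal Python sketch along these lines can drive the tool. It assumes sysbench 1.0+ is installed and on the PATH; the exact wording of the "events per second:" output line is a version-dependent assumption, so the parser fails soft.

```python
import re
import subprocess

def run_cpu_benchmark(threads: int = 32, max_prime: int = 20000) -> float:
    """Run sysbench's CPU test and return the reported events/second."""
    # sysbench 1.x positional syntax; older versions use --test=cpu instead.
    cmd = [
        "sysbench", "cpu",
        f"--cpu-max-prime={max_prime}",
        f"--threads={threads}",
        "run",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # "events per second:" appears in sysbench 1.0 CPU output; treat the
    # exact wording as an assumption and return NaN if it is not found.
    match = re.search(r"events per second:\s*([\d.]+)", result.stdout)
    return float(match.group(1)) if match else float("nan")

if __name__ == "__main__":
    # Compare throughput as the thread count grows.
    for t in (1, 8, 32):
        print(f"{t:>2} threads: {run_cpu_benchmark(threads=t):.1f} events/s")
```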
References
Linux software
Benchmarks (computing) | Sysbench | [
"Technology"
] | 380 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
70,926,421 | https://en.wikipedia.org/wiki/Subglacial%20lakes%20on%20Mars | Salty subglacial lakes are controversially inferred from radar measurements to exist below the South Polar Layered Deposits (SPLD) in Ultimi Scopuli of Mars' southern ice cap. The idea of subglacial lakes due to basal melting at the polar ice caps on Mars was first hypothesized in the 1980s. For liquid water to persist below the SPLD, researchers propose that perchlorate is dissolved in the water, which lowers the freezing temperature, but other explanations such as saline ice or hydrous minerals have been offered. Challenges for explaining sufficiently warm conditions for liquid water to exist below the southern ice cap include low amounts of geothermal heating from the subsurface and overlying pressure from the ice. As a result, it is disputed whether radar detections of bright reflectors were instead caused by other materials such as saline ice or deposits of minerals such as clays. While lakes with salt concentrations 20 times that of the ocean pose challenges for life, potential subglacial lakes on Mars are of high interest for astrobiology because microbial ecosystems have been found in deep subglacial lakes on Earth, such as in Lake Whillans in Antarctica below 800 m of ice.
Features
A study from 2018 first reported radar observations of a potential 20-km wide subglacial lake centered at 193°E, 81°S at the base of the SPLD using data from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument on the European Space Agency’s Mars Express spacecraft. The team noticed radar echoes stronger than what ice or rock would reflect coming from 1.5 km below the surface at the base of the SPLD. They interpreted the bright radar reflections to indicate high permittivity (the ability of a material to become polarized and store energy in response to an electric field), consistent with liquid water. Three additional subglacial lakes on the km-wide scale next to the original lake were also proposed from a more detailed study, though the study also indicates the possibility that the three locations could contain wet sediment instead of lakes.
Although the SHAllow RADar (SHARAD) instrument on the Mars Reconnaissance Orbiter operates at higher frequencies than MARSIS, a subglacial lake should still be detectable by it; nevertheless, bright radar reflectors are absent from SHARAD data. However, with the discovery of many widespread occurrences of the radar features in the SPLD area, corroboration between the two instruments might become possible.
Physical limits
Geothermal heating and perchlorate
The radar evidence can be difficult to understand due to scattering effects of the layers in the SPLD on radar reflections (according to an eLetter by Hecht et al. replying to the original publication, along with other sources). As a result, further work has focused on explaining how the freezing temperature at the base of the SPLD might be lowered due to a combination of perchlorate salt and enhanced regional geothermal flux. Following the detection of perchlorate in the northern plains of Mars by the Phoenix lander, it was predicted that perchlorate could allow a brine layer 1–3 meters deep to exist at the base of the northern ice cap of Mars. Perchlorate is a salt now considered to be widespread on Mars and is known to lower the freezing point of water. The studies in support of the subglacial lake hypothesis proposed that magnesium and calcium perchlorate at the base of the SPLD would lower the freezing point of water to temperatures as low as 204 and 198 K, thereby allowing the existence of briny liquid water. However, even taking into account perchlorate, computer simulations predict the temperature to still be too cold for liquid water to exist at the bottom of the southern ice cap. This is due to a small amount of pressure melting (Mars' gravity is about a third of Earth's) that would only lower the melting point by 0.3-0.5 K and an estimated low geothermal heat flux of 14-30 mW/m2. A geothermal heat flux greater than 72 mW/m2 would support the subglacial lake, thus requiring a local enhancement in the heat flux, perhaps sourced by geologically recent (within the last few hundred thousand years) magmatism in the subsurface. Similarly, another study based on the surface topography and ice thickness found that the radar detection did not coincide with their predictions of locations for subglacial lakes based on hydrological potential, and as a result, they proposed the detection was due to a localized patch of basal melting rather than a lake.
Liquid brine is proposed to be plausible at the SPLD because magnesium and calcium perchlorate solutions can be supercooled to as low as 150 K and the surface temperature at the south pole is approximately 160 K. In addition, the temperature within the ice is expected to increase with depth at a rate set by the undetermined geothermal flux and the thermal properties of the SPLD. However, a study found the bright radar reflectors to be widespread across the SPLD, rather than limited to the previously identified areas of the putative subglacial lakes. Since the bright radar detections covered a wide variety of conditions at the SPLD (e.g., different temperatures, ice thicknesses), it is challenging to explain all of the bright radar reflectors as indications of liquid water.
Surface features
Additional approaches to determining the plausibility of the subglacial lakes included a study looking for surface features induced by such lakes. On Earth, examples of surface features caused by a subglacial lake include fractures or ridge features, such as those at Pine Island Glacier in Antarctica. While a study on Mars found only surface features that match CO2 and wind-related processes, and none corresponding to the putative subglacial lakes, the lack of surface features does not rule out the possibility of a subglacial lake. This is because, while the surface of the SPLD is expected to be at least thousands and possibly millions of years old, it is hard to constrain when the putative subglacial lake would have modified the surface features.
Alternative hypotheses
In contrast with the hypothesis of subglacial water at the base of the SPLD, other suggestions include materials such as saline ice, a conductive mineral deposit such as clays, and igneous materials. Future work is necessary to resolve how these alternative hypotheses hold under Mars-like conditions using instruments like MARSIS.
Saline ice
While the initial study assumed negligible conductivity in their calculation of the permittivity values, by accounting for conductivity, conductive materials that are not liquid water may also be considered. Instead of the assumption that the bright radar reflections at the base of the ice cap are due to a large contrast in dielectric permittivity, another study suggested that the bright reflection is instead due to a large contrast in electric conductivity in the materials. Saline ice, observed on Earth beneath the Taylor Glacier in Antarctica, is one potential source for the bright basal reflections, though the electric conductivity of saline ice at martian temperatures is unknown.
Hydrous minerals
The mineralogical explanation is the most favored in follow-up studies, especially with specific hydrous minerals such as jarosite (a sulfate) and smectite (a clay mineral). Smectites have high enough dielectric permittivity to account for the bright reflections (though at laboratory temperatures of 230 K, higher than the conditions expected on Mars), and they exist at the edges of the SPLD. Ultimately, although the studies propose these new hypotheses, they do not completely reject the possibility of liquid water as the source of the bright radar returns.
Igneous materials
Another study applied computer simulations to look for what other regions on Mars might cause similar bright basal reflectors if there was a 1.4-km thick ice shell covering the base material. They found that 0.3%-2% of the surface of Mars could produce similar signals, most of which belong to volcanic regions. While the permittivity of igneous materials requires more research, they pointed out how high density igneous content may also cause the observed bright radar reflectors.
Terrestrial analogue sites and habitability
The putative subglacial lakes are of interest for the possibility of supporting life. If physical conditions allowed one location of subglacial liquid water on Mars to exist, then this might extend to other subsurface biospheres on the planet. On Earth, subglacial lakes exist below hundreds of meters of ice in both the Arctic and Antarctic and act as a planetary analog for both the potential subglacial lakes on Mars and liquid oceans below icy shells of moons like Europa. To study life in subglacial lakes on Earth, ice core drilling is used to reach the water, but contamination is commonly considered to have compromised attempts to sample the water of both Lake Vostok and Lake Ellsworth. However, microbes have been sampled from the accretion ice (frozen lake water) of Lake Vostok. Also, Lake Whillans was successfully sampled from under 800 m of ice, where over 4000 species of chemoautotrophic microbes have been identified. Whether similar microbes could survive in the putative salty subglacial lakes on Mars is still unknown, but if liquid water is present, it could preserve inactive microbial life.
See also
Lunar water
References
Mars
Life in outer space
Water on Mars
Extraterrestrial lakes | Subglacial lakes on Mars | [
"Astronomy"
] | 1,939 | [
"Life in outer space",
"Outer space"
] |
70,926,504 | https://en.wikipedia.org/wiki/Magnesium%20ozonide | Magnesium ozonide is a compound with the formula MgO3. Much like other ozonides, it is only stable at low temperatures. Unlike other ozonide compounds, magnesium ozonide is white rather than the typical red colour.
Preparation
Magnesium ozonide can be made by passing a dilute mixture of ozone in nitrogen over magnesium at −259 °C:

Mg + O3 → MgO3
Magnesium bisozonide
Magnesium is also known to form bisozonide complexes, containing Mg(O3)2 complexed with argon or carbon monoxide, in an argon matrix.
References
Ozonides
Magnesium compounds | Magnesium ozonide | [
"Chemistry"
] | 123 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
70,926,536 | https://en.wikipedia.org/wiki/Mean%20transverse%20energy | In accelerator physics, the mean transverse energy (MTE) is a quantity that describes the variance of the transverse momentum of a beam. While the quantity has a defined value for any particle beam, it is generally used in the context of photoinjectors for electron beams.
Definition
For a beam consisting of particles with momenta $\mathbf{p}_i$ and mass $m$ traveling predominantly in the $\hat{z}$ direction, the mean transverse energy is given by

$$\mathrm{MTE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\left|\mathbf{p}_{\perp,i}\right|^2}{2m},$$

where $\mathbf{p}_\perp$ is the component of the momentum perpendicular to the beam axis $\hat{z}$. For a continuous, normalized distribution of particles $f(\mathbf{p})$ the MTE is

$$\mathrm{MTE} = \int \frac{\left|\mathbf{p}_\perp\right|^2}{2m}\, f(\mathbf{p})\, \mathrm{d}^3\mathbf{p}.$$
Relation to Other Quantities
Emittance is a common quantity in beam physics which describes the volume of a beam in phase space, and is normally conserved through typical linear beam transformations; for example, one may transition from a beam with a large spatial size and a small momentum spread to one with a small spatial size and a large momentum spread, both cases retaining the same emittance. Due to its conservation, the emittance at the species source (e.g., photocathode for electrons) is the lower limit on attainable emittance.
For a beam born with an rms spatial size $\sigma_x$ and a 1-D MTE, the minimum 2-D ($x$ and $p_x$) emittance is

$$\epsilon_{n,x} = \sigma_x \sqrt{\frac{\mathrm{MTE}}{mc^2}}.$$
The emittance of each dimension may be multiplied together to get the higher dimensional emittance. For a photocathode, the spatial size of the beam is typically equal to the spatial size of the ionizing laser beam, and the MTE may depend on several factors involving the cathode, the laser, and the extraction field. Due to the linear independence of the laser spot size and the MTE, the beam size is often factored out, formulating the 1-D thermal emittance

$$\frac{\epsilon_{n,x}}{\sigma_x} = \sqrt{\frac{\mathrm{MTE}}{mc^2}}.$$
Likewise, the maximum brightness, or phase space density, scales inversely with the MTE: brightness varies as the inverse of the product of the two transverse emittances, each of which is proportional to $\sqrt{\mathrm{MTE}}$.
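As a numerical illustration of the definitions above, the following minimal Python sketch estimates the MTE of a sampled bunch and the resulting thermal emittance. The Gaussian momentum distribution, the 150 meV MTE and the 0.5 mm rms spot size are invented, merely plausible photocathode values, not figures from this article.

```python
import numpy as np

M_E_C2_EV = 510_998.95  # electron rest energy m*c^2 in eV

# Assumed inputs: a Gaussian ("thermal") transverse momentum distribution
# whose per-axis variance corresponds to MTE = 150 meV, and an rms laser
# spot size of 0.5 mm.
mte_assumed_eV = 0.150
sigma_x_m = 0.5e-3
n = 200_000

rng = np.random.default_rng(0)
sigma_p = np.sqrt(mte_assumed_eV / M_E_C2_EV)  # rms p_x/(m*c) per axis
px = rng.normal(0.0, sigma_p, n)               # p_x in units of m*c
py = rng.normal(0.0, sigma_p, n)

# MTE = <p_perp^2>/(2m); with momenta in units of m*c this becomes
# MTE = 0.5 * <(p_x/mc)^2 + (p_y/mc)^2> * m*c^2.
mte_eV = 0.5 * np.mean(px**2 + py**2) * M_E_C2_EV

# Minimum normalized emittance for a beam born with rms size sigma_x:
# eps_n,x = sigma_x * sqrt(MTE / (m c^2)).
eps_n_m = sigma_x_m * np.sqrt(mte_eV / M_E_C2_EV)

print(f"estimated MTE:     {1e3 * mte_eV:.1f} meV")
print(f"thermal emittance: {1e6 * eps_n_m:.2f} mm*mrad")  # 1e-6 m*rad
```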
References
Accelerator physics | Mean transverse energy | [
"Physics"
] | 366 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
70,926,814 | https://en.wikipedia.org/wiki/Cuphophyllus%20atlanticus | Cuphophyllus atlanticus is a species of agaric (gilled mushroom) in the family Hygrophoraceae. Until recently (2021), the species was considered to be conspecific with the North American Cuphophyllus canescens, but DNA sequencing has shown that it is distinct. As C. canescens, it has been given the recommended English name of felted waxcap in the United Kingdom. Cuphophyllus atlanticus has a European and North American distribution, occurring in Europe mainly in agriculturally unimproved grassland. Threats to its habitat have resulted in C. canescens (including C. atlanticus) being assessed as globally "vulnerable" on the IUCN Red List of Threatened Species.
Taxonomy
The species was first described from Norway in 2021 as a result of molecular research, based on cladistic analysis of DNA sequences. Previously, European specimens had been referred to the similar Cuphophyllus canescens, but the latter species appears to be confined to North America.
Description
Basidiocarps are agaricoid, up to 40mm (1.5 in) tall, the cap broadly convex to flat when expanded, up to 40mm (1.5 in) across. The cap surface is smooth, dry, slightly felted, grey to bluish grey. The lamellae (gills) are thick, decurrent (running down the stipe), pale grey. The stipe (stem) is smooth, white to pale grey, lacking a ring. The spore print is white, the spores (under a microscope) smooth, inamyloid, subglobose, c. 5.5 to 6 by 4.5 to 5 μm.
Similar species
In Europe, the felted waxcap is distinguished from other grey species of Cuphophyllus by its dry, slightly felted cap that is never viscid or greasy. In North America, C. canescens is very similar, but is said to have drab or brownish tints in the cap and (microscopically) smaller, more globose spores measuring 4 to 5 μm across.
Distribution and habitat
In Europe, DNA studies have confirmed the presence of the felted waxcap in Norway, Sweden, and Scotland. This is the same northerly distribution previously noted for "C. canescens" in Europe. In North America, C. atlanticus has been confirmed from the United States (North Carolina) and Canada.
Recent research suggests waxcaps are neither mycorrhizal nor saprotrophic but may be associated with mosses.
Conservation
In Europe, Cuphophyllus atlanticus is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. As a result, the species is of global conservation concern and (as C. canescens sensu lato) is listed as "vulnerable" on the IUCN Red List of Threatened Species.
See also
List of fungi by conservation status
References
Fungi of Europe
Fungi of North America
Hygrophoraceae
Fungi described in 2021
Fungus species | Cuphophyllus atlanticus | [
"Biology"
] | 629 | [
"Fungi",
"Fungus species"
] |
63,742,889 | https://en.wikipedia.org/wiki/Compact%20Cassette%20tape%20types%20and%20formulations | Audio compact cassettes use magnetic tape of three major types which differ in fundamental magnetic properties, the level of bias applied during recording, and the optimal time constant of replay equalization. Specifications of each type were set in 1979 by the International Electrotechnical Commission (IEC): Type I (IEC I, 'ferric' or 'normal' tapes), Type II (IEC II, or 'chrome' tapes), Type III (IEC III, ferrichrome or ferrochrome), and Type IV (IEC IV, or 'metal' tapes). 'Type 0' was a non-standard designation for early compact cassettes that did not conform to IEC specification.
By the time the specifications were introduced, Type I included pure gamma ferric oxide formulations, Type II included ferricobalt and chromium(IV) oxide formulations, and Type IV included metal particle tapes—the best-performing, but also the most expensive. Double-layer Type III tape formulations, advanced by Sony and BASF in the 1970s, never gained substantial market presence.
In the 1980s the lines between the three types blurred. Panasonic developed evaporated metal tapes that could be made to match any of the three IEC types. Metal particle tapes migrated to Type II and Type I, ferricobalt formulations migrated to Type I. By the end of the decade performance of the best Type I ferricobalt tapes (superferrics) approached that of Type IV tapes; performance of entry-level Type I tapes gradually improved until the very end of compact cassette production.
Specifications
Magnetic properties
Magnetic recording relies on the use of hard ferrimagnetic or ferromagnetic materials. These require strong external magnetic fields to be magnetized, and retain substantial residual magnetization after the magnetizing field is removed. Two fundamental magnetic properties, relevant for audio recording, are:
Saturation remanence limits the maximum output level and, indirectly, the dynamic range of audio recordings. The remanence of audio tapes, referred to quarter-inch tape width, varies from around 1300–1400 G for basic ferric tapes to 3000–3500 G for Type IV tapes; the advertised remanence of the 1986 JVC Type IV cassette was higher still.
Coercivity is a measure of the external magnetic flux required to magnetize the tape, and an indicator of the necessary bias level. The coercivity of audio tapes varies from roughly 350–400 Oe for ferric tapes to around 1100 Oe for metal tapes. High-coercivity particles are more difficult to erase, bias and record, but also less prone to high-frequency losses during recording, and to external interference and self-demagnetization during storage.
A useful figure of merit of tape technology is the squareness ratio of the hysteresis curve. It is an indicator of tape uniformity and its linearity in analogue recording. An increase in the squareness ratio defers the onset of compression and distortion, and allows fuller utilization of the tape's dynamic range within the limits of remanence. The squareness ratio of basic ferric tapes rarely exceeds 0.75, and the squareness ratio of the best tapes exceeds 0.9.
Electromagnetic properties
Manufacturers of bulk tape provided extremely detailed technical descriptions of their product, with numerous charts and dozens of numeric parameters. From the end user viewpoint, the most important electromagnetic properties of the tape are:
Maximum output levels, usually specified in dB relative to the nominal zero reference level of 250 nWb/m or the 'Dolby level' of 200 nWb/m. Often incorrectly called recording levels, these are always expressed in terms of the tape's output, thus taking its sensitivity out of the equation. Performance at low and middle frequencies, and at treble frequencies, was traditionally characterized by two related but different parameters:
Maximum output level (MOL) is relevant at low and middle frequencies. It is usually specified at 315 Hz (MOL315) or 400 Hz (MOL400), and its value marks the point when the third harmonic coefficient reaches 3%. Further magnetization of the tape is technically possible, but at the cost of unacceptable compression and distortion. For all types of tape, MOL reaches a maximum in the 125–800 Hz area, while dropping off below and above this range. At low bass frequencies, the maximum output of Type I tape is 3–5 dB lower than MOL400, while in Type IV tapes it is 6–7 dB lower. As a result, ferric tapes handle bass-heavy music with apparent ease compared to expensive metal tapes. Double-layer Type III (IEC III, ferrichrome or ferrochrome) tape formulations were supposed to allow bass frequencies to be recorded deeper into the ferric layer, while keeping the high frequencies in the upper chromium oxide layer.
At treble frequencies the playback head cannot reliably reproduce harmonics of the recorded signal. This makes distortion measurements impossible; instead of MOL, high-frequency performance is characterized by the saturation output level (SOL), usually specified at 10 kHz (SOL10k). Once the tape reaches the saturation point, any further increase in recording flux actually decreases output to below SOL.
Noise level, usually understood as bias noise (hiss) of a tape recorded with zero input signal, replayed without noise reduction, A-weighted and referred to the same level as MOL and SOL. The difference between bias noise and the noise of virgin tape is an indicator of tape uniformity. Another important but rarely quantified type of noise is modulation noise, which appears only in the presence of a recorded signal, and which cannot be reduced by Dolby or dbx noise reduction systems.
Dynamic range, or signal-to-noise ratio, was usually understood as the ratio between MOL and the A-weighted bias noise level (see the worked example after this list). High fidelity audio requires a dynamic range of at least 60–65 dB; the best cassette tapes reached this threshold in the 1980s, at least partially eliminating the need for the use of noise reduction systems. Dynamic range is the most important property of the tape. The higher the dynamic range of music, the more demanding it is of tape quality; alternatively, heavily compressed music sources can do well even with basic, inexpensive tapes.
Sensitivity of the tape, referred to that of an IEC reference tape and expressed in dB, was usually measured at 315 Hz and 10 kHz.
Stability of playback in time. Low-quality or damaged cassette tape is notoriously prone to signal dropouts, which are absolutely unacceptable in high fidelity audio. For high quality tapes, playback stability is sometimes lumped together with modulation noise and wow and flutter into an integral smoothness parameter.
Frequency range, per se, is usually unimportant. At low recording levels (−20 dB referred to the nominal level) all quality tapes can reliably reproduce the full audio band, which is sufficient for high fidelity audio. However, at high recording levels the treble output is further limited by saturation. At the Dolby recording level the upper frequency limit shrinks considerably, with typical chromium dioxide tapes saturating at lower frequencies than metal tapes; for chromium dioxide tapes, this is partially offset by lower hiss levels. In practice, the extent of the high-level frequency range is not as important as the smoothness of the midrange and treble frequency response.
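A worked example of the dynamic range arithmetic defined in the list above; the MOL and noise figures are invented placeholders, merely plausible for a quality Type I tape, not measurements from this article.

```python
# Dynamic range as defined above: the ratio of MOL to A-weighted bias
# noise, both expressed in dB relative to the same reference level.
mol_315_db = 4.0      # midrange MOL, dB re the reference level (assumed)
noise_a_db = -56.0    # A-weighted bias noise, dB re the same level (assumed)

dynamic_range_db = mol_315_db - noise_a_db
print(f"dynamic range: {dynamic_range_db:.0f} dB")  # -> 60 dB
```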
Standards
The original specification for Compact Cassette was set by Philips in 1962–1963. Of the three then available tape formulations that matched the company's requirements, the BASF PES-18 tape became the original reference. Other chemical companies followed with tapes of varying quality, often incompatible with the BASF reference. By 1970, a new, improved generation of tapes firmly established themselves on the market, and became the de facto reference for aligning tape recorders — thus the compatibility issue worsened even further. In 1971 it was tackled by the Deutsches Institut für Normung (DIN), which set the standard for chromium dioxide tapes. In 1978 the International Electrotechnical Commission (IEC) enacted the comprehensive standard on cassette tapes (IEC 60094). One year later the IEC mandated the use of notches for automatic tape type recognition. Since then, the four cassette tape types were known as IEC I, IEC II, IEC III and IEC IV. The numerals follow the historical sequence in which these tape types were commercialized, and do not imply their relative quality or intended purpose.
An integral part of the IEC 60094 standard family is the set of four IEC reference tapes. Type I and Type II reference tapes were manufactured by BASF, Type III reference tapes by Sony, and Type IV reference tapes by TDK. Unlike consumer tapes, which were manufactured continuously over the years, each reference tape was made in a single production batch by the IEC-approved factory. These batches were made large enough to fill the need of the industry for many years. A second run was impossible, because chemists were unable to replicate the reference tape type formulation with proper precision. From time to time, the IEC revised the set of references; the final revision took place in April 1994. The choice of reference tapes, and the role of the IEC in general, has been debated. Meinrad Liebert, designer of Studer and Revox cassette decks, criticized the IEC for failing to enforce the standards and lagging behind the constantly changing market. In 1987, Liebert wrote that while the market clearly branched into distinct, incompatible "premium" and "budget" subtypes, the IEC tried in vain to select an elusive "market average"; meanwhile, the industry moved forward, disregarding outdated references. This, according to Liebert, explained sudden demand for built-in tape calibration tools that were almost unheard-of in the 1970s.
From the end user viewpoint, the IEC 60094 defined two principal properties of each tape type:
Bias level for each type was set equal to the optimal bias of the relevant IEC reference tape, and sometimes changed when the IEC changed the reference tapes, though the BASF datasheet for the Y348M tape, approved as the IEC Type I reference in 1994, says that its optimal bias is exactly 0.0 dB from the previous reference (BASF R723DG). The IEC reference tape bias definition is: "Using the relevant IEC reference tape and heads according to Ref. 1.1, the bias current providing the minimum third harmonic distortion ratio for a 1 kHz signal recorded at the reference level is the reference bias setting." Type II bias ('high bias') equals around 150% of Type I bias, and Type IV bias ('metal bias') equals around 250% of Type I bias. Real cassette tapes invariably deviate from the references and require fine tuning of bias; recording a tape with improper bias increases distortion and alters frequency response. A 1990 comparative test of 35 Type I tapes showed that their optimal bias levels were all within a narrow margin of the Type I reference, while Type IV tapes deviated from the Type IV reference by considerably more. Some typical cassette deck frequency response curves showing the effects of different bias settings are provided in the relevant figure.
Time constant of replay equalization (often shortened to EQ) for Type I tapes equals 120 μs, as per the Philips specification. The time constant for Type II, III and IV tapes is set at a lower value of 70 μs. The purpose of replay equalization is to compensate for high-frequency losses during recording, which, in the case of ferric cassettes, usually start at around 1–1.5 kHz. The choice of time constant is a somewhat arbitrary decision, seeking the best combination of conflicting parameters: extended treble response, maximum output, minimum noise and minimum distortion. High-frequency roll-off that is not fully compensated in the replay channel may be offset by pre-emphasis during recording. Lower replay time constants decrease the apparent level of hiss (by about 4 dB when stepping down from 120 to 70 μs), but also decrease the apparent high-frequency saturation level, so the choice of time constants was a matter of compromise and debate, as the sketch below illustrates. "Hard" maximum and saturation levels, in terms of the voltage output of the playback head, remain unchanged. However, the high-frequency voltage level at the output of the replay equalizer decreases with a decrease in time constant. The industry and the IEC decided that it would be safe to decrease the time constant of Type II, III and IV tapes to 70 μs, because they are less prone to high-frequency saturation than contemporary ferric tapes. Many disagreed, arguing that the risk of saturation at 70 μs is unacceptably high. Nakamichi and Studer complied with the IEC, but provided an option for playing Type II and Type IV tapes using the 120 μs setting and matching pre-emphasis filters in the recording path. A similar pre-emphasis was applied by duplicators of prerecorded chromium dioxide cassettes; although loaded with Type II tape, these cassettes were packaged in Type I cassette shells and were intended to be replayed as Type I tapes.
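A minimal sketch of the hiss trade-off just described, modeling only the high-frequency replay time constant; ignoring the rest of the standardized replay characteristic (including its bass time constant) is a simplifying assumption.

```python
import math

def treble_lift_db(f_hz: float, tau_s: float) -> float:
    """Replay-amplifier treble gain (dB) relative to an ideal integrator,
    for a single-pole equalizer with high-frequency time constant tau."""
    w_tau = 2 * math.pi * f_hz * tau_s
    return 10 * math.log10(1 + w_tau ** 2)

# Extra treble gain (and therefore amplified tape hiss) that the 120 us
# setting applies compared with the 70 us setting:
for f in (2_000, 5_000, 10_000, 16_000):
    delta = treble_lift_db(f, 120e-6) - treble_lift_db(f, 70e-6)
    print(f"{f:>6} Hz: 120 us replays hiss {delta:+.1f} dB louder than 70 us")

# The difference approaches 20*log10(120/70) ~ 4.7 dB at high frequencies,
# consistent with the roughly 4 dB apparent hiss reduction quoted above.
```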
Type I
Type I, or IEC I, ferric or 'normal' cassettes were historically the first, the most common and the least expensive; they dominated the prerecorded cassette market. The magnetic layer of a ferric tape consists of around 30% synthetic binder and 70% magnetic powder — acicular (oblong, needle-like) particles of gamma ferric oxide (γ-Fe2O3) a fraction of a micrometre in length. Each particle of such size contains a single magnetic domain. The powder was and still is manufactured in bulk by chemical companies specializing in mineral pigments for the paint industry. Ferric magnetic layers are brown in colour, with a shade and intensity that depend mostly on the size of the particles.
Type I tapes must be recorded with 'normal' (low) bias flux and replayed with a 120 μs time constant. Over time, ferric oxide technology developed continuously, with new, superior generations emerging around every five years. Cassettes of various periods and price points can be sorted into three distinct groups: basic coarse-grained tapes; advanced fine-grained, or microferric, tapes; and highest-grade ferricobalt tapes, having ferric oxide particles encapsulated in a thin layer of cobalt-iron compound. Ferricobalt tapes are often called 'cobalt doped'; however, this is historically incorrect. Cobalt doping in a strict sense involves uniform substitution of iron atoms with cobalt. This technology has been tried for audio and failed, losing to chromium dioxide. Later, the industry chose the far more reliable and repeatable process of cobalt adsorption — encapsulation of unmodified iron oxide particles in a thin layer of cobalt ferrite.
The remanence and squareness properties of the three groups differ substantially, while coercivity remains almost unchanged at around 350–400 Oe (including the IEC reference tape approved in 1979). Quality Type I cassettes have higher midrange MOL than most Type II tapes and a slow, gentle MOL roll-off at low frequencies, but less high-frequency headroom than Type II. In practice, that means that ferric tapes have lower fidelity compared to chrome tapes and metal tapes at high frequencies, but are often better at reproducing the low frequencies found in bass-heavy music.
Basic ferric
Entry-level ferric formulations are made of pure, unmodified, coarse-grained ferric oxide. Relatively large (up to about a micrometre in length), irregularly-shaped oxide particles have protruding branches or dendrites; these irregularities prevent tight packing of particles, reducing the iron content of the magnetic layer and, consequently, its remanence (1300–1400 G) and maximum output level. The squareness ratio is low, around 0.75, resulting in an early but smooth onset of distortion. These tapes, historically labeled and sold as 'low noise', have high levels of hiss and relatively low sensitivity; their optimal bias level is 1–2 dB lower than that of the IEC reference tape.
This group also includes most of the so-called 'Type 0' cassettes — a mixed bag of ferric tapes that do not meet the IEC standard or the original Philips specification. Historically, informal 'Type 0' denoted early cassettes loaded with tape designed for reel-to-reel recorders. In the 1980s, many otherwise decent and usable basic tapes were effectively demoted to 'Type 0' status when equipment manufacturers began aligning their decks for use with premium ferricobalts (the latter having much higher sensitivity and bias). In the 21st century, 'Type 0' denotes all sorts of low-quality, counterfeit or otherwise unusable cassettes. They require unusually low bias, and even then only a few of them perform on par with quality Type I tapes. A 'Type 0' tape, if it is usable at all, is incompatible with Dolby noise reduction: with the Dolby decoder engaged, the tape sounds dull, because its poor sensitivity causes severe Dolby mistracking.
Microferric
At the beginning of the 1970s, gradual technological improvements over the previous decade resulted in the second generation of Type I tapes. These tapes had uniformly needle-shaped, highly orientable particles (HOP) of much smaller size, around a third of a micrometre in length, hence the trade term microferrics. Their uniform shape allowed very dense packing of particles, with less binder and more particles per unit volume, and a correspondingly higher remanence. The first microferric (TDK SD) was introduced in 1971, and in 1973 Pfizer began marketing patented microferric powder that soon became an industry standard. In the 20th century, Pfizer had a strong mineral pigment division, with factories in California, Illinois and Indiana. In 1990 Pfizer sold its iron-oxide business to Harrisons & Crosfield of the United Kingdom. The next step was to align needle-shaped particles in parallel with the flux lines generated by the recording head; this was done by controlled flow of liquid magnetic mix over substrate (rheological orientation), or by applying a strong magnetic field while the binder was curing.
Typical microferric cassettes of the 1980s had less hiss and a noticeably higher MOL than basic Type I tapes, at the cost of increased print-through. Noise and print-through are interrelated, and directly depend on the size of oxide particles. A decrease in particle size invariably decreases noise and increases print-through. The worst combination of noise and print-through occurs in highly irregular formulations containing both unusually large and unusually small particles. Small improvements continued for thirty years, with a gradual rise of the squareness ratio from 0.75 to over 0.9. Newer tapes consistently produced higher output with less distortion at the same levels of bias and audio recording signals. The transition was smooth; after the introduction of new, superior tape formulations, manufacturers often kept older ones in production, selling them in different markets or under different, cheaper, designations. Thus, for example, TDK ensured that its premium microferric AD cassette was always ahead of the entry-level microferric D, having finer particles and lower noise.
Ferricobalt Type I
The third, and best performing, class of ferric tapes is made of fine ferric particles encapsulated in a thin layer of cobalt-iron mix, similar in composition to cobalt ferrite. The first cobalt-doped cassettes, introduced by 3M in 1971, had exceptionally high sensitivity and MOL for the period, and were an even match for contemporary chromium dioxide tapes — hence the trade name superferrics. Of many competing cobalt-doping technologies, the most widespread was low-temperature encapsulation of ferric oxide in an aqueous solution of cobalt salts with subsequent drying at 100–150 °C. The process was first commercialized in Japan in the early 1970s.
The increased remanence of ferricobalt cassettes results in a gain in MOL and a 2–3 dB gain in sensitivity compared to basic Type I tapes; their hiss level is on par with contemporary microferric formulations. The dynamic range of the best ferricobalt cassettes (true superferrics) equals 60–63 dB, and their MOL at lower frequencies exceeds the MOL of Type IV tapes. Overall, superferrics are a good match for Type IV, especially in recording acoustical music with a wide dynamic range. This was reflected in the price of top-of-the-line superferric tapes like Maxell XLI-S or TDK AR-X, which by 1992 matched the price of 'entry-level' metal tapes.
Type II
IEC Type II tapes are intended for recording using high (150% of normal) bias and replay with the 70μs time constant. All generations of Type II reference tapes, including the 1971 DIN reference that pre-dated the IEC standard, were manufactured by BASF. Type II has been historically known as 'chromium dioxide tape' or simply 'chrome tape', but in reality most Type II cassette tapes do not contain chromium. The "pseudochromes" (including almost all Type IIs made by the Big Three Japanese makers — Maxell, Sony and TDK) are actually ferricobalt formulations optimized for Type II recording and playback settings. A true chrome tape may have a distinctive 'old crayon' smell, more specifically the smell of oil or wax crayons made with chrome pigments such as chrome yellow; this smell is missing in "pseudochromes". Both kinds of Type II tape have, on average, lower high-frequency MOL and SOL, and a higher signal-to-noise ratio, than quality Type I tapes. This is caused by the midrange and treble pre-emphasis applied during recording to match the 70μs equalization at playback.
Chromium dioxide
In the mid-1960s, DuPont created and patented an industrial process for making fine ferromagnetic particles of chromium dioxide (CrO2). The first CrO2 tapes for data and video appeared in 1968. In 1970, BASF, which would become the main proponent of CrO2, launched its chrome cassette production; in the same year Advent introduced the first cassette deck with chrome capability and Dolby noise reduction. The combination of low noise CrO2 tape with companding noise reduction brought a revolutionary improvement to compact-cassette sound reproduction, almost reaching high fidelity levels. However, CrO2 tape required redesign of the bias and replay equalization circuitry. This problem was resolved during the 1970s, but three unsolved issues remained: the cost of making CrO2 powder, the cost of royalties charged by DuPont, and the pollution effects of hexavalent chromium waste.
The reference CrO2 tape, approved by the IEC in 1981, is characterized by relatively high coercivity (high bias) and moderate remanence. Retail CrO2 cassettes had coercivity ranging upward from 400 Oe. Owing to the very 'clean', uniform shape of the particles, chrome tapes easily attain an almost perfect squareness ratio of 0.90. 'True chromes', not modified by the addition of ferric additives or coatings, have very low and euphonic hiss (bias noise), and very low modulation noise at high frequencies. Double-layer CrO2 cassettes have the lowest absolute noise among all audio formulations; replayed with 70 μs equalization, they generate less noise than a ferric tape replayed at 120 μs. Their sensitivity is usually also very high, but MOL is low, on par with basic Type I tapes. CrO2 tape does not tolerate overload very well: the onset of distortion is sharp and dissonant, so recording levels should be set conservatively, well below MOL. At low frequencies, the MOL of CrO2 tapes rolls off faster than in ferric or metal tapes, hence the reputation of 'bass shyness'. CrO2 cassettes are best fit for recording dynamic music with rich harmonic content and relatively low bass levels; their dynamic range is a good fit for recording from uncompressed digital sources and for music with extended quiet passages. Good ferric tapes may have the same or higher treble SOL, but CrO2 tapes still sound subjectively better owing to lower hiss and modulation noise.
Ferricobalt Type II
After the introduction of CrO2 cassettes, Japanese companies began developing a royalty-free alternative to DuPont's patent, based on an already established cobalt doping process. A controlled increase in cobalt content causes an almost linear increase in coercivity, thus a Type II "pseudochrome" tape can be made by simply adding around 3% cobalt to a Type I ferricobalt tape. By 1974 the technology was ready for mass production, and TDK and Maxell introduced their classic "pseudochromes" (TDK SA and Maxell UD-XL), while killing their true chrome lines (TDK KR and Maxell CR). By 1976, ferricobalt formulations took over the video tape market, and eventually they became the dominant high-performance tape for audio cassette. Chromium dioxide disappeared from the Japanese domestic market, although chrome remained the tape of choice for high fidelity cassette duplication among the music labels. In consumer markets, chrome coexisted as a distant second with "pseudochromes" until the very end of the cassette era. Ferricobalt technology developed continuously: in the 1980s Japanese companies introduced 'premium' double-layered ferricobalts with exceptionally high MOL and SOL; in the middle of the 1990s TDK launched the first and only triple-coated ferricobalt, the SA-XS.
The electromagnetic properties of Type II ferricobalts are very close to those of their Type I cousins. Owing to the use of replay equalization, the hiss level is lower, but so is the treble saturation level. The dynamic range of Type II ferricobalts, according to the 1990 tests, lies between 60 and 65 dB. The coercivity of 580–700 Oe and remanence of 1300–1550 G are close to the CrO2 reference tape, but the difference is big enough to cause compatibility problems. TDK SA was the informal reference in Japan. TDK advertisements boasted that "more decks are aligned to SA than any other tape", but there is very little first-hand information on which tapes were actually used at the factories. Japanese manufacturers provided lists of recommended tapes but did not disclose their reference tapes. There is, however, enough indirect information converging on TDK SA. For example, in 1982, when Japanese-owned Harman Kardon sent samples for Dolby certification, they were aligned to the IEC CrO2 reference. However, production copies of the same models were aligned to TDK SA. Since the Japanese already dominated both the cassette and hi-fi equipment markets, incompatibility further undermined the market share of European-made cassette decks and CrO2 cassettes. In 1987, the IEC resolved the compatibility issue by appointing a new Type II reference tape U 564 W, a BASF ferricobalt with properties that were very close to contemporary TDK tapes. With the short-lived 1988 Reference Super, even BASF started the manufacture and sale of Type II ferricobalt tapes.
Metal particle Type II
The coercivity of an iron-cobalt metal particle mix, precipitated from aqueous solutions, depends on the cobalt content. A change in cobalt content from 0% to 30% causes a gradual rise in coercivity from around the Type I level to the Type IV level of about 1100 Oe; alloyed iron-cobalt particles can reach even higher coercivities. This makes possible the manufacturing of metal particle tapes conforming to Type II and even Type I biasing requirements.
In practice, only Denon, Taiyo Yuden, and, for only a few years, TDK ever attempted making Type II metal tape. These rare, expensive cassettes were characterized by a high remanence approaching that of Type IV tapes; their coercivity was closer to Type II than to Type IV tapes, but still quite far from either type's reference. Independent tests of the 1990 Denon and Taiyo Yuden tapes placed them at the very top of the Type II spectrum — if the recording deck could cope with their unusually high sensitivity and provide an unusually high bias current.
Type III
In 1973, Sony introduced double-layer ferrichrome tapes having a five-micron ferric base coated with one micron of CrO2 pigment. The new cassettes were advertised as 'the best of both worlds' — combining the good low-frequency MOL of microferric tapes with good high-frequency performance of chrome tapes. The novelty became part of the IEC standard, codenamed Type III; the Sony CS301 formulation became the IEC reference. However, the idea failed to attract followers. Apart from Sony, only BASF, Scotch and Agfa introduced their own ferrichrome cassette tapes.
These expensive ferrichrome tapes never gained substantial market share, and after the release of metal tapes they lost their perceived exclusivity. Their place in the market was taken over by superior and less expensive ferricobalt formulations. By 1983, tape deck manufacturers stopped providing an option for recording Type III tapes. Ferrichrome tape remained in the BASF and Sony lineups until 1984 and 1988, respectively.
The use of ferrichrome tapes was complicated by the conflicting rationale of the playback of these tapes. Officially, they were intended to be played back using 70 μs equalisation. The information leaflet that Sony included in each box of ferrichrome cassette tapes recommended that, "If the selector has two positions, NORMAL and CrO2, set it to the NORMAL position." (which applies 120 μs equalisation). The leaflet notes that the high frequency range will be enhanced and that the tone control should be adjusted to compensate. The same leaflet recommends that if the playback machine offers a 'Fe-Cr' selection, this should be selected. On Sony's machines, this automatically selects 70 μs equalisation. The service manual for the Sony TC-135SD, which was one of the few cassette decks offering a 'Fe-Cr' position, shows the tape type selector switch paralleling the ferrichrome equalisation selection with that of chrome dioxide (70 μs). Neither Sony nor BASF cassette tapes feature the notches on the back surface that automatically select 70 μs equalisation on those machines that featured an automatic detection system.
Type IV
Metal particle Type IV
Pure metal particles have an inherent advantage over oxide particles due to 3–4 times higher remanence, very high coercivity and far smaller particle size, resulting in both higher MOL and SOL values. The first attempts to make metal particle (MP) tape, rather than metal oxide particle tape, date back to 1946; viable iron-cobalt-nickel formulations appeared in 1962. In the early 1970s, Philips began development of MP formulations for the Compact Cassette. Contemporary powder metallurgy could not yet produce fine, submicron-size particles, nor properly passivate these highly pyrophoric powders. Although these problems were soon solved, chemists could not convince the market of the long-term stability of MP tapes; suspicions of inevitable early degradation persisted until the end of the cassette era. The fears did not materialize, and most metal particle tapes survived decades of storage just as well as Type I tapes; however, signals recorded on metal particle tapes do degrade at about the same rate as on chromium tapes, around 2 dB over the estimated lifetime of the cassette.
Metal particle Compact Cassettes, or simply 'metal' tapes, were introduced in 1979 and were soon standardized by the IEC as Type IV. They share the same replay time constant as Type II tapes, and can be correctly reproduced by any deck equipped with Type II equalization. Recording onto a metal tape requires special high-flux magnetic heads and high-current amplifiers to drive them. Typical metal tape is characterized by remanence of 3000–3500 G and coercivity of 1100 Oe, thus its bias flux is set at 250% of Type I level. Traditional glass ferrite heads would saturate their magnetic cores before reaching these levels. "Metal capable" decks had to be equipped with new heads built around sendust or permalloy cores, or the new generation of glass ferrite heads with specially treated gap materials.
Metal particle tapes, particularly top-of-the-line double coated tapes, have record high midrange MOL and treble SOL, and the widest dynamic range coupled with the lowest distortion. They were always expensive, almost exclusive, out of reach of most consumers. They excel at reproducing fine nuances of uncompressed acoustic music, or music with very high treble content, like brass and percussion. However, they need a high quality, properly aligned deck to reveal their potential. First-generation metal particle tapes were consistently similar in their biasing requirements, but by 1983 newer formulations drifted away from each other and the reference tape.
Metal evaporated
Unlike wet coating processes, metal evaporated (ME) media are fabricated by physical deposition of vaporized cobalt or cobalt-nickel mix in a vacuum chamber. There is no synthetic binder to hold particles together; instead, they adhere directly to polyester tape substrate. An electron beam melts source metal, creating a continuous directional flow of cobalt atoms towards the tape. The zone of contact between the beam and the tape is blown with a controlled flow of oxygen, which helps formation of polycrystalline metal-oxide coating. A massive liquid-cooled rotating drum, which pulls the tape into the contact zone, protects it from overheating.
Metal evaporated coatings, along with barium ferrite, have the highest information density of all rerecordable media. The technology was introduced in 1978 by Panasonic, initially in the form of audio microcassettes, and matured through the 1980s. Metal evaporated media established itself in the analogue (Hi8) and digital (Digital8, DV and MicroMV) videotape markets, and in data storage (Advanced Intelligent Tape, Linear Tape Open). The technology seemed promising for analogue audio recording; however, very thin metal evaporated layers were too fragile for consumer cassette decks, the coatings were too thin for good MOL, and manufacturing costs were prohibitively high. Panasonic Type I, Type II and Type IV metal evaporated cassettes, introduced in 1984, were sold for only a few years in Japan alone, and remained unknown in the rest of the world.
Measured performance characteristics
During the many years that cassette decks were popular, many audio magazines published comparative measurements of the performance characteristics of the wide variety of different tapes that were available in the marketplace. These measurements typically included parameters such as MOL, SOL, frequency response at 0 dB and −20 dB re Dolby Level, signal-to-noise ratio, modulation noise, bias level, and sensitivity. The first figure shows frequency response plots for sample Type I, Type II, and Type IV cassette tapes comparing their MOL, SOL, and 0 dB performance.
The second figure shows the frequency response performance of typical Type I, Type II, and Type IV cassette tapes, obtained for a number of different input signal levels, using a high quality Pioneer CT-93 stereo cassette deck from the 1990s. For each of the three tape formulations, the record/replay characteristics of the cassette deck were aligned with the relevant IEC Reference Tape, and each tested tape was measured with the bias and equalization unchanged from that reference position. The record/replay frequency response was tested at four levels: +6 VU, 0 VU, −10 VU and −20 VU (Dolby Level is marked at +3 VU for the CT-93). Thus, these plots provide data on the linearity of the different tape formulations at both high and moderate recording levels. It is interesting to note that the Type I tape shows responses at +6 VU and 0 VU that are much flatter than those of the Type II tape. At +6 VU, the Type II tape displays significant amounts of signal level compression across the entire frequency range, reducing to about 2 dB of signal compression between 80 Hz and 1 kHz.
Some representative measured performance characteristics of a small number of commercially available tape types are presented in the table below.
References
Bibliography
Tape recording
Audio storage
Inorganic chemistry
Technology-related lists | Compact Cassette tape types and formulations | [
"Chemistry",
"Technology"
] | 7,523 | [
"Recording devices",
"nan",
"Tape recording"
] |
63,742,908 | https://en.wikipedia.org/wiki/Leo%20P | Leo P is a small, star-forming irregular galaxy located in the constellation Leo, discovered through the blind HI Arecibo Legacy Fast ALFA (ALFALFA) survey, as an ultra-compact high-velocity cloud (UCHVC) of hydrogen gas. Its confirmation as a dwarf galaxy in 2013 suggests that other such UCHVCs are possibly undiscovered dwarf galaxies themselves. Leo P is noteworthy for harbouring one of the most metal-poor environments in the local universe. Its metallicity is just 3% that of the Sun's, meaning that its stars contain 30 times less heavy elements than the Sun. This makes Leo P similar to the pristine environments of primordial galaxies.
Leo P is located on the very outskirts of the Local Group, nearly 5.3 million light years away, and may not be part of it, instead being part of the Antlia-Sextans Group, a small grouping of galaxies adjacent to the Local Group, sometimes considered bound to it.
Properties
Leo P is one of the smallest, least massive and faintest star-forming galaxies in the Local Group. Its total luminosity is less than 440,000 times that of the Sun (absolute magnitude of −9.27), and its stellar mass is only about 560,000 solar masses, implying a small stellar population. Leo P is also very rich in gas, containing about 810,000 solar masses of neutral hydrogen. Leo P's half-light radius is about 570 pc.
Leo P's stellar population consists of a strong concentration of massive, bright and blue stars in the centre of the galaxy, which may be B- and A-type main sequence stars. Some fainter and redder stars are also observed, presumably red giants from an older stellar population. Ten RR Lyrae stars have been detected in the galaxy, as well as one H II region, which is ionised by LP26, an O-type star of 22 solar masses, the only one in Leo P.
Star formation
Leo P is one of the few Local Group galaxies which are currently forming stars. Its star formation rate is about 4.9 × 10⁻⁵ solar masses per year, or 1 solar mass every 20,400 years, and it is the Local Group's most metal-poor star-forming galaxy. Its star formation history shows mostly constant star formation throughout its lifetime, something which is also observed in larger irregular galaxies. Models also suggest that there was not much star formation post-reionisation, 12–8 billion years ago, and that over the last 4 billion years star formation has proceeded at a constant rate.
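As a quick arithmetic check, the quoted cadence of one solar mass every 20,400 years fixes the rate:

$$\dot{M}_\star \approx \frac{1\,M_\odot}{2.04\times10^{4}\,\mathrm{yr}} \approx 4.9\times10^{-5}\,M_\odot\,\mathrm{yr}^{-1}.$$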
References
Irregular galaxies
NGC 3109 subgroup
Leo (constellation)
Astronomical objects discovered in 2013 | Leo P | [
"Astronomy"
] | 543 | [
"Leo (constellation)",
"Constellations"
] |
63,746,422 | https://en.wikipedia.org/wiki/Kritisk%20Revy | Kritisk Revy () was a quarterly architecture magazine. It was briefly published between 1926 and 1928 in Copenhagen, Denmark. The magazine played a significant role in developing avant-garde culture in Scandinavia in the period between World War I and World War II. It is also the early source for the Danish modern.
History and profile
Kritisk Revy was established in 1926. The first issue appeared in July 1926. The founders were architects and left-wing intellectuals. The headquarters was in Copenhagen. The editor of the magazine was Poul Henningsen. Although three issues were published in the first year, the frequency of Kritisk Revy was quarterly for the following years.
Kritisk Revy contained articles that led to various polemics. These articles were written not only in Danish but also in other languages. The focus of the magazine was avant-garde architecture and design. However, the coverage was not limited to these subjects: the magazine addressed various topics related to Danish life, including nature preservation, literature and religion. The magazine also embraced a wide range of modern topics, including advertising, shop window design, jazz music, variety theatre and film.
The contributors adopted the notion of art for society's sake. The magazine laid the basis of early Scandinavian modernism. Poul Henningsen developed a new approach towards modernism in the magazine which focused on functionalism, criticism and clarity. It frequently carried articles about the architecture and planning of Copenhagen and other Nordic cities. Significant contributors of Kritisk Revy included Otto Gelsted, Edvard Heiberg and Hans Kirk who would be a member of the Danish Communist Party.
The magazine did not share the political approach of Klingen, a former Danish magazine, but was influenced by its approach to European art. This influence can be seen in the large format of Kritisk Revy (35.2 x 21.6 cm). Like Klingen, the magazine also included frequent illustrations and graphic formats.
The circulation of Kritisk Revy ranged between 1,800 and 2,000 copies. The magazine ceased publication after its eleventh issue, which appeared at Christmas 1928 with an announcement that Kritisk Revy had accomplished its goals.
See also
List of avant-garde magazines
List of magazines in Denmark
References
1926 establishments in Denmark
1928 disestablishments in Denmark
Architecture magazines
Avant-garde magazines
Defunct magazines published in Denmark
Design magazines
Magazines established in 1926
Magazines disestablished in 1928
Magazines published in Copenhagen
Modernism
Multilingual magazines
Political magazines published in Denmark
Quarterly magazines published in Denmark
Triannual magazines | Kritisk Revy | [
"Engineering"
] | 515 | [
"Design magazines",
"Design"
] |
63,747,637 | https://en.wikipedia.org/wiki/Enalapril/hydrochlorothiazide | Enalapril/hydrochlorothiazide, sold under the brand name Vaseretic among others, is a fixed-dose combination medication used for the treatment of hypertension (high blood pressure). It contains enalapril, an angiotensin converting enzyme inhibitor, and hydrochlorothiazide a diuretic. It is taken by mouth.
The most frequent side effects include dizziness, headache, fatigue, and cough.
History
Enalapril/hydrochlorothiazide was approved for medical use in the United States in October 1986.
References
Further reading
ACE inhibitors
Combination antihypertensive drugs
Diuretics
Prodrugs | Enalapril/hydrochlorothiazide | [
"Chemistry"
] | 142 | [
"Chemicals in medicine",
"Prodrugs"
] |
63,748,529 | https://en.wikipedia.org/wiki/Sebastian%20Deffner | Sebastian Deffner is a German theoretical physicist and a professor in the Department of Physics at the University of Maryland, Baltimore County (UMBC). He is known for his contributions to the development of quantum thermodynamics with focus on the thermodynamics of quantum information, quantum speed limit for open systems, quantum control and shortcuts to adiabaticity.
Education
Deffner received his Diplom-Physiker (Master of Science) in 2008 from the University of Augsburg, and he received his doctorate from the same university in 2011.
Career
From 2008 until 2011, Deffner was a research fellow at the University of Augsburg. From 2011 to 2014, he was a Research Associate in the group of Christopher Jarzynski at the University of Maryland, College Park (UMD) for which he had received the DAAD postdoctoral fellowship.
From 2014 to 2016, he took up the position of a Director’s Funded Postdoctoral Fellow with Wojciech H. Zurek at the Los Alamos National Laboratory.
Since 2016, he has held a position as a faculty member of the Department of Physics at the University of Maryland, Baltimore County (UMBC), where he leads the quantum thermodynamics group, and a position as a Visiting Professor at the University of Campinas in Brazil.
Honors and awards
Deffner’s contributions to quantum thermodynamics have been recognized through the 2016 Early Career Award from the New Journal of Physics, as well as the Leon Heller Postdoctoral Publication Prize from the Los Alamos National Laboratory in 2016. Since 2017, Deffner has been a member of the international editorial board for IOP Publishing's Journal of Physics Communications, and since 2019 he has been on the editorial advisory board of the Journal of Non-Equilibrium Thermodynamics, and a member of the Section Board for Quantum Information of Entropy. He is also a member of the inaugural editorial board of PRX Quantum.
2017 APS Outstanding Referee (American Physical Society).
2016 Leon Heller Postdoctoral Publication Prize (Los Alamos National Laboratory).
2016 Early Career Award (New Journal of Physics).
Personal life
As of 2020, Deffner is married to Catherine Nakalembe, a remote sensing scientist. They have two children.
See also
Shortcuts to adiabaticity
Books
References
External links
Quantum Thermodynamics at UMBC, led by Sebastian Deffner
Sebastian Deffner at UMBC
Sebastian Deffner at Publons
Sebastian Deffner at arXiv
German theoretical physicists
Thermodynamicists
21st-century German physicists
Living people
1983 births
University of Maryland, Baltimore County faculty | Sebastian Deffner | [
"Physics",
"Chemistry"
] | 547 | [
"Thermodynamics",
"Thermodynamicists"
] |
63,749,890 | https://en.wikipedia.org/wiki/NGC%20544 | NGC 544 (also known as GC 320 or h 2411) is a faint, small, and round elliptical galaxy located in the Sculptor constellation. The galaxy was discovered by John Herschel on 23 October 1835 and its apparent size is 1.5 by 1.1 arc minutes. It is approximately 360 million light years away from Earth, it is similar to those of NGC 534, NGC 546 and NGC 549.
References
External links
Elliptical galaxies
544
Sculptor (constellation)
Astronomical objects discovered in 1835
Discoveries by John Herschel | NGC 544 | [
"Astronomy"
] | 110 | [
"Constellations",
"Sculptor (constellation)"
] |
63,750,068 | https://en.wikipedia.org/wiki/Natalie%20Prystajecky | Natalie Anne Prystajecky a Canadian biologist and the Environmental Microbiology program at the British Columbia Centre for Disease Control Public Health Laboratory. She holds a Clinical Assistant Professor position at the University of British Columbia. During the COVID-19 pandemic Prystajecky was involved with the development COVID-19 testing capabilities.
Early life and education
Prystajecky studied environmental science and biology at the University of Calgary. She moved to British Columbia as a graduate student, where she first worked toward a certificate in watershed management. In 2010 Prystajecky earned her doctoral degree at the University of British Columbia. Her research considered epidemiological studies of Giardia spp.
Research and career
After completing her doctorate, Prystajecky joined the British Columbia Provincial Health Services Authority, where she helped guide British Columbians through outbreaks of norovirus and influenza. At the time, Prystajecky's advice was to “wash your hands all the time, and soap and water is the best”.
Prystajecky leads the Environmental Microbiology program at the British Columbia Centre for Disease Control Public Health Laboratory. She investigates the relationship between environmental exposures and clinical outcomes. To do this, Prystajecky developed technology for genome sequencing. She has used these genomic technologies to search for pathogens that might cause foodborne illnesses. Prystajecky has used metagenomics to test for bacteria and viruses in water in an effort to improve the health of people and ecosystems.
In early 2020, Prystajecky was involved in two British Columbian oyster One Health studies, named UPCOAST-V for Vibrio parahaemolyticus and UPCOAST-N for norovirus. Improved detection of these pathogens will help to reduce the spread of disease and help the Canadian oyster industry.
During the COVID-19 pandemic, Prystajecky was involved with the development of COVID-19 testing capabilities. The first quantitative PCR assay was shared by researchers in Wuhan with the World Health Organization, and forms the basis of many COVID-19 tests, including those developed by Prystajecky. In particular, Prystajecky looked to reduce the time taken between testing and obtaining results in an effort to understand transmission and protect vulnerable members of the population. The British Columbia Centre for Disease Control program that conducts the testing is known as Responding to Emerging Serious Pathogen Outbreaks using Next-gen Data (RESPOND), and makes use of genome sequencing to identify which patients have been infected by the disease.
Selected publications
Personal life
Prystajecky has two children.
References
Living people
Year of birth missing (living people)
Canadian women biologists
Pathogen genomics
University of Calgary alumni
University of British Columbia alumni
Public health researchers
21st-century Canadian biologists
21st-century Canadian women scientists | Natalie Prystajecky | [
"Biology"
] | 564 | [
"Molecular genetics",
"DNA sequencing",
"Pathogen genomics"
] |
63,751,776 | https://en.wikipedia.org/wiki/Diophantus%20and%20Diophantine%20Equations | Diophantus and Diophantine Equations is a book in the history of mathematics, on the history of Diophantine equations and their solution by Diophantus of Alexandria. It was originally written in Russian by Isabella Bashmakova, and published by Nauka in 1972 under the title Диофант и диофантовы уравнения. It was translated into German by Ludwig Boll as Diophant und diophantische Gleichungen (Birkhäuser, 1974) and into English by Abe Shenitzer as Diophantus and Diophantine Equations (Dolciani Mathematical Expositions 20, Mathematical Association of America, 1997).
Topics
In the sense considered in the book, a Diophantine equation is an equation written using polynomials whose coefficients are rational numbers. These equations are to be solved by finding rational-number values for the variables that, when plugged into the equation, make it true. Although there is also a well-developed theory of integer (rather than rational) solutions to polynomial equations, it is not included in this book.
Diophantus of Alexandria studied equations of this type in the second century AD. Scholarly opinion has generally held that Diophantus only found solutions to specific equations, and had no methods for solving general families of equations. For instance, Hermann Hankel has written of the works of Diophantus that "not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems". In contrast, the thesis of Bashmakova's book is that Diophantus indeed had general methods, which can be inferred from the surviving record of his solutions to these problems.
The opening chapter of the book tells what is known of Diophantus and his contemporaries, and surveys the problems published by Diophantus. The second chapter reviews the mathematics known to Diophantus, including his development of negative numbers, rational numbers, and powers of numbers, and his philosophy of mathematics treating numbers as dimensionless quantities, a necessary preliminary to the use of inhomogeneous polynomials. The third chapter brings in more modern concepts of algebraic geometry including the degree and genus of an algebraic curve, and rational mappings and birational equivalences between curves.
Chapters four and five concern conic sections, and the theorem that when a conic has at least one rational point it has infinitely many. Chapter six covers the use of secant lines to generate infinitely many points on a cubic plane curve, considered in modern mathematics as an example of the group law of elliptic curves. Chapter seven concerns Fermat's theorem on sums of two squares, and the possibility that Diophantus may have known of some form of this theorem. The remaining four chapters trace the influence of Diophantus and his works through Hypatia and into 19th-century Europe, particularly concentrating on the development of the theory of elliptic curves and their group law.
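The secant-and-tangent construction described above is easy to make concrete. The sketch below is illustrative only and is not taken from the book: it applies the construction to Bachet's curve y² = x³ − 2 with the classical rational point (3, 5), using exact rational arithmetic to show how one rational point on a cubic yields further rational points.

```python
from fractions import Fraction as F

def third_point(P, Q, a, b):
    """Third intersection with y^2 = x^3 + a*x + b of the line through P and Q
    (the tangent line when P == Q); all arithmetic stays in the rationals."""
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        m = (3 * x1 * x1 + a) / (2 * y1)   # slope of the tangent line
    else:
        m = (y2 - y1) / (x2 - x1)          # slope of the secant line
    x3 = m * m - x1 - x2                   # the three roots of the cubic sum to m^2
    y3 = y1 + m * (x3 - x1)
    return (x3, y3)

a, b = F(0), F(-2)                         # Bachet's curve y^2 = x^3 - 2
P = (F(3), F(5))                           # one known rational point

x, y = third_point(P, P, a, b)             # tangent construction
assert y * y == x ** 3 + a * x + b         # gives (129/100, 383/1000), again rational

Q = (x, -y)                                # reflect, then draw a secant through P and Q
x, y = third_point(P, Q, a, b)
assert y * y == x ** 3 + a * x + b         # yet another rational point
print(x, y)
```

Iterating the construction (reflecting each new point and drawing another chord) produces rational points with ever larger numerators and denominators, which is the group-law behaviour the later chapters discuss.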
The German edition adds supplementary material including a report by Joseph H. Silverman on progress towards a proof of Fermat's Last Theorem. An updated version of the same material was included in the English translation.
Audience and reception
Very little mathematical background is needed to read this book.
Despite "qualms about Bashmakova's historical claims", reviewer David Graves writes that "a wealth of material, both mathematical and historical, is crammed into this remarkable little book", and he recommends it to any number theorist or scholar of the history of mathematics. Reviewer Alan Osborne is also positive, writing that it is "well-crafted, ... offering considerable historical information while inviting the reader to explore a great deal of mathematics."
References
Diophantine equations
Books about the history of mathematics
1972 non-fiction books
1974 non-fiction books
1997 non-fiction books | Diophantus and Diophantine Equations | [
"Mathematics"
] | 810 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
63,754,763 | https://en.wikipedia.org/wiki/COVID-19%20Immunity%20Task%20Force | The COVID-19 Immunity Task Force (CITF) is one of the Government of Canada's early efforts to track the 2020 coronavirus pandemic. An external, dedicated secretariat was established in order to maximize the efficiency of the CITF's work.
Purpose
The CITF was to use serology "to survey representative samples of the population for the presence of antibodies to the virus". Trudeau's press release of 23 April 2020 on the initiation of the CITF listed several goals the task force would help to achieve.
A Vaccine Surveillance Reference Group (VSRG) was also established within the CITF to monitor the safety and effectiveness of COVID-19 vaccines made available in Canada.
Task Force membership
The CITF Board is composed of doctors, infectious disease experts, and policy makers.
Leadership Group
Executive Committee
David Naylor, Co-chair
Catherine Hankins, Co-chair
Timothy Evans, Executive Director
Heather Hannah
Mona Nemer
Howard Njoo
Gina Ogilvie
Jutta Preiksaitis
Gail Tomblin Murphy
Paul Van Caeseele
Government of Canada representatives
Theresa Tam, Chief Public Health Officer of Canada
Mona Nemer, Chief Science Advisor of Canada
Stephen Lucas, Deputy Minister of Health of Canada
Members
The CITF leadership group expanded on 2 May 2020. Its additional members as of March 2022 are:
Provincial & Territorial representatives
Shelly Bolotin, Ontario
Marguerite Cameron, Prince Edward Island
Catherine Elliott, Yukon
Richard Garceau, New Brunswick
Heather Hannah, Northwest Territories
Mel Krajden, British Columbia
Christie Lutsiak, Alberta
Richard Massé, Quebec
Jessica Minion, Saskatchewan
Michael Patterson, Nunavut
Gail Tomblin Murphy, Nova Scotia
Paul Van Caeseele, Manitoba
References
External links
Covid-19 Test Kits
Serology
Blood tests
Epidemiology
Immunologic tests
Funding bodies of Canada
COVID-19 pandemic in Canada
National responses to the COVID-19 pandemic
Health Canada
Clinical pathology
Scientific organizations based in Canada
Scientific organizations established in 2020
2020 establishments in Canada
Task forces established for the COVID-19 pandemic | COVID-19 Immunity Task Force | [
"Chemistry",
"Biology",
"Environmental_science"
] | 425 | [
"Blood tests",
"Immunologic tests",
"Epidemiology",
"Chemical pathology",
"Environmental social science"
] |
63,755,186 | https://en.wikipedia.org/wiki/Bicycle%20counter | Bicycle counters are electronic devices that detect the number of bicycles passing by a location for a certain period of time. Some advanced counters can also detect the speed, direction, and type of bicycles. These systems are sometimes referred to as bicycle barometers, but the term is misleading because it indicates the measurement of pressure. Most counting stations only consist of sensors, the internal computing device, although some use a display to show the total number of cyclists of the day and the current year. There are counting stations all over the world in over hundreds of cities, for example in Manchester, Zagreb, or Portland. The first bicycle counting station was installed in Odense, Denmark, in 2002.
Persuasive aspects
Bicycle counters are mainly installed to assist city planning with reliable data on the development of bicycle usage. Bicycle counting stations are said to raise awareness of cycling as a mode of transportation, encourage more people to use their bicycles, and give cyclists acknowledgement. There has been no representative study on the impact of bicycle counters on citizens or passers-by, but there are early empirical clues that urban visualizations can "become appropriate communication media for sharing, discussing, and co-producing socially relevant data".
To increase visibility, bicycle counters are mostly installed at positions with high traffic volume and visibility to a range of road users.
They have been called urban visualizations and fulfill certain criteria of ambient intelligence, such as being embedded, context-aware and adaptive. Bicycle counting stations can be described as persuasive technology.
"Through sensing technology, a display can act as a tool that increases the capability to capture a behavior (e.g., measuring residential energy consumption, bicycle use, etc.); through its visual imagery, it can function as a medium that provides useful information, such as behavioral statistics or cause-and-effect relationships; and through its networking ability, it can become a social actor, encouraging community-based feedback and social interaction".
Technical setup
Different techniques are used for detection of bicycles, such as built in induction loops, piezoelectric strips, pneumatic hoses, infrared sensing or cameras. Different setups provide different advantages such as more precise counting, battery life, reduced costs or differentiation between different road users such as cyclists, pedestrians or cars. Independent testing has shown that pneumatic tubes can record with over 95% accuracy and piezoelectric sensors reach 99% accuracy. Manufacturers state a 90% precision for induction loops.
Data
Unlike manual counting or other bicycle-related interventions or citizen science, where citizens manually enter data, bicycle counting stations generate citizen-related data automatically. Automatic counting systems are said to be cheaper than manual counting by people. Because of the use of communication technology in the urban context, bicycle counters can be counted as smart city technology, urban informatics or urban computing. Most of the organizations that install bicycle counters provide the number of cyclists as open data.
Criticism
There has been criticism of the precision of the counting and of the cost of bicycle counters (€14,000–31,000) as a waste of tax money.
See also
Cities such as Bonn and Lahti have highlighted cyclists who registered as a round-number count (such as the 100,000th).
Cycling barometer is also the name of a ranking by the European Cyclists' Federation for the most bicycle-friendly nations in the EU.
There has been creative use of the data generated by counting stations, such as an information design poster which includes number of daily cyclists, precipitation and temperature.
Gallery
References
Cycling infrastructure
Road traffic management
Bicycle transportation planning
Road transport
Counting instruments | Bicycle counter | [
"Mathematics",
"Technology",
"Engineering"
] | 719 | [
"Numeral systems",
"Counting instruments",
"Measuring instruments"
] |
75,216,955 | https://en.wikipedia.org/wiki/Stigmidium%20cerinae | Stigmidium cerinae is a species of lichenicolous (lichen-dwelling) fungus in the family Mycosphaerellaceae. It was formally described as a new species in 1994 by mycologists Claude Roux and Dagmar Triebel. The type specimen was collected in Austria from the apothecia of the muscicolous (moss-dwelling) species Caloplaca stillicidiorum. It infects lichens in the genus Caloplaca, and more generally, members of the family Teloschistaceae. Infection by the fungus results in bleaching of the host hymenium.
Description
Stigmidium cerinae is distinguished by its globular to slightly elongated ascomata, which are exceptionally dark, glossy, and appear in abundance, ranging from 6 to 60 on the apothecia of the lichen host. These ascomata partially or fully darken the apothecia of the host, appearing embedded to varying degrees. The wall of the ascomata has a deep rufous-brown hue, with the upper portion appearing darker compared to the lighter lower part. This structure measures between 5 and 10 μm in thickness and consists of cells with a similarly coloured wall, which are internally coated with very fine brown pigment granules.
The cell lumina within the ascomata wall are distinguishable, with sizes varying in tangential and vertical planes. The internal structures within the ascomata are well defined and visible. The asci, which house the spores, have a club-like shape and are almost sessile or bear a short stalk. As for the ascospores, they initially appear colourless, turning light brown towards the end of their lifecycle, possibly when they are dead. These spores are long and narrow, typically three to four times as long as they are wide. They possess a thin wall and an outer coating that is barely discernible. The cells within the spores are nearly equal, containing two large oil droplets.
In addition to the reproductive ascomata, Stigmidium cerinae also features conidiomata, albeit infrequently observed. These structures are globular and consist of a light brown wall made up of cells. The conidia generated are small in size. Vegetative hyphae are present, colourless, and hardly visible without staining, scattered throughout the hymenium and subhymenium of the host.
Distribution
The fungus has been recorded from several localities: Austria, Germany, Italy, Switzerland, Taymyr Peninsula in the Far North of Russia, the East Siberian Lowland, Romania, and Slovenia. Although it was reported from North America in 2001, these sightings were later revised to represent the species Stigmidium epistigmellum.
References
Mycosphaerellaceae
Fungi described in 1994
Fungi of Europe
Lichenicolous fungi
Taxa named by Claude Roux
Taxa named by Dagmar Triebel
Fungus species | Stigmidium cerinae | [
"Biology"
] | 618 | [
"Fungi",
"Fungus species"
] |
75,217,515 | https://en.wikipedia.org/wiki/Ocedurenone | Ocedurenone, formerly known as KBP-5074, is a nonsteroidal, selective mineralocorticoid receptor antagonist that is being developed to treat hypertension in patients with chronic kidney disease with less risk of hyperkalemia than existing treatments. In 2023, KPB Biosciences entered into talks to sell the drug to Novo Nordisk for USD$1.3 billion. It is a small molecule drug administered orally and is in a Phase III trial that is scheduled to complete in 2024.
References
Antimineralocorticoids
Antihypertensive agents
Piperidines
Benzonitriles
Chloroarenes | Ocedurenone | [
"Chemistry"
] | 139 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,217,568 | https://en.wikipedia.org/wiki/298%20%28number%29 | 298 is the natural number following 297 and preceding 299.
In mathematics
298 is an even composite number with two prime factors.
298 is a nontotient and a noncototient, meaning that neither φ(x) = 298 nor x − φ(x) = 298 has a solution.
298 is the number of polynomial symmetric functions of a matrix of order 6 with separate row and column permutations.
298 is a number n for which 6n + 1 and 6n − 1 are both prime.
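Both totient properties can be checked by brute force. The sketch below uses a sieve for Euler's totient; the search bound relies on the facts that φ(x) > √x for x > 6 and that the cototient of a composite x is at least √x (primes have cototient 1):

```python
LIMIT = 298 ** 2 + 1            # safe search bound for both equations (see above)
phi = list(range(LIMIT))
for p in range(2, LIMIT):
    if phi[p] == p:             # p is prime: apply the factor (1 - 1/p)
        for k in range(p, LIMIT, p):
            phi[k] -= phi[k] // p

assert all(phi[x] != 298 for x in range(1, LIMIT))       # nontotient
assert all(x - phi[x] != 298 for x in range(1, LIMIT))   # noncototient
```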
References
Integers | 298 (number) | [
"Mathematics"
] | 101 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,217,983 | https://en.wikipedia.org/wiki/299%20%28number%29 | 299 is the natural number following 298 and preceding 300.
In mathematics
299 is an odd composite number with two prime factors.
299 is a highly cototient number, meaning that the equation x − φ(x) = 299 has more solutions than the corresponding equation for any smaller number greater than one.
299 is a self number, meaning that it cannot be written as n plus the sum of the digits of n for any natural number n.
299 is the twelfth cake number, the maximum number of pieces obtainable from 12 planar cuts of a cube-shaped cake.
299 is a brilliant number, meaning that it is the product of two primes having the same number of digits.
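The self-number, cake-number and brilliant-number claims are all easy to check directly; a minimal sketch:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

# Self number: no generator n with n + digit_sum(n) = 299 exists.
assert all(n + digit_sum(n) != 299 for n in range(1, 299))

# Cake number: n planar cuts yield at most (n^3 + 5n + 6) / 6 pieces.
assert (12 ** 3 + 5 * 12 + 6) // 6 == 299

# Brilliant number: 13 and 23 are both two-digit primes.
assert 13 * 23 == 299
```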
References
Integers | 299 (number) | [
"Mathematics"
] | 118 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,218,296 | https://en.wikipedia.org/wiki/Selenium%20tetraazide | Selenium tetraazide is an inorganic chemical compound with the formula . It is a highly sensitive explosive, and has been prepared directly from selenium tetrafluoride and trimethylsilyl azide.
Properties
Selenium tetraazide is a yellow solid which readily precipitates due to its low solubility. The compound is very susceptible to combustion even at low temperatures, and was found to be stable only at −50 degrees Celsius.
References
azide
selenium | Selenium tetraazide | [
"Chemistry"
] | 107 | [
"Explosive chemicals",
"Azides",
"Inorganic compounds",
"Inorganic compound stubs"
] |
75,219,014 | https://en.wikipedia.org/wiki/301%20%28number%29 | 301 is the natural number following 300 and preceding 302.
In mathematics
301 is an odd composite number with two prime factors.
301 is the Stirling number of the second kind S(7, 3), the number of ways to partition 7 objects into 3 non-empty sets.
301 is the sum of consecutive primes 97, 101, and 103.
301 is a happy number, meaning that repeatedly taking the sum of the squares of its digits eventually results in 1.
301 is a lazy caterer number, meaning that it is the maximum number of pieces produced by cutting a circle with 24 straight cuts.
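Each of these facts can be confirmed in a few lines; the Stirling number follows from the standard recurrence S(n, k) = k·S(n − 1, k) + S(n − 1, k − 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

assert stirling2(7, 3) == 301            # 7 objects into 3 non-empty sets
assert 97 + 101 + 103 == 301             # sum of three consecutive primes
assert (24 ** 2 + 24 + 2) // 2 == 301    # lazy caterer: (n^2 + n + 2)/2 at n = 24

def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

assert is_happy(301)
```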
References
Integers | 301 (number) | [
"Mathematics"
] | 127 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,219,053 | https://en.wikipedia.org/wiki/302%20%28number%29 | 302 is the natural number following 301 and preceding 303.
In mathematics
302 is an even composite number with two prime factors.
302 is the number of prime partitions of 40, meaning that there are 302 ways to write 40 as a sum of prime parts.
302 is a happy number, meaning that repeatedly taking the sum of the squares of its digits will eventually result in 1.
302 is a nontotient, meaning that it is an even number for which the equation φ(x) = 302 has no solutions.
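The prime-partition count is a small dynamic-programming exercise; this sketch counts multisets of primes summing to 40:

```python
def prime_partitions(target):
    primes = [p for p in range(2, target + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    ways = [1] + [0] * target        # ways[s]: partitions of s into prime parts
    for p in primes:                 # primes outermost, so part order is ignored
        for s in range(p, target + 1):
            ways[s] += ways[s - p]
    return ways[target]

assert prime_partitions(40) == 302   # matches the count quoted above
```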
References
Integers | 302 (number) | [
"Mathematics"
] | 106 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,219,857 | https://en.wikipedia.org/wiki/Crisugabalin | Crisugabalin (HSK16149) is a selective GABA analog in development for the treatment of chronic pain. It has a wider therapeutic index than pregabalin, which as a similar mechanism of action. In China, it was approved in 2024 for the treatment of diabetic peripheral neuropathic pain. In the United States, it is in Phase III trials as of 2023. The drug can be administered with or without food.
See also
List of investigational analgesics
References
GABA analogues | Crisugabalin | [
"Chemistry"
] | 111 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,219,908 | https://en.wikipedia.org/wiki/Xeligekimab | Xeligekimab (GR1501) is a monoclonal antibody that neutralizes interleukin-17A; it is being developed for plaque psoriasis, axial spondyloarthritis, and lupus nephritis. It is in a Phase III trial in 2023.
References
Monoclonal antibodies
Disease-modifying antirheumatic drugs | Xeligekimab | [
"Chemistry"
] | 80 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,220,295 | https://en.wikipedia.org/wiki/Sotatercept | Sotatercept, sold under the brand name Winrevair is a medication used for the treatment of pulmonary arterial hypertension. It is an activin signaling inhibitor, based on the extracellular domain of the activin type 2 receptor expressed as a recombinant fusion protein with immunoglobulin Fc domain (ACTRIIA-Fc). It is given by subcutaneous injection.
The most common side effects include headache, epistaxis (nosebleed), rash, telangiectasia (spider veins), diarrhea, dizziness, and erythema (redness of the skin).
Sotatercept was approved for medical use in the United States in March 2024, and in the European Union in August 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
In the United States, sotatercept is indicated for the treatment of adults with pulmonary arterial hypertension (PAH, WHO Group 1).
In the European Union, sotatercept, in combination with other pulmonary arterial hypertension therapies, is indicated for the treatment of pulmonary arterial hypertension in adults with WHO Functional Class (FC) II to III, to improve exercise capacity.
Side effects
The most common adverse reactions include headache, epistaxis, rash, telangiectasia, diarrhea, dizziness, and erythema.
Sotatercept causes increases in hemoglobin (red blood cells). High concentrations of red blood cells in blood may increase the risk of blood clots. Sotatercept causes decreases in platelet count, which can result in bleeding problems.
Based on findings in animal studies, sotatercept may impair female and male fertility and cause fetal harm when administered during pregnancy.
History
The US Food and Drug Administration (FDA) approved sotatercept based on evidence of safety and effectiveness from a clinical trial of 323 participants with PAH (WHO group 1 functional class II or III). The trial was conducted at 126 sites in 21 countries: Argentina, Australia, Belgium, Brazil, Canada, the Czech Republic, France, Germany, Israel, Italy, Mexico, the Netherlands, New Zealand, Poland, Serbia, South Korea, Spain, Sweden, Switzerland, the United Kingdom, and the United States. The study included 88 participants inside the United States (43 in the sotatercept group and 45 in the placebo group).
Society and culture
Legal status
Sotatercept was approved for medical use in the United States in March 2024. The FDA granted the application breakthrough therapy designation.
In June 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Winrevair, intended for the treatment of pulmonary arterial hypertension. The applicant for this medicinal product is Merck Sharp & Dohme B.V. Sotatercept was approved for medical use in the European Union in August 2024.
Economics
Following its approval in 2024, Winrevair was priced as single-vial and double-vial kits, with an estimated annual cost of $240,000.
Names
Sotatercept is the international nonproprietary name.
Sotatercept is sold under the brand name Winrevair.
Research
It was initially developed to increase bone density but during its early development was found to increase hemoglobin and red blood cell counts, and was subsequently studied for use in anemia associated with multiple conditions including beta thalassemia and multiple myeloma. Development of this drug was superseded by the development of luspatercept (Reblozyl), a modified activin receptor type 2B (ACTRIIB-Fc) based ligand trap with improved properties for anemia. Hypothesizing that this drug might block the effects of activin in promoting pulmonary vascular disease, this molecule was found to inhibit vascular obliteration in multiple models of experimental pulmonary hypertension, providing rationale to reposition sotatercept for PAH in the PULSAR and STELLAR clinical trials for PAH.
References
Further reading
External links
Antihypertensive agents
Drugs developed by Merck & Co.
Peptides
Orphan drugs | Sotatercept | [
"Chemistry"
] | 897 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
75,221,052 | https://en.wikipedia.org/wiki/Spinach%20%28software%29 | Spinach is an open-source magnetic resonance simulation package initially released in 2011 and continuously updated since. The package is written in Matlab and makes use of the built-in parallel computing and GPU interfaces of Matlab.
The name of the package whimsically refers to the physical concept of spin and to Popeye the Sailor who, in the eponymous comic books, becomes stronger after consuming spinach.
Overview
Spinach implements magnetic resonance spectroscopy and imaging simulations by solving the equation of motion for the density matrix $\hat{\rho}(t)$ in the time domain:

$$\frac{d}{dt}\hat{\rho}(t) = -i\hat{\hat{L}}\,\hat{\rho}(t),$$

where the Liouvillian superoperator $\hat{\hat{L}}$ is a sum of the Hamiltonian commutation superoperator $\hat{\hat{H}}$, relaxation superoperator $\hat{\hat{R}}$, kinetics superoperator $\hat{\hat{K}}$, and potentially other terms that govern spatial dynamics and coupling to other degrees of freedom:

$$\hat{\hat{L}} = \hat{\hat{H}} + i\hat{\hat{R}} + i\hat{\hat{K}} + \cdots$$
Computational efficiency is achieved through the use of reduced state spaces, sparse matrix arithmetic, on-the-fly trajectory analysis, and dynamic parallelization.
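Spinach itself is driven from Matlab, but the Liouville-space formalism above is easy to illustrate independently. The following NumPy sketch (not Spinach code; the uniform damping term is an invented stand-in for a real relaxation superoperator) propagates a vectorized single-spin density matrix and records a damped free-induction decay:

```python
import numpy as np
from scipy.linalg import expm

# One spin-1/2 in Liouville space: rho is vectorized, and the commutator
# [H, rho] becomes the matrix (H (x) 1 - 1 (x) H^T) acting on vec(rho).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
E = np.eye(2)

omega = 2 * np.pi * 5.0                     # offset frequency, arbitrary units
H = omega * sz
H_comm = np.kron(H, E) - np.kron(E, H.T)    # Hamiltonian commutation superoperator
R = -1.0 * np.eye(4)                        # crude uniform damping (toy relaxation)
L = H_comm + 1j * R                         # L = H + iR (no kinetics block here)

dt, n_points = 0.01, 512
P = expm(-1j * L * dt)                      # one-step propagator
rho = (sx + 1j * sy).reshape(-1)            # start from S+ coherence, vectorized
coil = (sx + 1j * sy).reshape(-1)           # detection state

fid = np.empty(n_points, dtype=complex)
for k in range(n_points):
    fid[k] = np.vdot(coil, rho)             # one FID point: <coil|rho(t)>
    rho = P @ rho

spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid)))  # a single Lorentzian line
```

The key step is representing [H, ρ] as H ⊗ 1 − 1 ⊗ Hᵀ acting on vec(ρ), after which time evolution is ordinary matrix-vector algebra; this is what makes sparse matrices and GPUs effective for such simulations.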
Standard functionality
As of 2023, Spinach is cited in over 300 academic publications. According to the documentation and academic papers citing its features, the most recent version 2.8 of the package performs:
Time-domain nuclear magnetic resonance (NMR) simulations of:
Standard NMR experiments (DEPT, COSY, NOESY, HSQC, TOCSY, etc.).
Protein and nucleic acid NMR experiments (HNCA, HNCOCA, HNCO, etc.).
Magic angle spinning NMR experiments (CP-MAS, MQMAS, WISE, etc.).
Experiments involving residual dipolar coupling and other residual order effects.
Zero- and ultra-low field experiments, including Earth's field NMR.
Nuclear quadrupole resonance, including overtone NMR.
Time-domain magnetic resonance imaging (MRI) simulations, including:
Standard and user-specified imaging sequences.
Diffusion coefficient and diffusion tensor imaging.
Three-dimensional point-resolved NMR spectroscopy.
Ultrafast spatially encoded NMR spectroscopy.
Time-domain electron spin resonance (ESR) simulations of:
Standard pulsed ESR experiments (HYSCORE, ENDOR, ESEEM, etc.).
Pulsed dipolar spectroscopy (DEER, RIDME, etc.).
Dynamic nuclear polarization for static and spinning samples.
Common models of spin relaxation (Redfield theory, stochastic Liouville equation, Lindblad theory) and chemical kinetics are supported, and a library of powder averaging grids is included with the package.
Optimal control module
Spinach contains an implementation of the gradient ascent pulse engineering (GRAPE) algorithm for quantum optimal control. The documentation and the book describing the optimal control module of the package list the following features:
L-BFGS quasi-Newton and Newton-Raphson GRAPE optimizers.
Spin system trajectory analysis by coherence and correlation order.
Spectrogram analysis of the pulse waveform.
Prefixes, suffixes, keyholes, and freeze masks.
Optimization of cooperative pulses and phase cycles.
Waveform penalty functionals and instrument response.
Dissipative background evolution generators and control operators are supported, as well as ensemble control over distributions in common instrument calibration parameters, such as control channel power and offset.
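To make the GRAPE idea concrete, here is a deliberately minimal toy in Python rather than Matlab: one qubit, one piecewise-constant control on the x-axis, and a finite-difference gradient instead of the exact adjoint-based gradients and quasi-Newton updates that production implementations such as Spinach use:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
rho0 = np.diag([1.0, 0.0]).astype(complex)       # start in |0><0|
target = np.diag([0.0, 1.0]).astype(complex)     # aim for |1><1|
H0, Hc = sz, sx                                  # drift and control operators
dt, n_slices = 0.1, 40

def fidelity(u):
    """Propagate rho0 under the piecewise-constant waveform u, then overlap."""
    rho = rho0
    for uk in u:
        U = expm(-1j * (H0 + uk * Hc) * dt)
        rho = U @ rho @ U.conj().T
    return float(np.real(np.trace(target @ rho)))

rng = np.random.default_rng(1)
u = 0.5 * rng.standard_normal(n_slices)          # random initial waveform
eps, step = 1e-6, 2.0
for _ in range(150):
    f0 = fidelity(u)
    grad = np.array([(fidelity(u + eps * np.eye(n_slices)[k]) - f0) / eps
                     for k in range(n_slices)])
    u += step * grad                             # gradient ascent on fidelity
print(f"fidelity after optimisation: {fidelity(u):.4f}")
```

Even this crude version typically converges to a near-unit-fidelity pulse; the L-BFGS quasi-Newton and Newton-Raphson machinery listed above mainly buys speed and robustness on realistic, larger spin systems.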
References
Computational chemistry software
Physics software | Spinach (software) | [
"Physics",
"Chemistry"
] | 652 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Physics software"
] |
75,221,257 | https://en.wikipedia.org/wiki/Beno%20Rothenberg | Beno Rothenberg (; October 23, 1914, in Frankfurt am Main – March 13, 2012, in Ramat Gan, Israel) was an Israeli photographer, archaeologist, and one of the founders of archaeometallurgy.
Early life and education
Beno Rothenberg was born in a wealthy Hassidic Jewish family in Frankfurt am Main on October 23, 1914. He emigrated to Palestine with his family in 1933, where he started right away his studies of mathematics and philosophy at the Hebrew University of Jerusalem.
Military career; early work as photographer
Three years after his arrival in Palestine, he joined the Hagana. In 1945 he bought a camera, taught himself photography, and became a press photographer. During the Second World War he served with the Royal Air Force Meteorological Service in Egypt. During the 1948 War of Independence, he was assigned as a photographer to an armed brigade under Yitzhak Sadeh.
Photography, philosophy and poetry
Rothenberg is considered one of the important photographers of the last pre-state and early post-independence years of Israel; he resonated with the romantic pioneering spirit of the time and had access to important Israeli personalities, of whom he took remarkable portraits. His artistic talent, coupled with his equal passion for scientific exploration, gave his illustrated books their specific quality and made them very popular. In the same vein, Rothenberg also published a number of notable philosophy articles, along with a book of poetry.
Scientific work
Once he signed on in 1952 as photographer for the archaeological survey of the Negev desert, he entered the field of activity he would dedicate the rest of his life to. For the next several decades, he documented archaeological work in Israel, while also resuming his studies at the University of Frankfurt, where he received his PhD in 1961.
Rothenberg took about 32,000 photos from 1947 to 1957. They are now preserved in the Meitar Collection at the National Library of Israel.
His photography led him to work with American archaeologist Nelson Glueck in the 1950s surveying biblical sites for King Solomon's mines. He became an expedition supervisor and an administrator of the field team. His first major work was a survey of the Sinai Peninsula in 1956. He later worked with Yohanan Aharoni, whose scientific approach influenced Rothenberg and created frictions with Glueck, who was more inclined towards biblical literalism. Rothenberg went on to lead excavations uncovering the expansive Egyptian-controlled ancient copper mines at Timna Valley, part of the Arava Valley in the Negev Desert. The Arava Expedition he headed found a "vast ancient industrial landscape" there, as well as a temple dedicated to the Egyptian goddess Hathor from the 14th–12th centuries BCE, which together overthrew the prevailing view that the mines were founded by King Solomon of biblical fame. The major Arava Expedition was followed, in 1956 and again in 1967–78, by a survey of the Sinai Peninsula, which fundamentally changed what was known about that region.
In 1968, Rothenberg joined American Theodore Wertime, "on a long reconnaissance journey through Turkey, Iran and Afghanistan in search of the origins of pyrotechnology".
Though he worked for many years at Tel Aviv University, he did not get a permanent position there, and in 1973 Rothenberg, together with Mortimer Wheeler, founded the Institute for Archaeo-Metallurgical Studies at University College London to support his work. He partnered with academic institutions in the UK and Germany, establishing archaeometallurgy as an academic field.
Rothenberg trained many students who became leaders in archaeometallurgy. He lectured into his nineties, and gave his last lecture in 2008, at 94. Rothenberg died in Ramat Gan at the age of 97, on March 13, 2012.
Publications
References
External links
Beno Rothenberg's archive
"Photography has taught me two things", article by Ruth Oren
1914 births
2012 deaths
People from Frankfurt
Jewish emigrants from Nazi Germany to Mandatory Palestine
Israeli archaeologists
Israeli photographers
Hebrew University of Jerusalem alumni
20th-century archaeologists
Archaeometallurgists
War photographers
Non-British Royal Air Force personnel of World War II
Mandatory Palestine military personnel of World War II
German Zionists
Haganah members | Beno Rothenberg | [
"Chemistry"
] | 860 | [
"Archaeometallurgy",
"Archaeometallurgists"
] |
75,221,283 | https://en.wikipedia.org/wiki/Appinite | Appinite is an amphibole-rich plutonic rock of high geochemical variability. Appinites are therefore regarded as a rock series comprising hornblendites, meladiorites, diorites, but also granodiorites and granites. Appinites have formed from magmas very rich in water. They occur in very different geological environments. The ultimate source region of these peculiar rocks is the upper mantle, which was altered metasomatically and geochemically before melting.
Etymology
The rock appinite was named after its type locality Appin near Ballachulish in Scotland. Appin was originally called An Appain in Scottish Gaelic. This is derived from Middle Irish apdain or from Old Irish aibit with the meaning of abbey — referring to the ancient abbey on the neighbouring island Lismore.
Definition
Bailey and Maufe (1916) defined appinite originally as
a medium- to coarse-grained, meso- to melanocratic igneous rock, that stands out by conspicuous crystals of hornblende, which are enclosed by a matrix of plagioclase (oligoclase to andesine) and/or orthoclase. Quartz often is present, but can also be absent.
Generally, appinites are plutonic equivalents of calc-alkaline lamprophyres such as vogesite and spessartite.
Introduction
Appinites — often synonymously used for hornblende diorites — are a coeval rock suite of plutonic or subvolcanic igneous rocks with variable chemical compositions, covering ultramafic to felsic igneous rocks. They are characterized in all their lithologies by euhedral hornblende crystals as the dominant mafic mineral. Hornblende mainly appears as big prismatic phenocrysts, but can also be found in the groundmass.
On top appinites have very different textures — featuring planar and linear magmatic fabrics, cumulate textures, intercumulate textures and also poikilitic fabrics. They also can occur as mafic pegmatites and show common mixing and mingling between coeval mafic and felsic magmas. Often they are variably contaminated by the country rocks.
Most appinites crystallize in the presence of an important gas phase. This implies an anomalously water-rich magma including both mantle and meteoric components. The appinite suite therefore offers a unique opportunity to study the role of water in the production and crystallization history of mafic to felsic magmas, and more generally in intrusional processes.
Appinitic intrusions possess a whole gamut of differing plutonic bodies and show very different ways of emplacement. Most of the appinites precede granitic intrusions, but can appear also at the same time. This can be perfectly observed at the Ardara pluton in Donegal. Their emplacement is usually directed by tectonics — especially by important shear zones, who potentially facilitate the rising of the magmas through the crust.
General remarks
In general, appinites appear as relatively small, rather flat intrusional bodies in the crust. Their diameter never exceeds two kilometers, as with the defining appinites in Scotland. Appinites rose along the periphery of granitic plutons and are usually associated with important, deep-reaching faults along which they ascended into higher crustal levels.
Often appinites — and likewise the Scottish appinites — get tied up with active subduction, the formation of granitoids and also the termination of subduction by slab breakoff. In the case of the Scottish appinites it is believed that they only formed once the Iapetus Ocean was closed by continental collision between the southern continental margin of Laurentia and the northwestern side of Eastern Avalonia and that the subduction within Iapetus had stopped.
Yet newer geochronological studies seem to indicate that the relation between subduction, appinite formation and granite magmatism involves a rather lengthy process.
It is also believed that the mafic component of appinites only was able to form once the subducting plate had broken off enabling hot asthenospheric material to flow in through the gap. The asthenospheric extra heat initiated magmas containing juvenile mantle components, but also components of Subcontinental Lithospheric Mantle (SCLM). Furthermore, the magmas show affinities to Shoshonites. The felsic components of appinites are connected to big batholiths with fractional crystallization being the main petrogenetic process. The assimilation of country rocks was of hardly any importance.
Occurrences and ages
Appinites occur more or less worldwide. Temporally, the oldest appinites are 2700 million years old (the Neoarchaean Era); the youngest are of Holocene age. The Neoarchaean appinites are associated genetically with coeval sanukitoids. This is often taken as proof for plate tectonics going back that far in time.
Besides the type locality in the Scottish Caledonides (within the Central Highlands Terrane or Grampian Terrane) appinites also occur in Ireland within and in the vicinity of the Donegal Batholith — especially in association with the Ardara pluton — but also within the Leinster Granite and within the Galway granite batholith.
All these appinites have Silurian ages. Further occurrences in Scotland are found near Loch Lomond and in central Sutherland, which already belongs to the Northern Highlands Terrane. The appinites in the Northern Highlands Terrane are mainly associated with the Ratagain Complex, the Rogart Granite and the Strontian Granite. The appinites from the Rogart Granite and from the Strontian Granite also have Silurian ages and are between 425 and 420 million years old.
So far the oldest known appinites come from northern Michigan. They go back in time roughly 2700 million years and belong to the Northern Complex — a greenstone belt along the southern edge of the Superior craton.
Fairly old appinites are reported from Canada, for instance from the Frog Lake hornblende gabbro situated within the late neoproterozoic Avalon Terrane in Nova Scotia. The Wamsutta Diorite in the White Mountains of New Hampshire also has similarities with appinites. The diorite is 408 million years old and belongs to the Acadian Orogeny.
Younger appinites from the Carboniferous appear near Puebla de Sanabria in the Variscides of northwestern Spain. They are also found in the Avila Batholith. Amongst Variscan occurrences appinites often carry local names like Durbachites (in the Black Forest), Redwitzites (in the Fichtelgebirge), Vaugnerites (in the French Massif Central), and sometimes they also hide under the header High Ba Sr Granitoids (an example being the Rogart Granite in Scotland).
Variscan appinites can also be found in the Southern Alps of Northern Italy. They are associated here with the permian Serie dei Laghi — a rock series of gabbros and granites. The age of these Italian appinites is about 285 million years.
In Asia appinites are known to occur in China and in Tibet.
In China, appinites appear in the Upper Ordovician (495–452 million years) Datong Pluton of the Western Kunlun, and again in the Triassic Laocheng Pluton of the Qinling. During the Upper Permian, appinites formed along the northern edge of the North China Craton (in northwestern Liaoning), and during the Triassic in Heilongjiang (near Duobaoshan), also belonging to the North China Craton.
In the Tibetan Himalaya Appinite-cumulates are found in the Gangdese Batholith of the Lhasa Terrane. These appinites formed during the Upper Triassic and are 220 to 213 million years old. Another appinite association in Tibet occurs near Pengcuolin northwest of Xigazê. It belongs to the southern Lhasa Terrane and is only 51 million years old i.e. Ypresian (Eocene).
Very young examples of appinites come from Iran, like appinites from the Baneh Pluton in the Zagros. These appinites are 40 million years old and stem from the Middle Eocene. They mark the Zagros Suture Zone. At about the same time appinites also formed near Sardasht more to the northwest.
Mineralogy
Appinites consist mainly of amphibole (hornblende) taking up between 50 and 80 volume percent. Anorthite-rich plagioclase with An50-70 reaches about 20 vol. %. The rest is made up of clinopyroxene (5 to 15 vol. %) and olivine (5 to 10 vol. %). Some biotite and occasional phlogopite are also encountered. In more felsic appinites appear alkali feldspar and quartz. Represented amongst the accessory minerals are sphene, ilmenite, zircon and apatite. Allanite can be found in more felsic members.
A special occurrence is myrmekite found in an appinite of the Italian Serie dei Laghi — indicating metasomatic alterations.
Amongst the amphiboles (mainly brown amphiboles, but also some greenish amphiboles) two populations with high and low aluminium content can be differentiated. Tschermakite and magnesiohastingsite are rich in aluminium, whereas magnesiohornblende contains much less. Plagioclase can also be subdivided into two groups — one anorthite-rich with An80-88 and the other anorthite-poor with An36-52. Plagioclase with a high anorthite component is surrounded by amphiboles or mantled by plagioclases with a low anorthite component. Therefore, it can be assumed, that plagioclase crystallized before amphibole. The grain size of amphiboles varies from 2 millimeters to several centimeters.
Plagioclase, olivine and clinopyroxene settled as cumulates, whereas amphiboles grew afterwards as intercumulate crystals which also can show corona textures.
Petrology
Major elements
Amongst the major elements the SiO2 contents of the appinite suite usually vary between 42 and 61 weight %. The rocks are therefore ultramafic, mafic and intermediate in their geochemical composition. Felsic end members can reach up to 72.1 weight % SiO2. The SiO2 contents correspond with the rock types cortlandtite (a melagabbro), hornblendite, hornblende diorite, meladiorite and diorite, the felsic end members with granodiorite till granite.
The Al2O3 contents vary between 13 and 22 weight %. Appinites are metaluminous with A/NK > 1 and A/CNK < 1. The contents of MgO fall between 5 and 16 weight %, and the magnesium numbers generally oscillate between 0.22 and 0.57 (or between 22 and 57). Appinites are magnesian rocks (and not ferroan), because in the relation SiO2 plotted against Fe2O3tot/(Fe2O3tot + MgO) their values are always lower than 0.66. Their magnesium contents are higher than what can be expected from melting of metabasalts, and they approach sanukitoids of modern island arcs. The K2O contents vary between 0.5 and 4.0 weight %, so appinites are calc-alkaline (medium-K and high-K). Strongly differentiated samples can even touch into the shoshonitic field. With a value of 0.3 weight % K2O, the appinite from Kilrean has not been differentiated at all and represents an island arc tholeiite. The ratio Na2O/K2O is rather high in appinites (right up to 5.43) and is similar to Cenozoic adakites, which were produced by the melting of subducted oceanic crust. Accordingly, appinites are a rock suite dominated by sodium.
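The magnesium number and the A/CNK index quoted above are simple molar recalculations of weight-percent oxide analyses. A sketch with entirely hypothetical input values, using FeO for total iron (one common convention):

```python
# Molar masses of the relevant oxides (g/mol)
M = {"MgO": 40.304, "FeO": 71.844, "Al2O3": 101.961,
     "CaO": 56.077, "Na2O": 61.979, "K2O": 94.196}

def mg_number(mgo_wt, feo_wt):
    """Mg# = molar MgO / (molar MgO + molar FeO), from weight-percent oxides."""
    mgo, feo = mgo_wt / M["MgO"], feo_wt / M["FeO"]
    return mgo / (mgo + feo)

def a_cnk(al2o3_wt, cao_wt, na2o_wt, k2o_wt):
    """A/CNK = molar Al2O3 / (CaO + Na2O + K2O); metaluminous rocks give < 1."""
    cnk = cao_wt / M["CaO"] + na2o_wt / M["Na2O"] + k2o_wt / M["K2O"]
    return (al2o3_wt / M["Al2O3"]) / cnk

# Hypothetical appinite-like analysis (weight %):
print(round(mg_number(8.0, 14.3), 2))        # ~0.50, within the quoted 0.22-0.57 range
print(round(a_cnk(16.0, 8.0, 3.5, 1.5), 2))  # ~0.73, i.e. metaluminous (A/CNK < 1)
```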
In the TAS diagram appinites appear mainly in the subalcaline field, but they can extend into the alcaline field. They plot in the fields of basalt, basaltic andesite and andesite, but touch as well the fields of basanite, trachybasalt, basaltic trachyandesite and trachyandesite. The magmatic equivalents are gabbro, gabbroic diorite and diorite, extending towards peridotgabbro, foidgabbro, monzogabbro and monzodiorite. Monzonite hardly ever is realized.
The following table shows major element compositions of several appinites — in comparison with the lamprophyre from Narin-Portnoo:
Trace elements
Amongst the trace elements, the mafic members of appinites show high concentrations of transition metals like nickel (98–288 ppm), chromium (100–810 ppm) and vanadium (179–462 ppm). The large-ion lithophile elements (LILE), for example rubidium, potassium, barium (253–528 ppm), cesium and strontium (415–813 ppm), also have elevated concentrations, as do the light rare-earth elements (LREE). Low in concentration are the heavy rare-earth elements (HREE) and the high field strength elements (HFSE) niobium, tantalum, zirconium, phosphorus, titanium and thorium. Still, the HFSE are more concentrated than in the associated granodiorites and granites. Compared with chondrites, the LREE show an enrichment by factors of 20–200. The HREE fractionation (expressed through the ratio GdN/YbN) shows values between 1.4 and 6.1. A positive europium anomaly is only weakly expressed, and in more felsic appinites the anomaly turns slightly negative (0.96–0.70). The values for yttrium are rather low (17–30 ppm).
The high concentrations in the elements Mg, Ni, Cr and Ba point towards a mantle source region.
Compared with MORBs the elements rubidium, barium, potassium and also cerium are strongly enriched, yet titanium, ytterbium and yttrium are depleted.
The following table shows trace elements of different appinites:
Isotopes
According to Harmon et al. (1984) appinites possess the following εNd-, εSr- and εHf values:
εNd varies between − 8 and + 2 (i. e. between 0.5123 and 0.51275 – in the Serie dei Laghi between 0.5119 and 0.5123 for 143Nd/144Nd)
εSr varies between − 5 und + 10 (i. e. between 0.7044 and 0.711 for 87Sr/86Sr).
εHf(t) in zircon varies between 3.3 and 7.9, but can descend to − 1.7.
Appinites prolong the mantle array into the field of negative εNd. Yet their mafic members plot very close to enriched MORB (EMORB) with εNd = + 2 and 87Sr/86Sr = 0.7048. Their εSr falls slightly above 0.
Whole rock analyses for δ18O delivered values of 6.7 ‰, yet for single minerals values from 4.3 to 6.1 ‰.
The isotopic ratio 206Pb/204Pb varies between 17.9 and 18.4.
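For readers unfamiliar with the epsilon notation used above: εNd expresses the deviation of a measured ¹⁴³Nd/¹⁴⁴Nd ratio from the chondritic uniform reservoir (CHUR) in parts per 10,000, with the present-day CHUR value commonly taken as about 0.512638:

$$\varepsilon_{\mathrm{Nd}} = \left(\frac{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{sample}}}{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{CHUR}}} - 1\right)\times 10^{4}$$

εSr is defined analogously against a uniform reservoir for strontium; broadly, positive εNd values indicate a depleted (mantle-like) source, while negative values point to an enriched or crustal contribution.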
Geochemistry
The geochemical composition of appinites is mainly calc-alkaline, sometimes shoshonitic and rarely tholeiitic. Therefore, appinites resemble shoshonites, shoshonitic lamprophyres, but also magnesian andesites, sanukitoids, adakites and TTG rocks (tonalites, trondhjemites and granodiorites). The TTGs appear especially in the late Archean and during the Paleoproterozoic.
Genesis
The appinites in western Scotland and in northwestern Ireland originated from a gas-rich basaltic magma. The occurrences near Ballachulish are calc-alkaline, belong to the high-K type and evolve towards more continental compositions. In contrast, the Ardara appinites show transitions from calc-alkaline towards tholeiitic compositions and thus evolve towards island arc rocks. The Loch Lomond appinites are intermediate between the two and are ordinary calc-alkaline rocks.
In the appinites from Ballachulish, olivine appears on the liquidus at a depth of about 70 to 80 kilometers, from where the magmas ascended into overlying crustal domains. Their ascent was impeded by structural complications in the folded rocks of the Dalradian Supergroup. Further crystallization then happened under falling temperatures and rather variable gas pressures, caused by explosions within subvolcanic pipes.
Olivine crystallized first, then clinopyroxene, amphibole, mica and plagioclase, creating a progressive rock suite covering ultramafic to felsic compositions.
Experimental and theoretical studies show that, with rising water pressure, the stability field of hornblende expands, restricting the stability fields of olivine and clinopyroxene. The characteristic textures of appinites point to rapid crystal growth. These studies also support a reduction of melt viscosity, whereby ions can be transported more effectively to the sites of mineral growth.
Source region
The general source region of appinitic magmas is estimated to be situated at about 40 kilometers depth, just below the base of the continental crust. From there the magmas ascended and finally stalled at about 15 kilometers depth in upper crustal levels.
The water-bearing, basaltic appinitic magmas probably derive from underplated mafic sources with differing degrees of fractionation and most likely resulted from subduction processes. From the subcontinental lithospheric mantle they rose into the MASH zone (Melting, Assimilation, Storage and Homogenisation) just above the Moho, where they generated copious granitic magmas by partial melting.
It is assumed that, once subduction came to an end, water-bearing magmas rose from the underplated region into middle and upper crustal levels, with 15 kilometers as the uppermost intrusion level (corresponding to a pressure of 0.3 to 0.6 GPa or 3 to 6 kilobar). Here the magmas stalled, differentiated and crystallized under water-saturated conditions.
The granitic magmas also ascended in pulses, exploiting structures in the host rocks that were favourably oriented to the local stress field. Later mafic pulses, however, were hindered in their ascent by structurally higher, already crystallized granitic bodies, which acted as rheological barriers. The appinite magmas were nevertheless able to circumvent these barriers by using deep-reaching faults along the edges of the granitoids as ascent paths. According to this model, appinites provide a direct link to mafic underplating. Their mafic members also offer insights into the formation of granitic batholiths, and more generally into crustal growth beneath island arcs.
Melting
Melting of the appinite source was triggered by the incursion of hot, less viscous asthenospheric material. The incursion was due to slab breakoff after the collision of terranes or after outright continental collision. Another possibility is the opening of a slab window, resulting from the collision of a mid-ocean ridge with a subduction zone.
Mafic appinite magmas can contain a juvenile component. Neodymium isotopes show, however, that an additional SCLM component was involved. Quite often this subcontinental lithospheric mantle component had previously been metasomatized by hot fluids and magmas, and it was underplated by further mafic material during subduction. The composition of the mafic starting magmas can therefore be quite variable, which explains why certain appinite suites have calc-alkaline and others tholeiitic compositions, and why they differ from the shoshonitic type locality.
Some felsic appinite magmas are thought to have formed by anatexis rather than by fractional crystallization.
Overview
The overview centers on the example of the Pengcuolin appinite in the Tibetan Lhasa terrane. In this case the source region is assumed to lie directly above oceanic crust of the Neotethys domain subducting northwards underneath the Tibetan plateau, i.e. Eurasia. The pressure in the source region is estimated at 3.6 GPa, corresponding to a depth of 120 kilometers. This is quite deep compared with the value of 80 kilometers mentioned above; the explanation is crust overthickened by the continental collision of India and Eurasia.
The subcontinental mantle rocks were of lherzolitic composition, more specifically an olivine lherzolite.
The temperatures were estimated at a fairly low 800 °C owing to the subducted oceanic crust. The overlying subcontinental lherzolite was fluxed by fluids rising from the slab, became hydrated and was thereby metasomatized. Incoming asthenospheric material additionally provided heat to the lherzolite, which rose slowly, mainly along deep-reaching tectonic fracture zones. At a pressure of 2.7 GPa, or 90 kilometers depth, the lherzolite had reached a temperature of 1329 °C and started to melt. The primary magma rose quite quickly along faults within the subcontinental mantle. Having traversed the Moho and arrived at 27 kilometers depth (corresponding to a pressure of 0.8 GPa), the melt collected in a first magma chamber. Plagioclase rich in anorthite began crystallizing, and olivine plus pyroxene fractionated. This anorthite-rich appinitic magma kept rising through the lower crust and stagnated once more at 16 kilometers depth (0.5 GPa). By then it had cooled to just above 800 °C and started to crystallize aluminium-rich amphibole and plagioclase depleted in anorthite. The final batch of appinitic magma stalled in the upper crust at a depth of 10 kilometers (0.3 GPa), where the last crystals to form were aluminium-poor amphibole and anorthite-poor plagioclase.
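The pressure-to-depth conversions used in this reconstruction follow the usual lithostatic relation P = ρgh. A short sketch (the mean overburden density of 3,000 kg/m³ is an assumption, not a value from the source):

# Lithostatic depth from pressure, h = P / (rho * g)
RHO = 3000.0   # kg/m^3, assumed average crust/upper-mantle density
G = 9.8        # m/s^2

def depth_km(p_gpa):
    return p_gpa * 1e9 / (RHO * G) / 1000.0

for p in (0.3, 0.5, 0.8, 2.7, 3.6):
    print(f"{p} GPa -> ~{depth_km(p):.0f} km")
# ~10, ~17, ~27, ~92, ~122 km: close to the 10, 16, 27, 90 and 120 km
# stages quoted for the Pengcuolin example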
In the first magma chamber at 27 kilometers depth, heat and additional water helped to produce felsic melts, which also rose into the upper crust and intruded as granitic plutons. The associated granitoids therefore owe their existence to the heat input of the appinites, which enabled lower crustal material to melt anatectically. Appinites can consequently be regarded as midwives of collisional granitoids.
Literature
References
Mafic rocks
Intermediate rocks
Felsic rocks
Subvolcanic rocks | Appinite | [
"Chemistry"
] | 4,960 | [
"Felsic rocks",
"Intermediate rocks",
"Mafic rocks",
"Igneous rocks by composition"
] |
75,222,055 | https://en.wikipedia.org/wiki/NGC%202001 | NGC 2001 (also known as PGC 3518062, 056-SC137, SL 507 and part of LH 64) is an open cluster located in the Dorado constellation and is part of the Large Magellanic Cloud.
Background
It was discovered by James Dunlop on September 27, 1826. Its apparent size is 7 by 3.5 arc minutes. It is also known as GC 1204, h 2888 and Dunlop 178 according to both the cseligman and SEDS websites; Wolfgang Steinicke, however, lists it as Dunlop 136, not Dunlop 178.
It lies at the distance of the Large Magellanic Cloud, around 160 to 165 thousand light-years, and the loose grouping of stars is about 330 to 335 light-years across. NGC 2001 is also listed as part of the Lucke-Hodge stellar association 64, along with ANONb4 and e135.
References
External links
open clusters
2001
3518062
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1826
Discoveries by James Dunlop
056-SC137 | NGC 2001 | [
"Astronomy"
] | 213 | [
"Dorado",
"Constellations"
] |
75,223,282 | https://en.wikipedia.org/wiki/NGC%202003 | NGC 2003 (also known as PGC 3518064, ESO 086-SC006 and SL 526) is a globular cluster located in the Dorado constellation and is part of the Large Magellanic Cloud.
Background
It is not visible to the naked eye and requires a telescope to observe. The cluster is located at a distance of approximately 163,000 light-years from Earth. It was first discovered by John Herschel on 23 November 1834. Its apparent size is about 1.75 by 0.9 arc minutes.
References
External links
Globular clusters
2003
3518064
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1834
Discoveries by John Herschel
086-SC006 | NGC 2003 | [
"Astronomy"
] | 146 | [
"Dorado",
"Constellations"
] |
75,223,286 | https://en.wikipedia.org/wiki/NGC%204123 | NGC 4123 is a modest-sized, strongly-barred spiral galaxy located away in the equatorial constellation of Virgo. It was discovered February 25, 1784 by William Herschel. This is a member of the Virgo cluster, and it belongs to a group of three galaxies. A companion galaxy, NGC 4116, lies at an angular separation of to the southwest. There is no indication of an interaction between the two galaxies. The third member of the group is NGC 4179.
The morphological classification of NGC 4123 is SBx(rs)ab, which indicates this is a spiral galaxy with a central X-shaped bar (SBx) encircled by an incomplete ring structure (rs) and moderate to tightly wound spiral arms (ab). The plane of the galaxy is inclined at an angle of 46.9° to the line of sight from the Earth. It lacks a large spheroidal bulge at the core, showing only a luminous point-like source. Blue knots in the outer spiral arms indicate that star formation is ongoing. The galaxy has a stellar mass of with a star formation rate of . The atomic gas in the galaxy has a mass of .
Radio emission has been detected from an HII nucleus, which is consistent with it having a weak active galactic nucleus. If there is a supermassive black hole at the core, it has an estimated mass of .
References
Further reading
4123
Virgo (constellation)
Barred spiral galaxies
038531
Astronomical objects discovered in 1784
Discoveries by William Herschel
7116
38531 | NGC 4123 | [
"Astronomy"
] | 310 | [
"Virgo (constellation)",
"Constellations"
] |
75,223,552 | https://en.wikipedia.org/wiki/Fotagliptin | Fotagliptin (SAL067) is a DPP-4 inhibitor under development for the treatment of type 2 diabetes. Like other DPP-4 inhibitors, it works by increasing endogenously produced GLP-1 and GIP. In a phase 3 trial it showed similar results as alogliptin.
References
Dipeptidyl peptidase-4 inhibitors
Benzonitriles
Fluoroarenes
Piperidines | Fotagliptin | [
"Chemistry"
] | 92 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,223,644 | https://en.wikipedia.org/wiki/Levilimab | Levilimab is an anti-IL-6 monoclonal antibody initially developed to treat rheumatoid arthritis. In 2020, it was approved as a treatment for COVID-19 in Russia.
References
Monoclonal antibodies
COVID-19 drug development | Levilimab | [
"Chemistry"
] | 55 | [
"Pharmacology",
"Drug discovery",
"Medicinal chemistry stubs",
"COVID-19 drug development",
"Pharmacology stubs"
] |
75,224,036 | https://en.wikipedia.org/wiki/Osoresnontrine | Osoresnontrine (BI-409306) is a phosphodiesterase 9 inhibitor in development for schizophrenia, attenuated psychosis syndrome, and Alzheimer's disease. A preclinical study suggested that it increases memory in rodents.
References
PDE9 inhibitors | Osoresnontrine | [
"Chemistry"
] | 62 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,224,127 | https://en.wikipedia.org/wiki/Phosphodiesterase%209%20inhibitor | Phosphodiesterase 9 inhibitors or PDE9 inhibitors are a class of drugs that work by inhibiting the activity of PDE9. The first compound with this effect, BAY 73-6691, was reported in 2004. PDE9 inhibitors are under investigation for the treatment of obesity, hepatic fibrosis, Alzheimer's disease, schizophrenia, other psychotic disorders, heart failure, and sickle cell anemia. Drug candidates include CRD-733, osoresnontrine, tovinontrine, and PF-04447943. Cannabidiol acts as a PDE9 inhibitor in vitro. There are no PDE9 inhibitors that have been approved as of 2023.
References
PDE9 inhibitors | Phosphodiesterase 9 inhibitor | [
"Chemistry"
] | 157 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,224,457 | https://en.wikipedia.org/wiki/HGH%20Fragment%20176%E2%80%93191 | Human Growth Hormone Fragment 176–191 (hGH frag 176–191) is a peptide fragment of human growth hormone. It has erroneously been presented as a lipolytic peptide fragment based on extrapolations of clinical data pertaining to AOD9604, a modified form of hGH frag 176–191. In contrast to AOD9604, hGH frag 176–191 has not been studied in humans.
References
Further reading
Growth hormones
Peptides | HGH Fragment 176–191 | [
"Chemistry"
] | 97 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
75,224,605 | https://en.wikipedia.org/wiki/307%20%28number%29 | 307 is the natural number following 306 and preceding 308.
In mathematics
307 is the 63rd prime number and an odd prime number. It is an isolated (i.e., not twin) prime, but because 309 is a semiprime, 307 is a Chen prime.
307 is the number of one-sided noniamonds, that is, the number of distinct ways (counting reflections as different) to arrange 9 triangles so that each touches at least one other along an edge.
307 is the third non-palindromic number to have a palindromic square: 307² = 94249.
307 is the number of solid partitions of 7.
307 is one of only 16 natural numbers n for which the imaginary quadratic field ℚ(√−n) has class number 3.
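These properties are easy to verify mechanically. A quick check in Python (assuming the sympy library is available):

# Sanity checks for the statements above.
from sympy import isprime, factorint, prime

assert prime(63) == 307                       # the 63rd prime
assert not isprime(305) and not isprime(309)  # isolated: 307 +/- 2 not prime
assert sum(factorint(309).values()) == 2      # 309 = 3 * 103 is a semiprime,
                                              # making 307 a Chen prime
sq = str(307 ** 2)
assert sq == "94249" and sq == sq[::-1]       # palindromic square
print("all checks pass")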
References
Integers | 307 (number) | [
"Mathematics"
] | 154 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,225,355 | https://en.wikipedia.org/wiki/Mazdutide | Mazdutide (also known as IBI362 or LY3305677) is a dual agonist of the GLP-1 receptor and glucagon receptor. It is an analog of oxyntomodulin (OXM). The drug is developed by Eli Lilly and is currently in multiple Phase III studies.
References
Glucagon receptor agonists
Drugs developed by Eli Lilly and Company
GLP-1 receptor agonists
Peptide therapeutics | Mazdutide | [
"Chemistry"
] | 100 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,225,422 | https://en.wikipedia.org/wiki/Aleniglipron | Aleniglipron (development code GSBR-1290) is a small-molecule GLP-1 agonist developed by Structure Therapeutics. It is delivered orally and is in a Phase II trial as of 2023. In June 2024, Structure Therapeutics reported positive topline data from a Phase 2a obesity study in which GSBR-1290 demonstrated clinically meaningful and statistically significant placebo-adjusted mean weight loss and generally favorable safety and tolerability results.
References
GLP-1 receptor agonists
Experimental diabetes drugs
Organophosphine oxides
Anilines
Imidazoles
Oxadiazoles
Pyrazolopyridines
Indoles
4-Fluorophenyl compounds
Ureas
Carboxamides
Cyclopropyl compounds
Tetrahydropyrans | Aleniglipron | [
"Chemistry"
] | 165 | [
"Organic compounds",
"Ureas"
] |
75,225,594 | https://en.wikipedia.org/wiki/Suzetrigine | Suzetrigine (developmental code name VX-548) is a non-opioid, small-molecule analgesic that works as a selective inhibitor of Nav1.8-dependent pain-signaling pathways in the peripheral nervous system. It is being developed by Vertex Pharmaceuticals and has completed two phase III clinical trials.
Vertex Pharmaceuticals announced in January 2024 that suzetrigine had successfully met several endpoints in its phase III trials. The drug relieved moderate-to-severe post-surgical pain comparable to an opioid–acetaminophen combination. The company hopes that the drug, which operates on peripheral nerves, will avoid the addictive potential of opioids.
Vertex Pharmaceuticals announced on July 30, 2024, that the U.S. Food and Drug Administration had accepted a New Drug Application (NDA) for suzetrigine. The FDA granted suzetrigine priority review and assigned a Prescription Drug User Fee Act (PDUFA) target action date of January 30, 2025. Suzetrigine has already been granted FDA Fast Track and Breakthrough Therapy designations for the treatment of moderate-to-severe acute pain.
Vertex Pharmaceuticals also plans to seek a broad label for peripheral neuropathic pain, citing positive phase II results in painful diabetic peripheral neuropathy. The drug is additionally under development for the treatment of radiculopathy and pain in other contexts.
See also
List of investigational analgesics
References
Analgesics
Experimental drugs
Fluoroarenes
Pyridines
Sodium channel blockers
Trifluoromethyl compounds | Suzetrigine | [
"Chemistry"
] | 329 | [
"Amides",
"Functional groups"
] |
75,225,694 | https://en.wikipedia.org/wiki/HD%20201647 | HD 201647 (HR 8100; Gliese 9726; LTT 8410) is a solitary star located in the southern constellation Microscopium. It is faintly visible to the naked eye as a yellowish-white-hued star with an apparent magnitude of 5.83. The object is located relatively close at a distance of light-years based on Gaia DR3 parallax measurements, but it is receding with a heliocentric radial velocity of . At its current distance, HD 201647's brightness is diminished by 0.11 magnitudes due to interstellar extinction and it has an absolute magnitude of +3.33. It has a relatively high proper motion across the celestial sphere, moving at a rate of 226.331 mas/yr.
HD 201647 has a stellar classification of F5 V, indicating that it is an ordinary F-type main-sequence star that is generating energy via hydrogen fusion at its core. It has 1.28 times the mass of the Sun and 1.47 times the radius of the Sun. It radiates 3.79 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 201647 is slightly metal enriched with an iron abundance of [Fe/H] = +0.06 or 115% of the Sun's. It is estimated to be 916 million years old and it spins modestly with a projected rotational velocity of .
In the discovery paper for Lacaille 8760, HD 201647 was reported to be a variable star varying between magnitudes 5.83 and 5.86 in the visual passband. As of 2004, however, it has not been confirmed to be variable.
References
F-type main-sequence stars
Suspected variables
Microscopium
Microscopii, 55
CD-40 14216
9726
201647
104680
8100
00159670453 | HD 201647 | [
"Astronomy"
] | 388 | [
"Microscopium",
"Constellations"
] |
75,225,735 | https://en.wikipedia.org/wiki/Ivarmacitinib | Ivarmacitinib (SHR0302) is a small molecule drug and selective janus kinase 1 (JAK1) inhibitor. It is being developed for ulcerative colitis, eczema, alopecia areata, and graft-versus-host disease.
References
Janus kinase inhibitors
Pyrrolopyrimidines
Thiadiazoles
Ureas | Ivarmacitinib | [
"Chemistry"
] | 84 | [
"Organic compounds",
"Ureas"
] |
75,225,876 | https://en.wikipedia.org/wiki/BPI-16350 | BPI-16350 is a small molecule CDK4/6 inhibitor that is being studied for the treatment of cancer. It has a similar structure to abemaciclib but is more selective for CDK4/6 over CDK9 according to preclinical research.
References
CDK inhibitors
Piperazines
2-Aminopyridines
Aminopyrimidines
Fluoropyrimidines
Benzimidazoles | BPI-16350 | [
"Chemistry"
] | 88 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,225,956 | https://en.wikipedia.org/wiki/Obicetrapib | Obicetrapib is an experimental CETP inhibitor that is intended to treat dyslipidemia. In a clinical trial, as an add-on to statins, compared with placebo, it decreased concentrations of LDL-C (by up to 51%), apolipoprotein B (by up to 30%) and non-high-density lipoprotein cholesterol (non-HDL-C) (by up to 44%), and increased HDL-C concentration (by up to 165%). As of 2023, it is in a Phase III trial.
History
Obicetrapib was initially developed by Amgen as AMG-899 and was abandoned in 2017. In 2020, Amgen licensed the drug to NewAmsterdam Pharma.
References
CETP inhibitors
Experimental drugs
Carboxylic acids
Trifluoromethyl compounds
Pyrimidines
Ethyl esters
Quinolines | Obicetrapib | [
"Chemistry"
] | 200 | [
"Carboxylic acids",
"Functional groups"
] |
75,226,153 | https://en.wikipedia.org/wiki/Govorestat | Govorestat (AT-007) is an aldose reductase inhibitor and experimental drug to treat galactosemia and sorbitol dehydrogenase deficiency.
After a report circulated on the internet accusing the developer, Applied Therapeutics, of cutting corners in its studies of the drug, the FDA put a hold on it in 2020. Applied Therapeutics said that the report was a fraudulent attempt to manipulate its stock price.
References
Aldose reductase inhibitors
Trifluoromethyl compounds
Benzothiazoles
Carboxylic acids
Pyridazines
Thienopyridines | Govorestat | [
"Chemistry"
] | 124 | [
"Carboxylic acids",
"Functional groups"
] |
75,226,235 | https://en.wikipedia.org/wiki/Efocipegtrutide | Efocipegtrutide (HM15211) is a triple agonist of the glucagon, GIP, and glucagon-like peptide 1 receptors. It is being studied for obesity and nonalcoholic steatohepatitis.
References
GLP-1 receptor agonists
Glucagon receptor agonists
GIP receptor agonists
Peptide therapeutics | Efocipegtrutide | [
"Chemistry"
] | 85 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,226,404 | https://en.wikipedia.org/wiki/Casiraghi%20formylation | In organic synthesis, the Casiraghi formylation is the formation of a salicylaldehyde from a phenol and paraformaldehyde. The reaction requires a strong Brønsted base and a weak Lewis acid, and gives a methanol coproduct:
Formally, it combines the Cannizzaro disproportionation with a directed Friedel-Crafts acylation.
In Casiraghi's original 1978 formulation, Grignard reagents served as both the hindered base and Lewis acid.
Applications include the synthesis of tocopherol derivatives.
References
Addition reactions
Benzaldehydes | Casiraghi formylation | [
"Chemistry"
] | 135 | [
"Chemical reaction stubs"
] |
75,227,434 | https://en.wikipedia.org/wiki/Efpeglenatide | Efpeglenatide is a GLP-1 receptor agonist under development for the treatment of type 2 diabetes and obesity and reducing the risk of cardiovascular incidents in people with these conditions. Its developer is Hanmi Pharmaceutical.
References
GLP-1 receptor agonists
Experimental diabetes drugs
Peptide therapeutics
Amino acids | Efpeglenatide | [
"Chemistry"
] | 65 | [
"Biomolecules by chemical classification",
"Pharmacology",
"Medicinal chemistry stubs",
"Amino acids",
"Pharmacology stubs"
] |
75,228,664 | https://en.wikipedia.org/wiki/Troriluzole | Troriluzole is an experimental medication that has been investigated as a potential treatment for Machado–Joseph disease (MJD), obsessive–compulsive disorder (OCD), and glioblastoma. It is a prodrug formulation of the medication riluzole.
Pharmacology
Pharmacokinetics
While riluzole is typically taken twice daily on an empty stomach, troriluzole may offer once-daily dosing with or without food, along with greater bioavailability.
Research
In 2024, researchers published a study in the Journal of Neurochemistry that reported troriluzole could reverse some early Alzheimer's disease brain changes in mice, reduce harmful glutamate levels, and improve memory and learning abilities.
References
Experimental drugs
Trifluoromethoxy compounds
Benzothiazoles
Acetamides
Amines | Troriluzole | [
"Chemistry"
] | 193 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
75,228,801 | https://en.wikipedia.org/wiki/N-0385 | N-0385 is an experimental small molecule TMPRSS2-inhibitor being investigated for its potential use in the prevention and treatment of COVID-19.
Mechanism of action
N-0385 is thought to have antiviral effects by targeting key proteins involved in the viral entry process, including TMPRSS2, ACE2, and DPP4. By interfering with the interactions between these proteins and the SARS-CoV-2 spike protein, N-0385 effectively blocks the virus from gaining access to host cells. Additionally, N-0385 appears to modify the immune responses and inflammatory pathways associated with the infection by regulating TLR7, NLRP3, and IL-10, potentially reducing the severity of COVID-19 symptoms and reducing tissue damage associated with the infection.
References
COVID-19 drug development
Experimental antiviral drugs
Benzothiazoles
Guanidines
Amides
Sulfonamides
Tripeptides | N-0385 | [
"Chemistry"
] | 194 | [
"Pharmacology",
"Drug discovery",
"Guanidines",
"Functional groups",
"Medicinal chemistry stubs",
"COVID-19 drug development",
"Pharmacology stubs",
"Amides"
] |
75,228,906 | https://en.wikipedia.org/wiki/Chinese%20character%20IT | Chinese character IT is the information technology for computer processing of Chinese characters.
While the English writing system uses a few dozen different characters, the Chinese language needs a much larger character set: there are over ten thousand characters in the Xinhua Dictionary alone. Of the 149,813 characters in the Unicode multilingual character set, 98,682 (about two-thirds) are Chinese. Computer processing of Chinese characters is therefore among the most demanding of any language.
Chinese faces special issues compared to other languages, including the technology of computer input, internal encoding and output of Chinese characters.
Character input
Computer input of Chinese characters is by no means as easy as that of English. English is written with 26 letters and a handful of other characters, each assigned to a key on the keyboard. Chinese could in principle be input in a similar way, but this would involve a huge keyboard with at least thousands of keys, and searching for a character on such a keyboard would be a daunting job.
People did try to 'shrink' the Chinese keyboard by putting multiple characters on one key. That turned the original one-step input procedure into two steps for the writer:
pressing the key for the character group of the target character,
selecting the target character in the group.
The resulting keyboard still remained clumsy: putting more characters on one key means the key must be bigger for the characters to remain recognizable, and selecting a character from a large group is difficult. Additionally, it is not easy to group the characters evenly in a reasonable and easy-to-learn way. Another drawback of a Chinese keyboard for direct whole-character input is its inconsistency with English input.
An alternative way is to encode each Chinese character in English characters, enabling Chinese input on an English keyboard. As a matter of fact, this method has become predominant for Chinese computer input.
The software of an encoding input method includes a character-code table. When an ASCII input code is typed on the English keyboard, the software searches for matching Chinese characters in the table. If multiple characters share the same code, they are presented to the user for selection.
To make the input method easy to learn, encoding must be based on distinctive features in forms, sounds or meanings of Chinese characters. Because the meanings of characters tend to be more abstract and complicated, input encoding is normally based on the sound or form.
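The mechanism of a character-code table can be illustrated in a few lines of Python. This is a toy example with a hand-made table (here with pinyin-style codes), not any real input method's data:

# A miniature character-code table: ASCII input codes map to candidate
# characters; duplicate codes force a selection step by the user.
CODE_TABLE = {
    "xiang": ["香", "想", "向", "象"],   # homophones sharing one code
    "gang":  ["港", "刚", "钢"],
}

def lookup(code):
    candidates = CODE_TABLE.get(code, [])
    return candidates[0] if len(candidates) == 1 else candidates

print(lookup("xiang"))   # ['香', '想', '向', '象'] -> user selects 香
print(lookup("gang"))    # ['港', '刚', '钢']      -> user selects 港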
Sound-based encodings
Sound-based encoding is normally based on an existing Latin character scheme for Chinese phonetics, such as pinyin for Putonghua, and Jyutping for Cantonese. The input code of a Chinese character is its pinyin letter string followed by an optional number representing the tone. For example, the Putonghua pinyin input code of (Hong Kong) is xianggang or xiang1gang3, and the Cantonese Jyutping code is hoenggong or hoeng1gong2, all of which can be easily input via an English keyboard.
In Putonghua pinyin, there are two letters not appearing on the English keyboard: ê and ü. According to the national standard, ê should be represented by 'ea', and ü by 'v' in the pinyin input code. In some Chinese input software ê is also represented as 'e^', and ü as 'u:' or 'uu'.
Popular sound-based input methods in China include Microsoft Pinyin, Sogou Pinyin, Google Pinyin and Jyutping on the mainland and Hong Kong, and bopomofo in Taiwan.
There are a number of advantages for sound-based encoding:
Easy to learn because most Chinese writers have already got a good command of Putonghua and pinyin.
Consistent with Chinese language learning.
Allows simplified and traditional Chinese characters to be input in a similar way.
Allows writing Chinese and English on the same keyboard.
The shortcoming of sound-based encoding lies in its high degree of duplicate encoding, with homophonic Chinese characters sharing the same code. A Chinese character is normally pronounced with one syllable, and Putonghua has only about 400 different syllables disregarding tones, or approximately 1,200 syllables when tones are counted. On the other hand, there are tens of thousands of Chinese characters, so on average each syllable has to cover over 10 characters. This problem can be largely solved by inputting Chinese word by word instead of character by character, because most words in modern Chinese consist of more than one character, and duplicate encoding is much less frequent at the word level. For example, the pinyin of 香港 (Hong Kong) is unique to the word, while either character 香 or 港 shares its pronunciation with many other characters. Another limitation of sound-based Chinese input is that one must know the pronunciation of a Chinese character before it can be input. This issue can be solved by form-based encoding.
Form-based encodings
A Chinese character can alternatively be input according to its form (or shape) and structure. Most Chinese characters can be divided into a sequence of components, each of which is in turn composed of a sequence of strokes in writing order. For example, the character 福 ('good fortune', 'happiness') can be decomposed into the components 礻, 一, 口 and 田.
There are a few hundred basic components, much less than the number of characters. By representing each component with an English letter and putting them in writing order of the character, the input method creator can get a letter string ready to be used as an input code on the English keyboard. Of course the creator can also design a rule to select representative letters from the string if it is too long. For example, in the Cangjie input method, character 疆 ('border') is encoded as "NGMWM" corresponding to components "弓土一田一", with some components omitted.
Stroke-based coding is simpler than component-based coding, but the codes tend to be longer. There are approximately 30-40 distinctive strokes of Chinese characters, usually classified into the five categories heng (一), shu (丨), pie (丿), dian (丶) and zhe (𠃍) for dictionary consultation and for Chinese input on a mobile phone. For Chinese input with an ASCII keyboard, two strokes can be combined to form 5×5 = 25 different pairs for mapping to the English letters. For example, in the input method ZYQ, the stroke pairs '一一, 一丨, 一丿, ..., 𠃍丿, 𠃍丶, 𠃍𠃍' are represented by 'a, b, c, ..., w, x, y' respectively. Popular form-based encoding methods include Wubi on the mainland and Cangjie in Taiwan and Hong Kong.
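The pairing scheme can be sketched in a few lines of Python. The mapping below follows the ordering described above but is illustrative only (it is not the actual ZYQ table), and the padding rule for odd-length stroke sequences is an assumption:

# Toy stroke-pair encoder: 5 stroke classes give 5*5 = 25 pairs,
# mapped to the letters a..y ('z' unused).
STROKES = "一丨丿丶𠃍"                 # heng, shu, pie, dian, zhe
LETTERS = "abcdefghijklmnopqrstuvwxy"

def encode(strokes):
    if len(strokes) % 2:               # assumed rule: pad odd sequences
        strokes += strokes[-1]
    pairs = range(0, len(strokes), 2)
    return "".join(LETTERS[5 * STROKES.index(strokes[i])
                           + STROKES.index(strokes[i + 1])] for i in pairs)

# 十 (ten) is written heng then shu, so the pair 一丨 maps to 'b' here
print(encode("一丨"))   # -> 'b'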
The pros and cons of form-based input methods are complementary to those of sound-based methods. The major advantage of form-based methods lies in their low degree of duplicate encoding, enabling high-speed input of Chinese characters. The major shortcoming is the difficulty of learning them: students normally have to memorize over one hundred components and their corresponding English letters, and in addition must learn the complicated rules for breaking a character into a sequence of components and making a selection among them.
Optical character recognition
Chinese characters can also be input into the computer by optical character recognition (OCR), handwriting recognition and speech recognition based on technology similar to that of English.
Compared with English, Chinese OCR and handwriting recognition is more difficult because there are thousands of commonly-used characters instead of 26 letters. Generally speaking, recognition of printed characters is more accurate than of handwritten ones because their forms are more standardized. There are OCR tools for different fonts, including the popular Song, Kai and Hei. Compared with offline handwriting, online handwriting recognition is more efficient, because the computer not only 'sees' the written character but also the procedure of writing it.
Speech recognition
Speech recognition converts a continuous speech signal into a sequence of words. There are two main problems: variation in the pronunciation of words by different speakers, and the existence of homophones such as 'pair', 'pear' and 'pare' in English, or 攻势, 公式 and 公示 (gong1shi4) in Chinese. Speech recognition relies on corpus-based statistical methods and linguistic rules. A helpful feature of Chinese is that each character is pronounced as one syllable.
Both Chinese character recognition and speech recognition have reached application level. However, neither can guarantee 100% correctness without human proofreading or online character selection.
Intelligent input engines
The most important feature of intelligent input is the application of contextual constraints to candidate character selection. For example, in Microsoft Pinyin, when the user types the input code "daxuejiaoshou", they will get 大学教授 (university professor); when they type "daxuepiaopiao", the computer suggests 大雪飘飘 (heavy snow flying). Though the tone-free pinyin of 大学 and 大雪 is in both cases "daxue", the computer can make a reasonable selection based on the subsequent words.
Intelligent Chinese input also makes use of corpus information and linguistic rules. The computer's selection among ambiguous Chinese characters is not always correct, and further improvement is required.
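A toy version of such contextual selection (the counts below are invented, not a real language model) can be written as:

# Pick the homophone whose pairing with the following word is most
# frequent in a (here hand-made) bigram table.
BIGRAM = {("大学", "教授"): 50, ("大雪", "飘飘"): 30}

def choose(candidates, next_word):
    return max(candidates, key=lambda w: BIGRAM.get((w, next_word), 0))

print(choose(["大学", "大雪"], "教授"))  # -> 大学 (university professor)
print(choose(["大学", "大雪"], "飘飘"))  # -> 大雪 (heavy snow flying)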
Other input
In the Chinese writing system, there are graphemes other than complete Chinese characters, such as punctuation marks (e.g. '。', '、' and '《》'), strokes (e.g. '丿', '𠃍' and '乚'), radicals (e.g. '氵', '宀' and '刂'), and letters used for romanization, such as the vowel letters with diacritics used in pinyin and the Yale romanization of Cantonese (e.g. 'ā', 'á', 'ǎ', 'à').
There are facilities available in Microsoft Windows, Office and on the web which enable the input of almost all of these Chinese auxiliary characters: punctuation marks are included in general Chinese input methods, diacritical pinyin can be input with soft keyboards, strokes and radicals can be input from the Unicode website or by Unicode-character conversion, and special web tools can be applied to input pinyin and other characters. More information on non-logogram input can be found in the paper cited, which includes a list of 280 non-ASCII non-logograms, each annotated with its Unicode code point and an input code of the author's design. It is also possible to input a character in Microsoft Word by typing its Unicode code point and pressing Alt+X.
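The character/code-point correspondence that Alt+X exploits is the same mapping exposed by chr() and ord() in Python, for example:

print(chr(0x9999))      # -> 香
print(hex(ord("港")))    # -> 0x6e2f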
Chinese character encoding for information interchange
Inside the computer each character is represented by an internal code. When a character is sent between two machines, it is in information interchange code. Nowadays, information interchange codes, such as ASCII and Unicode, are often directly employed as internal codes. The following sections will introduce the most important encoding standards used in Chinese information technology, including GB, Big5 and Unicode.
GB
GB stands for Guobiao, "Guojia Biaozhun" (国家标准, or ‘national standard’) in Putonghua, and is the prefix for reference numbers of official standards issued by the People's Republic of China.
The first GB Chinese character encoding standard is GB 2312, released in 1980. It includes 6,763 Chinese characters, with the 3,755 most frequently used ones sorted by pinyin and the rest by radicals (indexing components). GB2312 was designed for simplified Chinese characters; traditional characters which have been simplified are not covered. The code of a character is represented by a two-byte hexadecimal number: for instance, the GB codes of 香 and 港 (香港, Hong Kong) are CFE3 and B8DB respectively. GB2312 is still in use on some computers and the WWW, though newer versions with extended character sets, such as GB13000.1 and GB18030, have been released.
The latest version of GB encoding is GB18030. GB18030 supports both simplified and traditional Chinese characters, and is consistent with Unicode's character set.
Big5
Big5 encoding was designed by five big IT companies in Taiwan in the early 1980s, and has been the de facto standard for representing traditional Chinese in computers ever since. Big5 is popularly used in Taiwan, Hong Kong and Macau.
The original Big5 standard included 13,053 Chinese characters, with none of the mainland's simplified characters. Each character is encoded with a two-byte hexadecimal code, for example 香 (ADBB), 港 (B4E4), 龍 (C073). Chinese characters in the Big5 character set are arranged in radical order.
Extended versions of Big5 include Big-5E and Big5-2003, which include some simplified characters and Hong Kong Cantonese characters.
Unicode
Unicode is the most influential international standard for multilingual character encoding and is consistent with (virtually equivalent to) the standard ISO/IEC 10646. The full version of Unicode represents a character with a 4-byte code, providing a huge encoding space that covers the characters of all the world's languages. The Basic Multilingual Plane (BMP) is the 2-byte kernel of Unicode, with 2^16 = 65,536 code points for important characters of many languages. There are 27,522 characters in its CJKV (China, Japan, Korea and Vietnam) Ideographs area, including all the simplified and traditional Chinese characters in GB2312 and Big5.
In Unicode 15.0, there is a multilingual character set of 149,813 characters, among which 98,682, about two-thirds, are Chinese sorted by Kangxi radicals. Even very rarely-used characters are available. The following are some example characters with their Unicode put in brackets:
H (0048) K (004B), 香 (9999), 港 (6E2F), 龍(9F8D), 龙 (9F99), 龖 (9F96), 龘 (9F98), 𪚥 (2A6A5).
All 5,009 characters of the Hong Kong Supplementary Character Set (HKSCS) are included in Unicode. HKSCS was developed by the Hong Kong government as a collection of locally specific Chinese characters that were not available on computers in the early days, for instance 咗 (already), 嘢 (thing), 脷 (tongue), and 曱甴 (cockroach).
As GB, Big5 and Unicode are used concurrently in Chinese encoding, a computer that interprets a text with an encoding different from its original one will display wrong characters, a phenomenon called "luànmǎ" (garbled code), which occasionally happens on the Web or in emails. This problem is usually solved by manual selection of the encoding or character set (as in Web browsers) or by code conversion beforehand.
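The effect is easy to reproduce in Python (a sketch; the byte values follow the GB codes quoted earlier in this article):

# "Luanma" in miniature: GB2312-encoded bytes decoded as Big5
raw = "香港".encode("gb2312")
print(raw.hex())                              # cfe3b8db, as stated above
print(raw.decode("big5", errors="replace"))   # garbled characters, not 香港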
Unicode is becoming more and more popular. It is reported that UTF-8 (Unicode) is used by 98.1% of all the websites. It is widely believed that Unicode will ultimately replace all other information interchange codes and internal codes, and there will be no more code confusing.
Output
Typefaces
Like English and other languages, Chinese characters are output on printers and screens in different fonts and styles. The most popular Chinese fonts are the Song (宋体), Kai (楷体), Hei (黑体) and Fangsong (仿宋体) families, for example,
汉字字体 (Song)
汉字字体 (Kai)
汉字字体 (Hei or Black)
汉字字体 (FangSong)
Font size
Fonts appear in different sizes. In addition to the international measurement system of points, Chinese characters are also measured by size numbers (called zihao, 字号) invented by an American for Chinese printing in 1859. Table 1 is a list of all the font sizes in numbers available on Chinese version MS Word and their equivalent points.
Table 1: Chinese font sizes in numbers, points and mm
字号 (Number) 点数 (pt) 毫米 (mm) Example
八号 (#8) 5 1.76 中文
七号 (#7) 5.5 1.93 中文
小六号 (#small 6) 6.5 2.28 中文
六号 (#6) 7.5 2.64 中文
小五号 (#small 5) 9 3.16 中文
五号 (#5) 10.5 3.69 中文
小四号 (#small 4) 12 4.22 中文
四号 (#4) 14 4.92 中文
小三号 (#small 3) 15 5.27 中文
三号 (#3) 16 5.62 中文
小二号 (#small 2) 18 6.33 中文
二号 (#2) 22 7.73 中文
小一号 (#small 1) 24 8.44 中文
一号 (#1) 26 9.14 中文
小初号 (#small primary) 36 12.65 中文
初号 (#primary) 42 14.76 中文
This table is particularly useful for Chinese typesetting on computers that do not support font sizes in numbers. For example, the table shows that Chinese size number 3 (三号) is equivalent to 16 points, or 5.62 mm in height.
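Numerically, the mm column matches the traditional American printer's point (1 pt ≈ 0.35146 mm, i.e. 0.013837 in) rather than the modern 1/72-inch desktop point; a quick check in Python:

PT_MM = 0.35146   # traditional printer's point, in mm (an inferred value)

for name, pt in (("#3", 16), ("#5", 10.5), ("#primary", 42)):
    print(f"{name}: {pt} pt = {pt * PT_MM:.2f} mm")
# #3: 5.62 mm, #5: 3.69 mm, #primary: 14.76 mm, matching Table 1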
The image of a Chinese character in a particular font is represented in the computer by a matrix of dots (called dot matrix fonts or bitmapped font) or by outlines (called outline font), again like the case in English.
See also
Chinese character encoding
Chinese Character Code for Information Interchange
Chinese computational linguistics
Japanese language and computers
Unihan
List of CJK fonts
Notes
References
Citations
Works cited
Computational linguistics
IT | Chinese character IT | [
"Technology"
] | 3,679 | [
"Natural language and computing",
"Computational linguistics"
] |
75,229,687 | https://en.wikipedia.org/wiki/Datenklo | The Datenklo (German) or data toilet (English translation) is a portable toilet cubicle which has been re-purposed to provide connectivity at hacker camps. This typically includes Wi-Fi and wired communication such as Ethernet. A major event can be served by many data toilets: for example, in 1999 the Chaos Communication Camp had 17 booths.
History
The original Datenklo or CCC-Modem was an acoustic coupler modem in the early 1980s for which the Chaos Computer Club made plans and schematics available. The moniker 'loo' refers to an ingenious idea in the construction of the device: The use of rubber cuffs, commonly available as plumbing supplies, to connect the audio transducers to a normal telephone receiver. The goal of this acoustically-coupled device was to avoid prosecution for the (illegal at the time) connection of an unlicensed device to the telephone line.
The Datenklo name was subsequently repurposed to describe the use of rented portable toilet cubicles to host communications infrastructure at hacker camps.
Connectivity
The Datenklo provides a temporary wiring closet for the networking equipment required to deliver pervasive connectivity to hacker camp attendees.
Datenklos are interconnected to create the site network, with hacker camp attendees asked to bring their own cables if they would like a wired connection to their tent.
The role of Angels
Hacker camps are generally organized and run by volunteers known as angels, and the Network Operation Centre angels not only create the network as part of site build-up and tear it down afterwards, but also connect attendees' cables to the switches in the Datenklo.
Many other roles exist for angels.
Back to the future
Whilst wireless networking and Internet protocols have become the norm for voice and data communication, the Datenklo provides hacker camp attendees with the opportunity to build and learn about the Plain Old Telephone Service (POTS), turning the Datenklo into a "phone box" of sorts.
References
Sources
Computer networking
Hacker camps | Datenklo | [
"Technology",
"Engineering"
] | 403 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
75,229,858 | https://en.wikipedia.org/wiki/Retrieval-augmented%20generation | Retrieval-Augmented Generation (RAG) is a technique that grants generative artificial intelligence models information retrieval capabilities. It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to augment information drawn from its own vast, static training data. This allows LLMs to use domain-specific and/or updated information.
Use cases include providing chatbot access to internal company data or giving factual information only from an authoritative source.
Process
The RAG process is made up of four key stages. First, all the data must be prepared and indexed for use by the LLM. Thereafter, each query consists of a retrieval, augmentation, and generation phase.
Indexing
Typically, the data to be referenced is converted into LLM embeddings, numerical representations in the form of large vectors. RAG can be used on unstructured (usually text), semi-structured, or structured data (for example knowledge graphs). These embeddings are then stored in a vector database to allow for document retrieval.
Retrieval
Given a user query, a document retriever is first called to select the most relevant documents that will be used to augment the query. This comparison can be done using a variety of methods, which depend in part on the type of indexing used.
Augmentation
The model feeds the relevant retrieved information into the LLM via prompt engineering of the user's original query. Newer implementations can also incorporate specific augmentation modules with abilities such as expanding queries into multiple domains and using memory and self-improvement to learn from previous retrievals.
Generation
Finally, the LLM can generate output based on both the query and the retrieved documents. Some models incorporate extra steps to improve output, such as the re-ranking of retrieved information, context selection, and fine-tuning.
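The four stages can be condensed into a minimal sketch. Everything below is a toy stand-in: the bag-of-words "embedding" replaces a real embedding model, and the llm callable replaces a real LLM API.

import math
from collections import Counter

DOCS = ["RAG augments an LLM prompt with retrieved documents.",
        "Embeddings are stored in a vector database for retrieval."]

def embed(text):                              # stand-in for an embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INDEX = [(embed(d), d) for d in DOCS]                     # 1. indexing

def answer(query, llm):
    q = embed(query)
    best = max(INDEX, key=lambda e: cosine(q, e[0]))[1]   # 2. retrieval
    prompt = f"Context: {best}\n\nQuestion: {query}"      # 3. augmentation
    return llm(prompt)                                    # 4. generation

llm = lambda p: "[model output conditioned on]\n" + p     # hypothetical LLM
print(answer("Where are embeddings stored?", llm))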
Improvements
Improvements to the basic process above can be applied at different stages in the RAG flow.
Encoder
These methods center around the encoding of text as either dense or sparse vectors. Sparse vectors, used to encode the identity of a word, are typically dictionary length and contain almost all zeros. Dense vectors, used to encode meaning, are much smaller and contain far fewer zeros. Several enhancements can be made to the way similarities are calculated in the vector stores (databases).
Performance can be improved with faster dot products, approximate nearest neighbors, or centroid searches.
Accuracy can be improved with Late Interactions.
Hybrid vectors: dense vector representations can be combined with sparse one-hot vectors in order to use the faster sparse dot products rather than the slower dense ones. Other methods can combine sparse methods (BM25, SPLADE) with dense ones like DRAGON.
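A hybrid score of this kind is typically a weighted combination of a sparse (lexical) and a dense (semantic) similarity. A sketch, with an illustrative weight rather than a tuned one:

def sparse_dot(q, d):        # q, d: dicts mapping term -> weight
    return sum(w * d.get(t, 0.0) for t, w in q.items())

def dense_dot(q, d):         # q, d: equal-length float vectors
    return sum(x * y for x, y in zip(q, d))

def hybrid_score(q_sp, d_sp, q_de, d_de, alpha=0.5):
    return alpha * sparse_dot(q_sp, d_sp) + (1 - alpha) * dense_dot(q_de, d_de)

print(hybrid_score({"rag": 1.0}, {"rag": 0.8}, [0.1, 0.9], [0.2, 0.7]))  # 0.725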
Retriever-centric methods
These methods focus on improving the quality of hits from the vector database:
pre-train the retriever using the Inverse Cloze Task.
progressive data augmentation. The method of Dragon samples difficult negatives to train a dense vector retriever.
Under supervision, train the retriever for a given generator: given a prompt and the desired answer, retrieve the top-k vectors and feed them into the generator to obtain a perplexity score for the correct answer, then minimize the KL-divergence between the retrieved vectors' probabilities and the LM likelihoods to adjust the retriever.
use reranking to train the retriever.
Language model
By redesigning the language model with the retriever in mind, a 25-times-smaller network can achieve perplexity comparable to that of its much larger counterparts. Because it is trained from scratch, this method (Retro) incurs the high cost of training runs that the original RAG scheme avoided. The hypothesis is that, by being given domain knowledge during training, Retro needs less focus on the domain and can devote its smaller weight resources to language semantics alone.
It has been reported that Retro is not reproducible, so modifications were made to make it so. The more reproducible version is called Retro++ and includes in-context RAG.
Chunking
Chunking involves various strategies for breaking up the data into vectors so the retriever can find details in it.
Three types of chunking strategies are:
Fixed length with overlap. This is fast and easy. Overlapping consecutive chunks helps to maintain semantic context across chunks (see the sketch after this list).
Syntax-based chunks can break the document up into sentences. Libraries such as spaCy or NLTK can also help.
File format-based chunking. Certain file types have natural chunks built in, and it is best to respect them. For example, code files are best chunked and vectorized as whole functions or classes, HTML files should leave <table> or base64-encoded <img> elements intact, and similar considerations apply to PDF files. Libraries such as Unstructured or LangChain can assist with this method.
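The first strategy is simple enough to show in full (sizes here are in characters for brevity; token-based sizes are more common in practice):

def chunk(text, size=200, overlap=50):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 200                  # a 1,000-character toy document
pieces = chunk(doc)
print(len(pieces), len(pieces[0]))   # 7 chunks; consecutive ones share 50 chars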
Challenges
If the external data source is large, retrieval can be slow. The use of RAG does not completely eliminate the general challenges faced by LLMs, including hallucination.
References
Large language models
Natural language processing
Information retrieval systems
Generative artificial intelligence | Retrieval-augmented generation | [
"Technology",
"Engineering"
] | 1,052 | [
"Generative artificial intelligence",
"Information retrieval systems",
"Natural language processing",
"Information technology",
"Artificial intelligence engineering",
"Natural language and computing"
] |
75,230,915 | https://en.wikipedia.org/wiki/Rachid%20Idrissi | Moulay Rachid Idrissi (; 1939 – October 18, 1971) was a Moroccan nuclear chemist and engineer. Idrissi gained notoriety after his work on the recovery of uranium from phosphates, where he discovered a significant amount of uranium in Moroccan phosphates. Shortly after this discovery, he died in a traffic accident near Rabat, the circumstances of which remain contested.
Early life and education
Moulay Rachid Idrissi was born in 1939 in the Douar Oulad Belhlou near the town of Outat El Haj, near Taza. His family claimed descendance from the Idrisid dynasty.
He studied in the town throughout primary school where he obtained his Certificate of Primary Education before studying at the prestigious Collège d'Azrou. After obtaining his scientific baccalaureate from the collège, he moved to Rabat and continued his studies in chemistry.
As a young man, Idrissi worked on development projects in his hometown of Outat El Haj, coordinating with UNESCO to build a youth house in the village. He had also worked to establish an agricultural cooperative near the Moulouya River and a preparatory high school in the village. In 1970, Idrissi held a ceremony and handed out prizes to outstanding students from his hometown.
With the help of classmate Mohamed Chafik, Idrissi pursued his studies in France. He obtained a Doctorate of Science in nuclear chemistry in 1970 from the Faculty of Science at the University of Paris after doing research regarding electrophilic fluorination of uranium dioxide at the Zoé reactor in Fontenay-aux-Roses. He also obtained a degree in chemical engineering from the National Institute for Nuclear Science and Technology in Saclay.
Scientific career
Throughout his career, he rejected offers made to him by foreign laboratories and other parties. After returning to Morocco, Idrissi gained an interest in politics and was an ardent trade unionist and adopted Third-Worldism. He became a community activist and a politician under the banner of the Socialist Union of Popular Forces.
He became a professor at the Mohammadia School of Engineering and moved to Safi, where he conducted research in a number of laboratories in the late 1960s. Idrissi's field of research focused on the recovery of uranium from phosphates, which was Morocco's biggest export. During his research, he mapped the Ganntour basin and its uranium repartition.
In 1968, he discovered a significant amount of uranium in Moroccan phosphates, which he announced to local press. Idrissi had estimated that about 72 thousand tons of uranium could be extracted annually as a low-cost byproduct from Moroccan phosphates. The media praised his discovery, and he supplied data regarding his findings to the IAEA.
Death and legacy
Rachid Idrissi died on October 18, 1971, in Salé, in a car accident, after being hit by a truck on National Road 15 while crossing a bridge over the Bou Regreg from Rabat on his way to his hometown of Outat El Haj. His sudden death immediately raised suspicions of a political assassination among his entourage. He was buried in the cemetery in Douar El Kchahda, near Outat El Haj.
During a speech at his funeral, engineer Mohamed Ait Kaddour, a colleague of Idrissi, stated that he fell victim to "his involvement in establishing a defense project in the Arab world based on his possession of science, knowledge, and ability". A scholar in Idrissi's hometown, Hajj Mohamed Harmouche, blamed his death on "external parties". Newspaper Al Ittihad Al Ichtiraki claimed that Idrissi had been surveilled by foreign intelligence agencies prior to his death, but this remains unconfirmed.
A posthumous Rachid Idrissi El Ouatati Prize for Criticism was created in May 2023 by the Oboure Cultural Publishing Association in Rabat; the prize crowns works of literary criticism in Morocco and commemorates Idrissi. In December 2023, a symposium on Idrissi's life was held by the Moulay Rachid Idrissi Center for Studies and Research in Outat El Haj.
References
1939 births
1971 deaths
Moroccan chemists
Nuclear chemists
Moroccan engineers
University of Paris alumni
People from Fès-Meknès
Alumni of Collège d'Azrou | Rachid Idrissi | [
"Chemistry"
] | 888 | [
"Nuclear chemists"
] |
75,230,924 | https://en.wikipedia.org/wiki/Diffusion%20metamaterial | Diffusion metamaterials are a subset of the metamaterial family, which primarily comprises thermal metamaterials, particle diffusion metamaterials, and plasma diffusion metamaterials. Currently, thermal metamaterials play a pivotal role within the realm of diffusion metamaterials. The applications of diffusion metamaterials span various fields, including heat management, chemical sensing, and plasma control, offering capabilities that surpass those of traditional materials and devices.
History
In 1968, Veselago introduced the concept of a negative refractive index. Subsequently, John Pendry recognized the potential of artificial microstructures for achieving unconventional electromagnetic properties and conducted pioneering research on metal wire arrays and split-ring structures. His groundbreaking contributions ignited a surge of interest in the field of electromagnetic and optical metamaterials. Researchers began to focus on manipulating transverse waves through metamaterials, behavior governed by Maxwell's equations, which serve as wave equations.
In 2000, Ping Sheng unveiled the phenomenon of local resonance in sonic materials, which possess longitudinal wave properties. This discovery expanded the horizons of metamaterial research to encompass other wave systems. This extension included control equations such as the acoustic wave equation and elastic wave equation.
In 2008, Ji-Ping Huang extended the application of metamaterials to thermal diffusion systems. His initial research focused on steady-state heat conduction equations. Using transformation theory, he introduced the concept of thermal cloaking. In 2013, the application of metamaterials was further extended to particle diffusion systems, with the first proposal of particle diffusion cloaking under low diffusivity conditions. Subsequently, in 2022, metamaterials were applied to plasma diffusion systems, where transformation theory was used to design functional devices capable of showcasing several novel phenomena, including cloaking.
Contemporary researchers can categorize the realm of metamaterials into three primary branches, each defined by its governing equations: electromagnetic and optical wave metamaterials which involve Maxwell's equations for transverse waves; other wave metamaterials which involve various wave equations for longitudinal and transverse waves; and diffusion metamaterials which involve the diffusion processes described by diffusion equations. In diffusion metamaterials, which are designed to control a variety of diffusion behaviors, the key measurement is the diffusion length. This metric varies over time yet remains unaffected by frequency changes. On the other hand, wave metamaterials, engineered to alter different modes of wave travel, rely on the wavelength of incoming waves as their critical dimension. This value is constant over time but shifts with frequency. Essentially, the fundamental metric for diffusion metamaterials is distinctly different from that of wave metamaterials, revealing a relationship of complementarity between them.
Basic theory
Transformation theory
It denotes a theoretical methodology that links spatial geometric structural parameters with physical properties such as thermal conductivity. This is achieved through the application of coordinate transformations between two separate spatial domains. Its roots can be traced back to the realm of transformation optics, originally conceived for wave systems.
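For orientation (this is the general rule as commonly written in the transformation-theory literature, not a formula quoted from this article's sources): under a coordinate map x → x′ with Jacobian J = ∂x′/∂x, a conductivity-like material tensor transforms as

\kappa'(\mathbf{x}') = \frac{J\,\kappa(\mathbf{x})\,J^{\mathsf{T}}}{\det J}

Choosing a map that opens a hole in space while leaving its exterior unchanged then yields the conductivity profile of a cloak.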
Diffusion equations
Diffusion metamaterials can also be designed by explicitly solving the relevant diffusion equations, such as the thermal conduction equation, under suitable boundary conditions.
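For reference, the two diffusion equations most commonly meant here are the heat conduction equation and Fick's second law, written in their standard textbook forms (given for illustration; symbols follow common convention rather than any cited source):

\[ \rho c_p \frac{\partial T}{\partial t} = \nabla \cdot (\kappa \nabla T), \qquad \frac{\partial n}{\partial t} = \nabla \cdot (D \nabla n), \]

where T is temperature, κ thermal conductivity, ρc_p volumetric heat capacity, n particle concentration, and D the particle diffusivity. Solving either equation under prescribed boundary conditions yields the field profile that a candidate metamaterial structure must reproduce.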
Effective medium theory
Prominent examples of effective medium theories include the Maxwell-Garnett theory and the Bruggeman theory.
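As a hedged illustration of how such theories are used, the Bruggeman formula for a three-dimensional composite of spherical grains determines the effective thermal conductivity κ_e implicitly from the volume fractions f_i and conductivities κ_i of the components:

\[ \sum_i f_i \, \frac{\kappa_i - \kappa_e}{\kappa_i + 2\kappa_e} = 0. \]

Relations of this kind are inverted in practice to choose a microstructure whose homogenized response matches the conductivity profile demanded by transformation theory.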
Scattering cancellation theory
This method is based on cancelling relevant physical disturbances, such as perturbations of the background temperature field, so that an object produces no detectable signature outside itself.
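A minimal two-dimensional example (a textbook core–shell construction, offered for illustration rather than taken from any specific cited design): a circular core of conductivity κ_c and radius r_1, coated by a shell of conductivity κ_s and outer radius r_2, disturbs a uniform background of conductivity κ_b as if it were a single cylinder of effective conductivity

\[ \kappa_{\mathrm{eff}} = \kappa_s \, \frac{(\kappa_c + \kappa_s) + (\kappa_c - \kappa_s)(r_1/r_2)^2}{(\kappa_c + \kappa_s) - (\kappa_c - \kappa_s)(r_1/r_2)^2}. \]

Choosing κ_s so that κ_eff = κ_b cancels the temperature disturbance outside the shell, rendering the coated object thermally invisible to an external observer.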
Phase transition theory
This method relies on various types of phase transitions and can be employed to craft diffusion metamaterials featuring novel properties, such as a zero-energy-consumption thermostat and thermal meta-terrace.
Computer simulation
It encompasses finite element simulations, machine learning, topology optimization, particle swarm optimization, and similar techniques.
Characteristic length
In accordance with the definition, metamaterials must possess a characteristic length. For example, electromagnetic or optical metamaterials employ the incident wavelength as their characteristic length, and their structural elements are (significantly) smaller than this length. This design principle allows the unique properties of these artificially engineered materials to be understood through the lens of effective medium theory.
Similarly, diffusion metamaterials possess analogous characteristic length scales. Taking thermal metamaterials as an example, the characteristic length for conductive thermal metamaterials is the thermal diffusion length. Convective thermal metamaterials are characterized by the migration length of the fluid, while radiative thermal metamaterials hinge on the wavelength of thermal radiation.
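As a rough quantitative sketch (standard order-of-magnitude relations, not values tied to any particular device), the conductive diffusion length grows with time as

\[ L_d \sim \sqrt{\alpha t}, \qquad \alpha = \frac{\kappa}{\rho c_p}, \]

where α is the thermal diffusivity; under periodic heating at angular frequency ω the analogous penetration depth is μ = \sqrt{2\alpha/\omega}. Structural elements of a conductive thermal metamaterial should remain small compared with these lengths for an effective-medium description to hold.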
Applications
Diffusion metamaterials have found multiple practical applications. In the field of thermal metamaterials, the thermal cloak structure has been utilized for providing infrared thermal protection in underground shelters. Designs of thermal metamaterials have been used in managing heat in electronic devices, and films with radiative cooling have been used in commercial applications.
References
Metamaterials | Diffusion metamaterial | [
"Materials_science",
"Engineering"
] | 988 | [
"Metamaterials",
"Materials science"
] |
75,231,330 | https://en.wikipedia.org/wiki/Western%20values | "Western values" are a set of values strongly associated with the West which generally posit the importance of an individualistic culture. They are often seen as stemming from Judeo-Christian values and the Age of Enlightenment, although since the 20th century they have become marked by other sociopolitical aspects of the West, such as free-market capitalism, feminism, liberal democracy, the scientific method, and the legacy of the sexual revolution.
Background
Western values were historically adopted around the world in large part due to colonialism and post-colonial dominance by the West, and are influential in the discourse around and justification of these phenomena. This has induced some opposition to Western values and spurred a search for alternative values in some countries, though Western values are argued by some to have underpinned non-Western peoples' quest for human rights, and to be more global in character than often assumed. The World Wars forced the West to introspect on the application of its values to itself, as internal warfare and the rise within Europe of the Nazis, who openly opposed Western values, had greatly weakened it; after World War II and the start of the post-colonial era, global institutions such as the United Nations were founded on a basis of Western values.
Western values have been used to explain a variety of phenomena relating to the global dominance and success of the West, such as the emergence of modern science and technology. They have been disseminated around the world through several mediums, such as the spread of Western sports. The global esteem in which Western values are held has been considered by some to be leading to a harmful decline of non-Western cultures and values.
Reception
A constant theme of debate around Western values has been their universal applicability or lack thereof; in modern times, as various non-Western nations have risen, they have sought to oppose certain Western values, with even Western countries backing down to some extent from championing their own values in what some see as a contested transition to a post-Western era of the world. Western values are also often contrasted with the Asian values of the East, which, among other factors, emphasize communitarianism and deference to authority.
The adoption of Western values among immigrants to the West has also been scrutinised, with some Westerners opposing immigration from the Muslim world or other parts of the non-West due to a perceived incompatibility of values; others support immigration on the basis of multiculturalism.
See also
Anti-Western sentiment
Asian values
Eurocentrism
European values
Western education
References
Western culture
Sociology | Western values | [
"Biology"
] | 522 | [
"Behavioural sciences",
"Behavior",
"Sociology"
] |
75,232,253 | https://en.wikipedia.org/wiki/Tremella%20globispora | Tremella globispora is a species of fungus in the family Tremellaceae. It produces hyaline, pustular, gelatinous basidiocarps (fruit bodies) and is parasitic on pyrenomycetous fungi (Diaporthe species) on dead herbaceous stems and wood. It was originally described from England.
Taxonomy
The species was formerly referred to Tremella tubercularia, a nomen novum proposed by Miles Joseph Berkeley when transferring his earlier Tubercularia albida to the genus Tremella (to avoid creating a homonym of Tremella albida Huds.). In 1970, examination of Berkeley's original collections by English mycologist Derek Reid showed, however, that Tremella tubercularia is a gelatinous ascomycete, now known as Ascocoryne albida. Reid therefore described Tremella globispora (as "T. globospora") to accommodate the genuine Tremella species that had previously and mistakenly been referred to T. tubercularia. The type collection from Sussex was on perithecia of Diaporthe eres on dead canes of bramble (Rubus fruticosus).
Description
Fruit bodies are gelatinous, hyaline, pustular, up to 0.5 cm across, but sometimes becoming larger (up to 1 cm across) through confluence. They emerge from the perithecia of their host. Microscopically, the basidia are tremelloid (ellipsoid, with oblique septa), 4-celled, 10 to 18 by 9 to 13 μm. The basidiospores are subglobose, smooth, 6 to 8 by 6 to 7 μm.
Similar species
In Europe, Tremella indecorata, described from Norway, is morphologically very similar, though fruit bodies are said to darken when drying. The type collection was associated with pyrenomycetes (Nitschkia grevillei and a species of Valsaceae) on willow and said to have a spore range of 6.5 to 7.5 μm or 9.5 to 12 by 8.5 to 11 μm. It is not clear if the two species are distinct, though Scandinavian collections identified as T. indecorata are grey to date brown when mature and have larger spores (8.5 to 15 by 8 to 12.5 μm). Tremella karstenii is a similar species parasitic on Colpoma juniperi on juniper. Tremella colpomaticola parasitizes Colpoma quercinum on oak. Tremella subalpina was recently described from rhododendron in Russia.
Outside Europe, Chen considered North American collections as "closely related" to but possibly not conspecific with Tremella globispora. Chen also considered Tremella bambusina, described from the Philippines, as a probable synonym, differing only in its brownish orange colour.
Habitat and distribution
Tremella globispora is a parasite on Diaporthe species and possibly other ascomycetous hosts. It is found on dead, attached or fallen wood and on dead herbaceous stems.
The species was described from England and has been widely reported in Europe. The species has also been reported from Canada and the USA (on Valsa and Diaporthe species) and from the Russian Far East.
References
globispora
Fungi of Europe
Fungi described in 1970
Fungus species
Taxa named by Derek Reid | Tremella globispora | [
"Biology"
] | 751 | [
"Fungi",
"Fungus species"
] |
75,232,289 | https://en.wikipedia.org/wiki/NGC%20855 | NGC 855 is a star-forming dwarf elliptical galaxy located in the Triangulum constellation. It was discovered and first described (as H 26 613) by William Herschel on 26 October 1786, and the findings were made public in his Catalogue of Nebulae and Clusters of Stars, published the same year.
NGC 855's velocity relative to the cosmic microwave background is 343 ± 18 km/s, corresponding to a Hubble distance of 5.06 ± 0.44 Mpc (~16.5 million ly). There is some uncertainty about its precise distance, since two surface brightness fluctuation measurements give a distance of 9.280 ± 0.636 Mpc (~30.3 million ly), well outside the Hubble distance determined from the galaxy's redshift.
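For illustration, the quoted Hubble distance follows from dividing the recession velocity by the Hubble constant; the figures above are consistent with a value of H_0 near 68 km/s/Mpc (assumed here only for the arithmetic, not a value stated by the surveys):

\[ d = \frac{v}{H_0} \approx \frac{343 \ \mathrm{km\,s^{-1}}}{67.8 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 5.06 \ \mathrm{Mpc}. \]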
Star formation
Using infrared data collected from two regions in the center of the galaxy by the Spitzer Space Telescope, astronomers were able to suggest that NGC 855 is a star-forming galaxy. Its HI distribution (neutral atomic hydrogen emission lines) suggests the star-forming activity might have been triggered by a minor merger.
See also
New General Catalogue
List of NGC objects
References
External links
NASA/IPAC Extragalactic Database - Extensive database of NGC objects.
Dwarf elliptical galaxies
Triangulum
855
8557
Astronomical objects discovered in 1786
Discoveries by William Herschel | NGC 855 | [
"Astronomy"
] | 287 | [
"Triangulum",
"Constellations"
] |
65,291,136 | https://en.wikipedia.org/wiki/Tulane%20National%20Primate%20Research%20Center | The Tulane National Primate Research Center (TNPRC) is a federally funded biomedical research facility affiliated with Tulane University. The TNPRC is one of seven National Primate Research Centers which conduct biomedical research on primates. The TNPRC is situated on 500 acres of land in Covington, Louisiana, and originally opened as the Delta Regional Primate Center in 1964. The center uses five species of non-human primates in its research: cynomolgus macaques, African green monkeys, mangabeys, pig-tailed macaques and rhesus macaques. The TNPRC employs over three hundred people and has an estimated economic impact of $70.1 million a year.
Research
The TNPRC has four divisions: Comparative Pathology, Microbiology, Immunology, and Veterinary Medicine. The center investigates diseases including HIV/AIDS, celiac disease, Krabbe disease, leukemia, Lyme disease, respiratory syncytial virus (RSV), rotavirus, tuberculosis, varicella zoster virus (VZV), and Zika virus.
Facilities
The TNPRC is located on 500 acres of land, in unincorporated St. Tammany Parish, Louisiana, with a Covington, Louisiana postal address. In addition to its research facilities, the center has an on-site, Biosafety Level 3 biocontainment laboratory. The TNPRC also operates a large breeding colony of non-human primates.
Breeding colony
The TNPRC operates an on-site breeding colony of 5,000 non-human primates.
Incidents and controversies
In 1998, two dozen rhesus macaques escaped from their cage into the surrounding area of the TNPRC.
In 2005, over 50 monkeys escaped from their cage into the surrounding area of the TNPRC. Four of the primates died or were never found.
In 2006, thirteen baboons were killed after being placed in a crowded chute.
In September 2012, a rhesus macaque was inadvertently left in an unattended vehicle for approximately 22 hours. As a result, the macaque was dehydrated and later died.
In September 2014, a USDA inspection report revealed that several of the animal cages had been kept in unclean and unsanitary conditions.
In November 2014, three macaques in the TNPRC's breeding colony were affected by a biosecurity breach due to staff members not following proper procedure. As a result, the animals were euthanized.
In September 2015, a USDA inspection revealed that personnel at the TNPRC were not following appropriate procedures regarding the criteria for euthanizing animals.
References
External links
TNPRC home page
Primate research centers
Animal testing on non-human primates
Tulane University
Medical research institutes in the United States
Biomedical research foundations
1964 establishments in Louisiana
Research institutes in Louisiana
Covington, Louisiana
Buildings and structures in St. Tammany Parish, Louisiana | Tulane National Primate Research Center | [
"Engineering",
"Biology"
] | 604 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
65,291,784 | https://en.wikipedia.org/wiki/Moto%20E6 | The Moto E6 (stylized by Motorola as moto e6) is the 6th generation of the low-end Moto E family of Android smartphones developed by Motorola Mobility.
Submodels comparison
References
Motorola smartphones
Android (operating system) devices
Mobile phones with user-replaceable battery
Mobile phones with multiple rear cameras | Moto E6 | [
"Technology"
] | 68 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
65,291,864 | https://en.wikipedia.org/wiki/Slater%E2%80%93Pauling%20rule | In condensed matter physics, the Slater–Pauling rule states that adding an element to a metal alloy will reduce the alloy's saturation magnetization by an amount proportional to the number of valence electrons outside of the added element's d shell. Conversely, elements with a partially filled d shell will increase the magnetic moment by an amount proportional to the number of missing electrons. Investigated by the physicists John C. Slater and Linus Pauling in the 1930s, the rule is a useful approximation for the magnetic properties of many transition metals.
Application
The use of the rule depends on carefully defining what it means for an electron to lie outside of the d shell. The electrons outside a d shell are those with higher energy than the electrons within the d shell. The Madelung rule (incorrectly) suggests that the s shell is filled before the d shell; for example, it predicts that zinc has a configuration of [Ar] 4s2 3d10. However, zinc's 4s electrons actually have more energy than its 3d electrons, putting them outside the d shell. Ordered in terms of energy, the electron configuration of zinc is [Ar] 3d10 4s2. (see: the n+ℓ energy ordering rule)
The basic rule given above makes several approximations. One simplification is rounding to the nearest integer. Because the number of electrons in a band is described by an average value, the s and d shells can be filled to non-integer numbers of electrons, allowing the Slater–Pauling rule to give more accurate predictions. While the Slater–Pauling rule has many exceptions, it is often useful as an approximation to more accurate, but more complicated, physical models.
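A minimal numerical sketch of the rule as stated above (an illustrative linear dilution model; the function name, the ~0.6 μB nickel moment, and the zinc example are assumptions chosen for demonstration, not figures from Slater's or Pauling's papers):

def slater_pauling_moment(host_moment, x, n_outside_d=0.0, n_d_holes=0.0):
    """Average alloy moment (Bohr magnetons per atom) under a linear
    Slater-Pauling dilution model; an approximation, not exact theory."""
    # Valence electrons outside the additive's d shell fill the host's
    # d-band holes and lower the moment; holes in a partially filled
    # d shell raise it, each in proportion to the concentration x.
    return host_moment + x * (n_d_holes - n_outside_d)

# Illustrative example: diluting nickel (about 0.6 Bohr magnetons per atom)
# with 10% zinc, whose two 4s electrons lie outside its filled 3d shell.
print(slater_pauling_moment(0.6, 0.10, n_outside_d=2))  # roughly 0.4

The physical content is only the linear trend; real alloys deviate from it, which is one motivation for the generalized rule discussed next.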
Building on further theoretical developments done by physicists such as Jacques Friedel, a more widely applicable version of the rule, known as the generalized Slater–Pauling rule was developed.
See also
Spin states (d electrons)
Ferromagnetism
Metallic bonding
References
Electric and magnetic fields in matter
Magnetism
Electronic band structures | Slater–Pauling rule | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 409 | [
"Electron",
"Materials science stubs",
"Electric and magnetic fields in matter",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"Condensed matter stubs"
] |
65,291,922 | https://en.wikipedia.org/wiki/Moto%20E7 | The Moto E7 (stylized by Motorola as moto e7) is the 7th generation of the low-end Moto E family of Android smartphones developed by Motorola Mobility.
Submodels comparison
References
Android (operating system) devices
Motorola smartphones
Mobile phones introduced in 2020
Mobile phones with multiple rear cameras | Moto E7 | [
"Technology"
] | 65 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
65,293,114 | https://en.wikipedia.org/wiki/A%20History%20of%20the%20Theories%20of%20Aether%20and%20Electricity | A History of the Theories of Aether and Electricity is any of three books written by British mathematician Sir Edmund Taylor Whittaker FRS FRSE on the history of electromagnetic theory, covering the development of classical electromagnetism, optics, and aether theories. The book's first edition, subtitled from the Age of Descartes to the Close of the Nineteenth Century, was published in 1910 by Longmans, Green. The book covers the history of aether theories and the development of electromagnetic theory up to the 20th century. A second, extended and revised, edition consisting of two volumes was released in the early 1950s by Thomas Nelson, expanding the book's scope to include the first quarter of the 20th century. The first volume, subtitled The Classical Theories, was published in 1951 and served as a revised and updated edition to the first book. The second volume, subtitled The Modern Theories (1900–1926), was published two years later in 1953, extended this work covering the years 1900 to 1926. Notwithstanding a notorious controversy on Whittaker's views on the history of special relativity, covered in volume two of the second edition, the books are considered authoritative references on the history of electricity and magnetism as well as classics in the history of physics.
The original book was well-received, but it ran out of print by the early 1920s. Whittaker believed that a new edition should include the developments in physics that took place at the turn of the twentieth century and declined to have it reprinted. He wrote the second edition of the book after his retirement and published The Classical Theories in 1951, which also received critical acclaim. In the 1953 second volume, The Modern Theories (1900–1926), Whittaker argued that Henri Poincaré and Hendrik Lorentz developed the theory of special relativity before Albert Einstein, a claim that has been rejected by most historians of science. Though overall reviews of the book were generally positive, due to its role in this relativity priority dispute, it receives far fewer citations than the other volumes, outside of references to the controversy.
Background
The book was originally written in the period immediately following the publication of Einstein's Annus Mirabilis papers and several years following the early work of Max Planck; it was a transitional period for physics, where special relativity and old quantum theory were gaining traction. The book serves to document the developments of electricity and magnetism before the quantum revolution and the birth of quantum mechanics. Whittaker was an established mathematician by the publication of this book, and he brought unique qualifications to its authorship. As a teacher at Trinity College, beginning after his election as a fellow in 1896, Whittaker gave advanced lectures in spectroscopy, astrophysics, and electro-optics. His first book, Modern Analysis, was initially published in 1902 and remained a standard reference for applied mathematicians. His second major release, Analytical Dynamics, a mathematical physics textbook, was published in 1906 and was, according to Victor Lenzen in 1952, "still the best exposition of the subject on the highest possible level."
Whittaker wrote the first edition in his spare time while he was thirty-seven years old, during which time he was serving as Royal Astronomer of Ireland from 1906 onwards. The post's relative ease allowed him to devote time to reading for the project, which he worked on until its release in 1910. During this same period, Whittaker published the book The theory of optical instruments in 1907, as well as eight papers, six of them in astronomy. He also continued performing fundamental research in analytical dynamics at Trinity College in Dublin throughout this period.
The original version of the book was universally praised and was considered an authoritative reference work in the history of physics, despite its difficulty to obtain past the 1920s. When the first edition of the book ran out of print, there was a long delay before the publication of the revised edition in 1951 and 1953. The delay was due, in Whittaker's own words, to his view that "any new issue should describe the origins of relativity and quantum theory, and their development since 1900". The task required more time than his career as a mathematician allowed for, so the project was put on hold until he retired from his professorship at the University of Edinburgh in 1946.
From the age of Descartes to the Close of the Nineteenth Century
The first edition of the book, written in 1910, gives a detailed account of the aether theories and their development from René Descartes to Hendrik Lorentz and Albert Einstein, including the contributions of Hermann Minkowski. The volume focuses heavily on aether theories, Michael Faraday, and James Clerk Maxwell, devoting each one or more chapters. It was well-received and established Whittaker as a respected historian of science. The book ran out of print and was unavailable for many years before the publication of the second edition, as Whittaker declined to reprint it. Published in the United States prior to 1925, the book is now in the public domain in the United States and has been reprinted by several publishers.
Summary
The book consists of twelve chapters that begin with a discussion on the theories of aether in the 17th century, focusing heavily on René Descartes, and end with a discussion of electronics and the theories of aether at the close of the 19th century, extensively covering contributions from Isaac Newton, René Descartes, Michael Faraday, James Clerk Maxwell, and J. J. Thomson. The book follows logical sequences of development, so the chapters are somewhat independent; the book is not fully chronological. The book uses vector analysis throughout and there is an explanatory table at the beginning of the book for those unfamiliar with vector notation.
The first chapter covers the 17th-century development of the theory of aether. Beginning with Descartes' conjectures, the chapter focuses on contributions from Christiaan Huygens and Isaac Newton while it highlights the work of Petrus Peregrinus, William Gilbert, Pierre de Fermat, Robert Hooke, Galileo, and Ole Rømer. Chapter 2 covers the initial mathematical development of the magnetic field before the introduction of the vector potential and scalar potential, covering action at a distance. The third chapter covers galvanism, beginning with Luigi Galvani and extending through Georg Ohm's theory of the circuit. Chapter 4 covers the early developments of the luminiferous aether theories, stretching from James Bradley to Augustin-Jean Fresnel. The fifth chapter covers developments that mostly took place over the first half of the nineteenth century, with some contributions by Joseph Valentin Boussinesq and Lord Kelvin; here the luminiferous aether is modelled as an elastic solid. Chapter 6 focuses almost exclusively on the experiments of Michael Faraday. Chapter seven discusses the mathematicians who worked after Faraday but before James Clerk Maxwell and who adopted views of action at a distance over Faraday's lines of force. The chapter includes a discussion of the contributions made by Franz Neumann, Wilhelm Eduard Weber, Bernhard Riemann, James Prescott Joule, Hermann von Helmholtz, Lord Kelvin, Gustav Kirchhoff, and Jean Peltier. Chapter 8 focuses on Maxwell's contributions to electromagnetism, and Chapter 9 details further developments to the models of aether made after Maxwell's publications, with contributions by Lord Kelvin, Carl Anton Bjerknes, James MacCullagh, Bernhard Riemann, George Francis FitzGerald, and William Mitchinson Hicks. The tenth chapter covers physicists following in Maxwell's tracks in the mid-nineteenth century, with contributions from Helmholtz, Fitzgerald, Weber, Hendrik Lorentz, H. A. Rowland, J. J. Thomson, Oliver Heaviside, John Henry Poynting, Heinrich Hertz, and John Kerr. Chapter 11 covers conduction in solids and gases, extending from Faraday's work, covered in chapter six, to that of J. J. Thomson, while the final chapter gives an account of the theories of aether in the late 1800s, ending with Owen Willans Richardson's work at the turn of the century.
Reviews
The book received several reviews in 1911, including one by the physicist C. M. Sparrow. Sparrow wrote that the book lives up to the legacy left by Whittaker's A Course in Modern Analysis and A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. He then noted several areas of the book that could be expanded before going on to state: "That some slight errors or inaccuracies should creep into a book of this nature is to be expected, but the one or two we have observed are of too trivial a character to deserve mention, and affect in no way the general excellence of the work. The book is attractively printed and remarkably free from misprints." Another 1911 review of the book deemed it an "excellent volume" and predicted that it "will be welcomed by all physicists as a valuable contribution." A third 1911 review praised it for its careful depiction of the developments, asserting "the treatment of the more important advances, without being [exhaustive], is sufficiently adequate to define them clearly in their historical setting".
Among other reviewers, E. B. Wilson, in a 1913 review, noted one theory that Whittaker overlooked before going on to say: "To go into further detail with regard[s] to the contents of this History, which should and will be widely read, is needless. Suffice it to say that a careful study of all of the work twice, and of many portions of it several times, leaves but one resolution, namely, to continue the study indefinitely; for there is always something new to learn where so much material is so well presented." A second 1913 review, by Herbert Hall Turner wrote that the "book is probably the greatest act of piety towards the past which has been produced in this generation" and that it "would seem advisable to keep the book on one of the easily accessible shelves of the study, where it may be referred to constantly." The book also received a positive review in Italian in 1914.
Several reviewers of the first volume of the second edition also praised the original edition in their reviews. A. M. Tyndall wrote in 1951 that he remembered how pleasurable and enlightening reading this edition had been forty-one years earlier. Carl Eckart wrote in 1952 that the book "has been the authoritative reference work for the historical aspects of the theories of optics, electromagnetism, and the [a]ether." In 1952, Victor Lenzen wrote that the book was "without rival in its field." In his 1952 review, W. H. McCrea wrote that it "gave a superbly well-knit account of its subject".
Extended and revised edition
In 1951 (Vol. 1) and 1953 (Vol. 2), Whittaker published an extended and revised edition of his book in two volumes. The first volume is a revision of the original 1910 book while the second volume, published two years later, contains an extension of the history into the twentieth century, covering the years 1900 through 1926. The books are considered authoritative texts on the developments of classical electromagnetism and continue to be cited by widely adopted textbooks on the subject. A third volume, covering the years 1925 to 1950, was promised in the second edition but was never published, as Whittaker died in 1956. The two volumes provide an account of the historical development of the fundamental theories of physics and they are said to "contain the distilled essence of their author's reading and study over a period of more than half a century."
The Classical Theories
The first volume, subtitled The Classical Theories, was initially published in 1951 by Thomas Nelson and Sons. The book is a revision of the original 1910 book, with an added chapter on classical radiation theory and some new material, but it remains focused on pre-1900 physics. The book has a similar scope to the first edition, though it is occasionally modified toward the beginning with more extensive edits toward the end. A reviewer noted that about 80 per cent of the book is a reproduction of the original edition, with revisions accounting for developments over the first forty years of the 20th century throughout. The work covers the development of optics, electricity, and magnetism, with brief excursions into the history of thermodynamics and gravitation, over three centuries, through the close of the nineteenth century.
Overview (vol. 1)
Chapter one of the first volume, which was mostly rewritten, was renamed "the theory of the aether to the death of Newton", though it still focuses on René Descartes, Isaac Newton, Pierre de Fermat, Robert Hooke, and Christiaan Huygens, among others. The chapter begins with a discussion of physics from the initial formulations of space by René Descartes, which evolved into the aether theories, through the death of Newton, witnessing the first attempts at a wave theory of light by Hooke and Huygens. The new volume traces the early development of the aether theories back to the time of Aristotle.
While there are many new paragraphs, references, and expanded footnotes throughout chapters two through eleven, much of the content remains the same as the first edition. Chapters two and three, as in the first edition, initiate the subject of electricity and magnetism, including Galvanism. Chapter two traces the history of electrostatics and magnetostatics from early developments through George Green's work on potential theory and his introduction of the vector potential and scalar potential. Chapter three, on Galvanism, discusses the history of electric current, centering on Galvani, Ohm, and Ampere. The fourth chapter, on the luminiferous medium, includes the discoveries of the aberration of light, polarization, and interference. This is the period of transition, from when Newton's corpuscular theory of light was widely held until the establishment of the wave theory after the experiments by Fresnel and Young. The fifth chapter records the development of theories modeling the aether as an elastic solid.
Chapters six through eight present the development of electromagnetism as a line from Faraday to Maxwell, including the development of theories of electricity and magnetism modelled on Newtonian mechanics. Chapter six was largely expanded from its 1910 counterpart, and chapters seven and eight were extensively rewritten with new material throughout. Chapter nine, on models of the aether, discusses, among others, contributions of Maxwell, William Thomson, James MacCullagh, Riemann, George Francis FitzGerald, and Hermann von Helmholtz, the preeminent physicists of the nineteenth century.
The final three chapters pave the way for twentieth-century developments, to be described in the second volume. Chapter eleven was renamed "conduction in solutions and gases, from Faraday to the discovery of the electron" in the new edition. Chapter twelve, titled "classical radiation-theory", is completely new and focuses on the empirical development of spectral series as well as the historical development of black body radiation physics. The final chapter, chapter thirteen, was renamed "classical theory in the age of Lorentz" and contains new material, while omitting several details, saving them for the second volume. The chapter largely focuses on electric and thermal conduction and the Lorentz theory of electrons. The table of contents has been praised as being "extremely useful" for breaking down the chapters into sections that highlight the key developments.
Reception (vol. 1)
Arthur Mannering Tyndall, William Hunter McCrea, and Julius Miller reviewed the book upon its release in 1951. Arthur Tyndall noted his preference for the setup of the new edition and wrote that "if there are any mistakes or omissions in it, the reviewer was too immersed in the atmosphere of the book to notice them". Tyndall recommended the book for teachers who are looking to develop students' interest in the historical background of optics and electricity, as he believes a lot of the content can be directly incorporated into lectures and that students can be advised to read parts of the book in their undergraduate studies. In a second 1951 review, William McCrea stated that Whittaker had succeeded, "possibly more than any other historian of science", in imparting "a comprehensive and authentic impression of that wherein the great pioneers were truly great", allowing the reader to "see their work, with its lack of precedence, against the background of strangely assorted experimental data and of contemporary conflicting general physical concepts" and "to see how they yet contributed each his share to what we are bound to recognize as permanent progress". McCrea praised the book by saying "[n]o better factual account exists to show how hardly won this progress has been." In a second review, published in 1952, McCrea stated "[o]ut of the riches of his mathematical and historical scholarship, Sir Edmund Whittaker has given us a very great book." In his review, Julius Miller claimed that the book was beyond review, saying it sufficed to note that "it is the work of a foremost scholar of this century and the last—a physicist, philosopher, mathematician." Miller noted that while it is primarily a history book, it is also "philosophy, physics, and mathematics of the first temper" and that it gives an "elegant penetrating examination of The Classical Theories". He also noted that although it is "heavy reading", the work is "delightfully clear" and that the "documentation is astonishing".
Among others, Carl Eckart, Victor Lenzen, John Synge, Stephen Toulmin, Edwin C. Kemble, and I. Bernard Cohen reviewed the book in 1952. Carl Eckart opened his review by praising the first edition of the book and writing: "This second edition will almost certainly continue to occupy the same position for many years to come." Eckart noted that the book was ambitious, but it was carried out with "unusual success" using the same clarity and elegance which had made Whittaker famous. He went on to say that the book is a "true history of ideas" which has been and will continue as a "most influential book". In his review, Victor Lenzen stated that he "knows of no work on physical theories which is comparable to the present one in the analytical and critical discussion of the mathematical formulation of the theories." His review closes by stating that the book is a testament to the "boundless intellectual curiosity" which drives humankind to understand the universe where we live. In a third 1952 review, John Synge noted that the book is "backed by a vast erudition", but is not overpowering and that "the style is sprightly and the author is singularly successful in putting himself and the reader in the place of each physicist". Synge goes on to say that Whittaker, with great skill, was able to "mingle the atmosphere of contemporary confusion which always accompanies scientific progress with an appreciation of what is actually going on, as viewed in light of later knowledge." Stephen Toulmin, in his review, refers to Whittaker's original edition as a standard reference, but noted that a supplement was almost immediately required to cover later developments. Toulmin went on to state that physicists in the first half of the twentieth century had a difficult time "keeping afloat on the tide of new theories and discoveries" and that Whittaker's history had been "quite inaccessible", and so "we are lucky in having Professor Whittaker once more as our guide." Edwin Kemble, in a fifth 1952 review, stated that the book was "in a class by itself" and summarized it as a "high-level account" of the steps in the development of the classical theory of electromagnetism that is "well documented and extraordinarily comprehensive." In his review, I. Bernard Cohen wrote that he knew "of no other history of electricity which is as sound as Whittaker's", though he noted several improvements that he wished Whittaker had made in updating the 1910 classic.
Analysis (vol. 1)
Arthur Tyndall, in his 1951 review, stated the book is "rich in experimental fact", with comparatively fewer mathematical sections, with notable exceptions such as those on Lorentz and Maxwell, saying that "this new volume is not a heavy treatise in theoretical physics, as perhaps its name might suggest". William McCrea noted that the book is "a history of theories", but also provides "very clear statements of the experimental discoveries at all stages." He goes on to note that the book focuses on the developments of the aether theories and electricity, which McCrea states are the most fundamental parts of physics, but is also informative in other relevant areas of physics, such as elasticity and thermodynamics. Some reviewers commented on the new chapter on classical radiation theory, including Tyndall, who notes that the material was barely covered in the first edition and was a natural addition that helps pave the way for the second volume, and Carl Eckart, who says that the history of spectra and thermal radiation is "given its proper place in the historical perspective."
Several reviewers criticized the book for certain omissions, including Eckart, who criticized Whittaker for leaving out Euclid and Lobatchewsky and points to this and the fact that Whittaker continued to write about the aether from a nineteenth century perspective as defects he would have ignored in a lesser volume. Victor Lenzen states that he disagrees with Whittaker on a point of emphasis, especially as it relates to not mentioning Joseph Henry outside a single footnote. He also mentions Whittaker's distinction between Platonic and Aristotelian philosophies, where he says Whittaker sides with Aristotle's empirical methods, while he believes that Plato was more prophetic of the future of mathematical methods in science.
The Modern Theories (1900–1926)
The second volume, subtitled The Modern Theories (1900–1926), was originally published in 1953 by Thomas Nelson and Sons. The book is the continuation of Whittaker's survey of the history of physics into the period 1900–1926 and describes the revolution in physics over the first quarter of the 20th century. The major historical developments covered in the book include the special theory of relativity, old quantum theory, matrix mechanics, and Schrödinger's equation and its use in quantum mechanics, referred to as "wave mechanics".
Chapter two of the book is highly controversial, and constitutes Whittaker's major role in the relativity priority dispute. Whittaker's view on the history of special relativity is that Lorentz and Poincare had successfully developed the theory before Einstein and that priority belonged to them. Despite Whittaker's objection, scientific consensus remains strongly in favor of Einstein's priority on the theory, with authors noting that while the theories of Poincare and Lorentz are mathematically and experimentally equivalent to Einstein's theory, they are not based on the relativistic postulates and do not constitute what is now known as Einstein's relativity. While parts of the book have received notable praise, due to its role in the historical controversy, the book overall has been said to fall short of the standards of the others and it has historically received many fewer citations.
Overview (vol. 2)
The first chapter, the age of Rutherford, discusses the state of empirical physics at the turn of the twentieth century. Chapter two, on the origins of special relativity, is highly controversial and is the basis of Whittaker's role in the relativity priority dispute. In this chapter, as the title suggests, Whittaker gives priority for special relativity to Hendrik Lorentz and Henri Poincaré as opposed to the generally accepted crediting of Albert Einstein, a point for which Whittaker has been rebuked by many scholars.
Chapters three and four detail the developments of old quantum theory and deal mostly with "complicated experimental facts and their preliminary explanations". Chapter three covers early developments in old quantum theory, discussing Max Planck's contributions to physics and touching on Einstein and Arnold Sommerfeld. Chapter four, on spectroscopy in old quantum theory, discusses many of Niels Bohr's precursors, including Arthur W. Conway, Penry Vaughan Bevan, John William Nicholson, and Niels Bjerrum. Chapter five switches to gravitation, discussing the history of cosmology and the general theory of relativity. Chapter six returns to quantum theory and describes the connection between older and more modern concepts in physics, discussing phenomena and theories such as Louis de Broglie's matter waves, Bose statistics, and Fermi statistics. The final two chapters give an account of the birth of quantum mechanics. Matrix mechanics is discussed in chapter eight, including the Heisenberg picture and the introduction of physical operators. Erwin Schrödinger, the Schrödinger picture, and Schrödinger's equation are all discussed in the final chapter.
Reception (vol. 2)
In a 1954 book review of the second volume, Max Born praised both volumes of the expanded and revised second edition, saying "[t]his second volume is a magnificent work, excellent not only through a brilliant style and clarity of expression, but also through an incredible scholarship and erudition" and that "this work makes us look forward keenly to the promised third volume". Born believes that a book like this one is a "most essential contribution to our literature and should be read by every student of physics and of all sciences connected with physics, including scientific history and philosophy." Born singles out chapters three and four on the development of old quantum theory, calling them "the most amazing feats of learning, insight, and discriminations". He also singles out chapter five, on gravitation, as being "perfect" due to Whittaker's own scholarship in the field, going on to say it is "the most readable and elucidating short presentation of general relativity and cosmology". In his 1956 book Physics in My Generation, Born goes on to call it an "excellent book" and talks about using the first edition as a reference when he was a student.
Freeman Dyson, in a 1954 review, said the second volume is "more limited and professional in its scope" than the first volume, giving a "clear, logical account of the sequence of events in the intellectual struggles which led up to relativity and quantum mechanics." He calls the volume a "mathematical textbook" on the theory of relativity and quantum mechanics, emphasizing a historical approach, as it explains all the necessary mathematics. He states that "Whittaker's two volumes reflect faithfully the different climates of science in the two periods they cover" and goes on to say that although he is unable to comment on the book's historical accuracy, he thinks "it is likely that this is the most scholarly and generally authoritative history of its period that we shall ever get."
In the opening remarks of his 30 November 1954 address to the Royal Society, president Edgar Adrian states that Whittaker is perhaps the most well-known British mathematician of the time, due to his "numerous, varied, and important contributions" and the offices which he had held, but that of all his works, this History is probably the most important, while he notes that Whittaker's books on analytical dynamics and modern analysis have been widely influential both in the UK and internationally. He singles out the then-recently published second volume as a "great work" which gives "a critical appreciation of the development of physical theory up to the year 1925." He goes on to say that all of Whittaker's writings showcase his "powers of arrangement and exposition" which are of "a most unusual order". He closes by saying that the "astonishing quantity and quality of his work is probably unparalleled in modern mathematics and it is most appropriate that the Royal Society should confer on Whittaker its most distinguished award", referring to Whittaker's receipt of the Copley Medal in 1954.
In a 1954 review, Rolf Hagedorn states that "One need read only a few pages of the book to sense the thoroughness and conscientiousness of the whole work". He states the book is an invaluable reference and that it is "essential for any library". He goes on to say that Whittaker "brings the reader to real understanding by a coherent mathematical description enabling him to follow the development step by step" and that the "clarity and didactic construction make it a pleasure to follow". In another review, William Fuller Brown Jr. notes that the book is a history of published papers rather than a history of the scientists who published them, but goes on to say that the book is illuminating and that the reader "will get from it a better appreciation of the process of scientific discovery". Among others, Science posted a review of the book that opened with: "The present volume is not, as the title would suggest, merely a 26-year extension of the work originally written by Sir Edmund Whittaker under the same title in 1910. It is, rather, a thorough and authoritative chronicle of the development of theoretical physics in the period 1900–1926, including atomic structure, special relativity, [old] quantum theory, general relativity, matrix mechanics, and wave mechanics".
A review by P. W. Bridgman in 1956 says "The reader's first impression of this formidable treatise, I believe, will almost invariably be one of stupefaction at the industry and versatility of the author, who has been able to assimilate and critically review so much." He goes on to say that older physicists would also "find it an epitome" of their "own experience", and that it would recount for them "many critical situations".
Analysis (vol. 2)
In a September 1953 letter to Albert Einstein published in 1971, Max Born writes that, other than the relativity priority issues, it was "particularly unpleasant" for him that Whittaker "had woven all sorts of personal information into his account of quantum mechanics" while Born's role in the development was "extolled". In his 1971 commentary, however, Born states that the book is "a brilliant and historic philosophical work" which he found "extremely useful" in his earlier years. In a 1954 book review, Born praises the book for its "extremely careful" record of "obscure or forgotten papers which contain some essential new idea though perhaps in an imperfect form", and points out that the last two chapters of the book give a "detailed and lively account of the birth of quantum mechanics in both of its forms, matrix mechanics and wave mechanics." He also praises Whittaker for setting aside his philosophical interests, saying "Whittaker the conscientious historian of science, has the upper hand over Whittaker the metaphysician, and it is just this feature which makes the book a safe guide through the tangle of events". Born states that the title of the second chapter, or "the historical view expressed by it", is the only point where he does not share Whittaker's opinion. Born also points out that the book goes beyond what ordinary textbooks can do, which he believes offer students "the shortest and simplest way to knowledge and understanding", and "are in cases not only unhistorical but a distortion of history".
Freeman Dyson, in his 1954 review, remarks that the second volume has, by necessity, a "very different style from the first" due to the rapid mathematical development in the early 1900s. He summarizes the first volume as a description of "historical accidents", which resulted in changes in the way scientists thought about the problems, with discussions of the connections between physics and the more general philosophical climate of the times, while saying the second volume covers the history of physics when the progress was determined by the "speed with which observations could be understood and expressed in exact mathematical terms".
In his 1954 Nature review, Rolf Hagedorn notes that readers should be familiar with differential and integral calculus and linear algebra, saying the book "is not written for the layman interested in the history of science, and certainly does not belong to the category of popular science books." He praises the book for justifying each statement with "at least one quotation", stating he estimates the total to be greater than one thousand. He goes on to say that "it is inconceivable that an author with such a profound knowledge of his sources could have overlooked any important fact." He also acknowledges that the book is sometimes hard to read due to the "condensed style" as well as "the fact that he often employs the nomenclature used in original work instead of that which would be used to-day."
In his 1956 book review, P. W. Bridgman states that it is "doubtless" that the most controversial part of the book is in giving priority to Lorentz and Poincaré for special relativity, but he chooses not to defend the priority of Einstein, referring readers to Max Born's responses. He does state that it "is to be remembered, however, that Whittaker was in the thick of things during the development of the theory, and there is much forgotten history". He praises Whittaker for highlighting the "little known pre-history" of the mass-energy relation. Bridgman also notes that the volume does not discuss whether the "aether" should be considered superfluous in light of the special and general theories of relativity, but notes that the preface to the original edition argues for keeping the word aether to describe the quantum vacuum.
In relation to the early development of general relativity and the equivalence principle, Roberto Torretti, in his 1983 book, criticized Whittaker for attributing to Max Planck the implication that "all energy must gravitate" even though Planck's 1907 paper was "saying the opposite" according to Torretti.
Special relativity priority dispute
In the second volume, a chapter titled "The Relativity Theory of Poincaré and Lorentz" credits Henri Poincaré and Hendrik Lorentz for developing special relativity, and especially alluded to Lorentz's 1904 paper (dated by Whittaker as 1903), Poincaré's St. Louis speech (The Principles of Mathematical Physics) of September 1904, and Poincaré's June 1905 paper. He attributed to Einstein's special relativity paper only little importance, which he said "set forth the relativity theory of Poincaré and Lorentz with some amplifications, and which attracted much attention". Roberto Torretti states, in his 1983 book Relativity and Geometry, "Whittaker's views on the origin of special relativity have been rejected by the great majority of scholars", citing Max Born, Gerald Holton, Charles Scribner, Stanley Goldberg, Elie Zahar, Tetu Hirosige, Kenneth F. Schaffner, and Arthur I. Miller. He notes that G. H. Keswani sides with Whittaker, though "he somewhat tempers the latter's view". Miller, in his 1981 book, writes that the "lack of historic credibility" of the second chapter had been "demonstrated effectively" by Holton's 1960 article on the origins of special relativity.
Max Born rebuttals
Born wrote a letter to Einstein in September 1953 in which he explained that Whittaker, a friend of his, was publishing the second volume, which is "peculiar in that Lorentz and Poincare are credited" with the development of special relativity while Einstein's papers are treated as "less important". He goes on to tell Einstein that he had done all he could over the previous three years to "dissuade Whittaker from carrying out his plan", mentioning that Whittaker "cherished" the idea and "loved to talk" about it. He told Einstein that Whittaker insists that all the important features were developed by Poincare while Lorentz "quite plainly had the physical interpretation". Born said this annoyed him, as Whittaker is a "great authority in the English speaking countries", and he was worried that "many people are going to believe him". Einstein reassured Born that there was nothing to worry about in an October response, saying "Don't lose any sleep over your friend's book. Everybody does what he considers right or, in deterministic terms, what he has to do. If he manages to convince others, that is their own affair." He states that he does not find it sensible to defend the results of his research as somehow belonging to him. In the 1971 commentary on this response, Born says that Einstein's response simply proves his "utter indifference to fame and glory".
In his 1954 book review, Born states that "there is much to be said in favour of Whittaker's judgment. From the mathematical standpoint the Lorentz transformations contain the whole of special relativity, and there seems to be no doubt that Poincare was, perhaps a little ahead of Einstein, aware of most of the important physical consequences". He nonetheless sides with the "general use in naming relativity after Einstein", though "without disregarding the great contributions of Lorentz and Poincare." Born expands on these thoughts in his 1956 book, where he points out a response from Einstein to Carl Seelig in which Einstein was asked about the scientific literature which most influenced his special theory of relativity. Einstein points out that he knew only the work by Lorentz from the 1890s. Born says this "makes the situation perfectly clear." He points out that the 1905 papers on relativity and the light quantum were connected, and that the research was independent of Lorentz's and Poincaré's later work. He goes on to highlight Einstein's "audacity" in "challenging Isaac Newton's established philosophy, the traditional concepts of space and time." This, for Born, "distinguishes Einstein's work from his predecessors and gives us the right to speak of Einstein's theory of relativity, in spite of Whittaker's different opinion."
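For reference, the Lorentz transformations at the centre of the dispute read, in their standard modern form for relative motion with speed v along the x axis (a textbook statement, not a reconstruction of any one historical paper):

\[ x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}. \]

Born's remark above turns on exactly this point: the same formulas admit either Lorentz's aether-based reading or Einstein's kinematic one, so the mathematics alone cannot settle the question of priority.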
George Holton rebuttal
In his explicit rebuttal of 1960, Holton notes that Einstein's paper "was indeed one of a number of contributions by many different authors", but goes on to point out that Whittaker's assessment was lacking and plainly wrong in places. He notes that crediting Lorentz with a 1903 rather than 1904 paper was "not merely a mistake", but at least a "symbolic mistake", "symbolic of the way a biographer's preconceptions interact with his material." He goes on to say that Whittaker insinuated that Einstein's work was based on Lorentz's despite the statements by Einstein and his colleagues to the contrary, and that there are multiple pieces of evidence in the 1905 paper which imply Einstein did not know of Lorentz's later work, including the fact that Einstein derived the Lorentz transformation while Lorentz assumed it, and that Einstein was scrupulous in giving credit to others whose work influenced his own. He also points out a key difference between the papers: Einstein argues that the "laws of electrodynamics and optics" were "valid in all frames of reference" to all orders of v/c, whereas Lorentz claimed, as a "key point" in his 1904 paper, "to have extended the theory to the second order in v/c". He notes further that Planck had pointed out in 1906 that Einstein's expression for the mass of charged particles was "far less suitable" than Lorentz's. Holton goes on to note the "equally significant fact" that Lorentz's paper was "not on the special relativity as we understand the term since Einstein", as his "fundamental assumptions are not relativistic". He goes on to say that Lorentz never claimed credit for relativity and in fact referred to it as Einstein's relativity. He notes finally that Lorentz's formulation was valid only for small v/c, but the point of Einstein's theory was general validity. Holton has written other works on the history of special relativity as well, defending Einstein's priority.
Rebuttals from other notable scholars
Roberto Torretti, in his 1983 book, notes that the theory set out by Poincaré and Lorentz was both "experimentally indistinguishable from and mathematically equivalent to" Einstein's On the Electrodynamics of Moving Bodies, but that their philosophy is very different from the special relativity of Einstein. Torretti notes that their theory, in stark contrast to Einstein's, relies on the assumption of an aether which interacts with systems moving across it, affecting clocks and shrinking bodies. He grants that Einstein could doubtless have drawn inspiration from the works of Poincaré, but points out that Poincaré's theory was not universally applicable like Einstein's and that it does not rest on a modification of the notions of space and time. He also mentions that Lorentz regularly referred to the theory as Einstein's, but that Poincaré never truly became a relativist and referred to the theory as Lorentz's. Torretti attributes Poincaré's failure to embrace the theory to his notorious conventionalism, and to the fact that he may have been a little too proud to admit that "he had lost the glory of founding 20th-century physics to a young Swiss patent clerk."
Charles Scribner, in his 1984 article Henri Poincaré and the Principle of Relativity, stated his belief that Whittaker's view on the matter "fails to do justice to the available historical evidence" and noted that it may also "create obstacles for students". He continued: "Einstein played a unique role in establishing the universal validity of the principle of relativity and in revealing and capitalizing on its radical implications." He noted several of the points raised by Holton in his 1960 rebuttal, including the discrepancy in powers of v/c and the fact that Poincaré never truly accepted the theory in the manner Einstein had put it forward.
The controversy is mentioned in other books on the history of science as well. In his book Subtle is the Lord, Abraham Pais wrote a scathing review of Whittaker, writing that the treatment of special relativity "shows how well the author's lack of physical insight matches his ignorance of the literature", phrasing that was rebuked by at least one notable reviewer as "scurrilous" and "lamentable". Somewhat paradoxically, he also states that both he and his colleagues believe Whittaker's original edition "is a masterpiece". He further notes that he would not have felt the need to comment if the book had not "raised questions in many minds about the priorities in the discovery of this theory". A more sympathetic review came from Clifford Truesdell, who wrote in his 1984 book An Idiot's Fugitive Essays on Science that Whittaker "aroused colossal antagonism by trying to set the record straight on the basis of print and record rather than recollection and folklore and professional propaganda".
Long term impact
In one of Whittaker's 1958 obituaries, William McCrea remarked that the books are achievements so remarkable that "as time passes, the risk will be of all Whittaker's other great achievements tending to be overlooked in comparison." He predicted that future readers would "have difficulty" in accepting that it was only the result of "a few years at both ends of a career of the highest distinction in other pursuits." In a 1956 obituary, Alexander Aitken calls the book series Whittaker's "magnum opus", amid a career of distinction, and expresses regret that Whittaker was unable to complete the promised third volume. Another obituary claims that the two volumes of the second edition "form Whittaker's magnum opus", amid many other distinctions, including four standard works other than the History. In a fourth obituary the work is said to be "brilliant" and a "colossal undertaking involving wide reading and accurate understanding".
The book was included in a curated 1958 list of "important books on science" in a Science article by Ivy Kellerman Reed and Alexander Gode, where the volumes are said to be the "first exhaustive history of the classical and modern theories of aether and electricity". In 1968, John L. Heilbron stated that the "great value" of Whittaker's second volume on quantum mechanics lies in its ability to connect developments in quantum mechanics with those in other fields, as well as in its "rich citations", going on to recommend it, among several other books on the history of science, to readers.
John David Jackson recommends both volumes to his readers in the preface of the first edition of the famous graduate textbook Classical Electrodynamics (1962), which has been reprinted in all later editions, including the standard third edition of 1999. Jackson gives a brief account of the history of the mathematical development of electrodynamics and says the "story of the development of our understanding of electricity and magnetism is, of course, much longer and richer than the mention of a few names from one century would indicate." He goes on to tell his readers to consult both "authoritative" volumes for a "detailed account of the fascinating history".
In a 1988 Isis review of a combined reprint of the second edition, including both the first and second volumes bound together, published in New York by the American Institute of Physics and Tomash Publishers in 1981, science historian Bruce J. Hunt says that the books stand up "remarkably well" to time and that it is unlikely that others would try to write such books in modern times, as the "encyclopedic sweep is too broad" and the "purely internalist focus too narrow" for recent trends, though he says "we can be glad that someone did write it" and that it is, perhaps, fortunate that Whittaker did so such a long time ago. He goes on to state his appreciation for the new reprint. In contrast to the first volume on The Classical Theories, Hunt notes that the second volume, The Modern Theories, is "rarely cited today, except in connection with this controversy" and that it has had "relatively little influence" on later publications in the history of modern physics. He goes on to say the first volume "continues to be a standard reference". He says the book's greatest weakness is that it lacks a "real historical sense", that it misses wider contexts and is therefore incomplete, as it focuses on theories rather than people. Hunt closes by noting that the book is, in many ways, a "relic of a past age", but remains "very useful" when "approached critically", and praises Whittaker as "one of the last and most thoughtful of the great Victorian mathematical physicists."
In a 2003 review of a book by the French science historian Olivier Darrigol, L. Pearce Williams compares the newer book with Whittaker's second edition, which he calls "old but still valuable". In 2007 Stephen G. Brush included the second volume of the second edition in a curated list of books on the history of light-quantum developments, such as black body radiation.
Other scholars have singled out the original volume as well, including Darrigol, who highlighted the work as an authoritative reference in a 2010 article, and Abraham Pais, who in his 1982 book on Einstein stated that both he and his colleagues believe the book to be a "masterpiece".
Release details
First edition
The book was originally published in 1910 by Longmans, Green, and Co. in London, New York, Bombay, and Calcutta, and by Hodges, Figgis, and Co. in Dublin. It was out of print by the 1920s and was notoriously difficult to obtain thereafter. It was part of the Dublin University Press series and the Landmarks of Science series of books. As it was registered with the U.S. copyright office prior to 1925, the book is now in the public domain in the United States and can be found on the Internet Archive free of charge, and is free to be reprinted.
Second edition
Original printing of the first volume:—
Original printing of the second volume:—
First reprinting of the edition, combines both volumes as one:—
Reprint by the American Institute of Physics and Tomash Publishing:—
Reprint by Dover Publications:—
See also
The Maxwellians:—Book by Bruce J. Hunt detailing the development of electromagnetism in the years after the publication of Maxwell's Treatise
Timeline of electromagnetism and classical optics:—Dynamic list of major developments in the history of electromagnetism and history of optics
History of chemistry:—The history of chemistry from ancient through modern times
History of electrical engineering:—Details the development of electrical engineering
:–Succinct expression of the principle of relativity with a classical geometric notion
References
Works cited
Relativity priority
Notable reviews
Further reading
External links
1910 non-fiction books
1951 non-fiction books
1953 non-fiction books
Aether theories
Books about the history of physics
History of electrical engineering
History of optics
History of thermodynamics
Longman books
Thomas Nelson (publisher) books
Reference works
Books by E. T. Whittaker | A History of the Theories of Aether and Electricity | [
"Physics",
"Chemistry",
"Engineering"
] | 10,217 | [
"Electrical engineering",
"History of thermodynamics",
"Thermodynamics",
"History of electrical engineering"
] |
65,295,683 | https://en.wikipedia.org/wiki/Dickkopf | Dickkopf (DKK) is a family of proteins consisting of five members as of 2020. That is, vertebrates usually contain five genes that are members of the family. The most well-studied is Dickkopf-related protein 1 (DKK1). DKK proteins inhibit the Wnt signaling pathway coreceptors LRP5 and LRP6. They bind with high affinity as ligands to KREMEN1 and KREMEN2, which are transmembrane proteins. DKK proteins have important roles in the development of vertebrates.
Etymology
Dickkopf is a German word meaning "stubborn person", or literally, "thick head". It was coined as the name for these proteins in a 1998 Nature paper by Glinka et al., in reference to the discovery that DKK1 induces head formation in the embryogenesis of Xenopus.
Structure
DKK proteins are glycoproteins consisting of 255–350 amino acids. DKK1, DKK2, and DKK4 have similar molecular weights, at 24–29 kDa (kilodaltons). DKK3 is the heaviest, at 38 kDa. In addition to having similar weights, DKK1, -2, and -4 have high structural similarity, with two shared cysteine-rich domains. DKK3 differs from -1, -2, and -4 by the presence of a Soggy domain at its N-terminus.
Proteins
Four DKK proteins and one DKK-like protein occur in humans and other vertebrates, with five proteins in the family in total:
DKK1
DKK2
DKK3
DKK4
DKKL1 (soggy-1, Cancer/testis antigen 34)
Human disease
DKK proteins are believed to be involved in several human diseases, including bone cancer and neurodegenerative disease. Evidence also indicates that DKK1 and DKK3 are involved in arterial pathophysiology, where they could contribute to atherosclerosis.
References
Protein families | Dickkopf | [
"Biology"
] | 425 | [
"Protein families",
"Protein classification"
] |
65,295,885 | https://en.wikipedia.org/wiki/Crystalline%20coatings | Crystalline coatings (or crystalline mirrors) are a type of thin-film optical interference coating that is made by merging monocrystalline multilayers deposited via processes such as molecular-beam epitaxy (MBE) and metalorganic vapour-phase epitaxy (MOVPE) with microfabrication techniques including direct bonding and selective etching. In this technique heterostructures such as gallium arsenide / aluminum gallium arsenide (GaAs/AlGaAs) distributed Bragg reflectors (DBRs) are grown and then transferred to polished optical surfaces, resulting in high-performance single-crystal optical coatings on arbitrary, including curved, substrates. the maximum diameter achievable is 20 cm, limited by commercially-available GaAs wafers. The tightest curvature demonstrated for such coatings is 5 cm.
The substrate-transferred crystalline coating process was developed in 2013 by Garrett Cole and colleagues at the Institute for Quantum Optics and Quantum Information at the Austrian Academy of Sciences and the University of Vienna. With additional refinement, the technique became capable of generating high-reflectivity mirrors with optical losses on par with the best ion-beam-sputtered coatings, with optical absorption in the 1000–2000 nm spectral range demonstrated to be < 1 part-per-million (ppm) and optical scatter < 3 ppm in the best optics. Additional advantages of these coatings include:
Significantly reduced elastic losses (at least a factor of 10 over typical amorphous interference coatings) resulting in minimal thermal noise, enabling ultrastable interferometers for optical atomic clocks and gravitational-wave detectors such as LIGO.
The realization of ppm levels of optical loss (absorption + scatter) in the mid-infrared spectral region, demonstrated in enhancement cavities for cavity ring-down spectrometers with a finesse > 400 000 at wavelengths up to ~4500 nm (a rough consistency check follows this list).
High thermal conductivity, over 20 times higher than typical metal-oxide based coatings, making crystalline coatings promising for high-power continuous wave (CW) and quasi-CW lasers, with a CW damage threshold of 75 MW/cm2 demonstrated in a deformable mirror device at 1064 nm.
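As a rough consistency check (this back-of-the-envelope arithmetic is an illustration, not a figure from the sources above): for a low-loss two-mirror cavity the finesse F is approximately 2π divided by the total round-trip loss, so a finesse of 400 000 corresponds to a round-trip loss of about 2π/400 000 ≈ 16 ppm, consistent with the ppm-level absorption and scatter figures quoted above.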
Owing to the low Brownian noise of crystalline coatings there have been a number of advancements in quantum-limited interferometry, with these mirrors being instrumental in efforts relevant to macroscopic quantum phenomena and enabling the demonstration of ponderomotive squeezing at room temperature, the broadband reduction of quantum radiation pressure noise via squeezed light injection, and the room temperature measurement of quantum back action in the audio band.
Garrett Cole and Markus Aspelmeyer founded Crystalline Mirror Solutions in 2013 to commercialize the technology. They were awarded second prize from the Berthold Leibinger Innovationspreis in 2016. The company was acquired by Thorlabs in December 2019 and rebranded as Thorlabs Crystalline Solutions.
References
Thin-film optics | Crystalline coatings | [
"Materials_science",
"Mathematics"
] | 600 | [
"Thin-film optics",
"Planes (geometry)",
"Thin films"
] |
65,296,384 | https://en.wikipedia.org/wiki/Andrew%20Pollard%20%28immunologist%29 | Sir Andrew John Pollard (born 29 August 1965) is the Ashall Professor of Infection & Immunity at the University of Oxford and a Fellow of St Cross College, Oxford. He is an honorary consultant paediatrician at John Radcliffe Hospital and the director of the Oxford Vaccine Group. He is the chief investigator on the University of Oxford COVID-19 Vaccine (ChAdOx-1 n-CoV-19) trials and has led research on vaccines for many life-threatening infectious diseases, including typhoid fever, Neisseria meningitidis, Haemophilus influenzae type b, streptococcus pneumoniae, pertussis, influenza, rabies, and Ebola.
Because "In order to prevent any perceived conflict of interest it was agreed that the Joint Committee on Vaccination and Immunisation (JCVI) Chair (Professor Andrew Pollard), who is involved in the development of a SARS-CoV-2 vaccine at Oxford, would recuse himself from all JCVI COVID-19 meetings", JCVI Deputy Chair Professor Anthony Harnden acts in his stead on these matters.
Pollard was awarded the coveted James Spence Medal by the Royal College of Paediatrics and Child Health (RCPCH) in 2022.
Education
Pollard attended St Peter's Catholic School, Bournemouth, where he was head boy. He attended Guy's Hospital Medical School graduating with a BSc in 1986, and subsequently obtained an MBBS from the University of London (1989) at St Bartholomew's Hospital Medical School, where he was awarded the Wheelwright's Prize in Paediatrics (1988) and Honours Colours. After house jobs at Barts and Whipps Cross Hospital and working as an A&E senior house officer at the Whittington Hospital, London, he trained in Paediatrics at Birmingham Children's Hospital, UK, specialising in Paediatric Infectious Diseases at St Mary's Hospital, London, and at British Columbia Children's Hospital, Vancouver. He obtained his PhD at St Mary's Hospital, from the University of London in 1999.
Career
He chaired the scientific panel of the Spencer Dayman Meningitis Laboratories Charitable Trust (2002–2006) and was a member of the scientific committee of the Meningitis Research Foundation (2009–2014). He is currently chair of trustees of the Knoop Trust and a trustee of the Jenner Vaccine Foundation. Pollard has been the chair of the UK's JCVI since 2013, but does not participate in the COVID-19 vaccine committee. Pollard has been a member of the WHO Strategic Advisory Group of Experts (SAGE) on Immunization since 2016. He was Director of Graduate Studies in the Department of Paediatrics at the University of Oxford from 2012 to 2020 and was Vice-Master of the University of Oxford's St Cross College from 2017 to 2021, and remains a Fellow of the college. He has been a member of the British Commission on Human Medicines' Clinical Trials, Biologicals and Vaccines expert advisory group since 2013, and chaired the European Medicines Agency Scientific Advisory Group on Vaccines between 2012 and 2020.
Honours and awards
Pollard has received multiple awards throughout his career. For example, he received the “Science Honor and Truth Award” of the Instituto de Patologia en la Altura in La Paz, Bolivia in 2002. In 2020, Pollard received the Oxford University Vice Chancellor's Innovation Award for his work on typhoid vaccines. In 2021, Pollard was knighted in the Birthday Honours for services to public health, particularly during the COVID-19 pandemic. In 2022, Brazil awarded him the Order of Medical Merit. He was elected as a Fellow of the Royal Society (FRS) in 2024.
Publications
Pollard has published five books (including one on mountaineering), six book chapters, 12 conference papers, and 647 journal articles.
Personal life
Pollard is an avid runner, cyclist, and mountaineer.
References
External links
PubMed search for Andrew J. Pollard
1965 births
Living people
People educated at St Peter's Catholic School, Bournemouth
Alumni of King's College London
Alumni of the Medical College of St Bartholomew's Hospital
Alumni of Imperial College London
Vaccinologists
British immunologists
Fellows of the Academy of Medical Sciences (United Kingdom)
Academics of the University of Oxford
Fellows of the Higher Education Academy
Fellows of St Cross College, Oxford
Knights Bachelor
Vaccination advocates
Fellows of the Royal College of Paediatrics and Child Health
Fellows of the Royal Society | Andrew Pollard (immunologist) | [
"Biology"
] | 930 | [
"Vaccination",
"Vaccinologists",
"Vaccination advocates"
] |
65,297,106 | https://en.wikipedia.org/wiki/Adaptive%20design%20%28medicine%29 | In an adaptive design of a clinical trial, the parameters and conduct of the trial for a candidate drug or vaccine may be changed based on an interim analysis. Adaptive design typically involves advanced statistics to interpret a clinical trial endpoint. This is in contrast to traditional single-arm (i.e. non-randomized) clinical trials or randomized clinical trials (RCTs) that are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process takes place at certain points in the trial, prescribed in the trial protocol. Importantly, this trial protocol is set before the trial begins with the adaptation schedule and processes specified. Adaptions may include modifications to: dosage, sample size, drug undergoing trial, patient selection criteria and/or "cocktail" mix. The PANDA (A Practical Adaptive & Novel Designs and Analysis toolkit) provides not only a summary of different adaptive designs, but also comprehensive information on adaptive design planning, conduct, analysis and reporting.
Purpose
The aim of an adaptive trial is to more quickly identify drugs or devices that have a therapeutic effect, and to zero in on patient populations for whom the drug is appropriate. When conducted efficiently, adaptive trials have the potential to find new treatments while minimizing the number of patients exposed to the risks of clinical trials. Specifically, adaptive trials can efficiently discover new treatments by reducing the number of patients enrolled in treatment groups that show minimal efficacy or higher adverse-event rates. Adaptive trials can adjust almost any part of their design based on pre-set rules and statistical design, such as the sample size, the addition of new groups, the dropping of less effective groups, and the probability of being randomized to a particular group.
History
In 2004, a Strategic Path Initiative was introduced by the United States Food and Drug Administration (FDA) to modify the way drugs travel from lab to market. This initiative aimed to address the high attrition levels observed in the clinical phase. It also attempted to offer investigators flexibility to find the optimal clinical benefit without affecting the study's validity. Adaptive clinical trials initially came under this regime.
The FDA issued draft guidance on adaptive trial design in 2010. In 2012, the President's Council of Advisors on Science and Technology (PCAST) recommended that the FDA "run pilot projects to explore adaptive approval mechanisms to generate evidence across the lifecycle of a drug from the pre-market through the post-market phase." While not specifically related to clinical trials, the council also recommended that they "make full use of accelerated approval for all drugs meeting the statutory standard of addressing an unmet need for a serious or life-threatening disease, and demonstrating an impact on a clinical endpoint other than survival or irreversible morbidity, or on a surrogate endpoint, likely to predict clinical benefit."
In 2019, the FDA updated its 2010 recommendations and issued "Adaptive Design Clinical Trials for Drugs and Biologics Guidance". In October 2021, the FDA Center for Veterinary Medicine issued the guidance document "Adaptive and Other Innovative Designs for Effectiveness Studies of New Animal Drugs".
Characteristics
Traditionally, clinical trials are conducted in three steps:
The trial is designed.
The trial is conducted as prescribed by the design.
Once the data are ready, they are analysed according to a pre-specified analysis plan.
Types
Overview
Any trial design that can change its design during active enrollment could be considered an adaptive clinical trial. There are a number of different types, and real-life trials may combine elements from several of them. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained.
Dose finding design
Phase I of clinical research focuses on selecting a particular dose of a drug to carry forward into future trials. Historically, such trials have had a "rules-based" (or "algorithm-based") design, such as the 3+3 design. However, these "A+B" rules-based designs are not appropriate for phase I studies and are inferior to adaptive, model-based designs. An example of a superior design is the continual reassessment method (CRM).
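To make the model-based approach concrete, the following Python sketch implements a minimal one-parameter CRM update; the skeleton of prior toxicity guesses, the target toxicity rate, the prior spread and the patient data are all invented for illustration. After each cohort the posterior over the model parameter is re-computed by numerical integration, and the next cohort is assigned the dose whose estimated toxicity is closest to the target:

import math

skeleton = [0.05, 0.15, 0.30, 0.45]      # prior toxicity guesses per dose (assumed)
target = 0.25                            # target toxicity probability (assumed)
data = [(3, 0), (3, 1), (0, 0), (0, 0)]  # (patients, toxicities) observed per dose

def crm_recommend(skeleton, data, target, sigma=math.sqrt(1.34)):
    # One-parameter power model: p_i(a) = skeleton_i ** exp(a), with a ~ N(0, sigma^2).
    grid = [-4 * sigma + 8 * sigma * k / 400 for k in range(401)]
    weights = []
    for a in grid:
        w = math.exp(-a * a / (2 * sigma * sigma))   # (unnormalised) prior density
        for (n, y), s in zip(data, skeleton):
            p = s ** math.exp(a)
            w *= p ** y * (1 - p) ** (n - y)         # binomial likelihood at this dose
        weights.append(w)
    z = sum(weights)
    # Posterior-mean toxicity estimate at each dose level.
    est = [sum(w * s ** math.exp(a) for w, a in zip(weights, grid)) / z
           for s in skeleton]
    # Recommend the dose whose estimated toxicity is closest to the target.
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))

print(crm_recommend(skeleton, data, target))   # index of the dose for the next cohort

Unlike a rules-based A+B scheme, every treated patient informs the fitted dose-toxicity curve, which is what allows the design to home in on the target dose with fewer patients.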
Group sequential design
Group sequential design is the application of sequential analysis to clinical trials. At each interim analysis, investigators will use the current data to decide whether the trial should either stop or should continue to recruit more participants. The trial might stop either because the evidence that the treatment is working is strong ("stopping for benefit") or weak ("stopping for futility"). Whether a trial may stop for futility only, benefit only, or either, is stated in advance. A design has "binding stopping rules" when the trial must stop when a particular threshold of (either strong or weak) evidence is crossed at a particular interim analysis. Otherwise it has "non-binding stopping rules", in which case other information can be taken into account, for example safety data. The number of interim analyses is specified in advance, and can be anything from a single interim analysis (a "two-stage" design") to an interim analysis after every participant ("continuous monitoring").
For trials with a binary (response/no response) outcome and a single treatment arm, a popular and simple group sequential design with two stages is the Simon design. In this design, there is a single interim analysis partway through the trial, at which point the trial either stops for futility or continues to the second stage. Mander and Thomson also proposed a design with a single interim analysis, at which point the trial could stop for either futility or benefit.
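As an illustration, a Simon-type two-stage rule can be written as a small decision function. This Python sketch uses made-up boundary values; a real design would choose r1, n1, r and n to control the type I and II error rates for specified null and alternative response rates:

def simon_two_stage(stage1_responses, total_responses=None,
                    r1=1, n1=10, r=5, n=29):
    """Illustrative Simon two-stage rule (boundary values are assumed).

    Stage 1: treat n1 patients; stop for futility if at most r1 respond.
    Stage 2: treat n patients in total; the treatment is declared
    promising if more than r respond overall.
    """
    if stage1_responses <= r1:
        return "stop for futility after stage 1"
    if total_responses is None:
        return "continue to stage 2"
    return "promising" if total_responses > r else "not promising"

print(simon_two_stage(1))                     # stop for futility after stage 1
print(simon_two_stage(3, total_responses=7))  # promising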
For single-arm, single-stage binary outcome trials, a trial's success or failure is determined by the number of responses observed by the end of the trial. This means that it may be possible to know the conclusion of the trial (success or failure) with certainty before all the data are available. Planning to stop a trial once the conclusion is known with certainty is called non-stochastic curtailment. This reduces the sample size on average. Planning to stop a trial when the probability of success, based on the results so far, is either above or below a certain threshold is called stochastic curtailment. This reduces the average sample size even more than non-stochastic curtailment. Stochastic and non-stochastic curtailment can also be used in two-arm binary outcome trials, where a trial's success or failure is determined by the number of responses observed on each arm by the end of the trial.
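Curtailment, too, is mechanical enough to sketch. In this Python fragment (the success threshold, the assumed response rate and the stopping cut-offs are all illustrative), the trial stops as soon as success is guaranteed or impossible (non-stochastic curtailment), or as soon as the binomial probability of eventually succeeding leaves a chosen band (stochastic curtailment):

import math

def binom_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(max(k, 0), n + 1))

def curtail(y, m, n=29, r=6, p=0.3, lo=0.05, hi=0.95):
    """y responses seen in m of n planned patients; success means >= r responses."""
    if y >= r:
        return "stop: success already certain"       # non-stochastic curtailment
    if r - y > n - m:
        return "stop: success no longer possible"    # non-stochastic curtailment
    prob = binom_tail(r - y, n - m, p)               # chance of still succeeding
    if prob < lo:
        return "stop for futility"
    if prob > hi:
        return "stop for predicted success"
    return "continue"

print(curtail(y=0, m=20))   # "stop for futility" under these illustrative numbers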
Usage
Adaptive design methods were developed mainly in the early 21st century. In November 2019, the US Food and Drug Administration provided guidelines for using adaptive designs in clinical trials.
In 2020 COVID-19 related trials
In April 2020, the World Health Organization published an "R&D Blueprint (for the) novel Coronavirus" (Blueprint). The Blueprint documented a "large, international, multi-site, individually randomized controlled clinical trial" to allow "the concurrent evaluation of the benefits and risks of each promising candidate vaccine within 3–6 months of it being made available for the trial." The Blueprint listed a Global Target Product Profile (TPP) for COVID‑19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks.
The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial (the "Solidarity trial" for vaccines) to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID‑19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition prioritized which vaccines would go into Phase II and III clinical trials, and determined harmonized Phase III protocols for all vaccines achieving the pivotal trial stage.
The global "Solidarity" and European "Discovery" trials of hospitalized people with severe COVID‑19 infection applied adaptive design to rapidly alter trial parameters as results from the four experimental therapeutic strategies emerge. The US National Institute of Allergy and Infectious Diseases (NIAID) initiated an adaptive design, international Phase III trial (called "ACTT") to involve up to 800 hospitalized COVID‑19 people at 100 sites in multiple countries.
Breast cancer
An adaptive trial design enabled two experimental breast cancer drugs to deliver promising results after just six months of testing, far shorter than usual. Researchers assessed the results while the trial was in process and found that cancer had been eradicated in more than half of one group of patients. The trial, known as I-SPY 2, tested 12 experimental drugs.
I-SPY 1
For its predecessor I-SPY 1, 10 cancer centers and the National Cancer Institute (NCI SPORE program and the NCI Cooperative groups) collaborated to identify response indicators that would best predict survival for women with high-risk breast cancer. During 2002–2006, the study monitored 237 patients undergoing neoadjuvant therapy before surgery. Iterative MRI and tissue samples were used to monitor the biological response of patients to chemotherapy given in a neoadjuvant, or presurgical, setting. Evaluating chemotherapy's direct impact on tumor tissue took much less time than monitoring outcomes in thousands of patients over long time periods. The approach helped to standardize the imaging and tumor sampling processes, and led to miniaturized assays. Key findings included that tumor response was a good predictor of patient survival, and that tumor shrinkage during treatment was a good predictor of long-term outcome. Importantly, the vast majority of tumors were identified as high risk by molecular signature. However, there was heterogeneity within this group of women, and measuring response within tumor subtypes was more informative than viewing the group as a whole. Within genetic signatures, level of response to treatment appears to be a reasonable predictor of outcome. Additionally, its shared database has furthered the understanding of drug response and generated new targets and agents for subsequent testing.
I-SPY 2
I-SPY 2 is an adaptive clinical trial of multiple Phase 2 treatment regimens combined with standard chemotherapy. I-SPY 2 linked 19 academic cancer centers, two community centers, the FDA, the NCI, pharmaceutical and biotech companies, patient advocates and philanthropic partners. The trial is sponsored by the Biomarker Consortium of the Foundation for the NIH (FNIH), and is co-managed by the FNIH and QuantumLeap Healthcare Collaborative. I-SPY 2 was designed to explore the hypothesis that different combinations of cancer therapies have varying degrees of success for different patients. Conventional clinical trials that evaluate post-surgical tumor response require a separate trial with long intervals and large populations to test each combination. Instead, I-SPY 2 is organized as a continuous process. It efficiently evaluates multiple therapy regimens by relying on the predictors developed in I-SPY 1 that help quickly determine whether patients with a particular genetic signature will respond to a given treatment regimen. The trial is adaptive in that the investigators learn as they go, and do not continue treatments that appear to be ineffective. All patients are categorized based on tissue and imaging markers collected early and iteratively (a patient's markers may change over time) throughout the trial, so that early insights can guide treatments for later patients. Treatments that show positive effects for a patient group can be ushered to confirmatory clinical trials, while those that do not can be rapidly sidelined. Importantly, confirmatory trials can serve as a pathway for FDA Accelerated Approval. I-SPY 2 can simultaneously evaluate candidates developed by multiple companies, escalating or eliminating drugs based on immediate results. Using a single standard arm for comparison for all candidates in the trial saves significant costs over individual Phase 3 trials. All data are shared across the industry. I-SPY 2 is comparing 11 new treatments against 'standard therapy', and is estimated to complete in September 2017. By mid-2016 several treatments had been selected for later-stage trials.
Alzheimer's
Researchers under the EPAD project by the Innovative Medicines Initiative are utilizing an adaptive trial design to help speed development of Alzheimer's disease treatments, with a budget of 53 million euros. The first trial under the initiative was expected to begin in 2015 and to involve about a dozen companies. As of 2020, 2,000 people over the age of 50 have been recruited across Europe for a long-term study on the earliest stages of Alzheimer's. The EPAD project plans to use the results from this study and other data to inform 1,500-person adaptive clinical trials of drugs to prevent Alzheimer's in selected participants.
Bayesian designs
The adjustable nature of adaptive trials inherently suggests the use of Bayesian statistical analysis, which naturally accommodates the updating of information, such as that derived from the interim analyses of an adaptive trial. The problem of adaptive clinical trial design is more or less exactly the bandit problem as studied in the field of reinforcement learning.
According to FDA guidelines, an adaptive Bayesian clinical trial can involve:
Interim looks to stop or to adjust patient accrual
Interim looks to assess stopping the trial early either for success, futility or harm
Reversing the hypothesis of non-inferiority to superiority or vice versa
Dropping arms or doses or adjusting doses
Modification of the randomization rate to increase the probability that a patient is allocated to the most appropriate treatment (or arm, in the multi-armed bandit model); a sketch of one such rule follows this list
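One standard response-adaptive randomization rule is Thompson sampling, the classic Bayesian heuristic for the bandit problem mentioned above. In this Python sketch (arm names and outcome counts are invented for illustration), each arm's response rate is given a Beta posterior, one value is drawn from each posterior, and the next patient is allocated to the arm with the highest draw, so that better-performing arms are favoured without ever being chosen deterministically:

import random

# (successes, failures) observed so far on each arm -- illustrative data
arms = {"control": (8, 12), "treatment A": (10, 9), "treatment B": (4, 3)}

def next_allocation(arms):
    # Beta(1 + s, 1 + f) posterior for each arm under a uniform prior
    draws = {name: random.betavariate(1 + s, 1 + f)
             for name, (s, f) in arms.items()}
    return max(draws, key=draws.get)

print(next_allocation(arms))   # arm assigned to the next patient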
The Bayesian framework Continuous Individualized Risk Index, which is based on dynamic measurements from cancer patients, can be used effectively for adaptive trial designs. Platform trials rely heavily on Bayesian designs.
For regulatory submission of Bayesian clinical trial designs, there exist two Bayesian decision rules that are frequently used by trial sponsors. First, the posterior probability approach is mainly used to quantify the evidence addressing the question, "Does the current data provide convincing evidence in favor of the alternative hypothesis?" The key quantity of the posterior probability approach is the posterior probability that the alternative hypothesis is true, based on the data observed up to the point of analysis. Second, the predictive probability approach is mainly used to answer the question at an interim analysis: "Is the trial likely to present compelling evidence in favor of the alternative hypothesis if we gather additional data, potentially up to the maximum sample size (or current sample size)?" The key quantity of the predictive probability approach is the posterior predictive probability of trial success given the interim data.
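Both quantities are straightforward to compute under a Beta-Binomial model. In the Python sketch below, the prior, the null response rate, the success threshold and the interim data are all assumed for illustration, and SciPy supplies the Beta and beta-binomial distributions:

from scipy.stats import beta, betabinom

y, m, n_max = 12, 30, 60     # interim responses, interim n, maximum n (assumed)
p0, win = 0.25, 0.95         # null response rate and success threshold (assumed)
a, b = 1 + y, 1 + (m - y)    # Beta posterior under a uniform prior

# Posterior probability approach: P(p > p0 | data observed so far)
posterior_prob = beta.sf(p0, a, b)

# Predictive probability approach: P(trial succeeds at n_max | interim data)
rem = n_max - m
predictive_prob = sum(
    betabinom.pmf(k, rem, a, b)                  # probability of k future responses
    for k in range(rem + 1)
    if beta.sf(p0, a + k, b + rem - k) > win     # ...that end in a "successful" posterior
)
print(posterior_prob, predictive_prob)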
In most regulatory submissions, Bayesian trial designs are calibrated to possess good frequentist properties. In this spirit, and in adherence to regulatory practice, regulatory agencies typically recommend that sponsors provide the frequentist type I and II error rates for the sponsor's proposed Bayesian analysis plan. In other words, Bayesian designs for regulatory submission need, in most cases, to satisfy the type I and II error requirements in the frequentist sense. Some exceptions may arise in the context of external data borrowing, where the type I error rate requirement can be relaxed to some degree, depending on the confidence in the historical information.
Added complexity
The logistics of managing traditional, non-adaptive design clinical trials may be complex. In adaptive design clinical trials, adapting the design as results arrive adds to the complexity of design, monitoring, drug supply, data capture and randomization. Furthermore, it should be stated in the trial's protocol exactly what kind of adaptation will be permitted. Publishing the trial protocol in advance increases the validity of the final results, as it makes clear that any adaptation that took place during the trial was planned, rather than ad hoc. According to PCAST "One approach is to focus studies on specific subsets of patients most likely to benefit, identified based on validated biomarkers. In some cases, using appropriate biomarkers can make it possible to dramatically decrease the sample size required to achieve statistical significance—for example, from 1500 to 50 patients."
Adaptive designs have added statistical complexity compared to traditional clinical trial designs. For example, any multiple testing, either from looking at multiple treatment arms or from looking at a single treatment arm multiple times, must be accounted for. Another example is statistical bias, which can be more likely when using adaptive designs, and again must be accounted for.
While an adaptive design may be an improvement over a non-adaptive design in some respects (for example, expected sample size), it is not always the case that an adaptive design is a better choice overall: in some cases, the added complexity of the adaptive design may not justify its benefits. An example of this is when the trial is based on a measurement that takes a long time to observe, as this would mean having an interim analysis when many participants have started treatment but cannot yet contribute to the interim results.
Risks
Shorter trials may not reveal longer term risks, such as a cancer's return.
Resources (external links)
See also
References
Sources
External links
Gottlieb K. (2016) The FDA adaptive trial design guidance in a nutshell - A review in Q&A format for decision makers. PeerJ Preprints 4:e1825v1
Clinical trials
Drugs
Design of experiments | Adaptive design (medicine) | [
"Chemistry"
] | 3,581 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
65,297,373 | https://en.wikipedia.org/wiki/Vaccine%20Taskforce | The Vaccine Taskforce in the United Kingdom of Great Britain and Northern Ireland was set up in April 2020 by the Second Johnson ministry, in collaboration with Chief Scientific Advisor Patrick Vallance and Chief Medical Officer Professor Chris Whitty, in order to facilitate the path towards the introduction of a COVID-19 vaccine in the UK and its global distribution. The taskforce coordinated the research efforts of government with industry, academics and funding agencies in order to expedite vaccine development and deployment.
The minister responsible for the body was the Secretary of State for Health and Social Care, although the body was a joint unit of the Department of Health and Social Care and the Department for Business, Energy and Industrial Strategy. Oversight was by the Parliamentary Under-Secretary of State for COVID-19 Vaccine Deployment, and in November 2020 the first person to take this role was Nadhim Zahawi MP.
The Vaccine Taskforce closed in autumn 2022. Its role in vaccine supply was merged into the UK Health Security Agency, and its work in bringing vaccine manufacture in-country transferred to the Office for Life Sciences.
History
The body was set up in April 2020 by the Government's Chief Scientific Advisor, Sir Patrick Vallance.
On 16 May 2020, venture capitalist Kate Bingham was named to chair the body. On 1 July, Bingham told the Science and Technology Select Committee that Sarah Gilbert and "Oxford University (are) leading the world in developing a vaccine against COVID-19 and offers the best chance of having something protective against the virus as we go into winter."
On 12 September, it came to light that Sir John Bell was a member of the body.
On 14 October, the chair managed public expectation by stating that a vaccine for COVID-19 was expected to be no more efficacious than the flu vaccine, which immunises against the influenza virus with around 50 per cent success. Bingham added: "We shouldn't assume it's going to be better than a flu vaccine, because that's an equivalent – it's a mutating … respiratory virus that gets in through the nose and eyes and respiratory tract".
Speaking to BBC Scotland's The Seven on 17 October, Bingham said that the government would have to arrive at an agreement with the Joint Committee on Vaccination and Immunisation (JCVI) as to how any COVID-19 vaccine should be distributed; the staff of care homes and the elderly are likely to be prioritised. She stated that initially there would be a limited supply of any COVID-19 vaccine.
On 18 October 2020, SAGE committee member, Sir Jeremy Farrar, commented on Sophy Ridge On Sunday that the Vaccine Taskforce "has done an absolutely extraordinary job" and the country is in an "extraordinarily strong position" with regard to the line-up of possible vaccines.
A government press release of 20 October shed further light on the initial formation of the taskforce, stating that it was created under the auspices of the Department for Business, Energy and Industrial Strategy (BEIS) in May 2020. Nadhim Zahawi was appointed to the new role of Parliamentary Under-Secretary of State for COVID-19 Vaccine Deployment on 28 November 2020, with responsibility for the taskforce. On 1 March 2021, ministerial responsibility transferred from BEIS to the Secretary of State for Health and Social Care, and the taskforce became a joint unit of BEIS and the Department of Health and Social Care.
Personnel
On 14 June 2021, the microbiologist Sir Richard Sykes was appointed chair of the Vaccine Taskforce.
The director-general of the taskforce was Madelaine McTernan.
Steering group
In June 2021, the Department for Business, Energy and Industrial Strategy confirmed in response to a Freedom of Information Act request that the taskforce's formal steering group had been disbanded, with the taskforce now being managed by a senior leadership team of civil servants and experts, with Sir Richard Sykes as its external chair.
Former membership
Until November 2020, the membership of the taskforce was unknown. A Freedom of Information Act request to obtain the membership was responded to with three pages of redacted names. As of that month, the steering group was made up of:
Kate Bingham, chair
Clive Dix, deputy chair
Nick Elliott MBE, Director-General, Department for Business, Energy and Industrial Strategy (BEIS)
Ruth Todd, Director, BEIS
Madelaine McTernan, Director, UK Government Investments
Tim Colley, Director, BEIS
Dan Osgood, Director, BEIS
Divya Chadha Manek, National Institute for Health Research
Ian McCubbin, Manufacturing Advisor – former Senior Vice President for Global Manufacturing and Supply at GlaxoSmithKline
Steve Bates, Chief Executive Officer, BioIndustry Association
Professor Jonathan Van-Tam, Clinical and Public Health Adviser to the VTF, Deputy Chief Medical Officer, Department of Health and Social Care
Representatives of other government departments and public sector organisations attended VTF Steering Group meetings as required
Developments
On 20 October 2020, the Financial Times reported that potential COVID-19 vaccines would be selected for testing by the taskforce towards the end of the first quarter of 2021, but this was dependent on the outcome of "characterisation studies". The article also mentioned funding of £33.6 million being provided by government to accelerate the development of new COVID-19 vaccines by exposing human trial participants to the coronavirus in controlled conditions around 30 days after having received a shortlisted vaccine. The work of the taskforce was bolstered by a further tranche of £19.7 million in funding for clinical trial-related blood testing facilities at Public Health England, specifically at PHE Porton Down.
On 22 October, Oxford Immunotec announced that the company had been chosen by the taskforce to be the unique supplier of T cell testing for SARS-Cov-2. The move was underscored with a £3 million investment, as the Business Secretary, Alok Sharma, emphasised the importance of T cell diagnostic capabilities in assessing the performance of candidate vaccines within COVID-19 vaccine trials.
On 27 October 2020, an article by Bingham was published in The Lancet. It highlighted the taskforce's overall strategy of a diverse portfolio of vaccines, with an emphasis on those thought capable of achieving an immune response in the over-65s. From an initial pool of 240 potential vaccines, the taskforce selected six candidates which employ four varied methods: adenoviral vectors, mRNA, adjuvanted proteins, and whole inactivated viral vaccines. The article also revealed that Clive Dix was the taskforce's deputy chair. It was reported the following day that Bingham had warned in the Lancet article that first-generation COVID-19 vaccines would probably not be perfect, and would only lessen symptoms rather than prevent infection and that they "might not work for everyone or for long".
Related bodies
The Department of Health and Social Care set up an Antivirals Taskforce in April 2021, to identify and deploy post-infection antiviral medicines which could be taken by people at home. By September 2022, the name of the body had changed to the COVID-19 Antivirals and Therapeutics Taskforce.
See also
Joint Committee on Vaccination and Immunisation
COVID-19 vaccination programme in the United Kingdom
References
Public bodies and task forces of the United Kingdom government
Vaccination-related organizations
Government agencies established in 2020
2020 establishments in the United Kingdom
COVID-19 pandemic in the United Kingdom
COVID-19 vaccines | Vaccine Taskforce | [
"Biology"
] | 1,537 | [
"Vaccination-related organizations",
"Vaccination"
] |
65,297,377 | https://en.wikipedia.org/wiki/Bis%282-chloroethyl%29selenide | Bis(2-chloroethyl)selenide is the organoselenium compound with the formula . As a haloalkyl derivative of selenium, it is an analogue of bis(2-chloroethyl)sulfide, the prototypical sulfur mustard used in chemical warfare. Bis(2-chloroethyl)selenide has not been used as a chemical warfare agent, however it is still a potent alkylating agent and has potential in chemotherapy.
See also
Diethyl selenide
O-Mustard
References
Alkylating agents
Chloroethyl compounds
Mustard compounds
Selenoethers | Bis(2-chloroethyl)selenide | [
"Chemistry"
] | 133 | [
"Alkylating agents",
"Organic compounds",
"Reagents for organic chemistry",
"Organic compound stubs",
"Organic chemistry stubs"
] |
65,297,669 | https://en.wikipedia.org/wiki/Plant%20cryopreservation | Plant cryopreservation is a genetic resource conservation strategy that allows plant material, such as seeds, pollen, shoot tips or dormant buds to be stored indefinitely in liquid nitrogen. After thawing, these genetic resources can be regenerated into plants and used on the field. While this cryopreservation conservation strategy can be used on all plants, it is often only used under certain circumstances: 1) crops with recalcitrant seeds e.g. avocado, coconut 2) seedless crops such as cultivated banana and plantains or 3) crops that are clonally propagated such as cassava, potato, garlic and sweet potato.
History
The history of plant cryopreservation started in 1965, when Hirai was studying the biological activity that occurs when biological samples are frozen. Three years later, the first successful attempt at cryopreserving callus cells was made. In the following years, new methods of cryopreservation were developed, such as direct immersion, slow freezing and vitrification, and these were applied to more and more plant species and plant tissues.
Methods
Direct immersion. This is the immersion of plant material in liquid nitrogen, either directly or after desiccation. This is often done with (orthodox) seeds that already have a low moisture content, or with pollen. This method cannot be used with tissues that contain a lot of water or are sensitive to dehydration.
Slow freezing. This method relies on the mechanism of freeze dehydration to pull water out of the cells and thus prevent ice formation in the cell.
Vitrification. By freezing at an ultra-fast rate and using osmotic dehydration, the water that is still present in the cell is unable to form crystals and becomes part of a glass-like, or vitrified, solution. This method can be further split into different variants, e.g. droplet vitrification, encapsulation dehydration and plate vitrification. These techniques have been used successfully with several economically important crops, such as chrysanthemum and bleeding heart.
Hurdles and limitations
Aside from the challenges involved with cryopreservation in general, an important hurdle when developing cryopreservation protocols for the storage of plant germplasm is that plants within a species can differ in their tolerance of cryopreservation. This difference seems to be linked with the drought resistance of the different cultivars within the species. Even within the plant itself there can be noticeable differences depending on the tissue that is used, as both structure and composition play an important role during cryopreservation.
Organizations relying on plant cryopreservation
Alliance of Bioversity International and CIAT
International Potato Center
Huntington Garden
Rural Development Administration of Korea
Leibniz Institute of Plant Genetics and Crop Plant Research
References
Cryopreservation
Plant conservation | Plant cryopreservation | [
"Chemistry"
] | 584 | [
"Cryopreservation",
"Cryobiology"
] |
65,299,251 | https://en.wikipedia.org/wiki/Leptosphaeria%20orae-maris | Leptosphaeria orae-maris is a marine fungus. The chemical compound leptosphaerin has been isolated from it.
References
Fungal plant pathogens and diseases
Pleosporales
Fungus species | Leptosphaeria orae-maris | [
"Biology"
] | 44 | [
"Fungi",
"Fungus species"
] |
65,299,368 | https://en.wikipedia.org/wiki/Southwest%20National%20Primate%20Research%20Center | The Southwest National Primate Research Center (SNPRC) is a federally funded biomedical research facility affiliated with the Texas Biomedical Research Institute. The SNPRC became the seventh National Primate Research Center in 1999.
Research
The SNPRC has two scientific units: "Infectious Diseases Immunology & Control" and "Comparative Medicine & Health Outcomes". The SNPRC also has a Laboratory Core Services Division, which consists of three laboratories: immunology, research imaging, and pathology.
Primates in captivity
The center houses over 2,500 non-human primates. Among the primates held in captivity at the SNPRC are baboons, chimpanzees, common marmosets, and rhesus macaques. The center houses over 1,000 baboons, which makes it the world's largest colony of baboons used for biomedical research. Furthermore, the center sells primates from its colonies to other researchers.
Incidents and controversies
2014
In 2014, a male baboon was injured in his cage and later died after his injuries went untreated. The injuries were not reported, and the baboon received no care for several days afterwards. As a result, the baboon became emaciated, developed scabs and a large abscess on his leg, and contracted blood poisoning, from which he died.
In 2014, a macaque was placed in a new group of other macaques, and sustained several severe injuries during the following year including a tail degloving injury and multiple lacerations to the face and body. A veterinarian recommended that the group be assessed by the facility behavior team, but no assessment was ever conducted.
In 2014, the USDA cited the SNPRC for inaccuracies on their 2013 annual report. More specifically, the SNPRC did not accurately report the number of animals which had pain or distress that did not have anesthetic, analgesics or tranquilizing drugs administered.
In 2014, a juvenile baboon was killed when a guillotine door fell on the animal.
2015
In 2015, a USDA inspection found that one research protocol contained incomplete descriptions of methods for hand rearing and euthanizing neonatal animals.
In 2015, the USDA cited the SNPRC for three instances of negligence. The first instance involved the center having a supply of numerous outdated drugs and medical supplies. The second instance involved several bags of food enrichment items being left open, which may have allowed contamination and/or deterioration of the food. The third instance involved a large amount of cockroaches living in a primate housing area.
In 2015, there were two incidents in which baboons were injured or killed due to errors made by employees at the SNPRC. In one incident, a female baboon was injured after three male baboons gained access to her chute system. In the second incident, a male baboon gained access to a chute containing a female and her infant, attacked the two, and killed the infant.
2016
In 2016, a USDA inspection revealed several instances of negligence and breaches of protocol at the SNPRC. In one instance, researchers had failed to use the approved scoring sheet and euthanasia criteria for a particular study. In another instance, the center had used 45 more animals in two studies than it had been approved for. In yet another instance, the USDA found that the SNPRC's 2015 annual report was missing information regarding the standards and regulations for the sanitation of primate enclosures and the feeding of primates. In another instance, it was revealed that animals in some studies may have experienced unrelieved pain or distress prior to euthanasia.
2017
In 2017, a baboon received second-degree burns from an exposed heating pipe in its cage.
In 2017, a USDA inspection report revealed deteriorating and unmaintained conditions in some of the primate cages. More specifically, some surfaces were deteriorating and paint was eroding from one of the walls.
In 2017, two macaques sustained injuries after they opened a divider between their enclosures and comingled. This incident was the fault of a caretaker who failed to secure the latch on the divider.
2018
In April 2018, four baboons escaped from the SNPRC but were later recaptured.
2019
In 2019, a macaque sustained an injury to her finger after sticking it in a hole in her enclosure. As a result, the macaque's finger had to be amputated. Staff at the SNPRC were aware of the risk of the hole in the enclosure, but did not take protections against it on that day.
In 2019, a marmoset was severely injured and then euthanized after another marmoset gained access to its cage.
In 2019, a USDA inspection report revealed several instances of unclean and deteriorating conditions at the center. For example, the report described dirty light fixtures, peeling paint, damaged dry wall, and damaged edge of a counter.
2021
In 2021, USDA inspectors reported that the walls of numerous animal enclosures had peeling paint, which made them difficult to sanitize properly.
See also
Texas Biomedical Research Institute
References
External links
SNPRC home page
Primate research centers
Animal testing on non-human primates
Medical research institutes in Texas
Biomedical research foundations | Southwest National Primate Research Center | [
"Engineering",
"Biology"
] | 1,097 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
65,299,641 | https://en.wikipedia.org/wiki/Committee%20on%20Earth%20Observation%20Satellites |
The Committee on Earth Observation Satellites (CEOS) is an international organization created in 1984 around the topic of Earth observation satellites.
As of 2023, it has 34 national space agencies as regular members and 29 associate members. Space agencies that are regular members include those from Argentina, Australia, Brazil, Canada, China, France, Germany, India, Italy, Japan, South Korea, the Netherlands, Russia, South Africa, Spain, Thailand, Turkey, Ukraine, the United Kingdom and the United States.
National space agencies as regular members
Italy - Agenzia Spaziale Italiana (ASI)
Canada - Canadian Space Agency (CSA)
France - Centre National d'Etudes Spatiales (CNES); Europe - European Space Agency (ESA)
Spain - Centro para Desarrollo Tecnólogico Industrial (CDTI)
China - China Academy of Space Technology (CAST); China Center for Resources Satellite Data and Applications (CRESDA); National Remote Sensing Center of China (NRSCC); National Satellite Meteorological Center/China Meteorological Administration (NSMC/CMA)
Argentina - Comisión Nacional de Actividades Espaciales (CONAE)
Australia - Commonwealth Scientific and Industrial Research Organisation (CSIRO)
Germany - Deutsches Zentrum für Luft-und Raumfahrt (DLR); Europe - European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT)
European Union - European Commission (EC)
India - Indian Space Research Organisation (ISRO)
Brazil - Instituto Nacional de Pesquisas Espaciais (INPE)
Japan - Japan Aerospace Exploration Agency/Ministry of Education, Culture, Sports, Science, and Technology (JAXA/MEXT)
South Korea - Korea Aerospace Research Institute (KARI); Korea Meteorological Administration (KMA); National Institute of Environmental Research (NIER)
United States - National Aeronautics & Space Administration (NASA); National Oceanic and Atmospheric Administration (NOAA); United States Geological Survey (USGS)
Nigeria - National Space Research and Development Agency (NASRDA)
Netherlands - Netherlands Space Office (NSO)
Russia - Roscosmos State Corporation for Space Activities (Roscosmos); Russian Federal Service for Hydrometeorology and Environmental Monitoring (ROSHYDROMET)
South Africa - South African National Space Agency (SANSA)
Ukraine - State Space Agency of Ukraine (SSAU)
Turkey - TÜBİTAK Space Technologies Research Institute (TÜBİTAK UZAY)
United Arab Emirates - United Arab Emirates Space Agency (UAESA)
United Kingdom - United Kingdom Space Agency (UKSA)
Vietnam - Vietnam Academy of Science and Technology / Vietnam National Space Center (VAST/VNSC)
See also
Global Change Master Directory
Group on Earth Observations
List of Earth observation satellites
References
Further reading
External links
International scientific organizations
Space organizations
Remote sensing organizations | Committee on Earth Observation Satellites | [
"Astronomy"
] | 543 | [
"Space organizations",
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
65,301,782 | https://en.wikipedia.org/wiki/Phenylacetylrinvanil | Phenylacetylrinvanil (IDN-5890) is a synthetic analogue of capsaicin which acts as a potent and selective agonist for the TRPV1 receptor, with slightly lower potency than resiniferatoxin, though still around 300 times the potency of capsaicin. It is an amide of vanillylamine and ricinoleic acid, with the hydroxyl group on ricinoleic acid esterified with phenylacetic acid. It is used to study the function of the TRPV1 receptor and its downstream actions, and has also shown anti-cancer effects in vitro.
References
Capsaicinoids
Transient receptor potential channel modulators | Phenylacetylrinvanil | [
"Chemistry"
] | 148 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
65,302,088 | https://en.wikipedia.org/wiki/NGC%20548 | NGC 548, also occasionally referred to as PGC 5326 or UGC 1010, is an elliptical galaxy in the constellation Cetus. It is located approximately 244 million light-years from the Solar System and was discovered on 2 November 1867 by American astronomer George Mary Searle.
Observation history
Searle discovered NGC 548 at Harvard Observatory using a 15" Merz refractor telescope. His given micrometric position also matches UGC 1010 and PGC 5326.
See also
Elliptical galaxy
List of NGC objects (1–1000)
Cetus
References
External links
SEDS
Elliptical galaxies
Cetus
0548
5326
Astronomical objects discovered in 1867
Discoveries by George Searle | NGC 548 | [
"Astronomy"
] | 136 | [
"Cetus",
"Constellations"
] |
65,302,904 | https://en.wikipedia.org/wiki/Mark%20O.%20Robbins | Mark Owen Robbins was an American condensed matter physicist who specialized in computational studies of friction, fracture and adhesion, with a particular focus on nanotribology, contact mechanics, and polymers. He was a professor in the department of physics and astronomy at Johns Hopkins University at the time of his death.
Early life and education
Robbins was born in Indianapolis, Indiana, and was raised in Newton, Massachusetts. After completing his BA and MA degrees in physics at Harvard University in 1977, he spent a year as a Churchill Fellow at Cambridge University. He completed a Ph.D. in physics at the University of California, Berkeley in 1983.
Career
After graduating from UC Berkeley, Robbins held a three-year appointment as a postdoctoral research fellow at Exxon Corporation's research science laboratory in New Jersey. In 1986, he joined the faculty of the department of physics and astronomy at Johns Hopkins University, where he was promoted to Associate (1988) and Full (1992) professor. He served as chair of the advisory board of the Kavli Institute of Theoretical Physics (KITP) at the University of California, Santa Barbara from 2007 to 2008, and chaired the Gordon Research Conference on Tribology in 2010. He also served as the associate director for the Institute for Data Intensive Engineering and Science.
Research
Robbins was known for his work in the application of molecular simulations to the non-equilibrium phenomena of friction, fracture and adhesion. A recurring theme in his research was the elucidation of new physics on the atomic/molecular scale that could not be described by conventional continuum methods, and the use of scaling relations to predict a physical system's behavior at one length or time scale based on how it behaves at another. The scope of his research included the microscopic origins of macroscale friction laws, shear flow of fluids in nanoscale confinement, the toughness of polymer adhesives and the stiffness of elastic contacts.
Honors and awards
1986 Presidential Young Investigator Award
1987 Sloan Foundation Fellowship
1999 Fellow of the American Physical Society, "For his contributions to our understanding of the molecular origins of friction, lubrication, spreading and adhesion."
2018 Fellow of the American Association for the Advancement of Science, "For using simulations to reveal the microscopic origins of macroscopic behavior of matter"
Personal life
Robbins was born in Indianapolis, Indiana. He was the eldest of five children raised in Newton, Massachusetts by Dorothy (Bigelow) and Owen Robbins. He married Dr. Patricia McGuiggan, a materials science research professor, in 1993. They were married until his death, and had two children. After traveling to Brazil in the 1980s, Robbins developed an interest in orchids and began collecting and cultivating them at home. By 2003, his orchid collection had grown into the hundreds, and he had created two new varieties that he named after his children. He died at his home in Baltimore, Maryland, on August 13, 2020.
References
External links
Mark Robbins homepage
Harvard University alumni
Fellows of the American Physical Society
21st-century American physicists
2020 deaths
University of California, Berkeley alumni
20th-century American physicists
Fellows of the American Association for the Advancement of Science
1955 births
Tribologists
Scientists from Indianapolis | Mark O. Robbins | [
"Materials_science"
] | 645 | [
"Tribology",
"Tribologists"
] |
65,303,257 | https://en.wikipedia.org/wiki/Fire%20drill%20%28tool%29 | A fire drill, sometimes called fire-stick, is a device to start a fire by friction between a rapidly rotating wooden rod (the spindle or shaft) and a cavity on a stationary wood piece (the hearth or fireboard).
Composition
The device can be any of the ancient types of hand-operated drills, including a hand drill, bow drill (or strap drill), or pump drill. The spindle is usually 1–2 cm thick and ends in a dull point. The spindle and fireboard are typically made from dry, medium-soft, non-resinous wood such as spruce, cedar, balsam, yucca, aspen, basswood, buckeye, willow, tamarack, or similar. Native American peoples along the western coast of the United States traditionally made use of dead wood from the buckeye tree for preparing the fireboard.
Principle
Whatever the method used to drive the shaft, its lower end is placed into a shallow cavity of the fireboard with a "V" notch cut into it. The primary goal is to generate heat by the friction between the tip of the shaft and the fireboard, controlled by rotation speed and pressure. The heat eventually turns the wood at the point of contact into charcoal, which is ground to a powder by the friction and collects in the "V" notch. Continued operation eventually ignites the charcoal dust, producing a tiny ember, which can be used to start a fire in a "tinder bundle" (a nest of stringy, fluffy, and combustible material).
Other methods of creating an ember include drilling partway into a hearth made by lashing two sticks together from one side, and then drilling from the other side to meet this hole; or using the area where two branches separate. This is to keep the coal off wet or snow-covered ground.
See also
Control of fire by early humans
Fire piston
Kumano Taisha, known for the "Kumano firestarter"
Campfire
References
Relevant literature
Philips-Chan, A. Our Stories Etched in Ivory: The Smithsonian Collections of Engraved Drill Bows with Stories from the Arctic.
Fire making
Mechanical hand tools | Fire drill (tool) | [
"Physics"
] | 441 | [
"Mechanics",
"Mechanical hand tools"
] |
66,522,114 | https://en.wikipedia.org/wiki/Crystal%20Kewe | Crystal Kewe is a web and app developer based in Papua New Guinea and the CEO and CTO of Crysan Technology Ltd.
Early life and education
Kewe's interest in programming started with an interest in video gaming, and around age 12 she started teaching herself to program, writing software in Python. She attended Paradise High School in Papua New Guinea up until grade 10, in 2014, after which she stopped going to school and continued pursuing a self-created education in programming.
Kewe intends to seek a college degree in New Zealand in 2022.
Advocacy
Kewe has been a guest lecturer at the University of Papua New Guinea, and she currently serves on the academic advisory board for Papua New Guinea's International Training Institute. She says that supportive infrastructure is the most important part of successful educational systems, and that institutions do not fairly support different groups of people, highlighting girls as a marginalized group in scientific and technical education.
Career
In 2014, Kewe and her father co-founded the technology company Crysan when she was 15, making her one of the world's youngest CEOs of a software development company. The company has since expanded to include a team of over 20 employees, some of whom are based in Southeast Asia, Europe, and South America, and has partnered with the Papua New Guinea government on education initiatives as well as building a platform designed to provide transparency surrounding public development funding.
Apec App Challenge
Kewe and her father won first prize in the 24-hour hackathon which is a partnership between Apec, The Asia Foundation, and Google. The challenge was to “build an app that would help bilum artisans and entrepreneurs in Papua New Guinea," as bilum makers often face difficult hurdles selling their crafts. The team won for the conception and development of an app called Biluminous which is an e-commerce platform that features bilum makers and highlights their process while connecting them directly with customers, and is one of the first mobile payment providers in Papua New Guinea.
Awards
2018 APEC Digital Prosperity Award
2019 Papua New Guinea Security Congress Award for Excellence in Individual Achievement
Coral Sea Cable System Repeater Dedicatee, Dec 2019
2019 Westpac Outstanding Women Awards - IBBM Young Achievers Award
References
Living people
2000s births
Papua New Guinean women in business
Software engineers | Crystal Kewe | [
"Engineering"
] | 466 | [
"Software engineering",
"Software engineers"
] |
66,522,474 | https://en.wikipedia.org/wiki/Ascosphaera%20apis | Ascosphaera apis is a species of fungus belonging to the family Ascosphaeraceae. It was one of the first entomopathogen genomes to be sequenced. It has a cosmopolitan distribution.
It causes chalkbrood disease in bees, which rarely kills infected colonies but can weaken them and lead to reduced honey yields and susceptibility to other pests and diseases.
References
Onygenales
Fungus species | Ascosphaera apis | [
"Biology"
] | 91 | [
"Fungi",
"Fungus species"
] |
66,523,160 | https://en.wikipedia.org/wiki/Meme%20stock | A meme stock is a stock that gains popularity among retail investors through social media. The popularity of meme stocks is generally based on internet memes shared among traders, on platforms such as Reddit's r/wallstreetbets. Investors in such stocks are often young and inexperienced investors. As a result of their popularity, meme stocks often trade at prices that are above their estimated valueas based on fundamental analysis and are known for being extremely speculative and volatile.
History
Interest in meme stocks started in 2020, in what the U.S. Securities and Exchange Commission has called a "meme stock phenomenon". The stock of American video game retailer GameStop has been one of the most popular meme stocks, with mass purchases of the stock leading to a short squeeze on GameStop in early 2021. The stock of entertainment company AMC is also cited as a prominent example. Other examples include the stocks of Bed, Bath & Beyond, National Beverage, and Koss. The distinction between a meme stock and a non-meme stock is not always clear; for example, Tesla has some of the characteristics of a meme stock: a high price-earnings ratio and being frequently discussed by amateur retail traders on social media, yet some professional analysts do not consider it to be overpriced.
Interest in meme stocks is associated with trading platform Robinhood, which pioneered commission-free trading. According to The New York Times, "Robinhood was the tool of choice for traders in the original meme stocks".
Some meme stocks have often become popular among retail investors after being targeted by short-selling professional investors, such as hedge funds, with participants having the explicit aim of causing losses among those firms. News coverage has described the choice to purchase such stocks as an act of rebellion intended to humble short-selling professional investors.
According to an SEC report, while some hedge funds had big losses, the meme stocks phenomenon did not widely impact hedge funds. The SEC staff report also stated, "some investors that had been invested in the target stocks prior to the market events benefitted unexpectedly from the price rises, while others, including quantitative and high-frequency hedge funds, joined the market rally to trade profitably." By June 2021, according to Financial Times, some hedge funds were systematically analyzing meme stocks.
On July 5, 2024, Reddit users speculated that Keith Gill, who was previously involved in the GameStop meme stock fad, was about to invest in headphone maker Koss Corporation around July 4 (US Independence Day), based on a May post by him in which a microphone emoji appeared with a US flag in the background. As a result of the speculation, a single Koss share rose to $18.50 before ending at $13.35 in that day's session.
See also
Day trading
Dogecoin
Do-it-yourself investing
Keith Gill
Meme man
Meme coin
Retail investor
YOLO (aphorism)
References
2020s in economic history
Internet culture
Internet memes
Reddit
Robinhood (company)
Social media | Meme stock | [
"Technology"
] | 624 | [
"Computing and society",
"Social media"
] |
66,525,073 | https://en.wikipedia.org/wiki/Phenylmethanediol | Phenylmethanediol is an organic compound that is a geminal diol, the hydrate of benzaldehyde. It is a short-lived intermediate in some chemical reactions, such as oxidations of toluene and benzaldehyde and the reduction of benzoic acid.
References
Geminal diols
Benzyl compounds | Phenylmethanediol | [
"Chemistry"
] | 71 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
66,527,786 | https://en.wikipedia.org/wiki/Video%20super-resolution | Video super-resolution (VSR) is the process of generating high-resolution video frames from the given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore more fine details while saving coarse ones, but also to preserve motion consistency.
There are many approaches to this task, but the problem remains popular and challenging.
Mathematical explanation
Most research considers the degradation process of frames as

$Y = (X * k)\downarrow_s + n$

where:
$X$ — original high-resolution frame sequence,
$k$ — blur kernel,
$*$ — convolution operation,
$\downarrow_s$ — downscaling operation,
$n$ — additive noise,
$Y$ — low-resolution frame sequence.

Super-resolution is the inverse operation, so the problem is to estimate a frame sequence $\hat{X}$ from the frame sequence $Y$ so that $\hat{X}$ is close to the original $X$. The blur kernel, downscaling operation and additive noise should be estimated for a given input to achieve better results.
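A minimal sketch of this degradation model in Python/NumPy, assuming a simple Gaussian blur kernel and integer-stride downscaling — both illustrative choices, not parts of any specific VSR pipeline:

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(frame_hr, kernel, scale=4, noise_sigma=0.01):
    """Simulate one low-resolution frame: Y = (X * k), downscaled, plus n."""
    blurred = convolve(frame_hr, kernel, mode="reflect")  # X * k
    low_res = blurred[::scale, ::scale]                   # downscaling by s
    noise = np.random.normal(0.0, noise_sigma, low_res.shape)
    return low_res + noise                                # additive noise n

# Illustrative 7x7 Gaussian kernel (an assumption; real kernels are estimated)
ax = np.arange(-3, 4)
g = np.exp(-ax**2 / (2 * 1.5**2))
kernel = np.outer(g, g)
kernel /= kernel.sum()

hr_frame = np.random.rand(256, 256)   # stand-in for a ground-truth frame
lr_frame = degrade(hr_frame, kernel)
print(lr_frame.shape)                 # (64, 64)
```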
Video super-resolution approaches tend to have more components than their image counterparts, as they need to exploit the additional temporal dimension. Complex designs are not uncommon. The most essential components of VSR are guided by four basic functionalities: Propagation, Alignment, Aggregation, and Upsampling.
Propagation refers to the way in which features are propagated temporally
Alignment concerns the spatial transformation applied to misaligned images/features
Aggregation defines the steps to combine aligned features
Upsampling describes the method to transform the aggregated features to the final output image
Methods
When working with video, temporal information can be used to improve upscaling quality. Single-image super-resolution methods can be used too, generating high-resolution frames independently from their neighbours, but this is less effective and introduces temporal instability. A few traditional methods consider the video super-resolution task as an optimization problem. In recent years, deep learning-based methods for video upscaling have outperformed traditional ones.
Traditional methods
There are several traditional methods for video upscaling. These methods try to exploit natural image priors and effectively estimate the motion between frames. The high-resolution frame is reconstructed based on both the priors and the estimated motion.
Frequency domain
First, the low-resolution frame is transformed into the frequency domain. The high-resolution frame is estimated in this domain. Finally, the resulting frame is transformed back into the spatial domain.
Some methods use the Fourier transform, which helps to extend the spectrum of the captured signal and thus increase resolution. There are different approaches for these methods: using weighted least squares theory, the total least squares (TLS) algorithm, or space-varying or spatio-temporal varying filtering.
Other methods use the wavelet transform, which helps to find similarities in neighboring local areas. Later, the second-generation wavelet transform was used for video super-resolution.
Spatial domain
Iterative back-projection methods assume some function between low-resolution and high-resolution frames and try to improve this guessed function in each step of an iterative process. Projections onto convex sets (POCS), which define a specific cost function, can also be used for iterative methods.
Iterative adaptive filtering algorithms use a Kalman filter to estimate the transformation from the low-resolution frame to the high-resolution one. To improve the final result, these methods consider temporal correlation among low-resolution sequences. Some approaches also consider temporal correlation among the high-resolution sequence. A common way to approximate the Kalman filter is to use least mean squares (LMS). One can also use steepest descent, least squares (LS), or recursive least squares (RLS).
Direct methods estimate motion between frames, upscale a reference frame, and warp neighboring frames to the high-resolution reference. To construct the result, these upscaled frames are fused together by a median filter, weighted median filter, adaptive normalized averaging, AdaBoost classifier, or SVD-based filters.
Non-parametric algorithms join motion estimation and frame fusion into one step, performed by considering patch similarities. Weights for fusion can be calculated by nonlocal-means filters. To strengthen the search for similar patches, one can use a rotation-invariant similarity measure or an adaptive patch size. Calculating intra-frame similarity helps to preserve small details and edges. Parameters for fusion can also be calculated by kernel regression.
Probabilistic methods use statistical theory to solve the task. Maximum likelihood (ML) methods estimate the most probable image. Another group of methods uses maximum a posteriori (MAP) estimation. The regularization parameter for MAP can be estimated by Tikhonov regularization. Markov random fields (MRF) are often used along with MAP and help to preserve similarity in neighboring patches. Huber MRFs are used to preserve sharp edges. A Gaussian MRF removes noise but can smooth some edges.
Deep learning based methods
Aligned by motion estimation and motion compensation
In approaches with alignment, neighboring frames are first aligned with the target one. Frames can be aligned by performing motion estimation and motion compensation (MEMC) or by using deformable convolution (DC). Motion estimation gives information about the motion of pixels between frames. Motion compensation is a warping operation, which aligns one frame to another based on motion information. Examples of such methods:
Deep-DE (deep draft-ensemble learning) generates a series of SR feature maps and then process them together to estimate the final frame
VSRnet is based on SRCNN (model for single image super resolution), but takes multiple frames as input. Input frames are first aligned by the Druleas algorithm
VESPCN uses a spatial motion compensation transformer module (MCT), which estimates and compensates motion. Then a series of convolutions performed to extract feature and fuse them
DRVSR (detail-revealing deep video super-resolution) consists of three main steps: motion estimation, motion compensation and fusion. The motion compensation transformer (MCT) is used for motion estimation. The sub-pixel motion compensation layer (SPMC) compensates motion. The fusion step uses an encoder-decoder architecture and a ConvLSTM module to unite information from both spatial and temporal dimensions
RVSR (robust video super-resolution) have two branches: one for spatial alignment and another for temporal adaptation. The final frame is a weighted sum of branches' output
FRVSR (frame recurrent video super-resolution) estimate low-resolution optical flow, upsample it to high-resolution and warp previous output frame by using this high-resolution optical flow
STTN (the spatio-temporal transformer network) estimate optical flow by U-style network based on Unet and compensate motion by a trilinear interpolation method
SOF-VSR (super-resolution optical flow for video super-resolution) calculate high-resolution optical flow in coarse-to-fine manner. Then the low-resolution optical flow is estimated by a space-to-depth transformation. The final super-resolution result is gained from aligned low-resolution frames
TecoGAN (the temporally coherent GAN) consists of a generator and a discriminator. The generator estimates LR optical flow between consecutive frames and from this approximates HR optical flow to yield the output frame. The discriminator assesses the quality of the generator
TOFlow (task-oriented flow) is a combination of optical flow network and reconstruction network. Estimated optical flow is suitable for a particular task, such as video super resolution
MMCNN (the multi-memory convolutional neural network) aligns frames with target one and then generates the final HR-result through the feature extraction, detail fusion and feature reconstruction modules
RBPN (the recurrent back-projection network). Each recurrent projection module takes as input features from the previous frame, features from the sequence of frames, and the optical flow between neighboring frames
MEMC-Net (the motion estimation and motion compensation network) uses both motion estimation network and kernel estimation network to warp frames adaptively
RTVSR (real-time video super-resolution) aligns frames with estimated convolutional kernel
MultiBoot VSR (the multi-stage multi-reference bootstrapping method) aligns frames and then have two-stage of SR-reconstruction to improve quality
BasicVSR aligns frames with optical flow and then fuse their features in a recurrent bidirectional scheme
IconVSR is a refined version of BasicVSR with a recurrent coupled propagation scheme
UVSR (unrolled network for video super-resolution) adapted unrolled optimization algorithms to solve the VSR problem
Aligned by deformable convolution
Another way to align neighboring frames with target one is deformable convolution. While usual convolution has fixed kernel, deformable convolution on the first step estimate shifts for kernel and then do convolution. Examples of such methods:
EDVR (The enhanced deformable video restoration) can be divided into two main modules: the pyramid, cascading and deformable (PCD) module for alignment and the temporal-spatial attention (TSA) module for fusion
DNLN (the deformable non-local network) has an alignment module based on deformable convolution (with the hierarchical feature fusion block (HFFB) for better quality) and a non-local attention module
TDAN (The temporally deformable alignment network) consists of an alignment module and a reconstruction module. Alignment performed by deformable convolution based on feature extraction and alignment
Multi-Stage Feature Fusion Network for Video Super-Resolution uses the multi-scale dilated deformable convolution for frame alignment and the Modulative Feature Fusion Branch to integrate aligned frames
Aligned by homography
Some methods align frames by calculated homography between frames.
TGA (temporal group attention) divides input frames into N groups depending on time difference and extracts information from each group independently. A fast spatial alignment module based on homography is used to align frames
Spatial non-aligned
Methods without alignment do not perform alignment as a first step and just process input frames.
VSRResNet, like a GAN, consists of a generator and a discriminator. The generator upsamples input frames, extracts features and fuses them. The discriminator assesses the quality of the resulting high-resolution frames
FFCVSR (frame and feature-context video super-resolution) takes unaligned low-resolution frames together with previously output high-resolution frames to simultaneously restore high-frequency details and maintain temporal consistency
MRMNet (the multi-resolution mixture network) consists of three modules: bottleneck, exchange, and residual. Bottleneck unit extract features that have the same resolution as input frames. Exchange module exchange features between neighboring frames and enlarges feature maps. Residual module extract features after exchange one
STMN (the spatio-temporal matching network) use discrete wavelet transform to fuse temporal features. Non-local matching block integrates super-resolution and denoising. At the final step, SR-result is got on the global wavelet domain
MuCAN (the multi-correspondence aggregation network) uses temporal multi-correspondence strategy to fuse temporal features and cross-scale nonlocal-correspondence to extract self-similarities in frames
3D convolutions
While 2D convolutions work on the spatial domain, 3D convolutions use both spatial and temporal information. They perform motion compensation and maintain temporal consistency.
DUF (the dynamic upsampling filters) uses deformable 3D convolution for motion compensation. The model estimates kernels for specific input frames
FSTRN (The fast spatio-temporal residual network) includes a few modules: LR video shallow feature extraction net (LFENet), LR feature fusion and up-sampling module (LSRNet) and two residual modules: spatio-temporal and global
3DSRnet (The 3D super-resolution network) uses 3D convolutions to extract spatio-temporal information. Model also has a special approach for frames, where scene change is detected
MP3D (the multi-scale pyramid 3D convolutional network) uses 3D convolution to extract spatial and temporal features simultaneously, which then passed through reconstruction module with 3D sub-pixel convolution for upsampling
DMBN (the dynamic multiple branch network) has three branches to exploit information from multiple resolutions. Finally, information from branches fuse dynamically
Recurrent neural networks
Recurrent convolutional neural networks perform video super-resolution by storing temporal dependencies.
STCN (the spatio-temporal convolutional network) extract features in the spatial module, pass them through the recurrent temporal module and final reconstruction module. Temporal consistency is maintained by long short-term memory (LSTM) mechanism
BRCN (the bidirectional recurrent convolutional network) has two subnetworks: with forward fusion and backward fusion. The result of the network is a composition of two branches' output
RISTN (the residual invertible spatio-temporal network) consists of spatial, temporal and reconstruction module. Spatial module composed of residual invertible blocks (RIB), which extract spatial features effectively. The output of the spatial module is processed by the temporal module, which extracts spatio-temporal information and then fuses important features. The final result is calculated in the reconstruction module by deconvolution operation
RRCN (the residual recurrent convolutional network) is a bidirectional recurrent network, which calculates a residual image. Then the final result is gained by adding a bicubically upsampled input frame
RRN (the recurrent residual network) uses a recurrent sequence of residual blocks to extract spatial and temporal information
BTRPN (the bidirectional temporal-recurrent propagation network) use bidirectional recurrent scheme. Final-result combined from two branches with channel attention mechanism
RLSP (recurrent latent state propagation) fully convolutional network cell with highly efficient propagation of temporal information through a hidden state
RSDN (the recurrent structure-detail network) divide input frame into structure and detail components and process them in two parallel streams
Non-local methods
Non-local methods extract both spatial and temporal information. The key idea is to use all possible positions as a weighted sum. This strategy may be more effective than local approaches.
PFNL (the progressive fusion non-local method) extracts spatio-temporal features by non-local residual blocks, then fuses them by a progressive fusion residual block (PFRB). The result of these blocks is a residual image. The final result is gained by adding a bicubically upsampled input frame
NLVSR (the novel video super‐resolution network) aligns frames with target one by temporal‐spatial non‐local operation. To integrate information from aligned frames an attention‐based mechanism is used
MSHPFNL also incorporates multi-scale structure and hybrid convolutions to extract wide-range dependencies. To avoid some artifacts like flickering or ghosting, they use generative adversarial training
Metrics
The common way to estimate the performance of video super-resolution algorithms is to use a few metrics:
PSNR (peak signal-to-noise ratio) calculates the difference between two corresponding frames based on the mean squared error (MSE); a minimal computation is sketched after this list
SSIM (Structural similarity index) measures the similarity of structure between two corresponding frames
IFC (Information Fidelity Criterion) shows information similarity with the reference frame
MOVIE (Motion-based Video Integrity Evaluation index) integrates explicit motion information by estimating distortions along motion trajectories
VMAF (Video Multimethod Assessment Fusion) predicts subjective video quality based on a reference and distorted video sequence
VIF (Visual Information Fidelity) is a full-reference image quality assessment index based on natural scene statistics and the notion of image information extracted by the human visual system
LPIPS (Learned Perceptual Image Patch Similarity) compares the perceptual similarity of frames based on high-order image structure
tOF measures pixel-wise motion similarity with reference frame based on optical flow
tLP calculates how LPIPS changes from frame to frame in comparison with the reference sequence
FSIM (Feature Similarity Index for Image Quality) uses phase congruency as the primary feature to measure the similarity between two corresponding frames.
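As referenced in the PSNR item above, a minimal sketch of the metric, assuming frames stored as float arrays scaled to [0, 1]; libraries such as scikit-image provide reference implementations of PSNR and SSIM:

```python
import numpy as np

def psnr(reference, distorted, peak=1.0):
    """Peak signal-to-noise ratio between two corresponding frames, in dB."""
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.rand(720, 1280)
out = np.clip(ref + np.random.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(f"PSNR: {psnr(ref, out):.2f} dB")
```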
Currently, there are not many objective metrics that verify a video super-resolution method's ability to restore real details. Research is ongoing in this area.
Another way to assess the performance of a video super-resolution algorithm is to organize a subjective evaluation. People are asked to compare the corresponding frames, and the final mean opinion score (MOS) is calculated as the arithmetic mean over all ratings.
Datasets
While deep learning approaches of video super-resolution outperform traditional ones, it's crucial to form a high-quality dataset for evaluation. It's important to verify models' ability to restore small details, text, and objects with complicated structure, to cope with big motion and noise.
Benchmarks
A few benchmarks in video super-resolution were organized by companies and conferences. The purposes of such challenges are to compare diverse algorithms and to find the state-of-the-art for the task.
NTIRE 2019 Challenge
The NTIRE 2019 Challenge was organized by CVPR and proposed two tracks for video super-resolution: clean (only bicubic degradation) and blur (blur added first). Each track had more than 100 participants, and 14 final results were submitted.
Dataset REDS was collected for this challenge. It consists of 30 videos of 100 frames each. The resolution of ground-truth frames is 1280×720. The tested scale factor is 4. To evaluate models' performance, PSNR and SSIM were used. The best participants' results are presented in the table:
Youku-VESR Challenge 2019
The Youku-VESR Challenge was organized to check models' ability to cope with the degradation and noise that are real for the Youku online video-watching application. The proposed dataset consists of 1000 videos, each 4–6 seconds long. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 4. PSNR and VMAF metrics were used for performance evaluation. Top methods are presented in the table:
AIM 2019 Challenge
The challenge was held by ECCV and had two tracks on video extreme super-resolution: the first track checks fidelity to the reference frame (measured by PSNR and SSIM); the second track checks the perceptual quality of videos (MOS).
The dataset consists of 328 video sequences of 120 frames each. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 16. Top methods are presented in the table:
AIM 2020 Challenge
The challenge's conditions are the same as in the AIM 2019 Challenge. Top methods are presented in the table:
MSU Video Super-Resolution Benchmark
The MSU Video Super-Resolution Benchmark was organized by MSU and proposed three types of motion, two ways to lower resolution, and eight types of content in the dataset. The resolution of ground-truth frames is 1920×1280. The tested scale factor is 4. 14 models were tested. To evaluate models' performance, PSNR and SSIM were used with shift compensation. A few new metrics were also proposed: ERQAv1.0, QRCRv1.0, and CRRMv1.0. Top methods are presented in the table:
MSU Super-Resolution for Video Compression Benchmark
The MSU Super-Resolution for Video Compression Benchmark was organized by MSU. This benchmark tests models' ability to work with compressed videos. The dataset consists of 9 videos, compressed with different video codec standards and different bitrates. Models are ranked by BSQ-rate over subjective score. The resolution of ground-truth frames is 1920×1080. The tested scale factor is 4. 17 models were tested. 5 video codecs were used to compress the ground-truth videos. Top combinations of super-resolution methods and video codecs are presented in the table:
Application
In many areas of working with video, we deal with different types of video degradation, including downscaling. The resolution of video can be degraded because of imperfections of measuring devices, such as optical degradation and the limited size of camera sensors. Bad light and weather conditions add noise to video. Object and camera motion also decrease video quality.
Super Resolution techniques help to restore the original video. It's useful in a wide range of applications, such as
video surveillance (to improve video captured from the camera and recognize car numbers and faces)
medical imaging (to discover better some organs or tissues for clinical analysis and medical intervention)
forensic science (to help in the investigation during the criminal procedure)
astronomy (to improve quality of video of stars and planets)
remote sensing (to alleviate observation of an object)
microscopy (to strengthen microscopes' capabilities)
It also helps to solve the tasks of object detection and face and character recognition (as a preprocessing step). Interest in super-resolution is growing with the development of high-definition computer displays and TVs.
Video super-resolution finds its practical use in some modern smartphones and cameras, where it is used to reconstruct digital photographs.
Reconstructing details in digital photographs is a difficult task, since these photographs are already incomplete: the camera sensor elements measure only the intensity of the light, not directly its color. A process called demosaicing is used to reconstruct the photos from partial color information. A single frame doesn't give us enough data to fill in the missing colors; however, we can recover some of the missing information from multiple images taken one after the other. This process is known as burst photography and can be used to restore a single image of good quality from multiple sequential frames.
When we capture a lot of sequential photos with a smartphone or handheld camera, there is always some movement present between the frames because of the hand motion. We can take advantage of this hand tremor by combining the information on those images. We choose a single image as the "base" or reference frame and align every other frame relative to it.
There are situations where hand motion is simply not present because the device is stabilized (e.g. placed on a tripod). There is a way to simulate natural hand motion by intentionally moving the camera slightly. The movements are extremely small, so they don't interfere with regular photos. You can observe these motions on a Google Pixel 3 phone by holding it perfectly still (e.g. pressing it against a window) and maximally pinch-zooming the viewfinder.
See also
Super-resolution imaging
Image resolution
High definition video
Display resolution
Ultra-high-definition television
Oversampling
High-dynamic-range video
References
Signal processing
Film and video technology
Image processing | Video super-resolution | [
"Technology",
"Engineering"
] | 4,624 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
66,527,789 | https://en.wikipedia.org/wiki/Clavaria%20pampaeana | Clavaria pampaeana is a species of fungus belonging to the family Clavariaceae. Growing in open soil, it is shaped like a small white club, growing in scattered groups or sometimes fused. The stem may appear translucent near the base.
Clavaria vermicularis is similar but usually larger.
References
Clavariaceae
Taxa named by Carlo Luigi Spegazzini
Fungi described in 1881
Fungus species | Clavaria pampaeana | [
"Biology"
] | 87 | [
"Fungi",
"Fungus species"
] |
66,529,313 | https://en.wikipedia.org/wiki/Laser%20induced%20white%20emission | Laser-induced white emission (LIWE) is a broadband light in the visible spectral range. This phenomenon was reported for the first time by Jiwei Wang and Peter Tanner in 2010 for fully concentrated lanthanide oxides in vacuum, excited by a focused beam of infrared laser diode operating in continuous wave (CW) mode. The white light emission intensity is exponentially dependent on excitation power density and pressure surrounding the samples. It was found that light emission is assisted by photocurrent generation and hot electron emission.
Outline
In 2010, Tanner and Wang demonstrated an innovative method of white light generation from lanthanide materials held under strictly defined conditions, by exciting them with a concentrated beam from an infrared (IR) laser diode. Most importantly, this emission is characterized by a wide band covering the entire visible range, in contrast to previously known light sources, which generate white light by mixing several spectral lines. The discovery was interesting enough to attract the attention of many research groups around the world. Intensive work has begun to explore the mechanism responsible for generating this type of emission. As a result, the number of scientific publications on broadband white luminescence has been steadily increasing since 2010.
Materials capable of LIWE generation
The broadband, laser-induced white emission was reported in a number of different materials. Most common are inorganic hosts. These may be:
fully doped (Er2O3, LiYbP4O12, LiYbF4, NdAlO3, PrO2);
partially doped (Y2O3:Nd3+, Yb3+:Y2Si2O7, Y4Zr3O12:Yb3+, ZrO2:Yb3+, ZnSe:Yb, Gd2O3:Yb3+, YVO4:Yb3+,Er3+, Eu3+:Sr2CeO4, Yb3+:YAG);
or not doped (Y2O3, Al2O3, GaN, Y2Si2O7.)
with lanthanide or transition (Cr3+:Y3A5O12, CaCuSiO4O10, Gd3Ga5O12:Cr3+) metal ions.
There are also reports in the literature considering oxide matrices containing gold (Nd2O3/Au, Yb2O3/Au) or silver (Ag-SiO2-Er2O3) in their structure. Carbon-based materials (graphene ceramics, graphene foam, μ-diamonds) and organic complexes ([(RMSn)(PhSn)3S6] with RM = [(Et3P)3Ag], [(Me3P)3Au]; [(RSn)4S6] with R = 4-(CH2=CH)–C6H4), undoped or doped with lanthanide ions ([YbL3]0.7[TbL3]0.3 with L = pentafluorophenyl), are another relevant group of compounds. All of these materials exhibit very intense warm white light in the range of 400–800 nm.
Impact of excitation power
The LIWE generation process is non-linear and strongly depends on the excitation power density. An increase in pumping power density (P) leads to a slight increase in white emission intensity (I) until a certain excitation threshold value is reached; then the increase in LIWE intensity is drastic. The dependence of intensity on power is described by the formula $I \propto P^{N}$, where N is the number of near-infrared photons absorbed for LIWE generation. The character of the power dependence is not always the same and may vary depending on the tested material. Some papers report an increase in intensity supported by two thresholds: the emission intensity increases up to a certain pumping power, then decreases, then increases again. According to the authors, this could be related to regular anti-Stokes photoluminescence, heat accumulation and LIWE generation, respectively. Sometimes such behavior may be caused by the presence of lanthanide ions in the investigated host, due to their effective absorption of the radiation used to generate LIWE. It is worth noting that the parameter N depends on the excitation wavelength.
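A minimal sketch of estimating N from the relation $I \propto P^{N}$: on a log–log scale, the slope of intensity versus pumping power gives N. The sample data below are illustrative, not measured values:

```python
import numpy as np

# Hypothetical readings above the LIWE threshold (arbitrary units)
power = np.array([1.2, 1.5, 2.0, 2.5, 3.0])              # excitation power
intensity = np.array([3.1, 12.4, 88.0, 340.0, 1020.0])   # emission intensity

# Fit log I = N * log P + const; the slope is the photon number N
slope, intercept = np.polyfit(np.log(power), np.log(intensity), 1)
print(f"estimated N ~ {slope:.1f}")
```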
Impact of ambient pressure
The ambient pressure strongly influences the LIWE intensity. Usually, under reduced-pressure conditions the intensity of white luminescence is very high, because the sample temperature is increased as a result of irradiation with a concentrated beam of the IR laser diode. As the pressure increases, the intensity remains constant up to a threshold above which there is a sharp reduction in LIWE. Depending on the tested material, its luminescence may even be completely quenched at atmospheric pressure. Such behavior is well described by a heat-dissipation model with a critical magnitude of ambient pressure above which the luminescence intensity decreases.
Sample temperature
Taking into account that broadband LIWE occurs upon illumination with a focused beam of an infrared laser diode, it seems necessary to determine the sample temperature during the experiment. Several approaches to this goal can be found in the literature. The first is to use a thermal camera. Results obtained using this technique indicate that the sample temperature is below 1000 °C. However, it should be kept in mind that this device determines the temperature from the sample surface, from the small spot irradiated by the laser beam. Therefore, the measurement may be inaccurate, because the temperature of the sample over its entire volume may differ from the temperature determined from a single small point on the surface. To determine the sample temperature during the LIWE generation process, temperature markers in the form of up-conversion materials can be used. The sample temperature values determined using this method are similar to those obtained with a thermal camera. The third approach is to fit the spectral curve using Planck's law to determine the temperature of the sample during exposure. Results obtained using this method show values over 2000 °C.
Laser-induced photocurrent
The LIWE phenomenon is accompanied by efficient photocurrent generation and hot electron emission. No effects were observed for pumping power densities below the LIWE generation threshold (1 W); above this threshold, however, the conductivity increases with the excitation power. The conductivity at low frequencies (near DC) usually increases by several orders of magnitude after exposure of the sample to the maximum power of the excitation diode, compared to the sample in the dark. The photoconductivity effects found in the materials studied so far can be explained using the hopping mechanism. A similar tendency of photoelectric phenomena is observed in other hosts reported in the literature.
Mechanisms
The broadband anti-Stokes white emission is observed from many different materials. However, to date, there is no unambiguous model for its interpretation. Some scientists assume that LIWE is a thermal process. In this case, it is natural to use the black-body radiation (BBR) model to describe this phenomenon. In general, the theory of BBR states that objects heated to a sufficiently high temperature will emit white light. This means that the emission spectrum strongly depends on temperature, and spectral curves whose shape is comparable to that of LIWE can be fitted well using Planck's law. Moreover, a shift of the emission maximum with increasing sample temperature (i.e., with increasing pumping power) is usually observed, in accordance with Wien's law. This suggests that the applied model is correct. Unfortunately, the sample temperature obtained from fitting the emission spectrum is often higher than the melting point of the material, which raises doubts about the validity of the BBR model.
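A minimal sketch of the Planck's-law fit mentioned above, assuming a synthetic visible-range spectrum and a free amplitude A that absorbs the absolute scale; a real analysis would fit a measured LIWE spectrum instead:

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength_m, A, T):
    """Black-body spectral radiance up to the free amplitude A."""
    return A / wavelength_m**5 / (np.exp(h * c / (wavelength_m * kB * T)) - 1.0)

wl = np.linspace(400e-9, 800e-9, 200)              # visible range, in metres
spectrum = planck(wl, 1e-20, 2500.0)               # synthetic "LIWE" spectrum
spectrum *= 1 + 0.02 * np.random.randn(wl.size)    # measurement noise

(A_fit, T_fit), _ = curve_fit(planck, wl, spectrum, p0=(1e-20, 2000.0))
print(f"apparent temperature ~ {T_fit:.0f} K")
```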
For this reason, scientists have started research on alternative mechanisms explaining the generation of the broadband anti-Stokes white luminescence. One of them assumes the formation of RE2+-CT clusters as a result of a multiphoton absorption process. However, many further experimental and theoretical investigations led to modification of this model. It involves ionization of the host as a consequence of its illumination with a concentrated beam of an IR laser diode. As a result, free electrons arise in the conduction band (CB). They combine with ions already located in the CB to form pairs between ions with different oxidation states. In consequence, intervalence charge transfer (IVCT) transitions appear, resulting in LIWE generation.
Another approach, presented for inorganic materials, considers the creation of oxygen vacancies due to thermal effects caused by the increased sample temperature resulting from irradiation with a concentrated IR laser beam. Excited electrons are then captured from the excited levels of the host by oxygen vacancies through a tunneling process. Subsequently, the electrons return to the valence band via radiative transitions. In the case of organic materials, scientists have proposed a mechanism closely related to the size of the HOMO-LUMO gap and the morphology of the analyzed compound. They report that irradiation of the sample by a near-infrared (NIR) CW laser diode causes excitation of electrons located near the Fermi level. Because their energy is below the HOMO-LUMO gap, the kind of ligands strongly influences the emission energy. It was found that carbon-based materials also show the ability to generate LIWE under strong excitation. A recently reported mechanism assumes ionization of the graphene associated with intense NIR excitation, which leads to a temporary disturbance of the electronic order of its ground state. In consequence, the hybridization of carbon atoms changes from sp2 to sp3, opening the graphene band gap and finally generating LIWE.
References
Electromagnetic radiation | Laser induced white emission | [
"Physics"
] | 2,021 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
66,531,933 | https://en.wikipedia.org/wiki/Infrastructure%20Transitions%20Research%20Consortium | The UK Infrastructure Transitions Research Consortium (ITRC) was established in January 2011. The ITRC provides data and modelling to help governments, policymakers and other stakeholders in infrastructure make more sustainable and resilient infrastructure decisions. It is a collaboration between seven universities and more than 55 partners from infrastructure policy and practice.
During its first research programme, running from 2011 to 2016, ITRC developed the world's first national infrastructure system-of-systems model, known as NISMOD (National Infrastructure Systems Model) which has been used to analyse long-term investment strategies for energy, transport, digital communications, water, waste water and solid waste. This work is described in the book The Future of National Infrastructure, an introduction to the NISMOD models and tools describing their application to inform infrastructure planning in Britain.
The second phase of this programme (2016–2021) is called ITRC-MISTRAL where MISTRAL stands for Multi-Scale Infrastructure Systems Analytics. MISTRAL allowed ITRC to develop the national-scale modelling in ITRC to simulate infrastructure at city, regional and global scales.
Based in the University of Oxford's Environmental Change Institute, ITRC is led by Director Jim Hall who is also Professor of Environmental Risks at the University of Oxford.
Funding: The ITRC is funded by two programme grants from the UK Engineering and Physical Sciences Research Council (EPSRC). The 2011–2016 ITRC programme grant was £4.7m and the 2016–2021 grant, for ITRC-MISTRAL, is £5.4m.
Consortium: The seven universities making up the ITRC consortium are: University of Southampton, University of Oxford, Newcastle University, Cardiff University, University of Cambridge, University of Leeds and University of Sussex.
Partners: ITRC's partners are from across the infrastructure sector. They include infrastructure investors such as the World Bank, consultancies including Ordnance Survey and KPMG, providers such as Siemens, High Speed 2 (HS2), Network Rail and National Grid, policy-makers (i.e. Environment Agency) and regulatory bodies (OFCOM).
References
External links
Official website
Climate action plans
Engineering and Physical Sciences Research Council
Infrastructure
Infrastructure by country
Sustainability
Technology consortia | Infrastructure Transitions Research Consortium | [
"Engineering"
] | 448 | [
"Construction",
"Infrastructure"
] |
66,535,076 | https://en.wikipedia.org/wiki/Single-particle%20trajectory | Single-particle trajectories (SPTs) consist of a collection of successive discrete points causal in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule.
Molecules can now be visualized using recent super-resolution microscopy, which allows routine collection of thousands of short and long trajectories. These trajectories explore parts of a cell, either on the membrane or in 3 dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell, as emphasized in various cell types such as neuronal cells, astrocytes, immune cells and many others.
SPTs allow observing moving molecules inside cells to collect statistics
SPT allows observing moving particles. These trajectories are used to investigate cytoplasm or membrane organization, but also cell nucleus dynamics, remodeler dynamics, or mRNA production. Due to the constant improvement of instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data.
Assembling points into a trajectory based on tracking algorithms
Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points. Tracking algorithms are based on a physical model of trajectories perturbed by additive random noise.
Extract physical parameters from redundant SPTs
The redundancy of many short SPTs is a key feature for extracting biophysical parameters from empirical data at a molecular level. In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment:

$\mathrm{MSD}(t) = \langle |X(t) - X(0)|^{2} \rangle \sim t^{\alpha}$

(average over realizations), where $\alpha$ is called the anomalous exponent.
For Brownian motion, $\mathrm{MSD}(t) = 2nDt$, where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion. The MSD has been widely used in early applications of long, but not necessarily redundant, single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points could be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. In the context of cellular (amoeboid) transport, high-resolution motion analysis of long SPTs in microfluidic chambers containing obstacles revealed different types of cell motion: depending on the obstacle density, crawling was found at low obstacle density, and directed motion and random phases could even be differentiated.
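A minimal sketch of the MSD computation, assuming trajectories sampled at a fixed time step dt and stored as (T, d) NumPy arrays; the anomalous exponent $\alpha$ is read off as the log–log slope:

```python
import numpy as np

def msd(trajectories, max_lag):
    """Time- and ensemble-averaged mean-square displacement per lag."""
    totals = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for traj in trajectories:
        for lag in range(1, max_lag + 1):
            disp = traj[lag:] - traj[:-lag]
            totals[lag - 1] += np.sum(disp ** 2)
            counts[lag - 1] += disp.shape[0]
    return totals / counts

dt, D = 0.02, 0.1   # acquisition step (s) and diffusion coefficient
trajs = [np.cumsum(np.random.normal(0, np.sqrt(2 * D * dt), (100, 2)), axis=0)
         for _ in range(200)]   # simulated 2-D Brownian trajectories
lags = dt * np.arange(1, 21)
curve = msd(trajs, 20)
alpha = np.polyfit(np.log(lags), np.log(curve), 1)[0]
print(f"alpha ~ {alpha:.2f}, D ~ {curve[0] / (4 * dt):.3f}")  # MSD = 4 D t
```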
Physical model to recover spatial properties from redundant SPTs
Langevin and Smoluchowski equations as a model of motion
Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit, and associated models that account for additional localization point identification noise or a memory kernel. The Langevin equation describes a stochastic particle driven by a Brownian force $\Xi$ and a field of force (e.g., electrostatic, mechanical, etc.):

$m\ddot{X} + \Gamma\dot{X} - F(X) = \Xi,$

where m is the mass of the particle, $\Gamma = 6\pi a \eta$ is the friction coefficient of a diffusing particle of radius a, and $\eta$ is the viscosity. Here $\Xi$ is a $\delta$-correlated Gaussian white noise. The force can be derived from a potential well U so that $F(X) = -\nabla U(X)$, and in that case the equation takes the form

$m\ddot{X} + \nabla U(X) + \Gamma\dot{X} = \sqrt{2\varepsilon\Gamma}\,\dot{w},$

where $\varepsilon = k_B T$ is the thermal energy, $k_B$ the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well, the inertia term allows the particle to move away from the attractor and thus prevents immediate rebinding that could plague numerical simulations.
In the large friction limit $\Gamma \to \infty$, the trajectories of the Langevin equation converge in probability to those of the Smoluchowski equation

$\Gamma\dot{X} = F(X) + \sqrt{2\varepsilon\Gamma}\,\dot{w},$

where $\dot{w}$ is $\delta$-correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. Interpretation of the physical forces is not resolved by the Itô vs. Stratonovich integral representations or any others.
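A minimal Euler–Maruyama sketch of this overdamped limit, assuming a harmonic potential $U(x) = k|x|^{2}/2$ so that $F(x) = -kx$; parameter values are illustrative only:

```python
import numpy as np

def simulate(x0, k=1.0, gamma=1.0, eps=1.0, dt=1e-3, n_steps=20000):
    """Integrate gamma * dX = F(X) dt + sqrt(2 * eps * gamma) dw."""
    x = np.array(x0, dtype=float)
    path = np.empty((n_steps + 1, x.size))
    path[0] = x
    for i in range(n_steps):
        force = -k * x                     # F(X) = -grad U(X)
        x = x + (force / gamma) * dt \
              + np.sqrt(2 * eps / gamma * dt) * np.random.randn(x.size)
        path[i + 1] = x
    return path

path = simulate([1.0, 0.0])
print(path[5000:].var(axis=0))   # approaches eps / k = 1 in equilibrium
```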
General model equations
For a timescale much longer than that of elementary molecular collisions, the position of a tracked particle is described by a more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much lower compared to the thermal fluctuations, rapid events are not resolved in the data. Thus at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation

$\dot{X} = a(X) + \sqrt{2}\,B(X)\dot{w},$

where $a(X)$ is the drift field and $B(X)$ the diffusion matrix. The effective diffusion tensor $D(X) = \frac{1}{2}B(X)B^{T}(X)$ can vary in space ($B^{T}$ denotes the transpose of $B$). This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient remains constant as long as the medium stays the same, and the microscopic diffusion coefficient (or tensor) could remain isotropic.
Statistical analysis of these trajectories
The development of statistical methods is based on stochastic models and possibly a deconvolution procedure applied to the trajectories. Numerical simulations can also be used to identify specific features that can be extracted from single-particle trajectory data. The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement, or attracting forces reflecting the interactions of the particles with their local nanometer-scale environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), maps of confinement or of the local density of obstacles reflecting the presence of biological objects of different sizes.
Empirical estimators for the drift and diffusion tensor of a stochastic process
Several empirical estimators have been proposed to recover the local diffusion coefficient, the vector field, and even organized patterns in the drift, such as potential wells. These empirical estimators serve to recover physical properties using parametric and non-parametric statistics. Retrieving the statistical parameters of a diffusion process from a one-dimensional time series uses either the first-moment estimator or Bayesian inference.
The models and the analysis assume that the processes are stationary, so that the statistical properties of the trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only a few slow changes may occur on the surface of a neuron, for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions.
The coarse-grained model, Eq. (1), is recovered from the conditional moments of the trajectory by computing the increments $\Delta X(t) = X(t + \Delta t) - X(t)$:

$$ a(x) = \lim_{\Delta t \to 0} \frac{E\left[\Delta X(t) \mid X(t) = x\right]}{\Delta t}, \qquad D(x) = \lim_{\Delta t \to 0} \frac{E\left[\Delta X(t)\,\Delta X^{T}(t) \mid X(t) = x\right]}{2\,\Delta t}. $$
Here the notation $E\left[\,\cdot \mid X(t) = x\,\right]$ means averaging over all trajectories that are at the point x at time t. The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of trajectories in the neighborhood of the point x at time t.
Empirical estimation
In practice, the expectations for a and D are estimated by finite sample averages, with $\Delta t$ the time resolution of the recorded trajectories. The formulas for a and D are approximated at the time step $\Delta t$, provided tens to hundreds of points fall in any given bin. This is usually enough for the estimation.
To estimate the local drift and diffusion coefficients, trajectories are first grouped within small neighbourhoods. The field of observation is partitioned into square bins $S(x_k, r)$ of side r and centre $x_k$, and the local drift and diffusion are estimated in each square. Considering a sample of $N_t$ trajectories $\{X^{j}(t_i)\}$, where the $t_i$ are the sampling times, the discretization of the equation for the drift at position $x_k$ is given, for each spatial projection on the x and y axes, by

$$ a_{x}(x_k) \approx \frac{1}{N_k} \sum_{j,\,i\,:\,X^{j}(t_i) \in S(x_k, r)} \frac{X^{j}_{x}(t_{i+1}) - X^{j}_{x}(t_i)}{\Delta t}, $$

and similarly for $a_{y}(x_k)$,
where $N_k$ is the number of trajectory points that fall in the square $S(x_k, r)$. Similarly, the components of the effective diffusion tensor are approximated by the empirical sums

$$ D_{xx}(x_k) \approx \frac{1}{N_k} \sum_{j,\,i\,:\,X^{j}(t_i) \in S(x_k, r)} \frac{\left(X^{j}_{x}(t_{i+1}) - X^{j}_{x}(t_i)\right)^{2}}{2\,\Delta t}, $$

and similarly for the other components.
The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution acquisition, such as the sptPALM technique applied to biological samples. The exact inversion of Langevin's equation demands, in theory, an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after the region is subdivided by a square grid of radius r or by moving sliding windows (of the order of 50 to 100 nm).
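A minimal numerical sketch of these binning estimators follows. The function name, the square-grid handling, and the array layout are illustrative assumptions, not taken from the original references:

```python
import numpy as np

def estimate_drift_diffusion(trajs, dt, r, bounds):
    """Bin-wise moment estimators for the local drift a(x) and the diagonal
    diffusion components D_xx, D_yy of 2-D trajectories.

    trajs  : list of (n_points, 2) arrays of successive positions
    dt     : acquisition time step
    r      : side length of the square bins
    bounds : (xmin, xmax, ymin, ymax) of the field of observation
    """
    xmin, xmax, ymin, ymax = bounds
    nx = int(np.ceil((xmax - xmin) / r))
    ny = int(np.ceil((ymax - ymin) / r))
    count = np.zeros((nx, ny))
    drift = np.zeros((nx, ny, 2))   # accumulates increments / dt
    diff = np.zeros((nx, ny, 2))    # accumulates squared increments / (2 dt)

    for X in trajs:
        incr = X[1:] - X[:-1]       # displacements over one time step
        ix = ((X[:-1, 0] - xmin) / r).astype(int).clip(0, nx - 1)
        iy = ((X[:-1, 1] - ymin) / r).astype(int).clip(0, ny - 1)
        for k in range(len(incr)):
            count[ix[k], iy[k]] += 1
            drift[ix[k], iy[k]] += incr[k] / dt
            diff[ix[k], iy[k]] += incr[k] ** 2 / (2 * dt)

    occupied = count > 0            # normalize only non-empty bins
    drift[occupied] /= count[occupied][:, None]
    diff[occupied] /= count[occupied][:, None]
    return drift, diff, count
```

Bins containing tens to hundreds of points give stable averages; bins with `count == 0` should simply be discarded from the resulting maps.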
Automated recovery of the boundary of a nanodomain
Algorithms based on mapping the density of points extracted from trajectories make it possible to reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density, revealed by SPTs. Examples are organelles such as the endoplasmic reticulum, or cell membranes. The method is based on spatiotemporal segmentation to detect the local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers.
References
Cell biology
Stochastic processes
Biophysics
Data analysis
Neuroscience
Applied mathematics
Statistical mechanics | Single-particle trajectory | [
"Physics",
"Mathematics",
"Biology"
] | 2,151 | [
"Neuroscience",
"Applied and interdisciplinary physics",
"Cell biology",
"Applied mathematics",
"Biophysics",
"Statistical mechanics"
] |
66,536,665 | https://en.wikipedia.org/wiki/Super%20Sopper | The Super Sopper is a rolling sponge used to remove water from sports grounds, particularly cricket fields, but also golf greens, tennis courts, racecourses, and other sports venues. The device can also be used to remove surface water from other spaces open to the elements, including hard courts and floors at building sites, to allow activities to continue without having to wait for the area to be dry out or be cleared of water by other means.
The concept was invented in Australia in 1974 by Gordon Withnall when he was 80 years old, after his golf ball landed in a puddle of water while he was playing a round of golf near Liverpool, New South Wales. He recognised the need for some means to collect surplus water from the grass at sports fields, to allow the sports to continue more quickly and effectively after a shower of rain. His invention initially attached sponges to the rotating drum of a manual lawn roller, with perforations in the metal drum allowing the water to be collected in an internal tank. The invention was featured on the ABC-TV show The Inventors in 1974 but did not win the programme. Withnall moved on to a vehicle similar to a road roller, with wide water-collecting rollers at each end, developed for Melbourne Cricket Ground in 1979, and then exported to the Oval in London. Many hundreds were sold to schools in Japan by Withnall's company, Kuranda Manufacturing, which is now run by his son Len Withnall.
A full-size Super Sopper typically comprises a small motorised vehicle with large rollers at each end. The outside surface of each roller is covered with a spongy water-absorbing material, such as polyurethane foam. The large rollers collect water as they roll over the ground, and the continuing rotary motion of each roller takes the water-filled sponge past a smaller hard roller rotating in the opposite direction, which squeezes the water out of the sponge and into a storage tank, allowing the large rollers to collect more water when they return to the ground. When the tank is full, the collected water can be emptied into a drain. Water is collected in a continuous process as the vehicle moves forward, supplementing any drainage installed in the ground and allowing sporting activities to resume more quickly after interruptions for rain. Smaller versions with one roller can be towed or pushed by hand. Similar devices exist, such as the "Supersopper" made in India.
References
Super Sopper water removal system, Museum of Applied Arts & Sciences, Australia
Science Week 2014: Super Sopper and super fun at Castle Hill, Museum of Applied Arts & Sciences, 11 August 2014
Super Sopper roller, 1974, Museum of Applied Arts & Sciences archive
Withnall, Gordon, Encyclopedia of Australian Science
History of the Super Sopper, supersopper.com.au
Australian inventions
Sports equipment
Cleaning tools
Cleaning products | Super Sopper | [
"Chemistry"
] | 581 | [
"Cleaning products",
"Products of chemical industry"
] |
66,537,977 | https://en.wikipedia.org/wiki/Koy%20%28animal%29 | The koy (Mishnaic Hebrew: ) is a kosher animal classified in the Mishnah as an intermediate between cattle and beast.
Spelling and pronunciation
Ashkenazi Jews traditionally pronounced the word as kvi (Hebrew: ), while Yemenite Jews and most Sephardi Jews pronounced it as koy (Hebrew: ). Nowadays, many Ashkenazi Jews also pronounce it as koy, which is the correct form according to most scholars.
Different spellings are found in various manuscripts such as the Kaufmann Manuscript and others.
Etymology
One opinion is that the word is from the same root as cow, and refers to the cow of the Germanic peoples which was halfway domesticated. Another possibility is that the word is from the same root as the Arabic (kawy), meaning strong.
Identity
The Talmud cites three opinions regarding the identity of the koy:
It is a type of wild deer (Hebrew: ) (identified by some as the mouflon).
The Talmud brings down that some say it is a hybrid of a male goat (Hebrew: ) and a female gazelle (Hebrew: ). Since the father is considered a type of cattle and the mother a type of beast, the question remains whether the offspring (the koy) has the halakhic status of cattle or beast. This might be the opinion of Rav Chisda.
It is a separate species for itself.
Some modern scholars identify the koy as the Bubalus.
Halakhic ramifications
The sages were not sure whether the koy has the halakhic status of cattle or beast, and therefore they ruled stringently, sometimes giving it the status of cattle and sometimes giving it the status of beast. The details of these laws are recorded in the Mishnah.
References
Mammals
Jewish law
Mishnah | Koy (animal) | [
"Biology"
] | 372 | [
"Mammals",
"Animals"
] |
66,538,864 | https://en.wikipedia.org/wiki/Zymoseptoria%20passerinii | Zymoseptoria passerinii is a species of fungus belonging to the family Mycosphaerellaceae.
Synonym:
Septoria passerinii Sacc.
References
Mycosphaerellaceae
Fungus species | Zymoseptoria passerinii | [
"Biology"
] | 45 | [
"Fungi",
"Fungus species"
] |
66,539,007 | https://en.wikipedia.org/wiki/Country-centred%20design | Country-centred design, also spelt Country Centred Design, is a design methodology of Indigenous peoples in Australia. It is a means of ensuring that the design of the built environment happens with the Aboriginal or Torres Strait Islander concept of "Country" at the centre of the design. Country-centred design stands as a counterpoint to human-centred design.
The concept of Country
Embedded in the Country-centred design methodology is the Indigenous experience of Country. This has a particular and distinct meaning, which is different to the Western understanding of country (with a lowercase "c"). Danièle Hromek, a spatial designer of Budawang/Yuin heritage from the New South Wales South Coast, explains this as follows:In the Aboriginal sense of the word, Country relates to the nation or cultural group and land that they/we belong to, yearn for, find healing from and will return to. However, Country means much more than land, it is their/our place of origin in cultural, spiritual and literal terms. It includes not only land but also skies and waters.
Country is a wholistic worldview that incorporates human, non-human and all the natural systems that connect them.
Country-centred design methodology
Developed intergenerationally and communally over many generations of Indigenous peoples, the Country-centred design methodology is "owned" by Indigenous peoples, and embodied through Indigenous designers. It positions Country as the guide for design processes. It inherently includes the relationships between the various elements of Country, community, non-humans and people, and understands the connections and kinships between all that share space. The methodology is interpreted differently dependent on the spatial practitioner and their individual relationships to Country, culture and community. It has been described as "a process used and created by First Peoples over generations through their care and management of Country".
The methodology is also referred to as "Country centred design", "Country centric design", "Country-led design", "privileging Country in design", and "designing with Country".
Publication and dissemination
Contemporary Indigenous architects and designers in Australia have used the term "Country Centred Design" to describe this methodology since the 1990s. More recently, the methodology has been disseminated within the Australian architecture and design communities through a range of publications and public discussions. Led by Indigenous architects, designers and scholars, these initiatives aim to increase knowledge among the larger community of built environment practitioners and to ensure that Indigenous knowledge and methodologies are embedded in the design of the Australian built environment.
In 2017 Anthony McKnight, Awabakal, Gumaroi, and Yuin scholar, published his thesis titled "Singing up Country in academia: teacher education academics and preservice teachers' experience with Yuin Country", in which he describes Country as "decentr[ing] the human authorship privilege of overseer, creator, controller, implementer, and owner".
In a 2018 article, Dillon Kombumerri, an architect of Yugembir heritage, described a process of design that is "very much country-centred, that talk[s] to the deep history of country." In a 2020 article in Architecture Australia, Kombumerri describes Country-centred design in relation to traditional greetings:The traditional form of greeting is not saying "hello" – that's a human-centred approach. Traditional communication protocols are about sharing where you're from and who your mob is. The human-centred approach needs to shift to be Country centred. My advice to architects is to ground yourself within Country and community.
The New South Wales Government Architect 2020 discussion paper, Designing with Country, builds on the earlier forum of the same name held in Sydney in 2018. It aims to provide a clear guide to support built environment practitioners to "respond to Aboriginal culture and heritage responsibly, appropriately and respectfully". It is part of a larger program of activity from the Government Architect to ensure that sensitive sites are respected, and to support strong Aboriginal culture in the Australian built environment. This document also articulates the contrast between Country-Centred Design and Human-Centred Design:Prioritising people and their needs when designing is widely regarded as fundamental in contemporary design and planning. However, appreciating an Indigenous or Aboriginal worldview suggests that there are limitations imposed by an entirely human-centred approach to design. If people and their needs are at the 'centre' of design considerations, then the landscape and nature are reduced to second order priorities. If design and planning processes considered natural systems that include people, animals, resources and plants equally – similar to an Aboriginal world view – this could make a significant contribution to a more sustainable future world.
Country-centred design was the topic of the annual Monash University Whyte Lecture in 2020, given by Angie Abdilla and Pia Waugh.
See also
Indigenous architecture
References
Design
Indigenous architecture | Country-centred design | [
"Engineering"
] | 990 | [
"Design"
] |
66,539,067 | https://en.wikipedia.org/wiki/Zygodesmus%20fuscus | Zygodesmus fuscus is a species of fungus with unknown classification.
Two forms of the species are known:
Zygodesmus fuscus f. fuscus
Zygodesmus fuscus f. geogena Sacc.
References
Enigmatic Basidiomycota taxa
Fungus species | Zygodesmus fuscus | [
"Biology"
] | 68 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
66,539,073 | https://en.wikipedia.org/wiki/TIC%20168789840 | TIC 168789840, also known as TYC 7037-89-1, is a stellar system with six stars. Three pairs of binary stars circle a common barycenter. While other systems with three pairs of stars have been discovered, this was the first system where the stars can be observed eclipsing one another, as the Earth lies approximately on their planes of rotation.
Discovery
The Transiting Exoplanet Survey Satellite identified that the star system consists of six eclipsing stars. The discovery was announced in January 2021. The system lies in the constellation Eridanus, west of the river asterism's sharpest bend, Upsilon2 Eridani, often called Theemin. To be seen, the group needs strong magnification from Earth, as it is much fainter than the red clump giant star Theemin and is about nine times further away.
Orbits
Two sets of the binaries co-orbit relatively closely, while the third pair of stars takes 2,000 years to orbit the entire system's barycenter. The inner A pair and C pair orbit each other in 3.7 years. The three lettered pairs, as groups, have been resolved (the three gaps between them made out). The calculated separation between the A pair and the C pair means this gap should be resolvable using speckle interferometry, which has not yet been achieved.
Note that the three binaries (here close pairs) A, B, and C are resolved only as systems; the two stars within each pair are too close together to be separated individually.
According to Jeanette Kazmierczak of NASA's Goddard Space Flight Center:
Stellar characteristics
The primary stars of all three close binaries are slightly hotter and brighter than the Sun, while the secondary stars are much cooler and dimmer. Because the two closely bound pairs are so close, only the third, more distant pair could have planets. The primaries are all beginning to evolve away from the main sequence, while the less massive and longer-lived secondaries are all still firmly on the main sequence and fusing hydrogen in their cores.
See also
Castor (star) – the second-brightest (apparent) "star" in Gemini, likewise a (double-double)-double system
References
6
Astronomical objects discovered in 2021
Eridanus (constellation)
Eclipsing binaries | TIC 168789840 | [
"Astronomy"
] | 516 | [
"Eridanus (constellation)",
"Constellations"
] |
66,539,461 | https://en.wikipedia.org/wiki/Abortiporus%20biennis | Abortiporus biennis, commonly known as the blushing rosette, is a species of fungus belonging to the family Meruliaceae.
Synonyms:
Boletus biennis Bull. 1790 (= basionym)
References
Meruliaceae
Fungus species
Fungi described in 1944
"Biology"
] | 60 | [
"Fungi",
"Fungus species"
] |
66,539,579 | https://en.wikipedia.org/wiki/Aleurodiscus%20lividocoeruleus | Aleurodiscus lividocoeruleus is a species of fungus belonging to the family Stereaceae.
It is native to Europe and Northern America.
Synonyms:
Acanthophysellum lividocoeruleum (P.Karst.) Parmasto
References
Stereaceae
Fungus species
Taxa named by Petter Adolf Karsten | Aleurodiscus lividocoeruleus | [
"Biology"
] | 72 | [
"Fungi",
"Fungus species"
] |
66,540,143 | https://en.wikipedia.org/wiki/Glossary%20of%20industrial%20automation | This glossary of industrial automation is a list of definitions of terms and illustrations related specifically to the field of industrial automation. For a more general view on electric engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering.
A
See also
Glossary of engineering
Glossary of power electronics
Glossary of civil engineering
Glossary of mechanical engineering
Glossary of structural engineering
Notes
References
Attribution
External links
Websites
Glossary of Industrial Automation
Automation Glossary of terms
Glossary of technical terms commonly used by ABB
An automation glossary
Glossary - Industrial Electronic/Electrical Terms
Robotics Glossary: a Guide to Terms and Technologies
PDFs
Glossary of Terms used in Programmable Controller-based Systems
Glossary of Terms for Process Control
INDUSTRY 4.0: Glossary of terms/buzzwords/jargon
Electrical engineering
Electronic engineering
Industrial automation
Wikipedia glossaries using description lists | Glossary of industrial automation | [
"Technology",
"Engineering"
] | 187 | [
"Computer engineering",
"Automation",
"Industrial engineering",
"Electronic engineering",
"Electrical engineering",
"Industrial automation"
] |
66,540,512 | https://en.wikipedia.org/wiki/ERV-Fc | ERV-Fc was an endogenous retrovirus (ERV), or a genus or family of them, related to the modern murine leukemia virus. It was active and infectious among many species of mammals in several orders, jumping species more than 20 times between about 33 million and about 15 million years ago, in the Oligocene and early Miocene, in all large areas of the world except for Australia and Antarctica. After about 15 million years ago, it became extinct as an active infectious virus, perhaps due to its hosts developing inherited resistance to it, but inactive damaged copies and partial copies and fragments of its DNA survive as inclusions in the hereditary nuclear DNA of many species of mammals, some in different orders, including humans and other great apes.
This has allowed the interspecies jump routes of the spreading virus to be tracked, and timed by the molecular clock in their extant descendants, albeit with gaps where trails were lost, either by passing through infected animals that left no extant descendants or by loss of the integrated sequence in some lineages.
References
External links
Where in the world is ERV-Fc? Tracing the spread of a virus over 30 million years, 28 species, and 5 continents.
A viral 'family tree' - BC biologists reveal the global spread of ancient retroviruses
Endogenous retroviruses
Retroviridae
Oligocene first appearances | ERV-Fc | [
"Biology"
] | 274 | [
"Virus stubs",
"Viruses"
] |
66,540,523 | https://en.wikipedia.org/wiki/Schamel%20equation | The Schamel equation (S-equation) is a nonlinear partial differential equation of first order in time and third order in space. Similar to a Korteweg–De Vries equation (KdV), it describes the development of a localized, coherent wave structure that propagates in a nonlinear dispersive medium. It was first derived in 1973 by Hans Schamel to describe the effects of electron trapping in the trough of the potential of a solitary electrostatic wave structure travelling with ion acoustic speed in a two-component plasma. It now applies to various localized pulse dynamics such as:
electron and ion holes or phase space vortices in collision-free plasmas such as space plasmas,
axisymmetric pulse propagation in physically stiffened nonlinear cylindrical shells,
"Soliton" propagation in nonlinear transmission lines or in fiber optics and laser physics.
The equation
The Schamel equation is

$$ \phi_t + \left(1 + b\sqrt{\phi}\right)\phi_x + \phi_{xxx} = 0, $$
where $\phi_t$ stands for $\partial\phi/\partial t$. In the case of ion-acoustic solitary waves, the parameter b reflects the effect of electrons trapped in the trough of the electrostatic potential $\phi$. It is given by $b = \frac{1 - \beta}{\sqrt{\pi}}$, where $\beta$, the trapping parameter, reflects the status of the trapped electrons: $\beta = 0$ represents a flat-topped stationary trapped electron distribution, $\beta < 0$ a dip or depression.
It holds $0 \le \phi \le \psi \ll 1$, where $\psi$ is the wave amplitude. All quantities are normalized: the potential energy by the electron thermal energy, the velocity by the ion sound speed, time by the inverse ion plasma frequency, and space by the electron Debye length. Note that for a KdV equation $\sqrt{\phi}$ is replaced by $\phi$, such that the nonlinearity becomes bilinear (see later).
Solitary wave solution
The steady state solitary wave solution, $\phi(x,t) = \phi(x - v_0 t)$, is given in the comoving frame by:

$$ \phi(x,t) = \psi\,\operatorname{sech}^{4}\!\left(\sqrt{\tfrac{b\sqrt{\psi}}{30}}\,\left(x - v_0 t\right)\right), \qquad v_0 = 1 + \frac{8b}{15}\sqrt{\psi}. $$
The speed of the structure is supersonic, $v_0 > 1$, since b has to be positive, $b > 0$, which corresponds in the ion acoustic case to a depressed trapped electron distribution, $\beta < 1$.
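This solution can be checked numerically: substituting the sech⁴ profile into the once-integrated steady-state equation $(1 - v_0)\phi + \tfrac{2b}{3}\phi^{3/2} + \phi_{xx} = 0$ should leave only finite-difference error. The sketch below assumes arbitrary illustrative values for b and ψ:

```python
import numpy as np

b, psi = 1.0, 0.01                            # arbitrary illustrative parameters
kappa = np.sqrt(b * np.sqrt(psi) / 30.0)      # inverse width of the profile
v0 = 1.0 + (8.0 * b / 15.0) * np.sqrt(psi)    # nonlinear dispersion relation

x = np.linspace(-200.0, 200.0, 8001)
phi = psi / np.cosh(kappa * x) ** 4
phi_xx = np.gradient(np.gradient(phi, x), x)  # crude second derivative

# Residual of the once-integrated steady-state equation; ~0 up to O(dx^2) error.
residual = (1.0 - v0) * phi + (2.0 * b / 3.0) * phi ** 1.5 + phi_xx
print(np.abs(residual).max())
```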
Proof by pseudo-potential method
The proof of this solution uses the analogy to classical mechanics via $\phi_{xx} = -\mathcal{V}'(\phi)$, with $\mathcal{V}(\phi)$ being the corresponding pseudo-potential. From this we get by an integration $\frac{1}{2}\phi_x^{2} + \mathcal{V}(\phi) = 0$, which represents the pseudo-energy, and from the Schamel equation:

$$ -\mathcal{V}(\phi) = \frac{v_0 - 1}{2}\,\phi^{2} - \frac{4b}{15}\,\phi^{5/2}. $$
Through the obvious demand, namely that at the potential maximum, $\phi = \psi$, the slope $\phi_x$ of $\phi$ vanishes, we get $\mathcal{V}(\psi) = 0$ and hence

$$ v_0 = 1 + \frac{8b}{15}\sqrt{\psi}. $$

This is a nonlinear dispersion relation (NDR) because it determines the phase velocity $v_0$. The canonical form of $\mathcal{V}(\phi)$ is obtained by replacing $v_0$ with the NDR. It becomes:

$$ \mathcal{V}(\phi) = \frac{4b}{15}\,\phi^{2}\left(\sqrt{\phi} - \sqrt{\psi}\right). $$
The use of this expression in

$$ x(\phi) = \int_{\phi}^{\psi} \frac{d\phi'}{\sqrt{-2\,\mathcal{V}(\phi')}}, $$

which follows from the pseudo-energy law, yields by integration:

$$ x(\phi) = \sqrt{\frac{30}{b\sqrt{\psi}}}\;\operatorname{arcosh}\!\left[\left(\frac{\psi}{\phi}\right)^{1/4}\right]. $$
This is the inverse function of $\phi(x)$ as given in the first equation. Note that the integral exists, since $\mathcal{V}(\phi') < 0$ on $(0, \psi)$, and can be expressed by known mathematical functions. Hence $\phi(x)$ is a mathematically disclosed function. However, the structure often remains mathematically undisclosed, i.e. it cannot be expressed by known functions (see for instance the section on the logarithmic Schamel equation). This generally happens if more than one trapping scenario is involved, as e.g. in driven intermittent plasma turbulence.
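The quadrature defining x(φ) can also be evaluated numerically and compared with the closed-form inverse of the sech⁴ profile. The sketch below uses SciPy; the parameter values and the test point are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

b, psi = 1.0, 0.01
kappa = np.sqrt(b * np.sqrt(psi) / 30.0)

def V(p):
    # canonical pseudo-potential, negative on (0, psi)
    return (4.0 * b / 15.0) * p ** 2 * (np.sqrt(p) - np.sqrt(psi))

def x_of_phi(p):
    # x(phi) = integral from phi to psi of dphi' / sqrt(-2 V(phi'))
    val, _ = quad(lambda q: 1.0 / np.sqrt(-2.0 * V(q)), p, psi)
    return val

p = 0.5 * psi
x_exact = np.arccosh((psi / p) ** 0.25) / kappa   # inverse of the sech^4 profile
print(x_of_phi(p), x_exact)                       # the two values should agree
```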
Non-integrability
In contrast to the KdV equation, the Schamel equation is an example of a non-integrable evolution equation. It has only a finite number of (polynomial) constants of motion and does not pass the Painlevé test. Since a so-called Lax pair (L, P) does not exist, it is not integrable by the inverse scattering transform.
Generalizations
Schamel–Korteweg–de Vries equation
Taking into account the next order in the expression for the expanded electron density yields a pseudo-potential that contains both the trapping and the fluid nonlinearity. The corresponding evolution equation then becomes

$$ \phi_t + \left(1 + b\sqrt{\phi} + \phi\right)\phi_x + \phi_{xxx} = 0, $$

which is the Schamel–Korteweg–de Vries equation.
Its solitary wave solution interpolates between two limiting profiles, controlled by a parameter Q that measures the relative weight of the trapping and fluid nonlinearities. For $Q \to 0$ we find

$$ \phi = \psi\,\operatorname{sech}^{4}\!\left(\sqrt{\tfrac{b\sqrt{\psi}}{30}}\,(x - v_0 t)\right), $$

the Schamel solitary wave.
In the opposite limit we get

$$ \phi = \psi\,\operatorname{sech}^{2}\!\left(\sqrt{\tfrac{\psi}{12}}\,(x - v_0 t)\right), \qquad v_0 = 1 + \frac{\psi}{3}, $$

which represents the ordinary ion acoustic soliton. The latter is fluid-like and is achieved for $b = 0$, i.e. $\beta = 1$, representing an isothermal electron equation of state. Note that the absence of a trapping effect ($b = 0$) does not imply the absence of trapping, a statement that is usually misrepresented in the literature, especially in textbooks. As long as $\psi$ is nonzero, there is always a nonzero trapping width in velocity space for the electron distribution function.
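To visualize the difference between the two limiting profiles, they can be plotted at equal amplitude; the amplitude below is an arbitrary illustrative value:

```python
import numpy as np
import matplotlib.pyplot as plt

b, psi = 1.0, 0.01
x = np.linspace(-200, 200, 2001)

# Q -> 0 limit: Schamel solitary wave (sech^4, trapping nonlinearity dominant)
schamel = psi / np.cosh(np.sqrt(b * np.sqrt(psi) / 30.0) * x) ** 4
# opposite limit: ordinary KdV ion acoustic soliton (sech^2, fluid nonlinearity)
kdv = psi / np.cosh(np.sqrt(psi / 12.0) * x) ** 2

plt.plot(x, schamel, label="Schamel soliton (sech$^4$)")
plt.plot(x, kdv, label="KdV soliton (sech$^2$)")
plt.xlabel("x")
plt.ylabel(r"$\phi$")
plt.legend()
plt.show()
```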
Logarithmic Schamel equation
Another generalization of the S-equation is obtained in the case of ion acoustic waves by admitting a second trapping channel. By considering an additional, non-perturbative trapping scenario, Schamel obtained a generalization in which the square root nonlinearity is supplemented by a logarithmic one, called the logarithmic S-equation.
In the absence of the square root nonlinearity, $b = 0$, it is solved by a Gaussian-shaped hole solution, $\phi(x,t) = \psi\,e^{-(x - v_0 t)^{2}/L^{2}}$, which has a supersonic phase velocity $v_0 > 1$. The corresponding pseudo-potential $\mathcal{V}(\phi)$ then yields $x(\phi)$ as the inverse function of this Gaussian. For a non-zero b, keeping the term $b\sqrt{\phi}$, the integral defining $x(\phi)$ can no longer be solved analytically, i.e. by known mathematical functions. A solitary wave structure still exists, but cannot be reached in a disclosed form.
Schamel equation with random coefficients
The fact that electrostatic trapping involves stochastic processes at resonance, caused by chaotic particle trajectories, has led to treating b in the S-equation as a stochastic quantity. This results in a Wick-type stochastic S-equation.
Time-fractional Schamel equation
A further generalization is obtained by replacing the first time derivative by a Riesz fractional derivative yielding a time-fractional S-equation. It has applications e.g. for the broadband electrostatic noise observed by the Viking satellite.
Schamel–Schrödinger equation
A connection between the Schamel equation and the nonlinear Schrödinger equation can be made within the context of a Madelung fluid. It results in the Schamel–Schrödinger equation, which has applications in fiber optics and laser physics.
References
External links
www.hans-schamel.de : further information by Hans Schamel
Partial differential equations
Plasma physics equations
Ionosphere
Space plasmas
Waves in plasmas | Schamel equation | [
"Physics"
] | 1,263 | [
"Waves in plasmas",
"Space plasmas",
"Physical phenomena",
"Equations of physics",
"Plasma phenomena",
"Astrophysics",
"Waves",
"Plasma physics equations"
] |
68,046,678 | https://en.wikipedia.org/wiki/Nirogacestat | Nirogacestat, sold under the brand name Ogsiveo, is an anti-cancer medication used for the treatment of desmoid tumors. It is a selective gamma secretase inhibitor that is taken by mouth.
The most common side effects include diarrhea, ovarian toxicity, rash, nausea, fatigue, stomatitis, headache, abdominal pain, cough, alopecia, upper respiratory tract infection and dyspnea.
Nirogacestat was approved for medical use in the United States in November 2023. It is the first medication approved by the US Food and Drug Administration (FDA) for the treatment of desmoid tumors. The FDA considers it to be a first-in-class medication.
Medical uses
Nirogacestat is indicated for adults with progressing desmoid tumors who require systemic treatment.
History
The effectiveness of nirogacestat was evaluated in DeFi (NCT03785964), an international, multicenter, randomized (1:1), double-blind, placebo-controlled trial in 142 adult participants with progressing desmoid tumors not amenable to surgery. Participants were randomized to receive 150 milligrams (mg) of nirogacestat or placebo orally, twice daily, until disease progression or unacceptable toxicity. The main efficacy outcome measure was progression-free survival (the length of time after the start of treatment for which a person is alive and their cancer does not grow or spread). Objective response rate (a measure of tumor shrinkage) was an additional efficacy outcome measure. The pivotal clinical trial demonstrated that nirogacestat provided clinically meaningful and statistically significant improvement in progression-free survival compared to placebo. Additionally, the objective response rate was also statistically different between the two arms with a response rate of 41% in the nirogacestat arm and 8% in the placebo arm. The progression-free survival results were also supported by an assessment of patient-reported pain favoring the nirogacestat arm.
As of 2021, nirogacestat was in phase II clinical trials for unresectable desmoid tumors. In addition, a phase III clinical trial, DeFi, was in progress for nirogacestat in adults with desmoid tumors and aggressive fibromatosis. Three further trials combining nirogacestat with other anticancer therapies in multiple myeloma were recruiting patients, including the UNIVERSAL study of nirogacestat with the allogeneic CAR-T therapy ALLO-715.
The FDA granted the application for nirogacestat priority review, fast track, breakthrough therapy, and orphan drug designations. The FDA granted the approval of Ogsiveo to SpringWorks Therapeutics Inc.
Society and culture
Legal status
Nirogacestat was granted breakthrough therapy designation by the FDA in September 2019, for adults with progressive, unresectable, recurrent or refractory desmoid tumors or deep fibromatosis.
References
Amides
Chemotherapy
Fluoroarenes
Gamma secretase inhibitors
Imidazoles
Orphan drugs
Secondary amines
Tetralins
Neopentyl compounds | Nirogacestat | [
"Chemistry"
] | 659 | [
"Amides",
"Functional groups"
] |