Surface plasmon resonance

Surface plasmon resonance (SPR) is a phenomenon that occurs when electrons in a thin metal sheet become excited by light that is directed at the sheet at a particular angle of incidence, and then travel parallel to the sheet. Assuming a constant light-source wavelength and a thin metal sheet, the angle of incidence that triggers SPR is related to the refractive index of the material, and even a small change in the refractive index will prevent SPR from being observed. This makes SPR a possible technique for detecting particular substances (analytes), and SPR biosensors have been developed to detect various important biomarkers.
Explanation
The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the conducting surface.
To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the materials' "dielectric function", is the complex permittivity. In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive).
LSPRs (localized surface plasmon resonances) are collective electron charge oscillations in metallic nanoparticles that are excited by light. They exhibit enhanced near-field amplitude at the resonance wavelength. This field is highly localized at the nanoparticle and decays rapidly away from the nanoparticle/dielectric interface into the dielectric background, though far-field scattering by the particle is also enhanced by the resonance. Light intensity enhancement is a very important aspect of LSPRs and localization means the LSPR has very high spatial resolution (subwavelength), limited only by the size of nanoparticles. Because of the enhanced field amplitude, effects that depend on the amplitude such as magneto-optical effect are also enhanced by LSPRs.
Implementations
In order to excite surface plasmon polaritons in a resonant manner, one can use electron bombardment or an incident light beam (visible and infrared are typical). The incoming beam has to match its momentum to that of the plasmon. In the case of p-polarized light (polarized parallel to the plane of incidence), this is possible by passing the light through a block of glass to increase the wavenumber (and thus the momentum), achieving resonance at a given wavelength and angle. S-polarized light (polarized perpendicular to the plane of incidence) cannot excite electronic surface plasmons.
Electronic and magnetic surface plasmons obey the following dispersion relation:

k(ω) = (ω/c) · √[ ε₁ε₂(ε₁μ₂ − ε₂μ₁) / (ε₁² − ε₂²) ]

where k(ω) is the wave vector, ε is the relative permittivity, and μ is the relative permeability of the material (1: the glass block, 2: the metal film), while ω is the angular frequency and c is the speed of light in vacuum.
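As an illustrative check (not part of the original article), the sketch below evaluates the dispersion relation in the nonmagnetic case μ₁ = μ₂ = 1, where it reduces to k_sp = (ω/c)·√(ε_m·ε_d/(ε_m + ε_d)) for a metal/dielectric interface, and estimates the Kretschmann coupling angle. The gold permittivity and prism index are assumed approximate values, used only for illustration.

```python
import numpy as np

# Illustrative estimate of the SPR coupling angle in a Kretschmann setup.
# The gold permittivity and prism index are assumed approximate values.
wavelength = 633e-9            # He-Ne laser wavelength, m
eps_metal = -11.6 + 1.2j       # approximate relative permittivity of gold at 633 nm
eps_dielectric = 1.0           # air on the far side of the film
n_prism = 1.515                # BK7-like glass block

# With mu_1 = mu_2 = 1 the dispersion relation reduces to
# k_sp = (omega/c) * sqrt(eps_m * eps_d / (eps_m + eps_d))
k0 = 2 * np.pi / wavelength
k_sp = k0 * np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

# The glass block boosts the in-plane momentum to k_x = n_prism * k0 * sin(theta);
# resonance occurs when k_x matches the real part of k_sp.
theta_res = np.degrees(np.arcsin(k_sp.real / (n_prism * k0)))
print(f"resonance angle ≈ {theta_res:.1f} degrees")   # ≈ 43–44 degrees
```

Note that Re(k_sp) exceeds the free-space photon wavenumber k₀, which is exactly why the prism (or another momentum-matching scheme) is needed.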
Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium or chromium have also been used.
When using light to excite SP waves, there are two configurations which are well known. In the Otto configuration, the light illuminates the wall of a glass block, typically a prism, and is totally internally reflected. A thin metal film (for example gold) is positioned close enough to the prism wall so that an evanescent wave can interact with the plasma waves on the surface and hence excite the plasmons.
In the Kretschmann configuration (also known as Kretschmann–Raether configuration), the metal film is evaporated onto the glass block. The light again illuminates the glass block, and an evanescent wave penetrates through the metal film. The plasmons are excited at the outer side of the film. This configuration is used in most practical applications.
SPR emission
When the surface plasmon wave interacts with a local particle or irregularity, such as a rough surface, part of the energy can be re-emitted as light. This emitted light can be detected behind the metal film from various directions.
Analytical implementations
Surface plasmon resonance can be implemented in analytical instrumentation. SPR instruments consist of a light source, an input scheme, a prism with an analyte interface, a detector, and a computer.
Detectors
The detectors used in surface plasmon resonance convert the photons of light reflected off the metallic film into an electrical signal. A position sensing detector (PSD) or charge-coupled device (CCD) may be used as the detector.
Applications
Surface plasmons have been used to enhance the surface sensitivity of several spectroscopic measurements including fluorescence, Raman scattering, and second-harmonic generation. In their simplest form, SPR reflectivity measurements can be used to detect molecular adsorption, such as of polymers, DNA or proteins. Technically, it is common to measure the angle of minimum reflection (angle of maximum absorption). This angle changes on the order of 0.1° during the adsorption of thin films of nanometre-scale thickness. (See also the Examples.) In other cases, the changes in the absorption wavelength are followed. The mechanism of detection is based on the adsorbing molecules causing changes in the local index of refraction, changing the resonance conditions of the surface plasmon waves. The same principle is exploited in the recently developed competitive platform based on lossless dielectric multilayers (DBR), supporting surface electromagnetic waves with sharper resonances (Bloch surface waves).
If the surface is patterned with different biopolymers, using adequate optics and imaging sensors (i.e. a camera), the technique can be extended to surface plasmon resonance imaging (SPRI). This method provides a high contrast of the images based on the adsorbed amount of molecules, somewhat similar to Brewster angle microscopy (this latter is most commonly used together with a Langmuir–Blodgett trough).
For nanoparticles, localized surface plasmon oscillations can give rise to the intense colors of suspensions or sols containing the nanoparticles. Nanoparticles or nanowires of noble metals exhibit strong absorption bands in the ultraviolet–visible light regime that are not present in the bulk metal. This extraordinary absorption increase has been exploited to increase light absorption in photovoltaic cells by depositing metal nanoparticles on the cell surface. The energy (color) of this absorption differs when the light is polarized along or perpendicular to the nanowire. Shifts in this resonance due to changes in the local index of refraction upon adsorption to the nanoparticles can also be used to detect biopolymers such as DNA or proteins.
Related complementary techniques include plasmon waveguide resonance, QCM, extraordinary optical transmission, and dual-polarization interferometry.
SPR immunoassay
The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden). They adsorbed human IgG onto a 600-ångström silver film, and used the assay to detect anti-human IgG in aqueous solution. Unlike many other immunoassays, such as ELISA, an SPR immunoassay is label-free, in that a label molecule is not required for detection of the analyte. Additionally, SPR measurements can be followed in real time, allowing the monitoring of individual steps in sequential binding events, which is particularly useful in the assessment of, for instance, sandwich complexes.
Material characterization
Multi-parametric surface plasmon resonance, a special configuration of SPR, can be used to characterize layers and stacks of layers. Besides binding kinetics, MP-SPR can also provide information on structural changes in terms of true layer thickness and refractive index. MP-SPR has been applied successfully in measurements of lipid targeting and rupture, a CVD-deposited single monolayer of graphene (3.7 Å), as well as micrometre-thick polymers.
Data interpretation
The most common data interpretation is based on the Fresnel formulas, which treat the formed thin films as infinite, continuous dielectric layers. This interpretation may result in multiple possible refractive index and thickness values. Usually only one solution is within the reasonable data range. In multi-parametric surface plasmon resonance, two SPR curves are acquired by scanning a range of angles at two different wavelengths, which results in a unique solution for both thickness and refractive index.
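As a hedged sketch of this Fresnel-based interpretation, the following code computes the p-polarized reflectance of a prism/gold/water Kretschmann stack with the standard characteristic-matrix method and locates the resonance dip. The optical constants and film thickness are assumed illustrative values, not calibrated instrument data.

```python
import numpy as np

# Fresnel (characteristic-matrix) sketch of a Kretschmann stack:
# glass prism / 50 nm gold film / water. Optical constants are assumed
# illustrative values.
wavelength = 633e-9
k0 = 2 * np.pi / wavelength
eps = [1.515**2, -11.6 + 1.2j, 1.33**2]   # prism, gold, water (relative permittivities)
d_gold = 50e-9

def reflectance_p(theta):
    """Reflectance of the three-layer stack for p-polarized light at internal angle theta."""
    kx2 = eps[0] * np.sin(theta) ** 2
    kz = [k0 * np.sqrt(e - kx2 + 0j) for e in eps]      # normal wavevector components
    q = [kz_j / (k0 * e) for kz_j, e in zip(kz, eps)]   # p-polarization admittances
    beta = kz[1] * d_gold                               # phase thickness of the gold layer
    M = np.array([[np.cos(beta), -1j * np.sin(beta) / q[1]],
                  [-1j * q[1] * np.sin(beta), np.cos(beta)]])
    B = M[0, 0] + M[0, 1] * q[2]
    C = M[1, 0] + M[1, 1] * q[2]
    r = (q[0] * B - C) / (q[0] * B + C)
    return abs(r) ** 2

# Scan internal angles above the glass/water critical angle and find the dip.
angles = np.radians(np.linspace(62.0, 80.0, 1801))
R = np.array([reflectance_p(t) for t in angles])
theta_min = np.degrees(angles[np.argmin(R)])
print(f"reflectance minimum near {theta_min:.2f} degrees")
```

Repeating this scan at a second wavelength, as in MP-SPR, gives the second curve needed to pin down thickness and refractive index uniquely.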
Metal particle plasmons are usually modeled using the Mie scattering theory.
In many cases no detailed models are applied, but the sensors are calibrated for the specific application, and used with interpolation within the calibration curve.
Novel applications
Due to the versatility of SPR instrumentation, this technique pairs well with other approaches, leading to novel applications in various fields, such as biomedical and environmental studies.
When coupled with nanotechnology, SPR biosensors can use nanoparticles as carriers for therapeutic implants. For instance, in the treatment of Alzheimer's disease, nanoparticles can be used to deliver therapeutic molecules in targeted ways. In general, SPR biosensing is demonstrating advantages over other approaches in the biomedical field due to this technique being label-free, lower in costs, applicable in point-of-care settings, and capable of producing faster results for smaller research cohorts.
In the study of environmental pollutants, SPR instrumentation can be used as a replacement for former chromatography-based techniques. Current pollution research relies on chromatography to monitor increases in pollution in an ecosystem over time. When SPR instrumentation with a Kretschmann prism configuration was used in the detection of chlorophene, an emerging pollutant, it was demonstrated that SPR has similar precision and accuracy levels as chromatography techniques. Furthermore, SPR sensing surpasses chromatography techniques through its high-speed, straightforward analysis.
Examples
Layer-by-layer self-assembly
One of the first common applications of surface plasmon resonance spectroscopy was the measurement of the thickness (and refractive index) of adsorbed self-assembled nanofilms on gold substrates. The resonance curves shift to higher angles as the thickness of the adsorbed film increases. This example is a 'static SPR' measurement.
When higher speed observation is desired, one can select an angle right below the resonance point (the angle of minimum reflectance), and measure the reflectivity changes at that point. This is the so-called 'dynamic SPR' measurement. The interpretation of the data assumes that the structure of the film does not change significantly during the measurement.
Binding constant determination
SPR can be used to study the real-time kinetics of molecular interactions. Determining the affinity between two binding partners involves establishing the equilibrium dissociation constant, representing the equilibrium value of the product quotient. This constant can be determined from dynamic SPR parameters: it is calculated as the dissociation rate constant divided by the association rate constant.
In this process, a ligand is immobilized on the dextran surface of the SPR crystal. Through a microflow system, a solution with the analyte is injected over the ligand-covered surface. The binding of the analyte to the ligand causes an increase in the SPR signal (expressed in response units, RU). Following the association time, a solution without the analyte (typically a buffer) is introduced into the microfluidics to initiate the dissociation of the bound complex between ligand and analyte. As the analyte dissociates from the ligand, the SPR signal decreases. From these association ('on rate', k_on) and dissociation rate constants ('off rate', k_off), the equilibrium dissociation constant ('binding constant', K_D) can be calculated.
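A minimal sketch of this analysis, assuming a simple 1:1 Langmuir interaction model; the rate constants, analyte concentration, and saturation response below are invented illustrative values, not measured data.

```python
import numpy as np

# Hypothetical 1:1 Langmuir model behind an SPR sensorgram.
k_on = 1e5      # association rate constant, 1/(M*s)   (invented)
k_off = 1e-3    # dissociation rate constant, 1/s      (invented)
conc = 100e-9   # analyte concentration during injection, M
R_max = 200.0   # saturation response, RU

# Association phase: R rises with observed rate k_obs = k_on*C + k_off.
t = np.linspace(0, 600, 601)
k_obs = k_on * conc + k_off
R_assoc = R_max * k_on * conc / k_obs * (1 - np.exp(-k_obs * t))

# Dissociation phase (buffer only): R decays as exp(-k_off * t).
t_d = np.linspace(0, 600, 601)
R_diss = R_assoc[-1] * np.exp(-k_off * t_d)

# Recover k_off from the log-linear dissociation decay, then K_D = k_off / k_on.
k_off_est = -np.polyfit(t_d, np.log(R_diss), 1)[0]
K_D = k_off_est / k_on
print(f"estimated K_D ≈ {K_D * 1e9:.1f} nM")   # true value here: 10 nM
```

The association phase only yields the observed rate k_obs, so in practice several analyte concentrations are injected and k_on is obtained from the slope of k_obs versus concentration.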
The detected SPR signal is a consequence of the electromagnetic 'coupling' of the incident light with the surface plasmon of the gold layer. This interaction is particularly sensitive to the characteristics of the layer at the gold–solution interface, which is usually just a few nanometers thick. When substances bind to the surface, it alters the way light is reflected, causing a change in the reflection angle, which can be measured as a signal in SPR experiments. One common application is measuring the kinetics of antibody-antigen interactions.
Thermodynamic analysis
As SPR biosensors facilitate measurements at different temperatures, thermodynamic analysis can be performed to obtain a better understanding of the studied interaction. By performing measurements at different temperatures, typically between 4 and 40 °C, it is possible to relate association and dissociation rate constants with activation energy and thereby obtain thermodynamic parameters including binding enthalpy, binding entropy, Gibbs free energy and heat capacity.
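A hedged illustration of such an analysis: the classical van 't Hoff treatment below uses invented K_D values at four temperatures and assumes a temperature-independent binding enthalpy (i.e. it neglects the heat capacity term, which a full analysis would include).

```python
import numpy as np

# Van 't Hoff sketch: ln(K_A) versus 1/T is fitted to a line;
# slope gives the binding enthalpy, intercept the binding entropy.
# All K_D values below are invented for illustration.
R_gas = 8.314                                     # J/(mol*K)
T = np.array([277.0, 288.0, 298.0, 310.0])        # ~4 to 37 °C, in kelvin
K_D = np.array([2.0e-9, 5.5e-9, 1.2e-8, 3.0e-8])  # illustrative K_D values, M

ln_KA = np.log(1.0 / K_D)                 # association constant K_A = 1/K_D
slope, intercept = np.polyfit(1.0 / T, ln_KA, 1)
dH = -slope * R_gas                       # binding enthalpy, J/mol
dS = intercept * R_gas                    # binding entropy, J/(mol*K)
dG_298 = dH - 298.0 * dS                  # Gibbs free energy at 25 °C, J/mol
print(f"dH ≈ {dH/1000:.0f} kJ/mol, dG(25 °C) ≈ {dG_298/1000:.0f} kJ/mol")
```

With these invented numbers binding weakens on warming, so the fit returns a negative (exothermic) enthalpy and a negative binding free energy, as expected for a favorable interaction.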
Pair-wise epitope mapping
As SPR allows real-time monitoring, individual steps in sequential binding events can be thoroughly assessed when investigating the suitability between antibodies in a sandwich configuration. Additionally, it allows the mapping of epitopes as antibodies of overlapping epitopes will be associated with an attenuated signal compared to those capable of interacting simultaneously.
Innovations
Magnetic plasmon resonance
Recently, there has been an interest in magnetic surface plasmons. These require materials with large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials.
Graphene
Layering graphene on top of gold has been shown to improve SPR sensor performance. Its high electrical conductivity increases the sensitivity of detection. The large surface area of graphene also facilitates the immobilization of biomolecules while its low refractive index minimizes its interference. Enhancing SPR sensitivity by incorporating graphene with other materials expands the potential of SPR sensors, making them practical in a broader range of applications. For instance, the enhanced sensitivity of graphene can be used in conjunction with a silver SPR sensor, providing a cost-effective alternative for measuring glucose levels in urine.
Graphene has also been shown to improve the resistance of SPR sensors to high-temperature annealing up to 500 °C.
Fiber-optic SPR
Recent advancements in SPR technology have given rise to novel formats increasing the scope and applicability of SPR sensing. Fiber optic SPR involves the integration of SPR sensors onto the ends of optical fibers, enabling the direct coupling of light with the surface plasmons as the analytes are passed through a hollow SPR core. This format offers enhanced sensitivity and allows for the development of compact sensing devices, making it particularly valuable for applications requiring remote sensing in the field. It also offers an increased surface area for analytes to bind to the inner lining of the fiber optic.
See also
Hydrogen sensor
Multi-parametric surface plasmon resonance
Nano-optics
Plasmon
Spinplasmonics
Surface plasmon polariton
Waves in plasmas
Localized surface plasmon
Quartz crystal microbalance
References
Further reading
A selection of free-download papers on Plasmonics in New Journal of Physics
Electromagnetism
Nanotechnology
Spectroscopy
Biochemistry methods
Biophysics
Forensic techniques
Protein–protein interaction assays
Plasmonics
Optical phenomena
GRB 090429B

GRB 090429B was a gamma-ray burst observed on 29 April 2009 by the Burst Alert Telescope aboard the Swift satellite. The burst triggered a standard burst-response observation sequence, which started 106 seconds after the burst. The X-ray telescope aboard the satellite identified an uncatalogued fading source. No optical or UV counterpart was seen in the UV–optical telescope. Around 2.5 hours after the burst trigger, a series of observations was carried out by the Gemini North telescope, which detected a bright object in the infrared part of the spectrum. No evidence of a host galaxy was found either by Gemini North or by the Hubble Space Telescope. Though this burst was detected in 2009, it was not until May 2011 that its distance estimate of 13.14 billion light-years was announced. With 90% likelihood, the burst had a photometric redshift greater than z = 9.06, which would make it the most distant GRB known, although the error bar on this estimate is large, providing a lower limit of z > 7.
The amount of energy released in the burst was estimated at 3.5 × 10⁵² erg. For comparison, the Sun's luminosity is 3.8 × 10³³ erg/s.
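A quick arithmetic check of that comparison, using the figures quoted above (the year-to-second conversion is the only added constant):

```python
# How long the Sun would need to shine at its present luminosity
# to radiate the burst's estimated energy.
E_burst = 3.5e52        # erg
L_sun = 3.8e33          # erg/s
seconds = E_burst / L_sun
years = seconds / 3.156e7   # seconds per year
print(f"{years:.1e} years")  # roughly 3e11 years, ~20x the age of the universe
```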
See also
GRB 090423, the most distant gamma-ray burst with spectroscopic confirmation
References
090429B
20090429
April 2009
Canes Venatici
Non-Hausdorff manifold

In geometry and topology, it is a usual axiom of a manifold to be a Hausdorff space. In general topology, this axiom is relaxed, and one studies non-Hausdorff manifolds: spaces locally homeomorphic to Euclidean space, but not necessarily Hausdorff.
Examples
Line with two origins
The most familiar non-Hausdorff manifold is the line with two origins, or bug-eyed line. This is the quotient space of two copies of the real line, ℝ × {a} and ℝ × {b} (with a ≠ b), obtained by identifying the points (x, a) and (x, b) whenever x ≠ 0.
An equivalent description of the space is to take the real line ℝ and replace the origin 0 with two origins 0_a and 0_b. The subspace ℝ ∖ {0} retains its usual Euclidean topology, and a local base of open neighborhoods at each origin 0_c is formed by the sets (U ∖ {0}) ∪ {0_c}, with U an open neighborhood of 0 in ℝ.
For each origin 0_c, the subspace obtained from ℝ by replacing 0 with 0_c is an open neighborhood of 0_c homeomorphic to ℝ. Since every point has a neighborhood homeomorphic to the Euclidean line, the space is locally Euclidean. In particular, it is locally Hausdorff, in the sense that each point has a Hausdorff neighborhood. But the space is not Hausdorff, as every neighborhood of 0_a intersects every neighborhood of 0_b. It is, however, a T1 space.
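As a small illustrative sketch (a toy model, not a proof), one can encode the basic neighborhoods of the two origins and verify that no pair of them is disjoint; representing a neighborhood solely by its radius is an assumption of this sketch.

```python
from fractions import Fraction

# Toy model of the line with two origins: a basic neighborhood of origin
# 0_a or 0_b of radius r is the punctured interval (-r, r) \ {0} plus that
# origin. Such a neighborhood contains every nonzero x with |x| < r.
def basic_nbhd(origin, r):
    """Represent a basic neighborhood of `origin` ('a' or 'b') by its radius."""
    assert r > 0
    return (origin, r)

def common_point(n1, n2):
    """Return a nonzero real number lying in both punctured neighborhoods."""
    r1, r2 = n1[1], n2[1]
    return min(r1, r2) / 2   # nonzero, with |x| < r1 and |x| < r2

# No matter how small the radii, the two neighborhoods always overlap,
# so 0_a and 0_b cannot be separated: the space is not Hausdorff.
for k1 in (1, 10, 1000):
    for k2 in (2, 7, 10**6):
        x = common_point(basic_nbhd("a", Fraction(1, k1)),
                         basic_nbhd("b", Fraction(1, k2)))
        assert x != 0
print("no disjoint neighborhoods found for the two origins")
```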
The space is second countable.
The space exhibits several phenomena that do not happen in Hausdorff spaces:
The space is path connected but not arc connected. In particular, to get a path from one origin to the other, one can first move left from 0_a to −1 within the line through the first origin, and then move back to the right from −1 to 0_b within the line through the second origin. But it is impossible to join the two origins with an arc, which is an injective path; intuitively, if one moves first to the left, one has to eventually backtrack and move back to the right.
The intersection of two compact sets need not be compact. For example, the sets [−1, 0) ∪ {0_a} ∪ (0, 1] and [−1, 0) ∪ {0_b} ∪ (0, 1] are compact, but their intersection [−1, 0) ∪ (0, 1] is not.
The space is locally compact in the sense that every point has a local base of compact neighborhoods. But the line through one origin does not contain a closed neighborhood of that origin, as any neighborhood of one origin contains the other origin in its closure. So the space is not a regular space, and even though every point has at least one closed compact neighborhood, the origin points do not admit a local base of closed compact neighborhoods.
The space does not have the homotopy type of a CW-complex, or of any Hausdorff space.
Line with many origins
The line with many origins is similar to the line with two origins, but with an arbitrary number of origins. It is constructed by taking an arbitrary set S with the discrete topology and taking the quotient space of ℝ × S that identifies the points (x, α) and (x, β) whenever x ≠ 0. Equivalently, it can be obtained from ℝ by replacing the origin 0 with many origins 0_α, one for each α ∈ S. The neighborhoods of each origin are described as in the two origin case.
If there are infinitely many origins, the space illustrates that the closure of a compact set need not be compact in general. For example, the closure of the compact set A = [−1, 1] (taken through one of the origins) is the set obtained by adding all the origins to A, and that closure is not compact. From being locally Euclidean, such a space is locally compact in the sense that every point has a local base of compact neighborhoods. But the origin points do not have any closed compact neighborhood.
Branching line
Similar to the line with two origins is the branching line. This is the quotient space of two copies of the real line, ℝ × {a} and ℝ × {b}, with the equivalence relation (x, a) ∼ (x, b) whenever x < 0.
This space has a single point for each negative real number and two points for every non-negative number: it has a "fork" at zero.
Étale space
The étale space of a sheaf, such as the sheaf of continuous real functions over a manifold, is a manifold that is often non-Hausdorff. (The étale space is Hausdorff if it is a sheaf of functions with some sort of analytic continuation property.)
Properties
Because non-Hausdorff manifolds are locally homeomorphic to Euclidean space, they are locally metrizable (but not metrizable in general) and locally Hausdorff (but not Hausdorff in general).
See also
Notes
References
General topology
Manifolds
Topology
Psalmotoxin

Psalmotoxin (PcTx1) is a spider toxin from the venom of the Trinidad tarantula Psalmopoeus cambridgei. It selectively blocks Acid Sensing Ion Channel 1a (ASIC1a), which is a proton-gated sodium channel.
Sources
Psalmotoxin is a toxin produced in the venom glands of the South American tarantula Psalmopoeus cambridgei.
Chemistry
The psalmotoxin structure can be classified as an inhibitor cystine knot (ICK) protein. Many ion channel effectors from snail, spider, and scorpion venoms share a similar ICK structure, although they possess very different pharmacological profiles. Among ICK toxins, psalmotoxin is the only peptide known to act on homomeric ASIC1 channels.
Psalmotoxin is a 40-amino acid peptide, possessing 6 cysteines linked by three disulfide bridges. The three-dimensional structure consists of a compact disulfide-bonded core from which three loops and the N and C termini emerge. The main element of the structure is a three-stranded antiparallel β-sheet.
Target
Psalmotoxin can bind to a particular isoform of the Acid Sensing Ion Channel, the Acid Sensing Ion Channel 1 (ASIC1). The binding of psalmotoxin has an effect on both of the two splice variants known of ASIC1, ASIC1a and ASIC1b.
ASIC1 has two transmembrane segments, joined by a large extracellular loop. This extracellular loop contains cysteine-rich domains. Psalmotoxin specifically binds these cysteine-rich domains in the extracellular loop of ASIC1, implying that this domain is the receptor site of ASIC1 for psalmotoxin.
ASICs are proton-gated sodium channels. ASICs open when H+ binds. This occurs when the H+-concentration in the environment of the neuron is slightly higher compared to resting H+-concentrations (pH = 7.4).
The expression of ASIC1a is high in both the central nervous system and in the sensory neurons of the dorsal root ganglia. ASIC1b is only expressed in sensory neurons. Expression of ASIC1a in the central nervous system relates to the involvement of ASIC1a in higher brain functions, such as learning, memory and fear conditioning. Expression of ASIC1a and ASIC1b in sensory neurons relates to their involvement in nociception and taste.
Mode of action
Binding of psalmotoxin to ASIC1a is reported to increase the affinity of ASIC1a for H+. This increase in affinity results in the shift of ASIC1a into the desensitized state at resting H+ concentrations (pH = 7.4). A desensitized channel is one that is bound to its ligand, H+, but is unable to let ions pass through. The underlying mechanism by which this increase in affinity for H+ produces the shift of ASIC1a channels into the desensitized state has not yet been elucidated.
Psalmotoxin also interacts with ASIC1b. In contrast to psalmotoxin binding to ASIC1a, binding of psalmotoxin to ASIC1b results in promoting the opening of the channel. This agonistic effect of psalmotoxin on ASIC1b only occurs in slightly acidic conditions (pH = 7.1).
Toxicity
The role of psalmotoxin in prey capture and the importance of ASIC1a channels as targets of venom components remains unclear.
Therapeutic uses
Psalmotoxin is currently not used for therapeutic purposes, but understanding the psalmotoxin/ASIC1a interaction may be of therapeutic value. Recently, it has been shown that activation of ASIC1a during the acidosis accompanying brain ischemia leads to significant Ca2+ influx, which contributes to neuronal cell death. Inhibition of ASIC1a by psalmotoxin significantly decreased ischemic neuronal cell death. Therefore, it is suggested that desensitizing ASIC1a channels by pharmacological intervention could be beneficial for patients at risk of having a stroke. For the same reasons, psalmotoxin could contribute to the search for a cure for gliomas. Inhibition of ASIC1a in the amygdala by psalmotoxin could have an anxiolytic effect. As ASICs play a role in nociception, psalmotoxin could be helpful in designing new analgesic drugs acting directly against pain at the nociceptor level.
See also
Vanillotoxin
References
Spider toxins
Neurotoxins
Ion channel toxins
Geobacter chapellei

Geobacter chapellei is a Gram-negative, strictly anaerobic, mesophilic and non-motile bacterium of the genus Geobacter, which has been isolated from aquifer sediments from the Atlantic Coastal Plain in the United States.
See also
List of bacterial orders
List of bacteria genera
References
Bacteria described in 2001
Thermodesulfobacteriota
Nickel silver

Nickel silver, maillechort, German silver, argentan, new silver, nickel brass, albata, or alpacca is a cupronickel (copper with nickel) alloy with the addition of zinc. The usual formulation is 60% copper, 20% nickel and 20% zinc. Nickel silver does not contain the element silver. It is named for its silvery appearance, which can make it attractive as a cheaper and more durable substitute. It is also well suited for being plated with silver.
A naturally occurring ore composition in China was smelted into the alloy known as báitóng (白銅, 'white copper' or cupronickel). The name German silver refers to the artificial recreation of the natural ore composition by German metallurgists. All modern, commercially important nickel silvers (such as those standardized under ASTM B122) contain zinc and are sometimes considered a subset of brass.
History
Nickel silver was first used in China, where it was smelted from readily available unprocessed ore. During the Qing dynasty, it was "smuggled into various parts of the East Indies", despite a government ban on the export of nickel silver. It became known in the West from imported wares called báitóng (Mandarin) or paktong (Cantonese) (白銅, literally "white copper"), for which the silvery metal colour was used to imitate sterling silver. According to Berthold Laufer, it was identical to khar sini, one of the seven metals recognized by Jābir ibn Hayyān.
In Europe, consequently, it was at first called paktong, which is roughly how 白銅 is pronounced in the Cantonese dialect. The earliest European mention of paktong occurs in the year 1597. From then until the end of the eighteenth century there are references to it as having been exported from Canton to Europe.
German artificial recreation of the natural ore composition, however, began to appear from about 1750 onward. In 1770, the Suhl metalworks were able to produce a similar alloy. In 1823, a German competition was held to perfect the production process: the goal was to develop an alloy that possessed the closest visual similarity to silver. The brothers Henniger in Berlin and Ernst August Geitner in Schneeberg independently achieved this goal. The manufacturer Berndorf named the trademark brand Alpacca, which became widely known in northern Europe for nickel silver. In 1830, the German process of manufacture was introduced into England, while exports of from China gradually stopped. In 1832, a form of German silver was also developed in Birmingham, England.
After the modern process for the production of electroplated nickel silver was patented in 1840 by George Richards Elkington and his cousin Henry Elkington in Birmingham, the development of electroplating caused nickel silver to become widely used. It formed an ideal, strong and bright substrate for the plating process. It was also used unplated in applications such as cutlery.
Uses
Nickel silver first became popular as a base metal for silver-plated cutlery and other silverware, notably the electroplated wares called EPNS (electroplated nickel silver). It is used in zippers, costume jewelry, for making musical instruments (e.g., flutes, clarinets), and is preferred for the track in electric model railway layouts, as its oxide is conductive. Better quality keys and lock cylinder pins are made of nickel silver for durability under heavy use. The alloy has been widely used in the production of coins (e.g. Portuguese escudo and the former GDR marks). Its industrial and technical uses include marine fittings and plumbing fixtures for its corrosion resistance, and heating coils for its high electrical resistance.
In the nineteenth century, particularly after 1868, North American Plains Indian metalsmiths were able to easily acquire sheets of German silver. They used them to cut, stamp, and cold hammer a wide range of accessories and also horse gear. Presently, Plains metalsmiths use German silver for pendants, pectorals, bracelets, armbands, hair plates, conchas (oval decorative plates for belts), earrings, belt buckles, necktie slides, stickpins, dush-tuhs, and tiaras. Nickel silver is the metal of choice among contemporary Kiowa and Pawnee in Oklahoma. Many of the metal fittings on modern higher-end equine harness and tack are of nickel silver.
Early in the twentieth century, German silver was used by automobile manufacturers before the advent of steel sheet metal. For example, the famous Rolls-Royce Silver Ghost of 1907 used German silver. After about 1920, it became widely used for pocketknife bolsters, due to its machinability and corrosion resistance. Prior to this, the most common metal was iron.
Musical instruments, including the flute, saxophone, trumpet, and French horn, string instrument frets, and electric guitar pickup parts, can be made of nickel silver. Many professional-level French horns are entirely made of nickel silver. Some saxophone manufacturers, such as Keilwerth, offer saxophones made of nickel silver (Shadow model); these are far rarer than traditional lacquered brass saxophones. Student-level flutes and piccolos are also made of silver-plated nickel silver, although upper-level models are likely to use sterling silver. Nickel silver produces a bright and powerful sound quality; an additional benefit is that the metal is harder and more corrosion resistant than brass. Because of its hardness, it is used for most clarinet, flute, oboe and similar wind instrument keys, normally silver-plated. It is used to produce the tubes (called staples) onto which oboe reeds are tied.
Many parts of brass instruments are made of nickel silver, such as tubes, braces or valve mechanism. Trombone slides of many manufacturers offer a lightweight nickel silver (LT slide) option for faster slide action and weight balance. The material was used in the construction of the National tricone resophonic guitar. The frets of guitar, mandolin, banjo, bass, and related string instruments are typically nickel silver. Nickel silver is sometimes used as ornamentation on the great highland bagpipe.
Nickel silver is also used in artworks. The Dutch sculptor Willem Lenssinck has made several pieces from German silver. Outdoors art made from this material easily withstands all kinds of weather.
See also
Argentium sterling silver
Britannia silver
Britannia metal
Cupronickel
Sheffield plate
Nickel Directive
List of named alloys
References
External links
Silver's Sterling Qualities
Chinese inventions
Copper alloys
Nickel alloys
Economy of the Qing dynasty
Silver
Complex geometry

In mathematics, complex geometry is the study of geometric structures and constructions arising out of, or described by, the complex numbers. In particular, complex geometry is concerned with the study of spaces such as complex manifolds and complex algebraic varieties, functions of several complex variables, and holomorphic constructions such as holomorphic vector bundles and coherent sheaves. Application of transcendental methods to algebraic geometry falls in this category, together with more geometric aspects of complex analysis.
Complex geometry sits at the intersection of algebraic geometry, differential geometry, and complex analysis, and uses tools from all three areas. Because of the blend of techniques and ideas from various areas, problems in complex geometry are often more tractable or concrete than in general. For example, the classification of complex manifolds and complex algebraic varieties through the minimal model program and the construction of moduli spaces sets the field apart from differential geometry, where the classification of possible smooth manifolds is a significantly harder problem. Additionally, the extra structure of complex geometry allows, especially in the compact setting, for global analytic results to be proven with great success, including Shing-Tung Yau's proof of the Calabi conjecture, the Hitchin–Kobayashi correspondence, the nonabelian Hodge correspondence, and existence results for Kähler–Einstein metrics and constant scalar curvature Kähler metrics. These results often feed back into complex algebraic geometry, and for example recently the classification of Fano manifolds using K-stability has benefited tremendously both from techniques in analysis and in pure birational geometry.
Complex geometry has significant applications to theoretical physics, where it is essential in understanding conformal field theory, string theory, and mirror symmetry. It is often a source of examples in other areas of mathematics, including in representation theory where generalized flag varieties may be studied using complex geometry leading to the Borel–Weil–Bott theorem, or in symplectic geometry, where Kähler manifolds are symplectic, in Riemannian geometry where complex manifolds provide examples of exotic metric structures such as Calabi–Yau manifolds and hyperkähler manifolds, and in gauge theory, where holomorphic vector bundles often admit solutions to important differential equations arising out of physics such as the Yang–Mills equations. Complex geometry additionally is impactful in pure algebraic geometry, where analytic results in the complex setting such as Hodge theory of Kähler manifolds inspire understanding of Hodge structures for varieties and schemes as well as p-adic Hodge theory, deformation theory for complex manifolds inspires understanding of the deformation theory of schemes, and results about the cohomology of complex manifolds inspired the formulation of the Weil conjectures and Grothendieck's standard conjectures. On the other hand, results and techniques from many of these fields often feed back into complex geometry, and for example developments in the mathematics of string theory and mirror symmetry have revealed much about the nature of Calabi–Yau manifolds, which string theorists predict should have the structure of Lagrangian fibrations through the SYZ conjecture, and the development of Gromov–Witten theory of symplectic manifolds has led to advances in enumerative geometry of complex varieties.
The Hodge conjecture, one of the millennium prize problems, is a problem in complex geometry.
Idea
Broadly, complex geometry is concerned with spaces and geometric objects which are modelled, in some sense, on the complex plane. Features of the complex plane and complex analysis of a single variable, such as an intrinsic notion of orientability (that is, being able to consistently rotate 90 degrees counterclockwise at every point in the complex plane), and the rigidity of holomorphic functions (that is, the existence of a single complex derivative implies complex differentiability to all orders) are seen to manifest in all forms of the study of complex geometry. As an example, every complex manifold is canonically orientable, and a form of Liouville's theorem holds on compact complex manifolds or projective complex algebraic varieties.
Complex geometry is different in flavour to what might be called real geometry, the study of spaces based around the geometric and analytical properties of the real number line. For example, whereas smooth manifolds admit partitions of unity, collections of smooth functions which can be identically equal to one on some open set, and identically zero elsewhere, complex manifolds admit no such collections of holomorphic functions. Indeed, this is the manifestation of the identity theorem, a typical result in complex analysis of a single variable. In some sense, the novelty of complex geometry may be traced back to this fundamental observation.
It is true that every complex manifold is in particular a real smooth manifold. This is because the complex plane is, after forgetting its complex structure, isomorphic to the real plane ℝ². However, complex geometry is not typically seen as a particular sub-field of differential geometry, the study of smooth manifolds. In particular, Serre's GAGA theorem says that every projective analytic variety is actually an algebraic variety, and the study of holomorphic data on an analytic variety is equivalent to the study of algebraic data.
This equivalence indicates that complex geometry is in some sense closer to algebraic geometry than to differential geometry. Another example of this which links back to the nature of the complex plane is that, in complex analysis of a single variable, singularities of meromorphic functions are readily describable. In contrast, the possible singular behaviour of a continuous real-valued function is much more difficult to characterise. As a result of this, one can readily study singular spaces in complex geometry, such as singular complex analytic varieties or singular complex algebraic varieties, whereas in differential geometry the study of singular spaces is often avoided.
In practice, complex geometry sits in the intersection of differential geometry, algebraic geometry, and analysis in several complex variables, and a complex geometer uses tools from all three fields to study complex spaces. Typical directions of interest in complex geometry involve classification of complex spaces, the study of holomorphic objects attached to them (such as holomorphic vector bundles and coherent sheaves), and the intimate relationships between complex geometric objects and other areas of mathematics and physics.
Definitions
Complex geometry is concerned with the study of complex manifolds, and complex algebraic and complex analytic varieties. In this section, these types of spaces are defined and the relationships between them presented.
A complex manifold X is a topological space such that:
X is Hausdorff and second countable.
X is locally homeomorphic to an open subset of ℂⁿ for some n. That is, for every point p in X, there is an open neighbourhood U of p and a homeomorphism φ to an open subset V of ℂⁿ. Such open sets are called charts.
If φ and ψ are any two overlapping charts which map onto open sets V and W of ℂⁿ respectively, then the transition function ψ ∘ φ⁻¹ is a biholomorphism.
Notice that since every biholomorphism is a diffeomorphism, and ℂⁿ is isomorphic as a real vector space to ℝ²ⁿ, every complex manifold of dimension n is in particular a smooth manifold of dimension 2n, which is always an even number.
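The compatibility condition on overlapping charts can be written out explicitly; the chart names below are illustrative notation, not fixed by the text:

```latex
% Charts \varphi_i : U_i \to V_i \subseteq \mathbb{C}^n on a complex
% manifold: on every overlap, the transition map
\varphi_j \circ \varphi_i^{-1} \colon
    \varphi_i(U_i \cap U_j) \longrightarrow \varphi_j(U_i \cap U_j)
% must be a biholomorphism (holomorphic with holomorphic inverse).
```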
In contrast to complex manifolds, which are always smooth, complex geometry is also concerned with possibly singular spaces. An affine complex analytic variety is a subset X ⊆ ℂⁿ such that about each point x ∈ X, there is an open neighbourhood U of x and a collection of finitely many holomorphic functions f₁, …, fₖ on U such that X ∩ U = {z ∈ U : f₁(z) = ⋯ = fₖ(z) = 0}. By convention we also require the set X to be irreducible. A point x ∈ X is singular if the Jacobian matrix of the vector of holomorphic functions (f₁, …, fₖ) does not have full rank at x, and non-singular otherwise. A projective complex analytic variety is a subset X ⊆ ℂℙⁿ of complex projective space that is, in the same way, locally given by the zeroes of a finite collection of holomorphic functions on open subsets of ℂℙⁿ.
One may similarly define an affine complex algebraic variety to be a subset X ⊆ ℂⁿ which is locally given as the zero set of finitely many polynomials in n complex variables. To define a projective complex algebraic variety, one requires the subset X ⊆ ℂℙⁿ to locally be given by the zero set of finitely many homogeneous polynomials.
In order to define a general complex algebraic or complex analytic variety, one requires the notion of a locally ringed space. A complex algebraic/analytic variety is a locally ringed space (X, O_X) which is locally isomorphic as a locally ringed space to an affine complex algebraic/analytic variety. In the analytic case, one typically allows X to have a topology that is locally equivalent to the subspace topology due to the identification with open subsets of ℂⁿ, whereas in the algebraic case X is often equipped with a Zariski topology. Again we also by convention require this locally ringed space to be irreducible.
Since the definition of a singular point is local, the definition given for an affine analytic/algebraic variety applies to the points of any complex analytic or algebraic variety. The set of points of a variety which are singular is called the singular locus, and its complement is the non-singular or smooth locus. We say a complex variety is smooth or non-singular if its singular locus is empty; that is, if it is equal to its non-singular locus.
By the implicit function theorem for holomorphic functions, every complex manifold is in particular a non-singular complex analytic variety, but is not in general affine or projective. By Serre's GAGA theorem, every projective complex analytic variety is actually a projective complex algebraic variety. When a complex variety is non-singular, it is a complex manifold. More generally, the non-singular locus of any complex variety is a complex manifold.
Types of complex spaces
Kähler manifolds
Complex manifolds may be studied from the perspective of differential geometry, whereby they are equipped with extra geometric structures such as a Riemannian metric or symplectic form. In order for this extra structure to be relevant to complex geometry, one should ask for it to be compatible with the complex structure in a suitable sense. A Kähler manifold is a complex manifold with a Riemannian metric and symplectic structure compatible with the complex structure. Every complex submanifold of a Kähler manifold is Kähler, and so in particular every non-singular affine or projective complex variety is Kähler, after restricting the standard Hermitian metric on ℂⁿ or the Fubini–Study metric on ℂℙⁿ respectively.
Other important examples of Kähler manifolds include Riemann surfaces, K3 surfaces, and Calabi–Yau manifolds.
Stein manifolds
Serre's GAGA theorem asserts that projective complex analytic varieties are actually algebraic. Whilst this is not strictly true for affine varieties, there is a class of complex manifolds that act very much like affine complex algebraic varieties, called Stein manifolds. A manifold X is Stein if it is holomorphically convex and holomorphically separable (see the article on Stein manifolds for the technical definitions). It can be shown however that this is equivalent to X being a complex submanifold of ℂⁿ for some n. Another way in which Stein manifolds are similar to affine complex algebraic varieties is that Cartan's theorems A and B hold for Stein manifolds.
Examples of Stein manifolds include non-compact Riemann surfaces and non-singular affine complex algebraic varieties.
Hyper-Kähler manifolds
A special class of complex manifolds is hyper-Kähler manifolds, which are Riemannian manifolds admitting three distinct compatible integrable almost complex structures I, J, and K which satisfy the quaternionic relations I² = J² = K² = IJK = −1. Thus, hyper-Kähler manifolds are Kähler manifolds in three different ways, and subsequently have a rich geometric structure.
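The quaternionic relations can be checked concretely in a 2×2 complex matrix representation. This is only an illustration: the particular matrices below are one standard choice, not taken from the text.

```python
# 2x2 complex matrices as ((a, b), (c, d)); one standard matrix
# representation of the quaternion units (illustrative choice).
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

I = ((1j, 0), (0, -1j))
J = ((0, 1), (-1, 0))
K = ((0, 1j), (1j, 0))
MINUS_ONE = ((-1, 0), (0, -1))

# verify the quaternionic relations I^2 = J^2 = K^2 = IJK = -1
assert mat_mul(I, I) == MINUS_ONE
assert mat_mul(J, J) == MINUS_ONE
assert mat_mul(K, K) == MINUS_ONE
assert mat_mul(mat_mul(I, J), K) == MINUS_ONE
```

Any other conjugate triple of matrices satisfying the same relations would serve equally well; the relations, not the representation, carry the geometry.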
Examples of hyper-Kähler manifolds include ALE spaces, K3 surfaces, Higgs bundle moduli spaces, quiver varieties, and many other moduli spaces arising out of gauge theory and representation theory.
Calabi–Yau manifolds
As mentioned, a particular class of Kähler manifolds is given by Calabi–Yau manifolds. These are Kähler manifolds X with trivial canonical bundle K_X. Typically the definition of a Calabi–Yau manifold also requires X to be compact. In this case Yau's proof of the Calabi conjecture implies that X admits a Kähler metric with vanishing Ricci curvature, and this may be taken as an equivalent definition of Calabi–Yau.
Calabi–Yau manifolds have found use in string theory and mirror symmetry, where they are used to model the extra 6 dimensions of spacetime in 10-dimensional models of string theory. Examples of Calabi–Yau manifolds are given by elliptic curves, K3 surfaces, and complex Abelian varieties.
Complex Fano varieties
A complex Fano variety is a complex algebraic variety X with ample anti-canonical line bundle (that is, K_X⁻¹ is ample). Fano varieties are of considerable interest in complex algebraic geometry, and in particular birational geometry, where they often arise in the minimal model program. Fundamental examples of Fano varieties are given by projective space ℂℙⁿ, and smooth hypersurfaces of ℂℙⁿ of degree less than n + 1.
Toric varieties
Toric varieties are complex algebraic varieties of dimension n containing an open dense subset biholomorphic to (ℂ*)ⁿ, equipped with an action of (ℂ*)ⁿ which extends the action on the open dense subset. A toric variety may be described combinatorially by its toric fan, and at least when it is non-singular, by a moment polytope. This is a polytope in ℝⁿ with the property that any vertex may be put into the standard form of the vertex of the positive orthant by the action of GL(n, ℤ). The toric variety can be obtained as a suitable space which fibres over the polytope.
Many constructions that are performed on toric varieties admit alternate descriptions in terms of the combinatorics and geometry of the moment polytope or its associated toric fan. This makes toric varieties a particularly attractive test case for many constructions in complex geometry. Examples of toric varieties include complex projective spaces, and bundles over them.
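The combinatorial nature of toric geometry can be illustrated with the standard smoothness test on a fan: a two-dimensional cone spanned by primitive rays u and v is smooth exactly when |det(u, v)| = 1. The rays and cone pairing below are the standard fan of ℂℙ², written down here as an assumption rather than quoted from the text.

```python
# Rays of the standard fan of CP^2 in the lattice Z^2.
rays = [(1, 0), (0, 1), (-1, -1)]

def cone_is_smooth(u, v):
    """A 2D cone spanned by primitive lattice vectors u, v is smooth
    iff u, v form a Z-basis of Z^2, i.e. |det(u, v)| = 1."""
    return abs(u[0] * v[1] - u[1] * v[0]) == 1

# The maximal cones of CP^2 are spanned by consecutive pairs of rays;
# all of them pass the determinant test, so CP^2 is non-singular.
cones = [(rays[i], rays[(i + 1) % 3]) for i in range(3)]
assert all(cone_is_smooth(u, v) for u, v in cones)
```

The same determinant criterion (in higher dimensions, a unimodularity condition on the generators of each maximal cone) is what the "standard form of the vertex of the positive orthant" condition on the moment polytope encodes.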
Techniques in complex geometry
Due to the rigidity of holomorphic functions and complex manifolds, the techniques typically used to study complex manifolds and complex varieties differ from those used in regular differential geometry, and are closer to techniques used in algebraic geometry. For example, in differential geometry, many problems are approached by taking local constructions and patching them together globally using partitions of unity. Partitions of unity do not exist in complex geometry, and so the problem of when local data may be glued into global data is more subtle. Precisely when local data may be patched together is measured by sheaf cohomology, and sheaves and their cohomology groups are major tools.
For example, famous problems in the analysis of several complex variables preceding the introduction of modern definitions are the Cousin problems, asking precisely when local meromorphic data may be glued to obtain a global meromorphic function. These old problems can be simply solved after the introduction of sheaves and cohomology groups.
Special examples of sheaves used in complex geometry include holomorphic line bundles (and the divisors associated to them), holomorphic vector bundles, and coherent sheaves. Since sheaf cohomology measures obstructions in complex geometry, one technique that is used is to prove vanishing theorems. Examples of vanishing theorems in complex geometry include the Kodaira vanishing theorem for the cohomology of line bundles on compact Kähler manifolds, and Cartan's theorems A and B for the cohomology of coherent sheaves on affine complex varieties.
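The Kodaira vanishing theorem mentioned here has a compact standard statement, recorded below for a compact Kähler manifold X with canonical bundle K_X and a positive (ample) line bundle L:

```latex
H^{q}\!\left(X,\; K_X \otimes L\right) = 0
    \qquad \text{for all } q > 0 .
```

In words: twisting the canonical bundle by any positive line bundle kills all higher sheaf cohomology, which is precisely the kind of "obstruction vanishing" the surrounding paragraph describes.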
Complex geometry also makes use of techniques arising out of differential geometry and analysis. For example, the Hirzebruch–Riemann–Roch theorem, a special case of the Atiyah–Singer index theorem, computes the holomorphic Euler characteristic of a holomorphic vector bundle in terms of characteristic classes of the underlying smooth complex vector bundle.
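The standard statement of Hirzebruch–Riemann–Roch, for a holomorphic vector bundle E on a compact complex manifold X, expresses the holomorphic Euler characteristic as an integral of characteristic classes:

```latex
\chi(X, E)
  \;=\; \sum_{q \ge 0} (-1)^{q} \dim_{\mathbb{C}} H^{q}(X, E)
  \;=\; \int_{X} \operatorname{ch}(E)\,\operatorname{td}(X) ,
```

where ch(E) is the Chern character of E and td(X) the Todd class of the tangent bundle. The left-hand side is analytic (sheaf cohomology); the right-hand side is purely topological.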
Classification in complex geometry
One major theme in complex geometry is classification. Due to the rigid nature of complex manifolds and varieties, the problem of classifying these spaces is often tractable. Classification in complex and algebraic geometry often occurs through the study of moduli spaces, which themselves are complex manifolds or varieties whose points classify other geometric objects arising in complex geometry.
Riemann surfaces
The term moduli was coined by Bernhard Riemann during his original work on Riemann surfaces. The classification theory is most well-known for compact Riemann surfaces. By the classification of closed oriented surfaces, compact Riemann surfaces come in a countable number of discrete types, measured by their genus g, which is a non-negative integer counting the number of holes in the given compact Riemann surface.
The classification essentially follows from the uniformization theorem, and is as follows:
g = 0: There is a single compact Riemann surface of genus 0, the Riemann sphere ℂℙ¹.
g = 1: There is a one-dimensional complex manifold classifying possible compact Riemann surfaces of genus 1, so-called elliptic curves, the modular curve. By the uniformization theorem any elliptic curve may be written as a quotient ℂ/(ℤ + τℤ) where τ is a complex number with strictly positive imaginary part. The moduli space is given by the quotient of the group SL(2, ℤ) acting on the upper half plane by Möbius transformations.
g > 1: For each genus greater than one, there is a moduli space of genus g compact Riemann surfaces, of dimension 3g − 3. Similar to the case of elliptic curves, this space may be obtained by a suitable quotient of Siegel upper half-space by the action of the group Sp(2g, ℤ).
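The quotient describing genus-1 curves can be made concrete. The sketch below (illustrative code, not from the text) moves any τ with Im τ > 0 into the standard fundamental domain for the modular group SL(2, ℤ), using the generators T: τ ↦ τ + 1 and S: τ ↦ −1/τ; two lattices ℤ + τℤ give isomorphic elliptic curves exactly when their parameters land on the same point of this domain.

```python
def reduce_to_fundamental_domain(tau):
    """Move tau (with Im tau > 0) into the standard fundamental domain
    |Re z| <= 1/2, |z| >= 1 of SL(2, Z) acting on the upper half plane,
    using only the generators T: z -> z + 1 and S: z -> -1/z."""
    z = complex(tau)
    assert z.imag > 0
    while True:
        z -= round(z.real)       # powers of T: bring |Re z| <= 1/2
        if abs(z) >= 1 - 1e-12:  # now inside the fundamental domain
            return z
        z = -1 / z               # apply S; this strictly increases Im z

tau = reduce_to_fundamental_domain(complex(5.3, 0.01))
```

Termination is guaranteed because each application of S with |z| < 1 strictly increases the imaginary part, and only finitely many lattice translates can have |z| < 1 above any given height.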
Holomorphic line bundles
Complex geometry is concerned not only with complex spaces, but other holomorphic objects attached to them. The classification of holomorphic line bundles on a complex variety X is given by the Picard variety Pic(X) of X.
The Picard variety can be easily described in the case where X is a compact Riemann surface of genus g. Namely, in this case the Picard variety is a disjoint union of complex Abelian varieties, each of which is isomorphic to the Jacobian variety of the curve, classifying divisors of degree zero up to linear equivalence. In differential-geometric terms, these Abelian varieties are complex tori, complex manifolds diffeomorphic to (S¹)²ᵍ, possibly with one of many different complex structures.
By the Torelli theorem, a compact Riemann surface is determined by its Jacobian variety, and this demonstrates one reason why the study of structures on complex spaces can be useful, in that it can allow one to classify the spaces themselves.
See also
Bivector (complex)
Calabi–Yau manifold
Cartan's theorems A and B
Complex analytic space
Complex Lie group
Complex polytope
Complex projective space
Cousin problems
Deformation Theory#Deformations of complex manifolds
Enriques–Kodaira classification
GAGA
Hartogs' extension theorem
Hermitian symmetric space
Hodge decomposition
Hopf manifold
Imaginary line (mathematics)
Kobayashi metric
Kobayashi–Hitchin correspondence
Kähler manifold
-lemma
Lelong number
List of complex and algebraic surfaces
Mirror symmetry
Multiplier ideal
Projective variety
Pseudoconvexity
Several complex variables
Stein manifold
References
E. H. Neville (1922) Prolegomena to Analytical Geometry in Anisotropic Euclidean Space of Three Dimensions, Cambridge University Press.
Complex manifolds
Several complex variables
Algebraic geometry
Complex geometry
Glycosyltransferase

Glycosyltransferases (GTFs, Gtfs) are enzymes (EC 2.4) that establish natural glycosidic linkages. They catalyze the transfer of saccharide moieties from an activated nucleotide sugar (also known as the "glycosyl donor") to a nucleophilic glycosyl acceptor molecule, the nucleophile of which can be oxygen-, carbon-, nitrogen-, or sulfur-based.
The result of glycosyl transfer can be a carbohydrate, glycoside, oligosaccharide, or a polysaccharide. Some glycosyltransferases catalyse transfer to inorganic phosphate or water. Glycosyl transfer can also occur to protein residues, usually to tyrosine, serine, or threonine to give O-linked glycoproteins, or to asparagine to give N-linked glycoproteins. Mannosyl groups may be transferred to tryptophan to generate C-mannosyl tryptophan, which is relatively abundant in eukaryotes. Transferases may also use lipids as an acceptor, forming glycolipids, and even use lipid-linked sugar phosphate donors, such as dolichol phosphates in eukaryotic organisms, or undecaprenyl phosphate in bacteria.
Glycosyltransferases that use sugar nucleotide donors are Leloir enzymes, after Luis F. Leloir, the scientist who discovered the first sugar nucleotide and who received the 1970 Nobel Prize in Chemistry for his work on carbohydrate metabolism. Glycosyltransferases that use non-nucleotide donors such as dolichol or polyprenol pyrophosphate are non-Leloir glycosyltransferases.
Mammals use only 9 sugar nucleotide donors for glycosyltransferases: UDP-glucose, UDP-galactose, UDP-GlcNAc, UDP-GalNAc, UDP-xylose, UDP-glucuronic acid, GDP-mannose, GDP-fucose, and CMP-sialic acid. The phosphate(s) of these donor molecules are usually coordinated by divalent cations such as manganese, however metal independent enzymes exist.
Many glycosyltransferases are single-pass transmembrane proteins, and they are usually anchored to membranes of the Golgi apparatus.
Mechanism
Glycosyltransferases can be segregated into "retaining" or "inverting" enzymes according to whether the stereochemistry of the donor's anomeric bond is retained (α→α) or inverted (α→β) during the transfer. The inverting mechanism is straightforward, requiring a single nucleophilic attack from the accepting atom to invert stereochemistry.
The retaining mechanism has been a matter of debate, but there exists strong evidence against a double displacement mechanism (which would cause two inversions about the anomeric carbon for a net retention of stereochemistry) or a dissociative mechanism (a prevalent variant of which was known as SNi). An "orthogonal associative" mechanism has been proposed which, akin to the inverting enzymes, requires only a single nucleophilic attack from an acceptor from a non-linear angle (as observed in many crystal structures) to achieve anomer retention.
Reaction reversibility
The recent discovery of the reversibility of many reactions catalyzed by inverting glycosyltransferases served as a paradigm shift in the field and raises questions regarding the designation of sugar nucleotides as 'activated' donors.
Classification by sequence
Sequence-based classification methods have proven to be a powerful way of generating hypotheses for protein function based on sequence alignment to related proteins. The carbohydrate-active enzyme (CAZy) database presents a sequence-based classification of glycosyltransferases into over 90 families. The same three-dimensional fold is expected to occur within each of the families.
Structure
In contrast to the diversity of 3D structures observed for glycoside hydrolases, glycosyltransferases have a much smaller range of structures. In fact, according to the Structural Classification of Proteins database, only three different folds have been observed for glycosyltransferases. Very recently, a new glycosyltransferase fold was identified for the glycosyltransferases involved in the biosynthesis of the NAG-NAM polymer backbone of peptidoglycan.
Inhibitors
Many inhibitors of glycosyltransferases are known. Some of these are natural products, such as moenomycin, an inhibitor of peptidoglycan glycosyltransferases, the nikkomycins, inhibitors of chitin synthase, and the echinocandins, inhibitors of fungal β-1,3-glucan synthases. Some glycosyltransferase inhibitors are of use as drugs or antibiotics. Moenomycin is used in animal feed as a growth promoter. Caspofungin has been developed from the echinocandins and is in use as an antifungal agent. Ethambutol is an inhibitor of mycobacterial arabinotransferases and is used for the treatment of tuberculosis. Lufenuron is an inhibitor of insect chitin synthesis and is used to control fleas in animals. Imidazolium-based synthetic inhibitors of glycosyltransferases have been designed for use as antimicrobial and antiseptic agents.
Determinant of blood type
The ABO blood group system is determined by what type of glycosyltransferases are expressed in the body.
The ABO gene locus expressing the glycosyltransferases has three main allelic forms: A, B, and O. The A allele encodes α-1,3-N-acetylgalactosaminyltransferase, which bonds α-N-acetylgalactosamine to the D-galactose end of the H antigen, producing the A antigen. The B allele encodes α-1,3-galactosyltransferase, which joins α-D-galactose to the D-galactose end of the H antigen, creating the B antigen. In the case of the O allele, exon 6 contains a deletion that results in a loss of enzymatic activity. The O allele differs slightly from the A allele by deletion of a single nucleotide, guanine, at position 261. The deletion causes a frameshift and results in translation of an almost entirely different protein that lacks enzymatic activity. This results in the H antigen remaining unchanged in case of O groups.
The combination of glycosyltransferases by both alleles present in each person determines whether there is an AB, A, B or O blood type.
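The genotype-to-phenotype rule described above (A and B codominant, O recessive because its product has no enzymatic activity) can be sketched as a small function; the code is illustrative, not part of the source:

```python
def blood_type(allele1, allele2):
    """ABO phenotype from a pair of alleles ('A', 'B', 'O').
    A and B are codominant; O is recessive, since the O allele's
    frameshifted product lacks enzymatic activity and leaves the
    H antigen unmodified."""
    expressed = {a for a in (allele1, allele2) if a != 'O'}
    if expressed == {'A', 'B'}:
        return 'AB'
    if expressed == {'A'}:
        return 'A'
    if expressed == {'B'}:
        return 'B'
    return 'O'
```

So, for example, the genotypes AO and AA both give blood type A, while only OO gives blood type O.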
Uses
Glycosyltransferases have been widely used in both the targeted synthesis of specific glycoconjugates as well as the synthesis of differentially glycosylated libraries of drugs, biological probes or natural products in the context of drug discovery and drug development (a process known as glycorandomization). Suitable enzymes can be isolated from natural sources or produced recombinantly. As an alternative, whole cell-based systems using either endogenous glycosyl donors or cell-based systems containing cloned and expressed systems for synthesis of glycosyl donors have been developed. In cell-free approaches, the large-scale application of glycosyltransferases for glycoconjugate synthesis has required access to large quantities of the glycosyl donors. On the flip-side, nucleotide recycling systems that allow the resynthesis of glycosyl donors from the released nucleotide have been developed. The nucleotide recycling approach has a further benefit of reducing the amount of nucleotide formed as a by-product, thereby reducing the amount of inhibition caused to the glycosyltransferase of interest – a commonly observed feature of the nucleotide byproduct.
See also
Carbohydrate chemistry
Chemical glycosylation
Glucuronosyltransferase
Glycogen synthase
Glycosyl acceptor
Glycosyl donor
Glycosylation
Oligosaccharyltransferase
References
Carbohydrates
Carbohydrate chemistry
Transferases
EC 2.4
EC 2.4.1
EC 2.4.2
Peripheral membrane proteins
Glycobiology
Customs Convention on Containers

The Customs Convention on Containers is a United Nations and International Maritime Organization treaty whereby states agree to allow intermodal containers to be temporarily brought into their states duty- and tax-free.
The original Convention was concluded in Geneva on 18 May 1956 and entered into force on 4 August 1959. On 2 December 1972, a new Convention was concluded with the provision that when it entered into force, it would replace the 1956 Convention for the parties that ratify it. The 1972 Convention entered into force on 6 December 1975. The 1956 Convention was ratified by 44 states; as of 2016, the 1972 Convention has been ratified by 40 states. The International Container Bureau was instrumental in the creation of the revised 1972 Convention.
The Convention allows shipping containers to be brought from one ratifying state into another duty- and tax-free for a period of three months.
The Convention was concluded at the same conference that concluded the Customs Convention on the Temporary Importation of Commercial Road Vehicles, the Customs Convention on the Temporary Importation for Private Use of Aircraft and Pleasure Boats, and the CMR Convention.
The Convention was somewhat superseded in 1990 by the Istanbul Convention, which combines in one single instrument the various conventions on the temporary admission of specific goods.
See also
ATA Carnet
External links
Text of 1972 Convention
Ratification status of 1956 Convention
Ratification status of 1972 Convention
Customs Convention on Containers Handbook, World Customs Organization
International Maritime Organization treaties
United Nations treaties
Customs treaties
Transport treaties
Intermodal transport
1956 in Switzerland
1972 in Switzerland
1956 in transport
1972 in transport
Treaties concluded in 1956
Treaties concluded in 1972
Treaties entered into force in 1959
Treaties entered into force in 1975
Treaties of Algeria
Treaties of Antigua and Barbuda
Treaties of Australia
Treaties of Austria
Treaties of Belgium
Treaties of Bosnia and Herzegovina
Treaties of the People's Republic of Bulgaria
Treaties of the Kingdom of Cambodia (1953–1970)
Treaties of Cameroon
Treaties of Canada
Treaties of Croatia
Treaties of Cuba
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Finland
Treaties of France
Treaties of West Germany
Treaties of Denmark
Treaties of the Kingdom of Greece
Treaties of the Hungarian People's Republic
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Luxembourg
Treaties of Malawi
Treaties of Mauritius
Treaties of Moldova
Treaties of Montenegro
Treaties of the Netherlands
Treaties of Norway
Treaties of the Polish People's Republic
Treaties of the Estado Novo (Portugal)
Treaties of the Socialist Republic of Romania
Treaties of Serbia and Montenegro
Treaties of Sierra Leone
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of Francoist Spain
Treaties of Sweden
Treaties of Switzerland
Treaties of Trinidad and Tobago
Treaties of the United Kingdom
Treaties of the United States
Treaties of Yugoslavia
Treaties of Armenia
Treaties of Azerbaijan
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Burundi
Treaties of the People's Republic of China
Treaties of Georgia (country)
Treaties of Indonesia
Treaties of Kazakhstan
Treaties of Kyrgyzstan
Treaties of Lebanon
Treaties of Liberia
Treaties of Lithuania
Treaties of Morocco
Treaties of New Zealand
Treaties of South Korea
Treaties of the Soviet Union
Treaties of Saudi Arabia
Treaties of Tunisia
Treaties of Turkey
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of Uzbekistan
Treaties of Liechtenstein
Treaties of East Germany
Treaties extended to Puerto Rico
Treaties extended to the Territory of Papua and New Guinea
Treaties extended to Norfolk Island
Treaties extended to Christmas Island
Treaties extended to the Cocos (Keeling) Islands
Treaties extended to the Netherlands Antilles
Treaties extended to Aruba
Treaties extended to Dutch New Guinea
Treaties extended to the Isle of Man
Treaties extended to Guernsey
Treaties extended to Jersey
Treaties extended to the West Indies Federation
Treaties extended to Bermuda
Treaties extended to the British Solomon Islands
Treaties extended to Brunei (protectorate)
Treaties extended to British Cyprus
Treaties extended to the Falkland Islands
Treaties extended to the Gambia Colony and Protectorate
Treaties extended to Gibraltar
Treaties extended to the Gilbert and Ellice Islands
Treaties extended to British Mauritius
Treaties extended to the Colony of North Borneo
Treaties extended to the Colony of Sarawak
Treaties extended to the Colony of Sierra Leone
Treaties extended to the Crown Colony of Singapore
Treaties extended to the Sultanate of Zanzibar
Treaties extended to British Hong Kong
Treaties extended to West Berlin
Treaties extended to Liechtenstein
Antrodia ramentacea
Antrodia ramentacea is a species of polypore fungus in the family Fomitopsidaceae, first described in 1879 by Miles Joseph Berkeley and Christopher Edmund Broome and transferred into its current genus by Marinus Anton Donk in 1966.
Distribution and habitat
It appears in North America, Europe and Asia, most often in Europe. It usually grows on dead conifer wood, mostly pine and spruce.
References
Fomitopsidaceae
Fungi described in 1879
Fungi of Asia
Fungi of Europe
Fungi of North America
Taxa named by Elias Magnus Fries
Taxa named by Christopher Edmund Broome
Fungus species
Caldwell catalogue
The Caldwell catalogue is an astronomical catalogue of 109 star clusters, nebulae, and galaxies for observation by amateur astronomers. The list was compiled by Patrick Moore as a complement to the Messier catalogue.
While the Messier catalogue is used by amateur astronomers as a list of deep-sky objects for observation, Moore noted that Messier's list was not compiled for that purpose and excluded many of the sky's brightest deep-sky objects, such as the Hyades, the Double Cluster (NGC 869 and NGC 884), and the Sculptor Galaxy (NGC 253). The Messier catalogue was actually compiled as a list of known objects that might be confused with comets. Moore also observed that since Messier compiled his list from observations in Paris, it did not include bright deep-sky objects visible in the Southern Hemisphere, such as Omega Centauri, Centaurus A, the Jewel Box, and 47 Tucanae. Moore compiled a list of 109 objects to match the commonly accepted number of Messier objects (he excluded M110), and the list was published in Sky & Telescope in December 1995.
Moore used his other surname – Caldwell – to name the list, since the initial of "Moore" is already used for the Messier catalogue. Entries in the catalogue are designated with a "C" and the catalogue number (1 to 109).
Unlike objects in the Messier catalogue, which are listed roughly in the order of discovery by Messier and his colleagues, the Caldwell catalogue is ordered by declination, with C1 being the most northerly and C109 being the most southerly, although two objects (NGC 4244 and the Hyades) are listed out of sequence. Other errors in the original list have since been corrected: it incorrectly identified the S Norma Cluster (NGC 6087) as NGC 6067 and incorrectly labelled the Lambda Centauri Cluster (IC 2944) as the Gamma Centauri Cluster.
Reception
The Caldwell Catalogue has generated controversy in the amateur astronomy community for several reasons.
Moore did not discover any of the objects in his catalogue, many of which are very well known and not "neglected" as he claimed.
Its presentation as a catalogue with distinct designations, rather than as a simple list, may cause confusion among amateur astronomers, as the "C" designation is not commonly used.
The list was promoted as an extension of the Messier catalogue; however, its selection is somewhat arbitrary, with many easily viewable objects omitted while some objects not readily accessible to visual observers are included.
Caldwell advocates, however, see the catalogue as a useful list of some of the brightest and best known non-Messier deep-sky objects. Thus, advocates dismiss any "controversy" as being fabricated by older amateurs simply not able or willing to memorize the new designations despite every telescope database using the Caldwell IDs as the primary designation for over 25 years. NASA/Hubble also lists the 109 objects by their Caldwell number.
Caldwell star chart
Number of objects by type in the Caldwell catalogue
Caldwell objects
See also
Messier Catalogue
Herschel 400 Catalogue
New General Catalogue (NGC)
Index Catalogue (IC)
Revised New General Catalogue (RNGC)
Revised Index Catalogue (RIC)
References
External links
The Caldwell Catalogue at SEDS
The Caldwell Club
Caldwell Star Charts, Images and more
Searchable Caldwell Catalogue list
Clickable Caldwell Object table
Astronomical catalogues
Snort (software)
Snort is a free open source network intrusion detection system (IDS) and intrusion prevention system (IPS) created in 1998 by Martin Roesch, founder and former CTO of Sourcefire. Snort is now developed by Cisco, which purchased Sourcefire in 2013.
In 2009, Snort entered InfoWorld's Open Source Hall of Fame as one of the "greatest [pieces of] open source software of all time".
Uses
Snort's open-source network-based intrusion detection/prevention system (IDS/IPS) has the ability to perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis, content searching and matching.
The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, semantic URL attacks, buffer overflows, server message block probes, and stealth port scans.
Snort can be configured in three main modes: sniffer, packet logger, and network intrusion detection.
Sniffer Mode
The program will read network packets and display them on the console.
Packet Logger Mode
In packet logger mode, the program will log packets to the disk.
Network Intrusion Detection System Mode
In intrusion detection mode, the program will monitor network traffic and analyze it against a rule set defined by the user. The program will then perform a specific action based on what has been identified.
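A sketch of how the three modes are typically selected on the classic Snort 2.x command line (the flags follow the Snort 2 manual; paths and the config filename here are illustrative, and Snort 3 differs, e.g. it takes a Lua configuration):

```sh
# Sniffer mode: print packet headers (-v) and decoded application data (-d)
# to the console
snort -vd

# Packet logger mode: additionally write packets to a log directory
snort -dev -l ./log

# NIDS mode: analyze traffic against the rule set referenced by a config file
snort -d -l ./log -c /etc/snort/snort.conf
```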
Third-party tools
There are several third-party tools interfacing Snort for administration, reporting, performance and log analysis:
Snorby – a GPLv3 Ruby on Rails application
BASE
Sguil (free)
See also
List of free and open-source software packages
Sigma
Suricata (software)
YARA
Zeek
References
External links
Snort Blog
Talos Intelligence
Free security software
Computer security software
Linux security software
Unix network-related software
Lua (programming language)-scriptable software
Intrusion detection systems
Scientific notation
Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form
m × 10ⁿ,
or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa. The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10.
Decimal floating point is a computer arithmetic system closely related to scientific notation.
History
Styles
Normalized notation
Any real number can be written in the form m × 10ⁿ in many ways: for example, 350 can be written as 3.5 × 10² or 35 × 10¹ or 350 × 10⁰.
In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5 × 10². This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5 × 10⁻¹). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of m for all elements of the series.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation – although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10.
Engineering notation
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5 × 10⁻⁹ m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25 × 10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
E notation
Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 10⁷ can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation mEn, for a decimal significand m and integer exponent n, means the same as m × 10ⁿ. For example, 6.022 × 10²³ is written as 6.022E23 or 6.022e23. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.
Most popular programming languages – including Fortran, C/C++, Python, and JavaScript – use this "E" notation, which comes from Fortran and was present in the first version released for the IBM 704 in 1956. The E notation was already used by the developers of SHARE Operating System (SOS) for the IBM 709 in 1958. Later versions of Fortran (at least since FORTRAN IV as of 1961) also use "D" to signify double precision numbers in scientific notation, and newer Fortran compilers use "Q" to signify quadruple precision. The MATLAB programming language supports the use of either "E" or "D".
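As a small illustration in Python (one of the languages named above), the built-in float parser accepts either letter case, and `format` produces the lowercase form:

```python
# "mEn" means m × 10**n, so both spellings parse to the same value
x = float("6.022E23")
assert x == float("6.022e23")

# Formatting back to E notation with three digits after the decimal point
s = format(x, ".3e")
print(s)  # 6.022e+23
```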
The ALGOL 60 (1960) programming language uses a subscript ten "10" character instead of the letter "E", for example: 6.022₁₀23. This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol by a single quote, e.g. 6.022'+23, and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g. 6.022ю+23. Subsequently, the ALGOL 68 programming language provided a choice of characters, including "e", "E", "\", and the subscript ten "10". The ALGOL "10" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as U+23E8 DECIMAL EXPONENT SYMBOL.
Some programming languages use other symbols. For instance, Simula uses "&" (or "&&" for long reals), as in 6.022&23. Mathematica supports the shorthand notation 6.022*^23 (reserving the letter "E" for the mathematical constant e).
The first pocket calculators supporting scientific notation appeared in 1972. To enter numbers in scientific notation calculators include a button labeled "EXP" or "×10ˣ", among other variants. The displays of pocket calculators of the 1970s did not display an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. 6.022 23, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. 6.022 23, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term decapower for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, 6.022D23); these gained some currency in the programmable calculator user community. The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers. The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital E for the separator.
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled, e.g. 6.022 × 10³ would be written as "6.022③".
Significant figures
A significant figure is a digit in a number that adds to its precision. This includes all nonzero numbers, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1,230,400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1,230,400 would become 1.2304 × 10⁶ if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040 × 10⁶ or 1.230400 × 10⁶. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
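This unambiguity maps directly onto program output; a small Python sketch (the helper name `with_sig_figs` is ours):

```python
def with_sig_figs(x: float, k: int) -> str:
    """Format x in normalized E notation with k significant figures."""
    # k-1 digits after the decimal point, plus the single leading digit
    return f"{x:.{k - 1}e}"

print(with_sig_figs(1230400, 5))  # 1.2304e+06
print(with_sig_figs(1230400, 7))  # 1.230400e+06
```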
Estimated final digits
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as 1.67262192369(51) × 10⁻²⁷ kg, which is shorthand for (1.67262192369 ± 0.00000000051) × 10⁻²⁷ kg. However it is still unclear whether the error (0.00000000051 × 10⁻²⁷ kg in this case) is the maximum possible error, standard error, or some other confidence interval.
Use of spaces
In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal width space or a thin space) that is allowed only before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.
Further examples of scientific notation
An electron's mass is about 0.000 000 000 000 000 000 000 000 000 000 911 kg. In scientific notation, this is written 9.11 × 10⁻³¹ kg.
The Earth's mass is about 5,972,400,000,000,000,000,000,000 kg. In scientific notation, this is written 5.9724 × 10²⁴ kg.
The Earth's circumference is approximately 40,000,000 m. In scientific notation, this is 4 × 10⁷ m. In engineering notation, this is written 40 × 10⁶ m. In SI writing style, this may be written 40 Mm (40 megametres).
An inch is defined as exactly 25.4 mm. Using scientific notation, this value can be uniformly expressed to any desired precision, from the nearest tenth of a millimetre (2.54 × 10¹ mm) to the nearest nanometre (2.5400000 × 10¹ mm), or beyond.
Hyperinflation means that too much money is put into circulation, perhaps by printing banknotes, chasing too few goods. It is sometimes defined as inflation of 50% or more in a single month. In such conditions, money rapidly loses its value. Some countries have had events of inflation of 1 million percent or more in a single month, which usually results in the rapid abandonment of the currency. For example, in November 2008 the monthly inflation rate of the Zimbabwean dollar reached 79.6 billion percent (470% per day); the approximate value with three significant figures would be 7.96 × 10¹⁰ %, or more simply a rate of 7.96 × 10⁸.
Converting numbers
Converting a number in these cases means to either convert the number into scientific notation form, convert it back into decimal form or to change the exponent part of the equation. None of these alter the actual number, only how it's expressed.
Decimal to scientific
First, move the decimal separator point sufficient places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10ⁿ; to the right, × 10⁻ⁿ. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10⁶ appended, resulting in 1.2304 × 10⁶. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321 × 10⁻³ as a result.
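The shifting procedure can be carried out exactly with Python's `decimal` module; a sketch (the function name is ours):

```python
from decimal import Decimal

def to_scientific(x: str):
    """Return (m, n) with 1 <= |m| < 10 such that m * 10**n equals x."""
    d = Decimal(x).normalize()
    sign, digits, exp = d.as_tuple()
    n = exp + len(digits) - 1   # power of ten of the leading digit
    m = d.scaleb(-n)            # shift the decimal separator n places
    return m, n

print(to_scientific("1230400"))     # (Decimal('1.2304'), 6)
print(to_scientific("-0.0040321"))  # (Decimal('-4.0321'), -3)
```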
Scientific to decimal
Converting a number from scientific notation to decimal notation, first remove the × 10ⁿ on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304 × 10⁶ would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321 × 10⁻³ would have its decimal separator moved 3 digits to the left and be −0.0040321.
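The reverse conversion is likewise a pure decimal-point shift, which `Decimal.scaleb` performs without binary floating-point rounding (function name ours):

```python
from decimal import Decimal

def to_plain(m: str, n: int) -> str:
    """Expand m × 10**n into ordinary decimal notation."""
    return format(Decimal(m).scaleb(n), "f")  # 'f' forces fixed-point output

print(to_plain("1.2304", 6))    # 1230400
print(to_plain("-4.0321", -3))  # -0.0040321
```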
Exponential
Conversion between different scientific notation representations of the same number is achieved by performing opposite operations on the significand and the exponent: the decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as in:
1.234 × 10³ = 12.34 × 10² = 123.4 × 10¹ = 1234
Basic operations
Given two numbers in scientific notation,
x0 = m0 × 10^(n0)
and
x1 = m1 × 10^(n1),
multiplication and division are performed using the rules for operation with exponentiation:
x0 x1 = m0 m1 × 10^(n0 + n1)
and
x0 / x1 = (m0 / m1) × 10^(n0 − n1).
Some examples are:
(2 × 10⁵) × (3 × 10²) = 6 × 10⁷
and
(6 × 10⁷) / (3 × 10²) = 2 × 10⁵.
Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can be simply added or subtracted; first rewrite one number to match the other's exponent:
x1 = (m1 × 10^(n1 − n0)) × 10^(n0).
Next, add or subtract the significands:
x0 ± x1 = (m0 ± m1 × 10^(n1 − n0)) × 10^(n0).
An example:
2.34 × 10⁻⁵ + 5.67 × 10⁻⁶ = 2.34 × 10⁻⁵ + 0.567 × 10⁻⁵ = 2.907 × 10⁻⁵ ≈ 2.91 × 10⁻⁵
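The multiplication and addition rules can be sketched on (significand, exponent) pairs in Python; for brevity only upward renormalization (|m| ≥ 10) is handled, and the function names are ours:

```python
def sci_mul(a, b):
    """(m1 × 10**n1) × (m2 × 10**n2) = (m1*m2) × 10**(n1+n2)."""
    (m1, n1), (m2, n2) = a, b
    m, n = m1 * m2, n1 + n2
    while abs(m) >= 10:        # renormalize, e.g. 12 × 10**7 -> 1.2 × 10**8
        m, n = m / 10, n + 1
    return m, n

def sci_add(a, b):
    """Align both numbers to the larger exponent, then add significands."""
    (m1, n1), (m2, n2) = a, b
    if n1 < n2:
        (m1, n1), (m2, n2) = (m2, n2), (m1, n1)
    m = m1 + m2 * 10 ** (n2 - n1)
    while abs(m) >= 10:
        m, n1 = m / 10, n1 + 1
    return m, n1

print(sci_mul((4, 5), (3, 2)))  # (1.2, 8)
print(sci_add((2, 5), (5, 4)))  # (2.5, 5)
```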
Other bases
While base ten is normally used for scientific notation, powers of other bases can be used too, base 2 being the next most commonly used one.
For example, in base-2 scientific notation, the number 1001b in binary (=9d) is written as 1.001b × 2³ or, using binary numbers throughout, 1.001b × 10b^11b. In E notation, this is written as 1.001bE11b (or shorter: 1.001E11) with the letter "E" now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E", a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968, as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number in decimal representation: 1.125 × 2³ (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating point numbers, where the exponent is displayed as decimal number even in binary mode, so the above becomes 1.001b × 2³ or shorter 1.001B3.
This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1×2¹⁰ (kibi), 1B20 for 1×2²⁰ (mebi), 1B30 for 1×2³⁰ (gibi), 1B40 for 1×2⁴⁰ (tebi)).
Similar to "B" (or "b"), the letters "H" (or "h") and "O" (or "o", or "C") are sometimes also used to indicate times 16 or 8 to the power as in 1.25 = 1.40 × 16⁰ = 1.40H0 = 1.40h0, or 98000 = 2.7732 × 8⁵ = 2.7732o5 = 2.7732C5.
Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal. This notation can be produced by implementations of the printf family of functions following the C99 specification and (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers. Starting with C++11, C++ I/O functions could parse and print the P notation as well. Meanwhile, the notation has been fully adopted by the language standard since C++17. Apple's Swift supports it as well. It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DE₁₆ × 2⁴².
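Python exposes this C99-style notation directly through the documented `float.hex` and `float.fromhex` methods, which can be used to check the example above:

```python
# P notation: hexadecimal significand, decimal power-of-two exponent
x = float.fromhex("0x1.3DEp42")
print(x)  # 5461050916864.0

# And back: every float has an exact hexadecimal representation
print((1.25).hex())  # 0x1.4000000000000p+0
```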
Engineering notation can be viewed as a base-1000 scientific notation.
See also
Positional notation
ISO/IEC 80000 – an international standard which guides the use of physical quantities and units of measurement in science
Suzhou numerals – a Chinese numeral system formerly used in commerce, with order of magnitude written below the significand
RKM code – a notation to specify resistor and capacitor values, with symbols for powers of 1000
References
External links
Decimal to Scientific Notation Converter
Scientific Notation to Decimal Converter
Scientific Notation in Everyday Life
An exercise in converting to and from scientific notation
Scientific Notation Converter
Scientific Notation chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series.
Mathematical notation
Measurement
Numeral systems
The E and B Experiment
The E and B Experiment (EBEX) was an experiment that measured the cosmic microwave background (CMB) radiation of a part of the sky during two sub-orbital (high-altitude) balloon flights, taking large, high-fidelity images of the CMB polarization anisotropies with a balloon-borne telescope. The high altitude of the telescope made it possible to reduce the atmospheric absorption of microwaves, which allowed a massive cost reduction compared to a satellite probe; however, only a small part of the sky can be scanned, and for a shorter duration than a typical satellite mission such as WMAP.
The first flight was an engineering flight over North America in 2009. For the science flight, EBEX was launched on 29 December 2012, near McMurdo Station in Antarctica. It circled around the South Pole using the polar vortex winds before landing on 24 January 2013, some distance from McMurdo.
Instrumentation
EBEX consists of a 1.5 m Dragone-type telescope that provides a resolution of 8 arcminutes in frequency bands centered on 150, 250, and 410 GHz. Polarimetry is achieved with a continuously-rotating achromatic half-wave plate supported by a superconducting magnetic bearing and a fixed wire grid polarizer. The wire grid is mounted at 45 degrees to the incoming light beam and transmits one polarization state while reflecting the other. Each polarization state is subsequently detected by its own focal plane with a 6 degree instantaneous field-of-view on the sky. Each of the focal planes contains up to 960 transition-edge sensors read out with frequency-domain SQUID multiplexing.
Temporary disappearance
The EBEX telescope was reported missing in May 2012, while in transit from the University of Minnesota to the NASA
Columbia Scientific Balloon Facility in Palestine, Texas. The driver of the truck said that the trailer had been stolen while parked at a motel in Dallas. Scientists and employees of the trucking company searched the area and found the missing trailer parked at a truck wash near Hutchins, Texas. The trailer had been opened, but no scientific equipment had been stolen and the telescope was undamaged.
Flight
EBEX launched from Williams Field on the Antarctic coast on 29 December 2012.
See also
Cosmic microwave background experiments
Observational cosmology
References
External links
Main (UMN) Site
Miller CMB group
Physics experiments
Radio astronomy
Cosmic microwave background experiments
Balloon-borne telescopes
Astronomical experiments in the Antarctic
Canavanine
L-(+)-(S)-Canavanine is a non-proteinogenic amino acid found in certain leguminous plants. It is structurally related to the proteinogenic α-amino acid L-arginine, the sole difference being the replacement of a methylene bridge (a −CH2− unit) in arginine with an oxa group (i.e., an oxygen atom) in canavanine. Canavanine is accumulated primarily in the seeds of the organisms which produce it, where it serves both as a highly deleterious defensive compound against herbivores (due to cells mistaking it for arginine) and a vital source of nitrogen for the growing embryo. The related L-canaline is similar to ornithine.
Toxicity
The mechanism of canavanine's toxicity is that organisms that consume it typically mistakenly incorporate it into their own proteins in place of L-arginine, thereby producing structurally aberrant proteins that may not function properly. Cleavage by arginase also produces canaline, a potent insecticide.
The toxicity of canavanine may be enhanced under conditions of protein starvation, and canavanine toxicity, resulting from consumption of Hedysarum alpinum seeds with a concentration of 1.2% canavanine weight/weight, has been implicated in the death of a malnourished Christopher McCandless. (McCandless was the subject of Jon Krakauer's book (and subsequent movie) Into the Wild).
In mammals
NZB/W F1, NZB, and DBA/2 mice fed L-canavanine develop a syndrome similar to systemic lupus erythematosus, while BALB/c mice fed a steady diet of protein containing 1% canavanine showed no change in lifespan.
Alfalfa seeds and sprouts contain L-canavanine. The L-canavanine in alfalfa has been linked to lupus-like symptoms in primates, including humans, and to other autoimmune diseases. Stopping consumption often reverses the problem.
Tolerance
Some specialized herbivores tolerate L-canavanine either because they metabolize it efficiently (cf. L-canaline) or avoid its incorporation into their own nascent proteins.
By metabolic detoxification
Herbivores may be able to metabolize canavanine efficiently. The beetle Caryedes brasiliensis is able to break canavanine down to canaline, then further detoxifies canaline by reductive deamination to form homoserine and ammonia. As a result, the beetle not only tolerates the chemical, but uses it as a source of nitrogen to synthesize its other amino acids to allow it to develop.
By selectivity
An example of this ability can be found in the larvae of the tobacco budworm Heliothis virescens, which can tolerate large (lethal concentration 50, or LC50, of 300 mM) amounts of dietary canavanine. These larvae fastidiously avoid incorporation of L-canavanine into their nascent proteins due to gastrointestinal expression of canavanine hydrolase, an enzyme that cleaves L-canavanine into L-homoserine and hydroxyguanidine, and L-arginine kinase, which phosphorylates L-canavanine. In contrast, larvae of the tobacco hornworm Manduca sexta can only tolerate tiny amounts (1.0 microgram per kilogram of fresh body weight) of dietary canavanine because their arginine-tRNA ligase has little, if any, discriminatory capacity. The arginine-tRNA synthetase of these organisms has not been examined experimentally, but comparative studies of the incorporation of radiolabeled L-arginine and L-canavanine have shown that in Manduca sexta the ratio of incorporation is about 3 to 1.
Dioclea megacarpa seeds contain high levels of canavanine. The beetle Caryedes brasiliensis is able to tolerate this however as it has the most highly discriminatory arginine-tRNA ligase known (as of 1982). In this insect, the level of radiolabeled L-canavanine incorporated into newly synthesized proteins is barely measurable. Moreover, this beetle uses canavanine as a nitrogen source (see above).
See also
Canaline
Arginine
References
Alpha-Amino acids
Toxic amino acids
Non-proteinogenic amino acids
Plant toxins
Oxime ethers
Wildlife of the Galápagos Islands
The Galápagos Islands are off the west coast of South America straddling the equator. The Galápagos are located at the confluence of several currents including the cold Humboldt Current travelling north from South America and the Panama Current travelling south from Central America. These currents cool the islands and provide the perfect environment for the wildlife there.
The islands are volcanic in origin and were never attached to any continent. Galápagos wildlife arrived by flying, floating or swimming. Birds might have flown there by accident and settled due to favourable conditions. Mammals or reptiles might have floated on a piece of wood and drifted to the islands. Some animals, like marine iguanas, may have swum there. In most environments the larger mammals are the predators at the top of the food chain, but those animals did not make it to the Galápagos. Thus the giant Galápagos tortoise became the largest land animal. Due to the lack of natural predators, the wildlife in the Galápagos is extremely tame and has no instinctive fear.
The Galápagos Islands are home to a remarkable number of endemic species. The stark, rocky islands (many with few plants) forced many species to adapt to survive, and in doing so they evolved into new species. It was after visiting the Galápagos and studying the wildlife that a young Charles Darwin developed his theory of evolution.
Fauna
One of the best-known animals is the Galápagos tortoise, which once lived on ten of the islands. Now, some tortoise species are extinct or extinct in the wild and they live on six of the islands. The tortoises have an average lifespan of over 130 years.
The marine iguana is also extremely unusual, since it is the only iguana adapted to life in the sea. Land iguanas, lava lizards, geckos and harmless snakes are also found on the islands. The large number and range of birds is also of interest to scientists and tourists. Around 56 species live in the archipelago, of which 27 are found only in the Galápagos. Some of these are found only on one island.
The most outstanding are the Galápagos penguins, which live on the colder coasts. Also notable are Darwin's finches, frigatebirds, albatrosses, gulls, boobies, pelicans and Galápagos hawks. Two birds, the flightless cormorant and the Galápagos crake, which is nearly flightless, evolved to their successful form on the islands without natural predators.
In contrast, there are few mammal species, mostly sea mammals such as whales, dolphins and sea lions. A few species of endemic Galápagos mice (or rice rats), the Santiago Galápagos mouse and the Fernandina Galápagos mouse, have also recently been rediscovered.
Charles Darwin discovered over 100 species of birds on the islands, the most famous of which were Darwin's finches.
Flora
On the larger Galápagos Islands, four ecological zones have been defined: coastal, low or dry, transitional and humid. In the first, species such as myrtle, mangrove and saltbush can be found. In the second grow cactus, Bursera graveolens (incense tree), carob tree, manchineel (poison apple tree), chala and yellow cordia, among others. In the transitional zone taller trees, epiphytes and perennial herbs can be seen; the best-known varieties are cat's claw and espuela de gallo. In the humid sector are the cogojo, Galápagos guava, cat's claw, Galápagos coffee, passionflower and some types of moss, ferns and fungi.
Invasive species
Invasive species are organisms that are not native to a place. They can wreak havoc on ecosystems, infrastructure and economies. Species can be introduced naturally or, more commonly, through human actions such as colonization, tourism, or the releasing of pets or livestock. There are over 1,300 invasive species in the Galápagos Islands, consisting of over 500 insects, over 750 plants and over 30 vertebrates. Most of the plants were brought for agricultural and aesthetic reasons. Due to their isolation, the Galápagos Islands are highly susceptible to invasive species, but the biodiversity of the islands makes them one of Ecuador's most prized features. Scientists who study the flora and fauna in the Galápagos agree that the increasing number of invasive species in the region is "the single greatest threat to the terrestrial ecosystems".
Feral goats introduced to the islands for agricultural reasons had a huge impact. They are dangerous to the environment because they eat almost everything, destroying many habitats. The lack of natural predators led to overpopulation, which had a huge impact on the Galápagos tortoise, driving the tortoises near to extinction.
Fixing invasive species problems is difficult and expensive. There are many organizations dedicated to preventing and eradicating invasive species. For instance, the Charles Darwin Foundation helped create the Galápagos Inspection and Quarantine System (SICGAL), which checks luggage brought into the Galápagos Islands for potentially invasive animals and plants. Project Isabela worked to rid the islands of their feral goats. The cull left large numbers of goat carcasses, which were left on the ground so that their nutrients would return to the ecosystem. Other invasive species that have been successfully eradicated include fire ants, rock pigeons, cats and a species of blackberry bush. Scientists have also suggested the release of natural enemies to control population growth among the invasive species.
In 2024, the Galápagos National Park Directorate and the Galápagos Conservancy successfully rehabilitated 136 Galápagos tortoises on Isabela Island. The young tortoises, between 5 and 9 years old, were reared in the Arnaldo Tupiza Breeding and Rearing Center on Isabela and transported by helicopter to Cinco Cerros, an area of the island near the Cerro Azul volcano. Tortoises are a vital part of the ecosystem as they disperse seeds. Breeding programs for Galápagos tortoises have been successful across the islands; on Española Island the program was such a success that the island's 14 tortoises multiplied to 3,000.
References
External links
Birding Site Guide – provides birders with free "where to watch birds" information worldwide
Fauna of Ecuador
Flora of the Galápagos Islands
Biota of archipelagoes | Wildlife of the Galápagos Islands | Biology | 1,331 |
22,316,878 | https://en.wikipedia.org/wiki/Horizontal%20boring%20machine | A horizontal boring machine or horizontal boring mill is a machine tool which bores holes in a horizontal direction. There are three main types — table, planer and floor. The table type is the most common and, as it is the most versatile, it is also known as the universal type.
A horizontal boring machine has its work spindle parallel to the ground and work table. Typically there are three linear axes in which the tool head and part move. Convention dictates that the main axis that drives the part towards the work spindle is the Z axis, with a cross-traversing X axis and a vertically traversing Y axis. The work spindle is referred to as the C axis and, if a rotary table is incorporated, its centre line is the B axis.
Horizontal boring machines are often heavy-duty industrial machines used for roughing out large components, but there are high-precision models too. Modern machines use advanced computer numerical control (CNC) systems and techniques. Charles DeVlieg entered the Machine Tool Hall of Fame for his work on a highly precise model, which he called a JIGMIL. The accuracy of this machine convinced the United States Air Force to accept John Parsons' idea for numerically controlled machine tools.
References
Machine tools | Horizontal boring machine | Engineering | 258 |
3,718,083 | https://en.wikipedia.org/wiki/Quasar%20Equatorial%20Survey%20Team | The Quasar Equatorial Survey Team (QUEST) is a joint venture between Yale University, Indiana University, and Centro de Investigaciones de Astronomia (CIDA) to photographically survey the sky using a digital camera, an array of 112 charge-coupled devices. Since 2009, it has used the 1 m ESO Schmidt Telescope in Chile. From 2003–2007, it used the 48 inch (1.22 m) Samuel Oschin telescope at the Palomar Observatory. Before that, it had used the 1.0-metre Schmidt telescope at the Llano del Hato National Astronomical Observatory in Venezuela.
References
Astronomy organizations
Astronomical surveys | Quasar Equatorial Survey Team | Astronomy | 146 |
22,316,878 | https://en.wikipedia.org/wiki/Wassermann%20radar | The Wassermann radar was an early-warning radar built by Germany and operated during World War II for long-range detection. A development of the FuMG 80 Freya, it was designed under the direction of Theodor Schultes, beginning in 1942. Wassermann was based on largely unchanged Freya electronics, but used an entirely new antenna array in order to improve range, height-finding and bearing precision.
Development
Seven different versions were developed. The two most important versions are:
The radio measurement equipment FuMG.41 Wassermann L (German: Leicht = light) was a constellation of four Freya antennas on top of each other, mounted on a rotatable steel lattice mast.
A later version was the FuMG.42 Wassermann S (German: Schwer = heavy). For this eight Freya antenna arrays were mounted on a pipe mast in two columns, each four antennae high.
Combining the antennae in this way concentrated the radiated energy into a narrower beam, giving a higher radiated power in the main direction (effective radiated power, ERP) without increasing the transmitter power. The result was a longer range. With the L-version the horizontal opening angle of the antenna array remained the same, but the vertical opening angle was reduced, giving a flatter radiation pattern; because the horizontal opening angle was not changed, the bearing measuring performance was unchanged. With the S-version the horizontal opening angle was also reduced, resulting in a better bearing resolution.
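The stacking effect can be sketched numerically. All figures below are illustrative assumptions for the sketch, not documented Wassermann values: for an idealized lossless array, gain scales with the number of stacked elements, and ERP is simply transmitter power multiplied by antenna gain.

```python
import math

p_tx_w = 8_000.0   # assumed transmitter power in watts (illustrative only)
g_single = 10.0    # assumed linear gain of a single Freya array (illustrative)

# Idealized lossless stacking: doubling the element count doubles the gain
# (+3 dB) and roughly halves the beamwidth in the stacking plane.
for n, name in ((1, "single Freya"), (4, "Wassermann L"), (8, "Wassermann S")):
    gain = n * g_single
    erp_w = p_tx_w * gain
    print(f"{name}: gain {10 * math.log10(gain):.1f} dBi, ERP {erp_w / 1000:.0f} kW")
```

With these assumed numbers the four-high L stack yields four times the ERP of a single array on the same transmitter; real arrays fall short of this ideal because of feed losses and mutual coupling.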
Technical Info
Search bearing: mechanical rotation of 360°
Range: depending on target altitude and station altitude, e.g.:
target altitude 50 m: range 35 km
target altitude 6,000 m: range 190 km
Range accuracy: +/−300 m
Detection accuracy:
• Bearing: +/−°
• Altitude: +/−° (in the range of 3–18°)
• Altitude detection possible up to 12,000 m
Mass: 30–60 t
Size: Height of mast: 37–57 m
Width: 6–12.4 m
Jamming resistance due to three different frequency ranges:
• 1.9–2.5 m
• 1.2–1.9 m
• 2.4–4.0 m
Identification friend or foe in cooperation with the FuG.25a Erstling equipment.
References
Bibliography
World War II German radars
Air defence radar networks
Early warning systems
Military equipment introduced from 1940 to 1944 | Wassermann radar | Technology | 503 |
634,240 | https://en.wikipedia.org/wiki/Wheel%20theory | A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring.
The term wheel is inspired by the topological picture of the real projective line together with an extra point ⊥ (bottom element) such that ⊥ = 0/0.
A wheel can be regarded as the equivalent of a commutative ring (and semiring) where addition and multiplication form not a group but, respectively, a commutative monoid and a commutative monoid with involution.
Definition
A wheel is an algebraic structure (W, 0, 1, +, ·, /), in which
W is a set,
0 and 1 are elements of that set,
+ and · are binary operations,
/ is a unary operation,
satisfying the following properties:
+ and · are each commutative and associative, and have 0 and 1 as their respective identities.
/ is an involution, for example //x = x.
/ is multiplicative, for example /(xy) = /x/y.
Algebra of wheels
Wheels replace the usual division as a binary operation with multiplication together with a unary operation / applied to one argument, similar (but not identical) to the multiplicative inverse x⁻¹, such that a/b becomes shorthand for a · /b = /b · a, but neither a · b⁻¹ nor b⁻¹ · a in general, and modifies the rules of algebra such that
0x ≠ 0 in the general case
x/x ≠ 1 in the general case, as /x is not the same as the multiplicative inverse of x.
Other identities that may be derived are
0x + 0y = 0xy
x/x = 1 + 0x/x
x − x = 0x²
where the negation −x is defined by −x = (−1)x and x − y = x + (−y) if there is an element −1 such that 1 + (−1) = 0 (thus in the general case x − x ≠ 0).
However, for values of x satisfying 0x = 0 and 0/x = 0, we get the usual
x − x = 0
x/x = 1
If negation can be defined as above, then the subset {x | 0x = 0} is a commutative ring, and every commutative ring is such a subset of a wheel. If x is an invertible element of the commutative ring, then x⁻¹ = /x. Thus, whenever x⁻¹ makes sense, it is equal to /x, but the latter is always defined, even when x = 0.
Examples
Wheel of fractions
Let A be a commutative ring, and let S be a multiplicative submonoid of A. Define the congruence relation ~S on A × A via
(x₁, x₂) ~S (y₁, y₂) means that there exist s, t ∈ S such that (sx₁, sx₂) = (ty₁, ty₂).
Define the wheel of fractions of A with respect to S as the quotient (A × A)/~S (denoting the equivalence class containing (x₁, x₂) as [x₁, x₂]) with the operations
0 = [0, 1] (additive identity)
1 = [1, 1] (multiplicative identity)
/[x₁, x₂] = [x₂, x₁] (reciprocal operation)
[x₁, x₂] + [y₁, y₂] = [x₁y₂ + x₂y₁, x₂y₂] (addition operation)
[x₁, x₂] · [y₁, y₂] = [x₁y₁, x₂y₂] (multiplication operation)
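The operations above can be made concrete with a small sketch, here for the wheel of fractions of the integers with S taken as the nonzero integers (an illustrative choice; the function names are mine). Each class [x₁, x₂] is represented canonically by dividing out the gcd and fixing signs, which identifies the classes with reduced fractions plus the adjoined elements ∞ = [1, 0] and ⊥ = [0, 0].

```python
from math import gcd

def canon(a, b):
    """Canonical representative of the class [a, b] in the wheel of
    fractions of the integers with S the nonzero integers."""
    g = gcd(a, b)                  # note: gcd(0, 0) == 0, so [0, 0] survives
    if g:
        a, b = a // g, b // g
    if b < 0 or (b == 0 and a < 0):
        a, b = -a, -b              # fix signs so the representative is unique
    return (a, b)

ZERO, ONE = canon(0, 1), canon(1, 1)
INF, BOT = canon(1, 0), canon(0, 0)   # the adjoined elements: infinity and bottom

def add(x, y):
    return canon(x[0] * y[1] + x[1] * y[0], x[1] * y[1])

def mul(x, y):
    return canon(x[0] * y[0], x[1] * y[1])

def rec(x):                           # the unary / operation
    return canon(x[1], x[0])

# 0x != 0 and x/x != 1 in the general case:
assert mul(ZERO, INF) == BOT          # 0 * infinity = bottom, not 0
assert mul(INF, rec(INF)) == BOT      # infinity / infinity = bottom, not 1
# ...but the usual rules hold for x with 0x = 0 and 0/x = 0:
x = canon(2, 3)
assert mul(x, rec(x)) == ONE          # (2/3) / (2/3) = 1
assert rec(rec(x)) == x               # / is an involution
```

The assertions at the end exercise exactly the caveats from the algebra section: the "bad" cases involve ∞ and ⊥, while ordinary reduced fractions behave like elements of a ring.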
In general, this structure is not a ring unless it is trivial, as in the usual sense - here with we get , although that implies that is an improper relation on our wheel .
This follows from the fact that , which is also not true in general.
Projective line and Riemann sphere
The special case of the above starting with a field produces a projective line extended to a wheel by adjoining a bottom element noted ⊥, where 0/0 = ⊥. The projective line is itself an extension of the original field by an element ∞, where z/0 = ∞ for any element z ≠ 0 in the field. However, 0/0 is still undefined on the projective line, but is defined in its extension to a wheel.
Starting with the real numbers, the corresponding projective "line" is geometrically a circle, and then the extra point gives the shape that is the source of the term "wheel". Or starting with the complex numbers instead, the corresponding projective "line" is a sphere (the Riemann sphere), and then the extra point gives a 3-dimensional version of a wheel.
See also
NaN
Citations
References
Fields of abstract algebra | Wheel theory | Mathematics | 706 |
3,823,190 | https://en.wikipedia.org/wiki/Environment%20Agency%20Wales | Environment Agency Wales () was a Welsh Government sponsored body that was part of the Environment Agency of England and Wales from 1996 to 2013. Its principal aims were to protect and improve the environment in Wales and to promote sustainable development. On 1 April 2013 the organisation was merged with the Countryside Council for Wales and Forestry Commission Wales into a single environmental body, Natural Resources Wales.
It had an operational area defined along its eastern boundary by the catchments of the River Dee and the River Wye. Those parts of the River Severn in Wales were managed by the Environment Agency, whilst those parts of the River Dee and River Wye catchments that are in England were nevertheless managed by Environment Agency Wales. The agency also had a public-facing boundary which corresponded to the political boundary of Wales.
Role and responsibilities
Environment Agency Wales' role included: reducing industry's impacts on the environment, enforcing pollution legislation and reducing the harm caused by flooding and pollution incidents. It also oversaw the management of waste, water resources and freshwater fisheries; cleaning up rivers, coastal waters and contaminated land and improving wildlife habitats.
By influencing others to change attitudes and behaviour, it aimed to make the environment cleaner and healthier for people and wildlife.
Priorities
Act to reduce climate change and its consequences
Environment Agency Wales managed the risk of flooding from rivers and the sea in Wales. To do this, Environment Agency Wales built, maintained and operated flood defences to protect people and property. It also issued flood warnings and works with communities at risk of flooding to help them find appropriate solutions to flood risk through its Flood Awareness Wales programme. When a flood happened, Environment Agency Wales worked with the emergency services and local authorities to minimise the harm to people and damage to property.
Protect and improve water, land and air
Environment Agency Wales was responsible for ensuring that environmental legislation was implemented properly on behalf of the Welsh and UK governments. This included regulating businesses such as power stations, chemical factories, metal processors, waste management sites, construction industry, food and drink manufacturers, farms and the water industry – to make sure that their work did not damage the environment. Where people and businesses needed to take water for drinking, industry and irrigation, Environment Agency Wales ensured that they did so without damaging the environment. To do this, Environment Agency Wales gave advice and issued permits, authorisations and consents to businesses that complied with legislative requirements. When pollution occurred, Environment Agency Wales worked to minimise any environmental damage, identify the source and stop any further pollution. If businesses failed to comply with their permits, and pollution occurred, Environment Agency Wales took enforcement action against them, including prosecution on occasion. Environment Agency Wales also led on dealing with serious waste crime which is often organised, large-scale and profitable. Priority waste crime types include large-scale illegal dumping, illegal waste sites and illegal exports of waste. It also dealt with high risk activities such as illegal disposal of wastes, where there was an actual or imminent threat of significant flooding or pollution.
Work with people and communities to create better places
Environment Agency Wales created and improved habitats for fish and other water-based wildlife such as salmon and helped species that were at risk, such as the freshwater pearl mussel. It also managed licences for fishing and navigation, so that people in Wales – and people visiting Wales – could enjoy the water environment.
Work with businesses and other organisations to use resources wisely
Environment Agency Wales licensed water abstractions from groundwater, rivers and lakes to ensure that the needs of people and business were met whilst protecting the environment and wildlife. It also regulated waste management facilities, such as landfill sites or large composting facilities, to ensure that they did not cause environmental damage and monitored the Landfill Allowances Scheme to track how waste is managed in Wales.
Accreditation
In 2012, for the fourth year running, Stonewall Cymru named Environment Agency Wales the best place to work in Wales for lesbian, gay and bisexual people.
Environment Agency Wales was accredited to both the ISO 14001 and EMAS environmental management standard.
As a Welsh Government Sponsored Body, Environment Agency Wales had to provide its services within the Welsh political boundary (its public facing boundary) in Welsh as well as in English in accordance with the Welsh Language Act 1993.
Structure
Environment Agency Wales had three operational areas, South East Wales, South West Wales and North Wales. Its other departments were Flood and Coastal Risk Management, Corporate Services, Human Resources, Finance and the policy department known as Strategic Unit Wales. All departments reported to the Director of Environment Agency Wales, Chris Mills.
Committees
Environment Agency Wales had three statutory committees, each with their own responsibility and each contributing towards making Wales a better place for people and wildlife.
The committees were set up by Parliament under the Environment Act 1995. The committees were made up of external members who were elected to stand on the committee because they had the relevant expertise.
Environment Protection Advisory Committee Wales (EPAC)
EPAC advised Environment Agency Wales on issues of environmental protection, pollution control, water resources, air quality and waste regulation.
Chairman: Prof Tom Pritchard
Flood Risk Management Wales Committee (FRMW)
Chairman: Deep Sagar
Fisheries, Ecology and Recreation Advisory Committee (FERAC)
FERAC advised Environment Agency Wales on maintaining, improving and developing fisheries as well as recreation, navigation and conservations issues.
Chairman: Dr Graeme Harris
Natural Resources Wales
On 1 April 2013, Environment Agency Wales, Countryside Council for Wales and Forestry Commission Wales were merged into Natural Resources Wales, a single body delivering the Welsh Government's environmental priorities for Wales.
References
External links
Environment Agency Wales website
Natural Resources Wales
Welsh Government sponsored bodies
Atmospheric dispersion modeling
Waste legislation in the United Kingdom
Waste organizations
Wales
Environmental organisations based in Wales | Environment Agency Wales | Chemistry,Engineering,Environmental_science | 1,138 |
77,929,507 | https://en.wikipedia.org/wiki/Peak%20power | Peak power refers to the maximum of the instantaneous power waveform, which, for a sine wave, is always twice the average power. For other waveforms, the relationship between peak power and average power is the peak-to-average power ratio (PAPR). Peak power always produces a higher value than the average power figure, and so has been tempting to use in advertising without context, making it look as though an amplifier has twice the power of its competitors.
Peak power is a fundamental concept in electrical engineering, relevant to various types of waveforms, including alternating current (AC) and other signal forms. It represents the maximum instantaneous power level that a system can handle or produce.
The peak power of an amplifier is determined by the voltage rails and the maximum amount of current its electronic components can handle for an instant without damage. This characterizes the ability of equipment to handle quickly changing power levels, as many audio signals have a highly dynamic nature.
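The sine-wave factor of two noted in the lead can be checked numerically; this is a quick sketch in which the 50 Hz tone, the sample rate and the implied 1-ohm load are arbitrary choices:

```python
import numpy as np

t = np.arange(100_000) / 100_000   # one second, sampled at 100 kHz
v = np.sin(2 * np.pi * 50 * t)     # 50 Hz sine, unit amplitude
p = v ** 2                         # instantaneous power into a 1-ohm load
papr = p.max() / p.mean()          # peak-to-average power ratio
print(papr)                        # ~2.0, i.e. a PAPR of ~3.01 dB
```

The peak of sin² is 1 while its average over whole cycles is 1/2, so the ratio is 2 regardless of amplitude or frequency.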
Radio frequency
Peak power is a critical parameter in the field of radio frequency (RF) and telecommunications. It is the highest power level that a transmitter achieves during its operation: unlike average power, which is the mean power output over a period, peak power represents the maximum power output at any given instant. This distinction is crucial in applications where signal peaks can significantly exceed the average power level, and understanding peak power is essential for designing and operating efficient and effective communication systems.
Importance
Peak power plays a crucial role in transmitter design and operation, affecting signal integrity, system performance and component reliability:
Signal Integrity: High peak power ensures that the transmitted signal can overcome noise and interference, maintaining signal integrity over long distances.
System Performance: In systems like radar and communication transmitters, peak power is vital for achieving the desired range and clarity.
Component Stress: Understanding peak power helps in designing components that can withstand these power levels without damage.
Measurement of Peak Power
Measuring peak power involves capturing the highest power level within a specified time frame. This can be done using specialized equipment like peak power meters, which can accurately track and record these peaks. The measurement process must account for various factors, including signal type and modulation.
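The gap between peak and average readings is clearest for a pulsed signal, where average power is peak power scaled by the duty cycle. The following is an idealized sketch (not a real instrument interface; the rate and pulse timing are assumed for illustration):

```python
import numpy as np

n = np.arange(10_000)           # sample index at an assumed 1 MHz rate (10 ms capture)
envelope = (n % 1_000) < 10     # 10 us pulse every 1 ms: a 1% duty cycle
p = envelope.astype(float)      # normalized instantaneous power (1.0 during the pulse)
print(p.max())                  # peak power reading: 1.0
print(p.mean())                 # average power reading: 0.01 (peak x duty cycle)
```

Here the peak sits a factor of 100 (20 dB) above the average, which is why an averaging meter can badly understate the stress such a signal places on transmitter components.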
Applications of Peak Power
Radar Systems: In radar systems, peak power determines the maximum range and resolution. Higher peak power allows for better detection and imaging of distant objects.
Communication Systems: In communication systems, peak power ensures that signals can be transmitted over long distances without significant loss of quality.
Broadcasting: In broadcasting, peak power is crucial for maintaining signal strength and quality, especially in areas with high interference.
Challenges and Considerations
Heat Dissipation: High peak power levels can generate significant heat, requiring efficient cooling systems to prevent damage.
Intermodulation Distortion: Non-linearities in the transmitter can cause intermodulation distortion, affecting signal quality. Proper design and calibration are necessary to minimize these effects.
Regulatory Compliance: Transmitters must comply with regulatory limits on peak power to avoid interference with other communication systems.
References
External links
Definition of peak-to-average ratio – ATIS (Alliance for Telecommunications Industry Solutions) Telecom Glossary 2K
Definition of crest factor – ATIS (Alliance for Telecommunications Industry Solutions) Telecom Glossary 2K
Peak-to-average power ratio (PAPR) of OFDM systems - tutorial
Waveforms
Power (physics) | Peak power | Physics,Mathematics | 723 |
38,759,714 | https://en.wikipedia.org/wiki/Rhombitriapeirogonal%20tiling | In geometry, the rhombitriapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of rr{∞,3}.
Symmetry
This tiling has [∞,3], (*∞32) symmetry. There is only one uniform coloring.
Similar to the Euclidean rhombitrihexagonal tiling, by edge-coloring there is a half-symmetry form, (3*∞) in orbifold notation. The apeirogons can be considered as truncated, t{∞}, with two types of edges. It has Schläfli symbol s2{3,∞}. The squares can be distorted into isosceles trapezoids. In the limit, where the rectangles degenerate into edges, an infinite-order triangular tiling results, constructed as a snub triapeirotrigonal tiling.
Related polyhedra and tiling
Symmetry mutations
This hyperbolic tiling is topologically related as part of a sequence of uniform cantellated polyhedra with vertex configurations (3.4.n.4) and [n,3] Coxeter group symmetry.
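The mutation from spherical through Euclidean to hyperbolic forms can be checked with Euclidean angle sums, a standard argument sketched here: a (3.4.n.4) vertex lies flat in the Euclidean plane only when the four corner angles total exactly 360°, which happens at n = 6; smaller n leaves a spherical deficit, while larger n, including the apeirogonal n = ∞ case of this tiling, forces the hyperbolic plane.

```python
from math import inf

def interior_deg(n):
    """Interior angle of a regular n-gon in degrees; an apeirogon's tends to 180."""
    return 180.0 if n == inf else 180.0 * (n - 2) / n

for n in (3, 4, 5, 6, 7, 8, inf):     # members of the (3.4.n.4) cantellated family
    total = interior_deg(3) + 2 * interior_deg(4) + interior_deg(n)
    # total < 360: spherical; total == 360: Euclidean; total > 360: hyperbolic
    print(n, total)
```

For n = ∞ the sum is 60 + 90 + 180 + 90 = 420° > 360°, confirming that the rhombitriapeirogonal tiling can only exist in the hyperbolic plane.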
See also
List of uniform planar tilings
Tilings of regular polygons
Uniform tilings in hyperbolic plane
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Apeirogonal tilings
Hyperbolic tilings
Isogonal tilings
Uniform tilings | Rhombitriapeirogonal tiling | Physics | 327 |
69,662,589 | https://en.wikipedia.org/wiki/Oberheim%20DSX | The Oberheim DSX is a 9-track digital sequencer equipped with the Oberheim Serial Buss for connecting with the company's OB-Xa or OB-8 synthesizers and DMX drum machine. Connected and used together, Oberheim marketed these products as "The System". In addition to the Oberheim Serial Buss, the DSX has an 8-channel CV/Gate interface for sequencing traditional analog synthesizers.
Features
The DSX is capable of storing and sequencing over 6,000 events across 10 songs of 10 patterns each, and can drive up to 16 voices concurrently. Sequences are retained in internal memory after power-off using static RAM, which is kept powered by an internal NiCad battery.
The DSX is equipped with the Oberheim Serial Buss, a pre-MIDI proprietary parallel bus designed to directly interface the DSX with Oberheim's OB-Xa or OB-8 synthesizers along with their DMX drum machine. Connection was via a heavy 1:1 cable, which plugged from the host DSX into the target synthesizer's rear DB-37 connector. The combination of the DSX, DMX and either the OB-Xa or OB-8 was marketed by Oberheim as "The System".
Notable users
Michael Beinhorn
Trevor Horn
Geddy Lee
Mike Oldfield
Steve Roach
Sting
References
Music sequencers
Oberheim synthesizers | Oberheim DSX | Engineering | 290 |
2,042,178 | https://en.wikipedia.org/wiki/Ball%20pit | A ball pit (originally called a ball crawl, also known as a ball pool or ball pond) is a padded box or pool filled with small colorful hollow plastic balls generally no larger than in diameter. They are typically marketed as recreation and exercise for children.
They are sometimes found at nurseries, carnivals, amusement parks, fun centers, fast-food restaurants, and large video arcades, frequently incorporated into larger play structures such as mazes, slides and jungle gyms. They may be rented for parties, and smaller versions are sold for home use. Ball pits are also sometimes used in therapy and educational settings, as they can provide a stimulating and sensory-rich environment.
Age for ball pit
Generally, ball pits are considered safe and enjoyable for children who are at least 10 months old and able to sit up and move independently. At this age, they have better head and neck control, reducing the risk of accidental suffocation in the ball pit.
History
Eric McMillan is credited with creating the first ball pit in 1976 at SeaWorld Captain Kids World in San Diego, US as a result of his experience at Ontario Place in Canada. However, IKEA claims that they had a ball pit in the early 1970s in Kungens Kurva, Sweden.
Urban legends
Beginning in the late 1990s, a number of urban legends arose about children being severely injured or killed in ball pit encounters with vipers or hypodermic needles. There is no truth to these stories.
In popular culture
In China Miéville's short story "The Ball Room" (Looking for Jake), the ghost of a child who died in a ball pit haunts a local IKEA-like store.
In the Johnny Bravo episode "Johnny Meets Donny Osmond", Donny pushes Johnny into a fast-food ball pit, where he comes across a young boy who claims to have been there since the age of five.
In the Rugrats episode "Piggy's Pizza Palace", the Rugrats jump on a costumed pig named Piggy as an act of revenge to get Angelica's tickets back. It causes the ball pit structure to split open and the balls fall out all over the restaurant.
In season 3 episode 14 ("The Einstein Approximation") of the TV series The Big Bang Theory, Sheldon seeks inspiration in a ball pit at a shopping mall, then hides from Leonard, who then tries to retrieve Sheldon from the pit.
In 2014, a YouTube vlogger under the name Roman Atwood made a video in which he transformed the living room of his home into a massive ball pit, intended as a prank on his girlfriend, who had just returned from a trip. He later collaborated with another vlogger, Freddie Wong, to create a comedy video involving a giant ball pit and a "ball monster" prank.
See also
Inflatable castle
DashCon, for the "extra hour in the ball pit" meme
References
Play (activity)
Entertainment
Balls
Sensory toys | Ball pit | Biology | 596 |
78,608,930 | https://en.wikipedia.org/wiki/Potassium%20aspartate | Potassium aspartate is a potassium salt of L-aspartic acid.
Medical application
Potassium aspartate is not approved for use as a chemical in its own right (though it may be approved as a component of a product covered by a group standard) in the United States, the European Union, New Zealand or Australia for treating any medical condition. It is, however, studied as an alternative to potassium chloride for treating high blood pressure (hypertension). Potassium chloride reduces blood pressure, with a more pronounced effect in patients with hypertension, averaging a reduction of 8.2 mm Hg systolic and 4.5 mm Hg diastolic; potassium aspartate may lower blood pressure more at lower doses. While increasing intake of potassium-rich foods such as bananas, grapefruit, dried beans, peas, broccoli, spinach, pumpkins and squash is preferable, potassium aspartate is studied as a potential adjunctive treatment for hypertension.
See also
Magnesium aspartate
References
Potassium compounds
Salts of carboxylic acids
Metal-amino acid complexes
Aspartic acids | Potassium aspartate | Chemistry | 228 |
3,337,849 | https://en.wikipedia.org/wiki/2005%20Iranian%20Air%20Force%20C-130%20crash | On 6 December 2005 (Azar 15, 1384) at 14:10 local time (10:40 UTC), a Lockheed C-130 Hercules military transport aircraft of the Islamic Republic of Iran Air Force, tail number , c/n 4399, crashed into a ten-story apartment building in a residential area of Tehran, the capital city of Iran.
Accident
The aircraft, bound for Bandar Abbas on the Persian Gulf, was carrying 10 crew and 84 passengers, of whom 68 were reportedly journalists en route to watch a series of military exercises off the country's southern coast.
Shortly after takeoff, the pilot reported engine problems and unsuccessfully attempted to make an emergency landing at the city's Mehrabad International Airport, from which the aircraft had departed. The aircraft came down in a densely populated area of Hasanabad-e Baqeraf, near Tehran, crashing into an apartment building where many Iranian air force personnel resided.
Iranian state media reported a death toll of 128 victims, and some other news agencies reported a toll of 116. However, an accident report compiled by the Aviation Safety Network stated that 106 people had died, including 12 on the ground. All 94 people on board the aircraft were killed.
Casualties
Tehran mayor Mohammad Bagher Ghalibaf said that all 94 people on board, including 40 journalists, were killed upon impact. State radio reported at least 34 people were confirmed dead on the ground, putting the official death toll at 128. An Interior Ministry Spokesperson, Mojtaba Mir-Abdolahi, confirmed that 116 bodies were recovered from the site. However, it was later determined by the Aviation Safety Network that 12 people on the ground had died in the crash.
The Mehr news agency reported that 40 journalists on board worked for the Islamic Republic of Iran Broadcasting, and the others were from the Islamic Republic News Agency, Iranian Students' News Agency and Fars News Agency, and several newspapers.
Iason Sowden of Global Radio News in Tehran said there were reports of charred bodies on the ground near the crash site. Sowden also said that one wing of the plane was lying in front of the building. Initial pictures shown on Sky News and CNN showed complete chaos at the scene. Earlier in the day, all children were advised to stay at home due to high levels of smog and pollution.
Reuters reported that 28 people were transported to a nearby hospital. Iranian state radio reported that 90 people sustained serious injuries. The head of Tehran's rescue services was quoted in an interview with the Iranian Students' News Agency as saying that 132 people had been injured.
Engine problems
According to the police, the pilot reported engine difficulties minutes after takeoff. An emergency landing was requested, but the aircraft crashed just short of the runway.
Rescue operation
Eyewitnesses, whose accounts were carried on the BBC World Service, have stated that emergency crews arrived within three minutes of impact. SBS World News reported that riot police were called in to control onlookers who were blamed for blocking the access of emergency workers.
Context
This crash was the deadliest aviation disaster in Iran since February 2003, when 275 people were killed as a military transport aircraft crashed in southern Iran. Due to U.S. sanctions, Iran has been unable to buy new Western aircraft (whether commercial or military) or spare parts for existing aircraft from U.S. manufacturers. American-built military planes now operating in Iran were purchased under the old regime during the 1970s. Iranian officials blamed the country's poor aviation record on the sanctions.
See also
Aviation accidents and incidents
1981 Iranian Air Force C-130 crash
Footnotes
External links
2000s disasters in Iran
2005 disasters in Asia
2005 in Iran
Aviation accidents and incidents in Iran
Islamic Republic of Iran Air Force
Aviation accidents and incidents in 2005
Accidents and incidents involving the Lockheed C-130 Hercules
2005 in military history
2000s in Tehran
Accidental deaths in Iran
December 2005 events in Iran
High-rise fires
Aviation accidents and incidents caused by engine failure | 2005 Iranian Air Force C-130 crash | Technology | 797 |
206,457 | https://en.wikipedia.org/wiki/Dependency%20%28project%20management%29 | In a project network, a dependency is a link among a project's terminal elements.
A Guide to the Project Management Body of Knowledge (PMBOK Guide) does not define the term dependency, but instead uses the term logical relationship, which it defines as a dependency between two activities, or between an activity and a milestone.
Standard types of dependencies
There are four standard types of dependencies:
Finish to start (FS)
A FS B means "activity A must finish before activity B can begin" (or "B can't start until A has finished").
(Foundations dug) FS (Concrete poured)
Finish to finish (FF)
A FF B means "activity A must finish before activity B can finish" (or "B can't finish before A is finished").
(Last chapter written) FF (Entire book written)
Start to start (SS)
A SS B means "activity A must start before activity B can start" (or "B can't start until A has started").
(Project work started) SS (Project management activities started)
Start to finish (SF)
A SF B means "activity A must start before activity B finishes" (or "B can't finish until A has started")
(New shift started) SF (Previous shift finished)
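The four relationship types above can be expressed as simple constraints between activity start and finish dates. The sketch below is illustrative only (dates as day numbers, function names invented for the example), not the behavior of any particular scheduling tool:

```python
# Minimal sketch of the four standard dependency types.
# An activity is modelled by its start day and duration (finish = start + duration).

def finish(start, duration):
    return start + duration

def satisfies(dep_type, a_start, a_dur, b_start, b_dur):
    """Check whether activities A and B satisfy the given dependency type."""
    a_finish = finish(a_start, a_dur)
    b_finish = finish(b_start, b_dur)
    if dep_type == "FS":   # A must finish before B starts
        return a_finish <= b_start
    if dep_type == "FF":   # A must finish before B finishes
        return a_finish <= b_finish
    if dep_type == "SS":   # A must start before B starts
        return a_start <= b_start
    if dep_type == "SF":   # A must start before B finishes
        return a_start <= b_finish
    raise ValueError(f"unknown dependency type: {dep_type}")

# (Foundations dug) FS (Concrete poured): digging takes days 0-3, pouring starts day 3.
print(satisfies("FS", 0, 3, 3, 2))   # True
# Pouring on day 2 would violate the FS dependency.
print(satisfies("FS", 0, 3, 2, 2))   # False
```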
Finish-to-start is considered a "natural dependency". The Practice Standard for Scheduling recommends that "Typically, each predecessor activity would finish prior to the start of its successor activity (or activities) (known as finish-to-start (FS) relationship). Sometimes it is necessary to overlap activities; an option may be selected to use start-to-start (SS), finish-to-finish (FF) or start-to-finish (SF) relationships. ... Whenever possible, the FS logical relationship should be used. If other types of relationships are used, they shall be used sparingly and with full understanding of how the relationships have been implemented in the scheduling software being used. Ideally, the sequence of all activities will be defined in such a way that the start of every activity has a logical relationship from a predecessor and the finish of every activity has a logical relationship to a successor".
SF is rarely used and should generally be avoided. Microsoft recommends using the SF dependency for just-in-time scheduling. It can easily be shown, however, that this only works if resource levelling is not used: resource levelling can delay the successor activity (the activity that is to finish just in time) so that it finishes later than the start of its logical predecessor activity, thus failing the just-in-time requirement.
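The just-in-time use of SF described above can be sketched by back-calculating the successor's start from the predecessor's start. The example is hypothetical (day-number arithmetic, no calendars) and also shows how a delay imposed by resource levelling breaks the relationship:

```python
# Hypothetical just-in-time scheduling with a start-to-finish (SF) link:
# activity B (e.g. the previous shift) must finish exactly when A (the new shift) starts.

def jit_successor_start(a_start, b_duration):
    # B's finish is pinned to A's start, so B starts b_duration earlier.
    return a_start - b_duration

a_start = 10          # new shift starts on day 10
b_duration = 8        # previous shift lasts 8 days
b_start = jit_successor_start(a_start, b_duration)
print(b_start, b_start + b_duration)   # 2 10 -> B finishes exactly as A starts

# If resource levelling delays B by 3 days, B now finishes after A starts,
# violating the just-in-time requirement:
b_start_levelled = b_start + 3
print(b_start_levelled + b_duration > a_start)   # True: requirement broken
```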
There are three kinds of dependencies with respect to the reason for the existence of dependency:
Causal (logical)
It is impossible to edit a text before it is written
It is illogical to pour concrete before you dig the foundations of a building
Resource constraints
It is logically possible to paint four walls in a room simultaneously but there is only one painter
Discretionary (preferential)
I want to paint the living room before painting the dining room, although I could do it the other way round, too
Early critical path-derived schedules often reflected only causal (logical) or discretionary (preferential) dependencies, because the assumption was that resources would be available or could be made available. Since at least the mid-1980s, competent project managers and schedulers have recognized that schedules must be based on resource availability. The critical chain method likewise requires that resource constraint-derived dependencies be taken into account.
Leads and lags
Dependencies can be modified by leads and lags. Both leads and lags can be applied to all four types of dependencies.
PMBOK defines lag as "the amount of time whereby a successor activity will be delayed with respect to a predecessor activity".
For example:
When building two walls from a novel design, one might start the second wall 2 days after the first so that the second team can learn from the first. This is an example of a lag in a Start-Start relationship.
According to the PMBOK, a lead is "the amount of time whereby a successor activity can be advanced with respect to a predecessor activity. For example, on a project to construct a new office building, the landscaping could be scheduled to start prior to the scheduled punch list completion. This would be shown as a finish-to-start with two-week lead".
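The lag and lead definitions quoted above amount to a signed offset applied to the dependency. A hypothetical sketch, using day-number arithmetic only (the function name and examples are invented for illustration):

```python
# Earliest start of a successor under an FS or SS link with a signed offset:
# a positive offset is a lag (delay), a negative offset is a lead (advance).

def earliest_start(dep_type, pred_start, pred_duration, offset=0):
    pred_finish = pred_start + pred_duration
    if dep_type == "FS":
        return pred_finish + offset
    if dep_type == "SS":
        return pred_start + offset
    raise ValueError("only FS and SS shown in this sketch")

# SS with a 2-day lag: the second wall team starts 2 days after the first.
print(earliest_start("SS", pred_start=0, pred_duration=10, offset=2))    # 2
# FS with a two-week lead: landscaping starts 14 days before punch list completion.
print(earliest_start("FS", pred_start=0, pred_duration=60, offset=-14))  # 46
```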
Example
If you are building a building, you can't paint the walls before installing the water pipes into the walls.
Advanced cases of activities dependencies
Maximal-type relationships
Activity A and activity B are said to have a maximal-type relationship if activity B can start after activity A, but with a delay of no more than X. Real-life situations that can be modelled by a maximal-type relationship include:
Shoring of a trench does not have to be done immediately after excavation, but must be done within a certain time, otherwise the trench will collapse.
Vaccination of a baby does not have to be done immediately after birth, but must be done within a certain time.
Renewal of a passport has to be done some time after the current one has been issued, but before it expires.
An invoice does not have to be paid immediately, but must be paid within a certain time after it has been issued.
Maximal-type relationships are rarely implemented in project management software, most probably because the feature makes it too easy to create contradictory dependencies.
See also
Dependency structure matrix
Outline of project management
Project network
Project planning
Citations
References
Schedule (project management) | Dependency (project management) | Physics | 1,146 |
36,897,200 | https://en.wikipedia.org/wiki/Environmental%20stewardship | Environmental stewardship (or planetary stewardship) refers to the responsible use and protection of the natural environment through active participation in conservation efforts and sustainable practices by individuals, small groups, nonprofit organizations, federal agencies, and other collective networks. Aldo Leopold (1887–1949) championed environmental stewardship in land ethics, exploring the ethical implications of "dealing with man's relation to land and to the animals and plants which grow upon it."
Resilience-based ecosystem stewardship
Resilience-based ecosystem stewardship emphasizes resilience as an integral feature of responding to and interacting with the environment in a constantly changing world. Resilience refers to the ability of a system to recover from disturbance and return to its basic function and structure. For example, ecosystems do not serve as singular resources but rather are function-dependent in providing an array of ecosystem services. Additionally, this type of stewardship recognizes resource managers and management systems as influential and informed participants in the natural systems that are serviced by humans.
Social science implications
Studies have explored the benefits of environmental stewardship in various contexts such as the evaluation, modeling, and integration into policy, system management, and urban planning. One study examined how social attributes of environmental stewardship can be used to reconfigure local conservation efforts. Social ties to environmental stewardship are emphasized by the National Recreation and Park Association's efforts to place environmental stewardship at the forefront of childhood development and youths' consciousness of the outdoors. Practicing environmental stewardship has also been suggested as an effective mental health treatment and natural therapy.
Roles of environmental stewards
Based on pro-organizational stewardship theory principles, environmental stewards can be categorized into three roles: doers, donors, and practitioners.
Doers actively engage in environmental aid, such as volunteering for hands-on work like cleaning up oil spills. Donors support causes financially or through gifts in kind, including fundraising or personal donations. Practitioners work daily in environmental stewardship, acting as advocates in collaboration with various environmental agencies and groups. All three roles contribute to promoting environmental literacy and encouraging participation in conservation efforts.
From a biocultural conservation perspective, Ricardo Rozzi and collaborators propose participatory intercultural approaches to earth stewardship. This perspective emphasizes the role of long-term socio-ecological research (LTSER) sites in coordinating local initiatives with global networking and implementing culturally diverse earth stewardship forms.
Examples
Many programs, partnerships, and funding initiatives have tried to implement environmental stewardship into the workings of society. Pesticide Environmental Stewardship Program (PESP), a partnership program overseen by the US Environmental Protection Agency, provides pesticide-user consultation to reduce the use of hazardous chemicals and identify the detrimental impact these chemicals can have on social and environmental health.
In 2006, England placed environmental stewardship at the center of an agricultural incentives mechanism, encouraging cattle farmers to better manage their land, crops, animals, and material use. The Environmental Stewardship Award was created as part of this initiative to highlight members whose actions exemplify alignment with environmental stewardship.
See also
References
Environmental conservation
Stewardship
Sustainability and environmental management
Environmental protection
Natural resources | Environmental stewardship | Environmental_science | 628 |
1,572,831 | https://en.wikipedia.org/wiki/Plant%20senescence | Plant senescence is the process of aging in plants. Plants have both stress-induced and age-related developmental aging. Chlorophyll degradation during leaf senescence reveals the carotenoids, such as anthocyanin and xanthophylls, which are the cause of autumn leaf color in deciduous trees. Leaf senescence has the important function of recycling nutrients, mostly nitrogen, to growing and storage organs of the plant. Unlike animals, plants continually form new organs and older organs undergo a highly regulated senescence program to maximize nutrient export.
Hormonal regulation of senescence
Programmed senescence seems to be heavily influenced by plant hormones. The hormones abscisic acid, ethylene, jasmonic acid and salicylic acid are accepted by most scientists as promoters of senescence, but at least one source lists gibberellins, brassinosteroids and strigolactone as also being involved. Cytokinins help to maintain the plant cell and expression of cytokinin biosynthesis genes late in development prevents leaf senescence. A withdrawal of or inability of the cell to perceive cytokinin may cause it to undergo apoptosis or senescence. In addition, mutants that cannot perceive ethylene show delayed senescence. Genome-wide comparison of mRNAs expressed during dark-induced senescence versus those expressed during age-related developmental senescence demonstrate that jasmonic acid and ethylene are more important for dark-induced (stress-related) senescence while salicylic acid is more important for developmental senescence.
Annual versus perennial benefits
Some plants have evolved into annuals which die off at the end of each season and leave seeds for the next, whereas closely related plants in the same family have evolved to live as perennials. This may be a programmed "strategy" for the plants.
The benefit of an annual strategy may be genetic diversity: no single set of genes continues year after year; instead a new mix is produced each year. Secondly, being annual may allow the plant a better survival strategy, since it can put most of its accumulated energy and resources into seed production rather than saving some to overwinter, which would limit seed production.
Conversely, the perennial strategy may sometimes be the more effective survival strategy, because the plant has a head start every spring with growing points, roots, and stored energy that have survived the winter. In trees, for example, the structure can be built up year after year, so that the tree and its root system become larger, stronger, and capable of producing more fruit and seed than the year before, out-competing other plants for light, water, nutrients, and space. This strategy fails when environmental conditions change rapidly: if a certain pest quickly takes advantage and kills all of the nearly identical perennials, there is a far smaller chance that a random mutation will slow the pest than there would be among more diverse annuals.
Plant self-pruning
There is a speculative hypothesis on how and why a plant induces part of itself to die off. The theory holds that leaves and roots are routinely pruned off during the growing season, whether the plant is annual or perennial. Pruning mainly affects mature leaves and roots, and happens for one of two reasons: either the pruned leaves and roots are no longer efficient enough at acquiring nutrients, or their energy and resources are needed in another part of the plant whose resource acquisition is faltering.
Poor-productivity reasons for plant self-pruning – the plant rarely prunes young dividing meristematic cells, but if a fully grown mature cell is no longer acquiring the nutrients it should, then it is pruned.
Shoot-efficiency reasons for self-pruning – presumably a mature shoot cell must, on average, produce enough sugar and acquire enough oxygen and carbon dioxide to support both itself and a similarly sized root cell. In fact, since plants are obviously interested in growing, it is arguable that the "directive" of the average shoot cell is to "show a profit": to produce or acquire more sugar and gases than is necessary to support both itself and a similarly sized root cell. If this "profit" is not shown, the shoot cell is killed off and its resources are redistributed to other, "promising" young shoots or leaves in the hope that they will be more productive.
Root-efficiency reasons for self-pruning – similarly, a mature root cell must on average acquire more than enough minerals and water to support both itself and a similarly sized shoot cell that does not acquire water and minerals. If it does not, the root is killed off and its resources are sent to new young root candidates.
Shortage/need-based reasons for plant self-pruning – this is the other side of the efficiency problem.
Shoot shortages – if a shoot is not getting enough root-derived minerals and water, the idea is that it will kill part of itself off and send the resources to the root to make more roots.
Root shortages – the idea here is that if the root is not getting enough shoot-derived sugar and gases, it will kill part of itself off and send resources to the shoot to allow more shoot growth.
This is an oversimplification, in that some shoot and root cells arguably serve functions other than nutrient acquisition. In those cases, whether they are pruned would be "calculated" by the plant using some other criteria. It is also arguable that, for example, mature nutrient-acquiring shoot cells must acquire more than enough shoot nutrients to support both themselves and their share of the shoot and root cells that do not acquire sugar and gases, whether those cells are structural, reproductive, immature, or simply roots.
The idea that a plant does not impose efficiency demands on immature cells follows from the fact that most immature cells are part of so-called dormant buds. These are kept small and non-dividing until the plant needs them; they are found, for instance, at the base of every lateral stem.
Theory of hormonal induction of senescence
There is little theory on how plants induce themselves to senesce, although it is fairly widely accepted that some of it is done hormonally. Botanists generally concentrate on ethylene and abscisic acid as culprits in senescence, but neglect gibberellin and brassinosteroid, which inhibit root growth if they do not cause actual root pruning. This is perhaps because roots are below the ground and thus harder to study.
Shoot pruning – it is now known that ethylene induces the shedding of leaves much more than abscisic acid does. ABA originally received its name because it was discovered to have a role in leaf abscission; its role is now seen as minor and occurring only in special cases.
Hormonal shoot-pruning theory – a new simple theory says that even though ethylene may be responsible for the final act of leaf shedding, it is ABA and strigolactones that induce senescence in leaves through a runaway positive-feedback mechanism. What supposedly happens is that ABA and strigolactones are released mostly by mature leaves under water and/or mineral shortages. ABA and strigolactones act in mature leaf cells by pushing out minerals, water, sugar, gases and even the growth hormones auxin and cytokinin (and possibly jasmonic and salicylic acid in addition). This causes even more ABA and strigolactones to be made, until the leaf is drained of all nutrients. When conditions get particularly bad in the emptying mature leaf cell, it experiences sugar and oxygen deficiencies, leading to gibberellin and finally ethylene emanation. When the leaf senses ethylene, it is time to abscise.
Root pruning – the concept that plants prune their roots in the same way that they abscise leaves is not a well-discussed topic among plant scientists, although the phenomenon undoubtedly exists. If gibberellin, brassinosteroid and ethylene are known to inhibit root growth, it takes just a little imagination to assume they perform the same role in the root that ethylene does in the shoot, that is, to prune the roots too.
Hormonal root-pruning theory – in the new theory, GA, BA and ethylene are seen both to be induced by sugar (GA/BA) and oxygen (Eth) shortages in the roots (as well as, perhaps, excess carbon dioxide for Eth), and to push sugar and oxygen, together with minerals, water and the growth hormones, out of the root cell, causing a positive feedback loop that results in the emptying and death of the root cell. The final death knell for a root might be strigolactone or, most probably, ABA, as these indicate substances that should be abundant in the root; if the root cannot even support itself with these nutrients, it should be senesced.
Parallels to cell division – the theory, perhaps even more controversially, asserts that just as both auxin and cytokinin seem to be needed before a plant cell divides, perhaps ethylene and GA/BA (and ABA and strigolactones) are needed before a cell senesces.
Seed senescence
Seed germination performance is a major determinant of crop yield. Deterioration of seed quality with age is associated with accumulation of DNA damage. In dry, aging rye seeds, DNA damages occur with loss of viability of embryos. Dry seeds of Vicia faba accumulate DNA damage with time in storage, and undergo DNA repair upon germination. In Arabidopsis, a DNA ligase is employed in repair of DNA single- and double-strand breaks during seed germination and this ligase is an important determinant of seed longevity. In eukaryotes, the cellular repair response to DNA damage is orchestrated, in part, by the DNA damage checkpoint kinase ATM. ATM has a major role in controlling germination of aged seeds by integrating progression through germination with the repair response to DNA damages accumulated during the dry quiescent state.
See also
Ageing
Senescence
DNA damage theory of aging
References
Special issue about plant senescence in Plant Biology volume 10 issue s1
External links
The Adaptive Reasons For And The Physiological Causes Of Senescence In Annual Plants
The Start at a General Theory of Plant Senescence
Plant physiology
Senescence in non-human organisms | Plant senescence | Biology | 2,171 |
57,042,077 | https://en.wikipedia.org/wiki/Ball%20State%20Center%20for%20Energy%20Research/Education/Service | The Ball State Center for Energy Research/Education/Service (CERES) is an interdisciplinary academic support unit at Ball State University focused on enhancing research and education on issues of energy usage and conservation. The center was established in 1982 as an addition to the existing Estopinal College of Architecture and Planning. The center currently states the following as its mission: "To maintain ongoing programs for the examination of state-of-the-art energy conservation and end-use practices; To investigate alternative solutions to contemporary energy problems; To develop projections and implications of the results of these solutions; To devise means of implementing these ideas; To disseminate findings to the appropriate publics—professionals, educators, policy planners, students and laypersons."
History
Concerns about environmentalism and conservation became popularized in the 1970s following the success of the inaugural Earth Day, which fused radical counterculture ideas about environmentalism with more mainstream ideas presented by politicians such as Gaylord Nelson. In addition, energy crises such as the 1973 oil crisis and the 1979 Three Mile Island accident caused concern about how contemporary energy sources were being used, prompting research into alternative forms of energy and better energy conservation. In response, Ball State University started its efforts to implement energy education: in 1979, with funding from the Indiana General Assembly, the university began planning the Center for Energy Research/Education/Service, with construction beginning in the summer of 1980. Simultaneously, Ball State University began planning to incorporate conservation and energy education into its curriculum, including proposed courses in environmental architecture and in utilizing solar energy. In 1982, CERES was completed and opened for use on Ball State's campus.
Since its inception in Ball State's campus, CERES has served as a means to incorporate energy education into Ball State's campus. By 1986, even, this had become one of its missions, with the Center becoming focused on placing Ball State as a leader in reaching a "sustainable future." The Center also had national significance; from the period between 1985 and 1988, CERES was one of only five test laboratories accredited by the Solar Rating and Certification Corporation for their solar research. Going forward, the center has remained an integral part of energy and conservation research and education, both on Ball State's campus and in the greater Muncie community.
References
Ball State University
Education in Delaware County, Indiana
Universities and colleges established in 1982
1982 establishments in Indiana
Environmental education | Ball State Center for Energy Research/Education/Service | Environmental_science | 505 |
24,999,792 | https://en.wikipedia.org/wiki/Parrishia | Parrishia is an extinct genus of sphenosuchian crocodylomorph known from the Late Triassic Chinle, Dockum, and Santa Rosa Formations in Arizona and New Mexico.
Discovery and naming
The genus was named in 1995 from fossils found from the Placerias quarry of the Chinle Group in Apache County, Arizona. It was named after the paleontologist J. Michael Parrish, with the type species being P. mccreai. Parrishia was distinguished from the closely related genus Hesperosuchus on the basis of more robust vertebral centra and the lack of dorsoventrally offset articular faces of the cervical centra, thus causing the neck to be straight rather than anterodorsally curved as in Hesperosuchus.
In their description of a new crocodylomorph skeleton from the famous Whitaker quarry in Ghost Ranch, Clark et al. (2000) treated Parrishia as a nomen dubium because they considered the holotype and referred specimens undiagnostic. More complete postcranial skeletons such as PEFO 26681 have since been found that clearly show that the cervical centra of Parrishia possess articular faces that are dorsoventrally offset, as in Hesperosuchus. Additionally, in the holotype specimen (UCMP A269/139623) the anterior surfaces of the centra are positioned more dorsally than the posterior surfaces, giving the neck an anterodorsal curve like that of Hesperosuchus. Therefore, the only character distinguishing Parrishia from Hesperosuchus is the robustness of the vertebrae. Material of Parrishia cannot be assigned to any other known sphenosuchian genus because of the lack of postcranial apomorphies; as a result, it is considered an indeterminate genus.
In an SVP 2018 conference abstract, William Parker and colleagues reported the discovery of new specimens indicating that Parrishia represents a phytosaur and not a crocodylomorph.
References
Terrestrial crocodylomorphs
Triassic crocodylomorpha
Late Triassic archosaurs of North America
Chinle fauna
Nomina dubia
Prehistoric pseudosuchian genera | Parrishia | Biology | 464 |
73,165,077 | https://en.wikipedia.org/wiki/Yasmin%20Umar | Mohammad Yasmin bin Haji Umar (born 23 April 1956) is a Bruneian aristocrat, politician, and retired military officer who served as minister of energy from 2010 to 2018 and deputy minister of defence from 2005 to 2010.
Early life and education
Mohammad Yasmin bin Haji Umar, born in Brunei on 23 April 1956, pursued his early education at Anthony Abell College in Seria. On 12 July 1979, he earned a Bachelor of Science (Hons) degree in electronics from the University of Wales in the United Kingdom. Continuing his academic journey, he enrolled at the University of Loughborough, also in the UK, where he specialised in digital communication systems. On 1 December 1981, he was awarded a Master of Science degree by the faculty of electrical and electronic engineering.
Military career
Yasmin began his career in the Royal Brunei Malay Regiment (RBMR) as a commissioned officer, receiving a promotion to lieutenant on 9 November 1981. On 25 June 1986, he was awarded the certified chartered engineer insignia by the Institution of Chartered Engineers. Throughout his career, he participated in numerous courses, seminars, and workshops in the United Kingdom, Australia, Singapore, Japan, and the United States. In 1987, he attended the 22nd army staff course, division 1, at the Royal Military College of Science in the United Kingdom.
He held various roles in policy, corporate management, logistics, and strategy. He began as an engineering officer, initially assigned to the First Flotilla of the RBMR, now known as the Royal Brunei Navy, where he served as a weapons engineering officer. On 1 April 1988, he was appointed senior engineering officer, leading the Naval Engineering Department.
Political career
Ministry of Defence
Subsequently, on 14 September 1990, Yasmin became head of research in the defence minister's office and the directorate of strategic planning (DMO/DSP). In 1991, he participated in the Defence Research Fellow Exchange Programme at the National Institute of Defence Studies in Japan. In 1992, Yasmin took on the role of staff officer grade 1 maintenance at the directorate of logistics, where he developed maintenance guidelines for armed forces equipment. On 2 May 1994, he returned to the DMO/DSP as a staff officer grade 1. He was appointed director of intelligence and security on 14 July 1995, a position he held until December 1998. He attended the Australian Defence College in Canberra in 1999. He was appointed as the director of DMO/DSP on 4 January 1999.
Yasmin became one of three newly appointed permanent secretaries in Brunei, assuming a role at the Ministry of Defence on 24 January 2003, where he oversaw policy and administration. The appointment was confirmed when Sultan Hassanal Bolkiah received the appointees at Istana Nurul Iman on 6 February that same year. Yasmin was officially appointed to the Legislative Council by the sultan on 6 September 2004.
Deputy Minister of Defence
On 24 May 2005, Yasmin was appointed as the deputy minister of defence under the sultan's order as part of a cabinet reshuffle.
On 2 March 2007, Yasmin emphasised the importance of human resources in strengthening Brunei's defence readiness. During the 23rd National Day celebration, he reiterated the sultan's message that the country's future progress, both regionally and internationally, relies on effectively managing its human resources to produce specialists and intellectuals. Yasmin noted that without a skilled and knowledgeable workforce, Brunei would struggle to compete with more advanced nations. On 5 December 2007, Yasmin was present at the Langkawi International Maritime and Aerospace Exhibition, where a memorandum of understanding (MOU) was signed between World Aerospace (M) and Royal Brunei Technical Services for the management of BRIDEX 2009.
Minister of Energy
As part of a cabinet reshuffle, Yasmin was appointed minister of energy at the Prime Minister's Office (PMO) on 29 May 2010. Shortly after his appointment, on 2 November 2011, Yasmin became one of the respondents in a legal case filed by Captain (Retired) Huraizah Duraman, who alleged wrongful dismissal from the Royal Brunei Armed Forces (RBAF). Yasmin, along with other defendants, was accused of recommending or conspiring to cause Huraizah's discharge from the RBAF. However, the court ruled that the dismissal was solely a result of the sultan's prerogative power, which could not be influenced or questioned by the respondents. Additionally, Yasmin and the other defendants were protected by constitutional immunity, shielding them from legal action regarding their actions in this matter. Ultimately, the court dismissed the case, concluding that Yasmin’s involvement did not lead to the plaintiff's dismissal.
In 2011, Yasmin criticised Brunei Shell Petroleum (BSP) for allowing large businesses to dominate energy contracts, which he believed hindered the growth of small and medium-sized enterprises (SMEs). He called for greater transparency and faster vendor registration to support SMEs, advocating for a more inclusive approach to contract allocation in the energy sector. His remarks aimed to foster a more equitable environment for SMEs in Brunei's energy industry. Following this, on 1 February 2012, the Energy Department at the PMO, with the Sultan's approval, released Directive No. 2–Local Business Development (LBD) Framework. Yasmin hoped this would lead to spin-offs, as Brunei Shell Joint Venture and TotalEnergies planned to invest B$5–6 billion over the next two years.
Minister of Energy and Industry
On 22 October 2015, Yasmin was appointed Minister of Energy and Industry in the PMO as part of a wider cabinet reshuffle, which saw several top officials reassigned to new roles. In his new position, Yasmin took charge of Brunei's increasingly important energy sector, overseeing the nation's energy policies and fostering the growth of the oil and gas industry, crucial to the country's economic development.
On 3 November 2016, Yasmin reaffirmed Brunei's commitment to a zero-tolerance policy towards corruption. He stressed that corruption could undermine the country’s progress by depriving citizens of essential opportunities, such as job creation. Yasmin also warned international companies operating in Brunei against interfering with corruption investigations, describing corruption as a destructive force akin to a disease that could erode the social fabric if not addressed. He underscored the need for a workforce that aligns with Brunei's principles of , emphasising the importance of integrity in public and private sectors.
On 7 May 2017, Yasmin met with Saudi Arabia's Ministry of Energy, Industry and Mineral Resources, Khalid A. Al-Falih, to discuss strengthening Brunei–Saudi relations. The two discussed potential Saudi investments in Brunei's ammonia and urea projects, as well as opportunities in the petrochemical sector, particularly the supply of Saudi crude oil for the downstream industry. They also reviewed the extension of the December 2016 agreement on oil output adjustments under the OPEC/non-OPEC cooperation declaration. On the same day, Yasmin emphasised the importance of Brunei's MSMEs engaging in the digital economy, stressing that for MSMEs to thrive, they must embrace digital commerce. He highlighted the government's initiative to train 1,000 MSMEs in e-commerce through Darussalam Enterprise, aimed at improving their operations and boosting the national economy. Additionally, Yasmin reaffirmed the government's commitment to enhancing the business climate by simplifying business processes and supporting MSMEs. He encouraged MSMEs to seize opportunities to expand their market presence and contribute to Brunei's GDP, particularly through participation in expos.
In August 2017, Amrtur Corporation filed a US$45 million claim against BSP, alleging lost revenues of B$61.2 million (US$45 million) between 2012 and 2016 due to breaches of contracts with BSP. Yasmin was named as one of the 12 defendants in the case, which became widely discussed after a leaked letter related to the dispute went viral on social media. The case was linked to allegations of corruption within the Brunei sultanate, with Yasmin accused of involvement. Accusations arose that he had a conflict of interest during his time on the BSP board. Amrtur Corporation's complaint focused on an alleged breach of contracts and income loss between 2012 and 2016, and the case was seen as part of the sultan's efforts to resolve conflicts of interest and promote government transparency. Later, Yasmin accompanied the sultan on his state visit to Beijing on 13 September 2017, where he also attended the 14th China–ASEAN Expo.
Following a cabinet reshuffle on 30 January 2018, Yasmin was removed from his role as Minister of Energy and Industry, with Mat Suny succeeding him. This significant reorganisation, aimed at advancing the sultan's commitment to combating corruption and fostering national development, sought to introduce fresh talent and accelerate the implementation of Wawasan Brunei 2035.
Political views
Using Brunei's energy sector as a basis for economic growth and diversification was at the heart of Yasmin's political views. He argued strongly for maximising local content in oil and gas activities through government LBD directives, which promoted the expansion of local businesses and generated employment opportunities. Yasmin also placed a strong emphasis on the growth of downstream industries in order to add value to Brunei's oil and gas resources, lessen the country's susceptibility to price swings, and attract substantial foreign direct investment (FDI), such as the multibillion-dollar investments made by Hengyi Industries and Brunei Fertilizer Industries.
Yasmin also highlighted Brunei's ability to compete for FDI by making doing business easier. He did this by pointing out improvements that made Brunei the most improved economy in the World Bank's 2016 and 2017 Doing Business Reports. In line with the Wawasan Brunei 2035 goal of diversifying the economy and lowering dependency on oil, he thought these measures increased investor confidence in both the oil and non-oil industries. Yasmin went on to highlight Brunei's distinct assets, including its unexplored natural biodiversity and high-quality halal standards, as major forces behind regional competitiveness in high-priority industries including halal, technology, tourism, and business services.
Personal life
Yasmin married Datin Hajah Noryasimah binti Abdullah on 5 August 1983, and the couple has a daughter.
Titles, styles and honours
Titles and styles
Yasmin was honoured by Sultan Hassanal Bolkiah with the manteri title of , bearing the style .
Honours
Yasmin has been bestowed the following honours:
National
Order of Setia Negara Brunei First Class (PSNB; 15 July 2011) – Dato Seri Setia
Order of Seri Paduka Mahkota Brunei First Class (SPMB; 15 July 2006) – Dato Seri Paduka
Order of Seri Paduka Mahkota Brunei Second Class (DPMB; 15 July 2003) – Dato Paduka
Order of Seri Paduka Mahkota Brunei Third Class (SMB)
Sultan Hassanal Bolkiah Medal First Class (PHBS; 15 July 2010)
Sultan of Brunei Silver Jubilee Medal (5 October 1992)
Sultan of Brunei Golden Jubilee Medal (5 October 2017)
National Day Silver Jubilee Medal (23 February 2009)
Proclamation of Independence Medal (1997)
General Service Medal
Long Service Medal and Good Conduct (PKLPB)
Royal Brunei Armed Forces Silver Jubilee Medal (31 May 1986)
Fellow of Pertubuhan Ukur Jurutera & Arkitek (1 May 2010)
Foreign
Jordan:
Grand Cordon of the Order of Independence (13 May 2008)
Philippines:
Grand Cross of the Order of Sikatuna (GCrS; 24 August 2008)
Singapore:
Darjah Utama Bakti Cemerlang (Tentera) (DUBC; 16 June 2011)
United Kingdom:
Fellow of the Institution of Engineering and Technology (28 April 2008)
References
Further reading
Living people
1956 births
Bruneian Muslims
Government ministers of Brunei
Members of the Legislative Council of Brunei
Bruneian military personnel
Alumni of the University of Wales
Alumni of Loughborough University
Grand Cordons of the Order of Independence (Jordan)
Bruneian colonels
Fellows of the Institution of Engineering and Technology
Recipients of the Darjah Utama Bakti Cemerlang (Tentera)
A technical standard is an established norm or requirement for a repeatable technical task which is applied to a common and repeated use of rules, conditions, guidelines or characteristics for products or related processes and production methods, and related management systems practices. A technical standard includes definition of terms; classification of components; delineation of procedures; specification of dimensions, materials, performance, designs, or operations; measurement of quality and quantity in describing materials, processes, products, systems, services, or practices; test methods and sampling procedures; or descriptions of fit and measurements of size or strength.
It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes, and practices. In contrast, a custom, convention, company product, corporate standard, and so forth that becomes generally accepted and dominant is often called a de facto standard.
A technical standard may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc. Standards can also be developed by groups such as trade unions and trade associations. Standards organizations often have more diverse input and usually develop voluntary standards: these might become mandatory if adopted by a government (i.e., through legislation), business contract, etc.
The standardization process may be by edict or may involve the formal consensus of technical experts.
Types
The primary types of technical standards are:
A standard specification is an explicit set of requirements for an item, material, component, system or service. It is often used to formalize the technical aspects of a procurement agreement or contract. For example, there may be a specification for a turbine blade for a jet engine that defines the exact material and performance requirements.
A standard test method describes a definitive procedure that produces a test result. It may involve making a careful personal observation or conducting a highly technical measurement. For example, a physical property of a material is often affected by the precise method of testing: any reference to the property should therefore cite the test method used.
A standard practice or procedure gives a set of instructions for performing operations or functions. For example, there are detailed standard operating procedures for operation of a nuclear power plant.
A standard guide is general information or options that do not require a specific course of action.
A standard definition is formally established terminology.
Standard units, in physics and applied mathematics, are commonly accepted measurements of physical quantities.
Definitions
Technical standards are defined as:
Voluntary consensus standards, which are standards developed or adopted by voluntary consensus standards bodies, domestic (national), regional and international.
Industry standards, also referred to as private standards, which are standards developed in the private sector but not through a full consensus process, and which typically require a financial contribution. UNIDO defines three categories of private standards: consortia standards, civil society standards and company-specific standards.
Government standards, which are standards developed by the government for its own uses.
Availability
Technical standards may exist as:
Public documents on the internet, public library, etc. (Some technical standards may be found at a major central library or at the library of a good technical university)
Published documents available for purchase
Private documents owned by an organization or corporation, used and circulated as the owner determines necessary or useful
Documents publicly available under intellectual property (copyright, etc.)
Closed or controlled documents that contain trade secrets or classified information
Geographic levels
When a geographically defined community must solve a community-wide coordination problem, it can adopt an existing standard or produce a new one. The main geographic levels are:
National standard: see National standards organizations. For example, Telecommunications Industry Association standards.
Regional standard: see standards of the Regional standards organizations. For example, CEN standards.
International standard: see International standards organizations. For example, ISO and ASTM International.
National, regional and international standards are one way of overcoming technical barriers in inter-local or inter-regional commerce caused by differences among technical regulations and standards developed independently and separately by each locality, local standards organisation, or local company. Technical barriers arise when different groups come together, each with a large user base, doing some well-established thing that is mutually incompatible between them. Establishing national, regional or international standards is one way of preventing or overcoming this problem. To further support this, the WTO Technical Barriers to Trade (TBT) Committee published the "Six Principles" guiding members in the development of international standards.
Usage
The existence of a published standard does not imply that it is always useful or correct. For example, if an item complies with a certain standard, there is not necessarily assurance that it is fit for any particular use. The people who use the item or service (engineers, trade unions, etc.) or specify it (building codes, government, industry, etc.) have the responsibility to consider the available standards, specify the correct one, enforce compliance, and use the item correctly. Validation of suitability is necessary.
Standards often get reviewed, revised and updated on a regular basis. It is critical that the most current version of a published standard be used or referenced. The originator or standard writing body often has the current versions listed on its web site.
In social sciences, including economics, a standard is useful if it is a solution to a coordination problem:
it emerges from situations in which all parties realize mutual gains, but only by making mutually consistent decisions.
Examples:
Private Standards (consortia)
Private standards are developed by private entities such as companies, non-governmental organizations or private sector multi-stakeholder initiatives, also referred to as multistakeholder governance. Not all technical standards are created equal: unlike voluntary consensus standards, private standards are developed through a non-consensus process, as explained in the paper International standards and private standards.
The International Trade Centre published a literature review series with technical papers on the impacts of private standards and the Food and Agriculture Organization (FAO) published a number of papers in relation to the proliferation of private food safety standards in the agri-food industry, mostly driven by standard harmonization under the multistakeholder governance of the Global Food Safety Initiative (GFSI). With concerns around private standards and technical barriers to trade (TBT), and unable to adhere to the TBT Committee's Six Principles for the development of international standards because private standards are non-consensus, the WTO does not rule out the possibility that the actions of private standard-setting bodies may be subject to WTO law.
BSI Group compared private food safety standards with "plugs and sockets", explaining the food sector is full of "confusion and complexity". Also, "the multiplicity of standards and assurance schemes has created a fragmented and inefficient supply chain structure imposing unnecessary costs on businesses that have no choice but to pass on to consumers". BSI provide examples of other sectors working with a single international standard; ISO 9001 (quality), ISO 14001 (environment), ISO 45001 (occupational health and safety), ISO 27001 (information security) and ISO 22301 (business continuity). Another example of a sector working with a single international standard is ISO 13485 (medical devices), which is adopted by the International Medical Device Regulators Forum (IMDRF).
In 2020, Fairtrade International, and in 2021, Programme for the Endorsement of Forest Certification (PEFC) issued position statements defending their use of private standards in response to reports from The Institute for Multi-Stakeholder Initiative Integrity (MSI Integrity) and Greenpeace.
Private standards typically require a financial contribution in terms of an annual fee from the organizations who adopt the standard. Corporations are encouraged to join the board of governance of the standard owner which enables reciprocity. Meaning corporations have permission to exert influence over the requirements in the standard, and in return the same corporations promote the standards in their supply chains which generates revenue and profit for the standard owner. Financial incentives with private standards can result in a perverse incentive, where some private standards are created solely with the intent of generating money. BRCGS, as scheme owner of private standards, was acquired in 2016 by LGC Ltd who were owned by private equity company Kohlberg Kravis Roberts. This acquisition triggered substantial increases in BRCGS annual fees. In 2019, LGC Ltd was sold to private equity companies Cinven and Astorg.
See also
De facto standard
Harmonization (standards)
International Standard
International Organization for Standardization
List of international common standards
List of computer standards
List of technical standard organisations
Software standard
Specification (technical standard)
Standard (metrology)
Standards organization
Standardization
World Standards Day
World Standards Cooperation
References
Further reading
Kellermann, Martin (2019). Ensuring Quality to Gain Access to Global Markets: A Reform Toolkit (PDF). International Bank for Reconstruction and Development / The World Bank and Physikalisch-Technische Bundesanstalt (PTB). Standards Chapter, pp. 45–68. .
Good Standardization Practices (GSP) (2019). International Organization for Standardization (ISO). .
Standards
Documents
Technical communication
Technical specifications
Eric Austin Litman (born August 1, 1973) is an American entrepreneur and angel investor, and CEO of the robotics health technology company, Aescape, inc. He co-founded Proxicom, built Viaduct from a one-man shop through a merger with the Wolf Group and was the founder and CEO of Medialets, a mobile ad serving and advertising analytics company acquired by WPP plc.
He has been profiled and quoted by The Wall Street Journal, Forbes, Wired, and Fast Company; was named a 2010 Game Changer by New York Enterprise Report; and in 2011 was called one of the "best operators in online advertising" by TechCrunch.
Early life and education
Litman was born in Los Angeles and grew up on Saint Thomas, one of the United States Virgin Islands, where he graduated from high school at 15. He attended the University of Maryland, College Park, in College Park, Maryland.
Career
Beginning in business
While in college, he worked in pre-sales support and engineering at NeXT, the start-up founded by Apple CEO Steve Jobs. Litman went on to be a senior systems engineer for Digicon, building secure, distributed networks, and applications for the U.S. Department of Defense.
Proxicom
Litman and three other colleagues from Digicon founded Proxicom in 1991. Proxicom, one of the first-generation Internet professional services agencies, went public on NASDAQ in 1998 and was sold to the global consultancy Dimension Data after a bidding war against Compaq (prior to Compaq's merger with Hewlett-Packard).
Viaduct
After Proxicom, Litman founded Viaduct Technologies, an interactive agency and was its CEO. Viaduct was acquired by Wolf Group in 2000. After the acquisition Litman stayed as Viaduct's chief operating officer.
WashingtonVC
Litman was the managing director of WashingtonVC, an early stage venture capital fund in Washington, DC, where he focused on investments in online media, consumer Internet and telecommunications. He conceptualized and launched Aux Interactive in March, 2008. He left WashingtonVC in May, 2008.
Medialets
Litman founded Medialets, a mobile ad serving, attribution and measurement provider in 2008. In April, 2015, Medialets was acquired by WPP plc, the world's largest advertising company, where he was the senior vice president of Mobile Worldwide until April 2017.
Aescape
In May 2017, Litman founded Aescape, Inc., a robotics health technology company focused on building intuitive massage therapy experiences designed to help people of all walks of life feel and live better and longer. Its investors include Peter Wurman, the co-founder of Kiva Systems (now Amazon Robotics), Fabrice Grinda of FJ Labs, Seth Levine and Brad Feld of Foundry Group, NBA championship player Matthew Dellavedova, Shane Feldberg, and others.
References
External links
Eric Litman's Blog
New York Times: THE MEDIA BUSINESS: ADVERTISING -- ADDENDA; Wolfe and Omnicom Make Acquisitions
Washington VC Names Litman Executive Director
University Venture Summit Starts Students Early on Venture Capital Path
Shashi Bellamkonda : Meeting a VC in DC - Eric Litman
American technology chief executives
American computer businesspeople
Computer systems engineers
Living people
Businesspeople in software
1973 births
In cryptography, a group key is a cryptographic key that is shared between a group of users. Typically, group keys are distributed by sending them to individual users, either physically or encrypted individually for each user using that user's pre-distributed key.
A common use of group keys is to allow a group of users to decrypt a broadcast message that is intended for that entire group of users, and no one else.
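This pattern can be illustrated with a short Python sketch: one group key is wrapped individually for each member under that member's pre-distributed key, and the group key then protects a single broadcast for everyone. The XOR-with-hash keystream below is a toy construction for illustration only (a real system would use an authenticated cipher such as AES-GCM), and all names are hypothetical:

```python
import secrets
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def xor_decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# Each user already holds a pre-distributed individual key.
user_keys = {name: secrets.token_bytes(32) for name in ["alice", "bob", "carol"]}

# The broadcaster picks one fresh group key and wraps it once per user.
group_key = secrets.token_bytes(32)
wrapped = {name: xor_encrypt(k, group_key) for name, k in user_keys.items()}

# Any group member unwraps the group key with their own key...
nonce, ct = wrapped["bob"]
recovered = xor_decrypt(user_keys["bob"], nonce, ct)
assert recovered == group_key

# ...and can then decrypt broadcasts encrypted under the group key.
n2, broadcast = xor_encrypt(group_key, b"message for the whole group")
print(xor_decrypt(recovered, n2, broadcast))  # b'message for the whole group'
```

A practical consequence of this design is that the broadcaster encrypts each message only once, under the group key; the per-user work is limited to wrapping the (short) group key itself.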
For example, in the Second World War, group keys (known as "iodoforms", a term invented by a classically educated non-chemist, and nothing to do with the chemical of the same name) were sent to groups of agents by the Special Operations Executive. These group keys allowed all the agents in a particular group to receive a single coded message.
In present-day applications, group keys are commonly used in conditional access systems, where the key is the common key used to decrypt the broadcast signal, and the group in question is the group of all paying subscribers. In this case, the group key is typically distributed to the subscribers' receivers using a combination of a physically distributed secure cryptoprocessor in the form of a smartcard and encrypted over-the-air messages.
References
Cryptography
In physics, a shock wave (also spelled shockwave), or shock, is a type of propagating disturbance that moves faster than the local speed of sound in the medium. Like an ordinary wave, a shock wave carries energy and can propagate through a medium, but is characterized by an abrupt, nearly discontinuous, change in pressure, temperature, and density of the medium.
For comparison, in supersonic flows additional expansion may be achieved through an expansion fan, also known as a Prandtl–Meyer expansion fan. The accompanying expansion wave may approach and eventually collide and recombine with the shock wave, creating a process of destructive interference. The sonic boom associated with the passage of a supersonic aircraft is a type of sound wave produced by constructive interference.
Unlike solitons (another kind of nonlinear wave), the energy and speed of a shock wave alone dissipates relatively quickly with distance. When a shock wave passes through matter, energy is preserved but entropy increases. This change in the matter's properties manifests itself as a decrease in the energy which can be extracted as work, and as a drag force on supersonic objects; shock waves are strongly irreversible processes.
Terminology
Shock waves can be:
Normal At 90° (perpendicular) to the shock medium's flow direction.
Oblique At an angle to the direction of flow.
Bow Occurs upstream of the front (bow) of a blunt object when the upstream flow velocity exceeds Mach 1.
Some other terms:
Shock front: The boundary over which the physical conditions undergo an abrupt change because of a shock wave.
Contact front: In a shock wave caused by a driver gas (for example the "impact" of a high explosive on the surrounding air), the boundary between the driver (explosive products) and the driven (air) gases. The contact front trails the shock front.
In supersonic flows
The abruptness of change in the features of the medium, that characterize shock waves, can be viewed as a phase transition: the pressure–time diagram of a supersonic object propagating shows how the transition induced by a shock wave is analogous to a dynamic phase transition.
When an object (or disturbance) moves faster than the information can propagate into the surrounding fluid, then the fluid near the disturbance cannot react or "get out of the way" before the disturbance arrives. In a shock wave the properties of the fluid (density, pressure, temperature, flow velocity, Mach number) change almost instantaneously. Measurements of the thickness of shock waves in air have resulted in values around 200 nm (about 10⁻⁵ in), which is on the same order of magnitude as the mean free path of gas molecules. In reference to the continuum, this implies the shock wave can be treated as either a line or a plane if the flow field is two-dimensional or three-dimensional, respectively.
Shock waves are formed when a pressure front moves at supersonic speeds and pushes on the surrounding air. At the region where this occurs, sound waves travelling against the flow reach a point where they cannot travel any further upstream and the pressure progressively builds in that region; a high-pressure shock wave rapidly forms.
Shock waves are not conventional sound waves; a shock wave takes the form of a very sharp change in the gas properties. Shock waves in air are heard as a loud "crack" or "snap" noise. Over longer distances, a shock wave can change from a nonlinear wave into a linear wave, degenerating into a conventional sound wave as it heats the air and loses energy. The sound wave is heard as the familiar "thud" or "thump" of a sonic boom, commonly created by the supersonic flight of aircraft.
The shock wave is one of several different ways in which a gas in a supersonic flow can be compressed. Some other methods are isentropic compressions, including Prandtl–Meyer compressions. The method of compression of a gas results in different temperatures and densities for a given pressure ratio which can be analytically calculated for a non-reacting gas. A shock wave compression results in a loss of total pressure, meaning that it is a less efficient method of compressing gases for some purposes, for instance in the intake of a scramjet. The appearance of pressure-drag on supersonic aircraft is mostly due to the effect of shock compression on the flow.
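The difference between shock and isentropic compression can be made concrete with a short sketch. Assuming a calorically perfect gas with γ = 1.4 (air), it compares the density, and hence temperature, reached by a normal shock (via the Rankine–Hugoniot relation) against an isentropic compression to the same pressure ratio:

```python
# For the same pressure ratio p2/p1, compare the density (and temperature)
# reached across a normal shock (Rankine-Hugoniot) with that reached by an
# isentropic compression, for a calorically perfect gas with gamma = 1.4.

def shock_density_ratio(pr: float, gamma: float = 1.4) -> float:
    # Rankine-Hugoniot relation: rho2/rho1 as a function of p2/p1.
    g = (gamma + 1.0) / (gamma - 1.0)
    return (1.0 + g * pr) / (g + pr)

def isentropic_density_ratio(pr: float, gamma: float = 1.4) -> float:
    # p is proportional to rho**gamma along an isentrope.
    return pr ** (1.0 / gamma)

for pr in (2.0, 10.0, 100.0):
    rs, ri = shock_density_ratio(pr), isentropic_density_ratio(pr)
    # T2/T1 = (p2/p1) / (rho2/rho1) from the ideal-gas law.
    print(f"p2/p1={pr:6.1f}  shock rho2/rho1={rs:6.3f} (T2/T1={pr / rs:6.2f})  "
          f"isentropic rho2/rho1={ri:6.3f} (T2/T1={pr / ri:5.2f})")
```

Because the shock density ratio is capped at (γ + 1)/(γ − 1) = 6 for air, gas compressed through a strong shock ends up far hotter, at the same pressure, than isentropically compressed gas; this is the analytic form of the efficiency penalty mentioned above for applications such as scramjet intakes.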
Normal shocks
In elementary fluid mechanics utilizing ideal gases, a shock wave is treated as a discontinuity where entropy increases abruptly as the shock passes. Since no fluid flow is discontinuous, a control volume is established around the shock wave, with the control surfaces that bound this volume parallel to the shock wave (with one surface on the pre-shock side of the fluid medium and one on the post-shock side). The two surfaces are separated by a very small depth such that the shock itself is entirely contained between them. At such control surfaces, momentum, mass flux and energy are constant; within combustion, detonations can be modelled as heat introduction across a shock wave. It is assumed the system is adiabatic (no heat exits or enters the system) and no work is being done. The Rankine–Hugoniot conditions arise from these considerations.
Taking into account the established assumptions, in a system where the downstream properties are becoming subsonic: the upstream and downstream flow properties of the fluid are considered isentropic. Since the total amount of energy within the system is constant, the stagnation enthalpy remains constant over both regions. However, entropy is increasing; this must be accounted for by a drop in stagnation pressure of the downstream fluid.
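The jump conditions that follow from these control-volume balances have closed-form solutions for a calorically perfect gas. A sketch, assuming γ = 1.4 (air):

```python
import math

def normal_shock(M1: float, gamma: float = 1.4):
    """Jump conditions across a normal shock for upstream Mach number M1 > 1."""
    if M1 <= 1.0:
        raise ValueError("a normal shock requires supersonic upstream flow")
    # Downstream Mach number is always subsonic.
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) /
                   (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2/rho1
    T_ratio = p_ratio / rho_ratio                                # ideal-gas law
    # Stagnation pressure drops because entropy rises across the shock.
    p0_ratio = (rho_ratio ** (gamma / (gamma - 1)) *
                (1 / p_ratio) ** (1 / (gamma - 1)))              # p02/p01
    return M2, p_ratio, rho_ratio, T_ratio, p0_ratio

M2, p, rho, T, p0 = normal_shock(2.0)
print(f"M2={M2:.4f} p2/p1={p:.3f} rho2/rho1={rho:.4f} T2/T1={T:.4f} p02/p01={p0:.4f}")
# For M1 = 2: M2 ~ 0.5774, p2/p1 = 4.5, rho2/rho1 ~ 2.6667,
# T2/T1 ~ 1.6875, p02/p01 ~ 0.7209 (matching standard gas tables).
```

The constant stagnation enthalpy appears as an unchanged stagnation temperature, while the entropy rise shows up as the stagnation-pressure ratio p02/p01 falling below one, exactly the drop in stagnation pressure of the downstream fluid described above.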
Other shocks
Oblique shocks
When analyzing shock waves in a flow field, which are still attached to the body, the shock wave which is deviating at some arbitrary angle from the flow direction is termed oblique shock. These shocks require a component vector analysis of the flow; doing so allows for the treatment of the flow in an orthogonal direction to the oblique shock as a normal shock.
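This component decomposition can be sketched directly. Assuming a calorically perfect gas with γ = 1.4 and the weak-shock (attached) branch, the normal component M1·sin β of the upstream Mach number is passed through the normal-shock relations, and the flow deflection angle θ follows from the standard θ–β–M relation:

```python
import math

def oblique_shock(M1: float, beta_deg: float, gamma: float = 1.4):
    """Attached oblique shock with wave angle beta, resolved by treating the
    normal component of the upstream Mach number as a normal shock."""
    beta = math.radians(beta_deg)
    M1n = M1 * math.sin(beta)
    if M1n <= 1.0:
        raise ValueError("the normal component of the flow must be supersonic")
    # theta-beta-M relation gives the flow deflection angle theta.
    tan_theta = (2 / math.tan(beta) * (M1n**2 - 1) /
                 (M1**2 * (gamma + math.cos(2 * beta)) + 2))
    theta = math.atan(tan_theta)
    # Normal-shock relations applied to the normal component only.
    M2n = math.sqrt((1 + 0.5 * (gamma - 1) * M1n**2) /
                    (gamma * M1n**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1n**2 - 1)  # p2/p1
    M2 = M2n / math.sin(beta - theta)  # total downstream Mach number
    return math.degrees(theta), M2, p_ratio

theta, M2, p = oblique_shock(3.0, 40.0)
print(f"deflection ~ {theta:.1f} deg, M2 ~ {M2:.2f}, p2/p1 ~ {p:.2f}")
# roughly 21.8 deg, M2 ~ 1.89, p2/p1 ~ 4.17 for this case
```

Note that only the normal component is shocked to subsonic speed; the tangential component is unchanged, which is why the total downstream Mach number M2 can remain supersonic behind a weak oblique shock.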
Bow shocks
When an oblique shock is likely to form at an angle which cannot remain on the surface, a nonlinear phenomenon arises where the shock wave will form a continuous pattern around the body. These are termed bow shocks. In these cases, the 1d flow model is not valid and further analysis is needed to predict the pressure forces which are exerted on the surface.
Shock waves due to nonlinear steepening
Shock waves can form due to steepening of ordinary waves. The best-known example of this phenomenon is ocean waves that form breakers on the shore. In shallow water, the speed of surface waves is dependent on the depth of the water. An incoming ocean wave has a slightly higher wave speed near the crest of each wave than near the troughs between waves, because the wave height is not infinitesimal compared to the depth of the water. The crests overtake the troughs until the leading edge of the wave forms a vertical face and spills over to form a turbulent shock (a breaker) that dissipates the wave's energy as sound and heat.
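The steepening mechanism follows from the shallow-water phase speed c = sqrt(g·h), which is larger where the local water depth h is larger. A small numerical illustration (the depth and amplitude values are arbitrary choices for the example):

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
depth = 2.0       # still-water depth, m (assumed for illustration)
amplitude = 0.4   # wave amplitude, m (assumed for illustration)

# Shallow-water phase speed depends on the local depth: c = sqrt(g * h).
c_crest = math.sqrt(g * (depth + amplitude))   # ~ 4.85 m/s
c_trough = math.sqrt(g * (depth - amplitude))  # ~ 3.96 m/s
print(f"crest:  {c_crest:.2f} m/s")
print(f"trough: {c_trough:.2f} m/s")
# The crest outruns the trough, so the front face of the wave steepens
# until it forms a vertical face and breaks.
```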
Similar phenomena affect strong sound waves in gas or plasma, due to the dependence of the sound speed on temperature and pressure. Strong waves heat the medium near each pressure front, due to adiabatic compression of the air itself, so that high pressure fronts outrun the corresponding pressure troughs. There is a theory that the sound pressure levels in brass instruments such as the trombone become high enough for steepening to occur, forming an essential part of the bright timbre of the instruments. While shock formation by this process does not normally happen to unenclosed sound waves in Earth's atmosphere, it is thought to be one mechanism by which the solar chromosphere and corona are heated, via waves that propagate up from the solar interior.
Analogies
A shock wave may be described as the furthest point upstream of a moving object which "knows" about the approach of the object. In this description, the shock wave position is defined as the boundary between the zone having no information about the shock-driving event and the zone aware of the shock-driving event, analogous with the light cone described in the theory of special relativity.
To produce a shock wave, an object in a given medium (such as air or water) must travel faster than the local speed of sound. In the case of an aircraft travelling at high subsonic speed, regions of air around the aircraft may be travelling at exactly the speed of sound, so that the sound waves leaving the aircraft pile up on one another, similar to a traffic jam on a motorway. When a shock wave forms, the local air pressure increases and then spreads out sideways. Because of this amplification effect, a shock wave can be very intense, more like an explosion when heard at a distance (not coincidentally, since explosions create shock waves).
Analogous phenomena are known outside fluid mechanics. For example, charged particles accelerated beyond the speed of light in a refractive medium (such as water, where the speed of light is less than that in a vacuum) create visible shock effects, a phenomenon known as Cherenkov radiation.
Phenomenon types
Below are a number of examples of shock waves, broadly grouped with similar shock phenomena:
Moving shock
Usually consists of a shock wave propagating into a stationary medium
In this case, the gas ahead of the shock is stationary (in the laboratory frame) and the gas behind the shock can be supersonic in the laboratory frame. The shock propagates with a wavefront which is normal (at right angles) to the direction of flow. The speed of the shock is a function of the original pressure ratio between the two bodies of gas.
Moving shocks are usually generated by the interaction of two bodies of gas at different pressure, with a shock wave propagating into the lower pressure gas and an expansion wave propagating into the higher pressure gas.
Examples: Balloon bursting, shock tube, shock wave from explosion.
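For a perfect gas, the dependence of the shock speed on the pressure ratio can be written down explicitly from the normal-shock relations. A minimal sketch, assuming air with an illustrative ambient temperature of 300 K:

```python
import math

def moving_shock_speed(p_ratio, T1=300.0, gamma=1.4, R=287.0):
    """Speed of a shock propagating into still gas, from the pressure
    ratio p2/p1 across it (perfect-gas normal-shock relation)."""
    a1 = math.sqrt(gamma * R * T1)        # speed of sound ahead of the shock
    Ms = math.sqrt((gamma + 1) / (2 * gamma) * (p_ratio - 1) + 1)
    return Ms * a1

# A pressure ratio of 1 recovers an acoustic wave moving at the speed
# of sound; stronger ratios drive the shock supersonically.
print(moving_shock_speed(1.0))    # ≈ 347 m/s for air at 300 K
print(moving_shock_speed(10.0))   # ≈ 1025 m/s
```

The first call reduces to the ambient sound speed, consistent with a vanishingly weak shock.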
Detonation wave
A detonation wave is essentially a shock supported by a trailing exothermic reaction. It involves a wave travelling through a highly combustible or chemically unstable medium, such as an oxygen-methane mixture or a high explosive. The chemical reaction of the medium occurs following the shock wave, and the chemical energy of the reaction drives the wave forward.
A detonation wave follows slightly different rules from an ordinary shock since it is driven by the chemical reaction occurring behind the shock wavefront. In the simplest theory for detonations, an unsupported, self-propagating detonation wave proceeds at the Chapman–Jouguet flow velocity. A detonation will also cause a shock to propagate into the surrounding air due to the overpressure induced by the explosion.
When a shock wave is created by high explosives such as TNT (which has a detonation velocity of 6,900 m/s), it will always travel at high, supersonic velocity from its point of origin.
Bow shock (detached shock)
These shocks are curved and form a small distance in front of the body. Directly in front of the body, they stand at 90 degrees to the oncoming flow and then curve around the body. Detached shocks allow the same type of analytic calculations as for the attached shock, for the flow near the shock. They are a topic of continuing interest, because the rules governing the shock's distance ahead of the blunt body are complicated and are a function of the body's shape. Additionally, the shock standoff distance varies drastically with the temperature for a non-ideal gas, causing large differences in the heat transfer to the thermal protection system of the vehicle. See the extended discussion on this topic at atmospheric reentry. These follow the "strong-shock" solutions of the analytic equations, meaning that for some oblique shocks very close to the deflection angle limit, the downstream Mach number is subsonic. See also bow shock or oblique shock.
Such a shock occurs when the maximum deflection angle is exceeded. A detached shock is commonly seen on blunt bodies, but may also be seen on sharp bodies at low Mach numbers.
Examples: Space return vehicles (Apollo, Space shuttle), bullets, the boundary (bow shock) of a magnetosphere. The name "bow shock" comes from the example of a bow wave, the detached shock formed at the bow (front) of a ship or boat moving through water, whose slow surface wave speed is easily exceeded (see ocean surface wave).
Attached shock
These shocks appear as attached to the tip of sharp bodies moving at supersonic speeds.
Examples: Supersonic wedges and cones with small apex angles.
The attached shock wave is a classic structure in aerodynamics because, for a perfect gas and inviscid flow field, an analytic solution is available, such that the pressure ratio, temperature ratio, angle of the wedge and the downstream Mach number can all be calculated knowing the upstream Mach number and the shock angle. Smaller shock angles are associated with higher upstream Mach numbers, and the special case where the shock wave is at 90° to the oncoming flow (Normal shock), is associated with a Mach number of one. These follow the "weak-shock" solutions of the analytic equations.
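The analytic solution referred to above can be sketched directly. The function below evaluates the classic θ–β–M relation for the flow deflection angle, together with the static pressure ratio, for a perfect gas (γ = 1.4 is an assumed default):

```python
import math

def oblique_shock(M1, beta_deg, gamma=1.4):
    """Flow deflection angle (degrees) and pressure ratio p2/p1 across
    an attached oblique shock, from the upstream Mach number and shock
    angle (classic theta-beta-M relation for a perfect gas)."""
    beta = math.radians(beta_deg)
    Mn1 = M1 * math.sin(beta)                 # normal Mach number component
    theta = math.atan(2 / math.tan(beta) * (Mn1**2 - 1)
                      / (M1**2 * (gamma + math.cos(2 * beta)) + 2))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (Mn1**2 - 1)
    return math.degrees(theta), p_ratio

# A Mach 3 flow through a 30-degree shock is deflected by about 12.8 degrees.
theta, pr = oblique_shock(3.0, 30.0)
print(f"deflection {theta:.1f} deg, p2/p1 = {pr:.2f}")
```

Smaller shock angles at fixed deflection correspond to higher upstream Mach numbers, matching the trend stated above.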
In rapid granular flows
Shock waves can also occur in rapid flows of dense granular materials down inclined channels or slopes. Strong shocks in rapid dense granular flows can be studied theoretically and analyzed to compare with experimental data. Consider a configuration in which the rapidly moving material down the chute impinges on an obstruction wall erected perpendicular at the end of a long and steep channel. Impact leads to a sudden change in the flow regime from a fast moving supercritical thin layer to a stagnant thick heap. This flow configuration is particularly interesting because it is analogous to some hydraulic and aerodynamic situations associated with flow regime changes from supercritical to subcritical flows.
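The hydraulic analogue mentioned above can be made concrete: in shallow-water flow the role of the shock is played by the hydraulic jump, whose depth ratio follows from the Bélanger equation. This sketches only the analogue, not the granular flow itself, which obeys related but more complicated rules:

```python
import math

def jump_depth_ratio(Fr1):
    """Downstream/upstream depth ratio across a hydraulic jump
    (Belanger equation), the shallow-water analogue of a normal shock;
    Fr1 is the upstream Froude number."""
    return (math.sqrt(1 + 8 * Fr1**2) - 1) / 2

# Supercritical inflow (Fr1 > 1) thickens into a slower, deeper layer,
# much as the thin fast granular stream piles into a thick heap.
print(jump_depth_ratio(4.0))   # ≈ 5.18
```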
In astrophysics
Astrophysical environments feature many different types of shock waves. Some common examples are supernovae shock waves or blast waves travelling through the interstellar medium, the bow shock caused by the Earth's magnetic field colliding with the solar wind and shock waves caused by galaxies colliding with each other. Another interesting type of shock in astrophysics is the quasi-steady reverse shock or termination shock that terminates the ultra relativistic wind from young pulsars.
Meteor entering events
Shock waves are generated by meteoroids when they enter the Earth's atmosphere. The Tunguska event and the 2013 Russian meteor event are the best documented evidence of the shock wave produced by a massive meteoroid.
When the 2013 meteor entered the Earth's atmosphere with an energy release equivalent to 100 or more kilotons of TNT (dozens of times more powerful than the atomic bomb dropped on Hiroshima), its shock wave produced damage both as in a supersonic jet's flyby (directly underneath the meteor's path) and as a detonation wave, with the circular shock wave centred at the meteor explosion, causing multiple instances of broken glass in the city of Chelyabinsk and neighbouring areas (pictured).
Technological applications
In the examples below, the shock wave is controlled: it is produced by a technological device (for example, an airfoil) or occurs in the interior of one, such as a turbine.
Recompression shock
These shocks appear when the flow over a transonic body is decelerated to subsonic speeds.
Examples: Transonic wings, turbines
Where the flow over the suction side of a transonic wing is accelerated to a supersonic speed, the resulting re-compression can be by either Prandtl–Meyer compression or by the formation of a normal shock. This shock is of particular interest to makers of transonic devices because it can cause separation of the boundary layer at the point where it touches the transonic profile. This can then lead to full separation and stall on the profile, higher drag, or shock-buffet, a condition where the separation and the shock interact in a resonance condition, causing resonating loads on the underlying structure.
Pipe flow
This shock appears when supersonic flow in a pipe is decelerated.
Examples:
In supersonic propulsion: ramjet, scramjet, unstart.
In flow control: needle valve, choked venturi.
In this case the gas ahead of the shock is supersonic (in the laboratory frame), and the gas behind the shock system is either supersonic (oblique shocks) or subsonic (a normal shock) (Although for some oblique shocks very close to the deflection angle limit, the downstream Mach number is subsonic.) The shock is the result of the deceleration of the gas by a converging duct, or by the growth of the boundary layer on the wall of a parallel duct.
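For the normal-shock case, the downstream Mach number follows from the standard perfect-gas relation and is always subsonic for supersonic inflow. A minimal sketch:

```python
import math

def downstream_mach(M1, gamma=1.4):
    """Mach number behind a normal shock (perfect gas). Supersonic
    duct flow decelerated through a normal shock is always subsonic."""
    num = 1 + (gamma - 1) / 2 * M1**2
    den = gamma * M1**2 - (gamma - 1) / 2
    return math.sqrt(num / den)

print(downstream_mach(2.0))   # ≈ 0.577
```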
Combustion engines
The wave disk engine (also named "Radial Internal Combustion Wave Rotor") is a kind of pistonless rotary engine that utilizes shock waves to transfer energy from a high-energy fluid to a low-energy fluid, thereby increasing both the temperature and pressure of the low-energy fluid.
Memristors
In memristors, under externally-applied electric field, shock waves can be launched across the transition-metal oxides, creating fast and non-volatile resistivity changes.
Shock capturing and detection
Advanced techniques are needed to capture shock waves and to detect shock waves in both numerical computations and experimental observations.
Computational fluid dynamics is commonly used to obtain the flow field with shock waves. Though shock waves are sharp discontinuities, in numerical solutions of fluid flow with discontinuities (shock waves, contact discontinuities or slip lines), the shock wave can be smoothed out by low-order numerical methods (due to numerical dissipation), or spurious oscillations can appear near the shock surface with high-order numerical methods (due to the Gibbs phenomenon).
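The low-order behaviour is easy to reproduce. The toy below (grid size and CFL number are arbitrary illustrative choices) advects a step with a first-order upwind scheme and counts the cells over which numerical dissipation smears the discontinuity. Being monotone, this scheme smears but does not oscillate; an unlimited high-order scheme would instead exhibit Gibbs oscillations near the step.

```python
import numpy as np

# Advect a step profile with a first-order upwind scheme: the
# discontinuity survives but is smeared over many cells by
# numerical dissipation (the low-order behaviour described above).
nx, c = 200, 0.5                  # number of cells and CFL number
u = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # initial step
for _ in range(100):
    u[1:] = u[1:] - c * (u[1:] - u[:-1])          # upwind update

# Count cells where the solution is neither ~0 nor ~1: the smeared zone.
smeared = np.sum((u > 0.01) & (u < 0.99))
print(f"step smeared over {smeared} cells")
```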
There exist discontinuities in fluid flow other than the shock wave. The slip surface (3D) or slip line (2D) is a plane across which the tangential velocity is discontinuous, while pressure and normal velocity are continuous. Across a contact discontinuity, the pressure and velocity are continuous but the density is discontinuous. A strong expansion wave or shear layer may also contain high-gradient regions which appear to be discontinuities. Some common features of these flow structures and shock waves, together with the insufficiencies of numerical and experimental tools, lead to two important problems in practice:
(1) some shock waves cannot be detected, or their positions are detected incorrectly; (2) some flow structures which are not shock waves are wrongly detected as shock waves.
In fact, correct capturing and detection of shock waves are important since shock waves have the following influences:
(1) causing loss of total pressure, which may be a concern related to scramjet engine performance,
(2) providing lift for wave-rider configuration, as the oblique shock wave at lower surface of the vehicle can produce high pressure to generate lift,
(3) leading to wave drag of high-speed vehicle which is harmful to vehicle performance,
(4) inducing severe pressure loads and heat flux, e.g. the Type IV shock–shock interference can yield a 17-fold heating increase at the vehicle surface,
(5) interacting with other structures, such as boundary layers, to produce new flow structures such as flow separation, transition, etc.
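Influence (1) can be quantified for a normal shock: the stagnation-pressure ratio across it follows from the standard perfect-gas relation and is always below one for supersonic inflow. A minimal sketch:

```python
import math

def total_pressure_ratio(M1, gamma=1.4):
    """Stagnation pressure ratio p02/p01 across a normal shock,
    quantifying the loss of total pressure for a perfect gas."""
    a = ((gamma + 1) * M1**2 / 2) / (1 + (gamma - 1) / 2 * M1**2)
    b = (gamma + 1) / (2 * gamma * M1**2 - (gamma - 1))
    return a ** (gamma / (gamma - 1)) * b ** (1 / (gamma - 1))

# Nearly 28% of the total pressure is lost through a Mach 2 normal shock.
print(total_pressure_ratio(2.0))   # ≈ 0.721
```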
See also
Blast wave
Shock waves in astrophysics
Atmospheric focusing
Atmospheric reentry
Cherenkov radiation
Explosion
Hydraulic jump
Joule–Thomson effect
Mach wave
Magnetopause
Moreton wave
Normal shock tables
Oblique shock
Prandtl condition
Prandtl–Meyer expansion fan
Shocks and discontinuities (MHD)
Shock (mechanics)
Sonic boom
Supercritical airfoil
Undercompressive shock wave
Unstart
Shock diamond
Kelvin wake pattern
References
Nikonov, V. A Semi-Lagrangian Godunov-Type Method without Numerical Viscosity for Shocks. Fluids 2022, 7, 16. https://doi.org/10.3390/fluids7010016
Further reading
Smoller, Joel: (1983), Shock Waves and Reaction—Diffusion Equations, Springer ISBN 9780387907529.
External links
NASA Glenn Research Center information on:
Oblique Shocks
Multiple Crossed Shocks
Expansion Fans
Selkirk college: Aviation intranet: High speed (supersonic) flight
Energy loss in a shock wave, normal and oblique shock waves
Formation of a normal shock wave
Fundamentals of compressible flow, 2007
NASA 2015 Schlieren image shock wave T-38C
Alternating algebra

In mathematics, an alternating algebra is a Z-graded algebra for which xy = (−1)^(deg x · deg y) yx for all nonzero homogeneous elements x and y (i.e. it is an anticommutative algebra) and which has the further property that x^2 = 0 (nilpotence) for every homogeneous element x of odd degree.
Examples
The differential forms on a differentiable manifold form an alternating algebra.
The exterior algebra is an alternating algebra.
The cohomology ring of a topological space is an alternating algebra.
Properties
The algebra formed as the direct sum of the homogeneous subspaces of even degree of an anticommutative algebra is a subalgebra contained in the centre of , and is thus commutative.
An anticommutative algebra over a (commutative) base ring in which 2 is not a zero divisor is alternating.
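The second property follows from a short standard computation, sketched here:

```latex
% For a homogeneous element x of odd degree, apply anticommutativity
% with y = x; since (deg x)(deg x) is odd,
x \cdot x = (-1)^{\deg x \,\deg x}\, x \cdot x = -\,x \cdot x,
\qquad\text{hence}\qquad 2\,x^{2} = 0.
% If 2 is not a zero divisor in the base ring, this forces x^{2} = 0,
% so the algebra is alternating.
```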
See also
Alternating multilinear map
Exterior algebra
Graded-symmetric algebra
Supercommutative algebra
References
Algebraic geometry
NGC 6052

NGC 6052 is a pair of galaxies in the constellation of Hercules. It was discovered on 11 June 1784 by William Herschel. It was described as "faint, pretty large, irregularly round" by John Louis Emil Dreyer, the compiler of the New General Catalogue.
The two components of NGC 6052 are designated NGC 6052A and NGC 6052B, respectively. The two, attracted by each other's gravity, have collided and are interacting with each other. NGC 6052 is currently in a late stage of merging, where the shape of the two galaxies is not distinctly defined.
SN 1982aa, a powerful radio supernova, was detected in NGC 6052.
Gallery
See also
Hercules (Chinese astronomy)
List of largest galaxies
List of nearest galaxies
List of NGC objects (6001–7000)
References
Notes
External links
6052
10182
209
57039
Interacting galaxies
Luminous infrared galaxies
Hercules (constellation)
Markarian galaxies
Phoronix Test Suite

Phoronix Test Suite (PTS) is free and open-source benchmark software for Linux and other operating systems.
The Phoronix Test Suite, developed by Michael Larabel and Matthew Tippett, has been endorsed by sites such as Linux.com, LinuxPlanet, and Softpedia.
Features
Phoronix Test Suite supports over 220 test profiles and over 60 test suites. It uses an XML-based testing architecture. Available tests include MEncoder, FFmpeg and lm-sensors, along with OpenGL games such as Doom 3, Nexuiz, and Enemy Territory: Quake Wars, among many others. The suite also contains a feature called PTS Global where users may upload their test results and system information for sharing. By executing a single command, other users can compare their test results to a selected system in an easy-comparison mode. Benchmark results were originally uploaded to the Phoronix Global online database; since 2013 they can be uploaded to OpenBenchmarking.org. Phoronix Test Suite supports automated Git bisecting on a performance basis to find performance regressions, and features statistical significance verification.
Components
Phoromatic
Phoromatic is a web-based remote test management system for the Phoronix Test Suite. It allows the automatic scheduling of tests. It's aimed at the enterprise. It can manage multiple test nodes simultaneously within a test farm or distributed environment.
Phoromatic Tracker
Phoromatic Tracker is an extension of Phoromatic that provides a public interface into test farms. Currently, its reference implementations autonomously monitor the performance of the Linux kernel, Fedora Rawhide, and Ubuntu on a daily basis.
PTS Desktop Live
PTS Desktop Live was a stripped-down x86-64 Linux distribution, which included Phoronix Test Suite 2.4. It was designed for testing/benchmarking computers from a LiveDVD / LiveUSB environment.
Phodevi
Phodevi (Phoronix Device Interface) is a library that provides a clean, stable, platform-independent API for accessing software and hardware information.
PCQS
Phoronix Certification & Qualification Suite (PCQS) is a reference specification for the Phoronix Test Suite.
Phoronix website
Phoronix is a technology website that offers information on the development of the Linux kernel, product reviews, interviews, and news regarding free and open-source software, gathered by monitoring the Linux kernel mailing list and through interviews.
Phoronix was started in June 2004 by Michael Larabel, who currently serves as the owner and editor-in-chief.
History
Founded on June 5, 2004, Phoronix started as a website with a handful of hardware reviews and guides, moving to articles covering operating systems based on Linux and open-source software such as Ubuntu, Fedora, SUSE, and Mozilla (Firefox/Thunderbird) around the start of 2005. Phoronix focuses on benchmarking hardware running Linux, with a slant toward graphics articles that monitor and compare free and open-source graphics device drivers and Mesa 3D with AMD's and Nvidia's proprietary graphics device drivers. In June 2006, the website added forums to accompany news content. On April 20, 2007, Phoronix redesigned its website and began publishing Solaris hardware reviews and news in addition to Linux content.
Other technical publications, such as CNET News, have cited Phoronix benchmarks.
Open Benchmarking
OpenBenchmarking.org is a web-based service created to work with the Phoronix Test Suite. It is a collaborative platform that allows users to share their hardware and software benchmarks through an organized online interface.
It is primarily used for performance benchmarking and testing hardware/software performance, typically in the context of Linux-based systems (unlike SoapUI, which is used for testing web services).
Release history
On June 5, 2008, Phoronix Test Suite 1.0 was released under the codename Trondheim. This 1.0 release was made up of 57 test profiles and 23 test suites.
On September 3, 2008, Phoronix Test Suite 1.2 was released with support for the OpenSolaris operating system, a module framework accompanied by tests focusing upon new areas, and new test profiles.
Phoronix Test Suite 1.8 includes a graphical user interface (GUI) using GTK+ written using the PHP-GTK bindings.
Version 3.4 includes the MATISK benchmarking module and initial support for the GNU Hurd.
See also
Inquisitor
Stresslinux
References
External links
2008 software
Benchmarking software for Linux
Benchmarks (computing)
Free software programmed in PHP
Drum stick

A drum stick (or drumstick) is a type of percussion mallet used particularly for playing snare drum, drum kit, and some other percussion instruments, and particularly for playing unpitched percussion.
Specialized beaters used on some other percussion instruments, such as the metal beater used with a triangle or the mallets used with tuned percussion (such as xylophone and timpani), are not normally referred to as drumsticks. Drumsticks generally have all of the following characteristics:
They are normally supplied and used in pairs.
They may be used to play at least some sort of drum (as well as other instruments).
They are normally used only for unpitched percussion.
Construction
The archetypical drumstick is turned from a single piece of wood, most commonly of hickory, less commonly of maple, and least commonly but still in significant numbers, of oak. Drumsticks of the traditional form are also made from metal, carbon fibre, and other modern materials.
The tip or bead is the part most often used to strike the instrument. Originally and still commonly of the same piece of wood as the rest of the stick, sticks with nylon tips have also been available since 1958. In the 1970s, an acetal tip was introduced.
Tips of whatever material are of various shapes, including acorn, barrel, oval, teardrop, pointed and round.
The shoulder of the stick is the part that tapers towards the tip, and is normally slightly convex. It is often used for playing the bell of a cymbal. It can also be used to produce a cymbal crash when applied with a glancing motion to the bow or edge of a cymbal, and for playing ride patterns on china, swish, and pang cymbals.
The shaft is the body of the stick, and is cylindrical for most applications including drum kit and orchestral work. It is used for playing cross stick and applied in a glancing motion to the rim of a cymbal for the loudest cymbal crashes.
The butt is the opposite end of the stick to the tip. Some rock and metal musicians use it rather than the tip.
Conventional numbering
Plain wooden drumsticks are most commonly described using a number to describe the weight and diameter of the stick followed by one or more letters to describe the tip. For example, a 7A is a common jazz stick with a wooden tip, while a 7AN is the same weight of stick with a nylon tip, and a 7B is a wooden tip but with a different tip profile, shorter and rounder than a 7A. A 5A is a common wood tipped rock stick, heavier than a 7A but with a similar profile. The numbers are most commonly odd but even numbers are used occasionally, in the range 2 (heaviest) to 9 (lightest).
The exact meanings of both numbers and letters differ from manufacturer to manufacturer, and some sticks are not described using this system at all, just being known as jazz (typically a 7A, 8A or 8D) or heavy rock (typically a 5B) for example. The most general purpose stick is a 5A. However, there is no one stick for any particular style of music.
Grip
There are two main ways of holding drumsticks:
Traditional grip, in which right and left hands use different grips.
Matched grip, in which the two hand grips are mirror-image.
Traditional grip was developed to conveniently play a snare drum while riding a horse, and was documented by Sanford A. Moeller in The Art of Snare Drumming (1925). It was the standard grip for kit drummers in the first half of the twentieth century and remains popular.
Matched grips became popular towards the middle of the twentieth century, threatening to displace the traditional grip for kit drumming. However the traditional grip has since made a comeback, and both types of grip are still used and promoted by leading drummers and teachers.
Popular brands
Pro-Mark
Vic Firth
Vater Percussion
Regal Tip
Tama Drums
Collision Drumsticks
See also
Percussion mallet
References
Human–machine interaction
Stick
Drumming
Musical instrument parts and accessories
Percussion instrument beaters
Jacquet module

In mathematics, the Jacquet module is a module used in the study of automorphic representations. The Jacquet functor is the functor that sends a linear representation to its Jacquet module. They are both named after Hervé Jacquet.
Definition
The Jacquet module J(V) of a representation (π,V) of a group N is the space of co-invariants of N; or in other words the largest quotient of V on which N acts trivially, or the zeroth homology group H0(N,V). In other words, it is the quotient V/VN where VN is the subspace of V generated by elements of the form π(n)v - v for all n in N and all v in V.
The Jacquet functor J is the functor taking V to its Jacquet module J(V).
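As a toy illustration of the definition (a finite group acting on a real vector space, far from the automorphic setting of interest, with all names invented for the example), the coinvariants can be computed by linear algebra:

```python
import numpy as np

# Toy coinvariants computation for a finite group: N = {e, s} acts on
# V = R^2 with s swapping the two coordinates.  V_N is spanned by the
# vectors pi(n)v - v, and the Jacquet module is J(V) = V / V_N.
S = np.array([[0.0, 1.0], [1.0, 0.0]])   # pi(s), the coordinate swap
I = np.eye(2)

# Columns of pi(s) - 1 are pi(s)e_j - e_j, which generate V_N here.
gens = S - I
dim_VN = np.linalg.matrix_rank(gens)
dim_J = 2 - dim_VN
print(f"dim V_N = {dim_VN}, dim J(V) = {dim_J}")   # 1 and 1
```

The quotient is one-dimensional: the swap identifies the two coordinates, leaving only their sum.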
Applications
Jacquet modules are used to classify admissible irreducible representations of a reductive algebraic group G over a local field, where N is the unipotent radical of a parabolic subgroup of G. In the case of p-adic groups, they have been studied extensively.
For the general linear group GL(2), the Jacquet module of an admissible irreducible representation has dimension at most two. If the dimension is zero, then the representation is called a supercuspidal representation. If the dimension is one, then the representation is a special representation. If the dimension is two, then the representation is a principal series representation.
References
Representation theory
Pieter Maarten de Wolff

Pieter Maarten de Wolff (23 July 1919 – 10 April 1998), or Pim de Wolff, was a Dutch physicist, crystallographer, and professor at Delft University of Technology. He was one of the founders of N-dimensional crystallography together with Ted Janssen and Aloysio Janner.
Education and career
De Wolff was born in the Dutch East Indies as the youngest of four children. His father was Maarten de Wolff, a civil engineer, and his mother was Hermine Elizabeth van Vliet. From 1929 the family lived in Medan, Sumatra, where he went to school. In 1932 they returned to the Netherlands and he attended the Hogere Burgerschool in The Hague.
He began studying physics at Delft University of Technology in 1936, where he investigated X-ray powder diffraction in his graduation research. He obtained his engineering degree in 1941, during Nazi Germany's occupation of the Netherlands, just before the Nazis closed the college in Delft. Unable to continue his studies, de Wolff, through the intercession of Henk Dorgelo, went to work at the Technical Physics Department of the Netherlands Organisation for Applied Scientific Research. In 1951 de Wolff obtained his PhD under Dorgelo with a thesis entitled Contributions to the theory and practice of quantitative determinations by the X-ray powder diffraction method.
In 1958 de Wolff became professor of theoretical and applied physics at the Delft University of Technology, a position he held until his retirement in 1984. He was chairman of the Applied Physics Department (1971–1973) and of the Physics Practicum department (1974–1980). He also chaired the Committee on the Nomenclature of Symmetry at the International Union of Crystallography.
Honors and awards
De Wolff's honors include receiving the Gilles Holst Medal of the Royal Netherlands Academy of Arts and Sciences for the Guinier–de Wolff Camera in 1976, the Abraham Gottlob Werner Medal of the German Mineralogical Society in 1986, and a distinguished fellowship award from the International Center for Diffraction Data in 1994. He received the Gregori Aminoff Prize from the Royal Swedish Academy of Sciences in 1998. He was too ill to go to Stockholm to receive the medal from the Swedish king. He died ten days later.
References
1919 births
1998 deaths
Dutch physicists
Crystallographers
Delft University of Technology alumni
Academic staff of the Delft University of Technology
Members of the Royal Netherlands Academy of Arts and Sciences
People from Bandung
Nexus Q

Nexus Q is a digital media player developed by Google. Unveiled at the Google I/O developers' conference on June 27, 2012, the device was expected to be released to the public in the United States shortly thereafter for US$300. The Nexus Q was designed to leverage Google's online media offerings, such as Google Play Music, Google Play Movies & TV, and YouTube, to provide a "shared" experience. Users could stream content from the supported services to a connected television, or speakers connected to an integrated amplifier, using their Android device and the services' respective apps as a remote control for queueing content and controlling playback.
The Nexus Q received mixed reviews from critics following its unveiling. While its unique spherical design was praised, the Nexus Q was criticized for its lack of functionality in comparison to similar devices such as Apple TV, including a lack of support for third-party content services, no support for streaming content directly from other devices using the DLNA standard, as well as other software issues that affected the usability of the device. The unclear market positioning of the Nexus Q was also criticized, as it carried a significantly higher price than competing media players with wider capabilities; The New York Times technology columnist David Pogue described the device as being 'wildly overbuilt' for its limited functions.
The Nexus Q was given away at no cost to attendees of Google I/O, but the product's consumer launch was indefinitely postponed the following month, purportedly to collect additional feedback. Those who had pre-ordered the Nexus Q following its unveiling received the device at no cost. The Nexus Q was quietly shelved in January 2013, and support for the device in the Google Play apps was phased out beginning in May 2013. Some of the Nexus Q's concepts were repurposed for a more-successful device known as Chromecast, which similarly allows users to wirelessly queue content for playback using functions found in supported apps, but is designed as a smaller HDMI dongle with support for third-party services.
Development
An early iteration of the Nexus Q was first demoed at Google I/O in 2011 under the name "Project Tungsten"; the device could stream music wirelessly from another Android device to attached speakers. It served as a component of a home automation concept known as "Android@Home", which aimed to provide an Android-based framework for connected devices within a home. Following the launch of the Google Music service in November 2011, a decision was made to develop a hardware device to serve as a tie-in—a project that eventually resulted in the Nexus Q. Google engineering director Joe Britt explained that the device was designed to make music a "social, shared experience", encouraging real-world interaction between its users. He also felt that there had been "a generation of people who’ve grown up with white earbuds", who had thus not experienced the difference of music played on speakers.
The Nexus Q was the first hardware product developed entirely in-house by Google, and was manufactured in a U.S.-based factory—which allowed Google engineers to inspect the devices during their production.
Hardware and software
The Nexus Q takes the form of a sphere with a flat base; Google designer Mike Simonian explained that its form factor was meant to represent a device that pointed towards "the cloud", and "people all around" to reflect its communal nature. The sphere is divided into two halves; the top half can be rotated to adjust the audio volume being output over attached speakers or to other home theater equipment, and tapped to mute. In between the two halves is a ring of 32 LEDs; these lights serve as a music visualizer that animate in time to music, and can be set to one of five different color schemes. The rear of the device contains a power connector, ethernet jack, micro HDMI and optical audio outputs, banana plugs for connecting speakers to the device's built-in 25-watt "stereo-grade" amplifier, and a micro USB connector meant to "connect future accessories and encourage general hack-ability". The Nexus Q includes an OMAP4 processor, 1 GB of RAM, and 16 GB of storage used for caching of streamed content. It also supports near-field communication and Bluetooth for pairing devices and initial setup.
The Nexus Q runs a stripped-down version of Android 4.0 "Ice Cream Sandwich", and is controlled solely via supported apps on Android devices running Android 4.1 "Jelly Bean". Google announced plans to support older versions of Android following the device's official launch. Media could be queued to play on the device using a "Play to" button shown within the Google Play Music, Google Play Movies & TV, and YouTube apps. Content is streamed directly from the services by the Nexus Q, with the Android device used like a remote control. For music, multiple users could collaboratively queue songs from Google Play Music onto a playlist. A management app could be used to adjust Nexus Q hardware settings. Nexus Q did not support any third-party media services, nor could media be stored to the device, or streamed to it using the standardized DLNA protocol.
Reception
Most criticism of the Nexus Q centered on its relatively high price in comparison to contemporary media streaming devices and set-top boxes, such as Apple TV and Roku, especially considering its lack of features when compared to these devices. The New York Times technology columnist David Pogue described the Nexus Q as being a "baffling" device, stating that it was "wildly overbuilt for its incredibly limited functions, and far too expensive", and arguing that it would probably appeal only to people "whose living rooms are dominated by bowling ball collections." Engadget was similarly mixed, arguing that while it was a "sophisticated, beautiful device with such a fine-grained degree of engineering you can't help but respect it", and that its amplifier was capable of producing "very clean sound", the Nexus Q was a "high-price novelty" that lacked support for DLNA, lossless audio, and playback of content from external or internal storage among other features.
Discontinuation
Nexus Q units were distributed as a gift to attendees of Google I/O 2012, with online pre-orders to the public opening at a price of US$300. On July 31, 2012, Google announced that it would delay the official launch of the Nexus Q in order to address early feedback, and that all customers who pre-ordered the device would receive it for free. By January 2013, the device was no longer listed for sale on the Google Play website, implying that its official release had been cancelled indefinitely. Google began to discontinue software support for the Nexus Q in May 2013, beginning with an update to the Google Play Music app, and a similar update to Google Play Movies & TV in June.
The Nexus Q has also been the subject of third-party development and experimentation; XDA-developers users discovered means for side-loading Android applications onto the Nexus Q to expand its functionality. One user demonstrated the ability to use a traditional Android home screen with keyboard and mouse input, as well as the official Netflix app. In December 2013, an unofficial build of Android 4.4 "KitKat" based on CyanogenMod code was also released for the Nexus Q, although it was unstable and lacked reliable Wi-Fi support.
The Nexus Q received a de facto successor in July 2013 with the unveiling of Chromecast, a streaming device that similarly allows users to queue the playback of remote content ("cast") via a mobile device. Chromecast is contrasted by its compact HDMI dongle form factor, the availability of an SDK that allows third-party services to integrate with the device, and its considerably lower price in comparison to the Nexus Q. In late 2014, Google and Asus released a second Nexus-branded digital media player known as the Nexus Player, which served as a launch device for the digital media player and smart TV platform Android TV.
See also
Comparison of set-top boxes
Google TV
Chromebit
References
Further reading
Gross, Doug, "Google's new Nexus Q: Made in the U.S.A.", CNN, June 28, 2012
Android (operating system) devices
Digital media players
Google Nexus
Networking hardware
Products introduced in 2012
Streaming media systems
Vaporware
Isotricha
Isotricha is a genus of protozoa (single-celled organisms) which are commensals of the rumen of ruminant animals. They are approximately long.
Species include:
Isotricha intestinalis Stein 1858
Isotricha prostoma Stein 1858
References
Biological interactions
Ciliate genera
Litostomatea
Symbiosis
Hydnoroideae
Hydnoroideae is a subfamily of parasitic flowering plants in the order Piperales. Traditionally, and as recently as the APG III system, it was given family rank under the name Hydnoraceae. It is now submerged in the Aristolochiaceae. It contains two genera, Hydnora and Prosopanche:
Prosopanche is native to Central and South America;
Hydnora can be found in semi-arid to desert regions of Africa, the Arabian Peninsula, and Madagascar.
Members of this subfamily have been described as the strangest plants in the world.
Description
The most striking aspect of the Hydnoroideae is probably the complete absence of leaves (not even in modified forms such as scales). Some species are mildly thermogenic (capable of producing heat), presumably as a means of dispersing their scent.
Ecology
The plants are pollinated by insects such as dermestid beetles or carrion flies, attracted by the fetid odor of the flowers. In Hydnora africana there are bait bodies with a strong smell, whereas in Hydnora johannis the scent comes from a region at the tip of the perianth called a cucullus. The flowers may be above ground or underground. The fruits have edible, fragrant pulp, which attracts animals such as porcupines, monkeys, jackals, rhinoceros, and armadillos, as well as humans. The host plants, in the case of Hydnora, generally are in the family Euphorbiaceae and the genus Acacia. Hosts for Prosopanche include various species of Prosopis and other legumes.
Biochemistry
The plants contain high levels of tannins.
Genomics
The complete plastid genome sequence of one species of Hydnoroideae, Hydnora visseri, has been determined. As compared to the chloroplast genome of its closest photosynthetic relatives, the plastome of Hydnora visseri shows extreme reduction in both size (27,233 bp) and gene content (24 genes appear to be functional). The plastome of Hydnora visseri is therefore one of the smallest among flowering plants.
Classification
Like many parasitic plants, the affinities with non-parasitic plants are not obvious, and 19th and 20th century botanists proposed a variety of placements for the taxon. Molecular data places them in the Piperales, and nested within the Aristolochiaceae and allied with the Piperaceae or Saururaceae.
References
Aristolochiaceae
Plant subfamilies
Parasitic plants
Threema
Threema is a paid cross-platform encrypted instant messaging app developed by Threema GmbH in Switzerland and launched in 2012. The service operates on a decentralized architecture and offers end-to-end encryption. Users can make voice and video calls, send photos, files, and voice notes, share locations, and make groups. Unlike many other popular secure messaging apps, Threema does not require phone numbers or email addresses for registration, only a one-time purchase that can be paid via an app store or anonymously with Bitcoin or cash.
Threema is available on iOS and Android, and has clients for Windows, macOS, Linux, and HarmonyOS; it can also be accessed via a web browser, though the web client requires the mobile app to function.
Features
The service claims to be based on privacy by design principles, not requiring a phone number or other personally identifiable information. This helps anonymize users to a degree.
Threema uses a user ID, created after the initial app launch by a random generator, instead of requiring a linked email address or phone number to send messages. It is possible to find other users by phone number or email address if the user allows the app to synchronize their address book. Linking a phone number or email address to a Threema ID is optional. Hence, the service can be used anonymously. Users can verify the identity of their Threema contacts by scanning their QR code when they meet physically. The QR code contains the public key of the user, which is cryptographically tied to the ID and will not change during the lifetime of the identity. Using this strong authentication feature, users can make sure they have the correct public key from their chat partners, which provides additional security against a man-in-the-middle attack. Threema knows three levels of authentication (trust levels of the contact's identity). The verification level of each contact is displayed in the Threema application as dots next to the corresponding contact.
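The out-of-band verification step described above can be sketched in a few lines (illustrative only; the fingerprint format and key values here are placeholders, not Threema's actual scheme): the client compares a locally computed hash of the public key received from the server against the value obtained by scanning the contact's QR code in person.

```python
import hashlib
import secrets

def fingerprint(public_key: bytes) -> str:
    """Short hex fingerprint of a public key (illustrative, not Threema's format)."""
    return hashlib.sha256(public_key).hexdigest()[:16]

# A contact's 32-byte public key as received from the server.
server_supplied_key = secrets.token_bytes(32)

# The same key as read from the contact's QR code in person.
qr_scanned_key = server_supplied_key

# Highest trust level: the fingerprints match, so the server (or a
# man-in-the-middle) cannot have substituted a different key.
assert fingerprint(server_supplied_key) == fingerprint(qr_scanned_key)

# If an attacker had swapped in their own key, verification would fail.
attacker_key = secrets.token_bytes(32)
assert fingerprint(attacker_key) != fingerprint(server_supplied_key)
```

Because the QR code is scanned face-to-face, the comparison happens over a channel the server never touches, which is what defeats a man-in-the-middle key substitution.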
In addition to text messaging, users can make voice and video calls and send multimedia, locations, voice messages, and files of any type (up to 50 MB per file). A web app version, Threema Web, can be used on desktop devices, but only as long as the phone with the user's Threema installation is online. There is a beta for iOS users in which the phone can be taken offline while the desktop app remains usable.
Beyond one-on-one chats, Threema offers group chats of up to 256 people. It is also possible to create polls in personal or group chats.
Software
Threema is developed by the Swiss company Threema GmbH. The servers are in Switzerland and the development is based in Pfäffikon SZ. As of May 2021, Threema had 10 million users and the business version, Threema Work, was used by 2 million users across 5,000 companies and organizations.
At the end of July 2021, Threema introduced the ability for companies to host the messenger on their own servers, primarily intended for organizations with particularly high privacy requirements.
Clients
With Threema Web, a client for web browsers, Threema can be used from other devices like desktop computers, though only as long as the original device is online.
Threema optionally supports Android Wear smartwatches and Android Auto. Threema launched support for end-to-end encrypted video calls on August 10, 2020. Calls are person-to-person; group calls are unavailable.
The application does not support self-deleting messages, i.e. messages that disappear after a period defined by the chat partners.
The application can prevent screenshots of conversations when configured to do so.
Architecture
The entire communication via Threema is end-to-end encrypted. During the initial setup, the application generates a key pair and sends the public key to the server while keeping the private key on the user's device. The application then encrypts all messages and files that are sent to other Threema users with their respective public keys. Once a message is delivered successfully, it is immediately deleted from the servers.
The encryption process used by Threema is based on the open-source NaCl library. Threema uses asymmetric ECC-based encryption, with 256-bit strength. Threema offers a "Validation Logging" feature that makes it possible to confirm that messages are end-to-end encrypted using the NaCl Networking and Cryptography library. In August 2015, Threema was subjected to an external security audit. Researchers from cnlab confirmed that Threema allows secure end-to-end encryption, and claimed that they were unable to identify any weaknesses in the implementation. Cnlab researchers also confirmed that Threema provides anonymity to its users and handles contacts and other user data as advertised.
History
Threema was founded in December 2012 by Manuel Kasper. The company was initially called Kasper Systems GmbH. Martin Blatter and Silvan Engeler joined Kasper to develop an Android application that was released in early 2013.
In the summer of 2013, the Snowden leaks helped generate interest in Threema, boosting user numbers into the hundreds of thousands. When Facebook took over WhatsApp in February 2014, Threema gained 200,000 new users, doubling its user base in 24 hours. Around 80 percent of those new users came from Germany. By March 2014, Threema had 1.2 million users.
In Spring 2014, operations were transferred to the newly created Threema GmbH. Martin Blatter took over the position of CEO.
In December 2014, Apple listed Threema as the most-sold app of 2014 at the German App Store.
In 2020, Threema expanded with video calls, plans to open-source its client-side apps and introduce reproducible builds of them, as well as introduce Threema Education, a variation of Threema intended for education institutions.
During the second week of 2021, Threema saw a quadrupling of daily downloads spurred on by controversial privacy changes in the WhatsApp messaging service. A spokesperson for the company also confirmed that Threema had risen to the top of the charts for paid applications in Germany, Switzerland, and Austria. This trend continued into the third week of the year, with the head of Marketing & Sales confirming that downloads had increased to ten times the regular amount, leading to "hundreds of thousands of new users each day".
In October 2022, researchers from ETH Zurich reported multiple vulnerabilities affecting Threema's security against network, server and client-based attacks. A new release fixing these issues was released in November 2022 and the vulnerabilities were announced publicly in January 2023.
In September 2024, CEO Martin Blatter and the remaining founders and original developers left the company.
Related products
Threema Work: On May 25, 2016, Threema Work, a corporate version of Threema, was released. Threema Work offers extended administration and deployment capabilities. Threema Work is based on a yearly subscription model.
Threema Gateway: On March 20, 2015, Threema released a gateway for companies. Similar to an SMS gateway, businesses can use it to send messages to their users who have Threema installed. The code for the Threema Gateway SDK is open for developers and available on GitHub.
Threema Broadcast: On August 9, 2018, Threema released Threema Broadcast, a tool for top-down communication. Similar to electronic newsletters sent by email, Threema messages can be sent to any number of feed subscribers, and Threema Broadcast also allows the creation of chatbots.
Threema Education: On September 10, 2020, Threema released Threema Education, a version of its messenger designed for education institutions. The app integrates Threema Broadcast and requires a one-time payment for each device used. It is intended for use by teachers, students, and parents.
Threema OnPrem: On July 27, 2021, Threema released Threema OnPrem, a version of the messenger which could be hosted on a company's own servers for maximum security purposes.
Privacy
Since Threema's servers are in Switzerland, they are subject to the Swiss federal law on data protection. The data center is ISO/IEC 27001-certified. Linking a phone number and/or email address to a Threema ID is optional; when doing so, only checksum values (SHA-256 HMAC with a static key) of the email address and/or phone number are sent to the server. Due to the small number of possible digit combinations of a telephone number, the phone number associated with a checksum could be determined by brute force. The transmitted data is TLS-secured. The address book data is kept only in the volatile memory of the server and is deleted immediately after synchronizing contacts. If a user chooses to link a phone number or email address with their Threema ID, they can remove the phone number or email address at any time. Should a user ever lose their device (and their private key), they can revoke their Threema ID if a revocation password for that ID has been set.
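The checksum scheme described above, and why it offers only limited protection for phone numbers, can be sketched as follows (the static key and number format here are placeholders, not Threema's actual values):

```python
import hashlib
import hmac

# Placeholder static key -- NOT Threema's real key.
STATIC_KEY = b"illustrative-static-key"

def checksum(identifier: str) -> str:
    """SHA-256 HMAC of an email address or phone number, as sent to the server."""
    return hmac.new(STATIC_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The client uploads only the checksum, never the phone number itself.
uploaded = checksum("+41790001234")

# Because the space of possible phone numbers is small, an attacker who
# knows the static key can recover the number by brute force:
def brute_force(target, prefix="+4179000"):
    for n in range(10000):                     # enumerate the last four digits
        candidate = f"{prefix}{n:04d}"
        if checksum(candidate) == target:
            return candidate
    return None

assert brute_force(uploaded) == "+41790001234"
```

This is why hashing contact data is weaker for phone numbers than for email addresses: the candidate space for a national number range is small enough to enumerate exhaustively, whereas arbitrary email addresses are not.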
Groups are solely managed on users’ devices and group messages are sent to each recipient as an individual message, encrypted with the respective public key. Thus, group compositions are not directly exposed to the server.
Data (including media files) stored on the users’ devices is encrypted with AES 256. On Android, it can be additionally protected by a passphrase.
Since 2016, Threema GmbH publishes a transparency report where public authority inquiries are disclosed.
On March 9, 2017, Threema was listed in the "Register of organizers of information dissemination in the Internet" operated by the Federal Service for Supervision of Communications, Information Technology and Mass Media of the Russian Federation.
In a response, a Threema spokesperson publicly stated: "We operate under Swiss law and are neither allowed nor willing to provide any information about our users to foreign authorities."
On April 29, 2021, Threema won a significant case at the Federal Supreme Court of Switzerland against the Swiss Federal Department of Police and Justice, who wished to classify the company as a telecommunications provider. Had they lost the case, Threema would have had a legal requirement to identify users and send information about their users to law enforcement.
Starting January 2022, Swiss Armed Forces suggested that the troops should use Threema instead of WhatsApp, Telegram and Signal, citing Threema being Swiss-based without servers in the United States and thus not subject to the CLOUD Act, also promising that soldiers would be reimbursed for the cost.
In 2024, the FBI, using direct access to a mobile phone with the app installed, was able to obtain Threema messages and lay charges against two suspects.
Reception
In February 2014, German consumer organisation Stiftung Warentest evaluated several data-protection aspects of Threema, WhatsApp, Telegram, BlackBerry Messenger and Line. It considered the security of the data transmission between clients, the services' terms of use, the transparency of the service providers, the availability of the source code, and the apps' overall availability. Threema was the only app rated as 'non-critical' in relation to data and privacy protection, but lost marks due to its closed-source nature, though this has changed for its frontend clients since the end of 2020.
Along with Cryptocat and Surespot, Threema was ranked first in a study evaluating the security and usability of instant messaging encryption software, conducted by the German PSW Group in June 2014.
Threema had a score of 6 out of 7 points on the Electronic Frontier Foundation's now-withdrawn and outdated "Secure Messaging Scorecard". It received points for having communications encrypted in transit, having communications encrypted with keys the provider doesn't have access to (i.e. having end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having past communications secure if the keys are stolen (i.e. implementing forward secrecy), having its security design well-documented and having completed an independent security audit. It lost a point because its source code was not open to independent review (i.e. it was not open-source, though in late 2020 its frontend apps were open-sourced, leaving only its server component proprietary).
See also
Comparison of instant messaging clients
References
External links
Introduction to Threema
Alleged vulnerabilities
Instant messaging clients
2012 software
Cryptographic software
Internet privacy software
IOS software
Android (operating system) software
Windows Phone software
Swiss brands
Women in medicine
The presence of women in medicine, particularly in the practicing fields of surgery and as physicians, has been traced to the earliest of history. Women have historically had lower participation levels in medical fields compared to men, with participation rates varying by race, socioeconomic status, and geography.
Women's informal practice of medicine in roles such as caregivers, or as allied health professionals, has been widespread. Since the start of the 20th century, most countries of the world provide women with access to medical education. Not all countries ensure equal employment opportunities, and gender equality has yet to be achieved within medical specialties and around the world.
History
Ancient medicine
The involvement of women in the field of medicine has been recorded in several early civilizations. An Egyptian of the Old Kingdom of Egypt, Peseshet, described in an inscription as "lady overseer of the female physicians", is the earliest woman named in the history of science. Ubartum lived around 2050 BC in Mesopotamia and came from a family of several physicians. Agamede was cited by Homer as a healer in ancient Greece before the Trojan War. Agnodice was the first female physician to practice legally in 4th century BC Athens. Metrodora was a physician and generally regarded as the first female medical writer. Her book, On the Diseases and Cures of Women, was the oldest medical book written by a female and was referenced by many other female physicians. She credited much of her writings to the ideologies of Hippocrates.
Medieval Europe
During the Middle Ages, convents were a centralized place of education for women, and some of these communities provided opportunities for women to contribute to scholarly research. An example is the German abbess Hildegard of Bingen, whose prolific writings include treatments of various scientific subjects, including medicine, botany and natural history (–58). She is considered Germany's first female physician.
Women in the Middle Ages participated in healing techniques and several capacities in medicine and medical education. Women occupied select ranks of medical personnel during the period. They worked as herbalists, midwives, surgeons, barber-surgeons, nurses, and traditional empirics. Women healers treated most patients, not limiting themselves to treating solely women. The names of 24 women described as surgeons in Naples, Italy between 1273 and 1410 have been recorded, and references have been found to 15 women practitioners, most of them Jewish and none described as midwives, in Frankfurt, Germany between 1387 and 1497. The earliest known English women doctors, Solicita and Matilda Ford, date to the late twelfth century; they were referred to as medica, a term for trained physicians.
Women also engaged in midwifery and healing arts without having their activities recorded in written records, and practiced in rural areas or where there was little access to medical care. Society in the Middle Ages limited women's role as physician. Once universities established faculties of medicine during the thirteenth century, women were excluded from advanced medical education. Licensure began to require clerical vows for which women were ineligible, and healing as a profession became male-dominated.
In many occasions, women had to fight against accusation of illegal practice done by males, putting into question their motives. If they were not accused of malpractice, then women were considered "witches" by both clerical and civil authorities. Surgeons and barber-surgeons were often organized into guilds, which could hold out longer against the pressures of licensure. Like other guilds, a number of the barber-surgeon guilds allowed the daughters and wives of their members to take up membership in the guild, generally after the man's death. Katherine "la surgiene" of London, daughter of Thomas the surgeon and sister of William the Surgeon, belonged to a guild in 1286. Documentation of female members in the guilds of Lincoln, Norwich, Dublin and York continue until late in the period.
Midwives, those who assisted pregnant women through childbirth and some aftercare, included only women. Midwives constituted roughly one third of female medical practitioners. Men did not involve themselves in women's medical care; women did not involve themselves in men's health care. The southern Italian coastal town of Salerno was a center of medical education and practice in the 12th century. In Salerno the physician Trota of Salerno compiled a number of her medical practices in several written collections. One work on women's medicine that was associated with her, the ('On Treatments for Women') formed the core of what came to be known as the Trotula ensemble, a compendium of three texts that circulated throughout medieval Europe. Trota herself gained a reputation that spread as far as France and England. There are also references in the writings of other Salernitan physicians to the ('Salernitan women'), which give some idea of local empirical practices.
Dorotea Bucca, an Italian physician, was chair of philosophy and medicine at the University of Bologna for over forty years from 1390. Other Italian women whose contributions in medicine have been recorded include Abella, Jacqueline Felice de Almania, Alessandra Giliani, Rebecca de Guarna, Margarita, Mercuriade (14th century), Constance Calenda, Clarice di Durisio (15th century), Constanza, Maria Incarnata and Thomasia de Mattio.
Medieval Islamic world
For the medieval Islamic world, little information is known about female medical practitioners although it is likely that women were regularly involved in medical practice in some capacity. Male medical writers refer to the presence of female practitioners (a ṭabība) in describing certain procedures or situations. The late-10th to early-11th century Andalusi physician and surgeon al-Zahrawi wrote that certain medical procedures were difficult for male doctors practicing on female patients because of the need to touch the genitalia. The male practitioner was required to either find a female doctor who could perform the procedure, or a eunuch physician, or a midwife who took instruction from the male surgeon. The existence of female practitioners can be inferred, albeit not explicitly, through direct evidence. Midwives played a prominent role in the delivery of women's healthcare. For these practitioners, there is more detailed information, both in terms of the prestige of their craft (ibn Khaldun calls it a noble craft, "something necessary in civilization") and in terms of biographical information on historic women. To date, no known medical treatise written by a woman in the medieval Islamic world has been identified.
Western medicine in China
Traditional Chinese medicine based on the use of herbal medicine, acupuncture, massage and other forms of therapy has been practiced in China for thousands of years. Western medicine was introduced to China in the 19th century, mainly by medical missionaries sent from various Christian mission organizations, such as the London Missionary Society (Britain), the Methodist Church (Britain) and the Presbyterian Church (US). Benjamin Hobson (1816–1873), a medical missionary sent by the London Missionary Society in 1839, set up the Wai Ai Clinic () in Guangzhou, China. The Hong Kong College of Medicine for Chinese () was founded in 1887 by the London Missionary Society, with its first graduate (in 1892) being Sun Yat-sen ().
Due to the social custom that men and women should not be near to one another, Chinese women were reluctant to be treated by Western male doctors. This resulted in a need for female doctors. One of these was Sigourney Trask of the Methodist Episcopal Church, who set up a hospital in Fuzhou during the mid-19th century. Trask also arranged for a local girl, Hü King Eng, to study medicine at Ohio Wesleyan Female College, with the intention that Hü would return to practise western medicine in Fuzhou. After graduation, Hü became the resident physician at Fuzhou's Woolston Memorial Hospital in 1899 and trained several female physicians. Another female medical missionary, Mary H. Fulton (1854–1927), was sent by the Foreign Missions Board of the Presbyterian Church (US) to found the first medical college for women in China. Known as the Hackett Medical College for Women (), this college was located in Guangzhou, China, and was enabled by a large donation from Edward A. K. Hackett (1851–1916) of Indiana. The college was dedicated in 1902 and offered a four-year curriculum. By 1915, there were more than 60 students, mostly in residence. Most students became Christians, due to the influence of Fulton. The college was aimed at the spreading of Christianity and modern medicine and the elevation of Chinese women's social status. The graduates of this college included Chau Lee-sun (, 1890–1979) and Wong Yuen-hing (), both of whom graduated in the late 1910s and then practiced medicine in the hospitals in Guangdong province.
Midwifery in 18th-century America
During this era, for the majority of American women, whether European or African American, childbirth was considered a female event where female friends, relatives, and the local midwife gathered to support the birthing mother. Midwives gained their knowledge through experience and apprenticeship. Out of the different occupations women took on around this time, midwifery was one of the highest-paying. In the 18th century, households tended to have an abundance of children, due in large part to hired help and diminished mortality rates. Despite the high chance of complications in labor, American midwife Martha Ballard, specifically, had high success rates in delivering healthy babies to healthy mothers.
Women's health movement, 1970s
The 1970s marked an increase of women entering and graduating from medical school in the United States. From 1930 to 1970, a period of 40 years, around 14,000 women graduated from medical school. From 1970 to 1980, a period of 10 years, over 20,000 women graduated from medical school. This increase of women in the medical field was due to both political and cultural changes. Two laws in the U.S. lifted restrictions for women in the medical field – Title IX of the Higher Education Act Amendments of 1972 and the Public Health Service Act of 1975, banning discrimination on grounds of gender. In November 1970, the Assembly of the Association of American Medical Colleges rallied for equal rights in the medical field.
Throughout the decade women's ideas about themselves and their relation to the medical field were shifting due to the women's feminist movement. A sharp increase of women in the medical field led to developments in doctor-patient relationships, changes in terminology and theory. One area of medical practice that was challenged and changed was gynecology. Author Wendy Kline noted that "to ensure that young brides were ready for the wedding night, [doctors] used the pelvic exam as a form of sex instruction."
With higher numbers of women enrolled in medical school, medical practices like gynecology were challenged and subsequently altered. In 1972, the University of Iowa Medical School instituted a new training program for pelvic and breast examinations. Students would act both as the doctor and the patient, allowing each student to understand the procedure and create a more gentle, respectful examination. With changes in ideologies and practices throughout the 70s, by 1980 over 75 schools had adopted this new method.
Along with women entering the medical field and feminist rights movement, came along the women's health movement which sought alternative methods of health care for women. This came through the creation of self-help books, most notably Our Bodies, Ourselves: A Book by and for Women. This book gave women a "manual" to help understand their body. It challenged hospital treatment, and doctors' practices. Aside from self-help books, many help centres were opened: birth centres run by midwives, safe abortion centres, and classes for educating women on their bodies, all with the aim of providing non-judgmental care for women. The women's health movement, along with women involved in the medical field, opened the doors for research and awareness for female illness like breast cancer and cervical cancer.
Scholars in the history of medicine had developed some study of women in the field—biographies of pioneering women physicians were common prior to the 1960s—and study of women in medicine took particular root with the advent of the women's movement in the 1960s, and in conjunction with the women's health movement.
Modern medicine
In 1540, Henry VIII of England granted the charter for the Company of Barber-Surgeons; while this led to the specialization of healthcare professions (i.e. surgeons and barbers), women were barred from professional practice. Women did continue to practice during this time without formal training or recognition in England and eventually North America for the next several centuries.
Women's participation in the medical professions was generally limited by legal and social practices during the decades while medicine was professionalizing. Women openly practiced medicine in the allied health professions (nursing, midwifery, etc.), and throughout the nineteenth and twentieth centuries, women made significant gains in access to medical education and medical work through much of the world. These gains were sometimes tempered by setbacks; for instance, Mary Roth Walsh documented a decline in women physicians in the US in the first half of the twentieth century, such that there were fewer women physicians in 1950 than there were in 1900. Through the latter half of the twentieth century, women made gains generally across the board. In the United States, for instance, women were 9% of total US medical school enrollment in 1969; this had increased to 20% in 1976. By 1985, women constituted 16% of practicing American physicians.
At the beginning of the 21st century in industrialized nations, women have made significant gains, but have yet to achieve parity throughout the medical profession. Women have achieved parity in medical school in some industrialized countries, since 2003 forming the majority of the United States medical school applicants. In 2007–2008, women accounted for 49% of medical school applicants and 48.3% of those accepted. According to the Association of American Medical Colleges (AAMC) 48.4% (8,396) of medical degrees awarded in the US in 2010–2011 were earned by women, an increase from 26.8% in 1982–1983. While more women are taking part in the medical field, a 2013–2014 study reported that there are significantly fewer women in leadership positions within the academic realm of medicine. This study found that women accounted for 16% of deans, 21% of the professors, and 38% of faculty, as compared to their male counterparts.
The practice of medicine remains disproportionately male overall. In industrialized nations, the recent gender parity among medical students has not yet translated into parity in practice. In many developing nations, neither medical school nor practice approaches gender parity. Moreover, there are skews within the medical profession: some medical specialties, such as surgery, are significantly male-dominated, while other specialties are significantly female-dominated, or are becoming so. For example, in the United States, female physicians outnumber male physicians in pediatrics and female residents outnumber male residents in family medicine, obstetrics and gynecology, pathology, and psychiatry. In several different areas of medicine (general practice, medical specialties, surgical specialties) and in various roles, medical professionals tend to overestimate women's true representation, and this correlates with a decreased willingness among men to support gender-based initiatives, impeding further progress towards gender parity.
Women continue to dominate in nursing. In 2000, 94.6% of registered nurses in the United States were women. In health care professions as a whole in the US, women numbered approximately 14.8 million, as of 2011.
Biomedical research and academic medical professions—i.e., faculty at medical schools—are also disproportionately male. Research on this issue, called the "leaky pipeline" by the National Institutes of Health and other researchers, shows that while women have achieved parity with men in entering graduate school, a variety of discriminatory factors cause them to drop out at each stage of the academic pipeline: graduate school, postdoctoral positions, faculty positions, achieving tenure, and, ultimately, receiving recognition for groundbreaking work.
Glass ceiling
The "glass ceiling" is a metaphor for the ill-defined obstacles that women and minorities face in the workplace. Female physicians of the late 19th century faced discrimination in many forms due to the prevailing Victorian-era attitude that the ideal woman be demure, display a gentle demeanor, act submissively, and enjoy a perceived form of power exercised over and from within the home. Medical degrees were difficult for women to earn, and once practicing, discrimination from landlords over medical office space left female physicians to set up their practices on "Scab Row" or in "bachelor's apartments."
The Journal of Women's Health surveyed physician mothers and their physician daughters to analyze the effect that discrimination and harassment have on the individual and her career. In this study, 84% of the physician mothers had graduated from medical school before 1970, the majority of them in the 1950s and 1960s. The authors stated that discrimination in the medical field persisted after Title VII anti-discrimination legislation took effect in 1965. This remained the case until 1970, when the National Organization for Women (NOW) filed a class-action lawsuit against all medical schools in the United States. By 1975, the number of women in medicine had nearly tripled, and it has continued to grow. By 2005, more than 25% of physicians and around 50% of medical school students were women. The increase of women in medicine also came with an increase of women identifying as a racial/ethnic minority, yet this population remains largely underrepresented in the medical field in comparison to the general population.
Within this study, 22% of physician mothers and 24% of physician daughters identified as an ethnic minority. These women reported instances of exclusion from career opportunities as a result of their race and gender. According to the article, women tend to have less confidence in their abilities as doctors, yet their performance is equivalent to that of their male counterparts. The study also commented on the impact of power dynamics within medical school, which is structured as a hierarchy that ultimately shapes the educational experience. Instances of sexual harassment contribute to the high attrition rates of women in STEM fields.
Competition between midwifery and obstetrics
A shift from female midwifery to male obstetrics occurred with the growth of organized medicine, such as the founding of the American Medical Association. Instead of assisting with labor only in emergencies, doctors took over the delivery of babies entirely, relegating midwifery to a secondary role. This is an example of the growing competition between male physicians and female midwives as obstetrics took hold. The education of women in midwifery was stunted by both physicians and public-health reformers, causing midwifery to be seen as an outdated practice. Societal roles also played a part in the decline of midwifery, because women were unable to obtain the education needed for licensing and, once married, were expected to embrace a domestic lifestyle. In 2018, there were 11,826 certified nurse midwives (CNMs) in the United States; in 2019 there were 42,720 active physicians in obstetrics and gynecology.
Outside of the United States, midwifery is still widely practiced, particularly in Africa. The first school of midwives in Africa was reportedly founded by Dr. Ernst Rodenwalt in Togo in 1912. In comparison, the Juba College of Nursing and Midwifery in South Sudan (a country that gained its independence in 2011) graduated its first class of students in 2013.
Women's contributions to medicine
Historical women's medical schools
When women were routinely forbidden from attending medical school, they sought to form their own medical schools.
New England Female Medical College, Boston (founded 1848)
Woman's Medical College of Pennsylvania (founded 1850 as Female Medical College of Pennsylvania)
London School of Medicine for Women (founded 1874 by Sophia Jex-Blake)
Edinburgh School of Medicine for Women (founded 1886 by Sophia Jex-Blake)
First Pavlov State Medical University of St. Petersburg (founded 1897 as Female Medical University)
Tokyo Women's Medical University (founded 1900 by Yoshioka Yayoi)
Hackett Medical College for Women, Guangzhou, China (founded 1902 by the Presbyterian Church (USA))
Historical hospitals with significant female involvement
Woman's Hospital of Philadelphia, founded in 1861, provided clinical experience for Woman's Medical College of Pennsylvania students
New England Hospital for Women and Children (now called Dimock Community Health Center), founded in 1862 by women doctors "for the exclusive use of women and children"
New Hospital for Women (founded in the 1870s by Elizabeth Garrett Anderson and run largely by women, for women)
South London Hospital for Women and Children (founded 1912 by Eleanor Davies-Colley and Maud Chadburn; closed 1984; employed an all-woman staff)
Pioneering women in early modern medicine
18th century
Madeleine-Françoise Calais (fl. 1740) was a pioneer who is referred to as the first female dentist in France.
Dorothea Erxleben (1715–1762) was the first female doctor in Germany and the first woman worldwide to be granted an MD by a university.
Salomée Halpir (1718 – after 1763) was a Polish medic and oculist who is often referred to as the first female doctor from the Grand Duchy of Lithuania.
19th century
Lovisa Årberg (1801–1881) was the first female doctor and surgeon in Sweden, while Amalia Assur (1803–1889) was the first female dentist in Sweden and possibly Europe.
Marie Durocher (1809–1893) was a Brazilian obstetrician, midwife and physician. She is considered the first female doctor in Brazil and the Americas.
Ann Preston (1813–1872) became the first woman dean of a medical school, the Woman's Medical College of Pennsylvania (WMCP), in 1866.
Elizabeth Blackwell (1821–1910), who was English-born, was the first woman to graduate from medical school in the United States. She obtained her MD in 1849 from Geneva Medical College in Geneva, New York.
Rebecca Lee Crumpler (1831–1895) became the first African American female physician in the United States in 1864, upon being awarded her M.D. by the New England Female Medical College in Boston.
Lucy Hobbs Taylor (1833–1910) was the first female dentist in the United States.
Elizabeth Garrett Anderson (1836–1917) was a pioneering feminist in Britain who became the first female doctor in the United Kingdom in 1865 and a co-founder of London School of Medicine for Women.
Madeleine Brès (1839–1925) was the first female medical doctor in France.
Sophia Jex-Blake (1840–1912) was an English physician, feminist and teacher who was the first woman to practice medicine in Scotland in 1878.
Sophia Bambridge (1841–1910) was the first female doctor in American Samoa.
Frances Hoggan (1843–1927) became the first female doctor in Wales in 1870. She was also the first British woman to receive a doctorate in medicine (1870).
Eliza Walker Dunbar (1845–1925) was the first woman in the UK to be appointed as a House Surgeon with responsibilities over male doctors (1874) and the first to receive a UK medical licence by examination (1877).
Jennie Kidd Trout (1841–1921) was the first woman in Canada to become a licensed medical doctor in March 1875.
Rosina Heikel (1842–1929) was a feminist and the first female physician in Finland (1878), as well as in the Nordic countries.
Isala Van Diest (7 May 1842 – 6 February 1916) was the first female medical doctor and the first female university graduate in Belgium.
Nadezhda Suslova (1843–1918), a graduate of Zurich University, was the first female doctor in Russia.
Edith Pechey-Phipson (1845–1908) was a pioneering English doctor in India. She received her MD in 1877 from the University of Bern and Licentiate in Midwifery in 1877 at the Royal College of Physicians of Ireland.
Mary Scharlieb (1845–1930) was a pioneer British female physician, as she was the first woman to be elected to the honorary visiting staff of a hospital in the United Kingdom.
Vilma Hugonnai (1847–1922) was the first female doctor in Hungary. She studied medicine in Zürich and received her degree in 1879. However, she had to work as a midwife until 1897 when the Hungarian authorities finally accepted her degree. Hugonnai then started her own medical practice.
Margaret Cleaves (1848–1917) was a pioneering doctor in brachytherapy who obtained her M.D. in 1873. She was the first female appointed to the University of Iowa Medical Department's examining committee in 1885.
Anastasia Golovina, also known as Anastassya Nikolau Berladsky-Golovina, and Atanasya Golovina (1850–1933), was the first female doctor in Bulgaria.
Ogino Ginko (1851–1913) was the first licensed and practicing female physician of Western medicine in Japan.
Bohuslava Kecková (1854–1911), first Bohemian (Czech) woman to obtain a medical degree in 1880 from University of Zurich.
Aletta Jacobs (1854–1929) was the first woman to complete a university course in the Netherlands and the first female doctor in the country.
Hope Bridges Adams Lehmann (1855–1916) was the first female general practitioner and gynecologist in Munich, Germany.
Grace Cadell (1855–1918) and Marion Gilchrist (1864–1952) were the first women to qualify as doctors in Scotland respectively in 1891 and 1894.
Draga Ljočić-Milošević (1855–1926) was a feminist activist and the first female physician in Serbia. She graduated from Zurich University in 1879.
Henriette Saloz-Joudra (1855–1928) successfully defended a doctoral thesis in cardiology at the University of Geneva in June 1883.
Ana Galvis Hotz (1855–1934) was the first female doctor in Colombia. She was also the first Colombian woman (and first woman from Latin America) to obtain a medical degree.
Constance Stone (1856–1902) was the first woman to practice medicine in Australia.
Dolors Aleu i Riera (1857–1913) was the first female medical doctor in Spain when she started practicing medicine in 1879.
Maria Cuțarida-Crătunescu (1857–1919) was the first female doctor in Romania.
Lilian Welsh (1858–1938) was the first woman full professor at Goucher College.
Sonia Belkind (1858–1943), who was Russian-born, was the first female doctor in Palestine.
Isabel Cobb (1858–1947), who earned her M.D. in 1892, was Cherokee and the first woman physician in Indian Territory. She was also an alumna of the Woman's Medical College of Pennsylvania.
Matilde Montoya (1859–1939) became the first female physician in Mexico in 1887.
Kadambini Ganguly (1861–1923) was the first Indian woman to obtain a medical degree in India upon graduating from the Calcutta Medical College in 1886.
Elsie Inglis (1864–1917), born in India, was a pioneering Scottish doctor and suffragist who obtained her MD at Edinburgh School of Medicine for Women and worked at Rotunda Hospital, Dublin.
Annie Lowrie Alexander (1864–1929) was the first licensed female physician in the Southern United States
Emily Charlotte Thomson (1864–1955) was one of the first women admitted to professional medical societies in Scotland and co-founded the Dundee Women's Hospital in 1896.
Anandi Gopal Joshi (1865–1887), the first Indian woman to obtain a medical degree having graduated from the Woman's Medical College of Pennsylvania in 1886.
Susan La Flesche Picotte (1865–1915) was the first Native American woman to obtain a medical degree.
Sofia Okunevska (1865–1926) was the first Ukrainian female doctor.
Mary Josephine Hannan (1865–1935) was the first Irishwoman to graduate with the credentials LRCPI & SI and LM.
Marie Spångberg Holth (1865–1942) was the first woman doctor in Norway after graduating in medicine from the Royal Frederiks University of Christiania in 1893.
Anne Walter Fearn (1865–1938) practiced as a medical doctor in Shanghai, China, for almost 40 years.
Eloísa Díaz (1866–1950) became the first female doctor in Chile upon completing her studies at the Universidad de Chile on 27 December 1886; she received her degree on 3 January 1887.
Merbai Ardesir Vakil (1868–1941) was an Indian physician and the first Asian woman to graduate from a Scottish university.
Eva Jellett (1868–1958), first woman to graduate from Trinity College Dublin with a medical degree in 1905.
Bertha E. Reynolds (1868–1961) was among the first women licensed to practice medicine in Wisconsin (serving the rural communities of Lone Rock and Avoca).
Emma K. Willits (1869–1965) was believed to be only the third woman to specialize in surgery and the first to head a Department of General Surgery at Children's Hospital in San Francisco, 1921–1934.
Alice Hamilton (1869–1970) was an American physician, research scientist, and author who is best known as a leading expert in the field of occupational health and a pioneer in the field of industrial toxicology. She was also the first woman appointed to the faculty of Harvard University.
Vera Gedroitz (1870–1932) was the first female professor of surgery in the world, as well as the first female military surgeon in Russia.
Maria Montessori (1870–1952), renowned educator and one of the first female medical doctors in Italy.
Milica Šviglin Čavov (b. unknown, circa 1870s) was the first Croatian female doctor. She graduated from the Medical School in Zürich in 1893, but was not allowed to work in Croatia.
Florence Sabin (1871–1953) was the first woman elected to the United States National Academy of Sciences.
Yoshioka Yayoi (1871–1959), one of the first women to gain a medical degree in Japan; founded a medical school for women in 1900.
Hannah Myrick (1871–1973) had helped to introduce the use of X-rays at the New England Hospital for Women and Children.
Laura Esther Rodriguez Dulanto (1872–1919) was the first female doctor in Peru upon obtaining her medical degree.
Marie Equi (1872–1952) was an American doctor and activist for women's access to birth control and abortion.
Fannie Almara Quain (1874–1950) was the first woman born in North Dakota to earn a doctor of medicine degree.
Karola Maier Milobar (born 1876) became the first female physician to practice in Croatia in 1906.
Bertha De Vriese (1877–1958) was the first Belgian woman to obtain a medical degree from Ghent University.
Selma Feldbach (1878–1924) was the first Estonian woman to become a medical doctor.
Andrea Evangelina Rodríguez Perozo (1879–1947) was the first female medical school graduate in the Dominican Republic.
Alice Mary Barry (1880–1955) was a doctor and the first woman nominated fellow of the Royal College of Physicians of Ireland.
Ernestina Paper (b. unknown, circa mid-1800s) was the first Italian woman to receive an advanced degree (in medicine) in 1877.
Doctor Ethel Constance Cousins (1882–1944) and nurse Elizabeth Brodie were the first European women admitted to Bhutan in 1918 as part of a missionary effort to curtail a cholera outbreak.
Muthulakshmi Reddi (1886–1968) was one of the early female medical doctors in India and a major social reformer.
María Elisa Rivera Díaz (1887–1981) (1909), Ana Janer (1909), Palmira Gatell (1910), and Dolores Piñero (1892–1975) (1913) were the first women to earn a medical degree in Puerto Rico. María Elisa Rivera Díaz and Ana Janer graduated in the same medical school class in 1909 and thus could both be considered the first female Puerto Rican physicians.
Anna Petronella van Heerden (1887–1975) was the first Afrikaner woman to qualify as a medical doctor in South Africa. Her 1923 doctoral thesis was the first medical thesis written in Afrikaans.
Matilde Hidalgo (1889–1974) was the first female doctor in Ecuador.
Johanna Hellman (1889–1982) was a German physician who specialized in surgery, and the first woman to be a member of the German Society for Surgery.
Sun Chau Lee (, 1890–1979) was one of the first female Chinese doctors of Western medicine in China.
Mabel Wolff (1890–1981) and her sister Gertrude L. Wolff developed the first midwifery training school in Sudan in 1930. Mastura Khidir, one of the original students, was awarded a medal from King George V in 1945 for being the last surviving midwife from the first graduating class.
Mary Hearn (1891–1969) was a gynaecologist and first woman fellow of the Royal College of Physicians of Ireland.
Concepción Palacios Herrera (1893–1981) was the first female physician in Nicaragua.
Evelyn Totenhofer (1894–1977) became the first (female) resident nurse for Pitcairn Islands in 1944.
Jane Cummins (1899–1982), who possessed a DMRE and DTM&H, was an officer in the WRAF.
Irene Condachi (1899–1970), who earned her M.D. in 1927, was one of only two practicing female doctors in Malta during World War II.
Ah-hsin Tsai (1899–1990) was colonial Taiwan's first female physician.
20th and 21st centuries
Ana Aslan (1897–1988) was a Romanian biologist and physician, specialist in gerontology, academician from 1974 and the director of the National Institute of Geriatrics and Gerontology (1958–1988).
Marguerite Champendal (1870–1928) was the first woman from Geneva to earn her M.D. at the University of Geneva in 1900.
Emily Siedeberg (1873–1968) became the first female doctor in New Zealand in 1896. Ellen Dougherty (1844–1919) became New Zealand's first registered nurse in 1902 whereas Akenehi Hei (1878–1910) was the first Māori female to qualify as a nurse in 1908 in New Zealand.
Yu Meide (1874–1960) became the first Chinese Western medicine female doctor in Macau when she started a medical practice in 1906.
Oból Voansnac and Sofie Lyberth were the first Western-educated Greenlandic women to train as midwives in Greenland sometime in the early 20th century.
Lilian Grandin (1876–1924) was the first female doctor in Jersey. In 1907, Eleanor Diaper became the first district nurse in Jersey.
Grace Pepe Malemo Haleck (1894–1987), Initia Taveuveu and Feiloa'iga Iosefa became the first qualified female nurses in American Samoa upon completing their training in 1916.
Dorothy Pantin (1896–1985) was the first woman doctor and surgeon of the Isle of Man.
Deaconess Mette Cathrine Thomsen was the first trained female nurse to work in the Faroe Islands from 1897 to 1915.
Eshba Dominika Fominichna (born 1897) became the first female doctor in Abkhazia after earning her medical degree at Baku State University in 1925.
Safiye Ali (1894–1952) was the first Turkish woman to have obtained a medical degree.
Damaye Soumah Cissé, mother of the renowned educator and politician Jeanne Martin Cissé (1926–2017), was one of the first midwives in Guinea.
Josephine Rera (1903–1987) was the first woman doctor in Borough Park and Bensonhurst, Brooklyn in New York City. She received the American Medical Association commendation for 50th Year in Practice. Rera graduated in 1926 with an M.D. diploma at the New York Homeopathic Medical College and Flower Hospital (now the New York Medical College in Valhalla, New York).
Lai Po-cheun was the first woman to study and graduate as a medical student at the University of Hong Kong, during the 1920s.
Fatma bint Saada Nassor Lamki became the first female doctor in Zanzibar sometime during the 1920s.
Kornelija Sertić (1897–1988) was the first woman to graduate from the Medical School in Zagreb (which occurred in 1923).
Agnes Yewande Savage (1906–1964) was the first woman in West Africa to qualify in medicine.
Joan Refshauge (1906–1979) was the first female doctor appointed to Papua New Guinea by the Australian government in 1947.
Henriette Bùi Quang Chiêu (1906–2012) was the first female doctor in Vietnam.
Sophie Redmond (1907–1955) became the first female doctor in Suriname after graduating from medical school in 1935.
Alma Dea Morani (1907–2001) was the first woman admitted to the American Society of Plastic and Reconstructive Surgeons.
Yvonne Sylvain (1907–1989) was the first female doctor in Haiti. She was the first woman accepted into the medical school of the University of Haiti, and earned her medical degree there in 1940.
Virginia Apgar (1909–1974), significant work in anesthesiology and teratology; founded field of neonatology; first woman granted full professorship at Columbia University College of Physicians & Surgeons.
Pearl Dunlevy (1909–2002) was a physician and epidemiologist and the first female president of the Biological Society of the Royal College of Surgeons of Ireland.
Isobel Addey Tate (1875–1917) was one of the first women to die while serving as a doctor overseas during World War I.
Beatrice Emmeline Simmons, a missionary and nurse, was the first Caucasian (female) formally trained in a health care profession to settle as an educator in Kiribati in 1910.
Elizabeth Abimbola Awoliyi (1910–1971) was the first female physician in Nigeria.
Badri Teymourtash (1911–1989) was the first Iranian female dentist, who received her higher education in Belgium.
Andréa de Balmann (1911–2007) was the first female doctor in French Polynesia.
Jane Elizabeth Hodgson (1915–2006) was a pioneering provider of reproductive healthcare for women and advocate for women's rights.
Matilda J. Clerk (1916–1984) was the first Ghanaian woman to win a scholarship for university education abroad and the second Ghanaian woman to become a physician. She was also the first woman to obtain a postgraduate diploma in colonial Ghana and West Africa.
Mary Malahele-Xakana (1917–1982) was the first black woman to register as a medical doctor in South Africa (in 1947).
Susan Gyankorama De-Graft Johnson (1917–1985) was the first woman to qualify as a physician in colonial Ghana.
Fatima Al-Zayani (1918–1982) became the first qualified female nurse in Bahrain in 1941. In 1969, Sadeeqa Ali Al-Awadi became the first female doctor in Bahrain upon her graduating from medical school.
Kakish Ryskulova (1918–2018) was the first woman from Kyrgyzstan to qualify as a surgeon.
Salma Ismail (1918–2014) was the first Malay woman to qualify as a doctor.
Katherine Burdon, wife of the then-government administrator, was among the women formally registered as midwives for St. Kitts and Anguilla in 1920.
Ogotu Head (1920–2001) was the first female nursing graduate from Niue after having completed her training in Samoa in 1939.
Ethna Gaffney (1920–2011) was the first female RCSI Professor of Chemistry.
Estela Gavidia (b. unknown, circa 1920) was the first woman to graduate as a doctor in El Salvador, which occurred in 1945.
Gabriela Valenzuela and Froilana Mereles were the first women to graduate with a medical degree in Paraguay in 1924. Valenzuela, however, is considered Paraguay's first practicing female doctor.
Augusta Jawara (1924–1981) was the first woman from The Gambia to qualify as a state certified midwife in 1953. She completed her training in England.
Kula Fiaola (1924–2003) became the first qualified (female) nurse in Tokelau in 1951.
Barbara Ball (1924–2011) was the first female doctor in Bermuda after having started her practice in 1949.
Margery Clare McKinnon (1924–2014) became the first female doctor in Norfolk Island around 1955.
Jean Lenore Harney (1925–2020) was the first female doctor from St. Kitts, Nevis and Anguilla to study medicine at the United Kingdom's Liverpool University.
Kapelwa Sikota (1928–2006) became the first registered nurse in Zambia in 1952.
Mary Grant (1928–2016) was the third Ghanaian woman to qualify in medicine.
Daphne Steele (1929–2004), a nurse from Guyana, became the first Black Matron in the National Health Service in 1964.
Josephine Nambooze (born 1930) started her practice as the first female doctor in Uganda in 1962. Selina Rwashana was the first psychiatric nurse in Uganda after having completed her training in the United Kingdom during the 1950s.
Tu Youyou (born 1930), first Chinese Nobel laureate in physiology or medicine and the first female citizen of the People's Republic of China to receive a Nobel Prize in any category (2015).
Lucie Lods and Jacqueline Exbroyat (1931–2013) were the first female doctors in New Caledonia. Lods started her practice in 1938, whereas Exbroyat did so during the 1960s.
Ayten Berkalp (born 1933) became the first female doctor in Northern Cyprus in 1963.
Lobsang Dolma Khangkar (1934–1989) was the first female doctor in the region of Tibet.
Widad Kidanemariam (1935–1988) became the first female doctor in Ethiopia during the 1960s.
Xhanfize (Frashëri) Basha returned to Albania to become the country's first female doctor upon completing her studies at the University of Philadelphia in 1937.
Edna Adan Ismail (born 1937) became Somaliland's first nurse midwife during the 1950s upon completing her training at the then-named Borough Polytechnic in the United Kingdom.
Hajah Habibah Haji Mohd Hussain (born 1937) was among the first women in Brunei to work as a nurse after finishing nursing school in 1955.
Marguerite Issembe became the first midwife in Gabon in 1940.
Ulai Otobed (born 1941) from Palau became the first female doctor in Micronesia. In 2020, Lara Reklai became the first Palauan female to complete her medical studies in Cuba.
María Herminia Yelsi and Digna Maldonado de Candía became the first female professional nurses in Paraguay in 1941.
Barbara Ross-Lee (born 1942) was the first African American female dean of a U.S. medical school (1993) (Ohio University College of Osteopathic Medicine).
Kek Galabru (born 1942) became the first female doctor in Cambodia upon obtaining her medical degree in France in 1968.
Choua Thao (born 1943), at the age of 14, was one of two Hmong girls recruited to receive nursing training around the time of the Secret War in Laos.
Dalva Maria Carvalho Mendes (born 1956), Brazilian doctor and soldier, was the first woman to be made a rear admiral in the Brazilian Navy.
Nancy Dickey (born 1950) was the first female president of the American Medical Association.
Rosa Mari Mandicó (born 1951) became the first qualified female nurse in Andorra in 1971. In 1991, Concepció Álvarez Martínez, Isabel Navarro Gilabert, Dominica Ramond Punsola, Montserrat Rue Capella, Pilar Serrano Gascón, Purificación Valverde Hernández and Maria Líria Viñolas Blasco were the first nurse graduates in Andorra.
Nancy C. Andrews (born 1958), first female dean of a top-ten medical school in the United States (2007), Duke University School of Medicine.
Alganesh Haregot and Alganesh Adhanom were among the first women to graduate from a formal nursing school in Eritrea in 1959.
Ramlati Ali (born 1961) became the first female doctor in Mayotte in 1996.
Anniest Hamilton, the first female doctor in Turks and Caicos Islands, began her healthcare career sometime during the 1960s.
Under the tutelage of matron Daw Dem, Pem Choden, Nim Dem, Choni Zangmo, Gyem, Namgay Dem and Tsendra Pem became the first nurses in Bhutan in 1962.
Clara Raquel Epstein (born 1963), first Mexican-American woman U.S. trained and U.S. board certified in neurological surgery and youngest recipient of the prestigious Lifetime Achievement Award in Neurosurgery.
Viopapa Annandale-Atherton became the first Samoan woman doctor upon graduating from New Zealand's University of Otago in 1964. She returned to Samoa in 1993 and started a medical practice.
Cora LeEthel Christian became the first female doctor in the United States Virgin Islands upon completing her medical education in the early 1970s.
Madeline Nyamwanza-Makonese (b. unknown, mid-20th century) was the first female doctor in Zimbabwe. She was the second African woman to become a doctor and the first African woman to graduate from the University of Rhodesia Medical School in 1970.
Rehana Kausar (b. mid-20th century) became the first woman doctor from Azad Kashmir to graduate from Medical School in Pakistan in 1971.
Elwyn Chomba became the first female doctor in Zambia in 1973. In 1999, Jacqueline Mulundika-Mulwanda became Zambia's first female surgeon.
N'Guessan Affoué Christine from Ivory Coast was the first midwife advisor of the United Nations Population Fund (UNFPA). She retired in 2016 after having worked in the field since 1976.
Zoe Gardner became the first woman to overwinter with the Australian Antarctic Program, serving as medical officer on sub-Antarctic Macquarie Island in 1976.
Margaret Allen (born 1948) became the first female heart transplant surgeon in the United States after performing a transplant in 1985.
Desiree Cox became the first (female) Rhodes Scholar from The Bahamas in 1987. She became a medical doctor upon earning her MBBS at the University of Oxford in 1992.
Marlene Toma became the first Saint Martin woman to graduate in midwifery in 1990.
Kinneh Sogur was the first home-trained female medical doctor to graduate from the University of the Gambia (UTG) in 2007. The medical school was the first one to be established in the country in 1999.
Margeret 'Molly' Brown (died 2008) was the first female doctor in the Cayman Islands.
Esther Apuahe became the first female surgeon in Papua New Guinea in 2011. Naomi Kori Pomat (died 2021) was the first female doctor in Papua New Guinea's Western Province.
ʻAmelia Afuhaʻamango Tuʻipulotu became the first Tongan (female) to receive a Nursing PhD in 2012.
Neti Tamarua Herman became the first Cook Islands (female) nurse to earn a doctorate degree in 2015.
Alice Niragire was the first Rwandan female to graduate with a master's degree in surgery in 2015 since the course was introduced in 2006. In 2018, Claire Karekezi returned to Rwanda to become the country's first female neurosurgeon.
Natalie Joyce Brewley (died 2016) was the first female doctor in the British Virgin Islands. Stacy Rhymer is considered the first female doctor in the British Virgin Islands' Virgin Gorda.
Jin Cody became the first (female) certified nurse-midwife in the Northern Mariana Islands in 2017.
Elisa Gaspar became the first woman to lead the Medical Association of Angola (ORMED) in 2019.
George Tarer was the first midwife to graduate in Guadeloupe.
Olivia Torres Cruz is the first Chamorro female doctor in Guam.
Errolyn Tungu is the first female obstetrician-gynaecologist in Vanuatu.
Rebecca Edwards became the first Falkland Islander woman to become a doctor after completing her medical training at University College London.
Sergelen Orgoi developed low-cost liver transplantation for developing countries.
Adama Saidou is the first female surgeon in Niger, as well as the first woman to lead a surgical department.
See also
American Medical Women's Association
Female education
History of medicine
History of nursing
List of British women physicians
List of first female pharmacists by country
List of first female physicians by country
List of first women dentists by country
Sexism in medicine
Timeline of women's education
Timeline of women in science
Women in dentistry
Phanostratê
References
Bibliography
Abram, Ruth. Send Us a Lady Physician: Women Doctors in America, 1835–1920
Blake, Catriona. The Charge of the Parasols: Women's Entry to the Medical Profession
Borst, Charlotte G. Catching Babies: Professionalization of Childbirth, 1870–1920 (1995), Cambridge, MA: Harvard University Press
Elisabeth Brooke, Women Healers: Portraits of Herbalists, Physicians, and Midwives (biographical encyclopedia)
Chenevert, Melodie. STAT: Special Techniques in Assertiveness Training for Women in the Health Profession
Barbara Ehrenreich and Deirdre English, Witches, Midwives, and Nurses: A History of Women Healers
Deirdre English and Barbara Ehrenreich, For Her Own Good (gendering of history of midwifery and professionalization of medicine)
Henderson, Metta Lou. American Women Pharmacists: Contributions to the Profession
Junod, Suzanne White and Seaman, Barbara, eds. Voices of the Women's Health Movement, Volume One. Seven Stories Press, New York, 2012, pp. 60–62.
Luchetti, Cathy. Medicine Women: The Story of Early-American Women Doctors. New York: Crown.
Regina Morantz-Sanchez, Sympathy and Science: Women Physicians in American Medicine (1985 first ed.; 2001)
More, Ellen S. Restoring the Balance: Women Physicians and the Profession of Medicine, 1850–1995
Perrone, Bobette H. et al. Medicine Women, Curanderas, and Women Doctors (1993); cross-cultural anthropological survey of traditional societies
Pringle, Rosemary. Sex and Medicine: Gender, Power and Authority in the Medical Profession
Schwirian, Patricia M. Professionalization of Nursing: Current Issues and Trends (1998), Philadelphia: Lippincott.
Walsh, Mary Roth. Doctors Wanted: No Women Need Apply: Sexual Barriers in the Medical Profession, 1835–1975 (1977)
Biographies
Laurel Thatcher Ulrich, A Midwife's Tale: The Life of Martha Ballard Based on Her Diary, 1785–1812 (1991)
Rebecca Wojahn, Dr. Kate: Angel on Snowshoes (1956)
External links
The Archives for Women in Medicine, Countway Library, Harvard Medical School
"Changing the Face of Medicine", 2003 exhibition at the National Library of Medicine; "NLM Exhibit Honors Outstanding Women", NIH Record, 11 November 2003; exhibition website at Changing the Face of Medicine.
Women are Changing the face of medicine
Women Physicians: 1850s–1970s – online exhibit at the Drexel University College of Medicine Archives and Special Collections on Women in Medicine and Homeopathy
"The Stethoscope Sorority", an online exhibit from the Archives for Women in Medicine
Women in Medicine Oral History Project Collection held at the University of Toronto Archives and Records Management Services
What's It Like to Be a Woman in Medicine? – online website at Cedar Sinai
Women scientists | Women in medicine | Technology | 10,747 |
69,476,862 | https://en.wikipedia.org/wiki/Australian%20Bird%20Calls | Australian Bird Calls (also referred to as Songs of Disappearance: Australian Bird Calls and just Songs of Disappearance) is an album of Australian bird calls, released on 3 December 2021 by the Bowerbird Collective and BirdLife Australia. It was created to bring attention to endangered and threatened species of Australian birds. The recordings were made by nature recordist David Stewart and Nature Sound.
Following its physical release, Australian Bird Calls peaked at number two on the Australian ARIA Charts.
Although the title initially appeared as Songs of Disappearance, this later became the de facto "artist" name for the Bowerbird Collective's effort to bring attention to threatened and endangered Australian species. The album itself then took on the title Australian Bird Calls when a "sequel" album of frog calls, Australian Frog Calls, also attributed to Songs of Disappearance, was released on 2 December 2022.
Background
The album came from an idea by Anthony Albrecht, a PhD student at Charles Darwin University and co-founder of the Bowerbird Collective, and his supervisor Stephen Garnett, who wrote the report The Action Plan for Australian Birds 2020, published in December 2021, which found one in six (216 out of 1,299) Australian bird species are threatened. Garnett's report, released in collaboration with BirdLife Australia, further identified 50 species of Australian birds closest to "facing extinction due to lack of policy support and rampant climate change".
Violinist Simone Slattery, the other co-founder of Bowerbird Collective, arranged the first track, a collage of the 53 bird songs recorded by David Stewart over four decades. Slattery said she kept listening to the isolated bird calls until a structure came to mind "like a quirky dawn chorus. Some of these sounds will shock listeners because they're extremely percussive, they're not melodious at all. They're clicks, they're rattles, they're squawks and deep bass notes." The Guardian noted the "morse code-like song" of the night parrot, which had not been heard until 2013, as well as the call of the regent honeyeater, a bird now considered "so rare that it is literally losing its own voice out of loneliness".
BirdLife Australia CEO Paul Sullivan called the album "some rare recordings of birds that may not survive if we don't come together to protect them. While this campaign is fun, there's a serious side to what we're doing, and it's been heartening to see bird enthusiasts showing governments and businesses that Australians care about these important birds."
Reception
A staff writer at The Music gave the album four-and-a-half out of five stars and posted a review consisting entirely of bird noises.
Commercial performance
The album debuted at number five on the Australian ARIA Albums Chart dated 13 December 2021, selling over 2,000 units, with 1,500 of those being pre-ordered copies. The following week, it ascended to number three. It later re-entered at number two.
Track listing
Charts
Release history
References
External links
Official website
2021 albums
Animal sounds | Australian Bird Calls | Biology | 631 |
4,062,914 | https://en.wikipedia.org/wiki/Swedish%20Meteorological%20and%20Hydrological%20Institute | The Swedish Meteorological and Hydrological Institute (, SMHI) is a Swedish government agency and operates under the Ministry of Climate and Enterprise. SMHI has expertise within the areas of meteorology, hydrology and oceanography, and has extensive service and business operations within these areas.
History
On 1 January 1873, Statens Meteorologiska Centralanstalt was founded as an autonomous part of the Royal Swedish Academy of Sciences, but the first meteorological observations began on 1 July 1874. It was not until 1880 that the first forecasts were issued; forecasts were first broadcast on Stockholm radio on 19 February 1924.
In 1908, the Hydrographic Office (Hydrografiska byrån, HB) was created. Its task was to scientifically map Sweden's fresh waters and to collaborate with the weather service in taking certain weather observations, such as precipitation and snow cover. In 1919, the two services merged to become Statens meteorologisk-hydrografiska anstalt (SMHA).
In 1945, the service was renamed Sveriges meteorologiska och hydrologiska institut. It was located in Stockholm until 1975, when, following a decision taken by the Riksdag in 1971, it was relocated to Norrköping.
Staff and organisation
SMHI has offices in Gothenburg, Malmö, Sundsvall and Upplands Väsby, in addition to its headquarters in Norrköping. To the Swedish public, SMHI is mostly known for the weather forecasts provided on public-service radio by Sveriges Radio. Many of the other major media companies in Sweden also buy weather forecasts from SMHI.
SMHI has about 650 employees. The research staff includes some 100 scientists at the Research Unit, which includes the Rossby Centre. The research division is divided into six units:
Meteorological prediction and analysis
Air quality
Oceanography
Hydrology
Rossby Centre (Regional and Global Climate Modelling)
Atmospheric Remote Sensing
The regional and global climate modelling is at the Rossby Centre, which was established at SMHI in 1997.
Environmental research spans all six research units. There is also a project for providing contributions to the HIRLAM (High Resolution Limited Area Model) project.
The main goal of the research division is to support the Institute and the society with research and development. The scientists participate in many national and international research projects.
Air quality research
The air quality research unit of SMHI has 10 scientists, all of whom have expertise in air quality, atmospheric pollution transport, and atmospheric pollution dispersion modelling.
Some of the atmospheric pollution dispersion models developed by the air quality research unit are:
the DISPERSION21 model (also called DISPERSION 2.1)
the MATCH model
Allegations of harassment and corruption
An anonymous letter sent to the Swedish Ministry of the Environment in 2019, written by 100 SMHI employees, claims that harassment and threats from management happen frequently within the institution, a claim that SMHI's former director general did not wish to address thoroughly.
In 2020, it was revealed that the sea routing department had been sold to its recently resigned former director for a very low price, without any public offer. The matter was reported to the Swedish parliament.
References
External links
SMHI website
The Model Documententation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency)
Airviro web page
Airviro page on Westlakes website
Government agencies of Sweden
Sweden
Atmospheric dispersion modeling
1945 establishments in Sweden
Government agencies established in 1945
National meteorological and hydrological services
Oceanographic organizations
Hydrology organizations | Swedish Meteorological and Hydrological Institute | Chemistry,Engineering,Environmental_science | 716 |
61,295,777 | https://en.wikipedia.org/wiki/CooA | CooA is a heme-containing transcription factor that responds to the presence of carbon monoxide. This protein forms homodimers and is a homolog of cAMP receptor protein. CooA regulates the expression of carbon monoxide dehydrogenase, an enzyme that catalyzes the oxidation of CO to CO2. The most well-studied CooA homolog comes from Rhodospirillum rubrum (RrCooA), but the CooA homolog from Carboxydothermus hydrogenoformans (ChCooA) has been studied as well. The main difference between these two CooA homologs is the ferric heme coordination. For RrCooA, the ferric heme iron is bound to a cysteine and the amine of the N-terminal proline, while, in the ferrous state, a ligand switch occurs where a nearby histidine displaces the thiolate. For ChCooA, the heme iron is ligated by a histidine and the N-terminal amine in both the ferric and ferrous states. For both homologs, CO displaces the amine ligand and activates the protein to bind to its target DNA sequence. Several structures of CooA exist: RrCooA in the ferrous state (1FT9), ChCooA in the ferrous, imidazole-bound state (2FMY), and ChCooA in the ferrous, CO-bound state (2HKX).
References
Gene expression
Protein families
DNA
Gaseous signaling molecules | CooA | Chemistry,Biology | 337 |
12,525,325 | https://en.wikipedia.org/wiki/Manicule | The manicule, , is a typographic mark with the appearance of a hand with its index finger extending in a pointing gesture. Originally used for handwritten marginal notes, it later came to be used in printed works to draw the reader's attention to important text. Though once widespread, it is rarely used today, except as an occasional archaic novelty or on informal directional signs.
Terminology
For most of its history, the mark has been inconsistently referred to by a variety of names. William H. Sherman, in the first dedicated study of the mark, uses the term manicule (from the Latin root manicula, meaning "little hand"), but also identifies 14 further names which have been used:
hand
pointing hand
hand director
pointer
digit
fist
mutton fist
bishop's fist
index
indicator
indicule
maniple
pilcrow
Sherman labels the last three erroneous, with indicule and maniple being mishearings or conflations, and pilcrow properly referring to the paragraph mark, ¶.
History
Handwritten manicules
The symbol originates in scribal tradition of the medieval and Renaissance period, appearing in the margin of manuscripts to mark corrections or notes. The earliest book known to include manicules is the 1086 Domesday Book, where they are used for marginal annotations alongside other marks such as daggers. The age of the annotations is not known, and they may date to later than the 11th century.
Manicules are first known to appear in the 12th century in handwritten manuscripts in Spain, and became common in the 14th and 15th centuries in Italy, some very elaborate, with shading and artful cuffs. Some were playful and elaborate, but others were as simple as "two squiggly strokes suggesting the barest sketch of a pointing hand" and thus quick to draw.
After the popularization of the printing press starting in the 1450s, the manicule continued in handwritten form as a means of annotating printed documents, eventually falling out of popularity by the nineteenth century.
In print
Early printers using a type representing the manicule included Mathias Huss and Johannes Schabeler in Lyons, in their 1484 edition of Paulus Florentinus's . Writer John Boardley identifies the first appearance of a manicule in a printed book as an earlier 1479 edition of the same work, , printed in Milan by Leonhard Pachel and Ulrich Scinzenzeller.
In contrast with their handwritten use, early printed manicules appeared in the main text, pointing outward toward corresponding printed margin notes. Later, beginning in the sixteenth century, the manicule came to be used as a decorative element on the title pages of books, alongside other so-called "dingbats" such as the fleuron ().
The manicule attained a great degree of popularity in the nineteenth century, particularly in advertisements. At this time, they also became more visually diverse, with larger and more complex fists being created. They were also widely used in signage, with some fingerposts having relief-printed or even fully three-dimensional physical manifestations of pointing hands. The United States Postal Service has also used a pointing hand as a graphical indicator for its "Return to Sender" stamp.
Its popularity declined toward the end of the nineteenth century, perhaps due to its oversaturation in advertising. By the 1890s, it was rarely used unless for ironic effect. Sherman (2005) argues that as the symbols became standardized, they were no longer reflective of individuality in comparison to other writing, and this explains their diminished popularity.
Usage examples
The typical use of the pointing hand is as a bullet-like symbol to direct the reader's attention to important text, having roughly the same meaning as the word "attention" or "note". It is used this way both by annotators and printers. Even in the first few centuries of use, it can be seen used to draw attention to specific text, such as a title (in some cases in the form of a row of manicules), inserted text, noteworthy passage, or sententiae. In some cases, flower marks and asterisks were used for similar purposes. Less commonly, in earlier centuries the pointing hand acted as a section divider with a pilcrow as paragraph divider; or more rarely as the paragraph divider itself.
Some encyclopedias use it in articles to cross-reference, as in ☞ other articles.
It occasionally sees use in magazines and comic books to indicate to the reader that a story on the right-hand page continues onto the next.
In modern printing, it was used as a standard typographical symbol marking notes. The American Dictionary of Printing and Bookmaking (1894) treats it as the seventh in the standard sequence of footnote markers, following the paragraph sign (pilcrow).
In linguistics, the symbol is used in optimality theory tableaux to identify the optimal output in a candidate of generated possibilities from a given input.
American science fiction writer Kurt Vonnegut used the symbol as a form of margin on the first line of every paragraph in his novel Breakfast of Champions. The literary effect of this was to create separation between each paragraph, reinforcing the stream of consciousness style of the text.
American essayist and cultural critic H.L. Mencken, often credited with having first coined the aphorism, "When you point one finger, there are three fingers pointing back to you," is also reported to have used this symbol to convey this sentiment in shorthand, seen first in his telegrams as early as the 1920s.
Thomas Pynchon parodies this punctuation mark in his novel Gravity's Rainbow by depicting a middle finger, rather than an index finger, pointing at a line of text.
Computer cursor
An upward pointing hand is often used in the mouse cursor in graphical user interfaces (such as those in Adobe Acrobat and Photoshop) to indicate an object that can be manipulated.
The first such cursor is believed to be that of the Xerox Star. Many web browsers use an upward pointing hand cursor to indicate a clickable hyperlink. CSS 2.0 allows the "cursor" property to be set to "hand" or "pointer" to intentionally change the mouse cursor to this symbol when hovering over an object; "move" may produce a closed fisted hand.
Many video games made in the 1980s and '90s, primarily text-based adventure games, also used these cursors.
Unicode
Unicode (version 1.0, 1991) introduced six "pointing index" characters in the Miscellaneous Symbols block.
Unicode 6.0 (2010) included four more pointing hands in Miscellaneous Symbols and Pictographs.
Unicode 7.0 (2014) added several more indices to the Miscellaneous Symbols and Pictographs block, sourced from the Wingdings 2 font.
Unicode 13.0 (2020) added a three-part index (🯁🯂🯃) in the Symbols for Legacy Computing block.
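For illustration, the six pointing-index characters introduced in Unicode 1.0 occupy the contiguous range U+261A through U+261F in the Miscellaneous Symbols block; a short Python sketch lists them with their official Unicode character names:

```python
import unicodedata

# The six "pointing index" characters from Unicode 1.0 (1991),
# located in the Miscellaneous Symbols block (U+261A..U+261F).
for codepoint in range(0x261A, 0x2620):
    char = chr(codepoint)
    print(f"U+{codepoint:04X} {char} {unicodedata.name(char)}")
```

The later additions in Unicode 6.0, 7.0 and 13.0 can be enumerated the same way from their respective blocks.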
Emoji
Five Unicode manicule characters are emoji, including one of those in Unicode 1.0 and all four introduced in Unicode 6.0. All five have standardized variants for text and emoji presentation.
See also
V sign
Obelus (historic text pointer)
Notes
References
Sources
External links
Collection of photographs of manicules on Flickr
Palaeography
Typographical symbols | Manicule | Mathematics | 1,506 |
14,094 | https://en.wikipedia.org/wiki/Human%20cloning | Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissue. It does not refer to the natural conception and delivery of identical twins. The possibilities of human cloning have raised controversies. These ethical concerns have prompted several nations to pass laws regarding human cloning.
Two commonly discussed types of human cloning are therapeutic cloning and reproductive cloning.
Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants. It is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and (more recently) pluripotent stem cell induction.
Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues.
History
Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policymakers began to take the prospect seriously in the 1960s. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms "clone" and "cloning", which had been used in agriculture since the early 20th century. In his speech on "Biological Possibilities for the Human Species of the Next Ten Thousand Years" at the Ciba Foundation Symposium on Man and his Future in 1963, he said:
Nobel Prize-winning geneticist Joshua Lederberg advocated cloning and genetic engineering in an article in The American Naturalist in 1966 and again, the following year, in The Washington Post. He sparked a debate with conservative bioethicist Leon Kass, who wrote at the time that "the programmed reproduction of man will, in fact, dehumanize him." Another Nobel Laureate, James D. Watson, publicized the potential and the perils of cloning in his Atlantic Monthly essay, "Moving Toward the Clonal Man", in 1971.
With the cloning of a sheep known as Dolly in 1996 by somatic cell nuclear transfer (SCNT), the idea of human cloning became a hot debate topic. Many nations outlawed it, while a few scientists promised to make a clone within the next few years. The first hybrid human clone was created in November 1998, by Advanced Cell Technology. It was created using SCNT; a nucleus was taken from a man's leg cell and inserted into a cow's egg from which the nucleus had been removed, and the hybrid cell was cultured and developed into an embryo. The embryo was destroyed after 12 days.
In 2004 and 2005, Hwang Woo-suk, a professor at Seoul National University, published two separate articles in the journal Science claiming to have successfully harvested pluripotent, embryonic stem cells from a cloned human blastocyst using SCNT techniques. Hwang claimed to have created eleven different patient-specific stem cell lines. This would have been the first major breakthrough in human cloning. However, in 2006 Science retracted both of his articles on account of clear evidence that much of his data from the experiments was fabricated.
In January 2008, Dr. Andrew French and Samuel Wood of the biotechnology company Stemagen announced that they successfully created the first five mature human embryos using SCNT. In this case, each embryo was created by taking a nucleus from a skin cell (donated by Wood and a colleague) and inserting it into a human egg from which the nucleus had been removed. The embryos were developed only to the blastocyst stage, at which point they were studied in processes that destroyed them. Members of the lab said that their next set of experiments would aim to generate embryonic stem cell lines; these are the "holy grail" that would be useful for therapeutic or reproductive cloning.
In 2011, scientists at the New York Stem Cell Foundation announced that they had succeeded in generating embryonic stem cell lines, but their process involved leaving the oocyte's nucleus in place, resulting in triploid cells, which would not be useful for cloning.
In 2013, a group of scientists led by Shoukhrat Mitalipov published the first report of embryonic stem cells created using SCNT. In this experiment, the researchers developed a protocol for using SCNT in human cells, which differs slightly from the one used in other organisms. Four embryonic stem cell lines from human fetal somatic cells were derived from those blastocysts. All four lines were derived using oocytes from the same donor, ensuring that all mitochondrial DNA inherited was identical. A year later, a team led by Robert Lanza at Advanced Cell Technology reported that they had replicated Mitalipov's results and further demonstrated the effectiveness by cloning adult cells using SCNT.
In 2018, the first successful cloning of primates using SCNT was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua.
Methods
Somatic cell nuclear transfer (SCNT)
In somatic cell nuclear transfer ("SCNT"), the nucleus of a somatic cell is taken from a donor and transplanted into a host egg cell, which had its own genetic material removed previously, making it an enucleated egg. After the donor somatic cell genetic material is transferred into the host oocyte with a micropipette, the somatic cell genetic material is fused with the egg using an electric current. Once the two cells have fused, the new cell can be permitted to grow in a surrogate or artificially. This is the process that was used to successfully clone Dolly the sheep. The technique, now refined, has indicated that it was possible to replicate cells and reestablish pluripotency, or "the potential of an embryonic cell to grow into any one of the numerous different types of mature body cells that make up a complete organism".
Induced pluripotent stem cells (iPSCs)
Creating induced pluripotent stem cells ("iPSCs") is a long and inefficient process. Pluripotency refers to a stem cell that has the potential to differentiate into any of the three germ layers: endoderm (interior stomach lining, gastrointestinal tract, the lungs), mesoderm (muscle, bone, blood, urogenital), or ectoderm (epidermal tissues and nervous tissue). A specific set of genes, often called "reprogramming factors", are introduced into a specific adult cell type. These factors send signals in the mature cell that cause the cell to become a pluripotent stem cell. This process is highly studied and new techniques are being discovered frequently on how to improve this induction process.
Depending on the method used, reprogramming of adult cells into iPSCs for implantation could have severe limitations in humans. If a virus is used as a reprogramming factor for the cell, cancer-causing genes called oncogenes may be activated. These cells would appear as rapidly dividing cancer cells that do not respond to the body's natural cell signaling process. However, in 2008 scientists discovered a technique that could remove the presence of these oncogenes after pluripotency induction, thereby increasing the potential use of iPSC in humans.
Comparing SCNT to reprogramming
Both the processes of SCNT and iPSCs have benefits and deficiencies. Historically, reprogramming methods were better studied than SCNT derived embryonic stem cells (ESCs). However, more recent studies have put more emphasis on developing new procedures for SCNT-ESCs. The major advantage of SCNT over iPSCs at this time is the speed with which cells can be produced. iPSCs derivation takes several months while SCNT would take a much shorter time, which could be important for medical applications. New studies are working to improve the process of iPSC in terms of both speed and efficiency with the discovery of new reprogramming factors in oocytes. Another advantage SCNT could have over iPSCs is its potential to treat mitochondrial disease, as it uses a donor oocyte. No other advantages are known at this time in using stem cells derived from one method over stem cells derived from the other.
Uses and actual potential
Work on cloning techniques has advanced understanding of developmental biology in humans. Observing human pluripotent stem cells grown in culture provides great insight into human embryo development, which otherwise cannot be seen. Scientists are now able to better define steps of early human development. Studying signal transduction along with genetic manipulation within the early human embryo has the potential to provide answers to many developmental diseases and defects. Many human-specific signaling pathways have been discovered by studying human embryonic stem cells. Studying developmental pathways in humans has given developmental biologists more evidence toward the hypothesis that developmental pathways are conserved throughout species.
iPSCs and cells created by SCNT are useful for research into the causes of disease, and as model systems used in drug discovery.
Cells produced with SCNT, or iPSCs could eventually be used in stem cell therapy, or to create organs to be used in transplantation, known as regenerative medicine. Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a widely used form of stem cell therapy. No other forms of stem cell therapy are in clinical use at this time. Research is underway to potentially use stem cell therapy to treat heart disease, diabetes, and spinal cord injuries. Regenerative medicine is not in clinical practice, but is heavily researched for its potential uses. This type of medicine would allow for autologous transplantation, thus removing the risk of organ transplant rejection by the recipient. For instance, a person with liver disease could potentially have a new liver grown using their same genetic material and transplanted to remove the damaged liver. In current research, human pluripotent stem cells have been promised as a reliable source for generating human neurons, showing the potential for regenerative medicine in brain and neural injuries.
Ethical implications
In bioethics, the ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, for instance relating to Christian views of procreation and personhood, the questions raised by cloning engage secular perspectives as well, particularly the concept of identity.
Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology.
Opposition to therapeutic cloning mainly centers around the status of embryonic stem cells, which has connections with the abortion debate. The moral argument put forward is based on the notion that embryos deserve protection from the moment of their conception because it is at this precise moment that a new human entity emerges, already a unique individual. Since it is deemed unacceptable to sacrifice human lives for any purpose, the argument asserts that the destruction of embryos for research purposes is no longer justifiable.
Some opponents of reproductive cloning have concerns that technology is not yet developed enough to be safe (for example, the position of the American Association for the Advancement of Science), while others emphasize that reproductive cloning could be prone to abuse (leading to the generation of humans whose organs and tissues would be harvested) and have concerns about how cloned individuals could integrate with families and with society at large.
Members of religious groups are divided. Some Christian theologians perceive the technology as usurping God's role in creation and, to the extent embryos are used, destroying a human life; others see no inconsistency between Christian tenets and cloning's positive and potentially life-saving benefits.
Legal status
Legal status of human therapeutic cloning by jurisdiction (map)
Legal status of human cloning by U.S. state (map)
In popular culture
Science fiction has used cloning, most commonly and specifically human cloning, due to the fact that it brings up controversial questions of identity. Humorous fiction, such as Multiplicity (1996) and the Maxwell Smart feature The Nude Bomb (1980), have featured human cloning. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. Robin Cook's 1997 novel Chromosome 6, Michael Bay's The Island, and Nancy Farmer's 2002 novel House of the Scorpion are examples of this; Chromosome 6 also features genetic manipulation and xenotransplantation. The Star Wars saga makes use of millions of human clones to form the Grand Army of the Republic that participated in the Clone Wars. The series Orphan Black follows human clones' stories and experiences as they deal with issues and react to being the property of a chain of scientific institutions. In the 2019 horror film Us, the entirety of the United States' population is secretly cloned. Years later, these clones (known as The Tethered) reveal themselves to the world by successfully pulling off a mass genocide of their counterparts.
In the 2005 novel Never Let Me Go, Kazuo Ishiguro crafts a subtle exploration into the ethical complications of cloning humans for medical advancement and longevity.
See also
Homunculus
Hwang affair
Notes and references
Notes
References
Further reading
Araujo, Robert John, "The UN Declaration on Human Cloning: a survey and assessment of the debate," 7 The National Catholic Bioethics Quarterly 129 – 149 (2007).
Oregon Health & Science University. "Human skin cells converted into embryonic stem cells: First time human stem cells have been produced via nuclear transfer." ScienceDaily, 15 May 2013.
Seyyed Hassan Eslami Ardakani, Human Cloning in Catholic and Islamic Perspectives, University of Religions and Denominations, 2007
External links
"Variations and voids: the regulation of human cloning around the world" academic article by S. Pattinson & T. Caulfield
Cloning Fact Sheet
General Assembly Adopts United Nations Declaration on Human Cloning By Vote of 84-34-37
How Human Cloning Will Work
Moving Toward the Clonal Man
Should We Really Fear Reproductive Human Cloning
United Nation declares law against cloning.
Biotechnology
Wolfgang Ernst Pauli (25 April 1900 – 15 December 1958) was an Austrian theoretical physicist and a pioneer of quantum physics. In 1945, after having been nominated by Albert Einstein, Pauli received the Nobel Prize in Physics for his "decisive contribution through his discovery of a new law of Nature, the exclusion principle or Pauli principle". The discovery involved spin theory, which is the basis of a theory of the structure of matter.
Early life
Pauli was born in Vienna to a chemist (né Wolf Pascheles, 1869–1955) and his wife, Bertha Camilla Schütz; his sister was Hertha Pauli, a writer and actress. Pauli's middle name was given in honor of his godfather, physicist Ernst Mach. Pauli's paternal grandparents were from prominent Jewish families of Prague; his great-grandfather was the Jewish publisher Wolf Pascheles. Pauli's mother, Bertha Schütz, was raised in her mother's Roman Catholic religion; her father was the Jewish writer Friedrich Schütz. Pauli was raised as a Roman Catholic.
Pauli attended the Döblinger-Gymnasium in Vienna, graduating with distinction in 1918. Two months later, he published his first paper, on Albert Einstein's theory of general relativity. He attended the University of Munich, working under Arnold Sommerfeld, where he received his PhD in July 1921 for his thesis on the quantum theory of ionized diatomic hydrogen ().
Career
Sommerfeld asked Pauli to review the theory of relativity for the Encyklopädie der mathematischen Wissenschaften (Encyclopedia of Mathematical Sciences). Two months after receiving his doctorate, Pauli completed the article, which came to 237 pages. Einstein praised it; published as a monograph, it remains a standard reference on the subject.
Pauli spent a year at the University of Göttingen as the assistant to Max Born, and the next year at the Institute for Theoretical Physics in Copenhagen (later the Niels Bohr Institute). From 1923 to 1928, he was a professor at the University of Hamburg. During this period, Pauli was instrumental in the development of the modern theory of quantum mechanics. In particular, he formulated the exclusion principle and the theory of nonrelativistic spin.
In 1928, Pauli was appointed Professor of Theoretical Physics at ETH Zurich in Switzerland. He was awarded the Lorentz Medal in 1930. He held visiting professorships at the University of Michigan in 1931 and the Institute for Advanced Study in Princeton in 1935.
Jung
At the end of 1930, shortly after his postulation of the neutrino and immediately after his divorce and his mother's suicide, Pauli experienced a personal crisis. In January 1932 he consulted psychiatrist and psychotherapist Carl Jung, who also lived near Zürich. Jung immediately began interpreting Pauli's deeply archetypal dreams, and Pauli became a collaborator of Jung's. He soon began to critique the epistemology of Jung's theory scientifically, and this contributed to a certain clarification of Jung's ideas, especially about synchronicity. A great many of these discussions are documented in the Pauli/Jung letters, today published as Atom and Archetype. Jung's elaborate analysis of more than 400 of Pauli's dreams is documented in Psychology and Alchemy. In 1933 Pauli published his second contribution to the Handbuch der Physik, a treatment of wave mechanics that was considered the definitive account of the new field of quantum physics. Robert Oppenheimer called it "the only adult introduction to quantum mechanics."
The German annexation of Austria in 1938 made Pauli a German citizen, which became a problem for him in 1939 after World War II broke out. In 1940, he tried in vain to obtain Swiss citizenship, which would have allowed him to remain at the ETH.
United States and Switzerland
In 1940, Pauli moved to the United States, where he was employed as a professor of theoretical physics at the Institute for Advanced Study. In 1946, after the war, he became a naturalized U.S. citizen and returned to Zürich, where he mostly remained for the rest of his life. In 1949, he was granted Swiss citizenship.
In 1958, Pauli was awarded the Max Planck medal. The same year, he fell ill with pancreatic cancer. When his last assistant, Charles Enz, visited him at the Rotkreuz hospital in Zürich, Pauli asked him, "Did you see the room number?" It was 137. Throughout his life, Pauli had been preoccupied with the question of why the fine-structure constant, a dimensionless fundamental constant, has a value nearly equal to 1/137. Pauli died in that room on 15 December 1958.
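The numerical coincidence that preoccupied Pauli can be checked directly. The sketch below computes the fine-structure constant from CODATA 2018 constant values (quoted here for illustration, not taken from this article):

```python
import math

# CODATA 2018 values; e and c are exact under the 2019 SI redefinition.
e = 1.602176634e-19           # elementary charge, C
hbar = 1.054571817e-34        # reduced Planck constant, J*s
c = 299792458.0               # speed of light, m/s
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m

# alpha = e^2 / (4 pi eps0 hbar c), a dimensionless number
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
print(1 / alpha)  # ~137.036 — close to, but not exactly, 137
```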
Scientific research
Pauli made many important contributions as a physicist, primarily in the field of quantum mechanics. He seldom published papers, preferring lengthy correspondences with colleagues such as Niels Bohr of the University of Copenhagen and Werner Heisenberg, with whom he had close friendships. Many of his ideas and results were never published and appeared only in his letters, which were often copied and circulated by their recipients. In 1921 Pauli worked with Bohr to create the Aufbau principle, named from the German word for "building up", which describes how an atom's electrons fill successive shells.
Pauli proposed in 1924 a new quantum degree of freedom (or quantum number) with two possible values, to resolve inconsistencies between observed molecular spectra and the developing theory of quantum mechanics. He formulated the Pauli exclusion principle, perhaps his most important work, which stated that no two electrons could exist in the same quantum state, identified by four quantum numbers including his new two-valued degree of freedom. The idea of spin originated with Ralph Kronig. A year later, George Uhlenbeck and Samuel Goudsmit identified Pauli's new degree of freedom as electron spin, a conclusion that Pauli himself long refused to accept.
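In modern terms the principle follows from the antisymmetry of fermionic wavefunctions; the following two-electron sketch is the standard textbook argument, not Pauli's original formulation:

```latex
% Exchanging the two electrons flips the sign of the wavefunction:
%   \psi(x_1, x_2) = -\psi(x_2, x_1).
% If both electrons occupied the same single-particle state, x_1 = x_2 = x,
% then \psi(x, x) = -\psi(x, x), forcing \psi(x, x) = 0: the amplitude for
% two electrons to share all four quantum numbers (n, l, m_l, m_s) vanishes.
\psi(x_1, x_2) = -\,\psi(x_2, x_1)
\quad\Longrightarrow\quad
\psi(x, x) = 0
```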
In 1926, shortly after Heisenberg published the matrix theory of modern quantum mechanics, Pauli used it to derive the observed spectrum of the hydrogen atom. This result was important in securing credibility for Heisenberg's theory.
Pauli introduced the 2×2 Pauli matrices as a basis of spin operators, thus solving the nonrelativistic theory of spin. This work, including the Pauli equation, is sometimes said to have influenced Paul Dirac in his creation of the Dirac equation for the relativistic electron, though Dirac said he had invented the same matrices independently at the time. Dirac later invented similar but larger (4×4) spin matrices for use in his relativistic treatment of fermionic spin.
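The defining algebra of these matrices can be verified numerically. The sketch below uses plain Python complex arithmetic; the `matmul` helper is ad hoc, written here only to keep the example self-contained:

```python
# The three 2x2 Pauli matrices, written with Python complex numbers.
SIGMA_X = [[0, 1], [1, 0]]
SIGMA_Y = [[0, -1j], [1j, 0]]
SIGMA_Z = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Each Pauli matrix squares to the identity...
for s in (SIGMA_X, SIGMA_Y, SIGMA_Z):
    assert matmul(s, s) == I2

# ...and sigma_x * sigma_y = i * sigma_z (the cyclic product structure).
assert matmul(SIGMA_X, SIGMA_Y) == [[1j * v for v in row] for row in SIGMA_Z]
```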
In 1930, Pauli considered the problem of beta decay. In a letter of 4 December to Lise Meitner et al., beginning, "Dear radioactive ladies and gentlemen", he proposed the existence of a hitherto unobserved neutral particle with a small mass, no greater than 1% the mass of a proton, to explain the continuous spectrum of beta decay. In 1934, Enrico Fermi incorporated the particle, which he called a neutrino, "little neutral one" in Fermi's native Italian, into his theory of beta decay. The neutrino was first confirmed experimentally in 1956 by Frederick Reines and Clyde Cowan, two and a half years before Pauli's death. On receiving the news, he replied by telegram: "Thanks for message. Everything comes to him who knows how to wait. Pauli."
In 1940, Pauli re-derived the spin-statistics theorem, a critical result of quantum field theory that states that particles with half-integer spin are fermions, while particles with integer spin are bosons.
In 1949, he published a paper on Pauli–Villars regularization. Regularization is the term for techniques that modify infinite mathematical integrals to make them finite during calculations, so that one can identify whether the intrinsically infinite quantities in the theory (mass, charge, wavefunction) form a finite, and hence calculable, set that can be redefined in terms of their experimental values – a criterion termed renormalization. Regularization removes infinities from quantum field theories and, importantly, also allows the calculation of higher-order corrections in perturbation theory.
Pauli made repeated criticisms of the modern synthesis of evolutionary biology, and his contemporary admirers point to modes of epigenetic inheritance as supporting his arguments.
Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's classical model was also augmented by Pauli and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Pauli said, "Festkörperphysik ist eine Schmutzphysik"—solid-state physics is the physics of dirt.
Pauli was elected a Foreign Member of the Royal Society (ForMemRS) in 1953 and president of the Swiss Physical Society in 1955 for two years. In 1958 he became a foreign member of the Royal Netherlands Academy of Arts and Sciences.
Personality and friendships
The Pauli effect was named after his anecdotal bizarre ability to break experimental equipment simply by being in its vicinity. Pauli was aware of his reputation and was delighted whenever the Pauli effect manifested. These strange occurrences were in line with his controversial investigations into the legitimacy of parapsychology, particularly his collaboration with C. G. Jung on synchronicity. Max Born considered Pauli "only comparable to Einstein himself... perhaps even greater". Einstein declared Pauli his "spiritual heir".
Pauli was famously a perfectionist. This extended not just to his own work, but also to that of his colleagues. As a result, he became known in the physics community as the "conscience of physics", the critic to whom his colleagues were accountable. He could be scathing in his dismissal of any theory he found lacking, often labelling it ganz falsch, "utterly wrong".
But this was not his most severe criticism, which he reserved for theories or theses so unclearly presented as to be untestable or unevaluatable and thus not properly belonging within the realm of science, even though posing as such. They were worse than wrong because they could not be proved wrong. Famously, he once said of such an unclear paper: "It is not even wrong!"
His supposed remark when meeting another leading physicist, Paul Ehrenfest, illustrates this notion of an arrogant Pauli. The two met at a conference for the first time. Ehrenfest was familiar with Pauli's papers and quite impressed with them. After a few minutes of conversation, Ehrenfest remarked, "I think I like your Encyclopedia article [on relativity theory] better than I like you," to which Pauli retorted, "That's strange. With me, regarding you, it is just the opposite." The two became very good friends from then on.
A somewhat warmer picture emerges from this story, which appears in the article on Dirac:
Werner Heisenberg [in Physics and Beyond, 1971] recollects a friendly conversation among young participants at the 1927 Solvay Conference, about Einstein and Planck's views on religion. Wolfgang Pauli, Heisenberg, and Dirac took part in it. Dirac's contribution was a poignant and clear criticism of the political manipulation of religion, that was much appreciated for its lucidity by Bohr, when Heisenberg reported it to him later. Among other things, Dirac said: "I cannot understand why we idle discussing religion. If we are honest – and as scientists honesty is our precise duty – we cannot help but admit that any religion is a pack of false statements, deprived of any real foundation. The very idea of God is a product of human imagination. [ ... ] I do not recognize any religious myth, at least because they contradict one another. [ ... ]" Heisenberg's view was tolerant. Pauli had kept silent, after some initial remarks. But when finally he was asked for his opinion, jokingly he said: "Well, I'd say that also our friend Dirac has got a religion and the first commandment of this religion is 'God does not exist and Paul Dirac is his prophet'". Everybody burst into laughter, including Dirac.
Pauli may have been unconcerned that much of his unpublished work thus went uncredited, but when it came to Heisenberg's world-renowned 1958 lecture at Göttingen on their joint work on a unified field theory, and the press release calling Pauli a mere "assistant to Professor Heisenberg", Pauli was offended and publicly disparaged Heisenberg's abilities as a physicist. The deterioration of their relationship resulted in Heisenberg not attending Pauli's funeral and writing in his autobiography that Pauli's criticisms were overwrought, though ultimately the field theory proved untenable, validating Pauli's criticisms.
Philosophy
In his discussions with Carl Jung, Pauli developed an ontological theory that has been dubbed the "Pauli–Jung Conjecture" and has been seen as a kind of dual-aspect theory. The theory holds that there is "a psychophysically neutral reality" and that mental and physical aspects are derivative of this reality. Pauli thought that elements of quantum physics pointed to a deeper reality that might explain the mind/matter gap and wrote, "we must postulate a cosmic order of nature beyond our control to which both the outward material objects and the inward images are subject."
Pauli and Jung held that this reality was governed by common principles ("archetypes") that appear as psychological phenomena or as physical events. They also held that synchronicities might reveal some of this underlying reality's workings.
Beliefs
He is considered to have been a deist and a mystic. In No Time to Be Brief: A Scientific Biography of Wolfgang Pauli he is quoted as writing to science historian Shmuel Sambursky, "In opposition to the monotheist religions – but in unison with the mysticism of all peoples, including the Jewish mysticism – I believe that the ultimate reality is not personal."
Personal life
In 1929, Pauli married Käthe Margarethe Deppner, a cabaret dancer. The marriage was unhappy, ending in divorce after less than a year. He married again in 1934 to Franziska Bertram (1901–1987). They had no children.
Death
Pauli died of pancreatic cancer on 15 December 1958, at age 58.
Publications
Pauli W, General Principles of Quantum Mechanics, Springer, 1980.
Pauli W, Lectures on Physics, 6 vols, Dover, 2000. Vol 1: Electrodynamics; Vol 2: Optics and the Theory of Electrons; Vol 3: Thermodynamics and the Kinetic Theory of Gases; Vol 4: Statistical Mechanics; Vol 5: Wave Mechanics; Vol 6: Selected Topics in Field Quantization.
Pauli W, Meson Theory of Nuclear Forces, 2nd ed, Interscience Publishers, 1948.
Pauli W, Theory of Relativity, Dover, 1981.
Bibliography
See also
List of Jewish Nobel laureates
References
Further reading
Remo, F. Roth: Return of the World Soul, Wolfgang Pauli, C.G. Jung and the Challenge of Psychophysical Reality [unus mundus], Part 1: The Battle of the Giants. Pari Publishing, 2011.
Remo, F. Roth: Return of the World Soul, Wolfgang Pauli, C.G. Jung and the Challenge of Psychophysical Reality [unus mundus], Part 2: A Psychophysical Theory. Pari Publishing, 2012.
External links
Pauli bio at the University of St Andrews, Scotland
Wolfgang Pauli bio at "Nobel Prize Winners"
Wolfgang Pauli, Carl Jung and Marie-Louise von Franz
Virtual walk-through exhibition of the life and times of Pauli
Annotated bibliography for Wolfgang Pauli from the Alsos Digital Library for Nuclear Issues
Pauli Archives at CERN Document Server
Virtual exhibition at ETH-Bibliothek, Zürich
Key Participants: Wolfgang Pauli – Linus Pauling and the Nature of the Chemical Bond: A Documentary History
Pauli's letter (December 1930) , the hypothesis of the neutrino (online and analyzed, for English version click 'Télécharger')
Pauli exclusion principle with Melvyn Bragg, Frank Close, Michela Massimi, Graham Farmelo "In Our Time 6 April 2017"
Clary, David C. (2022). Schrödinger in Oxford. World Scientific Publishing.
1900 births
1958 deaths
Nobel laureates in Physics
Austrian Nobel laureates
Nobel laureates from Austria-Hungary
Jewish Nobel laureates
20th-century Austrian physicists
Jewish emigrants from Austria after the Anschluss to the United States
Deaths from pancreatic cancer in Switzerland
Academic staff of ETH Zurich
Foreign members of the Royal Society
Institute for Advanced Study visiting scholars
Jewish American physicists
Members of the Royal Netherlands Academy of Arts and Sciences
Naturalised citizens of Switzerland
Lorentz Medal winners
Mathematical physicists
Quantum physicists
Scientists from Vienna
Theoretical physicists
Thermodynamicists
Academic staff of the University of Göttingen
Winners of the Max Planck Medal
Carl Jung
Recipients of the Matteucci Medal
Swiss Nobel laureates
Members of the Royal Swedish Academy of Sciences
Academic staff of the University of Hamburg
Recipients of Franklin Medal
The cmap table is one of the OpenType font tables, which are required to enable correct font functioning. It "defines the mapping of character codes to the glyph index values used in the font."
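Conceptually, a cmap lookup behaves like a dictionary from character code points to glyph indices, with unmapped characters resolving to glyph index 0, the ".notdef" glyph. The sketch below is a toy model with invented glyph numbers, not a parser for real binary cmap subtables (which use compact formats such as format 4 or format 12):

```python
# Toy model of a cmap: Unicode code point -> glyph index.
# The glyph numbers here are invented for illustration.
toy_cmap = {
    ord("A"): 36,
    ord("B"): 37,
    ord("a"): 68,
}

def glyph_index(codepoint: int) -> int:
    # Code points absent from the cmap map to glyph 0, the ".notdef" glyph.
    return toy_cmap.get(codepoint, 0)

print(glyph_index(ord("A")))  # 36
print(glyph_index(0x2603))    # 0 (this toy font has no snowman glyph)
```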
References
Character encoding
Typography
In genetics, the crossover value is the linked frequency of chromosomal crossover between two gene loci (markers). For a fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant and the same is then true for the crossover value which is used in the production of genetic maps.
Origin in cell biology
Crossover implies the exchange of chromosomal segments between non-sister chromatids, in meiosis during the production of gametes. The effect is to assort the alleles on parental chromosomes, so that the gametes carry recombinations of genes different from either parent. This has the overall effect of increasing the variety of phenotypes present in a population.
The process of non-sister chromatid exchanges, including the crossover value, can be observed directly in stained cells, and indirectly by the presence or absence of genetic markers on the chromosomes. The visible crossovers are called chiasmata.
The large-scale effect of crossover is to spread genetic variations within a population, as well as genetic basis for the selection of the most adaptable phenotypes. The crossover value depends on the mutual distance of the genetic loci observed. The crossover value is equal to the recombination value or fraction when the distance between the markers in question is short.
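As an illustration with invented offspring counts, the crossover (recombination) value for two linked loci can be estimated from a testcross as the fraction of recombinant offspring; by convention, 1% recombination corresponds to one map unit (centimorgan) on a genetic map:

```python
# Estimate the recombination (crossover) value from a testcross.
# Offspring counts below are hypothetical.
parental = 430 + 420      # offspring with parental allele combinations
recombinant = 80 + 70     # offspring with recombined allele combinations
total = parental + recombinant

rf = recombinant / total  # recombination fraction between the two markers
print(f"{rf:.1%}")        # 15.0% -> about 15 map units (centimorgans)
```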
See also
Chromosomal crossover
Genetic recombination
References
Classical genetics
Cellular processes
Cytogenetics
Air sex is a performance activity invented in Japan; clothed men and women simulate sexual activity with an invisible partner, often in an exaggerated manner, set to music, and in a competition before an audience. This is somewhat akin to playing air guitar, explaining the name. The creator, J-Taro Sugisaku, says that it was invented in Tokyo in 2006 by a group of bored men without girlfriends.
A report about the phenomenon in the Japanese magazine Weekly Playboy in 2006 was picked up by the English language web site Mainichi Daily News. A video of the air sex "world championship" was uploaded to YouTube in January 2007. This was followed by a segment of the BBC Three documentary show Japanorama in March 2007, subsequently uploaded to YouTube and flagged as "inappropriate for some users".
These videos led to coverage in the blogosphere. The Japanorama video was shown in May 2007 at the JACON 2007 convention in Orlando, Florida, and a spontaneous air sex competition ensued, footage of which also found its way to YouTube.
In late January and early February 2007, a group of American black male teenagers calling themselves "Peer Pressure" produced two videos showing air sex set to music, published on YouTube under the user name "amp6". Fleshbot called these "The Best Air Sex Video of All Time".
Since August 2007, The Alamo Drafthouse has been holding bimonthly Air Sex competitions in the US. In June 2009 The Alamo Drafthouse toured the country on the Air Sex World Championships tour crowning Air Sex Champions in 14 cities. Later that October the first ever world champion was crowned when Shanghai Slammer from Los Angeles out-performed every other city champion. The tour was hosted by comedian Chris Trew. In October 2010 the group was scheduled to tour once more. Photos are posted on the Alamo Drafthouse Flickr page.
References
Comedy
Human sexuality
Sexuality in Japan
Japanese comedy
Bytownite is a calcium-rich member of the plagioclase solid solution series of feldspar minerals, with composition between anorthite and labradorite. It is usually defined as having between 70 and 90% anorthite content (An70–90). Like others of the series, bytownite forms grey to white triclinic crystals commonly exhibiting the typical plagioclase twinning and associated fine striations.
The specific gravity of bytownite varies between 2.74 and 2.75. The refractive index ranges are nα = 1.563–1.572, nβ = 1.568–1.578, and nγ = 1.573–1.583. Precise determination of these properties by chemical, X-ray diffraction, or petrographic analysis is required for identification.
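The anorthite-percentage definition above can be sketched as a simple classifier over the conventional divisions of the albite–anorthite series (boundary conventions vary slightly between sources):

```python
# Classify a plagioclase feldspar by its anorthite (An) percentage,
# using the conventional series divisions.
def plagioclase_name(an_percent: float) -> str:
    if an_percent < 10:
        return "albite"
    if an_percent < 30:
        return "oligoclase"
    if an_percent < 50:
        return "andesine"
    if an_percent < 70:
        return "labradorite"
    if an_percent < 90:
        return "bytownite"
    return "anorthite"

print(plagioclase_name(80))  # bytownite
```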
Occurrence
Bytownite is a rock forming mineral occurring in mafic igneous rocks such as gabbros and anorthosites. It also occurs as phenocrysts in mafic volcanic rocks. It is rare in metamorphic rocks. It is typically associated with pyroxenes and olivine.
The mineral was first described in 1836 and named for an occurrence at Bytown (now Ottawa), Canada. Other noted occurrences in Canada include the Shawmere anorthosite in Foleyet Township, Ontario, and on Yamaska Mountain, near Abbotsford, Quebec. It occurs on the island of Rùm, Scotland, and at Eycott Hill, near Keswick, Cumberland, England. It is reported from Naaraodal, Norway, and in the Bushveld complex of South Africa. It is also found in the Isa Valley, Western Australia.
In the US it is found in the Stillwater igneous complex of Montana; from near Lakeview, Lake County, Oregon. It occurs in the Lucky Cuss mine, Tombstone, Arizona; and from the Grants district, McKinley County, New Mexico. In the eastern US it occurs at Cornwall, Lebanon County, Pennsylvania and Phoenixville, Chester County, Pennsylvania.
References
Hurlbut, Cornelius S.; Klein, Cornelis, 1985, Manual of Mineralogy, 20th ed., Wiley,
External links
Tectosilicates
Calcium minerals
Sodium minerals
Feldspar
Triclinic minerals
Gemstones
Minerals in space group 2
Version information
Technical information
Supported CPU architectures
Supported operating systems
References
Java platform software
Java virtual machine
Software comparisons
A man cave, mancave, or manspace, and less commonly a manland or mantuary is a male retreat or sanctuary in a home, such as a specially equipped garage, spare bedroom, media room, den, basement, or tree house. The term "man cave" describes an area in the home where a man can do as he pleases in a masculine space.
Etymology
The first known published use of the phrase is from March 21, 1992, in the Toronto Star by Joanne Lovering: "With his cave of solitude secured against wife intrusion by cold floors, musty smells and a few strategic cobwebs, he will stay down there for hours nestled in very manly magazines and open boxes of tools. Let's call the basement, man cave." The phrase gained traction with the 1993 publication of Men Are from Mars, Women Are from Venus by John Gray.
Purpose
Man caves have multiple purposes: they are a place to be alone, to indulge in hobbies such as watching sports or playing video games, and to hang out with male friends. According to psychiatrist and author Scott Haltzman, it is important for a man to have a place to call his own.
Writer and handyman Sam Martin explained:
Sociologist Tristan Bridges has interviewed American men and their partners about man caves, and found that many men rarely used their man caves. One interviewee said, "I feel like some day guys from my neighbourhood will congregate here after work and we'll share a beer and chat." When asked who these men from the neighborhood were, the interviewee replied "I don't know". Bridges stated that his research has turned partly into "a story about men's loneliness."
In 2005, Paula Aymer of Tufts University suggested it was the "last bastion of masculinity".
Design
According to several sources, the general architectural and design trend of the early 2000s was for men to take traditionally male-only spaces, and equip them with masculine aesthetic choices. Man cave accessories include refrigerators, vending machines, putting greens, kegerators, giant TVs, musical instruments and gear, pool tables, boxing rings, entertainment centers, bars, and sports memorabilia such as trophies. Upscale sports-themed furnishings are available to outfit a man cave. The room may be large enough to accommodate a big screen television, often used for watching sports games with male friends.
In the book Where Men Hide, which Publishers Weekly described as an affable but only "sometimes thought provoking" guide, author James Twitchell and photographer Ken Ross explored areas where men like to be alone. According to Twitchell, some public male-only spaces, such as the barbershop, are declining and being replaced by spaces such as the "grimy garage." The book suggests that "men make their own spaces for good or ill."
Twitchell focused on communal man cave spaces such as male-only groups in megachurches, possibly a modern-day replacement for declining attendance at male-only clubs such as Masonic lodges. Twitchell noted that some anthropologists have speculated that these spots are a place for men to bond before hunting or war, and where they can "smoke or fart" and tell the "same jokes over and over again."
One man redecorated his space to look like a replica model of the bridge of the Starship Enterprise from the TV show Star Trek, while another man spent over two years and $120,000 to make his man cave into a Batcave.
Garages have typically been a male space since they "present a guy with an opportunity to disappear for hours while never leaving the premises." In 2007, it was common for men to "lavish time, money and attention on fixing this spot up", with the intention of making it more welcoming.
Counterparts
Women have created similar spaces in which they can relax and pursue hobbies. These have been referred to as "she-sheds" and "girl-caves". Some analysts have described the manosphere as an online counterpart to the man-cave.
In popular culture
There have been several examples of man caves in pop culture, including:
Al Bundy's garage from the TV sitcom Married... with Children: Al Bundy's garage was his only sanctuary. It was also used to hold the recurring "No Ma'am" meetings.
Tim Taylor's garage in the TV sitcom Home Improvement: Tim Taylor used to "bring to life all manner of high-powered monster machines."
Bada Bing room in the TV show The Sopranos: Tony Soprano's gang would meet in a windowless "dingy office" at the Bada Bing strip club. It was a "guys-only place within a guys-only place."
Doug's garage in the TV show The King of Queens, Doug Heffernan's garage is equipped with a big screen TV, beer fridge, and a couch where Doug and his friends watch football, baseball, and boxing and drink beer in peace away from Doug's wife, Carrie, and Doug's father-in-law, Arthur Spooner.
Charles Deetz's den in the 1988 movie Beetlejuice. It is the only room that survives an extensive home renovation initiated by his wife and her decorator.
See also
Andron
Bachelor pad
Cabinet (room)
Home tiki bars
Male bonding
Mancation
ManSpace (TV series)
Man Caves, a home renovation program specifically focused on the creation of man caves.
Personal space
Recreation room
Shed
Study (room)
References
Cave
Rooms
1992 neologisms
Twig is a children's fantasy novel written and illustrated by Elizabeth Orton Jones. It was originally published by Macmillan in 1942. The book was reissued in a 60th Anniversary Edition by Purple House Press in 2002.
Plot summary
The novel features Twig, an imaginative little city girl who turns a tomato can into a house for fairies. A little elf comes along to live in the house and, at Twig's request, turns her fairy-sized, though he cannot manage wings. A friendly sparrow fetches the Queen of the fairies to help.
Reception
The New York Times praised the book by saying, "Miss Jones, who knows children well, has told stories with warmth and simplicity." The Horn Book Guide's review of the anniversary edition described the book as "full of magic, full of fun, full of fantasy interwoven with reality, and full of the kind of tenderness which belongs most particularly to the very young."
The book was so popular at the time Elizabeth moved to the town of Mason, New Hampshire that Elizabeth herself was known as "Twig" to her neighbors; many residents did not know her as anything else.
In popular culture
Twig (although mistakenly referring to the elf and not the heroine) is recalled by Kinsey Millhone as being her favorite book when orphaned at age 5. She mentions the tomato can. ("I" is for Innocent by Sue Grafton)
References
1942 American novels
American children's novels
Children's fantasy novels
American fantasy novels
Fiction about size change
1942 children's books
In computer programming, package principles are a way of organizing classes in larger systems to make them more organized and manageable. They aid in understanding which classes should go into which packages (package cohesion) and how these packages should relate with one another (package coupling). Package principles also includes software package metrics, which help to quantify the dependency structure, giving different and/or more precise insights into the overall structure of classes and packages.
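A common example of such package metrics is Robert C. Martin's instability, abstractness, and distance-from-the-main-sequence measures. The sketch below computes them for a hypothetical package with invented coupling counts:

```python
# Robert C. Martin's package metrics, computed for a hypothetical package.
ca = 6   # afferent couplings: classes outside the package that depend on it
ce = 2   # efferent couplings: classes in the package that depend on outside ones
abstract_classes, total_classes = 3, 10

instability = ce / (ca + ce)                     # I: 0 = stable, 1 = unstable
abstractness = abstract_classes / total_classes  # A: 0 = concrete, 1 = abstract
distance = abs(abstractness + instability - 1)   # D': distance from the
                                                 # "main sequence" A + I = 1
print(instability, abstractness, round(distance, 2))  # 0.25 0.3 0.45
```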
See also
SOLID
Robert Cecil Martin
References
Software design
Object-oriented programming | Package principles | Engineering | 104 |
51,501,092 | https://en.wikipedia.org/wiki/NGC%20171 | NGC 171 is a barred spiral galaxy with an apparent magnitude of 12, located around 200 million light-years away in the constellation Cetus. The galaxy has two main medium-wound arms, with a few minor arms, and a fairly bright nucleus and bulge. It was discovered on 20 October 1784 by William Herschel. It is also known as NGC 175.
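From the figures quoted above, a rough absolute magnitude can be derived with the distance-modulus formula m − M = 5 log10(d / 10 pc). This back-of-the-envelope Python sketch ignores extinction and uses the article's round numbers:

```python
import math

m = 12.0                        # apparent magnitude from the article
d_ly = 200e6                    # distance in light-years (approximate)
d_pc = d_ly / 3.2616            # 1 parsec is about 3.2616 light-years
M = m - 5 * math.log10(d_pc / 10)   # distance modulus: m - M = 5 log10(d / 10 pc)
print(round(M, 1))              # about -21.9, i.e. a luminous spiral
```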
See also
List of NGC objects (1–1000)
References
External links
SEDS
0171
Barred spiral galaxies
Cetus
Astronomical objects discovered in 1784
Discoveries by William Herschel | NGC 171 | Astronomy | 108 |
40,053,586 | https://en.wikipedia.org/wiki/Thomas%20Laxton | Thomas Laxton (1830 – 6 August 1893) was a plant breeder and a correspondent of Charles Darwin, best known for his hybridisation of peas.
Thomas Laxton was born in the village of Tinwell, Rutland in 1830. He practised as a solicitor in Stamford before his interest in horticulture led him to become an authority on plant hybridisation. By 1858 Laxton was breeding plants. Initially he worked from St Mary's Hill, Stamford and corresponded with Charles Darwin from this address. Early correspondence with Darwin referred to Laxton's work on hybridising peas. Laxton applied scientific methods to plant breeding, making careful observations of parent plants. He recognised the susceptibility of some plants to disease and the resistance to those diseases shown by their American counterparts.
Laxton made observations on gooseberries and Darwin corresponded with him during the 1860s and 1870s on his work. He is probably most widely remembered for his contribution to strawberry breeding. In 1872 he began to introduce strawberry varieties from his plant breeding work. Laxton moved the business to Bedford in 1879 and in 1885 Kelly's Directory identifies him as a 'seed grower and merchant' listed at 1 Harpur Place, Bedford. In 1884 he introduced his first major strawberry success, 'Noble', a chance seedling of 'Excelsior' and 'American Sharpless', followed by 'King of the Earlies' in 1888. These two varieties formed the parentage of Laxton's 'Royal Sovereign strawberry' in 1892.
Laxton married twice, with three daughters from his first marriage and four sons from his second marriage. Two sons, William Hudson Lowe Laxton (1866–1923) and Edward Augustine Lowe Laxton MBE (1869–1951) went into partnership to form Laxton Brothers in 1888. Thomas Laxton died in August 1893. The brothers recognised their father's horticultural contribution by introducing the pea 'Thomas Laxton' in his honour in 1898.
References
English botanists
English horticulturists
Plant breeding
1830 births
1893 deaths
People from Rutland
Charles Darwin
19th-century British botanists | Thomas Laxton | Chemistry | 415 |
23,712 | https://en.wikipedia.org/wiki/Pentomino | Derived from the Greek word for '5', and "domino", a pentomino (or 5-omino) is a polyomino of order 5; that is, a polygon in the plane made of 5 equal-sized squares connected edge to edge. When rotations and reflections are not considered to be distinct shapes, there are 12 different free pentominoes. When reflections are considered distinct, there are 18 one-sided pentominoes. When rotations are also considered distinct, there are 63 fixed pentominoes.
Pentomino tiling puzzles and games are popular in recreational mathematics. Usually, video games such as Tetris imitations and Rampart consider mirror reflections to be distinct, and thus use the full set of 18 one-sided pentominoes. (Tetris itself uses 4-square shapes.)
Each of the twelve pentominoes satisfies the Conway criterion; hence, every pentomino is capable of tiling the plane. Each chiral pentomino can tile the plane without being reflected.
History
The earliest puzzle containing a complete set of pentominoes appeared in Henry Dudeney's book, The Canterbury Puzzles, published in 1907. The earliest tilings of rectangles with a complete set of pentominoes appeared in the Problemist Fairy Chess Supplement in 1935, and further tiling problems were explored in the PFCS, and its successor, the Fairy Chess Review.
Pentominoes were formally defined by American professor Solomon W. Golomb starting in 1953 and later in his 1965 book Polyominoes: Puzzles, Patterns, Problems, and Packings. They were introduced to the general public by Martin Gardner in his October 1965 Mathematical Games column in Scientific American. Golomb coined the term "pentomino" from the Ancient Greek pénte, "five", and the -omino of domino, fancifully interpreting the "d-" of "domino" as if it were a form of the Greek prefix "di-" (two). Golomb named the 12 free pentominoes after letters of the Latin alphabet that they resemble, using the mnemonic FILiPiNo along with the end of the alphabet (TUVWXYZ).
John Horton Conway proposed an alternate labeling scheme for pentominoes, using O instead of I, Q instead of L, R instead of F, and S instead of N. The resemblance to the letters is more strained, especially for the O pentomino, but this scheme has the advantage of using 12 consecutive letters of the alphabet. It is used by convention in discussing Conway's Game of Life, where, for example, one speaks of the R-pentomino instead of the F-pentomino.
Symmetry
F, L, N, P, and Y can be oriented in 8 ways: 4 by rotation, and 4 more for the mirror image. Their symmetry group consists only of the identity mapping.
T, and U can be oriented in 4 ways by rotation. They have an axis of reflection aligned with the gridlines. Their symmetry group has two elements, the identity and the reflection in a line parallel to the sides of the squares.
V and W also can be oriented in 4 ways by rotation. They have an axis of reflection symmetry at 45° to the gridlines. Their symmetry group has two elements, the identity and a diagonal reflection.
Z can be oriented in 4 ways: 2 by rotation, and 2 more for the mirror image. It has point symmetry, also known as rotational symmetry of order 2. Its symmetry group has two elements, the identity and the 180° rotation.
I can be oriented in 2 ways by rotation. It has two axes of reflection symmetry, both aligned with the gridlines. Its symmetry group has four elements, the identity, two reflections and the 180° rotation. It is the dihedral group of order 2, also known as the Klein four-group.
X can be oriented in only one way. It has four axes of reflection symmetry, aligned with the gridlines and the diagonals, and rotational symmetry of order 4. Its symmetry group, the dihedral group of order 4, has eight elements.
The F, L, N, P, Y, and Z pentominoes are chiral; adding their reflections (F′, J, N′, Q, Y′, S) brings the number of one-sided pentominoes to 18. If rotations are also considered distinct, then the pentominoes from the first category count eightfold, the ones from the next three categories (T, U, V, W, Z) count fourfold, I counts twice, and X counts only once. This results in 5×8 + 5×4 + 2 + 1 = 63 fixed pentominoes.
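The counts above can be checked directly. The following Python sketch (not from the article) grows every fixed pentomino from a single square, then collapses the set under rotations alone (one-sided) and under rotations plus reflections (free):

```python
def normalize(cells):
    # Translate a shape so its minimum x and y are 0; store it as a sorted tuple.
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return tuple(sorted((x - mx, y - my) for x, y in cells))

# Grow all fixed pentominoes by adding one edge-adjacent square four times.
shapes = {((0, 0),)}
for _ in range(4):
    shapes = {normalize(s + (c,))
              for s in shapes
              for x, y in s
              for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
              if c not in s}

def rotate(s):
    return normalize(tuple((y, -x) for x, y in s))

def reflect(s):
    return normalize(tuple((x, -y) for x, y in s))

def canonical(s, with_reflection):
    # Smallest image under the allowed symmetries is a canonical representative.
    images = []
    for _ in range(4):
        s = rotate(s)
        images.append(s)
        if with_reflection:
            images.append(reflect(s))
    return min(images)

one_sided = {canonical(s, False) for s in shapes}
free = {canonical(s, True) for s in shapes}
print(len(shapes), len(one_sided), len(free))  # 63 18 12
```

The three printed counts reproduce the 63 fixed, 18 one-sided, and 12 free pentominoes.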
The eight possible orientations of the F, L, N, P, and Y pentominoes, and the four possible orientations of the T, U, V, W, and Z pentominoes, are illustrated in the accompanying figure.
For 2D figures in general there are two more categories:
Being orientable in 2 ways by a rotation of 90°, with two axes of reflection symmetry, both aligned with the diagonals. This type of symmetry requires at least a heptomino.
Being orientable in 2 ways, which are each other's mirror images, for example a swastika. This type of symmetry requires at least an octomino.
Games
Tiling puzzle (2D)
A standard pentomino puzzle is to tile a rectangular box with the pentominoes, i.e. cover it without overlap and without gaps. Each of the 12 pentominoes has an area of 5 unit squares, so the box must have an area of 60 units. Possible sizes are 6×10, 5×12, 4×15 and 3×20.
The 6×10 case was first solved in 1960 by married couple Colin Brian Haselgrove and Jenifer Haselgrove. There are exactly 2,339 solutions, excluding trivial variations obtained by rotation and reflection of the whole rectangle but including rotation and reflection of a subset of pentominoes (which sometimes provides an additional solution in a simple way). The 5×12 box has 1010 solutions, the 4×15 box has 368 solutions, and the 3×20 box has just 2 solutions (one is shown in the figure, and the other one can be obtained from the solution shown by rotating, as a whole, the block consisting of the L, N, F, T, W, Y, and Z pentominoes).
A somewhat easier (more symmetrical) puzzle, the 8×8 rectangle with a 2×2 hole in the center, was solved by Dana Scott as far back as 1958. There are 65 solutions. Scott's algorithm was one of the first applications of a backtracking computer program. Variations of this puzzle allow the four holes to be placed in any position. One of the external links uses this rule.
Efficient algorithms have been described to solve such problems, for instance by Donald Knuth. Running on modern hardware, these pentomino puzzles can now be solved in mere seconds.
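As a concrete illustration of such a backtracking search (a minimal sketch, not Scott's or Knuth's published programs), the following Python code tiles the 3×20 box by always covering the first empty square in row-major order, using a precomputed bitmask per placement for speed. It counts 8 filled boxes, i.e. the 2 distinct solutions times the 4 symmetries of the rectangle:

```python
W, H = 20, 3
FULL = (1 << (W * H)) - 1

def normalize(cells):
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return tuple(sorted((x - mx, y - my) for x, y in cells))

# Generate the 63 fixed pentomino orientations, grouped into the 12 free pieces.
shapes = {((0, 0),)}
for _ in range(4):
    shapes = {normalize(s + (c,))
              for s in shapes
              for x, y in s
              for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
              if c not in s}

def images(s):
    for _ in range(4):
        s = normalize(tuple((y, -x) for x, y in s))
        yield s
        yield normalize(tuple((x, -y) for x, y in s))

pieces = {}
for s in shapes:
    pieces.setdefault(min(images(s)), []).append(s)

# For every board cell, precompute bitmasks of all placements whose
# row-major-first square lands exactly on that cell.
table = [[] for _ in range(W * H)]
for key, orients in pieces.items():
    for s in orients:
        ax, ay = min(s, key=lambda c: (c[1], c[0]))   # row-major-first cell
        for y0 in range(H):
            for x0 in range(W):
                cells = [(x0 + x - ax, y0 + y - ay) for x, y in s]
                if all(0 <= px < W and 0 <= py < H for px, py in cells):
                    mask = sum(1 << (py * W + px) for px, py in cells)
                    table[y0 * W + x0].append((key, mask))

def solve(filled, used):
    # Cover the first empty square (lowest zero bit) with every legal piece.
    if filled == FULL:
        return 1
    free_bits = ~filled & FULL
    i = (free_bits & -free_bits).bit_length() - 1
    count = 0
    for key, mask in table[i]:
        if key not in used and not (mask & filled):
            count += solve(filled | mask, used | {key})
    return count

total = solve(0, frozenset())
print(total)  # 8 tilings of 3x20 = 2 distinct solutions x 4 rectangle symmetries
```

Always branching on the first empty square means each tiling is counted exactly once, which is the same pruning idea used by classic pentomino solvers.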
Most such patterns are solvable, with the exceptions of placing each pair of holes near two corners of the board in such a way that both corners could only be fitted by a P-pentomino, or forcing a T-pentomino or U-pentomino in a corner such that another hole is created.
The pentomino set is the only free polyomino set that can be packed into a rectangle, with the exception of the trivial monomino and domino sets, each of which consists only of a single rectangle.
Box filling puzzle (3D)
A pentacube is a polycube of five cubes. Of the 29 one-sided pentacubes, exactly twelve pentacubes are flat (1-layer) and correspond to the twelve pentominoes extruded to a depth of one square.
A pentacube puzzle or 3D pentomino puzzle, amounts to filling a 3-dimensional box with the 12 flat pentacubes, i.e. cover it without overlap and without gaps. Since each pentacube has a volume of 5 unit cubes, the box must have a volume of 60 units. Possible sizes are 2×3×10 (12 solutions), 2×5×6 (264 solutions) and 3×4×5 (3940 solutions).
Alternatively one could also consider combinations of five cubes that are themselves 3D, i.e., those which include more than just the 12 "flat" single-layer combinations of cubes. In addition to the 12 flat pentacubes formed by extruding the pentominoes, there are 6 sets of chiral pairs and 5 additional pieces, forming a total of 29 potential pentacube pieces containing 145 unit cubes in all (29×5). Since the only box with a volume of 145 units measures 29×5×1, a single layer into which the non-flat pieces cannot fit, no box can be packed with the complete set.
Commercial board games
There are board games of skill based entirely on pentominoes. Such games are often simply called "Pentominoes".
One of the games is played on an 8×8 grid by two or three players. Players take turns in placing pentominoes on the board so that they do not overlap with existing tiles and no tile is used more than once. The objective is to be the last player to place a tile on the board. This version of Pentominoes is called "Golomb's Game".
The two-player version was weakly solved in 1996 by Hilarie Orman. It was proved to be a first-player win by examining around 22 billion board positions.
Pentominoes, and similar shapes, are also the basis of a number of other tiling games, patterns and puzzles. For example, the French board game Blokus is played with 4 colored sets of polyominoes, each consisting of every pentomino (12), tetromino (5), triomino (2), domino (1) and monomino (1). Like the game Pentominoes, the goal is to use all of your tiles, and a bonus is given if the monomino is played on the last move. The player with the fewest blocks remaining wins.
The game of Cathedral is also based on polyominoes.
Parker Brothers released a multi-player pentomino board game called Universe in 1966. Its theme is based on a deleted scene from the 1968 film 2001: A Space Odyssey in which an astronaut is playing a two-player pentomino game against the HAL 9000 computer (a scene with a different astronaut playing chess was retained). The front of the board game box features scenes from the movie as well as a caption describing it as the "game of the future". The game comes with four sets of pentominoes in red, yellow, blue, and white. The board has two playable areas: a base 10x10 area for two players with an additional 25 squares (two more rows of 10 and one offset row of five) on each side for more than two players.
Game manufacturer Lonpos has a number of games that use the same pentominoes, but on different game planes. Their 101 Game has a 5 x 11 plane. By changing the shape of the plane, thousands of puzzles can be played, although only a relatively small selection of these puzzles are available in print.
Video games
Tetris was inspired by pentomino puzzles, although it uses four-block tetrominoes. Some Tetris clones and variants, like the game 5s included with Plan 9 from Bell Labs, and Magical Tetris Challenge, do use pentominoes.
Daedalian Opus uses pentomino puzzles throughout the game.
Literature
Pentominoes were featured in a prominent subplot of Arthur C. Clarke's 1975 novel Imperial Earth. Clarke also wrote an essay in which he described the game and how he got hooked on it.
They were also featured in Blue Balliett's Chasing Vermeer, which was published in 2003 and illustrated by Brett Helquist, as well as its sequels, The Wright 3 and The Calder Game.
In The New York Times crossword puzzle for June 27, 2012, the clue for an 11-letter word at 37 across was "Complete set of 12 shapes formed by this puzzle's black squares."
See also
Previous and Next orders
Tetromino
Hexomino
Others
Tiling puzzle
Cathedral board game
Solomon W. Golomb
Notes
References
Chasing Vermeer, with information about the book Chasing Vermeer and a click-and-drag pentomino board.
External links
Pentomino configurations and solutions An exhaustive listing of solutions to many of the classic problems showing how each solution relates to the others.
Mathematical games
Polyforms
Solved games | Pentomino | Mathematics | 2,695 |
41,952,275 | https://en.wikipedia.org/wiki/Artin%27s%20criterion | In mathematics, Artin's criteria are a collection of related necessary and sufficient conditions on deformation functors which prove the representability of these functors as either Algebraic spaces or as Algebraic stacks. In particular, these conditions are used in the construction of the moduli stack of elliptic curves and the construction of the moduli stack of pointed curves.
Notation and technical notes
Throughout this article, let be a scheme of finite-type over a field or an excellent DVR. will be a category fibered in groupoids, will be the groupoid lying over .
A stack is called limit preserving if it is compatible with filtered direct limits in , meaning given a filtered system there is an equivalence of categories. An element of is called an algebraic element if it is the henselization of an -algebra of finite type.
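The displayed equivalence appears to have been lost in extraction. One common way to state the limit-preserving condition (a reconstruction, not necessarily the article's exact formula) is that for a filtered system of rings {A_i} the natural functor

```latex
\[
  \operatorname*{colim}_{i}\, \mathcal{F}\bigl(\operatorname{Spec} A_i\bigr)
  \;\xrightarrow{\;\simeq\;}\;
  \mathcal{F}\Bigl(\operatorname{Spec}\bigl(\operatorname*{colim}_{i} A_i\bigr)\Bigr)
\]
```

is an equivalence of categories.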
A limit preserving stack over is called an algebraic stack if
For any pair of elements the fiber product is represented as an algebraic space
There is a scheme locally of finite type, and an element which is smooth and surjective such that for any the induced map is smooth and surjective.
See also
Artin approximation theorem
Schlessinger's theorem
References
Deformation theory and algebraic stacks - overview of Artin's papers and related research
Algebraic geometry | Artin's criterion | Mathematics | 254 |
53,780,399 | https://en.wikipedia.org/wiki/Maksym%20Radziwill | Maksym Radziwill (born 24 February 1988) is a Polish-Canadian mathematician specializing in number theory. He is currently a professor of mathematics at Northwestern University.
Life
He was born in Moscow in 1988. His family moved to Poland in 1991 where he graduated from high school and in 2006 to Canada. Radziwill graduated from McGill University in Montreal in 2009, and in 2013 earned a PhD under Kannan Soundararajan at Stanford University in California. In 2013–2014, he was at the Institute for Advanced Study in Princeton, New Jersey as a visiting member, and in 2014 became a Hill assistant professor at Rutgers University. In 2016, he became an assistant professor at McGill. In 2018, he became Professor of Mathematics at California Institute of Technology, and in 2022 he moved to the University of Texas at Austin. In 2023, Radziwill joined Northwestern University as the Wayne and Elizabeth Jones Professor of Mathematics.
Honors and awards
In 2016, along with Kaisa Matomäki of the University of Turku, Radziwill was awarded the SASTRA Ramanujan Prize.
In February 2017, Maksym Radziwill was awarded the prestigious Sloan Fellowship.
In 2018, he was awarded the Coxeter–James Prize by the Canadian Mathematical Society. In 2018 he was invited with Matomäki to present their work at the International Congress of Mathematicians.
With Matomäki, he is one of five winners of the 2019 New Horizons Prize for Early-Career Achievement in Mathematics, associated with the Breakthrough Prize in Mathematics.
In the same year he was awarded the Stefan Banach Prize (2018) of the Polish Mathematical Society. For 2023 he received the Cole Prize in Number Theory of the AMS. In 2023, he was also invited to give a Łojasiewicz Lecture by the Jagiellonian University.
References
1988 births
Living people
Number theorists
Recipients of the SASTRA Ramanujan Prize
Canadian mathematicians
Polish mathematicians | Maksym Radziwill | Mathematics | 402 |
1,779,325 | https://en.wikipedia.org/wiki/List%20of%20office%20suites | In computing, an office suite is a collection of productivity software usually containing at least a word processor, spreadsheet and a presentation program. There are many different brands and types of office suites.
Office suites
Free and open source suites
AndrOpen Office – available for Android
Apache OpenOffice – available for Linux, macOS and Windows
Calligra Suite – available for FreeBSD, Linux, macOS and Windows
Collabora Online – available for Android, ChromeOS, iOS, iPadOS, Linux, macOS, online and Windows
LibreOffice – available for Linux, macOS and Windows, and unofficial: Android, ChromeOS, FreeBSD, Haiku, iOS, iPadOS, OpenBSD, NetBSD and Solaris
NeoOffice – available for macOS
Nextcloud – online collaboration suite, available for Android, iOS, Linux, macOS and Windows
ONLYOFFICE – available for Android, iOS, Linux, macOS, online and Windows
Freeware and proprietary suites
Ability Office – available for Windows
Google Workspace – available for Android, ChromeOS, iOS, iPadOS, Linux, macOS, online and Windows
Hancom Office – available for Windows
Ichitaro – a Japanese-language suite available for Windows
iWork – available for iOS, iPadOS, macOS and online
Lark – available for iOS, iPadOS, macOS, online, Windows, and Android
Microsoft 365 – available for Android, iOS, iPadOS, macOS, online and Windows
MobiSystems OfficeSuite – available for Android, iOS and Windows
Polaris Office – available for iOS, macOS and Windows
SoftMaker Office – available for Android, iOS, iPadOS, Linux, macOS and Windows
Tiki Wiki CMS Groupware – online content management
WordPerfect Office – available for Windows
WPS Office – available for Android, iOS, macOS, Linux and Windows
Discontinued office suites
AppleWorks
Aster*x
AUIS – an office suite developed by Carnegie Mellon University and named after Andrew Carnegie
Breadbox Office – DOS software
Corel WordPerfect for DOS
EasyOffice
Hancom Office Suite (formerly ThinkFree Office)
IBM Lotus SmartSuite
IBM Lotus Symphony
IBM Works – an office suite for the IBM OS/2 operating system
Interleaf
Jambo OpenOffice, an abandoned project to translate the OpenOffice.org project into Swahili
Lotus Jazz – Mac sister product to Lotus Symphony
Lotus Symphony
Microsoft Works
Picsel Smart Office
QuickOffice
Siag Office
Simdesk – online office suite from Simdesk Technologies, Inc.
StarOffice – continued as open source suite OpenOffice.org then LibreOffice
See also
Comparison of office suites
List of word processors
List of spreadsheets
List of presentation programs
office suites
References | List of office suites | Technology | 559 |
9,951,136 | https://en.wikipedia.org/wiki/Exonic%20splicing%20silencer | An exonic splicing silencer (ESS) is a short region (usually 4-18 nucleotides) of an exon and is a cis-regulatory element. A set of 103 hexanucleotides known as FAS-hex3 has been shown to be abundant in ESS regions. ESSs inhibit or silence splicing of the pre-mRNA and contribute to constitutive and alternative splicing. To elicit the silencing effect, ESSs recruit proteins that will negatively affect the core splicing machinery.
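Locating candidate silencers of this kind is, computationally, a simple motif scan. The Python sketch below is purely illustrative: the two hexamers stand in for entries of a set like FAS-hex3 and are not the real list.

```python
# Illustrative stand-ins for ESS hexamers; the real FAS-hex3 set has 103 entries.
ESS_HEXAMERS = {"TAGACA", "TAGGGT"}

def scan_ess(seq, motifs=ESS_HEXAMERS, k=6):
    """Return (position, hexamer) pairs where a candidate ESS motif occurs."""
    seq = seq.upper()
    return [(i, seq[i:i + k])
            for i in range(len(seq) - k + 1)
            if seq[i:i + k] in motifs]

hits = scan_ess("ggctagacattagggtcc")
print(hits)  # [(3, 'TAGACA'), (10, 'TAGGGT')]
```

Real motif finders additionally score matches and account for overlapping or degenerate motifs, but the sliding-window idea is the same.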
Mechanism of action
Exonic splicing silencers work by inhibiting the splicing of pre-mRNA strands or promoting exon skipping. The single-stranded pre-mRNA molecules need to have their intronic regions removed and their exonic regions joined in order to be translated. ESSs silence splice sites adjacent to them by interfering with the components of the core splicing complex, such as the snRNPs U1 and U2. This causes proteins that negatively influence splicing to be recruited to the splicing machinery.
ESSs have four general roles:
inhibiting exon inclusion
inhibiting intron retention
regulating alternative 5' splice site usage
regulating alternative 3' splice site usage
Role in genetic diseases
Myotonic dystrophy
Myotonic dystrophy (MD) is most noticeably caused by inheriting an unstable CTG triplet expansion in the DMPK gene. In healthy genotypes two isoforms of an insulin receptor mRNA transcript exist. The isoform IR-A lacks exon 11 and is expressed ubiquitously in cells. Isoform IR-B contains exon 11 and is expressed in cells of the liver, muscles, kidney, and adipocytes. In individuals with MD, IR-A is upregulated in high amounts in skeletal muscle leading to the disease phenotype.
The ESS nucleotide sequence exists within intron 10 and is thought to be dependent on the CUG triplet repeat in order to silence the splicing of exon 11. Silencing exon 11 splicing leads to the increased transcription of the IR-A isoform.
Cystic fibrosis
Mutations in the CFTR gene are responsible for causing cystic fibrosis. A particular mutation occurs in the CFTR pre-mRNA and leads to the exclusion of exon 9; mRNA lacking this exon encodes a truncated protein (a protein shortened by a mutation).
Exclusion of exon 9 is mediated by a polymorphic locus with variable TG repeats and stretches of T nucleotides – this is abbreviated as (TG)mT(n). This locus is an exonic splicing silencer and is located upstream of the exon 9 splice site (site 3c). The silencing is related to the high number of TG repeats and decreased stretches of T repeats (T tracts). A combination of both these factors is shown to increase levels of exon skipping.
The TDP-43 protein is responsible for physically silencing the exon splicing site once it is recruited by the exonic splicing silencer (TG)mT(n). TDP-43 is a DNA binding protein and repressor, it binds to the TG repeat to cause exon 9 skipping. The role of the T tracts is not well understood.
Spinal muscular atrophy
Spinal muscular atrophy is caused by the homozygous loss of the SMN1 gene. Humans have two isoforms of the SMN (survival motor neuron) gene, SMN1 and SMN2. The SMN1 gene produces a complete transcript, while SMN2 produces a transcript without exon 7 which results in a truncated protein.
The ESS that contributes to the disease phenotype is the UAGACA nucleotide sequence. This sequence arises when a C-to-T mutation occurs at position +6 in exon 7 of the SMN2 gene. This transition point mutation leads to the exclusion of exon 7 from the mRNA transcript; it is also the only difference between the SMN2 and SMN1 genes.
The UAGACA ESS is thought to work by disrupting an exonic splicing enhancer and attracting proteins that inhibit splicing by binding sequences on exon 7.
Ataxia telangiectasia
Mutations in the ATM gene are responsible for ataxia telangiectasia. These mutations are generally single base pair substitutions, deletions, or micro-insertions. A 4-nucleotide deletion within intron 20 of the ATM gene disrupts an exonic splicing silencer and causes the inclusion of a 65-nucleotide cryptic exon in the mature transcript. The inclusion of the cryptic exon results in protein truncation and atypical splicing patterns.
References
Genetics | Exonic splicing silencer | Biology | 1,013 |
68,809,157 | https://en.wikipedia.org/wiki/Ghostwriter%20%28hacker%20group%29 | Ghostwriter, also known as UNC1151 and Storm-0257 by Microsoft, is a hacker group allegedly originating from Belarus. According to the cybersecurity firm Mandiant, the group has spread disinformation critical of NATO since at least 2016.
History
The name Ghostwriter comes from the group's first attacks, whereby they would steal credentials of journalists or publishers and publish fake articles using those credentials. Hence, the group effectively became unwanted ghostwriters for those with stolen credentials. UNC1151 is an internal Mandiant designation given to uncategorized clusters of "cyber intrusion activity."
The European Union has blamed this group for hacking German government officials.
The EU's foreign policy chief Josep Borrell has threatened Russia with sanctions.
According to Serhiy Demedyuk, deputy secretary of the national security and defense council of Ukraine, the group was responsible for defacement of Ukrainian government websites in January 2022.
In February 2022 The Register reported that a Ukrainian CERT had announced that the group was targeting "private ‘i.ua’ and ‘meta.ua’ [email] accounts of Ukrainian military personnel and related individuals" as part of a phishing attack during the invasion of Ukraine. Mandiant said that two domains mentioned by the CERT, i[.]ua-passport[.]space and id[.]bigmir[.]space were known command and control domains of the group. Mandiant also said "We are able to tie the infrastructure reported by CERT.UA to UNC1151, but have not seen the phishing messages directly. However, UNC1151 has targeted Ukraine and especially its military extensively over the past two years, so this activity matches their historical pattern."
Characteristics and techniques
The group has executed spear-phishing campaigns against members of legitimate press to infiltrate the content management systems of those organizations. Then, the group uses the system to publish their own fake stories.
References
Hacker groups
Hacking in the 2020s | Ghostwriter (hacker group) | Technology | 414 |
60,441,018 | https://en.wikipedia.org/wiki/Muscatel%20%28tea%29 | Muscatel refers to a distinctive flavor found in some Darjeeling teas, especially the second-flush teas. It has been described as a "distinct sweet flavour" that is not present in other flushes or tea from other localities, a "musky spiciness," "a unique muscat-like fruitiness in aroma and flavour," or "dried raisins with a hay like finish." Though difficult to describe, it is prized by tea aficionados.
The flavor develops in part through the action of sap-sucking insects, jassids and thrips, which partly damage the young tea leaves. The tea plant then produces terpenes as an insect repellent, and this higher concentration of terpenes produces the muscatel flavor.
References
Tea
Terpenes and terpenoids | Muscatel (tea) | Chemistry | 172 |
47,520 | https://en.wikipedia.org/wiki/Coccolithophore | Coccolithophores, or coccolithophorids, are single-celled organisms which are part of the phytoplankton, the autotrophic (self-feeding) component of the plankton community. They form a group of about 200 species, and belong either to the kingdom Protista, according to Robert Whittaker's five-kingdom system, or clade Hacrobia, according to a newer biological classification system. Within the Hacrobia, the coccolithophores are in the phylum or division Haptophyta, class Prymnesiophyceae (or Coccolithophyceae). Coccolithophores are almost exclusively marine, are photosynthetic and mixotrophic, and exist in large numbers throughout the sunlight zone of the ocean.
Coccolithophores are the most productive calcifying organisms on the planet, covering themselves with a calcium carbonate shell called a coccosphere. However, the reasons they calcify remain elusive. One key function may be that the coccosphere offers protection against microzooplankton predation, which is one of the main causes of phytoplankton death in the ocean.
Coccolithophores are ecologically important, and biogeochemically they play significant roles in the marine biological pump and the carbon cycle. Depending on habitat, they can produce up to 40 percent of the local marine primary production. They are of particular interest to those studying global climate change because, as ocean acidity increases, their coccoliths may become even more important as a carbon sink. Management strategies are being employed to prevent eutrophication-related coccolithophore blooms, as these blooms lead to a decrease in nutrient flow to lower levels of the ocean.
The most abundant species of coccolithophore, Emiliania huxleyi, belongs to the order Isochrysidales and family Noëlaerhabdaceae. It is found in temperate, subtropical, and tropical oceans. This makes E. huxleyi an important part of the planktonic base of a large proportion of marine food webs. It is also the fastest growing coccolithophore in laboratory cultures. It is studied for the extensive blooms it forms in nutrient-depleted waters after the reformation of the summer thermocline, and for its production of molecules known as alkenones that are commonly used by earth scientists as a means to estimate past sea surface temperatures.
Overview
Coccolithophores (or coccolithophorids, from the adjective) form a group of about 200 phytoplankton species. They belong either to the kingdom Protista, according to Robert Whittaker's Five kingdom classification, or clade Hacrobia, according to the newer biological classification system. Within the Hacrobia, the coccolithophores are in the phylum or division Haptophyta, class Prymnesiophyceae (or Coccolithophyceae). Coccolithophores are distinguished by special calcium carbonate plates (or scales) of uncertain function called coccoliths, which are also important microfossils. However, there are Prymnesiophyceae species lacking coccoliths (e.g. in genus Prymnesium), so not every member of Prymnesiophyceae is a coccolithophore.
Coccolithophores are single-celled phytoplankton that produce small calcium carbonate (CaCO3) scales (coccoliths) which cover the cell surface in the form of a spherical coating, called a coccosphere. Many species are also mixotrophs, and are able to photosynthesise as well as ingest prey.
Coccolithophores have been an integral part of marine plankton communities since the Jurassic. Today, coccolithophores' inorganic carbon fixation (calcification) amounts to ~1–10% of total carbon fixation (calcification plus photosynthesis) in the surface ocean, and they contribute ~50% of pelagic CaCO3 sediments. Their calcareous shell increases the sinking velocity of photosynthetically fixed carbon into the deep ocean by ballasting organic matter. At the same time, the biogenic precipitation of calcium carbonate during coccolith formation reduces the total alkalinity of seawater and releases CO2. Thus, coccolithophores play an important role in the marine carbon cycle by influencing the efficiency of the biological carbon pump and the oceanic uptake of atmospheric CO2.
As of 2021, it is not known why coccolithophores calcify and how their ability to produce coccoliths is associated with their ecological success. The most plausible benefit of having a coccosphere seems to be a protection against predators or viruses. Viral infection is an important cause of phytoplankton death in the oceans, and it has recently been shown that calcification can influence the interaction between a coccolithophore and its virus. The major predators of marine phytoplankton are microzooplankton like ciliates and dinoflagellates. These are estimated to consume about two-thirds of the primary production in the ocean and microzooplankton can exert a strong grazing pressure on coccolithophore populations. Although calcification does not prevent predation, it has been argued that the coccosphere reduces the grazing efficiency by making it more difficult for the predator to utilise the organic content of coccolithophores. Heterotrophic protists are able to selectively choose prey on the basis of its size or shape and through chemical signals and may thus favor other prey that is available and not protected by coccoliths.
Structure
Coccolithophores are spherical cells about 5–100 micrometres across, enclosed by calcareous plates called coccoliths, which are about 2–25 micrometres across. Each cell contains two brown chloroplasts which surround the nucleus.
Enclosed in each coccosphere is a single cell with membrane-bound organelles. Two large chloroplasts with brown pigment are located on either side of the cell and surround the nucleus, mitochondria, Golgi apparatus, endoplasmic reticulum, and other organelles. Each cell also has two flagellar structures, which are involved not only in motility, but also in mitosis and formation of the cytoskeleton. In some species, a functional or vestigial haptonema is also present. This structure, which is unique to haptophytes, coils and uncoils in response to environmental stimuli. Although poorly understood, it has been proposed to be involved in prey capture.
Ecology
Life history strategy
The complex life cycle of coccolithophores is known as a haplodiplontic life cycle, and is characterized by an alternation of both asexual and sexual phases. The asexual phase is known as the haploid phase, while the sexual phase is known as the diploid phase. During the haploid phase, coccolithophores produce haploid cells through mitosis. These haploid cells can then divide further through mitosis or undergo sexual reproduction with other haploid cells. The resulting diploid cell goes through meiosis to produce haploid cells again, starting the cycle over. With coccolithophores, asexual reproduction by mitosis is possible in both phases of the life cycle, which is a contrast with most other organisms that have alternating life cycles. Both abiotic and biotic factors may affect the frequency with which each phase occurs.
Coccolithophores reproduce asexually through binary fission. In this process the coccoliths from the parent cell are divided between the two daughter cells. There have been suggestions stating the possible presence of a sexual reproduction process due to the diploid stages of the coccolithophores, but this process has never been observed.
Whether coccolithophores follow K- or r-selected strategies depends on their life cycle stage. When coccolithophores are diploid, they are r-selected; in this phase they tolerate a wider range of nutrient compositions. When they are haploid they are K-selected and are often more competitive in stable, low-nutrient environments. Most coccolithophores are K-strategists and are usually found in nutrient-poor surface waters. They are poor competitors when compared to other phytoplankton and thrive in habitats where other phytoplankton would not survive. These two stages in the life cycle of coccolithophores occur seasonally, with more nutrition available in warmer seasons and less in cooler seasons. This type of life cycle is known as a complex heteromorphic life cycle.
Global distribution
Coccolithophores occur throughout the world's oceans. Their distribution varies vertically by stratified layers in the ocean and geographically by different temporal zones. While most modern coccolithophores can be located in their associated stratified oligotrophic conditions, the most abundant areas of coccolithophores where there is the highest species diversity are located in subtropical zones with a temperate climate. While water temperature and the amount of light intensity entering the water's surface are the more influential factors in determining where species are located, the ocean currents also can determine the location where certain species of coccolithophores are found.
Although motility and colony formation vary according to the life cycle of different coccolithophore species, there is often alternation between a motile, haploid phase, and a non-motile diploid phase. In both phases, the organism's dispersal is largely due to ocean currents and circulation patterns.
Within the Pacific Ocean, approximately 90 species have been identified, with six separate zones relating to different Pacific currents that contain unique groupings of different species of coccolithophores. The highest diversity of coccolithophores in the Pacific Ocean was in an area considered the Central North Zone, between 30°N and 5°N, composed of the North Equatorial Current and the Equatorial Countercurrent. These two currents move in opposite directions, east and west, allowing for a strong mixing of waters and allowing a large variety of species to populate the area.
In the Atlantic Ocean, the most abundant species are E. huxleyi and Florisphaera profunda with smaller concentrations of the species Umbellosphaera irregularis, Umbellosphaera tenuis and different species of Gephyrocapsa. Deep-dwelling coccolithophore species abundance is greatly affected by nutricline and thermocline depths. These coccolithophores increase in abundance when the nutricline and thermocline are deep and decrease when they are shallow.
The complete distribution of coccolithophores is currently not known and some regions, such as the Indian Ocean, are not as well studied as other locations in the Pacific and Atlantic Oceans. It is also very hard to explain distributions due to multiple constantly changing factors involving the ocean's properties, such as coastal and equatorial upwelling, frontal systems, benthic environments, unique oceanic topography, and pockets of isolated high or low water temperatures.
The upper photic zone is low in nutrient concentration, high in light intensity and penetration, and usually higher in temperature. The lower photic zone is high in nutrient concentration, low in light intensity and penetration and relatively cool. The middle photic zone is an area that contains the same values in between that of the lower and upper photic zones.
Great Calcite Belt
The Great Calcite Belt of the Southern Ocean is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores, despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups.
The Great Calcite Belt, defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean, plays an important role in climate fluctuations, accounting for over 60% of the Southern Ocean area (30–60° S). The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO2) alongside the North Atlantic and North Pacific oceans.
Effect of global climate change on distribution
Recent studies show that climate change has direct and indirect impacts on coccolithophore distribution and productivity. Coccolithophores will inevitably be affected by the increasing temperatures and thermal stratification of the top layer of the ocean, since these are prime controls on their ecology, although it is not clear whether global warming would result in a net increase or decrease of coccolithophores. As they are calcifying organisms, it has been suggested that ocean acidification due to increasing carbon dioxide could severely affect coccolithophores. Recent decades have nonetheless seen a sharp increase in coccolithophore populations.
Role in the food web
Coccolithophores are one of the more abundant primary producers in the ocean. As such, they are a large contributor to the primary productivity of the tropical and subtropical oceans; however, exactly how much has yet to be quantified.
Dependence on nutrients
The ratio between the concentrations of nitrogen, phosphorus and silicate in particular areas of the ocean dictates competitive dominance within phytoplankton communities. Each ratio essentially tips the odds in favor of either diatoms or other groups of phytoplankton, such as coccolithophores. A low silicate to nitrogen and phosphorus ratio allows coccolithophores to outcompete other phytoplankton species; however, when silicate to phosphorus to nitrogen ratios are high, coccolithophores are outcompeted by diatoms. The increase in agricultural processes leads to eutrophication of waters and thus to coccolithophore blooms in these high-nitrogen, high-phosphorus, low-silicate environments.
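The qualitative rule described above can be written down as a simple decision function. The threshold below is hypothetical, chosen only for illustration: the text gives no numeric cutoff, only "low" versus "high" silicate ratios.

```python
# Toy encoding of the competitive rule from the text: low silicate-to-nutrient
# ratios favor coccolithophores, high ratios favor diatoms.
# The numeric threshold is a hypothetical placeholder, not a measured value.
def likely_dominant(si_to_nutrient_ratio: float, threshold: float = 1.0) -> str:
    """Return the phytoplankton group the rule of thumb suggests will dominate."""
    if si_to_nutrient_ratio > threshold:
        return "diatoms"
    return "coccolithophores"

print(likely_dominant(0.1))  # coccolithophores
print(likely_dominant(5.0))  # diatoms
```

In an agriculturally eutrophied setting (nitrogen and phosphorus high, silicate unchanged) the ratio falls, which is why such waters favor coccolithophore blooms in this picture.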
Impact on water column productivity
The calcite in calcium carbonate allows coccoliths to scatter more light than they absorb. This has two important consequences: 1) surface waters become brighter, meaning they have a higher albedo, and 2) there is induced photoinhibition, meaning photosynthetic production is diminished due to an excess of light. In case 1), a high concentration of coccoliths leads to a simultaneous increase in surface water temperature and decrease in the temperature of deeper waters. This results in more stratification in the water column and a decrease in the vertical mixing of nutrients. However, a 2012 study estimated that the overall effect of coccolithophores on the increase in radiative forcing of the ocean is less than that from anthropogenic factors. Therefore, the overall result of large blooms of coccolithophores is a decrease in water column productivity, rather than a contribution to global warming.
Predator-prey interactions
Coccolithophore predators include the common predators of all phytoplankton, including small fish, zooplankton, and shellfish larvae. Viruses specific to coccolithophores have been isolated from several locations worldwide and appear to play a major role in spring bloom dynamics.
Toxicity
No environmental evidence of coccolithophore toxicity has been reported, but they belong to the class Prymnesiophyceae, which contains orders with toxic species. Toxic species have been found in the genera Prymnesium Massart and Chrysochromulina Lackey. Members of the genus Prymnesium have been found to produce haemolytic compounds, the agent responsible for toxicity. Some of these toxic species are responsible for large fish kills and can be accumulated in organisms such as shellfish, transferring the toxins through the food chain. In laboratory tests for toxicity, members of the oceanic coccolithophore genera Emiliania, Gephyrocapsa, Calcidiscus and Coccolithus were shown to be non-toxic, as were species of the coastal genus Hymenomonas; however, several species of Pleurochrysis and Jomonlithus, both coastal genera, were toxic to Artemia.
Community interactions
Coccolithophorids are predominantly found as single, free-floating haploid or diploid cells.
Competition
Most phytoplankton need sunlight and nutrients from the ocean to survive, so they thrive in areas with large inputs of nutrient rich water upwelling from the lower levels of the ocean. Most coccolithophores require sunlight only for energy production, and have a higher ratio of nitrate uptake over ammonium uptake (nitrogen is required for growth and can be used directly from nitrate but not ammonium). Because of this they thrive in still, nutrient-poor environments where other phytoplankton are starving. Trade-offs associated with these faster growth rates include a smaller cell radius and lower cell volume than other types of phytoplankton.
Viral infection and coevolution
Giant DNA-containing viruses are known to lytically infect coccolithophores, particularly E. huxleyi. These viruses, known as E. huxleyi viruses (EhVs), appear to infect the coccosphere coated diploid phase of the life cycle almost exclusively. It has been proposed that as the haploid organism is not infected and therefore not affected by the virus, the co-evolutionary "arms race" between coccolithophores and these viruses does not follow the classic Red Queen evolutionary framework, but instead a "Cheshire Cat" ecological dynamic. More recent work has suggested that viral synthesis of sphingolipids and induction of programmed cell death provides a more direct link to study a Red Queen-like coevolutionary arms race at least between the coccolithoviruses and diploid organism.
Evolution and diversity
Coccolithophores are members of the clade Haptophyta, a sister clade to Centrohelida; both belong to Haptista. The oldest known coccolithophores date from the Late Triassic, around the Norian-Rhaetian boundary. Diversity steadily increased over the course of the Mesozoic, reaching its apex during the Late Cretaceous. However, there was a sharp drop during the Cretaceous-Paleogene extinction event, when more than 90% of coccolithophore species became extinct. Coccolithophores reached another, lower apex of diversity during the Paleocene-Eocene Thermal Maximum, but have subsequently declined since the Oligocene due to decreasing global temperatures, with species that produced large and heavily calcified coccoliths most heavily affected.
Coccolithophore shells
Exoskeleton: coccospheres and coccoliths
Each coccolithophore encloses itself in a protective shell of coccoliths, calcified scales which make up its exoskeleton or coccosphere. The coccoliths are created inside the coccolithophore cell; while some species maintain a single layer throughout life, only producing new coccoliths as the cell grows, others continually produce and shed coccoliths.
Composition
The primary constituent of coccoliths is calcium carbonate, or chalk. Calcium carbonate is transparent, so the organisms' photosynthetic activity is not compromised by encapsulation in a coccosphere.
Formation
Coccoliths are produced by a biomineralization process known as coccolithogenesis. Generally, calcification of coccoliths occurs in the presence of light, and these scales are produced much more during the exponential phase of growth than the stationary phase. Although not yet entirely understood, the biomineralization process is tightly regulated by calcium signaling. Calcite formation begins in the Golgi complex, where protein templates nucleate the formation of CaCO3 crystals and complex acidic polysaccharides control the shape and growth of these crystals. As each scale is produced, it is exported in a Golgi-derived vesicle and added to the inner surface of the coccosphere. This means that the most recently produced coccoliths may lie beneath older coccoliths.
Depending upon the phytoplankton's stage in the life cycle, two different types of coccoliths may be formed. Holococcoliths are produced only in the haploid phase, lack radial symmetry, and are composed of anywhere from hundreds to thousands of similar minute (ca 0.1 μm) rhombic calcite crystals. These crystals are thought to form at least partially outside the cell. Heterococcoliths occur only in the diploid phase, have radial symmetry, and are composed of relatively few complex crystal units (fewer than 100). Although they are rare, combination coccospheres, which contain both holococcoliths and heterococcoliths, have been observed in the plankton recording coccolithophore life cycle transitions. Finally, the coccospheres of some species are highly modified with various appendages made of specialized coccoliths.
Function
While the exact function of the coccosphere is unclear, many potential functions have been proposed. Most obviously, coccoliths may protect the phytoplankton from predators. The coccosphere also appears to help them maintain a more stable pH: during photosynthesis carbon dioxide is removed from the water, making it more basic, while calcification also removes carbon dioxide but the chemistry behind it leads to the opposite pH reaction, making the water more acidic. The combination of photosynthesis and calcification therefore evens out pH changes. In addition, these exoskeletons may confer an advantage in energy production, as coccolithogenesis seems highly coupled with photosynthesis. Organic precipitation of calcium carbonate from bicarbonate solution produces free carbon dioxide directly within the cellular body of the alga; this additional source of gas is then available to the coccolithophore for photosynthesis. It has been suggested that coccoliths may provide a cell-wall-like barrier to isolate intracellular chemistry from the marine environment. More specific defensive properties of coccoliths may include protection from osmotic changes, chemical or mechanical shock, and short-wavelength light. It has also been proposed that the added weight of multiple layers of coccoliths allows the organism to sink to lower, more nutrient-rich layers of the water and, conversely, that coccoliths add buoyancy, stopping the cell from sinking to dangerous depths. Coccolith appendages have also been proposed to serve several functions, such as inhibiting grazing by zooplankton.
Uses
Coccoliths are the main component of the Chalk, a Late Cretaceous rock formation which outcrops widely in southern England and forms the White Cliffs of Dover, and of other similar rocks in many other parts of the world. At the present day, sedimented coccoliths are a major component of the calcareous oozes that cover up to 35% of the ocean floor and are kilometres thick in places. Because of their abundance and wide geographic ranges, the coccoliths which make up the layers of this ooze, and the chalky sediment formed as it is compacted, serve as valuable microfossils.
Calcification, the biological production of calcium carbonate (CaCO3), is a key process in the marine carbon cycle. Coccolithophores are the major planktonic group responsible for pelagic CaCO3 production. The diagram on the right shows the energetic costs of coccolithophore calcification:
(A) Transport processes include the transport into the cell from the surrounding seawater of primary calcification substrates Ca2+ and HCO3− (black arrows) and the removal of the end product H+ from the cell (gray arrow). The transport of Ca2+ through the cytoplasm to the CV is the dominant cost associated with calcification.
(B) Metabolic processes include the synthesis of CAPs (gray rectangles) by the Golgi complex (white rectangles) that regulate the nucleation and geometry of CaCO3 crystals. The completed coccolith (gray plate) is a complex structure of intricately arranged CAPs and CaCO3 crystals.
(C) Mechanical and structural processes account for the secretion of the completed coccoliths that are transported from their original position adjacent to the nucleus to the cell periphery, where they are transferred to the surface of the cell. The costs associated with these processes are likely to be comparable to organic-scale exocytosis in noncalcifying haptophyte algae.
The diagram on the left shows the benefits of coccolithophore calcification. (A) Accelerated photosynthesis includes CCM (1) and enhanced light uptake via scattering of scarce photons for deep-dwelling species (2). (B) Protection from photodamage includes sunshade protection from ultraviolet (UV) light and photosynthetic active radiation (PAR) (1) and energy dissipation under high-light conditions (2). (C) Armor protection includes protection against viral/bacterial infections (1) and grazing by selective (2) and nonselective (3) grazers.
The degree to which calcification can adapt to ocean acidification is presently unknown. Cell physiological examinations found the essential H+ efflux (stemming from the use of HCO3− for intra-cellular calcification) to become more costly with ongoing ocean acidification as the electrochemical H+ inside-out gradient is reduced and passive proton outflow impeded. Adapted cells would have to activate proton channels more frequently, adjust their membrane potential, and/or lower their internal pH. Reduced intra-cellular pH would severely affect the entire cellular machinery and require other processes (e.g. photosynthesis) to co-adapt in order to sustain H+ efflux. The obligatory H+ efflux associated with calcification may therefore pose a fundamental constraint on adaptation, which may potentially explain why "calcification crises" were possible during long-lasting (thousands of years) CO2 perturbation events even though evolutionary adaptation to changing carbonate chemistry conditions is possible within one year. Unraveling these fundamental constraints and the limits of adaptation should be a focus in future coccolithophore studies, because knowing them is the key information required to understand to what extent the calcification response to carbonate chemistry perturbations can be compensated by evolution.
Silicate- or cellulose-armored functional groups such as diatoms and dinoflagellates do not need to sustain the calcification-related H+ efflux. Thus, they probably do not need to adapt in order to keep costs for the production of structural elements low. On the contrary, dinoflagellates (except for calcifying species), with generally inefficient CO2-fixing RuBisCO enzymes, may even profit from chemical changes, since photosynthetic carbon fixation as their source of structural elements in the form of cellulose should be facilitated by the ocean acidification-associated CO2 fertilization. Under the assumption that any form of shell/exoskeleton protects phytoplankton against predation, non-calcareous armors may be the preferable solution to realize protection in a future ocean.
The diagram on the right is a representation of how the comparative energetic effort for armor construction in diatoms, dinoflagellates and coccolithophores appear to operate. The frustule (diatom shell) seems to be the most inexpensive armor under all circumstances because diatoms typically outcompete all other groups when silicate is available. The coccosphere is relatively inexpensive under sufficient [CO2], high [HCO3−], and low [H+] because the substrate is saturating and protons are easily released into seawater. In contrast, the construction of thecal elements, which are organic (cellulose) plates that constitute the dinoflagellate shell, should rather be favored at high H+ concentrations because these usually coincide with high [CO2]. Under these conditions dinoflagellates could down-regulate the energy-consuming operation of carbon concentrating mechanisms to fuel the production of organic source material for their shell. Therefore, a shift in carbonate chemistry conditions toward high [CO2] may promote their competitiveness relative to coccolithophores. However, such a hypothetical gain in competitiveness due to altered carbonate chemistry conditions would not automatically lead to dinoflagellate dominance because a huge number of factors other than carbonate chemistry have an influence on species composition as well.
Defence against predation
Currently, the evidence supporting or refuting a protective function of the coccosphere against predation is limited. Some researchers found that overall microzooplankton predation rates were reduced during blooms of the coccolithophore Emiliania huxleyi, while others found high microzooplankton grazing rates on natural coccolithophore communities. In 2020, researchers found that in situ ingestion rates of microzooplankton on E. huxleyi did not differ significantly from those on similar sized non-calcifying phytoplankton. In laboratory experiments the heterotrophic dinoflagellate Oxyrrhis marina preferred calcified over non-calcified cells of E. huxleyi, which was hypothesised to be due to size selective feeding behaviour, since calcified cells are larger than non-calcified E. huxleyi. In 2015, Harvey et al. investigated predation by the dinoflagellate O. marina on different genotypes of non-calcifying E. huxleyi as well as calcified strains that differed in the degree of calcification. They found that the ingestion rate of O. marina was dependent on the genotype of E. huxleyi that was offered, rather than on their degree of calcification. In the same study, however, the authors found that predators which preyed on non-calcifying genotypes grew faster than those fed with calcified cells. In 2018, Strom et al. compared predation rates of the dinoflagellate Amphidinium longum on calcified relative to naked E. huxleyi prey and found no evidence that the coccosphere prevents ingestion by the grazer. Instead, ingestion rates were dependent on the offered genotype of E. huxleyi. Altogether, these two studies suggest that the genotype has a strong influence on ingestion by the microzooplankton species, but if and how calcification protects coccolithophores from microzooplankton predation could not be fully clarified.
Importance in global climate change
Impact on the carbon cycle
Coccolithophores have both long- and short-term effects on the carbon cycle. The production of coccoliths requires the uptake of dissolved inorganic carbon and calcium. Calcium carbonate and carbon dioxide are produced from calcium and bicarbonate by the following chemical reaction:

Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O
Because coccolithophores are photosynthetic organisms, they are able to use some of the CO2 released in the calcification reaction for photosynthesis.
However, the production of calcium carbonate drives surface alkalinity down, and in conditions of low alkalinity the CO2 is instead released back into the atmosphere.
As a result of this, researchers have postulated that large blooms of coccolithophores may contribute to global warming in the short term. A more widely accepted idea, however, is that over the long term coccolithophores contribute to an overall decrease in atmospheric CO2 concentrations. During calcification two carbon atoms are taken up and one of them becomes trapped as calcium carbonate. This calcium carbonate sinks to the bottom of the ocean in the form of coccoliths and becomes part of the sediment; thus, coccolithophores provide a sink for emitted carbon, mediating the effects of greenhouse gas emissions.
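The carbon bookkeeping described above follows directly from the calcification stoichiometry. As a minimal summary in equation form (standard carbonate chemistry, not taken from any single cited study):

```latex
% Of the two carbon atoms entering the reaction as bicarbonate, one is
% buried as calcite in sinking coccoliths and one is returned as CO2,
% part of which photosynthesis can re-fix.
\[
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^{-}} \longrightarrow
\underbrace{\mathrm{CaCO_3}}_{\text{1 C exported as coccoliths}} +
\underbrace{\mathrm{CO_2}}_{\text{1 C released}} + \mathrm{H_2O}
\]
```

The short-term versus long-term debate amounts to which of the two carbon atoms dominates: the CO2 released at the surface, or the carbon exported to the sediment.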
Evolutionary responses to ocean acidification
Research also suggests that ocean acidification due to increasing concentrations of CO2 in the atmosphere may affect the calcification machinery of coccolithophores. This may not only affect immediate events such as increases in population or coccolith production, but also may induce evolutionary adaptation of coccolithophore species over longer periods of time. For example, coccolithophores use H+ ion channels to constantly pump H+ ions out of the cell during coccolith production. This allows them to avoid acidosis, as coccolith production would otherwise produce a toxic excess of H+ ions. When the function of these ion channels is disrupted, the coccolithophores stop the calcification process to avoid acidosis, thus forming a feedback loop. Low ocean alkalinity impairs ion channel function and therefore places evolutionary selective pressure on coccolithophores, making them (and other ocean calcifiers) vulnerable to ocean acidification. In 2008, field evidence indicating an increase in calcification of newly formed ocean sediments containing coccolithophores bolstered the first ever experimental data showing that an increase in ocean CO2 concentration results in an increase in calcification of these organisms.
Decreasing coccolith mass is related to both the increasing concentrations of CO2 and the decreasing concentrations of carbonate ions (CO32−) in the world's oceans. This lower calcification is assumed to put coccolithophores at an ecological disadvantage. Some species like Calcidiscus leptoporus, however, are not affected in this way, while the most abundant coccolithophore species, E. huxleyi, might be (study results are mixed). Also, highly calcified coccolithophorids have been found in conditions of low CaCO3 saturation, contrary to predictions. Understanding the effects of increasing ocean acidification on coccolithophore species is essential to predicting the future chemical composition of the ocean, particularly its carbonate chemistry. Viable conservation and management measures will come from future research in this area. Groups like the European-based CALMARO are monitoring the responses of coccolithophore populations to varying pH levels and working to determine environmentally sound measures of control.
Impact on microfossil record
Coccolith fossils are prominent and valuable calcareous microfossils. They are the largest global source of biogenic calcium carbonate, and significantly contribute to the global carbon cycle. They are the main constituent of chalk deposits such as the white cliffs of Dover.
Of particular interest are fossils dating back to the Palaeocene-Eocene Thermal Maximum 55 million years ago. This period is thought to correspond most directly to the current levels of CO2 in the ocean. Finally, field evidence of coccolithophore fossils in rock was used to show that the deep-sea fossil record bears a rock record bias similar to the one that is widely accepted to affect the land-based fossil record.
Impact on the oceans
The coccolithophorids help to regulate the temperature of the oceans. They thrive in warm seas and release dimethyl sulfide (DMS) into the air; its oxidation products act as cloud condensation nuclei that help to produce thicker clouds that block the sun. When the oceans cool, the number of coccolithophorids decreases and the amount of cloud cover also decreases. When there are fewer clouds blocking the sun, the temperature rises. This maintains the balance and equilibrium of nature.
See also
CLAW hypothesis
Dimethyl sulfide
Dimethylsulfoniopropionate
Emiliania huxleyi virus 86
Pleurochrysis carterae
References
External links
Sources of detailed information
Nannotax3 – illustrated guide to the taxonomy of coccolithophores and other nannofossils.
INA — International Nannoplankton Association
Emiliania huxleyi Home Page
Introductions to coccolithophores
University of California, Berkeley. Museum of Paleontology: "Introduction to the Prymnesiophyta".
The Paleontology Portal: Calcareous Nanoplankton
RadioLab – podcast on coccolithophores
Haptophytes
Microfossils
Extant Late Triassic first appearances
Planktology
Sedimentology
L Prize

The L-Prize competition was designed to spur development of LED light replacements for 60W incandescent lamps and PAR38 halogen lamps as well as an ultra-efficient "21st Century Lamp". It was established by the United States Department of Energy (DOE) as directed by the Energy Independence and Security Act of 2007. The original competition, launched in 2008, focused on an LED replacement for the common 60-watt light bulb and this L-Prize was awarded in 2011. The PAR38 competition was launched but received no entries and was suspended in 2014. The 21st Century Lamp competition was never opened.
The current L-Prize Competition launched in 2021 and targets commercial-sector lighting, which accounts for about 36% of lighting energy use in the United States.
2007 L-Prize Competition
60W Competition
The original L-Prize Competition, launched in 2008, sought LED replacements for the common 60-watt light bulb. In late 2009, the L-Prize competition received its first entry, from Philips Lighting North America. The 2,000 samples submitted by Philips went through a rigorous 18-month evaluation that included industry-standard photometric testing, stress testing under extreme conditions, and long-term lumen maintenance testing at elevated temperatures. In addition, field assessments were conducted by L-Prize partners to see how the product performed in real-world settings.
The Philips entry met all requirements and, in August 2011, was declared the L-Prize winner in the 60W replacement category. The product became available in the retail market on April 22, 2012 (Earth Day). The lamp was comparable to a 60W incandescent in color quality (CRI = 93, CCT = 2727 K), light distribution, and light output (940 lumens) but consumed less than 10W (a savings of 83%), and at 25,000 hours of testing, the actual lumen maintenance was 100%, with chromaticity change at less than 0.002.
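The efficacy and savings figures quoted above can be checked with simple arithmetic. The sketch below uses the nominal values from the text (940 lumens, 10 W consumption, 60 W incumbent), which are rounded figures rather than measured lab data:

```python
# Verify the quoted performance of the L-Prize-winning lamp using the
# nominal round numbers from the text (assumed values, not measurements).
incandescent_watts = 60.0  # incumbent 60W incandescent
led_watts = 10.0           # the winning lamp consumed "less than 10W"
lumens = 940.0             # quoted light output

efficacy_lm_per_w = lumens / led_watts                   # luminous efficacy
savings_fraction = 1.0 - led_watts / incandescent_watts  # fractional savings

print(f"efficacy: {efficacy_lm_per_w:.0f} lm/W")   # efficacy: 94 lm/W
print(f"energy savings: {savings_fraction:.0%}")   # energy savings: 83%
```

For comparison, a typical 60W incandescent delivers roughly 13–16 lm/W, so the winning lamp was around six times more efficacious at comparable light output.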
PAR38 and 21st Century Lamp Competitions
The L-Prize competition to develop LED replacements for PAR38 halogen lamps was launched but received no entries and was suspended in 2014. The 21st Century Lamp competition was never opened.
2021 L-Prize Competition
The goal of the new L-Prize is to spur development of next-generation LED lighting systems for commercial buildings. Challenging technical requirements put in place by DOE are intended to stimulate creative approaches that would raise the bar for efficacy, quality of light, connectivity, and life cycle environmental impact. In addition to technical innovation, the L-Prize encourages entrants to address diversity, equity, and inclusion (DEI) in the business practice, the supply chain, or other areas where they can effect change through the core business model and operations. The current L-Prize seeks interoperable lighting systems that demonstrate exceptional achievement in all areas.
The current L-Prize has three distinct phases, and competitors can enter any or all phases.
Concept Phase
The L-Prize Concept Phase invited innovative concept proposals documenting a luminaire and lighting system of the future. The Concept Phase completed in February 2022, with four winners announced by Energy Secretary Jennifer Granholm:
Project Tango, submitted by QuarkStar of Las Vegas, Nevada. The networked, white-tunable luminaire concept leverages innovations in optics, LED, and power conversion technology to deliver high efficacy, exceptional quality of light, and precise control of light distribution.
Sustainable and Connected Troffer Retrofit, submitted by Orion Energy Systems of Jacksonville, Florida. The concept offers a high-efficacy, networked LED luminaire with advanced controls that can be retrofitted in less than two minutes to an existing fluorescent luminaire.
Laterally Symmetrical Level 3 Engine for 3D Printing, submitted by Smash the Bulb/Bridgelux of Mountain View, California. This 3D-printed semi-indirect luminaire concept uses a high-performance light engine that requires no secondary optics and delivers high efficacy and excellent quality of light; an innovative optical design that reduces losses and addresses glare; and a luminaire housing that can be 3D printed on the job site.
Papaya Modular Lighting Ecosystem, submitted by Papaya of Evanston, Illinois. This highly modular luminaire platform designed by a team from outside the lighting industry uses a unique community-based approach; an all open-source aspect offers opportunities for innovators of all types to participate in evolving and innovating this lighting solution over time.
Prototype Phase
The L-Prize Prototype Phase invited physical, working prototype systems that emphasized technological innovation and challenged competitors to think outside the standard forms, materials, and price points of commercially available products. Competitors could submit products in the Luminaire Track, the Connected Systems Track, or both. Entries were scored across multiple criteria.
Luminaire Track Winners
Generation Flex: Light Without Compromise, submitted by Signify Innovation of Bridgewater, New Jersey
Low-Carbon Biodegradable Luminaire, submitted by Lightly of Boothwyn, Pennsylvania
Helios HPR-LP160, submitted by Grid interactive Efficient Building Alliance (GiEBA) of San Diego, California
Connected Systems Track Winners
Interact Next-Gen: Light the Way to Building Goals, submitted by Signify Innovation of Bridgewater, New Jersey
Autani: Insights 4REAL, with Sensing by Leviton, submitted by Autani and Leviton of Columbia, Maryland
Bluetooth Mesh Wireless Lighting Control System, submitted by McWong International of Sacramento, California
Manufacturing and Installation Phase
The Manufacturing and Installation Phase is designed to reward production and installation of products meeting the L-Prize technical requirements. Up to four competitors earning the most points based on innovation, U.S. content, production, and installation will share an award of $10 million. Like the Prototype Phase, the Manufacturing and Installation Phase will feature two separate tracks: a luminaire track and a connected systems track. Under the rules, competitors could submit an entry for one track or separate entries for both tracks, and DOE will evaluate each track’s submissions independently.
References
External links
Lighting Prize: U.S. Department of Energy American-Made Challenges
Lighting Prize (L-Prize)
L-Prize Competition. U.S. Department of Energy.
Department of Energy Announces Phase 1 Winners of L-Prize Lighting Competition. U.S. Department of Energy.
LEDs Magazine.com: L-Prize Concept Phase Winners Propose Next-Generation SSL Designs (2022 article)
Get a Grip on Lighting: Technical Updates Series: The L-Prize (2022 podcast)
L Prize Competition Drives Technology Innovation, Energy Savings. U.S. Department of Energy.
National Geographic.com: Philips Wins L Prize, but the Race Is Still on for a Better Bulb (2011 article)
New York Times.com: Is This the Light Bulb of the Future? (2009 article)
Time magazine.com: The $10 Million Lightbulb (2009 article)
American science and technology awards
Challenge awards
Design awards
Energy-saving lighting
LED lamps
United States Department of Energy | L Prize | Engineering | 1,417 |
36,237,983 | https://en.wikipedia.org/wiki/DreamPlug | DreamPlug is a compact and low power plug computer running Debian Linux, based on Marvell's Kirkwood 88F6281 ARM9E SoC. It is intended to be a device that could act as a web server, a printing server or any other network service. It uses micro-SD internal storage and an external Secure Digital slot, but also offers USB ports and a Serial ATA port to connect external disks.
Improvements over the GuruPlug
The DreamPlug is an evolution of the GuruPlug, based on the SheevaPlug platform.
Apart from internal processor changes, the DreamPlug features a new case design, second Gigabit Ethernet, eSATA, SD slot (and internal microSD slot replacing the internal NAND), audio in/out, and a removable PSU.
Early versions of the DreamPlug had 802.11 b/g Wi-Fi and Bluetooth 2.1+EDR and shipped with a 2 GB microSD card.
The current version has 802.11 b/g/n Wi-Fi and Bluetooth 3.0 and ships with a 4 GB microSD card.
See also
Plug computer
References
External links
Google code project for DreamPlug
Linux-based devices
Computer storage devices
Computer-related introductions in 2011 | DreamPlug | Technology | 265 |
237,132 | https://en.wikipedia.org/wiki/Ribozyme | Ribozymes (ribonucleic acid enzymes) are RNA molecules that have the ability to catalyze specific biochemical reactions, including RNA splicing in gene expression, similar to the action of protein enzymes. The 1982 discovery of ribozymes demonstrated that RNA can be both genetic material (like DNA) and a biological catalyst (like protein enzymes), and contributed to the RNA world hypothesis, which suggests that RNA may have been important in the evolution of prebiotic self-replicating systems.
The most common activities of natural or in vitro evolved ribozymes are the cleavage (or ligation) of RNA and DNA and peptide bond formation. For example, the smallest ribozyme known (GUGGC-3') can aminoacylate a GCCU-3' sequence in the presence of PheAMP. Within the ribosome, ribozymes function as part of the large subunit ribosomal RNA to link amino acids during protein synthesis. They also participate in a variety of RNA processing reactions, including RNA splicing, viral replication, and transfer RNA biosynthesis. Examples of ribozymes include the hammerhead ribozyme, the VS ribozyme, leadzyme, and the hairpin ribozyme.
Researchers who are investigating the origins of life through the RNA world hypothesis have been working on discovering a ribozyme with the capacity to self-replicate, which would require it to have the ability to catalytically synthesize polymers of RNA. This should be able to happen in prebiotically plausible conditions with high rates of copying accuracy to prevent degradation of information but also allowing for the occurrence of occasional errors during the copying process to allow for Darwinian evolution to proceed.
Attempts have been made to develop ribozymes as therapeutic agents, as enzymes which target defined RNA sequences for cleavage, as biosensors, and for applications in functional genomics and gene discovery.
Discovery
Before the discovery of ribozymes, enzymes, which were defined as catalytic proteins, were the only known biological catalysts. In 1967, Carl Woese, Francis Crick, and Leslie Orgel were the first to suggest that RNA could act as a catalyst. This idea was based upon the discovery that RNA can form complex secondary structures. The first ribozymes were found in the intron of an RNA transcript, which removed itself from the transcript, as well as in the RNA component of the RNase P complex, which is involved in the maturation of pre-tRNAs. In 1989, Thomas R. Cech and Sidney Altman shared the Nobel Prize in Chemistry for their "discovery of catalytic properties of RNA". The term ribozyme was first introduced by Kelly Kruger et al. in a paper published in Cell in 1982.
It had been a firmly established belief in biology that catalysis was reserved for proteins. However, the idea of RNA catalysis is motivated in part by the old question regarding the origin of life: Which comes first, enzymes that do the work of the cell or nucleic acids that carry the information required to produce the enzymes? The concept of "ribonucleic acids as catalysts" circumvents this problem. RNA, in essence, can be both the chicken and the egg.
In the 1980s, Thomas Cech, at the University of Colorado Boulder, was studying the excision of introns in a ribosomal RNA gene in Tetrahymena thermophila. While trying to purify the enzyme responsible for the splicing reaction, he found that the intron could be spliced out in the absence of any added cell extract. As much as they tried, Cech and his colleagues could not identify any protein associated with the splicing reaction. After much work, Cech proposed that the intron sequence portion of the RNA could break and reform phosphodiester bonds. At about the same time, Sidney Altman, a professor at Yale University, was studying the way tRNA molecules are processed in the cell when he and his colleagues isolated an enzyme called RNase-P, which is responsible for conversion of a precursor tRNA into the active tRNA. Much to their surprise, they found that RNase-P contained RNA in addition to protein and that RNA was an essential component of the active enzyme. This was such a foreign idea that they had difficulty publishing their findings. The following year, Altman demonstrated that RNA can act as a catalyst by showing that the RNase-P RNA subunit could catalyze the cleavage of precursor tRNA into active tRNA in the absence of any protein component.
Since Cech's and Altman's discovery, other investigators have discovered other examples of self-cleaving RNA or catalytic RNA molecules. Many ribozymes have either a hairpin – or hammerhead – shaped active center and a unique secondary structure that allows them to cleave other RNA molecules at specific sequences. It is now possible to make ribozymes that will specifically cleave any RNA molecule. These RNA catalysts may have pharmaceutical applications. For example, a ribozyme has been designed to cleave the RNA of HIV. If such a ribozyme were made by a cell, all incoming virus particles would have their RNA genome cleaved by the ribozyme, which would prevent infection.
Structure and mechanism
Despite having only four choices for each monomer unit (nucleotides), compared to 20 amino acid side chains found in proteins, ribozymes have diverse structures and mechanisms. In many cases they are able to mimic the mechanism used by their protein counterparts. For example, in self cleaving ribozyme RNAs, an in-line SN2 reaction is carried out using the 2’ hydroxyl group as a nucleophile attacking the bridging phosphate and causing 5’ oxygen of the N+1 base to act as a leaving group. In comparison, RNase A, a protein that catalyzes the same reaction, uses a coordinating histidine and lysine to act as a base to attack the phosphate backbone.
Like many protein enzymes, metal binding is also critical to the function of many ribozymes. Often these interactions use both the phosphate backbone and the base of the nucleotide, causing drastic conformational changes. There are two mechanism classes for the cleavage of a phosphodiester backbone in the presence of metal. In the first mechanism, the internal 2’-OH group attacks the phosphorus center in an SN2 mechanism. Metal ions promote this reaction by first coordinating the phosphate oxygen and later stabilizing the oxyanion. The second mechanism also follows an SN2 displacement, but the nucleophile comes from water or exogenous hydroxyl groups rather than the RNA itself. The smallest ribozyme is UUU, which can promote the cleavage between G and A of the GAAA tetranucleotide via the first mechanism in the presence of Mn2+. The reason why this trinucleotide (rather than the complementary tetramer) catalyzes this reaction may be because the UUU-AAA pairing is the weakest and most flexible among the 64 trinucleotide conformations, which provides the binding site for Mn2+.
Phosphoryl transfer can also be catalyzed without metal ions. For example, pancreatic ribonuclease A and hepatitis delta virus (HDV) ribozymes can catalyze the cleavage of RNA backbone through acid-base catalysis without metal ions. Hairpin ribozyme can also catalyze the self-cleavage of RNA without metal ions, but the mechanism for this is still unclear.
Ribozymes can also catalyze the formation of peptide bonds between adjacent amino acids by lowering the activation entropy.
Activities
Although ribozymes are quite rare in most cells, their roles are sometimes essential to life. For example, the functional part of the ribosome, the biological machine that translates RNA into proteins, is fundamentally a ribozyme, composed of RNA tertiary structural motifs that are often coordinated to metal ions such as Mg2+ as cofactors. In a model system, there is no requirement for divalent cations in a five-nucleotide RNA catalyzing trans-phenylalanation of a four-nucleotide substrate with 3 base pairs complementary with the catalyst, where the catalyst/substrate were devised by truncation of the C3 ribozyme.
The best-studied ribozymes are probably those that cut themselves or other RNAs, as in the original discovery by Cech and Altman. However, ribozymes can be designed to catalyze a range of reactions, many of which may occur in life but have not been discovered in cells.
RNA may catalyze folding of the pathological protein conformation of a prion in a manner similar to that of a chaperonin.
Ribozymes and the origin of life
RNA can also act as a hereditary molecule, which encouraged Walter Gilbert to propose that in the distant past, the cell used RNA as both the genetic material and the structural and catalytic molecule rather than dividing these functions between DNA and protein as they are today; this hypothesis is known as the "RNA world hypothesis" of the origin of life. Since nucleotides and RNA (and thus ribozymes) can arise from inorganic precursors, they are candidates for the first enzymes, and in fact, the first "replicators" (i.e., information-containing macromolecules that replicate themselves). An example of a self-replicating ribozyme that ligates two substrates to generate an exact copy of itself was described in 2002.
The discovery of the catalytic activity of RNA solved the "chicken and egg" paradox of the origin of life, solving the problem of origin of peptide and nucleic acid central dogma. According to this scenario, at the origin of life, all enzymatic activity and genetic information encoding was done by one molecule: RNA.
Ribozymes have been produced in the laboratory that are capable of catalyzing the synthesis of other RNA molecules from activated monomers under very specific conditions, these molecules being known as RNA polymerase ribozymes. The first RNA polymerase ribozyme was reported in 1996, and was capable of synthesizing RNA polymers up to 6 nucleotides in length. Mutagenesis and selection has been performed on an RNA ligase ribozyme from a large pool of random RNA sequences, resulting in isolation of the improved "Round-18" polymerase ribozyme in 2001 which could catalyze RNA polymers now up to 14 nucleotides in length. Upon application of further selection on the Round-18 ribozyme, the B6.61 ribozyme was generated and was able to add up to 20 nucleotides to a primer template in 24 hours, until it decomposes by cleavage of its phosphodiester bonds.
The rate at which ribozymes can polymerize an RNA sequence increases substantially when polymerization takes place within a micelle.
The next ribozyme discovered was the "tC19Z" ribozyme, which can add up to 95 nucleotides with a fidelity of 0.0083 mutations/nucleotide. Next, the "tC9Y" ribozyme was discovered, which was able to synthesize RNA strands up to 206 nucleotides long under eutectic-phase conditions at below-zero temperatures, conditions previously shown to promote ribozyme polymerase activity.
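A per-nucleotide fidelity can be translated into the chance of producing a fully error-free copy. The sketch below assumes errors are independent at each position and applies the tC19Z figure of 0.0083 mutations/nucleotide to copies of 95 and 206 nucleotides:

```python
# Chance that a copy of length n carries no mutation, assuming each
# nucleotide is copied independently with the tC19Z error rate.
error_rate = 0.0083  # mutations per nucleotide

for n in (95, 206):
    p_error_free = (1 - error_rate) ** n
    print(f"{n} nt: {p_error_free:.0%} error-free")  # ~45% and ~18%
```

Even a modest per-base error rate compounds quickly with length, which is why improving fidelity matters as much as extending product length for a prospective self-replicating polymerase.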
The RNA polymerase ribozyme (RPR) called tC9-4M was able to polymerize RNA chains longer than itself (i.e. longer than 177 nt) in magnesium ion concentrations close to physiological levels, whereas earlier RPRs required prebiotically implausible concentrations of up to 200 mM. The only factor required for it to achieve this was the presence of a very simple amino acid polymer called lysine decapeptide.
The most complex RPR synthesized by that point was called 24-3, which was newly capable of polymerizing a substantial variety of nucleotide sequences and of navigating complex secondary structures of RNA substrates inaccessible to previous ribozymes. In fact, this experiment was the first to use a ribozyme to synthesize a tRNA molecule. Starting with the 24-3 ribozyme, Tjhung et al. applied another fourteen rounds of selection to obtain an RNA polymerase ribozyme by in vitro evolution termed '38-6' that has an unprecedented level of activity in copying complex RNA molecules. However, this ribozyme is unable to copy itself and its RNA products have a high mutation rate. In a subsequent study, the researchers began with the 38-6 ribozyme and applied another 14 rounds of selection to generate the '52-2' ribozyme, which, compared to 38-6, was again many times more active and could begin generating detectable and functional levels of the class I ligase, although it was still limited in its fidelity and functionality in comparison to copying of the same template by proteins such as the T7 RNA polymerase.
An RPR called t5(+1) adds triplet nucleotides at a time instead of just one nucleotide at a time. This heterodimeric RPR can navigate secondary structures inaccessible to 24-3, including hairpins. In the initial pool of RNA variants derived only from a previously synthesized RPR known as the Z RPR, two sequences separately emerged and evolved to be mutualistically dependent on each other. The Type 1 RNA evolved to be catalytically inactive, but complexing with the Type 5 RNA boosted its polymerization ability and enabled intermolecular interactions with the RNA template substrate, obviating the need to tether the template directly to the RNA sequence of the RPR, which was a limitation of earlier studies. Not only did t5(+1) not need tethering to the template, but a primer was not needed either, as t5(+1) had the ability to polymerize a template in both the 3' → 5' and 5' → 3' directions.
A highly evolved RNA polymerase ribozyme was able to function as a reverse transcriptase, that is, it can synthesize a DNA copy using an RNA template. Such an activity is considered to have been crucial for the transition from RNA to DNA genomes during the early history of life on earth. Reverse transcription capability could have arisen as a secondary function of an early RNA-dependent RNA polymerase ribozyme.
An RNA sequence that folds into a ribozyme is capable of invading duplexed RNA, rearranging into an open holopolymerase complex, and then searching for a specific RNA promoter sequence, and upon recognition rearrange again into a processive form that polymerizes a complementary strand of the sequence. This ribozyme is capable of extending duplexed RNA by up to 107 nucleotides, and does so without needing to tether the sequence being polymerized.
Artificial ribozymes
Since the discovery of ribozymes that exist in living organisms, there has been interest in the study of new synthetic ribozymes made in the laboratory. For example, artificially produced self-cleaving RNAs with good enzymatic activity have been produced. Tang and Breaker isolated self-cleaving RNAs by in vitro selection of RNAs originating from random-sequence RNAs. Some of the synthetic ribozymes that were produced had novel structures, while some were similar to the naturally occurring hammerhead ribozyme.
In 2015, researchers at Northwestern University and the University of Illinois Chicago engineered a tethered ribosome that works nearly as well as the authentic cellular component that produces all the proteins and enzymes within the cell. Called Ribosome-T, or Ribo-T, the artificial ribosome was created by Michael Jewett and Alexander Mankin. The techniques used to create artificial ribozymes involve directed evolution. This approach takes advantage of RNA's dual nature as both a catalyst and an informational polymer, making it easy for an investigator to produce vast populations of RNA catalysts using polymerase enzymes. The ribozymes are mutated by reverse transcribing them into cDNA with reverse transcriptase and amplifying the cDNA with error-prone PCR. The selection parameters in these experiments often differ. One approach for selecting a ligase ribozyme involves using biotin tags, which are covalently linked to the substrate. If a molecule possesses the desired ligase activity, a streptavidin matrix can be used to recover the active molecules.
Lincoln and Joyce used in vitro evolution to develop ribozyme ligases capable of self-replication in about an hour, via the joining of pre-synthesized highly complementary oligonucleotides.
Although not true catalysts, the creation of artificial self-cleaving riboswitches, termed aptazymes, has also been an active area of research. Riboswitches are regulatory RNA motifs that change their structure in response to a small-molecule ligand to regulate translation. While there are many known natural riboswitches that bind a wide array of metabolites and other small organic molecules, only one ribozyme based on a riboswitch has been described: glmS. Early work in characterizing self-cleaving riboswitches focused on using theophylline as the ligand. In these studies, an RNA hairpin is formed which blocks the ribosome binding site, thus inhibiting translation. In the presence of the ligand, in these cases theophylline, the regulatory RNA region is cleaved off, allowing the ribosome to bind and translate the target gene. Much of this RNA engineering work was based on rational design and previously determined RNA structures rather than directed evolution as in the above examples. More recent work has broadened the ligands used in ribozyme riboswitches to include thymine pyrophosphate. Fluorescence-activated cell sorting has also been used to engineer aptazymes.
Applications
Ribozymes have been proposed and developed for the treatment of disease through gene therapy. One major challenge of using RNA-based enzymes as a therapeutic is the short half-life of the catalytic RNA molecules in the body. To combat this, the 2’ position on the ribose is modified to improve RNA stability. One area of ribozyme gene therapy has been the inhibition of RNA-based viruses.
A type of synthetic ribozyme directed against HIV RNA called gene shears has been developed and has entered clinical testing for HIV infection.
Similarly, ribozymes have been designed to target the hepatitis C virus RNA, SARS coronavirus (SARS-CoV), adenovirus, and influenza A and B virus RNA. The ribozyme is able to cleave the conserved regions of the virus's genome, which has been shown to reduce the virus in mammalian cell culture. Despite these efforts by researchers, these projects have remained in the preclinical stage.
Known ribozymes
Well-validated naturally occurring ribozyme classes:
GIR1 branching ribozyme
glmS ribozyme
Group I self-splicing intron
Group II self-splicing intron – Spliceosome is likely derived from Group II self-splicing ribozymes.
Hairpin ribozyme
Hammerhead ribozyme
HDV ribozyme
rRNA – Found in all living cells and links amino acids to form proteins.
RNase P
Twister ribozyme
Twister sister ribozyme
VS ribozyme
Pistol ribozyme
Hatchet ribozyme
Viroids
See also
Deoxyribozyme
Spiegelman Monster
Catalysis
Enzyme
RNA world hypothesis
Peptide nucleic acid
Nucleic acid analogues
PAH world hypothesis
SELEX
OLE RNA
Notes and references
Further reading
External links
Tom Cech's Short Talk: "Discovering Ribozymes"
RNA
Catalysts
Biomolecules
Metabolism
Chemical kinetics
RNA splicing | Ribozyme | Chemistry,Biology | 4,175 |
51,428,919 | https://en.wikipedia.org/wiki/Endoplasmic%20reticulum%20membrane%20protein%20complex | The endoplasmic reticulum membrane protein complex (EMC) is a putative endoplasmic reticulum-resident membrane protein (co-)chaperone. The EMC is evolutionarily conserved in eukaryotes (animals, plants, and fungi), and its initial appearance might reach back to the last eukaryotic common ancestor (LECA). Many aspects of mEMC biology and molecular function remain to be studied.
Composition and structure
The EMC consists of up to 10 subunits (EMC1–EMC4, MMGT1, EMC6–EMC10), of which only two (EMC8/9) are homologous proteins. Seven of the ten subunits (EMC1, EMC3, EMC4, MMGT1, EMC6, EMC7, EMC10) are predicted to contain at least one transmembrane domain (TMD), whereas EMC2, EMC8 and EMC9 do not contain any predicted transmembrane domains and are therefore likely to interact with the rest of the EMC on the cytosolic face of the endoplasmic reticulum (ER). EMC proteins are thought to be present in the mature complex in a 1:1 stoichiometry.
Subunit primary structure
The majority of EMC proteins (EMC1/3/4/MMGT1/6/7/10) contain at least one predicted TMD. EMC1, EMC7 and EMC10 contain an N-terminal signal sequence.
EMC1
EMC1, also known as KIAA0090, contains a single TMD (aa 959–979) and pyrroloquinoline quinone (PQQ)-like repeats (aa 21–252), which could form a β-propeller domain. The TMD is part of a larger domain (DUF1620). The functions of the PQQ and DUF1620 domains in EMC1 remain to be determined.
EMC2
EMC2 (TTC35) harbours three tetratricopeptide repeats (TPR1/2/3). TPRs have been shown to mediate protein-protein interactions and can be found in a large variety of proteins of diverse function. The function of TPRs in EMC2 is unknown.
EMC8 and EMC9
EMC8 and EMC9 show marked sequence identity (44.72%) at the amino acid level. Both proteins are members of the UPF0172 family, one member of which (TLA1) is involved in regulating chlorophyll antenna size.
Posttranslational modifications
Several subunits of the mammalian EMC (mEMC) are posttranslationally modified. EMC1 contains three predicted N-glycosylation sites at positions 370, 818, and 913. EMC10 features a predicted N-glycosylation consensus motif at position 182.
Evolutionary conservation
EMC proteins are evolutionarily conserved in eukaryotes. No homologues are reported in prokaryotes. Therefore, the EMC has been suggested to have its evolutionary roots in the last eukaryotic common ancestor (LECA).
Function
Protein folding and degradation at the ER
The EMC was first identified in a genetic screen in yeast for factors involved in protein folding in the ER. Accordingly, deletion of individual EMC subunits correlates with the induction of an ER stress response in various model organisms. However, it is worth noting that in human osteosarcoma cells (U2OS cells), deletion of EMC6 does not appear to cause ER stress. When overexpressed, several subunits of the mammalian EMC orthologue (mEMC) have been found to physically interact with ERAD components (UBAC2, DER1, DER2). Genetic screens in yeast have shown EMC subunits to be enriched alongside ERAD genes. Taken together, these findings imply a role of the mEMC in protein homeostasis.
Chaperone
Maturation of polytopic membrane proteins
Several lines of evidence implicate the EMC in promoting the maturation of polytopic membrane proteins. The EMC is necessary to correctly and efficiently insert the first transmembrane domain (also called the signal anchor) of G-protein coupled receptors (GPCRs) such as the beta-adrenergic receptor. Features of transmembrane domains that favour EMC involvement appear to be moderate hydrophobicity and an ambiguous distribution of TMD-flanking charges.
The substrate spectrum of the EMC appears to extend beyond GPCRs. Unifying properties of putative EMC clients are the presence of unusually hydrophilic transmembrane domains containing charged residues. However, mechanistic detail of how the EMC assists in orienting and inserting such problematic transmembrane domains is lacking. In many cases, evidence implicating the EMC in the biogenesis of a certain protein consists of co-depletion when individual subunits of the EMC are disrupted.
A number of putative EMC clients are listed below, but the manner in which the EMC engages them and whether they directly or indirectly depend on the EMC merits further investigation:
Loss of EMC function destabilises the enzyme sterol-O-acyltransferase 1 (SOAT1); by stabilising SOAT1 and overseeing the biogenesis of squalene synthase (SQS), the EMC helps to maintain cellular cholesterol homeostasis. SOAT1 is an obligatory enzyme for cellular cholesterol storage and detoxification. For SQS, an enzyme controlling the committed step in cholesterol biosynthesis, the EMC has been shown to be sufficient for its integration into liposomes in vitro.
Depletion of EMC6 and additional EMC proteins reduces the cell surface expression of nicotinic acetylcholine receptors in C. elegans.
Knockdown of EMC2 has been observed to correlate with decreased CFTRΔF508 levels. EMC2 contains three tetratricopeptide repeat (TPR) domains. TPRs have been shown to mediate protein-protein interactions and can be found in co-chaperones of Hsp90. Therefore, a role of EMC2 in mediating interactions with cytosolic chaperones is conceivable, but remains to be demonstrated.
Loss of EMC subunits in D. melanogaster correlates with strongly reduced cell surface expression of rhodopsin-1 (Rh1), an important polytopic light receptor in the plasma membrane.
In yeast, the EMC has been implicated in maturation or trafficking defects of the polytopic model substrate Mrh1p-GFP.
Recently, structural and functional studies have identified a holdase function for the EMC in the assembly and maturation of the voltage gated calcium channel CaV1.2.
Insertion of proteins into the ER
The EMC was shown to be involved in a pathway mediating the membrane integration of tail-anchored proteins containing unusually hydrophilic or amphipathic transmembrane domains. This pathway appears to operate in parallel to the conventional Get/Trc40 targeting pathway.
Other suggested functions
Mitochondrial tethering
In S. cerevisiae, the EMC has been reported by Lahiri and colleagues to constitute a tethering complex between the ER and mitochondria. Close apposition of the two organelles is a prerequisite for phosphatidylethanolamine (PE) biosynthesis, in which phosphatidylserine (PS) is imported from the ER into mitochondria; this lipid transfer was previously proposed as evidence for a membrane tether between the two organelles by Jean Vance. Disruption of the EMC by genetic deletion of several of its subunits was shown to reduce ER-mitochondrial tethering and to impair transfer of PS from the ER.
Autophagosome formation
EMC6 interacts with the small GTPase RAB5A and Beclin-1, regulators of autophagosome formation. This observation suggests that the mEMC, and not just EMC6, might be involved in regulating Rab5A and BECLIN-1. However, the molecular mechanism underlying the proposed modulation of autophagosome formation remains to be established.
Involvement in disease
The mEMC has repeatedly been implicated in a range of pathologies, including the susceptibility of cells to viral infection, cancer, and a congenital syndrome of severe physical and mental disability. None of these pathologies appear to be linked by disruption of a single molecular pathway that might be regulated by the mEMC. Consequently, the involvement of the mEMC in these pathologies is of only limited use for defining the primary function of this complex.
As a host factor in viral infections
Large-scale genetic screens implicate several mEMC subunits in modulating the pathogenicity of flaviviruses such as West Nile virus (WNV), Zika virus (ZV), dengue fever virus (DFV), and yellow fever virus (YFV). In particular, loss of several mEMC subunits (e.g. EMC2, EMC3) leads to inhibition of WNV-induced cell death. However, WNV was still able to infect and proliferate in cells lacking EMC subunits. The authors made a similar observation regarding the role of the mEMC in the cell-killing capacity of St. Louis encephalitis virus. The underlying cause of the resistance of EMC2/3-deficient cells to WNV-induced cytotoxicity remains elusive.
Cancer
Dysregulation of individual mEMC subunits correlates with the severity of certain types of cancer. Expression of hHSS1, a secreted splice variant of EMC10 (HSM1), reduces the proliferation and migration of glioma cell lines.
Overexpression of EMC6 has been found to reduce cell proliferation of glioblastoma cells in vitro and in vivo, whereas its RNAi-mediated depletion has the opposite effect. This indicates that the mEMC assumes one or more important functions in cancerous cells during the establishment of a malignant tumour.
Pathologies
Mutations in the EMC1 gene have been associated with retinal dystrophy and a severe systemic disease phenotype involving developmental delay, cerebellar atrophy, scoliosis and hypotonia.
Similarly, a homozygous missense mutation (c.430G>A, p.Ala144Thr) within the EMC1 gene has been correlated with the development of retinal dystrophy.
Even though a set of disease-causing mutations in EMC1 has been mapped, their effects on EMC1 function and structure remain to be studied.
References
Proteins
System on TPTP
System on TPTP is an online interface to several automated theorem proving systems and other automated reasoning tools.
It allows users to run the systems either on problems from the latest releases from the TPTP problem library or on user-supplied problems in the TPTP syntax.
The system is maintained by Geoff Sutcliffe at the University of Miami. In November 2010, it featured more than 50 systems, including both theorem provers and model finders. System on TPTP can either run user-selected systems, or pick systems automatically based on problem features, and run them in parallel.
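As an illustration of the problem format the service accepts, a minimal user-supplied problem in TPTP first-order form (FOF) syntax might look like the following (the axioms and conjecture are an invented textbook example, not taken from the TPTP library):

```
fof(all_men_mortal, axiom, ! [X] : (man(X) => mortal(X))).
fof(socrates_is_man, axiom, man(socrates)).
fof(socrates_is_mortal, conjecture, mortal(socrates)).
```

A theorem prover run through System on TPTP would classify this problem as a theorem, since the conjecture follows from the two axioms.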
References
Automated theorem proving
Hook (hand tool)
A hook is a hand tool used for securing and moving loads. It consists of a round wooden handle with a strong metal hook projecting at a right angle from the center of the handle. The appliance is held in a closed fist with the hook projecting between two fingers.
This type of hook is used in many different industries, and has many different names. It may be called a box hook, cargo hook, loading hook, docker's hook when used by longshoremen, and a baling hook, bale hook, or hay hook in the agricultural industry. Other variants exist, such as in forestry, for moving logs, and a type with a long shaft, used by city workers to remove manhole covers.
Smaller hooks may also be used in food processing and transport.
Dockwork
The longshoreman's hook was historically used by longshoremen (stevedores). Before the age of containerization, freight was moved on and off ships with extensive manual labor, and the longshoreman's hook was the basic tool of the dockworker. The hook became an emblem of the longshoreman's profession in the same way that a hammer and anvil are associated with blacksmiths, or the pipe wrench with pipefitters, sprinklerfitters and plumbers. When longshoremen went on strike or retired, it was known as "hanging up the hook" or "slinging the hook", and the newsletter for retired members of the International Longshore and Warehouse Union's Seattle Local is called The Rusty Hook. A longshoreman's hook was often carried by hooking it through the belt.
Longshoremen carried various types of hooks depending on the cargo they would handle. Cargo could come in the form of bales, sacks, barrels, wood crates, or it could be stowed individually in the cargo hold of the ship. The primary function of the hook was to protect the hands of the longshoreman from being injured while handling the cargo. Hooks also improved the reach of the worker and allowed greater strength and handling of the cargo.
Some cargo items are liable to be damaged if pulled at with a longshoreman's hook: hence the "Use No Hooks" warning sign.
A longshoreman's hook looks somewhat intimidating, and as it was also associated with strong, tough dockworkers, it became a commonly used weapon in crime fiction, similar to the ice pick. For example, in an episode of Alfred Hitchcock Presents entitled Shopping for Death, a character is murdered (off-screen) using a longshoreman's hook. It was sometimes used as a weapon and means of intimidation in real life as well; the book Joey the Hit Man: The Autobiography of a Mafia Killer states "One guy who used to work on the docks was called Charlie the Hook. If he didn't like you he would pick you up with his hook." In the 1957 New York drama film Edge of the City, two longshoremen settle their dispute in a deadly baling hook fight. They are also the primary weapon of Spider Splicers in the BioShock series, so named due to their use of the hooks to crawl on ceilings and attack unexpectedly.
Haying
A hay hook is slightly different in design from a longshoreman's hook, in that the shaft is typically longer. It is used in hay bucking on farms to secure and move bales of hay, which are otherwise awkward to pick up manually.
Gardening
In gardening and agriculture, a variant with a long shaft is used to move large plants. A hook is placed in either side of the baled roots, allowing workers to carry or place the heavy load.
Forestry
Called a "Packhaken", "Hebehaken", or "Forsthaken" in German, this type is used in forestry mainly to lift or move firewood. In Sweden, this tool, though slightly different, is called a "timmerkrok", which translates as "timberhook". It is used mainly by two people to move logs by hooking them in each end.
See also
Cant hook
Fishing gaff
Pickaroon
Prosthetic hook
References
External links
Smithsonian Institution exhibit on the mechanization of the cargo shipping industry.
Hand tools
Forestry tools
Food processing
Maritime culture
PEDOT:PSS
Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a composite material in which PEDOT (the conductive polymer) provides electrical conductivity, while PSS (polystyrene sulfonate) acts as a counter-ion to balance the charge and to improve the water solubility and processability of PEDOT. Polystyrene sulfonate is a sulfonated polystyrene; part of its sulfonic acid groups are deprotonated and carry a negative charge. The other component, poly(3,4-ethylenedioxythiophene) (PEDOT), is a conjugated polymer based on polythiophene and carries positive charges. Together the charged macromolecules form a macromolecular salt.
Synthesis
PEDOT:PSS can be prepared by mixing an aqueous solution of PSS with the EDOT monomer and then adding a solution of sodium persulfate and ferric sulfate to the resulting mixture.
The addition of these reagents initiates the oxidative chemical polymerization of EDOT in water to form PEDOT. The stabilizing PSS forms a shell around a core of PEDOT in a nano-sized structure. The negatively charged sulfonic acid ions help stabilize the positively charged PEDOT ions.
Applications
PEDOT:PSS has the highest efficiency among conductive organic thermoelectric materials (ZT ≈ 0.42) and thus can be used in flexible thermoelectric generators. Yet its largest application is as a transparent, conductive polymer with high ductility. For example, AGFA coats 200 million photographic films per year with a thin, extensively stretched layer of virtually transparent and colorless PEDOT:PSS as an antistatic agent, preventing electrostatic discharges during production and normal film use independent of humidity conditions. PEDOT:PSS is also used as the electrolyte in polymer electrolytic capacitors.
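For context, the dimensionless figure of merit ZT quoted above is conventionally defined as follows (this definition is standard thermoelectric background, not stated in the original text):

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```

where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ the thermal conductivity; a higher ZT means more efficient thermoelectric energy conversion.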
If organic compounds are added, including high-boiling solvents like methylpyrrolidone, dimethyl sulfoxide, sorbitol, ionic liquids and surfactants, the conductivity increases by many orders of magnitude. This makes it also suitable as a transparent electrode, for example in touchscreens, organic light-emitting diodes, flexible organic solar cells and electronic paper, replacing the traditionally used indium tin oxide (ITO). Owing to its high conductivity (up to 4600 S/cm), it can be used as a cathode material in capacitors, replacing manganese dioxide or liquid electrolytes. It is also used in organic electrochemical transistors.
The conductivity of PEDOT:PSS can also be significantly improved by a post-treatment with various compounds, such as ethylene glycol, dimethyl sulfoxide (DMSO), salts, zwitterions, cosolvents, acids, alcohols, phenol, geminal diols and amphiphilic fluoro-compounds. This conductivity is comparable to that of ITO, the popular transparent electrode material, and it can triple that of ITO after a network of carbon nanotubes and silver nanowires is embedded into PEDOT:PSS and used for flexible organic devices.
PEDOT:PSS is generally applied as a dispersion of gelled particles in water. A conductive layer on glass is obtained by spreading a layer of the dispersion on the surface usually by spin coating and driving out the water by heat. Special PEDOT:PSS inks and formulations were developed for different coating and printing processes. Water-based PEDOT:PSS inks are mainly used in slot die coating, flexography, rotogravure and inkjet printing. If a high viscous paste and slow drying is required like in screen-printing processes PEDOT:PSS can also be supplied in high boiling solvents like propanediol. Dry PEDOT:PSS pellets can be produced with a freeze drying method which are redispersable in water and different solvents, for example ethanol to increase drying speed during printing. Finally, to overcome degradation to ultraviolet light and high temperature or humidity conditions PEDOT:PSS UV-stabilizers are available.
Researchers at Linköping University claim to have made a "wooden transistor" by replacing the lignin of balsa wood with PEDOT:PSS.
Mechanical properties
Since PEDOT:PSS is most frequently used in thin-film architectures, several methods have been developed to accurately probe its mechanical properties: for example, water-supported tensile testing, four-point bend tests to measure adhesive and cohesive fracture energy, buckling tests to measure modulus, and bending tests on PDMS and polyethylene supports to probe the crack onset strain. Though PEDOT:PSS has a lower electrical mobility than silicon (which can also be incorporated into flexible electronics through the use of stress-relief structures), sufficiently flexible PEDOT:PSS can enable lower-cost processing, such as roll-to-roll processing. The most important characteristics for an organic semiconductor used in thin-film architectures are a low modulus in the elastic regime and high stretchability prior to fracture. These properties have been found to be highly correlated with relative humidity. At high relative humidity (>40%), hydrogen bonds in the PSS are weakened by the uptake of water, which leads to higher strain before fracture and lower elastic modulus. At low relative humidity (<23%), strong bonding between PSS grains leads to higher modulus and lower strain before fracture. Films at higher relative humidity are presumed to fail by intergranular fracture, whereas lower relative humidity leads to transgranular fracture. Additives like 3-glycidoxypropyltrimethoxysilane (GOPS) can drastically improve the mechanical stability in aqueous media, even at low concentrations of 1 wt%, without significantly impeding the electrical properties.
PEDOT:PSS can also show self-healing properties if submerged in water after sustaining mechanical damage. This self-healing capability is proposed to be enabled by the hygroscopic property of PSS−. Common PEDOT:PSS additives that improve the electrical conductivity have varying effects on self-healing. While ethylene glycol improves electrical and mechanical self-healing, sulfuric acid reduces the former but improves the latter, presumably because it undergoes autoprotolysis. Polyethylene glycol improves the electrical and thermoelectric self-healing, but reduces the mechanical self-healing.
PEDOT:PSS is also attractive for conductive textile applications. Though it results in inferior thermoelectric properties, wet-spinning has been shown to result in high conductivity and stiff fibers due to preferential alignment of polymer chains during fiber drawing.
References
Organic polymers
Organic semiconductors
Conductive polymers
Transparent electrodes
Polyelectrolytes
Copolymers
Antistatic agents
Display technology
List of plants from the mountains of Romania
A list of plants native to the mountain ranges of Romania.
Many Romanian mountain ranges, mountains, and peaks are part of the Southern Carpathians System, and the Carpathian montane forests ecoregion.
List of flowering plants of the Romanian mountain ranges
Aconitum anthora
Androsace lactea
Androsace villosa
Alyssum repens
Artemisia baumgarteni
Anthemis carpatica
Armeria alpina
Aster alpinus
Biscutella laevigata
Bruckenthalia spiculifolia (syn. Erica spiculifolia)
Centaurea pinnatifida
Campanula napuligera
Campanula alpina
Campanula cochleariifolia
Cerastium arvense
Cerastium lanatum
Cortusa matthioli
Carlina acaulis
Calamintha baumgarteni
Dianthus spiculifolius
Dianthus callizonus
Dianthus gelidus
Dianthus nardiformis
Dianthus tenuifolius
Doronicum carpaticum
Draba compacta
Dryas octopetala
Erigeron nanus
Eritrichium nanum
Gentiana kochiana
Geum reptans
Gentiana bulgarica
Gentiana lutea
Gentiana orbicularis
Gentiana frigida
Gypsophila petraea
Geum montanum
Gentiana nivalis
Hedysarum obscurum
Helianthemum tomentosum
Hesperis alpina
Hieracium aurantiacum
Hieracium villosum
Hypochaeris uniflora
Knautia longifolia
Leontopodium alpinum
Leontodon pseudotaraxaci
Libanotis humilis
Linaria alpina
Linum extraaxillare
Lloydia serotina
Minuartia recurva
Minuartia sedoides
Nigritella rubra
Onobrychis transsilvanica
Oxytropis campestris
Oxytropis sericea
Papaver pyrenaicum
Pedicularis verticillata
Pleurogyne carinthiaca
Potentilla inclinata
Rhododendron kotschyi
Scorzonera rosea
Senecio capitatus
Senecio carpathicus
Saxifraga aiatratum
Silene acaulis
Saxifraga aizoides
Saxifraga demissa
Saxifraga oppositifolia
Saxifraga moschata
Saxifraga luteoviridis
Trollius europaeus
Viola alpina
Viola biflora
References
Plants
Romania
Southern Carpathians
Flora of the Carpathians
Histamine N-methyltransferase
Histamine N-methyltransferase (HNMT) is a protein encoded by the HNMT gene in humans. It belongs to the methyltransferase superfamily of enzymes and plays a role in the inactivation of histamine, a biomolecule involved in various physiological processes. Methyltransferases are present in every life form, including archaeans, with 230 families of methyltransferases found across species.
Specifically, HNMT transfers a methyl (-CH3) group from S-adenosyl-L-methionine (SAM-e) to histamine, forming an inactive metabolite called Nτ-methylhistamine, in a chemical reaction called Nτ-methylation. In mammals, HNMT operates alongside diamine oxidase (DAO) as one of only two enzymes responsible for histamine metabolism; what sets HNMT apart is its unique presence within the central nervous system (CNS), where it governs histaminergic neurotransmission, the process in which histamine acts as a messenger molecule between neurons, the nerve cells of the brain. By degrading and regulating levels of histamine specifically within the CNS, HNMT ensures the proper functioning of neural pathways related to arousal, appetite regulation, sleep-wake cycles, and other essential brain functions.
Research on knockout mice—that are genetically modified mice lacking the Hnmt gene—has revealed that the absence of this enzyme leads to increased brain histamine concentrations and behavioral changes such as heightened aggression and disrupted sleep patterns. These findings highlight the critical role played by HNMT in maintaining normal brain function through precise regulation of neuronal signaling involving histamine. Genetic variants affecting HNMT activity have also been implicated in various neurological disorders like Parkinson's disease and attention deficit disorder.
Gene
Histamine N-methyltransferase is encoded by a single gene, called HNMT, which has been mapped to chromosome 2 in humans.
Three transcript variants have been identified for this gene in humans, which produce different protein isoforms through alternative splicing, a mechanism that allows a single gene to code for multiple proteins by including or excluding particular exons of the gene in the final mRNA. Of those isoforms, only one has histamine-methylating activity.
In the human genome, six exons of the 50-kb HNMT gene contribute to forming a unique mRNA species, approximately 1.6 kb in size. This mRNA is then translated into the cytosolic enzyme histamine N-methyltransferase, comprising 292 amino acids, 130 of which form a conserved sequence. The HNMT promoter lacks cis-elements such as TATA and CAAT boxes.
Protein
HNMT is a cytoplasmic protein, meaning that it operates within the cytoplasm of a cell. The cytoplasm fills the space between the outer cell membrane (also known as the cellular plasma membrane) and the nuclear membrane (which surrounds the cell's nucleus). HNMT helps regulate histamine levels by degrading histamine within the cytoplasm, ensuring proper cellular function.
Proteins consist of amino acid residues and fold into a three-dimensional structure. The crystallographic structure of the human HNMT protein, first described in 2001, revealed a monomeric protein with a mass of 33 kilodaltons that consists of two structural domains.
The first domain, called the "MTase domain", contains the active site where methylation occurs. It has a classic fold found in many other methyltransferases and consists of a seven-stranded beta-sheet surrounded by three helices on each side. This domain binds to its cofactor, S-adenosyl-L-methionine (SAM-e), which provides the methyl group for Nτ-methylation reactions.
The second domain, called the "substrate binding domain", interacts with histamine, contributing to its binding to the enzyme molecule. This domain is connected to the MTase domain and forms a separate region. It includes an anti-parallel beta sheet along with additional alpha helices and 3₁₀ helices.
Species
Histamine N-methyltransferase belongs to methyltransferases, a superfamily of enzymes present in every life form, including archaeans.
These enzymes catalyze methylation, which is a chemical process that involves the addition of a methyl group to a molecule, which can affect its biological function.
To facilitate methylation, methyltransferases transfer a methyl group (-CH3) from a cosubstrate (donor) to a substrate molecule (acceptor), leading to the formation of a methylated molecule. Most methyltransferases use S-adenosyl-L-methionine (SAM-e) as a donor, converting it into S-adenosyl-L-homocysteine (SAH). In various species, members of the methyltransferase superfamily of enzymes methylate a wide range of molecules, including small molecules, proteins, nucleic acids, and lipids. These enzymes are involved in numerous cellular processes such as signaling, protein repair, chromatin regulation, and gene regulation. More than 230 families of methyltransferases have been described in various species.
This specific protein, histamine N-methyltransferase, is found in vertebrates, including mammals, birds, reptiles, amphibians, and fishes, but not in invertebrates and plants.
The complementary DNA (cDNA) of Hnmt was initially cloned from a rat kidney and has since been cloned from human, mouse, and guinea pig sources. Human HNMT shares 55.37% similarity with that of zebrafish, 86.76% with that of mouse, 90.53% with that of dog, and 99.54% with that of chimpanzee. Moreover, expressed sequence tags from cow, pig, and gorilla, as well as genome survey sequences from pufferfish, also exhibit strong similarity to human HNMT, suggesting that it is a highly conserved protein among vertebrates. To understand the role of histamine N-methyltransferase in brain function, researchers have studied Hnmt-deficient (knockout) mice, that were genetically modified to have the Hnmt gene "knocked out", i.e., deactivated. Scientists discovered that disrupting the gene led to a significant rise in histamine levels in the mouse brain that highlighted the role of the gene in the brain's histamine system and suggested that HNMT genetic variations in humans could be linked to brain disorders.
Tissue and subcellular distribution
At the subcellular level, the histamine N-methyltransferase protein in humans is mainly localized to the nucleoplasm (the fluid component of the cell nucleus) and the cytosol (the intracellular fluid, i.e., the fluid inside cells). In addition, it is localized to the centrosome (an organelle, i.e., a subunit of a cell).
In humans, the protein is present in many tissues and is most abundantly expressed in the brain, thyroid gland, bronchus, duodenum, liver, gallbladder, kidney, and skin.
Function
The function of the HNMT enzyme is histamine metabolism by ways of Nτ-methylation using S-adenosyl-L-methionine (SAM-e) as the methyl donor, producing Nτ-methylhistamine, which, unless excreted, can be further processed by monoamine oxidase B (MAOB) or by diamine oxidase (DAO). Methylated histamine metabolites are excreted with urine.
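The overall reaction described above can be summarized schematically as follows (using the abbreviations SAM for S-adenosyl-L-methionine and SAH for S-adenosyl-L-homocysteine, its demethylated product):

```latex
\text{histamine} + \text{SAM} \xrightarrow{\text{HNMT}} N^{\tau}\text{-methylhistamine} + \text{SAH}
```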
In mammals, there are two main ways to inactivate histamine by metabolism: one is through a process called oxidative deamination, which involves the enzyme diamine oxidase (DAO) produced by the AOC1 gene, and the other is through a process called Nτ-methylation, which involves the enzyme N-methyltransferase. In the context of biochemistry, inactivation by metabolism refers to the process where a substance, such as a hormone, is converted into a form that is no longer active or effective (inactivation), via a process where the substance is chemically altered (metabolism).
HNMT and DAO are two enzymes that play distinct roles in histamine metabolism. DAO is primarily responsible for metabolizing histamine in extracellular fluids (outside cells), which include interstitial fluid (the fluid surrounding cells) and blood plasma. Such histamine can be exogenous (from food or intestinal flora) or endogenous (released from granules of mast cells and basophils, such as during allergic reactions). DAO is predominantly expressed in the cells of the intestinal epithelium and placenta but not in the central nervous system (CNS). In contrast, HNMT is expressed in the CNS and involved in the metabolism of intracellular (inside cells) histamine, which is primarily endogenous and persistently present. HNMT operates in the cytosol, the fluid inside cells. Histamine must be carried into the cytosol through transporters such as the plasma membrane monoamine transporter (SLC29A4) or organic cation transporter 3 (SLC22A3). The HNMT enzyme is found in cells of diverse tissues: neurons and glia, brain, kidneys, liver, bronchi, large intestine, ovary, prostate, spinal cord, spleen, trachea, and others. While DAO is primarily found in the intestinal epithelium, HNMT is present in a wider range of tissues throughout the body. This difference in location also requires different transport mechanisms for histamine to reach each enzyme, reflecting the distinct roles of these enzymes in histamine metabolism. Another distinction between HNMT and DAO lies in their substrate specificity. While HNMT has a strong preference for histamine, DAO can also metabolize other biogenic amines, substances produced by living organisms (such as bacteria or animals) that contain an amine functional group (−NH2). Examples of biogenic amines besides histamine that DAO can metabolize are putrescine and cadaverine; still, DAO has a preference for histamine. Both DAO and HNMT exhibit comparable affinities toward histamine.
In the brain of mammals, histamine takes part in histaminergic neurotransmission, a process in which histamine acts as a messenger molecule between neurons, the nerve cells. Histamine neurotransmitter activity is controlled by HNMT, since DAO is not present in the CNS. Consequently, the deactivation of histamine via HNMT represents the sole mechanism for ending histaminergic neurotransmission within the mammalian CNS. This highlights the key role of HNMT in the brain's histamine system and in brain function in general.
Physiological and clinical significance
Role in health
Histamine has important roles in human physiology as both a hormone and a neurotransmitter. As a hormone, it is involved in the inflammatory response and itching. It regulates physiological functions in the gut and acts on the brain, spinal cord, and uterus. As a neurotransmitter, histamine promotes arousal and regulates appetite and the sleep-wake cycle. It also affects vasodilation, fluid production in tissues like the nose and eyes, gastric acid secretion, sexual function, and immune responses.
HNMT is the only enzyme in the human body responsible for metabolizing histamine within the CNS, playing a role in brain function.
HNMT plays a role in maintaining the proper balance of histamine in the human body. HNMT is responsible for the breakdown and metabolism of histamine, converting it into an inactive metabolite, Nτ-methylhistamine, which inhibits HNMT gene expression in a negative feedback loop. By metabolizing histamine, HNMT helps prevent excessive levels of histamine from accumulating in various tissues and organs. This enzymatic activity ensures that histamine remains at appropriate levels to carry out its physiological functions without causing unwanted effects or triggering allergic reactions. In the central nervous system, HNMT plays an essential role in degrading histamine, where it acts as a neurotransmitter, since HNMT is the only enzyme in the body that can metabolize histamine in the CNS, ending its neurotransmitter activity.
HNMT also plays a role in the airway response to harmful particles, which is the body's physiological reaction to immune allergens, bacteria, or viruses in the respiratory system. Histamine is stored in granules in mast cells, basophils, and in the synaptic vesicles of histaminergic neurons of the airways. When exposed to immune allergens or harmful particles, histamine is released from these storage granules and quickly diffuses into the surrounding tissues. However, the released histamine needs to be rapidly deactivated for proper regulation, which is a function of HNMT.
Histamine intolerance
Histamine intolerance is a presumed set of adverse reactions to ingested histamine in food, believed to be associated with flawed activity of the DAO and HNMT enzymes. This set of reactions includes cutaneous reactions (such as itching, flushing and edema), gastrointestinal symptoms (such as abdominal pain and diarrhea), respiratory symptoms (such as runny nose and nasal congestion), and neurological symptoms (such as dizziness and headache). However, this link between the DAO and HNMT enzymes and adverse reactions to ingested histamine in food is not accepted by mainstream science due to insufficient evidence. The exact underlying mechanisms by which deficiency in these enzymes could cause these adverse reactions are not fully understood but are hypothesized to involve genetic factors. Despite extensive research, there are no definitive, objective measures or indicators that could unambiguously define histamine intolerance as a distinct medical condition.
Activity measurements
The activity of HNMT, unlike that of DAO, cannot be measured by blood (serum) analysis.
Organs that produce DAO continuously release it into the bloodstream. DAO is stored in vesicular structures associated with the plasma membrane in epithelial cells. As a result, serum DAO activity can be measured, but not HNMT. This is because HNMT is primarily found within the cells of internal organs like the brain or liver and is not released to the bloodstream. Measuring intracellular HNMT directly is challenging. Therefore, diagnosis of HNMT activity is typically done indirectly by testing for known genetic variants.
Genetic variants
There is a genetic variant, registered in the Single Nucleotide Polymorphism database (dbSNP) as rs11558538 and found in 10% of the population worldwide, in which a T allele is present at position 314 of HNMT instead of the usual C allele (c.314C>T). This variant causes the protein to be synthesized with threonine (Thr) replaced by isoleucine (Ile) at position 105 (p.Thr105Ile, T105I). The variant is described as a loss-of-function allele reducing HNMT activity, and is associated with diseases such as asthma, allergic rhinitis, and atopic eczema (atopic dermatitis). For individuals with this variant, the intake of HNMT inhibitors, which hamper enzyme activity, and histamine liberators, which release histamine from the granules of mast cells and basophils, could potentially influence their histamine levels. Still, this genetic variant is associated with a reduced risk of Parkinson's disease.
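The correspondence between the nucleotide-level name (c.314C>T) and the protein-level name (p.Thr105Ile) follows from the standard mapping of coding-sequence positions to codons; a small illustrative sketch (the helper function is mine, not part of any cited source):

```python
# Map a 1-based coding-sequence (CDS) nucleotide position to the
# 1-based codon (amino acid residue) it belongs to. Each codon spans
# three nucleotides, so nucleotide 314 falls within codon 105, which
# is why the c.314C>T change alters residue 105 (Thr -> Ile).
def codon_number(cds_position: int) -> int:
    return (cds_position - 1) // 3 + 1

print(codon_number(314))  # codon/residue 105
```

Nucleotides 313-315 form codon 105, so any substitution at position 313, 314, or 315 affects the same residue.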
Experiments involving Hnmt-knockout mice have shown that a deficiency in HNMT indeed leads to increased brain histamine concentrations, resulting in heightened aggressive behaviors and disrupted sleep-wake cycles in these mice. In humans, genetic variants that affect HNMT activity have been implicated in various brain disorders, such as Parkinson's disease and attention deficit disorder, but it remains unclear whether these alterations in HNMT are a primary cause or secondary effect of these conditions. Additionally, reduced histamine levels in cerebrospinal fluid have been consistently reported in patients with narcolepsy and other conditions characterized by excessive daytime sleepiness. The association between HNMT polymorphisms and gastrointestinal diseases is still uncertain. While mild polymorphisms can lead to diseases such as asthma and inflammatory bowel disease, they may also reduce the risk of brain disorders like Parkinson's disease. On the other hand, severe mutations in HNMT can result in intellectual disability. Despite these findings, the role of HNMT in human health is not fully understood and continues to be an active area of research.
Inhibitors
The following substances are known to be HNMT inhibitors: amodiaquine, chloroquine, dimaprit, etoprine, metoprine, quinacrine, SKF-91488, tacrine, and diphenhydramine. HNMT inhibitors may increase histamine levels in peripheral tissues and aggravate conditions associated with histamine excess, such as allergic rhinitis, urticaria, and peptic ulcer disease. The effect of HNMT inhibitors on brain function is not yet fully understood. Research suggests that using new HNMT inhibitors to increase histamine levels in the brain could contribute to improvements in the treatment of brain disorders.
Methamphetamine overdose
HNMT could be a potential target for treating symptoms of methamphetamine overdose. Methamphetamine is a central nervous system stimulant whose abuse can have lethal consequences: numerous deaths related to methamphetamine overdoses have been reported. The reasoning is that such overdoses often lead to behavioral abnormalities, and elevated levels of histamine in the brain have been observed to attenuate these methamphetamine-induced behaviors. By targeting HNMT, it might therefore be possible to increase brain histamine levels, which could in turn help mitigate the effects of a methamphetamine overdose. This effect could be achieved with HNMT inhibitors. Studies predict that one such inhibitor could be metoprine, which crosses the blood-brain barrier and can potentially increase brain histamine levels by inhibiting HNMT; however, the treatment of methamphetamine overdose with HNMT inhibitors remains an area of research.
Nτ-methylhistamine
Nτ-methylhistamine (NτMH), also known as 1-methylhistamine, is a product of Nτ-methylation of histamine in a reaction catalyzed by the HNMT enzyme.
NτMH is considered a largely biologically inactive metabolite of histamine. It is excreted in the urine and can be measured to estimate the amount of active histamine in the body. While NτMH has some biological activity of its own, it is much weaker than histamine: it can bind to histamine receptors, but with lower affinity and efficacy, meaning that it binds less strongly and activates them less effectively. Depending on the receptor subtype and the tissue context, NτMH may act as a partial agonist or an antagonist at some histamine receptors. NτMH may have some modulatory effects on histamine signaling, but it is unlikely to cause significant allergic or inflammatory reactions by itself. It may also serve as a feedback mechanism to regulate histamine levels and prevent excessive histamine release. In addition, NτMH, being the product of the reaction catalyzed by HNMT, may inhibit expression of HNMT in a negative feedback loop.
Urinary NτMH can be measured in clinical settings when systemic mastocytosis is suspected. Systemic mastocytosis and anaphylaxis are typically associated with at least a two-fold increase in urinary NτMH levels, which are also increased in patients taking monoamine oxidase inhibitors and in patients on histamine-rich diets.
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for human histamine N-methyltransferase
EC 2.1.1
Histamine
Enzymes
Metabolism
Human proteins | Histamine N-methyltransferase | Chemistry,Biology | 4,421 |
47,111,595 | https://en.wikipedia.org/wiki/KAHA%20Ligation | The α-Ketoacid-Hydroxylamine (KAHA) Amide-Forming Ligation is a chemical reaction that is used to join two unprotected fragments in peptide synthesis. It is an alternative to the Native Chemical Ligation (NCL).
The KAHA ligation was developed by the group of Jeffrey W. Bode at ETH Zürich (previously at the University of Pennsylvania).
Overview
An α-ketoacid at the C-terminus of one peptide fragment reacts with a hydroxylamine at the N-terminus of another to form a peptide bond (amide bond).
The reaction can happen in the presence of unprotected side chains. It also does not require any coupling reagents or catalysts. The only byproducts are water and CO2.
The first reported protein synthesized by KAHA ligation was human GLP-1 (7-36). Since then, a variety of small proteins (up to 200 residues) have been synthesized, including ubiquitin and other similar modifier proteins, hormone proteins, nitrophorin 4, S100A4 and cyclic proteins.
C-terminal ketoacid monomers are pre-loaded on resin via a linker for Fmoc-SPPS (Fmoc-based solid phase peptide synthesis). Initial research utilised sulfur ylide linkers, but more recently the group developed acid- and photo-labile ketoacid monomers that can be loaded directly on Rink Amide resin.
The most commonly used N-terminal hydroxylamine is 5-oxaproline, which results in a homoserine residue after ligation and O–N rearrangement.
References
Peptides | KAHA Ligation | Chemistry | 352 |
25,687,055 | https://en.wikipedia.org/wiki/Gendicine | Gendicine is a gene therapy medication used to treat patients with head and neck squamous cell carcinoma linked to mutations in the TP53 gene. It consists of recombinant adenovirus engineered to code for p53 protein (rAd-p53) and is manufactured by Shenzhen SiBiono GeneTech.
Gendicine was the first gene therapy product to obtain regulatory approval for clinical use in humans after Chinese State Food and Drug Administration approved it in 2003. As of 2024, Gendicine has not been approved for use in the United States and the European Union.
Mechanism of action
Gendicine enters the tumour cells by way of receptor-mediated endocytosis and drives over-expression of the gene coding for the p53 protein needed to fight the tumour. Ad-p53 appears to act by stimulating the apoptotic pathway in tumour cells, increasing the expression of tumour suppressor genes and immune response factors (such as the ability of natural killer (NK) cells to exert "bystander" effects). It also decreases the expression of the multi-drug resistance, vascular endothelial growth factor, and matrix metalloproteinase-2 genes, and blocks transcriptional survival signals.
p53 mutation status of the tumour cells and response to Ad-p53 treatment are not closely correlated. Ad-p53 appears to act synergistically with conventional treatments such as chemo- and radiotherapy. This synergy persists even in patients with chemotherapy- and radiotherapy-resistant tumours. Gendicine produces fewer side effects than conventional therapy.
Related development
Contusugene ladenovec (Advexin), a similar gene therapy developed by Introgen Therapeutics that also uses an adenovirus to deliver the p53 gene, was turned down by the FDA in 2008 and withdrawn from EMA consideration by its maker shortly afterwards.
References
Gene delivery
Adenoviridae
Immunotherapy
Gene therapy | Gendicine | Chemistry,Engineering,Biology | 399 |
3,736,752 | https://en.wikipedia.org/wiki/Agent%20%28economics%29 | In economics, an agent is an actor (more specifically, a decision maker) in a model of some aspect of the economy. Typically, every agent makes decisions by solving a well- or ill-defined optimization or choice problem.
For example, buyers (consumers) and sellers (producers) are two common types of agents in partial equilibrium models of a single market. Macroeconomic models, especially dynamic stochastic general equilibrium models that are explicitly based on microfoundations, often distinguish households, firms, and governments or central banks as the main types of agents in the economy. Each of these agents may play multiple roles in the economy; households, for example, might act as consumers, as workers, and as voters in the model. Some macroeconomic models distinguish even more types of agents, such as workers and shoppers or commercial banks.
The term agent is also used in relation to principal–agent models; in this case, it refers specifically to someone delegated to act on behalf of a principal.
In agent-based computational economics, corresponding agents are "computational objects modeled as interacting according to rules" over space and time, not real people. The rules are formulated to model behavior and social interactions based on stipulated incentives and information. The concept of an agent may be broadly interpreted to be any persistent individual, social, biological, or physical entity interacting with other such entities in the context of a dynamic multi-agent economic system.
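As a hedged illustration of agents as "computational objects modeled as interacting according to rules", the sketch below sets up heterogeneous buyer and seller agents with invented reservation values and a single, invented trading rule (split the surplus evenly); it is not drawn from any specific model in the literature:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

class Buyer:
    """An agent willing to pay up to `wtp` for one unit."""
    def __init__(self, willingness_to_pay):
        self.wtp = willingness_to_pay

class Seller:
    """An agent willing to sell one unit at cost `cost` or above."""
    def __init__(self, cost):
        self.cost = cost

def simulate_market(buyers, sellers, rounds=100):
    """Randomly pair one buyer with one seller each round; a trade
    occurs when the buyer's willingness to pay covers the seller's
    cost, at a price that splits the surplus evenly."""
    prices = []
    for _ in range(rounds):
        b = random.choice(buyers)
        s = random.choice(sellers)
        if b.wtp >= s.cost:
            prices.append((b.wtp + s.cost) / 2)
    return prices

# Heterogeneous agents: each has its own reservation value.
buyers = [Buyer(random.uniform(0, 10)) for _ in range(50)]
sellers = [Seller(random.uniform(0, 10)) for _ in range(50)]
prices = simulate_market(buyers, sellers)
```

Replacing the two `random.uniform` draws with a single shared value for every buyer and every seller would turn this into the representative-agent case discussed below.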
Representative vs. heterogenous agents
An economic model in which all agents of a given type (such as all consumers, or all firms) are assumed to be exactly identical is called a representative agent model. A model which recognizes differences among agents is called a heterogeneous agent model. Economists often use representative agent models when they want to describe the economy in the simplest terms possible. In contrast, they may be obliged to use heterogeneous agent models when differences among agents are directly relevant for the question at hand. For example, considering heterogeneity in age is likely to be necessary in a model used to study the economic effects of pensions; considering heterogeneity in wealth is likely to be necessary in a model used to study precautionary saving or redistributive taxation.
See also
Agency (law)
Demand set
Homo economicus
Market consumer
References
Further reading
Decision theory
Asymmetric information | Agent (economics) | Physics | 477 |
26,672,766 | https://en.wikipedia.org/wiki/Comobatrachus | Comobatrachus (meaning "Como Bluff frog") is a dubious genus of extinct frog known only from the holotype, YPM 1863, part of the right humerus, found in Reed's Quarry 9 near Como Bluff, Wyoming in the Late Jurassic-aged Morrison Formation. The holotype was commented on but not described by Moodie in 1912, although it was probably discovered alongside the holotype of Eobatrachus, but was not described by Othniel Charles Marsh when he named Eobatrachus in 1887. The type, and only species, C. aenigmatis, was named and described in 1960. It was probably related to the contemporaneous Eobatrachus.
References
Mesozoic frogs
Morrison fauna
Nomina dubia
Fossil taxa described in 1960 | Comobatrachus | Biology | 169 |
2,186,500 | https://en.wikipedia.org/wiki/Azimilide | Azimilide is a class ΙΙΙ antiarrhythmic drug (used to control abnormal heart rhythms). The agents from this heterogeneous group have an effect on the repolarization, they prolong the duration of the action potential and the refractory period. Also they slow down the spontaneous discharge frequency of automatic pacemakers by depressing the slope of diastolic depolarization. They shift the threshold towards zero or hyperpolarize the membrane potential. Although each agent has its own properties and will have thus a different function.
Heart potential
Azimilide dihydrochloride is a chlorophenylfuranyl compound which slows repolarization of the heart and prolongs the QT interval of the electrocardiogram. Prolongation of atrial or ventricular repolarization can provide an anti-arrhythmic benefit in patients with heart rhythm disturbances, and this has been the primary interest in the clinical development of azimilide. In rare cases, excessive prolongation of ventricular repolarization by azimilide can result in predisposition towards severe ventricular arrhythmias. Most recent clinical trials have investigated the use of azimilide in reducing the frequency and severity of arrhythmias in patients with implanted cardiac pacemaker-defibrillators, where rare pro-arrhythmic events are rescued by the device.
The ion currents
The action of azimilide is directed at the different currents present in atrial and ventricular cardiac myocytes. It principally blocks IKr and IKs, with much weaker effects on INa, ICa, INCX and IK.Ach. IKr (rapid) and IKs (slow) are delayed rectifier potassium currents, responsible for repolarizing cardiac myocytes towards the end of the cardiac action potential. A somewhat higher concentration of azimilide is needed to block the IKs current. Blocking either current results in an increase of the QT interval and a prolongation of atrial and ventricular refractory periods.
Azimilide blocks hERG channels (which encode the IKr current) with an affinity comparable to that with which KvLQT1 / minK channels (which encode the IKs current) are blocked. This block exhibits reverse use-dependence, i.e. the channel blocking effect wanes at faster pulsing rates of the cell. A possible explanation is an interaction of azimilide with K+ close to its binding site in the ion channel. However, there is an agonist effect as well, which is a voltage-dependent effect. This is a dual effect, a low voltage depolarization near the activation threshold will increase the current amplitude and higher depolarizing voltages will suppress the current amplitude. The effect comes from outside of the cell membrane and does not depend on G-proteins or kinase activity inside the cell. Azimilide binds on the extracellular domain of the hERG channel, this propagates a conformational change and inhibits the current. This change makes the activation gate open more easily by low voltage depolarization. Azimilide has two separate binding sites in hERG channel, one for its antagonist function and the other for the agonist function.
Pharmacology
Azimilide has been studied for its anti-arrhythmic effects: it converts and maintains sinus rhythm in patients with atrial arrhythmias, and it reduces the frequency and severity of ventricular arrhythmias in patients with implanted cardioverter-defibrillators. Azimilide's most important adverse effect is torsades de pointes, a form of ventricular tachycardia.
Pharmacokinetics
The drug is administered orally and is completely absorbed. It shows few or no interactions with other drugs and is eventually cleared by the kidney. Peak blood concentration is observed seven hours after administration of azimilide. Metabolic clearance is mediated through several pathways:
10% is found unchanged in the blood
30% is cleared by cleavage
25% by CYP 1A1 pathway
25% by CYP 3A4
F-1292, the major metabolite of azimilide, is formed by cleavage of the azomethine bond. Unlike desmethyl azimilide, azimilide N-oxide, and azimilide carboxylate, F-1292 has no cardiovascular activity, while those three minor metabolites have class III antiarrhythmic activity. They make up only about 10% of azimilide in the blood, so their contribution is not measurable.
References
Antiarrhythmic agents
Furans
HERG blocker
Hydrazones
4-Chlorophenyl compounds
4-Methylpiperazin-1-yl compounds
Ureas
Hydantoins | Azimilide | Chemistry | 1,026 |
78,289,889 | https://en.wikipedia.org/wiki/Polycab%20India | Polycab India Limited is an Indian electrical equipment company based in Mumbai, India. The company manufactures and sells electrical products, including wires and cables, electric fans, LED lighting and luminaires, switches and switchgear, solar products, and conduits and accessories. It also operates in the engineering, procurement, and construction (EPC) sector.
In 2023, the company was ranked 161st on the Fortune India 500 list, with revenues of ₹14,206 crore. It is the largest wire and cable manufacturer in India and holds 25% to 26% of the market share in the wires and cables sector in India. As of March 2023, the company operates 28 manufacturing units in Gujarat, Maharashtra, Karnataka, Uttarakhand, Tamil Nadu, and the Union Territory of Daman, along with over 29 warehouses across India. The company is included in the MSCI Standard Index and is a constituent of the Nifty Midcap 100 Index and the BSE 200 Index.
History
The company's origins date back to 1964 with the establishment of Sind Electric Stores in Lohar Chawl by Thakurdas Jaisinghani, who had moved to Bombay from Pakistan after the Partition of India. This store dealt in various electrical products like fans, lighting fixtures, switches, and wires. By 1975, the company had established Thakur Industries, and land was acquired from the Maharashtra Industrial Development Corporation in Andheri, Mumbai, for the construction of a cable and wire manufacturing facility.
In 1983, the company formally entered the electrical goods manufacturing sector with the founding of Polycab Industries by Inder T. Jaisinghani and his brothers. A factory was established in Halol, Gujarat, for the production of PVC insulated wires and cables, copper and aluminum, and bare copper wire.
In May 2008, the company entered into a joint venture with Nexans, a French cable manufacturer. The joint venture, with an initial investment of $37 million, focused on the production of rubber cables for the shipbuilding, railway, and wind power industries. In 2009, the company entered the engineering, procurement, and construction (EPC) sector, offering services covering the design, engineering, supply, execution, and commissioning of power distribution and rural electrification projects.
In September 2008, the International Finance Corporation, the private investment arm of the World Bank, acquired a 12% stake in Polycab Wires for 551.5 crore (US$120 million). This transaction valued the company's cable manufacturing business at ₹4,600 crore (US$1 billion). In 2014, Polycab India Limited expanded its product range to include electric fans, LED lighting and luminaires, switches and switchgear, solar products, and conduits and accessories.
In 2013, the company formed a joint venture with Nexans, holding a 49% stake, to establish production facilities in Gujarat. The investment for this venture amounted to $55 million (approximately ₹320 crore).
Polycab India Limited established a 50:50 joint venture named Ryker Base with Trafigura (Singapore) in 2016 to strengthen its backward integration for copper. The company acquired full ownership in May 2020 and subsequently, Ryker Base was acquired by Hindalco Industries, a subsidiary of the Aditya Birla Group in November 2021 for ₹323 crore.
IPO
In October 2018, Polycab India filed its draft red herring prospectus (DRHP) with the Securities and Exchange Board of India for an initial public offering. The company went public in April 2019, listing on the National Stock Exchange of India and the Bombay Stock Exchange. The IPO was oversubscribed more than 52 times and raised ₹1,346 crore.
Finance
In FY24, Polycab India reported a revenue of ₹180,394 million, showing a 28% year-over-year growth. The company's EBITDA increased by 35% YoY to ₹24,918 million, with an EBITDA margin of 13.8%. The Profit After Tax (PAT) rose by 41% YoY to ₹18,029 million, and the PAT margin expanded to 10.0%. Earnings per Share (EPS) for FY24 stood at ₹118.93. The wires and cables segment accounted for 88% of the company's sales, the Fast-Moving Electrical Goods (FMEG) segment contributed 7%, and the remaining 5% came from other segments, primarily the EPC business. The company has received a long-term credit rating of AA+ (Positive) from both CRISIL and India Ratings and Research.
Controversies
In December 2023, the Income Tax Department raided 50 offices of Polycab and discovered that the company had ₹1,000 crore of unaccounted cash sales, ₹400 crore of unaccounted cash payments made on its behalf by a distributor, and ₹100 crore of non-genuine expenses.
In March 2024, Polycab India's IT infrastructure was targeted by a ransomware attack attributed to the Lockbit group. The company subsequently reported that the incident did not have a significant impact on its core systems and operations.
Awards and recognition
Polycab India has been listed in the Fortune India 500 list for five consecutive years, from 2019 to 2023. Inder Jaisinghani, the chairman and managing director of Polycab India, and his family were ranked at #32 in Forbes India's 2023 list of the 100 Richest Individuals, with a net worth of $6.4 billion (approximately ₹53,298 crores). Jaisinghani was also featured on the Forbes India Rich List in 2022, ranking #60 with a net worth of $3.4 billion, and in 2021, ranking #57 with a net worth of $3.6 billion.
The company has been awarded the title of "Superbrand" by Superbrands India. The company was awarded a silver medal in the Consumer Durable category at the ET Brand Equity's 2023 Trendies Awards by The Economic Times. In 2019, the Employer Branding Institute recognized Polycab India with a National Best Employer Brand Award.
References
Companies based in Mumbai
Indian brands
Electrical engineering companies
Manufacturing companies of India
Companies established in 1964
Companies listed on the Bombay Stock Exchange
Companies listed on the National Stock Exchange of India
Electrical equipment manufacturers
Engineering companies of India
1964 establishments in Maharashtra
2019 initial public offerings | Polycab India | Engineering | 1,350 |
53,767,273 | https://en.wikipedia.org/wiki/Museum%20of%20Failure | The Museum of Failure is a museum that features a collection of failed products and services. The touring exhibition provides visitors with a learning experience about the critical role of failure in innovation and encourages organizations to become better at learning from failure. Samuel West's 2016 visit to the Museum of Broken Relationships in Zagreb inspired the concept of the museum. Museum founder and curator Samuel West reportedly registered a domain name for the museum and later realized he had misspelled the word museum. The Swedish Innovation Authority (Vinnova) partially funded the museum. The exhibition opened on 7 June 2017 in Helsingborg. The exhibit reopened at Dunkers Kulturhus on 2 June 2018, before closing in January 2019. A temporary exhibit opened in Los Angeles in December 2017. The Los Angeles museum was on Hollywood Boulevard in the Hollywood & Highland Center. The exhibit opened in January – March 2019 at Shanghai, No.1 Center (上海第一百货). And in December 2019 a smaller version opened in Paris, France at the Cité des Sciences et de l'Industrie along with other interesting failure-related exhibitions for the "Festival of Failures" (Les Foirés festival des flops, des bides, des ratés et des inutiles).
According to West, the goal of the museum is to help people recognize the "need to accept failure if we want progress", and to encourage companies to learn more from their failures without resorting to "cliches".
The collection consists of over 150 failed products and services worldwide. Some examples of the items on display include the Apple Newton, Bic for Her, Google Glass, N-Gage, lobotomy instruments, Harley-Davidson Cologne, Kodak DC-40, Sony Betamax, Lego Fiber Optics, the My Friend Cayla talking doll, and Coca-Cola BlāK.
The museum's package of Colgate lasagna is a replica since the company refused to send a real package of the short-lived 1960s product. In May 2020, the museum made most of the collection of artifacts available for viewing on its website.
See also
Fail fast (business)
Museum of Broken Relationships
References
Further reading
Danner, J., & Coopersmith, M. (2015). The Other "F" Word: How Smart Leaders, Teams, and Entrepreneurs Put Failure to Work. John Wiley & Sons.
Cannon, M. D., & Edmondson, A. C. (2005). Failing to learn and learning to fail (intelligently): How great organizations put failure to work to innovate and improve. Long Range Planning, 38(3), 299–319.
Khanna, R., Guler, I., & Nerkar, A. (2016). Fail often, fail big, and fail fast? Learning from small failures and R&D performance in the pharmaceutical industry. Academy of Management Journal, 59(2), 436–459.
What Google Learned From Its Quest to Build the Perfect Team, New York Times, 28 February 2016.
Frazier, M. L., Fainshmidt, S., Klinger, R. L., Pezeshkan, A., & Vracheva, V. (2017). Psychological safety: A meta‐analytic review and extension. Personnel Psychology, 70(1), 113–165.
Agarwal, P., & Farndale, E. (2017). High‐performance work systems and creativity implementation: the role of psychological capital and psychological safety. Human Resource Management Journal.
West, S., & Shiu, E. C. C. (2014). Play as a facilitator of organizational creativity. Creativity research: An inter-disciplinary and multi-disciplinary research handbook (2014), 191–206.
External links
2017 establishments in Europe
Museums established in 2017
Museums in Sweden
Technology museums
Culture in Helsingborg
Failure
Market failure
Technological failures
Innovation | Museum of Failure | Technology | 805 |
7,385,204 | https://en.wikipedia.org/wiki/MTD%20%28mobile%20network%29 | MTD (Swedish abbreviation for Mobiltelefonisystem D, or Mobile telephony system D) was a manual mobile phone system for the 450 MHz frequency band. It was introduced in 1971 in Sweden, and lasted until 1987, when it was made obsolete by the NMT automatic service. The MTD network had 20,000 users at its peak, with 700 people employed as phone operators.
MTD was also implemented in Denmark and in Norway (from 1976), which allowed roaming within the Scandinavian countries.
MTA
In Sweden, the first mobile phone system was MTA (for Mobiltelefonisystem A), which was introduced in 1956, and lasted until 1967. It was a 160 MHz system available in Stockholm and Gothenburg, with 125 total subscribers. The second system, MTB (for Mobiltelefonisystem B), had transistorized mobile sets, was introduced in 1962, and lasted until 1983. It operated in the 76–77.5 and 81–82.5 MHz bands, was also available in Malmö, and had around 600 subscribers.
OLT
In Norway, the first mobile phone system was OLT, introduced in 1966. In 1976, the OLT system was extended to include UHF bands, incorporating MTD and allowing roaming with Sweden.
References
External links
Brief description of MTD as well as MTA and MTB
Mobile radio telephone systems | MTD (mobile network) | Technology | 288 |
48,780,777 | https://en.wikipedia.org/wiki/Pleistophora%20mulleri | Pleistophora mulleri (Pfeiffer) Georgev. 1929 is a parasite of the amphipod Gammarus duebeni celticus (a sub-species of the boreal atlantic species of Gammarus, formerly Gammarus fabricius 1775, then later, Gammarus duebeni Lilljeborg 1852). The parasite targets the freshwater shrimp species and has shown higher rates of cannibalism, which in turn, affects biological communities and ecology.
Development
The parasite begins development within the host's muscles as merogonial plasmodia. Sporogony occurs when the amorphous coat separates from the plasmalemma, and a merontogenic sporophorous vesicle then forms around the sporont. The sporonts continue development by transforming into sporoblasts; this transition from sporont to sporoblast is known as morphogenesis. The life cycle yields a mature sporoblast complete with organelles.
Transmission
P. mulleri was discovered to be transmitted by cannibalism. The parasite is transmitted directly when an uninfected shrimp consumes tissue infected with the Pleistophora parasite. Pleistophora is host specific: P. mulleri has never been reported from G. duebeni duebeni or from non-Irish G. celticus populations, indicating a homoxenous life cycle.
Host
The host, Gammarus duebeni celticus, is a freshwater shrimp that inhabits Europe's "Celtic Fringe." Although the host is widely distributed throughout this region, amphipod infection with P. mulleri is localized to certain areas, such as Ireland, because P. mulleri requires water of low salinity. Infected amphipods show an increased rate of cannibalism within their communities: when the parasite is present, cannibalism is favored because it increases transmission rates. Consumption of infected tissue is 23% efficient for transmission.
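A toy stochastic sketch of this transmission route can make the dynamics concrete. Everything here except the 23% per-meal transmission efficiency (population size, number of meals, initial prevalence) is an invented assumption, not data from the studies described:

```python
import random

random.seed(1)  # fixed seed for a reproducible toy run

def simulate(population=200, initially_infected=20, meals=500,
             transmission_prob=0.23):
    """Each meal, one random shrimp cannibalises another; eating
    infected tissue infects the eater with probability 0.23, and the
    eaten shrimp is removed. Returns the final infection prevalence."""
    infected = ([True] * initially_infected
                + [False] * (population - initially_infected))
    random.shuffle(infected)
    for _ in range(meals):
        if len(infected) < 2:
            break
        eater, eaten = random.sample(range(len(infected)), 2)
        if infected[eaten] and random.random() < transmission_prob:
            infected[eater] = True
        infected.pop(eaten)  # the cannibalised shrimp is removed
    return sum(infected) / len(infected)

prevalence = simulate()
```

Even this crude model shows how cannibalism alone can keep the parasite circulating in a shrinking host population; the real system adds the prey-choice biases described below.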
Cannibalism
Cannibalistic events differ when P. mulleri infects G. celticus. Once G. celticus ingests the microsporidian parasite, a "white tubular mass [forms] within the muscular tissue [of the host]". The mass lowers the infected shrimp's activity levels, foraging ability, and mate-guarding ability. Since infected shrimp have lowered defenses, they are more likely to be eaten by adults of G. celticus, thus transmitting the parasite to other, uninfected amphipods. Given a choice between uninfected and infected shrimp, adults tend to cannibalize the healthier G. celticus, regardless of whether the target is a juvenile or an adult. In contrast, infected G. celticus do not discriminate between uninfected and infected individuals. The parasite benefits in either situation.
The rate of cannibalism is influenced by interactions between species. When competition arises from the invasion of another shrimp species, Gammarus celticus changes its behavior, and competition between shrimp species over common prey is favored over cannibalism. For instance, when G. pulex and G. celticus are present in the same area, instead of going after their common prey, C. pseudogracilis, they engage in intraguild predation: the two species interact with one another and shift focus away from their prey. Thus the interaction between species can facilitate rates of cannibalism.
Since Pleistophora affects the rates of cannibalism, it further influences inter- and intraspecies interaction dynamics. The parasite makes G. celticus easier to invade, which in turn causes G. pulex to consume more shrimp upon arrival. This decreases the survival rate of G. celticus while increasing the relative fitness of G. pulex. Even though the invaders deplete the shrimp population, the number of parasites does not decrease, because "the mixed 'feeding frenzies' that form during cannibalism and intraguild predation (IGP) are likely to promote persistence of the parasite in G. duebeni celticus, even when G. pulex invades." Thus, the Pleistophora parasite mediates interspecies predation by decreasing the fitness of G. celticus.
Influence on ecology
Pleistophora mulleri affects ecology by influencing the density of shrimp within an area. Because the parasite is located in the muscle cells, the host's movement is hindered. Since G. celticus cannot travel far, the parasite confines it to certain patches, which gives Gammarus pulex an opportunity to expand its territory. This in turn affects population growth rates, as more shrimp spread over a wider distribution. Even though G. pulex and G. celticus have similar carrying capacities, reproductive output may be higher in G. pulex, potentially leading to a higher intrinsic population growth rate. Because G. pulex is able to reproduce and expand its dominance over G. celticus, the potential consequence is overpopulation. Over time, G. celticus could cease to exist, since the environment can support only a limited number of organisms. Therefore, P. mulleri affects natural ecology by allowing other species to flourish and expand their territories.
References
Microsporidia
Parasitic fungi
Fungus species | Pleistophora mulleri | Biology | 1,154 |
7,522,410 | https://en.wikipedia.org/wiki/Provisional%20stamp | Linn's World Stamp Almanac defines a provisional stamp as "a postage stamp issued for temporary use to meet postal demands until new or regular stocks of stamps can be obtained."
The issuance of provisional stamps might be occasioned by a change in name or government, by occupation of foreign territory, by a change in postal rates, by a change of currency, or by the need to provide stamps that are in short supply. An interesting example of issuing provisional stamps occurred during the Spanish–American War when supplies of stamps were low and the U.S. had occupation forces in Cuba. They are known as the "Puerto Principe" provisional stamps of 1898–1899. Over 40 different combinations of overprinted valuations and underlying Spanish Cuban stamps were produced under the auspices of the military forces over a three-week period from December 19, 1898, to January 11, 1899. These were replaced by another provisional set produced by overprinting U.S. stamps in the United States for Cuba. This second set of provisional stamps was sold for about eight months before the U.S. could print Cuban stamps. The U.S. civilian provisionals also included overprinted postal cards and stamped envelopes.
Provisional stamps are usually made by overprinting, surcharging and occasionally by bisecting pre-existing stamps.
Postmasters' provisionals
A subcategory, postmasters' provisionals, of particular importance in United States philately, comprises stamps that were issued by local postmasters in nations that had not yet begun to issue stamps for countrywide use. Between 1845, when the United States standardized national postage rates, and 1847, when the post office issued its first stamps, postmasters' provisionals were introduced in eleven American cities, including New York; Providence, Rhode Island; and St. Louis, Missouri. Many of these stamps (particularly from smaller cities such as Millbury, Massachusetts) are notable for their great rarity, or for their relative crudity of design.
Postmasters' provisionals also played a significant role in the early history of the Confederate States of America. Many localities began furnishing them after U.S. mail service ceased delivering Confederate mail in June 1861, since Confederate stamps for nationwide use first appeared only in October of that year.
See also
St. Louis Bears
United States postmasters provisional stamps
U.S. provisional issue stamps
Postage stamps and postal history of the Confederate States
References
Postage stamps
Postal systems
Philatelic terminology | Provisional stamp | Technology | 504 |
1,842,477 | https://en.wikipedia.org/wiki/Prosthaphaeresis | Prosthaphaeresis (from the Greek προσθαφαίρεσις) was an algorithm used in the late 16th century and early 17th century for approximate multiplication and division using formulas from trigonometry. For the 25 years preceding the invention of the logarithm in 1614, it was the only known generally applicable way of approximating products quickly. Its name comes from the Greek prosthen (πρόσθεν) meaning before and aphaeresis (ἀφαίρεσις), meaning taking away or subtraction.
In ancient times the term was used to mean a reduction to bring the apparent place of a moving point or planet to the mean place (see Equation of the center).
Nicholas Copernicus mentions "prosthaphaeresis" several times in his 1543 work De revolutionibus orbium coelestium, to mean the "great parallax" caused by the displacement of the observer due to the Earth's annual motion.
History and motivation
In 16th-century Europe, celestial navigation of ships on long voyages relied heavily on ephemerides to determine their position and course. These voluminous charts prepared by astronomers detailed the position of stars and planets at various points in time. The models used to compute these were based on spherical trigonometry, which relates the angles and arc lengths of spherical triangles (see diagram, right) using formulas such as the spherical law of cosines,

cos(a) = cos(b) cos(c) + sin(b) sin(c) cos(A),

and its permutations, where a, b and c are the angles subtended at the centre of the sphere by the corresponding arcs, and A is the angle between the arcs b and c.
When one quantity in such a formula is unknown but the others are known, the unknown quantity can be computed using a series of multiplications, divisions, and trigonometric table lookups. Astronomers had to make thousands of such calculations, and because the best method of multiplication available was long multiplication, most of this time was spent taxingly multiplying out products.
Mathematicians, particularly those who were also astronomers, were looking for an easier way, and trigonometry was one of the most advanced and familiar fields to these people. Prosthaphaeresis appeared in the 1580s, but its originator is not known for certain; its contributors included the mathematicians Ibn Yunis, Johannes Werner, Paul Wittich, Joost Bürgi, Christopher Clavius, and François Viète. Wittich, Ibn Yunis, and Clavius were all astronomers and have all been credited by various sources with discovering the method. Its most well-known proponent was Tycho Brahe, who used it extensively for astronomical calculations such as those described above. It was also used by John Napier, who is credited with inventing the logarithms that would supplant it.
The identities
The trigonometric identities exploited by prosthaphaeresis relate products of trigonometric functions to sums. They include the following:

sin a sin b = [cos(a − b) − cos(a + b)] / 2
cos a cos b = [cos(a − b) + cos(a + b)] / 2
sin a cos b = [sin(a + b) + sin(a − b)] / 2
cos a sin b = [sin(a + b) − sin(a − b)] / 2

The first two of these are believed to have been derived by Jost Bürgi, who related them to Tycho Brahe; the others follow easily from these two. If both sides are multiplied by 2, these formulas are also called the Werner formulas.
The algorithm
Using the second formula above, the technique for multiplication of two numbers x and y works as follows:
Scale down: By shifting the decimal point to the left or right, scale both numbers to values between 0 and 1, to be referred to as x′ and y′.
Inverse cosine: Using an inverse cosine table, find two angles α and β whose cosines are our two values.
Sum and difference: Find the sum α + β and the difference α − β of the two angles.
Average the cosines: Find the cosines of the sum and difference angles using a cosine table and average them, giving (according to the second formula above) the product x′y′.
Scale up: Shift the decimal place in the answer the combined number of places we have shifted the decimal in the first step for each input, but in the opposite direction.
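The five steps can be condensed into a short Python sketch. The function name is my own, and math.acos and math.cos stand in for the inverse-cosine and cosine tables a 16th-century calculator would have consulted; it assumes positive inputs and relies on the identity cos(p)cos(q) = [cos(p + q) + cos(p − q)] / 2.

```python
import math

def prosthaphaeresis_multiply(x, y):
    """Approximate x * y using only cosines, inverse cosines, addition,
    and halving, via cos(p)cos(q) = [cos(p+q) + cos(p-q)] / 2.
    A sketch: math.acos and math.cos stand in for printed tables;
    assumes x, y > 0."""
    # Scale down: shift decimal points until both factors lie in (0, 1].
    shift = 0
    while x > 1:
        x /= 10
        shift += 1
    while y > 1:
        y /= 10
        shift += 1
    # Inverse cosine: angles whose cosines are the scaled values.
    p, q = math.acos(x), math.acos(y)
    # Sum and difference, then average the two cosines.
    product = (math.cos(p + q) + math.cos(p - q)) / 2
    # Scale up: undo the decimal shifts, in the opposite direction.
    return product * 10 ** shift

print(prosthaphaeresis_multiply(105, 720))  # ≈ 75600.0
```

With exact trigonometric functions the identity is exact; historically, all of the error came from the finite precision of the tables, as discussed below.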
For example, to multiply and :
Scale down: Shift the decimal point three and two places to the left, respectively. We get and .
Inverse cosine: , and .
Sum and difference: , and .
Average the cosines: is about .
Scale up: For each of and we shifted the decimal point a total of five places to the left, so in the answer we shift five places to the right. The result is . This is very close to the actual product, (a percent error of ≈0.003%).
If we want the product of the cosines of the two initial values, which is useful in some of the astronomical calculations mentioned above, this is surprisingly even easier: only steps 3 and 4 above are necessary.
To divide, we exploit the definition of the secant as the reciprocal of the cosine. To divide by , we scale the numbers to and . Now is the cosine of . Using a table of secants, we find is the secant of . This means that , and so we can multiply by using the above procedure. Average the cosine of the sum of the angles, , with the cosine of their difference, ,
Scaling up to locate the decimal point gives the approximate answer, .
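The division procedure can be sketched the same way. Again the function name is mine and the math module stands in for the secant, cosine and inverse-cosine tables; the divisor's reciprocal is obtained through the secant relation and the result is then formed with the same cosine product identity.

```python
import math

def prosthaphaeresis_divide(x, y):
    """Approximate x / y by rewriting it as the product x * (1/y):
    if the scaled divisor equals cos(theta), then 1/y is sec(theta),
    read from a secant table. A sketch; assumes x, y > 0."""
    # Scale both operands into (0, 1] by shifting decimal points.
    xs = ys = 0
    while x > 1:
        x /= 10
        xs += 1
    while y > 1:
        y /= 10
        ys += 1
    # Reciprocal of the scaled divisor via the secant relation.
    theta = math.acos(y)          # y = cos(theta)
    recip = 1 / math.cos(theta)   # secant-table lookup: sec(theta) = 1/y
    # The secant is >= 1, so scale it into (0, 1] as well.
    rs = 0
    while recip > 1:
        recip /= 10
        rs += 1
    # Multiply the scaled dividend and reciprocal with the cosine identity.
    p, q = math.acos(x), math.acos(recip)
    quotient = (math.cos(p + q) + math.cos(p - q)) / 2
    # Undo all decimal shifts: xs and rs scaled the factors down,
    # while ys scaled the divisor down (i.e., the reciprocal up).
    return quotient * 10 ** (xs + rs - ys)

print(prosthaphaeresis_divide(75600, 720))  # ≈ 105.0
```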
Algorithms using the other formulas are similar, but each using different tables (sine, inverse sine, cosine, and inverse cosine) in different places. The first two are the easiest because they each only require two tables. Using the second formula, however, has the unique advantage that if only a cosine table is available, it can be used to estimate inverse cosines by searching for the angle with the nearest cosine value.
Notice how similar the above algorithm is to the process for multiplying using logarithms, which follows these steps: scale down, take logarithms, add, take inverse logarithm, scale up. It is no surprise that the originators of logarithms had used prosthaphaeresis. Indeed, the two are closely related mathematically. In modern terms, prosthaphaeresis can be viewed as relying on the logarithm of complex numbers, in particular on Euler's formula e^(iθ) = cos θ + i sin θ.
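As a sketch of that connection: writing each cosine in terms of complex exponentials via Euler's formula e^(iθ) = cos θ + i sin θ turns the product identity into bookkeeping of exponents, just as logarithms turn products into sums:

```latex
\cos a \cos b
  = \frac{e^{ia} + e^{-ia}}{2} \cdot \frac{e^{ib} + e^{-ib}}{2}
  = \frac{e^{i(a+b)} + e^{-i(a+b)} + e^{i(a-b)} + e^{-i(a-b)}}{4}
  = \frac{\cos(a+b) + \cos(a-b)}{2}.
```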
Decreasing the error
If all the operations are performed with high precision, the product can be as accurate as desired. Although sums, differences, and averages are easy to compute with high precision, even by hand, trigonometric functions and especially inverse trigonometric functions are not. For this reason, the accuracy of the method depends to a large extent on the accuracy and detail of the trigonometric tables used.
For example, a sine table with an entry for each degree can be off by as much as 0.0087 if we just round an angle off to the nearest degree; each time we double the size of the table (for example, by giving entries for every half-degree instead of every degree) we halve this error. Tables were painstakingly constructed for prosthaphaeresis with values for every second of arc, or 1/3600 of a degree.
Inverse sine and cosine functions are particularly troublesome, because they become steep near −1 and 1. One solution is to include more table values in this area. Another is to scale the inputs to numbers between −0.9 and 0.9. For example, 950 would become 0.095 instead of 0.950.
Another effective approach to enhancing the accuracy is linear interpolation, which chooses a value between two adjacent table values. For example, if we know that the sine of 45° is about 0.707 and the sine of 46° is about 0.719, we can estimate the sine of 45.7° as 0.707 × (1 − 0.7) + 0.719 × 0.7 = 0.7154. The actual sine is 0.7157. A table of cosines with only 180 entries combined with linear interpolation is as accurate as a table with about entries without it. Even a quick estimate of the interpolated value is often much closer than the nearest table value. See lookup table for more details.
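The worked interpolation above can be reproduced in Python. The function name and parameters are my own; math.sin rounded to three decimal places stands in for the printed three-place sine table.

```python
import math

def interp_sin(deg, step=1.0, places=3):
    """Estimate sin(deg) by linear interpolation between two adjacent
    entries of a coarse sine table. math.sin rounded to `places`
    decimals stands in for the printed table."""
    lo = math.floor(deg / step) * step          # nearest table entry below
    frac = (deg - lo) / step                    # position between entries
    s_lo = round(math.sin(math.radians(lo)), places)
    s_hi = round(math.sin(math.radians(lo + step)), places)
    return s_lo * (1 - frac) + s_hi * frac

estimate = interp_sin(45.7)            # 0.707*(1 - 0.7) + 0.719*0.7 = 0.7154
actual = math.sin(math.radians(45.7))  # 0.71569...
print(estimate, abs(estimate - actual))
```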
Reverse identities
The product formulas can also be manipulated to obtain formulas that express addition in terms of multiplication. Although less useful for computing products, these are still useful for deriving trigonometric results:
See also
Slide rule
References
External links
Prosthaphaeresis formulas
Daniel E. Otero, Henry Briggs. Introduction: the need for speed in calculation.
Mathworld: Prosthaphaeresis formulas
Adam Mosley. Tycho Brahe and Mathematical Techniques. University of Cambridge.
IEEE Computer Society. History of computing: John Napier and the invention of logarithms.
Beatrice Lumpkin. African and African-American Contributions to Mathematics . Discusses Ibn Yunis's contribution to prosthaphaeresis.
Prosthaphaeresis and beat phenomenon in the theory of vibrations, by Nicholas J. Rose
Trigonometry
Arithmetic | Prosthaphaeresis | Mathematics | 1,849 |
319,632 | https://en.wikipedia.org/wiki/Verisign | Verisign, Inc. is an American company based in Reston, Virginia, that operates a diverse array of network infrastructure, including two of the Internet's thirteen root nameservers, the authoritative registry for the , , and generic top-level domains and the country-code top-level domains, and the back-end systems for the and sponsored top-level domains.
In 2010, Verisign sold its authentication business unit – which included Secure Sockets Layer (SSL) certificate, public key infrastructure (PKI), Verisign Trust Seal, and Verisign Identity Protection (VIP) services – to Symantec for $1.28 billion. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Symantec later sold this unit to DigiCert in 2017. On October 25, 2018, NeuStar, Inc. acquired VeriSign's Security Service Customer Contracts. The acquisition effectively transferred Verisign Inc.'s Distributed Denial of Service (DDoS) protection, Managed DNS, DNS Firewall and fee-based Recursive DNS services customer contracts.
Verisign's former chief financial officer (CFO) Brian Robins announced in August 2010 that the company would move from its original location of Mountain View, California, to Dulles in Northern Virginia by 2011 due to 95% of the company's business being on the East Coast. The company is incorporated in Delaware.
History
Verisign was founded in 1995 as a spin-off of the RSA Security certification services business. The new company received licenses to key cryptographic patents held by RSA (set to expire in 2000) and a time-limited non-compete agreement. The new company served as a certificate authority (CA) and its initial mission was "providing trust for the Internet and Electronic Commerce through our Digital Authentication services and products". Prior to selling its certificate business to Symantec in 2010, Verisign had more than 3 million certificates in operation for everything from military to financial services and retail applications, making it the largest CA in the world.
In 2000, Verisign acquired Network Solutions, which operated the .com, .net and .org TLDs under agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and the United States Department of Commerce, for $21 billion. Those core registry functions formed the basis for Verisign's naming division, which by then had become the company's largest and most significant business unit. In 2002, Verisign was charged with violation of the Securities Exchange Act. Verisign divested the Network Solutions retail (domain name registrar) business in 2003 for $100 million, retaining the domain name registry (wholesale) function as its core Internet addressing business.
For the year ended December 31, 2010, Verisign reported revenue of $681 million, up 10% from $616 million in 2009. Verisign operates two businesses, Naming Services, which encompasses the operation of top-level domains and critical Internet infrastructure, and Network Intelligence and Availability (NIA) Services, which encompasses DDoS mitigation, managed DNS and threat intelligence.
On August 9, 2010, Symantec completed its approximately $1.28 billion acquisition of Verisign's authentication business, including the Secure Sockets Layer (SSL) Certificate Services, the Public Key Infrastructure (PKI) Services, the Verisign Trust Services, the Verisign Identity Protection (VIP) Authentication Service, and the majority stake in Verisign Japan. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Following ongoing controversies regarding Symantec's handling of certificate validation, which culminated in Google untrusting Symantec-issued certificates in its Chrome web browser, Symantec sold this unit to DigiCert in 2017 for $950 Million.
On 14 December 2021, the Ministry of Justice, Communication and Foreign Affairs of the Tuvalu Government announced on Facebook that it had selected GoDaddy Registry as the new registry service provider for the .tv domain after Verisign did not participate in the renewal process.
In 2011, Verisign was selected by the General Services Administration (GSA) to operate the registry services for the .gov top-level domain. It continued to operate the service until 2023, when the Cybersecurity and Infrastructure Security Agency (CISA) chose Cloudflare to replace Verisign as the .gov operator.
Verisign's share price tumbled in early 2014, hastened by the U.S. government's announcement that it would "relinquish oversight of the Internet's domain-naming system to a non-government entity". Ultimately ICANN chose to continue VeriSign's role as the root zone maintainer and the two entered into a new contract in 2016.
Naming services
Verisign's core business is its naming services division. The division operates the authoritative domain name registries for two of the Internet's most important top-level domains, .com and .net, as well as .name. It is the primary technical subcontractor for the .edu and .jobs top-level domains for their respective registry operators, which are non-profit organizations; in this role Verisign maintains the zone files for these particular domains and hosts the domains from their domain servers. In addition, Verisign is also the contracted registry operator for the .cc country code top-level domain (Cocos Islands). Registry operators are the "wholesalers" of Internet domain names, while domain name registrars act as the "retailers", working directly with consumers to register a domain name address. It formerly was the contracted registry for the .gov top-level domain as well as for the country code top-level domain .tv (Tuvalu).
Verisign also operates two of the Internet's thirteen "root servers" which are identified by the letters A-M (Verisign operates the “A” and “J” root servers). The root servers form the top of the hierarchical Domain Name System that supports most modern Internet communication. Verisign also generates the globally recognized root zone file and is also responsible for processing changes to that file once they are ordered by ICANN via IANA and approved by the U.S. Department of Commerce. Changes to the root zone were originally distributed via the A root server, but now they are distributed to all thirteen servers via a separate distribution system which Verisign maintains. Verisign is the only one of the 12 root server operators to operate more than one of the thirteen root nameservers. The A and J root servers are "anycasted” and are no longer operated from any of the company's own datacenters as a means to increase redundancy and availability and mitigate the threat of a single point of failure. In 2016, the Department of Commerce ended its role in managing the Internet's DNS and transferred full control to ICANN. While this initially negatively impacted VeriSign's stock, ICANN eventually chose to contract with Verisign to continue its role as the root zone maintainer.
VeriSign's naming services division dates back to 1993 when Network Solutions was awarded a contract by the National Science Foundation to manage and operate the civilian side of the Internet's domain name registrations. Network Solutions was the sole registrar for all of the Internet's non-governmental generic top-level domains until 1998 when ICANN was established and the new system of competitive registrars was implemented. As a result of these new policies, Network Solutions divided itself into two divisions. The NSI Registry division was established to manage the authoritative registries that the company would still operate, and was separated from the customer-facing registrar business that would have to compete with other registrars. The divisions were even geographically split with the NSI Registry moving from the corporate headquarters in Herndon, Virginia, to nearby Dulles, Virginia. In 2000, VeriSign purchased Network Solutions taking over its role in the Internet's DNS. The NSI Registry division eventually became VeriSign's naming services division while the remainder of Network Solutions was later sold by Verisign in 2003 to Pivotal Equity Group.
Company properties
Following the sale of its authentication services division in 2010, Verisign relocated from its former headquarters in Mountain View, California, to the headquarters of the naming division in Sterling, Virginia (originally NSI Registry's headquarters). Verisign began shopping that year for a new permanent home shortly after moving. They signed a lease for 12061 Bluemont Way in Reston, the former Sallie Mae headquarters, in 2010 and decided to purchase the building in September 2011. They have since terminated their lease of their current space in two buildings at Lakeside@Loudoun Technology Center. The company completed its move at the end of November 2011. The new headquarters is located in the Reston Town Center development which has become a major commercial and business hub for the region. In addition to its Reston headquarters, Verisign owns three data center properties. One at 22340 Dresden Street in Dulles, Virginia, not far from its corporate headquarters (within the large Broad Run Technology Park), one at 21 Boulden Circle in New Castle, Delaware, and a third in Fribourg, Switzerland. Their three data centers are mirrored so that a disaster at one data center has a minimal impact on operations. Verisign also leases an office suite in downtown Washington, D.C., on K street where its government relations office is located. It also has leased server space in numerous internet data centers around the world where the DNS constellation resolution sites are located, mostly at major internet peering facilities. One such facility is at the Equinix Ashburn Datacenter in Ashburn, Virginia, one of the world's largest datacenters and internet transit hubs.
Controversies
2001: Code signing certificate mistake
In January 2001, Verisign mistakenly issued two Class 3 code signing certificates to an individual claiming to be an employee of Microsoft. The mistake was not discovered and the certificates were not revoked until two weeks later during a routine audit. Because Verisign code-signing certificates do not specify a Certificate Revocation List Distribution Point, there was no way for them to be automatically detected as having been revoked, placing Microsoft's customers at risk. Microsoft had to later release a special security patch in order to revoke the certificates and mark them as being fraudulent.
2002: Domain transfer law suit
In 2002, Verisign was sued for domain slamming – transferring domains from other registrars to themselves by making the registrants believe they were merely renewing their domain name. Although they were found not to have broken the law, they were barred from suggesting that a domain was about to expire or claim that a transfer was actually a renewal.
2003: Site Finder legal case
In September 2003, Verisign introduced a service called Site Finder, which redirected Web browsers to a search service when users attempted to go to non-existent .com or .net domain names. ICANN asserted that Verisign had overstepped the terms of its contract with the U.S. Department of Commerce, which in essence grants Verisign the right to operate the DNS for .com and .net, and Verisign shut down the service. Subsequently, Verisign filed a lawsuit against ICANN in February 2004, seeking to gain clarity over what services it could offer in the context of its contract with ICANN. The claim was moved from federal to California state court in August 2004. In late 2005, Verisign and ICANN announced a proposed settlement which defined a process for the introduction of new registry services in the .com registry. The documents concerning these settlements are available at ICANN.org. The ICANN comments mailing list archive documents some of the criticisms that have been raised regarding the settlement. Additionally, Verisign was involved in the matter decided by the Ninth Circuit.
2003: Gives up .org domain
In keeping with ICANN's charter to introduce competition to the domain name marketplace, Verisign agreed to give up its operation of the .org top-level domain in 2003 in exchange for a continuation of its contract to operate .com, which, at the time, had more than 34 million registered addresses.
2005: Retains .net domain
In mid-2005, the existing contract for the operation of .net expired and five companies, including Verisign, bid for management of it. Verisign enlisted numerous IT and telecom heavyweights, including Microsoft, IBM, Sun Microsystems, MCI, and others, to assert that Verisign had a perfect record operating .net. They proposed that Verisign continue to manage the .net DNS due to its critical importance as the domain underlying numerous "backbone" network services. Verisign was also aided by the fact that several of the other bidders were based outside the United States, which raised concerns in national security circles. On June 8, 2005, ICANN announced that Verisign had been approved to operate .net until 2011. More information on the bidding process is available at ICANN. On July 1, 2011, ICANN announced that Verisign's approval to operate .net was extended another six years, until 2017.
2010: Data breach and disclosure controversy
In February 2012, Verisign revealed that their network security had been repeatedly breached in 2010. Verisign stated that the breach did not impact the Domain Name System (DNS) that they maintain, but would not provide details about the loss of data. Verisign was widely criticized for not disclosing the breach earlier and apparently attempting to hide the news in an October 2011 SEC filing.
Because of the lack of details provided by Verisign, it was not clear whether the breach impacted the certificate signing business, acquired by Symantec in late 2010. Some, such as Oliver Lavery, the Director of Security and Research for nCircle, doubted whether sites using Verisign SSL certificates could be trusted.
2010: Web site domain seizures
On November 29, 2010, the U.S. Immigration and Customs Enforcement (U.S. ICE) issued seizure orders against 82 web sites with Internet addresses that were reported to be involved in the illegal sale and distribution of counterfeit goods. As registry operator for .com, Verisign performed the required takedowns of the 82 sites under order from law enforcement. InformationWeek reported that "Verisign will say only that it received sealed court orders directing certain actions to be taken with respect to specific domain names". The removal of the 82 websites was cited as an impetus for the launch of "the Dot-P2P Project", an effort to create a decentralized DNS service without centralized registry operators. Following the disappearance of WikiLeaks during the following week and its forced move to wikileaks.ch, a Swiss domain, the Electronic Frontier Foundation warned of the dangers of having key pieces of Internet infrastructure such as DNS name translation under corporate control.
2012: Web site domain seizure
In March 2012, the U.S. government declared that it has the right to seize domains ending in .com, .net, .cc, .tv, .name, and .org if the companies administering the domains are based in the U.S. The U.S. government can seize the domains ending in .com, .net, .cc, .tv, and .name by serving a court order on Verisign, which manages those domains. The .org domain is managed by the Virginia-based non-profit Public Interest Registry. In March 2012, Verisign shut down the sports-betting site Bodog.com after receiving a court order, even though the domain name was registered to a Canadian company.
References
External links
Digicert SSL Certificates - formerly from Verisign
Oral history interview with James Bidzos, Charles Babbage Institute University of Minnesota, Minneapolis. Bidzos discusses his leadership of software security firm RSA Data Security as it sought to commercialize encryption technology as well as his role in creating the RSA Conference and founding Verisign. Oral history interview 2004, Mill Valley, California.
Internet technology companies of the United States
American companies established in 1995
Domain Name System
Computer companies established in 1995
Companies based in Reston, Virginia
Companies listed on the Nasdaq
Former certificate authorities
Radio-frequency identification
Domain name registries
1995 establishments in Virginia
DDoS mitigation companies
1998 initial public offerings
Corporate spin-offs
Domain name seizures by United States | Verisign | Engineering | 3,340 |
41,224,120 | https://en.wikipedia.org/wiki/Guan%20ware | Guan ware or Kuan ware () is one of the Five Famous Kilns of Song dynasty China, making high-status stonewares, whose surface decoration relied heavily on crackled glaze, randomly crazed by a network of crack lines in the glaze.
Guan means "official" in Chinese and Guan ware was, most unusually for Chinese ceramics of the period, the result of an imperial initiative resulting from the loss of access to northern kilns such as those making Ru ware and Jun ware after the invasion of the north and the flight of a Song prince to establish the Southern Song at a new capital at Hangzhou, Zhejiang province. It is usually assumed that potters from the northern imperial kilns followed the court south to man the new kilns.
In some Asian sources "Guan ware" may be used in the literally translated sense to cover any "official" wares ordered by the Imperial court. In April 2015, Liu Yiqian paid US$14.7 million for a Guan ware vase from the Southern Song.
Dating and kiln sites
The new Southern Song court was established in Hangzhou in 1127, but some time probably elapsed before the kiln was established; this may not have been until after hostilities with the invaders were concluded in 1141. According to Chinese historical sources, the first kiln was actually within or beside the palace precinct, described as in the "back park", and was called, or was located at, "Xiuneisi". Various places around the city have been explored, and ceramic remains found, but perhaps because of subsequent building on the site, the location of this kiln remained uncertain, and it is now thought that the name might refer to the controlling office rather than the actual kiln site. Following excavations starting in 1996, it is now thought that the site has been found, as the Laohudong or Tiger Cave Kiln [老虎洞窑] on the outskirts of the city. An old Yue ware dragon kiln had been revived, but the official wares were made in a northern-style mantou kiln, rare this far south.
A second kiln was established later at Jiaotanxia ("Altar of Heaven" or "Suburban Altar"), on the outskirts of the new capital; this has been identified and excavated. In Chinese contemporary sources these wares were regarded as rather inferior to those from the first kiln, and the excavated sherds are very similar to those of the nearby Longquan celadon kilns. Indeed, Longquan may have helped out when the Guan kilns could not fulfill orders by themselves.
The end date of Guan ware is uncertain, but it probably persisted until 1400 or later, as the Ge Gu Yao Lun, a fourteenth century Ming dynasty manual on ceramics by Cao Zhao, seems to treat it as being still produced.
Characteristics
Guan ware is not difficult to distinguish from the Ru ware which it perhaps tries to imitate, but wares from the second site can be very similar indeed to Longquan ware, and it has been suggested that some was made there. Crackled glaze is usual, but perhaps was not at this time a desired effect, as it certainly became in imitations centuries later. Alternatively it was originally produced accidentally, but within the Guan period became deliberate. In surviving examples the effect is probably often more striking than it would have been originally, either because collectors have chemically enhanced them, through gradual oxidation over time, or from staining in use.
Three qualities of the ware are recorded in old sources, and can be identified in surviving examples. The best had a grey-blue glaze on a thin body, with wide crackle, followed by a greener glaze with a denser crackle, then finally "almost a pale grey brown" with a "very dark close crackle on a dark grey body" that was rather thicker; all are illustrated here, with the types indicated by 1–3 (which is not a standard terminology).
The crackle arises during cooling, when the coefficient of expansion differs between the glaze and the body. There are several layers of glaze, and the glaze is often thicker than the clay body, as can be seen in sherds. The crackle does not occur through all layers. Most shapes were wheel thrown, but moulds and slab-building were also used. Less usual shapes include those derived from ancient ritual bronzes and jade congs. Bowls and dishes often have "lobed or indented rims".
Imitations
Guan ware is "the most frequently copied of all Chinese wares", and the imitations began immediately, at the many southern kilns producing Longquan celadon and other wares. Imitations in Jingdezhen porcelain seem to have begun under the Yuan dynasty and continue to the present day; these are often hard to date.
Notes
References
Gompertz, G.St.G.M., Chinese Celadon Wares, 1980 (2nd edn.), Faber & Faber,
Kerr, Rose, Needham, Joseph, Wood, Nigel, Science and Civilisation in China: Volume 5, Chemistry and Chemical Technology, Part 12, Ceramic Technology, 2004, Cambridge University Press, , 9780521838337
Koh, NK, Koh Antiques, Singapore, "Guan wares" (covering official wares)
Krahl, Regina, Oxford Art Online, "Guan and Ge wares", section in "China, §VIII, 3: Ceramics: Historical development"
Medley, Margaret, The Chinese Potter: A Practical History of Chinese Ceramics, 3rd edition, 1989, Phaidon,
Vainker, S.J., Chinese Pottery and Porcelain, 1991, British Museum Press, 9780714114705
Valenstein, S. (1998). A handbook of Chinese ceramics, Metropolitan Museum of Art, New York. (fully online)
Chinese pottery
Kilns
Chinese pottery kiln sites | Guan ware | Chemistry,Engineering | 1,234 |
36,842,642 | https://en.wikipedia.org/wiki/Clazuril | Clazuril is a drug used in veterinary medicine as a coccidiostat.
See also
Diclazuril
Ponazuril
Toltrazuril
References
Veterinary drugs
Antiprotozoal agents
4-Chlorophenyl compounds | Clazuril | Biology | 54 |
25,490,426 | https://en.wikipedia.org/wiki/OpenSSH | OpenSSH (also known as OpenBSD Secure Shell) is a suite of secure networking utilities based on the Secure Shell (SSH) protocol, which provides a secure channel over an unsecured network in a client–server architecture.
OpenSSH started as a fork of the free SSH program developed by Tatu Ylönen; later versions of Ylönen's SSH were proprietary software offered by SSH Communications Security. OpenSSH was first released in 1999 and is currently developed as part of the OpenBSD operating system.
OpenSSH is not a single computer program, but rather a suite of programs that serve as alternatives to unencrypted protocols like Telnet and FTP. OpenSSH is integrated into several operating systems, including Microsoft Windows, macOS and most Linux distributions, while the portable version is available as a package on other systems.
History
OpenBSD Secure Shell was created by OpenBSD developers as an alternative to the original SSH software by Tatu Ylönen, which is now proprietary software. Although source code is available for the original SSH, various restrictions are imposed on its use and distribution. OpenSSH was created as a fork of Björn Grönvall's OSSH that itself was a fork of Tatu Ylönen's original free SSH 1.2.12 release, which was the last one having a license suitable for forking. The OpenSSH developers claim that their application is more secure than the original, due to their policy of producing clean and audited code and because it is released under the BSD license, the open-source license to which the word open in the name refers.
OpenSSH first appeared in OpenBSD 2.6. The first portable release was made in October 1999. Developments since then have included the addition of ciphers (e.g., ChaCha20-Poly1305 in 6.5 of January 2014), cutting the dependency on OpenSSL (6.7, October 2014) and an extension to facilitate public-key discovery and rotation for trusted hosts (for transition from DSA to Ed25519 public host keys, version 6.8 of March 2015).
On 19 October 2015, Microsoft announced that OpenSSH would be natively supported on Microsoft Windows and accessible through PowerShell, releasing an early implementation and making the code publicly available. OpenSSH-based client and server programs have been included in Windows 10 since version 1803. The SSH client and key agent are enabled and available by default, and the SSH server is an optional Feature-on-Demand.
In October 2019, protection for private keys at rest in RAM against speculation and memory side-channel attacks was added in OpenSSH 8.1.
Development
OpenSSH is developed as part of the OpenBSD operating system. Rather than including changes for other operating systems directly into OpenSSH, a separate portability infrastructure is maintained by the OpenSSH Portability Team, and "portable releases" are made periodically. This infrastructure is substantial, partly because OpenSSH is required to perform authentication, a capability that has many varying implementations. This model is also used for other OpenBSD projects such as OpenNTPD.
The OpenSSH suite includes the following command-line utilities and daemons:
scp, a replacement for rcp.
sftp, a replacement for ftp to copy files between computers.
ssh, a replacement for rlogin, rsh and telnet to allow shell access to a remote machine.
ssh-agent and ssh-add, utilities to ease authentication by holding keys ready and avoid the need to enter passphrases every time they are used.
ssh-keygen, a tool to inspect and generate the RSA, DSA and elliptic-curve keys that are used for user and host authentication.
ssh-keyscan, which scans a list of hosts and collects their public keys.
sshd, the SSH server daemon.
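As a concrete illustration of the key-handling utilities above, the commands below generate and inspect an Ed25519 key pair; the file path and comment are arbitrary examples:

```shell
# Generate an Ed25519 key pair with an empty passphrase (example path).
rm -f /tmp/example_ed25519 /tmp/example_ed25519.pub
ssh-keygen -t ed25519 -N "" -f /tmp/example_ed25519 -C "example@host"

# Print the fingerprint of the newly created public key.
ssh-keygen -l -f /tmp/example_ed25519.pub
```

In practice the private key would live under ~/.ssh and be protected with a passphrase held by ssh-agent rather than left empty.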
The OpenSSH server can authenticate users using the standard methods supported by the SSH protocol: with a password; public-key authentication, using per-user keys; host-based authentication, which is a secure version of rlogin's host trust relationships using public keys; keyboard-interactive, a generic challenge–response mechanism, which is often used for simple password authentication but which can also make use of stronger authenticators such as tokens; and Kerberos/GSSAPI. The server makes use of authentication methods native to the host operating system; this can include using the BSD Authentication system or pluggable authentication modules (PAM) to enable additional authentication through methods such as one-time passwords. However, this occasionally has side effects: when using PAM with OpenSSH, the server must be run as root, as root privileges are typically required to operate PAM. OpenSSH versions after 3.7 (16 September 2003) allow PAM to be disabled at run-time, so regular users can run sshd instances.
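These methods correspond to directives in the server's sshd_config file. A minimal illustrative fragment (the chosen values are examples, not recommendations) might look like:

```
# /etc/ssh/sshd_config (excerpt) -- illustrative values only
# Allow per-user public-key authentication
PubkeyAuthentication yes
# Disallow plain password logins
PasswordAuthentication no
# Permit challenge-response (keyboard-interactive), often backed by PAM
KbdInteractiveAuthentication yes
# Enable pluggable authentication modules
UsePAM yes
```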
On OpenBSD, OpenSSH uses a dedicated user by default to drop privileges and perform privilege separation in accordance with the principle of least privilege, applied throughout the operating system including the Xenocara X server.
Features
OpenSSH includes the ability to set up a secured channel through which data sent to local, client-side Unix domain sockets or local, client-side TCP ports may be "forwarded" (sent across the secured channel) for routing on the server side. When this forwarding is set up, the server is instructed to send the forwarded data to some socket or TCP host/port: the host could be the server itself ("localhost"), or it may be some other computer, so that it appears to the other computer that the server is the originator of the data. The forwarding of data is bidirectional, meaning that any return communication is itself forwarded back to the client side in the same manner. This is known as an "SSH tunnel", and it can be used to multiplex additional TCP connections over a single SSH connection (supported since 2004), to conceal connections, to encrypt protocols that are otherwise unsecured, and to circumvent firewalls by sending and receiving all manner of data through one port that the firewall allows. For example, an X Window System tunnel may be created automatically when using OpenSSH to connect to a remote host, and other protocols, such as HTTP and VNC, may be forwarded easily.
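On the command line, this forwarding is requested with the -L (local) and -R (remote/reverse) options of ssh(1). The host names and ports below are placeholders; the -G flag used here only prints the client configuration that would apply, without opening any connection:

```shell
# Local forward: connections to client port 8080 would be carried through
# the tunnel and delivered by the server to intranet.example:80.
ssh -G -L 8080:intranet.example:80 user@gateway.example | grep -i localforward

# Remote (reverse) forward: the server would listen on port 9000 and relay
# connections back through the tunnel to a local service on port 3000.
ssh -G -R 9000:localhost:3000 user@gateway.example | grep -i remoteforward
```

Dropping -G from either command would attempt the real connection and establish the tunnel.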
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance due to the TCP meltdown problem, which is why virtual private network software may instead use a protocol simpler than TCP for the tunnel connection. However, this is often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination.
In addition, some third-party software includes support for tunnelling over SSH. These include DistCC, CVS, rsync, and Fetchmail. On some operating systems, remote file systems can be mounted over SSH using tools such as sshfs (using FUSE).
An ad hoc SOCKS proxy server may be created using OpenSSH. This allows more flexible proxying than is possible with ordinary port forwarding.
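Dynamic forwarding of this kind is requested with the -D option; as above, -G is used here only to display the configuration that would result (the host name is a placeholder):

```shell
# A SOCKS proxy on local port 1080: applications could then be pointed at
# socks5://localhost:1080. With -G, ssh only prints the resolved config.
ssh -G -D 1080 user@gateway.example | grep -i dynamicforward
```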
Beginning with version 4.3, OpenSSH implements an OSI layer 2/3 tun-based VPN. This is the most flexible of OpenSSH's tunnelling capabilities, allowing applications to transparently access remote network resources without modifications to make use of SOCKS.
Supported public key types
OpenSSH supports the following public key types:
ssh-dss (disabled at run-time since OpenSSH 7.0, released in 2015)
ssh-rsa (disabled at run-time since OpenSSH 8.8, released in 2021)
ecdsa-sha2-nistp256 (since OpenSSH 5.7, released in 2011)
ecdsa-sha2-nistp384 (since OpenSSH 5.7)
ecdsa-sha2-nistp521 (since OpenSSH 5.7)
ssh-ed25519 (since OpenSSH 6.5, released in 2014)
rsa-sha2-256 (since OpenSSH 7.2, released in 2016)
rsa-sha2-512 (since OpenSSH 7.2)
ecdsa-sk (since OpenSSH 8.2, released in 2020)
ed25519-sk (since OpenSSH 8.2)
NTRU Prime-x25519 (since OpenSSH 9.0, released in 2022)
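Which of these algorithms a particular installation actually supports can be queried at run time with ssh -Q:

```shell
# List the public key types supported by the installed OpenSSH client;
# on a recent release the output includes ssh-ed25519 alongside the
# ecdsa-sha2-* and rsa-sha2-* types listed above.
ssh -Q key
```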
Vulnerabilities
Before version 5.2 of OpenSSH, it was possible for an attacker to recover up to 14 bits of plaintext with a success probability of 2^−14. The vulnerability was related to the CBC encryption mode. The AES CTR mode and arcfour ciphers are not vulnerable to this attack.
A local privilege escalation vulnerability existed in OpenSSH 6.8 to 6.9 due to world-writable (622) TTY devices; it was initially believed to be only a denial-of-service vulnerability. Using the TIOCSTI ioctl, authenticated users could inject characters into other users' terminals and execute arbitrary commands on Linux.
Malicious or compromised OpenSSH servers could read sensitive information on the client, such as private login keys for other systems, using a vulnerability that relied on the undocumented connection-resuming feature of the OpenSSH client, called roaming, which was enabled by default on the client but never supported on the OpenSSH server. This applied to versions 5.4 (released on 8 March 2010) through 7.1 of the OpenSSH client and was fixed in OpenSSH 7.1p2, released on 14 January 2016. The CVE numbers associated with this vulnerability are (information leak) and (buffer overflow).
On March 29, 2024, a serious supply chain attack on XZ Utils was reported, indirectly targeting the OpenSSH server (sshd) running on Linux. The OpenSSH code itself was not affected; the backdoor entered through sshd's dependency on liblzma via libsystemd, introduced by a third-party patch applied by various Linux distributions.
On July 1, 2024, the RegreSSHion security vulnerability was disclosed; it could enable a remote attacker to cause OpenSSH to execute arbitrary code and gain full root access. It was inadvertently introduced in October 2020 (first shipping in OpenSSH version 8.5p1) and was patched in version 9.8/9.8p1.
Trademark
In February 2001, Tatu Ylönen, chairman and CTO of SSH Communications Security informed the OpenSSH development mailing list that the company intended to assert its ownership of the "SSH" and "Secure Shell" trademarks, and sought to change references to the protocol to "SecSH" or "secsh", in order to maintain control of the "SSH" name. He proposed that OpenSSH change its name in order to avoid a lawsuit, a suggestion that developers resisted. OpenSSH developer Damien Miller replied urging Ylönen to reconsider, arguing that "SSH" had long since been a generic trademark.
At the time, "SSH", "Secure Shell" and "ssh" had appeared in documents proposing the protocol as an open standard. Without marking these within the proposal as registered trademarks, Ylönen ran the risk of relinquishing all exclusive rights to the name as a means of describing the protocol. Improper use of a trademark, or allowing others to use a trademark incorrectly, results in the trademark becoming a generic term, like Kleenex or Aspirin, which opens the mark to use by others. After study of the USPTO trademark database, many online pundits opined that the term "ssh" was not trademarked, merely the logo using the lower case letters "ssh". In addition, the six years between the company's creation and the time when it began to defend its trademark, and that only OpenSSH was receiving threats of legal repercussions, weighed against the trademark's validity.
Both developers of OpenSSH and Ylönen himself were members of the IETF working group developing the new standard; after several meetings this group denied Ylönen's request to rename the protocol, citing concerns that it would set a bad precedent for other trademark claims against the IETF. The participants argued that both "Secure Shell" and "SSH" were generic terms and could not be trademarks.
See also
Comparison of SSH clients
Comparison of SSH servers
SSH File Transfer Protocol (SFTP)
Notes
References
External links
OpenSSH at the Super User's BSD Cross Reference (BXR.SU) OpenGrok
SSH OpenSSH - Windows CMD - SS64.com
Cross-platform free software
Cryptographic software
Free network-related software
Free security software
SSH
Secure Shell
Free software programmed in C
SSH File Transfer Protocol clients | OpenSSH | Mathematics,Technology | 2,785 |
45,691,558 | https://en.wikipedia.org/wiki/Zoosystematics%20and%20Evolution | Zoosystematics and Evolution is a peer-reviewed open access scientific journal covering zoological systematics and evolution. It was established in 1898 as and obtained its current title in 2008. The journal was established in 1898 and is published by Pensoft Publishers on behalf of the Museum für Naturkunde. The editor-in-chief is Matthias Glaubrecht (Museum für Naturkunde).
Abstracting and indexing
The journal is abstracted and indexed in
References
External links
Systematics journals
Publications established in 1898
Creative Commons Attribution-licensed journals
English-language journals
Zoology journals
Pensoft Publishers academic journals | Zoosystematics and Evolution | Biology | 126 |
9,678,262 | https://en.wikipedia.org/wiki/Godelieve | Saint Godelieve (also known as Godeleva, Godeliève, and Godelina; ) ( 10526 July 1070) is a Flemish saint. She behaved with charity and gentleness to all, accepting an arranged marriage as was the custom, but her husband and family turned out to be abusive. Eventually he had her strangled by his servants.
Every year, on the Sunday following 5 July, a procession celebrating Saint Godelieve takes place in Gistel.
Hagiography
Tradition, as recorded in her Vita, states that she was pious as a young girl, and became much sought after by suitors as a beautiful young woman. Godelieve, however, wanted to become a nun. A nobleman named Bertolf (Berthold) of Gistel was determined to marry her and successfully invoked the help of her father's overlord, Eustace II, Count of Boulogne, along with that of her parents. She accepted the betrothal obediently and went to Bertolf's family home. There she was badly treated by him and his mother. She continued to live as an obedient daughter-in-law, managing the household well and with Christian charity. Bertolf became ever more dissatisfied with her and ordered his servants to provide only bread and water to the young bride. Godelieve shared this food with the poor.
Godelieve managed to escape to the home of her father, Hemfrid, seigneur of Wierre-Effroy. Hemfrid appealed to the Bishops of Tournai and Soissons and to the Count of Flanders; they concluded that the marriage was indissoluble and compelled Bertolf to restore Godelieve to her rightful position as his wife, which signaled a renewal of persecution.
In July 1070, Godelieve returned to Gistel and soon after, at the order of Bertolf, was strangled by two servants and thrown into a pool, causing it to appear she died a natural death.
Legend
According to legend, Bertolf married again and had a daughter, Edith, who was born blind; the legend states that Edith was cured through the intercession of Saint Godelieve. Bertolf, now repentant of his crimes, travelled to Rome to obtain absolution, went on a pilgrimage to the Holy Land, and became a monk at the Abbey of Saint Winnoc at Bergues. Edith founded a Benedictine monastery at Gistel, dedicated to Saint Godelieve, which she joined herself as a nun.
Veneration
Godelieve's body was exhumed in 1084 by the Bishops of Tournai and Noyon, in the presence of Gertrude of Saxony (the wife of Robert I, Count of Flanders), the Abbot of St. Winnoc's and a number of clergymen. It was Radbod II, bishop of Noyon-Tournai, who consecrated Godelieve's relics in 1084, and Godelieve's popular cult developed thereafter.
Drogo, a monk of St. Winnoc's Abbey, wrote Godelieve's biography, the Vita Godeliph, about ten years after her death. The abbey of Ten Putte Abbey in Bruges was dedicated to her.
Every year, on the Sunday following 5 July, a procession celebrating Saint Godelieve takes place in Gistel. In 2017, the Godelieve procession was recognized as an Intangible Cultural Heritage.
Godelieve's feast day, 6 July, was, like that of Saint Swithun in England and Saint Medard in France, connected with the weather. She is thus considered one of the "weather saints". A monastery of Benedictine nuns was established on the site of her home, belonging to the Subiaco Congregation. It was closed due to falling numbers about 2020; the building is currently under review by the city/church authorities.
The Godelieve Polyptych
Godelieve's life is represented in the Godelieve Polyptych, now in the Metropolitan Museum of Art in New York City.
Notes
References
Sources
External links
Godeleva (Godelina) von Gistel
Santa Godeleva
Saint Godelieve, Martyr at the Christian Iconography web site
1050s births
1070 deaths
Weather lore
Domestic violence
Christian female saints of the Middle Ages
11th-century Christian saints
11th-century women from the Holy Roman Empire | Godelieve | Physics | 908 |
20,879,499 | https://en.wikipedia.org/wiki/Variable%20structure%20system | A variable structure system, or VSS, is a discontinuous nonlinear system of the form
where is the state vector, is the time variable, and is a piecewise continuous function. Due to the piecewise continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their structure varies over different parts of their state space.
The development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems.
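A standard textbook illustration (an assumed example, not taken from the references below) is the scalar relay system

```latex
\dot{x} = -k\,\operatorname{sgn}(x), \qquad k > 0,
```

which behaves as the continuous system \dot{x} = -k in the region x > 0 and as \dot{x} = +k in the region x < 0. The structure switches on the surface x = 0, which trajectories from either side reach in finite time and then slide along; this sliding mode is the behavior exploited by variable structure control.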
See also
Variable structure control
Sliding mode control
Hybrid system
Nonlinear control
Robust control
Optimal control
H-bridge – A topology that combines four switches forming the four legs of an "H". Can be used to drive a motor (or other electrical device) forward or backward when only a single supply is available. Often used in actuator sliding-mode control systems.
Switching amplifier – Uses switching-mode control to drive continuous outputs
Delta-sigma modulation – Another (feedback) method of encoding a continuous range of values in a signal that rapidly switches between two states (i.e., a kind of specialized sliding-mode control)
Pulse-density modulation – A generalized form of delta-sigma modulation
Pulse-width modulation – Another modulation scheme that produces continuous motion through discontinuous switching
References
2. Emelyanov, S.V., ed. (1967). Variable Structure Control Systems. Moscow: Nauka.
3. Emelyanov S, Utkin V, Tarin V, Kostyleva N, Shubladze A, Ezerov V, Dubrovsky E. 1970. Theory of Variable Structure Control Systems (in Russian). Moscow: Nauka.
4. Variable Structure Systems: From Principles to Implementation. A. Sabanovic, L. Fridman and S. Spurgeon (eds.), IEE, London, 2004, ISBN 0863413501.
5. Advances in Variable Structure Systems and Sliding Mode Control—Theory and Applications. Li, S., Yu, X., Fridman, L., Man, Z., Wang, X.(Eds.), Studies in Systems, Decision and Control, v. 115, Springer, 2017, ISBN 978-3-319-62895-0
6. Variable-Structure Systems and Sliding-Mode Control. M. Steinberger, M. Horn, L. Fridman (eds.), Studies in Systems, Decision and Control, v. 271, Springer International Publishing, Cham, 2020, ISBN 978-3-030-36620-9.
Further reading
Y. Shtessel, C. Edwards, L. Fridman, A. Levant. Sliding Mode Control and Observation, Series: Control Engineering, Birkhauser: Basel, 2014, ISBN 978-0-81764-8923
Nonlinear systems
Dynamical systems
Concepts in physics
Nonlinear control | Variable structure system | Physics,Mathematics | 609 |
73,504,548 | https://en.wikipedia.org/wiki/Sleep%20in%20the%20NBA | The issue of sleeping is of considerable importance and note in regard to the National Basketball Association (NBA). Traveling and packed game schedules are among aspects of the NBA calendar that affect the sleep of NBA personnel. Due to these and other factors, sleep deprivation has become a prevalent issue affecting player performance.
To help combat sleep deprivation, NBA organizations have employed scientists or doctors specializing in sleep or sleep medicine on their staffs.
History of sleep deprivation in the NBA
NBA players have long cited having issues sleeping or suffering from sleep deprivation. Aspects contributing to sleep deprivation include frequent travel across multiple time zones throughout a season, as well as constant circadian rhythm disruption. The issue has been noted to affect both in-game performances and mobility, as well as player recovery and mindset. The attitudes of players and organizations around the league in regard to sleeping have changed over time. Starting in the late 2000s, NBA teams began to pay more attention to their players' sleeping habits.
According to a 2009 report by The Atlantic, players and coaches seldom slept for more than two or three hours at a time in between back-to-back games. During travel, both players and coaches were asked to sleep on the plane, the report stated. That year, NBA journalist Howard Beck wrote:
The typical night game ends at about 10 p.m. By the time players shower, dress and speak with the news media, it is close to 11 p.m. They are usually famished, so everyone eats a late dinner. Even the most conservative players—those who do not frequent nightclubs—will not get to sleep until at least 2 a.m. If the team is traveling, players may not reach their hotel until 3 a.m. For a shoot-around or practice that starts at 10 a.m., players have to arrive as early as 9 a.m. to lift weights, receive treatment or be taped.
Kobe Bryant stated in a 2014 interview that he used to "get by on three or four hours a night", before increasing the amount to between six and eight. By 2015, teams were still dealing with packed schedules, having to sometimes play four games in five nights or six in nine, respectively. These schedules often were cited as detrimental to players' energy and sleeping. LeBron James opined that sleep is the "most important" factor in player recovery but added that an NBA player's schedule makes it difficult to attain such sleep.
During the 2017–18 season, then-Charlotte Hornets head coach Steve Clifford was told by a doctor that the major headaches he was suffering were due to sleep deprivation. Clifford had routinely slept four or five hours per night before waking up and working throughout his career. Later in 2018, Jake Fischer of Sports Illustrated wrote that scientific data showing the effects of sleep deprivation on players of sports came to the forefront of team personnel's attention, leading to a greater focus on players' sleep and overall well-being that year. Also in 2018, a study by Lauren Hale of Stony Brook University showed that late-night use of Twitter had effects on players' performances during the day, with shooting accuracy, ability to rebound, and number of points scored negatively affected.
Despite greater importance placed on helping players receive better sleep, the issue of sleep deprivation persisted. Hassan Whiteside, then with the Miami Heat, stated "it's impossible" to get a good night of sleep within the NBA schedule. A 2019 ESPN report cited five NBA athletic training staff members who separately noted that players netted an average six hours of sleep per 24 hours, this figure combining nightly sleep and pregame naps. Ahead of the 2019–20 season, one NBA general manager anonymously told the outlet that the NBA community has "a large population of vampires", adding that traveling logistics compounded the issue.
The NBA officially commented on the issue, maintaining that player health and wellness was a main priority for the league and stating "significant game schedule changes, an investment in a new airline charter program, a focus on mental health and wellness, and the advancement of wearable technology. ... Sleep is an area we look at closely as part of this effort." The importance emphasized on sleeping in the late 2010s came during a time when NBA coaches also began to place higher amounts of care on other off-court aspects of a player's routine. By 2019, the NBA Coaches Association had hired specialists to offer guidance to players on forming healthy habits, encompassing sleep, mental health, diet and exercise.
Sleep science in the NBA
By 2009, increasing interest in sleep science and the understanding that recovery time for players was important inspired then-Boston Celtics head coach Doc Rivers to eliminate the morning shoot-around from his players' game-day routine. The Spurs and Portland Trail Blazers also dropped the routine, while the New York Knicks only practiced it during road games. 2009 also marked when Charles Czeisler, the director of the Division of Sleep Medicine at Harvard Medical School, began working with the Portland Trail Blazers. Czeisler became known as the "Sleep Doctor" in NBA circles.
During the 2009–10 season, the Spurs invited a sleep specialist from Stanford University to teach them how to optimize players' rest. As part of a trial run, the team decided to shift their practice schedule from mornings to afternoons, with the expectation that players would have more time to sleep in the morning. The aim was to provide players with a continuous sleep time of 8 to 10 hours. After the trial, the Spurs reverted to morning practices. The Spurs organization would, however, continue efforts to help players with their sleeping habits; each season, the team provides wristbands to its players that track their sleeping habits and send them their personal sleep data.
Cheri Mah was also noted by media outlets to assist NBA players with their sleeping. A physician scientist at the University of California, Mah has assisted Stephen Curry with his sleeping habits.
In the 2016–17 season, the Orlando Magic staff began using mobile polysomnographs and wearable devices to measure players' sleep across the season. By the end of the season, they noticed players were obtaining minimal or no restorative REM sleep.
By 2019, many teams had begun hiring so-called sleep coaches to their staffs. Then–Celtics coach Brad Stevens noted that the team speaks "with each player on their roster about maximizing their sleep and planning their routines." Teams also began using technology to help players focus on their sleep, such as a sleep tracking device placed underneath the mattress. However, Chip Schafer, the Chicago Bulls' director of performance health, noted that players' compliance was an issue with the technology.
Effect on player habits
Players' sleep schedules are tightly structured. Many NBA players have cited mid-day naps on game days as critical. In 2011, Adam Silver, then the NBA's deputy commissioner, stated "Everyone in the league office knows not to call players at 3 p.m. It's the player nap." The length of naps varies from player to player. NBA guard Ty Lawson noted that he sleeps five hours during the night and then takes a three-hour nap during the daytime. Denver Nuggets guard Jamal Murray has stated that while in the NBA bubble, he slept for five hours following shoot-around, and regularly sleeps for two hours prior to games. Murray's teammate Nikola Jokić sleeps for eight hours at night, though his status as a "non-napper" is considered rare in the NBA. Those players who do routinely employ scheduled naps have been noted to become irritable when their naps are interrupted.
In 2016, Ken Berger of CBS Sports wrote that NBA trainers, coaches, and owners had only then began realizing the devastating impact poor sleep habits can have on a player's longevity and injury rates, as well as an organization's financial bottom line. Some players, such as Danny Green and Rajon Rondo, have noted that early afternoon games disrupt their napping schedules.
Wanting to minimize the effects of travel on his sleeping patterns, NBA forward Tobias Harris was noted by ESPN to travel with an electroencephalogram (EEG) machine. Though having an efficacy debated by medical experts, Harris uses the EEG machine in order to engage in neurofeedback, believing that his daily 45-minute training sessions provide him with data to combat against fatigue. Some players ensure their sleeping is directly preceded by entering amply prepared environments, with LeBron James and Jimmy Butler being cited examples. James has been noted to employ a specific sleeping routine while playing on the road: in hotel rooms, James sets the temperature to between and , shuts off nearby electronics 30 to 45 minutes before settling into bed, and uses the meditation app Calm to play back a field recording of rain falling on leaves in order to soothe him to sleep. Meanwhile, Butler targets nine hours of sleep, beginning at 7 p.m., which he prepares for by drinking herbal tea three hours earlier, avoiding all screen use, and using a cold air diffuser.
See also
Concussions in American football
Sleep debt
References
Further reading
History of basketball
National Basketball Association
Occupational safety and health
NBA | Sleep in the NBA | Biology | 1,864 |