id | url | text | source | categories | token_count
|---|---|---|---|---|---|
76,052,326 | https://en.wikipedia.org/wiki/%CE%913IA | α3IA, also known as GTPL4094, is an inverse agonist of the GABAA receptor. It is more selective for the α3 subunit, hence its name.
Effects
Agonism of the α3 subunit produces anxiolytic effects. As an inverse agonist, however, α3IA has the opposite action: it shows anxiogenic properties.
This compound also has affinity for the other subunits of the GABAA receptor, but it is more selective for the α3 subunit.
See also
α5IA, an inverse agonist at the α5 subunit of GABAA receptors
References
GABAA receptor negative allosteric modulators
GABAA receptor modulators
Anxiogenics
Pyridines
4-Methoxyphenyl compounds
Methyl esters | Α3IA | Chemistry | 167 |
4,243,182 | https://en.wikipedia.org/wiki/Silver%20telluride | Silver telluride (Ag2Te) is a chemical compound, a telluride of silver, also known as disilver telluride or silver(I) telluride. It forms a monoclinic crystal. In a wider sense, silver telluride can be used to denote AgTe (silver(II) telluride, a metastable compound) or Ag5Te3.
Silver(I) telluride occurs naturally as the mineral hessite, whereas silver(II) telluride is known as empressite.
Silver telluride is a semiconductor which can be doped both n-type and p-type. Stoichiometric Ag2Te has n-type conductivity. On heating silver is lost from the material.
Non-stoichiometric silver telluride has shown extraordinary magnetoresistance.
Synthesis
Porous silver telluride (AgTe) can be synthesized by an electrochemical deposition method. The experiment is performed at room temperature using a potentiostat and a three-electrode cell with 200 mL of a 0.5 M sulfuric acid electrolyte containing Ag nanoparticles. Silver paste used to attach the tungsten ditelluride (WTe2) working electrode leaches into the electrolyte, dissolving small amounts of Ag. The electrolyte is stirred with a magnetic bar to remove hydrogen bubbles. A silver–silver chloride electrode and a platinum wire serve as the reference and counter electrodes. All potentials are measured against the reference electrode and converted to the reversible hydrogen electrode (RHE) scale using the equation ERHE = EAg/AgCl + 0.059 × pH + 0.197. To grow the porous AgTe, the WTe2 is treated with repeated cyclic voltammetry between −1.2 and 0 V at a scan rate of 100 mV/s.
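For illustration, the calibration equation above can be evaluated as follows (a minimal sketch; the example potential and pH value are placeholders, not figures from the cited study):

```python
def to_rhe(e_ag_agcl_volts: float, ph: float) -> float:
    """Convert a potential measured against a Ag/AgCl reference electrode
    to the reversible hydrogen electrode (RHE) scale using
    E_RHE = E_Ag/AgCl + 0.059 * pH + 0.197 (all values in volts)."""
    return e_ag_agcl_volts + 0.059 * ph + 0.197

# Example: a sweep limit of -1.2 V vs. Ag/AgCl in a strongly acidic (pH 0.3) electrolyte
print(to_rhe(-1.2, 0.3))  # ~ -0.985 V vs. RHE
```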
Glutathione-coated Ag2Te nanoparticles can be synthesized by preparing a 9 mL solution containing 10 mM AgNO3, 5 mM Na2TeO3, and 30 mM glutathione, and placing that solution in an ice bath. N2H4 is added to the solution and the reaction is allowed to proceed for 5 min under constant stirring. The nanoparticles are then washed three times by centrifugation; after the three washes they are suspended in PBS and washed again by the same method.
References
Hagyeong Kwon, Dongyeon Bae, Dongyeun Won, Heeju Kim, Gunn Kim, Jiung Cho, Hee Jung Park, Hionsuck Baik, Ah Reum Jeong, Chia-Hsien Lin, Ching-Yu Chiang, Ching-Shun Ku, Heejun Yang, and Suyeon Cho "Nanoporous Silver Telluride for active hydrogen evolution." (n.d.) https://pubs.acs.org/doi/10.1021/acsnano.0c09517
See also
Hessite
Empressite
Sylvanite
Related materials
Silver selenide
Silver sulfide
Silver compounds
Tellurides
Semiconductor materials
Non-stoichiometric compounds | Silver telluride | Chemistry | 664 |
24,762,328 | https://en.wikipedia.org/wiki/Seismic%20metamaterial | A seismic metamaterial, is a metamaterial that is designed to counteract the adverse effects of seismic waves on artificial structures, which exist on or near the surface of the Earth. Current designs of seismic metamaterials utilize configurations of boreholes, trees or proposed underground resonators to act as a large scale material. Experiments have observed both reflections and bandgap attenuation from artificially induced seismic waves. These are the first experiments to verify that seismic metamaterials can be measured for frequencies below 100 Hz, where damage from Rayleigh waves is the most harmful to artificial structures.
The mechanics of seismic waves
More than a million earthquakes are recorded each year, by a worldwide system of earthquake detection stations. The propagation velocity of the seismic waves depends on density and elasticity of the earth materials. In other words, the speeds of the seismic waves vary as they travel through different materials in the Earth. The two main components of a seismic event are body waves and surface waves. Both of these have different modes of wave propagation.
Towards Seismic Cloaking
Computations showed that, by using seismic metamaterials, seismic waves traveling toward a building could be directed around it, leaving the building unscathed. The very long wavelengths of earthquake waves would be shortened as they interact with the metamaterials; the waves would pass around the building so as to arrive in phase as the earthquake wave proceeded, as if the building were not there. The mathematical models reproduce the regular pattern provided by metamaterial cloaking. This method was first understood with electromagnetic cloaking metamaterials, in which the electromagnetic energy is in effect directed around an object or hole; protecting buildings from seismic waves employs the same principle.
Giant polymer-made split ring resonators combined with other metamaterials are designed to couple at the seismic wavelength. Concentric layers of this material would be stacked, each layer separated by an elastic medium. The working design consists of ten layers of six different materials, which can be easily deployed in building foundations. As of 2009, the project was still in the design stage.
Electromagnetics cloaking principles for seismic metamaterials
For seismic metamaterials to protect surface structures, the proposal includes a layered structure of metamaterials, separated by elastic plates in a cylindrical configuration. A prior simulation showed that it is possible to create concealment from electromagnetic radiation with concentric, alternating layers of electromagnetic metamaterials. That study is in contrast to concealment by inclusions in a split ring resonator designed as an anisotropic metamaterial.
The configuration can be viewed as alternating layers of "homogeneous isotropic dielectric material" A. with "homogeneous isotropic dielectric material" B. Each dielectric material is much thinner than the radiated wavelength. As a whole, such structure is an anisotropic medium. The layered dielectric materials surround an "infinite conducting cylinder". The layered dielectric materials radiate outward, in a concentric fashion, and the cylinder is encased in the first layer. The other layers alternate and surround the previous layer all the way to the first layer. Electromagnetic wave scattering was calculated and simulated for the layered (metamaterial) structure and the split-ring resonator anisotropic metamaterial, to show the effectiveness of the layered metamaterial.
Acoustic cloaking principles for seismic metamaterials
The theory and ultimate development for the seismic metamaterial is based on coordinate transformations achieved when concealing a small cylindrical object with electromagnetic waves. This was followed by an analysis of acoustic cloaking, and whether or not coordinate transformations could be applied to artificially fabricated acoustic materials.
Applying the concepts used to understand electromagnetic materials to material properties in other systems shows them to be closely analogous. Wave vector, wave impedance, and direction of power flow are universal. By understanding how permittivity and permeability control these components of wave propagation, applicable analogies can be used for other material interactions.
In most instances, applying coordinate transformation to engineered artificial elastic media is not possible. However, there is at least one special case where there is a direct equivalence between electromagnetics and elastodynamics. Furthermore, this case appears practically useful. In two dimensions, isotropic acoustic media and isotropic electromagnetic media are exactly equivalent. Under these conditions, the isotropic characteristic works in anisotropic media as well.
It has been demonstrated mathematically that the 2D Maxwell equations with normal incidence apply to 2D acoustic equations when replacing the electromagnetic parameters with the following acoustic parameters: pressure, vector fluid velocity, fluid mass density and the fluid bulk modulus. The compressional wave solutions used in the electromagnetic cloaking are transferred to material fluidic solutions where fluid motion is parallel to the wavevector. The computations then show that coordinate transformations can be applied to acoustic media when restricted to normal incidence in two dimensions.
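One standard way of writing this correspondence (a sketch; the sign conventions and the choice of the out-of-plane electric field component $E_z$ are one common convention, not the only possible one) is

$$p \leftrightarrow -E_z, \qquad v_x \leftrightarrow H_y, \qquad v_y \leftrightarrow -H_x, \qquad \rho_x \leftrightarrow \mu_y, \qquad \rho_y \leftrightarrow \mu_x, \qquad \kappa^{-1} \leftrightarrow \varepsilon_z,$$

where $p$ is the pressure, $(v_x, v_y)$ the fluid velocity, $\rho$ the (possibly anisotropic) fluid mass density and $\kappa$ the fluid bulk modulus; under this substitution the two-dimensional acoustic equations take exactly the form of the two-dimensional Maxwell equations at normal incidence.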
Next the electromagnetic cloaking shell is referenced as an exact equivalence for a simulated demonstration of the acoustic cloaking shell. Bulk modulus and mass density determine the spatial dimensions of the cloak, which can bend any incident wave around the center of the shell. In a simulation with perfect conditions, because it is easier to demonstrate the principles involved, there is zero scattering in any direction.
The seismic cloak
It can be demonstrated through computation and visual simulation that the waves are in fact dispersed around the location of the building. This capability is shown to have no limitation regarding the radiated frequency. The cloak itself demonstrates no forward or back scattering; hence, the seismic cloak behaves as an effective medium.
Experiments on Seismic Metamaterials
In 2012, researchers conducted an experimental field test near Grenoble, France, with the aim of highlighting an analogy with phononic crystals.
At the geophysical scale, in a forest in the Landes region of France in 2016, an ambitious seismic experiment called the METAFORET experiment demonstrated that trees arranged at a subwavelength scale could significantly modify the surface wavefield through their coupled resonances. A follow-up field experiment, the META-WT experiment, was performed in the Nauen wind farm. It demonstrated for the first time that, at the city scale, the collective resonance of wind turbine structures can modify seismic waves propagating through the site. These observations have implications for seismic hazard in cities, where dense urban structures such as tall buildings can strongly modify the wavefield.
See also
Negative index metamaterials
Metamaterial antennas
Photonic crystal
Superlens
Split-ring resonator
Terahertz metamaterials
Tunable metamaterials
Photonic metamaterials
Material properties
Acoustic dispersion
Bulk modulus
Constitutive equation
Elastic wave
Equation of state
Linear elasticity
Permeability
Permittivity
Stress (mechanics)
Thermodynamic state
References
Seismology
Metamaterials
Continuum mechanics | Seismic metamaterial | Physics,Materials_science,Engineering | 1,404 |
17,745,893 | https://en.wikipedia.org/wiki/Longest%20trains | The length of a train may be measured in number of wagons (for bulk loads such as coal and iron ore) or in metres for general freight. Train lengths and loads on electrified railways, especially lower voltage 3000 V DC and 1500 V DC, are limited by traction and power considerations. Drawgear and couplings can also be a limiting factor, along with curves, gradients and crossing loop lengths.
Very long freight trains with a total length of or more are possible with the advent of distributed power, or additional locomotive units between or behind long chains of freight cars (referred to as a "consist"). Additional locomotive units enable much longer, heavier loads without the increased risks of derailing that stem from the stress of pulling very long chains of train-cars around curves.
Bulk
Australia
A typical BHP iron ore train has 268 cars and a train weight of 43,000 tonnes carrying 24,200 tonnes of iron ore, is long, and is hauled by two SD70ACe locomotives at the head of the train and two remote-controlled SD70ACe locomotives as mid-train helpers.
BHP used to run 44,500-tonne, 336-car long iron ore trains over long, with six to eight locomotives including an intermediate remote unit. This operation seems to have ceased since the trunk line was fully double tracked in May 2011.
The record-breaking ore train from the same company, 682 cars and long, once carried 82,000 metric tons of ore; its total weight of 99,734 tonnes was the largest in the world. It was driven by eight locomotives distributed along its length to keep the coupling loads and curve performance controllable.
Leigh Creek coal—, formerly ran as 161 wagons and three locomotives.
Cane tramway – 75 wagons ( gauge).
Brazil
Carajás Railway gauge iron ore trains are typically 330 cars long, totaling in length.
VLI Grain with 160 hopper cars, or 80 hoppers plus 72 FTTs (for pulp transport) totaling about long.
China
Datong–Qinhuangdao railway is a dedicated coal-transport railway. Every day 50 pairs of long trains consisting of 210 wagons and two HXD1 locomotives use the line. Each train hauls over 20,000 tons of coal.
Shuozhou–Huanghua railway is a heavy-haul freight railway that successfully tested 30,000-ton coal trains stretching over in April 2024. The train consisted of 324 wagons hauled by four China Energy Investment HXD1 variants.
India
Indian Railways operated the longest train in India on 15 August 2022. The 'Super Vasuki' freight train was long and had a total of 6 locomotives pulling 295 wagons of coal.
Indonesia (proposed)
Muara Wahau coal to Bengalon port –
Super Babaranjang 120-car test. The test train consisted of 120 coal cars with 4 EMD G26 locomotives leading. The consist was roughly long.
Mauritania
Iron ore trains on the Mauritania Railway are up to in length. They consist of 2 diesel-electric EMD locomotives, 200 to 210 cars each carrying up to 84 tons of iron ore, and 2-3 service cars.
South Africa
Sishen–Saldanha railway line ore trains on –
Ukraine
12,000 tonnes
General
– United States – a June 2024 third-party study over 10 days in Arizona found that Union Pacific routinely runs intermodal trains of more than in length, the longest of which carried 506 containers in 280 well cars.
– United States – According to the Association of American Railroads, "Fewer than 1%" of trains in America are longer than 14,000 feet.
– France – intermediate locomotive – trial
– The Bangalore–Dharmavaram goods train (India)
The Netherlands–Germany—trial trains of this length
Saudi Arabia double stack
— Babaranjang Baratarahan from Tanjung Enim Coal Mine to Tarahan Port, Indonesia. 2 or 3 locomotives and 60 to 61 coal wagons.
— In Denmark and to Hamburg, Germany; 2 locomotives and 82 wagons.
Special test runs
These are one-off runs, sometimes specifically to set records.
Bulk (ore, coal etc)
BHP run on 21 June 2001, comprising 682 wagons and hauled by eight General Electric GE AC6000CW diesel-electric locomotives controlled by a single driver with a total length of on the iron ore railway to Port Hedland in Western Australia – total weight 99,734 tons on a gauge line.
Datong–Qinhuangdao railway, China. On 2 April 2014, an experimental train ran with 320 wagons and six locomotives hauling a 31,500 ton load, with a total length of .
Sishen–Saldanha, South Africa. Run on 26–27 August 1989, comprising 660 wagons, long and a total weight of 71,765 tons on a gauge line. The train comprised 16 locomotives (9 Class 9E 50 kV AC electric and 7 Class 37 diesel-electric).
Bulk coal train from Ekibastuz to the Urals, Soviet Union, 20 February 1986. The train consisted of 439 wagons and several diesel locomotives distributed along the train with a total mass of 43,400 tonnes and a total length of .
A 1991 test train pulled by two British Rail Class 59 diesel locomotives, weighing 12,108 tonnes and approximately long, was pulled with moderate success from Merehead Quarry to Witham Friary.
Norfolk and Western Railway unit coal train from Iaeger, West Virginia to Portsmouth, Ohio, 15 November 1967. The train consisted of 500 cars and six EMD SD45 diesel-electric locomotives distributed throughout the train for a total weight of 48,170 tons and total length of .
General cargo
Union Pacific, United States. Run from 8–10 January 2010, consisting of 296 container cars and hauled by nine diesel-electric locomotives spread through the train with a total length of , from a terminal in Texas to Los Angeles. Around 618 double-stacked containers weighing 14,059 t were carried at speeds up to .
BNSF, United States, 10 July 2009—, 458 container units powered by seven locomotives
Passenger
Kijfhoek–Eindhoven, Netherlands. In 1989, the Nederlandse Spoorwegen (Dutch Railways) celebrated their 150th anniversary. On 19 February 1989, NS ran a test train with 60 passenger cars ( long and weighing 2,597 tons), of which only the first 14 cars held actual passengers, pulled by one 1500 V DC locomotive. Twenty years later, in 2009, Railz Miniworld repeated the stunt on a smaller scale, inside their exhibition in Rotterdam.
Ghent–Ostend, Belgium. On 27 April 1991, one electric locomotive and 70 passenger cars (totalling and 2786 tons, excluding the locomotive) held a charity run for the Belgian Cancer Fund, exceeding the Dutch record.
Rhaetian Railway, Switzerland. On 29 October 2022, the Rhaetian Railway celebrated the 175th anniversary of Swiss railways with an hour-long, journey from Preda to Alvaneu in southeast Switzerland. The train had 25 4-car ABe 4/16 "Capricorn" EMUs, totalling 100 coaches with a total length of ; it ran on a narrow-gauge railway over several switchbacks and long curves.
See also
Distributed power—Where operational considerations or economics require it, trains can be made longer if intermediate locomotives are inserted in the train and remotely controlled from the leading locomotive.
High-speed rail
Track gauge
Extreme Trains on the History Channel 2009
List of steepest gradients on adhesion railways
International Heavy Haul Association
Longest road trains
References
Trains
Rail freight transport
Rail transport-related lists of superlatives | Longest trains | Technology | 1,551 |
23,288,103 | https://en.wikipedia.org/wiki/Phylloporus%20rhodoxanthus | Phylloporus rhodoxanthus, commonly known as the gilled bolete, is a species of fungus in the family Boletaceae. Like other species in the genus, it has a lamellate (gilled) hymenium and forms a mycorrhizal association with the roots of living trees, specifically beech and oak in North and Central America.
Taxonomy
The species was first described from North Carolina as Agaricus rhodoxanthus by Lewis David de Schweinitz in 1822. Giacomo Bresadola transferred it to Phylloporus in 1900.
Description
The cap is initially convex before flattening out in age, sometimes developing a central depression; it attains a diameter of . The cap margin is initially curved inward. The cap surface is dry, with a somewhat velvet-like texture, and often develops cracks in maturity that reveal the pale yellow flesh underneath. Its color ranges from dull red to reddish brown, to reddish yellow, or olive brown. The flesh has no distinct taste or odor. The gills are decurrent to somewhat decurrent, and well-spaced. They are deep yellow to greenish-yellow, often wrinkled, and usually have cross-veins in the spaces between the gills; these cross-veins sometimes give the gills a somewhat pore-like appearance. The cylindrical stem measures long by thick, and is often tapered toward the base. The stem is firm and solid (i.e., not hollow), and yellow, with yellow mycelium at the base. It frequently has longitudinal grooves extending down from the gills.
Phylloporus rhodoxanthus produces an olivaceous yellow-brown spore print. Spores are elliptical to spindle-shaped, smooth, and measure 9–14 by 3.5–5 μm.
Similar species
In North America, Phylloporus rhodoxanthus can be confused with: P. leucomycelinus, distinguished by the presence of white mycelium at the base of its stem; P. arenicola, associated with pines in western North America; P. boletinoides, present in southern North America and having a subporoid, olive-yellow hymenium; and P. foliiporus, also present in southern North America and microscopically distinguished by the presence of cystidia.
Uses
Fruit bodies are edible and considered good by some. The flavor has been described as "tender and nutty", and drying the fruit bodies first enhances the flavor. Suitable culinary uses include sauteing, adding to sauces or stuffings, or raw as a colorful garnish. They are also used by hobbyists to make mushroom dyes of beige, greenish beige, or gold colors, depending on the mordant used.
Habitat and distribution
The fruit bodies of Phylloporus rhodoxanthus grow on the ground singly or in small groups in deciduous forests of oak and beech. The species has a wide distribution in North America, where it fruits from July to October, and has also been reported from Belize. The name was formerly applied to Phylloporus species from Asia (China, India, and Taiwan), Australia, and Europe, but more recent research has shown that these non-American records refer to different species.
See also
List of North American boletes
References
External links
rhodoxanthus
Edible fungi
Fungi described in 1822
Fungi of North America
Taxa named by Lewis David de Schweinitz
Fungus species | Phylloporus rhodoxanthus | Biology | 726 |
45,381,806 | https://en.wikipedia.org/wiki/Toroidal%20embedding | In algebraic geometry, a toroidal embedding is an open embedding of algebraic varieties that locally looks like the embedding of the open torus into a toric variety. The notion was introduced by Mumford to prove the existence of semistable reductions of algebraic varieties over one-dimensional bases.
Definition
Let X be a normal variety over an algebraically closed field $k$ and $U \subseteq X$ a smooth open subset. Then $U \subseteq X$ is called a toroidal embedding if for every closed point x of X, there is an isomorphism of local $k$-algebras (the hats denoting completion):
$$\widehat{\mathcal{O}}_{X,\,x} \;\simeq\; \widehat{\mathcal{O}}_{X_{\sigma},\,t}$$
for some affine toric variety $X_{\sigma}$ with a torus T and a point t such that the above isomorphism takes the ideal of $X \setminus U$ to that of $X_{\sigma} \setminus T$.
Let X be a normal variety over a field k. An open embedding $U \subseteq X$ is said to be a toroidal embedding if the base change $U \times_k \overline{k} \subseteq X \times_k \overline{k}$ to an algebraic closure $\overline{k}$ of k is a toroidal embedding.
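For instance (a standard local picture, sketched here for illustration): if $X$ is a smooth variety of dimension $n$ and $D \subset X$ is a simple normal crossings divisor, then $U = X \setminus D \subseteq X$ is a toroidal embedding. Near a closed point $x$ where exactly $r$ components of $D$ meet, the pair is formally modelled on the affine toric variety $X_{\sigma} = \mathbb{A}^r \times \mathbb{G}_m^{\,n-r}$ with torus $T = \mathbb{G}_m^{\,n}$:

$$\widehat{\mathcal{O}}_{X,\,x} \;\simeq\; k[[x_1, \ldots, x_n]] \;\simeq\; \widehat{\mathcal{O}}_{X_{\sigma},\,t},$$

with the ideal of $X \setminus U = D$ corresponding to the ideal $(x_1 \cdots x_r)$ of $X_{\sigma} \setminus T$.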
Examples
Tits' buildings
See also
tropical compactification
References
Abramovich, D., Denef, J. & Karu, K.: Weak toroidalization over non-closed fields. manuscripta math. (2013) 142: 257.
External links
Toroidal embedding
Algebraic geometry | Toroidal embedding | Mathematics | 236 |
16,855,596 | https://en.wikipedia.org/wiki/MRPS5 | 28S ribosomal protein S5, mitochondrial, otherwise called uS5m, is a protein that in humans is encoded by the MRPS5 gene.
Function
Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion. Mitochondrial ribosomes (mitoribosomes) consist of a small 28S subunit and a large 39S subunit. They have an estimated 75% protein to rRNA composition compared to prokaryotic ribosomes, where this ratio is reversed. Another difference between mammalian mitoribosomes and prokaryotic ribosomes is that the latter contain a 5S rRNA. Among different species, the proteins comprising the mitoribosome differ greatly in sequence, and sometimes in biochemical properties, which prevents easy recognition by sequence homology. This gene encodes a 28S subunit protein that belongs to the ribosomal protein S5P family. Pseudogenes corresponding to this gene are found on chromosomes 4q, 5q, and 18q.
References
Further reading
External links
Ribosomal proteins | MRPS5 | Chemistry | 219 |
15,918,153 | https://en.wikipedia.org/wiki/Oleoylethanolamide | Oleoylethanolamide (OEA) is an endogenous peroxisome proliferator-activated receptor alpha (PPAR-α) agonist. It is a naturally occurring ethanolamide lipid that regulates feeding and body weight in vertebrates ranging from mice to pythons.
OEA is a shorter, monounsaturated analogue of the endocannabinoid anandamide, but unlike anandamide it acts independently of the cannabinoid pathway, regulating PPAR-α activity to stimulate lipolysis.
OEA is produced by the small intestine following feeding in two steps. First an N-acyl transferase (NAT) activity joins the free amino terminus of phosphatidylethanolamine (PE) to the oleoyl group (one variety of acyl group) derived from sn-1-oleoyl-phosphatidylcholine, which contains the fatty acid oleic acid at the sn-1 position. This produces an N-acylphosphatidylethanolamine, which is then split (hydrolyzed) by N-acyl phosphatidylethanolamine-specific phospholipase D (NAPE-PLD) into phosphatidic acid and OEA. The biosynthesis of OEA and other bioactive lipid amides is modulated by bile acids.
OEA has been demonstrated to bind to the novel cannabinoid receptor GPR119. OEA has been suggested to be the receptor's endogenous ligand.
OEA has been hypothesized to play a key role in the inhibition of food-seeking behavior and in the lipolysis of brown bears (Ursus arctos) during the hibernation season, together with the alteration of the endocannabinoid system required for the metabolic changes of hibernation.
OEA has been reported to lengthen the life span of the roundworm Caenorhabditis elegans through interactions with lysosomal molecules.
OEA is mainly known for its anorexigenic effects. However, it also has neuroprotective properties: recent research has demonstrated that OEA reduces neuronal death in a murine model of aggressive neurodegeneration. This neuroprotective effect is triggered by a stabilization of microtubule dynamics and by the modulation of neuroinflammation.
References
External links
Science Magazine
BBC: Fatty foods 'offer memory boost'
Neurotransmitters
Fatty acid amides
Endocannabinoids | Oleoylethanolamide | Chemistry | 544 |
12,265,304 | https://en.wikipedia.org/wiki/Point%20distribution%20model | The point distribution model is a model for representing the mean geometry of a shape and some statistical modes of geometric variation inferred from a training set of shapes.
Background
The point distribution model concept was developed by Cootes, Taylor et al. and became a standard in computer vision for the statistical study of shape and for segmentation of medical images, where shape priors greatly help interpretation of noisy and low-contrast pixels/voxels. The latter point leads to active shape models (ASM) and active appearance models (AAM).
Point distribution models rely on landmark points. A landmark is an annotating point posed by an anatomist onto a given locus for every shape instance across the training set population. For instance, the same landmark will designate the tip of the index finger in a training set of 2D hands outlines. Principal component analysis (PCA), for instance, is a relevant tool for studying correlations of movement between groups of landmarks among the training set population. Typically, it might detect that all the landmarks located along the same finger move exactly together across the training set examples showing different finger spacing for a flat-posed hands collection.
Details
First, a set of training images are manually landmarked with enough corresponding landmarks to sufficiently approximate the geometry of the original shapes. These landmarks are aligned using the generalized procrustes analysis, which minimizes the least squared error between the points.
The $k$ aligned landmarks in two dimensions are given as
$$\mathbf{X} = (x_1, y_1, \ldots, x_k, y_k).$$
It's important to note that each landmark should represent the same anatomical location. For example, landmark #3, might represent the tip of the ring finger across all training images.
Now the shape outlines are reduced to sequences of landmarks, so that a given training shape is defined as the vector $\mathbf{X} \in \mathbb{R}^{2k}$. Assuming the scattering is Gaussian in this space, PCA is used to compute normalized eigenvectors and eigenvalues of the covariance matrix across all training shapes. The matrix of the top $d$ eigenvectors is given as $\mathbf{P} \in \mathbb{R}^{2k \times d}$, and each eigenvector describes a principal mode of variation along the set.
Finally, a linear combination of the eigenvectors is used to define a new shape $\mathbf{X}'$, mathematically defined as:
$$\mathbf{X}' = \overline{\mathbf{X}} + \mathbf{P}\mathbf{b}$$
where $\overline{\mathbf{X}}$ is defined as the mean shape across all training images, and $\mathbf{b}$ is a vector of scaling values for each principal component. Therefore, by modifying the variable $\mathbf{b}$, an infinite number of shapes can be defined. To ensure that the new shapes are all within the variation seen in the training set, it is common to only allow each element of $\mathbf{b}$ to be within ±3 standard deviations, where the standard deviation of a given principal component is defined as the square root of its corresponding eigenvalue.
PDMs can be extended to any arbitrary number of dimensions, but are typically used in 2D image and 3D volume applications (where each landmark point is $(x, y)$ or $(x, y, z)$).
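A minimal sketch of these steps in Python with NumPy (the toy data, variable names and the choice of how many modes to keep are illustrative assumptions; the landmarks are assumed to be already Procrustes-aligned):

```python
import numpy as np

def build_pdm(shapes, num_modes=5):
    """Build a point distribution model from aligned training shapes.

    shapes: array of shape (n_shapes, 2*k), each row being (x1, y1, ..., xk, yk)
            already aligned with generalized Procrustes analysis.
    Returns the mean shape, the matrix P of top eigenvectors, and their eigenvalues.
    """
    mean_shape = shapes.mean(axis=0)
    cov = np.cov(shapes, rowvar=False)               # covariance across the training set
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:num_modes]    # keep the largest modes of variation
    return mean_shape, eigvecs[:, order], eigvals[order]

def synthesize(mean_shape, P, eigvals, b):
    """Generate a new shape X' = mean + P b, clamping each b_i to +/- 3 standard deviations."""
    limits = 3.0 * np.sqrt(eigvals)
    b = np.clip(b, -limits, limits)
    return mean_shape + P @ b

# Toy example: 20 random "training shapes", each with 10 two-dimensional landmarks
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 20))
mean_shape, P, eigvals = build_pdm(shapes, num_modes=3)
new_shape = synthesize(mean_shape, P, eigvals, b=np.array([1.0, -0.5, 2.0]))
```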
Discussion
An eigenvector, interpreted in Euclidean space, can be seen as a sequence of Euclidean vectors, each associated with a corresponding landmark, which together designate a compound move for the whole shape. Global nonlinear variation is usually well handled provided nonlinear variation is kept to a reasonable level. Typically, a twisting nematode worm is used as an example in the teaching of kernel PCA-based methods.
Due to the properties of PCA, the eigenvectors are mutually orthogonal, form a basis of the training set cloud in the shape space, and cross at 0 in this space, which represents the mean shape. Also, PCA is a traditional way of fitting a closed ellipsoid to a Gaussian cloud of points (whatever their dimension): this suggests the concept of bounded variation.
The idea behind PDMs is that eigenvectors can be linearly combined to create an infinity of new shape instances that will 'look like' the ones in the training set. The coefficients are bounded according to the values of the corresponding eigenvalues, so as to ensure that the generated 2n/3n-dimensional dot remains within the hyper-ellipsoidal allowed domain, the allowable shape domain (ASD).
See also
Procrustes analysis
References
External links
Flexible Models for Computer Vision, Tim Cootes, Manchester University.
A practical introduction to PDM and ASMs.
Computer vision | Point distribution model | Engineering | 858 |
36,510,964 | https://en.wikipedia.org/wiki/IBM%20%28atoms%29 | IBM in atoms was a demonstration by IBM scientists in 1989 of a technology capable of manipulating individual atoms. A scanning tunneling microscope was used to arrange 35 individual xenon atoms on a substrate of chilled crystal of nickel to spell out the three letter company initialism. It was the first time that atoms had been precisely positioned on a flat surface.
Research
Donald Eigler and Erhard Schweizer of the IBM Almaden Research Center in San Jose, California, discovered the ability to use a scanning tunneling microscope (STM) to move atoms about a surface. In the demonstration, in which the microscope was used at low temperature, they positioned 35 individual xenon atoms on a substrate of chilled crystal of nickel to form the acronym "IBM". The pattern they created was 5 nm tall and 17 nm wide. They also assembled chains of xenon atoms similar in form to molecules. The demonstrated capacity showed the potential of fabricating rudimentary structures and allowed insights as to the extent of device miniaturization.
See also
There's Plenty of Room at the Bottom – Lecture by Richard Feynman
A Boy and His Atom – The world's smallest movie
References
External links
"IBM" in atoms at IBM's archives
Nanotechnology
IBM
Individual particles
Experiments
Scanning probe microscopy
Digital typography
1989 introductions | IBM (atoms) | Chemistry,Materials_science,Engineering | 267 |
64,204,284 | https://en.wikipedia.org/wiki/Adrian%20Iovi%C8%9B%C4%83 | Adrian Ioviță (born 28 June 1954) is a Romanian-Canadian mathematician, specializing in arithmetic algebraic geometry and p-adic cohomology theories.
Education
Born in Timișoara, Romania, Iovita received in 1978 his undergraduate degree in mathematics from the University of Bucharest. He worked as a researcher at the Institute of Mathematics of the Romanian Academy, obtaining a Ph.D. degree in 1991 from the University of Bucharest with thesis On local classfield theory written under the direction of Nicolae Popescu. He received in 1996 a doctorate in mathematics from Boston University. His doctoral thesis there was supervised by Glenn H. Stevens; the thesis title is p-adic Cohomology of Abelian Varieties.
Career
From 1996 to 1998 he was a postdoc in Montreal, at McGill University and Concordia University. From 1998 to 2003 he was an assistant professor at the University of Washington. Since 2003 he has been a full professor at Concordia University. He has held permanent positions at the University of Padua, and also in Paris, Münster, Jerusalem, and Nottingham.
Awards
In 2008 Iovita received the Ribenboim Prize. In 2018 he was an invited speaker, together with Vincent Pilloni and Fabrizio Andreatta, with the talk p-adic variation of automorphic sheaves (given by Pilloni) at the International Congress of Mathematicians in Rio de Janeiro.
Selected publications
References
20th-century Romanian mathematicians
21st-century Romanian mathematicians
Algebraic geometers
University of Bucharest alumni
Boston University Graduate School of Arts & Sciences alumni
University of Washington faculty
Academic staff of Concordia University
Living people
Romanian emigrants to Canada
McGill University people
Number theorists
1954 births
Scientists from Timișoara | Adrian Ioviță | Mathematics | 333 |
67,543,042 | https://en.wikipedia.org/wiki/Charles%20G.%20Heyd | Charles Gordon Heyd (27 August 1884 – 4 February 1970) was an American surgeon and president of the American Medical Association in 1936–1937.
Biography
Heyd obtained a B.A. from the University of Toronto in 1905 and M.D. from University of Buffalo in 1909. During World War I he served as a Major in France. Heyd was an opponent of compulsory health insurance and socialized medicine. Instead, he favoured voluntary medical insurance and public health testing.
Heyd was Director of Surgery at New York Post-Graduate Medical School and Hospital and Professor of Clinical Surgery at Columbia University. He was President of the United Medical Service (1948–1951). In 1932, he received the Legion of Honour of France. He wrote the Preface for Lloyd Paul Stryker's Courts and Doctors, published in 1932.
He was President of the American Medical Association (1936–1937). In 1937, Heyd was awarded an honorary degree of Doctor of Science by Temple University. In 1940, Heyd noted that most infections of the neck have their origin in the oral cavity.
Heyd died on 4 February 1970.
Opposition to water fluoridation
Heyd was an opponent of water fluoridation. He has been quoted as saying "I am appalled at the prospect of using water as a vehicle for drugs. Fluoride is a corrosive poison which will produce harm on a long-term basis".
Heyd's comment has been widely cited in anti-fluoridation literature as an argument from authority because he was a former President of the AMA. However, Heyd was President of the AMA for two years in the 1930s, long before evidence of the effectiveness of fluoridation was available to examine. Since Heyd, no other AMA President has opposed fluoridation.
Selected publications
Liver and Its Relation to Chronic Abdominal Infection (1924)
The Doctor in Court (1941)
References
1884 births
1970 deaths
American surgeons
Columbia Medical School faculty
Canadian emigrants to the United States
Writers from Brantford
Presidents of the American Medical Association
American recipients of the Legion of Honour
University at Buffalo alumni
University of Toronto alumni
Water fluoridation | Charles G. Heyd | Chemistry | 427 |
14,936,306 | https://en.wikipedia.org/wiki/Workgroup%20%28computer%20networking%29 | In computer networking a work group is a collection of computers connected on a LAN that share the common resources and responsibilities. Workgroup is Microsoft's term for a peer-to-peer local area network. Computers running Microsoft operating systems in the same work group may share files, printers, or Internet connection. Work group contrasts with a domain, in which computers rely on centralized authentication.
See also
Windows for Workgroups – the earliest version of Windows to allow a workgroup
Windows HomeGroup – a feature introduced in Windows 7 and later removed in Windows 10 (Version 1803) that allows workgroups to share content more easily
Browser service – the service that enables 'browsing' all the resources in workgroups
Peer Name Resolution Protocol (PNRP) - IPv6-based dynamic name publication and resolution
References
External links
Workgroup Server Protocol Program (WSPP)
Windows technology
Computer networking | Workgroup (computer networking) | Technology,Engineering | 178 |
5,024,105 | https://en.wikipedia.org/wiki/Gold%28III%29%20bromide | Gold(III) bromide is a dark-red to black crystalline solid. It has the empirical formula , but exists as a dimer with the molecular formula in which two gold atoms are bridged by two bromine atoms. It is commonly referred to as gold(III) bromide, gold tribromide, and rarely but traditionally auric bromide, and sometimes as digold hexabromide. The analogous copper or silver tribromides do not exist.
History
The first mention of any research or study of the gold halides dates back to the early-to-mid-19th century, and there are three primary researchers associated with the extensive investigation of this particular area of chemistry: Thomsen, Schottländer, and Krüss.
Structure
Gold(III) bromide adopts structures seen for the other gold(III) trihalide dimeric compounds, such as the chloride. The gold centers exhibit square planar coordination with bond angles of roughly 90 degrees.
Calculations indicate that in the hypothetical monomeric forms of the gold trihalides, the Jahn-Teller effect causes differences to arise in the structures of the gold halide complexes. For instance, gold(III) bromide contains one long and two short gold-bromine bonds whereas gold(III) chloride and gold(III) fluoride consist of two long and one short gold-halogen bonds. Moreover, gold tribromide does not exhibit the same coordination around the central gold atom as gold trichloride or gold trifluoride. In the latter complexes, the coordination exhibits a T-conformation, but in gold tribromide the coordination exists as more of a dynamic balance between a Y-conformation and a T-conformation. This coordination difference can be attributed to the Jahn-Teller effect but more so to the decrease in π-back bonding of the gold atoms with the bromine ligands compared to the π-back bonding found with fluorine and chlorine ligands. It is also this decrease in π-back bonding which explains why gold tribromide is less stable than its trifluoride and trichloride counterparts.
Preparation
The most common synthesis method of gold(III) bromide is heating gold and excess liquid bromine at 140 °C:
2 Au + 3 Br2 → Au2Br6
Alternatively, the halide-exchange reaction of gold(III) chloride with hydrobromic acid has also proven successful in synthesizing gold(III) bromide:
Au2Cl6 + 6 HBr → Au2Br6 + 6 HCl
Chemical properties
Gold(III) displays square planar coordination geometry.
Gold(III) trihalides form a variety of four-coordinate adducts. One example is the hydrate . Another well-known adduct is that with tetrahydrothiophene. The tetrabromide is also known:
AuBr3 + Br− → [AuBr4]−
Uses
Catalytic chemistry
Gold(III) bromide catalyzes a variety of reactions. In one example, it catalyzes the Diels-Alder reaction of an enynal unit and carbonyl.
Another catalytic use of gold tribromide is in the nucleophilic substitution reaction of propargylic alcohols. In this reaction, the gold complex serves as an alcohol-activating agent to facilitate the substitution.
Ketamine detection
Gold(III) bromide can be used as a testing reagent for the presence of ketamine.
A 0.25% solution of gold(III) bromide in 0.1 M NaOH is prepared, giving a brownish-yellow solution. Two drops of this are added to a spotting plate and a small amount of ketamine is added. The mixture gives a deep purple color within approximately one minute, which turns to a dark, blackish-purple color within approximately two minutes.
Acetaminophen, ascorbic acid, heroin, lactose, mannitol, morphine, and sucrose all cause an instant color change to purple, as do other compounds with phenol and hydroxyl groups.
Nothing commonly found in conjunction with ketamine gave the same color change in the same time.
"The initial purple color may be due to the formation of a complex between the gold and the ketamine. The cause for the change of color from purple to dark blackish-purple is unknown; however, it may be due to a redox reaction that produces a small amount of colloidal gold."
References
Bromides
Metal halides
Gold(III) compounds
Drug testing reagents
Gold–halogen compounds | Gold(III) bromide | Chemistry | 905 |
8,445,754 | https://en.wikipedia.org/wiki/Stress%20testing%20%28software%29 | Stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. Stress testing is particularly important for "mission critical" software, but is used for all types of software. Stress tests commonly put a greater emphasis on robustness, availability, and error handling under a heavy load, than on what would be considered correct behavior under normal circumstances.
In particular, the goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks.
Examples:
A web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads. These attacks generally are under an hour long, or until a limit in the amount of data that the web server can tolerate is found.
Stress testing may be contrasted with load testing:
Load testing examines the entire environment and database, while measuring the response time, whereas stress testing focuses on identified transactions, pushing to a level so as to break transactions or systems.
During stress testing, if transactions are selectively stressed, the database may not experience much load, but the transactions are heavily stressed. On the other hand, during load testing the database experiences a heavy load, while some transactions may not be stressed.
System stress testing, also known as stress testing, loads the concurrent users over and beyond the level that the system can handle, so that the system breaks at its weakest link.
Field experience
Failures may be related to:
characteristics of non-production like environments, e.g. small test databases
complete lack of load or stress testing
Rationale
Reasons for stress testing include:
The software being tested is "mission critical", that is, failure of the software (such as a crash) would have disastrous consequences.
The amount of time and resources dedicated to testing is usually not sufficient, with traditional testing methods, to test all of the situations in which the software will be used when it is released.
Even with sufficient time and resources for writing tests, it may not be possible to determine beforehand all of the different ways in which the software will be used. This is particularly true for operating systems and middleware, which will eventually be used by software that doesn't even exist at the time of the testing.
Customers may use the software on computers that have significantly fewer computational resources (such as memory or disk space) than the computers used for testing.
Input data integrity cannot be guaranteed. Input data can take many forms: data files, streams and memory buffers, as well as arguments and options given to a command-line executable or user inputs triggering actions in a GUI application. Fuzzing and monkey test methods can be used to find problems due to data corruption or incoherence.
Concurrency is particularly difficult to test with traditional testing methods. Stress testing may be necessary to find race conditions and deadlocks.
Software such as web servers that will be accessible over the Internet may be subject to denial of service attacks.
Under normal conditions, certain types of bugs, such as memory leaks, can be fairly benign and difficult to detect over the short periods of time in which testing is performed. However, these bugs can still be potentially serious. In a sense, stress testing for a relatively short period of time can be seen as simulating normal operation for a longer period of time.
Relationship to branch coverage
Branch coverage (a specific type of code coverage) is a metric of the number of branches executed under test, where "100% branch coverage" means that every branch in a program has been executed at least once under some test. Branch coverage is one of the most important metrics for software testing; software for which the branch coverage is low is not generally considered to be thoroughly tested. Note that code coverage metrics are a property of the tests for a piece of software, not of the software being tested.
Achieving high branch coverage often involves writing negative test variations, that is, variations where the software is supposed to fail in some way, in addition to the usual positive test variations, which test intended usage. An example of a negative variation would be calling a function with illegal parameters. There is a limit to the branch coverage that can be achieved even with negative variations, however, as some branches may only be used for handling of errors that are beyond the control of the test. For example, a test would normally have no control over memory allocation, so branches that handle an "out of memory" error are difficult to test.
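For example, a negative test variation might look like the following minimal sketch (Python unittest; allocate_buffer is a hypothetical function under test, not from any particular library):

```python
import unittest

def allocate_buffer(size: int) -> bytearray:
    """Hypothetical function under test: rejects illegal sizes."""
    if size <= 0:
        raise ValueError("size must be positive")
    return bytearray(size)

class NegativeVariations(unittest.TestCase):
    def test_rejects_illegal_parameter(self):
        # Negative variation: the call is *supposed* to fail,
        # exercising the error-handling branch.
        with self.assertRaises(ValueError):
            allocate_buffer(-1)

if __name__ == "__main__":
    unittest.main()
```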
Stress testing can achieve higher branch coverage by producing the conditions under which certain error handling branches are followed. The coverage can be further improved by using fault injection.
Examples
A web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads.
Load test vs. stress test
Stress testing usually consists of testing beyond specified limits in order to determine failure points and test failure recovery.
Load testing implies a controlled environment moving from low loads to high. Stress testing focuses on more random events, chaos and unpredictability. Using a web application as an example, here are ways stress might be introduced (a minimal load-generation sketch follows this list):
double the baseline number for concurrent users/HTTP connections
randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
take the database offline, then restart it
rebuild a RAID array while the system is running
run processes that consume resources (CPU, memory, disk, network) on the Web and database servers
observe how the system reacts to failure and recovers
Does it save its state?
Does the application hang and freeze or does it fail gracefully?
On restart, is it able to recover from the last good state?
Does the system output meaningful error messages to the user and to the logs?
Is the security of the system compromised because of unexpected failures?
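A minimal sketch of the first item in the list above, doubling a baseline number of concurrent HTTP connections (the target URL, baseline figure and request counts are placeholder assumptions; real stress tools add ramp-up, randomness and fault injection):

```python
import concurrent.futures
import time
import urllib.request

TARGET = "http://localhost:8080/"   # placeholder URL for the system under test
BASELINE_CONCURRENCY = 50           # placeholder baseline; the stress run doubles it

def hit(url: str) -> int:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except Exception:
        return -1  # count failures instead of crashing the test harness

def stress(concurrency: int, requests_per_worker: int = 20) -> dict:
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(hit, [TARGET] * (concurrency * requests_per_worker)))
    return {
        "elapsed_s": round(time.time() - start, 2),
        "ok": statuses.count(200),
        "failed": sum(1 for s in statuses if s != 200),
    }

if __name__ == "__main__":
    print(stress(2 * BASELINE_CONCURRENCY))
```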
Reliability
In "A Pattern-Based Software Testing Framework for Exploitability Evaluation of Metadata Corruption Vulnerabilities", Deng Fenglei, Wang Jian, Zhang Bin, Feng Chao, Jiang Zhiyuan and Su Yunfei discuss the increased attention given to software quality assurance and protection. Today's software nonetheless often remains vulnerable to cyberattacks, especially when heap metadata is organized insecurely. The authors explore whether heap metadata can be corrupted and exploited by attackers, and propose RELAY, a software testing framework that simulates human exploitation behavior for metadata corruption at the machine level. RELAY consumes relatively few resources to solve a layout problem according to the exploit pattern and then generates the final exploit.
In "A Methodology to Define Learning Objects Granularity", Fabiane Barreto Vavassori Benitti first discusses how learning objects have been one of the main research topics in the e-learning community in recent years and how granularity is a key factor for learning object reuse. The author then presents a methodology to define learning object granularity in the computing area, together with a case study in software testing. Five experiments were carried out to evaluate the learning potential of the produced learning objects and to demonstrate the possibility of learning object reuse. The results presented in the article show that the learning objects promote understanding and application of the concepts.
The article "Reliability Verification of Software Based on Cloud Service" explores how the software industry needs a way to measure the reliability of each component of a piece of software, and proposes a guarantee-verification method based on cloud services. It first discusses how the trustworthiness of each component can be defined in terms of component-service guarantee-verification, then defines an effective component model and, based on that model, illustrates the process of verifying a component service in a sample application.
See also
Software testing
This article covers testing software reliability under unexpected or rare (stressed) workloads. See also the closely related:
Scalability testing
Load testing
List of software tools for load testing at Load testing#Load testing tools
Stress test for a general discussion
Black box testing
Software performance testing
Scenario analysis
Simulation
White box testing
Technischer Überwachungsverein (TÜV) - product testing and certification
Concurrency testing using the CHESS model checker
Jinx (defunct because of takeover and project cancellation) automated stress testing by automatically exploring unlikely execution scenarios.
Stress test (hardware)
References
Software testing | Stress testing (software) | Engineering | 1,786 |
7,808,557 | https://en.wikipedia.org/wiki/Unmanned%20surface%20vehicle | An unmanned surface vehicle, unmanned surface vessel or uncrewed surface vessel (USV), colloquially called a drone boat, drone ship or sea drone, is a boat or ship that operates on the surface of the water without a crew. USVs operate with various levels of autonomy, from remote control to fully autonomous surface vehicles (ASV).
Regulatory environment
The regulatory environment for USV operations is changing rapidly as the technology develops and is more frequently deployed on commercial projects. The Maritime Autonomous Surface Ship UK Industry Conduct Principles and Code of Practice 2020 (V4) has been prepared by the UK Maritime Autonomous Systems Regulatory Working Group (MASRWG) and published by Maritime UK through the Society of Maritime Industries. Organisations that contributed to the development of the MASS Code of Practice include The Maritime & Coastguard Agency (MCA), Atlas Elektronik UK Ltd, AutoNaut, Fugro, the UK Chamber of Shipping, UKHO, Trinity House, Nautical Institute, National Oceanography Centre, Dynautics Limited, SEA-KIT International, Sagar Defence Engineering and many more.
By the end of 2017, Sagar Defence Engineering became the first company in India to build and supply USV to a Government organization.
Development
As early as World War I, Germany designed and used remote-controlled FL-boats to attack British warships. At the end of World War II, remote-controlled USVs were used by the US Navy for target drone and minesweeping applications. In the twenty-first century, advances in USV control systems and navigation technologies have resulted in USVs that an operator can control remotely from land or a nearby vessel, USVs that operate with partially autonomous control, and USVs (ASVs) that operate fully autonomously. Modern applications and research areas for USVs and ASVs include commercial shipping, environmental and climate monitoring, seafloor mapping, passenger ferries, robotic research, surveillance, inspection of bridges and other infrastructure, military, and naval operations.
On January 17, 2022, the Soleil succeeded in completing the first fully autonomous sea voyage by ship. Built by MHI, the demonstration was conducted in cooperation of Shin Nihonkai Ferry. The seven-hour, 240-kilometre voyage, from Shinmoji in Northern Kyushu to the Iyonada Sea, recorded a maximum speed of 26 knots.
In August 2022, the MV Mikage of the Mitsui O.S.K. Lines sailed 161-nautical miles over two days, from Tsuruga to Sakai, successfully completing the first crewless sea voyage to include docking of an autonomous coastal container ship, in a two-day trial.
USV autonomy platforms
A number of autonomy platforms (computer software) tailored specifically for USV operations have been developed. Some are tied to specific vessels, while others are flexible and can be applied to different hull, mechanical, and electrical configurations.
Computer-controlled and operated USVs
The design and build of uncrewed surface vessels (USVs) is complex and challenging. Hundreds of decisions relating to mission goals, payload requirements, power budget, hull design, communication systems and propulsion control and management need to be analysed and implemented. Crewed vessel builders often rely on single-source suppliers for propulsion and instrumentation to help the crew control the vessel. In the case of an uncrewed (or partially crewed) vessel, the builder needs to replace elements of the human interface with a remote human interface.
Technical considerations
Uncrewed surface vessels vary in size from under 1 metre LOA to 20+ metres, with displacements ranging from a few kilograms to many tonnes, so propulsion systems cover a wide range of power levels, interfaces and technologies.
Interface types (broadly) in order of size/power:
PWM-controlled Electronic Speed Controllers for simple electric motors
Serial bus, using ASCII-coded commands
Serial bus using binary protocols
Analogue interfaces found on many larger vessels
Proprietary CANbus protocols used by various engine manufacturers
Proprietary CANbus protocols used by manufacturers of generic engine controls
While many of these protocols carry demands to the propulsion system, most of them do not return any status information. Feedback of achieved RPM may come from tacho pulses or from built-in sensors that generate CAN or serial data. Other sensors may be fitted, such as current sensing on electric motors, which can give an indication of power delivered. Safety is a critical concern, especially at high power levels, but even a small propeller can cause damage or injury and the control system needs to be designed with this in mind. This is particularly important in handover protocols for optionally manned boats.
A frequent challenge faced in the control of USVs is the achievement of a smooth response from full astern to full ahead. Crewed vessels usually have a detent behaviour, with a wide deadband around the stop position. To achieve accurate control of differential steering, the control system needs to compensate for this deadband. Internal combustion engines tend to drive through a gearbox, with an inevitable sudden change when the gearbox engages which the control system must take into account. Waterjets are the exception to this, as they adjust smoothly through the zero point. Electric drives often have a similar deadband built in, so again the control system needs to be designed to preserve this behaviour for a man on board, but smooth it out for automatic control, e.g., for low-speed manoeuvring and Dynamic Positioning.
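As an illustration, deadband compensation in the remote or autonomous control path might look like the following minimal sketch (the deadband width and the linear re-scaling are illustrative assumptions, not values from any particular drive):

```python
def compensate_deadband(demand: float, deadband: float = 0.15) -> float:
    """Map a normalized thrust demand in [-1, 1] to a drive command,
    skipping over the detent/deadband around zero so that small automatic
    corrections (e.g. for dynamic positioning) still produce thrust,
    while an exact zero demand remains a true stop."""
    if demand == 0.0:
        return 0.0
    sign = 1.0 if demand > 0 else -1.0
    # Re-scale the usable range so the output jumps from 0 to just past the deadband
    return sign * (deadband + (1.0 - deadband) * abs(demand))

# Example: a 5% ahead demand is mapped clear of the deadband
print(compensate_deadband(0.05))  # ~0.19
```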
Oceanography, hydrography and environmental monitoring
USVs are valuable in oceanography, as they are more maneuverable than moored or drifting weather buoys, but far cheaper than the equivalent weather ships and research vessels, and more flexible than commercial-ship contributions. USVs used in oceanographic research tend to be powered and propelled by renewable energy sources. For example, Wave gliders harness wave energy for primary propulsion, whereas Saildrones use wind. Other USVs harness solar energy to power electric motors. Renewable-powered, persistent, ocean-going USVs have solar cells to power their electronics, and their persistence is typically measured in months.
As of early 2022, USVs had been used predominantly for environmental monitoring and hydrographic survey, and uptake was projected to grow in the monitoring and surveillance of very remote locations due to their potential for multidisciplinary use. Low operational cost has been a consistent driver for USV uptake when compared with crewed vessels. Other drivers for USV uptake have changed through time, including reducing risk to people, spatio-temporal efficiency, endurance, precision and access to very shallow water.
Non-renewable-powered USVs are a powerful tool for use in commercial hydrographic survey. Using a small USV in parallel with traditional survey vessels as a 'force multiplier' can double survey coverage and reduce time on-site. This method was used for a survey carried out in the Bering Sea, off Alaska; the ASV Global 'C-Worker 5' autonomous surface vehicle (ASV) collected 2,275 nautical miles of survey, 44% of the project total. This was a first for the survey industry and resulted in a saving of 25 days at sea. In 2020, the British USV Maxlimer completed an unmanned survey of an area of seafloor in the Atlantic Ocean west of the English Channel.
Environmental Research Vehicles
Saildrone
A saildrone is a type of unmanned surface vehicle used primarily in oceans for data collection. Saildrones are wind and solar powered and carry a suite of science sensors and navigational instruments. They can follow a set of remotely prescribed waypoints. The saildrone was invented by Richard Jenkins, a British engineer, founder and CEO of Saildrone, Inc. Saildrones have been used by scientists and research organizations such as the National Oceanic and Atmospheric Administration (NOAA) to survey the marine ecosystem, fisheries, and weather. In January 2019, a small fleet of saildrones was launched to attempt the first autonomous circumnavigation of Antarctica. One of the saildrones completed the mission, collecting a detailed data set with its on-board environmental monitoring instrumentation over the course of the seven-month journey.
In August 2019, SD 1021 completed the fastest unmanned Atlantic crossing sailing from Bermuda to the UK, and in October, it completed the return trip to become the first autonomous vehicle to cross the Atlantic in both directions. The University of Washington and the Saildrone company began a joint venture in 2019 called The Saildrone Pacific Sentinel Experiment, which positioned six saildrones along the west coast of the United States to gather atmospheric and ocean data.
Saildrone and NOAA deployed five modified hurricane-class vessels at key locations in the Atlantic Ocean prior to the June start of the 2021 hurricane season. In September, SD 1045 was in location to obtain video and data from inside Hurricane Sam. It was the first research vessel to ever venture into the middle of a major hurricane.
Low-cost Developments
Rising concern over water pollution as a global challenge has motivated technologists to better understand natural waters. The availability of off-the-shelf sensors and instruments has spurred increased development of low-cost vehicles. New regulations and monitoring requirements have created a need for scalable technologies, such as robots for water quality sampling and microplastics collection.
Military applications
The military usage of unmanned ships in the form of a Fire ship dates back to ancient times.
USVs were used militarily as early as the 1920s as remote-controlled target craft, following the development of 'DCBs' in World War One. By World War II they were also being used for minesweeping purposes.
Military applications for USVs include powered seaborne targets and minehunting, as well as surveillance and reconnaissance, strike operations, and area denial or sea denial. Various other applications are also being explored. Some commercial USVs may utilize COLREGs-compliant navigation.
In 2016 DARPA launched an anti-submarine USV prototype called Sea Hunter. Turkish firm Aselsan produced ALBATROS-T and ALBATROS-K moving target boats for the Turkish Naval Forces to use in shooting drills. Turkey's first indigenously developed armed USV (AUSV) is the ULAQ, developed by Ares Shipyard, Meteksan Defence Systems and Roketsan. The ULAQ is armed with four Roketsan Cirit and two UMTAS missiles, and completed its first firing test successfully on 27 May 2021. It can be deployed from combat ships and controlled remotely from mobile vehicles, headquarters, command centers and floating platforms. It will serve in missions such as reconnaissance, surveillance and intelligence, surface warfare, asymmetric warfare, armed escort, force protection, and strategic facility security. Ares Shipyard's CEO has said that significantly different versions of the ULAQ, equipped with different weapons, are under development. Its primary user will be the Turkish Naval Forces.
In addition, military applications for medium unmanned surface vessels (MUSVs) include fleet intelligence, surveillance, reconnaissance and electronic warfare. In August 2020, L3Harris Technologies was awarded a contract to build an MUSV prototype, with options for up to nine vessels. L3Harris subcontracted Swiftships, a Louisiana-based shipbuilder, to build the vessels, with displacement of about 500 tons. The prototype is targeted for completion by end of 2022. It is the first unmanned naval platform programme in this class of ships, which will likely play a major role in supporting the Distributed Maritime Operations strategy of the U.S. Navy. Earlier, Swiftships partnered with University of Louisiana in 2014 to build the Anaconda (AN-1) and later the Anaconda (AN-2) class of small USVs.
On 13 April 2022, the US sent unspecified "unmanned coastal defense vessels" to Ukraine amid the 2022 Russian invasion of Ukraine as part of a new security package.
A theory was put forward by the BBC that an unmanned surface vehicle was used in the 2022 Crimean Bridge explosion. After explosions at this bridge in July 2023, Russia's Anti-Terrorist Committee claimed that Ukraine used unmanned surface vehicles to attack the bridge.
In December 2023, Russia unveiled its first kamikaze USV called "Oduvanchik". It is reported that the sea drone can carry up to 600 kg of explosives, has a range of 200 km and speed of 80 km/h.
At a ceremony held on 9 January 2024, TCB Marlin entered service in the Turkish Naval Forces as the first armed USV, with the hull number TCB-1101 and name Marlin SİDA.
In 2024, Sagar Defence Engineering Pvt Ltd demonstrated an 850-nautical-mile autonomous transit of the Matangi autonomous surface vessel to the Indian Navy. The autonomous transit began at Mumbai and ended at Toothukudi. The demonstration was part of the Indian Navy's Swavalamban 2024 self-reliance in technology contest, intended to enable the development of autonomous vessels for various military applications. These boats are equipped with a 12.7 mm SRCG gun and are capable of day and night patrolling at speeds above 50 knots. Twelve such autonomous boats are to be acquired by the Indian Navy and will also be used to patrol Pangong Tso lake.
Possible first use in combat
During the Yemeni civil war, on 30 January 2017, an Al Madinah-class frigate was attacked by Houthi forces. The frigate was hit at the stern, resulting in an explosion and a fire. The crew was able to extinguish the fire, but two members of the ship's crew were killed in the attack and three others were injured. Houthi forces claimed to have targeted the ship with a missile, but Saudi forces claimed that the ship was hit by three "suicide boats".
Further use in combat
On 29 October 2022, during the Russian invasion of Ukraine, Ukrainian armed forces made a multi-USV attack on Russian naval vessels at the Sevastopol Naval Base. According to the Russian Defense Ministry, seven USVs were involved in the attack with support of eight UAVs. Naval News reported that little damage had occurred to either of the two warships that were hit by the small USVs, a Russian frigate and a minesweeper. However, the military effect of the attack on the protected harbor of Sevastopol exceeded the direct damage because it led to the Russian Navy going into a protective mode, "essentially locking them in port. ... New defenses were quickly added, new procedures imposed and there was much less activity. Russia's most powerful warships in the war [were by mid-November] mostly tied up in port." The US Naval Institute reported that, by December 2022, the "Russian Navy now knows it is vulnerable in its main naval base, causing it to retreat further into its shell, increasing defenses and reducing activity outside."
A second USV attack occurred in mid-November in Novorossiysk, also in the Black Sea but much further from Russian occupied territory than Sevastopol.
By January 2023, SpaceX restricted the licensing of its Starlink satellite-internet communication technology to commercial use, excluding direct military use on weapon systems. The limitation restricted one use of the USV design used by Ukraine in late 2022. At the same time, Russia increased its capabilities in small explosive USVs which had been used to ram a Ukrainian bridge on 10 February 2023. By February, the new Russian capability with USVs, and the communication restrictions on the previous Ukrainian USVs, could affect the balance in the naval war. In the view of Naval News, "The Black Sea appears to be becoming more Russian friendly again." The potential for wider use of USVs to impact the outcome of the conflict is not settled, however, as both physical constraints on existing technology and emerging counter-USV capabilities may render these vessels vulnerable.
On 4 August 2023, the Olenegorsky Gornyak, a Ropucha-class landing ship was seriously damaged in the Black Sea Novorossiysk naval base after it was struck by a Ukrainian Maritime Drone carrying 450 kilograms of TNT. It was pictured listing heavily to one side while being towed back to port. Some 100 service personnel were onboard at the time.
On 1 February 2024, the Tarantul-III class missile corvette Ivanovets was sunk in the Donuzlav Bay after being attacked by Ukrainian USVs.
On 14 February 2024, the Tsezar Kunikov, a Ropucha-class landing ship was sunk off Alupka by Ukrainian HUR MO "Group 13" forces using MAGURA V5 USV.
Countermeasures used in combat
The naval war in the Black Sea during the Russian war on Ukraine has seen a number of countermeasures tried against the threat of Ukrainian uncrewed drones.
Following the drone attack on the Sevastopol Naval Base in October 2022, Russian forces deployed several early countermeasures. They trained dolphins to protect the naval base and used various booms and nets to stop further attacks. A main early change by mid-2023 was the use of dazzle camouflage, which according to Reuters is "designed to disguise a ship's heading and speed at sea — aims to confuse modern operators of suicide drones and satellites and prevent them from easily identifying important ships", while gunfire from helicopters can be used to destroy Ukrainian drones during an attack.
By December 2023, the Russian effort to counter Ukrainian USVs in the Black Sea had expanded to include:
formal dedicated anti-drone helicopter aviation units have been formed in Crimea to engage attacking USVs with unguided rockets and machine guns, using Mi-8 Hip and Ka-27 Helix helicopters. More occasionally, Sukhoi Su-27 Flanker fighter jets have been used.
electromagnetic noise countermeasures have been tried to jam communications of offensive USV drones.
escort ships have been used for high-value targets. Russia has recently begun to escort high-value weapon transport ships and tankers; escorts are typically frigates or patrol ships. The "convoys have been targeted by USVs on several occasions, with the escorts facing the brunt of the attacks."
Russia has tested flying an FPV drone from a patrol boat into a fixed target. Use in naval combat had not yet been reported by December 2023.
By January 2024, Russian countermeasures had become increasingly capable, and the Ukrainian Navy indicated that some offensive USV "tactics that were worked out in 2022 and 2023 will not work in 2024", and that this military reality was driving change on the Ukrainian side. Ukraine is developing autonomous underwater vehicles (AUVs) to increase offensive capability against improved Russian USV defenses.
Strategic studies
An emerging field of research examines whether the proliferation of unmanned surface vessels can impact crisis dynamics or intra-war escalation. An exploratory report on the subject from the Center for Naval Analyses suggests seven potential concerns for military competition, including accidental, deliberate, and inadvertent escalation. While recent scholarship has examined the impact of unmanned aerial systems on crisis management, the empirical record for unmanned surface and subsurface systems is thinner, since these technologies have not yet been widely employed. According to an article published by Reuters, these drones are manufactured at a cost of $250,000 each. They use two impact detonators taken from Russian bombs. With a length of 5.5 metres, they carry a camera to allow a human operator to control them, and use a water jet for propulsion, with a maximum speed of 80 kilometres per hour and an endurance of 60 hours. Given their relatively low cost compared to missiles or bombs, they can be deployed in a mass attack. Their low profile also makes them harder to hit.
Cargo
In the future, many unmanned cargo ships are expected to cross the waters. In November 2021, the first autonomous cargo ship, MV Yara Birkeland was launched in Norway. The fully electric ship is expected to substantially reduce the need for truck journeys.
Urban vessels and small-scale logistics
In 2021, the world's first urban autonomous vessels, Roboats, were deployed in the canals of Amsterdam, Netherlands. The ships developed by three institutions could carry up to five people, collect waste, deliver goods, monitor the environment and provide "on-demand infrastructure".
Seaweed farming
Unmanned surface vehicles can also assist in seaweed farming and help to reduce operating costs.
See also
Self-steering gear
Spartan Scout
Swarm robotics
Self Defense Test Ship
USV RSV (Marine Tech)
References
Oceanographic instrumentation | Unmanned surface vehicle | Technology,Engineering | 4,177 |
33,592,038 | https://en.wikipedia.org/wiki/Odyssey%20Studios | Odyssey Studios was a recording studio based near Marble Arch in London and opened in 1979. It was set up by Wayne Bickerton as an extension of State Records, the label he had set up with Tony Waddington and John Fruin in 1975. The studio closed in 1989 and the building was subsequently sold to Jazz FM.
Albums recorded at Odyssey
Through the 1980s, many artists recorded at Odyssey Studios, including Cliff Richard, Paul McCartney, Kate Bush, George Michael, Spandau Ballet and Roger Daltrey. Trevor Jones' score for the feature film Labyrinth was also recorded there. The Pat Metheny Group recorded a portion of the score for the feature film The Falcon and the Snowman (1985) there.
Below is a list of some albums recorded either in part or entirely at Odyssey Studios.
Set-up and equipment
Studio One in Odyssey was 1,400 square feet and had room for 50 musicians, which meant it could facilitate orchestral recordings and could be used for other activities such as video shoots. Studios 1 and 2 were equipped with MCI consoles and tape machines. Peter Jones (chief engineer) went to Fort Lauderdale, home of MCI, to commission all the equipment. At the time, they were the largest consoles that MCI had produced, and a hole in the factory wall was required to accommodate the extra length of the chassis. The studio was designed by Keith Slaughter and constructed on the "floating" principle to ensure total sound insulation. Studio Two, which was a mixing suite with capacity for 8 musicians, had an MCI 6000 48 Channel Desk which offered up to 48 tracks of recording, or the capacity to mixdown. Upstairs there was a radio facility, which offered a studio and separate control room plus a lounge area.
Odyssey was one of the first studios to install a satellite link-up, which effectively turned the studio into a miniature radio station, allowing it to broadcast any session live around the world.
References
1979 establishments in England
1989 disestablishments in England
Audio engineering
Audio mixing
Local mass media in London
Recording studios in London | Odyssey Studios | Engineering | 416 |
4,362,140 | https://en.wikipedia.org/wiki/Subsynchronous%20orbit | A subsynchronous orbit is an orbit of a satellite that is nearer the planet than it would be if it were in synchronous orbit, i.e. the orbital period is less than the sidereal day of the planet.
Technical considerations
An Earth satellite that is in (a prograde) subsynchronous orbit will appear to drift eastward as seen from the Earth's surface.
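As a worked illustration, Kepler's third law gives the period of a circular orbit directly from its radius; any radius below the geosynchronous radius of roughly 42,164 km yields a period shorter than one sidereal day and hence a subsynchronous orbit. The constants in the sketch below are standard rounded values and the function names are illustrative labels.

import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1         # length of the sidereal day, seconds

def orbital_period(radius_m: float) -> float:
    """Period of a circular orbit of the given radius (Kepler's third law)."""
    return 2.0 * math.pi * math.sqrt(radius_m ** 3 / MU_EARTH)

# Geosynchronous radius: the radius whose period equals one sidereal day.
r_geo = (MU_EARTH * (SIDEREAL_DAY / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)

# A satellite a few hundred kilometres below that radius is subsynchronous:
r_sub = r_geo - 500e3
print(orbital_period(r_sub) < SIDEREAL_DAY)   # True: it drifts eastward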
Economic importance in commercial spaceflight
The geosynchronous-belt subsynchronous orbital regime is regularly used in spaceflight. This orbit is typically used to house working communication satellites that have not yet been deactivated, and which may still be returned to geostationary service if the need arises.
See also
Supersynchronous orbit
List of orbits
References
Orbits | Subsynchronous orbit | Astronomy | 162 |
221,400 | https://en.wikipedia.org/wiki/Aircraft%20flight%20dynamics | Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of gravity (cg), known as pitch, roll and yaw. These are collectively known as aircraft attitude, often principally relative to the atmospheric frame in normal flight, but also relative to terrain during takeoff or landing, or when operating at low elevation. The concept of attitude is not specific to fixed-wing aircraft, but also extends to rotary aircraft such as helicopters, and dirigibles, where the flight dynamics involved in establishing and controlling attitude are entirely different.
Control systems adjust the orientation of a vehicle about its cg. A control system includes control surfaces which, when deflected, generate a moment (or couple from ailerons) about the cg which rotates the aircraft in pitch, roll, and yaw. For example, a pitching moment comes from a force applied at a distance forward or aft of the cg, causing the aircraft to pitch up or down.
A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though an aircraft may be deliberately "sideslipped" to increase drag and descent rate during landing, to keep the aircraft's heading aligned with the runway heading during crosswind landings, and during flight with asymmetric power.
Background
Roll, pitch and yaw refer to rotations about the respective axes starting from a defined steady flight equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle.
The most common aeronautical convention defines roll as acting about the longitudinal axis, positive with the starboard (right) wing down. Yaw is about the vertical body axis, positive with the nose to starboard. Pitch is about an axis perpendicular to the longitudinal plane of symmetry, positive nose up.
Reference frames
Three right-handed, Cartesian coordinate systems see frequent use in flight dynamics. The first coordinate system has an origin fixed in the reference frame of the Earth:
Earth frame
Origin - arbitrary, fixed relative to the surface of the Earth
xE axis - positive in the direction of north
yE axis - positive in the direction of east
zE axis - positive towards the center of the Earth
In many flight dynamics applications, the Earth frame is assumed to be inertial with a flat xE,yE-plane, though the Earth frame can also be considered a spherical coordinate system with origin at the center of the Earth.
The other two reference frames are body-fixed, with origins moving along with the aircraft, typically at the center of gravity. For an aircraft that is symmetric from right-to-left, the frames can be defined as:
Body frame
Origin - airplane center of gravity
xb axis - positive out the nose of the aircraft in the plane of symmetry of the aircraft
zb axis - perpendicular to the xb axis, in the plane of symmetry of the aircraft, positive below the aircraft
yb axis - perpendicular to the xb,zb-plane, positive determined by the right-hand rule (generally, positive out the right wing)
Wind frame
Origin - airplane center of gravity
xw axis - positive in the direction of the velocity vector of the aircraft relative to the air
zw axis - perpendicular to the xw axis, in the plane of symmetry of the aircraft, positive below the aircraft
yw axis - perpendicular to the xw,zw-plane, positive determined by the right hand rule (generally, positive to the right)
Asymmetric aircraft have analogous body-fixed frames, but different conventions must be used to choose the precise directions of the x and z axes.
The Earth frame is a convenient frame to express aircraft translational and rotational kinematics. The Earth frame is also useful in that, under certain assumptions, it can be approximated as inertial. Additionally, one force acting on the aircraft, weight, is fixed in the +zE direction.
The body frame is often of interest because the origin and the axes remain fixed relative to the aircraft. This means that the relative orientation of the Earth and body frames describes the aircraft attitude. Also, the direction of the force of thrust is generally fixed in the body frame, though some aircraft can vary this direction, for example by thrust vectoring.
The wind frame is a convenient frame to express the aerodynamic forces and moments acting on an aircraft. In particular, the net aerodynamic force can be divided into components along the wind frame axes, with the drag force in the −xw direction and the lift force in the −zw direction.
In addition to defining the reference frames, the relative orientation of the reference frames can be determined. The relative orientation can be expressed in a variety of forms, including:
Rotation matrices
Direction cosines
Euler angles
Quaternions
The various Euler angles relating the three reference frames are important to flight dynamics. Many Euler angle conventions exist, but all of the rotation sequences presented below use the z-y'-x" convention. This convention corresponds to a type of Tait-Bryan angles, which are commonly referred to as Euler angles. This convention is described in detail below for the roll, pitch, and yaw Euler angles that describe the body frame orientation relative to the Earth frame. The other sets of Euler angles are described below by analogy.
Transformations (Euler angles)
From Earth frame to body frame
First, rotate the Earth frame axes xE and yE around the zE axis by the yaw angle ψ. This results in an intermediate reference frame with axes denoted x′,y′,z′, where z′=zE.
Second, rotate the x′ and z′ axes around the y′ axis by the pitch angle θ. This results in another intermediate reference frame with axes denoted x″,y″,z″, where y″=y′.
Finally, rotate the y″ and z″ axes around the x″ axis by the roll angle φ. The reference frame that results after the three rotations is the body frame.
Based on the rotations and axes conventions above:
Yaw angle ψ: angle between north and the projection of the aircraft longitudinal axis onto the horizontal plane;
Pitch angle θ: angle between the aircraft longitudinal axis and horizontal;
Roll angle φ: rotation around the aircraft longitudinal axis after rotating by yaw and pitch.
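A minimal numerical sketch of the z-y′-x″ sequence described above is given below; it composes the three elementary rotations into a single matrix that transforms vector components from the Earth frame into the body frame. The function name is illustrative, and the angles are the yaw ψ, pitch θ and roll φ defined above, in radians.

import numpy as np

def earth_to_body(psi: float, theta: float, phi: float) -> np.ndarray:
    """Rotation matrix taking Earth-frame components to body-frame components.

    Applies the z-y'-x'' (yaw, then pitch, then roll) sequence described in
    the text, with z positive downwards and y positive out the right wing.
    """
    c, s = np.cos, np.sin
    Rz = np.array([[ c(psi), s(psi), 0.0],
                   [-s(psi), c(psi), 0.0],
                   [    0.0,    0.0, 1.0]])
    Ry = np.array([[c(theta), 0.0, -s(theta)],
                   [     0.0, 1.0,       0.0],
                   [s(theta), 0.0,  c(theta)]])
    Rx = np.array([[1.0,     0.0,    0.0],
                   [0.0,  c(phi), s(phi)],
                   [0.0, -s(phi), c(phi)]])
    return Rx @ Ry @ Rz

# Example: a unit vector pointing north, seen from a body yawed 90 degrees
# (nose east), points out of the left-hand side of the aircraft (negative yb).
v_body = earth_to_body(np.pi / 2, 0.0, 0.0) @ np.array([1.0, 0.0, 0.0])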
From Earth frame to wind frame
Heading angle σ: angle between north and the horizontal component of the velocity vector, which describes which direction the aircraft is moving relative to cardinal directions.
Flight path angle γ: is the angle between horizontal and the velocity vector, which describes whether the aircraft is climbing or descending.
Bank angle μ: represents a rotation of the lift force around the velocity vector, which may indicate whether the airplane is turning.
When performing the rotations described above to obtain the body frame from the Earth frame, there is this analogy between angles:
σ, ψ (heading vs yaw)
γ, θ (Flight path vs pitch)
μ, φ (Bank vs Roll)
From wind frame to body frame
sideslip angle β: angle between the velocity vector and the projection of the aircraft longitudinal axis onto the xw,yw-plane, which describes whether there is a lateral component to the aircraft velocity
angle of attack α: angle between the xw,yw-plane and the aircraft longitudinal axis and, among other things, is an important variable in determining the magnitude of the force of lift
When performing the rotations described earlier to obtain the body frame from the Earth frame, there is this analogy between angles:
β, ψ (sideslip vs yaw)
α, θ (attack vs pitch)
(φ = 0) (nothing vs roll)
Analogies
Between the three reference frames there are hence these analogies:
Yaw / Heading / Sideslip (Z axis, vertical)
Pitch / Flight path / Attack angle (Y axis, wing)
Roll / Bank / nothing (X axis, nose)
Design cases
In analyzing the stability of an aircraft, it is usual to consider perturbations about a nominal steady flight state. So the analysis would be applied, for example, assuming:
Straight and level flight
Turn at constant speed
Approach and landing
Takeoff
The speed, height and trim angle of attack are different for each flight condition; in addition, the aircraft will be configured differently, e.g. at low speed flaps may be deployed and the undercarriage may be down.
Except for asymmetric designs (or symmetric designs at significant sideslip), the longitudinal equations of motion (involving pitch and lift forces) may be treated independently of the lateral motion (involving roll and yaw).
The following considers perturbations about a nominal straight and level flight path.
To keep the analysis (relatively) simple, the control surfaces are assumed fixed throughout the motion, this is stick-fixed stability. Stick-free analysis requires the further complication of taking the motion of the control surfaces into account.
Furthermore, the flight is assumed to take place in still air, and the aircraft is treated as a rigid body.
Forces of flight
Three forces act on an aircraft in flight: weight, thrust, and the aerodynamic force.
Aerodynamic force
Components of the aerodynamic force
The expression to calculate the aerodynamic force is:
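In its standard surface-integral form (writing Δp for the pressure difference, n for the outer normal, f for the tangential stress vector and Σ for the reference surface listed below) this can be written as

\mathbf{F}_A = \iint_{\Sigma} \left[ -\Delta p \, \mathbf{n} + \mathbf{f} \right] \, d\sigma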
where:
Difference between static pressure and free-stream pressure
outer normal vector of the element of area
tangential stress vector practised by the air on the body
adequate reference surface
projected on wind axes we obtain:
where:
Drag
Lateral force
Lift
Aerodynamic coefficients
Dynamic pressure of the free stream
Proper reference surface (wing surface, in case of planes)
Pressure coefficient
Friction coefficient
Drag coefficient
Lateral force coefficient
Lift coefficient
It is necessary to know Cp and Cf in every point on the considered surface.
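In terms of these coefficients, the wind-axis components take the familiar form (with q∞ the free-stream dynamic pressure, S the reference surface, and C_Y used here as one common notation for the lateral force coefficient):

D = q_\infty S C_D, \qquad Y = q_\infty S C_Y, \qquad L = q_\infty S C_L, \qquad q_\infty = \tfrac{1}{2}\rho_\infty V_\infty^2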
Dimensionless parameters and aerodynamic regimes
In the absence of thermal effects, there are three notable dimensionless numbers:
Compressibility of the flow:
Mach number
Viscosity of the flow:
Reynolds number
Rarefaction of the flow:
Knudsen number
where:
speed of sound
specific heat ratio
gas constant per unit mass
absolute temperature
mean free path
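In the usual notation these are defined as follows, with V the flight speed, l a characteristic length of the body, ρ the density and μ the dynamic viscosity of the air:

M = \frac{V}{a}, \qquad a = \sqrt{\gamma R T}, \qquad Re = \frac{\rho V l}{\mu}, \qquad Kn = \frac{\lambda}{l}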
According to λ there are three possible degrees of rarefaction, and the corresponding flow regimes are called:
Continuum flow (negligible rarefaction):
Transition flow (moderate rarefaction):
Free molecular flow (high rarefaction):
In flight dynamics, the motion of a body through a flow is treated as continuum flow. In the outer region of the space that surrounds the body, viscosity is negligible; viscous effects must, however, be considered when analysing the flow in the vicinity of the boundary layer.
Depending on the compressibility of the flow, different kinds of flow can be considered:
Incompressible subsonic flow:
Compressible subsonic flow:
Transonic flow:
Supersonic flow:
Hypersonic flow:
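Commonly quoted approximate boundaries are M < 0.3 for incompressible subsonic flow, 0.3 < M < 0.8 for compressible subsonic flow, 0.8 < M < 1.2 for transonic flow, 1.2 < M < 5 for supersonic flow and M > 5 for hypersonic flow, although the precise limits depend on the configuration and the local flow.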
Drag coefficient equation and aerodynamic efficiency
If the geometry of the body is fixed and in case of symmetric flight (β=0 and Q=0), pressure and friction coefficients are functions depending on:
where:
angle of attack
considered point of the surface
Under these conditions, the drag and lift coefficients are functions depending exclusively on the angle of attack of the body and on the Mach and Reynolds numbers. Aerodynamic efficiency, defined as the ratio between the lift and drag coefficients, will depend on those parameters as well.
It is also possible to get the dependency of the drag coefficient respect to the lift coefficient. This relation is known as the drag coefficient equation:
drag coefficient equation
The aerodynamic efficiency has a maximum value, Emax, with respect to CL, where the tangent line from the coordinate origin touches the drag coefficient equation plot.
The drag coefficient, CD, can be decomposed in two ways. The first typical decomposition separates pressure and friction effects:
There is a second typical decomposition taking into account the definition of the drag coefficient equation. This decomposition separates the effect of the lift coefficient in the equation, obtaining two terms CD0 and CDi. CD0 is known as the parasitic drag coefficient and it is the base drag coefficient at zero lift. CDi is known as the induced drag coefficient and it is produced by the body lift.
Parabolic and generic drag coefficient
A good approximation for the induced drag coefficient is to assume a parabolic dependency on the lift
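A common way to write this, with K an induced-drag factor that depends on the planform and on the flight condition, is

C_{D_i} = K C_L^2, \qquad C_D = C_{D_0} + K C_L^2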
Aerodynamic efficiency is now calculated as:
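With the parabolic polar above this gives

E = \frac{C_L}{C_{D_0} + K C_L^2}, \qquad E_{max} = \frac{1}{2\sqrt{K C_{D_0}}}

the maximum being reached at C_L = \sqrt{C_{D_0}/K}.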
If the configuration of the plane is symmetric with respect to the XY plane, the minimum drag coefficient equals the parasitic drag of the plane.
If the configuration is asymmetric with respect to the XY plane, however, the minimum drag differs from the parasitic drag. In these cases, a new approximate parabolic drag equation can be traced, with the minimum drag value occurring at a non-zero lift value.
Variation of parameters with the Mach number
The pressure coefficient varies with Mach number according to the relation given below:
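C_p = \frac{C_{p_0}}{\sqrt{1 - M_\infty^2}}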
where
Cp is the compressible pressure coefficient
Cp0 is the incompressible pressure coefficient
M∞ is the freestream Mach number.
This relation is reasonably accurate for 0.3 < M < 0.7; when M = 1 it becomes infinite, which is an impossible physical situation, and is called the Prandtl–Glauert singularity.
Aerodynamic force in a specified atmosphere
see Aerodynamic force
Stability
Stability is the ability of the aircraft to counteract disturbances to its flight path.
According to David P. Davies, there are six types of aircraft stability: speed stability, stick free static longitudinal stability, static lateral stability, directional stability, oscillatory stability, and spiral stability.
Speed stability
An aircraft in cruise flight is typically speed stable. If speed increases, drag increases, which will reduce the speed back to equilibrium for its configuration and thrust setting. If speed decreases, drag decreases, and the aircraft will accelerate back to its equilibrium speed where thrust equals drag.
However, in slow flight, due to lift-induced drag, as speed decreases, drag increases (and vice versa). This is known as the "back of the drag curve". The aircraft will be speed unstable, because a decrease in speed will cause a further decrease in speed.
Static stability and control
Longitudinal static stability
Longitudinal stability refers to the stability of an aircraft in pitch. For a stable aircraft, if the aircraft pitches up, the wings and tail create a pitch-down moment which tends to restore the aircraft to its original attitude. For an unstable aircraft, a disturbance in pitch will lead to an increasing pitching moment. Longitudinal static stability is the ability of an aircraft to recover from an initial disturbance. Longitudinal dynamic stability refers to the damping of these stabilizing moments, which prevents persistent or increasing oscillations in pitch.
Directional stability
Directional or weathercock stability is concerned with the static stability of the airplane about the z axis. Just as in the case of longitudinal stability it is desirable that the aircraft should tend to return to an equilibrium condition when subjected to some form of yawing disturbance. For this the slope of the yawing moment curve must be positive.
An airplane possessing this mode of stability will always point towards the relative wind, hence the name weathercock stability.
Dynamic stability and control
Longitudinal modes
It is common practice to derive a fourth order characteristic equation to describe the longitudinal motion, and then factorize it approximately into a high frequency mode and a low frequency mode. The approach adopted here is using qualitative knowledge of aircraft behavior to simplify the equations from the outset, reaching the result by a more accessible route.
The two longitudinal motions (modes) are called the short period pitch oscillation (SPPO), and the phugoid.
Short-period pitch oscillation
A short input (in control systems terminology an impulse) in pitch (generally via the elevator in a standard configuration fixed-wing aircraft) will generally lead to overshoots about the trimmed condition. The transition is characterized by a damped simple harmonic motion about the new trim. There is very little change in the trajectory over the time it takes for the oscillation to damp out.
Generally this oscillation is high frequency (hence short period) and is damped over a period of a few seconds. A real-world example would involve a pilot selecting a new climb attitude, for example 5° nose up from the original attitude. A short, sharp pull back on the control column may be used, and will generally lead to oscillations about the new trim condition. If the oscillations are poorly damped the aircraft will take a long period of time to settle at the new condition, potentially leading to Pilot-induced oscillation. If the short period mode is unstable it will generally be impossible for the pilot to safely control the aircraft for any period of time.
This damped harmonic motion is called the short period pitch oscillation; it arises from the tendency of a stable aircraft to point in the general direction of flight. It is very similar in nature to the weathercock mode of missile or rocket configurations. The motion involves mainly the pitch attitude (theta) and incidence (alpha). The direction of the velocity vector, relative to inertial axes is . The velocity vector is:
where , are the inertial axes components of velocity. According to Newton's Second Law, the accelerations are proportional to the forces, so the forces in inertial axes are:
where m is the mass.
By the nature of the motion, the speed variation is negligible over the period of the oscillation, so:
The forces, however, are generated by the pressure distribution on the body, and are referred to the velocity vector. Since the velocity (wind) axes set is not an inertial frame, we must resolve the fixed-axes forces into wind axes. Also, we are only concerned with the force along the z-axis:
Or:
In words, the wind axes force is equal to the centripetal acceleration.
The moment equation is the time derivative of the angular momentum:
where M is the pitching moment, and B is the moment of inertia about the pitch axis.
Let: , the pitch rate.
The equations of motion, with all forces and moments referred to wind axes are, therefore:
We are only concerned with perturbations in forces and moments, due to perturbations in the states and q, and their time derivatives. These are characterized by stability derivatives determined from the flight condition. The possible stability derivatives are:
Lift due to incidence, this is negative because the z-axis is downwards whilst positive incidence causes an upwards force.
Lift due to pitch rate, arises from the increase in tail incidence, hence is also negative, but small compared with .
Pitching moment due to incidence - the static stability term. Static stability requires this to be negative.
Pitching moment due to pitch rate - the pitch damping term, this is always negative.
Since the tail is operating in the flowfield of the wing, changes in the wing incidence cause changes in the downwash, but there is a delay for the change in wing flowfield to affect the tail lift, this is represented as a moment proportional to the rate of change of incidence:
The delayed downwash effect gives the tail more lift and produces a nose down moment, so is expected to be negative.
The equations of motion, with small perturbation forces and moments become:
These may be manipulated to yield a second-order linear differential equation in :
This represents a damped simple harmonic motion.
We should expect to be small compared with unity, so the coefficient of (the 'stiffness' term) will be positive, provided . This expression is dominated by , which defines the longitudinal static stability of the aircraft, it must be negative for stability. The damping term is reduced by the downwash effect, and it is difficult to design an aircraft with both rapid natural response and heavy damping. Usually, the response is underdamped but stable.
Phugoid
If the stick is held fixed, the aircraft will not maintain straight and level flight (except in the unlikely case that it happens to be perfectly trimmed for level flight at its current altitude and thrust setting), but will start to dive, level out and climb again. It will repeat this cycle until the pilot intervenes. This long period oscillation in speed and height is called the phugoid mode. This is analyzed by assuming that the SPPO performs its proper function and maintains the angle of attack near its nominal value. The two states which are mainly affected are the flight path angle (gamma) and speed. The small perturbation equations of motion are:
which means the centripetal force is equal to the perturbation in lift force.
For the speed, resolving along the trajectory:
where g is the acceleration due to gravity at the Earth's surface. The acceleration along the trajectory is equal to the net x-wise force minus the component of weight. We should not expect significant aerodynamic derivatives to depend on the flight path angle, so only and need be considered. is the drag increment with increased speed, it is negative, likewise is the lift increment due to speed increment, it is also negative because lift acts in the opposite sense to the z-axis.
The equations of motion become:
These may be expressed as a second order equation in flight path angle or speed perturbation:
Now lift is very nearly equal to weight:
where is the air density, is the wing area, W the weight and is the lift coefficient (assumed constant because the incidence is constant), we have, approximately:
The period of the phugoid, T, is obtained from the coefficient of u:
Or:
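approximately, in the classical Lanchester form,

T \approx \pi \sqrt{2}\, \frac{U}{g}

which depends only on the flight speed U and not on the size or mass of the aircraft.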
Since the lift is very much greater than the drag, the phugoid is at best lightly damped. A propeller with fixed speed would help. Heavy damping of the pitch rotation or a large rotational inertia increase the coupling between short period and phugoid modes, so that these will modify the phugoid.
Lateral modes
With a symmetrical rocket or missile, the directional stability in yaw is the same as the pitch stability; it resembles the short period pitch oscillation, with yaw plane equivalents to the pitch plane stability derivatives. For this reason, pitch and yaw directional stability are collectively known as the "weathercock" stability of the missile.
Aircraft lack the symmetry between pitch and yaw, so that directional stability in yaw is derived from a different set of stability derivatives. The yaw plane equivalent to the short period pitch oscillation, which describes yaw plane directional stability is called Dutch roll. Unlike pitch plane motions, the lateral modes involve both roll and yaw motion.
Dutch roll
It is customary to derive the equations of motion by formal manipulation in what, to the engineer, amounts to a piece of mathematical sleight of hand. The current approach follows the pitch plane analysis in formulating the equations in terms of concepts which are reasonably familiar.
Applying an impulse via the rudder pedals should induce Dutch roll, which is the oscillation in roll and yaw, with the roll motion lagging yaw by a quarter cycle, so that the wing tips follow elliptical paths with respect to the aircraft.
The yaw plane translational equation, as in the pitch plane, equates the centripetal acceleration to the side force.
where (beta) is the sideslip angle, Y the side force and r the yaw rate.
The moment equations are a bit trickier. The trim condition is with the aircraft at an angle of attack with respect to the airflow. The body x-axis does not align with the velocity vector, which is the reference direction for wind axes. In other words, wind axes are not principal axes (the mass is not distributed symmetrically about the yaw and roll axes). Consider the motion of an element of mass in position -z, x in the direction of the y-axis, i.e. into the plane of the paper.
If the roll rate is p, the velocity of the particle is:
The force on this particle is made up of two terms: the first is proportional to the rate of change of v, and the second is due to the change in direction of this component of velocity as the body moves. The latter term gives rise to cross products of small quantities (pq, pr, qr), which are later discarded. In this analysis, they are discarded from the outset for the sake of clarity. In effect, we assume that the direction of the velocity of the particle due to the simultaneous roll and yaw rates does not change significantly throughout the motion. With this simplifying assumption, the acceleration of the particle becomes:
The yawing moment is given by:
There is an additional yawing moment due to the offset of the particle in the y direction:
The yawing moment is found by summing over all particles of the body:
where N is the yawing moment, E is a product of inertia, and C is the moment of inertia about the yaw axis.
A similar reasoning yields the roll equation:
where L is the rolling moment and A the roll moment of inertia.
Lateral and longitudinal stability derivatives
The states are (sideslip), r (yaw rate) and p (roll rate), with moments N (yaw) and L (roll), and force Y (sideways). There are nine stability derivatives relevant to this motion, the following explains how they originate. However a better intuitive understanding is to be gained by simply playing with a model airplane, and considering how the forces on each component are affected by changes in sideslip and angular velocity:
Side force due to side slip (in absence of yaw).
Sideslip generates a sideforce from the fin and the fuselage. In addition, if the wing has dihedral, side slip at a positive roll angle increases incidence on the starboard wing and reduces it on the port side, resulting in a net force component directly opposite to the sideslip direction. Sweep back of the wings has the same effect on incidence, but since the wings are not inclined in the vertical plane, backsweep alone does not affect . However, anhedral may be used with high backsweep angles in high performance aircraft to offset the wing incidence effects of sideslip. Oddly enough this does not reverse the sign of the wing configuration's contribution to (compared to the dihedral case).
Side force due to roll rate.
Roll rate causes incidence at the fin, which generates a corresponding side force. Also, positive roll (starboard wing down) increases the lift on the starboard wing and reduces it on the port. If the wing has dihedral, this will result in a side force momentarily opposing the resultant sideslip tendency. Anhedral wing and or stabilizer configurations can cause the sign of the side force to invert if the fin effect is swamped.
Side force due to yaw rate.
Yawing generates side forces due to incidence at the rudder, fin and fuselage.
Yawing moment due to sideslip forces.
Sideslip in the absence of rudder input causes incidence on the fuselage and empennage, thus creating a yawing moment counteracted only by the directional stiffness which would tend to point the aircraft's nose back into the wind in horizontal flight conditions. Under sideslip conditions at a given roll angle will tend to point the nose into the sideslip direction even without rudder input, causing a downward spiraling flight.
Yawing moment due to roll rate.
Roll rate generates fin lift causing a yawing moment and also differentially alters the lift on the wings, thus affecting the induced drag contribution of each wing, causing a (small) yawing moment contribution. Positive roll generally causes positive values unless the empennage is anhedral or fin is below the roll axis. Lateral force components resulting from dihedral or anhedral wing lift differences has little effect on because the wing axis is normally closely aligned with the center of gravity.
Yawing moment due to yaw rate.
Yaw rate input at any roll angle generates rudder, fin and fuselage force vectors which dominate the resultant yawing moment. Yawing also increases the speed of the outboard wing whilst slowing down the inboard wing, with corresponding changes in drag causing a (small) opposing yaw moment. opposes the inherent directional stiffness which tends to point the aircraft's nose back into the wind and always matches the sign of the yaw rate input.
Rolling moment due to sideslip.
A positive sideslip angle generates empennage incidence which can cause positive or negative roll moment depending on its configuration. For any non-zero sideslip angle dihedral wings causes a rolling moment which tends to return the aircraft to the horizontal, as does back swept wings. With highly swept wings the resultant rolling moment may be excessive for all stability requirements and anhedral could be used to offset the effect of wing sweep induced rolling moment.
Rolling moment due to yaw rate.
Yaw increases the speed of the outboard wing whilst reducing speed of the inboard one, causing a rolling moment to the inboard side. The contribution of the fin normally supports this inward rolling effect unless offset by anhedral stabilizer above the roll axis (or dihedral below the roll axis).
Rolling moment due to roll rate.
Roll creates counter rotational forces on both starboard and port wings whilst also generating such forces at the empennage. These opposing rolling moment effects have to be overcome by the aileron input in order to sustain the roll rate. If the roll is stopped at a non-zero roll angle the upward rolling moment induced by the ensuing sideslip should return the aircraft to the horizontal unless exceeded in turn by the downward rolling moment resulting from sideslip induced yaw rate. Longitudinal stability could be ensured or improved by minimizing the latter effect.
Equations of motion
Since Dutch roll is a handling mode, analogous to the short period pitch oscillation, any effect it might have on the trajectory may be ignored. The body rate r is made up of the rate of change of sideslip angle and the rate of turn. Taking the latter as zero, assuming no effect on the trajectory, for the limited purpose of studying the Dutch roll:
The yaw and roll equations, with the stability derivatives become:
(yaw)
(roll)
The inertial moment due to the roll acceleration is considered small compared with the aerodynamic terms, so the equations become:
This becomes a second order equation governing either roll rate or sideslip:
The equation for roll rate is identical. But the roll angle, (phi) is given by:
If p is a damped simple harmonic motion, so is , but the roll must be in quadrature with the roll rate, and hence also with the sideslip. The motion consists of oscillations in roll and yaw, with the roll motion lagging 90 degrees behind the yaw. The wing tips trace out elliptical paths.
Stability requires the "stiffness" and "damping" terms to be positive. These are:
(damping)
(stiffness)
The denominator is dominated by , the roll damping derivative, which is always negative, so the denominators of these two expressions will be positive.
Considering the "stiffness" term: will be positive because is always negative and is positive by design. is usually negative, whilst is positive. Excessive dihedral can destabilize the Dutch roll, so configurations with highly swept wings require anhedral to offset the wing sweep contribution to .
The damping term is dominated by the product of the roll damping and the yaw damping derivatives, these are both negative, so their product is positive. The Dutch roll should therefore be damped.
The motion is accompanied by slight lateral motion of the center of gravity and a more "exact" analysis will introduce terms in etc. In view of the accuracy with which stability derivatives can be calculated, this is an unnecessary pedantry, which serves to obscure the relationship between aircraft geometry and handling, which is the fundamental objective of this article.
Roll subsidence
Jerking the stick sideways and returning it to center causes a net change in roll orientation.
The roll motion is characterized by an absence of natural stability, there are no stability derivatives which generate moments in response to the inertial roll angle. A roll disturbance induces a roll rate which is only canceled by pilot or autopilot intervention. This takes place with insignificant changes in sideslip or yaw rate, so the equation of motion reduces to:
is negative, so the roll rate will decay with time. The roll rate reduces to zero, but there is no direct control over the roll angle.
Spiral mode
Simply holding the stick still, when starting with the wings near level, an aircraft will usually have a tendency to gradually veer off to one side of the straight flightpath. This is the (slightly unstable) spiral mode.
Spiral mode trajectory
In studying the trajectory, it is the direction of the velocity vector, rather than that of the body, which is of interest. The direction of the velocity vector when projected on to the horizontal will be called the track, denoted (mu). The body orientation is called the heading, denoted (psi). The force equation of motion includes a component of weight:
where g is the gravitational acceleration, and U is the speed.
Including the stability derivatives:
Roll rates and yaw rates are expected to be small, so the contributions of and will be ignored.
The sideslip and roll rate vary gradually, so their time derivatives are ignored. The yaw and roll equations reduce to:
(yaw)
(roll)
Solving for and p:
Substituting for sideslip and roll rate in the force equation results in a first order equation in roll angle:
This is an exponential growth or decay, depending on whether the coefficient of is positive or negative. The denominator is usually negative, which requires (both products are positive). This is in direct conflict with the Dutch roll stability requirement, and it is difficult to design an aircraft for which both the Dutch roll and spiral mode are inherently stable.
Since the spiral mode has a long time constant, the pilot can intervene to effectively stabilize it, but an aircraft with an unstable Dutch roll would be difficult to fly. It is usual to design the aircraft with a stable Dutch roll mode, but slightly unstable spiral mode.
See also
Acronyms and abbreviations in avionics
Aeronautics
Attitude and heading reference system
Steady flight
Aircraft flight control system
Aircraft flight mechanics
Aircraft heading
Aircraft bank
Crosswind landing
Dynamic positioning
Flight control surfaces
Helicopter dynamics
JSBSim (An open source flight dynamics software model)
Longitudinal static stability
Rigid body dynamics
Rotation matrix
Ship motions
Stability derivatives
Static margin
Weathervane effect
1902 Wright Glider
References
Notes
Bibliography
NK Sinha and N Ananthkrishnan (2013), Elementary Flight Dynamics with an Introduction to Bifurcation and Continuation Methods, CRC Press, Taylor & Francis.
External links
MIXR - mixed reality simulation platform
JSBsim, An open source, platform-independent, flight dynamics & control software library in C++
Aerodynamics
Avionics
Flight control systems | Aircraft flight dynamics | Chemistry,Technology,Engineering | 7,128 |
65,287,796 | https://en.wikipedia.org/wiki/Grammistin | Grammistins are peptide toxins synthesised by glands in the skin of soapfishes of the tribes Grammistini and Diploprionini which are both classified within the grouper subfamily Epinephelinae, a part of the family Serranidae. Grammistin has a hemolytic and ichthyotoxic action. The grammistins have secondary structures and biological effects comparable to other classes of peptide toxins, melittin from the bee stings and pardaxins which are secreted in the skin of two sole species. A similar toxin has been found to be secreted in the skin of some clingfishes.
Grammistins have a distinctive bitter taste. Soapfishes increase the amount of toxin released in their skin if they are stressed, and other species of fish kept in a confined space with a stressed soapfish normally die. If ingested at a high enough dosage the toxin is lethal to mammals, with some symptoms similar to those produced by ciguatoxins. Grammistins also cause hemolysis of mammalian blood cells. The main purpose of the secretion of grammistin is defensive: when a lionfish (Pterois miles) tries to prey on a soapfish, it immediately ejects it from its mouth, suggesting that it has detected the bitter taste. Grammistins affect organisms by cytolysis and hemolysis. As well as being toxic, they are also antibiotic and antimicrobial.
References
Ichthyotoxins
Peptides
Biological toxin weapons | Grammistin | Chemistry | 313 |
1,261,255 | https://en.wikipedia.org/wiki/Mode%20of%20limited%20transposition | Modes of limited transposition are musical modes or scales that fulfill specific criteria relating to their symmetry and the repetition of their interval groups. These scales may be transposed to all twelve notes of the chromatic scale, but at least two of these transpositions must result in the same pitch classes, thus their transpositions are "limited". They were compiled by the French composer Olivier Messiaen, and published in his book La technique de mon langage musical ("The Technique of my Musical Language").
Technical criteria
There are two complementary ways to view the modes: considering their possible transpositions, and considering the different modes contained within them.
Definition by chromatic transposition
Transposing the diatonic major scale up in semitones results in a different set of notes being used each time. For example, C major consists of C, D, E, F, G, A, B, and the scale a semitone higher (D♭ major) consists of D♭, E♭, F, G♭, A♭, B♭, C. By transposing D♭ major up another semitone, another new set of notes (D major) is produced, and so on, giving 12 different diatonic scales in total. When transposing a mode of limited transposition this is not the case. For example, the mode of limited transposition that Messiaen labelled "Mode 1", which is the whole tone scale, contains the notes C, D, E, F♯, G♯, A♯; transposing this mode up a semitone produces C♯, D♯, F, G, A, B. Transposing this up another semitone produces D, E, F♯, G♯, A♯, C, which is the same set of notes as the original scale. Since transposing the mode up a whole tone produces the same set of notes, mode 1 has only 2 transpositions.
Any scale having 12 different transpositions is not a mode of limited transposition.
Definition by shifting modal degrees
Consider the intervals of the major scale: tone, tone, semitone, tone, tone, tone, semitone. Starting the scale on a different degree will always create a new mode with individual interval layouts—for example starting on the second degree of a major scale gives the "Dorian mode"—tone, semitone, tone, tone, tone, semitone, tone. This is not so of the modes of limited transposition, which can be modally shifted only a limited number of times. For example, mode 1, the whole tone scale, contains the intervals tone, tone, tone, tone, tone, tone. Starting on any degree of the mode gives the same sequence of intervals, and therefore the whole tone scale has only 1 mode. Messiaen's mode 2, or the diminished scale, consists of semitone, tone, semitone, tone, semitone, tone, semitone, tone, which can only be arranged 2 ways, starting with either a tone or a semitone. Therefore, mode 2 has two modes.
Any scale having the same number of modes as notes is not a mode of limited transposition.
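The number of modes can be counted in the same spirit; a small Python sketch (again only illustrative) treats a scale as its list of successive intervals, in semitones, and counts the distinct rotations:
def num_modes(intervals):
    # Count the distinct interval sequences obtained by starting the
    # scale on each of its degrees, i.e. by rotating the interval list.
    rotations = {tuple(intervals[i:] + intervals[:i]) for i in range(len(intervals))}
    return len(rotations)

print(num_modes([2, 2, 2, 2, 2, 2]))        # whole-tone scale: 1 mode
print(num_modes([1, 2, 1, 2, 1, 2, 1, 2]))  # Messiaen's mode 2: 2 modes
print(num_modes([2, 2, 1, 2, 2, 2, 1]))     # major scale: 7 modes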
Messiaen's list
Messiaen's first mode, also called the whole-tone scale, is divided into six groups of two notes each. The intervals it contains are tone, tone, tone, tone, tone, tone – it has two transpositions and one mode.
The second mode, also called the octatonic, diminished, whole-half, or half-whole scale, is divided into four groups of three notes each. It contains the intervals semitone, tone, semitone, tone, semitone, tone, semitone, tone – it has three transpositions, like the diminished 7th chord, and two modes:
The third mode is divided into three groups of four notes each. It contains the intervals tone, semitone, semitone, tone, semitone, semitone, tone, semitone, semitone – it has four transpositions, like the augmented triad, and three modes.
The fourth mode contains the intervals semitone, semitone, minor third, semitone, semitone, semitone, minor third, semitone – it has six transpositions, like the tritone, and four modes.
The fifth mode contains the intervals semitone, major third, semitone, semitone, major third, semitone – it has six transpositions, like the tritone, and three modes.
The sixth mode has the intervals tone, tone, semitone, semitone, tone, tone, semitone, semitone – it has six transpositions, like the tritone, and four modes.
The seventh mode contains the intervals semitone, semitone, semitone, tone, semitone, semitone, semitone, semitone, tone, semitone – it has six transpositions, like the tritone, and five modes.
Expansion and alteration of the modes
Are there others?
Messiaen wrote, "Their series is closed, it is mathematically impossible to find others, at least in our tempered system of 12 semitones." More modes can be found that fit the criteria, but they are truncations of the original seven modes.
Indeed, all these modes can be regrouped mathematically speaking in a lattice (wherein one set of notes is connected with those containing it). If one excludes modes with 0, 1, 11 or 12 notes which can be construed as too extreme, there are 15 generalized Messiaen modes which are represented on clock diagrams. Some of the ones omitted by Messiaen are arguably of musical importance, like the hexatonic collection 0 1 4 5 8 9.
The augmented scale: 0 3 4 7 8 11 (Root, minor 3rd, major 3rd, 5th, augmented 5th, major 7th, or minor 3rd, semitone, minor 3rd, semitone, minor 3rd, semitone) may appear to be an inexplicable omission on Messiaen's part. It is a symmetrical scale used frequently by modern jazz improvisers. However closer inspection reveals it to be a truncated version of his Third Mode.
Truncation
Truncation involves the removal of notes from one of the modes to leave a new truncated mode. Both the notes removed and the notes remaining must preserve the symmetry of the parent mode, and must therefore fulfill the conditions for limited transposition. For example, consider mode 1.
C D E F♯ G♯ A♯
Removing alternate notes creates a new truncated mode of limited transposition.
C E G♯
Removing two notes for every one kept creates a new truncated mode of limited transposition.
C F♯
Keeping two notes for every one removed creates another truncated mode of limited transposition.
C E F♯ A♯
Only Messiaen's mode 7 and mode 3 are not truncated modes: the other modes may be constructed from them or from one or more of their modes. Mode 7 contains modes 1, 2, 4, 5, and 6. Mode 6 contains modes 1 and 5. Mode 4 contains mode 5. Mode 3 contains mode 1.
Pure intervallic truncations
Tritones, truncation of modes 1, 2, 3, 4, 5, 6 and 7: augmented fourth, augmented fourth – 1 mode and 6 transpositions
Major thirds, truncation of modes 1, 3, 6 and 7: major third, major third, major third – 1 mode and 4 transpositions. See Augmented triad
Minor thirds, truncation of modes 2, 4, 6 and 7: minor third, minor third, minor third, minor third – 1 mode and 3 transpositions. See Diminished seventh chord
Whole tones (mode 1), truncation of modes 3, 6 and 7: tone, tone, tone, tone, tone, tone – 1 mode and 2 transpositions
Other truncations
Truncation of modes 2, 4, 6 and 7: semitone, tone, minor third, semitone, tone, minor third – 3 modes, 6 transpositions. (Modes are "mirror" inversions of Petrushka Chord modes.)
Truncation of modes 1, 2, 3, 4, 5, 6 and 7: major third, tone, major third, tone – 2 modes, 6 transpositions. See French Sixth and Dominant seventh flat five chord
Truncation of modes 2, 3, 4, 5, 6 and 7: perfect fourth, semitone, perfect fourth, semitone – 2 modes, 6 transpositions. See 1:5 Distance model
Truncation of mode 3: minor third, semitone, minor third, semitone, minor third, semitone – 2 modes, 4 transpositions. See augmented scale
Truncation of modes 2, 4, 6 and 7: minor third, tone, semitone, minor third, tone, semitone – 3 modes, 6 transpositions. See Petrushka Chord
Use and sound
Messiaen found ways of employing all of the modes of limited transposition harmonically, melodically, and sometimes polyphonically. The whole-tone and octatonic scales have enjoyed quite widespread use since the turn of the 20th century, particularly by Debussy (the whole-tone scale) and Stravinsky (the octatonic scale).
The symmetry inherent in these modes (which means no note can be perceived as the tonic), together with certain rhythmic devices, Messiaen described as containing "the charm of impossibilities".
The composer Tōru Takemitsu made frequent use of Messiaen's modes, particularly the third mode. The composer Alexander Tcherepnin has adopted the third mode as one of his synthetic scales, and it is known as the "Tcherepnin scale".
In other temperaments
There are no modes of limited transposition in any prime equal division of the octave, such as 19 equal temperament or 31 equal temperament.
Composite divisions, such as 15 equal temperament or 22 equal temperament, have them. The 12-note chromatic scale can itself be considered such a mode when viewed as a subset of a larger system that contains it, such as quarter tones or 72 equal temperament.
For example, eight equal temperament, the lowest non-prime equal temperament not completely included in 12-tet (due to a scale step in 24-tet), would have modes of limited transposition. The first would be 0, 2, 4, 6 (steps: 2222), which has only two transpositions and one mode. Another would be 0, 1, 4, 5 (steps: 1313 and 3131), which has 4 transpositions and 2 modes (the other is 0, 3, 4, 7).
References
Further reading
Anaf, Jef. 1988. "Olivier Messiaen 80 jaar: Nog steeds actief mit zijn modale toonstelsel". Adem: Driemaandelijks tijdschrift voor muziek cultuur 24, no. 4 (October–December): 184–91.
Douthett, Jack, and Peter Steinbach. 1998. "Parsimonious Graphs: A Study in Parsimony, Contextual Transformations, and Modes of Limited Transposition". Journal of Music Theory 42, no. 2 (Fall): 241–63.
Frischknecht, Hans Eugen. 2008. "Potential und Grenzen einer musikalischen Sprache: Olivier Messiaens modes à transpositions limitées unter der Lupe". Dissonanz/Dissonance, no. 104 (December): 32–34.
Giesl, Peter. 2002. "Zur melodischen Verwendung des Zweiten Modus in Messiaens Subtilité des Corps Glorieux". In Musik, Wissenschaft und ihre Vermittlung: Bericht über die internationale musikwissenschaftliche Tagung der Hochschule für Musik und Theater Hannover, edited by Arnfried Edler and Sabine Meine, 259–64. Publikationen der Hochschule für Musik und Theater Hannover 12. Augsburg: Wißner. .
Neidhöfer, Christoph. 2005. "A Theory of Harmony and Voice Leading for the Music of Olivier Messiaen". Music Theory Spectrum 27, no. 1 (Spring): 1–34.
Schuster-Craig, John. 1990. "An Eighth Mode of Limited Transposition". The Music Review 51, no. 4 (November) : 296–306.
Street, Donald. 1976. "The Modes of Limited Transposition". The Musical Times 117, no. 1604 (October): 819–23.
Yamaguchi, Masaya. 2006. The Complete Thesaurus of Musical Scales, revised edition. New York: Masaya Music Services. .
Yamaguchi, Masaya. 2006. Symmetrical Scales for Jazz Improvisation, revised edition. New York: Masaya Music Services. .
Yamaguchi, Masaya. 2012. Lexicon of Geometric Patterns for Jazz Improvisation. New York: Masaya Music Services. .
External links
My Messiaen Modes – A visual representation of the modes of limited transposition
Modes (music)
Post-tonal music theory
Musical symmetry
Olivier Messiaen | Mode of limited transposition | Physics | 2,749 |
39,290 | https://en.wikipedia.org/wiki/Precondition | In computer programming, a precondition is a condition or predicate that must always be true just prior to the execution of some section of code or before an operation in a formal specification.
If a precondition is violated, the effect of the section of code becomes undefined and thus may or may not carry out its intended work. Preconditions that are missing, insufficient, or not formally proved (or have an incorrect attempted proof), or are not checked statically or dynamically, can give rise to security problems, particularly in unsafe languages that are not strongly typed.
Often, preconditions are simply included in the documentation of the affected section of code. Preconditions are sometimes tested using guards or assertions within the code itself, and some languages have specific syntactic constructions for doing so.
Example
The factorial function is only defined where its parameter is an integer greater than or equal to zero. So an implementation of the factorial function would have a precondition that its parameter be an integer and that the parameter be greater than or equal to zero. Alternatively the type system of the language may be used to specify that the parameter of the factorial function is a natural number (unsigned integer), which can be formally verified automatically by a compiler's type checker.
In addition where numeric types have a limited range (as they do in most programming languages) the precondition must also specify the maximum value that the parameter may have if overflow is not to occur. (e.g. if an implementation of factorial returns the result in a 64-bit unsigned integer then the parameter must be less than 21 because factorial(21) is larger than the maximum unsigned integer that can be stored in 64 bits).
Where the language supports range sub-types (e.g. Ada) such constraints can be automatically verified by the type system. More complex constraints can be formally verified interactively with a proof assistant.
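As a rough illustration in Python (not taken from the article or any particular library), the preconditions described above could be checked at run time with assertions; the bound of 20 assumes the result must fit in an unsigned 64-bit integer, since 20! < 2^64 < 21!:
def factorial(n):
    # Preconditions: n must be an integer, non-negative, and small enough
    # that the result fits in 64 unsigned bits (20! < 2**64 < 21!).
    assert isinstance(n, int), "precondition violated: n must be an integer"
    assert n >= 0, "precondition violated: n must be non-negative"
    assert n <= 20, "precondition violated: factorial(n) would overflow 64 bits"
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result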
In object-oriented programming
Preconditions in object-oriented software development are an essential part of design by contract. Design by contract also includes notions of postcondition and class invariant.
The precondition for any routine defines any constraints on object state which are necessary for successful execution. From the program developer's viewpoint, this constitutes the routine caller's portion of the contract. The caller then is obliged to ensure that the precondition holds prior to calling the routine. The reward for the caller's effort is expressed in the called routine's postcondition.
Eiffel example
The routine in the following example written in Eiffel takes as an argument an integer which must be a valid value for an hour of the day, i. e., 0 through 23, inclusively. The precondition follows the keyword require. It specifies that the argument must be greater than or equal to zero and less than or equal to 23. The tag "valid_argument:" describes this precondition clause and serves to identify it in case of a runtime precondition violation.
set_hour (a_hour: INTEGER)
        -- Set `hour' to `a_hour'
    require
        valid_argument: 0 <= a_hour and a_hour <= 23
    do
        hour := a_hour
    ensure
        hour_set: hour = a_hour
    end
Preconditions and inheritance
In the presence of inheritance, the routines inherited by descendant classes (subclasses) do so with their preconditions in force. This means that any implementations or redefinitions of inherited routines also have to be written to comply with their inherited contract. Preconditions can be modified in redefined routines, but they may only be weakened. That is, the redefined routine may lessen the obligation of the client, but not increase it.
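A hypothetical Python sketch of this rule (the class and attribute names are invented for illustration): the redefined routine accepts every call that the inherited precondition allowed, and possibly more:
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Base-class precondition: the caller must request a positive
        # amount no larger than the current balance.
        assert 0 < amount <= self.balance, "precondition violated"
        self.balance -= amount

class OverdraftAccount(Account):
    def __init__(self, balance, overdraft_limit):
        super().__init__(balance)
        self.overdraft_limit = overdraft_limit

    def withdraw(self, amount):
        # The redefinition only weakens the inherited precondition: every
        # call that was legal on Account remains legal here, and overdrafts
        # up to overdraft_limit are now also accepted.
        assert 0 < amount <= self.balance + self.overdraft_limit, "precondition violated"
        self.balance -= amount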
See also
Design by contract
Guard (computer science)
Postcondition
Hoare logic
Invariants maintained by conditions
Database trigger
References
Programming constructs
Formal methods
Logic in computer science | Precondition | Mathematics,Engineering | 813 |
39,963,415 | https://en.wikipedia.org/wiki/Overlay%20transport%20virtualization | Overlay transport virtualization (OTV) is a Cisco proprietary protocol for relaying layer 2 communications between layer 3 computer networks.
See also
Distributed Overlay Virtual Ethernet (DOVE)
Generic Routing Encapsulation (GRE)
IEEE 802.1ad, an Ethernet networking standard, also known as provider bridging, Stacked VLANs, or simply QinQ.
NVGRE, a similar competing specification
Virtual Extensible LAN (VXLAN)
Virtual LAN (VLAN)
External links
Cisco Overlay Transport Virtualization overview
Network protocols
Tunneling protocols | Overlay transport virtualization | Technology,Engineering | 115 |
50,425,980 | https://en.wikipedia.org/wiki/Domain-to-range%20ratio | The domain-to-range ratio (DRR) is a ratio which describes how the number of outputs corresponds to the number of inputs of a given logical function or software component. The domain-to-range ratio is a mathematical ratio of cardinality between the set of the function's possible inputs (the domain) and the set of possible outputs (the range). For a function defined on a domain, , and a range, , the domain-to-range ratio is given as:It can be used to measure the risk of missing potential errors when testing the range of outputs alone.
Example
Consider the function isEven() below, which checks the parity of an unsigned short number x, any value between 0 and 65,535, and yields a boolean value which corresponds to whether x is even or odd. This solution takes advantage of the fact that integer division in programming typically rounds towards zero.
bool isEven(unsigned short x) {
return (x / 2) == ((x + 3)/2 - 1);
}
Because x can be any value from 0 to 65,535, the function's domain has a cardinality of 65,536. The function yields true, if x is even, or false, if x is odd. This is expressed as the range {true, false}, which has a cardinality of 2. Therefore, the domain-to-range ratio of isEven() is given by DRR = 65,536 / 2 = 32,768. Here, the domain-to-range ratio indicates that this function would require a comparatively large number of tests to find errors. If a test program attempts every possible value of x in order from 0 to 65,535, the program would have to perform 32,768 tests for each of the two possible outputs in order to find errors or edge cases. Because errors in functions with a high domain-to-range ratio are difficult to identify via manual testing or methods which reduce the number of tested inputs, such as orthogonal array testing or all-pairs testing, more computationally complex techniques may be used, such as fuzzing or static program analysis, to find errors.
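For illustration, a small Python port of the example (not part of the original article) can brute-force every unsigned-short input and recover the ratio from the observed outputs; // is floor division, which matches C++ integer division for the non-negative values used here:
def is_even(x):
    # Python equivalent of the C++ isEven() above.
    return (x // 2) == ((x + 3) // 2 - 1)

domain = range(0, 2**16)                 # every unsigned short value, 0..65535
outputs = {is_even(x) for x in domain}   # observed range: {True, False}
print(len(domain), len(outputs), len(domain) / len(outputs))  # 65536 2 32768.0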
See also
Software testing
Formal verification
References
Software metrics
Software testing
Set theory | Domain-to-range ratio | Mathematics,Engineering | 419 |
1,452,758 | https://en.wikipedia.org/wiki/Parts%20book | A parts book, parts catalogue or illustrated part catalogue is a book published by a manufacturer which contains the illustrations, part numbers and other relevant data for their products or parts thereof.
Parts books were often issued as microfiche, though this has fallen out of favour. Now, many manufacturers offer this information digitally in an electronic parts catalogue. This can be locally installed software, or a centrally hosted web application. Usually, an electronic parts catalogue enables the user to virtually disassemble the product into its components to identify the required part(s).
In the automotive industry, electronic parts catalogues are also able to access specific vehicle information, usually through an online look-up of the vehicle identification number. This will identify specific models, allowing the user to correctly identify the required part and its relevant part number.
See also
ETKA
Product information management
Parts locator
Electronic Parts Catalogue (EPC)
References
https://www.skybrary.aero/index.php/Illustrated_Parts_Catalogue
Catalog Design Handbook – Marketing principles as well as typography and printing industry standards applied to catalog design
Manufacturing
Automotive industry | Parts book | Engineering | 226 |
61,594,613 | https://en.wikipedia.org/wiki/Monodontid%20alphaherpesvirus%201 | Monodontid alphaherpesvirus 1 (MoHV-1) is a species of virus in the family Herpesviridae. The virus is able to infect narwhals and beluga whales. The name of the virus stems from the scientific name of the Narwhal, Monodon Monoceros.
References
Alphaherpesvirinae | Monodontid alphaherpesvirus 1 | Biology | 77 |
6,021,465 | https://en.wikipedia.org/wiki/Biological%20pigment | Biological pigments, also known simply as pigments or biochromes, are substances produced by living organisms that have a color resulting from selective color absorption. Biological pigments include plant pigments and flower pigments. Many biological structures, such as skin, eyes, feathers, fur and hair contain pigments such as melanin in specialized cells called chromatophores. In some species, pigments accrue over very long periods during an individual's lifespan.
Pigment color differs from structural color in that it is the same for all viewing angles, whereas structural color is the result of selective reflection or iridescence, usually because of multilayer structures. For example, butterfly wings typically contain structural color, although many butterflies have cells that contain pigment as well.
Biological pigments
See conjugated systems for electron bond chemistry that causes these molecules to have pigment.
Heme/porphyrin-based: chlorophyll, bilirubin, hemocyanin, hemoglobin, myoglobin
Light-emitting: luciferin
Carotenoids:
Hematochromes (algal pigments, mixes of carotenoids and their derivates)
Carotenes: alpha and beta carotene, lycopene, rhodopsin
Xanthophylls: canthaxanthin, zeaxanthin, lutein
Proteinaceous: phytochrome, phycobiliproteins
Psittacofulvins: a class of red and yellow pigments unique to parrots
Turacin and Turacoverdin: red and green pigments found in turacos and related species
Anthoxanthins: white color of some plants
Other: melanin, urochrome, flavonoids
Pigments in plants
The primary function of pigments in plants is photosynthesis, which uses the green pigment chlorophyll and several colorful pigments that absorb as much light energy as possible. Pigments are also known to play a role in pollination where pigment accumulation or loss can lead to floral color change, signaling to pollinators which flowers are rewarding and contain more pollen and nectar.
Plant pigments include many molecules, such as porphyrins, carotenoids, anthocyanins and betalains. All biological pigments selectively absorb certain wavelengths of light while reflecting others.
The principal pigments responsible are:
Chlorophyll is the primary pigment in plants; it is a chlorin that absorbs blue and red wavelengths of light while reflecting a majority of green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess only chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light in order to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. During the process of photosynthesis, they have functions in light-harvesting (as accessory pigments), in photoprotection (energy dissipation via non-photochemical quenching as well as singlet oxygen scavenging for prevention of photooxidative damage), and also serve as protein structural elements. In higher plants, they also serve as precursors to the plant hormone abscisic acid.
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occur in plants with anthocyanins. Betalains are responsible for the deep red color of beets.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, plant stem, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers of many species.
Plants, in general, contain six ubiquitous carotenoids: neoxanthin, violaxanthin, antheraxanthin, zeaxanthin, lutein and β-carotene. Lutein is a yellow pigment found in fruits and vegetables and is the most abundant carotenoid in plants. Lycopene is the red pigment responsible for the color of tomatoes. Other less common carotenoids in plants include lutein epoxide (in many woody species), lactucaxanthin (found in lettuce), and alpha carotene (found in carrots).
A particularly noticeable manifestation of pigmentation in plants is seen with autumn leaf color, a phenomenon that affects the normally green leaves of many deciduous trees and shrubs whereby they take on, during a few weeks in the autumn season, various shades of red, yellow, purple, and brown.
Chlorophylls degrade into colorless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs).
As the predominant chlorophylls degrade, the hidden pigments of yellow xanthophylls and orange beta-carotene are revealed. These pigments are present throughout the year, but the red pigments, the anthocyanins, are synthesized de novo once roughly half of chlorophyll has been degraded. The amino acids released from degradation of light harvesting complexes are stored all winter in the tree's roots, branches, stems, and trunk until next spring when they are recycled to re‑leaf the tree.
Pigments in algae
Algae are very diverse photosynthetic organisms, which differ from plants in that they are aquatic organisms, they do not present vascular tissue and do not generate an embryo. However, both types of organisms share the possession of photosynthetic pigments, which absorb and release energy that is later used by the cell. These pigments in addition to chlorophylls, are phycobiliproteins, fucoxanthins, xanthophylls and carotenes, which serve to trap the energy of light and lead it to the primary pigment, which is responsible for initiating oxygenic photosynthesis reactions.
Algal phototrophs such as dinoflagellates use peridinin as a light harvesting pigment. While carotenoids can be found complexed within chlorophyll-binding proteins such as the photosynthetic reaction centers and light-harvesting complexes, they also are found within dedicated carotenoid proteins such as the orange carotenoid protein of cyanobacteria.
Pigments in bacteria
Bacteria produce pigments such as carotenoids, melanin, violacein, prodigiosin, pyocyanin, actinorhodin, and zeaxanthin. Cyanobacteria produce phycocyanin, phycoerythrin, scytonemin, chlorophyll a, chlorophyll d, and chlorophyll f. Purple sulfur bacteria produce bacteriochlorophyll a and bacteriochlorophyll b. In cyanobacteria, many other carotenoids exist such as canthaxanthin, myxoxanthophyll, synechoxanthin, and echinenone.
Pigments in animals
Pigmentation is used by many animals for protection, by means of camouflage, mimicry, or warning coloration. Some animals including fish, amphibians and cephalopods use pigmented chromatophores to provide camouflage that varies to match the background.
Pigmentation is used in signalling between animals, such as in courtship and reproductive behavior. For example, some cephalopods use their chromatophores to communicate.
The photopigment rhodopsin intercepts light as the first step in the perception of light.
Skin pigments such as melanin may protect tissues from sunburn by ultraviolet radiation.
However, some biological pigments in animals, such as heme groups that help to carry oxygen in the blood, are colored as a result of happenstance. Their color does not have a protective or signalling function.
Pea aphids (Acyrthosiphon pisum), two-spotted spider mites (Tetranychus urticae), and gall midges (family Cecidomyiidae) are the only known animals capable of synthesizing carotenoids. The presence of genes for synthesizing carotenoids in these arthropods has been attributed to independent horizontal gene transfer (HGT) events from fungi.
Diseases and conditions
A variety of diseases and abnormal conditions that involve pigmentation are in humans and animals, either from absence of or loss of pigmentation or pigment cells, or from the excess production of pigment.
Albinism is an inherited disorder characterized by total or partial loss of melanin. Humans and animals that suffer from albinism are called "albinistic" (the term "albino" is also sometimes used, but may be considered offensive when applied to people).
Lamellar ichthyosis, also called "fish scale disease", is an inherited condition in which one symptom is excess production of melanin. The skin is darker than normal, and is characterized by darkened, scaly, dry patches.
Melasma is a condition in which dark brown patches of pigment appear on the face, influenced by hormonal changes. When it occurs during a pregnancy, this condition is called the mask of pregnancy.
Ocular pigmentation is an accumulation of pigment in the eye, and may be caused by latanoprost medication.
Vitiligo is a condition in which there is a loss of pigment-producing cells called melanocytes in patches of skin.
Pigments in marine animals
Carotenoids and carotenoproteins
Carotenoids are the most common group of pigments found in nature. Over 600 different kinds of carotenoids are found in animals, plants, and microorganisms.
Marine animals are incapable of making their own carotenoids and thus rely on plants for these pigments. Carotenoproteins are especially common among marine animals. These complexes are responsible for the various colors (red, purple, blue, green, etc.) of these marine invertebrates, used in mating rituals and camouflage. There are two main types of carotenoproteins: Type A and Type B. Type A has carotenoids (chromogen) which are stoichiometrically associated with a simple protein (glycoprotein). The second type, Type B, has carotenoids which are associated with a lipoprotein and is usually less stable. While Type A is commonly found on the surface (shells and skins) of marine invertebrates, Type B is usually in eggs, ovaries, and blood. The colors and characteristic absorption of these carotenoprotein complexes are based upon the chemical binding of the chromogen and the protein subunits.
For example, the blue carotenoprotein, linckiacyanin, has about 100–200 carotenoid molecules per complex. In addition, the functions of these pigment-protein complexes change with their chemical structure. Carotenoproteins that are within the photosynthetic structure are more common, but complicated. Pigment-protein complexes that are outside of the photosynthetic system are less common, but have a simpler structure. For example, there are only two of these blue astaxanthin-proteins in the jellyfish Velella velella, each containing only about 100 carotenoids per complex.
A common carotenoid in animals is astaxanthin, which gives off a purple-blue and green pigment. Astaxanthin's color is formed by creating complexes with proteins in a certain order. For example, the crustochrin has approximately 20 astaxanthin molecules bonded with protein. When the complexes interact by exciton-exciton interaction, it lowers the absorbance maximum, changing the different color pigments.
In lobsters, there are various types of astaxanthin-protein complexes present. The first one is crustacyanin (max 632 nm), a slate-blue pigment found in the lobster's carapace. The second one is crustochrin (max 409), a yellow pigment which is found on the outer layer of the carapace. Lastly, the lipoglycoprotein and ovoverdin forms a bright green pigment that is usually present in the outer layers of the carapace and the lobster eggs.
Tetrapyrroles
Tetrapyrroles are the next most common group of pigments. They have four pyrrole rings, each ring consisting of C4H4NH. The main role of the tetrapyrroles is their connection in the biological oxidation process. Tetrapyrroles have a major role in electron transport and act as a replacement for many enzymes. They also have a role in the pigmentation of the marine organism's tissues.
Melanin
Melanin is a class of compounds that serves as a pigment with different structures responsible for dark, tan, yellowish / reddish pigments in marine animals. It is produced as the amino acid tyrosine is converted into melanin, which is found in the skin, hair, and eyes. Derived from aerobic oxidation of phenols, they are polymers.
There are several different types of melanins considering that they are an aggregate of smaller component molecules, such as nitrogen containing melanins. There are two classes of pigments: black and brown insoluble eumelanins, which are derived from aerobic oxidation of tyrosine in the presence of tyrosinase, and the alkali-soluble phaeomelanins which range from a yellow to red brown color, arising from the deviation of the eumelanin pathway through the intervention of cysteine and/or glutathione. Eumelanins are usually found in the skin and eyes. Several different melanins include melanoprotein (dark brown melanin that is stored in high concentrations in the ink sac of the cuttlefish Sepia Officianalis), echinoidea (found in sand dollars, and the hearts of sea urchins), holothuroidea (found in sea cucumbers), and ophiuroidea (found in brittle and snake stars). These melanins are possibly polymers which arise from the repeated coupling of simple bi-polyfunctional monomeric intermediates, or of high molecular weights. The compounds benzothiazole and tetrahydroisoquinoline ring systems act as UV-absorbing compounds.
Bioluminescence
The only light source in the deep sea, marine animals give off visible light energy called bioluminescence, a subset of chemiluminescence. This is the chemical reaction in which chemical energy is converted to light energy. It is estimated that 90% of deep-sea animals produce some sort of bioluminescence. Considering that a large proportion of the visible light spectrum is absorbed before reaching the deep sea, most of the emitted light from the sea-animals is blue and green. However, some species may emit a red and infrared light, and there has even been a genus that is found to emit yellow bioluminescence. The organ that is responsible for the emission of bioluminescence is known as photophores. This type is only present in squid and fish, and is used to illuminate their ventral surfaces, which disguise their silhouettes from predators. The uses of the photophores in the sea-animals differ, such as lenses for controlling intensity of color, and the intensity of the light produced. Squids have both photophores and chromatophores which controls both of these intensities. Another thing that is responsible for the emission of bioluminescence, which is evident in the bursts of light that jellyfish emit, start with a luciferin (a photogen) and ends with the light emitter (a photagogikon.) Luciferin, luciferase, salt, and oxygen react and combine to create a single unit called photo-proteins, which can produce light when reacted with another molecule such as Ca+. Jellyfish use this as a defense mechanism; when a smaller predator is attempting to devour a jellyfish, it will flash its lights, which would therefore lure a larger predator and chase the smaller predator away. It is also used as mating behavior.
In reef-building coral and sea anemones, they fluoresce; light is absorbed at one wavelength, and re-emitted at another. These pigments may act as natural sunscreens, aid in photosynthesis, serve as warning coloration, attract mates, warn rivals, or confuse predators.
Chromatophores
Chromatophores are color pigment changing cells that are directly stimulated by central motor neurons. They are primarily used for quick environmental adaptation for camouflaging. The process of changing the color pigment of the skin relies on a single highly developed chromatophore cell and many muscles, nerves, glial and sheath cells. Chromatophores contract and contain vesicles that store three different liquid pigments. Each color is indicated by one of three types of chromatophore cells: erythrophores, melanophores, and xanthophores. The first type is the erythrophores, which contain reddish pigments such as carotenoids and pteridines. The second type is the melanophores, which contain black and brown pigments such as the melanins. The third type is the xanthophores, which contain yellow pigments in the form of carotenoids. The various colors are made by the combination of the different layers of the chromatophores. These cells are usually located beneath the skin or scales of the animals. There are two categories of colors generated by the cell – biochromes and schematochromes. Biochromes are colors chemically formed by microscopic, natural pigments. Their chemical composition is created to take in some color of light and reflect the rest. In contrast, schematochromes (structural colors) are colors created by light reflections from a colorless surface and refractions by tissues. Schematochromes act like prisms, refracting and dispersing visible light to the surroundings, which will eventually reflect a specific combination of colors. These categories are determined by the movement of pigments within the chromatophores. Physiological color changes are short-term and fast, are found in fishes, and result from an animal's response to a change in the environment. In contrast, morphological color changes are long-term changes, occur in different stages of the animal, and are due to a change in the number of chromatophores. To change the color pigments, transparency, or opacity, the cells alter in form and size, and stretch or contract their outer covering.
Photo-protective pigments
Due to damage from UV-A and UV-B, marine animals have evolved to have compounds that absorb UV light and act as sunscreen. Mycosporine-like amino acids (MAAs) can absorb UV rays at 310-360 nm. Melanin is another well-known UV-protector. Carotenoids and photopigments both indirectly act as photo-protective pigments, as they quench oxygen free-radicals. They also supplement photosynthetic pigments that absorb light energy in the blue region.
Defensive role of pigments
Animals are known to use their color patterns to warn off predators; however, it has been observed that a sponge pigment mimicked a chemical involved in the regulation of moulting of an amphipod known to prey on sponges. Whenever that amphipod eats the sponge, the chemical pigments prevent moulting, and the amphipod eventually dies.
Environmental influence on color
Coloration in invertebrates varies based on the depth, water temperature, food source, currents, geographic location, light exposure, and sedimentation. For example, the amount of carotenoid in a certain sea anemone decreases with depth. Thus, the marine life that resides in deeper waters is less brilliant than the organisms that live in well-lit areas, due to the reduction of pigments. In the colonies of the colonial ascidian-cyanophyte symbiosis Trididemnum solidum, the colors differ depending on the light regime in which they live. The colonies that are exposed to full sunlight are heavily calcified, thicker, and white. In contrast, the colonies that live in shaded areas have more phycoerythrin (pigment that absorbs green) in comparison to phycocyanin (pigment that absorbs red), are thinner, and are purple. The purple color in the shaded colonies is mainly due to the phycobilin pigment of the algae, meaning the variation of exposure to light changes the colors of these colonies.
Adaptive coloration
Aposematism is warning coloration used to signal potential predators to stay away. Many chromodorid nudibranchs take in distasteful and toxic chemicals emitted from sponges and store them in their repugnatorial glands (located around the mantle edge). Predators of nudibranchs have learned to avoid these particular nudibranchs based on their bright color patterns. Prey also protect themselves with toxic compounds, ranging over a variety of organic and inorganic compounds.
Physiological activities
Pigments of marine animals serve several different purposes other than defensive roles. Some pigments are known to protect against UV (see photo-protective pigments). In the nudibranch Nembrotha kubaryana, tetrapyrrole pigment 13 has been found to be a potent antimicrobial agent. Also in this creature, tambjamines A, B, C, E, and F have shown antimicrobial, antitumor, and immunosuppressive activities.
Sesquiterpenoids are recognized for their blue and purple colors, but they have also been reported to exhibit various bioactivities such as antibacterial, immunoregulating, antimicrobial, and cytotoxic effects, as well as inhibitory activity against cell division in fertilized sea urchin and ascidian eggs. Several other pigments have been shown to be cytotoxic. In fact, two new carotenoids that were isolated from a sponge called Phakellia stelliderma showed mild cytotoxicity against mouse leukemia cells. Other pigments with medical applications include scytonemin, topsentins, and debromohymenialdisine, which have several lead compounds in the fields of inflammation, rheumatoid arthritis and osteoarthritis respectively. There is evidence that topsentins are potent mediators of immunogenic inflammation, and topsentin and scytonemin are potent inhibitors of neurogenic inflammation.
Uses
Pigments may be extracted and used as dyes.
Pigments (such as astaxanthin and lycopene) are used as dietary supplements.
See also
Photosynthetic pigment
Human skin color
References
External links
Biological pigments
Warning coloration | Biological pigment | Biology | 5,005 |
204,118 | https://en.wikipedia.org/wiki/Fortification | A fortification (also called a fort, fortress, fastness, or stronghold) is a military construction designed for the defense of territories in warfare, and is used to establish rule in a region during peacetime. The term is derived from Latin ("strong") and ("to make").
From very early history to modern times, defensive walls have often been necessary for cities to survive in an ever-changing world of invasion and conquest. Some settlements in the Indus Valley Civilization were the first small cities to be fortified. In ancient Greece, large cyclopean stone walls fitted without mortar had been built in Mycenaean Greece, such as the ancient site of Mycenae. A Greek phrourion was a fortified collection of buildings used as a military garrison, and is the equivalent of the Roman castellum or fortress. These constructions mainly served the purpose of a watch tower, to guard certain roads, passes, and borders. Though smaller than a real fortress, they acted as a border guard rather than a real strongpoint to watch and maintain the border.
The art of setting out a military camp or constructing a fortification traditionally has been called "castrametation" since the time of the Roman legions. Fortification is usually divided into two branches: permanent fortification and field fortification. There is also an intermediate branch known as semipermanent fortification. Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that they are a residence of a monarch or noble and command a specific defensive territory.
Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles.
Medieval-style fortifications were largely made obsolete by the arrival of cannons in the 14th century. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so the walls were sunk into ditches fronted by earth slopes to improve protection.
The arrival of explosive shells in the 19th century led to another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives, and the intricate arrangements of bastions, flanking batteries and the carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells. Steel-and-concrete fortifications were common during the 19th and early 20th centuries. The advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations.
Nomenclature
Many United States Army installations are known as forts, although they are not always fortified. During the pioneering era of North America, many outposts on the frontiers, even non-military outposts, were referred to generically as forts. Larger military installations may be called fortresses; smaller ones were once known as fortalices. The word fortification can refer to the practice of improving an area's defense with defensive works. City walls are fortifications but are not necessarily called fortresses.
The art of setting out a military camp or constructing a fortification traditionally has been called castrametation since the time of the Roman legions. Laying siege to a fortification and of destroying it is commonly called siegecraft or siege warfare and is formally known as poliorcetics. In some texts, this latter term also applies to the art of building a fortification.
Fortification is usually divided into two branches: permanent fortification and field fortification. Permanent fortifications are erected at leisure, with all the resources that a state can supply of constructive and mechanical skill, and are built of enduring materials. Field fortifications—for example breastworks—and often known as fieldworks or earthworks, are extemporized by troops in the field, perhaps assisted by workers and tools and with materials that do not require much preparation, such as soil, brushwood, and light timber, or sandbags (see sangar). An example of field fortification was the construction of Fort Necessity by George Washington in 1754.
There is also an intermediate branch known as semipermanent fortification. This is employed when in the course of a campaign it becomes desirable to protect some locality with the best imitation of permanent defenses that can be made in a short time, given ample resources and skilled civilian workers. An example of this is the construction of Roman forts in England and in other Roman territories where camps were set up with the intention of staying for some time, but not permanently.
Castles are fortifications which are regarded as being distinct from the generic fort or fortress in that it describes a residence of a monarch or noble and commands a specific defensive territory. An example of this is the massive medieval castle of Carcassonne.
History
Early uses
Defensive fences for protecting humans and domestic animals against predators were used long before the appearance of writing and began "perhaps with primitive man blocking the entrances of his caves for security from large carnivores".
From very early history to modern times, walls have been a necessity for many cities. Amnya Fort in western Siberia has been described by archeologists as one of the oldest known fortified settlements, as well as the northernmost Stone Age fort. In Bulgaria, near the town of Provadia a walled fortified settlement today called Solnitsata starting from 4700 BC had a diameter of about , was home to 350 people living in two-story houses, and was encircled by a fortified wall. The huge walls around the settlement, which were built very tall and with stone blocks which are high and thick, make it one of the earliest walled settlements in Europe but it is younger than the walled town of Sesklo in Greece from 6800 BC.
Uruk in ancient Sumer (Mesopotamia) is one of the world's oldest known walled cities. The Ancient Egyptians also built fortresses on the frontiers of the Nile Valley to protect against invaders from adjacent territories, as well as circle-shaped mud brick walls around their cities. Many of the fortifications of the ancient world were built with mud brick, often leaving them no more than mounds of dirt for today's archeologists. A massive prehistoric stone wall surrounded the ancient temple of Ness of Brodgar, dating to around 3200 BC, in Scotland. Named the "Great Wall of Brodgar", it was thick and tall. The wall had some symbolic or ritualistic function. The Assyrians deployed large labor forces to build new palaces, temples and defensive walls.
Bronze Age Europe
In Bronze Age Malta, some settlements also began to be fortified. The most notable surviving example is Borġ in-Nadur, where a bastion built in around 1500 BC was found. Exceptions were few—notably, ancient Sparta and ancient Rome did not have walls for a long time, choosing to rely on their militaries for defense instead. Initially, these fortifications were simple constructions of wood and earth, which were later replaced by mixed constructions of stones piled on top of each other without mortar. In ancient Greece, large stone walls had been built in Mycenaean Greece, such as the ancient site of Mycenae (famous for the huge stone blocks of its 'cyclopean' walls). In classical era Greece, the city of Athens built two parallel stone walls, called the Long Walls, that reached their fortified seaport at Piraeus a few miles away.
In Central Europe, the Celts built large fortified settlements known as oppida, whose walls seem partially influenced by those built in the Mediterranean. The fortifications were continuously expanded and improved. Around 600 BC, in Heuneburg, Germany, forts were constructed with a limestone foundation supporting a mudbrick wall approximately 4 meters tall, probably topped by a roofed walkway, thus reaching a total height of 6 meters. The wall was clad with lime plaster, which was regularly renewed. Towers protruded outwards from it.
The Oppidum of Manching (German: Oppidum von Manching) was a large Celtic proto-urban or city-like settlement at modern-day Manching (near Ingolstadt), Bavaria (Germany). The settlement was founded in the 3rd century BC and existed until . It reached its largest extent during the late La Tène period (late 2nd century BC), when it had a size of 380 hectares. At that time, 5,000 to 10,000 people lived within its 7.2 km long walls. The oppidum of Bibracte is another example of a Gaulish fortified settlement.
Bronze and Iron Age Near East
The term casemate wall is used in the archeology of Israel and the wider Near East, having the meaning of a double wall protecting a city or fortress, with transverse walls separating the space between the walls into chambers. These could be used as such, for storage or residential purposes, or could be filled with soil and rocks during siege in order to raise the resistance of the outer wall against battering rams. The casemate wall was originally thought to have been introduced to the region by the Hittites, but this has been disproved by the discovery of examples predating their arrival, the earliest being at Ti'inik (Taanach) where such a wall has been dated to the 16th century BC. Casemate walls became a common type of fortification in the Southern Levant between the Middle Bronze Age (MB) and Iron Age II, being more numerous during the Iron Age and peaking in Iron Age II (10th–6th century BC). However, casemate walls had begun to be replaced by sturdier solid walls by the 9th century BC, probably due to the development of more effective battering rams by the Neo-Assyrian Empire. Casemate walls could surround an entire settlement, but most only protected part of it. The three different types included freestanding casemate walls, then integrated ones where the inner wall was part of the outer buildings of the settlement, and finally filled casemate walls, where the rooms between the walls were filled with soil right away, allowing for a quick, but nevertheless stable construction of particularly high walls.
Ancient Rome
The Romans fortified their cities with massive, mortar-bound stone walls. The most famous of these are the largely extant Aurelian Walls of Rome and the Theodosian Walls of Constantinople, together with partial remains elsewhere. These are mostly city gates, like the Porta Nigra in Trier or Newport Arch in Lincoln.
Hadrian's Wall was built by the Roman Empire across the width of what is now northern England following a visit by Roman Emperor Hadrian (AD 76–138) in AD 122.
Indian subcontinent
A number of forts dating from the Later Stone Age to the British Raj are found in the mainland Indian subcontinent (modern day India, Pakistan, Bangladesh and Nepal). "Fort" is the word used in India for all old fortifications. Numerous Indus Valley Civilization sites exhibit evidence of fortifications. By about 3500 BC, hundreds of small farming villages dotted the Indus floodplain. Many of these settlements had fortifications and planned streets. The stone and mud brick houses of Kot Diji were clustered behind massive stone flood dykes and defensive walls, for neighboring communities bickered constantly about the control of prime agricultural land. The fortification varies by site. While Dholavira has stone-built fortification walls, Harappa is fortified using baked bricks; sites such as Kalibangan exhibit mudbrick fortifications with bastions, and Lothal has a quadrangular fortified layout. Evidence also suggests fortifications at Mohenjo-daro. Even a small town—for instance, Kotada Bhadli, exhibiting sophisticated fortification-like bastions—shows that nearly all major and minor towns of the Indus Valley Civilization were fortified. Forts also appeared in urban cities of the Gangetic valley during the second urbanization period between 600 and 200 BC, and as many as 15 fortification sites have been identified by archeologists throughout the Gangetic valley, such as Kaushambi, Mahasthangarh, Pataliputra, Mathura, Ahichchhatra, Rajgir, and Lauria Nandangarh. The earliest Mauryan period brick fortification occurs in one of the stupa mounds of Lauria Nandangarh, which is 1.6 km in perimeter and oval in plan and encloses a habitation area. Mundigak in present-day south-east Afghanistan has defensive walls and square bastions of sun-dried bricks.
India currently has over 180 forts, with the state of Maharashtra alone having over 70 forts, which are also known as durg, many of them built by Shivaji, founder of the Maratha Empire.
A large majority of forts in India are in North India. The most notable forts are the Red Fort at Old Delhi, the Red Fort at Agra, the Chittor Fort and Mehrangarh Fort in Rajasthan, the Ranthambhor Fort, Amer Fort and Jaisalmer Fort also in Rajasthan and Gwalior Fort in Madhya Pradesh.
Arthashastra, the Indian treatise on military strategy describes six major types of forts differentiated by their major modes of defenses.
Sri Lanka
Forts in Sri Lanka date back thousands of years, with many being built by Sri Lankan kings. These include several walled cities. With the outset of colonial rule in the Indian Ocean, Sri Lanka was occupied by several major colonial empires that from time to time became the dominant power in the Indian Ocean. The colonists built several western-style forts, mostly in and around the coast of the island. The first to build colonial forts in Sri Lanka were the Portuguese; these forts were captured and later expanded by the Dutch. The British occupied these Dutch forts during the Napoleonic wars. Most of the colonial forts were garrisoned up until the early 20th century. The coastal forts had coastal artillery manned by the Ceylon Garrison Artillery during the two world wars.
Most of these were abandoned by the military but retained civil administrative officers, while others retained military garrisons, which were more administrative than operational. Some were reoccupied by military units with the escalation of the Sri Lankan Civil War; Jaffna fort, for example, came under siege several times.
China
Large tempered earth (i.e. rammed earth) walls were built in ancient China since the Shang dynasty (–1050 BC); the capital at ancient Ao had enormous walls built in this fashion (see siege for more info). Although stone walls were built in China during the Warring States (481–221 BC), mass conversion to stone architecture did not begin in earnest until the Tang dynasty (618–907 AD). The Great Wall of China had been built since the Qin dynasty (221–207 BC), although its present form was mostly an engineering feat and remodeling of the Ming dynasty (1368–1644 AD).
In addition to the Great Wall, a number of Chinese cities also employed the use of defensive walls to defend their cities. Notable Chinese city walls include the city walls of Hangzhou, Nanking, the Old City of Shanghai, Suzhou, Xi'an and the walled villages of Hong Kong. The famous walls of the Forbidden City in Beijing were established in the early 15th century by the Yongle Emperor. The Forbidden City made up the inner portion of the Beijing city fortifications.
Philippines
Spanish colonial fortifications
During the Spanish Era several forts and outposts were built throughout the archipelago. Most notable is Intramuros, the old walled city of Manila located along the southern bank of the Pasig River. The historic city was home to centuries-old churches, schools, convents, government buildings and residences, the best collection of Spanish colonial architecture before much of it was destroyed by the bombs of World War II. Of all the buildings within the 67-acre city, only one building, the San Agustin Church, survived the war.
Partial listing of Spanish forts:
Intramuros, Manila
Cuartel de Santo Domingo, Santa Rosa, Laguna
Fuerza de Cuyo, Cuyo, Palawan
Fuerza de Cagayancillo, Cagayancillo, Palawan
Real Fuerza de Nuestra Señora del Pilar de Zaragoza, Zamboanga City
Fuerza de San Felipe, Cavite City
Fuerza de San Pedro, Cebu
Fuerte dela Concepcion y del Triunfo, Ozamiz, Misamis Occidental
Fuerza de San Antonio Abad, Manila
Fuerza de Pikit, Pikit, Cotabato
Fuerza de Santiago, Romblon, Romblon
Fuerza de Jolo, Jolo, Sulu
Fuerza de Masbate, Masbate
Fuerza de Bongabong, Bongabong, Oriental Mindoro
Cotta de Dapitan, Dapitan, Zamboanga del Norte
Fuerte de Alfonso XII, Tukuran, Zamboanga del Sur
Fuerza de Bacolod, Bacolod, Lanao del Norte
Guinsiliban Watchtower, Guinsiliban, Camiguin
Laguindingan Watchtower, Laguindingan, Misamis Oriental
Kutang San Diego, Gumaca, Quezon
Baluarte Luna, Luna, La Union
Local fortifications
The Ivatan people of the northern islands of Batanes built their so-called idjang on hills and elevated areas to protect themselves during times of war. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived.
The Igorots built forts made of stone walls that averaged several meters in width and about two to three times the width in height around 2000 BC.
The Muslim Filipinos of the south built strong fortresses called kota or moong to protect their communities. Usually, many of the occupants of these kotas were entire families rather than just warriors. Lords often had their own kotas to assert their right to rule; a kota served not only as a military installation but as a palace for the local lord. It is said that at the height of the Maguindanao Sultanate's power, they blanketed the areas around Western Mindanao with kotas and other fortifications to block the Spanish advance into the region. These kotas were usually made of stone and bamboo or other light materials and surrounded by trench networks. As a result, some of these kotas were easily burned or destroyed. With further Spanish campaigns in the region, the sultanate was subdued and a majority of the kotas were dismantled or destroyed. Kotas were not only used by the Muslims as defense against the Spaniards and other foreigners; renegades and rebels also built fortifications in defiance of other chiefs in the area. During the American occupation, rebels built strongholds, and the datus, rajahs, or sultans often built and reinforced their kotas in a desperate bid to maintain rule over their subjects and their land. Many of these forts were also destroyed by American expeditions; as a result, very few kotas still stand to this day.
Notable kotas:
Kota Selurong: an outpost of the Bruneian Empire in Luzon, later became the City of Manila.
Kuta Wato/Kota Bato: Literally translates to "stone fort" the first known stone fortification in the country, its ruins exist as the "Kutawato Cave Complex"
Kota Sug/Jolo: The capital and seat of the Sultanate of Sulu. When it was occupied by the Spaniards in the 1870s they converted the kota into the world's smallest walled city.
Pre-Islamic Arabia
During Muhammad's lifetime
During Muhammad's era in Arabia, many tribes made use of fortifications. In the Battle of the Trench, the largely outnumbered defenders of Medina, mainly Muslims led by Islamic prophet Muhammad, dug a trench, which together with Medina's natural fortifications, rendered the confederate cavalry (consisting of horses and camels) useless, locking the two sides in a stalemate. Hoping to make several attacks at once, the confederates persuaded the Medina-allied Banu Qurayza to attack the city from the south. However, Muhammad's diplomacy derailed the negotiations, and broke up the confederacy against him. The well-organized defenders, the sinking of confederate morale, and poor weather conditions caused the siege to end in a fiasco.
During the Siege of Ta'if in January 630, Muhammad ordered his followers to attack enemies who fled from the Battle of Hunayn and sought refuge in the fortress of Taif.
Islamic world
Africa
The entire city of Kerma in Nubia (present day Sudan) was encompassed by fortified walls surrounded by a ditch. Archeology has revealed various Bronze Age bastions and foundations constructed of stone together with either baked or unfired brick.
The walls of Benin are described as the world's second longest man-made structure, as well as the most extensive earthwork in the world, by the Guinness Book of Records, 1974. The walls may have been constructed between the thirteenth and mid-fifteenth centuries CE, or during the first millennium CE. Strong citadels were also built in other areas of Africa. Yorubaland, for example, had several sites surrounded by the full range of earthworks and ramparts seen elsewhere, sited on ground that improved their defensive potential, such as hills and ridges. Yoruba fortifications were often protected with a double wall of trenches and ramparts, and in the Congo forests concealed ditches and paths, along with the main works, often bristled with rows of sharpened stakes. Inner defenses were laid out to blunt an enemy penetration with a maze of defensive walls allowing for entrapment and crossfire on opposing forces.
A military tactic of the Ashanti was to create powerful log stockades at key points. This was employed in later wars against the British to block British advances. Some of these fortifications were over a hundred yards long, with heavy parallel tree trunks. They were impervious to destruction by artillery fire. Behind these stockades, numerous Ashanti soldiers were mobilized to check enemy movement. While formidable in construction, many of these strongpoints failed because Ashanti guns, gunpowder and bullets were poor, and provided little sustained killing power in defense. Time and time again British troops overcame or bypassed the stockades by mounting old-fashioned bayonet charges, after laying down some covering fire.
Defensive works were of importance in the tropical African kingdoms. In the Kingdom of Kongo, field fortifications were characterized by trenches and low earthen embankments. Ironically, such strongpoints sometimes held up much better against European cannon than taller, more imposing structures.
Medieval Europe
Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles. These cities were only rarely protected by simple stone walls and more usually by a combination of both walls and ditches. From the 12th century, hundreds of settlements of all sizes were founded all across Europe, which very often obtained the right of fortification soon afterward.
The founding of urban centers was an important means of territorial expansion and many cities, especially in eastern Europe, were founded precisely for this purpose during the period of Ostsiedlung. These cities are easy to recognize due to their regular layout and large market spaces. The fortifications of these settlements were continuously improved to reflect the current level of military development. During the Renaissance era, the Venetian Republic raised great walls around cities, and the finest examples, among others, are in Nicosia (Cyprus), Rocca di Manerba del Garda (Lombardy), and Palmanova (Italy), or Dubrovnik (Croatia), which proved to be futile against attacks but still stand to this day. Unlike the Venetians, the Ottomans used to build smaller fortifications but in greater numbers, and only rarely fortified entire settlements such as Počitelj, Vratnik, and Jajce in Bosnia.
Development after introduction of firearms
Medieval-style fortifications were largely made obsolete by the arrival of cannons on the 14th century battlefield. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so were sunk into ditches fronted by earth slopes.
This placed a heavy emphasis on the geometry of the fortification, so as to give the defensive cannonry interlocking fields of fire covering all approaches to the lower and thus more vulnerable walls.
The evolution of this new style of fortification can be seen in transitional forts such as Sarzanello in North West Italy which was built between 1492 and 1502. Sarzanello consists of both crenellated walls with towers typical of the medieval period but also has a ravelin like angular gun platform screening one of the curtain walls which is protected from flanking fire from the towers of the main part of the fort. Another example is the fortifications of Rhodes which were frozen in 1522 so that Rhodes is the only European walled town that still shows the transition between the classical medieval fortification and the modern ones. A manual about the construction of fortification was published by Giovanni Battista Zanchi in 1554.
Fortifications also extended in depth, with protected batteries for defensive cannonry, to allow them to engage attacking cannons to keep them at a distance and prevent them from bearing directly on the vulnerable walls.
The result was star shaped fortifications with tier upon tier of hornworks and bastions, of which Fort Bourtange is an excellent example. There are also extensive fortifications from this era in the Nordic states and in Britain, the fortifications of Berwick-upon-Tweed and the harbor archipelago of Suomenlinna at Helsinki being fine examples.
19th century
During the 18th century, it was found that the continuous enceinte, or main defensive enclosure of a bastion fortress, could not be made large enough to accommodate the enormous field armies which were increasingly being employed in Europe; neither could the defenses be constructed far enough away from the fortress town to protect the inhabitants from bombardment by the besiegers, the range of whose guns was steadily increasing as better manufactured weapons were introduced. Therefore, beginning with the refortification of the Prussian fortress cities of Koblenz and Köln after 1815, the principle of the ring fortress or girdle fortress was used: forts, each several hundred meters out from the original enceinte, were carefully sited so as to make best use of the terrain and to be capable of mutual support with neighboring forts. Gone were citadels surrounding towns: forts were to be moved some distance away from cities to keep the enemy at a distance, so that their artillery could not bombard the urbanized settlements. Thereafter, a ring of forts was to be built at a spacing that would allow them to effectively cover the intervals between them.
The arrival of explosive shells in the 19th century led to yet another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives and the intricate arrangements of bastions, flanking batteries and the carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells.
Worse, the large open ditches surrounding forts of this type were an integral part of the defensive scheme, as was the covered way at the edge of the counterscarp. The ditch was extremely vulnerable to bombardment with explosive shells.
In response, military engineers evolved the polygonal style of fortification. The ditch became deep and vertically sided, cut directly into the native rock or soil, laid out as a series of straight lines creating the central fortified area that gives this style of fortification its name.
Wide enough to be an impassable barrier for attacking troops but narrow enough to be a difficult target for enemy shellfire, the ditch was swept by fire from defensive blockhouses set in the ditch as well as firing positions cut into the outer face of the ditch itself.
The profile of the fort became very low indeed, surrounded, beyond the caponier-swept ditch, by a gently sloping open area so as to eliminate possible cover for enemy forces, while the fort itself presented a minimal target for enemy fire. The entry point became a sunken gatehouse in the inner face of the ditch, reached by a curving ramp that gave access to the gate via a rolling bridge that could be withdrawn into the gatehouse.
Much of the fort moved underground. Deep passages and tunnel networks now connected the blockhouses and firing points in the ditch to the fort proper, with magazines and machine rooms deep under the surface. The guns, however, were often mounted in open emplacements and protected only by a parapet; both in order to keep a lower profile and also because experience with guns in closed casemates had seen them put out of action by rubble as their own casemates were collapsed around them.
The new forts abandoned the principle of the bastion, which had also been made obsolete by advances in arms. The outline was a much-simplified polygon, surrounded by a ditch. These forts, built in masonry and shaped stone, were designed to shelter their garrison against bombardment. One organizing feature of the new system involved the construction of two defensive curtains: an outer line of forts, backed by an inner ring or line at critical points of terrain or junctions (see, for example, Séré de Rivières system in France).
Traditional fortification however continued to be applied by European armies engaged in warfare in colonies established in Africa against lightly armed attackers from amongst the indigenous population. A relatively small number of defenders in a fort impervious to primitive weaponry could hold out against high odds, the only constraint being the supply of ammunition.
20th and 21st centuries
Steel-and-concrete fortifications were common during the 19th and early 20th centuries. However, the advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations. In the 1930s and 1940s, some fortifications were built with designs taking into consideration the new threat of aerial warfare, such as Fort Campbell in Malta. Despite this, only underground bunkers are still able to provide some protection in modern wars. Many historical fortifications were demolished during the modern age, but a considerable number survive as popular tourist destinations and prominent local landmarks today.
The downfall of permanent fortifications had two causes:
The ever-escalating power, speed, and reach of artillery and airpower meant that almost any target that could be located could be destroyed if sufficient force were massed against it. As such, the more resources a defender devoted to reinforcing a fortification, the more combat power that fortification justified being devoted to destroying it, if the fortification's destruction was demanded by an attacker's strategy. From World War II, bunker busters were used against fortifications. By 1950, nuclear weapons were capable of destroying entire cities and producing dangerous radiation. This led to the creation of civilian nuclear air raid shelters.
The second weakness of permanent fortification was its very permanency. Because of this, it was often easier to go around a fortification and, with the rise of mobile warfare in the beginning of World War II, this became a viable offensive choice. When a defensive line was too extensive to be entirely bypassed, massive offensive might could be massed against one part of the line allowing a breakthrough, after which the rest of the line could be bypassed. Such was the fate of the many defensive lines built before and during World War II, such as the Siegfried Line, the Stalin Line, and the Atlantic Wall. This was not the case with the Maginot Line; it was designed to force the Germans to invade other countries (Belgium or Switzerland) to go around it, and was successful in that sense.
Instead, field fortification rose to dominate defensive action. Unlike the trench warfare which dominated World War I, these defenses were more temporary in nature. This was an advantage because, being less extensive, they formed a less obvious target for enemy force to be directed against.
If sufficient power were massed against one point to penetrate it, the forces based there could be withdrawn and the line could be reestablished relatively quickly. Instead of a supposedly impenetrable defensive line, such fortifications emphasized defense in depth, so that as defenders were forced to pull back or were overrun, the lines of defenders behind them could take over the defense.
Because the mobile offensives practiced by both sides usually focused on avoiding the strongest points of a defensive line, these defenses were usually relatively thin and spread along the length of a line. The defense was usually not equally strong throughout, however.
The strength of the defensive line in an area varied according to how rapidly an attacking force could progress in the terrain that was being defended—both the terrain the defensive line was built on and the ground behind it that an attacker might hope to break out into. This was both for reasons of the strategic value of the ground, and its defensive value.
This was possible because while offensive tactics were focused on mobility, so were defensive tactics. The dug-in defenses consisted primarily of infantry and antitank guns. Defending tanks and tank destroyers would be concentrated in mobile brigades behind the defensive line. If a major offensive was launched against a point in the line, mobile reinforcements would be sent to reinforce that part of the line that was in danger of failing.
Thus the defensive line could be relatively thin because the bulk of the fighting power of the defenders was not concentrated in the line itself but rather in the mobile reserves. A notable exception to this rule was seen in the defensive lines at the Battle of Kursk during World War II, where German forces deliberately attacked the strongest part of the Soviet defenses, seeking to crush them utterly.
The terrain that was being defended was of primary importance because open terrain that tanks could move over quickly made possible rapid advances into the defenders' rear areas that were very dangerous to the defenders. Thus such terrain had to be defended at all costs.
In addition, since in theory the defensive line only had to hold out long enough for mobile reserves to reinforce it, terrain that did not permit rapid advance could be held more weakly because the enemy's advance into it would be slower, giving the defenders more time to reinforce that point in the line. For example, the Battle of the Hurtgen Forest in Germany during the closing stages of World War II is an excellent example of how difficult terrain could be used to the defenders' advantage.
After World War II, intercontinental ballistic missiles capable of reaching much of the way around the world were developed, so speed became an essential characteristic of the strongest militaries and defenses. Missile silos were developed, so missiles could be fired from the middle of a country and hit cities and targets in another country, and airplanes (and aircraft carriers) became major defenses and offensive weapons (leading to an expansion of the use of airports and airstrips as fortifications). Mobile defenses could be had underwater, too, in the form of ballistic missile submarines capable of firing submarine launched ballistic missiles. Some bunkers in the mid to late 20th century came to be buried deep inside mountains and prominent rocks, such as Gibraltar and the Cheyenne Mountain Complex. On the ground itself, minefields have been used as hidden defenses in modern warfare, often remaining long after the wars that produced them have ended.
Demilitarized zones along borders are arguably another type of fortification, although a passive kind, providing a buffer between potentially hostile militaries.
Military airfields
Military airfields offer a fixed "target rich" environment for even relatively small enemy forces, using hit-and-run tactics by ground forces, stand-off attacks (mortars and rockets), air attacks, or ballistic missiles. Key targets—aircraft, munitions, fuel, and vital technical personnel—can be protected by fortifications.
Aircraft can be protected by revetments, hesco barriers, hardened aircraft shelters and underground hangars which will protect from many types of attack. Larger aircraft types tend to be based outside the operational theater.
Munition storage follows safety rules which use fortifications (bunkers and bunds) to provide protection against accident and chain reactions (sympathetic detonations). Weapons for rearming aircraft can be stored in small fortified expense stores closer to the aircraft. At Biên Hòa, South Vietnam, on the morning of May 16, 1965, as aircraft were being refueled and armed, a chain reaction explosion destroyed 13 aircraft, killed 34 personnel, and injured over 100; this, along with damage and losses of aircraft to enemy attack (by both infiltration and stand-off attacks), led to the construction of revetments and shelters to protect aircraft throughout South Vietnam.
Aircrew and ground personnel will need protection during enemy attacks, and fortifications range from culvert-section "duck and cover" shelters to permanent air raid shelters. Soft locations with high personnel densities, such as accommodation and messing facilities, can be given limited protection by placing prefabricated concrete walls or barriers around them; examples of such barriers are Jersey Barriers, T Barriers and Splinter Protection Units (SPUs). Older fortifications may also prove useful, such as the old 'Yugo' pyramid shelters built in the 1980s, which were used by US personnel on 8 January 2020 when Iran fired 11 ballistic missiles at Ayn al-Asad Airbase in Iraq.
Fuel is volatile and has to comply with rules for storage which provide protection against accidents. Fuel in underground bulk fuel installations is well protected though valves and controls are vulnerable to enemy action. Above-ground tanks can be susceptible to attack.
Ground support equipment will need to be protected by fortifications to be usable after an enemy attack.
Permanent (concrete) guard fortifications are safer, stronger, last longer and are more cost-effective than sandbag fortifications. Prefabricated positions can be made from concrete culvert sections. The British Yarnold Bunker is made from sections of a concrete pipe.
Guard towers provide an increased field of view but a lower level of protection.
Dispersal and camouflage of assets can supplement fortifications against some forms of airfield attack.
Counterinsurgency
Just as in colonial periods, comparatively obsolete fortifications are still used for low intensity conflicts. Such fortifications range in size from small patrol bases or forward operating bases up to huge airbases such as Camp Bastion/Leatherneck in Afghanistan. Much as in the 18th and 19th centuries, because the enemy is not a powerful military force with the heavy weaponry required to destroy fortifications, walls of gabion, sandbag or even simple mud can provide protection against small arms and antitank weapons—although such fortifications are still vulnerable to mortar and artillery fire.
Forts
Forts in modern American usage often refer to space set aside by governments for a permanent military facility; these often do not have any actual fortifications, and can have specializations (military barracks, administration, medical facilities, or intelligence).
However, there are some modern fortifications that are referred to as forts. These are typically small semipermanent fortifications. In urban combat, they are built by upgrading existing structures such as houses or public buildings. In field warfare they are often log, sandbag or gabion type construction.
Such forts are typically only used in low-level conflicts, such as counterinsurgency conflicts or very low-level conventional conflicts, such as the Indonesia–Malaysia confrontation, which saw the use of log forts for use by forward platoons and companies. The reason for this is that static above-ground forts cannot survive modern direct or indirect fire weapons larger than mortars, RPGs and small arms.
Prisons and others
Fortifications designed to keep the inhabitants of a facility in rather than attacker out can also be found, in prisons, concentration camps, and other such facilities. Those are covered in other articles, as most prisons and concentration camps are not primarily military forts (although forts, camps, and garrison towns have been used as prisons and/or concentration camps; such as Theresienstadt, Guantanamo Bay detention camp and the Tower of London for example).
Field fortifications
Notes
References
Bibliography
July, Robert. Pre-Colonial Africa. Charles Scribner, 1975.
Murray, Nicholas. "The Development of Fortifications", in The Encyclopedia of War, Gordon Martel (ed.). Wiley-Blackwell, 2011.
Murray, Nicholas. The Rocky Road to the Great War: The Evolution of Trench Warfare to 1914. Potomac Books Inc. (an imprint of the University of Nebraska Press), 2013.
Osadolor, Osarhieme Benson. "The Military System of Benin Kingdom 1440–1897" (UD). Hamburg University, 2001.
Thornton, John Kelly. Warfare in Atlantic Africa, 1500–1800. Routledge, 1999.
External links
Fortress Study Group
ICOFORT
Engineering barrages
Military strategy
Military installations
Forts | Fortification | Engineering | 8,212 |
78,160,954 | https://en.wikipedia.org/wiki/Ana%20Guadalupe | Ana R. Guadalupe Quiñones (born 1956) is a Puerto Rican chemist and academic administrator who served as the chancellor of the University of Puerto Rico, Río Piedras Campus from 2009 to 2013. She researches electrochemistry, specifically the development of electrochemical sensors, biosensors, and electrocatalysts for applications in bioelectrochemistry and environmental monitoring.
Early life and education
Guadalupe was born in 1956. She attended the University of Puerto Rico, Río Piedras Campus (UPR-RP), earning a B.Sc. in chemistry in 1979 and an M.Sc. in analytical chemistry in 1984. Her master's thesis was titled Polymer-Modified Electrodes: Electrochemical Characterization and Analytical Applications for the Extraction of Metal Species in Aqueous Solutions. Between 1979 and 1981, Guadalupe worked as a laboratory instructor at the Universidad del Sagrado Corazón. She was a teaching assistant at UPR-RP from 1981 to 1984.
In 1987, Guadalupe completed a Ph.D. in analytical chemistry, specializing in electrochemistry, at Cornell University. Her doctoral research focused on the electrochemical properties of chemically modified electrodes. Her dissertation was titled, Polymer Modified Electrodes: Electrochemical Characterization and Applications. Héctor D. Abruña was her doctoral advisor. She completed a postdoctoral fellowship in bioelectrochemistry at the University of North Carolina at Chapel Hill from 1987 to 1988.
Career
In 1988, Guadalupe became an assistant professor of chemistry at UPR-RP. Her research during this period focused on electrochemical processes, specifically the study of redox reactions and the development of electrochemical sensors. In 1989, she joined a collaborative project on the use of polymer-modified electrodes in bioelectrochemical applications, which laid the groundwork for her later research on biosensors. From 1988 to 1992, she was also an instructor at Interamerican University of Puerto Rico. She investigated sensor technology, working on the electrocatalytic oxidation of malate and lactate using ruthenium complexes, and the development of biosensors for the detection of compounds like nicotinamide adenine dinucleotide (NADH).
Guadalupe was promoted to associate professor in 1992 at UPR-RP, where she continued her work on electrochemical sensors and biosensors. Her research during this time included projects on polymer-supported electrodes, the controlled release of insulin, and the use of ruthenium complexes as molecular probes for enzyme-coupled reactions. By the mid-1990s, she was recognized for her expertise in bioelectrochemistry and had secured grants from the National Institutes of Health (NIH) and the National Science Foundation (NSF). In 1994, Guadalupe was appointed coordinator of the UPR-RP chemistry graduate program, a position she held until 1998. During this period, she supervised a number of graduate theses and expanded her research into areas such as the electrochemical production of nanoparticles and their application in catalysis.
In 1998, Guadalupe was promoted to professor of chemistry. Her work on electrochemical materials and nanostructured polymers garnered further recognition, and she began collaborating on interdisciplinary projects involving materials science and sensor technology. She also served as president of the Puerto Rico Science Teachers Association from 1991 to 1993 and was a member of the American Chemical Society, where she held various leadership roles, including president of the Puerto Rico chapter in 1998. From 2001 to 2009, she served as dean of graduate studies and research at UPR-RP, where she was responsible for overseeing the university's research initiatives and securing funding for projects in science and technology. During this time, she continued her research on electrochemical sensors and biosensors, contributing to advances in detecting environmental pollutants and pathogens like Salmonella.
In October 2009, Guadalupe was appointed acting chancellor of the UPR-RP. In this role, she provided leadership during a transitional period for the university, focusing on strengthening research and academic programs. Her tenure coincided with student protests from 2010 to 2011, sparked by an annual fee imposed by the University of Puerto Rico (UPR) board of trustees. Guadalupe called for police intervention during these protests, leading to violent confrontations and arrests, which drew widespread criticism from the student body. In March 2011, she was physically and verbally attacked by students, further intensifying the tensions. She resigned in May 2013 following a reconfiguration of the university's board of trustees. Her research in the period included the development of electrochemical materials and biosensors for environmental monitoring and public health applications. She also worked on projects involving the electrochemical characterization of porous silicon and the design of sensors for detecting waterborne diseases.
In 2021, Guadalupe was considered for the interim presidency of UPR, but her candidacy was met with opposition from students due to her controversial role during the protests. She was ultimately not selected as a finalist.
References
Living people
1956 births
Place of birth missing (living people)
Puerto Rican women scientists
21st-century Puerto Rican women educators
21st-century American women scientists
21st-century American chemists
American women chemists
University of Puerto Rico, Río Piedras Campus alumni
University of Puerto Rico faculty
American women academic administrators
Puerto Rican academic administrators
Analytical chemists
Cornell University alumni | Ana Guadalupe | Chemistry | 1,088 |
66,646,643 | https://en.wikipedia.org/wiki/Jan%20Trlifaj | Jan Trlifaj (born 30 December 1954) is a professor of Mathematics at Charles University whose research interests include Commutative algebra, Homological algebra and Representation theory.
Career and research
Jan Trlifaj studied mathematics at the Faculty of Mathematics and Physics, Charles University, from which he received an MSc in 1979 and a PhD in 1989 under Ladislav Bican; he was appointed Professor of Mathematics in the field of algebra and number theory in 2009.
In the academic year 1994/95 he held a Royal Society Postdoctoral Fellowship at the Department of Mathematics, University of Manchester. In Fall 1998 he held a J. W. Fulbright Scholarship at the Department of Mathematics, University of California, Irvine. During Fall 2002 and Fall 2006 he was a visiting professor at the Centre de Recerca Matemàtica, Barcelona.
Since 1990, he has completed numerous short term visiting appointments and given over 100 invited lectures at conferences and seminars worldwide.
Since 2017, he has been a Fellow of the Learned Society of the Czech Republic.
He served on the organizing committee of the 18th International Conference on Representations of Algebras (ICRA 2018), held for 250 participants from 34 countries in August 2018 in Prague, Czech Republic.
He was elected a Fellow of the American Mathematical Society (AMS) in 2020, for contributions to homological algebra and tilting theory for non-finitely generated modules.
He serves as a member of the science board for the Neuron Prize, which is awarded to the best Czech scientists by the Neuron Endowment Fund.
Selected publications
Papers
1994:
1996:
2001: (with Paul C. Eklof), (with Saharon Shelah)
2007: (with Jan Šťovíček)
2012: (with Dolors Herbera), (with Sergio Estrada, Pedro A. Guil Asensio, and Mike Prest)
2014: (with Lidia Angeleri Hügel, David Pospíšil, and Jan Šťovíček)
2016: (with Alexander Slávik)
Books
2006, 2012: Approximations and Endomorphism Algebras of Modules, de Gruyter Expositions in Mathematics 41, Vol. 1 - Approximations, Vol. 2 - Predictions, W. de Gruyter Berlin - Boston, xxviii + 972 pp. (with Rüdiger Göbel)
Awards and distinctions
Prize of the Dean of MFF for the best monograph 2006
MFF UK Silver medal at the Sexagennial anniversary
Fellow of the American Mathematical Society, 2020
References
External links
Personal web page
20th-century Czech mathematicians
21st-century Czech mathematicians
Algebraists
Fellows of the American Mathematical Society
Charles University alumni
Living people
1954 births
Czechoslovak mathematicians | Jan Trlifaj | Mathematics | 530 |
71,267,518 | https://en.wikipedia.org/wiki/Setaphyta | The Setaphyta are a clade within the Bryophyta which includes Marchantiophytina (liverworts) and Bryophytina (mosses). Anthocerotophytina (hornworts) are excluded. A 2018 study found through molecular sequencing that liverworts are more closely related to mosses than hornworts, with the implication that liverworts were not among the first species to colonize land.
Phylogeny
There is strong phylogenetic evidence for Setaphyta.
References
Bryophytes
Plant unranked clades | Setaphyta | Biology | 119 |
22,796,688 | https://en.wikipedia.org/wiki/Real-time%20recovery | In information technology, real-time recovery (RTR) is the ability to recover a piece of IT infrastructure, such as a server, from an infrastructure failure or human-induced error in a time frame that has minimal impact on business operations. Real-time recovery focuses on the most appropriate technology for restores, thus reducing the Recovery Time Objective (RTO) to minutes and the Recovery Point Objective (RPO) to within the last 15 minutes, while minimizing the Test Recovery Objective (TRO), which is the ability to test and validate that backups have occurred correctly without impacting production systems.
Real-time recovery is a new market segment in the backup, recovery and disaster recovery market that addresses the challenges companies have historically faced with regard to protecting, and more importantly, recovering their data.
Definition
A real-time recovery solution must have (at a minimum) the following attributes. It must be able to restore a server in minutes—to the same hardware, to totally different hardware, or to a virtual environment—to a point within the last 5 minutes, without requiring additional agents, options or modules. It must be able to restore files in seconds (after all, the only reason anyone backs up is to be able to restore). It must perform sector-level backups every 5 minutes and have the ability to self-heal a broken incremental chain of backups should part of the image set become corrupted or deleted. It must deliver improved recoverability of data files and databases.
Classification of data loss
Data Loss can be classified in three broad categories:
Server Hardware Failure - Preventing a server failure is very difficult, but it is possible to take precautions against total server failure through the use of redundant power supplies and Redundant Array of Independent Disks (RAID) disk sets.
Human Error - Human error is a major cause of failure. Human error and intervention may be intentional or unintentional, and can cause massive failures such as the loss of entire systems or data files. This category of data loss includes accidental erasure, walkout, sabotage, burglary, viruses, intrusion, etc.
Natural Disasters / Acts of Terrorism – although infrequent, companies should weigh their exposure to natural disasters or acts of terrorism and decide how much data loss the business is willing or able to tolerate.
Platforms for data servers
Data servers can be either physical hosts or run as guest servers within a virtualization platform, or a combination of both. It is very common for a customer environment to have a mixture of Virtual and Physical Servers. This is where attention to detail must be given to the approach of protecting the data on these servers at regular intervals. There are distinct advantages in selecting a technology that is virtual or physical independent. This would limit the number of technologies that organizations will have to get trained on, skilled up on, purchase, deploy, manage and maintain. In an ideal world, if you can reduce the complexity of managing multiple products to protect your physical and virtual infrastructure you will reap the rewards. A technology that gets installed at the operating system level ensures consistency in an environment that is either physical or virtual and eliminates API compatibility or Disk Volume Structure limitations (e.g. Raw Mapped Devices, VMFS).
Strategies
Prior to selecting a real-time recovery strategy or solution, a disaster recovery planner will refer to their organization's business continuity plan for the key metrics of recovery point objective (RPO) and recovery time objective for various business processes (such as the process to run payroll, generate an order, e-mail, etc.). The metrics specified for the business processes must then be mapped to the underlying IT systems and infrastructure that support those processes.
Once the recovery time objective and recovery point objective metrics have been mapped to IT infrastructure, the DR planner can determine the most suitable recovery strategy for each system. The business ultimately sets the IT budget, and therefore the RTO and RPO metrics need to fit within the available budget. While the ideal is zero data loss and zero time loss, the costs associated with that level of protection have historically made high-availability solutions impractical and unaffordable. The costs of a real-time recovery solution are far lower than those of previous tape-based backup systems.
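As an illustration of how RPO and RTO targets can be checked against a given backup schedule, the following Python sketch may be useful; it is not drawn from any specific real-time recovery product, and the system name, time values and function shown are hypothetical assumptions used purely for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RecoveryPolicy:
    # Recovery targets for one IT system (the values used below are hypothetical).
    name: str
    rpo: timedelta   # maximum tolerable data loss
    rto: timedelta   # maximum tolerable downtime

def meets_targets(policy, last_backup, estimated_restore, now):
    # Compare potential data loss and estimated restore time with the targets.
    potential_data_loss = now - last_backup
    return {
        "system": policy.name,
        "rpo_met": potential_data_loss <= policy.rpo,
        "rto_met": estimated_restore <= policy.rto,
    }

# Example: a server imaged at sector level every 5 minutes, with a
# 15-minute RPO and a 30-minute RTO.
now = datetime(2024, 1, 1, 12, 0)
payroll = RecoveryPolicy("payroll", rpo=timedelta(minutes=15), rto=timedelta(minutes=30))
print(meets_targets(payroll, last_backup=now - timedelta(minutes=5),
                    estimated_restore=timedelta(minutes=10), now=now))
# {'system': 'payroll', 'rpo_met': True, 'rto_met': True}

In this sketch, a system whose last sector-level image is 5 minutes old and whose estimated restore time is 10 minutes satisfies both targets; lengthening the backup interval beyond 15 minutes, or relying on a restore method slower than 30 minutes, would violate the corresponding objective.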
References
Disaster recovery
Information technology | Real-time recovery | Technology | 856 |
946,459 | https://en.wikipedia.org/wiki/Erich%20Gamma | Erich Gamma is a Swiss computer scientist and one of the four co-authors (referred to as "Gang of Four") of the software engineering textbook, Design Patterns: Elements of Reusable Object-Oriented Software.
Gamma, along with Kent Beck, co-wrote the JUnit software testing framework, which helped create Test-Driven Development. He was the development team lead of the Eclipse platform's Java Development Tools (JDT), and worked on the IBM Rational Jazz project.
In 2011 he joined the Microsoft Visual Studio team and leads a development lab in Zürich, Switzerland that has developed the "Monaco" suite of components for browser-based development, found in products such as Azure DevOps Services (formerly Visual Studio Team Services and Visual Studio Online), Visual Studio Code, Azure Mobile Services, Azure Web Sites, and the Office 365 Development tools.
References
External links
GitHub account
Swiss writers
Living people
Swiss computer scientists
Eclipse (software)
IBM employees
Microsoft technical fellows
Software testing people
Scientists from Zurich
Year of birth missing (living people) | Erich Gamma | Technology | 211 |
61,200,396 | https://en.wikipedia.org/wiki/C22H25N3O2 |
The molecular formula C22H25N3O2 may refer to:
Baxdrostat
Bucindolol | C22H25N3O2 | Chemistry | 43 |
4,245,815 | https://en.wikipedia.org/wiki/Mosaic%20gold | Mosaic gold or bronze powder refers to tin(IV) sulfide as used as a pigment in bronzing and gilding wood and metal work. It is obtained as a yellow scaly crystalline powder. The alchemists referred to it as aurum musivum, or aurum mosaicum. The term mosaic gold has also been used to refer to ormolu and to cut shapes of gold leaf, some darkened for contrast, arranged as a mosaic. The term bronze powder may also refer to powdered bronze alloy.
A recipe for mosaic gold is already provided in the 3rd-century A.D. treatise Baopuzi, composed by the Chinese alchemist Ge Hong. The earliest sources for its preparation in Europe, under the name porporina or purpurina, are the late 13th-century North Italian Liber colorum secundum Magistrum Bernardum and Cennino Cennini's Libro dell'arte from the 1420s. Instructions became more widespread and varied thereafter; the recipe collection Liber illuministarum of around 1500 from Tegernsee Abbey in Bavaria alone offers six different methods for its preparation. Alchemists prepared it by combining mercury, tin, sal ammoniac, and sublimated sulfur (fleur de soufre), grinding and mixing them, then setting them for three hours in a sand heat. The dirty sublimate being taken off, aurum mosaicum was found at the bottom of the matrass.
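In net terms these preparations yield tin(IV) sulfide. A simplified overall equation, which treats the mercury and sal ammoniac as auxiliaries that pass off with the sublimate rather than into the pigment, can be written in LaTeX form as

\mathrm{Sn} + 2\,\mathrm{S} \longrightarrow \mathrm{SnS_2}

This is a schematic summary only; the historical recipes vary in their proportions and in how the tin–mercury amalgam and the flux are handled.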
In the past it was used for medical purposes in most chronic and nervous ailments, and particularly convulsions of children; however, it is no longer recommended for any medical uses.
See also
List of inorganic pigments
References
Inorganic pigments
Visual arts materials
Alchemical substances
Tin(IV) compounds
Powders
Sulfides | Mosaic gold | Physics,Chemistry | 371 |
66,707,212 | https://en.wikipedia.org/wiki/Corporate%20Memphis | Corporate Memphis is an art style named after the Memphis Group that features flat areas of color and geometric elements. Widely associated with Big Tech illustrations in the late 2010s and early 2020s, it has been met with a polarized response, with criticism focusing on its use in sanitizing corporate communication, as well as being seen as visually offensive, insincere, pandering and over-saturated. Other illustrators have defended the style, pointing at what they claim to be its art-historical legitimacy.
Origins
Flat art developed out of the rise of vector graphic programs, and a nostalgia for mid-century modern illustration. It began to trend in editorial illustration and especially the tech industry, which relied on simple, scalable illustrations to fill white space and add character to apps and web pages. The style was widely popularized when Facebook introduced Alegria, an illustration system commissioned from design agency Buck Studios and illustrator Xoana Herrera in 2017.
The name "Corporate Memphis" originated from the title of an Are.na board that collected early examples, and is a reference to the Memphis Group, a 1980s design group known for bright colors, childish patterns, and geometric shapes. The style itself was inspired by a synthesis of elements spanning the 20th-century, including the Art Deco style of the 1920s, futurism in interior design from the Atomic Age, and color and patterns from the Pop Art movement.
Visual characteristics
Common motifs are flat human characters in action, with disproportionate features such as long and bendy limbs, small torsos, minimal or no facial features, and bright colors without any blending. Facebook's Alegria uses non-representational skin colors such as blues and purples in order to feel universal, though some artists working in the style opt for more realistic skin colors and features to show diversity.
Corporate Memphis is materially quick, cheap and easy to produce, and thus appealing to companies; programs such as Adobe Illustrator can be used to produce such designs rapidly.
Reception
Once Facebook had adopted the style, the sudden ubiquity of vector graphics led to a critical backlash. The style has been criticized professionally and popularly (including in myriad internet memes) for being overly minimalistic, generic, lazy and overused, and for attempting to sanitise public perception of big tech companies by presenting human interaction in utopian optimism.
Criticism of the art style is often rooted in larger anxieties about the creative industry under capitalism and neoliberalism. Others have argued that Corporate Memphis deserves to be understood on its own merits separate from the corporations which regularly employ it.
See also
Material design, a Google-derived design language linked to Corporate Memphis
Frutiger Aero, a prominent design style preceding Corporate Memphis that embraced contrasting skeuomorphism
Flat design
Hyperreality
Postmodern art
Metamodernism
Pop art
Capitalist realism
References
Illustration
Art movements
Design
Advertising
Minimalism
21st century in art | Corporate Memphis | Engineering | 589 |
209,584 | https://en.wikipedia.org/wiki/Atomic%20Age | The Atomic Age, also known as the Atomic Era, is the period of history following the detonation of the first nuclear weapon, The Gadget, at the Trinity test in New Mexico on 16 July 1945 during World War II. Although nuclear chain reactions had been hypothesized in 1933 and the first artificial self-sustaining nuclear chain reaction (Chicago Pile-1) had taken place in December 1942, the Trinity test and the ensuing bombings of Hiroshima and Nagasaki that ended World War II represented the first large-scale use of nuclear technology and ushered in profound changes in sociopolitical thinking and the course of technological development.
While atomic power was promoted for a time as the epitome of progress and modernity, entering into the nuclear power era also entailed frightful implications of nuclear warfare, the Cold War, mutual assured destruction, nuclear proliferation, the risk of nuclear disaster (potentially as extreme as anthropogenic global nuclear winter), as well as beneficial civilian applications in nuclear medicine. It is no easy matter to fully segregate peaceful uses of nuclear technology from military or terrorist uses (such as the fabrication of dirty bombs from radioactive waste), which complicated the development of a global nuclear-power export industry right from the outset.
In 1973, concerning a flourishing nuclear power industry, the United States Atomic Energy Commission predicted that by the turn of the 21st century, 1,000 reactors would be producing electricity for homes and businesses across the U.S. However, the "nuclear dream" fell far short of what was promised because nuclear technology produced a range of social problems, from the nuclear arms race to nuclear meltdowns, and the unresolved difficulties of bomb plant cleanup and civilian plant waste disposal and decommissioning. Since 1973, reactor orders declined sharply as electricity demand fell and construction costs rose. Many orders and partially completed plants were cancelled.
By the late 1970s, nuclear power had suffered a remarkable international destabilization, as it was faced with economic difficulties and widespread public opposition, coming to a head with the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986, both of which adversely affected the nuclear power industry for many decades.
Early years
In 1901, Frederick Soddy and Ernest Rutherford discovered that radioactivity was part of the process by which atoms changed from one kind to another, involving the release of energy. Soddy wrote in popular magazines that radioactivity was a potentially "inexhaustible" source of energy and offered a vision of an atomic future where it would be possible to "transform a desert continent, thaw the frozen poles, and make the whole earth one smiling Garden of Eden." The promise of an "atomic age," with nuclear energy as the global, utopian technology for the satisfaction of human needs, has been a recurring theme ever since. But "Soddy also saw that atomic energy could possibly be used to create terrible new weapons".
The concept of a nuclear chain reaction was hypothesized in 1933, shortly after James Chadwick's discovery of the neutron. Only a few years later, in December 1938 nuclear fission was discovered by Otto Hahn and his assistant Fritz Strassmann. Hahn understood that a "burst" of the atomic nuclei had occurred. The first artificial self-sustaining nuclear chain reaction took place at Chicago Pile-1 in December 1942 under the leadership of Enrico Fermi.
In 1945, the pocketbook The Atomic Age heralded the untapped atomic power in everyday objects and depicted a future where fossil fuels would go unused. One science writer, David Dietz, wrote that instead of filling the gas tank of your car two or three times a week, you will travel for a year on a pellet of atomic energy the size of a vitamin pill. Glenn T. Seaborg, who chaired the Atomic Energy Commission, wrote "there will be nuclear powered earth-to-moon shuttles, nuclear powered artificial hearts, plutonium heated swimming pools for SCUBA divers, and much more".
World War II
The phrase Atomic Age was coined by William L. Laurence, a journalist with The New York Times, who became the official journalist for the Manhattan Project which developed the first nuclear weapons. He witnessed both the Trinity test and the bombing of Nagasaki and went on to write a series of articles extolling the virtues of the new weapon. His reporting before and after the bombings helped to spur public awareness of the potential of nuclear technology and in part motivated development of the technology in the U.S. and in the Soviet Union. The Soviet Union would go on to test its first nuclear weapon in 1949.
In 1949, U.S. Atomic Energy Commission chairman, David Lilienthal stated that "atomic energy is not simply a search for new energy, but more significantly a beginning of human history in which faith in knowledge can vitalize man's whole life".
1950s
The phrase gained popularity as a feeling of nuclear optimism emerged in the 1950s in which it was believed that all power generators in the future would be atomic in nature. The atomic bomb would render all conventional explosives obsolete, and nuclear power plants would do the same for power sources such as coal and oil. There was a general feeling that everything would use a nuclear power source of some sort, in a positive and productive way, from irradiating food to preserve it, to the development of nuclear medicine. There would be an age of peace and plenty in which atomic energy would "provide the power needed to desalinate water for the thirsty, irrigate the deserts for the hungry, and fuel interstellar travel deep into outer space". This use would render the Atomic Age as significant a step in technological progress as the first smelting of bronze, of iron, or the commencement of the Industrial Revolution.
This included even cars, leading Ford Motor Company to display the Ford Nucleon concept car to the public in 1958. There was also the promise of golf balls which could always be found and nuclear-powered aircraft, which the U.S. federal government even spent US$1.5 billion researching. Nuclear policymaking became almost a collective technocratic fantasy, or at least was driven by fantasy:
The very idea of splitting the atom had an almost magical grip on the imaginations of inventors and policymakers. As soon as someone said—in an even mildly credible way—that these things could be done, then people quickly convinced themselves ... that they would be done.
In the US, military planners "believed that demonstrating the civilian applications of the atom would also affirm the American system of private enterprise, showcase the expertise of scientists, increase personal living standards, and defend the democratic lifestyle against communism". Some media reports predicted that thanks to the giant nuclear power stations of the near future electricity would soon become much cheaper and that electricity meters would be removed, because power would be "too cheap to meter."
When the Shippingport reactor went online in 1957 it produced electricity at a cost roughly ten times that of coal-fired generation. Scientists at the AEC's own Brookhaven Laboratory "wrote a 1958 report describing accident scenarios in which 3,000 people would die immediately, with another 40,000 injured". However Shippingport was an experimental reactor using highly enriched uranium (unlike most power reactors) and originally intended for a (cancelled) nuclear-powered aircraft carrier.
Kenneth Nichols, a consultant for the Connecticut Yankee and Yankee Rowe nuclear power stations, wrote that while considered "experimental" and not expected to be competitive with coal and oil, they "became competitive because of inflation ... and the large increase in price of coal and oil." He wrote that for nuclear power stations the capital cost is the major cost factor over the life of the plant, hence "antinukes" try to increase costs and building time with changing regulations and lengthy hearings, so that "it takes almost twice as long to build a (U.S.-designed boiling-water or pressurised water) atomic power plant in the United States as in France, Japan, Taiwan or South Korea." French pressurised-water nuclear plants produce 60% of their electric power and have proven to be much cheaper than oil or coal.
Fear of possible atomic attack from the Soviet Union caused U.S. school children to participate in "duck and cover" civil defense drills.
Atomic City
During the 1950s, Las Vegas earned the nickname "Atomic City" for becoming a hotspot where tourists would gather to watch above-ground nuclear weapons tests taking place at Nevada Test Site. Following the detonation of Able, one of the first atomic bombs dropped at the Nevada Test Site, the Las Vegas Chamber of Commerce began advertising the tests as an entertainment spectacle to tourists.
The detonations proved popular, and casinos throughout the city capitalised on the tests by advertising hotel rooms or rooftops which offered views of the testing site or by planning "Dawn Bomb Parties" where people would come together to celebrate the detonations. Most parties started at midnight, and musicians would perform at the venues until 4:00 a.m. when the party would briefly stop so guests could silently watch the detonation. Some casinos capitalised on the tests further by creating so called "atomic cocktails", a mixture of vodka, cognac, sherry and champagne. Meanwhile, groups of tourists would drive out into the desert with family or friends to watch the detonations.
Despite the health risks associated with nuclear fallout, tourists and viewers were told to simply "shower". Later on, however, many of those who had worked at the testing site or lived in areas exposed to nuclear fallout fell ill, with higher chances of developing cancer or dying prematurely.
1960s
The term "atomic age" was initially used in a positive, futuristic sense, but by the 1960s the threats posed by nuclear weapons had begun to edge out nuclear power as the dominant motif of the atom. In the Thunderbirds TV series, a set of vehicles was presented that were imagined to be completely nuclear, as shown in cutaways presented in their comic-books.
Project Plowshare
By exploiting the peaceful uses of the "friendly atom" in medical applications, earth removal and subsequently in nuclear power plants, the nuclear industry and U.S. government sought to allay public fears about nuclear technology and promote the acceptance of nuclear weapons. At the peak of the Atomic Age, the U.S. initiated Project Plowshare, involving "peaceful nuclear explosions". The United States Atomic Energy Commission (AEC) chairman announced that Plowshare was intended to "highlight the peaceful applications of nuclear explosive devices and thereby create a climate of world opinion that is more favorable to weapons development and tests". Plowshare "was named directly from the Bible itself, specifically Micah 4:3, which states that God will beat swords into ploughshares, and spears into pruning hooks, so that no country could lift up weapons against another".
Proposed uses included widening the Panama Canal, constructing a new sea-level waterway through Nicaragua nicknamed the Pan-Atomic Canal, cutting paths through mountainous areas for highways, and connecting inland river systems. Other proposals involved blasting caverns for water, natural gas, and petroleum storage. It was proposed to plant underground atomic bombs to extract shale oil in eastern Utah and western Colorado. Serious consideration was given to using these explosives for various mining operations. One proposal suggested using nuclear blasts to connect underground aquifers in Arizona. Another plan involved surface blasting on the western slope of California's Sacramento Valley for a water transport project.
However, there were many negative impacts from Project Plowshare's 27 nuclear explosions. Consequences included blighted land, relocated communities, tritium-contaminated water, radioactivity, and fallout from debris being hurled high into the atmosphere. These were ignored and downplayed until the program was terminated in 1977, due in large part to public opposition, after $770 million had been spent on the project.
1970s to 1990s
French advocates of nuclear power developed an aesthetic vision of nuclear technology as art to bolster support for the technology. Leclerq compares the nuclear cooling tower to some of the grandest architectural monuments of Western culture:
The age in which we live has, for the public, been marked by the nuclear engineer and the gigantic edifices he has created. For builders and visitors alike, nuclear power plants will be considered the cathedrals of the 20th century. Their syncretism mingles the conscious and the unconscious, religious fulfilment and industrial achievement, the limitations of uses of materials and boundless artistic inspiration, utopia come true and the continued search for harmony.
In 1973, the AEC predicted that, by the turn of the 21st century 1,000 reactors would be producing electricity for homes and businesses across the U.S. But after 1973, reactor orders declined sharply as electricity demand fell and construction costs rose. Many orders and partially completed plants were cancelled.
Nuclear power has proved controversial since the 1970s. Highly radioactive materials may overheat and escape from the reactor building. Nuclear waste (spent nuclear fuel) needs to be regularly removed from the reactors and disposed of safely for up to a million years, so that it does not pollute the environment. Recycling of nuclear waste has been discussed, but it creates plutonium which can be used in weapons, and in any case still leaves much unwanted waste to be stored and disposed of. Large, purpose-built facilities for long-term disposal of nuclear waste have been difficult to site.
By the late 1970s, nuclear power suffered a remarkable international destabilization, as it was faced with economic difficulties and widespread public opposition, coming to a head with the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986, both of which adversely affected the nuclear power industry for decades thereafter. A cover story in the 11 February 1985 issue of Forbes magazine addresses the overall management of the nuclear power program in the United States:
The failure of the U.S. nuclear power program ranks as the largest managerial disaster in business history, a disaster on a monumental scale ... only the blind, or the biased, can now think that the money has been well spent. It is a defeat for the U.S. consumer and for the competitiveness of U.S. industry, for the utilities that undertook the program and for the private enterprise system that made it possible.
In a period just over 30 years, the early dramatic rise of nuclear power went into equally meteoric reverse. With no other energy technology has there been a conjunction of such rapid and revolutionary international emergence, followed so quickly by equally transformative demise.
21st century
In the 21st century, the label of the "Atomic Age" connotes either a sense of nostalgia or naïveté and is considered by many to have ended with the fall of the Soviet Union in 1991, though the term continues to be used by many historians to describe the era following the conclusion of the Second World War. Atomic energy and weapons continue to have a strong effect on world politics in the 21st century.
The nuclear power industry has improved the safety and performance of reactors and has proposed new safer (but generally untested) reactor designs, but there is no guarantee that the reactors will be designed, built and operated correctly. Mistakes do occur, and natural disasters can affect nuclear power plants, such as the 2011 Tōhoku earthquake and tsunami that damaged the Fukushima plant in Japan. According to UBS AG, the Fukushima accident cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from MIT has estimated that if nuclear power use tripled from 2005 to 2055 (2%–7%), at least four serious nuclear accidents would be expected in that period.
In September 2012, in reaction to the Fukushima disaster, Japan announced that it would completely phase out nuclear power by 2030, although achieving this goal became unlikely during the subsequent Abe administration. Germany planned to completely phase out nuclear energy by 2022, but nuclear power still supplied 11.9% of its electricity in 2021. In 2022, following the Russian invasion of Ukraine, the United Kingdom pledged to build up to 8 new reactors to reduce its reliance on gas and oil, hoping that 25% of all energy produced will come from nuclear power.
On August 1, 2024, Vipin Narang, a senior Pentagon official, remarked, "We now find ourselves in nothing short of a new nuclear age." He attributed this development to an "unprecedented mix of multiple revisionist nuclear challengers who are uninterested in arms control or risk-reduction efforts, each rapidly modernizing and expanding their nuclear arsenals."
Anti-nuclear movement
A large anti-nuclear demonstration was held on 6 May 1979, in Washington D.C., when 125,000 people, including the governor of California, attended a march and rally against nuclear power. In New York City on 23 September 1979, almost 200,000 people attended a protest against nuclear power. Anti-nuclear power protests preceded the shutdown of the Shoreham, Yankee Rowe, Millstone I, Rancho Seco, Maine Yankee, and about a dozen other nuclear power plants.
On 12 June 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the Cold War arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history. International Day of Nuclear Disarmament protests were held on 20 June 1983, at 50 sites across the United States.
In 1986, hundreds of people walked from Los Angeles to Washington, D.C., in the Great Peace March for Global Nuclear Disarmament. There were many Nevada Desert Experience protests and peace camps at the Nevada Test Site during the 1980s and 1990s.
On May 1, 2005, 40,000 anti-nuclear/anti-war protesters marched past the United Nations in New York, 60 years after the atomic bombings of Hiroshima and Nagasaki. This was the largest anti-nuclear rally in the U.S. for several decades.
Timeline
Discovery and development
1896 – Henri Becquerel notices that uranium gives off an unknown radiation which fogs photographic film.
1898 – Marie Curie discovers thorium gives off a similar radiation. She calls it radioactivity.
1903 – Ernest Rutherford begins to speak of the possibility of atomic energy.
1905 – Albert Einstein formulates the special theory of relativity which explains the phenomenon of radioactivity as mass–energy equivalence.
1911 – Ernest Rutherford formulates a theory about the structure of the atomic nucleus based on his experiments with alpha particles.
1930 – Otto Hahn writes an article with his prophecy "The Atom – the source of power of the future?" in the newspaper Deutsche Allgemeine Zeitung.
1932 – James Chadwick discovers the neutron.
1934 – Enrico Fermi begins bombarding uranium with slow neutrons; Ida Noddack predicts that uranium nuclei will break up under bombardment by fast neutrons. (Fermi does not pursue this because his theoretical mathematical predictions do not predict this result.)
17 December 1938 – Otto Hahn and his assistant Fritz Strassmann, by bombarding uranium with fast neutrons, discover experimentally and prove nuclear fission with radiochemical methods.
6 January 1939 – Hahn and Strassmann publish the first paper about their discovery in the German review Die Naturwissenschaften.
10 February 1939 – Hahn and Strassmann publish the second paper about their discovery in Die Naturwissenschaften, using for the first time the term uranium fission, and predict the liberation of additional neutrons in the fission process.
11 February 1939 – Lise Meitner and her nephew Otto Frisch publish the first theoretical interpretation of nuclear fission, a term coined by Frisch, in the British review Nature.
11 October 1939 – The Einstein–Szilárd letter, suggesting that the United States construct a nuclear weapon, is delivered to President Franklin D. Roosevelt. Roosevelt signs the order to build a nuclear weapon on 6 December 1941.
26 February 1941 – Discovery of plutonium by Glenn Seaborg and Arthur Wahl.
September 1942 – General Leslie Groves takes charge of the Manhattan Project.
2 December 1942 – Under the leadership of Fermi, the first self-sustaining nuclear chain reaction takes place at the Chicago Pile-1.
Nuclear arms deployment
16 July 1945 – The first nuclear weapon is detonated in a plutonium form near Socorro, New Mexico, United States in the successful Trinity test.
6 August 1945 – The second nuclear weapon, and the first to be deployed in combat, is detonated when the Little Boy uranium bomb was dropped on the Japanese city of Hiroshima.
9 August 1945 – The third nuclear weapon—and the second and last to be deployed in combat—is detonated when the Fat Man plutonium bomb was dropped on the Japanese city of Nagasaki.
5 September 1951 – The U.S. Air Force announces the awarding of a contract for the development of an "atomic-powered airplane".
1 November 1952 – The first hydrogen bomb, largely designed by Edward Teller, is tested at Eniwetok Atoll.
"Atoms for Peace"
8 December 1953 – U.S. President Dwight D. Eisenhower, in a speech before the UN General Assembly, announces the Atoms for Peace program to provide nuclear power to developing countries.
21 January 1954 – The first nuclear submarine, the USS Nautilus, is launched into the Thames River near New London, Connecticut, United States.
27 June 1954 – The first nuclear power plant begins operation near Obninsk, USSR.
17 September 1954 – Lewis L. Strauss, chairman of the U.S. Atomic Energy Commission, states that nuclear energy will be "too cheap to meter".
17 October 1956 – The world's first nuclear power station to deliver electricity in commercial quantities opens at Calder Hall in the UK.
29 September 1957 – more than 200 people die as a result of the Mayak nuclear waste storage tank explosion in Chelyabinsk, Soviet Union, and 270,000 people were exposed to dangerous radiation levels.
1957 to 1959 – The Soviet Union and the United States both begin deployment of ICBMs.
1958 – The neutron bomb, a special type of tactical nuclear weapon developed specifically to release a relatively large portion of its energy as energetic neutron radiation, is invented by Samuel Cohen of the Lawrence Livermore National Laboratory.
1960 – Herman Kahn publishes On Thermonuclear War.
November 1961 – In Fortune magazine, an article by Gilbert Burck appears outlining the plans of Nelson Rockefeller, Edward Teller, Herman Kahn, and Chet Holifield for the construction of an enormous network of concrete-lined underground fallout shelters throughout the United States sufficient to shelter millions of people to serve as a refuge in case of nuclear war.
12 October 1962 to 28 October 1962 – The Cuban Missile Crisis brings the world to the brink of nuclear war.
10 October 1963 – The Partial Test Ban Treaty goes into effect, banning above ground nuclear testing.
26 August 1966 – The first pebble-bed reactor goes online in Jülich, West Germany (some nuclear engineers think that the pebble-bed reactor design can be adapted for atomic powered vehicles).
27 January 1967 – The Outer Space Treaty bans the deployment of nuclear weapons in space.
1968 – Physicist Freeman Dyson proposes building a space ark using an Orion nuclear-pulse propulsion rocket powered by hydrogen bombs. The rocket would have a payload of 50,000 tonnes, a crew of 240, and be able to travel at 3.3% of the speed of light and would reach Alpha Centauri in 133 years. It would cost $367 billion in 1968 dollars, which is the equivalent of about $3.3 trillion in 2024 dollars.
Three Mile Island and Chernobyl
28 March 1979 – The Three Mile Island accident occurs at the Three Mile Island Nuclear Generating Station near Harrisburg, Pennsylvania, dampening enthusiasm in the United States for nuclear power, and causing a dramatic shift in the growth of nuclear power in the United States.
6 May 1979 – A large anti-nuclear demonstration was held in Washington, D.C., when 125,000 people, including the Governor of California, attended a march and rally against nuclear power.
23 September 1979 – In New York City, almost 200,000 people attended a protest against nuclear power.
26 April 1986 – The Chernobyl disaster occurs at the Chernobyl Nuclear Power Plant near Pripyat, Ukraine, USSR, reducing enthusiasm for nuclear power among many people in the world, and causing a dramatic shift in the growth of nuclear power.
Nuclear arms reduction
8 December 1987 – The Intermediate-Range Nuclear Forces Treaty is signed in Washington. Ronald Reagan and Mikhail Gorbachev agreed after negotiations following the 11–12 October 1986 Reykjavík Summit to go farther than a nuclear freeze – they agreed to reduce nuclear arsenals. IRBMs and SRBMs were eliminated.
1993–2007 – Nuclear power is the primary source of electricity in France. Throughout this period, France produced over three quarters of its electricity from nuclear sources (78.8%), the highest percentage in the world at the time.
31 July 1991 – As the Cold War ends, the Start I treaty is signed by the United States and the Soviet Union, reducing the deployed nuclear warheads of each side to no more than 6,000 each.
1993 – The Megatons to Megawatts Program is agreed upon by Russia and the United States and begins to be implemented in 1995. When it is completed in 2013, five hundred tonnes of uranium derived from 20,000 nuclear warheads from Russia will have been converted from weapons-grade to reactor-grade uranium and used in United States nuclear plants to generate electricity. This has provided 10% of the electrical power of the U.S. (50% of its nuclear power) during the 1995–2013 period.
2006 – Patrick Moore, an early member of Greenpeace and environmentalists such as Stewart Brand suggest the deployment of more advanced nuclear power technology for electric power generation (such as pebble-bed reactors) to combat global warming.
21 November 2006 – Implementation of the ITER fusion power reactor project near Cadarache, France is begun. Construction is to be completed in 2016 with the hope that the research conducted there will allow the introduction of practical commercial fusion power plants by 2050.
2006–2009 – Nuclear engineers begin to suggest that, to combat global warming, it would be more efficient to build nuclear reactors that operate on the thorium cycle.
8 April 2010 – The New START treaty is signed by the United States and Russia in Prague. It mandates the eventual reduction by both sides to no more than 1,550 deployed strategic nuclear weapons each.
Fukushima
11 March 2011 – A tsunami resulting from the Tōhoku earthquake causes severe damage to the Fukushima I nuclear power plant in Japan, causing partial nuclear meltdowns in several of the reactors. Many international leaders express concerns about the accidents, and some countries re-evaluate existing nuclear energy programs. The event is rated level 7 on the International Nuclear Event Scale by the Japanese government's nuclear safety agency. Other than the Chernobyl disaster, it is the only nuclear accident to be rated at level 7, the highest level on the scale, and caused the most dramatic shift in nuclear policy to date.
Influence on popular culture
1945 – The Atomaton chapter of Sweet Adelines was formed by Edna Mae Anderson after she and her sister singers decided, "We have an atom of an idea and a ton of energy." The name also recognized the Atomic Age—just three days after Sweet Adelines was founded (13 July 1945), the first nuclear bomb, Trinity, was detonated.
5 July 1946 – The bikini swimsuit, named after Bikini Atoll, where an atomic bomb test called Operation Crossroads had taken place a few days earlier on 1 July 1946, was introduced at a fashion show in Paris.
1954 – Them!, a science fiction film about humanity's battle with a nest of giant mutant ants, was one of the first of the "nuclear monster" movies.
1954 – The science fiction film Godzilla was released, about an iconic fictional monster that is a gigantic irradiated dinosaur, transformed from the fallout of a hydrogen bomb test.
23 January 1957 – Walt Disney Productions released the film "Our Friend the Atom" describing the marvelous benefits of atomic power. As well as being presented as an episode on the TV show Disneyland, this film was also shown to almost all baby boomers in their public school auditoriums or their science classes and was instrumental in creating within that generation a mostly favorable attitude toward nuclear power.
1958 – The peace symbol was designed for the British nuclear disarmament movement by Gerald Holtom.
1959 – The popular film On the Beach shows the last remnants of humanity in Australia awaiting the end of the human race after a nuclear war.
1964 – The film Dr. Strangelove, or: How I Learned to Stop Worrying and Love the Bomb (aka Dr. Strangelove), a black comedy directed by Stanley Kubrick about an accidentally triggered nuclear war, was released.
1982 – The documentary film The Atomic Cafe, detailing society's attitudes toward the atomic bomb in the early Atomic Age, debuted to widespread acclaim.
1982 – Jonathan Schell's book Fate of the Earth, about the consequences of nuclear war, is published. The book "forces even the most reluctant person to confront the unthinkable: the destruction of humanity and possibly most life on Earth". The best-selling book instigated the Nuclear Freeze campaign.
20 November 1983 – The Day After, an American television movie was aired on the ABC Television Network and in the Soviet Union. The film portrays a fictional nuclear war between the United States/NATO and the Soviet Union/Warsaw Pact. After the film, a panel discussion was presented in which Carl Sagan suggested that we need to reduce the number of nuclear weapons as a matter of "planetary hygiene". This film was seen by over 100,000,000 people and was instrumental in greatly increasing public support for the Nuclear Freeze campaign.
See also
References
Further reading
"Presidency in the Nuclear Age", conference and forum at the JFK Library, Boston, 12 October 2009. Four panels: "The Race to Build the Bomb and the Decision to Use It", "Cuban Missile Crisis and the First Nuclear Test Ban Treaty", "The Cold War and the Nuclear Arms Race", and "Nuclear Weapons, Terrorism, and the Presidency".
External links
Annotated bibliography on the Nuclear Age at the Alsos Digital Library for Nuclear Issues.
Atomic Age Alliance, a volunteer group dedicated to preserving Atomic Age culture and architecture.
The Nation in the Nuclear Age, a slideshow by The Nation.
20th century
Historical eras
Nuclear history
Nuclear warfare | Atomic Age | Chemistry | 6,233 |
2,301,322 | https://en.wikipedia.org/wiki/Superfecundation | Superfecundation is the fertilization of two or more ova from the same cycle by sperm from separate acts of sexual intercourse, which can lead to twin babies from two separate biological fathers. The term superfecundation is derived from fecund, meaning able to produce offspring. Homopaternal superfecundation is fertilization of two separate ova from the same father, leading to fraternal twins, while heteropaternal superfecundation is a form of atypical twinning where, genetically, the twins are half siblings – sharing the same mother, but with different fathers.
Conception
Sperm cells can live inside a female's body for up to five days, and once ovulation occurs, the egg remains viable for 12–48 hours before it begins to disintegrate. Superfecundation most commonly happens within hours or days of the first instance of fertilization with ova released during the same cycle.
Ovulation is normally suspended during pregnancy to prevent further ova becoming fertilized and to help increase the chances of a full-term pregnancy. However, if an ovum is atypically released after the female has already become pregnant from a previous ovulation, a second pregnancy can occur, albeit at a different stage of development. This is known as superfetation.
Heteropaternal superfecundation
Heteropaternal superfecundation is common in animals such as cats and dogs. Stray dogs can produce litters in which every puppy has a different sire. Though rare in humans, cases have been documented. In one study on humans, the frequency was 2.4% among dizygotic twins whose parents had been involved in paternity suits.
Selected cases involving superfecundation
In 1982, twins who were born with two different skin colors were discovered to be conceived as a result of heteropaternal superfecundation.
In 1995, a young woman gave birth to diamniotic monochorionic twins, who were originally assumed to be monozygotic twins until a paternity suit led to a DNA test. This led to the discovery that the twins had different fathers.
In 2001, a case of spontaneous monopaternal superfecundation was reported after a woman undergoing IVF treatments gave birth to quintuplets after only two embryos were implanted. Genetic testing supported that the twinning was not a result of the embryos splitting, and that all five boys shared the same father.
In 2008, on the Maury Show, a paternity test broadcast on live television established a case of heteropaternal superfecundation.
In 2015, a judge in New Jersey ruled that a man should only pay child support for one of two twins, as he was only the biological father to one of the children.
In 2017, an IVF-implanted surrogate mother gave birth to two children: one genetically unrelated child from an implanted embryo, and a biological child from her own egg and her husband's sperm.
In 2019, a Chinese woman was reported to have two babies from different fathers, one of whom was her husband and the other was a man having a secret affair with her during the same time.
In 2022, a 19-year-old Brazilian from Mineiros gave birth to twins from two different fathers with whom she had sex on the same day.
Mythology
Greek mythology holds many cases of superfecundation:
Leda lies with both her husband Tyndareus and with the god Zeus, the latter in the guise of a swan. Nine months later, she bears two daughters: Clytemnestra by Tyndareus and Helen by Zeus. This happens again; this time Leda bears two sons: Castor by Tyndareus and Pollux by Zeus.
Alcmene lies with Zeus, who is disguised as her husband Amphitryon; Alcmene later lies with the real Amphitryon and gives birth to two sons: Iphicles by Amphitryon and Heracles by Zeus.
Chione lies with both Apollo and Hermes on the same night, and falls pregnant. She bears two sons: Autolycus by Hermes and Philammon by Apollo.
See also
Chimera (genetics)
Mixed twins
Polyandry in nature
Polyspermy
Twins
References
Further reading
Castor and Pollux
Clytemnestra
Fertility
Helen of Troy
Multiple births
Reproduction | Superfecundation | Astronomy,Biology | 922 |
61,194,766 | https://en.wikipedia.org/wiki/Time-domain%20diffuse%20optics | Time-domain diffuse optics or time-resolved functional near-infrared spectroscopy is a branch of functional near-infrared spectroscopy which deals with light propagation in diffusive media. There are three main approaches to diffuse optics, namely continuous wave (CW), frequency domain (FD) and time domain (TD). Biological tissue is relatively transparent to light in the red to near-infrared wavelength range, so such light can be used to probe deep layers of tissue, enabling various in vivo applications and clinical trials.
Physical concepts
In this approach, a narrow pulse of light (< 100 picoseconds) is injected into the medium. The injected photons undergo multiple scattering and absorption events and the scattered photons are then collected at a certain distance from the source and the photon arrival times are recorded. The photon arrival times are converted into the histogram of the distribution of time-of-flight (DTOF) of photons or temporal point spread function. This DTOF is delayed, attenuated and broadened with respect to the injected pulse. The two main phenomena affecting photon migration in diffusive media are absorption and scattering. Scattering is caused by microscopic refractive index changes due to the structure of the media. Absorption, on the other hand, is caused by a radiative or non-radiative transfer of light energy on interaction with absorption centers such as chromophores. Both absorption and scattering are described by the coefficients μ_a and μ_s, respectively.
Multiple scattering events broaden the DTOF, while the overall attenuation is a result of both absorption and scattering, since both divert photons away from the direction of the detector. Higher scattering leads to a more delayed and a broader DTOF, and higher absorption reduces the amplitude and changes the slope of the tail of the DTOF. Since absorption and scattering have different effects on the DTOF, they can be extracted independently while using a single source-detector separation. Moreover, the penetration depth in TD depends solely on the photon arrival times and is independent of the source-detector separation, unlike in the CW approach.
The theory of light propagation in diffusive media is usually dealt with using the framework of radiative transfer theory under the multiple scattering regime. It has been demonstrated that the radiative transfer equation under the diffusion approximation yields sufficiently accurate solutions for practical applications. For example, it can be applied to the semi-infinite geometry or the infinite slab geometry, using proper boundary conditions. The system is considered as a homogeneous background and an inclusion is considered as an absorption or scattering perturbation.
The time-resolved reflectance curve at a distance ρ from the source for a semi-infinite geometry is given by

R(ρ, t) = k z_0 (4πDv)^(−3/2) t^(−5/2) exp(−μ_a v t) exp(−(ρ² + z_0²) / (4Dvt))

where D = 1/[3(μ_a + μ_s')] is the diffusion coefficient, μ_s' = (1 − g)μ_s is the reduced scattering coefficient and g is the asymmetry factor, v is the photon velocity in the medium, z_0 ≈ 1/μ_s' takes into account the boundary conditions and k is a constant.
The final DTOF is a convolution of the instrument response function (IRF) of the system with the theoretical reflectance curve.
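As an illustration of how such a model curve can be generated in practice, the sketch below evaluates the semi-infinite diffusion-theory reflectance given above (with the constant factor omitted) and convolves it with an assumed Gaussian IRF. The function names, the tissue-like optical properties and the 100 ps IRF width are illustrative assumptions, not values from any particular instrument.

```python
import numpy as np

def dtof_semi_infinite(t, rho, mua, musp, n=1.4):
    """Diffusion-theory time-resolved reflectance for a semi-infinite medium.

    t    : photon arrival times [s]
    rho  : source-detector separation [m]
    mua  : absorption coefficient [1/m]
    musp : reduced scattering coefficient [1/m]
    n    : refractive index of the tissue (assumed)
    """
    v = 3.0e8 / n                    # photon speed in the medium [m/s]
    D = 1.0 / (3.0 * (mua + musp))   # diffusion coefficient [m]
    z0 = 1.0 / musp                  # depth of the equivalent isotropic source [m]
    t = np.asarray(t, dtype=float)
    R = np.zeros_like(t)
    ok = t > 0
    R[ok] = (z0 * (4.0 * np.pi * D * v) ** -1.5 * t[ok] ** -2.5
             * np.exp(-mua * v * t[ok])
             * np.exp(-(rho ** 2 + z0 ** 2) / (4.0 * D * v * t[ok])))
    return R

# Illustrative tissue-like values: mua = 0.01 /mm, musp = 1 /mm, rho = 30 mm
t = np.arange(1, 8001) * 1e-12                        # 1 ps steps up to 8 ns
model = dtof_semi_infinite(t, rho=0.03, mua=10.0, musp=1000.0)

# Convolve with an assumed Gaussian IRF (FWHM ~ 100 ps) to mimic a measured DTOF
sigma = 100e-12 / 2.355
irf = np.exp(-0.5 * ((t - 0.5e-9) / sigma) ** 2)
measured = np.convolve(model, irf / irf.sum())[:t.size]
```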
When applied to biological tissues, estimation of μ_a and μ_s' allows the concentrations of the various tissue constituents to be estimated, providing information about blood oxygenation (oxy- and deoxy-hemoglobin) as well as saturation and total blood volume. These can then be used as biomarkers for detecting various pathologies.
Instrumentation
Instrumentation in time-domain diffuse optics consists of three fundamental components namely, a pulsed laser source, a single photon detector and a timing electronics.
Sources
Time-domain diffuse optical sources must have the following characteristics: an emission wavelength in the optical window, i.e. between 650 and 1350 nanometres (nm); a narrow full width at half maximum (FWHM), ideally a delta function; a high repetition rate (>20 MHz); and, finally, sufficient laser power (>1 mW) to achieve a good signal-to-noise ratio.
In the past, bulky tunable Ti:sapphire lasers were used. They provided a wide tuning range of about 400 nm, a narrow FWHM (<1 ps), high average power (up to 1 W) and a high repetition frequency (up to 100 MHz). However, they are bulky, expensive and take a long time for wavelength swapping.
In recent years, pulsed fiber lasers based on supercontinuum generation have emerged. They provide a wide spectral range (400 to 2000 nm), a typical average power of 5 to 10 W, a FWHM of <10 ps and a repetition frequency of tens of MHz. However, they are generally quite expensive and lack stability in supercontinuum generation, and hence have seen limited use.
The most widespread sources are pulsed diode lasers. They have a FWHM of around 100 ps, a repetition frequency of up to 100 MHz and an average power of a few milliwatts. Even though they lack tunability, their low cost and compactness allow multiple modules to be used in a single system.
Detectors
Single-photon detectors used in time-domain diffuse optics require not only a high photon detection efficiency in the wavelength range of the optical window, but also a large active area as well as a large numerical aperture (N.A.) to maximize the overall light collection efficiency. They also require a narrow timing response and a low noise background.
Traditionally, fiber-coupled photomultiplier tubes (PMTs) have been the detector of choice for diffuse optical measurements, thanks mainly to their large active area, low dark count and excellent timing resolution. However, they are intrinsically bulky, prone to electromagnetic disturbances, and have a quite limited spectral sensitivity. Moreover, they require a high biasing voltage and they are quite expensive. Single-photon avalanche diodes (SPADs) have emerged as an alternative to PMTs. They are low cost, compact and can be placed in contact, while needing a much lower biasing voltage. Also, they offer a wider spectral sensitivity and are more robust to bursts of light. However, they have a much smaller active area and hence a lower photon collection efficiency, as well as a larger dark count. Silicon photomultipliers (SiPMs) are arrays of SPADs with a global anode and a global cathode and hence have a larger active area while maintaining all the advantages offered by SPADs. However, they suffer from a larger dark count and a broader timing response.
Timing electronics
The timing electronics is needed to losslessly reconstruct the histogram of the distribution of time of flight of photons. This is done by using the technique of time-correlated single photon counting (TCSPC), where the individual photon arrival times are marked with respect to a start/stop signal provided by the periodic laser cycle. These time-stamps can then be used to build up histograms of photon arrival times.
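In software terms, building the DTOF from TCSPC time-stamps is simply a binning operation. The sketch below assumes the arrival times have already been referenced to the laser sync signal; the bin width, time window and simulated data are arbitrary illustrations, not values from any particular system.

```python
import numpy as np

# Simulated photon arrival times relative to the laser sync pulse [ps]
rng = np.random.default_rng(0)
arrival_times_ps = rng.exponential(scale=800.0, size=100_000) + 500.0

# Bin into a DTOF histogram, e.g. 25 ps channels over a 12.5 ns measurement window
bin_width_ps = 25.0
edges = np.arange(0.0, 12_500.0 + bin_width_ps, bin_width_ps)
dtof, _ = np.histogram(arrival_times_ps, bins=edges)
```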
The two main types of timing electronics are based on a combination of a time-to-analog converter (TAC) and an analog-to-digital converter (ADC), and on a time-to-digital converter (TDC), respectively. In the first case, the delay between the start and the stop signal is converted into an analog voltage, which is then processed by the ADC. In the second method, the delay is directly converted into a digital value. TAC–ADC systems generally have better timing resolution and linearity, but are expensive and difficult to integrate. TDCs, on the other hand, can be integrated into a single chip and hence are better suited to multi-channel systems. However, they have worse timing performance and can handle much lower sustained count rates.
Applications
The usefulness of TD diffuse optics lies in its ability to continuously and noninvasively monitor the optical properties of tissue, making it a powerful diagnostic tool for long-term bedside monitoring in infants and adults alike. It has already been demonstrated that TD diffuse optics can be successfully applied to various biomedical applications such as cerebral monitoring, optical mammography, muscle monitoring, etc.
See also
Near-infrared spectroscopy
Functional near-infrared spectroscopy
Diffuse optical imaging
Neuroimaging
Functional neuroimaging
References
Neuroimaging
Optical imaging
Spectroscopy | Time-domain diffuse optics | Physics,Chemistry | 1,634 |
25,292,513 | https://en.wikipedia.org/wiki/Methiodide | In organic chemistry, a methiodide is a chemical derivative produced by the reaction of a compound with methyl iodide. Methiodides are often formed through the methylation of tertiary amines:
R3N + CH3I → (CH3)R3N+I−
Whereas the parent amines are hydrophobic and often oily, methiodides, being salts, are somewhat hydrophilic and exhibit high melting points. Methiodides exhibit altered pharmacological properties as well.
Examples include:
Cocaine methiodide, a charged cocaine analog which cannot pass the blood brain barrier and enter the brain
Bicuculline methiodide, a water-soluble form of bicuculline
Tertiary phosphines and phosphite esters also form methiodides.
References
Quaternary ammonium compounds | Methiodide | Chemistry | 179 |
18,493,701 | https://en.wikipedia.org/wiki/Darwin%E2%80%93Wallace%20Medal | The Darwin–Wallace Medal is a medal awarded by the Linnean Society of London for "major advances in evolutionary biology". Historically, the medals have been awarded every 50 years, beginning in 1908. That year marked 50 years after the joint presentation by Charles Darwin and Alfred Russel Wallace of two scientific papers—On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection—to the Linnean Society of London on 1 July 1858. Fittingly, Wallace was one of the first recipients of the medal; in his case it was, exceptionally, in gold, rather than the silver version presented in the six other initial awards. However, in 2008 the Linnean Society announced that, due to the continuing importance of evolutionary research, the medal would be awarded on an annual basis beginning in 2010.
Awardees
1908
The first award was of a gold medal to Alfred Russel Wallace, and silver medals to six other distinguished scientists:
Joseph Dalton Hooker
August Weismann
Ernst Haeckel
Francis Galton
E. Ray Lankester
Eduard Strasburger
1958
20 silver medals were awarded:
Edgar Anderson
E. Pavlovsky
Maurice Caullery
Bernhard Rensch
Ronald A. Fisher
G. Gaylord Simpson
C. R. Florin
Carl Skottsberg
Roger Heim
H. Hamshaw Thomas
J. B. S. Haldane
Erik Stensiö
John Hutchinson
Göte Turesson
Julian Huxley
Victor van Straelen
Ernst Mayr
D. M. S. Watson
H. J. Muller
John Christopher Willis (posthumously)
2008
13 silver medals were awarded, including 2 posthumously:
Nick Barton
M.W. Chase
Bryan Clarke
Joseph Felsenstein
Stephen Jay Gould (posthumously)
Peter R. Grant
Rosemary Grant
James Mallet
Lynn Margulis
John Maynard Smith (posthumously)
Mohamed Noor
H. Allen Orr
Linda Partridge
From 2010
Brian Charlesworth (2010)
James A. Lake (2011)
Loren H. Rieseberg (2012)
Godfrey Hewitt (2013)
Dolph Schluter (2014)
Roger Butlin (2015)
Pamela S. Soltis and Douglas E. Soltis (2016)
John N. Thompson (2017)
Josephine Pemberton (2018)
David Reich and Svante Pääbo (2019)
Spencer Barrett (2020)
Sarah Otto (2021)
David Jablonski (2022)
Ziheng Yang (2023)
Peter Crane (2024)
See also
List of biology awards
References
Biology awards
British science and technology awards
Linnean Society of London | Darwin–Wallace Medal | Technology | 517 |
24,113,462 | https://en.wikipedia.org/wiki/HD%20147018%20c | HD 147018 c is a gas giant extrasolar planet which orbits the G-type main sequence star HD 147018, located approximately 140 light years away in the constellation Triangulum Australe. It has a mass at least six and a half times that of Jupiter and orbits HD 147018 at nearly twice the distance between the Earth and the Sun. This planet is about eight times farther from its star than HD 147018 b. It was discovered on August 11, 2009, by the radial velocity method.
References
Exoplanets discovered in 2009
Giant planets
Triangulum Australe
Exoplanets detected by radial velocity | HD 147018 c | Astronomy | 126 |
15,245,998 | https://en.wikipedia.org/wiki/KCNJ13 | Potassium inwardly-rectifying channel, subfamily J, member 13 (KCNJ13) is a human gene encoding the Kir7.1 protein.
See also
Inward-rectifier potassium ion channel
References
Further reading
External links
Ion channels | KCNJ13 | Chemistry | 52 |
78,283,566 | https://en.wikipedia.org/wiki/Zytron | Zytron, also known as DMPA, is a chlorophenoxy herbicide. It controls crabgrass and other weeds in turf preëmergently, as well as ants, chinch bugs and grubs. It is used on baseball pitches in Australia.
Zytron inhibits microtubule assembly, preventing mitosis, which makes it a Group 3 / D / K1 herbicide, similar to dinitroanilines like trifluralin. It was tested and commercially available in the US in 1959, and was applied at 10–20 lbs per acre on turf, a high rate compared to other herbicides.
Zytron disappears almost completely from the body within one hour of mammalian exposure. It does not accumulate in soil and is non-harmful to microflora. DMPA has in testing been applied at rates as high as 67 lbs per acre.
Zytron may cause neurotoxicity in chickens. It is an organophosphorus ester, and other such chemicals are known to cause similar neurotoxicity. 100 mg/kg daily for 10 days was considered the minimum effective dose to observably alter hens' behaviour.
Zytron has been sold under the tradenames "Dow Crabgrass Killer", "Dow 1329", "Dowco 118" and "T-H Crabgrass Killer."
References
Herbicides
Chloroarenes
Thiophosphoryl compounds
Isopropylamino compounds
Methyl esters
Phosphoramidothioates | Zytron | Chemistry,Biology | 316 |
13,703,641 | https://en.wikipedia.org/wiki/HD%20166 | HD 166 or V439 Andromedae (ADS 69 A) is a 6th magnitude star in the constellation Andromeda, approximately 45 light years away from Earth. It is a variable star of the BY Draconis type, varying between magnitudes 6.13 and 6.18 with a 6.23 days periodicity. It appears within one degree of the star Alpha Andromedae and is a member of the Hercules-Lyra association moving group. It also happens to be less than 2 degrees from right ascension 00h 00m.
Star characteristics
HD 166 is a K-type main sequence star, cooler and dimmer than the Sun, and has a stellar classification of K0Ve, where the e suffix indicates the presence of emission lines in the spectrum. The star has a proper motion of 0.422 arcseconds per year in a direction 114.1° from north. It has an estimated visual luminosity of 61% of the Sun, and is emitting like a blackbody with an effective temperature of 5,327 K. It has a diameter about 90% that of the Sun and a radial velocity of −6.9 km/s. Age estimates range from as low as 78 million years, based on its chromospheric activity, up to 9.6 billion years, based on a comparison with theoretical evolutionary tracks. X-ray emission has been detected from this star, with an estimated luminosity of .
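As a rough consistency check (taking the solar effective temperature to be about 5,772 K, an assumed value), the Stefan–Boltzmann relation gives L/L☉ = (R/R☉)²(T/T☉)⁴ ≈ 0.9² × (5,327/5,772)⁴ ≈ 0.59, close to the quoted 61%, which refers to the visual rather than the bolometric luminosity.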
An infrared excess has been detected around HD 166, most likely indicating the presence of a circumstellar disk at a radius of 7.5 AU. The temperature of this dust is 90 K.
Variability
Eric J. Gaidos et al. first detected variability in HD 166 in the year 2000. It was given its variable star designation, V439 Andromedae, in 2006.
It has been found that the periodicity in the photometric variability of HD 166 is coincident with the rotation period. This leads to its classification as a BY Draconis variable, where brightness variations are caused by the presence of large starspots on the surface and by chromospheric activity.
References
External links
Image HD 166
nstars.nau.edu
Andromeda (constellation)
BY Draconis variables
000166
Spectroscopic binaries
K-type main-sequence stars
HD, 000166
Andromedae, V439
0008
000544
Durchmusterung objects
0005
Emission-line stars | HD 166 | Astronomy | 514 |
11,242,818 | https://en.wikipedia.org/wiki/ChemRefer | ChemRefer was a service that allowed searching of freely available, full-text chemical and pharmaceutical literature published by authoritative sources.
Features included basic and advanced search options, mouseover detailed view, an integrated chemical structure drawing and search tool, downloadable toolbar, customized RSS feeds, and newsletter.
ChemRefer was primarily of use to readers who did not have subscriptions for accessing restricted chemical literature, and to publishers who offered either open access or hybrid open access journals and sought to attract further subscriptions by publicly releasing part of their archive.
See also
Google Scholar
Windows Live Academic
BASE
PubMed
References
External links
Recommendations & reviews
Cited as an "Internet Site of the Week" by the library of the Rowland Institute for Science at Harvard University
Recommended in the list of chemical literature databases by the library of the United States Naval Research Laboratory
Recommended in the list of chemical literature databases by the library of Mount Allison University
Review of ChemRefer at Depth-First chemoinformatics magazine
Recommended in the list of chemical literature databases by the Technology Research Portal, Belgium
Recommended in the list of chemical literature databases by the Centre for Research and Technology, Thessaloniki
Background
Interview with William James Griffiths at Reactive Reports chemistry magazine
Open access overview by Professor Peter Suber, Earlham College
Scholarly search services
Chemistry literature
Information retrieval systems
Open access projects | ChemRefer | Technology | 268 |
18,957,256 | https://en.wikipedia.org/wiki/Batch%20Enhancer | BATCH ENHANCER (or BE, or BE.EXE) is an applet or free-standing utility packaged with Norton Utilities (NU) to graphically enhance the presentation of batch files. Batch Enhancer allows the use of colours, square graphics, delays, beeps, more complex decision parameters, easier-to-create user-choice menus and other functions.
BE is unusual in Norton's suite of utilities in that it was not designed to primarily be launched by NU itself, but be used like another (DOS) command/function to simplify the writing of executable batch files, or make them more powerful, such as for creating colourful and animated DOS screens that were typically plain blue & cyan.
Examples of usage syntaxes: BE command (parameters)
or
BE filespec
Commands available:
ASK BEEP BOX CLS DELAY PRINTCHAR ROWCOL
SA WINDOW
For more help on a specific command type:
BE command ?
BE BEEP [switches]
or
BE BEEP [filespec]
Switches
/Dn Duration of the tone in n/18 seconds
/Fn Sound a tone of frequency n
/Rn Repeat the tone n times
/Wn Wait between tones n/18 seconds
PLAY A LITTLE TUNE.
be beep /F392 /D4;
be beep /F392 /D1;
be beep /F523 /D15;
BE BOX Usage: BE BOX top left bottom right [color]
BE window 0,0,24,79 bright yellow on blue explode
BE window 4,11,20,68 bright yellow on green explode shadow
Usage: BE DELAY ticks (1 tick = 1/18 second)
If the line "BE DELAY 18" was run from inside a batch file, it would pause for one second before moving to the next instruction. At modern computer speeds many batch files run too fast to be followed by a user; inserting a delay ensures that the user would see the DOS window that executes the .BAT file.
References
Software features
DOS software
Gen Digital software | Batch Enhancer | Technology | 417 |
12,608 | https://en.wikipedia.org/wiki/Geodesy | Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics.
Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor.
History
Geodesy began in pre-scientific antiquity, so the very word geodesy comes from the Ancient Greek word γεωδαισία or geodaisia (literally, "division of Earth").
Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed South.
Definition
In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying.
In German, geodesy can refer to either higher geodesy (höhere Geodäsie or Erdmessung, literally "geomensuration") — concerned with measuring Earth on the global scale, or engineering geodesy (Ingenieurgeodäsie) that includes surveying — measuring parts or regions of Earth.
For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also.
To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy.
Geoid and reference ellipsoid
The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called geoidal undulation, and it varies globally between ±110 m based on the GRS 80 ellipsoid.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid.
The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable.
Coordinate systems in space
The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X, Y, and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis.
Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.
It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system.
Geocentric coordinate systems used in geodesy can be divided naturally into two classes:
The inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars or, equivalently, to the rotation axes of ideal gyroscopes. The X-axis points to the vernal equinox.
The co-rotating reference systems (also ECEF or "Earth Centred, Earth Fixed"), in which the axes are "attached" to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane.
The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists.
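Neglecting polar motion, precession and nutation, this transformation amounts to a single rotation about the common Z-axis by the Earth rotation angle (apparent sidereal time expressed as an angle). A minimal sketch, in which the rotation angle is assumed to be supplied by the caller and the function name is illustrative:

```python
import math

def eci_to_ecef(x, y, z, theta_rad):
    """Rotate inertial (ECI) coordinates into the co-rotating (ECEF) frame.

    theta_rad : Earth rotation angle (apparent sidereal time as an angle, radians).
    Polar motion, precession and nutation are neglected in this sketch.
    """
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return (c * x + s * y,   # X towards the Greenwich meridian
            -s * x + c * y,  # Y completes the right-handed frame
            z)               # Z along the rotation axis is unchanged
```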
Coordinate systems in the plane
In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use:
Plano-polar, with points in the plane defined by their distance, s, from a specified point along a ray having a direction α from a baseline or axis.
Rectangular, with points defined by distances from two mutually perpendicular axes, x and y. Contrary to the mathematical convention, in geodetic practice, the x-axis points North and the y-axis East.
One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares.
An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence.
It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have:

x = s cos α
y = s sin α

The reverse transformation is given by:

s = √(x² + y²)
α = arctan(y/x), taking the quadrant of (x, y) into account.
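A short numerical sketch of both transformations, using the geodetic convention stated above (x to the north, y to the east, azimuth α counted clockwise from north); the function names are illustrative:

```python
import math

def polar_to_rect(s, alpha_deg):
    """Distance s and azimuth alpha (degrees, clockwise from north) -> (x, y)."""
    a = math.radians(alpha_deg)
    return s * math.cos(a), s * math.sin(a)   # x = north, y = east

def rect_to_polar(x, y):
    """Rectangular (x north, y east) -> distance s and azimuth alpha (degrees)."""
    s = math.hypot(x, y)
    alpha = math.degrees(math.atan2(y, x)) % 360.0   # atan2 resolves the quadrant
    return s, alpha

x, y = polar_to_rect(100.0, 30.0)   # 100 m at an azimuth of 30 degrees
s, alpha = rect_to_polar(x, y)      # recovers (100.0, 30.0)
```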
Heights
In geodesy, point or terrain heights are "above sea level" as an irregular, physically defined surface.
Height systems in use are:
Orthometric heights
Dynamic heights
Geopotential heights
Normal heights
Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid, which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses.
One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid.
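The conversion such software performs is, to a good approximation, the geoid-undulation relation itself: an ellipsoidal height h, an orthometric height H and the geoid undulation N at the same point satisfy h ≈ H + N, so a GNSS-derived ellipsoidal height is turned into a height above sea level by subtracting the locally modelled undulation.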
Geodetic datums
Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others.
In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation".
Positioning
General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network.
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied.
Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached.
Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points.
One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements.
Geodetic problems
In geometrical geodesy, there are two main problems:
First geodetic problem (also known as direct or forward geodetic problem): given the coordinates of a point and the directional (azimuth) and distance to a second point, determine the coordinates of that second point.
Second geodetic problem (also known as inverse or reverse geodetic problem): given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points.
The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle.
The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae.
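For illustration, the inverse problem in its simplest, spherical approximation (a great-circle computation rather than Vincenty's ellipsoidal solution) can be sketched as follows; the mean Earth radius used here is an assumed constant:

```python
import math

R_EARTH = 6_371_000.0   # assumed mean Earth radius [m] (spherical approximation)

def inverse_spherical(lat1, lon1, lat2, lon2):
    """Inverse geodetic problem on a sphere: great-circle distance [m] and
    forward azimuth [degrees, clockwise from north] from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)

    # Haversine formula for the central angle between the two points
    a = (math.sin((p2 - p1) / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2.0) ** 2)
    distance = 2.0 * R_EARTH * math.asin(math.sqrt(a))

    # Initial bearing (azimuth) at point 1
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    return distance, azimuth
```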
Observational concepts
As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer):
Plumbline or vertical: (the line along) the direction of local gravity.
Zenith: the (direction to the) intersection of the upwards-extending gravity vector at a point and the celestial sphere.
Nadir: the (direction to the) antipodal point where the downward-extending gravity vector intersects the (obscured) celestial sphere.
Celestial horizon: a plane perpendicular to the gravity vector at a point.
Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France).
Elevation: the angular height of an object above the horizon; alternatively: zenith distance equal to 90 degrees minus elevation.
Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance.
North celestial pole: the extension of Earth's (precessing and nutating) instantaneous spin axis extended northward to intersect the celestial sphere. (Similarly for the south celestial pole.)
Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere.
Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles.
Local meridian: the plane which contains the direction to the zenith and the celestial pole.
Measurements
The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such heights above sea level (which are, for practical purposes, the most useful); the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination will increase, too.
The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position.
Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, also there are quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases.
Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys.
To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed.
Gravity is measured using gravimeters, of which there are two kinds. First are absolute gravimeters, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation.
In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks.
Units and measures on the ellipsoid
Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric
measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth.
One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the pole and shortest at the equator, as is the length of the nautical mile.
A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This means that one kilometre roughly equals (1/40,000) * 360 * 60 ≈ 0.54 meridional minutes of arc, or about 0.54 nautical miles. (This is not exact, as the two units were defined on different bases: the international nautical mile is exactly 1,852 m, which corresponds to rounding the quotient 1,000/0.54 m to four digits.)
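A quick arithmetic check of these relations (a minimal sketch; 40,000 km is the historical definition target for the meridional circumference, not a modern measured value):

# The metre was defined so that the equator-to-pole meridian arc is 10,000 km,
# giving a meridional circumference of 40,000 km.
circumference_m = 40_000_000
minutes_of_arc = 360 * 60                        # 21,600 minutes of arc in a full circle
metres_per_minute = circumference_m / minutes_of_arc
print(metres_per_minute)                         # about 1851.85 m, rounded to the 1,852 m nautical mile
print(1000 / metres_per_minute)                  # about 0.54 minutes of arc (nautical miles) per kilometre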
Temporal changes
Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms:
Continental plate motion, plate tectonics
The episodic motion of tectonic origin, especially close to fault lines
Periodic effects due to tides and tidal loading
Postglacial land uplift due to isostatic adjustment
Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology, and oceans
Sub-daily polar motion
Length-of-day variability
Earth's center-of-mass (geocenter) variations
Anthropogenic movements such as reservoir construction or petroleum or water extraction
Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS).
Techniques for studying geodynamic phenomena on global scales include:
Satellite positioning by GPS, GLONASS, Galileo, and BeiDou
Very-long-baseline interferometry (VLBI)
Satellite laser ranging (SLR) and lunar laser ranging (LLR)
DORIS
Regionally and locally precise leveling
Precise tachymeters
Monitoring of gravity change using land, airborne, shipborne, and spaceborne gravimetry
Satellite altimetry based on microwave and laser observations for studying the ocean surface, sea level rise, and ice cover monitoring
Interferometric synthetic aperture radar (InSAR) using satellite images.
Notable geodesists
See also
Fundamentals
Geodesy (book)
Concepts and Techniques in Modern Geography
Geodesics on an ellipsoid
History of geodesy
Physical geodesy
Earth's circumference
Physics
Geosciences
Governmental agencies
National mapping agencies
U.S. National Geodetic Survey
National Geospatial-Intelligence Agency
Ordnance Survey
United States Coast and Geodetic Survey
United States Geological Survey
International organizations
International Union of Geodesy and Geophysics (IUGG)
International Association of Geodesy (IAG)
International Federation of Surveyors (FIG)
International Geodetic Student Organisation (IGSO)
Other
EPSG Geodetic Parameter Dataset
Meridian arc
Surveying
References
Further reading
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 1, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 1 (Teubner, Leipzig, 1880).
F. R. Helmert, Mathematical and Physical Theories of Higher Geodesy, Part 2, ACIC (St. Louis, 1964). This is an English translation of Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Vol 2 (Teubner, Leipzig, 1884).
B. Hofmann-Wellenhof and H. Moritz, Physical Geodesy, Springer-Verlag Wien, 2005. (This text is an updated edition of the 1967 classic by W.A. Heiskanen and H. Moritz).
W. Kaula, Theory of Satellite Geodesy : Applications of Satellites to Geodesy, Dover Publications, 2000. (This text is a reprint of the 1966 classic).
Vaníček P. and E.J. Krakiwsky, Geodesy: the Concepts, pp. 714, Elsevier, 1986.
Torge, W (2001), Geodesy (3rd edition), published by de Gruyter, .
Thomas H. Meyer, Daniel R. Roman, and David B. Zilkoski. "What does height really mean?" (This is a series of four articles published in Surveying and Land Information Science, SaLIS.)
"Part I: Introduction" SaLIS Vol. 64, No. 4, pages 223–233, December 2004.
"Part II: Physics and gravity" SaLIS Vol. 65, No. 1, pages 5–15, March 2005.
"Part III: Height systems" SaLIS Vol. 66, No. 2, pages 149–160, June 2006.
"Part IV: GPS heighting" SaLIS Vol. 66, No. 3, pages 165–183, September 2006.
External links
Geodetic awareness guidance note, Geodesy Subcommittee, Geomatics Committee, International Association of Oil & Gas Producers
Earth sciences
Cartography
Measurement
Navigation
Applied mathematics
Articles containing video clips | Geodesy | Physics,Astronomy,Mathematics | 5,099 |
5,308,524 | https://en.wikipedia.org/wiki/Powder-actuated%20tool | A powder-actuated tool (PAT, often generically called a Hilti gun or a Ramset gun after their manufacturing companies) is a type of nail gun used in construction and manufacturing to join materials to hard substrates such as steel and concrete. Known as direct fastening or explosive fastening, this technology is powered by a controlled explosion of a small chemical propellant charge, similar to the process that discharges a firearm.
Features
Powder-actuated tools are often used because of their speed of operation, compared to other processes such as drilling and then installing a threaded fastener. They can more easily be used in narrow or awkward locations, such as installing steel suspension clips into an overhead concrete ceiling.
Powder-actuated tools are powered by small explosive cartridges, which are triggered when a firing pin strikes a primer, a sensitive explosive charge in the base of the cartridge. The primer ignites the main charge of powder, which burns rapidly. The hot gases released by the burning of the propellant rapidly build pressure within the cartridge, which pushes either directly on the head of the fastener, or on a piston, accelerating the fastener out of the muzzle.
Powder-actuated tools come in high-velocity and low-velocity types. In high-velocity tools, the propellant charge acts directly on the fastener in a process similar to a firearm. Low-velocity tools introduce a piston into the chamber. The propellant acts on the piston, which then drives the fastener into the substrate. The piston is analogous to the bolt of a captive bolt pistol.
A tool is considered low velocity if the average test velocity of the fastener is not in excess of with no single test having a velocity of over . A high velocity tool propels or discharges a stud, pin, or fastener in excess of .
High-velocity tools made or sold in the United States must, under certain circumstances, comply with applicable safety standards; many are used in the shipbuilding and steel industries.
Powder-actuated fasteners are made of special heat-treated steel; common nails are unsafe for this application. There are many specialized fasteners designed for specific applications in the construction and manufacturing industries.
History
Powder-actuated technology was developed for commercial use during the Second World War, when high-velocity fastening systems were used to temporarily repair damage to ships. In the case of hull breaches, these tools fastened steel plates over damaged areas. These tools were developed by Mine Safety Appliances, for the United States Navy. Powder-actuated tools were investigated and used prior to this development; they were used in anti-submarine warfare during the First World War and were the subject of a 1921 United States patent (US Patent No. 1365869).
Types
Powder actuated tools can be variously classified:
Direct acting (the charge acts directly on the head of the nail or high velocity), or indirect (using an intermediate piston or low velocity)
Single-shot, or magazine-fed
Automatic or manual piston cycling
Automatic or manual feed of the charges
Energy sources
Powder-actuated tools are powered by specially-designed blank firearm cartridges, also informally called "loads", "boosters", "rounds", or "charges".
In many cases, the charges are ordinary firearm cartridges with modified casings, and the bullets omitted. The .22 Short, developed by Smith & Wesson, is common. These charges may be hand-fed (single-shot), or manufactured and distributed on a plastic carrier strip.
Color coding
The three single-shot strengths or colors typically sold to the general public are brown, green, and yellow in brass-colored casings.
Not all powder-actuated tools are rated for high-capacity charges—the strongest charge (nickel-purple), for example, is dangerous in a tool not rated for the high pressures it generates. The table above is for a slug from a test device.
Safety and regulation
As with their air-actuated cousins, powder-actuated guns have a muzzle safety interlock. If the muzzle is not pressed against a surface with sufficient force, the firing pin is blocked and cannot reach the load to fire it. This helps ensure that the gun does not discharge in an unsafe manner, causing the nail to become an unrestrained projectile.
Due to their potential for causing personal injury, OSHA regulations in the US require certification specific to the tool being used before any person is permitted to rent or use powder-actuated equipment. Most manufacturers of powder-actuated nail guns offer training and certification, some with no further charge online testing. In addition, special instruction is necessary if the prospective user is unable to distinguish colors used in the color code system that identifies proper power levels. Most certifications are accepted for life; however, in California they must be renewed every three years.
See also
Pneumatic tool
Power tool
References
Mechanical hand tools | Powder-actuated tool | Physics | 987 |
63,094,759 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20and%20the%20Three-Body%20Problem | Poincaré and the Three-Body Problem is a monograph in the history of mathematics on the work of Henri Poincaré on the three-body problem in celestial mechanics. It was written by June Barrow-Green, as a revision of her 1993 doctoral dissertation, and published in 1997 by the American Mathematical Society and London Mathematical Society as Volume 11 in their shared History of Mathematics series (). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
The three-body problem concerns the motion of three bodies interacting under Newton's law of universal gravitation, and the existence of orbits for those three bodies that remain stable over long periods of time. This problem has been of great interest mathematically since Newton's formulation of the laws of gravity, in particular with respect to the joint motion of the sun, earth, and moon. The centerpiece of Poincaré and the Three-Body Problem is a memoir on this problem by Henri Poincaré, entitled Sur le problème des trois corps et les équations de la dynamique [On the problem of the three bodies and the equations of dynamics]. This memoir won the King Oscar Prize in 1889, commemorating the 60th birthday of
Oscar II of Sweden, and was scheduled to be published in Acta Mathematica on the king's birthday, until Lars Edvard Phragmén and Poincaré determined that there were serious errors in the paper. Poincaré called for the paper to be withdrawn, spending more than the prize money to do so. In 1890 it was finally published in revised form, and over the next ten years Poincaré expanded it into a monograph, Les méthodes nouvelles de la mécanique céleste [New methods in celestial mechanics]. Poincare's work led to the discovery of chaos theory, set up a long-running separation between mathematicians and dynamical astronomers over the convergence of series, and became the initial claim to fame for Poincaré himself. The detailed story behind these events, long forgotten, was brought back to life in a sequence of publications by multiple authors in the early and mid 1990s, including Barrow-Green's dissertation, a journal publication based on the dissertation, and this book.
The first chapter of Poincaré and the Three-Body Problem introduces the problem and its second chapter surveys early work on this problem, in which some particular solutions were found by Newton, Jacob Bernoulli, Daniel Bernoulli, Leonhard Euler, Joseph-Louis Lagrange, Pierre-Simon Laplace, Alexis Clairaut, Charles-Eugène Delaunay, Hugo Gyldén, Anders Lindstedt, George William Hill, and others. The third chapter surveys the early work of Poincaré, which includes work on differential equations, series expansions, and some special solutions of the three-body problem, and the fourth chapter surveys the history of the founding of Acta Mathematica by Gösta Mittag-Leffler and of the prize competition announced by Mittag-Leffler in 1885, which Barrow-Green suggests may have been deliberately set with Poincaré's interests in mind and which Poincaré's memoir would win.
The fifth chapter concerns Poincaré's memoir itself; it includes a detailed comparison of the significant differences between the withdrawn and published versions, and overviews the new mathematical content it contained, including not only the possibility of chaotic orbits but also homoclinic orbits and the use of integrals to construct invariants of systems. After a chapter on Poincaré's expanded monograph and his other later work on the three-body problem, the remainder of the book discusses the influence of Poincaré's work on later mathematicians. This includes contributions on the singularities of solutions
by Paul Painlevé, Edvard Hugo von Zeipel, Tullio Levi-Civita, Jean Chazy, Richard McGehee, Donald G. Saari, and Zhihong Xia,
on the stability of solutions by Aleksandr Lyapunov,
on numerical results by George Darwin, Forest Ray Moulton, and Bengt Strömgren,
on power series by Giulio Bisconcini and Karl F. Sundman,
and on the KAM theory by Andrey Kolmogorov, Vladimir Arnold, and Jürgen Moser,
and additional contributions by George David Birkhoff, Jacques Hadamard, V. K. Melnikov, and Marston Morse. However, much of modern chaos theory is left out of the story "as amply dealt with elsewhere", and the work of Qiudong Wang generalizing Sundman's convergent series from three bodies to arbitrary numbers of bodies is also omitted. An epilogue considers the impact of modern computer power on the numerical study of Poincaré's theories.
Audience and reception
This book is aimed at specialists in the history of mathematics,
but can be read by any student of mathematics familiar with differential equations,
although the central part of the book, analyzing Poincaré's work, may be too light on mathematical detail to be readily understandable
without reference to other material.
Reviewer Ll. G. Chambers writes "This is a superb piece of work and it throws new light on one of the most fundamental topics of mechanics."
Reviewer Jean Mawhin calls it "the definitive work about the chaotic story of the King Oscar Prize" and "pleasantly accessible"; reviewer R. Duda calls it "clearly organized, well written, richly documented", and both Mawhin and Duda call it a "valuable addition" to the literature. And reviewer Albert C. Lewis writes that it "provides insights into higher mathematics that justify its being on every university mathematics student's reading list". Although reviewer Florin Diacu (himself a noted researcher on the -body problem) complains that Wang was omitted, that Barrow-Green "sometimes fails to see connections ... within Poincaré's own work" and that some of her translations are inaccurate, he also recommends the book.
References
Astronomical dynamical systems
Books about the history of mathematics
1997 non-fiction books | Poincaré and the Three-Body Problem | Astronomy,Mathematics | 1,249 |
64,394,483 | https://en.wikipedia.org/wiki/Institute%20of%20Naval%20Medicine | The Institute of Naval Medicine is the main research centre and training facility of the Royal Navy Medical Service. The Institute was established in Alverstoke, Gosport, in 1969.
The Institute today offers 'specialist medical training, guidance and support from service entry to resettlement', and provides 'extensive research, laboratory and clinical facilities' for use across the armed services.
History
Royal Naval Medical School
First established at Royal Naval College, Greenwich in 1912, the Royal Naval Medical School provided induction training for new-entry medical officers, and promotion training for the rank of Fleet Surgeon (later Surgeon Lieutenant Commander). The initial course provided prospective naval surgeons with the skills to function as a sole practitioner at sea; subjects taught included naval hygiene, dentistry, radiography, anaesthetics and tropical medicine. It was from the start a research-focused institution, which in its early decades played a key role in the production of vaccines and sera. Clinical training took place initially at the Dreadnought Seamen's Hospital, then (after the First World War) at the London Hospital; courses prior to the Second World War were validated by the University of London.
In the early 1930s induction training moved to Haslar, but other teaching and research work continued to be based at Greenwich; by that time the RMNS was engaged in 'a very large amount of highly technical work of the greatest importance to the health of the Navy', including research, analytics, pathological examinations, tropical disease investigations and vaccine making.
The Royal Naval Medical School was removed from Greenwich to Clevedon at the start of World War II, where it remained until 1948. During the war its work continued: in the back garden of a house on Elton Road, Clevedon, in 1942 the RNMS constructed the world's first fully functional factory for the mass production of penicillin.
In 1948 the Royal Naval Medical School was relocated to Monckton House, Alverstoke. The Royal Naval Physiological Laboratory had been established here in 1942, a joint project of the RN Scientific Service and the RN Medical Service.
In the 1960s short courses were offered in atomic, underwater and tropical medicine. At the same time, the RNMS began to undertake increasingly specialised medical research in support of the Polaris submarine-launched nuclear weapons programme. Specialised research, training and radiological protection facilities were built in the grounds of Monckton House, and in 1969 the establishment was renamed the Institute of Naval Medicine.
Institute of Naval Medicine
At a safety conference on Saturday 25 March 1972 at the University of Birmingham, organised by the National Council of British Mountaineering and attended by around five hundred climbing experts, Surgeon Commander Duncan Walters (August 1927 - August 2021) showed a film entitled Give Him Air, about a swimmer in Malta who was accidentally speared in the lung by a harpoon gun. The film showed the gruesome after-effects of the harpoon incident, causing eight conference attendees to faint; they had to be carried outside. The film was on the subjects of mountaineering injuries and expedition medicine. The conference chairman was Sir Jack Longland. It was recommended that walkers on mountains in North Wales be guided by someone holding the Mountain Leadership Certificate. In 1970, 43% of those injured on mountains in North Wales were aged 15 to 20.
In November 1973 a £200,000 environmental medical centre opened, which simulated life inside a submarine. From 12 November 1973, four sailors (medical ratings) were shut inside this for thirty days, to test atmospheric pollution.
J and P Engineering Reading Ltd developed a photo-sensitive radiation detector for the institute; it was later sold to the National Radiological Protection Board (NRPB) in Oxfordshire and to CERN.
At a conference in Aberdeen in September 1988, Surgeon Captain Ramsay Pearson, head of undersea medicine, said that recreational diving in the UK had too many accidents, due to decompression computers, which he claimed did not have built-in safety factors. The National Hyperbaric Centre in Aberdeen (built by the government in 1987) agreed with him.
In August 2000 the site sent four doctors and two staff to the Kursk submarine disaster in a team of twenty-seven from the UK.
As of 2005 the Institute's mission statement was 'to improve the operational capability of the Royal Navy by promoting good health and safety and maximising the effectiveness of personnel'. Its five 'principal business areas' were:
Scientific advice on maritime and military health and safety
Operationally deployable specialist medical and scientific staff (principally focusing on diving, submarine and radiation medicine)
Specialist training
Research and equipment testing
Corporate services (including medico-legal advice, medical resettlement, libraries and biostatistics).
Sports and survival medicine
The Channel 5 documentary Survivor featured the institute, and surviving cold temperatures on the Cascade Range, on Wednesday 28 January 1998. Sir Ranulph Fiennes visited on Monday 11 October 1999, when he was put in an immersion tank.
The British Olympic coxless four men's rowing team had medical tests, with a vitalograph for lung function in 2008, later winning the gold medal in August 2012.
Activity
Training
It trained medical staff for the Naval Emergency Monitoring Team at three sites at Gare Loch, Portsmouth and Plymouth, which worked with the Nuclear Accident Response Organisation (NARO) at the Clyde Submarine Base (HMNB Clyde)
In the 1970s, nurses in the navy trained at the navy hospitals in Gosport and Plymouth; the Royal Naval School of Nursing began around 1962, in Gosport. There is no longer a navy site at Plymouth, but there is a Ministry of Defence Hospital Unit (MODHU) at Plymouth hospital. All medical assistants would complete 22 weeks of training at the RN Hospital in Gosport, followed by another 32 weeks at the RN hospitals at Gosport or Plymouth for naval (ship) medical assistants. Submarine medical assistants (MASM) would be trained at the institute, for example in radiation decontamination.
Medical assistants are trained at the Defence Medical Academy in Whittington, Staffordshire, with nuclear training at the Nuclear Department at HMS Sultan in Gosport, which will move to Scotland. The Department of Nuclear Science and Technology moved from London in October 1998.
Research
Drowning
The site has done much research into drowning, which kills 700–1000 people a year in the UK, a third of them males aged 15–35. Surgeon Commander Frank Golden (5 June 1936 - 5 January 2014), the Director of Research in the 1980s, conducted many important investigations. Many able swimmers died, no more than 10 yards from refuge, from the effects of cold water. Frank Golden later worked with Professor Mike Tipton at the University of Surrey Robens Institute. Together they wrote the book Essentials of Sea Survival in 2002.
So-called 'dry drowning' is caused by the shock of cold water. A possible cause is cold water causing the larynx to spasm. Animals have a 'diving response', but humans hyperventilate, and the heart beats too quickly due to a chemical imbalance.
Drowning is the third most common form of accidental death in the UK after road accidents and home injuries. The victims are often competent swimmers in canals, rivers or flooded quarries in spring or early summer, and there has not been much research on this form of drowning. Most deaths occur in the first three minutes; those who survive beyond 15 minutes mostly survive to 30 minutes. Admiral Frank Golden in the 1990s thought that the deaths were linked to the gasp reflex found in cold showers. There is a large increase in blood pressure and heart rate, and uncontrolled breathing makes swimming impossible due to the cold shock response. Work had been carried out with the University of Leeds on 'immersion hypothermia'.
Diving
In the 1990s, Surgeon Commander James Francis researched 'nitrogen narcosis' at depths below 30 m of water.
James Francis became Head of Undersea Medicine and left the Navy in 1996.
The INM works with The Physiological Society, and staff have given lectures at the Society in London.
Exposure and cold temperatures
Surgeon Commander Jim Sykes, the Professor of Naval Occupation Medicine, researched exposure.
Surgeon Commander Howard Oakley researched exposure, drowning, and premature junctional contraction in the 1990s.
Seasickness
In November 1979 the site tested a new seasickness pill, cinnarizine, on HMS Broadsword, comparing it with the previous medication hyoscine (scopolamine), and worked with the MRC.
Women submariners
In 2010 the USA allowed women on its submarines, but women submariners were still not allowed in the UK, owing to concerns that carbon dioxide in a submarine's atmosphere could damage a foetus.
In December 2011 women were allowed on submarines, with officers first then all women from 2015. All women would serve on the Astute class submarines from 2016. Women had been on surface ships since 1990. There are around 3420 females in the Royal Navy, about 9%.
Structure
It is situated in the south of Gosport. The Medical Officer-in-Charge is also the Dean of Naval Medicine.
Departments
Diving and Hyperbaric Medicine; when it was known as the Undersea Medicine Department, it worked with the Submarine Escape Training Tank and HMS Reclaim
Submarine and Radiation Medicine; the Naval Radiological Protection Service became the Defence Radiological Protection Service in 1982, which in turn became DERA Radiation Protection Services
Environmental and Industrial Hazards Laboratories, investigates drinking water
Environmental Medicine and Science; the EMU - Environmental Medicine Unit had a Fitness Anthropometric Clinic
Applied Physiology and Human Factors, investigates nutrition and supports the Defence Nutrition Advisory Service
Acoustics and Vibration, has worked with the Institute of Sound and Vibration Research at the University of Southampton; the Royal Navy has an exemption from the Control of Vibration at Work Regulations 2005
Cold Injury Clinic
RNMS School, works with the Resuscitation Council UK on first aid
Medical Officers in Charge
Surgeon Rear Admiral Sir James Watt 1969–72
Surgeon Rear Admiral A. O'Connor 24 July 1972 - 1975
Surgeon Rear Admiral Sir John Rawlins 1975–77
Surgeon Rear Admiral Sir John Harrison 30 March 1977 - 1981
Surgeon Rear Admiral R. J. A. Lambert 1981-1983
Surgeon Captain E. P. Beck 1983-1985
Surgeon Commodore J. W. Richardson 1985-1987
Surgeon Captain R. W. F. Paul 1987-1989
Surgeon Captain A. Craig 1989-1990
Surgeon Captain J. W. Davies 1991-1993
Surgeon Rear Admiral A. Craig 1993-1994
Surgeon Commodore F Reed OBE - 2005
Surgeon Commodore Jim Sykes 2005-2008
Surgeon Captain D.C. Brown 25 September 2008 - 2011
Surgeon Captain N.P. Butterfield August 2011 - 2012
Surgeon Captain M.A. Howell September 2012 -
See also
in Devon
in Bedfordshire
, military medical research site in France
References
External links
Institute of Naval Medicine
1969 establishments in the United Kingdom
Underwater diving medicine organizations
Gosport
Medical research institutes in the United Kingdom
Medical schools in England
Military education and training in Hampshire
Military medical research organisations of the United Kingdom
Military medical training establishments
Radiation protection organizations
Research institutes established in 1969
Research institutes in Hampshire
Royal Navy bases in Hampshire
Royal Navy Medical Service
Submarine education and training
Thermal medicine
Toxicology in the United Kingdom
Toxicology organizations
Underwater diving in the United Kingdom | Institute of Naval Medicine | Engineering,Environmental_science | 2,252 |
31,599,049 | https://en.wikipedia.org/wiki/Stars%20virus | The Stars virus is a computer virus which infects computers running Microsoft Windows. It was named and discovered by Iranian authorities in April 2011. Iran claimed it was used as a tool to commit espionage. Western researchers came to believe it is probably the same thing as the Duqu virus, part of the Stuxnet attack on Iran.
History
The Stars virus was studied in a laboratory in Iran, which meant that major vendors of antivirus software did not have access to samples and therefore could not assess any potential relation to Duqu or Stuxnet. Foreign computer experts say they have seen no evidence of the virus, and some even doubt its actual existence. Iran claimed Stars to be harmful to computer systems. It is said to inflict minor damage in the initial stage and might be mistaken for executable files of governmental organizations.
This is the second attack claimed by Iran after the Stuxnet computer worm discovered in July 2010, which targeted industrial software and equipment.
Researchers came to believe that the Stars virus found by Iranian computer specialists was the Duqu virus. The Duqu virus keylogger was embedded in a JPEG file; since most of the file was taken up by the keylogger, only a portion of the image remained. It turned out to be an image taken by the Hubble telescope showing a cluster of stars, the aftermath of two galaxies colliding. Symantec, Kaspersky and CrySyS researchers came to believe Duqu and Stars were the same virus.
See also
Flame (malware)
Cyber electronic warfare
Cyber security standards
Cyber warfare
List of cyber attack threat trends
Proactive Cyber Defence
References
2011 in computing
2011 in Iran
Computer viruses
Cyberwarfare
Industrial computing
Nuclear program of Iran
Rootkits
Cyberwarfare in Iran | Stars virus | Technology,Engineering | 353 |
6,816,383 | https://en.wikipedia.org/wiki/Burgundy%20mixture | Burgundy mixture, named after the French district where it was first used to treat grapes and vines, is a mixture of copper sulfate and sodium carbonate. This mixture, which can have an overall copper concentration within the range of 1% through 20%, is used as a fungicidal spray for trees and small fruits.
History
Similar to the Bordeaux mixture, one of the earliest fungicides in use, Burgundy mixture, also known as “sal soda Bordeaux”, is applied to plants as a preventative before fungi have appeared. Bordeaux mixture contains copper(II) sulfate, CuSO4, and hydrated lime, Ca(OH)2, while Burgundy mixture contains copper sulfate, CuSO4, and sodium carbonate, Na2CO3. First used around 1885, Burgundy mixture has since been replaced by synthetic organic compounds, or by compounds that contain copper in a non-reactive, chelated form. This helps to prevent the accumulation of high levels of copper in sediments surrounding the plants.
Synthesis and composition
Burgundy mixture is made by combining dissolved copper sulfate and dissolved sodium carbonate. Dissolved copper sulfate ratios generally range from 1:1 to 1:18, and sodium carbonate is generally added in higher quantities, at a dissolved ratio of 1:1.5. Over time, the sodium carbonate crystallizes out of solution, and the closer the copper sulfate to carbonate mixture is to a 1:1 ratio, the faster this process occurs. This property is one key factor in the general abandonment of Burgundy mixture, as the mixture must be prepared shortly before its intended use.
Uses and mode of action
Burgundy mixture is used as a preemptive fungicide prevention for trees and small fruits. This occurs because the Cu(II) ions are capable of interfering with enzymes found within the spores of many fungi, preventing germination from occurring. Unfortunately, the mechanism for copper antifungal properties is not well understood, though it is thought that interactions between the copper and negatively charged portions of the cellular membranes of the fungi promote an altered shape and increased membrane permeability, which alters the homeostasis of the cell and can lead to insufficient uptake and storage of essential nutrients and ions.
References
Copper(II) compounds
Fungicides | Burgundy mixture | Biology | 450 |
61,391,308 | https://en.wikipedia.org/wiki/Alphasyllabic%20numeral%20system | Alphasyllabic numeral systems are a type of numeral systems, developed mostly in India starting around 500 AD. Based on various alphasyllabic scripts, in this type of numeral systems glyphs of the numerals are not abstract signs, but syllables of a script, and numerals are represented with these syllable-signs. On the basic principle of these systems, numeric values of the syllables are defined by the consonants and vowels which constitute them, so that consonants and vowels are - or are not in some systems in case of vowels - ordered to numeric values. While there are many hundreds of possible syllables in a script, and since in alphasyllabic numeral systems several syllables receive the same numeric value, so
the mapping is not injective.
Alphasyllabaries
The basic principle of the Indian alphasyllabaries is a set of 33 consonant-signs, which are combined with a set of about 20 diacritic marks that indicate the vowels of the brahmi scripts; together these produce a set of signs for syllables. Unmarked consonant-signs denote the syllable with the inherent vowel ’a’.
Indian alphasyllabic numeration
Starting around 500 AD, Indian astronomers and astrologers began to use this new principle for numeration, assigning numeral values to the phonetic signs of various Indian alphasyllabic scripts – the brahmi scripts. Earlier 20th-century scholars supposed that the Indian grammarian Pāṇini had used alphasyllabic numerals already in the 7th century BC. Since there is no direct evidence for any alphasyllabic numeration in India until about 510 AD, this theory is no longer supported.
These systems, known collectively as varnasankhya systems, were considered to be distinct from other Indian systems – i.e. brahmi or kharosthi numerals – that had abstract numeral-signs. Like the alphabetic systems of Europe and the Middle East, these systems used the phonetic signs of a script for numeration, but they were more flexible than those. Three significant systems among them are: Āryabhaṭa numeration, the katapayadi system, and the aksharapalli numerals.
Alphasyllabic numeration is very important for understanding Indian astronomy, astrology, and numerology, since Indian astronomical texts were written in Sanskrit verse, which had strict metrical form. These systems had the advantage of being able to give any word a numerical value, and to find many words corresponding to one given number. This made possible the construction of various mnemonics to aid scholars and students, and would have served a prosodic function.
Structure
The structures of the Indian alphasyllabic numeration systems differ fundamentally from one another. Though in each of the systems consonants and vowels are assigned numeric values, so that each syllable has a numeric value, this is done according to each system's own rules. In the various systems the V, CV, and CCV syllables receive different values, and the methods by which numbers are represented by these syllables are quite different.
The Āryabhaṭa numeration system operates on the additive principle, so that the value of a number represented in it is computed as the sum of each syllable's numeric value. In his mapping, the consonants are assigned values from 1 to 25, then by tens from 30 to 100. Each successive vowel is assigned a different power of 100. In Āryabhaṭa's numeration the diacritic signs, which mark vowels, multiply the value of the syllable's consonant by the given power of 100. The direction of his script is right to left, which reflects the order of the Sanskrit lexical numerals.
In the katapayadi system, syllables have numeric values only from 0 to 9. Each V, CV and CCV syllable is given a value between 0 and 9; in this way each number between 0 and 9 is assigned to several syllables. Unlike Aryabhata's system, changing the vowel in a syllable does not change the syllable's numerical value. A number represented in this way is given as a positional numeral, with one syllable for each position. The direction of this script is right to left.
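A minimal sketch of the decoding principle just described, using a simplified, partial romanized consonant-to-digit table (the traditional table covers all consonants and conjuncts; the spellings and the example sequence here are illustrative assumptions, not a real Sanskrit word):

# Simplified, partial katapayadi table (romanized); vowels carry no value.
KATAPAYADI = {
    "ka": 1, "kha": 2, "ga": 3, "gha": 4,
    "ca": 6, "cha": 7, "ja": 8, "jha": 9,
    "ta": 6, "tha": 7, "da": 8, "dha": 9, "na": 0,
    "pa": 1, "pha": 2, "ba": 3, "bha": 4, "ma": 5,
    "ya": 1, "ra": 2, "la": 3, "va": 4, "sa": 7, "ha": 8,
}

def katapayadi_value(syllables):
    """Each syllable contributes one decimal digit, determined by its consonant alone.
    The written order gives the least significant digit first, so the digit string is reversed."""
    digits = [KATAPAYADI[s] for s in syllables]
    return int("".join(str(d) for d in reversed(digits)))

print(katapayadi_value(["ga", "na", "ya"]))  # digits 3, 0, 1 read right to left -> 103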
In the aksharapalli system, syllables were assigned the numerical values 1–9 and 10–90, but never as high as 1000. According to S. Chrisomalis, there was never a single regular system for correlating signs with numeral values in this system. It was used widely for paginating books; aksharapalli numerals were written in the margins from top to bottom.
Systems
Āryabhaṭa numeration
Katapayadi system
Aksharapalli
Tuubhyara
References
Sources
Georges Ifrah: The Universal History of Numbers. From Prehistory to the Invention of the Computer. John Wiley & Sons, New York, 2000, .
See also
Alphabetic numeral system
Bhutasamkhya system
Numeral systems
Indian mathematics | Alphasyllabic numeral system | Mathematics | 1,036 |
14,876,723 | https://en.wikipedia.org/wiki/CREBL1 | CAMP responsive element binding protein-like 1, also known as CREBL1, is a protein which in humans is encoded by the CREBL1 gene.
Function
The protein encoded by this gene bears sequence similarity with the Creb/ATF subfamily of the bZip superfamily of transcription factors. It localizes to both the cytoplasm and the nucleus. The gene localizes to the major histocompatibility complex (MHC) class III region on chromosome 6.
References
Further reading
External links
Transcription factors | CREBL1 | Chemistry,Biology | 108 |
23,087,727 | https://en.wikipedia.org/wiki/Mordell%E2%80%93Weil%20theorem | In mathematics, the Mordell–Weil theorem states that for an abelian variety over a number field , the group of K-rational points of is a finitely-generated abelian group, called the Mordell–Weil group. The case with an elliptic curve and the field of rational numbers is Mordell's theorem, answering a question apparently posed by Henri Poincaré around 1901; it was proved by Louis Mordell in 1922. It is a foundational theorem of Diophantine geometry and the arithmetic of abelian varieties.
History
The tangent-chord process (one form of addition theorem on a cubic curve) had been known as far back as the seventeenth century. The process of infinite descent of Fermat was well known, but Mordell succeeded in establishing the finiteness of the quotient group E(Q)/2E(Q), which forms a major step in the proof. Certainly the finiteness of this group is a necessary condition for E(Q) to be finitely generated; and it shows that the rank is finite. This turns out to be the essential difficulty. It can be proved by direct analysis of the doubling of a point on E.
Some years later André Weil took up the subject, producing the generalisation to Jacobians of higher-genus curves over arbitrary number fields in his doctoral dissertation published in 1928. More abstract methods were required to carry out a proof with the same basic structure. The second half of the proof needs some type of height function, in terms of which to bound the 'size' of points of A(K). Some measure of the coordinates will do; heights are logarithmic, so that (roughly speaking) it is a question of how many digits are required to write down a set of homogeneous coordinates. For an abelian variety, there is no a priori preferred representation, though, as a projective variety.
Both halves of the proof have been improved significantly by subsequent technical advances: in Galois cohomology as applied to descent, and in the study of the best height functions (which are quadratic forms).
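In modern terms (again standard formulations, not quotations from a particular source), the two halves are the weak Mordell–Weil theorem and a descent argument using the canonical (Néron–Tate) height, whose quadratic behaviour and finiteness of points of bounded height together yield finite generation:

\text{(weak Mordell--Weil)} \quad A(K)/mA(K) \ \text{is finite for each integer } m \ge 2;

\hat{h}(mP) = m^{2}\,\hat{h}(P), \qquad \#\{P \in A(K) : \hat{h}(P) \le B\} < \infty \ \text{for every } B.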
Further results
The theorem leaves a number of questions still unanswered:
Calculation of the rank. This is still a demanding computational problem, and does not always have effective solutions.
Meaning of the rank: see Birch and Swinnerton-Dyer conjecture.
Possible torsion subgroups: Barry Mazur proved in 1978 that, for elliptic curves over the rationals, only finitely many groups can occur as the torsion subgroup of the Mordell–Weil group. This is the elliptic curve case of the torsion conjecture.
For a curve C in its Jacobian variety J, embedded as C ⊂ J, can the intersection of C with the Mordell–Weil group J(K) be infinite? Because of Faltings's theorem, this is false unless C = J.
In the same context, can C contain infinitely many torsion points of J? Because of the Manin–Mumford conjecture, proved by Michel Raynaud, this is false unless it is the elliptic curve case.
See also
Arithmetic geometry
Mordell–Weil group
References
Further reading
Diophantine geometry
Elliptic curves
Abelian varieties
Theorems in algebraic number theory | Mordell–Weil theorem | Mathematics | 613 |
23,266,205 | https://en.wikipedia.org/wiki/IEEE%20Honorary%20Membership | IEEE Honorary Membership is an honorary type of membership of the Institute of Electrical and Electronics Engineers (IEEE), that is given for life to an individual. It is awarded by the board of directors of IEEE to people 'who have rendered meritorious service to humanity in [the] IEEE's designated fields of interest' while not being members of IEEE.
This membership provides all the rights and privileges of a normal IEEE membership, except the right to hold an IEEE office.
The recipients of this grade will receive a certificate, an 'Honorary Member' pin and a crystal sculpture.
In a given year, if the IEEE Medal of Honor recipient is not an IEEE member, they will be automatically recommended to the IEEE Board of Directors for IEEE Honorary Membership.
Recipients
The following people received the IEEE Honorary Membership:
References
Honorary Membership
H | IEEE Honorary Membership | Technology | 163 |
73,374,707 | https://en.wikipedia.org/wiki/Tsakane%20Clay%20Grassland | The Tsakane Clay Grassland is a rare South African vegetation type supporting a unique grassland ecosystem. It is named after the township of Tsakane in Ekurhuleni, Gauteng, in which it is the dominant natural vegetation type. This ecosystem is characterized by its clay-rich soil, which supports a diverse array of flora and fauna, including several endemic and threatened species. The Tsakane Clay Grassland is an important conservation area, as it plays a crucial role in maintaining biodiversity and providing ecosystem services to the surrounding human populations.
Geography
The Tsakane Clay Grassland is the main vegetation type within the Suikerbosrand Nature Reserve, with a smaller occurrence of the Andesite Mountain Bushveld (SVcb11) vegetation unit. The altitude varies between 1,545 and 1,917 meters above sea level. The grassland extends from Soweto to the town of Springs in Gauteng and is distributed in patches southwards to Nigel and Vereeniging. The vegetation unit also occurs in parts of Mpumalanga between Balfour and Standerton and also in the northern side of the Vaal Dam. The landscape is flat to slightly undulating, with low hills also present in some areas of the grassland.
Biodiversity
The Tsakane Clay Grassland is home to a diverse range of plant species, including important taxa such as Andropogon schirensis, Eragrostis racemosa, Senecio inornatus, and Anthospermum rigidum subsp. pumilum. These species are adapted to the clay-rich soil conditions found in the grassland. The ecosystem also supports a variety of animal species, including mammals, birds, reptiles, and insects, many of which rely on the unique plant species for food and habitat.
Conservation
The Tsakane Clay Grassland is an important conservation area due to its high levels of biodiversity and the presence of several threatened and endemic species. Efforts to conserve the ecosystem include the establishment of protected areas, as well as ongoing research and monitoring programs to better understand and manage the unique flora and fauna. These conservation efforts aim to preserve the ecological integrity of the grassland and ensure the long-term survival of its species.
Threats
The Tsakane Clay Grassland faces several threats, primarily from human activities such as urbanization, agriculture, and mining. The expansion of nearby towns and cities has led to habitat loss and fragmentation, which can negatively impact the ecosystem's biodiversity. Additionally, the introduction of non-native species and pollution from various sources can further degrade the grassland and threaten its native species.
See also
Biodiversity of South Africa
References
Grasslands of South Africa
Ecosystems | Tsakane Clay Grassland | Biology | 529 |
501,086 | https://en.wikipedia.org/wiki/Speedrunning | Speedrunning is the act of playing a video game, or section of a video game, with the goal of completing it as fast as possible. Speedrunning often involves following planned routes, which may incorporate sequence breaking and exploit glitches that allow sections to be skipped or completed more quickly than intended. Tool-assisted speedrunning (TAS) is a subcategory of speedrunning that uses emulation software or additional tools to create a precisely controlled sequence of inputs.
Many online communities revolve around speedrunning specific games; community leaderboard rankings for individual games form the primary competitive metric for speedrunning. Racing between two or more speedrunners is also a popular form of competition. Videos and livestreams of speedruns are shared via the internet on media sites such as YouTube and Twitch. Speedruns are sometimes showcased at marathon events, which are gaming conventions that feature multiple people performing speedruns in a variety of games.
History
Early examples
Speedrunning has generally been an intrinsic part of video games since the early days of the medium, similar to the chasing of high scores, though it did not achieve broad interest until 1993. Some groundwork for what would become modern speedrunning was established by id Software during the development for Wolfenstein 3D (1992), although prior games such as Metroid (1986) and Prince of Persia (1989) encouraged speedrunning by noting a player's time upon meeting certain metrics, including completion of the game. Wolfenstein 3D recorded a "par time" statistic which was based on John Romero's personal records for each level. Romero's best level times were also printed in the official hint book, which was available via the same mail-order system used to distribute the game at the time. His intention was that players would attempt to beat his times.
Doom and Quake demos, early Internet communities
The development of a strong speedrunning community is considered to have originated with the 1993 computer game Doom. The game retained the "par time" mechanic from Wolfenstein and included a feature that allowed players to record and play back gameplay using files called demos (also known as game replays). Demos were lightweight files that could be shared more easily than video files on Internet bulletin board systems at the time. In January 1994, University of Waterloo student Christina Norman created a File Transfer Protocol server dedicated to compiling demos, named the LMP Hall of Fame (after the .lmp file extension used by Doom demos). The LMP Hall of Fame inspired the creation of the Doom Honorific Titles by Frank Stajano, a catalogue of titles that a player could obtain by beating certain challenges in the game. The Doom speedrunning community emerged in November 1994, when Simon Widlake created COMPET-N, a website hosting leaderboards dedicated to ranking completion times of Doom's single-player levels.
In 1996, id Software released Quake as a successor to the Doom series. Like its predecessor, Quake had a demo-recording feature and drew attention from speedrunners. In April 1997, Nolan "Radix" Pflug created Nightmare Speed Demos (NSD), a website for tracking Quake speedruns. In June 1997, Pflug released a full-game speedrun demo of Quake called Quake Done Quick, which introduced speedrunning to a broader audience. Quake speedruns were notable for their breadth of movement techniques, including "bunny hopping," a method of gaining speed also present in future shooting games like Counter-Strike and Team Fortress. In April 1998, NSD merged with another demo-hosting website to create Speed Demos Archive.
Speed Demos Archive and video sharing
For five years, Speed Demos Archive hosted exclusively Quake speedruns, but in 2003 it published a 100% speedrun of Metroid Prime done by Pflug. Six months later, SDA began accepting runs from all games. Unlike its predecessor websites, SDA did not compile leaderboards for their games; they displayed only the fastest speedrun of each game. Until SDA's expansion into games other than Quake in 2004, speedrun video submissions were primarily sent to early video game record-keeper Twin Galaxies. The videos were often never publicly released, creating verifiability concerns that SDA aimed to address. It was often impossible to determine what strategies had gone into setting these records, hindering the development of speedrunning techniques. Sites dedicated to speedrunning, including game-specific sites, began to establish the subculture around speedrunning. These sites were not only used for sharing runs but also to collaborate and share tips to improve times, leading to collaborative efforts to continuously improve speedrunning records on certain games.
In 2003, a video demonstrating a TAS of Super Mario Bros. 3 garnered widespread attention on the internet; many speedrunners cite this as their first introduction to the hobby. It was performed and published by a Japanese user named Morimoto. The video was lacking context to indicate that it was a TAS, so many people believed it to be an actual human performance. It drew criticism from viewers who felt "cheated" when Morimoto later explained the process by which he created the video and apologized for the confusion. In December 2003, after seeing Morimoto’s TAS, a user named Bisqwit created TASVideos (initially named NESVideos), a site dedicated to displaying tool-assisted speedruns.
The creation of video-sharing and streaming websites in the late 2000s and early 2010s contributed to an increase in the accessibility and popularity of speedrunning. In 2005, the creation of YouTube enabled speedrunners to upload and share videos of speedruns and discuss strategies on the SDA forums. Twitch, a livestreaming website centered around video gaming, was launched in 2011. The advent of livestreaming made for easier verification and preservation of speedruns, and some speedrunners believe it is responsible for a shift towards collaboration among members of the community. In 2014, Speedrun.com was created, which had less stringent submission guidelines than SDA and was intended to centralize speedrun leaderboards for many different games. Speedrunners' move towards using Speedrun.com and social media platforms like Skype and Discord contributed to SDA's relevance waning in the 2010s.
Methodology
Gameplay strategies
Routing is a fundamental process in speedrunning. Routing is the act of developing an optimal sequence of actions and stages in a video game. A route may involve skipping one or more important items or sections. Skipping a part of a video game that is normally required for progression is referred to as sequence breaking, a term first used in reference to the 2002 action-adventure game Metroid Prime. Video game glitches may be used to achieve sequence breaks, or may be used for other purposes such as skipping cutscenes and increasing the player's speed or damage output. Some people, called glitch-hunters, choose to focus on finding glitches that will be useful to speedrunners. In some games, arbitrary code execution exploits may be possible, allowing players to write their own code into the game's memory. Several speedruns use a "credits warp", a category of glitch that causes the game's credits sequence to play, which may require arbitrary code execution. The use of glitches and sequence breaks in speedruns was historically not allowed, per the rules of Twin Galaxies' early leaderboards. When speedrunning moved away from Twin Galaxies towards independent online leaderboards, their use became increasingly common.
Tool-assisted speedruns
A tool-assisted speedrun (TAS) is a speedrun that uses emulation software and tools to create a "theoretically perfect playthrough". According to TASVideos, common examples of tools include advancing the game frame by frame to play the game more precisely, retrying parts of the run using savestates, and hex editing. These tools are designed to remove restrictions imposed by human reflexes and allow for optimal gameplay. The run is recorded as a series of controller inputs intended to be fed back to the game in sequence. Although generally recorded on an emulator, TASes can be played back on original console hardware by sending inputs into the console's controller ports, a process known as console verification (as some exploits are possible on emulation but not console). To differentiate them from tool-assisted speedruns, unassisted speedruns are sometimes referred to as real-time attack (RTA) speedruns. Due to the lack of a human playing the game in real time, TASes are not considered to be in competition with RTA speedruns.
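A minimal sketch of the idea of a recorded input sequence played back frame by frame, with a savestate used to retry a segment. The emulator interface used here (step, save_state, load_state) is hypothetical, not any real emulator's or TASVideos tool's API.

def play_back(emulator, movie):
    """Replay a recorded movie: one set of held buttons per emulated frame, in order."""
    for buttons in movie:              # e.g. buttons = {"A", "RIGHT"}
        emulator.step(buttons)         # advance exactly one frame with these inputs held

def try_segment(emulator, candidate_inputs, goal_reached):
    """Savestate-based retry: keep the attempt only if it reaches the segment goal."""
    state = emulator.save_state()      # snapshot before the attempt
    for buttons in candidate_inputs:
        emulator.step(buttons)
    if goal_reached(emulator):
        return list(candidate_inputs)  # append these frames to the movie
    emulator.load_state(state)         # otherwise rewind and try different inputs
    return None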
Categorization and ranking
Speedruns are divided into various categories that impose additional limitations on a runner. It is common for category restrictions to require a certain amount of content to be completed in the game. Each video game may have its own speedrun categories, but some categories are popular irrespective of game. The most common are:
Any%, which involves getting to the end as fast as possible with no qualifier,
100%, which requires full completion of a game. This may entail obtaining all items or may use some other metric.
Low%, the opposite of 100%, which requires the player to beat the game while completing the minimum amount possible.
Glitchless, which restricts the player from performing any glitches during the speedrun.
No Major Glitches, which consists of beating the game as fast as possible while not using any "game-breaking" glitches.
Speedrunners compete in these categories by ranking times on online leaderboards. According to Wired, the definitive website for speedrun leaderboards is Speedrun.com; the site hosts leaderboards for over 20,000 video games. Runners usually record footage of their speedruns for accurate timing and verification, and may include a timer in their videos. They often use timers that keep track of splits—the time between the start of the run and the completion of some section or objective. Verification is usually done by leaderboard moderators who review submissions and determine the validity of individual speedruns.
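A minimal sketch of how splits relate to segment times and to a comparison run (the numbers are hypothetical; in practice runners use dedicated timer software rather than code like this):

# Cumulative split times in seconds, recorded when each section of the game is completed.
splits = [95.4, 211.7, 340.2, 488.9]           # current run (hypothetical)
personal_best = [93.0, 214.5, 338.0, 490.1]    # comparison run (hypothetical)

segment_times = [b - a for a, b in zip([0.0] + splits[:-1], splits)]
deltas = [run - pb for run, pb in zip(splits, personal_best)]

for i, (seg, delta) in enumerate(zip(segment_times, deltas), start=1):
    sign = "+" if delta >= 0 else "-"
    print(f"Split {i}: segment {seg:.1f}s, {sign}{abs(delta):.1f}s vs. comparison")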
Community
According to many speedrunners, community is an important aspect of the hobby. Matt Merkle, director of operations at Games Done Quick, says that speedrunners "value the cooperation the community encourages," and many speedrunners have said that their mental health has improved because of their involvement in the community. Erica Lenti, writing for Wired, said a sense of community is vital to speedrunning because it motivates players and aids in the development of routes and tricks used in speedruns, and Milan Jacevic highlighted "years of research" and collective community efforts that contribute to world records.
Speedrunners use media-sharing sites like YouTube and Twitch to share videos and livestreams of speedruns. The speedrunning community is divided into many sub-communities focused on speedrunning specific games. These sub-communities can form their own independent leaderboards and communicate about their games using Discord. Many communities have used the centralized leaderboard hosting site Speedrun.com since its founding in 2014.
Marathons
Speedrunning marathons, a form of gaming convention, feature a series of speedruns by multiple speedrunners. While many marathons are held worldwide, the largest event is Games Done Quick, a semiannual marathon held in the United States; it has raised over $37 million for charity organizations since its inception in 2010. The largest marathon in Europe is the European Speedrunner Assembly, held in Sweden. Both events broadcast the speedruns on Twitch and raise money for various charity organizations. Speedruns at marathons are done in one attempt and often have accompanying commentary. Many people consider marathons to be important to runners and spectators in the speedrunning community. Peter Marsh, writing for the Australian Broadcasting Corporation, says that the Games Done Quick events provide an inclusive space for women and the LGBTQ community in contrast to the related cultures of gaming and Twitch streaming. Alex Miller of Wired says the events have played an important role in connecting people and supporting the international humanitarian organization Médecins Sans Frontières during the COVID-19 pandemic.
Speedrun races
Races between two or more speedrunners are a common competition format. They require players to be skilled at recovering from setbacks during a speedrun because they cannot start over. Occasionally, races are featured at marathons; a 4-person Super Metroid race is a popular recurring event at Games Done Quick marathons. The Global Speedrun Association (GSA) has organized head-to-head tournaments for multiple games, including Celeste, Super Mario 64, and Super Mario Odyssey. In 2019, GSA organized an in-person speedrun race event called PACE. Their efforts have drawn criticism from some speedrunners who believe that they "undermine the community spirit", citing cash prizes as incentives to avoid collaboration with other speedrunners and ignore games without prize money. Video game randomizers—ROM hacks that randomly shuffle item locations and other in-game content—are popular for speedrun races as well. Tournaments and other events have been organized for randomizer races, and they have been featured at speedrun marathons.
Cheating
Methods
Splicing
Splicing is by far the most popular cheating method in speedrunning. Here, a speedrun is not recorded continuously, as is usually the case, but instead composed of various video snippets recorded at different times, sometimes with gameplay stolen from TAS composers or legitimate players.
At SGDQ 2019, speedrunner "ConnorAce" used a spliced run to illegitimately claim the world record on Clustertruck for the "NoAbility%" category, depriving the legitimate record holder of an invitation to the event. The run was treated with suspicion due to it not being submitted officially to speedrun.com, with the video being unlisted on YouTube prior to ConnorAce's acceptance into SGDQ. In October 2019, ConnorAce's run was exposed by the YouTube documentarian Apollo Legend.
In a typical case, splicing allows difficult segments to be repeated to perfection and edited together afterwards into one seemingly continuous effort, which can sometimes dramatically reduce the amount of time needed to grind out a comparable score. However, a spliced run is not considered cheating if it is announced to be a multi-segment run upon submission; community-made multi-segment compilations of this kind exist for Super Mario Bros., for example.
TASbotting
When 'TASbotting', the player records their controller inputs as a tool-assisted run on an external device, which then reproduces the inputs on a real console. As with splicing, the inputs of individual segments can be combined and, as is usual for tool-assisted runs, inputs can be made frame by frame. As long as these inputs are authentic and seem realistic for a human being, such manipulations are much more difficult to detect in the resulting video than splicing. If, on the other hand, a TAS is not played back on the original hardware but, as usual, on emulators, the use of such auxiliary programs can sometimes be inferred from the resulting video; additionally, some emulators never perfectly imitate the desired hardware, which can cause synchronization issues when replayed on a console.
Modifying the timer or playback speed
Modifying game timers, especially on computer games, is another common method of improving one's recorded times. However, this is a very noticeable manipulation, especially in highly competitive communities, since runs near the top of the leaderboards are repeatedly analyzed by other players to check their legitimacy and playback reproducibility, including a temporal check known as "retiming". This often reveals discrepancies between the submitted time and the time recovered from the video.
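As a hypothetical illustration of a retime, moderators can recover a run's true duration by counting video frames between the timing start and end points; the frame numbers below are invented.

```python
def retime(start_frame: int, end_frame: int, fps: float) -> float:
    """Duration in seconds between two video frames at a given framerate."""
    return (end_frame - start_frame) / fps

# e.g. a run starting on frame 912 and ending on frame 131562 of a 60 fps recording
print(retime(912, 131562, 60.0))   # 2177.5 seconds, i.e. 36:17.500
```

A submitted time that cannot be reproduced from the frame count is a strong sign that the timer or the footage was manipulated.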
Another method, a variation of splicing, includes speeding up cutscenes or compressing transitional black space. Again, such methods are likely to be detected by a speedrun moderator, although some games, especially where PC speed can have an effect, may actually vary depending on hardware.
Finally, another common cheating method is to play the game using frame-by-frame advancement or in slow motion, which is similar to normal tool-assisted speedrunning but without the ability to redo inputs. Playing in slow motion is often effective for games that require very precise movements.
Modifying in-game files
While it is often possible to use traditional cheats such as a GameShark to increase character speed, strength, health, etc., such cheats are generally quite easy for an experienced moderator to detect, even when applied subtly. However, the modification of internal files to manipulate the game's random number generation (RNG) in the runner's favor can often be much more difficult to detect.
One of the most infamous examples of file modification involved the speedrunner Dream in 2020, whose luck in a series of Minecraft speedruns was so extreme that both the moderators at Speedrun.com and various YouTubers, such as Karl Jobst and Matt Parker, considered the runs exceedingly unlikely to have been achieved without cheating (with an approximately 1 in 20 sextillion chance of occurring, as estimated by Matt Parker from Numberphile); Jobst's and Parker's videos on Dream had gained 5.7 million and 6.5 million views, respectively, as of January 2024. Dream later admitted to the runs being cheated about five months after they were rejected, although he claimed he did not know he was using a modified version of the game. Nearly two years later, in late 2022, the player who helped uncover Dream's cheated runs, MinecrAvenger, was also found to be using similar luck manipulation.
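Purely as an illustration of the statistical reasoning involved, the probability of an unusually lucky streak can be estimated with a binomial tail; the trial counts and the 50% success rate below are hypothetical and are not the actual Minecraft drop rates or Dream's statistics.

```python
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 10 successes in 10 independent trials when each succeeds half the time
print(binomial_tail(10, 10, 0.5))   # 0.0009765625 (= 0.5**10)
# probabilities of independent lucky streaks multiply, so several such streaks
# observed together quickly reach astronomically small odds
```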
Lying about times
While all of the aforementioned methods are deceptive in nature, the simplest way of cheating is merely to lie about a time. One of the most infamous cases of this was done by Todd Rogers. Several of his records have come under scrutiny for being seemingly impossible or lacking sufficient proof. In 2002, Robert Mruczek, then chief referee at Twin Galaxies, officially rescinded Rogers's record time in Barnstorming after other players pointed out that his time of 32.04 seconds did not appear to be possible, even when the game was hacked to remove all obstacles. Upon further investigation, Twin Galaxies referees were unable to find independent verification for this time, having instead relied on erroneous information from Activision.
As listed on the Twin Galaxies leaderboard until January 2018, Rogers's record in the 1980 Activision game Dragster was a time of 5.51 seconds from 1982. At the time, Activision verified high scores by Polaroid photograph. According to Rogers, after he submitted a photo of this time, he was called by Activision, who asked him to verify how he achieved such a score, because they had programmed a 'perfect run' of the game and were unable to achieve better than a 5.54. The game's programmer David Crane would later confirm that he had a vague recollection of programming test runs, but did not remember the results. In 2012, Rogers received a Guinness World Record for the longest-standing video game score record, for his 1982 Dragster record. In 2017, a speedrunner named Eric "Omnigamer" Koziel disassembled the game's code and concluded that the fastest possible time was 5.57 seconds. With a tick rate of 0.03 seconds, the record claim is two ticks faster than Omnigamer's data and one tick faster than the reported Activision 'perfect run'.
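The tick arithmetic behind that comparison can be checked directly; the times are the ones given above, and the sketch is only illustrative.

```python
TICK = 0.03            # Dragster's timer advances in 0.03-second steps
fastest_found = 5.57   # fastest time found in Omnigamer's 2017 analysis
activision = 5.54      # Activision's reported 'perfect run'
claimed = 5.51         # Rogers's claimed record

print(round((fastest_found - claimed) / TICK))   # 2 ticks faster than 5.57
print(round((activision - claimed) / TICK))      # 1 tick faster than 5.54
```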
Cheat detection
In order to prevent most of these methods, some games require a video of the hands on the controller or keyboard ("handcam"), in addition to the screen recording, so that game-specific moderators in charge of authenticating a submission can ensure that the inputs are really done in the specified combination and by a human. Other methods include forensic audio analysis, which is a common method for detecting telltale signs of video splicing; this is why runs without high-quality audio streams are often rejected on speedrun boards.
Additional detection methods are the use of mathematics (as in the aforementioned Dream case) or human moderation of suspicious inputs (in games which record them such as Doom and TrackMania). Cheat detection software created for TrackMania was used to analyze over 400,000 replays and isolate a handful of cheaters, leading to hundreds of world records being determined to have been cheated using slowdown tools. This included those of Burim "riolu" Fejza, who was signed to the eSports team Nordavind (now known as 00 Nation) before being dropped following the scandal.
See also
Donkey Kong high score competition
Nintendo World Championships
Games Done Quick
European Speedrunner Assembly
Running with Speed
Time attack
References
External links
Karl Jobst: The Evolution Of Speedrunning (Video essay on YouTube)
Speedrun.com, popular leaderboard-hosting website
Video game terminology
Articles containing video clips
1990s neologisms | Speedrunning | Technology | 4,329 |
83,877 | https://en.wikipedia.org/wiki/Enema | An enema, also known as a clyster, is an injection of fluid into the rectum or into the lower bowel by way of the rectum. The word enema can also refer to the liquid injected, as well as to a device for administering such an injection.
In standard medicine, the most frequent uses of enemas are to relieve constipation and for bowel cleansing before a medical examination or procedure; also, they are employed as a lower gastrointestinal series (also called a barium enema), to treat traveler's diarrhea, as a vehicle for the administration of food, water or medicine, as a stimulant to the general system, as a local application and, more rarely, as a means of reducing body temperature, as treatment for encopresis, and as a form of rehydration therapy (proctoclysis) in patients for whom intravenous therapy is not applicable.
Medical usage
The principal medical usages of enemas are:
Bowel cleansing
Acute treatments
As bowel stimulants, enemas are employed for the same purposes as orally administered laxatives: to relieve constipation; to treat fecal impaction; to empty the colon prior to a medical procedure such as a colonoscopy. When oral laxatives are not indicated or are not sufficiently effective, enemas may be a sensible and necessary measure.
A large volume enema can be given to cleanse as much of the colon as possible of feces. However, a low enema is generally useful only for stool in the rectum, not in the intestinal tract.
Such enemas' mechanism consists of the volume of the liquid causing a rapid expansion of the intestinal tract in conjunction with, in the case of certain solutions, irritation of the intestinal mucosa which stimulates peristalsis and lubricates the stool to encourage a bowel movement. An enema's efficacy depends on several factors including the volume injected and the temperature and the contents of the infusion. In order for the enema to be effective, the patient should retain the solution for five to ten minutes as tolerated or, as some nursing textbooks recommend, for five to fifteen minutes or as long as possible.
Large volume enemas
For emptying the entire colon as much as feasible, deeper and higher enemas are utilized to reach large sections of the colon. The colon dilates and expands when a large volume of liquid is injected into it, and the colon reacts to that sudden expansion with general contractions (peristalsis), propelling its contents toward the rectum.
Soapsuds enema is a frequently used synonym for a large volume enema (although soap is not necessary for effectiveness).
For relieving occasional constipation, a large volume enema may be used in a home setting, although for recurring or severe cases of constipation medical care may be required.
Water-based solutions
Plain water can be used, simply functioning mechanically to expand the colon, thus prompting evacuation.
Normal saline is least irritating to the colon. Like plain water, it simply functions mechanically to expand the colon, but having a neutral concentration gradient, it neither draws electrolytes from the body, as happens with plain water, nor draws water into the colon, as occurs with phosphates. Thus, a salt water solution can be used when a longer period of retention is desired, such as to soften an impaction.
Castile soap is commonly added because its irritation of the colon's lining increases the urgency to defecate. However, liquid handsoaps and detergents should not be used.
Glycerol is a specific bowel mucosa irritant serving to induce peristalsis via a hyperosmotic effect. It is used in a dilute solution, e.g., 5%.
Other solutions
Equal parts of milk and molasses heated together to slightly above normal body temperature have been used. Neither the milk sugars and proteins nor the molasses are absorbed in the lower intestine, thus keeping the water from the enema in the intestine. Studies have shown that milk and molasses enemas have a low complication rate when used in the emergency department and are safe and effective with minimal side effects.
Mineral oil functions as a lubricant and stool softener, but may have side effects including rectal skin irritation and leakage of oil.
Micro-enemas
ATC codes for drugs for constipation — enemas
A06AG01 Sodium phosphate
A06AG02 Bisacodyl
A06AG03 Dantron, including combinations
A06AG04 Glycerol
A06AG06 Oil
A06AG07 Sorbitol
A06AG10 Docusate sodium, including combinations
A06AG11 Sodium lauryl sulfoacetate, including combinations
A06AG20 Combinations
Single substance solutions
In alphabetical order
Arachis oil (peanut oil) enema is useful for softening stools which are impacted higher than the rectum.
Bisacodyl stimulates enteric nerves to cause colonic contractions.
Dantron is a stimulant drug and stool softener used alone or in combinations in enemas. Considered to be a carcinogen, its use is limited: e.g., it is restricted in the UK to patients who already have a diagnosis of terminal cancer and is not used at all in the USA.
Docusate
Glycerol has a hyperosmotic effect and can be used as a small-volume (2–10 ml) enema (or suppository).
Mineral oil is used as a lubricant because most of the ingested material is excreted in the stool rather than being absorbed by the body.
Sodium phosphate, also known by the brand name Fleet, is available at drugstores and is usually self-administered. Buffered sodium phosphate solution draws additional water from the bloodstream into the colon to increase the effectiveness of the enema, but it can be rather irritating to the colon, causing intense cramping or "griping." Fleet enemas usually cause a bowel movement in 1 to 5 minutes and have known adverse effects.
Sorbitol pulls water into the large intestine, causing distention and thereby stimulating the normal forward movement of the bowels. Sorbitol is found in some dried fruits, may contribute to the laxative effects of prunes, and is also available to take orally as a laxative. As an enema for constipation, the recommended adult dose is 120 mL of 25-30% solution, administered once. Sorbitol is also an ingredient of the MICROLAX Enema.
Compounded from multiple ingredients
In alphabetical order of the original brand names
Klyx, developed by Ferring B.V., contains docusate sodium 1 mg/mL and sorbitol solution (70%, crystallising) 357 mg/mL and is used for faecal impaction or constipation, or for colon evacuation prior to medical procedures.
Micralax (not to be confused with MICROLAX®)
MICROLAX® (not to be confused with Micralax) combines the action of sodium citrate, a peptidising agent which can displace bound water present in the faeces, with sodium alkyl sulphoacetate, a wetting agent, and with glycerol, an anal mucosa irritant and hyperosmotic. However, a preparation containing sorbitol rather than glycerol, which was initially tested in preparation for sigmoidoscopy, is also sold under the name "Micralax".
Micolette Micro-enema® contains 45 mg sodium lauryl sulphoacetate, 450 mg per 5 ml sodium citrate BP, and 625 mg glycerol BP and is a small volume stimulant enema suitable where large-volume enemas are contra-indicated.
Chronic treatments
Transanal irrigation
TAI, also termed retrograde irrigation, is designed to assist evacuation using a water enema as a treatment for persons with bowel dysfunction, including fecal incontinence or constipation, especially obstructed defecation. By regularly emptying the bowel using transanal irrigation, controlled bowel function is often re-established to a high degree, thus enabling development of a consistent bowel routine. Its effectiveness varies considerably, some individuals experiencing complete control of incontinence but others reporting little or no benefit.
An international consensus on when and how to use transanal irrigation for people with bowel problems was published in 2013, offering practitioners a clear, comprehensive and simple guide to practice for the emerging therapeutic area of transanal irrigation.
The term retrograde irrigation distinguishes this procedure from the Malone antegrade continence enema, where irrigation fluid is introduced into the colon proximal to the anus via a surgically created irrigation port.
Bowel management
Patients who have a bowel disability, a medical condition which impairs control of defecation, e.g., fecal incontinence or constipation, can use bowel management techniques to choose a predictable time and place to evacuate. Without bowel management, such persons might either suffer from the feeling of not getting relief, or they might soil themselves.
While simple techniques might include a controlled diet and establishing a toilet routine, a daily enema can be taken to empty the colon, thus preventing unwanted and uncontrolled bowel movements that day.
Contrast (X-ray)
In a lower gastrointestinal series an enema that may contain barium sulfate powder or a water-soluble contrast agent is used in the radiological imaging of the bowel. Called a barium enema, such enemas are sometimes the only practical way to view the colon in a relatively safe manner.
Failure to expel all of the barium may cause constipation or possible impaction and a patient who has no bowel movement for more than two days or is unable to pass gas rectally should promptly inform a physician and may require an enema or laxative.
Medication administration
The administration of substances into the bloodstream. This may be done in situations where it is undesirable or impossible to deliver a medication by mouth, such as antiemetics given to reduce nausea (though not many antiemetics are delivered by enema). Additionally, several anti-angiogenic agents, which work better without digestion, can be safely administered via a gentle enema.
The topical administration of medications into the rectum, such as corticosteroids and mesalazine used in the treatment of inflammatory bowel disease. Administration by enema avoids having the medication pass through the entire gastrointestinal tract, therefore simplifying the delivery of the medication to the affected area and limiting the amount that is absorbed into the bloodstream.
Rectal corticosteroid enemas are sometimes used to treat mild or moderate ulcerative colitis. They also may be used along with systemic (oral or injection) corticosteroids or other medicines to treat severe disease or mild to moderate disease that has spread too far to be treated effectively by medicine inserted into the rectum alone.
Inhibiting pathological defecation
Traveller's diarrhea symptoms can be successfully reduced, without observed side effects, by treatment with an enema of sodium butyrate, organic acids, and A-300 silicon dioxide.
Shigellosis treatment benefits from adjunct therapy with butyrate enemas, which promote healing of the rectal mucosa and reduce inflammation, though they do not aid clinical recovery from shigellosis. Use of an 80 ml sodium butyrate isotonic enema administered every 12 hours has been studied and found effective.
Other
There have been a few cases in remote or rural settings, where rectal fluids have been used to rehydrate a person. Benefits include not needing to use sterile fluids.
Introducing healthy bacterial flora through infusion of stool, known as a fecal microbiota transplant, was first performed in 1958 employing retention enemas. Enemas remained the most common method until 1989, when alternative means of administration were developed. As of 2013, colonoscope implantation has been preferred over fecal enemas because by using the former method, the entire colon and ileum can be inoculated, but enemas reach only to the splenic flexure.
A patient unable to be fed otherwise can be nourished by an enteral administration of predigested foods, which is known as a nutrient enema. This treatment is ancient, dating back at least to the second century CE when documented by Galen, and was commonly used in the Middle Ages; it remained a common technique in the 19th century, and as recently as 1941 the U.S. military's manual for hospital diets prescribed their use. Nutrient enemas have been superseded in modern medical care by tube feeding and intravenous feeding.
Enemas have been used around the time of childbirth; however, there is no evidence for this practice and it is now discouraged.
Adverse effects
Improper administration of an enema can cause electrolyte imbalance (with repeated enemas) or ruptures to the bowel or rectal tissues which can be unnoticed as the rectum is insensitive to pain, resulting in internal bleeding. However, these occurrences are rare in healthy, sober adults. Internal bleeding or rupture may leave the individual exposed to infections from intestinal bacteria. Blood resulting from tears in the colon may not always be visible, but can be distinguished if the feces are unusually dark or have a red hue. If intestinal rupture is suspected, medical assistance should be obtained immediately. Frequent use of enemas can cause laxative dependency.
The enema tube and solution may stimulate the vagus nerve, which may trigger an arrhythmia such as bradycardia.
Enemas should not be used if there is an undiagnosed abdominal pain since the peristalsis of the bowel can cause an inflamed appendix to rupture.
There are arguments both for and against colonic irrigation in people with diverticulitis, ulcerative colitis, Crohn's disease, severe or internal hemorrhoids or tumors in the rectum or colon, and its usage is not recommended soon after bowel surgery (unless directed by one's health care provider). Regular treatments should be avoided by people with heart disease or kidney failure. Colonics are inappropriate for people with bowel, rectal or anal pathologies where the pathology contributes to the risk of bowel perforation.
Recent research has shown that ozone water, which is sometimes used in enemas, can immediately cause microscopic colitis.
A recent case series of 11 patients with five deaths illustrated the danger of phosphate enemas in high-risk patients.
History
Etymology
Enema entered the English language c. 1675 from Latin in which, in the 15th century, it was first used in the sense of a rectal injection, from Greek ἔνεμα (énema), "injection", itself from ἐνιέναι (enienai) "to send in, inject", from ἐν (en), "in" + ἱέναι (hienai), "to send, throw".
Clyster, also spelled glister in the 18th century, entered the English language in the late 14th century from Old French or Latin, from Greek κλυστήρ (klyster), "syringe", itself from κλύζειν (klyzein), "to wash out"; it is a generally archaic word used more particularly for enemas administered using a clyster syringe.
Ancient and medieval
Africa
The first mention of the enema in medical literature is in the Ancient Egyptian Ebers Papyrus ( BCE). One of the many types of medical specialists was a Nery-Pehuyt, the Shepherd of the Anus. Many medications were administered by enemas. There was a Keeper of the Royal Rectum who may have primarily been the pharaoh's enema maker. The god Thoth, according to Egyptian mythology, invented the enema.
In parts of Africa the calabash gourd is used traditionally to administer enemas. On the Ivory Coast the narrow neck of the gourd filled with water is inserted into the patient's rectum and the contents are then injected by means of an attendant's forcible oral inflation, or, alternatively, a patient may self-administer the enema by using suction to create a negative pressure in the gourd, placing a finger at the opening, and then upon anal insertion, removing the finger to allow atmospheric pressure to effect the flow. In South Africa, Bhaca people used an ox horn to administer enemas. Along the upper Congo River an enema apparatus is made by making a hole in one end of the gourd for filling it, and using a resin to attach a hollow cane to the gourd's neck. The cane is inserted into the anus of the patient who is in a posture that allows gravity to effect infusion of the fluid.
Americas
The Olmec from their middle preclassic period (10th through 7th centuries BCE) through the Spanish Conquest used trance-inducing substances ceremonially, and these were ingested via, among other routes, enemas administered using jars.
As further described below in religious rituals, the Maya in their late classic age (7th through 10th centuries CE) used enemas for, at least, ritual purposes, with Mayan sculpture and ceramics from that period depicting scenes in which ritual hallucinogenic enemas were taken, injected by syringes made of gourd and clay. In the Xibalban court of the God D, whose worship included ritual cult paraphernalia, the Maya illustrated the use of a characteristic enema bulb syringe by female attendants administering clysters ritually.
For combating illness and discomfort of the digestive tract, the Maya also employed enemas, as documented during the colonial period, e.g., in the Florentine Codex.
The indigenous peoples of North America employed tobacco smoke enemas to stimulate respiration, injecting the smoke using a rectal tube.
A rubber bag connected with a conical nozzle, at an early period, was in use among the indigenous peoples of South America as an enema syringe, and the rubber enema bag with a connecting tube and ivory tip remained in use by them while in Europe a syringe was still the usual means for conducting an enema.
Asia
In Babylonia, by 600 BCE, enemas were in use, although it appears that initially they were in use because of a belief that the demon of disease would, by means of an enema, be driven out of the body. Babylonian and Assyrian tablets c. 600 BCE bear cuneiform inscriptions referring to enemas.
In China, c. 200 CE, Zhang Zhongjing was the first to employ enemas. "Secure a large pig's bile and mix with a small quantity of vinegar. Insert a bamboo tube three or four inches long into the rectum and inject the mixture" are his directions, according to Wu Lien-teh.
In India, in the fifth century BCE, Sushruta enumerates the enema syringe among 121 surgical instruments described. Early Indian physicians' enema apparatus consisted of a tube of bamboo, ivory, or horn attached to the scrotum of a deer, goat, or ox.
In Persia, Avicenna (980–1037 A. D.) is credited with the introduction of the "clyster-purse" or collapsible portion of an enema outfit made from ox skin or silk cloth and emptied by squeezing with the hands.
Europe
Hippocrates (460–370 BCE) frequently mentions enemas, e.g., "if the previous food which the patient has recently eaten should not have gone down, give an enema if the patient be strong and in the prime of life, but if he be weak, a suppository should be administered, should the bowels be not well moved on their own accord."
In the first century BCE the Greek physician Asclepiades of Bithynia wrote "Treatment consists merely of three elements: drink, food, and the enema". Also, he contended that indigestion is caused by particles of food that are too big and his prescribed treatment was proper amounts of food and wine followed by an enema which would remove the improper food doing the damage.
In the second century CE the Greek physician Soranus prescribed, among other techniques, enemas as a safe abortion method, and the Greek philosopher Celsus recommended an enema of pearl barley in milk or rose oil with butter as a nutrient for those with dysentery and unable to eat, and also Galen mentions enemas in several contexts.
In medieval times appear the first illustrations of enema equipment in the Western world, a clyster syringe consisting of a tube attached to a pump action bulb made of a pig bladder.
A simple piston syringe clyster was in use from the 15th through 19th centuries. This device had its rectal nozzle connected to a syringe with a plunger rather than to a bulb.
Modern Western
Beginning in the 17th century enema apparatus was chiefly designed for self-administration at home and many were French as enemas enjoyed wide usage in France.
In 1694 François Mauriceau in his early-modern treatise, The Diseases of Women with Child, records that both midwives and man-midwives commonly administered clysters to labouring mothers just prior to their delivery.
Clysters were administered for symptoms of constipation and, with more questionable effectiveness, stomach aches and other illnesses.
In 1753 an enema bag prepared from a pig's or beef's bladder attached to a tube was described by Johann Jacob Woyts as an alternative to a syringe.
In the 18th century Europeans began emulating the indigenous peoples of North America's use of tobacco smoke enemas to resuscitate drowned people. Tobacco resuscitation kits consisting of a pair of bellows and a tube were provided by the Royal Humane Society of London and placed at various points along the Thames. Furthermore, these enemas came to be employed for headaches, respiratory failure, colds, hernias, abdominal cramps, typhoid fever, and cholera outbreaks.
Clysters were a favourite medical treatment in the bourgeoisie and nobility of the Western world up to the 19th century. As medical knowledge was fairly limited at the time, purgative clysters were used for a wide variety of ailments, the foremost of which were stomach aches and constipation.
According to the duc de Saint-Simon, clysters were so popular at the court of King Louis XIV of France that the duchess of Burgundy had her servant give her a clyster in front of the King (her modesty being preserved by an adequate posture) before going to the comedy. However, he also mentions the astonishment of the King and Mme de Maintenon that she should take it before them.
In the 19th century many new types of enema administration equipment were devised. Devices allowing gravity to infuse the solution, like those mentioned above used by South American indigenous people and like the enema bag described by Johann Jacob Woyts, came into common use. These consist of a nozzle at the end of a hose which connects a reservoir, either a bucket or a rubber bag, which is filled with liquid and held or hung above the recipient.
In the early 20th century the disposable microenema, a squeeze bottle, was invented by Charles Browne Fleet.
Society and culture
Alternative medicine
Relatively benign
Colonic irrigation
The term "colonic irrigation" is commonly used in gastroenterology to refer to the practice of introducing water through a colostomy or a surgically constructed conduit as a treatment for constipation. The Food and Drug Administration has ruled that colonic irrigation equipment is not approved for sale for the purpose of general well-being and has taken action against many distributors of this equipment, including a Warning Letter.
Colon cleansing
The same term is also used in alternative medicine where it may involve the use of substances mixed with water in order to detoxify the body. Practitioners believe the accumulation of fecal matter in the large intestine leads to ill health. This resurrects the old medical concept of autointoxication which was orthodox doctrine up to the end of the 19th century but which has now been discredited.
Kellogg's enemas
In the late 19th century Dr. John Harvey Kellogg made sure that the bowel of each and every patient was plied with water, from above and below. His favorite device was an enema machine ("just like one I saw in Germany") that could run fifteen gallons of water through a person's bowel in a matter of seconds. Every water enema was followed by a pint of yogurt—half was eaten, the other half was administered by enema "thus planting the protective germs where they are most needed and may render most effective service." The yogurt served to replace "the intestinal flora" of the bowel, creating what Kellogg claimed was a completely clean intestine.
Dangerous
Bleach enemas
Chlorine dioxide enemas have been fraudulently marketed as a medical treatment, primarily for autism. This has resulted, for example, in a six-year-old boy needing to have his colon removed and a colostomy bag fitted, complaints to the FDA reporting life-threatening reactions, and even death.
Proponents falsely claim that administering autistic children these enemas results in the expulsion of parasitic worms ("rope worms"), which actually are fragments of damaged intestinal epithelium that are misinterpreted as being human pathogens. Oral and rectal use of the solution has also been promoted as a cure for HIV, malaria, viral hepatitis, influenza, common colds, acne, cancer, Parkinson's, and much more.
Chlorine dioxide is a potent and toxic bleach that is relabeled for "medicinal purposes" to a variety of brand names including, but not limited, to MMS, Miracle Mineral Supplement, and CD protocol. For oral use, the doses recommended on the labeling can cause nausea, vomiting, diarrhea, and potentially life-threatening dehydration.
No clinical trials have been performed to test the health claims made for chlorine dioxide, which originate from former Scientologist Jim Humble in his 2006 self-published book, The Miracle Mineral Solution of the 21st Century and from anecdotal reports. The name MMS was coined by Humble. Sellers sometimes describe MMS as a water purifier so as to circumvent medical regulations. The International Federation of Red Cross and Red Crescent Societies rejected "in the strongest terms" reports by promoters of MMS that they had used the product to fight malaria.
Coffee enemas
Well documented as having no proven benefits and considered by medical authorities as rash and potentially dangerous is an enema of coffee.
A coffee enema can cause numerous maladies including infections, sepsis (including campylobacter sepsis), severe electrolyte imbalance, colitis, polymicrobial enteric sepsis, proctocolitis, salmonella, brain abscess, and heart failure,
and deaths related to coffee enemas have been documented.
Gerson therapy includes administering enemas of coffee, as well as of castor oil and sometimes of hydrogen peroxide or of ozone.
Some proponents of alternative medicine have claimed that coffee enemas have an anti-cancer effect by "detoxifying" metabolic products of tumors but there is no medical scientific evidence to support this.
Recreational usage
Pleasure
Enjoyment of enemas is known as klismaphilia, which medically is classified as a paraphilia. A person with klismaphilia is a klismaphile.
Both women and men may enjoy sexual enema play, heterosexually and homosexually, experiencing sexual arousal from enemas which they find gratifying or sensual and which can be an auxiliary to, or even a substitute for, genital sexual activity.
Klismaphiles may perceive pleasure from a large, water-distended belly, or the feeling of internal pressure. An enema fetish may include sexual attraction to the involved equipment, processes, environments, situations, or scenarios. Klismaphiles can gain satisfaction from enemas through fantasies, by actually receiving or giving one, or through the process of eliminating steps to being administered one (e.g., under the pretence of being constipated).
That some women use enemas while masturbating was documented by Alfred Kinsey in Sexual Behavior in the Human Female: "There were still other masturbatory techniques which were regularly or occasionally employed by some 11 percent of the females in the sample... Douches, streams of running water, vibrators, urethral insertions, enemas, other anal insertions, sado-masochistic activity, and still other methods were occasionally employed, but none of them in any appreciable number of cases."
Other sexually related uses
Besides klismaphilia, the intrinsic enjoyment of enemas, there are other uses of enemas in sexual play.
BDSM
Enemas are sometimes used in sadomasochistic activities for erotic humiliation or for physical discomfort.
Rectal douching
Another sexual use for enemas is to empty the rectum as a prelude to other anal sexual activities such as anal sex, possibly reducing risk of infection.
This is different from klismaphilia, in which the enema is enjoyed for itself and as a part of sexual arousal and gratification.
Rectal douching is a common practice among people who take a receptive role in anal sex although rectal douching before anal sex may increase the risk of transferring HIV, hepatitis B, and other diseases.
Intoxication
An alcohol enema can be used to instill alcohol into the bloodstream very quickly, as it is absorbed through the membranes of the colon; deaths from alcohol poisoning administered this way have been reported. Great care must be taken as to the amount of alcohol used: only a small amount is needed, as the intestine absorbs the alcohol far more quickly than the stomach.
Preceding an enema for administration of drugs or alcohol, a cleansing enema may first be used for cleaning the colon to help increase the rate of absorption.
Religious rituals
All across Mesoamerica ritual enemas were employed to consume psychoactive substances, e.g., balché, alcohol, tobacco, peyote, and other hallucinogenic drugs and entheogens, most notably by the Maya, thus attaining more intense trance states more quickly, and Mayan classic-period sculpture and ceramics depict hallucinogenic enemas used in rituals. Some tribes continue the practice in the present day.
With historical roots in the Indian subcontinent, enemas in Ayurveda, called Basti or Vasti, form part of Panchakarma procedure in which herbal medicines are introduced rectally.
Punitive usage
Enemas have also been forcibly applied as a means of punishment.
Political dissenters in post-independence Argentina were given enemas of chili pepper and turpentine. Turpentine enemas are very harsh purgatives.
In the Guantanamo Bay Detention Camp, the Senate Intelligence Committee report on CIA torture documented instances of enemas being used by the Central Intelligence Agency in order to ensure "total control" over detainees. Enemas, officials said, are uncomfortable and degrading. The CIA forced nutrient enemas on detainees who attempted hunger strikes; "With head lower than torso … sloshing up the large intestines … [what] I infer is that you get a tube up as you can … We used the largest Ewal tube we had", wrote one officer, and "violent enemas" is how a detainee described what he received.
In arts and literature
Written literature
In Dionysus' satyr play Limos, Silenus attempts to give an enema to Heracles.
In Cervantes' Don Quixote, a story told to Sancho includes "The Knight of the Sun ... bound hand and foot ... was administered a clyster of snow water and sand that almost distracted him".
In the 17th century, satirists made physicians a favorite target, resembling Molière's caricature whose prescription for anything was "clyster, bleed, purge," or "purge, bleed, clyster".
In Molière's play The Imaginary Invalid, Argan, a severe hypochondriac, is addicted to enemas as indicated by such lines as when Béralde asks, "Can't you be one moment without a purge?"
In Grace Metalious's novel Peyton Place, the town doctor tells of "a young boy with the worst case of dehydration I ever saw. It came from getting too many enemas that he didn't need. Sex, with a capital S-E-X.". As a teenager, the boy enjoys receiving enemas from his mother.
In Flora Rheta Schreiber's book Sybil, Sybil's psychiatrist asks her "What's Mama been doing to you, dear?...I know she gave you the enemas."
Film
In The Right Stuff, during flight training astronaut Alan Shepard retains a barium enema, given two floors away from a toilet, embarrassedly riding a public elevator wearing a hospital gown and holding the enema bag with its tip still inserted in him.
Water Power is a film loosely based on the real-life exploits of Michael H. Kenyon, an American criminal who pleaded guilty to a decade-long series of armed robberies of female victims, some of which involved sexual assaults in which he would give them enemas.
Song
The lyrics of Frank Zappa's song The Illinois Enema Bandit are concerned with Michael H. Kenyon's sexual assaults which included administering involuntary enemas.
Monument
A brass statue of a syringe enema bulb held aloft by three cherubs stands in front of the "Mashuk" spa in the settlement of Zheleznovodsk in Russia. Inspired by the 15th century Renaissance painter Botticelli, it was created by a local artist who commented that "An enema is an unpleasant procedure as many of us may know. But when cherubs do it, it's all right." When it was unveiled on 19 June 2008, a banner posted on one of the spa's walls declared "Let's beat constipation and sloppiness with enemas." The spa lies in the Caucasus Mountains region, which is known for dozens of spas that routinely treat digestive and other complaints with enemas of mineral spring water; the director commented, "An enema is almost a symbol of our region." It is the only known monument to the enema.
See also
Bowel management
Dry enema
Fecal microbiota transplant
Murphy drip
Nutrient enema
Tobacco smoke enema
Transanal irrigation
References
Sources
External links
Alternative medicine
Anal eroticism
Anus
BDSM equipment
Constipation
Digestive system
Dosage forms
Drug delivery devices
Drugs acting on the gastrointestinal system and metabolism
Evidence-based medicine
Gastroenterology
Health care
Large intestine
Laxatives
Medical anthropology
Rectum
Routes of administration
Self-care
Sexual health
Torture | Enema | Chemistry,Biology | 7,524 |
1,160,936 | https://en.wikipedia.org/wiki/Marconi%20Company | The Marconi Company was a British telecommunications and engineering company founded by Italian inventor Guglielmo Marconi in 1897 which was a pioneer of wireless long distance communication and mass media broadcasting, eventually becoming one of the UK's most successful manufacturing companies.
Its roots were in the Wireless Telegraph & Signal Company, which underwent several changes in name after mergers and acquisitions. In 1999, its defence equipment manufacturing division, Marconi Electronic Systems, merged with British Aerospace (BAe) to form BAE Systems. In 2006, financial difficulties led to the collapse of the remaining company, with the bulk of the business acquired by the Swedish telecommunications company, Ericsson.
History
Naming history
1897–1900: The Wireless Telegraph & Signal Company
1900–1963: Marconi's Wireless Telegraph Company
1963–1987: Marconi Company Ltd
1987–1998: GEC-Marconi Ltd
1998–1999: Marconi Electronic Systems Ltd
1999–2003: Marconi plc, with Marconi Communications as principal subsidiary
2003–2006: Marconi Corporation plc
Early history
Marconi's "Wireless Telegraph and Signal Company" was formed on 20 July 1897 after a British patent for wireless technology was granted on 2 July that year. The company opened the world's first radio factory on Hall Street in Chelmsford northeast of London in 1898 and was responsible for some of the most important advances in radio and television. These include:
The diode vacuum tube in 1904 (Fleming)
Transatlantic radio broadcasting between Clifden, Ireland and Glace Bay, Nova Scotia, October 17, 1907.
High frequency tuned broadcasting
Formation of the British Broadcasting Company (later to become the independent BBC)
Formation of the Marconi Wireless Telegraph Company of America (assets acquired by RCA in 1920)
Marconi International Marine Communication Co. (M.I.M.C.Co.), founded 1900 in London
Compagnie de Télégraphie sans Fil (C.T.S.F.), founded 1900 in the City of Brussels
Short wave beam broadcasting
Radar
Television
Avionics
The subsidiary Marconi Wireless Telegraph Company of America, also called "American Marconi", was founded in 1899. It was the dominant radio communications provider in the US until the formation of the Radio Corporation of America (RCA) in 1919.
In 1900 the company's name was changed to "Marconi's Wireless Telegraph Company" and Marconi's Wireless Telegraph Training College was established in 1901. The company and factory were moved to New Street Works in 1912 to allow for production expansion in light of the RMS Titanic disaster. Along with private entrepreneurs, the Marconi company formed the Unione Radiofonica Italiana (URI) in 1924, which was granted a monopoly of radio broadcasts by Mussolini's regime. After the war, URI became the RAI, which lives on to this day.
Isaac Shoenberg joined the company in 1914 and became joint general manager in 1924. After leaving Marconi in 1928 he went on to lead research at EMI where he was influential in the development of television broadcasting.
In 1939, the Marconi Research Laboratories were founded at Great Baddow, Essex. In 1941 there was a buyout of Marconi-Ekco Instruments to form Marconi Instruments.
Operations as English Electric subsidiary
English Electric acquired the Marconi Company in 1946 to complement its other operations: heavy electrical engineering, aircraft manufacture and its railway traction business. In 1948 the company was reorganised into four divisions: Communications, Broadcasting, Aeronautics and Radar. These had expanded to 13 manufacturing divisions by 1965 when a further reorganisation took place. The divisions were placed into three groups: Telecommunications, Components and Electronics.
At this time the Marconi Company had facilities at New Street Chelmsford, Baddow, Basildon, Billericay, and Writtle as well as in Wembley, Gateshead and Hackbridge. It also owned Marconi Instruments, Sanders Electronics, Eddystone Radio and Marconi Italiana (based in Genoa, Italy). In 1967 Marconi took over Stratton and Company to form Eddystone Radio.
Expansion in Canada
In 1903, Marconi founded the Marconi's Wireless Telegraph Company of Canada which was renamed as the Canadian Marconi Company in 1925. The radio business of the Canadian Marconi Company is known as Ultra Electronics TCS since 2002 and its avionic activities as CMC Electronics, owned by Esterline since 2007.
Expansion as GEC subsidiary
In 1967 or 1968, English Electric was subject to a takeover bid by the Plessey Company but chose instead to accept an offer from the General Electric Company (GEC). Under UK government pressure, the computer section of GEC, English Electric Leo Marconi (EELM), merged with International Computers and Tabulators (ICT) to form International Computers Limited (ICL). The computer interests of Elliott Automation which specialised in real-time computing were amalgamated with those of Marconi's Automation Division to form Marconi-Elliott Computers, later renamed as GEC Computers. In 1968 Marconi Space and Defence Systems and Marconi Underwater Systems were formed.
The Marconi Company continued as the primary defence subsidiary of GEC, GEC-Marconi. Marconi was renamed GEC-Marconi in 1987. During the period 1968–1999 GEC-Marconi/MES underwent significant expansion.
Acquisitions which were folded into the company and partnerships established included:
Defence operations of Associated Electrical Industries in 1968, AEI had been acquired in 1967.
Yarrow Shipbuilders in 1985
Ferranti defence businesses in 1990
Ferranti Dynamics in 1992
Vickers Shipbuilding and Engineering in 1995
Alenia Marconi Systems in 1998, a defence electronics company and an equal shares joint venture between GEC-Marconi and Finmeccanica's Alenia Difesa.
Tracor in 1998.
Other acquisitions included:
Divisions of Plessey in 1989 (others acquired by its partner in the deal, Siemens AG, to meet with regulatory approval).
Plessey Avionics
Plessey Naval Systems
Plessey Cryptography
Plessey Electronic Systems (75%)
Sippican
Leigh Instruments
In a major reorganisation of the company, GEC-Marconi was renamed Marconi Electronic Systems in 1996 and was separated from other non-defence assets.
Since 1999
In 1999, GEC was broken up and parts sold off. Marconi Electronic Systems, which included its wireless assets, was demerged and sold to British Aerospace which then formed BAE Systems.
GEC, realigning itself as a primarily telecommunications company following the MES sale, retained the Marconi brand and renamed itself Marconi plc. BAE were granted limited rights to continue use of the Marconi name in existing partnerships, which had ceased by 2005. Major spending and the dot-com collapse led to a major restructuring of the Marconi group in 2003: in a debt-for-equity swap, shareholders retained 0.5% of the new company, Marconi Corporation plc.
In October 2005 the Swedish firm Ericsson offered to buy the Marconi name and most of the assets. The transaction was completed on 23 January 2006, effective as of 1 January 2006. The remainder of the Marconi company, with some 2,000 staff working on telecommunications infrastructure in the UK and the Republic of Ireland, was renamed Telent.
See also
Aerospace industry in the United Kingdom
GEC-Marconi scientist deaths conspiracy theory
Marconiphone
Marconi-Osram Valve
Imperial Wireless Chain
Sinking of the RMS Titanic (section 14 April 1912)
References
Baker, W. J. (1970, 1996). History of the Marconi Company 1894–1965.
External links
Ericsson press release about the acquisition
Catalogue of the Marconi Archives At the Department of Special Collections and Western Manuscripts, Bodleian Library, University of Oxford
Marconi Calling The Life, Science and Achievements of Guglielmo Marconi
History of Marconi House
Electronics companies of the United Kingdom
Defunct computer companies of the United Kingdom
Defunct computer hardware companies
Telegraph companies of the United Kingdom
Defunct technology companies of the United Kingdom
Guglielmo Marconi
Companies based in Chelmsford
General Electric Company
History of radio technology
Radio manufacturers
Companies established in 1897 | Marconi Company | Engineering | 1,644 |
16,336,287 | https://en.wikipedia.org/wiki/Tesaglitazar | Tesaglitazar (also known as AZ 242) is a dual peroxisome proliferator-activated receptor agonist with affinity to PPARα and PPARγ, proposed for the management of type 2 diabetes.
The drug had completed several phase III clinical trials; however, in May 2006, AstraZeneca announced that it had discontinued further development.
Cardiac toxicity of tesaglitazar is related to mitochondrial toxicity caused by decrease in PPARγ coactivator 1-α (PPARGC1A, PGC1α) and sirtuin 1 (SIRT1).
References
Abandoned drugs
Carboxylic acids
Phenol ethers
PPAR agonists
Benzosulfones
Ethoxy compounds | Tesaglitazar | Chemistry | 152 |
69,141,235 | https://en.wikipedia.org/wiki/CAST-31 | CAST-31, Technical Clarifications Identified for RTCA DO-254 / EUROCAE ED-80 is a Certification Authorities Software Team (CAST) Position Paper. It is an FAA publication that "does not constitute official policy or guidance from any of the authorities", but is provided for educational and informational purposes only for applicants for software and hardware certification.
Contents
DO-254/ED-80 was introduced in 2000, but, unlike DO-178C, has not been updated to address concerns coming from decades of experience with applying the guidance of the standard; including errors, omissions, and advances in technology. This CAST Position Paper was created as both needed interim clarifications and a starting point for eventual development and release of an updated DO-254/ED-80.
Concerns identified in the Position Paper include:
a few known errors in the standard, largely concerning consistency within the document (definition of complex) and across related processes (usage of IDAL),
a compilation of recognized omissions to be added (notably, increased resolution in addressing hardware of ranging complexity, from extremely simple to highly complex) with identification of published sources for information on the omitted topics,
updates to the revision status of referenced publications that have been modified since the standard’s release, particularly ARP 4754A/ED79A and DO-178C, and
various additional content clarifications.
Where this Position Paper identifies an omission or need for clarification in DO-254, it generally identifies a section within either FAA Order 8110.105 or EASA CM-SWCEH-001 where the issue is discussed.
While DO-254/ED-80 has not been updated, this Position Paper is no longer provided on the FAA website because the "model for certification authority harmonization has changed since CAST's inception and now includes direct collaboration with industry on technical topics."
References
External links
Avionics
Safety
Software requirements
RTCA standards
Computer standards | CAST-31 | Technology,Engineering | 410 |
1,241,750 | https://en.wikipedia.org/wiki/Actinism | Actinism () is the property of solar radiation that leads to the production of photochemical and photobiological effects. Actinism is derived from the Ancient Greek ἀκτίς, ἀκτῖνος ("ray, beam"). The word actinism is found, for example, in the terminology of imaging technology (esp. photography), medicine (concerning sunburn), and chemistry (concerning containers that protect from photo-degradation), and the concept of actinism is applied, for example, in chemical photography and X-ray imaging.
Actinic () chemicals include silver salts used in photography and other light sensitive chemicals.
In chemistry
In chemical terms, actinism is the property of radiation that lets it be absorbed by a molecule and cause a photochemical reaction as a result. Albert Einstein was the first to correctly theorize that each photon would be able to cause only one molecular reaction. This distinction separates photochemical reactions from exothermic reduction reactions triggered by radiation.
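As an illustrative sketch of the energies involved (using only standard physical constants, not any particular photochemical system), the energy of a single photon, and of a mole of photons (sometimes called an "einstein"), follows from E = hc/λ:

```python
H = 6.626e-34     # Planck constant, J*s
C = 2.998e8       # speed of light, m/s
N_A = 6.022e23    # Avogadro constant, 1/mol

def photon_energy_joules(wavelength_m: float) -> float:
    """Energy of one photon of the given wavelength."""
    return H * C / wavelength_m

e = photon_energy_joules(420e-9)   # a 420 nm (violet-blue) photon
print(e)                           # ~4.7e-19 J per photon
print(e * N_A / 1000)              # ~285 kJ per mole of photons
```

On Einstein's one-photon-one-molecule principle, this per-mole figure is the energy delivered per mole of reacting molecules at that wavelength.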
For general purposes, photochemistry is the commonly used vernacular rather than actinic or actino-chemistry, which are again more commonly seen used for photography or imaging.
In medicine
In medicine, actinic effects are generally described in terms of the dermis or outer layers of the body, such as eyes (see: Actinic conjunctivitis) and upper tissues that the sun would normally affect, rather than deeper tissues that higher-energy shorter-wavelength radiation such as x-ray and gamma might affect. Actinic is also used to describe medical conditions that are triggered by exposure to light, especially UV light (see actinic keratosis).
The term actinic rays is used to refer to this phenomenon.
In biology
In biology, actinic light denotes light from solar or other sources that can cause photochemical reactions such as photosynthesis in a species.
In photography
Actinic light was first commonly used in early photography to distinguish light that would expose the monochrome films from light that would not. A non-actinic safe-light (e.g., red or amber) could be used in a darkroom without risk of exposing (fogging) light-sensitive films, plates or papers.
Early "non colour-sensitive" (NCS) films, plates and papers were only sensitive to the high-energy end of the visible spectrum from green to UV (shorter-wavelength light). This would render a print of the red areas as a very dark tone because the red light was not actinic. Typically, light from xenon flash lamps is highly actinic, as is daylight as both contain significant green-to-UV light.
In the first half of the 20th century, developments in film technology produced films sensitive to red and yellow light, known as orthochromatic and panchromatic, and extended that through to near infra-red light. These gave a truer reproduction of human perception of lightness across the color spectrum. In photography, therefore, actinic light must now be referenced to the photographic material in question.
In manufacturing
Actinic inspection of masks in computer chip manufacture refers to inspecting the mask with the same wavelength of light that the lithography system will use.
In aquaculture
Actinic lights are also common in the reef aquarium industry. They are used to promote coral and invertebrate growth. They are also used to accentuate the fluorescence of fluorescent fish.
Actinic lighting is also used to limit algae growth in the aquarium. Since algae (like many other plants) flourish in shallow, warm water rich in longer-wavelength light, they cannot photosynthesize effectively from blue and violet light, so actinic lighting minimizes their photosynthetic benefit.
Actinic lighting is also used as an alternative to black lights, as it provides a "night environment" for the fish while still allowing enough light for coral and other marine life to grow. Aesthetically, actinic lamps make fluorescent coral "pop" to the eye, and in some cases they are also used to promote the growth of deeper-water coral adapted to photosynthesis in regions of the ocean dominated by blue light.
In artificial lighting
"Actinic" lights are a high-color-temperature blue light. They are also used in electric fly killers to attract flies. The center wavelength for most actinic light products is 420 nanometers, with longer wavelengths regarded as "royal blue" (450nm) to sky blue (470nm) and cyan (490nm) and shorter wavelengths regarded as "violet" (400nm) and blacklight (365nm). Actinic light centered at 420nm may appear to the naked eye as a color between deep blue and violet.
See also
Spectral sensitivity is commonly used to describe the actinic responsivity of photographic materials.
Ionizing radiation
References
Electromagnetic radiation
Physical chemistry
Radiation
Science of photography
Lighting | Actinism | Physics,Chemistry | 1,012 |
36,929,648 | https://en.wikipedia.org/wiki/Phosphinimide%20ligands | Phosphinimide ligands, also known as phosphorane iminato ligands, are any of a class of organic compounds of the general formula NPR3−. The R groups represent organic substituents or, in rare cases, halides or NR2 groups. NPR3− is isoelectronic with phosphine oxides (OPR3) and siloxides ([OSiR3]−), but far more basic. By varying the R groups on P, a variety of ligands with different electronic and steric properties can be produced, and due to the high oxidation state of phosphorus, these ligands have good thermal stability. Many transition metal phosphinimide complexes have been well-developed as have main group phosphinimide complexes.
In main group phosphinimide complexes, only terminal and μ2-N-bridging bonding modes are observed. Terminally bound ligands are typically bent, with M-N-P bond angles ranging from 120° to 150°. Both the M-N and N-P bond lengths are consistent with double bonds. This bonding is best described as a covalent single bond with an additional polar contribution. The μ2-N-bridging mode arises when the free electron pair at nitrogen gives rise to dimerization.
These dimeric complexes yield different M-N bond lengths depending on the ligands present in the rest of the ligand sphere of M. When the complex contains two or four identical ligands, nearly equal M-N distances are observed, whereas, when different or odd-numbered identical ligands are in the complex, the M-N distances are all of significantly different length.
Synthesis
Phosphonimines with the formula R3P=NSiMe3 are particularly useful. They are prepared by the Staudinger reaction of tertiary phosphines with trimethylsilyl azide:
R3P + N3SiMe3 → R3P=NSiMe3 + N2
R3P=NSiMe3 undergoes alcoholysis to give the parent imine:
R3P=NSiMe3 + MeOH → R3P=NH + MeOSiMe3
Ammonia can be used in place of alcohol.
Lithium phosphinimides are produced by deprotonation of the parent imine:
R3P=NH + RLi → R3P=NLi + RH
The lithio derivatives, which exist as tetrameric clusters in the solid state, are useful reagents.
References
Ligands
Phosphorus compounds | Phosphinimide ligands | Chemistry | 535 |
14,263,752 | https://en.wikipedia.org/wiki/Sodium%20malate | Sodium malate is a compound with formula Na2(C2H4O(COO)2). It is the sodium salt of malic acid. As a food additive, it has the E number E350.
Properties
Sodium malate is an odorless white crystalline powder. It is freely soluble in water.
Use
It is used as an acidity regulator and flavoring agent. It tastes similar to sodium chloride (table salt).
References
Malates
Organic sodium salts
E-number additives | Sodium malate | Chemistry | 103 |
19,468,696 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20calculus | The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at each point in time) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). Roughly speaking, the two operations can be thought of as inverses of each other.
The first part of the theorem, the first fundamental theorem of calculus, states that for a continuous function f, an antiderivative or indefinite integral F can be obtained as the integral of f over an interval with a variable upper bound.
Conversely, the second part of the theorem, the second fundamental theorem of calculus, states that the integral of a function over a fixed interval is equal to the change of any antiderivative between the ends of the interval. This greatly simplifies the calculation of a definite integral provided an antiderivative can be found by symbolic integration, thus avoiding numerical integration.
History
The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The origins of differentiation likewise predate the fundamental theorem of calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the fundamental theorem of calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of gradients) are actually closely related.
From the conjecture and the proof of the fundamental theorem of calculus, calculus as a unified theory of integration and differentiation is started. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory (1638–1675). Isaac Barrow (1630–1677) proved a more generalized version of the theorem, while his student Isaac Newton (1642–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities and introduced the notation used today.
Geometric meaning/Proof
The first fundamental theorem may be interpreted as follows. Given a continuous function y = f(x) whose graph is plotted as a curve, one defines a corresponding "area function" A(x) such that A(x) is the area beneath the curve between 0 and x. The area A(x) may not be easily computable, but it is assumed to be well defined.
The area under the curve between x and x + h could be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x. In other words, the area of this "strip" would be A(x + h) − A(x).
There is another way to estimate the area of this same strip: h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as this strip. So:
A(x + h) − A(x) ≈ f(x) · h.
Dividing by h on both sides, we get:
(A(x + h) − A(x)) / h ≈ f(x).
This estimate becomes a perfect equality when h approaches 0:
A′(x) = lim_{h→0} (A(x + h) − A(x)) / h = f(x).
That is, the derivative of the area function A(x) exists and is equal to the original function f(x), so the area function is an antiderivative of the original function.
Thus, the derivative of the integral of a function (the area) is the original function, so that derivative and integral are inverse operations which reverse each other. This is the essence of the Fundamental Theorem.
Physical intuition
Intuitively, the fundamental theorem states that integration and differentiation are inverse operations which reverse each other.
The second fundamental theorem says that the sum of infinitesimal changes in a quantity (the integral of the derivative of the quantity) adds up to the net change in the quantity. To visualize this, imagine traveling in a car and wanting to know the distance traveled (the net change in position along the highway). You can see the velocity on the speedometer but cannot look out to see your location. Each second, you can find how far the car has traveled using distance = velocity × time, that is, multiplying the current speed v(t) (in kilometers or miles per hour) by the time interval Δt (1 second = 1/3600 hour). By summing up all these small steps, you can approximate the total distance traveled, in spite of not looking outside the car:
distance traveled ≈ Σ v(t) · Δt.
As Δt becomes infinitesimally small, the summing up corresponds to integration. Thus, the integral of the velocity function (the derivative of position) computes how far the car has traveled (the net change in position).
The first fundamental theorem says that the value of any function is the rate of change (the derivative) of its integral from a fixed starting point up to any chosen end point. Continuing the above example using a velocity as the function, you can integrate it from the starting time up to any given time to obtain a distance function whose derivative is that velocity. (To obtain your highway-marker position, you would need to add your starting position to this integral and to take into account whether your travel was in the direction of increasing or decreasing mile markers.)
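The odometer picture can be checked numerically. The Python sketch below is illustrative only; the velocity function v(t) = 3t² is an arbitrary example, chosen because its antiderivative t³ is easy to state. Summing v·Δt over many small time steps approaches the exact net change given by the antiderivative, as the second part of the theorem asserts.

# Approximate the net change in position by summing v(t) * dt over small steps,
# and compare with the exact value F(b) - F(a) from an antiderivative F of v.
def v(t):
    return 3 * t ** 2      # example velocity

def F(t):
    return t ** 3          # an antiderivative of v (the position)

a, b, n = 0.0, 2.0, 100000
dt = (b - a) / n
riemann_sum = sum(v(a + i * dt) * dt for i in range(n))
print(riemann_sum)         # about 7.99988; approaches 8 as n grows
print(F(b) - F(a))         # exactly 8.0, by the second fundamental theorem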
Formal statements
There are two parts to the theorem. The first part deals with the derivative of an antiderivative, while the second part deals with the relationship between antiderivatives and definite integrals.
First part
This part is sometimes referred to as the first fundamental theorem of calculus.
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
F(x) = ∫_a^x f(t) dt.
Then F is uniformly continuous on [a, b] and differentiable on the open interval (a, b), and
F′(x) = f(x)
for all x in (a, b), so F is an antiderivative of f.
Corollary
The fundamental theorem is often employed to compute the definite integral of a function f for which an antiderivative F is known. Specifically, if f is a real-valued continuous function on [a, b] and F is an antiderivative of f in [a, b], then
∫_a^b f(t) dt = F(b) − F(a).
The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem.
Second part
This part is sometimes referred to as the second fundamental theorem of calculus or the Newton–Leibniz theorem.
Let f be a real-valued function on a closed interval [a, b] and F a continuous function on [a, b] which is an antiderivative of f in (a, b):
F′(x) = f(x).
If f is Riemann integrable on [a, b] then
∫_a^b f(x) dx = F(b) − F(a).
The second part is somewhat stronger than the corollary because it does not assume that f is continuous.
When an antiderivative F of f exists, then there are infinitely many antiderivatives for f, obtained by adding an arbitrary constant to F. Also, by the first part of the theorem, antiderivatives of f always exist when f is continuous.
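Both parts can also be checked symbolically for a concrete function. The Python sketch below is illustrative only; it uses the sympy computer-algebra library and the arbitrary example f(t) = cos(t) to differentiate an integral with a variable upper bound (first part) and to evaluate a definite integral from an antiderivative (second part / corollary).

# Symbolic check of both parts of the theorem for f(t) = cos(t), using sympy.
import sympy as sp

x, t, a, b = sp.symbols('x t a b')
f = sp.cos(t)

# First part: d/dx of the integral of f from a to x equals f(x).
F = sp.integrate(f, (t, a, x))        # sin(x) - sin(a)
print(sp.simplify(sp.diff(F, x)))     # cos(x)

# Second part / corollary: the definite integral equals F(b) - F(a)
# for an antiderivative F of f.
antiderivative = sp.sin(t)
lhs = sp.integrate(f, (t, a, b))
rhs = antiderivative.subs(t, b) - antiderivative.subs(t, a)
print(sp.simplify(lhs - rhs))         # 0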
Proof of the first part
For a given function , define the function as
For any two numbers and in , we have
the latter equality resulting from the basic properties of integrals and the additivity of areas.
According to the mean value theorem for integration, there exists a real number such that
It follows that
and thus that
Taking the limit as and keeping in mind that one gets
that is,
according to the definition of the derivative, the continuity of , and the squeeze theorem.
Proof of the corollary
Suppose is an antiderivative of , with continuous on . Let
By the first part of the theorem, we know is also an antiderivative of . Since the mean value theorem implies that is a constant function, that is, there is a number such that for all in . Letting , we have
which means . In other words, , and so
Proof of the second part
This is a limit proof by Riemann sums.
To begin, we recall the mean value theorem. Stated briefly, if is continuous on the closed interval and differentiable on the open interval , then there exists some in such that
Let be (Riemann) integrable on the interval , and let admit an antiderivative on such that is continuous on . Begin with the quantity . Let there be numbers such that
It follows that
Now, we add each along with its additive inverse, so that the resulting quantity is equal:
The above quantity can be written as the following sum:
The function is differentiable on the interval and continuous on the closed interval ; therefore, it is also differentiable on each interval and continuous on each interval . According to the mean value theorem (above), for each there exists a in such that
Substituting the above into (), we get
The assumption implies Also, can be expressed as of partition .
We are describing the area of a rectangle, with the width times the height, and we are adding the areas together. Each rectangle, by virtue of the mean value theorem, describes an approximation of the curve section it is drawn over. Also need not be the same for all values of , or in other words that the width of the rectangles can differ. What we have to do is approximate the curve with rectangles. Now, as the size of the partitions get smaller and increases, resulting in more partitions to cover the space, we get closer and closer to the actual area of the curve.
By taking the limit of the expression as the norm of the partitions approaches zero, we arrive at the Riemann integral. We know that this limit exists because was assumed to be integrable. That is, we take the limit as the largest of the partitions approaches zero in size, so that all other partitions are smaller and the number of partitions approaches infinity.
So, we take the limit on both sides of (). This gives us
Neither nor is dependent on , so the limit on the left side remains .
The expression on the right side of the equation defines the integral over from to . Therefore, we obtain
which completes the proof.
Relationship between the parts
As discussed above, a slightly weaker version of the second part follows from the first part.
Similarly, it almost looks like the first part of the theorem follows directly from the second. That is, suppose is an antiderivative of . Then by the second theorem, . Now, suppose . Then has the same derivative as , and therefore . This argument only works, however, if we already know that has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the Fundamental Theorem.
For example, if , then has an antiderivative, namely
and there is no simpler expression for this function. It is therefore important not to interpret the second part of the theorem as the definition of the integral. Indeed, there are many functions that are integrable but lack elementary antiderivatives, and discontinuous functions can be integrable but lack any antiderivatives at all. Conversely, many functions that have antiderivatives are not Riemann integrable (see Volterra's function).
Examples
Computing a particular integral
Suppose the following is to be calculated:
Here, and we can use as the antiderivative. Therefore:
Using the first part
Suppose
is to be calculated. Using the first part of the theorem with gives
This can also be checked using the second part of the theorem. Specifically, is an antiderivative of , so
An integral where the corollary is insufficient
Suppose
Then is not continuous at zero. Moreover, this is not just a matter of how is defined at zero, since the limit as of does not exist. Therefore, the corollary cannot be used to compute
But consider the function
Notice that is continuous on (including at zero by the squeeze theorem), and is differentiable on with Therefore, part two of the theorem applies, and
Theoretical example
The theorem can be used to prove that
Since,
the result follows from,
Generalizations
The function does not have to be continuous over the whole interval. Part I of the theorem then says: if is any Lebesgue integrable function on and is a number in such that is continuous at , then
is differentiable for with . We can relax the conditions on still further and suppose that it is merely locally integrable. In that case, we can conclude that the function is differentiable almost everywhere and almost everywhere. On the real line this statement is equivalent to Lebesgue's differentiation theorem. These results remain true for the Henstock–Kurzweil integral, which allows a larger class of integrable functions.
In higher dimensions Lebesgue's differentiation theorem generalizes the Fundamental theorem of calculus by stating that for almost every , the average value of a function over a ball of radius centered at tends to as tends to 0.
Part II of the theorem is true for any Lebesgue integrable function , which has an antiderivative (not all integrable functions do, though). In other words, if a real function on admits a derivative at every point of and if this derivative is Lebesgue integrable on , then
This result may fail for continuous functions that admit a derivative at almost every point , as the example of the Cantor function shows. However, if is absolutely continuous, it admits a derivative at almost every point , and moreover is integrable, with equal to the integral of on . Conversely, if is any integrable function, then as given in the first formula will be absolutely continuous with almost everywhere.
The conditions of this theorem may again be relaxed by considering the integrals involved as Henstock–Kurzweil integrals. Specifically, if a continuous function admits a derivative at all but countably many points, then is Henstock–Kurzweil integrable and is equal to the integral of on . The difference here is that the integrability of does not need to be assumed.
The version of Taylor's theorem that expresses the error term as an integral can be seen as a generalization of the fundamental theorem.
There is a version of the theorem for complex functions: suppose is an open set in and is a function that has a holomorphic antiderivative on . Then for every curve , the curve integral can be computed as
The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds. One such generalization offered by the calculus of moving surfaces is the time evolution of integrals. The most familiar extensions of the fundamental theorem of calculus in higher dimensions are the divergence theorem and the gradient theorem.
One of the most powerful generalizations in this direction is the generalized Stokes theorem (sometimes known as the fundamental theorem of multivariable calculus): Let be an oriented piecewise smooth manifold of dimension and let be a smooth compactly supported -form on . If denotes the boundary of given its induced orientation, then
Here is the exterior derivative, which is defined using the manifold structure only.
The theorem is often used in situations where is an embedded oriented submanifold of some bigger manifold (e.g. ) on which the form is defined.
The fundamental theorem of calculus allows us to pose a definite integral as a first-order ordinary differential equation.
The definite integral ∫_a^b f(x) dx can be posed as the initial value problem
y′(x) = f(x), y(a) = 0,
with y(b) as the value of the integral.
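Under this reading, any standard initial-value-problem solver can be used to evaluate a definite integral. The Python sketch below is illustrative only; it uses scipy's solve_ivp and the arbitrary integrand f(x) = exp(−x²), whose integral over [0, 1] has no elementary closed form.

# Evaluate a definite integral by solving y'(x) = f(x), y(a) = 0;
# y(b) is then the value of the integral.
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    return np.exp(-x ** 2)

a, b = 0.0, 1.0
sol = solve_ivp(lambda x, y: [f(x)], (a, b), [0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])   # about 0.746824, the value of the integral from 0 to 1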
See also
Differentiation under the integral sign
Telescoping series
Fundamental theorem of calculus for line integrals
Notation for differentiation
Notes
References
Bibliography
.
.
.
Further reading
.
.
Malet, A., Studies on James Gregorie (1638-1675) (PhD Thesis, Princeton, 1989).
Hernandez Rodriguez, O. A.; Lopez Fernandez, J. M. . "Teaching the Fundamental Theorem of Calculus: A Historical Reflection", Loci: Convergence (MAA), January 2012.
.
.
External links
James Gregory's Euclidean Proof of the Fundamental Theorem of Calculus at Convergence
Isaac Barrow's proof of the Fundamental Theorem of Calculus
Fundamental Theorem of Calculus at imomath.com
Alternative proof of the fundamental theorem of calculus
Fundamental Theorem of Calculus MIT.
Fundamental Theorem of Calculus Mathworld.
Articles containing proofs
Theorems in calculus
Theorems in real analysis | Fundamental theorem of calculus | Mathematics | 3,237 |
3,441,703 | https://en.wikipedia.org/wiki/DO-219 | DO-219 is a communications standard published by RTCA, Incorporated. It contains Minimum Operational Performance Standards (MOPS) for aircraft equipment required for Air Traffic Control (ATC) Two-Way Data Link Communications (TWDL) services. TWDL Services are one element of Air Traffic Services Communication (ATSC). ATSC addressing requirements are supported by the Context Management (CM) Service. The Aeronautical Telecommunications Network (ATN) provides the media and protocols to conduct data link Air Traffic Services Communication.
Outline of Contents
Purpose and Scope
Performance requirements and Test Procedures
Installed Equipment Performance
Operational Characteristics
Appendix A: ATC Two-Way Data Link Communications Message Set
Appendix B: ATC Two-Way Data Link Communications Data Structures Glossary
Appendix C: An Overview of the Packed encoding Rules ISO PDIS 8825-2; PER Unaligned
Appendix D: A Guide for Encoding and Decoding the RTCA SC-169 Message Set for ATC 2-Way Data Link, According to the Packed Encoding Rules
Appendix E: ATC Two-Way Data Link Communications Sample Messages
Appendix F: State Table
See also
Air traffic control
Aeronautical Telecommunications Network
ACARS
RTCA standards
Avionics | DO-219 | Technology | 238 |
3,225,985 | https://en.wikipedia.org/wiki/Proth%27s%20theorem | In number theory, Proth's theorem is a primality test for Proth numbers.
It states that if p is a Proth number, of the form k·2^n + 1 with k odd and k < 2^n, and if there exists an integer a for which
a^((p−1)/2) ≡ −1 (mod p),
then p is prime. In this case p is called a Proth prime. This is a practical test because if p is prime, any chosen a has about a 50 percent chance of working. Furthermore, since the calculation is mod p, only values of a smaller than p have to be taken into consideration.
In practice, however, a quadratic nonresidue of p is found via a modified Euclid's algorithm and taken as the value of a, since for a quadratic nonresidue the converse of the theorem also holds, making the test conclusive. For such an a the Legendre symbol is
(a/p) = −1.
Thus, in contrast to many Monte Carlo primality tests (randomized algorithms that can return a false positive), the primality testing algorithm based on Proth's theorem is a Las Vegas algorithm, always returning the correct answer but with a running time that varies randomly. Note that if a is chosen to be a quadratic nonresidue as described above, the runtime is constant, save for the time spent on finding such a quadratic nonresidue. Finding such a value is very fast compared to the actual test.
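A direct implementation is short, since the only expensive step is a single modular exponentiation per candidate witness. The Python sketch below is illustrative only; rather than constructing a quadratic nonresidue, it tries random witnesses a, so a composite Proth number is reported as "probably composite" after a fixed number of rounds instead of with certainty.

import random

def is_proth_number(p):
    # True if p = k * 2**n + 1 with k odd and k < 2**n.
    if p <= 1 or p % 2 == 0:
        return False
    k, n = p - 1, 0
    while k % 2 == 0:
        k //= 2
        n += 1
    return k < 2 ** n

def proth_test(p, rounds=64):
    # Proth's theorem: if a**((p-1)//2) == -1 (mod p) for some a, then p is prime.
    # For a prime p, a random a works with probability about 1/2 per round.
    if not is_proth_number(p):
        raise ValueError("not a Proth number")
    for _ in range(rounds):
        a = random.randrange(2, p)
        if pow(a, (p - 1) // 2, p) == p - 1:
            return "prime"
    return "probably composite"

print([q for q in range(3, 1000) if is_proth_number(q) and proth_test(q) == "prime"])
# Expected to reproduce the list of Proth primes below 1000 given in this article.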
Numerical examples
Examples of the theorem include:
for p = 3 = 1·2^1 + 1, we have that 2^((3−1)/2) + 1 = 3 is divisible by 3, so 3 is prime.
for p = 5 = 1·2^2 + 1, we have that 3^((5−1)/2) + 1 = 10 is divisible by 5, so 5 is prime.
for p = 13 = 3·2^2 + 1, we have that 5^((13−1)/2) + 1 = 15626 is divisible by 13, so 13 is prime.
for p = 9 = 1·2^3 + 1, which is not prime, there is no a such that a^((9−1)/2) + 1 is divisible by 9.
The first Proth primes are :
3, 5, 13, 17, 41, 97, 113, 193, 241, 257, 353, 449, 577, 641, 673, 769, 929, 1153 ….
The largest known Proth prime is 10223 · 2^31172165 + 1, which is 9,383,761 digits long. It was found by Peter Szabolcs in the PrimeGrid volunteer computing project, which announced it on 6 November 2016. It is the 11th-largest known prime number as of January 2024; it was the largest known non-Mersenne prime until being surpassed in 2023, and it is the largest Colbert number. The second-largest known Proth prime was also found by PrimeGrid.
Proof
The proof for this theorem uses the Pocklington-Lehmer primality test, and closely resembles the proof of Pépin's test. The proof can be found on page 52 of the book by Ribenboim in the references.
History
François Proth (1852–1879) published the theorem in 1878.
See also
Pépin's test (the special case k = 1, where one chooses a = 3)
Sierpinski number
References
External links
Primality tests
Theorems about prime numbers | Proth's theorem | Mathematics | 731
53,616,523 | https://en.wikipedia.org/wiki/Naira%20Hovakimyan | Naira Hovakimyan (born September 21, 1966) is an Armenian control theorist who holds the W. Grafton and Lillian B. Wilkins professorship of the Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. She is the director of AVIATE Center of flying cars at UIUC, funded through a NASA University Leadership Initiative. She was the inaugural director of the Intelligent Robotics Laboratory during 2015–2017, associated with the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign.
Education
Naira Hovakimyan received her MS degree in Theoretical Mechanics and Applied Mathematics in 1988 from Yerevan State University in Armenia. She got her Ph.D. in Physics and Mathematics in 1992, in Moscow, from the Institute of Applied Mathematics of Russian Academy of Sciences, majoring in optimal control and differential games.
Academic life
Before joining the faculty of the University of Illinois at Urbana–Champaign in 2008, Naira Hovakimyan spent time as a research scientist at Stuttgart University in Germany, at INRIA in France, and at the Georgia Institute of Technology, and she was on the faculty of Aerospace and Ocean Engineering at Virginia Tech from 2003 to 2008. She is currently the W. Grafton and Lillian B. Wilkins Professor of Mechanical Science and Engineering at UIUC. In 2015, she was named the inaugural director of the Intelligent Robotics Laboratory of CSL at UIUC. She is currently the director of the AVIATE Center for flying cars at UIUC, funded through a NASA University Leadership Initiative. She has co-authored two books, ten book chapters, eleven patents, and more than 500 journal and conference papers.
Research areas
Her research interests are in control and optimization, autonomous systems, machine learning, cybersecurity, neural networks, game theory and their applications in aerospace, robotics, mechanical, agricultural, electrical, petroleum, biomedical engineering and elderly care.
Honors
She is the 2011 recipient of the AIAA Mechanics and Control of Flight Award, the 2015 recipient of the SWE Achievement Award, the 2017 recipient of the IEEE CSS Award for Technical Excellence in Aerospace Controls, and the 2019 recipient of the AIAA Pendray Aerospace Literature Award. In 2014 she was awarded the Humboldt Prize for her lifetime achievements and was recognized as a Hans Fischer Senior Fellow of the Technical University of Munich. She is a Fellow and life member of AIAA, a Fellow of IEEE, a Fellow of ASME, and a Senior Member of the National Academy of Inventors. In 2015 and 2023 she was recognized as an outstanding advisor by the Engineering Council of UIUC. In 2024 she received the College Award for Excellence in Translational Research. Hovakimyan is a co-founder and Chief Scientist of IntelinAir.
She was named the 2017 Commencement Speaker of the American University of Armenia. She has been listed among 50 Global Armenians in the world by Mediamax and was a member of the FAST (Foundation for Armenian Science and Technology) advisory board. She also advises several startup companies.
In 2021 she was one of the speakers of TEDxYerevan event. In 2022, she was awarded a Fulbright fellowship from the US Department of State. In 2022, she founded the AVIATE Center of flying cars at UIUC.
References
External links
http://naira-hovakimyan.mechse.illinois.edu/
https://aviate.illinois.edu/
Google Scholar
Research group website
https://csl.illinois.edu/directory/profile/nhovakim
http://mechse.illinois.edu/directory/faculty/nhovakim
University of Illinois faculty
1966 births
Living people
Game theorists
Control theorists
Fellows of the American Institute of Aeronautics and Astronautics
Fellows of the IEEE | Naira Hovakimyan | Mathematics,Engineering | 739 |
4,385,154 | https://en.wikipedia.org/wiki/Site-specific%20recombinase%20technology | Site-specific recombinase technologies are genome engineering tools that depend on recombinase enzymes to replace targeted sections of DNA.
History
In the late 1980s, gene targeting in murine embryonic stem cells (ESCs) enabled the transmission of mutations into the mouse germ line and emerged as a novel option to study the genetic basis of regulatory networks as they exist in the genome. Still, classical gene targeting proved to be limited in several ways, as gene functions became irreversibly destroyed by the marker gene that had to be introduced for selecting recombinant ESCs. These early approaches produced animals in which the mutation was present in all cells of the body from the beginning, leading to complex phenotypes and/or early lethality. There was a clear need for methods to restrict these mutations to specific points in development and to specific cell types. This became possible when groups in the USA introduced bacteriophage- and yeast-derived site-specific recombination (SSR) systems into mammalian cells as well as into the mouse.
Classification, properties and dedicated applications
Common genetic engineering strategies require a permanent modification of the target genome. To this end great sophistication has to be invested in the design of routes applied for the delivery of transgenes. Although for biotechnological purposes random integration is still common, it may result in unpredictable gene expression due to variable transgene copy numbers, lack of control about integration sites and associated mutations. The molecular requirements in the stem cell field are much more stringent. Here, homologous recombination (HR) can, in principle, provide specificity to the integration process, but for eukaryotes it is compromised by an extremely low efficiency. Although meganucleases, zinc-finger- and transcription activator-like effector nucleases (ZFNs and TALENs) are actual tools supporting HR, it was the availability of site-specific recombinases (SSRs) which triggered the rational construction of cell lines with predictable properties. Nowadays both technologies, HR and SSR can be combined in highly efficient "tag-and-exchange technologies".
Many site-specific recombination systems have been identified to perform these DNA rearrangements for a variety of purposes, but nearly all of these belong to either of two families, tyrosine recombinases (YR) and serine recombinases (SR), depending on their mechanism. These two families can mediate up to three types of DNA rearrangements (integration, excision/resolution, and inversion) along different reaction routes based on their origin and architecture.
The founding member of the YR family is the lambda integrase, encoded by bacteriophage λ, enabling the integration of phage DNA into the bacterial genome. A common feature of this class is a conserved tyrosine nucleophile attacking the scissile DNA-phosphate to form a 3'-phosphotyrosine linkage. Early members of the SR family are closely related resolvase / DNA invertases from the bacterial transposons Tn3 and γδ, which rely on a catalytic serine responsible for attacking the scissile phosphate to form a 5'-phosphoserine linkage. These undisputed facts, however, were compromised by a good deal of confusion at the time other members entered the scene, for instance the YR recombinases Cre and Flp (capable of integration, excision/resolution as well as inversion), which were nevertheless welcomed as new members of the "integrase family". The converse examples are PhiC31 and related SRs, which were originally introduced as resolvase/invertases although, in the absence of auxiliary factors, integration is their only function. Nowadays the standard activity of each enzyme determines its classification reserving the general term "recombinase" for family members which, per se, comprise all three routes, INT, RES and INV:
Our table extends the selection of the conventional SSR systems and groups these according to their performance. All of these enzymes recombine two target sites, which are either identical (subfamily A1) or distinct (phage-derived enzymes in A2, B1 and B2). Whereas for A1 these sites have individual designations ("FRT" in case of Flp-recombinase, loxP for Cre-recombinase), the terms "attP" and "attB" (attachment sites on the phage and bacterial part, respectively) are valid in the other cases. In case of subfamily A1 we have to deal with short (usually 34 bp-) sites consisting of two (near-)identical 13 bp arms (arrows) flanking an 8 bp spacer (the crossover region, indicated by red line doublets). Note that for Flp there is an alternative, 48 bp site available with three arms, each accommodating a Flp unit (a so-called "protomer"). attP- and attB-sites follow similar architectural rules, but here the arms show only partial identity (indicated by the broken lines) and differ in both cases. These features account for relevant differences:
recombination of two identical educt sites leads to product sites with the same composition, although they contain arms from both substrates; these conversions are reversible;
in case of attP x attB recombination crossovers can only occur between these complementary partners in processes that lead to two different products (attP x attB → attR + attL) in an irreversible fashion.
In order to streamline this chapter the following implementations will be focused on two recombinases (Flp and Cre) and just one integrase (PhiC31) since their spectrum covers the tools which, at present, are mostly used for directed genome modifications. This will be done in the framework of the following overview.
Reaction routes
The mode integration/resolution and inversion (INT/RES and INV) depend on the orientation of recombinase target sites (RTS), among these pairs of attP and attB. Section C indicates, in a streamlined fashion, the way recombinase-mediated cassette exchange (RMCE) can be reached by synchronous double-reciprocal crossovers (rather than integration, followed by resolution).
Tyr-Recombinases are reversible, while the Ser-Integrase is unidirectional. Of note is the way reversible Flp (a Tyr recombinase) integration/resolution is modulated by 48 bp (in place of 34 bp minimal) FRT versions: the extra 13 bp arm serves as a Flp "landing path" contributing to the formation of the synaptic complex, both in the context of Flp-INT and Flp-RMCE functions (see the respective equilibrium situations). While it is barely possible to prevent the (entropy-driven) reversion of integration in section A for Cre and hard to achieve for Flp, RMCE can be completed if the donor plasmid is provided at an excess due to the bimolecular character of both the forward- and the reverse reaction. Posing both FRT sites in an inverse manner will lead to an equilibrium of both orientations for the insert (green arrow). In contrast to Flp, the Ser integrase PhiC31 (bottom representations) leads to unidirectional integration, at least in the absence of an recombinase-directionality (RDF-)factor. Relative to Flp-RMCE, which requires two different ("heterospecific") FRT-spacer mutants, the reaction partner (attB) of the first reacting attP site is hit arbitrarily, such that there is no control over the direction the donor cassette enters the target (cf. the alternative products). Also different from Flp-RMCE, several distinct RMCE targets cannot be mounted in parallel, owing to the lack of heterospecific (non-crossinteracting) attP/attB combinations.
Cre recombinase
Cre recombinase (Cre) is able to recombine specific sequences of DNA without the need for cofactors. The enzyme recognizes 34 base pair DNA sequences called loxP ("locus of crossover in phage P1"). Depending on the orientation of target sites with respect to one another, Cre will integrate/excise or invert DNA sequences. Upon the excision (called "resolution" in case of a circular substrate) of a particular DNA region, normal gene expression is considerably compromised or terminated.
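The 13 bp arms of the loxP site are inverted repeats flanking an asymmetric 8 bp spacer, and it is the spacer that gives the site its orientation and thereby determines whether Cre excises or inverts the intervening DNA. The Python sketch below is illustrative only; the 34 bp sequence is the commonly cited loxP sequence from the literature rather than one given in this article.

# The commonly cited 34 bp loxP site: two 13 bp arms flanking an 8 bp spacer.
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"
left_arm, spacer, right_arm = LOXP[:13], LOXP[13:21], LOXP[21:]

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

print(len(LOXP))                                  # 34
print(left_arm, spacer, right_arm)                # 13 bp arm, 8 bp spacer, 13 bp arm
print(reverse_complement(right_arm) == left_arm)  # True: the arms are inverted repeats
# The asymmetric spacer defines the orientation of the site: two loxP sites in
# the same orientation lead to excision, opposite orientations lead to inversion.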
Due to the pronounced resolution activity of Cre, one of its initial applications was the excision of loxP-flanked ("floxed") genes leading to cell-specific gene knockout of such a floxed gene after Cre becomes expressed in the tissue of interest. Current technologies incorporate methods, which allow for both the spatial and temporal control of Cre activity. A common method facilitating the spatial control of genetic alteration involves the selection of a tissue-specific promoter to drive Cre expression. Placement of Cre under control of such a promoter results in localized, tissue-specific expression. As an example, Leone et al. have placed the transcription unit under the control of the regulatory sequences of the myelin proteolipid protein (PLP) gene, leading to induced removal of targeted gene sequences in oligodendrocytes and Schwann cells. The specific DNA fragment recognized by Cre remains intact in cells, which do not express the PLP gene; this in turn facilitates empirical observation of the localized effects of genome alterations in the myelin sheath that surround nerve fibers in the central nervous system (CNS) and the peripheral nervous system (PNS). Selective Cre expression has been achieved in many other cell types and tissues as well.
In order to control temporal activity of the excision reaction, forms of Cre which take advantage of various ligand binding domains have been developed. One successful strategy for inducing specific temporal Cre activity involves fusing the enzyme with a mutated ligand-binding domain for the human estrogen receptor (ERt). Upon the introduction of tamoxifen (an estrogen receptor antagonist), the Cre-ERt construct is able to penetrate the nucleus and induce targeted mutation. ERt binds tamoxifen with greater affinity than endogenous estrogens, which allows Cre-ERt to remain cytoplasmic in animals untreated with tamoxifen. The temporal control of SSR activity by tamoxifen permits genetic changes to be induced later in embryogenesis and/or in adult tissues. This allows researchers to bypass embryonic lethality while still investigating the function of targeted genes.
Recent extensions of these general concepts led to generating the "Cre-zoo", i.e. collections of hundreds of mouse strains for which defined genes can be deleted by targeted Cre expression.
Flp recombinase
In its natural host (S. cerevisiae) the Flp/FRT system enables replication of a "2μ plasmid" by the inversion of a segment that is flanked by two identical, but oppositely oriented FRT sites ("flippase" activity). This inversion changes the relative orientation of replication forks within the plasmid enabling "rolling circle"—amplification of the circular 2μ entity before the multimeric intermediates are resolved to release multiple monomeric products. Whereas 34 bp minimal FRT sites favor excision/resolution to a similar extent as the analogue loxP sites for Cre, the natural, more extended 48 bp FRT variants enable a higher degree of integration, while overcoming certain promiscuous interactions as described for phage enzymes like Cre- and PhiC31. An additional advantage is the fact, that simple rules can be applied to generate heterospecific FRT sites which undergo crossovers with equal partners but nor with wild type FRTs. These facts have enabled, since 1994, the development and continuous refinements of recombinase-mediated cassette exchange (RMCE-)strategies permitting the clean exchange of a target cassette for an incoming donor cassette.
Based on the RMCE technology, a particular resource of pre-characterized ES-strains that lends itself to further elaboration has evolved in the framework of the EUCOMM (European Conditional Mouse Mutagenesis) program, based on the now established Cre- and/or Flp-based "FlExing" (Flp-mediated excision/inversion) setups, involving the excision and inversion activities. Initiated in 2005, this project focused first on saturation mutagenesis to enable complete functional annotation of the mouse genome (coordinated by the International Knockout-Mouse Consortium, IKMC) with the ultimate goal to have all protein genes mutated via gene trapping and -targeting in murine ES cells. These efforts mark the top of various "tag-and-exchange" strategies, which are dedicated to tagging a distinct genomic site such that the "tag" can serve as an address to introduce novel (or alter existing) genetic information. The tagging step per se may address certain classes of integration sites by exploiting integration preferences of retroviruses or even site specific integrases like PhiC31, both of which act in an essentially unidirectional fashion.
The traditional, laborious "tag-and-exchange" procedures relied on two successive homologous recombination (HR) steps: the first one ("HR1") introduced a tag consisting of a selection marker gene, and "HR2" was then used to replace the marker by the gene of interest ("GOI"). In the first ("knock-out") reaction the gene was tagged with a selectable marker, typically by insertion of a hygtk (+/-) cassette providing G418 resistance. In the following "knock-in" step, the tagged genomic sequence was replaced by homologous genomic sequences carrying certain mutations. Cell clones could then be isolated by their resistance to ganciclovir due to loss of the HSV-tk gene ("negative selection"). This conventional two-step tag-and-exchange procedure could be streamlined after the advent of RMCE, which could take over and add efficiency to the knock-in step.
PhiC31 integrase
Without much doubt, Ser integrases are the current tools of choice for integrating transgenes into a restricted number of well-understood genomic acceptor sites that mostly (but not always) mimic the phage attP site in that they attract an attB-containing donor vector. At this time the most prominent member is PhiC31-INT with proven potential in the context of human and mouse genomes.
Contrary to the above Tyr recombinases, PhiC31-INT as such acts in a unidirectional manner, firmly locking in the donor vector at a genomically anchored target. An obvious advantage of this system is that it can rely on unmodified, native attP (acceptor) and attB donor sites. Additional benefits (together with certain complications) may arise from the fact that mouse and human genomes per se contain a limited number of endogenous targets (so called "attP-pseudosites"). Available information suggests that considerable DNA sequence requirements let the integrase recognize fewer sites than retroviral or even transposase-based integration systems opening its career as a superior carrier vehicle for the transport and insertion at a number of well established genomic sites, some of which with so called "safe-harbor" properties.
Exploiting the fact of specific (attP x attB) recombination routes, RMCE becomes possible without requirements for synthetic, heterospecific att-sites. This obvious advantage, however comes at the expense of certain shortcomings, such as lack of control about the kind or directionality of the entering (donor-) cassette. Further restrictions are imposed by the fact that irreversibility does not permit standard multiplexing-RMCE setups including "serial RMCE" reactions, i.e., repeated cassette exchanges at a given genomic locus.
Outlook and perspectives
Annotation of the human and mouse genomes has led to the identification of >20 000 protein-coding genes and >3 000 noncoding RNA genes, which guide the development of the organism from fertilization through embryogenesis to adult life. Although dramatic progress is noted, the relevance of rare gene variants has remained a central topic of research.
As one of the most important platforms for dealing with vertebrate gene functions on a large scale, genome-wide genetic resources of mutant murine ES cells have been established. To this end four international programs aimed at saturation mutagenesis of the mouse genome have been founded in Europe and North America (EUCOMM, KOMP, NorCOMM, and TIGM). Coordinated by the International Knockout Mouse Consortium (IKSC) these ES-cell repositories are available for exchange between international research units. Present resources comprise mutations in 11 539 unique genes, 4 414 of these conditional.
The relevant technologies have now reached a level permitting their extension to other mammalian species and to human stem cells, most prominently those with an iPS (induced pluripotent) status.
See also
Site-specific recombination
Recombinase-mediated cassette exchange
Cre recombinase
Cre-Lox recombination
FLP-FRT recombination
Genetic recombination
Homologous recombination
References
External links
http://www.knockoutmouse.org/
Genetic engineering | Site-specific recombinase technology | Chemistry,Engineering,Biology | 3,728 |
1,190,570 | https://en.wikipedia.org/wiki/Pocket%20park | A pocket park (also known as a parkette, mini-park, vest-pocket park or vesty park) is a small park accessible to the general public. While the locations, elements, and uses of pocket parks vary considerably, the common defining characteristic of a pocket park is its small size. Typically, a pocket park occupies one to three municipal lots and is smaller than in size.
Pocket parks can be urban, suburban or rural, but they customarily appear in densely urbanized areas, where land is very expensive and space for the development of larger urban parks is limited. They are frequently created on small, irregular pieces of public or private land, such as in vacant building lots, in brownfields, beside railways, beneath utility lines, or in parking spots.
Pocket parks can create new public spaces without the need for large-scale redevelopment. In inner-city areas, pocket parks are often part of urban regeneration efforts by transforming underutilized or blighted spaces into vibrant community assets. They may also be created as a component of the public space requirement of large building projects.
Pocket parks can serve as focal points of activity and interest in urban areas. Common elements of pocket parks include benches, tables, fountains, playgrounds, monuments, historic markers, art installations, barbecue pits, flower beds, community gardens, or basketball courts. Although they are often too small for many space-intensive physical activities, pocket parks provide communities with greenery, a place to sit and rest, and an ecological foothold for urban wildlife.
History
The first pocket parks appeared in Europe in the aftermath of World War II. As cities began to recover from the large-scale physical damage incurred by warfare, such as from bombings, limitations in capital, labor, and building materials necessitated cheap, easy, and minimalistic solutions to restore urban landscapes. These constraints promoted the conversion of heavily damaged sites into small public parks which echoed the neighborhood's original peacetime identities.
By the 1950s, the first pocket parks appeared in the United States as an adaptation of these small European parks. Inspired by this readaptation of urban space, landscape architect and professor Karl Linn proposed the transformation of vacant lots in Baltimore, Philadelphia, and Washington D.C. into neighborhood commons. These small urban spaces served as low-cost interventions to improve the quantity and quality of community gathering spaces and recreational facilities in dense urban areas.
In 1964, Whitney North Seymour Jr. advocated for the creation of pocket parks in New York City during his tenure as president of the Park Association of New York. Congressman John Lindsay endorsed the creation of pocket parks in his 1965 campaign for New York City mayor, and Paley Park, a premier privately owned public space and prominent example of a pocket park, opened during his mayoralty in 1967.
One of the first municipal programs to fund and structure the creation of pocket parks in the United States occurred in Philadelphia. In 1967, a $320,000 urban beautification campaign encouraged community groups to identify and nominate disused parcels for development into pocket parks. Upon approval, the city provided technical knowledge and financial support to residents, who would collaborate with city officials to design, construct, and maintain the new parks. From their onset, these pocket parks were well received by municipal workers and residents. To this day, the City of Philadelphia manages over 150 neighborhood parks.
Development and design
Pocket parks typically develop on small, solitary, irregularly shaped, and physically damaged lots. Because these parcels may not be conducive to commercial development, the land on which they are situated is often relatively cheap to acquire, and transforming the neglected parcel into public or green space may be the only viable opportunity for redevelopment. Thus, the placement and creation of pocket parks tends to be an opportunistic product of environmental circumstance rather than through deliberate master planning.
Due to their small size, pocket parks typically serve a hyperlocal population, and the limited opportunities for park form and function are closely tied to these local community needs. For example, a pocket park in a business district may prioritize tables and seating for employees to take a lunch break, while a pocket park in a residential area may prioritize a structure for children to play on.
Consequently, the development of pocket parks generally entails extensive public participation and collaboration between community members, landscape architects, municipal officials, and local institutions such as businesses or schools. Through this community organization, the development of pocket parks promotes grassroots planning and strengthens relationships between residents and local authorities.
Unlike larger parks, pocket parks are sometimes designed to be fenced and locked when not in use.
Community impact
Despite their small footprint, pocket parks can dramatically enhance the quality of life of their surrounding communities.
Pocket parks prevent overdevelopment in dense neighborhoods and vary the form of the built environment with islands of shade, quiet, and privacy, which may otherwise be difficult to find in urban areas. Well-maintained pocket parks can deter visual signs of urban neglect by discouraging the vandalism which occurs in otherwise abandoned lots. The beautification efforts of pocket parks can increase a neighborhood's aesthetic appeal and shape a distinct, positive visual identity for a city as a whole.
The creation of pocket parks encourages public participation and residential collaboration towards a meaningful long-term improvement to the community. In turn, this community participation can foster community pride and empower residents to tackle additional neighborhood improvement projects.
Unlike a singular large scale urban park, numerous pocket parks can be distributed throughout a single neighborhood, and multiple pocket parks can be spaced close together. This distribution increases the usefulness and accessibility of green urban spaces by decreasing the distance and time between parks and their users, especially for users who have difficulty travelling long distances, such as children, the elderly, or individuals with mobility impairments. This close proximity can also generate strong personal attachments and positive associations of place identity, especially among children who grow up in neighborhoods containing pocket parks.
These positive impacts are magnified in neighborhoods with low-income or racial minority populations, where green space may be scarce and the new development of larger-scale parks may be infeasible due to spatial or financial constraints. These benefits also particularly improve the quality of urban life for women, who are more likely to use pocket parks than men.
Economic impact
One study conducted in Greenville, South Carolina, found that "attractively maintained small and medium parks have a positive influence on neighboring property values." Despite this potential to inflate local housing costs, pocket parks are less likely to contribute to environmental gentrification than larger urban parks.
Ecological impact
Patches of green landscaping and permeable surfaces within pocket parks can mitigate the urban heat island effect, aid in stormwater management, and help control microclimates. This greenery can also attract and harbor urban wildlife, especially birds. However, pocket parks are typically designed for human use and therefore may only provide limited ecological benefits to non-human species.
The establishment of local pocket parks can reduce the stress upon larger urban parks, such as by eliminating overcrowding. The use of local pocket parks instead of more distant large urban parks reduces the traffic, pollution, and energy consumption associated with automobile travel and can allow larger parks to dedicate more space to uses beyond what a pocket park can offer, such as for large-scale natural habitats.
Public health and safety impact
Pocket parks can deter the accumulation of unsanitary and potentially biohazardous waste, promoting positive externalities on public health.
A study in Los Angeles concluded that pocket parks were more effective than larger existing parks and playgrounds at promoting moderate to vigorous physical activity in low-income neighborhoods. This is likely due to increased pedestrianism, for the short distance between user's homes and pocket parks encourages users to walk to access outdoor public spaces.
The creation of pocket parks can improve resident perceptions of public safety. One study from the University of Pennsylvania concluded that converting vacant lots into pocket parks reduces crime rates. In Los Angeles, where there are restrictions on how close registered sex offenders can live to parks, local officials planned three pocket parks to drive "undesirables" from a given area.
Around the world
Chile
In Santiago, Chile, the first pocket park (plaza de bolsillo) was created beside of Palacio La Moneda at Morandé Street. It was an initiative of Architecture Department of the Ministry of Public Infrastructure and Regional Government of Santiago.
Mexico
In Mexico City, there is a city program to facilitate the creation of up to 150 pocket parks of 400m2 or less on vacant lots and former road intersections, such as Jardín Edith Sánchez Ramírez and Condesa pocket park.
Poland
In Krakow, the Municipal Green Areas Management Board launched a 2018 initiative to improve the quality of public space and the quantity of green space by creating eighteen new pocket parks, which were modeled after the successes of New York City's Paley Park and Philadelphia's John F. Collins Park.
United Kingdom
In England, a 1984 project to involve the local community in the creation and running of small, local parks has fostered several pocket parks in Northamptonshire, and was later developed by the Countryside Commission into the Millennium Green and Doorstep Green projects.
Gallery
See also
Parklet
Urban green space
References
Parks
Urban planning
Urban public parks
Urban studies and planning terminology | Pocket park | Engineering | 1,866 |
61,321,852 | https://en.wikipedia.org/wiki/Enik%C5%91_Kubinyi | Enikő Kubinyi (born 1 August 1976) is a Hungarian biologist, professor and head of department at the Department of Ethology, ELTE Eötvös Loránd University, who studies dog behaviour, cognition, ageing, and the relationship between dogs and humans. She is the principal investigator of the Senior Family Dog Project, the MTA-ELTE Lendület/Momentum Companion Animal Research Group, and the Canine Brain and Tissue Bank.
In 2012, she appeared on the Horizon (BBC TV series) programme The Secret Life of the Dog, where she presented her group's research on hand-rearing wolves and comparing the behaviour of wolves and dogs. Kubinyi has received several awards for her work, including the APA Comparative Psychology Award in 2004 and the L'Oréal-UNESCO Award For Women in Science in 2018. She is a fellow of the Young Academy of Europe and a founding member of the Hungarian Young Academy.
Bibliography
Books
Author of 10 chapters of The Dog – A Natural History
References
External links
Young Academy of Europe
Eötvös Loránd University profile
Academic staff of Eötvös Loránd University
Ethologists
Hungarian biologists
Living people
1976 births | Enikő Kubinyi | Biology | 236 |
74,589,966 | https://en.wikipedia.org/wiki/VITO%20experiment | The Versatile Ion polarisation Technique Online (VITO) experiment is a permanent experimental setup located in the ISOLDE facility at CERN, in the form of a beamline. The purpose of the beamline is to perform a wide range of studies using spin-polarised short-lived atomic nuclei. VITO uses circularly-polarised laser light to obtain polarised radioactive beams of different isotopes delivered by ISOLDE. These have already been used for weak-interaction studies, biological investigations, and more recently nuclear structure research. The beamline is located at the site of the former Ultra High Vacuum (UHV) beamline hosting ASPIC.
Beamline setup
Radioactive ion beams (RIBs) are produced by the ISOLDE facility, using a beam of high-energy protons from the Proton Synchrotron Booster (PSB) incident on a target. The interaction of the beam and the target produces radioactive species, which are extracted through thermal diffusion by heating the target. The beam of radioactive ions is then separated by mass number by one of the two mass separators at the facility. The resulting low-energy beam is delivered to the various experimental stations.
The VITO beamline is modular. The first part is common for all projects and is devoted to atomic polarisation via optical pumping with circularly polarised laser light. The singly-charged ion beam of short-lived isotopes from ISOLDE (RIB) is Doppler-tuned in resonance with the laser light provided by a continuous-wave tunable laser. Next, the beam may be neutralised, before it reaches a 1.5 m long section in which the ion or atom beam is overlapped with the laser and they interact many times (many excitation-decay cycles take place), leading to the polarisation of the atomic spins.
The polarised beam is then transported to one of the setups that can be placed behind the polarisation line. At this point the polarised beam is implanted into a solid or liquid host. A strong magnetic field surrounding the sample allows the nuclear spin polarisation to be maintained for tens of milliseconds to seconds by decoupling the electron and nuclear spins. In these conditions, the degree of spin polarisation and its changes can be monitored extremely efficiently by observing the spatial asymmetry in the emission of beta particles by the decaying short-lived nuclei. This is possible because the weak force that is responsible for the beta decay does not conserve parity. As few as several thousand decays might be enough to record a good signal.
Nuclear Magnetic Resonance (NMR)
Nuclear Magnetic Resonance (NMR) is a technique that provides information on the environment of a nucleus, from calculations based on the shift in Larmor frequency or relaxation time. β-NMR is a modification of this basic technique using the idea that beta decay from polarised radioactive nuclei is anisotropic (directional) in space. The resonances are detected as a change in the beta-decay asymmetry, which gives the method a much higher sensitivity than conventional NMR (up to 10 orders of magnitude).
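As a rough illustration of how such a signal is extracted, the sketch below computes the experimental up/down asymmetry from the counts registered by two beta detectors placed along and against the polarisation axis; the detector counts and radio-frequency values are invented and serve only to show the arithmetic, with the drop in asymmetry marking a resonance.
# Illustrative only: generic asymmetry calculation for a two-detector beta-NMR setup.
def beta_asymmetry(counts_along: int, counts_against: int) -> float:
    """Experimental asymmetry A = (N_up - N_down) / (N_up + N_down)."""
    return (counts_along - counts_against) / (counts_along + counts_against)

# Made-up counts while scanning the radio frequency; |A| drops at the resonance.
for rf_khz, n_up, n_down in [(1000, 5200, 4800), (1001, 5005, 4995), (1002, 5190, 4810)]:
    print(f"{rf_khz} kHz: A = {beta_asymmetry(n_up, n_down):+.3f}")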
Results
One of the first experiments using polarised beams at VITO was devoted to the polarisation of the mirror nucleus argon-35. The scientific motivation for this project was provided by weak-interaction studies and the determination of the Vud matrix element in the CKM quark mixing matrix.
The next, gradually upgraded, setup is centred around a high-field magnet, liquid samples and radio frequency excitations. The aim is to develop a method of beta-detected Nuclear Magnetic Resonance (β-NMR) to investigate the interaction of metal ions with biomolecules in liquids.
The most recent studies at VITO concern the determination of spins and parities in excited nuclear states populated by beta decay. In this case, the setup consists of a solid sample surrounded by a compact magnet that allows gamma radiation and neutrons to reach the decay spectroscopy setup.
External links
VITO page on the ISOLDE website
References
Physics experiments
CERN experiments | VITO experiment | Physics | 822 |
69,520,751 | https://en.wikipedia.org/wiki/Robert%20D.%20Shannon | Robert Day Shannon (born 1935) is a retired research chemist formerly at DuPont de Nemours, Inc.
Career
Shannon received his B.S. and M.S. degrees in Ceramic Engineering from the University of Illinois in 1957 and 1959. He then went on to receive his Ph.D. in Ceramic Engineering from the University of California at Berkeley in 1960. He then joined the DuPont Company as a research chemist from 1964 to 1971 where he concentrated on high-pressure synthesis and precious metal oxide chemistry. He then spent 1971 conducting post-doctorate studies at McMaster University in Hamilton, Ontario, working with Chris Calvo on the crystal structures of a number of vanadates and with David Brown on bond strength-bond length relationships useful in determining H locations in hydroxides and hydrates. Next, he took a sabbatical leave from DuPont and spent 1972 at the CNRS and teaching at the University of Grenoble, France as a visiting professor, where he presented a course on solid state chemistry and conducted research on high-pressure chemistry of vanadates. He returned to DuPont in 1973 to do research on new ionic conductors and precious metal oxide chemistry.
In 1982, he was granted another sabbatical leave from DuPont and worked on catalysis with zeolites at the Institute de Catalyse in Lyon, France. Upon completion of the sabbatical, he returned to DuPont and worked for another ten years before retiring in 1992.
After retirement, he received a grant from the Alexander von Humboldt Foundation to continue his research on ion polarizabilities in collaboration with Reinhard Fischer in 1994 at the Universities of Mainz and Bremen in Germany and with Olaf Medenbach at the Ruhr-Universität in Bochum, Germany. There, he prepared three papers on refractive indices and electronic polarizabilities in oxides and other compounds. He has since moved to Colorado, where he has been associated with the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder.
Shannon was a member of the American Chemical Society and the American Crystallographic Association. He was elected a Fellow of the Mineralogical Society of America. He has served on the Evaluation Panel for Materials Science at the National Bureau of Standards, and on the National Science Foundation Subcommittee for Oversight Review of Solid State Chemistry.
Research
Shannon has about 164 publications that, together, have received over 77 thousand citations. His work on ionic radii has drawn particularly wide attention. In a 2014 Nature paper his 1976 work on ionic radii was recognized as the 22nd most cited paper in all of science. It has also been described as the most highly cited formal database of all time.
He has a number of patents on glass compositions, zeolite catalysts, noble-metal oxide electrodes, and chemical compounds.
Mineral named in his honor
The mineral bobshannonite, Na2KBa(Mn,Na)8(Nb,Ti)4(Si2O7)4O4(OH)4(O,F)2, was named in his honor in recognition of his major contributions to the field of crystal chemistry in particular and mineralogy in general through his development of accurate and comprehensive ionic radii and his work on dielectric properties of minerals.
Selected highly cited publications
Shannon RD, Fischer RX (2021) Empirical electronic polarizabilities for use in refractive index measurements III. Structures with short [5]Ti-O and vanadyl bonds. Canadian Mineralogist 59, 107–124.
Shannon RD, Fischer RX (2006) Empirical electronic polarizabilities in oxides, hydroxides, oxyfluorides, and oxychlorides. Physical Review B 73, 235111/1-235111/28.
Shannon RD; Shannon RC, Medenbach O, Fischer RX (2002) Refractive index and dispersion of fluorides and oxides. Journal of Physical and Chemical Reference Data 31, 931–970.
Medenbach O, Dettmar D, Shannon RD, Fischer RX, Yen WM (2001) Refractive index and optical dispersion of rare earth oxides using a small-prism technique. Journal of Optics A: Pure and Applied Optics 3, 174–177.
Shannon RD (1993) Dielectric polarizabilities of ions in oxides and fluorides. Journal of Applied Physics 73, 348–66.
Shannon RD, Oswald RA, Parise JB, Chai BHT, Byszewski P, Pajaczkowska A, Sobolewski R (1992) Dielectric constants and crystal structures of calcium yttrium aluminate (CaYAlO4), calcium neodymium aluminate (CaNdAlO4), and lanthanum strontium aluminate (SrLaAlO4), and deviations from the oxide additivity rule. Journal of Solid State Chemistry 98, 90–98.
Shannon RD, Rossman GR (1992) Dielectric constants of silicate garnets and the oxide additivity rule. American Mineralogist 77, 94–100.
Coudurier G, Auroux A, Vedrine JC, Farlee RD, Abrams L, Shannon RD (1987) Properties of boron-substituted ZSM-5 and ZSM-11 zeolites. Journal of Catalysis 108, 1–14.
Shannon RD, Gardner KH, Staley RH, Bergeret G, Gallezot P, Auroux A (1985) The nature of the nonframework aluminum species formed during the dehydroxylation of H-Y. Journal of Physical Chemistry 89, 4778–4788.
Rossman GR; Shannon RD and Waring, RK (1981) Origin of the Yellow Color of Complex Nickel Oxides. Journal of Solid State Chemistry 39, 277–287.
Tranqui D, Shannon RD, Chen Hy, Iijima S, Baur WH (1979) Crystal structure of ordered Li4SiO4 Acta Crystallographica Section B-Structural Science 35, 2479–2487.
Shannon RD, Taylor BE, Gier TE, Chen HY, Berzins T (1978) Ionic conductivity in sodium yttrium silicon oxide (Na5YSi4O12)-type silicates. Inorganic Chemistry 17, 958–964.
Shannon RD, Gillson JL, Bouchard RJ (1977) Single crystal synthesis and electrical properties of cadmium stannite and stannate, indium tellurate, and cadmium indate. Journal of Physics and Chemistry of Solids 38, 877–881.
Shannon RD, Taylor BE, English AD, Berzins T (1977) New lithium solid electrolytes. Electrochimica Acta (1977), 22(7), 783–796.
Shannon RD (1976) Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides. Acta Crystallographica, Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, A32, 751–67.
Brown, ID, Shannon RD (1973) Empirical bond-strength-bond-length curves for oxides. Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography 29, 266–282.
Shannon RD, Calvo C (1973) Refinement of the crystal structure of low temperature lithium vanadate(V) and analysis of mean bond lengths in phosphates, arsenates, and vanadates. Journal of Solid State Chemistry 6, 538–49.
Shannon RD, Rogers DB, Prewitt CT, Gillson JL (1971) Chemistry of noble metal oxide. III. Electrical transport properties and crystal chemistry of ABO2 compounds with the delafossite structure. Inorganic Chemistry 10, 723–727.
Prewitt CT, Shannon RD, Rogers DB (1971) Chemistry of noble metal oxides. II. Crystal structures of platinum cobalt dioxide, palladium cobalt dioxide, copper iron dioxide, and silver iron dioxide. Inorganic Chemistry 10, 719–723.
Shannon RD; Rogers DB, Prewitt, CT (1971) Chemistry of noble metal oxide. I. Syntheses and properties of ABO2 delafossite compounds. Inorganic Chemistry 10, 713–718.
Shannon RD, Prewitt CT (1970) Revised Values of Effective Ionic Radii. Acta Crystallographica Section B-Structural Crystallography and Crystal Chemistry B 26, 1046–1048.
Shannon RD and Prewitt CT (1970) Effective Ionic Radii and Crystal Chemistry. Journal of Inorganic & Nuclear Chemistry 32, 1427–1441.
Shannon RD, Bierstedt PE (1970) Single-crystal growth and electrical properties of barium plumbate. Journal of the American Ceramic Society 53, 635–636.
Prewitt CT, Shannon RD, Rogers D, Sleight AW (1969) C rare earth oxide-corundum transition and crystal chemistry of oxides having the corundum structure. Inorganic Chemistry 8, 1985–1993.
Rogers DB, Shannon RD, Sleight AW, Gillson JL (1969) Crystal chemistry of metal dioxides with rutile-related structures. Inorganic Chemistry 8, 841–9.
Shannon RD and Prewitt, CT (1969) Effective Ionic Radii in Oxides and Fluorides. Acta Crystallographica Section B-Structural Crystallography and Crystal Chemistry B 25, 925–946.
Prewitt, CT and Shannon RD (1968) Crystal structure of a high-pressure form of boron sesquioxide. Acta Crystallographica Section B-Structural Crystallography and Crystal Chemistry B 24, 869–874.
Shannon RD (1968) Synthesis and properties of two new members of the rutile family, RhO2 and PtO2. Solid State Communications 6, 139–143.
Shannon RD, Pask JA (1965) The kinetics and mechanism of the anatase-rutile transformation. Journal of the American Ceramic Society 48, 391–398
Shannon RD, Pask JA (1964) Topotaxy in the anatase-rutile transformation. American Mineralogist 49, 1707–1717.
Shannon RD (1964) Activated complex theory applied to the thermal decomposition of solids. Transactions of the Faraday Society 60 (503P), pp. 1902–1913.
References
University of Illinois alumni
American chemists
1935 births
Living people
University of California, Berkeley alumni
Solid state chemists | Robert D. Shannon | Chemistry | 2,219 |
25,625,276 | https://en.wikipedia.org/wiki/Mark%C3%B3%E2%80%93Lam%20deoxygenation | The Markó–Lam deoxygenation is an organic chemistry reaction where the hydroxy functional group in an organic compound is replaced by a hydrogen atom to give an alkyl group. The Markó-Lam reaction is a variant of the Bouveault–Blanc reduction and an alternative to the classical Barton–McCombie deoxygenation. It is named for the Belgian chemists István Markó and Kevin Lam.
The main features of the reaction are:
short reaction time (5 seconds to 5 minutes).
the use of a stable toluate derivative.
the use of SmI2/HMPA system or electrolysis instead of the classical and difficult to remove tributyltin hydride.
Mechanism
A hydroxyl group is first derivatised into a stable and very often crystalline toluate ester. The aromatic ester is then subjected to a one-electron reduction, by the use of SmI2/HMPA or by electrolysis, to yield a radical anion, which decomposes into the corresponding carboxylate and the radical of the alkyl fragment.
This radical can be used in further chemical reactions or can abstract a hydrogen atom to give the deoxygenated product.
Variations
In the presence of methanol or isopropanol, the reduction leads to the selective deprotection of the aromatic esters.
In the presence of ketones, allylic derivatives give the coupling product when treated under Barbier conditions with samarium diiodide.
Scope
The Markó–Lam reaction was used as the final step in the total synthesis of trifarienol B.
References
Free radical reactions
Organic redox reactions
Name reactions | Markó–Lam deoxygenation | Chemistry | 348 |
25,553,718 | https://en.wikipedia.org/wiki/WebSocket | WebSocket is a computer communications protocol, providing a simultaneous two-way communication channel over a single Transmission Control Protocol (TCP) connection. The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011. The current specification allowing web applications to use this protocol is known as WebSockets. It is a living standard maintained by the WHATWG and a successor to The WebSocket API from the W3C.
WebSocket is distinct from HTTP used to serve most webpages. Although they are different, RFC 6455 states that WebSocket "is designed to work over HTTP ports 443 and 80 as well as to support HTTP proxies and intermediaries", thus making it compatible with HTTP. To achieve compatibility, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol.
The WebSocket protocol enables full-duplex interaction between a web browser (or other client application) and a web server with lower overhead than half-duplex alternatives such as HTTP polling, facilitating real-time data transfer from and to the server. This is made possible by providing a standardized way for the server to send content to the client without being first requested by the client, and allowing messages to be passed back and forth while keeping the connection open. In this way, a two-way ongoing conversation can take place between the client and the server. The communications are usually done over TCP port number 443 (or 80 in the case of unsecured connections), which is beneficial for environments that block non-web Internet connections using a firewall. Additionally, WebSocket enables streams of messages on top of TCP. TCP alone deals with streams of bytes with no inherent concept of a message. Similar two-way browser–server communications have been achieved in non-standardized ways using stopgap technologies such as Comet or Adobe Flash Player.
Most browsers support the protocol, including Google Chrome, Firefox, Microsoft Edge, Internet Explorer, Safari and Opera.
The WebSocket protocol specification defines ws (WebSocket) and wss (WebSocket Secure) as two new uniform resource identifier (URI) schemes that are used for unencrypted and encrypted connections respectively. Apart from the scheme name and fragment (i.e. # is not supported), the rest of the URI components are defined to use URI generic syntax.
History
WebSocket was first referenced as TCPConnection in the HTML5 specification, as a placeholder for a TCP-based socket API. In June 2008, a series of discussions were led by Michael Carter that resulted in the first version of the protocol known as WebSocket.
Before WebSocket, port 80 full-duplex communication was attainable using Comet channels; however, Comet implementation is nontrivial, and due to the TCP handshake and HTTP header overhead, it is inefficient for small messages. The WebSocket protocol aims to solve these problems without compromising the security assumptions of the web.
The name "WebSocket" was coined by Ian Hickson and Michael Carter shortly thereafter through collaboration on the #whatwg IRC chat room, and subsequently authored for inclusion in the HTML5 specification by Ian Hickson. In December 2009, Google Chrome 4 was the first browser to ship full support for the standard, with WebSocket enabled by default. Development of the WebSocket protocol was subsequently moved from the W3C and WHATWG group to the IETF in February 2010, and authored for two revisions under Ian Hickson.
After the protocol was shipped and enabled by default in multiple browsers, RFC 6455 was finalized under Ian Fette in December 2011.
RFC 7692 introduced a compression extension to WebSocket using the DEFLATE algorithm on a per-message basis.
Client example
<!DOCTYPE html>
<script>
// Connect to server
ws = new WebSocket("ws://127.0.0.1/scoreboard") // Local server
// ws = new WebSocket("wss://game.example.com/scoreboard") // Remote server
ws.onopen = () => {
  console.log("Connection opened")
  ws.send("Hi server, please send me the score of yesterday's game")
}
ws.onmessage = (event) => {
  console.log("Data received", event.data)
  ws.close() // We got the score so we don't need the connection anymore
}
ws.onclose = (event) => {
  console.log("Connection closed", event.code, event.reason, event.wasClean)
}
ws.onerror = () => {
  console.log("Connection closed due to error")
}
</script>
Server example
from socket import socket
from base64 import b64encode
from hashlib import sha1

MAGIC = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

# Create socket and listen at port 80
ws = socket()
ws.bind(("", 80))
ws.listen()
conn, addr = ws.accept()

# Parse the opening handshake request for the client's Sec-WebSocket-Key
for line in conn.recv(4096).split(b"\r\n"):
    if line.startswith(b"Sec-WebSocket-Key"):
        nonce = line.split(b":")[1].strip()

# Format response (the blank line before the closing quotes terminates the header block)
response = f"""\
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: {b64encode(sha1(nonce + MAGIC).digest()).decode()}

"""
conn.send(response.replace("\n", "\r\n").encode())

while True:  # decode messages from the client
    header = conn.recv(2)
    FIN = bool(header[0] & 0x80)  # bit 0
    assert FIN == 1, "We only support unfragmented messages"
    opcode = header[0] & 0xf  # bits 4-7
    assert opcode == 1 or opcode == 2, "We only support data messages"
    masked = bool(header[1] & 0x80)  # bit 8
    assert masked, "The client must mask all frames"
    payload_size = header[1] & 0x7f  # bits 9-15
    assert payload_size <= 125, "We only support small messages"
    masking_key = conn.recv(4)
    payload = bytearray(conn.recv(payload_size))
    for i in range(payload_size):
        payload[i] = payload[i] ^ masking_key[i % 4]
    print(payload)
Web API
A web application (e.g. web browser) may use the WebSocket interface to connect to a WebSocket server.
Protocol
Steps:
Opening handshake (HTTP request + HTTP response) to establish a connection.
Data messages to transfer application data.
Closing handshake (two Close frames) to close the connection.
Opening handshake
The client sends an HTTP request (method GET, version ≥ 1.1) and the server returns an HTTP response with status code 101 (Switching Protocols) on success. This means a WebSocket server can use the same port as HTTP (80) and HTTPS (443) because the handshake is compatible with HTTP.
Example request:
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
Origin: http://example.com
Example response:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat
In HTTP each line ends in \r\n and the last line is empty.
# Calculate Sec-WebSocket-Accept using Sec-WebSocket-Key
from base64 import b64encode
from hashlib import sha1
from os import urandom
# key = b64encode(urandom(16)) # Client should do this
key = b"x3JJHMbDL1EzLkh9GBhXDw==" # Value in example request above
magic = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11" # Protocol constant
print(b64encode(sha1(key + magic).digest()))
# Output: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Once the connection is established, communication switches to a binary frame-based protocol which does not conform to the HTTP protocol.
Sec-WebSocket-Key and Sec-WebSocket-Accept are intended to prevent a caching proxy from re-sending a previous WebSocket conversation, and does not provide any authentication, privacy, or integrity.
Though some servers accept a short Sec-WebSocket-Key, many modern servers will reject the request with error "invalid Sec-WebSocket-Key header".
Frame-based message
After the opening handshake, the client and server can, at any time, send messages to each other, such as data messages (text or binary) and control messages (close, ping, pong). A message is composed of one or more frames.
Fragmentation allows a message to be split into two or more frames. It enables sending messages with initial data available but complete length unknown. Without fragmentation, the whole message must be sent in one frame, so the complete length is needed before the first byte can be sent, which requires a buffer. It also enables multiplexing several streams simultaneously (e.g. to avoid monopolizing a socket for a single large payload).
An unfragmented message consists of a single frame with FIN = 1 and opcode ≠ 0.
A fragmented message consists of a single frame with FIN = 0 and opcode ≠ 0, followed by zero or more frames with FIN = 0 and opcode = 0, and terminated by a single frame with FIN = 1 and opcode = 0.
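As a concrete illustration of these framing rules, the short sketch below (written in the same style as the Python server example above) encodes a single unfragmented text message as a server-to-client frame; server-to-client frames are not masked, and only short payloads are handled to keep the example minimal.
# Minimal sketch: build one unfragmented, unmasked text frame (server to client).
def encode_text_frame(message: str) -> bytes:
    payload = message.encode("utf-8")
    assert len(payload) <= 125, "Extended payload lengths are not handled in this sketch"
    first_byte = 0x80 | 0x1      # FIN = 1 (final fragment), opcode = 0x1 (text)
    second_byte = len(payload)   # MASK bit = 0, 7-bit payload length
    return bytes([first_byte, second_byte]) + payload

print(encode_text_frame("Hello").hex())  # 810548656c6c6f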
Frame structure
Opcodes
Status codes
Browser support
A secure version of the WebSocket protocol is implemented in Firefox 6, Safari 6, Google Chrome 14, Opera 12.10 and Internet Explorer 10. A detailed protocol test suite report lists the conformance of those browsers to specific protocol aspects.
An older, less secure version of the protocol was implemented in Opera 11 and Safari 5, as well as the mobile version of Safari in iOS 4.2. The BlackBerry Browser in OS7 implements WebSockets. Because of vulnerabilities, it was disabled in Firefox 4 and 5, and Opera 11.
Using browser developer tools, developers can inspect the WebSocket handshake as well as the WebSocket frames.
Server implementations
Nginx has supported WebSockets since 2013, implemented in version 1.3.13 including acting as a reverse proxy and load balancer of WebSocket applications.
Apache HTTP Server has supported WebSockets since July, 2013, implemented in version 2.4.5
Internet Information Services added support for WebSockets in version 8 which was released with Windows Server 2012.
lighttpd has supported WebSockets since 2017, implemented in lighttpd 1.4.46. lighttpd mod_proxy can act as a reverse proxy and load balancer of WebSocket applications. lighttpd mod_wstunnel can act as a WebSocket endpoint to transmit arbitrary data, including in JSON format, to a backend application. lighttpd supports WebSockets over HTTP/2 since 2022, implemented in lighttpd 1.4.65.
Security considerations
Unlike regular cross-domain HTTP requests, WebSocket requests are not restricted by the same-origin policy. Therefore, WebSocket servers must validate the "Origin" header against the expected origins during connection establishment, to avoid cross-site WebSocket hijacking attacks (similar to cross-site request forgery), which might be possible when the connection is authenticated with cookies or HTTP authentication. It is better to use tokens or similar protection mechanisms to authenticate the WebSocket connection when sensitive (private) data is being transferred over the WebSocket. A live example of vulnerability was seen in 2020 in the form of Cable Haunt.
Proxy traversal
WebSocket protocol client implementations try to detect whether the user agent is configured to use a proxy when connecting to destination host and port, and if it is, uses HTTP CONNECT method to set up a persistent tunnel.
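For example, a client behind an explicit proxy would first ask the proxy to open a tunnel with a request of roughly the following form (the host name and port are placeholders); once the proxy answers with a 2xx status, the WebSocket opening handshake proceeds through the tunnel.
CONNECT server.example.com:443 HTTP/1.1
Host: server.example.com:443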
While the WebSocket protocol itself is unaware of proxy servers and firewalls, it features an HTTP-compatible handshake, thus allowing HTTP servers to share their default HTTP and HTTPS ports (80 and 443 respectively) with a WebSocket gateway or server. The WebSocket protocol defines a ws:// and wss:// prefix to indicate a WebSocket and a WebSocket Secure connection respectively. Both schemes use an HTTP upgrade mechanism to upgrade to the WebSocket protocol. Some proxy servers are transparent and work fine with WebSocket; others will prevent WebSocket from working correctly, causing the connection to fail. In some cases, additional proxy-server configuration may be required, and certain proxy servers may need to be upgraded to support WebSocket.
If unencrypted WebSocket traffic flows through an explicit or a transparent proxy server without WebSockets support, the connection will likely fail.
If an encrypted WebSocket connection is used, then the use of Transport Layer Security (TLS) in the WebSocket Secure connection ensures that an HTTP CONNECT command is issued when the browser is configured to use an explicit proxy server. This sets up a tunnel, which provides low-level end-to-end TCP communication through the HTTP proxy, between the WebSocket Secure client and the WebSocket server. In the case of transparent proxy servers, the browser is unaware of the proxy server, so no HTTP CONNECT is sent. However, since the wire traffic is encrypted, intermediate transparent proxy servers may simply allow the encrypted traffic through, so there is a much better chance that the WebSocket connection will succeed if WebSocket Secure is used. Using encryption is not free of resource cost, but often provides the highest success rate, since it would be travelling through a secure tunnel.
A mid-2010 draft (version hixie-76) broke compatibility with reverse proxies and gateways by including eight bytes of key data after the headers, but not advertising that data in a Content-Length: 8 header. This data was not forwarded by all intermediates, which could lead to protocol failure. More recent drafts (e.g., hybi-09) put the key data in a Sec-WebSocket-Key header, solving this problem.
See also
BOSH
Comparison of WebSocket implementations
Network socket
Push technology
Server-sent events
XMLHttpRequest
HTTP/2
Internet protocol suite
WebRTC
Notes
References
External links
IETF Hypertext-Bidirectional (HyBi) working group
The WebSocket protocol – Proposed Standard published by the IETF HyBi Working Group
The WebSocket protocol – Internet-Draft published by the IETF HyBi Working Group
The WebSocket protocol – Original protocol proposal by Ian Hickson
The WebSocket API – W3C Working Draft specification of the API
The WebSocket API – W3C Candidate Recommendation specification of the API
WebSocket.org WebSocket demos, loopback tests, general information and community
Application layer protocols
HTML5
Internet terminology
Network socket
Real-time web
Web development
2011 in computing | WebSocket | Technology,Engineering | 3,502 |
52,974,247 | https://en.wikipedia.org/wiki/Department%20of%20Pharmacology%2C%20University%20College%20London | The Department of Pharmacology at the University College London, the first of its kind in England, was founded in 1905 and remained in existence until 2007.
Early history
University College London (UCL) was founded in 1826. It was born in the ferment of radical London in the 1820s and 1830s and was heavily influenced by the Scottish and French Enlightenments. UCL was part of the radical opposition to the hegemony of Oxford and Cambridge. In medicine, UCL was a force in combatting the conservative and religious monopoly of the Royal Colleges of Physicians and Surgeons.
Although Edinburgh University was well ahead at the time, UCL had a professor of Materia Medica and Pharmacy, A.T. Thomson, from the start. Later this was renamed the chair of Materia Medica and Therapeutics. Its best known holder was Sydney Ringer (1878–87), who worked on the isolated beating heart and is renowned for his eponymous salt solution which he designed to maximise the viability of isolated hearts. His textbook ‘Handbook of Therapeutics’ ran to 13 editions between 1869 and 1897.
In 1905, Pharmacology was established as a distinct discipline within the basic medical sciences at UCL. It was the first Department of Pharmacology in England.
Most of the people involved in the development of quantitative analysis of drug-receptor interactions worked at some time in UCL's Departments of Pharmacology, or of Physiology or of Biophysics.
The Department of Pharmacology
Arthur Robertson Cushny FRS
Arthur Cushny (1866–1926) was the first holder of the newly instituted Chair of Pharmacology, from 1905 until 1918.
After graduating in medicine from Aberdeen, Cushny had studied in Berne, Würzburg, and Strasbourg, where he became Assistant to the famed Oswald Schmiedeberg. In 1893, at the age of 27, he was appointed Professor of Pharmacology at the University of Michigan, Ann Arbor. Eight years later Cushny came to the chair at UCL where he soon expanded the department from the single room he had been given. He raised the funds for the building which the remnants of the department still occupies.
His main interests were in the heart and kidney. His work on the involvement of calcium in the action of digitalis was prescient. He was interested in optical isomers. Data from an early clinical trial using hyoscine isomers were used by William Sealy Gosset who, under the pseudonym "Student", published in 1908 the first small-sample test of significance, Student's t test. His use of these data has given rise to much discussion. Later reanalysis of the same data by a randomisation test gave a similar result.
Cushny published a textbook Textbook of Pharmacology and Therapeutics (eighth edition 1924). He introduced the Cushny myograph, an ingenious arrangement of counterbalanced levers that allowed the faithful recording of the rate and force of contraction of the rapidly beating animal heart. It was still in use in practical classes at UCL, and elsewhere, in the 1960s.
Cushny left UCL in 1918 to become Professor of Materia Medica and Pharmacology at Edinburgh. He was succeeded by A.J. Clark.
Alfred Joseph Clark FRS
A.J. Clark, FRS (1885–1941) held the established Chair of Pharmacology from 1918 to 1926. After qualifying in medicine, and serving as a field medical officer throughout the First World War, Clark had been appointed Professor of Pharmacology at the University of Cape Town where he remained until accepting the Chair of Pharmacology at UCL in 1920. His influence on the subject was profound. The distinguished physiologist and Nobel laureate A.V. Hill (Archibald Hill) had begun the quantitative study of the action of agonists on an isolated tissue (frog skeletal muscle) some years earlier. Clark took this much further and extended it to examine the actions of antagonists. The data he gathered on the exact relationship between agonist concentration and response, and on how this changed in the presence of a competitive antagonist, were published in two classic papers in the Journal of Physiology in 1926. But he failed to work out a method for properly analysing the results of experiments with antagonists: that had to wait for his successor, Heinz Schild.
Nevertheless, Clark was largely responsible for the transition of pharmacology from a descriptive subject to the quantitative science that it is today - this emphasis on quantitative approaches has remained strong throughout the subsequent history of the department. Clark's book The Mode of Action of Drugs on Cells (Williams & Wilkins, 1933) is a classic and the following quotation from it set the tone for the department for many years.
"In the first place, there is no advantage in fitting curves by a formula unless this expresses some possible physico-chemical process, and it is undesirable to employ formulae that imply impossibilities. It is a question of finding a few systems so simple that it is possible to establish with reasonable probability the relation between the quantity of drug and the action produced."
While at UCL Clark wrote the first edition of his textbook Applied Pharmacology in 1923, a book that was to be updated by two of his successors as Head of department, first by H.O. Schild and later by H.P. Rang, and is still extant in the form of the widely used textbook Rang & Dale's Pharmacology.
In 1926 Clark followed his predecessor in moving to the University of Edinburgh.
E.B.Verney FRS
Ernest Basil Verney (1894–1967) succeeded Clark. He held the Chair of Pharmacology from 1926 to 1934.
While at UCL Verney discovered the antidiuretic hormone and also the mechanism by which structures in the brain sense minute changes in blood osmotic pressure. Both findings were of profound importance for the understanding of water and electrolyte balance. Verney was also instrumental in arranging for Otto Krayer to come to the department, albeit for only a short period, following Krayer's exclusion from all academic positions in German universities because of his objection to the expulsion of Jewish scientists from their posts. Krayer was later to head the Department of Pharmacology at Harvard with the greatest distinction.
In 1934 Verney moved to an academic post at the University of Cambridge where he later became the first Sheild Professor of Pharmacology
J.H. Gaddum FRS
John Gaddum (1900–1965) held the Chair of Pharmacology from 1935 to 1938.
Like A.J. Clark, he had a profound interest in quantitative methods.
He extended A.J. Clark's work on competitive antagonism, and applied the law of mass action to describe the relationship (the Gaddum equation) between receptor occupancy and the concentrations of an agonist and a competitive antagonist at equilibrium with the receptors in a tissue. The theory for two or more competing ligands had been known since Michaelis & Menten (1914), but Gaddum was the first to apply it in a pharmacological context. Like Clark before him, Gaddum failed to spot how to use the theory to estimate equilibrium constants.
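In modern notation, and assuming simple competitive binding at equilibrium, the relationship Gaddum derived for the fraction of receptors occupied by an agonist A in the presence of a competitive antagonist B is usually written as
pA = ([A]/KA) / (1 + [A]/KA + [B]/KB)
where KA and KB are the dissociation equilibrium constants of the agonist and the antagonist respectively.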
Gaddum was also a master of bioassay which was then the preferred, and usually the only, way to determine the concentrations of biologically active molecules such as labile neurotransmitters and the neuropeptides.
F.R. Winton
Frank Winton (1894–1985) held the Chair of Pharmacology from 1938 to 1961. His main scientific interest was in the control of blood flow to the kidney. Winton ran the department through the difficult war years when the Medical School was evacuated to Leatherhead, Surrey.
He appointed the first two female academics in the department. Mary Lockett (1911–1982) was a lecturer in the department from 1945 - 1950. Hannah Steinberg arrived in the UK from Vienna on a Kindertransport train while still a schoolgirl, and she eventually became Professor of Psychopharmacology.
Winton also worked hard and successfully to ensure that pharmacology had an appropriate place in the preclinical curriculum. He oversaw the extension of the department, including the Pharmacology Lecture Theatre (now the Schild Theatre).
He was the author, with Leonard Bayliss, of a widely used textbook, Human Physiology, first published in 1932. The 6th edition (1968) was written by Olof J.C. Lippold and F.R. Winton.
H.O. Schild FRS
Heinz Otto Schild (1906–1984) held the Chair of Pharmacology from 1961 to 1973.
He was born in Fiume (now Rijeka, Croatia), in 1906, when it was part of the Austro-Hungarian empire. He qualified in medicine in Munich and then worked with Straub, the leading German pharmacologist of the time. By good fortune, Schild had been accepted as a visiting worker by Sir Henry Dale and was in England when the National Socialists came to power in Germany. He decided to stay in Britain and became an assistant in the Department of Pharmacology in Edinburgh, then headed by A.J. Clark. When J. H. Gaddum was appointed to the chair at UCL, he invited Schild to join him as a Demonstrator. So began his long association with UCL, interrupted only by his bizarre internment on the Isle of Man as an ‘enemy alien’ at the outbreak of the Second World War. Following his release (greatly aided by F.R. Winton's and Sir Henry Dale's appeals to the Home Office) he returned to his work in the department, then based in Leatherhead, and in 1961 became Winton's successor as Head of Department and Professor of Pharmacology.
Schild made major contributions to receptor pharmacology, to the understanding of the mechanism of histamine release and to bioassay. Like Gaddum and Clark, he used quantitative approaches whenever possible. His name is immortalised by the Schild equation. He built on the work of Clark and Gaddum on competitive antagonism, by realising that the null method was key to extraction of physical equilibrium constants from simple functional experiments. Rather than looking at the depression by antagonist of the response to a fixed agonist concentration, he measured the dose-ratio, the factor by which the agonist concentration had to be increased in order to restore a given response in the presence of the antagonist. By measuring the dose-ratio as a function of antagonist, it was possible to estimate the dissociation equilibrium constant for the combination of the antagonist with its receptor. Crucially the estimate is not dependent on the nature of the agonist. Although Schild's derivation used the simplest possible model, it was subsequently shown that his equation is valid under much more general conditions.
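A minimal numerical sketch of this analysis is given below. It assumes the simplest competitive case, in which the Schild equation r = 1 + [B]/KB holds, so that log(r − 1) plotted against log[B] is a straight line of unit slope; the antagonist concentrations and dose-ratios used here are invented solely for illustration.
# Illustrative Schild analysis: estimate the antagonist dissociation constant K_B
# from dose-ratios r measured at several antagonist concentrations [B] (made-up data).
import math

antagonist_conc = [1e-8, 1e-7, 1e-6]   # [B] in molar (hypothetical)
dose_ratios = [2.0, 11.0, 101.0]       # r at each [B] (hypothetical)

# Linear regression of log10(r - 1) on log10[B]; slope ~ 1 for simple competition.
xs = [math.log10(b) for b in antagonist_conc]
ys = [math.log10(r - 1) for r in dose_ratios]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
log_Kb = -intercept / slope            # log10[B] at which the dose-ratio equals 2
print(f"slope = {slope:.2f}, pA2 = {-log_Kb:.2f}, K_B = {10 ** log_Kb:.1e} M")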
A.J. Clark's textbook was continued by Schild as Clark's Applied Pharmacology by Wilson & Schild.
Heinz Schild was a generous and kindly Head of department. He appointed the third female member of academic staff, Dr M. Maureen Dale, a co-author of Rang & Dales Pharmacology. He oversaw the planning and introduction of a three-year B.Sc. course in Pharmacology which began in 1967 and continues to this day. Medical students were able to enter its final year and Schild, who never lost sight of the roots of the subject in medicine, was delighted that many took this opportunity.
J.W.Black FRS
Sir James Whyte Black (1924–2010) held the Chair of Pharmacology from 1973 to 1978.
Jim Black and Heinz Schild knew each other well because Schild had acted as a consultant to the then Smith, Kline & French company during the time when Black was leading the team that developed the histamine receptor antagonists, H2 antagonists, which reduce secretion of gastric acid and which, at the time, transformed the treatment of gastric ulcers. Schild's methods for quantitative methods for analysis of drug antagonists were crucial for this work.
Black introduced many changes to teaching in the department. One of the most important was the introduction of a BSc course in Medicinal Chemistry. His long experience in the pharmaceutical industry had convinced him that organic and physical chemists working on drug development with pharmacologists and biochemists would benefit greatly from a substantial knowledge of biology, certainly enough to allow them to understand and assess the kinds of measurements that their biological colleagues undertook. Though the students were based in the Department of Chemistry, they took also courses in physiology and pharmacology, particularly its molecular aspects. This BSc course, like that in Pharmacology, also flourished and continues today. Another important change was a sharp reduction in the number of experiments with animal tissues undertaken by medical students during their course in pharmacology. At the same time, the emphasis on the importance of observations on human subjects was increased.
Black's appointment coincided with the onset of the straitened circumstances that all UK universities were to experience and that have continued in one form or another ever since. The changes he made helped the department to adjust to these harder times. To the regret of his Departmental staff, Black found that only the pharmaceutical industry could provide the facilities needed for the work he wished to pursue, and in 1978 he left to join the Wellcome Foundation.
Black was knighted in 1981 and in 1988 he got the Nobel Prize in Physiology or Medicine along with Gertrude B. Elion and George H. Hitchings for their work on drug development.
H.P.Rang FRS
Humphrey Rang (born 1936) held the Chair of Pharmacology from 1979 to 1983.
Rang qualified in medicine at UCL and had worked in H.O.Schild's laboratory while a medical student. He was the author of the first successful ligand-binding experiment of the modern era. This was based on his PhD work in Oxford, under William D.M. Paton. Rang had previously been the Professor of Pharmacology at Southampton and at St. George's Hospital Medical School. He brought with him David Colquhoun who was also returning to the department, having been appointed in 1964 as an assistant lecturer by H.O. Schild. These appointments greatly strengthened the interests and achievements of the department in fundamental aspects of pharmacology, particularly the study of ion channels and receptors.
In collaboration with M. Maureen Dale (also appointed during Schild's Headship), Rang prepared the first edition of Pharmacology, the successor to Wilson & Schild's Applied Pharmacology.
In 1983 Rang was offered and accepted the Directorship of the Sandoz Institute of Medical Research, a division of Sandoz, then an independent pharmaceutical company. The new Institute was located in UCL and developed a close relationship with the department, both in teaching, to which members of the Institute contributed, and in research.
After 1983
The heads of department since 1983.
After Rang's resignation, the Chair of Pharmacology became vacant. The Head of department from 1983 to 1987 was Donald H. Jenkinson. He had done his PhD under Bernard Katz in UCL's famous Department of Biophysics, and was yet another member of staff who had been invited to join the Department by Heinz Schild. During his tenure the Middlesex Hospital Medical School was merged with UCL's, including the two Departments of Pharmacology.
During the 1980s the traditional role of Heads of department was replaced by rotating headships that were no longer associated necessarily with an established chair. Established chairs were, de facto, abolished as part of the move to corporatise universities.
David Colquhoun FRS was appointed to the established chair in 1985. It was subsequently dubbed the A.J. Clark chair, in honour of Clark's role in the establishment of quantitative pharmacology. His work, with statistician Alan Hawkes and Bert Sakmann (Nobel prize 1991) established the department as the world leader in the theory and experiment of single ion channels. After retiring from the A.J.Clark chair in 2004, he worked on the misinterpretation of p values and its contribution to the irreproducibility that has come to light in some areas of science.
D. A. Brown FRS (1936 - 2023) was appointed in 1987 as head of department (he had previously held the same position at the School of Pharmacy). In 1987, the merger with the Middlesex Hospital Medical School was completed and David Brown inherited the title Astor Chair of Pharmacology from Professor F Hobbiger who had held that title at the Middlesex.
Brown's appointment was intended initially to be the start of a 5-year rotating headship, but when Colquhoun's turn became due, he decided that the job of Head of department would not allow enough time to do the algebra and program development with which he was involved. Donald Jenkinson likewise declined to take another turn. Luckily David Brown agreed to continue and he remained Head of department until 2002. His tenure saw a second merger, this time with the Department of Pharmacology at the Royal Free Medical School, headed by Professor Annette Dolphin, FRS. David Brown is renowned for his discovery of the acetylcholine (muscarinic)-sensitive potassium channel (M channel).
The Wellcome Lab for Molecular Pharmacology
The growing importance of molecular biology led Brown and Colquhoun to apply to the Wellcome Trust in 1990. The Trust funded the building of the Wellcome Lab for Molecular Pharmacology, which Colquhoun directed until his retirement in 2004.
Trevor G. Smart became Head of department in 2002, with the title of Schild Chair of Pharmacology. He also works in the ion channel field. After the demise of the department in 2007, Smart became head of the new Research Department of Neuroscience, Physiology and Pharmacology.
Stephanie Schorge. In 2021, Professor Schorge succeeded Trevor Smart as head of the Research Department of Neuroscience, Physiology and Pharmacology. She is the first female head of pharmacology since 1905.
Stuart Cull-Candy FRS. Stuart G. Cull-Candy works on glutamate-activated ion channels. He joined the Department from UCL's Department of Biophysics and holds the Gaddum Chair of Pharmacology.
Lucia Sivilotti was appointed to the A.J. Clark chair in 2014. She has run the UCL Single Ion Channel group after Colquhoun's retirement in 2004. She continued and greatly extended the work in the field of single channel kinetics. She owns the web site OneMol where the group's analysis programs and publications can be downloaded. The association of the A.J. Clark chair with quantitative work on receptors has thus continued to the present day.
The first of the nationwide Research Assessment Exercises took place in 1986. The UCL Department headed the list. It continued to be rated as the top Department of Pharmacology in each of the four research assessments that followed in 1989, 1992, 1996 and 2002. But this performance was not enough to save the department.
The end of the Department of Pharmacology
In 2004, Malcolm Grant became provost of UCL. He commissioned external reports on the reorganisation of the college. The distinguished vice-president of the University of Manchester, Richard Alan North FRS, was asked to assess several options for the reorganisation of the Faculty of Life Sciences. One was to create large Research Departments, including one of Neuroscience, Physiology and Pharmacology, from the existing academic Departments. Professor North's only comment on the options was that the proposed "research departments in Life Sciences were too big". Grant accepted the conclusions except for the part about the size of departments.
On 24 May 2007 Grant persuaded the Academic Board to authorise him to act on its behalf and on 13 June 2007 the Department of Pharmacology was disestablished, after a century of distinction and innovation.
The academic staff at the time had three main concerns about the proposals. (a) The separation of teaching from research is bad, especially for teaching: the fact that a degree is offered in, for example, Pharmacology without a Pharmacology department to support it, means that there is no guarantee that there will be staff qualified or fully motivated to teach it. Moreover, the collegiality that comes from designing and providing a first-rate degree course is lost. (b) The size of the merged department of Neurosciences, Physiology and Pharmacology means less interaction between staff, and less collegiate spirit. (c) The changes created two extra levels of administration, so that now five levels existed between academics and the provost.
Staff were told at the time that the new organisation would be rolled out to other Faculties across UCL, though this has not happened. David Colquhoun has kept a personal diary of the process on his blog: In Memoriam Department of Pharmacology, UCL 1905 – 2007. On the positive side, UCL's current provost, Michael Arthur, has put much emphasis on the quality of teaching, and maintaining its connections with research.
As of 2019, UCL still offers pharmacology degrees, though within the now merged Neuroscience, Physiology and Pharmacology department.
A history of the combined department appears on UCL's web site.
References
History of University College London
Pharmacology | Department of Pharmacology, University College London | Chemistry | 4,484 |
50,078,940 | https://en.wikipedia.org/wiki/Cyclorotor | A cyclorotor, cycloidal rotor, cycloidal propeller or cyclogiro, is a fluid propulsion device that converts shaft power into the acceleration of a fluid using a rotating axis perpendicular to the direction of fluid motion. It uses several blades with a spanwise axis parallel to the axis of rotation and perpendicular to the direction of fluid motion. These blades are cyclically pitched twice per revolution to produce force (thrust or lift) in any direction normal to the axis of rotation. Cyclorotors are used for propulsion, lift, and control on air and water vehicles. An aircraft using cyclorotors as the primary source of lift, propulsion, and control is known as a cyclogyro or cyclocopter. A unique aspect is that it can change the magnitude and direction of thrust without the need to tilt any aircraft structures. The patented marine application, used on ships with particular actuation mechanisms, either mechanical or hydraulic, is named after the German company Voith Turbo.
Operating principle
Cyclorotors produce thrust by the combined action of the orbital motion of each blade's pivot point around a central axis and an oscillation of the blades that changes their angle of attack over time. The joint action of the advancement produced by the orbital motion and the variation in pitch angle generates a higher thrust at low speed than any other propeller. In hover, the blades are actuated to a positive pitch (outward from the centre of the rotor) on the upper half of their revolution and a negative pitch (inward towards the axis of rotation) over the lower half, inducing a net upward aerodynamic force and an opposite fluid downwash. By varying the phase of this pitch motion, the force can be shifted to any perpendicular angle or even downward. Before blade stall, increasing the amplitude of the pitching kinematics magnifies the thrust.
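The sketch below illustrates the kind of blade-pitch schedule described above using a simple first-harmonic pitching law; the amplitude, phase and sign conventions are assumptions chosen for illustration rather than the kinematics of any particular rotor, and shifting the phase corresponds to re-pointing the thrust vector.
# Illustrative first-harmonic pitch schedule for a cyclorotor blade (assumed law).
# theta(psi) = amplitude * sin(psi + phase): positive pitch over one half of the
# revolution and negative over the other, giving a net force normal to the rotor axis.
import math

amplitude_deg = 25.0   # assumed maximum pitch angle
phase_deg = 0.0        # changing the phase rotates the direction of the net force

for psi_deg in range(0, 360, 45):   # blade azimuth around the orbit, in degrees
    theta = amplitude_deg * math.sin(math.radians(psi_deg + phase_deg))
    print(f"azimuth {psi_deg:3d} deg: pitch {theta:+6.1f} deg")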
History
The origins of the cycloidal propeller are Russian and relate to the aeronautical domain. Sverchkov's "Samoljot" (St. Petersburg, 1909), or "wheel orthopter", was the first vehicle expressly designed to use this type of propulsion. Its scheme came close to that of a cyclogiro, but it is difficult to classify it precisely. It had three flat surfaces and a rudder; the rear edge of one of the surfaces could be bent, replacing the action of an elevator. Lift and thrust were to be created by paddle wheels consisting of 12 blades, arranged in pairs at a 120° angle. The concave blades changed their angle of incidence by means of eccentrics and springs. A 10 hp engine was mounted in the bottom of the craft, with transmission by a belt. Empty weight was about 200 kg. "Samoljot" was constructed by the military engineer E.P. Sverchkov with grants from the Main Engineering Agency in St. Petersburg in 1909; it was demonstrated at the Newest Inventions Exhibition and won a medal, but it failed its preliminary tests and never flew.
In 1914, the Russian inventor and scientist A.N. Lodygin approached the Russian government with a project for a cyclogiro-like aircraft; his scheme was similar to Sverchkov's "Samoljot". The project was not carried out.
In 1933, experiments in Germany by Adolf Rohrbach resulted in a paddle-wheel wing arrangement. Oscillating winglets went from positive to negative angles of attack during each revolution to create lift, and their eccentric mounting would, in theory, produce nearly any combination of horizontal and vertical forces. The DVL evaluated Rohrbach's design, but the foreign aviation journals of the time cast doubt on the soundness of the design, which meant that funding for the project could not be raised, even with a later proposal as a Luftwaffe transport aircraft. There appears to be no evidence that this design was ever built, let alone flown. Based on Rohrbach's paddle-wheel research, however, Platt in the US had designed his own independent cyclogyro by 1933. His paddle-wheel wing arrangement was awarded a US patent (which was only one of many similar patents on file), and underwent extensive wind-tunnel testing at MIT in 1927. Despite this, there is no evidence Platt's aircraft was ever built.
The first operative cycloid propulsion was developed at Voith. Its origins date to the decision of the Voith company to focus on the business of transmission gear assemblies for turbines. The famous Voith propeller was based on the company's fluid-dynamics know-how gained from previous turbine projects. It was invented by Ernst Schneider, and enhanced by Voith. It was launched under the name Voith-Schneider Propeller (VSP) for commercial vessels. This new marine drive could significantly improve the manoeuvrability of a ship, as demonstrated in the successful sea trials of the test boat Torqueo in 1937. The first Voith Schneider Propellers were put into operation in the narrow canals of Venice, Italy. During the 1937 World Fair in Paris, Voith was awarded the grand prize – three times – for its exhibition of Voith Schneider Propellers and Voith turbo-transmissions. A year later, two of Paris' fire-fighting boats started operating with the new VSP system.
Design advantages and challenges
Rapid thrust vectoring
Cyclorotors provide a high degree of control. Traditional propellers, rotors, and jet engines produce thrust only along their axis of rotation and require rotation of the entire device to alter the thrust direction. This rotation requires large forces and comparatively long time scales since the propeller inertia is considerable, and the rotor gyroscopic forces resist rotation. For many practical applications (helicopters, airplanes, ships) this requires rotating the entire vessel. In contrast, cyclorotors need only to vary the blade pitch motions. Since there is little inertia associated with blade pitch change, thrust vectoring in the plane perpendicular to the axis of rotation is rapid.
High advance ratio thrust and symmetric lift
Cyclorotors can produce lift and thrust at high advance ratios, which, in theory, would enable a cyclogyro aircraft to fly at subsonic speeds well exceeding those of single rotor helicopters.
Single rotor helicopters are limited in forward speed by a combination of retreating blade stall and sonic blade tip constraints. As helicopters fly forward, the tip of the advancing blade experiences a wind velocity that is the sum of the helicopter forward speed and rotor rotational speed. This value cannot exceed the speed of sound if the rotor is to be efficient and quiet. Slowing the rotor rotational speed avoids this problem, but presents another. By simple vector composition, the airspeed experienced by the retreating blade is the difference between the blade's rotational speed and the freestream velocity, so at a sufficiently high advance ratio the airflow over the retreating blade becomes slow. The flapping motion of the blade changes its angle of attack, and the blade may then stall; to keep some lift, the stalling blade must increase its pitch angle. This risk constrains the design of the system: the airfoil profile must be chosen accurately and the rotor radius carefully dimensioned for the specified speed range.
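The velocity composition described above can be made concrete with a short numerical sketch. The following Python example is illustrative only: the rotor radius, rotational speed, and forward speed are assumed values rather than data for any particular helicopter.

```python
import math

# Assumed example values, not taken from any real rotor
omega = 30.0          # rotor angular speed, rad/s
radius = 6.0          # blade tip radius, m
v_forward = 80.0      # forward flight speed, m/s

tip_speed = omega * radius             # rotational speed of the blade tip
advance_ratio = v_forward / tip_speed  # mu = V / (omega * R)

# Vector composition at the two extreme azimuth positions:
v_advancing = tip_speed + v_forward    # blade tip moving into the freestream
v_retreating = tip_speed - v_forward   # blade tip moving with the freestream

print(f"tip speed      : {tip_speed:.1f} m/s")
print(f"advance ratio  : {advance_ratio:.2f}")
print(f"advancing tip  : {v_advancing:.1f} m/s (must stay below the speed of sound)")
print(f"retreating tip : {v_retreating:.1f} m/s (low airspeed, risk of stall)")
```

With these assumed numbers the advancing tip sees about 260 m/s while the retreating tip sees only about 100 m/s, illustrating how a high advance ratio leaves the retreating blade slow and close to stall.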
Slow speed cyclorotors bypass this problem through a horizontal axis of rotation and operation at a comparatively low blade tip speed. For the higher speeds that may be necessary in industrial applications, more sophisticated strategies and solutions are needed. One solution is independent actuation of the blades, which has recently been patented and successfully tested for naval use with a hydraulic actuation system. With the horizontal axis of rotation the upper blades always advance, so the rotor as a whole always produces positive lift. These characteristics could help overcome two issues of helicopters: their low energy efficiency and the advance ratio limitation.
Unsteady aerodynamics
Blade advancement and blade oscillation are the two dynamic actions produced by a cyclorotor, so the wing-blades of a cyclorotor operate differently from a traditional aircraft wing or helicopter blade. Each blade of a cyclorotor oscillates about a pivot point that itself traces a circle as the rotor turns. The combination of the advancing motion of the blade's centre of rotation and the pendulum-like oscillation of the blade, which continuously varies its pitch, generates a complex set of aerodynamic phenomena:
the delay of the blade stall;
an increase of the maximum blade lift coefficient at low Reynolds numbers.
The two effects are evidently correlated with a general increase of the thrust produced.
Compared to a helicopter or any other propeller, the same blade section in a rotocycloid produces much more thrust at the same Reynolds number. This effect can be explained by considering the behavior of a traditional propeller.
At low Reynolds numbers there is little turbulence and laminar flow conditions can be reached. For a traditional wing profile these conditions minimize the speed difference between the upper and lower surfaces of the wing, so both lift and stall speed are reduced, and the angle of attack at which stall is reached becomes smaller.
In this regime, conventional propellers and rotors must use larger blade area and rotate faster to achieve the same propulsive forces, and they lose more energy to blade drag; under these conditions a cyclorotor is therefore considerably more energy efficient.
Actual cyclorotors bypass this problem by quickly increasing and then decreasing blade angle of attack, which temporarily delays stall and achieves a high lift coefficient. This unsteady lift makes cyclorotors more efficient at small scales, low velocities, and high altitudes than traditional propellers.
Many living beings, such as birds and some insects, are nevertheless still much more efficient, because they can change not only the pitch but also the shape of their wings, or can alter the properties of the boundary layer, as sharkskin does.
Some research aims to achieve the same level of efficiency as these natural wings and surfaces. One direction is the introduction of morphing wing concepts. Another relates to the introduction of boundary layer control mechanisms, such as dielectric barrier discharge.
Noise
During experimental evaluation, cyclorotors produced little aerodynamic noise. This is likely due to the lower blade tip speeds, which produce lower intensity turbulence following the blades.
Hovering thrust efficiency
In small-scale tests, cyclorotors achieved a higher power loading than comparable scale traditional rotors at the same disk loading. This is attributed to utilizing unsteady lift and consistent blade aerodynamic conditions. The rotational component of velocity on propellers increases from root to tip and requires blade chord, twist, airfoil, etc., to be varied along the blade. Since the cyclorotor blade span is parallel to the axis of rotation, each spanwise blade section operates at similar velocities and the entire blade can be optimized.
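The two quantities compared here are related by momentum theory, which sets the ideal upper bound for any hovering rotor. The sketch below is a simplified calculation at standard sea-level air density; the disk loadings are assumed example values, not measurements from the cited tests.

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def ideal_power_loading(disk_loading: float, rho: float = RHO) -> float:
    """Ideal hover power loading T/P in N/W from momentum theory.

    Induced velocity v = sqrt(DL / (2*rho)); ideal power P = T*v,
    so T/P = 1/v = sqrt(2*rho / DL).
    """
    return math.sqrt(2.0 * rho / disk_loading)

# Assumed example disk loadings in N/m^2
for dl in (50.0, 100.0, 250.0, 500.0):
    pl = ideal_power_loading(dl)
    print(f"disk loading {dl:6.0f} N/m^2 -> ideal power loading {pl:.3f} N/W")
```

Real rotors deliver only a fraction of this ideal power loading (their figure of merit); the comparison in the experiments cited above concerns how large a fraction cyclorotors recover relative to conventional rotors at the same disk loading.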
Structural considerations
Cyclorotor blades require support structure for their positioning parallel to the rotor axis of rotation. This structure, sometimes referred to as "spokes," adds to the parasite drag and weight of the rotor. Cyclorotor blades are also centrifugally loaded in bending (as opposed to the axial loading on propellers), which requires blades with an extremely high strength to weight ratio or intermediate blade support spokes. Early 20th century cyclorotors featured short blade spans, or additional support structure to circumvent this problem.
Blade pitch considerations
Cyclorotors require continuously actuated blade pitch. The relative flow angle experienced by the blades as they rotate about the rotor varies substantially with advance ratio and rotor thrust. To operate most efficiently, a blade pitch mechanism should adjust for these diverse flow angles. High rotational velocities make it difficult to implement an actuator-based mechanism, which calls for a fixed- or variable-shape track for pitch control, mounted parallel to the blade trajectory, on which blade followers such as rollers or air pads run; the shape of the pitch-control track reliably determines the blade's pitch along the orbit regardless of the rotor's RPM. While the pitching motions used in hover are not optimized for forward flight, in experimental evaluation they were found to provide efficient flight up to an advance ratio near one.
Applications
Wind turbines
Wind turbines are a potential application of cyclorotors. In this role they are called variable-pitch vertical-axis wind turbines, and they offer large benefits with respect to conventional VAWTs. Turbines of this kind are claimed to overcome most of the limitations of traditional Darrieus VAWTs.
Ship propulsion and control
The most widespread application of cyclorotors is for ship propulsion and control. In ships the cyclorotor is mounted with the axis of rotation vertical so that thrust can quickly be vectored in any direction parallel to the plane of the water surface. In 1922, Frederick Kirsten fitted a pair of cyclorotors to a 32 ft boat in Washington, which eliminated the need for a rudder and provided extreme manoeuvrability. While the idea floundered in the United States after the Kirsten-Boeing Propeller Company lost a US Navy research grant, the Voith-Schneider propeller company successfully commercially employed the propeller. This Voith-Schneider propeller was fitted to more than 100 ships prior to the outbreak of the Second World War. Today, the same company sells the same propeller for highly manoeuvrable watercraft. It is applied on offshore drilling ships, tugboats, and ferries.
Aircraft
Cyclogyros
A cyclogyro is a vertical takeoff and landing aircraft using a cyclorotor as a rotor wing for lift and often also for propulsion and control. Advances in cyclorotor aerodynamics made the first untethered model cyclogyro flight possible in 2011 at the Northwestern Polytechnic Institute in China. Since then, universities and companies have successfully flown small-scale cyclogyros in several configurations.
The performance of traditional rotors is severely degraded at low Reynolds numbers by low angle-of-attack blade stall. Current hover-capable MAVs can stay aloft for only minutes. Cyclorotor MAVs (very small scale cyclogyros) could utilize unsteady lift to extend endurance. The smallest cyclogyro flown to date weighs only 29 grams and was developed by the Advanced Vertical Flight Laboratory at Texas A&M University.
Commercial cyclogyro UAVs are being developed by D-Dalus, Pitch Aeronautics, and CycloTech.
Airship propulsion and control
A large exposed area makes airships susceptible to gusts and difficult to take off, land, or moor in windy conditions. Propelling airships with cyclorotors could enable flight in more severe atmospheric conditions by compensating for gusts with rapid thrust vectoring. Following this idea, the US Navy seriously considered fitting six primitive Kirsten-Boeing cyclorotors to the airship USS Shenandoah. The Shenandoah crashed while transiting a squall line on 3 September 1925, before any possible installation and testing. No large scale tests have been attempted since, but a cyclorotor airship demonstrated improved performance over a traditional airship configuration in a test.
See also
References
External links
https://www.cyclotech.at/
Aerodynamics
Propulsion
Propellers | Cyclorotor | Chemistry,Engineering | 3,164 |
11,308,417 | https://en.wikipedia.org/wiki/Vertex%20%28geometry%29 | In geometry, a vertex (pl.: vertices or vertexes) is a point where two or more curves, lines, or edges meet or intersect. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices.
Definition
Of an angle
The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that result in two straight "sides" meeting at one place.
Of a polytope
A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object.
In a polygon, a vertex is called "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°, two right angles); otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex, if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and is concave otherwise.
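The convex/reflex test for a polygon vertex can be expressed numerically. The following Python sketch is an illustration (not taken from any particular library): for a simple polygon listed in counter-clockwise order, the internal angle at a vertex is less than π exactly when the cross product of the incoming and outgoing edges is positive.

```python
def classify_vertices(points):
    """Classify each vertex of a simple polygon as 'convex' or 'reflex'.

    `points` is a list of (x, y) tuples in counter-clockwise order.
    For a CCW polygon, the internal angle at a vertex is less than pi
    exactly when the cross product of the incoming and outgoing edges
    is positive.
    """
    n = len(points)
    labels = []
    for i in range(n):
        ax, ay = points[i - 1]          # previous vertex
        bx, by = points[i]              # this vertex
        cx, cy = points[(i + 1) % n]    # next vertex
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        labels.append("convex" if cross > 0 else "reflex")
    return labels

# A counter-clockwise "arrowhead" quadrilateral: the vertex at (1, 1) is reflex.
polygon = [(0, 0), (4, 0), (1, 1), (0, 4)]
print(list(zip(polygon, classify_vertices(polygon))))
```

Running it on the example quadrilateral labels the dented vertex (1, 1) as reflex and the other three vertices as convex.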
Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices.
However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve, there will be a point of extreme curvature near each polygon vertex.
Of a plane tiling
A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces.
Principal vertex
A polygon vertex x_i of a simple polygon P is a principal polygon vertex if the diagonal [x_(i-1), x_(i+1)] intersects the boundary of P only at x_(i-1) and x_(i+1). There are two types of principal vertices: ears and mouths.
Ears
A principal vertex x_i of a simple polygon P is called an ear if the diagonal [x_(i-1), x_(i+1)] that bridges x_i lies entirely in P (see also convex polygon). According to the two ears theorem, every simple polygon has at least two ears.
Mouths
A principal vertex x_i of a simple polygon P is called a mouth if the diagonal [x_(i-1), x_(i+1)] lies outside the boundary of P.
Number of vertices of a polyhedron
Any convex polyhedron's surface has Euler characteristic
V - E + F = 2,
where V is the number of vertices, E is the number of edges, and F is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of vertices is 2 more than the excess of the number of edges over the number of faces. For example, since a cube has 12 edges and 6 faces, the formula implies that it has eight vertices.
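The formula is easy to check programmatically. A minimal Python verification, using the well-known vertex, edge, and face counts of the five Platonic solids:

```python
# (name, vertices V, edges E, faces F) for the five Platonic solids
solids = [
    ("tetrahedron",   4,  6,  4),
    ("cube",          8, 12,  6),
    ("octahedron",    6, 12,  8),
    ("dodecahedron", 20, 30, 12),
    ("icosahedron",  12, 30, 20),
]

for name, v, e, f in solids:
    assert v - e + f == 2, name   # Euler's polyhedron formula
    assert v == 2 + e - f         # vertices from edges and faces
    print(f"{name:12s}: V={v:2d}  E={e:2d}  F={f:2d}  V-E+F={v - e + f}")
```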
Vertices in computer graphics
In computer graphics, objects are often represented as triangulated polyhedra in which the object vertices are associated not only with three spatial coordinates but also with other graphical information necessary to render the object correctly, such as colors, reflectance properties, textures, and surface normals. These properties are used in rendering by a vertex shader, part of the vertex pipeline.
See also
Vertex arrangement
Vertex figure
References
External links
Euclidean geometry
3D computer graphics
0
Point (geometry) | Vertex (geometry) | Mathematics | 848 |
5,635,076 | https://en.wikipedia.org/wiki/Syntrophy | In biology, syntrophy, syntrophism, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the cooperative interaction between at least two microbial species to degrade a single substrate. This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients, growth factors, or substrates provided by the other(s).
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is based primarily on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments, and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tracts of ruminants, and anaerobic digesters, syntrophy is employed to overcome energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation of these organic compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer, which can be carried out in three ways: interspecies hydrogen transfer, interspecies formate transfer, and interspecies direct electron transfer. Reverse electron transport is prominent in syntrophic metabolism.
The metabolic reactions and the energies involved in syntrophic degradation with H2 consumption are illustrated by the following example.
A classical syntrophic relationship can be illustrated by the activity of ‘Methanobacillus omelianskii’. It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon, "organism M.o.H.", and a Gram-negative bacterium, "organism S", which together oxidize ethanol to acetate and methane by means of interspecies hydrogen transfer. Organism S is an obligately anaerobic bacterium that uses ethanol as an electron donor, whereas M.o.H. is a methanogen that oxidizes hydrogen gas to produce methane.
Organism S: 2 Ethanol + 2 H2O → 2 Acetate− + 2 H+ + 4 H2 (ΔG°' = +9.6 kJ per reaction)
Strain M.o.H.: 4 H2 + CO2 → Methane + 2 H2O (ΔG°' = -131 kJ per reaction)
Co-culture: 2 Ethanol + CO2 → 2 Acetate− + 2 H+ + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by organism S is made possible by the methanogen M.o.H., which consumes the hydrogen produced by organism S and thereby turns the positive Gibbs free energy of ethanol oxidation into a negative one. This situation favors growth of organism S and also provides energy for the methanogen, which consumes the hydrogen. Further down the line, acetate accumulation is prevented by a similar syntrophic relationship. Syntrophic degradation of substrates like butyrate and benzoate can also occur without hydrogen consumption.
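The effect of hydrogen removal on these energetics can be illustrated with a rough calculation of ΔG = ΔG°' + RT·ln Q for the organism S reaction. The Python sketch below is illustrative only; it assumes 25 °C, pH 7 (already folded into ΔG°'), unit water activity, and equal ethanol and acetate concentrations so that they cancel out of the reaction quotient.

```python
import math

R = 8.314e-3     # gas constant, kJ/(mol*K)
T = 298.15       # temperature, K
DG0_PRIME = 9.6  # standard Gibbs energy of 2 ethanol -> 2 acetate + 4 H2, kJ

def delta_g(p_h2_atm: float) -> float:
    """Gibbs energy of the organism S reaction at a given H2 partial pressure.

    With [ethanol] = [acetate] and unit water activity (assumed), the
    reaction quotient reduces to Q = p(H2)**4, so
    dG = dG0' + R*T*ln(p(H2)**4).
    """
    return DG0_PRIME + R * T * math.log(p_h2_atm ** 4)

for p_h2 in (1.0, 1e-2, 1e-4, 1e-5):
    print(f"p(H2) = {p_h2:7.0e} atm -> dG = {delta_g(p_h2):7.1f} kJ per reaction")
```

At hydrogen partial pressures of roughly 10⁻⁴ to 10⁻⁵ atm, of the order maintained by hydrogen-consuming methanogens, the reaction becomes strongly exergonic even though it is endergonic under standard conditions.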
An example of propionate and butyrate degradation with interspecies formate transfer carried out by the mutual system of Syntrophomonas wolfei and Methanobacterium formicicum:
Propionate + 2 H2O + 2 CO2 → Acetate− + 3 Formate− + 3 H+ (ΔG°' = +65.3 kJ/mol)
Butyrate + 2 H2O + 2 CO2 → 2 Acetate− + 3 Formate− + 3 H+ (ΔG°' = +38.5 kJ/mol)
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H2 or formate, was reported in the co-culture system of Geobacter metallireducens and Methanosaeta or Methanosarcina.
Examples
In ruminants
The defining feature of ruminants, such as cows and goats, is a stomach called a rumen. The rumen contains billions of microbes, many of which are syntrophic. Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short chain fatty acids, and hydrogen. The accumulating hydrogen inhibits the microbe's ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. In addition, fermentative bacteria gain maximum energy yield when protons are used as electron acceptor with concurrent H2 production. Hydrogen-consuming organisms include methanogens, sulfate-reducers, acetogens, and others.
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched-chain and aromatic fatty acids, cannot directly be used in methanogenesis. In acetogenesis processes, these products are oxidized to acetate and H2 by obligate proton-reducing bacteria in syntrophic relationship with methanogenic archaea, as a low H2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0).
Biodegradation of pollutants
Syntrophic microbial food webs play an integral role in bioremediation especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological importance and can be effectively mediated through syntrophic degradation by complete mineralization of alkane, aliphatic and hydrocarbon chains. The hydrocarbons of the oil are broken down after activation by fumarate, a chemical compound that is regenerated by other microorganisms. Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling.
Syntrophic microbial communities are key players in the breakdown of aromatic compounds, which are common pollutants. The degradation of aromatic benzoate to methane produces intermediate compounds such as formate, acetate, and H2. The buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens, which makes the degradation process thermodynamically favorable.
Degradation of amino acids
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through the process of syntrophy. Microbes growing poorly on the amino acid substrates alanine, aspartate, serine, leucine, valine, and glycine can have their rate of growth dramatically increased by syntrophic H2 scavengers. These scavengers, like Methanospirillum and Acetobacterium, metabolize the H2 waste produced during amino acid breakdown, preventing a toxic build-up. Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate; species like Desulfovibrio employ this method. Amino acid fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus, and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway.
Anaerobic digestion
Effective syntrophic cooperation between propionate-oxidizing bacteria, acetate-oxidizing bacteria, and H2/acetate-consuming methanogens is necessary to successfully carry out anaerobic digestion and produce biomethane.
Examples of syntrophic organisms
Syntrophomonas wolfei
Syntrophobacter fumaroxidans
Pelotomaculum thermopropionicum
Syntrophus aciditrophicus
Syntrophus buswellii
Syntrophus gentianae
References
Biological interactions
Food chains | Syntrophy | Biology | 1,885 |
45,658,812 | https://en.wikipedia.org/wiki/Sergei%20Tabachnikov | Sergei Tabachnikov, also spelled Serge, (born in 1956) is an American mathematician who works in geometry and dynamical systems. He is currently a Professor of Mathematics at Pennsylvania State University.
Biography
He earned his Ph.D. from Moscow State University in 1987 under the supervision of Dmitry Fuchs and Anatoly Fomenko. He has been living and working in the USA since 1990.
From 2013 to 2015 Tabachnikov served as Deputy Director of the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, Rhode Island. He is now Emeritus Deputy Director of ICERM.
He is a fellow of the American Mathematical Society. He has served as Editor-in-Chief of the journal Experimental Mathematics, and is currently serving as Editor-in-Chief of the Arnold Mathematical Journal and as co-Editor-in-Chief of the Mathematical Intelligencer.
A paper on the variability hypothesis by Theodore Hill and Tabachnikov was accepted and retracted by The Mathematical Intelligencer and later The New York Journal of Mathematics (NYJM). There was some controversy over the mathematical model, the peer-review process, and the lack of an official retraction notice from the NYJM.
Selected publications
References
External links
Homepage
Fellows of the American Mathematical Society
1956 births
Living people
Moscow State University alumni
Dynamical systems theorists
American topologists
Russian expatriates in the United States
Pennsylvania State University faculty | Sergei Tabachnikov | Mathematics | 284 |
20,221,001 | https://en.wikipedia.org/wiki/Draw%20twister | A draw twister is a machine used to draw and twist large quantities of polymer fibers. It uses two sets of rollers, where the second set rotates faster than the first, thus drawing the fiber between them. While the fibers are being drawn they are also twisted into thread.
References
Plastics industry
Machines | Draw twister | Physics,Technology,Engineering | 63 |
14,848,896 | https://en.wikipedia.org/wiki/PH%20partition | pH partition is the tendency for acids to accumulate in basic fluid compartments, and bases to accumulate in acidic compartments. The reason for this phenomenon is that acids become negatively charged in basic fluids, as they donate a proton. On the other hand, bases become positively charged in acidic fluids, as they accept a proton. Since electric charge decreases the membrane permeability of substances, once an acid enters a basic fluid and becomes electrically charged, it cannot escape that compartment with ease and therefore accumulates, and vice versa with bases.
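The size of this effect can be estimated with the Henderson–Hasselbalch equation (listed under See also). The following Python sketch uses a simplified two-compartment model: it assumes that only the neutral species crosses the membrane and equilibrates to the same concentration on both sides; the pKa and pH values are illustrative examples, not data for any specific drug.

```python
def acid_accumulation_ratio(pKa: float, pH_a: float, pH_b: float) -> float:
    """Ratio of total (ionized + unionized) weak-acid concentration in
    compartment A relative to compartment B at equilibrium.

    Assumes only the neutral form permeates the membrane and reaches the
    same concentration on both sides. For a weak acid,
    [ionized]/[unionized] = 10**(pH - pKa)   (Henderson-Hasselbalch),
    so total/unionized = 1 + 10**(pH - pKa) in each compartment.
    """
    total_a = 1.0 + 10.0 ** (pH_a - pKa)
    total_b = 1.0 + 10.0 ** (pH_b - pKa)
    return total_a / total_b

# Hypothetical aspirin-like weak acid (pKa 3.5), plasma pH 7.4 versus
# gastric pH 2.0 -- illustrative values only.
ratio = acid_accumulation_ratio(pKa=3.5, pH_a=7.4, pH_b=2.0)
print(f"plasma : stomach total-concentration ratio = {ratio:.0f} : 1")
```

The large ratio on the basic (higher-pH) side reflects the trapped ionized fraction, consistent with acids accumulating in basic compartments.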
See also
Ion trapping
Acid dissociation constant - pKa
Henderson-Hasselbalch equation
Acid–base chemistry | PH partition | Chemistry,Biology | 131 |
66,250,770 | https://en.wikipedia.org/wiki/Kepler-429 | Kepler-429 (KIC 10001893) is a variable subdwarf B star in the constellation Lyra, about 5,900 light years away.
The brightness of Kepler-429 changes unpredictably by up to 0.13 magnitudes. It has been classified as a V361 Hydrae variable, but also as a V1093 Herculis variable, which typically has slower variations and a cooler temperature. Over 100 pulsation modes were identified with periods from 256 seconds to over three hours.
Planetary system
Kepler-429 has been reported to have three possible exoplanets, though their existence is questioned. They were detected by orbital brightness modulation.
See also
Kepler-70
References
B-type subdwarfs
Kepler objects of interest
Lyra | Kepler-429 | Astronomy | 163 |
22,398,341 | https://en.wikipedia.org/wiki/Fire-safe%20polymers | Fire-safe polymers are polymers that are resistant to degradation at high temperatures. There is need for fire-resistant polymers in the construction of small, enclosed spaces such as skyscrapers, boats, and airplane cabins. In these tight spaces, ability to escape in the event of a fire is compromised, increasing fire risk. In fact, some studies report that about 20% of victims of airplane crashes are killed not by the crash itself but by ensuing fires. Fire-safe polymers also find application as adhesives in aerospace materials, insulation for electronics, and in military materials such as canvas tenting.
Some fire-safe polymers naturally exhibit an intrinsic resistance to decomposition, while others are synthesized by incorporating fire-resistant additives and fillers. Current research in developing fire-safe polymers is focused on modifying various properties of the polymers such as ease of ignition, rate of heat release, and the evolution of smoke and toxic gases. Standard methods for testing polymer flammability vary among countries; in the United States common fire tests include the UL 94 small-flame test, the ASTM E 84 Steiner Tunnel, and the ASTM E 622 National Institute of Standards and Technology (NIST) smoke chamber. Research on developing fire-safe polymers with more desirable properties is concentrated at the University of Massachusetts Amherst and at the Federal Aviation Administration where a long-term research program on developing fire-safe polymers was begun in 1995. The Center for UMass/Industry Research on Polymers (CUMIRP) was established in 1980 in Amherst, MA as a concentrated cluster of scientists from both academia and industry for the purpose of polymer science and engineering research.
History
Early history
Controlling the flammability of different materials has been a subject of interest since 450 B.C. when Egyptians attempted to reduce the flammability of wood by soaking it in potassium aluminum sulfate (alum). Between 450 B.C. and the early 20th century, other materials used to reduce the flammability of different materials included mixtures of alum and vinegar; clay and hair; clay and gypsum; alum, ferrous sulfate, and gypsum; and ammonium chloride, ammonium phosphate, borax, and various acids. These early attempts found application in reducing the flammability of wood for military materials, theater curtains, and other textiles, for example. Important milestones during this early work include the first patent for a mixture for controlling flammability issued to Obadiah Wyld in 1735, and the first scientific exploration of controlling flammability, which was undertaken by Joseph Louis Gay-Lussac in 1821.
Developments since WWII
Research on fire-retardant polymers was bolstered by the need for new types of synthetic polymers in World War II. The combination of a halogenated paraffin and antimony oxide was found to be successful as a fire retardant for canvas tenting. Synthesis of polymers, such as polyesters, with fire retardant monomers were also developed around this time. Incorporating flame-resistant additives into polymers became a common and relatively cheap way to reduce the flammability of polymers, while synthesizing intrinsically fire-resistant polymers has remained a more expensive alternative, although the properties of these polymers are usually more efficient at deterring combustion.
Polymer combustion
General mechanistic scheme
Traditional polymers decompose under heat and produce combustible products; thus, they are able to originate and easily propagate fire (as shown in Figure 1).
The combustion process begins when heating a polymer yields volatile products. If these products are sufficiently concentrated, within the flammability limits, and at a temperature above the ignition temperature, then combustion proceeds. As long as the heat supplied to the polymer remains sufficient to sustain its thermal decomposition at a rate exceeding that required to feed the flame, combustion will continue.
Purpose and methods of fire-retardant systems
The purpose is to control heat below the critical level. To achieve this, one can create an endothermic environment, produce non-combustible products, or add chemicals that would remove fire-propagating radicals (H and OH), to name a few. These specific chemicals can be added into the polymer molecules permanently (see Intrinsically Fire-Resistant Polymers) or as additives and fillers (see Flame-Retardant Additives and Fillers).
Role of oxygen
Oxygen catalyzes the pyrolysis of polymers at low concentration and initiates oxidation at high concentration. Transition concentrations differ from polymer to polymer (e.g., for polypropylene, between 5% and 15%). Additionally, polymers exhibit a structure-dependent relationship with oxygen: some structures are intrinsically more sensitive to decomposition upon reaction with oxygen. The amount of access that oxygen has to the surface of the polymer also plays a role in polymer combustion. Oxygen is better able to interact with the polymer before a flame has actually been ignited.
Role of heating rate
In most cases, results from a typical heating rate (e.g. 10°C/min for mechanical thermal degradation studies) do not differ significantly from those obtained at higher heating rates. The extent of reaction can, however, be influenced by the heating rate. For example, some reactions may not occur with a low heating rate due to evaporation of the products.
Role of pressure
Volatile products are removed more efficiently under low pressure, which means the stability of the polymer might have been compromised. Decreased pressure also slows down decomposition of high boiling products.
Intrinsically fire-resistant polymers
The polymers that are most efficient at resisting combustion are those that are synthesized as intrinsically fire-resistant. However, these types of polymers can be difficult as well as costly to synthesize. Modifying different properties of the polymers can increase their intrinsic fire-resistance; increasing rigidity or stiffness, the use of polar monomers, and/or hydrogen bonding between the polymer chains can all enhance fire-resistance.
Linear, single-stranded polymers with cyclic aromatic components
Most intrinsically fire-resistant polymers are made by incorporation of aromatic cycles or heterocycles, which lend rigidity and stability to the polymers. Polyimides, polybenzoxazoles (PBOs), polybenzimidazoles, and polybenzthiazoles (PBTs) are examples of polymers made with aromatic heterocycles (Figure 2). Polymers made with aromatic monomers have a tendency to condense into chars upon combustion, decreasing the amount of flammable gas that is released. Syntheses of these types of polymers generally employ prepolymers which are further reacted to form the fire-resistant polymers.
Ladder polymers
Ladder polymers are a subclass of polymers made with aromatic cycles or heterocycles. Ladder polymers generally have one of two types of general structures, as shown in Figure 3. One type of ladder polymer links two polymer chains with periodic covalent bonds. In another type, the ladder polymer consists of a single chain that is double-stranded. Both types of ladder polymers exhibit good resistance to decomposition from heat because the chains do not necessarily fall apart if one covalent bond is broken. However, this makes the processing of ladder polymers difficult because they are not easily melted. These difficulties are compounded because ladder polymers are often highly insoluble.
Inorganic and semiorganic polymers
Inorganic and semiorganic polymers often employ silicon-nitrogen, boron-nitrogen, and phosphorus-nitrogen monomers. The non-burning characteristics of the inorganic components of these polymers contribute to their controlled flammability. For example, instead of forming toxic, flammable gasses in abundance, polymers prepared with incorporation of cyclotriphosphazene rings give a high char yield upon combustion. Polysialates (polymers containing frameworks of aluminum, oxygen, and silicon) are another type of inorganic polymer that can be thermally stable up to temperatures of 1300-1400 °C.
Flame-retardant additives and fillers
Additives are divided into two basic types depending on the interaction of the additive and polymer. Reactive flame retardants are compounds that are chemically built into the polymer. They usually contain heteroatoms. Additive flame retardants, on the other hand, are compounds that are not covalently bound to the polymer; the flame retardant and the polymer are just physically mixed together.
Only a few elements are widely used in this field: aluminum, phosphorus, nitrogen, antimony, chlorine, bromine, and, in specific applications, magnesium, zinc, and carbon. One prominent advantage of the flame retardants (FRs) derived from these elements is that they are relatively easy to manufacture. They are used in substantial quantities: in 2013, world consumption of FRs amounted to around 1.8–2.1 million tonnes, with sales of 4.9–5.2 billion USD. Market studies estimated FR demand to rise by 5–7% per year, to 2.4–2.6 million tonnes by 2016–2018, with estimated sales of 6.1–7.1 billion USD.
The most important flame retardants systems used act either in the gas phase where they remove the high energy radicals H and OH from the flame or in the solid phase, where they shield the polymer by forming a charred layer and thus protect the polymer from being attacked by oxygen and heat.
Flame retardants based on bromine or chlorine, as well as a number of phosphorus compounds, act chemically in the gas phase and are very efficient. Others act only in the condensed phase, such as metal hydroxides (aluminum trihydrate, or ATH; magnesium hydroxide, or MDH; and boehmite), metal oxides and salts (zinc borate, zinc oxide, zinc hydroxystannate), as well as expandable graphite and some nanocomposites (see below). Phosphorus and nitrogen compounds are also effective in the condensed phase, and as they may also act in the gas phase, they are quite efficient flame retardants. Overviews of the main flame-retardant families, their modes of action, and their applications are given in the cited literature, along with further handbooks on these topics.
A good example of a very efficient phosphorus-based flame retardant system acting in the gas and condensed phases is aluminium diethyl phosphinate in conjunction with synergists such as melamine polyphosphate (MPP) and others. These phosphinates are mainly used to flame-retard polyamides (PA) and polybutylene terephthalate (PBT) for flame retardant applications in electrical engineering/electronics (E&E).
Natural fiber-containing composites
Besides providing satisfactory mechanical properties and renewability, natural fibers are easier to obtain and much cheaper than man-made materials. Moreover, they are more environmentally friendly. Recent research focuses on application of different types of fire retardants during the manufacturing process as well as applications of fire retardants (especially intumescent coatings) at the finishing stage.
Nanocomposites
Nanocomposites have become a hotspot in the research of fire-safe polymers because of their relatively low cost and high flexibility for multifunctional properties. Gilman and colleagues did the pioneering work, demonstrating improved fire retardancy with nanodispersed montmorillonite clay in the polymer matrix. Later, organomodified clays, TiO2 nanoparticles, silica nanoparticles, layered double hydroxides, carbon nanotubes and polyhedral silsesquioxanes were shown to work as well. Recent research has suggested that combining nanoparticles with traditional fire retardants (e.g., intumescents) or with surface treatment (e.g., plasma treatment) effectively decreases flammability.
Problems with additives and fillers
Although effective at reducing flammability, flame-retardant additives and fillers have disadvantages as well. Their poor compatibility, high volatility and other deleterious effects can change properties of polymers. Besides, addition of many fire-retardants produces soot and carbon monoxide during combustion. Halogen-containing materials cause even more concerns on environmental pollution.
See also
Plastics
Fireproofing
Phenol formaldehyde resin
Pyrolysis
Combustion
Fire-retardant gel
Fire test
References
External links
Fire-Safety Branch of the Federal Aviation Administration
Polymers
Fire protection
Flame retardants | Fire-safe polymers | Chemistry,Materials_science,Engineering | 2,583 |
65,122,016 | https://en.wikipedia.org/wiki/Doomscrolling | Doomscrolling or doomsurfing is the act of spending an excessive amount of time reading large quantities of news, particularly negative news, on the web and social media. The concept was coined around 2020, particularly in the context of the COVID-19 pandemic.
Doomscrolling can also be defined as the excessive consumption of short-form videos or social media content for an excessive period of time without stopping.
Surveys and studies suggest doomscrolling is predominant among youth. It can be considered a form of internet addiction disorder. In 2019, a study by the National Academy of Sciences found that doomscrolling can be linked to a decline in mental and physical health. Numerous reasons for doomscrolling have been cited, including negativity bias, fear of missing out, and attempts at gaining control over uncertainty.
History
Origins
The practice of doomscrolling can be compared to an older phenomenon from the 1970s called the mean world syndrome, described as "the belief that the world is a more dangerous place to live in than it actually is as a result of long-term exposure to violence-related content on television". Studies show that seeing upsetting news leads people to seek out more information on the topic, creating a self-perpetuating cycle.
In common parlance, the word "doom" connotes darkness and evil, referring to one's fate (cf. damnation). In the internet's infancy, "surfing" was a common verb used in reference to browsing the internet; similarly, the word "scrolling" refers to sliding through online content. After 3 years of being on the Merriam-Webster "watching" list, "doomscrolling" was recognized as an official word in September 2023. Dictionary.com chose it as the top monthly trend in August 2020. The Macquarie Dictionary named doomscrolling as the 2020 Committee's Choice Word of the Year.
Popularity
According to Merriam-Webster, the term was first used in 2020. The term continued to gain traction in the early 2020s through events such as the COVID-19 pandemic, the George Floyd protests, the 2020 U.S. presidential election, the storming of the U.S. Capitol in 2021, and the Russian invasion of Ukraine since 2022, all of which have been noted to have exacerbated the practice of doomscrolling. Doomscrolling became widespread among users of Twitter during the COVID-19 pandemic, and has also been discussed in relation to the climate crisis. A 2024 survey conducted by Morning Consult, concluded that approximately 31% of American adults doomscroll on a regular basis. This percentage is further exaggerated the younger the adults are, with millennials at 46%, and Gen Z adults at 51%.
The infinite scroll
Infinite scrolling is a design approach which loads content continuously as the user scrolls down. It eliminates the need for pagination, thereby encouraging doomscrolling behaviours. The feature allows a social media user to "infinitely scroll", as the software continuously loads new content and displays an endless stream of information. Consequently, this feature can exacerbate doomscrolling, as it removes natural stopping points at which a user might pause. The concept of infinite scrolling is sometimes attributed to Aza Raskin, who eliminated the pagination of web pages in favor of continuously loading content as the user scrolls down the page. Raskin later expressed regret at the invention, describing it as "one of the first products designed to not simply help a user, but to deliberately keep them online for as long as possible". Usability research suggests infinite scrolling can present an accessibility issue. The lack of stopping cues has been described as a pathway to both problematic smartphone use and problematic social media use.
Social media's role
Social media companies play a significant role in the perpetuation of doomscrolling by leveraging algorithms designed to maximize user engagement. These algorithms prioritize content that is emotionally stimulating, often favoring negative news and sensationalized headlines to keep users scrolling. The business models of most social media platforms rely heavily on user engagement, which means that the longer people stay on their platforms, the more advertisements they see, and the more data is collected on their behavior. This creates a cycle where emotionally charged content—often involving negative or anxiety-inducing information—is repeatedly pushed to users, encouraging them to keep scrolling and consuming more content. Despite the well-documented negative effects of doomscrolling on mental health, social media companies are incentivized to maintain user engagement through these methods, making it challenging for individuals to break free from the habit.
Explanations
Negativity bias
The act of doomscrolling can be attributed to the natural negativity bias people have when consuming information. Negativity bias is the idea that negative events have a larger impact on one's mental well-being than good ones. Jeffrey Hall, a professor of communication studies at the University of Kansas in Lawrence, notes that due to an individual's regular state of contentment, potential threats provoke one's attention. One psychiatrist at the Ohio State University Wexner Medical Center notes that humans are "all hardwired to see the negative and be drawn to the negative because it can harm [them] physically." He cites evolution as the reason for why humans seek out such negatives: if one's ancestors, for example, discovered how an ancient creature could injure them, they could avoid that fate.
As opposed to primitive humans, however, most people in modern times do not realize that they are even seeking negative information. Social media algorithms heed the content users engage in and display posts similar in nature, which can aid in the act of doomscrolling. As per the clinic director of the Perelman School of Medicine's Center for the Treatment and Study of Anxiety: "People have a question, they want an answer, and assume getting it will make them feel better ... You keep scrolling and scrolling. Many think that will be helpful, but they end up feeling worse afterward."
Fear of missing out
Doomscrolling can also be explained by the fear of missing out, a common fear that causes people to take part in activities that may not be explicitly beneficial to them, but which they fear "missing out on". This fear is also applied within the world of news, and social media. A research study conducted by Statista in 2013 found that more than half of Americans experienced FOMO on social media; further studies found FOMO affected 67% of Italian users in 2017, and 59% of Polish teenagers in 2021.
Thus, Bethany Teachman, a professor of psychology at the University of Virginia, states that FOMO is likely to be correlated with doomscrolling due to the person's fear of missing out on crucial negative information.
Control seeking
Obsessively consuming negative news online can additionally be partially attributed to a person's psychological need for control. As stated earlier, the COVID-19 pandemic coincided with the rise in popularity of doomscrolling. A likely reason is that during uncertain times people engage in doomscrolling as a way to gather information and gain a sense of mastery over the situation. They do this to reinforce the belief that staying informed and in control will protect them from grim situations. However, while attempting to seize control, individuals more often than not develop more anxiety about the situation as a result of doomscrolling, rather than lessening it.
Brain anatomy
Doomscrolling, the compulsion to engross oneself in negative news, may be the result of an evolutionary mechanism where humans are "wired to screen for and anticipate danger". By frequently monitoring events surrounding negative headlines, staying informed may grant the feeling of being better prepared; however, prolonged scrolling may also lead to worsened mood and mental health as personal fears might seem heightened.
The inferior frontal gyrus (IFG) plays an important role in information processing and integrating new information into beliefs about reality. In the IFG, the brain "selectively filters bad news" when presented with new information as it updates beliefs. When a person engages in doomscrolling, the brain may feel under threat and shut off its "bad news filter" in response.
In a study where researchers manipulated the left IFG using transcranial magnetic stimulation (TMS), patients were more likely to incorporate negative information when updating beliefs. This suggests that the left IFG may be responsible for inhibiting bad news from altering personal beliefs; when participants were presented with favorable information and received TMS, the brain still updated beliefs in response to the positive news. The study also suggests that the brain selectively filters information and updates beliefs in a way that reduces stress and anxiety by processing good news with higher regard (see optimistic bias). Increased doomscrolling exposes the brain to greater quantities of unfavorable news and may restrict the brain's ability to embrace good news and discount bad news; this can result in negative emotions that make one feel anxious, depressed, and isolated.
Health effects
Psychological effects
Health professionals have advised that excessive doomscrolling can negatively impact existing mental health issues. While the overall impact that doomscrolling has on people may vary, it can often make one feel anxious, stressed, fearful, depressed, and isolated.
Research
Professors of psychology at the University of Sussex conducted a study in which participants watched television news consisting of "positive-, neutral-, and negative valenced material". The study revealed that participants who watched the negative news programs showed an increase in anxiety, sadness, and catastrophic tendencies regarding personal worries.
A study conducted by psychology researchers in conjunction with the Huffington Post found that participants who watched three minutes of negative news in the morning were 27% more likely to have reported experiencing a bad day six to eight hours later. Comparatively, the group who watched solutions-focused news stories reported a good day 88% of the time.
News avoidance
Some people have begun coping with the abundance of negative news stories by avoiding news altogether. A study from 2017 to 2022 showed that news avoidance is increasing, and that 38% of people admitted to sometimes or often actively avoiding the news in 2022, up from 29% in 2017. Some journalists have admitted to avoiding the news; journalist Amanda Ripley wrote that "people producing the news themselves are struggling, and while they aren't likely to admit it, it is warping the coverage." She also identified ways she believes could help fix the problem, such as intentionally adding more hope, agency, and dignity into stories so readers don't feel the helplessness which leads them to tune out entirely.
In 2024, a study by the University of Oxford's Reuters Institute for the Study of Journalism indicated that an increasing number of people are avoiding the news. In 2023, 39% of people worldwide reported actively avoiding the news, up from 29% in 2017. The study suggests that conflicts in Ukraine and the Middle East may be contributing factors to this trend. In the UK, interest in news has nearly halved since 2015.
See also
References
External links
Article on Medium.com
Article on Metro News
Article on The Star
2010s neologisms
2020s neologisms
Digital media use and mental health
Information society
Internet manipulation and propaganda
Internet terminology | Doomscrolling | Technology | 2,318 |
25,654,198 | https://en.wikipedia.org/wiki/Telescope%20for%20Habitable%20Exoplanets%20and%20Interstellar/Intergalactic%20Astronomy | Telescope for Habitable Exoplanets and Interstellar/Intergalactic Astronomy (THEIA) is a NASA-proposed 4-metre optical/ultraviolet space telescope that would succeed the Hubble Space Telescope and complement the infrared James Webb Space Telescope. THEIA would use a 40-metre occulter to block starlight so as to directly image exoplanets.
It was proposed with three main instruments and an occulter:
eXoPlanet Characterizer (XPC)
Star Formation Camera (SFC),
Ultraviolet Spectrograph (UVS)
A separate occulter spacecraft
See also
List of proposed space observatories
References
THEIA Website
External links
2010 white paper (.pdf)
Design of a telescope-occulter system for THEIA
Space telescopes | Telescope for Habitable Exoplanets and Interstellar/Intergalactic Astronomy | Astronomy | 156 |
27,015,788 | https://en.wikipedia.org/wiki/Michael%20Zerner | Michael Charles Zerner (January 1, 1940 – February 2, 2000) was an American theoretical chemist, professor at the University of Guelph from 1970 to 1981 and University of Florida from 1981 to 2000. Zerner earned his Ph.D. under Martin Gouterman at Harvard, working with the spectroscopy of porphyrins. He conceived and wrote a quantum chemistry program, known as BIGSPEC or ZINDO, for calculating electronic spectra of big molecules. In 1996 Zerner was diagnosed with liver cancer, and died on February 2, 2000, survived by his wife and two children.
External links
1940 births
2000 deaths
University of Florida faculty
Theoretical chemists
Harvard University alumni
Computational chemists
Academic staff of the University of Guelph
American expatriates in Canada | Michael Zerner | Chemistry | 158 |
15,508,064 | https://en.wikipedia.org/wiki/Internet%20in%20Tajikistan | Internet in Tajikistan first became available during the early 1990s. Tajikistan had just become independent in 1992, with Emomali Rahmon as the new ruler, when the internet was introduced to the country. Nevertheless, it took more than a decade for the country's internet to become more widely accessible. The history of the internet in Tajikistan extends from 1992 to the present day. By 2009, internet penetration had grown considerably since the internet's introduction, and the number of Internet Service Providers (ISPs) had increased. For most applications a VPN is necessary inside Tajikistan, except for government use.
Although there was initially little regulation of the internet, after the 2000s new legislation came to govern the internet in Tajikistan. There are certain restrictions on what may be accessed and displayed on the internet, and as such, surveillance and filtering are present.
History
The Internet in Tajikistan emerged as the country was ending a bloody civil war that followed the demise of Soviet rule in the early 1990s. The resulting fragmentation of power also meant that Internet services developed largely without state interference and the Ministry of Transport and Communications played a weak role in the development of the sector as a whole. Telecommunications remained fragmented up until the end of the 1990s, with several companies failing to interconnect because of fierce (and at times violent and armed) competition. During this period of instability, ISPs were aligned with feuding political and criminal interests that spilled over to the competition among the ISPs themselves.
Since the end of the civil war, the government has taken steps to attract investors and liberalize the sector prompted by expectations of accession to the WTO. However, important steps are still pending, such as the privatization of Tajiktelecom (the national operator) and the establishment of an independent regulatory authority. In recent years, the telecommunications sector has boosted Tajikistan's GDP, and the number of licensed Internet and mobile operators has been increasing. In 2008, more than 180 companies were licensed in the ICT market.
Internet penetration and ISPs
As of January 2020 Tajikistan has 26% internet penetration.
Internet penetration in Tajikistan was estimated at 9.3 percent in 2009. In 2009, the cost of accessing the Internet increased, further restricting development of the sector. Access costs of US$0.73 per hour at Internet cafés and up to $300 for unlimited Wi-Fi traffic compare poorly with average wages of $35 per month and a minimum salary of $7 per month. Unlimited monthly traffic by dial-up access costs $26.41; xDSL with capacity of 128/64 kbit/s amounts to $200; and Wi-Fi unlimited traffic per month with the same capacity is $300.
One respected Tajik NGO estimates that 1 percent of households own personal computers and that most people access the Internet from home by way of dial-up connections. Access with DSL and wireless (Wi-Fi and WiMAX) technologies is limited by relatively high costs, and therefore restricted to a small number of commercial companies.
In 2009, there were ten main ISPs in Tajikistan actively providing Internet services to all major cities in the country. The state-owned telecommunications company Tajiktelecom, which provides local, long-distance, and international telephone, mobile telephony, and Internet services, lost its unrivaled dominance of the telecoms market in 2007, when Babilon-Mobile seized more than 30 percent of the market. Tajikistan remains dependent on satellite-based connections using Discovery Global Networks, as the cost of fiber remains high, approximately 30 percent higher than using the same-capacity channel over VSAT. The country is connected to the Trans-Asia-Europe (TAE) fiber-optic highway passing through Uzbekistan, and a second connection is from Kyrgyzstan. In part to overcome this bottleneck, Tajiktelecom expanded its fiber-optic infrastructure across the country and established connections with China.
The ISPs are reluctant to share information about their bandwidth because of the concern that the data would be used by their competitors to undermine their market position. They are also reluctant to discuss their international points of connection from which they buy bandwidth. The ONI data reveal that with the exception of TARENA (an educational network), all Tajik ISPs maintained two international points of access, one located in Russia and the other in Western Europe. Tajik providers are aggressive in adopting new technologies. Three of the operators, Babilon-T, Telecom Technology, and Eastera, provide a commercial Next Generation Network (NGN) service.
In 2005, the Association of Tajik ISPs established a national Internet exchange point (IXP) that connected only four of the ten commercial ISPs (Babilon-T, Compuworld, Eastera, and MKF Networks), as well as TARENA. At the time of writing, the IXP is not operational as ISPs prefer to maintain bilateral peering connections between them.
Most Internet users are young and access the Internet through Internet cafés close to schools and universities. In January 2006, the Ministry of Transport and Communications estimated that some 400 Internet cafés, mostly concentrated in large cities, operated in the country. Many Internet cafés act as second-tier ISPs and buy their bandwidth from the first-level ISPs (i.e., main ISPs in the country with independent international connection). Recent changes in licensing regulations require Internet cafés operating as ISPs to obtain a license from the Ministry, a requirement which has brought about a decrease in the overall number of Internet cafés.
Although more than 70 percent of the population resides in rural areas, Internet access is mainly restricted to urban areas because of poor infrastructure and low affordability. A 2005 study by the local Civil Initiative on Internet Policy (CIPI) shows a great disparity between the percentage of men accessing the Internet (77.5 percent) and that of women (22.5 percent). About 12 percent of users are secondary school students, with around 100 schools across the country connected to the Internet. The most active users are university students, employees of international organizations, commercial companies, and public sector institutions.
Tajik is the official national language. Nevertheless, Russian remains the most popular language for Internet use. According to data obtained from the national information portal (TopTJ.com), the top-ten most-visited Web sites in October 2007 were informational and analytical portals (AsiaPlus, Varorud, Watanweb, Ariana), a commercial bank, and entertainment sites. Other popular Web sites include mail.ru; popular search engines are rambler.ru, google.com, yahoo.com, and yandex.ru. Among Tajik youth, the most popular applications include instant messengers, followed by social networking sites (odnokassniki.ru, my.mail.ru), and online educational resources.
Local Tajik content on the Internet is poorly developed. Most Internet content is available in Russian, but the knowledge of Russian among the younger generation is gradually decreasing. A survey conducted among 342 students and professors from nine universities showed 60 percent of respondents saw the Internet as an informational and educational resource, but not as a means to create local information resources. The Tajik top-level domain name was registered with the Internet Assigned Numbers Authority (IANA) in 1997, but the domain name was later suspended because it was used mainly for registering pornography sites. In 2003, the domain name registration was delegated to the Information and Technical Center of the President of Tajikistan Administration, a state entity that now supervises registrations within the ".tj" domain. Any operator that has a license for providing telecommunication services (including Internet) is eligible to act as a domain registrar. By January 19, 2008, 4,894 second-level domain names were registered within the ".tj" domain.
Legal and regulatory frameworks
All Tajik ISPs operate under a license from the Tajik Ministry of Transport and Communication. Internet service providers are permitted to operate VoIP services under an IP-telephony license, although the ministry has introduced amendments that require VoIP providers to obtain a special license, presumably as a means to further regulate the sector. In Tajikistan, P2P services are not popular, and the government has not shown ambitions to regulate them at this time.
The main state entities regulating the Internet in Tajikistan are the Security Council (SC), the ICT Council, and the MTC (an entity established in February 2007, replacing the former Ministry of Communications). The Communications and Informatization Department of the MTC is the main regulator in the telecommunications industry and is empowered to issue licenses for any related activities. In 2003, the government adopted the Conception on Information Security, which serves as a platform for proclaiming official views and policy directions to preserve state information security. The president remains the key authority that ratifies the main legal documents in the IT sector and directs ICT policy in the country. The SC controls the implementation of the State Strategy on Information and Communication Technologies for Development of the Republic of Tajikistan (e-Strategy), aimed at developing the information society and exploiting the country's ICT potential. The SC monitors telecommunications, including the Internet, for national security reasons. The ICT Council, where the president sits as chairman alongside members of the government, is responsible for implementing and coordinating work under the e-Strategy and advising the president. However, although the council was established in February 2006, it has yet to be convened.
The government restricts the distribution of state secrets and other privileged data intending to "discredit the dignity and honor of the state and the President," or that which contains "violence and cruelty, racial, national and religious hostility ... pornography ... and any other information prohibited by law." The provisions of this regulation are broad and allow state bodies wide discretion in their application. The control over information security is assigned to the Main Department of State Secrets and the Ministry of Security.
The lower chamber (Majlisi Namoyandagon) and the president ratified the Law on Changes and Amendments to the Criminal Code in June and July 2007, respectively. The changes introduced, inter alia, provisions on defamation (Article 135, part 2, Slander) and provisions on illegal collection and distribution of private data (Article 144, part 1). Defamation incurred over "mass media or Internet" is prosecuted according to local laws when it contains "intentional distribution via the Internet of knowingly false, libelous and insulting information, as well as expletive words and phrases which denigrate the dignity of human personality."
Surveillance and filtering
Several government agencies possess the right to inspect ISPs’ activities and premises, and require information on their users. The rights and obligations of ISPs in this regard are envisioned in the Annex to the "Internet Services Provision Rules within the Republic of Tajikistan" (herein referred to as the Rules). According to Section 4, paragraph 15, of the Rules, the provider is obliged to "render its activity in accordance with the current Rules" and "provide an easy access to its facilities for employees of the State Communications Inspectorate of the Ministry of Transport and Communications, Ministry of Security and other state agencies granted under the corresponding rules, provide on their demand information, for which they are authorized to ask and fulfill their instructions on time."
In 2006, the government signaled its intention to create an agency under the auspices of the Ministry of Transport and Communications that would control the ISP sector. All telecoms and ISPs were required to provide direct access to the state inspectorate in a manner similar to Russian surveillance legislation (SORM). In 2009, the high cost of the project as well as lobbying from telecom operators halted its realization.
ONI Testing Results
Tajikistan does not have an official policy on Internet filtering. However, state authorities have been known to restrict access to some Web sites at politically sensitive times by communicating their "recommendations" to all top-level ISPs, an example of second-generation controls. Prior to the 2006 presidential election, the government-controlled Communications Regulation Agency issued a "Recommendation on filtering" that advised ISPs that, "for the purpose of information security," they should "engage in filtering and block access to Web sites that aim to undermine the state policy in the sphere of information." As a result, several oppositional news Web sites hosted in Russia or Tajikistan were inaccessible to Tajik users for several days. Although officials offered unclear reasons for shutting down the Web sites, independent media sources believe that the block list will grow in the future.
In 2007 and 2008, the OpenNet Initiative conducted testing in Tajikistan on four key ISPs: Babilon-T, Eastera, Tajiktelecom, and TARENA. Testing in Tajikistan yielded no evidence of Internet filtering. This extends to pornographic content, and with the exception of TARENA (which services schools and universities), the major ISPs do not filter such content at the backbone level. However, accessing pornographic content at Internet cafés is illegal. Any person caught accessing such content is subject to a fine ranging from US$15 to $100, and violators may be criminally prosecuted. The ONI's investigation concluded that currently most Internet cafés do not filter access to pornographic content. However, they do employ monitoring software that notifies them when a client is attempting to retrieve such content.
See also
References
This article was originally adapted from the December 1, 2010 OpenNet Initiative profile of Tajikistan, which was published under a Creative Commons Attribution license.
Tajikistan
Communications in Tajikistan
Tajikistan | Internet in Tajikistan | Technology | 2,776 |
14,224,142 | https://en.wikipedia.org/wiki/HBG1 | Hemoglobin subunit gamma-1 is a protein that in humans is encoded by the HBG1 gene.
Function
The gamma globin genes (HBG1 and HBG2) are normally expressed in the fetal liver, spleen and bone marrow. Two gamma chains together with two alpha chains constitute fetal hemoglobin (HbF) which is normally replaced by adult hemoglobin (HbA) in the year following birth. In the non-pathological condition known as hereditary persistence of fetal hemoglobin (HPFH), gamma globin expression is continued into adulthood. Also, in cases of beta-thalassemia and related conditions, gamma chain production may be maintained, possibly as a mechanism to compensate for the mutated beta-globin. The two types of gamma chains differ at residue 136 where glycine is found in the G-gamma product (HBG2) and alanine is found in the A-gamma product (HBG1). The former is predominant at birth. The order of the genes in the beta-globin cluster is: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'.
References
Further reading
External links
Hemoglobins | HBG1 | Chemistry | 264 |
43,541,168 | https://en.wikipedia.org/wiki/16%2C807 | 16807 is the natural number following 16806 and preceding 16808.
In mathematics
As a number of the form n^(n − 2) with n = 7 (16807 = 7^5), it arises in Cayley's formula as the number of labeled trees on seven nodes.
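As a quick cross-check (an illustrative aside, not part of the original article), Cayley's count of labeled trees on seven nodes can also be obtained from Kirchhoff's matrix-tree theorem, which gives the number of spanning trees of the complete graph K7 as a cofactor of its Laplacian. The sketch below assumes NumPy is available.

```python
# Cross-check of Cayley's formula for n = 7 via Kirchhoff's matrix-tree
# theorem: the number of spanning trees of the complete graph K_n equals
# any cofactor of its Laplacian, and should match n**(n - 2) = 7**5 = 16807.
import numpy as np

n = 7
# Laplacian of K_n: degree n - 1 on the diagonal, -1 everywhere else.
laplacian = (n - 1) * np.eye(n) - (np.ones((n, n)) - np.eye(n))
# Delete the last row and column and take the determinant (a cofactor).
cofactor = round(np.linalg.det(laplacian[:-1, :-1]))

print(cofactor, n ** (n - 2))   # both print 16807
```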
In other fields
Several authors have suggested a Lehmer random number generator with 16807 as its multiplier, most prominently the "minimal standard" generator of Park and Miller, which iterates multiplication by 16807 modulo the Mersenne prime 2^31 - 1; a sketch is given below.
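A minimal sketch of such a generator, using the multiplier 16807 and the modulus 2^31 - 1; the seed value is arbitrary.

```python
# Sketch of a Lehmer (multiplicative congruential) generator with the
# "minimal standard" parameters: multiplier 16807 = 7**5 and modulus
# 2**31 - 1.  The state must never become 0.
MODULUS = 2**31 - 1       # the Mersenne prime 2147483647
MULTIPLIER = 16807        # 7**5

def lehmer(seed):
    """Yield an endless stream of pseudo-random integers in [1, MODULUS - 1]."""
    state = seed % MODULUS
    if state == 0:
        raise ValueError("seed must not be a multiple of the modulus")
    while True:
        state = (MULTIPLIER * state) % MODULUS
        yield state

gen = lehmer(42)
print([next(gen) for _ in range(3)])
```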
References
External links
16807 : facts & properties
16807 | 16,807 | Mathematics | 83 |
42,374,268 | https://en.wikipedia.org/wiki/Chinese%20button%20knot | The Chinese button knot is essentially a knife lanyard knot where the lanyard loop is shortened to a minimum, i.e. tightened to the knot itself. There emerges therefore only two lines next to each other from the knot: the beginning and the end. The knot has traditionally been used as a button on clothes in Asia, thus the name.
Tying
The basic Chinese button knot (ABOK #599 on one string) is usually tied with a carrick bend that attaches the two ends as a first step. This then results in a knife lanyard knot (ABOK #787), where the loop part can be sized and used as a buttonhole, while the knot part can be used as a button.
The ABOK description and several video demonstrations are listed in the references.
There is however a tying method that does not require a carrick bend, rather a slip knot as a first step, and does not produce a lanyard loop that needs to be reduced when used as a button. This method provides just the button, a spherical basket weave knot, in the style of Turk's head knot.
A third way to tie this knot starts with two loops, almost like tying the Celtic button knot, except for a curvature change at the center, which determines how the ends exit the knot: at opposite sides for the Celtic knot, at the same side here.
The resulting knot in both tying methods (the slip-knot method and the two-loops or WhyKnot method) is ABOK #600, which is similar to the knife lanyard knot, but with the loop part reduced to the top-center bulge on its surface.
In both tying methods, it makes a difference which triangular hole at the S-shaped (back-bent) top center each end is tucked through:
tucking through the hole on the near side of the center (indicated by red lines in the corresponding image) gives ABOK #600, the 8-part knot, of which the common Chinese button knot is a version with a ninth surface part,
tucking through the hole on the opposite side (indicated by red lines in the corresponding image) gives ABOK #787, the knife lanyard knot, but with a retracted loop.
See also
Tangzhuang, a jacket which often incorporates knotted buttons
References
Buttons
Decorative knots
Fashion accessories
Parts of clothing
Sewing
Stopper knots
Textile arts
Textile closures | Chinese button knot | Technology | 460 |
18,163,211 | https://en.wikipedia.org/wiki/Computer-aided%20management%20of%20emergency%20operations | CAMEO is a system of software applications used widely to plan for and respond to chemical emergencies. It is one of the tools developed by EPA’s Office of Emergency Management (OEM) and the National Oceanic and Atmospheric Administration Office of Response and Restoration (NOAA), to assist front-line chemical emergency planners and responders. They can use CAMEO to access, store, and evaluate information critical for developing emergency plans. In addition, CAMEO supports regulatory compliance by helping users meet the chemical inventory reporting requirements of the Emergency Planning and Community Right-to-Know Act (EPCRA, also known as SARA Title III). CAMEO also can be used with a separate software application called LandView to display EPA environmental databases and demographic/economic information to support analysis of environmental justice issues.
The CAMEO system integrates a chemical database and a method to manage the data, an air dispersion model, and a mapping capability. All modules work interactively to share and display critical information. The CAMEO system is available in Macintosh and Windows formats.
Origin
CAMEO initially was developed because NOAA recognized the need to assist first responders with easily accessible and accurate response information. Since 1988, EPA and NOAA have collaborated to augment CAMEO to assist both emergency responders and planners. CAMEO has been enhanced to provide emergency planners with a tool to enter local information and develop incident scenarios to better prepare for chemical emergencies. The U.S. Census Bureau and the U.S. Coast Guard have worked with EPA and NOAA to continue to enhance the system.
External links
The CAMEO page at EPA.
The CAMEO page at NOAA.
Tier II Online - with CAMEO integration.
Disaster preparedness in the United States
Chemical safety
Emergency management software
Project management software | Computer-aided management of emergency operations | Chemistry | 348 |
208,288 | https://en.wikipedia.org/wiki/17%20%28number%29 | 17 (seventeen) is the natural number following 16 and preceding 18. It is a prime number.
17 was described at MIT as "the least random number", according to the Jargon File. This is supposedly because, in a study where respondents were asked to choose a random number from 1 to 20, 17 was the most common choice. This study has been repeated a number of times.
Mathematics
17 is a Leyland number and Leyland prime, using 2 & 3 (2^3 + 3^2); it is also a Leyland number and Leyland prime of the second kind, using 3 & 4 (3^4 - 4^3). 17 is a Fermat prime. 17 is one of six lucky numbers of Euler.
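As a small illustration (not part of the original article), a brute-force search over small exponent pairs recovers exactly the two representations mentioned above.

```python
# Search small exponent pairs for the two Leyland-style representations
# of 17: x**y + y**x (first kind) and x**y - y**x (second kind).
pairs_sum = [(x, y) for x in range(2, 10) for y in range(2, x + 1)
             if x**y + y**x == 17]
pairs_diff = [(x, y) for x in range(2, 10) for y in range(2, 10)
              if x != y and x**y - y**x == 17]
print(pairs_sum)    # [(3, 2)]  ->  2**3 + 3**2 = 17
print(pairs_diff)   # [(3, 4)]  ->  3**4 - 4**3 = 17
```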
Since seventeen is a Fermat prime, regular heptadecagons can be constructed with a compass and unmarked ruler. This was proven by Carl Friedrich Gauss and ultimately led him to choose mathematics over philology for his studies.
The minimum possible number of givens for a sudoku puzzle with a unique solution is 17.
Geometric properties
Two-dimensions
There are seventeen crystallographic space groups in two dimensions. These are sometimes called wallpaper groups, as they represent the seventeen possible symmetry types that can be used for wallpaper.
Also in two dimensions, seventeen is the number of combinations of regular polygons that completely fill a plane vertex. Eleven of these belong to regular and semiregular tilings, while 6 of these (3.7.42, 3.8.24, 3.9.18, 3.10.15, 4.5.20, and 5.5.10) exclusively surround a point in the plane and fill it only when irregular polygons are included.
Seventeen is the minimum number of vertices on a two-dimensional graph such that, if the edges are colored with three different colors, there is bound to be a monochromatic triangle; see Ramsey's theorem.
Either 16 or 18 unit squares can be formed into rectangles with perimeter equal to the area; and there are no other natural numbers with this property. The Platonists regarded this as a sign of their peculiar propriety; and Plutarch notes it when writing that the Pythagoreans "utterly abominate" 17, which "bars them off from each other and disjoins them".
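This can be checked by a short brute-force search (an illustrative aside, not part of the original article): writing the condition as a*b = 2*(a + b) and scanning small side lengths yields only the 3 × 6 and 4 × 4 rectangles.

```python
# Integer-sided rectangles whose perimeter equals their area,
# i.e. a * b == 2 * (a + b).  Only 3 x 6 (area 18) and 4 x 4 (area 16)
# qualify, the two neighbours of 17 mentioned above.
solutions = [(a, b) for a in range(1, 100) for b in range(a, 100)
             if a * b == 2 * (a + b)]
print(solutions)   # [(3, 6), (4, 4)]
```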
17 is the minimum number of right triangles required for the Spiral of Theodorus to complete one full revolution. This relates to Plato, who questioned why Theodorus (his tutor) stopped at √17 when illustrating adjacent right triangles whose bases are units and whose heights are successive square roots, starting with √1. In part due to Theodorus's work as outlined in Plato's Theaetetus, it is believed that Theodorus had proved that the square roots of the non-square integers from 3 to 17 are irrational by means of this spiral.
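Numerically (an illustrative aside, not part of the original article), the k-th triangle of the spiral subtends an angle of arctan(1/√k) at the centre, and the running sum of these angles first exceeds a full turn at the 17th triangle.

```python
# The k-th triangle in the Spiral of Theodorus contributes an angle of
# arctan(1/sqrt(k)) at the centre; the cumulative angle first exceeds a
# full turn (2*pi) at k = 17.
import math

total, k = 0.0, 0
while total < 2 * math.pi:
    k += 1
    total += math.atan(1 / math.sqrt(k))

print(k)   # 17  (total is about 6.37, just past 2*pi = 6.2832)
```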
Enumeration of icosahedron stellations
In three-dimensional space, there are seventeen distinct fully supported stellations generated by an icosahedron. The seventeenth prime number is 59, which is equal to the total number of stellations of the icosahedron by Miller's rules. Without counting the icosahedron as a zeroth stellation, this total becomes 58, a count equal to the sum of the first seven prime numbers (2 + 3 + 5 + 7 ... + 17). Seventeen distinct fully supported stellations are also produced by the truncated cube and the truncated octahedron.
Four-dimensional zonotopes
Seventeen is also the number of four-dimensional parallelotopes that are zonotopes. Another 34, or twice 17, are Minkowski sums of zonotopes with the 24-cell, itself the simplest parallelotope that is not a zonotope.
Abstract algebra
Seventeen is the highest dimension for paracompact Vinberg polytopes with rank n + 2 mirror facets, with the lowest belonging to the third.
17 is a supersingular prime, because it divides the order of the Monster group. If the Tits group is included as a non-strict group of Lie type, then there are seventeen total classes of Lie groups that are simultaneously finite and simple (see classification of finite simple groups). In base ten, (17, 71) form the seventh permutation class of permutable primes.
Other notable properties
The sequence of residues (mod n) of a googol and a googolplex, for n = 1, 2, 3, ..., agree up until n = 17.
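This is easy to verify with modular arithmetic (an illustrative aside, not part of the original article), since Python's three-argument pow() handles the enormous exponent of a googolplex efficiently.

```python
# Compare a googol (10**100) and a googolplex (10**(10**100)) modulo
# small n; the residues first disagree at n = 17.
googol_exponent = 100
googolplex_exponent = 10**100   # ~333 bits, so modular exponentiation stays fast

for n in range(2, 25):
    if pow(10, googol_exponent, n) != pow(10, googolplex_exponent, n):
        print("first disagreement at n =", n)   # n = 17
        break
```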
Seventeen is the longest sequence for which a solution exists in the irregularity of distributions problem.
Other fields
Music
Where the Pythagoreans regarded 17 with distaste because it falls between 16 and its epogdoon 18, the ratio 18:17 was a popular approximation for the equal tempered semitone (12-tone) during the Renaissance.
Notes
References
External links
Prime Curios!: 17
Is 17 the "most random" number?
Integers | 17 (number) | Mathematics | 985 |
47,486,255 | https://en.wikipedia.org/wiki/Northern%20Provinces | The Northern Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "TVL". It includes the South African provinces of Gauteng, Mpumalanga, Limpopo (Northern Province) and North West, together making up an area slightly larger than the former Transvaal Province.
See also
Cape Provinces
References
Bibliography
Biogeography | Northern Provinces | Biology | 105 |
106,235 | https://en.wikipedia.org/wiki/Pipette | A pipette (sometimes spelled as pipet) is a type of laboratory tool commonly used in chemistry and biology to transport a measured volume of liquid, often as a media dispenser. Pipettes come in several designs for various purposes with differing levels of accuracy and precision, from single piece glass pipettes to more complex adjustable or electronic pipettes. Many pipette types work by creating a partial vacuum above the liquid-holding chamber and selectively releasing this vacuum to draw up and dispense liquid. Measurement accuracy varies greatly depending on the instrument.
History
The first simple pipettes were made of glass, such as Pasteur pipettes. Large pipettes continue to be made of glass; others are made of squeezable plastic for situations where an exact volume is not required.
The first micropipette was patented in 1957 by Dr Heinrich Schnitger (Marburg, Germany). The founder of the company Eppendorf, Dr. Heinrich Netheler, inherited the rights and started the commercial production of micropipettes in 1961.
The adjustable micropipette is a Wisconsin invention developed through interactions among several people, primarily inventor Warren Gilson and Henry Lardy, a professor of biochemistry at the University of Wisconsin–Madison.
Nomenclature
Although specific names exist for each type of pipette, in practice, any type can be referred to as a "pipette". Pipettes that dispense less than 1000 μL are sometimes distinguished as micropipettes.
The terms "pipette" and "pipet" are used interchangeably despite minor historical differences in their usage.
Common pipettes
Air displacement micropipettes
Air displacement micropipettes are a type of adjustable micropipette that deliver a measured volume of liquid; depending on size, this can range from about 0.1 μL to 1,000 μL (1 mL). These pipettes require disposable tips that come in contact with the fluid.
These pipettes operate by piston-driven air displacement. A vacuum is generated by the vertical travel of a metal or ceramic piston within an airtight sleeve. As the piston moves upward, driven by the depression of the plunger, a vacuum is created in the space left vacant by the piston. The liquid around the tip moves into this vacuum (along with the air in the tip) and can then be transported and released as necessary. These pipettes are capable of being very precise and accurate. However, since they rely on air displacement, they are subject to inaccuracies caused by the changing environment, particularly temperature and user technique. For these reasons, this equipment must be carefully maintained and calibrated, and users must be trained to exercise correct and consistent technique.
The micropipette was invented and patented in 1960 by Dr. Heinrich Schnitger in Marburg, Germany. Afterwards, the co-founder of the biotechnology company Eppendorf, Dr. Heinrich Netheler, inherited the rights and initiated the global and general use of micropipettes in labs. In 1972, the adjustable micropipette was invented at the University of Wisconsin-Madison by several people, primarily Warren Gilson and Henry Lardy.
Types of air displacement pipettes include:
adjustable or fixed
volume handled
Single-channel, multi-channel or repeater
conical tips or cylindrical tips
standard or locking
manual or electronic
manufacturer
Irrespective of the brand or expense of a pipette, every micropipette manufacturer recommends checking the calibration at least every six months if it is used regularly. Companies in the drug or food industries are required to calibrate their pipettes quarterly (every three months). Schools conducting chemistry classes may calibrate annually. Forensics and research laboratories, where a great deal of testing is commonplace, typically perform monthly calibrations.
Electronic pipette
To minimize the possible development of musculoskeletal disorders due to repetitive pipetting, electronic pipettes commonly replace the mechanical version.
Positive displacement pipette
These are similar to air displacement pipettes, but are less commonly used and are used to avoid contamination and for volatile or viscous substances at small volumes, such as DNA. The major difference is that the disposable tip is a microsyringe (plastic), composed of a capillary and a piston (movable inner part) which directly displaces the liquid.
Volumetric pipettes
Volumetric pipettes or bulb pipette allow the user to measure a volume of solution extremely precisely (precision of four significant figures). These pipettes have a large bulb with a long narrow portion above with a single graduation mark as it is calibrated for a single volume (like a volumetric flask). Typical volumes are 20, 50, and 100 mL. Volumetric pipettes are commonly used to make laboratory solutions from a base stock as well as prepare solutions for titration.
Graduated pipettes
Graduated pipettes are a type of macropipette consisting of a long tube with a series of graduations, as on a graduated cylinder or burette, to indicate different calibrated volumes. They also require a source of vacuum; in the early days of chemistry and biology, the mouth was used. The safety regulations included the statement: "Never pipette by mouth KCN, NH3, strong acids, bases and mercury salts". Some pipettes were manufactured with two bubbles between the mouth piece and the solution level line, to protect the chemist from accidental swallowing of the solution.
Pasteur pipette
Pasteur pipettes are plastic or glass pipettes used to transfer small amounts of liquids, but are not graduated or calibrated for any particular volume. The bulb is separate from the pipette body. Pasteur pipettes are also called teat pipettes, droppers, eye droppers and chemical droppers.
Transfer pipettes
Transfer pipettes, also known as Beral pipettes, are similar to Pasteur pipettes but are made from a single piece of plastic and their bulb can serve as the liquid-holding chamber.
Specialized pipettes
Pipetting syringe
Pipetting syringes are hand-held devices that combine the functions of volumetric (bulb) pipettes, graduated pipettes, and burettes. They are calibrated to ISO volumetric A grade standards. A glass or plastic pipette tube is used with a thumb-operated piston and PTFE seal which slides within the pipette in a positive displacement operation. Such a device can be used on a wide variety of fluids (aqueous, viscous, and volatile fluids; hydrocarbons; essential oils; and mixtures) in volumes between 0.5 mL and 25 mL. This arrangement provides improvements in precision, handling safety, reliability, economy, and versatility. No disposable tips or pipetting aids are needed with the pipetting syringe.
Van Slyke pipette
The Van Slyke pipette, invented by Donald Dexter Van Slyke, is a graduated pipette commonly used in medical technology with serologic pipettes for volumetric analysis.
Ostwald–Folin pipette
The Ostwald–Folin pipette, developed by Wilhelm Ostwald and refined by Otto Folin, is a type of volumetric pipette used to measure viscous fluids such as whole blood or serum.
Winkler–Dennis gas combustion pipette
The Winkler–Dennis gas combustion pipette, developed by Clemens Winkler and refined by Louis Munroe Dennis, is an apparatus for the controlled reaction of liquids under a mild electric current and a supply of oxygen.
Glass micropipette
Glass micropipettes are fabricated in a micropipette puller and are typically used in a micromanipulator. These are used to physically interact with microscopic samples, such as in the procedures of microinjection and patch clamping. Most micropipettes are made of borosilicate, aluminosilicate or quartz with many types and sizes of glass tubing being available. Each of these compositions has unique properties which will determine suitable applications.
Microfluidic pipette
A recent introduction into the micropipette field integrates the versatility of microfluidics into a freely positionable pipette platform. At the tip of the device, a localized flow zone is created which allows for constant control of the nanolitre environment, directly in front of the pipette. The pipettes are made from polydimethylsiloxane (PDMS), which is formed using reactive injection molding. Interfacing of these pipettes using pneumatics enables multiple solutions to be loaded and switched on demand, with solution exchange times of 100ms. This type of pipette was invented by Alar Ainla, and currently situated in the Biophysical Technology Lab. at Chalmers University of Technology in Sweden.
Extremely low volume pipettes
A zeptolitre pipette has been developed at Brookhaven National Laboratory. The pipette is made of a carbon shell, within which is an alloy of gold-germanium. The pipette was used to learn about how crystallization takes place.
Pipette aids
A variety of devices have been developed for safer, easier, and more efficient pipetting. For example, a motorized pipette controller can aid liquid aspiration or dispensing using volumetric pipettes or graduated pipettes; a tablet can interact in real-time with the pipette and guide a user through a protocol; and a pipette station can help to control the pipette tip immersion depth and improve ergonomics.
Robots
Pipette robots are capable of manipulating pipettes just as humans would do.
Calibration
Pipette recalibration is an important consideration in laboratories using these devices. It is the act of determining the accuracy of a measuring device by comparison with NIST traceable reference standards. Pipette calibration is essential to ensure that the instrument is working according to expectations and as per the defined regimes or work protocols. Pipette calibration is considered to be a complex affair because it includes many elements of calibration procedure and several calibration protocol options as well as makes and models of pipettes to consider.
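A common in-house check is gravimetric: the pipette dispenses its nominal volume of distilled water onto an analytical balance several times, each mass is converted to a volume using the density of water (often refined with a Z-factor for temperature and air pressure), and the mean and scatter are compared against the tolerances in the laboratory's protocol. The sketch below illustrates the idea only; the Z-factor and tolerance limits are placeholder values, not figures taken from any standard.

```python
# Sketch of a gravimetric pipette check (illustrative only: the Z-factor
# and tolerance limits below are example values, not from any standard).
from statistics import mean, stdev

Z_UL_PER_MG = 1.0018   # approx. reciprocal density of water at 20 C, buoyancy ignored

def check_pipette(nominal_ul, weighings_mg,
                  max_systematic_pct=1.0, max_random_pct=0.5):
    """Convert balance readings to volumes and report systematic and random error."""
    volumes = [m * Z_UL_PER_MG for m in weighings_mg]
    systematic = 100 * (mean(volumes) - nominal_ul) / nominal_ul
    random_error = 100 * stdev(volumes) / nominal_ul
    passed = abs(systematic) <= max_systematic_pct and random_error <= max_random_pct
    return round(systematic, 2), round(random_error, 2), passed

# Ten weighings (in mg) for a pipette set to 100 uL:
readings = [99.7, 99.9, 100.1, 99.8, 100.0, 99.6, 99.9, 100.2, 99.8, 99.7]
print(check_pipette(100.0, readings))
```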
Posture and injuries
Proper pipetting posture is the most important element in establishing good ergonomic work practices. During repetitive tasks such as pipetting, maintaining body positions that provide a maximum of strength with the least amount of muscular stress is important to minimize the risk of injury. A number of common pipetting techniques have been identified as potentially hazardous due to biomechanical stress factors. Recommendations for corrective pipetting actions, made by various US governmental agencies and ergonomics experts, are presented below.
Winged elbow pipetting
Technique: elevated, “winged elbow”. The average human arm weighs approximately 6% of the total body weight. Holding a pipette with the elbow extended (winged elbow) in a static position places the weight of the arm onto the neck and shoulder muscles and reduces blood flow, thereby causing stress and fatigue. Muscle strength is also substantially reduced as arm flexion is increased.
Corrective action: Position elbows as close to the body as possible, with arms and wrists extended in straight, neutral positions (handshake posture). Keep work items within easy reach to limit extension and elevation of arm. Arm/hand elevation should not exceed 12” from the worksurface.
Over rotated arm pipetting
Technique: Over-rotated forearm and wrist. Rotation of the forearm in a supinated position (palm up) and/or wrist flexion increases the fluid pressure in the carpal tunnel. This increased pressure can result in compression of soft tissues like nerves, tendons and blood vessels, causing numbness in the thumb and fingers.
Corrective action: Forearm rotation angle near 45° pronation (palm down) should be maintained to minimize carpal tunnel pressure during repetitive activity.
Clenched fist pipetting
Technique: Tight grip (clenched fist). Hand fatigue results from continuous contact between a hard object and sensitive tissues. This occurs when a firm grip is needed to hold a pipette, such as when jamming on a tip, and results in diminished hand strength.
Corrective action: Use pipettes with hooks or other attributes that allow a relaxed grip and/or alleviate need to constantly grip the pipette. This will reduce tension in the arm, wrist and hand.
Thumb plunger pipetting
Technique: Concentrated area of force (contact stress between a hard object and sensitive tissues). Some devices have plungers and buttons with limited surface areas, requiring a great deal of force to be expended by the thumb or other finger in a concentrated area.
Corrective action: Use pipettes with large contoured or rounded plungers and buttons. This will disperse the pressure used to operate the pipette across the entire surface of the thumb or finger, reducing contact pressure to acceptable levels.
Incorrect posture can have a strong impact on available strength.
Arm strength pipetting
Technique: elevated arm. Muscle strength is substantially reduced when arm flexion is increased.
Corrective action: Keep work items within easy reach to limit extension and elevation of arm. Arm/hand elevation should also not exceed 12” from the worksurface.
Elbow strength pipetting
Technique: Elbow flexion or abduction. Arm strength diminishes as elbow posture is deviated from a 90° position.
Corrective action: Keep forearm and hand elevation within 12” of the worksurface, which will allow the elbow to remain near a 90° position.
Unlike pipetting with traditional axial pipettes, ergonomic pipetting can improve posture and help prevent common pipetting injuries such as carpal tunnel syndrome, tendinitis and other musculoskeletal disorders. To be "ergonomically correct", significant changes to traditional pipetting postures are essential, such as:
minimizing forearm and wrist rotations, keeping a low arm and elbow height and relaxing the shoulders and upper arms.
Pipette stand
Typically, pipettes are stored vertically on holders called pipette stands. In the case of electronic pipettes, such stands can recharge their batteries. The most advanced pipette stands can directly control electronic pipettes.
Alternatives
An alternative technology, especially for transferring small volumes (micro and nano litre range) is acoustic droplet ejection.
References
External links
Helpful Hints on the Use of a Volumetric Pipet by Oliver Seely
Laboratory glassware
Laboratory equipment
Microbiology equipment
Volumetric instruments | Pipette | Technology,Engineering,Biology | 2,905 |
45,031,983 | https://en.wikipedia.org/wiki/Ricardo%20Baeza%20Rodr%C3%ADguez | Ricardo Baeza Rodríguez is a Chilean mathematician who works as a professor at the University of Talca. He earned his Ph.D. in 1970 from Saarland University, under the joint supervision of Robert W. Berger and Manfred Knebusch. His research interest is in number theory.
Career
Baeza became a member of the Chilean Academy of Sciences in 1983. He was the 2009 winner of the Chilean National Prize for Exact Sciences. In 2012, he became one of the inaugural fellows of the American Mathematical Society, the only Chilean to be so honored.
Research
In 1990, Baeza proved the norm theorem over characteristic two; it had been previously proved in other characteristics. The theorem states that if q is a nonsingular quadratic form over a field F and p is a monic irreducible polynomial over F (with corresponding field extension F[x]/(p)), then p is a similarity factor of q over the rational function field F(x) if and only if q becomes hyperbolic over F[x]/(p).
In 1992, Baeza and Roberto Aravire introduced a modification of Milnor's K-theory for quadratic forms over a field of characteristic two. In particular, if Wq(F) denotes the Witt group of quadratic forms over a field F, then for every value of n one can construct a group kn(F) together with an isomorphism onto the corresponding graded piece of the natural filtration of Wq(F).
In 2003, Baeza and Aravire studied quadratic forms and differential forms over certain function fields of an algebraic variety of characteristic two. Using this result, they deduced the characteristic two analogue of Knebusch's degree conjecture.
In 2007, Baeza and Arason found group presentations of the groups generated by n-fold bilinear Pfister forms and of the groups generated by n-fold quadratic Pfister forms.
Publications
References
External links
Year of birth missing (living people)
Living people
Number theorists
Saarland University alumni
Academic staff of the University of Talca
Fellows of the American Mathematical Society
20th-century Chilean mathematicians
21st-century Chilean mathematicians | Ricardo Baeza Rodríguez | Mathematics | 385 |
66,444,644 | https://en.wikipedia.org/wiki/Rockfall%20barrier | A rockfall barrier is a structure built to intercept rockfall, most often made from metallic components and consisting of an interception structure hung on post-supported cables.
Barriers are passive rockfall mitigation structures adapted for rock block kinetic energies up to 8 megajoules.
Alternatively, these structures are also referred to as fences, catch fences, rock mesh, or net fences.
History
In the 1960s, the Washington State Department of Transportation conducted the very first experiments for evaluating the efficiency of barriers in arresting rock blocks. A so-called 'chain link fence attenuator' was exposed to impacts by blocks freely rolling down a slope for evaluating its efficiency. These experiments were followed by some others till the end of the 1990s. Progressively, the testing technique was improved using zip-lines to convey the rock block to the barrier. Testing real-scale structures is now very common and part of the design process.
The very first use of rockfall barriers dates back to this period, and it progressively became widespread. Nowadays, barriers are the most widely used type of rockfall mitigation structure, and their variety has considerably increased since the 1970s, in particular over the last two decades.
A commonly used type of net is made from metallic rings. In such nets, each ring is interlaced with either 4 or 6 adjoining rings. These nets were first used after a French company bought a stock of nets used in the USSR for protecting harbours against submarine intrusion. These nets are referred to as ASM (anti-submarine). Other mesh shapes are also observed in rockfall barriers (see below).
From the 2000s, these barriers were progressively adapted for use as protection structures against various types of geophysical flows such as small landslides, mud flows, debris flows, and snow avalanches.
Types of barriers
Barriers are mainly made from metallic components, chiefly nets, cables, posts, shackles, and brakes. Barriers are connected to the ground by anchors. Depending on the rock block kinetic energy and the manufacturer, various structure types and designs exist, combining these different components.
This variety in barrier design in particular results from the different:
post cross sectional shapes (circular, square...)
mesh size and shape: made from hexagonal wire mesh, circular rings, or cables, the latter forming rectangular, square, rhombus, or water-drop mesh shapes
distance between supporting posts (ie. length of the mesh panels)
number and layout of the cables and brakes (if any)
brakes (if any): various technologies and activation force levels
number and layout of the brakes (if any)
post position with respect to the interception structure.
Static barriers
When the rock block kinetic energy is less than 500 kJ, a static barrier is often suitable. In general, it consists of static posts, cables, and an interception net. As a result of this design, the deformation of the structure when impacted is limited.
Flexible barriers
Flexible barriers are used when the rock block kinetic energy is larger than 500 kJ, up to 8000 kJ. The structure is given flexibility by brakes placed along the cables connected to the interception net. When the rock boulder impacts the net, forces develop in these cables. Once the force in a cable reaches a given value, the brake is activated, allowing for a larger barrier deformation and dissipating energy. The way this component dissipates energy varies from one brake technology to another (pure friction, partial failure, plastic deformation, or mixed friction/plastic deformation). Brakes also prevent large forces from developing in the barrier anchorage and are thus key components.
Design principles
The two main design characteristics of rockfall barriers are their height and their impact strength.
As for other passive rockfall protection structures (e.g., embankments), the required barrier height is defined from the rock fragment passing heights obtained from trajectory simulations. These simulations also provide the kinetic energy to consider for the barrier selection and design. The appropriate barrier choice is based on these two parameters; a simple illustration of this selection step is sketched below.
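As an illustration of this selection step (a sketch, not taken from the source; the thresholds simply restate the energy ranges quoted in this article, and rotational energy of the block is neglected):

```python
# Classify a design rockfall event from block mass and velocity, using the
# energy ranges quoted in this article: static barriers below about 500 kJ,
# flexible barriers up to about 8,000 kJ.  Rotational energy is neglected.
def design_energy_kj(mass_kg, velocity_ms):
    """Translational kinetic energy E = 1/2 m v^2, in kilojoules."""
    return 0.5 * mass_kg * velocity_ms**2 / 1000.0

def barrier_class(energy_kj):
    if energy_kj <= 500:
        return "static barrier"
    if energy_kj <= 8000:
        return "flexible barrier"
    return "beyond barrier range (consider another mitigation structure)"

# Example: a 2.5-tonne block travelling at 20 m/s carries 500 kJ.
energy = design_energy_kj(2500, 20)
print(round(energy), barrier_class(energy))
```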
The impact strength of a specific rockfall barrier is mainly determined from real-scale impact experiments. For instance, the design of flexible barriers is often based on results from the conformance tests prescribed in a specific European guideline. These tests consist of normal-to-the-barrier impacts in the center of a three-panel barrier by a projectile with a translational velocity of at least 25 m/s and no rotational velocity.
The response of a barrier may also be evaluated with specific numerical models, developed based on the finite element method or the discrete element method. These simulation tools may also be used to improve the barrier design, for example by accounting for site-specific impact conditions.
See also
Rockfall
Flexible debris-resisting barrier
Landslide mitigation
Rockfall protection embankment
References
Landslide analysis, prevention and mitigation | Rockfall barrier | Environmental_science | 978 |
3,514,267 | https://en.wikipedia.org/wiki/Zinc%20telluride | Zinc telluride is a binary chemical compound with the formula ZnTe. This solid is a semiconductor material with a direct band gap of 2.26 eV. It is usually a p-type semiconductor. Its crystal structure is cubic, like that for sphalerite and diamond.
Properties
ZnTe has the appearance of grey or brownish-red powder, or ruby-red crystals when refined by sublimation. Zinc telluride typically has a cubic (sphalerite, or "zincblende") crystal structure, but can also be prepared as rocksalt crystals or in hexagonal crystals (wurtzite structure). When irradiated by a strong optical beam, it burns in the presence of oxygen. Its lattice constant is 0.6101 nm, allowing it to be grown with or on aluminium antimonide, gallium antimonide, indium arsenide, and lead selenide. With some lattice mismatch, it can also be grown on other substrates such as GaAs, and it can be grown in thin-film polycrystalline (or nanocrystalline) form on substrates such as glass, for example, in the manufacture of thin-film solar cells. In the wurtzite (hexagonal) crystal structure, it has lattice parameters a = 0.427 nm and c = 0.699 nm.
Applications
Optoelectronics
Zinc telluride can be easily doped, and for this reason it is one of the more common semiconducting materials used in optoelectronics. ZnTe is important for development of various semiconductor devices, including blue LEDs, laser diodes, solar cells, and components of microwave generators. It can be used for solar cells, for example, as a back-surface field layer and p-type semiconductor material for a CdTe/ZnTe structure or in PIN diode structures.
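For orientation (an illustrative aside, not from the source text), the 2.26 eV direct gap quoted above corresponds to a photon wavelength of roughly 549 nm, in the green part of the spectrum; the conversion uses the standard relation λ ≈ hc/E.

```python
# Convert the ZnTe direct band gap (2.26 eV) to the corresponding photon
# wavelength using lambda = h*c / E, with h*c ~ 1239.84 eV*nm.
HC_EV_NM = 1239.84

def gap_to_wavelength_nm(gap_ev):
    return HC_EV_NM / gap_ev

print(round(gap_to_wavelength_nm(2.26)))   # ~549 nm, green light
```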
The material can also be used as a component of ternary semiconductor compounds, such as CdxZn(1-x)Te (conceptually a mixture composed from the end-members ZnTe and CdTe), which can be made with a varying composition x to allow the optical bandgap to be tuned as desired.
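A rough way to picture this tuning (illustrative only: the CdTe end-point value is approximate and the bowing of the real alloy is ignored) is a linear interpolation between the end-member band gaps.

```python
# Linear (Vegard-like) interpolation of the Cd_x Zn_(1-x) Te band gap
# between approximate room-temperature end-member values; real alloys show
# some bowing, so this is only a first-order estimate.
E_ZNTE = 2.26   # eV, ZnTe (x = 0)
E_CDTE = 1.5    # eV, CdTe (x = 1), approximate

def cdznte_gap(x):
    """Estimated band gap of Cd_x Zn_(1-x) Te for 0 <= x <= 1."""
    return (1 - x) * E_ZNTE + x * E_CDTE

for x in (0.0, 0.25, 0.5, 1.0):
    print(x, round(cdznte_gap(x), 2))
```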
Nonlinear optics
Zinc telluride together with lithium niobate is often used for generation of pulsed terahertz radiation in time-domain terahertz spectroscopy and terahertz imaging. When a crystal of such material is subjected to a high-intensity light pulse of subpicosecond duration, it emits a pulse of terahertz frequency through a nonlinear optical process called optical rectification. Conversely, subjecting a zinc telluride crystal to terahertz radiation causes it to show optical birefringence and change the polarization of a transmitting light, making it an electro-optic detector.
Vanadium-doped zinc telluride, "ZnTe:V", is a non-linear optical photorefractive material of possible use in the protection of sensors at visible wavelengths. ZnTe:V optical limiters are light and compact, without complicated optics of conventional limiters. ZnTe:V can block a high-intensity jamming beam from a laser dazzler, while still passing the lower-intensity image of the observed scene. It can also be used in holographic interferometry, in reconfigurable optical interconnections, and in laser optical phase conjugation devices. It offers superior photorefractive performance at wavelengths between 600 and 1300 nm, in comparison with other III-V and II-VI compound semiconductors. By adding manganese as an additional dopant (ZnTe:V:Mn), its photorefractive yield can be significantly increased.
References
External links
National Compound Semiconductor Roadmap (Office of Naval research) – Accessed April 2006
Tellurides
telluride
II-VI semiconductors
Terahertz technology
Nonlinear optical materials
Zincblende crystal structure | Zinc telluride | Physics,Chemistry | 821 |