id (int64) | url (string) | text (string) | source (string) | categories (string, 160 classes) | token_count (int64)
|---|---|---|---|---|---|
75,727,453 | https://en.wikipedia.org/wiki/Cation%E2%80%93cation%20bond | Cation-cation interactions often occur between molecular cations of transition metals and f-elements (so-called yl cations). They are treated as a distinct type of intermolecular interaction, since without taking it into account cations would simply repel each other in accordance with Coulomb's law.
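For a sense of scale (the separation used here is an illustrative value, not taken from the article), the bare Coulomb repulsion between two singly charged ions about 4 Å apart in vacuum is:

$$ E = \frac{q_1 q_2}{4\pi\varepsilon_0 r} \approx \frac{14.4\ \text{eV·Å}}{4\ \text{Å}} \approx 3.6\ \text{eV} , $$

which is the electrostatic penalty that attractive bonding contributions must outweigh for a cation-cation bond to form; in real solutions and crystals the repulsion is further screened by the medium and counter-ions.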
Cation-cation interactions in f-elements
Of particular interest are cation-cation interactions between actinide cations (uranium in the form of uranyl UO₂²⁺, neptunium in the form of neptunoyl NpO₂²⁺, plutonium and americium), which are essentially a manifestation of specific complex formation. They are a special case of intermolecular interactions that occur in molecular ions. The effect was first discovered when studying the behavior of neptunium(V) compounds in solutions of uranyl perchlorate. A great contribution to the crystal chemistry of cation-cation bonds was made by Mikhail Grigoriev.
The cation-cation interaction of [NpO2]+ determines the crystal structure of Np(V) compounds, and the strength of cation-cation bonds between neptunoyl ions in a solid is comparable to the strength of ordinary bonds with acid ligands, although in aqueous solution the role of these bonds is apparently somewhat smaller. In organic solution, the intensity of cation-cation interactions can remain high [1]. In the absence of mutual coordination of neptunium ions, Np(V) compounds can exhibit both structural similarity with An(VI) compounds and fundamental differences (showing a tendency to join Np coordination polyhedra through shared equatorial edges). Cation-cation interactions can be detected not only from structural studies but also from vibrational spectroscopy data. For uranyl, cation-cation interactions have also been recorded in the gas phase.
Cation-cation interactions in transition metal ions
For transition metals, cation-cation interactions appear in nitrenium complexes. Nitrenium ligands provide an excellent platform for the simple and efficient synthesis of extremely rare complexes that have positively charged ligands coordinated to positively charged metals, forming stable cation-cation and cation-dication coordination bonds.
References
Cations
Chemical bonding | Cation–cation bond | Physics,Chemistry,Materials_science | 477 |
26,385,993 | https://en.wikipedia.org/wiki/Bo%C3%B6tes%20III | Boötes III is an overdensity in the Milky Way's halo, which may be a disrupted dwarf spheroidal galaxy. It is situated in the constellation Boötes and was discovered in 2009 in data obtained by the Sloan Digital Sky Survey. The galaxy is located at a distance of about 46 kpc from the Sun and moves away from it at a speed of about 200 km/s. It has an elongated shape (axis ratio of 2:1) with a radius of about 0.5 kpc. The large size and irregular shape may indicate that Boötes III is in a transitional phase between a gravitationally bound galaxy and a completely unbound system.
Boötes III is one of the smallest and faintest satellites of the Milky Way—its integrated luminosity is about 18,000 times that of the Sun (absolute visible magnitude of about −5.8), which is much lower than the luminosity of many globular clusters. The mass of Boötes III is difficult to estimate because the galaxy is in the process of being disrupted. In that case, the velocity dispersion of its stars is not related to its mass.
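For reference, the quoted luminosity follows directly from the absolute visual magnitude, taking the Sun's absolute visual magnitude as roughly +4.8:

$$ \frac{L}{L_\odot} = 10^{-0.4\,(M_V - M_{V,\odot})} \approx 10^{-0.4\,(-5.8 - 4.8)} \approx 1.8\times10^{4} . $$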
The stellar population of Boötes III consists mainly of moderately old stars formed more than 12 billion years ago. The metallicity of these old stars is low, which means that they contain about 120 times fewer heavy elements than the Sun. Boötes III might be the source of stars of the Styx stream in the galactic halo, which was discovered together with this galaxy.
References
Dwarf spheroidal galaxies
Boötes
Local Group
Milky Way Subgroup
| Boötes III | Astronomy | 325 |
63,917,263 | https://en.wikipedia.org/wiki/Human-to-human%20transmission | Human-to-human transmission (HHT) is an epidemiologic vector, which is especially problematic when the disease is borne by individuals known as superspreaders. In these cases, the basic reproduction number of the virus, which is the average number of additional people that a single case will infect without any preventative measures, can be as high as 203.9. Interhuman transmission is a synonym for HHT.
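As a purely illustrative aside (the numbers below are hypothetical and not taken from the article), the basic reproduction number R₀ governs both the early growth of an outbreak and the herd-immunity threshold p_c:

$$ \text{cases after } g \text{ generations} \sim R_0^{\,g}, \qquad p_c = 1 - \frac{1}{R_0} , $$

so a disease with R₀ = 4 produces on the order of 4³ = 64 third-generation cases from a single index case, and sustained transmission stops only once more than about 75% of contacts are immune.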
The World Health Organization designation of a pandemic hinges on the demonstrable fact that there is sustained HHT in two regions of the world.
Synopsis
Relevant microbes may be viruses, bacteria, or fungi, and they may be spread through breathing, talking, coughing, sneezing, spraying of liquids, toilet flushing or any activities which generate aerosol particles or droplets or generate fomites, such as raising of dust.
Transfer efficiency depends not only on surface, but also on pathogen type. For example, avian influenza survives on both porous and non-porous materials for 144 hours.
The microbes may also be transmitted by improper use of cutlery or improper sanitation of dishes or bed linen. Particularly problematic are toilet practices, which lead to the fecal–oral route. Sexually transmitted diseases (STDs) are, by definition, spread from human to human.
List of HHT diseases
Examples of some HHT diseases are listed below.
measles: vaccine available
mumps: vaccine available
chicken pox: vaccine available
smallpox
bubonic plague: slim non-nil risk
pneumonic plague: 1910-11 Manchurian plague
tuberculosis
Norovirus
monkeypox
SARS-CoV-1
SARS-CoV-2: vaccine available
MERS
Avian flu
Sexually transmitted infections (STIs) or sexually transmitted diseases (STDs):
Syphilis, aka French pox
References
Sources
Epidemiology
Parasitology
Infectious diseases
Sanitation
Hygiene
Global health
Epidemics | Human-to-human transmission | Environmental_science | 390 |
15,367,309 | https://en.wikipedia.org/wiki/Rheumatoid%20nodule | A rheumatoid nodule is a lump of tissue, or an area of swelling, that appears on the exterior of the skin usually around the olecranon (tip of the elbow) or the interphalangeal joints (finger knuckles), but can appear in other areas. There are four different types of rheumatoid nodules: subcutaneous rheumatoid nodules, cardiac nodules, pulmonary nodules, and central nervous system nodules. These nodules occur almost exclusively in association with rheumatoid arthritis. Very rarely do rheumatoid nodules occur as rheumatoid nodulosis (multiple nodules on the hands or other areas) in the absence of rheumatoid arthritis. Rheumatoid nodules can also appear in areas of the body other than the skin. Less commonly they occur in the lining of the lungs or other internal organs. The occurrence of nodules in the lungs of miners exposed to silica dust is known as Caplan's syndrome. Rarely, the nodules occur at diverse sites on the body (e.g., upper eyelid, distal region of the soles of the feet, vulva, and internally in the gallbladder, lung, heart valves, larynx, and spine).
Rheumatoid nodules can vary in size from 2 mm to 5 cm and are usually rather firm to the touch. Quite often they are associated with synovial pockets or bursae. About 5% of people with rheumatoid arthritis have such nodules within two years of disease onset, and the cumulative prevalence is about 20–30%. Risk factors for developing rheumatoid nodules include smoking and trauma to small vessels.
Most of the time, nodules are not painful or disabling in any way; they are usually more of an unsightly nuisance. However, rheumatoid nodules can become painful when infection or ulcers occur on the skin of the nodule. Some nodules may disappear over time, but others may grow larger, making nodular size difficult to predict.
Treatment of rheumatoid nodules can be quite difficult, but both surgical removal and injection of corticosteroids have shown good results.
Pathophysiology
Although the exact process is unknown, there are a few hypotheses for the generation of rheumatoid nodules. It has been observed that rheumatoid nodules frequently form over extensor sites and areas vulnerable to trauma. The trauma causes inflammatory particles to build up and leads to a secondary inflammatory response which ultimately causes fibrin release and necrosis. Another hypothesis suggests that the inflammation of blood vessels activates complement components, which leads to the deposit of rheumatoid factors and fibrin.
The rheumatoid nodule is the most common cutaneous manifestation of rheumatoid arthritis. Rheumatoid arthritis involves chronic inflammation of synovial membranes, which leads to degradation of articular cartilage and the juxta-articular bone. Inflammation is caused by T cells, B cells, and monocytes when endothelial cells are activated. Neovascularization, the growth of new blood vessels, serves as an additional marker for rheumatoid arthritis. A hyperplastic synovial lining layer can be caused by the expansion of synovial fibroblast and macrophage cells. This expansion of the synovial membrane, sometimes referred to as "pannus", can lead to bony erosions and cartilage degradation at the site of the cartilage-bone junction in the periarticular bone.
The cause of rheumatoid arthritis is unknown. It is speculated that both genetic and environmental factors contribute to the development of rheumatoid arthritis. Gene loci and antigens, such as HLA class II antigens, have been seen as closely associated with rheumatoid arthritis. Environmental risk factors include smoking, periodontitis, viral infections, and gut, mouth, and lung microbiomes. Researchers have noted that Prevotella species, which are expanded in the gastrointestinal tract in early RA, and Porphyromonas gingivalis, which is associated with periodontitis, may have a role in pathogenesis.
Pathology
Histological examination of nodules shows that they consist of a shell of fibrous tissue surrounding a center of fibrinoid necrosis. Pea-sized nodules have one center. Larger nodules tend to be multilocular, with many separate shells or with connections between the necrotic centers. Individual necrotic centers may contain a cleft or several centers of necrosis may all open on to a large bursal pocket containing synovial fluid.
The boundary between the necrotic center and the outer fibrous shell is made up of the characteristic feature of the nodule, which is known as a cellular palisade. The palisade is a densely packed layer of macrophages and fibroblasts which tend to be arranged radially, like the seeds of a kiwifruit or fig. Further out into the fibrous shell there is a zone that contains T cells and plasma cells in association with blood vessels. The histology of pulmonary nodules is similar to that of subcutaneous nodules, with central necrosis surrounded by palisading macrophages and inflammatory infiltrate.
Risk factors
Rheumatoid nodules develop if a person currently has rheumatoid arthritis. However, not all people with rheumatoid arthritis develop rheumatoid nodules. Some risk factors for rheumatoid nodules for people with rheumatoid arthritis may include:
Smoking (strong association)
Elevated levels of serum rheumatoid factors
HLA-DRB1 gene (weak association)
Trauma to small vessels
Having severe rheumatoid arthritis
Taking Methotrexate over other arthritis drugs
Diagnosis
Differential diagnosis of rheumatoid nodules can be classified by localization, depth, pathology, age of onset, persistence, rheumatoid factor, concomitant joint disease, and bone erosions. Diagnosis is typically determined clinically by a rheumatologist. Rheumatoid arthritis associated rheumatoid nodules are typically subcutaneous and occur at extensor sites. The onset typically starts in adulthood and presents with rheumatoid factors, bone erosions, and concomitant joint diseases. The pathology is characterized by central necrosis, palisading mononuclear cells, and perivascular lymphocytic infiltration.
Rheumatoid nodulosis is characterized by multiple subcutaneous nodules presenting with rheumatoid factors but an absence of joint complaints. The nodules are typically small and concentrated on the extensor sites of the hands and feet, sometimes accompanied by bone erosions. The onset typically starts in adulthood with a pathology similar to rheumatoid arthritis associated rheumatoid nodules.
Benign rheumatoid nodules are often not associated with rheumatoid factors or concomitant joint diseases. They are typically found on the feet, scalp, and pretibial regions. Frequently seen in children before the age of 18, the pathology is similar to that of rheumatoid arthritis associated rheumatoid nodules. The nodules are non-tender and undergo rapid growth, but also resolve spontaneously. A similar presentation occurring more intracutaneously (superficial) is known as granuloma annulare.
Rheumatic fever nodules are typically associated with acute rheumatic fever in children. They are not accompanied by rheumatoid factors or bone erosions, but are associated with concomitant joint diseases. No larger than the size of peas, they are typically found at extensor sites and processus spinosi of the vertebrae. The pathology is characterized by central necrosis and little histiocytic/lymphocytic infiltration.
Prevalence
There are four different types of rheumatoid nodules: subcutaneous rheumatoid nodules, cardiac nodules, pulmonary nodules and central nervous system nodules.
Subcutaneous rheumatoid nodules
According to a study done by the BARFOT study group, 7% of individuals diagnosed with rheumatoid arthritis reported the presence of subcutaneous rheumatoid nodules upon initial diagnosis, and about 30–40% of all those diagnosed with rheumatoid arthritis reported developing these nodules over the course of the disease. Subcutaneous rheumatoid nodules are correlated with an increased risk of cardiovascular and respiratory disease, and those with detected subcutaneous rheumatoid nodules should be assessed for cardiovascular and respiratory risk factors.
Cardiac nodules
Rheumatoid nodules may also form in the heart. Specifically, it could develop in the myocardium, pericardium, and other valvular structures, and these nodules can be discovered through echocardiograms. There are little studies with minimal data on the development of cardiac nodules in association with rheumatoid arthritis, but the general consensus is that such occurrences is relatively rare.
Pulmonary nodules
The reported prevalence of pulmonary nodules varies depending on the method of detection. In a 1984 study of lung biopsies in rheumatoid arthritis, the reported prevalence was about 32% in a sample of 40 individuals. However, another clinical study using a different method of detection (plain film radiographs of the chest) found that only 2 out of 516 people (~0.4%) diagnosed with rheumatoid arthritis developed pulmonary nodules. Additionally, other clinical studies have reported increased pulmonary nodule growth following treatment with methotrexate, leflunomide, and etanercept.
Central nervous system nodules
Like cardiac nodules, nodules developing in the central nervous system are relatively rare. Most reports of nodule growth in the central nervous system also presented with severe stages of erosive joint disease. Generally, these nodules can be detected through MRI and confirmed through biopsy. Currently, no medications have been reported to reduce nodules in the central nervous system.
Prevention
There are no methods as of right now to completely prevent the development of rheumatoid nodules, but for those diagnosed with rheumatoid arthritis, proper management of the disease could reduce the risk of nodule formation. Additionally, proper medication adherence, smoking cessation, increasing physical activity, and keeping up with doctor appointments are just some lifestyle changes that could "prevent" nodules.
Treatment
Treatment for rheumatoid nodules may be tricky, as some treatments for rheumatoid arthritis can act against the nodules. Common drug therapies for rheumatoid arthritis, such as anti-TNF treatment or other immunosuppressive drugs, have shown little effect on the nodules. In fact, methotrexate, a drug often used in rheumatoid arthritis, is actually correlated with an increased risk of nodule formation. Because rheumatoid nodules can also cause pain or nerve entrapment, treatment of these symptoms with nonsteroidal anti-inflammatory drugs may be sufficient. Other drug therapies, such as corticosteroids, have been shown to decrease nodule size, although they can increase the risk of infection as well. Local corticosteroid injections seem to be the most studied treatment for rheumatoid nodules at present.
Surgical removal of the nodule is another treatment option. However, surgery is usually only indicated in cases of eroding or necrotising skin.
See also
Rheumatoid nodulosis
Rheumatoid arthritis
References
External links
Histopathology
Arthritis | Rheumatoid nodule | Chemistry | 2,558 |
1,550,886 | https://en.wikipedia.org/wiki/Dux%20Britanniarum | Dux Britanniarum was a military post in Roman Britain, probably created by Emperor Diocletian or Constantine I during the late third or early fourth century. The Dux (literally, "(military) leader") was a senior officer in the late Roman army of the West in Britain. It is listed in the Notitia Dignitatum as one of the three commands in Britain, along with the Comes Britanniarum and the Count of the Saxon Shore.
His responsibilities covered the area along Hadrian's Wall and its surroundings, from the Humber estuary in the southeast through today's Yorkshire, Cumbria and Northumberland to the mountains of the southern Pennines. The headquarters were in the city of Eboracum (York). The purpose of this buffer zone was to protect the economically important and prosperous southeast of the island from attacks by the Picts (tribes of what are now the Scottish lowlands) and by the Scots (Irish raiders).
History
The Dux Britanniarum was commander of the troops of the northern region, primarily along Hadrian's Wall. The post carried the rank of vir spectabilis, but was below that of the Comes Britanniarum. His responsibilities would have included protection of the frontier, maintenance of fortifications, and recruitment. Provisioning the troops would have played a significant part in the economy of the area. The Dux would have had considerable influence within his geographical jurisdiction, and exercised significant autonomy due in part to the distance from the headquarters of his superiors.
The Notitia Dignitatum lists the garrison along Hadrian's Wall (along with several sites on the coast of Cumbria) under the command of the Dux Britanniarum. Archaeological evidence shows that other units must have been stationed here which are not, however, mentioned in the Notitia. Most of them were established during the 3rd century.
Castles and units
His troops were limitanei or frontier guards and not the comitatenses or field army commanded by the Comes Britanniarum. Fourteen units in north Britain are listed in the Notitia as being under his command, stationed in either modern Yorkshire, Cumbria or Northumberland. Archaeological evidence indicates there were other posts occupied at the time which are not listed. His forces included three cavalry vexillationes with the rest being infantry. They were newly raised units rather than being third century creations. In addition to these fort garrisons, the dux commanded the troops at Hadrian's Wall: the Notitia lists their stations from east to west, as well as additional forts on the Cumbrian coast. These troops appear to have been third century regiments, although the reliability of the Notitia makes it difficult to infer any solid information from it.
From Chapter XL:
sub dispositione viri spectabilis Ducis Britanniarum (literally "made available to the most honorable military commander of the British provinces")
...in addition to the administrative staff (Officium) lists 14 prefects and their units with their deployment locations under the command of this Dux:
Praefectus Legionis sextae
Praefectus Numeri directorum, Verteris
Praefectus Numeri exploratorum, Lavatrae
Praefectus Equitum Dalmatarum, Praesidio
Praefectus Equitum Crispianorum, Dano
Praefectus Numeri defensorum, Barboniaco
Praefectus Equitum, catafractariorum, Morbio
Praefectus Numeri Solensium, Maglone
Praefectus Numeri barcariorum Tigrisiensium, Arbeia
Praefectus Numeri Pacensium, Magis
Praefectus Numeri Nerviorum Dictensium, Dicti
Praefectus Numeri Longovicanorum, Longovicium
Praefectus Numeri vigilum, Concangis
Praefectus Numeri supervenientium Petueriensium, Deruentione (Derventio?)
Then follow the garrisons along Hadrian's Wall (per item lineam Valli):
Cohortis quartae Lingonum, Segedunum
Tribune Alae Petrianae, Petriana
Praefectus cohortis primae Cornoviorum, Pons Aelius
Tribune Alae primae Asturum, Cilurnum or Cilurvum
Praefectus Numeri Maurorum Aurelianorum, Aballaba
Praefectus cohortis primae Frixagorum, Vindobala
Tribune cohortis secundae Lingonum, Segedunum
Tribune Alae Sabinianae, Hunnum or Onnum
Praefectus cohortis primae Hispanorum, Uxelodunum or Petriana
Tribune Alae secundae Asturum, Aesica
Praefectus cohortis secundae Thracum, Gabrosenti
Tribune cohortis primae Batavorum, Procolita
Tribune cohortis primae Aeliae Classicae, Tunnocelo
Tribune cohortis primae Tungrorum Classicae, Vercovicium
Tribune cohortis primae Morinorum, Glannoventa
Tribune cohortis quartae Gallorum, Vindolanda
Tribune cohortis tertiae Nerviorum, Alione (Alauna?)
Tribune cohortis primae Asturum, Aesica
Cuneus Sarmatarum, Bremetenraco (Bremenium?)(no officer stated)
Cohortis secundae Dalmatarum, Magnis
Tribune Alae primae Herculeae, Olenaco
Praefectus cohortis primae Aeliae Dacorum, Camboglanna or Banna
Tribune cohortis sextae Nerviorum, Virosido
and an unknown unit in the fort Luguvalium
The Dux Britanniarum held command over thirty-eight regimental commanders. Infantry units were concentrated along the Wall. A Sarmatian unit of heavy cavalry (Cuneus Sarmatarum) was stationed near the crossroads at Ribchester. As their name suggests, the men of the Numerus exploratorum were used for reconnaissance. The Equites Crispianorum were located at Doncaster, and a naval unit at the mouth of the Tyne. Collins estimates troop numbers from a low of 7,000 to as much as 15,000, with the average approximating 12,500.
Origin
The Legio sexta refers to Britain's long-established legion, the Legio VI of Eburacum (York). In late antiquity it seems to have had no fixed posting. One might expect this legion (full name: Legio VI Victrix Pia Fidelis Britannica) still to have been stationed at Eburacum at this time; its absence may indicate that the unit had been moved to another site by the time the list of the Dux Britanniarum in the Notitia Dignitatum was compiled, possibly in connection with the otherwise historically intangible primani iuniores in the army of the Comes Britanniarum.
The men under the Praefectus Numeri Solensium could (per Arnold Hugh Martin Jones, 1986) be the descendants of another British unit, the Legio XX Valeria Victrix. This is the only legion no longer listed in the Notitia Dignitatum. The last epigraphic evidence of its presence in Britain is a mention on coins of the usurper Carausius, a century before the Notitia Dignitatum was compiled.
See also
Fullofaudes
Dulcitius
Notes
Sources
Alexander Demandt: Geschichte der Spätantike: Das Römische Reich von Diocletian bis Justinian 284–565 n. Chr. München 1998 (Beck Historische Bibliothek).
Nick Fields: Rome's Saxon Shore: Coastal Defences of Roman Britain AD 250–500. Osprey Books, 2006 (Fortress 56).
Arnold Hugh Martin Jones: The Later Roman Empire, 284–602. A Social, Economic and Administrative Survey. 2 vols. Johns Hopkins University Press, Baltimore 1986.
Simon MacDowall: Late Roman Infantryman, 236–565 AD. Weapons, Armour, Tactics. Osprey Books, 1994 (Warrior 9).
Ralf Scharf: Der Dux Mogontiacensis und die Notitia Dignitatum. de Gruyter, Berlin 2005.
Fran & Geoff Doel, Terry Lloyd: König Artus und seine Welt. Translated from English by Christof Köhler. Sutton, Erfurt 2000.
Guy de la Bedoyere: Hadrian's Wall, History and Guide. Tempus, Stroud 1998.
Roman Britain
Saxon Shore
Military history of Roman Britain
Late Roman military ranks | Dux Britanniarum | Engineering | 1,847 |
7,200,934 | https://en.wikipedia.org/wiki/Silicon%20photonics | Silicon photonics is the study and application of photonic systems which use silicon as an optical medium. The silicon is usually patterned with sub-micrometre precision, into microphotonic components. These operate in the infrared, most commonly at the 1.55 micrometre wavelength used by most fiber optic telecommunication systems. The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon on insulator (SOI).
Silicon photonic devices can be made using existing semiconductor fabrication techniques, and because silicon is already used as the substrate for most integrated circuits, it is possible to create hybrid devices in which the optical and electronic components are integrated onto a single microchip. Consequently, silicon photonics is being actively researched by many electronics manufacturers including IBM and Intel, as well as by academic research groups, as a means for keeping on track with Moore's Law, by using optical interconnects to provide faster data transfer both between and within microchips.
The propagation of light through silicon devices is governed by a range of nonlinear optical phenomena including the Kerr effect, the Raman effect, two-photon absorption and interactions between photons and free charge carriers. The presence of nonlinearity is of fundamental importance, as it enables light to interact with light, thus permitting applications such as wavelength conversion and all-optical signal routing, in addition to the passive transmission of light.
Silicon waveguides are also of great academic interest due to their unique guiding properties: they can be used for communications, interconnects and biosensors, and they offer the possibility to support exotic nonlinear optical phenomena such as soliton propagation.
Applications
Optical communications
In a typical optical link, data is first transferred from the electrical to the optical domain using an electro-optic modulator or a directly modulated laser. An electro-optic modulator can vary the intensity and/or the phase of the optical carrier. In silicon photonics, a common technique to achieve modulation is to vary the density of free charge carriers. Variations of electron and hole densities change the real and the imaginary part of the refractive index of silicon as described by the empirical equations of Soref and Bennett. Modulators can consist of both forward-biased PIN diodes, which generally generate large phase-shifts but suffer of lower speeds, as well as of reverse-biased p–n junctions. A prototype optical interconnect with microring modulators integrated with germanium detectors has been demonstrated.
Non-resonant modulators, such as Mach-Zehnder interferometers, have typical dimensions in the millimeter range and are usually used in telecom or datacom applications. Resonant devices, such as ring-resonators, can have dimensions of few tens of micrometers only, occupying therefore much smaller areas. In 2013, researchers demonstrated a resonant depletion modulator that can be fabricated using standard Silicon-on-Insulator Complementary Metal-Oxide-Semiconductor (SOI CMOS) manufacturing processes. A similar device has been demonstrated as well in bulk CMOS rather than in SOI.
On the receiver side, the optical signal is typically converted back to the electrical domain using a semiconductor photodetector. The semiconductor used for carrier generation usually has a band gap smaller than the photon energy, and the most common choice is pure germanium. Most detectors use a p–n junction for carrier extraction; however, detectors based on metal–semiconductor junctions (with germanium as the semiconductor) have been integrated into silicon waveguides as well. More recently, silicon-germanium avalanche photodiodes capable of operating at 40 Gbit/s have been fabricated.
Complete transceivers have been commercialized in the form of active optical cables.
Optical communications are conveniently classified by the reach, or length, of their links. The majority of silicon photonic communications have so far been limited to telecom
and datacom applications, where the reach is several kilometers or several meters, respectively.
Silicon photonics, however, is expected to play a significant role in computercom as well, where optical links have a reach in the centimeter to meter range. In fact, progress in computer technology (and the continuation of Moore's Law) is becoming increasingly dependent on faster data transfer between and within microchips. Optical interconnects may provide a way forward, and silicon photonics may prove particularly useful, once integrated on the standard silicon chips. In 2006, Intel Senior Vice President - and future CEO - Pat Gelsinger stated that, "Today, optics is a niche technology. Tomorrow, it's the mainstream of every chip that we build." In 2010 Intel demonstrated a 50 Gbit/s connection made with silicon photonics.
The first microprocessor with optical input/output (I/O) was demonstrated in December 2015 using an approach known as "zero-change" CMOS photonics. This is known as fiber-to-the-processor.
This first demonstration was based on a 45 nm SOI node, and the bi-directional chip-to-chip link was operated at a rate of 2×2.5 Gbit/s. The total energy consumption of the link was calculated to be 16 pJ/b and was dominated by the contribution of the off-chip laser.
Some researchers believe an on-chip laser source is required. Others think that it should remain off-chip because of thermal problems (quantum efficiency decreases with temperature, and computer chips are generally hot) and because of CMOS-compatibility issues. One such device is the hybrid silicon laser, in which the silicon is bonded to a different semiconductor (such as indium phosphide) as the lasing medium. Other devices include the all-silicon Raman laser and all-silicon Brillouin lasers, in which silicon itself serves as the lasing medium.
In 2012, IBM announced that it had achieved optical components at the 90 nanometer scale that can be manufactured using standard techniques and incorporated into conventional chips. In September 2013, Intel announced technology to transmit data at speeds of 100 gigabits per second along a cable approximately five millimeters in diameter for connecting servers inside data centers. Conventional PCI-E data cables carry data at up to eight gigabits per second, while networking cables reach 40 Gbit/s. The latest version of the USB standard tops out at ten Gbit/s. The technology does not directly replace existing cables in that it requires a separate circuit board to interconvert electrical and optical signals. Its advanced speed offers the potential of reducing the number of cables that connect blades on a rack and even of separating processor, storage and memory into separate blades to allow more efficient cooling and dynamic configuration.
Graphene photodetectors have the potential to surpass germanium devices in several important aspects, although they remain about one order of magnitude behind current generation capacity, despite rapid improvement. Graphene devices can work at very high frequencies, and could in principle reach higher bandwidths. Graphene can absorb a broader range of wavelengths than germanium. That property could be exploited to transmit more data streams simultaneously in the same beam of light. Unlike germanium detectors, graphene photodetectors do not require applied voltage, which could reduce energy needs. Finally, graphene detectors in principle permit a simpler and less expensive on-chip integration. However, graphene does not strongly absorb light. Pairing a silicon waveguide with a graphene sheet better routes light and maximizes interaction. The first such device was demonstrated in 2011. Manufacturing such devices using conventional manufacturing techniques has not been demonstrated.
Optical routers and signal processors
Another application of silicon photonics is in signal routers for optical communication. Construction can be greatly simplified by fabricating the optical and electronic parts on the same chip, rather than having them spread across multiple components. A wider aim is all-optical signal processing, whereby tasks which are conventionally performed by manipulating signals in electronic form are done directly in optical form. An important example is all-optical switching, whereby the routing of optical signals is directly controlled by other optical signals. Another example is all-optical wavelength conversion.
In 2013, a startup company named "Compass-EOS", based in California and in Israel, was the first to present a commercial silicon-to-photonics router.
Long range telecommunications using silicon photonics
Silicon microphotonics can potentially increase the Internet's bandwidth capacity by providing micro-scale, ultra low power devices. Furthermore, the power consumption of datacenters may be significantly reduced if this is successfully achieved. Researchers at Sandia, Kotura, NTT, Fujitsu and various academic institutes have been attempting to prove this functionality. A 2010 paper reported on a prototype 80 km, 12.5 Gbit/s transmission using microring silicon devices.
Light-field displays
As of 2015, US startup company Magic Leap is working on a light-field chip using silicon photonics for the purpose of an augmented reality display.
Artificial intelligence
Silicon photonics has been used in artificial intelligence inference processors that are more energy efficient than those using conventional transistors. This can be done using Mach-Zehnder interferometers (MZIs), which can be combined with nanoelectromechanical systems to modulate the light passing through them by physically bending the MZI, which changes the phase of the light.
Physical properties
Optical guiding and dispersion tailoring
Silicon is transparent to infrared light with wavelengths above about 1.1 micrometres. Silicon also has a very high refractive index, of about 3.5. The tight optical confinement provided by this high index allows for microscopic optical waveguides, which may have cross-sectional dimensions of only a few hundred nanometers. Single mode propagation can be achieved, thus (like single-mode optical fiber) eliminating the problem of modal dispersion.
The strong dielectric boundary effects that result from this tight confinement substantially alter the optical dispersion relation. By selecting the waveguide geometry, it is possible to tailor the dispersion to have desired properties, which is of crucial importance to applications requiring ultrashort pulses. In particular, the group velocity dispersion (that is, the extent to which group velocity varies with wavelength) can be closely controlled. In bulk silicon at 1.55 micrometres, the group velocity dispersion (GVD) is normal in that pulses with longer wavelengths travel with higher group velocity than those with shorter wavelength. By selecting a suitable waveguide geometry, however, it is possible to reverse this, and achieve anomalous GVD, in which pulses with shorter wavelengths travel faster. Anomalous dispersion is significant, as it is a prerequisite for soliton propagation, and modulational instability.
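In standard notation, writing β(ω) for the propagation constant of the guided mode, the group velocity dispersion and the dispersion parameter used above are:

$$ \beta_2 = \frac{d^2\beta}{d\omega^2} = \frac{d}{d\omega}\!\left(\frac{1}{v_g}\right), \qquad D = -\frac{2\pi c}{\lambda^2}\,\beta_2 , $$

with β₂ > 0 (D < 0) corresponding to the normal regime and β₂ < 0 (D > 0) to the anomalous regime obtained by waveguide engineering.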
In order for the silicon photonic components to remain optically independent from the bulk silicon of the wafer on which they are fabricated, it is necessary to have a layer of intervening material. This is usually silica, which has a much lower refractive index (of about 1.44 in the wavelength region of interest), and thus light at the silicon-silica interface will (like light at the silicon-air interface) undergo total internal reflection, and remain in the silicon. This construct is known as silicon on insulator. It is named after the technology of silicon on insulator in electronics, whereby components are built upon a layer of insulator in order to reduce parasitic capacitance and so improve performance. Silicon photonics have also been built with silicon nitride as the material in the optical waveguides.
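As a rough illustration (using the bulk index values quoted above, so only an estimate), the critical angle for total internal reflection at the silicon–silica boundary is:

$$ \theta_c = \arcsin\!\left(\frac{n_{\mathrm{SiO_2}}}{n_{\mathrm{Si}}}\right) \approx \arcsin\!\left(\frac{1.44}{3.5}\right) \approx 24^\circ , $$

so rays striking the interface at angles of incidence greater than about 24° from the normal are totally internally reflected and remain in the silicon.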
Kerr nonlinearity
Silicon has a focusing Kerr nonlinearity, in that the refractive index increases with optical intensity. This effect is not especially strong in bulk silicon, but it can be greatly enhanced by using a silicon waveguide to concentrate light into a very small cross-sectional area. This allows nonlinear optical effects to be seen at low powers. The nonlinearity can be enhanced further by using a slot waveguide, in which the high refractive index of the silicon is used to confine light into a central region filled with a strongly nonlinear polymer.
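Quantitatively, the Kerr effect and the waveguide enhancement can be summarized as follows, where n₂ is the Kerr coefficient, I the optical intensity, A_eff the effective mode area and λ the wavelength (the numerical value of n₂ for silicon, roughly 5×10⁻¹⁸ m²/W near 1.55 micrometres, is an approximate literature figure):

$$ n(I) = n_0 + n_2 I, \qquad \gamma = \frac{2\pi n_2}{\lambda\, A_{\mathrm{eff}}} , $$

so shrinking A_eff from the roughly 80 µm² of a standard fiber to well below 1 µm² in a silicon wire raises the nonlinear parameter γ by orders of magnitude for the same n₂.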
Kerr nonlinearity underlies a wide variety of optical phenomena. One example is four wave mixing, which has been applied in silicon to realise optical parametric amplification, parametric wavelength conversion, and frequency comb generation.
Kerr nonlinearity can also cause modulational instability, in which it reinforces deviations from an optical waveform, leading to the generation of spectral-sidebands and the eventual breakup of the waveform into a train of pulses. Another example (as described below) is soliton propagation.
Two-photon absorption
Silicon exhibits two-photon absorption (TPA), in which a pair of photons can act to excite an electron-hole pair. This process is related to the Kerr effect, and by analogy with complex refractive index, can be thought of as the imaginary-part of a complex Kerr nonlinearity. At the 1.55 micrometre telecommunication wavelength, this imaginary part is approximately 10% of the real part.
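Schematically, TPA adds an intensity-squared loss term to the propagation of the intensity I along the waveguide, and the TPA coefficient β_TPA can be read as the imaginary counterpart of the Kerr coefficient n₂:

$$ \frac{dI}{dz} = -\alpha I - \beta_{\mathrm{TPA}} I^{2}, \qquad n_2^{\,\mathrm{(imag)}} = \frac{\beta_{\mathrm{TPA}}\,\lambda}{4\pi} , $$

with the ratio n₂^(imag)/n₂ corresponding to the roughly 10% figure quoted above for silicon at 1.55 micrometres.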
The influence of TPA is highly disruptive, as it both wastes light, and generates unwanted heat. It can be mitigated, however, either by switching to longer wavelengths (at which the TPA to Kerr ratio drops), or by using slot waveguides (in which the internal nonlinear material has a lower TPA to Kerr ratio). Alternatively, the energy lost through TPA can be partially recovered (as is described below) by extracting it from the generated charge carriers.
Free charge carrier interactions
The free charge carriers within silicon can both absorb photons and change its refractive index. This is particularly significant at high intensities and for long durations, due to the carrier concentration being built up by TPA. The influence of free charge carriers is often (but not always) unwanted, and various means have been proposed to remove them. One such scheme is to implant the silicon with helium in order to enhance carrier recombination. A suitable choice of geometry can also be used to reduce the carrier lifetime. Rib waveguides (in which the waveguides consist of thicker regions in a wider layer of silicon) enhance both the carrier recombination at the silica-silicon interface and the diffusion of carriers from the waveguide core.
A more advanced scheme for carrier removal is to integrate the waveguide into the intrinsic region of a PIN diode, which is reverse biased so that the carriers are attracted away from the waveguide core. A more sophisticated scheme still, is to use the diode as part of a circuit in which voltage and current are out of phase, thus allowing power to be extracted from the waveguide. The source of this power is the light lost to two photon absorption, and so by recovering some of it, the net loss (and the rate at which heat is generated) can be reduced.
As is mentioned above, free charge carrier effects can also be used constructively, in order to modulate the light.
Second-order nonlinearity
Second-order nonlinearities cannot exist in bulk silicon because of the centrosymmetry of its crystalline structure. By applying strain however, the inversion symmetry of silicon can be broken. This can be obtained for example by depositing a silicon nitride layer on a thin silicon film.
Second-order nonlinear phenomena can be exploited for optical modulation, spontaneous parametric down-conversion, parametric amplification, ultra-fast optical signal processing and mid-infrared generation. Efficient nonlinear conversion however requires phase matching between the optical waves involved. Second-order nonlinear waveguides based on strained silicon can achieve phase matching by dispersion-engineering.
So far, however, experimental demonstrations are based only on designs which are not phase matched.
It has been shown that phase matching can be obtained as well in silicon double slot waveguides coated with a highly nonlinear organic cladding
and in periodically strained silicon waveguides.
The Raman effect
Silicon exhibits the Raman effect, in which a photon is exchanged for a photon with a slightly different energy, corresponding to an excitation or a relaxation of the material. Silicon's Raman transition is dominated by a single, very narrow frequency peak, which is problematic for broadband phenomena such as Raman amplification, but is beneficial for narrowband devices such as Raman lasers. Early studies of Raman amplification and Raman lasers started at UCLA, which led to the demonstration of net-gain silicon Raman amplifiers and a silicon pulsed Raman laser with a fiber resonator (Optics Express, 2004). Subsequently, all-silicon Raman lasers were fabricated in 2005.
The Brillouin effect
In the Raman effect, photons are red- or blue-shifted by optical phonons with a frequency of about 15 THz. However, silicon waveguides also support acoustic phonon excitations. The interaction of these acoustic phonons with light is called Brillouin scattering. The frequencies and mode shapes of these acoustic phonons are dependent on the geometry and size of the silicon waveguides, making it possible to produce strong Brillouin scattering at frequencies ranging from a few MHz to tens of GHz. Stimulated Brillouin scattering has been used to make narrowband optical amplifiers as well as all-silicon Brillouin lasers. The interaction between photons and acoustic phonons is also studied in the field of cavity optomechanics, although 3D optical cavities are not necessary to observe the interaction. For instance, besides in silicon waveguides the optomechanical coupling has also been demonstrated in fibers and in chalcogenide waveguides.
Solitons
The evolution of light through silicon waveguides can be approximated with a cubic Nonlinear Schrödinger equation, which is notable for admitting sech-like soliton solutions. These optical solitons (which are also known in optical fiber) result from a balance between self phase modulation (which causes the leading edge of the pulse to be redshifted and the trailing edge blueshifted) and anomalous group velocity dispersion. Such solitons have been observed in silicon waveguides, by groups at the universities of Columbia, Rochester, and Bath.
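In the standard form of that equation (envelope A(z,T), retarded time T, group velocity dispersion β₂, nonlinear parameter γ, loss neglected), the balance described above reads:

$$ i\,\frac{\partial A}{\partial z} - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2} + \gamma\,|A|^2 A = 0 , $$

whose fundamental (sech-shaped) soliton A(0,T) = √P₀ sech(T/T₀) exists in the anomalous regime β₂ < 0 when the soliton number N satisfies N² = γP₀T₀²/|β₂| = 1.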
See also
Photonic integrated circuit
Optical computing
Silicon Photonics Cloud
References
Nonlinear optics
Photonics
Silicon | Silicon photonics | Materials_science | 3,757 |
3,986,527 | https://en.wikipedia.org/wiki/Regsvr32 | In computing, regsvr32 (Register Server) is a command-line utility in Microsoft Windows and ReactOS for registering and unregistering DLLs and ActiveX controls in the operating system Registry. Despite the suffix "32" in the name of the file, there are both 32-bit and 64-bit versions of this utility (with identical names, but in different directories). regsvr32 requires elevated privileges.
To be used with regsvr32, a DLL must export the functions DllRegisterServer and DllUnregisterServer.
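A minimal sketch of what those two entry points can look like in C (illustrative only: the empty bodies and the idea of returning S_OK unconditionally are placeholders, and the functions must additionally be exported, for example through a .def file, so that regsvr32 can find them with GetProcAddress):

```c
/* Minimal self-registration skeleton; regsvr32 loads the DLL and calls these. */
#include <windows.h>

/* Invoked by "regsvr32 example.dll". A real server would create its
   CLSID/ProgID keys under HKEY_CLASSES_ROOT here and return an error
   HRESULT if that fails. */
STDAPI DllRegisterServer(void)
{
    return S_OK;
}

/* Invoked by "regsvr32 /u example.dll". A real server would delete the
   registry keys created by DllRegisterServer here. */
STDAPI DllUnregisterServer(void)
{
    return S_OK;
}
```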
The regsvr32 command is comparable to ldconfig in Linux.
Example usage
regsvr32 shmedia.dll for registering a file
regsvr32 shmedia.dll /s for registering a file without the dialog box ( silent )
regsvr32 /u shmedia.dll for unregistering a file
regsvr32 shmedia.dll /u /s for unregistering a file without the dialog box ( silent )
If another copy of shmedia.dll exists in the system search path, regsvr32 may choose that copy instead of the one in the current directory. This problem can usually be solved by specifying a full path (e.g., c:\windows\system32\shmedia.dll) or using the following syntax:
regsvr32 .\shmedia.dll
References
Further reading
External links
Microsoft TechNet Regsvr32 article
Explanation of Regsvr32 Usage and Error Messages
C# Frequently Asked Questions: What is the equivalent to regsvr32 in .NET?
Windows administration
Windows commands | Regsvr32 | Technology | 350 |
74,980,742 | https://en.wikipedia.org/wiki/HydrogenPro | HydrogenPro is a technology company and an OEM for high-pressure alkaline electrolyser systems for large-scale green hydrogen plants.
HydrogenPro was founded in 2013 by individuals with a background in the electrolysis industry which was
established in Telemark, Norway by Norsk Hydro in 1927.
History
HydrogenPro was founded in 2013 by a team with experience from the electrolyser industry at Norsk Hydro, and has delivered the largest electrolyser plant in Northern Europe.
In 2020, HydrogenPro was listed on Euronext Growth, with key investors such as Mitsubishi taking part in the IPO. HydrogenPro also set up a joint venture with electrolyser producer THM, gaining control over all IP and electrolyser technology.
In 2022, HydrogenPro secured a landmark purchase order for a 10-year service and support agreement from Mitsubishi (220 MW in Utah, US). In the same year, the company was listed on the main list of Oslo Børs.
In 2023, HydrogenPro entered into a partnership with ANDRITZ to collaborate on scaling up the manufacturing and assembly of electrolysers.
References
Hydrogen technologies
Electrolysis
Hydrogen production
Technology companies | HydrogenPro | Chemistry | 231 |
78,246,878 | https://en.wikipedia.org/wiki/Metarhiziopsis | Metarhiziopsis is a genus of fungus in the family Clavicipitaceae.
Species in this genus include:
Metarhiziopsis microspora
References
Hypocreales genera
Clavicipitaceae | Metarhiziopsis | Biology | 47 |
5,900,914 | https://en.wikipedia.org/wiki/Phosphorylcholine | Phosphorylcholine (abbreviated ChoP) is the hydrophilic polar head group of some phospholipids, composed of a negatively charged phosphate bonded to a small, positively charged choline group. Phosphorylcholine is part of platelet-activating factor, of the phospholipid phosphatidylcholine, and of sphingomyelin, the only phospholipid of the membrane that is not built on a glycerol backbone. Treatment of cell membranes, like those of RBCs, by certain enzymes, such as some phospholipase A2, renders the phosphorylcholine moiety exposed to the external aqueous phase and thus accessible for recognition by the immune system. Antibodies against phosphorylcholine are naturally occurring autoantibodies created by CD5+/B-1 B cells and are referred to as non-pathogenic autoantibodies.
Thrombus-resistant stents
In interventional cardiology, phosphorylcholine is used as a synthetic polymer-based coating applied to drug-eluting stents to prevent the occurrence of coronary artery restenosis. The first application of this approach to stents evolved from efforts by Hayward, Chapman et al., who showed that the phosphorylcholine component of the outer surface of the erythrocyte bilayer was non-thrombogenic. By 2002, over 120,000 phosphorylcholine-coated stents had been implanted in patients with no apparent long-term deleterious effect compared to bare metal stent technologies.
Phosphorylcholine polymer-based drug-eluting stents
Drug-eluting stents (DES) are used by interventional cardiologists operating on patients with coronary artery disease. The stent is inserted into the artery via balloon angioplasty. This dilates the coronary artery and keeps it at this diameter so that more blood flows through the artery without the risk of blood clots (atherosclerosis).
Phosphorylcholine is used as the polymer-based coating of a DES because its molecular design improves surface biocompatibility and lowers the risk of causing inflammation or thrombosis. Polymer coatings of stents that deliver the anti-proliferative drug Zotarolimus to the arterial vessel wall are components of these medical devices. For targeted local delivery of Zotarolimus to the artery, the drug is incorporated into a methacrylate-based copolymer that includes a synthetic form of phosphorylcholine. This use of biomimicry, or the practice of using polymers that occur naturally in biology, provides a coating with minimal thrombus deposition and no adverse clinical effect on late healing of the arterial vessel wall. Not only is the coating non-thrombogenic, but it also exhibits other features that should be present when applying such a material to a medical device for long-term implantation. These include durability, neutrality to the chemistry of the incorporated drug and ability for sterilization using standard methods which do not affect drug structure or efficiency
See also
Citicoline
Acetylcholine
References
External links
Phosphocholine - C5H15NO4P+ at PubChem
Cholinergics
Phospholipids | Phosphorylcholine | Chemistry | 731 |
18,950,900 | https://en.wikipedia.org/wiki/Brand | A brand is a name, term, design, symbol or any other feature that distinguishes one seller's good or service from those of other sellers. Brands are used in business, marketing, and advertising for recognition and, importantly, to create and store value as brand equity for the object identified, to the benefit of the brand's customers, its owners and shareholders. Brand names are sometimes distinguished from generic or store brands.
The practice of branding—in the original literal sense of marking by burning—is thought to have begun with the ancient Egyptians, who are known to have engaged in livestock branding and branded slaves as early as 2,700 BCE. Branding was used to differentiate one person's cattle from another's by means of a distinctive symbol burned into the animal's skin with a hot branding iron. If a person stole any of the cattle, anyone else who saw the symbol could deduce the actual owner. The term has been extended to mean a strategic personality for a product or company, so that "brand" now suggests the values and promises that a consumer may perceive and buy into. Over time, the practice of branding objects extended to a broader range of packaging and goods offered for sale including oil, wine, cosmetics, and fish sauce and, in the 21st century, extends even further into services (such as legal, financial and medical), political parties and people's stage names.
In the modern era, the concept of branding has expanded to include deployment by a manager of the marketing and communication techniques and tools that help to distinguish a company or products from competitors, aiming to create a lasting impression in the minds of customers. The key components that form a brand's toolbox include a brand's identity, personality, product design, brand communication (such as by logos and trademarks), brand awareness, brand loyalty, and various branding (brand management) strategies. Many companies believe that there is often little to differentiate between several types of products in the 21st century, hence branding is among a few remaining forms of product differentiation.
Brand equity is the measurable totality of a brand's worth and is validated by observing the effectiveness of these branding components. When a customer is familiar with a brand or favors it incomparably over its competitors, a corporation has reached a high level of brand equity. Brand owners manage their brands carefully to create shareholder value. Brand valuation is a management technique that ascribes a monetary value to a brand.
Etymology
The word brand, originally meaning a burning piece of wood, comes from Middle English brand, meaning "torch", which derives from Old English brand. It later came also to mean the mark made by burning with a branding iron.
History
Branding and labeling have an ancient history. Branding probably began with the practice of branding livestock to deter theft. Images of the branding of cattle occur in ancient Egyptian tombs dating to around 2,700 BCE. Over time, purchasers realized that the brand provided information about origin as well as about ownership, and could serve as a guide to quality. Branding was adapted by farmers, potters, and traders for use on other types of goods such as pottery and ceramics. Forms of branding or proto-branding emerged spontaneously and independently throughout Africa, Asia and Europe at different times, depending on local conditions. Seals, which acted as quasi-brands, have been found on early Chinese products of the Qin dynasty (221-206 BCE); large numbers of seals survive from the Harappan civilization of the Indus Valley (3,300–1,300 BCE) where the local community depended heavily on trade; cylinder seals came into use in Ur in Mesopotamia in around 3,000 BCE, and facilitated the labelling of goods and property; and the use of maker's marks on pottery was commonplace in both ancient Greece and Rome. Identity marks, such as stamps on ceramics, were also used in ancient Egypt.
Diana Twede has argued that the "consumer packaging functions of protection, utility and communication have been necessary whenever packages were the object of transactions". She has shown that amphorae used in Mediterranean trade between 1,500 and 500 BCE exhibited a wide variety of shapes and markings, which consumers used to glean information about the type of goods and the quality. The systematic use of stamped labels dates from around the fourth century BCE. In largely pre-literate society, the shape of the amphora and its pictorial markings conveyed information about the contents, region of origin and even the identity of the producer, which were understood to convey information about product quality. David Wengrow has argued that branding became necessary following the urban revolution in ancient Mesopotamia in the 4th century BCE, when large-scale economies started mass-producing commodities such as alcoholic drinks, cosmetics and textiles. These ancient societies imposed strict forms of quality-control over commodities, and also needed to convey value to the consumer through branding. Producers began by attaching simple stone seals to products which, over time, gave way to clay seals bearing impressed images, often associated with the producer's personal identity thus giving the product a personality. Not all historians agree that these markings are comparable with modern brands or labels, with some suggesting that the early pictorial brands or simple thumbprints used in pottery should be termed proto-brands while other historians argue that the presence of these simple markings does not imply that mature brand management practices operated.
Scholarly studies have found evidence of branding, packaging, and labeling in antiquity. Archaeological evidence of potters' stamps has been found across the breadth of the Roman Empire and in ancient Greece. Stamps were used on bricks, pottery, and storage containers as well as on fine ceramics. Pottery marking had become commonplace in ancient Greece by the 6th century BCE. A vase manufactured around 490 BCE bears the inscription "Sophilos painted me", indicating that the object was both fabricated and painted by a single potter. Branding may have been necessary to support the extensive trade in such pots. For example, 3rd-century Gaulish pots bearing the names of well-known potters and the place of manufacture (such as Attianus of Lezoux, Tetturo of Lezoux and Cinnamus of Vichy) have been found as far away as Essex and Hadrian's Wall in England. English potters based at Colchester and Chichester used stamps on their ceramic wares by the 1st century CE. The use of hallmarks, a type of brand, on precious metals dates to around the 4th century CE. A series of five marks occurs on Byzantine silver dating from this period.
Some of the earliest uses of maker's marks, dating to about 1,300 BCE, have been found in India. The oldest generic brand in continuous use, known in India since the Vedic period ( to 500 BCE), is the herbal paste known as chyawanprash, consumed for its purported health benefits and attributed to a revered rishi (or seer) named Chyawan. One well-documented early example of a highly developed brand is that of White Rabbit sewing needles, dating from China's Song dynasty (960 to 1127 CE). A copper printing plate used to print posters contained a message which roughly translates as: "Jinan Liu's Fine Needle Shop: We buy high-quality steel rods and make fine-quality needles, to be ready for use at home in no time." The plate also includes a trademark in the form of a "White Rabbit", which signified good luck and was particularly relevant to women, who were the primary purchasers. Details in the image show a white rabbit crushing herbs, and text includes advice to shoppers to look for the stone white rabbit in front of the maker's shop.
In ancient Rome, a commercial brand or inscription applied to objects offered for sale was known as a titulus pictus. The inscription typically specified information such as place of origin, destination, type of product and occasionally quality claims or the name of the manufacturer. Roman marks or inscriptions were applied to a very wide variety of goods, including pots, ceramics, amphorae (storage/shipping containers) and factory-produced oil lamps. Carbonized loaves of bread, found at Herculaneum, indicate that some bakers stamped their bread with the producer's name. Roman glassmakers branded their works, with the name of Ennion appearing most prominently.
One merchant that made good use of the titulus pictus was Umbricius Scaurus, a manufacturer of fish sauce (also known as garum) in Pompeii. Mosaic patterns in the atrium of his house feature images of amphorae bearing his personal brand and quality claims. The mosaic depicts four different amphorae, one at each corner of the atrium, bearing labels as follows:
1. G(ari) F(los) SCO[m]/ SCAURI/ EX OFFI[ci]/NA SCAU/RI (translated as: "The flower of garum, made of the mackerel, a product of Scaurus, from the shop of Scaurus")
2. LIQU[minis]/ FLOS (translated as: "The flower of Liquamen")
3. G[ari] F[los] SCOM[bri]/ SCAURI (translated as: "The flower of garum, made of the mackerel, a product of Scaurus")
4. LIQUAMEN/ OPTIMUM/ EX OFFICI[n]/A SCAURI (translated as: "The best liquamen, from the shop of Scaurus")
Scaurus' fish sauce was known by people across the Mediterranean to be of very high quality, and its reputation traveled as far away as modern France. In both Pompeii and nearby Herculaneum, archaeological evidence also indicates that branding and labeling were in relatively common use across a broad range of goods. Wine jars, for example, were stamped with names such as "Lassius" and "L. Eumachius", probably references to the name of the producer.
The use of identity marks on products declined following the fall of the Roman Empire. In the European Middle Ages, heraldry developed a language of visual symbolism which would feed into the evolution of branding,
and with the rise of the merchant guilds the use of marks resurfaced and was applied to specific types of goods. By the 13th century, the use of maker's marks had become evident on a broad range of goods. In 1266, makers' marks on bread became compulsory in England. The Italians used brands in the form of watermarks on paper in the 13th century. Blind stamps, hallmarks, and silver-makers' marks—all types of brand—became widely used across Europe during this period. Hallmarks, although known from the 4th century, especially in Byzantium, only came into general use during the medieval period. British silversmiths introduced hallmarks for silver in 1300.
Some brands still in existence date from the period of mass production in the 17th, 18th, and 19th centuries. Bass Brewery, the British brewery founded in 1777, became a pioneer in international brand marketing. Many years before 1855, Bass applied a red triangle to casks of its pale ale, and in 1876 its red-triangle brand became the first registered trademark issued by the British government. Guinness World Records recognizes Tate & Lyle (of Lyle's Golden Syrup) as having Britain's, and the world's, oldest branding and packaging, with its green-and-gold packaging having remained almost unchanged since 1885. Twinings tea has used the same logo, a capitalized font beneath a lion crest, since 1787, making it the world's oldest logo in continuous use.
A characteristic feature of 19th-century mass-marketing was the widespread use of branding, originating with the advent of packaged goods. Industrialization moved the production of many household items, such as soap, from local communities to centralized factories. When shipping their items, the factories would literally brand their logo or company insignia on the barrels used, effectively using a corporate trademark as a quasi-brand.
Factories established following the Industrial Revolution introduced mass-produced goods and needed to sell their products to a wider market—that is, to customers previously familiar only with locally produced goods. It became apparent that a generic package of soap had difficulty competing with familiar, local products. Packaged-goods manufacturers needed to convince the market that the public could place just as much trust in the non-local product. Gradually, manufacturers began using personal identifiers to differentiate their goods from generic products on the market. Marketers generally began to realize that brands, to which personalities were attached, outsold rival brands. By the 1880s, large manufacturers had learned to imbue their brands' identity with personality traits such as youthfulness, fun, sex appeal, luxury or the "cool" factor. This began the modern practice now known as branding, where the consumers buy the brand instead of the product and rely on the brand name instead of a retailer's recommendation.
The process of giving a brand "human" characteristics represented, at least in part, a response to consumer concerns about mass-produced goods. The Quaker Oats Company began using the image of the Quaker Man in place of a trademark from the late 1870s, with great success. Pears' soap, Campbell's soup, Coca-Cola, Juicy Fruit chewing gum and Aunt Jemima pancake mix were also among the first products to be "branded" in an effort to increase the consumer's familiarity with the product's merits. Other brands which date from that era, such as Ben's Original rice and Kellogg's breakfast cereal, furnish illustrations of the trend.
By the early 1900s, trade press publications, advertising agencies, and advertising experts began producing books and pamphlets exhorting manufacturers to bypass retailers and to advertise directly to consumers with strongly branded messages. Around 1900, advertising guru James Walter Thompson published a house advertisement explaining trademark advertising. This was an early commercial explanation of what scholars now recognize as modern branding and the beginnings of brand management. This trend continued to the 1980s, and is quantified by marketers in concepts such as brand value and brand equity. Naomi Klein has described this development as "brand equity mania". In 1988, for example, Philip Morris Companies purchased Kraft Foods Inc. for six times what the company was worth on paper. Business analysts reported that what they really purchased was the brand name.
With the rise of mass media in the early 20th century, companies adopted techniques that allowed their messages to stand out. Slogans, mascots, and jingles began to appear on radio in the 1920s and in early television in the 1930s. Soap manufacturers sponsored many of the earliest radio drama series, and the genre became known as soap opera.
By the 1940s, manufacturers began to recognize the way in which consumers had started to develop relationships with their brands in a social/psychological/anthropological sense. Advertisers began to use motivational research and consumer research to gather insights into consumer purchasing. Strong branded campaigns for Chrysler and Exxon/Esso, using insights drawn from research into psychology and cultural anthropology, led to some of the most enduring campaigns of the 20th century. Brand advertisers began to imbue goods and services with a personality, based on the insight that consumers searched for brands with personalities that matched their own.
Concepts
Effective branding, attached to strong brand values, can result in higher sales of not only one product, but of other products associated with that brand. If a customer loves Pillsbury biscuits and trusts the brand, he or she is more likely to try other products offered by the company, such as chocolate-chip cookies. Brand development, often performed by a design team, takes time to produce.
Brand names and trademarks
A brand name is the part of a brand that can be spoken or written and identifies a product, service or company and sets it apart from other comparable products within a category. A brand name may include words, phrases, signs, symbols, designs, or any combination of these elements. For consumers, a brand name is a "memory heuristic": a convenient way to remember preferred product choices. A brand name is not to be confused with a trademark which refers to the brand name or part of a brand that is legally protected. For example, Coca-Cola not only protects the brand name, Coca-Cola, but also protects the distinctive Spencerian script and the contoured shape of the bottle.
Corporate brand identity
Brand identity is a collection of individual components, such as a name, a design, a set of images, a slogan, a vision, a writing style, a particular font or a symbol, which together set the brand apart from others.
For a company to exude a strong sense of brand identity, it must have an in-depth understanding of its target market, competitors and the surrounding business environment. Brand identity includes both the core identity and the extended identity. The core identity reflects consistent long-term associations with the brand; whereas the extended identity involves the intricate details of the brand that help generate a constant motif.
According to Kotler et al. (2009), a brand's identity may deliver four levels of meaning:
attributes
benefits
values
personality
A brand's attributes are a set of labels with which the corporation wishes to be associated. For example, a brand may showcase its primary attribute as environmental friendliness. However, a brand's attributes alone are not enough to persuade a customer to purchase the product. These attributes must be communicated through benefits, which are more emotional translations. If a brand's attribute is being environmentally friendly, customers will receive the benefit of feeling that they are helping the environment by associating with the brand. Aside from attributes and benefits, a brand's identity may also involve branding to focus on representing its core set of values. If a company is seen to symbolize specific values, it will, in turn, attract customers who also believe in these values. For example, Nike's brand represents the value of a "just do it" attitude. Thus, this form of brand identification attracts customers who also share this same value. Even more extensive than its perceived values is a brand's personality. Quite literally, one can easily describe a successful brand identity as if it were a person. This form of brand identity has proven to be the most advantageous in maintaining long-lasting relationships with consumers, as it gives them a sense of personal interaction with the brand. Collectively, all four forms of brand identification help to deliver a powerful meaning behind what a corporation hopes to accomplish, and to explain why customers should choose one brand over its competitors.
Brand personality
Brand personality refers to "the set of human personality traits that are both applicable to and relevant for brands." Marketers and consumer researchers often argue that brands can be imbued with human-like characteristics which resonate with potential consumers. Such personality traits can assist marketers in creating unique brands that are differentiated from rival brands. Aaker conceptualized brand personality as consisting of five broad dimensions, namely: sincerity (down-to-earth, honest, wholesome, and cheerful), excitement (daring, spirited, imaginative, and up to date), competence (reliable, intelligent, and successful), sophistication (glamorous, upper class, charming), and ruggedness (outdoorsy and tough). Subsequent research studies have suggested that Aaker's dimensions of brand personality are relatively stable across different industries, market segments and over time. Much of the literature on branding suggests that consumers prefer brands with personalities that are congruent with their own.
Consumers may distinguish the psychological aspect (brand associations like thoughts, feelings, perceptions, images, experiences, beliefs, attitudes, and so on that become linked to the brand) of a brand from the experiential aspect. The experiential aspect consists of the sum of all points of contact with the brand and is termed the consumer's brand experience. The brand is often intended to create an emotional response and recognition, leading to potential loyalty and repeat purchases. The brand experience is a brand's action perceived by a person. The psychological aspect, sometimes referred to as the brand image, is a symbolic construct created within the minds of people, consisting of all the information and expectations associated with a product, with a service, or with the companies providing them.
Marketers or product managers responsible for branding seek to develop or align the expectations behind the brand experience, creating the impression that a brand associated with a product or service has certain qualities or characteristics, which make it special or unique. A brand can, therefore, become one of the most valuable elements in an advertising theme, as it demonstrates what the brand owner is able to offer in the marketplace. This means that building a strong brand helps to distinguish a product from similar ones and differentiate it from competitors. The art of creating and maintaining a brand is called brand management. The orientation of an entire organization towards its brand is called brand orientation. Brand orientation develops in response to market intelligence.
Careful brand management seeks to make products or services relevant and meaningful to a target audience. Marketers tend to treat brands as more than the difference between the actual cost of a product and its selling price; rather brands represent the sum of all valuable qualities of a product to the consumer and are often treated as the total investment in brand building activities including marketing communications.
Consumers may look on branding as an aspect of products or services, as it often serves to denote a certain attractive quality or characteristic (see also brand promise). From the perspective of brand owners, branded products or services can command higher prices. Where two products resemble each other, but one of the products has no associated branding (such as a generic, store-branded product), potential purchasers may often select the more expensive branded product on the basis of the perceived quality of the brand or on the basis of the reputation of the brand owner.
Brand awareness
Brand awareness involves a customer's ability to recall and/or recognize brands, logos, and branded advertising. Brands help customers to understand which brands or products belong to which product or service category. Brands assist customers to understand the constellation of benefits offered by individual brands, and how a given brand within a category is differentiated from its competing brands, and thus the brand helps customers and potential customers understand which brand satisfies their needs. Thus, the brand offers the customer a short-cut to understanding the different product or service offerings that make up a particular category.
Brand awareness is a key step in the customer's purchase decision process, since some kind of awareness is a precondition to purchasing. That is, customers will not consider a brand if they are not aware of it. Brand awareness is a key component in understanding the effectiveness both of a brand's identity and of its communication methods. Successful brands are those that consistently generate a high level of brand awareness, as this can be the pivotal factor in securing customer transactions. Various forms of brand awareness can be identified. Each form reflects a different stage in a customer's cognitive ability to address the brand in a given circumstance.
Marketers typically identify two distinct types of brand awareness; namely brand recall (also known as unaided recall or occasionally spontaneous recall) and brand recognition (also known as aided brand recall). These types of awareness operate in entirely different ways with important implications for marketing strategy and advertising.
Most companies aim for "top-of-mind" awareness, which occurs when a brand pops into a consumer's mind when they are asked to name brands in a product category. For example, when someone is asked to name a type of facial tissue, the common answer, "Kleenex", will represent a top-of-mind brand. Top-of-mind awareness is a special case of brand recall.
Brand recall (also known as unaided brand awareness or spontaneous awareness) refers to the brand or set of brands that a consumer can elicit from memory when prompted with a product category.
Brand recognition (also known as aided brand awareness) occurs when consumers see or read a list of brands, and express familiarity with a particular brand only after they hear or see it as a type of memory aid.
Strategic awareness occurs when a brand is not only top-of-mind to consumers, but also has distinctive qualities which consumers perceive as making it better than other brands in the particular market. The distinction(s) that set a product apart from the competition is/are also known as the unique selling point or USP.
Brand recognition
Brand recognition is one of the initial phases of brand awareness and validates whether or not a customer remembers being pre-exposed to the brand. Brand recognition (also known as aided brand recall) refers to consumers' ability to correctly differentiate a brand when they come into contact with it. This does not necessarily require consumers to identify or recall the brand name. When customers experience brand recognition, they are triggered by either a visual or verbal cue. For example, when looking to satisfy a category need such as toilet paper, the customer would first be presented with multiple brands to choose from. Once the customer is visually or verbally faced with a brand, they may remember being introduced to it before. When given a cue, consumers able to retrieve the memory node associated with the brand exhibit brand recognition. Often, this form of brand awareness assists customers in choosing one brand over another when faced with a low-involvement purchasing decision.
Brand recognition is often the mode of brand awareness that operates in retail shopping environments. When presented with a product at the point-of-sale, or after viewing its visual packaging, consumers are able to recognize the brand and may be able to associate it with attributes or meanings acquired through exposure to promotion or word-of-mouth referrals. In contrast to brand recall, where few consumers are able to spontaneously recall brand names within a given category, when prompted with a brand name, a larger number of consumers are typically able to recognize it.
Brand recognition is most successful when people can elicit recognition without being explicitly exposed to the company's name, but rather through visual signifiers like logos, slogans, and colors. For example, Disney successfully branded its particular script font (originally created for Walt Disney's "signature" logo), which it used in the logo for go.com.
Brand recall
Unlike brand recognition, brand recall (also known as unaided brand recall or spontaneous brand recall) is the customer's ability to retrieve the brand correctly from memory. Rather than being given a choice of multiple brands to satisfy a need, consumers are faced with a need first, and then must recall a brand from their memory to satisfy that need. This level of brand awareness is stronger than brand recognition, as the brand must be firmly cemented in the consumer's memory to enable unassisted remembrance. This gives the company a huge advantage over its competitors because the customer is already willing to buy, or at least knows of, the company's offering in the market. Thus, brand recall is a confirmation that previous branding touchpoints have successfully cemented themselves in the minds of consumers.
Marketing-mix modeling can help marketing leaders optimize how they spend marketing budgets to maximize the impact on brand awareness or on sales. Managing brands for value creation will often involve applying marketing-mix modeling techniques in conjunction with brand valuation.
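The article does not spell out what marketing-mix modeling involves; in its simplest form it regresses a sales series on per-channel marketing spend (usually adjusted for carry-over effects) so that budget can be shifted toward channels with higher estimated incremental returns. The following is a minimal, self-contained sketch of that idea only; the channel names, decay rates, and data are hypothetical illustrations and are not taken from the article or any specific vendor's methodology.

```python
# Toy marketing-mix model sketch (illustrative assumptions, not a production method):
# three hypothetical channels, geometric "adstock" carry-over, plain least-squares fit.
import numpy as np

def adstock(spend, decay):
    """Geometric carry-over: each week's effect includes a decayed share of the previous week's."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(0)
weeks = 104
tv, search, social = (rng.gamma(2.0, 50.0, weeks) for _ in range(3))  # simulated weekly spend

# Simulated sales: baseline + channel effects on adstocked spend + noise
X = np.column_stack([adstock(tv, 0.6), adstock(search, 0.2), adstock(social, 0.4)])
sales = 1000 + X @ np.array([1.5, 3.0, 0.8]) + rng.normal(0, 50, weeks)

# Ordinary least squares with an intercept; coefficients estimate each channel's contribution
A = np.column_stack([np.ones(weeks), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
print("baseline sales per week:", round(coef[0], 1))
for name, c in zip(["tv", "search", "social"], coef[1:]):
    print(f"{name}: {c:.2f} incremental sales per adstocked spend unit")
```

In practice such models add saturation curves, seasonality, and price or distribution controls, but the budget-allocation logic the paragraph describes reduces to comparing these per-channel coefficients.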
Brand elements
Brands typically comprise various elements, such as:
name: the word or words used to identify a company, product, service, or concept
logo: the visual trademark that identifies a brand
tagline or catchphrase: a short phrase always used in the product's advertising and closely associated with the brand
graphics: the "dynamic ribbon" is a trademarked part of Coca-Cola's brand
shapes: the distinctive shapes of the Coca-Cola bottle and of the Volkswagen Beetle are trademarked elements of those brands
colors: the instant recognition consumers have when they see Tiffany & Co.'s robin's egg blue (Pantone No. 1837). Tiffany & Co. trademarked the color in 1998.
sounds: a unique tune or set of notes can denote a brand. NBC's chimes provide a famous example.
scents: the rose-jasmine-musk scent of Chanel No. 5 is trademarked
tastes: Kentucky Fried Chicken has trademarked its special recipe of eleven herbs and spices for fried chicken
movements: Lamborghini has trademarked the upward motion of its car doors
Brand communication
Although brand identity is a fundamental asset to a brand's equity, the worth of a brand's identity would become obsolete without ongoing brand communication. Integrated marketing communications (IMC) relates to how a brand transmits a clear, consistent message to its stakeholders. Five key components comprise IMC:
Advertising
Sales promotions
Direct marketing
Personal selling
Public relations
The effectiveness of a brand's communication is determined by how accurately the customer perceives the brand's intended message through its IMC. Although IMC is a broad strategic concept, the most crucial brand communication elements are pinpointed to how the brand sends a message and what touch points the brand uses to connect with its customers [Chitty 2005].
One can break the traditional communication model into several consecutive steps:
Firstly, a source/sender wishes to convey a message to a receiver. This source must encode the intended message in a way that the receiver will potentially understand.
After the encoding stage, the forming of the message is complete and is portrayed through a selected channel. In IMC, channels may include media elements such as advertising, public relations, sales promotions, etc.
It is at this point that the message can often deviate from its original purpose, as the message must go through the process of being decoded, which can often lead to unintended misinterpretation.
Finally, the receiver retrieves the message and attempts to understand what the sender was aiming to render. Often, a message may be incorrectly received due to noise in the market, which is caused by "…unplanned static or distortion during the communication process".
The final stage of this process is when the receiver responds to the message, which is received by the original sender as feedback.
When a brand communicates a brand identity to a receiver, it runs the risk of the receiver incorrectly interpreting the message. Therefore, a brand should use appropriate communication channels to positively "…affect how the psychological and physical aspects of a brand are perceived".
In order for brands to effectively communicate to customers, marketers must "…consider all touch points, or sources of contact, that a customer has with the brand". Touch points represent the channel stage in the traditional communication model, where a message travels from the sender to the receiver. Any point where a customer has an interaction with the brand – whether watching a television advertisement, hearing about a brand through word of mouth or even noticing a branded license plate – defines a touchpoint. According to Dahlen et al. (2010), every touchpoint has the "…potential to add positive – or suppress negative – associations to the brand's equity". Thus, a brand's IMC should cohesively deliver positive messages through appropriate touch points associated with its target market. One methodology involves using sensory stimuli touch points to activate customer emotion. For example, if a brand consistently uses a pleasant smell as a primary touchpoint, the brand has a much higher chance of creating a positive lasting effect on its customers' senses as well as memory. Another way a brand can ensure that it is utilizing the best communication channel is by focusing on touchpoints that suit particular areas associated with customer experience. As suggested in Figure 2, certain touch points link with a specific stage in customer-brand involvement. For example, a brand may recognize that advertising touchpoints are most effective during the pre-purchase experience stage; therefore, it may target its advertisements to new customers rather than to existing customers. Overall, a brand has the ability to strengthen brand equity by using IMC branding communications through touchpoints.
Brand communication is important in ensuring brand success in the business world and refers to how businesses transmit their brand messages, characteristics and attributes to their consumers. One method of brand communication that companies can exploit involves electronic word-of-mouth (eWOM). eWOM is a relatively new approach to communicating with consumers [Phelps et al., 2004]. One popular method of eWOM involves social networking sites (SNSs) such as Twitter. A study found that consumers classed their relationship with a brand as closer if that brand was active on a specific social media site (Twitter). Research further found that the more consumers "retweeted" and communicated with a brand, the more they trusted the brand. This suggests that a company could look to employ a social-media campaign to gain consumer trust and loyalty as well as to communicate brand messages.
McKee (2014) also looked into brand communication and stated that when communicating a brand, a company should look to simplify its message, as this will lead to more value being portrayed as well as an increased chance of target consumers recalling and recognizing the brand.
In 2012, Riefler stated that if the company communicating a brand is a global organization or has future global aims, that company should look to employ a method of communication that is globally appealing to its consumers and subsequently choose a method of communication that will be internationally understood. One way a company can do this involves choosing a product or service's brand name, as this name will need to be suitable for the marketplace that it aims to enter.
If a company wishes to develop a global market, its name also needs to be suitable in different cultures and must not cause offense or be misunderstood. When communicating a brand, a company needs to be aware that it must not just visually communicate its brand message, and should take advantage of portraying its message through multi-sensory information. One article suggests that other senses, apart from vision, need to be targeted when trying to communicate a brand with consumers. For example, a jingle or background music can have a positive effect on brand recognition, purchasing behaviour and brand recall.
Therefore, when looking to communicate a brand with chosen consumers, companies should investigate a channel of communication that is most suitable for their short-term and long-term aims and should choose a method of communication that is most likely to reach their target consumers. The match-up between the product, the consumer lifestyle, and the endorser is important for the effectiveness of brand communication.
Global brand variables
Brand name
The term "brand name" is quite often used interchangeably with "brand", although it is more correctly used to specifically denote written or spoken linguistic elements of any product. In this context, a "brand name" constitutes a type of trademark, if the brand name exclusively identifies the brand owner as the commercial source of products or services. A brand owner may seek to protect proprietary rights in relation to a brand name through trademark registration – such trademarks are called "Registered Trademarks". Advertising spokespersons have also become part of some brands, for example: Mr. Whipple of Charmin toilet tissue and Tony the Tiger of Kellogg's Frosted Flakes. Putting a value on a brand by brand valuation or using marketing mix modeling techniques is distinct to valuing a trademark.
US government purchasing rules allow for federal departments to seek bids using a "brand name or equal" provision in their solicitation. Federal Acquisition Regulation 52.211-6 states that "where an item in [a] solicitation is identified as 'brand name or equal', the purchase description [should] reflect the characteristics and level of quality that will satisfy the Government's needs. The salient physical, functional, or performance characteristics that 'equal' products must meet are [to be] specified in the solicitation."
Types of brand names
Brand names come in many styles. These include:
initialism: a name made of initials, such as "UPS" or "IBM"
descriptive: names that describe a product benefit or function, such as "Whole Foods" or "Toys R' Us"
alliteration and rhyme: names that are fun to say and which stick in the mind, such as "Reese's Pieces" or "Dunkin' Donuts"
evocative: names that can evoke a vivid image, such as "Amazon" or "Crest"
neologisms: completely made-up words, such as "Wii" or "Häagen-Dazs"
foreign word: adoption of a word from another language, such as "Volvo"
founders' names: using the names of real people, (especially a founder's surname), such as "Hewlett-Packard", "Dell", "Disney", "Stussy" or "Mars"
geography: naming for regions and landmarks, such as "Cisco" or "Fuji Film"
personification: taking names from myths, such as "Nike"; or from the minds of ad execs, such as "Betty Crocker"
punny: some brands create their name by using a silly pun, such as "Lord of the Fries", "Wok on Water" or "Eggs Eggscetera"
portmanteau: combining multiple words together to create one, such as "Microsoft" ("microcomputer" and "software"), "Comcast" ("communications" and "broadcast"), "Evernote" ("forever" and "note"), "Vodafone" ("voice", "data", "telephone")
The act of associating a product or service with a brand has become part of pop culture. Most products have some kind of brand identity, from common table salt to designer jeans. A brandnomer is a brand name that has colloquially become a generic term for a product or service, such as Band-Aid, Nylon, or Kleenex—which are often used to describe any brand of adhesive bandage; any type of hosiery; or any brand of facial tissue respectively. Xerox, for example, has become synonymous with the word "copy".
Brand line
A brand line allows the introduction of various subtypes of a product under a common, ideally already established, brand name. Examples would be the individual Kinder chocolates by Ferrero SpA, the subtypes of Coca-Cola, or special editions of popular brands. See also brand extension.
In December 2013, the Open Knowledge Foundation created the BSIN (Brand Standard Identification Number). BSIN is universal and is used by the Open Product Data Working Group of the Open Knowledge Foundation to assign a brand to a product. The OKFN Brand repository is critical for the Open Data movement.
Brand identity
The expression of a brand – including its name, trademark, communications, and visual appearance – is brand identity. Because the identity is assembled by the brand owner, it reflects how the owner wants the consumer to perceive the brand – and by extension the branded company, organization, product or service. This is in contrast to the brand image, which is a customer's mental picture of a brand. The brand owner will seek to bridge the gap between the brand image and the brand identity. Brand identity is fundamental to consumer recognition and symbolizes the brand's differentiation from competitors. Brand identity is distinct from brand image.
Brand identity is what the owner wants to communicate to its potential consumers. However, over time, a product's brand identity may evolve, acquiring new attributes from the consumer's perspective that are not necessarily intended by the marketing communications the owner directs at targeted consumers. Therefore, businesses research consumers' brand associations.
The brand identity works as a guideline, as the frame in which a brand will evolve and define itself, or in the words of David Aaker, "…a unique set of brand associations that the brand strategist aspires to create or maintain."
According to Kapferer (2004), there are six facets to a brand's identity:
Physique: The physical characteristics and iconography of a brand (such as the Nike swoosh or the orange Pantone of easyJet).
Personality: The persona; how a brand communicates with its audience, expressed through its tone of voice and design assets and integrated into communication touchpoints in a coherent way.
Culture: The values and principles on which a brand bases its behaviour. For example, Google's flexible office hours and fun environment, intended so that employees feel happy and creative at work.
Reflection: The "stereotypical user" of the brand. A brand is likely to be purchased by several buyer's profiles but they will have a go-to person that they use in their campaigns. For example, Lou Yetu and the Parisian chic profile.
Relationship: The bond between a brand and its customers, and the customer's expectations of the brand (the experience beyond the tangible product). Warranties or services during and after purchase, for example, help maintain a sustainable relationship and keep the consumer's trust.
Self-image: How a brand's customers portray their ideal self; how they want to look and behave, and what they aspire to. Brands can target their messaging accordingly and make the brand's aspirations reflect the customers' own.
Brand image
Visual brand identity
Color is a particularly important element of visual brand identity and color mapping provides an effective way of ensuring color contributes to differentiation in a visually cluttered marketplace.
Brand trust
Brand trust is the intrinsic 'believability' that any entity evokes. In the commercial world, the intangible aspect of brand trust impacts the behavior and performance of its business stakeholders in many intriguing ways. It creates the foundation of a strong brand connect with all stakeholders, converting simple awareness to strong commitment.
Brand parity
Brand parity is the perception of customers that some brands are equivalent. This means that shoppers will purchase within a group of accepted brands rather than choosing one specific brand. Cranfield management professor Christopher Martin has referred to research confirming that consumers choose from a "portfolio of brands", and that factors such as availability will be a major determinant of actual choice.
When brand parity operates, quality is often not a major concern because consumers believe that only minor quality differences exist. Instead, it is important to have brand equity, which is "the perception that a good or service with a given brand name is different, better, and can be trusted", according to Kenneth E. Clow.
Expanding role of brands
The original aim of branding was to simplify the process of identifying and differentiating products. Over time, manufacturers began to use branded messages to give the brand a unique personality. Brands came to embrace a performance or benefit promise, for the product, certainly, but eventually also for the company behind the brand.
Today, brands play a much bigger role. The power of brands to communicate a complex message quickly, with emotional impact and with the ability of brands to attract media attention, makes them ideal tools in the hands of activists. Cultural conflict over a brand's meaning also influences the diffusion of an innovation.
During the COVID-19 pandemic, 75% of US customers tried different stores, websites or brands, and 60% of those expect to integrate new brands or stores into their post-pandemic lives. If brands can find ways to help people feel empowered and regain a sense of control in uncertain times, they can help people reconnect and heal (and be appreciated for it).
Branding strategies
Company name
Often, especially in the industrial sector, brand engineers will promote a company's name. Exactly how the company name relates to product and services names forms part of a brand architecture. Decisions about company names and product names and their relationship depend on more than a dozen strategic considerations.
In this case, a strong brand name (or company name) becomes the vehicle for marketing a range of products (for example, Mercedes-Benz or Black & Decker) or a range of subsidiary brands (such as Cadbury Dairy Milk, Cadbury Flake, or Cadbury Fingers in the UK).
Corporate name-changes offer particularly stark examples of branding-related decisions.
A name change may signal different ownership or new product directions.
Thus the name Unisys originated in 1986 when Burroughs bought and incorporated UNIVAC; and the newly named International Business Machines represented a broadening of scope in 1924 from its original name, the Computing-Tabulating-Recording Company. A change in corporate naming may also have a role in seeking to shed an undesirable image: for example, Werner Erhard and Associates re-branded its activities as Landmark Education in 1991 at a time when publicity in a 60 Minutes investigative-report broadcast cast the est and Werner Erhard brands in a negative light,
and Union Carbide India Limited became Eveready Industries India in 1994, subsequent to the Bhopal disaster of 1984.
Individual branding
Marketers associate separate products or lines with separate brand names, such as Seven-Up, Kool-Aid, or Nivea Sun (Beiersdorf), which may compete against other brands from the same company (for example, Unilever owns Persil, Omo, Surf, and Lynx).
Challenger brands
A challenger brand is a brand in an industry where it is neither the market leader nor a niche brand. Challenger brands are characterized by a mindset of holding business ambitions beyond conventional resources and an intent to bring change to an industry.
Multiproduct branding strategy
Multiproduct branding strategy is when a company uses one name across all its products in a product class. When the company's trade name is used, multiproduct branding is also known as corporate branding, family branding or umbrella branding. Examples of companies that use corporate branding are Microsoft, Samsung, Apple, and Sony, as each company's brand name is identical to its trade name.
Other examples of multiproduct branding strategy include Virgin and Church & Dwight. Virgin, a multinational conglomerate, uses the punk-inspired, handwritten red logo with the iconic tick for all its products, ranging from airlines and hot air balloons to telecommunications and healthcare. Church & Dwight, a manufacturer of household products, displays the Arm & Hammer family brand name on all its products containing baking soda as the main ingredient. A multiproduct branding strategy has many advantages. It capitalizes on brand equity, as consumers that have a good experience with one product will in turn pass on this positive opinion to other products in the same product class because they share the same name. Consequently, the multiproduct branding strategy makes product line extension possible.
Product line extension
A product line extension is the procedure of entering a new market segment in the product class by means of using a current brand name. An example of this is the Campbell Soup Company, primarily a producer of canned soups. It utilizes a multiproduct branding strategy by way of soup line extensions, with over 100 soup flavours putting forward varieties such as regular Campbell soup, condensed, chunky, fresh-brewed, organic, and soup on the go. This approach is seen as favourable as it can result in lower promotion and advertising costs due to the same name being used on all products, therefore increasing the level of brand awareness. However, line extension has potential negative outcomes, one being that other items in the company's line may be disadvantaged because of the sale of the extension. Line extensions work at their best when they deliver an increase in company revenue by enticing new buyers or by removing sales from competitors.
Subbranding
Subbranding is used by certain multiproduct branding companies. Subbranding merges a corporate, family or umbrella brand with the introduction of a new brand in order to differentiate part of a product line from others in the whole brand system. Subbranding helps to articulate and construct offerings. It can alter a brand's identity, as subbranding can modify associations of the parent brand. Examples of successful subbranding can be seen through Gatorade and Porsche. Gatorade, a manufacturer of sport-themed food and beverages, effectively introduced Gatorade G2, a low-calorie line of Gatorade drinks. Likewise, Porsche, a specialized automobile manufacturer, successfully markets its lower-end line, the Porsche Boxster, and its higher-end line, the Porsche Carrera.
Brand extension and brand dilution
Brand extension is the system of employing a current brand name to enter a different product class. Having strong brand equity allows for brand extension; for example, many fashion and designer companies extended brands into fragrances, shoes and accessories, home textiles, home decor, luggage, (sun-) glasses, furniture, hotels, etc. Nevertheless, brand extension has its disadvantages. There is a risk that too many uses for one brand name can oversaturate the market, resulting in a blurred and weak brand for consumers. Examples of brand extension can be seen through Kimberly-Clark and Honda. Kimberly-Clark, a corporation that produces personal and health care products, was able to extend the Huggies brand name across a full line of toiletries for toddlers and babies. The success of this brand extension strategy is apparent in the $500 million in annual sales generated globally. Similarly, Honda, using its reputable name for automobiles, has spread to other products such as motorcycles, power equipment, engines, robots, aircraft, and bikes. Mars extended its brand to ice cream, Caterpillar to shoes and watches, Michelin to a restaurant guide, Adidas and Puma to personal hygiene. Dunlop extended its brand from tires to other rubber products such as shoes, golf balls, tennis racquets, and adhesives. Frequently, the product is no different from what else is on the market, except for the brand name marking; the brand is the product's identity.
There is a difference between brand extension and line extension. A line extension is when a current brand name is used to enter a new market segment in the existing product class, with new varieties or flavors or sizes.
When Coca-Cola launched Diet Coke and Cherry Coke, they stayed within the originating product category: non-alcoholic carbonated beverages. Procter & Gamble did likewise, extending its strong lines (such as Fairy Soap) into neighboring products (Fairy Liquid and Fairy Automatic) within the same category, dishwashing detergents.
The risk of over-extension is brand dilution where the brand loses its brand associations with a market segment, product area, or quality, price or cachet.
Brand collaborations
Brand collaborations refer to the participation of multiple firms in a branding initiative. Brand collaborations are short-lived or ephemeral "partnerships between brands in which their images, legacies and values intertwine" (p. 13). One of the most well-known is co-branding, a strategy in which two firms combine their brands into a single product.
Most recently, some brands have engaged in unconventional brand collaborations, partnering with other brands or designers with a significantly different design, esthetic, positioning or set of values. For example, in 2017, Louis Vuitton partnered with the skateboarding brand Supreme.
Multibranding strategy
Multibranding strategy is when a company gives each product a distinct name. Multibranding is best used as an approach when each brand is intended for a different market segment. Multibranding is used in an assortment of ways, with selected companies grouping their brands based on price-quality segments. Individual brand names naturally allow greater flexibility by permitting a variety of different products, of differing quality, to be sold without confusing the consumer's perception of what business the company is in or diluting higher quality products. Procter & Gamble, a multinational consumer goods company, offers over 100 brands, each suited to different consumer needs. For instance, Head & Shoulders, a shampoo that helps consumers relieve dandruff; Oral-B, which offers interdental products; Vicks, which offers cough and cold products; and Downy, which offers dryer sheets and fabric softeners. Other examples include Coca-Cola, Nestlé, Kellogg's, and Mars.
This approach usually results in higher promotion and advertising costs. This is due to the company being required to generate awareness among consumers and retailers for each new brand name without the benefit of any previous impressions. A multibranding strategy has many advantages. There is no risk that a product failure will affect other products in the line, as each brand is unique to each market segment. However, certain large multibrand companies have found that the cost and difficulty of implementing a multibranding strategy can overshadow the benefits. For example, Unilever, the world's third-largest multinational consumer goods company, recently streamlined its brands from over 400 brands to center its attention on 14 brands with sales of over 1 billion euros. Unilever accomplished this through product deletion and sales to other companies. Other multibrand companies introduce new product brands as a protective measure to respond to competition, called fighting brands or fighter brands.
Cannibalization is a particular challenge with a multi-brand strategy approach, in which the new brand takes business away from an established one which the organization also owns. This may be acceptable (indeed to be expected) if there is a net gain overall. Alternatively, it may be the price the organization is willing to pay for shifting its position in the market; the new product being one stage in this process.
Fighting brands
The main purpose of fighting brands is to challenge competitor brands. For example, Qantas, Australia's largest flag carrier airline, introduced Jetstar to go head-to-head against the low-cost carrier Virgin Australia (formerly known as Virgin Blue). Jetstar is an Australian low-cost airline for budget-conscious travellers, but it receives many negative reviews due to this. The launch of Jetstar allowed Qantas to rival Virgin Australia without the criticism being affiliated with Qantas, because of the distinct brand name.
Private branding strategy
Private branding (also known as reseller branding, private labelling, store brands, or own brands) has increased in popularity. Private branding is when a company manufactures products that are sold under the brand name of a wholesaler or retailer. Private branding is popular because it typically produces high profits for manufacturers and resellers. The pricing of private brand products is usually cheaper compared to competing name brands. Consumers are commonly deterred by these prices in good economic circumstances, as they set a perception of lower quality and standards, but this view shifts in less ideal economic circumstances.
In Australia, the leading supermarket chains Woolworths and Coles are saturated with store brands (or private labels). In the United States, for example, Paragon Trade Brands, Ralcorp Holdings, and Rayovac are major suppliers of diapers, grocery products, and private-label alkaline batteries, respectively. Costco, Walmart, RadioShack, Sears and Kroger are large retailers that have their own brand names. Similarly, Macy's, a mid-range chain of department stores, offers a wide catalogue of private brands exclusive to its stores, such as First Impressions, which supplies newborn and infant clothing; Hotel Collection, which supplies luxury linens and mattresses; and Tasso Elba, which supplies European-inspired menswear. These retailers use a private branding strategy to specifically target consumer markets.
Mixed branding strategy
Mixed branding strategy is where a firm markets products under its own name(s) and that of a reseller because the segment attracted to the reseller is different from its own market.
For example, Elizabeth Arden, Inc., a major American cosmetics and fragrance company, uses a mixed branding strategy. The company sells its Elizabeth Arden brand through department stores and a line of skin care products at Walmart under the "skin simple" brand name. Companies such as Whirlpool, Del Monte, and Dial produce private brands of home appliances, pet foods, and soap, respectively. Other examples of mixed branding strategy include Michelin, Epson, Microsoft, Gillette, and Toyota. Michelin, one of the largest tire manufacturers, allowed Sears, an American retail chain, to place its own brand name on tires made by Michelin. Microsoft, a multinational technology company, is regarded as a corporate technology brand, but it sells its versatile home entertainment hub under the Xbox brand to better align with the new and edgier identity sought for that product. Gillette catered to females with Gillette for Women, which has now become known as Venus. Venus was launched to serve the female segment of the previously male-dominated razor industry. Similarly, Toyota, an automobile manufacturer, used mixed branding. In the U.S., Toyota was regarded as a valuable car brand, being economical, family-oriented and known as a vehicle that rarely broke down. But Toyota sought to reach a higher-end, more expensive market segment, so it created Lexus, the luxury vehicle division of premium cars.
Attitude branding and iconic brands
Attitude branding is the choice to represent a larger feeling, which is not necessarily connected with the product or consumption of the product at all. Marketing labeled as attitude branding includes that of Nike, Starbucks, The Body Shop, Safeway and Apple. In the 1999 book No Logo, Naomi Klein describes attitude branding as a "fetish strategy". Schaefer and Kuehlwein analyzed brands such as Apple, Ben & Jerry's or Chanel, describing them as 'Ueber-Brands' – brands that are able to gain and retain "meaning beyond the material."
A great brand raises the bar – it adds a greater sense of purpose to the experience, whether it's the challenge to do your best in sports and fitness, or the affirmation that the cup of coffee you're drinking really matters. – Howard Schultz (President, CEO, and Chairman of Starbucks)
Iconic brands are defined as having aspects that contribute to consumer's self-expression and personal identity. Brands whose value to consumers comes primarily from having identity value are said to be "identity brands". Some of these brands have such a strong identity that they become more or less cultural icons which makes them "iconic brands". Examples are: Apple, Nike and Harley-Davidson. Many iconic brands include almost ritual-like behaviour in purchasing or consuming the products.
There are four key elements to creating iconic brands (Holt 2004):
"Necessary conditions" – The performance of the product must at least be acceptable, preferably with a reputation of having good quality.
"Myth-making" – A meaningful storytelling fabricated by cultural insiders. These must be seen as legitimate and respected by consumers for stories to be accepted.
"Cultural contradictions" – Some kind of mismatch between prevailing ideology and emergent undercurrents in society. In other words, a difference with the way consumers are and how they wish they were.
"The cultural brand management process" – Actively engaging in the myth-making process in making sure the brand maintains its position as an icon.
Schaefer and Kuehlwein propose the following 'Ueber-Branding' principles. They derived them from studying successful modern Prestige brands and what elevates them above mass competitors and beyond considerations of performance and price (alone) in the minds of consumers:
"Mission Incomparable" – Having a differentiated and meaningful brand purpose beyond 'making money.' Setting rules that follow this purpose – even when it violates the mass marketing mantra of "Consumer is always Boss/right".
"Longing versus Belonging" – Playing with the opposing desires of people for Inclusion on the one hand and Exclusivity on the other.
"Un-Selling" – First and foremost seeking to seduce through pride and provocation, rather than to sell through arguments.
"From Myth To Meaning" – Leveraging the power of myth – 'Ueber-Stories' that have fascinated- and guided humans forever.
"Behold!" – Making products and associated brand rituals reflect the essence of the brand mission and myth. Making it the center of attention, while keeping it fresh.
"Living the Dream" – Living the brand mission as an organization and through its actions. Thus radiating the brand myth from the inside out, consistently and through all brand manifestations. – For "Nothing is as volatile than a dream."
"Growth without End" – Avoiding to be perceived as an omnipresent, diluting brand appeal. Instead 'growing with gravitas' by leveraging scarcity/high prices, 'sideways expansion' and other means.
"No-brand" branding
Recently, a number of companies have successfully pursued "no-brand" strategies by creating packaging that imitates generic brand simplicity. Examples include the Japanese company Muji, which means "No label" in English (from 無印良品 – "Mujirushi Ryohin" – literally, "No brand quality goods"), and the Florida company No-Ad Sunscreen. Although there is a distinct Muji brand, Muji products are not branded. This no-brand strategy means that little is spent on advertising or classical marketing, and Muji's success is attributed to word-of-mouth, a simple shopping experience and the anti-brand movement.
"No brand" branding may be construed as a type of branding as the product is made conspicuous through the absence of a brand name.
"Tapa Amarilla" or "Yellow Cap" in Venezuela during the 1980s is another good example of no-brand strategy. It was simply recognized by the color of the cap of this cleaning products company.
Derived brands
In this case the supplier of a key component, used by a number of suppliers of the end-product, may wish to guarantee its own position by promoting that component as a brand in its own right. The most frequently quoted example is Intel, which positions itself in the PC market with the slogan (and sticker) "Intel Inside".
Social media brands
In The Better Mousetrap: Brand Invention in a Media Democracy (2012), author and brand strategist Simon Pont posits that social media brands may be the most evolved version of the brand form, because they focus not on themselves but on their users. In so doing, social media brands are arguably more charismatic, in that consumers are compelled to spend time with them, because the time spent is in the meeting of fundamental human drivers related to belonging and individualism. "We wear our physical brands like badges, to help define us – but we use our digital brands to help express who we are. They allow us to be, to hold a mirror up to ourselves, and it is clear. We like what we see."
Private labels
Private label brands, also called own brands, or store brands have become popular. Where the retailer has a particularly strong identity (such as Marks & Spencer in the UK clothing sector) this "own brand" may be able to compete against even the strongest brand leaders, and may outperform those products that are not otherwise strongly branded.
Designer private labels
A relatively recent innovation in retailing is the introduction of designer private labels. Designer-private labels involve a collaborative contract between a well-known fashion designer and a retailer. Both retailer and designer collaborate to design goods with popular appeal pitched at price points that fit the consumer's budget. For retail outlets, these types of collaborations give them greater control over the design process as well as access to exclusive store brands that can potentially drive store traffic.
In Australia, for example, the department store, Myer, now offers a range of exclusive designer private labels including Jayson Brundson, Karen Walker, Leona Edmiston, Wayne Cooper, Fleur Wood and 'L' for Lisa Ho. Another up-market department store, David Jones, currently offers 'Collette' for leading Australian designer, Collette Dinnigan, and has recently announced its intention to extend the number of exclusive designer brands. Target Australia has teamed up with Dannii Minogue to produce her "Petites" range. Specsavers has joined up with Sydney designer, Alex Perry to create an exclusive range of spectacle frames, while Big W stocks frames designed by Peter Morrissey.
Individual and organizational brands
With the development of branding, it is no longer limited to a product or service; there are kinds of branding that treat individuals and organizations as the products to be branded. Most NGOs and non-profit organizations carry their brand as a fundraising tool. The purpose of most NGOs is to leave a social impact, so their brand becomes associated with specific social causes. Amnesty International, Habitat for Humanity, World Wildlife Fund and AIESEC are among the most recognized brands around the world. NGOs and non-profit organizations have moved beyond using their brands for fundraising to express their internal identity and to clarify their social goals and long-term aims. Organizational brands have well-determined brand guidelines and logo variables.
Personal branding
Employer branding
Crowd sourced branding
These are brands that are created by "the public" for the business, which is opposite to the traditional method where the business creates a brand.
Personalized branding
Many businesses have started to use elements of personalization in their branding strategies, offering the client or consumer the ability to choose from various brand options or have direct control over the brand. Examples of this include the #ShareACoke campaign by Coca-Cola, which printed people's names and place names on its bottles, encouraging people to find and share them. Airbnb has created the facility for users to create their own symbol for the software to replace the brand's mark, known as The Bélo.
Nation branding (place branding and public diplomacy)
Nation branding is a field of theory and practice which aims to measure, build and manage the reputation of countries (closely related to place branding). Some approaches applied, such as an increasing importance on the symbolic value of products, have led countries to emphasize their distinctive characteristics. The branding and image of a nation-state "and the successful transference of this image to its exports – is just as important as what they actually produce and sell."
Destination branding
Destination branding is the work of cities, states, and other localities to promote the location to tourists and drive additional revenues into a tax base. These activities are often undertaken by governments, but can also result from the work of community associations. The Destination Marketing Association International is the industry leading organization.
Brand protection
Intellectual property infringements, in particular counterfeiting, can affect consumer trust and ultimately damage brand equity. Brand protection is the set of preventive, monitoring and reactive measures taken by brand owners to eliminate, reduce or mitigate these infringements and their effect.
Doppelgänger brand image (DBI)
A doppelgänger brand image or "DBI" is a disparaging image or story about a brand that is circulated in popular culture. DBI targets tend to be widely known and recognizable brands. The purpose of DBIs is to undermine the positive brand meanings the brand owners are trying to instill through their marketing activities.
The term stems from the combination of the German words Doppel ('double') and Gänger ('walker').
Doppelgänger brands are typically created by individuals or groups to express criticism of a brand and its perceived values, through a form of parody, and are typically unflattering in nature.
Due to the ability of doppelgänger brands to rapidly propagate virally through digital media channels, they can represent a real threat to the equity of the target brand. Sometimes the target organization is forced to address the root concern or to re-position the brand in a way that defuses the criticism.
Examples include:
Joe Chemo campaign organized to criticize the marketing of tobacco products to children and their harmful effects.
Parody of the Pepsi logo as an obese man to highlight the relationship between soft drink consumption and obesity.
The FUH2 campaign protesting the Hummer SUV as a symbol of corporate and public irresponsibility toward public safety and the environment.
In the 2006 article "Emotional Branding and the Strategic Value of the Doppelgänger Brand Image", Thompson, Rindfleisch, and Arsel suggest that a doppelgänger brand image can be a benefit to a brand if taken as an early warning sign that the brand is losing emotional authenticity with its market.
International Standards
The ISO branding standards developed by the Committee ISO/TC 289 are:
'ISO 10668:2010' Brand valuation - Requirements for monetary brand valuation,
'ISO 20671:2019' Brand evaluation - Principles and fundamentals.
Two other ISO standards are being developed by ISO/TC 289:
ISO/AWI 23353 Brand evaluation - Guidelines for brands relating to geographical indications
ISO/AWI 24051 Brand evaluation - Guide for the annual brand evaluation.
See also
References
Branding terminology
Communication design
Graphic design
Intangible assets
Product management | Brand | Engineering | 13,967 |
14,894,500 | https://en.wikipedia.org/wiki/Object%20manipulation | Object manipulation is a form of dexterity play or performance in which one or more people physically interact with one or more objects. Many object manipulation skills are recognised circus skills. Other object manipulation skills are linked to sport, magic, and everyday objects or practices. Many object manipulation skills use special props made for that purpose: examples include the varied circus props such as balls, clubs, hoops, rings, poi, staff, and devil sticks; magic props such as cards and coins; sports equipment such as nunchaku and footballs. Many other objects can also be used for manipulation skills. The use of ordinary items may be considered object manipulation when the object is used in an unusually stylised or skilful way (such as in flair bartending) or for a physical interaction outside of its socially acknowledged context or differently from its original purpose.
Object manipulators may also be practitioners of fire performance, which is essentially object manipulation where specially designed props are soaked in fuel and lit on fire.
Types
There is a very wide range of types of object manipulation, which are often grouped into categories of related skills. These categories are shown below; however, many types of object manipulation do not fit these common categories, while others can be seen to belong to more than one category.
Juggling and tossing
Juggling is a physical skill involving the manipulation of objects for recreation, entertainment or sport. The most recognizable form of juggling is toss juggling. Juggling can be the manipulation of one object or many objects at the same time, using one or many hands. Jugglers often refer to the objects they juggle as props. The most common props are balls, clubs, or rings. Some jugglers use more dramatic objects such as knives, fire torches or chainsaws. The term juggling can also commonly refer to other prop-based manipulation skills such as diabolo, devil sticks, poi, cigar boxes, shaker cups, contact juggling, hooping, and hat manipulation.
Juggling
Contact juggling
Toss juggling
Chinlone
Cigar box (juggling)
Devil Sticks
Diabolo
Flair bartending
Footbag
Hat manipulation
Spinning and twirling
Spinning and twirling are any of several activities in which an object is spun, twirled or rotated for exercise, play or performance. The object can be twirled directly by one or two hands, the fingers or other parts of the body. It can also be twirled indirectly, as in the case of devil or flower sticks, using another object or objects. The origin of twirling can be found in manipulation skills developed for armed combat and in traditional dance. The various twirling skills have become increasingly popular, with many now associated with circus skills.
Chinese yo-yo
Club swinging
Devil sticks
Diabolo
Eskimo yo-yo
Fanning
Fire staff
Flagging
Freestyle nunchaku
Glowsticking
Hooping
Hula hooping
Padiddling
Plate spinning
Poi spinning
Rhythmic gymnastics
Rope dart
Trick roping
Twirling
Baton twirling
Color guard and winter guard
Yo-yo
Skill toys
Skill toys are purpose-made objects that require manipulative skill for their typical use and are often also used as fidget toys. Examples of such toys are:
Astrojax
Begleri
Kendama
Pen spinning
Butterfly knife (Balisong)
Knucklebones
Fingerboard (skateboard)
Dexterity
Dexterity skills, as used here, are skills that do not fall under the other categories of object manipulation. Many of these skills use items not usually associated with object manipulation, such as dice, cups and lighters.
Cardistry
Card manipulation
Clackers
Coin manipulation
Dice stacking
Isolation
Knife throwing
Shuffling
Sport stacking, a.k.a. cup stacking
Whipcracking
See also
Burning Man, an annual event that attracts object manipulators
Chinese jump rope
Modern juggling culture
External links
Sleight of hand
Motor control | Object manipulation | Biology | 799 |
1,411,852 | https://en.wikipedia.org/wiki/Active%20structure | An active structure (also known as a smart or adaptive structure) is a mechanical structure with the ability to alter its configuration, form or properties in response to changes in the environment.
The term active structure also refers to structures that, unlike traditional engineering structures (e.g., bridges, buildings), require constant motion and hence power input to remain stable. The advantage of active structures is that they can be far more massive than a traditional static structure: an example would be a space fountain, a building that reaches into space.
Function
The result of the activity is a structure more suited for the type and magnitude of the load it is carrying. For example, an orientation change of a beam could reduce the maximum stress or strain level, while a shape change could render a structure less susceptible to dynamic vibrations. A good example of an adaptive structure is the human body where the skeleton carries a wide range of loads and the muscles change its configuration to do so. Consider carrying a backpack. If the upper body did not adjust the centre of mass of the whole system slightly by leaning forward, the person would fall on their back.
An active structure consists of three integral components besides the load carrying part. They are the sensors, the processor and the actuators. In the case of a human body, the sensory nerves are the sensors which gather information of the environment. The brain acts as the processor to evaluate the information and decide to act accordingly and therefore instructs the muscles, which act as actuators to respond. In heavy engineering, there is already an emerging trend to incorporate activation into bridges and domes to minimize vibrations under wind and earthquake loads.
Aviation engineering and aerospace engineering have been the main driving force in developing modern active structures. Aircraft (and spacecraft) require adaptation because they are exposed to many different environments, and therefore loadings, during their lifetime. Prior to launch they are subjected to gravity or dead loads, during takeoff they are subjected to extreme dynamic and inertial loads, and in flight they need to be in a configuration which minimizes drag but promotes lift. A lot of effort has been committed to adaptive aircraft wings to produce one that can control the separation of boundary layers and turbulence. Many space structures utilize adaptivity to survive extreme environmental challenges in space or to achieve high precision. For example, space antennas and mirrors can be actuated to precise orientations. As space technology advances, some sensitive equipment (namely interferometric optical and infrared astronomical instruments) is required to hold its position to within a few nanometres, while the supporting active structure is tens of metres in dimensions.
Design
Human-made actuators on the market, even the most sophisticated ones, are nearly all one-dimensional. This means they are only capable of extending and contracting along, or rotating about, one axis. Actuators capable of movement in both forward and reverse directions are known as two-way actuators, as opposed to one-way actuators which can only move in one direction. The limited capability of actuators has restricted active structures to two main types: active truss structures, based on linear actuators, and manipulator arms, based on rotary actuators.
A good active structure has a number of requirements. First, it needs to be easily actuated. The actuation should be energy-saving. A structure which is very stiff and strongly resists morphing is therefore not desirable. Second, the resulting structure must have structural integrity to carry the design loads. Therefore, the process of actuation should not jeopardize the structure's strength. More precisely, we can say: We seek an active structure where actuation of some members will lead to a geometry change without substantially altering its stress state. In other words, a structure that has both statical determinacy and kinematic determinacy is optimal for actuation.
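As a concrete illustration of the counting behind static and kinematic determinacy, the sketch below applies Maxwell's rule for a planar pin-jointed truss. It is a simplified, assumed screening check only (the counting rule is necessary but not sufficient) and is not taken from the sources of this article.

```python
# Simplified determinacy screening for a 2D pin-jointed truss (Maxwell's rule).
# Illustrative assumption: member forces and support reactions are counted
# against the two equilibrium equations available at each joint.
def truss_determinacy(members: int, reactions: int, joints: int) -> str:
    equations = 2 * joints              # two equilibrium equations per joint
    unknowns = members + reactions      # member forces plus support reactions
    if unknowns == equations:
        return "statically and kinematically determinate (by counting)"
    if unknowns > equations:
        return f"statically indeterminate to degree {unknowns - equations}"
    return f"mechanism with {equations - unknowns} kinematic degree(s) of freedom"

# Example: a single triangular truss on a pin and a roller support
print(truss_determinacy(members=3, reactions=3, joints=3))
```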
Applications
Active-control technology is applied in civil engineering, mechanical engineering and aerospace engineering. Although most civil engineering structures are static, active control is utilized in some civil structures for deployment and to counter seismic loading, wind loading and environmental vibration. Active control has also been proposed for damage-tolerance purposes where human intervention is restricted. Korkmaz et al. demonstrated the configuration of an active control system for damage tolerance and deployment of a bridge.
See also
James E. Hubbard, Jr.
Tuned mass damper
References
External links
Swiss Federal Institute of Technology (EPFL), Applied Computing and Mechanics Laboratory (IMAC)
Cambridge University Deployable Structures Lab
Hoberman Associates - Transformable Design
CRG Technology: Morphing Processes
A free-standing space elevator structure: A practical alternative to the space tether
Engineering concepts
Structure | Active structure | Engineering | 949 |
7,687,767 | https://en.wikipedia.org/wiki/Forster%E2%80%93Decker%20method | The Forster–Decker method is a series of chemical reactions that have the effect of mono-alkylating a primary amine (1), forming a secondary amine (6). The process occurs by way of transient formation of an imine (3) that undergoes the actual alkylation reaction.
Process stages
Conversion of the primary amine to an imine (Schiff base) using an aldehyde.
Alkylation of the imine using an alkyl halide, forming an iminium ion.
Hydrolysis of the iminium, releasing the secondary amine and regenerating the aldehyde.
Because the actual alkylation occurs on the imine, over-alkylation is not possible. Therefore, this method does not suffer from side-reactions such as formation of tertiary amines as a simple SN2-type process can.
See also
Reductive amination
References
Substitution reactions
Name reactions | Forster–Decker method | Chemistry | 191 |
25,113,859 | https://en.wikipedia.org/wiki/Remote%20visual%20inspection | Remote Visual Inspection or Remote Digital Video Inspection, also known as RVI or RDVI, is a form of visual inspection which uses visual aids including video technology to allow an inspector to look at objects and materials from a distance because the objects are inaccessible or are in dangerous environments. RVI is also a specialty branch of nondestructive testing (NDT).
Purposes
Technologies include, but are not limited to, rigid or flexible borescopes, videoscopes, fiberscopes, push cameras, pan/tilt/zoom cameras and robotic crawlers. Remote visual inspection tools are commonly used where distance, angle of view and limited lighting may impair direct visual examination, or where access is limited by time, financial constraints or atmospheric hazards.
RVI/RDVI is commonly used as a predictive maintenance or regularly scheduled maintenance tool to assess the "health" and operability of fixed and portable assets. RVI/RDVI enables greater inspection coverage, inspection repeatability and data comparison.
The "remote" portion of RVI/RDVI refers to the characterization of the operator not entering the inspection area due to physical size constraints or potential safety issues related to the inspection environment.
Applications
Typical applications for RVI include:
Aircraft engines (turbofan, turbojet, turboshaft)
Aircraft fuselage
Turbines for power generation (steam and gas)
Process piping (oil and gas, pharmaceutical, food preparation)
Nuclear Power Stations - contaminated areas
Any areas where it is too dangerous, small or costly to view directly
References
Nondestructive testing
Maintenance
Tests | Remote visual inspection | Materials_science,Engineering | 306 |
33,301,481 | https://en.wikipedia.org/wiki/Transmission%20curve | The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
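For illustration, the short sketch below computes the transmission curve of a generic first-order low-pass filter; the cut-off frequency is an assumed example value, not a figure from this article, and the point is simply that the transmitted fraction never exceeds 1 (100%).

```python
# Transmission (power) fraction of a generic first-order low-pass filter.
# The 1 kHz cut-off frequency is an assumed example value.
f_c = 1000.0  # Hz

def transmission(f: float) -> float:
    """Fraction of input power transmitted at frequency f (always <= 1)."""
    return 1.0 / (1.0 + (f / f_c) ** 2)

for f in (10, 100, 1000, 10000):
    print(f"{f:>6} Hz -> {transmission(f) * 100:5.1f} % transmitted")
```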
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
See also
Electronic filter — examples of transmission characteristics of electronic filters
References
Signal processing | Transmission curve | Technology,Engineering | 131 |
50,896 | https://en.wikipedia.org/wiki/Space%20station | A space station (or orbital station) is a spacecraft which remains in orbit and hosts humans for extended periods of time. It therefore is an artificial satellite featuring habitation facilities. The purpose of maintaining a space station varies depending on the program. Most often space stations have been research stations, but they have also served military or commercial uses, such as hosting space tourists.
Space stations have been hosting the only continuous presence of humans in space. The first space station was Salyut 1 (1971), hosting the first crew, of the ill-fated Soyuz 11. Consecutively space stations have been operated since Skylab (1973) and occupied since 1987 with the Salyut successor Mir. Uninterrupted occupation has been sustained since the operational transition from the Mir to the International Space Station (ISS), with its first occupation in 2000.
Currently there are two fully operational space stations – the ISS and China's Tiangong Space Station (TSS), which have been occupied since October 2000 with Expedition 1 and since June 2022 with Shenzhou 14. The highest number of people at the same time on one space station has been 13, first achieved with the eleven day docking to the ISS of the 127th Space Shuttle mission in 2009. The record for most people on all space stations at the same time has been 17, first on May 30, 2023, with 11 people on the ISS and 6 on the TSS.
Space stations are often modular, featuring docking ports, through which they are built and maintained, allowing the joining or movement of modules and the docking of other spacecrafts for the exchange of people, supplies and tools. While space stations generally do not leave their orbit, they do feature thrusters for station keeping.
History
Early concepts
The first mention of anything resembling a space station occurred in Edward Everett Hale's 1868 "The Brick Moon". The first to give serious, scientifically grounded consideration to space stations were Konstantin Tsiolkovsky and Hermann Oberth about two decades apart in the early 20th century.
In 1929, Herman Potočnik's The Problem of Space Travel was published, the first to envision a "rotating wheel" space station to create artificial gravity. Conceptualized during the Second World War, the "sun gun" was a theoretical orbital weapon orbiting Earth at a height of . No further research was ever conducted. In 1951, Wernher von Braun published a concept for a rotating wheel space station in Collier's Weekly, referencing Potočnik's idea. However, development of a rotating station was never begun in the 20th century.
First advances and precursors
The first human flew to space and concluded the first orbit on April 12, 1961, with Vostok 1.
In its early planning, the Apollo program had two different possible goals instead of a lunar landing: a crewed lunar orbital flight and an orbital laboratory station in Earth orbit, at times called Project Olympus. This remained so until the Kennedy administration sped ahead and made the Apollo program focus on what was originally planned to come after it, the lunar landing. The Project Olympus space station, or orbiting laboratory of the Apollo program, was proposed as a structure that would unfold in space, with the Apollo command and service module docking to it. While the station was never realized, the Apollo command and service module did perform docking maneuvers and eventually became a lunar orbiting module that was used for station-like purposes.
Before that, the Gemini program paved the way, achieving the first (undocked) space rendezvous with Gemini 6 and Gemini 7 in 1965. Subsequently, in 1966, Neil Armstrong performed the first ever space docking on Gemini 8, while in 1967 Kosmos 186 and Kosmos 188 became the first spacecraft to dock automatically.
In January 1969, Soyuz 4 and Soyuz 5 performed the first crew transfer between docked spacecraft (made externally rather than through an internal passage), and in March, Apollo 9 performed the first ever internal transfer of astronauts between two docked spaceships.
Salyut, Almaz and Skylab
In 1971, the Soviet Union developed and launched the world's first space station, Salyut 1. The Almaz and Salyut series were eventually joined by Skylab, Mir, and Tiangong-1 and Tiangong-2. The hardware developed during the initial Soviet efforts remains in use, with evolved variants comprising a considerable part of the ISS, orbiting today. Each crew member stays aboard the station for weeks or months but rarely more than a year.
Early stations were monolithic designs that were constructed and launched in one piece, generally containing all their supplies and experimental equipment. A crew would then be launched to join the station and perform research. After the supplies had been consumed, the station was abandoned.
The first space station was Salyut 1, which was launched by the Soviet Union on April 19, 1971. The early Soviet stations were all designated "Salyut", but among these, there were two distinct types: civilian and military. The military stations, Salyut 2, Salyut 3, and Salyut 5, were also known as Almaz stations.
The civilian stations Salyut 6 and Salyut 7 were built with two docking ports, which allowed a second crew to visit, bringing a new spacecraft with them; the Soyuz ferry could spend 90 days in space, at which point it needed to be replaced by a fresh Soyuz spacecraft. This allowed for a crew to man the station continually. The American Skylab (1973–1979) was also equipped with two docking ports, like second-generation stations, but the extra port was never used. The presence of a second port on the new stations allowed Progress supply vehicles to be docked to the station, meaning that fresh supplies could be brought to aid long-duration missions. This concept was expanded on Salyut 7, which "hard docked" with a TKS tug shortly before it was abandoned; this served as a proof of concept for the use of modular space stations. The later Salyuts may reasonably be seen as a transition between the two groups.
Mir
Unlike previous stations, the Soviet space station Mir had a modular design; a core unit was launched, and additional modules, generally with a specific role, were later added. This method allows for greater flexibility in operation, as well as removing the need for a single immensely powerful launch vehicle. Modular stations are also designed from the outset to have their supplies provided by logistical support craft, which allows for a longer lifetime at the cost of requiring regular support launches.
International Space Station
The ISS is divided into two main sections, the Russian Orbital Segment (ROS) and the US Orbital Segment (USOS). The first module of the ISS, Zarya, was launched in 1998.
The Russian Orbital Segment's "second-generation" modules were able to launch on Proton, fly to the correct orbit, and dock themselves without human intervention. Connections are automatically made for power, data, gases, and propellants. The Russian autonomous approach allows the assembly of space stations prior to the launch of crew.
The Russian "second-generation" modules are able to be reconfigured to suit changing needs. As of 2009, RKK Energia was considering the removal and reuse of some modules of the ROS on the Orbital Piloted Assembly and Experiment Complex after the end of mission is reached for the ISS. However, in September 2017, the head of Roscosmos said that the technical feasibility of separating the station to form OPSEK had been studied, and there were now no plans to separate the Russian segment from the ISS.
In contrast, the main US modules launched on the Space Shuttle and were attached to the ISS by crews during EVAs. Connections for electrical power, data, propulsion, and cooling fluids are also made at this time, resulting in an integrated block of modules that is not designed for disassembly and must be deorbited as one mass.
Axiom Station is a planned commercial space station that will begin as a single module docked to the ISS. Axiom Space gained NASA approval for the venture in January 2020. The first module, the Payload Power Transfer Module (PPTM), is expected to be launched to the ISS no earlier than 2027. PPTM will remain at the ISS until the launch of Axiom's Habitat One (Hab-1) module about one year later, after which it will detach from the ISS to join with Hab-1.
Tiangong program
China's first space laboratory, Tiangong-1 was launched in September 2011. The uncrewed Shenzhou 8 then successfully performed an automatic rendezvous and docking in November 2011. The crewed Shenzhou 9 then docked with Tiangong-1 in June 2012, followed by the crewed Shenzhou 10 in 2013.
According to the China Manned Space Engineering Office, Tiangong-1 reentered over the South Pacific Ocean, northwest of Tahiti, on 2 April 2018 at 00:15 UTC.
A second space laboratory Tiangong-2 was launched in September 2016, while a plan for Tiangong-3 was merged with Tiangong-2. The station made a controlled reentry on 19 July 2019 and burned up over the South Pacific Ocean.
The Tiangong Space Station (), the first module of which was launched on 29 April 2021, is in low Earth orbit, 340 to 450 kilometres above the Earth at an orbital inclination of 42° to 43°. Its planned construction via 11 total launches across 2021–2022 was intended to extend the core module with two laboratory modules, capable of hosting up to six crew.
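As a small illustrative calculation (not from the cited sources), Kepler's third law relates the quoted 340–450 km altitudes to an orbital period of roughly 91–93 minutes:

```python
# Circular-orbit period for a station at a given altitude (illustrative only).
import math

MU = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m (assumed spherical Earth)

for altitude_km in (340, 450):
    a = R_EARTH + altitude_km * 1e3              # orbital radius, m
    period = 2 * math.pi * math.sqrt(a**3 / MU)  # Kepler's third law, seconds
    print(f"{altitude_km} km altitude -> {period / 60:.1f} min per orbit")
```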
Planned projects
Architecture
Two types of space stations have been flown: monolithic and modular. Monolithic stations consist of a single vehicle and are launched by one rocket. Modular stations consist of two or more separate vehicles that are launched independently and docked on orbit. Modular stations are currently preferred due to lower costs and greater flexibility.
A space station is a complex vehicle that must incorporate many interrelated subsystems, including structure, electrical power, thermal control, attitude determination and control, orbital navigation and propulsion, automation and robotics, computing and communications, environmental and life support, crew facilities, and crew and cargo transportation. Stations must serve a useful role, which drives the capabilities required.
Orbit and purpose
Materials
Space stations are made from durable materials that have to weather space radiation, internal pressure, micrometeoroids, thermal effects of the sun and cold temperatures for long periods of time. They are typically made from stainless steel, titanium and high-quality aluminum alloys, with layers of insulation such as Kevlar as a ballistics shield protection.
The International Space Station (ISS) has a single inflatable module, the Bigelow Expandable Activity Module, which was installed in April 2016 after being delivered to the ISS on the SpaceX CRS-8 resupply mission. This module, based on NASA research in the 1990s, was transported while compressed before being attached to the ISS by the space station arm and inflated to provide a volume. Whilst it was initially designed for a 2-year lifetime, it was still attached and being used for storage in August 2022.
Construction
Salyut 1 – first space station, launched in 1971
Skylab – launched in a single launch in May 1973
Mir – first modular space station assembled in orbit
International Space Station – modular space station assembled in orbit
Tiangong space station – Chinese space station
Habitability
The space station environment presents a variety of challenges to human habitability, including short-term problems such as the limited supplies of air, water, and food and the need to manage waste heat, and long-term ones such as weightlessness and relatively high levels of ionizing radiation. These conditions can create long-term health problems for space-station inhabitants, including muscle atrophy, bone deterioration, balance disorders, eyesight disorders, and elevated risk of cancer.
Future space habitats may attempt to address these issues, and could be designed for occupation beyond the weeks or months that current missions typically last. Possible solutions include the creation of artificial gravity by a rotating structure, the inclusion of radiation shielding, and the development of on-site agricultural ecosystems. Some designs might even accommodate large numbers of people, becoming essentially "cities in space" where people would reside semi-permanently.
Molds that develop aboard space stations can produce acids that degrade metal, glass, and rubber. Despite an expanding array of molecular approaches for detecting microorganisms, rapid and robust means of assessing the differential viability of the microbial cells, as a function of phylogenetic lineage, remain elusive.
Power
Like uncrewed spacecraft close to the Sun, space stations in the inner Solar System generally rely on solar panels to obtain power.
Life support
Space station air and water is brought up in spacecraft from Earth before being recycled. Supplemental oxygen can be supplied by a solid fuel oxygen generator.
Communications
Military
The last military-use space station was the Soviet Salyut 5, which was launched under the Almaz program and orbited between 1976 and 1977.
Occupation
Space stations have so far harboured the only long-duration direct human presence in space. After the first station, Salyut 1 (1971), and its tragic Soyuz 11 crew, space stations have been operated consecutively since Skylab (1973–1974), having allowed a progression of long-duration direct human presence in space. Long-duration resident crews have been joined by visiting crews since 1977 (Salyut 6), and stations have been occupied by consecutive crews since 1987 with the Salyut successor Mir. Uninterrupted occupation of stations has been achieved since the operational transition from the Mir to the ISS, with its first occupation in 2000. The ISS has hosted the highest number of people in orbit at the same time, reaching 13 for the first time during the eleven-day docking of STS-127 in 2009.
The duration record for a single spaceflight is 437.75 days, set by Valeri Polyakov aboard Mir from 1994 to 1995. To date, four cosmonauts have completed single missions of over a year, all aboard Mir.
Operations
Resupply and crew vehicles
Many spacecraft have been used to dock with space stations. Soyuz T-15, flown from March to July 1986, was the first and, as of 2016, only spacecraft to visit two different space stations, Mir and Salyut 7.
International Space Station
The International Space Station has been supported by many different spacecraft.
Future
Sierra Nevada Corporation Dream Chaser
New Space-Station Resupply Vehicle (HTV-X)
Roscosmos Orel
Current
Northrop Grumman Cygnus (2013–present)
Roscosmos Progress (multiple variants) (2000–present)
Energia Soyuz (multiple variants) (2001–present)
SpaceX Dragon 2 (2020–present)
Retired
Automated Transfer Vehicle (ATV) (2008–2015)
H-II Transfer Vehicle (HTV) (2009–2020)
Space Shuttle (1998–2011)
SpaceX Dragon 1 (2012–2020)
Tiangong space station
The Tiangong space station is supported by the following spacecraft:
Shenzhou (2021–present)
Tianzhou (2021–present)
Tiangong program
The Tiangong program relied on the following spacecraft.
Shenzhou program (2011–2016)
Mir
The Mir space station was in orbit from 1986 to 2001 and was supported and visited by the following spacecraft:
Roscosmos Progress (multiple variants) (1986–2000) – An additional Progress spacecraft was used in 2001 to deorbit Mir.
Energia Soyuz (multiple variants) (1986–2000)
Space Shuttle (1995–1998)
Skylab
Apollo command and service module (1973–1974)
Salyut programme
Energia Soyuz (multiple variants) (1971–1986)
Docking and berthing
Maintenance
Research
Research conducted on Mir included the first long-term space-based ESA research project, EUROMIR 95, which lasted 179 days and included 35 scientific experiments.
During the first 20 years of operation of the International Space Station, there were around 3,000 scientific experiments in the areas of biology and biotech, technology development, educational activities, human research, physical science, and Earth and space science.
Materials research
Space stations provide a useful platform to test the performance, stability, and survivability of materials in space. This research follows on from previous experiments such as the Long Duration Exposure Facility, a free-flying experimental platform which flew from April 1984 until January 1990.
Mir Environmental Effects Payload (1996–1997)
Materials International Space Station Experiment (2001–present)
Human research
Botany
Space tourism
On the International Space Station, guests sometimes pay $50 million to spend a week living as an astronaut. Space tourism is slated to expand once launch costs are lowered sufficiently; by the end of the 2020s, space hotels may become relatively common.
Finance
As it currently costs on average $10,000 to $25,000 per kilogram to launch anything into orbit, space stations remain the exclusive province of government space agencies, which are primarily funded by taxation. In the case of the International Space Station, space tourism provides only a small portion of the money needed to run it.
Legacy
Technology spinoffs
International cooperation and economy
Cultural impact
Space settlement
See also
Apollo–Soyuz
Spacelab
Shuttle–Mir program
List of space stations
References
Bibliography
Haeuplik-Meusburger: Architecture for Astronauts – An Activity based Approach. Springer Praxis Books, 2011, .
External links
Read Congressional Research Service (CRS) Reports regarding Space Stations
ISS – on Russian News Agency TASS, Official Infographic
The star named ISS – on Roscosmos TV
"Giant Doughnut Purposed as Space Station", Popular Science, October 1951, pp. 120–121; article on the subject of space exploration and a space station orbiting earth
Further reading
1971 introductions
Human habitats
Soviet inventions
Solar System | Space station | Astronomy | 3,598 |
58,507,732 | https://en.wikipedia.org/wiki/Aspergillus%20pseudoglaucus | Aspergillus pseudoglaucus is a species of fungus in the genus Aspergillus. It is from the Aspergillus section. The species was first described in 1929. It has been reported to produce asperentins, asperflavin, auroglaucin, bisanthrons, dihydroauroglaucin, echinulins, erythroglaucin, 6-farnesyl-5,7-dihydroxy-4-methylphthalide, flavoglaucin, isoechinulins, mycophenolic acid, neoechinulins, physcion, questin, questinol, tetracyclic, and tetrahydroauroglaucin.
Growth and morphology
A. pseudoglaucus has been cultivated on both Czapek yeast extract agar (CYA) plates and yeast extract sucrose agar (YES) plates. The growth morphology of the colonies can be seen in the pictures below.
References
pseudoglaucus
Fungi described in 1929
Fungus species | Aspergillus pseudoglaucus | Biology | 228 |
11,347,064 | https://en.wikipedia.org/wiki/Reversed%20electrodialysis | Reverse electrodialysis (RED) is a technique for retrieving salinity gradient energy from the difference in salt concentration between seawater and river water. A method of utilizing the energy produced by this process by means of a heat engine was invented by Prof. Sidney Loeb in 1977 at the Ben-Gurion University of the Negev (United States Patent US4171409).
In reverse electrodialysis a salt solution and fresh water are let through a stack of alternating cation and anion exchange membranes. The chemical potential difference between salt and fresh water generates a voltage over each membrane and the total potential of the system is the sum of the potential differences over all membranes. The process works through difference in ion concentration instead of an electric field, which has implications for the type of membrane needed.
In RED, as in a fuel cell, the cells are stacked. A module with a capacity of 250 kW has the size of a shipping container.
In the Netherlands, for example, more than 3,300 m³ of fresh water runs into the sea per second on average. The membrane halves the pressure difference, which results in a water column of approximately 135 meters. The energy potential is therefore E = mgΔh = 3.3×10⁶ kg/s × 10 m/s² × 135 m ≈ 4.5×10⁹ joules per second, i.e. a power of about 4.5 gigawatts.
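The back-of-the-envelope sketch below reproduces that estimate and adds the ideal per-membrane voltage from the Nernst equation; the seawater and river-water concentrations are assumed textbook values, not measurements from the article, and real membranes deliver less than this ideal figure.

```python
# Rough estimates only: gross salinity-gradient power for the Dutch river flow
# quoted above, plus the ideal open-circuit Nernst voltage per (perfectly
# selective) membrane for assumed typical concentrations.
import math

# Gross potential from the equivalent 135 m water column
flow = 3.3e6        # kg/s of fresh water (average Dutch river discharge)
g = 10.0            # m/s^2, rounded as in the text
head = 135.0        # m of equivalent water column
print(f"Gross potential: {flow * g * head / 1e9:.1f} GW")     # ~4.5 GW

# Ideal voltage across one ideally selective membrane (Nernst equation)
R, T, F = 8.314, 298.0, 96485.0   # gas constant, temperature (K), Faraday constant
c_sea, c_river = 0.5, 0.01        # mol/L, assumed typical concentrations
E = (R * T / F) * math.log(c_sea / c_river)
print(f"Ideal membrane voltage: {E * 1000:.0f} mV")           # on the order of 100 mV
```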
Development
In 2006 a 50 kW plant was located at a coastal test site in Harlingen, the Netherlands, the focus being on prevention of biofouling of the anode, cathode, and membranes and increasing the membrane performance. In 2007 the Directorate for Public Works and Water Management, Redstack, and ENECO signed a declaration of intent for development of a pilot plant on the Afsluitdijk in the Netherlands. The plant was put into service on 26 November 2014 and produces 50 kW of electricity to show the technical feasibility in real-life conditions using fresh IJsselmeer water and salt water from the Wadden Sea. Theoretically, with 1m3/s river water and an equal amount of sea water, approximately 1 MW of renewable electricity can be recovered at this location by upscaling the plant. It is to be expected that after this phase the installation could be further expanded to a final capacity of 200 MW.
The main disadvantage of reverse electrodialysis electricity production is the high capital cost involved. Ion exchange membranes are very expensive and the power produced per unit membrane area is low. As a consequence, the return on investment is much lower than for other renewable energy sources such as wind or solar.
See also
Osmotic power
Van 't Hoff factor
Pressure-retarded osmosis (PRO)
Electrodialysis reversal (EDR)
Reverse osmosis
Semipermeable membrane
Green energy
Renewable energy
References
External links
Wetsus
KEMA
KEMA
Dutch Research Database
Osmotic Energy (1995)
Salinity Power UN Report
Practical Potential of Reverse Electrodialysis, Environ. Sci. Technol., July 29, 2009
Dutch water plan to turn green energy blue
Fuel cells
Sustainable technologies
Sustainable energy
Energy conversion
Membrane technology | Reversed electrodialysis | Chemistry | 641 |
38,061,896 | https://en.wikipedia.org/wiki/71%20Ophiuchi | 71 Ophiuchi is a single star in the equatorial constellation of Ophiuchus. It is visible to the naked eye as a faint, yellow-hued point of light with an apparent visual magnitude of 4.64. The star is located approximately 273 light years away from the Sun based on parallax, and is moving closer with a radial velocity of −3 km/s.
At the estimated age of 400 million years, this is an aging giant star with a stellar classification of G8III, having exhausted the supply of hydrogen at its core and expanded to around 13 times the Sun's radius. It is a red clump giant, which means it is on the horizontal branch and is generating energy through helium fusion at its core. The star has 2.9 times the mass of the Sun and is radiating 89 times the Sun's luminosity from its swollen photosphere at an effective temperature of 5,001 K.
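As an illustrative consistency check (not taken from the cited sources), the Stefan–Boltzmann law relates the quoted radius and effective temperature to a luminosity of the same order as the quoted value:

```python
# Stefan-Boltzmann check in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
# The solar effective temperature is the IAU nominal value; the stellar values
# are the figures quoted in the article.
T_eff = 5001.0   # K, 71 Ophiuchi
T_sun = 5772.0   # K, nominal solar effective temperature
radius = 13.0    # stellar radius in solar radii

luminosity = radius**2 * (T_eff / T_sun)**4
print(f"L = {luminosity:.0f} L_sun")   # about 95 L_sun, consistent with the quoted ~89 L_sun
```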
References
G-type giants
Horizontal-branch stars
Ophiuchus
BD+08 3582
Ophiuchi, 71
165760
088765
6770 | 71 Ophiuchi | Astronomy | 227 |
57,183,208 | https://en.wikipedia.org/wiki/Israel%20Halperin%20Prize | The Israel Halperin Prize is awarded every five years by the Canadian Annual Symposium on Operator Theory and Operator Algebras to a member of the Canadian mathematical community who has recently obtained a doctorate and has made contributions to operator theory or operator algebras. It honors Israel Halperin, the founder of a group of researchers in operator algebras and operator theory at the University of Toronto who strongly influenced the field across Canada. The prize was first awarded in 1980.
Recipients
The recipients of the prize are:
1980: Man-Duen Choi
1985: Kenneth R. Davidson
1985: David Handelman
1990: Ian F. Putnam
1995: Nigel Higson
2000: Guihua Gong
2000: Alexandru Nica
2010: Andrew Toms
2015: Serban Belinschi
2015: Zhuang Niu
2020: Matthew Kennedy
2020: Aaron Tikuisis
See also
List of mathematics awards
References
Mathematics awards | Israel Halperin Prize | Technology | 176 |
3,103,995 | https://en.wikipedia.org/wiki/IBM%20WebFountain | WebFountain is an Internet analytical engine implemented by IBM for the study of unstructured data on the World Wide Web. IBM describes WebFountain as:
. . . a set of research technologies that collect, store and analyze massive amounts of unstructured and semi-structured text. It is built on an open, extensible platform that enables the discovery of trends, patterns and relationships from data.
The project represents one of the first comprehensive attempts to catalog and interpret the unstructured data of the Web in a continuous fashion. To this end its supporting researchers at IBM have investigated new systems for the precise retrieval of subsets of the information on the Web, real-time trend analysis, and meta-level analysis of the available information of the Web.
Factiva, an information retrieval company owned by Dow Jones and Reuters, licensed WebFountain in September 2003, and has been building software which utilizes the WebFountain engine to gauge corporate reputation. Factiva reportedly offers yearly subscriptions to the service for $200,000. Factiva has since decided to explore other technologies, and has severed its relationship with WebFountain.
WebFountain is developed at IBM's Almaden research campus in the Bay Area of California.
IBM has developed software, called UIMA (Unstructured Information Management Architecture), that can be used for analysis of unstructured information. It can help perform trend analysis across documents, determine the theme and gist of documents, and allow fuzzy searches on unstructured documents.
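As a purely hypothetical illustration of the simplest kind of trend analysis mentioned above (this is not the UIMA or WebFountain API; the documents and names are invented for the example), one can count how often a term appears per time period across a collection:

```python
# Toy term-trend counter over dated documents (illustrative only).
from collections import Counter

documents = [                     # hypothetical (period, text) pairs
    ("2004-01", "IBM announces WebFountain analytics"),
    ("2004-02", "analysts discuss corporate reputation analytics"),
    ("2004-02", "WebFountain used for reputation monitoring"),
]

def term_trend(docs, term):
    """Count, per period, how many documents mention the term."""
    trend = Counter()
    for period, text in docs:
        if term.lower() in text.lower():
            trend[period] += 1
    return dict(trend)

print(term_trend(documents, "reputation"))   # {'2004-02': 2}
```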
References
External links
IBM Almaden Research Center WebFountain overview
WebFountain on John Battelle's Searchblog
Zdnet article "Drinking from the Fire Hydrant"
IBM sets out to make sense of the Web, February 5, 2004
IBM Joins Corporate Monitoring Space with Release of Public Image Monitoring Solution, Search Engine Watch, November 9, 2005
WebFountain | IBM WebFountain | Technology | 390 |
37,470,470 | https://en.wikipedia.org/wiki/Taphrina%20caerulescens | Taphrina caerulescens is a species of fungus in the family Taphrinaceae. It is a pathogenic Ascomycete fungus that causes oak leaf blister disease on various species of oak trees (Quercus spp.). The associated anamorph species is Lalaria coccinea, described in 1990. This disease causes lesions and blisters on oak leaves. Effects of the disease are mostly cosmetic. Although not taxonomically defined, strains of T. caerulescens have been shown to be host specific with varying ascus morphology between strains. There are differences in strains' abilities to metabolize various carbon and nitrogen compounds. This has been proposed as a method of taxonomically defining subspecies within T. caerulescens.
Taphrina caerulescens is very closely related to Taphrina deformans, which causes peach leaf curl. These two pathogens have indistinguishable asci. However, T. deformans infects peach tree species while T. caerulescens infects Oak tree species only.
Hosts and symptoms
Taphrina caerulescens infects about 50 different species of oak (Quercus), predominantly red oak (Erythrobalanus) and some white oak (Leucobalanus). Oak leaf blister is found across the United States and in various parts of the world but is most severe in the southeastern and Gulf states of the U.S.
It is generally accepted that a T. caerulescens strain isolated from one host cannot be used to infect a different host species. This indicates that there are a number of different strains within T. caerulescens. For instance, it has been observed that a single, heavily infected oak tree of one species can be surrounded by various other susceptible oak species which remain symptomless of oak leaf blister the entire season. In a study by Taylor & Birdwell, pathogen isolates from water oak, live oak, and southern red oak were used to inoculate live oak as the host. Asci developed on the live oak only from pathogen isolates originating from live oak, further indicating host specificity. The extent of strains' host specificity is not fully known and no taxonomic specifications are in place to name these strains. Various strains have also been shown to differ in their nitrogen and carbon compound metabolic profiles.
Symptoms
In the lab, symptoms develop about four weeks after inoculation. In the field, symptoms are most prominent on the top side of the leaf. Grey lesions 3–20 mm in diameter appear in early spring on the bottom side of the leaf, with raised blisters or bulges on the top side of the leaf. By midsummer these lesions may coalesce, causing significant amounts of necrosis per leaf. This can cause the leaf to curl as well as premature defoliation (more common where disease is more severe) of upwards of 85%. Top-side chlorosis normally corresponds with the lesions on the bottom side of the leaf. Reduced overall growth of the tree may result, but is not common.
Microscopic symptoms
Cells around the blisters resemble meristematic cells, with denser cytoplasms and smaller vacuoles. Mycelial development is sparse on the leaf surface. Hyphal growth is subcuticular and intercellular in the epidermal layer. There is no evidence of hyphae growing into the mesophyll layer, nor into epidermal and cuticle layers beyond the lesions, whereas T. deformans does cross into the mesophyll layer.
Lower epidermal cells in diseased tissue are elongated. Anticlinal divisions are likely while periclinal and oblique divisions are definitive. There are no observable effects on guard cells. Cells in the mesophyll layer remain mostly unchanged; there is a slight reduction in chloroplast number in palisade cells, with occasional degenerate chloroplasts and a slightly lower chloroplast number in mesophyll cells.
The upper epidermal cells appear to remain relatively normal as well, as hyphal growth goes below this layer. Healthy epidermal cells contain a large central vacuole surrounded by a thin cytoplasmic layer with endoplasmic reticulum, chloroplasts with well-developed grana, starch granules, and osmophilic globules. Other organelles are infrequently present as well. Epidermal cells of diseased tissue have highly irregular cell walls. The most dramatic changes are within the cell: the large central vacuole is replaced with a number of small, irregularly shaped vacuoles containing a highly electron-dense material, nuclei are enlarged, and there is a 2–3 fold increase in the number of organelles normally present in these cell types. Chloroplasts become large and irregular, with large starch granules inside of them as well as other internal alterations. These alterations indicate intense metabolic activity, with the cells appearing to dedifferentiate and resemble meristematic cells. Symptoms typically occur in early summer. Concave depressions with asci in them on the top and bottom of leaves suggest direct penetration.
Disease cycle
Taphrina caerulescens is an ascomycete fungus. T. caerulescens, as is true for all species of Taphrina, has distinct parasitic and saprophytic stages. The two stages have varying morphologies. Saprophytic somatic cells overwinter in bark crevices and bud scales. In early spring these spores infect young leaves as they first emerge. Infection triggers hyperplasia and hypertrophy, likely due to the production and excretion of hormones called cytokinins by Taphrina species, and other symptoms as described above. Taphrina species have been shown to produce cytokinin. A layer of asci emerges within the lesions on the bottom of the leaf. Ascus initials are found among mature asci. The ascus initials are cytoplasmic, containing lipid droplets and glycogen granules. Mature asci develop from the initials by nuclear fusion followed by mitosis. These asci sit naked on the leaf surface with no ascocarp. The mature asci are long single cells, unitruncate, and contain ascospores. With no special mechanism, ascospores are all forcibly discharged from the ascus to sit on the leaf surface.
Blastospores and conidia bud directly, with a noticeable constriction point, from these ascospores while in the ascus as well as after they have discharged. Germination of spores has been observed in the ascus as well. These spores may then infect tissue or remain in a saprophytic stage. In the lab, 20% of conidia form germ tubes by 48 hours after release from the ascus. Germ tubes protrude from the apical end of conidia. Extending hyphae are long and thin, sometimes branch, and in an appressed manner appear to follow the leaf contours in growth. These hyphae grow over guard cells and enter the leaf tissue through open stomata. Two germ tubes may even form from the same spore, the second protruding from the opposite end of the cell. This is the only known method of entry by T. caerulescens, no direct penetration or appresoria have been observed in this species. Thus, infection occurs only on the underside of leaves, where stomata are found. In the saprophytic stage the single celled spores grow like yeast, that is to say they bud (replicate) directly. Spores may be disseminated by wind or rain. These cells then overwinter and serve as inoculum the following spring.
Environment
Taphrina caerulescens, as with all other Taphrina species, thrives in cool, wet environments. Environmental conditions play a large role in the success of Taphrina species because they are highly dependent upon leaf surface moisture to infect budding leaf tissue. A similar species to T. caerulescens is the more studied T. deformans (causing peach leaf curl), which requires temperatures below 16°C and 19°C during the infection and incubation periods, respectively, for the pathogen to infect successfully. Some scientists argue that the pathogen is especially successful at such low temperatures not necessarily because it is so well adapted to low temperatures, but in part because at higher temperatures the plant is able to outgrow the infection, as evidenced by hyphal growth being present in leaf tissue both below and above 16°C.
Control
Management recommendations for oak leaf blister primarily focus on mitigating other stressors to the tree. Watering and fertilizing infected trees can help reduce stress on the tree and can reduce disease symptoms. Sanitation methods such as removing fallen leaves in autumn can reduce disease inoculum for the following spring. Fungicide application is not a necessary management strategy because T. caerulescens does not severely harm plant health and is considered a purely cosmetic disease. If, in rare cases, the disease is severe, fungicides can be applied in spring before the tree buds. Recommended fungicides are Bordeaux mixture, chlorothalonil, liquid copper, and liquid lime sulfur. If homeowners decide fungicides are necessary, plant pathologists recommend hiring professionals to apply the chemicals rather than applying the sprays themselves. Fungicides must be applied before the tree buds to be effective; once the spores have infected the young bud tissue in spring it is too late to reduce disease by fungicide treatment. Application of fungicides can reduce symptoms of T. caerulescens, but once the disease is present it can only be reduced, not eradicated. As with most foliar fungal diseases, it is too late to apply a fungicide if symptoms are already being spotted, and the remaining management options are to reduce inoculum for the next spring by raking leaves, or to apply a chemical spray the following spring to reduce infection.
Importance
Oak leaf blister is not considered a significant threat to tree health and is a cosmetic disease. Although the disease causes very little damage to plant health, it is important because it is found throughout the United States.
Similar species
The blisters and masses of pale spores produced by T. caerulescens can resemble the felty mass caused by the erineum mite, Aceria mackiei; see the reference link for photos comparing the two.
References
External links
Fungal tree pathogens and diseases
Taphrinomycetes
Fungi described in 1848
Taxa named by John Baptiste Henri Joseph Desmazières
Taxa named by Camille Montagne
Fungus species | Taphrina caerulescens | Biology | 2,233 |
1,554,398 | https://en.wikipedia.org/wiki/Minimal%20realization | In control theory, given any transfer function, any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function. The realization is called "minimal" because it describes the system with the minimum number of states.
The minimum number of state variables required to describe a system equals the order of the differential equation; more state variables than the minimum can be defined. For example, a second order system can be defined by two or more state variables, with two being the minimal realization.
Gilbert's realization
Given a matrix transfer function, it is possible to directly construct a minimal state-space realization by using Gilbert's method (also known as Gilbert's realization).
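As an illustrative sketch rather than an implementation of Gilbert's method itself, the snippet below builds a state-space model from a transfer function with SciPy and then tests minimality by checking that the controllability and observability matrices have full rank; the example transfer functions are made up for the demonstration.

```python
# Check whether a state-space realization of a SISO transfer function is minimal
# (i.e. both controllable and observable). Example systems are illustrative only.
import numpy as np
from scipy.signal import tf2ss

def is_minimal(A, B, C):
    """True if (A, B, C) is both controllable and observable."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # Observability matrix [C; CA; ...; C A^(n-1)]
    obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(ctrb) == n and np.linalg.matrix_rank(obsv) == n

# H(s) = (s + 2) / ((s + 1)(s + 2)): the pole-zero cancellation means a
# first-order model suffices, so this second-order realization is not minimal.
A, B, C, D = tf2ss([1, 2], [1, 3, 2])
print(is_minimal(A, B, C))   # False

# H(s) = (s + 3) / ((s + 1)(s + 2)): no cancellation, so the realization is minimal.
A, B, C, D = tf2ss([1, 3], [1, 3, 2])
print(is_minimal(A, B, C))   # True
```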
References
Control theory | Minimal realization | Mathematics | 163 |
32,168,948 | https://en.wikipedia.org/wiki/Sepp%20Hochreiter | Josef "Sepp" Hochreiter (born 14 February 1967) is a German computer scientist. Since 2018 he has led the Institute for Machine Learning at the Johannes Kepler University of Linz after having led the Institute of Bioinformatics from 2006 to 2018. In 2017 he became the head of the Linz Institute of Technology (LIT) AI Lab. Hochreiter is also a founding director of the Institute of Advanced Research in Artificial Intelligence (IARAI). Previously, he was at Technische Universität Berlin, at University of Colorado Boulder, and at the Technical University of Munich. He is a chair of the Critical Assessment of Massive Data Analysis (CAMDA) conference.
Hochreiter has made contributions in the fields of machine learning, deep learning and bioinformatics, most notably the development of the long short-term memory (LSTM) neural network architecture, but also in meta-learning, reinforcement learning and biclustering with application to bioinformatics data.
Scientific career
Long short-term memory (LSTM)
Hochreiter developed the long short-term memory (LSTM) neural network architecture in his diploma thesis in 1991 leading to the main publication in 1997. LSTM overcomes the problem of numerical instability in training recurrent neural networks (RNNs) that prevents them from learning from long sequences (vanishing or exploding gradient). In 2007, Hochreiter and others successfully applied LSTM with an optimized architecture to very fast protein homology detection without requiring a sequence alignment. LSTM networks have also been used in Google Voice for transcription and search, and in the Google Allo chat app for generating response suggestion with low latency.
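For illustration, a minimal NumPy sketch of a single LSTM step is shown below. It follows the now-standard formulation with input, forget and output gates (the forget gate was a later addition to the original 1997 design) and is not Hochreiter's own code; all weights in the usage example are random placeholders.

```python
# One LSTM time step in plain NumPy (illustrative sketch, standard gating).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input (d,); h_prev, c_prev: previous states (n,);
    W: (4n, d), U: (4n, n), b: (4n,) hold the four gates' parameters."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations for all four gates
    i = sigmoid(z[0:n])                 # input gate
    f = sigmoid(z[n:2 * n])             # forget gate
    o = sigmoid(z[2 * n:3 * n])         # output gate
    g = np.tanh(z[3 * n:4 * n])         # candidate cell update
    c = f * c_prev + i * g              # additive cell-state update (the "constant
                                        # error carousel") that mitigates vanishing gradients
    h = o * np.tanh(c)                  # new hidden state
    return h, c

# Toy usage with random placeholder weights
rng = np.random.default_rng(0)
d, n = 3, 4
W, U, b = rng.normal(size=(4 * n, d)), rng.normal(size=(4 * n, n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):       # a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.round(3))
```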
Other machine learning contributions
Beyond LSTM, Hochreiter has developed "Flat Minimum Search" to increase the generalization of neural networks and introduced rectified factor networks (RFNs) for sparse coding
which have been applied in bioinformatics and genetics. Hochreiter introduced modern Hopfield networks with continuous states and applied them to the task of immune repertoire classification.
Hochreiter worked with Jürgen Schmidhuber in the field of reinforcement learning on actor-critic systems that learn by "backpropagation through a model".
Hochreiter has been involved in the development of factor analysis methods with application to bioinformatics, including FABIA for biclustering, HapFABIA for detecting short segments of identity by descent and FARMS for preprocessing and summarizing high-density oligonucleotide DNA microarrays to analyze RNA gene expression.
In 2006, Hochreiter and others proposed an extension of the support vector machine (SVM), the "Potential Support Vector Machine" (PSVM), which can be applied to non-square kernel matrices and can be used with kernels that are not positive definite. Hochreiter and his collaborators have applied PSVM to feature selection, including gene selection for microarray data.
Awards
Hochreiter was awarded the IEEE CIS Neural Networks Pioneer Prize in 2021 for his work on LSTM.
References
External links
Home Page Sepp Hochreiter
1967 births
Living people
German bioinformaticians
Biostatisticians
Computational biology
German artificial intelligence researchers
Machine learning researchers
Academic staff of the Technical University of Munich
Academic staff of Johannes Kepler University Linz
Allianz people | Sepp Hochreiter | Biology | 688 |
19,636,775 | https://en.wikipedia.org/wiki/Maximum%20cut | In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets and , such that the number of edges between and is as large as possible. Finding such a cut is known as the max-cut problem.
The problem can be stated simply as follows. One wants a subset of the vertex set such that the number of edges between and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible.
There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all weights.
Lower bounds
Edwards obtained the following two lower bounds for Max-Cut on a graph G with n vertices and m edges (in (a) G is arbitrary, but in (b) it is connected):
(a) mc(G) ≥ m/2 + (√(8m + 1) − 1)/8,
(b) mc(G) ≥ m/2 + (n − 1)/4.
Bound (b) is often called the Edwards-Erdős bound as Erdős conjectured it. Edwards proved the Edwards-Erdős bound using the probabilistic method; Crowston et al. proved the bound using linear algebra and analysis of pseudo-boolean functions.
The proof of Crowston et al. allows us to extend the Edwards-Erdős bound to the Balanced Subgraph Problem (BSP) on signed graphs, i.e. graphs where each edge is assigned + or –. For a partition of the vertices into two subsets, an edge is balanced if it is positive and its endpoints lie in the same subset, or if it is negative and its endpoints lie in different subsets. BSP aims at finding a partition with the maximum number of balanced edges. The Edwards-Erdős bound gives a lower bound on this maximum for every connected signed graph.
Bound (a) was improved for special classes of graphs: triangle-free graphs, graphs of given maximum degree, -free graphs, etc., see e.g.
Poljak and Turzik extended the Edwards-Erdős bound to weighted Max-Cut:
mc(G, w) ≥ w(G)/2 + w(T_min(G))/4,
where w(G) and w(T_min(G)) are the total weights of G and of its minimum-weight spanning tree T_min(G). Recently, Gutin and Yeo obtained a number of lower bounds for weighted Max-Cut extending the Poljak-Turzik bound for arbitrary weighted graphs and bounds for special classes of weighted graphs.
Computational complexity
The following decision problem related to maximum cuts has been studied widely in theoretical computer science:
Given a graph G and an integer k, determine whether there is a cut of size at least k in G.
This problem is known to be NP-complete. It is easy to see that the problem is in NP: a yes answer is easy to prove by presenting a large enough cut. The NP-completeness of the problem can be shown, for example, by a reduction from maximum 2-satisfiability (a restriction of the maximum satisfiability problem). The weighted version of the decision problem was one of Karp's 21 NP-complete problems; Karp showed the NP-completeness by a reduction from the partition problem.
The canonical optimization variant of the above decision problem is usually known as the Maximum-Cut Problem or Max-Cut and is defined as:
Given a graph G, find a maximum cut.
The optimization variant is known to be NP-Hard.
The opposite problem, that of finding a minimum cut is known to be efficiently solvable via the Ford–Fulkerson algorithm.
Algorithms
Polynomial-time algorithms
As the Max-Cut Problem is NP-hard, no polynomial-time algorithms for Max-Cut in general graphs are known.
Planar graphs
However, in planar graphs, the Maximum-Cut Problem is dual to the route inspection problem (the problem of finding a shortest tour that visits each edge of a graph at least once), in the sense that the edges that do not belong to a maximum cut-set of a graph G are the duals of the edges that are doubled in an optimal inspection tour of the dual graph of G. The optimal inspection tour forms a self-intersecting curve that separates the plane into two subsets, the subset of points for which the winding number of the curve is even and the subset for which the winding number is odd; these two subsets form a cut that includes all of the edges whose duals appear an odd number of times in the tour. The route inspection problem may be solved in polynomial time, and this duality allows the maximum cut problem to also be solved in polynomial time for planar graphs. The Maximum-Bisection problem is known however to be NP-hard.
Approximation algorithms
The Max-Cut Problem is APX-hard, meaning that, unless P = NP, there is no polynomial-time approximation scheme (PTAS) that comes arbitrarily close to the optimal solution. Thus, every known polynomial-time approximation algorithm achieves an approximation ratio strictly less than one.
There is a simple randomized 0.5-approximation algorithm: for each vertex flip a coin to decide to which half of the partition to assign it. In expectation, half of the edges are cut edges. This algorithm can be derandomized with the method of conditional probabilities; therefore there is a simple deterministic polynomial-time 0.5-approximation algorithm as well. One such algorithm starts with an arbitrary partition of the vertices of the given graph and repeatedly moves one vertex at a time from one side of the partition to the other, improving the solution at each step, until no more improvements of this type can be made. The number of iterations is at most |E| because the algorithm improves the cut by at least one edge at each step. When the algorithm terminates, at least half of the edges incident to every vertex belong to the cut, for otherwise moving the vertex would improve the cut. Therefore, the cut includes at least |E|/2 edges.
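A compact sketch of the deterministic local-improvement procedure just described is given below; the dictionary-of-neighbour-sets graph format and the starting partition are illustrative assumptions.

```python
# Local-improvement 0.5-approximation for (unweighted) Max-Cut, as described above:
# start from an arbitrary partition and move a single vertex whenever doing so
# increases the number of cut edges. Each move adds at least one cut edge, so the
# loop runs at most |E| times; on termination the cut has at least |E|/2 edges.
def local_search_max_cut(adj):
    side = {v: 0 for v in adj}             # arbitrary starting partition
    improved = True
    while improved:
        improved = False
        for v in adj:
            same = sum(side[u] == side[v] for u in adj[v])
            if same > len(adj[v]) - same:  # moving v gains at least one cut edge
                side[v] = 1 - side[v]
                improved = True
    cut = sum(side[u] != side[v] for v in adj for u in adj[v]) // 2
    return side, cut

# Example: a 4-cycle, whose maximum cut contains all 4 edges.
print(local_search_max_cut({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```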
The polynomial-time approximation algorithm for Max-Cut with the best known approximation ratio is a method by Goemans and Williamson using semidefinite programming and randomized rounding that achieves an approximation ratio α ≈ 0.878, where α = (2/π) · min over 0 ≤ θ ≤ π of θ/(1 − cos θ).
If the unique games conjecture is true, this is the best possible approximation ratio for maximum cut. Without such unproven assumptions, it has been proven to be NP-hard to approximate the max-cut value with an approximation ratio better than 16/17 ≈ 0.941.
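For illustration, a sketch of the Goemans–Williamson relaxation and its random-hyperplane rounding is given below. It assumes the cvxpy modelling library with a semidefinite-programming-capable solver is available; the weight-matrix input format and variable names are illustrative.

```python
# Goemans-Williamson sketch: solve the SDP relaxation
#   maximize sum_{(i,j) in E} w_ij (1 - X_ij) / 2   s.t.  X PSD, diag(X) = 1,
# then recover unit vectors from X and round them with a random hyperplane.
# W is a symmetric weight matrix, so the objective is divided by 4 (each edge
# appears twice in W). Requires cvxpy plus an SDP-capable solver.
import numpy as np
import cvxpy as cp

def goemans_williamson(W, seed=0):
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                         [X >> 0, cp.diag(X) == 1])
    problem.solve()
    # Factor X = V V^T so that row i of V is the unit vector for vertex i.
    evals, evecs = np.linalg.eigh(X.value)
    V = evecs * np.sqrt(np.clip(evals, 0, None))
    r = np.random.default_rng(seed).standard_normal(V.shape[1])
    side = np.where(V @ r >= 0, 1, -1)      # random-hyperplane rounding
    cut_weight = W[side[:, None] != side[None, :]].sum() / 2
    return side, cut_weight

W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(goemans_williamson(W))                # the 4-cycle: expect a cut of weight 4
```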
There is also an extended analysis of 10 heuristics for this problem, including an open-source implementation.
Parameterized algorithms and kernelization
While it is trivial to prove that the problem of finding a cut of size at least (the parameter) k is fixed-parameter tractable (FPT), it is much harder to show fixed-parameter tractability for the problem of deciding whether a graph G has a cut of size at least the Edwards-Erdős lower bound (see Lower bounds above) plus (the parameter)k. Crowston et al. proved that the problem can be solved in time and admits a kernel of size . Crowston et al. extended the fixed-parameter tractability result to the Balanced Subgraph Problem (BSP, see Lower bounds above) and improved the kernel size to (holds also for BSP). Etscheid and Mnich improved the fixed-parameter tractability result for BSP to and the kernel-size result to vertices.
Applications
Machine learning
Treating its nodes as features and its edges as distances, the max cut algorithm divides a graph in two well-separated subsets. In other words, it can be naturally applied to perform binary classification. Compared to more common classification algorithms, it does not require a feature space, only the distances between elements within.
Theoretical physics
In statistical physics and disordered systems, the Max Cut problem is equivalent to minimizing the Hamiltonian of a spin glass model, most simply the Ising model. For the Ising model on a graph G and only nearest-neighbor interactions, the Hamiltonian is
Here each vertex i of the graph is a spin site that can take a spin value of +1 or −1. A spin configuration partitions the vertices into two sets: those with spin up (S) and those with spin down. We denote by δ(S) the set of edges that connect the two sets. We can then rewrite the Hamiltonian as
Minimizing this energy is equivalent to the min-cut problem or, by setting the graph weights as w_ij = −J_ij, the max-cut problem.
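The correspondence can be spelled out with one line of algebra. In the notation of the paragraph above (couplings J_ij, spins σ_i = ±1, and δ(S) the set of cut edges), edges inside either spin set have σ_i σ_j = +1 while cut edges have σ_i σ_j = −1, so

```latex
H(\sigma) \;=\; -\sum_{(i,j)\in E} J_{ij}\,\sigma_i\sigma_j
          \;=\; -\sum_{(i,j)\in E} J_{ij} \;+\; 2\sum_{(i,j)\in \delta(S)} J_{ij}.
```

The first sum is a constant, so minimizing H(σ) over spin configurations minimizes the total coupling weight across the cut; with weights chosen as w_ij = −J_ij this becomes the weighted max-cut objective.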
Circuit design
The max cut problem has applications in VLSI design.
See also
Minimum cut
Minimum k-cut
Odd cycle transversal, equivalent to asking for the largest bipartite induced subgraph
Unfriendly partition, a related concept for infinite graphs
Notes
References
.
.
Maximum cut (optimisation version) is the problem ND14 in Appendix B (page 399).
.
.
.
.
.
.
.
.
.
Maximum cut (decision version) is the problem ND16 in Appendix A2.2.
Maximum bipartite subgraph (decision version) is the problem GT25 in Appendix A1.2.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
External links
Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, Gerhard Woeginger (2000), "Maximum Cut", in "A compendium of NP optimization problems".
Andrea Casini, Nicola Rebagliati (2012), "A Python library for solving Max Cut"
Graph theory objects
Combinatorial optimization
NP-complete problems
Computational problems in graph theory | Maximum cut | Mathematics | 1,966 |
2,144,540 | https://en.wikipedia.org/wiki/Toxaphene | Toxaphene was an insecticide used primarily for cotton in the southern United States during the late 1960s and the 1970s. Toxaphene is a mixture of over 670 different chemicals and is produced by reacting chlorine gas with camphene. It can be most commonly found as a yellow to amber waxy solid.
Toxaphene was banned in the United States in 1990 and was banned globally by the 2001 Stockholm Convention on Persistent Organic Pollutants. It is a very persistent chemical that can remain in the environment for 1–14 years without degrading, particularly in the soil.
Testing performed on animals, mostly rats and mice, has demonstrated that toxaphene is harmful to animals. Exposure to toxaphene has proven to stimulate the central nervous system, as well as induce morphological changes in the thyroid, liver, and kidneys.
Toxaphene has been shown to cause adverse health effects in humans. The main sources of exposure are through food, drinking water, breathing contaminated air, and direct contact with contaminated soil. Exposure to high levels of toxaphene can cause damage to the lungs, nervous system, liver, kidneys, and in extreme cases, may even cause death. It is thought to be a potential carcinogen in humans, though this has not yet been proven.
Composition
Toxaphene is a synthetic organic mixture composed of over 670 chemicals, formed by the chlorination of camphene (C10H16) to an overall chlorine content of 67–69% by weight. The bulk of the compounds (mostly chlorobornanes, chlorocamphenes, and other bicyclic chloroorganic compounds) found in toxaphene have chemical formulas ranging from C10H11Cl5 to C10H6Cl12, with a mean formula of C10H10Cl8. The formula weights of these compounds range from 308 to 551 grams/mole; the theoretical mean formula has a value of 414 grams/mole. Toxaphene is usually seen as a yellow to amber waxy solid with a piney odor. It is highly insoluble in water but freely soluble in aromatic hydrocarbons and readily soluble in aliphatic organic solvents. It is stable at room temperature and pressure. It is volatile enough to be transported for long distances through the atmosphere.
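The mean formula weight quoted above can be verified with a short calculation; the atomic masses used below are standard reference values.

```python
# Check the theoretical mean formula weight of toxaphene's mean formula C10H10Cl8.
atomic_mass = {"C": 12.011, "H": 1.008, "Cl": 35.453}   # g/mol
mean_formula = {"C": 10, "H": 10, "Cl": 8}
weight = sum(atomic_mass[el] * count for el, count in mean_formula.items())
print(round(weight))   # ≈ 414 g/mol, matching the value given above
```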
Applications
Advertisements for Toxaphene were seen in agricultural periodicals such as Farm Journal as early as 1950.
Toxaphene was primarily used as a pesticide for cotton in the southern United States during the late 1960s and 1970s. It was also used on small grains, maize, vegetables, and soybeans. Outside of the realm of crops, it was also used to control ectoparasites such as lice, flies, ticks, mange, and scab mites on livestock. In some cases it was used to kill undesirable fish species in lakes and streams. The breakdown of usage can be summarized: 85% on cotton, 7% to control insect pests on livestock and poultry, 5% on other field crops, 3% on soybeans, and less than 1% on sorghum.
The first recorded usage of toxaphene was in 1966 in the United States, and by the early to mid 1970s, toxaphene was the United States' most heavily used pesticide. Over 34 million pounds of toxaphene were used annually from 1966 to 1976. As a result of Environmental Protection Agency restrictions, annual toxaphene usage fell to 6.6 million pounds in 1982. In 1990, the EPA banned all usage of toxaphene in the United States. Toxaphene is still used in countries outside the United States but much of this usage has been undocumented. Between 1970 and 1995, global usage of toxaphene was estimated to be 670 million kilograms (1.5 billion pounds).
Production
Toxaphene was first produced in the United States in 1947 although it was not heavily used until 1966. By 1975, toxaphene production reached its peak at 59.4 million pounds annually. Production decreased more than 90% from this value by 1982 due to Environmental Protection Agency restrictions. Overall, an estimated 234,000 metric tons (over 500 million pounds) have been produced in the United States. Between 25% and 35% of the toxaphene produced in the United States has been exported. There are currently 11 toxaphene suppliers worldwide.
Environmental effects
When released into the environment, toxaphene can be quite persistent and exists in the air, soil, and water. In water, it can evaporate easily and is fairly insoluble. Its solubility is 3 mg/L of water at 22 degrees Celsius. Toxaphene breaks down very slowly and has a half-life of up to 12 years in the soil. It is most commonly found in air, soil, and sediment found at the bottom of lakes or streams. It can also be present in many parts of the world where it was never used because toxaphene is able to evaporate and travel long distances through air currents. Toxaphene can eventually be degraded, through dechlorination, in the air using sunlight to break it down. The degradation of toxaphene usually occurs under aerobic conditions. The levels of toxaphene have decreased since its ban. However, due to its persistence, it can still be found in the environment today.
Exposure
The three main paths of exposure to toxaphene are ingestion, inhalation, and absorption. For humans, the main source of toxaphene exposure is through ingested seafood. When toxaphene enters the body, it usually accumulates in fatty tissues. It is broken down through dechlorination and oxidation in the liver, and the byproducts are eliminated through feces.
People that live near an area that has high toxaphene contamination are at high risk to toxaphene exposure through inhalation of contaminated air or direct skin contact with contaminated soil or water. Eating large quantities of fish on a daily basis also increases susceptibility to toxaphene exposure. Finally, exposure is rare, yet possible through drinking water when contaminated by toxaphene runoff from the soil. However, toxaphene has been rarely seen at high levels in drinking water due to toxaphene's nearly complete insolubility in water.
Shellfish, algae, fish and marine mammals have all been shown to exhibit high levels of toxaphene. People in the Canadian Arctic, where a traditional diet consists of fish and marine animals, have been shown to consume ten times the accepted daily intake of toxaphene. Also, blubber from beluga whales in the Arctic was found to have unhealthy and toxic levels of toxaphene.
Health effects
In humans
When inhaled or ingested, sufficient quantities of toxaphene can damage the lungs, nervous system, and kidneys, and may cause death. The major health effects of toxaphene involve central nervous system stimulation leading to convulsive seizures. The dose necessary to induce nonfatal convulsions in humans is about 10 milligrams per kilogram body weight per day. Several deaths linked to toxaphene have been documented in which an unknown quantity of toxaphene was ingested intentionally or accidentally from food contamination. The deaths are attributed to respiratory failure resulting from seizures. Chronic inhalation exposure in humans results in reversible respiratory toxicity.
A study conducted between 1954 and 1972 of male agricultural workers and agronomists exposed to toxaphene and other pesticides showed that there are higher proportions of bronchial carcinoma in the test group than in the unexposed general population. However, toxaphene may not have been the main pesticide responsible for tumor production. Tests on lab animals show that toxaphene causes liver and kidney cancer, so the EPA has classified it as a Group B2 carcinogen, meaning it is a probable human carcinogen. The International Agency for Research on Cancer has classified it as a Group 2B carcinogen.
Toxaphene can be detected in blood, urine, breast milk, and body tissues if a person has been exposed to high levels, but it is removed from the body quickly, so detection has to occur within several days of exposure.
It is not known whether toxaphene can affect reproduction in humans.
In animals
Toxaphene was used to treat mange in cattle in California in the 1970s and there were reports of cattle deaths following the toxaphene treatment.
Chronic oral exposure in animals affects the liver, the kidney, the spleen, the adrenal and thyroid glands, the central nervous system, and the immune system. Toxaphene stimulates the central nervous system by antagonizing inhibitory signalling in neurons, which reduces hyperpolarization and leads to increased neuronal activity.
Regulations
Toxaphene has been found on at least 68 of the 1,699 National Priorities List sites identified by the United States Environmental Protection Agency. Toxaphene has been forbidden in Germany since 1980. Most uses of toxaphene were cancelled in the U.S. in 1982 with the exception of use on livestock in emergency situations, and for controlling insects on banana and pineapple crops in Puerto Rico and the U.S. Virgin Islands. All uses of toxaphene were cancelled in the U.S. in 1990.
Toxaphene has been banned in 37 countries, including Austria, Belize, Brazil, Costa Rica, Dominican Republic, Egypt, the EU, India, Ireland, Kenya, Korea, Mexico, Panama, Singapore, Thailand and Tonga. Its use has been severely restricted in 11 other countries, including Argentina, Colombia, Dominica, Honduras, Nicaragua, Pakistan, South Africa, Turkey, and Venezuela.
In the Stockholm Convention on POPs, which came into effect on 17 May 2004, twelve POPs were listed to be eliminated or their production and use restricted. The OCPs or pesticide-POPs identified on this list have been termed the "dirty dozen" and include aldrin, chlordane, DDT, dieldrin, endrin, heptachlor, hexachlorobenzene, mirex, and toxaphene.
The EPA has determined that lifetime exposure to 0.01 milligrams per liter of toxaphene in the drinking water is not expected to cause any adverse noncancer effects if the only source of exposure is drinking water, and has established the maximum contaminant level (MCL) of toxaphene at 0.003 mg/L. The United States Food and Drug Administration (FDA) uses the same level for the maximum level permissible in bottled water.
The FDA has determined that the concentration of toxaphene in bottled drinking water should not exceed 0.003 milligrams per liter.
The United States Department of Transportation lists toxaphene as a hazardous material and has special requirements for marking, labeling, and transporting the material.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
Trade names
Trade names and synonyms include Chlorinated camphene, Octachlorocamphene, Camphochlor, Agricide Maggot Killer, Alltex, Allotox, Crestoxo, Compound 3956, Estonox, Fasco-Terpene, Geniphene, Hercules 3956, M5055, Melipax, Motox, Penphene, Phenacide, Phenatox, Strobane-T, Toxadust, Toxakil, Vertac 90%, Toxon 63, Attac, Anatox, Royal Brand Bean Tox 82, Cotton Tox MP82, Security Tox-Sol-6, Security Tox-MP cotton spray, Security Motox 63 cotton spray, Agro-Chem Brand Torbidan 28, and Dr Roger's TOXENE.
References
External links
ASTDR ToxFAQs for Toxaphene
CDC - NIOSH Pocket Guide for Chemical Hazards - Chlorinated Camphene
Obsolete pesticides
Organochloride insecticides
IARC Group 2B carcinogens
Endocrine disruptors
Cycloalkenes
Persistent organic pollutants under the Stockholm Convention
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Bicyclic compounds | Toxaphene | Chemistry | 2,646 |
54,040,276 | https://en.wikipedia.org/wiki/Strontium%20perchlorate | Strontium perchlorate is a white powder or colorless crystals with the formula Sr(ClO4)2.
It is a strong oxidizer which gives red flames. It can be used in pyrotechnics; however, usually the more common strontium nitrate is used. It is also used in Liquid Injection Thrust Vector Control (LITVC) in solid-propellant rockets to enable steering control with a simple fixed nozzle.
It can be prepared by oxidizing strontium chlorate with hypochlorites.
Strontium perchlorate forms a crystal structure in the orthorhombic space group Pbca which is comparable to that of calcium perchlorate.
References
Perchlorates
Strontium compounds | Strontium perchlorate | Chemistry | 156 |
40,974,507 | https://en.wikipedia.org/wiki/Biological%20Trace%20Element%20Research | Biological Trace Element Research is a journal established in 1979 and published by Springer Science+Business Media. The editor-in-chief is M.F. Flores-Arce (International Association of Bioorganic Scientists). According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.738.
References
External links
Springer Science+Business Media academic journals
Academic journals established in 1979
Biochemistry journals
Monthly journals | Biological Trace Element Research | Chemistry | 86 |
3,205,368 | https://en.wikipedia.org/wiki/Text-based%20game | A text game or text-based game is an electronic game that uses a text-based user interface, that is, the user interface employs a set of encodable characters,
such as ASCII, instead of bitmap or vector graphics.
Text-based games have been documented since at least the 1960s, when teleprinters were interfaced with mainframe computers as a form of input, with the output printed on paper. Notable titles were developed for teleprinter-equipped computers in the 1960s and 1970s, and more numerous game titles were developed for video terminals from at least the mid-1970s. Text-based games reached their peak popularity in that decade and the 1980s, and continued as early online games into the mid-1990s.
Although generally replaced in favor of video games that use non-textual graphics, text-based games continue to be written by independent developers. They helped instigate several genres of video gaming, especially adventure and role-playing video games.
Overview
Strictly speaking, text-based means employing an encoding system of characters designed to be printable as text data. Because computers operate on binary data, such encodings map each character to a fixed pattern of bits, a bit being the smallest unit of data with two possible values and a fixed-length group of bits, typically a byte, representing one character. That said, a text-based game is any electronic game whereby information is conveyed as encoded text in the user interface.
Although technically graphical when displayed on a computer monitor, text data is sometimes contrasted with graphics as the former is text-only; data representation conveyed via an output device is restricted to a given set of encodable characters and the total number thereof, as well as graphical capabilities. For example, ASCII uses 96 printable characters in its set of 128, whereas ANSI uses both ASCII and 128 additional characters from extended ASCII and allows the text to be variously colored, allowing for further possibilities. Text data also has the advantage of requiring small processing power and minimal graphical capabilities by modern standards, as well as significantly reducing production costs compared to graphical data.
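The practical difference between plain ASCII output and ANSI-coloured output can be seen with two print statements; the escape sequences below are standard ANSI colour codes, but whether a given terminal renders them is environment-dependent.

```python
# Plain ASCII output versus ANSI-coloured output. The "\x1b[..m" escape sequences
# select colours on ANSI-capable terminals (31 = red, 33 = yellow, 0 = reset).
print("@ = player, # = wall, . = floor")                                # ASCII only
print("\x1b[31m@\x1b[0m = player, \x1b[33m#\x1b[0m = wall, . = floor")  # with colour
```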
History
Text-based games trace as far back as teleprinters in the 1960s, when they were installed on early mainframe computers as an input-and-output form. At that time, video terminals were expensive and being experimented as "glass teletypes", and the user would submit commands via the teleprinter interfaced with the mainframe, the output being printed on paper. Notable early mainframe games include The Sumerian Game, Lunar Lander, The Oregon Trail, and Star Trek.
In the mid-1970s, when video terminals became the cheapest means for multiple users to interact with mainframes, text-based games were designed in universities for mainframes partly as an experiment on artificial intelligence, the majority of these games being either based on the 1974 role-playing game Dungeons & Dragons or inspired by J. R. R. Tolkien's works. As with other games, they often lacked functionalities such as saving. Proposed reasons for the absence of the ability to save included the fact that early computer games were often simple and gaming sessions were brief, as well as hardware limitations and costs. This may partly explain why earlier computer games were developed instead under the episodic structure, but such computer games whose source code could be accessed by anyone could be modified, and as designers wrote larger game worlds, gaming sessions lengthened, and the need to resume where left off became inevitable. This started in 1977 with Don Woods' revision of the 1976 text-based adventure game Colossal Cave Adventure (later renamed to Adventure), which saw expanded gameplay and story and, notably, the ability to save.
Text-based games were also early forerunners to online gaming. From the late-1970s until the worldwide dominance of the Internet in the mid-1990s, home computer users could still interact remotely with other computers by using dial-up modems, connecting them via telephone wires. These computers were often directed via text-based terminal emulators to hobbyist-run bulletin board systems (BBSes), which tended to be accessible—often freely—by area codes to cut costs from more distant communications. Without a graphical program for clients, most online computer games could only run using textual graphics, and where the user did have such a program, the often limited bandwidth of the modem made downloading graphics much slower than text. Online games designed for BBSes initially used ASCII as the character set, but since the late-1980s, most BBSes employed colored ANSI art as the graphical standard. These online games became known as "BBS door games", as connecting to a BBS opened the "door" between the client and the games on the BBS.
However, terminal emulators are still in use today, and people continue playing MUDs (multi-user dungeon) and exploring interactive fiction. The Interactive Fiction Competition was established in 1995 to encourage development of and explore independent interactive fiction titles, and has since held annual competitions for who can develop the best such game.
Genres
Although text-based games are not limited to any specific genre, several notable genres started as and were popularized by text-based games.
Text adventure
Text adventures (sometimes synonymously referred to as interactive fiction) are text-based games wherein worlds are described in the narrative and the player submits typically simple commands to interact with the worlds. Colossal Cave Adventure is considered to be the first adventure game, and indeed the name of the genre adventure game is derived from the title. As text-based adventure games reached their peak in popularity in the late 1970s and 1980s, notable text-based adventure titles were released by various developers, including Zork and The Hitchhiker's Guide to the Galaxy by Infocom.
MUD
An MUD (originally Multi-User Dungeon, with later variants Multi-User Dimension and Multi-User Domain), is a multi-user real-time online virtual world. Most MUDs are represented entirely in text, but graphical MUDs are not unknown. MUDs combine elements of role-playing games, hack and slash, interactive fiction, and online chat. Players can read or view depictions of rooms, objects, other players, non-player characters, and actions performed in the virtual world. Players typically interact with each other and the world by typing commands that resemble a natural language.
Roguelike
The roguelike is a subgenre of role-playing video games, characterized by randomization for replayability, permanent death, and turn-based movement. Many early roguelikes featured ASCII graphics. Games are typically dungeon crawls, with many monsters, items, and environmental features. Computer roguelikes usually employ the majority of the keyboard to facilitate interaction with items and the environment. The name of the genre comes from the 1980 game Rogue.
See also
ASCII art
List of text-based computer games
Online text-based role-playing game
References
Video game graphics
Video game terminology
Video games with textual graphics | Text-based game | Technology | 1,425 |
1,925,832 | https://en.wikipedia.org/wiki/Wilhelm%20Kress | Wilhelm Kress (29 July 1836 in Saint Petersburg – 24 February 1913 in Vienna) Born of German (Bavarian) parents in St. Petersburg in 1836. Moved to Vienna in 1873, where his self-propelled flying models attracted much attention. He became a naturalized Austrian.
Life
Kress came to Vienna in 1873, where he developed the first modern delta-flying hang glider in 1877. This hang-glider was a major achievement for the time, when many engineers still struggled with the development of "heavier-than-air" non-powered aircraft. He also displayed rubber band powered flying models called the 'Aeroveloce' in 1877 and 1880.
Around the turn of the century he was one of many contenders worldwide racing to create a breakthrough powered airplane. In 1900 he developed the control stick for aircraft, but did not apply for a patent (instead, a patent was awarded to the French aviator Robert Esnault-Pelterie, who applied for it in 1907). Kress' aircraft, the Drachenflieger, was constructed for water takeoff and achieved some brief hops in 1901 at the Wienerwaldsee near Vienna.
A longer controlled flight was not possible because the engine (made by Daimler) was twice as heavy as Kress had specified in his order and could be operated only at half of its nominal power output. During one of his attempts to take off from water, his plane was destroyed when it became entangled in debris floating in the lake.
See also
Wright brothers
Aviation
List of Austrian scientists
List of Austrians
References
Aviation - The Pioneer Years
1836 births
1913 deaths
Aerospace engineers
Aviation inventors
Engineers from Austria-Hungary
Engineers from the Russian Empire
Emigrants from the Russian Empire to Austria-Hungary
German aviation pioneers | Wilhelm Kress | Engineering | 358 |
37,979,864 | https://en.wikipedia.org/wiki/Termitomyces%20tylerianus | Termitomyces tylerianus is a species of agaric fungus in the family Lyophyllaceae. Found in Africa and China, it was first formally described in 1964. Fruit bodies (mushrooms) grow in groups or clusters near termite nests in deciduous forests. The mushrooms are edible.
References
Edible fungi
Fungi described in 1964
Fungi of Africa
Fungi of Asia
Lyophyllaceae
Fungus species | Termitomyces tylerianus | Biology | 83 |
29,468,225 | https://en.wikipedia.org/wiki/Zinc%20carboxypeptidase | The carboxypeptidase A family can be divided into two subfamilies: carboxypeptidase H (regulatory) and carboxypeptidase A (digestive). Members of the H family have longer C-termini than those of family A, and carboxypeptidase M (a member of the H family) is bound to the membrane by a glycosylphosphatidylinositol anchor, unlike the majority of the M14 family, which are soluble.
The zinc ligands have been determined as two histidines and a glutamate, and the catalytic residue has been identified as a C-terminal glutamate, but these do not form the characteristic metalloprotease HEXXH motif. Members of the carboxypeptidase A family are synthesised as inactive molecules with propeptides that must be cleaved to activate the enzyme. Structural studies of carboxypeptidases A and B reveal the propeptide to exist as a globular domain, followed by an extended alpha-helix; this shields the catalytic site, without specifically binding to it, while the substrate-binding site is blocked by making specific contacts.
Other examples of protein families in this entry include:
Intron maturase
Putative mitochondrial processing peptidase alpha subunit
Superoxide dismutase [Mn] ()
Asparagine synthetase [glutamine-hydrolysing] 3 ()
Glucose-6-phosphate isomerase ()
Human proteins containing this domain
AEBP1; AGBL1; AGBL2; AGBL3; AGBL4; AGBL5; AGTPBP1; CPA1;
CPA2; CPA3; CPA4; CPA5; CPA6; CPB1; CPB2; CPD;
CPE; CPM; CPN1; CPO; CPXM1; CPXM2; CPZ;
References
Protein domains
Zinc enzymes | Zinc carboxypeptidase | Biology | 423 |
23,533,032 | https://en.wikipedia.org/wiki/Bensbach%27s%20bird-of-paradise | Bensbach's bird-of-paradise, also known as Bensbach's riflebird, is a bird in the family Paradisaeidae that is often now considered an intergeneric hybrid between a magnificent riflebird and lesser bird-of-paradise. However, some authors, such as Errol Fuller, believe that it was a distinct and possibly extinct species.
History
Only one adult male specimen is known of this bird, held in the Netherlands Natural History Museum and coming from the Arfak Mountains of north-western New Guinea. It is named after Jacob Bensbach, Dutch Resident at Ternate, who presented the specimen to the museum.
Notes
References
Hybrid birds of paradise
Birds of the Doberai Peninsula
Intergeneric hybrids | Bensbach's bird-of-paradise | Biology | 153 |
70,873,435 | https://en.wikipedia.org/wiki/Axinelline%20A | Axinelline A is a COX-2 inhibitor with the molecular formula C12H15NO6 which is produced by the bacterium Streptomyces axinellae.
References
COX-2 inhibitors
Ethyl esters
Catechols
Benzamides
Triols | Axinelline A | Chemistry | 56 |
3,650,526 | https://en.wikipedia.org/wiki/Van%20Hove%20singularity | In condensed matter physics, a Van Hove singularity is a singularity (non-smooth point) in the density of states (DOS) of a crystalline solid. The wavevectors at which Van Hove singularities occur are often referred to as critical points of the Brillouin zone. For three-dimensional crystals, they take the form of kinks (where the density of states is not differentiable). The most common application of the Van Hove singularity concept comes in the analysis of optical absorption spectra. The occurrence of such singularities was first analyzed by the Belgian physicist Léon Van Hove in 1953 for the case of phonon densities of states.
Theory
Consider a one-dimensional lattice of N particle sites, with each particle site separated by distance a, for a total length of L = Na. Instead of assuming that the waves in this one-dimensional box are standing waves, it is more convenient to adopt periodic boundary conditions:
nλ = L, where λ is the wavelength and n is an integer. (Positive integers will denote forward waves, negative integers will denote reverse waves.) The shortest wavelength needed to describe a wave motion in the lattice is equal to 2a, which corresponds to the largest needed wave number k_max = π/a and to the maximum possible value of n, namely N/2. We may define the density of states g(k)dk as the number of standing waves with wave vector k to k+dk:
Extending the analysis to wavevectors in three dimensions, the density of states in a box of side length L will be
where d³k is a volume element in k-space, and which, for electrons, will need to be multiplied by a factor of 2 to account for the two possible spin orientations. By the chain rule, the DOS in energy space can be expressed as
where ∇_k E is the gradient of the dispersion relation in k-space.
The set of points in k-space which correspond to a particular energy E form a surface in k-space, and the gradient of E will be a vector perpendicular to this surface at every point. The density of states as a function of this energy E satisfies:
where the integral is over the surface of constant E. We can choose a new coordinate system such that is perpendicular to the surface and therefore parallel to the gradient of E. If the coordinate system is just a rotation of the original coordinate system, then the volume element in k-prime space will be
We can then write dE as:
and, substituting into the expression for g(E) we have:
where the term dS is an area element on the constant-E surface. The clear implication of the equation for g(E) is that at the k-points where the dispersion relation E(k) has an extremum, the integrand in the DOS expression diverges. The Van Hove singularities are the features that occur in the DOS function at these k-points.
A detailed analysis shows that there are four types of Van Hove singularities in three-dimensional space, depending on whether the band structure goes through a local maximum, a local minimum or a saddle point. In three dimensions, the DOS itself is not divergent although its derivative is. The function g(E) tends to have square-root singularities, since for the spherical Fermi surface of a free electron gas the dispersion is E = ħ²k²/(2m), so that g(E) ∝ √E.
In two dimensions the DOS is logarithmically divergent at a saddle point, and in one dimension the DOS itself is infinite where ∇_k E is zero.
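The two-dimensional saddle-point singularity can be illustrated numerically. The sketch below histograms the band E(k) = −2t(cos kx + cos ky) of a square-lattice tight-binding model over the Brillouin zone; the model, grid size, and bin count are illustrative assumptions rather than anything taken from the text above.

```python
# Numerical illustration of a 2D Van Hove singularity: the density of states of a
# square-lattice tight-binding band E(k) = -2t (cos kx + cos ky) (lattice constant
# set to 1) peaks at E = 0, where the band has saddle points. In the infinite-size
# limit the peak is a logarithmic divergence.
import numpy as np

t, n = 1.0, 1024
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
kx, ky = np.meshgrid(k, k)
E = -2.0 * t * (np.cos(kx) + np.cos(ky))

dos, edges = np.histogram(E.ravel(), bins=201, range=(-4 * t, 4 * t), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("DOS maximum at E ≈", centers[np.argmax(dos)])   # close to the saddle-point energy 0
```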
Experimental observation
The optical absorption spectrum of a solid is most straightforwardly calculated from the electronic band structure using Fermi's Golden Rule where the relevant matrix element to be evaluated is the dipole operator where is the vector potential and is the momentum operator. The density of states which appears in the Fermi's Golden Rule expression is then the joint density of states, which is the number of electronic states in the conduction and valence bands that are separated by a given photon energy. The optical absorption is then essentially the product of the dipole operator matrix element (also known as the oscillator strength) and the JDOS.
The divergences in the two- and one-dimensional DOS might be expected to be a mathematical formality, but in fact they are readily observable. Highly anisotropic solids like graphite (quasi-2D) and Bechgaard salts (quasi-1D) show anomalies in spectroscopic measurements that are attributable to the Van Hove singularities. Van Hove singularities play a significant role in understanding optical intensities in single-walled carbon nanotubes (SWNTs) which are also quasi-1D systems. Twisted graphene layers also show pronounced Van-Hove singularities in the DOS due to the interlayer coupling.
Notes
Electronic band structures | Van Hove singularity | Physics,Chemistry,Materials_science | 955 |
2,168,889 | https://en.wikipedia.org/wiki/Asset%20allocation | Asset allocation is the implementation of an investment strategy that attempts to balance risk versus reward by adjusting the percentage of each asset in an investment portfolio according to the investor's risk tolerance, goals and investment time frame. The focus is on the characteristics of the overall portfolio. Such a strategy contrasts with an approach that focuses on individual assets.
Description
Many financial experts argue that asset allocation is an important factor in determining returns for an investment portfolio. Asset allocation is based on the principle that different assets perform differently in different market and economic conditions.
A fundamental justification for asset allocation is the notion that different asset classes offer returns that are not perfectly correlated, hence diversification reduces the overall risk in terms of the variability of returns for a given level of expected return. Asset diversification has been described as "the only free lunch you will find in the investment game". Academic research has painstakingly explained the importance and benefits of asset allocation and the problems of active management (see academic studies section below).
Although the risk is reduced as long as correlations are not perfect, it is typically forecast (wholly or in part) based on statistical relationships (like correlation and variance) that existed over some past period. Expectations for return are often derived in the same way. Studies of these forecasting methods constitute an important direction of academic research.
When such backward-looking approaches are used to forecast future returns or risks using the traditional mean-variance optimization approach to the asset allocation of modern portfolio theory (MPT), the strategy is, in fact, predicting future risks and returns based on history. As there is no guarantee that past relationships will continue in the future, this is one of the "weak links" in traditional asset allocation strategies as derived from MPT. Other, more subtle weaknesses include seemingly minor errors in forecasting leading to recommended allocations that are grossly skewed from investment mandates and/or impractical—often even violating an investment manager's "common sense" understanding of a tenable portfolio-allocation strategy.
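A minimal sketch of this backward-looking step is shown below: historical returns are used to estimate expected returns and covariances, and a fully invested minimum-variance portfolio is computed from them. The toy return series is purely illustrative, and the closed-form weights assume short sales are allowed.

```python
# Backward-looking mean-variance step (sketch): estimate means and covariances from
# historical returns, then compute fully invested minimum-variance weights
#   w = S^{-1} 1 / (1' S^{-1} 1).
# The return series below is a made-up illustration, not real data.
import numpy as np

returns = np.array([             # rows: periods; columns: stocks, bonds, cash
    [ 0.08,  0.03, 0.01],
    [-0.12,  0.05, 0.01],
    [ 0.15,  0.01, 0.02],
    [ 0.05,  0.04, 0.01],
    [ 0.10, -0.02, 0.02],
])

mu = returns.mean(axis=0)              # return forecast a full mean-variance optimiser would also use
cov = np.cov(returns, rowvar=False)    # risk forecast (historical covariance)

ones = np.ones(len(mu))
w = np.linalg.solve(cov, ones)
w /= ones @ w                          # normalise so the weights sum to 1
print(dict(zip(["stocks", "bonds", "cash"], np.round(w, 3))))
```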
Asset classes
An asset class is a group of economic resources sharing similar characteristics, such as riskiness and return. There are many types of assets that may or may not be included in an asset allocation strategy.
Traditional assets
The "traditional" asset classes are stocks, bonds, and cash:
Stocks: value, dividend, growth, or sector-specific (or a "blend" of any two or more of the preceding); large-cap versus mid-cap, small-cap or micro-cap; domestic, foreign (developed), emerging or frontier markets
Bonds (fixed income securities more generally): investment-grade or junk (high-yield); government or corporate; short-term, intermediate, long-term; domestic, foreign, emerging markets
Cash and cash equivalents (e.g., deposit account, money market fund)
Allocation among these three provides a starting point. Usually included are hybrid instruments such as convertible bonds and preferred stocks, counting as a mixture of bonds and stocks.
Alternative assets
Other alternative assets that may be considered include:
Valuable economic goods and consumer goods such as precious metals and other valuable tangible goods.
Commercial or residential real estate (also REITs)
Collectibles such as art, coins, or stamps
Insurance products (annuity, life settlements, catastrophe bonds, personal life insurance products, etc.)
Derivatives such as options, collateralized debt, and futures
Foreign currency
Venture capital
Private equity
Distressed securities
Infrastructure
Allocation strategy
There are several types of asset allocation strategies based on investment goals, risk tolerance, time frames and diversification. The most common forms of asset allocation are: strategic, dynamic, tactical, and core-satellite.
Strategic asset allocation
The primary goal of strategic asset allocation is to create an asset mix that seeks to provide the optimal balance between expected risk and return for a long-term investment horizon. Generally speaking, strategic asset allocation strategies are agnostic to economic environments, i.e., they do not change their allocation postures relative to changing market or economic conditions.
Dynamic asset allocation
Dynamic asset allocation is similar to strategic asset allocation in that portfolios are built by allocating to an asset mix that seeks to provide the optimal balance between expected risk and return for a long-term investment horizon. Like strategic allocation strategies, dynamic strategies largely retain exposure to their original asset classes; however, unlike strategic strategies, dynamic asset allocation portfolios will adjust their postures over time relative to changes in the economic environment.
Tactical asset allocation
Tactical asset allocation is a strategy in which an investor takes a more active approach that tries to position a portfolio into those assets, sectors, or individual stocks that show the most potential for perceived gains. While an original asset mix is formulated much like strategic and dynamic portfolio, tactical strategies are often traded more actively and are free to move entirely in and out of their core asset classes.
Core-satellite asset allocation
Core-satellite allocation strategies generally contain a 'core' strategic element making up the most significant portion of the portfolio, while applying a dynamic or tactical 'satellite' strategy that makes up a smaller part of the portfolio. In this way, core-satellite allocation strategies are a hybrid of the strategic and dynamic/tactical allocation strategies mentioned above.
Classification
Industry sectors may be classified according to an industry classification taxonomy (such as the Industry Classification Benchmark). The top-level sectors may be grouped as below:
Morningstar X-ray
Defensive
Consumer Staples
Health Care
Utilities
Sensitive
Energy
Industrials
Technology
Telecommunications
Cyclical
Consumer Discretionary
Basic Materials
Financials
Real Estate
Per the Tactical asset allocation strategy above, an investor may allocate more to cyclical sectors when the economy is showing gains, and more to defensive when it is not.
Academic studies
In 1986, Gary P. Brinson, L. Randolph Hood, and SEI's Gilbert L. Beebower (BHB) published a study about asset allocation of 91 large pension funds measured from 1974 to 1983. They replaced the pension funds' stock, bond, and cash selections with corresponding market indexes. The indexed quarterly return was found to be higher than the pension plan's actual quarterly return. The two quarterly return series' linear correlation was measured at 96.7%, with shared variance of 93.6%. A 1991 follow-up study by Brinson, Singer, and Beebower measured variance of 91.5%. The conclusion of the study was that replacing active choices with simple asset classes worked just as well as, if not even better than, professional pension managers. Also, a small number of asset classes was sufficient for financial planning. Financial advisors often pointed to this study to support the idea that asset allocation is more important than all other concerns, which the BHB study is incorrectly thought to have lumped together as "market timing" but was actually policy selection. One problem with the Brinson study was that the cost factor in the two return series was not clearly discussed. However, in response to a letter to the editor, Hood noted that the returns series were gross of management fees.
In 1997, William Jahnke initiated a debate on this topic, attacking the BHB study in a paper titled "The Asset Allocation Hoax". The Jahnke discussion appeared in the Journal of Financial Planning as an opinion piece, not a peer reviewed article. Jahnke's main criticism, still undisputed, was that BHB's use of quarterly data dampens the impact of compounding slight portfolio disparities over time, relative to the benchmark. One could compound 2% and 2.15% quarterly over 20 years and see the sizable difference in cumulative return. However, the difference is still 15 basis points (hundredths of a percent) per quarter; the difference is one of perception, not fact.
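The size of the compounding effect at issue can be checked directly: 15 basis points per quarter, compounded over the 80 quarters in 20 years, separates the two outcomes noticeably.

```python
# Compounding 2.00% versus 2.15% per quarter over 20 years (80 quarters).
low = 1.0200 ** 80 - 1
high = 1.0215 ** 80 - 1
print(f"{low:.1%} vs {high:.1%}")   # roughly 387% vs 448% cumulative return
```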
In 2000, Ibbotson and Kaplan used five asset classes in their study "Does Asset Allocation Policy Explain 40, 90, or 100 Percent of Performance?". The asset classes included were large-cap US stock, small-cap US stock, non-US stock, US bonds, and cash. Ibbotson and Kaplan examined the 10-year return of 94 US balanced mutual funds versus the corresponding indexed returns. This time, after properly adjusting for the cost of running index funds, the actual returns again failed to beat index returns. The linear correlation between monthly index return series and the actual monthly actual return series was measured at 90.2%, with shared variance of 81.4%. Ibbotson concluded 1) that asset allocation explained 40% of the variation of returns across funds, and 2) that it explained virtually 100% of the level of fund returns. Gary Brinson has expressed his general agreement with the Ibbotson-Kaplan conclusions.
In both studies, it is misleading to make statements such as "asset allocation explains 93.6% of investment return". Even "asset allocation explains 93.6% of quarterly performance variance" leaves much to be desired, because the shared variance could be from pension funds' operating structure. Hood, however, rejects this interpretation on the grounds that pension plans, in particular, cannot cross-share risks and that they are explicitly singular entities, rendering shared variance irrelevant. The statistics were most helpful when used to demonstrate the similarity of the index return series and the actual return series.
A 2000 paper by Meir Statman found that using the same parameters that explained BHB's 93.6% variance result, a hypothetical financial advisor with perfect foresight in tactical asset allocation performed 8.1% better per year, yet the strategic asset allocation still explained 89.4% of the variance. Thus, explaining variance does not explain performance. Statman says that strategic asset allocation is movement along the efficient frontier, whereas tactical asset allocation involves movement of the efficient frontier. A more common sense explanation of the Brinson, Hood, and Beebower study is that asset allocation explains more than 90% of the volatility of returns of an overall portfolio, but will not explain the ending results of your portfolio over long periods of time. Hood notes in his review of the material over 20 years, however, that explaining performance over time is possible with the BHB approach but was not the focus of the original paper.
Bekkers, Doeswijk and Lam (2009) investigate the diversification benefits for a portfolio by distinguishing ten different investment categories simultaneously in a mean-variance analysis as well as a market portfolio approach. The results suggest that real estate, commodities, and high yield add the most value to the traditional asset mix of stocks, bonds, and cash. A study with such broad coverage of asset classes has not been conducted before, not in the context of determining capital market expectations and performing a mean-variance analysis, neither in assessing the global market portfolio.
Doeswijk, Lam and Swinkels (2014) argue that the portfolio of the average investor contains important information for strategic asset allocation purposes. This portfolio shows the relative value of all assets according to the market crowd, which one could interpret as a benchmark or the optimal portfolio for the average investor. The authors determine the market values of equities, private equity, real estate, high yield bonds, emerging debt, non-government bonds, government bonds, inflation linked bonds, commodities, and hedge funds. For this range of assets, they estimate the invested global market portfolio for the period 1990 to 2012. For the main asset categories equities, real estate, non-government bonds, and government bonds they extend the period to 1959 until 2012.
Doeswijk, Lam and Swinkels (2019) show that the global market portfolio realizes a compounded real return of 4.45% per year with a standard deviation of 11.2% from 1960 until 2017. In the inflationary period from 1960 to 1979, the compounded real return of the global market portfolio is 3.24% per year, while this is 6.01% per year in the disinflationary period from 1980 to 2017. The average return during recessions was -1.96% per year, versus 7.72% per year during expansions. The reward for the average investor over the period 1960 to 2017 is a compounded return of 3.39% points above the risk-less rate earned by savers.
Historically, since the 20th century, US equities have outperformed equities of other countries because of the competitive advantage the US has due to its large GDP.
Performance indicators
McGuigan described an examination of funds that were in the top quartile of performance during 1983 to 1993. During the second measurement period of 1993 to 2003, only 28.57% of the funds remained in the top quartile. 33.33% of the funds dropped to the second quartile. The rest of the funds dropped to the third or fourth quartile.
In fact, low cost was a more reliable indicator of performance. Bogle noted that an examination of five-year performance data of large-cap blend funds revealed that the lowest cost quartile funds had the best performance, and the highest cost quartile funds had the worst performance.
Return versus risk trade-off
In asset allocation planning, the decision on the amount of stocks versus bonds in one's portfolio is a very important decision. Simply buying stocks without regard of a possible bear market can result in panic selling later. One's true risk tolerance can be hard to gauge until having experienced a real bear market with money invested in the market. Finding the proper balance is key.
The tables show why asset allocation is important. It determines an investor's future return, as well as the bear market burden that he or she will have to carry successfully to realize the returns.
Problems with asset allocation
There are various reasons why asset allocation fails to work.
Investor behavior is inherently biased. Even though an investor chooses an asset allocation, implementation is a challenge.
Investors agree to asset allocation, but after some good returns, they decide that they really wanted more risk.
Investors agree to asset allocation, but after some bad returns, they decide that they really wanted less risk.
Investors' risk tolerance is not knowable ahead of time.
Security selection within asset classes will not necessarily produce a risk profile equal to the asset class.
The long-run behavior of asset classes does not guarantee their shorter-term behavior.
Different assets are subject to distinct tax treatments and regulatory considerations, which can make asset allocation decisions more complex.
Frequent asset class rebalancing and maintaining a diversified portfolio can lead to substantial costs and fees, which may reduce overall returns.
Accurately predicting the optimal times to invest in or sell out of various asset classes is difficult, and poor timing can adversely affect returns.
See also
Asset location
Economic capital
Efficient-market hypothesis
Performance attribution
References
External links
Asset allocation performance
Model portfolios for buy and hold index investors
Calculator for determining allocation of retirement assets, and related risk questionnaire
Calculator which determines future asset mix based on differing growth rates and contributions
Investment management
Actuarial science
Corporate development | Asset allocation | Mathematics | 3,046 |
7,921,267 | https://en.wikipedia.org/wiki/Diisopromine | Diisopromine or disoprominum, usually as the hydrochloride salt, is a synthetic spasmolytic which neutralizes spastic conditions of the biliary tract and of the sphincter of Oddi. It was discovered at Janssen Pharmaceutica in 1955. It is sold in South Africa under the brand name Agofell syrup as a mixture with sorbitol, and elsewhere as Megabyl.
See also
Fenpiprane
Delucemine
References
Diisopropylamino compounds
Janssen Pharmaceutica
Belgian inventions
Drugs with unknown mechanisms of action
Aromatic compounds | Diisopromine | Chemistry | 128 |
6,486,663 | https://en.wikipedia.org/wiki/Isoaspartate | Isoaspartic acid (isoaspartate, isoaspartyl, β-aspartate) is an aspartic acid residue isomeric to the typical α peptide linkage. It is a β-amino acid, with the side chain carboxyl moved to the backbone. Such a change is caused by a chemical reaction in which the nitrogen atom of the following (N+1) peptide bond (in black at top right of Figure 1) nucleophilically attacks the γ-carbon of the side chain of an asparagine or aspartic acid residue, forming a succinimide intermediate (in red). Hydrolysis of the intermediate results in two products, either aspartic acid (in black at left) or isoaspartic acid, which is a β-amino acid (in green at bottom right). The reaction also results in the deamidation of the asparagine residue. Racemization may occur, leading to the formation of D-amino acids.
Kinetics of isoaspartyl formation
Isoaspartyl formation reactions have been conjectured to be one of the factors that limit the useful lifetime of proteins.
Isoaspartyl formation proceeds much more quickly if the asparagine is followed by a small, flexible residue (such as Gly) that leaves the peptide group open for attack. These reactions also proceed much more quickly at elevated pH (>10) and temperatures.
Repair
L-isoaspartyl methyltransferase repairs isoaspartate and D-aspartate residues by transferring a methyl group onto the side chain carboxyl group of the residue, creating an ester. The ester rapidly and spontaneously converts into the succinimide (red), which then reverts at random into either normal aspartic acid (black) or isoaspartate again (green), allowing another attempt at repair.
References
Biomolecules
Amino acids | Isoaspartate | Chemistry,Biology | 397 |
71,190,099 | https://en.wikipedia.org/wiki/James%20Pinfold | James Lewis Pinfold (born 1950 in Ealing, West London) is a British-Canadian physicist, specializing in particle physics.
Education and career
Pinfold graduated in physics in 1972 with a B.Sc. from Imperial College London and in 1977 with a Ph.D. from the University of London. His Ph.D. thesis was on weak neutral currents, stemming from his work as part of the Gargamelle discovery team. From 1977 to 1989 he held research assistant and senior research assistant positions at CERN (near Geneva) and Fermilab (near Chicago). From 1989 to 1992 he was an associate professor at the Weizmann Institute of Science. At the University of Alberta, he was an associate professor from 1992 to 1996 and a full professor from 1996 to 2016, and since 2016 he has been a distinguished university professor. From 1995 to 2004 he led the University of Alberta's Centre for Subatomic Research (renamed in 2006 the Centre for Particle Physics). Since 2005 he has held a visiting professorship at King's College London. He frequently travels back and forth between the University of Alberta and CERN in Geneva. He is the author or co-author of over 1250 citable publications and has given over 220 invited talks.
Pinfold was from 1988 to 1989 the spokesperson for CERN's WA88 experiment. From 1987 to 1992 he was the spokesperson for the MODAL experiment at CERN's Large Electron Positron Collider (LEP). In the 1990s he was one of the founders of the ATLAS experiment, which was later involved in the Large Hadron Collider (LHC) discovery of the Higgs boson. From 2000 to 2002 he was the deputy spokesperson for ATLAS-Canada. Since 2000 he has been the leader and spokesperson for the MoEDAL experiment. From 2004 to 2010 he was the deputy co-spokesperson for the SLIM experiment.
In 2007, he won an award from ASTech (Alberta Science & Technology Leadership Foundation) for his leadership in starting the Alberta Large-area Time-coincidence Array, or ALTA, Project. This educational and research project "involves spreading out many cosmic-ray detectors over vast areas, connecting them through the Internet, and synchronising their readings with an integrated GPS system. Most of the detectors are run by high school students". In 2013 he was elected a Fellow of the Royal Society of Canada (RSC). In 2018 he received the Izaak Walton Killam Memorial Prize.
Selected publications
1973
References
External links
(MoEDAL experiment spokesperson James Pinfold)
1950 births
Living people
Alumni of Imperial College London
Alumni of the University of London
Academic staff of the University of Alberta
20th-century British physicists
21st-century British physicists
20th-century Canadian physicists
21st-century Canadian physicists
Particle physicists
People associated with CERN
Fellows of the Royal Society of Canada | James Pinfold | Physics | 578 |
2,361,047 | https://en.wikipedia.org/wiki/Page%20layout | In graphic design, page layout is the arrangement of visual elements on a page. It generally involves organizational principles of composition to achieve specific communication objectives.
The high-level page layout involves deciding on the overall arrangement of text and images, and possibly on the size or shape of the medium. It requires intelligence, sentience, and creativity, and is informed by culture, psychology, and what the document authors and editors wish to communicate and emphasize. Low-level pagination and typesetting are more mechanical processes. Given certain parameters such as boundaries of text areas, the typeface, and font size, justification preference can be done in a straightforward way. Until desktop publishing became dominant, these processes were still done by people, but in modern publishing, they are almost always automated. The result might be published as-is (as for a residential phone book interior) or might be tweaked by a graphic designer (as for a highly polished, expensive publication).
Beginning from early illuminated pages in hand-copied books of the Middle Ages and proceeding down to intricate modern magazine and catalog layouts, proper page design has long been a consideration in printed material. With print media, elements usually consist of type (text), images (pictures), and occasionally place-holder graphics for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.
The term page furniture may be used for items on a page other than the main text and images, such as headlines, bylines or image captions.
History and layout technologies
Direct physical page setting
With manuscripts, all of the elements are added by hand, so the creator can determine the layout directly as they create the work, perhaps with an advanced sketch as a guide.
With ancient woodblock printing, all elements of the page were carved directly into the wood, though later layout decisions might need to be made if the printing was transferred onto a larger work, such as a large piece of fabric, potentially with multiple block impressions.
With the Renaissance invention of letterpress printing and cold-metal moveable type, typesetting was accomplished by physically assembling characters using a composing stick into a galley—a long tray. Any images would be created by engraving.
The original document would be a hand-written manuscript; if the typesetting was performed by someone other than the layout artist, markup would be added to the manuscript with instructions as to typeface, font size, and so on. (Even after authors began to use typewriters in the 1860s, originals were still called "manuscripts" and the markup process was the same.)
After the first round of typesetting, a galley proof might be printed in order for proofreading to be performed, either to correct errors in the original, or to make sure that the typesetter had copied the manuscript properly, and correctly interpreted the markup. The final layout would be constructed in a "form" or "forme" using pieces of wood or metal ("furniture") to space out the text and images as desired, a frame known as a chase, and objects which lock down the frame known as quoins. This process is called imposition, and potentially includes arranging multiple pages to be printed on the same sheet of paper which will later be folded and possibly trimmed. An "imposition proof" (essentially a short run of the press) might be created to check the final placement.
The invention of hot metal typesetting in 1884 sped up the typesetting process by allowing workers to produce slugs—entire lines of text—using a keyboard. The slugs were the result of molten metal being poured into molds temporarily assembled by the typesetting machine. The layout process remained the same as with cold metal type, however: assembly into physical galleys.
Paste-up era
Offset lithography allows the bright and dark areas of an image (at first captured on film) to control ink placement on the printing press. This means that if a single copy of the page can be created on paper and photographed, then any number of copies could be printed. The type could be set with a typewriter, or to achieve professional results comparable to letterpress, a specialized typesetting machine. The IBM Selectric Composer, for example, could produce type of different size, different fonts (including proportional fonts), and with text justification. With photoengraving and halftone, physical photographs could be transferred into print directly, rather than relying on hand-made engravings.
The layout process then became the task of creating the paste up, so named because rubber cement or another adhesive would be used to physically paste images and columns of text onto a rigid sheet of paper. Completed pages become known as camera-ready, "mechanical" or "mechanical art".
Phototypesetting was invented in 1945; after keyboard input, characters were shot one-by-one onto a photographic negative, which could then be sent to the print shop directly, or shot onto photographic paper for paste-up. These machines became increasingly sophisticated, with computer-driven models able to store text on magnetic tape.
Computer-aided publishing
As the graphics capabilities of computers matured, they began to be used to render characters, columns, pages, and even multi-page signatures directly, rather than simply summoning a photographic template from a pre-supplied set. In addition to being used as display devices for computer operators, cathode-ray tubes were used to render text for phototypesetting. The curved nature of the CRT display, however, led to distortions of text and art on the screen towards the outer edges of the screen. The advent of "flat screen" monitors (LCD, LED, and more recently OLED) in 1997 eliminated the distortion problems caused by older CRT displays. Flat-panel displays have since almost completely replaced CRT displays.
Printers attached directly to computers allowed them to print documents directly, in multiple copies, or as an original which could be copied on a ditto machine or photocopier. WYSIWYG word processors made it possible for general office users and consumers to make more sophisticated page layouts, use text justification, and use more fonts than were possible with typewriters. Early dot matrix printing was sufficient for office documents but was of too low a quality for professional typesetting. Inkjet printing and laser printing did produce sufficient quality type, and so computers with these types of printers quickly replaced phototypesetting machines.
With modern desktop publishing software such as flagship software Adobe InDesign and cloud-based Marq, the layout process can occur entirely on-screen. (Similar layout options that would be available to a professional print shop making a paste-up are supported by desktop publishing software; in contrast, "word processing" software usually has a much more limited set of layout and typography choices available, trading off flexibility for ease of use for more common applications.) A finished document can be directly printed as the camera-ready version, with no physical assembly required (given a big enough printer). Greyscale images must be either half-toned digitally if being sent to an offset press or sent separately for the print shop to insert into marked areas. Completed works can also be transmitted digitally to the print shop, who may print it themselves, shoot it directly to film, or use computer to plate technology to skip the physical original entirely. PostScript and Portable Document Format (PDF) have become standard file formats for digital transmission.
Digital media (non-paper)
Since the advent of personal computing, page layout skills have expanded to electronic media as well as print media. E-books, PDF documents, and static web pages mirror paper documents relatively closely, but computers can also add multimedia animation, and interactivity. Page layout for interactive media overlaps with interface design and user experience design; an interactive "page" is better known as a graphical user interface (GUI).
Modern web pages are typically produced using HTML for content and general structure, cascading style sheets to control presentation details such as typography and spacing, and JavaScript for interactivity. Since these languages are all text-based, this work can be done in a text editor, or a special HTML editor which may have WYSIWYG features or other aids. Additional technologies such as Macromedia Flash may be used for multimedia content. Web developers are responsible for actually creating a finished document using these technologies, but a separate web designer may be responsible for establishing the layout. A given web designer might be a fluent web developer as well, or may merely be familiar with the general capabilities of the technologies and merely visualize the desired result for the development team.
Projected pages
Projected slides used in presentations or entertainment often have similar layout considerations to printed pages.
The magic lantern and opaque projector were used during lectures in the 1800s, using printed, typed, photographed, or hand-drawn originals. Two sets of photographic film (one negative and one positive) or one reversal film can be used to create positive images that can be projected with light passing through. Intertitles were used extensively in the earliest motion pictures when sound was not available; they are still used occasionally in addition to the ubiquitous vanity cards and credits.
It became popular to use transparent film for presentations (with opaque text and images) using overhead projectors in the 1940s, and slide projectors in the 1950s. Transparencies for overhead projectors could be printed by some photocopiers. Computer presentation programs became available in the 1980s, making it possible to lay out a presentation digitally. Computer-developed presentations could be printed to a transparency with some laser printers, transferred to slides, or projected directly using LCD overhead projectors. Modern presentations are often displayed digitally using a video projector, computer monitor, or large-screen television.
Laying out a presentation presents slightly different challenges than a print document, especially because a person will typically be speaking and referring to the projected pages. Consideration might be given to:
Editing the information presented so it either repeats what the speaker is saying (so the audience can pay attention to either) or only presents information that cannot be conveyed verbally (to avoid dividing audience attention or simply reading slides directly)
Making the slides useful for later reference if printed as handouts or posted online
Pacing, so slides are changed at comfortable intervals, fit the length of the talk, and content order matches the speaker's expectation
Providing a way for the speaker to refer to specific items on the page, such as with color, verbal labels, or a laser pointer
Sizing text and graphics so they can be seen from the back of the room, which limits the amount of information that can be presented on a single slide
Use of animation to add emphasis, introduce information slowly, or be entertaining
Using headers, footers, or repeated elements to make all pages similar so they feel cohesive, or indicate progress
Using titles to introduce new topics or segments
Grids versus templates
Grids and templates are page layout design patterns used in advertising campaigns and multiple-page publications, including websites.
A grid is a set of guidelines, able to be seen in the design process and invisible to the end-user/audience, for aligning and repeating elements on a page. A page layout may or may not stay within those guidelines, depending on how much repetition or variety the design style in the series calls for. Grids are meant to be flexible. Using a grid to lay out elements on the page may require just as much or more graphic design skill than that which was required to design the grid.
In contrast, a template is more rigid. A template involves repeated elements mostly visible to the end-user/audience. Using a template to layout elements usually involves less graphic design skill than that which was required to design the template. Templates are used for minimal modification of background elements and frequent modification (or swapping) of foreground content.
Most desktop publishing software allows for grids in the form of a page filled with coloured lines or dots placed at a specified equal horizontal and vertical distance apart. Automatic margins and booklet spine (gutter) lines may be specified for global use throughout the document. Multiple additional horizontal and vertical lines may be placed at any point on the page. Invisible to the end-user/audience shapes may be placed on the page as guidelines for page layout and print processing as well. Software templates are achieved by duplicating a template data file, or with master page features in a multiple-page document. Master pages may include both grid elements and template elements such as header and footer elements, automatic page numbering, and automatic table of contents features.
Static versus dynamic layouts
Static layouts allow for more control over the aesthetics, and thorough optimization of space around and overlapping irregular-shaped content than dynamic layouts. In web design, this is sometimes referred to as a fixed width layout; but the entire layout may be scalable in size while still maintaining the original proportions, static placement, and style of the content. All raster image formats are static layouts in effect, but a static layout may include searchable text by separating the text from the graphics.
In contrast, electronic pages allow for dynamic layouts with swapping content, personalization of styles, text scaling, image scaling, or reflowable content with variable page sizes, often referred to as fluid or liquid layout. Dynamic layouts are more likely to separate presentation from content, which comes with its own advantages. A dynamic layout lays out all text and images into rectangular areas of rows and columns. As these areas' widths and heights are defined to be percentages of the available screen, they are responsive to varying screen dimensions. They automatically make maximal use of the available space while adapting both to on-screen resizing and to hardware-imposed restrictions. Text may be freely resized to meet users' individual legibility needs without disturbing a given layout's proportions. In this way the content's overall arrangement on screen can remain as it was originally designed.
Static layout design may involve more graphic design and visual art skills, whereas dynamic layout design may involve more interactive design and content management skills to thoroughly anticipate content variation.
Motion graphics do not fit neatly into either category, but may involve layout skills or careful consideration of how the motion may affect the layout. In either case, the element of motion makes it a dynamic layout, but one that warrants motion graphic design more than static graphic design or interactive design.
Electronic pages may utilize both static and dynamic layout features by dividing the pages or by combining the effects. For example, a section of the page such as a web banner may contain static or motion graphics contained within a swapping content area. Dynamic or live text may be wrapped around irregularly shaped images by using invisible spacers to push the text away from the edges. Some computer algorithms can detect the edges of an object that contain transparency and flow content around contours.
Front-end versus back-end
With modern media content retrieval and output technology, there is much overlap between visual communications (front-end) and information technology (back-end). Large print publications (thick books, especially instructional in nature) and electronic pages (web pages) require meta data for automatic indexing, automatic reformatting, database publishing, dynamic page display, and end-user interactivity. Much of the metadata (meta tags) must be hand-coded or specified during the page layout process. This divides the task of page layout between artists and engineers, or tasks the artist/engineer to do both.
More complex projects may require two separate designs: page layout design as the front-end, and function coding as the back-end. In this case, the front-end may be designed using an alternative page layout technology such as image editing software or on paper with hand rendering methods. Most image editing software includes features for converting a page layout for use in a "What You See Is What You Get" (WYSIWYG) editor or features to export graphics for desktop publishing software. WYSIWYG editors and desktop publishing software allow front-end design prior to back-end coding in most cases. Interface design and database publishing may involve more technical knowledge or collaboration with information technology engineering in the front-end. Sometimes, a function on the back-end is to automate the retrieval and arrangement of content on the front end.
Design elements and choices
Page layout might be prescribed to a greater or lesser degree by a house style which might be implemented in a specific desktop publishing template. There might also be relatively little layout to do in comparison to the amount of pagination (as in novels and other books with no figures).
Typical page layout decisions include:
Deciding on the number and size of columns and gutters (gaps between columns)
Placement of intentional whitespace
Size and position of images and figures
Size of page margins
Use of color printing or spot color for emphasis
Use of special effects like overlaying text on an image, runaround and intrusions, or bleeding an image over the page margin
Specific elements to be laid out might include:
Boxouts and sidebars, which present information as asides from the main text flow
Chapter or section titles, or headlines and subheads
Image captions
Notes like footnotes and end notes; bibliography, for example in academic journals or textbooks
Page headers and page footers, the contents of which are usually uniform across content pages and thus automatically duplicated by layout software. The page number is usually included in the header or footer, and the software automatically increments it for each page.
Pull quotes and nut graphs which might be added out of course or to make a short story fit the layout
Table of contents
In newspaper production, final selection and cropping of photographs accompanying stories might be left to the layout editor (since the choice of photo could affect the shape of the area needed, and thus the rest of the layout), or there might be a separate photo editor. Likewise, headlines might be written by the layout editor, a copy editor, or the original author.
To make stories fit the final layout, relatively inconsequential copy tweaks might be made (for example, rephrasing for brevity), or the layout editor might make slight adjustments to typography elements like font size or leading.
Floating block
A floating block in writing and publishing is any graphic, text, table, or other representation that is unaligned from the main flow of text. The use of floating blocks to present pictures and tables is a typical feature of academic writing and technical writing, including scientific articles and books. Floating blocks are normally labeled with a caption or title that describes its contents and a number that is used to refer to the figure from the main text. A common system divides floating block into two separately numbered series, labeled figure (for pictures, diagrams, plots, etc.) and table. An alternative name for figure is image or graphic.
Floating blocks are said to be floating because they are not fixed in position on the page, but rather drift to the side of the page. By placing pictures or other large items on the sides of pages rather than embedding them in the middle of the main flow of text, typesetting is more flexible and interruption to the flow of the narrative is avoided.
For example, an article on geography might have "Figure 1: Map of the world", "Figure 2: Map of Europe", "Table 1: Population of continents", "Table 2: Population of European countries", and so on. Some books will have a table of figures—in addition to the table of contents—that lists centrally all the figures appearing in the work.
Other kinds of floating blocks may be differentiated as well, for example:
Sidebar: For digressions from the main narrative. For example, a technical manual on the usage of a product might include examples of how various people have employed the product in their work in sidebars. Also called an intermezzo. See sidebar (publishing).
Program: Articles and books on computer programming often place code and algorithms in a figure.
Equation: Writing on mathematics may place large blocks of mathematical notation in numbered blocks set apart from the main text.
Presenting layouts under development
A mockup of a layout might be created to get early feedback, usually before all the content is actually ready. Whether for paper or electronic media, the first draft of a layout might be simply a rough paper and pencil sketch. A comprehensive layout for a new magazine might show placeholders for text and images, but demonstrate placement, typographic style, and other idioms intended to set the pattern for actual issues or a particular unfinished issue. A website wireframe is a low-cost way to show layout without doing all the work of creating the final HTML and CSS, and without writing the copy or creating any images.
Lorem ipsum text is often used to avoid the embarrassment any improvised sample copy might cause if accidentally published. Likewise, placeholder images are often labeled "for position only".
See also
Aesthetics
Book design
Canons of page construction
Database publishing
Desktop publishing
Editing
Layout engine
News design
Page margin
Publishing Interchange Language
Slicing
Swiss Style (design)
Web design
References
External links
SGML page at www.xml.org
Symbols – All articles categorized as relating to typographical symbols
TeX Users Group
XML page at www.W3C.org
Book arts
Communication design
Graphic design
Composition in visual art | Page layout | Engineering | 4,364 |
45,388,523 | https://en.wikipedia.org/wiki/Penicillium%20cryptum | Penicillium cryptum is a species of the genus of Penicillium.
See also
List of Penicillium species
References
cryptum
Fungi described in 1986
Fungus species | Penicillium cryptum | Biology | 38 |
4,171,950 | https://en.wikipedia.org/wiki/Constrained%20optimization | In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.
Relation to constraint-satisfaction problems
The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part.
General form
A general constrained minimization problem may be written as follows:

min f(x)
subject to g_i(x) = c_i for i = 1, …, n (equality constraints)
           h_j(x) ≥ d_j for j = 1, …, m (inequality constraints)

where g_i(x) = c_i and h_j(x) ≥ d_j are constraints that are required to be satisfied (these are called hard constraints), and f(x) is the objective function that needs to be optimized subject to the constraints.
In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.
Solution methods
Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.
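A minimal sketch of the penalty-method idea, using a hypothetical toy problem (minimize (x − 2)² + (y − 1)² subject to x + y = 1) and SciPy's general-purpose minimizer; the problem, penalty schedule and solver choice are illustrative assumptions, not a production approach:

```python
import numpy as np
from scipy.optimize import minimize

def penalized(z, mu):
    x, y = z
    f = (x - 2) ** 2 + (y - 1) ** 2   # original objective
    violation = x + y - 1             # equality-constraint residual
    return f + mu * violation ** 2    # quadratic penalty term

z = np.array([0.0, 0.0])
for mu in (1.0, 10.0, 100.0, 1000.0):          # gradually increase the penalty weight
    z = minimize(penalized, z, args=(mu,)).x   # warm-start from the previous solution
print(z)  # approaches the constrained minimizer (1, 0) as mu grows
```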
Equality constraints
Substitution method
For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution. The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize f(x, y) = x·y subject to x + y = 10. The constraint implies y = 10 − x, which can be substituted into the objective function to create p(x) = x(10 − x) = 10x − x². The first-order necessary condition gives p′(x) = 10 − 2x = 0, which can be solved for x = 5 and, consequently, y = 10 − x = 5.
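The substitution reasoning can be checked symbolically; the sketch below (using SymPy) assumes the same toy problem, f(x, y) = x·y subject to x + y = 10:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = x * (10 - x)                         # objective after substituting y = 10 - x
stationary = sp.solve(sp.diff(p, x), x)  # first-order condition p'(x) = 0
print(stationary)                        # [5]  ->  x = 5, and hence y = 10 - x = 5
```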
Lagrange multiplier
If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
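A minimal sketch of the conversion for a hypothetical toy problem (minimize x² + y² subject to x + y = 1): the constrained problem in two variables becomes an unconstrained stationarity problem in three variables, the original two plus one multiplier per equality constraint.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + y**2          # objective
g = x + y - 1            # equality constraint, g = 0
L = f - lam * g          # Lagrangian of the unconstrained problem

# Stationarity of L in all three variables gives the candidate optimum.
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(sols)  # [{x: 1/2, y: 1/2, lambda: 1}]
```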
Inequality constraints
With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.
Linear programming
If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
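A small illustrative example (hypothetical data) using SciPy's linprog, which minimizes c·x subject to A_ub·x ≤ b_ub and bounds on x; maximization is handled by negating the objective:

```python
from scipy.optimize import linprog

c = [-1, -2]                     # maximize x + 2y  ->  minimize -(x + 2y)
A_ub = [[1, 1], [1, 3]]          # x + y <= 4,  x + 3y <= 6
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # optimal point (3, 1) with objective value 5
```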
Nonlinear programming
If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem.
Quadratic programming
If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard.
KKT conditions
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
Branch and bound
Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution.
Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best cost of a solution found so far.
On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible.
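A minimal branch-and-bound sketch for a hypothetical toy problem (choose x_i in {0, 1} to maximize a sum of per-variable values, subject to the hard constraint that exactly K variables equal 1); the bounding function optimistically completes the remaining variables at their best values, so it never underestimates what an extension could achieve, which is exactly the property described above:

```python
# Hypothetical data: value[i] = (value if x_i = 0, value if x_i = 1); exactly K ones required.
value = [(3, 1), (2, 5), (4, 2), (1, 6)]
K = 2

best_value, best_x = float("-inf"), None

def bound(partial_value, i):
    # Optimistic estimate: finish the remaining variables at their most valuable settings,
    # ignoring the cardinality constraint, so it never undershoots the best completion.
    return partial_value + sum(max(v0, v1) for v0, v1 in value[i:])

def search(i, x, partial_value, ones):
    global best_value, best_x
    if bound(partial_value, i) <= best_value:
        return                                 # prune: this branch cannot improve the incumbent
    if i == len(value):
        if ones == K:                          # hard constraint satisfied
            best_value, best_x = partial_value, x[:]
        return
    remaining = len(value) - i - 1
    for v in (0, 1):
        if ones + v <= K and ones + v + remaining >= K:   # keep the constraint satisfiable
            x.append(v)
            search(i + 1, x, partial_value + value[i][v], ones + v)
            x.pop()

search(0, [], 0, 0)
print(best_value, best_x)   # 18, [0, 1, 0, 1] for the data above
```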
A variation of this approach called Hansen's method uses interval methods. It inherently implements rectangular constraints.
First-choice bounding functions
One way for evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. It is not exact, however, because the maximal values of the soft constraints may derive from different evaluations: one soft constraint may be maximal for x = a while another constraint is maximal for x = b.
Russian doll search
This method runs a branch-and-bound algorithm on n problems, where n is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables x_1, …, x_i from the original problem, along with the constraints containing them. After the problem on variables x_{i+1}, …, x_n is solved, its optimal cost can be used as an upper bound while solving the other problems.

In particular, the cost estimate of a solution having x_{i+1}, …, x_n as unassigned variables is added to the cost that derives from the evaluated variables. Virtually, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.
There is similarity between the Russian doll search method and dynamic programming. Like dynamic programming, Russian doll search solves sub-problems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian doll search only uses them as bounds during its search.
Bucket elimination
The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if x is the variable to be removed, C_1, …, C_n are the soft constraints containing it, and y_1, …, y_m are their variables except x, the new soft constraint is defined by:

C(y_1 = a_1, …, y_m = a_m) = max_a Σ_i C_i(x = a, y_1 = a_1, …, y_m = a_m)
Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which the variable is the highest in the order. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of the bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
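A minimal sketch of a single elimination step on hypothetical soft constraints, represented as tables of values; the variable x is removed by maximizing the combined value of the constraints that mention it, following the definition above:

```python
from itertools import product

domain = {"x": (0, 1), "y": (0, 1), "z": (0, 1)}

# Two hypothetical soft constraints containing x, as {assignment: value} tables.
C1 = {(vx, vy): vx * vy + 1 for vx in domain["x"] for vy in domain["y"]}   # scope (x, y)
C2 = {(vx, vz): 2 * vx - vz for vx in domain["x"] for vz in domain["z"]}   # scope (x, z)

# Eliminate x: for each assignment of the remaining variables (y, z), keep the
# best value of x for the combined constraint C1 + C2.
new_constraint = {
    (vy, vz): max(C1[(vx, vy)] + C2[(vx, vz)] for vx in domain["x"])
    for vy, vz in product(domain["y"], domain["z"])
}
print(new_constraint)   # a new soft constraint with scope (y, z)
```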
See also
Constrained least squares
Distributed constraint optimization
Constraint satisfaction problem (CSP)
Constraint programming
Integer programming
Metric projection
Penalty method
Superiorization
References
Further reading
Mathematical optimization
Constraint programming | Constrained optimization | Mathematics | 1,644 |
1,431,663 | https://en.wikipedia.org/wiki/Sulfate-reducing%20microorganism | Sulfate-reducing microorganisms (SRM) or sulfate-reducing prokaryotes (SRP) are a group composed of sulfate-reducing bacteria (SRB) and sulfate-reducing archaea (SRA), both of which can perform anaerobic respiration utilizing sulfate (SO42−) as terminal electron acceptor, reducing it to hydrogen sulfide (H2S). Therefore, these sulfidogenic microorganisms "breathe" sulfate rather than molecular oxygen (O2), which is the terminal electron acceptor reduced to water (H2O) in aerobic respiration.
Most sulfate-reducing microorganisms can also reduce some other oxidized inorganic sulfur compounds, such as sulfite (SO32−), dithionite (S2O42−), thiosulfate (S2O32−), trithionate (S3O62−), tetrathionate (S4O62−), elemental sulfur (S8), and polysulfides (Sn2−). Other than sulfate reduction, some sulfate-reducing microorganisms are also capable of other reactions like disproportionation of sulfur compounds. Depending on the context, "sulfate-reducing microorganisms" can be used in a broader sense (including all species that can reduce any of these sulfur compounds) or in a narrower sense (including only species that reduce sulfate, and excluding strict thiosulfate and sulfur reducers, for example).
Sulfate-reducing microorganisms can be traced back to 3.5 billion years ago and are considered to be among the oldest forms of microbes, having contributed to the sulfur cycle soon after life emerged on Earth.
Many organisms reduce small amounts of sulfates in order to synthesize sulfur-containing cell components; this is known as assimilatory sulfate reduction. By contrast, the sulfate-reducing microorganisms considered here reduce sulfate in large amounts to obtain energy and expel the resulting sulfide as waste; this is known as dissimilatory sulfate reduction. They use sulfate as the terminal electron acceptor of their electron transport chain. Most of them are anaerobes; however, there are examples of sulfate-reducing microorganisms that are tolerant of oxygen, and some of them can even perform aerobic respiration. No growth is observed when oxygen is used as the electron acceptor.
In addition, there are sulfate-reducing microorganisms that can also reduce other electron acceptors, such as fumarate, nitrate (NO3−), nitrite (NO2−), ferric iron (Fe3+), and dimethyl sulfoxide (DMSO).
In terms of electron donor, this group contains both organotrophs and lithotrophs. The organotrophs oxidize organic compounds, such as carbohydrates, organic acids (such as formate, lactate, acetate, propionate, and butyrate), alcohols (methanol and ethanol), aliphatic hydrocarbons (including methane), and aromatic hydrocarbons (benzene, toluene, ethylbenzene, and xylene). The lithotrophs oxidize molecular hydrogen (H2), for which they compete with methanogens and acetogens in anaerobic conditions. Some sulfate-reducing microorganisms can directly use metallic iron (Fe0, also known as zerovalent iron, or ZVI) as an electron donor, oxidizing it to ferrous iron (Fe2+).
Ecological importance and markers
Sulfate occurs widely in seawater, sediment, and water rich in decaying organic material. Sulfate is also found in more extreme environments such as hydrothermal vents, acid mine drainage sites, oil fields, and the deep subsurface, including the world's oldest isolated ground water. Sulfate-reducing microorganisms are common in anaerobic environments where they aid in the degradation of organic materials. In these anaerobic environments, fermenting bacteria extract energy from large organic molecules; the resulting smaller compounds such as organic acids and alcohols are further oxidized by acetogens and methanogens and the competing sulfate-reducing microorganisms.
The toxic hydrogen sulfide is a waste product of sulfate-reducing microorganisms; its rotten egg odor is often a marker for the presence of sulfate-reducing microorganisms in nature. Sulfate-reducing microorganisms are responsible for the sulfurous odors of salt marshes and mud flats. Much of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides. These metal sulfides, such as ferrous sulfide (FeS), are insoluble and often black or brown, leading to the dark color of sludge.
During the Permian–Triassic extinction event (250 million years ago) a severe anoxic event seems to have occurred where these forms of bacteria became the dominant force in oceanic ecosystems, producing copious amounts of hydrogen sulfide.
Sulfate-reducing bacteria also generate neurotoxic methylmercury as a byproduct of their metabolism, through methylation of inorganic mercury present in their surroundings. They are known to be the dominant source of this bioaccumulative form of mercury in aquatic systems.
Uses
Some sulfate-reducing microorganisms can reduce hydrocarbons, and they have been used to clean up contaminated soils. Their use has also been proposed for other kinds of contaminations.
Sulfate-reducing microorganisms are considered a possible way to deal with acid mine waters that are produced by other microorganisms.
Problems caused by sulfate-reducing microorganisms
In engineering, sulfate-reducing microorganisms can create problems when metal structures are exposed to sulfate-containing water: Interaction of water and metal creates a layer of molecular hydrogen on the metal surface; sulfate-reducing microorganisms then oxidize the hydrogen while creating hydrogen sulfide, which contributes to corrosion.
Hydrogen sulfide from sulfate-reducing microorganisms also plays a role in the biogenic sulfide corrosion of concrete. It also occurs in sour crude oil.
Some sulfate-reducing microorganisms play a role in the anaerobic oxidation of methane:
CH4 + SO42- → HCO3− + HS− + H2O
An important fraction of the methane formed by methanogens below the seabed is oxidized by sulfate-reducing microorganisms in the transition zone separating the methanogenesis from the sulfate reduction activity in the sediments. This process is also considered a major sink for sulfate in marine sediments.
In hydraulic fracturing, fluids are used to frack shale formations to recover methane (shale gas) and hydrocarbons. Biocide compounds are often added to the water to inhibit the microbial activity of sulfate-reducing microorganisms, in order to, among other things, avoid anaerobic methane oxidation and the generation of hydrogen sulfide, ultimately minimizing potential production loss.
Biochemistry
Before sulfate can be used as an electron acceptor, it must be activated. This is done by the enzyme ATP-sulfurylase, which uses ATP and sulfate to create adenosine 5′-phosphosulfate (APS). APS is subsequently reduced to sulfite and AMP. Sulfite is then further reduced to sulfide, while AMP is turned into ADP using another molecule of ATP. The overall process thus involves an investment of two molecules of the energy carrier ATP, which must be regained from the reduction.
The enzyme dissimilatory (bi)sulfite reductase, dsrAB (EC 1.8.99.5), that catalyzes the last step of dissimilatory sulfate reduction, is the functional gene most used as a molecular marker to detect the presence of sulfate-reducing microorganisms.
Phylogeny
The sulfate-reducing microorganisms have been treated as a phenotypic group, together with the other sulfur-reducing bacteria, for identification purposes. They are found in several different phylogenetic lines. As of 2009, 60 genera containing 220 species of sulfate-reducing bacteria are known.
Among the Thermodesulfobacteriota the orders of sulfate-reducing bacteria include Desulfobacterales, Desulfovibrionales, and Syntrophobacterales. This accounts for the largest group of sulfate-reducing bacteria, about 23 genera.
The second largest group of sulfate-reducing bacteria is found among the Bacillota, including the genera Desulfotomaculum, Desulfosporomusa, and Desulfosporosinus.
In the Nitrospirota phylum we find sulfate-reducing Thermodesulfovibrio species.
Two more groups that include thermophilic sulfate-reducing bacteria are given their own phyla, the Thermodesulfobacteriota and Thermodesulfobium.
There are also three known genera of sulfate-reducing archaea: Archaeoglobus, Thermocladium and Caldivirga. They are found in hydrothermal vents, oil deposits, and hot springs.
In July 2019, a scientific study of Kidd Mine in Canada discovered sulfate-reducing microorganisms living below the surface. The sulfate reducers discovered in Kidd Mine are lithotrophs, obtaining their energy by oxidizing minerals such as pyrite rather than organic compounds. Kidd Mine is also the site of the oldest known water on Earth.
See also
Anaerobic respiration
Deep biosphere
Extremophile
Microbial metabolism
Microorganism
Quinone-interacting membrane-bound oxidoreductase
Sulfur cycle
References
External links
'Follow the Water': Hydrogeochemical Constraints on Microbial Investigations 2.4 km Below Surface at the Kidd Creek Deep Fluid and Deep Life Observatory, Garnet S. Lollar, Oliver Warr, Jon Telling, Magdalena R. Osburn & Barbara Sherwood Lollar, Received 15 Jan 2019, Accepted 01 Jul 2019, Published online: 18 Jul 2019.
Deep fracture fluids isolated in the crust since the Precambrian era, G. Holland, B. Sherwood Lollar, L. Li, G. Lacrampe-Couloume, G. F. Slater & C. J. Ballentine, Nature volume 497, pages 357–360 (16 May 2013)
Sulfur mass-independent fractionation in subsurface fracture waters indicates a long-standing sulfur cycle in Precambrian rocks, by L. Li, B. A. Wing, T. H. Bui, J. M. McDermott, G. F. Slater, S. Wei, G. Lacrampe-Couloume & B. Sherwood Lollar October 27, 2016. Nature Communications volume 7, Article number: 13252 (2016.)
Earth's mysterious 'deep biosphere' may harbor millions of undiscovered species, By Brandon Specktor, Live Science, December 11, 2018, published online at nbcnews.com.
Bacteria
Martinus Beijerinck
Extremophiles
Environmental microbiology
Ecology
Geomicrobiology
Microbial growth and nutrition | Sulfate-reducing microorganism | Biology,Environmental_science | 2,311 |
15,940,961 | https://en.wikipedia.org/wiki/Shortcut%20model | An important question in statistical mechanics is the dependence of model behaviour on the dimension of the system. The shortcut model was introduced in the course of studying this dependence. The model interpolates between discrete regular lattices of integer dimension.
Introduction
The behaviour of different processes on discrete regular lattices has been studied quite extensively. They show a rich diversity of behaviour, including a non-trivial dependence on the dimension of the regular lattice. In recent years the study has been extended from regular lattices to complex networks. The shortcut model has been used in studying several processes and their dependence on dimension.
Dimension of complex network
Usually, dimension is defined based on the scaling exponent of some property in the appropriate limit. One property one could use is the scaling of volume with distance. For regular lattices the number of nodes within a distance r of node i scales as r^d.
For systems which arise in physical problems one usually can identify some physical space relations among the vertices. Nodes which are linked directly will have more influence on each other than nodes which are separated by several links. Thus, one could define the distance between nodes i and j as the length of the shortest path connecting the nodes.
For complex networks one can define the volume V(r) as the number of nodes within a distance r of node i, averaged over i, and the dimension may be defined as the exponent d which determines the scaling behaviour of the volume with distance. For a vector n = (n_1, n_2, …, n_d), where d is a positive integer, the Euclidean norm is defined as the Euclidean distance from the origin to n, i.e.,

||n||_2 = √(n_1^2 + n_2^2 + … + n_d^2)

However, the definition which generalises to complex networks is the L1 norm,

||n||_1 = |n_1| + |n_2| + … + |n_d|
The scaling properties hold for both the Euclidean norm and the L1 norm. The scaling relation is

V(r) = k r^d

where d is not necessarily an integer for complex networks. k is a geometric constant which depends on the complex network. If the scaling relation above holds, then one can also define the surface area S(r) as the number of nodes which are exactly at a distance r from a given node, and S(r) scales as

S(r) ∝ r^(d−1)
A definition based on the complex network zeta function generalises the definition based on the scaling property of the volume with distance and puts it on a mathematically robust footing.
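As a numerical illustration of the volume-scaling definition, the sketch below (which assumes the networkx library and uses a plain two-dimensional grid as a sanity check rather than a complex network) counts the nodes within shortest-path distance r of a central node and fits the exponent d in V(r) ~ r^d:

```python
import networkx as nx
import numpy as np

G = nx.grid_2d_graph(100, 100)          # regular 2-D lattice, so d should be near 2
dist = nx.single_source_shortest_path_length(G, (50, 50))

radii = np.arange(5, 31)
volumes = np.array([sum(1 for d in dist.values() if d <= r) for r in radii])

# Fit log V(r) = d log r + const to read off the dimension d.
d_est, _ = np.polyfit(np.log(radii), np.log(volumes), 1)
print(d_est)   # close to 2, approaching it from below as the radii grow
```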
Shortcut model
The shortcut model starts with a network built on a one-dimensional regular lattice. One then adds edges to create shortcuts that join remote parts of the lattice to one another. The starting network is a one-dimensional lattice of N vertices with periodic boundary conditions. Each vertex is joined to its neighbors on either side, which results in a system with N edges. The network is extended by taking each node in turn and, with probability p, adding an edge to a new location m nodes distant.
The rewiring process allows the model to interpolate between a one-dimensional regular lattice and a two-dimensional regular lattice. When the rewiring probability p = 0, we have a one-dimensional regular lattice of size N. When p = 1, every node is connected to a new location and the graph is essentially a two-dimensional lattice with m and N/m nodes in each direction. For p between 0 and 1, we have a graph which interpolates between the one and two dimensional regular lattices. The graphs we study are parametrized by the lattice size N, the shortcut distance m, and the rewiring probability p.
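A minimal sketch of the construction described above (assuming the networkx library; the exact rewiring convention used in the original studies may differ in details):

```python
import random
import networkx as nx

def shortcut_graph(N, m, p, seed=0):
    rng = random.Random(seed)
    G = nx.cycle_graph(N)                  # one-dimensional lattice with periodic boundaries
    for i in range(N):
        if rng.random() < p:
            G.add_edge(i, (i + m) % N)     # shortcut to the node m sites away
    return G

G = shortcut_graph(N=1000, m=50, p=0.5)
print(G.number_of_nodes(), G.number_of_edges())
```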
Application to extensiveness of power law potential
One application using the above definition of dimension was to the extensiveness of statistical mechanics systems with a power law potential where the interaction varies with the distance r as 1/r^α. In one dimension the system properties like the free energy do not behave extensively when 0 ≤ α ≤ 1, i.e., they increase faster than N as N → ∞, where N is the number of spins in the system.
Consider the Ising model with the Hamiltonian (with N spins)

H = −(1/2) Σ_{i≠j} J(r_ij) S_i S_j

where S_i are the spin variables, r_ij is the distance between node i and node j, and J(r_ij) are the couplings between the spins. When the couplings have the behaviour J(r_ij) ∝ 1/r_ij^α, we have the power law potential. For a general complex network the condition on the exponent α which preserves extensivity of the Hamiltonian was studied. At zero temperature, the energy per spin is proportional to Σ_j J(r_ij),
and hence extensivity requires that Σ_j J(r_ij) be finite. For a general complex network this sum is proportional to the Riemann zeta function ζ(α). Thus, for the potential to be extensive, one requires ζ(α) to be finite, i.e., α > 1.
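The convergence condition can be illustrated numerically: partial sums of 1/r^α over increasing ranges keep growing for α ≤ 1 but saturate for α > 1, which is what separates the non-extensive from the extensive regime (an illustrative calculation only, not taken from the source):

```python
import numpy as np

# Partial sums of sum_{r=1..N} r**(-alpha) for a few exponents and ranges.
for alpha in (0.5, 1.0, 1.5, 2.0):
    for N in (10**3, 10**4, 10**5):
        s = np.sum(np.arange(1, N + 1, dtype=float) ** -alpha)
        print(f"alpha={alpha:<4} N={N:<7} partial sum = {s:10.2f}")
```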
Other processes which have been studied are self-avoiding random walks, and the scaling of the mean path length with the network size. These studies lead to the interesting result that the dimension transitions sharply as the shortcut probability increases from zero. The sharp transition in the dimension has been explained in terms of the combinatorially large
number of available paths for points separated by distances large compared to 1.
Conclusion
The shortcut model is useful for studying the dimension dependence of different processes. The processes studied include the behaviour of the power law potential as a function of the dimension, the behaviour of self-avoiding random walks, and the scaling of the mean path length. It may be useful to compare the shortcut model with the small-world network, since the definitions have a lot of similarity. In the small-world network also one starts with a regular lattice and adds shortcuts with probability p. However, the shortcuts are not constrained to connect to a node a fixed distance ahead. Instead, the other end of the shortcut can connect to any randomly chosen node. As a result, the small world model tends to a random graph rather than a two-dimensional graph as the shortcut probability is increased.
References
Networks
Statistical mechanics | Shortcut model | Physics | 1,075 |
61,866 | https://en.wikipedia.org/wiki/Max%20Born | Max Born (; 11 December 1882 – 5 January 1970) was a German-British theoretical physicist who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born was awarded the 1954 Nobel Prize in Physics for his "fundamental research in quantum mechanics, especially in the statistical interpretation of the wave function".
Born entered the University of Göttingen in 1904, where he met the three renowned mathematicians Felix Klein, David Hilbert, and Hermann Minkowski. He wrote his PhD thesis on the subject of the stability of elastic wires and tapes, winning the university's Philosophy Faculty Prize. In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. A chance meeting with Fritz Haber in Berlin in 1918 led to discussion of how an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle.
In World War I he was originally placed as a radio operator, but his specialist knowledge led to his being moved to research duties on sound ranging. In 1921 Born returned to Göttingen, where he arranged another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925 Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, for which he was awarded the Nobel Prize in 1954. His influence extended far beyond his own research. Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf all received their PhD degrees under Born at Göttingen, and his assistants included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner.
In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship at the University of Göttingen. He emigrated to the United Kingdom, where he took a job at St John's College, Cambridge, and wrote a popular science book, The Restless Universe, as well as Atomic Physics, which soon became a standard textbook. In October 1936, he became the Tait Professor of Natural Philosophy at the University of Edinburgh, where, working with German-born assistants E. Walter Kellermann and Klaus Fuchs, he continued his research into physics. Born became a naturalised British subject on 31 August 1939, one day before World War II broke out in Europe. He remained in Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970.
Early life
Max Born was born on 11 December 1882 in Breslau (now Wrocław, Poland), which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire, to a family of Jewish descent. He was one of two children born to Gustav Born, an anatomist and embryologist, who was a professor of embryology at the University of Breslau, and his wife Margarethe (Gretchen) née Kauffmann, from a Silesian family of industrialists. She died when Max was four years old, on 29 August 1886. Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang, from his father's second marriage, to Bertha Lipstein. Wolfgang later became Professor of Art History at the City College of New York.
Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen, and Born went there in April 1904. At Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. From the first class he took with Hilbert, Hilbert identified Born as having exceptional abilities and selected him as the lecture scribe, whose function was to write up the class notes for the students' mathematics reading room at the University of Göttingen. Being class scribe put Born into regular, invaluable contact with Hilbert. Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg. The introduction netted Born invitations to the Minkowski household for Sunday dinners. In addition, while performing his duties as scribe and assistant, Born often saw Minkowski at Hilbert's house.
Born's relationship with Klein was more problematic. Born attended a seminar conducted by Klein and professors of applied mathematics, Carl Runge and Ludwig Prandtl, on the subject of elasticity. Although not particularly interested in the subject, Born was obliged to present a paper. He presented one in which, taking the simple case of a curved wire with both ends fixed, he used Hilbert's calculus of variations to determine the configuration that would minimise potential energy and therefore be the most stable. Klein was impressed, and invited Born to submit a thesis on the subject of "Stability of Elastica in a Plane and Space" – a subject near and dear to Klein – which Klein had arranged to be the subject for the prestigious annual Philosophy Faculty Prize offered by the university. Entries could also qualify as doctoral dissertations. Born responded by turning down the offer, as applied mathematics was not his preferred area of study. Klein was greatly offended.
Klein had the power to make or break academic careers, so Born felt compelled to atone by submitting an entry for the prize. Because Klein refused to supervise him, Born arranged for Carl Runge to be his supervisor. Woldemar Voigt and Karl Schwarzschild became his other examiners. Starting from his paper, Born developed the equations for the stability conditions. As he became more interested in the topic, he had an apparatus constructed that could test his predictions experimentally. On 13 June 1906, the rector announced that Born had won the prize. A month later, he passed his oral examination and was awarded his PhD in mathematics magna cum laude.
On graduation, Born was obliged to perform his military service, which he had deferred while a student. He found himself drafted into the German army, and posted to the 2nd Guards Dragoons "Empress Alexandra of Russia", which was stationed in Berlin. His service was brief, as he was discharged early after an asthma attack in January 1907. He then travelled to England, where he was admitted to Gonville and Caius College, Cambridge, and studied physics for six months at the Cavendish Laboratory under J. J. Thomson, George Searle and Joseph Larmor. After Born returned to Germany, the Army re-inducted him, and he served with the elite 1st (Silesian) Life Cuirassiers "Great Elector" until he was again medically discharged after just six weeks' service. He then returned to Breslau, where he worked under the supervision of Otto Lummer and Ernst Pringsheim, hoping to do his habilitation in physics. A minor accident involving Born's black body experiment, a ruptured cooling water hose, and a flooded laboratory, led to Lummer telling him that he would never become a physicist.
In 1905, Albert Einstein published his paper On the Electrodynamics of Moving Bodies about special relativity. Born was intrigued, and began researching the subject. He was devastated to discover that Minkowski was also researching special relativity along the same lines, but when he wrote to Minkowski about his results, Minkowski asked him to return to Göttingen and do his habilitation there. Born accepted. Toeplitz helped Born brush up on his matrix algebra so he could work with the four-dimensional Minkowski space matrices used in the latter's project to reconcile relativity with electrodynamics. Born and Minkowski got along well, and their work made good progress, but Minkowski died suddenly of appendicitis on 12 January 1909. The mathematics students had Born speak on their behalf at the funeral.
A few weeks later, Born attempted to present their results at a meeting of the Göttingen Mathematics Society. He did not get far before he was publicly challenged by Klein and Max Abraham, who rejected relativity, forcing him to terminate the lecture. However, Hilbert and Runge were interested in Born's work, and, after some discussion with Born, they became convinced of the veracity of his results and persuaded him to give the lecture again. This time he was not interrupted, and Voigt offered to sponsor Born's habilitation thesis. Born subsequently published his talk as an article on "The Theory of the Rigid Electron in the Kinematics of the Principle of Relativity", which introduced the concept of Born rigidity. On 23 October Born presented his habilitation lecture on the Thomson model of the atom.
Career
Berlin and Frankfurt
Born settled in as a young academic at Göttingen as a Privatdozent. In Göttingen, Born stayed at a boarding house run by Sister Annie at Dahlmannstraße 17, known as El BoKaReBo. The name was derived from the first letters of the last names of its boarders: "El" for Ella Philipson (a medical student), "Bo" for Born and Hans Bolza (a physics student), "Ka" for Theodore von Kármán (a Privatdozent), and "Re" for Albrecht Renner (another medical student). A frequent visitor to the boarding house was Paul Peter Ewald, a doctoral student of Arnold Sommerfeld on loan to Hilbert at Göttingen as a special assistant for physics. Richard Courant, a mathematician and Privatdozent, called these people the "in group".
In 1912, Born met Hedwig (Hedi) Ehrenberg, the daughter of a Leipzig University law professor, and a friend of Carl Runge's daughter Iris. She was of Jewish background on her father's side, although he had become a practising Lutheran when he got married, as did Max's sister Käthe. Despite never practising his religion, Born refused to convert, and his wedding on 2 August 1913 was a garden ceremony. However, he was baptised as a Lutheran in March 1914 by the same pastor who had performed his wedding ceremony. Born regarded "religious professions and churches as a matter of no importance". His decision to be baptised was made partly in deference to his wife, and partly due to his desire to assimilate into German society. The marriage produced three children: two daughters, Irene, born in 1914, and Margarethe (Gritli), born in 1915, and a son, Gustav, born in 1921. Through marriage, Born is related to jurists Victor Ehrenberg, his father-in-law, and Rudolf von Jhering, his wife's maternal grandfather, as well as to philosopher and theologian Hans Ehrenberg, and is a great uncle of British comedian Ben Elton.
By the end of 1913, Born had published 27 papers, including important work on relativity and the dynamics of crystal lattices (3 with Theodore von Karman), which became a book. In 1914, he received a letter from Max Planck explaining that a new professor extraordinarius chair of theoretical physics had been created at the University of Berlin. The chair had been offered to Max von Laue, but he had turned it down. Born accepted. The First World War was now raging. Soon after arriving in Berlin in 1915, he enlisted in an Army signals unit. In October, he joined the Artillerie Prüfungskommission, the Army's Berlin-based artillery research and development organisation, under Rudolf Ladenburg, who had established a special unit dedicated to the new technology of sound ranging. In Berlin, Born formed a lifelong friendship with Einstein, who became a frequent visitor to Born's home. Within days of the armistice in November 1918, Planck had the Army release Born. A chance meeting with Fritz Haber that month led to discussion of the manner in which an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle.
Even before Born had taken up the chair in Berlin, von Laue had changed his mind, and decided that he wanted it after all. He arranged with Born and the faculties concerned for them to exchange jobs. In April 1919, Born became professor ordinarius and Director of the Institute of Theoretical Physics on the science faculty at the University of Frankfurt am Main. While there, he was approached by the University of Göttingen, which was looking for a replacement for Peter Debye as Director of the Physical Institute. "Theoretical physics," Einstein advised him, "will flourish wherever you happen to be; there is no other Born to be found in Germany today." In negotiating for the position with the education ministry, Born arranged for another chair, of experimental physics, at Göttingen for his long-time friend and colleague James Franck.
In 1919 Elisabeth Bormann joined the Institut für Theoretische Physik as his assistant. She developed the first atomic beams. Working with Born, Bormann was the first to measure the free path of atoms in gases and the size of molecules.
Göttingen
For the 12 years Born and Franck were at the University of Göttingen (1921 to 1933), Born had a collaborator with shared views on basic scientific concepts—a benefit for teaching and research. Born's collaborative approach with experimental physicists was similar to that of Arnold Sommerfeld at the University of Munich, who was ordinarius professor of theoretical physics and Director of the Institute of Theoretical Physics—also a prime mover in the development of quantum theory. Born and Sommerfeld collaborated with experimental physicists to test and advance their theories. In 1922, when lecturing in the United States at the University of Wisconsin–Madison, Sommerfeld sent his student Werner Heisenberg to be Born's assistant. Heisenberg returned to Göttingen in 1923, where he completed his habilitation under Born in 1924, and became a Privatdozent at Göttingen.
In 1919 and 1920, Max Born became concerned about the large number of objections raised against Einstein's relativity, and in the winter of 1919 gave speeches in support of Einstein. Born was paid for his relativity speeches, which helped with expenses through a year of rapid inflation. The speeches, delivered in German, became a book published in 1920, of which Einstein received the proofs before publication. A third edition was published in 1922 and an English translation in 1924. Born represented light speed as a function of curvature: "the velocity of light is much greater for some directions of the light ray than its ordinary value c, and other bodies can also attain much greater velocities."
In 1925, Born and Heisenberg formulated the matrix mechanics representation of quantum mechanics. On 9 July, Heisenberg gave Born a paper entitled Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen ("Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations") to review, and submit for publication. In the paper, Heisenberg formulated quantum theory, avoiding the concrete, but unobservable, representations of electron orbits by using parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University.
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912, and Born had used them in his work on the lattices theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as they did in the matrix formulation of quantum mechanics. With the help of his assistant and former student Pascual Jordan, Born began immediately to make a transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper. A follow-on paper was submitted for publication before the end of the year by all three authors. The result was a surprising formulation:
$$\mathbf{pq} - \mathbf{qp} = \frac{h}{2\pi i}\,\mathbf{I},$$

where p and q were matrices for location and momentum, h is the Planck constant, and I is the identity matrix. The left hand side of the equation is not zero because matrix multiplication is not commutative. This formulation was entirely attributable to Born, who also established that all the elements not on the diagonal of the matrix were zero. Born considered that his paper with Jordan contained "the most important principles of quantum mechanics including its extension to electrodynamics." The paper put Heisenberg's approach on a solid mathematical basis.
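A minimal numerical illustration of this relation (not drawn from the article itself): it uses NumPy and harmonic-oscillator ladder matrices truncated to N×N, with units chosen so that h/2π = 1. Finite matrices can only satisfy the relation approximately; the identity appears on every diagonal entry except the last, which is an artifact of the truncation.

```python
import numpy as np

N = 6                                          # truncation size (assumed; any N >= 2 works)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator, truncated
adag = a.T                                     # creation operator

hbar = 1.0                                     # units with h / (2*pi) = 1, an assumption
q = np.sqrt(hbar / 2.0) * (a + adag)           # position matrix (m = omega = 1)
p = 1j * np.sqrt(hbar / 2.0) * (adag - a)      # momentum matrix

commutator = p @ q - q @ p                     # Born's pq - qp
print(np.round(np.diag(commutator / (-1j * hbar)).real, 6))
# [ 1.  1.  1.  1.  1. -5.]  -> the identity on all but the last diagonal entry,
# which deviates only because the infinite matrices were cut off at N x N.
```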
Born was surprised to discover that Paul Dirac had been thinking along the same lines as Heisenberg. Soon, Wolfgang Pauli used the matrix method to calculate the energy values of the hydrogen atom and found that they agreed with the Bohr model. Another important contribution was made by Erwin Schrödinger, who looked at the problem using wave mechanics. This had a great deal of appeal to many at the time, as it offered the possibility of returning to deterministic classical physics. Born would have none of this, as it ran counter to facts determined by experiment. He formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, which he published in July 1926.
In a letter to Born on 4 December 1926, Einstein made his famous remark regarding quantum mechanics, often paraphrased as 'God does not play dice'.
In 1928, Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics, but Heisenberg alone won the 1932 Prize "for the creation of quantum mechanics, the application of which has led to the discovery of the allotropic forms of hydrogen", while Schrödinger and Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". On 25 November 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration—you, Jordan and I." Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside." In 1954, Heisenberg wrote an article honouring Planck for his insight in 1900, in which he credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye."
Those who received their PhD degrees under Born at Göttingen included Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf. Born's assistants at the University of Göttingen's Institute for Theoretical Physics included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Pascual Jordan, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. Walter Heitler became an assistant to Born in 1928, and completed his habilitation under him in 1929. Born not only recognised talent to work with him, but he "let his superstars stretch past him; to those less gifted, he patiently handed out respectable but doable assignments." Delbrück, and Goeppert-Mayer went on to be awarded Nobel Prizes.
Later life
In January 1933, the Nazi Party came to power in Germany. In May, Born became one of six Jewish professors at Göttingen who were suspended with pay; Franck had already resigned. In twelve years they had built Göttingen into one of the world's foremost centres for physics. Born began looking for a new job, writing to Maria Göppert-Mayer at Johns Hopkins University and Rudi Ladenburg at Princeton University. He accepted an offer from St John's College, Cambridge. At Cambridge, he wrote a popular science book, The Restless Universe, and a textbook, Atomic Physics, that soon became a standard text, going through seven editions. His family soon settled into life in England, with his daughters Irene and Gritli becoming engaged to Welshman Brinley (Bryn) Newton-John and Englishman Maurice Pryce respectively. Born's granddaughter Olivia Newton-John was the daughter of Irene.
Born's position at Cambridge was only a temporary one, and his tenure at Göttingen was terminated in May 1935. He therefore accepted an offer from C. V. Raman to go to Bangalore in 1935. Born considered taking a permanent position there, but the Indian Institute of Science did not create an additional chair for him. In November 1935, the Born family had their German citizenship revoked, rendering them stateless. A few weeks later Göttingen cancelled Born's doctorate. Born considered an offer from Pyotr Kapitsa in Moscow, and started taking Russian lessons from Rudolf Peierls's Russian-born wife Genia. But then Charles Galton Darwin asked Born if he would consider becoming his successor as Tait Professor of Natural Philosophy at the University of Edinburgh, an offer that Born promptly accepted, assuming the chair in October 1936.
In Edinburgh, Born promoted the teaching of mathematical physics. He had two German assistants, E. Walter Kellermann and Klaus Fuchs, and one Scottish assistant, Robert Schlapp, and together they continued to investigate the mysterious behaviour of electrons. Born became a Fellow of the Royal Society of Edinburgh in 1937, and of the Royal Society of London in March 1939. During 1939, he got as many of his remaining friends and relatives still in Germany as he could out of the country, including his sister Käthe, in-laws Kurt and Marga, and the daughters of his friend Heinrich Rausch von Traubenberg. Hedi ran a domestic bureau, placing young Jewish women in jobs. Born received his certificate of naturalisation as a British subject on 31 August 1939, one day before the Second World War broke out in Europe.
Born remained at Edinburgh until he reached the retirement age of 70 in 1952. He retired to Bad Pyrmont, in West Germany, in 1954. In October, he received word that he was being awarded the Nobel Prize. His fellow physicists had never stopped nominating him. Franck and Fermi had nominated him in 1947 and 1948 for his work on crystal lattices, and over the years, he had also been nominated for his work on solid state physics, quantum mechanics and other topics. In 1954, he received the prize for "fundamental research in Quantum Mechanics, especially in the statistical interpretation of the wave function"—something that he had worked on alone. In his Nobel lecture he reflected on the philosophical implications of his work.
In retirement, he continued scientific work, and produced new editions of his books. In 1955 he became one of signatories to the Russell-Einstein Manifesto. He died at age 87 in hospital in Göttingen on 5 January 1970, and is buried in the Stadtfriedhof there, in the same cemetery as Walther Nernst, Wilhelm Weber, Max von Laue, Otto Hahn, Max Planck, and David Hilbert.
Global policy
He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth.
Personal life
Born's wife Hedwig (Hedi) Martha Ehrenberg (1891–1972) was a daughter of the jurist Victor Ehrenberg and Elise von Jhering (a daughter of the jurist Rudolf von Jhering). Born was survived by his wife Hedi and their children Irene, Gritli and Gustav. Singer and actress Olivia Newton-John was a daughter of Irene (1914–2003), while Gustav is the father of musician and academic Georgina Born and actor Max Born (Fellini Satyricon), who are thus also Max's grandchildren. His great-grandchildren include songwriter Brett Goldsmith, singer Tottie Goldsmith, racing car driver Emerson Newton-John, and singer Chloe Rose Lattanzi. Born helped his nephew, the architect Otto Königsberger (1908–1999), obtain a commission in Mysore State.
Awards and honors
1934 – Stokes Medal of Cambridge
1939 – Fellow of the Royal Society
1945 – Makdougall–Brisbane Prize of the Royal Society of Edinburgh
1945 – Gunning Victoria Jubilee Prize of the Royal Society of Edinburgh
1948 – Max Planck Medaille der Deutschen Physikalischen Gesellschaft
1950 – Hughes Medal of the Royal Society of London
1953 – Honorary citizen of the town of Göttingen
1954 – Nobel Prize in Physics. The award was for Born's fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction.
1954 – Nobel Prize Banquet Speech
1954 – Born Nobel Prize Lecture
1956 – Hugo Grotius Medal for International Law, Munich
1959 – Grand Cross of Merit with Star of the Order of Merit of the German Federal Republic
1972 – Max Born Medal and Prize was created by the German Physical Society and the British Institute of Physics. It is awarded annually.
1982 – Ceremony at the University of Göttingen in the 100th Birth Year of Max Born and James Franck, Institute Directors 1921–1933.
1991 – Max-Born-Institut – Institute named in his honor.
2017 – On 11 December 2017, Google showed a Google Doodle, designed by Kati Szilagyi, honouring the 135th anniversary of Born's birth.
Bibliography
During his life, Born wrote several semi-popular and technical books. His volumes on topics like atomic physics and optics were very well received. They are considered classics in their fields, and are still in print. The following is a chronological listing of his major works:
Über das Thomson'sche Atommodell Habilitations-Vortrag (FAM, 1909) – The Habilitation was done at the University of Göttingen, on 23 October 1909.
– Based on Born's lectures at the University of Frankfurt am Main.
Available in English under the title Einstein's Theory of Relativity.
Dynamik der Kristallgitter (Teubner, 1915) – After its publication, the physicist Arnold Sommerfeld asked Born to write an article based on it for the 5th volume of the Mathematical Encyclopedia. The First World War delayed the start of work on this article, but it was taken up in 1919 and finished in 1922. It was published as a revised edition under the title Atomic Theory of Solid States.
Vorlesungen über Atommechanik (Springer, 1925)
Problems of Atomic Dynamics (MIT Press, 1926) – A first account of matrix mechanics being developed in Germany, based on two series of lectures given at MIT, over three months, in late 1925 and early 1926.
Mechanics of the Atom (George Bell & Sons, 1927) – Translated by J. W. Fisher and revised by D. R. Hartree.
Elementare Quantenmechanik (Zweiter Band der Vorlesungen über Atommechanik), with Pascual Jordan. (Springer, 1930) – This was the first volume of what was intended as a two-volume work. This volume was limited to the work Born did with Jordan on matrix mechanics. The second volume was to deal with Erwin Schrödinger's wave mechanics. However, the second volume was not even started by Born, as he believed his friend and colleague Hermann Weyl had written it before he could do so.
Optik: Ein Lehrbuch der elektromagnetischen Lichttheorie (Springer, 1933) – The book was released just as the Borns were emigrating to England.
Moderne Physik (1933) – Based on seven lectures given at the Technischen Hochschule Berlin.
Atomic Physics (Blackie, London, 1935) – Authorized translation of Moderne Physik by John Dougall, with updates.
The Restless Universe (Blackie and Son Limited, 1935) – A popularised rendition of the workshop of nature, translated by Winifred Margaret Deans. Born's nephew, Otto Königsberger, whose successful career as an architect in Berlin was brought to an end when the Nazis took over, was temporarily brought to England to illustrate the book.
Experiment and Theory in Physics (Cambridge University Press, 1943) – The address given at King's College, Newcastle upon Tyne, at the request of the Durham Philosophical Society and the Pure Science Society. An expanded version of the lecture appeared in a 1956 Dover Publications edition.
Natural Philosophy of Cause and Chance (Oxford University Press, 1949) – Based on Born's 1948 Waynflete lectures, given at the College of St. Mary Magdalen, Oxford University. A later edition (Dover, 1964) included two appendices: "Symbol and Reality" and Born's lecture given at the Nobel laureates' 1964 meeting in Lindau, Germany.
A General Kinetic Theory of Liquids with H. S. Green (Cambridge University Press, 1949) – The six papers in this book were reproduced with permission from the Proceedings of the Royal Society.
Dynamical Theory of Crystal Lattices, with Kun Huang. (Oxford, Clarendon Press, 1954)
Max Born The statistical interpretation of quantum mechanics. Nobel Lecture – 11 December 1954.
Physics in My Generation: A Selection of Papers (Pergamon, 1956)
Physik im Wandel meiner Zeit (Vieweg, 1957)
Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, with Emil Wolf. (Pergamon, 1959) – This book is not an English translation of Optik, but rather a substantially new book. Shortly after World War II, a number of scientists suggested that Born update and translate his work into English. Since there had been many advances in optics in the intervening years, updating was warranted. In 1951, Wolf began as Born's private assistant on the book; it was eventually published in 1959 by Robert Maxwell's Pergamon Press, the delay being due to the lengthy time needed "to resolve all the financial and publishing tricks created by Maxwell."
Physik und Politik (VandenHoeck und Ruprecht, 1960)
Zur Begründung der Matrizenmechanik, with Werner Heisenberg and Pascual Jordan (Battenberg, 1962) – Published in honor of Max Born's 80th birthday. This edition reprinted the authors' articles on matrix mechanics published in Zeitschrift für Physik, Volumes 26 and 33–35, 1924–1926.
My Life and My Views: A Nobel Prize Winner in Physics Writes Provocatively on a Wide Range of Subjects (Scribner, 1968) – Part II (pp. 63–206) is a translation of Von der Verantwortung des Naturwissenschaftlers.
Briefwechsel 1916–1955, kommentiert von Max Born with Hedwig Born and Albert Einstein (Nymphenburger, 1969)
The Born–Einstein Letters: Correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born (Macmillan, 1971).
Mein Leben: Die Erinnerungen des Nobelpreisträgers (Munich: Nymphenburger, 1975). Born's published memoirs.
My Life: Recollections of a Nobel Laureate (Scribner, 1978). Translation of Mein Leben.
For a full list of his published papers, see HistCite . For his published works, see Published Works – Berlin-Brandenburgische Akademie der Wissenschaften Akademiebibliothek.
See also
List of things named after Max Born
List of refugees
List of Jewish Nobel laureates
Citations
General references
Reprinted as chapter 7 in Bernstein, Jeremy (2014). A Chorus of Bells and Other Scientific Inquiries.
Also published in Germany: Max Born – Baumeister der Quantenwelt. Eine Biographie Spektrum Akademischer Verlag, 2005, .
External links
American Institute of Physics History Search: Max Born
Encyclopædia Britannica, Max Born – full article
Annotated bibliography for Max Born from the Alsos Digital Library for Nuclear Issues
Freeview video of Gustav Born (son of Max) with conversation and film on Gustav's memories of his father by the Vega Science Trust
Max Born information from Nobel Winners site
including his Nobel Lecture, 11 December 1954 The Statistical Interpretations of Quantum Mechanics
Papers of Professor Max Born (1882–1970) Held at the Edinburgh University Library, Special Collections Division
The Papers of Professor Max Born held at Churchill Archives Centre, Cambridge
Kuhn, Thomas S., John L. Heilbron, Paul Forman, and Lini Allen Sources for History of Quantum Physics (American Philosophical Society, 1967)
Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session I
Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session II
Oral history interview transcript for Max Born on 17 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session III
Oral history interview transcript for Max Born on 18 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session IV
1882 births
1970 deaths
Scientists from Göttingen
20th-century German physicists
Academics of the University of Cambridge
Academics of the University of Edinburgh
Alumni of Gonville and Caius College, Cambridge
20th-century British physicists
British theoretical physicists
Fellows of the Royal Society of Edinburgh
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Foreign members of the USSR Academy of Sciences
German emigrants to Scotland
German Nobel laureates
Academic staff of Goethe University Frankfurt
Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany
Heidelberg University alumni
Honorary members of the USSR Academy of Sciences
Academic staff of the Humboldt University of Berlin
Jewish emigrants from Nazi Germany to the United Kingdom
Jewish German physicists
Members of the German Academy of Sciences at Berlin
Members of the Prussian Academy of Sciences
Nobel laureates in Physics
Optical physicists
People associated with the University of Zurich
People from the Province of Silesia
Scientists from Wrocław
Quantum physicists
Scientists from Frankfurt
Silesian Jews
Theoretical physicists
German theoretical physicists
University of Breslau alumni
University of Göttingen alumni
Academic staff of the University of Göttingen
Winners of the Max Planck Medal
Max
Members of the Göttingen Academy of Sciences and Humanities
Members of the Royal Swedish Academy of Sciences
Ehrenberg family
World Constitutional Convention call signatories
Jewish British physicists | Max Born | Physics | 7,279 |
65,064,571 | https://en.wikipedia.org/wiki/Shopping%20court | A shopping court is a type of neighborhood shopping center that developed, particularly in Greater Los Angeles, in the 1920s. Most had a few boutiques, themed shops (as today in a festival marketplace), and cafes, up to a dozen and sometimes included offices and studios. A linear walkway or patio connected the units, which was relatively new, as up to then, collections of shops under a management or coordination were connected by a public sidewalk, as in Westwood Village or Country Club Plaza. Patios of buildings in Mexico, Latin America and the Mediterranean inspired the design on the shopping court, as those regions also inspired much of the Southern California architecture during that era, e.g. Spanish Colonial Revival architecture. Shopping courts proliferated in the 1930s in affluent residential areas such as Hollywood, Beverly Hills, and Pasadena, and in resorts like Palm Springs and Santa Barbara. They were limited in impact as the scale could not accommodate larger stores and store windows did not draw attention of passing motorists.
Examples
Carmel-by-the-Sea – Carmel Plaza
Carthay Circle – Carthay Center (planned, mostly unbuilt)
Downtown Los Angeles – Olvera Street (in form a pedestrian mall, but its shops, restaurants and stands were selected as in a themed shopping court)
Fairfax District –
Farmers Market (not a true farmer's market)
Town & Country Market
Hollywood – Crossroads of the World
Southwest Los Angeles – Producer's Public Market
Santa Barbara – El Paseo
Ventura – La Floreira
In Mexico:
Pasaje Polanco in Mexico City, 1938; Colonial californiano style
References
Culture of Los Angeles
Architectural terminology
Shopping malls by type | Shopping court | Engineering | 334 |
14,374 | https://en.wikipedia.org/wiki/Haematopoiesis | Haematopoiesis (; ; also hematopoiesis in American English, sometimes h(a)emopoiesis) is the formation of blood cellular components. All cellular blood components are derived from haematopoietic stem cells. In a healthy adult human, roughly ten billion () to a hundred billion () new blood cells are produced per day, in order to maintain steady state levels in the peripheral circulation.
Process
Haematopoietic stem cells (HSCs)
Haematopoietic stem cells (HSCs) reside in the medulla of the bone (bone marrow) and have the unique ability to give rise to all of the different mature blood cell types and tissues. HSCs are self-renewing cells: when they differentiate, at least some of their daughter cells remain as HSCs so the pool of stem cells is not depleted. This phenomenon is called asymmetric division. The other daughters of HSCs (myeloid and lymphoid progenitor cells) can follow any of the other differentiation pathways that lead to the production of one or more specific types of blood cell, but cannot renew themselves. The pool of progenitors is heterogeneous and can be divided into two groups: long-term self-renewing HSCs and transiently self-renewing HSCs, also called short-term HSCs. This is one of the main vital processes in the body.
Cell types
All blood cells are divided into three lineages.
Red blood cells, which are also called erythrocytes, are the oxygen-carrying cells. Erythrocytes are functional, and are released into the blood. The number of reticulocytes, which are immature red blood cells, gives an estimate of the rate of erythropoiesis.
Lymphocytes are the cornerstone of the adaptive immune system. They are derived from common lymphoid progenitors. The lymphoid lineage is composed of T-cells, B-cells, and natural killer cells. This is lymphopoiesis.
Cells of the myeloid lineage, which include granulocytes, megakaryocytes, monocytes, and macrophages, are derived from common myeloid progenitors, and are involved in such diverse roles as innate immunity and blood clotting. This is myelopoiesis.
Granulopoiesis (or granulocytopoiesis) is haematopoiesis of granulocytes, except mast cells which are granulocytes but with an extramedullar maturation.
Thrombopoiesis is haematopoiesis of thrombocytes (platelets).
Terminology
Between 1948 and 1950, the Committee for Clarification of the Nomenclature of Cells and Diseases of the Blood and Blood-forming Organs issued reports on the nomenclature of blood cells. An overview of the terminology is shown below, from earliest to final stage of development:
[root]blast
pro[root]cyte
[root]cyte
meta[root]cyte
mature cell name
The root for erythrocyte colony-forming units (CFU-E) is "rubri", for granulocyte-monocyte colony-forming units (CFU-GM) is "granulo" or "myelo" and "mono", for lymphocyte colony-forming units (CFU-L) is "lympho" and for megakaryocyte colony-forming units (CFU-Meg) is "megakaryo". According to this terminology, the stages of red blood cell formation would be: rubriblast, prorubricyte, rubricyte, metarubricyte, and erythrocyte. However, the following nomenclature seems to be, at present, the most prevalent:
Osteoclasts also arise from hemopoietic cells of the monocyte/neutrophil lineage, specifically CFU-GM.
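As a side note, the root-based scheme described above is regular enough to be expressed programmatically; a small illustrative Python sketch (the roots and affixes are only those listed above, nothing more):

```python
# Roots given above for each colony-forming unit.
ROOTS = {
    "CFU-E": "rubri",
    "CFU-GM (granulocyte)": "granulo",   # "myelo" is an alternative root
    "CFU-GM (monocyte)": "mono",
    "CFU-L": "lympho",
    "CFU-Meg": "megakaryo",
}

def stage_names(root):
    """Earliest to final stage of development, per the 1948-1950 nomenclature."""
    return [f"{root}blast", f"pro{root}cyte", f"{root}cyte", f"meta{root}cyte"]

print(stage_names(ROOTS["CFU-E"]))
# ['rubriblast', 'prorubricyte', 'rubricyte', 'metarubricyte']  -> erythrocyte
```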
Location
In developing embryos, blood formation occurs in aggregates of blood cells in the yolk sac, called blood islands. As development progresses, blood formation occurs in the spleen, liver and lymph nodes. When bone marrow develops, it eventually assumes the task of forming most of the blood cells for the entire organism. However, maturation, activation, and some proliferation of lymphoid cells occurs in the spleen, thymus, and lymph nodes. In children, haematopoiesis occurs in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Extramedullary
In some cases, the liver, thymus, and spleen may resume their haematopoietic function, if necessary. This is called extramedullary haematopoiesis. It may cause these organs to increase in size substantially. During fetal development, since bones and thus the bone marrow develop later, the liver functions as the main haematopoietic organ. Therefore, the liver is enlarged during development. Extramedullary haematopoiesis and myelopoiesis may supply leukocytes in cardiovascular disease and inflammation during adulthood. Splenic macrophages and adhesion molecules may be involved in regulation of extramedullary myeloid cell generation in cardiovascular disease.
Maturation
As a stem cell matures it undergoes changes in gene expression that limit the cell types that it can become and moves it closer to a specific cell type (cellular differentiation). These changes can often be tracked by monitoring the presence of proteins on the surface of the cell. Each successive change moves the cell closer to the final cell type and further limits its potential to become a different cell type.
Cell fate determination
Two models of haematopoiesis have been proposed: the deterministic theory and the stochastic theory. For the stem cells and other undifferentiated blood cells in the bone marrow, the determination is generally explained by the deterministic theory of haematopoiesis, which holds that colony-stimulating factors and other factors of the haematopoietic microenvironment determine the cells to follow a certain path of cell differentiation. This is the classical way of describing haematopoiesis. In the stochastic theory, undifferentiated blood cells differentiate to specific cell types by randomness. This theory has been supported by experiments showing that within a population of mouse haematopoietic progenitor cells, underlying stochastic variability in the distribution of Sca-1, a stem cell factor, subdivides the population into groups exhibiting variable rates of cellular differentiation. For example, under the influence of erythropoietin (an erythrocyte-differentiation factor), a subpopulation of cells (as defined by the levels of Sca-1) differentiated into erythrocytes at a sevenfold higher rate than the rest of the population. Furthermore, it was shown that if allowed to grow, this subpopulation re-established the original subpopulation of cells, supporting the theory that this is a stochastic, reversible process. Another level at which stochasticity may be important is in the process of apoptosis and self-renewal. In this case, the haematopoietic microenvironment prevails upon some of the cells to survive while others undergo apoptosis and die. By regulating this balance between different cell types, the bone marrow can alter the quantity of different cells to ultimately be produced.
Growth factors
Red and white blood cell production is regulated with great precision in healthy humans, and the production of leukocytes is rapidly increased during infection. The proliferation and self-renewal of these cells depend on growth factors. One of the key players in self-renewal and development of haematopoietic cells is stem cell factor (SCF), which binds to the c-kit receptor on the HSC. Absence of SCF is lethal. There are other important glycoprotein growth factors which regulate the proliferation and maturation, such as interleukins IL-2, IL-3, IL-6, IL-7. Other factors, termed colony-stimulating factors (CSFs), specifically stimulate the production of committed cells. Three CSFs are granulocyte-macrophage CSF (GM-CSF), granulocyte CSF (G-CSF) and macrophage CSF (M-CSF). These stimulate granulocyte formation and are active on either progenitor cells or end product cells.
Erythropoietin is required for a myeloid progenitor cell to become an erythrocyte. On the other hand, thrombopoietin makes myeloid progenitor cells differentiate to megakaryocytes (thrombocyte-forming cells). The diagram to the right provides examples of cytokines and the differentiated blood cells they give rise to.
Transcription factors
Growth factors initiate signal transduction pathways, which lead to activation of transcription factors. Growth factors elicit different outcomes depending on the combination of factors and the cell's stage of differentiation. For example, long-term expression of PU.1 results in myeloid commitment, and short-term induction of PU.1 activity leads to the formation of immature eosinophils. Recently, it was reported that transcription factors such as NF-κB can be regulated by microRNAs (e.g., miR-125b) in haematopoiesis.
The first key player in differentiation from an HSC to a multipotent progenitor (MPP) is the transcription factor CCAAT-enhancer binding protein α (C/EBPα). Mutations in C/EBPα are associated with acute myeloid leukaemia. From this point, cells can differentiate either along the erythroid-megakaryocyte lineage or along the lymphoid and myeloid lineages, which have a common progenitor, called the lymphoid-primed multipotent progenitor. Two main transcription factors govern this choice: GATA-1 drives the erythroid-megakaryocyte lineage, while PU.1 leads to the lymphoid-primed multipotent progenitor.
Other transcription factors include Ikaros (B cell development), Gfi1 (which promotes Th2 development and inhibits Th1) and IRF8 (basophils and mast cells). Significantly, certain factors elicit different responses at different stages of haematopoiesis; for example, C/EBPα acts in neutrophil development and PU.1 in monocyte and dendritic cell development. It is important to note that these processes are not unidirectional: differentiated cells may regain attributes of progenitor cells.
An example is the factor PAX5, which is important in B cell development and associated with lymphomas. Surprisingly, Pax5 conditional knock-out mice allowed peripheral mature B cells to de-differentiate to early bone marrow progenitors. These findings show that transcription factors act as caretakers of differentiation level and not only as initiators.
Mutations in transcription factors are tightly connected to blood cancers, such as acute myeloid leukemia (AML) or acute lymphoblastic leukemia (ALL). For example, Ikaros is known to be a regulator of numerous biological events. Mice with no Ikaros lack B cells, natural killer cells and T cells. Ikaros has six zinc finger domains: four form a conserved DNA-binding domain and two mediate dimerization. An important finding is that different zinc fingers are involved in binding to different places in DNA, which explains the pleiotropic effects of Ikaros and its varied involvement in cancer; its mutations are mainly associated with BCR-ABL-positive patients and are a poor prognostic marker.
Other animals
In some vertebrates, haematopoiesis can occur wherever there is a loose stroma of connective tissue and slow blood supply, such as the gut, spleen or kidney.
Unlike eutherian mammals, the liver of newborn marsupials is actively haematopoietic.
See also
Clonal hematopoiesis
Erythropoiesis-stimulating agents
Haematopoietic stimulants:
Granulocyte colony-stimulating factor
Granulocyte macrophage colony-stimulating factor
Leukocyte extravasation
References
Further reading
External links
Hematopoietic cell lineage in KEGG
Hematopoiesis and bone marrow histology
Hematopoiesis
Histology | Haematopoiesis | Chemistry | 2,640 |
2,600,454 | https://en.wikipedia.org/wiki/Swarm%20%28Marvel%20Comics%29 | Swarm (Fritz von Meyer) is a supervillain appearing in American comic books published by Marvel Comics. The character's entire body is composed of bees, and is mainly featured as an enemy of Spider-Man.
Publication history
Swarm first appeared in The Champions #14 (July 1977). He was created by Bill Mantlo and John Byrne.
Fictional character biography
Fritz von Meyer was born in Leipzig, Germany and became one of Adolf Hitler's top scientists specializing in toxicology and melittology. Escaping capture after World War II, he was a beekeeper or apiarist in South America and discovered a colony of mutated bees. Intrigued by their intelligence and passive nature, von Meyer attempted to enslave the queen bee but failed and the bees devoured him, leaving only his skeleton. The bees' unique qualities caused von Meyer's consciousness to be absorbed into them, allowing him to manipulate the hive to do his will while his skeletal remains are inside the swarm itself. His consciousness merged with the hive to the extent that they are one being, calling himself/themselves "Swarm".
Swarm battled the Champions. After being defeated, Swarm resurfaced to battle Spider-Man. In the first of many fights, Spider-Man prevailed against him when the web-slinger's costume was dosed in a new type of insecticide that hurt the bees if they got too close. Swarm lost his/their skeleton in this battle but returned to fight again (no longer having the skeleton but still possessing von Meyer's consciousness), first teaming with Kraven the Hunter against Iceman and Firestar, then against Spider-Man, but feedback from a weapon fired by the Rhino caused Swarm's bee body to disperse temporarily.
Swarm next appears when a Super-Collider from Rand Industries is activated and calls his/their attention. Swarm decides mankind should be exterminated so insects can rule the world. Doctor Druid convinces Swarm that mankind will exterminate itself and the age of insects can then begin. Eventually, Swarm tires of waiting and returns to New York after a psychic wave generated by Onslaught disrupts the psychic field that bonded him and the bees together. He forces a group of scientists investigating energy fields to help him not only restore his original field, but expand it to grant him control of every bee on Earth. As New York City is invaded by bees, the Scarlet Spider tracks the bees to their destination and — taking advantage of the fact that the swarm's instinctive memory of Raid causes the bees to automatically flinch away from Spider-Man — infiltrates the building to contact the scientists. By claiming that the scientists' equipment is having trouble broadcasting a sufficiently powerful signal through the dome of bees, Scarlet Spider is able to trick Swarm into allowing the construction of a device designed to negate the vibrational frequency that the bees create to allow themselves to fly, presenting it as a means of boosting the existing signal's power. With the bees now grounded, Scarlet Spider recovers the queen of Swarm's hive and leaves her in the authorities' care, reasoning that Swarm will not be a future threat without her.
Now back with an internal skeleton, Swarm felt that the criminal organization Pride's fall allowed access to their former territory, specifically Los Angeles. However, they are defeated by the Los Angeles' protector Runaways when their bees' mental link is disrupted by electrical blasts.
Swarm regains control over his colony and joins the Chameleon's Exterminators to kill their shared enemy now that Peter Parker's true identity has been revealed. Swarm attacks Mary Jane Watson, but she sprays him with water while a co-worker smashes his skeleton; the bees reform around the skeleton as Stark Industries' bodyguards take him/them away.
When Alyosha Kravinoff began collecting a zoo of animal-themed superhumans, Swarm is in one of the cages. He fought Gargoyle as the Punisher passes them and escaped.
Swarm next turns up in Denver, Colorado, having amassed enough bees to become giant-sized. The Thunderbolts face him/them unsuccessfully until Venom devours Swarm's bones. Norman Osborn speculated this is a minor inconvenience that shouldn't prevent Swarm's return.
Swarm next turns up in Buenos Aires, having regained his intelligence. He fights the Mighty Avengers by creating 'avatars' made of bees. Hank Pym, Stature and Amadeus Cho place an inhibitor collar on the queen bees, which somehow causes Swarm's intelligence to disperse.
He was briefly seen trying to launch an attack of the Jean Grey School for Higher Learning only to be almost instantly thwarted by the X-Men's Krakoa, the Bamfs, and Doop.
Swarm later formed his own incarnation of the Sinister Six with 8-Ball, Delilah, Killer Shrike, Melter and Squid. They attack Spider-Man and the students of the Jean Grey School for Higher Learning. Swarm gets dispersed by Hellion which caused the other members to surrender.
Swarm later attacked New York but was defeated by Squirrel Girl and her ally Koi Boi covering him with water and turning bags full of his constituent bees in to the police.
Swarm later appeared as a member of the Hateful Hexad alongside Bearboarguy, Gibbon, Ox, Squid and White Rabbit. During the disastrous battle against Spider-Man and Deadpool, the battle is crashed by Itsy Bitsy.
Swarm relocates to Florida, where he encounters Macrothrax and his minions who are also sentient insect colonies in humanoid form, accidentally created by the invention behind him. He ends up joining forces with Ant-Man and taking a liking to the latter.
Powers and abilities
Fritz von Meyer is a composite being of thousand bees driven by his human intelligence. He is also technically intangible, as his body is an aggregate of tiny forms. As Swarm, he can fly through the air, assume any shape or size at will, and mentally influence other bees' actions (the full range may extend over a hundred yards in radius). At this end, Swarm seemed capable of controlling a mutant bee queen and, through her, countless drones. He even has exhibited a limited amount of super strength. As von Meyer, he possesses expertise in beekeeping, robotics, and toxicology.
Other versions
Marvel Fairy Tales
An alternate universe variant of Swarm from Earth-7082 appears in Spider-Man: Fairy Tales #2.
Marvel Adventures
An alternate universe variant of Swarm from Earth-20051 appears in Marvel Adventures: Spider-Man #38.
Ultimate Marvel
An original incarnation of Swarm from Earth-1610 appears in the Ultimate Marvel universe. This version is Petra Laskov, a Syrian mutant who is also known as the Insect Queen and Red-Wasp.
Marvel Noir
An original incarnation of Swarm from Earth-90214 appears in the Marvel Noir universe. This version is Madame Sturm, a female scientist whose powers are derived from a Spider-God totem.
In other media
Television
An original incarnation of Swarm appears in a self-titled episode of Spider-Man and His Amazing Friends, voiced by Al Fann. This version is a beehive irradiated by a fallen meteorite's energy, gaining sentience as well as the ability to increase other bees' size and mutate humans into insect hybrid drones. Swarm attempts to spread its hive mind throughout the universe until Spider-Man, Firestar, and Iceman intervene and launch the meteorite into space to reverse Swarm's effects.
An original incarnation of Swarm, Michael Tan, appears in Ultimate Spider-Man, voiced by Eric Bauza in his self-titled episode and Drake Bell in "Sandman Returns". This version is a disgruntled employee of Stark Industries whose body is made of self-replicating nanobots.
Swarm appears in Marvel Super Hero Adventures, voiced by Ian James Corlett.
An original incarnation of Swarm, Jefferson Davis, appears in Spider-Man, voiced by Alex Désert. This version utilizes purple nanotech bees that grant him a solid form and have mind-controlling stingers.
Video games
Swarm appears as an unlockable playable character in Marvel Strike Force. This version is a member of the Sinister Six.
Swarm is mentioned in Marvel's Spider-Man in J. Jonah Jameson's podcast.
Miscellaneous
Swarm appears in Spider-Man: Turn Off the Dark, portrayed by Gerald Avery. This version was originally an Oscorp scientist before he was manipulated into becoming Swarm by the Green Goblin and joining the Sinister Six.
The Symbiotic Warfare Anthophila Restraining Model (S.W.A.R.M.) appears in Spider-Man: City at War.
Reception
In August 2009, TIME listed Swarm as one of the "Top 10 Oddest Marvel Characters".
Swarm was ranked #29 on a listing of Marvel Comics' monster characters in 2015.
References
External links
Swarm at Marvel.com
Fiction about beekeeping
Fictional bees
Fictional characters from Saxony
Fictional characters who can change size
Fictional characters who can turn intangible
Fictional collective consciousnesses
Fictional entomologists
Fictional Nazi fugitives
Fictional roboticists
Fictional superorganisms
Fictional toxicologists
Characters created by Bill Mantlo
Characters created by John Byrne (comics)
Comics characters introduced in 1977
Marvel Comics shapeshifters
Marvel Comics characters with superhuman strength
Marvel Comics mutates
Marvel Comics Nazis
Marvel Comics scientists
Marvel Comics supervillains
Spider-Man characters | Swarm (Marvel Comics) | Biology | 1,911 |
22,917,401 | https://en.wikipedia.org/wiki/Volute%20%28pump%29 | A volute is a curved funnel that increases in area as it approaches the discharge port. The volute of a centrifugal pump is the casing that receives the fluid being pumped by the impeller, maintaining the velocity of the fluid through to the diffuser. As liquid exits the impeller it has high kinetic energy and the volute directs this flow through to the discharge. As the fluid travels along the volute it is joined by more and more fluid exiting the impeller but, as the cross sectional area of the volute increases, the velocity is maintained if the pump is running close to the design point. If the pump has a low flow rate then the velocity will decrease across the volute leading to a pressure rise causing a cross thrust across the impeller that we see as vibration. If the pump flow is higher than design the velocity will increase across the volute and the pressure will decrease according to the first law of thermodynamics. This will cause a side thrust in the opposite direction to that caused by low flow but the result is the samevibration with resultant short bearing and seal life.
The volute does not convert kinetic energy into pressure; that is done at the diffuser by reducing liquid velocity while increasing pressure.
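The velocity and pressure relationships described above follow from the continuity equation and Bernoulli's principle; a minimal Python sketch with hypothetical numbers (an illustration only, not a pump design calculation):

```python
RHO = 1000.0                      # kg/m^3, water assumed as the working fluid

def mean_velocity(Q, A):
    """Continuity: mean through-flow velocity for volumetric flow Q (m^3/s) through area A (m^2)."""
    return Q / A

def static_pressure_change(v_from, v_to, rho=RHO):
    """Bernoulli along a streamline, losses ignored: dp = 1/2 * rho * (v_from^2 - v_to^2)."""
    return 0.5 * rho * (v_from ** 2 - v_to ** 2)

# Hypothetical numbers: a volute throat sized so that the design flow passes
# at the same speed at which fluid leaves the impeller.
Q_design = 0.05                           # m^3/s
v_impeller_exit = 10.0                    # m/s
A_throat = Q_design / v_impeller_exit     # area that keeps velocity constant at design flow

for Q in (0.5 * Q_design, Q_design, 1.5 * Q_design):
    v_volute = mean_velocity(Q, A_throat)
    dp = static_pressure_change(v_impeller_exit, v_volute)
    print(f"Q = {Q:.3f} m^3/s: volute velocity {v_volute:.1f} m/s, "
          f"static pressure change {dp:+.0f} Pa")
# At design flow the change is zero; at low flow the fluid slows and pressure
# rises, at high flow it speeds up and pressure falls - the source of the
# unbalanced radial thrust described above.
```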
The name "volute" is inspired by the resemblance of this kind of casing to the scroll-like part near the top of an ionic order column in classical architecture, called a volute.
Split volute
In a split volute or double volute pump, the path along the volute is partitioned, providing two distinct discharge paths. The streams start out 180 degrees from each other, and merge by the time they reach the discharge port. This arrangement helps to balance the radial force on the bearings.
See also
Roots blower
References
Fluid dynamics
Pumps | Volute (pump) | Physics,Chemistry,Engineering | 368 |
59,760,845 | https://en.wikipedia.org/wiki/Erin%20Lavik | Erin Baker Lavik (born 1973) is an American bioengineer serving as the deputy director and chief technology officer of the National Cancer Institute's Division of Cancer Prevention (DCP) since 2023. She was previously a professor of chemical, biochemical, and environmental engineering at the University of Maryland, Baltimore County. Lavik develops polymers and nanoparticles that can protect the nervous system. She is a fellow of the American Institute for Medical and Biological Engineering.
Early life and education
Lavik's father was a lawyer and her mother was an accountant. She was given a catapult as a teenager and broke her parents' windshield. She attended National Cathedral School, and had to take advanced placement physics courses at the nearby boys' school St. Albans School. Lavik was unsure whether to become a veterinarian or high school teacher, but her mother sat next to Martha Gray on an aeroplane and realised that she had a career Lavik would enjoy. She completed her bachelor's degree in materials science at Massachusetts Institute of Technology in 1995. She minored in theatre and is still a playwright. She stayed at MIT for her graduate studies, completing her master's degree and PhD in 2001; her master's thesis looked at the electrical properties of cerium(IV) oxide.
Lavik created polymer scaffolds that were seeded with neural stem cells and implanted them into paralysed rats. These spinal implants were developed whilst Lavik was a graduate student at MIT, mimicking the anatomy of the spine by binding a porous piece of polymer fabric and a plastic cylinder and including narrow channels for axons. Lavik conducted the experiment on 50 female paraplegic rats, and 7 out of 10 rats fitted with Lavik's scaffold-stem cell design could walk again. She was awarded the John Wulff Award for Excellence in Teaching. In 2003, two years after completing her PhD, she was nominated to the TR100 list. Lavik was an assistant professor at Yale University, where she developed polymer scaffolds that imitate the spinal cord. She was nominated for a 2004 WIRED RAVE Award. In 2004 Lavik wrote Galileo Walking among the Stars, a play in which Galileo, Kepler and Gene Kelly build a spaceship. She was selected as one of the Connecticut Technology Council's top women in innovation in 2008.
Career
Lavik was made an assistant professor at Case Western Reserve University where she worked on nanotechnology and biodegradable polymers. Today she is a member of the College of Engineering and Information Technology at University of Maryland, Baltimore County. She is interested in translatable approaches to treat injuries and disease. She works on tissue engineering and diseases of the central nervous system, including glaucoma and retinal degeneration.
Lavik has explored ways that nanoparticles can help reduce internal bleeding. The nanoparticles attach to activated platelets, forming clots and stopping bleeding. The nanoparticles are delivered intravenously and include a molecule that binds to a glycoprotein. They are based on poly(lactic-co-glycolic acid), polyethylene glycol and arginine-glycine-aspartic acid. Lavik developed the nanoparticles using pig's blood, identifying which had the appropriate immune response. The nanoparticles could halve the bleeding time in femoral artery models. Lavik and her team hoped that medics and emergency responders would carry the nanoparticles to treat traumatic injuries. In 2010 she was awarded the National Institutes of Health Director's New Innovator Award for the discovery. The NIH grant allows Lavik to explore the use of the nanoparticles in traumatic injuries of the central nervous system. The work underwent clinical tests at Case Western Reserve University. She found that the length of the polyethylene glycol arms and the choice of peptide impact the efficacy and clearance of the nanoparticles. She has also looked at spinal cord injury, exploring the optimal time to deliver nanoparticles after traumatic injury. Alongside her work on nanoparticles, Lavik engineers solutions for retinal degeneration, including screen printing human eye tissues. Her technique, which layers adult stem cells, was selected by the National Eye Institute's 3-D Retina Organoid Challenge. She contributed to the 2013 Elsevier book Retina, writing about drug delivery.
Lavik is a member of the University of Maryland, Baltimore County Women in Science and Engineering group. She is an advocate for improving diversity in the sciences. She was made a Fellow of the American Institute for Medical and Biological Engineering in 2014. In 2016 she delivered a TEDxBroadway talk on theatre and engineering. She discussed the importance of collaboration in scientific research and teamwork in theatre.
Lavik became the second deputy director and first chief technology officer of the National Cancer Institute's Division of Cancer Prevention (DCP) in August 2023. In this capacity, she provides leadership in how best to apply promising emerging technologies to the prevention and control of cancer and its consequences. In 2024, she was elected a fellow of the American Association for the Advancement of Science.
References
American bioengineers
Women materials scientists and engineers
21st-century American women engineers
MIT School of Engineering alumni
Case Western Reserve University faculty
University of Maryland, Baltimore County faculty
Fellows of the American Association for the Advancement of Science
Fellows of the American Institute for Medical and Biological Engineering
National Institutes of Health people
Women bioengineers | Erin Lavik | Materials_science,Technology | 1,132 |
454,977 | https://en.wikipedia.org/wiki/160%20%28number%29 | 160 (one hundred [and] sixty) is the natural number following 159 and preceding 161.
In mathematics
160 is the sum of the first 11 primes, as well as the sum of the cubes of the first three primes.
Given 160, the Mertens function returns 0. 160 is the smallest number n with exactly 12 solutions to the equation φ(x) = n.
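These arithmetic facts are easy to verify computationally. The following brute-force check uses SymPy; the library choice and the search bound (taken from the standard inequality φ(x) ≥ √(x/2)) are assumptions of this sketch rather than claims made above.

```python
from sympy import prime, totient

# Sum of the first 11 primes: 2 + 3 + ... + 31 = 160
assert sum(prime(i) for i in range(1, 12)) == 160

# Sum of the cubes of the first three primes: 2**3 + 3**3 + 5**3 = 160
assert sum(prime(i) ** 3 for i in range(1, 4)) == 160

# Count solutions of phi(x) = 160.  Since phi(x) >= sqrt(x/2),
# any solution satisfies x <= 2 * 160**2, so a bounded search suffices.
solutions = [x for x in range(1, 2 * 160 ** 2 + 1) if totient(x) == 160]
print(len(solutions), solutions[:5])  # expect 12 solutions, the smallest being 187
```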
In telecommunications
The number of characters permitted in a standard short message service
References
External links
Number Facts and Trivia: 160
Integers | 160 (number) | Mathematics | 104 |
25,360,121 | https://en.wikipedia.org/wiki/Hilbert%E2%80%93Samuel%20function | In commutative algebra the Hilbert–Samuel function, named after David Hilbert and Pierre Samuel, of a nonzero finitely generated module $M$ over a commutative Noetherian local ring $A$ and a primary ideal $I$ of $A$ is the map $\chi_M^I \colon \mathbb{N} \to \mathbb{N}$ such that, for all $n \in \mathbb{N}$,
$$\chi_M^I(n) = \ell\left(M / I^{n+1} M\right),$$
where $\ell$ denotes the length over $A$. It is related to the Hilbert function of the associated graded module $\operatorname{gr}_I(M)$ by the identity
$$\chi_M^I(n) = \sum_{i=0}^{n} H\left(\operatorname{gr}_I(M), i\right).$$
For sufficiently large $n$, it coincides with a polynomial function of degree equal to $\dim M$, often called the Hilbert–Samuel polynomial (or Hilbert polynomial).
Examples
For the ring of formal power series in two variables $k[[x,y]]$ taken as a module over itself and the ideal $I$ generated by the monomials $x^2$ and $y^3$ we have $\chi(n) = 3n^2 + 9n + 6$ for all $n \ge 0$.
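The closed form just stated can be obtained by counting monomials. The following derivation is a sketch; the base field $k$ and the symbols $A = k[[x,y]]$, $I = (x^2, y^3)$ are the natural readings of the example, and the indexing follows the $I^{n+1}$ convention used in the definition above.

```latex
% A monomial x^a y^b lies in I^{n+1} = (x^2, y^3)^{n+1} exactly when
% \lfloor a/2 \rfloor + \lfloor b/3 \rfloor \ge n+1.  Hence
\chi(n) \;=\; \ell\!\left(A / I^{n+1}\right)
        \;=\; \#\bigl\{(a,b) \in \mathbb{Z}_{\ge 0}^{2} :
               \lfloor a/2 \rfloor + \lfloor b/3 \rfloor \le n \bigr\}
        \;=\; 6\binom{n+2}{2}
        \;=\; 3n^{2} + 9n + 6,
% since each pair (\lfloor a/2 \rfloor, \lfloor b/3 \rfloor) accounts for
% 2 x 3 = 6 monomials.  The leading term (6/2!) n^2 shows the ideal
% (x^2, y^3) has multiplicity 6.
```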
Degree bounds
Unlike the Hilbert function, the Hilbert–Samuel function is not additive on an exact sequence. However, it is still reasonably close to being additive, as a consequence of the Artin–Rees lemma. We denote by the Hilbert-Samuel polynomial; i.e., it coincides with the Hilbert–Samuel function for large integers.
Proof: Tensoring the given exact sequence with and computing the kernel we get the exact sequence:
which gives us:
.
The third term on the right can be estimated by Artin-Rees. Indeed, by the lemma, for large n and some k,
Thus,
.
This gives the desired degree bound.
Multiplicity
If $A$ is a local ring of Krull dimension $d$, with $\mathfrak{m}$-primary ideal $I$, its Hilbert polynomial has leading term of the form $\frac{e}{d!}\, n^{d}$ for some integer $e$. This integer $e$ is called the multiplicity of the ideal $I$. When $I$ is the maximal ideal $\mathfrak{m}$ of $A$, one also says $e$ is the multiplicity of the local ring $A$.
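Equivalently, with the notation introduced above (and $d = \dim A$), the multiplicity can be expressed as a normalized limit of the Hilbert–Samuel function; this standard reformulation is stated here for convenience.

```latex
e(I) \;=\; \lim_{n \to \infty} \frac{d!}{n^{d}}\;\ell\!\left(A / I^{n+1}\right),
\qquad d = \dim A .
```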
The multiplicity of a point $x$ of a scheme $X$ is defined to be the multiplicity of the corresponding local ring $\mathcal{O}_{X,x}$.
See also
j-multiplicity
References
Commutative algebra
Algebraic geometry | Hilbert–Samuel function | Mathematics | 366 |
8,590,426 | https://en.wikipedia.org/wiki/Athermalization | Athermalization, in the field of optics, is the process of achieving optothermal stability in optomechanical systems. This is done by minimizing variations in optical performance over a range of temperatures.
Optomechanical systems are typically made of several materials with different thermal properties. These materials compose the optics (refractive or reflective elements) and the mechanics (optical mounts and system housing). As the temperature of these materials change, the volume and index of refraction will change as well, increasing strain and aberration content (primarily defocus). Compensating for optical variations over a temperature range is known as athermalizing a system in optical engineering.
Material property changes
Thermal expansion is the driving phenomenon behind the extensive and intensive property changes in an optomechanical system.
Extensive properties
Extensive property changes, such as volume, alter the shape of optical and mechanical components. Systems are geometrically optimized for optical performance and are sensitive to components changing shape and orientation. While volume is a three-dimensional parameter, thermal changes can be modeled in a single dimension with linear expansion, assuming an adequately small temperature range. For example, glass manufacturer Schott provides the coefficient of linear thermal expansion for a temperature range of −30 °C to 70 °C. The change in length of a material is a function of the change in temperature with respect to the standard measurement temperature, $T_0$. This temperature is typically room temperature or 22 degrees Celsius.
$L_T = L_0\,(1 + \alpha\,\Delta T)$, where $L_T$ is the length of a material at temperature $T$, $L_0$ is the length of the material at temperature $T_0$, $\Delta T = T - T_0$ is the change in temperature, and $\alpha$ is the coefficient of thermal expansion. This relation describes how diameter, thickness, radius of curvature, and element spacing change as a function of temperature.
Intensive properties
The dominant intensive property change, in terms of optical performance, is the index of refraction. The refractive index of glass is a function of wavelength and temperature. There are multiple formulas that can be used to define the wavelength dependence, or dispersion, of a glass. Following the notation from Schott, the empirical Sellmeier equation is shown below.
$n^{2}(\lambda) - 1 = \frac{B_1 \lambda^{2}}{\lambda^{2} - C_1} + \frac{B_2 \lambda^{2}}{\lambda^{2} - C_2} + \frac{B_3 \lambda^{2}}{\lambda^{2} - C_3}$, where $\lambda$ is wavelength and $B_1$, $B_2$, $B_3$, $C_1$, $C_2$, and $C_3$ are the Sellmeier coefficients. These coefficients can be found in glass catalogs provided by manufacturers and are usually valid from the near-ultraviolet to the near-infrared. For wavelengths beyond this range, it is necessary to know the material's transmittance with respect to wavelength. From the dispersion formula, the temperature dependence of refractive index can be written:
and
Where , , , , , and are glass-dependent constants for an optic in vacuum. The power of an optic as a function of temperature can be written from the equations for extensive and intensive property changes, in addition to the lensmaker's equation.
Where is optical power, is the radius of curvature, is the thickness of the lens. These equations assume spherical surfaces of curvature. If a system is not in vacuum, the index of refraction for air will vary with temperature and pressure according to the Ciddor equation, a modified version of the Edlén equation.
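A common first-order way to combine the index change and the expansion of the radii of curvature is the thin-lens approximation $d\phi/dT = \phi\left[\frac{dn/dT}{n-1} - \alpha_g\right]$. The sketch below illustrates it; the formula is a simplification of the full relation described above, and all numerical values are placeholders rather than catalog data.

```python
# Minimal sketch: fractional change in thin-lens power with temperature.
# Assumes the thin-lens relation  d(phi)/dT = phi * (dn_dT/(n - 1) - alpha_glass),
# which combines the index change and the expansion of the radii of curvature.
# All numbers below are illustrative placeholders, not catalog values.

def thermal_power_coefficient(n: float, dn_dT: float, alpha_glass: float) -> float:
    """Return (1/phi) * d(phi)/dT for a thin lens in air, in 1/K."""
    return dn_dT / (n - 1.0) - alpha_glass

n = 1.52              # refractive index at the reference temperature
dn_dT = 3.0e-6        # absolute dn/dT, 1/K (placeholder)
alpha_glass = 7.0e-6  # linear CTE of the glass, 1/K (placeholder)

gamma = thermal_power_coefficient(n, dn_dT, alpha_glass)
delta_T = 40.0        # temperature excursion, K
print(f"fractional power change over {delta_T} K: {gamma * delta_T:+.2e}")
```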
Athermalization techniques
To account for optical variations introduced by extensive and intensive property changes in materials, systems can be athermalized through material selection or feedback loops.
Passive athermalization
Passive athermalization works by choosing materials for a system that will compensate for the overall change in system performance. The simplest way to do this is to choose materials for the optics and mechanics which have low CTE and dn/dT values. This technique is not always possible as glass types are primarily chosen based on their refractive index and dispersion characteristics at operating temperature. Alternatively, mechanical materials can be chosen which have CTE values complementary to the change in focus introduced by the optics. A material with the preferred CTE is not always available, so two materials can be used in conjunction to effectively get the desired CTE value. Negative thermal expansion materials have recently increased the range of potential CTEs available, expanding passive athermalization options.
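One concrete version of the two-material approach is a compound spacer whose net expansion per kelvin matches the focal-length drift of the optics. The sketch below solves for the two segment lengths; the condition used ($\alpha_1 L_1 + \alpha_2 L_2$ equal to the focal drift per kelvin, with $L_1 + L_2$ equal to the nominal lens-to-image distance) and all numbers are illustrative assumptions, not values from the text.

```python
# Minimal sketch: passive athermalization with a two-material spacer.
# Goal: choose segment lengths L1, L2 (L1 + L2 = L) so the housing expansion
# per kelvin equals the focal-length drift per kelvin:
#     alpha1 * L1 + alpha2 * L2 = dfdT,   L1 + L2 = L
# All material values and the focal drift below are illustrative placeholders.

def two_material_spacer(L: float, dfdT: float, alpha1: float, alpha2: float):
    """Return (L1, L2) solving the athermal condition, or None if infeasible."""
    if alpha1 == alpha2:
        return None  # a single CTE cannot be tuned by splitting the spacer
    L1 = (dfdT - alpha2 * L) / (alpha1 - alpha2)
    L2 = L - L1
    if L1 < 0 or L2 < 0:
        return None  # would need a re-entrant (folded) spacer or other materials
    return L1, L2

L = 100.0           # lens-to-image distance, mm
dfdT = 1.5e-3       # focal drift, mm/K (placeholder)
alpha_al = 23e-6    # aluminium CTE, 1/K
alpha_inv = 1.2e-6  # Invar-like CTE, 1/K

print(two_material_spacer(L, dfdT, alpha_al, alpha_inv))
# roughly (63.3, 36.7) mm with these placeholder numbers
```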
Active athermalization
When optical designs do not permit the selection of materials based on their thermal characteristics, passive athermalization may not be a viable technique. For example, the use of germanium in mid- to long-wave infrared systems is common because of its exceptional optical properties (high index of refraction and low dispersion). Unfortunately, germanium is also known for its large dn/dT value, which makes it difficult to passively athermalize.
Because the primary aberration induced by temperature change is defocus, an optical element, group, or focal plane can be mechanically moved to refocus a system and account for thermal changes. Actively athermalized systems are designed with a feedback loop including a motor, for the focusing mechanism, and temperature sensor, to indicate the magnitude of the focus adjustment.
Temperature gradients
When a system is not in thermal equilibrium, it complicates the process of determining system performance. A common temperature gradient to encounter is an axial gradient. This involves temperatures changing in a lens as a function of the thickness of the lens, or often along the optical axis. In optical lens design it is standard notation for the optical axis to be collinear with the Z-axis in Cartesian coordinates. A difference between the temperature of the first and second surface of a lens will cause the lens to bend. This affects each radius of curvature, thereby changing the optical power of the lens. The radius of curvature change is a function of the temperature gradient in the optic.
Where is the thickness of the lens. Radial gradients are less predictable as they may cause the shape of curvature to change, making spherical surfaces aspherical. Determining temperature gradients in an optomechanical system can quickly become an arduous task, requiring an intimate understanding of the heat sources and sinks in a system. Temperature gradients are determined by heat flow and can be a result of conduction, convection, or radiation. Whether steady-state or transient solutions are adequate for an analysis is determined by operating requirements, system design, and the environment. It can be beneficial to leverage the computational power of the finite element method to solve the applicable heat flow equations to determine the temperature gradients of optical and mechanical components.
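Before committing to a full finite-element model, a one-dimensional steady-state estimate from Fourier's law is often enough to judge whether an axial gradient matters. The sketch below uses placeholder flux and material values; they are assumptions for illustration, not data from the text.

```python
# Minimal sketch: steady-state axial temperature difference across a window,
# using 1-D Fourier conduction  delta_T = q_flux * thickness / conductivity.
# The heat flux and material properties are illustrative placeholders.

def axial_delta_T(q_flux: float, thickness: float, conductivity: float) -> float:
    """Temperature drop (K) across a slab carrying a uniform heat flux (W/m^2)."""
    return q_flux * thickness / conductivity

q_flux = 500.0     # heat flux through the element, W/m^2 (placeholder)
thickness = 0.005  # element thickness, m
k_glass = 1.1      # thermal conductivity of a BK7-like glass, W/(m*K)

print(f"axial delta T ~ {axial_delta_T(q_flux, thickness, k_glass):.3f} K")
```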
External links
Refractive index of air calculator
Table of common material CTE values
Information on glass from Schott
Information on glass from Hoya
Information on glass from Ohara
Information on glass from CDGM
References
Optics
Temperature | Athermalization | Physics,Chemistry | 1,315 |
41,858,625 | https://en.wikipedia.org/wiki/Cyclic%20sieving | In combinatorial mathematics, cyclic sieving is a phenomenon in which an integer polynomial evaluated at certain roots of unity counts the rotational symmetries of a finite set.
Given a family of cyclic sieving phenomena, the polynomials give a q-analog for the enumeration of the finite sets, and often arise from an underlying algebraic structure associated to the family of finite sets, such as a representation.
The first study of cyclic sieving was published by Reiner, Stanton and White in 2004. The phenomenon generalizes the "q = −1 phenomenon" of John Stembridge, which considers evaluations of the polynomial only at the first and second roots of unity (that is, q = 1 and q = −1).
Definition
For every positive integer $n$, let $\omega_n$ denote the primitive $n$th root of unity $e^{2\pi i/n}$.
Let be a finite set with an action of the cyclic group , and let be an integer polynomial. The triple exhibits the cyclic sieving phenomenon (or CSP) if for every positive integer dividing , the number of elements in fixed by the action of the subgroup of is equal to . If acts as rotation by , this counts elements in with -fold rotational symmetry.
Equivalently, suppose is a bijection on such that , where is the identity map. Then induces an action of on , where a given generator of acts by . Then exhibits the cyclic sieving phenomenon if the number of elements in fixed by is equal to for every integer .
Example
Let be the set of pairs of elements from . Define a bijection which increases each element in the pair by one (and sends back to ). This induces an action of on , which has an orbit
of size two and an orbit
of size four. If , then counts all elements in , counts fixed points of , counts fixed points of , and counts fixed points of . Hence, the triple exhibits the cyclic sieving phenomenon.
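This small example can also be checked by brute force. In the sketch below, $X$ is the set of 2-element subsets of {1, 2, 3, 4}, the rotation adds 1 to each element (sending 4 back to 1), and the explicit polynomial 1 + q + 2q² + q³ + q⁴ is assumed as the relevant q-binomial coefficient (consistent with the general definition in the following paragraphs); fixed-point counts are compared with evaluations at fourth roots of unity.

```python
from itertools import combinations
from cmath import exp, pi

# X = the 2-element subsets of {1, 2, 3, 4}; the generator adds 1 to every
# element, sending 4 back to 1.
X = [frozenset(s) for s in combinations(range(1, 5), 2)]

def rotate(s, k=1):
    """Apply the rotation k times to a subset s."""
    for _ in range(k):
        s = frozenset((x % 4) + 1 for x in s)
    return s

def X_q(q):
    """q-binomial coefficient [4 choose 2]_q, written out explicitly (assumed form)."""
    return 1 + q + 2 * q**2 + q**3 + q**4

omega = exp(2j * pi / 4)  # primitive 4th root of unity
for d in range(4):
    fixed = sum(1 for s in X if rotate(s, d) == s)
    value = X_q(omega**d)
    print(f"d={d}: fixed points = {fixed}, X(omega^d) = {value.real:+.1f}")
    assert abs(value - fixed) < 1e-9  # the cyclic sieving condition
```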
More generally, set and define the q-binomial coefficient by
which is an integer polynomial evaluating to the usual binomial coefficient at . For any positive integer dividing ,
If is the set of size- subsets of with acting by increasing each element in the subset by one (and sending back to ), and if is the q-binomial coefficient above, then exhibits the cyclic sieving phenomenon for every .
In representation theory
The cyclic sieving phenomenon can be naturally stated in the language of representation theory. The group action of on is linearly extended to obtain a representation, and the decomposition of this representation into irreducibles determines the required coefficients of the polynomial .
Let be the vector space over the complex numbers with a basis indexed by a finite set . If the cyclic group acts on , then linearly extending each action turns into a representation of .
For a generator of , the linear extension of its action on gives a permutation matrix , and the trace of counts the elements of fixed by . In particular, the triple exhibits the cyclic sieving phenomenon if and only if for every , where is the character of .
This gives a method for determining . For every integer , let be the one-dimensional representation of in which acts as scalar multiplication by . For an integer polynomial , the triple exhibits the cyclic sieving phenomenon if and only if
Further examples
Let be a finite set of words of the form where each letter is an integer and is closed under permutation (that is, if is in , then so is any anagram of ). The major index of a word is the sum of all indices such that , and is denoted .
If acts on by rotating the letters of each word, and
then exhibits the cyclic sieving phenomenon.
Let be a partition of size with rectangular shape, and let be the set of standard Young tableaux with shape . Jeu de taquin promotion gives an action of on . Let be the following q-analog of the hook length formula:
Then exhibits the cyclic sieving phenomenon. If is the character for the irreducible representation of the symmetric group associated to , then for every , where is the long cycle .
If is the set of semistandard Young tableaux of shape with entries in , then promotion gives an action of the cyclic group on . Define and
where is the Schur polynomial. Then exhibits the cyclic sieving phenomenon.
If is the set of non-crossing (1,2)-configurations of , then acts on these by rotation. Let be the following q-analog of the th Catalan number:
Then exhibits the cyclic sieving phenomenon.
Let be the set of semi-standard Young tableaux of shape with maximal entry , where entries along each row and column are strictly increasing. If acts on by -promotion and
then exhibits the cyclic sieving phenomenon.
Let be the set of permutations of cycle type with exactly exceedances. Conjugation gives an action of on , and if
then exhibits the cyclic sieving phenomenon.
Notes and references
Combinatorics
Generating functions | Cyclic sieving | Mathematics | 996 |
63,063,610 | https://en.wikipedia.org/wiki/Center%20for%20Research%20on%20Computation%20and%20Society | The Center for Research on Computation and Society (CRCS, commonly pronounced "circus") is a research center at Harvard University that focuses on interdisciplinary research combining computer science with social sciences. It is based in Harvard John A. Paulson School of Engineering and Applied Sciences. It is currently directed by Milind Tambe.
History
The center was officially founded in 2005, although appearances of a CRCS affiliation date back to 1996. The center name mimics the name of the centers for Internet and Society such as Stanford's or Harvard's. The Privacy Tools Project was one of the most important efforts led by CRCS. It received funding from multiple sources from 2009 through 2020 in order to research and build tools to enhance privacy, in a common effort with Harvard's Berkman Klein Center, Harvard's Data Privacy Lab, and MIT Libraries. The CRCS founding director was Stuart M. Shieber. After him, the center was directed by Greg Morrisett and later by Salil Vadhan until 2015, when Margo Seltzer was named the new director. In 2018, after her departure to Columbia University, she was replaced as director by Jim Waldo. When Milind Tambe joined Harvard in September 2019, he became the new center director.
The center has a yearly fellowship program, and relevant past fellows include Simson Garfinkel or Ariel Procaccia. It also hosts regular public talks ("seminars") with distinguished invited speakers, which are usually video recorded. Some speakers include Susan Crawford, Bruce Schneier or Megan Price.
Research
The center has covered a broad spectrum of research lines within computer science, typically with social aspects. These include social computing, privacy-enhancing technologies, encryption and data security, misinformation, machine learning fairness, internet of things, or a citizen-science platform.
See also
Berkman Klein Center for Internet and Society
References
Research institutes in Massachusetts
Computing and society
Research institutes established in 2005
2005 establishments in Massachusetts
Harvard University
Computer science institutes in the United States
Scientific organizations established in 2005
Information technology research institutes | Center for Research on Computation and Society | Technology | 418 |
5,044,648 | https://en.wikipedia.org/wiki/Filler%20%28linguistics%29 | In linguistics, a filler, filled pause, hesitation marker or planner (sometimes called crutches) is a sound or word that participants in a conversation use to signal that they are pausing to think but are not finished speaking. These are not to be confused with placeholder names, such as thingamajig. Fillers fall into the category of formulaic language, and different languages have different characteristic filler sounds. The term filler also has a separate use in the syntactic description of wh-movement constructions (see below).
Usage
Every conversation involves turn-taking, which means that whenever someone wants to speak and hears a pause, they do so. Pauses are commonly used to indicate that someone's turn has ended, which can create confusion when someone has not finished a thought but has paused to form a thought; in order to prevent this confusion, they will use a filler word such as um, er, or uh. The use of a filler word indicates that the other person should continue listening instead of speaking.
Filler words generally contain little to no lexical content, but instead provide clues to the listener about how they should interpret what the speaker has said. The actual words that people use may change (such as the increasing use of like), but the meaning and the reasons for using them do not change.
In English
In American English, the most common filler sounds are ah or uh and um (er and erm in British English). Among younger speakers, the fillers "like", "you know", "I mean", "okay", "so", "actually", "basically", and "right?" are among the more prevalent.
In other languages
In Afrikaans, , , and are common fillers (um, and uh being in common with English).
In American Sign Language, UM can be signed with open-8 held at chin, palm in, eyebrows down (similar to FAVORITE); or bilateral symmetric bent-V, palm out, repeated axial rotation of wrist (similar to QUOTE).
In Arabic, ("means") and ("by God") are common fillers. In Moroccan Arabic, ("like") is a common filler, as well as (so). In Iraqi Arabic, ("what's its name") is a filler.
In Armenian, ("thing"), , ("maybe"), ("c'mon") and ("as if") are common fillers.*
In Bengali, ( and ("..er..that is")) are common fillers.
In Bislama, is the common filler.
In Bulgarian, common fillers are (), (, 'well'), (, 'so'), (, 'thus'), (, 'well'), (, 'this') and (, 'it means'), (, 'right').
In Cantonese, speakers often say ("that is to say"; "meaning") and ("so; then") as fillers.
In Catalan, , ("so"), ("therefore"), ("it means"), saps? ("you know"?) and ("say") are common fillers.
In Croatian, the words (literally "this one", but the meaning is lost) and ("so"), and ("meaning", "it means") are frequent.
In Czech, fillers are called , meaning "word cotton/padding", or , meaning "parasitic expressions". The most frequent fillers are , or ("so"), ("simply"), ("like").
In Danish, and are among the most common fillers.
In Dhivehi, , , , and ("aww") are some common fillers.
In Dutch, , and ("thus") are some of the more common fillers. Also ("actually"), ("so"), ("come on") and ("so to say") in Netherlandic Dutch, ("well") or ("well") in Belgian Dutch, ("you know?") etc.
In Esperanto, ("well") and ("so") are the most common fillers.
In Estonian, ("so") is one of the most common fillers.
In Filipino, , , , and ("what"), ("like"), ("isn't it right?"), ("that's") are the most common fillers.
In Finnish, ("like"), , and are the most common fillers. Swearing is also used as a filler often, especially among youth. The most common swear word for that is , which is a word for female genitalia.
In Metropolitan French, is most common; other words used as fillers include ("what"), , ("well"), ("you see"), ("you see what I mean?"), , ("you know"), (roughly "well", as in "Well, I'm not sure"), and (roughly "suddenly"). Outside France other expressions are ("y'know what I mean?"; Québec), or ("go one time"; especially in Brussels, not in Wallonia). Additional filler words used by youngsters include ("kinda", "like"), ("like"), and ("style"; "kind").
In German, traditional filler words include , , , , , and ("actually"). So-called modal particles share some of the features of filler words, but they actually modify the sentence meaning.
In Greek, (), (), (, "so") and (, "good") are common fillers.
In Hebrew, () is the most common filler. () is also quite common. Millennials and the younger Generation X speakers commonly use (, the Hebrew version of "like"). Additional filler words include (, short for "that means"), (, "so") and (, "in short"). Use of fillers of Arabic origin such as (, a mispronunciation of the Arabic , ) is also common.
In Hindi, (, "it means"), (, "what do you say"), (, "that") and (, "what it is") are some word fillers. Sound fillers include (, ), अ (a, [ə]), (, ).
In Hungarian, filler sound is , common filler words include , (well...) and (a variant of , which means "it says here..."). Among intellectuals, (if you like) is used as filler.
In Icelandic, a common filler is ("here"). , a contraction of ("you know"), is popular among younger speakers.
In Indonesian, and are among the most common fillers.
In Irish, ("say"), ("well"), and are common fillers, along with as in Hiberno-English.
In Italian, common fillers include ("um", "uh"), ("well then", "so"), ("like"), ("there"), ("actually", "that is to say", "rather"), and ("well", "so"; most likely a shortening of or , which are themselves often used as filler words).
In Japanese, common fillers include (, or "um"), (, literally "that over there", used as "um"), (, or "well"), (, used as "hmmm"), and (, used as "huh" as a response of surprise or confusion).
In Kannada, for "also", for "the matter is" are common fillers.
In Korean, (), (), (), and () are commonly used as fillers.
In Kurdish, ("so, then") ( ( in Sorani and Palewani, mostly pronounced as "ija"), as well as ("well") (or ()) are common filler words. In Badinani, ("I said") and ("I say") (mostly shortened to "m'go'" and "e'd bê'm") are used similarly to "I mean". ("like, such as") ( () in others) is used similarly to "like".
In Kyrgyz, (, "then", "so"), (, "that"), (, "that"), (, "this"), (, "um"), are common fillers.
In Lithuanian, , , ("you know"), ("meaning"), ("like") are some of common fillers.
In Malay, speakers often use words and phrases such as (literally, "what name") or ("that") as common fillers.
In Malayalam, (, "that means...") and ("then...") are common.
In Maltese and Maltese English, ("then"), or just , is a common filler.
In Mandarin Chinese, speakers often say ; (pronounced nàge/nèige), meaning 'that'. Other common fillers are and .
In Mongolian, (, "now") and (, "that") are common fillers.
In Nepali, (, "meaning"), (), (), (, "No?") are commonly used as fillers.
In Norwegian, common fillers are , , ("in a way"), ("just") (literally "not true?", meaning "don't you agree?", "right?", "no kidding" or "exactly")l, ("well"), ("like") and ("is it", "it is"). In Bergen, ("true") is often used instead of . In the region of , (comes from which means "you see/understand)", "as you can see/understand") is also a common filler.
In Persian, (, "look"), (, "thing"), and (, "for instance") are commonly used filler words. As well as in Arabic and Urdu, (, "I mean") is also used in Persian. Also, is a common filler in Persian.
In Portuguese, , , ("so"), ("like") and ("well") are the most common fillers.
In Polish, the most common filler sound is and also (both like English um) and while common, its use is frowned upon. Other examples include (like English well), ("you know"). Among the younger generation, new, often English-inspired fillers are gaining popularity: generalnie/ogólnie ("generally"), jakby ("like"), w sensie ("in the sense that"), w sumie ("to sum it up").
In Punjabi, (, , "it means") is a common filler.
In Romanian, ("therefore") is common, especially in school, and is also very common (can be lengthened according to the pause in speech, rendered in writing as ), whereas is widely used by almost anyone. A modern filler has gained popularity among the youths – gen , analogous to the English "like", literally translated as "type".
In Russian, fillers are called (, "parasite words"); the most common are (, "eh"), (, "here it is"), (, "this"), (, "that kind, sort of"), (, "some kind [of this]"), (, "well, so"), (, "I mean, kind of, like"), (, "so"), (, "what's it [called]"), (, "kinda"), (, "[just] like, sort of"), and (, "understand?, you know, you see").
In Serbian, (, "means"), па (pa, "so"), мислим (mislim, "i think") and (, "this") are common fillers.
In Slovak, ("that"), ("this"), ("simply"), or ("it's like...") are used as fillers. The Hungarian (or in its Slovak pronunciation) can also be heard, especially in parts of the country with a large Hungarian population. is a filler typical of Eastern Slovak and one of the most parodied features.
In Slovene, ("indeed", "just", "merely"), ("right?"), ("well"), v bistvu ("in fact"), and pravzaprav ("actually") are some of the most common fillers.
In Spanish, fillers are called . Some of the most common in American Spanish are , , (roughly equivalent to uhm, literally means "this"), and (roughly equivalent to "I mean", literally means "or be it"). In Spain the previous fillers are also used, but ("right?") and are very common too. and occasionally ("well") is used. Younger speakers there often use (meaning "as", "like" or "in [noun] mode"). The Argentine filler word che became the nickname of rebel Ernesto "Che" Guevara, by virtue of his frequent use of it. Other possible filled pauses in Spanish are: a, am, bueno, como, and others.
In Swedish, fillers are called ; some of the most common are or , ("yes"), or (for example ) or (comes from , which means "only"), or ("therefore", "thus"), (comes from , which means "what"), and and (both similar to the English "like").
In Tamil, ("if you see...") and ("then...") are common.
In Telugu, (, "what's here is...") and (, "then...") are common and there are numerous like this.
In Turkish, ("meaning..."), ("thing"), ("that is"), and ("as such", "so on") are common fillers.
In Ukrainian, (, similar to "um"), (, "well"), (, "and"), (, "this"), (, "this one") are common fillers.
In Urdu, (, "meaning..."), (, "this and that" or "blah blah"), (, "yeah yeah") and (, "ok") are also common fillers.
In Vietnamese (Tiếng Việt), "ơ" or "à" (surprise); "ý là" (I mean); ...
In Welsh (Cymraeg), or , from – 'Is it not so?' – is used as a filler, and in a similar way, especially in southern dialects and (abbreviations of and – the singular and plural/respectful forms of 'you know') along with and (abbreviations of and – 'you see'); (from – 'so/such/like/in that way', used in northern dialects); ('alright/right') is used as a filler at the beginning, middle or end of sentences; – used loosely to mean 'alright'; , an abbreviation of – 'there we are'; and are used similarly to the English 'um…' and 'uh…'.
In syntax
The linguistic term "filler" has another, unrelated use in syntactic terminology. It refers to the pre-posed element that fills in the "gap" in a wh-movement construction. Wh-movement is said to create a long-distance or unbounded "filler-gap dependency". In the following example, there is an object gap associated with the transitive verb saw, and the filler is the wh-phrase how many angels: "I don't care [how many angels] she told you she saw."
See also
Aizuchi
Interjection
Like: as a discourse particle
Phatic expression
So (word)
Speech disfluency
References
External links
Why do people say "um" and "er" when hesitating in their speech?, New Scientist, May 6, 1995
Citing
Nino Amiridze, Boyd H. Davis, and Margaret Maclagan , editors. Fillers, Pauses and Placeholders. Typological Studies in Language 93, John Benjamins, Amsterdam/Philadelphia, 2010. Review
Linguistics
Human communication | Filler (linguistics) | Biology | 3,659 |
3,996,438 | https://en.wikipedia.org/wiki/Flexor%20retinaculum%20of%20the%20hand | The flexor retinaculum (transverse carpal ligament or anterior annular ligament) is a fibrous band on the palmar side of the hand near the wrist. It arches over the carpal bones of the hands, covering them and forming the carpal tunnel.
Structure
The flexor retinaculum is a strong, fibrous band that covers the carpal bones on the palmar side of the hand near the wrist. It attaches to the bones near the radius and ulna. On the ulnar side, the flexor retinaculum attaches to the pisiform bone and the hook of the hamate bone. On the radial side, it attaches to the tubercle of the scaphoid bone, and to the medial part of the palmar surface and the ridge of the trapezium bone.
The flexor retinaculum is continuous with the palmar carpal ligament, and deeper with the palmar aponeurosis. The ulnar artery and ulnar nerve, and the cutaneous branches of the median and ulnar nerves, pass on top of the flexor retinaculum. On the radial side of the retinaculum is the tendon of the flexor carpi radialis, which lies in the groove on the greater multangular between the attachments of the ligament to the bone.
The tendons of the palmaris longus and flexor carpi ulnaris are partly attached to the surface of the retinaculum; below, the short muscles of the thumb and little finger originate from the flexor retinaculum.
Function
The flexor retinaculum is the roof of the carpal tunnel, through which the median nerve and tendons of muscles which flex the hand pass.
Clinical significance
In carpal tunnel syndrome, one of the tendons or tissues in the carpal tunnel is inflamed, swollen, or fibrotic and puts pressure on the other structures in the tunnel, including the median nerve. Carpal tunnel syndrome is the most commonly reported nerve entrapment syndrome. It is often associated with repetitive motions of the wrist and fingers. It is because of this that pianists, meat cutters, and people with jobs involving extensive typing are at particularly high risk. The tough flexor retinaculum along with the rest of the carpal tunnel cannot expand, putting pressure on the median nerve running through the carpal tunnel with the flexor tendons of the wrist. This results in the symptoms of carpal tunnel syndrome.
Symptoms of carpal tunnel syndrome include tingling sensations and muscle weakness in the palm and lateral side of the hand and palm. It is possible that the syndrome may extend and radiate up the nerve causing pain to the arm and shoulder.
Carpal tunnel syndrome may be treated surgically. This is usually done after all non-surgical methods of treatment have been exhausted. Non-surgical treatment methods include anti-inflammatory drugs. The wrist may be immobilized in order to prevent further use and inflammation. When surgery is needed, the flexor retinaculum is either completely severed or lengthened. Surgery to divide the flexor retinaculum is the most common procedure. The scar tissue will eventually fill the gap left by surgery. The intent is that this will lengthen the flexor retinaculum enough to accommodate inflamed or damaged tendons and reduce the effects of compression on the median nerve. In a 2004 double-blind study, researchers concluded that there was no perceivable benefit gained from lengthening the flexor retinaculum during surgery and so division of the ligament remains the preferred method of surgery.
See also
Peroneal retinacula
Extensor retinaculum of the hand
References
External links
Musculoskeletal system
Hand
Ligaments | Flexor retinaculum of the hand | Biology | 784 |
71,757,009 | https://en.wikipedia.org/wiki/HD%20183552 | HD 183552, also known as HR 7411, is a probable spectroscopic binary located in the southern constellation Telescopium. The system has a combined apparent magnitude of 5.74, allowing it to be faintly visible to the naked eye. Based on parallax measurements from the Gaia spacecraft, it is estimated to be 337 light years distant. The value is poorly constrained, but it appears to be receding with a radial velocity of .
This object is an Am star with a spectral classification of kA6hF2mF2 (II), an evolved F-type star having the calcium K-line of an A6 star plus the hydrogen and metallic lines of an F2 star. Its current mass is and it is estimated to be 733 million years old, having completed 83.1% of its main sequence lifetime. It has expanded to 4.7 times the radius of the Sun and now radiates 45 times the luminosity of the Sun from its photosphere at an effective temperature of .
References
Spectroscopic binaries
Am stars
F-type giants
Telescopium
Telescopium, 62
183552
7411
096141
PD-53 09585 | HD 183552 | Astronomy | 249 |
3,389,929 | https://en.wikipedia.org/wiki/Episulfide | In organic chemistry, episulfides are a class of organic compounds that contain a saturated, heterocyclic ring consisting of two carbon atoms and one sulfur atom. It is the sulfur analogue of an epoxide or aziridine. They are also known as thiiranes, olefin sulfides, thioalkylene oxides, and thiacyclopropanes. Episulfides are less common and generally less stable than epoxides. The most common derivative is ethylene sulfide ().
Structure
According to electron diffraction, the C–C and C–S distances in ethylene sulfide are respectively 1.473 and 1.811 Å. The C–C–S and C–S–C angles are respectively 66.0 and 48.0°.
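Because the ring is an isosceles triangle of atoms, the two angles follow from the two distances by the law of cosines; the short check below is a consistency check on the numbers above, using $d_{\mathrm{CC}}$ and $d_{\mathrm{CS}}$ as convenience symbols for the quoted bond lengths.

```latex
\cos\angle(\mathrm{C{-}C{-}S}) = \frac{d_{\mathrm{CC}}}{2\,d_{\mathrm{CS}}}
  = \frac{1.473}{2 \times 1.811} \approx 0.407
  \;\Rightarrow\; \angle \approx 66.0^{\circ},
\qquad
\cos\angle(\mathrm{C{-}S{-}C}) = 1 - \frac{d_{\mathrm{CC}}^{2}}{2\,d_{\mathrm{CS}}^{2}}
  = 1 - \frac{1.473^{2}}{2 \times 1.811^{2}} \approx 0.669
  \;\Rightarrow\; \angle \approx 48.0^{\circ}.
```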
Preparation
History
A number of chemists in the early 1900s, including Staudinger and Pfenninger (1916), as well as Delepine (1920) studied episulfides. In 1934 Dachlauer and Jackel devised a general synthesis of episulfides from epoxides using alkali thiocyanates and thiourea.
Contemporary methods
Following the lead of Dachlauer and Jackel, contemporary routes to episulfides utilize a two-step method, converting an olefin to an epoxide followed by thiation using thiocyanate or thiourea.
Episulfides can also be prepared from cyclic carbonates, hydroxy mercaptans, hydroxyalkyl halides, dihaloalkanes, and halo mercaptans. The reaction of ethylene carbonate and KSCN gives ethylene sulfide:
KSCN + C₂H₄O₂CO → KOCN + C₂H₄S + CO₂
The metal-catalyzed reaction of sulfur with alkenes has been demonstrated.
Reactions
Common uses of episulfides in both academic and industrial settings most often involve their use as monomers in polymerization reactions.
Episulfides have an innate ring strain due to the nature of three-membered rings. Therefore, most reactions of episulfides involve ring-opening. Most commonly, nucleophiles are employed for the ring-opening process. For terminal episulfides, nucleophiles attack the primary carbon. Nucleophiles include hydrides, thiolates, alkoxides, amines, and carbanions.
Applications
Thiiranes occur very rarely in nature and are of no significance medicinally.
Very few commercial applications exist, although the polymerization of episulfide has been reported.
Dithiiranes
Dithiiranes are three-membered rings containing two sulfur atoms and one carbon atom. One example was prepared by oxidation of a 1,3-dithietane.
References
Functional groups | Episulfide | Chemistry | 586 |
216,516 | https://en.wikipedia.org/wiki/Health%20Canada | Health Canada (HC; ) is the department of the Government of Canada responsible for national health policy. The department itself is also responsible for numerous federal health-related agencies, including the Canadian Food Inspection Agency (CFIA) and the Public Health Agency of Canada (PHAC), among others. These organizations help to ensure compliance with federal law in a variety of healthcare, agricultural, and pharmaceutical activities. This responsibility also involves extensive collaboration with various other federal- and provincial-level organizations in order to ensure the safety of food, health, and pharmaceutical products—including the regulation of health research and pharmaceutical manufacturing/testing facilities.
The department is responsible to Parliament through the minister of health—presently Mark Holland—as part of the federal health portfolio. The minister is aided by the associate minister of health, and minister of mental health and addictions—presently Ya'ara Saks. The deputy minister of health, the senior most civil servant within the department, is responsible for the day-to-day leadership and operations of the department and reports directly to the minister.
Originally created as the "Department of Health" in 1919, in the wake of the Spanish flu crisis, what is known as Health Canada today was formed in 1993 when the former Health and Welfare Canada department (established in 1944) split into two separate units, the other being Human Resources and Labour Canada.
Organization
Health Canada's leadership consists of:
Minister of Health
Deputy Minister
Associate Deputy Minister
Branches
The following branches, offices, and bureaus (and their respective services) fall under the jurisdiction of Health Canada:
Health Canada
Office of Audit and Evaluation
Departmental Audit Committee
Director General / Chief Audit Executive's Office
Internal Audit and Special Examinations
Program Evaluation Division
Performance Measurement Planning and Integration
Practice Management
Chief Financial Officer Branch
Departmental Performance Measurement and Evaluation Directorate
Departmental Resource Management Directorate
Financial Operations Directorate
Internal Control Division
Materiel and Assets Management Directorate
Planning and Corporate Management Practices Directorate
Communications and Public Affairs Branch
Ethics and Internal Ombudsman Services
Marketing and Communications Services Directorate
Planning and Operations Division
Public Affairs and Strategic Communications Directorate
Stakeholder Relations and Consultation Directorate
Controlled Substances and Cannabis Branch
Corporate Services Branch
Departmental Secretariat
Health Products and Food Branch
Assistant Deputy Minister's Office
Biologic and Radiopharmaceutical Drugs Directorate
Food Directorate
Marketed Health Products Directorate
Medical Devices Directorate
Natural and Non-prescription Health Products Directorate
Office of Nutrition Policy and Promotion
Policy, Planning and International Affairs Directorate
Resource Management and Operations Directorate
Therapeutic Products Directorate
Veterinary Drugs Directorate
Healthy Environments and Consumer Safety Branch
Consumer and Hazardous Products Safety Directorate
Environmental and Radiation Health Sciences Directorate
Policy Planning and Integration Directorate
Safe Environments Directorate
Climate Change and Innovation Bureau
Water and Air Quality Bureau
New Substances Assessment and Control Bureau
Existing Substances Risk Assessment Bureau
Legal Services
Opioid Response Team
Controlled Substances Directorate
Opioid Response Team Directorate
Pest Management Regulatory Agency
Regulatory Operations and Enforcement Branch
Strategic Policy Branch
Partner agencies
In their responsibility of maintaining and improving the health of Canadians, the Minister of Health is supported by the Health Portfolio, which comprises Health Canada as well as:
Public Health Agency of Canada;
Canadian Institutes of Health Research;
the Patented Medicine Prices Review Board; and
the Canadian Food Inspection Agency
Additionally, Health Canada is a corporate partner of the Canadian Association of Emergency Physicians (CAEP).
International collaboration
In December 2016, Health Canada approved the purchase of a new botulism antitoxin called heptavalent botulism antitoxin (BAT) from the American-based company Emergent Biosolutions, a global specialty biopharmaceutical company. The PHAC has identified botulism as a likely biological terrorist threat.
Labs and offices
Offices
Office of the Cameron Visiting Chair
Office of the Chief Dental Officer
The National Office of WHMIS
Nurse Recruitment
Public Services Health Medical Centre
Laboratories
Laboratory Centre for Disease Control
Sir Frederick G Banting Research Centre
Compliance and Enforcement Directorate
The Compliance and Enforcement Directorate provides support to Health Canada by enforcing the laws and regulations pertaining to the production, distribution, importation, sale, and/or use of consumer products, including but not limited to: tobacco, pest control materials, drugs and medical devices, biologics, and natural health products.
The Directorate conducts inspections and investigations to ensure that products are safe, of good quality, and properly labelled and distributed, in order to better protect Canadians from potentially harmful products and consumables.
Compliance and Enforcement Directorate is divided into six distinct programs:
Canada Vigilance Program
Controlled Substances Program
Inspectorate Program
Pesticide Compliance Program
Product Safety Program
Tobacco Control Program
Canada Vigilance Program
Health Canada's Canada Vigilance Program (CVP) "collects and assesses reports of suspected adverse reactions to health products marketed in Canada," including prescription and over-the-counter medications, natural health products, biotechnology products, vaccines, blood products, human cell products, human tissue products, human organs, disinfectants and radiopharmaceuticals. The program has been in effect since 1965.
Pharmacovigilance related to Adverse Events Following Immunization (AEFI) is a shared responsibility between Health Canada and the Public Health Agency of Canada.
Related legislation
Acts for which Health Canada has total or partial responsibility:
Assisted Human Reproduction Act
Canada Health Act
Canadian Centre on Substance Abuse Act
Canadian Environmental Protection Act
Canadian Institutes of Health Research Act
Cannabis Act
Controlled Drugs and Substances Act
Comprehensive Nuclear Test-Ban Treaty Implementation Act
Department of Health Act
Financial Administration Act
Fitness and Amateur Sport Act
Food and Drugs Act
Hazardous Materials Information Review Act
Hazardous Products Act
Patent Act
Pest Control Products Act
Pesticide Residue Compensation Act
Quarantine Act
Radiation Emitting Devices Act
Tobacco Act & Act to Amend the Tobacco Act (sponsorship)
Acts which Health Canada is involved or has special interest in:
Broadcasting Act
Canada Labour Code
Canada Medical Act
Canada Shipping Act
Canadian Food Inspection Agency Act
Emergency Preparedness Act
Energy Supplies Emergency Act
Excise Tax Act
Federal-Provincial Fiscal Arrangements Act
Feeds Act
Immigration and Refugee Protection Act
National Parks Act
Nuclear Safety and Control Act
Non-Smokers Health Act
Queen Elizabeth II Canadian Research Fund Act
Trade Marks Act
Special access program
Health Canada has a special access program that health care providers may use to request medications that are not currently commercially available in Canada.
COVID-19 response
The chief medical advisor of Health Canada, Supriya Sharma, as of April 2021, oversees the COVID-19 vaccine approval process in Canada. On 29 March 2021, Sharma supported the National Advisory Committee on Immunization's declaration of a pause for the administration of the AstraZeneca vaccine to Canadians under the age of 55.
Criticisms
An editorial published by the Canadian Medical Association Journal has called for Health Canada to more strictly regulate natural health products. The editorial cited weaknesses in current legislation that allow natural health products to make baseless health claims, to neglect side-effects research prior to products reaching market, and to be sold without being evaluated by Health Canada.
On 10 September 2012, a report on CBC Television questioned the safety of drugs sold in North America. The Canadian Press reported that Health Canada is secretive regarding inspections about drugs manufactured overseas, leaving the public unsure about the safety of these drugs.
Drug approvals process
Health Canada aims to provide responses to pharmaceutical innovators within 300 days of submitting a drug for review. However, for submissions filed between 2015 and 2019, only 33 percent received a response within that target. Fully 18 percent waited over a year, and almost 5 percent over two years. The average delay for a standard review was 335 days. Health Canada's accelerated pathway for approval dubbed "conditional compliance" reduces its target timeline to 200 days, but its actual average delay was still 302 days, and only 8 percent of applicants received responses within the 200-day target.
It has been suggested that government entities should make use of rolling submissions, as was done for COVID-19 vaccines, to proceed with the examination of partially complete submissions and accept new information as it becomes available, and also that drugs already approved in other jurisdictions should be approved more rapidly to avoid redundancy.
See also
Health care in Canada
Public Health Agency of Canada
First Nations Health Authority
U.S. Department of Health and Human Services (HHS)
Centers for Disease Control and Prevention (CDC)
Food and Drug Administration (FDA)
European Medicines Agency (EMA)
Japanese Ministry of Health, Labour and Welfare (MHLW)
National Centre for Disease Control (NCDC)
Notes
References
External links
Medical and health organizations based in Canada
Federal departments and agencies of Canada
Canada
Public health organizations
Regulators of biotechnology products
Ministries established in 1996
Regulation of medical devices
Federal law enforcement agencies of Canada
1996 establishments in Canada
Government health agencies in Canada | Health Canada | Biology | 1,721 |
24,335,106 | https://en.wikipedia.org/wiki/Left%204%20Dead%20%28franchise%29 | Left 4 Dead is a series of cooperative first-person shooter survival horror video games created by Turtle Rock Studios and published by Valve. Set in the days after a pandemic outbreak of a viral strain transforming people into zombie-like feral creatures, the games follow the adventures of four survivors attempting to reach safe houses and military rescue while fending off the attacking hordes.
The games encourage cooperative play between up to four players, each taking the role of one of the survivor characters and the computer controlling any unassigned characters. Players use a combination of melee weapons, firearms, and thrown objects to fend off attacks from the bulk of the infected creatures, while using an assortment of healing items to keep their group alive. Certain unique infected creatures pose a more difficult challenge, requiring teamwork to take down effectively. The games are overseen by an "AI Director", designed to give the players a more dramatic experience based on their performance, penalizing players for stalling while rewarding players with special weapons by taking longer or riskier paths. The Director also makes gameplay dynamic, meaning that no two playthroughs are quite the same.
Video games
Left 4 Dead was released in November 2008. The game's development was started by Turtle Rock Studios, who were bought by Valve during the game's creation, with continued development occurring at Valve's studios.
Left 4 Dead 2 was released a year later in November 2009. Valve attributed the short turnaround between the titles as a result of having many ideas to expand on the first game, but more than could be reasonably done through software patching or downloadable content. The fast turnaround of the sequel was initially criticized by many players of the first game, leading to a temporary effort to boycott the second game. Valve helped to placate matters, demonstrating that it was still developing content for the first game, including a crossover campaign, "The Passing", between the characters of both games.
Downloadable content
Crash Course is the first downloadable content (DLC) campaign for Left 4 Dead. It is free on PC, but not for Xbox 360. It was released on September 29, 2009. According to Valve it is meant to bridge the gap between No Mercy and Death Toll.
The Passing is the first DLC crossover campaign of Left 4 Dead 2, which includes a new campaign and "new co-operative challenge modes of play" and introduces a new firearm, the M60; the Golf Club, and a new Uncommon Infected called the Fallen Survivor. It was released April 22, 2010.
The Sacrifice is a three-chapter DLC for both Left 4 Dead and Left 4 Dead 2 and was released on October 5, 2010. Taking place after Blood Harvest and is considered to be the prologue to The Passing, as both campaigns are connected to each other.
Cold Stream is the third DLC for Left 4 Dead 2. It was announced in a blog post on February 16, 2011. Cold Stream was released on March 22, 2011, in beta form on Steam, and it was officially released in its final form on Steam on July 24, 2012. Cold Stream was released on the Xbox Marketplace on August 3, 2012, for 560 Microsoft Points. It was created by map-making community member Matthew Lourdelet and features wall graffiti depicting his friends.
The Last Stand is a community-made scenario for Left 4 Dead 2, and was released on September 24, 2020, with Valve's blessing as an official update to the game. The update includes over twenty new maps for survival mode, a new campaign, and additional updates to the game.
Spin-offs and crossovers
The Mercenaries: No Mercy makes a return on the PC version of Resident Evil 6. As an exclusive to No Mercy, Capcom and game developer Valve had teamed up to allow Left 4 Dead 2 content for No Mercy. The content includes the main cast from Left 4 Dead 2 being playable characters with their own loadouts alongside the Witch and the Tank replacing the Bloodshot and the Napad as enemies.
Pixel Force: Left 4 Dead is a downloadable indie game where the game is played as if it were released for NES.
Left 4 Dead: Survivors is a version of Left 4 Dead 2 released on December 10, 2014, for Japanese arcades by Taito. The game features the same scenarios and locations as L4D2, but has a different cast of survivors unique to this edition.
The No Mercy level of Left 4 Dead appeared as DLC for the cooperative-shooter Payday: The Heist and its sequel Payday 2.
On August 20, 2015, a free update was released for the PC version of the game Zombie Army Trilogy which imported the eight survivor characters from the Left 4 Dead games.
In March 2017, asymmetric horror game Dead by Daylight released the "Left Behind" DLC for PC, which unlocked Bill as a playable character. It also unlocked Zoey, Ellis, Francis and Rochelle costumes for Meg, Dwight, Jake and Claudette respectively.
Comics
Tied to the release of "The Sacrifice" downloadable content, a four-part web comic drawn by Michael Avon Oeming was released to describe the events of the first set of survivors, Bill, Louis, Zoey, and Francis, that led them to meet the second group, Coach, Ellis, Rochelle, and Nick. Each of the parts provides a backstory for each of the first survivor characters at the onset of the pandemic, and also follows directly up on the final campaign from the first game, "Blood Harvest", where the survivors, identified as carriers, are taken to a secure military facility. The use of the comic medium to expand the game's story outside of the game's bounds has been used by Valve before in promoting Team Fortress 2.
Merchandise and other products
Within Left 4 Dead 2, a fictional southern rock band called the Midnight Riders was introduced. Several songs have been produced for the game and tie-in promotions. Two of them, "Midnight Ride" and "One Bad Man", have appeared as downloadable content for the Rock Band series via the Rock Band Network. Valve released a series of holiday cards themed after the survivors, created by Alexandria Neonakis. Three plush toys have been released based on the Left 4 Dead series' infected; first with the Boomer, and later with the Hunter and the Tank. They are to be followed by two more plush toys yet to be released. They were also created by Alexandria Neonakis.
In June 2011, it was confirmed that Valve had signed a deal with NECA to produce action figures based on Left 4 Dead and other Valve series. Two figures have been produced so far: Left 4 Dead's Boomer (released in June) and Left 4 Dead 2's Smoker (released in November).
Cancelled sequel
Valve has not yet made any direct statement related to future games in the series, though Chet Faliszek said in 2012 that like Valve's other games, a sequel had not been ruled out. Rumors from 2013 through 2016 led players to believe that Valve was developing Left 4 Dead 3, with a target release date in 2017.
In April 2019, Tyler McVicker of Valve News Network reported that while Left 4 Dead 3 had been in development at some point at Valve, it had been subsequently cancelled a few years earlier. Assets that McVicker had obtained showed a level comparable to assets used in Counter-Strike: Global Offensive, and indicated that very little other work appeared to have been done on the title.
In July 2020, multimedia storybook The Final Hours of Half-Life: Alyx, which focused on the 2010s at Valve, gave information about the cancelled Left 4 Dead 3 project that was developed around 2013, describing it as an open-world game set in Morocco where the player fights hundreds of zombies. The project was shelved due to the slow development of the Source 2 engine.
At The Game Awards 2020, Turtle Rock announced Back 4 Blood, their spiritual successor to Left 4 Dead 2, released on 12 October 2021.
Setting
The Left 4 Dead games take place in the days following a pandemic outbreak of an infection that transforms humans into feral, zombie-like creatures, seeking to kill those yet to be infected. Within the United States, the Civil Emergency and Defense Agency (CEDA) orders the creation of safe zones with the aid of the military, and evacuates as many people as possible to these areas, aiming to transfer them to islands and ocean-going ships, as the infected are unable to cross bodies of water. A number of people are discovered to be immune to the infection, though they can carry and unintentionally spread it to others. In both games, four of these "Carriers" or otherwise just immune humans, meet and become the game's "Survivor" characters, making their way to safe houses and extraction points.
Characters
The first game follows four survivors as they travel from Pennsylvania to Georgia. The game's four Survivors are:
Bill: an old Vietnam War veteran. Voiced by Jim French.
Louis: a frazzled IT analyst. The most optimistic of the four. Voiced by Earl Alexander.
Francis: an obnoxious and insolent outlaw biker who hates everything, except for vests. Voiced by Vince Valenzuela.
Zoey: a college student who loves horror movies. Voiced by Jen Taylor.
The second game follows another group as they travel from Savannah, Georgia to New Orleans, Louisiana, and features four new Survivors:
Coach: a portly high school football coach. Voiced by Chad L. Coleman.
Ellis: an overly talkative mechanic. Voiced by Eric Ladin.
Nick: a pessimistic gambler and con artist. Voiced by Hugh Dillon.
Rochelle: a low-level TV reporter. Voiced by Rochelle Aytes and modeled on Shanola Hampton.
The Japanese arcade version of the second game follows the same route and story, but features four Survivors exclusive to the Arcade edition:
Yusuke: a Japanese college student visiting America on vacation. Voiced by Ryuichi Kijima.
Haruka: a Japanese schoolgirl visiting America on a school trip. Voiced by Ayane Sakura.
Sara: a Japanese-American tour guide. Voiced by Miyuki Sawashiro.
Blake: an American bartender and army veteran. Voiced by Hidenori Takahashi.
Gameplay
The Left 4 Dead games are first-person shooters incorporating survival horror elements. A player controls one of the four Survivor characters, and has the ability to move, jump, and use weapons in their possession. Players are limited to two weapons: a main firearm with limited ammunition taken from ammo caches, and either a sidearm with unlimited ammunition (one or two pistols) or a melee weapon. Players also have three additional inventory slots. The third slot gives the player a thrown weapon: a Molotov cocktail, a pipe bomb that can be used to lure a horde towards it before it explodes, or a bile jar that can be used to lure a horde to a specific area. The fourth slot provides either a health kit which they can use on themselves or the other survivors, a defibrillator to revive a dead Survivor, or a special ammo deployment kit providing unique ammo such as explosive bullets. The fifth inventory slot is used for pain pills, giving the player a temporary health boost, or an adrenaline shot, temporarily increasing the player's movement, interaction, healing, and teammate revival speed. Some environmental objects, such as propane tanks or gasoline cans, can be carried and thrown at hordes and then shot to act as makeshift explosives, but cannot be stored in the player's inventory. In Left 4 Dead 2, limited-use weapons such as the chainsaw, grenade launcher or the M60 machine gun can also be carried in a similar manner. The player can use whatever object they are holding to temporarily push back any Infected surrounding them.
A health bar is used to track each character's health; players are aware of the state of each other's health and special items, and each character is shown to other players through an outlined silhouette on the game's HUD, regardless of walls that separate the characters, colored based on their health state. A character's health suffers from attacks from any Infected, environmental effects such as fires, and friendly-fire incidents. When a character's health falls below a certain level, the character cannot move as fast until their health is restored. If the player's health drops to zero, they become incapacitated: they fall to the floor and need to be helped up by a teammate, and a new temporary health bar appears, representing a bleed-out period. Should this bar drop to zero, the character dies, and can only return through a defibrillator, by appearing later in the level in a "rescue closet" from which they must be freed, or when the remaining players reach the safe house. If a character becomes incapacitated three times in a row without using a first-aid kit, they will immediately die the third time. Similarly, if a character falls over an edge, they will hang precariously for a limited time, falling to their death if they are not assisted in time.
In the main campaign mode, each campaign is divided into two to five chapters. For all but the last chapter, the goal of the players is to reach the safehouse at the end of the level, where fresh supplies of weapons, ammo, and health items are typically found. In the finale, the campaign comes to a climax and requires the players to either make a stand against waves of Infected as they wait for a rescue vehicle, fill up an escape vehicle's gas tank while fending off the horde, or race through a gauntlet of Infected to make their way to the rescue point.
Additional game modes are also available. A 4-on-4 competitive mode named Versus is available, where in alternating matches, one side controls the Survivor characters while the other controls Special Infected. When playing as the Survivors, the goal remains the same as the normal campaign mode. The Infected side tries to prevent the Survivors from making their way to the safehouse; should they be killed by the survivors, they will respawn later as a new type of Infected. Scoring is based on how far the Survivors get and other factors, with the team with the most points at the end of a campaign considered the winner. Other modes are based on single-situation standoffs where the Survivors have to hold out as long as possible, or where two teams compete to fill a generator with as many gas cans as possible.
In Left 4 Dead 2, two additional modes were introduced. In "Realism" mode, several video game conveniences are removed, such as the identification of teammates' locations via their silhouettes and the respawning of dead characters in rescue closets, while damage models become more severe. The sequel also features "mutations", game modes based on either the campaign or competitive modes where specific rules may be in place. For example, one mutation may give all the Survivor characters chainsaws from the start, while another may make every unique infected appear as a specific type, such as the Tank.
Both games support user-generated content through new maps and campaigns using tools provided by Valve; Left 4 Dead 2 also provides support for custom mutations. Valve has further supported the user community by highlighting popular third-party maps, and including select ones in software patches for the game.
AI Director
The Left 4 Dead series uses a collection of artificial intelligence routines, collectively the "AI Director", to monitor and alter the gameplay experience in response to the players. Valve's primary goal with the AI Director was to promote replayability of the games' campaigns, as their previous multiplayer games, such as Counter-Strike and Team Fortress 2, have shown thriving communities of players that continue to play despite the limited number of maps available, due to the unpredictable nature of online play. In considering this for Left 4 Dead, Valve identified that many games use static events that always occur at fixed points in a level, or limited dynamic events where one of several events could occur at fixed points. Valve themselves had used this idea in some key battles in Half-Life 2: Episode Two where the spawning of Combine forces would be based on the player's location. They recognized that such systems do not promote replayability or cooperation: players could easily memorize where events would occur, and those that had yet to experience the events would slow other players down. With the concept of the AI Director, Valve believed it could capture the same chaos and randomness that would occur in Counter-Strike and Team Fortress 2 in the cooperative gameplay experience, transforming it from simple memorization to a skills challenge. Valve has further termed this approach "procedural narrative", creating a new story each time the game is played.
The AI Director has several facets that combine to form "structured unpredictability" for each playthrough. The Director first procedurally generates a dramatic flow for the level, which identifies the size and location of hordes of common infected and uncommon infected throughout the level. The procedural generation considers each traversable area on the map, using pathfinding algorithms that Valve had incorporated into Counter-Strike computer-controlled characters, and the "flow" of the map—the general direction from the start of the level to the safe house. As players progress through the map, the Director will spawn infected in areas near the players and out of sight, while removing infected from earlier portions of the map the players have passed through. The placement also considers the "Escape Route", the shortest path through the map, and the Director will increase encounters along this route to increase the difficulty. Spawning rules are different for each of the infected; hordes are more often spawned behind players to take them by surprise, while special infected like the Hunter and Smoker will spawn ahead of the players, giving their individual AI the opportunity to lay in wait for the players. Procedural generation is also used to place weapons and other items throughout the level. Weapons and equipment can be programmed by the map designer to be generated by the Director at fixed points, allowing for some predictability for the players, and to give some creative control to the designer for story-telling and visual effects.
Overriding the procedural generation is the aim to create "active dynamic pacing" of the game by continuous monitoring of players, and altering the predetermined pacing to react to this. Each player character is tracked by a metric called the Survivor Intensity, which increases as the character takes damage, becomes incapacitated, or another character dies nearby, among other effects, and slowly decays in time. The Director will alter its previously developed schedule for spawning of infected to build up Survivor Intensity to a certain threshold; when this occurs, the game sustains this peak for a few seconds, then enters a period where it relaxes and reduces the spawning of infected, allowing the players to finish their current encounter and allow their Survivor Intensities to fall away from the threshold. The Director then repeats this cycle until the players have reached the end of the level.
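The build-up, sustain, and relax cycle described above can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration rather than Valve's actual implementation; the threshold, decay rate, and spawn logic are invented values used only to show the shape of the pacing loop.

```python
import random

# Hypothetical tuning constants -- not Valve's actual values.
PEAK_THRESHOLD = 100.0   # intensity that triggers the "sustain" phase
DECAY_PER_TICK = 1.5     # passive decay of Survivor Intensity per tick
RELAX_FLOOR = 30.0       # intensity must fall below this before building again


def spawn_infected(aggressive):
    # Placeholder: a real director would place infected out of sight,
    # near the players, and weighted toward the escape route.
    if aggressive and random.random() < 0.3:
        pass  # a horde or special infected would be spawned here


def director_tick(state, survivors):
    """One step of a simplified build-up / sustain / relax pacing loop."""
    # Survivor Intensity decays over time; damage, incapacitation, and nearby
    # deaths would raise it elsewhere in the (hypothetical) game loop.
    for s in survivors:
        s["intensity"] = max(0.0, s["intensity"] - DECAY_PER_TICK)
    peak = max(s["intensity"] for s in survivors)

    if state["phase"] == "build":
        spawn_infected(aggressive=True)              # keep the pressure on
        if peak >= PEAK_THRESHOLD:
            state["phase"], state["sustain_ticks"] = "sustain", 0
    elif state["phase"] == "sustain":
        state["sustain_ticks"] += 1                  # hold the peak briefly
        if state["sustain_ticks"] > 5:
            state["phase"] = "relax"
    else:  # relax: stop spawning until intensity falls away from the threshold
        if peak <= RELAX_FLOOR:
            state["phase"] = "build"


# Minimal usage example with two survivors and an initial "build" phase.
state = {"phase": "build", "sustain_ticks": 0}
survivors = [{"intensity": 0.0}, {"intensity": 0.0}]
for _ in range(100):
    director_tick(state, survivors)
```

The real Director repeats this cycle until the players reach the end of the level and tracks many more inputs, such as damage taken and position along the escape route, as described above.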
The boss infected encounters are generated through different means, creating a sequence based on cycling without repetition between three situations: Tank, Witch, or "No event". Boss events are then generated based on the number of areas the players have passed through. These events are not modified by the dynamic pacing of the game; Valve discovered when these were controlled by the pacing, players would often be at too high of an intensity to allow these encounters to occur. Keeping the events on a separate track allows the pacing to be unpredictable, further enhancing the replayability goal of the Director.
The Director also creates mood and tension with emotional cues, such as visual effects, audio cues for specific types of infected, and character communication. Within Left 4 Dead 2, the Director has the ability to alter placement of walls, level layout, lighting, and weather conditions, and reward players for taking more difficult routes with more useful weapons and items. A dedicated Director routine dynamically controls music and other ambient sound effects, monitoring what a player has experienced to create an appropriate sound mix. The process is client-side and done by a multi-track system. Each player hears their own mix, which is being generated as they play through the game, and dead players watching a teammate hear their teammates' mix.
Valve sees the AI Director as a tool to be used for other games. A software patch for the cooperative game Alien Swarm added a game mode, Onslaught, that used a version of the AI Director from Left 4 Dead to dynamically generate enemies for the players to fight.
Valve has investigated the use of biofeedback from the players directly as input for the Director. At the 2011 Game Developers Conference, Valve demonstrated a simple test where a player, fitted with a device to measure skin conductance level, played through a Left 4 Dead 2 level. Certain in-game actions, such as hearing a threatening noise or passing an opening doorway, would cause the skin conductance to rise. Valve postulates that such information fed into the Director could create a much more effective player experience.
Infected characters
The Infected characters within the game are divided into four categories. The most abundant are the "Common" Infected, the least deformed of the infected creatures. Alone, these infected are weak, but in large numbers, they pose a significant threat to the Survivors. Often, the Director will engineer a "horde", where a large number of Common Infected charge the players. Hordes can also be created by setting off in-game "crescendo events" that usually involve some loud noise created by the players, or by activating a car alarm. As such, many individual levels feature a unique, noisy event, such as the raising of a rusty elevator, where a very large horde will be attracted, with the players given forewarning so that they can prepare for the onslaught.
A second type of Infected, introduced in Left 4 Dead 2, is the "Uncommon Infected". Uncommon Infected are unique to certain campaigns and have a special ability that can hamper the players' progress. For example, the Uncommon Infected in "Hard Rain" are infected road workers that are still wearing protective ear pieces, making them immune to the sound of pipe bombs.
The third type of Infected are the "Special Infected". Each of these creatures appear less frequently during a campaign, and are designed to keep the players working together as a team. A Survivor that is attacked by these Special Infected is rendered completely incapable of taking action, encapsulating their intent: the victim must be rescued by other Survivors as they cannot help themselves. Special Infected have very specific vocalizations, as well as leitmotifs which are inserted into the game's soundtrack when the AI Director places one, allowing Survivor players some forewarning. They serve as playable characters for the Infected in the games' Versus and Scavenge modes.
Hunter - a nimble and agile Infected that wears a hoodie and is able to leap large distances and pounce on a single Survivor, who takes damage until either they are killed or the Hunter is killed or shoved off.
Smoker - an Infected with a long tongue that he uses to ensnare and strangle Survivors, rendering them helpless until the tongue is hit or the Smoker is killed or stunned with explosives or shoving. When killed, he leaves a cloud of smoke that temporarily obscures the Survivors' vision and makes them cough.
Boomer - a slow-moving bloated Infected that vomits bile onto Survivors; though not harmful itself, the bile attracts Common Infected to the character and obscures their vision for a time. When killed, a Boomer explodes, covering any nearby Survivors in more bile.
Spitter - an Infected that spews a large amount of acidic "spit" at the Survivors. Survivors standing in the "goo" suffer quickly-increasing damage until they move out of the puddle. The Spitter herself is quite weak and upon her death, she leaves a small puddle of acid that also damages Survivors.
Charger - an Infected with a large mutated arm that attacks by charging Survivors at high speed. The charge attack is capable of knocking down and effectively stunning every Survivor at once, and the Survivor at the forefront of the attack is carried as far as the Charger runs or until the Charger hits a wall, at which point the Charger begins to pummel the Survivor. Survivors being pummeled are completely helpless and will be killed unless the Charger is killed. The Charger is also unique among the Special Infected in that, due to his hulking size, he cannot be shoved; only frag rounds and explosives can stumble a Charger.
Jockey - a small Infected with a hunchback that can "ride" a Survivor, hence the name. A Jockey can control a Survivor's movement while riding them, steering the Survivor into danger. A Survivor suffers damage as long as a Jockey rides them and is completely helpless unless they are incapacitated or the Jockey is shoved or killed.
Finally, there are two special "boss" Infected characters that the AI Director includes at rare moments:
The Witch - a female Infected with glowing eyes and long clawed fingers. Witches are unique in that they are passive, preferring to be left alone to cry. However, if a Survivor "startles" her, either with their flashlight, by invading her space, or by damaging her, she will charge that Survivor and automatically incapacitate them. Once incapacitated, the Witch will viciously attack the downed Survivor until either they are killed or she is, or until she changes targets after being set on fire. If she kills the Survivor who initially disturbed her, she will run away. On higher difficulty levels, a Witch can automatically kill a Survivor. She is particularly durable and can run faster than Survivors, making it difficult to slay her before someone takes damage. She is the strongest and fastest Infected on foot, and has the second-highest hit point total, behind the Tank. She can be stunned by explosives and frag rounds.
The Tank - an enormous Infected with superhuman strength. The Tank can punch Survivors and smash them to the floor, or can throw debris and rocks, or punch cars, dumpsters, and other large objects at them (the latter resulting in instant incapacitation). Kiting is a valid tactic, but Survivors must keep it up for an extended period of time: a Tank on Normal difficulty has ten times the health of all the Survivors combined. In Versus, players may not control the Witch, and may only play as a Tank if the AI Director has already chosen to spawn one. He is the second-strongest Infected and has by far the highest number of hit points of any Infected. Due to his gargantuan size, the Tank can only be stumbled by exploding Boomers, oxygen canisters or propane tanks.
Legacy
The Left 4 Dead series has a considerably large and active fanbase; a large amount of community-made content is available for the two games, with The Last Stand, a community-made content expansion, notably being endorsed by Valve and released as an official update for Left 4 Dead 2.
Purple Francis
Purple Francis is a nonexistent protagonist in the series, created as a hoax by two teenagers on February 12, 2021 on the Left 4 Dead community wiki, which subsequently received a cult following within the fanbase for its dedication and convincing "integration" into the series' lore. The hoax consisted of a series of vandalistic edits to the Left 4 Dead wiki, describing a nonexistent playable character named "Purple Francis", visually a picture of Francis edited with a purple hue. The edits provided detailed invented lore and cross-references to over a dozen edited pages, where Purple Francis was discussed in detail alongside existing characters. The lore is highly absurd and nonsensical in nature, such as his birth in a candle factory and turning purple through an iron-rich diet, which made him "incredibly attractive to women", and a daughter who had an affair with Ronald Reagan. "Real life events" attributed to his "existence" included ten players passing out after hearing his voice in the game for the first time and his later removal from retail copies of the game by Valve "due to negative feedback". "Screenshots" of Purple Francis were also supplied, depicting the character crudely photoshopped into existing artwork throughout the wiki.
While the edits were eventually reverted a few days after creation, the Left 4 Dead fan community seemed to embrace the concept of Purple Francis, and a mod on Steam Workshop was released on February 14 by a developer who had worked on The Last Stand update, claiming to restore the "cut content" depicting Purple Francis. The original creators of Purple Francis were later identified as "Lucy" and "Unknowntrope," two teenagers who had originally come up with the concept for humor.
See also
Back 4 Blood, a game made by Turtle Rock Studios that includes some developers who worked on Left 4 Dead.
The Walking Dead, a game made by Telltale Games that was planned as a Left 4 Dead spin-off early in development.
Payday 2, a game often compared to Left 4 Dead.
References
External links
Asymmetrical multiplayer video games
Valve Corporation franchises
Video game franchises
Video game franchises introduced in 2008 | Left 4 Dead (franchise) | Physics | 6,010 |
41,660 | https://en.wikipedia.org/wiki/Resonance | Resonance is a phenomenon that occurs when an object or system is subjected to an external force or vibration that matches its natural frequency. When this happens, the object or system absorbs energy from the external force and starts vibrating with a larger amplitude. Resonance can occur in various systems, such as mechanical, electrical, or acoustic systems, and it is often desirable in certain applications, such as musical instruments or radio receivers. However, resonance can also be detrimental, leading to excessive vibrations or even structural failure in some cases.
All systems, including molecular systems and particles, tend to vibrate at a natural frequency depending upon their structure; this frequency is known as a resonant frequency or resonance frequency. When an oscillating force, an external vibration, is applied at a resonant frequency of a dynamic system, object, or particle, the outside vibration will cause the system to oscillate at a higher amplitude (with more force) than when the same force is applied at other, non-resonant frequencies.
The resonant frequencies of a system can be identified when the response to an external vibration creates an amplitude that is a relative maximum within the system. Small periodic forces that are near a resonant frequency of the system have the ability to produce large amplitude oscillations in the system due to the storage of vibrational energy.
Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, orbital resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR) and resonance of quantum wave functions. Resonant systems can be used to generate vibrations of a specific frequency (e.g., musical instruments), or pick out specific frequencies from a complex vibration containing many frequencies (e.g., filters).
The term resonance (from Latin resonantia, 'echo', from resonare, 'resound') originated from the field of acoustics, particularly the sympathetic resonance observed in musical instruments, e.g., when one string starts to vibrate and produce sound after a different one is struck.
Overview
Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a simple pendulum). However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is a frequency of unforced vibrations. Some systems have multiple and distinct resonant frequencies.
Examples
A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) makes the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo produce smaller arcs. This is because the energy the swing absorbs is maximized when the pushes match the swing's natural oscillations.
Resonance occurs widely in nature, and is exploited in many devices. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. For example, when hard objects like metal, glass, or wood are struck, there are brief resonant vibrations in the object. Light and other short wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. Other examples of resonance include:
Timekeeping mechanisms of modern clocks and watches, e.g., the balance wheel in a mechanical watch and the quartz crystal in a quartz watch
Tidal resonance of the Bay of Fundy
Acoustic resonances of musical instruments and the human vocal tract
Shattering of a crystal wineglass when exposed to a musical tone of the right pitch (its resonant frequency)
Friction idiophones, such as making a glass object (glass, bottle, vase) vibrate by rubbing around its rim with a fingertip
Electrical resonance of tuned circuits in radios and TVs that allow radio frequencies to be selectively received
Creation of coherent light by optical resonance in a laser cavity
Orbital resonance as exemplified by some moons of the Solar System's giant planets and resonant groups such as the plutinos
Material resonances in atomic scale are the basis of several spectroscopic techniques that are used in condensed matter physics
Electron spin resonance
Mössbauer effect
Nuclear magnetic resonance
Linear systems
Resonance manifests itself in many linear and nonlinear systems as oscillations around an equilibrium point. When the system is driven by a sinusoidal external input, a measured output of the system may oscillate in response. The ratio of the amplitude of the output's steady-state oscillations to the input's oscillations is called the gain, and the gain can be a function of the frequency of the sinusoidal external input. Peaks in the gain at certain frequencies correspond to resonances, where the amplitude of the measured output's oscillations are disproportionately large.
Since many linear and nonlinear systems that oscillate are modeled as harmonic oscillators near their equilibria, this section begins with a derivation of the resonant frequency for a driven, damped harmonic oscillator. An RLC circuit is then used to illustrate connections between resonance and a system's transfer function, frequency response, poles, and zeroes. Building off the RLC circuit example, these connections are then generalized for higher-order linear systems with multiple inputs and outputs.
The driven, damped harmonic oscillator
Consider a damped mass on a spring driven by a sinusoidal, externally applied force. Newton's second law takes the form
where m is the mass, x is the displacement of the mass from the equilibrium point, F0 is the driving amplitude, ω is the driving angular frequency, k is the spring constant, and c is the viscous damping coefficient. This can be rewritten in the form
where ω0 is called the undamped angular frequency of the oscillator or the natural frequency, and ζ is called the damping ratio.
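The displayed equations that these definitions refer to do not appear in the text above; in the standard form consistent with the symbols just defined, the equation of motion and its rewritten version are:

```latex
m\,\ddot{x} + c\,\dot{x} + kx = F_0 \sin(\omega t)
\quad\Longleftrightarrow\quad
\ddot{x} + 2\zeta\omega_0\,\dot{x} + \omega_0^2\,x = \frac{F_0}{m}\sin(\omega t),
\qquad
\omega_0 = \sqrt{\frac{k}{m}},
\qquad
\zeta = \frac{c}{2\sqrt{mk}}.
```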
Many sources also refer to ω0 as the resonant frequency. However, as shown below, when analyzing oscillations of the displacement x(t), the resonant frequency is close to but not the same as ω0. In general the resonant frequency is close to but not necessarily the same as the natural frequency. The RLC circuit example in the next section gives examples of different resonant frequencies for the same system.
The general solution of Equation () is the sum of a transient solution that depends on initial conditions and a steady state solution that is independent of initial conditions and depends only on the driving amplitude F0, driving frequency ω, undamped angular frequency ω0, and the damping ratio ζ. The transient solution decays in a relatively short amount of time, so to study resonance it is sufficient to consider the steady state solution.
It is possible to write the steady-state solution for x(t) as a function proportional to the driving force with an induced phase change φ,
where the phase value φ is usually taken to be between −180° and 0, so it represents a phase lag for both positive and negative values of the arctan argument.
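The steady-state solution and phase referred to here are not shown in the text; their usual form for a driven, damped harmonic oscillator, in the notation above, is:

```latex
x(t) = \frac{F_0}{m\sqrt{\left(2\omega\omega_0\zeta\right)^2 + \left(\omega_0^2 - \omega^2\right)^2}}\,\sin(\omega t + \varphi),
\qquad
\varphi = \arctan\!\left(\frac{2\omega\omega_0\zeta}{\omega^2 - \omega_0^2}\right) + n\pi .
```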
Resonance occurs when, at certain driving frequencies, the steady-state amplitude of x(t) is large compared to its amplitude at other driving frequencies. For the mass on a spring, resonance corresponds physically to the mass's oscillations having large displacements from the spring's equilibrium position at certain driving frequencies. Looking at the amplitude of x(t) as a function of the driving frequency ω, the amplitude is maximal at the driving frequency
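The displayed expression for this frequency is missing; in the standard treatment it is

```latex
\omega_r = \omega_0\sqrt{1 - 2\zeta^2}\,.
```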
ωr is the resonant frequency for this system. Again, the resonant frequency does not equal the undamped angular frequency ω0 of the oscillator. They are proportional, and if the damping ratio goes to zero they are the same, but for non-zero damping they are not the same frequency. As shown in the figure, resonance may also occur at other frequencies near the resonant frequency, including ω0, but the maximum response is at the resonant frequency.
Also, ωr is only real and non-zero if ζ < 1/√2, so this system can only resonate when the harmonic oscillator is significantly underdamped. For systems with a very small damping ratio and a driving frequency near the resonant frequency, the steady state oscillations can become very large.
The pendulum
For other driven, damped harmonic oscillators whose equations of motion do not look exactly like the mass on a spring example, the resonant frequency remains
but the definitions of ω0 and ζ change based on the physics of the system. For a pendulum of length ℓ and small displacement angle θ, Equation () becomes
and therefore
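The pendulum equation and parameter definitions referenced here are not shown. Assuming a viscous damping force with coefficient c acting on a bob of mass m, a standard small-angle form is:

```latex
\ddot{\theta} + 2\zeta\omega_0\,\dot{\theta} + \omega_0^2\,\theta = \frac{F_0}{m\ell}\sin(\omega t),
\qquad
\omega_0 = \sqrt{\frac{g}{\ell}},
\qquad
\zeta = \frac{c}{2m}\sqrt{\frac{\ell}{g}}\,.
```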
RLC series circuits
Consider a circuit consisting of a resistor with resistance R, an inductor with inductance L, and a capacitor with capacitance C connected in series with current i(t) and driven by a voltage source with voltage vin(t). The voltage drop around the circuit is
Rather than analyzing a candidate solution to this equation like in the mass on a spring example above, this section will analyze the frequency response of this circuit. Taking the Laplace transform of Equation (),
where I(s) and Vin(s) are the Laplace transform of the current and input voltage, respectively, and s is a complex frequency parameter in the Laplace domain. Rearranging terms,
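The displayed circuit equations for this step appear to be missing. A conventional reconstruction, applying Kirchhoff's voltage law to the series loop and using the symbols already introduced, is:

```latex
v_{\mathrm{in}}(t) = L\frac{\mathrm{d}i}{\mathrm{d}t} + R\,i(t) + \frac{1}{C}\int_0^t i(\tau)\,\mathrm{d}\tau,
\qquad
V_{\mathrm{in}}(s) = \left(Ls + R + \frac{1}{Cs}\right) I(s),
\qquad
I(s) = \frac{s}{L\left(s^2 + \frac{R}{L}s + \frac{1}{LC}\right)}\,V_{\mathrm{in}}(s).
```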
Voltage across the capacitor
An RLC circuit in series presents several options for where to measure an output voltage. Suppose the output voltage of interest is the voltage drop across the capacitor. As shown above, in the Laplace domain this voltage is
or
Define for this circuit a natural frequency and a damping ratio,
The ratio of the output voltage to the input voltage becomes
H(s) is the transfer function between the input voltage and the output voltage. This transfer function has two poles–roots of the polynomial in the transfer function's denominator–at
and no zeros–roots of the polynomial in the transfer function's numerator. Moreover, for ζ ≤ 1, the magnitude of these poles is the natural frequency ω0, and for ζ < 1/√2, our condition for resonance in the harmonic oscillator example, the poles are closer to the imaginary axis than to the real axis.
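The displayed expressions for this subsection (the capacitor voltage, the parameter definitions, the transfer function, and the pole locations) are not shown in the text; in the standard form they read:

```latex
V_{\mathrm{out}}(s) = \frac{1}{sC}\,I(s),
\qquad
\omega_0 = \frac{1}{\sqrt{LC}},
\qquad
\zeta = \frac{R}{2}\sqrt{\frac{C}{L}},
\qquad
H(s) = \frac{V_{\mathrm{out}}(s)}{V_{\mathrm{in}}(s)} = \frac{\omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2},
\qquad
\text{poles at } s = -\zeta\omega_0 \pm i\,\omega_0\sqrt{1 - \zeta^2}\ \ (\zeta \le 1).
```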
Evaluating H(s) along the imaginary axis s = iω, the transfer function describes the frequency response of this circuit. Equivalently, the frequency response can be analyzed by taking the Fourier transform of Equation () instead of the Laplace transform. The transfer function, which is also complex, can be written as a gain and a phase, H(iω) = G(ω)e^{iΦ(ω)}.
A sinusoidal input voltage at frequency ω results in an output voltage at the same frequency that has been scaled by G(ω) and has a phase shift Φ(ω). The gain and phase can be plotted versus frequency on a Bode plot. For the RLC circuit's capacitor voltage, the gain of the transfer function H(iω) is
Note the similarity between the gain here and the amplitude in Equation (). Once again, the gain is maximized at the resonant frequency
Here, the resonance corresponds physically to having a relatively large amplitude for the steady state oscillations of the voltage across the capacitor compared to its amplitude at other driving frequencies.
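For reference, the gain and the resonant frequency discussed above take the form

```latex
G(\omega) = \left|H(i\omega)\right| = \frac{\omega_0^2}{\sqrt{\left(2\omega\omega_0\zeta\right)^2 + \left(\omega_0^2 - \omega^2\right)^2}},
\qquad
\omega_r = \omega_0\sqrt{1 - 2\zeta^2}\,.
```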
Voltage across the inductor
The resonant frequency need not always take the form given in the examples above. For the RLC circuit, suppose instead that the output voltage of interest is the voltage across the inductor. As shown above, in the Laplace domain the voltage across the inductor is
using the same definitions for ω0 and ζ as in the previous example. The transfer function between Vin(s) and this new Vout(s) across the inductor is
This transfer function has the same poles as the transfer function in the previous example, but it also has two zeroes in the numerator at . Evaluating H(s) along the imaginary axis, its gain becomes
Compared to the gain in Equation () using the capacitor voltage as the output, this gain has a factor of ω2 in the numerator and will therefore have a different resonant frequency that maximizes the gain. That frequency is
So for the same RLC circuit but with the voltage across the inductor as the output, the resonant frequency is now larger than the natural frequency, though it still tends towards the natural frequency as the damping ratio goes to zero. That the same circuit can have different resonant frequencies for different choices of output is not contradictory. As shown in Equation (), the voltage drop across the circuit is divided among the three circuit elements, and each element has different dynamics. The capacitor's voltage grows slowly by integrating the current over time and is therefore more sensitive to lower frequencies, whereas the inductor's voltage grows when the current changes rapidly and is therefore more sensitive to higher frequencies. While the circuit as a whole has a natural frequency where it tends to oscillate, the different dynamics of each circuit element make each element resonate at a slightly different frequency.
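The displayed formulas for the inductor output are missing; the standard expressions, including the double zero at s = 0 mentioned in the text, are:

```latex
V_{\mathrm{out}}(s) = Ls\,I(s),
\qquad
H(s) = \frac{s^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2},
\qquad
G(\omega) = \frac{\omega^2}{\sqrt{\left(2\omega\omega_0\zeta\right)^2 + \left(\omega_0^2 - \omega^2\right)^2}},
\qquad
\omega_r = \frac{\omega_0}{\sqrt{1 - 2\zeta^2}}\,.
```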
Voltage across the resistor
Suppose that the output voltage of interest is the voltage across the resistor. In the Laplace domain the voltage across the resistor is
and using the same natural frequency and damping ratio as in the capacitor example the transfer function is
This transfer function also has the same poles as the previous RLC circuit examples, but it only has one zero in the numerator at s = 0. For this transfer function, its gain is
The resonant frequency that maximizes this gain is
and the gain is one at this frequency, so the voltage across the resistor resonates at the circuit's natural frequency and at this frequency the amplitude of the voltage across the resistor equals the input voltage's amplitude.
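The corresponding standard expressions for the resistor output, whose gain is indeed one at ω = ω0, are:

```latex
V_{\mathrm{out}}(s) = R\,I(s),
\qquad
H(s) = \frac{2\zeta\omega_0\,s}{s^2 + 2\zeta\omega_0 s + \omega_0^2},
\qquad
G(\omega) = \frac{2\zeta\omega_0\,\omega}{\sqrt{\left(2\omega\omega_0\zeta\right)^2 + \left(\omega_0^2 - \omega^2\right)^2}},
\qquad
\omega_r = \omega_0\,.
```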
Antiresonance
Some systems exhibit antiresonance that can be analyzed in the same way as resonance. For antiresonance, the amplitude of the response of the system at certain frequencies is disproportionately small rather than being disproportionately large. In the RLC circuit example, this phenomenon can be observed by analyzing both the inductor and the capacitor combined.
Suppose that the output voltage of interest in the RLC circuit is the voltage across the inductor and the capacitor combined in series. Equation () showed that the sum of the voltages across the three circuit elements sums to the input voltage, so measuring the output voltage as the sum of the inductor and capacitor voltages combined is the same as vin minus the voltage drop across the resistor. The previous example showed that at the natural frequency of the system, the amplitude of the voltage drop across the resistor equals the amplitude of vin, and therefore the voltage across the inductor and capacitor combined has zero amplitude. We can show this with the transfer function.
The sum of the inductor and capacitor voltages is
Using the same natural frequency and damping ratios as the previous examples, the transfer function is
This transfer function has the same poles as the previous examples but has zeroes at
Evaluating the transfer function along the imaginary axis, its gain is
Rather than look for resonance, i.e., peaks of the gain, notice that the gain goes to zero at ω = ω0, which complements our analysis of the resistor's voltage. This is called antiresonance, which has the opposite effect of resonance. Rather than result in outputs that are disproportionately large at this frequency, this circuit with this choice of output has no response at all at this frequency. The frequency that is filtered out corresponds exactly to the zeroes of the transfer function, which were shown in Equation () and were on the imaginary axis.
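The missing displayed formulas for the combined inductor and capacitor output, including the purely imaginary zeroes responsible for the antiresonance, are conventionally written:

```latex
V_{\mathrm{out}}(s) = \left(Ls + \frac{1}{Cs}\right) I(s),
\qquad
H(s) = \frac{s^2 + \omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2},
\qquad
\text{zeroes at } s = \pm i\,\omega_0,
\qquad
G(\omega) = \frac{\left|\omega_0^2 - \omega^2\right|}{\sqrt{\left(2\omega\omega_0\zeta\right)^2 + \left(\omega_0^2 - \omega^2\right)^2}}\,.
```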
Relationships between resonance and frequency response in the RLC series circuit example
These RLC circuit examples illustrate how resonance is related to the frequency response of the system. Specifically, these examples illustrate:
How resonant frequencies can be found by looking for peaks in the gain of the transfer function between the input and output of the system, for example in a Bode magnitude plot
How the resonant frequency for a single system can be different for different choices of system output
The connection between the system's natural frequency, the system's damping ratio, and the system's resonant frequency
The connection between the system's natural frequency and the magnitude of the transfer function's poles, pointed out in Equation (), and therefore a connection between the poles and the resonant frequency
A connection between the transfer function's zeroes and the shape of the gain as a function of frequency, and therefore a connection between the zeroes and the resonant frequency that maximizes gain
A connection between the transfer function's zeroes and antiresonance
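As a concrete numerical illustration of the points just listed, the following Python sketch evaluates the three gain expressions reconstructed above for one hypothetical choice of component values; R, L, and C are arbitrary numbers chosen for illustration, not values taken from this article.

```python
import numpy as np

# Hypothetical component values (illustration only).
R, L, C = 10.0, 1e-3, 1e-6            # ohms, henries, farads
w0 = 1.0 / np.sqrt(L * C)             # natural frequency (rad/s)
zeta = (R / 2.0) * np.sqrt(C / L)     # damping ratio

w = np.linspace(0.1 * w0, 3.0 * w0, 200_000)
den = np.sqrt((2 * w * w0 * zeta) ** 2 + (w0 ** 2 - w ** 2) ** 2)

gains = {
    "capacitor": w0 ** 2 / den,           # peaks below w0
    "inductor": w ** 2 / den,             # peaks above w0
    "resistor": 2 * zeta * w0 * w / den,  # peaks exactly at w0
}

for name, gain in gains.items():
    w_peak = w[np.argmax(gain)]
    print(f"{name:9s}: peak gain at {w_peak:.0f} rad/s (w0 = {w0:.0f} rad/s)")
```

Running the sketch shows three slightly different peak frequencies for the same circuit, matching the closed-form results ω0√(1 − 2ζ²), ω0/√(1 − 2ζ²), and ω0.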
The next section extends these concepts to resonance in a general linear system.
Generalizing resonance and antiresonance for linear systems
Next consider an arbitrary linear system with multiple inputs and outputs. For example, in state-space representation a third order linear time-invariant system with three inputs and two outputs might be written as
where ui(t) are the inputs, xi(t) are the state variables, yi(t) are the outputs, and A, B, C, and D are matrices describing the dynamics between the variables.
This system has a transfer function matrix whose elements are the transfer functions between the various inputs and outputs. For example,
Each Hij(s) is a scalar transfer function linking one of the inputs to one of the outputs. The RLC circuit examples above had one input voltage and showed four possible output voltages–across the capacitor, across the inductor, across the resistor, and across the capacitor and inductor combined in series–each with its own transfer function. If the RLC circuit were set up to measure all four of these output voltages, that system would have a 4×1 transfer function matrix linking the single input to each of the four outputs.
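The state-space equations and transfer function matrix referred to above are not displayed in the text; in the standard form they read

```latex
\dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,\mathbf{u}(t),
\qquad
\mathbf{y}(t) = C\,\mathbf{x}(t) + D\,\mathbf{u}(t),
\qquad
H(s) = C\,(sI - A)^{-1}B + D,
```

so that for the three-input, two-output example above H(s) is a 2×3 matrix whose entry Hij(s) links input j to output i.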
Evaluated along the imaginary axis, each Hij(iω) can be written as a gain and phase shift,
Peaks in the gain at certain frequencies correspond to resonances between that transfer function's input and output, assuming the system is stable.
Each transfer function Hij(s) can also be written as a fraction whose numerator and denominator are polynomials of s.
The complex roots of the numerator are called zeroes, and the complex roots of the denominator are called poles. For a stable system, the positions of these poles and zeroes on the complex plane give some indication of whether the system can resonate or antiresonate and at which frequencies. In particular, any stable or marginally stable, complex conjugate pair of poles with imaginary components can be written in terms of a natural frequency and a damping ratio, as in Equation (). The natural frequency ω0 of that pole is the magnitude of the position of the pole on the complex plane, and the damping ratio of that pole determines how quickly that oscillation decays. In general,
Complex conjugate pairs of poles near the imaginary axis correspond to a peak or resonance in the frequency response in the vicinity of the pole's natural frequency. If the pair of poles is on the imaginary axis, the gain is infinite at that frequency.
Complex conjugate pairs of zeroes near the imaginary axis correspond to a notch or antiresonance in the frequency response in the vicinity of the zero's frequency, i.e., the frequency equal to the magnitude of the zero. If the pair of zeroes is on the imaginary axis, the gain is zero at that frequency.
In the RLC circuit example, the first generalization relating poles to resonance is observed in Equation (). The second generalization relating zeroes to antiresonance is observed in Equation (). In the examples of the harmonic oscillator, the RLC circuit capacitor voltage, and the RLC circuit inductor voltage, "poles near the imaginary axis" corresponds to the significantly underdamped condition ζ < 1/√2.
Standing waves
A physical system can have as many natural frequencies as it has degrees of freedom and can resonate near each of those natural frequencies. A mass on a spring, which has one degree of freedom, has one natural frequency. A double pendulum, which has two degrees of freedom, can have two natural frequencies. As the number of coupled harmonic oscillators increases, the time it takes to transfer energy from one to the next becomes significant. Systems with very large numbers of degrees of freedom can be thought of as continuous rather than as having discrete oscillators.
Energy transfers from one oscillator to the next in the form of waves. For example, the string of a guitar or the surface of water in a bowl can be modeled as a continuum of small coupled oscillators and waves can travel along them. In many cases these systems have the potential to resonate at certain frequencies, forming standing waves with large-amplitude oscillations at fixed positions. Resonance in the form of standing waves underlies many familiar phenomena, such as the sound produced by musical instruments, electromagnetic cavities used in lasers and microwave ovens, and energy levels of atoms.
Standing waves on a string
When a string of fixed length is driven at a particular frequency, a wave propagates along the string at the same frequency. The waves reflect off the ends of the string, and eventually a steady state is reached with waves traveling in both directions. The waveform is the superposition of the waves.
At certain frequencies, the steady state waveform does not appear to travel along the string. At fixed positions called nodes, the string is never displaced. Between the nodes the string oscillates and exactly halfway between the nodes–at positions called anti-nodes–the oscillations have their largest amplitude.
For a string of length L with fixed ends, the displacement y(x, t) of the string perpendicular to the x-axis at time t is
y(x, t) = 2 y_max sin(kx) cos(2πft),
where
y_max is the amplitude of the left- and right-traveling waves interfering to form the standing wave,
k is the wave number,
f is the frequency.
The frequencies that resonate and form standing waves relate to the length of the string as
f = nv/(2L),    n = 1, 2, 3, ...,
where v is the speed of the wave and the integer n denotes different modes or harmonics. The standing wave with n = 1 oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. The possible modes of oscillation form a harmonic series.
Resonance in complex networks
A generalization to complex networks of coupled harmonic oscillators shows that such systems have a finite number of natural resonant frequencies, related to the topological structure of the network itself. In particular, such frequencies are related to the eigenvalues of the network's Laplacian matrix. Let A be the adjacency matrix describing the topological structure of the network and L = D − A the corresponding Laplacian matrix, where D is the diagonal matrix of the degrees of the network's nodes. Then, for a network of classical and identical harmonic oscillators, when a sinusoidal driving force is applied to a specific node, the global resonant frequencies of the network are determined by the eigenvalues λi of the Laplacian L.
Types
Mechanical
Mechanical resonance is the tendency of a mechanical system to absorb more energy when the frequency of its oscillations matches the system's natural frequency of vibration than it does at other frequencies. It may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings, trains, and aircraft. When designing objects, engineers must ensure the mechanical resonance frequencies of the component parts do not match driving vibrational frequencies of motors or other oscillating parts, a phenomenon known as resonance disaster.
Avoiding resonance disasters is a major concern in every building, tower, and bridge construction project. As a countermeasure, shock mounts can be installed to absorb resonant frequencies and thus dissipate the absorbed energy. The Taipei 101 building relies on a 660-tonne pendulum—a tuned mass damper—to cancel resonance. Furthermore, the structure is designed to resonate at a frequency that does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. In addition, engineers designing objects having engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts.
Clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal.
The cadence of runners has been hypothesized to be energetically favorable due to resonance between the elastic energy stored in the lower limb and the mass of the runner.
International Space Station
The rocket engines for the International Space Station (ISS) are controlled by an autopilot. Ordinarily, uploaded parameters for controlling the engine control system for the Zvezda module make the rocket engines boost the International Space Station to a higher orbit. The rocket engines are hinge-mounted, and ordinarily the crew does not notice the operation. On January 14, 2009, however, the uploaded parameters made the autopilot swing the rocket engines in larger and larger oscillations, at a frequency of 0.5 Hz. These oscillations were captured on video, and lasted for 142 seconds.
Acoustic
Acoustic resonance is a branch of mechanical resonance that is concerned with the mechanical vibrations across the frequency range of human hearing, in other words sound. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz). Many objects and materials act as resonators with resonant frequencies within this range, and when struck vibrate mechanically, pushing on the surrounding air to create sound waves. This is the source of many percussive sounds we hear.
Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of, and tension on, a drum membrane.
Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass, although this is difficult in practice.
Electrical
Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedance of the circuit is at a minimum in a series circuit or at a maximum in a parallel circuit (usually when the transfer function peaks in absolute value). Resonance in circuits is used for both transmitting and receiving wireless communications such as television, cell phones and radio.
Optical
An optical cavity, also called an optical resonator, is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times producing standing waves for certain resonant frequencies. The standing wave patterns produced are called "modes". Longitudinal modes differ only in frequency while transverse modes differ for different frequencies and have different intensity patterns across the cross-section of the beam. Ring resonators and whispering galleries are examples of optical resonators that do not form standing waves.
Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them; flat mirrors are not often used because of the difficulty of aligning them precisely. The geometry (resonator type) must be chosen so the beam remains stable, i.e., the beam size does not continue to grow with each reflection. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point (and therefore intense light at that point) inside the cavity.
Optical cavities are designed to have a very large Q factor. A beam reflects a large number of times with little attenuation—therefore the frequency line width of the beam is small compared to the frequency of the laser.
Additional optical resonances are guided-mode resonances and surface plasmon resonance, which result in anomalous reflection and high evanescent fields at resonance. In this case, the resonant modes are guided modes of a waveguide or surface plasmon modes of a dielectric-metallic interface. These modes are usually excited by a subwavelength grating.
Orbital
In celestial mechanics, an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self-correcting, so that the bodies remain in resonance. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa, and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet.
Atomic, particle, and molecular
Nuclear magnetic resonance (NMR) is the name given to a physical resonance phenomenon involving the observation of specific quantum mechanical magnetic properties of an atomic nucleus in the presence of an applied, external magnetic field. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals, and non-crystalline materials through NMR spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI).
All nuclei containing odd numbers of nucleons have an intrinsic magnetic moment and angular momentum. A key feature of NMR is that the resonant frequency of a particular substance is directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonant frequencies of the sample's nuclei depend on where in the field they are located. Therefore, the particle can be located quite precisely by its resonant frequency.
Electron paramagnetic resonance, otherwise known as electron spin resonance (ESR), is a spectroscopic technique similar to NMR, but uses unpaired electrons instead. Materials for which this can be applied are much more limited since the material needs to both have an unpaired spin and be paramagnetic.
The Mössbauer effect is the resonant and recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form.
Resonance in particle physics appears in similar circumstances to classical physics at the level of quantum mechanics and quantum field theory. Resonances can also be thought of as unstable particles, with the formula in the Universal resonance curve section of this article applying if Γ is the particle's decay rate and Ω is the particle's mass M. In that case, the formula comes from the particle's propagator, with its mass replaced by the complex number M + iΓ. The formula is further related to the particle's decay rate by the optical theorem.
Disadvantages
A column of soldiers marching in regular step on a narrow and structurally flexible bridge can set it into dangerously large amplitude oscillations. On April 12, 1831, the Broughton Suspension Bridge near Salford, England collapsed while a group of British soldiers were marching across. Since then, the British Army has had a standing order for soldiers to break stride when marching across bridges, to avoid resonance from their regular marching pattern affecting the bridge.
Vibrations of a motor or engine can induce resonant vibration in its supporting structures if their natural frequency is close to that of the vibrations of the engine. A common example is the rattling sound of a bus body when the engine is left idling.
Structural resonance of a suspension bridge induced by winds can lead to its catastrophic collapse. Several early suspension bridges in Europe and United States were destroyed by structural resonance induced by modest winds. The collapse of the Tacoma Narrows Bridge on 7 November 1940 is characterized in physics as a classic example of resonance. It has been argued by Robert H. Scanlan and others that the destruction was instead caused by aeroelastic flutter, a complicated interaction between the bridge and the winds passing through it—an example of a self oscillation, or a kind of "self-sustaining vibration" as referred to in the nonlinear theory of vibrations.
Q factor
The Q factor or quality factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is, and characterizes the bandwidth of a resonator relative to its center frequency.
A high value for Q indicates a lower rate of energy loss relative to the stored energy, i.e., the system is lightly damped. The parameter is defined by the equation Q = fr/Δf, where fr is the resonant frequency and Δf is the half-power bandwidth of the resonance.
The higher the Q factor, the greater the amplitude at the resonant frequency, and the smaller the bandwidth, the range of frequencies around the resonant frequency over which resonance occurs. In electrical resonance, a high-Q circuit in a radio receiver is more difficult to tune, but has greater selectivity, and so would be better at filtering out signals from other stations. High-Q oscillators are more stable.
Examples that normally have a low Q factor include door closers (Q = 0.5). Systems with high Q factors include tuning forks (Q = 1000), atomic clocks and lasers (Q ≈ 10¹¹).
Universal resonance curve
The exact response of a resonance, especially for frequencies far from the resonant frequency, depends on the details of the physical system, and is usually not exactly symmetric about the resonant frequency, as illustrated for the simple harmonic oscillator above.
For a lightly damped linear oscillator with a resonance frequency Ω, the intensity of oscillations I when the system is driven with a driving frequency ω is typically approximated by a formula that is symmetric about the resonance frequency, where the susceptibility χ(ω) links the amplitude of the oscillator to the driving force in frequency space.
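The displayed approximation appears to have been lost. One common way of writing the symmetric (Lorentzian) approximation, with the peak normalized to one and with Ω the resonance frequency and Γ the linewidth introduced below, is:

```latex
I(\omega) \equiv \left|\chi(\omega)\right|^2 \;\approx\; \frac{(\Gamma/2)^2}{(\omega - \Omega)^2 + (\Gamma/2)^2},
\qquad
\left|\chi(\omega)\right| \;\propto\; \frac{1}{\sqrt{(\omega - \Omega)^2 + (\Gamma/2)^2}}\,.
```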
The intensity is defined as the square of the amplitude of the oscillations. This is a Lorentzian function, or Cauchy distribution, and this response is found in many physical situations involving resonant systems. Γ is a parameter dependent on the damping of the oscillator, and is known as the linewidth of the resonance. Heavily damped oscillators tend to have broad linewidths, and respond to a wider range of driving frequencies around the resonant frequency. The linewidth is inversely proportional to the Q factor, which is a measure of the sharpness of the resonance.
In radio engineering and electronics engineering, this approximate symmetric response is known as the universal resonance curve, a concept introduced by Frederick E. Terman in 1932 to simplify the approximate analysis of radio circuits with a range of center frequencies and Q values.
See also
Cymatics
Driven harmonic motion
Earthquake engineering
Electric dipole spin resonance
Formant
Limbic resonance
Nonlinear resonance
Normal mode
Positive feedback
Schumann resonance
Simple harmonic motion
Stochastic resonance
Sympathetic string
Resonance (chemistry)
Fermi resonance
Resonance (particle physics)
Notes
References
External links
The Feynman Lectures on Physics Vol. I Ch. 23: Resonance
Resonance - a chapter from an online textbook
Greene, Brian, "Resonance in strings". The Elegant Universe, NOVA (PBS)
Hyperphysics section on resonance concepts
Resonance versus resonant (usage of terms)
Wood and Air Resonance in a Harpsichord
Breaking glass with sound , including high-speed footage of glass breaking
Antennas (radio)
Oscillation | Resonance | Physics,Chemistry | 7,436 |
26,909,817 | https://en.wikipedia.org/wiki/Clavaria%20fragilis | Clavaria fragilis, commonly known as fairy fingers, white worm coral, or white spindles, is a species of fungus in the family Clavariaceae. It is synonymous with Clavaria vermicularis. The fungus is the type species of the genus Clavaria and is a typical member of the clavarioid or club fungi. It produces tubular, unbranched, white basidiocarps (fruit bodies) that typically grow in clusters. The fruit bodies can reach dimensions of tall by thick. There are several similar coral-like fungi.
Clavaria fragilis is a saprobic species, growing in woodland litter or in old, unimproved grassland. It is widespread throughout temperate regions in the Northern Hemisphere, but has also been reported from Australia and South Africa. The fungus is edible, but insubstantial and flavorless.
Taxonomy
Clavaria fragilis was originally described from Denmark in 1790 by Danish naturalist and mycologist Theodor Holmskjold, and was sanctioned under this name by Elias Magnus Fries in his 1821 Systema Mycologicum. The Latin epithet fragilis refers to the brittle fruit bodies. The species was redescribed by Swedish mycologist Olof Swartz in 1811, using the name Clavaria vermicularis (the epithet meaning "wormlike"). Though it is a later synonym—and thus obsolete according to the principle of priority—the latter name is still frequently used today. There are several other names considered to be synonymous with C. fragilis by the online taxonomical database MycoBank (see the taxobox).
In North America, the fungus has colloquially been called "fairy fingers" or "white worm coral". In the UK its recommended English name is "white spindles". British naturalist Samuel Frederick Gray called it the "worm club-stool" in his 1821 A Natural Arrangement of British Plants.
Description
The fruit bodies of C. fragilis are irregularly tubular, smooth to furrowed, sometimes compressed, very fragile, white, up to tall and thick, and typically grow in dense clusters. The tip of the fruit body tapers to a point, and may yellow and curve with age. There is no distinct stalk, although it is evident as a short, semitransparent zone of tissue at the base of the club. Microscopically, the hyphae of the flesh are swollen up to 12 μm wide and lack clamp connections. The spores are smooth, colourless, ellipsoid to oblong, measuring 5–7 by 3–4 μm. The spores are white in deposit. The basidia (spore bearing cells) measure 40–50 by 6–8 μm, and lack clamps at their bases.
Similar species
Similar fungi with simple, white fruit bodies include Clavaria acuta, an equally widespread species that typically grows singly or in small groups rather than in dense clusters and can be distinguished microscopically by its clamped basidia and larger spores; the morphologically similar, but rare C. atkinsoniana, found in the southwestern and central United States, which cannot be distinguished from C. fragilis by field characteristics alone but has larger spores—8.5–10 by 4.5–5 μm; C. rubicundula, another North American species, which is similar in stature but has a reddish tint; and Multiclavula mucida, a widespread lichenized species with smaller fruit bodies that occurs with its associated algae on moist wood.
Other similar species include Alloclavaria purpurea, Clavulinopsis fusiformis, Clavulinopsis laeticolor, and Macrotyphula juncea.
Distribution and habitat
The species occurs throughout the Northern Hemisphere, in Europe (from July to October), Asia, and North America. In North America, it is more common east of the Rocky Mountains. It has also been recorded from Australia and South Africa. In 2006, it was reported from the Arctic zone of the Ural Mountains, in Russia.
The fungus grows in woodland and in grassland on moist soil, and is presumed to be saprobic, rotting fallen leaf litter and dead grass stems. The fruit bodies tend to grow in groups, tufts or clusters. Although they can grow singly, they are typically inconspicuous unless in clusters.
Conservation status
In North America, Clavaria fragilis has been called "by far our most common Clavaria". In northern Europe, it is one of a suite of "CHEG" fungi (CHEG standing for "Clavarioid fungi-Hygrocybe-Entoloma-Geoglossaceae") considered to be indicator species of old, unimproved grassland (permanent grassland that has not been cultivated for some years). Though such grasslands are a threatened habitat in Europe, C. fragilis is one of the commoner CHEG species. It is, nonetheless, on the national red list of threatened fungi in the Netherlands and Slovenia.
Edibility
Clavaria fragilis is nonpoisonous and reportedly edible raw or cooked, but the fruit bodies are insubstantial and fragile. One field guide says "its flesh is tasteless and so delicate that it seems to dissolve in one's mouth." Its odor has been compared to iodine.
References
Clavariaceae
Edible fungi
Fungi of Africa
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungi described in 1790
Fungi of the Arctic
Fungus species | Clavaria fragilis | Biology | 1,144 |
474,880 | https://en.wikipedia.org/wiki/Hawthorne%20effect | The Hawthorne effect is a type of human behavior reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed. The effect was discovered in the context of research conducted at the Hawthorne Western Electric plant; however, some scholars think the descriptions are fictitious.
The original research involved workers who made electrical relays at the Hawthorne Works, a Western Electric plant in Cicero, Illinois. Between 1924 and 1927, the lighting study was conducted, wherein workers experienced a series of lighting changes that were said to increase productivity. This conclusion turned out to be false. In an Elton Mayo study that ran from 1927 to 1928, a series of changes in work structure were implemented (e.g. changes in rest periods) in a group of six women. However, this was a methodologically poor, uncontrolled study from which no firm conclusions could be drawn. Elton Mayo later conducted two additional experiments to study the phenomenon: the mass interviewing experiment (1928-1930) and the bank wiring observation experiment (1931-32).
One of the later interpretations by Henry Landsberger, a sociology professor at UNC-Chapel Hill, suggested that the novelty of being research subjects and the increased attention from such could lead to temporary increases in workers' productivity. This interpretation was dubbed "the Hawthorne effect".
History
The term "Hawthorne effect" was coined in 1953 by John R. P. French after the Hawthorne studies were conducted between 1924 and 1932 at the Hawthorne Works (a Western Electric factory in Cicero, outside Chicago). The Hawthorne Works had commissioned a study to determine if its workers would become more productive in brighter or dimmer levels of light. The workers' productivity seemed to improve when changes were made but returned to their original level when the study ended. It has been alternatively suggested that the workers' productivity increased because they were motivated by interest being shown in them.
This effect was observed for minute increases in illumination. In these lighting studies, light intensity was altered to examine the resulting effect on worker productivity. When discussing the Hawthorne effect, most industrial and organizational psychology textbooks refer almost exclusively to the illumination studies as opposed to the other types of studies that have been conducted.
Although early studies focused on altering workplace illumination, other changes such as maintaining clean work stations, clearing floors of obstacles, and relocating workstations have also been found to result in increased productivity for short periods of time. Thus, the Hawthorne effect can apply to a cause or causes other than changing lighting.
Illumination experiment
The illumination experiment was conducted from 1924 to 1927. The purpose was to determine the effect of light variations on worker productivity. The experiment ran in two rooms: the experiment room, in which workers went about their workday under various light levels; and the control room, in which workers did their tasks under normal conditions. The hypothesis was that as the light level was increased in the experiment room, productivity would increase.
However, when the intensity of light was increased in the experiment room, researchers found that productivity had improved in both rooms. The light level in the experiment room was then decreased, and the results were the same: increased productivity in both rooms. Productivity only began to decrease in the experiment room when the light level was reduced to about the level of moonlight, which made it hard to see.
Ultimately it was concluded that illumination did not have any effect on productivity and that there must have been some other variable causing the observed productivity increases in both rooms. Another phase of experiments was needed to pinpoint the cause.
Relay assembly experiments
In 1927, researchers conducted an experiment where they chose two female workers as test subjects and asked them to choose four other women to join the test group. Until 1928, the team of women worked in a separate room, assembling telephone relays.
Output was measured mechanically by counting how many finished relays each worker dropped down a chute. To establish a baseline productivity level, the measurement was begun in secret two weeks before the women were moved to the experiment room, and then continued throughout the study. In the experiment room, a supervisor discussed changes in their productivity.
Some of the variables were:
Giving the workers two 5-minute breaks (which they said they preferred beforehand) and then switching to two 10-minute breaks. Productivity increased, but when they were given six 5-minute breaks, productivity decreased because many rests broke the workers' flow.
Providing soup or coffee with a sandwich in the morning and snacks in the evening. This increased productivity.
Changing the end of the workday from 5:00 to 4:30 and eliminating the Saturday workday. This increased productivity.
Changing a variable usually increased productivity, even if the variable was just a change back to the original condition. It is said that this reflects natural adaptation to the environment without knowing the objective of the experiment. Researchers concluded that the workers worked harder because they thought that they were being monitored individually.
Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Elton Mayo's studies, was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment." Further, there was a second relay assembly test room study whose results were not as significant as the first experiment.
Mass interviewing program
The program was conducted from 1928 to 1930 and involved 20,000 interviews. The interviews initially used direct questioning, asking questions related to the supervision and policies of the company involved. The drawback of the direct questioning was that the answers were only "yes" or "no", which was unhelpful for finding the root of problems. Therefore, researchers took to indirect questioning, in which the interviewer would listen. This gave valuable insights about workers' behavior, specifically that the behavior of a worker (or individual) is shaped by group behavior.
Bank wiring room experiments
The purpose of the next study was to find out how payment incentives and small groups would affect productivity. The surprising result was that productivity actually decreased. Workers apparently had become suspicious that their productivity may have been boosted to justify firing some of the workers later on. The study was conducted by Elton Mayo and W. Lloyd Warner between 1931 and 1932 on a group of fourteen men who put together telephone switching equipment. The researchers found that although the workers were paid according to individual productivity, productivity decreased because the men were afraid that the company would lower the base rate. Detailed observation of the men revealed the existence of informal groups or "cliques" within the formal groups. These cliques developed informal rules of behavior as well as mechanisms to enforce them. The cliques served to control group members and to manage bosses; when bosses asked questions, clique members gave the same responses, even if they were untrue. These results show that workers were more responsive to the social force of their peer groups than to the control and incentives of management.
Interpretation and criticism
Richard Nisbett has described the Hawthorne effect as "a glorified anecdote", saying that "once you have got the anecdote, you can throw away the data." Other researchers have attempted to explain the effects with various interpretations. J. G. Adair warned of gross factual inaccuracy in most secondary publications on the Hawthorne effect and noted that many studies failed to find it. He argued that it should be viewed as a variant of Orne's (1973) experimental demand effect. For Adair, the Hawthorne effect depended on the participants' interpretation of the situation; an implication is that manipulation checks are important in social sciences experiments. He advanced the view that awareness of being observed was not the source of the effect, but that the participants' interpretation of the situation was critical, and that the open question was how this interpretation interacted with the participants' goals.
Possible explanations for the Hawthorne effect include the impact of feedback and motivation towards the experimenter. Receiving feedback on their performance may improve their skills when an experiment provides this feedback for the first time. Research on the demand effect also suggests that people may be motivated to please the experimenter, at least if it does not conflict with any other motive. They may also be suspicious of the purpose of the experimenter. Therefore, Hawthorne effect may only occur when there is usable feedback or a change in motivation.
Parsons defined the Hawthorne effect as "the confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do" [i.e. learning effects, both permanent skill improvement and feedback-enabled adjustments to suit current goals]. His key argument was that in the studies where workers dropped their finished goods down chutes, the participants had access to the counters of their work rate.
Mayo contended that the effect was due to the workers reacting to the sympathy and interest of the observers. He discussed the study as demonstrating an experimenter effect as a management effect: how management can make workers perform differently because they feel differently. He suggested that much of the Hawthorne effect concerned the workers feeling free and in control as a group rather than as being supervised. The experimental manipulations were important in convincing the workers to feel that conditions in the special five-person work group were actually different from the conditions on the shop floor. The study was repeated with similar effects on mica-splitting workers.
Clark and Sugrue in a review of educational research reported that uncontrolled novelty effects cause on average 30% of a standard deviation (SD) rise (i.e. 50–63% score rise), with the rise decaying to a much smaller effect after 8 weeks. In more detail: 50% of a SD for up to 4 weeks; 30% of SD for 5–8 weeks; and 20% of SD for > 8 weeks, (which is < 1% of the variance).
Harry Braverman pointed out that the Hawthorne tests were based on industrial psychology and the researchers involved were investigating whether workers' performance could be predicted by pre-hire testing. The Hawthorne study showed "that the performance of workers had little relation to their ability and in fact often bore an inverse relation to test scores ...". Braverman argued that the studies really showed that the workplace was not "a system of bureaucratic formal organisation on the Weberian model, nor a system of informal group relations, as in the interpretation of Mayo and his followers but rather a system of power, of class antagonisms". This discovery was a blow to those hoping to apply the behavioral sciences to manipulate workers in the interest of management.
The economists Steven Levitt and John A. List long pursued without success a search for the base data of the original illumination experiments (they were not true experiments but some authors labeled them experiments), before finding it in a microfilm at the University of Wisconsin in Milwaukee in 2011. Re-analysing it, they found slight evidence for the Hawthorne effect over the long-run, but in no way as drastic as suggested initially. This finding supported the analysis of an article by S. R. G. Jones in 1992 examining the relay experiments. Despite the absence of evidence for the Hawthorne effect in the original study, List has said that he remains confident that the effect is genuine.
Gustav Wickström and Tom Bendix (2000) argue that the supposed "Hawthorne effect" is actually ambiguous and disputable, and instead recommend that to evaluate intervention effectiveness, researchers should introduce specific psychological and social variables that may have affected the outcome.
It is also possible that the illumination experiments can be explained by a longitudinal learning effect. Parsons has declined to analyse the illumination experiments, on the grounds that they have not been properly published and so he cannot get at details, whereas he had extensive personal communication with Roethlisberger and Dickson.
Evaluation of the Hawthorne effect continues in the present day. Despite the criticisms, however, the phenomenon is often taken into account when designing studies and their conclusions. Some have also developed ways to avoid it. For instance, there is the case of holding the observation when conducting a field study from a distance, from behind a barrier such as a two-way mirror or using an unobtrusive measure.
Greenwood, Bolton, and Greenwood (1983) interviewed some of the participants in the experiments and found that the participants were paid significantly better. Bolton's archives relevant to his work on the Hawthorne Effect are held at West Virginia University.
Trial effect
Various medical scientists have studied possible trial effect (clinical trial effect) in clinical trials. Some postulate that, beyond just attention and observation, there may be other factors involved, such as slightly better care; slightly better compliance/adherence; and selection bias. The latter may have several mechanisms: (1) Physicians may tend to recruit patients who seem to have better adherence potential and lesser likelihood of future loss to follow-up. (2) The inclusion/exclusion criteria of trials often exclude at least some comorbidities; although this is often necessary to prevent confounding, it also means that trials may tend to work with healthier patient subpopulations.
Secondary observer effect
Despite the observer effect as popularized in the Hawthorne experiments being perhaps falsely identified (see above discussion), the popularity and plausibility of the observer effect in theory has led researchers to postulate that this effect could take place at a second level. Thus it has been proposed that there is a secondary observer effect when researchers working with secondary data such as survey data or various indicators may impact the results of their scientific research. Rather than having an effect on the subjects (as with the primary observer effect), the researchers likely have their own idiosyncrasies that influence how they handle the data and even what data they obtain from secondary sources. For one, the researchers may choose seemingly innocuous steps in their statistical analyses that end up causing significantly different results using the same data; e.g. weighting strategies, factor analytic techniques, or choice of estimation. In addition, researchers may use software packages that have different default settings that lead to small but significant fluctuations. Finally, the data that researchers use may not be identical, even though it seems so. For example, the OECD collects and distributes various socio-economic data; however, these data change over time such that a researcher who downloads the Australian GDP data for the year 2000 may have slightly different values than a researcher who downloads the same Australian GDP 2000 data a few years later. The idea of the secondary observer effect was floated by Nate Breznau in a thus far relatively obscure paper.
Although little attention has been paid to this phenomenon, the scientific implications are very large. Evidence of this effect may be seen in recent studies that assign a particular problem to a number of researchers or research teams who then work independently using the same data to try and find a solution. This is a process called crowdsourcing data analysis and was used in a groundbreaking study by Silberzahn, Rafael, Eric Uhlmann, Dan Martin and Brian Nosek et al. (2015) about red cards and player race in football (i.e. soccer).
See also
Barnum effect
Demand characteristics
Goodhart's law
John Henry effect
Mass surveillance
Monitoring and evaluation
Novelty effect
Panopticism
PDCA
Placebo effect
Pygmalion effect
Quantum Zeno effect
Reflexivity (social theory)
Scientific management
Self-determination theory
Social facilitation
Stereotype threat
Subject-expectancy effect
Time and motion study
Watching-eye effect
References
Ciment, Shoshy. “Costco Is Offering an Additional $2 an Hour to Its Hourly Employees across the US as the Coronavirus Outbreak Causes Massive Shopping Surges.” Business Insider, Business Insider, 23 Mar. 2020, www.businessinsider.com/costco-pays-workers-2-dollars-an-hour-more-coronavirus-2020-3.
Miller, Katherine, and Joshua Barbour. Organizational Communication: Approaches and Processes 7th Edition. Cengage Learning, 2014.
External links
The Hawthorne, Pygmalion, placebo and other expectancy effects: some notes, by Stephen W. Draper, Department of Psychology, University of Glasgow.
BBC Radio 4: Mind Changers: The Hawthorne Effect
Harvard Business School and the Hawthorne Experiments (1924–1933), Harvard Business School.
Industrial and organizational psychology
Social phenomena
1932 in science
Cognitive biases
Observational study
Human behavior
1950s neologisms | Hawthorne effect | Biology | 3,337 |
727,811 | https://en.wikipedia.org/wiki/Snark%20%28graph%20theory%29 | In the mathematical field of graph theory, a snark is an undirected graph with exactly three edges per vertex whose edges cannot be colored with only three colors. In order to avoid trivial cases, snarks are often restricted to have additional requirements on their connectivity and on the length of their cycles. Infinitely many snarks exist.
One of the equivalent forms of the four color theorem is that every snark is a non-planar graph. Research on snarks originated in Peter G. Tait's work on the four color theorem in 1880, but their name is much newer, given to them by Martin Gardner in 1976. Beyond coloring, snarks also have connections to other hard problems in graph theory; writing in the Electronic Journal of Combinatorics, Miroslav Chladný and Martin Škoviera discuss their role in several such important and difficult problems.
As well as the problems they mention, W. T. Tutte's snark conjecture concerns the existence of Petersen graphs as graph minors of snarks; its proof has been long announced but remains unpublished, and would settle a special case of the existence of nowhere zero 4-flows.
History and examples
Snarks were so named by the American mathematician Martin Gardner in 1976, after the mysterious and elusive object of the poem The Hunting of the Snark by Lewis Carroll. However, the study of this class of graphs is significantly older than their name. Peter G. Tait initiated the study of snarks in 1880, when he proved that the four color theorem is equivalent to the statement that no snark is planar. The first graph known to be a snark was the Petersen graph; it was proved to be a snark by Julius Petersen in 1898, although it had already been studied for a different purpose by Alfred Kempe in 1886.
The next four known snarks were
the Blanuša snarks (two with 18 vertices), discovered by Danilo Blanuša in 1946,
the Descartes snark (210 vertices), discovered by Bill Tutte in 1948, and
the Szekeres snark (50 vertices), discovered by George Szekeres in 1973.
In 1975, Rufus Isaacs generalized Blanuša's method to construct two infinite families of snarks: the flower snarks and the Blanuša–Descartes–Szekeres snarks, a family that includes the two Blanuša snarks, the Descartes snark and the Szekeres snark. Isaacs also discovered a 30-vertex snark that does not belong to the Blanuša–Descartes–Szekeres family and that is not a flower snark: the double-star snark. Another infinite family, the Loupekine snarks, was published by Isaacs in 1976, credited to F. Loupekine. It includes two 22-vertex snarks derived from the Petersen graph.
The 50-vertex Watkins snark was discovered in 1989.
Another notable cubic non-three-edge-colorable graph is Tietze's graph, with 12 vertices; as Heinrich Franz Friedrich Tietze discovered in 1910, it forms the boundary of a subdivision of the Möbius strip requiring six colors. However, because it contains a triangle, it is not generally considered a snark. Under strict definitions of snarks, the smallest snarks are the Petersen graph and Blanuša snarks, followed by six different 20-vertex snarks.
A list of all of the snarks up to 36 vertices (according to a strict definition), and up to 34 vertices (under a weaker definition), was generated by Gunnar Brinkmann, Jan Goedgebeur, Jonas Hägglund and Klas Markström in 2012. The number of snarks for a given even number of vertices grows at least exponentially in the number of vertices. (Because they have odd-degree vertices, all snarks must have an even number of vertices by the handshaking lemma.) A sequence in the On-Line Encyclopedia of Integer Sequences (OEIS) records the number of non-trivial snarks on n vertices for small values of n.
Definition
The precise definition of snarks varies among authors, but generally refers to cubic graphs (having exactly three edges at each vertex) whose edges cannot be colored with only three colors. By Vizing's theorem, the number of colors needed for the edges of a cubic graph is either three ("class one" graphs) or four ("class two" graphs), so snarks are cubic graphs of class two. However, in order to avoid cases where a snark is of class two for trivial reasons, or is constructed in a trivial way from smaller graphs, additional restrictions on connectivity and cycle lengths are often imposed. In particular:
If a cubic graph has a bridge, an edge whose removal would disconnect it, then it cannot be of class one. By the handshaking lemma, the subgraphs on either side of the bridge have an odd number of vertices each. Whichever of three colors is chosen for the bridge, their odd number of vertices prevents these subgraphs from being covered by cycles that alternate between the other two colors, as would be necessary in a 3-edge-coloring. For this reason, snarks are generally required to be bridgeless.
A loop (an edge connecting a vertex to itself) cannot be colored without causing the same color to appear twice at that vertex, a violation of the usual requirements for graph edge coloring. Additionally, a cycle consisting of two vertices connected by two edges can always be replaced by a single edge connecting their two other neighbors, simplifying the graph without changing its three-edge-colorability. For these reasons, snarks are generally restricted to simple graphs, graphs without loops or multiple adjacencies.
If a graph contains a triangle, then it can again be simplified without changing its three-edge-colorability, by contracting the three vertices of the triangle into a single vertex. Therefore, many definitions of snarks forbid triangles. However, although this requirement was also stated in Gardner's work giving the name "snark" to these graphs, Gardner lists Tietze's graph, which contains a triangle, as being a snark.
If a graph contains a four-vertex cycle, it can be simplified in two different ways by removing two opposite edges of the cycle and replacing the resulting paths of degree-two vertices by single edges. It has a three-edge-coloring if and only if at least one of these simplifications does. Therefore, Isaacs requires a "nontrivial" cubic class-two graph to avoid four-vertex cycles, and other authors have followed suit in forbidding these cycles. The requirement that a snark avoid cycles of length four or less can be summarized by stating that the girth of these graphs, the length of their shortest cycles, is at least five.
More strongly, some definitions require snarks to be cyclically 4-edge-connected. That means there can be no subset of three or fewer edges, the removal of which would disconnect the graph into two subgraphs each of which has at least one cycle. Brinkmann et al. define a snark to be a cubic and cyclically 4-edge-connected graph of girth five or more and class two; they define a "weak snark" to allow girth four.
Although these definitions only consider constraints on the girth up to five, snarks with arbitrarily large girth exist.
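To make the class-one/class-two distinction concrete, the sketch below runs a small backtracking search for a proper 3-edge-coloring of a cubic graph; applied to the Petersen graph it finds none, consistent with the Petersen graph being the smallest snark. The code is an illustrative aid written for clarity rather than efficiency, not part of the article.

```python
def petersen_edges():
    """Edge list of the Petersen graph: outer 5-cycle, inner pentagram, five spokes."""
    outer = [(i, (i + 1) % 5) for i in range(5)]
    inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
    spokes = [(i, i + 5) for i in range(5)]
    return outer + inner + spokes

def three_edge_colorable(edges):
    """Backtracking search for a proper 3-edge-coloring (adjacent edges get distinct colors)."""
    color = {}

    def conflicts(e, c):
        u, v = e
        # True if some already-colored edge sharing a vertex with e carries color c.
        return any(color.get(f) == c
                   for f in edges
                   if f != e and (u in f or v in f))

    def extend(i):
        if i == len(edges):
            return True
        for c in range(3):
            if not conflicts(edges[i], c):
                color[edges[i]] = c
                if extend(i + 1):
                    return True
                del color[edges[i]]
        return False

    return extend(0)

print(three_edge_colorable(petersen_edges()))  # False: the Petersen graph is not 3-edge-colorable
```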
Properties
Work by Peter G. Tait established that the four-color theorem is true if and only if every snark is non-planar. This theorem states that every planar graph has a graph coloring of its vertices with four colors, but Tait showed how to convert 4-vertex-colorings of maximal planar graphs into 3-edge-colorings of their dual graphs, which are cubic and planar, and vice versa. A planar snark would therefore necessarily be dual to a counterexample to the four-color theorem. Thus, the subsequent proof of the four-color theorem also demonstrates that all snarks are non-planar.
All snarks are non-Hamiltonian: when a cubic graph has a Hamiltonian cycle, it is always possible to 3-color its edges, by using two colors in alternation for the cycle, and the third color for the remaining edges. However, many known snarks are close to being Hamiltonian, in the sense that they are hypohamiltonian graphs: the removal of any single vertex leaves a Hamiltonian subgraph. A hypohamiltonian snark must be bicritical: the removal of any two vertices leaves a three-edge-colorable subgraph. The oddness of a cubic graph is defined as the minimum number of odd cycles, in any system of cycles that covers each vertex once (a 2-factor). For the same reason that they have no Hamiltonian cycles, snarks have positive oddness: a completely even 2-factor would lead to a 3-edge-coloring, and vice versa. It is possible to construct infinite families of snarks whose oddness grows linearly with their numbers of vertices.
The cycle double cover conjecture posits that in every bridgeless graph one can find a collection of cycles covering each edge twice, or equivalently that the graph can be embedded onto a surface in such a way that all faces of the embedding are simple cycles. When a cubic graph has a 3-edge-coloring, it has a cycle double cover consisting of the cycles formed by each pair of colors. Therefore, among cubic graphs, the snarks are the only possible counterexamples. More generally, snarks form the difficult case for this conjecture: if it is true for snarks, it is true for all graphs. In this connection, Branko Grünbaum conjectured that no snark could be embedded onto a surface in such a way that all faces are simple cycles and such that every two faces either are disjoint or share only a single edge; if any snark had such an embedding, its faces would form a cycle double cover. However, a counterexample to Grünbaum's conjecture was found by Martin Kochol.
Determining whether a given cyclically 5-connected cubic graph is 3-edge-colorable is NP-complete. Therefore, determining whether a graph is a snark is co-NP-complete.
Snark conjecture
W. T. Tutte conjectured that every snark has the Petersen graph as a minor. That is, he conjectured that the smallest snark, the Petersen graph, may be formed from any other snark by contracting some edges and deleting others. Equivalently (because the Petersen graph has maximum degree three) every snark has a subgraph that can be formed from the Petersen graph by subdividing some of its edges. This conjecture is a strengthened form of the four color theorem, because any graph containing the Petersen graph as a minor must be nonplanar. In 1999, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas announced a proof of this conjecture. Steps towards this result have been published in 2016 and 2019, but the complete proof remains unpublished. See the Hadwiger conjecture for other problems and results relating graph coloring to graph minors.
Tutte also conjectured a generalization to arbitrary graphs: every bridgeless graph with no Petersen minor has a nowhere zero 4-flow. That is, the edges of the graph may be assigned a direction, and a number from the set {1, 2, 3}, such that the sum of the incoming numbers minus the sum of the outgoing numbers at each vertex is divisible by four. As Tutte showed, for cubic graphs such an assignment exists if and only if the edges can be colored by three colors, so the conjecture would follow from the snark conjecture in this case. However, proving the snark conjecture would not settle the question of the existence of 4-flows for non-cubic graphs.
References
External links
Graph families
Graph coloring
Graph minor theory
Regular graphs | Snark (graph theory) | Mathematics | 2,540 |
21,054,852 | https://en.wikipedia.org/wiki/Anwell%20Technologies | Anwell Technologies Limited was a Hong Kong multinational manufacturing company. Founded in 2000, the company initially designed machines that mass-produced optical discs, but later began manufacturing thin-film solar cells and organic light-emitting diodes (OLEDs) as well. The company was listed on the Singapore Exchange in 2004, but delisted in 2019 as the company shut down its operations.
History
Anwell was founded in 2000 by chairman and CEO Fan Kai Leung (), known as Franky Fan, and five other engineering partners with initial capital of US$100,000. In 2004, the company was listed on the mainboard of the Singapore Stock Exchange.
In September 2009, Anwell produced its first thin-film solar cell at their production plant located in Anyang, Henan, China. The following month, Anwell's wholly owned subsidiary Sungen signed a memorandum of understanding with American energy company Solargen to supply solar panels for their solar farm projects.
In 2011, Anwell received a total of RMB 800 million in funding from the municipal government of Dongguan for the construction of a second manufacturing base in the city, as well as RMB 700 million to increase production capacity at its existing plant in Anyang.
In February 2012, Anwell secured its first engineering, procurement, and construction contract for a solar power plant in Thailand, in a deal worth US$25 million.
Legal issues and closure
In November 2017, Anwell's judicial managers RSM Corporate Advisory announced that Anwell's Chinese subsidiary, Dongguan Anwell Digital Machinery, as well as Anwell CEO Fan, executive director Wu Wai Kin (known as Ken Wu), and group financial controller Kwong Chi Kit (known as Victor Kwong) were found guilty of fraud by Chinese courts. Dongguan Anwell was ordered to pay a total of RMB 1.2 billion in fines and other payments; Fan was sentenced to life imprisonment and a seizure of personal assets worth up to RMB 5 million, while Wu and Kwong were fined RMB 4 million each and sentenced to 20 and 19 years' imprisonment, respectively.
In March 2018, the Singapore High Court granted an application for the company to shut down its operations and begin the process of liquidation. The company applied to delist from the Singapore Exchange in January 2019.
References
External links
Official website (archived 4 October 2017)
Companies formerly listed on the Singapore Exchange
Thin-film cells
Solar energy companies
Thin-film cell manufacturers
Photovoltaics manufacturers
Engineering companies of Hong Kong
Hong Kong brands | Anwell Technologies | Materials_science,Mathematics,Engineering | 511 |
11,273,121 | https://en.wikipedia.org/wiki/HD%20164922%20b | HD 164922 b is an exoplanet orbiting the star HD 164922 about 72 light-years from Earth in the constellation Hercules. Its inclination is not known, and its true mass may be significantly greater than the radial velocity lower limit of 0.36 Jupiter masses. The planet also has a low eccentricity, unlike most other long period extrasolar planets – 0.05 – about the same as Jupiter and Saturn in the Solar System. The exoplanet was found by using the radial velocity method, from radial-velocity measurements via observation of Doppler shifts in the spectrum of the planet's parent star.
Characteristics
Mass, radius and temperature
HD 164922 b is a gas giant, an exoplanet that has a radius and mass around that of the planets Jupiter and Saturn. It has a temperature of . It has an estimated mass of around 0.36 Jupiter masses, and a potential radius of around 8 based on its similar mass to Saturn.
Host star
The planet orbits a (G-type) star named HD 164922. The star has a mass of 0.87 solar masses and a radius of around 0.99 solar radii. It has a temperature of 5293 K and is 13.4 billion years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star is metal-rich, with a metallicity ([Fe/H]) of 0.16, or 144% of the solar amount. This is particularly odd for a star as old as HD 164922. Its luminosity is 70% of the solar luminosity.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 7.01. Therefore, HD 164922 is too dim to be seen with the naked eye, but can be viewed using good binoculars.
Orbit
HD 164922 b orbits its star every 1,155 days at a distance of 2.1 AU (compared to Mars's orbital distance from the Sun, which is around 1.5 AU). It receives only about 15% of the sunlight that Earth receives from the Sun.
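The quoted insolation follows from the inverse-square law together with the stellar luminosity given in the host-star section above (about 70% of the Sun's); the tiny calculation below is an illustrative check, not part of the article.

```python
# Relative insolation from the inverse-square law:
#   flux / flux_Earth = (L_star / L_sun) / (a / 1 AU)**2
luminosity = 0.70     # stellar luminosity in solar units (host-star section above)
distance_au = 2.1     # orbital distance in astronomical units

print(round(luminosity / distance_au ** 2, 2))   # 0.16, consistent with the ~15% quoted above
```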
Discovery
The search for HD 164922 b started when its host star was chosen as an ideal target for a planet search using the radial velocity method (in which the gravitational pull of a planet on its star is measured by observing the resulting Doppler shift), as stellar activity would not overly mask or mimic Doppler spectroscopy measurements. It was also confirmed that HD 164922 is neither a binary star nor a quickly rotating star, common false positives when searching for transiting planets. Analysis of the resulting data found that the radial velocity variations most likely indicated the existence of a planet. The net result was an estimate of a 12.9 planetary companion orbiting the star at a distance of 0.33 AU with an eccentricity of 0.07.
The discovery of HD 164922 b was reported in the online archive arXiv on June 30, 2016.
See also
14 Herculis b
List of exoplanets discovered between 2000–2009
References
Hercules (constellation)
Exoplanets discovered in 2006
Giant planets
Exoplanets detected by radial velocity | HD 164922 b | Astronomy | 647 |
5,816,356 | https://en.wikipedia.org/wiki/Pearl%20powder | Pearl powder () is a preparation of crushed pearls used in China and elsewhere for skin care and in traditional Chinese medicine.
Preparation
Pearl powder is made from freshwater pearls or saltwater pearls below jewellery grade. These are sterilised in boiling water and then milled into a fine powder using stainless steel grinding discs or by milling with small porcelain balls in moist conditions. The powder is sold as such or mixed into creams.
Cosmetic uses
Pearl powder is widely believed to help improve the appearance of the skin, and is used as a cosmetic by royal families in Asia. It is also used as a treatment for acne. Some studies have claimed that pearl powder can stimulate the skin's fibroblasts, help regenerate collagen, and accelerate healing of certain skin conditions, wounds, and burns.
Medical uses
Pearl powder contains a number of amino acids, over 30 trace minerals, and a high concentration of calcium. In Chinese medicine, it is used as an anti-inflammatory and detoxification agent, and as a relaxant.
The calcium content is considered beneficial for calcium deficient persons with issues such as osteoporosis. A typical dose is 1 gram of pearl powder taken by mouth, traditionally mixed into water or tea, twice weekly. Excessive doses may cause calcium toxicity.
The powder is also used to treat stomach and intestinal conditions such as indigestion and chronic constipation. It is claimed to minimize pain from sores and ulcers, and to help reduce the sores and ulcers themselves.
History
China
The use in China of pearl powder, both as medicine and as cosmetic, dates back at least to 320 AD. Pearl powder was an ingredient in traditional Chinese medicine (TCM), in the treatment of eye diseases, tuberculosis and to prevent heart attacks. The empress Wu Ze Tian (625 AD – 705 AD) used pearl powder internally and on her skin. The medical book Bencao Gangmu of the Ming dynasty claimed that pearl can stimulate new skin growth and healing, release toxins, and remove sun damage and age spots.
India
Pearl powder was also used in Ayurvedic medicine in the Indian subcontinent. Narahari, a physician of Kashmir, wrote in about 1240 that the pearl was an antidote to poisons, cured conditions of the eyes, consumption and "morbid disturbances", and increased general strength and health. Powdered pearl was also an ingredient of love potions. An Indian pharmacological work published in 1903 listed the powder as a tonic, stimulant and aphrodisiac.
Philippines
In the Philippines from pre-colonial times, selected youths called binukot were a special type of princes and princesses who were kept in seclusion and hidden from the sun in order to have fair and white skin. The binukot were fed crushed pearl powders to enhance the fairness and luminosity of their skin. Crushed pearl powder was also applied to their face and body to make their skin more pale and firm.
Europe
In medieval Europe, pearl powder was widely perceived to have therapeutic qualities. It was used in an attempt to treat the insanity of Charles VI of France (1368–1422), and the fever of which Lorenzo de Medici died in 1492. Seventeenth-century German and English works claimed its effectiveness in a wide range of physical and mental conditions. Francis Bacon (1561–1626) recommended it as a means of prolonging life. Pearl powder was also used as a skin whitener by women in Europe during the nineteenth century; one work, however, deprecated it as imparting a "pale, sickly hue", as well as being injurious to the skin and general health.
References
George Frederick Kunz & Charles Hugh Stevenson (1908), "The Book of the Pearl: Its History, Art, Science, and Industry", Courier Corporation.
Pearls
Traditional Chinese medicine
Skin care
Powders | Pearl powder | Physics | 800 |
2,292,623 | https://en.wikipedia.org/wiki/Docking%20%28molecular%29 | In the field of molecular modeling, docking is a method which predicts the preferred orientation of one molecule to a second when a ligand and a target are bound to each other to form a stable complex. Knowledge of the preferred orientation in turn may be used to predict the strength of association or binding affinity between two molecules using, for example, scoring functions.
The associations between biologically relevant molecules such as proteins, peptides, nucleic acids, carbohydrates, and lipids play a central role in signal transduction. Furthermore, the relative orientation of the two interacting partners may affect the type of signal produced (e.g., agonism vs antagonism). Therefore, docking is useful for predicting both the strength and type of signal produced.
Molecular docking is one of the most frequently used methods in structure-based drug design, due to its ability to predict the binding-conformation of small molecule ligands to the appropriate target binding site. Characterisation of the binding behaviour plays an important role in rational design of drugs as well as to elucidate fundamental biochemical processes.
Definition of problem
One can think of molecular docking as a problem of “lock-and-key”, in which one wants to find the correct relative orientation of the “key” which will open up the “lock” (where on the surface of the lock is the key hole, which direction to turn the key after it is inserted, etc.). Here, the protein can be thought of as the “lock” and the ligand can be thought of as a “key”. Molecular docking may be defined as an optimization problem, which would describe the “best-fit” orientation of a ligand that binds to a particular protein of interest. However, since both the ligand and the protein are flexible, a “hand-in-glove” analogy is more appropriate than “lock-and-key”. During the course of the docking process, the ligand and the protein adjust their conformation to achieve an overall "best-fit" and this kind of conformational adjustment resulting in the overall binding is referred to as "induced-fit".
Molecular docking research focuses on computationally simulating the molecular recognition process. It aims to achieve an optimized conformation for both the protein and ligand and relative orientation between protein and ligand such that the free energy of the overall system is minimized.
Docking approaches
Two approaches are particularly popular within the molecular docking community.
One approach uses a matching technique that describes the protein and the ligand as complementary surfaces.
The second approach simulates the actual docking process in which the ligand-protein pairwise interaction energies are calculated.
Both approaches have significant advantages as well as some limitations. These are outlined below.
Shape complementarity
Geometric matching/shape complementarity methods describe the protein and ligand as a set of features that make them dockable. These features may include molecular surface/complementary surface descriptors. In this case, the receptor's molecular surface is described in terms of its solvent-accessible surface area and the ligand's molecular surface is described in terms of its matching surface description. The complementarity between the two surfaces amounts to the shape matching description that may help finding the complementary pose of docking the target and the ligand molecules. Another approach is to describe the hydrophobic features of the protein using turns in the main-chain atoms. Yet another approach is to use a Fourier shape descriptor technique. Whereas the shape complementarity based approaches are typically fast and robust, they cannot usually model the movements or dynamic changes in the ligand/protein conformations accurately, although recent developments allow these methods to investigate ligand flexibility. Shape complementarity methods can quickly scan through several thousand ligands in a matter of seconds and actually figure out whether they can bind at the protein's active site, and are usually scalable to even protein-protein interactions. They are also much more amenable to pharmacophore based approaches, since they use geometric descriptions of the ligands to find optimal binding.
Simulation
Simulating the docking process is much more complicated. In this approach, the protein and the ligand are separated by some physical distance, and the ligand finds its position into the protein's active site after a certain number of “moves” in its conformational space. The moves incorporate rigid body transformations such as translations and rotations, as well as internal changes to the ligand's structure including torsion angle rotations. Each of these moves in the conformation space of the ligand induces a total energetic cost of the system. Hence, the system's total energy is calculated after every move.
The obvious advantage of docking simulation is that ligand flexibility is easily incorporated, whereas shape complementarity techniques must use ingenious methods to incorporate flexibility in ligands. Also, it more accurately models reality, whereas shape complementary techniques are more of an abstraction.
Clearly, simulation is computationally expensive, having to explore a large energy landscape. Grid-based techniques, optimization methods, and increased computer speed have made docking simulation more realistic.
Mechanics of docking
To perform a docking screen, the first requirement is a structure of the protein of interest. Usually the structure has been determined using a biophysical technique such as
X-ray crystallography,
NMR spectroscopy or
cryo-electron microscopy (cryo-EM),
but can also derive from homology modeling construction. This protein structure and a database of potential ligands serve as inputs to a docking program. The success of a docking program depends on two components: the search algorithm and the scoring function.
Search algorithm
The search space in theory consists of all possible orientations and conformations of the protein paired with the ligand. However, in practice with current computational resources, it is impossible to exhaustively explore the search space — this would involve enumerating all possible distortions of each molecule (molecules are dynamic and exist in an ensemble of conformational states) and all possible rotational and translational orientations of the ligand relative to the protein at a given level of granularity. Most docking programs in use account for the whole conformational space of the ligand (flexible ligand), and several attempt to model a flexible protein receptor. Each "snapshot" of the pair is referred to as a pose.
A variety of conformational search strategies have been applied to the ligand and to the receptor. These include:
systematic or stochastic torsional searches about rotatable bonds
molecular dynamics simulations
genetic algorithms to "evolve" new low energy conformations and where the score of each pose acts as the fitness function used to select individuals for the next iteration.
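As a toy illustration of the stochastic strategies listed above, the sketch below performs a rigid-body random search over ligand poses (a translation plus a rotation angle, in two dimensions for brevity) against a made-up geometric score; every name, coordinate, and the scoring form are assumptions for the example and do not reflect the behaviour of any real docking program.

```python
import math
import random

# Toy "ligand": three atoms in a local 2-D frame.
LIGAND = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
# Toy "binding site": three favourable interaction points on the receptor.
SITE = [(5.0, 5.0), (6.0, 5.0), (5.5, 5.8)]

def place(pose):
    """Apply a rigid-body pose (tx, ty, theta) to the ligand atoms."""
    tx, ty, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(tx + c * x - s * y, ty + s * x + c * y) for x, y in LIGAND]

def score(pose):
    """Lower is better: sum of squared distances between placed atoms and site points."""
    return sum((ax - sx) ** 2 + (ay - sy) ** 2
               for (ax, ay), (sx, sy) in zip(place(pose), SITE))

def random_search(n_trials=20000, seed=0):
    """Stochastic sampling of poses, keeping the best-scoring one seen so far."""
    rng = random.Random(seed)
    best_pose, best_score = None, float("inf")
    for _ in range(n_trials):
        pose = (rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 2 * math.pi))
        s = score(pose)
        if s < best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score

pose, s = random_search()
print(pose, s)   # the best pose found roughly overlays the ligand onto the site
```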
Ligand flexibility
Conformations of the ligand may be generated in the absence of the receptor and subsequently docked, or conformations may be generated on-the-fly in the presence of the receptor binding cavity, or with full rotational flexibility of every dihedral angle using fragment based docking. Force field energy evaluations are most often used to select energetically reasonable conformations, but knowledge-based methods have also been used.
Peptides are both highly flexible and relatively large-sized molecules, which makes modeling their flexibility a challenging task. A number of methods were developed to allow for efficient modeling of flexibility of peptides during protein-peptide docking.
Receptor flexibility
Computational capacity has increased dramatically over the last decade, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. Neglecting it, however, may in some cases lead to poor docking results in terms of binding pose prediction.
Multiple static structures experimentally determined for the same protein in different conformations are often used to emulate receptor flexibility. Alternatively rotamer libraries of amino acid side chains that surround the binding cavity may be searched to generate alternate but energetically reasonable protein conformations.
Scoring function
Docking programs generate a large number of potential ligand poses, of which some can be immediately rejected due to clashes with the protein. The remainder are evaluated using some scoring function, which takes a pose as input and returns a number indicating the likelihood that the pose represents a favorable binding interaction and ranks one ligand relative to another.
Most scoring functions are physics-based molecular mechanics force fields that estimate the energy of the pose within the binding site. The various contributions to binding can be written as an additive equation:

$\Delta G_{\text{bind}} = \Delta G_{\text{solvent}} + \Delta G_{\text{conf}} + \Delta G_{\text{int}} + \Delta G_{\text{rot}} + \Delta G_{\text{t/r}} + \Delta G_{\text{vib}}$
The components consist of solvent effects, conformational changes in the protein and ligand, free energy due to protein-ligand interactions, internal rotations, association energy of ligand and receptor to form a single complex and free energy due to changes in vibrational modes. A low (negative) energy indicates a stable system and thus a likely binding interaction.
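A schematic of this additive decomposition is sketched below; the term labels mirror the components just listed, while the numerical values and units are invented purely for illustration.

```python
def binding_free_energy(terms):
    """Additive scoring: dG_bind = dG_solvent + dG_conf + dG_int + dG_rot + dG_t/r + dG_vib.
    Each term is an energy contribution in kcal/mol (negative = favourable)."""
    return sum(terms[k] for k in ("solvent", "conf", "int", "rot", "t/r", "vib"))

# Illustrative pose evaluation (values are made up):
pose_terms = {
    "solvent": 1.5,   # desolvation penalty
    "conf":    0.8,   # strain from conformational changes
    "int":    -9.2,   # protein-ligand interaction energy
    "rot":     0.9,   # frozen internal rotations
    "t/r":     1.1,   # loss of translational/rotational freedom on association
    "vib":     0.2,   # change in vibrational modes
}
print(binding_free_energy(pose_terms))   # negative total suggests a favourable pose
```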
Alternative approaches use modified scoring functions to include constraints based on known key protein-ligand interactions, or knowledge-based potentials derived from interactions observed in large databases of protein-ligand structures (e.g. the Protein Data Bank).
There are a large number of structures from X-ray crystallography for complexes between proteins and high affinity ligands, but comparatively fewer for low affinity ligands as the latter complexes tend to be less stable and therefore more difficult to crystallize. Scoring functions trained with this data can dock high affinity ligands correctly, but they will also give plausible docked conformations for ligands that do not bind. This gives a large number of false positive hits, i.e., ligands predicted to bind to the protein that actually don't when placed together in a test tube.
One way to reduce the number of false positives is to recalculate the energy of the top scoring poses using (potentially) more accurate but computationally more intensive techniques such as Generalized Born or Poisson-Boltzmann methods.
Docking assessment
The interdependence between sampling and scoring function affects the docking capability in predicting plausible poses or binding affinities for novel compounds. Thus, an assessment of a docking protocol is generally required (when experimental data is available) to determine its predictive capability. Docking assessment can be performed using different strategies, such as:
docking accuracy (DA) calculation;
the correlation between a docking score and the experimental response or determination of the enrichment factor (EF);
the distance between an ion-binding moiety and the ion in the active site;
the presence of induced-fit models.
Docking accuracy
Docking accuracy represents one measure to quantify the fitness of a docking program by rationalizing the ability to predict the right pose of a ligand with respect to that experimentally observed.
Enrichment factor
Docking screens can also be evaluated by the enrichment of annotated ligands of known binders from among a large database of presumed non-binding, “decoy” molecules. In this way, the success of a docking screen is evaluated by its capacity to enrich the small number of known active compounds in the top ranks of a screen from among a much greater number of decoy molecules in the database. The area under the receiver operating characteristic (ROC) curve is widely used to evaluate its performance.
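Below is a minimal sketch of these two retrospective metrics, an enrichment factor over the top-ranked fraction of a screen and the ROC AUC, computed for an invented ranking of actives and decoys; all names and data are assumptions made for the example.

```python
def enrichment_factor(ranked_is_active, fraction=0.01):
    """EF = (fraction of actives in the top of the ranking) / (fraction expected at random)."""
    n = len(ranked_is_active)
    n_top = max(1, int(round(fraction * n)))
    actives_total = sum(ranked_is_active)
    actives_top = sum(ranked_is_active[:n_top])
    return (actives_top / n_top) / (actives_total / n)

def roc_auc(ranked_is_active):
    """AUC via the rank-sum formulation on an already-ranked list (no tied scores)."""
    pos = [i for i, a in enumerate(ranked_is_active) if a]
    neg = [i for i, a in enumerate(ranked_is_active) if not a]
    # Probability that a randomly chosen active is ranked above a randomly chosen decoy.
    wins = sum(1 for p in pos for q in neg if p < q)
    return wins / (len(pos) * len(neg))

# Toy screen: 1 = known active, 0 = decoy, ordered best docking score first.
ranking = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(enrichment_factor(ranking, fraction=0.10))  # 2.5: actives enriched in the top 10%
print(roc_auc(ranking))                           # 0.90625
```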
Prospective
Resulting hits from docking screens are subjected to pharmacological validation (e.g. IC50, affinity or potency measurements). Only prospective studies constitute conclusive proof of the suitability of a technique for a particular target. In the case of G protein-coupled receptors (GPCRs), which are targets of more than 30% of marketed drugs, molecular docking led to the discovery of more than 500 GPCR ligands.
Benchmarking
The potential of docking programs to reproduce binding modes as determined by X-ray crystallography can be assessed by a range of docking benchmark sets.
For small molecules, several benchmark data sets for docking and virtual screening exist, e.g. the Astex Diverse Set consisting of high quality protein−ligand X-ray crystal structures, the Directory of Useful Decoys (DUD) for evaluation of virtual screening performance, or the LEADS-FRAG data set for fragments.
The potential of docking programs to reproduce peptide binding modes can be assessed with the Lessons for Efficiency Assessment of Docking and Scoring (LEADS-PEP) data set.
Applications
A binding interaction between a small molecule ligand and an enzyme protein may result in activation or inhibition of the enzyme. If the protein is a receptor, ligand binding may result in agonism or antagonism. Docking is most commonly used in the field of drug design — most drugs are small organic molecules, and docking may be applied to:
hit identification – docking combined with a scoring function can be used to quickly screen large databases of potential drugs in silico to identify molecules that are likely to bind to the protein target of interest (see virtual screening). Reverse pharmacology routinely uses docking for target identification.
lead optimization – docking can be used to predict where and in which relative orientation a ligand binds to a protein (also referred to as the binding mode or pose). This information may in turn be used to design more potent and selective analogs.
bioremediation – protein ligand docking can also be used to predict pollutants that can be degraded by enzymes.
See also
Drug design
Katchalski-Katzir algorithm
List of molecular graphics systems
Macromolecular docking
Molecular mechanics
Protein structure
Protein design
Software for molecular mechanics modeling
List of protein-ligand docking software
Molecular design software
Docking@Home
Exscalate4Cov
Ibercivis
ZINC database
Lead Finder
Virtual screening
Scoring functions for docking
References
External links
Docking@GRID Project of Conformational Sampling and Docking on Grids : one aim is to deploy some intrinsic distributed docking algorithms on computational Grids, download Docking@GRID open-source Linux version
Click2Drug.org - Directory of computational drug design tools.
Ligand:Receptor Docking with MOE (Molecular Operating Environment)
Molecular modelling
Computational chemistry
Protein structure
Medicinal chemistry
Bioinformatics
Drug discovery
Articles containing video clips | Docking (molecular) | Chemistry,Engineering,Biology | 2,823 |
36,714,938 | https://en.wikipedia.org/wiki/Lagrangian%20particle%20tracking | In experimental fluid mechanics, Lagrangian Particle Tracking refers to the process of determining trajectories of small neutrally buoyant particles (flow tracers) that are freely suspended within a turbulent flow field. These are usually obtained by 3-D Particle Tracking Velocimetry. A collection of such particle trajectories can be used for analyzing the Lagrangian dynamics of the fluid motion, for performing Lagrangian statistics of various flow quantities etc.
In computational fluid dynamics, Lagrangian particle tracking (or, in short, the LPT method) is a numerical technique for simulating the paths of individual particles in a Lagrangian frame within an Eulerian carrier phase. It is also commonly referred to as Discrete Particle Simulation (DPS). Typical applications include sprays, small bubbles and dust particles, and the method is especially well suited to dilute multiphase flows with large Stokes number.
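A minimal sketch of the idea is given below, assuming a simple Stokes-drag particle model and forward-Euler time stepping in a prescribed analytic Eulerian velocity field; it is not any particular CFD code's implementation.

```python
import numpy as np

def fluid_velocity(x, t):
    """Prescribed Eulerian carrier-flow velocity at position x and time t."""
    return np.array([np.sin(x[1]), np.cos(x[0])])   # toy 2-D cellular flow

def track(x0, v0, tau, dt=1e-3, steps=5000):
    """Integrate one particle: dx/dt = v, dv/dt = (u(x,t) - v)/tau (Stokes drag)."""
    x, v = np.array(x0, float), np.array(v0, float)
    path = [x.copy()]
    for n in range(steps):
        u = fluid_velocity(x, n * dt)
        v += dt * (u - v) / tau          # drag relaxes particle toward fluid velocity
        x += dt * v
        path.append(x.copy())
    return np.array(path)

trajectory = track(x0=[0.1, 0.2], v0=[0.0, 0.0], tau=0.05)
print(trajectory[-1])                     # final particle position
```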
See also
Lagrangian and Eulerian specification of the flow field
References
fluid dynamics | Lagrangian particle tracking | Chemistry,Engineering | 209 |
165,744 | https://en.wikipedia.org/wiki/Kelp | Kelps are large brown algae or seaweeds that make up the order Laminariales. There are about 30 different genera. Despite its appearance and use of photosynthesis in chloroplasts, kelp is technically not a plant but a stramenopile (a group containing many protists).
Kelp grow from stalks close together in very dense areas like forests under shallow temperate and Arctic oceans. They were previously thought to have appeared in the Miocene, 5 to 23 million years ago based on fossils from California. New fossils of kelp holdfasts from early Oligocene rocks in Washington State show that kelps were present in the northeastern Pacific Ocean by at least 32 million years ago. The organisms require nutrient-rich water with temperatures between . They are known for their high growth rate—the genera Macrocystis and Nereocystis can grow as fast as half a metre a day (that is, about 20 inches a day), ultimately reaching .
Through the 19th century, the word "kelp" was closely associated with seaweeds that could be burned to obtain soda ash (primarily sodium carbonate). The seaweeds used included species from both the orders Laminariales and Fucales. The word "kelp" was also used directly to refer to these processed ashes.
Description
The thallus (or body) consists of flat or leaf-like structures known as blades that originate from elongated stem-like structures, the stipes. A root-like structure, the holdfast, anchors the kelp to the substrate of the ocean. Gas-filled bladders (pneumatocysts) form at the base of blades of American species, such as Nereocystis luetkeana (Mert. & Post & Rupr.), to hold the kelp blades close to the surface.
Growth and reproduction
Growth occurs at the base of the meristem, where the blades and stipe meet. Growth may be limited by grazing. Sea urchins, for example, can reduce entire areas to urchin barrens. The kelp life cycle involves a diploid sporophyte and haploid gametophyte stage. The haploid phase begins when the mature organism releases many spores, which then germinate to become male or female gametophytes. Sexual reproduction then results in the beginning of the diploid sporophyte stage, which will develop into a mature individual.
The parenchymatous thalli are generally covered with a mucilage layer, rather than cuticle.
Taxonomy
Phylogeny
Seaweeds were generally considered homologues of terrestrial plants, but they are only very distantly related to plants and have evolved plant-like structures through convergent evolution. Where plants have leaves, stems, and reproductive organs, kelp have independently evolved blades, stipes, and sporangia. Radiometric dating, using an "unequivocal minimum constraint for total group Pinaceae" measured in Ma (millions of years ago), places the origin of vascular plants at around 419–454 Ma, while the ancestors of Laminariales are much younger, at about 189 Ma. Although these groups are distantly related and differ in evolutionary age, comparisons can still be made between the structures of terrestrial plants and kelp, but in terms of evolutionary history most of these similarities come from convergent evolution.
Some kelp species, including giant kelp, have evolved transport mechanisms for organic as well as inorganic compounds, similar to the transport mechanisms of trees and other vascular plants. In kelp this transport network uses trumpet-shaped sieve elements (SEs). A 2015 study aimed at evaluating the efficiency of giant kelp (Macrocystis pyrifera) transport anatomy looked at six different Laminariales species to see whether they showed the allometric relationships typical of vascular plants (i.e. whether SE dimensions correlate with the size of the organism). The researchers expected the kelp's phloem to work similarly to a plant's xylem and therefore to display similar allometric trends that minimise the pressure gradient. The study found no universal allometric scaling among all tested structures of the Laminariales species, which implies that the transport network of brown algae is only just beginning to evolve to fit their current niches efficiently.
Apart from undergoing convergent evolution with plants, species of kelp have undergone convergent evolution within their own phylogeny that has led to niche conservatism. This niche conservatism means that some species of kelp have convergently evolved to share similar niches, as opposed to all species diverging into distinct niches through adaptive radiation. A 2020 study looked at functional traits (blade mass per area, stiffness, strength, etc.) of 14 species of kelp and found that many of these traits evolved convergently across kelp phylogeny. With different species of kelp filling slightly different environmental niches, specifically along a wave disturbance gradient, many of these convergently evolved traits for structural reinforcement also correlate with distribution along that gradient. The wave disturbance gradient that this study refers to is the environments that this kelp inhabit have a varied level of perturbation from the tide and waves that pull at the kelp. It can be assumed from these results that niche partitioning along wave disturbance gradients is a key driver of divergence between closely related kelp.
Because kelp often inhabit varied and turbulent habitats, plasticity of certain structural traits has been key to the evolutionary history of the group. Plasticity also underlies an important aspect of kelp adaptation to ocean environments: unusually high levels of morphological homoplasy between lineages, which has made classifying brown algae difficult. Kelp often share morphological features with other species growing in the same area, because they experience the same wave disturbance regime, yet can look quite different from members of their own species growing under different wave disturbance regimes. Plasticity in kelps most often involves blade morphology, such as the width, ruffle, and thickness of the blades. One example is the giant bull kelp Nereocystis luetkeana, which changes blade shape to increase drag in the water and interception of light when exposed to certain environments. Bull kelp are not unique in this adaptation; many kelp species have evolved a genetic plasticity for blade shape in different water-flow habitats, so individuals of the same species differ from one another depending on the habitat in which they grow. Many species have different morphologies for different wave disturbance regimes, but the giant kelp Macrocystis integrifolia has been found to have plasticity allowing for four distinct types of blade morphology depending on habitat, whereas many species have only two or three blade shapes for maximising efficiency in two or three habitats. These different blade shapes were found to decrease breakage and increase the ability to photosynthesise. Blade adaptations like these show how kelp have evolved structural efficiency in a turbulent ocean environment, to the point where their stability can shape entire habitats. Apart from these structural adaptations, the evolution of dispersal methods relating to structure has also been important for the success of kelp.
Kelp have had to adopt dispersal methods that make successful use of ocean currents. Buoyancy of certain kelp structures allows species to disperse with the flow of water. Certain kelp form rafts, which can travel great distances away from the source population and colonise other areas. The bull kelp genus Durvillaea includes six species, some of which have evolved buoyancy and others which have not. Those that are buoyant owe this to the evolution of gas-filled structures called pneumatocysts, an adaptation that allows the kelp to float higher towards the surface to photosynthesise and also aids dispersal by floating kelp rafts. For Macrocystis pyrifera, pneumatocysts and raft formation have made dispersal so successful that the immense stretch of coast on which the species is found has actually been colonised very recently; this can be observed in the low genetic diversity in the subantarctic region. Dispersal by rafts from buoyant species also explains part of the evolutionary history of non-buoyant kelp species: since these rafts commonly carry hitchhikers of other diverse species, they provide a dispersal mechanism for species that lack buoyancy. This mechanism has recently been confirmed, through genomic analysis, as the cause of some dispersal and evolutionary history of kelp species. Studies of the evolution of kelp structure have helped in understanding the adaptations that have allowed kelp to be extremely successful not only as a group of organisms but also as an ecosystem engineer of kelp forests, some of the most diverse and dynamic ecosystems on earth.
Prominent species
Bull kelp, Nereocystis luetkeana, a northwestern American species. Used by coastal indigenous peoples to create fishing nets.
Giant kelp, Macrocystis pyrifera, the largest seaweed. Found in the Pacific coast of North America and South America.
Kombu, Saccharina japonica (formerly Laminaria japonica) and others, several edible species of kelp found in Japan.
Species of Laminaria in the British Isles;
Laminaria digitata (Hudson) J.V. Lamouroux (Oarweed; Tangle)
Laminaria hyperborea (Gunnerus) Foslie (Curvie)
Laminaria ochroleuca Bachelot de la Pylaie
Saccharina latissima (Linnaeus) J.V.Lamouroux (sea belt; sugar kelp; sugarwack)
Species of Laminaria worldwide, listing of species at AlgaeBase:
Laminaria agardhii (NE. America)
Laminaria bongardina Postels et Ruprecht (Bering Sea to California)
Laminaria cuneifolia (NE. America)
Laminaria dentigera Klellm. (California - America)
Laminaria digitata (NE. America)
Laminaria ephemera Setchell (Sitka, Alaska, to Monterey County, California - America)
Laminaria farlowii Setchell (Santa Cruz, California, to Baja California - America)
Laminaria groenlandica (NE. America)
Laminaria longicruris (NE. America)
Laminaria nigripes (NE. America)
Laminaria ontermedia (NE. America)
Laminaria pallida Greville ex J. Agardh (South Africa)
Laminaria platymeris (NE. America)
Laminaria saccharina (Linnaeus) Lamouroux, synonym of Saccharina latissima (north east Atlantic Ocean, Barents Sea south to Galicia - Spain)
Laminaria setchellii Silva (Aleutian Islands, Alaska to Baja California America)
Laminaria sinclairii (Harvey ex Hooker f. ex Harvey) Farlow, Anderson et Eaton (Hope Island, British Columbia to Los Angeles, California - America)
Laminaria solidungula (NE. America)
Laminaria stenophylla (NE. America)
Other species in the Laminariales that may be considered as kelp:
Alaria esculenta (North Atlantic)
Alaria marginata Post. & Rupr. (Alaska and California - America)
Costaria costata (C.Ag.) Saunders (Japan; Alaska, California - America)
Ecklonia brevipes J. Agardh (Australia; New Zealand)
Ecklonia maxima (Osbeck) Papenfuss (South Africa)
Ecklonia radiata (C.Agardh) J. Agardh (Australia; Tasmania; New Zealand; South Africa)
Eisenia arborea Aresch. (Vancouver Island, British Columbia, Monterey, Santa Catalina Island, California - America)
Egregia menziesii (Turn.) Aresch.
Hedophyllum sessile (C.Ag.) Setch (Alaska, California - America)
Macrocystis pyrifera (Linnaeus, C.Agardh) (Australia; Tasmania and South Africa)
Pleurophycus gardneri Setch. & Saund. (Alaska, California - America)
Pterygophora californica Rupr. (Vancouver Island, British Columbia to Bahia del Ropsario, Baja California and California - America)
Non-Laminariales species that may be considered as kelp:
Durvillea antarctica, Fucales (New Zealand, South America, and Australia)
Durvillea willana, Fucales (New Zealand)
Durvillaea potatorum (Labillardière) Areschoug, Fucales (Tasmania; Australia)
Ecology
Kelp forests
Kelp may develop dense forests with high production (Abdullah, M.I. & Fredriksen, S., 2004. Production, respiration and exudation of dissolved organic matter by the kelp Laminaria hyperborea along the west coast of Norway. Journal of the Marine Biological Association of the UK 84: 887), biodiversity and ecological function. Along the Norwegian coast these forests cover 5,800 km2, and they support large numbers of animals (Jørgensen, N.M. & Christie, H., 2003. Diurnal, horizontal and vertical dispersal of kelp associated fauna. Hydrobiologia 50: 69–76). Numerous sessile animals (sponges, bryozoans and ascidians) are found on kelp stipes, and mobile invertebrate fauna are found in high densities on epiphytic algae on the kelp stipes and on kelp holdfasts. More than 100,000 mobile invertebrates per square meter are found on kelp stipes and holdfasts in well-developed kelp forests. While larger invertebrates, in particular sea urchins (Strongylocentrotus droebachiensis), are important secondary consumers controlling large barren-ground areas on the Norwegian coast, they are scarce inside dense kelp forests.
Interactions
Some animals are named after the kelp, either because they inhabit the same habitat as kelp or because they feed on kelp. These include:
Northern kelp crab (Pugettia producta) and graceful kelp crab (Pugettia gracilis), Pacific coast of North America.
Kelpfish (blenny) (e.g., Heterosticbus rostratus, genus Gibbonsia), Pacific coast of North America.
Kelp goose (kelp hen) (Chloephaga hybrida), South America and the Falkland Islands
Kelp pigeon (sheathbill) (Chionis alba and Chionis minor), Antarctic
Conservation
Overfishing nearshore ecosystems leads to the degradation of kelp forests. Herbivores are released from their usual population regulation, leading to over-grazing of kelp and other algae. This can quickly result in barren landscapes where only a small number of species can thrive (Sala, E., Bourdouresque, C.F. & Harmelin-Vivien, M., 1998. Fishing, trophic cascades, and the structure of algal assemblages: evaluation of an old but untested paradigm. Oikos 82: 425-439). Other major threats to kelp include marine pollution and poor water quality, climate change and certain invasive species.
Kelp forests are some of the most productive ecosystems in the world - they are home to a great diversity of species. Many groups, like those at the Seattle Aquarium, are studying the health, habitat, and population trends in order to understand why certain kelp (like bull kelp) thrives in some areas and not others. Remotely Operated Vehicles are used in the surveying of sites and the data extracted is used to learn about which conditions are best suited for kelp restoration.
Uses
Giant kelp can be harvested fairly easily because of its surface canopy and growth habit of staying in deeper water.
Kelp ash is rich in iodine and alkali. In sufficient quantities, kelp ash can be used in soap and glass production. Until the Leblanc process was commercialized in the early 19th century, the burning of kelp in Scotland was one of the principal industrial sources of soda ash (predominantly sodium carbonate). Around 23 tons of seaweed were required to produce 1 ton of kelp ash, which would consist of around 5% sodium carbonate.
Once the Leblanc Process became commercially viable in Britain during the 1820s, common salt replaced kelp ash as raw material for sodium carbonate. Though the price of kelp ash went into steep decline, seaweed remained the only commercial source of iodine. To supply the new industry in iodine synthesis, kelp ash production continued in some parts of West and North Scotland, North West Ireland and Guernsey. The species Saccharina latissima yielded the greatest amount of iodine (between 10 and 15 lbs per ton) and was most abundant in Guernsey. Iodine was extracted from kelp ash using a lixiviation process. As with sodium carbonate however, mineral sources eventually supplanted seaweed in iodine production.
Alginate, a kelp-derived carbohydrate, is used to thicken products such as ice cream, jelly, salad dressing, and toothpaste, as well as an ingredient in exotic dog food and in manufactured goods. Alginate powder is also used frequently in general dentistry and orthodontics for making impressions of the upper and lower arches. Kelp polysaccharides are used in skin care as gelling ingredients and because of the benefits provided by fucoidan.
Kombu (昆布 in Japanese, and 海带 in Chinese, Saccharina japonica and others), several Pacific species of kelp, is a very important ingredient in Chinese, Japanese, and Korean cuisines. Kombu is used to flavor broths and stews (especially dashi), as a savory garnish (tororo konbu) for rice and other dishes, as a vegetable, and a primary ingredient in popular snacks (such as tsukudani). Transparent sheets of kelp (oboro konbu) are used as an edible decorative wrapping for rice and other foods.
Kombu can be used to soften beans during cooking, and to help convert indigestible sugars and thus reduce flatulence.
In Russia, especially in the Russian Far East, and former Soviet Union countries several types of kelp are of commercial importance: Saccharina latissima, Laminaria digitata, Saccharina japonica. Known locally as "Sea Cabbage" (Морская капуста in Russian), it comes in retail trade in dried or frozen, as well as in canned form and used as filler in different types of salads, soups and pastries.
Because of its high concentration of iodine, brown kelp (Laminaria) has been used to treat goiter, an enlargement of the thyroid gland caused by a lack of iodine, since medieval times. An intake of roughly 150 micrograms of iodine per day is beneficial for preventing hypothyroidism. Overconsumption can lead to kelp-induced thyrotoxicosis.
In 2010, researchers found that alginate, the soluble fibre substance in sea kelp, was better at preventing fat absorption than most over-the-counter slimming treatments in laboratory trials. As a food additive, it may be used to reduce fat absorption and thus obesity. Kelp in its natural form has not yet been demonstrated to have such effects.
Kelp's rich iron content can help prevent iron deficiency.
Commercial production
Commercial production of kelp harvested from its natural habitat has taken place in Japan for over a century. Many countries today produce and consume laminaria products; the largest producer is China. Laminaria japonica, the important commercial seaweed, was first introduced into China in the late 1920s from Hokkaido, Japan. Yet mariculture of this alga on a very large commercial scale was realized in China only in the 1950s. Between the 1950s and the 1980s, kelp production in China increased from about 60 to over 250,000 dry weight metric tons annually.
In culture
Some of the earliest evidence for human use of marine resources, coming from Middle Stone Age sites in South Africa, includes the harvesting of foods such as abalone, limpets, and mussels associated with kelp forest habitats.
In 2007, Erlandson et al. suggested that kelp forests around the Pacific Rim may have facilitated the dispersal of anatomically modern humans following a coastal route from Northeast Asia to the Americas. This "kelp highway hypothesis" suggested that highly productive kelp forests supported rich and diverse marine food webs in nearshore waters, including many types of fish, shellfish, birds, marine mammals, and seaweeds that were similar from Japan to California. Erlandson and his colleagues also argued that coastal kelp forests reduced wave energy and provided a linear dispersal corridor entirely at sea level, with few obstacles to maritime peoples. Archaeological evidence from California's Channel Islands confirms that islanders were harvesting kelp forest shellfish and fish beginning as much as 12,000 years ago.
During the Highland Clearances, many Scottish Highlanders were moved on to areas of estates known as crofts, and went to industries such as fishing and kelping (producing soda ash from the ashes of kelp). At least until the 1840s, when there were steep falls in the price of kelp, landlords wanted to create pools of cheap or virtually free labour, supplied by families subsisting in new crofting townships. Kelp collection and processing was a very profitable way of using this labour, and landlords petitioned successfully for legislation designed to stop emigration. The profitability of kelp harvesting meant that landlords began to subdivide their land for small tenant kelpers, who could now afford higher rent than their gentleman farmer counterparts. But the economic collapse of the kelp industry in northern Scotland during the 1820s led to further emigration, especially to North America.
Natives of the Falkland Islands are sometimes nicknamed "Kelpers" (see the dictionary.com definition of "Kelper"). This designation is primarily applied by outsiders rather than by the natives themselves.
In Chinese slang, "kelp" (), is used to describe an unemployed returnee. It has negative overtones, implying the person is drifting aimlessly, and is also a homophonic expression (, literally "sea waiting"). This expression is contrasted with the employed returnee, having a dynamic ability to travel across the ocean: the "sea turtle" () and is also homophonic with another word (, literally "sea return").
Gallery
See also
Aquaculture of giant kelp
References
Further reading
Druehl, L.D. 1988. Cultivated edible kelp. In: Lembi, C.A. and Waaland, J.R. (eds.) 1988. Algae and Human Affairs.
Erlandson, J.M., M.H. Graham, B.J. Bourque, D. Corbett, J.A. Estes, & R.S. Steneck. 2007. The Kelp Highway hypothesis: marine ecology, the coastal migration theory, and the peopling of the Americas. Journal of Island and Coastal Archaeology 2:161-174.
Eger, A. M., Layton, C., McHugh, T. A, Gleason, M., and Eddy, N. (2022). Kelp Restoration Guidebook: Lessons Learned from Kelp Projects Around the World. The Nature Conservancy, Arlington, VA, USA.
External links
Edible seaweeds
Seaweeds | Kelp | Biology | 4,974 |
21,893,074 | https://en.wikipedia.org/wiki/Fogbank | Fogbank (stylized as FOGBANK) is a code name given to a secret material used in the W76, W78 and W88 nuclear warheads that are part of the United States nuclear arsenal. The process to create Fogbank was lost by 2000, when it was needed for the refurbishment of old warheads. Fogbank was then reverse engineered by the National Nuclear Security Administration (NNSA) over five years and at the cost of tens of millions of dollars.
Fogbank's precise nature is classified; in the words of former Oak Ridge National Laboratory general manager Dennis Ruddy, "The material is classified. Its composition is classified. Its use in the weapon is classified, and the process itself is classified." Department of Energy Nuclear Explosive Safety documents simply describe it as a material "used in nuclear weapons and nuclear explosives" along with lithium hydride (LiH) and lithium deuteride (LiD), beryllium (Be), uranium hydride (UH3), and plutonium hydride.
However, NNSA Administrator Tom D'Agostino disclosed the role of Fogbank in the weapon: "There's another material in the—it's called interstage material, also known as Fogbank", and arms experts believe that Fogbank is an aerogel material which acts as an interstage material in a nuclear warhead; i.e., a material designed to become a superheated plasma following the detonation of the weapon's fission stage, the plasma then triggering the fusion-stage detonation.
History
It has been revealed by unclassified official sources that Fogbank was originally manufactured in Facility 9404-11 of the Y-12 National Security Complex in Oak Ridge, Tennessee, from 1975 until 1989, when the final batch of W76 warheads was completed. After that, the facility was deactivated and finally slated for decommissioning by 1993. Only a small pilot plant was left, which had been used to produce small batches of Fogbank for testing purposes.
In 1996, the US government decided to replace, refurbish, or decommission large numbers of its nuclear weapons. Accordingly, the Department of Energy established a refurbishment program to extend the service lives of older nuclear weapons. In 2000, the NNSA specified a life-extension program for W76 warheads that would enable them to remain in service until at least 2040.
It was soon realized that the Fogbank material was a potential source of problems for the program, as few records of its manufacturing process had been retained when it was originally manufactured in the 1980s, and nearly all staff members who had expertise in its production had either retired or left the agency. The NNSA briefly investigated sourcing a substitute for Fogbank but eventually decided that since Fogbank had been produced previously, they would be able to repeat it. Additionally, "Los Alamos computer simulations at that time were not sophisticated enough to determine conclusively that an alternate material would function as effectively as Fogbank," according to a Los Alamos publication.
With Facility 9404-11 long since decommissioned, a new production facility was required. Delays arose during its construction. Engineers repeatedly encountered failure in their efforts to produce Fogbank. Manufacture involves the moderately toxic, highly volatile solvent acetonitrile, which presents a hazard for workers (causing three evacuations in March 2006 alone). As multiple deadlines expired, and the schedule was pushed back repeatedly, the NNSA eventually invested $23 million to find an alternative to Fogbank.
In March 2007, engineers devised a manufacturing process for Fogbank. The material turned out to have problems when tested, and in September 2007 the Fogbank project was upgraded to "Code Blue" status by the NNSA, making it a major priority. In 2008, following the expenditure of a further $69 million, the NNSA managed to manufacture Fogbank, and 7 months later the first refurbished warhead was provided to the U.S. Navy, nearly a decade after the commencement of the refurbishment program. In May 2009 a U.S. Navy spokesman said that they had not received any refurbished weapons. The Energy Department stated that the current plan was to begin shipping refurbished weapons in late 2009, two years behind schedule.
The experience of reverse engineering Fogbank produced some improvements in scientific knowledge of the process. The new production scientists noticed that certain problems in production resembled those noted by the original team. These problems were traced to a particular impurity in the final product that was required to meet quality standards. A root cause investigation showed that input materials were subject to cleaning processes that had not existed during the original production run. This cleaning removed a substance that generated the required impurity. With the implicit role of this substance finally understood, the production scientists could control output quality better than during the original run.
The W76 life-extension project was completed in December 2018, when 800 W76s were upgraded to the W76-1 design. It is unclear whether the new W76-2 uses Fogbank.
References
Nuclear weapons of the United States
Foams
Plastics
Classified information in the United States
Nuclear weapon design
Aerogels | Fogbank | Physics,Chemistry | 1,069 |
65,055,311 | https://en.wikipedia.org/wiki/Eugene%20Thomas%20Allen | Eugene Thomas Allen (2 April 1864 – 17 July 1964) was an American pioneer of geochemistry who worked at the Geophysical Laboratory of the Carnegie Institution.
Allen was born to Frederick and Harriet Augusta (born Thomas) in Athol, Massachusetts. He received an AB from Amherst College 1887 and studied chemistry at Johns Hopkins University, receiving a PhD in 1892. He taught chemistry at the University of Colorado (1892–1893) and at the Missouri School of Mines (1895–1901). From 1879 he collaborated with G.F. Becker and Carl Barus at the newly founded US Geological Survey to study the chemistry of rocks. In 1906 he moved to the Geophysical Laboratory of the Carnegie Institution of Washington established in the previous year with Arthur Louis Day as director.
Allen worked on silicate minerals which included experimental synthetic approaches in petrological studies. In 1925 Allen and Day worked on geysers and hot springs of Yellowstone.
Allen married Harriet Doughty at Arlington in 1896. He was a member of the Cosmos Club.
References
External links
Steam wells and other thermal activity at "The Geysers", California (1927)
Diopside and its relations to calcium and magnesium metasilicates (1909)
The isomorphism and thermal properties of the feldspars (1905)
American geochemists
1864 births
People from Athol, Massachusetts
1964 deaths
20th-century American chemists
American men centenarians
Missouri University of Science and Technology faculty | Eugene Thomas Allen | Chemistry | 292 |
21,312,140 | https://en.wikipedia.org/wiki/Quantitative%20comparative%20linguistics | Quantitative comparative linguistics is the use of quantitative analysis as applied to comparative linguistics. Examples include the statistical fields of lexicostatistics and glottochronology, and the borrowing of phylogenetics from biology.
History
Statistical methods have been used for the purpose of quantitative analysis in comparative linguistics for more than a century. During the 1950s, the Swadesh list emerged: a standardised set of lexical concepts found in most languages, as words or phrases, that allow two or more languages to be compared and contrasted empirically.
Probably the first published quantitative historical linguistics study was by Sapir in 1916, while Kroeber and Chretien in 1937 investigated nine Indo-European (IE) languages using 74 morphological and phonological features (extended in 1939 by the inclusion of Hittite). Ross in 1950 carried out an investigation into the theoretical basis for such studies. Swadesh, using word lists, developed lexicostatistics and glottochronology in a series of papers published in the early 1950s but these methods were widely criticised though some of the criticisms were seen as unjustified by other scholars. Embleton published a book on "Statistics in Historical Linguistics" in 1986 which reviewed previous work and extended the glottochronological method. Dyen, Kruskal and Black carried out a study of the lexicostatistical method on a large IE database in 1992.
During the 1990s, there was renewed interest in the topic, based on the application of methods of computational phylogenetics and cladistics. Such projects often involved collaboration by linguistic scholars, and colleagues with expertise in information science and/or biological anthropology. These projects often sought to arrive at an optimal phylogenetic tree (or network), to represent a hypothesis about the evolutionary ancestry and perhaps its language contacts. Pioneers in these methods included the founders of CPHL: computational phylogenetics in historical linguistics (CPHL project): Donald Ringe, Tandy Warnow, Luay Nakhleh and Steven N. Evans.
In the mid-1990s a group at the University of Pennsylvania computerised the comparative method and used a different IE database with 20 ancient languages. In the biological field several software programs were then developed which could have application to historical linguistics. In particular a group at the University of Auckland developed a method that gave controversially old dates for IE languages. A conference on "Time-depth in Historical Linguistics" was held in August 1999 at which many applications of quantitative methods were discussed. Subsequently many papers have been published on studies of various language groups as well as on comparisons of the methods.
Greater media attention was generated in 2003 after the publication by anthropologists Russell Gray and Quentin Atkinson of a short study on Indo-European languages in Nature. Gray and Atkinson attempted to quantify, in a probabilistic sense, the age and relatedness of modern Indo-European languages and, sometimes, the preceding proto-languages.
The proceedings of an influential 2004 conference, Phylogenetic Methods and the Prehistory of Languages were published in 2006, edited by Peter Forster and Colin Renfrew.
Studied language families
Computational phylogenetic analyses have been performed for:
Indo-European languages: Bouckaert (2012)
Uralic languages: Honkola (2013)
Turkic languages: Hruschka (2014)
Dravidian languages: Kolipakam (2018)
Austroasiatic languages: Sidwell (2015)
Austronesian languages: Gray (2009)
Pama-Nyungan languages: Bowern & Atkinson (2012), Bouckaert, Bowern and Atkinson (2018)
Bantu languages: Currie (2013), Grollemund (2015)
Semitic languages: Kitchen (2009)
Dené–Yeniseian languages: Sicoli & Holton (2014)
Uto-Aztecan languages: Wheeler & Whiteley (2014)
Mayan languages: Atkinson (2006)
Arawakan languages: Walker & Ribeiro (2011)
Tupi-Guarani languages: Michael (2015)
Sino-Tibetan languages: Zhang et al. (2019), Sagart et al. (2019)
Background
The standard method for assessing language relationships has been the comparative method. However this has a number of limitations. Not all linguistic material is suitable as input and there are issues of the linguistic levels on which the method operates. The reconstructed languages are idealized and different scholars can produce different results. Language family trees are often used in conjunction with the method and "borrowings" must be excluded from the data, which is difficult when borrowing is within a family. It is often claimed that the method is limited in the time depth over which it can operate. The method is difficult to apply and there is no independent test. Thus alternative methods have been sought that have a formalised method, quantify the relationships and can be tested.
A goal of comparative historical linguistics is to identify instances of genetic relatedness amongst languages. The steps in quantitative analysis are (i) to devise a procedure based on theoretical grounds, on a particular model or on past experience, etc. (ii) to verify the procedure by applying it to some data where there exists a large body of linguistic opinion for comparison (this may lead to a revision of the procedure of stage (i) or at the extreme of its total abandonment) (iii) to apply the procedure to data where linguistic opinions have not yet been produced, have not yet been firmly established or perhaps are even in conflict.
Applying phylogenetic methods to languages is a multi-stage process: (a) the encoding stage - getting from real languages to some expression of the relationships between them in the form of numerical or state data, so that those data can then be used as input to phylogenetic methods (b) the representation stage - applying phylogenetic methods to extract from those numerical and/or state data a signal that is converted into some useful form of representation, usually two dimensional graphical ones such as trees or networks, which synthesise and "collapse" what are often highly complex multi dimensional relationships in the signal (c) the interpretation stage - assessing those tree and network representations to extract from them what they actually mean for real languages and their relationships through time.
Types of trees and networks
An output of a quantitative historical linguistic analysis is normally a tree or a network diagram. This allows summary visualisation of the output data but is not the complete result. A tree is a connected acyclic graph, consisting of a set of vertices (also known as "nodes") and a set of edges ("branches") each of which connects a pair of vertices. An internal node represents a linguistic ancestor in a phylogenetic tree or network. Each language is represented by a path, the paths showing the different states as it evolves. There is only one path between every pair of vertices. Unrooted trees plot the relationship between the input data without assumptions regarding their descent. A rooted tree explicitly identifies a common ancestor, often by specifying a direction of evolution or by including an "outgroup" that is known to be only distantly related to the set of languages being classified. Most trees are binary, that is, a parent has two children. A tree can always be produced even though it is not always appropriate. A different sort of tree is one based only on language similarities/differences. In this case the internal nodes of the graph do not represent ancestors but are introduced to represent the conflict between the different splits ("bipartitions") in the data analysis. The "phenetic distance" is the sum of the weights (often represented as lengths) along the path between languages. Sometimes an additional assumption is made that these internal nodes do represent ancestors.
When languages converge, usually through word adoption ("borrowing"), a network model is more appropriate. There will be additional edges to reflect the dual parentage of a language. These edges will be bidirectional if both languages borrow from one another. A tree is thus a simple network, but there are many other types of network. A phylogenetic network is one where the taxa are represented by nodes and their evolutionary relationships are represented by branches. Another type is based on splits, and is a combinatorial generalisation of the split tree. A given set of splits can have more than one representation, thus internal nodes may not be ancestors and are only an "implicit" representation of evolutionary history, as distinct from the "explicit" representation of phylogenetic networks. In a splits network the phenetic distance is that of the shortest path between two languages. A further type is the reticular network, which shows incompatibilities (due, for example, to contact) as reticulations and whose internal nodes do represent ancestors. A network may also be constructed by adding contact edges to a tree. The last main type is the consensus network formed from trees. These trees may be the result of bootstrap analysis or samples from a posterior distribution.
Language change
Change happens continually to languages, but not usually at a constant rate, with its cumulative effect producing splits into dialects, languages and language families. It is generally thought that morphology changes slowest and phonology the quickest. As change happens, less and less evidence of the original language remains. Finally there could be loss of any evidence of relatedness. Changes of one type may not affect other types, for example sound changes do not affect cognacy. Unlike biology, it cannot be assumed that languages all have a common origin and establishing relatedness is necessary. In modelling it is often assumed for simplicity that the characters change independently but this may not be the case. Besides borrowing, there can also be semantic shifts and polymorphism.
Analysis input
Data
Analysis can be carried out on the "characters" of languages or on the "distances" of the languages. In the former case the input to a language classification generally takes the form of a data matrix where the rows correspond to the various languages being analysed and the columns correspond to different features or characters by which each language may be described. These features are of two types cognates or typological data. Characters can take one or more forms (homoplasy) and can be lexical, morphological or phonological. Cognates are morphemes (lexical or grammatical) or larger constructions. Typological characters can come from any part of the grammar or lexicon. If there are gaps in the data these have to be coded.
In addition to the original database of (unscreened) data, in many studies subsets are formed for particular purposes (screened data).
In lexicostatistics the features are the meanings of words, or rather semantic slots. Thus the matrix entries are a series of glosses. As originally devised by Swadesh the single most common word for a slot was to be chosen, which can be difficult and subjective because of semantic shift. Later methods may allow more than one meaning to be incorporated.
Constraints
Some methods allow constraints to be placed on language contact geography (isolation by distance) and on sub-group split times.
Databases
Swadesh originally published a 200 word list but later refined it into a 100 word one. A commonly used IE database is that by Dyen, Kruskal and Black which contains data for 95 languages, though the original is known to contain a few errors. Besides the raw data it also contains cognacy judgements. This is available online. The database of Ringe, Warnow and Taylor has information on 24 IE languages, with 22 phonological characters, 15 morphological characters and 333 lexical characters. Gray and Atkinson used a database of 87 languages with 2449 lexical items, based on the Dyen set with the addition of three ancient languages. They incorporated the cognacy judgements of a number of scholars. Other databases have been drawn up for African, Australian and Andean language families, amongst others.
Coding of the data may be in binary form or in multistate form. The former is often used but does result in a bias. It has been claimed that there is a constant scale factor between the two coding methods, and that allowance can be made for this. However, another study suggests that the topology may change.
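A minimal sketch of this recoding step is shown below, assuming each meaning slot has already been assigned a cognate-class label per language; every distinct class observed in a slot becomes one binary presence/absence character. The languages and labels are invented.

```python
# Toy cognate-class assignments: language -> class label for each meaning slot.
# Labels are invented; each distinct label in a slot becomes a binary character.
data = {
    "LangA": ["dog1", "water1", "fire1"],
    "LangB": ["dog1", "water2", "fire1"],
    "LangC": ["dog2", "water2", "fire1"],
}

slots = len(next(iter(data.values())))
characters = []                        # one column per (slot, cognate class)
for s in range(slots):
    classes = sorted({cogs[s] for cogs in data.values()})
    for c in classes:
        characters.append((s, c))

binary = {
    lang: [1 if cogs[s] == c else 0 for (s, c) in characters]
    for lang, cogs in data.items()
}
for lang, row in binary.items():
    print(lang, row)
```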
Word lists
The word slots are chosen to be as culture- and borrowing-free as possible. The original Swadesh lists are most commonly used but many others have been devised for particular purposes. Often these are shorter than Swadesh's preferred 100-item list. Kessler has written a book on "The Significance of Word Lists", while McMahon and McMahon carried out studies on the effects of reconstructability and retentiveness. The effect of increasing the number of slots has been studied and a law of diminishing returns found, with about 80 being found satisfactory. However some studies have used less than half this number.
Generally each cognate set is represented as a different character but differences between words can also be measured as a distance measurement by sound changes. Distances may also be measured letter by letter.
Morphological features
Traditionally these have been seen as more important than lexical ones and so some studies have put additional weighting on this type of character. Such features were included in the Ringe, Warnow and Taylor IE database for example. However other studies have omitted them.
Typological features
Examples of these features include glottalised consonants, tone systems, accusative alignment in nouns, dual number, case-number correspondence, object-verb order, and first person singular pronouns. These are listed in the WALS database, though this is still only sparsely populated for many languages.
Probabilistic models
Some analysis methods incorporate a statistical model of language evolution and use the properties of the model to estimate the evolution history. Statistical models are also used for simulation of data for testing purposes. A stochastic process can be used to describe how a set of characters evolves within a language. The probability with which a character will change can depend on the branch but not all characters evolve together, nor is the rate identical on all branches. It is often assumed that each character evolves independently but this is not always the case. Within a model borrowing and parallel development (homoplasy) may also be modelled, as well as polymorphisms.
Effects of chance
Chance resemblances produce a level of noise against which the required signal of relatedness has to be found. A study was carried out by Ringe into the effects of chance on the mass comparison method. This showed that chance resemblances were critical to the technique and that Greenberg's conclusions could not be justified, though the mathematical procedure used by Ringe was later criticised.
With small databases sampling errors can be important.
In some cases with a large database, an exhaustive search of all possible trees or networks is not feasible because of running-time limitations. Thus there is a chance that the optimum solution is not found by heuristic solution-space search methods.
Detection of borrowing
Loanwords can severely affect the topology of a tree so efforts are made to exclude borrowings. However, undetected ones sometimes still exist. McMahon and McMahon showed that around 5% borrowing can affect the topology while 10% has significant effects. In networks borrowing produces reticulations. Minett and Wang examined ways of detecting borrowing automatically.
Split dating
Dating of language splits can be determined if it is known how the characters evolve along each branch of a tree. The simplest assumption is that all characters evolve at a single constant rate with time and that this is independent of the tree branch. This was the assumption made in glottochronology. However, studies soon showed that there was variation between languages, some probably due to the presence of unrecognised borrowing. A better approach is to allow rate variation, and the gamma distribution is usually used because of its mathematical convenience. Studies have also been carried out that show that the character replacement rate depends on the frequency of use. Widespread borrowing can bias divergence time estimates by making languages seem more similar and hence younger. However, this also makes the ancestor's branch length longer so that the root is unaffected.
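A minimal sketch of this rates-across-sites assumption is shown below: per-character rate multipliers are drawn from a gamma distribution with mean one, so that some characters change faster and others slower than the average; the shape parameter is an invented value.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.7                                   # invented gamma shape parameter
# Shape alpha and scale 1/alpha give a distribution with mean 1.
rates = rng.gamma(shape=alpha, scale=1.0 / alpha, size=10)
print(rates)                                  # per-character rate multipliers
```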
This aspect is the most controversial part of quantitative comparative linguistics.
Types of analysis
There is a need to understand how a language classification method works in order to determine its assumptions and limitations. It may only be valid under certain conditions or be suitable for small databases. The methods differ in their data requirements, their complexity and running time. The methods also differ in their optimisation criteria.
Character based models
Maximum parsimony and maximum compatibility
These two methods are similar but the maximum parsimony method's objective is to find the tree (or network) in which the minimum number of evolutionary changes occurs. In some implementations the characters can be given weights and then the objective is to minimise the total weighted sum of the changes. The analysis produces unrooted trees unless an outgroup is used or directed characters. Heuristics are used to find the best tree but optimisation is not guaranteed. The method is often implemented using the programs PAUP or TNT.
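A minimal sketch of the small-parsimony step is shown below, using Fitch's algorithm to count the fewest state changes a single character requires on a fixed rooted binary tree; the tree and character states are invented.

```python
# Fitch's algorithm: post-order pass assigning state sets and counting changes.
# Leaves are language names (strings); internal nodes are (left, right) pairs.
def fitch(node, states):
    """Return (state set, minimum change count) for the subtree rooted at node."""
    if isinstance(node, str):                     # leaf: singleton state set
        return {states[node]}, 0
    left_set, left_n = fitch(node[0], states)
    right_set, right_n = fitch(node[1], states)
    common = left_set & right_set
    if common:
        return common, left_n + right_n           # no change needed at this node
    return left_set | right_set, left_n + right_n + 1

tree = (("LangA", "LangB"), ("LangC", "LangD"))   # invented rooted binary tree
states = {"LangA": 1, "LangB": 1, "LangC": 0, "LangD": 1}

_, changes = fitch(tree, states)
print("Minimum number of changes for this character:", changes)
```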
Maximum compatibility also uses characters, with the objective of finding the tree on which the maximum number of characters evolve without homoplasy. Again the characters can be weighted, and when this occurs the objective is to maximise the sum of the weights of compatible characters. It also produces unrooted trees unless additional information is incorporated. There are no readily available heuristics that are accurate with large databases. This method has only been used by Ringe's group.
In these two methods there are often several trees found with the same score so the usual practice is to find a consensus tree via an algorithm. A majority consensus has bipartitions in more than half of the input trees while a greedy consensus adds bipartitions to the majority tree. The strict consensus tree is the least resolved and contains those splits that are in every tree.
Bootstrapping (a statistical resampling strategy) is used to provide branch support values. The technique randomly picks characters from the input data matrix and then the same analysis is used. The support value is the fraction of the runs with that bipartition in the observed tree. However, bootstrapping is very time consuming.
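A minimal sketch of the column-resampling step is shown below, assuming the data are held as a languages × characters matrix; the tree-building step is represented only by a placeholder function, not a real inference method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented binary character matrix: rows = languages, columns = characters.
matrix = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1, 1],
])

def build_tree(m):
    """Placeholder for whatever tree-inference method is being assessed."""
    return tuple(m.sum(axis=1))     # stand-in "tree" so the loop runs

replicates = []
for _ in range(100):
    cols = rng.integers(0, matrix.shape[1], size=matrix.shape[1])
    resampled = matrix[:, cols]     # sample characters with replacement
    replicates.append(build_tree(resampled))

# Support for a bipartition would be the fraction of replicate trees containing it.
print(len(replicates), "bootstrap replicates built")
```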
Maximum likelihood and Bayesian analysis
Both of these methods use explicit evolution models. The maximum likelihood method optimises the probability of producing the observed data, while Bayesian analysis estimates the probability of each tree and so produces a probability distribution. A random walk is made through the "model-tree space". Both take an indeterminate time to run, and deciding when to stop can be a problem. However, both produce support information for each branch.
The assumptions of these methods are overt and are verifiable. The complexity of the model can be increased if required. The model parameters are estimated directly from the input data so assumptions about evolutionary rate are avoided.
Perfect Phylogenetic Networks
This method produces an explicit phylogenic network having an underlying tree with additional contact edges. Characters can be borrowed but evolve without homoplasy. To produce such networks, a graph-theoretic algorithm has been used.
Gray and Atkinson's method
The input lexical data is coded in binary form, with one character for each state of the original multi-state character. The method allows homoplasy and constraints on split times. A likelihood-based analysis method is used, with evolution expressed as a rate matrix. Cognate gain and loss is modelled with a gamma distribution to allow rate variation and with rate smoothing. Because of the vast number of possible trees with many languages, Bayesian inference is used to search for the optimal tree. A Markov Chain Monte Carlo algorithm generates a sample of trees as an approximation to the posterior probability distribution. A summary of this distribution can be provided as a greedy consensus tree or network with support values. The method also provides date estimates.
The method is accurate when the original characters are binary, and evolve identically and independently of each other under a rates-across-sites model with gamma distributed rates; the dates are accurate when the rate of change is constant. Understanding the performance of the method when the original characters are multi-state is more complicated, since the binary encoding produces characters that are not independent, while the method assumes independence.
Nicholls and Gray's method
This method is an outgrowth of Gray and Atkinson's. Rather than having two parameters for a character, this method uses three: the birth rate, the death rate of a cognate, and its borrowing rate. The birth rate is a Poisson random variable with a single birth of a cognate class, but separate deaths on branches are allowed (Dollo parsimony). The method does not allow homoplasy but allows polymorphism and constraints. Its major problem is that it cannot handle missing data (an issue that has since been resolved by Ryder and Nicholls). Statistical techniques are used to fit the model to the data. Prior information may be incorporated and an MCMC search is made of possible reconstructions. The method has been applied to Gray and Nichol's database and seems to give similar results.
Distance based models
These use a triangular matrix of pairwise language comparisons. The input character matrix is used to compute the distance matrix either using the Hamming distance or the Levenshtein distance. The former measures the proportion of matching characters while the latter allows costs of the various possible transforms to be included. These methods are fast compared with wholly character based ones. However, these methods do result in information loss.
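A minimal sketch of turning a character matrix into a pairwise Hamming distance matrix is shown below; the languages and characters are invented.

```python
import numpy as np

# Invented binary character matrix: rows = languages, columns = characters.
langs = ["LangA", "LangB", "LangC"]
chars = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
])

n = len(langs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Hamming distance: proportion of characters in which two languages differ.
        dist[i, j] = np.mean(chars[i] != chars[j])

print(dist)
```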
UPGMA
The "Unweighted Pairwise Group Method with Arithmetic-mean" (UPGMA) is a clustering technique which operates by repeatedly joining the two languages that have the smallest distance between them. It operates accurately with clock-like evolution but otherwise it can be in error. This is the method used in Swadesh's original lexicostatistics.
Split Decomposition
This is a technique for dividing data into natural groups. The data could be characters but is more usually distance measures. The character counts or distances are used to generate the splits and to compute weights (branch lengths) for the splits. The weighted splits are then represented in a tree or network based on minimising the number of changes between each pair of taxa. There are fast algorithms for generating the collection of splits. The weights are determined from the taxon to taxon distances. Split decomposition is effective when the number of taxa is small or when the signal is not too complicated.
Neighbor joining
This method operates on distance data, computes a transformation of the input matrix and then computes the minimum distance of the pairs of languages. It operates correctly even if the languages do not evolve with a lexical clock. A weighted version of the method may also be used. The method produces an output tree. It is claimed to be the closest method to manual techniques for tree construction.
Neighbor-net
It uses a similar algorithm to neighbor joining. Unlike Split Decomposition it does not fuse nodes immediately but waits until a node has been paired a second time. The tree nodes are then replaced by two and the distance matrix reduced. It can handle large and complicated data sets. However, the output is a phenogram rather than a phylogram. This is the most popular network method.
Network
This was an early network method that has been used for some language analysis. It was originally developed for genetic sequences with more than one possible origin. Network collapses the alternative trees into a single network. Where there are multiple histories a reticulation (a box shape) is drawn. It generates a list of characters incompatible with a tree.
ASP
This uses a declarative knowledge representation formalism and the methods of Answer Set Programming. One such solver is CMODELS, which can be used for small problems, but larger ones require heuristics. Preprocessing is used to determine the informative characters. CMODELS transforms the problem into a propositional theory and uses a SAT solver to compute the models of this theory.
Fitch/Kitch
Fitch and Kitch are maximum likelihood based programs in PHYLIP that allow a tree to be rearranged after each addition, unlike NJ. Kitch differs from Fitch in assuming a constant rate of change throughout the tree while Fitch allows for different rates down each branch.
Separation level method
Holm introduced a method in 2000 to deal with some known problems of lexicostatistical analysis. These are the "symplesiomorphy trap", where shared archaisms are difficult to distinguish from shared innovations, and the "proportionality trap", where later changes can obscure early ones. Later he introduced a refined method, called SLD, to take account of the variable word distribution across languages. The method does not assume a constant rate of change.
Fast convergence methods
A number of fast converging analysis methods have been developed for use with large databases (>200 languages). One of these is the Disk Covering Method (DCM). This has been combined with existing methods to give improved performance. A paper on the DCM-NJ+MP method is given by the same authors in "The performance of Phylogenetic Methods on Trees of Bounded Diameter", where it is compared with the NJ method.
Resemblance based models
These models compare the letters of words rather than their phonetics. Dunn et al. studied 125 typological characters across 16 Austronesian and 15 Papuan languages. They compared their results to an MP tree and one constructed by traditional analysis. Significant differences were found. Similarly Wichmann and Saunders used 96 characters to study 63 American languages.
Computerised mass comparison
A method that has been suggested for initial inspection of a set of languages, to see if they are related, is mass comparison. However, this has been severely criticised and fell into disuse. Recently Kessler has resurrected a computerised version of the method, but using rigorous hypothesis testing. The aim is to make use of similarities across more than two languages at a time. In another paper various criteria for comparing word lists are evaluated. It was found that the IE and Uralic families could be reconstructed but that there was no evidence for a joint super-family.
Nichol's method
This method uses stable lexical fields, such as stance verbs, to try to establish long-distance relationships. Account is taken of convergence and semantic shifts to search for ancient cognates. A model is outlined and the results of a pilot study are presented.
ASJP
The Automated Similarity Judgment Program (ASJP) is similar to lexicostatistics, but the judgement of similarities is done by a computer program following a consistent set of rules. Trees are generated using standard phylogenetic methods. ASJP uses 7 vowel symbols and 34 consonant symbols. There are also various modifiers. Two words are judged similar if at least two consecutive consonants in the respective words are identical while vowels are also taken into account. The proportion of words with the same meaning judged to be similar for a pair of languages is the Lexical Similarity Percentage (LSP). The Phonological Similarity Percentage (PSP) is also calculated. PSP is then subtracted from the LSP yielding the Subtracted Similarity Percentage (SSP) and the ASJP distance is 100-SSP. Currently there are data on over 4,500 languages and dialects in the ASJP database from which a tree of the world's languages was generated.
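The arithmetic of the distance is straightforward once the similarity judgements have been made. The sketch below uses a deliberately simplified similarity rule (the first two consonants must agree) and an assumed PSP value purely to show how LSP, SSP and the ASJP distance relate; the word lists are invented, and the real ASJP program applies a much more detailed set of rules.

```python
def consonants(word, vowels="aeiou"):
    """Strip vowels; a crude stand-in for ASJP's sound-class encoding."""
    return [c for c in word if c not in vowels]

def similar(w1, w2):
    """Toy similarity judgement: words match if their first two consonants agree.
    (The real ASJP rules are more elaborate.)"""
    c1, c2 = consonants(w1), consonants(w2)
    return len(c1) >= 2 and c1[:2] == c2[:2]

# Hypothetical 5-item word lists for two languages.
lang1 = ["hand", "water", "stone", "fish", "name"]
lang2 = ["hant", "vasser", "sten", "pesce", "nom"]

matches = sum(similar(a, b) for a, b in zip(lang1, lang2))
lsp = 100 * matches / len(lang1)   # Lexical Similarity Percentage
psp = 20.0                         # Phonological Similarity Percentage (assumed value here)
ssp = lsp - psp                    # Subtracted Similarity Percentage
asjp_distance = 100 - ssp
print(lsp, ssp, asjp_distance)
```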
Serva and Petroni's method
This measures the orthographical distance between words to avoid the subjectivity of cognacy judgements. It determines the minimum number of operations needed to transform one word into another, normalised by the length of the longer word. A tree is constructed from the distance data by the UPGMA technique.
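A minimal sketch of the distance described here, using the standard dynamic-programming edit distance normalised by the length of the longer word; the word pair is just an illustrative example.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalised_distance(a, b):
    """Edit distance divided by the length of the longer word."""
    return levenshtein(a, b) / max(len(a), len(b))

print(normalised_distance("water", "wasser"))   # 2 edits / 6 letters ≈ 0.33
```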
Phonetic evaluation methods
Heggarty has proposed a means of providing a measure of the degrees of difference between cognates, rather than just yes/no answers. This is based on examining many (>30) features of the phonetics of the glosses in comparison with the protolanguage. This could require a large amount of work but Heggarty claims that only a representative sample of sounds is necessary. He also examined the rate of change of the phonetics and found a large rate variation, so that it was unsuitable for glottochronology. A similar evaluation of the phonetics had earlier been carried out by Grimes and Agard for Romance languages, but this used only six points of comparison.
Evaluation of methods
Metrics
Standard mathematical techniques are available for measuring the similarity/difference of two trees. For consensus trees the Consistency Index (CI) is a measure of homoplasy. For one character it is the minimum conceivable number of steps on any one tree (= 1 for a binary character) divided by the number of reconstructed steps on the tree. The CI of a tree is the sum of the character CIs divided by the number of characters. It represents the proportion of patterns correctly assigned.
The Retention Index (RI) measures the amount of similarity in a character. It is the ratio (g - s) / (g - m) where g is the greatest number of steps of a character on any tree, m is the minimum number of steps on any tree, and s is the minimum steps on a particular tree. There is also a Rescaled CI which is the product of the CI and RI.
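A short worked example of these two indices, with invented step counts for a single character:

```python
def consistency_index(min_steps, observed_steps):
    """CI for one character: minimum conceivable steps / steps reconstructed on the tree."""
    return min_steps / observed_steps

def retention_index(g, m, s):
    """RI = (g - s) / (g - m), where g is the greatest number of steps the character
    could need on any tree, m the fewest on any tree, and s the steps on the tree at hand."""
    return (g - s) / (g - m)

# Hypothetical character: at most 4 steps on the worst tree, at least 1 step on the
# best tree, and 2 steps on the tree being scored.
g, m, s = 4, 1, 2
print(consistency_index(m, s))                               # 0.5
print(retention_index(g, m, s))                              # (4-2)/(4-1) ≈ 0.667
print(consistency_index(m, s) * retention_index(g, m, s))    # rescaled CI
```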
For binary trees the standard way of comparing their topology is to use the Robinson-Foulds metric. This distance is the average of the number of false positives and false negatives in terms of branch occurrence. R-F rates above 10% are considered poor matches. For other sorts of trees and for networks there is yet no standard method of comparison.
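As a sketch, if each unrooted tree is summarised by its set of non-trivial splits (here written as one side of each bipartition, with both trees and their splits invented), a distance of the form described above can be computed as the average of the two one-sided differences. A real implementation would first canonicalise the bipartitions, for example by always taking the side that excludes a fixed reference taxon.

```python
def robinson_foulds(splits_a, splits_b):
    """Symmetric-difference (RF) style distance between two unrooted trees,
    each given as a set of non-trivial splits (frozensets of leaf names)."""
    false_negatives = len(splits_a - splits_b)   # splits in tree A but not tree B
    false_positives = len(splits_b - splits_a)   # splits in tree B but not tree A
    return (false_negatives + false_positives) / 2

# Two hypothetical 5-taxon trees (leaves A-E) described by their internal splits.
tree1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
tree2 = {frozenset({"A", "B"}), frozenset({"C", "D"})}
print(robinson_foulds(tree1, tree2))  # 1.0
```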
Lists of incompatible characters are produced by some tree producing methods. These can be extremely helpful in analysing the output. Where heuristic methods are used repeatability is an issue. However, standard mathematical techniques are used to overcome this problem.
Comparison with previous analyses
In order to evaluate the methods a well understood family of languages is chosen, with a reliable dataset. This family is often the IE one but others have been used. After applying the methods to be compared to the database, the resulting trees are compared with the reference tree determined by traditional linguistic methods. The aim is to have no conflicts in topology, for example no missing sub-groups, and compatible dates. The families suggested for this analysis by Nichols and Warnow are Germanic, Romance, Slavic, Common Turkic, Chinese, and Mixe Zoque as well as older groups such as Oceanic and IE.
Use of simulations
Although the use of real languages does add realism and provides real problems, the above method of validation suffers from the fact that the true evolution of the languages is unknown. By generating a set of data from a simulated evolution, the correct tree is known. However it will be a simplified version of reality. Thus both evaluation techniques should be used.
Sensitivity analysis
To assess the robustness of a solution it is desirable to vary the input data and constraints, and observe the output. Each variable is changed slightly in turn. This analysis has been carried out in a number of cases and the methods found to be robust, for example by Atkinson and Gray.
Studies comparing methods
During the early 1990s, linguist Donald Ringe, with computer scientists Luay Nakhleh and Tandy Warnow, statistician Steven N. Evans and others, began collaborating on research in quantitative comparative linguistic projects. They later founded the CHPL project, the goals of which include: "producing and maintaining real linguistic datasets, in particular of Indo-European languages", "formulating statistical models that capture the evolution of historical linguistic data", "designing simulation tools and accuracy measures for generating synthetic data for studying the performance of reconstruction methods", and "developing and implementing statistically-based as well as combinatorial methods for reconstructing language phylogenies, including phylogenetic networks".
A comparison of coding methods was carried out by Rexova et al. (2003). They created a reduced data set from the Dyen database but with the addition of Hittite. They produced a standard multistate matrix where the 141 character states correspond to individual cognate classes, allowing polymorphism. They also produced a second matrix in which some cognate classes were joined, to reduce subjectivity, and polymorphic states were not allowed. Lastly they produced a binary matrix where each class of words was treated as a separate character. The matrices were analysed by PAUP. It was found that using the binary matrix produced changes near the root of the tree.
McMahon and McMahon (2003) used three PHYLIP programs (NJ, Fitch and Kitch) on the DKB dataset. They found that the results produced were very similar. Bootstrapping was used to test the robustness of any part of the tree. Later they used subsets of the data to assess its retentiveness and reconstructability. The outputs showed topological differences which were attributed to borrowing. They then also used Network, Split Decomposition, Neighbor-net and SplitsTree on several data sets. Significant differences were found between the latter two methods. Neighbor-net was considered optimal for discerning language contact.
In 2005, Nakhleh, Warnow, Ringe and Evans carried out a comparison of six analysis methods using an Indo-European database. The methods compared were UPGMA, NJ, MP, MC, WMC and GA. The PAUP software package was used for UPGMA, NJ, and MC, as well as for computing the majority consensus trees. The RWT database was used, but 40 characters were removed due to evidence of polymorphism. Then a screened database was produced, excluding all characters that clearly exhibited parallel development, so eliminating 38 features. The trees were evaluated on the basis of the number of incompatible characters and on agreement with established sub-grouping results. They found that UPGMA was clearly worst, but there was not a lot of difference between the other methods. The results depended on the data set used. It was found that weighting the characters was important, which requires linguistic judgement.
Saunders (2005) compared NJ, MP, GA and Neighbor-Net on a combination of lexical and typological data. He recommended use of the GA method but Nichols and Warnow have some concerns about the study methodology.
Cysouw et al. (2006) compared Holm's original method with NJ, Fitch, MP and SD. They found Holm's method to be less accurate than the others.
In 2013, François Barbancon, Warnow, Evans, Ringe and Nakhleh studied various tree reconstruction methods using simulated data. Their simulated data varied in the number of contact edges, the degree of homoplasy, the deviation from a lexical clock, and the deviation from the rates-across-sites assumption. It was found that the accuracy of the unweighted methods (MP, NJ, UPGMA, and GA) was consistent in all the conditions studied, with MP being the best. The accuracy of the two weighted methods (WMC and WMP) depended on the appropriateness of the weighting scheme. With low homoplasy the weighted methods generally produced the more accurate results, but inappropriate weighting could make these worse than MP or GA under moderate or high homoplasy levels.
Choosing the best model
Choice of an appropriate model is critical for the production of good phylogenetic analyses. Underparameterised or overly restrictive models may produce aberrant behaviour when their underlying assumptions are violated, while overly complex or overparameterised models require long run times and their parameters may be overfit. The most common method of model selection is the "Likelihood Ratio Test", which produces an estimate of the fit between the model and the data, but as an alternative the Akaike Information Criterion or the Bayesian Information Criterion can be used. Model selection computer programs are available.
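For illustration, the two information criteria mentioned here can be computed directly from a model's maximised log-likelihood, its number of free parameters and the number of characters; the log-likelihoods and parameter counts below are invented, not taken from any real analysis.

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits of a simple and a richer model to 200 characters.
simple = {"logL": -1520.0, "k": 2}
rich   = {"logL": -1505.0, "k": 10}
n = 200
for name, m in (("simple", simple), ("rich", rich)):
    print(name, aic(m["logL"], m["k"]), bic(m["logL"], m["k"], n))
```

Lower values indicate a better trade-off between fit and complexity, so the criteria can disagree when the sample size is small.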
See also
Glottochronology
List of phylogenetics software
Quantitative linguistics
Notes
Bibliography
Atkinson, Nicholls, Welsh and Gray : From words to dates - Transactions of the Philological Society 103 (2005).
Bandelt and Dress : Split Decomposition - Molecular Phylogenetics and Evolution 1 (1992).
Bandelt, Forster and Rohl : Median-joining networks for inferring intraspecific phylogenies - Molecular Biology and Evolution 16 (1999).
Bryant, Filimon and Gray : Untangling our past: Languages, trees, splits and networks (in The Evolution of Cultural Diversity by Mace, Holden and Shennan UCL 2005).
Evans and Warnow : Unidentifiable divergence times in rates-across-sites models - IEEE/ACM Transactions on Computational Biology and Bioinformatics 1 (2005).
Huelsenbeck and Ronquist : MrBayes, Bayesian inference of phylogeny - Bioinformatics 17 (2001).
Huson : SplitsTree, a program for analysing and visualising evolutionary data - Bioinformatics 14(1) (1998).
Warnow, Evans, Ringe and Nakhleh : A Stochastic Model of Language Evolution that Incorporates Homoplasy and Borrowing (in Phylogenetic Methods and the Prehistory of Languages - Forster and Renfrew, 2006).
Efron, Halloran and Holmes : Bootstrap confidence levels for phylogenetic trees - Proceedings of National Academy of Sciences USA 93 (1996).
Kolaczkowski and Thornton : Performance of maximum parsimony and likelihood phylogenies when evolution is heterogeneous - Nature 431 (2004).
Felsenstein : Cases in which parsimony and compatibility methods will be positively misleading - Systematic Zoology 27 (1978).
Rogers : Maximum likelihood estimation of phylogenetic trees is consistent when substitution rates vary according to the invariable sites plus gamma distribution - Systematic Biology 59 (2001).
Historical linguistics
Phylogenetics
Comparative linguistics
Quantitative linguistics | Quantitative comparative linguistics | Mathematics,Biology | 7,706 |
94,623 | https://en.wikipedia.org/wiki/The%20Dreaming | The Dreaming, also referred to as Dreamtime, is a term devised by early anthropologists to refer to a religio-cultural worldview attributed to Australian Aboriginal mythology. It was originally used by Francis Gillen, quickly adopted by his colleague Sir Baldwin Spencer and thereafter popularised by A. P. Elkin, who, however, later revised his views.
The Dreaming is used to represent Aboriginal concepts of "Everywhen", during which the land was inhabited by ancestral figures, often of heroic proportions or with supernatural abilities. These figures were often distinct from gods, as they did not control the material world and were not worshipped but only revered.
The term is based on a rendition of the Arandic word , used by the Aranda (Arunta, Arrernte) people of Central Australia, although it has been argued that it is based on a misunderstanding or mistranslation. Some scholars suggest that the word's meaning is closer to "eternal, uncreated". Anthropologist William Stanner said that the concept was best understood by non-Aboriginal people as "a complex of meanings". Jukurrpa is a widespread term used by Warlpiri people and other peoples of the Western Desert cultural bloc.
By the 1990s, Dreaming had acquired its own currency in popular culture, based on idealised or fictionalised conceptions of Australian mythology. Since the 1970s, Dreaming has also returned from academic usage via popular culture and tourism and is now ubiquitous in the English vocabulary of Aboriginal Australians in a kind of "self-fulfilling academic prophecy".
Etymology
The station-master, magistrate, and amateur ethnographer Francis Gillen first used the terms in an ethnographical report in 1896. Along with Walter Baldwin Spencer, Gillen published a major work, Native Tribes of Central Australia, in 1899. In that work, they spoke of the Alcheringa as "the name applied to the far distant past with which the earliest traditions of the tribe deal". Five years later, in their Northern Tribes of Central Australia, they gloss the far distant age as "the dream times", link it to the word meaning "dream", and affirm that the term is current also among the Kaitish and Unmatjera.
Altjira
Early doubts about the precision of Spencer and Gillen's English gloss were expressed by the German Lutheran pastor and missionary Carl Strehlow in his 1908 book Die Aranda (The Arrernte). He noted that his Arrernte contacts explained altjira, whose etymology was unknown, as an eternal being who had no beginning. In the Upper Arrernte language, the proper verb for "to dream" was , literally "to see God". Strehlow theorised that the noun is the somewhat rare word , which Spencer and Gillen gave a corrupted transcription and a false etymology. "The native," Strehlow concluded, "knows nothing of 'dreamtime' as a designation of a certain period of their history."
Strehlow gives or ( meaning "good") as the Arrernte word for the eternal creator of the world and humankind. Strehlow describes him as a tall strong man with red skin, long fair hair, and emu legs, with many red-skinned wives (with dog legs) and children. In Strehlow's account, Altjira lives in the sky (which is a body of land through which runs the Milky Way, a river).
However, by the time Strehlow was writing, his contacts had been converts to Christianity for decades, and critics suggested that Altjira had been used by missionaries as a word for the Christian God.
In 1926, Spencer conducted a field study to challenge Strehlow's conclusion about Altjira and the implied criticism of Gillen and Spencer's original work. Spencer found attestations of from the 1890s that used the word to mean "associated with past times" or "eternal", not "god".
Academic Sam Gill finds Strehlow's use of Altjira ambiguous, sometimes describing a supreme being, and sometimes describing a totem being but not necessarily a supreme one. He attributes the clash partly to Spencer's cultural evolutionist beliefs that Aboriginal people were at a pre-religion "stage" of development (and thus could not believe in a supreme being), while Strehlow as a Christian missionary found presence of belief in the divine a useful entry point for proselytising.
Linguist David Campbell Moore is critical of Spencer and Gillen's "Dreamtime" translation, concluding:
Other terms
The complex of religious beliefs encapsulated by the Dreamings are also called:
Ngarrankarni or Ngarrarngkarni by the Gija people
Jukurrpa or Tjukurpa/Tjukurrpa by the Warlpiri people and in the Pitjantjatjara dialect
Ungud or Wungud by the Ngarinyin people
Manguny in the language Martu Wangka
Wongar in North-East Arnhem Land
Daramoolen in Ngunnawal language and Ngarigo language
Nura in the Dharug language
Nyitting in the Noongar language
Translations and meaning
In English, anthropologists have variously translated words normally understood to mean Dreaming or Dreamtime in a variety of other ways, including "Everywhen", "world-dawn", "ancestral past", "ancestral present", "ancestral now" (satirically), "unfixed in time", "abiding events" or "abiding law".
Most translations of the Dreaming into other languages are based on the translation of the word dream. Examples include in French ("dream spaces") and in Croatian (a gerund derived from the verb for "to dream").
The concept of the Dreaming is inadequately explained by English terms, and difficult to explain in terms of non-Aboriginal cultures. It has been described as "an all-embracing concept that provides rules for living, a moral code, as well as rules for interacting with the natural environment ... [it] provides for a total, integrated way of life ... a lived daily reality". It embraces past, present and future. Another definition suggests that it represents "the relationship between people, plants, animals and the physical features of the land; the knowledge of how these relationships came to be, what they mean and how they need to be maintained in daily life and in ceremony". According to Simon Wright, "jukurrpa has an expansive meaning for Warlpiri people, encompassing their own law and related cultural knowledge systems, along with what non-Indigenous people refer to as 'dreaming'."
A dreaming is often associated with a particular place, and may also belong to specific ages, gender or skin groups. Dreamings may be represented in artworks, for example "Pikilyi Jukurrpa" by Theo (Faye) Nangala represents the Dreaming of Pikilyi (Vaughan Springs) in the Northern Territory, and belongs to the Japanangka/ Nanpanangka and Japangardi/ Napanangka skin groups.
Aboriginal beliefs and culture
Related entities are known as Mura-mura by the Dieri and as Tjukurpa in Pitjantjatjara.
"Dreaming" is now also used as a term for a system of totemic symbols, so that an Aboriginal person may "own" a specific Dreaming, such as Kangaroo Dreaming, Shark Dreaming, Honey Ant Dreaming, Badger Dreaming, or any combination of Dreamings pertinent to their country. This is because in the Dreaming an individual's entire ancestry exists as one, culminating in the idea that all worldly knowledge is accumulated through one's ancestors. Many Aboriginal Australians also refer to the world-creation time as "Dreamtime". The Dreaming laid down the patterns of life for the Aboriginal people.
Creation is believed to be the work of culture heroes who travelled across a formless land, creating sacred sites and significant places of interest in their travels. In this way, "songlines" (or in the Warlpiri language) were established, some of which could travel right across Australia, through as many as six to ten different language groupings. The dreaming and travelling trails of these heroic spirit beings are the songlines. The signs of the spirit beings may be of spiritual essence, physical remains such as petrosomatoglyphs of body impressions or footprints, among natural and elemental simulacra.
Some of the ancestor or spirit beings inhabiting the Dreamtime become one with parts of the landscape, such as rocks or trees. The concept of a life force is also often associated with sacred sites, and ceremonies performed at such sites "are a re-creation of the events which created the site during The Dreaming". The ceremony helps the life force at the site to remain active and to keep creating new life: if not performed, new life cannot be created.
Dreaming existed before the life of the individual begins, and continues to exist when the life of the individual ends. Both before and after life, it is believed that this spirit-child exists in the Dreaming and is only initiated into life by being born through a mother. The spirit of the child is culturally understood to enter the developing fetus during the fifth month of pregnancy. When the mother felt the child move in the womb for the first time, it was thought that this was the work of the spirit of the land in which the mother then stood. Upon birth, the child is considered to be a special custodian of that part of their country and is taught the stories and songlines of that place. As Wolf (1994: p. 14) states: "A 'black fella' may regard his totem or the place from which his spirit came as his Dreaming. He may also regard tribal law as his Dreaming."
In the Wangga genre, the songs and dances express themes related to death and regeneration. They are performed publicly with the singer composing from their daily lives or while Dreaming of a nyuidj (dead spirit).
Dreaming stories vary throughout Australia, with variations on the same theme. The meaning and significance of particular places and creatures is wedded to their origin in The Dreaming, and certain places have a particular potency or Dreaming. For example, the story of how the sun was made is different in New South Wales and in Western Australia. Stories cover many themes and topics, as there are stories about creation of sacred places, land, people, animals and plants, law and custom. In Perth, the Noongar believe that the Darling Scarp is the body of the Wagyl – a serpent being that meandered over the land creating rivers, waterways and lakes and who created the Swan River. In another example, the Gagudju people of Arnhemland, for whom Kakadu National Park is named, believe that the sandstone escarpment that dominates the park's landscape was created in the Dreamtime when Ginga (the crocodile-man) was badly burned during a ceremony and jumped into the water to save himself.
See also
Aboriginal mythology
Rainbow Serpent
Tjilbruke
Apeiron, the concept of the eternal or unlimited in Greek philosophy
Dreaming (Australian Aboriginal art)
Festival of the Dreaming, an arts festival that ran from 1997 until 2012
Wuji (philosophy) and Taiji (philosophy), concepts of the eternal or limitless in Chinese philosophy
The Dreaming (1982 album by Kate Bush)
Notes
Citations
Sources
("Into the Crystal Dreamtime", promotional pamphlet, late 1980s; "Crystal Woman: isters of the Dreamtime" 1987; p. 36:"the prescriptive New Age genre, which sells one-hundred-proof ethnological antimodernism without overmuch worry about bothersome ethnographic facts")
Further reading
Everywhen: Australia and the Language of Deep History (University of Nebraska, 2023), edited by Jakelin Troy, Ann McGrath, and Laura Rademaker.
Australian Aboriginal mythology
Creation myths | The Dreaming | Astronomy | 2,480 |
36,974,622 | https://en.wikipedia.org/wiki/CytoViva | CytoViva, Inc. is a scientific imaging and instrumentation company that develops and markets optical microscopy and hyperspectral imaging technology for nanomaterials, pathogen and general biology applications.
History
The company's core optical technology was invented by Vitaly Vodyanoy, Physiology Professor and Director of the Biosensor Laboratory at Auburn University. CytoViva commercialized this technology in 2005 and patents for the illumination optics were issued in 2009 (US patents No. 7,542,203, 7,564,623). In 2008, the company introduced hyperspectral imaging technology as an integrated solution with its patented optical microscopy capability.
The company is currently headquartered in Auburn, Alabama at the Auburn Research Park and has distribution partners worldwide. As of 2016, over 300 research laboratories worldwide utilize CytoViva technology.
Products
CytoViva combines patented enhanced darkfield optical microscopy technology with a proprietary hyperspectral imaging capability. This combination of technologies enables optical observation and spectral characterization of a wide range of nanoscale samples, including nanoparticles, pathogens and subcellular materials.
Products include:
The patented enhanced darkfield illumination system, which replaces the standard microscope condenser, provides up to 10x improved signal-to-noise optical images of nanoscale samples over standard darkfield microscopy. The system incorporates oblique angle, pre-aligned Kohler illumination. The resulting high signal-to-noise image enables direct observation of nanoscale sample elements.
The dual mode fluorescence module is a transmitted light fluorescent technique that enables real time observation of both fluorescent and non-fluorescent sample elements. This is accomplished through the proportionate mixing of fluorescence excitation light and full spectrum light.
The hyperspectral microscope system integrates hyperspectral imaging (HSI) onto the microscope to capture spectral image files. These spectral image files can be used to spectrally characterize sample elements such as nanoparticles, pathogens or subcellular materials. Image analysis software enables mapping sample elements based upon their unique spectral fingerprint. In its most general form, hyperspectral microscopy can be used to determine the location of nanoscale materials within a sample. Analysis methods include identifying and mapping materials in composites, conducting mean spectral analysis, and comparisons of comparable materials.
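The article does not specify the algorithms used by the analysis software. As one illustration of how a pixel spectrum can be matched against a reference "spectral fingerprint", the sketch below uses the spectral angle, a standard hyperspectral similarity measure; the spectra, wavelength bands and the classification threshold are all invented for illustration.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar in shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Invented reference spectrum for a nanoparticle and two pixel spectra (5 bands).
reference = np.array([0.10, 0.35, 0.80, 0.40, 0.15])
pixel_a   = np.array([0.12, 0.33, 0.78, 0.42, 0.14])   # similar shape
pixel_b   = np.array([0.50, 0.45, 0.40, 0.35, 0.30])   # different shape

threshold = 0.1  # radians; an arbitrary cut-off chosen for this example
for name, px in (("pixel_a", pixel_a), ("pixel_b", pixel_b)):
    angle = spectral_angle(px, reference)
    print(name, round(float(angle), 3), "match" if angle < threshold else "no match")
```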
Applications
Identifying and mapping Ag, Au and other nanoparticles, in cells, tissue or other composite matrix
Characterizing drug loads and other functional groups added to nanoparticles
Confirming the presence of carbon nanotubes in tissue and cells
Detecting airborne carbon nanotubes and other airborne nanomaterials
Identifying liposomes used as drug delivery vectors
Mapping quantum dots and fluorescently tagged particles and subcellular structure
Bacteria, virus and other pathogen detection
Plant pathology
Subcellular structure characterization
Live cell imaging
References
External links
Company website
Nanotechnology companies
Companies based in Auburn, Alabama | CytoViva | Materials_science | 580 |
41,725,292 | https://en.wikipedia.org/wiki/Grassland%20degradation | Grassland degradation, also called vegetation or steppe degradation, is a biotic disturbance in which grass struggles to grow or can no longer exist on a piece of land due to causes such as overgrazing, burrowing of small mammals, and climate change. Since the 1970s, it has been noticed to affect plains and plateaus of alpine meadows or grasslands, most notably being in the Philippines and in the Tibetan and Inner Mongolian region of China, where of grassland is degraded each year. Across the globe it is estimated that 23% of the land is degraded. It takes years and sometimes even decades, depending on what is happening to that piece of land, for a grassland to become degraded. The process is slow and gradual, but so is restoring degraded grassland. Initially, only patches of grass appear to die and appear brown; but the degradation process, if not addressed, can spread to many acres of land. As a result, the frequency of landslides and dust storms may increase. The degraded land's less fertile ground cannot yield crops, or animals graze in these fields. With a dramatic decrease in plant diversity in this ecosystem, more carbon and nitrogen may be released into the atmosphere. These results can have serious effects on humans such as displacing herders from their community; a decrease in vegetables, fruit, and meat that are regularly acquired from these fields; and a catalyzing effect on global warming.
Causes
Overgrazing
It is thought that grassland degradation is principally attributed to overgrazing. This occurs when animals consume grass at a faster rate than it can grow back. Lately, overgrazing has become more apparent, partially because of the increase in urbanization, which makes less room for available farmland. With these smaller plots, farmers try to maximize their space and profits by densely packing their land with animals. Another point that comes with the high density of owned animals is that farmers need to be able to provide for them in the winter months, so they must gather much grass since the winter is often harsh and long in alpine meadows. As a result, the grass is given less chance to grow back due to either the rapid consumption of grass or the continual stomping of the feet of these animals. This latter suppression also encourages rats and insects to subsist here, both of which further inhibit grass growth. Overgrazing is a main cause of shrub and bush encroachment in grasslands and savanna ecosystems.
Small mammals
An increase in small animal populations has led to some grassland being degraded. These animals include the Himalayan marmots, the Brandt's and plateau vole, and the plateau pika and zokor. They damage this environment mainly through their burrowing into the ground and gnawing at the grass and other plants. Both of these actions encourage soil erosion and make it more difficult for plants to firmly ground themselves to this poor terrain. Hence, grass has a tougher time growing, and the terrain becomes spottily doused with grass. However, some do not think these animals contribute to grassland degradation. They claim that such burrowing aids in the recycling of nutrients in the soil and that the rise in population is only normal since grazing levels in these areas have also risen.
Climate change
Climate change has had a noticeable role in grassland degradation, for its characteristics are not suitable for the growth of grass. The increase in average temperatures of regions makes them less suitable for grass to grow due to the more rapid evaporation of water that was formerly utilized by the grass. Furthermore, neither periods of much rain nor stretches of drought, both of which become more prevalent with climate change, encourage the growing of grass. It is especially harmful when the times of drought are during the growing season, as is the case near the Yangtze and Yellow Rivers in China. Additionally, since alpine regions, where degradation typically occurs, are commonly of high elevation, they are more easily affected by climate and its changes. Some scientists, however, write off climate change as an insignificant cause of degradation. Climate change, particularly warmer and drier conditions, bring about suitable conditions for invading non-native grass species.
Human interference
Anthropogenic factors also play a role in the disturbance of grasslands. Degradation has been shown to appear when humans move into such areas to build, for example, roads or settlements. Roads reduce the area where grass can grow successfully; the settlements constructed by herdsmen have proven to be the most damaging to grassland since they are accompanied by their animals, which further harm the region. Also when humans convert natural grassland into farmland, they often harshly farm it by repeatedly planting the same crops year after year, and by having to do this, the soil quality is lowered when these crops suck the nutrients out of the ground. When the farmer is finally done with the land, it is in extremely poor condition for grass to grow. Another cause of degradation by man is deforestation. When these trees are demolished and taken away, the soil lacks the strong root system formerly contributed by trees; therefore, the soil is upturned, cannot support plant life as well, and is more susceptible to landslides. The gathering of medicinal plants, particularly in China, also contributed to a certain extent to degraded grasslands. Still, this practice is not done as frequently anymore.
Degrees of severity
There are three main degrees of degraded grassland. In order of decreasing frequency, they are lightly, moderately, and highly degraded grassland. These stages are sequential so no grassland can be highly degraded without first being lightly and moderately degraded and so forth. Lightly degraded grassland is the least potent of the three and is characterized by patches of dead or no grass, spottily dispersed throughout the land. Plant and animal diversity starts to lessen but becomes really apparent in moderately degraded grasslands, in which patches of dead grass increase in size and number. Also during this stage, pests, be they rats, insects, or other grassland animals, start to disturb the environment by damaging the soil through, for instance, extracting from the soil nutrients vital to a plant's well-being or by just damaging to plants themselves. The grasslands that are affected the worst are highly degraded, which can be recognized by the vast expanse of dead grass. This quality makes this land neither arable nor suitable for livestock. Hence, it makes sense that the animal and plant diversity is extremely low. The few plants that do inhabit this area are quite poisonous and ward off any animals or plants potentially trying to move back in.
Some specific names are given to highly degraded grasslands that are particularly damaged. Heitutan is a term that signifies severely degraded grasslands. A more common and more extreme term to describe degraded grassland is "black beach" or "black-soil-land", which is exactly what it sounds like: land with nothing but black, unusable soil that extends 10–15 cm below the ground level. In the winter and autumn seasons, this land is naked of any vegetation whatsoever; but in the summer and spring, it is at least populated by toxic herbage.
Consequences
There are many results stemming from grassland degradation. Two of the more logical outcomes are the decrease in arable land and a drop off in the amount of crops harvested. These two similar outcomes in some way only lead to more degradation in that farmers, who now see their land as useless, just move on to perhaps a smaller plot of land, since that is all their money can afford, after having to surrender their prior property. Hence, smaller plots are easier to be overgrazed and worked to exhaustion. Also, the numbers of livestock tend to decrease with grassland degradation, mainly because there is less grass to be eaten.
Besides anthropogenic productivity of the land, the biodiversity of degraded land also declines, as previously mentioned. With less biodiversity, this ecosystem is less adaptable when disasters strike it. It has a smaller available food supply, in terms of plants, for animals, who then may die out or more likely may relocate. Proof of this decline is that presently 15–20% of Tibetan Plateau species are now considered endangered, and now because of this animal and plant absence, the soil quality of these degraded lands is very poor. It does not hold the necessary nutrients, such as water, nitrogen, and carbon, essential to either supporting life or inviting life back to that land. As a result of such carbon and nitrogen loss in the Tibetan Plateau, $8,033/ha and $13,315/ha were respectively lost in economic terms. Soils are further weakened by dust storms whose frequency increases because of degradation. Erosion of soil becomes a bigger problem, since no longer are there as many plants to anchor in the soil. In the northern Chinese province alone, 400 million are affected every year with an associated 54 billion yuan of annual economic loss due to grassland degradation.
Grassland restoration
Successful grassland restoration has several dimensions, including recognition in policy, standardisation of indicators of degradation, scientific innovation, knowledge transfer and data sharing.
Since degradation has significantly impacted many areas, some attempts at restoration have been made. In general, it takes time for implemented methods to restore degraded grassland fully. Also, there are certain ways that degraded land should be counteracted, depending upon its severity. For an area that is lightly degraded, fencing, fertilizing, or weeding may be sufficient. Fencing an area off allows that plot of land to be reprieved from grazing until it reaches its normal, healthy state, in which no more patches of dead grass exist. Active brush control can serve to restore areas affected by woody plant encroachment.
The earlier the problem is addressed, the easier it is to restore that plot of land. In some cases, grazing can even be continued as long as its intensity is decreased and the situation is monitored. For instance, a method as simple as seasonally rotating the fields in which animals graze has been seen as effective. More structured efforts must be put into place to combat moderately degraded grasslands. These actions include reseeding and rodent control, whose goal is not to extinguish that population but rather to manage it so that it does not further degrade the land. Rodent control can be in the form of either shooting, sterilizing, or poisoning the rodents. The administered poison must have a low toxicity so that it does not cause further damage to other animals or plants; a popular toxin that has worked well is Botulin toxin C.
As for highly degraded plots of land, planting semi-artificial grassland is the umbrella term that is used to address this type of restoration. It includes weed control, fertilizing, reseeding, rodent control, and scarification. Since weeds are so numerous in highly degraded grasslands and since they suck so many nutrients from the soil, it is important to eradicate them as much as possible; and this is done so quite successfully by herbicide solutions. Semi-artificial grassland works best when the highly degraded land has 30% or more plant coverage. For degraded plots that are worse off, and hence typically fall under the category of black soil or severely degraded Heitutan grassland, artificial grassland is required and entails weed and rodent control, plowing, seeding, and fertilizing. These two methods are successful at restoring plant life to a certain extent but are also somewhat expensive. For this reason research must be done to foretell if this method would be successful by, for instance, determining whether such seeds would thrive in that environment. Once an area of land is reduced from, for instance, heavily degraded to moderately degraded, the methods of restoring it must also change.
See also
Soil degradation
Land degradation
Woody plant encroachment
References
Environmental issues with soil
Physical geography
Grasslands
Grasses
Plains
Agricultural land | Grassland degradation | Biology,Environmental_science | 2,364 |
52,777,355 | https://en.wikipedia.org/wiki/Lundbeck%20Seattle%20Biopharmaceuticals | Lundbeck Seattle Biopharmaceuticals is a pharmaceutical development company based in Bothell, Washington. Formerly known as Alder Biopharmaceuticals, it specializes in therapeutic monoclonal antibodies.
In May 2014, Alder went public. In early 2018, the company made a public stock offering, aiming to raise . The company identifies, develops, and manufactures antibody therapeutics to alleviate human suffering in cancer, pain, cardiovascular, and autoimmune and inflammatory disease areas.
As of September 2019, Alder Biopharmaceuticals shares had increased by 83% in price, following the company's acquisition by the Denmark-based H. Lundbeck in a deal valued at $1.95 billion. The company changed its name to Lundbeck Seattle Biopharmaceuticals after the acquisition.
References
American companies established in 2004
Biotechnology companies established in 2004
Pharmaceutical companies established in 2004
2004 establishments in Washington (state)
Biotechnology companies of the United States
Life sciences industry
Pharmaceutical companies of the United States
Health care companies based in Washington (state)
Companies based in Bothell, Washington
Biopharmaceutical companies
Companies listed on the Nasdaq
2014 initial public offerings | Lundbeck Seattle Biopharmaceuticals | Biology | 247 |
40,957 | https://en.wikipedia.org/wiki/Control%20communications | In telecommunications, control communications is the branch of technology devoted to the design, development, and application of communications facilities used specifically for control purposes, such as for controlling (a) industrial processes, (b) movement of resources, (c) electric power generation, distribution, and utilization, (d) communications networks, and (e) transportation systems.
References
Telecommunications engineering
Control engineering
Mass media technology
Telecommunications | Control communications | Technology,Engineering | 81 |
45,206,870 | https://en.wikipedia.org/wiki/Comprehensive%20model%20of%20information%20seeking | The comprehensive model of information seeking, or CMIS, is a theoretical construct designed to predict how people will seek information. It was first developed by J. David Johnson and has been utilized by a variety of disciplines including library and information science and health communication.
The CMIS has been empirically tested in health and organizational contexts. It has inherent strengths for studying how people react to health problems such as cancer. It specifies "antecedents" that explain why people become information seekers, "information carrier characteristic" that shape how they go about looking for information, and "information seeking actions" that reflect the nature of the search itself.
Design
The CMIS has been quantitatively tested and performs well when it comes to health information seeking behaviors. There are three main schemas in the CMIS: antecedents, the information field, and information seeking actions. The antecedents are those factors that determine how an information consumer will receive the information. Those factors are demographics, personal experience, salience, and beliefs. These factors are fluid and can change during the health information seeking process. The second schema is the information field, which consists of characteristics and utilities. This schema is concerned with the channels and carriers of information. A person's understanding is developed through the information field. The third schema involves the transformational processes and is measured by the consumer's understanding of the messages received through the information field. The final schema involves information seeking actions. This is what the consumer does as a result of the first two schemas through information seeking. There are three major dimensions: the scope, depth, and method of information seeking.
Antecedents
The CMIS antecedents—demographics, personal experience, salience, and beliefs—are factors that determine an individual's natural predisposition to search for information from particular information carriers. Certain types of health information seeking can be triggered by an individual's degree of personal experience with disease. In the CMIS framework, two personal relevance factors, salience and beliefs, are seen as the primary determinants in translating a perceived gap into an active search for information. Salience refers to the personal significance of health information to the individual, such as perceptions of risk to one's health, which are likely to result in information seeking action. However, people also may be motivated to gather information to determine the implications of health events for themselves and/or others related to their future activities, a factor directly related to the rapidly growing field of genetics. An individual's beliefs about the nature of a particular disease, its impacts, and level of control, all directly relate to self-efficacy, one of our key variables, and one that plays an important role in information seeking and people's more general pattern of actions related to health.
Information carrier characteristics
The information carrier characteristics are drawn from a model of media exposure and appraisal that has been tested on a variety of information carriers, including both sources and channels, and in a variety of cultural settings. Following the media exposure and appraisal, the CMIS focuses on editorial tone, communication potential, and utility. In the CMIS, characteristics are composed of editorial tone, which reflects an audience member's perception of credibility, while communication potential relates to issues of style and comprehensiveness. Utility relates the characteristics of a medium directly to the needs of an individual, and shares much with the uses and gratifications perspectives.
Information-seeking actions
There are several types of information seeking actions that can result from the impetus provided by the factors identified by the CMIS. For example, search behavior can be characterized by its extent, or the number of activities carried out, which has two components: scope, the number of alternatives investigated; and, depth, the number of dimensions of an alternative investigated. There is also the method of the search, or channel, as another major dimension of the search. For instance, an individual might choose the method of consulting a telephone information service, decide to have a narrow scope by only asking questions about smoking cessation clinics, but investigate every recommendation in detail, thus increasing the depth of the search.
Stages in the CMIS
A key concept from the CMIS is the notion of "stages," or "cancer involvement." According to the CMIS, an individual may be at one of four stages regarding a cancer threat, and thereby have differing information needs and behaviors.
The casual stage is characterized by a general lack of concern or interest. At this stage, individuals are not purposive in their search for cancer-related information; rather, their search is accidental and aimless, even apathetic.
The purposive-placed stage is characterized by the question, "What can I do to prevent cancer?" Individuals here might have some passing interest in cancer or genetic information, but are generally still not affected or directly concerned.
The purposive-clustered stage is experienced by individuals in closer proximity to cancer. This is the point at which a person is motivated to look for practical information that will address the specific problem. For example, a first-degree relative of a recently diagnosed breast cancer patient may seek genetic screening or BRCA 1/2 testing. The person could clearly benefit from such information- seeking behavior since medical authorities acknowledge that early detection of cancer leads to earlier treatments and better treatment outcomes.
The directed stage includes individuals who have been diagnosed as having cancer. Such individuals need knowledge for making informed decisions about treatment and management of the disease.
References
Human communication
Information retrieval
Health education | Comprehensive model of information seeking | Biology | 1,123 |
2,526,950 | https://en.wikipedia.org/wiki/Isotopes%20of%20barium | Naturally occurring barium (56Ba) is a mix of six stable isotopes and one very long-lived radioactive primordial isotope, barium-130, identified as being unstable by geochemical means (from analysis of the presence of its daughter xenon-130 in rocks) in 2001. This nuclide decays by double electron capture (absorbing two electrons and emitting two neutrinos), with a half-life of (0.5–2.7)×10²¹ years (about 10¹¹ times the age of the universe).
There are a total of thirty-three known radioisotopes in addition to 130Ba. The longest-lived of these is 133Ba, which has a half-life of 10.51 years. All other radioisotopes have half-lives shorter than two weeks. The longest-lived isomer is 133mBa, which has a half-life of 38.9 hours. The shorter-lived 137mBa (half-life 2.55 minutes) arises as the decay product of the common fission product caesium-137.
Barium-114 is predicted to undergo cluster decay, emitting a nucleus of stable 12C to produce 102Sn. However, this decay has not yet been observed; the upper limit on the branching ratio of such decay is 0.0034%.
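As a brief numerical aside, the half-lives quoted above translate directly into the fraction of a sample that survives after a given time and into a decay constant. The sketch below uses the 10.51-year half-life of 133Ba from the text; the time points are arbitrary.

```python
import math

def fraction_remaining(t_years, half_life_years):
    """Fraction of an initial sample that has not yet decayed after t years."""
    return 0.5 ** (t_years / half_life_years)

half_life = 10.51                               # years, for 133Ba
for t in (1, 10.51, 30):
    print(t, round(fraction_remaining(t, half_life), 4))

# Decay constant and mean lifetime follow from the same half-life.
decay_constant = math.log(2) / half_life        # per year
mean_lifetime = 1 / decay_constant              # years
print(decay_constant, mean_lifetime)
```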
List of isotopes
|-id=Barium-114
| rowspan=4|114Ba
| rowspan=4 style="text-align:right" | 56
| rowspan=4 style="text-align:right" | 58
| rowspan=4|113.95072(11)
| rowspan=4|460(125) ms
| β+ (79%)
| 114Cs
| rowspan=4|0+
| rowspan=4|
| rowspan=4|
|-
| α (0.9%)
| 110Xe
|-
| β+, p (20%)
| 113Xe
|-
| CD (<.0034%)
| 102Sn, 12C
|-id=Barium-115
| rowspan=2|115Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 59
| rowspan=2|114.94748(22)#
| rowspan=2|0.45(5) s
| β+
| 115Cs
| rowspan=2|5/2+#
| rowspan=2|
| rowspan=2|
|-
| β+, p (>15%)
| 114Xe
|-id=Barium-116
| rowspan=2|116Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 60
| rowspan=2|115.94162(22)#
| rowspan=2|1.3(2) s
| β+ (97%)
| 116Cs
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β+, p (3%)
| 115Xe
|-id=Barium-117
| rowspan=3|117Ba
| rowspan=3 style="text-align:right" | 56
| rowspan=3 style="text-align:right" | 61
| rowspan=3|116.93832(27)
| rowspan=3|1.75(7) s
| β+ (87%)
| 117Cs
| rowspan=3|(3/2+)
| rowspan=3|
| rowspan=3|
|-
| β+, p (13%)
| 116Xe
|-
| β+, α (0.024%)
| 113I
|-id=Barium-118
| 118Ba
| style="text-align:right" | 56
| style="text-align:right" | 62
| 117.93323(22)#
| 5.2(2) s
| β+
| 118Cs
| 0+
|
|
|-id=Barium-119
| rowspan=2|119Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 63
| rowspan=2|118.93066(21)
| rowspan=2|5.4(3) s
| β+ (75%)
| 119Cs
| rowspan=2|(3/2+)
| rowspan=2|
| rowspan=2|
|-
| β+, p (25%)
| 118Xe
|-id=Barium-119m
| style="text-indent:1em" | 119mBa
| colspan="3" style="text-indent:2em" | 66.0 keV
| 360(20) ns
| IT
| 119Ba
| (5/2−)
|
|
|-id=Barium-120
| 120Ba
| style="text-align:right" | 56
| style="text-align:right" | 64
| 119.92604(32)
| 24(2) s
| β+
| 120Cs
| 0+
|
|
|-id=Barium-121
| rowspan=2|121Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 65
| rowspan=2|120.92405(15)
| rowspan=2|29.7(15) s
| β+ (99.98%)
| 121Cs
| rowspan=2|5/2+
| rowspan=2|
| rowspan=2|
|-
| β+, p (0.02%)
| 120Xe
|-id=Barium-122
| 122Ba
| style="text-align:right" | 56
| style="text-align:right" | 66
| 121.91990(3)
| 1.95(15) min
| β+
| 122Cs
| 0+
|
|
|-id=Barium-123
| 123Ba
| style="text-align:right" | 56
| style="text-align:right" | 67
| 122.918781(13)
| 2.7(4) min
| β+
| 123Cs
| 5/2+
|
|
|-id=Barium-123m
| style="text-indent:1em" | 123mBa
| colspan="3" style="text-indent:2em" | 120.95(8) keV
| 830(60) ns
| IT
| 123Ba
| 1/2+#
|
|
|-id=Barium-124
| 124Ba
| style="text-align:right" | 56
| style="text-align:right" | 68
| 123.915094(13)
| 11.0(5) min
| β+
| 124Cs
| 0+
|
|
|-id=Barium-125
| 125Ba
| style="text-align:right" | 56
| style="text-align:right" | 69
| 124.914472(12)
| 3.3(3) min
| β+
| 125Cs
| 1/2+
|
|
|-id=Barium-125m
| style="text-indent:1em" | 125mBa
| colspan="3" style="text-indent:2em" | 120(20)# keV
| 2.76(14) μs
| IT
| 125Ba
| (7/2−)
|
|
|-id=Barium-126
| 126Ba
| style="text-align:right" | 56
| style="text-align:right" | 70
| 125.911250(13)
| 100(2) min
| β+
| 126Cs
| 0+
|
|
|-id=Barium-127
| 127Ba
| style="text-align:right" | 56
| style="text-align:right" | 71
| 126.911091(12)
| 12.7(4) min
| β+
| 127Cs
| 1/2+
|
|
|-id=Barium-127m
| style="text-indent:1em" | 127mBa
| colspan="3" style="text-indent:2em" | 80.32(11) keV
| 1.93(7) s
| IT
| 127Ba
| 7/2−
|
|
|-id=Barium-128
| 128Ba
| style="text-align:right" | 56
| style="text-align:right" | 72
| 127.9083524(17)
| 2.43(5) d
| EC
| 128Cs
| 0+
|
|
|-id=Barium-129
| 129Ba
| style="text-align:right" | 56
| style="text-align:right" | 73
| 128.908683(11)
| 2.23(11) h
| β+
| 129Cs
| 1/2+
|
|
|-id=Barium-129m
| rowspan=2 style="text-indent:1em" | 129mBa
| rowspan=2 colspan="3" style="text-indent:2em" | 8.42(6) keV
| rowspan=2|2.135(10) h
| β+
| 129Cs
| rowspan=2|7/2+
| rowspan=2|
| rowspan=2|
|-
| IT
| 129Ba
|-id=Barium-130
| 130Ba
| style="text-align:right" | 56
| style="text-align:right" | 74
| 129.9063260(3)
| ≈ 1×10²¹ y
| 2EC?
| 130Xe
| 0+
| 0.0011(1)
|
|-id=Barium-130m
| style="text-indent:1em" | 130mBa
| colspan="3" style="text-indent:2em" | 2475.12(18) keV
| 9.54(14) ms
| IT
| 130Ba
| 8−
|
|
|-id=Barium-131
| 131Ba
| style="text-align:right" | 56
| style="text-align:right" | 75
| 130.9069463(4)
| 11.52(1) d
| β+
| 131Cs
| 1/2+
|
|
|-id=Barium-131m
| style="text-indent:1em" | 131mBa
| colspan="3" style="text-indent:2em" | 187.995(9) keV
| 14.26(9) min
| IT
| 131Ba
| 9/2−
|
|
|-id=Barium-132
| 132Ba
| style="text-align:right" | 56
| style="text-align:right" | 76
| 131.9050612(11)
| colspan=3 align=center|Observationally Stable
| 0+
| 0.0010(1)
|
|-id=Barium-133
| 133Ba
| style="text-align:right" | 56
| style="text-align:right" | 77
|132.9060074(11)
| 10.5379(16) y
| EC
| 133Cs
| 1/2+
|
|
|-id=Barium-133m
| rowspan=2 style="text-indent:1em" | 133mBa
| rowspan=2 colspan="3" style="text-indent:2em" | 288.252(9) keV
| rowspan=2|38.90(6) h
| IT (99.99%)
| 133Ba
| rowspan=2|11/2−
| rowspan=2|
| rowspan=2|
|-
| EC (0.0104%)
| 133Cs
|-id=Barium-134
| 134Ba
| style="text-align:right" | 56
| style="text-align:right" | 78
| 133.90450825(27)
| colspan=3 align=center|Stable
| 0+
| 0.0242(15)
|
|-id=Barium-134m
| style="text-indent:1em" | 134mBa
| colspan="3" style="text-indent:2em" | 2957.2(5) keV
| 2.61(13) μs
| IT
| 134Ba
| 10+
|
|
|-id=Barium-135
| 135Ba
| style="text-align:right" | 56
| style="text-align:right" | 79
| 134.90568845(26)
| colspan=3 align=center|Stable
| 3/2+
| 0.0659(10)
|
|-id=Barium-135m1
| style="text-indent:1em" | 135m1Ba
| colspan="3" style="text-indent:2em" | 268.218(20) keV
| 28.11(2) h
| IT
| 135Ba
| 11/2−
|
|
|-id=Barium-135m2
| style="text-indent:1em" | 135m2Ba
| colspan="3" style="text-indent:2em" | 2388.0(5) keV
| 1.06(4) ms
| IT
| 135Ba
| (23/2+)
|
|
|-id=Barium-136
| 136Ba
| style="text-align:right" | 56
| style="text-align:right" | 80
|135.90457580(26)
| colspan=3 align=center|Stable
| 0+
| 0.0785(24)
|
|-id=Barium-136m1
| style="text-indent:1em" | 136m1Ba
| colspan="3" style="text-indent:2em" | 2030.535(18) keV
| 308.4(19) ms
| IT
| 136Ba
| 7−
|
|
|-id=Barium-136m2
| style="text-indent:1em" | 136m2Ba
| colspan="3" style="text-indent:2em" | 3357.19(25) keV
| 91(2) ns
| IT
| 136Ba
| 10+
|
|
|-id=Barium-137
| 137Ba
| style="text-align:right" | 56
| style="text-align:right" | 81
|136.90582721(27)
| colspan=3 align=center|Stable
| 3/2+
| 0.1123(23)
|
|-id=Barium-137m1
| style="text-indent:1em" | 137m1Ba
| colspan="3" style="text-indent:2em" | 661.659(3) keV
| 2.552(1) min
| IT
| 137Ba
| 11/2−
|
|
|-id=Barium-137m2
| style="text-indent:1em" | 137m2Ba
| colspan="3" style="text-indent:2em" | 2349.1(5) keV
| 589(20) ns
| IT
| 137Ba
| (19/2−)
|
|
|-id=Barium-138
| 138Ba
| style="text-align:right" | 56
| style="text-align:right" | 82
| 137.90524706(27)
| colspan=3 align=center|Stable
| 0+
| 0.7170(29)
|
|-id=Barium-138m
| style="text-indent:1em" | 138mBa
| colspan="3" style="text-indent:2em" | 2090.536(21) keV
| 850(100) ns
| IT
| 138Ba
| 6+
|
|
|-id=Barium-139
| 139Ba
| style="text-align:right" | 56
| style="text-align:right" | 83
| 138.90884116(27)
| 82.93(9) min
| β−
| 139La
| 7/2−
|
|
|-id=Barium-140
| 140Ba
| style="text-align:right" | 56
| style="text-align:right" | 84
| 139.910608(8)
| 12.7534(21) d
| β−
| 140La
| 0+
|
|
|-id=Barium-141
| 141Ba
| style="text-align:right" | 56
| style="text-align:right" | 85
| 140.914404(6)
| 18.27(7) min
| β−
| 141La
| 3/2−
|
|
|-id=Barium-142
| 142Ba
| style="text-align:right" | 56
| style="text-align:right" | 86
| 141.916433(6)
| 10.6(2) min
| β−
| 142La
| 0+
|
|
|-id=Barium-143
| 143Ba
| style="text-align:right" | 56
| style="text-align:right" | 87
| 142.920625(7)
| 14.5(3) s
| β−
| 143La
| 5/2−
|
|
|-id=Barium-144
| 144Ba
| style="text-align:right" | 56
| style="text-align:right" | 88
|143.922955(8)
| 11.73(8) s
| β−
| 144La
| 0+
|
|
|-id=Barium-145
| 145Ba
| style="text-align:right" | 56
| style="text-align:right" | 89
| 144.927518(9)
| 4.31(16) s
| β−
| 145La
| 5/2−
|
|
|-id=Barium-146
| 146Ba
| style="text-align:right" | 56
| style="text-align:right" | 90
| 145.9303632(19)
| 2.15(4) s
| β−
| 146La
| 0+
|
|
|-id=Barium-147
| rowspan=2|147Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 91
| rowspan=2|146.935304(21)
| rowspan=2|893(1) ms
| β− (99.93%)
| 147La
| rowspan=2|5/2−
| rowspan=2|
| rowspan=2|
|-
| β−, n (0.07%)
| 146La
|-id=Barium-148
| rowspan=2|148Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 92
| rowspan=2| 147.9382230(16)
| rowspan=2|620(5) ms
| β− (99.6%)
| 148La
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n (0.4%)
| 147La
|-id=Barium-149
| rowspan=2|149Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 93
| rowspan=2|148.9432840(27)
| rowspan=2|349(4) ms
| β− (96.1%)
| 149La
| rowspan=2|3/2−#
| rowspan=2|
| rowspan=2|
|-
| β−, n (3.9%)
| 148La
|-id=Barium-150
| rowspan=2|150Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 94
| rowspan=2| 149.946441(6)
| rowspan=2|258(5) ms
| β− (99.0%)
| 150La
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n (1.0%)
| 149La
|-id=Barium-151
| rowspan=2|151Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 95
| rowspan=2|150.95176(43)#
| rowspan=2|167(5) ms
| β−
| 151La
| rowspan=2|3/2−#
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 150La
|-id=Barium-152
| rowspan=2|152Ba
| rowspan=2 style="text-align:right" | 56
| rowspan=2 style="text-align:right" | 96
| rowspan=2|151.95533(43)#
| rowspan=2|139(8) ms
| β−
| 152La
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| β−, n?
| 151La
|-id=Barium-153
| rowspan=3|153Ba
| rowspan=3 style="text-align:right" | 56
| rowspan=3 style="text-align:right" | 97
| rowspan=3|152.96085(43)#
| rowspan=3|113(39) ms
| β−
| 153La
| rowspan=3|5/2−#
| rowspan=3|
| rowspan=3|
|-
| β−, n?
| 152La
|-
| β−, 2n?
| 151La
|-id=Barium-154
| 154Ba
| style="text-align:right" | 56
| style="text-align:right" | 98
|153.96466(54)#
| 53(48) ms
| β−
| 154La
| 0+
|
|
References
Barium
Barium | Isotopes of barium | Chemistry | 5,051 |
29,692,404 | https://en.wikipedia.org/wiki/Type%20II%20Cepheid | Type II Cepheids are variable stars which pulsate with periods typically between 1 and 50 days. They are population II stars: old, typically metal-poor, low mass objects.
Like all Cepheid variables, Type IIs exhibit a relationship between the star's luminosity and pulsation period, making them useful as standard candles for establishing distances where little other data is available.
Longer period Type II Cepheids, which are more luminous, have been detected beyond the Local Group in the galaxies NGC 5128 and NGC 4258.
Classification
Historically Type II Cepheids were called W Virginis variables, but are now divided into three subclasses based on the length of their period. Stars with periods between 1 and 4 days are of the BL Herculis subclass, and those with periods of 10–20 days belong to the W Virginis subclass. Stars with periods greater than 20 days, and usually alternating deep and shallow minima, belong to the RV Tauri subclass. RV Tauri variables are usually classified by a formal period from deep minimum to deep minimum, hence 40 days or more.
The divisions between the types are not always clearcut or agreed. For example, the dividing line between BL Her and W Vir types is quoted at anything between 4 and 10 days, with no obvious division between the two. RV Tau variables may not have obvious alternating minima, while some W Vir stars do. Nevertheless, each type is thought to represent a distinct evolutionary stage, with BL Her stars being helium core burning objects moving from the horizontal branch towards the asymptotic giant branch (AGB), W Vir stars undergoing hydrogen or helium shell burning on a blue loop, and RV Tau stars being post-AGB objects at or near the end of nuclear fusion.
RV Tau stars in particular show irregularities in their light curves, with slow variations in the brightness of both maxima and minima, variations in the period, intervals with little variation, and sometimes a temporary breakdown into chaotic behaviour. R Scuti has one of the most irregular light curves.
Properties
The physical properties of all the type II Cepheid variables are very poorly known. For example, it is expected that they have masses near or below that of the Sun, but there are few examples of reliably known masses.
Period-luminosity relationship
Type II Cepheids are fainter than their classical Cepheid counterparts for a given period by about 1.6 magnitudes. Cepheid variables are used to establish the distance to the Galactic Center, globular clusters, and galaxies.
Examples
Type II Cepheids are not as well known as their type I counterparts, with only a couple of naked eye examples. In this list, the period quoted for RV Tauri variables is the interval between successive deep minima, hence twice the comparable period for the other sub-types.
References
External links
OGLE Atlas of Variable Star Light Curves - Type II Cepheids
Type II Cepheid
Astrometry
Standard candles
Population II stars | Type II Cepheid | Physics,Astronomy | 625 |
3,005,170 | https://en.wikipedia.org/wiki/Bell%27s%20law%20of%20computer%20classes | Bell's law of computer classes formulated by Gordon Bell in 1972 describes how types of computing systems (referred to as computer classes) form, evolve and may eventually die out. New classes of computers create new applications resulting in new markets and new industries.
Description
Bell considers the law to be partially a corollary to Moore's law which states "the number of transistors per chip doubles every 18 months". Unlike Moore's law, a new computer class is usually based on lower cost components that have fewer transistors or fewer bits on a magnetic surface, etc. A new class forms about every decade. It also takes up to a decade to understand how the class formed, evolved, and is likely to continue. Once formed, a lower priced class may evolve in performance to take over and disrupt an existing class. This evolution has caused clusters of scalable personal computers with 1 to thousands of computers to span a price and performance range of use from a PC, through mainframes, to become the largest supercomputers of the day. Scalable clusters became a universal class beginning in the mid-1990s; by 2010, clusters of at least one million independent computers would constitute the world's largest cluster.
Definition: Roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.
Established market class computers aka platforms are introduced and continue to evolve at roughly a constant price (subject to learning curve cost reduction) with increasing functionality (or performance) based on Moore's law that gives more transistors per chip, more bits per unit area, or increased functionality per system. Roughly every decade, technology advances in semiconductors, storage, networks, and interfaces enable the emergence of a new, lower-cost computer class (aka "platform") to serve a new need that is enabled by smaller devices (e.g. more transistors per chip, less expensive storage, displays, i/o, network, and unique interface to people or some other information processing sink or source). Each new lower-priced class is then established and maintained as a quasi-independent industry and market. Such a class is likely to evolve to substitute for an existing class or classes as described above with computer clusters.
Computer classes that conform to the law
mainframes (1960s)
minicomputers (1970s)
personal computers and workstations evolving into a network enabled by Local Area Networking or Ethernet (1980s)
web browser client-server structures enabled by the Internet (1990s)
cloud computing, e.g., Amazon Web Services (2006) or Microsoft Azure (2012)
hand held devices from media players and cell phones to tablets, e.g., Creative, iPods, BlackBerrys, iPhones, smartphones, Kindles, iPads (c. 2000–2010)
wireless sensor networks (WSNs) that enable sensor and actuator interconnection, enabling the evolving Internet of things (c. >2005)
Beginning in the 1990s, a single class of scalable computers or mega-servers (built from clusters of a few to tens of thousands of commodity microcomputer-storage-networked bricks) began to cover and replace mainframes, minis, and workstations to become the largest computers of the day, and when applied to scientific calculation they are commonly called supercomputers.
History
Bell's law of computer classes and class formation was first mentioned in 1970 with the introduction of the Digital Equipment PDP-11 mini to differentiate it from mainframes and the potentially emerging micros. The law was described in 1972 by Gordon Bell. The emergence and observation of a new, lower-priced microcomputer class based on the microprocessor stimulated the creation of the law, which Bell described in his articles and books.
Other computer industry laws
See also the several laws (e.g. Moore's law, Metcalfe's law) that describe the computer industry.
References
Adages
Bell's Law of Computer Classes
Computer architecture statements | Bell's law of computer classes | Technology | 838 |
76,053,079 | https://en.wikipedia.org/wiki/ALX-1393 | ALX-1393 is a glycine reuptake inhibitor.
Pharmacodynamics
ALX-1393 works by inhibiting the action of GLYT2. This causes elevated levels of glycine, an inhibitory neurotransmitter.
Potential uses
ALX-1393 has been shown to have potential as an analgesic. This is thought to be due to the elevated glycine levels reducing the transmission of pain signals.
Tests have shown that it can potently reduce cancer pain.
References
Amino acids
Aromatic ethers
3-Fluorophenyl compounds | ALX-1393 | Chemistry | 130 |
1,800,596 | https://en.wikipedia.org/wiki/Information%20model | An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but may also include relations with individual things. It can provide sharable, stable, and organized structure of information requirements or knowledge for the domain context.
Overview
The term information model in general is used for models of individual things, such as facilities, buildings, process plants, etc. In those cases, the concept is specialised to facility information model, building information model, plant information model, etc. Such an information model is an integration of a model of the facility with the data and documents about the facility.
Within the field of software engineering and data modeling, an information model is usually an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. The entity types in the model may be kinds of real-world objects, such as devices in a network, or occurrences, or they may themselves be abstract, such as for the entities used in a billing system. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
An information model provides formalism to the description of a problem domain without constraining how that description is mapped to an actual implementation in software. There may be many mappings of the information model. Such mappings are called data models, irrespective of whether they are object models (e.g. using UML), entity relationship models or XML schemas.
Information modeling languages
In 1976, an entity-relationship (ER) graphic notation was introduced by Peter Chen. He stressed that it was a "semantic" modelling technique and independent of any database modelling techniques such as Hierarchical, CODASYL, Relational etc. Since then, languages for information models have continued to evolve. Some examples are the Integrated Definition Language 1 Extended (IDEF1X), the EXPRESS language and the Unified Modeling Language (UML).
Research by contemporaries of Peter Chen such as J.R.Abrial (1974) and G.M Nijssen (1976) led to today's Fact Oriented Modeling (FOM) languages which are based on linguistic propositions rather than on "entities". FOM tools can be used to generate an ER model which means that the modeler can avoid the time-consuming and error prone practice of manual normalization. Object-Role Modeling language (ORM) and Fully Communication Oriented Information Modeling (FCO-IM) are both research results developed in the early 1990s, based upon earlier research.
In the 1980s there were several approaches to extend Chen’s Entity Relationship Model. Also important in this decade was REMORA by Colette Rolland.
The ICAM Definition (IDEF) Language was developed from the U.S. Air Force ICAM Program during the 1976 to 1982 timeframe. The objective of the ICAM Program, according to Lee (1999), was to increase manufacturing productivity through the systematic application of computer technology. IDEF includes three different modeling methods: IDEF0, IDEF1, and IDEF2 for producing a functional model, an information model, and a dynamic model respectively. IDEF1X is an extended version of IDEF1. The language is in the public domain. It is a graphical representation and is designed using the ER approach and the relational theory. It is used to represent the “real world” in terms of entities, attributes, and relationships between entities. Normalization is enforced by KEY Structures and KEY Migration. The language identifies property groupings (Aggregation) to form complete entity definitions.
EXPRESS was created as ISO 10303-11 for formally specifying information requirements of product data model. It is part of a suite of standards informally known as the STandard for the Exchange of Product model data (STEP). It was first introduced in the early 1990s. The language, according to Lee (1999), is a textual representation. In addition, a graphical subset of EXPRESS called EXPRESS-G is available. EXPRESS is based on programming languages and the O-O paradigm. A number of languages have contributed to EXPRESS. In particular, Ada, Algol, C, C++, Euler, Modula-2, Pascal, PL/1, and SQL. EXPRESS consists of language elements that allow an unambiguous object definition and specification of constraints on the objects defined. It uses SCHEMA declaration to provide partitioning and it supports specification of data properties, constraints, and operations.
UML is a modeling language for specifying, visualizing, constructing, and documenting the artifacts, rather than processes, of software systems. It was conceived originally by Grady Booch, James Rumbaugh, and Ivar Jacobson. UML was approved by the Object Management Group (OMG) as a standard in 1997. The language, according to Lee (1999), is non-proprietary and is available to the public. It is a graphical representation. The language is based on the object-oriented paradigm. UML contains notations and rules and is designed to represent data requirements in terms of O-O diagrams. UML organizes a model in a number of views that present different aspects of a system. The contents of a view are described in diagrams that are graphs with model elements. A diagram contains model elements that represent common O-O concepts such as classes, objects, messages, and relationships among these concepts.
IDEF1X, EXPRESS, and UML all can be used to create a conceptual model and, according to Lee (1999), each has its own characteristics. Although some may lead to a natural usage (e.g., implementation), one is not necessarily better than another. In practice, it may require more than one language to develop all information models when an application is complex. In fact, the modeling practice is often more important than the language chosen.
Information models can also be expressed in formalized natural languages, such as Gellish. Gellish, which has natural language variants Gellish Formal English, Gellish Formal Dutch (Gellish Formeel Nederlands), etc., is an information representation language or modeling language that is defined in the Gellish smart Dictionary-Taxonomy, which has the form of a Taxonomy/Ontology. A Gellish Database is not only suitable to store information models, but also knowledge models, requirements models and dictionaries, taxonomies and ontologies. Information models in Gellish English use Gellish Formal English expressions. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
- the Eiffel tower <is located in> Paris
- Paris <is classified as a> city
whereas information requirements and knowledge can be expressed for example as follows:
- tower <shall be located in a> geographical area
- city <is a kind of> geographical area
Such Gellish expressions use names of concepts (such as 'city') and relation types (such as <is located in> and <is classified as a>) that should be selected from the Gellish Formal English Dictionary-Taxonomy (or from your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains definitions of more than 40000 concepts, including more than 600 standard relation types. Thus, an information model in Gellish consists of a collection of Gellish expressions that use those phrases and dictionary concepts to express facts or make statements, queries and answers.
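As an informal illustration (not drawn from the Gellish standard itself, whose expression tables carry more columns than shown here), the Python sketch below stores the example expressions above as simple subject-relation-object triples and answers a trivial classification query; the function name and data layout are purely illustrative.

```python
# A minimal sketch: Gellish-style expressions held as
# (left object, relation type, right object) triples.
facts = [
    ("the Eiffel tower", "is located in", "Paris"),
    ("Paris", "is classified as a", "city"),
    ("city", "is a kind of", "geographical area"),
]

def classify(name, facts):
    """Return the direct classifiers of an individual thing."""
    return [right for left, rel, right in facts
            if left == name and rel == "is classified as a"]

print(classify("Paris", facts))  # ['city']
```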
Standard sets of information models
The Distributed Management Task Force (DMTF) provides a standard set of information models for various enterprise domains under the general title of the Common Information Model (CIM). Specific information models are derived from CIM for particular management domains.
The TeleManagement Forum (TMF) has defined another such set of models for the Telecommunication domain: the Shared Information/Data model (SID). This includes views from the business, service and resource domains within the Telecommunication industry. The TMF has established a set of principles that an OSS integration should adopt, along with a set of models that provide standardized approaches.
The models interact with the information model (the Shared Information/Data Model, or SID) via a process model (the Business Process Framework, or eTOM) and a life cycle model.
See also
Building information modeling
Concept map
Conceptual model (computer science)
System information modelling
Notes
References
ISO/IEC TR9007 Conceptual Schema, 1986
Andries van Renssen, Gellish, A Generic Extensible Ontological Language, (PhD, Delft University of Technology, 2005)
Further reading
Richard Veryard (1992). Information modelling : practical guidance. New York : Prentice Hall.
External links
– Terminology for Policy-Based Management
Data modeling
Information technology management | Information model | Technology,Engineering | 1,805 |
20,056 | https://en.wikipedia.org/wiki/MPEG-1 | MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical.
Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. Perhaps the best-known part of the MPEG-1 standard is the first version of the MP3 audio format it introduced.
The MPEG-1 standard is published as ISO/IEC 11172, titled Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s.
The standard consists of the following five Parts:
Systems (defining a format for storage and synchronization of video, audio, and other data together in a single file—later dubbed the MPEG program stream to distinguish it from the MPEG transport stream format introduced as an alternative in MPEG-2).
Video (compressed video content)
Audio (compressed audio content), including MP3 and MP2
Conformance testing (testing the correctness of implementations of the standard)
Reference software (example software showing how to encode and decode according to the standard)
History
The predecessor of MPEG-1 for video coding was the H.261 standard produced by the CCITT (now known as the ITU-T). The basic architecture established in H.261 was the motion-compensated DCT hybrid video coding structure. It uses macroblocks of size 16×16 with block-based motion estimation in the encoder and motion compensation using encoder-selected motion vectors in the decoder, with residual difference coding using a discrete cosine transform (DCT) of size 8×8, scalar quantization, and variable-length codes (like Huffman codes) for entropy coding. H.261 was the first practical video coding standard, and all of its described design elements were also used in MPEG-1.
Modeled on the successful collaborative approach and the compression technologies developed by the Joint Photographic Experts Group and CCITT's Experts Group on Telephony (creators of the JPEG image compression standard and the H.261 standard for video conferencing respectively), the Moving Picture Experts Group (MPEG) working group was established in January 1988, by the initiative of Hiroshi Yasuda (Nippon Telegraph and Telephone) and Leonardo Chiariglione (CSELT). MPEG was formed to address the need for standard video and audio formats, and to build on H.261 to get better quality through the use of somewhat more complex encoding methods (e.g., supporting higher precision for motion vectors).
Development of the MPEG-1 standard began in May 1988. Fourteen video and fourteen audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective (human perceived) quality, at data rates of 1.5 Mbit/s. This specific bitrate was chosen for transmission over T-1/E-1 lines and as the approximate data rate of audio CDs. The codecs that excelled in this testing were utilized as the basis for the standard and refined further, with additional features and other improvements being incorporated in the process.
After 20 meetings of the full group in various cities around the world, and 4½ years of development and testing, the final standard (for parts 1–3) was approved in early November 1992 and published a few months later. The reported completion date of the MPEG-1 standard varies greatly: a largely complete draft standard was produced in September 1990, and from that point on, only minor changes were introduced. The draft standard was publicly available for purchase. The standard was finished with the 6 November 1992 meeting. The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992. In July 1990, before the first draft of the MPEG-1 standard had even been written, work began on a second standard, MPEG-2, intended to extend MPEG-1 technology to provide full broadcast-quality video (as per CCIR 601) at high bitrates (3–15 Mbit/s) and support for interlaced video. Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos.
Notably, the MPEG-1 standard very strictly defines the bitstream, and decoder function, but does not define how MPEG-1 encoding is to be performed, although a reference implementation is provided in ISO/IEC-11172-5. This means that MPEG-1 coding efficiency can drastically vary depending on the encoder used, and generally means that newer encoders perform significantly better than their predecessors. The first three parts (Systems, Video and Audio) of ISO/IEC 11172 were published in August 1993.
Patents
Due to its age, MPEG-1 is no longer covered by any essential patents and can thus be used without obtaining a licence or paying any fees. The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003. The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172 by December 6, 1991. Neither the July 2008 Kuro5hin article "Patent Status of MPEG-1, H.261 and MPEG-2", nor an August 2008 thread on the gstreamer-devel mailing list were able to list a single unexpired MPEG-1 Video and MPEG-1 Audio Layer I/II patent. A May 2009 discussion on the whatwg mailing list mentioned US 5,214,678 patent as possibly covering MPEG-1 Audio Layer II. Filed in 1990 and published in 1993, this patent is now expired.
A full MPEG-1 decoder and encoder, with "Layer III audio", could not be implemented royalty free since there were companies that required patent fees for implementations of MPEG-1 Audio Layer III, as discussed in the MP3 article. All patents in the world connected to MP3 expired 30 December 2017, which makes this format totally free for use. On 23 April 2017, Fraunhofer IIS stopped charging for Technicolor's MP3 licensing program for certain MP3 related patents and software.
Former patent holders
The following corporations filed declarations with ISO saying they held patents for the MPEG-1 Video (ISO/IEC-11172-2) format, although all such patents have since expired.
BBC
Daimler Benz AG
Fujitsu
IBM
Matsushita Electric Industrial Co., Ltd.
Mitsubishi Electric
NEC
NHK
Philips
Pioneer Corporation
Qualcomm
Ricoh
Sony
Texas Instruments
Thomson Multimedia
Toppan Printing
Toshiba
Victor Company of Japan
Applications
Most popular software for video playback includes MPEG-1 decoding, in addition to any other supported formats.
The popularity of MP3 audio has established a massive installed base of hardware that can play back MPEG-1 Audio (all three layers).
"Virtually all digital audio devices" can play back MPEG-1 Audio. Many millions have been sold to-date.
Before MPEG-2 became widespread, many digital satellite/cable TV services used MPEG-1 exclusively.
The widespread popularity of MPEG-2 with broadcasters means MPEG-1 is playable by most digital cable and satellite set-top boxes, and digital disc and tape players, due to backwards compatibility.
MPEG-1 was used for full-screen video on Green Book CD-i, and on Video CD (VCD).
The Super Video CD standard, based on VCD, uses MPEG-1 audio exclusively, as well as MPEG-2 video.
The DVD-Video format uses MPEG-2 video primarily, but MPEG-1 support is explicitly defined in the standard.
The DVD-Video standard originally required MPEG-1 Audio Layer II for PAL countries, but was changed to allow AC-3/Dolby Digital-only discs. MPEG-1 Audio Layer II is still allowed on DVDs, although newer extensions to the format, like MPEG Multichannel, are rarely supported.
Most DVD players also support Video CD and MP3 CD playback, which use MPEG-1.
The international Digital Video Broadcasting (DVB) standard primarily uses MPEG-1 Audio Layer II, and MPEG-2 video.
The international Digital Audio Broadcasting (DAB) standard uses MPEG-1 Audio Layer II exclusively, due to its especially high quality, modest decoder performance requirements, and tolerance of errors.
The Digital Compact Cassette uses PASC (Precision Adaptive Sub-band Coding) to encode its audio. PASC is an early version of MPEG-1 Audio Layer I with a fixed bit rate of 384 kilobits per second.
Part 1: Systems
Part 1 of the MPEG-1 standard covers systems, and is defined in ISO/IEC-11172-1.
MPEG-1 Systems specifies the logical layout and methods used to store the encoded audio, video, and other data into a standard bitstream, and to maintain synchronization between the different contents. This file format is specifically designed for storage on media, and transmission over communication channels, that are considered relatively reliable. Only limited error protection is defined by the standard, and small errors in the bitstream may cause noticeable defects.
This structure was later named an MPEG program stream: "The MPEG-1 Systems design is essentially identical to the MPEG-2 Program Stream structure." This terminology is more popular, precise (differentiates it from an MPEG transport stream) and will be used here.
Elementary streams, packets, and clock references
Elementary Streams (ES) are the raw bitstreams of MPEG-1 audio and video encoded data (output from an encoder). These files can be distributed on their own, such as is the case with MP3 files.
Packetized Elementary Streams (PES) are elementary streams packetized into packets of variable lengths; i.e., the ES is divided into independent chunks, and a cyclic redundancy check (CRC) checksum is added to each packet for error detection.
System Clock Reference (SCR) is a 33-bit timing value stored in the header of each PES, at a frequency/precision of 90 kHz, with an extra 9-bit extension that stores additional timing data with a precision of 27 MHz. These are inserted by the encoder, derived from the system time clock (STC). Simultaneously encoded audio and video streams will not have identical SCR values, however, due to buffering, encoding, jitter, and other delays.
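As a rough illustration of the timing arithmetic described above (a sketch, not code from the standard), the following converts an SCR base/extension pair into seconds. The function name is hypothetical, the parsing of the actual pack-header bit fields is omitted, and it assumes the 9-bit extension counts 27 MHz cycles (0-299) within one 90 kHz tick.

```python
def scr_to_seconds(scr_base: int, scr_ext: int) -> float:
    """Convert a System Clock Reference to seconds.

    scr_base: 33-bit counter ticking at 90 kHz
    scr_ext:  9-bit extension counting 27 MHz cycles (0-299) within one
              90 kHz tick, so the combined clock effectively runs at 27 MHz.
    """
    ticks_27mhz = scr_base * 300 + scr_ext
    return ticks_27mhz / 27_000_000

# Example: one second into the stream.
print(scr_to_seconds(90_000, 0))  # 1.0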
Program streams
Program Streams (PS) are concerned with combining multiple packetized elementary streams (usually just one audio and video PES) into a single stream, ensuring simultaneous delivery, and maintaining synchronization. The PS structure is known as a multiplex, or a container format.
Presentation time stamps (PTS) exist in PS to correct the inevitable disparity between audio and video SCR values (time-base correction). 90 kHz PTS values in the PS header tell the decoder which video SCR values match which audio SCR values. PTS determines when to display a portion of an MPEG program, and is also used by the decoder to determine when data can be discarded from the buffer. Either video or audio will be delayed by the decoder until the corresponding segment of the other arrives and can be decoded.
PTS handling can be problematic. Decoders must accept multiple program streams that have been concatenated (joined sequentially). This causes PTS values in the middle of the video to reset to zero, which then begin incrementing again. Such PTS wraparound disparities can cause timing issues that must be specially handled by the decoder.
Decoding Time Stamps (DTS), additionally, are required because of B-frames. With B-frames in the video stream, adjacent frames have to be encoded and decoded out-of-order (re-ordered frames). DTS is quite similar to PTS, but instead of just handling sequential frames, it contains the proper time-stamps to tell the decoder when to decode and display the next B-frame (types of frames explained below), ahead of its anchor (P- or I-) frame. Without B-frames in the video, PTS and DTS values are identical.
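The reordering that makes DTS necessary can be illustrated with a small sketch (hypothetical frame labels, not part of the standard): frames are listed in display order, and each B-frame has to follow the anchor frame that comes after it before a decoder can process the sequence.

```python
# Display order of a short GOP (the digits are display indices).
display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

def to_decode_order(frames):
    """Reorder frames so every B-frame follows both of its anchors.

    Each anchor (I- or P-) frame is emitted before the B-frames that
    precede it in display order, which is the order a decoder needs.
    """
    decode, pending_b = [], []
    for f in frames:
        if f.startswith("B"):
            pending_b.append(f)       # wait for the next anchor
        else:
            decode.append(f)          # the anchor goes out first...
            decode.extend(pending_b)  # ...then the B-frames it anchors
            pending_b = []
    return decode + pending_b

print(to_decode_order(display_order))
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```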
Multiplexing
To generate the PS, the multiplexer will interleave the (two or more) packetized elementary streams. This is done so the packets of the simultaneous streams can be transferred over the same channel and are guaranteed to both arrive at the decoder at precisely the same time. This is a case of time-division multiplexing.
Determining how much data from each stream should be in each interleaved segment (the size of the interleave) is complicated, yet an important requirement. Improper interleaving will result in buffer underflows or overflows, as the receiver gets more of one stream than it can store (e.g. audio), before it gets enough data to decode the other simultaneous stream (e.g. video). The MPEG Video Buffering Verifier (VBV) assists in determining if a multiplexed PS can be decoded by a device with a specified data throughput rate and buffer size. This offers feedback to the multiplexer and the encoder, so that they can change the multiplex size or adjust bitrates as needed for compliance.
Part 2: Video
Part 2 of the MPEG-1 standard covers video and is defined in ISO/IEC-11172-2. The design was heavily influenced by H.261.
MPEG-1 Video exploits perceptual compression methods to significantly reduce the data rate required by a video stream. It reduces or completely discards information in certain frequencies and areas of the picture that the human eye has limited ability to fully perceive. It also exploits temporal (over time) and spatial (across a picture) redundancy common in video to achieve better data compression than would be possible otherwise. (See: Video compression)
Color space
Before encoding video to MPEG-1, the color-space is transformed to Y′CbCr (Y′=Luma, Cb=Chroma Blue, Cr=Chroma Red). Luma (brightness, resolution) is stored separately from chroma (color, hue, phase) and even further separated into red and blue components.
The chroma is also subsampled to 4:2:0, meaning it is reduced to half resolution vertically and half resolution horizontally, i.e., to just one quarter the number of samples used for the luma component of the video. This use of higher resolution for some color components is similar in concept to the Bayer pattern filter that is commonly used for the image capturing sensor in digital color cameras. Because the human eye is much more sensitive to small changes in brightness (the Y component) than in color (the Cr and Cb components), chroma subsampling is a very effective way to reduce the amount of video data that needs to be compressed. However, on videos with fine detail (high spatial complexity) this can manifest as chroma aliasing artifacts. Compared to other digital compression artifacts, this issue seems to very rarely be a source of annoyance. Because of the subsampling, Y′CbCr 4:2:0 video is ordinarily stored using even dimensions (divisible by 2 horizontally and vertically).
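As an illustration of what 4:2:0 subsampling does to a chroma plane (a simplified sketch: real encoders may use different filters and chroma siting), each 2×2 block of chroma samples can be averaged down to a single sample:

```python
import numpy as np

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """Halve a chroma plane in both directions by averaging 2x2 blocks.

    `chroma` is a full-resolution Cb or Cr plane with even dimensions,
    as Y'CbCr 4:2:0 video ordinarily requires.
    """
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb = np.arange(16, dtype=float).reshape(4, 4)
print(subsample_420(cb).shape)  # (2, 2): one quarter of the original samples
```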
Y′CbCr color is often informally called YUV to simplify the notation, although that term more properly applies to a somewhat different color format. Similarly, the terms luminance and chrominance are often used instead of the (more accurate) terms luma and chroma.
Resolution/bitrate
MPEG-1 supports resolutions up to 4095×4095 (12 bits), and bit rates up to 100 Mbit/s.
MPEG-1 videos are most commonly seen using Source Input Format (SIF) resolution: 352×240, 352×288, or 320×240. These relatively low resolutions, combined with a bitrate less than 1.5 Mbit/s, make up what is known as a constrained parameters bitstream (CPB), later renamed the "Low Level" (LL) profile in MPEG-2. This is the minimum video specifications any decoder should be able to handle, to be considered MPEG-1 compliant. This was selected to provide a good balance between quality and performance, allowing the use of reasonably inexpensive hardware of the time.
Frame/picture/block types
MPEG-1 has several frame/picture types that serve different purposes. The most important, yet simplest, is I-frame.
I-frames
"I-frame" is an abbreviation for "Intra-frame", so-called because they can be decoded independently of any other frames. They may also be known as I-pictures, or keyframes due to their somewhat similar function to the key frames used in animation. I-frames can be considered effectively identical to baseline JPEG images.
High-speed seeking through an MPEG-1 video is only possible to the nearest I-frame. When cutting a video it is not possible to start playback of a segment of video before the first I-frame in the segment (at least not without computationally intensive re-encoding). For this reason, I-frame-only MPEG videos are used in editing applications.
I-frame only compression is very fast, but produces very large file sizes: a factor of 3× (or more) larger than normally encoded MPEG-1 video, depending on how temporally complex a specific video is. I-frame only MPEG-1 video is very similar to MJPEG video. So much so that very high-speed and theoretically lossless (in reality, there are rounding errors) conversion can be made from one format to the other, provided a couple of restrictions (color space and quantization matrix) are followed in the creation of the bitstream.
The length between I-frames is known as the group of pictures (GOP) size. MPEG-1 most commonly uses a GOP size of 15–18. i.e. 1 I-frame for every 14-17 non-I-frames (some combination of P- and B- frames). With more intelligent encoders, GOP size is dynamically chosen, up to some pre-selected maximum limit.
Limits are placed on the maximum number of frames between I-frames due to decoding complexity, decoder buffer size, recovery time after data errors, seeking ability, and accumulation of IDCT errors in low-precision implementations most common in hardware decoders (See: IEEE-1180).
P-frames
"P-frame" is an abbreviation for "Predicted-frame". They may also be called forward-predicted frames or inter-frames (B-frames are also inter-frames).
P-frames exist to improve compression by exploiting the temporal (over time) redundancy in a video. P-frames store only the difference in image from the frame (either an I-frame or P-frame) immediately preceding it (this reference frame is also called the anchor frame).
The difference between a P-frame and its anchor frame is calculated using motion vectors on each macroblock of the frame (see below). Such motion vector data will be embedded in the P-frame for use by the decoder.
A P-frame can contain any number of intra-coded blocks (DCT and Quantized), in addition to any forward-predicted blocks (Motion Vectors).
If a video drastically changes from one frame to the next (such as a cut), it is more efficient to encode it as an I-frame.
B-frames
"B-frame" stands for "bidirectional-frame" or "bipredictive frame". They may also be known as backwards-predicted frames or B-pictures. B-frames are quite similar to P-frames, except they can make predictions using both the previous and future frames (i.e. two anchor frames).
It is therefore necessary for the player to first decode the next I- or P- anchor frame sequentially after the B-frame, before the B-frame can be decoded and displayed. This means decoding B-frames requires larger data buffers and causes an increased delay during both decoding and encoding. This also necessitates the decoding time stamps (DTS) feature in the container/system stream (see above). As such, B-frames have long been the subject of much controversy; they are often avoided in videos, and are sometimes not fully supported by hardware decoders.
No other frames are predicted from a B-frame. Because of this, a very low bitrate B-frame can be inserted, where needed, to help control the bitrate. If this was done with a P-frame, future P-frames would be predicted from it and would lower the quality of the entire sequence. However, similarly, the future P-frame must still encode all the changes between it and the previous I- or P- anchor frame. B-frames can also be beneficial in videos where the background behind an object is being revealed over several frames, or in fading transitions, such as scene changes.
A B-frame can contain any number of intra-coded blocks and forward-predicted blocks, in addition to backwards-predicted, or bidirectionally predicted blocks.
D-frames
MPEG-1 has a unique frame type not found in later video standards. "D-frames" or DC-pictures are independently coded images (intra-frames) that have been encoded using DC transform coefficients only (AC coefficients are removed when encoding D-frames—see DCT below) and hence are very low quality. D-frames are never referenced by I-, P- or B- frames. D-frames are only used for fast previews of video, for instance when seeking through a video at high speed.
Given moderately higher-performance decoding equipment, fast preview can be accomplished by decoding I-frames instead of D-frames. This provides higher quality previews, since I-frames contain AC coefficients as well as DC coefficients. If the encoder can assume that rapid I-frame decoding capability is available in decoders, it can save bits by not sending D-frames (thus improving compression of the video content). For this reason, D-frames are seldom actually used in MPEG-1 video encoding, and the D-frame feature has not been included in any later video coding standards.
Macroblocks
MPEG-1 operates on video in a series of 8×8 blocks for quantization. However, to reduce the bit rate needed for motion vectors and because chroma (color) is subsampled by a factor of 4, each pair of (red and blue) chroma blocks corresponds to 4 different luma blocks. That is, for 4 luma blocks of size 8×8, there is one Cb block of 8×8 and one Cr block of 8×8. This set of 6 blocks, covering a 16×16 pixel area of the picture, is processed together and called a macroblock.
All of these 8×8 blocks are independently put through DCT and quantization.
A macroblock is the smallest independent unit of (color) video. Motion vectors (see below) operate solely at the macroblock level.
If the height or width of the video are not exact multiples of 16, full rows and full columns of macroblocks must still be encoded and decoded to fill out the picture (though the extra decoded pixels are not displayed).
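A simplified sketch of how a 4:2:0 frame decomposes into macroblocks (illustrative only; it assumes the planes are already padded as described above, and the function name is hypothetical):

```python
import numpy as np

def macroblocks(y: np.ndarray, cb: np.ndarray, cr: np.ndarray):
    """Yield the six 8x8 blocks of each 16x16 macroblock of a 4:2:0 frame.

    Assumes dimensions already padded to multiples of 16 (luma) / 8 (chroma).
    """
    for row in range(0, y.shape[0], 16):
        for col in range(0, y.shape[1], 16):
            luma = [y[row + r:row + r + 8, col + c:col + c + 8]
                    for r in (0, 8) for c in (0, 8)]            # 4 luma blocks
            chroma = [cb[row // 2:row // 2 + 8, col // 2:col // 2 + 8],
                      cr[row // 2:row // 2 + 8, col // 2:col // 2 + 8]]
            yield luma + chroma                                  # 6 blocks total

frame_y = np.zeros((32, 48))
frame_cb = np.zeros((16, 24))
frame_cr = np.zeros((16, 24))
print(sum(1 for _ in macroblocks(frame_y, frame_cb, frame_cr)))  # 6 macroblocks
```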
Motion vectors
To decrease the amount of temporal redundancy in a video, only blocks that change are updated (up to the maximum GOP size). This is known as conditional replenishment. However, this is not very effective by itself. Movement of the objects and/or the camera may result in large portions of the frame needing to be updated, even though only the position of the previously encoded objects has changed. Through motion estimation, the encoder can compensate for this movement and remove a large amount of redundant information.
The encoder compares the current frame with adjacent parts of the video from the anchor frame (previous I- or P- frame) in a diamond pattern, up to a (encoder-specific) predefined radius limit from the area of the current macroblock. If a match is found, only the direction and distance (i.e. the vector of the motion) from the previous video area to the current macroblock need to be encoded into the inter-frame (P- or B- frame). The reverse of this process, performed by the decoder to reconstruct the picture, is called motion compensation.
A predicted macroblock rarely matches the current picture perfectly, however. The differences between the estimated matching area, and the real frame/macroblock is called the prediction error. The larger the amount of prediction error, the more data must be additionally encoded in the frame. For efficient video compression, it is very important that the encoder is capable of effectively and precisely performing motion estimation.
Motion vectors record the distance between two areas on screen based on the number of pixels (also called pels). MPEG-1 video uses a motion vector (MV) precision of one half of one pixel, or half-pel. The finer the precision of the MVs, the more accurate the match is likely to be, and the more efficient the compression. There are trade-offs to higher precision, however. Finer MV precision results in using a larger amount of data to represent the MV, as larger numbers must be stored in the frame for every single MV, increased coding complexity as increasing levels of interpolation on the macroblock are required for both the encoder and decoder, and diminishing returns (minimal gains) with higher precision MVs. Half-pel precision was chosen as the ideal trade-off for that point in time. (See: qpel)
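Block matching can be illustrated with a sketch. This is illustrative only: it uses an exhaustive full-pel search with a sum-of-absolute-differences (SAD) cost rather than the diamond-pattern, half-pel search described above, and the function and variable names are hypothetical.

```python
import numpy as np

def best_motion_vector(anchor: np.ndarray, current: np.ndarray,
                       row: int, col: int, radius: int = 8):
    """Find the (dy, dx) offset in `anchor` that best matches the 16x16
    macroblock of `current` at (row, col), by exhaustive full-pel search."""
    block = current[row:row + 16, col:col + 16].astype(np.int64)
    best, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = row + dy, col + dx
            if r < 0 or c < 0 or r + 16 > anchor.shape[0] or c + 16 > anchor.shape[1]:
                continue
            sad = np.abs(anchor[r:r + 16, c:c + 16].astype(np.int64) - block).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best, best_sad

rng = np.random.default_rng(1)
prev = rng.integers(0, 255, (64, 64))
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))   # simulate a (2, 3) pixel shift
print(best_motion_vector(prev, cur, 16, 16))
# ((-2, -3), 0): the content moved by (2, 3), so the match lies at (-2, -3)
```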
Because neighboring macroblocks are likely to have very similar motion vectors, this redundant information can be compressed quite effectively by being stored DPCM-encoded. Only the (smaller) amount of difference between the MVs for each macroblock needs to be stored in the final bitstream.
P-frames have one motion vector per macroblock, relative to the previous anchor frame. B-frames, however, can use two motion vectors; one from the previous anchor frame, and one from the future anchor frame.
Partial macroblocks, and black borders/bars encoded into the video that do not fall exactly on a macroblock boundary, cause havoc with motion prediction. The block padding/border information prevents the macroblock from closely matching with any other area of the video, and so, significantly larger prediction error information must be encoded for every one of the several dozen partial macroblocks along the screen border. DCT encoding and quantization (see below) also isn't nearly as effective when there is large/sharp picture contrast in a block.
An even more serious problem exists with macroblocks that contain significant, random, edge noise, where the picture transitions to (typically) black. All the above problems also apply to edge noise. In addition, the added randomness is simply impossible to compress significantly. All of these effects will lower the quality (or increase the bitrate) of the video substantially.
DCT
Each 8×8 block is encoded by first applying a forward discrete cosine transform (FDCT) and then a quantization process. The FDCT process (by itself) is theoretically lossless, and can be reversed by applying an Inverse DCT (IDCT) to reproduce the original values (in the absence of any quantization and rounding errors). In reality, there are some (sometimes large) rounding errors introduced both by quantization in the encoder (as described in the next section) and by IDCT approximation error in the decoder. The minimum allowed accuracy of a decoder IDCT approximation is defined by ISO/IEC 23002-1. (Prior to 2006, it was specified by IEEE 1180-1990.)
The FDCT process converts the 8×8 block of uncompressed pixel values (brightness or color difference values) into an 8×8 indexed array of frequency coefficient values. One of these is the (statistically high in variance) "DC coefficient", which represents the average value of the entire 8×8 block. The other 63 coefficients are the statistically smaller "AC coefficients", which have positive or negative values each representing sinusoidal deviations from the flat block value represented by the DC coefficient.
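The transform can be written directly from the DCT-II definition. The slow but straightforward sketch below (real encoders use fast factorizations, and scaling conventions vary between implementations) makes the DC/AC distinction concrete:

```python
import numpy as np

def fdct_8x8(block: np.ndarray) -> np.ndarray:
    """Naive 2-D DCT-II of an 8x8 block (the transform the FDCT step applies).

    out[0, 0] is the DC coefficient; the other 63 are the AC coefficients.
    """
    out = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            cu = 1 / np.sqrt(2) if u == 0 else 1.0
            cv = 1 / np.sqrt(2) if v == 0 else 1.0
            s = sum(block[x, y]
                    * np.cos((2 * x + 1) * u * np.pi / 16)
                    * np.cos((2 * y + 1) * v * np.pi / 16)
                    for x in range(8) for y in range(8))
            out[u, v] = 0.25 * cu * cv * s
    return out

flat = np.full((8, 8), 100.0)
# For a flat block all AC coefficients are ~0 and the DC value is 8x the average.
print(round(fdct_8x8(flat)[0, 0]))  # 800
```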
Since the DC coefficient value is statistically correlated from one block to the next, it is compressed using DPCM encoding. Only the (smaller) amount of difference between each DC value and the value of the DC coefficient in the block to its left needs to be represented in the final bitstream.
Additionally, the frequency conversion performed by applying the DCT provides a statistical decorrelation function to efficiently concentrate the signal into fewer high-amplitude values prior to applying quantization (see below).
Quantization
Quantization is, essentially, the process of reducing the accuracy of a signal, by dividing it by some larger step size and rounding to an integer value (i.e. finding the nearest multiple, and discarding the remainder).
The frame-level quantizer is a number from 0 to 31 (although encoders will usually omit/disable some of the extreme values) which determines how much information will be removed from a given frame. The frame-level quantizer is typically either dynamically selected by the encoder to maintain a certain user-specified bitrate, or (much less commonly) directly specified by the user.
A "quantization matrix" is a string of 64 numbers (ranging from 0 to 255) which tells the encoder how relatively important or unimportant each piece of visual information is. Each number in the matrix corresponds to a certain frequency component of the video image.
Quantization is performed by taking each of the 64 frequency values of the DCT block, dividing them by the frame-level quantizer, then dividing them by their corresponding values in the quantization matrix. Finally, the result is rounded down. This significantly reduces, or completely eliminates, the information in some frequency components of the picture. Typically, high frequency information is less visually important, and so high frequencies are much more strongly quantized (drastically reduced). MPEG-1 actually uses two separate quantization matrices, one for intra-blocks (I-blocks) and one for inter-blocks (P- and B- blocks), so quantization of different block types can be done independently, and so, more effectively.
This quantization process usually reduces a significant number of the AC coefficients to zero, (known as sparse data) which can then be more efficiently compressed by entropy coding (lossless compression) in the next step.
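A sketch that follows the procedure described above almost literally (it truncates toward zero, uses a flat placeholder matrix rather than the standard's default matrices, feeds in made-up coefficients, and ignores the special handling of the DC coefficient):

```python
import numpy as np

def quantize(dct: np.ndarray, quantizer: int, matrix: np.ndarray) -> np.ndarray:
    """Quantize an 8x8 block of DCT coefficients: divide by the frame-level
    quantizer, then by the matrix entry, then truncate toward zero."""
    return np.trunc(dct / quantizer / matrix).astype(int)

rng = np.random.default_rng(0)
coeffs = rng.normal(0.0, 40.0, (8, 8))   # stand-in DCT coefficients, not real data
coeffs[0, 0] = 800.0                     # a large DC value
flat_matrix = np.full((8, 8), 16.0)      # placeholder matrix, not the standard default
q = quantize(coeffs, quantizer=8, matrix=flat_matrix)
print(np.count_nonzero(q == 0), "of 64 coefficients quantized to zero")
```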
Quantization eliminates a large amount of data, and is the main lossy processing step in MPEG-1 video encoding. This is also the primary source of most MPEG-1 video compression artifacts, like blockiness, color banding, noise, ringing, discoloration, etc. This happens when video is encoded with an insufficient bitrate, and the encoder is therefore forced to use high frame-level quantizers (strong quantization) through much of the video.
Entropy coding
Several steps in the encoding of MPEG-1 video are lossless, meaning they will be reversed upon decoding, to produce exactly the same (original) values. Since these lossless data compression steps don't add noise into, or otherwise change the contents (unlike quantization), it is sometimes referred to as noiseless coding. Since lossless compression aims to remove as much redundancy as possible, it is known as entropy coding in the field of information theory.
The coefficients of quantized DCT blocks tend to zero towards the bottom-right. Maximum compression can be achieved by a zig-zag scanning of the DCT block starting from the top left and using Run-length encoding techniques.
The DC coefficients and motion vectors are DPCM-encoded.
Run-length encoding (RLE) is a simple method of compressing repetition. A sequential string of characters, no matter how long, can be replaced with a few bytes, noting the value that repeats, and how many times. For example, if someone were to say "five nines", you would know they mean the number: 99999.
RLE is particularly effective after quantization, as a significant number of the AC coefficients are now zero (called sparse data), and can be represented with just a couple of bytes. This is stored in a special 2-dimensional Huffman table that codes the run-length and the run-ending character.
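A sketch of the zig-zag scan and a simple (run-of-zeros, level) encoding; the standard's actual variable-length code tables and end-of-block marker are not reproduced here, and the function names are illustrative only.

```python
import numpy as np

def zigzag_indices(n: int = 8):
    """Return (row, col) pairs of an n x n block in zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_level_encode(block: np.ndarray):
    """Encode the AC coefficients as (run of zeros, nonzero level) pairs."""
    scan = [block[r, c] for r, c in zigzag_indices()][1:]  # skip the DC coefficient
    pairs, run = [], 0
    for v in scan:
        if v == 0:
            run += 1
        else:
            pairs.append((run, int(v)))
            run = 0
    return pairs   # trailing zeros are implied by an end-of-block marker

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0], block[1, 1] = 42, 5, -3, 2
print(run_level_encode(block))  # [(0, 5), (0, -3), (1, 2)]
```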
Huffman Coding is a very popular and relatively simple method of entropy coding, and used in MPEG-1 video to reduce the data size. The data is analyzed to find strings that repeat often. Those strings are then put into a special table, with the most frequently repeating data assigned the shortest code. This keeps the data as small as possible with this form of compression. Once the table is constructed, those strings in the data are replaced with their (much smaller) codes, which reference the appropriate entry in the table. The decoder simply reverses this process to produce the original data.
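For illustration of the principle only, the classic heap-based construction below builds a prefix code from symbol frequencies; note that MPEG-1 itself uses fixed, pre-defined code tables rather than deriving a table from each stream, and the symbols and frequencies here are made up.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code: frequent symbols get the shortest bit strings."""
    heap = [[weight, [symbol, ""]] for symbol, weight in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

# The most frequent symbol ("a") gets the shortest code.
print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 5}))
```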
This is the final step in the video encoding process, so the result of Huffman coding is known as the MPEG-1 video "bitstream."
GOP configurations for specific applications
I-frames store complete frame info within the frame and are therefore suited for random access. P-frames provide compression using motion vectors relative to the previous frame ( I or P ). B-frames provide maximum compression but require the previous as well as next frame for computation. Therefore, processing of B-frames requires more buffer on the decoded side. A configuration of the Group of Pictures (GOP) should be selected based on these factors. I-frame only sequences give least compression, but are useful for random access, FF/FR and editability. I- and P-frame sequences give moderate compression but add a certain degree of random access, FF/FR functionality. I-, P- and B-frame sequences give very high compression but also increase the coding/decoding delay significantly. Such configurations are therefore not suited for video-telephony or video-conferencing applications.
The typical data rate of an I-frame is 1 bit per pixel while that of a P-frame is 0.1 bit per pixel and for a B-frame, 0.015 bit per pixel.
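Using the per-pixel figures quoted above, a back-of-the-envelope estimate can be made for an assumed SIF-resolution stream and an assumed 15-frame GOP; the numbers are purely illustrative and well below real encoder bitrates.

```python
# Rough average video bitrate from the per-pixel figures above, assuming
# 352x240 video at 25 frames/s and a 15-frame GOP of 1 I, 4 P and 10 B frames.
pixels = 352 * 240
bits_per_frame = {"I": 1.0 * pixels, "P": 0.1 * pixels, "B": 0.015 * pixels}
gop = ["I"] + ["P"] * 4 + ["B"] * 10

bits_per_gop = sum(bits_per_frame[t] for t in gop)
avg_bitrate = bits_per_gop * 25 / len(gop)       # bits per second
print(f"{avg_bitrate / 1e6:.2f} Mbit/s")         # ~0.22 Mbit/s for this toy GOP
```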
Part 3: Audio
Part 3 of the MPEG-1 standard covers audio and is defined in ISO/IEC-11172-3.
MPEG-1 Audio utilizes psychoacoustics to significantly reduce the data rate required by an audio stream. It reduces or completely discards certain parts of the audio that it deduces that the human ear can't hear, either because they are in frequencies where the ear has limited sensitivity, or are masked by other (typically louder) sounds.
Channel encoding modes:
Mono
Joint stereo – intensity encoded
Joint stereo – M/S encoded (Layer III only)
Stereo
Dual (two uncorrelated mono channels)
Sampling rates:
32000 Hz
44100 Hz
48000 Hz
Bit rates:
Layer I: 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416 and 448 kbit/s
Layer II: 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320 and 384 kbit/s
Layer III: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s
MPEG-1 Audio is divided into 3 layers. Each higher layer is more computationally complex, and generally more efficient at lower bitrates than the previous. The layers are semi backwards compatible as higher layers reuse technologies implemented by the lower layers. A "full" Layer II decoder can also play Layer I audio, but not Layer III audio, although not all higher level players are "full".
Layer I
MPEG-1 Audio Layer I is a simplified version of MPEG-1 Audio Layer II. Layer I uses a smaller 384-sample frame size for very low delay, and finer resolution. This is advantageous for applications like teleconferencing, studio editing, etc. It has lower complexity than Layer II to facilitate real-time encoding on the hardware then available.
Layer I saw limited adoption in its time, and most notably was used on Philips' defunct Digital Compact Cassette at a bitrate of 384 kbit/s. With the substantial performance improvements in digital processing since its introduction, Layer I quickly became unnecessary and obsolete.
Layer I audio files typically use the extension ".mp1" or sometimes ".m1a".
Layer II
MPEG-1 Audio Layer II (the first version of MP2, often informally called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple relative to MP3, AAC, etc.
History/MUSICAM
MPEG-1 Audio Layer II was derived from the MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) audio codec, developed by Centre commun d'études de télévision et télécommunications (CCETT), Philips, and Institut für Rundfunktechnik (IRT/CNET) as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting.
Most key features of MPEG-1 Audio were directly inherited from MUSICAM, including the filter bank, time-domain processing, audio frame sizes, etc. However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Audio Layer II standard. The widespread usage of the term MUSICAM to refer to Layer II is entirely incorrect and discouraged for both technical and legal reasons.
Technical details
MP2 is a time-domain encoder. It uses a low-delay 32-sub-band polyphase filter bank for time-frequency mapping, with overlapping ranges (i.e. polyphase) to prevent aliasing. The psychoacoustic model is based on the principles of auditory masking, simultaneous masking effects, and the absolute threshold of hearing (ATH). The size of a Layer II frame is fixed at 1152 samples (coefficients).
Time domain refers to how analysis and quantization is performed on short, discrete samples/chunks of the audio waveform. This offers low delay as only a small number of samples are analyzed before encoding, as opposed to frequency domain encoding (like MP3) which must analyze many times more samples before it can decide how to transform and output encoded audio. This also offers higher performance on complex, random and transient impulses (such as percussive instruments, and applause), offering avoidance of artifacts like pre-echo.
The 32 sub-band filter bank returns 32 amplitude coefficients, one for each equal-sized frequency band/segment of the audio, which is about 700 Hz wide (depending on the audio's sampling frequency). The encoder then utilizes the psychoacoustic model to determine which sub-bands contain audio information that is less important, and so, where quantization will be inaudible, or at least much less noticeable.
The psychoacoustic model is applied using a 1024-point fast Fourier transform (FFT). Of the 1152 samples per frame, 64 samples at the top and bottom of the frequency range are ignored for this analysis. They are presumably not significant enough to change the result. The psychoacoustic model uses an empirically determined masking model to determine which sub-bands contribute more to the masking threshold, and how much quantization noise each can contain without being perceived. Any sounds below the absolute threshold of hearing (ATH) are completely discarded. The available bits are then assigned to each sub-band accordingly.
Typically, sub-bands are less important if they contain quieter sounds (smaller coefficient) than a neighboring (i.e. similar frequency) sub-band with louder sounds (larger coefficient). Also, "noise" components typically have a more significant masking effect than "tonal" components.
Less significant sub-bands are reduced in accuracy by quantization. This basically involves compressing the coefficient's amplitude range, i.e. raising the noise floor, and then computing an amplification factor for the decoder to use to re-expand each sub-band to the proper range.
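The following Python sketch illustrates the general idea of sub-band analysis followed by energy-driven bit allocation. It is a deliberately simplified stand-in for what is described above: it splits the spectrum of one frame into 32 equal-width bands with an FFT rather than with the actual MPEG-1 polyphase filter bank, and it allocates bits purely from per-band energy rather than from a real psychoacoustic masking model; the bit budget is an arbitrary assumption.

import numpy as np

# Simplified illustration only: NOT the MPEG-1 polyphase filter bank or psychoacoustic
# model, just the overall flow of splitting into 32 bands and assigning bits to them.
FRAME = 1152        # Layer II frame length in samples
NUM_BANDS = 32
TOTAL_BITS = 2000   # illustrative per-frame bit budget (assumption)

def allocate_bits(samples):
    spectrum = np.abs(np.fft.rfft(samples[:FRAME])) ** 2
    bands = np.array_split(spectrum, NUM_BANDS)       # 32 equal-width frequency bands
    energy = np.array([band.sum() for band in bands])
    weights = np.log1p(energy)                        # louder bands get more bits
    return np.round(TOTAL_BITS * weights / weights.sum()).astype(int)

if __name__ == "__main__":
    t = np.arange(FRAME) / 44100.0
    frame = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(FRAME)
    print(allocate_bits(frame))   # the band containing the 440 Hz tone gets the largest share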
Layer II can also optionally use intensity stereo coding, a form of joint stereo. This means that the frequencies above 6 kHz of both channels are combined/down-mixed into one single (mono) channel, but the "side channel" information on the relative intensity (volume, amplitude) of each channel is preserved and encoded into the bitstream separately. On playback, the single channel is played through left and right speakers, with the intensity information applied to each channel to give the illusion of stereo sound. This perceptual trick is known as "stereo irrelevancy". This can allow further reduction of the audio bitrate without much perceivable loss of fidelity, but is generally not used with higher bitrates as it does not provide very high quality (transparent) audio.
Quality
Subjective audio testing by experts, in the most critical conditions ever implemented, has shown MP2 to offer transparent audio compression at 256 kbit/s for 16-bit 44.1 kHz CD audio using the earliest reference implementation (more recent encoders should presumably perform even better). That (approximately) 1:6 compression ratio for CD audio is particularly impressive because it is quite close to the estimated upper limit of perceptual entropy, at just over 1:8. Achieving much higher compression is simply not possible without discarding some perceptible information.
MP2 remains a favoured lossy audio coding standard due to its particularly high audio coding performance on important audio material such as castanet, symphonic orchestra, male and female voices and particularly complex and high energy transients (impulses) like percussive sounds: triangle, glockenspiel and audience applause. More recent testing has shown that MPEG Multichannel (based on MP2), despite being compromised by an inferior matrixed mode (for the sake of backwards compatibility), rates just slightly lower than much more recent audio codecs, such as Dolby Digital (AC-3) and Advanced Audio Coding (AAC) (mostly within the margin of error, and substantially superior in some cases, such as audience applause). This is one reason that MP2 audio continues to be used extensively. The MPEG-2 AAC Stereo verification tests reached a vastly different conclusion, however, showing AAC to provide superior performance to MP2 at half the bitrate. The reason for this disparity with both earlier and later tests is not clear, but strangely, a sample of applause is notably absent from the latter test.
Layer II audio files typically use the extension ".mp2" or sometimes ".m2a".
Layer III
MPEG-1 Audio Layer III (the first version of MP3) is a lossy audio format designed to provide acceptable quality at about 64 kbit/s for monaural audio over single-channel (BRI) ISDN links, and 128 kbit/s for stereo sound.
History/ASPEC
MPEG-1 Audio Layer III was derived from the Adaptive Spectral Perceptual Entropy Coding (ASPEC) codec developed by Fraunhofer as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting. ASPEC was adapted to fit in with the Layer II model (frame size, filter bank, FFT, etc.), to become Layer III.
ASPEC was itself based on Multiple adaptive Spectral audio Coding (MSC) by E. F. Schroeder, Optimum Coding in the Frequency domain (OCF) the doctoral thesis by Karlheinz Brandenburg at the University of Erlangen-Nuremberg, Perceptual Transform Coding (PXFM) by J. D. Johnston at AT&T Bell Labs, and Transform coding of audio signals by Y. Mahieux and J. Petit at Institut für Rundfunktechnik (IRT/CNET).
Technical details
MP3 is a frequency-domain audio transform encoder. Even though it utilizes some of the lower layer functions, MP3 is quite different from MP2.
MP3 works on 1152 samples like MP2, but needs to take multiple frames for analysis before frequency-domain (MDCT) processing and quantization can be effective. It outputs a variable number of bits, using a bit buffer to enable this variable bitrate (VBR) encoding while maintaining 1152-sample output frames. This causes a significantly longer delay before output, which has caused MP3 to be considered unsuitable for studio applications where editing or other processing needs to take place.
MP3 does not benefit from the 32-sub-band polyphase filter bank, instead just using an 18-point MDCT transformation on each of its outputs to split the data into 576 frequency components, and processing it in the frequency domain. This extra granularity allows MP3 to have a much finer psychoacoustic model, and more carefully apply appropriate quantization to each band, providing much better low-bitrate performance.
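For readers unfamiliar with the MDCT itself, the following Python sketch implements the plain textbook MDCT definition (naive, unoptimized and unwindowed): 2N input samples are transformed into N frequency coefficients, and it is the 50% overlap between consecutive blocks that lets the aliasing cancel out across blocks. This is only the bare transform, not MP3's actual hybrid filter-bank/MDCT stage.

import numpy as np

def mdct(x):
    """Naive MDCT of 2N real samples into N coefficients (no windowing)."""
    x = np.asarray(x, dtype=float)
    two_n = len(x)
    n = two_n // 2
    ns = np.arange(two_n)
    ks = np.arange(n)
    # X_k = sum_n x_n * cos(pi/N * (n + 1/2 + N/2) * (k + 1/2))
    basis = np.cos(np.pi / n * np.outer(ns + 0.5 + n / 2, ks + 0.5))
    return x @ basis

if __name__ == "__main__":
    block = np.sin(2 * np.pi * 5 * np.arange(36) / 36)  # 36 samples -> 18 coefficients,
    print(mdct(block).round(3))                          # the "long block" size used by MP3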
Frequency-domain processing imposes some limitations as well, causing a factor of 12 or 36 times worse temporal resolution than Layer II. This causes quantization artifacts when transient sounds, such as percussive events and other high-frequency events, spread over a larger window, resulting in audible smearing and pre-echo. MP3 uses pre-echo detection routines and VBR encoding, which allows it to temporarily increase the bitrate during difficult passages, in an attempt to reduce this effect. It is also able to switch from the normal 36-sample quantization window to three short 12-sample windows, to reduce the temporal (time) length of quantization artifacts. Yet in choosing a window size small enough to keep MP3's temporal response adequate to avoid the most serious artifacts, MP3 becomes much less efficient at frequency-domain compression of stationary, tonal components.
Being forced to use a hybrid time-domain (filter bank) / frequency-domain (MDCT) model to fit in with Layer II wastes processing time and compromises quality by introducing aliasing artifacts. MP3 has an aliasing cancellation stage specifically to mask this problem, which instead produces frequency-domain energy that must be encoded in the audio. This energy is pushed to the top of the frequency range, where most people have limited hearing, in the hope that the distortion it causes will be less audible.
Layer II's 1024 point FFT doesn't entirely cover all samples, and would omit several entire MP3 sub-bands, where quantization factors must be determined. MP3 instead uses two passes of FFT analysis for spectral estimation, to calculate the global and individual masking thresholds. This allows it to cover all 1152 samples. Of the two, it utilizes the global masking threshold level from the more critical pass, with the most difficult audio.
In addition to Layer II's intensity encoded joint stereo, MP3 can use middle/side (mid/side, m/s, MS, matrixed) joint stereo. With mid/side stereo, certain frequency ranges of both channels are merged into a single (middle, mid, L+R) mono channel, while the sound difference between the left and right channels is stored as a separate (side, L-R) channel. Unlike intensity stereo, this process does not discard any audio information. When combined with quantization, however, it can exaggerate artifacts.
If the difference between the left and right channels is small, the side channel will be small, which will offer as much as a 50% bitrate savings, and associated quality improvement. If the difference between left and right is large, standard (discrete, left/right) stereo encoding may be preferred, as mid/side joint stereo will not provide any benefits. An MP3 encoder can switch between m/s stereo and full stereo on a frame-by-frame basis.
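A minimal Python sketch of the mid/side transform described above, ignoring quantization and the per-frame switching decision an encoder would actually make:

import numpy as np

def ms_encode(left, right):
    """Lossless mid/side transform: (L, R) -> (mid, side)."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Exact inverse of ms_encode."""
    return mid + side, mid - side

if __name__ == "__main__":
    left = np.array([0.50, 0.60, 0.40])
    right = np.array([0.50, 0.55, 0.45])   # nearly identical channels
    mid, side = ms_encode(left, right)
    print(side)                             # small side channel, cheap to encode
    l, r = ms_decode(mid, side)
    assert np.allclose(l, left) and np.allclose(r, right)  # no information discarded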
Unlike Layers I and II, MP3 uses variable-length Huffman coding (applied after the perceptual quantization step) to further reduce the bitrate, without any further quality loss.
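To illustrate the principle of this entropy-coding step, here is a generic Huffman coder built from symbol frequencies; it is not MP3's actual predefined Huffman tables, which are fixed by the standard.

import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix code from symbol frequencies (generic, not MP3's tables)."""
    heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

if __name__ == "__main__":
    quantized = [0, 0, 0, 1, 0, -1, 0, 2, 0, 0, 1, 0]   # small values dominate
    print(huffman_codes(quantized))                      # frequent values get short codes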
Quality
MP3's more fine-grained and selective quantization does prove notably superior to MP2 at lower-bitrates. It is able to provide nearly equivalent audio quality to Layer II, at a 15% lower bitrate (approximately). 128 kbit/s is considered the "sweet spot" for MP3; meaning it provides generally acceptable quality stereo sound on most music, and there are diminishing quality improvements from increasing the bitrate further. MP3 is also regarded as exhibiting artifacts that are less annoying than Layer II, when both are used at bitrates that are too low to possibly provide faithful reproduction.
Layer III audio files use the extension ".mp3".
MPEG-2 audio extensions
The MPEG-2 standard includes several extensions to MPEG-1 Audio. These are known as MPEG-2 BC – backwards compatible with MPEG-1 Audio. MPEG-2 Audio is defined in ISO/IEC 13818-3.
MPEG Multichannel – Backward compatible 5.1-channel surround sound.
Sampling rates: 16000, 22050, and 24000 Hz
Bitrates: 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s
These sampling rates are exactly half of those originally defined for MPEG-1 Audio. They were introduced to maintain higher quality sound when encoding audio at lower bitrates. The even lower bitrates were introduced because tests showed that MPEG-1 Audio could provide higher quality than any existing very low bitrate (i.e. speech) audio codec.
Part 4: Conformance testing
Part 4 of the MPEG-1 standard covers conformance testing, and is defined in ISO/IEC-11172-4.
Conformance: Procedures for testing conformance.
Provides two sets of guidelines and reference bitstreams for testing the conformance of MPEG-1 audio and video decoders, as well as the bitstreams produced by an encoder.
Part 5: Reference software
Part 5 of the MPEG-1 standard includes reference software, and is defined in ISO/IEC TR 11172–5.
Simulation: Reference software.
C reference code for encoding and decoding of audio and video, as well as multiplexing and demultiplexing.
This includes the ISO Dist10 audio encoder code, which LAME and TooLAME were originally based upon.
File extension
.mpg is one of a number of file extensions for MPEG-1 or MPEG-2 audio and video compression. MPEG-1 Part 2 video is rare nowadays, and this extension typically refers to an MPEG program stream (defined in MPEG-1 and MPEG-2) or MPEG transport stream (defined in MPEG-2). Other suffixes such as .m2ts also exist specifying the precise container, in this case MPEG-2 TS, but this has little relevance to MPEG-1 media.
.mp3 is the most common extension for files containing MP3 audio (typically MPEG-1 Audio, sometimes MPEG-2 Audio). An MP3 file is typically an uncontained stream of raw audio; the conventional way to tag MP3 files is by writing data to "garbage" segments of each frame, which preserve the media information but are discarded by the player. This is similar in many respects to how raw .AAC files are tagged (but this is less supported nowadays, e.g. iTunes).
Note that although the extension would technically apply, .mpg is not normally used for raw AAC or for AAC in MPEG-2 Part 7 containers. The .aac extension normally denotes these audio files.
See also
MPEG The Moving Picture Experts Group, developers of the MPEG-1 standard
MP3 Additional less technical details about MPEG-1 Audio Layer III
MPEG Multichannel Backwards compatible 5.1 channel surround sound extension to MPEG-1 Audio Layer II
MPEG-2 The direct successor to the MPEG-1 standard.
ISO/IEC JTC 1/SC 29
Implementations
Libavcodec includes MPEG-1/2 video/audio encoders and decoders
Mjpegtools MPEG-1/2 video/audio encoders
TooLAME A high quality MPEG-1 Audio Layer II encoder.
LAME A high quality MP3 audio encoder.
Musepack A format originally based on MPEG-1 Audio Layer II, but now incompatible.
References
External links
Official Web Page of the Moving Picture Experts Group (MPEG) a working group of ISO/IEC
MPEG Industry Forum Organization
Source Code to Implement MPEG-1
A simple, concise explanation from Berkeley Multimedia Research Center
Audio codecs
Video codecs
MPEG
ISO/IEC standards
Computer-related introductions in 1993
Data compression | MPEG-1 | Technology | 11,229 |
42,864,924 | https://en.wikipedia.org/wiki/Zeta%20Pictoris | ζ Pictoris, Latinised as Zeta Pictoris, is a solitary star in the southern constellation of Pictor. It is visible to the naked eye with an apparent visual magnitude of +5.43. Based upon an annual parallax shift of 28.00 mas as seen from the Earth, the system is located 116.5 light years from the Sun.
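The quoted distance follows directly from the parallax; a quick check of the arithmetic in Python:

# Distance from annual parallax: d [parsec] = 1000 / parallax [mas].
parallax_mas = 28.00
distance_pc = 1000.0 / parallax_mas       # about 35.7 parsecs
distance_ly = distance_pc * 3.26156       # 1 parsec is about 3.26156 light years
print(f"{distance_ly:.1f} light years")   # about 116.5, matching the quoted value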
This is an evolving F-type subgiant star with a stellar classification of F6 IV. It is a thin disk star with an estimated 1.4 times the mass of the Sun and about 5.3 times the Sun's radius. At the age of 2.6 billion years, Zeta Pictoris is spinning with a projected rotational velocity of 5.6 km/s.
References
F-type subgiants
Pictor
Pictoris, Zeta
Durchmusterung objects
035072
024829
1767 | Zeta Pictoris | Astronomy | 184 |
33,024,297 | https://en.wikipedia.org/wiki/Rilotumumab | Rilotumumab (previously AMG102) is a human monoclonal antibody designed for the treatment of solid tumors.
Rilotumumab was in development by Amgen, Inc. until November 2014, when Amgen announced it had halted all clinical trials of the compound in advanced gastric cancer (including two Phase III studies) after one of the trials found an increase in the number of deaths in those who were taking rilotumumab and chemotherapy when compared to those who were only administered chemotherapy.
References
Amgen
Abandoned drugs | Rilotumumab | Chemistry | 113 |
48,133,344 | https://en.wikipedia.org/wiki/Spike%20%28software%20development%29 | A spike is a product development method originating from extreme programming that uses the simplest possible program to explore potential solutions. It is used to determine how much work will be required to solve or work around a software issue. Typically, a "spike test" involves gathering additional information or testing for easily reproduced edge cases. The term is used in agile software development approaches like Scrum or Extreme Programming.
Uses
A spike in a sprint can be used in a number of ways:
As a way to familiarize the team with new hardware or software
To analyze a problem thoroughly and assist in properly dividing work among separate team members.
Spike tests can also be used to mitigate future risk, and may uncover additional issues that have escaped notice.
A distinction can be made between technical spikes and functional spikes. The technical spike is used more often for evaluating the impact new technology has on the current implementation. A functional spike is used to determine the interaction with a new feature or implementation.
To track such work items, a new user story can be set up for each spike in a ticketing system, for organizational purposes.
Following a spike, the results (a new design, a refined workflow, etc.) are shared and discussed with the team.
References
Agile software development
Software development process | Spike (software development) | Engineering | 254 |
76,457,599 | https://en.wikipedia.org/wiki/Adriana%20Salerno | Adriana Julia Salerno Domínguez (born 1979) is a Venezuelan-American mathematician, a professor of mathematics at Bates College, and a program director at the National Science Foundation. Her research interests include arithmetic geometry and arithmetic dynamics in number theory. She is also a mathematics blogger, the co-founder of the American Mathematical Society blogs "Ph.D. plus epsilon" and "inclusion/exclusion".
Education and career
Salerno was born in Caracas in 1979, and earned a licenciatura in mathematics from Simón Bolívar University (Venezuela) in 2001, advised by Pedro Berrizbeitia. She completed a Ph.D. in mathematics in 2009 at the University of Texas at Austin, with the dissertation Hypergeometric Functions in Arithmetic Geometry supervised by Fernando Rodríguez-Villegas.
She joined Bates College as an assistant professor in 2009. In 2016, she visited the Mathematical Association of America (MAA) headquarters in Washington, D.C., as Dolciani Visiting Mathematician. After serving as department chair, she took a leave from Bates College beginning in 2021 to become a program officer for the National Science Foundation, where she is a program director for algebra and number theory. In 2021, she was also elected vice president of the MAA.
Recognition
Salerno is a 2023 recipient of one of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics.
References
External links
Home page
Adriana Salerno, Calendar 2017, Latinxs and Hispanics in the Mathematical Sciences
1979 births
Living people
People from Caracas
Venezuelan mathematicians
Venezuelan women scientists
21st-century American mathematicians
21st-century American women mathematicians
Number theorists
Simón Bolívar University (Venezuela) alumni
University of Texas at Austin alumni
Bates College faculty
United States National Science Foundation officials | Adriana Salerno | Mathematics | 352 |
1,081,285 | https://en.wikipedia.org/wiki/Point-set%20triangulation | A triangulation of a set of points in the Euclidean space is a simplicial complex that covers the convex hull of , and whose vertices belong to . In the plane (when is a set of points in ), triangulations are made up of triangles, together with their edges and vertices. Some authors require that all the points of are vertices of its triangulations. In this case, a triangulation of a set of points in the plane can alternatively be defined as a maximal set of non-crossing edges between points of . In the plane, triangulations are special cases of planar straight-line graphs.
A particularly interesting kind of triangulation is the Delaunay triangulation. Delaunay triangulations are the geometric duals of Voronoi diagrams. The Delaunay triangulation of a set P of points in the plane contains the Gabriel graph, the nearest neighbor graph and the minimal spanning tree of P.
Triangulations have a number of applications, and there is interest in finding "good" triangulations of a given point set under some criterion, for instance minimum-weight triangulations. Sometimes it is desirable to have a triangulation with special properties, e.g., one in which all triangles have large angles (long and narrow ("splinter") triangles are avoided).
Given a set of edges that connect points of the plane, the problem to determine whether they contain a triangulation is NP-complete.
Regular triangulations
Some triangulations of a set P of points in R^d can be obtained by lifting the points of P into R^(d+1) (which amounts to adding a coordinate to each point of P), by computing the convex hull of the lifted set of points, and by projecting the lower faces of this convex hull back onto R^d. The triangulations built this way are referred to as the regular triangulations of P. When the points are lifted to the paraboloid of equation x_(d+1) = x_1^2 + ... + x_d^2, this construction results in the Delaunay triangulation of P. Note that, in order for this construction to provide a triangulation, the lower convex hull of the lifted set of points needs to be simplicial. In the case of Delaunay triangulations, this amounts to requiring that no d + 2 points of P lie on the same sphere.
Combinatorics in the plane
Every triangulation of any set P of n points in the plane has 2n - h - 2 triangles and 3n - h - 3 edges, where h is the number of points of P on the boundary of the convex hull of P. This follows from a straightforward Euler characteristic argument.
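A quick numerical check of these counts, using SciPy's Delaunay triangulation (any triangulation of the same point set gives the same numbers, assuming the points are in general position):

import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
points = rng.random((20, 2))               # 20 random points, in general position

tri = Delaunay(points)
n = len(points)
h = len(ConvexHull(points).vertices)       # points on the convex-hull boundary
edges = {tuple(sorted(pair))
         for s in tri.simplices
         for pair in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2]))}

print(len(tri.simplices), 2 * n - h - 2)   # number of triangles: the two values agree
print(len(edges), 3 * n - h - 3)           # number of edges: the two values agree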
Algorithms to build triangulations in the plane
Triangle Splitting Algorithm: Find the convex hull of the point set P and triangulate this hull as a polygon. Choose an interior point and draw edges to the three vertices of the triangle that contains it. Continue this process until all interior points are exhausted.
Incremental Algorithm: Sort the points of P according to x-coordinates. The first three points determine a triangle. Consider the next point p in the ordered set and connect it with all previously considered points which are visible to p. Continue this process of adding one point of P at a time until all of P has been processed.
Time complexity of various algorithms
The following table reports time complexity results for the construction of triangulations of point sets in the plane, under different optimality criteria, where n is the number of points.
See also
Mesh generation
Polygon triangulation
Notes
References
Triangulation (geometry)
Point (geometry)
52,448,137 | https://en.wikipedia.org/wiki/Twisted%20sheaf | In mathematics, a twisted sheaf is a variant of a coherent sheaf. Precisely, it is specified by: an open covering in the étale topology Ui, coherent sheaves Fi over Ui, a Čech 2-cocycle θ for on the covering Ui as well as the isomorphisms
satisfying
,
The notion of twisted sheaves was introduced by Jean Giraud. The above definition due to Căldăraru is down-to-earth but is equivalent to a more sophisticated definition in terms of gerbe; see § 2.1.3 of .
See also
Reflexive sheaf
Torsion sheaf
References
Geometry | Twisted sheaf | Mathematics | 128 |
55,268,440 | https://en.wikipedia.org/wiki/Coherent%20algebra | A coherent algebra is an algebra of complex square matrices that is closed under ordinary matrix multiplication, Schur product, transposition, and contains both the identity matrix and the all-ones matrix .
Definitions
A subspace 𝒜 of the space of complex n × n matrices is said to be a coherent algebra of order n if:
I, J ∈ 𝒜.
M^T ∈ 𝒜 for all M ∈ 𝒜.
MN ∈ 𝒜 and M ∘ N ∈ 𝒜 (the Schur product) for all M, N ∈ 𝒜.
A coherent algebra 𝒜 is said to be:
Homogeneous if every matrix in 𝒜 has a constant diagonal.
Commutative if 𝒜 is commutative with respect to ordinary matrix multiplication.
Symmetric if every matrix in 𝒜 is symmetric.
The set of Schur-primitive matrices in a coherent algebra is defined as .
Dually, the set of primitive matrices in a coherent algebra is defined as .
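As a concrete illustration of the definition above, the following Python sketch verifies the closure conditions for one small, hand-picked matrix space, the span of {I, J - I} (the adjacency algebra of a complete graph). Both the example and the brute-force membership test are purely illustrative choices, not part of the theory itself.

import numpy as np

n = 4
I = np.eye(n)
J = np.ones((n, n))
basis = [I, J - I]      # span{I, J - I}: adjacency algebra of the complete graph K_n

def in_span(M, basis, tol=1e-9):
    """Check whether M lies in the linear span of the basis matrices."""
    A = np.column_stack([B.ravel() for B in basis])
    coeffs, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
    return np.linalg.norm(A @ coeffs - M.ravel()) < tol

# Closure under transposition, matrix product, and Schur (entrywise) product,
# checked for every pair of basis elements; I and J lie in the span by construction.
checks = [in_span(B.T, basis) for B in basis]
checks += [in_span(B @ C, basis) for B in basis for C in basis]
checks += [in_span(B * C, basis) for B in basis for C in basis]
print(all(checks))      # True: span{I, J - I} is a coherent algebra of order n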
Examples
The centralizer of a group of permutation matrices is a coherent algebra, i.e. is a coherent algebra of order if for a group of permutation matrices. Additionally, the centralizer of the group of permutation matrices representing the automorphism group of a graph is homogeneous if and only if is vertex-transitive.
The span of the set of matrices relating pairs of elements lying in the same orbit of a diagonal action of a finite group on a finite set is a coherent algebra, i.e. where is defined as for all of a finite set acted on by a finite group .
The span of a regular representation of a finite group as a group of permutation matrices over is a coherent algebra.
Properties
The intersection of a set of coherent algebras of order n is a coherent algebra of order n.
The tensor product of coherent algebras is a coherent algebra: if 𝒜 and ℬ are coherent algebras of orders m and n, then their tensor (Kronecker) product is a coherent algebra of order mn.
The symmetrization of a commutative coherent algebra is a coherent algebra.
If is a coherent algebra, then for all , , and if is homogeneous.
Dually, if is a commutative coherent algebra (of order ), then for all , , and as well.
Every symmetric coherent algebra is commutative, and every commutative coherent algebra is homogeneous.
A coherent algebra is commutative if and only if it is the Bose–Mesner algebra of a (commutative) association scheme.
A coherent algebra forms a principal ideal ring under Schur product; moreover, a commutative coherent algebra forms a principal ideal ring under ordinary matrix multiplication as well.
See also
Association scheme
Bose–Mesner algebra
References
Algebras
Algebraic combinatorics | Coherent algebra | Mathematics | 480 |
12,538,295 | https://en.wikipedia.org/wiki/Madagascar%20sucker-footed%20bat | The Madagascar sucker-footed bat, Old World sucker-footed bat, or simply sucker-footed bat (Myzopoda aurita) is a species of bat in the family Myzopodidae endemic to Madagascar, especially in the eastern part of the forests. The genus was thought to be monospecific until a second species, Myzopoda schliemanni, was discovered in the central western lowlands. It was classified as Vulnerable in the 1996 IUCN Red List of Threatened Species but is now known to be more abundant and was reclassified in 2008 as of "Least Concern".
The bat is named for the presence of small cups on its wrists and ankles. They roost inside the rolled leaves of the traveller's tree, using their suckers to attach themselves to the smooth surface. Despite the name, it is now known that the bats do not use suction to attach themselves to roost sites, but instead use a form of wet adhesion by secreting a body fluid at their pads. The ankle and wrist pads of the bat are controlled by muscle contraction and allow the bat to separate the pads to reduce the adhesive effect. This allows the bats to climb with ease and to remove themselves from surfaces after sticking. Due to this property the Madagascar sucker-footed bat is one of the few bat species that roosts with its head up rather than upside down. This is so the bat does not accidentally lose control of the adhesive pads while it is sleeping due to the muscle tension associated with roosting upside down.
Because of their unique habitat, sucker-footed bats do not carry ectoparasites, as the smooth surface of the Ravenala leaves is inhospitable to small arthropods. The majority of sucker-footed bats caught in eastern Madagascar were within or close to stands of traveller's trees, and according to research, the maximum distance they will travel while foraging is about . Sucker-footed bats feed largely on beetles and small moths.
References
External links
"Monastic" Malagasy bat mystifies experts BBC Earth News 13 July 2010
Myzopodidae
Bats of Africa
Endemic fauna of Madagascar
Mammals of Madagascar
EDGE species
Taxa named by Alphonse Milne-Edwards
Taxa named by Alfred Grandidier
Mammals described in 1878
Taxonomy articles created by Polbot | Madagascar sucker-footed bat | Biology | 467 |
60,468,045 | https://en.wikipedia.org/wiki/Wizzy%20Active%20Lifestyle%20Telephone | The Wizzy Active Lifestyle Telephone (W.A.L.T.) was a prototype "phone companion" created by Apple Computer in collaboration with BellSouth. W.A.L.T. featured "touchscreen, send/receive fax functionality, on-display caller ID, a built-in address book, customizable ringtones, and online banking access". The system was based on the PowerBook 100, and included touchscreen, stylus, and handwriting recognition. The operating system was based on System 6 with a HyperCard GUI. Announced in 1993, the system was not mass-produced. A prototype machine was sold on eBay in 2012 for US$8,000. In 2019 a video demonstration of a prototype machine was uploaded to the internet.
References
External links
YouTube video of a working W.A.L.T. prototype
Apple Inc. hardware
Macintosh platform
Network computer (brand) | Wizzy Active Lifestyle Telephone | Technology | 188 |
26,233,271 | https://en.wikipedia.org/wiki/Variator | A variator is a device that can change its parameters, or can change parameters of other devices.
Often a variator is a mechanical power transmission device that can change its gear ratio continuously (rather than in steps).
Examples
Beier variable-ratio gear
Continuously variable transmission
Evans friction cone
NuVinci continuously variable transmission
Variator (variable valve timing)
Variomatic
VANOS
See also
Epicyclic gearing
References
Mechanical power control | Variator | Physics,Engineering | 92 |
1,305,761 | https://en.wikipedia.org/wiki/Afshar%20experiment | The Afshar experiment is a variation of the double-slit experiment in quantum mechanics, devised and carried out by Shahriar Afshar in 2004. In the experiment, light generated by a laser passes through two closely spaced pinholes, and is refocused by a lens so that the image of each pinhole falls on a separate single-photon detector. In addition, a grid of thin wires is placed just before the lens on the dark fringes of an interference pattern.
Afshar claimed that the experiment gives information about which path a photon takes through the apparatus, while simultaneously allowing interference between the paths to be observed. According to Afshar, this violates the complementarity principle of quantum mechanics.
The experiment has been analyzed and repeated by a number of investigators. There are several theories that explain the effect without violating complementarity. John G. Cramer claims the experiment provides evidence for the transactional interpretation of quantum mechanics over other interpretations.
History
Shahriar Afshar's experimental work was done initially at the Institute for Radiation-Induced Mass Studies (IRIMS) in Boston and later reproduced at Harvard University, while he was there as a visiting researcher. The results were first presented at a seminar at Harvard in March 2004. The experiment was featured as the cover story in the July 24, 2004 edition of the popular science magazine New Scientist endorsed by professor John G. Cramer of the University of Washington. The New Scientist feature article generated many responses, including various letters to the editor that appeared in the August 7 and August 14, 2004, issues, arguing against the conclusions being drawn by Afshar. The results were published in a SPIE conference proceedings in 2005. A follow-up paper was published in a scientific journal Foundations of Physics in January 2007 and featured in New Scientist in February 2007.
Experimental setup
The experiment uses a setup similar to that for the double-slit experiment. In Afshar's variant, light generated by a laser passes through two closely spaced circular pinholes (not slits). After the dual pinholes, a lens refocuses the light so that the image of each pinhole falls on separate photon-detectors (Fig. 1). With pinhole 2 closed, a photon that goes through pinhole 1 impinges only on photon detector 1. Similarly, with pinhole 1 closed, a photon that goes through pinhole 2 impinges only on photon detector 2. With both pinholes open, Afshar claims, citing Wheeler in support, that pinhole 1 remains correlated to photon Detector 1 (and vice versa for pinhole 2 to photon Detector 2), and therefore that which-way information is preserved when both pinholes are open.
When the light acts as a wave, because of quantum interference one can observe that there are regions that the photons avoid, called dark fringes. A grid of thin wires is placed just before the lens (Fig. 2) so that the wires lie in the dark fringes of an interference pattern which is produced by the dual pinhole setup. If one of the pinholes is blocked, the interference pattern will no longer be formed, and the grid of wires causes appreciable diffraction in the light and blocks some of it from detection by the corresponding photon detector. However, when both pinholes are open, the effect of the wires is negligible, comparable to the case in which there are no wires placed in front of the lens (Fig. 3), because the wires lie in the dark fringes of an interference pattern. The effect is not dependent on the light intensity (photon flux).
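A minimal numerical illustration of why thin wires placed at the dark fringes intercept almost no light when both pinholes are open: the idealized two-pinhole far-field intensity drops to zero at the fringe minima, while with one pinhole blocked the same positions are fully illuminated. The wavelength, pinhole separation and geometry below are arbitrary illustrative values, not Afshar's actual experimental parameters.

import numpy as np

wavelength = 650e-9   # m (illustrative value, not Afshar's actual laser)
separation = 2e-3     # m, distance between the two pinholes (assumption)
distance = 1.0        # m, from the pinholes to the wire-grid plane (assumption)

x = np.linspace(-2e-3, 2e-3, 4001)   # positions across the interference pattern
both_open = np.cos(np.pi * separation * x / (wavelength * distance)) ** 2
one_open = np.full_like(x, 0.25)     # a single pinhole produces no fringes

# Positions of the dark fringes, where Afshar's wires are placed:
m = np.arange(-5, 6)
dark = (m + 0.5) * wavelength * distance / separation
idx = np.searchsorted(x, dark)

print(both_open[idx].max())   # ~0: wires at the minima intercept almost no light
print(one_open[idx].max())    # 0.25: with one pinhole blocked, the wires are illuminated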
Afshar's interpretation
Afshar's conclusion is that, when both pinholes are open, the light exhibits wave-like behavior when going past the wires, since the light goes through the spaces between the wires but avoids the wires themselves, but also exhibits particle-like behavior after going through the lens, with photons going to a correlated photo-detector. Afshar argues that this behavior contradicts the principle of complementarity to the extent that it shows both wave and particle characteristics in the same experiment for the same photons.
Afshar asserts that there is simultaneously high visibility V of interference as well as high distinguishability D (corresponding to which-path information), so that V^2 + D^2 > 1, and the wave-particle duality relation V^2 + D^2 ≤ 1 is violated.
Reception
Specific criticism
A number of scientists have published criticisms of Afshar's interpretation of his results, some of which reject the claims of a violation of complementarity, while differing in the way they explain how complementarity copes with the experiment. For example, one paper contests Afshar's core claim, that the Englert–Greenberger duality relation is violated. The researchers re-ran the experiment, using a different method for measuring the visibility of the interference pattern than that used by Afshar, and found no violation of complementarity, concluding "This result demonstrates that the experiment can be perfectly explained by the Copenhagen interpretation of quantum mechanics."
Below is a synopsis of papers by several critics highlighting their main arguments and the disagreements they have amongst themselves:
Ruth Kastner, Committee on the History and Philosophy of Science, University of Maryland, College Park.
Kastner's criticism, published in a peer-reviewed paper, proceeds by setting up a thought experiment and applying Afshar's logic to it to expose its flaw. She proposes that Afshar's experiment is equivalent to preparing an electron in a spin-up state and then measuring its sideways spin. This does not imply that one has found out the up-down spin state and the sideways spin state of any electron simultaneously. Applied to Afshar's experiment: "Nevertheless, even with the grid removed, since the photon is prepared in a superposition S, the measurement at the final screen at t2 never really is a 'which-way' measurement (the term traditionally attached to the slit-basis observable ), because it cannot tell us 'which slit the photon actually went through.'
Daniel Reitzner, Research Center for Quantum Information, Institute of Physics, Slovak Academy of Sciences, Bratislava, Slovakia.
Reitzner performed numerical simulations, published in a preprint, of Afshar's arrangement and obtained the same results that Afshar obtained experimentally. From this he argues that the photons exhibit wave behavior, including high fringe visibility but no which-way information, up to the point they hit the detector: "In other words the two-peaked distribution is an interference pattern and the photon behaves as a wave and exhibits no particle properties until it hits the plate. As a result a which-way information can never be obtained in this way."
W. G. Unruh, Professor of Physics at University of British Columbia
Unruh, like Kastner, proceeds by setting up an arrangement that he feels is equivalent but simpler. The size of the effect is larger so that it is easier to see the flaw in the logic. In Unruh's view that flaw is, in the case that an obstacle exists at the position of the dark fringes, "drawing the inference that IF the particle was detected in detector 1, THEN it must have come from path 1. Similarly, IF it were detected in detector 2, then it came from path 2." In other words, he accepts the existence of an interference pattern but rejects the existence of which-way information.
Luboš Motl, Former assistant professor of physics, Harvard University.
Motl's criticism, published in his blog, is based on an analysis of Afshar's actual setup, instead of proposing a different experiment like Unruh and Kastner. In contrast to Unruh and Kastner, he believes that which-way information always exists, but argues that the measured contrast of the interference pattern is actually very low: "Because this signal (disruption) from the second, middle picture is small (equivalently, it only affects a very small portion of the photons), the contrast V is also very small, and goes to zero for infinitely thin wires." He also argues that the experiment can be understood with classical electrodynamics and has "nothing to do with quantum mechanics".
Ole Steuernagel, School of Physics, Astronomy and Mathematics, University of Hertfordshire, UK.
Steuernagel makes a quantitative analysis of the various transmitted, refracted, and reflected modes in a setup that differs only slightly from Afshar's. He concludes that the Englert-Greenberger duality relation is strictly satisfied, and in particular that the fringe visibility for thin wires is small. Like some of the other critics, he emphasizes that inferring an interference pattern is not the same as measuring one: "Finally, the greatest weakness in the analysis given by Afshar is the inference that an interference pattern must be present."
Andrew Knight, Department of Physics, New York University
Argues that Afshar's claim to violate complementarity is a simple logical inconsistency: by setting up the experiment so that photons are spatially coherent over the two pinholes, the pinholes are necessarily indistinguishable by those photons. “In other words, Afshar et al. claim in one breath to have set up the experiment so that pinholes A and B are inherently indistinguishable by certain photons [specifically, photons that are produced to be spatially coherent over the width spanned by pinholes that are thus incapable of distinguishing them], and in another breath to have distinguished pinholes A and B with those same photons.”
Specific support
Afshar's coauthors Eduardo Flores and Ernst Knoesel criticize Kastner's setup and propose an alternative experimental setup. By removing the lens of Afshar and causing two beams to overlap at a small angle, Flores et al. aimed to show that conservation of momentum guarantee the preservation of which-path information when both pinholes are open. But this experiment is still subject to Motl's objection that the 2 beams have a sub-microscopic diffraction pattern created by the convergence of the beams before the slits; the result would have been the measuring of which slit was open before the wires were ever reached.
John G. Cramer adopts Afshar's interpretation of the experiment to support his own transactional interpretation of quantum mechanics over the Copenhagen interpretation and the many-worlds interpretation.
See also
Wheeler's delayed choice experiment
Delayed choice quantum eraser
Weak measurement
Wheeler–Feynman absorber theory
References
Quantum measurement
Physics experiments
Philosophy of physics | Afshar experiment | Physics | 2,189 |
69,264,543 | https://en.wikipedia.org/wiki/Sodium%20ozonide | Sodium ozonide (NaO3) is an oxygen-rich compound of sodium. As an ozonide, it contains the ozonide anion (O3−).
Some experiments report creating sodium ozonide by applying ozone to sodium hydroxide, but the substance was not pure, and the claimed stability at room temperature was contradicted by other reports. This is in contrast to potassium ozonide, rubidium ozonide, and caesium ozonide, which can be synthesized by applying ozone directly to the metal. Instead, sodium ozonide is made in ammonia solution using ion exchange and cryptands.
The compound is unstable at room temperature and decomposes at -10 °C to sodium superoxide and oxygen.
However, the compound can be stored for months at -18 °C.
References
Sodium compounds
Ozonides | Sodium ozonide | Chemistry | 170 |
516,757 | https://en.wikipedia.org/wiki/Video%20camera%20tube | Video camera tubes are devices based on the cathode-ray tube that were used in television cameras to capture television images, prior to the introduction of charge-coupled device (CCD) image sensors in the 1980s. Several different types of tubes were in use from the early 1930s, and as late as the 1990s.
In these tubes, an image of the scene to be broadcast is focused on a target, and an electron beam is scanned across it. This generates a current that depends on the brightness of the image on the target at the scan point. The size of the striking ray is tiny compared to the size of the target, allowing 480–486 horizontal scan lines per image in the NTSC format, 576 lines in PAL, and as many as 1035 lines in Hi-Vision.
Cathode-ray tube
Any vacuum tube which operates using a focused beam of electrons, originally called cathode rays, is known as a cathode-ray tube (CRT). These are usually seen as display devices as used in older (i.e., non-flat panel) television receivers and computer displays. The camera pickup tubes described in this article are also CRTs, but they display no image.
Early research
In June 1908, the scientific journal Nature published a letter in which Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), discussed how a fully electronic television system could be realized by using cathode-ray tubes (or "Braun" tubes, after their inventor, Karl Braun) as both imaging and display devices. He noted that the "real difficulties lie in devising an efficient transmitter", and that it was possible that "no photoelectric phenomenon at present known will provide what is required". A cathode-ray tube was successfully demonstrated as a displaying device by the German Professor Max Dieckmann in 1906; his experimental results were published by the journal Scientific American in 1909. Campbell-Swinton later expanded on his vision in a presidential address given to the Röntgen Society in November 1911. The photoelectric screen in the proposed transmitting device was a mosaic of isolated rubidium cubes. His concept for a fully electronic television system was later popularized as the "Campbell-Swinton Electronic Scanning System" by Hugo Gernsback and H. Winfield Secor in the August 1915 issue of the popular magazine Electrical Experimenter and by Marcus J. Martin in the 1921 book The Electrical Transmission of Photographs.
In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some "not very successful experiments" he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam. These experiments were conducted before March 1914, when Minchin died, but they were later repeated by two different teams in 1937, by H. Miller and J. W. Strange from EMI, and by H. Iams and A. Rose from RCA. Both teams succeeded in transmitting "very faint" images with the original Campbell-Swinton's selenium-coated plate, but much better images were obtained when the metal plate was covered with zinc sulphide or selenide, or with aluminum or zirconium oxide treated with caesium. These experiments would form the base of the future vidicon. A description of a CRT imaging device also appeared in a patent application filed by Edvard-Gustav Schoultz in France in August 1921, and published in 1922, although a working device was not demonstrated until some years later.
Experiments with image dissectors
An image dissector is a camera tube that creates an "electron image" of a scene from photocathode emissions (electrons) which pass through a scanning aperture to an anode, which serves as an electron detector. Among the first to design such a device were German inventors Max Dieckmann and Rudolf Hell, who had titled their 1925 patent application Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television). The term may apply specifically to a dissector tube employing magnetic fields to keep the electron image in focus, an element lacking in Dieckmann and Hell's design, and in the early dissector tubes built by American inventor Philo Farnsworth.
Dieckmann and Hell submitted their application to the German patent office in April 1925, and a patent was issued in October 1927. Their experiments on the image dissector were announced in September 1927 issue of the popular magazine Discovery and in the May 1928 issue of the magazine Popular Radio. However, they never transmitted a clear and well focused image with such a tube.
In January 1927, American inventor and television pioneer Philo T. Farnsworth applied for a patent for his Television System that included a device for "the conversion and dissecting of light".
Its first moving image was successfully transmitted on September 7 of 1927,
and a patent was issued in 1930. Farnsworth quickly made improvements to the device, among them introducing an electron multiplier made of nickel and using a "longitudinal magnetic field" in order to sharply focus the electron image.
The improved device was demonstrated to the press in early September 1928.
The introduction of a multipactor in October 1933 and a multi-dynode "electron multiplier" in 1937 made Farnsworth's image dissector the first practical version of a fully electronic imaging device for television. It had very poor light sensitivity, and was therefore primarily useful only where illumination was exceptionally high (typically over 685 cd/m²). However, it was ideal for industrial applications, such as monitoring the bright interior of an industrial furnace. Due to their poor light sensitivity, image dissectors were rarely used in television broadcasting, except to scan film and other transparencies.
In April 1933, Farnsworth submitted a patent application also entitled Image Dissector, but which actually detailed a CRT-type camera tube. This is among the first patents to propose the use of a "low-velocity" scanning beam and RCA had to buy it in order to sell image orthicon tubes to the general public. However, Farnsworth never transmitted a clear and well focused image with such a tube.
Dissectors were used only briefly for research in television systems before being replaced by different much more sensitive tubes based on the charge-storage phenomenon like the iconoscope during the 1930s. Although camera tubes based on the idea of image dissector technology quickly and completely fell out of use in the field of television broadcasting, they continued to be used for imaging in early weather satellites and the Lunar lander, and for star attitude tracking in the Space Shuttle and the International Space Station.
Operation
The optical system of the image dissector focuses an image onto a photocathode mounted inside a high vacuum. As light strikes the photocathode, electrons are emitted in proportion to the intensity of the light (see photoelectric effect). The entire electron image is deflected and a scanning aperture permits only those electrons emanating from a very small area of the photocathode to be captured by the detector at any given time. The output from the detector is an electric current whose magnitude is a measure of the brightness of the corresponding area of the image. The electron image is periodically deflected horizontally and vertically ("raster scanning") such that the entire image is read by the detector many times per second, producing an electrical signal that can be conveyed to a display device, such as a CRT monitor, to reproduce the image.
The image dissector has no "charge storage" characteristic; the vast majority of electrons emitted by the photocathode are excluded by the scanning aperture, and thus wasted rather than being stored on a photo-sensitive target.
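The practical consequence of this lack of charge storage can be seen with a toy numerical model: below, a "dissector-like" readout uses each pixel's photoelectrons only while the aperture is passing over that pixel, whereas a "storage-type" readout (as in the charge-storage tubes described in the next section) integrates photoelectrons over the whole frame period. All numbers are arbitrary and purely illustrative.

# Toy comparison of instantaneous sampling (image dissector) versus charge storage.
# The pixel count, frame rate and photoelectron rate are illustrative assumptions.
pixels = 100 * 100            # toy image size
frame_time = 1.0 / 30.0       # seconds per frame
rate = 1e6                    # photoelectrons per second per pixel

# Image dissector: a pixel contributes only while the scanning aperture passes over it,
# i.e. for 1/pixels of the frame time; the rest of its photoelectrons are wasted.
dissector_signal = rate * frame_time / pixels

# Charge-storage tube: each pixel accumulates charge for the whole frame period
# and is read out once per scan.
storage_signal = rate * frame_time

print(f"dissector: {dissector_signal:.1f} electrons per pixel per readout")
print(f"storage:   {storage_signal:.0f} electrons per pixel per readout")
print(f"gain from charge storage: ~{storage_signal / dissector_signal:.0f}x")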
Charge-storage tubes
Iconoscope
The early electronic camera tubes (like the image dissector) suffered from a very disappointing and fatal flaw: they scanned the subject, and what was seen at each point was only the tiny piece of light viewed at the instant that the scanning system passed over it. A practical, functional camera tube needed a different technological approach, which later became known as the charge-storage camera tube. It was based on a new physical phenomenon which was discovered and patented in Hungary in 1926, but became widely understood and recognised only from around 1930.
An iconoscope is a camera tube that projects an image on a special charge storage plate containing a mosaic of electrically isolated photosensitive granules separated from a common plate by a thin layer of isolating material, somewhat analogous to the human eye's retina and its arrangement of photoreceptors. Each photosensitive granule constitutes a tiny capacitor that accumulates and stores electrical charge in response to the light striking it. An electron beam periodically sweeps across the plate, effectively scanning the stored image and discharging each capacitor in turn such that the electrical output from each capacitor is proportional to the average intensity of the light striking it between each discharge event.
After Hungarian engineer Kálmán Tihanyi studied Maxwell's equations, he discovered a new, hitherto unknown physical phenomenon, which led to a breakthrough in the development of electronic imaging devices. He named the new phenomenon the charge-storage principle. (further information: Charge-storage principle)
The problem of low sensitivity to light resulting in low electrical output from transmitting or camera tubes would be solved with the introduction of charge-storage technology by Tihanyi in the beginning of 1925. His solution was a camera tube that accumulated and stored electrical charges (photoelectrons) within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he dubbed Radioskop. After further refinements included in a 1928 patent application, Tihanyi's patent was declared void in Great Britain in 1930, and so he applied for patents in the United States. Tihanyi's charge storage idea remains a basic principle in the design of imaging devices for television to the present day.
In 1924, while employed by the Westinghouse Electric Corporation in Pittsburgh, Pennsylvania, Russian-born American engineer Vladimir Zworykin presented a project for a totally electronic television system to the company's general manager. In July 1925, Zworykin submitted a patent application titled Television System that included a charge storage plate constructed of a thin layer of isolating material (aluminum oxide) sandwiched between a screen (300 mesh) and a colloidal deposit of photoelectric material (potassium hydride) consisting of isolated globules. The following description can be read between lines 1 and 9 in page 2: "The photoelectric material, such as potassium hydride, is evaporated on the aluminum oxide, or other insulating medium, and treated so as to form a colloidal deposit of potassium hydride consisting of minute globules. Each globule is very active photoelectrically and constitutes, to all intents and purposes, a minute individual photoelectric cell". Its first image was transmitted in late summer of 1925, and a patent was issued in 1928. However the quality of the transmitted image failed to impress H.P. Davis, the general manager of Westinghouse, and Zworykin was asked "to work on something useful". A patent for a television system was also filed by Zworykin in 1923, but this filing is not a definitive reference because extensive revisions were done before a patent was issued fifteen years later and the file itself was divided into two patents in 1931.
The first practical iconoscope was constructed in 1931 by Sanford Essig, when he accidentally left a silvered mica sheet in the oven too long. Upon examination with a microscope, he noticed that the silver layer had broken up into a myriad of tiny isolated silver globules. He also noticed that, "the tiny dimension of the silver droplets would enhance the image resolution of the iconoscope by a quantum leap". As head of television development at Radio Corporation of America (RCA), Zworykin submitted a patent application in November 1931, and it was issued in 1935. Nevertheless, Zworykin's team was not the only engineering group working on devices that used a charge storage plate. In 1932, the EMI engineers Tedham and McGee under the supervision of Isaac Shoenberg applied for a patent for a new device they dubbed the "Emitron". A 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace in 1936, and patents were issued in the United Kingdom in 1934 and in the US in 1937.
The iconoscope was presented to the general public at a press conference in June 1933, and two detailed technical papers were published in September and October of the same year. Unlike the Farnsworth image dissector, the Zworykin iconoscope was much more sensitive, useful with an illumination on the target between 40 and 215 lux (4–20 ft-c). It was also easier to manufacture and produced a very clear image. The iconoscope was the primary camera tube used by RCA broadcasting from 1936 until 1946, when it was replaced by the image orthicon tube.
Super-Emitron and image iconoscope
The original iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially when compared to the high definition mechanical scanning systems then becoming available. The EMI team under the supervision of Isaac Shoenberg analyzed how the Emitron (or iconoscope) produces an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum. This is because secondary electrons released from the mosaic of the charge storage plate when the scanning beam sweeps across it may be attracted back to the positively charged mosaic, thus neutralizing many of the stored charges. Lubszynski, Rodda, and McGee realized that the best solution was to separate the photo-emission function from the charge storage one, and so communicated their results to Zworykin.
The new video camera tube developed by Lubszynski, Rodda and McGee in 1934 was dubbed "the super-Emitron". This tube is a combination of the image dissector and the Emitron. It has an efficient photocathode that transforms the scene light into an electron image; the latter is then accelerated towards a target specially prepared for the emission of secondary electrons. Each individual electron from the electron image produces several secondary electrons after reaching the target, so that an amplification effect is produced. The target is constructed of a mosaic of electrically isolated metallic granules separated from a common plate by a thin layer of isolating material, so that the positive charge resulting from the secondary emission is stored in the granules. Finally, an electron beam periodically sweeps across the target, effectively scanning the stored image, discharging each granule, and producing an electronic signal like in the iconoscope.
The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes and, in some cases, this ratio was considerably greater. It was used for an outside broadcast by the BBC, for the first time, on Armistice Day 1937, when the general public could watch in a television set how the King laid a wreath at the Cenotaph. This was the first time that anyone could broadcast a live street scene from cameras installed on the roof of neighboring buildings.
On the other hand, in 1934, Zworykin shared some patent rights with the German licensee company Telefunken. The image iconoscope (Superikonoskop in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron, but its target is constructed of a thin layer of isolating material placed on top of a conductive base; the mosaic of metallic granules is absent. The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth, because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925, two years before Farnsworth did the same in the United States.
The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and plumbicon tubes. It represented the European tradition in electronic tubes, competing against the American tradition represented by the image orthicon. The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games and continued to produce and commercialize it from 1940 to 1955; the Dutch company Philips then produced and commercialized the image iconoscope and multicon from 1952 until 1963, when it was replaced by the much better Plumbicon.
Operation
The super-Emitron is a combination of the image dissector and the Emitron. The scene image is projected onto an efficient continuous-film semitransparent photocathode that transforms the scene light into an electron image; the latter is then accelerated (and focused) via electromagnetic fields towards a target specially prepared for the emission of secondary electrons. Each individual electron from the electron image produces several secondary electrons after reaching the target, so that an amplification effect is produced, and the resulting positive charge is proportional to the integrated intensity of the scene light. The target is constructed of a mosaic of electrically isolated metallic granules separated from a common plate by a thin layer of isolating material, so that the positive charge resulting from the secondary emission is stored in the capacitor formed by the metallic granule and the common plate. Finally, an electron beam periodically sweeps across the target, effectively scanning the stored image and discharging each capacitor in turn, such that the electrical output from each capacitor is proportional to the average intensity of the scene light between each discharge event (as in the iconoscope).
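To make the charge-storage arithmetic concrete, the following minimal Python sketch estimates the positive charge that accumulates on a single granule between two beam scans. The secondary-emission gain, photoelectron rate, granule capacitance and frame period are all hypothetical round numbers chosen only for illustration; none of them come from the article.

```python
# Minimal sketch of super-Emitron charge storage; all numbers are illustrative.
ELECTRON_CHARGE = 1.602e-19  # coulombs

def stored_charge(photoelectron_rate, frame_period, secondary_gain):
    """Positive charge accumulated on one granule between two beam scans.

    Each image electron hitting the target ejects `secondary_gain` secondary
    electrons, leaving roughly (secondary_gain - 1) elementary charges of net
    positive charge behind on the granule.
    """
    n_image_electrons = photoelectron_rate * frame_period
    return n_image_electrons * (secondary_gain - 1) * ELECTRON_CHARGE

# Hypothetical example: 1e8 image electrons/s on one granule, 1/25 s frame,
# secondary-emission gain of 5, granule capacitance of 1 pF.
q = stored_charge(1e8, 1 / 25, 5)
print(f"stored charge per granule: {q:.2e} C")            # ~2.6e-12 C
print(f"voltage swing seen by the scanning beam: {q / 1e-12:.2f} V")
```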
The image iconoscope is essentially identical to the super-Emitron, but its target is constructed of a thin layer of isolating material placed on top of a conductive base; the mosaic of metallic granules is absent. Therefore, secondary electrons are emitted from the surface of the isolating material when the electron image reaches the target, and the resulting positive charges are stored directly onto the surface of the isolated material.
Orthicon and CPS Emitron
The original iconoscope was very noisy due to the secondary electrons released from the photoelectric mosaic of the charge storage plate when the scanning beam swept across it. An obvious solution was to scan the mosaic with a low-velocity electron beam whose energy in the neighborhood of the plate was low enough that no secondary electrons were emitted at all. That is, an image is projected onto the photoelectric mosaic of a charge storage plate, so that positive charges are produced and stored there due to photo-emission and capacitance, respectively. These stored charges are then gently discharged by a low-velocity electron scanning beam, preventing the emission of secondary electrons. Not all the electrons in the scanning beam can be absorbed in the mosaic, because the stored positive charges are proportional to the integrated intensity of the scene light. The remaining electrons are then deflected back into the anode, captured by a special grid, or deflected back into an electron multiplier.
Low-velocity scanning beam tubes have several advantages: there are low levels of spurious signals and high efficiency of conversion of light into signal, so that the signal output is maximum. However, there are serious problems as well, because the electron beam spreads and accelerates in a direction parallel to the target when it scans the image's borders and corners, so that it produces secondary electrons and one gets an image that is well focused in the center but blurry at the borders. Henroteau was among the first inventors to propose, in 1929, the use of low-velocity electrons for stabilizing the potential of a charge storage plate, but Lubszynski and the EMI team were the first engineers to transmit a clear and well focused image with such a tube. Another improvement is the use of a semitransparent charge storage plate. The scene image is then projected onto the back side of the plate, while the low-velocity electron beam scans the photoelectric mosaic at the front side. This configuration allows the use of a straight camera tube, because the scene to be transmitted, the charge storage plate, and the electron gun can be aligned one after the other.
The first fully functional low-velocity scanning beam tube, the CPS Emitron, was invented and demonstrated by the EMI team under the supervision of Sir Isaac Shoenberg. In 1934, the EMI engineers Blumlein and McGee filed for patents for television transmitting systems where a charge storage plate was shielded by a pair of special grids, a negative (or slightly positive) grid lay very close to the plate, and a positive one was placed further away. The velocity and energy of the electrons in the scanning beam were reduced to zero by the decelerating electric field generated by this pair of grids, and so a low-velocity scanning beam tube was obtained. The EMI team kept working on these devices, and Lubszynski discovered in 1936 that a clear image could be produced if the trajectory of the low-velocity scanning beam was nearly perpendicular (orthogonal) to the charge storage plate in a neighborhood of it. The resulting device was dubbed the cathode potential stabilized Emitron, or CPS Emitron. The industrial production and commercialization of the CPS Emitron had to wait until the end of the Second World War; it was widely used in the UK until 1963, when it was replaced by the much better Plumbicon.
On the other side of the Atlantic, the RCA team led by Albert Rose began working in 1935 on a low-velocity scanning beam device they came to dub the orthicon. Iams and Rose solved the problem of guiding the beam and keeping it in focus by installing specially designed deflection plates and deflection coils near the charge storage plate to provide a uniform axial magnetic field. The orthicon's performance was similar to that of the image iconoscope, but it was also unstable under sudden flashes of bright light, producing "the appearance of a large drop of water evaporating slowly over part of the scene".
Image orthicon
The image orthicon (sometimes abbreviated IO) was common in American broadcasting from 1946 until 1968. A combination of the image dissector and the orthicon technologies, it replaced the iconoscope, which required a great deal of light to work adequately, in the United States.
The image orthicon tube was developed at RCA by Albert Rose, Paul K. Weimer, and Harold B. Law. It represented a considerable advance in the television field, and after further development work, RCA created original models between 1939 and 1940. The National Defense Research Committee entered into a contract with RCA where the NDRC paid for its further development. Upon RCA's development of the more sensitive image orthicon tube in 1943, RCA entered into a production contract with the U.S. Navy, the first tubes being delivered in January 1944. RCA began production of image orthicons for civilian use in the second quarter of 1946.
While the iconoscope and the intermediate orthicon used capacitance between a multitude of small but discrete light sensitive collectors and an isolated signal plate for reading video information, the image orthicon employed direct charge readings from a continuous electronically charged collector. The resultant signal was immune to most extraneous signal crosstalk from other parts of the target, and could yield extremely detailed images. Image orthicon cameras were still being used by NASA for capturing Apollo/Saturn rockets nearing orbit, although the television networks had phased the cameras out.
An image orthicon camera can take television pictures by candlelight because of the more ordered light-sensitive area and the presence of an electron multiplier at the base of the tube, which operated as a high-efficiency amplifier. It also has a logarithmic light sensitivity curve similar to the human eye. However, it tends to flare in bright light, causing a dark halo to be seen around the object; this anomaly was referred to as blooming in the broadcast industry when image orthicon tubes were in operation. Image orthicons were used extensively in the early color television cameras such as the RCA TK-40/41, where the increased sensitivity of the tube was essential to overcome the very inefficient, beam-splitting optical system of the camera.
The image orthicon tube was at one point colloquially referred to as an Immy. Harry Lubcke, the then-President of the Academy of Television Arts & Sciences, decided to have their award named after this nickname. Since the statuette was female, it was feminized into Emmy. The image orthicon was used until the end of black-and-white television production in the 1960s.
Operation
An image orthicon consists of three parts: a photocathode with an image store (target), a scanner that reads this image (an electron gun), and a multistage electron multiplier.
In the image store, light falls upon the photocathode which is a photosensitive plate at a very negative potential (approx. -600 V), and is converted into an electron image (a principle borrowed from the image dissector). This electron rain is then accelerated towards the target (a very thin glass plate acting as a semi-isolator) at ground potential (0 V), and passes through a very fine wire mesh (nearly 200 or 390 wires per cm), very near (a few hundredths of a cm) and parallel to the target, acting as a screen grid at a slightly positive voltage (approx +2 V). Once the image electrons reach the target, they cause a splash of electrons by the effect of secondary emission. On average, each image electron ejects several splash electrons (thus adding amplification by secondary emission), and these excess electrons are soaked up by the positive mesh effectively removing electrons from the target and causing a positive charge on it in relation to the incident light in the photocathode. The result is an image painted in positive charge, with the brightest portions having the largest positive charge.
A sharply focused beam of electrons (a cathode ray) is generated by the electron gun at ground potential and accelerated by the anode (the first dynode of the electron multiplier) around the gun at a high positive voltage (approx. +1500 V). Once it exits the electron gun, its inertia makes the beam move away from the dynode towards the back side of the target. At this point the electrons lose speed and get deflected by the horizontal and vertical deflection coils, effectively scanning the target. Thanks to the axial magnetic field of the focusing coil, this deflection is not in a straight line; thus, when the electrons reach the target, they do so perpendicularly, avoiding a sideways component. The target is nearly at ground potential with a small positive charge; thus, when the electrons reach the target at low speed, they are absorbed without ejecting more electrons. This adds negative charge to the positive charge until the region being scanned reaches some threshold negative charge, at which point the scanning electrons are reflected by the negative potential rather than absorbed (in this process the target recovers the electrons needed for the next scan). These reflected electrons return down the cathode-ray tube toward the first dynode of the electron multiplier surrounding the electron gun, which is at high potential. The number of reflected electrons is a linear measure of the target's original positive charge, which, in turn, is a measure of brightness.
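The readout step can be summarized numerically. The sketch below models the return beam as the scanning beam minus whatever current is absorbed while neutralizing one element's stored charge, then applies the electron-multiplier gain; the beam current, dwell time and multiplier gain are assumed figures, not values from the article. In this simple model a brighter element absorbs more of the beam and therefore returns less current, so the raw return-beam signal varies inversely with scene brightness before any later processing.

```python
# Rough image-orthicon readout model; beam current, dwell time and gain are assumed.
def return_beam_current(beam_current, stored_charge, dwell_time):
    """Current heading back toward the multiplier while one picture element is scanned.

    Beam electrons are absorbed only until the element's positive charge is
    neutralized; the rest are reflected back down the tube.
    """
    absorbed = min(beam_current, stored_charge / dwell_time)
    return beam_current - absorbed

def output_current(beam_current, stored_charge, dwell_time, multiplier_gain):
    return multiplier_gain * return_beam_current(beam_current, stored_charge, dwell_time)

# Hypothetical values: 0.2 uA beam, 100 ns per picture element, multiplier gain 1000.
bright = output_current(0.2e-6, 1.5e-14, 100e-9, 1000)  # brightly lit element
dark = output_current(0.2e-6, 0.0, 100e-9, 1000)        # dark element
print(f"bright: {bright * 1e6:.0f} uA, dark: {dark * 1e6:.0f} uA")
```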
Dark halo
The mysterious dark "orthicon halo" around bright objects in an orthicon-captured image (also known as "blooming") is based on the fact that the IO relies on the emission of photoelectrons, but very bright illumination can produce more of them locally than the device can successfully deal with. At a very bright point on a captured image, a great preponderance of electrons is ejected from the photosensitive plate. So many may be ejected that the corresponding point on the collection mesh can no longer soak them up, and thus they fall back to nearby spots on the target instead, much as water splashes in a ring when a rock is thrown into it. Since the resultant splashed electrons do not contain sufficient energy to eject further electrons where they land, they will instead neutralize any positive charge that has been built-up in that region. Since darker images produce less positive charge on the target, the excess electrons deposited by the splash will be read as a dark region by the scanning electron beam.
This effect was actually cultivated by tube manufacturers to a certain extent, as a small, carefully controlled amount of the dark halo has the effect of crispening the visual image due to the contrast effect. (That is, giving the illusion of being more sharply focused than it actually is). The later vidicon tube and its descendants (see below) do not exhibit this effect, and so could not be used for broadcast purposes until special detail correction circuitry could be developed.
Vidicon
A vidicon tube is a video camera tube design in which the target material is a photoconductor. The vidicon was developed in 1950 at RCA by P. K. Weimer, S. V. Forgue and R. R. Goodrich as a simple alternative to the structurally and electrically complex image orthicon. While the initial photoconductor used was selenium, other targets—including silicon diode arrays—have been used. Vidicons with these targets are known as Si-vidicons or Ultricons.
The vidicon is a storage-type camera tube in which a charge-density pattern is formed by the imaged scene radiation on a photoconductive surface which is then scanned by a beam of low-velocity electrons. This surface is on a glass plate and is also called the target. More specifically, this glass plate is covered in a transparent, electrically conductive, indium tin oxide (ITO) layer, on top of which the photoconductive surface is formed by depositing photoconductive material which can be applied as small squares with insulation between the squares. The photoconductor is normally an insulator but becomes partially conductive when struck by electrons. The output of the tube comes from the ITO layer.
The target is kept at a positive voltage of 30 volts and the cathode in the tube is at a voltage of negative 30 volts. The cathode releases electrons, which are modulated by grid G1 and accelerated by grid G2, creating an electron beam. Magnetic coils deflect, focus, and align the electron beam so it can scan the surface of the target. The beam deposits electrons on the target; when enough photons strike the target, a difference in current is produced between the two electrically conductive layers of the target, and a connection through an electrical resistor turns this difference into an output voltage. The fluctuating voltage created in the target is coupled to a video amplifier and used to reproduce the scene being imaged; in other words, it is the video output. The electrical charge produced by an image will remain in the face plate until it is scanned or until the charge dissipates. Special vidicons can have resolutions of up to 5,000 TV lines.
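As a rough illustration of how the photoconductive target turns light into a video signal, the sketch below models one picture element as a capacitor that the photoconductor discharges between scans; the charge the beam must re-deposit on the next pass stands in for the video signal. The target voltage, element capacitance, dark resistance and the assumption that resistance falls roughly in proportion to illumination are all simplifications, not figures from the article.

```python
import math

def recharge_charge(target_voltage, capacitance, dark_resistance,
                    illumination_lux, frame_period):
    """Charge the scanning beam must re-deposit on one element after a frame.

    Crude model: the photoconductor's resistance is assumed to drop in
    proportion to illumination, discharging the element's capacitance.
    """
    resistance = dark_resistance / (1 + illumination_lux)
    v_left = target_voltage * math.exp(-frame_period / (resistance * capacitance))
    return capacitance * (target_voltage - v_left)

# Hypothetical element: 30 V target, 2 pF, 1e12 ohm dark resistance, 1/25 s frame.
for lux in (0, 1, 10, 100):
    print(f"{lux:>4} lx -> video charge {recharge_charge(30.0, 2e-12, 1e12, lux, 1/25):.1e} C")
```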
By using a pyroelectric material such as triglycine sulfate (TGS) as the target, a vidicon sensitive over a broad portion of the infrared spectrum is possible. This technology was a precursor to modern microbolometer technology, and mainly used in firefighting thermal cameras.
Prior to the design and construction of the Galileo probe to Jupiter in the late 1970s to early 1980s, NASA used vidicon cameras on nearly all the unmanned deep space probes equipped with remote sensing capability. Vidicon tubes were also used aboard the first three Landsat earth imaging satellites launched in 1972, as part of each spacecraft's Return Beam Vidicon (RBV) imaging system. The Uvicon, a UV variant of the vidicon, was also used by NASA for UV duties.
Vidicon tubes were popular in 1970s and 1980s, after which they were rendered obsolete by solid-state image sensors, with the charge-coupled device (CCD) and then the CMOS sensor.
All vidicon and similar tubes are prone to image lag, better known as ghosting, smearing, burn-in, comet tails, luma trails and luminance blooming. Image lag is visible as noticeable (usually white or colored) trails that appear after a bright object (such as a light or reflection) has moved, leaving a trail that eventually fades into the image. It cannot be avoided or eliminated, as it is inherent to the technology. The degree to which the image generated by the vidicon is affected depends on the properties of the target material, the capacitance of the target (known as the storage effect), and the resistance of the electron beam used to scan the target. The higher the capacitance of the target, the higher the charge it can hold and the longer it will take for the trail to disappear. The remnant charges on the target eventually dissipate, making the trail disappear.
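The dependence of lag on target capacitance can be made concrete with a back-of-the-envelope decay model. The effective beam resistance, the capacitances and the 5% visibility threshold below are invented round numbers, used only to show that a higher-capacitance target holds its trail for many more fields.

```python
import math

def fields_until_faded(target_capacitance, beam_resistance,
                       field_period=1 / 50, visibility_threshold=0.05):
    """Fields needed for a residual highlight to fall below the visibility threshold.

    The residual charge is assumed to decay exponentially with time constant
    tau = beam_resistance * target_capacitance.
    """
    tau = beam_resistance * target_capacitance
    fields, residual = 0, 1.0
    while residual > visibility_threshold:
        residual *= math.exp(-field_period / tau)
        fields += 1
    return fields

print(fields_until_faded(2e-12, 5e9))   # low-capacitance target:  ~2 fields
print(fields_until_faded(20e-12, 5e9))  # high-capacitance target: ~15 fields
```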
Vidicons can be damaged by high-intensity light exposure. Image burn-in occurs when an image is captured by a vidicon for a long time and appears as a persistent outline of the image when the scene changes; the outline disappears over time. Vidicons can become damaged by direct exposure to the sun, which causes them to develop dark spots. Vidicons often used antimony trisulfide as the photoconductive material; such tubes were not very successful because of image lag, which was seen in the RCA TK-42 color camera.
Si-vidicon (1969)
Si-vidicons (also called silicon vidicons or Epicons), vidicons that use arrays of silicon diodes as the target, were introduced in 1969 for the Picturephone. They are very resistant to burn-in, have low image lag and very high sensitivity, but are not considered suitable for broadcast TV production as they suffer from high image blooming and image non-uniformity. The targets in these tubes are made on silicon substrates with semiconductor device fabrication processes and require 10 volts to operate. These tubes could be used with an image intensifier, in which case they were known as silicon intensified tubes (SITs); these had an additional photocathode in front of the target that produced large amounts of electrons when struck by photons, and the electrons were accelerated to the target with several hundred volts. These tubes were used for tracking satellite debris.
Plumbicon (1965)
Plumbicon is a registered trademark of Philips from 1963, for its lead(II) oxide (PbO) target vidicons. It was demonstrated in 1965 at the NAB Show. Used frequently in broadcast camera applications, these tubes have low output, but a high signal-to-noise ratio. They have excellent resolution compared to image orthicons, but lack the artificially sharp edges of IO tubes, which cause some of the viewing audience to perceive them as softer. CBS Labs invented the first outboard edge enhancement circuits to sharpen the edges of Plumbicon generated images.
Philips received the 1966 Technology & Engineering Emmy Award for the Plumbicon. Targets in Plumbicons have two layers: a pure PbO layer, and a doped PbO layer. The pure PbO is an intrinsic I type semiconductor, and a layer of it is doped to create a P type PbO semiconductor, thus creating a semiconductor junction. The PbO is in crystalline form.
Plumbicons were the first commercially successful version of the Vidicon. They were smaller, had lower noise, higher sensitivity and resolution, had less image lag than Vidicons, and were a defining factor in the development of color TV cameras. The most widely used camera tubes in TV production were the Plumbicons and the Saticon.
Compared to Saticons, Plumbicons have much higher resistance to burn-in, and comet and trailing artifacts from bright lights in the shot. Saticons though, usually have slightly higher resolution. After 1980, and the introduction of the diode-gun Plumbicon tube, the resolution of both types was so high, compared to the maximum limits of the broadcasting standard, that the Saticon's resolution advantage became moot. While broadcast cameras migrated to solid-state charge-coupled devices, Plumbicon tubes remained a staple imaging device in the medical field. High resolution Plumbicons were made for the HD-MAC standard. Since PbO is not stable in air, the deposition of PbO on the target is challenging. Vistacons developed by RCA and Leddicons made by EEV also use PbO in their targets.
Until 2016, Narragansett Imaging was the last company making Plumbicons, using factories Philips built in Rhode Island, USA. While still a part of Philips, the company purchased EEV's (English Electric Valve) lead oxide camera tube business, and gained a monopoly in lead-oxide tube production. Lead oxide tubes were also made by Matsushita.
Saticon (1973)
Saticon is a registered trademark of Hitachi from 1973, also produced by Thomson and Sony. It was developed in a joint effort by Hitachi and NHK Science & Technology Research Laboratories (NHK is The Japan Broadcasting Corporation). Introduced in 1973, it has a surface consisting of selenium with trace amounts of arsenic and tellurium added (SeAsTe) to make the signal more stable; "SAT" in the name is derived from SeAsTe. Saticon tubes have an average light sensitivity equivalent to that of 64 ASA film. Compared to the Plumbicon, it has a less advantageous operating temperature range and more image lag. The target in a Saticon has a transparent, electrically conductive tin oxide layer, followed by a SeAsTe layer, a SeAs layer, and an antimony trisulfide layer that faces the electron beam.
A high-gain avalanche rushing amorphous photoconductor (HARP) made of amorphous selenium (a-Se) can be used to increase light sensitivity to up to 10 times that of conventional Saticons, and Saticons with this kind of target are known as HARPICONs. The target in HARPICONs is made up of ITO (indium tin oxide), CeO2 (cerium oxide), selenium doped with arsenic and lithium fluoride, selenium doped with arsenic and tellurium, amorphous selenium doped with arsenic, and antimony trisulfide. Saticons were made for the Sony HDVS system, used to produce early analog high-definition television using multiple sub-Nyquist sampling encoding (MUSE).
Pasecon (1972)
Originally developed by Toshiba in 1972 as chalnicon, Pasecon is a registered trademark of Heimann GmbH from 1977. Its surface consists of cadmium selenide trioxide (CdSeO3). Due to its wide spectral response, it is labelled as panchromatic selenium vidicon, hence the acronym 'pasecon'. It is not considered suitable for broadcast TV production, as it suffers from high image lag.
Newvicon (1974)
Newvicon is a registered trademark of Matsushita from 1973. Introduced in 1974, Newvicon tubes were characterized by high light sensitivity. The surface consists of a combination of zinc selenide (ZnSe) and zinc cadmium telluride (ZnCdTe). The Newvicon is not considered suitable for broadcast TV production, as it suffers from high image lag and non-uniformity.
Trinicon (1971)
Trinicon is a registered trademark of Sony from 1971. It uses a vertically striped RGB color filter over the faceplate of an otherwise standard vidicon imaging tube to segment the scan into corresponding red, green and blue segments. Only one tube was used in the camera, instead of a tube for each color, as was standard for color cameras used in television broadcasting. It is used mostly in low-end consumer cameras, such as the HVC-2200 and HVC-2400 models, though Sony also used it in some moderate cost professional cameras in the 1970s and 1980s, such as the DXC-1600 series.
Although the idea of using color stripe filters over the target was not new, the Trinicon was the only tube to use the primary RGB colors. This necessitated an additional electrode buried in the target to detect where the scanning electron beam was relative to the stripe filter. Previous color stripe systems had used colors where the color circuitry was able to separate the colors purely from the relative amplitudes of the signals. As a result, the Trinicon featured a larger dynamic range of operation.
Sony later combined the Saticon tube with the Trinicon's RGB color filter, providing low-light sensitivity and superior color. This type of tube was known as the SMF Trinicon tube, or Saticon Mixed Field. SMF Trinicon tubes were used in the HVC-2800 and HVC-2500 consumer cameras, the DXC-1800 and BVP-1 professional cameras, as well as the first Betamovie camcorders. Toshiba offered a similar tube in 1974, and Hitachi also developed a similar Saticon with a color filter in 1981.
Light biasing
All the vidicon-type tubes except the vidicon itself were able to use a light-biasing technique to improve sensitivity and contrast. The photosensitive target in these tubes suffered from the limitation that the light level had to rise to a particular level before any video output resulted. Light biasing was a method whereby the photosensitive target was illuminated from a light source just enough that no appreciable output was obtained, but such that a slight increase in light level from the scene was enough to provide discernible output. The light came from either an illuminator mounted around the target, or in more professional cameras from a light source on the base of the tube, guided to the target by light piping. The technique would not work with the baseline vidicon tube because, as its target was fundamentally an insulator, the constant low light level built up a charge that manifested itself as a form of fogging. The other types had semiconducting targets which did not have this problem.
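A toy numerical model makes the idea clearer. The threshold and light levels below are arbitrary illustrative units, not measured tube characteristics; the point is only that a constant bias lifts small scene variations above the target's dead zone.

```python
THRESHOLD = 10.0  # arbitrary units of target illumination (illustrative only)

def video_output(scene_light, bias_light=0.0):
    """Toy response: no output until total illumination exceeds the threshold."""
    return max(0.0, scene_light + bias_light - THRESHOLD)

dim_detail = 3.0
print(video_output(dim_detail))                  # 0.0 -> detail lost
print(video_output(dim_detail, bias_light=9.0))  # 2.0 -> detail now visible
```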
Color cameras
Early color cameras used separate red, green and blue image tubes in conjunction with a color separator, a technique still in use with 3CCD solid-state cameras today. It was also possible to construct a color camera that used a single image tube. One technique has already been described (Trinicon above). A more common technique, and a simpler one from the tube construction standpoint, was to overlay the photosensitive target with a color striped filter having a fine pattern of vertical stripes of green, cyan and clear filters (i.e. green; green and blue; and green, blue and red) repeating across the target. The advantage of this arrangement was that for virtually every color, the video level of the green component was always less than the cyan, and similarly the cyan was always less than the white. Thus the contributing images could be separated without any reference electrodes in the tube. If the three levels were the same, then that part of the scene was green. This method suffered from the disadvantage that the light levels under the three filters were almost certain to be different, with the green filter passing not more than one third of the available light.
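Because the three stripe samples see green, green-plus-blue and green-plus-blue-plus-red light respectively, the primaries can be recovered by simple subtraction, with no reference electrode. A minimal sketch follows; the function and variable names are illustrative and not taken from any particular camera.

```python
def separate(green_sample, cyan_sample, clear_sample):
    """Recover R, G, B from green / cyan / clear stripe samples by subtraction."""
    g = green_sample
    b = cyan_sample - green_sample
    r = clear_sample - cyan_sample
    return r, g, b

print(separate(0.30, 0.55, 0.90))  # mixed color -> (0.35, 0.30, 0.25)
print(separate(0.40, 0.40, 0.40))  # all three equal -> (0.0, 0.40, 0.0): pure green
```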
Variations on this scheme exist, the principal one being to use two filters with color stripes overlaid such that the colors form vertically oriented lozenge shapes overlaying the target. The method of extracting the color is similar however.
Field-sequential color system
During the 1930s and 1940s, field-sequential color systems were developed which used synchronized motor-driven color-filter disks at the camera's image tube and at the television receiver. Each disk consisted of red, blue, and green transparent color filters. In the camera, the disk was in the optical path, and in the receiver, it was in front of the CRT. Disk rotation was synchronized with vertical scanning so that each vertical scan in sequence was for a different primary color. This method allowed regular black-and-white image tubes and CRTs to generate and display color images. A field-sequential system developed by Peter Goldmark for CBS was demonstrated to the press on September 4, 1940, and was first shown to the general public on January 12, 1950. Guillermo González Camarena independently developed a field-sequential color disk system in Mexico in the early 1940s, for which he requested a patent in Mexico on August 19, 1940, and in the US in 1941. González Camarena produced his color television system in his laboratory Gon-Cam for the Mexican market and exported it to the Columbia College of Chicago, which regarded it as the best system in the world.
Magnetic focusing in typical camera tubes
The phenomenon known as magnetic focusing was discovered by A. A. Campbell-Swinton in 1896. He found that a longitudinal magnetic field generated by an axial coil can focus an electron beam. This phenomenon was immediately corroborated by J. A. Fleming, and Hans Busch gave a complete mathematical interpretation in 1926.
Diagrams in this article show that the focus coil surrounds the camera tube; it is much longer than the focus coils for earlier TV CRTs. Camera-tube focus coils, by themselves, have essentially parallel lines of force, very different from the localized semi-toroidal magnetic field geometry inside a TV receiver CRT focus coil. The latter is essentially a magnetic lens; it focuses the "crossover" (between the CRT's cathode and G1 electrode, where the electrons pinch together and diverge again) onto the screen.
The electron optics of camera tubes differ considerably. Electrons inside these long focus coils take helical paths as they travel along the length of the tube. The local axis of each helix follows a line of force of the magnetic field, and over the length of the travel the helical detail has little practical effect. Assuming that they start from a point, the electrons will focus to a point again at a distance determined by the strength of the field, so focusing a tube with this kind of coil is simply a matter of trimming the coil's current. In effect, the electrons travel along the lines of force, although helically in detail.
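The focusing condition can be estimated numerically: electrons leaving a point rejoin at a point after a whole number of helical turns, and one turn takes the cyclotron period T = 2πm/(eB). Choosing B so that the axial flight time over the photocathode-to-target distance equals one turn gives a rough coil field. The accelerating voltage and distance in the sketch below are assumed values, not taken from the article.

```python
import math

E = 1.602e-19  # elementary charge, C
M = 9.109e-31  # electron mass, kg

def focus_field(accel_voltage, focus_distance, turns=1):
    """Axial field (tesla) for which electrons complete `turns` full helical
    turns while drifting over `focus_distance`, refocusing a point to a point."""
    axial_speed = math.sqrt(2 * E * accel_voltage / M)
    flight_time = focus_distance / axial_speed
    cyclotron_period = flight_time / turns
    return 2 * math.pi * M / (E * cyclotron_period)

# Assumed 300 V beam over a 20 cm tube: a field of a few millitesla,
# trimmed in practice simply by adjusting the coil current.
print(f"{focus_field(300, 0.20) * 1e3:.1f} mT")
```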
These focus coils are essentially as long as the tubes themselves, and surround the deflection yoke (coils). Deflection fields bend the lines of force (with negligible defocusing), and the electrons follow the lines of force.
In a conventional magnetically deflected CRT, such as in a TV receiver or computer monitor, the vertical deflection coils are equivalent to coils wound around a horizontal axis. That axis is perpendicular to the neck of the tube; the lines of force are essentially horizontal. (In detail, coils in a deflection yoke extend some distance beyond the neck of the tube and lie close to the flare of the bulb; they have a distinctive appearance.)
In a magnetically focused camera tube (there are electrostatically focused vidicons), the vertical deflection coils are above and below the tube, instead of being on both sides of it. One might say that this sort of deflection starts to create S-bends in the lines of force, but it does not come anywhere near that extreme.
Size
The size of video camera tubes is simply the overall outside diameter of the glass envelope. This differs from the size of the sensitive area of the target which is typically two thirds of the size of the overall diameter. Tube sizes are always expressed in inches for historical reasons. A one-inch camera tube has a sensitive area of approximately two thirds of an inch on the diagonal or about 16 mm.
Although the video camera tube is now technologically obsolete, the size of solid-state image sensors is still expressed as the equivalent size of a camera tube. For this purpose a new term was coined, known as the optical format. The optical format is approximately the true diagonal of the sensor multiplied by 3/2. The result is expressed in inches and is usually, though not always, rounded to a convenient fraction (hence the approximation). For instance, a sensor with a diagonal of 8.0 mm has an optical format of 8.0 mm × 3/2 = 12 mm, which is rounded to the convenient imperial fraction of 1/2 inch. The parameter is also the source of the "Four Thirds" in the Four Thirds system and its Micro Four Thirds extension—the imaging area of the sensor in these cameras is approximately that of a 4/3-inch video-camera tube.
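The arithmetic above is easy to reproduce. The helper below simply scales the sensor diagonal by 3/2 and converts to inches; the 8.0 mm diagonal is the example from the paragraph, and the 3/2 factor follows from the sensitive area being about two thirds of the tube diameter.

```python
MM_PER_INCH = 25.4

def optical_format_inches(sensor_diagonal_mm):
    """Optical format: sensor diagonal times 3/2, expressed in inches."""
    return sensor_diagonal_mm * 1.5 / MM_PER_INCH

print(f"{optical_format_inches(8.0):.2f} in")  # ~0.47 in, quoted as 1/2 inch
```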
Although the optical format size bears no relationship to any physical parameter of the sensor, its use means that a lens that would have been used with a camera tube of a given size will give roughly the same angle of view when used with a solid-state sensor of the same optical format.
Late use and decline
Video camera tube technology survived into the 1990s, when high-definition, 1035-line tubes were used in the early MUSE HD broadcasting system. While CCDs were tested for this application, as of 1993 broadcasters still found them inadequate because of difficulties in achieving the necessary high resolution without compromising image quality with undesirable side effects.
Modern charge-coupled device (CCD) and CMOS-based sensors offer many advantages over their tube counterparts. These include a lack of image lag, high overall picture quality, high light sensitivity and dynamic range, a better signal-to-noise ratio and significantly higher reliability and ruggedness. Other advantages include the elimination of the respective high and low-voltage power supplies required for the electron beam and heater filament, elimination of the drive circuitry for the focusing coils, no warm-up time and a significantly lower overall power consumption. Despite these advantages, acceptance and incorporation of solid-state sensors into television and video cameras was not immediate. Early sensors were of lower resolution and performance than picture tubes, and were initially relegated to consumer-grade video recording equipment.
Also, video tubes had progressed to a high standard of quality and were standard issue equipment to networks and production entities. Those entities had a substantial investment in not only tube cameras, but also in the ancillary equipment needed to correctly process tube-derived video. A switch-over to solid-state image sensors rendered much of that equipment (and the investments behind it) obsolete and required new equipment optimized to work well with solid-state sensors, just as the old equipment was optimized for tube-sourced video.
Due to their relative insensitivity to radiation compared to semiconductor-based devices, video camera tubes are still occasionally used in high-radiation environments such as nuclear power plants.
See also
Monoscope
Professional video camera
References
External links
Orthicon: Brief history, description and diagram.
The Cathode Ray Tube site
CCD Technology – A Brief History
A German television museum with extensive information (in German)
Most of the TV tubes were shown and carefully explained (in German)
Television technology
Vacuum tubes