id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
21,111,913 | https://en.wikipedia.org/wiki/Baeospora%20myosura | Baeospora myosura, commonly known as conifercone cap, is a species of fungus that produces agaricoid fruit bodies on decaying pine and spruce cones. The pileus is pale brown to cream, the lamellae are pale and very crowded, and the spore print is white or cream and amyloid. It is commonly found in North America and Europe. It is regarded as nonpoisonous but is of unknown edibility.
References
Fungi of Europe
Fungi of North America
Fungi described in 1818
Fungus species | Baeospora myosura | Biology | 111 |
14,766,178 | https://en.wikipedia.org/wiki/ERV3 | HERV-R_7q21.2 provirus ancestral envelope (Env) polyprotein is a protein that in humans is encoded by the ERV3 gene.
Function
The human genome includes many retroelements, including the human endogenous retroviruses (HERVs), which compose about 7–8% of the human genome. ERV3, one of the most studied HERVs, is thought to have integrated 30 to 40 million years ago and is present in higher primates with the exception of gorillas. Taken together, the observation of genome conservation, the detection of transcript expression, and the presence of conserved ORFs are circumstantial evidence for a functional role. Similar endogenous retroviral Env genes like syncytin-1 have important roles in placental formation and embryonic development by enabling cell-cell fusion. Despite its origin as an Env gene, ERV3 has a premature stop codon that precludes any cell-cell fusion functionality. However, it does have an immunosuppressive function that helps the fetus evade a damaging maternal immune response, which may explain its high expression in the placenta.
There is speculation that ERV3 originally did have cell-cell fusion functionality in the placenta, but that it was eventually supplanted by other Env genes like syncytin, leading to a loss of this function.
Another functional role is suggested by the observation that downregulation of ERV3 is reported in choriocarcinoma.
References
Further reading
External links
Transcription factors | ERV3 | Chemistry,Biology | 322 |
30,235,513 | https://en.wikipedia.org/wiki/Plakin | A plakin is a protein that associates with junctional complexes and the cytoskeleton.
Types include desmoplakin, envoplakin, periplakin, plectin, bullous pemphigoid antigen 1, corneodesmosin, and microtubule actin cross-linking factor.
References
Protein families | Plakin | Chemistry,Biology | 76 |
35,897,579 | https://en.wikipedia.org/wiki/Copper%28I%29%20sulfate | Copper(I) sulfate, also known as cuprous sulfate, is an inorganic compound with the chemical formula Cu2SO4. It is a white solid, in contrast to copper(II) sulfate, which is blue in hydrous form. Compared to the commonly available reagent, copper(II) sulfate, copper(I) sulfate is unstable and not readily available.
Structure
Cu2SO4 crystallizes in the orthorhombic space group Fddd. Each oxygen in a sulfate anion is bridged to another sulfate by a copper atom, and the Cu−O distances are 196 pm.
Synthesis
Cuprous sulfate is produced by the reaction of copper metal with sulfuric acid at 200 °C:
2 Cu + 2 H2SO4 → Cu2SO4 + SO2 + 2 H2O
Cu2SO4 can also be synthesized by the action of dimethyl sulfate on cuprous oxide:
Cu2O + (CH3)2SO4 → Cu2SO4 + (CH3)2O
The material is stable in dry air at room temperature but decomposes rapidly in the presence of moisture or upon heating. Upon contact with water it disproportionates into copper metal and copper(II) sulfate pentahydrate.
References
Copper(I) compounds
Sulfates | Copper(I) sulfate | Chemistry | 218 |
5,533,371 | https://en.wikipedia.org/wiki/E-selectin | E-selectin, also known as CD62 antigen-like family member E (CD62E), endothelial-leukocyte adhesion molecule 1 (ELAM-1), or leukocyte-endothelial cell adhesion molecule 2 (LECAM2), is a selectin cell adhesion molecule expressed only on endothelial cells activated by cytokines. Like other selectins, it plays an important part in inflammation. In humans, E-selectin is encoded by the SELE gene.
Structure
E-selectin has a cassette structure: an N-terminal C-type lectin domain, an EGF (epidermal growth factor)-like domain, six Sushi domain (SCR repeat) units, a transmembrane domain (TM), and an intracellular cytoplasmic tail (cyto). The three-dimensional structure of the ligand-binding region of human E-selectin was determined at 2.0 Å resolution in 1994. The structure reveals limited contact between the two domains and a coordination of Ca2+ not predicted from other C-type lectins. Structure/function analysis indicates a defined region and specific amino-acid side chains that may be involved in ligand binding. The structure of E-selectin bound to the sialyl-LewisX (sLeX; NeuNAcα2,3Galβ1,4[Fucα1,3]GlcNAc) tetrasaccharide was solved in 2000.
Gene and regulation
In humans, E-selectin is encoded by the SELE gene. Its C-type lectin domain, EGF-like, SCR repeats, and transmembrane domains are each encoded by separate exons, whereas the E-selectin cytosolic domain derives from two exons. The E-selectin locus flanks the L-selectin locus on chromosome 1.
Unlike P-selectin, which is stored in vesicles called Weibel-Palade bodies, E-selectin is not stored in the cell and has to be transcribed, translated, and transported to the cell surface. The production of E-selectin is stimulated by the expression of P-selectin, which, in turn, is stimulated by tumor necrosis factor α (TNFα), interleukin-1 (IL-1), and lipopolysaccharide (LPS). It takes about two hours after cytokine recognition for E-selectin to be expressed on the endothelial cell surface. Maximal expression of E-selectin occurs around 6–12 hours after cytokine stimulation, and levels return to baseline within 24 hours.
Shear forces also affect E-selectin expression. High laminar shear enhances the acute endothelial response to interleukin-1β in naïve or shear-conditioned endothelial cells, as may occur in the pathological setting of ischemia/reperfusion injury, while conferring rapid E-selectin downregulation that protects against chronic inflammation.
Phytoestrogens, plant compounds with estrogen-like biological activity such as genistein, formononetin, biochanin A, and daidzein, as well as mixtures of these compounds, were found to reduce E-selectin, along with VCAM-1 and ICAM-1, both on the cell surface and in culture supernatant.
Ligands
E-selectin recognizes and binds to sialylated carbohydrates present on the surface proteins of certain leukocytes. E-selectin ligands are expressed by neutrophils, monocytes, eosinophils, memory-effector T-like lymphocytes, and natural killer cells. Each of these cell types is found in acute and chronic inflammatory sites in association with expression of E-selectin, thus implicating E-selectin in the recruitment of these cells to such inflammatory sites.
These carbohydrates include members of the Lewis X and Lewis A families found on monocytes, granulocytes, and T-lymphocytes.
The glycoprotein ESL-1, present on neutrophils and myeloid cells, was the first counter-receptor for E-selectin to be described. It is a variant of the tyrosine kinase FGF receptor, raising the possibility that its binding to E-selectin is involved in initiating signaling in the bound cells.
P-selectin glycoprotein ligand-1 (PSGL-1) derived from human neutrophils is also a high-efficiency ligand for endothelium-expressed E-selectin under flow. It mediates the rolling of leukocytes on the activated endothelium surrounding an inflamed tissue.
Both ESL-1 and PSGL-1 must bear sialyl Lewis a/x structures in order to bind E- or P-selectins.
E-selectin is found to mediate the adhesion of tumor cells to endothelial cells, by binding to E-selectin ligands on the tumor cells. E-selectin ligands also play a role in cancer metastasis. The role of these two E-selectin ligands in metastasis in vivo is poorly defined and remains to be firmly demonstrated. PSGL-1 was detected on the surfaces of bone-metastatic prostate tumor cells, suggesting that it may have a functional role in the bone tropism of prostate tumor cells.
In cancer cells, CD44, death receptor-3 (DR3), LAMP1, and LAMP2 were identified as E-selectin ligands present on colon cancer cells, and CD44v, Mac2-BP, and gangliosides were identified as E-selectin ligands present on breast cancer cells.
On human neutrophils the glycosphingolipid NeuAcα2-3Galβ1-4GlcNAcβ1-3[Galβ1-4(Fucα1-3)GlcNAcβ1-3]2[Galβ1-4GlcNAcβ1-3]2Galβ1-4GlcβCer (and closely related structures) are functional E-selectin receptors.
Function
Role in inflammation
During inflammation, E-selectin plays an important part in recruiting leukocytes to the site of injury. The local release of the cytokines IL-1 and TNF-α by macrophages in the inflamed tissue induces the over-expression of E-selectin on endothelial cells of nearby blood vessels. Leukocytes in the blood expressing the correct ligand will bind with low affinity to E-selectin, even under the shear stress of blood flow, causing the leukocytes to "roll" along the internal surface of the blood vessel as temporary interactions are made and broken.
As the inflammatory response progresses, chemokines released by injured tissue enter the blood vessels and activate the rolling leukocytes, which are now able to tightly bind to the endothelial surface and begin making their way into the tissue.
P-selectin has a similar function, but is expressed on the endothelial cell surface within minutes as it is stored within the cell rather than produced on demand.
Role in cancer
E-selectin was first discovered as a transmembrane receptor induced in endothelial cells upon inflammatory stimulation, which mediated adhesion of monocytic or HL60 leukemic cells. This led to the hypothesis that cancer cells secrete inflammatory cytokines such as IL-1β or TNFα to induce E-selectin at distant metastatic sites. This induction would enable circulating tumor cells to arrest at stimulated sites, roll along activated endothelium, extravasate, and form metastases. Studies since have shown that E-selectin binding to colon cancer cells correlates with increasing metastatic potential, and that cancer cells of multiple tumor types bind E-selectin using glycoprotein or glycolipid ligands normally expressed on immune cells. Studies have further described a mechanistic cascade wherein cancer cells first bind E-selectin at shear flow rates: E-selectin binding results in a Velcro-like interaction allowing the cancer cells to engage higher-affinity integrin binding that eventually results in tight binding between tumor cells and the activated endothelium.
While numerous pieces of in vitro and clinical evidence continue to support this hypothesis of E-selectin-mediated cancer metastasis, in vivo studies have shown that E-selectin knockout only minimally affects leukemic cell adhesion to bone immediately following injection, while experimental lung metastasis is not affected by the genetic deletion of E-selectin. Furthermore, studies have also shown that primary tumor growth is increased in E-selectin knockout mice. This paradox was more recently resolved by a trio of studies showing that E-selectin is constitutively expressed only in the bone marrow endothelium, where it is thought to perform functions vital to hematopoiesis that are hijacked specifically by cells metastasizing to bone and not to other sites. These data support ongoing clinical efforts to inhibit breast cancer bone metastasis with E-selectin-blocking agents. The complexity of E-selectin ligand biology may also play a role in these discrepant in vitro and in vivo results. At least 15 different glycoprotein and glycolipid substrates for E-selectin have been described on various cancer cells, while only the N-glycan-bearing Glg1 (Esl1) was shown to mediate bone metastasis. Other ligands or combinations thereof may result in distinct mechanisms during cancer metastasis.
Beyond a direct interaction with tumor cells, E-selectin induction in response to cytokines locally secreted by cancer cells enables specific tumor targeting of sLeX-conjugated nanoparticles or thioaptamers containing anti-tumor payloads. In addition, E-selectin may also function to recruit monocytes to primary tumors or lung metastases to promote an inflammatory pro-tumor microenvironment. Blocking these interactions or enabling trafficking of CAR-T cells to E-selectin-positive sites may hold promise for future therapeutic development.
Pathological relevance
Critical illness polyneuromyopathy
In cases of elevated blood glucose levels, such as in sepsis, E-selectin expression is higher than normal, resulting in greater microvascular permeability. The greater permeability leads to edema (swelling) of the skeletal muscle endothelium (blood vessel linings), resulting in skeletal muscle ischemia (restricted blood supply) and eventually necrosis (cell death). This underlying pathology is the cause of the symptomatic disease critical illness polyneuromyopathy (CIPNM). Traditional Chinese herbal medicines, such as berberine, downregulate E-selectin.
Pathogen attachment
Studies show that the adherence of Porphyromonas gingivalis to human umbilical vein endothelial cells (HUVECs) increases with the induction of E-selectin expression by TNF-α. Antibodies to E-selectin and sialyl LewisX suppressed P. gingivalis adherence to stimulated HUVECs. P. gingivalis mutants lacking the OmpA-like proteins Pgm6/7 had reduced adherence to stimulated HUVECs, but fimbriae-deficient mutants were not affected. E-selectin-mediated P. gingivalis adherence activated endothelial exocytosis. These results suggest that the interaction between host E-selectin and pathogen Pgm6/7 mediates P. gingivalis adherence to endothelial cells and may trigger vascular inflammation.
Acute coronary syndrome
The immunohistochemical expression of E-selectin and PECAM-1 was significantly increased in the intima of vulnerable plaques in the acute coronary syndrome (ACS) group, especially in neovascular endothelial cells, and positively correlated with inflammatory cell density, suggesting that PECAM-1 and E-selectin might play an important role in the inflammatory reaction and the development of vulnerable plaques. The E-selectin Ser128Arg polymorphism is associated with ACS and might be a risk factor for it.
Nicotine-mediated induction
Smoking is highly correlated with an enhanced likelihood of atherosclerosis through the induction of endothelial dysfunction. In endothelial cells, various cell-adhesion molecules, including E-selectin, are upregulated upon exposure to nicotine, the addictive component of tobacco smoke. Nicotine-stimulated adhesion of monocytes to endothelial cells depends on the activation of α7-nAChRs and a β-Arr1- and cSrc-regulated increase in E2F1-mediated transcription of the E-selectin gene. Therefore, agents such as RRD-251 that can target the activity of E2F1 may have potential therapeutic benefit against cigarette smoke-induced atherosclerosis.
Cerebral aneurysm
E-selectin expression has also been found to be increased in ruptured human cerebral aneurysm tissue. E-selectin might be an important factor in the formation and rupture of cerebral aneurysms, promoting inflammation and weakening cerebral artery walls.
As a biomarker
E-selectin is also an emerging biomarker for the metastatic potential of some cancers, including colorectal cancer, and for their recurrence.
References
External links
Cell adhesion proteins
Clusters of differentiation
Glycoproteins
Transmembrane receptors
Selectins
Biomarkers | E-selectin | Chemistry,Biology | 2,866 |
25,429,099 | https://en.wikipedia.org/wiki/Qalyub%20orthonairovirus | Qalyub orthonairovirus, also known as Qalyub nairovirus or simply Qalyub virus, is a negative-sense single-stranded RNA virus discovered in a rat's nest in a tomb wall in the Egyptian town of Qalyub in 1952. The primary vector for transmission is the Carios erraticus tick, and thus it is an arbovirus.
There is no evidence of clinical disease in humans.
References
External links
Nairoviridae
Viral diseases
Species described in 1952 | Qalyub orthonairovirus | Biology | 106 |
73,211,492 | https://en.wikipedia.org/wiki/Clothing%20physiology | Clothing physiology is a branch of science that studies the interaction between clothing and the human body, with a particular focus on how clothing affects the physiological and psychological responses of individuals to different environmental conditions. The goal of clothing physiology research is to develop a better understanding of how clothing can be designed to optimize comfort, performance, and protection for individuals in various settings, including outdoor recreation, occupational environments, and medical contexts.
Purpose of clothing
Human motives for clothing are frequently oversimplified in cultural and sociological theories, which assume that dress is driven solely by modesty, adornment, protection, or sexual attraction. However, clothing is primarily motivated by the environment, with its form influenced by human characteristics and traits as well as physical and social factors such as sex relations, costume, caste, class, and religion. Ultimately, clothing must be comfortable in various environmental conditions to support physiological behavior. The concept of clothing has been aptly characterized as a quasi-physiological system that interacts with the human body.
Quasi-physiological systems
Clothing can be considered a quasi-physiological system that interacts with the body in different ways, much like the distinct physiological systems of the human body, such as the digestive and nervous systems, which can be analyzed systematically.
Purpose of clothing physiology
The acceptance and perceived comfort of a garment cannot be attributed solely to its thermal properties. Rather, the sensation of comfort when wearing a garment is associated with various factors, including the fit of the garment, its moisture buffering properties, and the mechanical characteristics of the fibers and fabrics used in its construction.
The field of clothing physiology concerns the complex interplay between the human body, environmental conditions, and clothing. Through the use of scientific methods, it is possible to accurately measure and quantify the effects of clothing on wearer comfort and overall well-being.
Louis Newburgh is widely recognized among thermal physiologists primarily due to his role as the editor of "Physiology of Heat Regulation and the Science of Clothing".
From a physiological perspective, the purpose of clothing is to shield the body from extreme temperatures, whether hot or cold. The role of clothing in the wearer's comfort can be described as the connection between the body and the surroundings. When engaged in outdoor activities, the individual's comfort level is influenced by environmental factors such as air temperature, humidity, solar radiation, and atmospheric and ground thermal radiation. The wearer's posture, metabolic rate, and sweating rate, and bodily processes such as moisture absorption, sweat evaporation, and heat loss through conduction and convection via the blood, are additional factors that play a role in determining the individual's comfort level.
Skin physiology
The contact between clothing and skin facilitates the regulation of body temperature through the control of blood flow and sweat evaporation in localized areas. However, the design of functional fabrics that efficiently regulate skin temperature must take into account crucial factors such as age, gender, and activity level.
The skin plays a vital role in safeguarding the body's homeostasis by performing a variety of crucial protective functions. Clothing and other textiles interact dynamically with the skin's functions, and the mechanical properties of the fabric, such as its surface roughness, can lead to non-specific skin reactions, such as wool intolerance or keratosis follicularis.
Thermal comfort and insulation
It is common to express metabolic activity in terms of heat production. A resting adult typically generates 100 W of heat, with a significant amount dissipating through the skin. Heat production per unit area of skin, referred to as 1 met, is around 58 W/m2 for a resting individual, based on the average male European's skin surface area of approximately 1.8 m2. The average female European's skin surface area is 1.6 m2 for comparison.
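A minimal sketch of this arithmetic, assuming only the figures quoted above (the 58 W/m2 definition of 1 met and the 1.8/1.6 m2 skin areas); the function name is illustrative:

```python
MET_W_PER_M2 = 58.0  # heat production per square meter of skin at rest (1 met)

def metabolic_heat_watts(mets: float, skin_area_m2: float) -> float:
    """Total metabolic heat production for a given activity level."""
    return mets * MET_W_PER_M2 * skin_area_m2

print(metabolic_heat_watts(1.0, 1.8))  # ~104 W: the ~100 W resting figure above
print(metabolic_heat_watts(1.0, 1.6))  # ~93 W for the smaller skin area
```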
Skin temperatures that correspond to comfort during stationary activities range from 91.4°F to 93.2°F (33°C to 34°C), and these temperatures decrease as the level of physical activity increases. Skin temperature that exceeds 45°C or falls below 18°C induces a sensation of pain. Internal temperatures increase with activity: the brain's temperature regulatory center is around 36.8°C at rest and rises to about 37.4°C when walking and 37.9°C when jogging. A core temperature below 28°C can cause fatal cardiac arrhythmia, while one above 43°C can result in permanent brain damage. Thus, it is crucial to regulate body temperature carefully for both comfort and health.
Clothing insulation can be denoted using the unit of measurement called the clo. In the absence of clothing, a thin layer of static air known as the boundary layer forms in close proximity to the skin, acting as an insulating layer that restricts heat exchange between the skin and the surrounding environment. This layer typically offers approximately 0.8 clo units of insulation in a motionless state. It is difficult to apply this generalization to very thin fabric layers or underwear, as they occupy an existing static air layer of no more than 0.5 cm thickness. Consequently, these thin layers offer minimal contribution to the clothing's intrinsic insulation.
The standard measure for clothing insulation as a function of thickness is about 1.57 clo·cm-1, equivalent to 4 clo·inch-1.
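A short sketch of these unit conversions. It assumes the standard equivalence of 1 clo to 0.155 m2·K/W (a textbook definition, not stated above) together with the 1.57 clo/cm slope from the text, which applies to typical layers rather than very thin ones; the names are illustrative:

```python
CLO_TO_SI = 0.155   # thermal resistance (m2*K/W) per clo, standard definition
CLO_PER_CM = 1.57   # insulation per centimeter of clothing thickness (from text)

def insulation_clo(thickness_cm: float) -> float:
    """Estimate intrinsic insulation in clo from fabric thickness."""
    return CLO_PER_CM * thickness_cm

def clo_to_resistance(clo: float) -> float:
    """Convert clo units to SI thermal resistance (m2*K/W)."""
    return clo * CLO_TO_SI

print(f"{insulation_clo(1.0):.2f} clo")        # a 1 cm fabric layer: ~1.57 clo
print(f"{clo_to_resistance(0.8):.3f} m2*K/W")  # the ~0.8 clo still-air boundary layer
print(f"{4 / 2.54:.2f} clo/cm")                # cross-check: 4 clo/inch ~ 1.57 clo/cm
```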
Applications
The advancements in fibers, textiles, electronics, functional finishing, and clothing physiology are anticipated to improve human life in numerous areas such as medicine, military, firefighting, extreme sports, and other apparel applications. The study of clothing physiology has been prompted by the need to design effective clothing systems for various specialized environments such as space, polar regions, underwater operations, and industrial settings.
Clothing comfort
Comfort is a multifaceted concept that encompasses various perceptions, including physiological, social, and psychological needs. After sustenance, clothing is one of the most vital objects that can satisfy comfort requirements. This is because clothing offers a range of benefits, including aesthetic, tactile, thermal, moisture, and pressure comfort.
Protection
The physiological comfort of an athlete's clothing is significantly influenced by the compression effect exerted by the garments. The degree of compression load exerted by the clothing correlates directly with the intensity of sweating and the resulting elevation in skin temperature: a greater compression load on the body results in a higher degree of sweating and increased skin temperature.
Testing
Thermophysiological models have become a prevalent tool for forecasting human physiological reactions in varying environmental and clothing conditions.
Clothing physiology can be assessed using various advanced instruments. One example is Sherlock, a thermal manikin test device developed by the Hohenstein Institutes to evaluate clothing physiology, which is equipped with perspiration simulation capabilities.
SpaceTex experiment
In the SpaceTex experiment, novel fabrics were evaluated for their ability to enhance heat transfer and manage sweat during physical activity, as well as for their antibacterial properties. Quick-drying T-shirts made from such fabrics would be advantageous to athletes, firefighters, miners, and military personnel. This marks the first experiment in clothing physiology conducted in microgravity, with sportswear manufacturers aiming to improve their products accordingly. In fact, a modified polyester has already been developed for use by the Swiss military.
Certain precautions
The integumentary system is a significant immune organ, possessing both specific and non-specific activities related to immunity. Antimicrobial fabrics could potentially disrupt the skin's non-specific defense mechanisms such as antimicrobial peptides or the resident microflora.
Social psychology of clothing
The social psychology of dress entails comprehending the interconnections that exist between attire and human conduct.
See also
Technical textile
Textile performance
Personal protective equipment
Uniforms of the Canadian Armed Forces
References
Clothing
Physiology | Clothing physiology | Biology | 1,584 |
640,046 | https://en.wikipedia.org/wiki/Norden%20bombsight | The Norden Mk. XV, known as the Norden M series in U.S. Army service, is a bombsight that was used by the United States Army Air Forces (USAAF) and the United States Navy during World War II, and the United States Air Force in the Korean and the Vietnam Wars. It was an early tachometric design, which combined optics, a mechanical computer, and an autopilot for the first time to not merely identify a target but fly the airplane to it. The bombsight directly measured the aircraft's ground speed and direction, which older types could only estimate with lengthy manual procedures. The Norden further improved on older designs by using an analog computer that continuously recalculated the bomb's impact point based on changing flight conditions, and an autopilot that reacted quickly and accurately to changes in the wind or other effects.
Together, these features promised unprecedented accuracy for daytime bombing from high altitudes. During prewar testing the Norden demonstrated a remarkably small circular error probable (CEP), an astonishing performance for that period. This precision would enable direct attacks on ships, factories, and other point targets. Both the Navy and the USAAF saw it as a means to conduct successful high-altitude bombing. For example, an invasion fleet could be destroyed long before it could reach U.S. shores.
To protect these advantages, the Norden was granted the utmost secrecy well into the war, and was part of a production effort on a similar scale to the Manhattan Project: the overall cost (both R&D and production) was $1.1 billion, as much as two-thirds of the cost of the Manhattan Project, or over a quarter of the production cost of all B-17 bombers. The Norden was not as secret as believed; both the British SABS and German Lotfernrohr 7 worked on similar principles, and details of the Norden had been passed to Germany even before the war started.
Under combat conditions the Norden did not achieve its expected precision, yielding an average 1943 CEP similar to other Allied and German results. Both the Navy and Air Forces had to give up using pinpoint attacks. The Navy turned to dive bombing and skip bombing to attack ships, while the Air Forces developed the lead bomber procedure to improve accuracy, and adopted area bombing techniques for ever-larger groups of aircraft. Nevertheless, the Norden's reputation as a pin-point device endured, due in no small part to Norden's own advertising of the device after secrecy was reduced late in the war.
The Norden saw reduced use in the post–World War II period after radar-based targeting was introduced, but the need for accurate daytime attacks kept it in service, especially during the Korean War. The last combat use of the Norden was in the U.S. Navy's VO-67 squadron, which used it to drop sensors onto the Ho Chi Minh Trail in 1967. The Norden remains one of the best-known bombsights.
History and development
Early work
The Norden sight was designed by Carl Norden, a Dutch engineer educated in Switzerland who immigrated to the U.S. in 1904. In 1911, Norden joined Sperry Gyroscope to work on ship gyrostabilizers, and then moved to work directly for the U.S. Navy as a consultant. At the Navy, Norden worked on a catapult system for a proposed flying bomb that was never fully developed, but this work introduced various Navy personnel to Norden's expertise with gyro stabilization.
World War I bomb sight designs had improved rapidly, with the ultimate development being the Course Setting Bomb Sight, or CSBS. This was essentially a large mechanical calculator that directly represented the wind triangle using three long pieces of metal in a triangular arrangement. The hypotenuse of the triangle was the line the aircraft needed to fly along in order to arrive over the target in the presence of wind, which, before the CSBS, was an intractable problem. Almost all air forces adopted some variation of the CSBS as their standard inter-war bomb sight, including the U.S. Navy, who used a modified version known as the Mark III.
It was already realized that one major source of error in bombing was levelling the aircraft enough so the bombsight pointed straight down; even small errors in levelling could produce dramatic errors in accuracy. The US Army did not adopt the CSBS and instead used a simpler design, the Estoppey D-series, as it automatically levelled the sight during use. Navy experiments showed these roughly doubled accuracy, so they began a series of developments to add a gyroscopic stabilizer to their bombsights. In addition to new designs like the Inglis (working with Sperry) and Seversky, Norden was asked to provide an external stabilizer for the Navy's existing Mark III designs.
Mark III-A
Although the CSBS and similar designs allowed the calculation of the proper flight angle needed to correct for windage, they did so by looking downward out of the aircraft. Very simple bombsights could be operated by the pilot, but as their sophistication grew they demanded full-time operators. This task was often given to the front or rear gunner. In Army aircraft they would sit near enough to the pilot to indicate any required directional adjustments using hand signals, or if they sat behind the pilot, using strings attached to the pilot's jacket.
The Navy's first bombers were large flying boats, where the pilot sat well away from the front of the fuselage, and one could not simply cut a hole for the bombsight to view through. Instead, the bombs were normally aimed by an observer in the nose of the aircraft. This made communications with the pilot very difficult. To address this, the Navy developed the concept of the pilot direction indicator, or PDI, an electrically-driven pointer that the observer used to indicate which direction to turn. The bombardier used switches to move the pointer on his unit to indicate the direction of the target, which was duplicated on the unit in front of the pilot so he could maneuver the aircraft to follow suit.
Norden's attempt to fit a stabilizer to the Mark III, the Mark III-A, also included a separate contract to develop a new automatic PDI. Norden proposed removing the electrical switches used to move the pointer and using the entire bombsight itself as the indicator. In place of the thin metal wires that formed the sights on the Mark III, a small low-power telescope would be used in its place. The bombardier would rotate the telescope left or right to follow the target. This motion would cause the gyros to precess, and this signal would drive the PDI automatically. The pilot would follow the PDI as before.
Norden initially delivered three prototypes of the stabilized bombsight without the automatic PDI. In testing, the Navy found that while the system did improve accuracy when it worked, it was complicated to use and often failed, leaving the real-world accuracy no better than before. They asked Norden for suggestions on ways to improve this. They were still interested in the PDI work, and the contract was allowed to continue.
Mark XI
Norden suggested that the only solution to improve accuracy would be to directly measure the ground speed, as opposed to calculating it using the CSBS's wind triangle. To time the drop, Norden used an idea already in use on other bombsights, the "equal distance" concept. This was based on the observation that the time needed to travel a certain distance over the ground would remain relatively constant during the bomb run, as the wind would not be expected to change dramatically over a short period of time. If you could accurately mark out a distance on the ground, or in practice, an angle in the sky, timing the passage over that distance would give you all the information needed to time the drop.
Norden's version of the system was very similar to the Army's Estoppey D-4 of the same era, differing largely in the physical details of the actual sights. The D-4 used thin wires as the sights, while Norden's would use the small telescope of the Mark III-A. To use the system, the bombardier looked up the expected time it would take for the bombs to fall from the current altitude. This time was set into a countdown stopwatch, and the sights were set to the angle that the bombs would fall if there was no wind. The bombardier waited for the target to line up with a crosshair in the telescope. When it did, the timer was started, and the bombardier rotated the telescope around its vertical axis to track the target as they flew toward it. This movement was linked to a second crosshair through a gearing system. The bombardier continued moving the telescope until the timer ran out. The second crosshair was now at the correct aiming angle, or range angle, after accounting for any difference between groundspeed and airspeed. The bombardier then waited for the target to pass through the second crosshair to time the drop.
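The equal-distance timing idea can be sketched numerically. The sketch below assumes drag-free (vacuum) ballistics and made-up altitude and speed figures; a real sight also corrected for bomb trail, so this illustrates the principle rather than any actual mechanism:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time_s(altitude_m: float) -> float:
    """Time for a bomb to fall from altitude, ignoring air resistance."""
    return math.sqrt(2.0 * altitude_m / G)

def range_angle_deg(altitude_m: float, ground_speed_ms: float) -> float:
    """Angle from vertical at which the target should sit at release."""
    forward_travel = ground_speed_ms * fall_time_s(altitude_m)
    return math.degrees(math.atan2(forward_travel, altitude_m))

# The stopwatch is set to the fall time; the angle the telescope sweeps in
# that interval reflects the true ground speed, so the second crosshair ends
# up at the correct dropping angle without an explicit wind measurement.
h, v = 3000.0, 80.0  # altitude (m) and ground speed (m/s), illustrative only
print(f"timer: {fall_time_s(h):.1f} s, range angle: {range_angle_deg(h, v):.1f} deg")
```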
In 1924, the first prototype of this design, known to the Navy as the Mark XI, was delivered to the Navy's proving grounds in Virginia. In testing, the system proved disappointing: the circular error probable (CEP), a circle into which 50% of the bombs would fall, was over 3.6% of the drop altitude, somewhat worse than existing systems. Moreover, bombardiers universally complained that the device was far too hard to use. Norden worked tirelessly on the design, and by 1928, the accuracy had improved to 2% of altitude. This was enough for the Navy's Bureau of Ordnance to place a US$348,000 contract for 80 production examples.
Norden was known for his confrontational and volatile nature. He often worked 16-hour days and thought little of anyone who did not. Navy officers began to refer to him as "Old Man Dynamite". During development, the Navy asked Norden to consider taking on a partner to handle the business and leave Norden free to develop the engineering side. They recommended former Army colonel Theodore Barth, an engineer who had been in charge of gas mask production during World War I. The match-up was excellent, as Barth had the qualities Norden lacked: charm, diplomacy, and a head for business. The two became close friends.
Initial U.S. Army interest
In December 1927, the United States Department of War was granted permission to use a bridge over the Pee Dee River in North Carolina for target practice, as it would soon be sunk in the waters of a new dam. The 1st Provisional Bombardment Squadron, equipped with Keystone LB-5 bombers, attacked the bridge over a period of five days, flying 20 missions a day in perfect weather and attacking from a range of altitudes. After this massive effort, the middle section of the bridge finally fell on the last day. However, the effort as a whole was clearly a failure in any practical sense.
About the same time as the operation was being carried out, General James Fechet replaced General Mason Patrick as commander of the USAAC. He received a report on the results of the test, and on 6 January 1928 sent a lengthy memo to Brigadier General William Gillmore, chief of the Material Division at Wright Field.
He went on to request information on every bombsight then used at Wright, as well as "the Navy's newest design". However, the Mark XI was so secret that Gillmore was not aware Fechet was referring to the Norden. Gillmore produced contracts for twenty-five examples of an improved version of the Seversky C-1, the C-3, and six prototypes of a new design known as the Inglis L-1. The L-1 never matured, and Inglis later helped Seversky to design the improved C-4.
The wider Army establishment became aware of the Mark XI in 1929 and was eventually able to buy an example in 1931. Their testing mirrored the Navy's experience; they found that the gyro stabilization worked and the sight was accurate, but it was also "entirely too complicated" to use. The Army turned its attention to further upgraded versions of their existing prototypes, replacing the older vector bombsight mechanisms with the new synchronous method of measuring the proper dropping angle.
Fully automatic bombsight
While the Mk. XI was reaching its final design, the Navy learned of the Army's efforts to develop a synchronous bombsight, and asked Norden to design one for them. Norden was initially unconvinced this was workable, but the Navy persisted and offered him a development contract in June 1929. Norden retreated to his mother's house in Zürich and returned in 1930 with a working prototype. Lieutenant Frederick Entwistle, the Navy's chief of bombsight development, judged it revolutionary.
The new design, the Mark XV, was delivered in production quality in the summer of 1931. In testing, it proved to eliminate all of the problems of the earlier Mk. XI design, the prototype delivering a markedly smaller CEP than even the latest production Mk. XIs, a result confirmed over a series of 80 bomb runs from higher altitudes. In a test on 7 October 1931, the Mk. XV dropped 50% of its bombs on a static target, the USS Pittsburgh, while a similar aircraft with the Mk. XI had only 20% of its bombs hit.
Moreover, the new system was dramatically simpler to use. After locating the target in the sighting system, the bombardier simply made fine adjustments using two control wheels throughout the bomb run. There was no need for external calculation, lookup tables or pre-run measurements; everything was carried out automatically through an internal wheel-and-disc calculator. The calculator took a short time to settle on a solution, with setups as short as six seconds, compared to the 50 seconds the Mk. XI needed to measure its ground speed. In most cases, the bomb run needed to be only 30 seconds long.
Despite this success, the design also demonstrated several serious problems. In particular, the gyroscopic platform had to be levelled out before use using several spirit levels, and then checked and repeatedly reset for accuracy. Worse, the gyros had a limited degree of movement, and if the plane banked far enough the gyro would reach its limit and have to be re-set from scratch – something that could happen even due to strong turbulence. If the gyros were found to be off, the levelling procedure took as long as eight minutes. Other minor problems were the direct current electric motors which drove the gyroscopes, whose brushes wore down quickly and left carbon dust throughout the interior of the device, and the positioning of the control knobs, which meant the bombardier could adjust only the side-to-side or the up-and-down aim at any one time, not both. But despite all of these problems, the Mark XV was so superior to any other design that the Navy ordered it into production.
Carl L. Norden Company was incorporated in 1931, supplying the sights under a dedicated source contract. In effect, the company was owned by the Navy. In 1934 the newly-forming GHQ Air Force, the purchasing arm of the U.S. Army Air Corps, selected the Norden for their bombers as well, referring to it as the M-1. However, due to the dedicated source contract, the Army had to buy the sights from the Navy. This was not only annoying for inter-service rivalry reasons, but the Air Corps' higher-speed bombers demanded several changes to the design, notably the ability to aim the sighting telescope further forward to give the bombardier more time to set up. The Navy was not interested in these changes, and would not promise to work them into the production lines. Worse, Norden's factories were having serious problems keeping up with demand for the Navy alone, and in January 1936, the Navy suspended all shipments to the Army.
Autopilot
Mk. XVs were initially fitted with the same automatic PDI as the earlier Mk. XI. In practice, it was found that the pilots had a very difficult time keeping the aircraft stable enough to match the accuracy of the bombsight. Starting in 1932 and proceeding in fits and starts for the next six years, Norden developed the Stabilized Bombing Approach Equipment (SBAE), a mechanical autopilot that attached to the bombsight. However, it was not a true "autopilot", in that it could not fly the aircraft by itself. By rotating the bombsight in relation to the SBAE, the SBAE could account for wind and turbulence and calculate the appropriate directional changes needed to bring the aircraft onto the bomb run far more precisely than a human pilot. The minor adaptations needed on the bombsight itself produced what the Army referred to as the M-4 model.
In 1937 the Army, faced with the continuing supply problems with the Norden, once again turned to Sperry Gyroscope to see if they could come up with a solution. Their earlier models had all proved unreliable, but they had continued working with the designs throughout this period and had addressed many of the problems. By 1937, Orland Esval had introduced a new AC-powered electrical gyroscope that spun at 30,000 RPM, compared to the Norden's 7,200 RPM, which dramatically improved the performance of the inertial platform. The use of three-phase AC power and inductive pickup eliminated the carbon brushes, and further simplified the design. Carl Frische had developed a new system to automatically level the platform, eliminating the time-consuming process needed on the Norden. The two collaborated on a new design, adding a second gyro to handle heading changes, and named the result the Sperry S-1. Existing supplies of Nordens continued to go to the USAAC's B-17s, while the S-1 equipped the B-24Es being sent to the 15th Air Force.
Some B-17s had been equipped with a simple heading-only autopilot, the Sperry A-3. The company had also been working on an all-electronic model, the A-5, which stabilized in all three directions. By the early 1940s, it was being used in a variety of Navy aircraft to excellent reviews. By connecting the outputs of the S-1 bombsight to the A-5 autopilot, Sperry produced a system similar to the M-4/SBAE, but it reacted far more quickly. The combination of the S-1 and A-5 so impressed the Army that on 17 June 1941 they authorized the construction of a factory and noted that "in the future all production models of bombardment airplanes be equipped with the A-5 Automatic Pilot and have provisions permitting the installation of either the M-Series [Norden] Bombsight or the S-1 Bombsight".
British interest, Tizard mission
By 1938, information about the Norden had worked its way up the Royal Air Force chain of command and was well known within that organization. The British had been developing a tachometric bombsight of their own known as the Automatic Bomb Sight, but combat experience in 1939 demonstrated the need for it to be stabilized. Work was underway as the Stabilized Automatic Bomb Sight (SABS), but it would not be available until 1940 at the earliest, and likely later. Even then, it did not feature the autopilot linkage of the Norden, and would thus find it difficult to match the Norden's performance in anything but smooth air. Acquiring the Norden became a major goal.
The RAF's first attempt, in the spring of 1938, was rebuffed by the U.S. Navy. Air Chief Marshal Edgar Ludlow-Hewitt, commanding RAF Bomber Command, demanded Air Ministry action. They wrote to George Pirie, the British air attaché in Washington, suggesting he approach the U.S. Army with an offer of an information exchange with their own SABS. Pirie replied that he had already looked into this, and was told that the U.S. Army had no licensing rights to the device as it was owned by the U.S. Navy. The matter was not helped by a minor diplomatic issue that flared up in July when a French air observer was found to be on board a crashed Douglas Aircraft Company bomber, forcing President Roosevelt to promise no further information exchanges with foreign powers.
Six months later, after a change of leadership within the U.S. Navy's Bureau of Aeronautics, on 8 March 1939, Pirie was once again instructed to ask the U.S. Navy about the Norden, this time enhancing the deal with offers of British power-operated turrets. However, Pirie expressed concern as he noted the Norden had become as much political as technical, and its relative merits were being publicly debated in Congress weekly while the U.S. Navy continued to say the Norden was "the United States' most closely guarded secret".
The RAF's desires were only further goaded on 13 April 1939, when Pirie was invited to watch an air demonstration at Fort Benning in which B-17s attacked the painted outline of a battleship. The lead aircraft and the three following B-17s hit the target, and then a flight of a dozen Douglas B-18 Bolos placed most of their bombs in a separate square outlined on the ground.
Another change of management within the Bureau of Aeronautics had the effect of making the U.S. Navy more friendly to British overtures, but no one was willing to fight the political battle needed to release the design. The Navy brass was concerned that giving the Norden to the RAF would increase its chances of falling into German hands, which could put the U.S.'s own fleet at risk. The UK Air Ministry continued increasing pressure on Pirie, who eventually stated there was simply no way for him to succeed, and suggested the only way forward would be through the highest diplomatic channels in the Foreign Office. Initial probes in this direction were also rebuffed. When a report stated that the Norden's results were three to four times as good as their own bombsights, the Air Ministry decided to sweeten the pot and suggested they offer information on radar in exchange. This too was rebuffed.
The matter eventually worked its way to the Prime Minister, Neville Chamberlain, who wrote personally to President Roosevelt asking for the Norden, but even this was rejected. The reason for these rejections was more political than technical, but the U.S. Navy's demands for secrecy were certainly important. They repeated that the design would be released only if the British could demonstrate the basic concept was common knowledge, and therefore not a concern if it fell into German hands. The British failed to convince them, even after offering to equip their examples with a variety of self-destruct devices.
This may have been ameliorated by the winter of 1939, at which point a number of articles about the Norden appeared in the U.S. popular press with reasonably accurate descriptions of its basic workings. But when these were traced back to the press corps at the U.S. Army Air Corps, the U.S. Navy was apoplectic. Instead of accepting it was now in the public domain, any discussion about the Norden was immediately shut down. This drove both the British Air Ministry and Royal Navy to increasingly anti-American attitudes when they considered sharing their own developments, notably newer ASDIC systems. By 1940 the situation on scientific exchange was entirely deadlocked as a result.
Looking for ways around the deadlock, Henry Tizard sent Archibald Vivian Hill to the U.S. to survey U.S. technical capability in order to better assess which technologies the U.S. would be willing to exchange. This effort was the start of the path that led to the famous Tizard Mission in late August 1940. Ironically, by the time the Mission was being planned, the Norden had been removed from the list of items to be discussed, and Roosevelt personally noted this was due largely to political reasons. Ultimately, although Tizard was unable to convince the U.S. to release the design, he was able to request information about its external dimensions and details on the mounting system so it could be easily added to British bombers if it were released in the future.
Production, problems, and Army standardization
The conversion of Norden Laboratories Corporation's New York City engineering lab to a production factory was a long process. Before the war, skilled craftsmen, most of them German or Italian immigrants, hand-made almost every part of the 2,000-part machine. Between 1932 and 1938, the company produced only 121 bombsights per year. During the first year after the Attack on Pearl Harbor, Norden produced 6,900 bombsights, three-quarters of which went to the U.S. Navy.
When Norden heard of the U.S. Army's dealings with Sperry, Theodore Barth called a meeting with the U.S. Army and U.S. Navy at their factory in New York City. Barth offered to build an entirely new factory just to supply the U.S. Army, but the U.S. Navy refused this. Instead, the U.S. Army suggested that Norden adapt their sight to work with Sperry's A-5, which Barth refused. Norden actively attempted to make the bombsight incompatible with the A-5.
It was not until 1942 that the impasse was finally solved by farming out autopilot production to Honeywell Regulator, who combined features of the Norden-mounted SBAE with the aircraft-mounted A-5 to produce what the U.S. Army referred to as "Automatic Flight Control Equipment" (AFCE); the unit would later be redesignated as the C-1. The Norden, now connected with the aircraft's built-in autopilot, allowed the bombardier alone to fully control minor movements of the aircraft during the bombing run.
By May 1943 the U.S. Navy was complaining that they had a surplus of devices, with full production turned over to the USAAF. After investing more than $100 million in Sperry bombsight manufacturing plants, the USAAF concluded that the Norden M-series was far superior in accuracy, dependability, and design. Sperry contracts were cancelled in November 1943. When production ended a few months later, 5,563 Sperry bombsight-autopilot combinations had been built, most of which were installed in Consolidated B-24 Liberator bombers.
Expansion of Norden bombsight production to a final total of six factories took several years. The USAAF demanded additional production to meet their needs, and eventually arranged for the Victor Adding Machine company to gain a manufacturing license, and then Remington Rand. Ironically, during this period the U.S. Navy abandoned the Norden in favor of dive bombing, reducing the demand. By the end of the war, Norden and its subcontractors had produced 72,000 M-9 bombsights for the U.S. Army Air Force alone, costing $8,800 each.
Description and operation
Background
Typical bombsights of the pre-war era worked on the "vector bombsight" principle introduced with the World War I Course Setting Bomb Sight. These systems consisted of a slide rule-type calculator that was used to calculate the effects of the wind on the bomber based on simple vector arithmetic. The mathematical principles are identical to those on the E6B calculator used to this day.
In operation, the bombardier would first take a measurement of the wind speed using one of a variety of methods, and then dial that speed and direction into the bombsight. This would move the sights to indicate the direction the plane should fly to take it directly over the target with any cross-wind taken into account, and also set the angle of the iron sights to account for the wind's effect on ground speed.
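The underlying vector arithmetic can be sketched as follows. This is a simplification of the wind-triangle problem that the CSBS and the E6B mechanize, not the mechanism of any particular sight; the angle conventions and the numbers in the example are illustrative:

```python
import math

def wind_triangle(tas: float, track_deg: float,
                  wind_speed: float, wind_from_deg: float):
    """Return (heading_deg, ground_speed) needed to hold the desired track."""
    track = math.radians(track_deg)
    wind_to = math.radians(wind_from_deg + 180.0)   # direction wind blows toward
    cross = wind_speed * math.sin(wind_to - track)  # crosswind component
    wca = math.asin(-cross / tas)                   # wind correction angle
    gs = tas * math.cos(wca) + wind_speed * math.cos(wind_to - track)
    return math.degrees(track + wca) % 360.0, gs

# Flying due north at 70 m/s true airspeed with a 15 m/s wind from the west:
# crab about 12 degrees into the wind, losing a little ground speed.
heading, gs = wind_triangle(tas=70.0, track_deg=0.0,
                            wind_speed=15.0, wind_from_deg=270.0)
print(f"heading {heading:.1f} deg, ground speed {gs:.1f} m/s")
```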
These systems had two primary problems in terms of accuracy. The first was that there were several steps that had to be carried out in sequence in order to set up the bombsight correctly, and there was limited time to do all of this during the bomb run. As a result, the accuracy of the wind measurement was always limited, and errors in setting the equipment or making the calculations were common. The second problem was that the sight was attached to the aircraft, and thus moved about during maneuvers, during which time the bombsight would not point at the target. As the aircraft had to maneuver in order to make the proper approach, this limited the time allowed to accurately make corrections. This combination of issues demanded a long bomb run.
Experiments had shown that adding a stabilizer system to a vector bombsight would roughly double the accuracy of the system. This would allow the bombsight to remain level while the aircraft maneuvered, giving the bombardier more time to make his adjustments, as well as reducing or eliminating mis-measurements when sighting off of non-level sights. However, this would not have any effect on the accuracy of the wind measurements, nor the calculation of the vectors. The Norden attacked all of these problems.
Basic operation
To improve the calculation time, the Norden used a mechanical computer inside the bombsight to calculate the range angle of the bombs. By simply dialing in the aircraft's altitude and heading, along with estimates of the wind speed and direction (in relation to the aircraft), the computer would automatically, and quickly, calculate the aim point. This not only reduced the time needed for the bombsight setup but also dramatically reduced the chance for errors. This attack on the accuracy problem was by no means unique; several other bombsights of the era used similar calculators. It was the way the Norden used these calculations that differed.
Conventional bombsights are set up pointing at a fixed angle, the range angle, which accounts for the various effects on the trajectory of the bomb. To the operator looking through the sights, the crosshairs indicate the location on the ground the bombs would impact if released at that instant. As the aircraft moves forward, the target approaches the crosshairs from the front, moving rearward, and the bombardier releases the bombs as the target passes through the line of the sights. One example of a highly automated system of this type was the RAF's Mark XIV bomb sight.
The Norden worked in an entirely different fashion, based on the "synchronous" or "tachometric" method. Internally, the calculator continually computed the impact point, as was the case for previous systems. However, the resulting range angle was not displayed directly to the bombardier or dialed into the sights. Instead, the bombardier used the sighting telescope to locate the target long in advance of the drop point. A separate section of the calculator used the inputs for altitude and airspeed to determine the angular velocity of the target, the speed at which it would be seen drifting backward due to the forward motion of the aircraft. The output of this calculator drove a rotating prism at that angular speed in order to keep the target centered in the telescope. In a properly adjusted Norden, the target remains motionless in the sights.
The Norden thus calculated two angles: the range angle based on the altitude, airspeed and ballistics; and the current angle to the target, based on the ground speed and heading of the aircraft. The difference between these two angles represented the "correction" that needed to be applied to bring the aircraft over the proper drop point. If the aircraft was properly aligned with the target on the bomb run, the difference between the range and target angles would be continually reduced, eventually to zero (within the accuracy of the mechanisms). At this moment the Norden automatically dropped the bombs.
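A toy simulation can illustrate this synchronous release logic: the sighting angle to the target closes on the computed range angle, and release occurs when the two meet. It assumes vacuum ballistics, zero trail, and made-up numbers, so it sketches the principle rather than the actual mechanism:

```python
import math

G = 9.81
h, v = 5000.0, 100.0                     # altitude (m), ground speed (m/s)

t_fall = math.sqrt(2.0 * h / G)          # bomb's fall time from this altitude
range_angle = math.atan2(v * t_fall, h)  # release when the target reaches this angle

d, dt, t = 12000.0, 0.1, 0.0             # target initially 12 km ahead
while math.atan2(d, h) > range_angle:    # current sighting angle from vertical
    d -= v * dt                          # aircraft closes on the target
    t += dt

miss = abs(d - v * t_fall)               # landing point vs. target position
print(f"released at t={t:.1f} s, range angle {math.degrees(range_angle):.1f} deg, "
      f"miss ~{miss:.0f} m")
```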
In practice, the target failed to stay centered in the sighting telescope when it was first set up. Instead, due to inaccuracies in the estimated wind speed and direction, the target would drift in the sight. To correct for this, the bombardier would use fine-tuning controls to slowly cancel out any motion through trial and error. These adjustments had the effect of updating the measured ground speed used to calculate the motion of the prisms, slowing the visible drift. Over a short period of time of continual adjustments, the drift would stop, and the bombsight would now hold an extremely accurate measurement of the exact ground speed and heading. Better yet, these measurements were being carried out on the bomb run, not before it, and helped eliminate inaccuracies due to changes in the conditions as the aircraft moved. And by eliminating the manual calculations, the bombardier was left with much more time to adjust his measurements, and thus settle at a much more accurate result.
The angular speed of the prism changes with the range of the target: consider the reverse situation, the apparent high angular speed of an aircraft passing overhead compared to its apparent speed when it is seen at a longer distance. In order to properly account for this non-linear effect, the Norden used a system of slip-disks similar to those used in differential analysers. However, this slow change at long distances made it difficult to fine-tune the drift early in the bomb run. In practice, bombardiers would often set up their ground speed measurements in advance of approaching the target area by selecting a convenient "target" on the ground that was closer to the bomber and thus had more obvious motion in the sight. These values would then be used as the initial setting when the target was later sighted.
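Evaluating the same sight-line formula at a few ranges makes the nonlinearity concrete (figures are illustrative, not the source's):

```python
import math

h, v = 6000.0, 100.0                                    # altitude (m), ground speed (m/s)
rate = lambda x: math.degrees(v * h / (h**2 + x**2))    # sight-line rotation, deg/s

for x in (20000, 15000, 3000, 0):
    print(f"{x:>6} m ahead: {rate(x):.3f} deg/s")
# Far out the rate is small (~0.08 deg/s at 20 km), so drift errors are
# hard to see; near overhead it approaches ~0.95 deg/s.
```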
System description
The Norden bombsight consisted of two primary parts: the gyroscopic stabilization platform on the left side, and the mechanical calculator and sighting head on the right side. They were essentially separate instruments, connected through the sighting prism. The sighting eyepiece was located in the middle, between the two, in a less than convenient location that required some dexterity to use.
Before use, the Norden's stabilization platform had to be righted, as it slowly drifted over time and no longer kept the sight pointed vertically. Righting was accomplished through a time-consuming process of comparing the platform's attitude to small spirit levels seen through a glass window on the front of the stabilizer. In practice, this could take as long as eight and a half minutes. The problem was made worse by the fact that the platform's range of motion was limited; it could be tumbled by strong turbulence and would then have to be reset again. This seriously limited the usefulness of the Norden, and led the RAF to reject it once they received examples in 1942. Some versions included a system that quickly righted the platform, but this "Automatic Gyro Leveling Device" proved to be a maintenance problem and was removed from later examples.
Once the stabilizer was righted, the bombardier would dial in the initial setup for altitude, speed, and direction. The prism would then be "clutched out" of the computer, allowing it to be moved rapidly to search for the target on the ground. Later Nordens were equipped with a reflector sight to aid in this step. Once the target was located, the computer was clutched back in and started moving the prism to follow the target. The bombardier would then begin making adjustments to the aim. As all of the controls were located on the right and had to be operated while sighting through the telescope, another problem with the Norden was that the bombardier could adjust only the vertical or the horizontal aim at any given time; his other arm was normally busy bracing himself above the telescope.
On top of the device, to the right of the sight, were two final controls. The first was the setting for "trail", which was pre-set at the start of the mission for the type of bombs being used. The second was the "index window" which displayed the aim point in numerical form. The bombsight calculated the current aim point internally and displayed this as a sliding pointer on the index. The current sighting point, where the prism was aimed, was also displayed against the same scale. In operation, the sight would be set far in advance of the aim point, and as the bomber approached the target the sighting point indicator would slowly slide toward the aim point. When the two met, the bombs were automatically released. The aircraft was moving over , so even minor interruptions in timing could dramatically affect aim.
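The sensitivity to release timing is easy to quantify: the along-track error is simply ground speed multiplied by the timing slip. The speed figure below is an assumption (the source's value is elided in this copy):

```python
ground_speed = 100.0                      # m/s, illustrative only
for delay_s in (0.1, 0.5, 1.0):
    print(f"{delay_s:>4} s late -> bombs land {ground_speed * delay_s:.0f} m long")
```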
Early examples, and most used by the Navy, had an output that directly drove a Pilot Direction Indicator meter in the cockpit. This eliminated the need to manually signal the pilot and removed a possible source of error.
In U.S. Army Air Forces use, the Norden bombsight was attached to its autopilot base, which was in turn connected to the aircraft's autopilot. The Honeywell C-1 could be used by the flight crew as a conventional autopilot during the journey to the target area, through a control panel in the cockpit, but was more commonly used under the direct command of the bombardier. The Norden's box-like autopilot unit sat behind and below the sight and attached to it at a single rotating pivot. After control of the aircraft was passed to the bombardier for the bomb run, he would first rotate the entire Norden so the vertical line in the sight passed through the target. From that point on, the autopilot would attempt to guide the bomber along the course of the bombsight, correcting the heading to zero out the drift rate fed to it through a coupling. As the aircraft turned onto the correct angle, a belt and pulley system rotated the sight back to match the changing heading. The autopilot was another reason for the Norden's accuracy: it ensured the aircraft quickly settled onto the correct course and held it much more accurately than the pilots could.
Later in the war, the Norden was combined with other systems to widen the conditions under which successful bombing was possible. Notable among these was the H2X ("Mickey") radar system, which was used directly with the Norden bombsight. The radar proved most accurate in coastal regions, as the water surface and the coastline produced a distinctive radar echo.
Combat use
Early tests
The Norden bombsight was developed during a period of United States non-interventionism when the dominant U.S. military strategy was the defense of the U.S. and its possessions. A considerable amount of this strategy was based on stopping attempted invasions by sea, both with direct naval power, and starting in the 1930s, with USAAC airpower. Most air forces of the era invested heavily in dive bombers or torpedo bombers for these roles, but these aircraft generally had limited range; long-range strategic reach would require the use of an aircraft carrier. The Army felt the combination of the Norden and B-17 Flying Fortress presented an alternate solution, believing that small formations of B-17s could successfully attack shipping at long distances from the USAAC's widespread bases. The high altitudes the Norden allowed would help increase the range of the aircraft, especially if equipped with a turbocharger, as with each of the four Wright Cyclone 9 radial engines of the B-17.
In 1940, Barth claimed that "we do not regard a 15 foot (4.6 m) square... as being a very difficult target to hit from an altitude of ". At some point the company started using pickle-barrel imagery to reinforce the bombsight's reputation. After the device became publicly known in 1942, the Norden company in 1943 rented Madison Square Garden and inserted its own show between the presentations of the Ringling Bros. and Barnum & Bailey Circus. The show involved dropping a wooden "bomb" into a pickle barrel, at which point a pickle popped out.
These claims were greatly exaggerated; in 1940 the average score for an Air Corps bombardier was a circular error of from , not from . Real-world performance was poor enough that the Navy de-emphasized level attacks in favor of dive bombing almost immediately. The Grumman TBF Avenger could mount the Norden, like the preceding Douglas TBD Devastator, but combat use was disappointing and was eventually described as "hopeless" during the Guadalcanal Campaign. Although the Navy gave up on the device in 1942, bureaucratic inertia meant it was supplied as standard equipment until 1944.
USAAF anti-shipping operations in the Far East were generally unsuccessful. In early operations during the Battle of the Philippines, B-17s claimed to have sunk one minesweeper and damaged two Japanese transports, the cruiser , and the destroyer . However, all of these ships are known to have suffered no damage from air attack during that period. In other early battles, including the Battle of Coral Sea or Battle of Midway, no claims were made at all, although some hits were seen on docked targets. The USAAF eventually replaced all of their anti-shipping B-17s with other aircraft, and came to use the skip bombing technique in direct low-level attacks.
Air war in Europe
As U.S. participation in the war started, the U.S. Army Air Forces drew up widespread and comprehensive bombing plans based on the Norden. They believed the B-17 had a 1.2% probability of hitting a target from , meaning that 220 bombers would be needed for a 93% probability of one or more hits. This was not considered a problem, and the USAAF forecast the need for 251 combat groups to provide enough bombers to fulfill their comprehensive pre-war plans.
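The arithmetic behind the planning figure checks out under an independence assumption:

```python
# If each bomber independently has a 1.2% chance of hitting, the chance
# that at least one of 220 bombers scores a hit is:
p_single, n = 0.012, 220
p_any = 1 - (1 - p_single) ** n
print(f"{p_any:.1%}")   # ~93.0%
```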
After earlier combat trials proved troublesome, the Norden bombsight and its associated AFCE were used on a wide scale for the first time on the 18 March 1943 mission to Bremen-Vegesack, Germany. The 303d Bombardment Group dropped 76% of its load within a ring, representing a CEP well under . As at sea, many early missions over Europe demonstrated varied results; on wider inspection, only 50% of American bombs fell within a of the target, and American flyers estimated that as many as 90% of bombs could miss their targets. The average CEP in 1943 was , meaning that only 16% of the bombs fell within of the aiming point. A bomb, standard for precision missions after 1943, had a lethal radius of only .
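CEP figures translate into hit fractions under the usual circular-normal model, in which half the bombs fall within one CEP of the aim point. The numbers below are illustrative assumptions (the article's specific distances are elided in this copy), and real wartime distributions had heavier tails than this idealization, so actual hit fractions ran lower.

```python
def frac_within(radius_m: float, cep_m: float) -> float:
    """Fraction of impacts inside `radius_m` for a circular-normal miss
    distribution with the given CEP (by definition, 50% within the CEP)."""
    return 1 - 2 ** (-(radius_m / cep_m) ** 2)

# e.g. assuming a CEP of 370 m, the share landing within 300 m would be:
print(f"{frac_within(300, 370):.0%}")   # ~37%
```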
Faced with these poor results, Curtis LeMay started a series of reforms in an effort to address the problems. In particular, he introduced the "combat box" formation, which densely packed the bombers to provide maximum defensive firepower. As part of this change, he identified the best bombardiers in his command and assigned them to the lead bomber of each box. Instead of every bomber in the box using its Norden individually, only the lead bombardier actively used the Norden, and the rest of the box's aircraft dropped their bombs when they saw the lead's bombs leaving his aircraft. Although this spread the bombs over the area of the combat box, it could still improve accuracy over individual efforts. It also addressed a problem in which aircraft, each slaved to its own autopilot on the same target, would drift into each other. These changes did improve accuracy, suggesting that much of the problem was attributable to the bombardier. However, true "precision" attacks still proved difficult or impossible.
When Jimmy Doolittle took over command of the 8th Air Force from Ira Eaker in early 1944, precision bombing attempts were dropped. Area bombing, similar to the RAF's efforts, was widely used, with 750- and then 1,000-bomber raids against large targets. The main targets were railroad marshaling yards (27.4% of the bomb tonnage dropped), airfields (11.6%), oil refineries (9.5%), and military installations (8.8%). To some degree, the targets themselves were secondary; Doolittle used the bombers as an irresistible lure to draw Luftwaffe fighters up into the ever-increasing swarms of Allied long-range escort fighters. As these missions broke the Luftwaffe, missions could be carried out at lower altitudes, or in bad weather when the H2X radar could be used. In spite of the abandonment of precision attacks, accuracy nevertheless improved; by 1945, the 8th was putting up to 60% of its bombs within , a CEP of about .
Still pursuing precision attacks, various remotely guided weapons were developed, notably the AZON and VB-3 Razon bombs and similar weapons.
Adaptations
The Norden operated by mechanically turning the viewpoint so the target remained stationary in the display. The mechanism was designed for the low angular rate encountered at high altitudes, and thus had a relatively low range of operational speeds. The Norden could not rotate the sight fast enough for bombing at low altitude, for instance. Typically this was solved by removing the Norden completely and replacing it with simpler sighting systems.
A good example of its replacement was the refitting of the Doolittle Raiders with a simple iron sight. Designed by Capt. C. Ross Greening, the sight was mounted to the existing pilot direction indicator, allowing the bombardier to make corrections remotely, like the bombsights of an earlier era.
However, the Norden combined two functions, aiming and stabilization. While the former was not useful at low altitudes, the latter could be even more useful, especially if flying in rough air near the surface. This led James "Buck" Dozier to mount a Doolittle-like sight on top of the stabilizer in the place of the sighting head in order to attack German submarines in the Caribbean Sea. This proved extraordinarily useful and was soon used throughout the fleet.
Postwar use
In the postwar era, the United States mostly stopped developing new precision bombsights. Initially this was because of the war's end but, as budgets increased during the Cold War, the deployment of nuclear weapons meant accuracies of around were sufficient, well within the capabilities of existing radar bombing systems. Only one major new bombsight was developed: the Y-4, fitted to the Boeing B-47 Stratojet. This sight combined the images from a radar and a lens system in front of the aircraft, allowing the two to be compared directly through a binocular eyepiece.
Bombsights on older aircraft, like the Boeing B-29 Superfortress and the later B-50, were left in their wartime state. When the Korean War began, these aircraft were pressed into service and the Norden once again became the USAF's primary bombsight. This occurred again when the Vietnam War started; in this case retired World War II technicians had to be called up in order to make the bombsights operational again. Its last use in combat was by the Naval Air Observation Squadron Sixty-Seven (VO-67), during the Vietnam War. The bombsights were used in Operation Igloo White for implanting Air-Delivered Seismic Intrusion Detectors (ADSID) along the Ho Chi Minh Trail.
Wartime security
Since the Norden was considered a critical wartime instrument, bombardiers were required to take an oath during their training stating that they would defend its secret with their own lives if necessary. In the event of an emergency landing in enemy territory, the bombardier was to disable the Norden by shooting its most important components. The Douglas TBD Devastator torpedo bomber was originally equipped with flotation bags in the wings to aid the aircrew's escape after ditching, but they were removed once the Pacific War began; this ensured that the aircraft would sink, taking the Norden with it.
After each completed mission, bomber crews left the aircraft with a bag which they deposited in a safe ("the Bomb Vault"). This secure facility ("the AFCE and Bombsight Shop") was typically in one of the base's Nissen hut (Quonset hut) support buildings. The Bombsight Shop was manned by enlisted men who were members of a Supply Depot Service Group ("Sub Depot") attached to each USAAF bombardment group. These shops not only guarded the bombsights but performed critical maintenance on the Norden and related control equipment. This was probably the most technically skilled ground-echelon job, and certainly the most secret, of all the work performed by Sub Depot personnel. The non-commissioned officer in charge and his staff had to have a high aptitude for understanding and working with mechanical devices.
As the end of World War II neared, the bombsight was gradually downgraded in its secrecy; however, it was not until 1944 that the first public display of the instrument occurred.
Espionage
Despite the security precautions, the entire Norden system had been passed to the Germans before the war started. Herman W. Lang, a German spy, had been employed by the Carl L. Norden Company. During a visit to Germany in 1938, Lang conferred with German military authorities and reconstructed plans of the confidential materials from memory. In 1941, Lang, along with the 32 other German agents of the Duquesne Spy Ring, was arrested by the FBI and convicted in the largest espionage prosecution in U.S. history. He received a sentence of 18 years in prison on espionage charges and a two-year concurrent sentence under the Foreign Agents Registration Act.
German instruments were fairly similar to the Norden, even before World War II. A similar set of gyroscopes provided a stabilized platform for the bombardier to sight through, although the complex interaction between the bombsight and autopilot was not used. The Carl Zeiss Lotfernrohr 7, or Lotfe 7, was an advanced mechanical system similar to the Norden bombsight, although in form it was more similar to the Sperry S-1. It started replacing the simpler Lotfernrohr 3 and BZG 2 in 1942, and emerged as the primary late-war bombsight used in most Luftwaffe level bombers. The use of the autopilot allowed single-handed operation, and was key to bombing use of the single-crewed Arado Ar 234.
Japanese forces captured examples of the Norden, primarily from North American B-25 Mitchell bombers. They developed a simplified and more compact version known as the Type 4 Automatic Bombing Sight, but found it too complex to mass produce. Further development led to the Type 1 Model 2 Automatic Bombing Sight which began limited production just before the end of the war. Approximately 20 were in service at the end of the war.
Postwar analysis
Postwar analysis placed the overall accuracy of daylight precision attacks with the Norden at about the same level as radar bombing efforts. The 8th Air Force put 31.8% of its bombs within from an average altitude of , the 15th Air Force averaged 30.78% from , and the 20th Air Force against Japan averaged 31% from .
Many factors have been put forth to explain the Norden's poor real-world performance. Over Europe, cloud cover was a common explanation, although performance did not improve even in favorable conditions. Over Japan, bomber crews soon discovered strong winds at high altitudes, the so-called jet streams, but the Norden bombsight could compensate only for modest wind speeds with minimal wind shear. Additionally, the bombing altitude over Japan reached up to , while most of the testing had been done well below . This extra altitude compounded factors that could previously be ignored; the shape of, and even the paint on, the bomb changed its aerodynamic properties, and, at that time, nobody knew how to calculate the trajectory of bombs that reached supersonic speeds during their fall.
The RAF developed their own designs. Having moved to night bombing, where visual accuracy was difficult under even the best conditions, they introduced the much simpler Mark XIV bomb sight, designed not for accuracy above all but for ease of use under operational conditions. In testing in 1944, it was found to offer a CEP of , about what the Norden was offering at that time. This led to a debate within the RAF over whether to use their own tachometric design, the Stabilized Automatic Bomb Sight, or the Mk. XIV on future bombers. The Mk. XIV ultimately served into the 1960s, while the SABS faded from service as the Lancaster and Lincoln bombers fitted with it were retired by the late 1940s.
See also
Mary Babnik Brown, whose 1944 hair donation is often said to have been used for the bombsight's crosshairs, though this is incorrect
Glasgow Army Airfield Norden Bombsight Storage Vault
Lotfernrohr 7, a similar German design of late-war vintage
Stabilized Automatic Bomb Sight, a British bomb sight
Mark XIV bomb sight, a British bomb sight
Notes
References
Bibliography
Further reading
Stewart Halsey Ross: "Strategic Bombing by the United States in World War II"
"Bombardier: A History", Turner Publishing, 1998
"The Norden Bombsight
"Bombing – Students' Manual"
"Bombardier's Information File"
Stephen McFarland: "America's Pursuit of Precision Bombing, 1910–1945"
Charles Babbage Institute, University of Minnesota. Pasinski produced the prototype for the bombsight. He designed production tools and supervised production of the bombsight at Burroughs Corporation.
Charles Babbage Institute, University of Minnesota. Information on the Norden bombsight, which Burroughs produced beginning in 1942.
External links
Flight 1945 Norden Bomb Sight
How the Norden Bombsight Does Its Job by V. Torrey, June 1945 Popular Science
Norden bombsight images and information from twinbeech.com
Optical bombsights
World War II military equipment of the United States
Military computers
Mechanical computers
American inventions
Fire-control computers of World War II
Military equipment introduced in the 1930s | Norden bombsight | Physics,Technology | 11,011 |
3,145,482 | https://en.wikipedia.org/wiki/Enos%20%28chimpanzee%29 | Enos (born about 1957 – died November 4, 1962) was a chimpanzee launched into space by NASA, following his predecessor Ham. He was the only non-human primate to orbit the Earth, and the third hominid to do so after cosmonauts Yuri Gagarin and Gherman Titov. Enos's flight occurred on November 29, 1961.
Enos was acquired from the Miami Rare Bird Farm on April 3, 1960. He completed more than 1,250 training hours at the University of Kentucky and Holloman Air Force Base. Training was more intense for him than for Ham, who had become the first great ape in space in January 1961, because Enos was exposed to weightlessness and higher g-forces for longer periods of time. His training included psychomotor instruction and aircraft flights.
Enos was selected for his Project Mercury flight only three days before launch. Two months prior, NASA launched Mercury-Atlas 4 on September 13, 1961, to conduct an identical mission with a "crewman simulator" on board. Enos flew into space aboard Mercury-Atlas 5 on November 29, 1961. He completed his first orbit in 1 hour and 28.5 minutes.
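Treating the quoted 88.5-minute orbit as circular, Kepler's third law gives a rough mean altitude; this back-of-the-envelope check is ours, and MA-5's actual orbit was mildly elliptical.

```python
import math

GM_EARTH = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6              # mean Earth radius, m
period_s = 88.5 * 60           # orbital period, s

a = (GM_EARTH * period_s**2 / (4 * math.pi**2)) ** (1 / 3)    # semi-major axis
print(f"mean altitude ~ {(a - R_EARTH) / 1000:.0f} km")       # ~200 km
```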
Enos was scheduled to complete three orbits, but the mission was aborted after two due to two issues: the capsule overheating and a malfunctioning "avoidance conditioning" test that subjected the primate to 76 electrical shocks. According to one history of primatology, "The chimpanzee, about five years old, behaved like a true hero: despite the malfunctions of the electronic system, he conscientiously performed all the tasks he had learned during the entire flight of over three hours... Enos demonstrated that he was careful to successfully complete his mission and that he perfectly understood what was expected of him."
After his space capsule made an ocean landing, Enos "had become angry and frustrated at the three-hour wait" before being retrieved by U.S. Navy seamen.
The capsule was brought aboard in the late afternoon and Enos was immediately taken below deck by his Air Force handlers. The recovery ship, USS Stormes, then delivered Enos to the hospital at Kindley U.S. Air Force Base in Bermuda, where he was found to be in good shape. On December 1, 1961, Enos left Bermuda for Cape Canaveral, and eventually Holloman Air Force Base.
Enos's flight was a full dress rehearsal for the next Mercury launch on February 20, 1962, which would make John Glenn the first American to orbit Earth.
Enos was nicknamed "the Penis" due to his frequent fondling of himself.
On November 4, 1962, Enos died of shigellosis-related dysentery, which was resistant to the antibiotics then known. He had been under constant observation for two months before his death. Pathologists reported no symptoms that could be attributed or related to his earlier space flight.
See also
Monkeys and apes in space
Albert II, first monkey and first primate in space
Little Joe 2 flight with Sam, Project Mercury rhesus monkey
Animals in space
List of individual apes
One Small Step: The Story of the Space Chimps: (2008 documentary)
Félicette: First cat in space
References
External links
Enos the Chimpanzee travels to space on NASA's Mercury Atlas 5 1960, YouTube video
"Voyage of space chimpanzee Enos ended in Bermuda" by Gail Westerfield
One Small Step: The Story of the Space Chimps, documentary on history of chimpanzees used in space travel
Atlantic.com article about electrical shocks
MentalFloss.com on 54th anniversary of flight
1950s animal births
1962 animal deaths
1961 in spaceflight
Animals in space
Deaths from dysentery
Individual chimpanzees
Project Mercury
Year of birth missing
Place of birth missing
Non-human primate astronauts of the American space program | Enos (chimpanzee) | Chemistry,Biology | 788 |
60,057,016 | https://en.wikipedia.org/wiki/Estriol%20phosphate | Estriol phosphate (E3P), or estriol 17β-phosphate, also known as estra-1,3,5(10)-triene-3,16α,17β-triol 17β-(dihydrogen phosphate), is an estrogen which was never marketed. It is an estrogen ester, specifically an ester of estriol with phosphoric acid, and acts as a prodrug of estriol by cleavage via phosphatase enzymes in the body. Estriol phosphate is contained within the chemical structure of polyestriol phosphate (a polymer of estriol phosphate), and this medication has been marketed for medical use (brand names Gynäsan, Klimadurin, Triodurin).
See also
List of estrogen esters § Estriol esters
References
Abandoned drugs
Estriol esters
Phosphate esters
Synthetic estrogens | Estriol phosphate | Chemistry | 197 |
2,864,015 | https://en.wikipedia.org/wiki/Pan%20Twardowski | Pan Twardowski (Polish: Pan Twardowski, ), also known as Master Twardowski (Polish: Mistrz Twardowski), is a sorcerer in Polish folklore and literature who made a deal with the Devil. Twardowski sold his soul in exchange for special powers – such as being able to summon for King Sigismund Augustus the spirit of his deceased wife – and eventually met a tragic fate.
The tale of Twardowski exists in various versions, and forms the basis for many works of fiction, including the humorous ballad "Pani Twardowska" by Adam Mickiewicz. The folklore is commonly assumed to have been heavily inspired by the similar German story of Faust, with which there are many parallels.
Legend
According to an old legend, Twardowski was a nobleman (szlachcic) who lived in Kraków in the 16th century. He sold his soul to the devil in exchange for great knowledge and magical powers. However, Twardowski wanted to outwit the devil by including a special clause in the contract, stating that the devil could only take Twardowski's soul to Hell during his visit to Rome – a place the sorcerer never intended to go. Other variants of the story have Twardowski being sold to the devil as a child by his father.
With the devil's aid, Twardowski quickly rose to wealth and fame, eventually becoming a courtier of King Sigismund Augustus, who sought consolation in magic and astrology after the death of his beloved wife, Barbara Radziwiłł. He was said to have summoned the ghost of the late queen to comfort the grieving monarch, using a magic mirror. The sorcerer also wrote two books, both dictated to him by the devil – a book on magic and an encyclopedia.
After years of evading his fate, Twardowski was eventually tricked by the devil and caught not in the city, but at an inn called Rzym (Rome in Polish). While being spirited away, Twardowski started to pray to the Virgin Mary, who made the devil drop his victim midway to hell. Twardowski fell on the Moon where he lives to this day. His only companion is his sidekick whom he once turned into a spider; from time to time Twardowski lets the spider descend to Earth on a thread and bring him news from the world below.
Historical Twardowski
Dr. Jan Kuchta in his 1935 doctoral thesis "Cracovian Warlock of XVI Century. Master Twardowski" suggested that Twardowski may have been a German nobleman who was born in Nuremberg and studied in Wittenberg before coming to Kraków. His name Lorenz Dhur was Latinised to Laurentius Durus and in turn rendered as Twardowski in Polish; durus and twardy mean "hard" in Latin and Polish respectively. There is also some speculation that this legend was inspired by the life of either John Dee or his associate Edward Kelley, both of whom lived for a time in Kraków.
"Pan" – used in modern Polish as a universal honorific and polite form of address – at the time the tale developed, was reserved for members of the nobility (szlachta) and was roughly equivalent to the English "Sir" (see Polish name), but in the English language "Sir" is used before a man's given name (e.g., "Sir Isaac") or his complete name (e.g., "Sir Isaac Newton"), not before his surname only (e.g., "Sir Newton").
Twardowski's given name is sometimes given as Jan (John), though most versions of the tale do not mention a given name. Pan Twardowski may have been confused with the Polish Catholic priest writer, Jan Twardowski.
Twardowski in literature, music, film and gaming
The legend of Pan Twardowski has inspired a great many Polish, Czech, Ukrainian, Russian, and German poets, novelists, composers, directors, and other artists.
One of the best known literary works featuring Pan Twardowski is the humorous ballad Pani Twardowska by Adam Mickiewicz (1822). In this version of the story, Twardowski agrees to be taken to Hell on condition that the Devil spends a year living with his wife, Pani Twardowska. The Devil, however, prefers to run away and thus Pan Twardowski is saved. In 1869 Stanisław Moniuszko wrote music for the ballad.
Other works based on the legend include:
Pan Tvardovsky, an opera by Alexey Verstovsky, libretto by Mikhail Zagoskin (1828);
Pan Tvardovsky, Zagoskin's short story from the collection An Evening on the Khopyor (1834);
Mistrz Twardowski [Master Twardowski], a novel by Józef Ignacy Kraszewski (1840);
Tvardovsky, a ballad by Semen Hulak-Artemovsky;
Pan Twardowski, a ballet by Adolf Gustaw Sonnenfeld (1874);
Pan Tvardovski, an opera by Ivan Zajc (1880);
Twardowski, a poem by Jaroslav Vrchlický (1885);
Mistrz Twardowski, a poem by Leopold Staff (1902);
Pan Twardowski, a ballad by Lucjan Rydel (1906);
Pan Tvardovsky, a film by Ladislas Starevich (1917);
Pan Twardowski, a ballet by Ludomir Różycki (1921);
Pan Twardowski, a film by Wiktor Biegański (1921);
Pan Twardowski, czarnoksiężnik polski [Pan Twardowski, a Polish sorcerer], a novel by Wacław Sieroszewski (1930);
Pan Twardowski, a film by Henryk Szaro, screenplay by Wacław Gąsiorowski (1936);
Pan Twardowski oder Der Polnische Faust [Pan Twardowski or The Polish Faust], a novel by Matthias Werner Kruse (1981);
Dzieje Mistrza Twardowskiego (The Story of Master Twardowski), a film by Krzysztof Gradowski (1995).
Twardowsky, a short sci-fi film from the Polish Legends series, directed by Tomasz Bagiński (2015);
Hearts of Stone (2015), an expansion to the RPG The Witcher 3: Wild Hunt, with a main storyline heavily inspired by the legend.
Pan Twardowski is also a popular character in the folk art of Poland's Kraków region; he appears in some Kraków Nativity scenes (szopki). He is typically depicted as a Polish noble either riding a rooster or standing on the Moon.
Places associated with Pan Twardowski
Pan Twardowski is said to have lived in or near Kraków, the capital of Poland at the time. Different places in Kraków claim to be the exact location of Twardowski's house. The sorcerer might have lived either somewhere in the city center, near the Rynek Główny or Ulica Grodzka, or across the River Vistula in the village of Krzemionki (now part of Kraków).
Across Poland, there are various inns and pubs called Rzym ("Rome"), all of which claim to be the one where Pan Twardowski met the devil. The oldest of these inns dates back only to the late 17th century, about 100 years after Twardowski's time. The one in Sucha is probably the best known.
In the sacristy of a church in Węgrów hangs a polished metal plate claimed to be the magic mirror that once belonged to Pan Twardowski. According to legend, it was possible to see future events reflected in the mirror until it was broken in 1812 by Emperor Napoléon Bonaparte of France, when he saw in it his future retreat from Russia and the collapse of his empire.
It is also said that Pan Twardowski spent some time in the city of Bydgoszcz, where, in his memory, a figure was recently mounted in a window of a tenement, overseeing the Old Town. At 1:13 p.m. and 9:13 p.m. the window opens and Pan Twardowski appears, to the accompaniment of weird music and devilish laughter. He takes a bow, waves his hand, and then disappears. This little show gathers crowds of amused spectators.
See also
Faust
Pan Tadeusz
Simon Magus
The Smith and the Devil
Theophilus of Adana
References
Further reading
European folklore characters
Legendary Polish people
Supernatural legends
Polish folklore
Fictional Polish people
Fictional characters who have made pacts with devils
Moon myths
Deal with the Devil
Fictional characters from the 16th century
Culture in Kraków | Pan Twardowski | Astronomy | 1,815 |
1,209,622 | https://en.wikipedia.org/wiki/Microfossil | A microfossil is a fossil that is generally between 0.001 mm and 1 mm in size, the visual study of which requires the use of light or electron microscopy. A fossil which can be studied with the naked eye or low-powered magnification, such as a hand lens, is referred to as a macrofossil.
Microfossils are a common feature of the geological record, from the Precambrian to the Holocene. They are most common in deposits of marine environments, but also occur in brackish water, fresh water and terrestrial sedimentary deposits. While every kingdom of life is represented in the microfossil record, the most abundant forms are protist skeletons or microbial cysts from the Chrysophyta, Pyrrhophyta, Sarcodina, acritarchs and chitinozoans, together with pollen and spores from the vascular plants.
Overview
A microfossil is a descriptive term applied to fossilized plants and animals whose size is just at or below the level at which the fossil can be analyzed by the naked eye. A commonly applied cutoff point between "micro" and "macro" fossils is 1 mm. Microfossils may either be complete (or near-complete) organisms in themselves (such as the marine plankters foraminifera and coccolithophores) or component parts (such as small teeth or spores) of larger animals or plants. Microfossils are of critical importance as a reservoir of paleoclimate information, and are also commonly used by biostratigraphers to assist in the correlation of rock units.
Microfossils are found in rocks and sediments as the microscopic remains of what were once life forms such as plants, animals, fungi, protists, bacteria and archaea. Terrestrial microfossils include pollen and spores. Marine microfossils found in marine sediments are the most common microfossils. Everywhere in the oceans, microscopic protist organisms multiply prolifically, and many grow tiny skeletons which readily fossilise. These include foraminifera, dinoflagellates and radiolarians. Palaeontologists (geologists who study fossils) are interested in these microfossils because they can use them to determine how environments and climates have changed in the past, and where oil and gas can be found today.
Some microfossils are formed by colonial organisms such as Bryozoa (especially the Cheilostomata), which have relatively large colonies but are classified by the fine skeletal details of the colony's small individuals. As another example, many fossil genera of Foraminifera, which are protists, are known from shells (called tests) that were as big as coins, such as the genus Nummulites.
In 2017, fossilized microorganisms, or microfossils, were discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that may be as old as 4.28 billion years old, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" (in a geological time-scale), after ocean formation 4.41 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. Nonetheless, life may have started even earlier, at nearly 4.5 billion years ago, as claimed by some researchers.
Index fossils
Index fossils, also known as guide fossils, indicator fossils or dating fossils, are the fossilized remains or traces of particular plants or animals that are characteristic of a particular span of geologic time or environment, and can be used to identify and date the containing rocks. To be practical, index fossils must have a limited vertical time range, wide geographic distribution, and rapid evolutionary trends. Rock formations separated by great distances but containing the same index fossil species are thereby known to have both formed during the limited time that the species lived.
Index fossils were originally used to define and identify geologic units, then became a basis for defining geologic periods, and then for faunal stages and zones.
Species of microfossils such as acritarchs, chitinozoans, conodonts, dinoflagellate cysts, ostracods, pollen, spores and foraminiferans are amongst the many that have been identified as index fossils and are widely used in biostratigraphy. Different fossils work well for sediments of different ages. To work well, index fossils must be widespread geographically, so that they can be found in many different places. They must also be short-lived as a species, so that the period of time during which they could be incorporated in the sediment is relatively narrow. The longer-lived the species, the poorer the stratigraphic precision, so fossils that evolve rapidly are preferred.
Often biostratigraphic correlations are based on a faunal assemblage rather than on an individual species; this allows greater precision, as the time span in which all of the species in an assemblage existed together is narrower than the time span of any one member. Further, if only one species is present in a sample, it can mean either that (1) the stratum was formed within the known fossil range of that organism, or (2) that the fossil range of the organism was incompletely known, and the stratum extends its known range. If a fossil is easy to preserve and easy to identify, more precise dating of the stratigraphic layers is possible.
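The narrowing effect of an assemblage can be expressed as a simple interval intersection. The sketch below is illustrative only; the species names and date ranges are hypothetical, not taken from any real zonation.

```python
# First and last appearance dates (Ma, millions of years ago) for three
# hypothetical index species found together in one sample. The stratum's
# age must lie in the intersection of their ranges.
ranges = {
    "species A": (470, 440),   # (first appearance, last appearance)
    "species B": (455, 430),
    "species C": (460, 445),
}

older_bound = min(first for first, last in ranges.values())   # youngest first-appearance
younger_bound = max(last for first, last in ranges.values())  # oldest last-appearance

if older_bound >= younger_bound:
    print(f"stratum age between {older_bound} and {younger_bound} Ma")  # 455-445 Ma
else:
    print("ranges do not overlap; re-examine the identifications")
```

Note that the resulting 10-million-year window is narrower than any single species' range (30, 25 and 15 million years respectively), which is the point of using the assemblage.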
Composition
Microfossils can be classified by their composition as: (a) siliceous, as in diatoms and radiolaria, (b) calcareous, as in coccoliths and foraminifera, (c) phosphatic, as in the study of some vertebrates, or (d) organic, as in the pollen and spores studied in palynology. This division focuses on differences in the mineralogical and chemical composition of microfossil remains rather than on taxonomic or ecological distinctions.
Siliceous microfossils: Siliceous microfossils include diatoms, radiolarians, silicoflagellates, ebridians, phytoliths, some scolecodonts (worm jaws), and sponge spicules.
Calcareous microfossils: Calcareous (CaCO3) microfossils include coccoliths, foraminifera, calcareous dinoflagellate cysts, and ostracods (seed shrimp).
Phosphatic microfossils: Phosphatic microfossils include conodonts (tiny oral structures of an extinct chordate group), some scolecodonts (worm jaws), shark spines and teeth and other fish remains (collectively called ichthyoliths).
Organic microfossils: The study of organic microfossils is called palynology. Organic microfossils include pollen, spores, chitinozoans (thought to be the egg cases of marine invertebrates), scolecodonts (worm jaws), acritarchs, dinoflagellate cysts, and fungal remains.
Organic-walled
Palynomorphs
Pollen grain
Pollen grains have a tough outer wall containing sporopollenin, which affords them some resistance to the rigours of the fossilisation process that destroy weaker objects. Pollen is produced in huge quantities, and there is an extensive fossil record of pollen grains, often disassociated from their parent plant. The discipline of palynology is devoted to the study of pollen, which can be used both for biostratigraphy and to gain information about the abundance and variety of plants alive, which can itself yield important information about paleoclimates. Pollen analysis has also been widely used for reconstructing past changes in vegetation and their associated drivers.
Pollen is first found in the fossil record in the late Devonian period, but at that time it is indistinguishable from spores. It increases in abundance until the present day.
Plant spores
A spore is a unit of sexual or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. Spores form part of the life cycles of many plants, algae, fungi and protozoa. Bacterial spores are not part of a sexual cycle but are resistant structures used for survival under unfavourable conditions.
Fungal spores
Chitinozoa
Chitinozoa are a taxon of flask-shaped, organic walled marine microfossils produced by an as yet unknown organism.
Common from the Ordovician to Devonian periods (i.e. the mid-Paleozoic), the millimetre-scale organisms are abundant in almost all types of marine sediment across the globe. This wide distribution, and their rapid pace of evolution, makes them valuable biostratigraphic markers.
Their bizarre form has made classification and ecological reconstruction difficult. Since their discovery in 1931, suggestions of protist, plant, and fungal affinities have all been entertained. The organisms have been better understood as improvements in microscopy facilitated the study of their fine structure, and it has been suggested that they represent either the eggs or juvenile stage of a marine animal. However, recent research has suggested that they represent the test of a group of protists with uncertain affinities.
The ecology of chitinozoans is also open to speculation; some may have floated in the water column, while others may have attached themselves to other organisms. Most species were particular about their living conditions and tend to be most common in specific paleoenvironments. Their abundance also varied with the seasons.
Acritarchs
Acritarchs, Greek for confused origins, are organic-walled microfossils, known from about to the present. Acritarchs are not a specific biological taxon, but rather a group with uncertain or unknown affinities. Most commonly they are composed of thermally altered acid insoluble carbon compounds (kerogen). While the classification of acritarchs into form genera is entirely artificial, it is not without merit, as the form taxa show traits similar to those of genuine taxa — for example the 'explosion' in the Cambrian and the mass extinction at the end of the Permian.
Acritarch diversity reflects major ecological events such as the appearance of predation and the Cambrian explosion. Precambrian marine diversity was dominated by acritarchs. They underwent a boom around , increasing in abundance, diversity, size and complexity of shape, and especially in the number and size of spines. Their increasingly spiny forms in the last 1 billion years may indicate an increased need for defence against predation.
Acritarchs may include the remains of a wide range of quite different kinds of organisms—ranging from the egg cases of small metazoans to resting cysts of many kinds of chlorophyta (green algae). It is likely that most acritarch species from the Paleozoic represent various stages of the life cycle of algae that were ancestral to the dinoflagellates. The nature of the organisms associated with older acritarchs is generally not well understood, though many are probably related to unicellular marine algae. In theory, when the biological source (taxon) of an acritarch does become known, that particular microfossil is removed from the acritarchs and classified with its proper group.
Acritarchs were most likely eukaryotes. While archaea, bacteria and cyanobacteria (prokaryotes) usually produce simple fossils of a very small size, eukaryotic unicellular fossils are usually larger and more complex, with external morphological projections and ornamentation such as spines and hairs that only eukaryotes can produce; as most acritarchs have external projections (e.g., hair, spines, thick cell membranes, etc.), they are predominantly eukaryotes, although simple eukaryote acritarchs also exist.
Acritarchs are found in sedimentary rocks from the present back into the Archean. They are typically isolated from siliciclastic sedimentary rocks using hydrofluoric acid, but are occasionally extracted from carbonate-rich rocks. They are excellent candidates for index fossils for dating rock formations in the Paleozoic Era, especially when other fossils are not available. Because most pre-Triassic acritarchs are thought to be marine, they are also useful for palaeoenvironmental interpretation. The Archean and earliest Proterozoic microfossils termed "acritarchs" may actually be prokaryotes. The earliest eukaryotic acritarchs known (as of 2020) are from between 1950 and 2150 million years ago.
Recent application of atomic force microscopy, confocal microscopy, Raman spectroscopy, and other analytic techniques to the study of the ultrastructure, life history, and systematic affinities of mineralized, but originally organic-walled, microfossils has shown that some acritarchs are fossilized microalgae. In the end, it may well be, as Moczydłowska et al. suggested in 2011, that many acritarchs will in fact turn out to be algae.
Archean cells
Cells can be preserved in the rock record because their cell walls are made of proteins which convert to the organic material kerogen as the cell breaks down after death. Kerogen is insoluble in mineral acids, bases, and organic solvents. Over time, it is mineralised into graphite or graphite-like carbon, or degrades into oil and gas hydrocarbons. There are three main types of cell morphologies. Though there is no established range of sizes for each type, spheroid microfossils can be as small as about 8 micrometres, filamentous microfossils have diameters typically less than 5 micrometres and have a length that can range from tens of micrometres to 100 micrometres, and spindle-like microfossils can be as long as 50 micrometres.
Mineralised
Siliceous
Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep sea sediments, and make up approximately 15% of the ocean floor. Oozes are defined as sediments which contain at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica Si(O2), as opposed to calcareous oozes, which are made from skeletons of calcium carbonate organisms (i.e. coccolithophores). Silica (Si) is a bioessential element and is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth and ocean fertility are all factors that affect the opal silica content in seawater and the presence of siliceous oozes.
Phytoliths (Greek for plant stones) are rigid, microscopic structures made of silica, found in some plant tissues and persisting after the decay of the plant. These plants take up silica from the soil, whereupon it is deposited within different intracellular and extracellular structures of the plant. Phytoliths come in varying shapes and sizes. The term "phytolith" is sometimes used to refer to all mineral secretions by plants, but more commonly refers to siliceous plant remains.
Calcareous
The term calcareous can be applied to a fossil, sediment, or sedimentary rock which is formed from, or contains a high proportion of, calcium carbonate in the form of calcite or aragonite. Calcareous sediments (limestone) are usually deposited in shallow water near land, since the carbonate is precipitated by marine organisms that need land-derived nutrients. Generally speaking, the farther from land sediments fall, the less calcareous they are. Some areas can have interbedded calcareous sediments due to storms, or changes in ocean currents. Calcareous ooze is a form of calcium carbonate derived from planktonic organisms that accumulates on the sea floor. This can only occur if the ocean is shallower than the carbonate compensation depth. Below this depth, calcium carbonate begins to dissolve in the ocean, and only non-calcareous sediments are stable, such as siliceous ooze or pelagic red clay.
Ostracods
Ostracods are widespread crustaceans, generally small, sometimes known as seed shrimps. They are flattened from side to side and protected with a calcareous or chitinous bivalve-like shell. There are about 70,000 known species, 13,000 of which are extant. Ostracods are typically about in size, though they can range from , with some species such as Gigantocypris being too large to be regarded as microfossils.
Conodonts
Conodonts (cone tooth in Greek) are tiny, extinct jawless fish that resemble eels. For many years, they were known only from tooth-like microfossils found in isolation and now called conodont elements. The evolution of mineralized tissues has been a puzzle for more than a century. It has been hypothesized that the first mechanism of chordate tissue mineralization began either in the oral skeleton of conodont or the dermal skeleton of early agnathans. Conodont elements are made of a phosphatic mineral, hydroxylapatite.
The element array constituted a feeding apparatus that is radically different from the jaws of modern animals. They are now termed "conodont elements" to avoid confusion. The three forms of teeth (i.e., coniform cones, ramiform bars, and pectiniform platforms) probably performed different functions. For many years, conodonts were known only from enigmatic tooth-like microfossils (200 micrometres to 5 millimetres in length) which occur commonly, but not always in isolation, and were not associated with any other fossil.
Conodonts are globally widespread in sediments. Their many forms are considered index fossils, used to define and identify geological periods and to date strata. Conodont elements can be used to estimate the temperatures that rocks have been exposed to, which allows the thermal maturation levels of sedimentary rocks to be determined, an important consideration in hydrocarbon exploration. Conodont teeth are the earliest vertebrate teeth found in the fossil record, and some are among the sharpest that have ever been recorded.
Scolecodonts
Scolecodonts (worm jaws in Latin) are the tiny jaws of polychaete annelids of the order Eunicida, a diverse and abundant group of worms that has inhabited various marine environments for the past 500 million years. Composed of a highly resistant organic substance, scolecodonts are frequently found as fossils in rocks as old as the late Cambrian. Since the worms themselves were soft-bodied, and hence extremely rarely preserved in the fossil record, their jaws constitute the main evidence of polychaetes in the geological past and the only way to reconstruct the evolution of this important group of animals. The small size of scolecodonts, usually less than 1 mm, puts them in the microfossil category. They are a common by-product of conodont, chitinozoan and acritarch samples, but sometimes they occur in sediments where other fossils are very rare or absent.
Cloudinids
The cloudinids were an early metazoan family that lived in the late Ediacaran period, about 550 million years ago, and became extinct at the base of the Cambrian. They formed small, millimetre-size conical fossils consisting of calcareous cones nested within one another; the appearance of the organism itself remains unknown. The name Cloudina honors Preston Cloud. Fossils consist of a series of stacked vase-like calcite tubes, though the original mineral composition is unknown. Cloudinids comprise two genera: Cloudina itself is mineralized, whereas Conotubus is at best weakly mineralized, whilst sharing the same "funnel-in-funnel" construction.
Cloudinids had a wide geographic range, reflected in the present distribution of localities in which their fossils are found, and are an abundant component of some deposits. Cloudina is usually found in association with microbial stromatolites, which are limited to shallow water, and it has been suggested that cloudinids lived embedded in the microbial mats, growing new cones to avoid being buried by silt. However no specimens have been found embedded in mats, and their mode of life is still an unresolved question.
The classification of the cloudinids has proved difficult: they were initially regarded as polychaete worms, and then as coral-like cnidarians on the basis of what look like buds on some specimens. Current scientific opinion is divided between classifying them as polychaetes and regarding it as unsafe to classify them as members of any broader grouping. In 2020, a new study showed the presence of Nephrozoan type guts, the oldest on record, supporting the bilaterian interpretation.
Cloudinids are important in the history of animal evolution for two reasons. They are among the earliest and most abundant of the small shelly fossils with mineralized skeletons, and therefore feature in the debate about why such skeletons first appeared in the Late Ediacaran. The most widely supported answer is that their shells are a defense against predators, as some Cloudina specimens from China bear the marks of multiple attacks, which suggests they survived at least a few of them. The holes made by predators are approximately proportional to the size of the Cloudina specimens, and Sinotubulites fossils, which are often found in the same beds, have so far shown no such holes. These two points suggest that predators attacked in a selective manner, and the evolutionary arms race which this indicates is commonly cited as a cause of the Cambrian explosion of animal diversity and complexity.
Dinoflagellate cysts
Some dinoflagellates produce resting stages, called dinoflagellate cysts or dinocysts, as part of their lifecycles. Dinoflagellates are mainly represented in the fossil record by these dinocysts, typically 15 to 100 micrometres in diameter, which accumulate in sediments as microfossils. Organic-walled dinocysts have resistant cell walls made out of dinosporin. There are also calcareous dinoflagellate cysts and siliceous dinoflagellate cysts.
Dinocysts are produced by a proportion of dinoflagellates as a dormant, zygotic stage of their lifecycle. These dinocyst stages are known to occur in 84 of the 350 described freshwater dinoflagellate species, and in about 10% of the known marine species. Dinocysts have a long geological record, with geochemical markers suggesting a presence that goes back to the Early Cambrian.
Sponge spicules
Spicules are structural elements found in most sponges. They provide structural support and deter predators. The meshing of many spicules serves as the sponge's skeleton, providing structural support and defense against predators.
Smaller, microscopic spicules can become microfossils, and are referred to as microscleres. Larger spicules visible to the naked eye are called megascleres. Spicule can be calcareous, siliceous, or composed of spongin. They are found in a range of symmetry types.
Freshwater sediments
Marine sediments
Sediments at the bottom of the ocean have two main origins, terrigenous and biogenous.
Terrigenous sediments account for about 45% of the total marine sediment, and originate in the erosion of rocks on land, transported by rivers and land runoff, windborne dust, volcanoes, or grinding by glaciers.
Biogenous
Biogenous sediments account for the other 55% of the total sediment, and originate in the skeletal remains of marine protists (single-celled plankton and benthos microorganisms). Much smaller amounts of precipitated minerals and meteoric dust can also be present. Ooze, in the context of a marine sediment, does not refer to the consistency of the sediment but to its biological origin. The term ooze was originally used by John Murray, the "father of modern oceanography", who proposed the term radiolarian ooze for the silica deposits of radiolarian shells brought to the surface during the Challenger expedition. A biogenic ooze is a pelagic sediment containing at least 30 per cent skeletal remains of marine organisms.
Diatomaceous earth
Siliceous ooze
Kerogen
Alginite
Lithified
Micropaleontology
The study of microfossils is called micropaleontology. In micropaleontology, what would otherwise be distinct categories are grouped together based solely on their size, including microscopic organisms and minute parts of larger organisms. Numerous sediments have microfossils, which serve as significant biostratigraphic, paleoenvironmental, and paleoceanographic markers. Their widespread presence around the world and physical toughness makes microfossils important for biostratigraphy, while the manner in which they have reacted to environmental changes makes them helpful when reconstructing past environments.
See also
Biosignature
Biostratigraphy
Chemostratigraphy
Gunflint microfossils
Macrofossil
Protists in the fossil record
Protist shell
Scale microfossils
Small carbonaceous fossil
References
Other sources
Microfossils | Microfossil | Chemistry | 5,452 |
25,711,199 | https://en.wikipedia.org/wiki/Kepler-4 | Kepler-4 is a sunlike star located about 1626 light-years away in the constellation Draco. It is in the field of view of the Kepler Mission, a NASA mission tasked with finding Earth-like planets. Kepler-4b, a Neptune-sized planet that orbits extremely close to its star, was discovered in its orbit and made public by the Kepler team on January 4, 2010. Kepler-4b was the first discovery by the Kepler satellite, and its confirmation helped to demonstrate the spacecraft's effectiveness.
Nomenclature and history
Kepler-4 is named for the Kepler spacecraft, a NASA telescope tasked with finding Earth-like planets that transit their stars as seen from Earth. As the previous three planets that Kepler confirmed had already been confirmed by others, Kepler-4 and its planet were the first to be discovered by the Kepler team. The star and its system were announced in Washington, D.C. at the 215th meeting of the American Astronomical Society on January 4, 2010, along with Kepler-5, Kepler-6, Kepler-7, and Kepler-8. Of the presented planets, Kepler-4b was the smallest, around the size of planet Neptune. The discovery of Kepler-4b and the other planets presented at the AAS meeting helped to confirm that the Kepler spacecraft was indeed functional.
The Harlan J. Smith Telescope at McDonald Observatory in Fort Davis, Texas was used by astronomers from the University of Texas at Austin to follow up on Kepler's discoveries and confirm them. Telescopes in Hawaii, California, Arizona, and the Canary Islands were also used to confirm the findings.
Characteristics
Kepler-4 is a G0-type star, which is similar to the Sun, except slightly brighter. The star is 1.117 Msun and 1.555 Rsun, or about 112% the mass and 156% the radius of the Sun. With a metallicity [Fe/H] of 0.09 (± 0.10), Kepler-4 is more metal-rich than the Sun, a figure that is important in that metal-rich stars tend to have orbiting planets more often than metal-poor stars. Kepler-4 is also about 6.7 billion years old; in comparison, the Sun is 4.6 billion years old. In addition, Kepler-4 has an effective temperature of 5781 (± 76) K, almost identical within the errors to that of the Sun, which is 5778 K.
As seen from Earth, Kepler-4 has an apparent magnitude of 12.7. It is, as a result, not visible with the naked eye.
Planetary system
Kepler-4b's discovery was announced on January 4, 2010. It is about the size of Neptune, at 0.077 MJ (7.7% the mass of Jupiter) and 0.357 RJ (35.7% the radius of Jupiter). The planet orbits its star every 3.213 days at 0.045 AU from the star; for comparison, Mercury orbits the Sun at 0.39 AU. Kepler-4b's orbital eccentricity was initially assumed to be 0; however, a subsequent independent reanalysis of the discovery data found a value of 0.25 ± 0.12. Likewise, the temperature of the planet is assumed to be 1650 K, far hotter than Jupiter's, which is about 124 K (not considering its internal heat and atmosphere).
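As a rough consistency check (not part of the discovery announcement), the quoted orbital distance follows from Newton's form of Kepler's third law, with the period expressed in years and the planet's mass neglected:

$a \approx \left( \frac{M_*}{M_\odot} \left( \frac{P}{1\ \mathrm{yr}} \right)^{2} \right)^{1/3} \mathrm{AU} = \left( 1.117 \times \left( \frac{3.213}{365.25} \right)^{2} \right)^{1/3} \mathrm{AU} \approx 0.044\ \mathrm{AU},$

in agreement with the 0.045 AU figure quoted above.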
A search for transit-timing variations in all 17 quarters of Kepler data did not detect any evidence of additional planets.
See also
List of extrasolar planets
Kepler Mission
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Draco (constellation)
Planetary transit variables | Kepler-4 | Astronomy | 753 |
17,245,684 | https://en.wikipedia.org/wiki/ASM%20International | ASM (previously known as ASM International N.V., originally standing for Advanced Semiconductor Materials) is a Dutch headquartered multinational corporation that specializes in the design, manufacturing, sales and service of semiconductor wafer processing equipment for the fabrication of semiconductor devices. ASM's products are used by semiconductor manufacturers in front-end wafer processing in their semiconductor fabrication plants. ASM's technologies include atomic layer deposition, epitaxy, chemical vapor deposition and diffusion.
The company was founded by Arthur del Prado (1931-2016) as 'Advanced Semiconductor Materials' in 1964. From 2008 until 2020, son of Arthur del Prado, Chuck del Prado was CEO.
ASM pioneered important aspects of many established wafer-processing technologies used in industry, including lithography, deposition, ion implantation, single-wafer epitaxy, and in recent years atomic layer deposition. Semiconductor equipment companies ASML, ASM Pacific Technology (ASMPT) and Besi are former divisions of ASM.
ASM headquarters is located in Almere, the Netherlands. The company has R&D sites in Almere (the Netherlands), Helsinki (Finland), Leuven (Belgium, near IMEC), Phoenix (Arizona), Tama (Japan), and Dongtan (South Korea). Manufacturing primarily occurs in Singapore and Dongtan (South Korea). ASM also has sales and service offices across the globe, including the United States, South Korea, China, Taiwan, Japan, Singapore and Israel. As of 2021, it has 3,312 staff, located in 14 countries.
The shares of the company are listed on the Euronext Amsterdam. In March 2020, ASM was promoted to the AEX index. ASM has a minority stake in ASM Pacific Technology, a Hong Kong–based company active in semiconductor assembly, packaging and surface-mount technology.
Technology
To create a semiconductor chip, many individual steps are performed using various types of wafer processing equipment, including photolithographic patterning, depositing thin layers, etching to remove material, thermal treatments, and other steps. ASM's systems are designed for deposition processes, when thin films, or layers, of various materials are grown or deposited onto the wafer. Many different thin-film layers are deposited to complete the full sequence of process steps necessary to manufacture a chip.
ASM's technology development is driven by its customers' goal to build faster, cheaper, and more powerful semiconductor chips with reduced energy consumption. This goal drives the need to shrink the dimensions of components on the chip, targeting to double the number of components per unit area on a chip every two years (Moore's law). As part of this scaling of dimensions, ASM supplies its customers – chip manufacturers – with machines that deposit ever thinner films of semiconductor materials. ASM also develops deposition processes for new materials to be used in semiconductor fabrication.
During the past 15 years, an increasing array of new materials has been introduced in the fabrication of chips. These new materials were required to achieve the necessary performance improvements of chips, as outlined by Moore's law. For instance, in 2007 in a MOSFET transistor, the silicon oxide gate dielectric was replaced with a high-κ material, one with a higher dielectric constant than silicon oxide. In this particular case, ASM pioneered the chemical process and the new deposition method called atomic layer deposition during nearly a decade of R&D. In addition, increasingly precise deposition methods are required as components on a chip such as transistors moved from planar to 3D structures, like FinFETs in the past decade. ASM has a leading position in single wafer atomic layer deposition (ALD).
Research
ASM offers a number of methods and accompanying machines to deposit these thin films of materials. The company tries to expand the applicability of its deposition technologies and machines as much as possible. R&D is critical in that effort. In 2021, the company spent 151 million euro on R&D (or 9% of its annual revenues). R&D activities stretch from basic research of new materials to the application of new materials in chip manufacturing.
Products
ASM designs and sells both single-wafer deposition tools, in which the process is performed one wafer at a time, and so-called batch tools, in which the deposition is performed on multiple wafers at a time. The prices of the company's systems vary, but are typically several million euros per system.
The products of ASM can be categorized by deposition method:
Atomic Layer Deposition is a layer-by-layer process that results in the deposition of thin films one atomic layer at a time in a highly controlled manner. Layers are formed during reaction cycles by alternately pulsing precursors and reactants and purging with inert gas in between each pulse. ASM offers single wafer ALD tools in two technology segments: thermal ALD and plasma enhanced ALD (PEALD). ASM's ALD tools include Synergis, Pulsar and EmerALD. PEALD tools include Eagle XP8 and the XP8 QCM.
Epitaxy is a process that is used for depositing precisely controlled crystalline silicon-based layers that are important for semiconductor device electrical properties. The silicon epitaxy process can be used to modify the electrical characteristics of the wafer surface to create high-performance transistors during the manufacturing of semiconductor chips. ASM's epitaxy tools are single wafer tools and include Intrepid and Epsilon.
Chemical Vapor Deposition is a chemical deposition process in which the wafer is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired film. Within Chemical Vapor Deposition (CVD) ASM offers two types of tools: single-wafer plasma enhanced CVD (PECVD) and batch low pressure CVD (LPCVD). ASM provides single-wafer PECVD processes on the Dragon XP8 tool. ASM provides batch LPCVD/diffusion processes on the vertical furnace A400 DUO and novel Sonora tools.
History
1960s: In 1964, Arthur del Prado founds ASM as 'Advanced Semiconductor Materials' in Bilthoven, the Netherlands. Initially the company operates as a sales agent in semiconductor fabrication technology in Europe. In 1968, the company is formally registered as a private limited company.
1970s: ASM starts to design, manufacture and sell chemical vapor deposition equipment. In 1974 it acquires Fico Toolings, a Dutch manufacturer of semiconductor molds. A Hong Kong sales office ASM Asia, now known and traded as ASM Pacific Technology, is established in 1975. ASM America is founded in Phoenix, Arizona, in 1976. Sale of ASM's horizontal plasma-enhanced chemical vapor deposition furnaces drive the company's growth.
1980s: Following an initial public offering on the Nasdaq in May 1981, the company expands. In 1982 ASM Japan is established. ASM invests in new semiconductor fabrication technologies, like lithography, ion implantation, epitaxy, and wire bonding. In 1988, the company divests ASML Holding N.V., ASM Ion Implant, and it lists its Hong Kong–based activities as ASM Pacific Technology on the Hong Kong stock exchange in 1989.
1990s: The company reorganizes thoroughly between 1991 and 1994. In 1993, ASM divests ASM Fico to Berliner Electro Holding, now known as Besi. ASM focusses on vertical low-pressure chemical vapor deposition furnaces by ASM Europe, single wafer plasma-enhanced chemical vapor deposition by ASM Japan and single wafer epitaxy by ASM America. From 1996 onwards, the company is also listed on the Euronext, Amsterdam. ASM retains a majority stake in ASM Pacific Technology.
2000s: ASM expands again with investments in 300-mm wafer technology and atomic layer deposition. In 2007, the company successfully brings atomic layer deposition from R&D to high-volume production via the high-κ metal gate application. At the same time, hedge funds question the company's stake in ASM Pacific Technology. In 2008 Arthur del Prado is succeeded as CEO by his son, Chuck del Prado. In 2009 headquarters move from Bilthoven to Almere, the Netherlands.
2010s: The company returns to structural profitability after execution of a worldwide restructuring program that includes the implementation of a product driven organization, a single global sales organization, consolidation of manufacturing in Singapore, and the establishment of a global human resources, finance, IT, operational excellence and environment, health and safety organization. The application of (plasma enhanced) atomic layer deposition in multiple patterning and high-κ metal gate drives ASM's growth. Other products include epitaxy, PECVD and vertical furnace. Its stake in ASM Pacific Technology is reduced to 25%.
2020s: In 2020, the company is included in the AEX index, which comprises the top 25 companies listed on the Euronext Amsterdam stock exchange. The same year, after 12 years as CEO, Chuck del Prado decided to step down, and was succeeded by Benjamin Loh. Between 2020 and 2022, ASM renewed its vertical furnace product line with the A400DUO (200mm wafers) and Sonora (300mm wafers).
Finances
Revenues
ASM sells its equipment to semiconductor manufacturers worldwide, with the majority of its revenues from Asian customers. In 2021, 1.41 billion euro of the total 1.73 billion euro in revenues was generated through equipment sales; the rest came from spares and service.
Market capitalization
Shares of ASM have been traded on the Euronext stock exchange since 1996, and since March 2020 ASM has been included in the AEX index. The market capitalization of ASM Pacific Technology is no longer consolidated after ASM's interest in ASM Pacific Technology decreased to 25 percent in 2013. Between 1981 and 2015 ASM was also listed on the Nasdaq.
In 2018 share price averaged at € 48.62 resulting in an average market capitalization of 2.53 billion euro. In 2019 average closing price was € 68.98, resulting in an average market capitalization of 3.38 billion euro. Market capitalization at year-end 2021 was 18.88 billion euro, based on the closing share price of €388.70 on Euronext Amsterdam on December 31, 2021.
References
External links
Companies based in Flevoland
Multinational companies headquartered in the Netherlands
Equipment semiconductor companies
Electronics companies established in 1964
Organisations based in Almere
1964 establishments in the Netherlands
Companies listed on Euronext Amsterdam
Companies formerly listed on the Nasdaq
Companies in the AEX index | ASM International | Engineering | 2,205 |
68,103,177 | https://en.wikipedia.org/wiki/Arturo%20Campos | Arturo Campos (1934 – September 5, 2001) was an American electrical engineer who worked at NASA on the electrical systems for the Apollo and Space Shuttle programs. He played a major role in devising a solution to the emergency that arose during the Apollo 13 mission.
Early life and education
Campos was born into a Mexican American family in Laredo, Texas; his father was a mechanic. He graduated in 1952 from Martin High School, attended Laredo Junior College, and in 1956 earned a degree in electrical engineering from the University of Texas.
Career
He worked at Kelly Air Force Base as an aircraft maintenance supervisor before joining NASA in September 1963. At the Johnson Space Center, he played a major role in developing the electrical systems for both the Apollo spacecraft and the Space Shuttle. On April 13, 1970, he was the subsystem manager responsible for the lunar module power system when the Apollo 13 mission suffered a loss of power after an oxygen tank explosion disabled its fuel cells; he led the way in devising a solution so that the three astronauts aboard could return to Earth safely. He retired from NASA in 1980 and became a consultant in electrical engineering in Houston.
While at the Johnson Space Center, Campos established its branch of the League of United Latin American Citizens, becoming its first president in 1974; he was also a member of the employees' Hispanic Heritage Program and served as Equal Employment Opportunity and Affirmative Action Program representative.
Personal life and death
Campos and his wife, Petra T. Campos, had three daughters. He died of a heart attack at his home in Seabrook, Texas, at 66.
Honors and legacy
Campos shared in the Presidential Medal of Freedom that was awarded to the Mission Control staff after the Apollo 13 incident.
He was inducted into the Martin High School Hall of Fame in 2002.
After a public contest, his name was given to Commander Moonikin Campos, the male mannequin used to test radiation exposure and other hazards on the Artemis 1 lunar mission in 2022.
References
1934 births
2001 deaths
American electrical engineers
University of Texas alumni
Martin High School (Laredo, Texas) alumni
People from Laredo, Texas
Apollo 13
Artemis program
American people of Mexican descent
NASA people | Arturo Campos | Astronomy | 433 |
11,004,282 | https://en.wikipedia.org/wiki/Shailer%20Mathews | Shailer Mathews (1863–1941) was an American liberal Christian theologian, involved with the Social Gospel movement.
Career
Mathews was born on May 26, 1863, in Portland, Maine, and graduated from Colby College. He was a progressive, advocating social concerns as part of the Social Gospel message, and subjecting biblical texts to scientific study, in opposition to contemporary conservative Christians. He incorporated evolutionary theory into his religious views, noting that the two were not mutually exclusive. He remained a devout Baptist for his entire life, and helped establish the Northern Baptist Convention, serving as its president in 1915. Mathews was a prolific author, served as president of the Chicago Society of Biblical Research twice (in 1898–1899 and 1928–1929), and also served as dean of the Divinity School of the University of Chicago (from 1908 to 1933). An endowed chair in his honor, the Shailer Mathews Professorship at the University of Chicago Divinity School, has recently been held by Franklin I. Gamwell and Hans Dieter Betz. He died on October 23, 1941. His ashes are interred in the crypt of First Unitarian Church of Chicago.
Select publications
The Social Teachings of Jesus, 1897
A History of New Testament Times in Palestine, 1899
The French Revolution, 1900
The Messianic Hope in the New Testament, 1905
The Church and the Changing Order, 1907
The Social Gospel, 1909
The Gospel and the modern Man, 1910
The Social Teaching of Jesus, 1910
Scientific Management in Churches, 1911
The Individual and the Social Gospel, 1914
The Spiritual Interpretation of History, 1916
Patriotism and Religion, 1918
The Validity of American Ideals, 1922
The Faith of Modernism, 1924
Jesus on Social Institutions, 1928
The Atonement and the Social Process, 1930
The Growth of the Idea of God, 1931
Immortality and the Cosmic Process, 1933
Christianity and Social Process, 1934
Creative Christianity, 1935
New Faith for Old: An Autobiography, 1936
The Church and the Christian, 1938
Is God Emeritus? 1940
See also
Ernest DeWitt Burton
References
Footnotes
Bibliography
External links
Photograph of Shailer Mathews
Mathews House
History of New Testament Times in Palestine Macmillan Company, 1899
Guide to the Shailer Mathews Papers 1892-1942 at the University of Chicago Special Collections Research Center
1863 births
1941 deaths
Writers from Portland, Maine
American theologians
American biblical scholars
American historians of religion
Colby College alumni
University of Chicago Divinity School faculty
Baptist ministers from the United States
Theistic evolutionists | Shailer Mathews | Biology | 482 |
7,766,835 | https://en.wikipedia.org/wiki/Anamorphism | In computer programming, an anamorphism is a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B. Then you apply f to B to get C, and so on until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence.
The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type denotes the assignment of a coalgebra to its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as unfolds.
The categorical dual (aka opposite) of the anamorphism is the catamorphism.
Anamorphisms in functional programming
In functional programming, an anamorphism is a generalization of the concept of unfolds on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which are parameterized by functions that determine the next single step of the construction.
The data type in question is defined as the greatest fixed point ν X . F X of a functor F. By the universal property of final coalgebras, there is a unique coalgebra morphism A → ν X . F X for any other F-coalgebra a : A → F A. Thus, one can define functions from a type A _into_ a coinductive datatype by specifying a coalgebra structure a on A.
Example: Potentially infinite lists
As an example, the type of potentially infinite lists (with elements of a fixed type value) is given as the fixed point [value] = ν X . value × X + 1, i.e. a list consists either of a value and a further list, or it is empty. A (pseudo-)Haskell definition might look like this:
data [value] = (value:[value]) | []
It is the fixed point of the functor F value, where:
data Maybe a = Just a | Nothing
type F value x = Maybe (value, x)
One can easily check that indeed the type [value] is isomorphic to F value [value], and thus [value] is the fixed point.
(Also note that in Haskell, least and greatest fixed points of functors coincide, therefore inductive lists are the same as coinductive, potentially infinite lists.)
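To make this check explicit, one can write the two conversion functions witnessing the isomorphism (a minimal sketch; the names out and into are assumptions for the example):

out :: [value] -> Maybe (value, [value])
out []     = Nothing
out (x:xs) = Just (x, xs)

into :: Maybe (value, [value]) -> [value]
into Nothing        = []
into (Just (x, xs)) = x : xs

Both composites out . into and into . out are the identity, witnessing [value] ≅ Maybe (value, [value]) = F value [value].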
The anamorphism for lists (then usually known as unfold) would build a (potentially infinite) list from a state value. Typically, the unfold takes a state value x and a function f that yields either a pair of a value and a new state, or a singleton to mark the end of the list. The anamorphism would then begin with a first seed, compute whether the list continues or ends, and in case of a nonempty list, prepend the computed value to the recursive call to the anamorphism.
A Haskell definition of an unfold, or anamorphism for lists, called ana, is as follows:
ana :: (state -> Maybe (value, state)) -> state -> [value]
ana f stateOld = case f stateOld of
    Nothing                -> []
    Just (value, stateNew) -> value : ana f stateNew
We can now implement quite general functions using ana, for example a countdown:
f :: Int -> Maybe (Int, Int)
f current = let oneSmaller = current - 1
            in if oneSmaller < 0
               then Nothing
               else Just (oneSmaller, oneSmaller)
This function will decrement an integer and output it at the same time, until it is negative, at which point it will mark the end of the list. Correspondingly, ana f 3 will compute the list [2,1,0].
Anamorphisms on other data structures
An anamorphism can be defined for any recursive type, according to a generic pattern, generalizing the second version of ana for lists.
For example, the unfold for the tree data structure
data Tree a = Leaf a | Branch (Tree a) a (Tree a)
is as follows
ana :: (b -> Either a (b, a, b)) -> b -> Tree a
ana unspool x = case unspool x of
    Left a          -> Leaf a
    Right (l, x, r) -> Branch (ana unspool l) x (ana unspool r)
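For illustration (not from the original article), the following hypothetical helper uses this unfold to grow a full binary tree of a given depth, labeling every node with its remaining depth; the name fullTree and the labeling scheme are assumptions for the example:

fullTree :: Int -> Tree Int
fullTree = ana step
  where
    step 0 = Left 0                  -- depth exhausted: emit a Leaf
    step d = Right (d - 1, d, d - 1) -- emit a Branch and recurse on both sides

For example, fullTree 2 builds a seven-node tree whose root is labeled 2, its children 1, and its leaves 0.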
To better see the relationship between the recursive type and its anamorphism, note that Tree and List can be defined thus:
newtype List a = List {unCons :: Maybe (a, List a)}
newtype Tree a = Tree {unNode :: Either a (Tree a, a, Tree a)}
The analogy with ana appears by renaming b in its type:
newtype List a = List {unCons :: Maybe (a, List a)}
anaList :: (list_a -> Maybe (a, list_a)) -> (list_a -> List a)
newtype Tree a = Tree {unNode :: Either a (Tree a, a, Tree a)}
anaTree :: (tree_a -> Either a (tree_a, a, tree_a)) -> (tree_a -> Tree a)
With these definitions, the argument to the constructor of the type has the same type as the return type of the first argument of ana, with the recursive mentions of the type replaced with b.
History
One of the first publications to introduce the notion of an anamorphism in the context of programming was the paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer et al., which was in the context of the Squiggol programming language.
Applications
Functions like zip and iterate are examples of anamorphisms. zip takes a pair of lists, say ['a','b','c'] and [1,2,3] and returns a list of pairs [('a',1),('b',2),('c',3)]. Iterate takes a thing, x, and a function, f, from such things to such things, and returns the infinite list that comes from repeated application of f, i.e. the list [x, (f x), (f (f x)), (f (f (f x))), ...].
zip (a:as) (b:bs) = if (as==[]) || (bs ==[]) -- || means 'or'
                    then [(a,b)]
                    else (a,b):(zip as bs)
zip _ _ = [] -- added base case: if either list is empty, the result is empty
iterate f x = x:(iterate f (f x))
To prove this, we can implement both using our generic unfold, ana:

zip2 = ana unsp
  where
    unsp (a:as, b:bs) = Just ((a,b), (as,bs)) -- emit a pair, advance both lists
    unsp _            = Nothing               -- stop when either list is empty

iterate2 f = ana (\x -> Just (x, f x))        -- never signals an end
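As a quick sanity check (a hypothetical session, not from the original text), the ana-based versions agree with the hand-written ones; note that zip2 takes its two lists as a single pair, since the unfold threads exactly one seed value:

take 5 (iterate2 (*2) 1)  -- [1,2,4,8,16]
zip2 ("abc", [1,2,3])     -- [('a',1),('b',2),('c',3)]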
In a language like Haskell, even the abstract functions fold, unfold and ana are merely defined terms, as we have seen from the definitions given above.
Anamorphisms in category theory
In category theory, anamorphisms are the categorical dual of catamorphisms (and catamorphisms are the categorical dual of anamorphisms).
That means the following.
Suppose (A, fin) is a final F-coalgebra for some endofunctor F of some category into itself.
Thus, fin is a morphism from A to FA, and since it is assumed to be final we know that whenever (X, f) is another F-coalgebra (a morphism f from X to FX), there will be a unique homomorphism h from (X, f) to (A, fin), that is, a morphism h from X to A such that fin . h = F h . f.
Then for each such f we denote by ana f that uniquely specified morphism h.
In other words, we have the following defining relationship, given some fixed F, A, and fin as above:

h = ana f  if and only if  fin . h = F h . f
Notation
A notation for ana f found in the literature is [(f)]. The brackets used are known as lens brackets, after which anamorphisms are sometimes referred to as lenses.
See also
Morphism
Morphisms of F-algebras
From an initial algebra to an algebra: Catamorphism
An anamorphism followed by a catamorphism: Hylomorphism
Extension of the idea of catamorphisms: Paramorphism
Extension of the idea of anamorphisms: Apomorphism
References
External links
Anamorphisms in Haskell
Category theory
Recursion schemes | Anamorphism | Mathematics | 1,938 |
55,831,305 | https://en.wikipedia.org/wiki/Ministry%20of%20Oil%20Industry | The Ministry of Oil Industry (Minnefteprom; ) was a government ministry in the Soviet Union.
History
The Ministry of the Petroleum Industry was created by the 28 December 1948 ukase of the Presidium of the Supreme Soviet USSR merging the Ministry of the Petroleum Industry of the Southern and Western Regions, the Ministry of the Petroleum Industry of the Eastern Regions, Glavgaztoppron (Main Administration of Synthetic Liquid Fuel and Gas) and Glavneftegazstroy (Main Administration for the Construction of the Petroleum and Gas Industry) under the Council of Ministers USSR, and Glavneftesnab (Main Administration for the Supply of Petroleum Products to the National Economy) under Gossnab USSR (State Committee of the Council of Ministers USSR for Material and Technical Supply to the National Economy).
The Ministry of Petroleum Industry (People's Commissariat of Petroleum Industry prior to 15 March 1946) was established 12 October 1939 by ukase of the Presidium, Supreme Soviet USSR, and was subdivided on 4 March 1946 into the Ministry of the Petroleum Industry of the Southern and Western Regions of the USSR and the Ministry of the Petroleum Industry of the Eastern Region of the USSR.
Prior to 1939 it had been a part of the All-Union People's Commissariat of the Fuel Industry, established 24 January 1939 by ukase of the Presidium, Supreme Soviet USSR, as a result of the subdivision of the People's Commissariat of Heavy Industry USSR.
Organization
The Ministry of the Petroleum Industry was an all-Union ministry which administered the petroleum extraction, petroleum refining, and gas industries, the production of liquid fuels, the construction of installations for the petroleum and gas industries, the construction of petroleum-drilling machinery, and the marketing of petroleum products.
The Ministry of the Petroleum Industry was headed by a minister who directed the entire work of the Ministry of the Petroleum Industry and of the enterprises and organizations under its jurisdiction. The Minister of the Petroleum Industry issued, within the limits of his competence, orders and directives based on, and in execution of, existing laws, as well as of decrees and regulations of the Council of Ministers USSR, and he checked on their execution.
The Minister of the Petroleum Industry appointed directors of main administrations, administrations, and divisions of the Ministry and of the enterprises and organizations under its jurisdiction; he organized enterprises and organizations in accordance with established procedures; he approved statutes on main administrations, administrations, and divisions of the Ministry as well as the statutes and charters of organizations and enterprises under the jurisdiction of the Ministry.
A collegium was formed within the Ministry of the Petroleum Industry, consisting of the Minister (chairman), his deputies, and the supervisory personnel of the Ministry. The membership of the collegium was approved by the Council of Ministers USSR, upon recommendation of the Minister of the Petroleum Industry.
The collegium of the Ministry of the Petroleum Industry, at its regular sessions, considered questions of practical supervision, of checking on the execution of decisions, and of selection of personnel, as well as the most important orders and directives of the Ministry; it heard the reports of the directors of main administrations, administrations, and divisions of the Ministry and of organizations and enterprises under its jurisdiction.
The decisions of the collegium were carried out by orders of the Minister. The basic task for which the Ministry of the Petroleum Industry was responsible was to ensure the development of the petroleum industry in accordance with plans approved by the USSR government, with the goal of completely satisfying the requirements of the national economy for petroleum products and gas.
This goal was to be achieved by the introduction of the most modern equipment, by mechanizing and automating production processes in the extraction and refining of petroleum and gas, by the construction of oil fields, refineries, gas plants, plants for the production of synthetic liquid fuels, pipelines and petroleum tanks, by the development of petroleum machinery building, by the creation of permanent cadres of qualified workers, engineers, and technicians, by increasing labor productivity, and by improving the quality of production and reducing its costs. The Ministry of the Petroleum Industry prepared production plans (prospective, yearly, quarterly), plans for capital construction, marketing, and railroad and water transport, as well as for balancing income and expenditures.
It submitted these plans for ratification by the Council of Ministers USSR in accordance with established procedure, and took measures toward the fulfillment of plans that had been ratified.
Role
The Ministry of the Petroleum Industry organized the prospecting and surveying of new petroleum, gas, and ozocerite deposits by geological, geophysical, and geochemical surveying methods. It took measures to increase labor productivity, to improve the quality of production, and to reduce the cost of production and construction. It exercised technical and production supervision over enterprises of the Ministry, introduced the most modern equipment, technical improvements and inventions, and the mechanization and automation of production processes, and established norms for the consumption of materials, power, and fuel.
It drafted plans for production standards in branches of industry under the jurisdiction of the Ministry and submitted them for ratification according to established procedure. The Ministry directed the construction of enterprises and installations of the petroleum industry and, in accordance with established procedure, ratified planned quotas, technical plans, and estimates for capital construction. It organized and supervised the material and technical supply of enterprises, organizations, and installations. It supervised the operations of enterprises which produced industrial and construction materials required by branches of industry under the jurisdiction of the Ministry.
The Ministry handled the sales of petroleum products produced by enterprises of the Ministry of the Petroleum Industry through a system of administrations and petroleum bases of Glavneftesbyt (Main Administration of Petroleum Sales).
It selected the personnel of the Ministry and took measures to provide enterprises and construction projects of the Ministry with personnel and to utilize them properly, while also providing for their living conditions.
It drew up regulations on the wages of workers, engineers, technicians, and employees of the enterprises, submitted them for ratification by the Council of Ministers USSR in accordance with established procedure, and supervised their application. It determined technical norms and the revision of output norms (except for the basic revision of norms carried out with the permission of the government), and checked on their fulfillment.
It supervised the observance of labor legislation and rules for accident prevention, and directed the conclusion of collective contracts and checked on their fulfillment. The Ministry directed socialist competition to develop the creative initiative of workers, engineers, and technicians, to increase labor productivity further, to fulfill production plans ahead of schedule, to improve the quality of production, to increase profits of the enterprises, and to acquire above-plan accumulations.
It financed the enterprises and organizations of the Ministry in accordance with established procedure, supervised their financial activity, and took measures to speed up the turnover of working capital and to increase the profits of the enterprises. It supervised the organization and system of accounting, approved balances and reports of main administrations and organizations under the direct jurisdiction of the Ministry, and drew up periodic and yearly accounting records for all types of industrial and administrative activity of the Ministry.
It exercised financial control and made documentary revisions in the administration of the Ministry. It administered scientific research institutes under the jurisdiction of the Ministry, planning organizations, and educational establishments, and directed the training of workers in the mass professions. It directed the activity of administrations and divisions of labor supply and subsidiary economies.
It took measures for the protection of state socialist property in enterprises, establishments, and organizations under the Ministry, and published literature on the technology, economics, and organization of the petroleum and gas industries. The structure of the Ministry of the Petroleum Industry was determined by the Council of Ministers USSR. To utilize to the best advantage the outstanding experience of directors, ordinary workers, and Stakhanovites, the Ministry of the Petroleum Industry, its main administrations, associations, trusts, and enterprises summoned meetings at which reports dealing with the most important party and government decisions, as well as administrative directives of the Ministry, were heard and discussed. To develop criticism and self-criticism, questions on the activity of production and management were also discussed.
List of ministers
References
Oil Industry
Energy ministries | Ministry of Oil Industry | Engineering | 1,662 |
12,477,399 | https://en.wikipedia.org/wiki/Friedrich%20Kipp | Friedrich Kipp (c. 1814 – 21 January 1869) was a German physician and entomologist.
References
Anonym 1869: [Kipp, F.] Vereinsbl. westph.rhein. Ver. Bienenzucht und Seidenbau 20:17-18.
German entomologists
1810s births
1869 deaths
People involved with the periodic table | Friedrich Kipp | Chemistry | 80 |
57,675,728 | https://en.wikipedia.org/wiki/Drug%20titration | Drug titration is the process of adjusting the dose of a medication for the maximum benefit without adverse effects.
When a drug has a narrow therapeutic index, titration is especially important, because the range between the dose at which a drug is effective and the dose at which side effects occur is small. Some examples of the types of drugs commonly requiring titration include insulin, anticonvulsants, blood thinners, anti-depressants, and sedatives.
Titrating off of a medication instead of stopping abruptly is recommended in some situations. Glucocorticoids should be tapered after extended use to avoid adrenal insufficiency.
Drug titration is also used in phase I of clinical trials. The experimental drug is given in increasing dosages until side effects become intolerable. A clinical trial in which a suitable dose is found is called a dose-ranging study.
See also
Therapeutic drug monitoring
Pituri – chewed as a stimulant (or, after extended use, a depressant) by Aboriginal Australians
References
Pharmacology | Drug titration | Chemistry | 224 |
19,228,318 | https://en.wikipedia.org/wiki/Tropical%20Easterly%20Jet | The Tropical Easterly Jet (jet stream) is the meteorological term referring to an upper level easterly wind that starts in late June and continues until early September. This strong flow of air that develops in the upper atmosphere during the Asian monsoon is centred on 15°N, 50-80°E and extends from South-East Asia to Africa. The strongest development of the jet is at about above the Earth's surface with wind speeds of up to over the Indian Ocean.
The term easterly jets was given by Indian researchers P. Koteshwaram and P.R. Krishnan in 1952.
The jet subsides near the Somali coast, in the region of the Mascarene High.
The easterly jet induces significant vertical wind shear during the monsoonal months (especially from July to September), which suppresses tropical cyclone activity; monsoonal depressions generally do not intensify into cyclones.
References
Winds | Tropical Easterly Jet | Chemistry | 186 |
75,452,067 | https://en.wikipedia.org/wiki/Tavis%E2%80%93Cummings%20model | In quantum optics, fhe Tavis–Cummings model is a theoretical model to describe an ensemble of identical two-level atoms coupled symmetrically to a single-mode quantized bosonic field. The model extends the Jaynes–Cummings model to larger spin numbers that represent collections of multiple atoms. It differs from the Dicke model in its use of the rotating-wave approximation to conserve the number of excitations of the system.
Originally introduced by Michael Tavis and Fred Cummings in 1968 to unify representations of atomic gases in electromagnetic fields under a single fully quantum Hamiltonian — as Robert Dicke had done previously using perturbation theory — the Tavis–Cummings model restricts attention to a single field mode with negligible counterrotating interactions, which simplifies the system's mathematics while preserving the breadth of its dynamics.
The model demonstrates superradiance, bright and dark states, Rabi oscillations and spontaneous emission, and other features of interest in quantum electrodynamics, quantum control and computation, atomic and molecular physics, and many-body physics. The model has been experimentally tested to determine the conditions of its viability, and realized in semiconducting and superconducting qubits.
Hamiltonian
The Tavis–Cummings model assumes that for the purposes of electromagnetic interactions, atomic structures are dominated by their dipole, as they are for distant neutral atoms in the weak-field limit. Thus the only atomic quantity under consideration is its angular momentum, not its position nor fine electronic structure. Furthermore, the model asserts the atoms to be sufficiently distant that they don't interact with each other, only with the electromagnetic field, modeled as a bosonic field (since photons are the gauge bosons of electromagnetism).
Formal derivation
For two atomic-electronic states separated by a Bohr frequency $\omega_0$, transitions between the ground and excited states $|g\rangle$ and $|e\rangle$ are mediated by Pauli operators: $\sigma_- = |g\rangle\langle e|$, $\sigma_+ = |e\rangle\langle g|$, and $\sigma_z = |e\rangle\langle e| - |g\rangle\langle g|$, and the Hamiltonian separating these energy states in the $i$th atom is $\tfrac{\omega_0}{2}\,\sigma_z^{(i)}$. With $N$ independent atoms each subject to this energy gap, the total atomic Hamiltonian is thus $H_A = \omega_0 S_z$, with total spin operators $S_z = \tfrac{1}{2}\sum_{i=1}^{N}\sigma_z^{(i)}$ and $S_\pm = \sum_{i=1}^{N}\sigma_\pm^{(i)}$.
Similarly, in a free field with no modal restrictions, creation and annihilation operators dictate the presence of photons in each mode: $H_F^{\mathrm{free}} = \sum_{\mathbf{k},\lambda} \omega_{\mathbf{k}}\, a_{\mathbf{k}\lambda}^\dagger a_{\mathbf{k}\lambda}$, with wave number $\mathbf{k}$, polarization $\lambda$, and frequency $\omega_{\mathbf{k}}$. If the dynamics occur within a sufficiently small cavity, only one mode (the cavity's resonant mode) will couple to the atom, thus the field Hamiltonian simplifies to $H_F = \omega\, a^\dagger a$, just as in the Jaynes–Cummings and Dicke models.
[Figure: Schematic diagram of the Tavis–Cummings physical model, representing an ensemble of two-level atoms interacting symmetrically with a single-mode photonic field, isolated within a cavity. The atomic level separation is $\omega_0$, the cavity's resonant frequency is $\omega$, and the coupling strength of atom-field interactions is $g$.]
Finally, the interaction between the atoms and the field is determined by the atomic dipole, rendered quantum-mechanically as an operator $\hat{\mathbf{d}}$, and the similarly expressed electric field at the atoms' centers (assuming the field is the same at each atom's position) $\hat{\mathbf{E}}$, thus $H_{AF} = -\hat{\mathbf{d}} \cdot \hat{\mathbf{E}}$, which acts on both the qubit and bosonic degrees of freedom. The dipole operator couples the excited and ground states of each atom, $\hat{\mathbf{d}} \propto \sum_i \left(\sigma_+^{(i)} + \sigma_-^{(i)}\right)$, while the electric free field solution is:
$\hat{\mathbf{E}}(\mathbf{r}, t) = \sum_{\mathbf{k},\lambda} \boldsymbol{\epsilon}_{\mathbf{k}\lambda}\, \mathcal{E}_{\mathbf{k}} \left( a_{\mathbf{k}\lambda}\, e^{i(\mathbf{k} \cdot \mathbf{r} - \omega_{\mathbf{k}} t)} + a_{\mathbf{k}\lambda}^\dagger\, e^{-i(\mathbf{k} \cdot \mathbf{r} - \omega_{\mathbf{k}} t)} \right)$, which at a static point evaluates as:
$\hat{\mathbf{E}} \propto \sum_{\mathbf{k},\lambda} \boldsymbol{\epsilon}_{\mathbf{k}\lambda} \left( a_{\mathbf{k}\lambda} + a_{\mathbf{k}\lambda}^\dagger \right)$, thus the interaction Hamiltonian expands as
$H_{AF} = \sum_{\mathbf{k},\lambda} g_{\mathbf{k}\lambda} \left( S_+ + S_- \right) \left( a_{\mathbf{k}\lambda} + a_{\mathbf{k}\lambda}^\dagger \right)$.
Here, $g_{\mathbf{k}\lambda}$ specifies the coupling strength of the total dipole to each electric field mode, with the collective coupling functioning as a Rabi frequency that scales with ensemble size due to the Pythagorean addition of single-atom dipoles. Then, in the rotating frame, the products split into corotating terms $a S_+$ (representing photon absorption causing atomic excitation) and $a^\dagger S_-$ (representing spontaneous emission), and counterrotating terms $a S_-$ and $a^\dagger S_+$ (representing second-order effects like self-interaction and Lamb shifts). When $\omega \approx \omega_0$ and $g \ll \omega_0$ (close to resonance in a weak field), the corotating terms accumulate phase very slowly, while the counterrotating terms accumulate phase too fast to significantly affect time-ordered integrals, thus the rotating wave approximation allows counterrotating terms to drop in the rotating frame. The cavity permits only one field mode with energy sufficiently close to the Bohr energy, so the final form of the interaction Hamiltonian is $H_{AF} = g\left(a S_+ + a^\dagger S_-\right)$, valid for small dephasing $\delta = \omega_0 - \omega$.
In total, the Tavis–Cummings Hamiltonian includes the atomic and photonic self-energies and the atom-field interaction:
$H = H_A + H_F + H_{AF}$,
$H_A = \omega_0 S_z$,
$H_F = \omega\, a^\dagger a$,
$H_{AF} = g \left( a S_+ + a^\dagger S_- \right)$.
Symmetries
The Tavis–Cummings model as described above exhibits two symmetries arising from the Hamiltonian's commutation with the excitation number $\hat{N} = a^\dagger a + S_z$ and the angular momentum magnitude $S^2$. Since $[H, \hat{N}] = [H, S^2] = 0$, it is possible to find simultaneous eigenstates $|\psi\rangle$ such that:
$H |\psi\rangle = E |\psi\rangle$,
$\hat{N} |\psi\rangle = n |\psi\rangle$,
$S^2 |\psi\rangle = s(s+1) |\psi\rangle$.
The quantum numbers are bounded by $0 \le s \le N/2$ and $-s \le m \le s$, but due to the infinity of Fock space, the excitation number is unbounded above, unlike angular momentum projection quantum numbers. Just as the Jaynes–Cummings Hamiltonian block-diagonalizes into infinite blocks of constant excitation number, the Tavis–Cummings Hamiltonian block-diagonalizes into infinite blocks of finite size with constant $n$, and within these larger blocks, further block-diagonalizes into (usually degenerate) smaller blocks with constant cooperation number $s$. The size of each of these smallest blocks (irreps of SU(2)) determines the bounds of the final quantum number that specifies the eigenenergy, with its lowest value signifying the ground state of each irrep, and its highest the maximally excited state.
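As a standard consistency check (not given in the original text, and using the conventions reconstructed above), setting $N = 1$, $s = \tfrac{1}{2}$ recovers the Jaynes–Cummings doublets: each excitation block is spanned by $|e;\, n_{\mathrm{ph}}\rangle$ and $|g;\, n_{\mathrm{ph}}+1\rangle$, with eigenenergies

$E_\pm = \omega \left( n_{\mathrm{ph}} + \tfrac{1}{2} \right) \pm \tfrac{1}{2} \sqrt{\delta^2 + 4 g^2 \left( n_{\mathrm{ph}} + 1 \right)}, \qquad \delta = \omega_0 - \omega.$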
Dynamics
Under the simplifications of real $g$ and quasistatically small $\delta$, the Hamiltonian becomes $H = \omega \hat{N} + \delta S_z + g\left(a S_+ + a^\dagger S_-\right)$, whose matrix elements one can express in a joint Dicke and Fock basis $|s, m\rangle \otimes |n_{\mathrm{ph}}\rangle$ such that $S_z |s, m\rangle = m |s, m\rangle$ and $a^\dagger a |n_{\mathrm{ph}}\rangle = n_{\mathrm{ph}} |n_{\mathrm{ph}}\rangle$. Necessarily, $n = n_{\mathrm{ph}} + m$, so the matrix elements are as follows:
$\langle s, m;\, n_{\mathrm{ph}} | H | s, m;\, n_{\mathrm{ph}} \rangle = \omega\, n + \delta\, m$,
$\langle s, m+1;\, n_{\mathrm{ph}}-1 | H | s, m;\, n_{\mathrm{ph}} \rangle = g \sqrt{n_{\mathrm{ph}}} \sqrt{s(s+1) - m(m+1)}$,
$\langle s', m';\, n'_{\mathrm{ph}} | H | s, m;\, n_{\mathrm{ph}} \rangle = 0$ if $s' \neq s$, $|m' - m| > 1$, or $n'_{\mathrm{ph}} + m' \neq n_{\mathrm{ph}} + m$.
From these elements, one can write Schrödinger equations of motion that demonstrate the photon field's ability to mediate entanglement formation between atoms without atom-atom interactions; the fine-tuned, multivariate dependence on quantum numbers demonstrates the difficulty of solving the Tavis–Cummings model's eigensystem. Here, a few approximate methods, and an exact solution involving Stark shifts and Kerr nonlinearities, follow.
Spectrum approximations
In 1969, Tavis and Cummings found approximate eigenenergies and eigenstates for a nondimensionalized Hamiltonian in three different regimes of approximation: first, near the ground state of each irrep; second, when each atom sees a highly saturated "averaged" field; third, for sparse excitations. In all solutions, eigenstates are related to Dicke–Fock joint states by expansion coefficients that are solved from the Hamiltonian spectrum.
For eigenstates close to the ground state of an irrep, a differential approach provides approximate eigenvalues in terms of the average photon number and a differential coefficient.
For an averaged photon field in a large photon-rich system, the off-diagonal matrix elements above are replaced by averaged couplings, and each two-level atom interacts independently with a photon field that conveys no information about the other atoms. For each of these atoms, there are two dressed eigenstates coupled to "pseudophoton" number states, with corresponding single-atom eigenenergies. Superpositions of these single-atom dressed states construct the full eigenstates according to the addition of angular momenta, mediated by Clebsch–Gordan coefficients, and the full eigenvalues are approximately sums of the single-atom eigenenergies.
In the opposite case, when the atomic ensemble, and not the photon field, is averaged, the off-diagonal elements are approximated by smearing over the atomic degrees of freedom. Eigenstates are again constructed as weighted superpositions of photon number states coupled to a spin state, but with correspondingly modified eigenvalues.
Bethe ansatz
In 1996, Nikolay Bogoliubov (son of the 1992 Dirac Medalist of the same name), Robin Bullough, and Jussi Timonen found that adding quadratic excitation-dependent terms to the Tavis–Cummings Hamiltonian allowed for an exact analytic eigensystem. In the limit where these Kerr and Stark shifts vanish, this solution can recover the eigensystem of the unmodified Tavis–Cummings system.
Including a Kerr term, quadratic in the photon number, or equivalently a Stark term, results in a new Hamiltonian which obeys the same operator symmetries (above) as does the unmodified Tavis–Cummings Hamiltonian, and which reduces to Tavis–Cummings in the limit of vanishing shifts; the transformation therefore preserves the dynamics and shares joint eigenvectors with the untransformed Hamiltonian. The transformed Hamiltonian is integrable using quantum inverse methods. Separating the dynamics into two complex-parametrized operator matrices (that is, matrices whose elements are operators), one acting on the bosonic degrees of freedom and the other on the spin degrees of freedom, produces a monodromy matrix whose determinant, trace, and trace's parametric derivative encode the model's conserved quantities and the Hamiltonian itself. Manipulating the monodromy matrix allows its spectral parameter to determine the Hamiltonian eigenstates and eigenenergies as the complex roots of a Bethe ansatz: every root must satisfy a set of coupled Bethe equations, and the Hamiltonian eigenvalues then arise directly from the roots.
In the limit of vanishing Kerr and Stark shifts, the Bethe equations and the eigenenergies simplify accordingly, recovering the spectrum of the unmodified model. Eigenstates follow similarly.
Experiments
The Tavis–Cummings model has seen numerous experimental implementations verifying its phenomena, including several since 2009 virtually realizing the model on quantum computational platforms like superconducting qubits and circuit QED. Such experiments utilize the Tavis–Cummings Hamiltonian's ability to generate superradiance, wherein the artificial atoms emit and absorb light from the field coherently, as though they were a single atom with a large total angular momentum. Superradiance, the $\sqrt{N}$ scaling of the dipole-interaction strength, and other features allow Tavis–Cummings-type dynamics to manifest quantum computationally and metrologically desirable states, such as Dicke states (joint eigenstates of $S^2$ and $S_z$) through global interactions, as was explored in the 2003 paper by Tessier et al.
One realization by Tuchman et al., in 2006, used a stream of ultracold rubidium-87 atoms, and observed a cooperation number around 12% of its maximum possible value, indicating very high interatomic coherence relative to experimental capabilities of the time. This experiment also confirmed the $\sqrt{N}$ scaling of the dipole interaction; at the level of single atoms, dipole interactions are much weaker than monopolar interactions, so the ability of Tavis–Cummings dynamics to counteract the weakness of single-atom couplings with quadrature addition of dipoles makes neutral atom control more feasible.
Circuit QED
A seminal result from Fink et al. in 2009 involved 3 transmons as virtual "atoms" with qubit-dependent Bohr frequencies (set by a controllable Josephson energy and an experimentally determined single-electron charging energy), inside a microwave waveguide resonator which supplies a standing electric field in the gigahertz range. To ensure symmetric coupling of the qubits to the field, each transmon was placed at an antinode of the standing wave, and to best conserve excitations by minimizing photon leakage, the resonator was kept ultracold (20 mK), which ensured a high quality factor. Manipulating each qubit's Bohr frequency so only one qubit resonated with the field, the team measured each single-qubit coupling strength, then reintroduced the other qubits to compare the total coupling strength with the average strength of the resonating qubits, confirming the expected $\sqrt{N}$ collective enhancement. In addition, the team observed bright and dark states characterized by high emission rates and zero emissions respectively, for 2 and 3 active qubits, with the 3-qubit bright and dark states each being degenerate.
In addition to superconducting qubits, semiconducting qubits have also been a platform for Tavis–Cummings dynamics, such as in a 2018 investigation by van Woerkom et al. at ETH Zürich, in which two qubits constructed of double quantum dots (DQDs) were coupled to a SQUID resonator, with the two DQDs separated by a distance of 42μm. The micrometer regime is a far greater distance than that over which semiconducting qubits had previously achieved entanglement, and the difficulty of long-range interactions in semiconducting qubits was at the time a major weakness compared to other quantum computing platforms, for which the Tavis–Cummings model's ability to form entanglement through global atom-field interactions is one solution. By observing the reflection amplitude of field waves between the SQUID array and the DQDs, the team isolated the photon number states as they smoothly coupled to the first qubit to form superpositional Jaynes–Cummings eigenstates when the first qubit tuned to the resonator. Similarly, they observed these hybrid states shift into a pair of bright states and a dark state (which did not interact with the light, and thus did not cause a dip in the reflection amplitude) when the second qubit was tuned to resonance. In addition to physical photons mediating the long-range entanglement at resonance, the team found similar energy shifts at large qubit-resonator detuning, signalling qubits interacting with "virtual" photons, measured by the phase shift of the field rather than the reflection amplitude.
Limitations
Recent investigations by Johnson, Blaha, et al., have verified and explained two major regimes where the Tavis–Cummings model fails to predict physical reality, both following from systemic parameters approaching or exceeding the free spectral range set by the waveguide length. The violating quantities are the collective coupling strength $g_N$ and the rate $N\gamma$ of photon loss from atomic emissions into non-cavity modes, where $\gamma$ is the single-atom spontaneous-emission rate into all modes. When both quantities are small compared to the free spectral range, the Tavis–Cummings model well-describes the system, since the atom-light interactions are suppressed for all but one mode, and the field intensity is not significantly attenuated due to atomic emissions into other modes. However, when $g_N$ becomes comparable to the free spectral range, the coupling enters the so-called "superstrong" regime and atom-light interactions must consider multiple field modes. More severely, when $N\gamma$ becomes comparable to the free spectral range, the atomic ensemble becomes optically thick, and the model must consider time-ordered interactions between the field and each atom, as atoms at the front of the ensemble will experience a more intense photonic wavefront than those at the back, due to the frontal atoms' absorptions and non-cavity-mode emissions. This has the effect of interactions and correlations cascading successively across multiple atoms. As photons cross the waveguide and interact sequentially with the atoms in the ensemble, they accumulate phase at phenomenon-dependent rates. The total phase accumulated by electromagnetic waves in one round trip of the waveguide may manifest resonances causing high transmission rates under specific dephasings and emission rates, and the locations of these resonances differ between the standard Tavis–Cummings model and the team's proposed "cascade" model. Using a fluid of ultracold cesium atoms surrounding a nanofiber section of a 30 m fiber-ring resonator, the team coupled the atoms to the light passing through the nanofiber via an evanescent field, measuring the light's transmission for variable $g_N$ and $N\gamma$, into the superstrong coupling and cascade regimes. The data from the nanofiber-cesium experiment agreed better with the cascade model's predictions than with the Tavis–Cummings model's, specifically in the parametrically violating regimes above.
References
Quantum models
Quantum optics | Tavis–Cummings model | Physics | 3,293 |
1,677,333 | https://en.wikipedia.org/wiki/Toxicogenomics | Toxicogenomics is a subdiscipline of pharmacology that deals with the collection, interpretation, and storage of information about gene and protein activity within a particular cell or tissue of an organism in response to exposure to toxic substances. Toxicogenomics combines toxicology with genomics or other high-throughput molecular profiling technologies such as transcriptomics, proteomics and metabolomics. Toxicogenomics endeavors to elucidate the molecular mechanisms evolved in the expression of toxicity, and to derive molecular expression patterns (i.e., molecular biomarkers) that predict toxicity or the genetic susceptibility to it.
Pharmaceutical research
In pharmaceutical research, toxicogenomics is defined as the study of the structure and function of the genome as it responds to adverse xenobiotic exposure. It is the toxicological subdiscipline of pharmacogenomics, which is broadly defined as the study of inter-individual variations in whole-genome or candidate gene single-nucleotide polymorphism maps, haplotype markers, and alterations in gene expression that might correlate with drug responses. Though the term toxicogenomics first appeared in the literature in 1999, it was by that time already in common use within the pharmaceutical industry as its origin was driven by marketing strategies from vendor companies. The term is still not universally accepted, and others have offered alternative terms such as chemogenomics to describe essentially the same field of study.
Bioinformatics
The nature and complexity of the data (in volume and variability) demands highly developed processes of automated handling and storage. The analysis usually involves a wide array of bioinformatics and statistics, often including statistical classification approaches.
Drug discovery
In pharmaceutical drug discovery and development, toxicogenomics is used to study possible adverse (i.e. toxic) effects of pharmaceutical drugs in defined model systems in order to draw conclusions on the toxic risk to patients or the environment. Both the United States Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA) currently preclude basing regulatory decision-making on genomics data alone. However, they do encourage the voluntary submission of well-documented, quality genomics data. Both agencies are considering the use of submitted data on a case-by-case basis for assessment purposes (e.g., to help elucidate mechanism of action or contribute to a weight-of-evidence approach) or for populating relevant comparative databases by encouraging parallel submissions of genomics data and traditional toxicological test results.
Public projects
Chemical Effects in Biological Systems is a project hosted by the National Institute of Environmental Health Sciences building a knowledge base of toxicology studies including study design, clinical pathology, and histopathology and toxicogenomics data.
InnoMed PredTox assesses the value of combining results from various omics technologies together with the results from more conventional toxicology methods in more informed decision-making in preclinical safety evaluation.
Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System) is a Japanese public-private effort which has published gene expression and pathology information for more than 170 compounds (mostly drugs).
The Predictive Safety Testing Consortium aims to identify and clinically qualify safety biomarkers for regulatory use as part of the FDA's "Critical Path Initiative".
ToxCast is a program for Predicting Hazard, Characterizing Toxicity Pathways, and Prioritizing the Toxicity Testing of Environmental Chemicals at the United States Environmental Protection Agency.
Tox21, a federal collaboration involving the National Institutes of Health (NIH), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA), is aimed at developing better toxicity assessment methods. Within this project the toxic effects of chemical compounds on cell lines derived from the 1000 Genomes Project individuals were assessed and associations with genetic markers were determined. Parts of this data were used in the NIEHS-NCATS-UNC DREAM Toxicogenetics Challenge in order to determine methods for cytotoxicity predictions for individuals.
See also
Comparative Toxicogenomics Database
Pharmacogenetics
Structural genomics
References
External links
Center for Research on Occupational and Environmental Toxicology definition by the CROET Research Centers: (Neuro)toxicogenomics and Child Health Research Center.
Genomics
Toxicology
Medical genetics | Toxicogenomics | Environmental_science | 876 |
33,955,796 | https://en.wikipedia.org/wiki/Die%20Bachschmiede | Die Bachschmiede is the cultural center of the Austrian municipality of Wals-Siezenheim. It was opened in 2008.
History
The history of the Bachschmiede can be traced back to the year 1567, when Georg Markhner built the Bachschmied house on the River Mühlbach, below the rock on which the church of Wals-Siezenheim stands. In 1985 the idea arose of revitalizing the building; it came from Sepp Forcher. Initially planned as a museum only, the building opened in 2008 as a combined museum and cultural center.
On the square in front of the Bachschmiede stands the "time stone", a four-by-two-meter, eleven-ton block of Untersberg marble on which the history of the community of Wals is shown in pictures. The artistic stone carvings were made by students of the vocational school in Walserfeld to a design by Franz Hirnsperger; each student signed his or her relief with a personal mason's mark.
Cultural opportunities
The museum's centerpiece is the beautifully restored forge of the Reischl family, the former Bachschmied smiths, which now serves as a demonstration forge. The former horseshoe chamber holds the complete horseshoe collection of the Ebner family from Loig. Rooms on the first floor provide insight into the lifestyle of the 19th century and commemorate in particular Jakob Lechner, a son of the Bachschmiede household and a former professor at the Vienna Military Veterinary Institute.
The culture house, which has grown over time, contains a museum dedicated primarily to Salzburg-Bavarian topics. A first exhibition, in 2008, dealt with 1809 and Napoleon's army before Salzburg (the Fourth Napoleonic War); a current exhibition is dedicated to coinage of the past 2,000 years (from the Roman denarius to the euro: 2,000 years of money in Salzburg and neighboring Bavaria), and the next exhibition turns to sacred folk art from the surrounding region.
The museum's permanent exhibition presents the Roman villa rustica of Loig, a Roman country house that was discovered in 1815 in the village of Loig. Particularly impressive here is the copy of the Theseus mosaic; the originals of the mosaics found there are in the holdings of the Kunsthistorisches Museum in Vienna.
The museum also houses a collection of antique toys, which goes back to the collecting activity of Karin Gugg. The changing displays of historical and contemporary toys provide insight into the cultural history of toys and the growth of the toy industry in the 19th and 20th centuries.
An art gallery in the foyer of the event venue regularly presents exhibitions by various artists (including Robert Roubin, Ute Födermayr, and Käthe Perner).
The flexibly configurable Sepp Forcher Hall hosts theater performances, lectures, and cabaret (with performers including Martina Schwarzmann, Vince Ebert, the Kernölamazonen, and Monika Gruber), as well as the international Walser film festivals and concerts.
External links
Die Bachschmiede - official site
Facebook - Bachschmiede - Facebook Page
Museums in Salzburg (federal state)
Local museums in Austria
Art museums and galleries in Austria
Metal companies of Austria
Music venues in Austria
Hammer mills | Die Bachschmiede | Chemistry | 650 |
3,025,108 | https://en.wikipedia.org/wiki/Seapost%20Service | A Seapost was a mail compartment aboard an ocean-going vessel wherein international exchange mail was distributed. The first American service of this type was the U.S.-German Seapost, which began operating in 1891 on the S.S. Havel of the North German Lloyd Line. The service rapidly expanded with routes to Great Britain, Central America, South America, and Asia. The Seapost service still employed fifty-five clerks in early 1941. The last route of this type (to South America) was terminated October 19, 1941, due to unsafe wartime conditions on the Atlantic Ocean. The few remaining Seapost clerks transferred to branches of the Railway Mail Service (RMS). Seapost operations for the US Post Office Department were supervised from a New York City, New York, office.
Seapost offices were also operated by the postal authorities of France, Germany, Great Britain, Italy, Japan and New Zealand.
Sources
Wilking, Clarence. (1985) The Railway Mail Service, Railway Mail Service Library, Boyce, Virginia. Available as an MS Word file at http://www.railwaymailservicelibrary.org/articles/THE_RMS.DOC
United States Sea Post Cancellations Part 1 Transatlantic Routes, Edited by Philip Cockrill, Cockrill Series Booklet No 54
Seaposts of the USA by Roger Hosking, Published by the TPO & Seapost Society, September 2008
External links
TPO and Seapost Society for all collectors of Rail and Ship Mail worldwide
Postal systems
Philatelic terminology | Seapost Service | Technology | 308 |
37,522,784 | https://en.wikipedia.org/wiki/Art%20and%20emotion | In psychology of art, the relationship between art and emotion has recently become the subject of extensive study, spurred in part by the work of art historian Alexander Nemerov. Emotional or aesthetic responses to art were previously viewed as basic stimulus responses, but newer theories and research suggest that these experiences are more complex and can be studied experimentally. Emotional responses are often regarded as the keystone to experiencing art, and the creation of an emotional experience has been argued as the purpose of artistic expression. Research has shown that the neurological underpinnings of perceiving art differ from those used in standard object recognition. Instead, brain regions involved in the experience of emotion and goal setting show activation when viewing art.
Basis for emotional responses to art
Evolutionary ancestry has hard-wired humans to have affective responses for certain patterns and traits. These predispositions lend themselves to responses when looking at certain visual arts as well. Identification of subject matter is the first step in understanding the visual image. Being presented with visual stimuli creates initial confusion.
Other methods of stimulating initial interest that can lead to emotion involves pattern recognition. Symmetry is often found in works of art, and the human brain unconsciously searches for symmetry for a number of reasons. Potential predators were bilaterally symmetrical, as were potential prey. Bilateral symmetry also exists in humans, and a healthy human is typically relatively symmetrical. This attraction to symmetry was therefore advantageous, as it helped humans recognize danger, food, and mates. Art containing symmetry therefore is typically approached and positively valenced to humans.
Another example is to observe paintings or photographs of bright, open landscapes that often evoke a feeling of beauty, relaxation, or happiness. This connection to pleasant emotions exists because it was advantageous to humans before today's society to be able to see far into the distance in a brightly lit vista. Similarly, visual images that are dark and/or obscure typically elicit emotions of anxiety and fear. This is because an impeded visual field is disadvantageous for a human to be able to defend itself.
Meta-emotions
The optimal visual artwork creates what Noy & Noy-Sharav call "meta-emotions". These are multiple emotions that are triggered at the same time. They posit that what people see when immediately looking at a piece of artwork are the formal, technical qualities of the work and its complexity. Works that are well-made but lacking in appropriate complexity, or works that are intricate but lacking in technical skill, will not produce "meta-emotions". For example, seeing a perfectly painted chair (technical quality but no complexity) or a sloppily drawn image of Christ on the cross (complex but no skill) would be unlikely to stimulate deep emotional responses. However, beautifully painted works of Christ's crucifixion are likely to make people who can relate to them, or who understand the story behind them, weep.
Noy & Noy-Sharav also claim that art is the most potent form of emotional communication. They cite examples of people being able to listen to and dance to music for hours without getting tired and literature being able to take people to far away, imagined lands inside their heads. Instead of being passive recipients of actions and images, art is intended for people to challenge themselves and work through the emotions they see presented in the artistic message.
Often, people have difficulty recognizing and explicitly expressing the emotions they are feeling. Art tends to reach people's emotions on a deeper level, and creating art gives them a way to release emotions they cannot otherwise express. There is a professional discipline within psychotherapy, called art therapy or creative arts therapy, which deals with diverse ways of coping with emotions and other cognitive dimensions.
Types of elicited emotions
There is debate among researchers as to what types of emotions works of art can elicit; whether these are defined emotions such as anger, confusion or happiness, or a general feeling of aesthetic appreciation. The aesthetic experience seems to be determined by liking or disliking a work of art, placed along a continuum of pleasure–displeasure. However, other diverse emotions can still be felt in response to art, which can be sorted into three categories: Knowledge Emotions, Hostile Emotions, and Self-Conscious Emotions.
Liking and comprehensibility
Pleasure elicited by works of art can also have multiple sources. A number of theories suggest that enjoyment of a work of art is dependent on its comprehensibility or ability to be understood easily. Therefore, when more information about a work of art is provided, such as a title, description, or artist's statement, researchers predict that viewers will understand the piece better, and demonstrate greater liking for it. Experimental evidence shows that the presence of a title increases perceived understanding, regardless of whether that title is elaborate or descriptive. Elaborate titles did affect aesthetic responses to the work, suggesting viewers were not creating alternative explanations for the works when an elaborative title was given. Descriptive or random titles did not show any of these effects.
Furthering the idea that pleasure in art derives from its comprehensibility and processing fluency, some authors have described this experience as an emotion. The emotional feeling of beauty, or an aesthetic experience, does not have a valenced emotional undercurrent; rather, it is general cognitive arousal due to the fluent processing of a novel stimulus. Some authors believe that aesthetic emotion is a sufficiently unique and verifiable experience that it should be included in general theories of emotion.
Knowledge emotions
Knowledge emotions deal with reactions to thinking and feeling, such as interest, confusion, awe, and surprise. They often stem from self-analysis of what the viewer knows, expects, and perceives. This set of emotions also spur actions that motivate further learning and thinking.
Emotions are momentary states and differ in intensity depending on the person. Each emotion elicits a different response. Surprise completely wipes the brain and body of any other thoughts or functions because everything is focused on the possibility of danger. Interest ties in with curiosity and humans are a curious species. Interest spikes learning and exploration. Confusion goes hand in hand with interest, because when learning something new, it can often be hard to understand, especially if unfamiliar. However, confusion also promotes learning and thinking. Awe is a state of wonder, and it is the deepest of the knowledge emotions as well as very uncommon.
Interest
Interest in a work of art arises from perceiving the work as new, complex, and unfamiliar, as well as understandable. This dimension is studied most often by aesthetics researchers, and can be equated with aesthetic pleasure or an aesthetic experience. This stage of art experience usually occurs as the viewer understands the artwork they are viewing, and the art fits into their knowledge and expectations while providing a new experience.
Confusion
Confusion can be viewed as an opposite to interest, and serves as a signal to the self to inform the viewer that they cannot comprehend what they are looking at, and confusion often necessitates a shift in action to remedy the lack of understanding. Confusion is thought to stem from uncertainty, and a lack of one's expectations and knowledge being met by a work of art.
Confusion is most often experienced by art novices, and therefore must often be dealt with by those in arts education.
Surprise
Surprise functions as a disruption of current action to alert a viewer to a significant event. The emotion is centered around the experience of something new and unexpected, and can be elicited by sensory incongruity. Art can elicit surprise when expectations about the work are not met, but the work changes those expectations in an understandable way.
Hostile emotions
Hostile emotions toward art are often very visible in the form of anger or frustration, and can result in censorship, but are less easily described by a continuum of aesthetic pleasure-displeasure. These reactions center around the hostility triad: anger, disgust, and contempt. These emotions often motivate aggression, self-assertion, and violence, and arise from perception of the artist's deliberate trespass onto the expectations of the viewer.
Self-conscious emotions
Self-conscious emotions are responses that reflect upon the self and one's actions, such as pride, guilt, shame, regret and embarrassment. These are much more complex emotions, and involve assessing events as agreeing with one's self-perception or not, and adjusting one's behavior accordingly. There are numerous instances of artists expressing self-conscious emotions in response to their art, and self-conscious emotions can also be felt collectively.
Sublime feelings
Researchers have investigated the experience of the sublime, viewed as similar to aesthetic appreciation, which causes general psychological arousal. The sublime feeling has been connected to a feeling of happiness in response to art, but may be more related to an experience of fear. Researchers have shown that feelings of fear induced before looking at artwork result in more sublime feelings in response to those works.
Aesthetic chills
Another common emotional response is that of chills when viewing a work of art. The feeling is predicted to be related to similar aesthetic experiences such as awe, feeling touched, or absorption. Personality traits along the Big 5 Inventory have been shown to be predictors of a person's experience of aesthetic chills, especially a high rating on Openness to Experience. Experience with the arts also predicts someone's experience of aesthetic chills, but this may be due to them experiencing art more frequently.
Effects of expertise
The fact that art is analyzed and experienced differently by those with artistic training and expertise than those who are artistically naive has been shown numerous times. Researchers have tried to understand how experts interact with art so differently from the art naive, as experts tend to like more abstract compositions, and show a greater liking for both modern and classical types of art. Experts also exhibit more arousal when looking at modern and abstract works, while non-experts show more arousal to classical works.
Other researchers predicted that experts find more complex art interesting because they have changed their appraisals of art to create more interest, or are possibly making completely different types of appraisals than novices. Experts described works rated high in complexity as easier to understand and more interesting than did novices, possibly as experts tend to use more idiosyncratic criteria when judging artworks. However, experts seem to use the same appraisals of emotions that novices do, but these appraisals are at a higher level, because a wider range of art is comprehensible to experts.
Expertise and museum visits
Due to most art being in museums and galleries, most people have to make deliberate choices to interact with art. Researchers are interested in what types of experiences and emotions people are looking for when going to experience art in a museum. Most people respond that they visit museums to experience 'the pleasure of art' or 'the desire for cultural learning', but when broken down, visitors of museums of classical art are more motivated to see famous works and learn more about them. Visitors in contemporary art museums were more motivated by a more emotional connection to the art, and went more for the pleasure than a learning experience. Predictors of who would prefer to go to which type of museum lay in education level, art fluency, and socio-economic status.
Theories and models of elicited emotions
Researchers have offered a number of theories to describe emotional responses to art, often aligning with the various theories of the basis of emotions. Authors have argued that the emotional experience is created explicitly by the artist and mimicked in the viewer, or that the emotional experience of art is a by-product of the analysis of that work.
Appraisal theory
The appraisal theory of emotions centers on the assumption that it is the evaluation of events, and not the events themselves, that causes emotional experiences. Emotions are then created by different groups of appraisal structures that events are analyzed through. When applied to art, appraisal theories argue that various artistic structures, such as complexity, prototypicality, and understanding, are used as appraisal structures, and works that show more typical art principles will create a stronger aesthetic experience. Appraisal theories suggest that art is experienced as interesting after being analyzed through a novelty check and coping-potential check, which analyze the art's newness of experience for the viewer, and the viewer's ability to understand the new experience. Experimental evidence suggests that art is preferred when the viewer finds it easier to understand, and that interest in a work is predictable with knowledge of the viewer's ability to process complex visual works, which supports the appraisal theory. People with higher levels of artistic expertise and knowledge often prefer more complex works of art. Under appraisal theory, experts have a different emotional experience to art due to a preference for more complex works that they can understand better than a naive viewer.
Appraisal and negative emotions
A newer take on this theory focuses on the consequences of the emotions elicited from art, both positive and negative. The original theory argues that positive emotions are the result of a biobehavioral reward system, where a person feels a positive emotion when they have completed a personal goal. These emotional rewards create actions by motivating approach or withdrawal from a stimuli, depending if the object is positive or negative to the person. However, these theories have not often focused on negative emotions, especially negative emotional experiences from art. These emotions are central to experimental aesthetics research in order to understand why people have negative, rejecting, condemning, or censoring reactions to works of art. By showing research participants controversial photographs, rating their feelings of anger, and measuring their subsequent actions, researchers found that the participants that felt hostile toward the photographs displayed more rejection of the works. This suggests that negative emotions towards a work of art can create a negative action toward it, and suggests the need for further research on negative reactions towards art.
Minimal model
Other psychologists believe that emotions are of minimal functionality, and are used to move a person towards incentives and away from threats. Therefore, positive emotions are felt upon the attainment of a goal, and negative emotions when a goal has failed to be achieved. The basic states of pleasure or pain can be adapted to aesthetic experiences by a disinterested buffer, where the experience is not explicitly related to the goal-reaching of the person, but a similar experience can be analyzed from a disinterested distance. These emotions are disinterested because the work of art or artist's goals are not affecting the person's well-being, but the viewer can feel whether or not those goals were achieved from a third-party distance.
Five-step aesthetic experience
Other theorists have focused their models on the disrupting and unique experience that comes from the interacting with a powerful work of art. An early model focused on a two-part experience: facile recognition and meta-cognitive perception, or the experience of the work of art and the mind's analysis of that experience. A further cognitive model strengthens this idea into a five-part emotional experience of a work of art. As this five-part model is new, it remains only a theory, as not much empirical evidence for the model had been researched yet.
Part one: Pre-expectations and self-image
The first stage of this model focuses on the viewer's expectations of the work before seeing it, based on their previous experiences, their observational strategies, and the relation of the work to themselves. Viewers who tend to appreciate art, or know more about it will have different expectations at this stage than those who are not engaged by art.
Part two: Cognitive mastery and introduction of discrepancy
After viewing the work of art, people will make an initial judgment and classification of the work, often based on their preconceptions of the work. After initial classification, viewers attempt to understand the motive and meaning of the work, which can then inform their perception of the work, creating a cycle of changing perception and the attempt to understand it. It is at this point any discrepancies between expectations and the work, or the work and understanding arise.
Part three: Secondary control and escape
When an individual finds a discrepancy in their understanding that cannot be resolved or ignored, they move to the third stage of their interaction with a work of art. At this point, interaction with the work has switched from lower-order and unconscious processes to higher-order cognitive involvement, and tension and frustration starts to be felt. In order to maintain their self-assumptions and to resolve the work, an individual will try to change their environment in order for the issue to be resolved or ignored. This can be done by re-classifying the work and its motives, blaming the discrepancy on an external source, or attempting to escape the situation or mentally withdraw from the work.
Part four: Meta-cognitive reassessment
If viewers cannot escape or reassess the work, they are forced to reassess the self and their interactions with works of art. This experience of self-awareness through a work of art is often externally caused, rather than internally motivated, and starts a transformative process to understand the meaning of the discrepant work, and edit their own self-image.
Part five: Aesthetic outcome and new mastery
After the self-transformation and change in expectations, the viewer resets their interaction with the work, and begins the process anew with deeper self-understanding and cognitive mastery of the artwork.
Pupillary response tests
In order to research emotional responses to art, researchers often rely on behavioral data. But new psychophysiological methods of measuring emotional response are beginning to be used, such as the measurement of pupillary response. Pupil responses have been predicted to indicate image pleasantness and emotional arousal, but can be confounded by luminance, and confusion between an emotion's positive or negative valence, requiring an accompanying verbal explanation of emotional state. Pupil dilatations have been found to predict emotional responses and the amount of information the brain is processing, measures important in testing emotional response elicited by artwork. Further, the existence of pupillary responses to artwork can be used as an argument that art does elicit emotional responses with physiological reactions.
Pupil responses to art
After viewing Cubist paintings of varying complexity, abstraction, and familiarity, participants' pupil responses were greatest when viewing aesthetically pleasing artwork, and highly accessible art, or art low in abstraction. Pupil responses also correlated with personal preferences of the cubist art. High pupil responses were also correlated with faster cognitive processing, supporting theories that aesthetic emotions and preferences are related to the brain's ease of processing the stimuli.
Left-cheek biases
These effects are also seen when investigating the Western preference for left-facing portraits. This skew towards left-cheek is found in the majority of Western portraits, and is rated as more pleasing than other portrait orientations. Theories for this preference suggest that the left side of the face as more emotionally descriptive and expressive, which lets viewers connect to this emotional content better. Pupil response tests were used to test emotional response to different types of portraits, left or right cheek, and pupil dilation was linearly related to the pleasantness of the portrait, with increased dilations for pleasant images, and constrictions for unpleasant images. Left-facing portraits were rated as more pleasant, even when mirrored to appear right-facing, suggesting that people are more attracted to more emotional facial depictions.
This research was continued, using portraits by Rembrandt featuring females with a left-cheek focus and males with a right-cheek focus. Researchers predicted Rembrandt chose to portray his subjects this way to elicit different emotional responses in his viewers related to which portrait cheek was favored. In comparison to previous studies, increased pupil size was only found for male portraits with a right-cheek preference. This may be because the portraits were viewed as domineering, and the subsequent pupil response was due to unpleasantness. As pupil dilation is more indicative of strength of emotional response than the valence, a verbal description of emotional responses should accompany further pupillary response tests.
Art as emotional regulation
Art is also used as an emotional regulator, most often in art therapy sessions. Art therapy is a form of therapy that uses artistic activities such as painting, sculpture, sketching, and other crafts to allow people to express their emotions, find meaning in that art, work through trauma, and experience healing. Studies have shown that creating art can serve as a method of short-term mood regulation. This type of regulation falls into two categories: venting and distraction. Artists in all fields of the arts have reported emotional venting and distraction through the creation of their art.
Venting
Venting through art is the process of using art to attend to and discharge negative emotions. However, research has shown venting to be a less effective method of emotional regulation. Research participants asked to draw either an image related to a sad movie they just watched, or a neutral house, demonstrated less negative mood after the neutral drawing. Venting drawings did improve negative mood more than no drawing activity. Other research suggests that this is because analyzing negative emotions can have a helpful effect, but immersing in negative emotions can have a deleterious effect.
Distraction
Distraction is the process of creating art to oppose, or in spite of negative emotions. This can also take the form of fantasizing, or creating an opposing positive to counteract a negative affect. Research has demonstrated that distractive art making activities improve mood greater than venting activities. Distractive drawings were shown to decrease negative emotions more than venting drawings or no drawing task even after participants were asked to recall their saddest personal memories. These participants also experienced an increase in positive affect after a distractive drawing task. The change in mood valence after a distractive drawing task is even greater when participants are asked to create happy drawings to counter their negative mood.
See also
Aesthetic emotions
Emotionalism
References
Further reading
Concepts in aesthetics
Emotion | Art and emotion | Biology | 4,440 |
17,630,629 | https://en.wikipedia.org/wiki/Dinitrophenyl | Dinitrophenyl is any chemical compound containing two nitro functional groups attached to a phenyl ring. It is a hapten used in vaccine preparation. Dinitrophenyl does not elicit any immune response on its own and it does not bind to any antigen.
References
Nitroarenes | Dinitrophenyl | Chemistry | 65 |
2,916,635 | https://en.wikipedia.org/wiki/12%20Cancri | 12 Cancri is a star in the zodiac constellation Cancer. It has an apparent visual magnitude of 6.25, placing it just below the normal limit for stars visible to the naked eye in good seeing conditions. The star displays an annual parallax shift of 12.7 mas as seen from Earth's orbit, which places it at a distance of about 257 light-years. It is moving toward the Sun with a radial velocity of around −10 km/s.
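As a quick plausibility check on the figures above, the parallax-to-distance conversion can be reproduced in a few lines of Python; the constants and variable names below are illustrative, not from the article:

```python
# Rough check of the quoted parallax-to-distance conversion.
# A parallax of p arcseconds corresponds to a distance of 1/p parsecs.
parallax_mas = 12.7                 # annual parallax in milliarcseconds
ly_per_pc = 3.2616                  # light-years per parsec

d_pc = 1.0 / (parallax_mas / 1000.0)   # 1/p" gives the distance in parsecs
d_ly = d_pc * ly_per_pc
print(f"{d_pc:.1f} pc = {d_ly:.0f} ly")   # -> 78.7 pc = 257 ly
```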
This is an ordinary F-type main-sequence star with a stellar classification of F3 V, which indicates it is generating energy through hydrogen fusion at its core. It is spinning with a projected rotational velocity of 52 km/s and appears to be undergoing solar-like differential rotation with a relative rate of α = . The star is about 2.5 billion years old with 1.16 times the mass of the Sun and is radiating nearly 18 times the Sun's luminosity from its photosphere at an effective temperature of around .
References
F-type main-sequence stars
Cancer (constellation)
Durchmusterung objects
Cancri, 12
067483
3184
3184 | 12 Cancri | Astronomy | 236 |
68,445,623 | https://en.wikipedia.org/wiki/Retrieval%20Data%20Structure | In computer science, a retrieval data structure, also known as static function, is a space-efficient dictionary-like data type composed of a collection of (key, value) pairs that allows the following operations:
Construction from a collection of (key, value) pairs
Retrieve the value associated with the given key or anything if the key is not contained in the collection
Update the value associated with a key (optional)
They can also be thought of as a function b : U → {0,1}^r for a universe U and a set of keys S ⊆ U, where retrieve has to return b(x) for any key x ∈ S and an arbitrary value from {0,1}^r otherwise.
In contrast to static functions, AMQ-filters support (probabilistic) membership queries and dictionaries additionally allow operations like listing keys or looking up the value associated with a key and returning some other symbol if the key is not contained.
As can be derived from the operations, this data structure does not need to store the keys at all and may actually use less space than would be needed for a simple list of the key value pairs. This makes it attractive in situations where the associated data is small (e.g. a few bits) compared to the keys because we can save a lot by reducing the space used by keys.
To give a simple example, suppose video game names annotated with a boolean indicating whether the game contains a dog that can be petted are given. A static function built from this database can reproduce the associated flag for all names contained in the original set and an arbitrary one for other names. The size of this static function can be made to be only (1 + ε) r n bits for a small ε, where n is the number of keys and r the number of bits per value, which is obviously much less than any pair-based representation.
Examples
A trivial example of a static function is a sorted list of the keys and values which implements all the above operations and many more.
However, the retrieve on a list is slow and we implement many unneeded operations that can be removed to allow optimizations.
Furthermore, we are even allowed to return junk if the queried key is not contained which we did not use at all.
Perfect hash functions
Another simple example to build a static function is using a perfect hash function: after building the PHF for our keys, store the corresponding values at the correct position for each key. As can be seen, this approach also allows updating the associated values, while the keys have to be static. The correctness follows from the correctness of the perfect hash function. Using a minimum perfect hash function gives a big space improvement if the associated values are relatively small.
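A minimal Python sketch of this construction follows. The brute-force seed search stands in for a real (minimal) perfect hash function and only works for tiny key sets; all names are illustrative assumptions, not a reference implementation:

```python
import zlib

def _h(key: str, seed: int, m: int) -> int:
    # Seeded toy hash; a stand-in for a real minimal perfect hash function.
    return zlib.crc32(f"{seed}:{key}".encode()) % m

def build_static_function(pairs: dict[str, int]) -> tuple[int, list[int]]:
    """Search for a seed that hashes the keys collision-free, then store
    only the values; the keys themselves are never kept."""
    m = len(pairs)                       # minimal table: one slot per key
    for seed in range(1_000_000):        # brute force is fine only for tiny sets
        if len({_h(k, seed, m) for k in pairs}) == m:
            table = [0] * m
            for k, v in pairs.items():
                table[_h(k, seed, m)] = v
            return seed, table
    raise RuntimeError("no collision-free seed found")

def retrieve(seed: int, table: list[int], key: str) -> int:
    # Querying a key outside the original set returns junk, which the
    # static-function contract explicitly allows.
    return table[_h(key, seed, len(table))]

seed, table = build_static_function({"doom": 1, "petz": 1, "pong": 0})
assert retrieve(seed, table, "doom") == 1
```

Because the values sit in a plain array, they can be updated in place, while inserting a new key would require rebuilding the hash, matching the restriction described above.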
XOR-retrieval
Hashed filters can be categorized by their queries into OR, AND and XOR-filters. For example, the bloom filter is an AND-filter since it returns true for a membership query if all probed locations match. XOR filters work only for static retrievals and are the most promising for building them space efficiently. They are built by solving a linear system which ensures that a query for every key returns true.
Construction
Given a hash function h that maps each key to a bitvector h(x) of length m, where all h(x) are linearly independent, the following system of linear equations has a solution Z ∈ ({0,1}^r)^m:

h(x)^T Z = b(x) for all x ∈ S
Therefore, the static function is given by h and Z, and the space usage is dominated by Z, which is roughly (1 + ε) r bits per key for m = (1 + ε) n; the hash function h is assumed to be small.
A retrieval for x can be expressed as the bitwise XOR of the rows Z_i of Z for all set bits i of h(x). Furthermore, fast queries require sparse h(x); thus the problems that need to be solved for this method are finding a suitable hash function and still being able to solve the system of linear equations efficiently.
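The following Python sketch illustrates the construction for 1-bit values, assuming a random 3-sparse h(x) and representing rows of the system as integer bitmasks (it uses int.bit_count, so Python 3.10 or later; all names and parameters are illustrative):

```python
import random

def build_xor_retrieval(pairs: dict[str, int], eps: float = 0.5,
                        max_tries: int = 100):
    """Solve <h(x), Z> = b(x) over GF(2) for 1-bit values b(x); h(x) is a
    sparse random m-bit row (3 set bits) and Z is an int bitmask."""
    n = len(pairs)
    m = max(8, int((1 + eps) * n))       # slack: rows independent w.h.p.
    for attempt in range(max_tries):
        def h(x: str, _a: int = attempt) -> int:
            r = random.Random(f"{_a}:{x}")
            return sum(1 << b for b in r.sample(range(m), 3))
        pivots: dict[int, tuple[int, int]] = {}   # pivot bit -> (row, rhs)
        ok = True
        for x, b in pairs.items():       # on-line Gaussian elimination
            row, rhs = h(x), b
            while row:
                p = row.bit_length() - 1  # highest set bit is the pivot
                if p not in pivots:
                    pivots[p] = (row, rhs)
                    break
                prow, prhs = pivots[p]    # XOR = row addition over GF(2)
                row, rhs = row ^ prow, rhs ^ prhs
            else:
                if rhs:                   # reduced to 0 = 1: rehash and retry
                    ok = False
                    break
        if not ok:
            continue
        z = 0                             # back-substitution, low pivots first
        for p in sorted(pivots):
            rest = pivots[p][0] ^ (1 << p)   # lower variables, already fixed
            z |= (pivots[p][1] ^ ((rest & z).bit_count() & 1)) << p
        return h, z
    raise RuntimeError("retries exhausted")

def query(h, z: int, x: str) -> int:
    return (h(x) & z).bit_count() & 1     # <h(x), Z> over GF(2)

data = {"dog": 1, "cat": 0, "axolotl": 1, "emu": 0}
h, z = build_xor_retrieval(data)
assert all(query(h, z, k) == v for k, v in data.items())
```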
Ribbon retrieval
Using a sparse random matrix makes retrievals cache-inefficient because they access most of Z in a random, non-local pattern. Ribbon retrieval improves on this by giving each h(x) a consecutive "ribbon" of width w in which bits are set at random.
Using the band structure of the matrix, Z can be computed efficiently in expectation: ribbon solving works by first sorting the rows by their starting position (e.g. with counting sort). Then, a row echelon form (REM) can be constructed iteratively by performing row operations on rows strictly below the current row, eliminating all 1-entries in all columns below the first 1-entry of this row. Row operations do not produce any values outside of the ribbon and are very cheap, since they only require an XOR of w bits, which can be done in O(w/b) time on a RAM with word size b. It can be shown that the expected number of row operations is linear in the number of keys for a fixed slack ε. Finally, the solution Z is obtained by back substitution.
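A compact Python sketch of this solving strategy is given below, with rows again held as integer bitmasks (Python 3.10+ for int.bit_count); the width w, the seed handling, and all names are illustrative assumptions rather than a reference implementation:

```python
import random

def build_ribbon_retrieval(pairs: dict[str, int], w: int = 16,
                           seed: str = "r0"):
    """Each key owns a random w-bit pattern inside a width-w window that
    starts at a random column s(x); processing rows by ascending s(x)
    keeps every row operation inside the band (the 'ribbon')."""
    m = max(w, int(1.3 * len(pairs))) + w
    def row_of(x: str) -> int:
        r = random.Random(f"{seed}:{x}")
        s = r.randrange(m - w)                        # ribbon start column
        return ((r.getrandbits(w - 1) << 1) | 1) << s  # bit at s always set
    rows = sorted(((row_of(x), b) for x, b in pairs.items()),
                  key=lambda t: t[0] & -t[0])         # sort by start column
    slot: dict[int, tuple[int, int]] = {}             # pivot column -> (row, rhs)
    for mask, rhs in rows:
        while mask:
            c = (mask & -mask).bit_length() - 1       # lowest set bit = pivot
            if c not in slot:
                slot[c] = (mask, rhs)
                break
            pmask, prhs = slot[c]
            mask, rhs = mask ^ pmask, rhs ^ prhs      # one cheap w-bit XOR
        else:
            if rhs:
                raise ValueError("unlucky hash; rebuild with another seed")
    z = 0
    for c in sorted(slot, reverse=True):              # back-substitution
        mask, rhs = slot[c]
        z |= (rhs ^ (((mask ^ (1 << c)) & z).bit_count() & 1)) << c
    return row_of, z

data = {"ape": 1, "bee": 0, "cod": 1, "doe": 1}
row_of, z = build_ribbon_retrieval(data)
assert all(((row_of(k) & z).bit_count() & 1) == v for k, v in data.items())
```

Because all set bits of a row live in one short window, each elimination step touches only a few machine words of Z, which is the cache advantage over the sparse random rows used above.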
Applications
Approximate membership
To build an approximate membership data structure, use a fingerprinting function f that maps each key x to an r-bit fingerprint f(x). Then build a static function D on f restricted to the domain of our keys S.
Checking the membership of an element x is done by evaluating D with x and returning true if the returned value equals f(x).
If x ∈ S, D returns the correct value f(x) and we return true.
Otherwise, D returns a random value and we might give a wrong answer. The length r of the fingerprint allows controlling the false positive rate, which is about 2^−r.
The performance of this data structure is exactly the performance of the underlying static function.
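The sketch below illustrates the fingerprint check in Python; a plain dict stands in for the static function D (forfeiting the space saving), purely to keep the example self-contained, as flagged in the comments:

```python
import hashlib

R = 12                                   # fingerprint bits; FP rate ~ 2**-R

def fingerprint(key: str) -> int:
    """r-bit fingerprinting function f: U -> {0,1}^R (name illustrative)."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (1 << R)

# Stand-in for the static function D built on f restricted to S.  A real
# filter would use a retrieval structure (e.g. the XOR construction above),
# which stores no keys and answers *arbitrary* values for foreign keys; the
# dict is used here only to keep the sketch runnable on its own.
S = ["doom", "fable", "pong"]
D = {x: fingerprint(x) for x in S}

def maybe_contains(x: str) -> bool:
    retrieved = D.get(x, 0)              # a real D would return junk here,
    return retrieved == fingerprint(x)   # matching f(x) w.p. about 2**-R

assert all(maybe_contains(x) for x in S)     # never a false negative
```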
Perfect hash functions
A retrieval data structure can be used to construct a perfect hash function: first insert the keys into a cuckoo hash table with k hash functions h_1, …, h_k and buckets of size 1. Then, for every key x, store the index i(x) of the hash function that led to x's insertion into the hash table in a ⌈log₂ k⌉-bit retrieval data structure D. The perfect hash function is given by h_{D(x)}(x).
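A Python sketch of this construction with k = 2 hash functions follows; the dict holding the per-key choice bits stands in for the retrieval data structure, and all names and parameters are illustrative assumptions:

```python
import random

def build_cuckoo_phf(keys: list[str], seed: str = "s0", max_kicks: int = 500):
    """Insert keys into a cuckoo table (two hash functions, bucket size 1);
    the per-key bit saying which hash function won is what a retrieval
    structure would store.  A dict stands in for that structure here."""
    m = max(4, int(2.2 * len(keys)))     # load < 1/2: cuckoo succeeds w.h.p.
    def h(i: int, x: str) -> int:
        return random.Random(f"{seed}:{i}:{x}").randrange(m)
    table: dict[int, str] = {}           # bucket index -> key
    for x in keys:
        cur, i = x, 0
        for _ in range(max_kicks):
            b = h(i, cur)
            if b not in table:
                table[b] = cur
                cur = None
                break
            cur, table[b] = table[b], cur        # evict the occupant
            i = 1 if h(0, cur) == b else 0       # send it to its other choice
        if cur is not None:
            raise RuntimeError("eviction cycle; rebuild with a fresh seed")
    choice = {x: 0 if table.get(h(0, x)) == x else 1 for x in keys}
    return lambda x: h(choice[x], x)     # bucket of x: injective on the keys

keys = ["red", "green", "blue", "cyan"]
phf = build_cuckoo_phf(keys)
assert len({phf(x) for x in keys}) == len(keys)
```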
References
Abstract data types
Associative arrays | Retrieval Data Structure | Mathematics | 1,093 |
61,636,316 | https://en.wikipedia.org/wiki/Black%20Sea%20Biogeographic%20Region | The Black Sea Biogeographic Region is a biogeographic region of land bordering the west and south of the Black Sea, as defined by the European Environment Agency .
Extent
The Black Sea Region is a coastal strip of land wide that runs along the coasts of Romania, Bulgaria, and a broader coastal strip in northern Turkey and Georgia.
The coastline has rocky bays and sea cliffs, but is dominated by long stretches of low sand dunes and beaches sloping into the Black Sea.
Environment
The sea has a moderating effect on the climate, so temperatures do not fall much below in winter, and are not as high in the summer as in area further inland.
The Kaliakra cliffs in the north of Bulgaria are rich in flora, including many species in common with the neighboring Steppic and Mediterranean Regions.
The western Black Sea Region is the Via Pontica, Europe's second largest bird migration route.
The migrating birds use the coastal lakes, marshes and lagoons behind the shoreline, and some spend the winter in these wetlands.
The Danube Delta is the best known of the wetlands.
In Bulgaria and Romania the region is threatened by development of agriculture, industry, urbanization and tourism.
The Black Sea itself, a very deep inland sea, is poor in oxygen and supports very little marine life in the deeper regions.
However, it had a productive fishery until the 1960s, when stocks crashed in part because of over-fishing and in part from pollution and invasion by exotic species.
Conservation
The first list of Natura 2000 sites in the region was adopted in December 2008.
This included 40 Sites of Community Importance under the Habitats Directive and 27 Special Protection Areas under the Birds Directive for the part of the Black Sea biogeographical region in the European Union.
Some sites meet both criteria, so the total was less than 67.
About half the land area was covered by the sites.
As of 12 December 2017 there were 45 Sites of Community Importance.
These were:
Plazh Shkorpilovtsi
Dolinata na reka Batova
Galata
Kamchia
Zlatni pyasatsi
Trite bratya
Kraymorska Dobrudzha
Pobitite kamani
Kamchiyska i Emenska planina
Reka Kamchia
Karaagach
Plazh Gradina - Zlatna ribka
Aytoska planina
Ezero Durankulak
Sredetska reka
Bosna
Derventski vazvishenia
Fakiyska reka
Zaliv Chengene skele
Atanasovsko ezero
Mandra - Poda
Burgasko ezero
Kompleks Kaliakra
Aheloy - Ravda - Nesebar
Pomorie
Ezero Shabla - Ezerets
Ropotamo
Emine - Irakli
Strandzha
Aladzha banka
Emona
Otmanli
Delta Dunării
Delta Dunării - zona marină
Dunele marine de la Agigea
Mlaștina Hergheliei - Obanul Mare și Peștera Movilei
Pădurea Hagieni - Cotul Văii
Plaja submersă Eforie Nord - Eforie Sud
Vama Veche
Zona marină de la Capul Tuzla
Cap Aurora
Costinesti
Canionul Viteaz
Lobul sudic al Câmpului de Phyllophora al lui Zernov
Notes
Sources
Environment of Bulgaria
Environment of Romania
Environment of Turkey
Biogeography | Black Sea Biogeographic Region | Biology | 700 |
6,242,575 | https://en.wikipedia.org/wiki/Shiva%20hypothesis | The Shiva hypothesis, also known as coherent catastrophism, is the idea that global natural catastrophes on Earth, such as extinction events, happen at regular intervals because of the periodic motion of the Sun in relation to the Milky Way galaxy.
Initial proposal in 1979
William Napier and Victor Clube in their 1979 Nature article, ”A Theory of Terrestrial Catastrophism”, proposed the idea that gravitational disturbances caused by the Solar System crossing the plane of the Milky Way galaxy are enough to disturb comets in the Oort cloud surrounding the Solar System. This sends comets in towards the inner Solar System, which raises the chance of an impact. According to the hypothesis, this results in the Earth experiencing large impact events about every 30 million years (such as the Cretaceous–Paleogene extinction event).
Later work by Rampino
Starting in 1984, Michael R. Rampino published follow-up research on the hypothesis. Rampino was certainly aware of Napier and Clube's earlier publication, as Rampino and Stothers's 1984 letter to Nature references it.
In the 1990s, Rampino and Bruce Haggerty renamed Napier and Clube's Theory of Terrestrial Catastrophism after Shiva, the Hindu god of destruction. In 2020, Rampino and colleagues published non-marine evidence corroborating previous marine evidence in support of the Shiva hypothesis.
Similar theories
The Sun's passage through the higher density spiral arms of the galaxy, rather than its passage through the plane of the galaxy, could hypothetically coincide with mass extinction on Earth.
However, a reanalysis of the effects of the Sun's transit through the spiral structure based on CO data has failed to find a correlation.
The Shiva Hypothesis may have inspired yet another theory: that a brown dwarf named Nemesis causes extinctions every 26 million years, which varies slightly from 30 million years.
Criticism
The idea of extinction periodicity has been criticised due to the fact that the hypothesis assumes that most or all extinction events have the same cause, when evidence suggests that extinctions are likely the result of a variety of causes that are unlikely to be cyclically induced.
See also
Local Bubble
Tyche (hypothetical planet)
References
External links
Napier and Clube's 1979 article "A Theory of Terrestrial Catastrophism"
A description of the Shiva hypothesis by Michael Rampino
Asteroid/Comet Impact Craters and Mass Extinctions and Shiva Hypothesis of Periodic Mass Extinctions , by Michael Paine
The "Shiva Hypothesis": Impacts, Mass Extinctions, and the Galaxy, by Rampino and Haggerty
The Shiva hypothesis: impacts, mass extinctions, and the Galaxy, by Rampino, M. R.
The correlation between mass extinctions and impacts of near-Earth objects. A review of the Shiva hypothesis, by Yang Su, Yi Xia and Yanan Zhang.
Hypothetical impact events | Shiva hypothesis | Astronomy,Biology | 566 |
35,431,035 | https://en.wikipedia.org/wiki/Derjaguin%20approximation | The Derjaguin approximation (or sometimes also referred to as the proximity approximation), named after the Russian scientist Boris Derjaguin, expresses the force profile acting between finite size bodies in terms of the force profile between two planar semi-infinite walls. This approximation is widely used to estimate forces between colloidal particles, as forces between two planar bodies are often much easier to calculate. The Derjaguin approximation expresses the force F(h) between two bodies as a function of the surface separation h as

F(h) = 2π Reff W(h)
where W(h) is the interaction energy per unit area between the two planar walls and Reff the effective radius. When the two bodies are two spheres of radii R1 and R2, respectively, the effective radius is given by

Reff = R1 R2 / (R1 + R2)
Experimental force profiles between macroscopic bodies as measured with the surface forces apparatus (SFA) or colloidal probe technique are often reported as the ratio F(h)/Reff.
Quantities involved and validity
The force F(h) between two bodies is related to the interaction free energy U(h) as

F(h) = −dU(h)/dh
where h is the surface-to-surface separation. Conversely, when the force profile is known, one can evaluate the interaction energy as

U(h) = ∫h∞ F(h') dh'
When one considers two planar walls, the corresponding quantities are expressed per unit area. The disjoining pressure is the force per unit area and can be expressed by the derivative

Π(h) = −dW(h)/dh
where W(h) is the surface free energy per unit area. Conversely, one has

W(h) = ∫h∞ Π(h') dh'
The main restriction of the Derjaguin approximation is that it is only valid at distances much smaller than the size of the objects involved, namely h ≪ R1 and h ≪ R2. Furthermore, it is a continuum approximation and thus valid at distances larger than the molecular length scale. Even when rough surfaces are involved, this approximation has been shown to be valid in many situations. Its range of validity is restricted to distances larger than the characteristic size of the surface roughness features (e.g., root mean square roughness).
Special cases
Frequent geometries considered involve the interaction between two identical spheres of radius R, where the effective radius becomes

Reff = R/2
In the case of interaction between a sphere of radius R and a planar surface, one has

Reff = R
The above two relations can be obtained as special cases of the expression for Reff given further above. For the situation of perpendicularly crossing cylinders as used in the surface forces apparatus, one has

Reff = √(R1 R2)
where R1 and R2 are the curvature radii of the two cylinders involved.
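A short Python sketch, shown below, collects these special cases and converts a purely illustrative plane-plane interaction energy W(h) into a force via the approximation; all numbers and names are placeholders, not data from the article:

```python
import math

def effective_radius(R1: float, R2: float | None = None,
                     geometry: str = "spheres") -> float:
    """Effective radius entering F(h) = 2*pi*R_eff*W(h) for the standard
    geometries discussed above."""
    if geometry == "spheres":            # two spheres of radii R1, R2
        return R1 * R2 / (R1 + R2)
    if geometry == "sphere-plane":       # sphere of radius R1 and a wall
        return R1
    if geometry == "crossed-cylinders":  # perpendicularly crossing cylinders
        return math.sqrt(R1 * R2)
    raise ValueError(f"unknown geometry: {geometry}")

# Illustrative wall-wall energy: an exponentially screened interaction with
# a 10 nm decay length and amplitude W0 (placeholder numbers only).
W0, lam = 1e-3, 10e-9                    # J/m^2 and m
W = lambda h: W0 * math.exp(-h / lam)

R_eff = effective_radius(1e-6, 1e-6)     # two 1-micrometre spheres
F = lambda h: 2 * math.pi * R_eff * W(h)
print(F(5e-9))                           # force in newtons at h = 5 nm
```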
Simplified derivation
Consider the force F(h) between two identical spheres of radius R as an illustration. The surfaces of the two respective spheres are thought to be sliced into infinitesimal disks of width dr and radius r as shown in the figure. The force is given by the sum of the corresponding swelling pressures between the two disks

F(h) = ∫ Π(x) dA
where x is the distance between the disks and dA the area of one of these disks. This distance can be expressed as x = h + 2y. By considering the Pythagorean theorem on the grey triangle shown in the figure one has

R² = (R − y)² + r²
Expanding this expression and realizing that y ≪ R, one finds that the area of the disk can be expressed as

dA = 2πr dr ≈ 2πR dy
The force can now be written as

F(h) = 2πR ∫0∞ Π(h + 2y) dy = πR ∫h∞ Π(x) dx = πR W(h)
where W(h) is the surface free energy per unit area introduced above. When introducing the equation above, the upper integration limit was replaced by infinity, which is approximately correct as long as h ≪ R.
General case
In the general case of two convex bodies, the effective radius can be expressed as follows

1/Reff² = 1/(R′1 R″1) + 1/(R′2 R″2) + (1/(R′1 R′2) + 1/(R″1 R″2)) sin²φ + (1/(R′1 R″2) + 1/(R″1 R′2)) cos²φ
where R′i and R″i are the principal radii of curvature for the surfaces i = 1 and 2, evaluated at the point of closest approach, and φ is the angle between the planes spanned by the circles with smaller curvature radii. When the bodies are non-spherical around the position of closest approach, a torque between the two bodies develops and is given by
where
The above expressions for two spheres are recovered by setting R′i = R″i = Ri. The torque vanishes in this case.
The expression for two perpendicularly crossing cylinders is obtained from R′i = Ri and R″i → ∞. In this case, the torque will tend to orient the cylinders perpendicularly for repulsive forces.
For attractive forces, the torque will tend to align them.
These general formulas have been used to evaluate approximate interaction forces between ellipsoids.
Beyond the Derjaguin approximation
The Derjaguin approximation is unique given its simplicity and generality. To improve this approximation, the surface element integration method as well as the surface integration approach were proposed to obtain more accurate expressions of the forces between two bodies. These procedures also consider the relative orientation of the approaching surfaces.
See also
Atomic force microscopy
Electrical double layer forces
DLVO theory
Van der Waals force
References
Further reading
Physical chemistry
Colloidal chemistry | Derjaguin approximation | Physics,Chemistry | 978 |
30,873,602 | https://en.wikipedia.org/wiki/Realm-Specific%20IP | Realm-Specific IP was an experimental IETF framework and protocol intended as an alternative to network address translation (NAT) in which the end-to-end integrity of packets is maintained.
RSIP lets a host borrow one or more IP addresses (and UDP/TCP port) from one or more RSIP gateways, by leasing (usually public) IP addresses and ports to RSIP hosts located in other (usually private) addressing realms.
The RSIP client requests registration with an RSIP gateway. The gateway in turn delivers either a unique IP address or a shared IP address and a unique set of TCP/UDP ports and associates the RSIP host address to this address. The RSIP host uses this address to send packets to destinations in the other realm. The tunnelled packets between RSIP host and gateway contain both addresses, and the RSIP gateway strips off the host address header and sends the packet to the destination.
RSIP can also be used to relay traffic between several different privately addressed networks by leasing several different addresses to reach different destination networks.
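The following Python sketch models this leasing idea at a purely conceptual level; the class and method names are illustrative and do not reflect the actual RSIP message formats, which are specified in RFC 3103:

```python
class RsipGateway:
    """Toy model of RSIP address/port leasing (names are illustrative)."""

    def __init__(self, public_ip: str, ports: range):
        self.public_ip = public_ip
        self.free_ports = list(ports)
        self.leases: dict[str, tuple[str, int]] = {}

    def register(self, host_addr: str) -> tuple[str, int]:
        """Lease a public (address, port) pair to a privately addressed host."""
        endpoint = (self.public_ip, self.free_ports.pop())
        self.leases[host_addr] = endpoint
        return endpoint

    def relay(self, host_addr: str, inner_packet: bytes) -> dict:
        """Strip the tunnel header and forward the host's own packet from its
        leased endpoint; the payload is never rewritten, which is what
        preserves end-to-end integrity (unlike NAT)."""
        return {"src": self.leases[host_addr], "payload": inner_packet}

gw = RsipGateway("203.0.113.7", range(40000, 40010))
lease = gw.register("10.0.0.42")    # private host borrows a public endpoint
pkt = gw.relay("10.0.0.42", b"TCP segment built end-to-end by the host")
```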
RSIP should be useful for NAT traversal as an IETF standard alternative to Universal Plug and Play (UPnP).
The protocol remained in the experimental stage and never entered widespread use.
See also
Interactive Connectivity Establishment (ICE)
Middlebox Communications (MIDCOM)
Simple Traversal of UDP over NATs (STUN)
SOCKS
Traversal Using Relay NAT (TURN)
Universal Plug and Play (UPnP)
References
IETF References
RFC 3102 - Realm Specific IP: Framework
RFC 3103 - Realm Specific IP: Protocol Specification
RFC 3104 - RSIP Support for End-to-end IPsec
Internet protocols | Realm-Specific IP | Technology | 336 |
2,078,850 | https://en.wikipedia.org/wiki/Narcotic%20Drugs%20and%20Psychotropic%20Substances%20Act%20%28Sudan%29 | Sudan's Narcotic Drugs and Psychotropic Substances Act, passed in 1994, is designed to fulfill that country's treaty obligations under the Single Convention on Narcotic Drugs, Convention on Psychotropic Substances, and United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances.
References
Narcotic Drugs and Psychotropic Substances Act, 1994 .
1994 in law
Drug control law
Law of Sudan
1994 in Sudan | Narcotic Drugs and Psychotropic Substances Act (Sudan) | Chemistry | 91 |
31,487,216 | https://en.wikipedia.org/wiki/Louise%20Boursier | Louise (Bourgeois) Boursier (1563–1636) was royal midwife at the court of King Henry IV of France and the first female author in that country to publish a medical text. Largely self-taught, she delivered babies for and offered obstetrical and gynecological services to Parisian women of all social classes before coming to serve Queen Marie de Medicis in 1601. Bourgeois successfully delivered Louis XIII, King of France (1601) and his five royal siblings: Elizabeth, Queen of Spain (1602); Christine Marie, Duchess of Savoy (1607); Nicolas Henri, Duke of Orléans (1607); Gaston, Duke of Orléans (1608); and Henrietta Maria, Queen of England, Queen of Scots, and Queen of Ireland (1609). In 1609, Bourgeois published the first of three successive volumes on obstetrics: Observations diverses sur la sterilité, perte de fruict, foecondite, accouchements et maladies des femmes et enfants nouveaux naiz / Amplement traictees et heureusement praticquees par L. Bourgeois dite Boursier (Diverse observations on sterility, miscarriage, fertility childbirth, and diseases of women and newborn children amply treated and successfully practiced). Subsequent volumes were published in 1617 and 1626, also in Paris.
These publications include observation-based, innovative obstetrical protocols to manage difficult births as well as advice for pregnant and postpartum mothers and newborns. Bourgeois also offered recipes for various kinds of medications that would have been easy for a woman to make herself. The three volumes include over four dozen detailed case histories that made a substantial contribution to the emerging empiricism of seventeenth-century European science and medicine.
Overall, Bourgeois’s mission was to educate midwives so that they could become more competent at caring for women’s obstetrical and gynecological needs as well as to inform women about how to care for their bodies themselves. At a time when the best trained and most skilled midwives of Paris were competing for elite clients—who had begun to prefer male surgeons not only for difficult but also for normal births—Bourgeois called out midwives, surgeons, and physicians alike for their incompetence and ignorance when it came to the care of pregnant, parturient, and postpartum mothers. Moreover, Bourgeois envisioned a collaborative rather than hierarchical relationship among trained midwives, surgeons, and physicians, one that would serve the best interests of mother and child.
Bourgeois’s works were as popular in her day as those of male medical authors like Ambroise Paré and Jacques Guillemeau. Even after her death she enjoyed fame and influence in France and beyond. Her work is reflected in Jane Sharp's The Midwives Book: Or the Whole Art of Midwifry Discovered (1671); Marguerite du Tertre de la Marche's Instruction familière et utile aux sages-femmes pour bien pratiquer les accouchemens (1677); and Justine Siegemund's Die chur-Brandeburgische Hoff-Wehe-Mutter (1690). Also following Bourgeois's example was Angélique Marguerite Le Boursier du Coudray (c. 1712–1794); it is unknown whether du Coudray was related to Bourgeois.
Bourgeois's career as a royal court midwife spanned more than twenty-six years. She was paid 900 livres for each of the last four of Louis XIII's siblings' births, a sum eight times greater than the average municipal midwife's salary. In 1608, she received an additional sum of 6000 livres, most likely in recognition of her superior services to the royal family. After the birth of Marie de Médicis’s last child in 1609, Bourgeois asked for a pension. King Henry IV agreed to 900 livres, which was considered a reasonable retirement income.
Early life
Bourgeois was born into a wealthy, propertied family in 1563 in Faubourg Saint-Germain, an upper-class suburb just outside of Paris. Bourgeois wrote, “Not for anything would we have traded our house for a beautiful one in the city, because … we had everything that those who lived in the city had, plus good air and the freedom of beautiful places to walk.”
In 1584, Bourgeois married Martin Boursier, a barber–surgeon who had lived and worked for twenty years with the obstetrical and surgical innovator Ambroise Paré. The couple had a comfortable life until the dynastic and religious wars that had wracked France for over thirty years came to the quiet suburb. In 1589, while her husband was away with the army, troops destroyed Bourgeois’s ancestral home and others like it. She escaped with her three children and mother by fleeing inside the Paris city walls.
Bourgeois wrote that to make ends meet she sold the furniture and other objects she had salvaged from her home as well as items she had embroidered. Life was difficult while her husband was at the front lines, but their financial circumstances did not improve after he returned in late 1593 or early 1594. Bourgeois recounts that because she could read and had a surgeon for a husband, “[A] respectable woman [that is, an unlicensed midwife] who had delivered me of my three children and who liked me persuaded me to learn how to be a midwife.” Initially, Bourgeois writes, “I could not bring myself to [become a midwife] when I thought of the [responsibility of] taking children to be baptized. … In the end … fear of seeing my children go hungry made me do it.”
Early career
Unlike the majority of practicing midwives, Bourgeois did not learn midwifery by apprenticing to a more experienced midwife nor does she acknowledge that her husband instructed her. Instead, she recounts that she read the work of Ambroise Paré who, by 1593 or 1594 when Bourgeois decided to become a midwife, was deceased (he died in 1590). In Paré’s writing, Bourgeois would have found instructions on how to perform an obstetrical technique called podalic version that he reintroduced into medical practice; the technique allows a birth attendant to deliver a malpresenting infant feet first. This procedure obviated the need to use hooks or other sharp instruments to extract an impacted fetus, a procedure that killed the fetus and sometimes mortally harmed the mother.
Paré also emphasized the importance of learning human anatomy by performing dissections, a part of medical and surgical training to which most midwives never had access. However, Bourgeois had a friendship with the head midwife at Paris’s Hôtel-Dieu (poor hospital); she allowed Bourgeois to witness both deliveries of infants and autopsies of women who had died in childbirth. These experiences contributed to her knowledge of female anatomy and the skills required to deliver a baby safely. At the time, the Hôtel-Dieu was the only institution in Paris where women could obtain formal training in midwifery. But apprenticeships were limited: only four interns were accepted every three months.
Bourgeois recounts that her first client was her porter’s wife. Following this first delivery, she became “quite busy among the poor and other kinds of people.” In 1598, Bourgeois went before the official medical licensing board to receive a midwifery license. The board consisted of two senior midwives, a physician, and two surgeons. Madame Dupuis, one of the two senior midwives, was royal midwife in the court of Henri IV. Dupuis objected to Bourgeois’s obtaining a license because she was married to a surgeon. At the time, Parisian surgeons were competing with midwives for the most elite clients. Bourgeois claims that Dupuis remarked: “My heart tells me this doesn’t bode well for us.” Bourgeois adds that Dupuis kept her for a long time and threatened to have her burned at the stake if she tried to compete with Dupuis. Despite Dupuis’s concerns, the other members of the board allowed Bourgeois to receive her license and become a sworn midwife.
In 1601, Bourgeois learned that Henri IV’s new queen, Marie de Médicis, was pregnant and did not find Madame Dupuis, the royal midwife, “agreeable.” Contemplating the grief that Madame Dupuis had given her at the licensing board examination, Bourgeois confessed, “I [too] would have wanted another woman.” With the help of neighbors, friends, former clients, and royal physicians as well as the queen’s own ladies-in-waiting and their servants, Bourgeois created an elaborate scheme to supplant Dupuis. While Bourgeois could not find a way to meet privately with the queen, she was able to gain the queen’s attention for a moment at a large banquet at the House of Gondi where the royal couple dined once or twice a week. At just the right moment, Bourgeois’s allies directed the queen to observe Bourgeois from afar. Impressed with her calm demeanor and upright stance, characteristics that in Bourgeois’s era connoted moral and physical strength, the queen declared that she wanted no other midwife to ever touch her.
Writing
Bourgeois’s successes in the royal birthing room provided her with a large salary; in addition, the queen’s literary patronage resulted in Bourgeois’s publishing Observations diverses in three consecutive volumes. These volumes comprise numerous genres: medical treatise, autobiography, history, poetry—to extol her supporters and lambast her enemies—and parental advice. But Bourgeois’s chief goal in publishing was to improve the health and alleviate the suffering of women and newborns. In all three volumes, Bourgeois relies more on her own experience than on ancient texts—a relatively radical choice at a time when French medicine still often relied on the practices of ancient Greece and Rome as well as of medieval Europe. Her first volume includes innovative obstetrical protocols that, if followed correctly, could save lives. For example, Bourgeois gave instructions on how to induce labor in the case of a contracted pelvis; how to deliver a baby with a face presentation; and how to cut the cord between two ligatures when the cord was wrapped around the neck. She also included medicinal recipes she had validated for everyday use (sometimes, she claimed, by testing them on herself) as well as over four dozen detailed case histories. Few texts with such practical information on obstetrics and maternal care directed to women existed at the time, let alone ones written in a female voice. During the early modern period, Observations diverses was translated into German and Dutch; and partially and inexactly into English.
The second volume of Observations diverses, first published in 1617, has medical advice as well as autobiographical and historical materials. The volume includes “Advice to My Daughter,” a didactic essay on the pitfalls of practicing midwifery. It is, as far as we know, the first text of its kind written by a midwife—a tradeswoman—to her daughter. The essay outlines religious and moral guidance regarding such topics as abortion, sexually transmitted diseases, and female modesty; it also describes how a midwife might avoid being blamed for unsuccessful deliveries. The second volume includes, in addition, “How I Learned the Art of Midwifery”—a brief autobiographical sketch that has become source material for almost all secondary accounts of Bourgeois’s life. A third essay, “The True Account of the Births of My Lords and Ladies the Children of France with the Noteworthy Particularities Thereof,” incorporates a dramatization of the birth of the future Louis XIII. The queen’s first pregnancy took place at a time when France was in desperate need of a direct male heir to the throne; the lack of an heir had exacerbated the dynastic and religious wars of the prior thirty years.
Bourgeois’s narrative of the birth of the future Louis XIII displays her knowledge of and playful attitude toward the critical importance that the Bourbon royals placed on having a male heir. This attitude, of course, could only be exhibited after the actual birth of the future king. In her dramatization of his birth, Bourgeois exhibited a carnivalesque interpretation of this key event by implying that she could control the sex of the unborn child just before its delivery, a commonly held notion of her era. She went on to claim that she set Henri IV on an emotional roller coaster by not revealing the child’s sex immediately after it was born. She created narrative tension by describing at length how distraught the king and his courtiers were—until Bourgeois unveiled the naked child. In this narrative, Bourgeois also underplayed whatever part the attending royal physicians and surgeons played at the event; she barely mentions them.
In her narratives of the subsequent births of the future Louis XIII’s five siblings, Bourgeois supplied intimate details about the queen’s labor and relayed the royals’ concerns about finding appropriate wet nurses; she also described where the births took place; exchanges between the queen and others attending her; and the queen’s awarding Bourgeois a special velvet cap. Of this last event, she boasted, “Formerly, royal midwives wore velvet neckpieces and a thick gold chain around their neck. … I have the honor that no other woman except for me has touched the queen during her deliveries and afterwards.” These narratives provide a unique account of royal births that emphasizes not only Bourgeois’s obstetrical prowess but also her perspective on the court’s internal workings at a critical moment in French history.
In the second volume, Bourgeois told her readers that she wanted to “revise and enlarge the previous volume” by including a long chapter on diseases of the womb. In addition, she created a mythological genealogy of her ascent to the position of royal midwife, and she included her daughter in that genealogy. Bourgeois traced her ancestry to Phaenarete, a midwife and the mother of Socrates, who, Bourgeois asserted, adopted her. Upon this adoption, Bourgeois further claimed, the ancient goddess of childbirth Lucina became jealous of Phaenarete. To demonstrate her allegiance to Bourgeois, Lucina then ordered Mercury to guide Bourgeois to the palace, where she became royal midwife. Creating genealogies of this kind to defend and assert one’s personal and professional authority was a commonplace practice among male and female authors during this period. Also in this volume, Bourgeois discussed how to choose wet nurses and presented a series of unusual case histories.
The third volume, published in 1626, was the briefest; it contains case histories that emphasize the importance of orally transmitted knowledge, and Bourgeois wrote of her growing concern about incompetent physicians who advise women without really understanding the signs or other aspects of pregnancy.
Late career
Bourgeois was royal midwife under the regency of Marie de Médicis and the reign of Louis XIII. In 1627, while under Bourgeois’s care, the king’s sister-in-law, Marie de Montpensier, died six days after giving birth. Marie de Médicis ordered that an autopsy be made. The published report intimated that Bourgeois was to blame for the death, which was believed to have been caused by retained pieces of the placenta found in the uterus.
In response to this implicit attack upon her competency, Bourgeois wrote a brief pamphlet, Fidelle relation de l’accouchement, maladie et ouverture du corps de feu Madame, in which she defended herself. She highlighted her many qualifications; cited her practice as a midwife for thirty-four years; and noted that she had honorably acquired the proper license and had written books on midwifery that were used by physicians in England and Germany. More specifically, she asserted that she carried out the delivery of the placenta properly. Even if small pieces of placenta remained, she insisted, they would have been flushed out by the lochia as the ancient Greek surgeon Paulus Aegineta and her own contemporary, the anatomist Girolamo Fabrici d’Acquapendente (1533–1619), had discussed in their writings. However, the self-defense did not persuade her detractors. With all of her allies at court deceased, the scandal most likely ended her career as royal midwife.
One year before her death, and only because of the persistent urging of her publisher, Bourgeois published Recueil des secrets, a book of remedies. Her reluctance to publish stemmed from her concern about including recipes for certain remedies that she had been keeping secret in order to pass them on to her daughter, Antoinette, who was also a midwife. The publisher wrote, “The only thing that kept her from bowing to my prayers for a long time was the consideration of her daughter, who had embraced her profession, which she feared to harm. Finally recognizing that she had acquired by her skill and great judgment, such a reputation, that she [her daughter] was henceforth quite recommendable in herself, without her needing to be so by her mother’s secrets, gave me this manuscript.”
Bourgeois died on 20 December 1636. She was buried with her ancestors, who lived outside of Paris, rather than with her husband, whose grave was in the city.
Publications by Louise Bourgeois
1609: Observations diverses sur la stérilité, perte de fruict, foecondité, accouchements et maladies des femmes et enfants nouveaux naiz, 1 vol. Paris, Saugrain (1er volume).
1617: Observations diverses sur la stérilité, perte de fruict, foecondité, accouchements et maladies des femmes et enfants nouveaux naiz, 2 vols. Paris, Saugrain.
1626: Observations diverses sur la stérilité, perte de fruict, foecondité, accouchements et maladies des femmes et enfants nouveaux naiz, 3 vols. Paris, Mondiere.
1627: Apologie de Louyse Bourgeois dite Bourcier, Sage femme de la Royne mere du Roy, & de feu Madame. Contre le rapport des medecins. Paris, Mondiere.
1627(?): Fidelle relation de l’accouchement, maladie et ouverture du corps de feu Madame [s.l.]
1635: Recueil des secrets. Paris, Mondiere.
See also
Timeline of women in science
References
Suggested Reading
Bourgeois, Louise. Midwife to the Queen of France: Diverse Observations. Edited by Alison Klairmont Lingo. Translated by Stephanie O’Hara. Toronto and Tempe: Iter and Arizona Center for Medieval and Renaissance Studies, 2017.
Bourgeois, Louise. Observations diverses sur la stérilité, perte de fruits, fécondité, accouchements et maladies des femmes et enfants nouveau-nés; suivi de Instructions à ma fille: 1609. Edited by Françoise Olive. Paris: Côté-femmes, 1992.
Broomhall, Susan. Women’s Medical Work in Early Modern France. Manchester, UK: Manchester University Press, 2004.
Chereau, Achille. Marie de Medici Queen of France and Navarre Six Deliveries. Paris: Willem Press, 1875.
Gelbart, Nina Rattner, The King’s Midwife: A History and Mystery of Madame du Coudray. Berkeley: University of California Press, 1998.
Green, Monica. Making Women’s Medicine Masculine: The Rise of Male Authority in Pre-Modern Gynaecology. Oxford, UK: Oxford University Press, 2008.
Green, Monica. Women’s Healthcare in the Medieval West: Texts and Contexts. Aldershot, UK: Routledge, 2002.
Klairmont Lingo, Alison. “Louise Bourgeois’s School of Learning and Action.” Women’s Studies: An Interdisciplinary Journal 49, no. 3 (2020): 229–255. doi.org/10.1080/00497878.2020.1714396.
Klairmont Lingo, Alison. “Connaître le secret des femmes: Louise Bourgeois (1563–1636), sage-femme de la reine, et Jacques Guillemeau (1549–1613), chirurgien du roi.” In Enfanter: discours, pratiques et représentations de l’accouchement, ed. Adeline Gargam, 113–126. France: Artois Presses Université, 2017.
Klairmont Lingo, Alison. “Louise Bourgeois.” In Dictionnaire des femmes de l’Ancienne France, ed. Marie-Elisabeth Henneau. SIEFAR, 2016.
Mormiche, Pascale. Donner vie au royaume. Grossesses et maternités à la Cour, XVIIe–XVIIIe siècle. Paris: CNRS Editions, 2022.
O’Hara, Stephanie. “Translation, Gender, and Early Modern Midwifery: Louise Bourgeois’s Observations diverses and The Compleat Midwife’s Practice.” New England Journal of History 65, no. 1 (2008): 28–55.
Perkins, Wendy. Midwifery Medicine in Modern France: Louise Bourgeois. Exeter, UK: University of Exeter Press, 1996.
Read, Kirk D. Birthing Bodies in Early France: Stories of Gender and Reproduction. Abingdon, UK: Routledge, 2011.
Robin, Diana, Anne R. Larsen, and Carole Levin, eds. Encyclopedia of Women in the Renaissance: Italy, France, and England. Santa Barbara, CA: ABC-CLIO, 2007.
Sheridan, Bridgette. “Whither Childbearing: Gender, Status, and the Professionalization of Medicine in Early Modern France.” In Gender and Scientific Discourse in Early Modern Culture, ed. Kathleen P. Long, 239–258. Abingdon, UK: Routledge, 2016.
Thomas, Samuel. “Early Modern Midwifery: Splitting the Profession, Connecting the History.” Journal of Social History 43, no. 1 (2009): 115–138.
Tucker, Holly. Pregnant Fictions: Childbirth and the Fairy Tale in Early Modern France. Detroit, MI: Wayne State University Press, 2003.
Worth-Stylianou, Valerie. “Birthing Tales in French Medical Works ca.1500–1650.” www.birthingtales.org.
Worth-Stylianou, Valerie. Pregnancy and Birth in Early Modern France: Treatises by Caring Physicians and Surgeons (1581–1625). Toronto: Centre for Reformation and Renaissance Studies, 2013.
1563 births
1636 deaths
French midwives
French science writers
Writers from Paris
Women science writers
Women medical researchers
16th-century women scientists
16th-century French scientists
17th-century French women scientists
17th-century French scientists
17th-century French women writers
17th-century French writers | Louise Boursier | Technology | 4,759 |
2,533,091 | https://en.wikipedia.org/wiki/List%20of%20fictional%20elements%2C%20materials%2C%20isotopes%20and%20subatomic%20particles | This list contains fictional chemical elements, materials, isotopes or subatomic particles that either a) play a major role in a notable work of fiction, b) are common to several unrelated works, or c) are discussed in detail by independent sources.
Fictional elements and materials
Fictional isotopes of real elements
Fictional subatomic particles
See also
Computronium
Neutronium
List of discredited substances
List of Star Trek materials
References
External links
Elements from DC Comics Legion of Super-heroes
Periodic Table of Comic Books – lists comic book uses of real elements
Periodic table from the BBC comedy series Look Around You.
Tarzan at the Earth's Core
Elements, materials, isotopes and subatomic particles
Fictional elements, List of fictional elements, materials, isotopes and subatomic particles
Fictional elements, materials, isotopes and subatomic particles | List of fictional elements, materials, isotopes and subatomic particles | Physics,Chemistry | 169 |
4,961,315 | https://en.wikipedia.org/wiki/CentiMark | CentiMark Corporation (est. 1968) is a national roofing contractor company headquartered in Canonsburg, Pennsylvania in the United States. The company also has a flooring division, QuestMark, and a Canadian Roofing subsidiary, CentiMark LTD.
History
Edward B. Dunlap started D&B Laboratories in 1967 as a part-time industrial cleaning products business in the basement of his home. In 1968, with $1,000 seed money from D&B Laboratories and one associate, Dunlap started Northern Chemical Company.
In response to customer needs, Northern Chemical Company became involved in roofing and flooring maintenance. In the 1970s, the oil crisis negatively impacted the built-up roofing market that was dependent on crude oil for asphalt. The quality of asphalt decreased as oil companies were pressed to extract as much oil from crude as possible. The price of asphalt increased, thus resulting in higher roofing prices.
Concerned about the quality of bituminous materials used in built-up tar and asphalt roofs, Northern Chemical Company began marketing and installing single-ply rubber (EPDM) roof systems. The newly developed EPDM polymer was both durable and waterproof. It was a cost-effective solution to the increasing costs associated with built-up roofing. In the late 1970s and early 1980s, EPDM was one of the fastest growing roofing products and accounted for almost 40% of new and replacement roofs on commercial and industrial properties.
Growth
The company grew through geographical expansion, diversification of product lines and an aggressive National Accounts Program. In 1987, the corporate name was officially changed to CentiMark Corporation. "Centi" refers to the 1987 goal of achieving $100 million in revenue. "Mark" recognizes the company's unique contributions to the roofing industry including: a National Account program in roofing and flooring, Single Source warranties on workmanship and materials and nationwide geographical expansion.
In January 2003, Timothy M. Dunlap was appointed president and chief operating officer of CentiMark. Edward B. Dunlap, Founder of CentiMark, continues to serve as chairman and chief executive officer.
Services
According to the company website, CentiMark offers the following systems and services:
Roof repairs
Green roofing
Metal roofing
Asset management
Commercial roofing
Emergency response
Preventative maintenance
Additionally, they offer Single Source Warranties on workmanship and materials.
QuestMark
In 2006, QuestMark was established to expand CentiMark's presence in the flooring industry. DiamondQuest, a new technology for concrete grinding and polishing, catered to a fast-growing market segment in the flooring industry. According to the company website, QuestMark operates out of numerous offices nationwide.
Locations
The CentiMark Corporation headquarters is located in Canonsburg, Pennsylvania, just outside Pittsburgh. CentiMark has over 80 offices and 3,500 associates across North America. They service all major cities.
Recognition
In 1991, CentiMark became the first and only roofing contractor to be rated 4A1 by Dun & Bradstreet based on a strong credit appraisal and net worth. By 2000, the rating increased to 5A1, the highest level awarded by Dun & Bradstreet. CentiMark remains the only contractor in the commercial roofing industry to hold the 5A1 Dun & Bradstreet rating.
In 2014, CentiMark was named the #1 Roofing Contractor in North America with revenue of $484.7 million, according to Roofing Contractor magazine, August 2014. Since 1991, in various roofing magazine rankings, CentiMark has consistently been the #1 or #2 roofing contractor in North America.
References
External links
Official Website
Roofs
Companies based in Pittsburgh
Construction and civil engineering companies established in 1968
1968 establishments in Pennsylvania | CentiMark | Technology,Engineering | 769 |
15,228,933 | https://en.wikipedia.org/wiki/ZNF366 | Zinc finger protein 366, also known as DC-SCRIPT (Dendritic cell-specific transcript), is a protein that in humans is encoded by the ZNF366 gene. The ZNF366 gene was first identified in a DNA comparison study between 85 kb of Fugu rubripes sequence containing 17 genes with its homologous loci in the human draft genome.
Function
In 2006, DC-SCRIPT was isolated and characterized in human monocyte-derived dendritic cells (mo-DCs).
DC-SCRIPT contains a DNA-binding domain (11 C2H2 zinc (Zn) fingers), flanked by a proline-rich and an acidic region, which can interact with C-terminal-binding protein 1 (CtBP1), a global corepressor. In the immune system of both mice and humans, DC-SCRIPT was found to be specifically expressed in dendritic cells (DCs).
In COS-1 cells, DC-SCRIPT was shown to interact with the estrogen receptor DNA-binding domain (ERDBD) and represses ER activity through the association with RIP140, CtBP and histone deacetylases.
In DCs, DC-SCRIPT was found to be highly expressed in type 1 conventional DCs (cDC1s) under the control of PU.1. The presence of DC-SCRIPT is important for cDC1 lineage specification via maintenance of Interferon regulatory factor 8 (IRF8) expression. DC-SCRIPT-deficient cDC1s had an impaired capacity to capture and present cell-associated antigens and to secrete IL-12p40.
Breast cancer
In 2010, it was shown that DC-SCRIPT can act as a coregulator of multiple nuclear receptors having opposite effects on type I vs type II NRs. DC-SCRIPT is able to repress ER and PR mediated transcription, whereas it can activate transcription mediated by RAR and PPAR. In the same study, it was shown that breast tumor tissue expresses lower levels of DC-SCRIPT than normal breast tissue from the same patient and that DC-SCRIPT mRNA expression is an independent prognostic factor for good survival of breast cancer patients with estrogen receptor- and/or progesterone receptor-positive tumors.
References
Further reading
External links
Transcription factors | ZNF366 | Chemistry,Biology | 478 |
16,782,308 | https://en.wikipedia.org/wiki/HD%20178911%20Bb | HD 178911 Bb is a planet discovered in 2001 by Zucker, who used the radial velocity method. This giant planet orbits close to its star and has a minimum mass of 7.35 times that of Jupiter. The orbital period of the planet is 71.5 days and the radial-velocity semi-amplitude is 346.9 m/s.
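For context, the relation below is the standard textbook radial-velocity formula (a general result, not taken from the discovery paper) connecting the measured semi-amplitude K, orbital period P and eccentricity e to the planetary minimum mass m_p sin i; the stellar mass M_* is an additional input not quoted in this article:

    \[
      K \;=\; \left(\frac{2\pi G}{P}\right)^{1/3}
              \frac{m_p \sin i}{\left(M_* + m_p\right)^{2/3}}
              \frac{1}{\sqrt{1 - e^{2}}}
    \]

Inverting this with K = 346.9 m/s and P = 71.5 days, together with the stellar mass and eccentricity from the discovery data, yields the quoted minimum mass of about 7.35 Jupiter masses.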
References
External links
Exoplanets discovered in 2001
Giant planets
Lyra
Exoplanets detected by radial velocity
20,234,262 | https://en.wikipedia.org/wiki/Von%20Neumann%20paradox | In mathematics, the von Neumann paradox, named after John von Neumann, is the idea that one can break a planar figure such as the unit square into sets of points and subject each set to an area-preserving affine transformation such that the result is two planar figures of the same size as the original. This was proved in 1929 by John von Neumann, assuming the axiom of choice. It is based on the earlier Banach–Tarski paradox, which is in turn based on the Hausdorff paradox.
Banach and Tarski had proved that, using isometric transformations, the result of taking apart and reassembling a two-dimensional figure would necessarily have the same area as the original. This would make creating two unit squares out of one impossible. But von Neumann realized that the trick of such so-called paradoxical decompositions was the use of a group of transformations that include as a subgroup a free group with two generators. The group of area-preserving transformations (whether the special linear group or the special affine group) contains such subgroups, and this opens the possibility of performing paradoxical decompositions using them.
Sketch of the method
The following is an informal description of the method found by von Neumann. Assume that we have a free group H of area-preserving linear transformations generated by two transformations, σ and τ, which are not far from the identity element. Being a free group means that all its elements can be expressed uniquely in the form σ^u₁τ^v₁σ^u₂τ^v₂⋯σ^uₙτ^vₙ for some n, where the uᵢ's and vᵢ's are all non-zero integers, except possibly the first u₁ and the last vₙ. We can divide this group into two parts: those that start on the left with σ to some non-zero power (we call this set A) and those that start with τ to some power (that is, u₁ is zero—we call this set B, and it includes the identity).
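In symbols, the same normal form and two-part split can be written in display form (this restates the paragraph above; the notation is unchanged):

    \[
      w \;=\; \sigma^{u_1}\tau^{v_1}\sigma^{u_2}\tau^{v_2}\cdots\sigma^{u_n}\tau^{v_n},
      \qquad u_i, v_i \in \mathbb{Z}\setminus\{0\}
      \ \text{(except possibly $u_1$ and $v_n$)},
    \]
    \[
      A \;=\; \{\,w : u_1 \neq 0\,\}, \qquad
      B \;=\; \{\,w : u_1 = 0\,\} \quad (\text{so the identity lies in } B).
    \]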
If we operate on any point in Euclidean 2-space by the various elements of H we get what is called the orbit of that point. All the points in the plane can thus be classed into orbits, of which there are an infinite number with the cardinality of the continuum. Using the axiom of choice, we can choose one point from each orbit and call the set of these points M. We exclude the origin, which is a fixed point in H. If we then operate on M by all the elements of H, we generate each point of the plane (except the origin) exactly once. If we operate on M by all the elements of A or of B, we get two disjoint sets whose union is all points but the origin.
Now we take some figure such as the unit square or the unit disk. We then choose another figure totally inside it, such as a smaller square, centred at the origin. We can cover the big figure with several copies of the small figure, albeit with some points covered by two or more copies. We can then assign each point of the big figure to one of the copies of the small figure. Let us call the sets corresponding to each copy C₁, C₂, …, Cₘ. We shall now make a one-to-one mapping of each point in the big figure to a point in its interior, using only area-preserving transformations. We take the points belonging to C₁ and translate them so that the centre of the square is at the origin. We then take those points in it which are in the set A defined above and operate on them by the area-preserving operation στ. This puts them into set B. We then take the points belonging to B and operate on them with σ². They will now still be in B, but the set of these points will be disjoint from the previous set. We proceed in this manner, using σ³τ on the A points from C₂ (after centring it) and σ⁴ on its B points, and so on. In this way, we have mapped all points from the big figure (except some fixed points) in a one-to-one manner to B type points not too far from the centre, and within the big figure. We can then make a second mapping to A type points.
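As a compact restatement of the scheme just described (same notation as above, adding nothing beyond the preceding paragraph), the centred copies are moved piecewise by:

    \[
      C_1:\;
      \begin{cases}
        x \mapsto \sigma\tau\,x, & x \text{ of type } A,\\
        x \mapsto \sigma^{2}x,   & x \text{ of type } B,
      \end{cases}
      \qquad
      C_2:\;
      \begin{cases}
        x \mapsto \sigma^{3}\tau\,x, & x \text{ of type } A,\\
        x \mapsto \sigma^{4}x,       & x \text{ of type } B,
      \end{cases}
      \qquad \ldots
    \]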
At this point we can apply the method of the Cantor-Bernstein-Schroeder theorem. This theorem tells us that if we have an injection from set D to set E (such as from the big figure to the A type points in it), and an injection from E to D (such as the identity mapping from the A type points in the figure to themselves), then there is a one-to-one correspondence between D and E. In other words, having a mapping from the big figure to a subset of the A points in it, we can make a mapping (a bijection) from the big figure to all the A points in it. (In some regions points are mapped to themselves, in others they are mapped using the mapping described in the previous paragraph.) Likewise we can make a mapping from the big figure to all the B points in it. So looking at this the other way round, we can separate the figure into its A and B points, and then map each of these back into the whole figure (that is, containing both kinds of points)!
This sketch glosses over some things, such as how to handle fixed points. It turns out that more mappings and more sets are necessary to work around this.
Consequences
The paradox for the square can be strengthened as follows:
Any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps.
This has consequences concerning the problem of measure. As von Neumann notes,
"Infolgedessen gibt es bereits in der Ebene kein nichtnegatives additives Maß (wo das Einheitsquadrat das Maß 1 hat), dass [sic] gegenüber allen Abbildungen von A2 invariant wäre."
"In accordance with this, already in the plane there is no nonnegative additive measure (for which the unit square has a measure of 1), which is invariant with respect to all transformations belonging to A2 [the group of area-preserving affine transformations]."
To explain this a bit more, the question of whether a finitely additive measure exists, that is preserved under certain transformations, depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. As explained above, the points of the plane (other than the origin) can be divided into two dense sets which we may call A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the B points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the B points), and therefore there is no measure that "works".
The class of groups isolated by von Neumann in the course of study of Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable.
Recent progress
Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL(2,R) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists. More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL(2,R) contains a punctured neighbourhood of the origin. Then all sets in the family A are SL(2,R)-equidecomposable, and likewise for the sets in B. It follows that both families consist of paradoxical sets.
See also
References
Group theory
Measure theory
Mathematical paradoxes
Theorems in the foundations of mathematics | Von Neumann paradox | Mathematics | 1,743 |
807,687 | https://en.wikipedia.org/wiki/Gable | A gable is the generally triangular portion of a wall between the edges of intersecting roof pitches. The shape of the gable and how it is detailed depends on the structural system used, which reflects climate, material availability, and aesthetic concerns. The term gable wall or gable end more commonly refers to the entire wall, including the gable and the wall below it. Some types of roof do not have a gable (for example hip roofs do not). One common type of roof with gables, the 'gable roof', is named after its prominent gables.
A parapet made of a series of curves (shaped gable, see also Dutch gable) or horizontal steps (crow-stepped gable) may hide the diagonal lines of the roof.
Gable ends of more recent buildings are often treated in the same way as the Classic pediment form. But unlike Classical structures, which operate through trabeation, the gable ends of many buildings are actually bearing-wall structures.
The gable style is also used in the design of fabric structures, with roofs sloped at varying degrees depending on how much snowfall is expected.
Sharp gable roofs are a characteristic of the Gothic and classical Greek styles of architecture.
The opposite or inverted form of a gable roof is a V-roof or butterfly roof.
Front-gabled and side-gabled
While a front-gabled or gable-fronted building faces the street with its gable, a side-gabled building faces it with its eaves side (gutter), meaning the ridge is parallel to the street. The terms are used in architecture and city planning to describe a building in its urban situation.
Front-gabled buildings are considered typical for German city streets in the Gothic period, while later Renaissance buildings, influenced by Italian architecture, are often side-gabled. In America, front-gabled houses, such as the gablefront house, were popular between the early 19th century and 1920.
Wimperg
A Wimperg, in German and Dutch, is a Gothic ornamental gable with tracery over windows or portals, which were often accompanied by pinnacles. It was a typical element in Gothic architecture, especially in cathedral architecture. Wimpergs often had crockets or other decorative elements in the Gothic style. The intention behind the wimperg was the perception of increased height.
Drawbacks
The gable end roof is a poor design for hurricane or tornado-prone regions. Winds blowing against the gable end can exert tremendous pressure, both on the gable and on the roof edges where they overhang it, causing the roof to peel off and the gable to cave in.
In popular culture
"The Seven Lamps of Architecture", an 1849 essay. It gives John Ruskin's opinion on truth in architecture
The House of the Seven Gables, an 1851 novel by Nathaniel Hawthorne
Anne of Green Gables, a 1908 novel by Canadian author Lucy Maud Montgomery, set in Canada
"The Adventure of the Three Gables", a 1926 story by Arthur Conan Doyle.
See also
Bell-gable (espadaña)
Clock gable
Cape Dutch architecture
Eaves
Façade
Gablet roof
Hip roof
List of roof shapes
Tympanum (architecture)
References
Further reading
External links
Architectural elements
Types of wall
Roofs | Gable | Technology,Engineering | 634 |
72,650,562 | https://en.wikipedia.org/wiki/Apomyoglobin | Apomyoglobin is a representative of a group of relatively small, α-helical and globular proteins. It has been extensively employed as a model system for protein folding and stability studies. Apomyoglobin is myoglobin without its haem group: it lacks the haem whose iron atom would bind oxygen. It is possible, however, that apomyoglobin can bind other cofactors that are not haem groups. It also serves as an intermediate in the biosynthesis of myoglobin. In solutions at neutral pH, apomyoglobin has a compact and unique spatial structure with an extended hydrophobic core, and this structure has been found to be similar to that of holomyoglobin. Apomyoglobin is produced in the sarcoplasm and is a hydrophilic protein, meaning it has a high affinity for water.
Folding and unfolding at different pHs
Apomyoglobin folds slowly (taking around 2 seconds) in comparison to other proteins. It is an ideal protein to consider when studying folding because it has no cysteines, no disulfides, and no proline isomerization, which makes it easier to label. The protein exhibits primary, secondary, tertiary (not stable), and quaternary structure at different pHs.
Apomyoglobin also has different folding states at different pHs. At a pH of 6, the F helix of the monomer is not completely folded, and the folded protein retains both secondary and tertiary structure. At a pH of 4, apomyoglobin forms a structure known as a "molten globule" and becomes more stabilized; in a molten globule, the tertiary structure of the protein is lost, but the secondary structure remains and becomes stronger. At a pH of 2, the monomer becomes unfolded, yet it still retains a small amount of helical structure. The important point about folding and unfolding of the apomyoglobin monomer is that it unfolds at acidic pHs but can be refolded just as easily from acidic or alkaline solutions.
The kinetic folding pathway intermediate
Apomyoglobin has eight helices in total, labeled A, B, C, D, E, F, G and H. These helices are directly involved in what are known as helix-helix interactions (13 in total, all hydrophobic) in the native folded state. In a first approximation to apomyoglobin's folding kinetics (calculated using a diffusion-collision model), the two small helices D and C can be disregarded. To explain further, the diffusion-collision model states that the folding of helical proteins proceeds through randomized helix-helix collisions. This leaves apomyoglobin with only the A, B, E, F, G and H helices, meaning that the only possible interactions left are BG, GH, BE, FH, and AE.
The kinetic folding pathway of apomyoglobin showed that folding proceeds through a "burst phase" intermediate, delineated via 2D ¹H NMR spectra. This kinetic intermediate, formed within 6 ms (milliseconds), contained only the A, G, and H helices. Furthermore, the folding kinetics of apomyoglobin can be described as transitions among the 64 states generated by breaking or forming the individual helix-helix interactions. Upon initiation of folding, apomyoglobin collapses rapidly into an intermediate that contains the A, G, and H helices. This kinetic folding pathway results in the A(B)GH intermediate.
It can also be shown that the folding kinetics of apomyoglobin can be tied to nascent helices through a network of diffusion-collision steps: G + H <-> GH + A <-> AGH + B <-> A(B)GH.
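As a rough numerical illustration of this sequential scheme, the Python sketch below integrates the three steps as irreversible first-order reactions; the rate constants are hypothetical placeholders (the real steps are reversible and concentration-dependent, and measured rates are not given above), chosen only so that folding completes on the roughly 2-second timescale mentioned earlier.

    import numpy as np

    # Hypothetical rate constants (s^-1) for the three assembly steps:
    # G+H -> GH, GH+A -> AGH, AGH+B -> A(B)GH. Illustrative values only.
    K1, K2, K3 = 50.0, 20.0, 5.0
    DT, STEPS = 1e-4, 20000  # forward-Euler integration of 2 s of folding

    # Fractional populations: unfolded chain, GH, AGH, and final A(B)GH.
    pops = np.array([1.0, 0.0, 0.0, 0.0])

    for _ in range(STEPS):
        f1 = K1 * pops[0] * DT
        f2 = K2 * pops[1] * DT
        f3 = K3 * pops[2] * DT
        pops += np.array([-f1, f1 - f2, f2 - f3, f3])

    print(dict(zip(["U", "GH", "AGH", "A(B)GH"], pops.round(3))))

Run as written, essentially all of the population ends in the A(B)GH state after two seconds, mirroring the accumulation of the burst-phase intermediate.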
Interactions with membranes
A research study examined the way apomyoglobin interacts with membranes. The purpose of the study was to determine whether apomyoglobin interacts with membranes in order to extract a haem group from the lipid bilayer. This was done by looking at apomyoglobin's interactions with large unilamellar vesicles (LUVs) and then measuring the impact that apomyoglobin's membrane binding has on the rate of haem uptake.
The results point to apomyoglobin-membrane interactions being heavily pH dependent. It was also found that apomyoglobin may require the presence of anionic phospholipids. All of the conditions found to give a positive interaction between apomyoglobin and a membrane pointed towards destabilization of apomyoglobin and a decrease in the rate of the protein's binding with haem. This supported the conclusion that interaction between membranes and the protein is not necessary for holomyoglobin formation. It was also concluded that the molten globule state of apomyoglobin is an important step in making the hydrophobic regions of the protein accessible when interacting with the membrane.
In sperm whales
Apomyoglobin can be found in certain animals, a principal example being sperm whales. Apomyoglobin from sperm whales is used for studying ligand binding, protein folding, and protein stability. Apomyoglobin found in sperm whales binds pigments similar to chlorophyll, while its counterpart myoglobin has never really encountered these pigments. It was also discovered that sperm whale apomyoglobin is around 20 to 100 times more resistant to GdmCl (guanidinium chloride) denaturation; compared with apomyoglobin from other mammals, it is much more resistant to GdmCl-induced denaturation. Sperm whale apomyoglobin has also been widely used in comprehensive studies of protein unfolding. There are also specific sperm whale apomyoglobin mutations that give rise to amyloid fibrils at pH 7, for example the replacement of Trp-7 and Trp-14 by two Phe residues. Apomyoglobins from deep-diving whales are far more stable than those from mammals found on land or those that swim at the surface.
See also
Bovine serum albumin
Myoglobin
Molten globule
Hydrophobic collapse
References
Proteins | Apomyoglobin | Chemistry | 1,561 |
27,038,807 | https://en.wikipedia.org/wiki/Nellie%20Kershaw | Nellie Kershaw (c. 1891 – 14 March 1924) was an English textile worker from Rochdale, Lancashire. Her death due to pulmonary asbestosis was the first such case to be described in medical literature, and the first published account of disease attributed to occupational asbestos exposure. Before his publication of the case in the British Medical Journal, Dr William Edmund Cooke had already testified at Kershaw's inquest that "mineral particles in the lungs originated from asbestos and were, beyond reasonable doubt, the primary cause of the fibrosis of the lungs and therefore of death". Her employers, Turner Brothers Asbestos, accepted no liability for her injuries, paid no compensation to her bereaved family and refused to contribute towards funeral expenses as it "would create a precedent and admit responsibility". She was buried in a family grave at Rochdale Cemetery; this grave is not marked with a headstone. The subsequent inquiries into her death led to the publication of the first Asbestos Industry Regulations in 1931.
Early life
Nellie Kershaw was born to Elizabeth and Arthur Kershaw in Rochdale in 1891. In 1903 she left school, aged 12, to take up employment as a cotton rover in a cotton mill and 5 months later began working at Garsides asbestos mill. She transferred to Turner Brothers Asbestos on 31 December 1917, where she was employed as a rover, spinning raw asbestos fibre into yarn. She was married to Frank Kershaw, a slater's labourer, and had at least one child, a daughter born in about 1920.
Illness and death
She first began to exhibit symptoms in 1920 at the age of 29 but continued to work in the asbestos mill until 22 July 1922, when she was certified unfit to work. Because Kershaw's medical certificate (produced by her local physician Walter Scott Joss) recorded the diagnosis as "asbestos poisoning" she was ineligible for National Health Insurance sickness benefits. As the illness was linked to her occupation, the insurers advised her that she should instead seek sickness benefits from her employers, under the Workmen's Compensation Act, and wrote to Turner Brothers on her behalf on several occasions. However, Turner Brothers refused to pay any benefits because asbestos-related illness was not a recognised occupational disease at that time, and instructed their insurance company to "repudiate the claim" as it would be "exceedingly dangerous" to accept "any liability whatever in such a case." Percy George Kenyon, Turner's works' manager at Rochdale, wrote to Joss demanding that he "inform us what you have said to Miss Kershaw about suffering from Asbestos poisoning" and then wrote to the medical insurance board stating that "We repudiate the term "Asbestos Poisoning". Asbestos is not poisonous and no definition or knowledge of such a disease exists. Such a description is not to be found amongst the list of industrial diseases in the schedule published with the Workmen's Compensation Act."
She is known to have written to Turner Brothers herself on at least one occasion, asking: "What are you going to do about my case? I have been home 9 weeks now and have not received a penny – I think it's time that there was something from you as the National Health refuses to pay me anything. I am needing nourishment and the money, I should have had 9 weeks wages now through no fault of my own."
There is no record that she received payments of any form from any official source between July 1922 and her death. She died at 6.30am on 14 March 1924, aged 33.
Inquest
E. N. Molesworth, the coroner for Rochdale, was obliged to investigate all cases of "unnatural" death, and Dr Joss's diagnosis of "asbestos poisoning" led Molesworth to launch a formal inquest on 14 March 1924, which was adjourned awaiting a postmortem, to be conducted by F.W. Mackichan. Mackichan gave the cause of death as "pulmonary tuberculosis and heart failure" but a further adjournment was granted for microscopic examination of the lungs. When the inquest was resumed on 1 April 1924 Turner Brothers instructed a barrister, Mr McCleary, and their solicitor G.L. Collins, of Jackson & Co., to attend in order to represent their interests and to "evade any financial liability for Mrs Kershaw's death."
William Edmund Cooke, a pathologist and bacteriologist at Wigan Infirmary and Leigh Infirmary, testified that his examination of the lungs indicated old scarring indicative of a previously healed tuberculosis infection, and in addition, extensive fibrosis, in which were visible "particles of mineral matter ... of various shapes, but the large majority have sharp angles. The size varies from 393.6 to 3 μm in length." Having compared these particles with samples of asbestos dust provided by S.A. Henry, Medical Inspector of Factories, Cooke concluded that they "originated from asbestos and were, beyond a reasonable doubt, the primary cause of the fibrosis of the lungs and therefore of death".
In his written testimony to the inquest, Walter Joss stated that his diagnosis of "asbestos poisoning" was based upon his "previous experience of such a lung condition for many of his patients who were asbestos workers", and that he personally saw 10 to 12 similar cases each year, all in persons working with asbestos. Nellie's death certificate was issued 2 April 1924, citing "Fibrosis of the lungs due to the inhalation of mineral particles" as the cause of death.
In a fuller version of Kershaw's case published in the BMJ in 1927, Cooke gave the disease the name by which it is still known: "pulmonary asbestosis".
Parliamentary inquiry
As a result of Cooke's paper, Parliament commissioned an inquiry into the effects of asbestos dust by E. R. A. Merewether, Medical Inspector of Factories, and C. W. Price, a factory inspector and pioneer of dust monitoring and control. Their subsequent report, Occurrence of Pulmonary Fibrosis & Other Pulmonary Affections in Asbestos Workers, was presented to parliament on 24 March 1930. It concluded that the development of asbestosis was irrefutably linked to the prolonged inhalation of asbestos dust, and included the first health study of asbestos workers, which found that 66% of those employed for 20 years or more suffered from asbestosis. The report led to the publication of the first Asbestos Industry Regulations in 1931, which came into effect on 1 March 1932.
Memorial
In April 2006, a relative of Kershaw unveiled a memorial stone to asbestos victims worldwide in Rochdale. The memorial service was organised by the Save Spodden Valley campaign, an action group concerned about asbestos contamination on the former Turner's factory site where Kershaw had been employed.
References
1890s births
1924 deaths
Asbestos
People from Rochdale
Textile workers
Year of birth uncertain | Nellie Kershaw | Environmental_science | 1,400 |
77,925,920 | https://en.wikipedia.org/wiki/Aurodox | Aurodox (X-5108, goldinomycin) is a naturally occurring polyketide antibiotic drug, first isolated in 1972 from Streptomyces goldiniensis. It is active against various species of Gram-positive bacteria through inhibition of the type III secretion system (T3SS), and while its chemical properties make it unsuitable for use in human medicine directly, it is used in antibiotic research and related compounds may be developed for medical use.
See also
Fluorothiazinone
References
Antibiotics
Dienes
Pyrans
Pyridones
Tetrahydrofurans | Aurodox | Biology | 125 |
1,657,639 | https://en.wikipedia.org/wiki/Timeout%20%28computing%29 | In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including:
A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time.
A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal.
An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal.
Timeouts allow for more efficient usage of limited resources without requiring additional interaction from the agent interested in the goods that cause the consumption of these resources. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting will be aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless, and some other action is necessary.
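As a minimal illustration of this idea, the Python sketch below aborts a blocking network read once a deadline passes instead of waiting indefinitely; the host, port, buffer size and 5-second limit are arbitrary placeholders.

    import socket

    def fetch_banner(host, port, timeout_s=5.0):
        # create_connection applies the timeout to the connection attempt;
        # settimeout applies it to each subsequent blocking recv() call.
        with socket.create_connection((host, port), timeout=timeout_s) as conn:
            conn.settimeout(timeout_s)
            try:
                return conn.recv(1024)  # waits at most timeout_s seconds
            except socket.timeout:      # further waiting judged useless,
                return b""              # so take some other action instead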
Challenges
Balancing timeout values in distributed systems and microservices can be tricky: short timeout values can fail healthy requests prematurely, leading to complex workarounds, while long timeout values can result in slow error responses and poor user experiences. The circuit breaker design pattern can be a better alternative, as it can monitor service health, detect failures dynamically and faster, and improve the user experience.
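A minimal sketch of the circuit-breaker idea follows (an illustration of the pattern, not any particular library's API; the failure threshold and reset period are invented parameters): after a run of consecutive errors the breaker "opens" and fails fast, allowing one trial call only after a cooling-off period.

    import time

    class CircuitBreaker:
        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures  # consecutive errors tolerated
            self.reset_after = reset_after    # seconds before retrying
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None         # half-open: allow one trial
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                raise
            self.failures = 0                 # success closes the circuit
            return result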
Examples
Specific examples include:
In the Microsoft Windows and ReactOS command-line interfaces, the timeout command pauses the command processor for the specified number of seconds.
In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever, if the client crashes or the network goes down. Open connections consume resources, and may prevent other clients from accessing the same mailbox.
In HTTP persistent connections, the web server saves opened connections (which consume CPU time and memory). The web client does not have to send an "end of requests series" signal. Connections are closed (timed out) after five minutes of inactivity; this ensures that the connections do not persist indefinitely.
In a timed light switch, both energy and lamp's life-span are saved. The user does not have to switch off manually.
Tablet computers and smartphones commonly turn off their backlight after a certain time without user input.
See also
Fibre Channel time out values
Human-Machine Interaction
Permanent signal
References
Further reading
Computer programming
Telecommunications engineering
Computer networking | Timeout (computing) | Technology,Engineering | 552 |
65,056,057 | https://en.wikipedia.org/wiki/Gennady%20Ermak | Gennady Ermak (born 1963) is a scientist and writer. He conducted research in several fields of molecular biology: neurodegeneration, cancer, dermatology, and genetics of plants. His work is cited by over 4000 other scientific publications. He is co-author of over 50 scholarly articles and several books.
He was born in the former USSR, where he earned a PhD in biology. He subsequently worked at the Swiss Federal Institute of Technology in Zurich, Switzerland (1991–1992), the Institute of Genetics and Cytology in Minsk, Belarus (1992–1994), Albany Medical Center in Albany, NY, US (1994–1996), and the University of Southern California in Los Angeles, California (1996–2012).
After finishing his research career as a professor at the University of Southern California in 2012, he became a writer. He is the author of the books Emerging Medical Technologies; Plant-based, Meat-Based and Between; and Communism: The Great Misunderstanding (two editions), which was named the best book in world history for 2016 by the National Association of Book Entrepreneurs.
References
1963 births
Living people
Molecular biologists
Soviet biologists
Historians of communism
Academic staff of ETH Zurich
University of Southern California faculty | Gennady Ermak | Chemistry | 250 |
38,900,162 | https://en.wikipedia.org/wiki/Tricholoma%20apium | Tricholoma apium is a mushroom of the agaric genus Tricholoma that is found in Europe. It is classified as vulnerable in the IUCN Red List of Threatened Species.
See also
List of North American Tricholoma
List of Tricholoma species
References
Tricholoma
Fungi described in 1925
Fungi of Europe
Fungi of North America
Fungus species | Tricholoma apium | Biology | 74 |
31,487,183 | https://en.wikipedia.org/wiki/Spirit%20Studios | Spirit Studios, formerly known as SSR and The School of Sound Recording, is a music and media training academy producing graduates within the music, television, film and radio industries. It is based in Manchester in northern England, and has offshoots in London and Jakarta.
History
Commercial studio
Spirit Studios, based on Tariff Street in Manchester, began as a commercial recording studio in 1980 as part of the Northern Quarter, also known as the creative quarter. John Breakell, Spirit Studios' founder and Managing Director, ran the business with facilities that included four small rehearsal rooms and a single 4-track recording studio.
The first band to use Spirit's facilities were The Smiths, named by NME magazine as the most influential band since 1952. Spirit Studios continued to provide rehearsal and recording space for many Mancunian bands and international artists, notably: The Stone Roses, Tony Wilson, Simply Red, The Membranes, Happy Mondays and 808 State, Hypnotone, 2 for Joy, and Illustration Creator "歩き目です" (Arukimedesu). Producers such as Trevor Horn (Frankie Goes to Hollywood, Seal etc.), Martin Hannett (The Stone Roses, Joy Division etc.) and Arthur Baker (Bruce Springsteen, Bob Dylan, Kraftwerk, Al Green) all visited Spirit to record and produce their work.
Educational facility
In 1984, Spirit Studios made the transition from a commercial recording studio to an educational facility to become the School of Sound Recording (SSR). SSR was the first dedicated audio engineering school in the UK, using the advice and assistance of producers who had previously recorded at Spirit Studios.
At this time SSR occupied half of the basement of 10 Tariff Street, and the entire facility consisted of a single studio, one classroom and a reception/office area. SSR grew steadily during its first 15 years of trading and by 2000 the school occupied all three floors of 10 Tariff Street, two floors of 12 Tariff Street and a single floor in Fourways House (also on Tariff Street). By this time the school housed eight studios, two computer suites, four DJ booths and a classroom, and in May 2002 it became Europe's first AVID (then Digidesign) "Pro School".
In 2004, the Tariff Street campus closed its doors for the final time, and SSR launched its current location on Downing Street, around 0.5 miles south of Manchester's city centre.
Friday 5 June 2009 saw the Lord Mayor of Manchester and MP for Manchester Central Tony Lloyd officially re-launch a brand new Spirit Studio on the fourth floor of SSR's Downing Street premises. The new 1500 sq ft space was designed with help from acoustic design specialist Jochen Veith. The studio facility houses a Neve VRP60/48 Legend console and a variety of professional-quality analogue and digital equipment.
SSR celebrated its 25th anniversary in 2009 by launching the Anthony H Wilson (Tony Wilson) Scholarship in recognition of the contribution made by Tony to the creative and cultural life of Manchester.
In November 2009, SSR was awarded the Manchester Evening News Business of the Year Award 2009, for firms with turnover of under £5 million.
In November 2018, SSR was rebranded to its original Spirit Studios name, with a greater focus on its core educational provisions within the music and audio industries.
SSR London
In July 2010, SSR London was launched, taking up residence in Camden's Piano Factory building, London. The distinctive rotunda-shaped Piano Factory on Gloucester Crescent has stood for over a hundred years; it was built for Collard and Collard, which was the oldest of the well-known piano manufacturing firms of the St Pancras area. The building was renovated with recording studios, a green screen filming area and editing suites to be used as educational and commercial facilities by SSR. SSR London has also formed a partnership with the Roundhouse venue in Camden to deliver master classes in music production.
SSR Jakarta
Launched in 2011, SSR Jakarta delivers industry-led training programmes in audio engineering and creative media production ranging from weekend short courses to 18 month programmes. As a 'Partner Institution' of the University of Central Lancashire (UCLan), SSR Jakarta delivers degree programmes in Jakarta validated by a UK University.
Facilities
• Recording and post production studios
• Live sound venue
• Live sound workstations
• DJ booths
• PC suites
• Apple Mac suite
• Lecture room
• Student lounge
• Avid Pro Tools and Media Composer
• Apple Logic Pro Studio & Final Cut Studio
• Steinberg Cubase, Hypersonic & Wavelab
• Propellerhead Reason & ReCycle
• Ableton Live
• Sony CD Architect
• Celemony Melodyne
• Microsoft Office
Notable staff
Ian Carmichael (musician) – Vice Principal
Discography
All tracks recorded, produced or mixed at Spirit Studios:
The Stone Roses – Sally Cinnamon
Carmel – More, More, More
Candy Flip – Strawberry Fields Forever
Nathan Burton – Lucky #1
2 For Joy – In A State, Let The Bass Kick
Awesome 3 – Don't Go
Dr Umbardi – (One Day) We'll All Be Free
Denki Groove – FLASH PAPA
Massonix – Just A Little Bit More
808 State – Ninety, Prebuild, Newbuild, Quadrastate, Cubik
Hypnotone – Hypnotone, Dream Beam, Ai, Hypnotonic /Yu-Yu
Iris – Bad Hair Day
The Pleasure Crew – So Good
Biting Tongues – Fever House, Recharge
Living In A Box – Living In A Box
Mark Hall – Hard Core Uproar
References
External links
Spirit Studios Official Website
Sound recording
Music schools in England
Audio engineering schools | Spirit Studios | Engineering | 1,136 |
4,811,551 | https://en.wikipedia.org/wiki/Yellow%20fluorescent%20protein | Yellow fluorescent protein (YFP) is a genetic mutant of green fluorescent protein (GFP) originally derived from the jellyfish Aequorea victoria. Its excitation peak is 513 nm and its emission peak is 527 nm. Like the parent GFP, YFP is a useful tool in cell and molecular biology because the excitation and emission peaks of YFP are distinguishable from GFP which allows for the study of multiple processes/proteins within the same experiment.
Three improved versions of YFP are Citrine, Venus, and YPet. They have reduced chloride sensitivity, faster maturation, and increased brightness (defined as the product of the extinction coefficient and quantum yield). Typically, YFP serves as the acceptor in genetically encoded FRET sensors, for which the most common donor is monomeric cyan fluorescent protein (mCFP). The red-shift relative to GFP is caused by a π–π stacking interaction introduced by the T203Y substitution, which increases the polarizability of the local chromophore environment and donates additional electron density into the chromophore.
"Venus" contains a novel amino acid substitution –F46L– which accelerates the oxidation of the chromophore at 37°C, the rate limiting step of maturation. The protein has other substitutions (F64L/ M153T/ V163A/ S175G), permitting Venus to fold well and giving it relative tolerance to acidosis and Cl−.
Evolution of YFP from GFP
Four mutations of the wild-type GFP found in the Aequorea victoria jellyfish were needed to create the YFP mutant. The most important was the replacement of threonine with tyrosine at residue position 203 (the substitution is denoted T203Y, where T and Y are the single-letter codes for the amino acids threonine and tyrosine, respectively).
See also
Red fluorescent protein
References
External links
Introduction to fluorescent proteins
EYFP on FPbase
Fluorescent proteins | Yellow fluorescent protein | Chemistry,Biology | 434 |
76,823,014 | https://en.wikipedia.org/wiki/Taxonomic%20synonyms%20of%20Solanum%20tuberosum | The potato, Solanum tuberosum, has at least 438 taxonomic synonyms, as listed by the Royal Botanic Gardens, Kew website Plants of the World Online.
Synonyms
The attribution after each synonym is to the authors who first described the species under that name.
Battata tuberosa
Larnax sylvarum subsp. novogranatensis
Lycopersicon tuberosum
Parmentiera edulis
Solanum andigenum
Solanum andigenum convar. acutifolium
Solanum andigenum convar. adpressipilosum
Solanum andigenum f. alccai-huarmi
Solanum andigenum f. ancacc-maquin
Solanum andigenum f. arcuatum
Solanum andigenum subsp. argentinicum
Solanum andigenum subsp. australiperuvianum
Solanum andigenum subsp. aya-papa
Solanum andigenum var. aymaranum
Solanum andigenum f. basiscopum
Solanum andigenum f. bifidum
Solanum andigenum var. bolivianum
Solanum andigenum subsp. bolivianum
Solanum andigenum convar. brachistylum
Solanum andigenum convar. brevicalyces
Solanum andigenum var. brevicalyx
Solanum andigenum convar. brevipilosum
Solanum andigenum f. caesium
Solanum andigenum f. caiceda
Solanum andigenum var. carhua
Solanum andigenum f. ccompetillo
Solanum andigenum f. ccompis
Solanum andigenum var. ccusi
Solanum andigenum subsp. centraliperuvianum
Solanum andigenum f. cevallosii
Solanum andigenum f. chalcoense
Solanum andigenum f. chimaco
Solanum andigenum var. ckello-huaccoto
Solanum andigenum f. coeruleum
Solanum andigenum var. colombianum
Solanum andigenum subsp. colombianum
Solanum andigenum f. conicicolumnatum
Solanum andigenum f. cryptostylum
Solanum andigenum convar. curtibaccatum
Solanum andigenum var. cuzcoense
Solanum andigenum var. digitotuberosum
Solanum andigenum f. dilatatum
Solanum andigenum f. discolor
Solanum andigenum subsp. ecuatorianum
Solanum andigenum convar. elongatibaccatum
Solanum andigenum f. elongatipedicellatum
Solanum andigenum f. globosum
Solanum andigenum var. grauense
Solanum andigenum f. guatemalense
Solanum andigenum var. hederiforme
Solanum andigenum var. herrerae
Solanum andigenum f. huaca-layra
Solanum andigenum var. huairuru
Solanum andigenum f. huallata
Solanum andigenum f. huaman-uma
Solanum andigenum var. imilla
Solanum andigenum f. incrassatum
Solanum andigenum var. juninum
Solanum andigenum f. lanciacuminatum
Solanum andigenum f. lapazense
Solanum andigenum var. latius
Solanum andigenum f. lecke-umo
Solanum andigenum f. lilacinoflorum
Solanum andigenum f. lisarassa
Solanum andigenum f. llutuc-runtum
Solanum andigenum convar. longiacuminatum
Solanum andigenum var. longibaccatum
Solanum andigenum convar. macron
Solanum andigenum f. magnicorollatum
Solanum andigenum var. mexicanum
Solanum andigenum f. microstigma
Solanum andigenum convar. microstigmatum
Solanum andigenum f. nodosum
Solanum andigenum convar. nudiculum
Solanum andigenum convar. obtusiacuminatum
Solanum andigenum f. ovatibaccatum
Solanum andigenum f. pacus
Solanum andigenum f. pallidum
Solanum andigenum var. platyantherum
Solanum andigenum f. pomacanchicum
Solanum andigenum f. ppacc-nacha
Solanum andigenum f. ppaqui
Solanum andigenum convar. puca-mata
Solanum andigenum var. quechuanum
Solanum andigenum var. sihuanum
Solanum andigenum var. socco-huaccoto
Solanum andigenum convar. stenon
Solanum andigenum var. stenophyllum
Solanum andigenum f. sunchchu
Solanum andigenum subsp. tarmense
Solanum andigenum f. tenue
Solanum andigenum f. tiahuanacense
Solanum andigenum convar. titicacense
Solanum andigenum f. tocanum
Solanum andigenum f. tolucanum
Solanum andigenum f. uncuna
Solanum apurimacense
Solanum aracatscha
Solanum aracc-papa
Solanum ascasabii
Solanum boyacense
Solanum caniarense
Solanum cardenasii
Solanum cayeuxi
Solanum chariense
Solanum chaucha
Solanum chaucha var. ccoe-sulla
Solanum chaucha var. ckati
Solanum chaucha var. khoyllu
Solanum chaucha var. puca-suitu
Solanum chaucha f. purpureum
Solanum chaucha f. roseum
Solanum chaucha var. surimana
Solanum chiloense
Solanum chilotanum
Solanum chilotanum var. angustifurcatum
Solanum chilotanum f. magnicorollatum
Solanum chilotanum f. parvicorollatum
Solanum chilotanum var. talukdarii
Solanum chocclo
Solanum churuspi
Solanum coeruleiflorum
Solanum cultum
Solanum diemii
Solanum dubium
Solanum erlansonii
Solanum esculentum
Solanum estradea
Solanum goniocalyx
Solanum goniocalyx var. caeruleum
Solanum herrerae
Solanum hygrothermicum
Solanum kesselbrenneri
Solanum leptostigma
Solanum leptostigma
Solanum macmillanii
Solanum maglia var. chubutense
Solanum maglia var. guaytecarum
Solanum mamilliferum
Solanum molinae
Solanum oceanicum
Solanum ochoanum
Solanum paramoense
Solanum parmentieri
Solanum parvicorollatum
Solanum phureja
Solanum phureja var. caeruleum
Solanum phureja var. erlansonii
Solanum phureja subsp. estradae
Solanum phureja var. flavum
Solanum phureja subsp. hygrothermicum
Solanum phureja var. janck'o-phureja
Solanum phureja var. macmillanii
Solanum phureja f. orbiculatum
Solanum phureja var. pujeri
Solanum phureja var. rubroroseum
Solanum phureja var. sanguineum
Solanum phureja f. sayhuanimayo
Solanum phureja f. timusi
Solanum phureja f. viuda
Solanum riobambense
Solanum rybinii
Solanum rybinii var. bogotense
Solanum rybinii var. boyacense
Solanum rybinii var. pastoense
Solanum rybinii var. popayanum
Solanum sabinei
Solanum sanmartinense
Solanum sendigena
Solanum sinense
Solanum stenotomum
Solanum stenotomum f. alcay-imilla
Solanum stenotomum f. canasense
Solanum stenotomum f. canastilla
Solanum stenotomum f. catari-papa
Solanum stenotomum f. ccami
Solanum stenotomum var. ccami
Solanum stenotomum var. chapina
Solanum stenotomum f. chilcas
Solanum stenotomum f. chincherae
Solanum stenotomum f. chojllu
Solanum stenotomum f. cochicallo
Solanum stenotomum f. cohuasa
Solanum stenotomum f. cuchipacon
Solanum stenotomum var. cyaneum
Solanum stenotomum f. eucaliptae
Solanum stenotomum subsp. goniocalyx
Solanum stenotomum f. huallata-chinchi
Solanum stenotomum f. huamanpa-uman
Solanum stenotomum f. huanuchi
Solanum stenotomum var. huicu
Solanum stenotomum f. kamara
Solanum stenotomum f. kantillero
Solanum stenotomum var. keccrana
Solanum stenotomum f. kehuillo
Solanum stenotomum f. koso-nahui
Solanum stenotomum var. megalocalyx
Solanum stenotomum f. negrum
Solanum stenotomum f. orcco-amajaya
Solanum stenotomum f. pallidum
Solanum stenotomum var. peruanum
Solanum stenotomum f. phinu
Solanum stenotomum f. phitu-huayacas
Solanum stenotomum f. piticana
Solanum stenotomum var. pitiquilla
Solanum stenotomum f. pitoca
Solanum stenotomum var. poccoya
Solanum stenotomum f. puca
Solanum stenotomum var. puca-lunca
Solanum stenotomum var. putis
Solanum stenotomum f. roseum
Solanum stenotomum f. tiele
Solanum stenotomum f. yana-cculi
Solanum stenotomum f. yuracc
Solanum subandigenum
Solanum sylvestre
Solanum tarmense
Solanum tascalense
Solanum tenuifilamentum
Solanum tuberosum f. acuminatum
Solanum tuberosum var. aethiopicum
Solanum tuberosum var. alaudinum
Solanum tuberosum var. album
Solanum tuberosum f. alkka-imilla
Solanum tuberosum f. alkka-silla
Solanum tuberosum f. amajaya
Solanum tuberosum subsp. andigenum
Solanum tuberosum var. anglicum
Solanum tuberosum f. araucanum
Solanum tuberosum f. auriculatum
Solanum tuberosum f. azul-runa
Solanum tuberosum var. batatinum
Solanum tuberosum var. bertuchii
Solanum tuberosum var. borsdorfianum
Solanum tuberosum var. brachyceras
Solanum tuberosum f. brachykalukon
Solanum tuberosum f. brevipapillosum
Solanum tuberosum var. brevipilosum
Solanum tuberosum var. bufoninum
Solanum tuberosum var. californicum
Solanum tuberosum f. camota
Solanum tuberosum var. cepinum
Solanum tuberosum f. chaped
Solanum tuberosum f. chiar-lelekkoya
Solanum tuberosum f. chiar-pala
Solanum tuberosum subsp. chiloense
Solanum tuberosum var. chiloense
Solanum tuberosum var. chilotanum
Solanum tuberosum f. chojo-sajama
Solanum tuberosum var. chubutense
Solanum tuberosum f. conicum
Solanum tuberosum var. conocarpum
Solanum tuberosum f. contortum
Solanum tuberosum f. coraila
Solanum tuberosum var. cordiforme
Solanum tuberosum var. corsicanum
Solanum tuberosum f. crassifilamentum
Solanum tuberosum var. crassipedicellatum
Solanum tuberosum var. cucumerinum
Solanum tuberosum var. cultum
Solanum tuberosum var. drakeanum
Solanum tuberosum var. elegans
Solanum tuberosum f. elongatum
Solanum tuberosum var. elongatum
Solanum tuberosum f. enode
Solanum tuberosum var. erythroceras
Solanum tuberosum var. fragariinum
Solanum tuberosum var. guaytecarum
Solanum tuberosum var. hassicum
Solanum tuberosum var. helenanum
Solanum tuberosum var. hispanicum
Solanum tuberosum var. holsaticum
Solanum tuberosum f. huaca-zapato
Solanum tuberosum f. huichinkka
Solanum tuberosum f. indianum
Solanum tuberosum f. infectum
Solanum tuberosum f. isla-imilla
Solanum tuberosum f. jancck'o-kkoyllu
Solanum tuberosum f. janck'o-chockella
Solanum tuberosum f. janck'o-pala
Solanum tuberosum var. julianum
Solanum tuberosum var. kaunitzii
Solanum tuberosum f. kunurana
Solanum tuberosum f. laram-lelekkoya
Solanum tuberosum f. latum
Solanum tuberosum var. laurentianum
Solanum tuberosum var. lelekkoya
Solanum tuberosum var. leonhardianum
Solanum tuberosum f. mahuinhue
Solanum tuberosum var. malcachu
Solanum tuberosum var. melanoceras
Solanum tuberosum var. menapianum
Solanum tuberosum var. merceri
Solanum tuberosum f. milagro
Solanum tuberosum f. montticum
Solanum tuberosum var. multibaccatum
Solanum tuberosum var. murukewillu
Solanum tuberosum f. nigrum
Solanum tuberosum var. nobile
Solanum tuberosum var. norfolcicum
Solanum tuberosum var. nucinum
Solanum tuberosum f. oculosum
Solanum tuberosum f. ovatum
Solanum tuberosum f. overita
Solanum tuberosum var. palatinatum
Solanum tuberosum var. pecorum
Solanum tuberosum var. peruvianum
Solanum tuberosum f. pichuna
Solanum tuberosum f. pillicuma
Solanum tuberosum var. platyceras
Solanum tuberosum var. polemoniifolium
Solanum tuberosum var. praecox
Solanum tuberosum var. praedicandum
Solanum tuberosum f. pulo
Solanum tuberosum var. putscheanum
Solanum tuberosum var. recurvatum
Solanum tuberosum var. reniforme
Solanum tuberosum var. rockii
Solanum tuberosum var. rossicum
Solanum tuberosum var. rubrisuturatum
Solanum tuberosum var. rugiorum
Solanum tuberosum var. runa
Solanum tuberosum var. sabinei
Solanum tuberosum var. saccharatum
Solanum tuberosum var. salamandrinum
Solanum tuberosum f. sani-imilla
Solanum tuberosum var. schnittspahnii
Solanum tuberosum f. sebastianum
Solanum tuberosum var. sesquimensale
Solanum tuberosum var. sicha
Solanum tuberosum var. sipancachi
Solanum tuberosum var. strobilinum
Solanum tuberosum f. surico
Solanum tuberosum var. taraco
Solanum tuberosum var. tener
Solanum tuberosum f. tenuipedicellatum
Solanum tuberosum f. thalassinum
Solanum tuberosum var. tinctorium
Solanum tuberosum f. tinguipaya
Solanum tuberosum var. ulmense
Solanum tuberosum var. versicolor
Solanum tuberosum var. villaroella
Solanum tuberosum f. viride
Solanum tuberosum var. vuchefeldicum
Solanum tuberosum var. vulgare
Solanum tuberosum var. vulgare
Solanum tuberosum f. wila-huaycku
Solanum tuberosum f. wila-imilla
Solanum tuberosum f. wila-k'oyu
Solanum tuberosum f. wila-monda
Solanum tuberosum f. wila-pala
Solanum tuberosum var. xanthoceras
Solanum tuberosum f. yurac-taraco
Solanum tuberosum var. yutuense
Solanum utile
Solanum yabari
Solanum yabari var. cuzcoense
Solanum yabari var. pepino
Solanum zykinii
References
Solanum
Lists of plants | Taxonomic synonyms of Solanum tuberosum | Biology | 3,682 |
3,107,628 | https://en.wikipedia.org/wiki/Cobalt%28III%29%20fluoride | Cobalt(III) fluoride is the inorganic compound with the formula . Hydrates are also known. The anhydrous compound is a hygroscopic brown solid. It is used to synthesize organofluorine compounds.
The related cobalt(III) chloride is also known but is extremely unstable. Cobalt(III) bromide and cobalt(III) iodide have not been synthesized.
Structure
Anhydrous
Anhydrous cobalt trifluoride crystallizes in the rhombohedral group, specifically according to the aluminium trifluoride motif, with a = 527.9 pm, α = 56.97°. Each cobalt atom is bound to six fluorine atoms in octahedral geometry, with Co–F distances of 189 pm. Each fluoride is a doubly bridging ligand.
Hydrates
A hydrate is known. It is conjectured to be better described as .
There is a report of an hydrate , isomorphic to .
Preparation
Cobalt trifluoride can be prepared in the laboratory by treating cobalt(II) chloride with fluorine at 250 °C:
CoCl2 + 3/2 F2 → CoF3 + Cl2
In this redox reaction, Co2+ and Cl− are oxidized to Co3+ and Cl2, respectively, while F2 is reduced to F−. Cobalt(II) oxide (CoO) and cobalt(II) fluoride (CoF2) can also be converted to cobalt(III) fluoride using fluorine.
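To make the electron bookkeeping explicit, the overall equation can be split into half-reactions (a sketch consistent with the equation above, not part of the original text):

```latex
\begin{align*}
\mathrm{Co^{2+}} &\rightarrow \mathrm{Co^{3+}} + e^- && \text{(oxidation)} \\
\mathrm{2\,Cl^-} &\rightarrow \mathrm{Cl_2} + 2\,e^- && \text{(oxidation)} \\
\mathrm{F_2} + 2\,e^- &\rightarrow \mathrm{2\,F^-} && \text{(reduction)}
\end{align*}
```

Per formula unit of CoCl2, three electrons are released in total (one from cobalt, two from chloride) and are taken up by 3/2 F2, matching the stoichiometry above.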
The compound can also be formed by treatment with chlorine trifluoride (ClF3) or bromine trifluoride (BrF3).
Reactions
CoF3 decomposes upon contact with water to give oxygen:
4 CoF3 + 2 H2O → 4 HF + 4 CoF2 + O2
It reacts with fluoride salts to give the anion [CoF6]3−, which also features a high-spin, octahedral cobalt(III) center.
Applications
CoF3 is a powerful fluorinating agent. Used as a slurry, it converts hydrocarbons to perfluorocarbons:
2 CoF3 + R−H → 2 CoF2 + R−F + HF
CoF2 is the byproduct.
Such reactions are sometimes accompanied by rearrangements or other reactions. The related reagent KCoF4 is more selective.
Gaseous CoF3
In the gas phase, CoF3 is calculated to be planar in its ground state, with a 3-fold rotation axis (point group D3h). The Co3+ ion has a ground state of 3d6 5D. The fluoride ligands split this state into, in energy order, 5A', 5E", and 5E' states. The first energy difference is small, and the 5E" state is subject to the Jahn–Teller effect, so this effect needs to be considered to be sure of the ground state. The energy lowering is small and does not change the energy order. This calculation was the first treatment of the Jahn–Teller effect using calculated energy surfaces.
References
External links
National Pollutant Inventory - Cobalt fact sheet
National Pollutant Inventory - Fluoride and compounds fact sheet
Fluorides
Metal halides
Cobalt(III) compounds
Fluorinating agents | Cobalt(III) fluoride | Chemistry | 646 |
2,231,743 | https://en.wikipedia.org/wiki/Explicit%20knowledge | Explicit knowledge (also expressive knowledge) is knowledge that can be readily articulated, conceptualized, codified, formalized, stored and accessed. It can be expressed in formal and systematical language and shared in the form of data, scientific formulae, specifications, manuals and such like. It is easily codifiable and thus transmittable without loss of integrity once the syntactical rules required for deciphering it are known. Most forms of explicit knowledge can be stored in certain media. Explicit knowledge is often seen as complementary to tacit knowledge.
Explicit knowledge is often seen as easier to formalize compared to tacit knowledge, but both are necessary for knowledge creation. Nonaka and Takeuchi introduce the SECI model as a way for knowledge creation. The SECI model involves four stages where explicit and tacit knowledge interact with each other in a spiral manner. The four stages are:
Socialization, from tacit to tacit knowledge
Externalization, from tacit to explicit knowledge
Combination, from explicit to explicit knowledge
Internalization, from explicit to tacit knowledge.
Examples
The information contained in encyclopedias and textbooks is a good example of explicit knowledge, specifically declarative knowledge. The most common forms of explicit knowledge are manuals, documents, procedures, and how-to videos. Knowledge can also be audio-visual. Engineering works and product design can be seen as other forms of explicit knowledge, in which human skills, motives and knowledge are externalized.
In the scholarly literature, papers presenting an up-to-date "systemization of knowledge" (SoK) on a particular area of research are valuable resources for PhD students.
See also
Descriptive knowledge
SECI model of knowledge dimensions
Tacit knowledge
References
External links
National Library for Health - Knowledge Management Specialist Library - collection of resources about auditing intellectual capital.
Knowledge
Cognitive psychology | Explicit knowledge | Biology | 368 |
16,823,207 | https://en.wikipedia.org/wiki/Free%20fraction | The free fraction is a parameter in pharmacokinetics and receptor-ligand kinetics.
One speaks of two different free fractions:
Plasma free fraction, previously referred to as ƒ1, is now referred to as ƒP according to consensus nomenclature.
Tissue free fraction (ƒND), previously referred to as ƒ2
The plasma free fraction is the fraction of the ligand at equilibrium in blood plasma that is not bound to plasma proteins.
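Restated as a formula (with C denoting plasma concentrations of the ligand; the notation is ours, not from the original text):

```latex
f_P \;=\; \frac{C_{\mathrm{unbound}}}{C_{\mathrm{unbound}} + C_{\mathrm{bound}}} \;=\; \frac{C_{\mathrm{unbound}}}{C_{\mathrm{total}}}
```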
See also
Binding potential
References
Others
Pharmacokinetics | Free fraction | Chemistry,Biology | 103 |
21,727,396 | https://en.wikipedia.org/wiki/Double%20group | The concept of a double group was introduced by Hans Bethe for the quantitative treatment of magnetochemistry. Because the fermions' phase changes with 360-degree rotation, enhanced symmetry groups that describe band degeneracy and topological properties of magnonic systems are needed, which depend not only on geometric rotation, but on the corresponding fermionic phase factor in representations (for the related mathematical concept, see the formal definition). They were introduced for studying complexes of ions that have a single unpaired electron in the metal ion's valence electron shell, like Ti3+, and complexes of ions that have a single "vacancy" in the valence shell, like Cu2+.
In the specific instances of complexes of metal ions that have the electronic configurations 3d1, 3d9, 4f1 and 4f13, rotation by 360° must be treated as a symmetry operation R, in a separate class from the identity operation E. This arises from the nature of the wave function for electron spin. A double group is formed by combining a molecular point group with the group that has two symmetry operations, identity and rotation by 360°. The double group has twice the number of symmetry operations compared to the molecular point group.
Background
In magnetochemistry, the need for a double group arises in a very particular circumstance, namely, in the treatment of the paramagnetism of complexes of a metal ion in whose electronic structure there is a single electron (or its equivalent, a single vacancy) in a metal ion's d- or f-shell. This occurs, for example, with the elements copper and silver in the +2 oxidation state, where there is a single vacancy in a d electron shell, with titanium(III), which has a single electron in the 3d shell, and with cerium(III), which has a single electron in the 4f shell.
In group theory, the character χ(α) for rotation of a molecular wavefunction with angular momentum quantum number J by an angle α is given by

χ(α) = sin[(J + 1/2)α] / sin(α/2)

where J = L + S; angular momentum is the vector sum of orbital and spin angular momentum. This formula applies with most paramagnetic chemical compounds of transition metals and lanthanides. However, in a complex containing an atom with a single electron in the valence shell, J is half-integer, and then the character χ(α + 2π) for a rotation through an angle of (α + 2π) about an axis through that atom is equal to minus the character χ(α) for a rotation through an angle of α.

The change of sign cannot be true for an identity operation in any point group. Therefore, a double group, in which rotation by (α + 2π) is classified as being distinct from the identity operation, is used.
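As a numerical illustration (not part of the original article), the following C sketch evaluates the character formula above and shows that rotation by an extra 2π flips the sign of χ for half-integer J but leaves it unchanged for integer J:

```c
#include <math.h>
#include <stdio.h>

/* Character of a rotation by angle alpha for angular momentum J:
   chi(alpha) = sin((J + 1/2) * alpha) / sin(alpha / 2) */
static double chi(double J, double alpha)
{
    return sin((J + 0.5) * alpha) / sin(alpha / 2.0);
}

int main(void)
{
    const double pi = acos(-1.0);
    const double alpha = 0.7;               /* arbitrary test angle */
    const double Js[] = { 0.5, 1.0, 1.5, 2.0 };

    for (int i = 0; i < 4; i++) {
        double J = Js[i];
        /* For half-integer J, chi(alpha + 2*pi) = -chi(alpha);
           for integer J the character is unchanged. */
        printf("J = %.1f: chi(a) = %+.4f, chi(a + 2pi) = %+.4f\n",
               J, chi(J, alpha), chi(J, alpha + 2.0 * pi));
    }
    return 0;
}
```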
A character table for the double group D'4 is as follows. The new symmetry operations, obtained by combining each operation with R, are shown in the second header row.

Character table: double group D'4

         E     R     C4      C4³     C2      2C2'     2C2''
                     C4³R    C4R     C2R     2C2'R    2C2''R
  A'1    1     1     1       1       1       1        1
  A'2    1     1     1       1       1       −1       −1
  B'1    1     1     −1      −1      1       1        −1
  B'2    1     1     −1      −1      1       −1       1
  E'1    2     2     0       0       −2      0        0
  E'2    2     −2    √2      −√2     0       0        0
  E'3    2     −2    −√2     √2      0       0        0

Symmetry operations such as C4 and C4³R belong to the same class, but for convenience the column header is shown in two rows rather than writing C4, C4³R in a single row.
Character tables for the double groups , , , , , , , , , , , , and are given in Salthouse and Ware.
Applications
The need for a double group occurs, for example, in the treatment of magnetic properties of 6-coordinate complexes of copper(II). The electronic configuration of the central Cu2+ ion can be written as [Ar]3d9. It can be said that there is a single vacancy, or hole, in the copper 3d-electron shell, which can contain up to 10 electrons. The ion [Cu(H2O)6]2+ is a typical example of a compound with this characteristic.
Six-coordinate complexes of the Cu(II) ion, with the generic formula [CuL6]2+, are subject to the Jahn–Teller effect so that the symmetry is reduced from octahedral (point group Oh) to tetragonal (point group D4h). Since d orbitals are centrosymmetric, the related atomic term symbols can be classified in the subgroup D4.
To a first approximation spin–orbit coupling can be ignored and the magnetic moment is then predicted to be 1.73 Bohr magnetons, the so-called spin-only value. However, for a more accurate prediction spin–orbit coupling must be taken into consideration. This means that the relevant quantum number is J, where J = L + S.
When J is half-integer, the character for a rotation by an angle of (α + 2π) radians is equal to minus the character for rotation by an angle α. This cannot be true for an identity operation in a point group. Consequently, a group must be used in which rotations by (α + 2π) are classed as symmetry operations distinct from rotations by an angle α. This group is known as the double group, D'4.
With species such as the square-planar complex of the silver(II) ion [AgF4]2− the relevant double group is also D'4; deviations from the spin-only value are greater because the magnitude of spin–orbit coupling is greater for silver(II) than for copper(II).
A double group is also used for some compounds of titanium in the +3 oxidation state. Titanium(III) has a single electron in the 3d shell; the magnetic moments of its complexes have been found to lie in the range 1.63–1.81 B.M. at room temperature. The double group is used to classify their electronic states.
The cerium(III) ion, Ce3+, has a single electron in the 4f shell. The magnetic properties of octahedral complexes of this ion are treated using the double group O'.
When a cerium(III) ion is encapsulated in a C60 cage, the formula of the endohedral fullerene is written as Ce@C60. The magnetic properties of the compound are treated using the icosahedral double group I2h.
Free radicals
Double groups may be used in connection with free radicals. This has been illustrated for the species CH3F+ and CH3BF2+, each of which contains a single unpaired electron.
See also
Molecular symmetry
Point group
Magnetochemistry
References
Further reading
Group theory
Molecular physics
Theoretical chemistry
Materials science | Double group | Physics,Chemistry,Materials_science,Mathematics,Engineering | 1,543 |
1,503,963 | https://en.wikipedia.org/wiki/Chebyshev%20distance | In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric is a metric defined on a real coordinate space where the distance between two points is the greatest of their differences along any coordinate dimension. It is named after Pafnuty Chebyshev.
It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board. For example, the Chebyshev distance between f6 and e2 equals 4.
Definition
The Chebyshev distance between two vectors or points x and y, with standard coordinates x_i and y_i, respectively, is

D_Chebyshev(x, y) = max_i |x_i − y_i|.
This equals the limit of the Lp metrics:

D_Chebyshev(x, y) = lim_{p→∞} ( Σ_i |x_i − y_i|^p )^{1/p},

hence it is also known as the L∞ metric.
Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric.
In two dimensions, i.e. plane geometry, if the points p and q have Cartesian coordinates (x1, y1) and (x2, y2), their Chebyshev distance is

D_Chebyshev = max(|x2 − x1|, |y2 − y1|).
Under this metric, a circle of radius r, which is the set of points with Chebyshev distance r from a center point, is a square whose sides have the length 2r and are parallel to the coordinate axes.
On a chessboard, where one is using a discrete Chebyshev distance, rather than a continuous one, the circle of radius r is a square of side lengths 2r, measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square.
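As an illustration of the definition (the function name chebyshev2d is ours, not from any library), a minimal C sketch that checks the chessboard example above, mapping files a–h to 1–8 so that f6 = (6, 6) and e2 = (5, 2):

```c
#include <stdio.h>
#include <stdlib.h>

/* Chebyshev (L-infinity) distance in two dimensions:
   the greatest coordinate-wise absolute difference. */
static int chebyshev2d(int x1, int y1, int x2, int y2)
{
    int dx = abs(x2 - x1);
    int dy = abs(y2 - y1);
    return dx > dy ? dx : dy;
}

int main(void)
{
    /* f6 -> e2 on a chessboard: file f = 6, file e = 5 */
    printf("d(f6, e2) = %d\n", chebyshev2d(6, 6, 5, 2)); /* prints 4 */
    return 0;
}
```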
Properties
In one dimension, all Lp metrics are equal – they are just the absolute value of the difference.
The two-dimensional Manhattan distance has "circles", i.e. level sets, in the form of squares with sides of length √2·r, oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. a linear transformation of) the planar Manhattan distance.
However, this geometric equivalence between L1 and L∞ metrics does not generalize to higher dimensions. A sphere formed using the Chebyshev distance as a metric is a cube with each face perpendicular to one of the coordinate axes, but a sphere formed using Manhattan distance is an octahedron: these are dual polyhedra, but among cubes, only the square (and 1-dimensional line segment) are self-dual polytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1 and L∞ metrics are mathematically dual to each other.
On a grid (such as a chessboard), the points at a Chebyshev distance of 1 of a point are the Moore neighborhood of that point.
The Chebyshev distance is the limiting case of the order-p Minkowski distance, when p reaches infinity.
Applications
The Chebyshev distance is sometimes used in warehouse logistics, as it effectively measures the time an overhead crane takes to move an object (as the crane can move on the x and y axes at the same time but at the same speed along each axis).
It is also widely used in electronic Computer-Aided Manufacturing (CAM) applications, in particular, in optimization algorithms for these. Many tools, such as plotting or drilling machines, photoplotter, etc. operating in the plane, are usually controlled by two motors in x and y directions, similar to the overhead cranes.
Generalizations
For the sequence space of infinite-length sequences of real or complex numbers, the Chebyshev distance generalizes to the ℓ∞-norm; this norm is sometimes called the Chebyshev norm. For the space of (real- or complex-valued) functions, the Chebyshev distance generalizes to the uniform norm.
See also
King's graph
Taxicab geometry
References
Distance
Mathematical chess problems
Metric geometry | Chebyshev distance | Physics,Mathematics | 868 |
1,702,858 | https://en.wikipedia.org/wiki/Entoptic%20phenomenon | Entoptic phenomena () are visual effects whose source is within the human eye itself. (Occasionally, these are called entopic phenomena, which is probably a typographical mistake.)
In Helmholtz's words: "Under suitable conditions light falling on the eye may render visible certain objects within the eye itself. These perceptions are called entoptical."
Overview
Entoptic images have a physical basis in the image cast upon the retina. Hence, they are different from optical illusions, which are caused by the visual system and characterized by a visual percept that (loosely said) appears to differ from reality. Because entoptic images are caused by phenomena within the observer's own eye, they share one feature with optical illusions and hallucinations: the observer cannot share a direct and specific view of the phenomenon with others.
Helmholtz commented on entoptic phenomena which could be seen easily by some observers, but could not be seen at all by others. This variance is not surprising because the specific aspects of the eye that produce these images are unique to each individual. Because of the variation between individuals, and the inability for two observers to share a nearly identical stimulus, these phenomena are unlike most visual sensations. They are also unlike most optical illusions which are produced by viewing a common stimulus. Yet, there is enough commonality between the main entoptic phenomena that their physical origin is now well understood.
Examples
Some examples of entoptical effects include:
Floaters or muscae volitantes are slowly drifting blobs of varying size, shape, and transparency, which are particularly noticeable when viewing a bright, featureless background (such as the sky) or a point source of diffuse light very close to the eye. They are shadow images of objects floating in the liquid between the retina and the gel inside the eye (the vitreous humor). They are visible because they move; if they were pinned to the retina by the vitreous, or fixed within the vitreous, they would be as invisible as any ordinary stationary object, such as the retinal blood vessels (see Purkinje tree below). Some may be individual red blood cells swollen due to osmotic pressure; others may be chains of red blood cells stuck together, around which diffraction patterns can be seen. Still others may be "coagula of the proteins of the vitreous gel, to embryonic remnants, or the condensation round the walls of Cloquet's canal" that exist in pockets of liquid within the vitreous. The first two sorts of floaters may collect over the fovea (the center of vision), and therefore be more visible, when a person is lying on his or her back looking upwards.
Blue field entoptic phenomenon has the appearance of tiny bright dots moving rapidly along squiggly lines in the visual field. It is much more noticeable when viewed against a field of pure blue light and is caused by white blood cells moving in the capillaries in front of the retina. White cells are larger than red blood cells and can be larger than the diameter of a capillary, so must deform to fit. As a large, deformed white blood cell goes through a capillary, a space opens up in front of it and red blood cells pile up behind. This makes the dots of light appear slightly elongated with dark tails.
Haidinger's brush is a very subtle bowtie- or hourglass-shaped pattern that is seen when viewing a field with a component of blue light that is plane- or circularly polarized. It is easier to see if the polarization is rotating with respect to the observer's eye, although some observers can see it in the natural polarization of sky light. If the light is all blue, it will appear as a dark shadow; if the light is full-spectrum, it will appear yellow. It is due to the preferential absorption of blue polarized light by pigment molecules in the fovea.
Purkinje images are the reflections from the anterior and posterior surfaces of the cornea and the anterior and posterior surfaces of the lens. Although these first four reflections are not entoptic—they are seen by others who are looking at someone’s eye— Becker described how light can reflect from the posterior surface of the lens and then again from the anterior surface of the cornea to focus a second image on the retina, this one much fainter and inverted. Tscherning referred to this as the sixth image (the fifth image is formed by reflections from the anterior surfaces of the lens and cornea to form an image too far in front of the retina to be visible) and noted it was much fainter and best seen with a relaxed emmetropic eye. To see it, one must be in a dark room, with one eye closed; one must look straight ahead while moving a light back and forth in the field of the open eye. Then one should see the sixth Purkinje as a dimmer image moving in the opposite direction.
The Purkinje tree is an image of the retinal blood vessels in one's own eye, first described by Purkyně in 1823. It can be seen by shining the beam of a small bright light through the pupil from the periphery of a subject's vision. This results in an image of the light being focused on the periphery of the retina. Light from this spot then casts shadows of the blood vessels (which lie on top of the retina) onto unadapted portions of the retina. Normally the image of the retinal blood vessels is invisible because of adaptation. Unless the light moves, the image disappears within a second or so. If the light is moved at about 1 Hz, adaptation is defeated, and a clear image can be seen indefinitely. The vascular figure is often seen by patients during an ophthalmic examination when the examiner is using an ophthalmoscope. Another way in which the shadows of blood vessels may be seen is by holding a bright light against the eyelid at the corner of the eye. The light penetrates the eye and casts a shadow on the blood vessels as described previously. The light must be jiggled to defeat adaptation. Viewing in both cases is improved in a dark room while looking at a featureless background. This topic is discussed in more detail by Helmholtz.
Purkinje's blue arcs are associated with the activity of the nerves sending signals from where a spot of light is focussed on the retina near the fovea to the optic disk. To see it, one needs to look at the right edge of a small red light in a dark room with the right eye (left eye closed) after dark-adapting for about 30 seconds; one should see two faint blue arcs starting at the light and heading towards the blind spot. When one looks at the left edge, one will see a faint blue spike going from the light to the right.
A phosphene is the perception of light without light actually entering the eye, for instance caused by pressure applied to the closed eyes.
A phenomenon that could be entoptical if the eyelashes are considered to be part of the eye is seeing light diffracted through the eyelashes. The phenomenon appears as one or more light disks crossed by dark blurry lines (the shadows of the lashes), each having fringes of spectral colour. The disk shape is given by the circular aperture of the pupil.
See also
Theory of Colours
References
Sources
Jan E. Purkyně, 1823: Beiträge zur Kenntniss des Sehens in subjectiver Hinsicht in Beobachtungen und Versuche zur Physiologie der Sinne, In Commission der J.G. Calve'schen Buchhandlung, Prag.
H. von Helmholtz, Handbuch der Physiologischen Optik, published as "Helmholtz's Treatise on Physiological Optics, Translated from the Third German Edition," ed. James P. C. Southall; 1925; The Optical Society of America.
Leonard Zusne, 1990: Anomalistic Psychology: A Study of Magical Thinking; Lea;
Becker, O., 1860, "Über Wahrnehmung eines Reflexbildes im eigenen Auge [About perception of a reflected image in one's own eye]," Wiener Medizinische Wochenschrift, pp. 670–672 & 684–688.
M. Tscherning, 1920, Physiologic Optics; Third Edition, (English translation by C. Weiland). Philadelphia: Keystone Publishing Co. pp. 55–56.
White, Harvey E., and Levatin, Paul, 1962, "'Floaters' in the eye," Scientific American, Vol. 206, No. 6, June, 1962, pp. 119–127.
Duke Elder, W. S. (ed.), 1962, System of Ophthalmology, Volume 7, The Foundations of Ophthalmology: heredity pathology diagnosis and therapeutics, St. Louis, The C.V. Mosby Company. p450.
Snodderly, D.M., Weinhaus, R.S., & Choi, J.C. (1992). Neural-vascular relationships in central retina of Macaque monkeys (Macaca fascicularis). Journal of Neuroscience, 12(4), 1169-1193. Available online at: http://www.jneurosci.org/cgi/reprint/12/4/1169.pdf.
Sinclair, S.H., Azar-Cavanagh, M., Soper, K.A., Tuma, R.F., & Mayrovitz, H.N. (1989). Investigation of the source of the blue field entoptic phenomenon. Investigative Ophthalmology & Visual Science, 30(4), 668-673. Available online at: http://www.iovs.org/.
Giles Skey Brindley, Physiology of the Retina and Visual Pathway, 2nd ed. (Edward Arnold Ltd., London, 1970), pp. 140–141.
Bill Reid, “Haidinger's brush,” Physics Teacher, Vol. 28, p. 598 (Dec. 1990).
Walker, J., 1984, "How to stop a spinning object by humming and perceive curious blue arcs around the light," Scientific American, February, Vol. 250, No. 2, pp. 136–138, 140, 141, 143, 144, 148.
External links
Picture of the entoptic phenomenon: Vitreous Floaters (PDF file, requires an Acrobat Reader or plugin)
Diagram of entoptic subjective visual phenomena
Video describing history and science of first entoptic viewing technique
Video describing history and science of second entoptic viewing technique
The Relation Between Migraine, Typical Migraine Aura and “Visual Snow”
Ophthalmology
Vision | Entoptic phenomenon | Physics | 2,281 |
1,630,483 | https://en.wikipedia.org/wiki/Prime%20power | In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number.
For example: 7 = 7^1, 9 = 3^2 and 64 = 2^6 are prime powers, while
6 = 2 × 3, 12 = 2^2 × 3 and 36 = 6^2 = 2^2 × 3^2 are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … .
The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2 greater than 4) has a primitive root; thus the multiplicative group of integers modulo p^n (that is, the group of units of the ring Z/p^nZ) is cyclic.
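As a brute-force illustration of this cyclicity (a hypothetical sketch; mult_order is our helper, not a library function), the following C program finds a generator of the multiplicative group modulo the prime power 25 = 5^2, whose order is φ(25) = 20:

```c
#include <stdio.h>

/* Order of g in the multiplicative group modulo m
   (g must be coprime to m). */
static int mult_order(int g, int m)
{
    int x = g % m, order = 1;
    while (x != 1) {
        x = (x * g) % m;
        order++;
    }
    return order;
}

int main(void)
{
    const int m = 25;           /* prime power 5^2 */
    const int group_order = 20; /* phi(25) = 25 - 5 */

    for (int g = 2; g < m; g++) {
        if (g % 5 == 0)
            continue;           /* not a unit modulo 25 */
        if (mult_order(g, m) == group_order) {
            printf("%d is a primitive root modulo %d\n", g, m);
            break;              /* 2 is in fact the smallest one */
        }
    }
    return 0;
}
```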
The number of elements of a finite field is always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up to isomorphism).
Combinatorial properties
A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime is a small set in the sense that the infinite sum of their reciprocals converges, although the primes are a large set.
Divisibility properties
The totient function (φ) and sigma functions (σ0) and (σ1) of a prime power p^n are calculated by the formulas

φ(p^n) = p^(n−1)(p − 1),
σ0(p^n) = n + 1,
σ1(p^n) = (p^(n+1) − 1) / (p − 1).
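These formulas can be cross-checked by brute-force counting; a minimal C sketch, assuming the example p = 3, n = 4 (so p^n = 81):

```c
#include <stdio.h>

int main(void)
{
    const int p = 3, pn = 81;          /* p^n = 3^4 */
    int phi = 0, sigma0 = 0, sigma1 = 0;

    for (int k = 1; k <= pn; k++) {
        if (k % p != 0)
            phi++;                     /* k coprime to p^n iff p does not divide k */
        if (pn % k == 0) {
            sigma0++;                  /* number of divisors */
            sigma1 += k;               /* sum of divisors */
        }
    }
    /* Expected from the formulas: phi = 3^3 * 2 = 54,
       sigma0 = n + 1 = 5, sigma1 = (3^5 - 1)/(3 - 1) = 121 */
    printf("phi = %d, sigma0 = %d, sigma1 = %d\n", phi, sigma0, sigma1);
    return 0;
}
```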
All prime powers are deficient numbers. A prime power p^n is an n-almost prime. It is not known whether a prime power p^n can be a member of an amicable pair. If there is such a number, then p^n must be greater than 10^1500 and n must be greater than 1400.
See also
Almost prime
Fermi–Dirac prime
Perfect power
Semiprime
References
Further reading
Jones, Gareth A. and Jones, J. Mary (1998) Elementary Number Theory Springer-Verlag London
Prime numbers
Exponentials
Number theory
Integer sequences | Prime power | Mathematics | 525 |
23,903,285 | https://en.wikipedia.org/wiki/ANNINE-6plus | ANNINE-6plus is a water soluble voltage sensitive dye (also called potentiometric dyes). This compound was developed at the Max Planck Institute for Biochemistry in Germany. It is used to optically measure the changes in transmembrane voltage of excitable cells, including neurons, skeletal and cardiac myocytes.
Voltage sensitivity
ANNINE-6plus has a fractional fluorescent intensity change (ΔF/F per 100 mV change) of about 30% with single-photon excitation (~488 nm) and >50% with two-photon excitation (~1060 nm).
Applications
ANNINE-6plus has been applied in the microscopic imaging of action potentials of cardiomyocytes in perfused mouse hearts. Using confocal microscopy in conjunction with ANNINE-6plus, single-sweep action potentials with high peak signal-to-noise ratio (SNR) have been recorded from single transverse tubules (t-tubules) a few micrometers across in ventricular cardiomyocytes.
References
Dyes
Microscopy
Imaging
Electrophysiology
Quaternary ammonium compounds
Bromides | ANNINE-6plus | Chemistry | 243 |
185,185 | https://en.wikipedia.org/wiki/Nominet%20UK | Nominet UK is currently delegated by IANA to be the manager of the .uk domain name. Nominet directly manages registrations directly under .uk, and some of the second level domains .co.uk, .org.uk, .sch.uk, .me.uk, .net.uk, .ltd.uk and .plc.uk.
Nominet also manages the .wales and .cymru domains.
As of February 2021, the .uk register held 10,922,477 .uk domain names. This represents a year-on-year decline compared to February 2020, mainly due to the lapsing of recently launched .uk domain names.
Nominet was founded by Dr. Willie Black and five others on 14 May 1996 when its predecessor, the "Naming Committee" was unable to deal with the volume of registrations then being sought under the .uk domain. Nominet is a not-for-profit company limited by guarantee. It has members who act as shareholders, but without the right to participate in the profits of the company. Anyone can become a member, but most members are internet service providers who are also registrars.
Customers wishing to register a domain can approach Nominet directly but will generally register the domain via a domain registrar – a business entity authorized by Nominet to register and manage .uk domains on behalf of customers. Registrars for .uk domains were formerly known as "tagholders".
Nominet also deals with disputes about registrations of .uk domain names, via its Dispute Resolution Service (DRS) which is similar to the UDRP system used for generic Top Level Domain Names, but with certain innovations such as a free mediation service.
In 2008 Nominet launched a charitable foundation, the Nominet Trust, and contributed some of its profits to it. In May 2018 Nominet relinquished control over the organisation, which was renamed Social Tech Trust and announced a "strategic partnership with Social Tech Business". By 2020 the income of Social Tech Trust had fallen to £99,000 from £5.6 million in 2018.
In February 2021 over 140 members requested an extraordinary general meeting to vote on removing five board directors, including the chair, and to appoint Michael Lyons, former chairman of the BBC, and Axel Pawlik, former Managing Director of RIPE NCC. The EGM was held on 22 March 2021 and the five directors were removed. Rob Binns, a former Group Treasurer at HP Inc. and then CFO at The Access Group, was appointed as acting chair.
History
Most countries have their own top-level domain (TLD). The .uk TLD was first used in 1985, and at that time a voluntary group called the "Naming Committee" managed the registration of .uk domain names. This consisted of members of LINX as full members (the main ISPs in the UK) and their resellers as guest members. By the mid-1990s, Internet Service Providers (ISPs) who registered domains for their customers were joined by a new breed of domain name specialists who had an entrepreneurial attitude to domain names. The Naming Committee operated a ruleset that forced all name registrations to 'exactly' match the name of the registering company and also limited all companies to a single domain name. The growth of the commercial internet soon brought these restrictions into close focus.
As demand for domain name registrations grew, it became clear that a voluntary group could no longer cope with the volume of registrations being requested. It also became clear that the existing ruleset was not sustainable and the Naming Committee was going to break down under the pressure of registrations.
Forming of Nominet UK
When it became clear that a new organization with a new approach was needed to manage the .uk TLD, the Naming Committee mailing list had mutated into a discussion group for domain name issues and many discussions about what type of corporation the Registry should be were held. Meanwhile, at UKERNA, Dr. Willie Black and John Carey were watching the situation and in 1996 John Carey wrote a proposed plan for a new organization to be called Nominet. This was distributed widely, and a meeting to discuss ways forward was held at a hotel at Heathrow Airport on 11 April 1996.
The options to set up as a profit-making company or a charity were rejected, and Nominet was established on 14 May 1996 as a private, not-for-profit membership company, limited by guarantee. Whilst the move was generally popular, there was strong resistance from some parts of the industry. Although formed with a board composed of Dr. Black (who became the first CEO of the new company), John Carey, and the four co-founders drawn from the internet industry, elections were held by the new membership, which resulted in the first elected board members overseeing the growth of the UK domain name industry. John Carey resigned before taking up his role, following a disagreement over the creation of police.uk.
Pre-Nominet domain names
From Nominet's inception on 1 July 1996 until 2002 domains registered pre-Nominet were provided free of charge. In 2002, as had been hinted at its inception in 1996, Nominet began a process over two years of migrating those domains onto Nominet standard terms and conditions and implementing charging. Those domains can be identified within the whois results as having a registration date of before August 1996.
Controversy
In 2011 The Independent published an article containing quotes about people trying to demutualize Nominet for their own benefit.
Failure of investments
Under the tenure of Russell Haworth, a former mergers-and-acquisitions specialist, Nominet invested heavily in autonomous vehicles, the Internet of things, and white-space spectrum management, investments which failed to deliver. Despite these outcomes, board salaries continued to rise.
Structural issues
Nominet's success brought with it several structural concerns. Over time, it built considerable cash reserves. In 1999, candidates stood for the board on a platform similar to 'carpetbagging' attempts with mutual building societies; this was defeated, following a financial report from Alex Bligh, one of the founders, covering the potential conflict between turning a profit to maintain sufficient financial reserves and the goal of maintaining long-term profit-and-loss neutrality.
Nominet has consistently increased the salaries of its employees, especially directors. Average salaries have increased from £28,542 in 2002 to £60,276 in 2014. In the same period, the highest-paid director went from £125,000 to £308,000.
Closure of Members' Forum
Speaking at Nominet's 2020 annual general meeting (AGM), the organization's CEO Russell Haworth shocked members by announcing he was shutting down its internal web forum – the only means of independent communication between members – effective immediately.
Voting irregularities
Nominet admitted it wrongly calculated election results for its board of directors in 2018 and 2019.
Domain names
In 2019, customers of 123-Reg and Namesco were invoiced for domain names that were reserved for free by Nominet.
.uk registry watchers noticed unexpected changes in ownership of various .uk names, including sunset.uk, waterfall.uk, pad.uk and trending.uk, all of which were sold by Fasthosts to one or more industry insiders in advance of the domains being released by Nominet rather than going through the proper public process.
Nominet introduction of .uk to compete with .co.uk and .org.uk has resulted in increased cost to UK brand owners and caused much confusion amongst registrants.
In July 2020 Nominet announced a new policy consultation for expiring .uk domain names, but it was mired in controversy as the proposals seemed not to be in the public's benefit.
2021 removal of directors
On 29 January 2021, an email request was made to Rory Kelly, the Nominet company secretary to provide a list of members of Nominet.
On 3 February 2021, Rory Kelly, company secretary responded to the request by printing out the Nominet members list on 575 sheets of A4 paper, omitting the contact email addresses of members, and posting the bundle to the registered company address of the original requester. In response, the recipient Krystal Hosting arranged for 575 trees to be planted "to offset Nominet's nonsense".
On 2 February 2021, over 140 members wrote to the board of directors via Rory Kelly the company secretary calling for a general meeting to propose two ordinary resolutions:
Remove Eleanor Bradley, Russell Haworth, Ben Hill, Jane Tozer and Mark Wood as directors before expiration of their respective periods of office and despite anything in any agreement between them and Nominet UK; and
Appoint Sir Michael Lyons and Axel Pawlik as directors.
Rory Kelly, the company secretary, stated that the number of votes each member was entitled to cast would not be released until the results had been published.
Following the EGM, the motion was passed, confirming the removal of the five board directors. Nominet stated that "Eleanor Bradley and Ben Hill remain in their executive posts."
Voting pledges
Nominet announced that they would not release the number of votes each member is entitled to cast, including to the members themselves, until after the vote had taken place. The voting rights of the members were subsequently released after the EGM.
In the lead up to the vote at the EGM a number of Nominet members publicly announced their position on the resolution to be tabled at the EGM:
Publicly for:
Tucows, the largest domain registrar, announced they would support the removal of the directors.
Namecheap
LINX, the UK's largest Internet peering point, following a unanimous board decision.
Krystal Hosting, the company whose owner started the campaign.
Publicly against:
Blacknight
Publicly abstained:
Google announced that it did not intend to vote on the motion.
Result
The Extraordinary General Meeting (EGM) was held on 22 March 2021. The motion to remove the five directors from the board was carried, with 52.74% of the votes cast for the motion and 47.26% against.
Five of the original Nominet founders (Richard Almeida, Willie Black, Alex Bligh, Keith Mitchell, and Nigel Titley) pledged their "commitment to support Sir Michael Lyons and Axel Pawlik if they are appointed to the board of Nominet UK, in whatever way we reasonably can with matters relating to the future of Nominet".
Response
By 31 March 2021, Nominet had made four internal appointments:
Rob Binns, as acting chair of the Board (a previous board member)
Eleanor Bradley, as interim CEO (a Nominet executive and former board member)
Rory Kelly, as board member (Nominet Company Secretary)
Adam Leach, as board member (Nominet Chief Information Officer)
Appointment of Kelly and Leach to the board had occurred on 22 March 2021, corresponding to the date of the EGM.
On 7 April 2021, Bradley, as interim CEO, stated that Nominet would begin to regularly publish the voting rights of members.
On 12 April 2021 Sir Michael Lyons and Alex Pawlik sent a joint letter to the acting Chair Rob Binns. The remnant board of Nominet subsequently announced: "After much careful consideration, the board has decided not to invite Sir Michael to be acting chair".
By 13 April 2021 Nominet had selected Russell Reynolds Associates to seek a replacement chair of Nominet on an initial three-year term.
By 15 April 2021, survey responses from 200 Nominet members that had supported the earlier EGM showed 97% support for calling a second EGM, with confidence in the board decreasing to 1.3 out of 10.
On 22 July 2021, Nominet stated that Andy Green CBE had been appointed new chair of the board, along with Eva Lindqvist as a new independent director, from 21 July 2021. Priorities would be reduction of costs and executive pay, re-building of trust with the membership, and restoration of Nominet's public benefit purpose and charitable proceeds.
2021 Annual General Meeting
Nominet's Annual General Meeting (AGM) was scheduled for 18 November 2021. Prior to the AGM, elections were held to appoint two new member-elected non-executive directors to the Nominet board. Public Benefit candidates received the highest votes: Simon Blackler was elected immediately with over 50 percent of first-preference votes, followed by Ashley of Tucows, who took 35% of the vote. Blackler and the Tucows candidate replaced outgoing directors James Bladel of GoDaddy (who had not stood for re-election) and David Thornton, who "defended the previous management's actions" and had received six percent of the votes. It was announced that BCS CEO Paul Fletcher would take over as Nominet CEO from February 2022.
Votes held at the 2021 AGM formally appointed Andy Green and Eva Lindqvist, both for three years, with over 94% of votes. Stephen Page (who "was opposed by the Public Benefit campaign") received 51.4% of weighted votes for an extension of one year as a director.
2022
Paul Fletcher was appointed as replacement CEO of Nominet starting in February 2022. In his first days Fletcher joined the board and held a meeting with members.
On 10 March 2022, Nominet announced it would be "not accepting registrations from registrars in Russia – we are suspending the relevant tags" and in doing so became the first previously neutral ccTLD in history to reposition itself as non-neutral.
Non-core activities
Nominet has also delivered the National Cyber Security Centre's Protective Domain Name Service (PDNS) since 2016, protecting the UK public sector's internet traffic.
A £4 million investment into registry services was announced in February 2020 alongside the acquisition of US-based cyber security company CyGlass.
Registry
Top-level domains
Nominet manages the registry of the following top-level domains:
.uk – top-level domain for the United Kingdom
.cymru – top-level domain for Wales
.wales – top-level domain for Wales
Second-level domains managed by Nominet
Nominet manages the following second-level domains:
.co.uk – unrestricted, intended for businesses
.org.uk – unrestricted, intended for non-profit organisations
.net.uk – reserved exclusively for UK internet service providers
.ltd.uk – reserved exclusively for UK limited liability companies; subdomain must correspond to the company's registered name
.plc.uk – reserved exclusively for UK public limited companies; subdomain must correspond to the company's registered name
.sch.uk – reserved exclusively for primary and secondary schools
.me.uk – unrestricted, intended for personal use
Second-level domains managed by other organisations
.mod.uk – operated by the Ministry of Defence
.mil.uk – operated by the Ministry of Defence
Quasi second-level domains
The following are widely used as second-level domains but are registered with Nominet as top-level domains:
.gov.uk was used as a second-level domain for UK government agencies until 2012 when gov.uk started functioning as a Nominet-registered independent domain.
.ac.uk is commonly used as a second-level domain for UK education and research establishments, but ac.uk is actually a Nominet-registered domain held by JANET.
See also
.uk (ccTLD)
.io (TLD)
Uniform Domain-Name Dispute-Resolution Policy
Nominet's ruling for Apple Computer in the dispute over the itunes.co.uk domain
Internet Provider Security
References
External links
Nominet UK web site
Knowthenet
Nominet Trust
Companies based in Oxford
Domain name registries
History of computing in the United Kingdom
Information technology organisations based in the United Kingdom
Internet governance organizations
1996 establishments in the United Kingdom
Internet in the United Kingdom
Science and technology in Oxfordshire | Nominet UK | Technology | 3,261 |
32,279,357 | https://en.wikipedia.org/wiki/TRE%20%28computing%29 | TRE is an open-source library for pattern matching in text, which works like a regular expression engine with the ability to do approximate string matching. It was developed by Ville Laurikari and is distributed under a 2-clause BSD-like license.
The library is written in C and provides functions for searching input text with regular expressions. The main difference from other regular expression engines is that TRE can match text fragments approximately, that is, allowing for the possibility that the text contains some number of typos.
Features
TRE uses extended regular expression syntax with the addition of "directions" that allow the preceding fragment to be matched approximately. Each such direction specifies how many typos are allowed for that fragment.
Approximate matching is performed in a way similar to Levenshtein distance, which means that three types of typos are recognized: insertion, deletion, and substitution of a character.
TRE allows the cost of each of the three typo types to be specified independently.
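The cost weighting can be illustrated with a minimal dynamic-programming sketch of a weighted Levenshtein distance; this shows the principle only and is not TRE's actual implementation (the function name and default costs are ours):

```python
# Minimal sketch of a cost-weighted Levenshtein distance over the three typo
# types (insertion, deletion, substitution); not TRE's actual implementation.
def weighted_edit_cost(source, target, cost_ins=1, cost_del=1, cost_subst=1):
    m, n = len(source), len(target)
    # dp[i][j] = cheapest way to turn source[:i] into target[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * cost_del
    for j in range(1, n + 1):
        dp[0][j] = j * cost_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            subst = 0 if source[i - 1] == target[j - 1] else cost_subst
            dp[i][j] = min(dp[i - 1][j] + cost_del,    # delete a character
                           dp[i][j - 1] + cost_ins,    # insert a character
                           dp[i - 1][j - 1] + subst)   # substitute or match
    return dp[m][n]

# "rogular" is one substitution away from "regular":
assert weighted_edit_cost("regular", "rogular") == 1
```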
The project comes with a command-line utility, a reimplementation of agrep.
Though approximate matching requires some syntax extension, when this feature is not used TRE works like most other regular expression matching engines. This means that
it implements ordinary regular expressions written for strict matching;
programmers familiar with POSIX-style regular expressions need little additional study to be able to use TRE.
Predictable time and memory consumption
The library's author states that the time spent on matching grows linearly with the length of the input text, while memory requirements are constant during matching and depend only on the pattern, not on the input.
Other
Other features common to most regular expression engines can be checked in regex engine comparison tables or in the list of TRE features on its web page.
Usage example
Approximate matching directions are specified in curly brackets and should be distinguishable from repetition quantifiers (if necessary by inserting a space after the opening bracket):
(regular){~1}\s+(expression){~2} would match variants of the phrase "regular expression" in which "regular" has no more than one typo and "expression" no more than two; as in ordinary regular expressions, "\s+" means one or more whitespace characters, so e.g. "rogular ekspression" would pass the test;
(expression){ 5i + 3d + 2s < 11} would match the word "expression" if the total cost of typos is less than 11, with insertion cost set to 5, deletion to 3 and substitution of a character to 2; for example, "exxpressiion" (two insertions, cost 5 + 5 = 10) would match.
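For illustration, approximate matching through TRE's Python binding (the `tre` module) might look like the sketch below; it follows the binding's documented example, but the exact parameter and attribute names should be checked against the installed version:

```python
# Sketch of approximate matching with TRE's Python binding ("tre" module);
# assumes the binding is installed, and follows its documented example API.
import tre

fz = tre.Fuzzyness(maxerr=3)            # allow at most three typos overall
pt = tre.compile("regular expression", tre.EXTENDED)

m = pt.search("a rogular ekspression engine", fz)
if m:
    print(m.group(0))                   # the approximately matched fragment
```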
Language bindings
Apart from C, TRE is usable through bindings for Perl, Python and Haskell. It is the default regular expression engine in R. However, a cross-platform project would need a separate interface for each target platform.
Disadvantages
Since other regular expression engines usually do not provide approximate matching, there are almost no competing implementations with which TRE can be compared. However, there are a few things which programmers may wish to see implemented in future releases:
a replacement mechanism for substituting matched text fragments (like in sed string processor and many modern implementations of regular expressions, including built into Perl or Java);
the option to use an approximate matching algorithm other than Levenshtein's for better assessment of typo cost (for example Soundex), or at least an improvement of the current algorithm to allow typos of the "swap" type (see Damerau–Levenshtein distance).
See also
Levenshtein automaton
Comparison of regular expression engines
Agrep
References
External links
TRE - The free and portable approximate regular expression matching library
Further reading
Computer libraries
Regular expressions
Software using the BSD license | TRE (computing) | Technology | 757 |
682,421 | https://en.wikipedia.org/wiki/Gal%C3%A1pagos%20%28novel%29 | Galápagos (1985) is the eleventh novel published by American author Kurt Vonnegut. Set in the Galápagos Islands after a global financial disaster, the novel questions the merit of the human brain from an evolutionary perspective. The title is both a reference to the islands on which part of the story plays out, and a tribute to Charles Darwin, on whose theory Vonnegut relies to reach his own conclusions. It was published by Delacorte Press.
Plot summary
Galápagos is the story of a small band of mismatched humans who are shipwrecked on the fictional island of Santa Rosalia in the Galápagos Islands after a global financial crisis cripples the world's economy. Shortly thereafter, a disease renders all humans on Earth infertile, with the exception of the people on Santa Rosalia, making them the last specimens of humankind. Over the next million years, their descendants, the only fertile humans left on the planet, eventually evolve into a furry species resembling sea lions: though possibly still able to walk upright (it is not explicitly mentioned, but it is stated that they occasionally catch land animals), they have a snout with teeth adapted for catching fish, a streamlined skull and flipper-like hands with rudimentary fingers (described as "nubbins").
The story's narrator is a spirit who has been watching over humans for the last million years. This particular ghost is the immortal spirit of Leon Trotsky Trout, son of Vonnegut's recurring character Kilgore Trout. Leon is a Vietnam War veteran deeply affected by the massacres of the war. He goes AWOL and settles in Sweden, where he works as a shipbuilder and dies during the construction of the ship the Bahía de Darwin. This ship is used for the "Nature Cruise of the Century": planned as a celebrity cruise, it was left in limbo by the economic downturn, and through a chain of unconnected events the ship ended up allowing humans to reach and survive in the Galápagos. A group of girls from a cannibal tribe living in the Amazon rainforest, the Kanka-bono, also end up on the ship, eventually having children with sperm obtained from the ship's captain.
The deceased Kilgore Trout makes four appearances in the novel, urging his son to enter the "blue tunnel" that leads to the afterlife. When Leon refuses for the fourth time, Kilgore pledges that he, and the blue tunnel, will not return for one million years, which leaves Leon to observe the slow process of evolution that transforms the humans into aquatic mammals. The process begins when a Japanese woman on the island, the granddaughter of a Hiroshima survivor, gives birth to a fur-covered daughter.
Trout maintains that all the sorrows of humankind were caused by "the only true villain in my story: the oversized human brain". Natural selection eliminates this problem, since the humans best fitted to Santa Rosalia were those who could swim best, which required a streamlined head, which in turn required a smaller brain.
Main characters
Leon Trout, dead narrator and son of Kilgore Trout
Hernando Cruz, first mate of the Bahía de Darwin
Mary Hepburn, an American widow who teaches at Ilium High School
the Kanka-bono girls, a group of young girls from a cannibal tribe living in the Amazon rainforest
Roy Hepburn, Mary's husband who died in 1985 from a brain tumor
Akiko Hiroguchi, the daughter of Hisako that will be born with fur covering her entire body
Hisako Hiroguchi, a teacher of ikebana and Zenji's pregnant wife
Zenji Hiroguchi, a Japanese computer genius who invented the voice translator Gokubi and its successor Mandarax
Bobby King, publicity man and organizer of the "Nature Cruise of the Century"
Andrew MacIntosh, an American financier and adventurer of great inherited wealth
Selena MacIntosh, Andrew's blind daughter, eighteen years old
Jesús Ortiz, a talented Inca waiter who looks up to wealthy and powerful people
Adolf von Kleist, captain of Bahía de Darwin who doesn't really know how to steer the ship
Siegfried von Kleist, brother of Adolf and carrier of Huntington's chorea who temporarily takes care of the reception at hotel El Dorado
James Wait, a 35-year-old American swindler
Pvt. Geraldo Delgado, an Ecuadorian soldier
Literary techniques
Form
The main story is told non-chronologically, within a framing story told by a narrator from a point a million years in the future, interspersed with flashbacks and commentary about the outcomes of future events in the main storyline. As a narrative device, an asterisk is placed in front of a character's name if they will die before sunset.
Quotations
The novel contains a large number of quotations from other authors. They are related to the story itself and are functionally inserted through Mandarax, a fictional voice translator that is also able to provide quotations from literature and history. The following authors are quoted (in order of their appearance in the book): Anne Frank, Alfred Tennyson, Rudyard Kipling, John Masefield, William Cullen Bryant, Ambrose Bierce, Lord Byron, Noble Claggett, John Greenleaf Whittier, Benjamin Franklin, John Heywood, Cesare Bonesana Beccaria, Bertolt Brecht, Saint John, Charles Dickens, Isaac Watts, William Shakespeare, Plato, Robert Browning, Jean de La Fontaine, François Rabelais, Patrick R. Chalmers, Michel de Montaigne, Joseph Conrad, George William Curtis, Samuel Butler, T. S. Eliot, A. E. Housman, Oscar Hammerstein II, Edgar Allan Poe, Charles E. Carryl, Samuel Johnson, Thomas Carlyle, Edward Lear, Henry David Thoreau, Sophocles, Robert Frost, and Charles Darwin.
Adaptations
In 2009, Audible.com produced an audio version of Galapagos, narrated by Jonathan Davis, as part of its Modern Vanguard line of audiobooks.
In 2014, artists Tucker Marder and Christian Scheider adapted "Galapagos" into a live theatrical performance at the new Parrish Art Museum in Watermill, N.Y. Endorsed by the Kurt Vonnegut Estate, the multi-media production featured 26 performers including Bob Balaban; live orchestral underscoring composed and conducted by Forrest Gray featuring Max Feldschuh on vibraphone and Ken Sacks on mbira; animal costumes by Isla Hansen; a three-story scenic design by Shelby Jackson; experimental video projections by James Bayard; and choreography by Matt Davies.
In 2019, Canadian band The PepTides released a ten-song collection titled "Galápagos Vol.1", inspired by the themes and characters in Galápagos.
See also
References
Vonnegut, Kurt. Galápagos. New York: Dell Publishing, 1999. .
1985 American novels
American post-apocalyptic novels
Evolution in popular culture
Galápagos Islands
Ghost narrator
Novels by Kurt Vonnegut
Novels set in Ecuador
Novels set on islands
Postmodern novels
Science fantasy novels
Speculative evolution | Galápagos (novel) | Biology | 1,462 |
15,581,418 | https://en.wikipedia.org/wiki/Aviation%20transponder%20interrogation%20modes | The aviation transponder interrogation modes are the standard formats of pulsed sequences from an interrogating Secondary Surveillance Radar (SSR) or similar Automatic Dependent Surveillance-Broadcast (ADS-B) system. The reply format is usually referred to as a "code" from a transponder, which is used to determine detailed information from a suitably equipped aircraft.
In its simplest form, a "Mode" or interrogation type is generally determined by the pulse spacing between two or more interrogation pulses. Various modes exist, ranging from Modes 1 to 5 for military use to Modes A, B, C and D and Mode S for civilian use.
Interrogation modes
Several different RF communication protocols have been standardized for aviation transponders:
Mode A and Mode C are implemented using the air traffic control radar beacon system as the physical layer, whereas Mode S is implemented as a standalone backwards-compatible protocol. ADS-B can operate using Mode S-ES or the Universal Access Transceiver as its transport layer.
Mode A
When the transponder receives an interrogation request, it broadcasts the configured transponder code (or "squawk code"). This is referred to as "Mode 3/A" or, more commonly, Mode A. A separate type of response called "Ident" can be initiated from the airplane by pressing a button on the transponder control panel.
Mode A with Mode C
A Mode A transponder code response can be augmented by a pressure altitude response, which is then referred to as Mode C operation. Pressure altitude is obtained from an altitude encoder, either a separate self-contained unit mounted in the aircraft or an integral part of the transponder. The altitude information is passed to the transponder using a modified form of the Gray code known as a Gillham code.
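For illustration, the underlying idea of Gray coding, where successive values differ in exactly one bit, can be sketched with the standard binary-reflected Gray code; the real Gillham format adds its own bit ordering and mixed-radix layout on top of this, so the sketch shows the principle only, not a Gillham encoder:

```python
# Standard binary-reflected Gray code, the idea underlying the Gillham
# altitude code (Gillham itself uses a modified bit layout).
def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:        # fold the Gray bits back down to plain binary
        n ^= g
        g >>= 1
    return n

# Successive values differ in exactly one bit, so a transiently misaligned
# encoder can never report a wildly wrong altitude step:
print([f"{gray_encode(i):03b}" for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```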
Mode A and C responses are used to help air traffic controllers identify a particular aircraft's position and altitude on a radar screen, in order to maintain separation.
Mode S
Another mode, Mode S (Select), is designed to help avoid over-interrogation of the transponder (by the many radars in busy areas) and to allow automatic collision avoidance. Mode S transponders are compatible with Mode A and Mode C Secondary Surveillance Radar (SSR) systems. This is the type of transponder that is used for TCAS or ACAS II (Airborne Collision Avoidance System) functions, and is required to implement the extended squitter broadcast, one means of participating in ADS-B systems. A TCAS-equipped aircraft must have a Mode S transponder, but not all Mode S transponders include TCAS. Likewise, a Mode S transponder is required to implement 1090ES extended squitter ADS-B Out, but there are other ways to implement ADS-B Out (in the U.S. and China). The format of Mode S messages is documented in ICAO Doc 9688, Manual on Mode S Specific Services.
Mode S features
Upon interrogation, Mode S transponders transmit information about the aircraft to the SSR system, to TCAS receivers on board aircraft and to the ADS-B SSR system. This information includes the call sign of the aircraft and/or the aircraft's permanent ICAO 24-bit address (which is represented for human interface purposes as six hexadecimal characters.) One of the hidden features of Mode S transponders is that they are backwards compatible; an aircraft equipped with a Mode S transponder can still be used to send replies to Mode A or C interrogations. This feature can be activated by a specific type of interrogation sequence called inter-mode.
ICAO 24-bit address
Mode S equipped aircraft are assigned a unique ICAO 24-bit address or (informally) Mode-S "hex code" upon national registration and this address becomes a part of the aircraft's Certificate of Registration. Normally, the address is never changed, however, the transponders are reprogrammable and, occasionally, are moved from one aircraft to another (presumably for operational or cost purposes), either by maintenance or by changing the appropriate entry in the aircraft's Flight management system.
There are 16,777,214 (2^24 − 2) unique ICAO 24-bit addresses (hex codes) available. The ICAO 24-bit address can be represented in three digital formats: hexadecimal, octal, and binary. These addresses are used to provide a unique identity normally allocated to an individual aircraft or registration.
As an example, following is the ICAO 24-bit address assigned to the Shuttle Carrier Aircraft with the registration N905NA:
Hexadecimal: AC82EC
Octal: 53101354
Binary: 101011001000001011101100 (Note: spaces are occasionally added for visual clarity, e.g. 1010 1100 1000 0010 1110 1100 grouped in fours for hexadecimal, or 101 011 001 000 001 011 101 100 grouped in threes for octal)
Decimal: 11305708
These are all the same 24-bit address of the Shuttle Carrier Aircraft, represented in different numeral systems (see above).
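These representations can be reproduced in a few lines; the address below is the documented Shuttle Carrier Aircraft example from above:

```python
# One ICAO 24-bit address rendered in the numeral systems listed above.
address = int("AC82EC", 16)

print(f"Hexadecimal: {address:06X}")   # AC82EC
print(f"Octal:       {address:08o}")   # 53101354
print(f"Binary:      {address:024b}")  # 101011001000001011101100
print(f"Decimal:     {address}")       # 11305708
```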
Issues with Mode S transponders
An issue with Mode S transponders arises when pilots enter the wrong flight identity code into the Mode S transponder. In this case, the capabilities of ACAS II and Mode S SSR can be degraded.
Extended squitter
In 2009 the ICAO published an "extended" form of Mode S with more message formats to use with ADS-B; it was further refined in 2012. Countries implementing ADS-B can require the use of either the extended squitter mode of a suitably-equipped Mode S transponder, or the UAT transponder on 978 MHz.
Use in meteorology
Mode-S data can contain the aircraft's movement vectors relative to the Earth and to the atmosphere. The difference between these two vectors is the wind acting on the aircraft. Methods for deriving winds (and temperatures, from the Mach number and true airspeed) were developed simultaneously by Siebren de Haan of KNMI and Edmund Stone of the Met Office. Over the UK the number of aircraft observations has increased from approximately 7,500 per day from AMDAR to over 10 million per day. The Met Office, together with KNMI and Flightradar24, is actively developing an expanded capability including data from every continent other than Antarctica.
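As a sketch of the idea only (not the KNMI or Met Office processing chain; names and numbers are illustrative), the wind vector is the ground-track vector minus the airspeed vector:

```python
import math

# Wind estimated from two aircraft vectors: wind = ground vector - air vector.
# Inputs: ground speed/track and true airspeed/heading, angles in degrees.
def wind_from_vectors(gs_kt, track_deg, tas_kt, heading_deg):
    gx = gs_kt * math.sin(math.radians(track_deg))      # ground vector (E, N)
    gy = gs_kt * math.cos(math.radians(track_deg))
    ax = tas_kt * math.sin(math.radians(heading_deg))   # air vector (E, N)
    ay = tas_kt * math.cos(math.radians(heading_deg))
    wx, wy = gx - ax, gy - ay                           # wind = ground - air
    return math.hypot(wx, wy), math.degrees(math.atan2(wx, wy)) % 360

speed, direction = wind_from_vectors(450, 90, 430, 85)
print(f"wind: {speed:.1f} kt, blowing towards {direction:.0f} degrees")
```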
See also
Identification friend or foe
References
Radar
Microwave technology
Measuring instruments
Navigational equipment
Air traffic control | Aviation transponder interrogation modes | Technology,Engineering | 1,318 |
53,212,411 | https://en.wikipedia.org/wiki/Newton%20for%20Beginners | Newton for Beginners, republished as Introducing Newton, is a 1993 graphic study guide to the Isaac Newton and classical physics written and illustrated by William Rankin. The volume, according to the publisher's website, "explains the extraordinary ideas of a man who [...] single-handedly made enormous advances in mathematics, mechanics and optics," and, "was also a secret heretic, a mystic and an alchemist."
"William Rankin," Public Understanding of Science reviewer Patrick Fullick confirms, "sets out to illuminate the man whose work laid the foundations of the physics of the last 350 years, and to place him and his work in the context of the times in which he lived." New Scientist reviewer Roy Herbert adds that, "alongside theories of the Universe from ancient times, the book explains those originating since Isaac Newton, so placing him deftly in his scientific context."
Publication history
This volume was originally published in the UK by Icon Books in 1993 as Newton for Beginners, and subsequently republished with different covers in different editions.
Reception
"This book shares the general characteristics of the Beginners series with a large number of line drawings and cartoons with associated text and many asides," states Patrick Fullick, writing in Public Understanding of Science, "for some readers the asides may seem idiosyncratic or even annoying." "Some may dislike the humour and bad puns that abound in this work," confirms Bill Palmer, writing in the Journal of the Science Teacher Association of the Northern Territory, "but I suspect that those starting the study of Newton's life and work will appreciate this attempt to facilitate reading."
"The book is well-grounded in recent historiography," and, "Rankin is clearly sympathetic towards his subject," states Fullick, "but inevitably Newton still comes over as one whose intellectual vanity was at times apt to overcome his self-control." Roy Herbert, writing in New Scientist, confirms that despite being a colossus, "Many of his contemporaries saw him as something else and these bit players provide a background of 17th-century backbiting and squabbling (Newton took part) that is always fascinating."
"Newton's story is told accurately and entertainingly," concludes Palmer. "It combines drawings with text and pulls off the difficult trick of imparting serious information while keeping the reader amused with jokes and irreverent asides," adds Herbert, "it is a technique that has strong appeal and so, even if you have misgivings about it, you are lured along the trail." "The communication of the idea that the great scientists of the past had their hopes and fears and that they had concerns other than the purely academic or professional is probably done as well pictorially as by other means," confirms Fullick. "Easily swallowed and," concludes Herbert, "retainable."
References
Non-fiction graphic novels
Biographical comics
Comics based on real people
Popular physics books
Educational comics
1993 in comics
Cultural depictions of Isaac Newton
Comics set in the 17th century
Comics set in the United Kingdom
Books about the history of physics | Newton for Beginners | Astronomy | 645 |
8,879,537 | https://en.wikipedia.org/wiki/Hybrid%20Scheduling | Hybrid Scheduling is a class of scheduling mechanisms that mix different scheduling criteria or disciplines in one algorithm. For example, scheduling uplink and downlink traffic in a WLAN (Wireless Local Area Network, such as IEEE 802.11e) using a single discipline or framework is an instance of hybrid scheduling. Other examples include a scheduling scheme that can provide differentiated and integrated (guaranteed) services in one discipline. Another example could be scheduling of node communications where centralized communications and distributed communications coexist. Further examples of such schedulers are found in the following articles:
References
Y. Pourmohammadi Fallah, H. Alnuweiri, "Hybrid Polling and Contention Access Scheduling in IEEE 802.11e WLANs", Journal of Parallel and Distributed Computing, Elsevier, Vol. 67, Issue 2, Feb. 2007, pp. 242–256.
Computer networking | Hybrid Scheduling | Technology,Engineering | 179 |
19,190,688 | https://en.wikipedia.org/wiki/Ehud%20Tenenbaum | Ehud "Udi" Tenenbaum (; born August 29, 1979), also known as The Analyzer, is an Israeli hacker.
Biography
Tenenbaum was born in Hod HaSharon in 1979. He became famous in 1998 when he was arrested for hacking computers belonging to NASA, the Pentagon, the U.S. Air Force, the U.S. Navy, the Knesset, and MIT, among other high-profile organizations. He also hacked into the computers of Palestinian groups and claimed to have destroyed the website of Hamas. To do this, Tenenbaum installed packet-analyzer and Trojan-horse software on some of the hacked servers.
The then-US Deputy Defense Secretary John Hamre stated that the attack was "the most organized and systematic attack to date" on US military systems. The military had thought that they were witnessing sophisticated Iraqi 'information warfare'. In an effort to stop the attack, the United States government assembled agents from the FBI, the Air Force Office of Special Investigations, NASA, the US Department of Justice, the Defense Information Systems Agency, the NSA, and the CIA. The government was so worried that the warning and briefings went all the way up to the President of the United States. The investigation, code-named "Solar Sunrise," eventually snared two California teenagers. After their arrest, a subsequent probe led US investigators to Tenenbaum, who was arrested after Israeli police were given evidence of Tenenbaum's activities. Later, the FBI sent agents to Israel to question Tenenbaum.
Before he was sentenced, Tenenbaum served briefly in the Israel Defense Forces, but was released soon thereafter after he was involved in a traffic collision.
In 2001, Tenenbaum pleaded guilty, while stating that he had not been attempting to infiltrate the computer systems to obtain secrets but rather to prove that the systems were flawed. Tenenbaum was sentenced to a year and a half in prison, of which he served only 8 months following the "Deri Law". After the attack, the FBI made a short 18-minute training video called Solar Sunrise: Dawn of a New Threat, which was sold as part of a hacker defense course that was discontinued in September 2004.
In 2003, after being freed from prison, Tenenbaum founded his own information security company called "2XS".
In September 2008, following an investigation by Canadian police and the US Secret Service, Tenenbaum and three accomplices were arrested in Montreal. Tenenbaum was charged with six counts of credit card fraud, totalling approximately US$1.5 million. U.S. investigators suspected Tenenbaum of being part of a scam, in which the hackers penetrated financial institutions around the world to steal credit card numbers. They then sold these numbers to other people, who used them to perpetrate massive credit card fraud. He was later extradited to the United States to stand trial, and was in the custody of the US Marshals for more than a year. In August 2010, he was released on bond after agreeing to plead guilty.
In July 2012, after Tenenbaum accepted a plea bargain which may have involved cooperation in the investigation, New York district judge Edward Korman sentenced Tenenbaum to the time already served in prison. Tenenbaum was also ordered to pay $503,000 and was given three years' probation.
References
External links
BBC radio show about Solar Sunrise
Youtube showing of the FBI movie, Solar Sunrise
‘The Analyzer’ Pleads Guilty in $10 Million Bank-Hacking Case - published in Wired on August 25, 2009
1979 births
Living people
Computer programmers
Hackers
People from Hod HaSharon
Cybercriminals
Israeli criminals | Ehud Tenenbaum | Technology | 757 |
53,394,249 | https://en.wikipedia.org/wiki/The%20Compatibility%20Gene | The Compatibility Gene is a 2013 book about the discovery of the mechanism of compatibility in the human immune system by the English professor of immunology, Daniel M. Davis. It describes the history of immunology with the discovery of the principle of graft rejection by Peter Medawar in the 1950s, and the way the body distinguishes self from not-self via natural killer cells. The compatibility mechanism contributes also to the success of pregnancy by helping the placenta to form, and may play a role in mate selection.
Context
Author
Daniel M. Davis has a doctorate in physics from Strathclyde University. He was professor of molecular immunology at Imperial College London and director of research at the University of Manchester's collaborative centre for inflammation research. Davis is recognised as an expert in the field for his research on the immune synapse, membrane nanotubes, and natural killer cells.
Subject
The book's context is the history of immunology, from the earliest questioning about why people become ill and why some may recover, to the 19th century pioneers who demonstrated that bacteria caused many diseases, to the 20th century in which, slowly at first but at an accelerating pace, biologists built an understanding of the genetic basis of variation and natural selection, and alongside that the foundations of scientific medicine, including immunology. As Steven Pinker observes, few stories of scientific endeavour have never been told: "This is one of them. Ostensibly about a set of genes that we all have and need, this book is really about the men and women who discovered them and worked out what they do. It's about brilliant insights and lucky guesses; the glory of being proved right and the paralysing fear of getting it wrong; the passion for cures and the lust for Nobels. It's a search for the essence of scientific greatness by a scientist who is headed that way himself."
Book
Contents
The book is in three parts. In part 1, Davis describes the history of research into biological compatibility, starting with the story of Peter Medawar's life and discoveries in graft rejection. He tours the history of medicine from Hippocrates to the 19th century pioneers Louis Pasteur and Robert Koch, and Frank Macfarlane Burnet's concept of the immune system's ability to discriminate self from non-self. He explains how advances in understanding of immunity, from Karl Landsteiner's discovery of the ABO blood group system onwards, permit organ transplants to take place. The compatibility genes are named as three class I human leucocyte antigen (HLA) genes (A, B, and C) and three class II (DP, DQ, and DR), each with numerous versions (alleles).
Lastly, Davis tells the human side of the story of the discovery of killer T-cells. Alan Townsend found that killer T-cells destroyed cells that carried an HLA protein and small fragments of viral protein. Those small peptides were all the evidence the T-cells needed to decide that a cell was diseased.
In part 2, Davis describes how genetic differences between people, like having the allele for Huntington's disease, can be small but decisive. An HLA protein variant, B*27, is associated with a serious inherited disease, ankylosing spondylitis, but also protects against AIDS. Other variants protect against other diseases. Perhaps the polymorphisms in HLA, the many forms each HLA gene can take, are maintained by natural selection for competing factors. He explains that variations in HLA genes may predict which drugs will be beneficial for individuals, implying a new era of personalised medicine. He tells the story of how Klas Kärre came up with the concept of the missing self, a sign (by the absence of an HLA protein) that a cell was diseased, and should be killed by a natural killer cell.
In part 3, Davis describes the famous experiment that called for female partners to sniff boxes containing their male partners' T-shirts, which they had worn for two days. There was a slight association between finding the smell sexy and the two partners having different compatibility genes. It could possibly indicate sexual selection for outbreeding, at least in the HLA system. He explains what is known of the role of compatibility genes in the brain. He tells the story of how the variable genes of the immune system affect the success of pregnancy. Far from the baby's HLA proteins somehow being tolerated by the mother (unlike anyone else's), the strong reaction against the baby's antigens helps to drive proper development of the placenta, in particular the growth of chorionic villi that ensure efficient transfer (for instance of oxygen) between mother and baby. Davis concludes the book by telling a story of genetic compatibility between his wife and himself. He finds himself wondering whether all women should have found him exceptionally attractive, at least when he was younger. He observes that on the contrary there is no hierarchy in HLA: some variants are good in one situation, and bad in another.
Publication
The book was first published in the UK by Allen Lane (hardback) in 2013. Paperback editions were brought out by Penguin Books in Britain, and by Oxford University Press in America, both in 2014. An Italian translation was published by Bollati Boringhieri in Turin in 2016.
Reception
The Compatibility Gene has been well received by critics and scientists.
Mark Viney, reviewing the book in the New Scientist, comments that Davis covers human compatibility genes well, but that he should have gone into more detail on the different systems in other organisms.
The science writer Peter Forbes, writing in The Guardian, notes that when Watson and Crick cracked the genetic code in 1953, it seemed that medicine would instantly profit: but half a century went by before the genome was decoded, and 98% of it seemed at first glance to be junk DNA. Now its complexity is starting to be understood, one function at a time. One specialised area is the immune system, with its own ultra-variable set of proteins. They are not only complicated, but have many functions, in immunity, sexual attraction (perhaps), pregnancy, and brain function. Unsurprisingly, Forbes observes, this makes immunology, and its popularisation, "extremely difficult". Davis "sugars the pill" by choosing to go into the researchers' lives and struggles in great detail. Forbes notes that Davis does not mention that most of the genetic differences between humans and chimpanzees are to do with the immune system and brain development: perhaps (he suggests) these are connected.
Nicola Davis, reviewing the book in The Times, writes that Davis "weaves a warm biographical thread through his tale of scientific discovery, revealing the drive and passion of those in the vanguard of research." The tale of the pioneers such as Medawar is "fairly familiar but Davis's readable narrative allows them to be seen afresh". She finds the account more challenging as it approaches more recent discoveries, but with "plenty of rewarding moments".
Emily Banham, reviewing the book for Nature, notes that compatibility genes lie at the heart of our immune systems, playing a part in the success of skin grafts, pregnancy, and more.
The biologist Rebecca Nesbit, reviewing The Compatibility Gene for The Biologist, writes that Davis shares many stories of dedicated scientists, brought together by "a small cluster of 'compatibility genes' which play a large role in how we react to disease, and are central to how our immune systems work." She notes that the book is as much about the people as the discoveries, but these are made worthwhile by the medical advances they keep producing, for example with possibilities for personalised medicine, as when people with one particular compatibility gene react adversely to an AIDS drug. She observes that all the same, he ends with the scientist's favourite refrain: "more research needed".
References
External links
Website
2013 non-fiction books
Popular science books
Immunology
Allen Lane (imprint) books | The Compatibility Gene | Biology | 1,666 |
46,846,453 | https://en.wikipedia.org/wiki/Non-linear%20preferential%20attachment | In network science, preferential attachment means that nodes of a network tend to connect to those nodes which have more links. If the network is growing and new nodes tend to connect to existing ones with linear probability in the degree of the existing nodes then preferential attachment leads to a scale-free network. If this probability is sub-linear then the network’s degree distribution is stretched exponential and hubs are much smaller than in a scale-free network. If this probability is super-linear then almost all nodes are connected to a few hubs. According to Kunegis, Blattner, and Moser several online networks follow a non-linear preferential attachment model. Communication networks and online contact networks are sub-linear while interaction networks are super-linear. The co-author network among scientists also shows the signs of sub-linear preferential attachment.
Types of preferential attachment
For simplicity it can be assumed that the probability with which a new node connects to an existing one follows a power function of the existing node's degree k:

Π(k) ∝ k^α,

where α > 0. This is a good approximation for many real networks, such as the Internet, the citation network or the actor network. If α = 1 then the preferential attachment is linear. If α < 1 then it is sub-linear, while if α > 1 then it is super-linear.
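As an illustration of this growth rule, a minimal simulation sketch (function and variable names are ours, and the model is deliberately simplified to one link per new node):

```python
import random

# Minimal sketch of a growing network where a new node attaches to an
# existing node with probability proportional to k^alpha; alpha = 1
# reproduces linear (Barabási–Albert-style) preferential attachment.
def grow_network(n_nodes, alpha):
    degree = [1, 1]                    # start from two connected nodes
    for _ in range(2, n_nodes):
        weights = [k ** alpha for k in degree]
        target = random.choices(range(len(degree)), weights=weights)[0]
        degree[target] += 1
        degree.append(1)               # each new node arrives with one link
    return degree

for alpha in (0.5, 1.0, 1.5):          # sub-linear, linear, super-linear
    deg = grow_network(20000, alpha)
    print(f"alpha={alpha}: largest hub degree = {max(deg)}")
```

Running it shows the qualitative regimes described below: the largest hub stays small for α < 1 and grows dramatically for α > 1.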
In measuring preferential attachment from real networks, the above log-linear functional form k^α can be relaxed to a free-form function, i.e. Π(k) can be measured for each k without any assumptions on the functional form of Π(k). This is believed to be more flexible, and allows the discovery of non-log-linearity of preferential attachment in real networks.
Sub-linear preferential attachment
In this case the new nodes still tend to connect to the nodes with higher degree, but this effect is smaller than in the case of linear preferential attachment. There are fewer hubs and their size is also smaller than in a scale-free network. The degree of the largest hub depends logarithmically on the number of nodes N,

k_max ∝ (ln N)^(1/(1−α)),

so it is smaller than the polynomial dependence found in scale-free networks.
Super-linear preferential attachment
If α > 1 then a few nodes tend to connect to every other node in the network. For α > 2 this process happens even more extremely: the number of connections between the other nodes remains finite in the limit as N goes to infinity, so the degree of the largest hub is proportional to the system size,

k_max ∝ N.
References
Networks
Network theory | Non-linear preferential attachment | Mathematics | 509 |
70,189,737 | https://en.wikipedia.org/wiki/Blank%20value | A blank value in analytical chemistry is a measurement of a blank. The reading does not originate from a sample, but the matrix effects, reagents and other residues. These contribute to the sample value in the analytical measurement and therefore have to be subtracted.
The limit of blank is defined by the Clinical And Laboratory Standards Institute as the highest apparent analyte concentration expected to be found when replicates of a sample containing no analyte are tested.
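Under this definition, the limit of blank is commonly estimated from replicate blank measurements as LoB = mean(blank) + 1.645 × SD(blank), assuming approximately Gaussian blank readings; a minimal sketch with hypothetical numbers:

```python
import statistics

# Limit of blank from replicate blank readings (hypothetical values):
# LoB = mean(blank) + 1.645 * SD(blank), assuming Gaussian blank noise.
blank_readings = [0.12, 0.09, 0.15, 0.11, 0.10, 0.13]

lob = statistics.mean(blank_readings) + 1.645 * statistics.stdev(blank_readings)
print(f"Limit of blank: {lob:.3f}")

# Blank correction: the blank value is subtracted from the sample reading.
sample = 2.47
print(f"Blank-corrected result: {sample - statistics.mean(blank_readings):.3f}")
```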
See also
Blank (solution)
References
Analytical chemistry | Blank value | Chemistry | 100 |
1,436,198 | https://en.wikipedia.org/wiki/Land%20degradation | Land degradation is a process where land becomes less healthy and productive due to a combination of human activities or natural conditions. The causes for land degradation are numerous and complex. Human activities are often the main cause, such as unsustainable land management practices. Natural hazards are excluded as a cause; however human activities can indirectly affect phenomena such as floods and wildfires.
One of the impacts of land degradation is that it can diminish the natural capacity of the land to store and filter water leading to water scarcity. Human-induced land degradation and water scarcity are increasing the levels of risk for agricultural production and ecosystem services.
The United Nations estimate that about 30% of land is degraded worldwide, and about 3.2 billion people reside in these degrading areas, giving a high rate of environmental pollution. Land degradation reduces agricultural productivity, leads to biodiversity loss, and can reduce food security as well as water security. It was estimated in 2007 that up to 40% of the world's agricultural land is seriously degraded, with the United Nations estimating that the global economy could lose $23 trillion by 2050 through degradation.
Definition
As per the Millennium Ecosystem Assessment of 2005, land degradation is defined as "the reduction or loss of the biological or economic productivity of drylands". A similar definition states that land degradation is the "degradation, impoverishment and long-term loss of ecosystem services".
It is viewed as any change or disturbance to the land perceived to be deleterious or undesirable.
Scale
According to the Special Report on Climate Change and Land of the Intergovernmental Panel on Climate Change in 2019: "About a quarter of the Earth's ice-free land area is subject to human-induced degradation (medium confidence). Soil erosion from agricultural fields is estimated to be currently 11 to 20 times (no-tillage) to more than 100 times (conventional tillage) higher than the soil formation rate (medium confidence)."
About 12 million hectares of productive land—which roughly equals the size of Greece—is degraded every year. This happens because people exploit the land without protecting it.
Estimates from 2021 say that two thirds of Africa's productive land area are severely affected by land degradation.
Types
In addition to the usual types of land degradation that have been known for centuries (water, wind and mechanical erosion, physical, chemical and biological degradation), four other types have emerged in the last 50 years:
pollution, often chemical, due to agricultural, industrial, mining or commercial activities;
loss of arable land due to urban construction, road building, land conversion, agricultural expansion, etc.;
artificial radioactivity, sometimes accidental;
land-use constraints associated with armed conflicts.
Overall, more than 36 types of land degradation can be assessed. All are induced or aggravated by human activities, e.g. soil erosion, soil contamination, soil acidification, sheet erosion, silting, aridification, salinization, urbanization, etc.
A problem with defining land degradation is that what one group of people might view as degradation, others might view as a benefit or opportunity. For example, planting crops at a location with heavy rainfall and steep slopes would create scientific and environmental concern regarding the risk of soil erosion by water, yet farmers could view the location as a favourable one for high crop yields.
Causes
Land degradation is mainly derived by numerous, complex, and interrelated anthropogenic and/or natural proximate and underlying causes. For example, in Ethiopia the country has been affected by chronic and ongoing land degradation processes and forms. The major proximate drivers are biophysical factors and unsustainable land management practices, while the underlying drivers are social, economic, and institutional factors.
Land degradation is a global problem largely related to the agricultural sector, general deforestation and climate change. Causes include:
Land clearance, such as clearcutting, overlogging and deforestation
Agricultural activities such as:
Activities that lead to depletion of soil nutrients through poor farming practices such as exposure of naked soil after crop harvesting;
Monocultures which destabilize the local ecosystem;
Poor livestock farming practices such as overgrazing (the grazing of natural pastures at stocking intensities above the livestock carrying capacity);
Inappropriate irrigation
Climate change because it can "exacerbate land degradation, particularly in low-lying coastal areas, river deltas, drylands and in permafrost areas"
High population density is not always related to land degradation. Rather, it is the practices of the human population that can cause a landscape to become degraded.
Severe land degradation affects a significant portion of the Earth's arable lands, decreasing the wealth and economic development of nations. As the land resource base becomes less productive, food security is compromised and competition for dwindling resources increases, the seeds of famine and potential conflict are sown.
Climate change and land degradation
According to the Special Report on Climate Change and Land of the Intergovernmental Panel on Climate Change (IPCC) climate change is one of the causes of land degradation. The report state that: "Climate change exacerbates land degradation, particularly in low-lying coastal areas, river deltas, drylands and in permafrost areas (high confidence). Over the period 1961–2013, the annual area of drylands in drought has increased, on average by slightly more than 1% per year, with large inter-annual variability. In 2015, about 500 (380–620) million people lived within areas which experienced desertification between the year 1980s and 2000s. The highest numbers of people affected are in South and East Asia, the circum Sahara region including North Africa, and the Middle East including the Arabian Peninsula (low confidence). Other dryland regions have also experienced desertification. People living in already degraded or desertified areas are increasingly negatively affected by climate change (high confidence)." Additionally, it is reported that 74% of the poor are directly affected by land degradation globally.
Significant land degradation from seawater inundation, particularly in river deltas and on low-lying islands, is a potential hazard that was identified in a 2007 IPCC report.
As a result of sea-level rise from climate change, salinity levels can reach levels where agriculture becomes impossible in very low-lying areas.
The European Investment Bank agreed to invest up to $45 million in the Land Degradation Neutrality Fund (LDN Fund). Launched at UNCCD COP 13 in 2017, the LDN Fund invests in projects that generate environmental benefits, socio-economic benefits, and financial returns for investors. The Fund was initially capitalized at US$100 million and is expected to grow to US$300 million.
According to the 2022 IPCC report, land degradation is responding ever more directly to climate change, with all types of erosion and declines in soil organic matter (SOM) increasing. Further degradation pressures come from human-managed ecosystems, including croplands and pastures.
Impacts
Land degradation takes many forms and affects water and land resources. It can diminish the natural capacity of the land to store and filter water leading to water scarcity.
The results of land degradation are significant and complex. They include lower crop yields, less diverse ecosystems, more vulnerability to natural disasters like floods and droughts, people losing their homes, less food available, and economic problems. Degraded land also releases greenhouse gases, making climate change worse.
Further possible impacts include:
A temporary or permanent decline in the productive capacity of the land. This can be seen through a loss of biomass, a loss of actual productivity or in potential productivity, or a loss or change in vegetative cover and soil nutrients.
Loss of biodiversity: A loss of range of species or ecosystem complexity as a decline in the environmental quality.
Increased vulnerability of the environment or people to destruction or crisis.
Sensitivity and resilience
Sensitivity and resilience are measures of the vulnerability of a landscape to degradation. These two factors combine to explain the degree of vulnerability. Sensitivity is the degree to which a land system undergoes change due to natural forces, human intervention or a combination of both. Resilience is the ability of a landscape to absorb change, without significantly altering the relationship between the relative importance and numbers of individuals and species that compose the community. It also refers to the ability of the region to return to its original state after being changed in some way. The resilience of a landscape can be increased or decreased through human interaction based upon different methods of land-use management. Land that is degraded becomes less resilient than undegraded land, which can lead to even further degradation through shocks to the landscape.
Prevention
Sustainable land management
Actions to halt land degradation can be broadly classified as prevention, mitigation, and restoration interventions.
Sustainable land management has been proven in reversing land degradation. It also ensures water security by increasing soil moisture availability, decreasing surface runoff, decreasing soil erosion, leading to an increased infiltration, and decreased flood discharge.
The United Nations Sustainable Development Goal 15 has a target to restore degraded land and soil and achieve a land degradation-neutral world by 2030. The full title of Target 15.3 is: "By 2030, combat desertification, restore degraded land and soil, including land affected by desertification, drought and floods, and strive to achieve a land degradation-neutral world."
Public awareness and education
Increasing public awareness about the importance of land conservation, sustainable land management, and the consequences of land degradation is vital for fostering behavioral change and mobilizing support for action. Education, outreach campaigns, and knowledge-sharing platforms can empower individuals, communities, and stakeholders to adopt more sustainable practices and become stewards of the land.
See also
Environmental impact of irrigation
Land improvement
Land reclamation
Sustainable agriculture
Economics of Land Degradation Initiative
Population growth
Tillage erosion
References
Environmental issues with soil
Physical geography
Human impact on the environment | Land degradation | Environmental_science | 2,055 |
620,362 | https://en.wikipedia.org/wiki/Cointegrate | A cointegrate is the intermediate molecule between donor DNA and target DNA covalently bind during the formation of a Holliday junction. Transposons elements are DNA sequences that can change its position within the genome, sometimes creating reversing mutations. A number of bacterial transposons, especially those related to Tn3 Tn552, encodes two recombinases that participate in their transposition to other DNA. The initial step that mediated this process involve the formation of cointegrate. The cointegrate is formed between the transposon containing donor DNA and the target molecule. In this transpositional intermediate, the donor and target DNAs are joined together by copies of the duplicated transposon, one copy occurring at each donor–target junction.
References
DNA | Cointegrate | Chemistry,Biology | 160 |
2,053,537 | https://en.wikipedia.org/wiki/Cryogenic%20processor | A cryogenic processor is a device engineered to reduce the temperature of an object to cryogenic levels, typically around −300°F (−184.44°C), at a moderate rate in order to prevent thermal shock to the components being treated. The inception of commercial cryogenic processors dates back to the late 1960s, pioneered by Ed Busch. The development of programmable microprocessor controls allowed machines to follow temperature profiles that increased the effectiveness of the process. Certain manufacturers integrate home computers into cryogenic processors to program the temperature profiles.
Before programmable controls were added to control cryogenic processors, the treatment process of an object was done manually by immersing the object in liquid nitrogen. This method often induced thermal shock, leading to structural cracks within the object. Contemporary cryogenic processors monitor temperature fluctuations and modulate the liquid nitrogen input to ensure that only fractional changes in temperature occur over a specific period of time. These temperature readings and adjustments are synthesized into profiles that are used to repeat the process when treating similarly grouped objects.
The standard processing cycle for contemporary cryogenic processors spans a three-day period, which includes 24 hours to reach the optimal minimum temperature for the product, 24 hours to hold the product at the minimum temperature, and 24 hours to return the product to room temperature. Certain items necessitate post-cryogenic heating in an oven to achieve higher temperatures. While some processors can provide both the negative and positive extreme temperatures, in some instances, distinct apparatuses like a cryogenic processor and a specialized oven may yield superior outcomes, contingent on the application.
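The cycle can be pictured as a simple ramp/hold/ramp setpoint profile; the sketch below is illustrative only, not a manufacturer's profile:

```python
# Illustrative three-day cryogenic profile: 24 h descent to the minimum,
# 24 h hold, 24 h return to room temperature (hourly setpoints, degrees F).
ROOM_F, MIN_F, HOURS = 70.0, -300.0, 24

def profile():
    down = [ROOM_F + (MIN_F - ROOM_F) * h / HOURS for h in range(HOURS + 1)]
    hold = [MIN_F] * HOURS
    up = [MIN_F + (ROOM_F - MIN_F) * h / HOURS for h in range(1, HOURS + 1)]
    return down + hold + up

temps = profile()
print(f"{len(temps)} hourly setpoints, from {max(temps)} F down to {min(temps)} F")
```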
The optimal minimum temperatures for objects, as well as the hold times involved, are determined by utilizing different research methods and are backed by analysis of the product to determine the optimum procedure for a particular product. The advent of novel metals and their amalgamations in new market products may necessitate alterations to processing profiles to suit these materials. Additionally, thermal profiles might be revised in response to insights from case studies conducted by either the manufacturer or the clientele of cryogenic services. When a cryogenic processor is manufactured, the thermal profiles for the year of manufacture will be included. Nonetheless, profiles originating from the initial engineering phase of the processor model could be antiquated. Manufacturers facing budget constraints might include obsolete profiles with processors due to limited research funding.
A number of companies maintain thermal profiles of various products, updated for accuracy at regular intervals according to ongoing research, including data from independent trials and studies. Acquiring these profiles can be difficult outside educational or institutional research contexts, as such companies typically provide updated profiles only to their longtime service-center partners.
It is asserted that cryogenic processors have revolutionized the domain of cryogenics. Previously, cryogenics was largely theoretical, with inconsistent results from incremental improvements. Ongoing research aims to increase the accuracy of temperature treatment profiles, as well as the efficiency of hardware and associated control systems.
See also
Absolute zero
Cryogenics
Cryogenic tempering
Cryocoolers
Coldest temperature achieved on earth
Refrigeration
Superfluidity
Superconductivity
Quantum hydrodynamics
References
External links
300 Below, Inc. - Founder: Peter Paulin
Cryotron (Canada) Ltd. - Developer: Vari-Cold Cryogenic Process
Cryogenics | Cryogenic processor | Physics | 678 |
14,778,811 | https://en.wikipedia.org/wiki/Webcam%20model | A webcam model (colloquially, camgirl, camboy, or cammodel) is a video performer who streams on the Internet with a live webcam broadcast. A webcam model often performs erotic acts online, such as stripping, masturbation, or sex acts in exchange for money, goods, or attention. They may also sell videos of their performances. Once viewed as a small niche in the world of adult entertainment, camming became "the engine of the porn industry," according to Alec Helmy, the publisher of XBIZ, a sex-trade industry journal.
As many webcam models operate in the comfort of their own homes, they are free to choose the amount of sexual content for their broadcasts. While most display nudity and sexually provocative behavior, some choose to remain mostly clothed and merely talk about various topics, while still soliciting payment as tips from their fans. Webcam models are predominantly women, and also include noted performers of all genders and sexualities.
Background
The conceptual artist Jennifer Ringley is considered the first camgirl. In 1996, as a student at Dickinson College, Ringley created a website called "JenniCam". Her webcam was located in her dorm room and automatically photographed her every few minutes. Ringley viewed her site as a straightforward document of her life. She did not wish to filter the events that were shown on her camera, so sometimes she was shown nude or engaging in sexual behavior, including sexual intercourse and masturbation. These images were then broadcast live over the Internet. Two years later, in 1998, she divided her website's access between free and paying.
Also in 1998, a commercial site called AmandaCam was launched. Amanda's site, like Ringley's, had multiple cameras around her house, which allowed people to look in on her. However, Amanda made an important early discovery that would influence the camming industry for decades to come – that a website's popularity could be greatly increased by enabling viewers to chat with a performer while online. Within her members section, Amanda made it a point to chat with her viewers for over three hours a day. Since the early days of live webcasts by Ringley and Amanda, the phenomenon of camming has grown to become a multibillion-dollar industry, which has an average of at least 12,500 cam models online at any given time, and more than 240,000 viewers at any given time.
Payment systems and earnings
A camming website acts as an intermediary and aggregator by hosting independent models, and verifies that all are at least 18 years old. Camming websites typically fall into two main categories, depending on whether their video chat rooms are free or private. Viewers in private chat rooms pay for the performance by the minute. In free chat rooms, payment is voluntary, in the form of tips.
Tips are electronic tokens that viewers can buy from a camming website and then give to models during live performances to show appreciation. Tokens can also be used to buy access to private shows, operate a teledildonic device that a model may be wearing, or buy videos and souvenirs from a model. The website provides the transactional platform and then collects and distributes a percentage of the tips to the models. For public chat rooms, the model's portion of a tip ranges from 30% to 70%, depending on the cam site.
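As an arithmetic illustration of the split described above, here is a minimal Python sketch. The token price, tip amount, and 50% share are hypothetical; actual values vary by site.

```python
def model_payout(tokens_tipped: int, token_price: float, model_share: float) -> float:
    """Earnings a model receives from a batch of tips.

    tokens_tipped -- tokens received from viewers
    token_price   -- what viewers paid per token, in dollars (hypothetical)
    model_share   -- fraction paid to the model; 0.30 to 0.70 for public
                     rooms, per the range cited above
    """
    return tokens_tipped * token_price * model_share


# Hypothetical example: 1,000 tokens bought at $0.10 each,
# with the model keeping 50% of the gross.
print(model_payout(1000, 0.10, 0.50))   # -> 50.0
```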
A July 2020 survey found the average webcam model in the United States works 18 hours per week, and earns $4,470 per month. Webcam models who work full-time (40 hours per week or more) earn $11,250 per month on average. Top-earning webcam models have a self-reported income of over $312,000 annually, while bottom earners take home as little as $100 per week.
US taxes
In the United States, webcam models are considered self-employed workers. Their self-employment tax rate is 15.3% (12.4% for Social Security and 2.9% for Medicare), in addition to income tax. If a webcam model earns more than $600 in a year, they are sent a 1099 form and are required to report the income to the IRS.
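A minimal Python sketch of this tax arithmetic follows. It is simplified to match the description above; actual IRS rules apply the rate to 92.35% of net earnings and cap the Social Security portion at the annual wage base.

```python
SOCIAL_SECURITY_RATE = 0.124   # 12.4%
MEDICARE_RATE = 0.029          # 2.9%
FORM_1099_THRESHOLD = 600      # dollars per year

def self_employment_tax(net_earnings: float) -> dict:
    """Split the 15.3% self-employment tax into its components.

    Simplified to follow the description above; ordinary income tax
    is assessed separately and is not shown.
    """
    return {
        "social_security": net_earnings * SOCIAL_SECURITY_RATE,
        "medicare": net_earnings * MEDICARE_RATE,
        "total": net_earnings * (SOCIAL_SECURITY_RATE + MEDICARE_RATE),
        "1099_expected": net_earnings > FORM_1099_THRESHOLD,
    }

# Example: the survey's average of $4,470/month, i.e. $53,640/year.
print(self_employment_tax(53_640))
```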
Personal connection and interaction
Performances can be interactive in both public and private video chat rooms, as viewers and performers can communicate with each other using a keyboard, speech, and two-way cameras. Within public chat rooms, the audience can see tips and viewer comments as scrolling text next to the real-time video stream. Camgirls will frequently read and respond to the scrolling viewer comments. The chatter is constant and is often led by a small band of regular fans.
This is not the first time conversational interaction has become a boon for the erotic entertainment industry. In the early 20th century, sociologist Paul Cressey noted that within the hundreds of taxi-dance halls of America, "the traffic in romance and feminine society" would become available when taxi dancers offered their companionship and "the illusion of romance" for ten cents a dance. The Mitchell Brothers O'Farrell Theatre strip club is credited with the invention of the lap dance in 1977, when its new stage, New York Live, pioneered customer-contact shows with strippers who came off the stage and sat in the laps of customers for tips. Enabled by this new revenue stream for strippers, the strip club industry went through a period of extreme growth during the 1980s.
There are often connections between erotic video performance and the everyday social lives of camming customers. Webcam performers are often highly entrepreneurial and use mainstream social networking sites such as Twitter, Instagram, Snapchat, Skype, and Tumblr to build and maintain relationships with their customers. Some fans communicate multiple times a day with models through social media.
Unlike traditional pornography, the interactive nature of the camming medium titillates with the promise of virtual friendship. Princeton University sociologist and author of The Purchase of Intimacy, Viviana Zelizer, states of camming: "they're defining a new kind of intimacy. It's not traditional sex work, not a relationship, but something in between." In addition to performing sex work, cam models also perform through their hosting duties, conveying authenticity, creating and animating fantasies, and managing relationships over time.
Within Cam Girlz, a documentary film about the industry, male fans often say that they come to camming sites as a way to fulfill emotional needs. The film's director, Sean Dunne, states of the fans, "they said it's not like a strip club – it's like a community, and you feel it when you're in these chat rooms. It's a community and entertainment that goes very far beyond sexuality."
However, Dr Kari Lerum of the University of Washington suggests that men are more open and vulnerable in cam rooms than in strip clubs, and can become very invested in relationships which only exist on the screen. This proposition was supported by a 2019 study of over 6,000 webcam users by the webcam platform Stripchat. The study found that over 40% of its users had developed significant relationships with their cam models, ranging from friendship to deep emotional connections.
Terminology
The term webcam is a clipped compound, combining the terms World Wide Web and video camera.
When webcam models create live webcasts, the activity is known as camming. A third-party hosting website which transmits multiple webcam models' video-streams is known as a camming site. Webcam models mostly perform individually in separate video chat rooms, frequently referred to as rooms.
The generally pejorative term camwhore was used in print as early as November 2001. While commonly applied to sexually explicit performers, the term has also been applied to non-explicit female livestreamers on platforms such as Twitch and YouTube.
Camming industry
As of 2016, the money generated by camming sites was upwards of US$2 billion annually. The pornography business as a whole is estimated to be about $5 billion. According to the web traffic analysis service Compete.com, LiveJasmin generates more than 9 million unique viewers a month. Similar webcam model hosting sites such as Chaturbate, CAM4, and MyFreeCams boast 4.1 million, 3.7 million and 2 million unique monthly visitors, respectively. Certain hosting sites such as the aforementioned Chaturbate, LiveJasmin, ManyVids, and many other providers also offer payment in cryptocurrency.
The decentralized business model of camming has upended the pornography industry in multiple ways. Camming revenue has been severely cutting into the profits of the pornographic movie business, which has also been eroded for several years by piracy and the distribution of free sexual content on the Internet. The pornographic film industry used to be male-dominated, except for the performers. Since camming requires only a video camera, broadband service, and a computer, there has now been a power reversal, and female webcam performers are driving the industry. Todd Blatt, a former pornographic movie producer, has said, "If you're the middle guy who has been eating off this industry for 20 years, it's a big change. The girls don't need anybody."
The decentralized camming industry has also challenged many cultural stereotypes concerning both the camgirls and their customers. Ethnography researcher Dr Theresa Senft became a camgirl for a year while doing four years of research for her 2008 book Camgirls: Celebrity and Community in the Age of Social Networks. Senft has described herself as "the first academic camgirl" who became a "camgirl writing about camgirls." Anna Katzen, a camgirl who holds a postgraduate degree from Harvard, has also commented on these stereotypes in interviews.
During the COVID-19 pandemic, the webcamming industry experienced explosive growth. The popular platform OnlyFans reported $2.4 billion in transactions in 2020, a 600% increase from 2019. This was driven in part by a large influx of new creators with little or no previous experience in sex work who joined the platform due to unemployment.
Hosting websites
Webcam models typically make use of third-party websites to stream their real-time video performances on the Internet. Some sites charge viewers a fixed fee per minute, although many allow free access for unregistered visitors. These hosting websites, known as camming sites, take care of the technical work: hosting the video broadcast, processing payments, providing an intuitive interface, and advertising, so that the cam model only has to focus on the actual shows in their video chat room.
The service fee is charged as a percentage of the revenue generated by the model. To improve security and anonymity, some webcamming services (such as Live Stars) use blockchain technology to handle payments and to protect the personal information that models enter. SpankChain is a similar camming site and cryptocurrency.
By presenting hundreds of different models via individual chat rooms, a camming site becomes a talent aggregator and middleman. Though a camming website may carry many hundreds of models, they frequently provide an interface for the viewer to easily switch between the most-visited models' rooms, and that interface occasionally resembles the multiple channel selection of cable television.
Most cam models are independent contractors for camming sites, and are not employees.
Camming sites typically supply each webcam model with an individual profile webpage where the performer can describe themselves and more importantly, create a virtual store where they can sell items like videos, photos, personal clothing, and memberships to their fan club. The profile page's virtual store creates a stream of passive income, meaning that even if a camgirl is not online and performing, she can still generate money while fans come to the ever-present profile page to purchase its wares. Some of the most popular items are homemade videos cam models make of themselves. While most of these videos are sexual in nature, they often include elements of comedy, fashion, and a narration of their lifestyles.
The affordability of and access to new video recording technology has spawned new variations and genres of pornography since individual women, as well as industry players, can now create content. A profile page might also sell contact information like a personal phone number, a spot on a model's Snapchat contact list, or the ability to send her private messages through a camming site's friends list. The profile page may also suggest tip amounts for real-time performance requests, like a sexy dance, a song request, removal of clothing, or a particular sex act. All prices on a profile page are listed in quantities of tips, which are electronic tokens that the viewer can buy in bunches from the cam site to be given to various models during the performance, or in later purchases upon the profile page.
The camming site keeps a percentage of the tips, and the amount varies. Big earners can get a bigger chunk of their tips. Commissions earned by webcam models vary widely by website and are usually based on a percentage of gross sales, although sometimes they are in the form of a flat fee. They may also earn money through advertising or commissions by persuading customers to sign up for membership at adult pornographic paysites. Many sites encourage viewers to purchase items from online wish lists. Some webcam models cater to particular fetishes, such as a fascination with feet, and might earn additional money by selling worn socks to patrons. Some models will cater to extremely specific fetishes, as customers with uncommon fetishes tend to pay more. This has been criticized as a "race to the bottom," where webcam models will attempt to outdo each other in perversity. In reaction, cam models on websites such as Chaturbate have developed a culture discouraging engagement in fetishes they consider demeaning.
Camming sites specify rules and restrictions for their cam models, which in turn tend to give each camming site a distinct style and format. For example, one major free-access site, which only allows female models, fosters an environment where the camgirls are not necessarily obligated to do masturbation shows or even display nudity. Consequently, some of that site's models create a more relaxed "hangout atmosphere" within their rooms that occasionally resembles a talk show. Conversely, another major cam site, which allows men and couples to perform, tends to be more sexual and show-oriented. On some sites, models are not required to show their faces on the webcam stream, allowing the use of veils, masks, and similar coverings. Other cam site rules might prohibit working in a public place, so that the model does not get arrested for public indecency, the way Kendra Sunderland was charged after her 2014 performance inside the Oregon State University Library. Models who violate a camming site's rules may be subject to a temporary or permanent ban from the site.
Social media
Webcam models often rely on social media to interact with existing customers and to meet new ones. This has potential disadvantages, however: mainstream social media platforms often have poorly defined and changing rules that sex workers can inadvertently break. Having a social media account closed for any reason, legitimate or otherwise, can severely affect a performer's ability to earn income.
Some cam models have non-commercial personal web blogs.
Resources for performers
Cam studios allow models to rent facilities outside of their homes. These businesses can supply models with video equipment, Internet service, computer, lighting, and furniture. One example was the pornographic film company Kink.com, which rented individual cam studios in the San Francisco Armory by the hour from 2013 until the building was sold in 2018.
Within some studios, cam models can work by the percentage of business that they bring in, instead of renting studio time. The cam models do not have to pay to join this type of studio and are also not guaranteed a salary. These models can typically charge customers between $1 and $15 per minute, and then the studio keeps half of the gross while the model gets the rest.
Another workplace option is called a "camgirl mansion", which is a place that provides equipment and broadcast rooms, where multiple camgirls can live and share expenses without a studio owner.
Various support websites supply general information about business strategies, upcoming conferences, performance tips, and studio equipment reviews. Support sites also advise on how to protect privacy, discourage piracy, avoid Internet security lapses, and prevent financial scams. Some chat websites for cam models provide message boards for the models, which enables them to discuss their work concerns and issues, such as clients who get overly attached.
Conferences and industry trade shows can also aid cam models by allowing cam models to network and meet others in the profession on a personal level. Cam model Nikki Night provides a coaching service for cam models, in which she advises them on business practices that maximize revenues.
Legal issues and risks
Laws
Due to the controversial nature of pornography, camming, like most sex work, is not treated as legitimate labor in most developed countries. As a result, cam models do not receive the same benefits and rights as other employees, since they are technically independent contractors. This offers cam models some freedoms not afforded to other laborers, but prevents them from demanding better treatment from the websites that host them. In-person sex work is treated more harshly, as it is illegal in many Western countries, including the United States. Camming is treated slightly differently, since it is considered pornography by virtue of being filmed.
Regulation could benefit camming by preventing cam models from being exploited for their labor. However, regulation could also erode cam models' independence, such as their sexual freedom and bodily autonomy. Although in-person sex work such as prostitution can be regulated by policing the streets, online sex work is hard to regulate due to anonymity and the risk of encroaching on content that is risqué but not necessarily pornographic. In a study of sex work in East Java, Indonesia, where one district decriminalized sex work while its surrounding districts did not, researchers found that anti-prostitution laws decreased condom use, in effect increasing the transmission of sexually transmitted diseases such as HIV.
China
In accordance with the 1997 penal code, pornography is illegal in China. The law permits only educational or artistic depictions of sexual intercourse, and the government has historically not interpreted pornography as falling under the umbrella of art. Camming therefore faces strict regulation on the Chinese internet, in contrast to Western countries, where its legal classification as pornography distinguishes it from prostitution. However, camming can also be a form of solace for sex workers, since it allows them to work online, where they can avoid persecution for their profession.
China has planned to extend anti-camming laws to ASMR. The Chinese government claims that ASMR constitutes pornography, but Chinese ASMR content creators dispute this, arguing that pornographic ASMR represents a different category from non-sexual ASMR.
India
Sex work is legal in India, but many related elements such as brothels are illegal. Thus, camming is legal in India, but a social stigma remains. There is a narrative that sex workers in India are coerced into their profession, but this is not true of all sex workers. Many sex workers attest that their profession is legitimate labor and should be recognized as such. Due to the illegality of pimping, sex workers like cammers tend to operate independently and thus control their labor and profits.
Philippines
Sex work is illegal in the Philippines, but enforcement of the law is lax, and the practice is commonplace. There is a perception that Filipino sex workers are victims of human trafficking, but this is not always the case. Camming, in particular, is usually consensual and not always explicitly sexual, likening it more to performance than to pornography.
United Kingdom
Sex work and camming in the United Kingdom are heavily regulated by the government, and sex work is not recognized as legitimate employment. As a result, sex workers are often afraid to report crimes committed against them, making sex work a dangerous occupation. Sex workers, both online and offline, are often subject to stalking, unwanted messages, and other forms of harassment. It is hard to draw concrete conclusions from studies of sex work in the UK due to its tenuous legality; most studies are conducted through surveys, which are subject to bias.
A sex work researcher, Rachel Stuart, notes a paradox in British law, which tends to focus on the uploading of pornographic recordings but does not deal with erotic performances streamed over the Internet through camming. For instance, the Audiovisual Media Services Regulations 2014 ban certain acts from being depicted and uploaded by pornography producers in the United Kingdom, and the Digital Economy Act 2017 seeks to restrict minors' access to pornographic material online, yet neither law has any effect if a performance is streamed rather than recorded. Stuart states of the legal conundrum in England, "Performing an explicitly pornographic act via a webcam carries no repercussions, but if the same show is recorded and uploaded, the performer can be liable to a fine."
United States
Lawrence Walters, a Florida lawyer who is an expert in obscenity law, said that there was nothing inherently illegal about web model camming shows, as long as the models were over 18, and performed at home or in a model's studio.
Risks
While the conduct of webcam models' clients in chat rooms has been described as generally civil and polite, some models have faced "aggressive sexual language" and online harassment. In 2012, a group of 4chan users harassed a webcam model about her weight until she began crying on camera. Even clients who are polite can behave in ways that make models feel uncomfortable, such as becoming overly attached to, or obsessive about, a model; if the client is a regular customer and a heavy tipper, this can make the model feel pressured to give in to the client's requests. Webcam models have occasionally been the targets of cyber-stalkers and blackmailers. Some cam models have been "blackmailed or threatened into performing acts they are not comfortable with. If they don't comply, they run the risk of having their real identity exposed". In one case, Internet trolls revealed the real name, address, and phone number of a webcam performer and posted this information, along with explicit photos of her, on social media, which was then forwarded to her friends and family. As of 2019, it was reported that there is little legal protection for cam models, as most of the case law deals with the regulation of strip clubs and sex shops, or with the distribution of products.
Sex work researcher, Rachel Stuart, reported that while doing her PhD research she encountered webcam models who were concerned about viewers filming and sharing their performances on porn sites, or acquiring personal information which could be used to stalk or blackmail them. In 2013, the New York Times interviewed a woman who prefers to conceal her real identity while working as a camgirl. She revealed that she had been cyber-stalked by a heavy tipper who started making threats and demands about what outfits she should wear. A short while later, she found out that her real name and address had been posted on the Internet along with her cam name. When she complained to the police, they said that they could do nothing, because "putting real information on the Internet is not illegal." She later found out that the same individual had also threatened and outed several other camgirls.
Another issue faced by cam models is that viewers may record streams or images of the model without their consent and then redistribute them on pornography websites. In addition to taking away the model's ability to choose where their content is shown, unauthorized use has been likened to theft of the model's property, since the porn site will earn money from the video and not the cam model.
Sex workers have formed support groups where sex workers may give each other advice and possibly cope with harassment and marginalization. The word "camily", a portmanteau of "cam" and "family", refers to communities formed by sex workers to help deal with such issues.
Incidents
2000s
A New York Times report described the story of Justin Berry, a 13-year-old boy who, after hooking up his webcam and listing himself on an online forum in order to make friends, was propositioned by older men to strip and masturbate on camera. CNN referred to him as "in the language of cyberspace... a cam-whore". He started his own paysite, prostituted himself, sold video recordings of his encounters with Mexican prostitutes, and helped hire other underage models. He made several hundred thousand dollars over five years before turning all information over to prosecutors in exchange for immunity.
2010s
In October 2014, a 19-year-old Oregon State University student, Kendra Sunderland, who had been working as a camgirl, made an hour-long video for MyFreeCams.com of herself in the Oregon State University Library, in which she stripped and masturbated on camera for a live audience. She was charged with public indecency after a viewer recorded the show and posted it on other sites. Sunderland faced fines of up to $6,250 and a year in jail; she pleaded guilty, paid $1,000, and avoided jail. The incident generated headlines around the country and reportedly landed Sunderland a deal with Playboy and a contract with Penthouse's parent company Friend Finder Networks purportedly worth six figures. The incident greatly increased Sunderland's popularity, and she has continued camming and speaks positively of it as a career.
In Arizona during 2015, a fan took his appreciation of camgirls to an illegal level when he was indicted for spending $476,000 on a company credit card, which he used for tips on camming websites. He spent more than $100,000 on MyFreeCams.com alone, and sent $26,800 to one cam model in particular to pay for her college tuition bill and new tires for her car. According to the indictment, he also purchased flowers, chocolates, electronic equipment, shoes, a TV, a handbag, laptop computer, and an iPod for some of his favorite camgirls.
In one case, sex traffickers who operated illegal brothels forced an indentured victim to have sex in webcam shows.
In January 2019, 29-year-old Grant Amato killed his father, mother, and brother and staged the scene as a murder-suicide, placing the gun by his brother's body. His motive was an argument with his parents over his infatuation with a webcam model.
See also
Adult content subscription service
American burlesque
Cybersex
Exhibitionism
Internet pornography
Lap dance
List of chat websites
LittleRedBunny
Parasocial interaction
Sex show
Striptease
Talk show
Taxi dancer
Voyeurism
References
Further reading
External links
Sexuality and computing
Online content distribution
Internet culture
Sex workers
2000s neologisms
Sex industry
Streaming | Webcam model | Technology | 5,585 |
56,699,922 | https://en.wikipedia.org/wiki/Stain%20repellent | A stain repellent is a product added to fabric in order to prevent stains.
Stains
Stains on fabrics are classified into three types: water-based stains, oil-based stains, or a mix of both.
Stain-repellent fabrics
Fabrics are treated with finishes that prevent unwanted stains from setting, or that allow stains to wash out in ordinary laundering.
Chemicals
Larger PFCAs, such as perfluorooctanoic acid (PFOA), are mostly used for stain repellency. PFOA is also known colloquially as C8. It is a chemical of health concern and is subject to regulatory action and voluntary industrial phase-outs.
Alternative chemistry
There are chemicals based on C8-free chemistry that may be used as alternatives to PFOA, but they are less durable.
See also
Fluorosurfactant
Perfluorobutanesulfonic acid
Perfluorooctanesulfonic acid
Scotchgard
Notes
Textile treatments | Stain repellent | Chemistry,Biology | 192 |
27,519,454 | https://en.wikipedia.org/wiki/Fernstr%C3%B6m%20Prize | The Fernström Prize is a series of annual awards for prominent Swedish and Nordic scientists in medicine. The prize money is donated by the Eric K. Fernström Foundation. The prizes are managed by the medical faculty at Lund University.
There are two versions of the prize, both awarded annually – the main prize and a separate prize for particularly promising young researchers.
Nordic Prize
The Nordic Fernström Prize (Nordiska Fernströmpriset) is awarded annually to an outstanding Nordic scientist in medicine. As of 2023, the prize money is 500,000 kronor (approximately €50,000).
Recipients of the Nordic Fernström Prize
Swedish Prize
The Swedish Fernström Prize (Svenska Fernströmpriset) is awarded annually to six promising Swedish scientists in medicine. The prizes are distributed so that each winner works in one of the six medical faculties in Sweden:
Gothenburg (University of Gothenburg)
Linköping (Linköping University)
Lund (Lund University)
Stockholm (Karolinska Institutet)
Umeå (Umeå University)
Uppsala (Uppsala University)
See also
List of medicine awards
List of prizes named after people
Fernström Prize for Young Researcher – another prize from the same foundation, awarded to a successful younger researcher who shows particular promise and who, as of 31 December of the year to which the prize relates, has not yet reached the age of 45.
References
Medicine awards
Lund University
Swedish awards | Fernström Prize | Technology | 283 |
52,745,530 | https://en.wikipedia.org/wiki/Lagg%20%28landform%29 | A lagg, also called a moat, is the very wet zone on the perimeter of a peatland or bog where water from the adjacent upland collects and flows slowly around the main peat mass.
Description
A lagg is an area of wetland, especially at the edge of raised bogs, in which water collects. It is often markedly different from the terrain on either side and may consist of a morass of shrubs and murky water.
In addition to water gathered from the surrounding uplands, the lagg also collects water flowing down from the domed centre of a raised bog through small channels (soaks or water tracks) to the steeply sloping shoulder, or rand, of the bog. At the foot of the rand, this water meets water from the surrounding area at the boundary between the peat soil and the mineral soil.
References
Literature
Johnson, Charles W. Bogs of the Northeast. London: University Press of New England, 1985.
Bogs
Ecosystems | Lagg (landform) | Biology | 194 |
24,200,136 | https://en.wikipedia.org/wiki/Epstein%E2%80%93Barr%20virus%20small%20nucleolar%20RNA%201 | V-snoRNA1 is a box C/D snoRNA identified in B lymphocytes infected with the Epstein–Barr virus (human herpesvirus 4, HHV-4). This snoRNA is the first known example of a snoRNA expressed from a viral genome. It is homologous to eukaryotic snoRNAs in that it contains the C and D box sequence motifs, but it lacks a terminal stem-loop structure. The nucleolar localization of v-snoRNA1 was determined by in situ hybridization. V-snoRNA1 can form a ribonucleoprotein complex (snoRNP): co-immunoprecipitation (CoIP) assays showed that this snoRNA interacts with the snoRNA core proteins fibrillarin, Nop56, and Nop58. It has also been proposed that this snoRNA may act as a miRNA-like precursor that is processed into 24-nucleotide RNA fragments targeting the 3'UTR of the viral DNA polymerase mRNA.
See also
Epstein–Barr virus stable intronic sequence RNAs
References
External links
Molecular genetics
Non-coding RNA
Epstein–Barr virus | Epstein–Barr virus small nucleolar RNA 1 | Chemistry,Biology | 257 |
24,398,638 | https://en.wikipedia.org/wiki/C6H8S | {{DISPLAYTITLE:C6H8S}}
The molecular formula C6H8S (molar mass: 112.19 g/mol, exact mass: 112.0347 u) may refer to:
2,3-Dihydrothiepine
2,7-Dihydrothiepine
2,5-Dimethylthiophene
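The quoted molar mass can be checked directly from standard atomic weights; a minimal Python sketch (atomic weights as commonly tabulated):

```python
# Standard atomic weights (g/mol), rounded to commonly tabulated values.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "S": 32.06}

def molar_mass(formula: dict) -> float:
    """Sum atomic weights times atom counts for a molecular formula."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

# C6H8S: 6*12.011 + 8*1.008 + 32.06
print(round(molar_mass({"C": 6, "H": 8, "S": 1}), 2))   # -> 112.19
```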
Molecular formulas | C6H8S | Physics,Chemistry | 81 |
148,432 | https://en.wikipedia.org/wiki/Michigan%20Terminal%20System | The Michigan Terminal System (MTS) is one of the first time-sharing computer operating systems. Created in 1967 at the University of Michigan for use on IBM S/360-67, S/370 and compatible mainframe computers, it was developed and used by a consortium of eight universities in the United States, Canada, and the United Kingdom over a period of 33 years (1967 to 1999).
Overview
The University of Michigan Multiprogramming Supervisor (UMMPS) was initially developed by the staff of the academic computing center at the University of Michigan for operation of the IBM S/360-67, S/370 and compatible computers. The software may be described as a multiprogramming, multiprocessing, virtual memory, time-sharing supervisor that runs multiple resident, reentrant programs. Among these programs is the Michigan Terminal System (MTS) for command interpretation, execution control, file management, and accounting. End-users interact with the computing resources through MTS using terminal, batch, and server oriented facilities.
The name MTS refers to:
The UMMPS Job Program with which most end-users interact;
The software system, including UMMPS, the MTS and other Job Programs, Command Language Subsystems (CLSs), public files (programs), and documentation; and
The time-sharing service offered at a particular site, including the MTS software system, the hardware used to run MTS, the staff that supported MTS and assisted end-users, and the associated administrative policies and procedures.
MTS was used on a production basis at about 13 sites in the United States, Canada, the United Kingdom, Brazil, and possibly in Yugoslavia and at several more sites on a trial or benchmarking basis. MTS was developed and maintained by a core group of eight universities included in the MTS Consortium.
The University of Michigan announced in 1988 that "Reliable MTS service will be provided as long as there are users requiring it ... MTS may be phased out after alternatives are able to meet users' computing requirements". It ceased operating MTS for end-users on June 30, 1996. By that time, most services had moved to client/server-based computing systems, typically Unix for servers and various Mac, PC, and Unix flavors for clients. The University of Michigan shut down its MTS system for the last time on May 30, 1997.
Rensselaer Polytechnic Institute (RPI) is believed to be the last site to use MTS in a production environment. RPI retired MTS in June 1999.
Today, MTS still runs using IBM S/370 emulators such as Hercules, Sim390, and FLEX-ES.
Origins
In the mid-1960s, the University of Michigan was providing batch processing services on IBM 7090 hardware under the control of the University of Michigan Executive System (UMES), but was interested in offering interactive services using time-sharing. At that time the work that computers could perform was limited by their small real memory capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation and within IBM there were conflicting views about the importance of and need to support time-sharing.
A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden, Bernard Galler, Frank Westervelt (all associate directors at UM's academic Computing Center), and Tom O'Brian building upon some basic ideas developed at the Massachusetts Institute of Technology (MIT) was published in January 1966. The paper outlined a virtual memory architecture using dynamic address translation (DAT) that could be used to implement time-sharing.
After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer with dynamic address translation (DAT) features that would support virtual memory and accommodate UM's desire to support time-sharing. The computer was dubbed the Model S/360-65M. The "M" stood for Michigan. But IBM initially decided not to supply a time-sharing operating system for the machine. Meanwhile, a number of other institutions heard about the project, including General Motors, the Massachusetts Institute of Technology's (MIT) Lincoln Laboratory, Princeton University, and Carnegie Institute of Technology (later Carnegie Mellon University). They were all intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest IBM changed the computer's model number to S/360-67 and made it a supported product. With requests for over 100 new model S/360-67s IBM realized there was a market for time-sharing, and agreed to develop a new time-sharing operating system called TSS/360 (TSS stood for Time-sharing System) for delivery at roughly the same time as the first model S/360-67.
While waiting for the Model 65M to arrive, UM Computing Center personnel were able to perform early time-sharing experiments using an IBM System/360 Model 50 that was funded by the ARPA CONCOMP (Conversational Use of Computers) Project. The time-sharing experiment began as a "half-page of code written out on a kitchen table" combined with a small multi-programming system, LLMPS from MIT's Lincoln Laboratory, which was modified and became the UM Multi-Programming Supervisor (UMMPS) which in turn ran the MTS job program. This earliest incarnation of MTS was intended as a throw-away system used to gain experience with the new IBM S/360 hardware and which would be discarded when IBM's TSS/360 operating system became available.
Development of TSS took longer than anticipated, its delivery date was delayed, and it was not yet available when the S/360-67 (serial number 2) arrived at the Computing Center in January 1967. At this time UM had to decide whether to return the Model 67 and select another mainframe or to develop MTS as an interim system for use until TSS was ready. The decision was to continue development of MTS and the staff moved their initial development work from the Model 50 to the Model 67. TSS development was eventually canceled by IBM, then reinstated, and then canceled again. But by this time UM liked the system they had developed, it was no longer considered interim, and MTS would be used at UM and other sites for 33 years.
MTS Consortium
MTS was developed, maintained, and used by a consortium of eight universities in the US, Canada, and the United Kingdom:
University of Michigan (UM), 1967 to 1997, US
University of British Columbia (UBC), 1968 to 1998, Canada
NUMAC (University of Newcastle upon Tyne, University of Durham, and Newcastle Polytechnic), 1969 to 1992, United Kingdom
University of Alberta (UQV), 1971 to 1994, Canada
Wayne State University (WSU), 1971 to 1998, US
Rensselaer Polytechnic Institute (RPI), 1976 to 1999, US
Simon Fraser University (SFU), 1977 to 1992, Canada
University of Durham (NUMAC), 1982 to 1992, United Kingdom
Several sites ran more than one MTS system: NUMAC ran two (first at Newcastle and later at Durham), Michigan ran three in the mid-1980s (UM for Maize, UB for Blue, and HG at Human Genetics), UBC ran three or four at different times (MTS-G, MTS-L, MTS-A, and MTS-I for general, library, administration, and instruction).
Each of the MTS sites made contributions to the development of MTS, sometimes by taking the lead in the design and implementation of a new feature and at other times by refining, enhancing, and critiquing work done elsewhere. Many MTS components are the work of multiple people at multiple sites.
In the early days collaboration between the MTS sites was accomplished through a combination of face-to-face site visits, phone calls, the exchange of documents and magnetic tapes by snail mail, and informal get-togethers at SHARE or other meetings. Later, e-mail, computer conferencing using CONFER and *Forum, network file transfer, and e-mail attachments supplemented and eventually largely replaced the earlier methods.
The members of the MTS Consortium produced a series of 82 MTS Newsletters between 1971 and 1982 to help coordinate MTS development.
Starting at UBC in 1974 the MTS Consortium held annual MTS Workshops at one of the member sites. The workshops were informal, but included papers submitted in advance and Proceedings published after-the-fact that included session summaries. In the mid-1980s several Western Workshops were held with participation by a subset of the MTS sites (UBC, SFU, UQV, UM, and possibly RPI).
The annual workshops continued even after MTS development work began to taper off. Called simply the "community workshop", they continued until the mid-1990s to share expertise and common experiences in providing computing services, even though MTS was no longer the primary source for computing on their campuses and some had stopped running MTS entirely.
MTS sites
In addition to the eight MTS Consortium sites that were involved in its development, MTS was run at a number of other sites, including:
Centro Brasileiro de Pesquisas Fisicas (CBPF) within the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil
Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA), Brazil
Hewlett-Packard (HP), US
Michigan State University (MSU), US
Goddard Space Flight Center, National Aeronautics and Space Administration (NASA), US
A copy of MTS was also sent to the University of Sarajevo, Yugoslavia, though whether or not it was ever installed is not known.
INRIA, the French national institute for research in computer science and control in Grenoble, France ran MTS on a trial basis, as did the University of Waterloo in Ontario, Canada, Southern Illinois University, the Naval Postgraduate School, Amdahl Corporation, ST Systems for McGill University Hospitals, Stanford University, and University of Illinois in the United States, and a few other sites.
Hardware
In theory MTS will run on the IBM S/360-67, any of the IBM S/370 series which include virtual memory, and their successors. MTS has been run on the following computers in production, benchmarking, or trial configurations:
IBM: S/360-67, S/370-148, S/370-168, 3033U, 4341, 4361, 4381, 3081D, 3081GX, 3083B, 3090–200, 3090–400, 3090–600, and ES/9000-720
Amdahl: 470V/6, 470V/7, 470V/8, 5860, 5870, 5990
Hitachi: NAS 9060
Various S/370 emulators
The University of Michigan installed and ran MTS on the first IBM S/360-67 outside of IBM (serial number 2) in 1967, the second Amdahl 470V/6 (serial number 2) in 1975, the first Amdahl 5860 (serial number 1) in 1982, and the first factory shipped IBM 3090–400 in 1986. NUMAC ran MTS on the first S/360-67 in the UK and very likely the first in Europe. The University of British Columbia (UBC) took the lead in converting MTS to run on the IBM S/370 series (an IBM S/370-168) in 1974. The University of Alberta installed the first Amdahl 470V/6 in Canada (serial number P5) in 1975. By 1978 NUMAC (at University of Newcastle upon Tyne and University of Durham) had moved main MTS activity on to its IBM S/370 series (an IBM S/370-168).
MTS was designed to support up to four processors on the IBM S/360-67, although IBM only produced one (simplex and half-duplex) and two (duplex) processor configurations of the Model 67. In 1984 RPI updated MTS to support up to 32 processors in the IBM S/370-XA (Extended Addressing) hardware series, although 6 processors is likely the largest configuration actually used. MTS supports the IBM Vector Facility, available as an option on the IBM 3090 and ES/9000 systems.
In early 1967 running on the single processor IBM S/360-67 at UM without virtual memory support, MTS was typically supporting 5 simultaneous terminal sessions and one batch job. In November 1967 after virtual memory support was added, MTS running on the same IBM S/360-67 was simultaneously supporting 50 terminal sessions and up to 5 batch jobs. In August 1968 a dual processor IBM S/360-67 replaced the single processor system, supporting roughly 70 terminal and up to 8 batch jobs. By late 1991 MTS at UM was running on an IBM ES/9000-720 supporting over 600 simultaneous terminal sessions and from 3 to 8 batch jobs.
MTS can be IPL-ed under VM/370, and some MTS sites did so, but most ran MTS on native hardware without using a virtual machine.
Features
Some of the notable features of MTS include:
The use of Virtual memory and Dynamic Address Translation (DAT) on the IBM S/360-67 in 1967.
The use of multiprocessing on an IBM S/360-67 with two CPUs in 1968.
Programs with access to (for the time) very large virtual address spaces.
A straightforward command language that is the same for both terminal and batch jobs.
A strong device independent input/output model that allows the same commands and programs to access terminals, disk files, printers, magnetic and paper tapes, card readers and punches, floppy disks, network hosts, and an audio response unit (ARU).
A file system with support for "line files", where the line numbers and lengths of individual lines are stored as metadata separate from the data contents of the lines, and the ability to read, insert, replace, and delete individual lines anywhere in the file without the need to read or write the entire file (see the sketch following this list).
A file editor ($EDIT) with both command line and "visual" interfaces and pattern matching based on SNOBOL4 patterns.
The ability to share files in controlled ways (read, write-change, write-expand, destroy, permit).
The ability to permit files, not just to other user IDs and projects (aka groups), but to specific commands or programs and combinations of user IDs, projects, commands and programs.
The ability for multiple users to manage simultaneous access to files with the ability to implicitly and explicitly lock and unlock files and to detect deadlocks.
Network host to host access from commands and programs as well as access to or from remote network printers, card readers and punches.
An e-mail system ($MESSAGESYSTEM) that supports local and network mail with the ability to send to groups, to recall messages that haven't already been read, to add recipients to messages after they have been sent, and to display a history of messages in an e-mail chain without the need to include the text from older messages in each new message.
The ability to access tapes remotely, and to handle data sets that extend across multiple tapes efficiently.
The availability of a rich collection of well-documented subroutine libraries.
The ability for multiple users to quickly load and use a collection of common reentrant subroutines, which are available in shared virtual memory.
The availability of compilers, assemblers, and a Symbolic Debugging System (SDS) that allow users to debug programs written in high-level languages such as FORTRAN, Pascal, and PL/I, as well as in assembly language.
A strong protection model that uses the virtual memory hardware and the S/360 and S/370 hardware's supervisor and problem states and via software divides problem state execution into system (privileged or unprotected) and user (protected or unprivileged) modes. Relatively little code runs in supervisor state. For example, Device Support Routines (DSRs, aka device drivers) are not part of the supervisor and run in system mode in problem state rather than in supervisor state.
A simulated Branch on Program Interrupt (BPI) instruction.
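The "line file" feature above can be illustrated with a small conceptual sketch in Python. The class name, the in-memory dictionary, and the use of floating-point line numbers are illustrative assumptions: they model the behavior described (single-line operations, lines insertable between existing lines), not the on-disk format MTS actually used.

```python
class LineFile:
    """Conceptual model of an MTS line file: line numbers are metadata
    kept apart from line contents, so individual lines can be read,
    inserted, replaced, or deleted without rewriting the whole file.
    Line numbers are modeled as floats so a line can be inserted between
    two others (e.g. line 1.5 between lines 1 and 2). Illustrative only."""

    def __init__(self):
        self.lines = {}  # line number -> line text

    def write(self, number: float, text: str):
        self.lines[number] = text        # inserts a new line or replaces one

    def delete(self, number: float):
        self.lines.pop(number, None)     # removes one line; others untouched

    def read(self, number: float) -> str:
        return self.lines[number]

    def listing(self):
        return [(n, self.lines[n]) for n in sorted(self.lines)]


f = LineFile()
f.write(1, "FIRST LINE")
f.write(2, "THIRD LINE")
f.write(1.5, "SECOND LINE")   # inserted between lines 1 and 2
print(f.listing())
```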
Programs developed for MTS
The following are some of the notable programs developed for MTS:
Awit, a computer chess program written in Algol W by Tony Marsland.
Chaos, one of the leading computer chess programs from 1973 through 1985. Written in FORTRAN, Chaos started at the RCA Systems Programming division in Cinnaminson, NJ, with Fred Swartz and Victor Berman as its first authors; Mike Alexander and others joined the team later and moved development to MTS at the UM Computing Center.
CONFER II, one of the first computer conferencing systems. CONFER was developed by Robert Parnes starting in 1975 while he was a graduate student and with support from the University of Michigan's Center for Research on Learning and Teaching (CRLT) and School of Education.
FakeOS, a simulator that allows object modules containing OS/360 SVCs, control blocks, and references to OS/360 access methods to execute under MTS.
Forum, a computer conferencing system developed by staff of the Computing Centre at the University of British Columbia (UBC).
GOM (Good Old Mad), a compiler for the 7090 MAD language converted to run under MTS by Don Boettner of the UM's Computing Center.
IF (Interactive Fortran), developed by the University of British Columbia Computing Centre.
MICRO Information Management System, one of the earliest relational database management systems implemented in 1970 by the Institute for Labor and Industrial Relations (ILIR) at the University of Michigan.
MIDAS (Michigan Interactive Data Analysis System), an interactive statistical analysis package developed by Dan Fox and others at UM's Statistical Research Laboratory.
Plus, a programming language developed by Alan Ballard and Paul Whaley of the Computing Centre at the University of British Columbia (UBC).
TAXIR, an information storage and retrieval system designed for taxonomic data at the University of Colorado by David Rogers, Henry Fleming, Robert Brill, and George Estabrook and ported to MTS and enhanced by Brill at the University of Michigan.
Textform, a text-processing program developed at the University of Alberta's Computing Centre to support device independent output to a wide range of devices from line printers, to the Xerox 9700 page printers, to advanced phototypesetting equipment using fixed width and proportional fonts.
VSS, a simulator developed at the University of British Columbia's Computing Centre that makes it possible to run OS/MFT, OS/MVT, VS1, and MVS application programs under MTS.
Programs that run under MTS
The following are some of the notable programs ported to MTS from other systems:
APL VS, IBM's APL VS compiler program product.
ASMH, a version of IBM's 370 assembler with enhancements from SLAC and MTS.
COBOL VS, IBM's COBOL VS compiler program product.
CSMP, IBM's Continuous System Modeling Program.
Fortran, the G, H, and VS compilers from IBM.
GASP, a FORTRAN based discrete simulation package.
Kermit, Columbia University's communications software and protocol
MPS, IBM's Mathematical Programming System/360.
NASTRAN, finite element analysis program originally developed by and for NASA.
OSIRIS (Organized Set of Integrated Routines for Investigations with Statistics), a collection of statistical analysis programs developed at the University of Michigan's Institute for Social Research (ISR).
PascalSB, the Stony Brook Pascal compiler.
Pascal/SLAC, the Pascal compiler from the Stanford Linear Accelerator Center.
Pascal VS, IBM's Pascal VS compiler program product.
PL/I Optimizing Compiler from IBM.
REDUCE2, an algebraic language implemented in LISP.
SAS (Statistical Analysis System).
SHAZAM, a package for estimating, testing, simulating and forecasting econometrics and statistical models
SIMSCRIPT II.5, a free-form, English-like, general-purpose discrete event simulation language.
SPIRES (Stanford Public Information Retrieval System), a database management system.
SPSS (Statistical Package for the Social Sciences)
TELL-A-GRAPH, a proprietary conversational graphics program from ISSCO of San Diego, CA.
TEX, Don Knuth's TeX text-processing program.
TROLL, econometric modeling and statistical analysis
Programming languages available under MTS
MTS supports a rich set of programming languages, some developed for MTS and others ported from other systems:
ALGOL W
ALGOL 68
APL (IBM's VS APL)
Assembler (360/370: G, H, Assist; DEC PDP-11)
BASIC (BASICUM), WBASIC
BCPL (Basic Combined Programming Language)
C
COBOL (ANSI, VS, WATBOL)
EXPL (Extended XPL)
FORTRAN (G, H, VS, WATFOR, WATFIV)
GASP (A FORTRAN-based discrete simulation language)
GOM (Good Old Mad, the 7090 Michigan Algorithm Decoder ported to the S/370 architecture)
GPSS/H (General Purpose Simulation System V)
ICON
IF (Interactive FORTRAN, an incremental compiler and environment for executing and debugging FORTRAN programs, developed at the University of British Columbia)
MAD/I (an expanded version of the Michigan Algorithm Decoder for the IBM S/360 architecture that is not compatible with the original 7090 version of MAD, see also GOM above)
MPS, IBM's Mathematical Programming System/360
MTS LISP 1.5 (a new implementation of LISP 1.5 developed at the UM's Mental Health Research Institute, MHRI)
Pascal (VS, JB)
PIL, PIL/2 (Pitt Interpretive Language)
PL/I (F and OPT from IBM, PL/C from Cornell University)
PL/M
PL360
Plus (A "Pascal-like" system implementation language from the University of British Columbia (UBC) based on the SUE system language developed at the University of Toronto, c. 1971)
Prolog
Simula
SUE
SNOBOL4 (String Oriented Symbolic Language)
SPITBOL (Speedy Implementation of SNOBOL)
UMIST (University of Michigan Interpretive String Translator, based on TRAC)
System architecture
UMMPS, the supervisor, has complete control of the hardware and manages a collection of job programs. One of the job programs is MTS, the job program with which most users interact. MTS operates as a collection of command language subsystems (CLSs). One of the CLSs allows for the execution of user programs. MTS provides a collection of system subroutines that are available to CLSs, user programs, and MTS itself. Among other things these system subroutines provide standard access to Device Support Routines (DSRs), the components that perform device dependent input/output.
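The layering described above can be sketched conceptually in Python. All class and method names here are illustrative inventions intended to show the call flow (user command, MTS job program, CLS, system subroutine, DSR), not the actual MTS interfaces:

```python
class DeviceSupportRoutine:
    """Device-dependent I/O; in MTS these ran in problem state, not in
    the supervisor."""
    def __init__(self, device: str):
        self.device = device

    def write(self, data: str):
        print(f"[{self.device}] {data}")


class SystemSubroutines:
    """Device-independent services shared by MTS itself, CLSs, and user
    programs."""
    def __init__(self):
        self.dsrs = {"terminal": DeviceSupportRoutine("terminal")}

    def sprint(self, unit: str, data: str):
        # Route a device-independent write to the device-dependent routine.
        self.dsrs[unit].write(data)


class MTSJobProgram:
    """One of the job programs run under the UMMPS supervisor; it
    interprets commands by dispatching to command language subsystems."""
    def __init__(self, system: SystemSubroutines):
        self.system = system
        # One toy CLS: a $RUN command that executes a user program.
        self.clss = {"$RUN": self.run_cls}

    def run_cls(self, arg: str):
        self.system.sprint("terminal", f"executing user program {arg}")

    def command(self, line: str):
        verb, _, rest = line.partition(" ")
        self.clss[verb](rest)


mts = MTSJobProgram(SystemSubroutines())
mts.command("$RUN *FORTRAN")   # -> [terminal] executing user program *FORTRAN
```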
Manuals and documentation
The lists that follow are quite University of Michigan centric. Most other MTS sites used some of this material, but they also produced their own manuals, memos, reports, and newsletters tailored to the needs of their site.
End-user documentation
The manual series MTS: The Michigan Terminal System, was published from 1967 through 1991, in volumes 1 through 23, which were updated and reissued irregularly. Initial releases of the volumes did not always occur in numeric order and volumes occasionally changed names when they were updated or republished. In general, the higher the number, the more specialized the volume.
The earliest versions of MTS Volume I and II had a different organization and content from the MTS volumes that followed and included some internal as well as end user documentation. The second edition from December 1967 covered:
MTS Volume I: Introduction; Concepts and facilities; Calling conventions; Batch, Terminal, Tape, and Data Concentrator user's guides; Description of UMMPS and MTS; Files and devices; Command language; User Programs; Subroutine and macro library descriptions; Public or library file descriptions; and Internal specifications: Dynamic loader (UMLOAD), File and Device Management (DSRI prefix and postfix), Device Support Routines (DSRs), and File routines
MTS Volume II: Language processor descriptions: F-level assembler; FORTRAN G; IOH/360; PIL; SNOBOL4; UMIST; WATFOR; and 8ASS (PDP-8 assembler)
The following MTS Volumes were published by the University of Michigan Computing Center and are available as PDFs:
MTS Volume 1: The Michigan Terminal System, 1991
MTS Volume 2: Public File Descriptions, 1990
MTS Volume 3: Subroutine and Macro Descriptions, 1989
MTS Volume 4: Terminals and Networks in MTS, 1988 (earlier Terminals and Tapes)
MTS Volume 5: System Services, 1985
MTS Volume 6: FORTRAN in MTS, 1988
MTS Volume 7: PL/I in MTS, 1985
MTS Volume 8: LISP and SLIP in MTS, 1983
MTS Volume 9: SNOBOL4 in MTS, 1983
MTS Volume 10: BASIC in MTS, 1980
MTS Volume 11: Plot Description System, 1985
MTS Volume 12: PIL/2 in MTS, 1974
MTS Volume 13: The Symbolic Debugging System, 1985 (earlier Data Concentrator User's Guide)
MTS Volume 14: 360/370 Assemblers in MTS, 1986
MTS Volume 15: FORMAT and TEXT360, 1988
MTS Volume 16: ALGOL W in MTS, 1980
MTS Volume 17: Integrated Graphics System, 1984
MTS Volume 18: MTS File Editor, 1988
MTS Volume 19: Tapes and Floppy Disks, 1993
MTS Volume 20: PASCAL in MTS, 1989
MTS Volume 21: MTS Command Extensions and Macros, 1991
MTS Volume 22: Utilisp in MTS, 1988
MTS Volume 23: Messaging and Conferencing in MTS, 1991
MTS Reference Summary, a ~60 page, 3" x 7.5", pocket guide to MTS, Computing Center, University of Michigan
The Taxir primer: MTS version, Brill, Robert C., Computing Center, University of Michigan
Fundamental Use of the Michigan Terminal System, Thomas J. Schriber, 5th Edition (revised), Ulrich's Books, Inc., Ann Arbor, MI, 1983, 376 pp.
Digital computing, FORTRAN IV, WATFIV, and MTS (with *FTN and *WATFIV), Brice Carnahan and James O. Wilkes, University of Michigan, Ann Arbor, MI, 1968–1979; 1976 edition, 538 pp.
Documentation for MIDAS, Michigan Interactive Data Analysis System, Statistical Research Laboratory, University of Michigan
OSIRIS III MTS Supplement, Center for Political Studies, University of Michigan
Various aspects of MTS at the University of Michigan were documented in a series of Computing Center Memos (CCMemos) which were published irregularly from 1967 through 1987, numbered 2 through 924, though not necessarily in chronological order. Numbers 2 through 599 are general memos about various software and hardware; the 600 series are the Consultant's Notes series—short memos for beginning to intermediate users; the 800 series covers issues relating to the Xerox 9700 printer, text processing, and typesetting; and the 900 series covers microcomputers. There was no 700 series. In 1989 this series continued as Reference Memos with less of a focus on MTS.
A long run of newsletters targeted to end-users at the University of Michigan with the titles Computing Center News, Computing Center Newsletter, U-M Computing News, and the Information Technology Digest were published starting in 1971.
There was also introductory material presented in the User Guide, MTS User Guide, and Tutorial series, including:
Getting connected—Introduction to Terminals and Microcomputers
Introduction to the Computing Center
Introduction to Computing Center services
Introduction to Database Management Systems on MTS
Introduction to FORMAT
Introduction to Magnetic Tapes
Introduction to MTS
Introduction to the MTS File Editor
Introduction to Programming and Debugging in MTS
Introduction to Terminals
Introduction to Terminals and Microcomputers
Internals documentation
The following materials were not widely distributed, but were included in MTS Distributions:
MTS Operators Manual
MTS Message Manual
MTS Volume n: Systems Edition
MTS Volume 99: Internals Documentation
Supervisor Call Descriptions
Disk Disaster Recovery Procedures
A series of lectures describing the architecture and internal organization of the Michigan Terminal System given by Mike Alexander, Don Boettner, Jim Hamilton, and Doug Smith (4 audio tapes, lecture notes, and transcriptions)
Distribution
The University of Michigan released MTS on magnetic tape on an irregular basis. There were full and partial distributions, where full distributions (D1.0, D2.0, ...) included all of the MTS components and partial distributions (D1.1, D1.2, D2.1, D2.2, ...) included just the components that had changed since the last full or partial distribution. Distributions 1.0 through 3.1 supported the IBM S/360 Model 67, distribution 3.2 supported both the IBM S/360-67 and the IBM S/370 architecture, and distributions D4.0 through D6.0 supported just the IBM S/370 architecture and its extensions.
MTS distributions included the updates needed to run licensed program products and other proprietary software under MTS, but not the base proprietary software itself, which had to be obtained separately from the owners. Except for IBM's Assembler H, none of the licensed programs were required to run MTS.
The last MTS distribution was D6.0 released in April 1988. It consisted of 10,003 files on six 6250 bpi magnetic tapes. After 1988, distribution of MTS components was done in an ad hoc fashion using network file transfer.
To allow new sites to get started from scratch, two additional magnetic tapes were made available: an IPLable boot tape that contained a minimalist version of MTS plus the DASDI and DISKCOPY utilities, which could be used to initialize and restore a one-disk-pack starter version of MTS from the second magnetic tape. In the earliest days of MTS, the standalone TSS DASDI and DUMP/RESTORE utilities, rather than MTS itself, were used to create the one-disk starter system.
There were also less formal redistributions where individual sites would send magnetic tapes containing new or updated work to a coordinating site. That site would copy the material to a common magnetic tape (RD1, RD2, ...), and send copies of the tape out to all of the sites. The contents of most of the redistribution tapes seem to have been lost.
Today, complete materials from the six full and the ten partial MTS distributions as well as from two redistributions created between 1968 and 1988 are available from the Bitsavers Software archive and from the University of Michigan's Deep Blue digital archive.
Working with the D6.0 distribution materials, it is possible to create an IPLable version of MTS. A new D6.0A distribution of MTS makes this easier. D6.0A is based on the D6.0 version of MTS from 1988 with various fixes and updates to make operation under Hercules in 2012 smoother. In the future, an IPLable version of MTS will be made available based upon the version of MTS that was in use at the University of Michigan in 1996 shortly before MTS was shut down.
Licensing
As of December 22, 2011, the MTS Distribution materials are freely available under the terms of the Creative Commons Attribution 3.0 Unported License (CC BY 3.0).
In its earliest days MTS was made available for free without the need for a license to sites that were interested in running MTS and which seemed to have the knowledgeable staff required to support it.
In the mid-1980s licensing arrangements were formalized with the University of Michigan acting as agent for and granting licenses on behalf of the MTS Consortium. MTS licenses were available to academic organizations for an annual fee of $5,000, to other non-profit organizations for $10,000, and to commercial organizations for $25,000. The license restricted MTS from being used to provide commercial computing services. The licensees received a copy of the full set of MTS distribution tapes, any incremental distributions prepared during the year, written installation instructions, two copies of the current user documentation, and a very limited amount of assistance.
Only a few organizations licensed MTS. Several licensed MTS in order to run a single program such as CONFER. The fees collected were used to offset some of the common expenses of the MTS Consortium.
See also
Merit Network
Time-sharing system evolution
References
External links
Archives
MTS Archive , a collection of documents, photographs, movies, and other materials related to MTS and the organizations and people that developed and used it
MTS distribution archive at Bitsavers'
MTS distribution archive at the University of Michigan's Deep Blue digital archive
MTS D6.0A - A pre-built version of MTS for use with the Hercules S/370 emulator, available from the MTS Archive
MTS PDF Document Archive at Bitsavers'
The UM Computing Center Public Collection at the Hathi Trust Digital Library contains full text versions of over 250 documents related to MTS that are available for online viewing.
The Computing Center collection in the University of Michigan's Deep Blue digital archive contains over 50 items, mostly PDFs, but also a few videos, related to MTS and the U-M Computing Center.
Papers
A Comparative Study of the Michigan Terminal System (MTS) with Other Time Sharing Systems for the IBM 360/67 Computer, Elvert F. Hinson, Master's thesis, Naval Postgraduate School, Monterey, CA., December 1971
"Measurement and Performance of a Multiprogramming System", B. Arden and D. Boettner, Proceedings of the 2nd ACM Symposium on Operating Systems Principles, pp. 130–46, October 1969
Merit Network History
MTS Bibliography, a list of published literature about MTS
"MTS - Michigan Terminal System", Donald W. Boettner and Michael T. Alexander, ACM SIGOPS Operating Systems Review, Volume 4, Issue 4 (December 1970)
"The Michigan Terminal System", Donald W. Boettner and Michael T. Alexander, Proceedings of the IEEE, Volume 63, Issue 6 (June 1975), pp. 912–918
"A Faster Cratchit - The History of Computing at Michigan", Vol. XXVII, No. 1 (January 1976), U-M Research News, 24 pages
Web sites
MTS History, collected by former University of Michigan Computing Center staff member Tom Valerio
Personal perspective on MTS by Dan Boulet a student and later Computing Services staff member at the University of Alberta
Personal reflections on MTS by Mark Riordan of Michigan State University's Computer Laboratory
Several articles from the May 13, 1996 issue of the University of Michigan Information Technology Digest, Volume 5, No. 5, giving the history of and reminiscences about MTS, Merit, and UMnet on the eve of MTS's retirement at the University of Michigan, preserved on Web pages created by Josh Simon
Try-MTS.com, a web site showing how to run MTS under the Hercules emulator, tutorials on using the system and on several of the programming languages available on MTS
Public MTS Terminal, log on and look around as a student would have in the 1990s
Time-sharing operating systems
IBM mainframe operating systems
Discontinued operating systems
Formerly proprietary software
History of software
University of Michigan
1967 software | Michigan Terminal System | Technology | 7,461 |
32,460,414 | https://en.wikipedia.org/wiki/Wat%20Pa%20Maha%20Chedi%20Kaew | Wat Pa Maha Chedi Kaew (, , ), also known as the Temple of a Million Bottles, is a Buddhist temple in Khun Han district of Sisaket province, Thailand. The temple is made of over 1.5 million empty Heineken bottles and Chang beer bottles. Collection of the bottles began in 1984; it took two years to build the main temple. Thereafter, the monks continued to expand the site, and by 2009 some 20 buildings had been similarly constructed.
History
According to the China Daily, "The Thai Buddhist temple has found an environmentally friendly way to utilize discarded bottles to reach nirvana."
Construction
The main temple has a concrete core, with collected bottles used as construction materials. Two types of bottles are used: green Heineken bottles and brown Chang bottles. After the local monks began to collect them in 1984 for use as a building material, the local government sent additional bottles. In addition to the bottles themselves, the bottle caps are used to create mosaics. By 2009 a total of 20 buildings had been constructed in this fashion; in addition to the temple there were a crematorium, a series of prayer rooms, the local water tower, bathrooms for tourists, and several raised bungalows used as housing for the monks.
The main temple took two years to construct, but as the materials were still available the site is continually expanded. By 2009 there were more than 1.5 million bottles in use in the construction works at the temple site, leading to Wat Pa Maha Chedi Kaew also being known as the "Temple of a Million Bottles." In 2015, it was named one of the ten leading examples of sustainable architecture by travel website When on Earth.
Decades earlier, the Heineken company had looked into redesigning its bottles so that they could double as building blocks. While nothing came of that commercially, the monks found a way.
See also
Bunleua Sulilat
Wat Rong Khun
Sanctuary of Truth
References
Buddhist temples in Thailand
Visionary environments
Sustainable architecture
1984 establishments in Thailand
Glass bottles
Sisaket province
Religious buildings and structures completed in 1986
Glass buildings | Wat Pa Maha Chedi Kaew | Engineering,Environmental_science | 434 |
44,044 | https://en.wikipedia.org/wiki/Oceanography | Oceanography (), also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology.
It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology.
Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics.
History
Early history
Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle (384–322 BC) and later by Strabo. Early exploration of the oceans was primarily for cartography and was mainly limited to the surface and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken.
The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic.
The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the path of constant bearing between two points on the surface of a sphere, which is represented as a straight line on a suitable two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour:
"nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient).
His credibility rests on his personal involvement, by Royal appointment, in the instruction of pilots and senior seafarers from 1527 onwards, along with his recognized competence as a mathematician and astronomer.
The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe.
The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1755. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonally predominant winds. This happened from as early as the late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama took an open sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterlies on the Brazilian side (and the Brazilian current going southward; Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even wider arc to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486).
The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade-long period between Bartolomeu Dias finding the southern tip of Africa and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 leagues west of the Azores and Cape Verde to 370 leagues west of Cape Verde), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as a pre-determined, planned route; for example, 30 days for Bartolomeu Dias culminating at Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil.
The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth.
Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770.
Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly (now known as Rennell's Current). The tides and currents of the ocean are distinct phenomena. Tides are the rise and fall of sea levels created by the combined gravitational forces of the Moon and the Sun (the Sun to a much lesser extent) and by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences.
Sir James Clark Ross took the first modern sounding in the deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of the Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology.
The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide.
Modern oceanography
Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans.
The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. HMS Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles (130,000 km) surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and to map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development.
In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period.
In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean.
The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge.
In 1934, Easter Ellen Cupp, the first woman in the United States to have earned a PhD in oceanography (at Scripps), completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career (Russell, 2000).
Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966.
The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin.
In the 1950s, Auguste Piccard invented the bathyscaphe and used it to investigate the ocean's depths. The United States nuclear submarine USS Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed.
In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent.
From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer) generally now replaced by numerical methods (e.g. SLOSH.) An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events.
1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995.
Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks.
In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science.
Branches
The study of oceanography is divided into these five branches:
Biological oceanography
Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment.
Chemical oceanography
Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography.
Ocean acidification
Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide () emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100.
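The chemistry behind this pH decrease is the standard seawater carbonate system; the equilibria below are textbook reactions shown here for clarity, not measurements specific to any study:

```latex
\begin{align}
\mathrm{CO_2(g)} &\rightleftharpoons \mathrm{CO_2(aq)} \\
\mathrm{CO_2(aq) + H_2O} &\rightleftharpoons \mathrm{H_2CO_3} \\
\mathrm{H_2CO_3} &\rightleftharpoons \mathrm{H^+ + HCO_3^-} \\
\mathrm{HCO_3^-} &\rightleftharpoons \mathrm{H^+ + CO_3^{2-}}
\end{align}
```

Each absorbed CO2 molecule thus releases hydrogen ions, lowering the pH, and the added H+ also consumes carbonate ions (CO32−), which connects acidification to the shell-building problems described below.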
An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers.
The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas.
Geological oceanography
Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography.
Physical oceanography
Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography.
Seismic Oceanography
Ocean currents
Since the early expeditions in oceanography, a major interest has been the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity.
Examples of sustained currents are the Gulf Stream and the Kuroshio Current which are wind-driven western boundary currents.
Ocean heat content
Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in ocean heat plays an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971.
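A conventional way to quantify this, assuming typical seawater constants rather than site-specific values, is to integrate temperature over a depth range:

```latex
\mathrm{OHC} \;=\; \rho \, c_p \int_{z_{\mathrm{bottom}}}^{0} T(z)\,\mathrm{d}z
```

where ρ ≈ 1025 kg/m³ is a reference seawater density and c_p ≈ 3990 J/(kg·K) its specific heat capacity; in practice T(z) comes from profiling floats and ship casts, and the integral is evaluated over standard layers such as 0–700 m.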
Paleoceanography
Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environment models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology.
Oceanographic institutions
The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972 soon became a key player in marine tropical research.
In 1921 the International Hydrographic Bureau, called since 1970 the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards.
Related disciplines
See also
List of seas
Ocean optics
Ocean color
Ocean chemistry
References
Sources and further reading
Boling Guo, Daiwen Huang. Infinite-Dimensional Dynamical Systems in Atmospheric and Oceanic Science, 2014, World Scientific Publishing, . Sample Chapter
Hamblin, Jacob Darwin (2005) Oceanographers and the Cold War: Disciples of Marine Science. University of Washington Press.
Lang, Michael A., Ian G. Macintyre, and Klaus Rützler, eds. Proceedings of the Smithsonian Marine Science Symposium. Smithsonian Contributions to the Marine Sciences, no. 38. Washington, D.C.: Smithsonian Institution Scholarly Press (2009)
Roorda, Eric Paul, ed. The Ocean Reader: History, Culture, Politics (Duke University Press, 2020) 523 pp. online review
Steele, J., K. Turekian and S. Thorpe. (2001). Encyclopedia of Ocean Sciences. San Diego: Academic Press. (6 vols.)
Sverdrup, Keith A., Duxbury, Alyn C., Duxbury, Alison B. (2006). Fundamentals of Oceanography, McGraw-Hill,
Russell, Joellen Louise. Easter Ellen Cupp, 2000, Regents of the University of California.
External links
NASA Jet Propulsion Laboratory – Physical Oceanography Distributed Active Archive Center (PO.DAAC). A data centre responsible for archiving and distributing data about the physical state of the ocean.
Scripps Institution of Oceanography. One of the world's oldest, largest, and most important centres for ocean and Earth science research, education, and public service.
Woods Hole Oceanographic Institution (WHOI). One of the world's largest private, non-profit ocean research, engineering and education organizations.
British Oceanographic Data Centre. A source of oceanographic data and information.
NOAA Ocean and Weather Data Navigator. Plot and download ocean data.
Freeview Video 'Voyage to the Bottom of the Deep Deep Sea' Oceanography Programme by the Vega Science Trust and the BBC/Open University.
Atlas of Spanish Oceanography by InvestigAdHoc.
Glossary of Physical Oceanography and Related Disciplines by Steven K. Baum, Department of Oceanography, Texas A&M University
Barcelona-Ocean.com . Inspiring Education in Marine Sciences
CFOO: Sea Atlas. A source of oceanographic live data (buoy monitoring) and education for South African coasts.
Memorial website for USNS Bowditch, USNS Dutton, USNS Michelson and USNS H. H. Hess
Applied and interdisciplinary physics
Earth sciences
Hydrology
Physical geography
Articles containing video clips | Oceanography | Physics,Chemistry,Engineering,Environmental_science | 5,004 |
35,260,696 | https://en.wikipedia.org/wiki/Peptide%20microarray | A peptide microarray (also commonly known as peptide chip or peptide epitope microarray) is a collection of peptides displayed on a solid surface, usually a glass or plastic chip. Peptide chips are used by scientists in biology, medicine and pharmacology to study binding properties and functionality and kinetics of protein-protein interactions in general. In basic research, peptide microarrays are often used to profile an enzyme (like kinase, phosphatase, protease, acetyltransferase, histone deacetylase etc.), to map an antibody epitope or to find key residues for protein binding. Practical applications are seromarker discovery, profiling of changing humoral immune responses of individual patients during disease progression, monitoring of therapeutic interventions, patient stratification and development of diagnostic tools and vaccines.
Principle
The assay principle of peptide microarrays is similar to an ELISA protocol.
The peptides (up to tens of thousands, in several copies) are linked to the surface of a glass chip, typically the size and shape of a microscope slide. The peptide chip can be incubated directly with a variety of biological samples, such as purified enzymes or antibodies, patient or animal sera, or cell lysates, and binding can then be detected in a label-dependent fashion, for example by a primary antibody that targets the bound protein or modified substrate. After several washing steps, a secondary antibody with the needed specificity (e.g. anti-human/mouse IgG, anti-phosphotyrosine, or anti-myc) is applied. Usually the secondary antibody carries a fluorescent label that can be detected by a fluorescence scanner. Other label-dependent detection methods include chemiluminescence, colorimetry, and autoradiography.
Label-dependent assays are rapid and convenient to perform, but risk giving rise to false positive and false negative results. More recently, label-free detection methods, including surface plasmon resonance (SPR) spectroscopy, mass spectrometry (MS) and other optical biosensors, have been employed to measure a broad range of enzyme activities.
Peptide microarrays show several advantages over protein microarrays:
Ease and cost of synthesis
Extended shelf stability
Detection of binding events at the epitope level, enabling study of, for example, epitope spreading
Flexible design of peptide sequences (e.g. posttranslational modifications, sequence diversity, non-natural amino acids) and immobilization chemistries
Higher batch-to-batch reproducibility
Production of a peptide microarray
A peptide microarray is a planar slide with peptides spotted onto it or assembled directly on the surface by in-situ synthesis. Whereas peptides spotted can undergo quality controls that include mass spectrometer analysis and concentration normalization before spotting and result from a single synthetic batch, peptides synthesized directly on the surface may suffer from batch-to-batch variation and limited quality control options. However, peptide synthesis on chip allows the parallel synthesis of tens of thousands of peptides providing larger peptide libraries paired with lower synthesis costs. Peptides are ideally covalently linked through a chemoselective bond leading to peptides with the same orientation for interaction profiling. Some alternative procedures describe unspecific covalent binding and adhesive immobilization.
However, lithographic methods can be used to overcome the problem of excessive number of coupling cycles. Combinatorial synthesis of peptide arrays onto a microchip by laser printing has been described, where a modified colour laser printer is used in combination with conventional solid-phase peptide synthesis chemistry. Amino acids are immobilized within toner particles, and the peptides are printed onto the chip surface in consecutive, combinatorial layers. Melting of the toner upon the start of the coupling reaction ensures that delivery of the amino acids and the coupling reaction can be performed independently. Another advantage of this method is that each amino acid can be produced and purified separately, followed by embedding it into the toner particles, which allows long-term storage.
Applications of peptide microarrays
Peptide microarrays can be used to study different kinds of protein-protein interactions, specially those involving modular protein substructures called peptide recognition modules or, most commonly, protein interaction domains. The reason for this is that such protein substructures recognize short linear motifs often exposed in natively unstructured regions of the binding partner, such that the interaction can be modelled in vitro by peptides as probes and the peptide recognition module as analyte. Most publications can be found in the context of immune monitoring and enzyme profiling.
Immunology
Mapping of immunodominant regions in antigens or whole proteomes
Seromarker discovery
Monitoring of clinical trials
Profiling of antibody signatures and epitope mapping
Finding neutralizing antibodies
Enzyme profiling
Identification of substrates for orphan enzymes
Optimization of known enzyme substrates
Elucidation of signal transduction pathways
Detection of contaminating enzyme activities
Consensus sequence and key residues determination
Identifying sites for protein-protein interactions within a complex
Analysis and evaluation of results
Data analysis and evaluation of results is the most important part of every microarray experiment. After scanning the microarray slides, the scanner records a 20-bit, 16-bit or 8-bit numeric image in tagged image file format (*.tif). The .tif image enables interpretation and quantification of each fluorescent spot on the scanned microarray slide. This quantitative data is the basis for statistical analysis of measured binding events or peptide modifications on the microarray slide. For evaluation and interpretation of detected signals, each peptide spot visible in the image has to be mapped to its corresponding peptide sequence. The data for this allocation is usually saved in the GenePix Array List (.gal) file and supplied together with the peptide microarray. The .gal file (a tab-separated text file) can be opened using microarray quantification software modules or processed with a text editor (e.g. Notepad) or Microsoft Excel. The .gal file is most often provided by the microarray manufacturer and is generated from input text files by tracking software built into the robots that manufacture the microarrays.
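As a minimal sketch of this allocation step — assuming the standard Axon Text File layout used by GenePix Array List files, and with a hypothetical file name — the tab-separated .gal file can be read into spot records with a few lines of Python:

```python
import csv

def read_gal(path):
    """Parse a GenePix Array List (.gal) file into a list of spot records.

    Assumes the Axon Text File layout: an 'ATF' marker line, a line giving
    the number of optional header records, the optional headers themselves,
    then a tab-separated table with columns such as Block/Column/Row/ID/Name.
    """
    with open(path, newline="") as fh:
        rows = csv.reader(fh, delimiter="\t")
        if not next(rows)[0].startswith("ATF"):
            raise ValueError("not an ATF/.gal file")
        n_headers = int(next(rows)[0])   # count of optional header records
        for _ in range(n_headers):       # skip e.g. Type=, BlockCount=, ...
            next(rows)
        columns = next(rows)             # column-name row of the data table
        return [dict(zip(columns, row)) for row in rows if row]

# Hypothetical usage: map grid position -> peptide name for spot annotation.
# spots = read_gal("peptide_array.gal")
# lookup = {(s["Block"], s["Row"], s["Column"]): s["Name"] for s in spots}
```

Real .gal files should be checked against the GenePix file-format documentation, since the number and names of columns vary between array designs.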
References
Biological techniques and tools
Immunology
Microarrays
Protein methods | Peptide microarray | Chemistry,Materials_science,Biology | 1,314 |
2,670,004 | https://en.wikipedia.org/wiki/Omega2%20Scorpii | ω2 Scorpii, Latinised as Omega2 Scorpii, is a suspected variable star in the zodiac constellation of Scorpius. A component of the visual double star ω Scorpii, it is bright enough to be seen with the naked eye, having an apparent visual magnitude of +4.320. The distance to this star, as determined using parallax measurements, is around 291 light years. The visual magnitude of this star is reduced by 0.38 because of extinction from interstellar dust.
It is 0.05 degree north of the ecliptic, so can be occulted by the moon and planets.
This is a G-type giant star with a stellar classification of G6/8III. With an estimated age of 282 million years, it is an evolved, thin disk star that is currently on the red horizontal branch. The interferometry-measured angular diameter of this star, combined with its estimated distance, equates to a physical radius of nearly 16 times the radius of the Sun. It has 3.27 times the mass of the Sun, and radiates 141 times the Sun's luminosity. The effective temperature of the star's outer atmosphere is 5,363 K.
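The quoted radius follows from small-angle geometry relating angular diameter θ (in radians) to distance d — a standard derivation, with a purely illustrative value worked below rather than this star's measured one:

```latex
R \;=\; \frac{\theta}{2}\, d
```

For example, an angular diameter of 1.0 milliarcsecond at 291 light years gives R = ½ × (4.85 × 10⁻⁹ rad) × (2.75 × 10¹⁸ m) ≈ 6.7 × 10⁹ m, about 9.6 solar radii; the star's actual, somewhat larger measured angular diameter yields the ~16 solar radii quoted above.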
Names
In the Cook Islands, a traditional story is told of twins who flee their parents into the sky and become the pair of stars Omega2 and Omega1 Scorpii. The girl, who is called Piri-ere-ua "Inseparable", keeps tight hold of her brother, who is not named. (The IAU used a version of this story from Tahiti to name Mu2 Scorpii.)
References
External links
G-type giants
Upper Scorpius
Scorpius
Scorpii, Omega-1
Scorpii, 10
78990
5997
144608
Durchmusterung objects
Suspected variables
Piri-ere-ua | Omega2 Scorpii | Astronomy | 416 |
900,804 | https://en.wikipedia.org/wiki/Kakuro | Kakuro or Kakkuro or Kakoro () is a kind of logic puzzle that is often referred to as a mathematical transliteration of the crossword. Kakuro puzzles are regular features in many math-and-logic puzzle publications across the world. In 1966, Canadian Jacob E. Funk, an employee of Dell Magazines, came up with the original English name Cross Sums and other names such as Cross Addition have also been used, but the Japanese name Kakuro, abbreviation of Japanese kasan kurosu (加算クロス, "addition cross"), seems to have gained general acceptance and the puzzles appear to be titled this way now in most publications. The popularity of Kakuro in Japan is immense, second only to Sudoku among Nikoli's famed logic-puzzle offerings.
The canonical Kakuro puzzle is played in a grid of filled and barred cells, "black" and "white" respectively. Puzzles are usually 16×16 in size, although these dimensions can vary widely. Apart from the top row and leftmost column which are entirely black, the grid is divided into "entries"—lines of white cells—by the black cells. The black cells contain a diagonal slash from upper-left to lower-right and a number in one or both halves, such that each horizontal entry has a number in the half-cell to its immediate left and each vertical entry has a number in the half-cell immediately above it. These numbers, borrowing crossword terminology, are commonly called "clues".
The objective of the puzzle is to insert a digit from 1 to 9 inclusive into each white cell so that the sum of the numbers in each entry matches the clue associated with it and that no digit is duplicated in any entry. It is that lack of duplication that makes creating Kakuro puzzles with unique solutions possible. Like Sudoku, solving a Kakuro puzzle involves investigating combinations and permutations. There is an unwritten rule for making Kakuro puzzles that each entry must span at least two cells, since a clue over a single cell is mathematically trivial to solve.
At least one publisher includes the constraint that a given combination of numbers can only be used once in each grid, but still markets the puzzles as plain Kakuro.
Some publishers prefer to print their Kakuro grids exactly like crossword grids, with no labeling in the black cells and instead numbering the entries, providing a separate list of the clues akin to a list of crossword clues. (This eliminates the row and column that are entirely black.) This is purely an issue of image and does not affect either the solution nor the logic required for solving.
In discussing Kakuro puzzles and tactics, the typical shorthand for referring to an entry is "(clue, in numerals)-in-(number of cells in entry, spelled out)", such as "16-in-two" and "25-in-five". The exception is what would otherwise be called the "45-in-nine"—simply "45" is used, since the "-in-nine" is mathematically implied (nine cells is the longest possible entry, and since it cannot duplicate a digit it must consist of all the digits from 1 to 9 once). Curiously, both "43-in-eight" and "44-in-eight" are still frequently called as such, despite the "-in-eight" suffix being equally implied.
Solving techniques
Combinatoric techniques
Although brute-force guessing is possible, a more efficient approach is the understanding of the various combinatorial forms that entries can take for various pairings of clues and entry lengths. The solution space can be reduced by resolving allowable intersections of horizontal and vertical sums, or by considering necessary or missing values.
Those entries with sufficiently large or small clues for their length will have fewer possible combinations to consider, and by comparing them with entries that cross them, the proper permutation—or part of it—can be derived. The simplest example is where a 3-in-two crosses a 4-in-two: the 3-in-two must consist of "1" and "2" in some order; the 4-in-two (since "2" cannot be duplicated) must consist of "1" and "3" in some order. Therefore, their intersection must be "1", the only digit they have in common.
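This intersection reasoning is straightforward to mechanize. The sketch below is illustrative only, not taken from any particular solver: it enumerates the digit sets for a clue and intersects the candidate digits of two crossing entries:

```python
from itertools import combinations

def digit_sets(total, length):
    """All sets of distinct digits 1-9 of the given length summing to total."""
    return [set(c) for c in combinations(range(1, 10), length)
            if sum(c) == total]

def cell_candidates(total, length):
    """Digits that can appear somewhere in a (total, length) entry."""
    return set().union(*digit_sets(total, length))

# A 3-in-two crossing a 4-in-two: the shared cell must hold a digit
# that is possible in both entries.
print(cell_candidates(3, 2) & cell_candidates(4, 2))   # -> {1}
```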
When solving longer sums there are additional ways to find clues to locating the correct digits. One such method would be to note where a few squares together share possible values thereby eliminating the possibility that other squares in that sum could have those values. For instance, if two 4-in-two clues cross with a longer sum, then the 1 and 3 in the solution must be in those two squares and those digits cannot be used elsewhere in that sum.
When solving sums that have a limited number of solution sets then that can lead to useful clues. For instance, a 30-in-seven sum only has two solution sets: {1,2,3,4,5,6,9} and {1,2,3,4,5,7,8}. If one of the squares in that sum can only take on the values of {8,9} (if the crossing clue is a 17-in-two sum, for example) then that not only becomes an indicator of which solution set fits this sum, it eliminates the possibility of any other digit in the sum being either of those two values, even before determining which of the two values fits in that square.
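The same kind of enumeration confirms the two solution sets (a self-contained check using itertools):

```python
from itertools import combinations

# Distinct digits 1-9, seven of them, summing to 30:
print([c for c in combinations(range(1, 10), 7) if sum(c) == 30])
# -> [(1, 2, 3, 4, 5, 6, 9), (1, 2, 3, 4, 5, 7, 8)]
```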
Another useful approach in more complex puzzles is to identify which square a digit goes in by eliminating other locations within the sum. If all of the crossing clues of a sum have many possible values, but it can be determined that there is only one square that could have a particular value which the sum in question must have, then whatever other possible values the crossing sum would allow, that intersection must be the isolated value. For example, a 36-in-eight sum must contain all digits except 9. If only one of the squares could take on the value of 2 then that must be the answer for that square.
Box technique
A "box technique" can also be applied on occasion, when the geometry of the unfilled white cells at any given stage of solving lends itself to it: by summing the clues for a series of horizontal entries (subtracting out the values of any digits already added to those entries) and subtracting the clues for a mostly overlapping series of vertical entries, the difference can reveal the value of a partial entry, often a single cell. This technique works because addition is both associative and commutative.
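A toy instance with made-up clue values shows the arithmetic: if two across entries with clues 17 and 24 together occupy a block of cells, and down entries with clues 12, 14 and 9 cover every cell of that block except one, the leftover cell must contain

```latex
(17 + 24) - (12 + 14 + 9) = 41 - 35 = 6
```

since both totals count every other cell of the block exactly once.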
It is common practice to mark potential values for cells in the cell corners until all but one have been proven impossible; for particularly challenging puzzles, sometimes entire ranges of values for cells are noted by solvers in the hope of eventually finding sufficient constraints to those ranges from crossing entries to be able to narrow the ranges to single values. Because of space constraints, instead of digits, some solvers use a positional notation, where a potential numerical value is represented by a mark in a particular part of the cell, which makes it easy to place several potential values into a single cell. This also makes it easier to distinguish potential values from solution values.
Some solvers also use graph paper to try various digit combinations before writing them into the puzzle grids.
As in the Sudoku case, only relatively easy Kakuro puzzles can be solved with the above-mentioned techniques. Harder ones require the use of various types of chain patterns, the same kinds as appear in Sudoku (see Pattern-Based Constraint Satisfaction and Logic Puzzles).
Mathematics of Kakuro
Mathematically, Kakuro puzzles can be represented as integer programming problems, and are NP-complete. See also Yato and Seta, 2004.
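One standard, illustrative integer-programming encoding introduces binary variables y(c,d) ∈ {0,1} meaning "cell c holds digit d":

```latex
\begin{align}
\sum_{d=1}^{9} y_{c,d} &= 1 && \text{each cell holds exactly one digit} \\
\sum_{c \in e}\ \sum_{d=1}^{9} d\, y_{c,d} &= s_e && \text{each entry } e \text{ meets its clue } s_e \\
\sum_{c \in e} y_{c,d} &\le 1 && \text{no digit repeats within an entry}
\end{align}
```

Any 0–1 assignment satisfying all three constraint families is a valid Kakuro solution.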
There are two kinds of mathematical symmetry readily identifiable in Kakuro puzzles: minimum and maximum constraints are duals, as are missing and required values.
All sum combinations can be represented using a bitmapped representation. This representation is useful for determining missing and required values using bitwise logic operations.
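A minimal sketch of such a representation (illustrative Python, not a production solver): each combination becomes a 9-bit mask, and the required and missing digits for a clue fall out of bitwise AND/OR across all of its combinations:

```python
from itertools import combinations

def mask(digits):
    """9-bit mask with bit (d - 1) set for each digit d."""
    m = 0
    for d in digits:
        m |= 1 << (d - 1)
    return m

def required_and_missing(total, length):
    """Digits common to every solution set, and digits usable in none."""
    masks = [mask(c) for c in combinations(range(1, 10), length)
             if sum(c) == total]
    required, possible = 0x1FF, 0
    for m in masks:
        required &= m    # AND keeps digits present in every combination
        possible |= m    # OR accumulates digits usable in at least one
    return required, ~possible & 0x1FF

req, miss = required_and_missing(30, 7)
print([d for d in range(1, 10) if req & (1 << (d - 1))])    # -> [1, 2, 3, 4, 5]
print([d for d in range(1, 10) if miss & (1 << (d - 1))])   # -> []
```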
Popularity
Kakuro puzzles appear in nearly 100 Japanese magazines and newspapers. Kakuro remained the most popular logic puzzle in the Japanese printed press until 1992, when Sudoku took the top spot. In the UK, they first appeared in The Guardian, with The Telegraph and the Daily Mail following.
See also
Killer Sudoku, a variant of Sudoku which is solved using similar techniques.
References
External links
The New Grid on the Block: The Guardian newspaper's introduction to Kakuro
IAENG report on Kakuro
Solve Kakuro puzzles online
Logic puzzles
NP-complete problems
1966 introductions
Japanese board games | Kakuro | Mathematics | 1,811 |
63,578,074 | https://en.wikipedia.org/wiki/Costruzioni%20Motori%20Diesel%20S.p.A. | The () or more simply CMD is an Italian company that designs, develops, builds and markets marine engines, with its brand FNM Marine.
History
The company was founded in 1971, under the name Fratelli Negri Macchine Diesel Sud (FNM), on the initiative of the Negri brothers. Initially the activity focused on the overhaul of earthmoving machines, expanding in the mid-1970s into the installation of diesel engines in used cars. Towards the end of the decade, a collaboration with FIAT began, which still represents an important share of the company's business. In 2013, the company signed an agreement to produce engine heads for Maserati and Jeep.
In 1980, the GD 178 AT 1.3 supercharged diesel engine, entirely designed and built by FNM, was presented at the Turin International Motor Show.
In 1991, CMD Costruzioni Motori Diesel was set up and acquired the entire FNM business branch and related know-how. In the 1990s, CMD inaugurated the Atella 1 plant, expanding its business to the production and sale of marine diesel engines. In 1996, FNM 1.4-liter diesel engines began to be installed in passenger cars built by India's Premier Automobiles Limited, in two Fiat-derived models called the Premier 137D and 1.38D.
In the 2000s the company opened two new production plants: Atella 2 in 2004 and Morra De Sanctis in 2005. In 2017, 67% of the share capital was held by the Chinese Loncin Holdings group, while the remaining 33% was in Italian hands. The official dealer is AS Labruna, which is based in Monopoli, Apulia.
Engines
The company builds various types of marine inboard motors, including hybrids.
Awards
Premio ADI (Association for Industrial Design) 2019, awarded at the 59th Salone Nautico di Genova for the company's Blue Hybrid System engine.
See also
AS Labruna
Inboard motor
Fiat Chrysler Automobiles
References
Propulsion
Marine engineering
Italian brands
Companies based in Campania
Companies based in Basilicata
Engine manufacturers of Italy | Costruzioni Motori Diesel S.p.A. | Engineering | 435 |
20,679 | https://en.wikipedia.org/wiki/TiVo%20Corporation | TiVo Corporation, formerly known as the Rovi Corporation and Macrovision Solutions Corporation, was an American technology company headquartered in San Jose, California. Now operating as Xperi, the company is primarily involved in licensing its intellectual property within the consumer electronics industry, including digital rights management, electronic program guide software, and metadata. The company holds over 6,000 pending and registered patents. The company also provides analytics and recommendation platforms for the video industry.
In 2016, Rovi acquired digital video recorder maker TiVo Inc., and renamed itself TiVo Corporation. On May 30, 2019, TiVo announced the appointment of Dave Shull as the company's new president and CEO.
On December 19, 2019, TiVo merged with Xperi; the combined firm operates as Xperi.
History
Macrovision Corporation was established in 1983 by Victor Farrow and John O. Ryan. The 1984 film The Cotton Club was the first video to be encoded with Macrovision technology when it was released in 1985. The technology was subsequently extended to DVD players and other consumer electronic recording and playback devices such as digital cable and satellite set-top boxes, digital video recorders, and portable media players. By the end of the 1980s, most major Hollywood studios were utilizing their services.
In the 1990s, Macrovision acquired companies with expertise in managing access control and secure distribution of other forms of digital media, including music, video games, internet content, and computer software.
Ryan (founder and CEO of Macrovision from June 1995 to October 2001) and William A. Krepick (president of Macrovision Corporation from July 1995 to July 2005 and CEO from October 2001 to July 2005) led the company through an IPO in 1997 priced at $9.00 a share. Under their leadership, the company went from a private company with sales of under $20 million to a global, publicly traded corporation with annual sales of $220 million and market cap exceeding $1 billion.
In July 2005, the company hired Alfred J. Amoroso as chief executive officer and president to succeed William A. Krepick, who announced his retirement earlier in the year.
Macrovision acquired Gemstar-TV Guide on May 2, 2008, in a cash-and-stock deal worth about $2.8 billion. The combined company would seek to be “the homepage for the TV experience,” said Mr. Amoroso.
After the announcement of its intent to buy Gemstar-TV Guide, Macrovision made other changes in order to focus on entertainment technology, including selling its software business unit, valued at approximately $200 million, to private equity firm Thoma Cressey Bravo. The divestiture of the software business unit closed on April 1, 2008, becoming Acresso Software. Macrovision also ultimately sold off parts of Gemstar-TV Guide not focused on digital entertainment, including TryMedia, eMeta, TV Guide Magazine, TV Guide Network and the TV Games Network.
The company also bought two companies providing entertainment metadata: All Media Guide on November 6, 2007, and substantially all the assets of Muze, Inc. on April 15, 2009.
As Rovi
On July 16, 2009, Macrovision Solutions Corporation announced the official change of its name to Rovi Corporation.
Rovi announced its first product on January 7, 2010 – TotalGuide, an interactive media guide that incorporated entertainment data, to search, browse and provide recommendations. On March 16, 2010, Rovi acquired MediaUnbound for an undisclosed amount. MediaUnbound had helped build static and dynamic personalization and recommendation engines for clients such as Napster, eMusic and MTV Networks. On June 16, 2010, the company announced the Rovi Advertising Network which bundled guide advertising and third-party interactive TV platforms.
On December 23, 2010, the company announced its intention to acquire Sonic Solutions and its DivX video software in a deal valued at $720 million. Sonic provided digital video processing, playback and distribution technologies and owned RoxioNow (formerly CinemaNow) an OTT technology provider.
On March 1, 2011, Rovi announced its acquisition of online video guide SideReel.
The company announced Amoroso's intention to retire on May 26, 2011. Tom Carson, formerly the executive vice president of sales and marketing, was appointed CEO and President in December 2011. Under Carson the company shifted its focus on "growth opportunities related to its core enabling technology and services" and it announced that it intended to sell the Rovi Entertainment Store business. It entered into separate agreements to sell the Rovi Entertainment Store to Reliance Majestic Holdings, a private equity-backed company; and its consumer websites to All Media Networks, a new company, in July 2013. Continuing on this path, the company made a similar announcement in January 2014 indicating its intent to sell the DivX and MainConcept businesses.
On April 1, 2013, Rovi acquired Integral Reach, a provider of predictive analysis services. The technology would be integrated into Rovi's audience analysis services.
In April 2013, Facebook began licensing Rovi metadata for use within the service.
As TiVo Corporation
On April 29, 2016, Rovi Corporation announced that it had acquired TiVo Inc. for $1.1 billion. The combined company operated under the TiVo brand and held over 6,000 pending and registered patents. Rovi planned to discontinue in-house hardware production and to focus primarily on licensing its technologies and the TiVo brand to third-party companies.
In December 2019, TiVo Corporation announced its intent to merge with Xperi. The surviving entity would operate under the Xperi name, with a combined value of $3 billion. TiVo had previously considered splitting out its hardware operations from its licensing operations. The merger was completed on June 1, 2020.
In August 2022, TiVo announced the TiVo OS for smart TVs launching in 2023 in Europe with Vestel.
Products
Guides
Rovi provides guides for service providers and CE manufacturers.
TotalGuide xD, a white-label media guide for mobile devices for finding, managing, and watching TV shows and movies; it also controlled set-top boxes
TotalGuide CE, a media guide for CE manufacturers that gives access to broadcast programming, premium over-the-top (OTT) entertainment, and catch-up TV
Passport Guide and i-Guide, interactive program guides for service providers
G-Guide, an HTML5-based program guide for digital terrestrial, broadcast satellite, and commercial satellite services
TotalTV, an online guide enabling websites for news and entertainment organizations to incorporate local TV listings
Rovi DTA Guide, an interactive program guide designed for households installed with Digital terminal adapters
Data
Rovi provides entertainment metadata for consumer electronics manufacturers, service providers, retailers, online portals and application developers around the world. The company has over 50 years of metadata for video, music, books, and games covering more than 5 million movies and TV programs, 3.2 million album releases and 30 million song tracks, 9 million in-print and out-of-print book titles, and 70,000 video games. The metadata includes basic facts, local TV listings and channel line-ups for interactive program guides, original editorial, imagery, and other features.
Search and Recommendations
Rovi Search Service allows consumer electronics manufacturers, service providers, and developers to provide solutions that enable consumers to search for and access desired content. Rovi Recommendations Service is a cloud-based service that offers consumers entertainment choices similar to their chosen program, movie, album, track, musician or band.
Advertising
Rovi Advertising Service enables the monetization of entertainment platforms. It places ads that appear as content choices in application menus and user interfaces for set-top boxes, connected TVs, smartphones, tablets, Blu-ray players, game consoles and other devices.
Rovi Audience Management
Rovi Audience Management is a suite of products (Advertising Optimizer and Promotion Optimizer) combining big data with predictive analytics to provide TV audience insights and advertising campaign management. Ad Optimizer provides campaign management and media planning capabilities to TV networks and multichannel video programming distributors (MVPDs). Promo Optimizer uses past viewing data to enable cable and broadcast networks to create plans for on-air promos.
Legacy products
The company historically developed technologies and products that helped protect content from being pirated. Its two core legacy products were called RipGuard and the Analog Protection System (APS).
RipGuard
Macrovision introduced its RipGuard technology in February 2005. It was designed to prevent or reduce digital DVD copying by altering the format of the DVD content to disrupt the ripping software. Although the technology could be circumvented by several current DVD rippers such as AnyDVD or DVDFab, Macrovision claimed that 95% of casual users lack the knowledge and/or determination to be able to copy a DVD with RipGuard technology.
Analog Protection System
The Analog Protection System (APS), also known as Analog Copy Protection (ACP), Copyguard or Macrovision, was the Macrovision Corporation's flagship product, a copy protection system for both VHS and DVD. Video tapes copied from DVDs encoded with APS become garbled and unwatchable. The process works by adding pulses to analog video signals to negatively impact the AGC circuit of a recording device. In digital devices, changes to the analog video signal are created by a chip that converts the digital video to analog within the device. In DVD players, trigger bits are created during DVD authoring to inform the APS that it should be applied to DVD players' analog outputs or analog video outputs on a PC while playing back a protected DVD-Video disc. In set top boxes trigger bits are incorporated into Conditional Access Entitlement Control Messages (ECM) in the stream delivered to the STB. In VHS, alterations to the analog video signal are added in a Macrovision-provided "processor box" used by duplicators.
As Macrovision
In 2000, Macrovision acquired Globetrotter, creators of FLEXlm, which was subsequently renamed FlexNet.
In 2002, Macrovision acquired Israeli company Midbar Technologies, developers of the Cactus Data Shield music copy protection solution for $17 million. Additionally the same year, Macrovision acquired all the music copy protection and digital rights management (DRM) assets of TTR Technologies (formerly NASDAQ listed under the ticker TTRE).
In 2004, Macrovision acquired InstallShield, creators of installation authoring software (later divested to private equity).
In 2005, Macrovision acquired the intellectual property rights to DVD Decrypter from its developer.
In 2005, Macrovision acquired ZeroG Software, creators of InstallAnywhere (direct competition to InstallShield MP (MultiPlatform)), and Trymedia Systems.
In 2006, Macrovision acquired eMeta.
On January 1, 2007, Macrovision acquired Mediabolic, Inc.
On November 6, 2007, Macrovision announced its intention to acquire All Media Guide.
On December 7, 2007, Macrovision announced an agreement to acquire Gemstar-TV Guide and completed the purchase on August 5, 2008.
On December 19, 2007, Macrovision purchased BD+ DRM technology from Cryptography Research, Inc.
On April 15, 2009, Macrovision announced that it has acquired substantially all of the assets of Muze, Inc.
As Rovi
On March 16, 2010, Rovi acquired Recommendations Service MediaUnbound.
On December 23, 2010, Rovi announced its intention to acquire Sonic Solutions.
On March 1, 2011, Rovi acquired SideReel.
On May 5, 2011, Rovi acquired DigiForge.
In 2012, Rovi acquired Snapstick.
In February 2012, Rovi sold Roxio to Corel.
On April 1, 2013, Rovi acquired Integral Reach.
On February 25, 2014, Rovi acquired Veveo.
On November 3, 2014, Rovi acquired Fanhattan, a company that ran the Fan TV service, and owners of The Movie Database, for $12.0 million in cash.
On April 29, 2016, Rovi confirmed that it would acquire TiVo for approximately $1.1 billion.
See also
TiVo digital video recorders
DCS Copy Protection
Automatic content recognition
Tivoization
References
Additional sources
Fil's FAQ-Link-In Corner: MacroVision FAQ
MPAA | DVD Frequently Asked Questions
Columbia ISA: Macrovision Details
Macrovision Agrees to Sell Software Unit (expired link)
Realnetworks Acquires Game Distributor From Macrovision
Adobe LM Service – Adobelmsvc.exe – Program Information (archive)
Rovi Acquires DigiForge
Rovi Corporation Appoints Thomas Carson as President and Chief Executive Officer
External links
Howstuffworks: "How does copy protection on a video tape work?"
Ars Technica: "Digitizing video signals might violate the DMCA"
Manufacturing companies based in San Jose, California
Companies formerly listed on the Nasdaq
Technology companies established in 1983
American companies established in 1983
Technology companies disestablished in 2020
American companies disestablished in 2020
Digital technology
TiVo
Xperi
1997 initial public offerings
2020 mergers and acquisitions | TiVo Corporation | Technology | 2,679 |
60,347,670 | https://en.wikipedia.org/wiki/Farallon%20virus | Farallon virus is a strain of Hughes orthonairovirus in the genus Orthonairovirus belonging to the Hughes serogroup. A known host of the virus is Ornithodoros. The virus is named after the Farallon Islands.
References
Nairoviridae
Infraspecific virus taxa | Farallon virus | Biology | 69 |
71,351,682 | https://en.wikipedia.org/wiki/List%20of%20existing%20technologies%20predicted%20in%20science%20fiction | This list of existing technologies predicted in science fiction covers every medium, mainly literature and film. In 1964 Soviet engineer and writer Genrikh Altshuller made the first attempt to catalogue the science fiction technologies of the time.
Alongside the first prediction of a particular technology, the list may include all subsequent works mentioning it until its invention. The list includes technologies that were first posited in non-fiction works before their appearance in science fiction and subsequent invention, such as the ion thruster. To avoid repetition, the list excludes film adaptations of prior literature containing the same predictions, such as "The Minority Report". It also excludes emerging technologies that are not widely available. The names of some modern inventions (atomic bomb, credit card, robot, space station, oral contraceptive and borazon) exactly match their fictional predecessors. A few works correctly predicted the years in which some technologies would emerge, such as the first sustained heavier-than-air aircraft flight in 1903 and the first atomic bomb explosion in 1945.
Literature
Films and TV series
Notes
References
Sources
See also
Clarke's three laws
List of emerging technologies
List of hypothetical technologies
Materials science in science fiction
Prophets of Science Fiction
Prediction
Existing technologies predicted in science fiction
History of technology | List of existing technologies predicted in science fiction | Technology | 259 |
76,627,572 | https://en.wikipedia.org/wiki/SDSS%20J114816.64%2B525150.3 | SDSS J114816.64+525150.3 (J1148+5251) was the most distant known quasar when it was discovered in 2003, at redshift z = 6.419. The quasar is powered by a 3×10^9 solar mass supermassive black hole.
Imaging with amateur-grade telescope
The Virtual Telescope Project imaged the quasar between March and April 2024 with a Celestron Schmidt–Cassegrain telescope on a Software Bisque Paramount ME robotic mount. A total of 81 exposures of 300 seconds each were combined, for almost 7 hours of total exposure, recording sources as faint as about magnitude R = 25. The team termed it "the most distant quasar observable at visible wavelengths".
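A back-of-the-envelope calculation (added here for illustration; the rest wavelength below is a standard value, not taken from this article) shows why a z = 6.419 quasar sits at the very edge of visible-light observation: its rest-frame ultraviolet emission is stretched deep into the red.

```latex
% Redshifted wavelength of the Lyman-alpha line (rest wavelength 121.6 nm):
\[
  \lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}
  = (1 + 6.419) \times 121.6\ \mathrm{nm} \approx 902\ \mathrm{nm}
\]
% Essentially all flux blueward of Lyman-alpha is absorbed or shifted out of
% the optical band, leaving only the reddest visible wavelengths detectable.
```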
See also
List of the most distant astronomical objects
References
Sources
External links
Astronomical objects discovered in 2003
Quasars
Ursa Major | SDSS J114816.64+525150.3 | Astronomy | 192 |
5,065,992 | https://en.wikipedia.org/wiki/HD%2094510 | HD 94510 is a single star in the southern constellation of Carina, positioned near the northern constellation border with Vela. It has the Bayer designation u Carinae; HD 94510 is the identifier from the Henry Draper Catalogue. This object has an orange hue and is visible to the naked eye with an apparent visual magnitude that fluctuates around +3.78. The star is located at a distance of 95 light-years from the Sun based on parallax, and is drifting further away with a radial velocity of +8 km/s.
This is a K-type star in the subgiant stage with a stellar classification of K0IV, which indicates it has exhausted the supply of hydrogen at its core and is evolving into a giant. HD 94510 is a suspected variable star with a brightness that has been measured varying from magnitude 3.75 down to 3.80. It has an estimated 1.60 times the mass of the Sun and has expanded to nearly eight times the Sun's radius. The star is radiating 31 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of .
References
K-type subgiants
Carina (constellation)
Carinae, u
Durchmusterung objects
0404.1
094510
053253
4257 | HD 94510 | Astronomy | 273 |
52,218,674 | https://en.wikipedia.org/wiki/Dragon%20kiln | A dragon kiln, or "climbing kiln", is a traditional Chinese form of kiln used for Chinese ceramics, especially in southern China. It is long and thin, and relies on having a fairly steep slope, typically between 10° and 16°, up which the kiln runs. The kiln could achieve the very high temperatures, sometimes as high as 1400 °C, necessary for high-fired wares including stoneware and porcelain, which long challenged European potters, and some examples were very large, up to 60 metres long, allowing up to 25,000 pieces to be fired at a time. By the early 12th century CE they might be over 135 metres long, allowing still larger quantities to be fired; more than 100,000 have been claimed.
History
According to recent excavations in Shangyu District in the northeast of Zhejiang province and elsewhere, the origins of the dragon kiln may go back as far as the Shang dynasty (c. 1600 to 1046 BCE), and are linked to the introduction of stoneware, fired at 1200 °C or more. These kilns were much smaller than later examples, at some 5–12 metres long, and also sloped far less.
The type had certainly developed by the Warring States period, and by the Eastern Wu kingdom (220–280 CE), there were over 60 kilns at Shangyu. Thereafter it remained the main design used in southern China until the Ming dynasty. The pottery areas of south China are mostly hilly, whereas those on the plains of north China typically lack suitable slopes; here the mantou kiln type predominated.
The Nanfeng Kiln in Guangdong province is several centuries old and still functioning. It was a producer of Shiwan ware as well as architectural ceramics, and today also functions as a tourist attraction.
Characteristics
The kilns were normally made of brick, and are one type of "cross-draught" kilns, where the flames travel more or less horizontally, rather than up from or down to the floor. The firing time could be relatively short, meaning about 24 hours for a small kiln. Early kilns were rising tunnels, not divided into chambers, but with a step at intervals giving relatively flat floor levels, and perhaps using gravel or similar material on the floor to allow vertical stacks to be rested. From the Southern Song period (1127–1279), some kilns were built as a series of chambers, stepped as they ran up the slope, and with connecting doors to allow access to both the kiln-workers during loading and unloading, and the heat during firing. There might be up to 12 chambers. Chambered kilns were usual for making Longquan celadon.
The main fire chamber was at the bottom, but there might be additional "stoke holes" to allow adding extra fuel at intervals up the slope, as well as peep holes to allow sight of the interior. At the far, top, end there was a chimney, but given the up-draught of the slope, this did not need to be tall, and might be omitted altogether. The size and shape of the kilns and chambers within varied considerably. Firing was begun at the bottom end and moved up the slope. The fuel might be wood or (generally less often) coal, which affected the atmosphere of the firing; wood giving a reducing atmosphere and coal an oxidizing one. The weight of pottery produced was about the same as the weight of wood required. Generally saggars were used, at least in later periods. These were an innovation of Ding ware from the north in the Song dynasty.
The kilns allowed large quantities of pottery to be fired at high temperatures, but the firing was not usually even across the length of the kiln, which often produced different effects on pieces at different levels. Very often the higher chambers produced the better pieces, as they heated up more slowly. As one example, the wide range of colours seen in Chinese celadon wares such as Yue ware and Longquan celadon is largely explained by variations in firing conditions. If the pieces are fired at too high a temperature, they turn brown instead of the desired celadon colour. Variations in the shades of white porcelains between and within the northern Ding ware and the southern Qingbai were also the result of the fuel used. Some of the most advanced chambered kilns were built to fire Dehua porcelain, where precise control of high temperatures was essential. The dragon kiln form was copied in Korea from sometime between 100 and 300 CE, and much later in Japan in various types of climbing anagama kilns, and elsewhere in East Asia.
The large quantities fired were not unique to Asian pottery; the largest kilns making ancient Roman pottery, of a totally different form, could fire up to 40,000 pieces at a time.
Notes
References
Eng, Clarence, Colours and Contrast: Ceramic Traditions in Chinese Architecture, 2014, BRILL, , Google books
Kerr, Rose, Needham, Joseph, Wood, Nigel, Science and Civilisation in China: Volume 5, Chemistry and Chemical Technology, Part 12, Ceramic Technology, 2004, Cambridge University Press, , Google books
Medley, Margaret, The Chinese Potter: A Practical History of Chinese Ceramics, 3rd edition, 1989, Phaidon,
Rawson, Jessica (ed.). The British Museum Book of Chinese Art, 2007 (2nd edn), British Museum Press,
Vainker, S. J., Chinese Pottery and Porcelain, 1991, British Museum Press,
Wood, Nigel: Oxford Art Online, section "Dragon (long) kilns" in "China, §VIII, 2.2: Ceramics: Materials and techniques, Materials and techniques".
Chinese inventions
Chinese pottery
Firing techniques
Kilns | Dragon kiln | Chemistry,Engineering | 1,183 |
45,429,804 | https://en.wikipedia.org/wiki/Penicillium%20dipodomyis | Penicillium dipodomyis is a species of the genus of Penicillium which occurs in kangaroo rats and produces penicillin and the diketopiperazine dipodazine.
See also
List of Penicillium species
References
Further reading
dipodomyis
Fungi described in 1997
Fungus species | Penicillium dipodomyis | Biology | 65 |
42,577,984 | https://en.wikipedia.org/wiki/Weakly%20holomorphic%20modular%20form | In mathematics, a weakly holomorphic modular form is similar to a holomorphic modular form, except that it is allowed to have poles at cusps. Examples include modular functions and modular forms.
Definition
To simplify notation, this section treats the level 1 case; the extension to higher levels is straightforward.
A level 1 weakly holomorphic modular form is a function f on the upper half plane with the properties:
f transforms like a modular form: $f\!\left(\frac{a\tau+b}{c\tau+d}\right) = (c\tau+d)^k f(\tau)$ for some integer k called the weight, for any element $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ of SL2(Z).
As a function of $q = e^{2\pi i\tau}$, f is given by a Laurent series whose radius of convergence is 1 (so f is holomorphic on the upper half plane and meromorphic at the cusps).
Examples
The ring of level 1 weakly holomorphic modular forms is generated by the Eisenstein series E4 and E6 (which generate the ring of holomorphic modular forms) together with the inverse 1/Δ of the modular discriminant.
Any weakly holomorphic modular form of any level can be written as a quotient of two holomorphic modular forms. However, not every quotient of two holomorphic modular forms is a weakly holomorphic modular form, as it may have poles in the upper half plane.
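A standard illustration of such a quotient (a textbook fact, stated here for concreteness rather than taken from this article) is the j-invariant, a weight 0 weakly holomorphic modular form:

```latex
\[
  j(\tau) = \frac{E_4(\tau)^3}{\Delta(\tau)}
          = q^{-1} + 744 + 196884\,q + \cdots, \qquad q = e^{2\pi i\tau}.
\]
% Both E_4^3 and \Delta are holomorphic of weight 12, so j has weight 0;
% since \Delta is nonvanishing on the upper half plane, j is holomorphic
% there, with a simple pole at the cusp q = 0, as allowed by the definition.
```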
References
Modular forms | Weakly holomorphic modular form | Mathematics | 278 |
4,157,609 | https://en.wikipedia.org/wiki/Great%20dodecahemicosahedron | In geometry, the great dodecahemicosahedron (or great dodecahemiicosahedron) is a nonconvex uniform polyhedron, indexed as U65. It has 22 faces (12 pentagons and 10 hexagons), 60 edges, and 30 vertices. Its vertex figure is a crossed quadrilateral.
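As a quick consistency check on these counts (an illustrative calculation added here, not part of the source text), the Euler characteristic is

```latex
\[
  \chi = V - E + F = 30 - 60 + 22 = -8,
\]
% far from the value 2 of a convex polyhedron, reflecting the
% self-intersecting surface of this uniform star polyhedron.
```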
It is a hemipolyhedron with ten hexagonal faces passing through the model center.
Related polyhedra
Its convex hull is the icosidodecahedron. It also shares its edge arrangement with the dodecadodecahedron (having the pentagonal faces in common), and with the small dodecahemicosahedron (having the hexagonal faces in common).
Great dodecahemicosacron
The great dodecahemicosacron is the dual of the great dodecahemicosahedron, and is one of nine dual hemipolyhedra. It appears visually indistinct from the small dodecahemicosacron.
Since the hemipolyhedra have faces passing through the center, the dual figures have corresponding vertices at infinity; properly, on the real projective plane at infinity. In Magnus Wenninger's Dual Models, they are represented with intersecting prisms, each extending in both directions to the same vertex at infinity, in order to maintain symmetry. In practice, the model prisms are cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation figures, called stellation to infinity. However, he also suggested that strictly speaking, they are not polyhedra because their construction does not conform to the usual definitions.
The great dodecahemicosahedron can be seen as having ten vertices at infinity.
See also
List of uniform polyhedra
Hemi-icosahedron - The ten vertices at infinity correspond directionally to the 10 vertices of this abstract polyhedron.
References
(Page 101, Duals of the (nine) hemipolyhedra)
External links
Uniform polyhedra and duals
Uniform polyhedra | Great dodecahemicosahedron | Physics | 439 |
51,875,565 | https://en.wikipedia.org/wiki/Feng%27s%20classification | Tse-yun Feng suggested the use of the degree of parallelism to classify various computer architectures. It is based on sequential and parallel operations at a bit and word level.
About degree of parallelism
Maximum degree of parallelism
The maximum number of binary digits that can be processed within a unit time by a computer system is called the maximum parallelism degree P. If a processor is processing P bits in unit time, then P is called the maximum degree of parallelism.
Average degree of parallelism
Let i = 1, 2, 3, ..., T be the different timing instants and P1, P2, ..., PT be the corresponding bits processed.
Then the average degree of parallelism is $P_a = \frac{1}{T}\sum_{i=1}^{T} P_i$.
Processor utilization
Processor utilization is defined as $\mu = P_a / P$, the ratio of the average degree of parallelism to the maximum degree of parallelism.
The maximum degree of parallelism depends on the structure of the arithmetic and logic unit. A higher degree of parallelism indicates a highly parallel ALU or processing element. The average parallelism depends on both the hardware and the software. Higher average parallelism can be achieved through concurrent programs.
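As a concrete illustration of these definitions, here is a minimal sketch (with a hypothetical trace of bits processed per timing instant; the code is not part of Feng's original formulation):

```python
# Compute the average degree of parallelism Pa and processor utilization mu
# from a trace of bits processed at each timing instant i = 1..T.

def average_parallelism(trace):
    # Pa = (P1 + P2 + ... + PT) / T
    return sum(trace) / len(trace)

def utilization(trace, p_max):
    # mu = Pa / P, where P is the maximum degree of parallelism
    return average_parallelism(trace) / p_max

# Hypothetical example: a 64-bit processing element (P = 64) observed
# over T = 4 timing instants.
trace = [64, 32, 64, 16]
print(average_parallelism(trace))   # 44.0
print(utilization(trace, 64))       # 0.6875
```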
Types of classification
According to Feng's classification, computer architecture can be classified into four categories. The classification is based on the way contents stored in memory are processed. The contents can be either data or instructions.
Word serial bit serial (WSBS)
Word serial bit parallel (WSBP)
Word parallel bit serial (WPBS)
Word parallel bit parallel (WPBP)
Word serial bit serial (WSBS)
One bit of one selected word is processed at a time. This represents serial processing and needs maximum processing time.
Word serial bit parallel (WSBP)
It is found in most existing computers and has been called "word slice" processing, because one word of n bits is processed at a time: all bits of a selected word are processed simultaneously. Bit parallel means all bits of a word.
Word parallel bit serial (WPBS)
It has been called bit slice processing because an m-bit slice is processed at a time. Word parallel signifies the selection of all words: one bit from each word is processed at a time.
Word parallel bit parallel (WPBP)
Fully parallel processing involves simultaneously processing an array of n × m bits: all n words and all m bits of each word are processed at the same time, enabling the fastest and most efficient data handling.
Limitations of Feng's classification
It fails to capture the concurrency in pipelined processors, as the degree of parallelism does not account for the concurrency handled by a pipelined design.
See also
Händler's classification (Erlangen Classification System, ECS)
Flynn's taxonomy
References
Computer architecture | Feng's classification | Technology,Engineering | 527 |
48,102,075 | https://en.wikipedia.org/wiki/Isobutyronitrile | Isobutyronitrile is a complex organic molecule that has recently been detected in interstellar space. It is singular in that it is the only molecule detected there so far with a branched, rather than straight, carbon backbone; the backbone is also larger than those of most other detected molecules.
History
Both isobutyronitrile and its straight-chain isomer, butyronitrile, were detected by astronomers from Cornell University, the Max Planck Institute for Radio Astronomy and the University of Cologne using the Atacama Large Millimeter/submillimeter Array (ALMA), a set of radio telescopes in Chile. The chemical was found within an immense gas cloud in the star-forming region called Sagittarius B2, located about 300 light-years from the Galactic Center Sgr A* and about 27,000 light-years from Earth.
About 50 individual features for isobutyronitrile and 120 for normal propyl cyanide (n-propyl cyanide) were identified in the ALMA spectrum of the Sagittarius B2 region. The published astrochemical model indicates that both isomers are produced within or upon dust grain ice mantles through the addition of molecular radicals, albeit via differing reaction pathways.
Scientists have concluded that isobutyronitrile could have been essential for the creation of early life. The discovery of this particular cyanide suggests that the complex molecules needed for life may have their origins in interstellar space. Such molecules would have arisen during early star formation and been delivered to our planet later.
According to Rob Garrod, this detection opens a new frontier in the field regarding the complexity of molecules that can be formed in interstellar space and that might ultimately find their way to the surfaces of planets. How widespread these complex organic molecules really are in our Galaxy is one of the questions raised by this new discovery.
Composition and structure
Isobutyronitrile (C3H7CN) contains a central carbon atom bonded by single bonds to two methyl (–CH3) groups, one hydrogen atom, and a cyano group (–CN). The cyano group consists of a triple bond between one carbon and one nitrogen atom.
The greatest contribution to the production of i-PrCN comes from the reaction of CN radicals (which are accreted from the gas) with the CH3CHCH3 radical, whereas the dominant formation mechanism for n-PrCN is the addition of C2H5 and CH2CN, a process that has no equivalent for the production of i-PrCN.
i-PrCN production dominates in all reaction mechanisms for which parallel processes are available to both isomers. It is also one of the most structurally complex molecules yet detected in interstellar space.
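The branched structure can be expressed compactly as the SMILES string CC(C)C#N. As a minimal sketch (assuming the open-source RDKit cheminformatics library is available; this example is not part of the source text), one can verify the molecular formula and weight:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Isobutyronitrile: a central CH carbon bonded to two methyl groups
# and a cyano group, i.e. (CH3)2CH-CN.
mol = Chem.MolFromSmiles("CC(C)C#N")

print(rdMolDescriptors.CalcMolFormula(mol))  # C4H7N
print(round(Descriptors.MolWt(mol), 2))      # ~69.11 g/mol
```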
Rotational spectrum
The rotational spectrum of the branched isomer iso- or i-PrCN, which had only been previously studied to a limited extent in the microwave region, has recently been extensively recorded in the laboratory from the microwave to the submillimeter wave region along with a redetermination of the dipole moment, which appears to be 4.29 D.
The latter uncertainty assumes the same source size and rotation temperature for both isomers.
Scientists were able to observe transitions in both types of cyanide. The microwave spectrum of isobutyronitrile has been recorded from 26.5 to 40.0 GHz. Three different excited states were found in the R-branch of i-PrCN. In the experiments, several parameters were studied: the bond distances between the different atoms and the angles between them. The results indicated that the bond distance between the carbon atom and the cyano group is 1.501 Å; the angle between the three carbon atoms is 113°, while the angle between the CCC plane and the CN bond is 53.8°. Two different torsional modes were observed, according to the relative intensities of the excited state lines, with frequencies of 200±20 and 249±10 cm−1, respectively. This gives an idea of the internal rotation energy of the molecule, which has been found to be 3.3 kcal/mol.
Importance in life's origin
The branched carbon structure of isobutyronitrile is a common feature in molecules that are considered necessary for life, such as amino acids, the building blocks of proteins. This new discovery lends weight to the idea that biologically crucial molecules, like amino acids (which are also commonly found in meteorites), were produced early in the process of star formation, before planets such as Earth were formed.
The importance of the cyanides found in comets lies in their C–N bond, which has been shown to participate in abiotic amino acid synthesis.
The two cyanide molecules – isobutyronitrile and n-butyronitrile – are the largest molecules yet detected in any star-forming region.
Properties
Nitrogen oxides are released during its combustion.
Highly stable under ordinary conditions.
A clear, colourless liquid with a density lower than that of water.
Highly flammable liquid and vapor.
Its physical state is a clear liquid.
Its distillation range is 115-118 °C.
Some more specific properties are:
Specific gravity: 0.76
Vapor density: 2.38
Refractive index: 1.372
Dielectric Constant: 20.80
Hazards
Contact with eyes causes irritation
Fatal if swallowed or if inhaled. It causes weakness, headache, confusion, nausea and vomiting.
Toxic in contact with skin and it may cause damage to organs.
Applications
Chemically, the simple inorganic cyanides behave like chlorides in many ways. Organic nitriles act as solvents and are reacted further for various applications: they serve as extraction solvents for fatty acids, oils and unsaturated hydrocarbons; as solvents for spinning and casting; and in extractive distillation, based on their selective miscibility with organic compounds. They can also act as agents for removing colouring matter and aromatic alcohols. Inorganic cyanides can be used for the recrystallization of steroids or as compounds for organic synthesis. They therefore act mainly as solvents or chemical intermediates in biochemistry (pesticide sequencing and DNA synthesis, for example).
Other useful applications of these organic nitriles include high-pressure liquid chromatographic analysis. They also act as catalysts, as components of transition-metal complex catalysts, and as stabilizers for chlorinated solvents. Furthermore, they may serve as chemical intermediates and as solvents for perfumes and pharmaceutical products.
References
Bibliography
Astrochemistry
Isopropyl compounds | Isobutyronitrile | Chemistry,Astronomy | 1,406 |
5,597,613 | https://en.wikipedia.org/wiki/Directional-hemispherical%20reflectance | Directional-hemispherical reflectance is the reflectance of a surface under direct illumination (with no diffuse component). It is the integral of the bidirectional reflectance distribution function (BRDF) over all viewing directions, and is sometimes called "black-sky albedo".
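In the common radiometric notation (the symbols below are assumed for illustration, not taken from this article), this integral can be written as

```latex
% Directional-hemispherical reflectance for incident direction omega_i:
% integrate the BRDF f_r over the hemisphere Omega of outgoing directions,
% weighted by the projected solid angle (cosine of the outgoing angle).
\[
  R(\omega_i) = \int_{\Omega} f_r(\omega_i, \omega_o)
                \,(\omega_o \cdot \mathbf{n})\, \mathrm{d}\omega_o
\]
```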
References
See also
Bi-hemispherical reflectance
Electromagnetic radiation
Climatology | Directional-hemispherical reflectance | Physics | 81 |
62,098,761 | https://en.wikipedia.org/wiki/Latino%20urbanism | Latino urbanism is a field of study that examines urban planning and urbanism from the perspective of Latino studies. It aims to highlight the contributions of Latinos to the making of American cities, and the theoretical interventions that Latino studies scholarship has generated in response to urban scholars' lack of engagement with Latino populations. Scholars have attributed this lack of attention to disciplinary boundaries between urban studies and ethnic studies. Latino urbanism as a field is inherently interdisciplinary and includes scholars working in literature, history, anthropology, urban planning, American studies, and more. A key characteristic is its attention to the ways communities act on the built environment, and how they in turn develop "barrio urbanisms", or new knowledges and interventions concerning the use and organization of urban space. The work of urban planner James Rojas exemplifies the field's attention to Latinos as actors, agents of change and innovators: his art-making workshops harness communities' vernacular knowledges to develop urban planning solutions. Some scholars champion the Chicano practice of Rasquachismo to suggest "placekeeping" as an inventive, make-do, popular strategy that can help advance racial justice goals by expanding definitions of urbanism. This scholarship views grassroots interventions into space as strategic and resourceful.
See also
Urban vitality
References
Urban planning
Latin American studies | Latino urbanism | Engineering | 266 |
55,631,153 | https://en.wikipedia.org/wiki/Tensor%20representation | In mathematics, the tensor representations of the general linear group are those that are obtained by taking finitely many tensor products of the fundamental representation and its dual. The irreducible factors of such a representation are also called tensor representations, and can be obtained by applying Schur functors (associated to Young tableaux). These coincide with the rational representations of the general linear group.
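A minimal example (a textbook decomposition, included here for illustration rather than quoted from this article) is the second tensor power of the fundamental representation V of GL(n), which splits into the irreducible tensor representations given by the symmetric and exterior square Schur functors:

```latex
\[
  V \otimes V \;\cong\; \operatorname{Sym}^2 V \,\oplus\, \Lambda^2 V
\]
% For n >= 2 both summands are irreducible; they correspond to the Young
% diagrams with two boxes in one row and two boxes in one column.
```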
More generally, a matrix group is any subgroup of the general linear group. A tensor representation of a matrix group is any representation that is contained in a tensor representation of the general linear group. For example, the orthogonal group O(n) admits a tensor representation on the space of all trace-free symmetric tensors of order two. For orthogonal groups, the tensor representations are contrasted with the spin representations.
The classical groups, like the symplectic group, have the property that all finite-dimensional representations are tensor representations (by Weyl's construction), while other representations (like the metaplectic representation) exist in infinite dimensions.
References
, chapters 9 and 10.
Bargmann, V., & Todorov, I. T. (1977). Spaces of analytic functions on a complex cone as carriers for the symmetric tensor representations of SO(n). Journal of Mathematical Physics, 18(6), 1141–1148.
Tensors | Tensor representation | Engineering | 273 |
31,057,646 | https://en.wikipedia.org/wiki/TAPS%20%28buffer%29 | TAPS ([tris(hydroxymethyl)methylamino]propanesulfonic acid) is a chemical compound commonly used to make buffer solutions.
It can bind divalent cations, including Co(II) and Ni(II).
TAPS is effective to make buffer solutions in the pH range 7.7–9.1, since it has a pKa value of 8.44 (ionic strength I = 0, 25 °C).
The pH (and pKa at I ≠ 0) of the buffer solution changes with concentration and temperature, and this effect may be predicted e.g. using online calculators.
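For a rough estimate of the buffer pH in this range, the Henderson–Hasselbalch equation can be applied. The sketch below uses hypothetical concentrations and ignores the ionic-strength and temperature corrections described above, so it only approximates what the online calculators do:

```python
import math

def buffer_ph(pka, base_conc, acid_conc):
    # Henderson-Hasselbalch: pH = pKa + log10([base] / [acid]).
    # Ignores ionic-strength and temperature corrections.
    return pka + math.log10(base_conc / acid_conc)

# Hypothetical TAPS buffer: equal parts deprotonated (base) and
# protonated (acid) forms give pH = pKa = 8.44.
print(buffer_ph(8.44, 0.05, 0.05))            # 8.44
# A 2:1 base:acid ratio raises the pH by log10(2), about 0.30.
print(round(buffer_ph(8.44, 0.10, 0.05), 2))  # 8.74
```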
References
Buffer solutions
Sulfonic acids
Triols | TAPS (buffer) | Chemistry | 141 |
71,119,705 | https://en.wikipedia.org/wiki/Cinnamonitrile | (E)-Cinnamonitrile is an organic compound approved for use as a fragrance in products such as air fresheners. It has a spicy cinnamon aroma.
Synthetic routes include an aldol-like condensation of benzaldehyde with acetonitrile under alkaline conditions, an elimination reaction of various oximes derived from cinnamaldehyde, and oxidative coupling of benzene to acrylonitrile.
References
Nitriles
Phenyl compounds | Cinnamonitrile | Chemistry | 103 |