Elipse Software
https://en.wikipedia.org/wiki/Elipse%20Software
Elipse Software is a Brazilian industrial automation software producer whose main activities are designing and selling software for HMI/SCADA projects and interfaces for many types of applications. It was founded in 1986 in Porto Alegre, Brazil. Its four Brazilian branches are located in the cities of São Paulo, Rio de Janeiro, Curitiba and Belo Horizonte. It also has an international branch in Taiwan.
The company is headquartered in Porto Alegre (southern Brazil), providing industrial automation software for several industries, such as Energy, Infrastructure, Water and Wastewater, Food and Beverage, Mining and Metals, and Pharmaceutical and Life Sciences.
Elipse Software is a Microsoft Gold Certified Partner, and has been a member of the OPC Foundation since 1999. In 2014, it was listed in Gartner's "Cool Vendors" report for Brazil and Asia (Taiwan).
See also
SCADA
Automation
References
External links
Official Website
Industrial software
Software companies of Brazil
Companies based in Rio Grande do Sul
Software companies established in 1986
Economy of Porto Alegre
Brazilian brands
HD 149382
https://en.wikipedia.org/wiki/HD%20149382
HD 149382 is a hot subdwarf star in the constellation of Ophiuchus with an apparent visual magnitude of 8.943. This is too faint to be seen with the naked eye even under ideal conditions, although it can be viewed with a small telescope. Based upon parallax measurements, this star is located at a distance of about from the Earth.
This is the brightest known B-type subdwarf star, with a stellar classification of B5 VI. It is generating energy through the thermonuclear fusion of helium at its core (the triple-alpha process). The effective temperature of the star's outer envelope is about 35,500 K, giving it the characteristic blue-white hue of a B-type star. Although only about one seventh the diameter of the Sun, it radiates about 25 times as much energy as the Sun due to its high temperature. HD 149382 has a visual companion located at an angular separation of 1 arcsecond.
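These figures are mutually consistent under the Stefan–Boltzmann relation L/L☉ = (R/R☉)² (T/T☉)⁴. A quick sketch of the check, taking the radius ratio and temperature from the text and assuming the standard nominal solar effective temperature of 5772 K:

```python
# Rough Stefan-Boltzmann consistency check for HD 149382:
#   L / L_sun = (R / R_sun)**2 * (T / T_sun)**4
R_ratio = 1.0 / 7.0   # about one seventh the Sun's diameter (from the text)
T_star = 35_500.0     # effective temperature in kelvin (from the text)
T_sun = 5_772.0       # nominal solar effective temperature (assumed standard value)

luminosity_ratio = R_ratio**2 * (T_star / T_sun) ** 4
print(f"L/L_sun ~ {luminosity_ratio:.0f}")  # comes out in the 25-30 range
```

The result lands close to the quoted factor of 25, showing how a small but very hot star can outshine the Sun.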
In 2009, a substellar companion, perhaps even a superjovian planet, was announced orbiting the star. This candidate object was estimated to have a mass of roughly 8–23 times that of Jupiter. In 2011, this discovery was thrown into doubt when an independent team of astronomers was unable to confirm the detection. Their observations rule out a companion with a mass greater than that of Jupiter orbiting with a period of less than 28 days.
See also
List of brown dwarfs
References
External links
http://simbad.u-strasbg.fr/simbad/sim-id?Ident=HD%20149382b
Ophiuchus
149382
Hypothetical planetary systems
B-type subdwarfs
081145
Durchmusterung objects
Cylindrobasidium laeve
https://en.wikipedia.org/wiki/Cylindrobasidium%20laeve
Cylindrobasidium laeve is a species of fungus in the family Physalacriaceae.
A product which contains Cylindrobasidium laeve as the active ingredient can be used as a mycoherbicide to control Acacia mearnsii (black wattle) in South Africa.
Taxonomy
Initially described by Persoon in 1794 as Corticium laeve, the species was given its modern name, as recorded in Index Fungorum, by George Peter Chamuris in 1984.
In Europe
It is very common in Poland, usually found in various types of forests and bushland, and in parks, gardens, and along roadsides, on trunks and branches of deciduous trees. It has been recorded on the following species and types of trees: maples, chestnut, alder, silver birch, hornbeam, hazel, hawthorn, beech, hairy ash, apple, black poplar, plum, Robinia pseudoacacia, willow, and lime. It occurs rarely on conifers.
Gallery
References
External links
BioNET-EAFRINET: Acacia mearnsii (Black Wattle)
Mycobank: Cylindrobasidium laeve
Scottish fungi: Cylindrobasidium laeve
Physalacriaceae
Fungus species
Fire Dynamics Simulator
https://en.wikipedia.org/wiki/Fire%20Dynamics%20Simulator
Fire Dynamics Simulator (FDS) is a computational fluid dynamics (CFD) model of fire-driven fluid flow. The computer program numerically solves a large eddy simulation form of the Navier–Stokes equations appropriate for low-speed, thermally-driven flow, with an emphasis on smoke and heat transport from fires, to describe the evolution of fire.
FDS is free software developed by the National Institute of Standards and Technology (NIST) of the United States Department of Commerce, in cooperation with VTT Technical Research Centre of Finland. Smokeview is the companion visualization program that can be used to display the output of FDS.
The first version of FDS was publicly released in February 2000. To date, about half of the applications of the model have been for the design of smoke handling systems and sprinkler/detector activation studies. The other half consists of residential and industrial fire reconstructions. Throughout its development, FDS has been aimed at solving practical fire problems in fire protection engineering, while at the same time providing a tool to study fundamental fire dynamics and combustion.
The Wildland-Urban Fire Dynamics Simulator (WFDS) is an extension developed by the US Forest Service that is integrated into FDS and allows it to be used for wildfire modeling. It models vegetative fuel either by explicitly defining the volume of the vegetation or, for surface fuels such as grass, by assuming uniform fuel at the air-ground boundary.
FDS is a Fortran program that reads input parameters from a text file, computes a numerical solution to the governing equations, and writes user-specified output data to files. Smokeview (SMV) is a companion program that reads FDS output files and produces animations on the computer screen. Smokeview has a simple menu-driven interface, while FDS does not. However, there are various third-party programs that have been developed to generate the text file containing the input parameters needed by FDS.
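As a hedged sketch of what such a text file looks like: FDS input uses Fortran namelist groups. The group names below (&HEAD, &MESH, &TIME, &SURF, &OBST, &TAIL) are standard FDS namelist groups, but the specific geometry and heat-release values are arbitrary example choices, not taken from the source:

```fortran
&HEAD CHID='simple_fire', TITLE='Minimal example case' /
&MESH IJK=20,20,20, XB=0.0,2.0, 0.0,2.0, 0.0,2.0 /   ! 2 m cube with 10 cm cells
&TIME T_END=60.0 /                                   ! simulate 60 seconds
&SURF ID='BURNER', HRRPUA=500.0 /                    ! 500 kW/m2 heat release rate
&OBST XB=0.8,1.2, 0.8,1.2, 0.0,0.2, SURF_ID='BURNER' /
&TAIL /
```

FDS reads such a file, runs the simulation, and writes output files that Smokeview can then animate.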
See also
Wildfire modeling
References
External links
FDS Official website
Wikibooks tutorial
FDS+Evac Tools
FDS Project Road Map
AutoCAD plugin to convert 3D geometry to FDS format
PyroSim, a graphical interface (GUI) for creation of FDS input files. (commercial)
Firefighting
Fire prevention
Fire protection
Wildfire prevention
Discrete geometry
https://en.wikipedia.org/wiki/Discrete%20geometry
Discrete geometry and combinatorial geometry are branches of geometry that study combinatorial properties and constructive methods of discrete geometric objects. Most questions in discrete geometry involve finite or discrete sets of basic geometric objects, such as points, lines, planes, circles, spheres, polygons, and so forth. The subject focuses on the combinatorial properties of these objects, such as how they intersect one another, or how they may be arranged to cover a larger object.
Discrete geometry has a large overlap with convex geometry and computational geometry, and is closely related to subjects such as finite geometry, combinatorial optimization, digital geometry, discrete differential geometry, geometric graph theory, toric geometry, and combinatorial topology.
History
Although polyhedra and tessellations had been studied for many years by people such as Kepler and Cauchy, modern discrete geometry has its origins in the late 19th century. Early topics studied were: the density of circle packings by Thue, projective configurations by Reye and Steinitz, the geometry of numbers by Minkowski, and map colourings by Tait, Heawood, and Hadwiger.
László Fejes Tóth, H.S.M. Coxeter, and Paul Erdős laid the foundations of discrete geometry.
Topics
Polyhedra and polytopes
A polytope is a geometric object with flat sides, which exists in any general number of dimensions. A polygon is a polytope in two dimensions, a polyhedron in three dimensions, and so on in higher dimensions (such as a 4-polytope in four dimensions). Some theories further generalize the idea to include such objects as unbounded polytopes (apeirotopes and tessellations), and abstract polytopes.
The following are some of the aspects of polytopes studied in discrete geometry:
Polyhedral combinatorics
Lattice polytopes
Ehrhart polynomials
Pick's theorem
Hirsch conjecture
Opaque set
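Pick's theorem, listed above, relates the area A of a simple polygon with vertices on the integer lattice to its interior lattice points i and boundary lattice points b via A = i + b/2 − 1. A minimal sketch (the example triangle is an arbitrary illustration):

```python
from math import gcd

def shoelace_area(verts):
    """Area of a simple polygon via the shoelace formula."""
    s = 0
    n = len(verts)
    for k in range(n):
        x1, y1 = verts[k]
        x2, y2 = verts[(k + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def boundary_points(verts):
    """Lattice points on the boundary: gcd(|dx|, |dy|) per edge."""
    n = len(verts)
    return sum(gcd(abs(verts[(k + 1) % n][0] - verts[k][0]),
                   abs(verts[(k + 1) % n][1] - verts[k][1]))
               for k in range(n))

# Example: right triangle with legs of length 4.
tri = [(0, 0), (4, 0), (0, 4)]
A = shoelace_area(tri)    # 8.0
b = boundary_points(tri)  # 4 + 4 + gcd(4, 4) = 12
i = A - b / 2 + 1         # interior points recovered via Pick's theorem
print(A, b, i)            # 8.0 12 3.0
```

Direct enumeration confirms the triangle has exactly three interior lattice points: (1, 1), (1, 2), and (2, 1).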
Packings, coverings and tilings
Packings, coverings, and tilings are all ways of arranging uniform objects (typically circles, spheres, or tiles) in a regular way on a surface or manifold.
A sphere packing is an arrangement of non-overlapping spheres within a containing space. The spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. However, sphere packing problems can be generalised to consider unequal spheres, n-dimensional Euclidean space (where the problem becomes circle packing in two dimensions, or hypersphere packing in higher dimensions) or to non-Euclidean spaces such as hyperbolic space.
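For reference, the optimal densities in the classical cases have closed forms: π/√12 ≈ 0.9069 for circle packing in the plane (hexagonal packing) and π/√18 ≈ 0.7405 for sphere packing in three dimensions, the value established by the proof of the Kepler conjecture. A quick numerical sketch:

```python
from math import pi, sqrt

# Optimal packing densities: hexagonal packing in 2D; FCC/HCP in 3D
# (the latter proved optimal by Hales' resolution of the Kepler conjecture).
circle_density = pi / sqrt(12)   # ~0.9069: fraction of the plane covered
sphere_density = pi / sqrt(18)   # ~0.7405: fraction of space covered
print(f"2D: {circle_density:.4f}, 3D: {sphere_density:.4f}")
```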
A tessellation of a flat surface is the tiling of a plane using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellations can be generalized to higher dimensions.
Specific topics in this area include:
Circle packings
Sphere packings
Kepler conjecture
Quasicrystals
Aperiodic tilings
Periodic graph
Finite subdivision rules
Structural rigidity and flexibility
Structural rigidity is a combinatorial theory for predicting the flexibility of ensembles formed by rigid bodies connected by flexible linkages or hinges.
Topics in this area include:
Cauchy's theorem
Flexible polyhedra
Incidence structures
Incidence structures generalize planes (such as affine, projective, and Möbius planes) as can be seen from their axiomatic definitions. Incidence structures also generalize the higher-dimensional analogs and the finite structures are sometimes called finite geometries.
Formally, an incidence structure is a triple (P, L, I) where P is a set of "points", L is a set of "lines", and I ⊆ P × L is the incidence relation. The elements of I are called flags. If (p, l) ∈ I, we say that point p "lies on" line l.
Topics in this area include:
Configurations
Line arrangements
Hyperplane arrangements
Buildings
Oriented matroids
An oriented matroid is a mathematical structure that abstracts the properties of directed graphs and of arrangements of vectors in a vector space over an ordered field (particularly for partially ordered vector spaces). In comparison, an ordinary (i.e., non-oriented) matroid abstracts the dependence properties that are common both to graphs, which are not necessarily directed, and to arrangements of vectors over fields, which are not necessarily ordered.
Geometric graph theory
A geometric graph is a graph in which the vertices or edges are associated with geometric objects. Examples include Euclidean graphs, the 1-skeleton of a polyhedron or polytope, unit disk graphs, and visibility graphs.
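As a small illustration of one of the classes above, a unit disk graph joins two points exactly when their Euclidean distance is at most 1 (the sample coordinates below are arbitrary):

```python
from itertools import combinations
from math import dist

# Build a unit disk graph: vertices are points in the plane, and an edge
# joins two points whenever their Euclidean distance is at most 1.
points = [(0.0, 0.0), (0.8, 0.0), (0.8, 0.9), (3.0, 3.0)]
edges = [(i, j) for (i, p), (j, q) in combinations(enumerate(points), 2)
         if dist(p, q) <= 1.0]
print(edges)   # [(0, 1), (1, 2)] -- the far point (3, 3) is isolated
```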
Topics in this area include:
Graph drawing
Polyhedral graphs
Random geometric graphs
Voronoi diagrams and Delaunay triangulations
Simplicial complexes
A simplicial complex is a topological space of a certain kind, constructed by "gluing together" points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. See also random geometric complexes.
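The combinatorial counterpart mentioned above can be sketched directly: an abstract simplicial complex is a family of finite sets closed under taking nonempty subsets. The example faces below are arbitrary:

```python
from itertools import combinations

def is_simplicial_complex(faces):
    """Check downward closure: every nonempty subset of a face is a face."""
    fam = {frozenset(f) for f in faces}
    return all(frozenset(sub) in fam
               for f in fam
               for r in range(1, len(f))
               for sub in combinations(f, r))

# A filled triangle {a, b, c} together with all its edges and vertices.
triangle = [("a",), ("b",), ("c",),
            ("a", "b"), ("a", "c"), ("b", "c"),
            ("a", "b", "c")]
print(is_simplicial_complex(triangle))        # True
print(is_simplicial_complex([("a", "b")]))    # False: the vertices are missing
```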
Topological combinatorics
The discipline of combinatorial topology used combinatorial concepts in topology; in the early 20th century this turned into the field of algebraic topology.
In 1978, the situation was reversed – methods from algebraic topology were used to solve a problem in combinatorics – when László Lovász proved the Kneser conjecture, thus beginning the new study of topological combinatorics. Lovász's proof used the Borsuk-Ulam theorem and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems.
Topics in this area include:
Sperner's lemma
Regular maps
Lattices and discrete groups
A discrete group is a group G equipped with the discrete topology. With this topology, G becomes a topological group. A discrete subgroup of a topological group G is a subgroup H whose relative topology is the discrete one. For example, the integers, Z, form a discrete subgroup of the reals, R (with the standard metric topology), but the rational numbers, Q, do not.
A lattice in a locally compact topological group is a discrete subgroup with the property that the quotient space has finite invariant measure. In the special case of subgroups of Rn, this amounts to the usual geometric notion of a lattice, and both the algebraic structure of lattices and the geometry of the totality of all lattices are relatively well understood. Deep results of Borel, Harish-Chandra, Mostow, Tamagawa, M. S. Raghunathan, Margulis, and Zimmer, obtained from the 1950s through the 1970s, provided examples and generalized much of the theory to the setting of nilpotent Lie groups and semisimple algebraic groups over a local field. In the 1990s, Bass and Lubotzky initiated the study of tree lattices, which remains an active research area.
Topics in this area include:
Reflection groups
Triangle groups
Digital geometry
Digital geometry deals with discrete sets (usually discrete point sets) considered to be digitized models or images of objects of the 2D or 3D Euclidean space.
Simply put, digitizing is replacing an object by a discrete set of its points. The images we see on the TV screen, the raster display of a computer, or in newspapers are in fact digital images.
Its main application areas are computer graphics and image analysis.
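A minimal sketch of digitizing in this sense, replacing a disk with the set of integer grid points it contains (the radius is an arbitrary example value):

```python
# Digitize a disk of radius r centered at the origin: keep exactly the
# integer lattice points (x, y) with x**2 + y**2 <= r**2.
def digitize_disk(r):
    return {(x, y)
            for x in range(-r, r + 1)
            for y in range(-r, r + 1)
            if x * x + y * y <= r * r}

print(len(digitize_disk(2)))   # 13 grid points approximate the disk
```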
Discrete differential geometry
Discrete differential geometry is the study of discrete counterparts of notions in differential geometry. Instead of smooth curves and surfaces, there are polygons, meshes, and simplicial complexes. It is used in the study of computer graphics and topological combinatorics.
Topics in this area include:
Discrete Laplace operator
Discrete exterior calculus
Discrete calculus
Discrete Morse theory
Topological combinatorics
Spectral shape analysis
Analysis on fractals
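The discrete Laplace operator from the list above has, in its simplest graph form, the matrix L = D − A (degree matrix minus adjacency matrix). A minimal sketch on a three-vertex path graph (an arbitrary example):

```python
# Graph Laplacian L = D - A for the path graph 0 -- 1 -- 2.
edges = [(0, 1), (1, 2)]
n = 3
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]
Lap = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
       for i in range(n)]
print(Lap)   # [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
```

Every row of a graph Laplacian sums to zero, the discrete analogue of the Laplacian annihilating constant functions.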
See also
Discrete and Computational Geometry (journal)
Discrete mathematics
Paul Erdős
Notes
References
26 blocks scandal
https://en.wikipedia.org/wiki/26%20blocks%20scandal
The 26 blocks scandal was a construction scandal in British Hong Kong during the 1980s. A total of 577 public housing blocks were found to have structural problems; 26 of them were demolished due to the imminent risk of collapse.
Events
In March 1980, blocks 5 and 6 of Kwai Fong Estate, built only eight years prior, were found to suffer from concrete spalling. Investigations concluded that jerry-building had damaged the structure of the blocks, as the strength of the concrete was significantly below standard.
It was later revealed that, on 9 January 1982, the Independent Commission Against Corruption (ICAC) had been told that the Kwai Fong Estate was marred by structural issues, such as concrete spalling and water seepage from walls, with Block 6 the most serious.
In 1982, Block 6 underwent complete repair whilst occupants were relocated to the Tai Wo Hau Estate in the same Tsuen Wan District, costing HK$50 million. Considering the cost-ineffectiveness and that the issue was quite common at that time, the Housing Department announced in January 1985 that Block 5 would become the first government-built low-cost housing block to be demolished, marking the start of the scandal.
The Government announced on 21 November 1985 that structural problems had been found in a total of 577 blocks built between 1982 and 1984, which were to be repaired. 26 housing blocks and a school building were scheduled to be demolished as soon as possible due to the risk of collapse. The Extended Redevelopment Programme was launched in the same year to clear the sub-standard blocks. Tsuen Wan New Town was the most seriously affected, with a total of 11 blocks demolished, impacting around 78,000 residents.
The ICAC decided to launch a bribery investigation due to the scale of the scandal. The breakthrough in the probe came in 1987 after two criminals agreed to testify as witnesses. Three contractors along with seven current and former officials were charged with bribery. Two contractors were jailed for 33 months and for three months (suspended) respectively.
List of affected buildings
Demolished as soon as possible
Other affected buildings
Prosecution
Siu Hon-sum, then 62, owner of On Lee Siu Construction Limited, faced eight charges: bribing Lam Or-shum, a worker in the Works Branch, five times from February 1970 to December 1973 with a total of HK$50,000 when Ho Man Tin Estate was under construction, and bribing a surveyor in the Works Branch in December 1968 with HK$300 when Kwai Hing Estate was being built. Siu was jailed for 33 months and fined HK$325,000.
Ho Leung, then 70, former owner of Yeu Shing Construction Company Limited, was accused of bribing Lam six times from August 1966 to 1975 with a total of more than HK$45,000 during the construction of Ngau Tau Kok Estate and Lei Muk Shue Estate. Not charged by the ICAC due to health problems, Ho testified as a witness; he died in 1991.
Poon Pak-shing, former manager of Great Vast Construction Engineering Limited, faced charges over bribing Lam in 1965 and 1966 with HK$4,000 when building Upper Ngau Tau Kok Estate. Poon was handed a three-month jail sentence, suspended for a year, and fined HK$4,000.
Tam Wing-han, former deputy clerk of works in the Works Branch, was found not guilty of receiving bribes. Six government-employed workers, including four who had retired, were arrested but were not brought to court.
Aftermath
The authorities graded the problematic buildings into four risk levels. The 26 blocks found to be at imminent risk of collapse and far from the safety standard were demolished. Stabilisation works were carried out on some of the other 551 buildings. Nevertheless, the Executive Council decided in 1987 that all Resettlement Area and Low Cost Housing blocks were to be knocked down and rebuilt by 2001. The long-term housing strategy, named the Comprehensive Redevelopment Programme, was completed in 2010 upon the clearance of Lower Ngau Tau Kok (II) Estate.
References
External links
Report on the course, procedures and results of structural surveys of buildings under the Hong Kong Housing Authority; Housing Department, Hong Kong; 1986
Briefing on building structural surveys: status report as at November 1985; Housing Department, Hong Kong; 1986
1980s crimes in Hong Kong
Engineering failures
Independent Commission Against Corruption (Hong Kong)
Scandals in Hong Kong
Rehabilitation robotics
https://en.wikipedia.org/wiki/Rehabilitation%20robotics
Rehabilitation robotics is a field of research dedicated to understanding and augmenting rehabilitation through the application of robotic devices. Rehabilitation robotics includes the development of robotic devices tailored to assisting different sensorimotor functions (e.g. arm, hand, leg, ankle), the development of different schemes of assisting therapeutic training, and the assessment of the sensorimotor performance (ability to move) of the patient; here, robots are used mainly as therapy aids rather than as assistive devices. Rehabilitation using robotics is generally well tolerated by patients, and has been found to be an effective adjunct to therapy in individuals with motor impairments, especially due to stroke.
Overview
Rehabilitation robotics can be considered a specific focus of biomedical engineering, and a part of human-robot interaction. In this field, clinicians, therapists, and engineers collaborate to help rehabilitate patients.
Prominent goals in the field include: developing implementable technologies that can be easily used by patients, therapists, and clinicians; enhancing the efficacy of clinician's therapies; and increasing the ease of activities in the daily lives of patients.
History
The International Conference on Rehabilitation Robotics is held every two years; the first conference took place in 1989, and the most recent was held in June 2019 in Toronto, as part of RehabWeek. Rehabilitation robotics was introduced about two decades ago for patients with neurological disorders, and the most common users of rehabilitation robots are disabled people and therapists. The earliest rehabilitation robots were intended not as recovery aids but as tools to help people recognize objects through touch and to assist people with nervous-system disorders. Rehabilitation robots are now used in the recuperation of disabled patients, assisting with standing up, balance, and gait. Because these robots must keep pace with a human's movement, their designers need to ensure that the machine remains consistent with the patient's progress. Much rigorous work goes into the design, because the robot will work with people who have disabilities and may not be able to react quickly if something goes wrong.
Function
Rehabilitation robots are designed with techniques that determine the adaptability level of the patient. Techniques include, but are not limited to, active assisted exercise, active constrained exercise, active resistive exercise, passive exercise, and adaptive exercise. In active assisted exercise, the patient moves his or her hand along a predetermined pathway without any force pushing against it. In active constrained exercise, an opposing force is applied to the patient's arm if it tries to move outside the prescribed path. Active resistive exercise is movement against an opposing force.
Over the years the number of rehabilitation robots has grown, but their adoption remains limited by clinical trials; many clinics run trials but do not accept the robots, in part because clinicians want them to be remotely controllable. Involving robots in a patient's rehabilitation has several advantages. One is that a process or exercise can be repeated as many times as desired. Another is that exact measurements of the patient's improvement or decline can be taken through the device's sensors, although care is needed, since a measurement can be disrupted by the movements the patient makes when finishing and getting out of the device. A rehabilitation robot can also apply consistent therapy for long periods, and no physical effort is required from the therapist. Its main limitation is that, unlike an experienced therapist, the robot cannot yet understand the patient's needs during recovery, though future devices may be able to.
Lately, rehabilitation robotics has also been applied to medical training, surgery, remote surgery, and other areas, though complaints persist about robots that cannot be remotely controlled. Using an industrial robot as a rehabilitation robot is not straightforward: rehabilitation robots must be adjustable and programmable, because the same robot may be used for multiple purposes, whereas an industrial robot performs a fixed task and needs changing only when the product it works with becomes bigger or smaller. To serve as a rehabilitation robot, an industrial robot would have to be made far more adaptable to its new task.
Reasons to use this device
The number of disabled people in Spain has risen due to an aging population, and with it the demand for assistance. Rehabilitation robots are popular in Spain because their cost is acceptable and many people there suffer strokes and need assistance afterward. They are especially popular with stroke patients because the proprioceptive neuromuscular facilitation method can be applied. A stroke often damages the nervous system, in most cases leaving people with a disability for six months afterward. A robot can carry out the exercises a therapist would, as well as some exercises that are not easy for a human being to perform. Pneumatic robots help people whose stroke or other illness has caused a disorder of the upper limb.
A 2018 review on the effectiveness of mirror therapy by virtual reality and robotics for any type of pathology concluded that: 1) Much of the research on second-generation mirror therapy is of very low quality; 2) Evidence-based rationale to conduct such studies is missing; 3) It is not relevant to recommend investment by rehabilitation professionals and institutions in such devices.
Types of robots
There are primarily two types of robots that can be used for rehabilitation: end-effector based robots and powered exoskeletons. Each system has its own advantages and limitations. End-effector systems are faster to set up and are more adaptable. On the other hand, exoskeletons offer more precise joint isolation and improve gait transparency.
Current areas of research
Current robotic devices include exoskeletons for aiding limb or hand movement, enhanced treadmills, robotic arms to retrain motor movement of the limb, and finger rehabilitation devices. Some devices are meant to aid strength development of specific motor movements, while others seek to aid these movements directly. Often robotic technologies attempt to leverage the principles of neuroplasticity by improving quality of movement, and increasing the intensity and repetition of the task. Over the last two decades, research into robot mediated therapy for the rehabilitation of stroke patients has grown significantly as the potential for cheaper and more effective therapy has been identified. Though stroke has been the focus of most studies due to its prevalence in North America, rehabilitation robotics can also be applied to individuals (including children) with cerebral palsy, or those recovering from orthopaedic surgery.
An additional benefit to this type of adaptive robotic therapy is a marked decrease in spasticity and muscle tone in the affected arm. Different spatial orientations of the robot allow for horizontal or vertical motion, or a combination in a variety of planes. The vertical, anti-gravity setting is particularly useful for improving shoulder and elbow function.
See also
Hybrid Assistive Limb
Rehabilitation engineering
Robotics
Prosthetics
References
Further reading
Gimigliano F, Palomba A, Arienti C, et al. Robot-assisted arm therapy in neurological health conditions: rationale and methodology for the evidence synthesis in the CICERONE Italian Consensus Conference. Eur J Phys Rehabil Med. 2021 Jun 15. doi: 10.23736/S1973-9087.21.07011-8. Epub ahead of print. PMID 34128606.
External links
International Conference for Rehabilitation Robotics http://icorr2019.org/
Journal of NeuroEngineering and Rehabilitation: http://www.jneuroengrehab.com/
IEEE Robotics and Automation Society special issue on rehabilitation robotics: https://web.archive.org/web/20121022224415/http://www.ieee-ras.org/issue/rehabilitation-robotics.html
IEEE RAS Technical Committee on Rehabilitation & Assistive Robotics.: https://web.archive.org/web/20101204064448/http://tab.ieee-ras.org/committeeinfo.php?tcid=18
Assistive technology
Medical robotics
Signal (software)
https://en.wikipedia.org/wiki/Signal%20%28software%29
Signal is an open-source, encrypted messaging service for instant messaging, voice calls, and video calls. The instant messaging function includes sending text, voice notes, images, videos, and other files. Communication may be one-to-one between users or may involve group messaging.
The application uses a centralized computing architecture and is cross-platform software. It is developed by the non-profit Signal Foundation and its subsidiary Signal Messenger LLC. Signal's software is free and open-source; its mobile clients, desktop client, and server are all published under the AGPL-3.0-only license. The official Android app generally uses the proprietary Google Play Services, although it is designed to work without them. Signal is also distributed for iOS, with desktop programs for Windows, macOS, and Linux. Registration for desktop use requires an iOS or Android device.
Signal uses mobile telephone numbers to register and manage user accounts, though configurable usernames were added in March 2024 to allow users to hide their phone numbers from other users. After removing support for SMS on Android in 2023, the app now secures all communications with end-to-end encryption. The client software includes mechanisms by which users can independently verify the identity of their contacts and the integrity of the data channel.
The non-profit Signal Foundation was launched in February 2018 with initial funding of $50 million from WhatsApp co-founder Brian Acton. The platform has had approximately 40 million monthly active users, and the app has been downloaded more than 105 million times.
History
2010–2013: Origins
Signal is the successor of the RedPhone encrypted voice calling app and the TextSecure encrypted texting program. The beta versions of RedPhone and TextSecure were first launched in May 2010 by Whisper Systems, a startup company co-founded by security researcher Moxie Marlinspike and roboticist Stuart Anderson. Whisper Systems also produced a firewall and tools for encrypting other forms of data. All of these were proprietary enterprise mobile security software and were only available for Android.
In November 2011, Whisper Systems announced that it had been acquired by Twitter. Neither company disclosed the financial terms of the deal. The acquisition was done "primarily so that Mr. Marlinspike could help the then-startup improve its security". Shortly after the acquisition, Whisper Systems' RedPhone service was made unavailable. Some criticized the removal, arguing that the software was "specifically targeted [to help] people under repressive regimes" and that it left people like the Egyptians in "a dangerous position" during the events of the Egyptian revolution of 2011.
Twitter released TextSecure as free and open-source software under the GPLv3 license in December 2011. RedPhone was also released under the same license in July 2012. Marlinspike later left Twitter and founded Open Whisper Systems as a collaborative Open Source project for the continued development of TextSecure and RedPhone.
2013–2018: Open Whisper Systems
Open Whisper Systems' website was launched in January 2013.
In February 2014, Open Whisper Systems introduced the second version of their TextSecure Protocol (now Signal Protocol), which added end-to-end encrypted group chat and instant messaging capabilities to TextSecure. Toward the end of July 2014, they announced plans to merge the RedPhone and TextSecure applications as Signal. This announcement coincided with the initial release of Signal as a RedPhone counterpart for iOS. The developers said that their next steps would be to provide TextSecure instant messaging capabilities for iOS, unify the RedPhone and TextSecure applications on Android, and launch a web client. Signal was the first iOS app to enable end-to-end encrypted voice calls for free. TextSecure compatibility was added to the iOS application in March 2015.
From its launch in May 2010 until March 2015, the Android version of Signal (then called TextSecure) included support for encrypted SMS/MMS messaging. From version 2.7.0 onward, the Android application only supported sending and receiving encrypted messages via the data channel. Reasons for this included security flaws of SMS/MMS and problems with the key exchange. Open Whisper Systems' abandonment of SMS/MMS encryption prompted some users to create a fork named Silence (initially called SMSSecure) that is meant solely for the exchange of encrypted SMS and MMS messages.
In November 2015, the TextSecure and RedPhone applications on Android were merged to become Signal for Android. A month later, Open Whisper Systems announced Signal Desktop, a Chrome app that could link with a Signal mobile client. At launch, the app could only be linked with the Android version of Signal. On 26 September 2016, Open Whisper Systems announced that Signal Desktop could now be linked with the iOS version of Signal as well. On 31 October 2017, Open Whisper Systems announced that the Chrome app was deprecated. At the same time, they announced the release of a standalone desktop client (based on the Electron framework) for Windows, macOS and certain Linux distributions.
On 4 October 2016, the American Civil Liberties Union (ACLU) and Open Whisper Systems published a series of documents revealing that OWS had received a subpoena requiring them to provide information associated with two phone numbers for a federal grand jury investigation in the first half of 2016. Only one of the two phone numbers was registered on Signal, and because of how the service is designed, OWS was only able to provide "the time the user's account had been created and the last time it had connected to the service". Along with the subpoena, OWS received a gag order requiring OWS not to tell anyone about the subpoena for one year. OWS approached the ACLU, and they were able to lift part of the gag order after challenging it in court. OWS said it was the first time they had received a subpoena, and that they were "committed to treating any future requests the same way".
In March 2017, Open Whisper Systems transitioned Signal's calling system from RedPhone to WebRTC, also adding the ability to make video calls with the mobile apps.
Since 2018: Signal Technology Foundation
On 21 February 2018, Moxie Marlinspike and WhatsApp co-founder Brian Acton announced the formation of the Signal Technology Foundation, a 501(c)(3) nonprofit organization whose mission is "to support, accelerate, and broaden Signal's mission of making private communication accessible and ubiquitous". Acton started the foundation with $50 million in funding and became the foundation's executive chairman after leaving WhatsApp's parent company Facebook in September 2017. Marlinspike continued as Signal Messenger's first CEO. As a nonprofit, Signal has run entirely on donations.
Between November 2019 and February 2020, Signal added iPad support, view-once images and videos, stickers, and reactions. They also announced plans for a new group messaging system and an "experimental method for storing encrypted contacts in the cloud."
Signal was reportedly popularized in the United States during the George Floyd protests. Heightened awareness of police monitoring led protesters to use the platform to communicate. Black Lives Matter organizers had used the platform "for several years". During the first week of June, the encrypted messaging app was downloaded over five times more than it had been during the week prior to the murder of George Floyd. In June 2020, Signal Foundation announced a new feature that enables users to blur faces in photos, in response to increased federal efforts to monitor protesters.
On 7 January 2021, Signal saw a surge in new user registrations, which temporarily overwhelmed Signal's capacity to deliver account verification messages. CNN and MacRumors linked the surge with a WhatsApp privacy policy change and a Signal endorsement by Elon Musk and Edward Snowden via Twitter. The surge was also tied to the attack on the United States Capitol. International newspapers reported similar trends in the United Arab Emirates. Reuters reported that more than 100,000 people had installed Signal between 7 and 8 January.
Between 12 and 14 January 2021, the number of Signal installations listed on Google Play increased from over 10 million to over 50 million.
On 15 January 2021, due to the surge of new users, Signal was overwhelmed with the new traffic and was down for all users. On the afternoon of 16 January, Signal announced via Twitter that service had been restored.
On 10 January 2022, Moxie Marlinspike announced that he was stepping down as CEO of Signal Messenger. He remains on the Signal Foundation's board of directors, and Brian Acton volunteered to serve as interim CEO during the search for a new CEO.
In August 2022, Signal notified 1,900 users that their data had been affected by the Twilio breach, including user phone numbers and SMS verification codes. At least one journalist had his account re-registered to a device he did not control as a result of the attack.
In September 2022, Signal Messenger LLC announced that AI researcher and prominent big-tech critic Meredith Whittaker would fill the newly created position of president.
Usage
Signal's user base dates to May 2010, when its predecessor TextSecure was launched by Whisper Systems. According to App Annie, Signal had approximately 20 million monthly active users at the end of December 2020. In January 2022, the BBC reported that Signal was used by over 40 million people.
Developers and funding
The development of Signal and its predecessors at Open Whisper Systems was funded by a combination of consulting contracts, donations and grants. The Freedom of the Press Foundation acted as Signal's fiscal sponsor. Between 2013 and 2016, the project received grants from the Knight Foundation, the Shuttleworth Foundation, and almost $3 million from the US government–sponsored Open Technology Fund. Signal is now developed by Signal Messenger LLC, a software company founded by Moxie Marlinspike and Brian Acton in 2018, which is wholly owned by a tax-exempt nonprofit corporation called the Signal Technology Foundation, also created by them in 2018. The Foundation was funded with an initial loan of $50 million from Acton, "to support, accelerate, and broaden Signal's mission of making private communication accessible and ubiquitous". All of the organization's products are published as free and open-source software.
In November 2023, Meredith Whittaker revealed that she expected the annual cost of running Signal to reach $50 million in 2025, with the current cost estimated at around $40 million.
Features
Signal provides one-to-one and group voice and video calls with up to forty participants on iOS, Android, and desktop platforms. Calls are carried over the devices' wired or wireless (carrier or Wi-Fi) data connections. The application can send text messages, document files, voice notes, pictures, stickers, GIFs, and video messages. The platform also supports group messaging.
All communication sessions between Signal users are automatically end-to-end encrypted (the encryption keys are generated and stored on the devices, and not on servers). To verify that a correspondent is really the person that they claim to be, Signal users can compare key fingerprints (or scan QR codes) out-of-band. The platform employs a trust-on-first-use mechanism to notify the user if a correspondent's key changes.
Until 2023, Android users could opt into making Signal the default SMS/MMS application, allowing them to send and receive unencrypted SMS messages in addition to the standard end-to-end encrypted Signal messages. Users could then use the same application to communicate with contacts who did not have Signal. The feature was deprecated in October 2022 due to safety and security concerns and removed in 2023.
TextSecure allowed the user to set a passphrase that encrypted the local message database and the user's encryption keys. This did not encrypt the user's contact database or message timestamps. The Signal applications on Android and iOS can be locked with the phone's PIN, passphrase, or biometric authentication. The user can define a "screen lock timeout" interval, after which Signal re-encrypts the messages, providing an additional protection mechanism in case the phone is lost or stolen.
Signal has a feature for scheduling messages. In addition, timers may be attached to messages to automatically delete the messages from both the sender's and the receivers' devices. The time period for keeping the message may be between five seconds and one week, and begins for each recipient once they have read their copy of the message. The developers stressed that this is meant to be "a collaborative feature for conversations where all participants want to automate minimal data hygiene, not for situations where the recipient is an adversary".
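The per-recipient countdown described above can be modeled minimally. This is an illustrative model only, not Signal's implementation; the class and method names are made up for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class DisappearingMessage:
    """Illustrative: the deletion timer runs per recipient and starts
    only when that recipient has read their copy of the message."""
    ttl_seconds: int
    read_at: dict = field(default_factory=dict)  # recipient -> read time

    def mark_read(self, who: str, now: float) -> None:
        # The first read time sticks; re-reads don't restart the timer.
        self.read_at.setdefault(who, now)

    def expired_for(self, who: str, now: float) -> bool:
        t = self.read_at.get(who)
        return t is not None and now - t >= self.ttl_seconds

m = DisappearingMessage(ttl_seconds=300)
m.mark_read("bob", now=1000.0)
assert not m.expired_for("bob", now=1100.0)    # 100 s elapsed: kept
assert m.expired_for("bob", now=1300.0)        # 300 s elapsed: deleted
assert not m.expired_for("carol", now=9999.0)  # carol hasn't read it yet
```

Each recipient's copy is deleted independently, which matches the description of the timer starting "for each recipient once they have read their copy of the message".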
Signal's app icon can be changed to a variety of colour themes, for customization or to disguise the app. The application name can also be customized. Messages can include effects such as spoilers and italics, and users can add each other via QR code.
Signal excludes users' messages from non-encrypted cloud backups by default.
Signal allows users to automatically blur faces of people in photos to protect identities.
Signal includes cryptocurrency wallet functionality for storing, sending and receiving in-app payments. The feature was enabled globally in November 2021, apart from certain regions and countries. The only supported payment method is MobileCoin.
In February 2024, Signal added a username feature to the beta version of the app. This is a privacy feature that allows users to communicate with others without having to share their telephone number.
Limitations
Signal requires that the user provide a telephone number for verification, eliminating the need for user names or passwords and facilitating contact discovery (see below). The number does not have to be the same as on the device's SIM card; it can also be a VoIP number or a landline as long as the user can receive the verification code and has a separate device to set up the software. A number can only be registered on one mobile device at a time. Account registration requires an iOS or Android device.
This mandatory connection to a telephone number (a feature Signal shares with WhatsApp, KakaoTalk, and others) has been criticized as a "major issue" for privacy-conscious users who are not comfortable with giving out their private number. A workaround is to use a secondary phone number. The ability to choose a public, changeable username instead of sharing one's phone number was a widely requested feature. This feature was added to the beta version of Signal in February 2024.
Using phone numbers as identifiers may also create security risks that arise from the possibility of an attacker taking over a phone number. A similar vulnerability was used to attack at least one user in August 2022, though the attack was performed via the provider of Signal's SMS services, not any user's provider. The threat of this attack can be mitigated by enabling Signal's Registration Lock feature, a form of two-factor authentication that requires the user to enter a PIN to register the phone number on a new device.
When linking Signal Desktop to a mobile device, the conversation history is not synced; only new messages are shown on Signal Desktop.
Usability
In July 2016, the Internet Society published a user study that assessed the ability of Signal users to detect and deter man-in-the-middle attacks. The study concluded that 21 out of 28 participants failed to correctly compare public key fingerprints in order to verify the identity of other Signal users, and that most of these users believed they had succeeded, while they had actually failed. Four months later, Signal's user interface was updated to make verifying the identity of other Signal users simpler.
In 2023, the French government pushed for the adoption of Olvid, a European encrypted messaging alternative to Signal and WhatsApp, as its secure platform for communications.
Architecture
Encryption protocols
Signal messages are encrypted with the Signal Protocol (formerly known as the TextSecure Protocol). The protocol combines the Double Ratchet Algorithm, prekeys, and an Extended Triple Diffie–Hellman (X3DH) handshake. It uses Curve25519, AES-256, and HMAC-SHA256 as primitives. The protocol provides confidentiality, integrity, authentication, participant consistency, destination validation, forward secrecy, backward secrecy (also known as future secrecy), causality preservation, message unlinkability, message repudiation, participation repudiation, and asynchronicity. It does not provide anonymity preservation, and requires servers for the relaying of messages and storing of public key material.
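As an illustration of how the symmetric half of the Double Ratchet uses these primitives, the chain key can be advanced with HMAC-SHA256 so that every message gets a fresh key. This is a sketch only; the single-byte labels below are illustrative constants, and a real implementation should follow the published specification:

```python
import hashlib
import hmac

def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One symmetric-ratchet step: derive a per-message key and the
    next chain key from the current chain key (illustrative labels)."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# A hypothetical initial chain key, e.g. derived from the X3DH handshake:
ck = hashlib.sha256(b"shared secret from handshake").digest()
mk1, ck = kdf_chain_step(ck)
mk2, ck = kdf_chain_step(ck)
assert mk1 != mk2  # each message is encrypted under a fresh key
```

Because each chain key is overwritten by a one-way function of itself, compromising the current state does not reveal earlier message keys, which is the forward-secrecy property listed above.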
The Signal Protocol also supports end-to-end encrypted group chats. The group chat protocol is a combination of a pairwise double ratchet and multicast encryption. In addition to the properties provided by the one-to-one protocol, the group chat protocol provides speaker consistency, out-of-order resilience, dropped message resilience, computational equality, trust equality, subgroup messaging, as well as contractible and expandable membership.
In October 2014, researchers from Ruhr University Bochum (RUB) published an analysis of the Signal Protocol. Among other findings, they presented an unknown key-share attack on the protocol, but in general, they found that it was secure. In October 2016, researchers from the UK's University of Oxford, Queensland University of Technology in Australia, and Canada's McMaster University published a formal analysis of the protocol. They concluded that the protocol was cryptographically sound. In July 2017, researchers from RUB, during another analysis of group messengers, found a purely theoretical attack against Signal's group protocol: a user who knows the secret group ID of a group (due to having been a group member previously or having stolen it from a member's device) can become a member of the group. Since the group ID cannot be guessed and such member changes are displayed to the remaining members, this attack is likely to be difficult to carry out without being detected.
The Signal Protocol has been implemented in WhatsApp, Facebook Messenger, Skype, and Google Allo, making it possible for the conversations of "more than a billion people worldwide" to be end-to-end encrypted. In Google Allo, Skype and Facebook Messenger, conversations are not encrypted with the Signal Protocol by default; they only offer end-to-end encryption in an optional mode.
Up until March 2017, Signal's voice calls were encrypted with SRTP and the ZRTP key-agreement protocol, which was developed by Phil Zimmermann. In March 2017, Signal transitioned to a new WebRTC-based calling system that introduced the ability to make video calls. Signal's voice and video calling functionalities use the Signal Protocol channel for authentication instead of ZRTP.
Authentication
To verify that a correspondent is really the person that they claim to be, Signal users can compare key fingerprints (or scan QR codes) out-of-band. The platform employs a trust on first use mechanism in order to notify the user if a correspondent's key changes.
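As an illustrative sketch of this out-of-band verification (Signal's actual safety-number derivation is its own versioned scheme; the function below is a hypothetical stand-in), both parties can derive the same short string from the pair of public keys and compare it over another channel:

```python
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    """Illustrative only: derive a short, order-independent string from
    two public keys so both parties compute the same value to compare.
    (Signal's real safety numbers use a different derivation.)"""
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).hexdigest()
    # Group into short chunks for easier reading aloud or QR comparison.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

alice_view = fingerprint(b"alice-public-key", b"bob-public-key")
bob_view = fingerprint(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # both users see the same string to compare
```

If a man-in-the-middle substitutes either key, the two parties compute different strings, which is what the out-of-band comparison is designed to catch.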
Local storage
After receiving and decrypting messages, the application stores them locally on each device in a SQLite database that is encrypted with SQLCipher. The cryptographic key for this database is also stored locally and can be accessed if the device is unlocked. In December 2020, Cellebrite published a blog post announcing that one of their products could now access this key and use it to "decrypt the Signal app". Technology reporters later published articles about how Cellebrite had claimed to have the ability to "break into the Signal app" and "crack Signal's encryption". This latter interpretation was rejected by several experts, as well as representatives from Signal, who said the original post by Cellebrite had been about accessing data on "an unlocked Android phone in their physical possession" and that they "could have just opened the app to look at the messages". Similar extraction tools also exist for iOS devices and Signal Desktop.
Servers
Signal relies on centralized servers that are maintained by Signal Messenger. In addition to routing Signal's messages, the servers also facilitate the discovery of contacts who are also registered Signal users and the automatic exchange of users' public keys. By default, Signal's voice and video calls are peer-to-peer. If the caller is not in the receiver's address book, the call is routed through a server in order to hide the users' IP addresses.
Contact discovery
The servers store registered users' phone numbers, public key material and push tokens which are necessary for setting up calls and transmitting messages. In order to determine which contacts are also Signal users, cryptographic hashes of the user's contact numbers are periodically transmitted to the server. The server then checks to see if those match any of the SHA256 hashes of registered users and tells the client if any matches are found. The hashed numbers are thereafter discarded from the server. In 2014, Moxie Marlinspike wrote that it is easy to calculate a map of all possible hash inputs to hash outputs and reverse the mapping because of the limited preimage space (the set of all possible hash inputs) of phone numbers, and that a "practical privacy preserving contact discovery remains an unsolved problem." In September 2017, Signal's developers announced that they were working on a way for the Signal client applications to "efficiently and scalably determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service."
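The limited preimage space Marlinspike describes can be demonstrated directly: because the set of plausible phone numbers is small, anyone can precompute the hash of every possible number and invert the mapping. In the sketch below, toy four-digit "phone numbers" stand in for the real number space:

```python
import hashlib

def h(phone: str) -> str:
    """Hash a phone number the way a naive contact-discovery upload might."""
    return hashlib.sha256(phone.encode()).hexdigest()

# A client uploads the hash rather than the number itself:
uploaded = h("5309")

# But a server (or attacker) can precompute the entire mapping, because
# the space of possible numbers is tiny compared to a hash's range:
rainbow = {h(f"{n:04d}"): f"{n:04d}" for n in range(10_000)}
assert rainbow[uploaded] == "5309"  # the "hashed" number is recovered
```

Real phone-number spaces are larger but still easily enumerable by modern hardware, which is why hashing alone does not make contact discovery privacy-preserving.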
Metadata
All client-server communications are protected by TLS. Signal's developers have asserted that their servers do not keep logs about who called whom and when. In June 2016, Marlinspike told The Intercept that "the closest piece of information to metadata that the Signal server stores is the last time each user connected to the server, and the precision of this information is reduced to the day, rather than the hour, minute, and second".
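The day-level precision Marlinspike describes amounts to discarding everything below the date before storing the record. A minimal sketch of that truncation:

```python
from datetime import datetime, timezone

def last_seen_day(ts: datetime) -> str:
    """Illustrative: reduce a connection timestamp to day precision,
    as described for Signal's stored last-connection record."""
    return ts.date().isoformat()

t = datetime(2016, 6, 7, 14, 33, 12, tzinfo=timezone.utc)
assert last_seen_day(t) == "2016-06-07"  # hour, minute, second discarded
```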
The group messaging mechanism is designed so that the servers do not have access to the membership list, group title, or group icon. Instead, the creation, updating, joining, and leaving of groups is done by the clients, which deliver pairwise messages to the participants in the same way that one-to-one messages are delivered.
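A minimal sketch of this pairwise fan-out follows. The cipher and session objects are toys invented for the illustration; the point is only that a group message becomes one separately encrypted message per member, so the server never needs a group key or membership list:

```python
from dataclasses import dataclass

@dataclass
class ToySession:
    """Stand-in for a pairwise encrypted session (illustrative only)."""
    key: int

    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self.key for b in data)  # toy XOR, NOT secure

def send_to_group(plaintext: bytes, sessions: dict) -> dict:
    """Deliver one pairwise ciphertext per member, exactly as one-to-one
    messages are delivered; no shared group key exists on the server."""
    return {member: s.encrypt(plaintext) for member, s in sessions.items()}

sessions = {"bob": ToySession(0x21), "carol": ToySession(0x5A)}
out = send_to_group(b"hi all", sessions)
assert out["bob"] != out["carol"]  # each member gets a distinct ciphertext
```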
Federation
Signal's server architecture was federated between December 2013 and February 2016. In December 2013, it was announced that the messaging protocol Signal uses had successfully been integrated into the Android-based open-source operating system CyanogenMod. Since CyanogenMod 11.0, the client logic was contained in a system app called WhisperPush. According to Signal's developers, the Cyanogen team ran their own Signal messaging server for WhisperPush clients, which federated with the main server, so that both clients could exchange messages with each other. The WhisperPush source code was available under the GPLv3 license. In February 2016, the CyanogenMod team discontinued WhisperPush and recommended that its users switch to Signal. In May 2016, Moxie Marlinspike wrote that federation with the CyanogenMod servers had degraded the user experience and held back development, and that their servers will probably not federate with other servers again.
In May 2016, Moxie Marlinspike requested that a third-party client called LibreSignal not use the Signal service or the Signal name. As a result, on 24 May 2016 the LibreSignal project posted that the project was "abandoned". The functionality provided by LibreSignal was subsequently incorporated into Signal by Marlinspike.
Licensing
The complete source code of the Signal clients for Android, iOS and desktop is available on GitHub under a free software license. This enables interested parties to examine the code and help the developers verify that everything is behaving as expected. It also allows advanced users to compile their own copies of the applications and compare them with the versions that are distributed by Signal Messenger. In March 2016, Moxie Marlinspike wrote that, apart from some shared libraries that are not compiled with the project build due to a lack of Gradle NDK support, Signal for Android is reproducible. Signal's servers are partially open source, but the server software's anti-spam component is proprietary and closed source due to security concerns.
Reception
Security
In October 2014, the Electronic Frontier Foundation (EFF) included Signal in their updated surveillance self-defense guide. In November 2014, Signal received a perfect score on the EFF's secure messaging scorecard; it received points for having communications encrypted in transit, having communications encrypted with keys the provider does not have access to (end-to-end encryption), making it possible for users to independently verify their correspondents' identities, having past communications secure if the keys are stolen (forward secrecy), having the code open to independent review (open source), having the security designs well-documented, and having a recent independent security audit. At the time, "ChatSecure + Orbot", Pidgin (with OTR), Silent Phone, and Telegram's optional "secret chats" also received seven out of seven points on the scorecard.
Former NSA contractor Edward Snowden has endorsed Signal on multiple occasions. In his keynote speech at SXSW in March 2014, he praised Signal's predecessors (TextSecure and RedPhone) for their ease of use. In December 2014, leaked slides from an internal NSA presentation dating to June 2012 showed that the NSA deemed Signal's encrypted voice calling component (RedPhone) on its own a "major threat" to its mission of accessing users' private data; used in conjunction with other privacy tools such as Cspace, Tor, Tails, and TrueCrypt, it was ranked as "catastrophic", leading to a "near-total loss/lack of insight to target communications [and] presence".
Following the 2016 Democratic National Committee email leak, it was reported by Vanity Fair that Marc Elias (the general counsel for Hillary Clinton's presidential campaign) had instructed DNC staffers to exclusively use Signal when saying anything negative about Republican presidential nominee Donald Trump.
In March 2017, Signal was approved by the sergeant at arms of the U.S. Senate for use by senators and their staff.
On 27 September 2019, Natalie Silvanovich, a security engineer on Google's Project Zero vulnerability research team, disclosed how a bug in the Android Signal client could let an attacker spy on a user without their knowledge. The bug allowed an attacker to phone a target device, mute the call, and have the call complete, keeping the audio open without the owner being aware (though the owner would still notice the ring or vibration of the initial call). The bug was fixed the same day it was reported and patched in release 4.47.7 of the app for Android.
In February 2020, the European Commission recommended that its staff use Signal. Following the George Floyd protests, which began in May 2020, Signal was downloaded 121,000 times in the U.S. between 25 May and 4 June. In July 2020, Signal became the most downloaded app in Hong Kong on both the Apple App Store and the Google Play Store after the passage of the Hong Kong national security law.
Signal is a contact method for securely providing tips to major news outlets such as The Washington Post, The Guardian, The New York Times, and The Wall Street Journal.
Candiru claims that its spyware can capture data from Signal Private Messenger, at a fee of €500,000.
On 9 August 2022, Ismail Sabri Yaakob, the Prime Minister of Malaysia, reported that his Signal account was "hacked" and infiltrated by a third party, sending out messages and impersonating the politician. No details were disclosed regarding the method used to gain access to the account.
In-app payments
In April 2021, Signal announced the addition of a cryptocurrency wallet feature that would allow users to send and receive payments in MobileCoin. This received criticism from security expert Bruce Schneier, who had previously praised the software. Schneier stated that this would bloat the client and attract unwanted attention from the authorities. The wallet functionality was initially only available in certain countries, but was later enabled globally in November 2021.
Blocking
In December 2016, Egypt blocked access to Signal. In response, Signal's developers added domain fronting to their service. This allows Signal users in a specific country to circumvent censorship by making it look like they are connecting to a different internet-based service. Signal's domain fronting is enabled by default in Egypt, the UAE, Oman, Qatar, Iran, Cuba, Uzbekistan and Ukraine.
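Conceptually, domain fronting splits the destination between the TLS layer and the HTTP layer: the censor sees only an unblocked "front" host in the TLS handshake, while the real service is named in the Host header, which travels encrypted. The sketch below uses hypothetical domain names and only builds the inner request; it is a sketch of the general technique, not Signal's actual implementation:

```python
FRONT = "allowed-cdn.example"      # visible to the censor (TLS SNI)
HIDDEN = "signal-service.example"  # only visible after TLS decryption

def build_fronted_request(path: str = "/v1/messages") -> bytes:
    """Return the raw HTTP request that would travel *inside* a TLS
    connection opened to FRONT. An observer of the connection sees only
    FRONT; the Host header naming HIDDEN is encrypted in transit."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()

req = build_fronted_request()
assert b"signal-service.example" in req   # real destination is inside
assert b"allowed-cdn.example" not in req  # front appears only at TLS layer
```

The CDN terminates TLS for the front domain and then routes on the inner Host header, which is why the technique only works while the CDN permits the two names to differ.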
Signal has also been blocked in Iran. Its domain fronting feature relies on the Google App Engine (GAE) service, which does not work in Iran because Google has blocked Iranian access to GAE to comply with U.S. sanctions.
In early 2018, Google App Engine made an internal change to stop domain fronting for all countries. Due to this issue, Signal made a public change to use Amazon CloudFront for domain fronting. However, AWS also announced that they would be making changes to their service to prevent domain fronting. As a result, Signal said that they would start investigating new approaches. Signal switched from AWS back to Google in April 2019.
In January 2021, Iran removed the app from app stores, and blocked Signal. Signal was later blocked by China in March 2021, followed by its removal from the App Store in China on 19 April 2024.
On August 9, 2024, Signal was blocked in Russia. Roskomnadzor claimed that this was due to "violations of the law on combating terrorism and extremism". Around the same time, Signal was also blocked in Venezuela following the contested 2024 presidential election and subsequent protests.
Audience
Use by activists
In 2020, the app was used for coordination and communication by protesters during the George Floyd protests as they relied on the app's end-to-end encryption to share information securely.
In March 2021, the United Nations recommended Myanmar residents use Signal and Proton Mail to pass and preserve evidence of human rights violations committed during the 2021 coup.
Controversial use
Signal's terms of service states that the product may not be used to violate the law. According to a former employee, Signal's leadership at the time told him they would say something "if and when people start abusing Signal or doing things that we think are terrible". In January 2021, the position of Signal's leadership was to take a "hands-off approach to moderation" as the company's employees are not able to read user messages and the Signal Foundation does not "want to be a media company".
In 2016, authorities in India arrested members of a suspected ISIS-affiliated terrorist cell that communicated via Signal.
Radical right-wing militias and white nationalists use Signal for organizing their actions, including the Unite the Right II rally in 2018.
Turkey has justified criminalizing the app for the general population with the claim that Signal is used to fund terrorist or criminal activities, which Abdullah Bozkurt describes as a way the "government abuses its counterterrorism laws to punish critics, opponents and dissidents."
See also
Comparison of cross-platform instant messaging clients
Comparison of VoIP software
Internet privacy
List of video telecommunication services and product brands
Secure communication
Notes
References
Bibliography
External links
2014 software
Cross-platform software
Cryptographic software
End-to-end encryption
Free and open-source Android software
Free instant messaging clients
Free security software
Free software programmed in Java (programming language)
Free VoIP software
Instant messaging clients programmed in Java
Internet privacy software
IOS software
Secure communication | Signal (software) | Mathematics | 6,572 |
1,053,747 | https://en.wikipedia.org/wiki/Vladimir%20Drinfeld | Vladimir Gershonovich Drinfeld (born February 14, 1954), surname also romanized as Drinfel'd, is a mathematician from the former USSR, who emigrated to the United States and is currently working at the University of Chicago.
Drinfeld's work connected algebraic geometry over finite fields with number theory, especially the theory of automorphic forms, through the notions of elliptic module and the theory of the geometric Langlands correspondence. Drinfeld introduced the notion of a quantum group (independently discovered by Michio Jimbo at the same time) and made important contributions to mathematical physics, including the ADHM construction of instantons, algebraic formalism of the quantum inverse scattering method, and the Drinfeld–Sokolov reduction in the theory of solitons.
He was awarded the Fields Medal in 1990.
In 2016, he was elected to the National Academy of Sciences. In 2018 he received the Wolf Prize in Mathematics. In 2023 he was awarded the Shaw Prize in Mathematical Sciences.
Biography
Drinfeld was born into a Jewish mathematical family, in Kharkiv, Ukrainian SSR, Soviet Union in 1954. In 1969, at the age of 15, Drinfeld represented the Soviet Union at the International Mathematics Olympiad in Bucharest, Romania, and won a gold medal with the full score of 40 points. He was, at the time, the youngest participant to achieve a perfect score, a record that has since been surpassed by only four others including Sergei Konyagin and Noam Elkies. Drinfeld entered Moscow State University in the same year and graduated from it in 1974. Drinfeld was awarded the Candidate of Sciences degree in 1978 and the Doctor of Sciences degree from the Steklov Institute of Mathematics in 1988. He was awarded the Fields Medal in 1990. From 1981 till 1999 he worked at the Verkin Institute for Low Temperature Physics and Engineering (Department of Mathematical Physics). Drinfeld moved to the United States in 1999 and has been working at the University of Chicago since January 1999.
Contributions to mathematics
In 1974, at the age of twenty, Drinfeld announced a proof of the Langlands conjectures for GL2 over a global field of positive characteristic. In the course of proving the conjectures, Drinfeld introduced a new class of objects that he called "elliptic modules" (now known as Drinfeld modules). Later, in 1983, Drinfeld published a short article that expanded the scope of the Langlands conjectures. The Langlands conjectures, when published in 1967, could be seen as a sort of non-abelian class field theory. It postulated the existence of a natural one-to-one correspondence between Galois representations and some automorphic forms. The "naturalness" is guaranteed by the essential coincidence of L-functions. However, this condition is purely arithmetic and cannot be considered for a general one-dimensional function field in a straightforward way. Drinfeld pointed out that instead of automorphic forms one can consider automorphic perverse sheaves or automorphic D-modules. "Automorphicity" of these modules and the Langlands correspondence could be then understood in terms of the action of Hecke operators.
Drinfeld has also worked in mathematical physics. In collaboration with his advisor Yuri Manin, he constructed the moduli space of Yang–Mills instantons, a result that was proved independently by Michael Atiyah and Nigel Hitchin. Drinfeld coined the term "quantum group" in reference to Hopf algebras that are deformations of simple Lie algebras, and connected them to the study of the Yang–Baxter equation, which is a necessary condition for the solvability of statistical mechanical models. He also generalized Hopf algebras to quasi-Hopf algebras and introduced the study of Drinfeld twists, which can be used to factorize the R-matrix corresponding to the solution of the Yang–Baxter equation associated with a quasitriangular Hopf algebra.
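For reference, the quantum Yang–Baxter equation that such an R-matrix must satisfy (with $R_{ij}$ denoting $R$ acting on the $i$-th and $j$-th tensor factors of $V \otimes V \otimes V$) is:

```latex
R_{12}\, R_{13}\, R_{23} \;=\; R_{23}\, R_{13}\, R_{12}
```

A quasitriangular Hopf algebra supplies a universal element whose image in each representation gives a solution $R$ of this equation.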
Drinfeld has also collaborated with Alexander Beilinson to rebuild the theory of vertex algebras in a coordinate-free form, which have become increasingly important to two-dimensional conformal field theory, string theory, and the geometric Langlands program. Drinfeld and Beilinson published their work in 2004 in a book titled "Chiral Algebras."
See also
Drinfeld reciprocity
Drinfeld upper half plane
Manin–Drinfeld theorem
Quantum group
Chiral algebra
Quasitriangular Hopf algebra
Ruziewicz problem
Notes
References
Victor Ginzburg, preface to the special volume of Transformation Groups (vol. 10, nos. 3–4, December 2005, Birkhäuser) on the occasion of Vladimir Drinfeld's 50th birthday, pp. 277–278.
Report by Manin
External links
Langlands Seminar homepage
1954 births
20th-century Ukrainian mathematicians
21st-century Ukrainian mathematicians
Moscow State University alumni
Fields Medalists
Living people
Algebraic geometers
Number theorists
Soviet mathematicians
Ukrainian Jews
Scientists from Kharkiv
International Mathematical Olympiad participants
University of Chicago faculty
Institute for Advanced Study visiting scholars
Members of the United States National Academy of Sciences
Corresponding members of the National Academy of Sciences of Ukraine
Russian scientists | Vladimir Drinfeld | Mathematics | 1,051 |
13,263,961 | https://en.wikipedia.org/wiki/Silvestri%20camera | Silvestri is an Italian manufacturer of professional photographic cameras and large format cameras.
The history - SLV and T30
The production of Silvestri cameras started in Florence, Italy, at the beginning of the eighties through the work of Vincenzo Silvestri, who designed and developed the original project.
The intent was to provide photographers of architecture, indoors and outdoors, with a wide-angle camera that was compact and lightweight compared to the large view cameras produced in that period, and that offered the essential movements for perspective correction.
The first camera, the SLV, was introduced in the 6x7/6x9 format, with a rotating back with click stops every 90 degrees; its lens, a Schneider Super Angulon 5.6/47 mm in a focusing helical mount, was not interchangeable. The shift mechanism permitted a total rise or fall of 25 mm; it consisted of a control knob and two counter-posed right/left screws and allowed precise setting and locking of the shift.
The attachment of the roll-film back was Graflex-compatible, which opened the system to various backs such as Mamiya, Horseman and Wista. Image viewing and focusing were done on the ground glass, using a magnifying lens in a leather bellows.
The whole camera structure was made of anodized aluminum worked with CNC machinery, ensuring constructional precision and reliability.
Conceptually, the SLV can shift in any direction: the back is placed and leveled horizontally or vertically, and the camera body is oriented by leaning it to the right or left, or by turning it upwards or upside down.
Some samples of this first model were made in an almost handcrafted way, but the good interest they met among specialized photographers pushed Silvestri to develop a new, improved model of the SLV.
This second model had a bayonet mount for attaching the lenses and an interchangeable back system.
This gave the SLV greater extension and flexibility, and the range grew to three further Schneider lenses (Super Angulon 5.6/65 mm, Super Angulon 5.6/75 mm and Symmar 5.6/100 mm) besides the Super Angulon 5.6/47 mm; all lenses had a bayonet attachment and a focusing helical mount. The interchangeable backs allowed the insertion of extension rings to compensate for the difference in focal distance among the various lenses. The four-point 8° attachment, quick and precise to use, also accepted backs of different formats such as 6x12 cm and 4x5 inches.
These modifications opened new fields of application to the SLV and attracted other photographers from Italy and abroad. The SLV was replaced by the T30 camera in 1997. The T30, having 30 mm of shift movement, was more suitable for the new lenses with larger image circles that were introduced on the market in that period. The T30 is still in production.
A further step towards flexibility of use was the design and production of the shiftable viewfinder with interchangeable frames. The viewfinder is extremely useful for quick work and in difficult situations, compared to viewing the image on the ground glass.
Mod.H
A new-concept camera that renewed the characteristics of the SLV, using most of its accessories. It had a built-in shiftable viewfinder which, coupled with the shift movement, gave ease of operation and simplicity of use. The lens-format frames were interchangeable, covering the Schneider lens series and the Rodenstock one as well. The camera is now out of production.
S4
The S4 camera was designed later to answer the need for full coverage of the 4x5 inch format, with which the SLV camera could not offer enough versatility. It takes a standard 4x5 inch back with format adapters for 6x9 and 6x12, interchangeable backs with the quick 8° rotation attachment and bayonet attachment, or interchangeable lens boards for the lenses. Large in dimension, it was later provided with a front bellows (Flexibellow) that performs lens focusing, tilting and swinging. With this accessory, lenses can be used without a focusing mount, and focus can be extended along the two orthogonal axes by tilting and swinging the lens. The S4 camera is still in production.
Bicam
With the arrival of digital photography, and considering that it would replace film within a short time, Silvestri began to study a camera able to meet the double need of working with film while remaining convertible to digital use. There were two possible solutions, scan backs or matrix backs. Silvestri chose matrix backs, creating a compact, easy-to-carry camera proportioned to the small size of high-resolution sensors. Its lens range comprises two series of Rodenstock and Schneider lenses, from 23 mm to the long focal lengths, specifically designed for high-resolution digital photography. The Bicam, introduced in the late nineties, gained new accessories and components to keep pace with the continuous evolution of sensor technology. Its main characteristics are the possibility of working with lenses in helical focusing mounts with bayonet, or with a bellows system that gives the camera all the correction movements typical of view cameras: side shift, rise and fall, tilt and swing; all movements are extremely precise and micrometric.
The reversible and interchangeable backs have a large range of accessories, from sliding adapters with viewing screens to drop-in plates that interface with the most popular digital backs: Hasselblad V, Hasselblad H, Mamiya 645, Contax 645 and Rollei AFI.
S5 Micron
A classical view camera, designed for studio photography, with full micrometric movements. All the parts related to lenses and backs are interchangeable: the lens boards are flat or recessed, the lenses mount on boards or on the bayonet, and no helical focusing mount is needed. The backs and their accessories are shared with the Bicam system, and the bellows are interchangeable. The peculiarity of the S5 Micron is that it is built on two separate shifting blocks that do not interfere with each other, allowing the two standards to come into contact. This makes it possible to use extremely wide-angle lenses and to perform adjustment movements that would otherwise be impossible. The S5 Micron was listed in the ADI Index 2005 for the Compasso d'Oro industrial design award.
Flexicam
Winner of the award for best project at the Premio Vespucci 2008.
This camera was conceived for on-location work: lightweight (less than 1 kg) yet absolutely precise for use with high-resolution digital backs. It offers the flexibility of a mini view camera, with the same essential correction movements: rise and fall, rail extension with micrometric focus movement, tilt and swing.
Rodenstock and Schneider lenses on the Silvestri bayonet, from 23 mm to 120 mm; a back adapter for high-resolution digital backs; and a T attachment for SLR cameras.
Camera models and the year of their introduction
Silvestri SLV – 1982
Silvestri SG612 – 1990
Silvestri Mod. H – 1992
Silvestri S4 – 1995
Silvestri T30 – 1997
Silvestri Bicam – 1998
Silvestri S5 micron – 2005
Silvestri Flexicam – 2006
References
General references
ADI Design Index 2005, Editrice Compositori, 2005 Bologna.
L'Ottica in Toscana, Nardini Editore, 2005 Firenze.
Alla Photokina e ritorno, Photographialibri, 2008 Milano.
Shutterbug, n.3 vol.33, January 2004, article "Medium format update" by George Schaub, pages 100–114.
FV Foto-Video Actualidad, n.50 1992, article "Una càmara poco corriente" by Valentìn Sama, pages 60–66.
PHOTO Technique International, n.1 1996, article "Primarily for architecture" by Hans Bluth, pages 10–11.
PHOTO Technik International, n.1 1994, article " Kamerakonzept fur die Architekturfotografie" by W.D. Georg, pages 48–51.
External links
Cameras
Italian brands
Photography equipment manufacturers of Italy | Silvestri camera | Technology | 1,730 |
764,515 | https://en.wikipedia.org/wiki/Hindenburg%20Line | The Hindenburg Line (, Siegfried Position) was a German defensive position built during the winter of 1916–1917 on the Western Front in France during the First World War. The line ran from Arras to Laffaux, near Soissons on the Aisne. In 1916, the Battle of Verdun and the Battle of the Somme left the German western armies () exhausted and on the Eastern Front, the Brusilov Offensive had inflicted huge losses on the Austro-Hungarian armies and forced the Germans to take over more of the front. Romania’s entrance into the war on the side of the Entente in August 1916 had also placed additional strain on the German army and war economy.
The Hindenburg Line, built behind the Noyon Salient, was to replace the old front line as a precaution against a resumption of the Battle of the Somme in 1917. By devastating the intervening ground, the Germans could delay a spring offensive in 1917. A shortened front could be held with fewer troops and, with tactical dispersal, reverse-slope positions, defence in depth and camouflage, German infantry could be conserved. Unrestricted submarine warfare and strategic bombing would weaken the Anglo-French while the German armies in the west recuperated. On 25 January 1917, the Germans had 133 divisions on the Western Front but this was insufficient to contemplate an offensive.
Greater output of explosives, ammunition and weapons by German industry, to counter the Allied Materialschlacht (battle of equipment), was attempted under the Hindenburg Programme of August 1916. Production did not increase sufficiently over the winter, with only 60 per cent of the programme expected to be fulfilled by the summer of 1917. The German peace initiative of December 1916 had been rejected by the Entente, and the Auxiliary Service Law of December 1916, intended further to mobilise the civilian economy, had failed to supply the expected additional labour for war production.
The retirement to the Hindenburg Line (Operation Alberich, the Alberich Manoeuvre) took place from February to March 1917. News of the demolitions and of the deplorable condition of the French civilians left behind by the Germans were serious blows to German prestige in neutral countries. Labour was transferred south in February 1917 to work on the defensive line from La Fère to Rethel and on the forward positions on the Aisne front, which the Germans knew were due to be attacked by the French. Divisions released by the retirement, together with other reinforcements, increased the number of divisions on the Aisne front by early April. The Hindenburg Line was attacked several times in 1917, notably at St Quentin, Bullecourt, the Aisne and Cambrai, and was broken in September 1918 during the Hundred Days Offensive.
Background
Battle of the Somme, 1916
In August 1916 the German armies on the Somme had been subjected to great strain; the IX Reserve Corps had been "shattered" in the defence of Pozières. Ten fresh divisions had been brought into the Somme front and an extra division had been put into the line opposite the British. Movement behind the German front was made difficult by constant Anglo-French artillery harassing-fire, which added to equipment shortages by delaying deliveries by rail and interrupting road maintenance. Destruction, capture, damage, wear and defective ammunition had put large numbers of field and heavy guns out of action by the end of August.
The artillery deficit was only slowly improved by the plan of General Max von Gallwitz to centralise the command of the remaining artillery for counter-battery fire and to use reinforcements of aircraft to increase the amount of observed artillery fire, which had little effect on Allied air superiority but did eventually increase the accuracy and efficiency of German bombardments. The 2nd Army had been starved of reinforcements in mid-August to replace exhausted divisions in the 1st Army, and plans for a counter-stroke had been abandoned for lack of troops. The emergency in Russia caused by the Brusilov Offensive, the entry of Romania into the war and the French counter-offensive at Verdun had already overstretched the German army.
General Erich von Falkenhayn, the German Chief of the General Staff, was dismissed on 29 August 1916 and replaced by Field Marshal Paul von Hindenburg, with General Erich Ludendorff as First Quartermaster General, his deputy. The new supreme command (the third OHL, Oberste Heeresleitung) ordered an end to attacks at Verdun and the dispatch of troops from there to Romania and the Somme front. On 5 September, proposals for a new, shorter defensive position to be built in France were requested from the commanders of the western armies, who met Hindenburg and Ludendorff at Cambrai on 8 September. The western front commanders were told that no reserves were available for offensive operations, except those planned for Romania. Georg Fuchs, one of the corps commanders, recommended that a defensive line be built from Arras to west of Laon, shortening the front and releasing ten divisions which, with other troops, could be used for an offensive in Alsace or Lorraine. Ludendorff criticised the practice of holding ground regardless of its tactical value and advocated holding front-line positions with a minimum of troops and the recapture of lost positions by counter-attacks, a practice that had already been forced on the German armies on the Somme.
On 15 September Crown Prince Rupprecht, commander of the northern group of armies, was ordered to prepare a rear defensive line, and on 23 September work began on the new Siegfriedstellung (Siegfried Position/Hindenburg Line). On 21 September, after the Battle of Flers–Courcelette (15–22 September), Hindenburg ordered that the Somme front would have priority in the west for troops and supplies. By the end of the Battle of Morval (25–28 September) Rupprecht had no reserves left on the Somme. During September, the Germans sent another thirteen fresh divisions to the British sector and scraped up troops wherever they could be found. The German artillery expended vast quantities of field and heavy ammunition, yet the début of the tank, the defeat at the Battle of Thiepval (26–28 September) and the number of casualties (September was the costliest month of the battle for the German armies) were severe blows to German morale. On 7 October, Rupprecht anticipated a British attack north of the Ancre River in mid-October; anxiety about the situation at Verdun also increased. On 19 October, the dispatch of reinforcements from Verdun to the Somme was suspended. Defeats inflicted south of the Somme by the French Tenth Army (10–21 October) led to the sacking of Bronsart von Schellendorf, the 2nd Army chief of staff.
German strategy for 1917
Hindenburg Programme
Hindenburg and Ludendorff demanded domestic changes to complement their new strategy. German workers were to be subjected to an Auxiliary Service Law (Hilfsdienstgesetz) that, from November 1916, made all Germans of working age liable to compulsory service. The new programme was intended to treble artillery and machine-gun output and to double munitions and trench-mortar production. Expansion of the army and of the output of war materials caused increased competition for manpower between the army and industry. In early 1916, the German army had ample men in recruit depots, with more arriving in March when the 1897 class of conscripts was called up. The army was so flush with men that plans were made to demobilise older classes and, in the summer, Falkenhayn ordered the raising of further divisions to enlarge the army. The costly battles at Verdun and the Somme had been much more demanding on German divisions, which had to be relieved after only a short period in the front line, divisions lasting about 14 days on the Somme. A larger number of divisions might reduce the strain on the western armies and realise a surplus for offensives on other fronts. Hindenburg and Ludendorff ordered the creation of another 22 divisions, to reach 179 divisions by early 1917.
The men for the divisions created by Falkenhayn had come from reducing square divisions with four infantry regiments to triangular divisions with three regiments, rather than from a net increase in the number of men in the army. Troops for the extra divisions of the expansion ordered by Hindenburg and Ludendorff could be found by combing out rear-area units, but most would have to be drawn from the pool of replacements, which had been depleted by the losses of 1916. Although new classes of conscripts would top up the pool, casualty replacement would become much more difficult once the pool had to maintain a larger number of divisions. By calling up the 1898 class of recruits early, in November 1916, the pool was replenished by February 1917, but the larger army would become a wasting asset. Ernst von Wrisberg, the Deputy Minister of the Prussian Ministry of War responsible for raising new units, had grave doubts about the wisdom of this increase in the army but was over-ruled by Ludendorff.
The German army had begun 1916 equally well provided with artillery and ammunition, massing great stocks of field and heavy artillery shells for the beginning of the Battle of Verdun, but four million rounds were fired in the first fortnight and the 5th Army needed many ammunition trains a day to continue the battle. The Battle of the Somme further reduced the German reserve of ammunition and, when the infantry was forced out of the front position, the need for defensive barrages (Sperrfeuer), to compensate for the lack of obstacles, increased. Before the war, Germany had imported nitrates for propellant manufacture and only the pre-war discovery of the Haber process for the synthesis of nitrates from atmospheric nitrogen enabled Germany to produce explosives while blockaded. Developing the process and building factories to exploit it took time. Under Falkenhayn, the procurement of ammunition and of the weapons to fire it had been based on the output of propellants, since the manufacture of ammunition without sufficient propellant fillings was as wasteful of resources as it was pointless; Hindenburg and Ludendorff wanted firepower to replace manpower and ignored the principle.
To meet existing demand and to feed the new weapons, Hindenburg and Ludendorff wanted a big increase in monthly propellant output. The output target had already been raised in July 1916 to cover existing demand, and even the extra output demanded by Hindenburg and Ludendorff could never match the doubling and trebling of artillery, machine-guns and trench mortars. The industrial mobilisation needed to fulfil the Hindenburg Programme increased the demand for skilled workers, recalled from the army or exempted from conscription. The number of exempted workers rose steadily from the end of 1916 to more than two million by November 1917, a large proportion of them deemed kv (kriegsverwendungsfähig, fit for front-line service). The demands of the Hindenburg Programme exacerbated the manpower crisis, and constraints on the availability of raw materials meant that targets were not met.
The German army returned workers to the war economy and exempted more men from conscription from 1917. Steel production in February 1917 fell short of expectations and explosives production was below target, which added to the pressure on Ludendorff to retreat to the Hindenburg Line. Despite the shortfalls, by the summer of 1917 the artillery park had increased considerably, with many guns being newer models of superior performance. Machine-gun output enabled each division to carry far more machine-guns and allowed the number of machine-gun sharpshooter detachments (MGA) to be increased. The greater output was still insufficient to equip the new divisions; existing divisions, which had two artillery brigades of two regiments each, lost a regiment and the brigade headquarters, leaving three regiments. Against the new scales of equipment, British and French divisions in early 1917 also fielded large complements of guns and machine-guns.
Unrestricted U-boat warfare and strategic bombing
Hindenburg and Ludendorff forced a return to the policy of unrestricted submarine warfare on 9 January 1917, over-riding the objections of Chancellor Bethmann-Hollweg and other opponents of the policy. The policy was to resume on 1 February, with the aim of sinking 600,000 tons of shipping per month and knocking Britain out of the war in five to twelve months. Optimistic claims by the navy were less important to the decision than the "desperate" position of the western armies and the decrepitude of Germany's allies. Another front in the west was to be opened by the resumption of air attacks on Britain. New aircraft had become available to replace airships, which had become too vulnerable to British counter-measures in 1916. Planning began in late 1916 and Operation Türkenkreuz (Turk's Cross) began in May 1917.
Defensive fortification
As part of the defensive strategy for the Western Front, five defensive positions were planned to form the basis of the defensive battle (Abwehrschlacht) expected in 1917. A Flandernstellung (Flanders Position) was to run from the Belgian coast, along Passchendaele Ridge and behind the Messines salient, to the defences of Lille; the Wotanstellung (Wotan Position, known as the Drocourt–Quéant Line to the British), from Lille to Sailly, was to be built behind the 1915 battlefields of Loos, Vimy and Arras and the 1916 battlefield of the Somme. The Siegfriedstellung (Siegfried Position, known to the British as the Hindenburg Line) was to be built across the base of the Noyon Salient, from Neuville Vitasse near Arras, through St Quentin and Laon and the Aisne east of Soissons, to Cerny-en-Laonnois on the Chemin des Dames ridge.
The Hundingstellung (Hunding Position) was to run from Péronne to Etain, north-east of Verdun, behind the Champagne battlefields of 1915. The Michelstellung (Michel Position) was to cover Etain to Pont-à-Mousson, behind the St Mihiel Salient. The new fortified areas were intended as precautionary measures, built to be used as rallying positions (similar to ones built on the Russian front) and to shorten the Western Front, to economise on troops and create more reserves. The Siegfriedstellung had the potential to release the greatest number of troops and was begun first; Hindenburg and Ludendorff decided its course on 19 September and construction began on 27 September.
Withdrawal to the Siegfriedstellung was debated by Ludendorff and other senior German commanders over the winter of 1916–1917. An offensive in the new year was discussed on 19 December, but it was considered that the force available could not achieve a decisive result. An OHL memorandum of 5 January noted that offensive preparations by the French and British were being made all along the Western Front, to keep the site of a spring offensive secret. It was considered that the Somme front, the area between Arras and Lille, the Aisne front, Lorraine and Flanders were particularly threatened. Prisoner interrogation, postal analysis, espionage and air reconnaissance were used to identify the probable sites of Anglo-French offensives. March was considered the earliest that the Anglo-French could attack, with a possible delay if a Russian offensive was also planned. The chief of staff of Army Group Rupprecht, Hermann von Kuhl, issued a survey of offensive possibilities on 15 January. A German breakthrough attempt was rejected for lack of means and because of the consequences of failure. Limited-objective attacks at Loos, Arras, the Somme and the Aisne were considered, but the manpower and equipment shortage meant that even smaller attacks risked using up reserves needed for defence against the expected Anglo-French spring offensives. Local attacks like those at Bouchavesnes and La Maisonette on the Somme in late 1916, which could be mounted without reinforcements, were all that could be considered. Ludendorff accepted the analysis that no offensive was possible.
On a visit to Kuhl on 20 January, Fuchs concluded that Allied superiority was so great that the German army could not forestall the Anglo-French with an attack or stop them attacking elsewhere. The army could not withstand another battle like the Somme; work on defences there was futile and would exhaust the troops for nothing. On 29 January, Ludendorff ruled that a withdrawal could not be ordered on political as well as military grounds, then on 31 January discussed withdrawal with Kuhl, while the 1st and 2nd Army commanders on the Somme front opposed a retirement. Resources continued to be directed to the Somme defences during January and February and, on 6 February, the 1st Army HQ requested three divisions and additional labour to work on new positions, to implement a plan for a partial withdrawal to a line from Arras to Sailly. Even with the expansion of the German army over the winter and the transfer of divisions from Russia, the divisions on the Western Front were confronted by a larger number of French, British and Belgian divisions, many of which were bigger than their German equivalents. The partial withdrawal would shorten the front and need six fewer front-holding divisions, compared with the greater shortening and the saving of 13 to 14 divisions to be had by withdrawing all the way to the Siegfriedstellung (Hindenburg Line).
Anglo-French strategy for 1917
The German army was far from defeat but in 1916 had been forced back on the Somme and at Verdun, as had the Austro-Hungarian army in southern Russia. At the Chantilly Conference of November 1916 the Allies agreed to mount another general offensive. The Anglo-French contribution was to be a resumption of the Somme offensive with much larger forces, extending the attack north to Arras and south to the Oise, followed by a French attack between Soissons and Rheims. The British were to attack the salient that had formed between Bapaume and Vimy Ridge with two armies and the French with three armies from the Somme to Noyon. The attacks were to be made on the broadest possible fronts and advance deep enough to threaten German artillery positions. When Marshal Joseph Joffre was superseded by General Robert Nivelle, the "Chantilly strategy" was altered. The French returned to a policy of decisive battle, with a breakthrough to be achieved quickly, leading to the "total destruction of active enemy forces by manoeuvre and battle". Successive attacks in a methodical battle were dropped and continuous thrusts were substituted, to deprive the Germans of time to reinforce and strengthen their defences. A great weight of heavy artillery fire, reaching to the rear edge of the German defences, would achieve the breakthrough. The infantry advance was to reach the German heavy artillery in one attack and then widen the breach with lateral attacks. A strategic reserve would then move through the gap and destroy the German reserves in open warfare. The original French attacks between the Somme and Oise were reduced in size and the secondary attack between Soissons and Rheims was reinforced to become the main offensive. The Nivelle Offensive was planned to begin with a British attack on the Bapaume salient in early April 1917, to assist the main French attacks a week later by holding German troops on the Arras front and diverting reserves from the Aisne.
Prelude
German preparations
German reconnaissance aircraft surveyed all of the Western Front over the winter of 1916–1917 to look for signs of Anglo-French offensive preparations. The design of the Siegfriedstellung (Siegfried Position, later known to the Allied powers as the Hindenburg Line) was drawn up by Colonel Kraemer, an engineer from supreme headquarters (OHL), and General Lauter, the Inspector General of Artillery. Construction was organised by Rupprecht and Kuhl; when the plans were ready, the line was divided into sectors and officers from the General Staff, gunners and engineers were appointed to oversee construction, which was expected to take five months. The defences were built by German construction companies, which brought skilled workmen to fabricate ferro-concrete emplacements, while labourers and Russian prisoners of war dug the trenches. The building works absorbed most of the cement, sand and gravel production of occupied France and Belgium, plus that of western Germany. Transport of materials was conducted by canal barge and railway, which carried the engineering stores, although the building period from October 1916 to March 1917 meant that only about eight trains a day were added to normal traffic. Mass-production techniques were used to produce items for the position. Steel-reinforced concrete dug-outs for infantry squads and artillery-observation posts were standard designs and all woodwork was made to a pattern.
The line was built for a garrison of twenty divisions. Telephone cables were deeply buried and light railways were built to carry supplies to the defences. The position had two trenches a substantial distance apart, with sentry garrisons to occupy the front trench. The main line of defence was the second trench, which was equipped with dugouts for most of the front garrison. Fields of barbed wire, fixed with screw pickets in three separated belts and laid in a zig-zag so that machine-guns could sweep the sides, were placed in front of the trench system. Artillery-observation posts and machine-gun nests were built in front of and behind the trench lines. Where the lay of the land gave observation from behind the system, it was built on reverse slopes, with a short field of fire for the infantry, according to the experience of the Western Front defensive battles of 1915 and 1916, when forward-slope positions had been smashed by observed Franco-British artillery-fire.
In much of the new position, the new principle of reverse-slope positions with artillery-observation posts to the rear was not followed. Artillery-observation posts were built in the front-trench system or in front of it. Trenches had been dug near a crest, on a forward slope or at the rear of a reverse slope, which replicated the obsolete positions being abandoned. The 1st Army commander, General Fritz von Below, and his Chief of Staff, Colonel Fritz von Loßberg, rejected this layout, since smoke and dust would make artillery observation from such positions impossible. They urged that the 1st Army section of the Siegfriedstellung (Hindenburg Line), from Quéant, where it met the site of the Wotanstellung (Wotan Line), to Bellicourt north of St Quentin, should have another position built in front of the new position, which would then become the artillery protection position (Artillerieschutzstellung) behind the revised front system; the line already had dugout accommodation sufficient to shelter local reserves. The new line would be similar but on reverse slopes, would have its own dugouts and would be ready by 15 March. The existing artillery positions were scrapped and the artillery sited to dominate ground useful for the assembly of assault troops, such as the La Vacquerie plateau.
Rupprecht refused to delay implementation of Operation Alberich but, having inspected the Siegfriedstellung (Hindenburg Line) on 27 February, sanctioned the 1st Army proposal and provided three divisions and labour for the new construction, which moved the front of the position forward of the original Hindenburg Line. Another two-trench system was planned near the artillery reserve positions, some way behind the existing battery positions, to be built as soon as labour became available. The extra position would ensure that an attack which captured the Hindenburg Line could not continue without a pause to move artillery into range of the rearward system. When complete, the various positions had considerable depth and the original Hindenburg Line had become an intermediate line (Zwischenstellung). Work began on another defensive position in the autumn of 1917, with the original Hindenburg Line as its front-trench system.
German defensive methods
The practice of rigidly defending front-line trenches regardless of casualties was abolished, in favour of a mobile defence of the fortified areas being built over the autumn and winter of 1916–1917. (Principles of Field Fortification) was published in January 1917, in which instructions were given for the construction of defences in depth, according to the principles of greater depth and of disguise by dispersal and camouflage. Trench-lines were mainly intended for accommodation, dumps of supplies and as decoys, rather than firing lines. Deep dug-outs in the front line were to be replaced by many more smaller, shallow (MEBU shelters), with most built towards the rear of the defensive areas. Within the new forward zones, battle zones and rearward battle zones, the chain of command was streamlined by making corps headquarters into (groups), responsible for the administrative tasks in an area into which divisions would be moved for periods, before being withdrawn to rest, train and be brought up to strength. Command of areas rather than units was also introduced in divisions, with command of regiments devolved to the front battalion commander (KTK ), which reduced the chain of command from five to two posts.
The value of ground was to be determined by its importance to a defensive position. Where the lie of the land gave the defender a tactical advantage, by which an attacker could be defeated with the minimum of casualties to the defenders, with small-arms fire from dispersed, disguised positions and observed artillery-fire, it was to be fought for by the garrison and local reserves, which would counter-attack to regain any ground lost. The changes were codified in a training manual, (The Conduct of the Defensive Battle in Position Warfare), issued on 1 December 1916, which made infantry sections () rather than the battalion the basic tactical unit. Small, advanced garrisons were to repulse attacks and penetrations were to be cut off and counter-attacked immediately, without waiting for orders. Front-line troops were allowed to move away from fire, preferably by advancing into no man's land, but moves to the flanks and rear were also allowed.
When front-line garrisons and their supports were unable to hold or recapture the front line, they were to defend positions even if surrounded, to give time for a counter-attack by reserve divisions. When an immediate counter-attack () from behind the defensive position was not possible, a deliberate counter-attack () was to be planned over several days. Two schools of thought emerged over the winter; the principal authors of the new training manual, Colonel Max Bauer and Captain Hermann Geyer of the General Staff, wanted front garrisons to have discretion to move forwards, sideways and to retire. General von Hoen and Colonel Fritz von Lossberg, the 1st Army Chief of Staff, issued a memorandum, (Experience of the German 1st Army in the Somme Battles), on 30 January 1917. The document advocated the rigid holding of the front line by its garrison, to keep the defence organised under the control of battalion commanders. Lossberg and Hoen doubted that relief divisions could arrive quickly enough to counter-attack before Allied infantry had consolidated. They predicted that (relief divisions) would not be ready in time for hasty counter-attacks to succeed and that they should make planned counter-attacks after with full artillery support. Both theories were incorporated by Ludendorff into the new (Training Manual for Foot Troops in War) of March 1917. Training schools were established to prepare German commanders and courses began in February 1917.
Anglo-French preparations
British and French plans for 1917 were agreed at an Allied conference at Chantilly from 1916. Existing operations were to continue over the winter, fresh troops arriving in front-line units were to be trained and in the spring the front of attack was to be broadened, from the Somme to Arras and the Oise. The front of attack was to be about long, with two French surprise attacks near Rheims and in Alsace, to begin after the main attacks, to exploit German disorganisation and lack of reserves. The Allies expected to have against divisions for the co-ordinated offensives. A British operation in Flanders was also agreed, to begin several weeks after the main offensives further south. On 13 December, Joffre was replaced by Nivelle, who proposed a much more ambitious strategy, in which the plan for a resumption of Anglo-French attacks on either side of the Somme battlefield of 1916 was retained but the offensive on the Aisne was converted to a breakthrough offensive, to be followed by the commitment of a strategic reserve of to fight a "decisive" battle, leading to the exploitation of the victory by all of the British and French armies. French troops south of the British Fourth Army were freed to join the strategic reserve by an extension of the British front to just north of Roye on the Avre, facing St Quentin, which was complete by 26 February.
During periods of fine weather in October 1916, British reconnaissance flights had reported new defences being built far behind the Somme front; on 9 November, reconnaissance aircraft found a new line of defences from Bourlon Wood to Quéant, Bullecourt, the river Sensée and Héninel, to the German third line near Arras. Next day, an escaped Russian prisoner of war reported that were working on concrete dug-outs near St Quentin. Behind the Fifth and Fourth army fronts, the course of the Hindenburg Line was further away and the winter weather was exceptionally bad, which grounded aircraft and made air observation unreliable. On 11 December, a reconnaissance in the area of Marcoing reported nothing unusual, despite flying over the new diggings. German fighter opposition in the area had become much worse, with more aircraft and the arrival in service of superior aircraft types in the late summer of 1916. Three intermediate defensive lines begun in late 1916, much closer to the Somme front, were observed by British reconnaissance aircraft, which made fragmentary reports of digging further back seem unexceptional.
On 2 January, Nivelle instructed the to co-operate with the British in investigating German defensive systems that spies and repatriated civilians had reported. Not until 26 January did a British intelligence summary report a new line of defence between Arras and Laon. In February, attempts to send more aircraft to reconnoitre the line were hampered by mist, snow, rain, low cloud and an extremely determined German air defence. British air reconnaissance discovered diggings between Drocourt and Vitry en Artois at the end of January and on 15 February found a line between Quéant and Etaing. The British were able to trace the new line (named the Drocourt–Quéant Switch) south to Bellicourt on 15 February and to St Quentin on 25 February, the day after the first German withdrawal on the Ancre. British aircraft losses on these flights were severe, due to the presence of Jagdstaffel 11 (the Richthofen Circus) near Douai; six British reconnaissance aircraft were shot down on 15 April, along with two escorts.
Operations on the Ancre, 1917
Winter weather in mid-November 1916 stopped the Anglo-French attacks on the Somme, rather than the defensive efforts of the German army. On 1 January, a German attack took Hope Post near Beaumont Hamel, which was lost to a British attack on 5 January. On the night of a British attack captured the Triangle and Muck Trench, covering the flank of an attack on Munich Trench during the day; British troops edged forward over Redan Ridge for the rest of the month. A fall in temperature added to German difficulties by freezing the mud in the Ancre valley, making it much easier for infantry to move. On 3 and 4 February, British attacks towards Puisieux and River trenches succeeded, despite German counter-attacks on 4 February. On 7 February, British attacks threatened the German hold on Grandcourt and Serre. Each small advance uncovered another part of the remaining German defences to British ground observers. A bigger British attack began on 17 February, to capture and gain observation over Miraumont and the German artillery positions behind Serre. Three divisions attacked after a three-day artillery bombardment using the new fuze 106. A thaw set in on 16 February, which, with the Germans alerted to the attack by a deserter, led to the attack on the south bank advancing only at most and to the capture of Boom Ravine (). The attack on the north bank, to gain observation over Miraumont from the west, succeeded despite the weather and the Germans being forewarned.
On the Fourth Army front, fewer attacks took place while the French line was being taken over in stages, southwards to the Amiens–Roye road. On 27 January, the 29th Division took in an advance of only and on 1 February, an Australian attack on Stormy Trench was repulsed by a German counter-attack. A second attack on 4 February succeeded. On 8 February, a battalion of the 17th Division took a trench overlooking Saillisel and held it, despite German counter-attacks that continued on 9 February. On 21 and 22 February, Australian troops captured more of Stormy Trench despite rain, which made the ground even more "appalling" than before the freeze in January and early February. On 23 February, British and Australian troops on the south side of the Ancre sent patrols forward to investigate fires seen in German trenches and discovered the German withdrawal. Reports began to reach British commanders by on 24 February; they ordered intensive patrolling and advanced guards to be prepared, ready to move forward at dawn on 25 February. The German positions back to a reserve line, the (Trench I Position) from Le Transloy to Serre, were found to be empty; Gough ordered that strong patrols were to move forward and regain contact with the Germans. Behind the British front, the effect of the thaw on roads and supply routes caused acute supply difficulties.
Withdrawal
German plan
Over the winter, German deception operations were conducted and indications of an offensive through Switzerland diverted French attention at the end of 1916. The British were occupied by reports of troops and heavy artillery moving into Flanders and increased numbers of agent reports of troop movements from Lille, Tourcoing and Courtrai. Until January 1917, the British took seriously a possible limited offensive towards the Channel ports and made Flanders the subject of most of their long-range reconnaissance flights. Rupprecht, the northern army group commander on the Western Front, was made responsible for planning the devastation of the infrastructure within the Noyon Salient and the retirement to new defensive positions along the (Hindenburg Line), codenamed the (Alberich Manoeuvre). The Germans prepared a timetable; infrastructure in the salient was to be destroyed and buildings demolished from
Booby-traps were devised with delayed-action fuzes that used a striker on a spring, held back by a wire. Acid ate through the wire to release the striker and detonate the explosive. A number of devices with such fuzes were planted in bunkers but most booby-traps had simple pressure detonators. Wires were attached to useful items like stove chimneys and loot; trip-wires on the stairs of dugouts were connected to bundles of hand-grenades. On some roads, heavy-artillery shells were buried with contact-fuzes, which would only be triggered by the weight of a lorry. British engineers and tunnelling companies scoured areas as they were occupied and disabled many of the explosives. Roads were flooded by destroying drains and water-courses; wells were sabotaged by drilling a shaft next to them and exploding a charge, permanently ruining the well. Much of the explosive used by the Germans (, and ) absorbed water and so could be neutralised by dousing. Some British booby-trap patrols made German prisoners go first, who revealed traps rather than be blown up, and British tunnellers removed of explosives. (In some areas no booby-traps were found, as German divisional commanders had been allowed to choose whether to mine their areas and some refused.)
Trees were to be cut down, wells polluted and the civilian population forced to leave the area. Rupprecht objected to the scorched-earth policy on moral and practical grounds: the destruction would be a propaganda disaster, provide enemy troops with shelter and material to repair the damage to roads, and undermine the morale and discipline of the German soldiers involved in the destruction. The buildings of Nesle, Ham, Noyon and several villages were excluded from the plan and French civilians were to be left behind in them, while civilians were to be evacuated to work in the rest of occupied France and Belgium. A timetable for the demolition plan was prepared, to be followed by two marching days for the troops on the flanks of the area, three for the troops between Nauroy and Coucy le Chateau and four marching days for those between St Quentin and La Fère.
German retirements on the Somme
Defensive positions held by the German army on the Somme after November 1916 were in poor condition, the garrisons were exhausted and postal censors reported tiredness and low morale, which left the German command doubtful that the army could withstand a resumption of the battle. The German defences on the Ancre began to collapse under British attacks in January 1917, which caused Rupprecht to urge on 28 January that the retirement to the (Hindenburg Line) begin. Ludendorff rejected the proposal next day but British attacks on the 1st Army, particularly the Action of Miraumont (Battle of Boom Ravine), caused Rupprecht on the night of 22 February to order a preliminary withdrawal of about between Essarts and Le Transloy to .
On 24 February, the Germans withdrew to the protected by rear guards, over roads in relatively good condition, which they then destroyed. Next day, German rear guards inflicted on Australian troops near Loupart Wood and forced British troops back out of Irles with artillery-fire. A British attack on Puisieux on 26 February took all day and ended in hand-to-hand fighting. Next day, troops of Prussian Foot Guard Regiment 5 withdrew from Thilloy, completing the retirement to the . The German withdrawal was helped by a thaw, which turned roads behind the British front into bogs, and by disruption to the Allied railways that supplied the Somme front. On the night of 12 March, the Germans withdrew from the between Bapaume and Achiet le Petit, while small parties of troops sent up flares to mislead the British, who were preparing an attack. It took the British until 13 March to close up to the (Trench II Position). The British opposite the 1st Army received indications that a withdrawal was imminent on 20 and 21 February, when intercepted wireless messages were decoded, ordering German wireless stations at Achiet le Petit, Grévillers and the vicinity of Bapaume to close and prepare to move back. After this period, information from prisoners and the evidence of German demolitions indicated that a longer retirement was planned but the existence of three German reserve lines behind the front line made a local German retirement seem more likely than a longer one.
On 13 March, a document dated 5 March, revealing the plan and the code-name, was found in Loupart Wood. On 24 February, Lieutenant-General Hubert Gough defined the boundaries of the three corps making the advance and ordered them to regain contact with the German armies, using strong patrols supported by larger forces moving forward more deliberately behind them. The German front line was being maintained along the rest of the front and the possibility of a sudden German counter-offensive was not discounted. On 25 February, the 2nd Australian Division advanced on Malt Trench, found it strongly held and was forced to retire. The Fifth Army divisions advanced with patrols until they met German resistance, then prepared deliberate attacks, some of which were forestalled by German withdrawals; by 26 February, apart from some small detachments, the Germans had abandoned the ground west of the . British engineers improvised sleds to move guns and wagons, with pack-mules being used to carry food and ammunition, and on 8 March ammunition lorries were able to move forward in the V Corps area. Behind the old British front line, the thaw badly affected roads, which had been in a very poor condition at the end of 1916; many were closed and others were limited to horse-drawn traffic. Railway transport was even worse affected, with Boulogne harbour blocked, the number of trains and wagons on the northern French railways far short of British requirements and the lines congested and subject to traffic restrictions. Supply difficulties had also begun to increase on the Third Army and Fourth Army fronts before the German withdrawals.
On 10 March, the Fifth Army took Grévillers Trench and Irles in a methodical attack, which overwhelmed the German defence and took prisoners. Fires could be seen behind Bapaume, with more visible behind the , and British military intelligence reported that the headquarters of Rupprecht had been moved to Mons; civilians were known to have been evacuated, along with supply dumps and artillery. The was found to be empty between Bapaume and Achiet le Petit on the night of 12 March but next day an attack on Bucquoy failed. The German document found in Loupart Wood, dated 5 March and containing details of the (Operation Alberich), showed that Loupart Wood had been abandoned a day early. On the night of 14 March, patrols found that the Germans had withdrawn from part of the Fourth Army front and on 17 March the Germans slipped away on all of the Third and Fifth Army fronts.
On 4 February, the order was given to begin the (Alberich Manoeuvre), with 9 February to be the first day and 16 March the first marching day. The 1st Army from Arras to Péronne brought reserve divisions forward to the and outpost villages close to the (Hindenburg Line). The front-holding divisions, which had been worn down by British attacks, were withdrawn behind the (Hindenburg Line). On 17 March, the German troops at the north end of the Bapaume Salient withdrew swiftly, as there were no intermediate lines corresponding to the north of Achiet le Grand. was abandoned by 18 March and next day Boyelles and Boiry Becquerelle were evacuated. The withdrawal went straight back to the (Hindenburg Line) except for outposts at Hénin sur Cojeul, St Martin sur Cojeul and the west end of Neuville Vitasse. Numerous raids were mounted on British outposts during 20 and 21 March.
The was abandoned north of the Ancre, along with part of the near its junction with at Bapaume, which was also abandoned while many houses were still on fire. Next day, parties of Germans at Beugny in the fought until nightfall, then slipped away. A party at Vaulx Vraucourt was surprised (while some were shaving) and driven back to Lagnicourt. On 20 March, an Australian attack on Noreuil failed and an attack on Croisilles was repulsed. A German counter-attack to recover Beaumetz was mounted on 23 March and got into the village before being forced to withdraw; the attack was repeated next day but only one party reached the village. Lagnicourt was lost on 26 March and a counter-attack from Noreuil was repulsed; a British attack on Bucquoy was then defeated.
The 2nd Army conducted the withdrawal with the line-holding divisions, which were fresher than the divisions of the 1st Army, assisted by several cavalry divisions and cyclist battalions. On 17 March, withdrawals began north of the Avre and by 18 March the German 7th, 2nd and 1st armies and the southern wing of the 6th Army had begun to withdraw from the old front line ( in length, as the crow flies). Soissons was abandoned, roads leading out of Noyon were flooded, railway bridges were blown and the Somme river and canal crossings from Offoy to Péronne were destroyed. Roads built on causeways over the marshy ground between the river and canal were destroyed, causing water to form wide pools and making crossings practicable only at the causeways. The bridges over the rivers Germaine, Omignon, Cologne, Tortille and the Canal du Nord were also destroyed and huge craters were blown at crossroads, the damage being made worse by the spring thaw. German rear-guards made a stand in part of the from Nurlu to Péronne on 18 March, which was the third and final marching day of the retreat from Roye to St Quentin and the second and final day from Péronne to le Catelet, when the main body of German troops reached the (Hindenburg Line). Work was still being done to remedy defects in the original position and the rear-guards retired next day from Nurlu and Bertincourt as soon as British troops appeared, then counter-attacked British cavalry around Poeuilly on 22 March.
A large counter-attack was mounted on the French front on 22 March, which forced French cavalry and cyclists back over the Crozat Canal with many casualties but began too soon to ambush a large force that included artillery, as had been intended. A booby-trap exploded in Bapaume town hall on 25 March, killing Australian troops and two French deputies; French civilians were left behind at Bouvincourt, Vraignes and Tincourt on 26 March and Villers Faucon, Saulcourt and Guyencourt were lost on 27 March to attacks by British cavalry and armoured cars. Supplies of armour-piercing bullets had been sent forward by the Germans after Roisel was captured the day before, resulting in the armoured cars being peppered with bullet-holes. The armoured cars decoyed the German defenders, while cavalry got round the flanks and captured the villages. Outpost villages close to the (Hindenburg Line) south of Quéant had to be held by the Germans for longer than expected, because of the need to complete the additions to the defences being built to remedy defects in the original position. Heudicourt, Sorel and Fins were lost on 30 March. The northern outpost villages were lost on 2 April and Lempire fell on 5 April.
Anglo-French advance
In early March, instructions were given by the British Fourth Army corps commanders for advanced guards to maintain contact should the Germans retreat, with larger forces to follow and dig in behind them on defensible ground, so that the advanced guards could fall back if attacked. The first sign of a German retreat was seen on 14 March, when fires were observed in St Pierre Vaast Wood. Later in the day, the British entered Saillisel and by 16 March most of the wood had been occupied. The British Fourth and Fifth armies organised all-arms forces of cavalry squadrons, infantry and cyclist battalions and artillery batteries, some of which had armoured-car units attached. On 15 March, the French (GAN), south of the junction with the British Fourth Army at Roye, was ordered to follow up a German retirement. By 18 March, the German 6th, 1st, 2nd and 7th armies were withdrawing and British and French cavalry patrols met in Nesle, behind the old front line. When French troops entered Lassigny they caused a traffic jam and vehicles that tried to skirt the jam became bogged in mud. GAN had been on ten days' notice to attack between the Oise and Avre rivers (about fourteen days before (GAC) attacked on the Aisne). News of the first German retirements led the army group commander, General Franchet d'Espérey, to advocate an attempt to surprise the Germans and force them to retreat prematurely. The suggestion was rejected and GAN began to prepare a limited attack for 17 March, by when the Germans had gone.
On 17 March, Haig and the British army commanders met and discussed the effect of the German retirement. The precedent of a German withdrawal to a prepared position followed by a counter-attack, which had occurred in 1914, was noted, as was the possibility that reserves freed by the retirement would give the Germans an opportunity to attack the flanks of the withdrawal area. Nivelle had already decided to use the French troops released by the shorter front to reinforce the line in Champagne. British preparations for the attack at Arras were to proceed, with a watch kept for a possible German attack in Flanders, and preparations for the attack on Messines Ridge were to continue. The pursuit of the German army was to be made in the Fourth Army area with advanced guards, covered by the cavalry and cyclists attached to each corps and the 5th Cavalry Division. Larger forces were not to move east of a line from the Canal du Nord to the Somme south of Péronne until roads, bridges and railways had been repaired.
The boundary of the Fourth Army and the French Third Army was set from south of Nesle, through Offoy, to St Quentin. In the Fifth Army area from Bapaume to the north, the advance to the Hindenburg Line needed to be completed in time to conduct supporting operations for the Third Army attack, due at Arras in early April. All-arms columns of cavalry, infantry, artillery and engineers were organised to advance on the front of each division. The advanced guards of the 5th and 2nd Australian divisions had a detachment of the Australian Light Horse, a battery of 18-pounder field guns, part of an engineer field company, two infantry battalions and several machine-guns. The advance had fewer geographical obstacles than further south. On the left flank the country beyond was open and on the right the Germans made little effort to hold the ground west of , the ground inclining slightly to the north-east towards Bullecourt, away, with most of the rivers flowing in the direction of the British advance.
After 18 March, the main body of the Fifth Army was ordered to dig in temporarily from Bancourt to Bapaume, Achiet-le-Grand and Ablainzevelle and the advanced guards, which were large enough to be mobile columns, were to be reinforced to the strength of brigade groups. Some of the columns advanced boldly and others dug in temporarily as a precaution. Information that the Germans were burning villages behind the Hindenburg Line led Gough to order II Corps, V Corps and the Lucknow Cavalry Brigade to advance vigorously on 19 March, with the support of the reinforced mobile columns, to Ecoust St Mein, Croisilles, Lagnicourt and Hénin sur Cojeul. Next day, the brigade groups were to support the cavalry in driving the Germans back to the Hindenburg Line, which led the 2nd Australian Division force to attack Noreuil on 20 March. The attack was repulsed and an advance on Ecoust and Croisilles by infantry of the 18th (Eastern) Division, with cavalry and artillery on the flanks, was repulsed by fire from about fifteen machine-guns and six field guns; Gough ordered that attacks on the German outpost line were to stop until more artillery was available.
The British advance in the Fourth Army area reached the Somme rapidly from 17 to 20 March, with a continuous pursuit by vanguards and the main body moving forward by bounds between lines of resistance, up to the Somme river and Canal du Nord, which ran north-to-south from Offoy to Péronne, then paused while the river was bridged, with a priority of light bridges for infantry first, pontoon or trestle bridges for wagons and field artillery and then heavy bridges for mechanical transport and heavy artillery. The heavy steel bridges could be transported from a Base Park at Le Havre with notice. A bridge over the canal near Péronne was built by surveying the ground on the night of 15 March, towing pontoons up river the next night, building beginning at dawn on 17 March and the pontoon being ready by noon. Infantry of the 1/8th Royal Warwicks crossed that evening and were then ferried over the river beyond on rafts, to become the first Allied troops into Péronne.
On the right flank, IV Corps had to advance about over cratered and blocked roads to reach the Somme but Corps Mounted Troops and cyclists arrived on 18 March, to find German rearguards also mounted on bicycles. Infantry crossed the river on 20 March, by which time the mounted troops had reached Germaine and the Fourth Army infantry outposts had been established on high ground east of the Somme. "Ward's Force" was formed on 22 March with corps cavalry, cyclists, two batteries of field artillery, two sections of engineers and a battalion of infantry from the 48th Division, as a precaution after cavalry was forced out of Poeuilly and neighbouring villages by a counter-attack; the corps cavalry was relieved by the 5th Cavalry Division. The villages were reoccupied next day. The German retirement from the had begun on 19 March, when Nurlu and Bertincourt were occupied by the British after slight pressure. British infantry and cavalry were finding greater German resistance.
After a pause until 26 March, Ward's Force captured Roisel with an infantry company, two cavalry squadrons and two armoured cars; Canadian cavalry took Equancourt. The cavalry advanced again on 27 March and took Villers Faucon, Saulcourt and Guyencourt "with great dash". An attempt at a swifter pursuit by French cavalry and cyclists on 22 March failed, when they were forced back over the Crozat canal by a German counter-attack, with many casualties. On 28 March the British precautionary line of resistance was moved forward to a line Germaine–Caulaincourt–Bernes–Marquaix–Lieramont–Nurlu–Equancourt–Bertincourt while the outposts of cavalry, cyclists and some infantry mostly paused.
On the army boundary with the French the 32nd Division kept two brigades in line and one in reserve. Each brigade in the line had two infantry companies in outposts held by platoons backed by their battalions and the artillery close enough to cover the outposts. By late March each British corps in the pursuit had diverted a minimum of one division to work on road repairs and bridging, the thaw making the effect of German demolitions far worse. In the Fifth Army area, repair work was concentrated on the railway up the Ancre valley, the Candas–Acheux line, two light railways and the Albert–Bapaume, Hamel–Achiet le Petit–Achiet le Grand and Serre–Puisieux–Bucquoy–Ablainzevelle roads, most of the labour coming from front-line divisions.
By 1 April, the British and French were ready to begin operations against outpost villages, still occupied by the Germans, west of the Hindenburg Line. The French Third Army prepared to attack at St Quentin on 10 April, for which the preliminary bombardment began on 4 April. The British Fourth Army prepared to support the attack with artillery and such infantry attacks as could be attempted, while communications were being repaired. Information from captured documents and prisoners had disclosed the details of and that outpost villages had to be held for longer than planned, to enable work to continue on the Hindenburg Line (), where it was being rebuilt south of Quéant. Despite increased German resistance, Neuville Bourjonval, Ruyaulcourt, Sorel le Grand, Heudicourt, Fins, Dessart Wood, St Emilie, Vermand sur Omignon, Vendelles, Jeancourt, Herbecourt, Épehy and Pezières were captured between 28 March and 1 April. Deliberate attacks were mounted in early April to take Holnon Wood, Savy (where the German garrison had to be overwhelmed by house-to-house fighting), Holnon, Sélency (including six German field guns) and Francilly Sélency.
A German counter-attack on 3 April by a storm troop, to recover a German artillery battery from Holnon Wood, coincided with a British attempt to do the same and failed. The French Third Army captured the Epine de Dallon on 3 April, bringing it up to the Hindenburg Line, and on 4 April the British captured Metz en Couture in a snowstorm. Ronssoy, Basse Boulogne and Lempire were captured after house-to-house fighting but an attack on le Verguier failed. The villages still held by the Germans were found to be in a much better state of defence, with much more barbed wire around them. An attack on Fresnoy Le Petit, late on 5 April, was hampered by uncut wire and a second attack the next night was stopped halfway through the village, the defenders holding out until 7 April; an attack on Vadencourt also failed. On 9 April, the Fourth Army began a bombardment of the Hindenburg Line with such heavy artillery as was in range, as the Third and First armies began the offensive at Arras to the north. Fighting on the Fourth Army front for the remaining outpost villages went on until the end of April.
Air operations
German air operations over the winter concentrated on reconnaissance to look for signs of Anglo-French offensive preparations, which were found at Messines, Arras, Roye, the Aisne and the Champagne region. By March the outline of the Anglo-French spring offensive had been observed from the air. German air units were concentrated around Arras and the Aisne, which left few to operate over the Noyon Salient during the retirement. When the retirement began British squadrons in the area were instructed to keep German rearguards under constant observation, harass German troops by ground attacks and to make long-range reconnaissance to search the area east of the Hindenburg Line, for signs of more defensive positions and indications that a further retreat was contemplated.
A policy on rapid movement had been devised in September 1916, in which the Army Wing and Corps Wings not attached to the corps moving forward would move with army headquarters, and the Corps Wings attached to the corps that were advancing would keep as close to their associated corps headquarters as possible. Squadrons would not need to move every day and could arrange temporary landing-grounds. On 21 March 1917 the use of temporary facilities was ordered, with portable hangars to be built near corps headquarters and aircraft flown back to their normal aerodromes at night. IV and V Brigades were involved in the advance, with their squadrons attached to divisions for contact-patrols. Two cavalry divisions were attached to the Fourth and Fifth armies for the advance, with aircraft for reconnaissance of the ground that the cavalry was to traverse and to help the cavalry maintain touch with the rear.
Suitable targets found by air observation were engaged systematically by artillery, using zone calls. The cavalry divisions were issued with wireless stations to keep in touch with their attached aircraft but in the event good ground communications made them redundant. The German retirement was so swift and the amount of artillery fire was so small, that telephone wires were cut far less frequently than expected. German troop movements were well concealed and rarely seen from the air and it was usually ground fire that alerted aircrew to their presence. Pilots flew low over villages and strong points to invite German ground fire for their observers to plot, although this practice gave no indication of the strength of rearguards. A few attacks were made on German cavalry and infantry caught in the open but this had little influence on ground operations. The artillery wireless organisation broke down at times, due to delays in setting up ground stations, which led to missed opportunities for the direction of artillery fire from the air. The main influence of air operations was exerted through message carrying and reconnaissance, particularly in observing ground conditions in front of the advance and intermittent co-operation with artillery. Distant reconnaissance, some by single-seat fighters, found no evidence of German defences beyond the Hindenburg Line but many new aerodromes and supply dumps, indicating the permanence of the new position.
Aftermath
Analysis
The success of the German withdrawal to the Hindenburg Line has been explained as an Allied failure to anticipate the retirement and an inability seriously to impede it. Another view is that the Anglo-French were not pursuing a broken enemy but an army making a deliberate withdrawal after months of preparation, which retained considerable powers of manoeuvre and counter-attack. Belated awareness of the significance of the building work along the base of the Noyon Salient has also been given as a reason to regard the cautious pursuit as deliberately chosen, rather than an inept and failed attempt to intercept the German retirement. In Cavalry Studies: Strategical and Tactical (1907), Haig had distinguished between the hasty retreat of a beaten enemy and an organised withdrawal by a formidable force, capable of rapidly returning to the attack to defeat a disorganised pursuit.
In the case of an organised withdrawal, Haig described a cautious follow up by advanced guards, in front of a main force moving periodically from defensive position to defensive position, always providing a firm base on which the advanced guards could retire. The conduct of the Anglo-French pursuit conformed to this model. General Franchet d'Espérey proposed an improvised offensive to Nivelle, who rejected the idea, in favour of strengthening the main French front on the Aisne. British heavy artillery had been moved north from the Fifth Army in January, ready for the offensive at Arras and had been partly replaced by inexperienced units from Britain. Divisions from the Fourth Army had been moved south, to take over former French positions and I Anzac Corps had been transferred to the Fifth Army to compensate for divisions sent north to the Third Army by 6 February, which left the Anglo-French forces in the area depleted.
Beach concluded that evidence of German intentions had been collected by air reconnaissance, spy reports and debriefings of refugees and escaped prisoners of war but that German deception measures made information gleaned from intermittent air reconnaissance during the frequent bad flying weather over the winter appear unremarkable. German digging behind existing fortifications had taken place several times during the Somme battle and led British Intelligence to interpret the evidence of fortification-building further back from the Somme front, as an extension of the construction already being watched. In late December 1916, reports from witnesses led to British and French air reconnaissance further to the south and in mid-January 1917 British intelligence concluded that a new line was being built from Arras to Laon. By February, the line was known to be near completion and by 25 February, the local withdrawals on the Fifth Army front and prisoner interrogations, led the Anglo-French to anticipate a gradual German withdrawal to the new line.
When British patrols probing German outposts found them unoccupied, the Allies began a cautious advance, slowed by German destruction of the transport infrastructure. The troubled transport situation behind the British front, caused by mounting difficulties on the Nord railways, overloading and the thaw on roads, made British supply problems worse. The Germans had the advantage of falling back over good roads to prepared defences, protected by rearguards. The German armies made an efficient withdrawal, although the destruction that accompanied it led to a considerable amount of indiscipline. Defending villages as outposts, with most of the rearguard posted at the western exits, left them vulnerable to encirclement and attacks from commanding ground, and the predictability of such methods provided French and British troops with obvious objectives.
Cyril Falls, a British official historian, criticised the British army for the failings it showed during the German withdrawal to the Hindenburg Line, writing that the divisions were "bewildered and helpless" until they gained experience in the new form of warfare. The commander of the 8th Division, Major-General William Heneker, wrote on 2 April that it had taken three weeks for his division to become proficient in open-warfare techniques. In April 1917, an analysis by II Corps had found that patrols coming under fire had stopped to report, that ground of tactical importance had been ignored by patrols that had returned to British lines, forfeiting opportunities to force German withdrawals, and that artillery had been reluctant to push forward. Liaison between divisional engineers and artillery had been poor and advanced guards had not known the importance of reporting on the condition of roads, ground and the accuracy of maps; the cavalry element of advanced guards was also criticised for hesitancy, although in contrast Charles Bean, the Australian official historian, concluded that the advanced troops of I Anzac Corps had been sent out on a limb.
Falls rejected claims that British methods were predictable, noting that attacks had been made at dawn, noon, afternoon and at night. Bombardments had been fired before some attacks, during attacks on other occasions, on call from the infantry or were dispensed with. Attacks had been made indirectly, using ground for cover, and a number of outflanking moves had succeeded. Combined operations with infantry, cavalry, cyclists, armoured cars and aircraft had also occurred. The most successful divisions in the pursuit were those that had been on the Somme for a considerable time, rather than the newer divisions, which were fresh and had trained for open warfare in England. Many of the British attacks had substantial casualties, mostly from German machine-gun fire, although artillery casualties were also high. Attacks on similar objectives using different methods had similar casualties, which suggested that losses were determined by the German defence, rather than unsatisfactory British methods. British field artillery had been supplied with an adequate amount of ammunition despite the transport difficulties, but much heavy artillery was left behind.
The weather was also unusually severe, with snow in early April, which had less effect on German rearguards, who occupied billets and then blew them up when they retired. Allied troops in the pursuit suffered from exposure and shortages of supplies but had increased morale, better health (trench foot cases declined sharply) and adapted to open warfare. Draught animals suffered from the weather, short rations and overloading; the British artillery soon had a shortage of draught horses and several immobilised heavy artillery batteries. The length of the Western Front was reduced, which needed fewer German divisions to hold. The Allied spring offensive had been forestalled and the subsidiary French attack up the Oise valley negated. The main French breakthrough offensive on the Aisne (the Nivelle Offensive) forced the Germans to withdraw to the Hindenburg Line defences behind the existing front line on the Aisne. German counter-attacks became increasingly costly during the battle; after four days, ground and prisoners had been taken by the French armies and heavy casualties were inflicted on the German armies opposite the French and Belgian fronts between April and July. Most German casualties had been incurred during the Nivelle Offensive and were greater than in any earlier Entente attack, against heavy French casualties for the same period.
The French armies had lost heavily by 25 April and were also struck by a collapse of the medical services on the Aisne front, casualties being stranded close to the battlefield for several days; German losses for the same period were also estimated to be severe. A wave of mutinies broke out in the French armies, which eventually affected a large number of divisions. Between 16 April and 15 May the mutinies were isolated but then spread, with many more incidents recorded by 31 May. Violent resistance then increased, possibly six people being killed by mutineers, which threatened the battle-worthiness of the French armies, before order slowly returned by the end of June. The French strategy of breakthrough and decisive battle had failed disastrously and for the rest of 1917, the French armies resorted to a strategy of "healing and defence". Continuous and methodical battles were replaced by limited attacks followed by consolidation. A massive rearmament programme was begun to produce aircraft, heavy artillery, tanks and chemicals, which had similar goals to the Hindenburg Programme.
The parts of the Western Front where German defences were rebuilt on the new principles, or had naturally occurring features similar to the new principles, such as the Chemin des Dames, withstood the Franco-British attacks of the Nivelle Offensive in April 1917, although the cost in casualties was high. The rate of German infantry losses in these defences diminished, although this was also apparent in the rate of loss of the attackers, who were better organised and used more efficient methods, made possible by the increased flow of equipment and supplies to the Western Front, which had so concerned Ludendorff in September 1916. (In 1917 British artillery ammunition shortages ended and barrel-wear, from firing so many shells, became a problem.) At Verdun in December 1916, at Arras in April 1917 and at Messines in June, where the new German defensive principles of depth, camouflage and reverse-slope defences, dispersed methods of fortification and prompt reinforcement by divisions were not possible or had not been adopted in time, the British and French armies inflicted costly defeats on the Germans.
The German defensive strategy on the Western Front in 1917 succeeded in resisting the increase in the offensive power of the Entente without the loss of vital territory, but the attrition of German manpower was slowed rather than reversed. Unrestricted submarine warfare caused the United States to declare war on 6 April and failed to isolate Britain from its overseas sources of supply. The bombing offensive against Britain acted to divert Anglo-French air defence resources, which slowed the rate at which the German air service was outnumbered in France. By the end of the Third Battle of Ypres in November 1917, the effectiveness of the methods of defence introduced in 1917 had been eroded and the continuation of a defensive strategy in the west was made impossible. The defeat of Russia gave the German leadership a final opportunity to avoid defeat, rather than relying on the attempts to compete with Allied numerical and industrial superiority through economic warfare in the Atlantic and the domestic initiatives of the Hindenburg Programme, the Auxiliary Service Law and the temporary demobilisation of skilled workers from the army.
Casualties
The accuracy of Great War casualty statistics is disputed. The casualty data available refer to Western Front totals, as shown in Winston Churchill's The World Crisis (1923–29), and do not refer directly to the German withdrawal to the Hindenburg Line (Siegfriedstellung) or to losses that would be considered "normal wastage", occurring as a consequence of the existence of the Western Front rather than of particular military operations. British casualties in France from January to March 1917 were given alongside the French and German totals for the period.
Subsequent operations
The first attack of the Nivelle Offensive by the British First and Third armies came at Arras, north of the Hindenburg Line, on 9 April and inflicted a substantial defeat on the German 6th Army, which occupied obsolete defences on forward slopes. Vimy Ridge was captured and further south, the greatest depth of advance since trench-warfare began was achieved, surpassing the success of the French Sixth Army on 1 July 1916. German reinforcements were able to stabilise the front line, using both of the defensive methods endorsed in the new German training manual. The British continued the offensive, despite the difficulties of ground and German defensive tactics, in support of the French offensives on the Aisne to the south and then to keep German troops in the area while the Messines Ridge attack was being prepared. German casualties were heavy but fewer than the British losses of the Third and First armies.
During the Battle of Arras the British Fifth Army was intended to help the operations of the Third Army by pushing back German rear guards to the Siegfriedstellung (Hindenburg Line) and then attacking the position from Bullecourt to Quéant, some distance from the main Arras–Cambrai road. The German outpost villages from Doignies to Croisilles were captured on 2 April and an attack with Bullecourt in the centre was planned. The wire-cutting bombardment was delayed by transport difficulties behind the new British front line and the attack of the Third Army, which was originally intended to be simultaneous, took place on 9 April. A tank attack by the Fifth Army was improvised for 10 April to capture Riencourt and Hendecourt.
The attack was intended to begin before sunrise but the tanks were delayed by a blizzard and the attack was cancelled at the last minute; the withdrawal of the 4th Australian Division from its assembly positions was luckily obscured by a snowstorm. The cancellation did not reach the 62nd (2nd West Riding) Division on the left in time and several patrols were already in the German barbed wire when the order arrived. The attack was postponed for a day but only four of the twelve tanks in the attack were in position on time. The tanks that attacked lost direction and were quickly knocked out, leaving no gaps in the barbed wire for the infantry. Australian troops took a portion of the front Hindenburg trench and false reports of success led to cavalry being sent forward, where they were forced back by machine-gun fire, as were the Australians by a counter-attack. Total British casualties were severe: the 62nd (2nd West Riding) Division, the 4th Australian Brigade (many of whose men were taken prisoner) and the 12th Australian Brigade all suffered heavy losses; German casualties were much lighter.
Early on 15 April, elements of four German divisions attacked from the Siegfriedstellung (Hindenburg Line) between Havrincourt and Quéant, to occupy Noreuil, Lagnicourt, Morchies, Boursies, Doignies, Demicourt and Hermies until nightfall, to inflict casualties, to destroy British artillery so as to make a British attack in the area impossible and to attract British reserves from the Arras front further north. Lagnicourt was occupied for a short time and five British guns destroyed but the rest of the attack failed. Co-ordination between German infantry and artillery suffered from the hasty nature of the attack, for which planning had begun on 13 April. Several units were late and attacked on unfamiliar ground; German losses exceeded those of the British.
Labour was transferred to work on the defences from La Fère to Rethel and labour battalions were sent to work on the forward positions on the Aisne front on 23 February. The German strategic reserve had grown by the end of March and the Aisne front was reinforced with the 1st Army, released by Operation Alberich, and other divisions, which increased the number of line and reserve divisions on the Aisne by early April. The French Groupe d'armées du Nord (GAN) attacked the Hindenburg Line at St Quentin on 13 April with no success and the "decisive" offensive by the French Groupe d'armées de réserve (GAR) began on 16 April between Vailly and Rheims. The French breakthrough attempt was defeated but forced the Germans to abandon the area between Braye, Condé and Laffaux and withdraw to the Hindenburg Line from Laffaux Mill, along the Chemin des Dames to Courtecon. The German armies in France were still short of reserves, despite the retirements to the Hindenburg Line, and divisions depleted during the Nivelle Offensive and then replaced by those in reserve had to change places with the counter-attack divisions rather than be withdrawn altogether.
Another British attack at Bullecourt was planned after the failure of 11 April but postponed several times, until the Third Army further north had reached the river Sensée and there had been time for a thorough artillery preparation. By May the attack was intended to help the Third Army to advance, hold German troops in the area and assist the French army attacks on the Aisne. Two divisions were involved in the attack, with the first objective at the second Hindenburg trench, a second objective at the Fontaine–Quéant road and the final objective at the villages of Riencourt and Hendecourt. Many of the British transport and supply difficulties had been remedied, with the extension of railways and roads into the devastated area. The attack began on 3 May; part of the 2nd Australian Division reached the Hindenburg Line and established a foothold. Small parties of the 62nd Division reached the first objective and were cut off, the division suffering many casualties, and an attack by the 7th Division was driven back.
The battle in the 2nd Australian Division sector continued and the foothold in the Hindenburg Line was extended. The 7th Division continued to try to reach British parties which had got into Bullecourt and been isolated. A German counter-attack on 6 May was defeated but the engagement exhausted the 2nd Australian Division and the 62nd Division; serious losses had been inflicted on the 1st Australian and 7th divisions. The German 27th, 3rd Guard and 2nd Guard Reserve divisions and a regiment of the 207th Division had made six big counter-attacks and also had many casualties. The British attacked again on 7 May with the 7th Division towards Bullecourt and the 1st Australian Brigade west along the Hindenburg trenches, which met at the second objective. Next day the "Red Patch" was attacked again and a small part was held after German counter-attacks. The 5th Australian Division relieved the 2nd Australian Division by 10 May, while the battle in Bullecourt continued to the west, the 7th Division capturing the village except for the Red Patch on 12 May, while the 62nd Division advance was pushed back. The 58th Division relieved the Australians and British attacks on 13 May failed. A final German counter-attack was made to recapture all of Bullecourt and the Hindenburg trenches on 15 May. The attack failed except at Bullecourt, where the west of the village was regained. The 7th Division was relieved by part of the 58th Division, which attacked the Red Patch again on 17 May and captured the ruins, just before the Germans were able to withdraw, which ended the battle. The Fifth Army and the two German divisions principally engaged both lost heavily, the casualties in the regiments of five other German divisions engaged being unrecorded; total British losses for both Bullecourt operations were severe.
The Battle of Cambrai began with a secret deployment of British reinforcements for the attack. Instead of a long period of artillery registration (firing ranging shots before the attack) and wire-cutting, which would have warned the German defence that an assault was being prepared, massed artillery fire did not begin until the infantry–tank advance began on 20 November, using unregistered (predicted) fire. The British sent massed tanks to roll through the Siegfriedstellung (Hindenburg Line) barbed-wire fields, as a substitute for a long wire-cutting bombardment, and the ground assault was accompanied by a large number of ground-attack aircraft. The British attack broke through the Hindenburg Line but was contained in the rear battle zone by a reserve position, which had been built on the east side of the St Quentin canal on this part of the front. Preparations for a further advance were hampered by the obstacles of the Hindenburg defences, which had been crossed but which limited the routes by which the most advanced British forces could be supplied. The German defence quickly recovered and on 30 November began a counter-offensive, using a similar short bombardment, air attacks and storm troop infantry tactics, which was contained by the British, in some parts of the battlefield using the Hindenburg Line defences captured earlier.
A sequence of Allied offensives began with attacks by American and French armies on 26 September 1918 from Rheims to the Meuse, two British armies at Cambrai on 27 September and British, Belgian and French armies in Flanders on 28 September; on 29 September the British Fourth Army (including the US II Corps) attacked the Hindenburg Line from Holnon north to Vendhuille, while the French First Army attacked the area from St Quentin to the south. The British Third Army attacked further north and crossed the Canal du Nord at Masnières. In nine days British, French and US forces crossed the Canal du Nord, broke through the Hindenburg Line and took many prisoners. German troops were short of food, had worn out clothes and boots and the retreat back to the Hindenburg Line had terminally undermined their morale. The Allies had attacked with overwhelming material superiority, using combined-arms tactics, with a unified operational method and achieved a high tempo. On 4 October, the German government requested an armistice and on 8 October, the German armies were ordered to retire from the rest of the Siegfriedstellung (Hindenburg Line).
See also
Siegfried Line
Notes
Footnotes
References
Books
Theses
Further reading
Books
Theses
External links
The German Retreat and the Battle of Arras, Imperial War Museum
An interpretation of the Bullecourt photograph.
Breaking the Hindenburg Line, Australian War Memorial
Local history of the Hindenburg Line in the Arras sector, Hindenburg Line Museum
Military operations of World War I involving Germany
Military operations of World War I involving the United Kingdom
World War I sites in France
World War I defensive lines
1917 in France
Conflicts in 1917
Historic defensive lines
Kilim motifs (https://en.wikipedia.org/wiki/Kilim%20motifs)

Many motifs are used in traditional kilims, handmade flat-woven rugs, each with many variations. In Turkish Anatolia in particular, village women wove themes significant for their lives into their rugs, whether before marriage or during married life. Some motifs represent desires, such as for happiness and children; others, for protection against threats such as wolves (to the flocks) and scorpions, or against the evil eye. These motifs were often combined when woven into patterns on kilims. With the fading of tribal and village cultures in the 20th century, the meanings of kilim patterns have also faded.
In these tribal societies, women wove kilims at different stages of their lives, choosing themes appropriate to their own circumstances. Some of the motifs used are widespread across Anatolia and sometimes across other regions of West Asia, but patterns vary between tribes and villages, and rugs often expressed personal and social meaning.
Context
A Turkish kilim is a flat-woven rug from Anatolia. Although the name kilim is sometimes used loosely in the West to include all types of rug, such as cicim, palaz, soumak and zili (in fact any type other than pile carpets), the name kilim properly denotes a specific weaving technique. Cicim, palaz, soumak and zili are made using three groups of threads: longitudinal warps, crossing wefts and wrapping coloured threads. The wrapping threads give these rugs additional thickness and strength. Kilims, in contrast, are woven flat, using only warp and weft threads. Kilim patterns are created by winding the coloured weft threads backwards and forwards around pairs of warp threads, leaving the resulting weave completely flat. Kilims are therefore called flatweave or flat-woven rugs.
To create a sharp pattern, weavers usually end each pattern element at a particular thread, winding the coloured weft threads back around the same warps and leaving a narrow gap or slit. Such slit-woven kilims are prized by collectors for the crispness of their decoration. The motifs on kilims woven in this way are constrained to be somewhat angular and geometric.
In tribal societies, kilims were woven by women at different stages of their lives: before marriage, in readiness for married life; while married, for their children; and finally, for their own funerals, to be given to the mosque. Kilims thus had strong personal and social significance in tribal and village cultures, being made for personal and family use. Feelings of happiness or sorrow, hopes and fears were expressed in the weaving motifs. Many of these represent familiar household and personal objects, such as a hairband, a comb, an earring, a trousseau chest, a jug and a hook.
Meanings
The meanings expressed in kilims derive both from the individual motifs used, and by their pattern and arrangement in the rug as a whole. A few symbols are widespread across Anatolia as well as other regions including Persia and the Caucasus; others are confined to Anatolia.
An especially widely used motif is the elibelinde (hands on hips): an Anatolian symbol of the mother goddess, mother with child in womb, fertility, and abundance. Other motifs express the tribal weavers' desires for protection of their families' flocks from wolves, with the wolf's mouth or wolf's foot motif, or for safety from the sting of the scorpion. Several protective motifs, such as those for the dragon, scorpion and spider (sometimes called the crab or tortoise by carpet specialists), share the same basic diamond shape with a hooked or stepped boundary, often making them very difficult to distinguish.
Several motifs hope for the safety of the weaver's family from the evil eye (itself also used as a motif), which could be divided into four with a cross symbol, or averted with the symbol of a hook, a human eye, or an amulet (often a triangular package containing a sacred verse). The carpet expert Jon Thompson explains that such an amulet woven into a rug is not a theme: to the weaver, it actually is an amulet, conferring protection by its presence. In his words, to people in the village and tribal cultures that wove kilims, "the device in the rug has a materiality, it generates a field of force able to interact with other unseen forces and is not merely an intellectual abstraction."
Other motifs symbolised fertility, as with the trousseau chest motif or the explicit fertility motif. The motif for running water similarly depicts the resource literally. The desire to tie a family or lovers together could be depicted with a fetter motif. Similarly, a tombstone motif may indicate not simply death, but the desire to die rather than to part from the beloved. Several motifs represented the desire for good luck and happiness, as for instance the bird and the star or Solomon's seal. The oriental symbol of Yin/Yang is used for love and unison. Among the motifs used late in life, the Tree of Life symbolizes the desire for immortality. Many of the plants used to represent the Tree of Life can also be seen as symbols of fruitfulness, fertility, and abundance. Thus the pomegranate, a tree whose fruits carry many seeds, implies the desire for many children.
Symbols are often combined, as when the feminine elibelinde and the masculine ram's horn are each drawn twice, overlapping at the centre, forming a figure (one of the variants of the fertility motif) of the sacred union of the principles of the sexes.
Motifs
All these motifs can vary considerably in appearance according to the weaver. Colours, sizes and shapes can all be chosen according to taste and the tradition in a given village or tribe; further, motifs are often combined, as illustrated in the photographs above. To give some idea of this variability, a few alternative forms are shown in the table.
See also
Islamic geometric patterns
References
External links
Border motifs in oriental carpets
Textiles in folklore
Culture of Turkey
Visual motifs
Textile patterns
Turkish rugs and carpets
CARD9 (https://en.wikipedia.org/wiki/CARD9)

Caspase recruitment domain-containing protein 9 is an adaptor protein of the CARD-CC protein family, which in humans is encoded by the CARD9 gene. It mediates signals from pattern recognition receptors to activate pro-inflammatory and anti-inflammatory cytokines, regulating inflammation. Homozygous mutations in CARD9 are associated with defective innate immunity against fungi such as Candida and dermatophytes.
Function
CARD9 is a member of the CARD protein family, which is defined by the presence of a characteristic caspase-associated recruitment domain (CARD). This protein was identified by its selective association with the CARD domain of BCL10, a positive regulator of apoptosis and NF-κB activation. It is thought to function as a molecular scaffold for the assembly of a BCL10 signaling complex that activates NF-κB. Several alternatively spliced transcript variants have been observed, but their full-length nature is not clearly defined.
Clinical significance
In 2006, it became clear that CARD9 plays important roles within the innate immune response against yeasts. CARD9 mediates signals from so-called pattern recognition receptors (such as Dectin-1) to downstream signalling pathways such as NF-κB, and by this activates pro-inflammatory cytokines (TNF, IL-23, IL-6, IL-2) and an anti-inflammatory cytokine (IL-10), and subsequently an appropriate innate and adaptive immune response to clear an infection.
An autosomal recessive form of susceptibility to chronic mucocutaneous candidiasis was found in 2009 to be associated with homozygous mutations in CARD9.
A report of deep dermatophytosis and CARD9 deficiency in an Iranian family led to its subsequent identification in 17 people from Tunisian, Algerian, and Moroccan families with deep dermatophytosis.
CARD9 mutations have been associated with inflammatory diseases such as ankylosing spondylitis and inflammatory bowel disease (Crohn's disease and ulcerative colitis). A genetic variant, c.IVS11+1G>C, was found by Manuel Rivas, Mark Daly and colleagues to be protective against Crohn's disease, ulcerative colitis, and ankylosing spondylitis. CARD9 S12NΔ11 is a rare splice variant in which exon 11 of CARD9 is deleted. This allele, identified by deep sequencing of GWAS loci, results in a protein with a C-terminal truncation. In a functional follow-up study, human CARD9 isoforms re-expressed in murine Card9−/− bone marrow-derived dendritic cells (BMDCs) were assessed for cytokine production. BMDCs expressing the predisposing variant CARD9 S12N showed increased TNFα and IL-6 production compared to BMDCs expressing wild-type CARD9. In contrast, CARD9 Δ11 and CARD9 S12NΔ11, as well as the C-terminally truncated variant CARD9 V6, showed significantly impaired TNFα and IL-6 production. CARD9 Δ11 was found to have a dominant negative effect on CARD9 function when co-expressed with wild-type CARD9 in human and mouse dendritic cells.
References
External links
Further reading
Proteins | CARD9 | Chemistry | 689 |
425,002 | https://en.wikipedia.org/wiki/Gloss%20%28annotation%29 | A gloss is a brief notation, especially a marginal or interlinear one, of the meaning of a word or wording in a text. It may be in the language of the text or in the reader's language if that is different.
A collection of glosses is a glossary. A collection of medieval legal glosses, made by glossators, is called an apparatus. The compilation of glosses into glossaries was the beginning of lexicography, and the glossaries so compiled were in fact the first dictionaries. In modern times a glossary, as opposed to a dictionary, is typically found in a text as an appendix of specialized terms that the typical reader may find unfamiliar. Also, satirical explanations of words and events are called glosses. The German Romantic movement used the expression of gloss for poems commenting on a given other piece of poetry, often in the Spanish style.
Glosses were originally notes made in the margin or between the lines of a text in a classical language; the meaning of a word or passage is explained by the gloss. As such, glosses vary in thoroughness and complexity, from simple marginal notations of words one reader found difficult or obscure, to interlinear translations of a text with cross references to similar passages. Today parenthetical explanations in scientific writing and technical writing are also often called glosses. Hyperlinks to a glossary sometimes supersede them. In East Asian languages, ruby characters are glosses that indicate the pronunciation of logographic Chinese characters.
Etymology
Starting in the 14th century, a gloze in the English language was a marginal note or explanation, borrowed from French glose, which comes from medieval Latin glosa, classical glossa, meaning an obsolete or foreign word that needs explanation. Later, it came to mean the explanation itself. The Latin word comes from Greek glōssa 'tongue, language, obsolete or foreign word'. In the 16th century, the spelling was refashioned as gloss to reflect the original Greek form more closely.
In theology
Glosses and other marginal notes were a primary format used in medieval Biblical theology and were studied and memorized for their own merit. Many Biblical passages came to be associated with a particular gloss, whose truth was taken to be scriptural. Indeed, in one case, it is generally reckoned that an early gloss explicating the doctrine of the Trinity made its way into the Scriptural text itself, in the passage known as the "three heavenly witnesses" or the Comma Johanneum, which is present in the Vulgate Latin and the third and later editions of the Greek Textus Receptus collated by Erasmus (the first two editions excluded it for lack of manuscript evidence), but is absent from all modern critical reconstructions of the New Testament text, such as Westcott and Hort, Tischendorf, and Nestle-Aland.
In law
In the medieval legal tradition, the glosses on Roman law and Canon law created standards of reference, so-called sedes materiae 'seat of the matter'. In common law countries, the term "judicial gloss" refers to what is considered an authoritative or "official" interpretation of a statute or regulation by a judge. Judicial glosses are often very important in avoiding contradictions between statutes, and determining the constitutionality of various provisions of law.
In literature
A gloss, or glosa, is a verse in traditional Iberian literature and music which follows and comments on a refrain (the "mote"). See also villancico.
In philology
Glosses are of some importance in philology, especially if one language—usually, the language of the author of the gloss—has left few texts of its own. The Reichenau Glosses, for example, gloss the Latin Vulgate Bible in an early form of one of the Romance languages, and as such give insight into late Vulgar Latin at a time when that language was not often written down. A series of glosses in the Old English language to Latin Bibles give us a running translation of Biblical texts in that language; see Old English Bible translations. Glosses of Christian religious texts are also important for our knowledge of Old Irish. Glosses frequently shed valuable light on the vocabulary of otherwise little attested languages; they are less reliable for syntax, because many times the glosses follow the word order of the original text, and translate its idioms literally.
In linguistics
In linguistics, a simple gloss in running text may be marked by quotation marks and follow the transcription of a foreign word. Single quotes are a widely used convention. For example:
A Cossack longboat is called a chaika 'seagull'.
The moose gains its name from the Algonquian mus or mooz ('twig eater').
A longer or more complex transcription may rely upon an interlinear gloss. Such a gloss may be placed between a text and its translation when it is important to understand the structure of the language being glossed, and not just the overall meaning of the passage.
Glossing sign languages
Sign languages are typically transcribed word-for-word by means of a gloss written in the predominant oral language in all capitals; for example, American Sign Language and Auslan would be written in English. Prosody is often glossed as superscript words, with its scope indicated by brackets.
Pure fingerspelling is usually indicated by hyphenation. Fingerspelled words that have been lexicalized (that is, fingerspelling sequences that have entered the sign language as linguistic units and that often have slight modifications) are indicated with a hash. For example, W-I-K-I indicates a simple fingerspelled word, but #JOB indicates a lexicalized unit, produced like J-O-B, but faster, with a barely perceptible O and turning the "B" hand palm side in, unlike a regularly fingerspelled "B".
References
Further reading
Meinolf Schumacher: "…der kann den texst und och die gloß. Zum Wortgebrauch von 'Text' und 'Glosse' in deutschen Dichtungen des Spätmittelalters." In 'Textus' im Mittelalter. Komponenten und Situationen des Wortgebrauchs im schriftsemantischen Feld, edited by Ludolf Kuchenbuch and Uta Kleine, 207–27, Göttingen: Vandenhoeck & Ruprecht, 2006 (PDF).
External links
Documents
Lexicography
Linguistics
Book design | Gloss (annotation) | Engineering | 1,319 |
22,860,473 | https://en.wikipedia.org/wiki/Active%20hard-drive%20protection | In computer hardware, active hard-drive protection refers to technology that attempts to avoid or reduce mechanical damage to hard disk drives by preparing the disk prior to impact. This approach is mainly used in laptop computers that are frequently carried around and more prone to impacts than desktop computers.
Implementation
Usually, the system consists of accelerometers that alert the system when excess acceleration or vibration is detected. The software then tells the hard disk drive to unload its heads to prevent them from coming in contact with the platters, thus potentially preventing a head crash.
Many laptop vendors have implemented this technology under different names. Some hard-disk drives also include this technology, needing no cooperation from the system.
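As a rough illustration of the mechanism described above, the monitoring loop can be sketched as follows. This is a hypothetical sketch, not any vendor's implementation: `read_accel`, `park_heads`, and `unpark_heads` stand in for the platform-specific accelerometer interface and the drive's head-unload command, and the thresholds are illustrative.

```python
import time

FREE_FALL_G = 0.4   # magnitude (in g) below which free fall is assumed
SETTLE_SECS = 2.0   # keep heads parked this long after motion settles

def magnitude(ax, ay, az):
    """Total acceleration in g; ~1.0 at rest, ~0.0 in free fall."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def monitor(read_accel, park_heads, unpark_heads):
    """Poll the accelerometer and park/unpark the drive heads."""
    parked = False
    last_event = 0.0
    while True:
        ax, ay, az = read_accel()              # platform-specific sensor read
        if magnitude(ax, ay, az) < FREE_FALL_G:
            if not parked:
                park_heads()                   # issue the head-unload command
                parked = True
            last_event = time.monotonic()
        elif parked and time.monotonic() - last_event > SETTLE_SECS:
            unpark_heads()                     # resume normal operation
            parked = False
        time.sleep(0.005)                      # poll at roughly 200 Hz
```

Daemons such as `hdapsd` on Linux laptops implement a loop of this general shape on top of the vendor's accelerometer driver.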
See also
Hard disk drive failure
Head crash
Sudden Motion Sensor
References
Data security
Hard disk drives | Active hard-drive protection | Engineering | 157 |
73,016 | https://en.wikipedia.org/wiki/Svante%20P%C3%A4%C3%A4bo | Svante Pääbo (; born 20 April 1955) is a Swedish geneticist and Nobel Laureate who specialises in the field of evolutionary genetics. As one of the founders of paleogenetics, he has worked extensively on the Neanderthal genome. In 1997, he became founding director of the Department of Genetics at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Since 1999, he has been an honorary professor at Leipzig University; he currently teaches molecular evolutionary biology at the university. He is also an adjunct professor at Okinawa Institute of Science and Technology, Japan.
In 2022, he was awarded the Nobel Prize in Physiology or Medicine "for his discoveries concerning the genomes of extinct hominins and human evolution".
Education and early life
Pääbo was born in Stockholm, Sweden, in 1955 and grew up there with his mother, Estonian chemist Karin Pääbo (; 1925–2013), who had escaped from the Soviet invasion in 1944 and arrived in Sweden as a refugee during World War II. He was born through an extramarital affair of his father, Swedish biochemist Sune Bergström (1916–2004), who, like his son, became a recipient of the Nobel Prize in Physiology or Medicine (in 1982). Pääbo is his mother's only child; he has via his father's marriage a half-brother (also born in 1955).
Pääbo grew up as a native Swedish speaker. In a 2012 interview with the Estonian newspaper Eesti Päevaleht, he said that he self-identifies as a Swede, but has a "special relationship with Estonia".
In 1975, Pääbo began studying at Uppsala University, serving one year in the Swedish Defense Forces attached to the School of Interpreters. Pääbo earned his Ph.D. from Uppsala University in 1986 for research investigating how the E19 protein of adenoviruses modulates the immune system.
Research and career
Pääbo is known as one of the founders of paleogenetics, a discipline that uses genetics to study early humans and other ancient species.
From 1986 to 1987, he did postdoctoral research at the Institute for Molecular Biology II, University of Zurich, Switzerland.
As an EMBO Postdoctoral Fellow, Pääbo moved to the United States in 1987, accepting a position as a postdoctoral researcher in biochemistry at the University of California, Berkeley, where he joined Allan Wilson's lab and worked on the genome of extinct mammals.
In 1990, he returned to Europe to become professor of general biology at the University of Munich, and, in 1997, he became founding director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
In 1997, Pääbo and colleagues reported their successful sequencing of Neanderthal mitochondrial DNA (mtDNA), originating from a specimen found in Feldhofer grotto in the Neander valley.
In August 2002, Pääbo's department published findings about the "language gene", FOXP2, which is mutated in some individuals with language disabilities.
In 2006, Pääbo announced a plan to reconstruct the entire genome of Neanderthals. In 2007, he was named one of Time magazine's 100 most influential people of the year.
In February 2009, at the Annual Meeting of the American Association for the Advancement of Science (AAAS) in Chicago, it was announced that the Max Planck Institute for Evolutionary Anthropology had completed the first draft version of the Neanderthal genome. Over 3 billion base pairs were sequenced in collaboration with the 454 Life Sciences Corporation.
In March 2010, Pääbo and his coworkers published a report about the DNA analysis of a finger bone found in the Denisova Cave in Siberia; the results suggest that the bone belonged to an extinct member of the genus Homo that had not yet been recognised, the Denisova hominin. Pääbo first wanted to classify the Denisovans as a species of their own, separate from modern humans and Neanderthals but changed his mind after peer-review.
Pääbo's doctoral student Viviane Slon was able to successfully map the Denisovan genome, clarifying geographic distribution and admixtures in archaic humans.
In May 2010, Pääbo and his colleagues published a draft sequence of the Neanderthal genome in the journal Science. He and his team also concluded that there was probably interbreeding between Neanderthals and Eurasian (but not Sub-Saharan African) humans. There is general mainstream support in the scientific community for this theory of interbreeding between archaic and modern humans. This admixture of modern human and Neanderthal genes is estimated to have occurred roughly between 50,000 and 60,000 years ago, in the Middle East.
In 2014, he published the book Neanderthal Man: In Search of Lost Genomes where he, in the mixed form of a memoir and popular science, tells the story of the research effort to map the Neanderthal genome combined with his thoughts on human evolution.
In 2020, Hugo Zeberg and Svante Pääbo determined that greater vulnerability to COVID-19 and a higher incidence of hospitalisation are associated with genetic variants in a region of chromosome 3 that derives from European Neanderthal heritage; carriers of that haplotype are at greater risk of developing a more severe form of the disease. The findings were described in a Nature article by Zeberg, of the Karolinska Institutet, and Pääbo, of the Max Planck Institute.
Pääbo has an h-index of 167 according to Google Scholar and of 133 according to Scopus.
Awards and honours
In 1992, he received the Gottfried Wilhelm Leibniz Prize of the Deutsche Forschungsgemeinschaft, which is the highest honour awarded in German research. Pääbo was elected a member of the Royal Swedish Academy of Sciences in 2000, and in 2004 was elected an international member of the National Academy of Sciences. In 2005, he received the prestigious Louis-Jeantet Prize for Medicine. In 2008, Pääbo was added to the members of the Order Pour le Mérite for Sciences and Arts. In the same year, he received the Golden Plate Award of the American Academy of Achievement. In October 2009, the Foundation For the Future announced that Pääbo had been awarded the 2009 Kistler Prize for his work isolating and sequencing ancient DNA, beginning in 1984 with a 2,400-year-old mummy. In June 2010, the Federation of European Biochemical Societies (FEBS) awarded him the Theodor Bücher Medal for outstanding achievements in Biochemistry and Molecular Biology. In 2013, he received the Gruber Prize in Genetics for groundbreaking research in evolutionary genetics. In 2014, Pääbo was awarded the Swedish Learning Ladder Prize. In June 2015, he was awarded the degree of DSc (honoris causa) at NUI Galway. He was elected a Foreign Member of the Royal Society in 2016, and in 2017, was awarded the Dan David Prize. In 2018, he received the Princess of Asturias Awards in the category of Scientific Research and the Körber European Science Prize, in 2020 the Japan Prize, in 2021 the Massry Prize and in 2022 the Nobel Prize in Physiology or Medicine for sequencing the first Neanderthal genome.
Personal life
Pääbo wrote in his 2014 book Neanderthal Man: In Search of Lost Genomes that he is bisexual. He assumed he was gay until he met Linda Vigilant, an American primatologist and geneticist whose "boyish charms" attracted him. They have co-authored many papers, are married and raising a son and a daughter together in Leipzig.
Distinctions
Sweden: Commander Grand Cross of the Royal Order of the Polar Star (21 March 2024) (KmstkNO)
See also
Origins of Us (2011 BBC series)
First Peoples (2015 PBS series)
List of Nobel laureates in Physiology or Medicine
List of Swedish Nobel laureates
References
External links
Svante Pääbo at the Max Planck Society
Human Evolutionary Genomics Unit (Svante Pääbo). Okinawa Institute of Science and Technology Graduate University
1955 births
Living people
Scientists from Stockholm
Uppsala University alumni
Members of the Royal Swedish Academy of Sciences
Members of the French Academy of Sciences
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
Knights Commander of the Order of Merit of the Federal Republic of Germany
Recipients of the Pour le Mérite (civil class)
Population geneticists
Swedish geneticists
Paleogeneticists
Gottfried Wilhelm Leibniz Prize winners
Swedish people of Estonian descent
Recipients of the Order of the Cross of Terra Mariana, 3rd Class
Recipients of the Lomonosov Gold Medal
Swedish LGBTQ scientists
Swedish bisexual men
LGBTQ Nobel laureates
Bisexual academics
Bisexual scientists
Max Planck Institute for Evolutionary Anthropology
Nobel laureates in Physiology or Medicine
Swedish Nobel laureates
Max Planck Institute directors | Svante Pääbo | Technology | 1,821 |
36,344,041 | https://en.wikipedia.org/wiki/YTH%20protein%20domain | In molecular biology, the protein domain YTH refers to a member of the YTH family that has been shown to selectively remove transcripts of meiosis-specific genes expressed in mitotic cells. They also play a role in the epitranscriptome as reader proteins for m6A.
The YTH domain is conserved across all eukaryotes; its conservation suggests that this C-terminal region plays a critical role in relaying cytosolic Ca²⁺ signals to the nucleus, thereby regulating gene expression.
Function/mechanism
It has been speculated that in higher-order eukaryotic organisms, YTH-family members may be involved in similar mechanisms to suppress gene expression during gametogenesis or general silencing. The rat protein YT521-B is a tyrosine-phosphorylated nuclear protein that interacts with the nuclear transcriptosomal component scaffold attachment factor B and with the 68 kDa Src substrate associated during mitosis, Sam68. In vivo splicing assays demonstrated that YT521-B modulates alternative splice site selection in a concentration-dependent manner. Additionally, the YTH domain is also thought to play a role in RNA binding.
The YTH domain proteins also serve as readers of the N6-methyladenosine (m6A) mRNA modification by scanning the mRNA to find the modified bases. The YTH domain proteins YTHDF1, YTHDF2, and YTHDF3 can bind to modified bases and the surrounding bases. These YTH proteins recognize RRACH sequences (with the A being the modified m6A, R being a purine, and H being an A, C, or U) and use these sequences as binding sites, allowing them to "read" the modification. The YTHDF2 proteins promote deadenylation of m6A-marked transcripts, destabilizing the RNA and preventing translation. The YTHDF1 proteins have the opposite effect and promote the initiation of translation through their interactions with the 40S ribosomal subunit.
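To make the RRACH consensus concrete, it can be expressed as a short motif search. This is an illustrative sketch, not a tool used in the studies cited here; the function name and the example sequences are made up.

```python
import re

# RRACH consensus: R = A/G, the third base is the (potentially) methylated A,
# H = A/C/U. A lookahead is used so that overlapping sites are also found.
RRACH = re.compile(r"(?=([AG][AG]AC[ACU]))")

def m6a_candidate_sites(rna):
    """Return 0-based positions of the central A of each RRACH match."""
    seq = rna.upper().replace("T", "U")  # accept DNA-style input too
    return [m.start() + 2 for m in RRACH.finditer(seq)]
```

For example, `m6a_candidate_sites("AGGACUGAACA")` reports the central adenosines of the two RRACH windows GGACU and GAACA. Note that an RRACH match marks only a candidate site; actual methylation is determined experimentally.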
Structure
The domain is predicted to be a mixed alpha/beta-fold containing four alpha helices and six beta strands. Crystallography studies of these YTH domain proteins show that they have a common hydrophobic region that has been proven to participate in the proteins binding to m6A since mutations in this region decrease binding affinity.
Plant
In plant cells, environmental stimuli such as light, pathogens, hormones, and abiotic stresses elicit changes in cytosolic calcium levels, but little is known of the cytosolic-nuclear Ca²⁺-signaling pathway by which gene regulation occurs to respond appropriately to the stress. It has been demonstrated that two novel Arabidopsis thaliana (mouse-ear cress) proteins, ECT1 and ECT2, specifically associate with Calcineurin B-Like-Interacting Protein Kinase 1 (CIPK1), a member of a family of Ser/Thr protein kinases that interact with the calcineurin B-like Ca²⁺-binding proteins. These two proteins contain a very similar C-terminal region (180 amino acids in length, 81% similarity), which is required and sufficient both for interaction with CIPK1 and for translocation to the nucleus.
References
Protein domains | YTH protein domain | Biology | 686 |
1,474,961 | https://en.wikipedia.org/wiki/Adverse%20effect | An adverse effect is an undesired harmful effect resulting from a medication or other intervention, such as surgery. An adverse effect may be termed a "side effect", when judged to be secondary to a main or therapeutic effect. The term complication is similar to adverse effect, but the latter is typically used in pharmacological contexts, or when the negative effect is expected or common. If the negative effect results from an unsuitable or incorrect dosage or procedure, this is called a medical error and not an adverse effect. Adverse effects are sometimes referred to as "iatrogenic" because they are generated by a physician/treatment. Some adverse effects occur only when starting, increasing or discontinuing a treatment.
Using a drug or other medical intervention which is contraindicated may increase the risk of adverse effects. Adverse effects may cause complications of a disease or procedure and negatively affect its prognosis. They may also lead to non-compliance with a treatment regimen. Adverse effects of medical treatment resulted in 142,000 deaths globally in 2013, up from 94,000 deaths in 1990.
The harmful outcome is usually indicated by some result such as morbidity, mortality, alteration in body weight, levels of enzymes, loss of function, or as a pathological change detected at the microscopic, macroscopic or physiological level. It may also be indicated by symptoms reported by a patient. Adverse effects may cause a reversible or irreversible change, including an increase or decrease in the susceptibility of the individual to other chemicals, foods, or procedures, such as drug interactions.
Classification
In terms of drugs, adverse events may be defined as: "Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment."
In clinical trials, a distinction is made between an adverse event and a serious adverse event. Generally, any event which causes death, permanent damage, birth defects, or requires hospitalization is considered a serious adverse event. The results of trials are often included in the labelling of the medication to provide information both for patients and the prescribing physicians.
The term "life-threatening" in the context of a serious adverse event refers to an event in which the patient was at risk of death at the time of the event; it does not refer to an event which hypothetically might have caused death if it were more severe.
Reporting systems
In many countries, adverse effects are required by law to be reported, researched in clinical trials and included into the patient information accompanying medical devices and drugs for sale to the public. Investigators in human clinical trials are obligated to report these events in clinical study reports. Research suggests that these events are often inadequately reported in publicly available reports. Because of the lack of these data and uncertainty about methods for synthesising them, individuals conducting systematic reviews and meta-analyses of therapeutic interventions often unknowingly overemphasise health benefit. To balance the overemphasis on benefit, scholars have called for more complete reporting of harm from clinical trials.
United Kingdom
The Yellow Card Scheme is a United Kingdom initiative run by the Medicines and Healthcare products Regulatory Agency (MHRA) and the Commission on Human Medicines (CHM) to gather information on adverse effects to medicines. This includes all licensed medicines, from medicines issued on prescription to medicines bought over the counter from a supermarket. The scheme also includes all herbal supplements and unlicensed medicines found in cosmetic treatments. Adverse drug reactions (ADRs) can be reported by a number of health care professionals including physicians, pharmacists and nurses, as well as patients.
United States
In the United States several reporting systems have been built, such as the Vaccine Adverse Event Reporting System (VAERS), the Manufacturer and User Facility Device Experience Database (MAUDE) and the Special Nutritionals Adverse Event Monitoring System. MedWatch is the main reporting center, operated by the Food and Drug Administration.
Australia
In Australia, adverse effect reporting is administered by the Adverse Drug Reactions Advisory Committee (ADRAC), a subcommittee of the Australian Drug Evaluation Committee (ADEC). Reporting is voluntary, and ADRAC requests healthcare professionals to report all adverse reactions to its current drugs of interest, and serious adverse reactions to any drug. ADRAC publishes the Australian Adverse Drug Reactions Bulletin every two months. The Government's Quality Use of Medicines program is tasked with acting on this reporting to reduce and minimize the number of preventable adverse effects each year.
New Zealand
Adverse reaction reporting is an important component of New Zealand's pharmacovigilance activities. The Centre for Adverse Reactions Monitoring (CARM) in Dunedin is New Zealand's national monitoring centre for adverse reactions. It collects and evaluates spontaneous reports of adverse reactions to medicines, vaccines, herbal products and dietary supplements from health professionals in New Zealand. Currently the CARM database holds over 80,000 reports and provides New Zealand-specific information on adverse reactions to these products, and serves to support clinical decision making when unusual symptoms are thought to be therapy related.
Canada
In Canada, adverse reaction reporting is an important component of the surveillance of marketed health products conducted by the Health Products and Food Branch (HPFB) of Health Canada. Within HPFB, the Marketed Health Products Directorate leads the coordination and implementation of consistent monitoring practices with regards to assessment of signals and safety trends, and risk communications concerning regulated marketed health products.
MHPD also works closely with international organizations to facilitate the sharing of information. Adverse reaction reporting is mandatory for the industry and voluntary for consumers and health professionals.
Limitations
In principle, medical professionals are required to report all adverse effects related to a specific form of therapy. In practice, it is at the discretion of the professional to determine whether a medical event is at all related to the therapy. As a result, routine adverse effects reporting often may not include long-term and subtle effects that may ultimately be attributed to a therapy.
Part of the difficulty is identifying the source of a complaint. A headache in a patient taking medication for influenza may be caused by the underlying disease or may be an adverse effect of the treatment. In patients with end-stage cancer, death is a very likely outcome and whether the drug is the cause or a bystander is often difficult to discern.
By situation
Medical procedures
Surgery may have a number of undesirable or harmful effects, such as infection, hemorrhage, inflammation, scarring, loss of function, or changes in local blood flow. They can be reversible or irreversible, and a compromise must be found by the physician and the patient between the beneficial or life-saving consequences of surgery versus its adverse effects. For example, a limb may be lost to amputation in case of untreatable gangrene, but the patient's life is saved. Presently, one of the greatest advantages of minimally invasive surgery, such as laparoscopic surgery, is the reduction of adverse effects.
Other nonsurgical physical procedures, such as high-intensity radiation therapy, may cause burns and alterations in the skin. In general, these therapies try to avoid damage to healthy tissues while maximizing the therapeutic effect.
Vaccination may have adverse effects due to the nature of its biological preparation, sometimes using attenuated pathogens and toxins. Common adverse effects may be fever, malaise and local reactions in the vaccination site. Very rarely, there is a serious adverse effect, such as eczema vaccinatum, a severe, sometimes fatal complication which may result in persons who have eczema or atopic dermatitis.
Diagnostic procedures may also have adverse effects, depending much on whether they are invasive, minimally invasive or noninvasive. For example, allergic reactions to radiocontrast materials often occur, and a colonoscopy may cause the perforation of the intestinal wall.
Medications
Adverse effects can occur as a collateral or side effect of many interventions, but they are particularly important in pharmacology, due to its wider, and sometimes uncontrollable, use by way of self-medication. Thus, responsible drug use becomes an important issue here. Adverse effects, like therapeutic effects of drugs, are a function of dosage or drug levels at the target organs, so they may be avoided or decreased by means of careful and precise pharmacokinetics, the change of drug levels in the organism in function of time after administration.
Adverse effects may also be caused by drug interaction. This often occurs when patients fail to inform their physician and pharmacist of all the medications they are taking, including herbal and dietary supplements. The new medication may interact agonistically or antagonistically (potentiate or decrease the intended therapeutic effect), causing significant morbidity and mortality around the world. Drug-drug and food-drug interactions may occur, and so-called "natural drugs" used in alternative medicine can have dangerous adverse effects. For example, extracts of St John's wort (Hypericum perforatum), a phytotherapic used for treating mild depression are known to cause an increase in the cytochrome P450 enzymes responsible for the metabolism and elimination of many drugs, so patients taking it are likely to experience a reduction in blood levels of drugs they are taking for other purposes, such as cancer chemotherapeutic drugs, protease inhibitors for HIV and hormonal contraceptives.
The scientific field of activity associated with drug safety is increasingly government-regulated, and is of major concern for the public, as well as to drug manufacturers. The distinction between adverse and nonadverse effects is a major undertaking when a new drug is developed and tested before marketing it. This is done in toxicity studies to determine the nonadverse effect level (NOAEL). These studies are used to define the dosage to be used in human testing (phase I), as well as to calculate the maximum admissible daily intake. Imperfections in clinical trials, such as insufficient number of patients or short duration, sometimes lead to public health disasters, such as those of fenfluramine (the so-called fen-phen episode), thalidomide and, more recently, of cerivastatin (Baycol, Lipobay) and rofecoxib (Vioxx), where drastic adverse effects were observed, such as teratogenesis, pulmonary hypertension, stroke, heart disease, neuropathy, and a significant number of deaths, causing the forced or voluntary withdrawal of the drug from the market.
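As a simplified illustration of the NOAEL-based calculation mentioned above, regulators conventionally divide the animal NOAEL by uncertainty factors, commonly a default of 10 for interspecies differences times 10 for interindividual variation. The function names and the 60 kg default body weight below are illustrative; real assessments use case-specific factors.

```python
def acceptable_daily_intake(noael_mg_per_kg, interspecies=10, intraspecies=10):
    """ADI in mg per kg body weight per day, derived from an animal NOAEL.

    The 10 x 10 = 100-fold default uncertainty factor is a common regulatory
    convention; additional factors may be applied for data gaps or severity.
    """
    return noael_mg_per_kg / (interspecies * intraspecies)

def max_daily_intake(noael_mg_per_kg, body_weight_kg=60, **factors):
    """Total allowable daily intake (mg/day) for a person of given weight."""
    return acceptable_daily_intake(noael_mg_per_kg, **factors) * body_weight_kg
```

For example, a NOAEL of 50 mg/kg/day with the default factors gives an ADI of 0.5 mg/kg/day, or 30 mg/day for a 60 kg adult.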
Most drugs have a large list of nonsevere or mild adverse effects which do not rule out continued usage. These effects, which have a widely variable incidence according to individual sensitivity, include nausea, dizziness, diarrhea, malaise, vomiting, headache, dermatitis, dry mouth, etc. These can be considered a form of pseudo-allergic reaction, as not all users experience these effects; many users experience none at all.
The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) warns that people with dementia are more likely to experience adverse effects, and that they are less likely to be able to reliably report symptoms.
Examples with specific medications
Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug (this is a case where the adverse effect has been used legally and illegally for performing abortions)
Addiction to many sedatives and analgesics, such as diazepam, morphine, etc.
Birth defects associated with thalidomide
Bleeding of the intestine associated with aspirin therapy
Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx)
Deafness and kidney failure associated with gentamicin (an antibiotic)
Death, following sedation, in children using propofol (Diprivan)
Depression or hepatic injury caused by interferon
Diabetes caused by atypical antipsychotic medications (neuroleptic psychiatric drugs)
Diarrhea caused by the use of orlistat (Xenical)
Erectile dysfunction associated with many drugs, such as antidepressants
Fever associated with vaccination
Glaucoma associated with corticosteroid-based eye drops
Hair loss and anemia may be caused by chemotherapy against cancer, leukemia, etc.
Headache following spinal anaesthesia
Hypertension in ephedrine users, which prompted the FDA to remove the dietary supplement status of ephedra extracts
Insomnia caused by stimulants, methylphenidate (Ritalin), Adderall, etc.
Lactic acidosis associated with the use of stavudine (Zerit, for HIV therapy) or metformin (for diabetes)
Mania caused by corticosteroids
Liver damage from paracetamol
Melasma and thrombosis associated with use of estrogen-containing hormonal contraception, such as the combined oral contraceptive pill
Priapism associated with the use of sildenafil
Rhabdomyolysis associated with statins (anticholesterol drugs)
Seizures caused by withdrawal from benzodiazepines
Drowsiness or increase in appetite due to antihistamine use. Some antihistamines are used in sleep aids explicitly because they cause drowsiness.
Stroke or heart attack associated with sildenafil (Viagra), when used with nitroglycerin
Suicide, increased tendency associated with the use of fluoxetine and other selective serotonin reuptake inhibitor (SSRI) antidepressants
Tardive dyskinesia associated with use of metoclopramide and many antipsychotic medications
Controversies
Sometimes, putative medical adverse effects are regarded as controversial and generate heated discussions in society and lawsuits against drug manufacturers. One example is the controversy over whether autism was linked to the MMR vaccine (or to thiomersal, a mercury-based preservative used in some vaccines). No link has been found in several large studies, and despite the removal of thiomersal from most early childhood vaccines beginning with those manufactured in 2003, the rate of autism has not decreased as would be expected if it had been the causative agent.
Another instance is the potential adverse effects of silicone breast implants, which led to class actions brought by tens of thousands of plaintiffs against manufacturers of gel-based implants over allegations of damage to the immune system that have not been conclusively proven. In 1998, Dow Corning settled its remaining suits for $3.2 billion and went into bankruptcy.
Due to the exceedingly high impact on public health of widely used medications, such as hormonal contraception and hormone replacement therapy, which may affect millions of users, even marginal probabilities of adverse effects of a severe nature, such as breast cancer, have led to public outcry and changes in medical therapy, even though their benefits largely surpass the statistical risks.
See also
Adverse drug reaction
Biosafety
Classification of Pharmaco-Therapeutic Referrals
Consultant pharmacist
Drug interaction
EudraVigilance
Evidence-based medicine
List of pharmaceutical companies
List of withdrawn drugs
Medical algorithm
Medical prescription
Nocebo
Patient safety
Perioperative mortality
Pharmacotoxicology
Placebo
Pleiotropy (drugs)
Polypharmacy
Toxicology
References
External links
Patient Safety Network – includes a glossary and articles on adverse effects, drug reactions, medical error, iatrogenesis, among others.
Australian Adverse Drug Reactions Bulletin – published bimonthly
MedEffect Canada (Health Canada)
Medication Errors—from the U.S. Food and Drug Administration.
Medical Product Safety Information – MedWatch lists safety alerts for drugs, biologics, devices and dietary supplements, recalls, market withdrawals, public health advisories and links
Medical Devices Safety National Library of Medicine (Medline Plus, useful lists of conventional drug and medical device articles and websites)
When Medicine Hurts Instead of Helps – June 1998 report by the Alliance for Aging Research.
Medical terminology
Clinical pharmacology
Patient safety
Effects of external causes
Drug safety
Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce.
Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising.
Biochemical process of fermentation of sucrose
The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process.
C6H12O6 + 2 ADP + 2 Pi → 2 C2H5OH + 2 CO2 + 2 ATP
Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules.
C12H22O11 + H2O → 2 C6H12O6 (catalyzed by invertase)
Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation:
C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+
CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis:
1. CH3COCOO− + H+ → CH3CHO + CO2
catalyzed by pyruvate decarboxylase
2. CH3CHO + NADH + H+ → C2H5OH + NAD+
This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast).
As shown by the reaction equation, glycolysis causes the reduction of two molecules of NAD+ to NADH. Two ADP molecules are also converted to two ATP and two water molecules via substrate-level phosphorylation.
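The overall equation above fixes the theoretical mass yield of fermentation, which can be checked with a short calculation (the molar masses are standard values, not taken from the text):

```python
# Mass balance for the overall fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2.
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_CO2 = 44.01       # g/mol

def theoretical_yields(glucose_grams):
    """Ethanol and CO2 masses from complete fermentation of glucose,
    assuming no side products or biomass formation."""
    mol = glucose_grams / M_GLUCOSE
    return 2 * mol * M_ETHANOL, 2 * mol * M_CO2

ethanol_g, co2_g = theoretical_yields(100.0)
print(f"100 g glucose -> {ethanol_g:.1f} g ethanol + {co2_g:.1f} g CO2")
```

By mass, roughly 51% of the glucose ends up as ethanol and 49% as carbon dioxide; mass is conserved, since 2 × 46.07 + 2 × 44.01 = 180.16.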
Related processes
Fermentation of sugar to ethanol and CO2 can also be done by Zymomonas mobilis; however, the pathway is slightly different, since pyruvate is formed not by glycolysis but by the Entner–Doudoroff pathway.
Other microorganisms can produce ethanol from sugars by fermentation but often only as a side product. Examples are
Heterolactic acid fermentation, in which Leuconostoc bacteria produce lactate, ethanol, and CO2
Mixed acid fermentation, in which Escherichia produce ethanol mixed with lactate, acetate, succinate, formate, CO2, and H2
2,3-butanediol fermentation by Enterobacter, producing ethanol, butanediol, lactate, formate, CO2, and H2
Effect of oxygen
Fermentation does not require oxygen. If oxygen is present, some species of yeast (e.g., Kluyveromyces lactis or Kluyveromyces lipolytica) will oxidize pyruvate completely to carbon dioxide and water in a process called cellular respiration; hence these species of yeast produce ethanol only in an anaerobic environment. This phenomenon is known as the Pasteur effect.
However, many yeasts, such as the commonly used baker's yeast Saccharomyces cerevisiae or the fission yeast Schizosaccharomyces pombe, ferment rather than respire under certain conditions, even in the presence of oxygen. In winemaking this is known as the counter-Pasteur effect. These yeasts will produce ethanol even under aerobic conditions if they are provided with the right kind of nutrition. During batch fermentation, the rate of ethanol production per milligram of cell protein is maximal for a brief period early in the process and declines progressively as ethanol accumulates in the surrounding broth. Studies demonstrate that the removal of this accumulated ethanol does not immediately restore fermentative activity, and they provide evidence that the decline in metabolic rate is due to physiological changes (including possible ethanol damage) rather than to the presence of ethanol. Several potential causes for the decline in fermentative activity have been investigated: viability remained at or above 90%, internal pH remained near neutrality, and the specific activities of the glycolytic and alcohologenic enzymes (measured in vitro) remained high throughout batch fermentation. None of these factors appears to be causally related to the fall in fermentative activity during batch fermentation.
Bread baking
Ethanol fermentation causes bread dough to rise. Yeast organisms consume sugars in the dough and produce ethanol and carbon dioxide as waste products. The carbon dioxide forms bubbles in the dough, expanding it to a foam. Less than 2% ethanol remains after baking.
More recently, a group in Germany has been doing the opposite, converting stale bread into ethanol.
Alcoholic beverages
Ethanol contained in alcoholic beverages is produced by means of fermentation induced by yeast. Liquors are distilled from grains, fruits, vegetables, or sugar that have already gone through alcoholic fermentation.
Alcohol products:
Natural sugars present in grapes;
Fermented: Wine; cider and perry are produced by similar fermentation of the natural sugars in apples and pears, respectively; other fruit wines are produced from the fermentation of the sugars in other kinds of fruit.
Liquors: Brandy and eaux de vie (e.g. slivovitz) are produced by distillation of these fruit-fermented beverages.
Mead is produced by fermentation of the natural sugars present in honey.
Grain starches that have been converted to sugar by the enzyme amylase, which is present in grain kernels that have been malted (i.e. germinated). Other sources of starch (e.g. potatoes and unmalted grain) may be added to the mixture, as the amylase will act on those starches as well. In a few countries, fermentation is traditionally induced instead with salivary amylase.
Fermented: Beer
Liquors: Whiskey, and sometimes vodka. Gin and related beverages are produced by the addition of flavoring agents to a vodka-like feedstock during distillation.
Rice grain starches converted to sugar by the mold Aspergillus oryzae.
Fermented: Rice wines (including sake)
Liquors: Baijiu, soju, and shōchū
Sugarcane product molasses.
Liquors: Rum
In all cases, fermentation must take place in a vessel fitted with a fermentation lock, which allows carbon dioxide to escape while preventing outside air from coming in. Letting in outside air could contaminate the brew with bacteria or mold, while a buildup of carbon dioxide could cause the vessel to rupture.
Feedstocks for fuel production
Yeast fermentation of various carbohydrate products is also used to produce the ethanol that is added to gasoline.
The dominant ethanol feedstock in warmer regions is sugarcane. In temperate regions, corn or sugar beets are used.
In the United States, the main feedstock for the production of ethanol is currently corn. Approximately 2.8 gallons of ethanol are produced from one bushel of corn (0.42 liter per kilogram). While much of the corn turns into ethanol, some of the corn also yields by-products such as DDGS (distillers dried grains with solubles) that can be used as feed for livestock. A bushel of corn produces about 18 pounds of DDGS (320 kilograms of DDGS per metric ton of maize). Although most of the fermentation plants have been built in corn-producing regions, sorghum is also an important feedstock for ethanol production in the Plains states. Pearl millet is showing promise as an ethanol feedstock for the southeastern U.S. and the potential of duckweed is being studied.
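The two corn-yield figures quoted above (2.8 gallons per bushel and 0.42 liters per kilogram) are mutually consistent, which can be verified with a unit conversion; the 56 lb bushel weight for shelled corn is a standard value assumed here, not stated in the text:

```python
GAL_TO_L = 3.78541     # liters per US gallon
LB_TO_KG = 0.453592    # kilograms per pound
BUSHEL_CORN_LB = 56.0  # standard test weight of a bushel of shelled corn

def liters_per_kg(gal_per_bushel):
    """Convert an ethanol yield in gallons per bushel of corn to L/kg."""
    return (gal_per_bushel * GAL_TO_L) / (BUSHEL_CORN_LB * LB_TO_KG)

print(f"{liters_per_kg(2.8):.2f} L ethanol per kg corn")  # ~0.42 L/kg
```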
In some parts of Europe, particularly France and Italy, grapes have become a de facto feedstock for fuel ethanol by the distillation of surplus wine. Surplus sugary drinks may also be used. In Japan, it has been proposed to use rice normally made into sake as an ethanol source.
Cassava as ethanol feedstock
Ethanol can be made from mineral oil or from sugars or starches. Starches are cheapest. The starchy crop with highest energy content per acre is cassava, which grows in tropical countries.
Thailand already had a large cassava industry in the 1990s, for use as cattle feed and as a cheap admixture to wheat flour. Nigeria and Ghana are already establishing cassava-to-ethanol plants. Production of ethanol from cassava is currently economically feasible when crude oil prices are above US$120 per barrel.
New varieties of cassava are being developed, so the future situation remains uncertain.
Currently, cassava can yield between 25 and 40 tonnes per hectare (with irrigation and fertilizer), and from a tonne of cassava roots, circa 200 liters of ethanol can be produced (assuming cassava with 22% starch content). A liter of ethanol contains circa 21.46 MJ of energy. The overall energy efficiency of cassava-root to ethanol conversion is circa 32%.
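Putting the quoted cassava figures together gives the per-hectare ethanol output and energy content; this is a straightforward multiplication of the numbers above, not an additional data point:

```python
L_PER_TONNE = 200    # liters of ethanol per tonne of roots (22% starch)
MJ_PER_LITER = 21.46 # energy content of a liter of ethanol

def per_hectare(tonnes_per_ha):
    """Return (liters of ethanol, MJ of ethanol energy) per hectare."""
    liters = tonnes_per_ha * L_PER_TONNE
    return liters, liters * MJ_PER_LITER

for t in (25, 40):  # the quoted root-yield range, tonnes per hectare
    liters, mj = per_hectare(t)
    print(f"{t} t roots/ha -> {liters} L ethanol, {mj:.0f} MJ")
```

At the quoted yields this corresponds to roughly 5,000 to 8,000 liters of ethanol per hectare.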
The yeast used for processing cassava is Endomycopsis fibuligera, sometimes used together with bacterium Zymomonas mobilis.
Byproducts of fermentation
Ethanol fermentation produces byproducts such as heat, carbon dioxide, water, methanol, and other alcohols, as well as solid residues usable as livestock feed or fertilizer. The unfermented cereal solids, which can be used as livestock feed or in the production of biogas, are referred to as distillers grains and sold as WDG (wet distillers grains) or DDGS (dried distillers grains with solubles).
Microbes used in ethanol fermentation
Yeast
Saccharomyces cerevisiae
Schizosaccharomyces
Zymomonas mobilis (a bacterium)
See also
Anaerobic respiration
Cellular respiration
Cellulose
Fermentation (wine)
Yeast in winemaking
Auto-brewery syndrome
Tryptophol, a chemical compound found in wine or in beer as a secondary product of alcoholic fermentation (a product also known as congener)
References
Fermentation
Ethanol
McNeil's Nebula is a variable nebula discovered January 23, 2004 by Jay McNeil of Paducah, Kentucky. It is illuminated by the star V1647 Ori.
Discovery
McNeil's Nebula is a cometary-type reflection nebula, illuminated by the reddish star V1647 Ori (also catalogued as IRAS 05436-0007) at its southern tip. The nebula did not appear in images taken before September 2003; it was discovered in 2004 by amateur astronomer Jay McNeil using a 3-inch telescope. University of Hawaii researcher Bo Reipurth's preliminary studies have determined that McNeil's Nebula appeared when V1647 Ori, a pre-main sequence star, experienced an outburst called a FU Orionis or EX Lupi type event. Most stars are believed to undergo such events, though they are rarely observed.
Earlier images
The nebula has been identified on images taken by Evered Kreimer in October 1966, but not in various other images taken between 1951 and 1991. The nebula appears therefore to be very variable in luminosity, and is a reflection nebula illuminated by a variable star of some kind, or with the star's light being variably obscured for some reason. The nebula was not observed after 2004 until 2008, when it reappeared once more.
2018 disappearance
In November 2018 the Sky & Telescope website reported that the nebula had disappeared. On November 5, an observer using the 500-mm Gemini telescope at the Iowa Robotic Observatory reported its disappearance. Another observer using a 30-inch Dobsonian telescope on November 3 also failed to detect the nebula.
References
External links
CNN article
Reflection nebulae
Orion molecular cloud complex
Protostars
Orion (constellation)
Network eavesdropping, also known as an eavesdropping attack, sniffing attack, or snooping attack, is a method of retrieving user information over the internet. Such attacks target electronic devices like computers and smartphones, and typically occur over unsecured networks, such as public Wi-Fi connections or shared electronic devices. Eavesdropping attacks over the network are considered one of the most urgent threats in industries that rely on collecting and storing data. Understanding how these attacks work helps internet users improve their information security.
A typical network eavesdropper may be called a black-hat hacker and is considered a low-level attacker, since it is simple to eavesdrop on a network successfully. The threat posed by network eavesdroppers is a growing concern, and types of eavesdropping, open-source tools, and commercial tools to prevent eavesdropping are researched and discussed publicly. As privacy is increasingly valued, models against network eavesdropping attempts continue to be built and developed. Cases of successful network eavesdropping attempts and the related laws and policies, including those of the National Security Agency, are covered below; these laws include the Electronic Communications Privacy Act and the Foreign Intelligence Surveillance Act.
Types of attacks
Types of network eavesdropping include intervening in the decryption of messages on communication systems, attempting to access documents stored in a network system, and listening in on electronic devices. Specific attack types include electronic performance monitoring and control systems, keystroke logging, man-in-the-middle attacks, observation of exit nodes on a network, and Skype & Type.
Electronic performance monitoring and control systems (EPMCSs)
Electronic performance monitoring and control systems are used by companies and organizations to collect, store, analyze, and report the actions or performance of employees while they are working. Such systems were originally introduced to increase worker efficiency, but instances of unintentional eavesdropping can occur, for example, when employees' casual phone calls or conversations are recorded.
Keystroke logging
Keystroke logging is a program that can oversee the user's writing process. It can be used to analyze the user's typing activities, as keystroke logging provides detailed information on behaviors such as typing speed, pauses, and deletion of text. By monitoring keyboard activity and the sounds of keystrokes, the message typed by the user can be reconstructed. Although keystroke logging systems do not explain the reasons for pauses or deletions, they allow attackers to analyze text information. Keystroke logging can also be combined with eye-tracking devices, which monitor the movements of the user's eyes to determine patterns in the user's typing actions that can explain those pauses or deletions.
Man-in-the-middle attack (MitM)
A man-in-the-middle attack is an active eavesdropping method that intrudes on the network system. It can retrieve and alter the information sent between two parties without anyone noticing. The attacker hijacks the communication system and gains control over the transport of data, but cannot insert voice messages that sound or act like the actual users. The attacker can also maintain independent connections through the system, relaying messages between the users so that they believe their conversation is private.
The “man-in-the-middle” can also be referred to as lurkers in a social context. A lurker is a person who rarely or never posts anything online, but the person stays online and observes other users' actions. Lurking can be valuable as it lets people gain knowledge from other users. However, like eavesdropping, lurking into other users' private information violates privacy and social norms.
Observing exit nodes
Distributed networks, including communication networks, are usually designed so that nodes can enter and exit the network freely. However, this poses a danger in that attackers can easily access the system, with potentially serious consequences such as leakage of a user's phone number or credit card number. In many anonymous network pathways, the last node before exiting the network may contain actual information sent by users. Tor exit nodes are an example. Tor is an anonymous communication system that allows users to hide their IP addresses. It also has layers of encryption that protect information sent between users from eavesdropping attempts trying to observe the network traffic. However, Tor exit nodes can be used to eavesdrop at the end of the network traffic: the last node in the network path, such as a Tor exit node, can acquire the original information or messages transmitted between users.
Skype & Type (S&T)
Skype & Type (S&T) is a keyboard acoustic eavesdropping attack that takes advantage of Voice over IP (VoIP). S&T is practical and can be used in many real-world applications, as it does not require attackers to be close to the victim and can work with only some leaked keystrokes rather than every keystroke. With some knowledge of the victim's typing patterns, attackers can recover what the victim typed with 91.7% accuracy. Different recording devices, including laptop microphones, smartphones, and headset microphones, can be used by attackers to eavesdrop on the victim's style and speed of typing. It is especially dangerous when attackers know what language the victim is typing in.
Tools to prevent eavesdropping attacks
Computer programs whose source code is shared with the public, for free or for commercial use, can be used to prevent network eavesdropping. These programs are often modified to cater to different network systems, and each tool is specific to the task it performs. In this category, Advanced Encryption Standard-256, Bro, Chaosreader, CommView, firewalls, security agencies, Snort, Tcptrace, and Wireshark are tools that address network security and network eavesdropping.
Advanced encryption standard-256 (AES-256)
AES-256 is a cipher used in cipher block chaining (CBC) mode for encrypted messages and in hash-based message authentication codes. AES-256 uses a 256-bit key, and it represents a standard used for securing many layers on the internet. AES-256 is used by Zoom Phone apps to encrypt chat messages sent by Zoom users. If this feature is used in the app, users will only see encrypted chats when they use the app, and notifications of an encrypted chat will be sent with no content involved.
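The hash-based message authentication codes mentioned above can be illustrated with Python's standard library (AES-256 itself is not in the standard library, so this sketch shows only the integrity-protection half, using HMAC-SHA-256; the key and messages are invented for illustration):

```python
import hashlib
import hmac

def tag_message(key, message):
    """Compute an HMAC-SHA-256 tag over a message (both as bytes)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key, message, tag):
    """Constant-time check: an eavesdropper who alters the message in
    transit cannot forge a matching tag without knowing the key."""
    return hmac.compare_digest(tag_message(key, message), tag)

key = b"example-shared-secret"        # illustrative key material only
msg = b"transfer 100 to account 42"
tag = tag_message(key, msg)
assert verify_message(key, msg, tag)                  # genuine message
assert not verify_message(key, b"transfer 9999", tag) # tampered message
```

Note that a MAC provides integrity and authenticity, not confidentiality; in practice it is combined with a cipher such as AES-256-CBC.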
Bro
Bro is a system that detects network attackers and abnormal traffic on the internet. It was developed at the University of California, Berkeley, to detect network intrusions. The system does not detect eavesdropping by default, but can be modified into an offline analysis tool for eavesdropping attacks. Bro runs under the Digital Unix, FreeBSD, IRIX, SunOS, and Solaris operating systems, and is implemented in approximately 22,000 lines of C++ and 1,900 lines of its own Bro scripting language. It is still in the process of development for real-world applications.
Chaosreader
Chaosreader is a simplified version of many open-source eavesdropping tools. It creates HTML pages summarizing the captured content when a network intrusion is detected. No action is taken when an attack occurs; only information such as the time and the network location that the user tried to attack is recorded.
CommView
CommView is specific to Windows systems, which limits its real-world application. It captures network traffic and eavesdropping attempts using packet analysis and decoding.
Firewalls
Firewall technology filters network traffic and blocks malicious users from attacking the network system. It prevents users from intruding into private networks. Having a firewall in the entrance to a network system requires user authentications before allowing actions performed by users. There are different types of firewall technologies that can be applied to different types of networks.
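A packet filter of the kind described can be sketched as an ordered rule list with a default-deny policy (the rule format and the addresses are invented for illustration, not taken from any particular firewall product):

```python
# Each rule: (allow?, source-IP prefix, destination port or None for any).
# Rules are checked in order; the first matching rule wins.
RULES = [
    (True,  "10.0.",      443),   # internal clients may reach HTTPS
    (False, "203.0.113.", None),  # block a hostile address range entirely
    (True,  "",           80),    # anyone may reach the public web server
]

def permits(src_ip, dst_port):
    """Decide whether a packet is allowed through the filter."""
    for allow, prefix, port in RULES:
        if src_ip.startswith(prefix) and port in (None, dst_port):
            return allow
    return False  # default-deny: unmatched traffic is dropped

assert permits("10.0.0.7", 443)       # internal HTTPS traffic passes
assert not permits("203.0.113.9", 80) # blocked range is dropped
assert not permits("8.8.8.8", 22)     # unmatched traffic is dropped
```

Real firewalls match on far more fields (protocol, direction, connection state), but the ordered-rules-plus-default-policy structure is the same.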
Security agencies
A Secure Node Identification Agent (SNIA) is a mobile agent used to distinguish secure neighbor nodes and to inform the Node Monitoring Agent (NMOA). The NMOA stays within nodes and monitors the energy exerted, and receives information about nodes including node ID, location, signal strength, hop count, and more. It detects nearby nodes that are moving out of range by comparing signal strengths. The NMOA signals the SNIA, and the two update each other on neighboring node information. The Node BlackBoard is a knowledge base that reads and updates the agents, acting as the brain of the security system. The Node Key Management agent is created when an encryption key is inserted into the system. It is used to protect the key and is often used between autonomous underwater vehicles (AUVs), which are underwater robots that transmit data between nodes.
Snort
Snort is used in many systems and can be run in offline mode using the stream4 preprocessor, which reassembles TCP streams. The snort-reply patch feature is often used to reconstruct executions. It is currently developed by Cisco and acts as a free network intrusion detection system.
Tcptrace
Tcptrace is used to analyze pcap-based network intercepts, which is a packeting capture network application that detects network traffic. It has an important feature that monitors eavesdropping attacks and can reconstruct captured TCP streams.
Wireshark
Wireshark, or also named Ethereal, is a widely used open-source eavesdropping tool in the real world. Most of the features in Ethereal are packet-oriented and contain a TCP reassembly option for experiments on tracking intrusion attempts.
Models against the attacks
Models are built to secure system information stored online and can be specific to certain systems, for example, protecting existing documents, preventing attacks on the processing of instant messages on the network, and creating fake documents to trace malicious users.
Beacon-bearing decoy documents
Documents containing fake but private-looking information, such as made-up social security numbers, bank account numbers, and passport information, are purposely posted on a web server. These documents have beacons that are triggered when a user attempts to open them, which then alerts another site that records the access time and the user's IP address. The information collected from the beacons is regularly sent to Tor exit nodes, where eavesdropping users will be caught in the malicious act.
Butterfly encryption scheme
The Butterfly encryption scheme uses timestamps and updated pseudorandom number generator (PRNG) seeds in a network system to generate authentication keys and parameters for the encrypted messages to be sent. This scheme suits entities seeking a relatively low-cost but efficient security scheme, and it can work in different systems, as it has a simple design that is easy to modify for specific purposes. The Butterfly encryption scheme is effective because it uses a changing parameter and an unpredictable timestamp, creating a high level of security.
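The core idea — deriving each authentication key from an evolving seed plus a timestamp — can be sketched as follows (a conceptual illustration of the mechanism described above, not the published scheme; SHA-256 stands in here for the scheme's PRNG):

```python
import hashlib

def next_key(seed, timestamp):
    """Derive a fresh key from the current seed and a timestamp, then
    evolve the seed so earlier keys cannot be recomputed from it."""
    key = hashlib.sha256(seed + timestamp.to_bytes(8, "big")).digest()
    new_seed = hashlib.sha256(b"evolve" + seed).digest()
    return key, new_seed

seed = hashlib.sha256(b"initial shared secret").digest()
k1, seed = next_key(seed, 1700000000)  # first message
k2, seed = next_key(seed, 1700000060)  # next message, 60 s later
assert k1 != k2  # both the timestamp and the seed have changed
```

Because the seed is hashed forward after each use, an eavesdropper who captures one key cannot predict the next one or recover earlier ones.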
Crypto phones (Cfones)
Cfones is a model built to protect VoIP communications. It uses the Short Authenticated Strings (SAS) protocol, which requires users to exchange keys to ensure no network intruders are in the system. It is specific to communication systems that involve both voice and text messages. In this model, a string is given to the actual users, and to connect with another user, the strings have to be exchanged and have to match. If another user tries to invade the system, the strings will not match, and Cfones blocks the attacker from entering the network. This model is specific to preventing man-in-the-middle attacks.
Friendly-jamming schemes (DFJ and OFJ)
Friendly-jamming schemes (DFJ and OFJ) are models that can decrease the risk of eavesdropping by purposely interfering with the network when an unknown user is near the protected area. The models were tested by measuring the probability of eavesdropping attacks in a test environment, and the probability was found to be lower than in a system with no friendly-jamming scheme installed. A feature of the DFJ and OFJ schemes is that they effectively protect a large coverage area from eavesdroppers.
Honey encryption scheme (HE)
A honey encryption (HE) scheme is used to strengthen the protection of private information in instant messaging systems, including WhatsApp and Snapchat, and to track down an eavesdropper's information. When decryption is attempted with an incorrect key, HE yields fake but plausible-looking plaintext, so an eavesdropper trying to guess the key cannot tell which recovered message is genuine. HE schemes are used in specific systems, not limited to instant messaging systems, passwords, and credit cards. However, applying HE to other systems remains difficult, as the scheme must be changed internally to fit each system.
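The defining property — an incorrect key yields a plausible decoy rather than gibberish — can be imitated with a toy scheme over a small fixed message space (purely illustrative; real honey encryption uses a distribution-transforming encoder over realistic message distributions):

```python
import hashlib

# Toy message space: every possible decryption is a plausible message.
MESSAGES = ["meet at noon", "send the files", "call me tonight", "all clear"]

def _offset(key):
    """Key-dependent offset derived from a hash of the key string."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

def encrypt(key, message):
    return (MESSAGES.index(message) + _offset(key)) % len(MESSAGES)

def decrypt(key, ciphertext):
    return MESSAGES[(ciphertext - _offset(key)) % len(MESSAGES)]

ct = encrypt("right key", "send the files")
assert decrypt("right key", ct) == "send the files"  # correct key
assert decrypt("wrong key", ct) in MESSAGES          # decoy, not noise
```

A brute-force attacker who tries every key sees only a stream of equally believable messages and gains no signal about which one is real.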
Internet of Things framework (IoT)
The Internet of Things (IoT) framework involves four layers of security measures: the management layer, the cloud layer, the gateway layer, and the IoT device layer. The management layer handles web and mobile applications. The cloud layer looks over service and resource management, acting as an access point for users to connect to other internet services. The gateway layer manages the packet-filtering module; it links the endpoint network of the services, processes the documents or information, and carries out security tasks including authentication, authorization, and encryption. The two main tasks of the gateway layer are to authenticate users and to filter out malicious users. The IoT device layer oversees the gateway layer's performance and double-checks whether all malicious users have been removed from the network; specifically, attestation is a mechanism for measuring end-point integrity and removing nodes from the network if necessary.
Cases of network eavesdropping
Completely trusting network devices or network companies can be risky. Device users are often unaware of the threats on the internet and choose to ignore the importance of protecting their personal information. This paves the way for malicious hackers to gain access to private data that users may not be aware of. A few cases of network eavesdropping discussed below involve Alipay and cloud computing.
Alipay
Private information from a user of a mobile payment app, in this case Alipay, can be retrieved using a hierarchical identification attack specific to mobile payment apps. The system first recognizes the app in use from traffic data, then categorizes the user's distinct actions within the app, and lastly distinguishes the detailed steps within each action. Distinct actions on mobile payment apps are generalized into a few groups, including making a payment, transferring money between banks, scanning checks, and looking at previous records. By classifying and observing the user's specific steps within each group of actions, the attacker intercepts the network traffic and obtains the private information of app users. Strategies to prevent such incidents include fingerprint or facial identification and email or text confirmation of actions performed in the app.
Cloud computing
Cloud computing is a computing model that provides access to many different configurable resources, including servers, storage, applications, and services. The nature of the cloud makes it vulnerable to security threats, and attackers can easily eavesdrop on it. In particular, an attacker can simply identify the data center of the virtual machine used by cloud computing and retrieve information on the IP address and domain names of the data center. It becomes dangerous when the attacker gains access to private cryptographic keys for specific servers, with which they may obtain data stored in the cloud. For example, the Amazon EC2 platform based in Seattle, Washington, USA, was once at risk of such issues but now uses Amazon Web Services (AWS) to manage its encryption keys.
Medical records
Sometimes users can choose what they put online and should be responsible for their actions, including whether or not a user should take a photo of their social security number and send it through a messaging app. However, data like medical records or bank accounts are stored in network systems in which companies are also responsible for securing users' data. Medical records of patients can be stolen by insurance companies, medical laboratories, or advertising companies for their own interests. Information such as name, social security number, home address, email address, and diagnosis history can be used to track down a person. Eavesdropping on a patient's medical history is illegal and dangerous. To deal with network threats, many medical institutes have been using endpoint authentication, cryptographic protocols, and data encryption.
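As an illustration of the endpoint authentication countermeasure mentioned above, a minimal HMAC challenge-response sketch might look like the following. The shared key and the provisioning assumption are invented for this example:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"per-device-secret-key"  # hypothetical key provisioned out of band

def sign(challenge: bytes, key: bytes = SHARED_KEY) -> str:
    # The endpoint proves it holds the key by signing the server's challenge.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, key: bytes = SHARED_KEY) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(challenge, key), response)

challenge = secrets.token_bytes(16)  # server sends a fresh random challenge
response = sign(challenge)           # genuine endpoint responds with its key
ok = verify(challenge, response)                          # accepted
fake = verify(challenge, sign(challenge, b"wrong-key"))   # rejected
```

Because the challenge is random and fresh, an eavesdropper who records one exchange cannot replay it later, which is the property that makes this scheme useful against passive interception.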
Related laws and policies
Electronic Communications Privacy Act (ECPA)
In Title III of the Electronic Communications Privacy Act (ECPA), it is stated that it is a "federal crime to engage in wiretapping or electronic eavesdropping; to possess wiretapping or electronic eavesdropping equipment; to use or disclose information obtained through illegal wiretapping or electronic eavesdropping; or to disclose information secured through court-ordered wiretapping or electronic eavesdropping, in order to obstruct justice." Federal and state law enforcement officials may be permitted to intercept wire, oral, and electronic communications only if a court order is issued, the parties have consented, or a malicious user is trying to access the system. If the law is violated, there may be criminal penalties, civil liability, administrative and professional disciplinary action, and/or exclusion of evidence. A general penalty is imprisonment for not more than five years and a fine of not more than $250,000 for individuals and not more than $500,000 for organizations. If damages are incurred, there may be a fine of $100 per day of violation or $10,000 in total.
Foreign Intelligence Surveillance Act (FISA)
The Foreign Intelligence Surveillance Act provides for court orders for "electronic surveillance, physical searches, installation and use of pen registers and trap and trace devices, and orders to disclose tangible items." Court orders issued for electronic surveillance allow federal officials to use electronic surveillance, which includes eavesdropping, without violating the Electronic Communications Privacy Act or Title III specifically.
Organisation for Economic Co-operation and Development (OECD)
A guideline for protecting the privacy of health patients' data is issued by the Organisation for Economic Co-operation and Development (OECD). The policy states that individual patient data or personal data should be secure, and that patients will not face arbitrary losses related to the invasion of their personal information or health conditions. The policy acts as a minimum standard for eHealth usage and should be followed by all medical institutes to protect the privacy of patients' data.
See also
Black hat (computer security)
Crowdsensing
Eavesdropping
Endpoint detection and response
Endpoint security
Intrusion detection system
Packet analyzer
Security hacker
Van Eck phreaking
References
Computer networking
Computer security
HD 215152 is a star in the zodiac constellation of Aquarius. It has an apparent visual magnitude of 8.13, meaning it is too faint to be seen with the naked eye. Parallax measurements provide distance estimates of around 70 light years. The star has a relatively high proper motion, moving across the sky at an estimated 0.328 arc seconds per year along a position angle of 205°.
A 2015 survey ruled out the existence of any additional stellar companions at projected distances from 6 to 145 astronomical units.
This star has a stellar classification of K3 V, which indicates that it is an ordinary K-type main sequence star. Based upon observation of regular variations in chromospheric activity, it has a rotation period of days. Stellar models give an estimated mass of around 76% of the Sun. It has a slightly lower metallicity than the Sun, and thus has a lower abundance of elements other than hydrogen and helium. The effective temperature of the stellar atmosphere is about 4,803 K, giving it the orange-hued glow of an ordinary K-type star.
HD 215152 is a candidate for possessing a debris disk—a circumstellar disk of orbiting dust and debris. This finding was made through the detection of an infrared excess at a wavelength of 70 μm by the Spitzer Space Telescope. The detection has a 3σ level of certainty.
Planetary system
HD 215152 has a total of four confirmed sub-Neptune mass planets, all of which are potentially rocky. With all of the planets orbiting within 0.154 AU, it is a very compact system. The inner two are separated by only 0.0098 AU, or about four times the distance between the Earth and the Moon. This is unusual for systems discovered by radial velocity measurements. In 2011, it was reported that two planetary candidates (c and d) had been detected in close orbit around this star. The planets were discovered through Doppler spectroscopy using the HARPS spectrograph at La Silla Observatory in Chile. Their presence was revealed by periodic variations in the radial velocity of the host star due to gravitational perturbations by the orbiting objects. In 2018, two more planets were confirmed. All planets have brief orbital periods: the four planets orbit every 5.76, 7.28, 10.86 and 25.2 days respectively. Their minimum masses range between 1.7 and 2.9 Earth masses.
There is a gap between orbits of HD 215152 d and HD 215152 e, which may contain a fifth, yet-undetected terrestrial low-mass planet.
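As a consistency check on the figures quoted above, Kepler's third law (a in AU, P in years, M in solar masses) recovers both the 0.154 AU outer orbit and the 0.0098 AU separation of the inner pair from the stated periods and the 0.76-solar-mass stellar mass:

```python
# Kepler's third law in solar units: a[AU] = (M[Msun] * P[yr]**2) ** (1/3).
M_STAR = 0.76  # stellar mass in solar masses, as quoted above

def semi_major_axis_au(period_days: float, m_star: float = M_STAR) -> float:
    period_yr = period_days / 365.25
    return (m_star * period_yr ** 2) ** (1 / 3)

# Orbital periods of planets b, c, d and e in days, as quoted above.
periods = {"b": 5.76, "c": 7.28, "d": 10.86, "e": 25.2}
a = {name: semi_major_axis_au(p) for name, p in periods.items()}
# a["e"] comes out close to the quoted 0.154 AU, and a["c"] - a["b"]
# close to the quoted 0.0098 AU separation of the inner pair.
```

This treats the orbits as circular and neglects the planetary masses, which is adequate at the precision of the quoted values.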
References
K-type main-sequence stars
Planetary systems with four confirmed planets
Aquarius (constellation)
J22432131-0624025
BD-07 5839
4291
215152
112190
Mine action is a combination of humanitarian aid and development studies that aims to remove landmines and reduce the social, economic and environmental impact of them and the explosive remnants of war (ERW).
Description
Mine action is commonly represented as comprising five complementary groups of activities:
Humanitarian demining, i.e. mine and ERW survey, land release, mapping, marking and clearance
Risk education (RE), i.e. the communication to the public of the risk of ERW and how to act in the presence of ERW
Victim assistance, including rehabilitation and reintegration
Stockpile destruction
Advocacy to promote policies and practices that will reduce the threat from landmines and ERW, usually in the context of disarmament and international humanitarian law. The most commonly applied treaties include the 1997 anti-personnel Mine Ban Treaty (Ottawa Treaty), the Convention on Cluster Munitions, and the Convention on Certain Conventional Weapons.
The objective of these activities is to provide a safe environment in which landmines and ERW do not impede economic, social and health development, and to address the needs of victims. Gender mainstreaming will ensure that the different needs of women, girls, boys and men are taken into account and inequality is not perpetuated.
The coordination of mine action activities in affected countries is commonly conducted by Mine Action Coordination Centers (MACC) managed either by the United Nations or the host country government.
Clearance
In its broad sense, mine clearance includes surveying, mapping and marking of minefields and removal of mines from the ground. This range of activities is also sometimes referred to as demining.
Humanitarian mine clearance aims to clear land so that civilians can return to their homes and their everyday routines without the threat of landmines and explosive remnants of war (ERW), which include unexploded ordnance and abandoned explosive ordnance. This means that all the mines and ERW affecting the places where ordinary people live must be cleared, and their safety in areas that have been cleared must be guaranteed. Mines are cleared and the areas thoroughly verified so that the land can be declared safe without a doubt, and people can use it without worrying about the weapons. The aim of humanitarian demining is to restore peace and security at the community level.
Methods
Surveying
Non-technical surveying, or the formal gathering of mine-related information, is required before actual clearance can begin. Impact surveys assess the socio-economic impact of the mine contamination and help assign priorities for the clearance of particular areas. Impact surveys make use of all available sources of information, including minefield records (where they exist), data about mine victims, and interviews with former combatants and local people. Technical surveys then define the minefields and provide detailed maps for the clearance operations.
Maps
Maps resulting from the impact surveys and technical surveys are stored in an information management system, including a variety of programme databases, and provide baseline data for clearance organisations and operational planning.
Minefield marking
Minefield marking is carried out when a mined area is identified, but clearance operations cannot take place immediately. Minefield marking, which is intended to deter people from entering mined areas, has to be carried out in combination with mine awareness, so that the local population understands the meaning and importance of the signs.
Manual clearance
Manual clearance relies on trained deminers using metal detectors and long thin prodders to locate the mines, which are then destroyed by controlled explosion.
Mine detection dogs
Mine detection dogs, which detect the presence of explosives in the ground by smell. Dogs are used in combination with manual deminers.
Mine detection rats
As well as dogs, rats can detect the presence of explosives in the ground by smell. Rats are used in combination with manual deminers or mechanical demining.
Mechanical clearance
Mechanical clearance relies on flails, rollers, vegetation cutters, and excavators, often attached to armoured bulldozers, to destroy the mines in the ground. These machines can only be used in certain terrains, and are expensive to operate. In most situations they are also not 100% reliable, and the work needs to be checked by other techniques.
Mine-risk education (MRE)
Mine-risk education, or MRE, refers to efforts to raise awareness and promote behavioural change through public-information campaigns, education and training, and liaison with communities.
MRE ensures that communities are aware of the risks from mines, unexploded ordnance and/or abandoned munitions and are encouraged to behave in ways that reduce the risk to people, property and the environment. Objectives are to reduce the risk to humans and to restore an environment where economic and social development can occur free from the constraints imposed by landmine contamination.
According to the Landmine Monitor Report (2009), in 2008, MRE was provided in 57 states and areas, compared to 61 states and areas in 2007. However, in the 1999 MRE programs were identified in just 14 states. MRE activities increased significantly in Yemen and Somaliland, and also increased to some degree in 10 other states. In Palestine, RE decreased in 2008 but rose sharply in response to conflict in Gaza in December 2008–January 2009. Some of the main players in MRE include Catholic Relief Services, German Caritas international, the Mines Advisory Group, Handicap International, Save the Children, INTERSOS, DanChurchAid, Norwegian People's Aid, the Mines Awareness Trust, Association for Aid and Relief, Japan and the International Committee of the Red Cross. Within the UN system UNICEF is the lead agency for MRE and supports programmes in 30 countries.
International standards have been developed to guide the management of MRE programmes. These standards emphasize that MRE should typically not be a stand-alone activity; it is an integral part of overall mine-action planning and implementation.
Public information dissemination
"Public information" in the context of mine action describes landmine and unexploded ordnance situations and informs and updates a broad range of stakeholders. Such information may focus on local risk-reduction messages, address broader national issues such as complying with legislation or raise public support for mine-action programmes.
Public information "dissemination", however, is primarily a one-way form of communication transmitted through mass media. Initiatives may be stand-alone MRE projects that are implemented in advance of other mine-action activities.
Education and training
"Education and training" is a two-way process that involves the imparting and acquiring of knowledge and the changing of attitudes and practices through teaching and learning.
Education and training activities may be conducted in formal and non-formal environments: teacher-to-child education in schools, information shared at home from parents to children or from children to their parents, child-to-child education, peer-to-peer education in work and recreational environments, landmine safety training for humanitarian aid workers and the incorporation of landmine safety messages in occupational health and safety practices.
Community liaison
Community liaison refers to the systems and processes used to exchange information between national authorities, mine-action organisations and communities on the presence of mines, unexploded ordnance and abandoned munitions. It enables communities to be informed about planned demining activities, the nature and duration of the tasks, and the exact locations of marked or cleared areas. Furthermore, it enables communities to inform local authorities and mine-action organizations about the location, extent and impact of contaminated areas. This information can greatly assist the planning of related activities, such as technical surveys, marking and clearance operations, and survivor-assistance services. Community liaison ensures that mine-action projects address community needs and priorities. Community liaison should be carried out by all organizations conducting mine-action operations.
Community liaison services may begin far in advance of demining activities and help the development of local capacities to assess the risks, manage information and develop risk-reduction strategies.
Stockpile destruction
Stockpiled anti-personnel landmines (APM) far outnumber those actually laid in the ground. In accordance with Article 4 of the anti-personnel mine-ban treaty, State Parties that accede to the treaty must destroy their stockpiled mines within four years. Sixty-five countries have now destroyed their stockpiles of antipersonnel landmines, destroying a combined total of more than 37 million mines. Another 51 countries have officially declared that they do not have a stockpile and a further three countries are scheduled to destroy theirs by the end of the year.
There are many options available to states in destroying their stockpiles. Stockpiles are usually destroyed by the military, but an industrial solution can also be employed. The techniques used vary depending on the make-up of the mines and the conditions in which they are found.
Laser cutting
Still in the research phase in the USA.
Microwave melt-out
This technology is also under development in the USA. It utilises microwaves to heat TNT-based explosive fillings. It is a rapid, clean technique but has one major disadvantage: the lack of control over heating can lead to the formation of "hot spots", with a resultant initiation of the filling. Work continues on its development, but it is not yet a feasible production technique. It is more energy efficient than steam and improves the value of any recovered explosives.
Destruction technology
"Silver II"
An electro-chemical oxidation process. The organic waste is treated by the generation of highly oxidising species in an electro-chemical cell. The cell is separated into two compartments by a membrane that allows ion flow but prevents bulk mixing of the anolyte and catholyte. In the anolyte compartment a highly reactive species of silver ion attacks organic material, ultimately converting it to carbon dioxide, water and non-toxic inorganic compounds.
Biological degradation
This technology has been demonstrated at the pilot level for the destruction of perchlorate contaminated aqueous streams. The potential exists for bacteria to be used to consume the explosive content of APM, converting it into inert material. It requires extensive storage capacity whilst bio-remediation is taking place and only has limited applications. There is also a requirement for an element of mechanical breakdown prior to the addition of the bacteria.
Molten salt oxidation
Only demonstrated at prototype scale. It can destroy only finely divided and consistent organic waste, so significant pre-processing is required. Such wastes can be destroyed by incineration anyway. A purely technical solution, but too expensive and impracticable at the moment.
Mine victim assistance
Mine victim assistance is a humanitarian effort which aims to organize collaborative support for victims injured by mines and ERW, as well as their families, thus enabling them to live normal lives. The approaches include physical rehabilitation, psychological support, and recovery of the victimized family and community. The work involves actors at different levels, various organizations and State Parties, who are obliged to perform the task under Article 6 of the Mine Ban Treaty and Article 5 of the Convention on Cluster Munitions. The United Nations Mine Action Service (UNMAS) is another active participant, cooperating with other actors under the United Nations, and recently presented a six-year plan for mine action, "The Strategy of the United Nations on Mine Action 2013-2018".
See also
Geneva International Centre for Humanitarian Demining
International Campaign to Ban Landmines
Mine clearance agencies
Mines Advisory Group
Swiss Foundation for Mine Action (FSD)
References
External links
E-mine UN's electronic mine information network
International Mine Action Standards IMAS
Bomb disposal
Development studies
Humanitarian aid
Mine warfare
Minefields
Inventory control or stock control can be broadly defined as "the activity of checking a shop's stock". It is the process of ensuring that the right amount of supply is available within a business. However, a more focused definition takes into account the more science-based, methodical practice of not only verifying a business's inventory but also maximising the amount of profit from the least amount of inventory investment without affecting customer satisfaction. Other facets of inventory control include forecasting future demand, supply chain management, production control, financial flexibility, purchasing data, loss prevention and turnover, and customer satisfaction.
An extension of inventory control is the inventory control system. This may come in the form of a technological system and its programmed software used for managing various aspects of inventory problems, or it may refer to a methodology (which may include the use of technological barriers) for handling loss prevention in a business. The inventory control system allows for companies to assess their current state concerning assets, account balances, and financial reports.
Inventory control management
An inventory control system is used to keep inventories in a desired state while continuing to adequately supply customers, and its success depends on maintaining clear records on a periodic or perpetual basis.
Inventory management software often plays an important role in the modern inventory control system, providing timely and accurate analytical, optimization, and forecasting techniques for complex inventory management problems. Typical features of this type of software include:
inventory tracking and forecasting tools that use selectable algorithms and review cycles to identify anomalies and other areas of concern
inventory optimization
purchase and replenishment tools that include automated and manual replenishment components, inventory calculations, and lot size optimization
lead time variability management
safety stock calculation and forecasting
inventory cost management
shelf-life and slow-mover logic
multiple location support
Mobile/Moving Inventory Support
Through this functionality, a business may better detail what has sold, how quickly, and at what price, for example. Reports could be used to predict when to stock up on extra products around a holiday or to make decisions about special offers, discontinuing products, and so on.
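As a sketch of the safety-stock and replenishment features listed above, one common textbook formulation follows. The service-level z-factor, demand figures, and lead time are invented for the example:

```python
import math

def safety_stock(z: float, demand_std: float, lead_time: float) -> float:
    """Safety stock = z * sigma_demand * sqrt(lead time)."""
    return z * demand_std * math.sqrt(lead_time)

def reorder_point(avg_demand: float, lead_time: float, ss: float) -> float:
    """Replenish when on-hand stock falls to lead-time demand plus safety stock."""
    return avg_demand * lead_time + ss

# Invented figures: ~95% service level, demand std. dev. 20/period,
# average demand 100/period, 4-period lead time.
ss = safety_stock(z=1.65, demand_std=20, lead_time=4)
rop = reorder_point(avg_demand=100, lead_time=4, ss=ss)
# ss is 66.0 units and rop is 466.0 units
```

Real inventory packages layer demand forecasting and lead-time variability on top of this basic calculation, as the feature list above indicates.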
Inventory control techniques often rely upon barcodes and radio-frequency identification (RFID) tags to provide automatic identification of inventory objects—including but not limited to merchandise, consumables, fixed assets, circulating tools, library books, and capital equipment—which in turn can be processed with inventory management software. A new trend in inventory management is to label inventory and assets with a QR code, which can then be read with smartphones to keep track of inventory counts and movement. These new systems are especially useful for field service operations, where an employee needs to record an inventory transaction or look up inventory stock in the field, away from computers and hand-held scanners.
The control of inventory involves managing the physical quantities as well as the costing of the goods as it flows through the supply chain. In managing the cost prices of the goods throughout the supply chain, several costing methods are employed:
Retail method
Weighted Average Price method
FIFO (First In First Out) method
LIFO (Last In First Out) method
LPP (Last Purchase Price) method
BNM (Bottle neck method)
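The FIFO method listed above can be sketched as costing each issue against the oldest purchase lots first. Quantities and prices here are invented for illustration:

```python
from collections import deque

def fifo_issue(lots: deque, qty: int) -> float:
    """Consume `qty` units from the oldest lots first; return their total cost."""
    cost = 0.0
    while qty > 0:
        lot_qty, unit_cost = lots[0]
        take = min(qty, lot_qty)
        cost += take * unit_cost
        qty -= take
        if take == lot_qty:
            lots.popleft()                    # oldest lot fully consumed
        else:
            lots[0] = (lot_qty - take, unit_cost)  # oldest lot partially consumed
    return cost

lots = deque([(100, 10.0), (50, 12.0)])  # (quantity, unit cost), oldest first
cogs = fifo_issue(lots, 120)             # 100 @ 10.00 then 20 @ 12.00
# cogs is 1240.0; 30 units @ 12.00 remain in stock
```

The LIFO method is the mirror image, consuming the newest lot first, which would mean popping from the right-hand end of the same deque.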
The calculation can be done for different periods. If the calculation is done on a monthly basis, then it is referred to as the periodic method. In this method, the available stock is calculated by:
ADD stock at the beginning of the period
ADD stock purchased during the period
DIVIDE the total cost by the total quantity to arrive at the average cost of goods for the period.
This average cost price is applied to all movements and adjustments in that period.
Ending stock in quantity is arrived at by applying all the changes in quantity to the available balance.
Multiplying the stock balance in quantity by the average cost gives the stock cost at the end of the period.
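The periodic weighted-average steps above can be sketched directly; all figures are invented for illustration:

```python
# Periodic weighted-average costing for one period (invented figures).
beginning_qty, beginning_cost = 100, 1000.0   # stock at beginning of period
purchases = [(50, 600.0), (25, 350.0)]        # (qty, total cost) bought in period
issues_qty = 120                              # units issued during the period

# ADD beginning stock and purchases, then DIVIDE cost by quantity.
total_qty = beginning_qty + sum(q for q, _ in purchases)
total_cost = beginning_cost + sum(c for _, c in purchases)
avg_cost = total_cost / total_qty             # average cost of goods for the period

# Apply quantity changes to get the ending balance, then value it.
ending_qty = total_qty - issues_qty
ending_stock_cost = ending_qty * avg_cost
# total_qty is 175, ending_qty is 55
```

Under the perpetual method described next, the same average would instead be recomputed at every purchase transaction rather than once per period.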
Using the perpetual method, the calculation is done upon every purchase transaction.
Thus, the calculation is the same based on the periodic calculation whether by period (periodic) or by transaction (perpetual).
The only difference is the 'periodicity' or scope of the calculation.
Periodic is done monthly
Perpetual is done for the duration of the purchase until the next purchase
In practice, the daily averaging has been used to closely approximate the perpetual method.
Advantages and disadvantages
Inventory control systems have advantages and disadvantages, based on what style of system is being run. A purely periodic (physical) inventory control system takes "an actual physical count and valuation of all inventory on hand ... at the close of an accounting period," whereas a perpetual inventory control system takes an initial count of an entire inventory and then closely monitors any additions and deletions as they occur. Various advantages and disadvantages, in comparison, include:
Periodic is technically the more accurate as it considers both counted and valued inventory.
Periodic is more time-consuming than perpetual.
Perpetual can lower the cost of carrying inventory vs. periodic.
Perpetual is typically more costly to run than periodic.
Perpetual needs to be verified from time to time against an actual physical count, due to scrap, human error, theft, and other variables.
Vs. inventory management
While it is sometimes used interchangeably, inventory management and inventory control deal with different aspects of inventory.
Inventory management is a broader term pertaining to the regulation of all inventory aspects, from what is already present in the warehouse to how the inventory arrived and where the product's final destination will be. This management involves tracking field inventory throughout the supply chain, from sourcing to order fulfilment. It encompasses the entire process of procuring, storing, and profiting off merchandise or services.
Inventory control is the process of managing stock once it arrives at a warehouse, store or other storage location. It is solely concerned with regulating what is already present, and involves planning for sales and stock-outs, optimizing inventory for maximum benefit and preventing the pile-up of dead stock.
Business models
Just-in-time inventory (JIT), vendor managed inventory (VMI) and customer managed inventory (CMI) are a few of the popular models being employed by organizations looking to have greater stock management control.
JIT is a model that attempts to replenish inventory for organizations when the inventory is required. The model attempts to avoid excess inventory and its associated costs. As a result, companies receive inventory only when the need for more stock is approaching.
VMI (vendor managed inventory) and CMI (customer managed inventory) are two business models that adhere to the JIT inventory principles. VMI gives the vendor in a vendor/customer relationship the ability to monitor, plan and control inventory for their customers. Customers relinquish the order-making responsibilities in exchange for timely inventory replenishment that increases organizational efficiency.
CMI allows the customer to order and control their inventory from their vendors/suppliers. Both VMI and CMI benefit the vendor as well as the customer. Vendors see a significant increase in sales due to increased inventory turns and cost savings realized by their customers, while customers realize similar benefits.
See also
References
Inventory optimization
Freight transport
Lean manufacturing
Automatic identification and data capture
Further reading
Silver, Edward A., David F. Pyke, and Rein Peterson. Inventory Management and Production Planning and Scheduling, 3rd ed. Hoboken, NJ: Wiley, 1998.
Zipkin, Paul H. Foundations of Inventory Management. Boston: McGraw Hill, 2000.
Axsaeter, Sven. Inventory Control. Norwell, MA: Kluwer, 2000.
Porteus, Evan L. Foundations of Stochastic Inventory Theory. Stanford, CA: Stanford University Press, 2002.
Snyder, Lawrence V. Fundamentals of Supply Chain Theory, 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc, 2019.
Rossi, Roberto. Inventory Analytics. Cambridge, UK: Open Book Publishers, 2021.
Galactokinase deficiency is an autosomal recessive metabolic disorder marked by an accumulation of galactose and galactitol secondary to the decreased conversion of galactose to galactose-1-phosphate by galactokinase. The disorder is caused by mutations in the GALK1 gene, located on chromosome 17q24. Galactokinase catalyzes the first step of galactose phosphorylation in the Leloir pathway of intermediate metabolism. Galactokinase deficiency is one of the three inborn errors of metabolism that lead to hypergalactosemia. The disorder is inherited as an autosomal recessive trait. Unlike classic galactosemia, which is caused by a deficiency of galactose-1-phosphate uridyltransferase, galactokinase deficiency does not present with severe manifestations in early infancy. Its major clinical symptom is the development of cataracts during the first weeks or months of life, as a result of the accumulation, in the lens, of galactitol, a product of an alternative route of galactose utilization. The development of early cataracts in homozygous affected infants is fully preventable through early diagnosis and treatment with a galactose-restricted diet. Some studies have suggested that, depending on milk consumption later in life, heterozygous carriers of galactokinase deficiency may be prone to presenile cataracts at 20–50 years of age.
Signs and symptoms
The condition causes elevation of galactose in the blood (galactosemia) and urine (galactosuria).
When the patient consumes galactose in the diet, galactitol accumulates, which can result in cataracts.
Genetics
Galactokinase deficiency is an autosomal recessive disorder, which means the defective gene responsible for the disorder is located on an autosome (chromosome 17 is an autosome). Two copies of the defective gene (one inherited from each parent) are required in order to be born with the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disorder.
Unlike galactose-1-phosphate uridyltransferase deficiency, the symptoms of galactokinase deficiency are relatively mild. The only known symptom in affected children is the formation of cataracts, due to the production of galactitol in the lens of the eye. Cataracts can present as a failure to develop a social smile and a failure to visually track moving objects.
Gene structure
The human GALK1 gene contains 8 exons and spans approximately 7.3 kb of genomic DNA. The GALK1 promoter was found to have many features in common with other housekeeping genes, including high GC content, several copies of the binding site for the Sp1 transcription factor and the absence of TATA-box and CCAAT-box motifs typically present in eukaryotic polymerase II promoters. Analysis by 5-prime-RACE PCR indicated that the GALK1 mRNA is heterogeneous at the 5-prime end, with transcription sites occurring at many locations between 21 and 61 bp upstream of the ATG start site of the coding region. In vitro translation experiments of the GALK1 cDNA indicated that the protein is cytosolic and not associated with the endoplasmic reticulum membrane.
Diagnosis
Diagnosis is established by high blood levels of galactose, normal activity of the enzyme galactose-1-phosphate uridyltransferase and reduced or no activity of galactokinase in RBCs.
Treatment
Medical care
Treatment may be provided on an outpatient basis.
Cataracts that do not regress or disappear with therapy may require hospitalization for surgical removal.
Surgical care
Cataracts may require surgical removal.
Consultations
Biochemical geneticist
Nutritionist
Ophthalmologist
Diet
Diet is the foundation of therapy. Elimination of lactose and galactose sources suffices for definitive therapy.
Activity
No restriction is necessary.
See also
Galactosemia
References
External links
Autosomal recessive disorders
Inborn errors of carbohydrate metabolism | Galactokinase deficiency | Chemistry | 887 |
11,807,158 | https://en.wikipedia.org/wiki/Sleeve%20%28construction%29 | In construction, a sleeve is used both by the electrical and mechanical trades to create a penetration in a solid wall, ceiling or floor.
Purpose
For cables, a wall or deck penetration sleeve prevents damage to the cable from material shifting on deck.
On offshore platforms, deck penetration sleeves prevent water or chemicals from dripping to the deck below in case of spillage.
A sleeve can act as a toe guard.
For wall penetrations, a sleeve can serve as a form of strengthening.
Together with packing, a sleeve helps prevent fire from spreading from one room to another.
Materials
Sleeves can be made of:
sections of steel pipe.
plastic.
sheet metal.
proprietary devices that are listed firestop components.
Requirements
Sleeves must be sized such as to adequately allow the passage of the intended penetrant(s) plus enough room to permit the practical installation and mounting of the penetrants as well as adequate room for firestops. A general practice is to size the sleeve two NPS (pipe sizes) up from the diameter of the penetrant. For example, a 4" pipe, with 1" of thermal insulation makes a 6" penetrant (1" pipe covering on each side of the pipe), plus two pipe sizes = an 8" sleeve, creating a 1" annulus.
In case of insulated piping, the size of the insulation must be taken into account for the intended firestop certification listing.
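The sizing rule above can be expressed as a small calculation. This is a hedged sketch, not a standard function: `sleeve_size_in` is a hypothetical helper, and it assumes 1-inch NPS increments near this size range (matching the article's 4" pipe → 8" sleeve example), whereas the real NPS series is a discrete, non-uniform set of sizes.

```python
def sleeve_size_in(pipe_nps_in: float, insulation_in: float = 0.0) -> float:
    """Rule of thumb from the text: penetrant diameter plus two pipe sizes.

    Assumes 1-inch NPS steps in this size range, which glosses over the
    fact that real NPS sizes form a discrete, non-uniform series.
    """
    penetrant = pipe_nps_in + 2 * insulation_in  # insulation on both sides
    return penetrant + 2                         # "two pipe sizes up"

# The article's example: 4" pipe with 1" insulation -> 6" penetrant -> 8" sleeve
size = sleeve_size_in(4, 1)
annulus = (size - (4 + 2 * 1)) / 2               # annular gap around the penetrant
print(size, annulus)                             # 8.0 1.0
```

The resulting 1" annulus is what leaves room for firestopping material around the penetrant.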
Hazards
Metallic sleeves are heatsinks in the firestop that follows the mounting of the penetrants. Maximum and minimum tolerances for wall thicknesses must be taken into account prior to casting. Heatsinks can affect T-ratings. Organic sealants used for topcaulking in firestops may let go of the sleeve if it has conducted too much heat through to the unexposed side (as in the case of the fire test article, this picture).
Plastic sleeves are usually removed after the concrete forms are stripped, as they contribute fuel to an accidental fire.
See also
Piping
Firestop
Packing (firestopping)
Penetration (firestop)
Penetrant (mechanical, electrical, or structural)
Annulus (firestop)
Fire-resistance rating
Fire test
Heat sink
External links
CAL State University Section 15050 Spec, including Sleeves
UVA Virginia Hospital Section 15050 Specification, including Sleeves
Piping
Passive fire protection | Sleeve (construction) | Chemistry,Engineering | 469 |
57,305,294 | https://en.wikipedia.org/wiki/The%20Porter%20Garden%20Telescope | The Porter Garden Telescope was an innovative ornamental telescope for the garden, designed by Russell W. Porter and commercialized by the Jones & Lamson Machine Company in the United States at the beginning of the 1920s.
Aimed at buyers with high purchasing power and constructed in statuary bronze, it could be left permanently outdoors like sculptures and sundials, with the delicate optics kept in a case.
It was embellished with floral ornament in a style close to Art Nouveau. Its base bore the names of celebrated astronomers: Galileo, Kepler, and Newton.
The part called the bowl bore the commercial logo "The Porter Garden Telescope", the name and address of the manufacturer, the manufacturing serial number, and the number and date of the patent.
Technical characteristics
Material of construction: bronze.
Reflecting telescope, Newtonian type.
Open design, with no tube enclosing the optics.
Mount: combination of altazimuth (terrestrial use) and horseshoe-type equatorial (astronomical use).
Able to track the apparent motion of the stars when used between latitudes 25° and 55° in either hemisphere, north or south.
Primary mirror of 6 inches (roughly 152 mm).
Focal ratio f/4.
A 1.5-inch prism as the secondary reflecting element.
The eyepieces provided give magnifications of 25x, 50x and 100x.
The arm that holds the eyepiece rotates freely around the optical axis, allowing the user to position it comfortably during observation.
Optional: dual eyepiece holder for simultaneous use by two observers.
History
Russell W. Porter designed the telescope from concepts he had explored earlier, with the idea of minimizing the time needed to transport, assemble, set up and disassemble conventional personal telescopes. By making a telescope that could stay outdoors permanently, he maximized observing time. The design also served his dream of promoting astronomy among the uninitiated: its form was embellished to attract the public without compromising the robustness of bronze, so beginners could handle it without fear.
He filed the patent application on 25 January 1922, and it was granted on 25 September 1923 as US1468973. However, the final model differed from that patent in the area of the base, since later design modifications were incorporated into a new patent for a refracting telescope version. Although he filed that application only months after the first, on 7 September 1922, it was not granted until 6 December 1927, as US1651412, and that version was never manufactured.
The primary mirror was machined by J&L.Mac.Co, with the final parabolizing done by hand, and the reflective surface of the glass was produced by silvering. The company offered resilvering at nominal cost, although it claimed this would not be needed for years, since the lacquered coating had been tested outdoors through the rigours of a Vermont winter without noticeable deterioration.
The rest of the optics, the prism and eyepiece, were supplied by John A. Brashear Co. The choice of a prism as the secondary element was usual in the period before the first vacuum-chamber aluminizing of optics, and it removed the need to maintain a second silvered reflecting surface.
It was sold for about two years (1923-1924), promoted through articles in the specialized press and advertisements in decoration and gardening magazines. However, since its price without pedestal equalled that of a Ford Model T of the time, the market it targeted was saturated after the sale of around 100 units. Other factors in the product's discontinuation were customers' limited understanding of how to use the equatorial mount for astronomical observation, an initial underestimation of production costs (the sale price rose from $250 to $400), and the sculpture's Art Nouveau style at a time when Art Deco was the prevailing fashion.
Years later, in 1936, while collaborating on the design of the Palomar Observatory and the Hale Telescope, which remained the largest effective telescope in the world for 45 years, Porter asked J&L.Mac.Co. for permission to use the original patent so that the horseshoe-type mount could be implemented in that project, and obtained it without obstacles.
Miscellanea
On 10 September 1923 Russell W. Porter was able to show his acquaintances a partial solar eclipse, using the sundial function of the telescope and taking advantage of the eyepiece arm's free rotation around the optical axis to comfortably project the image of the Sun onto a piece of cardboard.
On 29 June 1925 one copy survived the Santa Barbara earthquake and proved its usefulness as a sundial, since clocks had stopped, affected by the quake.
Exactly ten years after the telescope's original presentation in the magazine Scientific American, a single advertisement appeared in that magazine in 1933 offering copies manufactured by Donald Alden Patch. Don Patch, an acquaintance of Russell W. Porter and also a member of the Springfield Telescope Makers, had previously made a Springfield-type mount from castings of Porter's original design, and may have had access to molds of the discontinued telescope. It is unknown how many he made or sold, but there is evidence of at least one possible copy that combined apparently genuine pieces with adapted and redesigned parts to produce a functional telescope resembling the original.
The highest-numbered copy known to survive, and to have come to public attention, is number 54. It was displayed at Longwood Gardens and was rediscovered in 2012 under the staircase of a barn.
See also
List of The Porter Garden Telescope original copies
Russell W. Porter
Palomar Observatory
References
External links
Stellafane homepage
Reflecting telescope patent
Refracting telescope patent
History of astronomy
Telescopes | The Porter Garden Telescope | Astronomy | 1,211 |
8,565,964 | https://en.wikipedia.org/wiki/Lithium%20hexafluorophosphate | Lithium hexafluorophosphate is an inorganic compound with the formula LiPF6. It is a white crystalline powder.
Production
LiPF6 is manufactured by reacting phosphorus pentachloride with hydrogen fluoride and lithium fluoride:
PCl5 + LiF + 5 HF → LiPF6 + 5 HCl
Suppliers include Targray and Morita Chemical Industries Co., Ltd.
Chemistry
The salt is relatively stable thermally, but loses 50% of its weight at 200 °C (392 °F). It hydrolyzes near 70 °C (158 °F) according to the following equation, forming highly toxic HF gas:
LiPF6 + 4 H2O → LiF + 5 HF + H3PO4
Owing to the Lewis acidity of the Li+ ions, LiPF6 also catalyses the tetrahydropyranylation of tertiary alcohols.
In lithium-ion batteries, LiPF6 reacts with Li2CO3, which may be catalysed by small amounts of HF:
LiPF6 + Li2CO3 → POF3 + CO2 + 3 LiF
Application
The main use of LiPF6 is in commercial secondary batteries, an application that exploits its high solubility in polar aprotic solvents. Specifically, solutions of lithium hexafluorophosphate in carbonate blends of ethylene carbonate, dimethyl carbonate, diethyl carbonate and/or ethyl methyl carbonate, with a small amount of one or many additives such as fluoroethylene carbonate and vinylene carbonate, serve as state-of-the-art electrolytes in lithium-ion batteries. This application takes advantage of the inertness of the hexafluorophosphate anion toward strong reducing agents, such as lithium metal, as well as of the ability of [PF6-] to passivate the positive aluminium current collector.
References
Lithium salts
Hexafluorophosphates
Electrolytes | Lithium hexafluorophosphate | Chemistry | 406 |
9,384,714 | https://en.wikipedia.org/wiki/Basal%20area | Basal area is the cross-sectional area of trees at breast height (1.3 m or 4.5 ft above ground). It is a common way to describe stand density. In forest management, basal area usually refers to merchantable timber and is given on a per hectare or per acre basis. If one cut down all the merchantable trees on an acre at breast height, measured the area of the top of each stump in square inches (πr²), added them all together, and divided by 144 (the number of square inches in a square foot), the result would be the basal area on that acre in square feet. In forest ecology, basal area is used as a relatively easily measured surrogate of total forest biomass and structural complexity, and change in basal area over time is an important indicator of forest recovery during succession.
Estimation from diameter at breast height
The basal area (BA) of a tree can be estimated from its diameter at breast height (DBH), the diameter of the trunk as measured 1.3 m (4.5 ft) above the ground. DBH is converted to BA using the formula for the area of a circle: BA = π × (DBH/2)².
If DBH is measured in cm, BA will be in cm². To convert to m², divide by 10,000: BA(m²) = π × (DBH/2)² / 10,000.
If DBH is in inches, divide by 144 to convert BA to ft²: BA(ft²) = π × (DBH/2)² / 144.
The formula for BA may also be simplified as:
BA(ft²) ≈ 0.005454 × DBH² in the English system (DBH in inches)
BA(m²) ≈ 0.00007854 × DBH² in the metric system (DBH in cm)
The basal area of a forest can be found by adding the basal areas (as calculated above) of all of the trees in an area and dividing by the area of land in which the trees were measured. Basal area is generally made for a plot and then scaled to m2/ha or ft2/acre to compare forest productivity and growth rate among multiple sites.
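These conversions are straightforward to express in code. A minimal sketch (the function names are illustrative, not from any standard library):

```python
import math

def basal_area_m2(dbh_cm: float) -> float:
    """Basal area in m^2 from DBH in cm: pi*(DBH/2)^2, converted from cm^2."""
    return math.pi * (dbh_cm / 2) ** 2 / 10_000

def basal_area_ft2(dbh_in: float) -> float:
    """Basal area in ft^2 from DBH in inches: pi*(DBH/2)^2 / 144."""
    return math.pi * (dbh_in / 2) ** 2 / 144

# The simplified constants follow directly from the circle-area formula:
print(round(math.pi / (4 * 144), 6))   # 0.005454 (English system)
print(round(math.pi / 4 / 10_000, 8))  # 7.854e-05 (metric system)
```

Summing such per-tree values over a plot and dividing by the plot area gives the stand-level figure in m²/ha or ft²/acre.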
Estimation using a wedge prism
A wedge prism can be used to quickly estimate basal area per hectare. To find basal area using this method, simply multiply your BAF (basal area factor) by the number of "in" trees in your variable radius plot. The BAF will vary based on the prism used; common BAFs include 5, 8 and 10. The "in" trees are those for which, when viewed through the prism from plot centre, the displaced image of the trunk still overlaps the part of the tree seen outside the prism.
Worked example
Suppose you carried out a survey using a variable radius plot with angle count sampling (wedge prism) and selected a basal area factor (BAF) of 4. If your first tree had a diameter at breast height (DBH) of 14 cm, then the standard way of scaling up from that tree to a per-hectare figure would be:
(BAF / ((DBH + 0.5)² × π/4)) × 10,000
BAF, in this case 4, is the BAF selected for the sampling technique.
DBH, in this case 14 cm (the nominal diameter is used here; strictly, it is the diameter perpendicular to the line of sight that matters).
The + 0.5 allows under and over measurement to be accounted for.
The π/4 converts the squared diameter into a circular area, and the × 10,000 converts cm² to m².
In this case this means the sampled tree stands for about 242 trees per hectare, taking it as representative of all the unmeasured trees.
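The worked example can be reproduced numerically. This is a hedged sketch of the formula in the text; `tree_factor` is a hypothetical helper name:

```python
import math

def tree_factor(baf_m2_per_ha: float, dbh_cm: float) -> float:
    """Per-hectare expansion factor for one "in" tree.

    Divides the BAF (m^2/ha represented by each counted tree) by the
    tree's own basal area; (DBH + 0.5)^2 * pi/4 is in cm^2, so the
    result is multiplied by 10,000 to work in m^2, as in the text.
    """
    ba_cm2 = (dbh_cm + 0.5) ** 2 * math.pi / 4
    return baf_m2_per_ha / ba_cm2 * 10_000

print(round(tree_factor(4, 14)))  # 242
```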
Fixed area plot
It would also be possible to survey the trees in a fixed area plot (FAP), also called a fixed radius plot. If, for example, the plot was 100 m², the basal area of each tree would be calculated as:
(DBH + 0.5)² × π/4
References
R. Hédl, M. Svátek, M. Dancak, Rodzay A.W., M. Salleh A.B., Kamariah A.S. A new technique for inventory of permanent plots in tropical forests: a case study from lowland dipterocarp forest in Kuala Belalong, Brunei Darussalam, In Blumea 54, 2009, p 124–130. Published 30. 10. 2009.
Forest modelling
Measurement
Forest ecology | Basal area | Physics,Mathematics | 838 |
44,398,664 | https://en.wikipedia.org/wiki/Multiconsult | Multiconsult is an engineering consultancy with 2800 employees operating in Norway, elsewhere in Europe and globally. The company is listed on the Oslo Stock Exchange.
In addition to its headquarters in Oslo, Multiconsult has local offices in several Norwegian cities. Multiconsult also operates elsewhere in Europe, Africa and Asia. In 2015, Multiconsult acquired Link Arkitektur, one of the largest architecture firms in Scandinavia.
History
Multiconsult traces its origins to the founding of Norsk Vandbygningskontor (NVK) in 1908. NVK merged with Multiconsult in 2003. The name Multiconsult stems from 1974, when Sivilingeniørene Apeland & Mjøset AS was reorganised and Stiftelsen Multiconsult became a major shareholder.
References
External links
Construction and civil engineering companies of Norway
Companies based in Oslo
Construction and civil engineering companies established in 1973
Companies listed on the Oslo Stock Exchange
International engineering consulting firms
Geotechnical engineering companies
Norwegian companies established in 1973
Norwegian companies established in 1908
Construction and civil engineering companies established in 1908 | Multiconsult | Engineering | 234 |
403,320 | https://en.wikipedia.org/wiki/64%20%28number%29 | 64 (sixty-four) is the natural number following 63 and preceding 65.
Mathematics
Sixty-four is the square of 8, the cube of 4, and the sixth power of 2. It is the seventeenth interprime, since it lies midway between the eighteenth and nineteenth prime numbers (61, 67).
The aliquot sum of a power of two (2n) is always one less than the power of two itself, therefore the aliquot sum of 64 is 63, within an aliquot sequence of two composite members (64, 63, 41, 1, 0) that are rooted in the aliquot tree of the thirteenth prime, 41.
64 is:
the smallest number with exactly seven divisors,
the first whole number (greater than one) that is both a perfect square, and a perfect cube,
the lowest positive power of two that is not adjacent to either a Mersenne prime or a Fermat prime,
the fourth superperfect number — a number such that σ(σ(n)) = 2n,
the sum of Euler's totient function for the first fourteen integers,
the number of graphs on four labeled nodes,
the index of Graham's number in the rapidly growing sequence g(1) = 3↑↑↑↑3, g(n) = 3↑…↑3 with g(n−1) arrows, so that Graham's number is g(64),
the number of vertices in a 6-cube,
the fourth dodecagonal number,
and the seventh centered triangular number.
Since it is possible to find sequences of 65 consecutive integers (intervals of length 64) such that each inner member shares a factor with either the first or the last member, 64 is the seventh Erdős–Woods number.
In decimal, no integer added to the sum of its own digits yields 64; hence, 64 is the tenth self number.
In four dimensions, there are 64 uniform polychora aside from two infinite families of duoprisms and antiprismatic prisms, and 64 Bravais lattices.
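Several of the arithmetic properties above are small enough to check directly; a quick sketch:

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):  # sum-of-divisors function
    return sum(divisors(n))

def phi(n):  # Euler's totient
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

assert len(divisors(64)) == 7                            # exactly seven divisors
assert all(len(divisors(k)) != 7 for k in range(1, 64))  # ...and 64 is the smallest such
assert sigma(64) - 64 == 63                              # aliquot sum of 2^6 is 2^6 - 1
assert sigma(sigma(64)) == 2 * 64                        # superperfect: sigma(127) = 128
assert sum(phi(k) for k in range(1, 15)) == 64           # totient sum over 1..14
assert all(m + sum(map(int, str(m))) != 64 for m in range(1, 64))  # 64 is a self number
```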
See also
Other powers of two: 4, 8, 16, 32, 64, 128, ...
64-bit computing
References
Integers | 64 (number) | Mathematics | 425 |
55,575,535 | https://en.wikipedia.org/wiki/TON%20618 | TON 618 (abbreviation of Tonantzintla 618) is a hyperluminous, broad-absorption-line, radio-loud quasar, and Lyman-alpha blob located near the border of the constellations Canes Venatici and Coma Berenices, with the projected comoving distance of approximately 18.2 billion light-years from Earth. It possesses one of the most massive black holes ever found, at 40.7 billion solar masses.
Observational history
As quasars were not recognized until 1963, the nature of this object was unknown when it was first noted in a 1957 survey of faint blue stars (mainly white dwarfs) that lie away from the plane of the Milky Way. On photographic plates taken with the 0.7 m Schmidt telescope at the Tonantzintla Observatory in Mexico, it appeared "decidedly violet" and was listed by the Mexican astronomers Braulio Iriarte and Enrique Chavira as entry number 618 in the Tonantzintla Catalogue.
In 1970, a radio survey at Bologna in Italy discovered radio emissions from TON 618, indicating that it was a quasar. Marie-Helene Ulrich then obtained optical spectra of TON 618 at the McDonald Observatory which showed emission lines typical of a quasar. From the high redshift of the lines Ulrich deduced that TON 618 was very distant, and hence was one of the most luminous quasars known.
Components
Supermassive black hole
As a quasar, TON 618 is believed to be the active galactic nucleus at the center of a galaxy, the engine of which is a supermassive black hole feeding on intensely hot gas and matter in an accretion disc. Given its observed redshift of 2.219, the light travel time of TON 618 is estimated to be approximately 10.8 billion years. Due to the brilliance of the central quasar, the surrounding galaxy is outshone by it and hence is not visible from Earth. With an absolute magnitude of −30.7, it shines with a luminosity of watts, or as brilliantly as 140 trillion times that of the Sun, making it one of the brightest objects in the known Universe.
Like other quasars, TON 618 has a spectrum containing emission lines from cooler gas much further out than the accretion disc, in the broad-line region. The size of the broad-line region can be calculated from the brightness of the quasar radiation that is lighting it up. Shemmer and coauthors used both NV and CIV emission lines in order to calculate the widths of the Hβ spectral line of at least 29 quasars, including TON 618, as a direct measurement of their accretion rates and hence the mass of the central black hole.
The emission lines in the spectrum of TON 618 have been found to be unusually wide, indicating that the gas is travelling very fast; the full width at half maximum of TON 618 has been the largest of the 29 quasars, with hints of 10,500 km/s speeds of infalling material from a direct measurement of the Hβ spectral line, an indication of a very strong gravitational force. From this, the mass of the central black hole of TON 618 has been estimated to be at . This is considered one of the highest masses ever recorded for such an object; higher than the mass of all the stars in the Milky Way galaxy combined, which is , and 15,300 times more massive than Sagittarius A*, the Milky Way's central black hole. With such a high mass, TON 618 may fall into a proposed new classification of ultramassive black holes. A black hole of this mass has a Schwarzschild radius of 1,300 AU (about 390 billion km, or 0.04 ly in diameter), which is more than 40 times the distance from Neptune to the Sun, and its event horizon is large enough to fit over 30 Solar Systems inside of it.
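For reference, the Schwarzschild radius quoted above scales linearly with the black-hole mass via the standard formula, so the radius figure tracks whichever mass estimate is adopted:

```latex
r_s = \frac{2GM}{c^{2}} \approx 2.95\,\mathrm{km}\times\frac{M}{M_{\odot}}
```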
A more recent measurement in 2019 by Ge and colleagues, which utilizes the C IV emission line (an alternative spectral line to Hβ) on the same data used in the earlier paper by Shemmer, found a lower relative velocity of the surrounding gas of , indicating a lower mass for the central black hole at , consequently lower than the previous estimate.
Lyman-alpha nebula
The nature of TON 618 as a Lyman-alpha emitter has been well documented since at least the 1980s. Lyman-alpha emitters are characterized by their significant emission of the Lyman-alpha line, an ultraviolet wavelength emitted by neutral hydrogen. Such objects, however, have been very difficult to study due to the Lyman-alpha line being strongly absorbed by air in the Earth's atmosphere, limiting study of Lyman-alpha emitters to those objects with high redshifts. TON 618, with its luminous emission of Lyman-alpha radiation along with its high redshift, has made it one of the most important objects in the study of the Lyman-alpha forest.
Observations made by the Atacama Large Millimeter Array (ALMA) in 2021 revealed the apparent source of the Lyman-alpha radiation of TON 618: an enormous cloud of gas surrounding the quasar and its host galaxy. This would make it a Lyman-alpha blob (LAB), one of the largest such objects yet known.
LABs are huge collections of gases, or nebulae, that are also classified as Lyman-alpha emitters. These enormous, galaxy-sized clouds are some of the largest nebulae known to exist, with some identified LABs in the 2000s reaching sizes of at least hundreds of thousands of light-years across.
In the case of TON 618, the enormous Lyman-alpha nebula surrounding it has the diameter of at least , twice the size of the Milky Way. The nebula consists of two parts: an inner molecular outflow and an extensive cold molecular gas in its circumgalactic medium, each having the mass of 50 billion , with both of them being aligned to the radio jet produced by the central quasar. The extreme radiation from TON 618 excites the hydrogen in the nebula so much that it causes it to glow brightly in the Lyman-alpha line, consistent with the observations of other LABs driven by their inner galaxies. Since both quasars and LABs are precursors of modern-day galaxies, the observation on TON 618 and its enormous LAB gave insight to the processes that drive the evolution of massive galaxies, in particular probing their ionization and early development.
See also
Other notable objects in the Tonantzintla Catalogue
NGC 6380 – globular cluster listed as TON 1, the first entry of the Tonantzintla Catalogue.
SX Leonis Minoris – variable star listed as TON 45.
U Geminorum – star system listed as TON 842.
RZ Leonis Minoris – cataclysmic variable listed as TON 1107.
Notes
References
External links
NASA animation illustrating the relative sizes of black holes including TON 618
Quasars
Astronomical objects discovered in 1957
Canes Venatici
Supermassive black holes
Lyman-alpha blobs | TON 618 | Physics,Astronomy | 1,464 |
2,726 | https://en.wikipedia.org/wiki/Atlas%20Autocode | Atlas Autocode (AA) is a programming language developed around 1963 at the University of Manchester. A variant of the language ALGOL, it was developed by Tony Brooker and Derrick Morris for the Atlas computer. The initial AA and AB compilers were written by Jeff Rohl and Tony Brooker using the Brooker-Morris Compiler-compiler, with a later hand-coded non-CC implementation (ABC) by Jeff Rohl.
The word Autocode was basically an early term for programming language. Different autocodes could vary greatly.
Features
AA was a block structured language that featured explicitly typed variables, subroutines, and functions. It omitted some ALGOL features such as passing parameters by name, which in ALGOL 60 means passing the memory address of a short subroutine (a thunk) to recalculate a parameter each time it is mentioned.
The AA compiler could generate range-checking for array accesses, and allowed an array to have dimensions that were determined at runtime, i.e., an array could be declared as integer array Thing (i:j), where i and j were calculated values.
AA high-level routines could include machine code, either to make an inner loop more efficient or to effect some operation which otherwise cannot be done easily.
AA included a complex data type to represent complex numbers, partly because of pressure from the electrical engineering department, as complex numbers are used to represent the behavior of alternating current. The imaginary unit (the square root of −1) was represented by i, which was treated as a fixed complex constant.
The complex data type was dropped when Atlas Autocode later evolved into the language Edinburgh IMP. IMP was an extension of AA and was used to write the Edinburgh Multiple Access System (EMAS) operating system.
In addition to being notable as the progenitor of IMP and EMAS, AA is noted for having had many of the features of the original Compiler Compiler. A variant of the AA compiler included run-time support for a top-down recursive descent parser. The style of parser used in the Compiler Compiler was in use continuously at Edinburgh from the 60's until almost the year 2000.
Other Autocodes were developed for the Titan computer, a prototype Atlas 2 at Cambridge, and the Ferranti Mercury.
Syntax
Atlas Autocode's syntax was largely similar to ALGOL, though it was influenced by the output device which the author had available, a Friden Flexowriter. Thus, it allowed symbols like ½ for .5 and the superscript 2 for to the power of 2. The Flexowriter supported overstriking and thus, AA did also: up to three characters could be overstruck as a single symbol. For example, the character set had no ↑ symbol, so exponentiation was an overstrike of | and *. The aforementioned underlining of reserved words (keywords) could also be done using overstriking. The language is described in detail in the Atlas Autocode Reference Manual.
Other Flexowriter characters that were found a use in AA were: α in floating-point numbers, e.g., 3.56α-7 for modern 3.56e-7 ; β to mean the second half of a 48-bit Atlas memory word; π for the mathematical constant pi.
When AA was ported to the English Electric KDF9 computer, the character set was changed to International Organization for Standardization (ISO). That compiler has been recovered from an old paper tape by the Edinburgh Computer History Project and is available online, as is a high-quality scan of the original Edinburgh version of the Atlas Autocode manual.
Keywords in AA were distinguishable from other text by being underlined, which was implemented via overstrike in the Flexowriter (compare to bold in ALGOL). There were also two stropping regimes. First, there was an "uppercasedelimiters" mode where all uppercase letters (outside strings) were treated as underlined lowercase. Second, in some versions (but not in the original Atlas version), it was possible to strop keywords by placing a "%" sign in front of them, for example the keyword endofprogramme could be typed as %end %of %programme or %endofprogramme. This significantly reduced typing, due to only needing one character, rather than overstriking the whole keyword. As in ALGOL, there were no reserved words in the language as keywords were identified by underlining (or stropping), not by recognising reserved character sequences. In the statement if token=if then result = token, there is both a keyword if and a variable named if.
As in ALGOL, AA allowed spaces in variable names, such as integer previous value. Spaces were not significant and were removed before parsing in a trivial pre-lexing stage called "line reconstruction". What the compiler would see in the above example would be "iftoken=ifthenresult=token". Spaces were possible due partly to keywords being distinguished in other ways, and partly because the source was processed by scannerless parsing, without a separate lexing phase, which allowed the lexical syntax to be context-sensitive.
The syntax for expressions let the multiplication operator be omitted, e.g., 3a was treated as 3*a, and a(i+j) was treated as a*(i+j) if a was not an array. In ambiguous uses, the longest possible name was taken (maximal munch), for example ab was not treated as a*b, whether or not a and b had been declared.
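The line-reconstruction and implicit-multiplication rules described above can be sketched in Python (purely as an illustration of the rules; the real compiler was structured quite differently, and this ignores the array-subscript case the text mentions):

```python
import re

def line_reconstruct(src: str) -> str:
    # AA's pre-lexing "line reconstruction" stripped insignificant spaces,
    # so "integer previous value" reaches the parser as one run of letters.
    return src.replace(" ", "")

def insert_implicit_multiply(expr: str) -> str:
    # Sketch of the omitted-operator rule: a digit followed by a letter,
    # or a letter followed by "(", implies multiplication. Letter-letter
    # runs are left alone, mirroring maximal munch ("ab" is one name).
    expr = re.sub(r"(\d)([a-z])", r"\1*\2", expr)
    return re.sub(r"([a-z])\(", r"\1*(", expr)

assert line_reconstruct("integer previous value") == "integerpreviousvalue"
assert insert_implicit_multiply("3a") == "3*a"
assert insert_implicit_multiply("a(i+j)") == "a*(i+j)"
```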
References
External links
The main features of Atlas Autocode, By R. A. Brooker, J. S. Rohl, and S. R. Clark
The Atlas Autocode Mini-Manual by W. F. Lunnon, G. Riding (July 1965)
Atlas Autocode Reference Manual by R.A. Brooker, J.S.Rohl (March 1965)
Mercury Autocode, Atlas Autocode and some Associated Matters. by Vic Forrington (Jan 2014)
Flowcharts for Atlas Autocode compiler on KDF9.
Ferranti
History of computing in the United Kingdom
Structured programming languages | Atlas Autocode | Technology | 1,292 |
8,877,643 | https://en.wikipedia.org/wiki/Misner%20space | Misner space is an abstract mathematical spacetime, first described by Charles W. Misner. It is also known as the Lorentzian orbifold . It is a simplified, two-dimensional version of the Taub–NUT spacetime. It contains a non-curvature singularity and is an important counterexample to various hypotheses in general relativity.
Michio Kaku develops the following analogy for understanding the concept: "Misner space is an idealized space in which a room, for example, becomes the entire universe. For example, every point on the left wall of the room is identical to the corresponding point on the right wall, such that if you were to walk toward the left wall you will walk through the wall and appear from the right wall. This suggests that the left and right wall are joined, in some sense, as in a cylinder. The opposite walls are thus all identified with each other, and the ceiling is likewise identified with the floor. Misner space is often studied because it has the same topology as a wormhole but is much simpler to handle mathematically. If the walls move, then time travel might be possible within the Misner universe."
Metric
The simplest description of Misner space is to consider two-dimensional Minkowski space with the metric ds² = −dt² + dx²,
with the identification of every pair of spacetime points by a constant boost: (t, x) → (t cosh π + x sinh π, x cosh π + t sinh π).
It can also be defined directly on the cylinder manifold with coordinates (t′, ψ) by the metric ds² = −2 dt′ dψ + t′ dψ².
The two coordinates are related by the map
and
Causality
Misner space is a standard example for the study of causality since it contains both closed timelike curves and a compactly generated Cauchy horizon, while still being flat (since it is just Minkowski space). With the coordinates , the loop defined by , with tangent vector , has the norm , making it a closed null curve. This is the chronology horizon : there are no closed timelike curves in the region , while every point admits a closed timelike curve through it in the region .
This is due to the tipping of the light cones which, for , remains above lines of constant but will open beyond that line for , causing any loop of constant to be a closed timelike curve.
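Taking the commonly used cylinder form of the Misner metric, ds² = −2 dt dψ + t dψ² (an assumption here; sign conventions vary by author), the tipping of the light cones can be made explicit:

```latex
% Norm of the tangent \partial_\psi to a loop of constant t:
\|\partial_\psi\|^{2} = g_{\psi\psi} = t ,
% so the loops are spacelike for t > 0, null at t = 0 (the chronology
% horizon), and closed timelike curves for t < 0.
%
% Null directions: setting ds^2 = 0,
0 = -2\,dt\,d\psi + t\,d\psi^{2} = d\psi\,(t\,d\psi - 2\,dt)
\quad\Longrightarrow\quad
d\psi = 0 \quad\text{or}\quad \frac{dt}{d\psi} = \frac{t}{2} ,
% one null direction stays fixed while the other tips over as t varies.
```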
Chronology protection
Misner space was the first spacetime where the notion of chronology protection was used for quantum fields, by showing that in the semiclassical approximation, the expectation value of the stress-energy tensor for the vacuum is divergent.
References
Further reading
General relativity | Misner space | Physics | 499 |
73,922,479 | https://en.wikipedia.org/wiki/Ramicolous%20lichen | A ramicolous lichen is one that lives on branches.
References
Sources
Lichenology | Ramicolous lichen | Biology | 20 |
27,450,868 | https://en.wikipedia.org/wiki/816%20Nuclear%20Military%20Plant | 816 Nuclear Military Plant is an unfinished Chinese underground nuclear weapons production facility and the largest man-made tunnel structure in the world. A military megaproject, the nuclear base is located near what is now suburban Fuling, a district of Chongqing, China. In 2010, it was opened to Chinese tourists. It is a network of nuclear-weapons manufacturing tunnels distinct from the likewise defunct Underground Project 131 and the still operational "Underground Great Wall of China."
History
The project was started in 1966 when Sino-Soviet relations dramatically declined (see also the Sino-Soviet split). To enhance China's national defence and prevent a possible Soviet invasion and nuclear attack, the project was approved (directly by then-Premier Zhou Enlai) and undertaken in secret. More than 60,000 engineering soldiers of the People's Liberation Army participated in the construction of the base. The underground base was designed to withstand explosions of thousands of tons of TNT and magnitude-8 earthquakes.
The project was under construction for 17 years and was nearly complete by 1984. (China had conducted its first public nuclear test in 1964.) Largely due to the changing Cold War international situation, the project was cancelled in February 1984. It was declassified in April 2002. In April 2010, after being closed for over 25 years, the base was opened to tourists.
Structure
The surface area of the cave complex is more than 104,000 m2, and the total length of the tunnels is more than 20 kilometers. The whole complex consists of 13 levels and 18 artificial caves linked to each other, and has more than 80 roads and 130 tunnels, which automobiles are able to drive through. The base contains the "World's Largest Artificial Cave", which has a height of 79.6 meters, roughly equal to that of a 20-floor building.
See also
Fallout Shelter
Underground City (Beijing)
Nuclear warfare
Nuclear deterrent
Nuclear strategy
References
External links
Geographical coordinates:
Pictures of the 816 Nuclear Military Plant
CCTV: Chongqing opens former Nuclear Plant as tourist attraction
Secret military programs
Nuclear history of China
Nuclear program of the People's Republic of China
Military history of the People's Republic of China
Subterranean buildings and structures
Buildings and structures in Chongqing
Secret places
Cold War museums in China
China Projects
Military installations of China
Military history of Chongqing
1966 establishments in Shanghai | 816 Nuclear Military Plant | Engineering | 475 |
53,513,426 | https://en.wikipedia.org/wiki/Lie%20group%20integrator | A Lie group integrator is a numerical integration method for differential equations built from coordinate-independent operations such as Lie group actions on a manifold. They have been used for the animation and control of vehicles in computer graphics and control systems/artificial intelligence research.
These tasks are particularly difficult because they feature nonholonomic constraints.
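As a minimal sketch of the idea (illustrative only, not a specific published scheme): the simplest Lie group integrator, the Lie–Euler method, replaces the additive Euler update with a group action. A rotation matrix evolved under dR/dt = ω̂ R via R ← exp(h ω̂) R stays exactly on the rotation group SO(3), which the additive update does not:

```python
import math

# Lie-Euler integrator on SO(3): each step applies the coordinate-independent
# group action R <- exp(h * hat(omega)) @ R instead of the additive update
# R += h * hat(omega) @ R, so the numerical solution never leaves the manifold.

def hat(w):
    """Map a vector w in R^3 to the skew-symmetric matrix 'w hat'."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm_so3(w, h):
    """exp(h * hat(w)) via the Rodrigues formula."""
    theta = h * math.sqrt(sum(c * c for c in w))
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if theta < 1e-12:
        return I
    K = [[h * e for e in row] for row in hat(w)]  # norm of K is theta
    K2 = matmul(K, K)
    a, b = math.sin(theta) / theta, (1.0 - math.cos(theta)) / theta ** 2
    return [[I[i][j] + a * K[i][j] + b * K2[i][j] for j in range(3)]
            for i in range(3)]

def lie_euler(R, omega, h, steps):
    """Integrate dR/dt = hat(omega) R with constant body angular velocity."""
    step = expm_so3(omega, h)
    for _ in range(steps):
        R = matmul(step, R)
    return R

R0 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
R = lie_euler(R0, (0.0, 0.0, 1.0), 0.1, 100)  # rotate about z for 10 radians
```

After 100 steps the result is still an exact rotation matrix (orthogonal, determinant one) up to floating-point roundoff, illustrating why such integrators suit attitude dynamics and vehicle control.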
See also
Euler integration
Lie group
Numerical methods for ordinary differential equations
Parallel parking problem
Runge–Kutta methods
Variational integrator
References
Numerical analysis | Lie group integrator | Mathematics | 97 |
38,464,658 | https://en.wikipedia.org/wiki/Relativistic%20disk | In general relativity, the term relativistic disk refers to a class of axisymmetric self-consistent solutions to Einstein's field equations corresponding to the gravitational field generated by axisymmetric isolated sources. To find such solutions, one has to correctly pose and solve together the 'outer' problem, a boundary value problem for the vacuum Einstein field equations whose solution determines the external field, and the 'inner' problem, whose solution determines the structure and the dynamics of the matter source in its own gravitational field. Physically reasonable solutions must satisfy additional conditions such as finiteness and positivity of mass, a physically reasonable kind of matter, and finite geometrical size. Exact solutions describing relativistic static thin disks as their sources were first studied by Bonnor and Sackfield and by Morgan and Morgan. Subsequently, several classes of exact solutions corresponding to static and stationary thin disks have been obtained by different authors.
References
General relativity
Exact solutions in general relativity | Relativistic disk | Physics,Mathematics | 193 |
3,621,036 | https://en.wikipedia.org/wiki/Kempe%20chain | In mathematics, a Kempe chain is a device used mainly in the study of the four colour theorem. Intuitively, it is a connected chain of vertices on a graph with alternating colours.
History
Kempe chains were first used by Alfred Kempe in his attempted proof of the four colour theorem. Even though his proof turned out to be incomplete, the method of Kempe chains is crucial to the success of valid modern proofs, such as the first successful one by Kenneth Appel and Wolfgang Haken. Furthermore, the method is used in the proof of the five color theorem by Percy John Heawood, a weaker but more easily proven version of the four colour theorem.
Formal definition
The term "Kempe chain" is used in two different but related ways.
Suppose G is a graph with vertex set V, with a given colouring function

\(c : V \to S,\)

where S is a finite set of colours, containing at least two distinct colours a and b. If v is a vertex with colour a, then the (a, b)-Kempe chain of G containing v is the maximal connected subset of V which contains v and whose vertices are all coloured either a or b.
The above definition is what Kempe worked with. Typically, the set S has four elements (the four colours of the four colour theorem), and c is a proper colouring, that is, each pair of adjacent vertices in V are assigned distinct colours. With these additional conditions, a and b are two out of the four colours available, and every element of the (a, b)-Kempe chain has neighbours in the chain of only the other colour.
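The vertex-based definition translates directly into a breadth-first search restricted to the two colours. The following illustrative sketch (function and variable names are our own) computes an (a, b)-Kempe chain:

```python
from collections import deque

def kempe_chain(adj, colour, v, a, b):
    """Maximal connected set of vertices coloured a or b that contains v.

    adj    -- dict mapping each vertex to its set of neighbours
    colour -- dict mapping each vertex to its colour
    """
    assert colour[v] == a
    chain, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in chain and colour[w] in (a, b):
                chain.add(w)
                queue.append(w)
    return chain

# A 6-cycle, properly coloured with three colours
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
colour = {0: "red", 1: "blue", 2: "red", 3: "blue", 4: "red", 5: "green"}

chain = kempe_chain(adj, colour, 0, "red", "blue")
```

Here the green vertex 5 blocks the chain at both ends, so the (red, blue)-chain through vertex 0 is {0, 1, 2, 3, 4}, a path of alternating colours.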
A more general definition, which is used in the modern computer-based proofs of the four colour theorem, is the following. Suppose again that G is a graph, with edge set E, and this time we have a colouring function

\(c : E \to S.\)
If e is an edge assigned colour a, then the (a, b)-Kempe chain of G containing e is the maximal connected subset of E which contains e and whose edges are all coloured either a or b.
This second definition is typically applied where S has three elements, say a, b and c, and where V is a cubic graph, that is, every vertex has three incident edges. If such a graph is properly coloured, then each vertex must have edges of three distinct colours, and Kempe chains end up being paths, which is simpler than in the case of the first definition.
In terms of maps
Application to the four colour theorem
In his work on the four colour theorem, Kempe was able to prove that every planar graph necessarily has a vertex of degree five or less, that is, a vertex that touches at most five other vertices, called its neighbours. As such, to prove the four colour theorem, it is sufficient to prove that vertices of degree five or less are all four-colourable. Kempe was able to prove the case of degree four and give a partial proof of degree five using Kempe chains.
In this case, Kempe chains are used to prove that no vertex of degree four needs to be adjacent to four colours distinct from its own. First, one can create a graph with a vertex v and four vertices as neighbours. If we remove the vertex v, we can four-colour the remaining vertices. We can set the colours as (in clockwise order) red, yellow, blue, and green. In this situation, there can be a Kempe chain joining the red and blue neighbours or a Kempe chain joining the green and yellow neighbours, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be coloured with both red or blue and with green or yellow at the same time. Supposing that the Kempe chain is connecting the green and yellow neighbours, red and blue must then necessarily not have a Kempe chain between them. So, when placing the original vertex v back into the graph, we can simply reverse the colours in the red-blue Kempe chain containing the red neighbour (turning that red vertex blue), which leaves vertex v with two blue neighbours, one green, and one yellow. This means v has only three distinct colours as neighbours, and that we can now colour vertex v red. This results in a four-coloured graph.
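The colour interchange in this argument can be sketched in code (an illustrative toy example, not Kempe's original notation). Swapping the two colours along a Kempe chain always yields another proper colouring, because any neighbour of a chain vertex that is not itself in the chain carries a colour other than a or b:

```python
def swap_kempe(adj, colour, v, a, b):
    """Return a new colouring with a and b exchanged on v's (a, b)-chain."""
    chain, stack = {v}, [v]
    while stack:  # depth-first search restricted to colours a and b
        u = stack.pop()
        for w in adj[u]:
            if w not in chain and colour[w] in (a, b):
                chain.add(w)
                stack.append(w)
    flip = {a: b, b: a}
    return {u: (flip[c] if u in chain else c) for u, c in colour.items()}

def is_proper(adj, colour):
    return all(colour[u] != colour[w] for u in adj for w in adj[u])

# Centre vertex "v" with four mutually non-adjacent neighbours, as in the
# degree-four argument; "v" is removed (uncoloured) and must be recoloured.
adj = {"v": {1, 2, 3, 4}, 1: {"v"}, 2: {"v"}, 3: {"v"}, 4: {"v"}}
colour = {"v": None, 1: "red", 2: "yellow", 3: "blue", 4: "green"}

# No red-blue chain joins neighbours 1 and 3 here, so flip red/blue from 1 ...
recoloured = swap_kempe(adj, colour, 1, "red", "blue")
# ... leaving only three colours around the centre, which can then take red.
recoloured["v"] = "red"
```

After the swap the centre vertex sees two blue neighbours, one yellow, and one green, so assigning it red keeps the colouring proper, mirroring the argument above.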
Other applications
Kempe chains have been used to solve problems in colouring extension. Kempe chains can be used for register allocation.
See also
Four colour theorem
Five colour theorem
Graph colouring
References
Graph coloring | Kempe chain | Mathematics | 902 |
4,580,462 | https://en.wikipedia.org/wiki/Physiome | The physiome of an individual's or species' physiological state is the description of its functional behavior. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure (genome, proteome, and morphome). The term comes from "physio-" (nature) and "-ome" (as a whole). The study of the physiome is called physiomics.
The concept of a physiome project was presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology in 1993. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade. The project is led by the Physiome Commission of the IUPS.
Other research initiatives related to the physiome include:
The EuroPhysiome Initiative
The NSR Physiome Project of the National Simulation Resource (NSR) at the University of Washington, supporting the IUPS Physiome Project
The Wellcome Trust Heart Physiome Project, a collaboration between the University of Auckland and the University of Oxford, part of the wider IUPS Physiome Project
See also
Cardiophysics
Cytomics
Human Genome Project
List of omics topics in biology
Living Human Project
Virtual Physiological Human
Virtual Physiological Rat
References
External links
National Resource for Cell Analysis and Modeling (NRCAM)
Biophysics
Physiology | Physiome | Physics,Biology | 316 |
69,766,757 | https://en.wikipedia.org/wiki/Steve%20Owens%20%28Arizona%20politician%29 | Stephen Alan Owens (born August 19, 1955) is an American attorney and politician. Originally from Memphis, Tennessee, he served as chief counsel and state director for U.S. Senator Al Gore before moving to the Phoenix, Arizona area during Gore's unsuccessful presidential run in 1988. He was a fundraiser for the Clinton-Gore campaign in 1992, and, from 1993 to 1995, was chair of the Arizona Democratic Party. He was the Democratic nominee for Arizona's 6th congressional district in 1996 and 1998, losing both times to incumbent J. D. Hayworth.
Owens served as director of the Arizona Department of Environmental Quality from 2003 to 2009 under Governor Janet Napolitano, after which he was appointed by President Barack Obama to be Assistant Administrator of the U.S. Environmental Protection Agency for the Office of Prevention, Pesticides and Toxic Substances. After two years in Washington, he joined Squire Sanders (now Squire Patton Boggs) as a partner in their Phoenix office. Since February 2022, he has served as a member of the U.S. Chemical Safety and Hazard Investigation Board by appointment of President Joe Biden.
Early life and family
Childhood and education
Owens was born on August 19, 1955, in Memphis, Tennessee to Milburne (1924–1995), a truck driver, and Maxine Neal Owens (1932–2019), who worked at Sears. He attended Messick High School, where he was elected by his peers as president of the class of 1973. Later, he was accepted into Brown University on an academic scholarship. While there, he was an active member of the Undergraduate Council of Students, the school's student government. He won election as vice president in 1976 and as president the following year.
After five years at Brown, Owens graduated with honors with a degree in public policy in 1978. He then attended Vanderbilt University Law School, where he was editor-in-chief of the school's law review, graduating in 1981. He was admitted to the Tennessee bar later that year and spent a year as a law clerk to Judge Thomas A. Wiseman Jr. of the U.S. District Court for the Middle District of Tennessee.
Marriage
Owens married Karen Lynn Carter on November 12, 1988, at the Customs House in Nashville. The two knew each other at Vanderbilt Law and reconnected when Owens moved to Phoenix, Arizona, where Carter was practicing law with Janet Napolitano at Lewis & Roca. They went on to have two sons.
Career
Gore staffer
Owens first met then-U.S. Representative Al Gore as a law student. In 1982, he moved to Washington, D.C. after Gore named him counsel to the House Science and Technology Committee's Subcommittee on Oversight and Investigations, which Gore chaired. During the 1984 U.S. Senate election, in which Gore handily defeated Republican state senator Victor Ashe, Owens served as his Shelby County campaign manager. In the Senate, he was Gore's chief counsel and later his state director.
In 1987, Gore kicked off his campaign for the following year's Democratic presidential nomination. Despite a relatively successful Super Tuesday, by April 1988, he was trailing far behind Michael Dukakis and Jesse Jackson. Owens, the campaign's Southern director, was dispatched to Phoenix to round up delegates ahead of the April 16 Arizona caucus and ended up staying in the state. He took an active role in state politics, working in 1992 as a fundraiser for the Clinton-Gore campaign, and, on January 16, 1993, he was elected chair of the Arizona Democratic Party, after incumbent Bill Minette declined to run for a second term. He won reelection in early 1995 but resigned in July of that year, in part to focus on a 1996 congressional run. He was succeeded by former congressman Sam Coppersmith.
Congressional campaigns
Environmental lawyer
After moving to Phoenix, Owens entered private practice, joining the law firm Brown & Bain as a regulatory attorney and registered lobbyist. Later, he joined Beshears Muchmore Wallwork. In 2003, when friend Janet Napolitano was sworn in as Governor of Arizona, she appointed Owens to serve as director of the state Department of Environmental Quality. Six years later, Napolitano and Owens were both tapped for jobs in the Obama administration: Napolitano as Secretary of Homeland Security and Owens as Assistant Administrator of the Environmental Protection Agency for the Office of Prevention, Pesticides and Toxic Substances. Owens left in 2011 to return to Arizona and become a partner with Squire Sanders (now Squire Patton Boggs).
In 2021, President Joe Biden nominated Owens to serve on the U.S. Chemical Safety and Hazard Investigation Board. Owens' nomination was confirmed by the Senate in December 2021, and he began service on February 2, 2022. Following the resignation of Katherine Lemos in July 2022, President Biden appointed Owens as interim executive authority, and nominated him as chair of the board. On November 17, 2022, the United States Senate Committee on Environment and Public Works held hearings on his nomination. On December 13, 2022, the United States Senate discharged the committee from further consideration of the nomination by unanimous consent agreement, and confirmed the nomination by voice vote.
References
External links
Candidate Profile from Congressional Quarterly
1955 births
Living people
Arizona Democratic Party chairs
Arizona Democrats
Arizona lawyers
Brown University alumni
Vanderbilt University Law School alumni
United States Chemical Safety and Hazard Investigation Board | Steve Owens (Arizona politician) | Chemistry | 1,083 |
35,945,712 | https://en.wikipedia.org/wiki/Topological%20degeneracy | In quantum many-body physics, topological degeneracy is a phenomenon in which the ground state of a gapped many-body Hamiltonian becomes degenerate in the limit of large system size such that the degeneracy cannot be lifted by any local perturbations.
Applications
Topological degeneracy can be used to protect qubits, which allows topological quantum computation. It is believed that topological degeneracy implies topological order (or long-range entanglement) in the ground state. Many-body states with topological degeneracy are described by topological quantum field theory at low energies.
Background
Topological degeneracy was first introduced to physically define topological order.
In two-dimensional space, the topological degeneracy depends on the topology of space, and the topological degeneracy on high-genus Riemann surfaces encodes all information on the quantum dimensions and the fusion algebra of the quasiparticles. In particular, the topological degeneracy on a torus is equal to the number of quasiparticle types.
The topological degeneracy also appears in the situation with topological defects (such as vortices, dislocations, holes in 2D sample, ends of a 1D sample, etc.), where the topological degeneracy depends on the number of defects. Braiding those topological defect leads to topologically protected non-Abelian geometric phase, which can be used to perform topologically protected quantum computation.
Topological degeneracy of topological order can be defined on a closed space or an open space with gapped boundaries or gapped domain walls, including both Abelian topological orders and non-Abelian topological orders.
The application of these types of systems for quantum computation has been proposed. In certain generalized cases, one can also design the systems with topological interfaces enriched or extended by global or gauge symmetries.
The topological degeneracy also appears in non-interacting fermion systems (such as p+ip superconductors) with trapped defects (such as vortices). In non-interacting fermion systems, there is only one type of topological degeneracy, where the number of the degenerate states is given by \(2^{N_d/2}\), where \(N_d\) is the number of the defects (such as the number of vortices). Such topological degeneracy is referred to as "Majorana zero modes" on the defects. In contrast, there are many types of topological degeneracy for interacting systems.
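Illustrative arithmetic for the non-interacting case, assuming the standard count of \(2^{N_d/2}\) degenerate states for Majorana zero modes: each pair of Majorana zero modes combines into one ordinary fermion mode that may be empty or occupied, so 2n vortices give a 2^n-fold degenerate ground space:

```python
# Assumed counting rule for Majorana zero modes bound to defects: N_d defects
# host N_d Majorana modes, which pair into N_d / 2 ordinary fermion modes,
# each independently empty or occupied -> 2 ** (N_d / 2) degenerate states.

def majorana_degeneracy(n_defects):
    if n_defects % 2:
        raise ValueError("Majorana zero modes pair up: need an even number")
    return 2 ** (n_defects // 2)

degeneracies = [majorana_degeneracy(n) for n in (2, 4, 6)]
```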
A systematic description of topological degeneracy is given by tensor category (or monoidal category) theory.
See also
Topological order
Quantum topology
Topological defect
Topological quantum field theory
Topological quantum number
Majorana fermion
References
Quantum phases
Condensed matter physics | Topological degeneracy | Physics,Chemistry,Materials_science,Engineering | 557 |
3,641,749 | https://en.wikipedia.org/wiki/Cytoarchitecture | Cytoarchitecture (from Greek κύτος 'cell' and ἀρχιτεκτονική 'architecture'), also known as cytoarchitectonics, is the study of the cellular composition of the central nervous system's tissues under the microscope. Cytoarchitectonics is one of the ways to parse the brain, by obtaining sections of the brain using a microtome and staining them with chemical agents which reveal where different neurons are located.
The study of the parcellation of nerve fibers (primarily axons) into layers forms the subject of myeloarchitectonics (from Greek μυελός 'marrow' and ἀρχιτεκτονική 'architecture'), an approach complementary to cytoarchitectonics.
History of the cerebral cytoarchitecture
Defining cerebral cytoarchitecture began with the advent of histology—the science of slicing and staining brain slices for examination. It is credited to the Viennese psychiatrist Theodor Meynert (1833–1892), who in 1867 noticed regional variations in the histological structure of different parts of the gray matter in the cerebral hemispheres.
Paul Flechsig was the first to present the cytoarchitecture of the human brain into 40 areas. Alfred Walter Campbell then divided it into 14 areas.
Sir Grafton Elliot Smith (1871–1937), a New South Wales native working in Cairo, identified 50 areas. Korbinian Brodmann worked on the brains of diverse mammalian species and developed a division of the cerebral cortex into 52 discrete areas (of which 44 in the human, and the remaining 8 in the non-human primate brain). Brodmann used numbers to categorize the different architectural areas, now referred to as a Brodmann Area, and he believed that each of these regions served a unique functional purpose.
Constantin von Economo and Georg N. Koskinas, two neurologists in Vienna, produced a landmark work in brain research by defining 107 cortical areas on the basis of cytoarchitectonic criteria. They used letters to categorize the architecture, e.g., "F" for areas of the frontal lobe.
The Nissl staining technique
The Nissl staining technique (named for Franz Nissl the neuroscientist and histologist who originated the technique) is commonly used for determining the cytoarchitectonics of neuroanatomical structures, using common agents such as thionine, cresyl violet, or neutral red. These dyes intensely stain "Nissl bodies" (rough endoplasmic reticulum), which are abundant in neurons and reveal specific patterns of cytoarchitecture in the brain. Other common staining techniques used by histologists in other tissues (such as the hematoxylin and eosin or "H&E stain") leave brain tissue appearing largely homogeneous and do not reveal the level of organization apparent in a Nissl stain. Nissl staining reveals details ranging from the macroscopic, such as the laminar pattern of the cerebral cortex or the interlocking nuclear patterns of the diencephalon and brainstem, to the microscopic, such as the distinctions between individual neurons and glia in any subregion of the central nervous system. Many other neuroanatomic and cytoarchitectonic techniques are available to supplement Nissl cytoarchitectonics, including immunohistochemistry and in situ hybridization, which allow one to label any gene or protein expressed in any group of cells in the brain. However, Nissl cytoarchitecture remains a reliable, inexpensive, and familiar starting or reference point for neuroscientists wishing to examine or communicate their findings in a widely recognized anatomical framework and/or in reference to neuroanatomical atlases which use the same technique.
See also
Otfrid Foerster
References
Cell biology
Neuroanatomy
Histology | Cytoarchitecture | Chemistry,Biology | 853 |
1,488,463 | https://en.wikipedia.org/wiki/Agroforestry | Agroforestry (also known as agro-sylviculture or forest farming) is a land use management system that integrates trees with crops or pasture. It combines agricultural and forestry technologies. As a polyculture system, an agroforestry system can produce timber and wood products, fruits, nuts, other edible plant products, edible mushrooms, medicinal plants, ornamental plants, animals and animal products, and other products from both domesticated and wild species.
Agroforestry can be practiced for economic, environmental, and social benefits, and can be part of sustainable agriculture. Apart from production, benefits from agroforestry include improved farm productivity, healthier environments, reduction of risk for farmers, beauty and aesthetics, increased farm profits, reduced soil erosion, creating wildlife habitat, less pollution, managing animal waste, increased biodiversity, improved soil structure, and carbon sequestration.
Agroforestry practices are especially prevalent in the tropics, especially in subsistence smallholdings areas, with particular importance in sub-Saharan Africa. Due to its multiple benefits, for instance in nutrient cycle benefits and potential for mitigating droughts, it has been adopted in the USA and Europe.
Definition
At its most basic, agroforestry is any of various polyculture systems that intentionally integrate trees with crops or pasture on the same land. An agroforestry system is intensively managed to optimize helpful interactions between the plants and animals included, and “uses the forest as a model for design.”
Agroforestry shares principles with polyculture practices such as intercropping, but can also involve much more complex multi-strata agroforests containing hundreds of species. Agroforestry can also utilise nitrogen-fixing plants such as legumes to restore soil nitrogen fertility. The nitrogen-fixing plants can be planted either sequentially or simultaneously.
History and scientific study
The term “agroforestry” was coined in 1973 by Canadian forester John Bene, but the concept includes agricultural practices that have existed for millennia.
Scientific agroforestry began in the 20th century with ethnobotanical studies carried out by anthropologists. However, indigenous communities that have lived in close relationships with forest ecosystems have practiced agroforestry informally for centuries. For example, Indigenous peoples of California periodically burned oak and other habitats to maintain a ‘pyrodiversity collecting model,’ which allowed for improved tree health and habitat conditions. Likewise Native Americans in the eastern United States extensively altered their environment and managed land as a “mosaic” of woodland areas, orchards, and forest gardens.
Agroforestry in the tropics is ancient and widespread throughout various tropical areas of the world, notably in the form of "tropical home gardens." Some “tropical home garden” plots have been continuously cultivated for centuries. A “home garden” in Central America could contain 25 different species of trees and food crops on just one-tenth of an acre. "Tropical home gardens" are traditional systems developed over time by growers without formalized research or institutional support, and are characterized by a high complexity and diversity of useful plants, with a canopy of tree and palm species that produce food, fuel, and shade, a mid-story of shrubs for fruit or spices, and an understory of root vegetables, medicinal herbs, beans, ornamental plants, and other non-woody crops.
In 1929, J. Russel Smith published Tree Crops: A Permanent Agriculture, in which he argued that American agriculture should be changed two ways: by using non-arable land for tree agriculture, and by using tree-produced crops to replace the grain inputs in the diets of livestock. Smith wrote that the honey locust tree, a legume that produced pods that could be used as nutritious livestock feed, had great potential as a crop. The book's subtitle later led to the coining of the term permaculture.
The most studied agroforestry practices involve a simple interaction between two components, such as simple configurations of hedges or trees integrated with a single crop. There is significant variation in agroforestry systems and the benefits they have. Agroforestry as understood by modern science is derived from traditional indigenous and local practices, developed by living in close association with ecosystems for many generations.
Benefits
Benefits include increasing farm productivity and profitability, reduced soil erosion, creating wildlife habitat, managing animal waste, increased biodiversity, improved soil structure, and carbon sequestration.
Agroforestry systems can provide advantages over conventional agricultural and forest production methods. They can offer increased productivity; social, economic and environmental benefits, as well as greater diversity in the ecological goods and services provided. These benefits are conditional on good farm management. This includes choosing the right trees, as well as pruning them regularly etc.
Biodiversity
Biodiversity in agroforestry systems is typically higher than in conventional agricultural systems. Two or more interacting plant species in a given area create a more complex habitat supporting a wider variety of fauna.
Agroforestry is important for biodiversity for different reasons. It provides a more diverse habitat than a conventional agricultural system in which the tree component creates ecological niches for a wide range of organisms both above and below ground. The life cycles and food chains associated with this diversification initiate an agroecological succession that creates functional agroecosystems that confer sustainability. Tropical bat and bird diversity, for instance, can be comparable to the diversity in natural forests. Although agroforestry systems do not provide as many floristic species as forests and do not show the same canopy height, they do provide food and nesting possibilities. A further contribution to biodiversity is that the germplasm of sensitive species can be preserved. As agroforests have no natural clear areas, habitats are more uniform. Furthermore, agroforests can serve as corridors between habitats. Agroforestry can help conserve biodiversity, positively influencing other ecosystem services.
Soil and plant growth
Depleted soil can be protected from soil erosion by groundcover plants such as naturally growing grasses in agroforestry systems. These help to stabilise the soil as they increase cover compared to short-cycle cropping systems. Soil cover is a crucial factor in preventing erosion. Cleaner water through reduced nutrient and soil surface runoff can be a further advantage of agroforestry. Trees can help reduce water runoff by decreasing water flow and evaporation and thereby allowing for increased soil infiltration. Compared to row-cropped fields nutrient uptake can be higher and reduce nutrient loss into streams.
Further advantages concerning plant growth:
Bioremediation
Drought tolerance
Increased crop stability
Sustainability
Agroforestry systems can provide ecosystem services which can contribute to sustainable agriculture in the following ways:
Diversification of agricultural products, such as fuelwood, medicinal plants, and multiple crops, increases income security
Increased food security and nutrition by restored soil fertility, crop diversity and resilience to weather shocks for food crops
Land restoration through reducing soil erosion and regulating water availability
Multifunctional site use, e.g., crop production and animal grazing
Reduced deforestation and pressure on woodlands by providing farm-grown fuelwood
Possibility of reduced chemical inputs, e.g. due to improved use of fertilizer, increased resilience against pests, and increased ground cover which reduces weeds
Growing space for medicinal plants e.g., in situations where people have limited access to mainstream medicines
According to the United Nations Food and Agriculture Organization (FAO)'s The State of the World’s Forests 2020, adopting agroforestry and sustainable production practices, restoring the productivity of degraded agricultural lands, embracing healthier diets and reducing food loss and waste are all actions that urgently need to be scaled up. Agribusinesses must meet their commitments to deforestation-free commodity chains and companies that have not made zero-deforestation commitments should do so.
Other environmental goals
Carbon sequestration is an important ecosystem service. Agroforestry practices can increase carbon stocks in soil and woody biomass. Trees in agroforestry systems, like in new forests, can recapture some of the carbon that was lost by cutting existing forests. They also provide additional food and products. The rotation age and the use of the resulting products are important factors controlling the amount of carbon sequestered. Agroforests can reduce pressure on primary forests by providing forest products.
Adaptation to climate change
Agroforestry can significantly contribute to climate change mitigation along with adaptation benefits. A case study in Kenya found that the adoption of agroforestry drove carbon storage and increased livelihoods simultaneously among small-scale farmers. In this case, maintaining the diversity of tree species, especially land use and farm size are important factors.
Poor smallholder farmers have turned to agroforestry as a means to adapt to climate change. A study from the CGIAR research program on Climate Change, Agriculture and Food Security found from a survey of over 700 households in East Africa that at least 50% of those households had begun planting trees in a change from earlier practices. The trees were planted with fruit, tea, coffee, oil, fodder and medicinal products in addition to their usual harvest. Agroforestry was one of the most widespread adaptation strategies, along with the use of improved crop varieties and intercropping.
Tropical
Trees in agroforestry systems can produce wood, fruits, nuts, and other useful products. Agroforestry practices are most prevalent in the tropics, especially in subsistence smallholding areas such as sub-Saharan Africa.
Research with the leguminous tree Faidherbia albida in Zambia showed maximum maize yields of 4.0 tonnes per hectare using fertilizer and inter-cropped with the trees at densities of 25 to 100 trees per hectare, compared to average maize yields in Zimbabwe of 1.1 tonnes per hectare.
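The scale of the reported difference can be checked with simple arithmetic. The following Python sketch uses only the two figures cited above to express the intercropped yield as a multiple of the regional average:

```python
# Maize yields reported above (tonnes per hectare)
intercropped_yield = 4.0   # maize under Faidherbia albida with fertilizer, Zambia
regional_average = 1.1     # average maize yield, Zimbabwe

# Relative gain of the agroforestry plot over the regional average
ratio = intercropped_yield / regional_average
print(f"Intercropped plots yielded {ratio:.1f}x the regional average "
      f"(+{(ratio - 1) * 100:.0f}%)")
# prints: Intercropped plots yielded 3.6x the regional average (+264%)
```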
Hillside systems
A well-studied example of an agroforestry hillside system is the Quesungual Slash and Mulch Agroforestry System in Lempira Department, Honduras. This region was historically used for slash-and-burn subsistence agriculture. Due to heavy seasonal floods, the exposed soil was washed away, leaving infertile barren soil exposed to the dry season. Farmed hillside sites had to be abandoned after a few years and new forest was burned. The UN's FAO helped introduce a system incorporating local knowledge consisting of the following steps:
Thin and prune hillside secondary forest, leaving individual beneficial trees, especially nitrogen-fixing ones. These reduce soil erosion, maintain soil moisture, provide shade and supply nitrogen-rich organic matter in the form of litter.
Plant maize in rows. This is a traditional local crop.
Harvest the maize from the dried plants and plant beans. The dried maize stalks provide an ideal structure for the climbing bean plants. Beans fix nitrogen and therefore introduce more of it into the soil.
Pumpkins can be planted during this time. The plant's large leaves and horizontal growth provide additional shade and moisture retention. It does not compete with the beans for sunlight since the latter grow vertically on the stalks.
Every few seasons, rotate the crop by grazing cattle, allowing grass to grow and adding soil organic matter and nutrients (manure). The cattle prevent total reforestation by grazing around the trees.
Repeat.
Kuojtakiloyan
Kuojtakiloyan is a Masehual term meaning 'useful forest' or 'forest that produces'; it is an agroforestry system developed and maintained by indigenous peoples of the Sierra Norte of the State of Puebla, Mexico. It has become a vital source of food, medicinal herbs, fuel, flowers and other resources for the local population, while also representing a respectful transformation of the environment that conserves its biodiversity. The kuojtakiloyan derives directly from ancestral Nahua and Totonaku knowledge of the natural environment. Although little known among the mainstream Mexican population, many agronomy experts point to it as a successful case of communally practiced sustainable agroforestry.
The kuojtakiloyan is a jungle-landscaped polyculture in which avocados, sweet potatoes, cinnamon, black cherries, chalahuits, citrus fruits, gourds, macadamia, mangoes, bananas and sapotes are grown. In addition, a wide variety of wild edible mushrooms and herbs (quelites) are harvested. The jonote is planted because its fiber is useful in basketry, as is fast-growing bamboo, which is used to build cabins and other structures. Alongside the kuojtakiloyan, shade coffee is grown (café bajo sombra in Spanish; kafentaj in Masehual); shade is essential for high-quality coffee. The local population has favored the proliferation of the stingless bee (pisilnekemej) by including the plants it pollinates, obtaining honey, pollen, wax and propolis from the bees.
Shade crops
With shade applications, crops are purposely raised under tree canopies within the shady environment. The understory crops are shade tolerant or the overstory trees have fairly open canopies. A conspicuous example is shade-grown coffee. This practice reduces weeding costs and improves coffee quality and taste.
Crop-over-tree systems
Crop-over-tree systems employ woody perennials in the role of a cover crop. For this, small shrubs or trees pruned to near ground level are utilized. The purpose is to increase in-soil nutrients and/or to reduce soil erosion.
Intercropping and alley cropping
With alley cropping, crop strips alternate with rows of closely spaced tree or hedge species. Normally, the trees are pruned before planting the crop. The cut leafy material - for example, from Alchornea cordifolia and Acioa barteri - is spread over the crop area to provide nutrients. In addition to nutrients, the hedges serve as windbreaks and reduce erosion.
In tropical areas of North and South America, various species of Inga such as I. edulis and I. oerstediana have been used for alley cropping.
Intercropping is advantageous in Africa, particularly for improving maize yields in the sub-Saharan region. It relies on the nitrogen-fixing tree species Sesbania sesban, Tephrosia vogelii, Gliricidia sepium and Faidherbia albida. In one example, a ten-year experiment in Malawi showed that maize yields on land planted with the fertilizer tree Gliricidia (G. sepium), with no mineral fertilizer applied, averaged markedly higher than in plots without fertilizer trees or mineral fertilizers.
Weed control is inherent to alley cropping, by providing mulch and shade.
Syntropic systems
Syntropic farming, syntropic agriculture or syntropic agroforestry is an organic, permaculture-oriented agroforestry system developed by Ernst Götsch in Brazil. It is sometimes referred to as a successional agroforestry system (SAFS), a term that can also denote a broader concept originating in Latin America. The system focuses on replicating the natural accumulation of nutrients in ecosystems and the course of secondary succession, in order to create productive forest ecosystems that yield food, ecosystem services and other forest products.
The system relies heavily on several processes:
Dense planting mixing perennial and annual crops
Rapid cutting and composting of fast growing pioneer species, to accumulate nutrients and biomass
Creating greater water retention on the land by improving water penetration into the soil and plant water cycling
The systems were first developed in tropical Brazil, but many similar systems have been tested in temperate environments as soil and ecosystem restoration tactics.
The framework for syntropic agroforestry is advocated by Agenda Gotsch, an organization established to promote the system.
Syntropic systems have a number of documented benefits, including increased soil water penetration, increased productivity on marginal land, and moderation of soil temperature.
In Burma
Taungya is a system from Burma. In the initial stages of an orchard or tree plantation, trees are small and widely spaced. The free space between the newly planted trees accommodates a seasonal crop. Instead of costly weeding, the underutilized area provides an additional output and income. More complex taungyas use between-tree space for multiple crops. The crops become more shade tolerant as the tree canopies grow and the amount of sunlight reaching the ground declines. Thinning can maintain sunlight levels.
In India
Itteri agroforestry systems have been used in Tamil Nadu since time immemorial. They involve the deliberate management of multipurpose trees and shrubs grown in intimate association with herbaceous species. They are often found along village and farm roads, small gullies, and field boundaries.
Bamboo-based agroforestry systems (Dendrocalamus strictus + sesame–chickpea) have been studied for enhancing productivity in semi-arid tropics of central India.
In Africa
A project to mitigate climate change through agriculture was launched in 2019 by the Global EverGreening Alliance. The target is to sequester carbon from the atmosphere; by 2050, the restored land should sequester 20 billion tons of carbon annually.
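Sequestration targets stated as carbon mass are often restated in CO2-equivalent terms using the standard molar-mass ratio 44/12. A minimal sketch, assuming the 20-billion-ton figure above refers to metric tonnes of elemental carbon:

```python
# Annual sequestration target stated above (assumed metric tonnes of carbon)
carbon_tonnes = 20e9

# Converting carbon mass to CO2 mass uses the molar-mass ratio 44/12:
# one CO2 molecule weighs about 44 u, of which the carbon atom is 12 u.
co2_tonnes = carbon_tonnes * 44 / 12
print(f"{co2_tonnes / 1e9:.1f} billion tonnes of CO2 per year")
# prints: 73.3 billion tonnes of CO2 per year
```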
Shamba (Swahili for 'plantation') is an agroforestry system practiced in East Africa, particularly in Kenya. Under this system, various crops are combined: bananas, beans, yams and corn, to which are added timber resources, beekeeping, medicinal herbs, mushrooms, forest fruits, fodder for livestock, etc.
In Hawai'i
Native Hawaiians formerly practiced agroforestry adapted to the islands' tropical landscape. Their ability to do this influenced the region's carrying capacity, social conflict, cooperation, and political complexity. More recently, following scientific study of lo'i systems, attempts have been made to reintroduce dryland agroforestry on Hawai'i Island and Maui, fostering interdisciplinary collaboration between political leaders, landowners, and scientists.
Temperate
Although originally a concept in tropical agronomy, agroforestry's multiple benefits, for instance in nutrient cycles and potential for mitigating droughts, have led to its adoption in the USA and Europe.
The United States Department of Agriculture distinguishes five applications of agroforestry for temperate climates, namely alley cropping, forest farming, riparian forest buffers, silvopasture, and windbreaks.
Alley cropping
Alley cropping can also be used in temperate climates. Strip cropping is similar to alley cropping in that trees alternate with crops. The difference is that, with alley cropping, the trees are in single rows. With strip cropping, the trees or shrubs are planted in wide strips. The purpose can be, as with alley cropping, to provide nutrients, in leaf form, to the crop. With strip cropping, the trees can have a purely productive role, providing fruits, nuts, etc. while, at the same time, protecting nearby crops from soil erosion and harmful winds.
Inga alley cropping
Inga alley cropping is the planting of agricultural crops between rows of Inga trees. It has been promoted by Mike Hands.
Using the Inga tree for alley cropping has been proposed as an alternative to the far more ecologically destructive practice of slash-and-burn cultivation. The technique has been found to increase yields, and it is sustainable because the same plot can be cultivated repeatedly, eliminating the need to burn rainforest to obtain fertile plots.
Inga tree
Inga trees are native to many parts of Central and South America. Inga grows well on the acid soils of tropical rainforest and former rainforest. The trees are leguminous and fix nitrogen into a form usable by plants. Arbuscular mycorrhizae growing within their roots were found to take up spare phosphorus, allowing it to be recycled into the soil.
Other benefits of Inga include the fact that it is fast growing with thick leaves which, when left on the ground after pruning, form a thick cover that protects both soil and roots from the sun and heavy rain. It branches out to form a thick canopy so as to cut off light from the weeds below and withstands careful pruning year after year.
History
The technique was first developed and trialled by tropical ecologist Mike Hands in Costa Rica in the late 1980s and early '90s. Research funding from the EEC allowed him to experiment with species of Inga. Although alley cropping had been widely researched, it was thought that the tough pinnate leaves of the Inga tree would not decompose quickly enough.
The Inga is used as hedges and pruned when large enough to provide a mulch in which bean and corn seeds are planted. This results in both improving crop yields and the retention of soil fertility on the plot that is being farmed. Hands had seen the devastating consequences that are caused by slash and burn agriculture while working in Honduras; this new technique seemed to offer the solution to the environmental and economic problems faced by so many slash and burn farmers.
Although the technique has the potential to save rainforest and lift many out of poverty, Inga alley cropping has not yet reached its full potential. The charity Inga Foundation, headed by Mike Hands, has been consulted about potential projects in Haiti (which is almost completely deforested) and the Congo, and projects in Peru and Madagascar have also been discussed. Another charity, Rainforest Saver, formed to promote Inga alley cropping, started a project in 2016 in Ecuador, in the part of the Amazon where Inga edulis originates; by the end of 2018 more than 60 farms in the area had Inga plots. Rainforest Saver also started a project in Cameroon in 2009, where in late 2018 there were around 100 farms with Inga plots, mainly in western Cameroon.
Method
For Inga alley cropping, the trees are planted close together in rows (hedges), with a gap, the alley, of about 4 m between the rows. An initial application of rock phosphate has kept the system going for many years.
When the trees have grown, usually in about two years, the canopies close over the alley and cut off the light and so smother the weeds.
The trees are then carefully pruned. The larger branches are used for firewood. The smaller branches and leaves are left on the ground in the alleys. These rot down into a good mulch (compost). If any weeds haven't been killed off by lack of light the mulch smothers them.
The farmer then pokes holes into the mulch and plants their crops into the holes.
The crops grow, fed by the mulch. The crops feed on the lower layers while the latest prunings form a protective layer over the soil and roots, shielding them from both the hot sun and heavy rain. This makes it possible for the roots of both the crops and the trees to stay to a considerable extent in the top layer of soil and the mulch, thus benefiting from the food in the mulch, and escaping soil pests and toxic minerals lower down. Pruning the Inga also makes its roots die back, thus reducing competition with the crops.
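As a rough planting-budget illustration of the geometry above: the 4 m alley width comes from the description, but the 0.5 m in-row spacing below is an assumed figure for illustration only, not from the source.

```python
# Hedgerow geometry: alleys of about 4 m between rows (stated above).
# The 0.5 m in-row spacing is an illustrative assumption, not a sourced value.
alley_width_m = 4.0
in_row_spacing_m = 0.5
side_m = 100.0  # a 1 ha square plot is 100 m x 100 m

rows = int(side_m / alley_width_m)              # hedgerows across the plot
trees_per_row = int(side_m / in_row_spacing_m)  # seedlings along each row
print(f"{rows} rows x {trees_per_row} trees = {rows * trees_per_row} seedlings/ha")
# prints: 25 rows x 200 trees = 5000 seedlings/ha
```

Under these assumptions a farmer would budget on the order of a few thousand seedlings per hectare; the actual number scales inversely with both spacings.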
Forest farming
In forest farming, high-value crops are grown under a suitably managed tree canopy. This is sometimes called multi-story cropping or, in tropical villages, home gardening. It can be practised at varying levels of intensity but always involves some degree of management, which distinguishes it from simply harvesting wild plants from the forest.
Riparian forest buffers
Riparian buffers are strips of permanent vegetation located along or near active watercourses or in ditches where water runoff concentrates. The purpose is to keep nutrients and soil from contaminating the water.
Silvopasture
Trees can benefit fauna in a silvopasture system, where cattle, goats, or sheep browse on grasses grown under trees.
In hot climates, the animals are less stressed and put on weight faster when grazing in a cooler, shaded environment. The leaves of trees or shrubs can also serve as fodder. Similar systems support other fauna. Deer and pigs gain when living and feeding in a forest ecosystem, especially when the tree forage nourishes them. In aquaforestry, trees shade fish ponds. In many cases, the fish eat the leaves or fruit from the trees.
The dehesa of Spain and the montado of Portugal are examples of silvopastoral systems in which pigs and cattle are kept extensively.
Windbreaks
Windbreaks reduce wind velocity over and around crops. This increases yields through reduced drying of the crop and/or by preventing the crop from toppling in strong wind gusts.
In Switzerland
Since the 1950s, four-fifths of Swiss Hochstammobstgärten (traditional orchards with tall trees) have disappeared. An agroforestry scheme combining trees with annual crops was tested in Switzerland, using walnut (Juglans regia) and cherry (Prunus avium). Forty to seventy trees per hectare were recommended; yields decreased somewhat with increasing tree height and foliage, but the total yield per unit area was shown to be up to 30 percent higher than in monocultural systems.
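A claim of "up to 30 percent higher total yield" is conventionally expressed through the land equivalent ratio (LER), a standard intercropping metric: the sum, over all component crops, of each crop's intercropped yield relative to its sole-crop yield. A hypothetical sketch (the component yields below are illustrative, not figures from the Swiss trials):

```python
def land_equivalent_ratio(intercrop_yields, monoculture_yields):
    """LER = sum over crops of (intercrop yield / sole-crop yield).
    LER > 1 means the mixture needs less land than separate monocultures."""
    return sum(inter / mono
               for inter, mono in zip(intercrop_yields, monoculture_yields))

# Illustrative numbers: a tree crop and an annual crop, each normalized
# so that its monoculture yield is 1.0
ler = land_equivalent_ratio(intercrop_yields=[0.6, 0.7],
                            monoculture_yields=[1.0, 1.0])
print(f"LER = {ler:.1f}")
# prints: LER = 1.3  (i.e., ~30% more land needed to match it with monocultures)
```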
Other Swiss trials include: growing Populus tremula for biofuel at 52 trees per hectare, with grazing pasture alternated every two to three years with maize or sorghum; growing wheat and strawberries, with fallow periods, between rows of modern short-pruned, grafted apple cultivars ('Boskoop' and 'Spartan'); and growing modern sour cherry cultivars ('Morina', 'Coraline' and 'Achat') and apples with bushes (dogrose, Cornus mas, Hippophae rhamnoides) in the tree rows, intercropped with various vegetables.
Forest gardening
Forest gardening is a low-maintenance, sustainable, plant-based food production and agroforestry system based on woodland ecosystems, incorporating fruit and nut trees, shrubs, herbs, vines and perennial vegetables which have yields directly useful to humans. Making use of companion planting, these can be intermixed to grow in a succession of layers to build a woodland habitat.
Forest gardening is a prehistoric method of securing food in tropical areas. In the 1980s, Robert Hart coined the term "forest gardening" after adapting the principles and applying them to temperate climates.
History
Since prehistoric times, hunter-gatherers might have influenced forests, for instance in Europe by Mesolithic people bringing favored plants like hazel with them. Forest gardens are probably the world's oldest form of land use and most resilient agroecosystem. First Nation villages in Alaska with forest gardens filled with nuts, stone fruit, berries, and herbs, were noted by an archeologist from the Smithsonian in the 1930s.
Forest gardens are still common in the tropics, known variously as Kandyan forest gardens in Sri Lanka, family orchards in Mexico, agroforests, or shrub gardens. They have been shown to be a significant source of income and food security for local populations.
Robert Hart adapted forest gardening for the United Kingdom's temperate climate during the 1980s.
In temperate climates
Hart began farming at Wenlock Edge in Shropshire to provide a healthy and therapeutic environment for himself and his brother Lacon. Starting as relatively conventional smallholders, Hart soon discovered that maintaining large annual vegetable beds, rearing livestock and taking care of an orchard were tasks beyond their strength. However, a small bed of perennial vegetables and herbs he planted was looking after itself with little intervention.
Following Hart's adoption of a raw vegan diet for health and personal reasons, he replaced his farm animals with plants. The three main products from a forest garden are fruit, nuts and green leafy vegetables. He created a model forest garden from a 0.12 acre (500 m2) orchard on his farm and intended naming his gardening method ecological horticulture or ecocultivation. Hart later dropped these terms once he became aware that agroforestry and forest gardens were already being used to describe similar systems in other parts of the world. He was inspired by the forest farming methods of Toyohiko Kagawa and James Sholto Douglas, and the productivity of the Keralan home gardens; as Hart explained, "From the agroforestry point of view, perhaps the world's most advanced country is the Indian state of Kerala, which boasts no fewer than three and a half million forest gardens ... As an example of the extraordinary intensity of cultivation of some forest gardens, one plot of only was found by a study group to have twenty-three young coconut palms, twelve cloves, fifty-six bananas, and forty-nine pineapples, with thirty pepper vines trained up its trees. In addition, the smallholder grew fodder for his house-cow."
Seven-layer system
Hart organized the forest garden into seven layers: a canopy of mature fruit trees; a low-tree layer of smaller nut and fruit trees on dwarfing rootstocks; a shrub layer of fruit bushes; a herbaceous layer of perennial vegetables and herbs; a ground-cover layer of edible plants that spread horizontally; a rhizosphere of plants grown for their roots and tubers; and a vertical layer of vines and climbers.
Further development
The Agroforestry Research Trust, managed by Martin Crawford, runs experimental forest gardening projects on a number of plots in Devon, United Kingdom. Crawford describes a forest garden as a low-maintenance way of sustainably producing food and other household products.
Ken Fern had the idea that for a successful temperate forest garden a wider range of edible shade tolerant plants would need to be used. To this end, Fern created the organisation Plants for a Future which compiled a plant database suitable for such a system. Fern used the term woodland gardening, rather than forest gardening, in his book Plants for a Future.
Kathleen Jannaway, the cofounder of Movement for Compassionate Living (MCL) with her husband Jack, wrote a book outlining a sustainable vegan future called Abundant Living in the Coming Age of the Tree in 1991. The MCL promotes forest gardening and other types of vegan organic gardening. In 2009 it provided a grant of £1,000 to the Bangor Forest Garden project in Gwynedd, North West Wales.
Permaculture
Bill Mollison, who coined the term permaculture, visited Hart at his forest garden in October 1990. Hart's seven-layer system has since been adopted as a common permaculture design element.
Numerous permaculturalists are proponents of forest gardens, or food forests, such as Graham Bell, Patrick Whitefield, Dave Jacke, Eric Toensmeier and Geoff Lawton. Bell started building his forest garden in 1991 and wrote the book The Permaculture Garden in 1995, Whitefield wrote the book How to Make a Forest Garden in 2002, Jacke and Toensmeier co-authored the two volume book set Edible Forest Gardens in 2005, and Lawton presented the film Establishing a Food Forest in 2008.
Geographical distribution
Forest gardens, or home gardens, are common in the tropics, using intercropping to cultivate trees, crops, and livestock on the same land. In Kerala in south India as well as in northeastern India, the home garden is the most common form of land use and is also found in Indonesia. One example combines coconut, black pepper, cocoa and pineapple. These gardens exemplify polyculture, and conserve much crop genetic diversity and heirloom plants that are not found in monocultures. Forest gardens have been loosely compared to the religious concept of the Garden of Eden.
Americas
The Amazon rainforest, rather than being a pristine wilderness, has been shaped by humans for at least 11,000 years through practices such as forest gardening and terra preta. Since the 1970s, numerous geoglyphs have been discovered on deforested land in the Amazon rainforest, furthering the evidence of pre-Columbian civilizations.
On the Yucatán Peninsula, much of the Maya food supply was grown in "orchard gardens", known as pet kot. The system takes its name from the low wall of stones (pet meaning 'circular' and kot, 'wall of loose stones') that characteristically surrounds the gardens.
The environmental historian William Cronon argued in his 1983 book Changes in the Land that indigenous North Americans used controlled burning to form ideal habitat for wild game. The natural environment of New England was sculpted into a mosaic of habitats. When indigenous Americans hunted, they were "harvesting a foodstuff which they had consciously been instrumental in creating". Most English settlers, however, assumed that the wealth of food provided by the forest was a result of natural forces, and that indigenous people lived off "the unplanted bounties of nature." Animal populations declined after settlement, while fields of strawberries and raspberries found by the earliest settlers became overgrown and disappeared for want of maintenance.
Plants
Some plants, such as wild yam, work as both a root plant and as a vine. Ground covers are low-growing edible forest garden plants that help keep weeds in control and provide a way to utilize areas that would otherwise be unused.
Shade-tolerant herbs used in forest gardens include:
Cardamom
Ginger
Chervil
Bergamot
Sweet woodruff
Sweet cicely
Projects
El Pilar on the Belize–Guatemala border features a forest garden to demonstrate traditional Maya agricultural practices. A further one acre model forest garden, called Känan K'aax (meaning 'well-tended garden' in Mayan), is funded by the National Geographic Society and developed at Santa Familia Primary School in Cayo.
In the United States, the largest known food forest on public land is believed to be the seven acre Beacon Food Forest in Seattle, Washington. Other forest garden projects include those at the central Rocky Mountain Permaculture Institute in Basalt, Colorado, and Montview Neighborhood farm in Northampton, Massachusetts. The Boston Food Forest Coalition promotes local forest gardens.
In Canada Richard Walker has been developing and maintaining food forests in British Columbia for over 30 years. He developed a three-acre food forest that at maturity provided raw materials for a plant nursery and herbal business as well as food for his family. The Living Centre has developed various forest garden projects in Ontario.
In the United Kingdom, other than those run by the Agroforestry Research Trust (ART), projects include the Bangor Forest Garden in Gwynedd, northwest Wales. Martin Crawford from ART administers the Forest Garden Network, an informal network of people and organisations who are cultivating forest gardens.
Since 2014, Gisela Mir and Mark Biffen have been developing a small-scale edible forest garden in Cardedeu near Barcelona, Spain, for experimentation and demonstration.
Forest farming
Forest farming is the cultivation of high-value specialty crops under a forest canopy that is intentionally modified or maintained to provide shade levels and habitat that favor growth and enhance production levels. Forest farming encompasses a range of cultivated systems from introducing plants into the understory of a timber stand to modifying forest stands to enhance the marketability and sustainable production of existing plants.
Forest farming is a type of agroforestry practice characterized by the "four I's": intentional, integrated, intensive and interactive. Agroforestry is a land management system that combines trees with crops or livestock, or both, on the same piece of land. It focuses on increasing benefits to the landowner as well as maintaining forest integrity and environmental health. The practice involves cultivating non-timber forest products or niche crops, some of which, such as ginseng or shiitake mushrooms, can have high market value.
Non-timber forest products (NTFPs) are plants, parts of plants, fungi, and other biological materials harvested from within and on the edges of natural, manipulated, or disturbed forests. Examples of crops are ginseng, shiitake mushrooms, decorative ferns, and pine straw. Products typically fit into the following categories: edible, medicinal and dietary supplements, floral or decorative, or specialty wood-based products.
History
Forest farming, though not always by that name, is practiced around the world. For centuries, humans have relied on fruits, nuts, seeds, parts of foliage and pods from trees and shrubs in the forests to feed themselves and their livestock. Over time, certain species have been selected for cultivation near homes or livestock to provide food or medicine. For example, in the southern United States, mulberry trees are used as a feedstock for pigs and often cultivated near pig quarters.
In 1929, J. Russell Smith, Emeritus Professor of Economic Geography at Columbia University, published Tree Crops – A Permanent Agriculture, which argued that crop-yielding trees could provide useful substitutes for cereals in animal feeding programs, as well as conserve environmental health. Toyohiko Kagawa read and was heavily influenced by Smith's publication and began experimental cultivation under trees in Japan during the 1930s. Through forest farming, or three-dimensional forestry, Kagawa addressed problems of soil erosion by persuading many of Japan's upland farmers to plant fodder trees to conserve soil, supply food and feed animals. He combined extensive plantings of walnut trees, harvested the nuts and fed them to pigs, then sold the pigs as a source of income. When the walnut trees matured, they were sold for timber and more trees were planted, creating a continuous cycle of economic cropping that provided both short-term and long-term income to the small landowner. The success of these trials prompted similar research in other countries. World War II disrupted communication and slowed advances in forest farming; research resumed in the mid-1950s in places such as southern Africa. Kagawa was also an inspiration to Robert Hart, who pioneered forest gardening in temperate climates in the 1960s in Shropshire, England.
In earlier years, livestock were often considered part of the forest farming system; now they are typically excluded, and agroforestry systems that integrate trees, forages and livestock are referred to as silvopastures. Because forest farming combines the ecological stability of natural forests with productive agricultural systems, it is considered to have great potential for regenerating soils, restoring groundwater supplies, controlling floods and droughts, and cultivating marginal lands.
Principles
Forest farming principles constitute an ecological approach to forest management. Forest resources are judiciously used while biodiversity and wildlife habitat are conserved. Forest farms have the potential to restore ecological balance to fragmented second growth forests through intentional manipulation to create the desired forest ecosystem.
In some instances, the intentional introduction of species for botanicals, medicinals, food or decorative products is accomplished using existing forests. The tree cover, soil type, water supply, land form and other site characteristics determine what species will thrive. Developing an understanding of species/site relationships as well as understanding the site limitations is necessary to utilize these resources for production needs, while conserving adequate resources for the long-term health of the forest.
Apart from the environmental benefits, forest farming can increase the economic value of forest property and provide short- and long-term benefits to the landowner. Forest farming provides economic return from intact forest ecosystems, but timber sales can remain part of the long-term management strategy.
Methods
Forest farming methods may include: Intensive, yet careful thinning of overstocked, suppressed tree stands; multiple integrated entries to accomplish thinning so that systemic shock is minimized; and interactive management to maintain a cross-section of healthy trees and shrubs of all ages and species. Physical disturbance to the surrounding area should be minimized. The following are forest farming techniques described in the Training Manual produced by the Center for Agroforestry at the University of Missouri.
Levels of management (from most intense to least intense):
1. Forest gardening is the most intensive forest farming method. In addition to thinning the overstory, it involves clearing the understory of undesirable vegetation and other practices closely related to agronomy (tillage, fertilization, weeding, disease and insect control, and wildlife management). Due to the input levels, this method often produces lower-valued products than other methods. Forest gardens take advantage of the vertical levels of light availability and space under the forest canopy, so more than one crop can be grown at once if desired.
2. Wild-simulated production seeks to maintain a natural growing environment while enriching local NTFP populations to create an abundant, renewable supply of products. Minimal disturbance and natural growing conditions ensure products will be similar in appearance and quality to those harvested from the wild. Rather than till, practitioners often rake leaves to expose soil, sow seed directly onto the ground, and then cover it with leaves again. Since this method produces NTFPs that closely resemble wild plants, they often command a higher price than NTFPs produced using the forest gardening method.
3. Forest tending involves adjusting tree crown density to manipulate light levels that favor natural reproduction of desirable NTFPs. This low intensity management approach does not involve supplemental planting to increase populations of desired NTFPs.
4. Wildcrafting is the harvesting of naturally growing NTFPs. It is not considered a forest farming practice since there is no human involvement in the plant’s establishment and maintenance. However, wildcrafters often take steps to protect NTFPs with future harvests in mind. It becomes agroforestry once forest thinnings, or other inputs, are applied to sustain or maintain plant populations that might otherwise succumb to successional changes in the forest. The most important difference between forest farming and wildcrafting is that forest farming intentionally produces NTFPs, whereas wildcrafting seeks out and gathers naturally growing NTFPs.
Production considerations
Forest farming can be a small business opportunity for landowners and requires careful planning, including a business and marketing plan. Learning how to market the NTFPs on the Internet is an option, but may entail higher shipping costs. Landowners should consider all options for selling their products, including farmers' markets or restaurants that focus on locally grown ingredients. The development phase should include a forest management plan that states the landowner’s objectives and a resource inventory. Start-up costs should be analyzed, as specific equipment may be necessary to harvest or process the product, whereas other crops require minimal initial investment. Local incentives for sustainable forest management, as well as regulations and policies, should be explored. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) regulates international trade of certain plant (American ginseng and goldenseal) and animal species. To be legally exported, regulated plants must be harvested and records kept according to CITES rules and restrictions. Many states also have harvesting regulations for certain native plants that are searchable online. Another good source of information is the Medicinal Plants at Risk 2008 report by the Center for Biological Diversity in the U.S.
Examples of crops
(from the National Agroforestry Center)
Medicinal herbs:
Ginseng (Panax quinquefolius)
Black Cohosh (Actaea racemosa)
Goldenseal (Hydrastis canadensis)
Bloodroot (Sanguinaria canadensis)
Pacific yew (Taxus brevifolia)
Mayapple (Podophyllum peltatum)
Saw palmetto (Serenoa repens)
American Pokeweed (Phytolacca americana)
Nuts:
Black walnut (Juglans nigra)
Hazelnut (Corylus avellana)
Shagbark hickory (Carya ovata)
Beechnut (Fagus sylvatica)
Fruit:
Pawpaw (Asimina triloba)
Currants (Ribes spp)
Elderberry (Sambucus spp)
Serviceberry (Amelanchier spp)
Blackberry (Rubus spp)
Huckleberry (Gaylussacia brachycera)
Other food crops:
Ramps (wild leeks) (Allium tricoccum)
Syrups (maple)
Honey
Mushrooms
Other edible roots
Other products: (mulch, decoratives, crafts, dyes)
Pine straw
Willow twigs
Vines
Beargrass (Xerophyllum tenax)
Ferns
Pine cones
Moss
Native ornamentals:
Rhododendron (Rhododendron catawbiense)
Highbush cranberry (Viburnum trilobum)
Flowering dogwood (Cornus florida)
Farmer-managed natural regeneration
Farmer-managed natural regeneration (FMNR) is a low-cost, sustainable land restoration technique used to combat poverty and hunger amongst poor subsistence farmers in developing countries by increasing food and timber production, and resilience to climate extremes. It involves the systematic regeneration and management of trees and shrubs from tree stumps, roots and seeds. FMNR was developed by the Australian agricultural economist Tony Rinaudo in the 1980s in West Africa. The background and development are described in Rinaudo's book The Forest Underground.
FMNR is especially applicable, but not restricted to, the dryland tropics. As well as returning degraded croplands and grazing lands to productivity, it can be used to restore degraded forests, thereby reversing biodiversity loss and reducing vulnerability to climate change. FMNR can also play an important role in maintaining not-yet-degraded landscapes in a productive state, especially when combined with other sustainable land management practices such as conservation agriculture on cropland and holistic management on range lands.
FMNR adapts centuries-old methods of woodland management, called coppicing and pollarding, to produce continuous tree-growth for fuel, building materials, food and fodder without the need for frequent and costly replanting. On farmland, selected trees are trimmed and pruned to maximise growth while promoting optimal growing conditions for annual crops (such as access to water and sunlight). When FMNR trees are integrated into crops and grazing pastures there is an increase in crop yields, soil fertility and organic matter, soil moisture and leaf fodder. There is also a decrease in wind and heat damage, and soil erosion.
FMNR complements the evergreen agriculture, conservation agriculture and agroforestry movements. It is considered a good entry point for resource-poor and risk-averse farmers to adopt a low-cost and low-risk technique. This in turn has acted as a stepping stone to greater agricultural intensification as farmers become more receptive to new ideas.
Background
Throughout the developing world, immense tracts of farmland, grazing lands and forests have become degraded to the point they are no longer productive. Deforestation continues at a rapid pace. In Africa's drier regions, 74 percent of rangelands and 61 percent of rain-fed croplands are damaged by moderate to very severe desertification. In some African countries deforestation rates exceed planting rates by 300:1.
Degraded land has an extremely detrimental effect on the lives of subsistence farmers who depend on it for their food and livelihoods. Subsistence farmers often make up to 70–80 percent of the population in these regions and they regularly suffer from hunger, malnutrition and even famine as a consequence.
In the Sahel region of Africa, a band of savanna which runs across the continent immediately south of the Sahara Desert, large tracts of once-productive farmland are turning to desert. In tropical regions across the world, where rich soils and good rainfall would normally assure bountiful harvests and fat livestock, some environments have become so degraded they are no longer productive.
Severe famines across the African Sahel in the 1970s and 1980s led to a global response, and stopping desertification became a top priority. Conventional methods of raising exotic and indigenous tree species in nurseries were used. Despite investing millions of dollars and thousands of hours of labour, there was little overall impact. Conventional approaches to reforestation in such harsh environments faced insurmountable problems and were costly and labour-intensive. Once planted out, drought, sand storms, pests, competition from weeds and destruction by people and animals negated efforts. Low levels of community ownership were another inhibiting factor.
Existing indigenous vegetation was generally dismissed as 'useless bush', and it was often cleared to make way for exotic species. Exotics were planted in fields containing living and sprouting stumps of indigenous vegetation, the presence of which was barely acknowledged, let alone seen as important.
This was an enormous oversight. In fact, these living tree stumps are so numerous they constitute a vast 'underground forest' just waiting for some care to grow and provide multiple benefits at little or no cost. Each stump can produce between 10 and 30 stems. During the process of traditional land preparation, farmers saw the stems as weeds and slashed and burnt them before sowing their food crops. The net result was a barren landscape for much of the year with few mature trees remaining. To the casual observer, the land was turning to desert. Most concluded that there were no trees present and that the only way to reverse the problem was through tree planting.
Meanwhile, established indigenous trees continued to disappear at an alarming rate. In Niger, from the 1930s until 1993, forestry laws took tree ownership and responsibility for the care of trees out of the hands of the people. Reforestation through conventional tree planting seemed to be the only way to address desertification at the time.
History
In the early 1980s, in the Maradi region of the Republic of Niger, the missionary organisation Serving in Mission (SIM) was unsuccessfully attempting to reforest the surrounding districts using conventional means. In 1983, SIM began experimenting with and promoting FMNR amongst about 10 farmers. During the famine of 1984, a food-for-work program was introduced that saw some 70,000 people exposed to FMNR and its practice on around 12,500 hectares of farmland. From 1985 to 1999, FMNR continued to be promoted locally and nationally as exchange visits and training days were organised for various NGOs, government foresters, Peace Corps volunteers, and farmer and civil society groups. Additionally, SIM project staff and farmers visited numerous locations across Niger to provide training.
By 2004 it was ascertained that FMNR was being practised on over five million hectares or 50 percent of Niger's farmland – an average reforestation rate of 250,000 hectares per year over a 20-year period. This transformation prompted a Senior Fellow of the World Resources Institute, Chris Reij, to comment that "this is probably the largest positive environmental transformation in the Sahel and perhaps all of Africa".
In 2004, World Vision Australia and World Vision Ethiopia initiated a forestry-based carbon sequestration project as a potential means to stimulate community development while engaging in environmental restoration. A partnership with the World Bank, the Humbo Community-based Natural Regeneration Project involved the regeneration of 2,728 hectares of degraded native forests. This brought social, economic and ecological benefits to the participating communities. Within two years, communities were collecting wild fruits, firewood, and fodder, and reported that wildlife had begun to return and erosion and flooding had been reduced. In addition, the communities are now receiving payments for the sale of carbon credits through the Clean Development Mechanism (CDM) of the Kyoto Protocol.
Following the success of the Humbo project, FMNR spread to the Tigray region of northern Ethiopia where 20,000 hectares have been set aside for regeneration, including 10 hectare FMNR model sites for research and demonstration in each of 34 sub-districts. The Government of Ethiopia has committed to reforest 15 million hectares of degraded land using FMNR as part of a climate change and renewable energy plan to become carbon neutral by 2025.
In Talensi, northern Ghana, FMNR is being practiced on 2,000–3,000 hectares and new projects are introducing FMNR into three new districts. In the Kaffrine and Diourbel regions of Senegal, FMNR has spread across 50,000 hectares in four years. World Vision is also promoting FMNR in Indonesia, Myanmar and East Timor. There are also examples of both independently promoted and spontaneous FMNR movements occurring. In Burkina Faso, for example, an increasing part of the country is being transformed into agro-forestry parkland. And in Mali, an ageing agro-forestry parkland of about six million hectares is showing signs of regeneration.
Key principles
FMNR depends on the existence of living tree stumps or roots in crop fields, grazing pastures, woodlands or forests. Each season bushy growth will sprout from the stumps/roots often appearing like small shrubs. Continuous grazing by livestock, regular burning and/or regular harvesting for fuel wood results in these 'shrubs' never attaining tree stature. On farmland, standard practice has been for farmers to slash this regrowth in preparation for planting crops, but with a little attention this growth can be turned into a valuable resource without jeopardising crop yields.
For each stump, a decision is made as to how many stems will be chosen to grow. The tallest and straightest stems are selected and the remaining stems culled. Best results are obtained when the farmer returns regularly to prune any unwanted new stems and side branches as they appear. Farmers can then grow other crops between and around the trees. When farmers want wood they can cut the stem(s) they want and leave the rest to continue growing. The remaining stems will increase in size and value each year, and will continue to protect the environment. Each time a stem is harvested, a younger stem is selected to replace it.
Various naturally occurring tree species can be used which may also provide berries, fruits and nuts or have medicinal qualities. In Niger, commonly used species include: Strychnos spinosa, Balanites aegyptiaca, Boscia senegalensis, Ziziphus spp., Annona senegalensis, Poupartia birrea and Faidherbia albida. However, the most important determinants are whatever species are locally available, their ability to re-sprout after cutting, and the value local people place on those species.
Faidherbia albida, also known as the 'fertiliser tree', is popular for intercropping across the Sahel as it fixes nitrogen into the soil, provides fodder for livestock, and shade for crops and livestock. By shedding its leaves in the wet season, Faidherbia provides beneficial light shade to crops when high temperatures would otherwise damage crops or retard growth. Leaf fall contributes useful nutrients and organic matter to the soil.
The practice of FMNR is not confined to croplands. It is being practised on grazing land and in degraded communal forests as well. When there are no living stumps, seeds of naturally occurring species are used. In reality, there is no fixed way of practising FMNR and farmers are free to choose which species they will leave, the density of trees they prefer, and the timing and method of pruning.
In practice
Benefits
FMNR can restore degraded farmlands, pastures and forests by increasing the quantity and value of woody vegetation, by increasing biodiversity and by improving soil structure and fertility through leaf litter and nutrient cycling. The reforestation also retards wind and water erosion; it creates windbreaks which decrease soil moisture evaporation, and protects crops and livestock against searing winds and temperatures. Often, dried up springs reappear and the water table rises towards historic levels; insect eating predators including insects, spiders and birds return, helping to keep crop pests in check; the trees can be a source of edible berries and nuts; and over time the biodiversity of plant and animal life is increased. FMNR can be used to combat deforestation and desertification and can also be an important tool in maintaining the integrity and productivity of land that is not yet degraded.
Trials, long-running programs and anecdotal data indicate that FMNR can at least double crop yields on low fertility soils. In the Sahel, high numbers of livestock and an eight month dry season can mean that pastures are completely depleted before the rains commence. However, with the presence of trees, grazing animals can make it through the dry season by feeding on tree leaves and seed pods of some species, at a time when no other fodder is available. In northeast Ghana, more grass became available with the introduction of FMNR because communities worked together to prevent bush fires from destroying their trees.
Well designed and executed FMNR projects can act as catalysts to empower communities as they negotiate land ownership or user rights for the trees in their care. This assists with self-organisation, and with the development of new agriculture-based micro-enterprises (e.g., selling firewood, timber and handcrafts made from timber or woven grasses).
Conventional approaches to reversing desertification, such as funding tree planting, rarely spread beyond the project boundary once external funding is withdrawn. By comparison, FMNR is cheap, rapid, locally led and implemented. It uses local skills and resources – the poorest farmers can learn by observation and teach their neighbours. Given an enabling environment, or at least the absence of a 'disabling' environment, FMNR can be done at scale and spread well beyond the original target area without ongoing government or NGO intervention.
World Vision evaluations of FMNR conducted in Senegal and Ghana in 2011 and 2012 found that households practising FMNR were less vulnerable to extreme weather shocks such as drought and damaging rain and wind storms.
FMNR's benefits fit the sustainable development model, spanning economic, social and environmental dimensions.
Key success factors and constraints
While there are numerous accounts of the uptake and spread of FMNR independent of aid and development agencies, the following factors have been found to be beneficial for its introduction and spread:
Awareness creation of FMNR's potential.
Capacity building through workshops and exchange visits.
Awareness of the devastating effects of deforestation. The adoption of FMNR is more likely when communities acknowledge their situation and the need to take action. This perception of need can be supported by education.
An FMNR champion/facilitator from within the community who encourages, challenges and trains peers. This is critical during the first three to five years, and continues to be important for up to 10 years. Regular site visits also ensure early detection and remedial action on resistance and threats to FMNR through deliberate damage to trees and theft.
The buy-in of all stakeholders including their agreement on any by-laws created for FMNR and the consequences for infringements. Stakeholders include FMNR practitioners, local, regional and national government departments of agriculture and forestry, men, women, youth, marginalised groups (including nomadic herders), cultivators and commercial interests.
Stakeholder buy-in is also important to create a critical mass of FMNR adopters in order to change social attitudes from a position of apathy or active participation in deforestation to one of proactive sustainable tree management through FMNR.
Government support through the creation of favourable policies, positive reinforcement of actions facilitating the spread of FMNR, and disincentives for actions working against the spread of FMNR. FMNR practitioners need to be confident that they will benefit from their labours (either private or community ownership of trees, or legally binding user rights).
Reinforcement of existing organisational structures (farmers clubs, development groups, traditional leadership structures) or establishment of new structures which will provide a framework for communities to practise FMNR on a local, district or region-wide basis.
A communications strategy which includes education in schools, radio programs and engagement with religious and traditional leaders to become advocates.
Establishment of a legal, transparent and accessible market for FMNR wood and non-timber forest products, enabling practitioners to benefit financially from their activities.
Brown et al. suggest that the two main reasons why FMNR has spread so widely in Niger are attitudinal change by the community of what constitutes good land management practices, and farmers' ownership of trees. Farmers need the assurance that they will benefit from their labour. Giving farmers either outright ownership of the trees they protect, or tree-user rights, has made it possible for large-scale farmer-led reforestation to take place.
Current and future directions
Over nearly 30 years, FMNR has changed the farming landscape in some of the poorest countries in the world, including parts of Niger, Burkina Faso, Mali, and Senegal, providing subsistence farmers with the methods necessary to become more food secure and resilient against severe weather events.
The 2011–2012 food crisis in East Africa gave a stark reminder of the importance of addressing root causes of hunger. In the 2011 State of the World Report, Bunch concludes that four major factors – lack of sustainable fertile land, loss of traditional fallowing, cost of fertiliser and climate change – are coming together all at once in a sort of "perfect storm" that will almost surely result in an African famine of unprecedented proportions, probably within the next four to five years. It will most heavily affect the lowland, semi-arid to sub-humid areas of Africa (including the Sahel, parts of eastern Africa, plus a band from Malawi across to Angola and Namibia); and unless the world does something dramatic, 10 to 30 million people could die from famine between 2015 and 2020. Restoration of degraded land through FMNR is one way of addressing these major contributors to hunger.
In recent years FMNR has come to the attention of global development agencies and grassroots movements alike. The World Bank, World Resources Institute, World Agroforestry Center, USAID and the Permaculture movement are amongst those either actively promoting or advocating for the uptake of FMNR, and FMNR has received recognition from a number of quarters, including:
In 2010, FMNR won the Interaction 4 Best Practice and Innovation Initiative award in recognition of high technical standards and effectiveness in addressing the food security and livelihood needs of small producers in the areas of natural resource management and agro forestry.
In 2011, FMNR won the World Vision International Global Resilience Award for the most innovative initiative in the area of resilient development practice and natural environment and climate issues.
In 2012, World Vision Australia (WVA) was awarded the Arbor Day Award for Education Innovation.
In April 2012, World Vision Australia – in partnership with the World Agroforestry Center and World Vision East Africa – held an international conference in Nairobi called "Beating Famine" to analyse and plan how to improve food security for the world's poor through the use of FMNR and Evergreen Agriculture. The conference was attended by more than 200 participants, including world leaders in sustainable agriculture, five East African ministers of agriculture and the environment, ambassadors, and other government representatives from Africa, Europe, and Australia, and leaders from non-government and international organisations.
Two major outcomes of the conference were:
The establishment of a global FMNR network of key stakeholders to promote, encourage and initiate the scale-up of FMNR globally.
Country, regional and global level plans as a basis for inter-organisation collaboration for FMNR scale-up.
The conference acted as a catalyst for media coverage of FMNR in some of the world's leading outlets and a noticeable increase in momentum for an FMNR global movement. This heightened awareness of FMNR has created an opportunity for it to spread exponentially worldwide.
References
Sources
d'Arms, Deborha 2011. Jardin d'Or (Garden of Gold): A Treatise on Forest Gardening, Recreating Sustainable Gardens of Eden. Los Gatos, CA: Robertson Publishing.
Douglas, J. Sholto and Hart, Robert A. de J. 1985. Forest Farming. Intermediate Technology.
Fern, Ken 1997. Plants for a Future: Edible and Useful Plants for a Healthier World. Hampshire: Permanent Publications.
Jacke, Dave, and Toensmeier, Eric 2005. Edible Forest Gardens. Two volume set. Volume One: Ecological Vision and Theory for Temperate Climate Permaculture. Volume Two: Ecological Design and Practice for Temperate Climate Permaculture. White River Junction, VT: Chelsea Green.
Jannaway, Kathleen 1991. Abundant Living in the Coming Age of the Tree. Movement for Compassionate Living.
Smith, Joseph Russell 1988 (first published in 1929). Tree Crops: A Permanent Agriculture. Island Press.
Mir, Gisela and Biffen, Mark 2021. Bosques y jardines de alimentos. La Fertilidad de la Tierra Ediciones. (in Spanish) ISBN 978-84-121830-1-6
Pennington, T. D. and Fernandes, E. C. M. (editors). The Genus Inga: Utilization. Inga species and alley-cropping by Mike Hands. Kew Publications.
External links
Why Food Forests?, Permaculture Research Institute
Plant an Edible Forest Garden, Mother Earth News
The garden of the future?, The Guardian
Forest gardens, Permaculture Association
El Pilar Forest Garden Network, information on traditional Maya forest gardening
National Agroforestry Center (USDA)
Agroforestry Practices by The Center for Agroforestry, University of Missouri.
Hwwff.cce.cornell.edu
Ces.ncsu.edu
Trees with Edible Leaves, The Perennial Agriculture Institute.
Ntfpinfo.us
Dcnr.state.pa.us
Inga Foundation
Rainforest Saver Foundation (Inga alley cropping projects in Honduras and Cameroon)
Inga alley cropping as an agrometeorogical service to slash and burn cultivation
What is inga alley cropping?
Farmer Managed Natural Regeneration Website
Re-Greening the Sahel at IFPRI
The Development of Farmer Managed Natural Regeneration
Farmer Managed Natural Regeneration – Video
World Agroforestry Centre
The CGIAR Research Program on Forests, Trees and Agroforestry (FTA)
Australian agroforestry
Green Belt Movement
Plants For A Future
Agroforestry in France and Europe
Media
Agroforestry, stakes and perspectives. Agroof Production, Liagre F. and Girardin N.
Environmental issues with forests
Tropical agriculture
Forest management
Sustainable agriculture
Non-timber forest products
Organic farming
Agriculture in Brazil
Artificial ecosystems
Agriculture in Mesoamerica
Agroforestry systems
Agroforestry
Climate change and agriculture
Agriculture and the environment
Polyculture
Desert greening
Reforestation
Forestry in Africa
Sustainable forest management
Forestry in Ethiopia
Permaculture concepts
2-sided

In mathematics, specifically in topology of manifolds, a compact codimension-one submanifold S of a manifold M is said to be 2-sided in M when there is an embedding
h : S × [−1, 1] → M
with h(x, 0) = x for each x ∈ S and
h(S × [−1, 1]) ∩ ∂M = h(∂S × [−1, 1]).
In other words, a submanifold is 2-sided when its normal bundle is trivial.
This means, for example that a curve in a surface is 2-sided if it has a tubular neighborhood which is a cartesian product of the curve times an interval.
A submanifold which is not 2-sided is called 1-sided.
Examples
Surfaces
For curves on surfaces, a curve is 2-sided if and only if it preserves orientation, and 1-sided if and only if it reverses orientation: a tubular neighborhood is then a Möbius strip. This can be determined from the class of the curve in the fundamental group of the surface and the orientation character on the fundamental group, which identifies which curves reverse orientation.
An embedded circle in the plane is 2-sided.
An embedded circle generating the fundamental group of the real projective plane (such as an "equator" of the projective plane – the image of an equator for the sphere) is 1-sided, as it is orientation-reversing.
Properties
Cutting along a 2-sided manifold can separate a manifold into two pieces – such as cutting along the equator of a sphere or around the sphere on which a connected sum has been done – but need not, such as cutting along a curve on the torus.
Cutting along a (connected) 1-sided manifold does not separate a manifold, as a point that is locally on one side of the manifold can be connected to a point that is locally on the other side (i.e., just across the submanifold) by passing along an orientation-reversing path.
Cutting along a 1-sided manifold may make a non-orientable manifold orientable – such as cutting along an equator of the real projective plane – but may not, such as cutting along a 1-sided curve in a higher genus non-orientable surface. Perhaps the simplest example of this is seen when one cuts a Möbius band along its core curve.
References
Geometric topology
Managerial epidemiology

The use of epidemiological tools in health care management can be described as managerial epidemiology. Several formal definitions have been proposed for managerial epidemiology. These include:
The use of epidemiology for designing and managing health care for populations.
Effective management of resources to maintain and promote the health of populations.
The use of epidemiological concepts and tools to improve decisions about the management of health services.
History
The potential value of epidemiology in health care management has long been recognized. Academics were encouraging use of epidemiological methods in health care management for quality improvement and planning before the term ‘managerial epidemiology’ was coined. (See for example Rohrer 1989.) Epidemiology became a required subject in some health care management programs and textbooks were written for those courses. Managerial epidemiology might be considered a type of health services research, since it involves the study of health services.
After almost 40 years of research, a handful of researchers had provided examples of managerial epidemiology in use and argued for its importance to healthcare managers. However, the perspectives of healthcare leaders on the use of managerial epidemiology were not studied until 2020, when a study explored its adoption by ambulatory healthcare leaders across the United States (See Schenning 2020). Adoption was found to be poor, even though the practice is critically important for improving overall health system performance, including the triple aim, and for impacting population health. From the findings, Dr. Schenning developed a framework for accelerating adoption of managerial epidemiology (See Schenning 2020). She also discussed the importance of using managerial epidemiology for pandemic preparedness and response.
Variations
An important distinction can be drawn between population epidemiology and clinical epidemiology. If the US health care system had fully evolved in a direction that entailed management of care for populations rather than patients, then the concepts, methods and perspectives drawn from population epidemiology would have been ideal tools for use by managers. This indeed was anticipated by authors of textbooks on managerial epidemiology. (See Dever). In each cycle of health reform, the utility of epidemiology in planning medical services for populations was recognized.
However, the attention of most health care managers remains focused on patients rather than communities. Hospitals do not serve enrolled populations; they serve the patients who are treated in their beds and in the clinics. Consequently, the tools and perspectives of clinical epidemiology may be as or more relevant to health care managers than those drawn from population epidemiology. Managers employing epidemiology in hospitals might not conduct many community surveys. Instead, they would extract clinical information from medical records to analyze variations in outcomes, complications, and services used.
However, healthcare leaders should use managerial epidemiology, especially population epidemiology, for population health strategies and overall system performance (See Schenning). This is seen in the rise of efforts to address social determinants of health and was further realized during the COVID-19 pandemic (See Schenning).
Application
Methods drawn from clinical epidemiology that are employed in health care management to assess quality and cost include the following.
Study designs commonly used by epidemiologists, such as cohort studies, case control studies and surveys of patients.
Measures commonly used by epidemiologists, such as morbidity rates, infection rates, and mortality rates.
Statistical techniques commonly used by epidemiologists, such as chi square tests of rates and proportions.
Stratification of data by health problem, diagnosis or disability so as to maximize biological and clinical homogeneity.
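As a small illustration of the last two items, a chi-square test comparing infection rates between two (hypothetical) hospital units can be sketched in a few lines; all counts below are invented for the example, not drawn from any study.

```python
# Chi-square test of proportions for a 2x2 table (hypothetical counts):
# did the infection rate differ between two hospital units?
def chi_square_2x2(a, b, c, d):
    """Rows: unit A, unit B; columns: infected, not infected."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in [(a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)]:
        exp = r * col / n          # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat

# Unit A: 30 infections in 500 stays; unit B: 12 in 480 stays.
stat = chi_square_2x2(30, 470, 12, 468)
# df = 1; the critical value at the 5% level is 3.841
print(round(stat, 3), stat > 3.841)
```

A statistic above the critical value would lead the manager to investigate why the two units differ, which is the managerial (rather than purely clinical) use of the method.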
Differentiation
Managerial epidemiology differs from clinical epidemiology in that it addresses the concerns of management. For example, clinical epidemiologists who seek to control hospital-acquired infections would not be engaged in managerial epidemiology unless they described the infections as quality indicators and proposed or tested organizational changes that might reduce infection rates. Another distinction between clinical epidemiology and managerial epidemiology is that while clinical epidemiologists test the efficacy of particular treatments, managers are concerned with how clinical outcomes differ between hospitals, bed sections, clinics, or programs. Information of this kind can lead to reallocation of resources so as to improve efficiency and effectiveness of the organization as a whole.
References
Epidemiology
Health care management | Managerial epidemiology | Environmental_science | 911 |
68,195,735 | https://en.wikipedia.org/wiki/Matrix%20sign%20function | In mathematics, the matrix sign function is a matrix function on square matrices analogous to the complex sign function.
It was introduced by J. D. Roberts in 1971 as a tool for model reduction and for solving Lyapunov and algebraic Riccati equations in a technical report of Cambridge University, which was later published in a journal in 1980.
Definition
The matrix sign function is a generalization of the complex signum function

 sgn(z) = 1 if Re(z) > 0,  sgn(z) = -1 if Re(z) < 0,

to the matrix-valued analogue sgn(A). Although the sign function is not analytic, the matrix function is well defined for all matrices A that have no eigenvalue on the imaginary axis, see for example the Jordan-form-based definition (where the derivatives are all zero).
Properties
Theorem: Let S = sgn(A), then S^2 = I.
Theorem: Let S = sgn(A), then S is diagonalizable and has eigenvalues that are +1 or -1.
Theorem: Let S = sgn(A), then (I + S)/2 is a projector onto the invariant subspace associated with the eigenvalues in the right-half plane, and analogously (I - S)/2 for the left-half plane.
Theorem: Let A = P [J_+, 0; 0, J_-] P^{-1} be a Jordan decomposition (rows of block matrices separated by semicolons) such that J_+ corresponds to eigenvalues with positive real part and J_- to eigenvalues with negative real part. Then sgn(A) = P [I_p, 0; 0, -I_q] P^{-1}, where I_p and I_q are identity matrices of sizes corresponding to J_+ and J_-, respectively.
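These properties can be checked numerically. The sketch below (an illustration added here, not part of the original article) uses the standard closed form for a function of a 2×2 upper-triangular matrix with distinct eigenvalues to evaluate sgn(A) for an arbitrary test matrix, then verifies the involution and projector properties.

```python
# Closed-form check of the properties for a 2x2 upper-triangular matrix
# A = [[l1, a], [0, l2]] with distinct eigenvalues l1, l2 of opposite sign.
# Any matrix function f satisfies
#   f(A) = [[f(l1), a*(f(l1)-f(l2))/(l1-l2)], [0, f(l2)]].

def sign_triangular(l1, a, l2):
    s1 = 1.0 if l1 > 0 else -1.0
    s2 = 1.0 if l2 > 0 else -1.0
    return [[s1, a * (s1 - s2) / (l1 - l2)], [0.0, s2]]

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = sign_triangular(2.0, 1.0, -3.0)    # A = [[2, 1], [0, -3]]
S2 = mat_mul(S, S)                     # property: S^2 = I
P = [[0.5 * (S[i][j] + (1.0 if i == j else 0.0)) for j in range(2)]
     for i in range(2)]                # (I + S)/2 should be a projector
P2 = mat_mul(P, P)                     # property: P^2 = P
print(S2)
print(P, P2)
```

For this matrix S = [[1, 0.4], [0, -1]], and the projector (I + S)/2 has range spanned by (1, 0), the eigenvector for the right-half-plane eigenvalue 2.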
Computational methods
The function can be computed with generic methods for matrix functions, but there are also specialized methods.
Newton iteration
The Newton iteration can be derived by observing that sgn(z) = z (z^2)^{-1/2}, which in terms of matrices can be written as sgn(A) = A (A^2)^{-1/2}, where we use the matrix square root. If we apply the Babylonian method to compute the square root of the matrix A^2, that is, the iteration Y_{k+1} = (Y_k + A^2 Y_k^{-1})/2, and define the new iterate X_k = A^{-1} Y_k, we arrive at the iteration

 X_{k+1} = (X_k + X_k^{-1})/2,

where typically X_0 = A. Convergence is global, and locally it is quadratic.
The Newton iteration uses the explicit inverse of the iterates X_k.
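A minimal pure-Python sketch of the iteration X_{k+1} = (X_k + X_k^{-1})/2 for 2×2 matrices follows; the test matrix is an arbitrary example with eigenvalues 2 and -3 (so no eigenvalue lies on the imaginary axis), not taken from the source.

```python
# Newton iteration for the matrix sign function, sketched for 2x2
# matrices in pure Python (no linear-algebra library assumed).

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

def sign_newton(A, steps=50):
    X = [row[:] for row in A]          # X_0 = A
    for _ in range(steps):
        Xinv = mat_inv(X)
        X = [[0.5 * (X[i][j] + Xinv[i][j]) for j in range(2)]
             for i in range(2)]        # X_{k+1} = (X_k + X_k^{-1}) / 2
    return X

A = [[2.0, 1.0], [0.0, -3.0]]          # eigenvalues 2 and -3
S = sign_newton(A)
# For this triangular A, sgn(A) = [[1, 0.4], [0, -1]], and S*S = I.
print([[round(x, 6) for x in row] for row in S])
```

A production implementation would add a convergence test and scaling of the iterates rather than a fixed step count.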
Newton–Schulz iteration
To avoid the need of an explicit inverse used in the Newton iteration, the inverse can be approximated with one step of the Newton iteration for the inverse, X_k^{-1} ≈ X_k (2I - X_k^2), derived by Schulz in 1933. Substituting this approximation into the previous method, the new method becomes

 X_{k+1} = X_k (3I - X_k^2)/2.

Convergence is (still) quadratic, but only local (guaranteed for ||I - A^2|| < 1).
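The inverse-free variant can be sketched the same way. The starting matrix below is an arbitrary illustration chosen close enough to its sign that the local convergence condition ||I - A^2|| < 1 holds.

```python
# Inverse-free Newton-Schulz iteration: X_{k+1} = X_k (3I - X_k^2) / 2.
# Convergence is only local, so the (arbitrary) starting matrix below
# is chosen with ||I - A^2|| < 1.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sign_newton_schulz(A, steps=30):
    X = [row[:] for row in A]
    for _ in range(steps):
        X2 = mat_mul(X, X)
        T = [[(3.0 if i == j else 0.0) - X2[i][j] for j in range(2)]
             for i in range(2)]        # 3I - X_k^2
        X = [[0.5 * v for v in row] for row in mat_mul(X, T)]
    return X

A = [[1.1, 0.2], [0.0, -0.9]]          # eigenvalues 1.1 and -0.9
S = sign_newton_schulz(A)
# sgn(A) for this triangular A is [[1, 0.2], [0, -1]]
print([[round(x, 6) for x in row] for row in S])
```

The trade-off versus the plain Newton iteration is two matrix multiplications per step instead of one inversion, which is attractive on hardware where multiplication is cheap.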
Applications
Solutions of Sylvester equations
Theorem: Let A, B, C be matrices of compatible sizes and assume that A and B are stable, then the unique solution X to the Sylvester equation, AX + XB = C, is given by X such that

 sgn([A, -C; 0, -B]) = [-I, 2X; 0, I]

(rows of block matrices separated by semicolons).
Proof sketch: The result follows from the similarity transform

 [A, -C; 0, -B] = [I, X; 0, I] [A, 0; 0, -B] [I, X; 0, I]^{-1}

since

 sgn([A, -C; 0, -B]) = [I, X; 0, I] [sgn(A), 0; 0, sgn(-B)] [I, X; 0, I]^{-1} = [-I, 2X; 0, I]

due to the stability of A and B, which gives sgn(A) = -I and sgn(-B) = I.
The theorem is, naturally, also applicable to the Lyapunov equation. However, due to the structure, the Newton iteration simplifies to only involving inverses of A and A^H.
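A scalar illustration of the theorem (hypothetical values, 1×1 blocks so the block matrix is 2×2): for a x + x b = c with a = -2 and b = -3 both stable and c = 5, the exact solution is x = c/(a + b) = -1, and sgn([a, -c; 0, -b]) should equal [-1, 2x; 0, 1].

```python
# Sign-function solution of a scalar Sylvester equation a*x + x*b = c.
# With 1x1 blocks, the block matrix [[a, -c], [0, -b]] is 2x2, so plain
# 2x2 arithmetic suffices.  Here a = -2, b = -3 (stable), c = 5 => x = -1.

def mat_inv(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

def sign_newton(M, steps=60):
    X = [row[:] for row in M]
    for _ in range(steps):
        Xinv = mat_inv(X)
        X = [[0.5 * (X[i][j] + Xinv[i][j]) for j in range(2)]
             for i in range(2)]
    return X

a, b, c = -2.0, -3.0, 5.0
M = [[a, -c], [0.0, -b]]               # [[A, -C], [0, -B]]
S = sign_newton(M)                     # should equal [[-1, 2x], [0, 1]]
x = S[0][1] / 2.0
print(round(x, 6))                     # solves a*x + x*b = c
```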
Solutions of algebraic Riccati equations
There is a similar result applicable to the algebraic Riccati equation, A^H X + X A - X F X + G = 0. Define H as

 H = [A, -F; -G, -A^H].

Under the assumption that F and G are Hermitian and there exists a unique stabilizing solution, in the sense that A - FX is stable, that solution is given by the over-determined, but consistent, linear system

 [W_12; W_22 + I] X = -[W_11 + I; W_21],

where W = sgn(H) is partitioned into blocks W_ij conformally with H.
Proof sketch: The similarity transform

 H = [I, 0; X, I] [A - FX, -F; 0, -(A - FX)^H] [I, 0; X, I]^{-1}

and the stability of A - FX implies that

 sgn(H) = [I, 0; X, I] [-I, Y; 0, I] [I, 0; X, I]^{-1}

for some matrix Y. Applying sgn(H) + I to the columns of [I; X], which span the stable invariant subspace, gives (sgn(H) + I) [I; X] = 0, from which the linear system follows.
Computation of the matrix square root
The Denman–Beavers iteration for the square root of a matrix A can be derived from the Newton iteration for the matrix sign function by noticing that Z^2 - A = 0 is a degenerate algebraic Riccati equation, and by definition a solution Z is the square root of A.
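One common form of the resulting coupled iteration is Y_{k+1} = (Y_k + Z_k^{-1})/2, Z_{k+1} = (Z_k + Y_k^{-1})/2 with Y_0 = A, Z_0 = I, where Y_k converges to A^{1/2} and Z_k to A^{-1/2}. A 2×2 pure-Python sketch (the test matrix is an arbitrary example with positive eigenvalues):

```python
# Denman-Beavers iteration for the principal matrix square root,
# sketched for 2x2 matrices:
#   Y_{k+1} = (Y_k + Z_k^{-1})/2,  Z_{k+1} = (Z_k + Y_k^{-1})/2,
# with Y_0 = A, Z_0 = I; then Y_k -> A^(1/2) and Z_k -> A^(-1/2).

def mat_inv(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

def sqrtm_db(A, steps=60):
    Y = [row[:] for row in A]
    Z = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        Yinv, Zinv = mat_inv(Y), mat_inv(Z)   # use old iterates for both
        Y = [[0.5 * (Y[i][j] + Zinv[i][j]) for j in range(2)]
             for i in range(2)]
        Z = [[0.5 * (Z[i][j] + Yinv[i][j]) for j in range(2)]
             for i in range(2)]
    return Y

A = [[4.0, 1.0], [0.0, 9.0]]
R = sqrtm_db(A)                        # principal square root of A
# For this triangular A the exact root is [[2, 0.2], [0, 3]]
print([[round(x, 6) for x in row] for row in R])
```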
References
Matrix theory
Linear algebra | Matrix sign function | Mathematics | 667 |
67,693,073 | https://en.wikipedia.org/wiki/NGC%203412 | NGC 3412 is a barred lenticular galaxy located in the constellation Leo. It was discovered on April 8, 1784, by the astronomer William Herschel.
References
External links
Leo (constellation)
3412
Barred lenticular galaxies
032508 | NGC 3412 | Astronomy | 50 |
75,948,469 | https://en.wikipedia.org/wiki/U%20Pegasi | U Pegasi is a binary star system in the constellation of Pegasus, abbreviated U Peg. The pair form an eclipsing binary with a combined peak apparent visual magnitude of 9.23, which is far too faint to be visible to the naked eye. During the primary eclipse the magnitude decreases to 10.07, while the secondary eclipse only drops to magnitude 9.73. This system is located at a distance of approximately 596 light years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of around −28.5 km/s.
The variable luminosity of this system was discovered by S. C. Chandler in 1895. He found it to have a continuously varying light curve with a period of . Observing the star photometrically, in 1898 E. C. Pickering and O. Wendell determined a longer period of . H. Shapley published orbital measures for this eclipsing binary in 1913, estimating their relative luminosities and radii. It was determined to be a variable of the W Ursae Majoris type, and in 1945 the orbital period was shown to vary over time.
Properties
This is a double-lined spectroscopic binary star system with an orbital period of . Their orbital separation is just . The inclination of the orbital plane is at an angle of 76.1° to the plane of the sky from the Earth, so the components are seen to eclipse each other during an orbit. The deeper eclipse occurs when the cooler secondary overlaps the primary star. They belong to the W sub-class of W UMa eclipsing variables.
The larger member of this system is a G-type main-sequence star with a stellar classification of G2 V. The primary has a larger mass and radius than the Sun, while the secondary component is significantly smaller and less massive. The pair are in over-contact by about 14.9%, which allows a considerable amount of energy transfer between the two stars. As a consequence, the two components show similar effective temperatures and spectral classes. The cooler component displays a significant level of star spot activity that causes the light curve to vary anomalously over time. The system has an X-ray luminosity of .
The long term change in the orbital period may be explained by mass transfer between the components, with the matter streaming from the more massive to the less massive star. The overall period change indicates this mass transfer is occurring at an average rate of ·yr−1. Periodicity in the change rate suggests there may be a third orbiting component in the system with a period of . If so, it would need to have a mass of at least , which is high enough to be a star.
References
Further reading
G-type main-sequence stars
W Ursae Majoris variables
Astronomical X-ray sources
Pegasus (constellation)
Durchmusterung objects
093174 | U Pegasi | Astronomy | 589 |
477,510 | https://en.wikipedia.org/wiki/Omics | Omics is the collective characterization and quantification of entire sets of biological molecules and the investigation of how they translate into the structure, function, and dynamics of an organism or group of organisms. The branches of science known informally as omics are various disciplines in biology whose names end in the suffix -omics, such as genomics, proteomics, metabolomics, metagenomics, phenomics and transcriptomics.
The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome or metabolome respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; it is an example of a "neo-suffix" formed by abstraction from various Greek terms in -ωμα, a sequence that does not form an identifiable suffix in Greek.
Functional genomics aims at identifying the functions of as many genes as possible of a given organism. It combines different -omics techniques such as transcriptomics and proteomics with saturated mutant collections.
Origin
The Oxford English Dictionary (OED) distinguishes three different fields of application for the -ome suffix:
in medicine, forming nouns with the sense "swelling, tumour"
in botany or zoology, forming nouns in the sense "a part of an animal or plant with a specified structure"
in cellular and molecular biology, forming nouns with the sense "all constituents considered collectively"
The -ome suffix originated as a variant of -oma, and became productive in the last quarter of the 19th century. It originally appeared in terms like sclerome or rhizome. All of these terms derive from Greek words in -ωμα, a sequence that is not a single suffix, but analyzable as -ω-μα, the -ω- belonging to the word stem (usually a verb) and the -μα being a genuine Greek suffix forming abstract nouns.
The OED suggests that its third definition originated as a back-formation from mitome. Early attestations include biome (1916) and genome (first coined as German Genom in 1920).
The association with chromosome in molecular biology is by false etymology. The word chromosome derives from the Greek stems χρωματ- "colour" and σωματ- "body". While σῶμα "body" genuinely contains the suffix -μα, the preceding -ω- is not a stem-forming suffix but part of the word's root. Because genome refers to the complete genetic makeup of an organism, a neo-suffix -ome suggested itself as referring to "wholeness" or "completion".
Bioinformaticians and molecular biologists figured amongst the first scientists to apply the "-ome" suffix widely. Early advocates included bioinformaticians in Cambridge, UK, where there were many early bioinformatics labs such as the MRC centre, Sanger centre, and EBI (European Bioinformatics Institute); for example, the MRC centre carried out the first genome and proteome projects.
Current usage
Many "omes" beyond the original "genome" have become useful and have been widely adopted by research scientists. "Proteomics" has become well-established as a term for studying proteins at a large scale. "Omes" can provide an easy shorthand to encapsulate a field; for example, an interactomics study is clearly recognisable as relating to large-scale analyses of gene-gene, protein-protein, or protein-ligand interactions. Researchers are rapidly taking up omes and omics, as shown by the explosion of the use of these terms in PubMed since the mid-1990s.
Kinds of omics studies
Genomics
Genomics: Study of the genomes of organisms.
Cognitive genomics: Study of the changes in cognitive processes associated with genetic profiles.
Comparative genomics: Study of the relationship of genome structure and function across different biological species or strains.
Functional genomics: Describes gene and protein functions and interactions (often uses transcriptomics).
Metagenomics: Study of metagenomes, i.e., genetic material recovered directly from environmental samples.
Neurogenomics: Study of genetic influences on the development and function of the nervous system.
Pangenomics: Study of the entire collection of genes or genomes found within a given species.
Personal genomics: Branch of genomics concerned with the sequencing and analysis of the genome of an individual. Once the genotypes are known, the individual's genotype can be compared with the published literature to determine likelihood of trait expression and disease risk. Helps in Personalized Medicine
Electromics: Branch of genomics concerned with the role of exogenous electric fields in potentiating the gene expression profiles of cells, tissues, and organoids.
Epigenomics
The epigenome is the supporting structure of the genome, including protein and RNA binders, alternative DNA structures, and chemical modifications on DNA.
Epigenomics: Modern technologies include chromosome conformation by Hi-C, various ChIP-seq and other sequencing methods combined with proteomic fractionations, and sequencing methods that find chemical modification of cytosines, like bisulfite sequencing.
Nucleomics: Study of the complete set of genomic components which form "the cell nucleus as a complex, dynamic biological system, referred to as the nucleome". The 4D Nucleome Consortium officially joined the IHEC (International Human Epigenome Consortium) in 2017.
Microbiomics
The microbiome is a microbial community occupying a well-defined habitat with distinct physio-chemical properties. It includes the microorganisms involved and their theatre of activity, forming ecological niches. Microbiomes form dynamic and interactive micro-ecosystems prone to spatiotemporal change. They are integrated into macro-ecosystems, such as eukaryotic hosts, and are crucial to the host's proper function and health. The interactive host-microbe systems make up the holobiont.
Microbiomics is the study of microbiome dynamics, function, and structure. This area of study employs several techniques to study the microbiome in its host environment:
Sampling methods focused on collecting representative samples of the local environment, either from oral swabs or stool.
Culturomics (microbiology) is the high-throughput cell culture of bacteria that aims to comprehensively identify strains or species in samples obtained from tissues such as the human gut or from the environment.
Microfluidics gut-on-a-chip devices, which simulate the conditions of the gut and allow analysis of changes to the microbiome that can be more accurately monitored than in situ.
Mechanical DNA extraction techniques and gene amplification methods, such as PCR, to analyze the genomic profile of the entire microbiome.
DNA fingerprinting using microarrays and hybridization techniques allow analysis of shifts in microbiota populations.
Multi-omics studies allow for functional analysis of microbiota.
Animal models can be used to take more accurate samples of the in situ microbiome. Germ-free animals can be implanted with a specific microbiome from another organism to yield a gnotobiotic model, which can then be studied to see how the microbiome changes under different environmental conditions.
Lipidomics
The lipidome is the entire complement of cellular lipids, including the modifications made to a particular set of lipids, produced by an organism or system.
Lipidomics: Large-scale study of pathways and networks of lipids. Mass spectrometry techniques are used.
Proteomics
The proteome is the entire complement of proteins, including the modifications made to a particular set of proteins, produced by an organism or system.
Proteomics: Large-scale study of proteins, particularly their structures and functions. Mass spectrometry techniques are used.
Chemoproteomics: An array of techniques used to study protein-small molecule interactions
Immunoproteomics: Study of large sets of proteins (proteomics) involved in the immune response
Nutriproteomics: Identifying the molecular targets of nutritive and non-nutritive components of the diet. Uses proteomics mass spectrometry data for protein expression studies
Proteogenomics: An emerging field of biological research at the intersection of proteomics and genomics. Proteomics data used for gene annotations.
Structural genomics: Study of the three-dimensional structure of every protein encoded by a given genome using a combination of experimental and modeling approaches.
Glycomics
Glycomics is the comprehensive study of the glycome i.e. sugars and carbohydrates.
Foodomics
Foodomics was defined by Alejandro Cifuentes in 2009 as "a discipline that studies the food and nutrition domains through the application and integration of advanced omics technologies to improve consumer’s well-being, health, and knowledge."
Transcriptomics
Transcriptome is the set of all RNA molecules, including mRNA, rRNA, tRNA, and other non-coding RNA, produced in one or a population of cells.
Transcriptomics: Study of transcriptomes, their structures and functions.
Metabolomics
The metabolome is the ensemble of small molecules found within a biological matrix.
Metabolomics: Scientific study of chemical processes involving metabolites. It is a "systematic study of the unique chemical fingerprints that specific cellular processes leave behind", the study of their small-molecule metabolite profiles
Metabonomics: The quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification
Nutrition, pharmacology, and toxicology
Nutritional genomics: A science studying the relationship between human genome, nutrition and health.
Nutrigenetics studies the effect of genetic variations on the interaction between diet and health with implications to susceptible subgroups
Nutrigenomics: Study of the effects of foods and food constituents on gene expression. Studies the effect of nutrients on the genome, proteome, and metabolome
Pharmacogenomics investigates the effect of the sum of variations within the human genome on drugs;
Pharmacomicrobiomics investigates the effect of variations within the human microbiome on drugs and vice versa.
Toxicogenomics: a field of science that deals with the collection, interpretation, and storage of information about gene and protein activity within particular cell or tissue of an organism in response to toxic substances.
Culture
Inspired by foundational questions in evolutionary biology, a Harvard team around Jean-Baptiste Michel and Erez Lieberman Aiden created the American neologism culturomics for the application of big data collection and analysis to cultural studies.
Miscellaneous
Mitointeractome
Psychogenomics: Process of applying the powerful tools of genomics and proteomics to achieve a better understanding of the biological substrates of normal behavior and of diseases of the brain that manifest themselves as behavioral abnormalities. Applying psychogenomics to the study of drug addiction, the ultimate goal is to develop more effective treatments for these disorders as well as objective diagnostic tools, preventive measures, and eventually cures.
Stem cell genomics: Helps in stem cell biology. Aim is to establish stem cells as a leading model system for understanding human biology and disease states and ultimately to accelerate progress toward clinical translation.
Connectomics: The study of the connectome, the totality of the neural connections in the brain.
Microbiomics: The study of the genomes of the communities of microorganisms that live in a specific environmental niche.
Cellomics: The quantitative cell analysis and study using bioimaging methods and bioinformatics.
Tomomics: A combination of tomography and omics methods to understand tissue or cell biochemistry at high spatial resolution, typically using imaging mass spectrometry data.
Viral metagenomics: Using omics methods in soil, ocean water, and humans to study the Virome and Human virome.
Ethomics: The high-throughput machine measurement of animal behaviour.
Videomics (or vide-omics): A video analysis paradigm inspired by genomics principles, where a continuous image sequence (or video) can be interpreted as the capture of a single image evolving through time through mutations revealing 'a scene'.
Multiomics: Integration of different omics in a single study or analysis pipeline.
Unrelated words in -omics
The word "comic" does not use the "omics" suffix; it derives from Greek "κωμ(ο)-" (merriment) + "-ικ(ο)-" (an adjectival suffix), rather than presenting a truncation of "σωμ(ατ)-".
Similarly, the word "economy" is assembled from Greek "οικ(ο)-" (household) + "νομ(ο)-" (law or custom), and "economic(s)" from "οικ(ο)-" + "νομ(ο)-" + "-ικ(ο)-". The suffix -omics is sometimes used to create names for schools of economics, such as Reaganomics.
See also
Systems biology
Panomics
Notes
Further reading
External links
Omics.org Omics terms and concepts home page. Probably the first omics web page created.
List of omics, including references/origins. Maintained by the (CHI) Cambridge Health Institute.
Scientific suffixes
Genomics | Omics | Biology | 2,726 |
146,252 | https://en.wikipedia.org/wiki/Martin%20Kamen | Martin David Kamen (August 27, 1913, Toronto – August 31, 2002, Montecito, California) was an American chemist who, together with Sam Ruben, co-discovered the synthesis of the isotope carbon-14 on February 27, 1940, at the University of California Radiation Laboratory, Berkeley. He also confirmed that all of the oxygen released in photosynthesis comes from water, not carbon dioxide, in 1941.
Kamen was the first to use carbon-14 to study a biochemical system, and his work revolutionized biochemistry and molecular biology, enabling scientists to trace a wide variety of biological reactions and processes. Despite being blacklisted for nearly a decade on suspicion of being a security risk, Kamen went on to receive the Albert Einstein World Award of Science in 1989, and the U.S. Department of Energy's 1995 Enrico Fermi award for lifetime scientific achievement.
Early life and education
Kamen was born on August 27, 1913, in Toronto, the son of Russian Jewish immigrants. He grew up in Chicago. Interested in classical music, he initially entered the University of Chicago as a music student before changing his major from music to chemistry. Although he gave up music as a career, Kamen continued to play the viola at a high professional level during the rest of his life.
Kamen received a bachelor's degree in chemistry from the University of Chicago in 1933. In 1936, Kamen earned a PhD in physical chemistry from the same university after working with William D. Harkins on "Neutron-Proton Inter-action: The Scattering of Neutrons by Protons."
Career
From 1936 to 1944, Kamen worked at the Radiation laboratories at the University of California, Berkeley.
Kamen gained a research position in chemistry and nuclear physics under Ernest Lawrence by working without pay for six months, until he was hired to oversee the preparation and distribution of the cyclotron's products.
Kamen's major achievements during his time at Berkeley included the co-discovery of the synthesis of carbon-14 with Sam Ruben in 1940, and the confirmation that all of the oxygen released in photosynthesis comes from water, not carbon dioxide, in 1941.
From 1941 to 1944, Kamen and others at the Berkeley Radiation Laboratory worked on the Manhattan Project.
In 1943, Kamen was assigned to Manhattan Project work at Oak Ridge, Tennessee, where he worked briefly before returning to Berkeley.
In spite of the fact that his scientific capabilities were unquestioned, Kamen was fired from Berkeley in July 1944 on suspicion of being a security risk. He was suspected of leaking nuclear weapons secrets to the Soviet Union (which at the time was allied with the US and others against Nazi Germany).
Kamen was unable to obtain another academic position until 1945 when he was hired by Arthur Holly Compton to run the cyclotron program in the medical school of Washington University in St. Louis. Kamen taught the faculty how to use radioactive tracer materials in research, and continued to develop his interests in biochemistry. His book Isotopic Tracers in Biology (1947) became a standard text on tracer methodology and highly influenced tracer use in biochemistry.
In 1957, Kamen moved to Brandeis University in Massachusetts where he helped Nathan Oram Kaplan to establish the Graduate Department of Biochemistry.
In 1961 Kamen joined the University of California, San Diego, where he founded a biochemistry group as part of the university's new department of chemistry.
Kamen remained at the University of California, San Diego, retiring from teaching (but not research) to become an emeritus professor in 1978.
Martin Kamen died August 31, 2002, at the age of 89 in Montecito (Santa Barbara), California.
Research
Although carbon-14 was previously known, the discovery of the synthesis of carbon-14 occurred at Berkeley in 1940 when Kamen and Sam Ruben bombarded graphite in the cyclotron in hopes of producing a radioactive isotope of carbon that could be used as a tracer in investigating chemical reactions in photosynthesis. Their experiment resulted in production of carbon-14.
By bombarding matter with particles in the cyclotron, radioactive isotopes such as carbon-14 were generated. Using carbon-14, the order of events in biochemical reactions could be elucidated, showing the precursors of a particular biochemical product, revealing the network of reactions that constitute life.
Kamen confirmed in 1941 that all of the oxygen released in photosynthesis comes from water, not carbon dioxide. He also studied anoxygenic photosynthetic bacteria, the biochemistry of cytochromes and their role in photosynthesis and metabolism, photosynthetic bacteria, the role of molybdenum in biological nitrogen fixation, the role of iron in the activity of porphyrin compounds in plants and animals, and calcium exchange in cancerous tumors, making substantial contributions.
Security risk controversy
Kamen came under long-term suspicion of espionage activity as a result of two incidents in 1944. He has described his experiences during this era in his autobiography, Radiant Science, Dark Politics. He first aroused suspicion while working at Oak Ridge. A cyclotron operator prepared radioactive sodium for an experiment, and Kamen was surprised that the resulting sodium had a purple glow, indicating it was much more intensely radioactive than could be produced in a cyclotron. Kamen recognized immediately that the sodium must have been irradiated in a nuclear reactor elsewhere in the facility. Because of wartime secrecy, he had not been aware of the reactor's existence. He excitedly told Ernest O. Lawrence about his discovery, in the hearing of Lawrence's Army escort. Shortly thereafter, an investigation was launched to find out who had leaked the information to Kamen.
After returning to Berkeley, Kamen met two Russian officials at a party given by his friend, the violinist Isaac Stern, whom he sometimes accompanied as a viola player in social evenings of chamber music. The Russians were Grigory Kheifets and Grigory Kasparov, posted as undercover KGB officers in the Soviet Union's San Francisco consulate. One of them asked Kamen for assistance in getting in touch with Rad Lab scientist John H. Lawrence about an experimental radiation treatment for a colleague with leukemia (Commander Kalinin of the Russian Navy, under treatment at the United States Navy Hospital in Seattle, Washington). Kamen put them in contact, and in appreciation he was invited for dinner at a local restaurant. FBI agents observed the dinner, on July 1, 1944, took a photograph of the men together, and submitted a report alleging Kamen to have discussed atomic research with Kheifets. In a memorandum of July 11, 1944, Army officials ordered Lawrence to have Martin Kamen dismissed from his Berkeley position and his work on the Manhattan Project on suspicion of being a “security risk.” There was no hearing or method of appeal.
In addition, Ruth B. Shipley at the Passport Division of the State Department revoked Kamen's passport in 1947, and repeatedly refused to reissue it. This had significant negative effects on Kamen's career and research, preventing him from traveling abroad to give lectures, attend conferences, and take up visiting professorships.
In 1948, the House Committee on Un-American Activities summoned Kamen to testify about his dinner conversation of 1944. From 1947 to 1955, Kamen engaged in repeated attempts to regain his passport and to engage in international scientific activities. He sought legal counsel in 1950 and started litigation to regain his passport and right to travel, gaining support from the Federation of American Scientists, the American Civil Liberties Union and others.
In 1951 the Chicago Tribune published an article that named him as a suspected spy for the Soviets, further damaging his reputation. Soon after, Kamen attempted suicide. He went on to sue the Chicago Tribune and the Washington Times-Herald for libel, winning his suit in 1955. It took Kamen nearly 10 years to establish his innocence and prove that he had been unjustly blacklisted as a security risk. He was finally able to regain his passport as of July 9, 1955.
Awards and honors
Kamen was elected a Fellow of the American Physical Society in 1941. He became a fellow of the American Academy of Arts and Sciences in 1958. In 1962, Kamen was elected as a member of the National Academy of Sciences. He was elected to the American Philosophical Society in 1974.
Kamen became a Guggenheim Fellowship recipient in 1956 and again in 1972, in the field of Molecular and Cellular Biology.
Kamen was awarded the Charles F. Kettering Award for Excellence in Photosynthesis Research from the American Society of Plant Biologists in 1968 and the Merck Award of the American Society of Biological Chemists in 1982.
He received the 1989 Albert Einstein World Award of Science. On April 24, 1996, he was presented with the 1995 Enrico Fermi Award, given by the U.S. President and the Department of Energy for lifetime scientific achievement. Some believe he should have won a Nobel Prize, for which he was nominated 14 times between 1955 and 1970.
Books
Foreword by Edwin M. McMillan.
Archival Collections
Martin David Kamen Papers MSS 98. UC San Diego Library Special Collections & Archives, UC San Diego Library.
Kamen, Martin, Vertical File, Bernard Becker Medical Library, Washington University in St. Louis.
Martin David Kamen papers : ca. 1937-1945, Bancroft Library, UC Berkeley
References
1913 births
2002 deaths
Carbon-14
Scientists from Toronto
University of Chicago alumni
Scientists from Chicago
University of California, San Diego faculty
American biochemists
Nuclear secrecy
Manhattan Project people
Albert Einstein World Award of Science Laureates
Enrico Fermi Award recipients
Washington University in St. Louis faculty
Fellows of the American Physical Society
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Canadian people of Russian descent
American physical chemists
Venona project
Jewish American scientists
Canadian emigrants to the United States
Members of the American Philosophical Society
Brandeis University faculty
Researchers of photosynthesis | Martin Kamen | Chemistry | 1,981 |
14,171,783 | https://en.wikipedia.org/wiki/Auxiliary%20electrode | In electrochemistry, the auxiliary electrode, often also called the counter electrode, is an electrode used in a three-electrode electrochemical cell for voltammetric analysis or other reactions in which an electric current is expected to flow. The auxiliary electrode is distinct from the reference electrode, which establishes the electrical potential against which other potentials may be measured, and the working electrode, at which the cell reaction takes place.
In a two-electrode system, either a known current or potential is applied between the working and auxiliary electrodes and the other variable may be measured. The auxiliary electrode functions as a cathode whenever the working electrode is operating as an anode and vice versa. The auxiliary electrode often has a surface area much larger than that of the working electrode to ensure that the half-reaction occurring at the auxiliary electrode can occur fast enough so as not to limit the process at the working electrode.
When a three-electrode cell is used to perform electroanalytical chemistry, the auxiliary electrode, along with the working electrode, provides a circuit over which current is either applied or measured. Here, the potential of the auxiliary electrode is usually not measured and is adjusted so as to balance the reaction occurring at the working electrode. This configuration allows the potential of the working electrode to be measured against a known reference electrode without compromising the stability of that reference electrode by passing current over it.
The auxiliary electrode may be isolated from the working electrode using a glass frit. Such isolation prevents any byproducts generated at the auxiliary electrode from contaminating the main test solution: for example, if a reduction is being performed at the working electrode in aqueous solution, oxygen may be evolved from the auxiliary electrode. Such isolation is crucial during the bulk electrolysis of a species which exhibits reversible redox behavior.
Auxiliary electrodes are often fabricated from electrochemically active materials such as gold, platinum, or carbon, with a much higher surface area than the working electrode.
See also
Electrochemical cell
Electrochemistry
Reference electrode
Voltammetry
Working electrode
References
Further reading
Electroanalytical chemistry devices
Electrodes | Auxiliary electrode | Chemistry | 424 |
24,324,537 | https://en.wikipedia.org/wiki/Armillaria%20duplicata | Armillaria duplicata is a species of mushroom in the family Physalacriaceae. This species is found in Asia.
See also
List of Armillaria species
References
duplicata
Fungal tree pathogens and diseases
Fungus species | Armillaria duplicata | Biology | 51 |
45,048,610 | https://en.wikipedia.org/wiki/CXCR4%20antagonist | A CXCR4 antagonist is a substance that blocks the CXCR4 receptor and prevents its activation. Blocking the receptor stops the receptor's ligand, CXCL12, from binding, which prevents downstream effects. CXCR4 antagonists are especially important for hindering cancer progression because one of the downstream effects initiated by CXCR4 receptor activation is cell movement, which assists the spread of cancer, known as metastasis. The CXCR4 receptor has been targeted by antagonists since it was identified as a co-receptor for HIV entry and as a contributor to the development of cancer. Macrocyclic ligands have been utilised as CXCR4 antagonists.
Plerixafor is an example of a CXCR4 antagonist; it was approved by the US FDA in 2008 for clinical use in mobilizing hematopoietic stem cells.
BL-8040 is a CXCR4 antagonist that has undergone clinical trials (e.g. in various leukemias), with one planned for pancreatic cancer (in combination with pembrolizumab). Previously called BKT140, it is a synthetic cyclic 14-residue peptide with an aromatic ring. In a 2018 mouse tumor model study, BL-8040 treatment enhanced anti-tumor immune response potentially by increasing the CD8+ T-cells in the tumor microenvironment.
Mavorixafor (Xolremdi) is a small-molecule drug that targets mutant CXCR4; it was approved for medical use in the United States in April 2024 for the treatment of WHIM syndrome.
References
Cancer treatments
Receptor antagonists | CXCR4 antagonist | Chemistry | 345 |
9,211,532 | https://en.wikipedia.org/wiki/Gyula%20O.%20H.%20Katona | Gyula O. H. Katona (born 16 March 1941 in Budapest) is a Hungarian mathematician known for his work in combinatorial set theory, especially for the Kruskal–Katona theorem and for his elegant proof of the Erdős–Ko–Rado theorem, in which he introduced a new method now called Katona's cycle method. Since then, this method has become a powerful tool for proving many results in extremal set theory. He is affiliated with the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences.
Katona was secretary-general of the János Bolyai Mathematical Society from 1990 to 1996. In 1966 and 1968 he won the Grünwald Prize, awarded by the Bolyai Society to outstanding young mathematicians, he was awarded the Alfréd Rényi Prize of the Hungarian Academy of Sciences in 1975, and the same academy awarded him the Prize of the Academy in 1989. In 2011 the Alfréd Rényi Institute, the János Bolyai Society and the Hungarian Academy of Sciences organized a conference in honor of Katona's 70th birthday.
Gyula O.H. Katona is the father of Gyula Y. Katona, another Hungarian mathematician with similar research interests to those of his father.
References
External links
Katona's web site
Katona on IMDb, appearing as himself in N Is a Number
2024 Interview with Gyula O. H. Katona
Members of the Hungarian Academy of Sciences
20th-century Hungarian mathematicians
Combinatorialists
1941 births
Living people | Gyula O. H. Katona | Mathematics | 317 |
27,112,704 | https://en.wikipedia.org/wiki/Dichlorotris%28triphenylphosphine%29ruthenium%28II%29 | Dichlorotris(triphenylphosphine)ruthenium(II) is a coordination complex of ruthenium. It is a chocolate brown solid that is soluble in organic solvents such as benzene. The compound is used as a precursor to other complexes including those used in homogeneous catalysis.
Synthesis and basic properties
RuCl2(PPh3)3 is the product of the reaction of ruthenium trichloride trihydrate with a methanolic solution of triphenylphosphine.
2 RuCl3(H2O)3 + 7 PPh3 → 2 RuCl2(PPh3)3 + 2 HCl + 5 H2O + OPPh3
The coordination sphere of RuCl2(PPh3)3 can be viewed as either five-coordinate or octahedral. One coordination site is occupied by one of the hydrogen atoms of a phenyl group. This Ru---H agostic interaction is long (2.59 Å) and weak. The low symmetry of the compound is reflected by the differing lengths of the Ru-P bonds: 2.374, 2.412, and 2.230 Å. The Ru-Cl bond lengths are both 2.387 Å.
Reactions
In the presence of excess of triphenylphosphine, RuCl2(PPh3)3 binds a fourth phosphine to give black RuCl2(PPh3)4. The triphenylphosphine ligands in both the tris(phosphine) and tetrakis(phosphine) complexes are readily substituted by other ligands. The tetrakis(phosphine) complex is a precursor to the Grubbs catalysts.
Dichlorotris(triphenylphosphine)ruthenium(II) reacts with hydrogen in the presence of base to give the purple-colored monohydride HRuCl(PPh3)3.
RuCl2(PPh3)3 + H2 + NEt3 → HRuCl(PPh3)3 + [HNEt3]Cl
Dichlorotris(triphenylphosphine)ruthenium(II) reacts with carbon monoxide to produce the all trans isomer of dichloro(dicarbonyl)bis(triphenylphosphine)ruthenium(II).
RuCl2(PPh3)3 + 2 CO → trans,trans,trans-RuCl2(CO)2(PPh3)2 + PPh3
This kinetic product isomerizes to the cis adduct during recrystallization. trans-RuCl2(dppe)2 forms upon treating RuCl2(PPh3)3 with dppe.
RuCl2(PPh3)3 + 2 dppe → RuCl2(dppe)2 + 3 PPh3
RuCl2(PPh3)3 catalyzes the decomposition of formic acid into carbon dioxide and hydrogen gas in the presence of an amine. Since carbon dioxide can be trapped and hydrogenated on an industrial scale, formic acid represents a potential hydrogen storage and transportation medium.
Use in organic synthesis
RuCl2(PPh3)3 facilitates oxidations, reductions, cross-couplings, cyclizations, and isomerization. It is used in the Kharasch addition of chlorocarbons to alkenes.
Dichlorotris(triphenylphosphine)ruthenium(II) serves as a precatalyst for the hydrogenation of alkenes, nitro compounds, ketones, carboxylic acids, and imines. On the other hand, it catalyzes the oxidation of alkanes to tertiary alcohols, amides to t-butyldioxyamides, and tertiary amines to α-(t-butyldioxyamides) using tert-butyl hydroperoxide. Using other peroxides, oxygen, and acetone, the catalyst can oxidize alcohols to aldehydes or ketones. Using dichlorotris(triphenylphosphine)ruthenium(II) the N-alkylation of amines with alcohols is also possible (see "borrowing hydrogen").
RuCl2(PPh3)3 efficiently catalyzes carbon-carbon bond formation from cross couplings of alcohols through C-H activation of sp3 carbon atoms in the presence of a Lewis acid.
References
Ruthenium complexes
Triphenylphosphine complexes
Chloro complexes
Ruthenium(II) compounds
Reagents for organic chemistry | Dichlorotris(triphenylphosphine)ruthenium(II) | Chemistry | 972 |
2,312,624 | https://en.wikipedia.org/wiki/Lucigenin | Lucigenin is an aromatic compound used in areas which include chemiluminescence. Its chemical name is bis-N-methylacridinium nitrate. It exhibits a bluish-green fluorescence.
It is used as a probe for superoxide anion in biology, for its chemiluminescent properties.
Synthesis
It may be prepared from acridone.
A synthetic route starting from toluene is also known.
References
Nitrates
Acridines
Quaternary ammonium compounds | Lucigenin | Chemistry | 100 |
4,740,982 | https://en.wikipedia.org/wiki/Density%20contrast | Density contrast is a parameter used in galaxy formation to indicate where there are local enhancements in matter density.
It is believed that after inflation, although the universe was mostly uniform, some regions were slightly denser than others, with density contrasts on the order of one part in a trillion. As the horizon distance expanded, the enclosed causally connected (i.e., gravitationally connected) masses increased until they reached the Jeans mass and began to collapse, allowing galaxies, galaxy clusters, superclusters, and filaments to form.
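The standard definition, implicit above, expresses the density contrast δ at a point as the fractional deviation of the local matter density ρ from the mean density ρ̄ of the universe:

```latex
\delta(\vec{x}) = \frac{\rho(\vec{x}) - \bar{\rho}}{\bar{\rho}}
```

A region with δ > 0 is overdense and tends to accrete further matter; the slightly denser post-inflation regions described above correspond to δ on the order of 10⁻¹².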
References
Physical cosmology
Inflation (cosmology) | Density contrast | Physics,Astronomy | 120 |
21,059,340 | https://en.wikipedia.org/wiki/Reproductive%20initials | Reproductive initials, or gonidial initials, are filaments below the cuticle surface of algae and fungi which give rise to the bulbs of spore-producing cells (in fungi, conidiophores).
References
Algal anatomy
Fungal morphology and anatomy | Reproductive initials | Biology | 55 |
14,487,095 | https://en.wikipedia.org/wiki/NGC%2088 | NGC 88 is a barred spiral galaxy with an inner ring structure, located about 160 million light-years from Earth in the constellation Phoenix. NGC 88 is interacting with the galaxies NGC 92, NGC 87 and NGC 89.
It is part of a family of galaxies called Robert's Quartet discovered by astronomer John Herschel in the 1830s.
References
NGC 88
External links
Phoenix (constellation)
0088
01370
Robert's Quartet
Barred spiral galaxies
18340930 | NGC 88 | Astronomy | 93 |
57,628,273 | https://en.wikipedia.org/wiki/ITOS-E | ITOS-E was a weather satellite operated by the National Oceanic and Atmospheric Administration (NOAA). It was part of a series of satellites called ITOS, or improved TIROS. ITOS-E was launched on July 16, 1973, from Vandenberg Air Force Base, California, aboard a Delta rocket, but failed to achieve orbit.
References
1973 in spaceflight
Spacecraft launched by Delta rockets
Weather satellites of the United States
Television Infrared Observation Satellites | ITOS-E | Astronomy | 94 |
6,317,791 | https://en.wikipedia.org/wiki/National%20Computer%20Camps | National Computer Camps are computer camps for children and teens founded in 1977 by Dr. Michael Zabinski. There are locations at Fairfield University in Fairfield, Connecticut, where Dr. Zabinski is a professor of physics and engineering; Oglethorpe University in Atlanta, Georgia; and Baldwin Wallace University in Cleveland, Ohio.
The focus of NCC is 2D and 3D video game design, computer programming, digital video production, web page design, A+ and Network+ certification, Android App programming, and software applications including animation, Flash and Photoshop. An optional sports program is also available. Each week, all levels of programming are offered in Basic, C++, Java, assembler, HTML, XML, and JavaScript. Campers may attend one or multi-week sessions.
NCC was the first summer camp for children founded with a primary focus on computing.
References
External links
National Computer Camps Official Website
1977 establishments in Connecticut
Computing and society
Summer camps in the United States | National Computer Camps | Technology | 198 |
44,610,362 | https://en.wikipedia.org/wiki/Chan%E2%80%93Lam%20coupling | The Chan–Lam coupling reaction, also known as the Chan–Evans–Lam coupling, is a cross-coupling reaction between an aryl boronic acid and an amine or an alcohol to form the corresponding secondary aryl amine or aryl ether, respectively. The Chan–Lam coupling is catalyzed by copper complexes and can be conducted in air at room temperature. The more popular Buchwald–Hartwig coupling relies instead on palladium.
History
Dominic Chan, David Evans, and Patrick Lam published their work nearly simultaneously. The mechanism however remained uncertain for many years. Later developments by others extended the scope to include using carboxylic acids, giving aryl-ester products.
Mechanism
Analysis of the mechanism is complicated by the lability of copper reagents and the multicomponent nature of the reaction. The reaction proceeds via the formation of copper-aryl complexes. A copper(III)-aryl-alkoxide or copper(III)-aryl-amide intermediate undergoes reductive elimination to give the aryl ether or aryl amine, respectively:
Ar-Cu(III)-NHR-L2 → Ar-NHR + Cu(I)L2
Ar-Cu(III)-OR-L2 → Ar-OR + Cu(I)L2
Example
An example of the Chan–Lam coupling to synthesize biologically active compounds is shown below:
Compound 1, a pyrrole, is coupled with aryl boronic acid, 2, to afford product 3, which is then carried forward to the target 4. The nitrile group of 2 does not poison the catalyst. Pyridine is the ligand used for the reaction. Although the reaction requires three days, it was carried out at room temperature in ambient air and resulted in a 93% yield.
Further reading
References
Carbon-carbon bond forming reactions
Name reactions | Chan–Lam coupling | Chemistry | 396 |
77,396,736 | https://en.wikipedia.org/wiki/Cross-cockpit%20collimated%20display | A cross-cockpit collimated display (CCCD) is a display system used in full flight simulators (FFS) to provide the crew with a high-fidelity out-the-window (OTW) view of the simulated environment around the aircraft. It is called cross-cockpit collimated because the light from a projected distant object is composed of rays that remain parallel or near-parallel across the cockpit (which typically seats two pilots side by side). Therefore, the projected object appears to both pilots to be realistically located in the distance, where the real object would be.
The technology was developed in the early 1980s by the British electronics company Rediffusion, and has since become the industry standard for FFS visual systems.
Design
In the real world, the rays of light emitted by a distant source are virtually parallel to each other when they reach two observers standing side-by-side (Figure 1). This also means that the same observer moving around from one position to the other, or simply moving their head, will see the distant object as stationary within their field of view, with its light always coming from the same direction.
Early digital visual systems for flight simulators consisted of one or more translucent screens, directly illuminated by one or more projectors from the side opposite the observer (Figure 2). From the observer's point of view, it is as if the source of light lay on the screen, much like what happens with a TV screen. Since the screen is only a few meters away from the observer, the light from the image of the distant object will appear to come from different directions as the observer moves around, giving the impression that the object is close by rather than far away.
In a typical CCCD, instead, the screen is replaced by a curved mirror, which reflects the image from the translucent back-projection screen, now placed above the cockpit (Figure 3). Thanks to the properties of parabolic mirrors, the rays of light from the screen are reflected by the mirror largely in the same direction; that is, they become collimated, and will give the observer the impression that the represented object is indeed distant.
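The collimating property can be checked numerically: for a parabolic mirror y = x²/(4f), every ray emitted from the focus is reflected parallel to the axis, regardless of where it strikes the mirror. A minimal sketch (the parabola and focal length here are illustrative, not taken from any real display):

```python
import math

def reflected_direction(x, f=1.0):
    """Direction of a ray from the focus (0, f) after reflecting off
    the parabola y = x^2 / (4 f) at horizontal position x."""
    y = x * x / (4 * f)
    # incident ray: from the focus to the mirror point (x, y)
    dx, dy = x - 0.0, y - f
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # unit surface normal, from the gradient of F(x, y) = y - x^2/(4f)
    nx, ny = -x / (2 * f), 1.0
    nn = math.hypot(nx, ny)
    nx, ny = nx / nn, ny / nn
    # standard mirror-reflection formula: r = d - 2 (d . n) n
    dot = dx * nx + dy * ny
    return dx - 2 * dot * nx, dy - 2 * dot * ny

# Every ray leaves parallel to the axis, wherever it strikes the mirror.
for x in (0.5, 1.0, 2.0, 3.0):
    rx, ry = reflected_direction(x)
    print(round(rx, 6), round(ry, 6))  # each direction is ~(0, 1)
```

The same geometry, run in reverse, is why an image placed at the focal surface of the simulator's mirror appears to lie at optical infinity for both pilots.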
History
The cross-cockpit collimated display was first developed by British firm Rediffusion in Crawley, West Sussex, under the direction of development manager Stuart Anderson. Launched in 1981, the system was patented under the name WIDE – Wide-angle Infinity Display Equipment. After the patent expired, it was adopted by full flight simulator manufacturers worldwide, and remains to this day the standard for FFS visual systems.
References
External links
Display technology | Cross-cockpit collimated display | Materials_science,Engineering | 528 |
5,870,005 | https://en.wikipedia.org/wiki/Line-in%20recording | Line-in recording is a term often used by manufacturers of sound equipment to refer to the capability of a device to record line level audio feeds. Microphone and instrument inputs, by contrast, are designed for devices which require further amplification to be at line-level.
The common 3.5 mm line-in connector carries the left channel on the tip and the right channel on the ring (the middle contact). The port is used to connect the device to other audio equipment. In practice, line-in is also commonly used to record instruments.
References
Audio engineering | Line-in recording | Engineering | 103 |
412,488 | https://en.wikipedia.org/wiki/Starman%20%28Ted%20Knight%29 | Starman (Theodore Henry "Ted" Knight) is a fictional superhero appearing in media published by DC Comics, primarily as a member of the Justice Society of America. Created by writer Gardner Fox and artist Jack Burnley, he first appeared in Adventure Comics #61 (April 1941).
Publication history
Invited by editor Whitney Ellsworth to create a new superhero character, Burnley drew the Starman costume as a variation of Superman's famous outfit, topped with a Buck Rogers-style helmet. Gardner Fox developed the character, and science-fiction writer Alfred Bester also contributed Starman scripts. Later in the run, Emil Gershwin wrote the stories, with art by Mort Meskin and George Roussos.
His first story in Adventure Comics #61 (April 1941) pitted Starman against the sinister Dr. Doog, who threatened the world with his invention, the Ultra-Dynamo.
He continued to appear in Adventure Comics through #102 (Feb 1946), and All-Star Comics #8 (Dec 1941) to #23 (Winter 1944).
Fictional character biography
As Starman, Ted wears a caped costume of red and green topped with a helmet with a fin on the top. He uses a gravity rod (later cosmic rod) of his own invention which allows him to fly and manipulate energy.
Initially intending the rod as a power source, Ted is convinced by his cousin, Sandra Knight / Phantom Lady, to use it to fight crime. In the original 1940s stories, Starman operated out of Gotham City, but this was retconned in the 1990s to Opal City.
Starman's first recurring villain is Mist, an elderly scientist who wields an invisibility potion. Starman's rogues gallery also includes Astra the Astrologist, Cuthbert Cain, Dr. Doog and the Secret Brotherhood of the Electron, and the Veil.
He is a frequent ally of the FBI and a member of the Justice Society of America for much of the 1940s and, like other "mystery men" of the time, serves in the wartime All-Star Squadron. In 1942, Ted enlists in the U.S. Army Air Force and briefly serves as a pilot during World War II.
At this time, the love of Ted's life is a woman named Doris Lee, who often chastises her layabout playboy boyfriend for his pretended laziness and hypochondria, unaware of Ted's costumed persona. Doris is murdered in the late 1940s, leading Ted to suffer a nervous breakdown.
In the 1990s-era Starman series, Ted returns to active duty, partially inspired by his time-traveling son Jack. Additionally, it is revealed that he had a brief affair with the first Black Canary (Dinah Drake) in the 1960s.
Like the rest of the Justice Society, Starman spends many years in retirement following the end of the Golden Age of heroes, but returns to help mentor the team's spiritual successors the Justice League of America. During this time, Ted Knight marries a woman named Adele Doris Drew and has two children, Jack and David. In Zero Hour: Crisis in Time!, Ted loses his slowed aging and is forced into retirement.
Ted later battles Doctor Phosphorus, whose radiation gives him terminal cancer. He is killed in battle with Mist while stopping his bomb from destroying the city.
Powers and abilities
Ted Knight has no natural, superhuman powers. His abilities stem from the use of his inventions, the gravity rod and the cosmic rod. These devices channel an unknown form of stellar radiation, which Ted is able to manipulate through the rod. As Starman, he possesses the ability to fly, project bursts of stellar energy, light, and heat, create force fields and simple energy constructs, and levitate objects. Extended use of the cosmic rod created a bond between it and Ted, allowing him to mentally summon the rod when separated from it.
Ted possesses a brilliant intellect, mastery of several sciences, and a gift for invention. In addition to the gravity and cosmic rods, Ted created the cosmic staff used by his son, Jack, and the cosmic converter belt worn by his JSA teammates, the Star-Spangled Kid and Stargirl.
Other versions
An alternate universe variant of Ted Knight / Starman appears in JLA: Age of Wonder. This version is an inventor and friend of Superman, Thomas Edison, and Nikola Tesla who built his cosmic rod with technology gleaned from the rocket ship that brought Superman to Earth.
An alternate universe variant of Ted Knight, codenamed "Star", appears in JSA: The Unholy Three as an intelligence agent working at Chernobyl.
An alternate universe variant of Ted Knight / Starman makes a cameo appearance in JLA: Another Nail.
Collected editions
Golden Age Starman Archives Vol. 1 (Starman stories from Adventure Comics #61-76)
Golden Age Starman Archives Vol. 2 (Starman stories from Adventure Comics #77-102)
In other media
Ted Knight / Starman appears in the Batman: The Brave and the Bold episode "Crisis: 22,300 Miles Above Earth!", voiced by Jeff Bennett.
Ted Knight / Starman appears in Justice League: Crisis on Infinite Earths.
Ted Knight / Starman received a figure in Mattel's DC Universe Classics line.
References
External links
Starman (1941) at Don Markstein's Toonopedia. Archived from the original on October 23, 2017.
JSA Fact File: Starman I
Earth-2 Starman Index
Characters created by Gardner Fox
Comics characters introduced in 1941
DC Comics male superheroes
Earth-Two
Fictional astronomers
Fictional characters with energy-manipulation abilities
Fictional characters with gravity abilities
Fictional inventors in comics
Fictional stick-fighters
Golden Age superheroes
Starman (DC Comics) | Starman (Ted Knight) | Astronomy | 1,168 |
7,233,280 | https://en.wikipedia.org/wiki/Semantic%20interoperability | Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems.
Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to an ontology, which provides the foundation and capability of machine interpretation, inference, and logic.
Syntactic interoperability (see below) is a prerequisite for semantic interoperability. Syntactic interoperability refers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (predating the internet and web technology) and uses the pipe character (|) as a data delimiter. The current internet standard for document markup is XML, which uses "< >" as data delimiters. Data delimiters convey no meaning to the data other than to structure it. Without a data dictionary to translate the contents of the delimiters, the data remains meaningless. While there have been many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none has been practical to implement. This has only perpetuated the ongoing "babelization" of data and the inability to exchange data with meaning.
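The point that delimiters carry structure but not meaning can be illustrated with a short sketch (the segment and field values below are invented, loosely HL7-shaped, and not a real message):

```python
import xml.etree.ElementTree as ET

# A hypothetical pipe-delimited segment: the "|" delimiter structures
# the data but says nothing about what any field means.
segment = "OBX|1|NM|8867-4|72|bpm"
fields = segment.split("|")
print(fields)  # ['OBX', '1', 'NM', '8867-4', '72', 'bpm']

# The same values wrapped in XML markup: different delimiters, same
# problem. Without an external data dictionary, neither form tells a
# receiving system what "72" denotes.
xml_doc = "<observation><code>8867-4</code><value unit='bpm'>72</value></observation>"
root = ET.fromstring(xml_doc)
print(root.find("value").text)  # '72'
```

Both parsers succeed syntactically; semantic interoperability only begins once each field or element is linked to a shared, controlled vocabulary.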
Since the introduction of the Semantic Web concept by Tim Berners-Lee in 1999, there has been growing interest and application of the W3C (World Wide Web Consortium) standards to provide web-scale semantic data exchange, federation, and inferencing capabilities.
Semantic as a function of syntactic interoperability
Syntactic interoperability, provided by, for instance, XML or the SQL standards, is a prerequisite to semantic interoperability. It involves a common data format and a common protocol to structure any data, so that the manner of processing the information can be interpreted from the structure. It also allows detection of syntactic errors, letting receiving systems request retransmission of any message that appears garbled or incomplete. No semantic communication is possible if the syntax is garbled or unable to represent the data. However, information represented in one syntax may in some cases be accurately translated into a different syntax. Where accurate translation of syntaxes is possible, systems using different syntaxes may also interoperate accurately. In some cases, the ability to accurately translate information among systems using different syntaxes may be limited to one direction, when the formalisms used have different levels of expressivity (ability to express information).
A single ontology containing representations of every term used in every application is generally considered impossible, because of the rapid creation of new terms or assignments of new meanings to old terms. However, though it is impossible to anticipate every concept that a user may wish to represent in a computer, there is the possibility of finding some finite set of "primitive" concept representations that can be combined to create any of the more specific concepts that users may need for any given set of applications or ontologies. Having a foundation ontology (also called upper ontology) that contains all those primitive elements would provide a sound basis for general semantic interoperability, and allow users to define any new terms they need by using the basic inventory of ontology elements, and still have those newly defined terms properly interpreted by any other computer system that can interpret the basic foundation ontology. Whether the number of such primitive concept representations is in fact finite, or will expand indefinitely, is a question under active investigation. If it is finite, then a stable foundation ontology suitable to support accurate and general semantic interoperability can evolve after some initial foundation ontology has been tested and used by a wide variety of users. At the present time, no foundation ontology has been adopted by a wide community, so such a stable foundation ontology is still in the future.
Words and meanings
One persistent misunderstanding that recurs in discussions of semantics is the confusion of words and meanings. The meanings of words change, sometimes rapidly, but a formal language such as that used in an ontology can encode the meanings (semantics) of concepts in a form that does not change. In order to determine the meaning of a particular word (or a term in a database, for example), it is necessary to label each fixed concept representation in an ontology with the word(s) or term(s) that may refer to that concept. When multiple words refer to the same (fixed) concept, this is called synonymy; when one word is used to refer to more than one concept, it is called ambiguity.
Ambiguity and synonymy are among the factors that make computer understanding of language very difficult. The use of words to refer to concepts (the meanings of the words used) is very sensitive to the context and the purpose of any use for many human-readable terms. The use of ontologies in supporting semantic interoperability is to provide a fixed set of concepts whose meanings and relations are stable and can be agreed to by users. The task of determining which terms in which contexts (each database is a different context) is then separated from the task of creating the ontology, and must be taken up by the designer of a database, or the designer of a form for data entry, or the developer of a program for language understanding. When the meaning of a word used in some interoperable context is changed, then to preserve interoperability it is necessary to change the pointer to the ontology element(s) that specifies the meaning of that word.
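A hypothetical term-to-concept table (all identifiers invented) makes the distinction concrete: synonymy is many terms pointing at one fixed concept, while ambiguity is one term pointing at several candidate concepts whose resolution depends on context:

```python
# Each database or form context would maintain its own mapping from
# human-readable terms to fixed concept identifiers in the ontology.
term_to_concepts = {
    "myocardial infarction": ["concept:HeartAttack"],
    "heart attack":          ["concept:HeartAttack"],   # synonym
    "cold":                  ["concept:LowTemperature",
                              "concept:CommonCold"],    # ambiguous
}

def is_ambiguous(term):
    """A term is ambiguous when it maps to more than one concept."""
    return len(term_to_concepts.get(term, [])) > 1

print(is_ambiguous("cold"))  # True
# Two synonyms resolve to the identical, stable concept representation:
print(term_to_concepts["heart attack"] == term_to_concepts["myocardial infarction"])  # True
```

When a word's meaning drifts in usage, only the pointer from the word to the ontology element changes; the concept representation itself stays fixed, which is what preserves interoperability.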
Knowledge representation requirements and languages
A knowledge representation language may be sufficiently expressive to describe nuances of meaning in well understood fields. There are at least five levels of complexity of these.
For general semi-structured data one may use a general purpose language such as XML.
Languages with the full power of first-order predicate logic may be required for many tasks.
Human languages are highly expressive, but are considered too ambiguous to allow the accurate interpretation desired, given the current level of human language technology.
Semantic interoperability lets healthcare systems leverage data in a standardized way as they break down and share information. For example, two systems can recognize terminology, medication symbols, and other nuances while exchanging data automatically, without human intervention.
Prior agreement not required
Semantic interoperability may be distinguished from other forms of interoperability by considering whether the information transferred has, in its communicated form, all of the meaning required for the receiving system to interpret it correctly, even when the algorithms used by the receiving system are unknown to the sending system. Consider sending one number:
If that number is intended to be the sum of money owed by one company to another, it implies some action or lack of action on the part of both those who send it and those who receive it.
It may be correctly interpreted if sent in response to a specific request, and received at the time and in the form expected. This correct interpretation does not depend only on the number itself, which could represent almost any of millions of types of quantitative measurement, rather it depends strictly on the circumstances of transmission. That is, the interpretation depends on both systems expecting that the algorithms in the other system use the number in exactly the same sense, and it depends further on the entire envelope of transmissions that preceded the actual transmission of the bare number.
By contrast, if the transmitting system does not know how the information will be used by other systems, it is necessary to have a shared agreement on how information with some specific meaning (out of many possible meanings) will appear in a communication. For a particular task, one solution is to standardize a form, such as a request for payment; that request would have to encode, in standardized fashion, all of the information needed to evaluate it, such as: the agent owing the money, the agent owed the money, the nature of the action giving rise to the debt, the agents, goods, services, and other participants in that action; the time of the action; the amount owed and currency in which the debt is reckoned; the time allowed for payment; the form of payment demanded; and other information. When two or more systems have agreed on how to interpret the information in such a request, they can achieve semantic interoperability for that specific type of transaction. For semantic interoperability generally, it is necessary to provide standardized ways to describe the meanings of many more things than just commercial transactions, and the number of concepts whose representation needs to be agreed upon are at a minimum several thousand.
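The standardized payment request described above can be sketched as a self-describing record (field names and values here are illustrative, not drawn from any real standard):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PaymentRequest:
    """Carries, in standardized form, everything needed to interpret
    the bare amount, so no transaction-specific prior agreement or
    knowledge of the receiver's algorithms is required."""
    debtor: str        # agent owing the money
    creditor: str      # agent owed the money
    cause: str         # nature of the action giving rise to the debt
    amount: float
    currency: str      # currency in which the debt is reckoned
    incurred_on: date  # time of the action
    due_days: int      # time allowed for payment
    payment_form: str  # form of payment demanded

req = PaymentRequest(
    debtor="Acme Ltd", creditor="Widgets Inc",
    cause="delivery of goods", amount=1250.00, currency="EUR",
    incurred_on=date(2024, 3, 1), due_days=30,
    payment_form="bank transfer",
)
print(req.currency, req.amount)  # EUR 1250.0
```

Two systems that agree on this schema achieve semantic interoperability for this one transaction type; general semantic interoperability requires the same treatment for thousands of concepts.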
Ontology research
How to achieve semantic interoperability for more than a few restricted scenarios is currently a matter of research and discussion. For the problem of General Semantic Interoperability, some form of foundation ontology ('upper ontology') is required that is sufficiently comprehensive to provide the definition of concepts for more specialized ontologies in multiple domains. Over the past decade, more than ten foundation ontologies have been developed, but none have as yet been adopted by a wide user base.
The need for a single comprehensive all-inclusive ontology to support Semantic Interoperability can be avoided by designing the common foundation ontology as a set of basic ("primitive") concepts that can be combined to create the logical descriptions of the meanings of terms used in local domain ontologies or local databases. This tactic is based on the principle that:
If:
(1) the meanings and usage of the primitive ontology elements in the foundation ontology are agreed on, and
(2) the ontology elements in the domain ontologies are constructed as logical combinations of the elements in the foundation ontology,
Then:
The intended meanings of the domain ontology elements can be computed automatically using an FOL (first-order logic) reasoner, by any system that accepts the meanings of the elements in the foundation ontology, and has both the foundation ontology and the logical specifications of the elements in the domain ontology.
Therefore:
Any system wishing to interoperate accurately with another system need transmit only the data to be communicated, plus any logical descriptions of terms used in that data that were created locally and are not already in the common foundation ontology.
This tactic then limits the need for prior agreement on meanings to only those ontology elements in the common Foundation Ontology (FO). Based on several considerations, this may require fewer than 10,000 elements (types and relations). However, for ease of understanding and use, additional ontology elements with more detail and specifics can help users find the exact location in the FO where specific domain concepts can be found or added.
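The composition principle above can be sketched in a few lines of code. This is an illustrative toy only: all primitive and domain names are hypothetical, and simple set-inclusion stands in for full first-order definitions and an FOL reasoner.

```python
# Illustrative sketch (hypothetical names): domain terms defined as
# combinations of agreed-upon "primitive" foundation-ontology elements.

# Foundation ontology: the primitives both systems have agreed on.
PRIMITIVES = {"Agent", "Action", "Artifact", "owes", "participates_in"}

# Each domain term is defined purely in terms of primitives, or of
# previously defined domain terms that themselves bottom out in primitives.
DOMAIN_DEFS = {
    "Debtor":  {"Agent", "owes"},
    "Payment": {"Action", "Artifact", "participates_in"},
    "PaymentRequest": {"Debtor", "Payment"},
}

def grounded(term, defs, primitives, seen=frozenset()):
    """True if `term` ultimately reduces to foundation-ontology primitives."""
    if term in primitives:
        return True
    if term in seen or term not in defs:
        return False  # unknown or circular: meaning cannot be computed
    return all(grounded(part, defs, primitives, seen | {term})
               for part in defs[term])

# A receiving system that shares only PRIMITIVES can still interpret
# "PaymentRequest" by expanding its definition:
print(grounded("PaymentRequest", DOMAIN_DEFS, PRIMITIVES))  # True
print(grounded("Invoice", DOMAIN_DEFS, PRIMITIVES))         # False (undefined)
```

In a real system the expansion would be done by a first-order-logic reasoner over logical axioms rather than by set membership, but the dependency structure is the same: only the primitives require prior agreement.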
In practice, together with the FO focused on representations of the primitive concepts, a set of domain extension ontologies to the FO, with elements specified using the FO elements, will likely also be used. Such pre-existing extensions will reduce the cost of creating domain ontologies by providing existing elements with the intended meaning, and will reduce the chance of error by using elements that have already been tested. Domain extension ontologies may be logically inconsistent with each other, and any such inconsistency needs to be detected when different domain extensions are used in a communication.
Whether use of such a single foundation ontology can itself be avoided by sophisticated mapping techniques among independently developed ontologies is also under investigation.
Importance
The practical significance of semantic interoperability has been measured by several studies that estimate the cost (in lost efficiency) due to lack of semantic interoperability. One study, focusing on the lost efficiency in the communication of healthcare information, estimated that US$77.8 billion per year could be saved by implementing an effective interoperability standard in that area. Other studies, of the construction industry and of the automobile manufacturing supply chain, estimate costs of over US$10 billion per year due to lack of semantic interoperability in those industries. In total these numbers can be extrapolated to indicate that well over US$100 billion per year is lost because of the lack of a widely used semantic interoperability standard in the US alone.
No study has yet estimated the potential cost savings from semantic interoperability standards in each individual policy field. For an overview of which policy fields stand to profit from semantic interoperability, see 'Interoperability' in general; such fields include eGovernment, health, security and many more. The EU also set up the Semantic Interoperability Centre Europe in June 2007.
Semantic Interoperability for Internet of Things
Digital transformation holds huge benefits for enabling organizations to be more efficient, more flexible, and more nimble in responding to changes in business and operating conditions. This involves the need to integrate heterogeneous data and services throughout organizations. Semantic interoperability addresses the need for shared understanding of the meaning and context.
To support this, a cross-organization expert group involving ISO/IEC JTC1, ETSI, oneM2M and W3C is collaborating with AIOTI on accelerating the adoption of semantic technologies in the IoT. The group has recently published two joint white papers on semantic interoperability, respectively named “Semantic IoT Solutions – A Developer Perspective” and “Towards semantic interoperability standards based on ontologies”. This follows on the success of the earlier white paper on “Semantic Interoperability for the Web of Things”.
Source:
https://www.w3.org/blog/2019/10/aioti-iso-iec-jtc1-etsi-onem2m-and-w3c-collaborate-on-two-joint-white-papers-on-semantic-interoperability-targeting-developers-and-standardization-engineers/
See also
Data integration
Business semantics management
Interoperability, a more general concept
Ontology alignment
Semantic computing
UDEF, Universal Data Element Framework
References
External links
the ONTACWG Glossary Other definitions of Semantic Interoperability
MMI Guide: Achieving Semantic Interoperability
Knowledge representation
Technical communication
Information science
Ontology (information science)
Computing terminology
Telecommunication theory
Interoperability | Semantic interoperability | Technology,Engineering | 3,011 |
14,053,322 | https://en.wikipedia.org/wiki/GPCR%20neuropeptide%20receptor | GPCR neuropeptide receptors are G-protein coupled receptors which bind various neuropeptides. Members include:
Neuropeptide B/W receptor
NPBWR1
NPBWR2
Neuropeptide FF receptor
NPFFR1
NPFFR2
Neuropeptide S receptor
NPSR1
Neuropeptide Y receptor
Y1 - NPY1R
Y2 - NPY2R
Y4 - PPYR1
Y5 - NPY5R
References
External links
G protein-coupled receptors | GPCR neuropeptide receptor | Chemistry | 112 |
76,934,010 | https://en.wikipedia.org/wiki/Sony%20Xperia%201%20VI | Xperia 1 VI is a smartphone product of the Sony Xperia 1 range. The phone was released on May 15, 2024, powered by the Snapdragon 8 Gen 3 chipset and Qualcomm Snapdragon X75 modem. The phone's display has a 19.5:9 aspect ratio, which is uncommon, and it has an upgraded camera.
The phone will not be released in the United States.
Hardware
There is a dedicated shutter button.
It is equipped with the following camera lenses:
48MP 24mm effective primary lens with AI Auto Focus.
12 MP 16mm ultrawide lens.
12 MP telephoto optical zoom lens with a focal length of 85mm to 170mm, allowing for long-distance shooting up to 7.1× zoom.
The Xperia 1 VI comes in Black, Platinum Silver, Khaki Green, and Scarlet Red. Its display uses a 6.50" 120Hz LTPO OLED panel with BRAVIA screen, which supports the HDR BT.2020 standard. Unlike its predecessor, the Xperia 1 V, it has an FHD+ resolution instead of 4K.
It has a two-day battery life with a 5000mAh lithium battery.
The phone has IP65/IP68-rated dust and water protection.
It has 12GB of RAM paired with 256GB or 512GB of storage.
Software
The phone runs Android 14 with Xperia UI and gaming-mode customizations. A unified camera app, replacing the previous two separate apps, has been added. The phone also has a specialized music recording app.
The Android 15 update was released on 20 November 2024; it introduced a dedicated Video Pro mode in the camera app, along with the 1 November 2024 security patch level.
References
External links
Sony smartphones
Mobile phones introduced in 2024
Mobile phones with multiple rear cameras
Flagship smartphones | Sony Xperia 1 VI | Technology | 380 |
13,565,654 | https://en.wikipedia.org/wiki/Bromazine | Bromazine, sold under the brand names Ambodryl, Ambrodil, and Deserol among others, also known as bromodiphenhydramine, is an antihistamine and anticholinergic medication of the ethanolamine class. It is an analogue of diphenhydramine with a bromine substitution on one of the phenyl rings.
Synthesis
Grignard reaction between phenylmagnesium bromide and para-bromobenzaldehyde [1122-91-4] (1) gives p-bromobenzhydrol [29334-16-5] (2). Halogenation with acetyl bromide in benzene solvent gives p-bromobenzhydryl bromide [18066-89-2] (3). Finally, etherification with deanol completes the synthesis of bromazine (4).
Side effects
Continuous and/or cumulative use of anticholinergic medications, including first-generation antihistamines, is associated with higher risk for cognitive decline and dementia in elderly people.
References
4-Bromophenyl compounds
Ethers
Dimethylamino compounds
H1 receptor antagonists | Bromazine | Chemistry | 252 |
50,540,599 | https://en.wikipedia.org/wiki/NGC%205343 | NGC 5343 is an elliptical galaxy in the constellation of Virgo. It was discovered on 5 May 1785 by William Herschel.
References
Notes
Elliptical galaxies
Virgo (constellation)
5343
49412 | NGC 5343 | Astronomy | 42 |
5,505,010 | https://en.wikipedia.org/wiki/Ammonium%20alum | Ammonium aluminium sulfate, also known as ammonium alum or just alum (though there are many different substances also called "alum"), is a white crystalline double sulfate usually encountered as the dodecahydrate, formula (NH4)Al(SO4)2·12H2O. It is used in small amounts in a variety of niche applications. The dodecahydrate occurs naturally as the rare mineral tschermigite.
Production and basic properties
Ammonium alum is made from aluminium hydroxide, sulfuric acid and ammonium sulfate. It forms a solid solution with potassium alum. Pyrolysis leaves alumina. Such alumina is used in the production of grinding powders and as precursors to synthetic gems.
Uses
Ammonium alum is not a major industrial chemical or a particularly useful laboratory reagent, but it is cheap and effective, which invites many niche applications. It is used in water purification, in vegetable glues, in porcelain cements, in deodorants, and in tanning, dyeing, and fireproofing textiles. The pH of the solution resulting from the topical application of ammonium alum with perspiration is typically in the slightly acid range, from 3 to 5.
Ammonium alum is a common ingredient in animal repellent sprays.
References
Aluminium compounds
Ammonium compounds
Sulfates
Double salts
Astringent flavors
E-number additives | Ammonium alum | Chemistry | 298 |
13,481,675 | https://en.wikipedia.org/wiki/Natural%20Language%20Semantics%20Markup%20Language | Natural Language Semantics Markup Language is a markup language for providing systems (like Voice Browsers) with semantic interpretations for a variety of inputs, including speech and natural language text input. Natural Language Semantics Markup Language is currently a World Wide Web Consortium Working Draft.
See also
VoiceXML
SRGS
Semantic Interpretation for Speech Recognition
External links
SRGS Specification (W3C Recommendation)
Natural Language Semantics Markup Language for the Speech Interface Framework (W3C Working Draft)
W3C's Voice Browser Working Group
World Wide Web Consortium standards
XML-based standards | Natural Language Semantics Markup Language | Technology | 114 |
58,422,881 | https://en.wikipedia.org/wiki/Anderson%E2%80%93Kadec%20theorem | In mathematics, in the areas of topology and functional analysis, the Anderson–Kadec theorem states that any two infinite-dimensional, separable Banach spaces, or, more generally, Fréchet spaces, are homeomorphic as topological spaces. The theorem was proved by Mikhail Kadec (1966) and Richard Davis Anderson.
Statement
Every infinite-dimensional, separable Fréchet space is homeomorphic to $\mathbb{R}^{\mathbb{N}},$ the Cartesian product of countably many copies of the real line.
Preliminaries
Kadec norm: A norm $\|\cdot\|$ on a normed linear space $X$ is called a Kadec norm with respect to a total subset $A \subseteq X^*$ of the dual space if for each sequence $x_n$ the following condition is satisfied:
If $x^*(x_n) \to x^*(x_0)$ for every $x^* \in A$ and $\|x_n\| \to \|x_0\|,$ then $\|x_n - x_0\| \to 0.$
Eidelheit theorem: A Fréchet space is either isomorphic to a Banach space, or has a quotient space isomorphic to $\mathbb{R}^{\mathbb{N}}.$
Kadec renorming theorem: Every separable Banach space $X$ admits a Kadec norm with respect to a countable total subset $A \subseteq X^*.$ The new norm is equivalent to the original norm of $X.$ The set $A$ can be taken to be any weak-star dense countable subset of the unit ball of $X^*.$
Sketch of the proof
In the argument below, $X$ denotes an infinite-dimensional separable Fréchet space and $\cong$ the relation of topological equivalence (existence of a homeomorphism).
A starting point of the proof of the Anderson–Kadec theorem is Kadec's proof that any infinite-dimensional separable Banach space is homeomorphic to $\mathbb{R}^{\mathbb{N}}.$
From the Eidelheit theorem, it is enough to consider a Fréchet space $X$ that is not isomorphic to a Banach space. In that case it has a quotient isomorphic to $\mathbb{R}^{\mathbb{N}}.$ A result of Bartle–Graves–Michael then proves that
$$X \cong Y \times \mathbb{R}^{\mathbb{N}}$$
for some Fréchet space $Y.$
On the other hand, $X$ is a closed subspace of a countable infinite product $B = \prod_{i=1}^{\infty} B_i$ of separable Banach spaces. The same result of Bartle–Graves–Michael applied to $B$ gives a homeomorphism
$$B \cong Z \times X$$
for some Fréchet space $Z.$ From Kadec's result, the countable product $B$ of infinite-dimensional separable Banach spaces is homeomorphic to $\mathbb{R}^{\mathbb{N}}.$
The proof of the Anderson–Kadec theorem consists of the sequence of equivalences
$$\mathbb{R}^{\mathbb{N}} \cong (Z \times X)^{\mathbb{N}} \cong Z^{\mathbb{N}} \times X^{\mathbb{N}} \cong Z^{\mathbb{N}} \times X^{\mathbb{N}} \times X \cong \mathbb{R}^{\mathbb{N}} \times X \cong Y \times \mathbb{R}^{\mathbb{N}} \times \mathbb{R}^{\mathbb{N}} \cong Y \times \mathbb{R}^{\mathbb{N}} \cong X.$$
See also
Notes
References
Topological vector spaces
Theorems in functional analysis
Theorems in topology | Anderson–Kadec theorem | Mathematics | 465 |
22,830,139 | https://en.wikipedia.org/wiki/Light%20non-aqueous%20phase%20liquid | A light non-aqueous phase liquid (LNAPL) is a groundwater contaminant that is not soluble in water and has a lower density than water, in contrast to a DNAPL which has a higher density than water. Once a LNAPL pollution infiltrates the ground, it will stop at the depth of the water table because of its positive buoyancy. Efforts to locate and remove LNAPLs are relatively less expensive and easier than for DNAPLs because LNAPLs float on top of the water table.
Examples of LNAPLs are benzene, toluene, xylene, and other hydrocarbons.
See also
DNAPL
LNAPL transmissivity
External links
LNAPL Definition from the USGS
Water pollution
Water chemistry
Hydrogeology | Light non-aqueous phase liquid | Chemistry,Environmental_science | 165 |
1,547,157 | https://en.wikipedia.org/wiki/Population%20size | In population genetics and population ecology, population size (usually denoted N) is a countable quantity representing the number of individual organisms in a population. Population size is directly associated with amount of genetic drift, and is the underlying cause of effects like population bottlenecks and the founder effect. Genetic drift is the major source of decrease of genetic diversity within populations which drives fixation and can potentially lead to speciation events.
Genetic drift
Of the five conditions required to maintain Hardy-Weinberg equilibrium, infinite population size will always be violated; this means that some degree of genetic drift is always occurring. Smaller population size leads to increased genetic drift; it has been hypothesized that this gives small populations an evolutionary advantage for acquisition of genome complexity. An alternate hypothesis posits that while genetic drift plays a larger role in small populations developing complexity, selection is the mechanism by which large populations develop complexity.
Population bottlenecks and founder effect
Population bottlenecks occur when population size reduces for a short period of time, decreasing the genetic diversity in the population.
The founder effect occurs when few individuals from a larger population establish a new population and also decreases the genetic diversity, and was originally outlined by Ernst Mayr. The founder effect is a unique case of genetic drift, as the smaller founding population has decreased genetic diversity that will move alleles within the population more rapidly towards fixation.
Modeling genetic drift
Genetic drift is typically modeled in lab environments using bacterial populations or digital simulation. In digital organisms, a generated population undergoes evolution based on varying parameters, including differential fitness, variation, and heredity set for individual organisms.
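A minimal Wright–Fisher-style simulation illustrates the drift such digital experiments model. This is a generic sketch, not the framework used in the studies cited below, and the parameter values are arbitrary.

```python
# Minimal Wright-Fisher sketch of genetic drift: each generation, every one
# of the n offspring samples its allele at random from the previous
# generation. Smaller populations drift to fixation or loss much faster.
import random

def generations_to_fixation(n, p0=0.5, seed=0):
    """Generations until an allele at initial frequency p0 fixes or is lost."""
    rng = random.Random(seed)
    count = round(n * p0)          # copies of allele A among n individuals
    gens = 0
    while 0 < count < n:           # stop at loss (0) or fixation (n)
        freq = count / n
        # binomial resampling: each offspring inherits A with probability freq
        count = sum(rng.random() < freq for _ in range(n))
        gens += 1
    return gens

# Average over a few seeds to smooth out run-to-run noise.
small = sum(generations_to_fixation(20, seed=s) for s in range(3)) / 3
large = sum(generations_to_fixation(500, seed=s) for s in range(3)) / 3
print(small < large)   # True: drift absorbs the small population far sooner
```

The qualitative behavior (absorption time scaling with population size) is what matters here; quantitative results depend on the model details.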
Rozen et al. use separate bacterial strains on two different mediums, one with simple nutrient components and one with nutrients noted to help populations of bacteria evolve more heterogeneity. A digital simulation based on the bacterial experiment design was also used, with assorted assignations of fitness and effective population sizes comparable to those of the bacteria used, based on both small and large population designations. Within both simple and complex environments, smaller populations demonstrated greater population variation than larger populations, which showed no significant fitness diversity. Smaller populations had increased fitness and adapted more rapidly in the complex environment, while large populations adapted faster than small populations in the simple environment. These data demonstrate that the consequences of increased variation within small populations are dependent on the environment: more challenging or complex environments allow variance present within small populations to confer greater advantage. Analysis demonstrates that smaller populations have more significant levels of fitness from heterogeneity within the group regardless of the complexity of the environment; adaptive responses are increased in more complex environments. Adaptations in asexual populations are also not limited by mutations, as genetic variation within these populations can drive adaptation. Although small populations tend to face more challenges because of limited access to widespread beneficial mutations, adaptation within these populations is less predictable and allows populations to be more plastic in their environmental responses. Fitness increase over time in small asexual populations is known to be strongly positively correlated with population size and mutation rate, and fixation probability of a beneficial mutation is inversely related to population size and mutation rate.
LaBar and Adami use digital haploid organisms to assess differing strategies for accumulating genomic complexity. This study demonstrated that both drift and selection are effective in small and large populations, respectively, but that this success is dependent on several factors. Data from the observation of insertion mutations in this digital system demonstrate that small populations evolve larger genome sizes from fixation of deleterious mutations and large populations evolve larger genome sizes from fixation of beneficial mutations. Small populations were noted to have an advantage in attaining full genomic complexity due to drift-driven phenotypic complexity. When deletion mutations were simulated, only the largest populations had any significant fitness advantage. These simulations demonstrate that smaller populations fix deleterious mutations by increased genetic drift. This advantage is likely limited by high rates of extinction. Larger populations evolve complexity through mutations that increase expression of particular genes; removal of deleterious alleles does not limit developing more complex genomes in the larger groups and a large number of insertion mutations that resulted in beneficial or non-functional elements within the genome were not required. When deletion mutations occur more frequently, the largest populations have an advantage that suggests larger populations generally have an evolutionary advantage for development of new traits.
Critical Mutation Rate
Critical mutation rate, or error threshold, limits the number of mutations that can exist within a self-replicating molecule before genetic information is destroyed in later generations.
Contrary to the findings of previous studies, critical mutation rate has been noted to be dependent on population size in both haploid and diploid populations. When populations have fewer than 100 individuals, critical mutation rate can be exceeded, but will lead to loss of genetic material which results in further population decline and likelihood of extinction. This ‘speed limit’ is common within small, adapted asexual populations and is independent of mutation rate.
Effective population size (Ne)
The effective population size (Ne) is defined as "the number of breeding individuals in an idealized population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration." Ne is usually less than N (the absolute population size) and this has important applications in conservation genetics.
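One standard approximation, not derived in this article, is that for a population whose census size fluctuates across generations, Ne is roughly the harmonic mean of the per-generation sizes. Because the harmonic mean is dominated by its smallest terms, a single bottleneck generation depresses Ne far below the average census size.

```python
# Approximate Ne across generations as the harmonic mean of census sizes.
# The harmonic mean is dominated by the smallest values, which is one reason
# population bottlenecks depress Ne well below the average census size N.
def harmonic_mean_ne(sizes):
    return len(sizes) / sum(1 / n for n in sizes)

# A bottleneck generation of 10 individuals drags Ne far below the
# arithmetic mean census size of 670:
sizes = [1000, 10, 1000]
print(round(harmonic_mean_ne(sizes), 1))   # 29.4
```

This is why the bottleneck and founder effects described above reduce genetic diversity so sharply even when the population later recovers its census size.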
Overpopulation may indicate any case in which the population of any species of animal may exceed the carrying capacity of its ecological niche.
See also
Carrying capacity
Holocene extinction event
Lists of organisms by population
Overpopulation
Population growth rate
References
Ecological metrics
Population genetics
Countable quantities | Population size | Physics,Mathematics | 1,128 |
44,442,903 | https://en.wikipedia.org/wiki/Identity%20interrogation | Identity interrogation is a method of authentication or identity proofing that involves posing one or more knowledge-based authentication questions to an individual. Identity interrogation questions such as "What is your mother’s maiden name?" or "What are the last four digits of your social security number?" It is a method businesses use to prevent identity theft or impersonation of customers.
Identity interrogation is primarily employed during remote, not in-person interactions, such as with a teller at a bank. Many interactions that require user authentication over the Internet or the telephone employ Identity interrogation as a substitute for stronger authentication methods such as physical ownership authentication (i.e. presenting a driver's license or a bankcard), or biometrics (i.e. fingerprint or facial recognition) available mainly during in-person interactions. Identity interrogation is used to assist with risk management, account security, and legal and regulatory compliance during remote interactions. In addition, the technique was developed to assist in the prevention of identity fraud, or the illegal use of another person's identity to commit fraud or other criminal activities.
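On the storage side, a service posing such questions can avoid keeping the expected answers in plain text by storing salted hashes and comparing candidate answers in constant time. The sketch below is hypothetical (the normalization, iteration count, and function names are illustrative choices, not taken from any real system).

```python
# Hypothetical sketch: store only a salted hash of the enrolled answer, and
# verify later answers with a constant-time comparison.
import hashlib
import hmac
import os

def enroll(answer: str) -> tuple[bytes, bytes]:
    """Normalize the answer, then derive a salted hash for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 100_000)
    return salt, digest

def verify(answer: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare without leaking timing information."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("Smith")
print(verify("  smith ", salt, digest))   # True  (normalized before hashing)
print(verify("Jones", salt, digest))      # False
```

Note that hashing does not fix the underlying weakness of knowledge-based authentication: answers like a mother's maiden name are often discoverable by an attacker regardless of how they are stored.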
Identity interrogation methods are most commonly used by governments, organizations and companies such as banks or financial intermediaries, credit card companies, internet providers, telecommunications companies, insurance providers and others.
See also
TRUSTID
Notes
Computer network security
Identity management | Identity interrogation | Engineering | 269 |
4,732,887 | https://en.wikipedia.org/wiki/Test%20management | Test management most commonly refers to the activity of managing a testing process. A test management tool is software used to manage tests (automated or manual) that have been previously specified by a test procedure. It is often associated with automation software. Test management tools often include requirement and/or specification management modules that allow automatic generation of the requirement test matrix (RTM), which is one of the main metrics to indicate functional coverage of a system under test (SUT).
Creating tests definitions in a database
Test definition includes: test plan, association with product requirements and specifications. Optionally, relationships can be set between tests so that precedences can be established.
E.g. if test A is parent of test B and if test A is failing, then it may be useless to perform test B.
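The precedence rule can be sketched as follows; the data structures are hypothetical and do not reflect any specific test-management tool's API.

```python
# Sketch of the parent/child precedence rule: if a parent test fails,
# dependent tests are skipped rather than run.
def run_campaign(tests, parents):
    """tests: {name: callable returning bool}; parents: {child: parent}."""
    results = {}
    for name in tests:                   # assumes parents are listed first
        parent = parents.get(name)
        if parent is not None and results.get(parent) != "PASS":
            results[name] = "SKIPPED"    # useless to perform the child test
            continue
        results[name] = "PASS" if tests[name]() else "FAIL"
    return results

results = run_campaign(
    {"A": lambda: False, "B": lambda: True, "C": lambda: True},
    parents={"B": "A"},
)
print(results)   # {'A': 'FAIL', 'B': 'SKIPPED', 'C': 'PASS'}
```

Skipping dependent tests keeps campaign reports focused on root causes rather than cascading failures.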
Tests should also be associated with priorities.
Every change on a test must be versioned so that the QA team has a comprehensive view of the history of the test.
Preparing test campaigns
This includes building some bundles of test cases and executing them (or scheduling their execution).
Execution can be either manual or automatic.
Manual execution
The user will have to perform all the test steps manually and inform the system of the result.
Some test management tools include a framework to interface the user with the test plan to facilitate this task. There are several ways to run tests; the simplest is to run a single test case. The test case can be associated with other test artifacts such as test plans, test scripts, test environments, test case execution records, and test suites.
Automatic execution
There are numerous ways of implementing automated tests.
Automatic execution requires the test management tool to be compatible with the tests themselves.
To do so, test management tools may propose proprietary automation frameworks or APIs to interface with third-party or proprietary automated tests.
Generating reports and metrics
The ultimate goal of test management tools is to deliver sensitive metrics that will help the QA manager in evaluating the quality of the system under test before releasing.
Metrics are generally presented as graphics and tables indicating success rates, progression/regression and much other sensitive data.
Managing bugs
Test management tools can also integrate bug tracking features, or at least interface with well-known dedicated bug tracking solutions (such as Bugzilla or Mantis), to efficiently link a test failure with a bug.
Planning test activities
Test management tools may also integrate (or interface with third-party) project management functionalities to help the QA manager planning activities ahead of time.
Test management tools
There are several commercial and open source test management tools available in the market today. Most test management tools are web-served applications that need to be installed in-house, while others can be accessed as software as a service.
See also
Test management tools
Software testing
Test automation management tools
References
External links
Open Source Test Management Tools
The 7 Complexities of Test Management
Software testing | Test management | Engineering | 583 |
30,948,931 | https://en.wikipedia.org/wiki/Leccinellum%20pseudoscabrum | Leccinellum pseudoscabrum is an edible species of fungus in the bolete family.
External links
pseudoscabrum
Fungus species | Leccinellum pseudoscabrum | Biology | 30 |
64,425,995 | https://en.wikipedia.org/wiki/1%2C2-Dioxolane | 1,2-Dioxolane is a chemical compound with formula C3H6O2, consisting of a ring of three carbon atoms and two oxygen atoms in adjacent positions. Its condensed structural formula is .
The compound is an organic peroxide, specifically an endoperoxide, and a structural isomer of the much more common 1,3-dioxolane, which is often called simply "dioxolane".
Synthesis
Synthesis methods for the 1,2-dioxolane core structure include oxidation of cyclopropane derivatives with singlet oxygen or molecular oxygen with a suitable catalyst, the use of autooxidation, nucleophilic displacement with hydrogen peroxide, treatment with mercury(II) nitrate, photolysis of extended π-systems, reaction of a bis-silylperoxide and an alkene, or reaction with a 2-perhydroxy 4-alkene with diethylamine or mercury(II) acetate.
Occurrence
Some derivatives occur naturally, for example in Calophyllum dispar and from the seeds of the mamey (Mammea americana). Plakinic acid A (3,5-peroxy 3Z,5Z,7,11-tetramethyl 13-phenyl-8E,12E-tridecadienoic acid) and similar compounds were isolated from sponges of the Plakortis genus. Nardosinone is a sesquiterpene derivative with a 1,2-dioxolane element isolated from the plant Adenosma caeruleum.
Uses
Synthetic and natural dioxolane derivatives have been used or considered as antimalarial drugs. Plakinic acid A and related compounds showed antifungal action.
See also
1,2-Dioxane
1,2-Dioxetane
1,2-Dithiolane
References
Oxygen heterocycles
Organic peroxides | 1,2-Dioxolane | Chemistry | 404 |
15,062,529 | https://en.wikipedia.org/wiki/HOXC5 | Homeobox protein Hox-C5 is a protein that in humans is encoded by the HOXC5 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, which are located on different chromosomes and consist of 9 to 11 genes arranged in tandem. This gene, HOXC5, is one of several homeobox HOXC genes located in a cluster on chromosome 12. Three genes, HOXC5, HOXC4 and HOXC6, share a 5' non-coding exon. Transcripts may include the shared exon spliced to the gene-specific exons, or they may include only the gene-specific exons. Two alternatively spliced variants have been described for HOXC5. The transcript variant which includes the shared exon apparently doesn't encode a protein. The protein-coding transcript variant contains gene-specific exons only.
References
Further reading
External links
Transcription factors | HOXC5 | Chemistry,Biology | 252 |
18,355,895 | https://en.wikipedia.org/wiki/Davenport%E2%80%93Schmidt%20theorem | In mathematics, specifically the area of Diophantine approximation, the Davenport–Schmidt theorem tells us how well a certain kind of real number can be approximated by another kind. Specifically it tells us that we can get a good approximation to irrational numbers that are not quadratic by using either quadratic irrationals or simply rational numbers. It is named after Harold Davenport and Wolfgang M. Schmidt.
Statement
Given a number α which is either rational or a quadratic irrational, we can find unique integers x, y, and z such that x, y, and z are not all zero, the first non-zero one among them is positive, they are relatively prime, and we have
$$x\alpha^2 + y\alpha + z = 0.$$
If α is a quadratic irrational we can take x, y, and z to be the coefficients of its minimal polynomial. If α is rational we will have x = 0. With these integers uniquely determined for each such α we can define the height of α to be
$$H(\alpha) = \max\{|x|, |y|, |z|\}.$$
The theorem then says that for any real number ξ which is neither rational nor a quadratic irrational, we can find infinitely many real numbers α which are rational or quadratic irrationals and which satisfy
$$|\xi - \alpha| < C H(\alpha)^{-3},$$
where C is any real number satisfying C > 160/9.
While the theorem is related to Roth's theorem, its real use lies in the fact that it is effective, in the sense that the constant C can be worked out for any given ξ.
Notes
References
Wolfgang M. Schmidt. Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections])
Wolfgang M. Schmidt.Diophantine approximations and Diophantine equations, Lecture Notes in Mathematics, Springer Verlag 2000
External links
Diophantine approximation
Theorems in number theory | Davenport–Schmidt theorem | Mathematics | 353 |
2,705,942 | https://en.wikipedia.org/wiki/F-ATPase | F-ATPase, also known as F-Type ATPase, is an ATPase/synthase found in bacterial plasma membranes, in mitochondrial inner membranes (in oxidative phosphorylation, where it is known as Complex V), and in chloroplast thylakoid membranes. It uses a proton gradient to drive ATP synthesis by allowing the passive flux of protons across the membrane down their electrochemical gradient and using the energy released by the transport reaction to release newly formed ATP from the active site of F-ATPase. Together with V-ATPases and A-ATPases, F-ATPases belong to superfamily of related rotary ATPases.
F-ATPase consists of two domains:
the Fo domain, which is integral in the membrane and is composed of 3 different types of integral proteins classified as a, b and c.
the F1, which is peripheral (on the side of the membrane that the protons are moving into). F1 is composed of 5 polypeptide units α3β3γδε that bind to the surface of the Fo domain.
F-ATPases usually work as ATP synthases rather than ATPases in cellular environments. That is to say, they usually make ATP from the proton gradient instead of working in the other direction, as V-ATPases typically do. They do occasionally run in reverse as ATPases in bacteria.
Structure
Fo-F1 particles are mainly formed of polypeptides. The F1-particle contains 5 types of polypeptides, with the composition-ratio—3α:3β:1δ:1γ:1ε. The Fo has the 1a:2b:12c composition. Together they form a rotary motor. As the protons bind to the subunits of the Fo domains, they cause parts of it to rotate. This rotation is propagated by a 'camshaft' to the F1 domain. ADP and Pi (inorganic phosphate) bind spontaneously to the three β subunits of the F1 domain, so that every time it goes through a 120° rotation ATP is released (rotational catalysis).
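The rotational catalysis described above fixes a simple proton-to-ATP stoichiometry: with the 1a:2b:12c composition, one full rotation of the c-ring translocates twelve protons and releases three ATP (one per 120° step). A minimal sketch of this arithmetic; the c-ring size of 12 is taken from the composition given here, not as a universal value, since it varies between organisms:

```python
# Illustrative proton-to-ATP stoichiometry for a rotary F-ATP synthase,
# assuming the 1a:2b:12c composition given above: one proton is translocated
# per c subunit per full rotation, and one ATP is released per 120-degree
# step at each of the three beta subunits.
C_SUBUNITS = 12          # protons translocated per full 360-degree rotation
ATP_PER_ROTATION = 3     # one ATP per 120-degree step

def protons_per_atp(c_subunits=C_SUBUNITS, atp_per_rotation=ATP_PER_ROTATION):
    """Protons required per ATP synthesized, for a given c-ring size."""
    return c_subunits / atp_per_rotation

print(protons_per_atp())  # 4.0 protons per ATP for a c12 ring
```

With a c12 ring this gives four protons per ATP; organisms whose enzymes carry larger c-rings pay proportionally more protons per ATP.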
The Fo domains sits within the membrane, spanning the phospholipid bilayer, while the F1 domain extends into the cytosol of the cell to facilitate the use of newly synthesized ATP.
The structure of bovine mitochondrial F1-ATPase complexed with the inhibitor protein IF1 is commonly cited in the relevant literature. Examples of the enzyme's use may be found in many fundamental cellular metabolic activities, such as the regulation of acidosis and alkalosis and respiratory gas exchange.
The o in the Fo stands for oligomycin, because oligomycin is able to inhibit its function.
N-ATPase
N-ATPases are a group of F-type ATPases without a delta/OSCP subunit, found in bacteria and a group of archaea via horizontal gene transfer. They transport sodium ions instead of protons and tend to hydrolyze ATP. They form a distinct group that is further apart from usual F-ATPases than A-ATPases are from V-ATPases.
References
Membrane proteins | F-ATPase | Biology | 651 |
50,788,903 | https://en.wikipedia.org/wiki/HAL%20HTFE-25 | The HAL HTFE-25 ("Hindustan Turbo Fan Engine") is a 25 kN turbofan engine under development by Hindustan Aeronautics Limited (HAL). The engine can be used in single-engine trainer jets, business jets and UAVs weighing up to 5 tonnes, and in a twin-engine configuration for aircraft weighing up to 9 tonnes. Based on the technical feasibility, the market potential for the engine is 200-250 units.
Two engines have been produced and had completed 339 runs as of 2019, of which 96 runs were conducted in 2018-19. The engine has successfully completed a cold-starting test at 14 °C with spark igniters and has also achieved 100 per cent maximum speed with and without IGV modulation.
The company has also initiated development work on integrating afterburner technology into the engine. In March 2019, the first test with a "basic afterburner" configuration was conducted on the engine. Cold-weather trials and hot-weather high-altitude trials were completed at Leh. A full technology-demonstrator engine has been built and its first run completed successfully, with acceleration trials completed up to 55% of maximum speed.
Specifications
See also
References
Gas turbines | HAL HTFE-25 | Technology | 231 |
74,935,727 | https://en.wikipedia.org/wiki/Taiwan%20sleeper%20shark | The Taiwan sleeper shark (Somniosus cheni) is a small sleeper shark from the western North Pacific Ocean around Taiwan. It is only known from a single adult specimen, a pregnant female with 33 embryos, which was caught in 2017.
References
Fish described in 2020
Somniosus
Species known from a single specimen | Taiwan sleeper shark | Biology | 67 |
15,057,209 | https://en.wikipedia.org/wiki/Glycerophosphorylcholine | L-α-Glycerophosphorylcholine (alpha-GPC, choline alfoscerate, sn-glycero-3-phosphocholine) is a natural choline compound found in the brain. It is also a parasympathomimetic acetylcholine precursor which has been investigated for its potential for the treatment of Alzheimer's disease and other dementias.
Alpha-GPC rapidly delivers choline to the brain across the blood–brain barrier and is a biosynthetic precursor of acetylcholine. It is a non-prescription drug in most countries. The FDA determined that intake of no more than 196.2 mg/person/day is considered generally recognized as safe (GRAS).
Production
Industrially, alpha-GPC is produced by the chemical or enzymatic deacylation of phosphatidylcholine enriched soya phospholipids followed by chromatographic purification. Alpha-GPC may also be derived in small amounts from highly purified soy lecithin as well as from purified sunflower lecithin.
Safety
Alpha-GPC metabolizes to trimethylamine n-oxide in the gastrointestinal tract, which has implications for cardiovascular health. In one study, risk of stroke over a ten-year period was increased by about 40% in users of alpha-GPC.
References
External links
Alpha-GPC on PsychonautWiki
Cholinergics
Dietary supplements
Neurotransmitter precursors
Nutrition
Vitamins
Neuropharmacology
Glycerol esters | Glycerophosphorylcholine | Chemistry | 340 |
2,217,599 | https://en.wikipedia.org/wiki/Circular%20symmetry | In geometry, circular symmetry is a type of continuous symmetry for a planar object that can be rotated by any arbitrary angle and map onto itself.
Rotational circular symmetry is isomorphic with the circle group in the complex plane, or the special orthogonal group SO(2), and unitary group U(1). Reflective circular symmetry is isomorphic with the orthogonal group O(2).
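The isomorphism between SO(2) and the unitary group U(1) can be checked numerically: rotating a point with a 2×2 rotation matrix agrees with multiplying the corresponding complex number x + iy by the unit complex number e^(iθ). A small illustrative sketch:

```python
import cmath
import math

def rotate_matrix(x, y, theta):
    """Rotate (x, y) by angle theta using the SO(2) rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

def rotate_complex(x, y, theta):
    """Rotate by multiplying x + iy with the unit complex number e^(i*theta), i.e. U(1)."""
    z = complex(x, y) * cmath.exp(1j * theta)
    return (z.real, z.imag)

# The two group actions agree for any point and any angle:
p1 = rotate_matrix(1.0, 2.0, 0.7)
p2 = rotate_complex(1.0, 2.0, 0.7)
print(max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1])) < 1e-12)  # True
```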
Two dimensions
A 2-dimensional object with circular symmetry would consist of concentric circles and annular domains.
Rotational circular symmetry has every cyclic symmetry Zn as a subgroup symmetry. Reflective circular symmetry has every dihedral symmetry Dihn as a subgroup symmetry.
Three dimensions
In three dimensions, a surface or solid of revolution has circular symmetry around an axis, also called cylindrical symmetry or axial symmetry. An example is a right circular cone. Circular symmetry in three dimensions has all pyramidal symmetries, Cnv, as subgroups.
A double-cone, bicone, cylinder, toroid and spheroid have circular symmetry, and in addition have a bilateral symmetry perpendicular to the axis of the system (or half-cylindrical symmetry). These reflective circular symmetries have all discrete prismatic symmetries, Dnh, as subgroups.
Four dimensions
In four dimensions, an object can have circular symmetry, on two orthogonal axis planes, or duocylindrical symmetry. For example, the duocylinder and Clifford torus have circular symmetry in two orthogonal axes. A spherinder has spherical symmetry in one 3-space, and circular symmetry in the orthogonal direction.
Spherical symmetry
An analogous 3-dimensional equivalent term is spherical symmetry.
Rotational spherical symmetry is isomorphic with the rotation group SO(3), and can be parametrized by the Davenport chained rotations pitch, yaw, and roll. Rotational spherical symmetry has all the discrete chiral 3D point groups as subgroups. Reflectional spherical symmetry is isomorphic with the orthogonal group O(3) and has the 3-dimensional discrete point groups as subgroups.
A scalar field has spherical symmetry if it depends on the distance to the origin only, such as the potential of a central force. A vector field has spherical symmetry if it is in radially inward or outward direction with a magnitude and orientation (inward/outward) depending on the distance to the origin only, such as a central force.
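The definition of a spherically symmetric scalar field can be verified directly: a field that depends only on the distance to the origin is unchanged by any rotation. A minimal numerical check using a 1/r central potential and a yaw rotation about the z-axis (one of the chained rotations mentioned above):

```python
import math

def central_potential(x, y, z):
    """Scalar field depending only on distance to the origin (here 1/r)."""
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def rotate_about_z(x, y, z, theta):
    """A yaw: rotation by theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# Spherical symmetry: the field value is unchanged by the rotation.
p = (1.0, 2.0, 3.0)
before = central_potential(*p)
after = central_potential(*rotate_about_z(*p, theta=1.234))
print(abs(before - after) < 1e-12)  # True
```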
See also
Isotropy
Rotational symmetry
Particle in a spherically symmetric potential
Gauss's theorem
References
Symmetry
Rotation | Circular symmetry | Physics,Mathematics | 499 |
602,905 | https://en.wikipedia.org/wiki/69230%20Hermes | 69230 Hermes is a sub-kilometer sized asteroid and binary system on an eccentric orbit, classified as a potentially hazardous asteroid and near-Earth object of the Apollo group, that passed Earth at approximately twice the distance of the Moon on 30 October 1937. The asteroid was named after Hermes from Greek mythology. It is noted for having been the last remaining named lost asteroid, rediscovered in 2003. The S-type asteroid has a rotation period of 13.9 hours. Its synchronous companion was discovered in 2003. The primary and secondary are similar in size; they measure approximately and in diameter, respectively.
Discovery
Hermes was discovered by German astronomer Karl Reinmuth in images taken at Heidelberg Observatory on 28 October 1937. Only four days of observations could be made before it became too faint to be seen in the telescopes of the day. This was not enough to calculate an orbit, and Hermes became a lost asteroid. It thus did not receive a number, but Reinmuth nevertheless named it after the Greek god Hermes. It was the third unnumbered but named asteroid, having only the provisional designation . The two other long-lost named asteroids were (1862) Apollo, discovered in 1932 and numbered in 1973, and (2101) Adonis, discovered in 1936 and numbered in 1977.
On 15 October 2003, Brian A. Skiff of the LONEOS project made an asteroid observation that, when the orbit was calculated backwards in time (by Timothy B. Spahr, Steven Chesley and Paul Chodas), turned out to be a rediscovery of Hermes. It has been assigned sequential number 69230. Additional precovery observations were published by the Minor Planet Center, the earliest being found in images taken serendipitously by the MPG/ESO 2.2-m La Silla telescope on 16 September 2000.
Naming
This minor planet was named after the Greek god Hermes, who is the messenger of the gods and son of Zeus and Maia (see also and ). Recovered and numbered in late 2003, Hermes was originally named by the Astronomical Calculation Institute as early as 1937. The official naming citation was published by the Minor Planet Center on 9 November 2003 ().
Orbit and classification
Hermes is an Apollo asteroid, a subgroup of near-Earth asteroids that cross the orbit of Earth. It orbits the Sun at a distance of 0.6–2.7 AU once every 2 years and 2 months (778 days; semi-major axis of 1.66 AU). Its orbit has an eccentricity of 0.62 and an inclination of 6° with respect to the ecliptic. Due to its eccentricity, Hermes is also a Mars- and Venus-crosser. Frequent close approaches to both Earth and Venus make it unusually challenging to forecast its orbit more than a century in advance, though there is no impact risk within that timeframe.
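The quoted period and semi-major axis can be cross-checked with Kepler's third law, P (years) ≈ a^(3/2) for a in AU, a two-body, Sun-dominated approximation that ignores the perturbations mentioned above:

```python
# Kepler's third law consistency check for the orbital elements above:
# P [years] = a**1.5 with a in AU (perturbations ignored).
a_au = 1.66                      # semi-major axis quoted in the text
period_years = a_au ** 1.5
period_days = period_years * 365.25
print(round(period_days))        # about 781, close to the quoted 778 days
```

The small residual is expected, since the published elements come from a full perturbed orbit fit rather than the idealized two-body formula.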
Close approaches
The asteroid has an Earth minimum orbital intersection distance of which translates into 1.6 LD. On 30 October 1937, Hermes passed from Earth, and on 26 April 1942, from Earth. In retrospect it turned out that Hermes came even closer to the Earth in 1942 than in 1937, within 1.7 lunar distances; the second pass was unobserved at the height of the Second World War. For decades, Hermes was known to have made the closest known approach of an asteroid to the Earth. Not until 1989 was a closer approach (by 4581 Asclepius) observed. At closest approach, Hermes was moving 5° per hour across the sky and reached 8th magnitude.
Physical characteristics
Spectral type
Hermes is a stony S-type asteroid, as reported by Andy Rivkin and Richard Binzel. It has been characterized as a Sq-subtype using the SpeX instrument at NASA Infrared Telescope Facility. Sq-types transition to the Q-type asteroid.
Lightcurves
Three rotational lightcurves of Hermes were obtained from photometric observations in October 2003. Lightcurve analysis gave a well-defined rotation period between 13.892 and 13.894 hours with a brightness variation between 0.06 and 0.08 magnitude, which indicates that the body has a nearly spherical shape ().
Binary system
Radar observations led by Jean-Luc Margot at Arecibo Observatory and Goldstone in October and November 2003 showed Hermes to be a binary asteroid. The primary and secondary components have nearly identical radii of and , respectively, and their orbital separation is only 1,200 metres, much smaller than the Hill radius of 35 km.
The two components are in double synchronous rotation (similar to the trans-Neptunian system Pluto and Charon). Hermes is one of only four systems of that kind known in the near-Earth object population. The other three are , , and .
In popular culture
In the 1978 novel The Hermes Fall by John Baxter, the asteroid endangers the Earth in 1980. It is not made explicit whether the asteroid of the novel is 69230 Hermes.
Notes
References
External links
Arecibo 2003 press release
Hermes radar results at UCLA
Asteroids with Satellites, Robert Johnston, johnstonsarchive.net
Dictionary of Minor Planet Names, Google books
069230
Discoveries by Karl Wilhelm Reinmuth
Named minor planets
069230
069230
069230
19371028
Recovered astronomical objects | 69230 Hermes | Astronomy | 1,090 |
15,995,191 | https://en.wikipedia.org/wiki/Nanoradio | A nanoradio (also called a carbon nanotube radio) is a nanoscale device that acts as a radio transmitter and receiver by using carbon nanotubes. One of the first nanoradios was constructed in 2007 by researchers under Alex Zettl at the University of California, Berkeley, who successfully transmitted an audio signal. Due to their small size, nanoradios have several possible applications, such as providing radio function in the bloodstream.
History
The first observation of a nanoradio can be credited to the Japanese physicist Sumio Iijima, who in 1991 saw "a luminous discharge of electricity" coming from a carbon nanotube on a graphite electrode. On October 31, 2007, a team of researchers under Alex Zettl at the University of California, Berkeley created one of the first nanoradios. Their apparatus consisted of a multiwalled nanotube placed on a silicon electrode and connected to a counter electrode through a wire and a DC battery, all held in a vacuum of about 10−7 Torr. They then placed the apparatus into a high-resolution transmission electron microscope to document the movement of the nanotube. They observed the nanoradio vibrating as it received and played the song "Layla" by Eric Clapton. After some minor adjustments, the team was able to transmit and receive signals across a couple of meters of the laboratory; however, the initial audio received by the radio was scratchy, which Zettl attributed to the lack of a better vacuum.
Properties
The small size, roughly 10 nanometers wide and hundreds of nanometers long, and composition of nanoradios provide several distinct properties. The small size enables electrons to pass through without much friction, making nanoradios efficient conductors. Nanoradios can also come in different sizes; they can be double-walled, triple-walled or multi-walled. Aside from the different sizes, nanoradios can also take different shapes, such as bent, straight or toroidal. Common to all nanoradios is their relative strength, which can be attributed to the strength of the bonds between carbon atoms.
Function
The fundamental parts of a radio are the antenna, tuner, demodulator and amplifier. Carbon nanotubes are special in that they can function as these parts without the need of extra circuitry.
Antenna
The nanoradio is small enough for electromagnetic signals to mechanically vibrate it. The nanoradio essentially acts as an antenna by vibrating at the same frequency as the signal from incoming electromagnetic waves; this is in contrast with traditional radio antennas, which are generally stationary. The nanotube can vibrate at high frequencies, from "thousands to millions of times per second."
Tuner
The nanoradio can also function as a tuner: extending or shortening the nanotube changes the resonance frequency at which it vibrates, enabling the radio to tune into specific frequencies. The nanotube can be lengthened by pulling its tip toward a positive electrode and shortened by removing atoms from the tip. Changing the length this way is permanent and cannot be reversed; however, varying the electric field can also shift the frequency to which the nanoradio responds, without being permanent.
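The strong length dependence behind this tuning follows from treating the nanotube as a vibrating cantilever, whose fundamental flexural resonance scales as 1/L². A sketch using Euler-Bernoulli beam theory for a solid cantilevered rod; the modulus, density, and dimensions below are illustrative order-of-magnitude assumptions, not values from the text:

```python
import math

def cantilever_resonance_hz(length_m, radius_m,
                            youngs_modulus_pa=1e12,   # ~1 TPa, assumed
                            density_kg_m3=1300.0):    # assumed effective density
    """Fundamental flexural resonance of a solid cantilevered rod
    (Euler-Bernoulli beam theory); material constants are illustrative."""
    k1 = 1.875  # first-mode eigenvalue of the clamped-free beam equation
    area = math.pi * radius_m ** 2
    second_moment = math.pi * radius_m ** 4 / 4.0
    return (k1 ** 2 / (2.0 * math.pi)) * math.sqrt(
        youngs_modulus_pa * second_moment / (density_kg_m3 * area)
    ) / length_m ** 2

# Halving the length quadruples the resonance frequency (f ~ 1/L^2):
f_long = cantilever_resonance_hz(500e-9, 5e-9)
f_short = cantilever_resonance_hz(250e-9, 5e-9)
print(round(f_short / f_long, 6))  # 4.0
```

With these assumed parameters the fundamental mode lands in the radio-frequency range, consistent with the antenna behavior described above.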
Amplifier
As a benefit of the microscopic size and needle-like shape, the nanoradio functions naturally as an amplifier. The nanoradio exhibits field emission, in which a small voltage emits a flow of electrons; due to this, a small electromagnetic wave would produce a large flow of electrons, amplifying the signal.
Demodulator
Demodulation is essentially the separation of the information signal from the carrier wave. When the nanoradio vibrates in sync with the carrier wave, the nanoradio responds only to the information signal and ignores the carrier wave; and so, the nanoradio can act as a demodulator without the need of circuitry.
Medical Application
Currently, chemotherapy uses chemicals that harm not only cancerous cells, but also healthy ones since they are put into the blood stream. Nanoradios can be used to prevent damage to healthy cells by remotely communicating with the radio to release drugs and specifically target cancerous cells. Nanoradios can also be injected into individual cells to release certain chemicals, enabling repair of specific cells. Nanoradios can also be used to monitor insulin levels of diabetes patients and use that information to release a drug or chemical.
Complications
The implanting of nanoradios in the body is now feasible with manipulation of directed energy. The nanoradio radiates about 4.5 x 10−27 W of electromagnetic power; however, much of this power is lost when passing through the body. The energy input can be increased to compensate, but doing so generates heat in the body, which can pose a safety risk.
References
Nanoelectronics
Radio technology
Radio electronics | Nanoradio | Materials_science,Technology,Engineering | 1,004 |
2,167,172 | https://en.wikipedia.org/wiki/HP%20FOCUS | The Hewlett-Packard FOCUS microprocessor, launched in 1982, was the first commercial, single-chip, fully 32-bit microprocessor available on the market. At this time, all 32-bit competitors (DEC, IBM, Prime Computer, etc.) used multi-chip bit-slice CPU designs, while single-chip designs like the Motorola 68000 were a mix of 32- and 16-bit.
Introduced in the Hewlett-Packard HP 9000 Series 500 workstations and servers (originally launched as the HP 9020 and also, unofficially, called HP 9000 Series 600), the single-chip CPU was used alongside the I/O Processor (IOP), Memory Controller (MMU), Clock, and a number of 128-kilobit dynamic RAM devices as the basis of the HP 9000 system architecture. It was a 32-bit implementation of the 16-bit HP 3000 computer's stack architecture, with over 220 instructions (some 32 bits wide, some 16 bits wide), a segmented memory model, and no general purpose programmer-visible registers. The design of the FOCUS CPU was richly inspired by the custom silicon on sapphire (SOS) chip design HP used in their HP 3000 series machines.
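A stack architecture with no programmer-visible general-purpose registers evaluates expressions by pushing operands and applying operators to the top of the stack. A toy evaluator illustrating the idea; the mnemonics here are hypothetical, not actual FOCUS opcodes:

```python
# Toy stack-machine evaluator illustrating the register-free, stack-based
# execution model described above. The mnemonics are hypothetical; the real
# FOCUS instruction set has over 220 instructions of 16 or 32 bits.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack[-1]

# Computes (2 + 3) * 4 with no general-purpose registers involved:
print(run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]))  # 20
```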
Because of the high density of HP's NMOS-III IC process, heat dissipation was a problem. Therefore, the chips were mounted on special printed circuit boards, with a ~1 mm copper sheet at its core, called "finstrates".
The FOCUS CPU is microcoded, with a 9,216-word by 38-bit microcode control store. Internal data paths and registers are all 32 bits wide. The FOCUS CPU has a transistor count of 450,000 FETs.
References
See for HP Journal articles.
FOCUS
Stack machines
32-bit microprocessors | HP FOCUS | Technology | 382 |
39,355,617 | https://en.wikipedia.org/wiki/Melvin%20Mooney%20Distinguished%20Technology%20Award | The Melvin Mooney Distinguished Technology Award is a professional award conferred by the ACS Rubber Division. Established in 1983, the award is named after Melvin Mooney, developer of the Mooney viscometer and of the Mooney-Rivlin hyperelastic law. The award consists of an engraved plaque and prize money. The medal honors individuals "who have exhibited exceptional technical competency by making significant and repeated contributions to rubber science and technology".
Recipients
1980s
1982 J. Roger Beatty - Senior Research Fellow at B. F. Goodrich known for development of rubber testing instruments and methods
1983 Aubert Y. Coran - Monsanto researcher responsible for invention of thermoplastic elastomer Geolast
1984 Eli M. Dannenberg - Cabot scientist known for contributions to surface chemistry of carbon black
1985 William M. Hess - Columbian Chemicals Company scientist known for contributions to characterization of carbon black dispersion in rubber
1986 Albert M. Gessler - ExxonMobil researcher known for development of elastomeric thermoplastics
1987 Avrom I. Medalia - Cabot scientist known for contributions to understanding electrical conductivity and dynamic properties of carbon black filled rubbers
1988 John G. Sommer - GenCorp scientist and author of popular texts on rubber technology
1989 Joginder Lal - Goodyear Polymer Research Manager and expert in the synthesis and mechanism of the formation of high polymers.
1990s
1990 Gerard Kraus - Phillips Petroleum Scientist known for developing testing standard for carbon black surface area
1991 Charles S. Schollenberger - B. F. Goodrich chemist who invented Estane
1992 Robert W. Layer - B. F. Goodrich chemist noted for contributions to chemistry of imines
1993 John R. Dunn - Polysar synthetic rubber research chemist
1994 Noboru Tokita - Uniroyal and later Cabot scientist known for studying processing of elastomers
1995 Edward N. Kresge - Exxon Chief Polymer Scientist who developed tailored molecular weight density EPDM elastomers
1997 Russell A. Livigni - Gencorp scientist known for discovery and development of barium-based catalysts for the polymerization of butadiene and its copolymerization with styrene to give high trans rubbers with low vinyl content
1998 Henry Hsieh - Phillips Petroleum scientist known for contributions to polymerization chemistry
1999 Avraam I. Isayev - University of Akron Distinguished Professor of Polymer Science known for widely used texts on rheology and polymer molding technology, as well as for development of technology for ultrasonic devulcanization of tire rubber.
2000s
2000 Joseph Kuczkowski - Goodyear chemist who elucidated mechanisms of antioxidant function, resulting in the commercialization of several new antioxidant systems
2002 C. Michael Roland - Naval Research Lab scientist recognized for blast and impact protection using elastomers, and for diverse contributions to elastomer science
2003 Walter H. Waddell - Exxon scientist recognized for his work on tire innerliner technology
2004 Oon Hock Yeoh - Freudenberg Scientist known for contributions to nonlinear elasticity and fracture mechanics
2005 Kenneth F. Castner - Senior Research and Development Associate at Goodyear Tire & Rubber Company known for his work in nickel catalyzed diene polymerization for the synthesis of high cis-polybutadiene
2006 Meng-Jiao Wang - Cabot scientist known for studies of carbon black
2007 Daniel L. Hertz Jr. - President of Seals Eastern and NASA consultant on the Space Shuttle Challenger disaster
2008 Robert P. Lattimer - Lubrizol Advanced Materials research and development technical fellow
2009 Frederick Ignatz-Hoover - Eastman technology fellow and 9th editor of Rubber Chemistry and Technology
2010s
2010 William J. van Ooij - University of Cincinnati professor known for elucidating the mechanisms of brass-rubber adhesion in tires
2011 Periagaram S. Ravishankar - Exxon Senior Staff Engineer recognized for development of Vistalon EPDM elastomers
2012 Robert Schuster - former director of the German Institute for Rubber Technology (DIK) and popular lecturer on rubber technology
2014 Shingo Futamura - Materials scientist noted for his concept of the Deformation Index
2015 Alan H. Muhr - TARRC scientist noted for contributions to understanding the mechanics of elastomer applications, including laminated rubber isolators, marine fenders, automotive mounts, and structural energy dissipation systems
2016 Dane Parker - Goodyear Tire & Rubber Company researcher known for developing a single-step process for converting Nitrile latex to HNBR latex
2017 David J. Lohse - ExxonMobil Materials Scientist known for contributions on thermodynamics of mixing, nanocomposites for controlling permeability, neutron scattering of polymers, rheology of polymers
2018 Joseph Padovan - University of Akron Distinguished Professor known for pioneering finite element procedures for analysis of rolling tires.
2019 Manfred Klüppel - German Institute for Rubber Technology department head of Material Concepts and Modeling group
2020s
2020 Kenneth T. Gillen - Sandia National Labs researcher noted for contributions to service life prediction methods for elastomers
2021 Howard Colvin - Organic chemist and consultant to the tire and rubber industries noted for developments to rubber chemicals and polymers
2022 Anil K. Bhowmick - University of Houston professor known for contributions to polymer nanocomposites, thermoplastic elastomers, sustainability, adhesion, failure and degradation of rubbers and rubber technology
2023 Anke Blume - engineering technology professor at the University of Twente known for her contributions to silica and silane chemistry for rubber applications.
2024 Andrew V. Chapman - TARRC scientist noted for contributions to AFM microscopy of tire tread compounds.
2025 Sunny Jacob - ExxonMobil scientist known for leading the development of Thermoplastic vulcanizates products and processes.
See also
International Rubber Science Hall of Fame: Another ACS award
Rubber Chemistry and Technology: An ACS journal
List of chemistry awards
Sparks-Thomas award
Charles Goodyear Medal
References
External links
The ACS Rubber Division
Oral histories of several medal winners
Chemical and Engineering News
Awards of the American Chemical Society
Awards established in 1983
Materials science awards
Rubber | Melvin Mooney Distinguished Technology Award | Materials_science,Technology,Engineering | 1,262 |
2,707,071 | https://en.wikipedia.org/wiki/La%20Hague%20site | The La Hague site is a nuclear fuel reprocessing plant at La Hague on the Cotentin Peninsula in northern France, with the Manche storage centre bordering on it. Operated by Orano, formerly AREVA, and prior to that COGEMA (), La Hague has nearly half of the world's light water reactor spent nuclear fuel reprocessing capacity. It has been in operation since 1976, and has a capacity of about 1,700 tonnes per year. It extracts plutonium which is then recycled into MOX fuel at the Marcoule site.
It has treated spent nuclear fuel from France, Japan, Germany, Belgium, Switzerland, Italy, Spain and the Netherlands. It processed 1100 tonnes in 2005. The non-recyclable part of the radioactive waste is eventually sent back to the user nation. Prior to 2015, more than 32,000 tonnes of spent nuclear fuel had been reprocessed, with 70% of that from France, 17% from Germany and 9% from Japan.
Operations
Spent nuclear fuel roughly consists of three categories. The largest fraction by far is uranium that was present in the fuel from the start and was not affected by the nuclear reactions. Most of this uranium consists of uranium-238, which has a low radioactivity.
Around 3-4% of the material consists of fission products. These are mostly composed of highly radioactive isotopes, as the fission of uranium has endowed these with too many neutrons to be stable. As with all highly radioactive material, this level of radioactivity decreases relatively rapidly, although storage and shielding are required for at least hundreds of years.
Third, atom species are present which have a larger mass and atomic number than uranium itself. The majority of this so-called transuranic waste consists of plutonium isotopes, although other species are also present, such as americium.
Spent fuel treatment plants seek to separate these three categories into fractions that are as pure as possible. In nuclear reprocessing plants about 96% of spent nuclear fuel is recycled back into uranium-based and mixed-oxide MOX fuels.
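The fractions quoted above imply a simple mass balance for a tonne of spent fuel. A sketch using illustrative round numbers consistent with this text (roughly 95% uranium, about 1% plutonium, 3-4% fission products; actual compositions vary with burnup):

```python
# Rough mass balance for 1,000 kg of spent LWR fuel, using illustrative
# round fractions consistent with the text above (not plant-specific data):
# ~95% uranium, ~1% plutonium, ~4% fission products.
def spent_fuel_split(total_kg, uranium=0.95, plutonium=0.01, fission_products=0.04):
    """Split a spent-fuel mass into the three categories described above."""
    assert abs(uranium + plutonium + fission_products - 1.0) < 1e-9
    return {
        "uranium_kg": total_kg * uranium,
        "plutonium_kg": total_kg * plutonium,
        "fission_products_kg": total_kg * fission_products,
    }

split = spent_fuel_split(1000.0)
print(split["fission_products_kg"])  # ~40 kg destined for vitrification
```

The uranium and plutonium fractions are the roughly 96% that can be recycled; the fission-product remainder is what ends up vitrified.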
One of the main methods for separating spent fuel is the PUREX process, which separates the plutonium and other transuranics from the remainder of the spent fuel. The uranium and plutonium are separated in turn in a series of complex chemical operations. The uranium becomes uranyl nitrate and the plutonium is converted into plutonium oxide. The latter is used to produce fresh fuel called MOX (mixed oxides of uranium and plutonium), which can be used as fuel in nuclear reactors. The uranium fraction is very low in radioactivity and can be stored in specialized warehouses.
Long-term storage of radioactive waste requires the stabilization of the waste into a form that will neither react nor degrade for extended periods. Decades of research efforts have shown that a viable way to do this is through vitrification. High temperature treated ("calcined") fuel separation fractions are fed into an induction heated furnace with fragmented glass. The resulting glass contains the waste products which are bonded to the glass matrix. The fission products, which make up around 4% of the spent fuel mass, are the ones that are vitrified in this glass, as they cannot be used for any other purpose, and are generally highly radioactive. In practice, because of limits to separation of the three categories, a small amount of transuranic isotopes will be present in this material.
History
The La Hague site was built after the Marcoule site originally for producing plutonium for military purposes. In 1969 the French military, having had a sufficient supply of plutonium for weapons, had no further use of the reprocessing centre. The factory directed its efforts toward civil operations, and with the reduction of 350 people from the plant's workforce, its military connections ended.
This shift to civil uses was supported by Valéry Giscard d'Estaing and strengthened by the 1973 oil crisis.
It was understood that the facility would be used to reprocess the uranium sold to Taiwan in the 1980s and a number of politicians and experts from Taiwan listed the La Hague site in the course of securing the deal. France later reneged on the agreement and the nuclear material sold to Taiwan remains unreprocessed and is stored in temporary cooling ponds.
On 5 October 2002, an INES Level 1 incident occurred at La Hague. A sub-contractor working at the plant suffered skin contamination while rinsing equipment in the plutonium purification workshop.
Controversy surrounding radioactive releases
Greenpeace has been campaigning since 1997 for the shutdown of the site, which they claim dumps "one million litres of liquid radioactive waste per day" into the ocean; "the equivalent of 50 nuclear waste barrels", claiming the radiation affects local beaches, although official figures are to the contrary.
Greenpeace have protested by creating roadblocks and chaining themselves to vehicles transporting materials to and from the site. However, the leader of Greenpeace France, Yannick Rousselet, has since stated that they have ceased attempting to criticize the reprocessing plant on technical grounds, COGEMA having succeeded at performing the process without the serious spills that have been frequent at other such facilities around the world. In the past, the antinuclear movement argued that COGEMA would not succeed with reprocessing. Eric Blanc, deputy director of the processing plant, says that although the plant does intentionally release radioactive material, the annual dose in the vicinity of the facility is less than 20 microsieverts per year, which is equivalent to the dose of cosmic radiation received during a single transatlantic flight, and therefore within regulation. The AREVA NC website emphasizes that they are committed to keeping the dose below 30 microsieverts per year.
See also
Sellafield, a similar facility in the United Kingdom
References
External links
La Hague Plant home page
Processing of Used Nuclear Fuel
Nuclear reprocessing sites
Radioactive waste
Buildings and structures in Manche
Nuclear technology in France
Nuclear energy in France
Water pollution in France | La Hague site | Chemistry,Technology | 1,237 |
25,272,420 | https://en.wikipedia.org/wiki/Stalking | Stalking is unwanted and/or repeated surveillance or contact by an individual or group toward another person. Stalking behaviors are interrelated to harassment and intimidation and may include following the victim in person or monitoring them. The term stalking is used with some differing definitions in psychiatry and psychology, as well as in some legal jurisdictions as a term for a criminal offense.
According to a 2002 report by the U.S. National Center for Victims of Crime, "virtually any unwanted contact between two people that directly or indirectly communicates a threat or places the victim in fear can be considered stalking", although the rights afforded to victims may vary depending on jurisdiction.
Definitions
A 1995 research paper titled "Stalking Strangers and Lovers" was among the first to use the term "stalking" to describe the common occurrence of men aggressively pursuing their female former partners after a breakup. Before that paper, such behavior was more commonly described as "female harassment", "obsessive following" or "psychological rape".
The difficulties associated with defining this term exactly (or defining it at all) are well documented.
This is due in part to the overlap between accepted courtship behaviors and stalking behaviors; context must be relied upon to determine whether a specific action constitutes stalking.
Having been used since at least the 16th century to refer to a prowler or a poacher, the term stalker was initially used by media in the 20th century to describe people who pester and harass others, initially with specific reference to the harassment of celebrities by strangers who were described as being "obsessed". This use of the word appears to have been coined by the tabloid press in the United States. With time, the meaning of stalking changed and incorporated individuals being harassed by their former partners. Pathé and Mullen describe stalking as "a constellation of behaviours in which an individual inflicts upon another repeated unwanted intrusions and communications". Stalking can be defined as the willful and repeated following, watching or harassing of another person. Unlike other crimes, which usually involve one act, stalking is a series of actions that occur over a period of time.
Although stalking is illegal in most areas of the world, some of the actions that contribute to stalking may be legal, such as gathering information, calling someone on the phone, texting, sending gifts, emailing, or instant messaging. They become illegal when they breach the legal definition of harassment (e.g., an action such as sending a text is not usually illegal, but is illegal when frequently repeated to an unwilling recipient). In fact, United Kingdom law states the incident only has to happen twice when the harasser should be aware their behavior is unacceptable (e.g., two phone calls to a stranger, two gifts, following the victim then phoning them, etc.).
Cultural norms and meaning affect the way stalking is defined. Scholars note that the majority of men and women admit engaging in various stalking-like behaviors following a breakup, but stop such behaviors over time, suggesting that "engagement in low levels of unwanted pursuit behaviors for a relatively short amount of time, particularly in the context of a relationship break-up, may be normative for heterosexual dating relationships occurring within U.S. culture."
Psychology and behaviors
People characterized as stalkers may be accused of having a mistaken belief that another person loves them (erotomania), or that they need rescuing. Stalking can consist of an accumulation of a series of actions which, by themselves, can be legal, such as calling on the phone, sending gifts, or sending emails.
Stalkers may use overt and covert intimidation, threats and violence to frighten their victims. They may engage in vandalism and property damage or make physical attacks that are meant to frighten. Less common are sexual assaults.
Intimate-partner stalkers are the most dangerous type. In the UK, for example, most stalkers are former partners, and evidence indicates that stalking facilitated by mental illness (often covered by the media) accounts for only a minority of cases of alleged stalking. A UK Home Office research study on the use of the Protection from Harassment Act stated: "The study found that the Protection from Harassment Act is being used to deal with a variety of behaviour such as domestic and inter-neighbour disputes. It is rarely used for stalking as portrayed by the media since only a small minority of cases in the survey involved such behaviour."
Psychological effects on victims
Disruptions in daily life necessary to escape the stalker, including changes in employment, residence and phone numbers, take a toll on the victim's well-being and may lead to a sense of isolation.
According to Lamber Royakkers:
Stalking as a close relationship
Stalking has also been described as a form of close relationship between the parties, albeit a disjunctive one where the two participants have opposing goals rather than cooperative goals. One participant, often a woman, likely wishes to end the relationship entirely, but may find herself unable to easily do so. The other participant, often but not always a man, wishes to escalate the relationship. It has been described as a close relationship because the duration, frequency, and intensity of contact may rival that of a more traditional conjunctive dating relationship.
Types of victims
Based on work with stalking victims for eight years in Australia, Mullen and Pathé identified different types of stalking victims, characterized by prior relationship with the stalker. These are:
Prior intimates: Victims who had been in a previous intimate relationship with their stalker. In the article, Mullen and Pathé describe this as "the largest category, the most common victim profile being a woman who has previously shared an intimate relationship with her (usually) male stalker." These victims are more likely to be exposed to violence by their stalker, especially if the stalker has a criminal past. In contrast, victims of "date stalkers" are less likely to experience violence; a "date stalker" is an individual whose intimate relationship with the victim was short-lived rather than long-term.
Casual acquaintances and friends: Among male stalking victims, most are part of this category. This category of victims also includes neighbor stalking. This may result in the victims' change of residence.
Professional contacts: These are victims who have been stalked by patients, clients, or students who they have had a professional relationship with. Certain professions such as health care providers, teachers, and lawyers are at a higher risk for stalking.
Workplace contacts: The stalkers of these victims tend to visit them at their workplace, meaning they are an employer, an employee, or a customer. When a stalker comes to the victim's workplace, this poses a threat not only to the victim's safety but also to the safety of other individuals.
Strangers: These victims are typically unaware of how their stalkers began stalking them, because such stalkers usually develop an admiration for their victims from a distance.
The famous: Most of these victims are individuals who are portrayed heavily on media outlets but can also include individuals such as politicians and athletes.
Gender
Although stalking is a gender-neutral behavior, studies confirm that the majority of victims are female and that the primary perpetrators are male. As for the victims, a January 2009 report from the United States Department of Justice reported a stalking victimization rate of approximately 2% for females and approximately 0.7% for males. As for the perpetrators, many studies have shown that approximately 80–90% of stalking perpetrators are male.
According to one study, women often target other women, whereas men primarily stalk women. A January 2009 report from the United States Department of Justice also reports that "Males were as likely to report being stalked by a male as by a female offender. 43% of male stalking victims stated that the offender was female, while 41% of male victims stated that the offender was another male. Female victims of stalking were significantly more likely to be stalked by a male (67%) rather than a female (24%) offender." This report provides considerable data by gender and race about both stalking and harassment, obtained via the 2006 Supplemental Victimization Survey (SVS), by the U.S. Census Bureau for the U.S. Department of Justice.
In an article in the journal Sex Roles, Jennifer Langhinrichsen-Rohling discusses how gender plays a role in the difference between stalkers and victims. She says, "gender is associated with the types of emotional reactions that are experienced by recipients of stalking related events, including the degree of fear experienced by the victim." In addition, she hypothesizes that gender may also affect how police handle a case of stalking, how the victim copes with the situation, and how the stalker might view their behavior. She discusses how victims might view certain forms of stalking as normal because of gender socialization influences on the acceptability of certain behaviors. She emphasizes that in the United Kingdom, Australia, and the United States, strangers are considered more dangerous when it comes to stalking than a former partner. Media also plays an important role due to portrayals of male stalking behavior as acceptable, influencing men into thinking it is normal. Since gender roles are socially constructed, sometimes men do not report stalking. She also mentions coercive control theory; "future research will be needed to determine if this theory can predict how changes in social structures and gender-specific norms will result in variations in rates of stalking for men versus women over time in the United States and across the world."
Types of stalkers
Psychologists often group individuals who stalk into two categories: psychotic and nonpsychotic. Some stalkers may have pre-existing psychotic disorders such as delusional disorder, schizoaffective disorder, or schizophrenia. However, most stalkers are nonpsychotic and may exhibit disorders or neuroses such as major depression, adjustment disorder, or substance dependence, as well as a variety of personality disorders (such as antisocial, borderline, or narcissistic). The nonpsychotic stalkers' pursuit of victims is primarily angry, vindictive, focused, often including projection of blame, obsession, dependency, minimization, denial, and jealousy. Conversely, only 10% of stalkers had an erotomanic delusional disorder.
In "A Study of Stalkers" Mullen et al. (2000) identified five types of stalkers:
Rejected stalkers follow their victims in order to reverse, correct, or avenge a rejection (e.g. divorce, separation, termination).
Resentful stalkers make a vendetta because of a sense of grievance against the victims – motivated mainly by the desire to frighten and distress the victim.
Intimacy seekers seek to establish an intimate, loving relationship with their victim. Such stalkers form a spectrum from those with erotomania, to those who do not believe their love is reciprocated but who insist with "delusional intensity" of their eventual success, and to other rigid, obsessive individuals.
Incompetent suitors, despite poor social or courting skills, have a fixation, or in some cases, a sense of entitlement to an intimate relationship with those who have attracted their amorous interest. Their victims are most often already in a dating relationship with someone else.
Predatory stalkers spy on the victim in order to prepare and plan an attack – often sexual – on the victim.
In addition to Mullen et al., Joseph A. Davis, Ph.D., an American researcher, crime analyst, and university psychology professor at San Diego State University, investigated, as a member of the Stalking Case Assessment Team (SCAT), a special unit within the San Diego District Attorney's Office, hundreds of cases involving what he termed "terrestrial" stalking and "cyberstalking" between 1995 and 2002. This research culminated in one of the most comprehensive books written to date on the subject. Published by CRC Press, Inc. in August 2001, it is considered the "gold standard" as a reference on stalking crimes, victim protection, safety planning, security and threat assessment.
The 2002 National Victim Association Academy defines an additional form of stalking: the vengeance/terrorist stalker. Both the vengeance stalker and the terrorist stalker (the latter sometimes called the political stalker) do not, in contrast with some of the aforementioned types of stalkers, seek a personal relationship with their victims, but rather seek to force a certain response from them. While the vengeance stalker's motive is "to get even" with a person perceived to have wronged them (e.g., an employee who believes he or she was fired from a job without justification by a superior), the political stalker intends to accomplish a political agenda, also using threats and intimidation to force the target to refrain from or become involved in some particular activity, regardless of the victim's consent. For example, most prosecutions in this stalking category have been against anti-abortionists who stalk doctors in an attempt to discourage the performance of abortions.
Stalkers may fit categories with paranoia disorders. Intimacy-seeking stalkers often have delusional disorders involving erotomanic delusions. With rejected stalkers, the continual clinging to a relationship of an inadequate or dependent person couples with the entitlement of the narcissistic personality, and the persistent jealousy of the paranoid personality. In contrast, resentful stalkers demonstrate an almost "pure culture of persecution", with delusional disorders of the paranoid type, paranoid personalities, and paranoid schizophrenia.
One of the uncertainties in understanding the origins of stalking is that the concept is now widely understood in terms of specific behaviors which are found to be offensive or illegal. As discussed above, these specific (apparently stalking) behaviors may have multiple motivations.
In addition, the personality characteristics that are often discussed as antecedent to stalking may also produce behavior that is not conventionally defined as stalking. Some research suggests there is a spectrum of what might be called "obsessed following behavior." People who complain obsessively and for years, about a perceived wrong or wrong-doer, when no one else can perceive the injury—and people who cannot or will not "let go" of a person or a place or an idea—comprise a wider group of persons that may be problematic in ways that seem similar to stalking. Some of these people get extruded from their organizations—they may get hospitalized or fired or let go if their behavior is defined in terms of illegal stalking, but many others do good or even excellent work in their organizations and appear to have just one focus of tenacious obsession.
Cyberstalking
Cyberstalking is the use of computers or other electronic technology to facilitate stalking. In Davis (2001), Lucks identified a separate category of stalkers who instead of a terrestrial means, prefer to perpetrate crimes against their targeted victims through electronic and online means. Amongst college students, Ménard and Pincus found that male stalkers were likely to have a high score of sexual abuse as children and narcissistic vulnerability.
Men were more likely to become stalkers. Of the women who participated in their study, 9% were cyberstalkers while only 4% were overt stalkers; the male participants showed the opposite pattern, with 16% overt stalkers and 11% cyberstalkers. Alcohol and physical abuse both played a role in predicting women's cyberstalking, while in men "preoccupied attachment significantly predicted cyber stalking"; the victims were likely to have an "avoidant attachment".
Stalking by groups
According to a U.S. Department of Justice special report, a significant number of people reporting stalking incidents claim that they had been stalked by more than one person, with 18.2% reporting that they were stalked by two people and 13.1% reporting that they had been stalked by three or more. The report did not break down these cases into numbers of victims who claimed to have been stalked by several people individually, and by people acting in concert. A question asked of respondents reporting three or more stalkers by polling personnel about whether the stalking was related to co-workers, members of a gang, fraternities, sororities, etc., did not have its responses indicated in the survey results as released by the DOJ. The data for this report was obtained via the 2006 Supplemental Victimization Survey (SVS), conducted by the U.S. Census Bureau for the Department of Justice.
According to a United Kingdom study by Sheridan and Boon, in 5% of the cases they studied, there was more than one stalker, and 40% of the victims said that friends or family of their stalker had also been involved. In 15% of cases, the victim was unaware of any reason for the harassment.
Over a quarter of all stalking and harassment victims do not know their stalkers in any capacity. About a tenth of those responding to the SVS did not know the identities of their stalkers. 11% of victims said they had been stalked for five years or more.
In some cases, collaborative abusive behavior is normalised within groups, in behaviours such as the Fair Game policy within Scientology.
False claims of stalking, "gang stalking" and delusions of persecution
In 1999, Pathe, Mullen and Purcell wrote that popular interest in stalking was promoting false claims. In 2004, Sheridan and Blaauw said that they estimated that 11.5% of claims in a sample of 357 reported claims of stalking were false.
According to Sheridan and Blaauw, 70% of false stalking reports were made by people experiencing delusions, stating that "after eight uncertain cases were excluded, the false reporting rate was judged to be 11.5%, with the majority of false victims suffering delusions (70%)." Another study estimated the proportion of false reports that were due to delusions as 64%.
A 2020 study by Sheridan et al. gave figures for lifetime prevalence of perceived gang-stalking at 0.66% for adult women and 0.17% for adult men.
Epidemiology and prevalence
Australia
According to a study conducted by Purcell, Pathé and Mullen (2002), 23% of the Australian population reported having been stalked.
Austria
Stieger, Burger and Schild conducted a survey in Austria, revealing a lifetime prevalence of 11% (women: 17%, men: 3%). Further results include: 86% of stalking victims were female, 81% of the stalkers were male. Women were mainly stalked by men (88%) while men were almost equally stalked by men and women (60% male stalkers). 19% of the stalking victims reported that they were still being stalked at the time of study participation (point prevalence rate: 2%). To 70% of the victims, the stalker was known, being a prior intimate partner in 40%, a friend or acquaintance in 23% and a colleague at work in 13% of cases. As a consequence, 72% of the victims reported having changed their lifestyle. 52% of former and ongoing stalking victims reported having a currently impaired (pathological) psychological well-being. There was no significant difference between the incidence of stalking in rural and urban areas.
England and Wales
In 1998, Budd and Mattinson found a lifetime prevalence of 12% in England and Wales (16% female, 7% males). In 2010/11, 57% of stalking victims were found to be female, and 43% were male.
According to a paper by staff from the Fixated Threat Assessment Centre, a unit established to deal with people with fixations on public figures, 86% of a sample group of 100 people assessed by them appeared to them to have a psychotic illness; 57% of the sample group were subsequently admitted to hospital, and 26% treated in the community.
A similar retrospective study published in 2009 in Psychological Medicine, based on a sample of threats to the royal family kept by the Metropolitan Police Service over a period of 15 years, suggested that 83.6% of these letter-writers had a serious mental illness.
Germany
Dressing, Kuehner and Gass conducted a representative survey in Mannheim, a middle-sized German city, and reported a lifetime prevalence of having been stalked of almost 12%.
India
In India, a stalking case is reported every 55 minutes. Most cases are not reported as they are not considered criminal enough.
United States
Tjaden and Thoennes reported a lifetime prevalence (being stalked) of 8% in females and 2% in males (depending on how strict the definition) in the National Violence Against Women Survey.
Laws on harassment and stalking
Australia
Every Australian state enacted laws prohibiting stalking during the 1990s, with Queensland being the first state to do so in 1994. The laws vary slightly from state to state, with Queensland's laws having the broadest scope, and South Australian laws the most restrictive. Punishments vary from a maximum of 10 years imprisonment in some states, to a fine for the lowest severity of stalking in others. Australian anti-stalking laws have some notable features. Unlike many US jurisdictions they do not require the victim to have felt fear or distress as a result of the behaviour, only that a reasonable person would have felt this way. In some states, the anti-stalking laws operate extra-territorially, meaning that an individual can be charged with stalking if either they or the victim are in the relevant state. Most Australian states provide the option of a restraining order in cases of stalking, breach of which is punishable as a criminal offence. There has been relatively little research into Australian court outcomes in stalking cases, although Freckelton (2001) found that in the state of Victoria, most stalkers received fines or community based dispositions.
Canada
Section 264 of the Criminal Code, titled "criminal harassment", addresses acts which are termed "stalking" in many other jurisdictions. The provisions of the section came into force in August 1993 with the intent of further strengthening laws protecting women. It is a hybrid offence, which may be punishable upon summary conviction or as an indictable offence, the latter of which may carry a prison term of up to ten years. Section 264 has withstood Charter challenges.
The Chief, Policing Services Program, for Statistics Canada has stated:
China
In China, simple stalking was treated as a kind of minor offence when it amounted to harassment, so stalkers were usually punished by a small fine or less than 10 days detention under the Public Security Administration Punishment Law.
According to the Tort Liability Law, infringement of citizens' privacy is subject to tort liability. Article 42 of the Public Security Administration Punishment Law stipulates that stalkers who spy on, secretly photograph, eavesdrop on, or spread the private affairs of others can be detained for not more than five days or fined not more than five hundred yuan; if the circumstances are more serious, they can be detained for between five and ten days and may also be fined not more than five hundred yuan.
Under the current judicial system in mainland China, however, there is a lack of judicial protection for individuals facing illegal stalking, harassment, surveillance, and similar behaviors. Even celebrities confronted with stalking by obsessive fans who intrude on their private lives may be unable to resolve the problem for a long time. Many cases across China have shown that ordinary people who have been stalked may still be unable to solve the problem after seeking help from the judicial authorities. In a case in Wuhu, Anhui, in March 2018, a woman who was being pursued repeatedly called the police to no avail and was eventually killed. In a homicide case in Laiyuan, Hebei, in July of the same year, a woman and her family who had been stalked and harassed for a long time also sought help from the police repeatedly, to no avail; the matter ended only when the stalker broke into their home armed and was killed by the victim's parents.
In the social culture of mainland China, the "stalker" style of courtship is widely celebrated, as in the saying that even a virtuous woman fears a persistent suitor. Literary works also openly promote such behavior, and stalking between the sexes is thus romanticized as courtship. In real life, this type of behavior may occur even when the two parties do not know each other and the stalked person has no advance knowledge of it. Through online platforms and other social media, aided by the convenience of online communication, individuals and institutions directly participate in, promote, and support various "courtship-style" tracking and stalking cases.
France
Article 222–33–2 of the French Criminal Code (added in 2002) penalizes "Moral harassment," which is: "Harassing another person by repeated conduct which is designed to or leads to a deterioration of his conditions of work liable to harm his rights and his dignity, to damage his physical or mental health or compromise his career prospects," with a year's imprisonment and a fine of EUR15,000.
Germany
The German Criminal Code (§ 238 StGB) penalizes Nachstellung, defined as threatening or seeking proximity or remote contact with another person and thus heavily influencing their lives, with up to three years of imprisonment. The definition is not strict and allows "similar behaviour" to also be classified as stalking.
India
In 2013, Indian Parliament made amendments to the Indian Penal Code, introducing stalking as a criminal offence. Stalking has been defined as a man following or contacting a woman, despite clear indication of disinterest by the woman, or monitoring her use of the Internet or electronic communication. A man committing the offence of stalking would be liable for imprisonment up to three years for the first offence, and shall also be liable to fine and for any subsequent conviction would be liable for imprisonment up to five years and with fine.
Italy
Following a series of high-profile incidents that came to public attention, a law was proposed in June 2008 and became effective in February 2009 (D.L. 23.02.2009 n. 11), making it a criminal offence under the newly introduced art. 612 bis of the penal code, punishable with imprisonment ranging from six months up to five years, to engage in any "continuative harassing, threatening or persecuting behaviour which: (1) causes a state of anxiety and fear in the victim(s); or (2) engenders within the victim(s) a motivated fear for his/her own safety or for the safety of relatives or of others tied to the victim by an affective relationship; or (3) forces the victim(s) to change his/her living habits." If the perpetrator of the offense is tied to the victim by kinship, or is or has been involved in a relationship with the victim (i.e., a current or former spouse or fiancé), or if the victim is a pregnant woman, a minor, or a person with disabilities, the sanction can be elevated up to six years of incarceration.
Japan
In 2000, Japan enacted a national law to combat this behaviour, after the murder of Shiori Ino. Acts of stalking can be viewed as "interfering [with] the tranquility of others' lives" and are prohibited under petty offence laws.
However, stalking cases are increasing rather than decreasing, with more than 20,000 people reporting cases to the police in 2013, and civil society organisations estimate that these are only the tip of the iceberg. Japan has seen the highest growth in stalking cases in the world in recent years, and stalking cases have continued to escalate into homicide. Many victims say that reporting to the police is ineffective, that the police treat stalking as a minor domestic dispute, that the process of obtaining a court order for protection can take months, and that some victims have to hire private bodyguards.
Netherlands
In the Wetboek van Strafrecht, Article 285b defines the crime of belaging (harassment), which is a term used for stalking.
Article 285b:
1. Any person who unlawfully, systematically and deliberately intrudes into another person's personal environment with the intention of compelling that person to act, or to refrain from acting, in a certain way, or of inducing fear, is guilty of harassment, punishable by a maximum of three years' imprisonment and a fine of the fourth monetary category.
2. The prosecution will only take place after a complaint of the person who is the victim of the crime.
Republic of Korea
Until 2021, simple stalking was treated as a kind of minor offence when it amounted to harassment, so stalkers were usually punished by a small fine or less than 30 days detention under the Minor Offences Act. In April 2021, the National Assembly passed an act intended to address widespread stalking crimes and protect victims, which came into force on October 21 the same year. The act includes a provision that mandates the victim must approve of punishment for the stalker. A subsequent bill proposes to remove this provision to address situations where the victim may fear retribution from the stalker.
South Korea's stalking laws were criticized for weaknesses and led to accusations the country does not treat violence against women seriously enough when a female subway worker in Seoul was stalked and stabbed to death in the subway restroom by her former colleague in September 2022. The stalker had been harassing the victim since 2019.
In October 2022, the city of Seoul announced the opening of three shelters to house stalking victims and offer free counseling.
Romania
Article 208 of the 2014 Criminal Code states:-
Russia
In the Criminal Code of the Russian Federation, stalking is absent as an independent corpus delicti. However, lawyers argue that the persecution of a person in Russia can still be seriously penalized: the victim of stalking only needs to rely on articles that are already in the code. Thus, if the persecutor uses threats, the victim should refer to Article 119 of the Criminal Code of the Russian Federation, "Threats of murder or causing grievous bodily harm". In this case, the offender is punished with compulsory labor for up to 480 hours or forced labor for up to two years; the persecutor may also face arrest for up to six months or imprisonment (or restriction of freedom) for up to two years.
"Violation of privacy" (Article 137 of the Criminal Code of the Russian Federation) can also be applied to parts of stalking behavior. This crime manifests itself in the illegal collection of information about a person's private life and its dissemination (including in public speeches and the media). For this, an offender can receive a fine of up to 200 thousand rubles, be assigned compulsory work for up to 360 hours, or even be imprisoned for two years. In addition, persecutors often violate Article 138 of the Criminal Code of the Russian Federation, "Violation of the secrecy of correspondence, telephone conversations, postal, telegraph and other messages of citizens". That article provides for punishment ranging from a fine of 80 thousand rubles to correctional labor for up to one year.
However, these are not the only articles of the Criminal Code that can be applied to stalkers. I.A. Yurchenko, author of Crimes Against Information Security, argues that victims of persecution, in the presence of appropriate circumstances, have the right to invoke Article 133 of the Criminal Code of the Russian Federation, "Compulsion to Sexual Actions" (from a fine of 120 thousand rubles to imprisonment for up to one year); Article 139, "Violation of the inviolability of the home" (from a fine of 40 thousand rubles to imprisonment for two to three years); Article 163, "Extortion" (imprisonment for up to seven years); and Article 167, "Intentional destruction or damage to property" (up to imprisonment in accordance with the gravity of the offense).
Many Russian stalkers have indeed been convicted under the listed articles. For example, a resident of Ufa who tried to force his ex-girlfriend to resume their relationship by threatening to publish her intimate photographs was found guilty under Articles 133 and 137 of the Criminal Code of the Russian Federation and sentenced to a fine of 70 thousand rubles. According to some lawyers, the punishment in such cases is not always commensurate with the crime committed, so they propose adding to the Criminal Code of Russia an article similar to § 238 of the Criminal Code of the Federal Republic of Germany, under which a stalker faces up to three years in prison.
One can also be held criminally liable for specific forms of stalking, for example: a threat to kill or cause grievous bodily harm (Article 119 of the Criminal Code of the Russian Federation); violation of privacy, that is, the illegal collection or dissemination, without a person's consent, of information about their private life that constitutes a personal or family secret (Article 137 of the Criminal Code of the Russian Federation); or violation of the inviolability of the home (Article 139 of the Criminal Code of the Russian Federation). To pursue these, the victim must file a complaint with law enforcement agencies. Crimes under Articles 137 and 139 of the Criminal Code of the Russian Federation are investigated by investigators of the Investigative Committee of the Russian Federation, while criminal cases concerning threats are handled by interrogators of the Ministry of Internal Affairs of the Russian Federation. It is therefore necessary to contact the relevant law enforcement agency at the scene of the crime (and essential to obtain a KUSP notification slip confirming that the complaint was filed).
Taiwan
In Taiwan, more than 7,000 stalking cases are reported each year; in nearly half of them the victim was harassed repeatedly for up to a year, and in a quarter for up to three years, with 80% of the victims being female. A survey conducted by the Modern Women's Foundation in 2014 showed that less than 10% of those who had been harassed would report it or file a complaint, and 12.4% of the young female students interviewed were found to have been stalked; the foundation therefore promoted the legislation of the "Stalking Prevention Act". However, the draft was not reviewed again after its first review in the Legislative Yuan in 2015. In 2019, the DPP blocked the third reading of the bill on the grounds that it would "increase police duties". Only in 2021, in the wake of murders of women, was the Stalking Prevention Act again discussed and passed by the Legislative Yuan. During the legislative process, the DPP insisted that the definition of stalking be limited to acts "related to sex or gender".
Under the Stalking and Harassment Prevention Act, anyone who engages in stalking and harassment may be sentenced to imprisonment of not more than one year or detention; in lieu thereof, or in addition thereto, a fine of not more than one hundred thousand New Taiwan Dollars may be imposed. Anyone who commits the preceding offence with lethal weapons or other dangerous objects shall be sentenced to imprisonment of not more than five years or detention; in lieu thereof, or in addition thereto, a fine of not more than five hundred thousand New Taiwan Dollars may be imposed. Violators of a protection order issued by a court in accordance with Article 12, Paragraph 1, Subparagraphs 1 to 3 shall be sentenced to imprisonment of not more than three years or detention; in lieu thereof, or in addition thereto, a fine of not more than three hundred thousand New Taiwan Dollars may be imposed.
United Kingdom
Before the enactment of the Protection from Harassment Act 1997, the Telecommunications Act 1984 criminalised indecent, offensive or threatening phone calls, and the Malicious Communications Act 1988 criminalised the sending of an indecent, offensive or threatening letter, electronic communication, or other article to another person.
Before 1997, no specific offence of stalking existed in England and Wales. However, in Scotland, incidents could be dealt with under pre-existing law, with life imprisonment available for the worst offences.
England and Wales
In England and Wales, "harassment" was criminalised by the enactment of the Protection from Harassment Act 1997, which came into force on 16 June 1997. The Act makes it a criminal offence, punishable by up to six months' imprisonment, to pursue a course of conduct which amounts to harassment of another on two or more occasions. The court can also issue a restraining order, breach of which carries a maximum punishment of five years' imprisonment. In England and Wales, liability may arise if the victim suffers either mental or physical harm as a result of being harassed, or colloquially stalked (see R. v. Constanza).
In 2012, then-Prime Minister David Cameron stated that the government intended to make another attempt to create a law aimed specifically at stalking behaviour.
In May 2012, the Protection of Freedoms Act 2012 created the offence of stalking for the first time in England and Wales, by inserting these offences into the Protection from Harassment Act 1997. The act of stalking under this section is exemplified by contacting, or attempting to contact, a person by any means, publishing any statement or other material relating or purporting to relate to a person, monitoring the use by a person of the Internet, email, or any other form of electronic communication, loitering in any place (whether public or private), interfering with any property in the possession of a person, or watching or spying on a person.
The Protection of Freedoms Act 2012 also added section 4A to the Protection from Harassment Act 1997, covering 'stalking involving fear of violence or serious alarm or distress'. This created an offence where a person's conduct amounts to stalking and either causes another to fear, on at least two occasions, that violence will be used against them, or causes another person serious alarm or distress which has a substantial adverse effect on their usual day-to-day activities.
Scotland
In Scotland, behaviour commonly described as stalking was already prosecuted as the common law offence of breach of the peace (not to be confused with the minor English offence of the same description) before the introduction of the statutory offence against s.39 of the Criminal Justice and Licensing (Scotland) Act 2010; either course can still be taken depending on the circumstances of each case. The statutory offence incurs a penalty of twelve months imprisonment or a fine upon summary conviction, or a maximum of five years' imprisonment or a fine upon conviction on indictment; penalties for conviction for breach of the peace are limited only by the sentencing powers of the court, thus a case remitted to the High Court can carry a sentence of imprisonment for life.
The Protection from Harassment Act also makes provision for dealing with stalking as a civil wrong (i.e. interference with the victim's personal rights), falling under the law of delict. Victims of stalking may sue for interdict against an alleged stalker, or for a non-harassment order, breach of which is an offence.
United States
California was the first state to criminalize stalking in the United States, in 1990, as a result of numerous high-profile stalking cases in California, including the 1982 attempted murder of actress Theresa Saldana, the 1988 massacre by Richard Farley, the 1989 murder of actress Rebecca Schaeffer, and five Orange County stalking murders, also in 1989. The first anti-stalking law in the United States, California Penal Code Section 646.9, was developed and proposed by Municipal Court Judge John Watson of Orange County; Watson, together with U.S. Representative Ed Royce, introduced the law in 1990. Also in 1990, the Los Angeles Police Department (LAPD) established the United States' first Threat Management Unit, founded by LAPD Captain Robert Martin.
Within three years thereafter, every state in the United States followed suit to create the crime of stalking, under names such as criminal harassment or criminal menace. The Driver's Privacy Protection Act (DPPA) was enacted in 1994 in response to numerous cases of drivers' information being abused for criminal activity, prominent examples being the Saldana and Schaeffer stalking cases. The DPPA prohibits state Departments of Motor Vehicles (DMVs) from disclosing a driver's personal information without permission.
The Violence Against Women Act of 2005, amending a United States statute, 108 Stat. 1902 et seq, defined stalking as:
As of 2011, stalking is an offense under section 120a of the Uniform Code of Military Justice (UCMJ). The law took effect on 1 October 2007.
In 2014, new amendments were made to the Clery Act to require reporting on stalking, domestic violence, and dating violence.
In 2018, the PAWS Act became law in the United States, and it expanded the definition of stalking to include "conduct that causes a person to experience a reasonable fear of death or serious bodily injury to his or her pet".
The anti-stalking statute of Illinois is controversial. It is particularly restrictive, by the standards of this type of legislation.
Other
The Council of Europe Convention on preventing and combating violence against women and domestic violence defines and criminalizes stalking, as well as other forms of violence against women. The Convention came into force on 1 August 2014.
See also
References
Further reading
Paper presented at the Stalking: Criminal Justice Responses conference convened by the Australian Institute of Criminology, held in Sydney, 7–8 December 2000.
External links
Abuse
Aggression
Crimes
Harassment and bullying
Inchoate offenses
Indium(III) nitrate

Indium(III) nitrate is a nitrate salt of indium which forms various hydrates. Only the pentahydrate has been crystallographically verified. Other hydrates are also reported in the literature, such as the trihydrate.
Production and reactions
Indium(III) nitrate hydrate is produced by the dissolution of indium metal in concentrated nitric acid, followed by evaporation of the solution.
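The source does not give the reaction equation. A plausible balanced form, assuming the concentrated acid is reduced to nitrogen dioxide (that reduction product is an assumption, not stated in the text), is:

```latex
\mathrm{In} + 6\,\mathrm{HNO_3} \longrightarrow \mathrm{In(NO_3)_3} + 3\,\mathrm{NO_2}\!\uparrow + 3\,\mathrm{H_2O}
```

On evaporation the salt is recovered as a hydrate rather than as the anhydrous nitrate.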
The hydrate first decomposes to a basic salt and then to indium(III) oxide at 240 °C. Anhydrous indium(III) nitrate is claimed to be produced by the reaction of anhydrous indium(III) chloride and dinitrogen pentoxide.
In the presence of excess nitrate ions, indium(III) nitrate converts to the [In(NO3)4]− ion.
The hydrolysis of indium(III) nitrate yields indium(III) hydroxide. It also reacts with sodium tungstate to form In(OH)WO4, [In(OH)2]2WO4, NaInWO4 or In2(WO4)3 depending on pH.
Structure
Only the pentahydrate has been structurally elucidated. The pentahydrate consists of octahedral [In(NO3)(H2O)5]2+ centers as well as two nitrates and is monoclinic.
References
Indium compounds
Nitrates
Biopolymer

Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. The polynucleotides, RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs).
In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering.
Biopolymers versus synthetic polymers
A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most in vivo systems, all biopolymers of a type (say one specific protein) are all alike: they all contain similar sequences and numbers of monomers and thus all have the same mass. This phenomenon is called monodispersity in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1.
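The monodispersity claim above can be made concrete with a small numeric sketch (the molar masses are made-up example values, not data from the source): the dispersity Đ = Mw/Mn equals exactly 1 when every chain has the same mass, and exceeds 1 for a stochastic synthetic sample.

```python
# Illustrative sketch: dispersity (D = Mw / Mn) for a monodisperse biopolymer
# versus a polydisperse synthetic polymer. Masses are arbitrary example values.

def dispersity(masses):
    """Return (Mn, Mw, D) for a list of chain molar masses, one entry per chain."""
    n = len(masses)
    mn = sum(masses) / n                           # number-average molar mass
    mw = sum(m * m for m in masses) / sum(masses)  # weight-average molar mass
    return mn, mw, mw / mn

# A template-directed biopolymer: every chain has the same mass.
protein_chains = [25_000.0] * 5
# A synthetic polymer: chain lengths vary stochastically.
synthetic_chains = [18_000.0, 22_000.0, 25_000.0, 31_000.0, 40_000.0]

print(dispersity(protein_chains)[2])    # 1.0 (monodisperse)
print(dispersity(synthetic_chains)[2])  # greater than 1 (polydisperse)
```

The helper name `dispersity` and the sample masses are hypothetical; only the Mw/Mn definition follows standard polymer terminology.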
Conventions and nomenclature
Polypeptides
The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids.
Nucleic acids
The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer.
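A minimal sketch of the 5'-to-3' convention: because the two strands of DNA run antiparallel, the partner of a strand written 5'→3' must be both complemented and reversed before it, too, is written 5'→3'. (The function name and example sequence are illustrative, not from the source.)

```python
# The 5'->3' convention: a DNA sequence is written from the 5' end to the 3' end,
# so the complementary strand must be reversed before being written the same way.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq_5to3):
    """Given a strand written 5'->3', return its partner strand, also 5'->3'."""
    # Complementing alone yields the partner 3'->5'; reversing restores 5'->3'.
    return "".join(COMPLEMENT[base] for base in reversed(seq_5to3))

print(reverse_complement("ATGC"))  # GCAT
```

Applying the function twice returns the original sequence, reflecting the symmetry of base pairing.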
Polysaccharides
Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds. The exact placement of the linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins.
Structural characterization
There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometer techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners.
Common biopolymers
Collagen: Collagen is the primary structure of vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers, and used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. Therefore, it has been used for many medical applications such as in treatment for tissue infection, drug delivery systems, and gene therapy.
Silk fibroin: Silk Fibroin (SF) is another protein rich biopolymer that can be obtained from different silkworm species, such as the mulberry worm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulation properties and platelet adhesion. Silk fibroin has been additionally found to support stem cell proliferation in vitro.
Gelatin: Gelatin is obtained from type I collagen (which contains cysteine) and is produced by the partial hydrolysis of collagen from the bones, tissues and skin of animals. There are two types of gelatin, Type A and Type B. Type A is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis, contains 18% nitrogen, and has no amide groups. Elevated temperatures cause gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, such as NH2, SH, and COOH, which allow it to be modified using nanoparticles and biomolecules. Gelatin is an extracellular matrix protein, which allows it to be applied in applications such as wound dressings, drug delivery and gene transfection.
Starch: Starch is an inexpensive biodegradable biopolymer and copious in supply. Nanofibers and microfibers can be added to the polymer matrix to increase the mechanical properties of starch improving elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Starch being biodegradable and renewable is used for many applications including plastics and pharmaceutical tablets.
Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. The strength and stability come from the straight shape of cellulose, caused by glucose monomers joined by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications due to its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is used vastly in the form of nano-fibrils called nano-cellulose. Nano-cellulose at low concentrations produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field.
Alginate: Alginate is the most copious marine natural polymer derived from brown seaweed. Alginate biopolymer applications range from packaging, textile and food industry to biomedical and chemical engineering. The first ever application of alginate was in the form of wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration, and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as drug release rate can easily be manipulated due to a variety of alginate densities and fibrous composition.
Biopolymer applications
The applications of biopolymers can be categorized under two main fields, which differ due to their biomedical and industrial use.
Biomedical
Because one of the main purposes of biomedical engineering is to mimic body parts to sustain normal body functions, biopolymers, thanks to their biocompatible properties, are used widely for tissue engineering, medical devices, and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and medical applications in general due to their mechanical properties. They provide characteristics like wound healing and catalysis of bioactivity, and they are non-toxic. Compared to synthetic polymers, which can present disadvantages like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they also possess more complex structures, similar to those of the human body.
More specifically, polypeptides like collagen and silk are biocompatible materials used in ground-breaking research, as they are inexpensive and easily attainable. Gelatin is often used in wound dressings, where it acts as an adhesive. Scaffolds and films made with gelatin can hold drugs and other nutrients that supply a wound for healing.
As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of their use:
Collagen based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections like infected corneal tissue or liver cancer. Collagen films have all been used for gene delivery carriers which can promote bone formation.
Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen based implants are used for cultured skin cells or drug carriers that are used for burn wounds and replacing skin.
Collagen as haemostat: When collagen interacts with platelets it causes a rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells. Collagen based haemostat reduces blood loss in tissues and helps manage bleeding in organs such as the liver and spleen.
Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science. Chitosan is biocompatible, it is highly bioactive, meaning it stimulates a beneficial response from the body, it can biodegrade which can eliminate a second surgery in implant applications, can form gels and films, and is selectively permeable. These properties allow for various biomedical applications of chitosan.
Chitosan as drug delivery: Chitosan is used mainly with drug targeting because it has potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can also produce better anticancer effects by causing gradual release of free drug into cancerous tissue.
Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions in microorganisms like algae, fungi, bacteria, and gram-positive bacteria of different yeast species.
Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment which aids in the healing process. This wound dressing is also biodegradable and has porous structures that allows cells to grow into the dressing. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds forming stable three-dimensional networks.
Industrial
Food: Biopolymers are being used in the food industry for things like packaging, edible encapsulation films, and the coating of foods. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers are hydrophilic in nature and start deteriorating when exposed to moisture. Biopolymers are also used as edible films that encapsulate foods. These films can carry things like antioxidants, enzymes, probiotics, minerals, and vitamins, which the encapsulating film then supplies to the body when the food is consumed.
Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging, but their barrier properties (whether moisture-barrier or gas-barrier) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water through the packaging, which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer with excellent barrier characteristics that is now being used to overcome the barrier limitations of PLA and starch.
Water purification: Chitosan has been used for water purification. It is used as a flocculant that only takes a few weeks or months rather than years to degrade in the environment. Chitosan purifies water by chelation. This is the process in which binding sites along the polymer chain bind with the metal ions in the water forming chelates. Chitosan has been shown to be an excellent candidate for use in storm and wastewater treatment.
As materials
Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene or polyethylene based plastics.
Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting.
Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat: when used to produce biopolymers, these are classified as non food crops. These can be converted in the following pathways:
Sugar beet > Glyconic acid > Polyglyconic acid
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene
Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, thin films for wrapping.
Environmental impacts
Biopolymers can be sustainable and carbon neutral, and they are always renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: the CO2 released when they degrade can be reabsorbed by crops grown to replace them, which makes them close to carbon neutral.
Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. These biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap.
See also
Biomaterials
Bioplastic
Biopolymers & Cell (journal)
Condensation polymers
Condensed tannins
DNA sequence
Melanin
Non food crops
Phosphoramidite
Polymer chemistry
Sequence-controlled polymers
Sequencing
Small molecules
Worm-like chain
References
External links
NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials
Bioplastics Magazine
Biopolymer group
What's Stopping Bioplastic?
Biomolecules
Polymers
Molecular biology
Molecular genetics
Biotechnology products
Bioplastics
Biomaterials
Justice Duel

Justice Duel is a platform game developed and published by Mega Cat Studios. It was released on the Nintendo Entertainment System in 2017. Players battle one another while riding cybernetically enhanced eagles. The game plays similarly to Joust.
Gameplay
The player controls a cyborg version of a past President of the United States riding a cybernetically enhanced eagle to duel either against other human players or the game's AI. The player duels by firing projectiles at their opponents; a level is completed once there are no remaining opponents. Players can pick up power-ups in the form of mines and traps to use against opponents. Justice Duel supports multiplayer for up to four players through the NES Four Score.
Development
Besides Joust, the game's developers cite TowerFall and Balloon Fight as having inspired elements of the game's design.
Evercade release
Mega Cat Studios released Justice Duel in a compilation cartridge for the Evercade, along with several other titles from the studio, including Little Medusa, Log Jammers, and Coffee Crisis.
Child's Play edition
In 2018, Mega Cat Studios released a special edition of Justice Duel to raise money for the charity Child's Play; it debuted at PAX East and was part of the Omegathon tournament. This special edition of the game features two additional playable characters: Della and Rod, who ride a duck and a quail, respectively.
Reception
Justice Duel has received a generally positive reception from the press, being praised for its aesthetic and fast paced gameplay.
References
External links
2017 video games
Eagles in popular culture
Fiction about cyborgs
Fictional presidents of the United States
Indie games
Mega Cat Studios games
Multiplayer and single-player video games
Nintendo Entertainment System games
Nintendo Entertainment System homebrew games
Nintendo Entertainment System-only games
Platformers
Unauthorized video games
Video games developed in the United States
Video games about birds
Liquid scintillation counting

Liquid scintillation counting is the measurement of radioactive activity of a sample material which uses the technique of mixing the active material with a liquid scintillator (e.g. zinc sulfide), and counting the resultant photon emissions. The purpose is to allow more efficient counting due to the intimate contact of the activity with the scintillator. It is generally used for alpha particle or beta particle detection.
Technique
Samples are dissolved or suspended in a "cocktail" containing a solvent (historically aromatic organics such as xylene or toluene, but more recently less hazardous solvents are used), typically some form of a surfactant, and "fluors" or scintillators which produce the light measured by the detector. Scintillators can be divided into primary and secondary phosphors, differing in their luminescence properties.
Beta particles emitted from the isotopic sample transfer energy to the solvent molecules: the π cloud of the aromatic ring absorbs the energy of the emitted particle. The energized solvent molecules typically transfer the captured energy back and forth with other solvent molecules until the energy is finally transferred to a primary scintillator. The primary phosphor will emit photons following absorption of the transferred energy. Because that light emission may be at a wavelength that does not allow efficient detection, many cocktails contain secondary phosphors that absorb the fluorescence energy of the primary phosphor and re-emit at a longer wavelength. Two widely used primary and secondary fluors are 2,5-diphenyloxazole (PPO), with an emission maximum of 380 nm, and 1,4-bis-2-(5-phenyloxazolyl)benzene (POPOP), with an emission maximum of 420 nm.
The radioactive samples and cocktail are placed in small transparent or translucent (often glass or plastic) vials that are loaded into an instrument known as a liquid scintillation counter. Newer machines may use 96-well plates with individual filters in each well. Many counters have two photomultiplier tubes connected in a coincidence circuit. The coincidence circuit ensures that genuine light pulses, which reach both photomultiplier tubes, are counted, while spurious pulses (due to line noise, for example), which would affect only one of the tubes, are ignored.
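The coincidence logic can be sketched as follows. This is a hypothetical Python illustration of the principle, not the electronics of any real counter; the resolving-time value and pulse times are invented.

```python
import bisect

RESOLVING_TIME_NS = 20  # hypothetical coincidence window, in nanoseconds

def coincident_counts(tube_a_times, tube_b_times, window=RESOLVING_TIME_NS):
    """Count pulses seen by both tubes within `window` ns of each other."""
    accepted = 0
    b = sorted(tube_b_times)
    for t in sorted(tube_a_times):
        i = bisect.bisect_left(b, t - window)
        if i < len(b) and b[i] <= t + window:
            accepted += 1
            b.pop(i)  # each tube-B pulse pairs with at most one tube-A pulse
    return accepted

# Genuine scintillation events flash both tubes; noise fires only one.
a_times = [100, 500, 900, 1300]  # tube A pulse times (ns): 2 real + 2 noise
b_times = [105, 510, 2000]       # tube B: the same 2 real events + 1 noise
print(coincident_counts(a_times, b_times))  # → 2
```

Only the two pulses that arrive at both tubes within the window are counted; the single-tube pulses are rejected as noise.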
Counting efficiencies under ideal conditions range from about 30% for tritium (a low-energy beta emitter) to nearly 100% for phosphorus-32, a high-energy beta emitter. Some chemical compounds (notably chlorine compounds) and highly colored samples can interfere with the counting process. This interference, known as "quenching", can be overcome through data correction or through careful sample preparation.
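The data correction mentioned above amounts to dividing the net count rate by the counting efficiency, which in turn depends on how quenched the sample is. A minimal sketch, assuming a toy linear quench curve (real instruments fit an empirical curve from a set of quenched standards; all numbers here are hypothetical):

```python
# Illustrative only: converting measured counts per minute (CPM) to
# absolute activity in disintegrations per minute (DPM).

def counting_efficiency(quench_index, a=0.95, b=-0.04):
    """Toy linear quench curve: efficiency falls as quenching increases."""
    eff = a + b * quench_index
    return max(0.0, min(1.0, eff))

def cpm_to_dpm(cpm, background_cpm, quench_index):
    """Subtract background, then divide by efficiency: DPM = net CPM / eff."""
    net_cpm = max(0.0, cpm - background_cpm)
    eff = counting_efficiency(quench_index)
    if eff == 0.0:
        raise ValueError("sample too quenched to correct")
    return net_cpm / eff

# Example: 12,000 CPM measured, 40 CPM background, mild quenching.
dpm = cpm_to_dpm(12_000, 40, quench_index=2.0)
print(round(dpm))  # 11,960 net CPM / 0.87 efficiency → 13747
```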
Cherenkov counting
High-energy beta emitters, such as phosphorus-32 and yttrium-90, can also be counted in a scintillation counter without the cocktail, using instead an aqueous solution containing no scintillators. This technique, known as Cherenkov counting, relies on Cherenkov radiation being detected directly by the photomultiplier tubes. Cherenkov counting benefits from the use of plastic vials, which scatter the emitted light and so increase the chance that it reaches a photomultiplier tube.
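The restriction to high-energy beta emitters follows from the Cherenkov threshold: a beta particle radiates only when it travels faster than light does in the medium. The threshold formula below is standard physics rather than something stated in the article:

```python
# Why Cherenkov counting needs high-energy betas: the electron's kinetic
# energy must exceed the Cherenkov threshold for the medium (index n).
import math

M_E_KEV = 511.0  # electron rest energy in keV

def cherenkov_threshold_kev(n):
    """Kinetic energy above which an electron emits Cherenkov light."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return M_E_KEV * (gamma - 1.0)

print(round(cherenkov_threshold_kev(1.33)))  # water: ≈ 264 keV
# P-32 betas (E_max ≈ 1.71 MeV) clear this easily; tritium (≈ 18.6 keV) cannot.
```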
See also
Accelerator mass spectrometry
Counting efficiency
References
Liquid Scintillation Counting, University of Wisconsin–Milwaukee Radiation Safety Program
Principles and Applications of Liquid Scintillation Counting, National Diagnostics
K. Regan, "Cerenkov counting technique for beta particles: advantages and limitations". J. Chem. Educ., August 1983, 60 (8), 682–684.
Photochemistry
Particle detectors
Ionising radiation detectors | Liquid scintillation counting | Chemistry,Technology,Engineering | 774 |
25,988,143 | https://en.wikipedia.org/wiki/Seward%27s%20Success%2C%20Alaska | Seward's Success was a planned community proposed for Point MacKenzie, north of Anchorage, Alaska, United States. The megaproject was to be fully enclosed by a dome spanning the Knik Arm and holding a community of 40,000 residents, with ample residential, office, recreational and commercial space. It was proposed in 1968 after the discovery of oil at Prudhoe Bay and scuttled in 1972 by a delay to the development of the Trans-Alaska Pipeline System.
Its name alludes to "Seward's Folly", an epithet flung at Secretary of State William H. Seward for the 1867 Alaskan Purchase.
History
The plan for constructing Seward's Success developed after the January 1968 discovery of oil reserves at Prudhoe Bay. The $800 million ($ billion today), four-phase community was to have been developed by Tandy Industries of Tulsa, Oklahoma and designed by Adrian Wilson Associates of Los Angeles. The $170 million ($ billion today) initial phase was envisioned to provide for a population of 5,000 and contain of office space, of retail space and an indoor sports arena. The central feature of the office construction was the proposed 20-story Alaskan Petroleum Center, which was to serve a variety of oil and oil service companies. The development was touted as the world's first totally enclosed, climate-controlled community.
Transportation between Seward's Success and downtown Anchorage would be accomplished initially by way of a high-speed aerial tramway. Subsequently, a monorail would be built as an additional connection between the town and Anchorage International Airport. Automobiles would not have been allowed inside the community, and all transportation within Seward's Success was to have been provided by way of the aerial tramway, monorail, bicycle paths and moving sidewalks.
The interior temperature would have been controlled year-round. The shell would have been composed of glass designed to work like a greenhouse in maintaining the temperature. Energy to power the community would be generated through natural gas available on-site.
Physical construction of the community was to commence in 1970 with the completion of a dock and several roads. However, with construction of the Trans-Alaska Pipeline System delayed by lawsuits, a group subcontracted by Tandy failed to make the annual lease payment on the land where Seward's Success was to have been located. By 1972, the project was officially cancelled.
See also
Knik Arm Bridge - Controversial proposed bridge to cross the Knik Arm between Anchorage and the proposed location of Seward's Success.
Arcology
References
Geography of Matanuska-Susitna Borough, Alaska
Planned communities in the United States
Cancelled cities
Proposed arcologies | Seward's Success, Alaska | Technology | 536 |
17,748,770 | https://en.wikipedia.org/wiki/Open%20Agent%20Architecture | Open Agent Architecture, or OAA for short, is a framework for integrating a community of heterogeneous software agents in a distributed environment. It is also a research project of the SRI International Artificial Intelligence Center.
Roughly, the architecture consists of a central "blackboard" server that holds a list of tasks, while a group of agents executes those tasks according to their specific capabilities.
Agents working in the structure of an OAA framework are built to universal communication and functional standards and are based on the Interagent Communication Language. The language is platform-independent and allows agents to collaborate by delegating and receiving work requests.
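The delegation model described above can be sketched in Python. This is a hypothetical illustration of the blackboard pattern, not the actual OAA implementation, which uses the Interagent Communication Language over a distributed facilitator; all class and task names here are invented.

```python
class Blackboard:
    """Central server: holds posted tasks and routes them to capable agents."""
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def post(self, task):
        """Delegate a task to the first agent that declares the capability."""
        for agent in self.agents:
            if agent.can_handle(task):
                return agent.handle(task)
        raise LookupError(f"no agent can handle {task!r}")

class Agent:
    def __init__(self, capability, handler):
        self.capability = capability
        self.handler = handler

    def can_handle(self, task):
        return task["type"] == self.capability

    def handle(self, task):
        return self.handler(task["payload"])

board = Blackboard()
board.register(Agent("translate", lambda text: text.upper()))
board.register(Agent("sum", lambda nums: sum(nums)))

print(board.post({"type": "sum", "payload": [1, 2, 3]}))  # → 6
```

The point of the pattern is that agents never address each other directly: they advertise capabilities to the central server, which matches incoming work requests against them.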
Open Agent Architecture was first proposed in the late 1990s and was later used as a foundation for the DARPA-funded CALO artificial intelligence project.
References
Computer programming
SRI International | Open Agent Architecture | Technology,Engineering | 161 |
629,203 | https://en.wikipedia.org/wiki/Stefan%20Mazurkiewicz | Stefan Mazurkiewicz (25 September 1888 – 19 June 1945) was a Polish mathematician who worked in mathematical analysis, topology, and probability. He was a student of Wacław Sierpiński and a member of the Polish Academy of Learning (PAU). His students included Karol Borsuk, Bronisław Knaster, Kazimierz Kuratowski, Stanisław Saks, and Antoni Zygmund. For a time Mazurkiewicz was a professor at the University of Paris; however, he spent most of his career as a professor at the University of Warsaw.
The Hahn–Mazurkiewicz theorem, a basic result on curves prompted by the phenomenon of space-filling curves, is named for Mazurkiewicz and Hans Hahn. His 1935 paper Sur l'existence des continus indécomposables is widely regarded as an especially elegant piece of work in point-set topology.
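For reference, the theorem admits the following standard formulation (quoted from the usual textbook statement, not from this article):

```latex
\textbf{Hahn--Mazurkiewicz theorem.} A nonempty Hausdorff topological space $X$
is a continuous image of the unit interval $[0,1]$ if and only if $X$ is
compact, connected, locally connected, and metrizable (i.e.\ a Peano continuum).
```

In particular, space-filling curves such as Peano's exist precisely because the square satisfies these four conditions.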
During the Polish–Soviet War (1919–21), Mazurkiewicz as early as 1919 broke the most common Russian cipher for the Polish General Staff's cryptological agency. Thanks to this, orders issued by Soviet commander Mikhail Tukhachevsky's staff were known to Polish Army leaders. This contributed substantially, perhaps decisively, to Polish victory at the critical Battle of Warsaw and possibly to Poland's survival as an independent country.
See also
Biuro Szyfrów
List of Polish mathematicians
External links
1888 births
1945 deaths
Warsaw School of Mathematics
People from Warsaw Governorate
Polish cryptographers
Topologists
Academic staff of the University of Paris
Academic staff of the University of Warsaw
Mathematical analysts
Cipher Bureau (Poland)
University of Warsaw alumni | Stefan Mazurkiewicz | Mathematics | 342 |