id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,014,669 | https://en.wikipedia.org/wiki/Overlay%20network | An overlay network is a computer network that is layered on top of another (logical as opposed to physical) network. The concept of overlay networking is distinct from the traditional model of OSI layered networks, and almost always assumes that the underlay network is an IP network of some kind.
Some examples of overlay networking technologies are VXLAN, BGP VPNs (both Layer 2 and Layer 3), and IP-over-IP technologies such as GRE or IPsec tunnels. SD-WAN, which is typically built on IP-over-IP tunnels, is another class of overlay network.
Structure
Nodes in an overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks are overlay networks because their nodes form networks over existing network connections.
The Internet was originally built as an overlay upon the telephone network, while today (through the advent of VoIP), the telephone network is increasingly turning into an overlay network built on top of the Internet.
Attributes
Overlay networks have a certain set of attributes, including separation of logical addressing, security and quality of service. Other optional attributes include resiliency/recovery, encryption and bandwidth control.
Uses
Telcos
Many telcos use overlay networks to provide services over their physical infrastructure. In the networks that connect physically diverse sites (wide area networks, WANs), one common overlay network technology is BGP VPNs. These VPNs are provided in the form of a service to enterprises to connect their own sites and applications. The advantage of these kinds of overlay networks is that the telecom operator does not need to manage addressing or other enterprise-specific network attributes.
Within data centers, VXLAN was once the more common choice; however, due to its complexity and the need to stitch Layer 2 VXLAN-based overlay networks to Layer 3 IP/BGP networks, it has become more common to use BGP within data centers to provide Layer 2 connectivity between virtual machines or Kubernetes clusters.
Enterprise networks
Enterprise private networks were first overlaid on telecommunication networks such as Frame Relay and Asynchronous Transfer Mode (ATM) packet-switching infrastructures. Migration from these (now legacy) infrastructures to IP-based MPLS networks and virtual private networks began around 2001–2002 and is now complete, with very few Frame Relay or ATM networks remaining.
From an enterprise point of view, while an overlay VPN service configured by the operator might fulfill basic connectivity requirements, it lacks flexibility. For example, connecting services from competing operators, or carrying an enterprise service over an Internet connection while securing it, is not possible with standard VPN technologies. Hence the proliferation of SD-WAN overlay networks, which allow enterprises to connect sites and users regardless of the type of network access they have.
Over the Internet
The Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node having a specific logical address, whose IP address is not known in advance.
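As a rough illustration of this idea, the sketch below routes a message to whichever node is responsible for a logical key on a consistent-hashing ring, independent of the nodes' IP addresses. The node names are hypothetical and SHA-1 is an arbitrary choice of hash; this is not any particular DHT implementation.

import hashlib
from bisect import bisect_right

def ring_position(name, bits=32):
    # Map a node name or message key onto a circular identifier space.
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

class OverlayRing:
    # Toy overlay: each node is responsible for the keys that fall
    # between its predecessor's position and its own.
    def __init__(self, node_names):
        self.nodes = sorted((ring_position(n), n) for n in node_names)

    def route(self, key):
        # The destination is the first node at or after the key's position,
        # wrapping around the ring -- a logical address, not an IP address
        # known in advance.
        pos = ring_position(key)
        positions = [p for p, _ in self.nodes]
        idx = bisect_right(positions, pos) % len(self.nodes)
        return self.nodes[idx][1]

ring = OverlayRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.route("some-document-id"))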
Quality of Service
Guaranteeing bandwidth by marking traffic has multiple solutions, including IntServ and DiffServ. IntServ requires per-flow tracking and consequently causes scaling issues in routing platforms; it has not been widely deployed. DiffServ has been widely deployed by many operators as a method to differentiate traffic types. DiffServ itself provides no guarantee of throughput, but it does allow the network operator to decide which traffic is higher priority and hence will be forwarded first in congestion situations.
Overlay networks implement a much finer granularity of quality of service, allowing enterprise users to decide on an application and user or site basis which traffic should be prioritised.
Ease of Deployment
Overlay networks can be incrementally deployed at end-user sites or on hosts running the overlay protocol software, without cooperation from ISPs. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast).
Advantages
Resilience
The objective of resilience in telecommunications networks is to enable automated recovery during failure events in order to maintain a desired service level or availability. As telecommunications networks are built in a layered fashion, resilience can be applied at the physical, optical, IP or session/application layers. Each layer relies on the resilience features of the layer below it. Overlay IP networks in the form of SD-WAN services therefore rely on the physical, optical and underlying IP services they are transported over. Application layer overlays depend on all the layers below them. The advantage of overlays is that they are more flexible and programmable than traditional network infrastructure, which outweighs the disadvantages of additional latency, complexity and bandwidth overheads.
Application Layer Resilience Approaches
A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from disconnection or interference. This application-layer overlay improves on current wide-area routing protocols, which can take several minutes to recover from an outage. RON nodes monitor the Internet paths among themselves and decide whether to route packets directly over the Internet or via other RON nodes, optimizing application-specific metrics.
The Resilient Overlay Network has a relatively simple conceptual design. RON nodes are deployed at various locations on the Internet and form an application-layer overlay that cooperates in routing packets. Each RON node monitors the quality of the Internet paths between itself and the other nodes and uses this information to accurately and automatically select a path for each packet, thus reducing the amount of time required to recover from poor quality of service.
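The path-selection idea can be sketched as follows. This is an illustration of the principle only, not the actual RON software, and the latency figures are invented.

# Measured latencies (in ms) between overlay nodes, including the direct
# Internet path between the source and destination; values are made up.
latency = {("A", "B"): 180.0, ("A", "C"): 40.0, ("C", "B"): 35.0}

def best_path(src, dst, nodes):
    # Compare the direct Internet path with every one-hop detour through
    # another overlay node, and keep whichever is currently fastest.
    best_cost, best_route = latency.get((src, dst), float("inf")), [src, dst]
    for relay in nodes:
        if relay in (src, dst):
            continue
        via = (latency.get((src, relay), float("inf"))
               + latency.get((relay, dst), float("inf")))
        if via < best_cost:
            best_cost, best_route = via, [src, relay, dst]
    return best_cost, best_route

print(best_path("A", "B", ["A", "B", "C"]))  # (75.0, ['A', 'C', 'B'])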
Multicast
Overlay multicast is also known as End System or Peer-to-Peer Multicast. High bandwidth multi-source multicast among widely distributed nodes is a critical capability for a wide range of applications, including audio and video conferencing, multi-party games and content distribution. Throughout the last decade, a number of research projects have explored the use of multicast as an efficient and scalable mechanism to support such group communication applications. Multicast decouples the size of the receiver set from the amount of state kept at any single node and potentially avoids redundant communication in the network.
The limited deployment of IP Multicast, a best-effort network layer multicast protocol, has led to considerable interest in alternate approaches that are implemented at the application layer, using only end-systems. In an overlay or end-system multicast approach, participating peers organize themselves into an overlay topology for data delivery. Each edge in this topology corresponds to a unicast path between two end-systems or peers in the underlying internet. All multicast-related functionality is implemented at the peers instead of at routers, and the goal of the multicast protocol is to construct and maintain an efficient overlay for data transmission.
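A toy sketch of that organisation step follows; it is not any particular protocol, and the peer names and unicast costs are invented. Each joining peer attaches to whichever already-joined peer is cheapest to reach by unicast, and the resulting tree is the overlay along which data is forwarded.

def build_overlay_tree(peers, unicast_cost):
    # Greedy join: the first peer is the source; every later peer attaches
    # to its cheapest already-joined peer. Each returned edge stands for a
    # unicast path through the underlying Internet.
    tree, joined = [], [peers[0]]
    for peer in peers[1:]:
        parent = min(joined, key=lambda p: unicast_cost[(p, peer)])
        tree.append((parent, peer))
        joined.append(peer)
    return tree

costs = {("source", "a"): 10, ("source", "b"): 25, ("a", "b"): 5}
print(build_overlay_tree(["source", "a", "b"], costs))  # [('source', 'a'), ('a', 'b')]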
Disadvantages
No knowledge of the real network topology: overlays are subject to the routing inefficiencies of the underlying network and may be routed over sub-optimal paths
Possible increased latency compared to non-overlay services
Duplicate packets at certain points
Additional encapsulation overhead, meaning lower total network capacity due to multiple payload encapsulation
List of overlay network protocols
Overlay network protocols based on TCP/IP include:
Distributed hash tables (DHTs) based on the Chord protocol
JXTA
XMPP: the routing of messages based on an endpoint Jabber ID (example: nodeId_or_userId@domainId/resourceId) instead of by an IP address
Many peer-to-peer protocols including Gnutella, Gnutella2, Freenet, I2P and Tor.
PUCC
Solipsis: a France Télécom system for massively shared virtual world
Overlay network protocols based on UDP/IP include:
Distributed hash tables (DHTs) based on the Kademlia algorithm, such as KAD
Real Time Media Flow Protocol – Adobe Flash
See also
Darknet
Mesh network
Computer network
Peercasting
Virtual Private Network
References
External links
List of overlay network implementations, July 2003
Resilient Overlay Networks
Overcast: reliable multicasting with an overlay network
OverQoS: An overlay based architecture for enhancing Internet QoS
End System Multicast
Overlay networks
Anonymity networks
Network architecture
Computer networking | Overlay network | [
"Technology",
"Engineering"
] | 1,776 | [
"Computer networking",
"Computer engineering",
"Network architecture",
"Computer networks engineering",
"Computer science"
] |
1,014,694 | https://en.wikipedia.org/wiki/Real%20projective%20space | In mathematics, real projective space, denoted RPn or Pn(R), is the topological space of lines passing through the origin 0 in the real space Rn+1. It is a compact, smooth manifold of dimension n, and is a special case of a Grassmannian space.
Basic properties
Construction
As with all projective spaces, RPn is formed by taking the quotient of Rn+1 \ {0} under the equivalence relation x ∼ λx for all nonzero real numbers λ. For all x in Rn+1 \ {0} one can always find a λ such that λx has norm 1. There are precisely two such λ, differing by sign. Thus RPn can also be formed by identifying antipodal points of the unit n-sphere, Sn, in Rn+1.
One can further restrict to the upper hemisphere of Sn and merely identify antipodal points on the bounding equator. This shows that RPn is also equivalent to the closed n-dimensional disk, Dn, with antipodal points on the boundary, ∂Dn = Sn−1, identified.
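The three equivalent models just described can be restated compactly; this is only a summary of the constructions above.

\mathbb{RP}^n \;=\; \bigl(\mathbb{R}^{n+1}\setminus\{0\}\bigr)/(x \sim \lambda x,\ \lambda \neq 0)
\;\cong\; S^n/(x \sim -x)
\;\cong\; D^n/(x \sim -x \ \text{for}\ x \in \partial D^n = S^{n-1}).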
Low-dimensional examples
RP1 is called the real projective line, which is topologically equivalent to a circle.
RP2 is called the real projective plane. This space cannot be embedded in R3. It can however be embedded in R4 and can be immersed in R3 (see here). The questions of embeddability and immersibility for projective n-space have been well-studied.
RP3 is diffeomorphic to SO(3), hence admits a group structure; the covering map S3 → RP3 is a map of groups Spin(3) → SO(3), where Spin(3) is a Lie group that is the universal cover of SO(3).
Topology
The antipodal map on the n-sphere (the map sending x to −x) generates a Z2 group action on Sn. As mentioned above, the orbit space for this action is RPn. This action is actually a covering space action giving Sn as a double cover of RPn. Since Sn is simply connected for n ≥ 2, it also serves as the universal cover in these cases. It follows that the fundamental group of RPn is Z2 when n > 1. (When n = 1 the fundamental group is Z due to the homeomorphism with S1.) A generator for the fundamental group is the closed curve obtained by projecting any curve connecting antipodal points in Sn down to RPn.
The projective n-space is compact, connected, and has a fundamental group isomorphic to the cyclic group of order 2: its universal covering space is given by the antipody quotient map from the n-sphere, a simply connected space. It is a double cover. The antipode map on Rp has sign (−1)^p, so it is orientation-preserving if and only if p is even. The orientation character is thus: the non-trivial loop in π1(RPn) acts as (−1)^(n+1) on orientation, so RPn is orientable if and only if n + 1 is even, i.e., n is odd.
The projective n-space is in fact diffeomorphic to the submanifold of the space of real (n + 1) × (n + 1) matrices consisting of all symmetric matrices of trace 1 that are also idempotent linear transformations.
Geometry of real projective spaces
Real projective space admits a constant positive scalar curvature metric, coming from the double cover by the standard round sphere (the antipodal map is locally an isometry).
For the standard round metric, this has sectional curvature identically 1.
In the standard round metric, the measure of projective space is exactly half the measure of the sphere.
Smooth structure
Real projective spaces are smooth manifolds. On Sn, in homogeneous coordinates, (x1, ..., xn+1), consider the subset Ui with xi ≠ 0. Each Ui is homeomorphic to the disjoint union of two open unit balls in Rn that map to the same subset of RPn and the coordinate transition functions are smooth. This gives RPn a smooth structure.
Structure as a CW complex
Real projective space RPn admits the structure of a CW complex with 1 cell in every dimension.
In homogeneous coordinates (x1 ... xn+1) on Sn, the coordinate neighborhood U1 = {(x1 ... xn+1) | x1 ≠ 0} can be identified with the interior of the n-disk Dn. When x1 = 0, one has RPn−1. Therefore the n−1 skeleton of RPn is RPn−1, and the attaching map f : Sn−1 → RPn−1 is the 2-to-1 covering map. One can put
RPn = RPn−1 ∪f Dn.
Induction shows that RPn is a CW complex with 1 cell in every dimension up to n.
The cells are Schubert cells, as on the flag manifold. That is, take a complete flag (say the standard flag) 0 = V0 < V1 <...< Vn; then the closed k-cell consists of the lines that lie in Vk, and the open k-cell (the interior of the k-cell) consists of the lines in Vk but not in Vk−1.
In homogeneous coordinates (with respect to the flag), the cells are
[* : 0 : 0 : ... : 0], [* : * : 0 : ... : 0], ..., [* : * : ... : *].
This is not a regular CW structure, as the attaching maps are 2-to-1. However, its cover is a regular CW structure on the sphere, with 2 cells in every dimension; indeed, the minimal regular CW structure on the sphere.
In light of the smooth structure, the existence of a Morse function would show RPn is a CW complex. One such function is given by, in homogeneous coordinates,
g(x1 : ... : xn+1) = Σi i·xi².
On each neighborhood Ui, g has nondegenerate critical point (0,...,1,...,0) where 1 occurs in the i-th position with Morse index i. This shows RPn is a CW complex with 1 cell in every dimension.
Tautological bundles
Real projective space has a natural line bundle over it, called the tautological bundle. More precisely, this is called the tautological subbundle, and there is also a dual n-dimensional bundle called the tautological quotient bundle.
Algebraic topology of real projective spaces
Homotopy groups
The higher homotopy groups of RPn are exactly the higher homotopy groups of Sn, via the long exact sequence on homotopy associated to a fibration.
Explicitly, the fiber bundle is:
Z2 → Sn → RPn.
You might also write this as
S0 → Sn → RPn
or
O(1) → Sn → RPn
by analogy with complex projective space.
The homotopy groups are:
πi(RPn) = 0 for i = 0; π1(RP1) = Z; π1(RPn) = Z2 for n ≥ 2; and πi(RPn) = πi(Sn) for i ≥ 2.
Homology
The cellular chain complex associated to the above CW structure has 1 cell in each dimension 0, ..., n. For each dimension k, the boundary map dk : ∂Dk → RPk−1/RPk−2 is the map that collapses the equator on Sk−1 and then identifies antipodal points. In odd (resp. even) dimensions, this has degree 0 (resp. 2).
Thus the integral homology is
Hi(RPn; Z) = Z for i = 0, and for i = n if n is odd;
Hi(RPn; Z) = Z/2Z for i odd with 0 < i < n;
Hi(RPn; Z) = 0 otherwise.
RPn is orientable if and only if n is odd, as the above homology calculation shows.
Infinite real projective space
The infinite real projective space is constructed as the direct limit or union of the finite projective spaces:
RP∞ = ∪n RPn.
This space is the classifying space of O(1), the first orthogonal group.
The double cover of this space is the infinite sphere S∞, which is contractible. The infinite projective space is therefore the Eilenberg–MacLane space K(Z2, 1).
For each nonnegative integer q, the modulo 2 homology group Hq(RP∞; Z/2Z) = Z/2Z.
Its cohomology ring modulo 2 is
H*(RP∞; Z/2Z) = Z/2Z[w1],
where w1 is the first Stiefel–Whitney class: it is the free Z/2Z-algebra on w1, which has degree 1.
See also
Complex projective space
Quaternionic projective space
Lens space
Real projective plane
Notes
References
Bredon, Glen. Topology and geometry, Graduate Texts in Mathematics, Springer Verlag 1993, 1996
Algebraic topology
Differential geometry
Projective geometry | Real projective space | [
"Mathematics"
] | 1,530 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
1,014,770 | https://en.wikipedia.org/wiki/Shanty%20town | A shanty town, squatter area, squatter settlement, or squatter camp is a settlement of improvised buildings known as shanties or shacks, typically made of materials such as mud and wood, or from cheap building materials such as corrugated tin sheets. A typical shanty town is squatted and in the beginning lacks adequate infrastructure, including proper sanitation, safe water supply, electricity and street drainage. Over time, shanty towns can develop their infrastructure and even change into middle class neighbourhoods. They can be small informal settlements or they can house millions of people.
First used in North America to designate a shack, the term shanty is likely derived from French chantier (construction site and associated low-level workers' quarters), or alternatively from Scottish Gaelic sean () meaning 'old' and taigh () meaning 'house[hold]'.
Globally, some of the largest shanty towns are Ciudad Neza in Mexico, Orangi in Pakistan and Dharavi in India. They are known by various names in different places, such as favela in Brazil, villa miseria in Argentina and gecekondu in Turkey. Shanty towns are mostly found in developing nations, but also in the cities of developed nations, such as Athens, Los Angeles and Madrid. Cañada Real is considered the largest informal settlement in Europe, and Skid Row is an infamous shanty town in Los Angeles. Shanty towns are sometimes found in places such as railway sidings, swampland or disputed building projects. In South Africa, squatter camps, often referred to by the Afrikaans word "plakkerskampe", often start and grow rapidly on vacant land or public spaces within or close to cities and towns, where there may be nearby work opportunities without the cost of transport.
Construction
Shanty towns tend to begin as improvised shelters on squatted land. People build shacks from whatever materials are easy to acquire, for example wood or mud. There are no facilities such as electricity, gas, sewage or running water. The squatters choose areas such as railway sidings, preservation areas or disputed building projects. Swiss journalist Georg Gerster has noted (with specific reference to the invasões of Brasília) that "squatter settlements [as opposed to slums], despite their unattractive building materials, may also be places of hope, scenes of a counter-culture, with an encouraging potential for change and a strong upward impetus". Stewart Brand has observed that shanty towns are green, with people recycling as much as possible and tending to travel by foot, bicycle, rickshaw or shared taxi, though this is mainly due to the generally poor economic situation found.
Development and future prospects
While most shanty towns begin as precarious establishments haphazardly thrown together without basic social and civil services, over time, some have undergone a certain amount of development. Often the residents themselves are responsible for the major improvements. Community organizations sometimes working alongside NGOs, private companies, and the government, set up connections to the municipal water supply, pave roads, and build local schools. Some of these shanties have become middle class suburbs. One such example is the Los Olivos neighbourhood of Lima, Peru, which now contains gated communities, casinos, and plastic surgery clinics.
Some Brazilian favelas have also seen improvements in recent years and can even attract tourists. Development occurs over a long period of time and newer towns still lack basic services. Nevertheless, there has been a general trend whereby shanties undergo gradual improvements, rather than relocation to even more distant parts of a metropolis.
In Africa, many shanty towns are starting to implement the use of composting toilets and solar panels. In India, people living in slums have access to cell phones and the internet.
Other African shanty towns have even become popular tourist attractions. Soweto, an old squatter camp from apartheid-era South Africa, is now classified as a city within a city, with a population of almost 2 million. It boasts the popular "Soweto Towers" and a multitude of guided excursions, often including a Shisa-nyama.
Pope Francis argues in his 2015 encyclical letter Laudato si' that shanty town settlements should be developed, if possible, rather than people being moved on and their settlements destroyed. He and the Catholic Church's Council for Justice and Peace have emphasised the need for information, involvement and choice being offered to people being moved on.
Instances
Shanty towns are present in a number of developing countries. In Francophone countries, shanty towns are referred to as bidonvilles (French for "can town"); such countries include Haiti, where Cité Soleil houses between 200,000 and 300,000 people on the edge of Port-au-Prince.
Africa
In 2016, 62% of Africa's population was living in shanty towns.
Squatter camps in South Africa typically use cheap and easily acquired building materials, such as corrugated tin sheets, to build shacks. Offering very little protection against extreme weather conditions, these squatter camps, often built near streams or rivers because of the steady water supply, are frequently subjected to flash floods. They are also prone to runaway fires due to the close proximity in which they are built. They often cause a great deal of damage to naturally occurring ecosystems, both directly and indirectly. An example of severe indirect damage is the use of washing detergents and the disposal of refuse in the nearby water source, the effects of which can often be seen for hundreds of kilometres downstream.
Due to the lack of infrastructure and the cost of basic services such as water and electricity, the overall squatted area is often barren, with the ground swept and stamped to minimise dust, and gardening is simply impossible and unaffordable. Illegal and dangerous electricity connections are abundant, posing a further risk of fires and electrical accidents.
The Joe Slovo squatter camp, in Cape Town, houses an estimated 20,000 people. Shack dwellers in South Africa organise themselves in groups such as Abahlali baseMjondolo and Western Cape Anti-Eviction Campaign.
In Nairobi, the capital of Kenya, Kibera has between 200,000 and 1 million residents. There is no running water and inhabitants use a flying toilet in which faeces are collected in a plastic bag then thrown away. Mathare is a collection of slums which contain around 500,000 people. In Zambia, the informal housing areas are known as kombonis and approximately 80% of the people in the capital Lusaka are living in them.
Asia
The largest shanty town in Asia is Orangi in Karachi, Pakistan, which had an estimated 1.5 million inhabitants in 2011. The Orangi Pilot Project aims to lift local people out of poverty. It was begun by Akhtar Hameed Khan and run by Parveen Rehman until her murder in 2013. Residents laid sewage pipes themselves and almost all of Orangi's 8,000 streets are now connected. In India, an estimated one million people live in Dharavi, a shanty town built on a former mangrove swamp in Mumbai. It is one of the most densely populated places on the globe. In 2011, there were at least four improvised settlements in Mumbai containing even more people. There are in total 3.4 million people living in the 5,000 informal settlements of Bangladesh's capital city Dhaka.
Thailand has 5,500 informal settlements, one of the largest being a shanty town in the Khlong Toei District of Bangkok. In China, 171 urban villages were demolished before the 2008 Summer Olympics in Beijing. As of 2005, there were 346 shanty towns in Beijing, housing 1.5 million people. Author Robert Neuwirth wrote that around six million people, half the population of Istanbul lived in gecekondu areas.
In Hong Kong, the Kowloon Walled City housed up to 50,000 people, with rooftop slums currently providing some additional housing.
Latin America
The world's largest shanty town is Ciudad Neza or Neza-Chalco-Itza, which is part of the city of Ciudad Nezahualcóyotl, next to Mexico City. Estimates of its population range from 1.2 million to 4 million.
Brazil has many favelas. In Rio de Janeiro, Brazil, it was calculated in 2000 that over 20% of its 6.5 million inhabitants were living in more than 600 favelas. For example, Rocinha is home to an estimated 80,000 inhabitants. It has developed into a densely populated neighbourhood with some buildings reaching six storeys high. There are theatres, schools, nurseries and local newspapers.
In Argentina, shanty towns are known as villas miseria. As of 2011, there were 500,000 people living in 864 informal settlements in the metropolitan Buenos Aires area. In Peru, they are known as pueblos jóvenes ("young towns"), as campamentos in Chile, and as asentamientos in Guatemala.
Developed countries
During the 1930s Great Depression, shanty towns nicknamed Hoovervilles sprang up across the United States. Following the Great Depression, squatters lived in shacks on landfill sites beside the Martin Pena canal in Puerto Rico and were still there in 2010. More recently, cities such as Newark and Oakland have witnessed the creation of tent cities. The Umoja Village shanty town was squatted in 2006 in Miami, Florida. There are also colonias near the border with Mexico.
Although shanty towns are now generally less common in developed countries in Europe, they still exist. The growing influx of migrants has fuelled shantytowns in cities commonly used as a point of entry into the European Union, including Athens and Patras in Greece. The Calais Jungle in France had grown to over 8,000 people by the time of its clearance in October 2016. Bidonvilles exist in the peripheries of some French cities. The state authorities recorded 16,399 people living in 391 slums across the country in 2012. Of these, 41% lived on the outskirts of Paris.
In Madrid, Spain, a shanty town named Cañada Real is considered the largest informal settlement in Europe. It has an estimated 8,628 inhabitants, who are mainly Spanish, Romani and north African, but only one mobile health unit. After 40 years, property developers began to take an interest in the site in 2012.
There have been cardboard cities in London and Belgrade. In some cases, shanty towns can persist in gentrified areas that local governments have yet to redevelop, or in regions of political dispute. A major historical example was the Kowloon Walled City in Hong Kong.
In Australia and New Zealand, there were many shanty towns before World War II, some of which still exist (for example Wyee, a suburb of the Central Coast).
In popular culture
Many films have been shot in shanty towns. Slumdog Millionaire centres on characters who spend most of their lives in Indian shanty towns. The Brazilian film City of God was set in Cidade de Deus and filmed in another favela, called Cidade Alta. White Elephant, a 2012 Argentinian movie, is set in a villa miseria in Buenos Aires. The South African film District 9 is largely set in a township called Chiawelo, from which people had been forcibly resettled.
The 2016 Chinese TV series Housing tells the story of shantytown clearance in Beiliang, Baotou, Inner Mongolia.
A 2023 Nigerian crime thriller titled Shanty Town was released on Netflix on January 20, 2023. It is a six-part series that tells the story of a ruthless leader named Scar (Chidi Mokeme) who handles a lot of dirty business and is popularly regarded as the King of Shanty Town.
Video games such as Max Payne 3 have levels located in fictional shanty towns.
Reggae singer Desmond Dekker sang a song called "007 (Shanty Town)".
See also
Informal settlement
New village
Refugee camp
Slum
Tent city
References
Further reading
Slate article about an economist proposing New Orleans to be reconstructed with shanties
External links
Photos of Dharavi, a shanty town in Mumbai, India.
Homelessness
Types of towns
Urban planning
Squatting
fr:Bidonville
zh:棚戶區 | Shanty town | [
"Engineering"
] | 2,524 | [
"Urban planning",
"Architecture"
] |
1,014,792 | https://en.wikipedia.org/wiki/Key-based%20routing | Key-based routing (KBR) is a lookup method used in conjunction with distributed hash tables (DHTs) and certain other overlay networks. While DHTs provide a method to find a host responsible for a certain piece of data, KBR provides a method to find the closest host for that data, according to some defined metric. The metric need not be physical distance; it may instead be, for example, the number of network hops.
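For example, Kademlia (listed below) measures closeness by the bitwise XOR of identifiers. A minimal sketch of that rule, with short made-up identifiers, might look like this:

def xor_distance(a, b):
    # Kademlia-style metric: the distance between two identifiers is their XOR.
    return a ^ b

def closest_node(key_id, node_ids):
    # The host "closest" to the data is the one minimising the XOR distance.
    return min(node_ids, key=lambda n: xor_distance(n, key_id))

# Illustrative 8-bit identifiers; real deployments use much longer ones.
nodes = [0b00010110, 0b10011000, 0b11110001]
print(bin(closest_node(0b10010101, nodes)))  # 0b10011000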
Key-based routing networks
Freenet
GNUnet
Kademlia
Onion routing
Garlic routing
See also
Public-key cryptography
Distributed Hash Table - Overlay Network
Anonymous P2P
References
Anonymity networks
Routing
File sharing networks
Distributed data storage
Network architecture
Cryptographic protocols
Key-based routing | Key-based routing | [
"Technology",
"Engineering"
] | 145 | [
"Network architecture",
"Computing stubs",
"Computer networks engineering",
"Computer network stubs"
] |
1,014,906 | https://en.wikipedia.org/wiki/Cyclomatic%20complexity | Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976.
Cyclomatic complexity is computed using the control-flow graph of the program. The nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program.
One testing strategy, called basis path testing by McCabe who first proposed it, is to test each linearly independent path through the program. In this case, the number of test cases will equal the cyclomatic complexity of the program.
Description
Definition
There are multiple ways to define cyclomatic complexity of a section of source code. One common way is the number of linearly independent paths within it. A set S of paths is linearly independent if the edge set of any path in S is not the union of the edge sets of the paths in some subset of the remaining paths in S. If the source code contained no control flow statements (conditionals or decision points), the complexity would be 1, since there would be only a single path through the code. If the code had one single-condition IF statement, there would be two paths through the code: one where the IF statement is TRUE and another one where it is FALSE. Here, the complexity would be 2. Two nested single-condition IFs, or one IF with two conditions, would produce a complexity of 3.
Another way to define the cyclomatic complexity of a program is to look at its control-flow graph, a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second. The complexity M is then defined as
M = E − N + 2P,
where
E = the number of edges of the graph,
N = the number of nodes of the graph,
P = the number of connected components.
An alternative formulation of this, as originally proposed, is to use a graph in which each exit point is connected back to the entry point. In this case, the graph is strongly connected. Here, the cyclomatic complexity of the program is equal to the cyclomatic number of its graph (also known as the first Betti number), which is defined as
M = E − N + P.
This may be seen as calculating the number of linearly independent cycles that exist in the graph: those cycles that do not contain other cycles within themselves. Because each exit point loops back to the entry point, there is at least one such cycle for each exit point.
For a single program (or subroutine or method), P always equals 1; a simpler formula for a single subroutine is
M = E − N + 2.
Cyclomatic complexity may be applied to several such programs or subprograms at the same time (to all of the methods in a class, for example). In these cases, P will equal the number of programs in question, and each subprogram will appear as a disconnected subset of the graph.
McCabe showed that the cyclomatic complexity of a structured program with only one entry point and one exit point is equal to the number of decision points ("if" statements or conditional loops) contained in that program plus one. This is true only for decision points counted at the lowest, machine-level instructions. Decisions involving compound predicates like those found in high-level languages like IF cond1 AND cond2 THEN ... should be counted in terms of predicate variables involved. In this example, one should count two decision points because at machine level it is equivalent to IF cond1 THEN IF cond2 THEN ....
Cyclomatic complexity may be extended to a program with multiple exit points. In this case, it is equal to
M = π − s + 2,
where π is the number of decision points in the program and s is the number of exit points.
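A straightforward way to apply these formulas is to count the edges, nodes and connected components of the control-flow graph directly. The sketch below does so for a graph given as edge and node lists; it is illustrative code only, not tied to any particular analysis tool.

def cyclomatic_complexity(edges, nodes):
    # M = E - N + 2P, where P is the number of connected components of the
    # control-flow graph viewed as an undirected graph.
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)

    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:  # depth-first search over one component
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adjacency[node] - seen)

    return len(edges) - len(nodes) + 2 * components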
Algebraic topology
An even subgraph of a graph (also known as an Eulerian subgraph) is one in which every vertex is incident with an even number of edges. Such subgraphs are unions of cycles and isolated vertices. Subgraphs will be identified with their edge sets, which is equivalent to only considering those even subgraphs which contain all vertices of the full graph.
The set of all even subgraphs of a graph is closed under symmetric difference, and may thus be viewed as a vector space over GF(2). This vector space is called the cycle space of the graph. The cyclomatic number of the graph is defined as the dimension of this space. Since GF(2) has two elements and the cycle space is necessarily finite, the cyclomatic number is also equal to the 2-logarithm of the number of elements in the cycle space.
A basis for the cycle space is easily constructed by first fixing a spanning forest of the graph, and then considering the cycles formed by one edge not in the forest and the path in the forest connecting the endpoints of that edge. These cycles form a basis for the cycle space. The cyclomatic number also equals the number of edges not in a maximal spanning forest of a graph. Since the number of edges in a maximal spanning forest of a graph is equal to the number of vertices minus the number of components, the formula defines the cyclomatic number.
Cyclomatic complexity can also be defined as a relative Betti number, the size of a relative homology group:
M = b1(G, t) := rank H1(G, t),
which is read as "the rank of the first homology group of the graph G relative to the terminal nodes t". This is a technical way of saying "the number of linearly independent paths through the flow graph from an entry to an exit", where:
"linearly independent" corresponds to homology, and backtracking is not double-counted;
"paths" corresponds to first homology (a path is a one-dimensional object); and
"relative" means the path must begin and end at an entry (or exit) point.
This cyclomatic complexity can be calculated. It may also be computed via the absolute Betti number by identifying the terminal nodes on a given component, or by drawing paths connecting the exits to the entrance. The cyclomatic complexity of the original graph then equals the first Betti number of this new, augmented graph.
It can also be computed via homotopy. If a (connected) control-flow graph is considered a one-dimensional CW complex called X, the fundamental group of X will be the free group on n = E − N + 1 generators. The value of n + 1 is the cyclomatic complexity. The fundamental group counts how many loops there are through the graph up to homotopy, aligning as expected.
Interpretation
In his presentation "Software Quality Metrics to Identify Risk" for the Department of Homeland Security, Tom McCabe introduced the following categorization of cyclomatic complexity:
1 - 10: Simple procedure, little risk
11 - 20: More complex, moderate risk
21 - 50: Complex, high risk
> 50: Untestable code, very high risk
Applications
Limiting complexity during development
One of McCabe's original applications was to limit the complexity of routines during program development. He recommended that programmers should count the complexity of the modules they are developing, and split them into smaller modules whenever the cyclomatic complexity of the module exceeded 10. This practice was adopted by the NIST Structured Testing methodology, which observed that since McCabe's original publication, the figure of 10 had received substantial corroborating evidence. However, it also noted that in some circumstances it may be appropriate to relax the restriction and permit modules with a complexity as high as 15. As the methodology acknowledged that there were occasional reasons for going beyond the agreed-upon limit, it phrased its recommendation as "For each module, either limit cyclomatic complexity to [the agreed-upon limit] or provide a written explanation of why the limit was exceeded."
Measuring the "structuredness" of a program
Section VI of McCabe's 1976 paper is concerned with determining what the control-flow graphs (CFGs) of non-structured programs look like in terms of their subgraphs, which McCabe identified. (For details, see structured program theorem.) McCabe concluded that section by proposing a numerical measure of how close to the structured programming ideal a given program is, i.e. its "structuredness". McCabe called the measure he devised for this purpose essential complexity.
To calculate this measure, the original CFG is iteratively reduced by identifying subgraphs that have a single-entry and a single-exit point, which are then replaced by a single node. This reduction corresponds to what a human would do if they extracted a subroutine from the larger piece of code. (Nowadays such a process would fall under the umbrella term of refactoring.) McCabe's reduction method was later called condensation in some textbooks, because it was seen as a generalization of the condensation to components used in graph theory. If a program is structured, then McCabe's reduction/condensation process reduces it to a single CFG node. In contrast, if the program is not structured, the iterative process will identify the irreducible part. The essential complexity measure defined by McCabe is simply the cyclomatic complexity of this irreducible graph, so it will be precisely 1 for all structured programs, but greater than one for non-structured programs.
Implications for software testing
Another application of cyclomatic complexity is in determining the number of test cases that are necessary to achieve thorough test coverage of a particular module.
It is useful because of two properties of the cyclomatic complexity, M, for a specific module:
M is an upper bound for the number of test cases that are necessary to achieve a complete branch coverage.
M is a lower bound for the number of paths through the control-flow graph (CFG). Assuming each test case takes one path, the number of cases needed to achieve path coverage is equal to the number of paths that can actually be taken. But some paths may be impossible, so although the number of paths through the CFG is clearly an upper bound on the number of test cases needed for path coverage, this latter number (of possible paths) is sometimes less than M.
All three of the above numbers may be equal: branch coverage ≤ cyclomatic complexity ≤ number of paths.
For example, consider a program that consists of two sequential if-then-else statements.
if (c1())
f1();
else
f2();
if (c2())
f3();
else
f4();
In this example, two test cases are sufficient to achieve a complete branch coverage, while four are necessary for complete path coverage. The cyclomatic complexity of the program is 3, as the strongly connected graph for the program contains 9 edges, 7 nodes, and 1 connected component, giving M = 9 − 7 + 1 = 3.
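Running the counting sketch given earlier on a control-flow graph for this example reproduces the same value; the node names are invented for illustration, and the ordinary (non-augmented) graph has 8 edges and 7 nodes, so M = 8 − 7 + 2 = 3, matching the 9 − 7 + 1 = 3 obtained from the strongly connected form.

nodes = ["c1", "f1", "f2", "c2", "f3", "f4", "exit"]
edges = [("c1", "f1"), ("c1", "f2"), ("f1", "c2"), ("f2", "c2"),
         ("c2", "f3"), ("c2", "f4"), ("f3", "exit"), ("f4", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 8 - 7 + 2*1 = 3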
In general, in order to fully test a module, all execution paths through the module should be exercised. This implies a module with a high complexity number requires more testing effort than a module with a lower value since the higher complexity number indicates more pathways through the code. This also implies that a module with higher complexity is more difficult to understand since the programmer must understand the different pathways and the results of those pathways.
Unfortunately, it is not always practical to test all possible paths through a program. Considering the example above, each time an additional if-then-else statement is added, the number of possible paths grows by a factor of 2. As the program grows in this fashion, it quickly reaches the point where testing all of the paths becomes impractical.
One common testing strategy, espoused for example by the NIST Structured Testing methodology, is to use the cyclomatic complexity of a module to determine the number of white-box tests that are required to obtain sufficient coverage of the module. In almost all cases, according to such a methodology, a module should have at least as many tests as its cyclomatic complexity. In most cases, this number of tests is adequate to exercise all the relevant paths of the function.
As an example of a function that requires more than mere branch coverage to test accurately, reconsider the above function. However, assume that to avoid a bug occurring, any code that calls either f1() or f3() must also call the other. Assuming that the results of c1() and c2() are independent, the function as presented above contains a bug. Branch coverage allows the method to be tested with just two tests, such as the following test cases:
c1() returns true and c2() returns true
c1() returns false and c2() returns false
Neither of these cases exposes the bug. If, however, we use cyclomatic complexity to indicate the number of tests we require, the number increases to 3. We must therefore test one of the following paths:
c1() returns true and c2() returns false
c1() returns false and c2() returns true
Either of these tests will expose the bug.
Correlation to number of defects
Multiple studies have investigated the correlation between McCabe's cyclomatic complexity number with the frequency of defects occurring in a function or method. Some studies find a positive correlation between cyclomatic complexity and defects; functions and methods that have the highest complexity tend to also contain the most defects. However, the correlation between cyclomatic complexity and program size (typically measured in lines of code) has been demonstrated many times. Les Hatton has claimed that complexity has the same predictive ability as lines of code.
Studies that controlled for program size (i.e., comparing modules that have different complexities but similar size) are generally less conclusive, with many finding no significant correlation, while others do find correlation. Some researchers question the validity of the methods used by the studies finding no correlation. Although this relation likely exists, it is not easily used in practice. Since program size is not a controllable feature of commercial software, the usefulness of McCabe's number has been questioned. The essence of this observation is that larger programs tend to be more complex and to have more defects. Reducing the cyclomatic complexity of code is not proven to reduce the number of errors or bugs in that code. International safety standards like ISO 26262, however, mandate coding guidelines that enforce low code complexity.
See also
Programming complexity
Complexity trap
Computer program
Computer programming
Control flow
Decision-to-decision path
Design predicates
Essential complexity (numerical measure of "structuredness")
Halstead complexity measures
Software engineering
Software testing
Static program analysis
Maintainability
Notes
References
External links
Generating cyclomatic complexity metrics with Polyspace
The role of empiricism in improving the reliability of future software
McCabe's Cyclomatic Complexity and Why We Don't Use It
Software metrics | Cyclomatic complexity | [
"Mathematics",
"Engineering"
] | 3,060 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
1,015,016 | https://en.wikipedia.org/wiki/Degenerative%20disease | Degenerative disease is the result of a continuous process based on degenerative cell changes, affecting tissues or organs, which will increasingly deteriorate over time.
In neurodegenerative diseases, cells of the central nervous system stop working or die via neurodegeneration. An example of this is Alzheimer's disease. The other two common groups of degenerative diseases are those that affect circulatory system (e.g. coronary artery disease) and neoplastic diseases (e.g. cancers).
Many degenerative diseases exist and some are related to aging. Normal bodily wear or lifestyle choices (such as exercise or eating habits) may worsen degenerative diseases, depending on the specific condition. Sometimes the main or partial cause behind such diseases is genetic. Thus some are clearly hereditary like Huntington's disease. Other causes include viruses, poisons or chemical exposures, while sometimes, the underlying cause remains unknown.
Some degenerative diseases can be cured. In those that can not, it may be possible to alleviate the symptoms.
Examples
Alzheimer's disease (AD)
Amyotrophic lateral sclerosis (ALS, Lou Gehrig's disease)
Cancers
Charcot–Marie–Tooth disease (CMT)
Chronic traumatic encephalopathy
Cystic fibrosis
Some cytochrome c oxidase deficiencies (often the cause of degenerative Leigh syndrome)
Ehlers–Danlos syndrome
Fibrodysplasia ossificans progressiva
Friedreich's ataxia
Frontotemporal dementia (FTD)
Some cardiovascular diseases (e.g. atherosclerotic ones like coronary artery disease, aortic stenosis, congenital defects etc.)
Huntington's disease
Infantile neuroaxonal dystrophy
Keratoconus (KC)
Keratoglobus
Leukodystrophies
Macular degeneration (AMD)
Marfan's syndrome (MFS)
Some mitochondrial myopathies
Mitochondrial DNA depletion syndrome
Mueller–Weiss syndrome
Multiple sclerosis (MS)
Multiple system atrophy
Muscular dystrophies (MD)
Neuronal ceroid lipofuscinosis
Niemann–Pick diseases
Osteoarthritis
Osteoporosis
Parkinson's disease
Pulmonary arterial hypertension
All prion diseases (Creutzfeldt-Jakob disease, fatal familial insomnia etc.)
Progressive supranuclear palsy
Retinitis pigmentosa (RP)
Rheumatoid arthritis
Sandhoff Disease
Spinal muscular atrophy (SMA, motor neuron disease)
Subacute sclerosing panencephalitis
Substance Use Disorder
Tay–Sachs disease
Vascular dementia (might not itself be neurodegenerative, but often appears alongside other forms of degenerative dementia)
See also
Life extension
Senescence
Progressive disease
List of genetic disorders
References
Diseases and disorders
Senescence
Ageing processes | Degenerative disease | [
"Chemistry",
"Biology"
] | 611 | [
"Senescence",
"Ageing processes",
"Metabolism",
"Cellular processes"
] |
1,015,145 | https://en.wikipedia.org/wiki/Fort%20%C3%89ben-%C3%89mael | Fort Eben-Emael (, ) is an inactive Belgian fortress located between Liège and Maastricht, on the Belgian-Dutch border, near the Albert Canal, outside the village of Ében-Émael. It was designed to defend Belgium from a German attack across the narrow belt of Dutch territory in the region. Constructed in 1931–1935, it was reputed to be impregnable and at the time, the largest in the world.
The fort was neutralized by glider-borne German troops (85 men) on 10–11 May 1940 during the Second World War. This was the first strategic airborne operation using paratroopers ever attempted in military history. The action cleared the way for German ground forces to enter Belgium, unhindered by fire from Eben-Emael. While still the property of the Belgian Army, the fort has been preserved as a museum and may be visited.
Location
The fort is located along the Albert Canal where it runs through a deep cutting at the junction of the Belgian and Dutch borders, about northeast of Liège and about south of Maastricht. A huge excavation project was carried out in the 1920s to create the Caster cutting through Mount Saint Peter to keep the canal in Belgian territory. This created a natural defensive barrier that was augmented by the fort, at a location that had been recommended by Brialmont in the 19th century. Eben-Emael was the largest of four forts built in the 1930s as the Fortified Position of Liège I (Position Fortifiée de Liège I (PFL I)). From north to south, the new forts were Eben-Emael, Fort d'Aubin-Neufchâteau, Fort de Battice and Fort de Tancrémont. Tancrémont and Aubin-Neufchâteau are smaller than Eben-Emael and Battice. Several of the 19th century forts designed by General Henri Alexis Brialmont that encircled Liège were reconstructed and designated PFL II.
A great deal of the fort's excavation work was carried out on the canal side, sheltered from view and a convenient location to load excavated spoil into barges to be taken away economically. The fort's elevation above the canal also allowed for efficient interior drainage, making Eben-Emael drier than many of its sister fortifications.
Description
Fort Eben-Emael was a greatly enlarged development of the original Belgian defence works designed by General Henri Alexis Brialmont before World War I. Even in its larger form, the fort comprised a relatively compact ensemble of gun turrets and observation posts, surrounded by a defended ditch. This was in contrast with French thinking for the contemporary Maginot Line fortifications, which were based on the dispersed fort palmé concept, with no clearly defined perimeter, a lesson learned from the experiences of French and Belgian forts in World War I. The new Belgian forts, while more conservative in design than the French ouvrages, included several new features as a result of World War I experience. The gun turrets were less closely grouped. Reinforced concrete was used in place of plain mass concrete, and its placement was done with greater care to avoid weak joints between pours. Ventilation was greatly improved, including an air filtration system for protection against gas attack, magazines were deeply buried and protected, and sanitary facilities and general living arrangements for the troops were given careful attention. Eben-Emael and Battice featured 120 mm and 75 mm guns, giving the fort the ability to bombard targets across a wide area of the eastern Liège region.
Fort Eben-Emael occupies a large hill just to the east of Eben-Emael village (now part of Bassenge) and bordering the Albert Canal. The irregularly-shaped fort is about in the east-west dimension, and about in the north-south dimension. It was more heavily armed than any other in the PFL I. In contrast to the other forts whose main weapons were in turrets, Eben-Emael's main weapons were divided between turrets and casemates. The 60 mm, 75 mm and 120 mm guns were made by the Fonderie Royale des Canons de Belgique (F.R.C.) in the city of Liège. The artillery turrets were so well-designed and constructed the artillerists were not required to wear hearing protection when firing the guns.
Block B.I – entrance block with two 60 mm anti-tank guns (F.R.C Modèle 1936) and machine guns.
Blocks B.II, B.IV and B.VI – flanking casemates located around the perimeter ditch to take the ditch in enfilade with two 60 mm guns and machine guns.
Block B.V – similar to II, IV and VI, with one 60 mm gun.
Cupola 120 – one twin 120 mm gun (F.R.C Modèle 1931) turret, with a range of 17.5 km. There were also three dummy 120 mm turrets.
Cupola Nord and Cupola Sud – each had one retractable turret with two 75 mm guns (F.R.C Modèle 1935), with a range of 10.5 km.
Visé I and II – each house three 75 mm guns, facing south.
Maastricht I and II – each house three 75 mm guns, firing north in the direction of Maastricht.
Canal Nord and Sud – were twinned blocks housing 60 mm guns and machine guns covering the canal. 'Sud' was demolished when the canal was enlarged.
'Mi-Nord and Sud' are machine gun blocks (mitrailleuses) in the main surface of the fort. They were crucial in defending the top of the fort.
'Block O1' overlooks the canal and guarded the Lanaye locks. It housed a 60 mm gun and machine guns.
Underground galleries extend over beneath the hill, connecting the combat blocks and serving the underground barracks, power plant, ammunition magazines and other spaces. Fresh air was obtained from intake vents over the canal.
Personnel
In 1940, Fort Eben-Emael was commanded by Major Jottrand. There were around 1,200 Belgian troops stationed at the fort, divided into three groups. The first group was permanently stationed at the fort and consisted of 200 technical personnel (e.g. doctors, cooks, weapon maintenance technicians, administration staff). The two other groups consisted of 500 artillerists each. In peacetime, one group would be stationed at the fort for one week, and the other group would be in reserve at the village of Wonck, about away. These two groups would change places every week.
Except for some of the officers and NCOs, most of the men were conscripts. The majority of these were reservists and were called up after the Invasion of Poland in 1939. Infantry training was poor, since the men were considered to be purely artillerists.
1940
On 10 May 1940, 78 paratroopers of the German 7th Flieger (later 1st Fallschirmjäger Division) landed on the fortress with DFS 230 gliders, armed with special high explosives to attack the fortress and its guns. Most of the fort's defenders were taken by complete surprise. Eben-Emael's loss delivered a hard blow from which the Belgian Army could not recover.
Present day
Fort Eben-Emael is now open to the public. While still military property, it is administered by the Association Fort Eben-Emael, which provides tours and activities.
See also
Czechoslovak border fortifications
Maginot Line
References
Bibliography
External links
Fort Eben-Emael Official website
A page on Eben-Emael
Eben-Emael at darkplaces.org with a layout of the fort
Story of Pionier-Bataillon(mot)51, Eben-Emael May 1940
Glider assault on Eben Emael as an archetype for the future Infantry Magazine, March–April, 2004 by Paul Witkowski
Armament of Eben-Emael, Aubin-Neufchateau, Battice and Trancémont and Photo gallery of Eben-Emael(Czech only)
Eben-Emael
Military history of Belgium during World War II
World War II museums in Belgium
Museums in Liège Province
Tunnel warfare
Bassenge | Fort Ében-Émael | [
"Engineering"
] | 1,695 | [
"Military engineering",
"Tunnel warfare"
] |
1,015,161 | https://en.wikipedia.org/wiki/Phecda | Phecda , also called Gamma Ursae Majoris (γ Ursae Majoris, abbreviated Gamma UMa, γ UMa), is a star in the constellation of Ursa Major. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Based upon parallax measurements with the Hipparcos astrometry satellite, it is located at a distance of around from the Sun.
It is more familiar to most observers in the northern hemisphere as the lower-left star forming the bowl of the Big Dipper, together with Alpha Ursae Majoris (Dubhe, upper-right), Beta Ursae Majoris (Merak, lower-right) and Delta Ursae Majoris (Megrez, upper-left). Along with four other stars in this well-known asterism, Phecda forms a loose association of stars known as the Ursa Major moving group. Like the other stars in the group, it is a main sequence star, as the Sun is, although somewhat hotter, brighter and larger.
Phecda is located in relatively close physical proximity to the prominent Mizar–Alcor star system. The two are separated by an estimated distance of ; much closer than the two are from the Sun. The star Merak is separated from Phecda by .
Nomenclature
γ Ursae Majoris (Latinised to Gamma Ursae Majoris) is the star's Bayer designation.
It bore the traditional names Phecda or Phad, derived from the Arabic phrase fakhth al-dubb ('thigh of the bear'). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Phecda for this star.
To the Hindus this star was known as Pulastya, one of the seven rishis.
In Chinese, (), meaning Northern Dipper, refers to an asterism equivalent to the Big Dipper. Consequently, the Chinese name for Gamma Ursae Majoris itself is (, ) and (, ).
Properties
Phecda is an Ae star, which is surrounded by an envelope of gas that is adding emission lines to the spectrum of the star; hence the 'e' suffix in the stellar classification of A0 Ve. It is 2.4 times more massive than the Sun and is 333 million years old. It rotates rapidly with a rotational velocity of 386 km/s at its equator, which causes it to have an oblate shape. The equatorial radius measures , while the polar radius measures . The effective temperature varies as well, from 6,750 K in the equator to 10,520 K in the poles.
Phecda is also an astrometric binary: the companion star regularly perturbs the Ae-type primary star, causing the primary to wobble around the barycenter. From this, an orbital period of 20.5 years has been calculated. The secondary star is a K-type main-sequence star that is 0.79 times as massive as the Sun, and with a surface temperature of .
References
A-type main-sequence stars
Ursa Major moving group
Phecda
Ursae Majoris, Gamma
Big Dipper
Ursa Major
Durchmusterung objects
Ursae Majoris, 64
103287
058001
4554
Astrometric binaries | Phecda | [
"Astronomy"
] | 735 | [
"Ursa Major",
"Constellations"
] |
1,015,240 | https://en.wikipedia.org/wiki/Rollback%20%28data%20management%29 | In database technologies, a rollback is an operation which returns the database to some previous state. Rollbacks are important for database integrity, because they mean that the database can be restored to a clean copy even after erroneous operations are performed. They are crucial for recovering from database server crashes; by rolling back any transaction which was active at the time of the crash, the database is restored to a consistent state.
The rollback feature is usually implemented with a transaction log, but can also be implemented via multiversion concurrency control.
Cascading rollback
A cascading rollback occurs in database systems when a transaction (T1) causes a failure and a rollback must be performed. Other transactions dependent on T1's actions must also be rolled back due to T1's failure, thus causing a cascading effect. That is, one transaction's failure causes many to fail.
Because a cascading rollback is not a desirable outcome, practical database recovery techniques guarantee cascadeless rollback; whether cascading rollbacks can occur at all depends on the transaction schedules permitted by the database administrator (DBA).
SQL
SQL stands for Structured Query Language, a language used to access, update and manipulate databases.
In SQL, ROLLBACK is a command that causes all data changes since the last START TRANSACTION or BEGIN to be discarded by the relational database management systems (RDBMS), so that the state of the data is "rolled back" to the way it was before those changes were made.
A ROLLBACK statement will also release any existing savepoints that may be in use.
In most SQL dialects, ROLLBACKs are connection specific. This means that if two connections are made to the same database, a ROLLBACK made in one connection will not affect any other connections. This is vital for proper concurrency.
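As a minimal illustration of this behaviour, the sketch below uses Python's built-in sqlite3 module; the table, column names and values are invented for the example and do not come from any particular vendor's documentation.

```python
import sqlite3

# In-memory database; the table and rows are purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()                       # make the starting state permanent

# Begin a transaction, change data, then discard the change.
conn.execute("UPDATE accounts SET balance = 0 WHERE name = 'alice'")
conn.rollback()                     # equivalent to issuing ROLLBACK in SQL

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)                      # prints 100: the update was rolled back
conn.close()
```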
Usage outside databases
Rollbacks are not exclusive to databases: any stateful distributed system may use rollback operations to maintain consistency. Examples of distributed systems that can support rollbacks include message queues and workflow management systems. More generally, any operation that resets a system to its previous state before another operation or series of operations can be viewed as a rollback.
See also
Savepoint
Commit
Undo
Schema migration
Notes
References
"ROLLBACK Transaction", Microsoft SQL Server.
"Sql Commands", MySQL.
Database theory
Transaction processing
Reversible computing
Database management systems | Rollback (data management) | [
"Physics"
] | 475 | [
"Spacetime",
"Reversible computing",
"Physical quantities",
"Time"
] |
1,015,267 | https://en.wikipedia.org/wiki/Harvester%20%28forestry%29 | A harvester is a type of heavy forestry vehicle employed in cut-to-length logging operations for felling, delimbing and bucking trees. A forest harvester is typically employed together with a skidder that hauls the logs to a roadside landing, for a forwarder to pick up and haul away.
History
Forest harvesters were mainly developed in Sweden and Finland and today do practically all of the commercial felling in these countries. The first fully mobile timber "harvester", the PIKA model 75, was introduced in 1973 by Finnish systems engineer Sakari Pinomäki and his company PIKA Forest Machines. The first single grip harvester head was introduced in the early 1980s by Swedish company SP Maskiner. Their use has become widespread throughout the rest of Northern Europe, particularly in the harvesting of plantation forests.
Before modern harvesters were developed in Finland and Sweden, two inventors from Texas developed a crude tracked unit in the US, called the Mammoth Tree Shears, that sheared off trees at the base up to in diameter. After shearing off the tree, the operator could use his controls to cause the tree to fall either to the right or left. Unlike a harvester, it did not delimb the tree after felling it.
Uses
Harvesters are employed effectively in level to moderately steep terrain for clearcutting areas of forest. For very steep hills or for removing individual trees, ground crews working with chain saws are still preferred in some countries. In northern Europe small and manoeuvrable harvesters are used for thinning operations, manual felling is typically only used in extreme conditions, where tree size exceeds the capacity of the harvester head or by small woodlot owners.
The principle aimed for in mechanised logging is "no feet on the forest floor", and the harvester and forwarder allow this to be achieved. Keeping workers inside the driving cab of the machine provides a safer and more comfortable working environment for industrial scale logging.
Harvesters are built on a robust all-terrain vehicle, either wheeled, tracked, or on a walking excavator. The vehicle may be articulated to provide tight turning capability around obstacles. A diesel engine provides power for both the vehicle and the harvesting mechanism through hydraulic drive. An extensible, articulated boom, similar to that on an excavator, reaches out from the vehicle to carry the harvester head. Some harvesters are adaptations of excavators with a new harvester head, while others are purpose-built vehicles.
"Combi" machines are available which combine the felling capability of a harvester with the load-carrying capability of a forwarder, allowing a single operator and machine to fell, process and transport trees. These novel type of vehicles are only competitive in operations with short distances to the landing.
Felling head
A typical harvester head consists of (from bottom to top, with head in vertical position)
a chain saw to cut the tree at its base, and cut it to length. The saw is hydraulically powered, rather than using the 2-stroke engine of a portable version. It has a stronger chain and a higher power output than any saw a person can carry.
two or more curved delimbing knives which reach around the trunk to remove branches.
two feed rollers to grasp the tree. The wheels pivot apart to allow the harvester head to grasp the tree and pivot together to hug the tree tightly. The wheels are driven in rotation to force the cut tree stem through the delimbing knives.
diameter sensors to calculate the volume of timber harvested in conjunction with
a measuring wheel which measures the length of the stem as it is fed through the head.
One operator in the vehicle's cab can control all of these functions. A control computer can simplify mechanical movements and can keep records of the length and diameter of trees cut. Length is computed by either counting the rotations of the gripping wheels or, more commonly, using the measuring wheel. Diameter is computed from the pivot angle of the gripping wheels or delimbing knives when hugging the tree. Length measurement also can be used for automated cutting of the tree into predefined lengths. Computer software can predict the volume of each stem based on analysing stems harvested previously. This information when used in conjunction with price lists for each specific log specification enables the optimisation of log recovery from the stem.
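A rough sketch of this measurement and bucking logic is shown below; the wheel circumference, knife-arm geometry and target log lengths are invented for illustration and do not describe any particular harvester's control computer.

```python
import math

# All constants here are illustrative assumptions, not manufacturer data.
MEASURING_WHEEL_CIRCUMFERENCE_M = 0.50   # assumed circumference of the measuring wheel
ARM_LENGTH_M = 0.60                      # assumed length of each delimbing-knife arm
TARGET_LOG_LENGTHS_M = [4.9, 3.7, 3.1]   # assumed predefined cutting lengths, longest first

def stem_length_from_wheel(rotations: float) -> float:
    """Length of stem fed through the head, from measuring-wheel rotations."""
    return rotations * MEASURING_WHEEL_CIRCUMFERENCE_M

def stem_diameter_from_pivot(pivot_angle_deg: float) -> float:
    """Very simplified geometry: the opening between two opposed arms that
    each pivot outwards by the given angle approximates the stem diameter."""
    return 2 * ARM_LENGTH_M * math.sin(math.radians(pivot_angle_deg))

def plan_crosscuts(total_length_m: float):
    """Greedy bucking: cut the longest predefined log lengths that still fit."""
    cuts, remaining = [], total_length_m
    for target in TARGET_LOG_LENGTHS_M:
        while remaining >= target:
            cuts.append(target)
            remaining -= target
    return cuts, remaining            # logs to cut, plus unusable offcut

length = stem_length_from_wheel(rotations=36)        # e.g. 18 m of stem fed through
diameter = stem_diameter_from_pivot(pivot_angle_deg=14)
print(f"stem ~{length:.1f} m long, ~{diameter * 100:.0f} cm diameter at the knives")
print(plan_crosscuts(length))
```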
Harvesters are routinely available for cutting trees up to in diameter, built on vehicles weighing up to , with a boom reaching up to radius. Larger, heavier vehicles do more damage to the forest floor, but a longer reach helps by allowing harvesting of more trees with fewer vehicle movements.
The approximate equivalent type of vehicle in full-tree logging systems are feller-bunchers.
Manufacturers
NEUSON Forest
Rottne
Logset
EcoLog
Kone Ketonen
AFM-Forest
Barko Hydraulics
Caterpillar
Komatsu Forest
Ponsse
SP Maskiner
Tigercat
Timberjack (owned by John Deere)
Kesla Oyj
Prosilva
Sampo-Rosenlew
References
External links
Harvesters specs
Engineering vehicles
Logging
Forestry equipment
"Engineering"
] | 1,078 | [
"Engineering vehicles"
] |
1,015,276 | https://en.wikipedia.org/wiki/Shoe%20size | A shoe size is an indication of the fitting size of a shoe for a person.
There are a number of different shoe-size systems used worldwide. While all shoe sizes use a number to indicate the length of the shoe, they differ in exactly what they measure, what unit of measurement they use, and where the size 0 (or 1) is positioned. Some systems also indicate the shoe width, sometimes also as a number, but in many cases by one or more letters. Some regions use different shoe-size systems for different types of shoes (e.g. men's, women's, children's, sport, and safety shoes). This article sets out several complexities in the definition of shoe sizes. In practice, shoes are often tried on for both size and fit before they are purchased.
Deriving the shoe size
Foot versus shoe and last
The length of a person's foot is commonly defined as the distance between two parallel lines that are perpendicular to the foot and in contact with the most prominent toe and the most prominent part of the heel. Foot length is measured with the subject standing barefoot and the weight of the body equally distributed between both feet.
The sizes of the left and right feet are often slightly different. In this case, both feet are measured, and purchasers of mass-produced shoes are advised to purchase a shoe size based on the larger foot, as most retailers do not sell pairs of shoes in non-matching sizes.
Each size of shoe is considered suitable for a small interval of foot lengths, typically limited by half-point of the shoe size system.
A shoe-size system can refer to three characteristic lengths:
The median length of feet for which a shoe is suitable. For customers, this measure has the advantage of being directly related to their body measures. It applies equally to any type, form, or material of shoe. However, this measure is less popular with manufacturers, because it requires them to test carefully for each new shoe model, for which range of foot sizes it is recommendable. It puts on the manufacturer the burden of ensuring that the shoe will fit a foot of a given length.
The length of the inner cavity of the shoe. This measure has the advantage that it can be measured easily on the finished product. However, it will vary with manufacturing tolerances and only gives the customer very crude information about the range of foot sizes for which the shoe is suitable.
The length of the "last", the foot-shaped template over which the shoe is manufactured. This measure is the easiest one for the manufacturer to use, because it identifies only the tool used to produce the shoe. It makes no promise about manufacturing tolerances or for what size of foot the shoe is actually suitable. It leaves all responsibility and risk of choosing the correct size with the customer. Further, the last can be measured in several different ways, resulting in different measurements.
All these measures differ substantially from one another for the same shoe. For example, the inner cavity of a shoe must typically be 15 mm longer than the foot, and the shoe last would be 2 size points larger than the foot, but this varies between different types of shoes and the shoe size system used. The typical range lies between for the UK/US size system and for the European size system, but may extend to and .
Length
Sizing systems also differ in the units of measurement they use. This also results in different increments between shoe sizes, because usually only "full" or "half" sizes are made.
The following length units are commonly used today to define shoe-size systems:
The Paris point equates to . Whole sizes are incremented by 1 Paris point; this corresponds to between half sizes. This unit is commonly used in Continental Europe, and Russia and former USSR countries.
The barleycorn is an old English unit that equates to . This is the basis for current UK and North American shoe sizes. "Today in America, the sizing generally adheres relatively closely to a formula of 3 times the length of the foot in inches (the barleycorn length), less a constant (22 for men and 21 for women). In the UK, shoe sizes follow a similar method of computation, except that the constant is 23, and it is the same for men and women".
Metric measurements in millimetres (mm) or centimetres (cm), with intervals of 5 mm and 7.5 mm are used in the international Mondopoint system (USSR/Russia and East Asia).
Since the early 2000s, labels on sports shoes typically include sizes measured in all four systems: EU, UK, US, and Mondopoint.
Zero point
The sizing systems also place size 0 (or 1) at different locations:
Size 0 as a foot's length of 0. The shoe size is directly proportional to the length of the foot in the chosen unit of measurement. Sizes of children's, men's, and women's shoes, as well as sizes of different types of shoes, can be compared directly. This is used with the Mondopoint system (USSR/Russia and East Asia).
Size 0 as the length of the shoe's inner cavity of 0. The shoe size is then directly proportional to the inner length of the shoe. This is used with systems that also take the measurement from the shoe. While sizes of children's, men's and women's shoes can be compared directly, this is not necessarily true for different types of shoes that require a different amount of "wiggle room" in the toe box. This is used with the Continental European system.
Size 0 (or 1) can just be simply a shoe of a given length. Typically, this will be the shortest length deemed practical; but this can be different for children's, teenagers', men's, and women's shoes - making it difficult to compare sizes. In America, the baseline for women's shoes is seven inches and for men's it is 7 in.; in the UK, the baseline for both is 7 in.
Width
Some systems also include the width of a foot (or the girth of a shoe last), but do so in a variety of ways:
Measured foot width in millimetres (mm) - this is done with the Mondopoint system.
Measured width as a letter (or combination of letters), which is taken from a table (indexed to length and width/girth) or just assigned on an ad-hoc basis. Examples are (each starting with the narrowest width):
AAA, AA, A, B, C, D, E, EE, EEE is the typical North American system and follows the Brannock device standard; under this system B is narrow, C is regular, D is medium, E is wide, EE is extra wide, and so on. The unlettered D size is the norm for men and B for women.
4A, 3A, 2A, A, B, C, D, E, 2E, 3E, 4E, 5E, 6E (variant North American).
C, D, E, F, G, H (common UK; "medium" is usually F but varies by manufacturer—makers Edward Green and Crockett & Jones, among others, use E instead, but one maker's E is not necessarily the same size as another's).
N (narrow), M (medium) or R (regular), W (wide), XW (extra wide).
For children's sizes in North America, typical letters used are M or B (medium), W or D (wide), EW or 2E (extra wide).
The width for which these sizes are suitable can vary significantly between manufacturers. The A–E width indicators used by most American, Canadian, and some British shoe manufacturers are typically based on the width of the foot, and common step sizes are 3/16 inch (4.8 mm).
Difficulties
There could be differences between various shoe size tables from shoemakers and shoe stores. They are usually due to the following factors:
Different methods of measuring the shoes, different manufacturing processes, or different allowances even when the same system is used.
An indication in centimetres or inches can mean the length of the foot or the length of the shoe's inner cavity.
Differing amounts of wiggle room required for different sizes of shoes.
For wide feet, a shoe several sizes larger (and actually too long) may be required and may also result in inconsistent size indications when different typical widths are attributed to specific shoe sizing systems.
Some tables for children take future growth into account. The shoe size is then larger than what would correspond to the actual length of the foot.
Conversion tables available on the Web often contain obvious errors, not taking into account different zero points or wiggle room.
Although shoe size systems are not fully standardised, the ISO/TC 137 had released a technical specification ISO/TS 19407:2015 for converting shoe sizes across various local sizing systems. Even though the problem of converting shoe sizes accurately has yet to be fully resolved, this standard serves as "a good compromise solution" for shoe-buyers.
Common sizing systems
United Kingdom
Shoe size in the United Kingdom, Ireland, India, Pakistan and South Africa is based on the length of the last used to make the shoes, measured in barleycorns ( inch) starting from the smallest size deemed practical, which is called size zero. It is not formally standardised. The last is typically longer than the foot heel to toe length by to in or to 2 barleycorns, so to determine the shoe size based on actual foot length one must add 2 barleycorns.
A child's size zero is equivalent to 4 inches (a hand = 12 barleycorns = 10.16 cm), and the sizes go up to size (measuring barleycorns, or ). Thus, the calculation for a children's shoe size in the UK is:
child shoe size = 3 × last length (in inches) - 12
equivalent to:
child shoe size = last length (in barleycorns) - 12
An adult size one is then the next size up (26 barleycorns, or ) and each size up continues the progression in barleycorns. The calculation for an adult shoe size in the UK is thus:
adult shoe size = 3 × last length (in inches) - 25
equivalent to:
adult shoe size = last length (in barleycorns) - 25
Although this sizing standard is nominally for both men and women, some manufacturers use different numbering for women's UK sizing.
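The UK calculation above can be sketched in Python as follows; the two-barleycorn foot-to-last allowance is the typical figure quoted in this article rather than a universal standard, and real manufacturers vary.

```python
BARLEYCORN_IN = 1 / 3           # one barleycorn is a third of an inch

def uk_shoe_size(foot_length_in: float, adult: bool = True,
                 last_allowance_barleycorns: float = 2) -> float:
    """UK size from foot length in inches, per the barleycorn-based formula
    above (children's size zero at 12 barleycorns of last length, adult
    size 1 at 26 barleycorns).  The 2-barleycorn foot-to-last allowance is
    the typical figure quoted in the text, not an exact standard."""
    last_length_in = foot_length_in + last_allowance_barleycorns * BARLEYCORN_IN
    offset = 25 if adult else 12    # adult sizes restart after children's 13
    return 3 * last_length_in - offset

print(uk_shoe_size(10.0))              # adult foot of 10 in -> about UK 7
print(uk_shoe_size(6.0, adult=False))  # child foot of 6 in -> about UK 8
```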
In Australia and New Zealand, the UK system is followed for men and children's footwear. Women's footwear follows the US sizings.
In Mexico, shoes are sized either according to the foot length they are intended to fit, in cm, or alternatively to another variation of the barleycorn system, with sizes calculated approximately as:
equivalent to:
.
United States
In the United States and Canada, the traditional system is similar to the British system but there are different zero points for children's, men's, and women's shoe sizes. The most common is the customary system where men's shoes are one size longer than the UK equivalent, making a men's 13 in the US the same size as a men's 12 in the UK.
Customary
The customary system is offset by barleycorn, or , comparing to the UK sizes. The men's range starts at size 1, with zero point corresponding to the children's size 13 which equals barleycorns or .
However, most US manufacturers use greater offsets, such as and 1 barleycorns. Therefore, in current practice, US men's size 1 equals 25 barleycorns, or , so the calculation for a male shoe size in the United States is:
male shoe size = 3 × last length (in inches) - 24
equivalent to:
male shoe size = last length (in barleycorns) - 24
In the "standard" or "FIA" (Footwear Industries of America) scale, women's sizes are men's sizes plus 1 (so a men's is a women's ):
female shoe size = 3 × last length (in inches) - 23
equivalent to:
female shoe size = last length (in barleycorns) - 23
There is also the "common" scale, where women's sizes are equal to men's sizes plus .
Children's shoes start from size zero, which is equivalent to inches ( barleycorns = 99.48 mm), and end at . Thus the formula for children's sizes in the US is
child shoe size = 3 × last length (in inches) - 11.75
equivalent to:
child shoe size = last length (in barleycorns) - 11.75
Alternatively, a Mondopoint-based scale running from K4 to K13 and then 1 to 7 is in use. K4 to K9 are toddler sizes, K10 to K13 are pre-school and 1 to 7 are grade school sizes.
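A comparable sketch for the US customary scale is given below; the offsets follow from the zero points quoted above (men's size 1 at 25 barleycorns, children's size zero at about 99.5 mm), and the exact constants in use differ between manufacturers.

```python
BARLEYCORN_IN = 1 / 3

def us_shoe_size(last_length_in: float, scale: str = "men") -> float:
    """US customary size from last length in inches.  The offsets are
    derived from the zero points described above; 'women' uses the FIA
    convention (men's size plus 1)."""
    offsets = {"men": 24, "women": 23, "children": 11.75}
    return 3 * last_length_in - offsets[scale]

# A last of 25 barleycorns (8 1/3 in) is US men's size 1, per the text above.
print(us_shoe_size(25 * BARLEYCORN_IN, "men"))        # 1.0
print(us_shoe_size(25 * BARLEYCORN_IN, "women"))      # 2.0 on the FIA scale
```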
Brannock Device
The Brannock Device is a measuring instrument invented by Charles F. Brannock in 1925 found in many shoe stores. The recent formula used by the Brannock device assumes a foot length of 2 barleycorns less than the length of the last; thus, men's size 1 is equivalent to a last's length of and foot's length of , and children's size 1 is equivalent to last's length and foot's length.
The device also measures the length of the arch, or the distance between the heel and the ball (metatarsal head) of the foot. For this measurement, the device has a shorter scale at the instep of the foot with an indicator that slides into position. If this scale indicates a larger size, it is taken in place of the foot's length to ensure proper fitting.
For children's sizes, additional wiggle room is added to allow for growth.
The device also measures the width of the foot and assigns it designations of AAA, AA, A, B, C, D, E, EE, or EEE. The widths are inches apart and differ by shoe length.
Some shoe stores and medical professionals use optical 3D surface scanners to precisely measure the length and width of both feet and recommend the appropriate shoe model and size.
Continental Europe
In the Continental European system, the shoe size is the length of the last, expressed in Paris points or , for both sexes and for adults and children alike. The last is typically longer than the foot heel to toe length by to , or 2 to Paris points, so to determine the shoe size based on actual foot length one must add 2 Paris points.
Because a Paris point is two-thirds of a centimetre, a centimetre is 1.5 Paris points, and the formula is as follows:
shoe size = 1.5 × last length (in cm)
equivalent to:
shoe size = 1.5 × foot length (in cm) + 2
The Continental European system is used in Austria, Belgium, Denmark, France, Germany, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, and most other continental European countries. It is also used in Middle Eastern countries (such as Iran), Brazil—which uses the same method but subtracts 2 from the final result, in effect measuring foot size instead of last size—and, commonly, Hong Kong. The system is sometimes described as Stich size (from Pariser Stich, the German name for the Paris point), or size (from a German name of a micrometer for internal measurements).
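The Paris-point formula might be expressed as in the sketch below; the two-Paris-point allowance between foot and last is the typical figure given above rather than an exact standard.

```python
PARIS_POINT_CM = 2 / 3          # one Paris point is two-thirds of a centimetre

def eu_shoe_size(foot_length_cm: float,
                 last_allowance_points: float = 2) -> float:
    """Continental European size: the last length expressed in Paris points."""
    last_length_cm = foot_length_cm + last_allowance_points * PARIS_POINT_CM
    return last_length_cm / PARIS_POINT_CM   # i.e. 1.5 * last length in cm

print(round(eu_shoe_size(25.0), 1))   # a 25 cm foot -> roughly EU 39.5
```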
Mondopoint
The Mondopoint shoe length system is widely used in the sports industry to size athletic shoes, ski boots, skates, and pointe ballet shoes; it was also adopted as the primary shoe sizing system in the Soviet Union, Russia, East Germany, China, Japan, Taiwan, and South Korea, and as an optional system in the United Kingdom, India, Mexico, and European countries. The Mondopoint system is also used by NATO and other military services.
The Mondopoint system was introduced in the 1970s by International Standard ISO 2816:1973 "Fundamental characteristics of a system of shoe sizing to be known as Mondopoint" and ISO 3355:1975 "Shoe sizes – System of length grading (for use in the Mondopoint system)". ISO 9407:2019, "Shoe sizes—Mondopoint system of sizing and marking", is the current version of the standard.
The Mondopoint system is based on average foot length and foot width for which the shoe is suitable, measured in millimetres. The length of the foot is measured as horizontal distance between the perpendiculars in contact with the end of the most prominent toe and the most prominent part of the heel. The width of the foot is measured as horizontal distance between vertical lines in contact with the first and fifth metatarsophalangeal joints. The perimeter of the foot is the length of the foot circumference, measured with a flexible tape at the same points as foot width. The origin of the grade is zero.
The labeling typically includes foot length, followed by an optional foot width: a shoe size of 280/110 indicates a foot length of and width of . Other customary markings, such as EU, UK and US sizes, may also be used.
Because Mondopoint takes the foot width into account, it allows for better fitting than most other systems. A given shoe size shall fit every foot with indicated average measurements, and those differing by no more than a half-step of the corresponding interval grid. Standard foot lengths are defined with interval steps of 5 mm for casual footwear and steps of 7.5 mm for specialty (protective) footwear. The standard is maintained by ISO Technical Committee 137 "Footwear sizing designations and marking systems."
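A small helper can illustrate how a measured foot maps onto the Mondopoint grid; the 5 mm step comes from the text above, while rounding to the nearest labelled length is an assumption about how a fitter might pick the closest size.

```python
def mondopoint_label(foot_length_mm: float, foot_width_mm: float,
                     step_mm: float = 5.0) -> str:
    """Snap a measured foot to the nearest Mondopoint length on the given
    interval grid and return a label such as '280/110'."""
    length = round(foot_length_mm / step_mm) * step_mm
    return f"{length:.0f}/{foot_width_mm:.0f}"

print(mondopoint_label(277.3, 110))   # -> '275/110'
print(mondopoint_label(278.0, 108))   # -> '280/108'
```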
East Asia
In Japan, mainland China, Taiwan, and South Korea, the Mondopoint system is used as defined by national standard Japanese Industrial Standards (JIS) S 5037:1998 and its counterparts Guobiao (GB/T) 3293.1-1998, Chinese National Standard (CNS) 4800-S1093:2000 and Korean Standards Association (KS) M 6681:2007.
Foot length and girth (foot circumference) are taken into account. The foot length is indicated in centimetres; an increment of 5 mm is used.
The length is followed by designators for girth (A, B, C, D, E, EE, EEE, EEEE, F, G), which are specified in an indexed table as foot circumference in millimetres for each given foot length; foot width is also included as supplemental information. There are different tables for men's, women's, and children's (less than 12 years of age) shoes. Not all designators are used for all genders and in all countries. For example, the largest girth for women in Taiwan is EEEE, whereas in Japan, it is F.
The foot length and width can also be indicated in millimetres, separated by a slash or a hyphen.
Soviet Union (Russia, Commonwealth of Independent States)
Historically the Soviet Union used the European (Paris point) system, but the Mondopoint metric system was introduced in the 1980s by GOST 24382-80 "Sizes of Sport Shoes" (based on ISO 2816:1973) and GOST 11373-88 "Shoe Sizes" (based on ISO 3355:1975), and lately by GOST R 58149-2018 (based on ISO 9407:1991)
Standard metric foot sizes can be converted to the nearest Paris point ( cm) sizes using approximate conversion tables; shoes are marked with both foot length in millimetres, as for pointe ballet shoe sizes, and last length in European Paris point sizes (although such converted Stichmaß sizes may come to 1 size smaller than comparable European-made adult footwear, and up to sizes smaller for children's footwear, according to ISO 19407 shoe size definitions). Foot lengths are aligned to 5 mm intervals for sports and casual shoes, and 7.5 mm for protective/safety shoes. Optional foot width designations includes narrow, normal (medium or regular), and wide grades.
Infant sizes start at 16 (95 mm) and pre-school kids at 23 (140 mm); schoolchildren sizes span 32 (202.5 mm) to 40 (255 mm) for girls and 32 to 44 (285 mm) for boys. Adult sizes span 33 (210 mm) to 44 for women and 38 (245 mm) to 48 (310 mm) for men.
ISO 19407 and shoe size conversion
ISO/TS 19407:2023 Footwear - Sizing - Conversion of sizing systems is a technical specification from the International Organization for Standardization. It contains a basic description of, and conversion tables for, the major shoe sizing systems, including Mondopoint with length steps of 5 mm and 7.5 mm, the European Paris point system, and the UK -inch system. The standard has also been adopted as Russian GOST R 57425-2017.
The standard is maintained by ISO/TC 137, which also developed ISO/TS 19408:2015 Footwear - Sizing - Vocabulary and terminology; in development are companion standards ISO/TS 19409 "Footwear - Sizing - Measurement of last dimensions" and ISO/TS 19410 "Footwear - Sizing - Inshoe measurement".
Shoe sizing
The adult shoe sizes are calculated from typical last length, which is converted from foot length in millimetres by adding an allowance of two shoe sizes:
where L is foot length in millimetres.
Direct conversion between adult UK, Continental European and Mondopoint shoe size systems is derived as follows:
Using these formulas, the standard derives shoe size tables for adults and children, based on actual foot length measurement (insole) in millimetres. Typical last length ranges are also included (13 to 25 mm over foot length for adults, 8% greater than foot length plus 6 mm for children).
Exact foot lengths may contain repeating decimals because the formulas include division by 3; in practice, approximate interval steps of 6.67 mm and 8.47 mm are used, and sizes are rounded to either the nearest half size or closest matching Mondopoint size.
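The rounding behaviour can be sketched as below; the conversion constants themselves are not reproduced here, only the rounding to half sizes that the specification describes.

```python
def round_to_half_size(size: float) -> float:
    """Round a computed shoe size to the nearest half size, as ISO/TS 19407
    does when exact conversions produce repeating decimals."""
    return round(size * 2) / 2

print(round_to_half_size(42.33))   # -> 42.5
print(round_to_half_size(8.16))    # -> 8.0
```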
Size marking
It is recommended to include size markings in each of the four sizing systems on the shoe label and on the package. The principal system used for manufacturing the shoe should be placed first and emphasized in boldface.
The standard includes quick conversion tables for adult shoe size marking; they provide matching sizes for shoes marked in Mondopoint, European, and UK systems. Converted values are rounded to a larger shoe size to increase comfort.
Conversion between US and UK sizing
See also
Clothing sizes
List of shoe styles
Shoes
References
External links
IS 8751-1 (1978): Footwear sizes in mondopoint system, Part 1: Fundamental characteristics
IS 8751-2 (1978): Footwear sizes in mondopoint system, Part 2: Length grading
Anthropometry
Footwear
Sizes in clothing | Shoe size | [
"Physics",
"Mathematics"
] | 4,587 | [
"Sizes in clothing",
"Quantity",
"Physical quantities",
"Size"
] |
1,015,424 | https://en.wikipedia.org/wiki/MML%20%28programming%20language%29 | A man–machine language (MML) is a specification language. MMLs are typically defined to standardize the interfaces for managing a telecommunications or network device from a console.
ITU-T Z.300 series recommendations define an MML that has been extended by Telcordia Technologies (formerly Bellcore) to form Transaction Language 1.
Further reading
Specification languages
ITU-T recommendations | MML (programming language) | [
"Engineering"
] | 80 | [
"Software engineering",
"Specification languages"
] |
1,015,432 | https://en.wikipedia.org/wiki/Guidelines%20for%20the%20Definition%20of%20Managed%20Objects | The Guidelines for the Definition of Managed Objects (GDMO) is a specification for defining managed objects of interest to the Telecommunications Management Network (TMN) for use in Common Management Information Protocol (CMIP).
GDMO is analogous to the Structure of Management Information used for defining a management information base for Simple Network Management Protocol (SNMP). For example, both represent a hierarchy of managed objects and use ASN.1 for syntax.
GDMO is defined in ISO/IEC 10165 and ITU-T X.722.
Network management
ITU-T recommendations
ITU-T G Series Recommendations | Guidelines for the Definition of Managed Objects | [
"Technology",
"Engineering"
] | 126 | [
"Computing stubs",
"Computer networks engineering",
"Network management",
"Computer network stubs"
] |
1,015,454 | https://en.wikipedia.org/wiki/Platelet-derived%20growth%20factor | Platelet-derived growth factor (PDGF) is one among numerous growth factors that regulate cell growth and division. In particular, PDGF plays a significant role in blood vessel formation, the growth of blood vessels from already-existing blood vessel tissue, mitogenesis, i.e. proliferation, of mesenchymal cells such as fibroblasts, osteoblasts, tenocytes, vascular smooth muscle cells and mesenchymal stem cells as well as chemotaxis, the directed migration, of mesenchymal cells. Platelet-derived growth factor is a dimeric glycoprotein that can be composed of two A subunits (PDGF-AA), two B subunits (PDGF-BB), or one of each (PDGF-AB).
PDGF is a potent mitogen for cells of mesenchymal origin, including fibroblasts, smooth muscle cells and glial cells. In both mouse and human, the PDGF signalling network consists of five ligands, PDGF-AA through -DD (including -AB), and two receptors, PDGFRalpha and PDGFRbeta. All PDGFs function as secreted, disulphide-linked homodimers, but only PDGFA and B can form functional heterodimers.
Though PDGF is synthesized, stored (in the alpha granules of platelets), and released by platelets upon activation, it is also produced by other cells including smooth muscle cells, activated macrophages, and endothelial cells.
Recombinant PDGF is used in medicine to help heal chronic ulcers, to heal ocular surface diseases and in orthopedic surgery and periodontics as an alternative to bone autograft to stimulate bone regeneration and repair.
Types and classification
There are five different isoforms of PDGF that activate cellular response through two different receptors. Known ligands include: PDGF-AA (PDGFA), -BB (PDGFB), -CC (PDGFC), and -DD (PDGFD), and -AB (a PDGFA and PDGFB heterodimer). The ligands interact with the two tyrosine kinase receptor monomers, PDGFRα (PDGFRA) and -Rβ (PDGFRB). The PDGF family also includes a few other members of the family, including the VEGF sub-family.
Mechanisms
The receptor for PDGF, PDGFR is classified as a receptor tyrosine kinase (RTK), a type of cell surface receptor. Two types of PDGFRs have been identified: alpha-type and beta-type PDGFRs. The alpha type binds to PDGF-AA, PDGF-BB and PDGF-AB, whereas the beta type PDGFR binds with high affinity to PDGF-BB and PDGF-AB.
PDGF binds to the PDGFR ligand binding pocket located within the second and third immunoglobulin domains. Upon activation by PDGF, these receptors dimerise, and are "switched on" by auto-phosphorylation of several sites on their cytosolic domains, which serve to mediate binding of cofactors and subsequently activate signal transduction, for example, through the PI3K pathway or through reactive oxygen species (ROS)-mediated activation of the STAT3 pathway. Downstream effects of this include regulation of gene expression and the cell cycle.
The role of PI3K has been investigated by several laboratories. Accumulating data suggests that, while this molecule is, in general, part of growth signaling complex, it plays a more profound role in controlling cell migration.
The different ligand isoforms have variable affinities for the receptor isoforms, and the receptor isoforms may variably form hetero- or homo- dimers. This leads to specificity of downstream signaling. It has been shown that the sis oncogene is derived from the PDGF B-chain gene. PDGF-BB is the highest-affinity ligand for the PDGFR-beta; PDGFR-beta is a key marker of hepatic stellate cell activation in the process of fibrogenesis.
Function
PDGFs are mitogenic during early developmental stages, driving the proliferation of undifferentiated mesenchyme and some progenitor populations. During later maturation stages, PDGF signalling has been implicated in tissue remodelling and cellular differentiation, and in inductive events involved in patterning and morphogenesis. In addition to driving mesenchymal proliferation, PDGFs have been shown to direct the migration, differentiation and function of a variety of specialised mesenchymal and migratory cell types, both during development and in the adult animal. Other growth factors in this family include vascular endothelial growth factors B and C (VEGF-B, VEGF-C) which are active in angiogenesis and endothelial cell growth, and placenta growth factor (PlGF) which is also active in angiogenesis.
PDGF plays a role in embryonic development, cell proliferation, cell migration, and angiogenesis. Over-expression of PDGF has been linked to several diseases such as atherosclerosis, fibrotic disorders and malignancies. Synthesis occurs due to external stimuli such as thrombin, low oxygen tension, or other cytokines and growth factors.
PDGF is a required element in cellular division for fibroblasts, a type of connective tissue cell that is especially prevalent in wound healing. In essence, the PDGFs allow a cell to skip the G1 checkpoints in order to divide. It has been shown that in monocytes-macrophages and fibroblasts, exogenously administered PDGF stimulates chemotaxis, proliferation, and gene expression and significantly augmented the influx of inflammatory cells and fibroblasts, accelerating extracellular matrix and collagen formation and thus reducing the time for the healing process to occur.
In terms of osteogenic differentiation of mesenchymal stem cells (MSCs), comparisons of PDGF with epidermal growth factor (EGF), which is also implicated in stimulating cell growth, proliferation, and differentiation, showed that MSCs undergo stronger osteogenic differentiation into bone-forming cells when stimulated by EGF than by PDGF. However, comparing the signaling pathways between them reveals that the PI3K pathway is exclusively activated by PDGF, with EGF having no effect. Chemically inhibiting the PI3K pathway in PDGF-stimulated cells negates the differential effect between the two growth factors and actually gives PDGF an edge in osteogenic differentiation. Wortmannin is a PI3K-specific inhibitor, and treatment of cells with wortmannin in combination with PDGF resulted in enhanced osteoblast differentiation compared to PDGF alone, as well as compared to EGF. These results indicate that the addition of wortmannin can significantly increase the osteogenic response of cells in the presence of PDGF, and thus might reduce the need for higher concentrations of PDGF or other growth factors, making PDGF a more viable growth factor for osteogenic differentiation than other, more expensive growth factors currently used in the field, such as BMP2.
PDGF is also known to maintain proliferation of oligodendrocyte progenitor cells (OPCs). It has also been shown that fibroblast growth factor (FGF) activates a signaling pathway that positively regulates the PDGF receptors in OPCs.
History
PDGF was one of the first growth factors to be characterized, and its study has led to an understanding of the mechanisms of many growth factor signaling pathways. The first engineered dominant negative protein was designed to inhibit PDGF.
Medicine
Recombinant PDGF is used to help heal chronic ulcers and in orthopedic surgery and periodontics to stimulate bone regeneration and repair. PDGF may be beneficial when used by itself or especially in combination with other growth factors to stimulate soft and hard tissue healing (Lynch et al. 1987, 1989, 1991, 1995).
Research
Like many other growth factors that have been linked to disease, PDGF and its receptors have provided a market for receptor antagonists to treat disease. Such antagonists include (but are not limited to) specific antibodies that target the molecule of interest, which act only in a neutralizing manner.
The "c-Sis" oncogene is derived from PDGF.
Age-related downregulation of the PDGF receptor on islet beta cells has been demonstrated to prevent islet beta cell proliferation in both animal and human cells, and its re-expression triggered beta cell proliferation and corrected glucose regulation via insulin secretion.
A non-viral PDGF "bio patch" can regenerate missing or damaged bone by delivering DNA in a nano-sized particle directly into cells via genes. Repairing bone fractures, fixing craniofacial defects and improving dental implants are among potential uses. The patch employs a collagen platform seeded with particles containing the genes needed for producing bone. In experiments, new bone fully covered skull wounds in test animals and stimulated growth in human bone marrow stromal cells.
The addition of PDGF at specific time-points has been shown to stabilise vasculature in collagen-glycosaminoglycan scaffolds.
Family members
Human genes encoding proteins that belong to the platelet-derived growth factor family include:
PDGFA; PDGFB; PDGFC; PDGFD
PGF
VEGF; VEGFB; VEGFC; VEGFD
See also
Platelet-activating factor
Platelet-derived growth factor receptor
atheroma platelet involvement in smooth muscle proliferation
Withaferin A potent inhibitor of angiogenesis
References
External links
Growth factors
Protein domains | Platelet-derived growth factor | [
"Chemistry",
"Biology"
] | 2,043 | [
"Growth factors",
"Protein domains",
"Protein classification",
"Signal transduction"
] |
1,015,481 | https://en.wikipedia.org/wiki/Falu%20red | Falu red or falun red ( ; , ) is a permeable red paint commonly used on wooden cottages and barns in Sweden, Finland, and Norway.
History
Following hundreds of years of mining copper in Falun, large piles of residual product were deposited above ground in the vicinity of the mines.
By the 16th century, mineralization of the mine's tailings and slag added by smelters began to produce a red-coloured sludge rich in copper, limonite, silicic acid, and zinc. When the sludge was heated for a few hours and then mixed with linseed oil and rye flour, it was found to form an excellent anti-weathering paint. During the 17th century, falu red began to be daubed onto wooden buildings to mimic the red-brick façades built by the upper classes.
In Sweden's built-up areas, wooden buildings were often painted with falu red until the early 19th century, when authorities began to oppose use of the paint.
Resurgence
Falu red saw a resurgence in popularity in the Swedish countryside during the 19th century, when poorer farmers and crofters began to paint their houses. Falu red is still widely used in the countryside. The Finnish expression punainen tupa ja perunamaa, "a red cottage and a potato patch", referring to idyllic home and life, is a direct allusion to a country house painted in falu red.
Composition
The paint consists of water, rye flour, linseed oil, silicates, iron oxides, copper compounds, and zinc. As falu red ages the binder deteriorates, leaving the color granules loose, but restoration is easy since simply brushing the surface is sufficient before repainting.
The actual color may be different depending on the degree to which the oxide is burnt, ranging from almost black to a bright, light red. Different tones of red have been popular at different times.
References
External links
Mould resistance tests in Sweden (falu on page 5, in Swedish)
Dalarna
Shades of brown
Shades of red
Iron oxide pigments | Falu red | [
"Chemistry"
] | 432 | [
"Paints",
"Coatings"
] |
1,015,599 | https://en.wikipedia.org/wiki/Ammonium%20lauryl%20sulfate | Ammonium lauryl sulfate (ALS) is the INCI name and common name for ammonium dodecyl sulfate (CH3(CH2)10CH2OSO3NH4). The anion consists of a nonpolar hydrocarbon chain and a polar sulfate end group. The combination of nonpolar and polar groups confers surfactant properties to the anion: it facilitates dissolution of both polar and non-polar materials. This salt is classified as a sulfate ester. It is made from coconut or palm kernel oil for use primarily in shampoos and body-wash as a foaming agent. Lauryl sulfates are very high-foam surfactants that disrupt the surface tension of water in part by forming micelles at the surface-air interface.
Action in solution
Above the critical micelle concentration, the anions organize into a micelle, in which they form a sphere with the polar, hydrophilic heads of the sulfate portion on the outside (surface) of the sphere and the nonpolar, hydrophobic tails pointing inwards towards the center. The water molecules around the micelle in turn arrange themselves around the polar heads, which disrupts their ability to hydrogen bond with other nearby water molecules. The overall effect of these micelles is a reduction in surface tension of the solution, which affords a greater ability to penetrate or "wet out" various surfaces, including porous structures like cloth, fibers, and hair. Accordingly, this structured solution allows the solution to more readily dissolve soils, greases, etc. in and on such substrates. Lauryl sulfates however exhibit poor soil suspending capacity.
Safety
ALS is an innocuous detergent. According to a 1983 report by the Cosmetic Ingredient Review, shampoos containing up to 31% ALS registered 6 health complaints out of 6.8 million units sold. These complaints included two of scalp itch, two of allergic reaction, one of hair damage, and one of eye irritation.
The CIR report concluded that both sodium and ammonium lauryl sulfate "appear to be safe in formulations designed for discontinuous, brief use followed by thorough rinsing from the surface of the skin. In products intended for prolonged use, concentrations should not exceed 1%".
The Human and Environmental Risk Assessment (HERA) project performed a thorough investigation of all alkyl sulfates, as such the results they found apply directly to ALS. Most alkyl sulfates exhibit low acute oral toxicity, no toxicity through exposure to the skin, concentration dependent skin irritation, and concentration dependent eye-irritation. They do not sensitize the skin and did not appear to be carcinogenic in a two-year study on rats. The report found that longer carbon chains (16–18) were less irritating to the skin than chains of 12–15 carbons in length. In addition, concentrations below 1% were essentially non-irritating while concentrations greater than 10% produced moderate to strong irritation of the skin.
Occupational exposure
The CDC has reported on occupations which were routinely exposed to ALS between 1981 and 1983. During this time, the occupation with the highest number of workers exposed was registered nurses, followed closely by funeral directors.
Environment
The HERA project also conducted an environmental review of alkyl sulfates that found all alkyl sulfates are readily biodegradable and standard wastewater treatment operations removed 96–99.96% of short-chain (12–14 carbons) alkyl sulfates. Even in anaerobic conditions at least 80% of the original volume is biodegraded after 15 days with 90% degradation after 4 weeks.
See also
Sodium lauryl sulfate
Sodium laureth sulfate
Potassium lauryl sulfate
Sodium pareth sulfate
References
Household chemicals
Ammonium compounds
Anionic surfactants
Sulfate esters
Dodecyl compounds | Ammonium lauryl sulfate | [
"Chemistry"
] | 774 | [
"Ammonium compounds",
"Salts"
] |
1,015,846 | https://en.wikipedia.org/wiki/Mannitol | Mannitol is a type of sugar alcohol used as a sweetener and medication. It is used as a low calorie sweetener as it is poorly absorbed by the intestines. As a medication, it is used to decrease pressure in the eyes, as in glaucoma, and to lower increased intracranial pressure. Medically, it is given by injection or inhalation. Effects typically begin within 15 minutes and last up to 8 hours.
Common side effects from medical use include electrolyte problems and dehydration. Other serious side effects may include worsening heart failure and kidney problems. It is unclear if use is safe in pregnancy. Mannitol is in the osmotic diuretic family of medications and works by pulling fluid from the brain and eyes.
The discovery of mannitol is attributed to Joseph Louis Proust in 1806. It is on the World Health Organization's List of Essential Medicines. It was originally made from the flowering ash and called manna due to its supposed resemblance to the Biblical food. Mannitol is on the World Anti-Doping Agency's banned substances list due to concerns that it may mask prohibited drugs.
Uses
Medical uses
In the United States, mannitol is indicated for the reduction of intracranial pressure and treatment of cerebral edema and elevated intraocular pressure.
In the European Union, mannitol is indicated for the treatment of cystic fibrosis (CF) in adults aged 18 years and above as an add-on therapy to best standard of care.
Mannitol is used intravenously to reduce acutely raised intracranial pressure until more definitive treatment can be applied, e.g., after head trauma. While mannitol injection is the mainstay for treating raised intracranial pressure after severe traumatic brain injury, it is no better than hypertonic saline as a first-line treatment; in treatment-resistant cases, hypertonic saline appears to work better. Intra-arterial infusions of mannitol can transiently open the blood–brain barrier by disrupting tight junctions.
It may also be used for certain cases of kidney failure with low urine output, decreasing pressure in the eye, to increase the elimination of certain toxins, and to treat fluid build up.
Intraoperative mannitol prior to vessel clamp release during renal transplant has been shown to reduce post-transplant kidney injury, but has not been shown to reduce graft rejection.
Mannitol acts as an osmotic laxative in oral doses larger than 20 g, and is sometimes sold as a laxative for children.
The use of inhaled mannitol as a bronchial irritant has been proposed as an alternative method of diagnosing exercise-induced asthma. A 2013 systematic review concluded that the evidence to support its use for this purpose is currently insufficient.
Mannitol is commonly used in the circuit prime of a heart lung machine during cardiopulmonary bypass. The presence of mannitol preserves renal function during the times of low blood flow and pressure, while the patient is on bypass. The solution prevents the swelling of endothelial cells in the kidney, which may have otherwise reduced blood flow to this area and resulted in cell damage.
Mannitol can also be used to temporarily encapsulate a sharp object (such as a helix on a lead for an artificial pacemaker) while it passes through the venous system. Because the mannitol dissolves readily in blood, the sharp point becomes exposed at its destination.
Mannitol is also the first drug of choice to treat acute glaucoma in veterinary medicine. It is administered as a 20% solution intravenously. It dehydrates the vitreous humor and, therefore, lowers the intraocular pressure. However, it requires an intact blood-ocular barrier to work.
Food
Mannitol increases blood glucose to a lesser extent than sucrose (thus having a relatively low glycemic index) so is used as a sweetener for people with diabetes, and in chewing gums. Although mannitol has a higher heat of solution than most sugar alcohols, its comparatively low solubility reduces the cooling effect usually found in mint candies and gums. However, when mannitol is completely dissolved in a product, it induces a strong cooling effect. Also, it has a very low hygroscopicity – it does not pick up water from the air until the humidity level is 98%. This makes mannitol very useful as a coating for hard candies, dried fruits, and chewing gums, and it is often included as an ingredient in candies and chewing gum. The pleasant taste and mouthfeel of mannitol also makes it a popular excipient for chewable tablets.
Analytical chemistry
Mannitol can be used to form a complex with boric acid. This increases the acid strength of the boric acid, permitting better precision in volumetric analysis of this acid.
Other
Mannitol is the primary ingredient of mannitol salt agar, a bacterial growth medium, and is used in others.
Mannitol is used as a cutting agent in various drugs that are used intranasally (snorted), such as heroin and cocaine. A mixture of mannitol and fentanyl (or fentanyl analogs) in ratio 1:10 is labeled and sold as "China white", a popular heroin substitute.
Mannitol is a sugar alcohol with "50-70 percent of the relative sweetness of sugar, which means more must be used to equal the sweetness of sugar. Mannitol lingers in the intestines for a long time and therefore often causes bloating and diarrhea."
Contraindications
Mannitol is contraindicated in people with anuria, severe hypovolemia, pre-existing severe pulmonary vascular congestion or pulmonary edema, irritable bowel syndrome (IBS), and active intracranial bleeding except during craniotomy.
Adverse effects include hyponatremia and volume depletion leading to metabolic acidosis.
Chemistry
Mannitol is an isomer of sorbitol, another sugar alcohol; the two differ only in the orientation of the hydroxyl group on carbon 2. While similar, the two sugar alcohols have very different sources in nature, melting points, and uses.
Production
Mannitol is classified as a sugar alcohol; that is, it can be derived from a sugar (mannose) by reduction. Other sugar alcohols include xylitol and sorbitol.
Industrial synthesis
Mannitol is commonly produced via the hydrogenation of fructose, which is formed from either starch or sucrose (common table sugar). Although starch is a cheaper source than sucrose, the transformation of starch is much more complicated. Eventually, it yields a syrup containing about 42% fructose, 52% glucose, and 6% maltose. Sucrose is simply hydrolyzed into an invert sugar syrup, which contains about 50% fructose. In both cases, the syrups are chromatographically purified to contain 90–95% fructose. The fructose is then hydrogenated over a nickel catalyst into a mixture of isomers sorbitol and mannitol. Yield is typically 50%:50%, although slightly alkaline reaction conditions can slightly increase mannitol yields.
Biosyntheses
Mannitol is one of the most abundant energy and carbon storage molecules in nature, produced by a plethora of organisms, including bacteria, yeasts, fungi, algae, lichens, and many plants. Fermentation by microorganisms is an alternative to the traditional industrial synthesis. A fructose to mannitol metabolic pathway, known as the mannitol cycle in fungi, has been discovered in a type of red algae (Caloglossa leprieurii), and it is highly possible that other microorganisms employ similar such pathways. A class of lactic acid bacteria, labeled heterofermentive because of their multiple fermentation pathways, convert either three fructose molecules or two fructose and one glucose molecule into two mannitol molecules, and one molecule each of lactic acid, acetic acid, and carbon dioxide. Feedstock syrups containing medium to large concentrations of fructose (for example, cashew apple juice, containing 55% fructose: 45% glucose) can produce yields mannitol per liter of feedstock. Further research is being conducted, studying ways to engineer even more efficient mannitol pathways in lactic acid bacteria, as well as the use of other microorganisms such as yeast and E. coli in mannitol production. When food-grade strains of any of the aforementioned microorganisms are used, the mannitol and the organism itself are directly applicable to food products, avoiding the need for careful separation of microorganism and mannitol crystals. Although this is a promising method, steps are needed to scale it up to industrially needed quantities.
Natural extraction
Since mannitol is found in a wide variety of natural products, including almost all plants, it can be directly extracted from natural products, rather than chemical or biological syntheses. In fact, in China, isolation from seaweed is the most common form of mannitol production. Mannitol concentrations of plant exudates can range from 20% in seaweeds to 90% in the plane tree. It is a constituent of saw palmetto (Serenoa).
Traditionally, mannitol is extracted by Soxhlet extraction, using ethanol, water, and methanol to steam and then hydrolyse the crude material. The mannitol is then recrystallized from the extract, generally resulting in yields of about 18% of the original natural product. Another method of extraction uses supercritical and subcritical fluids. These fluids are at such a state that no difference exists between the liquid and gas phases, so they are more diffusive than normal fluids; this is considered to make them much more effective mass transfer agents than normal liquids. The super- or subcritical fluid is pumped through the natural product, and the mostly-mannitol product is easily separated from the solvent and the minute amount of byproduct.
Supercritical carbon dioxide extraction of olive leaves has been shown to require less solvent per measure of leaf than a traditional extraction – CO2 versus ethanol per olive leaf. Heated, pressurized, subcritical water is even cheaper, and is shown to have dramatically greater results than traditional extraction. It requires only water per of olive leaf, and gives a yield of 76.75% mannitol. Both super- and subcritical extractions are cheaper, faster, purer, and more environmentally friendly than the traditional extraction. However, the required high operating temperatures and pressures are causes for hesitancy in the industrial use of this technique.
History
In the early 1880s, Julije Domac elucidated the structure of hexene and mannitol obtained from Caspian manna. He determined the position of the double bond in hexene obtained from mannitol and proved that it is a derivative of a normal hexene. This also solved the structure of mannitol, which was unknown until then.
Controversy
The three studies that originally found high-dose mannitol effective in treating severe head injury were the subject of an investigation. Published in 2007 after the lead author Dr Julio Cruz's death, the investigation questioned whether the studies had actually taken place. The co-authors of the paper were not able to confirm the existence of the study patients, and the Federal University of São Paulo, which Cruz gave as his affiliation, had never employed him. As a result of doubt surrounding Cruz's work, an updated version of the Cochrane review excludes all studies by Julio Cruz, leaving only four studies. Due to differences in selection of control groups, a conclusion about the clinical use of mannitol has not been reached.
Compendial status
British Pharmacopoeia
Japanese Pharmacopoeia
United States Pharmacopeia
See also
-mannitol oxidase
E number
Mannitol dehydrogenase
Mannitol dehydrogenase (cytochrome)
Mannitol-1-phosphatase
Mannitol 2-dehydrogenase
Mannitol 2-dehydrogenase (NADP+)
Mannitol-1-phosphate 5-dehydrogenase
References
External links
E-number additives
Excipients
Glycerols
Nephrotoxins
Osmotic diuretics
Wikipedia medicine articles ready to translate
Sugar alcohols
Sugar substitutes
World Anti-Doping Agency prohibited substances
World Health Organization essential medicines | Mannitol | [
"Chemistry"
] | 2,643 | [
"Carbohydrates",
"Sugar alcohols"
] |
1,015,863 | https://en.wikipedia.org/wiki/Contingent%20valuation | Contingent valuation is a survey-based economic technique for the valuation of non-market resources, such as environmental preservation or the impact of externalities like pollution. While these resources do give people utility, certain aspects of them do not have a market price as they are not directly sold – for example, people receive benefit from a beautiful view of a mountain, but it would be difficult to value using price-based models. Contingent valuation surveys are one technique used to measure these aspects. Contingent valuation is often referred to as a stated preference model, in contrast to a price-based revealed preference model. Both models are utility-based. Typically the survey asks how much money people would be willing to pay (or willing to accept) to maintain the existence of (or be compensated for the loss of) an environmental feature, such as biodiversity.
History
Contingent valuation surveys were first proposed in theory by S.V. Ciriacy-Wantrup (1947) as a method for eliciting market valuation of a non-market good. The first practical application of the technique was in 1963, when Robert K. Davis used surveys to estimate the value hunters and tourists placed on a particular wilderness area. He compared the survey results to an estimation of value based on travel costs and found good correlation with his results. This work was published as his Ph.D. dissertation at Harvard, "The Value of Outdoor Recreation: An Economic Study of the Maine Woods." This work, and other early applications of the method, are described in Chapter 1 of "Using Surveys to Value Public Goods" by Robert Cameron Mitchell and Richard T. Carson.
The method rose to high prominence in the USA in the 1980s when government agencies were given the power to sue for damage to environmental resources over which they were trustees. Following Ohio v. Department of the Interior, the types of damages which they were able to recover included non-use or existence values. Existence values cannot be assessed through market pricing mechanisms, so contingent valuation surveys were suggested to assess them. During this time, the EPA convened an important conference with an aim to recommend guidelines for survey design. The Exxon Valdez oil spill in Prince William Sound was the first case where contingent valuation surveys were used in a quantitative assessment of damages. Use of the technique has spread from there.
Past controversies
Many economists question the use of stated preference to determine willingness to pay for a good, preferring to rely on people's revealed preferences in binding market transactions. Early contingent valuation surveys were often open-ended questions of the form "how much compensation would you demand for the destruction of X area" or "how much would you pay to preserve X". Such surveys potentially suffer from a number of shortcomings: strategic behaviour, protest answers, response bias, and respondents ignoring income constraints. Early surveys used in environmental valuation seemed to indicate people were expressing a general preference for environmental spending in their answers, described as the embedding effect by detractors of the method.
In response to criticisms of contingent valuation surveys, a panel of high profile economists (chaired by Nobel Prize laureates Kenneth Arrow and Robert Solow) was convened under the auspices of the United States National Oceanic and Atmospheric Administration (NOAA). The panel heard evidence from 22 expert economists and published its results in 1993. The recommendations of the NOAA panel were that contingent valuation surveys should be carefully designed and controlled due to the inherent difficulties in eliciting accurate economic values through survey methods.
The most important recommendations of the NOAA panel were that:
Personal interviews be used to conduct the survey, as opposed to telephone or mall-stop methods.
Surveys be designed in a yes or no referendum format put to the respondent as a vote on a specific tax to protect a specified resource.
Respondents be given detailed information on the resource in question and on the protection measure they were voting on. This information should include threats to the resource (best and worst-case scenarios), scientific evaluation of its ecological importance and possible outcomes of protection measures.
Income effects be carefully explained to ensure respondents understood that they were to express their willingness to pay to protect the particular resource in question, not the environment generally.
Subsidiary questions be asked to ensure respondents understood the question posed.
"[CVM] produces estimates reliable enough to be the starting point of a judicial process of damage assessment, including passive-use values" and has been successfully used in such high profile cases as the Exxon Valdez oil spill.
The guiding principle behind these recommendations was that the survey operator has a high burden of proof to satisfy before the results can be seen as meaningful. Surveys meeting these criteria are very expensive to operate, and to ameliorate the expense of conducting surveys the panel recommended a set of reference surveys against which future surveys could be compared and calibrated. The NOAA panel also felt, in general, that conservative estimates of value were to be preferred, and one important consequence of this decision is that they recommended contingent valuation surveys measure willingness to pay to protect the good rather than willingness to accept compensation for the loss of the resource.
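For readers unfamiliar with how the recommended yes/no referendum format translates into a value estimate, the sketch below fits a simple logistic model to simulated votes at different bid levels and reads off the median willingness to pay. The bid design, the simulated responses, and the logistic specification are all illustrative assumptions of this example, not part of the NOAA recommendations; a real study would use actual survey data and a more carefully specified model.

# Minimal sketch: estimating willingness to pay (WTP) from dichotomous-choice
# ("referendum") responses. The bid design, simulated answers and the logistic
# model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: each respondent sees one bid (a specific tax) and votes yes/no.
bids = np.repeat([5, 10, 20, 40, 80], 200).astype(float)        # dollars
true_beta0, true_beta1 = 2.0, -0.06                             # "true" parameters for simulation
p_yes = 1.0 / (1.0 + np.exp(-(true_beta0 + true_beta1 * bids)))
votes = rng.random(bids.size) < p_yes                           # simulated yes/no answers

# Fit P(yes | bid) = logistic(b0 + b1 * bid) by Newton-Raphson on the log-likelihood.
X = np.column_stack([np.ones_like(bids), bids])
y = votes.astype(float)
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                           # gradient of the log-likelihood
    hess = -(X.T * (p * (1 - p))) @ X              # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, grad)

b0, b1 = beta
print(f"median WTP ≈ ${-b0 / b1:.2f}")             # bid at which P(yes) = 0.5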
As a result, current contingent valuation methodology corrects for these shortcomings, and current empirical testing indicates that such bias and inconsistency has been successfully addressed.
Current status
As shown by Mundy and McLean (1998), contingent valuation is now widely accepted as a real estate appraisal technique, particularly in contaminated property or other situations where revealed preference models (i.e. transaction pricing) fail due to disequilibrium in the market. McLean, Mundy, and Kilpatrick (1999) demonstrate the acceptability of contingent valuation in real estate expert testimony, and the current standards for use of contingent valuation in litigation situations is described by Diamond (2000).
The technique has been widely used by government departments in the US when performing cost-benefit analysis of projects impacting, positively or negatively, on the environment. Examples include a valuation of water quality and recreational opportunities in the river downstream from Glen Canyon dam, biodiversity restoration in the Mono Lake and restoration of salmon spawning grounds in certain rivers. The technique has also been used in Australia to value areas of the Kakadu National Park as well as trophy property in the United States, and is recognized as a valuable tool in the appraisal of brownfields.
See also
Choice modelling
Opportunity cost
Scarcity
Trade-off
References
W. Michael Hanemann, 'Valuing the Environment Through Contingent Valuation' The Journal of Economic Perspectives, Vol. 8, No. 4. (Autumn, 1994), pp. 19–43
Paul R. Portney, 'The Contingent Valuation Debate: Why Economists Should Care' The Journal of Economic Perspectives, Vol. 8, No. 4. (Autumn, 1994), pp. 3–17
External links
NOAA report
Ecosystem Valuation information
Environmental Valuation and Cost-Benefit News
Misleading Quantification: The Contingent Valuation of Environmental Quality (Robert K. Niewijk)
Association of Environmental and Resource Economists (AERE).
- JEEM: Journal of Environmental Economics and Management (AERE's official "technical" journal).
- REEP: Review of Environmental Economics and Policy (AERE's official "accessible" journal).
Economics of Natural Resource Decisions: Gardner Brown (University of Washington Office of Research)
Curated bibliography at IDEAS/RePEc
Environmental economics
Survey methodology
Public policy research | Contingent valuation | [
"Environmental_science"
] | 1,485 | [
"Environmental economics",
"Environmental social science"
] |
1,015,903 | https://en.wikipedia.org/wiki/Existence%20value | Existence values are a class of economic value, reflecting the benefit people receive from knowing that a particular environmental resource, such as Antarctica, the Grand Canyon, endangered species, or any other organism or thing exists.
Existence value is a prominent example of non-use value, as they do not require that utility be derived from direct use of the resource: the utility comes from simply knowing the resource exists. The idea was first introduced by John V. Krutilla in his essay "Conservation Reconsidered."
As the Canadian Privy Council explains, existence value is "A concept used to refer to the intrinsic value of some asset, normally natural/environmental. It is the value of the benefits derived from the asset's existence alone. For example, a tree can be valued in a number of ways, including its use value (as lumber), an existence value (simply being there), and an option value (value of things that it could be used for). Existence value is separate from the value accruing from any use or potential use of the asset."
These values are commonly measured through contingent valuation surveys and have been actionable damages in the US since . They were used in a legal assessment of damages following the Exxon Valdez oil spill.
See also
Ecosystem services
Value of Earth
References
Environmental economics | Existence value | [
"Environmental_science"
] | 266 | [
"Environmental economics",
"Environmental social science"
] |
1,015,919 | https://en.wikipedia.org/wiki/Excedrin%20%28brand%29 | Excedrin is an over-the-counter headache pain reliever, typically in the form of tablets or caplets. It contains paracetamol, aspirin and caffeine. It was manufactured by Bristol-Myers Squibb until it was purchased by Novartis in July 2005 along with other products from BMS's over-the-counter business. As of March 2015, GSK holds majority ownership of Excedrin through a joint venture transaction with Novartis. On July 18, 2022, GSK spun off its consumer healthcare business (including Excedrin) to Haleon.
The brand became known for advertisements where it cured especially unpleasant and excruciating headaches (called "Excedrin headaches" in the ads of 1970s, and later called "Excedrin tension headaches"). In 2007, the brand branched out into marketing for other types of pains with the introduction of Excedrin Back & Body, without caffeine.
Principles of work
Excedrin is a combination medication composed of acetaminophen, aspirin, and caffeine. These medications treat migraine headache in a variety of ways.
Acetaminophen is a fever reducer and painkiller. Its precise mechanism is unknown. It is known that it mostly affects the brain and spinal cord, which are parts of the central nervous system. By lowering the quantity of prostaglandins the body produces, acetaminophen raises the threshold for pain.
Aspirin is a nonsteroidal anti-inflammatory drug (NSAID). It lessens irritation and swelling as well as discomfort and inflammation. The amount of prostaglandins the body produces is also decreased by aspirin, but not in the same way that acetaminophen does.
Caffeine acts as a vasoconstrictor, causing blood vessels to become smaller. This helps to restrict the blood vessels in the brain. As a result, the amount of blood that can pass through the blood vessels at once is reduced. There are several theories regarding the cause and exacerbation of headaches, and it is thought by some that vasodilation may contribute to symptoms. If a headache is brought on by caffeine withdrawal, the caffeine content of Excedrin may relieve it.
Versions
Over the years, different forms of the drug have been introduced:
1960: Excedrin Extra Strength (the formula last changed in 1978) – In 1960, Bristol-Myers, now Bristol-Myers Squibb, introduced Excedrin Extra Strength for headaches, the first multi-ingredient headache treatment product. It contains 250 mg acetaminophen, 250 mg aspirin and 65 mg caffeine.
1969: Excedrin PM – Excedrin PM is the first headache and sleeping pill combination product. Contains 500 mg acetaminophen and 38 mg diphenhydramine citrate as a sleep aid. The same active ingredients were utilized several years later in the product Tylenol PM.
1998: Excedrin Migraine – At the beginning of 1998, the FDA granted clearance to market Excedrin Migraine for the relief of migraine headache pain and associated symptoms. Excedrin Migraine continued the trend of marketing pain products for specific types of pain, becoming the first migraine headache medication available to consumers without a prescription, even though it has identical active ingredients to the regular Excedrin Extra Strength product, 250 mg acetaminophen, 250 mg aspirin, and 65 mg caffeine. In fact, upon the product's launch, its advertising slogan was “Excedrin is now Excedrin Migraine,” noting that the two were the same product.
2003: Excedrin Tension Headache contains 500 mg acetaminophen, and 65 mg caffeine.
2005: Excedrin Sinus Headache contains 325 mg acetaminophen and 5 mg phenylephrine HCl as a decongestant.
2007: Excedrin Back and Body – a dual-ingredient formula claiming that it "works two ways—as a pain reliever and a pain blocker right where it hurts". Contains 250 mg acetaminophen, 250 mg aspirin.
Discontinued – Excedrin Menstrual Complete, which continued the trend of marketing pain products for specific types of pain, even though it had identical active ingredients to the regular Excedrin Extra Strength product and Excedrin Migraine: 250 mg acetaminophen, 250 mg aspirin and 65 mg caffeine.
Canada
Excedrin is no longer sold in Canada.
Previously, Excedrin Migraine and Excedrin Extra Strength sold in Canada had a different formulation compared to the United States. In Canada the product sold with those names contained 500 mg of acetaminophen and 65 mg of caffeine per tablet. The reason was that the combination of acetaminophen with aspirin creates the risk of renal papillary necrosis if large doses are taken chronically.
Ownership
In 2005, Bristol-Myers Squibb announced the sale of its North American consumer medicine business (including Excedrin, Comtrex and Keri brands) to Novartis for $660 million, in order to focus on drugs for the ten most profitable disease areas. As of March 2015, GlaxoSmithKline held majority ownership through a joint venture transaction with Novartis.
Recall and production stoppage
2012 recall
On January 9, 2012, Novartis announced that it was voluntarily recalling all lots of select bottle-packaging configurations of Excedrin products with expiration dates of December 20, 2014, or earlier as a precautionary measure because the products may contain stray tablets, capsules, or caplets from other Novartis products, or contain broken or chipped tablets. The recall was conducted with the knowledge of the U.S. Food and Drug Administration. Wholesalers and retailers were instructed to stop distribution and return the affected product. Consumers in possession of recalled Excedrin were instructed to stop using the product and contact Novartis. Novartis stated that Excedrin would be shipping to stores on October 15, 2012, and that customers would start seeing it by the first of November.
January 2020 production stoppage
On January 21, 2020, GlaxoSmithKline announced that production and distribution of caplets and gel tabs of Excedrin Extra Strength and Excedrin Migraine would be stopped temporarily. Their statement said, "Through routine quality control and assurance measures, we discovered inconsistencies in how we transfer and weigh ingredients" for the recalled products, and that production would restart "shortly". However, GSK acknowledged that they "cannot confirm a definite date as to when supply will resume".
December 2020 recall
On December 26, 2020, the U.S. Consumer Product Safety Commission announced the recall of 400,000 bottles of Excedrin due to the containers of the drug allegedly having holes in the bottom. The concern behind the recall was that the plastic bottles, if they had a hole, could allow children to access the painkiller caplets and lead to dangerous overdose or poisoning. The recall involved bottles containing 50, 80, 100, 152, 200, 250, or 300 caplets.
See also
Stella Nickell, who tampered with Excedrin tablets
References
External links
Combination analgesics
Drug brand names
Haleon brands
Products introduced in 1960 | Excedrin (brand) | [
"Chemistry"
] | 1,545 | [
"Pharmacology",
"Drug brand names"
] |
1,016,017 | https://en.wikipedia.org/wiki/Thiocyanate | Thiocyanates are salts containing the thiocyanate anion (also known as rhodanide or rhodanate). is the conjugate base of thiocyanic acid. Common salts include the colourless salts potassium thiocyanate and sodium thiocyanate. Mercury(II) thiocyanate was formerly used in pyrotechnics.
Thiocyanate is analogous to the cyanate ion, , wherein oxygen is replaced by sulfur. is one of the pseudohalides, due to the similarity of its reactions to that of halide ions. Thiocyanate used to be known as rhodanide (from a Greek word for rose) because of the red colour of its complexes with iron.
Thiocyanate is produced by the reaction of elemental sulfur or thiosulfate with cyanide:

8 CN- + S8 -> 8 SCN-

CN- + S2O3^2- -> SCN- + SO3^2-

The second reaction is catalyzed by thiosulfate sulfurtransferase, a hepatic mitochondrial enzyme, and by other sulfur transferases, which together are responsible for around 80% of cyanide metabolism in the body.
Oxidation of thiocyanate inevitably produces hydrogen sulfate. The other product depends on pH: in acid, it is hydrogen cyanide, presumably via HOSCN and with a sulfur dicyanide side-product; but in base and neutral solutions, it is cyanate.
Biology
Occurrences
Thiocyanate occurs widely in nature, albeit often in low concentrations. It is a component of some sulfur cycles.
Biochemistry
Thiocyanate hydrolases catalyze the conversion of thiocyanate to carbonyl sulfide and to cyanate.
Medicine
Thiocyanate is known to play an important part in the biosynthesis of hypothiocyanite by lactoperoxidase. Thus the complete absence of thiocyanate, or reduced thiocyanate, in the human body (e.g., in cystic fibrosis) is damaging to the human host defense system.
Thiocyanate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Iodine is an essential component of thyroxine. Since thiocyanates will decrease iodide transport into the thyroid follicular cell, they will decrease the amount of thyroxine produced by the thyroid gland. As such, foodstuffs containing thiocyanate are best avoided by iodide deficient hypothyroid patients.
In the early 20th century, thiocyanate was used in the treatment of hypertension, but it is no longer used because of associated toxicity. Sodium nitroprusside, a metabolite of which is thiocyanate, is however still used for the treatment of a hypertensive emergency. Rhodanese catalyzes the reaction of sodium nitroprusside (like other cyanides) with thiosulfate to form the metabolite thiocyanate.
Coordination chemistry
Thiocyanate shares its negative charge approximately equally between sulfur and nitrogen. As a consequence, thiocyanate can act as a nucleophile at either sulfur or nitrogen—it is an ambidentate ligand. [SCN]− can also bridge two (M−SCN−M) or even three metals (>SCN− or −SCN<). Experimental evidence leads to the general conclusion that class A metals (hard acids) tend to form N-bonded thiocyanate complexes, whereas class B metals (soft acids) tend to form S-bonded thiocyanate complexes. Other factors, e.g. kinetics and solubility, are sometimes involved, and linkage isomerism can occur, for example [Co(NH3)5(NCS)]Cl2 and [Co(NH3)5(SCN)]Cl2. S-bonded thiocyanate, [SCN], is considered a weak ligand, whereas N-bonded thiocyanate, [NCS], is a strong ligand.
Test for iron(III) and cobalt(II)
If [SCN]− is added to a solution with iron(III) ions, a blood-red solution forms mainly due to the formation of [Fe(NCS)(H2O)5]2+, i.e. pentaaqua(thiocyanato-N)iron(III). Lesser amounts of other hydrated compounds also form: e.g. Fe(SCN)3 and [Fe(SCN)4]−.
Similarly, Co2+ gives a blue complex with thiocyanate. Both the iron and cobalt complexes can be extracted into organic solvents like diethyl ether or amyl alcohol. This allows the determination of these ions even in strongly coloured solutions. The determination of Co(II) in the presence of Fe(III) is possible by adding KF to the solution, which forms uncoloured, very stable complexes with Fe(III), which no longer react with SCN−.
Phospholipids or some detergents aid the transfer of thiocyanatoiron into chlorinated solvents like chloroform and can be determined in this fashion.
See also
Sulphobes
References
Citations
Anions
Sulfur ions
Concrete admixtures | Thiocyanate | [
"Physics",
"Chemistry"
] | 1,154 | [
"Matter",
"Anions",
"Functional groups",
"Thiocyanates",
"Sulfur ions",
"Ions"
] |
1,016,422 | https://en.wikipedia.org/wiki/Curved%20spacetime | In physics, curved spacetime is the mathematical model in which, with Einstein's theory of general relativity, gravity naturally arises, as opposed to being described as a fundamental force in Newton's static Euclidean reference frame. Objects move along geodesics—curved paths determined by the local geometry of spacetime—rather than being influenced directly by distant bodies. This framework led to two fundamental principles: coordinate independence, which asserts that the laws of physics are the same regardless of the coordinate system used, and the equivalence principle, which states that the effects of gravity are indistinguishable from those of acceleration in sufficiently small regions of space. These principles laid the groundwork for a deeper understanding of gravity through the geometry of spacetime, as formalized in Einstein's field equations.
Introduction
Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space. In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself.
In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows along the path of a geodesic. No evidence of gravitation can be discovered following alongside the motions of a single particle.
In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5-1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that result in the appearance of a gravitational force acting at a long range from Earth.
Different observers viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first observer, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second observer, aware of the large mass 1, smiles at the first observer's naiveté. This second observer knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1. (iii) A third observer, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime.
Two central propositions underlie general relativity.
The first crucial concept is coordinate independence: The laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion." This leads to an immediate issue: In accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence.
The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration. In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in. An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, F = GMmg/r^2 = mg g, and in Newton's second law, F = mi a, there is no a priori reason why the gravitational mass mg should be equal to the inertial mass mi. The equivalence principle states that these two masses are identical.
To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations.
Curvature of time
In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime.
Years before publication of the general theory in 1916, Einstein used the equivalence principle to predict the existence of gravitational redshift in the following thought experiment: (i) Assume that a tower of height h (Fig. 5-3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity v = √(2gh), so that its total energy E, as measured by an observer on the ground, is E = mc^2 + ½mv^2 = mc^2 + mgh. (iii) A mass-energy converter transforms the total energy of the particle into a single high energy photon, which it directs upward. (iv) At the top of the tower, an energy-mass converter transforms the energy of the photon E′ back into a particle of rest mass m′.
It must be that m′ = m, since otherwise one would be able to construct a perpetual motion device. We therefore predict that E′ = mc^2 < E, so that hν′/hν = E′/E = mc^2/(mc^2 + mgh) = 1/(1 + gh/c^2) ≈ 1 − gh/c^2.
A photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964).
Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: Gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare side by side with the ground clock. For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation.
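The roughly 9.4-nanoseconds-per-day figure quoted above can be checked with a few lines of arithmetic, using the weak-field fractional rate difference gh/c^2. The numerical values of g and c below are standard assumed constants; this is a rough consistency check, not a derivation from general relativity.

# Quick check of the ~9.4 ns/day figure quoted above for a 1 km tower.
# Weak-field gravitational time dilation: fractional rate difference ≈ g*h/c^2.
g = 9.81                   # m/s^2, standard surface gravity (assumed)
h = 1000.0                 # m, tower height
c = 2.998e8                # m/s, speed of light
seconds_per_day = 86400.0

fractional_shift = g * h / c**2
print(f"fractional rate difference: {fractional_shift:.3e}")
print(f"clock discrepancy per day:  {fractional_shift * seconds_per_day * 1e9:.1f} ns")

Running this reproduces the discrepancy of about 9.4 ns per day quoted above.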
Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence. This includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity.
Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable: Δs^2 = (1 − 2GM/(c^2 r)) (cΔt)^2 − (Δx)^2 − (Δy)^2 − (Δz)^2.
Curvature of space
The coefficient (1 − 2GM/(c^2 r)) in front of (cΔt)^2 describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, this correction factor is directly proportional to G and M, and because of the r in the denominator, the correction factor increases as one approaches the gravitating body, meaning that time is curved.
But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, should not their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms?
The answer is that they are seen, but the effects are tiny. The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the (cΔt)^2 term dwarfs the spatial terms.
Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury, unless there possibly existed a planet or asteroid belt within the orbit of Mercury. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets. The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry.
As the astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing irregularities in the orbit of Uranus, Le Verrier's announcement triggered a two-decades long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed.
In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct.
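As a numerical illustration of the result described above, the sketch below evaluates the standard general-relativistic per-orbit perihelion advance, 6πGM/[c^2 a (1 − e^2)], for Mercury. The orbital constants are approximate textbook values and are assumptions of this example, not figures taken from the article.

# Rough check of Mercury's anomalous perihelion precession (~43 arcsec/century)
# using the standard GR per-orbit advance: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)).
# Orbital constants below are approximate textbook values (assumptions).
import math

GM_sun = 1.327e20        # m^3/s^2, solar gravitational parameter
c      = 2.998e8         # m/s
a      = 5.791e10        # m, Mercury's semi-major axis
e      = 0.2056          # Mercury's orbital eccentricity
period_days = 87.97      # Mercury's orbital period

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525.0 / period_days
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(f"predicted anomalous precession: {arcsec:.1f} arcsec per century")

The result comes out close to the 43 arc seconds per tropical century mentioned above.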
The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram: its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components: Δs^2 = (1 − 2GM/(c^2 r)) (cΔt)^2 − (1 + 2GM/(c^2 r)) [(Δx)^2 + (Δy)^2 + (Δz)^2].
In Newton's gravitation, the coefficient in front of (cΔt)^2 predicts bending of light around a star. In general relativity, the coefficient in front of [(Δx)^2 + (Δy)^2 + (Δz)^2] predicts a doubling of the total bending.
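The doubling described above can likewise be put in numbers for a light ray grazing the Sun, where the full general-relativistic deflection is 4GM/(c^2 b) and the time-curvature ("Newtonian-style") part alone gives half of it. The solar mass parameter and radius below are approximate assumed values, not figures from the article.

# Illustration of the doubling described above: deflection of light grazing the Sun.
# Full GR: theta = 4*G*M / (c^2 * b); the time-curvature part alone gives half.
import math

GM_sun = 1.327e20     # m^3/s^2, solar gravitational parameter (assumed)
R_sun  = 6.96e8       # m, solar radius used as the impact parameter b (assumed)
c      = 2.998e8      # m/s

theta_gr = 4 * GM_sun / (c**2 * R_sun)          # radians, full general-relativistic value
arcsec_per_rad = 180.0 / math.pi * 3600.0
print(f"time-curvature term only: {theta_gr / 2 * arcsec_per_rad:.2f} arcsec")
print(f"full general relativity:  {theta_gr * arcsec_per_rad:.2f} arcsec")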
The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere.
Sources of spacetime curvature
In Newton's theory of gravitation, the only source of gravitational force is mass.
In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, Gμν = (8πG/c^4) Tμν,
the sources of gravity are presented on the right-hand side in the stress–energy tensor.
Fig. 5-5 classifies the various sources of gravity in the stress–energy tensor:
T00 (red): The total mass–energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions.
T0i and Ti0 (orange): These are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum.
Tij: These are the rates of flow of the i-component of momentum per unit area in the j-direction. Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the diagonal terms Tii (green) represent isotropic pressure, and the off-diagonal terms Tij with i ≠ j (blue) represent shear stresses. (A schematic layout of these components is given below.)
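The components just described can be laid out schematically in standard textbook notation; this arrangement is generic and is not reproduced from Fig. 5-5:

T^{\mu\nu} =
\begin{pmatrix}
T^{00} & T^{01} & T^{02} & T^{03}\\
T^{10} & T^{11} & T^{12} & T^{13}\\
T^{20} & T^{21} & T^{22} & T^{23}\\
T^{30} & T^{31} & T^{32} & T^{33}
\end{pmatrix}
\quad\text{with}\quad
\begin{cases}
T^{00} & \text{mass--energy density}\\
T^{0i},\,T^{i0} & \text{momentum density / energy flux}\\
T^{ii} & \text{isotropic pressure (diagonal)}\\
T^{ij},\ i\neq j & \text{shear stress (off-diagonal)}
\end{cases}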
One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity. Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak field cases. Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong field regime.
Energy-momentum
In special relativity, mass-energy is closely connected to momentum. Just as space and time are different aspects of a more comprehensive entity called spacetime, mass–energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass–energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism.
It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II of his Lectures on Physics, available online.) Analogous logic can be used to demonstrate the origin of gravitomagnetism.
In Fig. 5-7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume v ≪ c, so that velocities are simply additive. Fig. 5-7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream.
It is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream.
The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism.
Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction. It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5-8) ejected by some rotating supermassive black holes.
Pressure and stress
Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass–energy, momentum, pressure and stress all serve as sources of gravity: Collectively, they are what tells spacetime how to curve.
General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass–energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity versus those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star. The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process becomes runaway and the neutron star collapses to a black hole.
The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae.
These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regards to pressure, the early universe was radiation dominated, and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass–energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity.
Experimental test of the sources of spacetime curvature
Definitions: Active, passive, and inertial mass
Bondi distinguishes between different possible types of mass: (1) active mass (ma) is the mass which acts as the source of a gravitational field; (2) passive mass (mp) is the mass which reacts to a gravitational field; (3) inertial mass (mi) is the mass which reacts to acceleration.
Passive mass (mp) is the same as the gravitational mass (mg) in the discussion of the equivalence principle.
In Newtonian theory,
The third law of action and reaction dictates that ma and mp must be the same.
On the other hand, whether mp and mi are equal is an empirical result.
In general relativity,
The equality of mp and mi is dictated by the equivalence principle.
There is no "action and reaction" principle dictating any necessary relationship between ma and mp.
Pressure as a gravitational source
The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5-9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined.
To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass–energy of a metal ball.
However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10^28 atm ≈ 10^33 Pa ≈ 10^33 kg·s^−2·m^−1. This amounts to about 1% of the nuclear mass density of approximately 10^18 kg/m^3 (after factoring in c^2 ≈ 9×10^16 m^2·s^−2).
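The "about 1%" figure quoted above follows from dividing the electrostatic pressure by c^2 and comparing the result with the nuclear mass density, as in this short check (the order-of-magnitude inputs are taken directly from the sentence above):

# Quick check of the ~1% figure: electrostatic pressure inside nuclei, expressed as an
# equivalent mass density (p / c^2), compared with the nuclear mass density.
p_electrostatic = 1e33    # Pa, order of magnitude quoted above
rho_nuclear     = 1e18    # kg/m^3, quoted above
c2              = 9e16    # m^2/s^2

equivalent_density = p_electrostatic / c2
print(f"pressure expressed as a mass density: {equivalent_density:.1e} kg/m^3")
print(f"fraction of nuclear mass density:     {equivalent_density / rho_nuclear:.1%}")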
If pressure does not act as a gravitational source, then the ratio ma/mp should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5-9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10^−5.
Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields.
In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2 km offset between the moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute equally to spacetime curvature as does mass–energy, the moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancies between active and passive mass to about 10−12. With decades of additional lunar laser ranging data, Singh et al. (2023) reported improvement on these limits by a factor of about 100.
Gravitomagnetism
The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism.
Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, while the geodetic effect was confirmed to better than 0.5%.
Subsequent measurements of frame dragging by laser-ranging observations of the LARES, LAGEOS, and LAGEOS 2 satellites have improved on the measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value, although there has been some disagreement on the accuracy of this result.
Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect. The first ten years of experience with a prototype ring laser gyroscope array, GINGERINO, established that the full scale experiment should be able to measure gravitomagnetism due to the Earth's rotation to within a 0.1% level or even better.
See also
Spacetime topology
Notes
References
Concepts in physics
Theoretical physics
Theory of relativity
Time
Time in physics
Conceptual models | Curved spacetime | [
"Physics",
"Mathematics"
] | 4,902 | [
"Physical phenomena",
"Time in physics",
"Physical quantities",
"Time",
"Vector spaces",
"Quantity",
"Theoretical physics",
"Space (mathematics)",
"nan",
"Theory of relativity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
1,016,556 | https://en.wikipedia.org/wiki/Induced%20seismicity | Induced seismicity is typically earthquakes and tremors that are caused by human activity that alters the stresses and strains on Earth's crust. Most induced seismicity is of a low magnitude. A few sites regularly have larger quakes, such as The Geysers geothermal plant in California which averaged two M4 events and 15 M3 events every year from 2004 to 2009. The Human-Induced Earthquake Database (HiQuake) documents all reported cases of induced seismicity proposed on scientific grounds and is the most complete compilation of its kind.
Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.7 El Reno earthquake, may have been induced by deep injection of wastewater by the oil industry. The large number of seismic events in oil and gas extraction states like Oklahoma is caused by the increasing volume of wastewater injected as part of the extraction process. "Earthquake rates have recently increased markedly in multiple areas of the Central and Eastern United States (CEUS), especially since 2010, and scientific studies have linked the majority of this increased activity to wastewater injection in deep disposal wells."
Induced seismicity can also be caused by the injection of carbon dioxide as the storage step of carbon capture and storage, which aims to sequester carbon dioxide captured from fossil fuel production or other sources in Earth's crust as a means of climate change mitigation. This effect has been observed in Oklahoma and Saskatchewan. Though safe practices and existing technologies can be utilized to reduce the risk of induced seismicity due to injection of carbon dioxide, the risk is still significant if the storage is large in scale. The consequences of the induced seismicity could disrupt pre-existing faults in the Earth's crust as well as compromise the seal integrity of the storage locations.
The seismic hazard from induced seismicity can be assessed using techniques similar to those for natural seismicity, although accounting for non-stationary seismicity. It appears that earthquake shaking from induced earthquakes is similar to that observed in natural tectonic earthquakes, or may be stronger at shorter distances. This means that ground-motion models derived from recordings of natural earthquakes, which are often more numerous in strong-motion databases than data from induced earthquakes, may be used with minor adjustments. Subsequently, a risk assessment can be performed, taking into account the increased seismic hazard and the vulnerability of the exposed elements at risk (e.g. local population and the building stock). Finally, the risk can, theoretically at least, be mitigated, either through reductions to the hazard or a reduction to the exposure or the vulnerability.
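A minimal sketch of how such a hazard estimate might look is given below, assuming a Gutenberg–Richter magnitude distribution and a Poisson occurrence model with a time-varying (non-stationary) event rate. Neither the model choice nor any of the numbers come from this article; all rates, b-values and magnitudes are illustrative assumptions.

# Minimal hazard sketch for induced seismicity: probability of at least one event of
# magnitude >= target during an operation, assuming a Gutenberg-Richter magnitude
# distribution and a (possibly time-varying) Poisson rate of events above a
# completeness magnitude m0. All numbers are illustrative assumptions.
import math

def prob_exceedance(monthly_rates_above_m0, b_value, m0, target_magnitude):
    """P(at least one event with M >= target) over the whole injection schedule."""
    # Fraction of events at or above the target magnitude under Gutenberg-Richter.
    frac = 10 ** (-b_value * (target_magnitude - m0))
    expected = sum(rate * frac for rate in monthly_rates_above_m0)
    return 1.0 - math.exp(-expected)

# Hypothetical schedule: rate of M >= 1.0 events per month rises as injection ramps up.
monthly_rates = [2, 5, 10, 20, 30, 30]     # events/month with M >= 1.0
p = prob_exceedance(monthly_rates, b_value=1.0, m0=1.0, target_magnitude=3.0)
print(f"P(at least one M >= 3.0 event over 6 months) ≈ {p:.1%}")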
Causes
There are many ways in which induced seismicity has been seen to occur. In the 2010s, some energy technologies that inject or extract fluid from the Earth, such as oil and gas extraction and geothermal energy development, have been found or suspected to cause seismic events. Some energy technologies also produce wastes that may be managed through disposal or storage by injection deep into the ground. For example, waste water from oil and gas production and carbon dioxide from a variety of industrial processes may be managed through underground injection.
Artificial lakes
The column of water in a large and deep artificial lake alters in-situ stress along an existing fault or fracture. In these reservoirs, the weight of the water column can significantly change the stress on an underlying fault or fracture by increasing the total stress through direct loading, or decreasing the effective stress through the increased pore water pressure. This significant change in stress can lead to sudden movement along the fault or fracture, resulting in an earthquake. Reservoir-induced seismic events can be relatively large compared to other forms of induced seismicity. Though understanding of reservoir-induced seismic activity is very limited, it has been noted that seismicity appears to occur on dams with heights larger than . The extra water pressure created by large reservoirs is the most accepted explanation for the seismic activity. When the reservoirs are filled or drained, induced seismicity can occur immediately or with a small time lag.
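The direct-loading effect described above can be put in rough numbers: the extra vertical stress imposed by a column of water of height h is ρgh. The depths in the sketch below are illustrative assumptions; the change in effective stress on any particular fault also depends on pore-pressure diffusion, which this simple estimate ignores.

# Rough numbers for the reservoir-loading effect described above: the extra vertical
# stress (and potential pore-pressure change) from a column of water is rho * g * h.
# Depth values are illustrative assumptions.
rho_water = 1000.0   # kg/m^3
g = 9.81             # m/s^2

for depth_m in (50, 100, 200):
    stress_mpa = rho_water * g * depth_m / 1e6
    print(f"{depth_m:>3} m of water adds ~{stress_mpa:.1f} MPa of vertical stress")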
The first case of reservoir-induced seismicity occurred in 1932 in Algeria's Oued Fodda Dam.
The 6.3 magnitude 1967 Koynanagar earthquake occurred in Maharashtra, India, with its epicenter, fore- and aftershocks all located near or under the Koyna Dam reservoir. 180 people died and 1,500 were left injured. The effects of the earthquake were felt as far away as Bombay, with tremors and power outages.
During the beginnings of the Vajont Dam in Italy, there were seismic shocks recorded during its initial fill. After a landslide almost filled the reservoir in 1963, causing a massive flooding and around 2,000 deaths, it was drained and consequently seismic activity was almost non-existent.
On August 1, 1975, a magnitude 6.1 earthquake at Oroville, California, was attributed to seismicity from a large earth-fill dam and reservoir recently constructed and filled.
The filling of the Katse Dam in Lesotho and of the Nurek Dam in Tajikistan are further examples. In Zambia, Kariba Lake may have provoked similar effects.
The 2008 Sichuan earthquake, which caused approximately 68,000 deaths, is another possible example. An article in Science suggested that the construction and filling of the Zipingpu Dam may have triggered the earthquake.
Some experts worry that the Three Gorges Dam in China may cause an increase in the frequency and intensity of earthquakes.
Mining
Mining affects the stress state of the surrounding rock mass, often causing observable deformation and seismic activity. A small portion of mining-induced events are associated with damage to mine workings and pose a risk to mine workers. These events are known as rock bursts in hard rock mining, or as bumps in underground coal mining. A mine's propensity to burst or bump depends primarily on depth, mining method, extraction sequence and geometry, and the material properties of the surrounding rock. Many underground hardrock mines operate seismic monitoring networks in order to manage bursting risks, and guide mining practices.
Seismic networks have recorded a variety of mining-related seismic sources including:
Shear slip events (similar to tectonic earthquakes) which are thought to have been triggered by mining activity. Notable examples include the 1980 Bełchatów earthquake and the 2014 Orkney earthquake.
Implosional events associated with mine collapses. The 2007 Crandall Canyon mine collapse and the Solvay Mine Collapse are examples of these.
Explosions associated with routine mining practices, such as drilling and blasting, and unintended explosions such as the Sago Mine disaster. Explosions are generally not considered "induced" events since they are caused entirely by chemical payloads. Most earthquake monitoring agencies take careful measures to identify explosions and exclude them from earthquake catalogs.
Fracture formation near the surface of excavations, which are usually small magnitude events only detected by dense in-mine networks.
Slope failures, the largest example being the Bingham Canyon Landslide.
Waste disposal wells
Injecting liquids into waste disposal wells, most commonly in disposing of produced water from oil and natural gas wells, has been known to cause earthquakes. This high-saline water is usually pumped into salt water disposal (SWD) wells. The resulting increase in subsurface pore pressure can trigger movement along faults, resulting in earthquakes.
One of the first known examples was from the Rocky Mountain Arsenal, northeast of Denver. In 1961, waste water was injected into deep strata, and this was later found to have caused a series of earthquakes.
The 2011 Oklahoma earthquake near Prague, of magnitude 5.8, occurred after 20 years of injecting waste water into porous deep formations at increasing pressures and saturation. On September 3, 2016, another earthquake of magnitude 5.8 occurred near Pawnee, Oklahoma, followed by nine aftershocks between magnitudes 2.6 and 3.6 within hours. Tremors were felt as far away as Memphis, Tennessee, and Gilbert, Arizona. Mary Fallin, the Oklahoma governor, declared a local emergency, and shutdown orders for local disposal wells were issued by the Oklahoma Corporation Commission. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake, may have been induced by deep injection of waste water by the oil industry. Prior to April 2015, however, the Oklahoma Geological Survey's position was that the quake was most likely due to natural causes and was not triggered by waste injection. This was one of many earthquakes which have affected the Oklahoma region.
Since 2009, earthquakes have become hundreds of times more common in Oklahoma with magnitude 3 events increasing from 1 or 2 per year to 1 or 2 per day. On April 21, 2015, the Oklahoma Geological Survey released a statement reversing its stance on induced earthquakes in Oklahoma: "The OGS considers it very likely that the majority of recent earthquakes, particularly those in central and north-central Oklahoma, are triggered by the injection of produced water in disposal wells."
Hydrocarbon extraction and storage
Large-scale fossil fuel extraction can generate earthquakes. Induced seismicity can also be related to underground gas storage operations. The September–October 2013 seismic sequence that occurred 21 km off the coast of the Valencia Gulf (Spain) is probably the best known case of induced seismicity related to underground gas storage operations (the Castor Project). In September 2013, after the injection operations started, the Spanish seismic network recorded a sudden increase in seismicity. More than 1,000 events with magnitudes between 0.7 and 4.3 (the largest earthquake ever associated with gas storage operations) and located close to the injection platform were recorded in about 40 days. Due to significant public concern the Spanish Government halted the operations. By the end of 2014, the Spanish government definitively terminated the concession of the UGS plant. Since January 2015, about 20 people who took part in the transaction and approval of the Castor Project have been indicted.
Groundwater extraction
Changes in crustal stress patterns caused by the large-scale extraction of groundwater have been shown to trigger earthquakes, as in the case of the 2011 Lorca earthquake.
Geothermal energy
Enhanced geothermal systems (EGS), a new type of geothermal power technology that does not require natural convective hydrothermal resources, are known to be associated with induced seismicity. EGS involves pumping fluids at pressure to enhance or create permeability through the use of hydraulic fracturing techniques. Hot dry rock (HDR) EGS actively creates geothermal resources through hydraulic stimulation. Depending on the rock properties, and on injection pressures and fluid volume, the reservoir rock may respond with tensile failure, as is common in the oil and gas industry, or with shear failure of the rock's existing joint set, as is thought to be the main mechanism of reservoir growth in EGS efforts.
HDR and EGS systems are currently being developed and tested in Soultz-sous-Forêts (France), Desert Peak and the Geysers (U.S.), Landau (Germany), and Paralana and Cooper Basin (Australia). Induced seismicity events at the Geysers geothermal field in California have been strongly correlated with injection data. The test site at Basel, Switzerland, has been shut down due to induced seismic events. In November 2017, a Mw 5.5 earthquake struck the city of Pohang (South Korea), injuring several people and causing extensive damage. The proximity of the seismic sequence to an EGS site, where stimulation operations had taken place only a few months before the earthquake, raised the possibility that the earthquake was anthropogenic. According to two different studies, it seems plausible that the Pohang earthquake was induced by EGS operations.
Researchers at MIT believe that seismicity associated with hydraulic stimulation can be mitigated and controlled through predictive siting and other techniques. With appropriate management, the number and magnitude of induced seismic events can be decreased, significantly reducing the probability of a damaging seismic event.
Induced seismicity in Basel led to suspension of its HDR project. A seismic hazard evaluation was then conducted, which resulted in the cancellation of the project in December 2009.
Hydraulic fracturing
Hydraulic fracturing is a technique in which high-pressure fluid is injected into low-permeability reservoir rocks in order to induce fractures and increase hydrocarbon production. The process is generally associated with seismic events that are too small to be felt at the surface (with moment magnitudes ranging from −3 to 1), although larger-magnitude events are not excluded. For example, several cases of larger-magnitude events (M > 4) have been recorded in Canada in the unconventional resources of Alberta and British Columbia.
Carbon capture and storage
Risk analysis
Operation of technologies involving long-term geologic storage of waste fluids has been shown to induce seismic activity in nearby areas, and correlation of periods of seismic dormancy with minima in injection volumes and pressures has even been demonstrated for fracking wastewater injection in Youngstown, Ohio. Of particular concern to the viability of carbon dioxide storage from coal-fired power plants and similar endeavors is that the scale of intended CCS projects is much larger, in both injection rate and total injection volume, than any current or past operation that has already been shown to induce seismicity. As such, extensive modeling must be done of future injection sites in order to assess the risk potential of CCS operations, particularly in relation to the effect of long-term carbon dioxide storage on shale caprock integrity, as the potential for fluid leaks to the surface might be quite high for moderate earthquakes. However, the potential of CCS to induce large earthquakes and CO2 leakage remains a controversial issue.
Monitoring
Since geological sequestration of carbon dioxide has the potential to induce seismicity, researchers have developed methods to monitor and model the risk of injection-induced seismicity in order to better manage the risks associated with this phenomenon. Monitoring can be conducted with measurements from instruments such as geophones, which measure the movement of the ground. Generally a network of instruments is deployed around the injection site, although many current carbon dioxide injection sites use no monitoring devices. Modelling is an important technique for assessing the potential for induced seismicity, and two primary classes of models are used: physical and numerical. A physical model uses measurements from the early stages of a project to forecast how the project will behave once more carbon dioxide is injected. A numerical model, on the other hand, uses numerical methods to simulate the physics of what is happening within the reservoir. Both modelling and monitoring are useful tools for quantifying, better understanding, and mitigating the risks associated with injection-induced seismicity.
Failure mechanisms due to fluid injection
To assess induced seismicity risks associated with carbon storage, one must understand the mechanisms behind rock failure. The Mohr-Coulomb failure criteria describe shear failure on a fault plane. Most generally, failure will happen on existing faults due to several mechanisms: an increase in shear stress, a decrease in normal stress, or an increase in pore pressure. The injection of supercritical CO2 will change the stresses in the reservoir as it expands, causing potential failure on nearby faults. Injection of fluids also increases the pore pressure in the reservoir, triggering slip on existing planes of weakness in the rock. The latter is the most common cause of induced seismicity due to fluid injection.
The Mohr-Coulomb failure criteria state that

$\tau_c = \tau_0 + \mu(\sigma_n - P)$

with $\tau_c$ the critical shear stress leading to failure on a fault, $\tau_0$ the cohesive strength along the fault, $\sigma_n$ the normal stress, $\mu$ the friction coefficient on the fault plane and $P$ the pore pressure within the fault. When $\tau_c$ is attained, shear failure occurs and an earthquake can be felt. This process can be represented graphically on a Mohr's circle.
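As a minimal numerical sketch of this criterion (all stress values below are illustrative assumptions, not field measurements), one can check how an increase in pore pressure brings a fault to failure:

```python
# Minimal sketch of the Mohr-Coulomb check described above.
# All numbers are illustrative assumptions, not field data.

def critical_shear_stress(cohesion, friction, normal_stress, pore_pressure):
    """tau_c = tau_0 + mu * (sigma_n - P); stresses in MPa."""
    return cohesion + friction * (normal_stress - pore_pressure)

def fault_fails(applied_shear, cohesion, friction, normal_stress, pore_pressure):
    """Shear failure occurs when the applied shear stress reaches tau_c."""
    return applied_shear >= critical_shear_stress(
        cohesion, friction, normal_stress, pore_pressure)

# Hypothetical fault: 5 MPa cohesion, friction coefficient 0.6,
# 60 MPa normal stress, 25 MPa applied shear stress.
for pore_pressure in (20.0, 30.0, 40.0):  # MPa, rising with injection
    print(pore_pressure,
          fault_fails(25.0, 5.0, 0.6, 60.0, pore_pressure))
# Raising the pore pressure lowers tau_c until the fault slips
# (False at 20 MPa, True at 30 and 40 MPa with these assumed values).
```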
Comparison of risks due to CCS versus other injection methods
While there is risk of induced seismicity associated with carbon capture and storage underground on a large scale, it is currently a much less serious risk than other injection types. Wastewater injection, hydraulic fracturing, and secondary recovery after oil extraction have all contributed significantly more to induced seismic events than carbon capture and storage in the last several years. To date, no major seismic events have been associated with carbon injection, whereas seismic events caused by the other injection methods have been recorded. One such example is the massive increase in induced seismicity in Oklahoma, USA, caused by injection of huge volumes of wastewater into the Arbuckle Group sedimentary rock.
Electromagnetic pulses
It has been shown that high-energy electromagnetic pulses can trigger the release of energy stored by tectonic movements by increasing the rate of local earthquakes within 2–6 days after emission by the EMP generators. The energy released is approximately six orders of magnitude larger than the energy of the EM pulses. The release of tectonic stress by these relatively small triggered earthquakes amounts to 1–17% of the stress released by a strong earthquake in the area. It has been proposed that strong EM impacts could be used to control seismicity, since during the periods of the experiments, and for a long time afterwards, the seismicity dynamics were much more regular than usual.
Risk analysis
Risk factors
Risk is defined as the probability of being impacted by an event in the future. Seismic risk is generally estimated by combining the seismic hazard with the exposure and vulnerability at a site or over a region. The hazard from earthquakes depends on the proximity to potential earthquake sources, the rates of occurrence of earthquakes of different magnitudes at those sources, and the propagation of seismic waves from the sources to the site of interest. Hazard is then represented in terms of the probability of exceeding some level of ground shaking at a site. Earthquake hazards can include ground shaking, liquefaction, surface fault displacement, landslides, tsunamis, and uplift/subsidence for very large events (ML > 6.0). Because induced seismic events are, in general, smaller than ML 5.0 and of short duration, the primary concern is ground shaking.
Ground shaking
Ground shaking can result in both structural and nonstructural damage to buildings and other structures. It is commonly accepted that structural damage to modern engineered structures happens only in earthquakes larger than ML 5.0. In seismology and earthquake engineering, ground shaking can be measured as peak ground velocity (PGV), peak ground acceleration (PGA) or spectral acceleration (SA) at a building's period of excitation. In regions of historical seismicity where buildings are engineered to withstand seismic forces, moderate structural damage is possible, and shaking is perceived as very strong, when PGA reaches 18–34% of g (the acceleration due to gravity). In rare cases, nonstructural damage has been reported in earthquakes as small as ML 3.0. For critical facilities like dams and nuclear plants, the acceptable levels of ground shaking are lower than those for ordinary buildings.
Probabilistic seismic hazard analysis
Extended reading – An Introduction to Probabilistic Seismic Hazard Analysis (PSHA)
Probabilistic Seismic Hazard Analysis (PSHA) is a probabilistic framework that accounts for the probabilities of earthquake occurrence and the probabilities of ground motion propagation. Using the framework, the probability of exceeding a certain level of ground shaking at a site can be quantified, taking into account all the possible earthquakes (both natural and induced). PSHA methodology is used to determine seismic loads for building codes in both the United States and Canada, and increasingly in other parts of the world, as well as to protect dams and nuclear plants from damage due to seismic events.
Calculating Seismic Risk
Earthquake source characterization
Understanding the geological background of the site is a prerequisite for seismic hazard estimation. Rock formations, subsurface structures, locations of faults, the state of stress, and other parameters that contribute to possible seismic events are considered. Records of past earthquakes at the site are also taken into account.
Recurrence pattern
The magnitudes of earthquakes occurring at a source generally follow the Gutenberg-Richter relation, which states that the number of earthquakes decreases exponentially with increasing magnitude, as shown below:

$\log_{10} N = a - bM$

where $M$ is the magnitude of seismic events, $N$ is the number of events with magnitudes greater than $M$, $a$ is the rate parameter and $b$ is the slope. $a$ and $b$ vary for different sources. In the case of natural earthquakes, historical seismicity is used to determine these parameters. Using this relationship, the number and probability of earthquakes exceeding a certain magnitude can be predicted, under the assumption that earthquake occurrence follows a Poisson process. The goal of this analysis, however, is to determine the likelihood of future earthquakes. For induced seismicity, in contrast to natural seismicity, earthquake rates change over time as a result of changes in human activity, and hence are quantified as non-stationary processes with time-varying seismicity rates.
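As an illustrative sketch of how these parameters feed into a forecast (the $a$ and $b$ values below are hypothetical, not fitted to any real catalog), the relation and the Poisson assumption can be evaluated directly:

```python
import math

def gr_annual_rate(m, a, b):
    """Annual number of earthquakes with magnitude >= m,
    from the Gutenberg-Richter relation log10 N = a - b*m."""
    return 10.0 ** (a - b * m)

def poisson_exceedance_probability(m, a, b, years=1.0):
    """Probability of at least one event with magnitude >= m in `years`,
    assuming earthquake occurrence follows a Poisson process."""
    rate = gr_annual_rate(m, a, b)
    return 1.0 - math.exp(-rate * years)

# Illustrative values only: a = 3.0, b = 1.0 for some hypothetical source.
for m in (3.0, 4.0, 5.0):
    print(m, round(poisson_exceedance_probability(m, a=3.0, b=1.0, years=1.0), 3))
# For induced seismicity, the rate (and hence `a`) would itself change with
# time as injection activity changes, making the process non-stationary.
```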
Ground motions
At a given site, the ground motion describes the seismic waves that would be observed at that site with a seismometer. In order to simplify the representation of an entire seismogram, PGV (peak ground velocity), PGA (peak ground acceleration), spectral acceleration (SA) at different periods, earthquake duration, and Arias intensity (IA) are some of the parameters used to represent ground shaking. Ground motion propagation from the source to a site for an earthquake of a given magnitude is estimated using ground motion prediction equations (GMPEs) that have been developed based on historical records. Since historical records are scarce for induced seismicity, researchers have proposed modifications to GMPEs for natural earthquakes in order to apply them to induced earthquakes.
Seismic hazard
The PSHA framework uses the distributions of earthquake magnitudes and ground motion propagation to estimate the seismic hazard – the probability of exceeding a certain level of ground shaking (PGA, PGV, SA, IA, etc.) in the future. Depending on the complexity of the probability distributions, either numerical methods or simulations (such as the Monte Carlo method) may be used to estimate seismic hazard. In the case of induced seismicity, the seismic hazard is not constant but varies with time due to changes in the underlying seismicity rates.
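A heavily simplified Monte Carlo sketch of this idea is shown below; the event rate, magnitude bounds, toy attenuation relation and scatter are all placeholder assumptions rather than a published GMPE, and a real PSHA would integrate over all sources and distances:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_magnitudes(n, m_min=3.0, m_max=6.0, b=1.0):
    """Sample n magnitudes from a truncated Gutenberg-Richter distribution."""
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    u = rng.random(n)
    return m_min - np.log(1.0 - u * c) / beta

def toy_pga(magnitudes, distance_km=10.0):
    """Placeholder ground-motion model with lognormal scatter (NOT a real GMPE)."""
    ln_median = -4.0 + 1.0 * magnitudes - 1.5 * np.log(distance_km + 10.0)
    return np.exp(ln_median + rng.normal(0.0, 0.5, size=magnitudes.shape))  # PGA in g

def annual_exceedance_probability(pga_threshold_g, annual_rate=0.5, n_years=50_000):
    """Monte Carlo estimate of the probability that PGA (in g) is exceeded
    at least once in a year at a single hypothetical site."""
    exceed = 0
    counts = rng.poisson(annual_rate, size=n_years)  # events per simulated year
    for n in counts:
        if n == 0:
            continue
        pga = toy_pga(sample_magnitudes(n))
        if pga.max() > pga_threshold_g:
            exceed += 1
    return exceed / n_years

print(annual_exceedance_probability(0.1))  # hazard of exceeding 0.1 g in one year
```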
Exposure and vulnerability
In order to estimate seismic risk, the hazard is combined with the exposure and vulnerability at a site or in a region. For example, if an earthquake occurs where there are no humans or structures, there would be no human impacts regardless of the level of seismic hazard. Exposure is defined as the set of entities (such as buildings and people) that exist at a given site or in a region. Vulnerability is defined as the potential of impact to those entities, for example structural or nonstructural damage to a building, and loss of well-being and life for people. Vulnerability can also be represented probabilistically using vulnerability or fragility functions. A vulnerability or fragility function specifies the probability of impact at different levels of ground shaking. In regions like Oklahoma, with little historical natural seismicity, structures are not engineered to withstand seismic forces and as a result are more vulnerable even at low levels of ground shaking, compared with structures in tectonically active regions like California and Japan.
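For illustration, a fragility curve is often modeled as a lognormal cumulative distribution of ground shaking; the median and dispersion in the sketch below are assumed values, not taken from any published fragility study:

```python
from math import erf, log, sqrt

def lognormal_fragility(pga_g, median_g=0.3, beta=0.6):
    """Probability of reaching a given damage state at a given PGA, modeled as
    a lognormal fragility curve. Median and dispersion are illustrative."""
    z = log(pga_g / median_g) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for pga in (0.05, 0.1, 0.3, 0.6):
    print(pga, round(lognormal_fragility(pga), 3))
# A building stock not engineered for earthquakes would be represented by a
# lower (more fragile) median; a well-engineered stock by a higher one.
```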
Seismic risk
Seismic risk is defined as the probability of exceeding a certain level of impact in the future. For example, it may express the probability of moderate or greater damage to a building in the future. Seismic hazard is combined with the exposure and vulnerability to estimate seismic risk. While numerical methods may be used to estimate risk at a single site, simulation-based methods are better suited to estimating seismic risk for a region with a portfolio of entities, in order to correctly account for correlations in ground shaking and impacts. In the case of induced seismicity, the seismic risk varies over time due to changes in the seismic hazard.
Risk Mitigation
Induced seismicity can cause damage to infrastructure and has been documented to have damaged buildings in Oklahoma. It can also lead to leakage of brine and CO2 from the injection formation.
It is easier to predict and mitigate seismicity caused by explosions. Common mitigation strategies include constraining the amount of dynamite used in a single explosion and the locations of the explosions. For injection-related induced seismicity, however, it is still difficult to predict when and where induced seismic events will occur, and at what magnitude. Because induced seismic events related to fluid injection are unpredictable, they have garnered more attention from the public. Induced seismicity is only part of the chain reaction from industrial activities that worries the public. Perceptions of induced seismicity vary considerably between different groups of people. The public tends to feel more negatively about earthquakes caused by human activities than about natural earthquakes. Two major parts of public concern relate to damage to infrastructure and to human well-being. Most induced seismic events are below M 2 and are not able to cause any physical damage. Nevertheless, when seismic events are felt and cause damage or injuries, questions arise from the public as to whether it is appropriate to conduct oil and gas operations in those areas. Public perception may vary based on the population and the tolerance of local people. For example, in the seismically active Geysers geothermal area in Northern California, a rural area with a relatively small population, the local population tolerates earthquakes up to M 4.5. Actions have been taken by regulators, industry and researchers. On October 6, 2015, people from industry, government, academia, and the public gathered to discuss how effective implementing a traffic light system or protocol in Canada would be in helping to manage risks from induced seismicity.
Risk assessment and tolerance for induced seismicity, however, are subjective and shaped by factors such as politics, economics, and public understanding. Policymakers often have to balance the interests of industry against those of the population. In these situations, seismic risk estimation serves as a critical tool for quantifying future risk, and can be used to regulate earthquake-inducing activities so that the seismic risk does not exceed the maximum level acceptable to the population.
Traffic Light System
One of the methods suggested to mitigate seismic risk is a Traffic Light System (TLS), also referred to as a Traffic Light Protocol (TLP), a calibrated control system that provides continuous, real-time monitoring and management of ground shaking from induced seismicity at specific sites. A TLS was first implemented in 2005 at an enhanced geothermal plant in Central America. For oil and gas operations, the most widely implemented version is adapted from the system used in the UK. Normally there are two types of TLS. The first sets several thresholds, usually earthquake local magnitudes (ML) or ground motions, from small to large. If the induced seismicity reaches the smaller thresholds, the operators modify their operations and inform the regulators. If the induced seismicity reaches the larger thresholds, operations are shut down immediately. The second type of traffic light system sets only one threshold; if it is reached, the operations are halted. This is also called a "stop light system". Thresholds for the traffic light system vary between and within countries, depending on the area.
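A minimal sketch of the first (multi-threshold) type of protocol described above is given below; the magnitude thresholds and responses are placeholders, since real protocols differ by country, regulator and site:

```python
def traffic_light_action(observed_magnitude,
                         amber_threshold=2.0, red_threshold=4.0):
    """Map an observed local magnitude to an operational response.
    Thresholds are illustrative; real protocols vary by jurisdiction."""
    if observed_magnitude >= red_threshold:
        return "red: stop injection immediately and notify the regulator"
    if observed_magnitude >= amber_threshold:
        return "amber: reduce rate/pressure, increase monitoring, inform regulator"
    return "green: continue normal operations"

for m in (1.2, 2.7, 4.1):
    print(m, "->", traffic_light_action(m))
```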
However, the traffic light system is not able to account for future changes in seismicity. It may take time for changes in human activities to mitigate the seismic activity, and it has been observed that some of the largest induced earthquakes have occurred after stopping fluid injection.
Nuclear explosions
Nuclear explosions can cause seismic activity, but according to USGS, the resulting seismic activity is less energetic than the original nuclear blast, and generally does not produce large aftershocks. Nuclear explosions may instead release the elastic strain energy that was stored in the rock, strengthening the initial blast shockwave.
U.S. National Research Council report
A 2013 report from the U.S. National Research Council examined the potential for energy technologies—including shale gas recovery, carbon capture and storage, geothermal energy production, and conventional oil and gas development—to cause earthquakes. The report found that only a very small fraction of injection and extraction activities among the hundreds of thousands of energy development sites in the United States have induced seismicity at levels noticeable to the public. However, although scientists understand the general mechanisms that induce seismic events, they are unable to accurately predict the magnitude or occurrence of these earthquakes due to insufficient information about the natural rock systems and a lack of validated predictive models at specific energy development sites.
The report noted that hydraulic fracturing has a low risk for inducing earthquakes that can be felt by people, but underground injection of wastewater produced by hydraulic fracturing and other energy technologies has a higher risk of causing such earthquakes. In addition, carbon capture and storage—a technology for storing excess carbon dioxide underground—may have the potential for inducing seismic events, because significant volumes of fluids are injected underground over long periods of time.
List of induced seismic events
Table
References
Further reading
External links
The Human-Induced Earthquake Database
Map of reservoir-induced earthquakes at International Rivers
WEBINAR: Yes, Humans Really Are Causing Earthquakes – IRIS Consortium
One-year seismic hazard forecast for the Central and Eastern United States from induced and natural earthquakes – United States Geological Survey, 2016 (with maps)
Induced Earthquakes – United States Geological Survey website
Seismology
Man-made disasters | Induced seismicity | [
"Biology"
] | 5,882 | [] |
1,016,665 | https://en.wikipedia.org/wiki/Ad%20hoc%20On-Demand%20Distance%20Vector%20Routing | Ad hoc On-Demand Distance Vector (AODV) Routing is a routing protocol for mobile ad hoc networks (MANETs) and other wireless ad hoc networks. It was jointly developed by Charles Perkins (Sun Microsystems) and Elizabeth Royer (now Elizabeth Belding) (University of California, Santa Barbara) and was first published at the 2nd IEEE Workshop on Mobile Computing Systems and Applications in February 1999.
AODV is the routing protocol used in Zigbee – a low power, low data rate wireless ad hoc network. There are various implementations of AODV such as MAD-HOC, Kernel-AODV, AODV-UU, AODV-UCSB and AODV-UIUC.
The original publication of AODV won the SIGMOBILE Test of Time Award in 2018. According to Google Scholar, this publication reached 30,000 citations at the end of 2022. AODV was published in the Internet Engineering Task Force (IETF) as Experimental RFC 3561 in 2003.
How it works
Each node has its own sequence number that grows monotonically over time and ensures that there are no loops in the paths used. In addition, each node that participates in routing maintains a routing table, in which each entry contains the address of the next node toward the destination (the next hop), the destination's sequence number, and the total distance in hops (or possibly other metrics designed to measure link quality).
In AODV, the network remains completely silent until a connection is required to forward a data packet. When routes need to be searched on the network, AODV resorts to the following packets defined by its protocol:
Route request (RREQ)
Route reply (RREP)
Route error (RERR)
These messages can be implemented as simple UDP packets, so routing is still based on the Internet Protocol (IP).
RREQ packets are broadcast by the source node and rebroadcast by intermediate nodes, so the request is flooded through the entire network. When a node receives a request packet and has a current route to the destination (or is the destination itself), it can send an RREP packet back along the temporary reverse path to the requesting node, which can then exploit the newly received information. Generally, each node compares the different paths it learns based on their length and chooses the most convenient one. If a node is no longer reachable, a RERR message is generated to alert the rest of the network.
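The route-selection rule implied above (prefer the freshest sequence number, then the shortest hop count) can be sketched as follows; this is a much-simplified illustration rather than the RFC 3561 state machine, and the node names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: str      # neighbour to forward packets to
    seq_no: int        # destination sequence number last heard for this route
    hop_count: int     # distance to the destination in hops

routing_table: dict[str, RouteEntry] = {}

def update_route(destination: str, next_hop: str, seq_no: int, hop_count: int):
    """Install or refresh a route learned from an RREQ/RREP.
    A route is accepted if it is fresher (higher sequence number), or equally
    fresh but shorter - the rule that keeps AODV paths loop-free."""
    current = routing_table.get(destination)
    if (current is None
            or seq_no > current.seq_no
            or (seq_no == current.seq_no and hop_count < current.hop_count)):
        routing_table[destination] = RouteEntry(next_hop, seq_no, hop_count)

# Example: two replies for the same destination arrive via different neighbours.
update_route("node-D", next_hop="node-B", seq_no=7, hop_count=4)
update_route("node-D", next_hop="node-C", seq_no=7, hop_count=2)  # shorter, kept
print(routing_table["node-D"])
```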
Each RREQ has a "time to live" value that limits the number of times it can be retransmitted. In addition, AODV uses a binary exponential backoff mechanism when a node does not receive a response to its RREQ: the waiting time before each retransmission doubles, up to a maximum set by the implementation.
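A minimal sketch of this retry behavior, with a doubling timeout, might look like the following; the timing values and callback names are illustrative assumptions:

```python
import time

def discover_route(send_rreq, route_found, base_timeout=0.5, max_retries=3):
    """Retry route discovery with a doubling (binary exponential) timeout.
    `send_rreq` broadcasts a request; `route_found` reports whether an RREP
    has arrived. Both are supplied by the caller; timings are illustrative."""
    timeout = base_timeout
    for attempt in range(max_retries + 1):
        send_rreq()
        time.sleep(timeout)          # wait for an RREP
        if route_found():
            return True
        timeout *= 2                 # double the wait before the next attempt
    return False                     # destination unreachable for now
```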
Evaluation
The main advantage of AODV is that it generates no routing traffic for routes that are already established and working. In fact, the protocol remains entirely idle until a packet must be sent to a node for which no route is known. Beyond that, distance vector-based routing is computationally simple and does not require large amounts of memory.
However, the protocol takes longer than other protocols to establish a connection between two nodes in a network.
See also
Wireless ad hoc networks
Backpressure routing
Mesh networking
Wireless mesh network § Routing protocols
List of ad hoc routing protocols
References
Ad hoc routing protocols
Computer network technology | Ad hoc On-Demand Distance Vector Routing | [
"Technology"
] | 692 | [
"Wireless networking",
"Wireless sensor network"
] |
1,016,696 | https://en.wikipedia.org/wiki/HD%20Radio | HD Radio (HDR) is a trademark for a so-called in-band on-channel (IBOC) digital radio broadcast technology. HD radio generally simulcasts an existing analog radio station in digital format with less noise and with additional text information. HD Radio is used primarily by AM and FM radio stations in the United States, U.S. Virgin Islands, Canada, Mexico and the Philippines, with a few implementations outside North America.
HD Radio transmits the digital signals in unused portions of the same band as the analog AM and FM signals. As a result, radios are more easily designed to pick up both signals, which is why the HD in HD Radio is shorthand for "hybrid digital," not "high definition." Officially, HD is not intended to stand for any term in HD Radio, it is simply part of iBiquity's trademark, and does not have any meaning on its own. HD Radios tune into the station's analog signal first and then look for a digital signal. The European DRM system shares channels similar to HD Radio, but the European DAB system uses different frequencies for its digital transmission.
The term "on channel" is a misnomer because the system actually sends the digital components on the ordinarily unused channels adjacent to an existing radio station's allocation. This leaves the original analog signal intact, allowing enabled receivers to switch between digital and analog as required. In most FM implementations, from 96 to 128 kbit/s of capacity is available. High-fidelity audio requires only 48 kbit/s so there is ample capacity for additional channels, which HD Radio refers to as "multicasting".
HD Radio is licensed so that the simulcast of the main channel is royalty-free. The company makes its money on fees on additional multicast channels. Stations can choose the quality of these additional channels; music stations generally add one or two high-fidelity channels, while others use lower bit rates for voice-only news and sports. Previously these services required their own transmitters, often on low-fidelity AM. With HD, a single FM allocation can carry all of these channels, and even its lower-quality settings usually sound better than AM.
While it is typically used in conjunction with an existing channel it has been licensed for all-digital transmission as well. Four AM stations use the all-digital format, one under an experimental authorization, the other three under new rules adopted by the FCC in October 2020. The system sees little use elsewhere due to its reliance on the sparse allocation of FM broadcast channels in North America; in Europe, stations are more tightly spaced.
History
This standard was meant to supersede other existing stereophonic standards on AM.
iBiquity developed HD Radio, and the system was selected by the U.S. Federal Communications Commission (FCC) in 2002 as a digital audio broadcasting method for the United States. It is officially known as NRSC‑5, with the latest version being NRSC‑5‑E.
iBiquity was acquired by DTS in September 2015 bringing the HD Radio technology under the same banner as DTS's eponymous theater surround sound systems. The HD Radio technology and trademarks were subsequently acquired by Xperi Holding Corporation in 2016.
HD Radio is one of several digital radio standards which are generally incompatible with each other:
FMeXtra was a competing U.S. standard, but has been stagnant since the 2010s.
Compatible AM Digital (CAM‑D) for AM stations.
Digital Audio Broadcasting (DAB), a.k.a. Eureka 147, is the most common standard in Europe.
Digital Radio Mondiale (DRM‑30 and DRM+ configurations) was intended mostly for shortwave radio.
By May 2018, iBiquity Digital Co. claimed its HD Radio technology was used by more than 3,500 individual services, mostly in the United States. This compares with more than 2,200 services operating with the DAB system.
A 400 kHz wide channel is required for HD FM analog-digital hybrid transmission, making its adoption problematic outside of North America. In the United States, FM channels are spaced 200 kHz apart as opposed to 100 kHz elsewhere. Furthermore, long-standing FCC licensing practice, dating from when receivers had poor adjacent-channel selectivity, assigns stations in geographically overlapping or adjacent coverage areas to channels separated by (at least) 400 kHz. Thus most stations can transmit carefully designed digital signals on their adjacent channels without interfering with other local stations, and usually without co-channel interference with distant stations on those channels. Outside the U.S., the heavier spectral loading of the FM broadcast band makes IBOC systems like HD Radio less practical.
The FCC has not indicated any intent to end analog radio broadcasting as it did with analog television, since it would not result in the recovery of any radio spectrum rights which could be sold. Thus, there is no deadline by which consumers must buy an HD receiver.
Technique
Digital information is transmitted using OFDM with an audio compression format called HDC (High-Definition Coding). HDC is a proprietary codec based upon, but incompatible with, the MPEG-4 standard HE-AAC. It uses a modified discrete cosine transform (MDCT) audio data compression algorithm.
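HDC itself is proprietary and undocumented, but the MDCT it builds on is a standard, publicly documented transform. The following is a direct (unoptimized) implementation of the textbook MDCT definition, not of the HDC codec; real codecs add windowing, overlap and quantization on top of it:

```python
import numpy as np

def mdct(block):
    """Forward MDCT of a block of 2N samples into N coefficients,
    using the textbook definition of the transform."""
    two_n = len(block)
    n = two_n // 2
    out = np.empty(n)
    for k in range(n):
        phase = np.pi / n * (np.arange(two_n) + 0.5 + n / 2) * (k + 0.5)
        out[k] = np.sum(block * np.cos(phase))
    return out

print(mdct(np.ones(8)))  # 8 samples -> 4 coefficients
```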
HD-equipped stations pay a one-time licensing fee for converting their primary audio channel to iBiquity's HD Radio technology, and 3% of incremental net revenues for any additional digital subchannels. The cost of converting a radio station can run between $100,000 and $200,000. Receiver manufacturers who include HD Radio pay a royalty, which is the main reason it has failed to be fully adopted as a standard feature.
If the HD receiver loses the primary digital signal (HD‑1), it reverts to the analog signal, thereby providing seamless operation between the newer and older transmission methods. The extra HD‑2 and HD‑3 streams do not have an analog simulcast; consequently, their sound will drop out or "skip" when digital reception degrades (similar to digital television drop-outs). Alternatively the HD signal can revert to a more robust 20 kbit/s stream, although the sound quality is then reduced to conventional AM level. Datacasting is also possible, with metadata providing song titles or artist information.
iBiquity Digital claims that the system approaches CD quality audio and offers reduction of both interference and static. However, the data rates in HD Radio are substantially lower than from a CD, and the digital signals sometimes interfere with adjacent analog AM band stations. (see § AM, below).
AM
The AM hybrid mode ("MA1") uses 30 kHz of bandwidth (±15 kHz), and overlaps adjacent channels on both sides of the station's assigned channel. Some nighttime listeners have expressed concern this design harms reception of adjacent channels with one formal complaint filed regarding the matter: WYSL owner Bob Savage against WBZ in Boston.
The capacity of a 30 kHz channel on the AM band is limited. By using spectral band replication, the HDC+SBR codec is able to approximate audio content up to 15,000 Hz, thus achieving moderate quality on the bandwidth-constrained AM band. The HD Radio AM hybrid mode offers two options which can carry approximately 40–60 kbit/s of data, with most AM digital stations defaulting to the more robust 40 kbit/s mode, which features redundancy (the same data is broadcast twice).
The digital radio signal received on a conventional AM receiver tuned to an adjacent channel sounds like white noise – similar to a large waterfall, or a strong, steady wind through a dense forest canopy.
All-digital AM
All-digital AM ("MA3") allows for two modes: "Enhanced" and "core-only".
In enhanced mode, the primary, secondary and tertiary carriers are transmitted, allowing for a maximum throughput of 40.2 kbit/s while using 20 kHz of bandwidth out to the station's 0.5 mV/m contour. Inside this contour, stereo audio along with graphics (station logo and "artist experience" album artwork) and text information (the station's call sign, title, album, and artist) can be decoded by the receiver.
Beyond the station's 0.5 mV/m contour, typically only the primary carriers can be received, which restricts the maximum throughput to 20.2 kbit/s while only requiring 10 kHz of bandwidth.
In core-only mode, the station only transmits the primary carriers.
When the receiver can only decode the primary carriers in either mode, the audio will be mono and only text information can be displayed. The narrower bandwidth needed in either all-digital mode compared to hybrid mode reduces possible interference to and from stations broadcasting on adjacent channels. However, all-digital AM lacks the analog signal for fallback when the signal is too weak for the receiver to decode the primary digital carrier.
Four stations have operated as all-digital / digital-only broadcasters:
WWFD has had special temporary authority from the FCC since July 2018 to broadcast all-digital.
WMGG (since January 2021), WFAS (from May 2021 until its October 2024 deletion) and WSRO (since December 2021) broadcast all-digital under new rules adopted by the FCC on 27 October 2020 that allow any AM station to voluntarily choose to convert to all-digital operation. However, WMGG subsequently dropped MA3 and WSRO left the air.
WWFD experimented with using a digital subchannel, operating a second channel (HD2) at a low data rate while reducing the data rate of the primary channel (HD1). In October 2020, the FCC concluded from WWFD's experiments:
"The record does not establish that an audio stream on an HD-2 subchannel is currently technically feasible".
The FCC requires stations that wish to multiplex their digital AM signals to request and receive permission to do so; in early 2020 it rejected a multiplex request from WTLC.
FM
The FM hybrid digital / analog mode offers four options which can carry approximately 100, 112, 125, or 150 kbit/s of data carrying (lossy) compressed digital audio depending upon the station manager's power budget and desired range of signal. HD FM also provides several pure digital modes with up to 300 kbit/s rate, and enabling extra features like surround sound. Like AM, purely-digital FM provides a "fallback" condition where it reverts to a more robust 25 kbit/s signal.
FM stations can divide their datastream into sub-channels (e.g., 88.1 HD‑1, HD‑2, HD‑3) of varying audio quality. The multiple services are similar to the digital subchannels found in ATSC-compliant digital television using multiplexed broadcasting. For example, some top 40 stations have added hot AC and classic rock to their digital subchannels, to provide more variety to listeners. Stations may eventually go all-digital, thus allowing as many as three full-power channels and four low-power channels (seven total). Alternatively, they could broadcast one single channel at 300 kbit/s.
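As a rough illustration of this kind of payload budgeting (the 128 kbit/s total and the per-subchannel figures below are simplified assumptions; the actual NRSC-5 service modes only permit specific combinations):

```python
def allocate_subchannels(total_kbps=128, requested=None):
    """Divide a station's hybrid-mode digital payload among HD subchannels.
    The 128 kbit/s total and per-channel rates are illustrative only."""
    if requested is None:
        requested = {"HD-1": 64, "HD-2": 48, "HD-3": 16}   # kbit/s
    used = sum(requested.values())
    if used > total_kbps:
        raise ValueError(f"requested {used} kbit/s exceeds {total_kbps} kbit/s payload")
    return {**requested, "spare": total_kbps - used}

print(allocate_subchannels())
# {'HD-1': 64, 'HD-2': 48, 'HD-3': 16, 'spare': 0}
```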
FCC rules require that one channel be a simulcast of the analog signal so that when the primary digital stream cannot be decoded, a receiver can fall back to the analog signal. This requires synchronization of the two, with a significant delay added to the analog service. In some cases, particularly during tropospheric ducting events, an HD receiver will lock on to the digital stream of a distant station even though there is a much stronger local analog-only station on the same frequency. With no automatic identification of the station on the analog signal, there is no way for the receiver to recognize that there is no correlation between the two. The listener can possibly turn HD reception off (to listen to the local station, or avoid random flipping between the two stations), or listen to the distant stations and try to get a station ID.
Although the signals may be synchronized at the transmitter and reach the receiving equipment simultaneously, what the listener hears through an HD unit and an analog radio played together can be distinctly unsynchronized. This is because all analog receivers process analog signals faster than digital radios can process digital signals. The digital processing of analog signals in an HD Radio also delays them. The resulting unmistakable "reverb" or echo effect from playing digital and analog radios in the same room or house, tuned to the same station, can be annoying. It is more noticeable with simple voice transmission than with complex musical program content.
Stations can transmit HD through their existing antennas using a diplexer, as on AM, or are permitted by the FCC to use a separate antenna at the same general location, or at a site licensed as an analog auxiliary, provided it is within a certain distance and height referenced to the main analog signal. The limitation assures that the two transmissions have nearly the same broadcast range, and that they maintain the proper ratio of signal strength to each other so as not to cause destructive interference at any given location where they may be received.
Artist Experience
HD Radio supports a service called "Artist Experience" in which album art, logos, and other graphics are transmitted and can be displayed on the receiver. Album art and logos are displayed at the station's discretion, and require extra equipment. An HD Radio receiver manufacturer must pass iBiquity certification, which includes verifying that the artwork is displayed properly.
EAS alerts
Since 2016, newer HD Radios support Bluetooth and Emergency Alert System (EAS) alerts in which the transmission of traffic, weather alerts, AMBER, and security alerts can be displayed on the radio. As with "Artist Experience", emergency alerts are displayed at the station's discretion, and require extra equipment.
Bandwidth and power
FM stereo stations typically require up to 280 kilohertz of spectrum. The bandwidth of an FM signal is found by doubling the sum of the peak deviation (usually 75 kHz) and the highest baseband modulating frequency (around 60 kHz when RBDS is used). Only 15 kHz of the baseband bandwidth is used by analog monaural audio (baseband), with the remainder used for stereo, RBDS, paging, radio reading service, rental to other customers, or as a transmitter/studio link for in-house telemetry.
In (regular) hybrid mode a station has ±130 kHz of analog bandwidth. The primary main digital sidebands extend ±70 kHz on either side of the analog signal, thus taking a full 400 kHz of spectrum. In extended hybrid mode, the analog signal is restricted to ±100 kHz. Extended primary sidebands are added to the main primary sidebands using the extra ±30 kHz of spectrum created by restricting the analog signal. Extended hybrid provides up to approximately 50 kbit/s additional capacity. Any existing subcarrier services (usually at 92 kHz and 67 kHz) that must be shut down to use extended hybrid can be restored through use of digital subchannels. However, this requires the replacement of all related equipment both for the broadcasters and all of the receivers that use the services shifted to HD subchannels.
The ratio of power of the analog signal to the digital signal was initially standardized at 100:1 (−20 dBc), i.e., the digital signal power is 1% of the analog carrier power. This low power, plus the uniform, noise-like nature of the digital modulation, is what reduces its potential for co-channel interference with distant analog stations. Unlike with subcarriers, where the total baseband modulation is reduced, there is no reduction to the analog carrier power. The National Association of Broadcasters (NAB) requested a 10 dB (10×) increase in the digital signal from the FCC. This equates to an increase to 10% of the analog carrier power, but no decrease in the analog signal. This was shown to reduce analog coverage because of interference, but results in a dramatic improvement in digital coverage. Other levels were also tested, including a 6 dB or fourfold increase to 4% (−14 dBc or 25:1). National Public Radio was opposed to any increase because it is likely to increase interference to their member stations, particularly to their broadcast translators, which are secondary and therefore left unprotected from such interference. Other broadcasters are also opposed (or indifferent), since increasing power would require expensive changes in equipment for many, and the already-expensive system has so far given them no benefit.
There are still some concerns that HD FM will increase interference between different stations, even though HD Radio at the 10% power level fits within the FCC spectral mask. North American FM channels are spaced 200 kHz apart. An HD broadcast station will not generally cause interference to any analog station within its 1 mV/m service contour – the limit above which the FCC protects most stations. However, the IBOC signal resides within the analog signal of the immediately adjacent station(s). With the proposed power increase of 10 dB, the potential exists to cause the degradation of the second-adjacent analog signals within its 1 mV/m contour.
On 29 January 2010, the U.S. FCC approved a report and order to voluntarily increase the maximum digital effective radiated power (ERP) to 4% of analog ERP (−14 dBc), up from the previous maximum of 1% (−20 dBc). Individual stations may apply for up to 10% (−10 dBc) if they can prove it will not cause harmful interference to any other station. If at least six verified complaints of ongoing RF interference to another station come from locations within the other station's licensed service geographic region, the interfering station will be required to reduce to the next level down of 4%, 2% (−17 dB), or 1%, until the FCC finally determines that the interference has been satisfactorily reduced. The station to which the interference is caused bears the burden of proof and its associated expenses, rather than the station that causes the problem. For grandfathered FM stations, which are allowed to remain over the limit for their broadcast class, these numbers are relative to that lower limit rather than their actual power.
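The dBc figures used throughout this discussion convert to fractions of the analog carrier power by ordinary decibel arithmetic, which the following snippet checks; nothing here is specific to HD Radio:

```python
def dbc_to_percent(dbc):
    """Convert a digital-to-analog power ratio in dBc to percent of carrier power."""
    return 100.0 * 10.0 ** (dbc / 10.0)

for level in (-20, -17, -14, -10):
    print(f"{level} dBc = {dbc_to_percent(level):.1f}% of analog carrier power")
# -20 dBc = 1.0%, -17 dBc = 2.0%, -14 dBc = 4.0%, -10 dBc = 10.0%
```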
Comparison to other digital radio standards
HD versus DAB
Some countries have implemented Eureka-147 Digital Audio Broadcasting (DAB) or the newer DAB+ version. DAB broadcasts a single multiplex that is approximately 1.5 megahertz wide (≈1 megabit per second). That multiplex is then subdivided into multiple digital streams of between 9~12 programs (or stations). In contrast, HD FM requires 400 kHz bandwidth – compatible with the 200 kHz channel spacing traditionally used in the ITU Region 2 (including the United States) – with capability of 300 kbit/s in digital-only mode.
The first-generation DAB standard, which is gradually being phased out, uses the MPEG-1 Audio Layer II (MP2) audio codec, which has less efficient compression than newer codecs. The typical bitrate for DAB stereo programs is 128 kilobits per second or less, and as a result most radio stations on DAB have poorer sound quality than FM does under similar conditions. Many DAB stations also broadcast in mono. In contrast, DAB+ uses the newer AAC+ codec and HD FM uses a codec based upon the MPEG-4 HE-AAC standard.
Before DAB+ was introduced, DAB's inefficient compression led in some cases to "downgrading" stations from stereophonic to monaural, in order to include more channels in the limited 1 Mbit/s bandwidth.
Digital radio allows for more stations and less susceptibility for disturbances in the signal. In the United States, however, other than HD Radio, digital broadcast technologies, such as DAB+, have not been approved for use on either the VHF band II (FM) or medium wave band.
DAB better suits national broadcasting networks that provide several stations as is common in Europe, whereas HD is more appropriate for individual stations.
HD versus DRM
Digital Radio Mondiale (DRM 30) is a system designed primarily for shortwave, medium wave, and longwave broadcasting with compatible radios already available for sale. DRM 30 is similar to HD AM, in that each station is broadcast via channels spaced 10 kHz (or 9 kHz in some regions) on frequencies up to 30 MHz. The two standards also share the same basic modulation scheme (COFDM), and HD AM uses a proprietary codec. DRM 30 operates with xHE-AAC, historically with any of a number of codecs, including AAC, Opus, and HVXC. The receiver synchronization and data coding are quite different between HD AM and DRM 30. As of 2015 there are several radio chipsets available which can decode AM, FM, DAB, DRM 30 and DRM+, and HD AM and HD FM.
Similar to HD AM, DRM allows either hybrid digital–analog broadcasts or pure digital broadcasts; broadcasters can use multiple options:
Hybrid mode (digital/analog) - 10 kHz analog plus 5 kHz digital bandwidth allows 5–16 kbit/s data rate;
10 kHz digital-only bandwidth confined to ±5 kHz of the channel center allows 12–35 kbit/s;
20 kHz digital-only bandwidth using ±10 kHz (including half of the adjacent channels) allows 24–72 kbit/s.
On the medium wave, actual DRM bit rates vary depending on day versus night transmission (groundwave versus skywave) and the amount of bits dedicated for error correction (signal robustness).
Although DRM offers a growth path for AM broadcasters, it shares some issues with HD Radio on the AM band:
Shorter broadcast distance in hybrid mode compared to an analog AM signal
Interference with adjacent channels when using the 20 kHz mode, though in all-digital mode the signal fits inside the designated channel mask.
DRM+ is a different system based upon the same principles as HD Radio on the FM band, but it can be implemented in all the VHF bands (I, II, and III), either as a hybrid analog–digital or a digital-only broadcast. With 0.1 MHz of digital-only bandwidth it allows a 186.3 kbit/s data rate (compared with HD FM, which allows 300 kbit/s in 0.4 MHz).
Digital Radio Mondiale is an open standards system, albeit one that is subject to patents and licensing. HD Radio is based upon the intellectual property of iBiquity Digital Co. / Xperi Holding Co.
The United States uses DRM for HF / shortwave broadcasts.
Acceptance and criticism
Awareness and coverage
According to a survey dated 8 August 2007 by Bridge Ratings, when asked the question "Would you buy an HD Radio in the next two months?", only 1.0% of respondents answered "yes".
Some broadcast engineers have expressed concern over the new HD system. A survey conducted in September 2008 found that a small percentage of participants confused HD Radio with satellite radio.
Many first-generation HD Radios had insensitive receivers, which caused issues with sound quality. The HD Radio digital signal level is 10–20 dB below the analog signal power of the station's transmitter. In addition, commentators have noted that the analog sections of some receivers were inferior compared to older, analog-only models.
However, since 2012, HD capable receiver adoption has significantly increased in most newer cars, and several aftermarket radio systems both for vehicles and home use contain HD Radio receivers and special features such as Full Artist Experience. iBiquity reports that 78% of all radio listening is done on stations that broadcast in HD. There are an increasing number of stations switching to HD or adding subchannels compatible with digital radio, such as St. Cloud, Minnesota, where many local radio outlets find a growing number of listeners tuning in to their HD signals, which in turn has benefited sales.
Different format and compatibility standards
Even though DAB and DRM standards are open standards and predate HD Radio, HD receivers cannot be used to receive these stations when sold or moved overseas (with certain exceptions; there are HD stations in Sri Lanka, Thailand, Taiwan, Japan, Romania, and a few other countries).
DAB and DRM receivers cannot receive HD signals in the U.S. The HD system, which enables AM and FM stations to upgrade to digital without changing frequencies, is a different digital broadcasting standard. The lack of a common standard means that HD receivers cannot receive DAB or DRM broadcasts from other countries, and vice versa, and that manufacturers must develop separate products for different countries, which typically are not dual-format.
Whereas the Advanced Audio Coding (AAC) family of codecs are publicly documented standards, the HDC codec exists only within the HD system, and is an iBiquity trade secret.
Similarly, DAB and DRM are open specifications, while iBiquity's HD specification is partly open but mostly proprietary.
HD Radio does not use ATSC, the standard for digital television in the United States, and so fails to recover the former TV and FM radio compatibility enjoyed by TV channel 6 broadcasters. In the days of analog television, the lowest sliver of the FM broadcast band (87.7–87.9 MHz) overlapped with the FM audio carrier of U.S. analog television's channel 6; because the NTSC analog television standard used conventional analog FM to modulate the audio carrier, the audio of television stations that broadcast on channel 6 could be heard on most FM receivers. In earlier days of television and radio, several television stations exploited this overlap and operated as radio stations. Full-powered television stations were forced to cease their analog broadcasts in June 2009, and low-powered stations ceased analog broadcasts by July 2021. Because the digital television and all digital radio standards are incompatible, HD receivers are not able to receive digital TV signals on the 87.75 MHz frequency, eliminating the former dual-medium compatibility of channel 6 television stations. Current low-power ATSC 3.0 channel 6 stations that broadcast an audio carrier on 87.75 do not have HD Radio.
Reduced-quality concerns
Promotion for HD Radio often fails to make clear that some of its features are mutually exclusive. For example, the HD system has been described as "CD quality"; however, the HD system also allows multiplexing the data stream between two or more separate programs. A program using one half or less of the data stream does not attain the higher audio quality of a single program allotted the full data stream. The FCC has declared
"one free over-the-air digital stream [must be] of equal or greater quality than the station's existing analog signal".
If the FCC were to disallow analog simulcasting, each station would have over 300 kbit/s of bandwidth available, allowing good stereo quality or even surround sound audio together with multiple subchannels, and, to a lesser extent, more freedom for low-power personal FM transmitters used to pair modern smartphones, computers, and other devices with legacy analog FM receivers.
The broadcasting industry is seeking FCC approval on future HD receiver models, for conditional access; that is, enabling the extra subchannels to be available only by paid subscription. NDS has made a deal with iBiquity to provide HD Radio with an encrypted content-delivery system called "RadioGuard". NDS claims that RadioGuard will "provide additional revenue-generating possibilities".
Nearly all existing FM receivers tuned to a channel broadcasting an HD signal are prone to increased noise on the analog signal, called "HD Radio self-noise", due to analog demodulation of the digital signal(s). In some high-fidelity FM receivers in quality playback systems, this noise can be audible and irritating. Most existing FM receivers would require modifications to the internal filters, or the addition of a post-detection filter, to prevent degradation of the analog signal quality on stations broadcasting HD Radio.
Reduced analog signal
Radio stations are licensed in the United States to broadcast at a specific effective radiated power level. In 2008, NPR Labs did a study of predicted HD Radio operation if the digital power levels were increased to 10% of the maximum analog carrier power as is now allowed by the FCC under certain circumstances, and found the digital signal would increase RF interference on FM. However the boosted digital HD signal coverage would then exceed analog coverage, with 17% more population covered in vehicles but 17% less indoors.
High costs
The costs of installing the system, including fees, vary from station to station, according to the station's size and existing infrastructure. Typical costs are at least several tens of thousands of dollars at the outset plus per-channel annual fees (3% of the station's annual revenue) to be paid to Xperi for HD‑2 and HD‑3 (HD‑1 has no royalty charge). Large companies in larger media markets – such as iHeartRadio or Cumulus Broadcasting – can afford to implement the technology for their stations. However, community radio stations, both commercial and noncommercial, in many cases cannot afford the US$1,000 yearly Xperi fee assessed to LPFM stations. During mid-2010 a new generation of HD Radio broadcasting equipment was introduced, greatly lowering the startup costs of implementing the system.
HD Radio receivers cost anywhere from around US$50 to several hundred dollars, compared to regular FM radios which can sometimes even be found at dollar stores.
Although costs have historically been higher for HD hardware, as adoption has increased, prices have been reduced, and receivers containing HD Radio are becoming more commonplace – especially as more stations broadcast in HD format.
Power consumption
Conventional analog-only FM transmitters normally operate with "class C" amplifiers, which are efficient but not linear; HD Radio requires a different amplifier class. A class C amplifier can operate with overall transmitter efficiency higher than 70%. Digital transmitters operate in one of the other amplifier classes – one that is close to linear, and linearity lowers the efficiency. A modern hybrid HD FM transmitter typically achieves 50–60% efficiency, whereas an HD digital-only FM transmitter typically manages just 40–45%. The reduced efficiency causes significantly increased costs for electricity and for cooling.
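As a rough illustration of the operating-cost impact (the transmitter power, efficiency figures, electricity price and operating hours below are assumptions, not measured data for any station):

```python
def annual_energy_cost(tpo_kw, efficiency, price_per_kwh=0.12, hours=8760):
    """Yearly electricity cost for a transmitter with a given RF output power
    (TPO) and overall AC-to-RF efficiency. All inputs are illustrative."""
    input_kw = tpo_kw / efficiency
    return input_kw * hours * price_per_kwh

tpo = 10.0  # kW of RF output for a hypothetical station
for label, eff in (("analog class C", 0.70), ("hybrid HD", 0.55), ("all-digital HD", 0.42)):
    print(f"{label}: ${annual_energy_cost(tpo, eff):,.0f} per year")
# Lower efficiency means more input power, and hence higher electricity
# and cooling costs, for the same radiated power.
```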
Programming
Until 2013, the HD Digital Radio Alliance acted as a liaison for stations to choose unduplicated formats for the extra channels (HD‑2, HD‑3, etc.). Now, iBiquity works with the major owners of the stations to provide various additional choices for listeners, instead of having several stations independently deciding to create the same format. HD‑1 stations broadcast the same format as the regular FM (and some AM) stations, and many of these stations offer one, two, or even three subchannels (designated HD‑2, HD‑3, HD‑4) to complement their main programming.
iHeartRadio is selling programming of several different music genres to other competing stations, in addition to airing them on its own stations. Some stations are simulcasting their local AM or lower-power FM broadcasts on sister stations' HD‑2 or HD‑3 channels, such as KFNZ-FM in Kansas City simulcasting 610 AM KFNZ's programming on 96.5 FM‑HD2. It is common practice to broadcast an older, discontinued format on HD‑2 channels; for example, with the recent disappearance of the smooth jazz format from the analog radio dial in many markets, stations such as WDZH‑FM in Detroit, Michigan, (formerly WVMV), WFAN-FM in New York City, and WNWV-FM in Cleveland, Ohio, program smooth jazz on their HD‑2 or HD‑3 bands. Some HD‑2 or HD‑3 stations are even simulcasting sister AM stations. In St. Louis, for example, clear-channel KMOX‑AM (1120 kHz analog and HD) is simulcast on KEZK-FM 102.5 FM‑HD3. KBCO‑FM in Boulder, Colorado, uses its HD‑2 channel to broadcast exclusive live recordings from their private recording studio. CBS Radio is implementing plans to introduce its more popular superstations into distant markets (KROQ-FM into New York City, WFAN‑AM into Florida, and KFRG-FM and KSCF‑FM into Los Angeles) via HD‑2 and HD‑3 channels.
On 8 March 2009, CBS Radio inaugurated the first station with an HD4 subchannel, WJFK-FM in Washington, D.C., a sports radio station which also carries sister sports operations WJZ-FM from Baltimore; Philadelphia's WTEL‑AM and WIP-FM; and WFAN‑AM from New York. Since then numerous other channels have implemented HD‑4 subchannels as well, although with nearly 100% talk-based formats, because of the reduced audio quality. For example, KKLQ‑FM in Los Angeles operates an HD‑4 signal and aired The Mormon Channel which was 99% talk.
Public broadcasters are also embracing HD Radio. Minnesota Public Radio offers a few services: KNOW-FM, the MPR News station in the Twin Cities, offers music service Radio Heartland on 91.1 FM‑HD2 and additional news programming called "BBC News and More" on 91.1 FM‑HD3; KSJN-FM, the classical MPR station in the Twin Cities, provides "Classical 24" service on 99.5 FM‑HD2; and KCMP-FM, on 89.3 FM in the Twin Cities, offers "Wonderground Radio", music for kids and their parents, on 89.3 FM‑HD2.
KPCC‑FM (Southern California Public Radio), heard on 89.3 FM in Los Angeles, offers a digital simulcast of its analog channel on 89.3 FM‑HD1 and MPR's music service KCMP-FM on 89.3 FM‑HD2 in Los Angeles.
New York Public Radio in New York City, WNYC (AM) and WNYC-FM, (d.b.a. WNYC) re-broadcasts a locally programmed, all-classical service from WQXR-FM called "Q2", on 93.9 FM‑HD2. The service launched in March 2006. On 8 October 2009, the format was moved to WQXR‑HD2 (WXNY-FM) on 105.9 FM when WQXR-FM was acquired by New York Public Radio as part of a frequency swap with Univision Radio for their former frequency. The programming on the WNYC-FM‑HD2 channel now is a rebroadcast of WQXR-FM, in order to give full coverage of WQXR-FM programming in some form, as the 105.9 FM signal is weaker, and does not cover the whole area.
WMIL-FM in Milwaukee has offered an audio simulcast of Fox affiliate WITI‑TV on their HD‑3 subchannel since August 2009 as part of a news and weather content agreement between iHeartRadio and WITI‑TV. This restored WITI‑TV's audio to the Milwaukee radio dial after a two-month break, following the digital transition; as a channel 6 analog television station WITI‑TV exploited the 87.7 FM audio quirk as an advantage, in order to allow viewers to hear the station's newscasts and Fox programming on their car radios.
KYXY‑FM, operated by CBS in San Diego on 96.5 FM, offers its HD‑2 channel as one of the few "subchannel only" independent Christian music formats on HD Radio. Branded as "The Crossing", the subchannel is operated by Azusa Pacific University.
College radio has also been affected by HD Radio; stations such as WBJB-FM, a public station on a college campus, offer a student-run station as one of their multicast channels. WKNC-FM in Raleigh, NC, runs college radio programming on HD‑1 and HD‑2, and electronic dance music on WolfBytes Radio on WKNC-FM‑HD3.
Some commercial broadcasters also use their HD‑2 channels to broadcast the programming of noncommercial broadcasters. Bonneville International uses its HD‑2 and HD‑3 channels to broadcast Mormon Channel which is entirely noncommercial and operates solely as a public service from Bonneville's owner, The Church of Jesus Christ of Latter-day Saints. That network of eight HD‑2 and HD‑3 stations was launched on 18 May 2009 and was fully functional within two weeks. Also, in Detroit, WMXD-FM, an urban adult contemporary station, airs the contemporary Christian K-Love format on its HD‑2 band (the HD‑2 also feeds several analog translators around the metropolitan area – see below), due to an agreement between iHeartMedia and K-Love owner Educational Media Foundation (EMF), allowing EMF to program WMXD-FM's HD‑2 channel. On a similar note, Los Angeles' KRRL 92.3 FM‑HD3 signal rebroadcasts EMF's Air1, and in Santa Barbara KLSB 97.5 FM airs K-Love on its primary frequency, and rebroadcasts Air1 on HD‑2 (though neither supports "Artist Experience"). In St. Louis, Missouri, WFUN 96.3-HD2 rebroadcasts K297BI for the classical music station Classic 107.3.
In July 2018, as part of a projected one year experiment, WWFD‑AM in Frederick, Maryland, became the first AM station to eliminate its analog transmissions and broadcast exclusively in digital.
Translators
Although broadcast translators are prohibited from originating their own programming, the FCC has controversially allowed translator stations to rebroadcast in standard analog FM the audio of an HD Radio channel of the primary station the translator is assigned to. This also allows station owners, who already usually own multiple stations locally and nationally, to avoid the rulemaking process of changing the table of allotments as would be needed to get a new separately-licensed station, and to avoid exceeding controlling-interest caps intended to prevent the excessive concentration of media ownership. Such new translator stations can block new LPFM stations from going on the air in the same footprint. Translator stations are allowed greater broadcast range (via less restrictive height and power limitations) than locally originated LPFM stations, so they may occupy a footprint in which several LPFMs might have been licensed otherwise.
In addition to the controversial practice of converting the HD-only secondary radio channels of a primary station into analog FM in areas where the primary station's signal can already readily be received, translators can also be used in a more traditional manner to extend the range of the full content of the primary station, including the unmodified main signal and any HD Radio sub-channels, in areas where the station has poor coverage or reception, as is done via the remote transmitter K202BD in Manti, Utah, which rebroadcasts both the analog and digital signals of KUER-FM from Salt Lake City.
In order to do this, HD Radio may be passed along from the main station via a "bent pipe" setup, where the translator simply makes a frequency shift of the entire channel, often by simple heterodyning. This may require an increase in bandwidth in both the amplifier and radio antenna if either is too narrowband to pass the wider HD Radio signal, meaning one or both might have to be replaced. Baseband translators which use a separate receiver and transmitter require an HD Radio transmitter, just as does the main station. Translators are not required to transmit an HD Radio digital signal, and the vast majority of existing translators which repeat FM stations running hybrid HD signals do not repeat the HD part of the radio broadcast, due to technical limitations in equipment designed before the advent of HD Radio technology.
Receivers
Automotive and home/professional
By 2012, several HD receivers were available on the market; a basic model cost around US$50.
Automotive HD receivers are available from several aftermarket manufacturers, and most car manufacturers offer HD receivers as part of audio packages in new cars. Home and office listening equipment is available from a number of companies in both component receiver and tabletop models.
Portable
Initially, portable HD receivers were not available because the early chipsets were too large for a small enclosure and/or needed too much power to be practical for a battery-operated device. However, in January 2008 at the Consumer Electronics Show (CES) in Las Vegas, iBiquity unveiled a prototype of a new portable receiver, roughly the size of a cigarette pack. Two companies made low-power chipsets for HD receivers:
Samsung
SiPort
At least five companies made portable HD receivers:
Coby Electronics Corporation produced the first HD portable – the Coby HDR‑700 portable HD receiver for both AM and FM.
Griffin Technology produced an HD receiver designed to plug into the dock connector of an Apple iPod, or iPhone, with tuning functionality provided via software through the device's multi-touch display. This product was discontinued.
Best Buy started selling the Insignia NS‑HD01, a house brand portable unit on 12 July 2009. It was the second portable HD receiver to come to the general market and featured FM‑only playback and a non-removable rechargeable battery which charges via mini USB. The Insignia unit sold in 2009 for around US$50 – the least expensive receiver available. Best Buy discontinued the NS‑HD01 model by September 2019 in favor of the NS-HD02.
Microsoft released the Zune HD on 15 September 2009. It included an HD receiver embedded in the media device. The Zune HD was discontinued in November 2011 in favor of Windows Phones.
Sangean Electronics produces multiple portable HD radios with AM and FM reception, like the Sangean HDR-14 portable receiver.
By 2012, iBiquity was trying to get HDR chipsets into mobile phones.
Footnotes
References
External links
— IBOC general information page
— Site with standards documents for the NRSC formats of HD Radio
— information on an early IBOC installation
— editorial discusses marketing challenges for HD Radio
— audio samples of actual HD broadcasts
Xperi
Digital radio
Broadcast engineering
Articles containing video clips
American inventions
2002 introductions
2002 establishments in the United States | HD Radio | [
"Engineering"
] | 8,706 | [
"Broadcast engineering",
"Electronic engineering"
] |
1,016,885 | https://en.wikipedia.org/wiki/European%20Terrestrial%20Reference%20System%201989 | The European Terrestrial Reference System 1989 (ETRS89) is an ECEF (Earth-Centered, Earth-Fixed) geodetic Cartesian reference frame, in which the Eurasian Plate as a whole is static. The coordinates and maps in Europe based on ETRS89 are not subject to change due to the continental drift.
The development of ETRS89 is related to the global ITRS geodetic datum, in which the representation of the continental drift is balanced in such a way that the total apparent angular momentum of continental plates is about 0. ETRS89 was officially born at the 1990 Florence meeting of EUREF, following its Resolution 1, which recommends that the terrestrial reference system to be adopted by EUREF will be coincident with ITRS at the epoch 1989.0 and fixed to the stable part of the Eurasian Plate. According to the resolution, this system was named European Terrestrial Reference System 89 (ETRS89). Since then, ETRS89 and ITRS have diverged, owing to continental drift, at a rate of about 2.5 cm per year; by the year 2000 the two coordinate systems differed by about 25 cm.
The 89 in its name does not refer to the year of solution (realization), but rather the year of initial definition, when ETRS89 was fully equivalent to ITRS. The solutions of ETRS89 correspond to the ITRS solutions. For each ITRS solution, a matching ETRS89 solution is being made. ETRF2000, for example, is an ETRS89 solution, which corresponds to ITRF2000. ETRS89 is realized by EUREF through the maintenance of the EUREF Permanent Network (EPN) and continuous processing of the EPN data in a few processing centres. Users have access to ETRS89 via EPN data products and real-time streams of differential corrections from a set of public providers based on the EPN stations.
The transformation from ETRS89 to ITRS is time-dependent and was formulated by C. Boucher, Z. Altamimi, and X. Collilieux.
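A minimal sketch of such a time-dependent frame transformation is given below, assuming the usual EUREF structure (a constant translation plus a rotation that grows linearly with time since the 1989.0 epoch). The function name and the numeric parameters in the example call are placeholders for illustration only; real values must be taken from the official EUREF/IERS transformation tables for the specific ITRF solution in use.

```python
import numpy as np

def itrf_to_etrs89(x_itrf, t, translation_m, rot_rates_rad_per_yr, epoch=1989.0):
    """Approximate ITRF -> ETRS89 conversion of a Cartesian position (metres).

    x_itrf: ITRF position (3-vector) at observation epoch t (decimal years).
    translation_m: frame offset T (3-vector, metres) -- placeholder values here.
    rot_rates_rad_per_yr: rotation rates (R1dot, R2dot, R3dot) in radians/year.
    """
    x = np.asarray(x_itrf, dtype=float)
    r1, r2, r3 = rot_rates_rad_per_yr
    # Skew-symmetric matrix of the rotation accumulated since the 1989.0 epoch
    R = np.array([[0.0, -r3,  r2],
                  [ r3, 0.0, -r1],
                  [-r2,  r1, 0.0]]) * (t - epoch)
    return x + np.asarray(translation_m, dtype=float) + R @ x

# Illustrative call with invented (non-official) parameters:
p = itrf_to_etrs89([4027894.0, 307045.0, 4919499.0], t=2020.0,
                   translation_m=[0.054, 0.051, -0.048],
                   rot_rates_rad_per_yr=[1.0e-9, 2.5e-9, -3.0e-9])
```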
ETRS89 is the EU-recommended frame of reference for geodata for Europe. It is the only geodetic datum to be used for mapping and surveying purposes in Europe.
It plays the same role for Europe as NAD-83 for North America. (NAD-83 is a datum in which the North American Plate as a whole is static, and which is used for mapping and surveying in the US, Canada, and Mexico.) ETRS89 and NAD-83 are based on the GRS80 ellipsoid. WGS84 originally used the GRS80 reference ellipsoid, but has undergone some minor refinements in later editions since its initial publication.
See also
WGS 84
ED50
AFREF
Degrees minutes seconds (DMS)
Geodetic datum
KMZ file, a zipped KML file
Latitude
Longitude
Universal Transverse Mercator coordinate system (UTM)
World Geodetic System
References
External links
ETRS89 site (Spanish)
ETRS89 site (French)
ETRS89 site (English)
ETRS89 site (Portuguese)
Information from the Ordnance Survey of Ireland
UTM to Google maps (and others) interface.
Geodetic datums | European Terrestrial Reference System 1989 | [
"Mathematics"
] | 700 | [
"Geodetic datums",
"Coordinate systems"
] |
1,017,002 | https://en.wikipedia.org/wiki/Upper%20topology | In mathematics, the upper topology on a partially ordered set X is the coarsest topology in which the closure of a singleton {a} is the order section a↓ = {x : x ≤ a} for each a ∈ X. If ≤ is a partial order, the upper topology is the least order-consistent topology in which all open sets are up-sets. However, not all up-sets must necessarily be open sets. The lower topology induced by the preorder is defined similarly in terms of the down-sets. The preorder inducing the upper topology is its specialization preorder, but the specialization preorder of the lower topology is opposite to the inducing preorder.
The real upper topology is most naturally defined on the upper-extended real line (−∞, +∞] by the system {(a, +∞] : a ∈ [−∞, +∞]} of open sets. Similarly, the real lower topology {[−∞, a) : a ∈ [−∞, +∞]} is naturally defined on the lower real line [−∞, +∞). A real function f : X → [−∞, +∞) on a topological space X is upper semi-continuous if and only if it is lower-continuous, i.e. continuous with respect to the lower topology on the lower-extended line [−∞, +∞). Similarly, a function f : X → (−∞, +∞] into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. continuous with respect to the upper topology on (−∞, +∞].
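Concretely, restating the above in terms of preimages: a function f : X → [−∞, +∞) is upper semi-continuous exactly when the sublevel set {x ∈ X : f(x) < a} = f⁻¹([−∞, a)) is open in X for every real a; since the sets [−∞, a) are precisely the open sets of the real lower topology, this is the same as continuity of f with respect to the lower topology (and dually for lower semi-continuity and the upper topology).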
See also
References
General topology
Order theory | Upper topology | [
"Mathematics"
] | 247 | [
"General topology",
"Order theory",
"Topology",
"Topology stubs"
] |
1,017,009 | https://en.wikipedia.org/wiki/Surat%20Shabd%20Yoga | Surat Shabd Simran is a type of spiritual meditation in the Sant Mat tradition.
Etymology
Surat is "attention" or "face", that is, an outward expression of the soul; Shabd or Shabda has multiple meanings including ‘sacred song’, ‘word’, ‘voice’, ‘hymn’, ‘verse’, ‘sound current’, ‘audible life stream’, and the ‘essence of the Absolute Supreme Being’. The Absolute Supreme Being is a dynamic force of creative energy sent out into the abyss of space at the dawn of the universe's manifestation, as sound vibrations. These vibrations continue and are sent forth through the ages, framing all things that constitute and inhabit the universe. Yoga is literally ‘union’, or ‘to yoke’. Etymologically, Surat Shabd Yoga means the ‘Union of the Soul with the Essence of the Absolute Supreme Being’.
First is simran, or the repetition of Lord's holy names. It brings back our scattered attention to the tisra til - the third eye (behind our eyes), which is the headquarters of our mind and soul, in the waking state, whence it has scattered.
Second is dhyan, or contemplation on the immortal form of the Master. This helps in keeping the attention fixed at that centre.
Third is bhajan, or listening to the Anahad Shabd or celestial music that is constantly reverberating within us. With the help of this divine melody, the soul ascends to higher regions and ultimately reaches the feet of the Lord.
Surat Shabda Yoga is also known as Sehaj Yoga – the path leading to Sehaj or equipoise, The Path of Light and Sound, The Path of the Sants or 'Saints', The Journey of Soul, and The Yoga of the Sound Current.
Basic principles
Surat Shabda Yoga is for the discovery of True Self (Self-Realization), True Essence (Spirit-Realization), and True Divinity (God-Realization) while living in the human physical body. This involves reuniting in stages with what is called the "Essence of the Absolute Supreme Being", also known as the "Shabd or Word". Attaining this extent of self-realization is believed to result in jivan moksha/mukti, which is liberation/release from samsara and positivity in the cycle of karma and reincarnation. Initiation by a contemporary living Satguru (Sat - true, Guru - teacher) is considered a prerequisite for successful sadhana (spiritual exercises). The sadhanas include simran (repetition, particularly silent repetition of a mantra given at initiation), dhyan (concentration, viewing, or contemplation, particularly on the Inner Master), and bhajan (listening to the inner sounds of the Shabd). The mantra is called the Guru Manter: words received from a living master and repeated in the mind, not with the mouth. As language varies from culture to culture, the Guru Manter or Mantra may also vary from culture to culture. The Naam, Word of God, or Kalma, on the other hand, is unspeakable and alive, and cannot be written on paper. In the New Testament, Hebrews 4:12 states, "For the Word of God is living and active. Sharper than any double-edged sword, it pierces even to dividing soul and spirit, joints and marrow. It judges the thoughts and intentions of the heart." Words written on paper are not alive and active. According to Scripture, the Word of God has existed since the beginning of the world, prior to any man-made language.
Surat Shabd Yoga arose in India in the last several hundred years, specifically in the Sikh tradition (Nanakpanthi) founded by Guru Nanak. The practice of meditation (Shabad), which is the central core practice of Surat Shabd Yoga, is derived from the ancient Hindu practice of nāda yoga. Nada yoga is expounded in various Hindu scriptures such as the Nadabindu Upanishad, an ancient text affiliated with the several thousands-year-old Rig Veda. The practice of nāda yoga within Hinduism has been widely affiliated within many yoga traditions including bhakti or devotional yoga, kundalini and tantric yogas, laya yoga, and raja yoga. Modern Hindu teachers still emphasizing nada yoga include Swami Sivananda, Swami Rama, Rammurti Mishra (Shri Brahmananda Sarasvati), Paramahansa Yogananda (Kriya Yoga Lineage), and many others. The practice of nāda yoga is an integral part of various other traditions as well, such as being a form of the advanced thogal practice in the Tibetan Dzogchen lineage, and is mentioned by H. P. Blavatsky, founder of the Theosophical Society in her book "The Voice of the Silence". The form of Surat Shabd Yoga, practiced by followers of Sant Mat and the Sikh tradition (nanakpanthi), is most commonly related to nāda yoga. Furthermore, nāda yoga resembles and combines elements from the Hindu practices of raja yoga, laya yoga, and bhakti yoga.
Movements and masters
Adherents believe Surat Shabda Yoga has been expressed through the movements of many different masters. However, a basic principle of Surat Shabd Yoga's tradition is the requirement for an outer Living Master to initiate followers onto the Path. The movements whose historical Satgurus have died and their successors do not purport themselves to be Surat Shabd Yoga Satgurus, usually are not considered currently to be Surat Shabd Yoga movements, either by their own leaders or by movements with current Living Masters.
Satguru Maharshi Mehi Paramahansa Ji Maharaj is considered the movement leader in the 20th century. He came to an isolated cave of Kuppaghat, Bhagalpur (Bihar, India) and practiced Surat-Shabda Yoga from March 1933 - November 1934. He achieved self-realization and attained ultimate salvation during his practice. He wrote books "Moksha-Darshan (Philosophy of Salvation), "Satsang-Yoga", "Shri Gita-Yoga Prakash", "Raamcharit Maanas Saar-Satik", "Maharshi-Mehi-Padaawali" and "Maharshi-Mehi-Padawaali".
The Radha Soami movement of Surat Shabda Yoga was established by Shiv Dayal Singh (1818–1878) in 1861 and named "Radhasoami Satsang" circa 1866. Soamiji Maharaj, as he was known, presided over the satsang meetings for seventeen years at Panni Gali and Soami Bagh in Agra, India, until he died on June 15, 1878. Accounts of his guru and successors vary, although he gave verbal instructions on his last day as to how his followers should be cared for. According to Radha Soami Satsang Beas, his guru was Tulsi Sahib of Hathras. According to the successors Soami Bagh and Dayal Bagh, Tulsi Sahib was a contemporary guru of the same teachings; but being a natural born Satguru, Seth Shiv Dayal Singh Ji himself had no guru.
After his death, six immediate successors carried on Shiv Dayal Singh’s teachings, including Huzur Maharaj Rai Salig Ram of Peepal Mandi, Agra, and Babaji Maharaj Baba Jaimal Singh of Radha Soami Satsang Beas (RSSB). More information on living masters related to Seth Shiv Dayal Singh Ji's lineage can be found in the Contemporary Sant Mat movements article.
Sant Kirpal Singh, a contemporary Sant Mat guru, stated that "Naam" ("Word") has been described in many traditions through the use of several different terms. In his teachings, the following expressions are interpreted as being identical to "Naam":
"Naad", "Akash Bani", and "Sruti" in the Vedas
"Nada" and "Udgit" in the Upanishads
"Logos" and "Word" in the New Testament
"Tao" by Lao Zi
"Music of the Spheres" by Pythagoras
"Sraosha" by Zoraster
"Kalma" and "Kalam-i-Qadim" in the Qur'an
"Naam", "Akhand Kirtan", and "Sacha ('True') Shabd" by Guru Granth Sahib
The more recently promulgated Quan Yin Method of meditation espoused via the spiritual teachings of Supreme Master Ching Hai has notable similarities to Surat Shabd Yoga.
Eckankar, an American movement, has many links to Surat Shabd Yoga including terminology, although its American founder Paul Twitchell disassociated himself from his former teacher Kirpal Singh.
The Movement of Spiritual Inner Awareness, also founded in America in 1971 by John-Roger and now with students in thirty-two countries, also teaches a similar form of active meditation called spiritual exercises. This movement uses the Sound Current and ancient Sanskrit tones in order to traverse and return to the higher realms of Spirit and into God.
The MasterPath is another contemporary American movement of Surat Shabda Yoga. Gary Olsen, the current Living Master of this branch, contends that several historical figures are Sat Gurus of Surat Shabda Yoga as representatives for the eternal Inner Shabda Master. A few of these Living Masters of their times include Laozi, Jesus, Pythagoras, Socrates, Kabir, the Sufi Masters, and mystic poets, Hafez and Rumi, the Ten Sikh Gurus beginning with Guru Nanak, Tulsi Sahib, and the Radhasoami/Radha Soami and offshoot Masters, including Seth Shiv Dayal Singh Ji, Baba Sawan Singh, Baba Faqir Chand, and Sant Kirpal Singh.
The ten Sikh Satgurus (Nanak Panthis) discuss the inner sound and inner light extensively in their scriptures. The first Sikh Satguru was Guru Nanak, but his master (guru) was Waheguru. These masters teach these two techniques. Another master, Satpal Maharaj, teaches four techniques that include these two of inner light and inner music: altogether he teaches inner light (sight), inner music (hearing), primordial vibration (sense of touch), and nectar (taste and smell).
These correspond to the five senses, and this is how a student turns them inward to experience what is within himself. Vishnu is depicted with four arms, and the objects he holds correspond to these techniques: one hand holds a circle (chakra) of light, one holds a conch shell for the inner sound (held to the ear, a sound is heard), one holds a lotus flower referring to the nectar, and the fourth holds a metal club (mace) for the inner vibration (when it strikes something, it vibrates like a tuning fork). Some people refer to this inner energy as the soul.
The line of succession from Kabir to the present
From Kabir to the present, the Masters of the Divine Science of Light and Sound appear as an uninterrupted line, through which each transmitted to his successor the one knowledge and the one Power. Subsequently, an exact "family tree" can be traced from Kabir Sahib (1398-1518) to Sri Guru Nanak Dev Ji (1469-1539) and further, as shown below:
ਹੁਕਮੁ ਪਛਾਨੈ ਸੁ ਏਕੋ ਜਾਨੈ ਬੰਦਾ ਕਹੀਐ ਸੋਈ॥੩॥ (GGSG-Page No.1350-3)...Saint Kabir
Translation: One who will recognize the Command (Hukam), will know One Lord, we call him the real man.
ਸਚਾ ਸਉਦਾ ਹਰਿ ਨਾਮੁ ਹੈ ਸਚਾ ਵਾਪਾਰਾ ਰਾਮ॥ (GGSG-Page No.570)
Translation: The true merchandise (ਸਚਾ ਸਉਦਾ) is the Lord's Naam. Trader Raam is the truth.
The above line in the Guru Granth Sahib Ji indicates that during the "true merchandise" (ਸਚਾ ਸਉਦਾ), when Guru Nanak Dev Ji fed the hungry saints, he received the Naam from them. In written histories of Guru Nanak Dev Ji, however, no information is found regarding this true merchandise of the Word or Naam of God. The line also indicates that Guru Nanak Dev Ji interacted with a living Satguru, from whom he received the Naam.
ਏਕੋ ਨਾਮੁ ਹੁਕਮੁ ਹੈ ਨਾਨਕ ਸਿਤਗੁਿਰ ਦੀਆ ਬੁਝਾਇ ਜੀਉ॥੫॥ (GGSG-Page No.72) ..Guru Nanak
Translation: The One "Naam" is the Lord's Command, O Nanak, the "True Guru(teacher)" helped me to guess the lamp (light).
(Here One Naam means One Word, and Hukam also addresses the Naam of God)
ਦੇਹੀ ਅੰਦਰਿ ਨਾਮੁ ਨਿਵਾਸੀ॥ (GGSG-Page No.1026)
Translation: The Naam or Word or Kalma, abides deep within the body.
This line demonstrates that the Naam of God is present within the body. One will find it within one's own home, i.e. the body, where the spirit lives; the body is the current home of the spirit. After Guru Nanak Dev Ji, the nine Satgurus (i.e. teachers of truth) of the Sikhs who succeeded him and led the Sikh community are listed below:
Sri Guru Angad Dev Ji (1504-1552),
Sri Guru Amar Das Ji (1479-1574),
Sri Guru Ram Das Ji (1534-1581),
Sri Guru Arjan Dev Ji (1563-1606),
Sri Guru Har Gobind Sahib Ji (1595-1644),
Sri Guru Har Rai Sahib Ji (1630-1661),
Sri Guru Har Krishan Ji (1656-1664),
Sri Guru Teg Bahadur Ji (1621-1675)
and Sri Guru Gobind Singh Ji (1667-1708).
Variations in movements
Among the exponents of Surat Shabd Yoga and the commonly shared elements related to the basic principles, notable variations also exist. For example, the followers of the orthodox Sikh faith no longer lay emphasis on a contemporary living guru. Different Surat Shabd Yoga paths will vary in the names used to describe the Absolute Supreme Being (God), including Anami Purush (nameless power) and Radha Soami (lord of the soul); the presiding deities and divisions of the macrocosm; the number of outer initiations; the words given as mantras; and the initiation vows or the prerequisites that must be agreed to before being accepted as an initiate.
Notes and references
External links
Website of Ruhani Satsang USA
Website of Sant Kirpal Singh Unity of Man
Website of School of Spirituality
Creation myths
Meditation
Sant Mat | Surat Shabd Yoga | [
"Astronomy"
] | 3,153 | [
"Cosmogony",
"Creation myths"
] |
1,017,074 | https://en.wikipedia.org/wiki/The%20Frankenfood%20Myth | The Frankenfood Myth: How Protest and Politics Threaten the Biotech Revolution is a book written by Hoover Institution research fellow Henry I. Miller and political scientist Gregory Conko, published by Praeger Publishers in 2004. In it, Miller and Conko argue against over-regulation of genetically modified food; the book features a foreword by Nobel Peace Prize winner Norman Ernest Borlaug.
In an interview, Conko described Frankenfood Myth as follows:
"It's not a point-by-point refutation of all the misconceptions that are being spread about agricultural biotechnology. The primary mess that we tackle has to do with an attitude that is being spread by both opponents of biotechnology and by a lot of its supporters that it is somehow uniquely risky and therefore should be subject to special caution and special regulatory oversight."
A Barron's reviewer wrote:
"The heated debate over so-called Frankenfoods is not only about the pros and cons of genetically manipulating crops to improve their nutritional value and resistance to disease; it also concerns intellectual honesty. For years, activists opposed to the new science have been spreading unfounded and inaccurate horror stories, threatening to derail progress vitally needed to feed the world. the Frankenfood Myth by Henry Miller and Gregory Conko takes a long, hard look at both the new agricultural biotechnology and the policy debate surrounding it."
See also
Genetically modified food controversies
References
2004 in the environment
Current affairs books
Genetic engineering and agriculture | The Frankenfood Myth | [
"Engineering",
"Biology"
] | 302 | [
"Genetic engineering and agriculture",
"Genetic engineering"
] |
1,017,131 | https://en.wikipedia.org/wiki/Retina%20bipolar%20cell | As a part of the retina, bipolar cells exist between photoreceptors (rod cells and cone cells) and ganglion cells. They act, directly or indirectly, to transmit signals from the photoreceptors to the ganglion cells.
Structure
Bipolar cells are so-named as they have a central body from which two sets of processes arise. They can synapse with either rods or cones (rod/cone mixed input BCs have been found in teleost fish but not mammals), and they also accept synapses from horizontal cells. The bipolar cells then transmit the signals from the photoreceptors or the horizontal cells, and pass it on to the ganglion cells directly or indirectly (via amacrine cells). Unlike most neurons, bipolar cells communicate via graded potentials, rather than action potentials.
Function
Bipolar cells receive synaptic input from either rods or cones, or from both rods and cones, though they are generally designated rod bipolar or cone bipolar cells. There are roughly 10 distinct forms of cone bipolar cells but only one type of rod bipolar cell, because the rod receptor arose later in evolutionary history than the cone receptor.
In the dark, a photoreceptor (rod/cone) cell releases glutamate, which inhibits (hyperpolarizes) the ON bipolar cells and excites (depolarizes) the OFF bipolar cells. In the light, however, light striking the photoreceptor cell causes it to be inhibited (hyperpolarized): activated opsins activate G-proteins, which activate phosphodiesterase (PDE), which cleaves cGMP into 5'-GMP. In photoreceptor cells there is an abundance of cGMP in dark conditions, keeping cGMP-gated Na channels open; activating PDE therefore diminishes the supply of cGMP, reducing the number of open Na channels and thus hyperpolarizing the photoreceptor cell, so that less glutamate is released. This causes the ON bipolar cell to lose its inhibition and become active (depolarized), while the OFF bipolar cell loses its excitation (becomes hyperpolarized) and falls silent.
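The sign relationships described above can be summarized in a small toy model (purely illustrative logic, not a biophysical simulation; the function and key names are invented for this sketch):

```python
def bipolar_response(light_on):
    """Toy truth table of the ON/OFF sign inversion described in the text."""
    glutamate_high = not light_on            # photoreceptors release glutamate in the dark
    return {
        "ON bipolar depolarized": not glutamate_high,   # glutamate inhibits ON cells (sign-inverting)
        "OFF bipolar depolarized": glutamate_high,      # glutamate excites OFF cells (sign-preserving)
    }

print(bipolar_response(light_on=False))  # dark:  ON silent, OFF active
print(bipolar_response(light_on=True))   # light: ON active, OFF silent
```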
Rod bipolar cells do not synapse directly onto ganglion cells. Instead, rod bipolar cells synapse onto a retinal amacrine cell, which in turn excites cone ON bipolar cells (via gap junctions) and inhibits cone OFF bipolar cells (via glycine-mediated inhibitory synapses), thereby using the cone pathway to send signals to ganglion cells under scotopic (low) ambient light conditions.
OFF bipolar cells synapse in the outer layer of the inner plexiform layer of the retina, and ON bipolar cells terminate in the inner layer of the inner plexiform layer.
Signal transmission
Bipolar cells effectively transfer information from rods and cones to ganglion cells. The horizontal cells and the amacrine cells complicate matters somewhat. The horizontal cells introduce lateral inhibition to the dendrites and give rise to the center-surround inhibition which is apparent in retinal receptive fields. The amacrine cells also introduce lateral inhibition to the axon terminal, serving various visual functions including efficient signal transduction with high signal-to-noise ratio.
The mechanism for producing the center of a bipolar cell's receptive field is well known: direct innervation of the photoreceptor cell above it, either through a metabotropic (ON) or ionotropic (OFF) receptor. However, the mechanism for producing the monochromatic surround of the same receptive field is under investigation. While it is known that an important cell in the process is the horizontal cell, the exact sequence of receptors and molecules is unknown.
See also
Amacrine cell
Retinal ganglion cell
Notes
References
External links
Diagram at mcgill.ca
NIF Search - Retinal Bipolar Cell via the Neuroscience Information Framework
Human eye anatomy
Histology
Human cells
Neurons | Retina bipolar cell | [
"Chemistry"
] | 846 | [
"Histology",
"Microscopy"
] |
1,017,427 | https://en.wikipedia.org/wiki/Prodrug | A prodrug is a pharmacologically inactive medication or compound that, after intake, is metabolized (i.e., converted within the body) into a pharmacologically active drug. Instead of administering a drug directly, a corresponding prodrug can be used to improve how the drug is absorbed, distributed, metabolized, and excreted (ADME).
Prodrugs are often designed to improve bioavailability when a drug itself is poorly absorbed from the gastrointestinal tract. A prodrug may be used to improve how selectively the drug interacts with cells or processes that are not its intended target. This reduces adverse or unintended effects of a drug, especially important in treatments like chemotherapy, which can have severe unintended and undesirable side effects.
History
Many herbal extracts historically used in medicine contain glycosides (sugar derivatives) of the active agent, which are hydrolyzed in the intestines to release the active and more bioavailable aglycone. For example, salicin is a β-D-glucopyranoside that is cleaved by esterases to release salicylic acid. Aspirin, acetylsalicylic acid, first made by Felix Hoffmann at Bayer in 1897, is a synthetic prodrug of salicylic acid. However, in other cases, such as codeine and morphine, the administered drug is enzymatically activated to form sugar derivatives (morphine-glucuronides) that are more active than the parent compound.
The first synthetic antimicrobial drug, arsphenamine, discovered in 1909 by Sahachiro Hata in the laboratory of Paul Ehrlich, is not toxic to bacteria until it has been converted to an active form by the body. Likewise, prontosil, the first sulfa drug (discovered by Gerhard Domagk in 1932), must be cleaved in the body to release the active molecule, sulfanilamide. Since that time, many other examples have been identified.
Terfenadine, the first non-sedating antihistamine, had to be withdrawn from the market because of the small risk of a serious side effect. However, terfenadine was discovered to be the prodrug of the active molecule, fexofenadine, which does not carry the same risks as the parent compound. Therefore, fexofenadine could be placed on the market as a safe replacement for the original drug.
Loratadine, another non-sedating antihistamine, is the prodrug of desloratadine, which is largely responsible for the antihistaminergic effects of the parent compound. However, in this case the parent compound does not have the side effects associated with terfenadine, and so both loratadine and its active metabolite, desloratadine, are currently marketed.
Recent prodrugs
Approximately 10% of all marketed drugs worldwide can be considered prodrugs. Since 2008, at least 30 prodrugs have been approved by the FDA. Seven prodrugs were approved in 2015 and six in 2017. Examples of recently approved prodrugs include dabigatran etexilate (approved in 2010), gabapentin enacarbil (2011), sofosbuvir (2013), tedizolid phosphate (2014), isavuconazonium (2015), aripiprazole lauroxil (2015), selexipag (2015), latanoprostene bunod (2017), benzhydrocodone (2018), tozinameran (2020) and serdexmethylphenidate (2021).
Classification
Prodrugs can be classified into two major types, based on how the body converts the prodrug into the final active drug form:
Type I prodrugs are bioactivated inside the cells (intracellularly). Examples of these are anti-viral nucleoside analogs that must be phosphorylated and the lipid-lowering statins.
Type II prodrugs are bioactivated outside cells (extracellularly), especially in digestive fluids or in the body's circulatory system, particularly in the blood. Examples of Type II prodrugs are salicin (described above) and certain antibody-, gene- or virus-directed enzyme prodrugs used in chemotherapy or immunotherapy.
Both major types can be further categorized into subtypes, based on factors such as (Type I) whether the intracellular bioactivation location is also the site of therapeutic action, or (Type II) whether or not bioactivation occurs in the gastrointestinal fluids or in the circulation system.
Subtypes
Type IA prodrugs include many antimicrobial and chemotherapy agents (e.g., 5-fluorouracil). Type IB agents rely on metabolic enzymes, especially in hepatic cells, to bioactivate the prodrugs intracellularly to active drugs. Type II prodrugs are bioactivated extracellularly, either in the milieu of GI fluids (Type IIA), within the systemic circulation and/or other extracellular fluid compartments (Type IIB), or near therapeutic target tissues/cells (Type IIC), relying on common enzymes such as esterases and phosphatases or target-directed enzymes. Importantly, prodrugs can belong to multiple subtypes (i.e., Mixed-Type). A Mixed-Type prodrug is one that is bioactivated at multiple sites, either in parallel or in sequential steps. For example, a prodrug that is bioactivated concurrently in both target cells and metabolic tissues could be designated as a "Type IA/IB" prodrug (e.g., HMG Co-A reductase inhibitors and some chemotherapy agents; note the symbol " / " applied here). When a prodrug is bioactivated sequentially, for example initially in GI fluids then systemically within the target cells, it is designated as a "Type IIA-IA" prodrug (e.g., tenofovir disoproxil; note the symbol " - " applied here). Many antibody-, virus-, and gene-directed enzyme prodrug therapies (ADEPTs, VDEPTs, GDEPTs) and proposed nanoparticle- or nanocarrier-linked drugs can understandably be Sequential Mixed-Type prodrugs. To differentiate these two subtypes, the dash symbol " - " is used to indicate sequential steps of bioactivation, distinguishing it from the slash symbol " / " used for Parallel Mixed-Type prodrugs.
See also
Precursor (chemistry)
Toxication
Neurotransmitter prodrug
References
External links
Special Issue on Prodrugs: from Design to Applications
Pharmacokinetics | Prodrug | [
"Chemistry"
] | 1,490 | [
"Pharmacology",
"Chemicals in medicine",
"Pharmacokinetics",
"Prodrugs"
] |
1,017,492 | https://en.wikipedia.org/wiki/Wolf%20tone | A wolf tone, or simply a "wolf", is an undesirable phenomenon that occurs in some bowed-string instruments, most famously in the cello. It happens when the pitch of the played note is close to a particularly strong natural resonant frequency of the body of the musical instrument. A wolf tone is hard for the player to control: instead of a solid tone it tends to produce a thin "surface" sound, sometimes jumping to the octave of the intended note. In extreme cases, a "stuttering" or "warbling" sound is produced, as in the sound example. This sound may be likened to the howling of a wolf. A somewhat similar sound is the beating produced by a wolf interval, which is usually the interval between E♭ and G♯ of the various non-circulating temperaments.
Stringed instruments
The physics behind the warbling wolf was first explained by C. V. Raman. He used simultaneous measurements of the vibrating string and the vibrating body of the cello, to show that the warbling sound is caused by an alternation of two different types of string vibration. All bowed string vibration is “stick-slip oscillation”. One of the vibration types involves a single slip in every cycle of the note, but the other type involves two slips per cycle.
Frequently, the wolf is present on or in between the pitches E and F♯ on the cello, and around G on the double bass.
A wolf can be reduced or eliminated with a piece of equipment called a wolf tone eliminator. There are several types. The one illustrated is a metal tube and mounting screw with an interior rubber sleeve, that fits around one of the lengths of string below the bridge. The position of the tube must be adjusted so that the short section of string resonates exactly at the frequency at which the wolf occurs. It works in the same way as a tuned-mass damper, often used to reduce vibration of bridges or tall buildings.
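The tuning condition can be stated with the idealized string formula (for a bare string segment of length L, tension T, and linear mass density μ; the added mass of the eliminator lowers the actual resonance somewhat, so this is only an approximation): f = (1/(2L))·√(T/μ). The device is slid along the string between the bridge and the tailpiece until this short segment's resonance coincides with the frequency at which the wolf occurs.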
An older device on cellos was a fifth string that could be tuned to the wolf frequency; fingering an octave above or below also attenuates the effect somewhat, as does the trick of squeezing with the knees.
While it has been said that Lou Harrison wrote a piece (evidently reworked as the second movement of the Suite for Cello and Harp) that exploited the wolf specific to Seymour Barab's new cello, there is no clear evidence that this occurred.
"Naldjorlak I", composed by Éliane Radigue for realisation exclusively by the cellist Charles Curtis, is in fact composed solely around the manipulation of the wolf tone of Curtis's cello.
See also
Mechanical resonance
String resonance
Violin acoustics
References
Wilkins, R.A.; Pan, J.; Sun, H. (Fall 2013). "An Empirical Investigation into the Mechanism of Cello Wolf-Tone Beats". Journal of the Violin Society of America. 24 (2).
Wilkins, R.A.; Pan, J.; Sun, H. (Fall 2013). "An Investigation into the Techniques for Controlling Cello Wolf-Tones". Journal of the Violin Society of America. 24 (2).
Sounds by type
Resonance
String performance techniques | Wolf tone | [
"Physics",
"Chemistry"
] | 652 | [
"Resonance",
"Waves",
"Physical phenomena",
"Scattering"
] |
1,017,561 | https://en.wikipedia.org/wiki/Business%20telephone%20system | A business telephone system is a telephone system typically used in business environments, encompassing the range of technology from the key telephone system (KTS) to the private branch exchange (PBX).
A business telephone system differs from an installation of several telephones with multiple central office (CO) lines in that the CO lines used are directly controllable in key telephone systems from multiple telephone stations, and that such a system often provides additional features for call handling. Business telephone systems are often broadly classified into key telephone systems and private branch exchanges, but many combinations (hybrid telephone systems) exist.
A key telephone system was originally distinguished from a private branch exchange in that it did not require an operator or attendant at a switchboard to establish connections between the central office trunks and stations, or between stations. Technologically, private branch exchanges share lineage with central office telephone systems, and in larger or more complex systems, may rival a central office system in capacity and features. With a key telephone system, a station user could control the connections directly using line buttons, which indicated the status of lines with built-in lamps.
Key telephone system
Key telephone systems are primarily defined by arrangements with individual line selection buttons for each available telephone line. The earliest systems were known as wiring plans and simply consisted of telephone sets, keys, lamps, and wiring.
Key was a Bell System term of art for a customer-controlled switching system such as the line buttons on the phones associated with such systems. The electrical components that allow for the selection of lines and features such as hold and intercom are housed in a panel or cabinet, called the key service unit or key system unit (KSU).
The wiring plans evolved into modular hardware building blocks with a variety of functionality and services in the 1A key telephone system developed in the Bell System in the 1930s.
Key systems can be built using three principal architectures: electromechanical shared-control, electronic shared-control, or independent key sets.
New installations of key telephone systems have become less common, as hybrid systems and private branch exchanges of comparable size have similar costs and greater functionality.
Electromechanical shared-control key system
Before the development of large-scale integrated circuits, key systems typically consisted of electromechanical components, such as relays, as were larger telephone switching systems.
The systems marketed in North America as the 1A, 1A1, 1A2 Key System, and the 6A, are typical examples and were sold for many decades. The Western Electric 1A family of key telephone units (KTUs) was introduced in the late 1930s and remained in use until the 1950s. 1A equipment was primitive and required at least two KTUs per line; one for line termination and one for station (telephone instrument) termination. The telephone instrument commonly used by 1A systems was the WECo 300/400-series telephone. Introduced in 1953, 1A1 key systems simplified wiring with a single KTU for both line and station termination, and increased the features available. As the 1A1 systems became commonplace, requirements for intercom features grew. The original intercom KTUs, WECo Model 207, were wired for a single talk link, that is, a single conversation on the intercom at a time. The WECo 6A dial intercom system provided two talk links and was often installed as the dial intercom in a 1A1 or 1A2 key system. The 6A systems were complex, troublesome, and expensive, and never became popular. The advent of 1A2 technology in 1964 simplified key system setup and maintenance. These continued to be used throughout the 1980s when the arrival of electronic key systems with their easier installation and greater features signaled the end of electromechanical key systems.
Two lesser-known key systems were used at airports for air traffic control communications, the 102 and 302 key systems. These were uniquely designed for communications between the air traffic control tower and radar approach control (RAPCON) or ground control approach (GCA) and included radio line connections.
Automatic Electric Company also produced a family of key telephone equipment, some of it compatible with Western Electric equipment, but it did not gain the widespread use enjoyed by Western Electric equipment.
Electronic shared-control system
With the advent of LSI ICs, the same architecture could be implemented much less expensively than was possible using relays. In addition, it was possible to eliminate the many-wire cabling and replace it with much simpler cable similar to (or even identical to) that used by non-key systems. Electronic shared-control systems led quickly to the modern hybrid telephone system, as the features of PBXs and key systems quickly merged. One of the most recognized such systems is the AT&T Merlin.
Additionally, these more modern systems allowed a diverse set of features including:
Answering machine functions
Automatic call accounting
Caller ID
Remote supervision of the entire system
Selection of signaling sounds
Speed dialing
Station-specific limitations (such as no long-distance access or no paging)
Features could be added or modified simply using software, allowing easy customization of these systems. The stations were easier to maintain than the previous electromechanical key systems, as they used efficient LEDs instead of incandescent light bulbs for line status indication.
LSI also allowed smaller systems to distribute the control (and features) into individual telephone sets that don't require any single shared control unit. Such systems were dubbed KSU-less; the first such phone was introduced in 1975 with the Com Key 416. Generally, these systems are used with relatively few telephone sets and it is often more difficult to keep the feature set (such as speed-dialing numbers) in synchrony between the various sets.
Hybrid key telephone system
Into the 21st century, the distinction between key systems and PBX systems has become increasingly blurred. Early electronic key systems used dedicated handsets which displayed and allowed access to all connected PSTN lines and stations.
The modern key system now supports SIP, ISDN, and analog handsets (in addition to its own proprietary handsets, usually digital), as well as a raft of features more traditionally found on larger PBX systems. Their support for both analog and digital signaling, and for some PBX functionality, gives rise to the hybrid designation.
A hybrid system typically has some call appearance buttons that directly correspond to individual lines and/or stations, but may also support direct dialing to extensions or outside lines without selecting a line appearance.
The modern key system is usually fully digital, although analog variants persist and some systems implement VoIP services. Effectively, the aspects that distinguish a PBX from a hybrid key system are the amount, scope, and complexity of the features and facilities offered.
Private branch exchange
A PBX is a telephone exchange or switching system that serves a private organization. A PBX permits the sharing of central office trunks between internally installed telephones, and provides intercommunication between those internal telephones within the organization without the use of external lines. The central office lines provide connections to the public switched telephone network (PSTN) and the concentration aspect of a PBX permits the shared use of these lines between all stations in the organization. Its intercommunication ability allows two or more stations to directly connect while not using the public switched telephone network. This method reduces the number of lines needed from the organization to the public switched telephone network.
Each device connected to the PBX, such as a telephone, a fax machine, or a computer modem, is referred to as an extension and has a designated extension telephone number that may or may not be mapped automatically to the numbering plan of the central office and the telephone number block allocated to the PBX.
Initially, PBX systems offered the primary advantage of cost savings for internal phone calls: handling the circuit switching locally reduced charges for telephone service via central-office lines. As PBX systems gained popularity, they began to feature services not available in the public network, such as hunt groups, call forwarding, and extension dialing. From the 1960s, a simulated PBX, known as Centrex, provided similar features from the central telephone exchange.
A PBX differs from a key telephone system (KTS) in that users of a key system manually select their own outgoing lines on special telephone sets that control buttons for this purpose, while PBXs select the outgoing line automatically. The telephone sets connected to a PBX do not normally have special keys for central-office line control, but it is not uncommon for key systems to be connected to a PBX to extend its services.
A PBX, in contrast to a key system, employs an organizational numbering plan for its stations. In addition, a dial plan determines whether additional digit sequences must be prefixed when dialing to obtain access to a central office trunk. Modern number-analysis systems permit users to dial internal and external telephone numbers without special codes to distinguish the intended destination.
History
The term PBX originated when switchboard operators managed company switchboards manually using cord circuits. As automated electromechanical switches and later electronic switching systems gradually replaced the manual systems, the terms private automatic branch exchange (PABX) and private manual branch exchange (PMBX) differentiated them. Solid-state digital systems were sometimes referred to as electronic private automatic branch exchanges (EPABX). Today, the term PBX is by far the most widely recognized. The abbreviation now applies to all types of complex, in-house telephony switching systems.
Two significant developments during the 1990s led to new types of PBX systems. One was the massive growth of data networks and increased public understanding of packet switching. Companies needed packet-switched networks for data, so using them for telephone calls proved tempting, and the availability of the Internet as a global delivery system made packet-switched communications even more attractive. These factors led to the development of the voice over IP PBX, or IP PBX.
The other trend involved the idea of focusing on core competence. PBX services had always been hard to arrange for smaller companies, and many companies realized that handling their own telephony was not their core competence. These considerations gave rise to the concept of the hosted PBX. In wireline telephony, the original hosted PBX was the Centrex service provided by telcos since the 1960s; later competitive offerings evolved into the modern competitive local exchange carrier. In voice-over IP, hosted solutions are easier to implement as the PBX may be located at and managed by any telephone service provider, connecting to the individual extensions via the Internet. The upstream provider no longer needs to run direct, local leased lines to the served premises.
Manual PBX
Many manufacturers provided manually operated private branch exchange systems in various sizes and features; examples are pictured here:
System components
A PBX system often includes the following:
Cabinets, closets, vaults, and other housings
Console or switchboard allowing an operator to control incoming calls
Interconnecting wires and cables
Logic cards, switching and control cards, power cards, and related devices that facilitate PBX operation
Microcontroller or microcomputer for arbitrary data processing, control and logic
Outside trunks connecting the PBX to the public switched telephone network
Stations, or telephone sets, sometimes called lines by metonymy
The PBX's internal switching network
Uninterruptible power supply (UPS) consisting of sensors, power switches, and batteries
Current trends
Since the advent of Internet telephony (Voice over IP) technologies, PBX development has tended toward the IP PBX, which uses the Internet Protocol to carry calls. Most modern PBXs support VoIP. ISDN PBX systems also replaced some traditional PBXs in the 1990s, as ISDN offers features such as conference calling, call forwarding, and programmable caller ID. As of 2015, ISDN is being phased out by most major telecommunication carriers throughout Europe in favor of all-IP networks, with some expecting complete migration by 2025.
Originally having started as an organization's manual switchboard or attendant console operated by a telephone operator or just simply the operator, PBXs have evolved into VoIP centers that are hosted by the operators or even manufacturers.
Even though VoIP is considered by many people as the future of telephony, the circuit switched network remains the core of communications, and the existing PBX systems are competitive in services with modern IP systems. Five distinct scenarios exist:
Hosted/virtual PBX (hosted and circuit-switched) or traditional Centrex
IP Centrex or hosted/virtual IP (hosted and packet-switched)
IP PBX (private and packet-switched)
Mobile PBX solution (mobile phones replacing or used in combination with fixed phones)
PBX (private and circuit-switched)
For the option to call from the IP network to the circuit-switched PSTN (SS7/ISUP), the hosted solutions include interconnecting media gateways.
Home and small-business usage
Historically, the expense of full-fledged PBX systems has put them out of reach of small businesses and individuals. However, since the 1990s many small, consumer-grade, and consumer-size PBXs have become available. These systems are not comparable in size, robustness, or flexibility to commercial-grade PBXs, but still provide many features.
The first consumer PBX systems used analog (POTS) telephone lines, typically supporting four private analog and one public analog line. They were the size of a small cigar box. In Europe, these systems for analog phones were followed by consumer-grade PBXs for ISDN. Using small PBXs for ISDN is a logical step since the ISDN basic rate interface provides two logical phone lines (via two ISDN B channels) that can be used in parallel. With the adoption of VoIP by consumers, consumer VoIP PBXs have appeared, with PBX functions becoming simple additional software features of consumer-grade routers and switches. Additionally, many telecommunications providers now offer hosted PBX systems where the provider actually hosts the PBX and the phone handsets are connected to it through an internet connection.
Open source projects have provided PBX-style features since the 1990s. These projects provide flexibility, features, and programmability.
PBX functions
Functionally, the PBX performs four main call processing duties (a minimal sketch follows this list):
Establishing connections (circuits) between the telephone sets of two users (e.g. mapping a dialed number to a physical phone, ensuring the phone isn't already busy)
Maintaining such connections as long as the users require them (i.e. channeling voice signals between the users)
Disconnecting those connections as per the user's requirement
Providing information for accounting purposes (e.g. metering calls)
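A minimal, illustrative sketch of these four duties follows (a toy in-memory model; the class and method names are invented for this example and do not correspond to any real PBX product or API):

```python
import time

class ToyPBX:
    """Toy model of the four call-processing duties listed above."""

    def __init__(self):
        self.extensions = {}     # extension number -> "idle" or "busy"
        self.calls = {}          # call id -> (caller, callee, start time)
        self.call_records = []   # accounting information (CDR-like tuples)
        self._next_id = 1

    def register(self, number):
        self.extensions[number] = "idle"

    def connect(self, caller, callee):
        """Duty 1: establish a connection if both parties are idle."""
        if self.extensions.get(caller) != "idle" or self.extensions.get(callee) != "idle":
            return None                                       # busy or unknown: call fails
        call_id, self._next_id = self._next_id, self._next_id + 1
        self.extensions[caller] = self.extensions[callee] = "busy"
        self.calls[call_id] = (caller, callee, time.time())   # Duty 2: maintain the circuit
        return call_id

    def disconnect(self, call_id):
        """Duty 3: tear the connection down; Duty 4: record it for accounting."""
        caller, callee, start = self.calls.pop(call_id)
        self.extensions[caller] = self.extensions[callee] = "idle"
        self.call_records.append((caller, callee, start, time.time() - start))

pbx = ToyPBX()
for ext in ("1001", "1002"):
    pbx.register(ext)
cid = pbx.connect("1001", "1002")
pbx.disconnect(cid)
```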
In addition to these basic functions, PBXs offer many other calling features and capabilities, with different manufacturers providing different features in an effort to differentiate their products.
Common capabilities include (manufacturers may have a different name for each capability):
Auto attendant
Auto dialing
Automated directory services (where callers can be routed to a given employee by keying or speaking the letters of the employee's name)
Automatic call distributor
Automatic ring back
Busy override
Call blocking
Call forwarding on busy or absence
Call logging
Call park
Call pick-up
Call transfer
Call waiting
Call whisper
Camp-on
Conference call
Custom greetings
Customized abbreviated dialing (speed dialing)
Direct inward dialing (DID)
Direct inward system access (DISA) (the ability to access internal features from an outside telephone line)
Do not disturb (DND)
Follow-me, also known as find-me: Determines the routing of incoming calls. The exchange is configured with a list of numbers for a person. When a call is received for that person, the exchange routes it to each number on the list in turn until either the call is answered or the list is exhausted (at which point the call may be routed to a voice mail system); a minimal code sketch of this routing loop appears after this feature list.
Interactive voice response
Local connection: Another useful attribute of a hosted PBX is the ability to have a local number in cities in which the company is not physically present. This service essentially creates a virtual office presence anywhere in the world.
Music on hold
Night service
Public address voice paging
Shared message boxes (where a department can have a shared voicemail box)
Voice mail
Voice message broadcasting
Welcome message
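The follow-me/find-me capability listed above is, at its core, a sequential ring-through with a voicemail fallback. The following is a minimal Python sketch of that routing loop, with a hypothetical try_ring callable standing in for the real signalling.

def follow_me(numbers, try_ring, voicemail):
    # Ring each configured number in turn; divert to voicemail if the list is exhausted.
    for number in numbers:
        if try_ring(number):
            return number            # answered here; routing stops
    voicemail()
    return None

# Example: only the second (mobile) number answers.
answered = follow_me(
    ["+1-555-0100", "+1-555-0101"],
    try_ring=lambda n: n.endswith("0101"),
    voicemail=lambda: print("diverted to voicemail"),
)
print("answered at:", answered)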
Interface standards
Interfaces for connecting extensions to a PBX include:
DECT – a standard for connecting cordless phones.
Internet Protocol – For example, H.323 and SIP.
POTS (plain old telephone service) – the common two-wire interface used in most homes. This is cheap and effective and allows almost any standard phone to be used as an extension.
proprietary – the manufacturer has defined a protocol. One can only connect the manufacturer's sets to their PBX, but the benefit is more visible information displayed and/or specific function buttons.
Interfaces for connecting PBXs to each other include:
DPNSS – for connecting PBXs to trunk lines. Standardized by British Telecom, this usually runs over E1 (E-carrier) physical circuits.
Internet Protocol – H.323 and the Session Initiation Protocol (SIP) are IP-based solutions for multimedia sessions.
Primary rate interface (ISDN) – Provided over T1 (23 bearer channels and 1 signaling channel) or E1 carriers.
Proprietary protocols – usable only when all interconnected equipment comes from the same manufacturer; if equipment from several manufacturers is on-site, the use of a standard protocol is required instead.
QSIG – for connecting PBXs to each other, usually runs over T1 (T-carrier) or E1 (E-carrier) physical circuits.
Interfaces for connecting PBXs to trunk lines include:
Internet Protocol – H.323, SIP, MGCP, and Inter-Asterisk eXchange protocols operate over IP and are supported by some network providers.
ISDN – the most common digital standard for fixed telephony devices. This can be supplied in either Basic (2-circuit capacity) or Primary (24- or 30-circuit capacity) versions. Most medium to large companies would use Primary ISDN circuits carried on T1 or E1 physical connections.
RBS (robbed bit signaling) – delivers 24 digital circuits over a four-wire (T1) interface
standard POTS (plain old telephone service) lines – the common two-wire interface used in most homes. This is adequate only for smaller systems and can suffer from not being able to detect incoming calls when trying to make an outbound call (commonly called glare).
Interfaces for collecting data from the PBX:
File – the PBX generates a file containing the call records from the PBX.
Network port (listen mode) – an external application connects to the TCP or UDP port. The PBX streams information to the application.
Network port (server mode) – the PBX connects to another application or buffer.
Serial interface – historically used to print every call record to a serial printer. In modern systems, a software application connects via serial cable to this port.
A data record from a PBX or other telecommunication system that provides the statistics for a telephone call is usually termed a call detail record (CDR) or a Station Messaging Detail Record (SMDR).
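A CDR or SMDR entry is usually a flat, per-call row of fields. The sketch below shows one plausible shape for such a record in Python; the field names are illustrative only and do not correspond to any particular vendor's format.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    # Illustrative fields; real CDR/SMDR layouts vary between manufacturers.
    start_time: datetime
    duration_s: int
    calling_extension: str
    dialed_number: str
    trunk_id: str
    direction: str   # "inbound" or "outbound"

record = CallDetailRecord(datetime(2024, 1, 5, 9, 30), 185, "2001", "+1-555-0100", "T1-01", "outbound")
print(record)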
Hosted PBX systems
Virtual PBX systems or hosted PBX systems deliver PBX functionality as a service, available over the public switched telephone network (PSTN) or the Internet. Hosted PBXs are typically provided by a telephone company or service provider, using equipment located in the premises of a telephone exchange or the provider's data center. This means the customer does not need to buy or install PBX equipment. Generally, the service is provided by a lease agreement and the provider can, in some configurations, use the same switching equipment to service multiple hosted PBX customers.
The first hosted PBX services were feature-rich compared to most premises-based systems of the time. Some PBX functions, such as follow-me calling, appeared in a hosted service before they became available in hardware PBX equipment. Since its introduction, updates and new offerings have moved feature sets in both directions. It is possible to get hosted PBX services that include feature sets from minimal functionality to advanced feature combinations.
In addition to the features available from premises-based PBX systems, hosted PBX:
allows a single number to be presented for the entire company, despite its being geographically distributed. A company could even choose to have no premises, with workers connected from home using their domestic telephones but receiving the same features as any PBX user.
allows multimodal access, where employees access the network via a variety of telecommunications systems, including POTS, ISDN, cellular phones, and VOIP. This allows one extension to ring in multiple locations (either concurrently or sequentially).
allows scalability so that a larger system is not needed if new employees are hired, and so that resources are not wasted if the number of employees is reduced.
eliminates the need for companies to manage or pay for on-site hardware maintenance.
supports integration with custom toll plans (that allow intra-company calls, even from private premises, to be dialed at a cheaper rate) and integrated billing and accounting (where calls made on a private line but on the company's behalf are billed centrally to the company).
Hosted PBX providers
The ongoing migration of most major telecommunication carriers to IP-based networks, coupled with the rise in Cloud Communications has resulted in a significant rise in the uptake of hosted PBX solutions.
Mobile PBX
A mobile PBX is a hosted PBX service that extends fixed-line PBX functionality to mobile devices such as cellular handsets, smartphones, and PDA phones by provisioning them as extensions. Mobile PBX services also can include fixed-line phones. Mobile PBX systems are different from other hosted PBX systems that simply forward data or calls to mobile phones by allowing the mobile phone itself, through the use of buttons, keys, and other input devices, to control PBX phone functions and to manage communications without having to call into the system first.
A mobile PBX may exploit the functionality available in smartphones to run custom applications to implement the PBX-specific functionality.
In addition, a mobile PBX may create extension identifiers for each handset that allow users to dial other cell phones in the PBX via their extension shortcut, instead of a PSTN number.
IP-PBX
An IP PBX handles voice calls over the Internet Protocol (IP), bringing benefits for computer telephony integration (CTI). An IP-PBX can exist as physical hardware or can carry out its functions virtually, performing the call-routing activities of the traditional PBX or key system as a software system. The virtual version is also called a "Soft PBX".
See also
Centrex
Circuit ID
Cloud communications
Ground start trunk
List of SIP software
Reference computer
RJ21
Switchboard operator
Telephone exchange
Telephone switchboards
References
External links
Telephone exchange equipment
Computer telephony integration
Telephony equipment | Business telephone system | [
"Technology"
] | 4,761 | [
"Information technology",
"Computer telephony integration"
] |
1,017,788 | https://en.wikipedia.org/wiki/Myth%20of%20Er | The Myth of Er is a legend that concludes Plato's Republic (10.614–10.621). The story includes an account of the cosmos and the afterlife that greatly influenced religious, philosophical, and scientific thought for many centuries.
The story begins as a man named Er, son of Armenios (), of Pamphylia, dies in battle. When the bodies of those who died in the battle are collected, ten days after his death, Er remains undecomposed. Two days later he revives on his funeral-pyre and tells others of his journey in the afterlife, including an account of metempsychosis and the celestial spheres of the astral plane. The tale includes the idea that moral people are rewarded and immoral people punished after death.
Although called the Myth of Er, the word "myth" here means "word, speech, account", rather than the modern meaning. The word is used at the end when Socrates explains that because Er did not drink the waters of Lethe, the account (mythos in Greek) was preserved for us.
Er's tale
With many other souls as his companions, Er had come across an awe-inspiring place with four openings – two into and out of the sky and two into and out of the ground. Judges sat between these openings and ordered the souls which path to follow: the good were guided into the path into the sky, the immoral were directed below. But when Er approached the judges, he was told to remain, listening and observing in order to report his experience to humankind.
Meanwhile from the other opening in the sky, clean souls floated down, recounting beautiful sights and wondrous feelings. Those returning from underground appeared dirty, haggard, and tired, crying in despair when recounting their awful experiences, as each was required to pay a tenfold penalty for all the wicked deeds committed when alive. There were some, however, who could not be released from underground. Murderers, tyrants and other non-political criminals were doomed to remain by the exit of the underground, unable to escape.
After seven days in the meadow, the souls and Er were required to travel farther. After four days they reached a place where they could see a shaft of rainbow light brighter than any they had seen before. After another day's travel they reached it. This was the Spindle of Necessity. Several women, including Lady Necessity, her daughters, and the Sirens, were present. The souls – except for Er – were then organized into rows and were each given a lottery token.
Then, in the order in which their lottery tokens were chosen, each soul was required to come forward to choose his or her next life. Er recalled the first one to choose a new life: a man who had not known the terrors of the underground but had been rewarded in the sky, hastily chose a powerful dictatorship. Upon further inspection he realized that, among other atrocities, he was destined to eat his own children. Er observed that this was often the case of those who had been through the path in the sky, whereas those who had been punished often chose a better life. Many preferred a life different from their previous experience. Animals chose human lives while humans often chose the apparently easier lives of animals.
After this, each soul was assigned a guardian spirit to help him or her through their life. They passed under the throne of Lady Necessity, then traveled to the Plane of Oblivion, where the River of Forgetfulness (River Lethe) flowed. Each soul was required to drink some of the water, in varying quantities; again, Er only watched. As they drank, each soul forgot everything. As they lay down at night to sleep each soul was lifted up into the night in various directions for rebirth, completing their journey. Er remembered nothing of the journey back to his body. He opened his eyes to find himself lying on the funeral pyre early in the morning, able to recall his journey through the afterlife.
The moral
In the dialogue Plato introduces the story by having Socrates explain to Glaucon that the soul must be immortal, and cannot be destroyed. Socrates tells Glaucon the Myth of Er to explain that the choices we make and the character we develop will have consequences after death. Earlier in Book II of the Republic, Socrates points out that even the gods can be tricked by a clever charlatan who appears just while unjust in his psyche, in that they would welcome the pious but false "man of the people" and would reject and punish the truly just but falsely accused man. In the Myth of Er the true characters of the falsely-pious and those who are immodest in some way are revealed when they are asked to choose another life and pick the lives of tyrants. Those who lived happy but middling lives in their previous life are most likely to choose the same for their future life, not necessarily because they are wise, but out of habit. Those who were treated with infinite injustice, despairing of the possibility of a good human life, choose the souls of animals for their future incarnation. The philosophic life — which identifies the types of lives that emerge from experience, character, and fate — allows men to make good choices when presented with options for a new life. Whereas success, fame, and power may provide temporary heavenly rewards or hellish punishments, philosophic virtues always work to one's advantage.
The Spindle of Necessity
The myth mentions "The Spindle of Necessity": the cosmos is represented by the Spindle attended by sirens and the three daughters of the Goddess Necessity, known collectively as The Fates, whose duty is to keep the rims of the spindle revolving. The Fates, Sirens, and Spindle are used in the Republic, partly to help explain how known celestial bodies revolved around the Earth according to the cosmology in the Republic.
The "Spindle of Necessity", according to Plato, is "shaped... like the ones we know"—the standard Greek spindle, consisting of a hook, shaft, and whorl. The hook was fixed near the top of the shaft on its long side. On the other end resided the whorl. The hook was used to spin the shaft, which in turn spun the whorl on the other end.
Placed on the whorl of his celestial spindle were 8 "orbits", whereof each created a perfect circle. Each "orbit" is given different descriptions by Plato.
Based on Plato's descriptions within the passage, the orbits can be identified as those of the classical planets, corresponding to the Aristotelian planetary spheres:
Orbit 1 – Stars
Orbit 2 – Saturn
Orbit 3 – Jupiter
Orbit 4 – Mars
Orbit 5 – Sun
Orbit 6 – Venus
Orbit 7 – Mercury
Orbit 8 – Moon
The descriptions of the rims accurately fit the relative distance and revolution speed of the respective bodies as would appear to an observer from Earth (aside from the Moon, which revolves around the Earth slightly more slowly than the sun).
Comparative mythology
Some scholars have connected the Myth of Er to the Armenian legend of Ara the Handsome (Armenian: Արա Գեղեցիկ Ara Gełec‘ik). In the Armenian story, the king Ara was so handsome that the Assyrian queen Semiramis waged war against Armenia to capture him and bring him back to her, alive, so she could marry him. During the battle, Semiramis was victorious, but Ara was slain despite her orders to capture him alive. To avoid continuous warfare with the Armenians, Semiramis, reputed to be a sorceress, took Ara's body and prayed to the gods to raise him from the dead. When the Armenians advanced to avenge their leader, Semiramis disguised one of her lovers as Ara and spread the rumor that the gods had brought Ara back to life, convincing the Armenians not to continue the war. In one tradition, Semiramis' prayers are successful and Ara returns to life.
Armen Petrosyan suggests that Plato's version reflects an earlier form of the story where Er (Ara) rises from the grave.
See also
Allegorical interpretations of Plato
Axis mundi
Dante's Inferno
Daniil Andreev
Dream of Scipio
Near-death experience
The Gospel of Afranius
Zalmoxis
References
Further reading
External links
The Myth of Er – text from Luke Dysinger, O.S.B.
The Vision of Er, retelling by W. M. L. Hutchinson.
Platonism
Visionary literature
Religious cosmologies
Early scientific cosmologies
Ancient astronomy
Philosophical arguments
Locations in the Greek underworld | Myth of Er | [
"Astronomy"
] | 1,751 | [
"Ancient astronomy",
"History of astronomy"
] |
1,017,886 | https://en.wikipedia.org/wiki/Phosphofructokinase%20deficiency | Phosphofructokinase deficiency is a rare muscular metabolic disorder, with an autosomal recessive inheritance pattern. It is characterized as a deficiency in the phosphofructokinase (PFK) enzyme throughout the body, including the skeletal muscles and red blood cells. Phosphofructokinase is an enzyme involved in the glycolytic process. The lack of PFK blocks the completion of the glycolytic pathway. Therefore, all products past the block would be deficient, including adenosine triphosphate (ATP).
It may affect humans as well as other mammals (especially dogs). It was named after the Japanese physician Seiichiro Tarui (b. 1927), who first observed the condition in 1965.
Presentation
In humans
Human PFK deficiency is categorized into four types: classic, late-onset, infantile and hemolytic. These types are differentiated by age at which symptoms are observed and which symptoms present.
Classic form
Classic phosphofructokinase deficiency is the most common type of this disorder. This type presents with exercise-induced muscle cramps and weakness (sometimes rhabdomyolysis), myoglobinuria, as well as with haemolytic anaemia causing dark urine a few hours later.
Hyperuricemia is common, due to the kidneys' inability to process uric acid following damage resulting from processing myoglobin. Nausea and vomiting following strenuous exercise is another common indicator of classic PFK deficiency. Many patients will also display high levels of bilirubin, which can lead to a jaundiced appearance. Symptoms for this type of PFK deficiency usually appear in early childhood.
Late-onset form
Late-onset PFK deficiency, as the name suggests, is a form of the disease that presents later in life. Common symptoms associated with late-onset phosphofructokinase deficiency are myopathy, weakness and fatigue. Many of the more severe symptoms found in the classic type of this disease are absent in the late-onset form.
Infantile form
Phosphofructokinase deficiency also presents in a rare infantile form. Infants with this deficiency often display floppy infant syndrome (hypotonia), arthrogryposis, encephalopathy and cardiomyopathy. The disorder can also manifest itself in the central nervous system, usually in the form of seizures. PFK deficient infants also often have some type of respiratory issue. Survival rate for the infantile form of PFK deficiency is low, and the cause of death is often due to respiratory failure.
Hemolytic form
The defining characteristic of this form of the disorder is hemolytic anemia, in which red blood cells break down prematurely. Muscle weakness and pain are not as common in patients with hemolytic PFK deficiency.
In dogs
Presentation of the canine form of the disease is similar to that of the human form. Most notably, PFK deficient dogs have mild, but persistent, anemia with hemolytic episodes, exercise intolerance, hemoglobinuria, and pale or jaundiced mucous membranes. Muscle weakness and cramping are not uncommon symptoms, but they are not as common as they are in human PFKM deficiency.
Risk factors
In humans
In order to get Tarui's disease, both parents must be carriers of the genetic defect so that the child is born with the full form of the recessive trait. The best indicator of risk is a family member with PFK deficiency.
In dogs
Canine phosphofructokinase deficiency is found mostly in English Springer Spaniels and American Cocker Spaniels, but has also been reported in Whippets and Wachtelhunds. Mixed-breed dogs descended from any of these breeds are also at risk to inherit PFK deficiency.
Pathophysiology
Phosphofructokinase is a tetrameric enzyme that consists of three types of subunits: PFKL (liver), PFKM (muscle), and PFKP (platelet). The combination of these subunits varies depending on the tissue in question. In this condition, a deficiency of the M subunit (PFKM) of the phosphofructokinase enzyme impairs the ability of cells such as erythrocytes and rhabdomyocytes (skeletal muscle cells) to use carbohydrates (such as glucose) for energy. Unlike most other glycogen storage diseases, it directly affects glycolysis.
The mutation impairs the ability of phosphofructokinase to phosphorylate fructose-6-phosphate to fructose-1,6-bisphosphate, the rate-limiting step of glycolysis; the product of this step is subsequently cleaved into glyceraldehyde-3-phosphate. Inhibition of this step prevents the formation of adenosine triphosphate (ATP) from adenosine diphosphate (ADP), which results in a lack of available energy for muscles during heavy exercise. This results in the muscle cramping and pain that are common symptoms of the disease.
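Because glycolysis is a linear pathway, flux through every reaction downstream of phosphofructokinase is capped by whatever residual PFK activity remains. The minimal Python sketch below illustrates that bottleneck idea; the activity numbers are arbitrary and purely illustrative, not measured values.

# Relative activities of selected glycolytic steps (arbitrary illustrative units).
enzyme_activity = {
    "hexokinase": 1.0,
    "phosphofructokinase": 0.05,   # deficient enzyme; residual activity assumed for illustration
    "aldolase": 1.0,
    "phosphoglycerate kinase": 1.0,
    "pyruvate kinase": 1.0,
}
# Steady-state flux through a linear pathway cannot exceed its slowest step,
# so ATP and pyruvate output fall to the PFK-limited level.
pathway_flux = min(enzyme_activity.values())
print(f"relative glycolytic flux (and downstream ATP/pyruvate output): {pathway_flux:.2f}")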
In humans
Genetic mutation is the cause of phosphofructokinase deficiency. Several different mutations in the gene that encodes for PFKM have been reported in humans, but the result is production of PFKM subunits with little to no function. As a result, affected individuals display only about 50–65% of total normal phosphofructokinase enzyme function.
In dogs
PFK deficiency is believed to be the result of a nonsense mutation in the gene that encodes for PFKM. This results in an unstable, truncated protein that lacks normal function. This results in a near complete loss of PFKM activity in the skeletal muscle. Dogs with the mutation display 10–20% of normal PFK activity in their erythrocytes, due to a higher proportion of PFKM in those cells.
Diagnosis
Symptoms of phosphofructokinase deficiency can closely resemble those of other metabolic diseases, including deficiencies of phosphoglycerate kinase, phosphoglycerate mutase, lactate dehydrogenase, beta-enolase and aldolase A. Thus, proper diagnosis is important to determine a treatment plan.
A diagnosis can be made through a muscle biopsy that shows excess glycogen accumulation. Glycogen deposits in the muscle are a result of the interruption of normal glucose breakdown that regulates the breakdown of glycogen. Blood tests are conducted to measure the activity of phosphofructokinase, which would be lower in a patient with this condition. Patients also commonly display elevated levels of creatine kinase.
Management
Treatment usually entails that the patient refrain from strenuous exercise to prevent muscle pain and cramping. Avoiding carbohydrates is also recommended.
A ketogenic diet also improved the symptoms of an infant with PFK deficiency. The logic behind this treatment is that the low-carb high fat diet forces the body to use fatty acids as a primary energy source instead of glucose. This bypasses the enzymatic defect in glycolysis, lessening the impact of the mutated PFKM enzymes. This has not been widely studied enough to prove if it is a viable treatment, but testing is continuing to explore this option.
Genetic testing to determine whether or not a person is a carrier of the mutated gene is also available.
Mitapivat (brand name Pyrukynd) may improve symptoms by stimulating pyruvate kinase, another enzyme in the glycolytic pathway.
In dogs
Diagnosis of canine phosphofructokinase deficiency is similar to the blood tests used in diagnosis of humans. Blood tests measuring the total erythrocyte PFK activity are used for definitive diagnosis in most cases. DNA testing for presence of the condition is also available.
Treatment mostly takes the form of supportive care. Owners are advised to keep their dogs out of stressful or exciting situations, avoid high temperature environments and strenuous exercise. It is also important for the owner to be alert for any signs of a hemolytic episode. Dogs carrying the mutated form of the gene should be removed from the breeding population, in order to reduce incidence of the condition.
References
External links
Inborn errors of carbohydrate metabolism
Glycogen storage disease type VII | Phosphofructokinase deficiency | [
"Chemistry"
] | 1,761 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
1,017,889 | https://en.wikipedia.org/wiki/Pyruvate%20kinase%20deficiency | Pyruvate kinase deficiency is an inherited metabolic disorder of the enzyme pyruvate kinase which affects the survival of red blood cells. Both autosomal dominant and recessive inheritance have been observed with the disorder; classically, and more commonly, the inheritance is autosomal recessive. Pyruvate kinase deficiency is the second most common cause of enzyme-deficient hemolytic anemia, following G6PD deficiency.
Signs and symptoms
Symptoms can be extremely varied among those suffering from pyruvate kinase deficiency. The majority of those suffering from the disease are detected at birth while some only present symptoms during times of great physiological stress such as pregnancy, or with acute illnesses (viral disorders). Symptoms are limited to or most severe during childhood. Among the symptoms of pyruvate kinase deficiency are:
Mild to severe hemolytic anemia
Cholecystolithiasis
Tachycardia
Hemochromatosis
Icteric sclera
Splenomegaly
Leg ulcers
Jaundice
Fatigue
Shortness of breath
The level of 2,3-bisphosphoglycerate is elevated: 1,3-bisphosphoglycerate, a precursor of phosphoenolpyruvate (the substrate for pyruvate kinase), is increased, and so the Luebering-Rapoport pathway is overactivated. This leads to a rightward shift in the oxygen dissociation curve of hemoglobin (i.e., it decreases the hemoglobin's affinity for oxygen); in consequence, patients may tolerate anemia surprisingly well.
Cause
Pyruvate kinase deficiency is due to a mutation in the PKLR gene. There are four pyruvate kinase isoenzymes, two of which are encoded by the PKLR gene (isoenzymes L and R, which are used in the liver and erythrocytes, respectively). Mutations in the PKLR gene therefore cause a deficiency in the pyruvate kinase enzyme.
180 different mutations have been found on the gene coding for the L and R isoenzymes, 124 of which are single-nucleotide missense mutations. Pyruvate kinase deficiency is most commonly an autosomal recessive trait. Although it is mostly homozygotes that demonstrate symptoms of the disorder, compound heterozygotes can also show clinical signs.
Pathophysiology
Pyruvate kinase is the last enzyme involved in the glycolytic process, transferring the phosphate group from phosphoenolpyruvate to a waiting adenosine diphosphate (ADP) molecule, resulting in both adenosine triphosphate (ATP) and pyruvate. This is the second ATP-producing step of the process and the third regulatory reaction. Pyruvate kinase deficiency in the red blood cells results in an inadequate amount of, or a complete lack of, the enzyme, blocking the completion of the glycolytic pathway. Therefore, all products past the block would be deficient in the red blood cell. These products include ATP and pyruvate.
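A rough per-glucose ATP tally shows why losing this single step is so costly for a cell that depends on glycolysis alone. The short Python sketch below uses the textbook accounting and deliberately ignores refinements such as the Rapoport-Luebering shunt.

atp_invested = 2          # hexokinase + phosphofructokinase steps consume 2 ATP per glucose
atp_from_pgk = 2          # phosphoglycerate kinase step yields 2 ATP per glucose
atp_from_pk = 2           # pyruvate kinase step yields 2 ATP per glucose

net_normal = -atp_invested + atp_from_pgk + atp_from_pk   # +2 ATP per glucose
net_pk_blocked = -atp_invested + atp_from_pgk             # 0 ATP per glucose
print(f"net ATP per glucose, normal glycolysis: {net_normal:+d}")
print(f"net ATP per glucose, pyruvate kinase blocked: {net_pk_blocked:+d}")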
Mature erythrocytes lack a nucleus and mitochondria. Without a nucleus, they lack the ability to synthesize new proteins so if anything happens to their pyruvate kinase, they are unable to generate replacement enzymes throughout the rest of their life cycle. Without mitochondria, erythrocytes are heavily dependent on the anaerobic generation of ATP during glycolysis for nearly all of their energy requirements.
With insufficient ATP in an erythrocyte, all active processes in the cell come to a halt. Sodium-potassium ATPase pumps are the first to stop. Since the cell membrane is more permeable to potassium than to sodium, potassium leaks out. The intracellular fluid becomes hypotonic, and water moves down its concentration gradient out of the cell. The cell shrinks and cellular death occurs; this is called 'dehydration at the cellular level'. This is how a deficiency in pyruvate kinase results in hemolytic anemia: the body becomes deficient in red blood cells, as they are destroyed through lack of ATP faster than they are created.
Diagnosis
The diagnosis of pyruvate kinase deficiency can be done by full blood counts (differential blood counts) and reticulocyte counts. Other methods include direct enzyme assays, which can determine pyruvate kinase levels in erythrocytes separated by density centrifugation, as well as direct DNA sequencing. For the most part when dealing with pyruvate kinase deficiency, these two diagnostic techniques are complementary to each other as they both contain their own flaws. Direct enzyme assays can diagnose the disorder and molecular testing confirms the diagnosis or vice versa. Furthermore, tests to determine bile salts (bilirubin) can be used to see whether the gall bladder has been compromised.
Treatment
Most affected individuals with pyruvate kinase deficiency do not require treatment. Those individuals who are more severely affected may die in utero of anemia or may require intensive treatment. With these severe cases of pyruvate kinase deficiency in red blood cells, treatment is the only option, there is no cure. However, treatment is usually effective in reducing the severity of the symptoms.
The most common treatment is blood transfusions, especially in infants and young children. This is done if the red blood cell count has fallen to a critical level. The transplantation of bone marrow has also been conducted as a treatment option.
There is a natural way the body tries to treat this disease. It increases the erythrocyte production (reticulocytosis) because reticulocytes are immature red blood cells that still contain mitochondria and so can produce ATP via oxidative phosphorylation. Therefore, a treatment option in extremely severe cases is to perform a splenectomy. This does not stop the destruction of erythrocytes but it does help increase the amount of reticulocytes in the body since most of the hemolysis occurs when the reticulocytes are trapped in the hypoxic environment of the spleen. This reduces severe anemia and the need for blood transfusions.
Mitapivat was approved for medical use in the United States in February 2022.
Epidemiology
Pyruvate kinase deficiency occurs worldwide; however, many cases are found in northern Europe and Japan. The prevalence of pyruvate kinase deficiency is around 51 cases per million in the population (via gene frequency).
See also
List of hematologic conditions
References
Further reading
External links
Inborn errors of carbohydrate metabolism
Hereditary hemolytic anemias | Pyruvate kinase deficiency | [
"Chemistry"
] | 1,385 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
1,018,020 | https://en.wikipedia.org/wiki/Evaporative%20cooling%20%28atomic%20physics%29 | Evaporative cooling is an atomic physics technique to achieve high phase space densities which optical cooling techniques alone typically can not reach.
Atoms trapped in optical or magnetic traps can be evaporatively cooled via two primary mechanisms, usually specific to the type of trap in question: in magnetic traps, radiofrequency (RF) fields are used to selectively drive warm atoms from the trap by inducing transitions between trapping and non-trapping spin states; or, in optical traps, the depth of the trap itself is gradually decreased, allowing the most energetic atoms in the trap to escape over the edges of the optical barrier. In the case of a Maxwell-Boltzmann distribution for the velocities of the atoms in the trap, these atoms which escape/are driven out of the trap lie in the highest velocity tail of the distribution, meaning that their kinetic energy (and therefore temperature) is much higher than the average for the trap. The net result is that while the total trap population decreases, so does the mean energy of the remaining population. This decrease in the mean kinetic energy of the atom cloud translates into a progressive decrease in the trap temperature, cooling the trap.
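The temperature drop that comes from repeatedly removing the high-energy tail can be illustrated numerically. The following minimal Python sketch is an idealization (it assumes complete rethermalization after each cut and no losses other than the cut itself), and the cutoff of 2.5 k_B*T per step is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)
temperature = 1.0                                   # in units where k_B = 1
energies = rng.gamma(1.5, temperature, 200_000)     # 3D Maxwell-Boltzmann kinetic energies, mean 1.5*T

for step in range(6):
    cutoff = 2.5 * temperature                      # effective trap depth: atoms above this escape
    energies = energies[energies < cutoff]          # the hottest atoms are removed
    temperature = energies.mean() / 1.5             # remaining gas rethermalizes at a lower T
    energies = rng.gamma(1.5, temperature, energies.size)   # redraw a thermal sample at the new T
    print(f"step {step}: N = {energies.size:6d}, T = {temperature:.3f}")

Each cut discards only a modest fraction of the atoms but lowers the temperature of the remainder, which is the essence of evaporative cooling.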
The process is analogous to blowing on a cup of coffee to cool it: those molecules at the highest end of the energy distribution for the coffee form a vapor above the surface and are then removed from the system by blowing them away, decreasing the average energy, and therefore temperature, of the remaining coffee molecules.
Evaporation is a change of state from liquid to gas.
Radiofrequency induced evaporation
Radiofrequency (RF) induced evaporative cooling is the most common method for evaporatively cooling atoms in a magneto-optical trap (MOT). Consider trapped atoms laser cooled on a |F=0⟩ → |F=1⟩ transition. The magnetic sublevels of the |F=1⟩ state (m = −1, 0, +1) are degenerate for zero external field. The confining magnetic quadrupole field, which is zero at the center of the trap and nonzero everywhere else, causes a Zeeman shift in atoms which stray from the trap center, lifting the degeneracy of the three magnetic sublevels. The interaction energy between the total spin angular momentum of the trapped atom and the external magnetic field depends on the projection of the spin angular momentum onto the z-axis, and is proportional to that projection and the local field strength, ΔE(m) = g_F μ_B m B, where g_F is the Landé g-factor (negative for the state considered here) and μ_B is the Bohr magneton. From this relation it can be seen that only the |m=−1⟩ magnetic sublevel will have a positive interaction energy with the field; that is to say, the energy of atoms in this state increases as they migrate from the trap center, making the trap center a point of minimum energy, the definition of a trap. Conversely, the energy of the |m=0⟩ state is unchanged by the field (no trapping), and the |m=+1⟩ state actually decreases in energy as it strays from the trap center, making the center a point of maximum energy. For this reason |m=−1⟩ is referred to as the trapping state, and |m=0⟩ and |m=+1⟩ the non-trapping states.
From the equation for the magnetic field interaction energy, it can also be seen that the energies of the |m=+1⟩ and |m=−1⟩ states shift in opposite directions, changing the total energy difference between these two states. The |m=−1⟩ → |m=+1⟩ transition frequency therefore experiences a Zeeman shift. With this in mind, the RF evaporative cooling scheme works as follows: the size of the Zeeman shift of the −1 → +1 transition depends on the strength of the magnetic field, which increases radially outward from the trap center. Those atoms which are coldest move within a small region around the trap center, where they experience only a small Zeeman shift in the −1 → +1 transition frequency. Warm atoms, however, spend time in regions of the trap much further from the center, where the magnetic field is stronger and the Zeeman shift therefore larger. The shift induced by magnetic fields on the scale used in typical MOTs is on the order of MHz, so that a radiofrequency source can be used to drive the −1 → +1 transition. The frequency chosen for the RF source corresponds to a point on the trapping potential curve at which atoms experience a Zeeman shift equal to the frequency of the RF source; the source then drives those atoms into the anti-trapping |m=+1⟩ magnetic sublevel, and they immediately exit the trap. Lowering the RF frequency is therefore equivalent to lowering the energy at which atoms are ejected, effectively reducing the depth of the potential well. For this reason the RF source used to remove these energetic atoms is often referred to as an "RF knife," as it effectively lowers the height of the trapping potential to remove the most energetic atoms from the trap, "cutting" away the high energy tail of the trap's energy distribution. This method was famously used to cool a cloud of rubidium atoms below the condensation critical temperature to form the first experimentally observed Bose-Einstein condensate (BEC).
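To put rough numbers on the "RF knife" picture, the Python sketch below estimates the radius at which atoms become resonant and the corresponding effective trap depth. The field gradient, RF frequency and g-factor are assumed, illustrative values, and the factor of two reflects the |m=-1> to |m=+1> transition described above.

h = 6.626e-34      # Planck constant, J s
mu_B = 9.274e-24   # Bohr magneton, J/T
k_B = 1.381e-23    # Boltzmann constant, J/K

g_F = 0.5          # magnitude of the Lande g-factor (illustrative)
B_gradient = 1.0   # radial quadrupole field gradient, T/m (= 100 G/cm, illustrative)
nu_rf = 10e6       # RF "knife" frequency, Hz (illustrative)

# Resonance condition for the |m=-1> -> |m=+1> transition: h * nu_rf = 2 * g_F * mu_B * B(r)
r_cut = h * nu_rf / (2 * g_F * mu_B * B_gradient)   # radius at which atoms are ejected
depth = g_F * mu_B * B_gradient * r_cut             # trapped-state potential energy at that radius
print(f"ejection radius ~ {r_cut * 1e3:.2f} mm")
print(f"effective trap depth ~ {depth / k_B * 1e6:.0f} microkelvin")

Lowering nu_rf moves the ejection radius inward and reduces the effective depth, which is the "lowering the knife" step of an evaporation ramp.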
Optical evaporation
While the first observation of Bose-Einstein condensation was made in a magnetic atom trap using RF driven evaporative cooling, optical dipole traps are now much more common platforms for achieving condensation. Beginning in a MOT, cold, trapped atoms are transferred to the focal point of a high power, tightly focused, off-resonant laser beam. The electric field of the laser at its focus is sufficiently strong to induce dipole moments in the atoms, which are then attracted to the electric field maximum at the laser focus, effectively creating a trapping potential to hold them at the beam focus.
The depth of the optical trapping potential in an optical dipole trap (ODT) is proportional to the intensity of the trapping laser light. Decreasing the power in the trapping laser beam therefore decreases the depth of the trapping potential. In the case of RF-driven evaporation, the actual height of the potential barrier confining the atoms is fixed during the evaporation sequence, but the RF knife effectively decreases the depth of this barrier, as previously discussed. For an optical trap, however, evaporation is facilitated by decreasing the laser power and thus lowering the depth of the trapping potential. As a result, the warmest atoms in the trap will have sufficient kinetic energy to be able to make it over the barrier walls and escape the trap, reducing the average energy of the remaining atoms as previously described. While trap depths for ODTs can be shallow (on the order of mK, in terms of temperature), the simplicity of this optical evaporation procedure has helped to make it increasingly popular for BEC experiments since its first demonstrations shortly after magnetic BEC production.
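Because the optical trap depth scales with beam power, ramping the power down directly increases the fraction of atoms energetic enough to spill over the barrier. The following is a minimal Python sketch with assumed, illustrative numbers (a 10 microkelvin cloud, a full depth of 10 k_B*T, and thermal energies sampled from a 3D Maxwell-Boltzmann distribution).

import numpy as np

rng = np.random.default_rng(1)
k_B = 1.381e-23
T = 10e-6                                          # atom temperature, K (illustrative)
full_depth = 10 * k_B * T                          # trap depth at full laser power (assumed)
energies = k_B * T * rng.gamma(1.5, 1.0, 500_000)  # thermal kinetic energies at temperature T

for power_fraction in (1.0, 0.5, 0.25, 0.1):
    depth = power_fraction * full_depth            # optical trap depth scales linearly with beam power
    escape_fraction = np.mean(energies > depth)
    print(f"power {power_fraction:4.2f}: depth = {depth / (k_B * T):4.1f} k_B*T, "
          f"fraction able to escape ~ {escape_fraction:.3f}")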
See also
Magneto-optical trap
Bose-Einstein condensation
Optical tweezers
Laser cooling
Sisyphus cooling
Raman cooling
References
M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman and E. A. Cornell, Observations of Bose-Einstein Condensation in a Dilute Atomic Vapor, Science, 269:198–201, July 14, 1995.
J. J. Tollett, C. C. Bradley, C. A. Sackett, and R. G. Hulet, Permanent magnet trap for cold atoms, Phys. Rev. A 51, R22, 1995.
Bouyer et al., RF-induced evaporative cooling and BEC in a high magnetic field, physics/0003050, 2000.
Thermodynamics
Atomic physics
Cooling technology | Evaporative cooling (atomic physics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,562 | [
"Dynamical systems",
"Quantum mechanics",
"Atomic physics",
"Thermodynamics",
" molecular",
"Atomic",
" and optical physics"
] |
1,018,112 | https://en.wikipedia.org/wiki/Cimetidine | Cimetidine, sold under the brand name Tagamet among others, is a histamine H2 receptor antagonist that inhibits stomach acid production. It is mainly used in the treatment of heartburn and peptic ulcers.
With the development of proton pump inhibitors, such as omeprazole, approved for the same indications, cimetidine is available as an over-the-counter formulation to prevent heartburn or acid indigestion, along with the other H2-receptor antagonists.
Cimetidine was developed in 1971 and came into commercial use in 1977. Cimetidine was approved in the United Kingdom in 1976, and was approved in the United States by the Food and Drug Administration in 1979.
Medical uses
Cimetidine is indicated for the treatment of duodenal ulcers, gastric ulcers, gastroesophageal reflux disease, and pathological hypersecretory conditions. Cimetidine is also used to relieve or prevent heartburn.
Side effects
Reported side effects of cimetidine include diarrhea, rashes, dizziness, fatigue, constipation, and muscle pain, all of which are usually mild and transient. It has been reported that mental confusion may occur in the elderly. Because of its hormonal effects, cimetidine rarely may cause sexual dysfunction including loss of libido and erectile dysfunction and gynecomastia (0.1–0.2%) in males during long-term treatment. Rarely, interstitial nephritis, urticaria, and angioedema have been reported with cimetidine treatment. Cimetidine is also commonly associated with transient raised aminotransferase activity; hepatotoxicity is rare.
Overdose
Cimetidine appears to be very safe in overdose, producing no symptoms even with massive overdoses (e.g., 20 g).
Interactions
Due to its non-selective inhibition of cytochrome P450 enzymes, cimetidine has numerous drug interactions. Examples of specific interactions include, but are not limited to, the following:
Cimetidine affects the metabolism of methadone, sometimes resulting in higher blood levels and a higher incidence of side effects, and may interact with the antimalarial medication hydroxychloroquine.
Cimetidine can also interact with a number of psychoactive medications, including tricyclic antidepressants and selective serotonin reuptake inhibitors, causing increased blood levels of these drugs and the potential of subsequent toxicity.
Following administration of cimetidine, the elimination half-life and area-under-curve of zolmitriptan and its active metabolites were roughly doubled.
Cimetidine is a potent inhibitor of tubular creatinine secretion. Creatinine is a metabolic byproduct of creatine breakdown. Accumulation of creatinine is associated with uremia, but the symptoms of creatinine accumulation are unknown, as they are hard to separate from other nitrogenous waste buildups.
Like several other medications (e.g., erythromycin), cimetidine interferes with the body's metabolization of sildenafil, causing its strength and duration to increase and making its side effects more likely and prominent.
Clinically significant drug interactions with the CYP1A2 substrate theophylline, the CYP2C9 substrate tolbutamide, the CYP2D6 substrate desipramine, and the CYP3A4 substrate triazolam have all been demonstrated with cimetidine, and interactions with other substrates of these enzymes are likely as well.
Cimetidine has been shown clinically to reduce the clearance of mirtazapine, imipramine, timolol, nebivolol, sparteine, loratadine, nortriptyline, gabapentin, and desipramine in humans.
Cimetidine inhibits the renal excretion of metformin and procainamide, resulting in increased circulating levels of these drugs.
Interactions of potential clinical importance with cimetidine include warfarin, theophylline, phenytoin, carbamazepine, pethidine and other opioid analgesics, tricyclic antidepressants, lidocaine, terfenadine, amiodarone, flecainide, quinidine, fluorouracil, and benzodiazepines.
Cimetidine may decrease the effects of CYP2D6 substrates that are prodrugs, such as codeine, tramadol, and tamoxifen.
Cimetidine reduces the absorption of ketoconazole and itraconazole (which require a low pH).
Cimetidine has a theoretical but unproven benefit in paracetamol toxicity. This is because N-acetyl-p-benzoquinone imine (NAPQI), a metabolite of paracetamol (acetaminophen) that is responsible for its hepatotoxicity, is formed from it by the cytochrome P450 system (specifically, CYP1A2, CYP2E1, and CYP3A4).
Cimetidine is used in cancer metastasis research as a blocker of E-selectin.
Pharmacology
Pharmacodynamics
Histamine H2 receptor antagonism
The mechanism of action of cimetidine as an antacid is as a histamine H2 receptor antagonist. It has been found to bind to the H2 receptor with a Kd of 42 nM.
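Given the reported dissociation constant, a simple single-site binding estimate relates free drug concentration to receptor occupancy. The Python sketch below uses illustrative concentrations and ignores complications such as plasma protein binding.

kd_nM = 42.0   # reported dissociation constant of cimetidine at the H2 receptor

for conc_nM in (10, 42, 100, 1000):
    occupancy = conc_nM / (conc_nM + kd_nM)   # law-of-mass-action, single binding site
    print(f"{conc_nM:5d} nM cimetidine -> ~{occupancy:.0%} of H2 receptors occupied")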
Cytochrome P450 inhibition
Cimetidine is a potent inhibitor of certain cytochrome P450 (CYP) enzymes, including CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4. The drug appears to primarily inhibit CYP1A2, CYP2D6, and CYP3A4, of which it is described as a moderate inhibitor. This is notable since these three CYP isoenzymes are involved in CYP-mediated drug biotransformations; however, CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4 are also involved in the oxidative metabolism of many commonly used drugs. As a result, cimetidine has the potential for a large number of pharmacokinetic interactions.
Cimetidine is reported to be a competitive and reversible inhibitor of several CYP enzymes, although mechanism-based (suicide) irreversible inhibition has also been identified for cimetidine's inhibition of CYP2D6. It reversibly inhibits CYP enzymes by binding directly with the complexed heme-iron of the active site via one of its imidazole ring nitrogen atoms, thereby blocking the oxidation of other drugs.
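Competitive, reversible inhibition of the kind described here is conventionally modelled as an increase in the substrate's apparent Km. The Python sketch below uses entirely illustrative kinetic constants, not measured values for cimetidine or any particular CYP substrate.

def rate(S, I, Vmax=1.0, Km=5.0, Ki=2.0):
    # Michaelis-Menten rate in the presence of a competitive inhibitor (all constants illustrative).
    return Vmax * S / (Km * (1 + I / Ki) + S)

S = 5.0   # substrate concentration, arbitrary units
for inhibitor in (0.0, 2.0, 10.0):
    v = rate(S, inhibitor)
    print(f"[I] = {inhibitor:4.1f}: rate = {v:.2f} "
          f"(fraction of uninhibited rate: {v / rate(S, 0.0):.2f})")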
Antiandrogenic and estrogenic effects
Cimetidine has been found to possess weak antiandrogenic activity at high doses. It directly and competitively antagonizes the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). However, the affinity of cimetidine for the AR is very weak; in one study, it showed only 0.00084% of the affinity of the anabolic steroid metribolone (100%) for the human AR (Ki = 140 μM and 1.18 nM, respectively). In any case, at sufficiently high doses, cimetidine has demonstrated weak but significant antiandrogenic effects in animals, including antiandrogenic effects in the rat ventral prostate and mouse kidney, reductions in the weights of the male accessory glands like the prostate gland and seminal vesicles in rats, and elevated gonadotropin levels in male rats (due to reduced negative feedback on the axis by androgens). In addition to AR antagonism, cimetidine has been found to inhibit the 2-hydroxylation of estradiol (via inhibition of CYP450 enzymes, which are involved in the metabolic inactivation of estradiol), resulting in increased estrogen levels. The medication has also been reported to reduce testosterone biosynthesis and increase prolactin levels in individual case reports, effects which might be secondary to increased estrogen levels.
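The quoted relative affinity follows directly from the two Ki values, since binding affinity is inversely proportional to Ki. A quick arithmetic check in Python:

ki_metribolone = 1.18e-9   # mol/L, reference ligand
ki_cimetidine = 140e-6     # mol/L

relative_affinity = ki_metribolone / ki_cimetidine   # affinity ratio = Ki(reference) / Ki(cimetidine)
print(f"relative AR affinity of cimetidine: {relative_affinity:.2e} "
      f"({relative_affinity * 100:.5f}% of metribolone)")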
At typical therapeutic levels, cimetidine has either no effect on or causes small increases in circulating testosterone concentrations in men. Any increases in testosterone levels with cimetidine have been attributed to the loss of negative feedback on the HPG axis that results due to AR antagonism. At typical clinical dosages, such as those used to treat peptic ulcer disease, the incidence of gynecomastia (breast development) with cimetidine is very low at less than 1%. In one survey of over 9,000 patients taking cimetidine, gynecomastia was the most frequent endocrine-related complaint but was reported in only 0.2% of patients. At high doses however, such as those used to treat Zollinger–Ellison syndrome, there may be a higher incidence of gynecomastia with cimetidine. In one small study, a 20% incidence of gynecomastia was observed in 25 male patients with duodenal ulcers who were treated with 1,600 mg/day cimetidine. The symptoms appeared after 4 months of treatment and regressed within a month following discontinuation of cimetidine. In another small study, cimetidine was reported to have induced breast changes and erectile dysfunction in 60% of 22 men treated with it. These adverse effects completely resolved in all cases when the men were switched from cimetidine to ranitidine. A study of the United Kingdom General Practice Research Database, which contains over 80,000 men, found that the relative risk of gynecomastia in cimetidine users was 7.2 relative to non-users. People taking a dosage of cimetidine of greater than or equal to 1,000 mg showed more than 40 times the risk of gynecomastia than non-users. The risk was highest during the period of time of 7 to 12 months after starting cimetidine. The gynecomastia associated with cimetidine is thought to be due to blockade of ARs in the breasts, which results in estrogen action unopposed by androgens in this tissue, although increased levels of estrogens due to inhibition of estrogen metabolism is another possible mechanism. Cimetidine has also been associated with oligospermia (decreased sperm count) and sexual dysfunction (e.g., decreased libido, erectile dysfunction) in men in some research, which are hormonally related similarly.
In accordance with the very weak nature of its AR antagonistic activity, cimetidine has shown minimal effectiveness in the treatment of androgen-dependent conditions such as acne, hirsutism (excessive hair growth), and hyperandrogenism (high androgen levels) in women. As such, its use for such indications is not recommended.
Pharmacokinetics
Cimetidine is rapidly absorbed regardless of route of administration. The oral bioavailability of cimetidine is 60 to 70%. The onset of action of cimetidine when taken orally is 30 minutes, and peak levels occur within 1 to 3 hours. Cimetidine is widely distributed throughout all tissues. It is able to cross the blood–brain barrier and can produce effects in the central nervous system (e.g., headaches, dizziness, somnolence). The volume of distribution of cimetidine is 0.8 L/kg in adults and 1.2 to 2.1 L/kg in children. Its plasma protein binding is 13 to 25% and is said to be without pharmacological significance. Cimetidine undergoes relatively little metabolism, with 56 to 85% excreted unchanged. It is metabolized in the liver into cimetidine sulfoxide, hydroxycimetidine, and guanyl urea cimetidine. The major metabolite of cimetidine is the sulfoxide, which accounts for about 30% of excreted material. Cimetidine is rapidly eliminated, with an elimination half-life of 123 minutes, or about 2 hours. It has been said to have a duration of action of 4 to 8 hours. The medication is mainly eliminated in urine.
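The figures above (oral bioavailability, volume of distribution and a roughly two-hour half-life) are enough for a back-of-the-envelope plasma concentration estimate. The Python sketch below uses a one-compartment model with instantaneous absorption, which is a simplification since absorption actually takes 1 to 3 hours; the 400 mg dose and 70 kg body weight are hypothetical.

import math

dose_mg = 400.0          # hypothetical single oral dose
bioavailability = 0.65   # roughly the middle of the reported 60-70% range
v_d = 0.8 * 70           # volume of distribution, L (0.8 L/kg for an assumed 70 kg adult)
half_life_h = 123 / 60   # elimination half-life in hours

k_el = math.log(2) / half_life_h
c0 = bioavailability * dose_mg / v_d   # initial plasma concentration, mg/L

for t in (0, 2, 4, 8):
    c = c0 * math.exp(-k_el * t)       # first-order elimination
    print(f"t = {t} h: ~{c:.2f} mg/L")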
History
Cimetidine, approved by the FDA for inhibition of gastric acid secretion, has been advocated for a number of dermatological diseases. Cimetidine was the prototypical histamine H2 receptor antagonist from which the later members of the class were developed. Cimetidine was the culmination of a project at Smith, Kline and French (SK&F) Laboratories in Welwyn Garden City (now part of GlaxoSmithKline) by James W. Black, C. Robin Ganellin, and others to develop a histamine receptor antagonist to suppress stomach acid secretion. This was one of the first drugs discovered using a rational drug design approach. Sir James W. Black shared the 1988 Nobel Prize in Physiology or Medicine for the discovery of propranolol and also is credited for the discovery of cimetidine.
At the time (1964), histamine was known to stimulate the secretion of stomach acid, but also that traditional antihistamines had no effect on acid production. In the process, the SK&F scientists also proved the existence of histamine H2 receptors.
The SK&F team used a rational drug-design strategy starting from the structure of histamine — the only design lead, since nothing was known of the then hypothetical H2 receptor. Hundreds of modified compounds were synthesized in an effort to develop a model of the receptor. The first breakthrough was Nα-guanylhistamine, a partial H2 receptor antagonist. From this lead, the receptor model was further refined and eventually led to the development of burimamide, the first H2 receptor antagonist. Burimamide, a specific competitive antagonist at the H2 receptor, 100 times more potent than Nα-guanylhistamine, proved the existence of the H2 receptor.
Burimamide was still insufficiently potent for oral administration, and further modification of the structure, based on modifying the pKa of the compound, led to the development of metiamide. Metiamide was an effective agent; it was associated, however, with unacceptable nephrotoxicity and agranulocytosis. The toxicity was proposed to arise from the thiourea group, and similar guanidine analogues were investigated until the ultimate discovery of cimetidine. The compound was synthesized in 1972 and evaluated for toxicology by 1973. It passed all trials.
Cimetidine was first marketed in the United Kingdom in 1976, and in the U.S. in August 1977; therefore, it took 12 years from initiation of the H2 receptor antagonist program to commercialization. By 1979, Tagamet was being sold in more than 100 countries and became the top-selling prescription product in the U.S., Canada, and several other countries. In November 1997, the American Chemical Society and the Royal Society of Chemistry in the U.K. jointly recognized the work as a milestone in drug discovery by designating it an International Historic Chemical Landmark during a ceremony at SmithKline Beecham's New Frontiers Science Park research facilities in Harlow, England.
The commercial name "Tagamet" was decided upon by fusing the two words "antagonist" and "cimetidine". Subsequent to the introduction onto the U.S. drug market, two other H2 receptor antagonists were approved, ranitidine (Zantac, Glaxo Labs) and famotidine (Pepcid, Yamanouchi, Ltd.) Cimetidine became the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug.
Tagamet has been largely replaced by proton pump inhibitors for treating peptic ulcers, but is available as an over-the-counter medicine for heartburn in many countries.
Research
Some evidence suggests cimetidine could be effective in the treatment of common warts, but more rigorous double-blind clinical trials found it to be no more effective than a placebo.
Tentative evidence supports a beneficial role as add-on therapy in colorectal cancer.
Cimetidine inhibits ALA synthase activity and hence may have some therapeutic value in preventing and treating acute porphyria attacks.
There is some evidence supporting the use of Cimetidine in the treatment of PFAPA.
Veterinary use
In dogs, cimetidine is used as an antiemetic when treating chronic gastritis.
References
External links
CYP1A2 inhibitors
CYP2D6 inhibitors
CYP3A4 inhibitors
Cytochrome P450 inhibitors
Equine medications
Guanidines
H2 receptor antagonists
Hepatotoxins
Imidazoles
Nitriles
Nonsteroidal antiandrogens
Thioethers
Nephrotoxins
Drugs developed by GSK plc | Cimetidine | [
"Chemistry"
] | 3,696 | [
"Nitriles",
"Guanidines",
"Functional groups"
] |
1,018,113 | https://en.wikipedia.org/wiki/Jerzy%20Konorski | Jerzy Konorski (1 December 1903 in Łódź, Congress Poland – 14 November 1973 in Warsaw, Poland) was a Polish neurophysiologist who further developed the work of Ivan Pavlov by discovering secondary conditioned reflexes and operant conditioning. He also proposed the idea of gnostic neurons, a concept similar to the grandmother cell. He coined the term neural plasticity, and he developed theoretical ideas regarding it that are similar to those proposed soon after by Donald Hebb.
Secondary conditioned reflexes
When he and Stefan Miller were medical students in Warsaw, they proposed another type of conditioned reflex in addition to that discovered by Pavlov, one which was under the control of reward. This has come to be known as "type II conditioned reflexes," or secondary conditioned reflexes. Type II conditioned reflexes are now known as operant or instrumental conditioning.
He spent two years at Pavlov's laboratory as the result of a letter that he sent to Pavlov describing this work. Pavlov however was never convinced that instrumental conditioning (which Konorski called "Type II" to distinguish it from Pavlov's "Type I" learning) differed in any important way from his own Type I conditioning.
An exchange between B. F. Skinner and Konorski also occurred over the two types of learning. Skinner had originally referred to operant conditioning as Type I and Pavlovian conditioning as Type II. Konorski agreed to revise his nomenclature to avoid confusion.
Neural plasticity
Konorski married the neurophysiologist Liliana Lubinska, who obtained her doctorate with Louis Lapicque. Konorski, Lubinska, and Miller established a laboratory at the Nencki Institute of Experimental Biology. With Konorski's knowledge of neurophysiology greatly expanded through his collaboration with Lubinska, he turned his attention to the neural mechanisms that underlie conditioning.
Konorski asked how pre-existing connections between neurons in the brain could be changed by conditioning. He suggested an idea similar to Hebb's, in which coincidental activation in time causes potential connections to be transformed into actual excitatory connections. Inhibitory connections arise when the excitation of one input coincides in time with a decrease in its associated connection. He described the process: "The plastic changes would be related to the formation and multiplication of new synaptic junctions between the axon terminals of one nerve cell and the soma (i.e. the body and the dendrites) of the other". This idea that synapses strengthen with use was also proposed in the West in the theory of Hebbian synapses by Donald Hebb.
Grandmother cells
Konorski first proposed two key concepts in neuroscience independently of the Western scientists who also suggested them. One of these is what Western researchers came to call the grandmother cell, which Konorski termed the "gnostic unit." This idea was developed in great detail in his 1967 book.
Publications
He was the author of two important books on learning, Conditioned Reflexes and Neuron Organization (1948), and Integrative Activity of the Brain (1967). The first book, presented one of the first theories of associative learning as a result of long-term neuronal plasticity. In the second, he substantially revised his early theories and synthesised work on associative learning and neurobiology of perception and motivation.
World War II and Stalin
The Department of Neurophysiology at the Nencki Institute of Experimental Biology in Warsaw, Poland was created for him, but it was destroyed in the first days of the invasion of Poland in 1939. He failed to get to England to join his brother who lived there. Konorski managed to escape to the Soviet Union, where he was appointed head of the primate laboratory at Sukhumi on the Black Sea in Georgia. Due to the German invasion of the Soviet Union, the laboratory was relocated to Tbilisi. He spent much of World War II at Sukhumi treating traumatic injuries of the central nervous system. After the war he returned to the Nencki Institute as head of the Department of Neurophysiology. In 1948 Cambridge University Press published his "Conditioned reflexes and neuron organization". Then in 1949, during the peak of Stalinism, at a conference in Leningrad commemorating the 100th anniversary of Pavlov's birth, his book was condemned and rejected. In 1951, at a conference organized in Krynica, support for him was expressed in a 40-minute period of continuous clapping and applause. With Stalin's death his persecution ended.
Legacy
Later Konorski became a foreign member of the National Academy of Sciences. Since his death his influence has grown considerably, and he is now recognized as the first to systematically investigate the mechanisms underlying instrumental conditioning. Many consider him among the most important theoretical neurobiologists.
See also
List of Polish medical scientists
Timeline of Polish science and technology
References
External links
Bibliography of Jerzy Konorski Selected publications of Jerzy Konorski and history of the Department of Neurophysiology at Nencki Institute. (170MB in pdf files).
1903 births
1973 deaths
Polish neuroscientists
Behaviourist psychologists
Foreign associates of the National Academy of Sciences
Neurophysiologists
20th-century Polish scientists
Recipients of the State Award Badge (Poland) | Jerzy Konorski | [
"Biology"
] | 1,082 | [
"Behaviourist psychologists",
"Behavior",
"Behaviorism"
] |
1,018,115 | https://en.wikipedia.org/wiki/Project%20PACER | Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs)—or, as stated in a later proposal, fission bombs—inside an underground cavity. Its proponents claimed that the system is the only fusion power system that could be demonstrated to work using existing technology. It would also require a continuous supply of nuclear explosives and contemporary economics studies demonstrated that these could not be produced at a competitive price compared to conventional energy sources.
Development
The earliest references to the use of nuclear explosions for power generation date to a meeting called by Edward Teller in 1957. Among the many topics covered, the group considered power generation by exploding 1-megaton bombs in a diameter steam-filled cavity dug in granite. This led to the realization that the fissile material from the fission sections of the bombs, the "primaries", would accumulate in the chamber. Even at this early stage, physicist John Nuckolls became interested in designs of very small bombs, and ones with no fission primary at all. This work would later lead to his development of the inertial fusion energy concept.
The initial PACER proposals were studied under the larger Project Plowshares efforts in the United States, which examined the use of nuclear explosions in place of chemical ones for construction. Examples included the possibility of using large nuclear devices to create an artificial harbour for mooring ships in the north, or as a sort of nuclear fracking to improve natural gas yields. Another proposal would create an alternative to the Panama Canal in a single sequence of detonations, crossing a Central American nation. One of these tests, 1961's Project Gnome, also considered the generation of steam for possible extraction as a power source. LANL proposed PACER as an adjunct to these studies.
Early examples considered diameter water-filled caverns created in salt domes at as much as deep. A series of 50-kiloton bombs would be dropped into the cavern and exploded to heat the water and create steam. The steam would then power a secondary cooling loop for power extraction using a steam turbine. Dropping about two bombs a day would cause the system to reach thermal equilibrium, allowing the continual extraction of about 2 GW of electrical power. There was also some consideration given to adding thorium or other material to the bombs to breed fuel for conventional fission reactors.
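As a rough plausibility check of those figures, the arithmetic below (not from the source; the TNT energy equivalent and the 40% steam-cycle efficiency are illustrative assumptions) converts two 50-kiloton detonations per day into an average electrical output, which comes out close to the quoted 2 GW.

```python
# Sanity-check sketch: average power from two 50 kt detonations per day.
# Assumptions (not from the article): 1 kt TNT = 4.184e12 J, ~40% thermal-to-electric efficiency.
KT_IN_JOULES = 4.184e12
bombs_per_day = 2
yield_kt = 50
efficiency = 0.40

thermal_power_w = bombs_per_day * yield_kt * KT_IN_JOULES / 86_400  # energy per day / seconds per day
electric_power_gw = thermal_power_w * efficiency / 1e9
print(round(thermal_power_w / 1e9, 1))   # ~4.8 GW thermal, averaged over the day
print(round(electric_power_gw, 1))       # ~1.9 GW electric, close to the quoted 2 GW
```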
In a 1975 review of the various Plowshares efforts, the Gulf University Research Consortium (GURC) considered the economics of the PACER concept. They demonstrated that an assumed cost of $42,000 for each 50 kt nuclear explosive would be the equivalent of fuelling a conventional light-water reactor with uranium fuel at a price of $27 per pound of yellowcake. If the explosives instead cost $400,000 each, this would be equivalent to fuelling a pressurized water reactor at a price of $328 per ton of uranium. The price for 1 ton of yellowcake was around $45 in 2012. The report also noted the problems with any program that generated large numbers of nuclear bombs, saying it was "bound to be controversial" and that it would "arouse considerable negative responses". GURC concluded that the likelihood of PACER being developed was very low, even if the formidable technical issues could be solved. In 1975 further funding for PACER research was cancelled.
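To put the report's two explosive prices in more familiar terms, the sketch below (an illustration, not a calculation from the GURC report) converts each price into a fuel cost per kilowatt-hour, taking the earlier figures of two charges per day and roughly 2 GW of electrical output at face value.

```python
# Illustrative fuel-cost estimate; the 2 charges/day and 2 GW figures come from the article,
# the conversion itself is only a back-of-the-envelope calculation.
def fuel_cost_cents_per_kwh(price_per_charge_usd, charges_per_day=2, electric_gw=2.0):
    daily_fuel_cost_usd = price_per_charge_usd * charges_per_day
    daily_kwh = electric_gw * 1e6 * 24          # GW -> kW, times 24 hours
    return 100 * daily_fuel_cost_usd / daily_kwh

print(fuel_cost_cents_per_kwh(42_000))    # ~0.175 cents/kWh at the optimistic price
print(fuel_cost_cents_per_kwh(400_000))   # ~1.67 cents/kWh at the higher price
```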
Despite the cancellation of this early work, basic studies of the concept have continued. A more developed version considered the use of engineered vessels in place of the large open cavities. A typical design called for a thick steel alloy blast-chamber, in diameter and tall, to be embedded in a cavity dug into bedrock in Nevada. Hundreds of long bolts were to be driven into the surrounding rock to support the cavity. The space between the blast-chamber and the rock cavity walls was to be filled with concrete; then the bolts were to be put under enormous tension to pre-stress the rock, concrete, and blast-chamber. The blast-chamber was then to be partially filled with molten fluoride salts to a depth of , a "waterfall" would be initiated by pumping the salt to the top of the chamber and letting it fall to the bottom. While surrounded by this falling coolant, a 1-kiloton fission bomb would be detonated; this would be repeated every 45 minutes. The fluid would also absorb neutrons to avoid damage to the walls of the cavity.
See also
Nuclear pulse propulsion
Project Gnome
Nuclear fusion-fission hybrid
References
Citations
Bibliography
External links
Determination of main reactor parameters for flibe (Li2BeF4) cooled peaceful nuclear explosive reactors (PACER) (pdf)
Ralph Moir's PACER page - contains a short summary and research papers about the topic
Nuclear weapons testing
Inertial confinement fusion
Fusion reactors | Project PACER | [
"Chemistry",
"Technology"
] | 977 | [
"Nuclear fusion",
"Environmental impact of nuclear power",
"Nuclear weapons testing",
"Fusion reactors"
] |
1,018,197 | https://en.wikipedia.org/wiki/Monadic%20Boolean%20algebra | In abstract algebra, a monadic Boolean algebra is an algebraic structure A with signature
〈A, ·, +, ', 0, 1, ∃〉 of type 〈2,2,1,0,0,1〉,
where 〈A, ·, +, ', 0, 1〉 is a Boolean algebra.
The monadic/unary operator ∃ denotes the existential quantifier, which satisfies the identities (using the received prefix notation for ∃):
(1) ∃0 = 0
(2) ∃x ≥ x
(3) ∃(x + y) = ∃x + ∃y
(4) ∃x∃y = ∃(x∃y).
∃x is the existential closure of x. Dual to ∃ is the unary operator ∀, the universal quantifier, defined as ∀x = (∃x')'.
A monadic Boolean algebra has a dual definition and notation that take ∀ as primitive and ∃ as defined, so that ∃x = (∀x')'. (Compare this with the definition of the dual Boolean algebra.) Hence, with this notation, an algebra A has signature 〈A, ·, +, ', 0, 1, ∀〉, with 〈A, ·, +, ', 0, 1〉 a Boolean algebra, as before. Moreover, ∀ satisfies the following dualized version of the above identities:
(1) ∀1 = 1
(2) ∀x ≤ x
(3) ∀(xy) = ∀x∀y
(4) ∀x + ∀y = ∀(x + ∀y).
∀x is the universal closure of x.
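As a concrete illustration of the definition, the sketch below (not from the source) brute-force checks the four ∃-identities on the simplest natural model: the Boolean algebra of all subsets of a three-element set, with ∃ sending every non-empty subset to the whole set and the empty set to itself.

```python
# Minimal model check for a monadic Boolean algebra (illustrative, not from the source).
# Carrier: the powerset of X = {0, 1, 2}; join = union, meet = intersection, 0 = {}, 1 = X.
# E is the existential quantifier: E(s) = X for non-empty s, and E({}) = {}.
from itertools import combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def E(s):
    return X if s else frozenset()

assert E(frozenset()) == frozenset()                                          # (1) E0 = 0
assert all(s <= E(s) for s in subsets)                                        # (2) x <= Ex
assert all(E(s | t) == E(s) | E(t) for s in subsets for t in subsets)         # (3) E(x + y) = Ex + Ey
assert all((E(s) & E(t)) == E(s & E(t)) for s in subsets for t in subsets)    # (4) Ex·Ey = E(x·Ey)
print("all four identities hold on the powerset model")
```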
Discussion
Monadic Boolean algebras have an important connection to topology. If ∀ is interpreted as the interior operator of topology, (1)–(3) above plus the axiom ∀(∀x) = ∀x make up the axioms for an interior algebra. But ∀(∀x) = ∀x can be proved from (1)–(4). Moreover, an alternative axiomatization of monadic Boolean algebras consists of the (reinterpreted) axioms for an interior algebra, plus ∀(∀x)' = (∀x)' (Halmos 1962: 22). Hence monadic Boolean algebras are the semisimple interior/closure algebras such that:
The universal (dually, existential) quantifier interprets the interior (closure) operator;
All open (or closed) elements are also clopen.
A more concise axiomatization of monadic Boolean algebra is (1) and (2) above, plus ∀(x∨∀y) = ∀x∨∀y (Halmos 1962: 21). This axiomatization obscures the connection to topology.
Monadic Boolean algebras form a variety. They are to monadic predicate logic what Boolean algebras are to propositional logic, and what polyadic algebras are to first-order logic. Paul Halmos discovered monadic Boolean algebras while working on polyadic algebras; Halmos (1962) reprints the relevant papers. Halmos and Givant (1998) includes an undergraduate treatment of monadic Boolean algebra.
Monadic Boolean algebras also have an important connection to modal logic. The modal logic S5, viewed as a theory in S4, is a model of monadic Boolean algebras in the same way that S4 is a model of interior algebra. Likewise, monadic Boolean algebras supply the algebraic semantics for S5. Hence S5-algebra is a synonym for monadic Boolean algebra.
See also
Clopen set
Cylindric algebra
Interior algebra
Kuratowski closure axioms
Łukasiewicz–Moisil algebra
Modal logic
Monadic logic
References
Paul Halmos, 1962. Algebraic Logic. New York: Chelsea.
------ and Steven Givant, 1998. Logic as Algebra. Mathematical Association of America.
Algebraic logic
Boolean algebra
Closure operators | Monadic Boolean algebra | [
"Mathematics"
] | 693 | [
"Boolean algebra",
"Closure operators",
"Mathematical logic",
"Fields of abstract algebra",
"Algebraic logic",
"Order theory"
] |
1,018,257 | https://en.wikipedia.org/wiki/3-manifold | In mathematics, a 3-manifold is a topological space that locally looks like a three-dimensional Euclidean space. A 3-manifold can be thought of as a possible shape of the universe. Just as a sphere looks like a plane (a tangent plane) to a small and close enough observer, all 3-manifolds look like our universe does to a small enough observer. This is made more precise in the definition below.
Principles
Definition
A topological space X is a 3-manifold if it is a second-countable Hausdorff space and if every point in X has a neighbourhood that is homeomorphic to Euclidean 3-space.
Mathematical theory of 3-manifolds
The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with say, topological 3-manifolds, or smooth 3-manifolds.
Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology.
A key idea in the theory is to study a 3-manifold by considering special surfaces embedded in it. One can choose the surface to be nicely placed in the 3-manifold, which leads to the idea of an incompressible surface and the theory of Haken manifolds, or one can choose the complementary pieces to be as nice as possible, leading to structures such as Heegaard splittings, which are useful even in the non-Haken case.
Thurston's contributions to the theory allow one to also consider, in many cases, the additional structure given by a particular Thurston model geometry (of which there are eight). The most prevalent geometry is hyperbolic geometry. Using a geometry in addition to special surfaces is often fruitful.
The fundamental groups of 3-manifolds strongly reflect the geometric and topological information belonging to a 3-manifold. Thus, there is an interplay between group theory and topological methods.
Invariants describing 3-manifolds
3-manifolds are an interesting special case of low-dimensional topology because their topological invariants give a lot of information about their structure in general. If we let M be a 3-manifold and π = π1(M) be its fundamental group, then a lot of information can be derived from them. For example, using Poincaré duality and the Hurewicz theorem, we have the following homology groups for a closed orientable M:
H0(M; Z) ≅ H3(M; Z) ≅ Z, H1(M; Z) ≅ π/[π, π], and H2(M; Z) ≅ H^1(M; Z),
where the last two groups are isomorphic to the group homology and group cohomology of π, respectively; that is, H1(M; Z) ≅ H1(π; Z) and H2(M; Z) ≅ H^1(π; Z). From this information a basic homotopy theoretic classification of 3-manifolds can be found. Note from the Postnikov tower there is a canonical map M → K(π, 1). If we take the pushforward of the fundamental class [M] ∈ H3(M; Z) into H3(K(π, 1); Z) we get an element ζM ∈ H3(π; Z). It turns out the group π together with the group homology class ζM gives a complete algebraic description of the homotopy type of M.
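As a worked illustration of these invariants (a sketch under standard assumptions, not taken from the source), the 3-torus T3 discussed below has a CW structure coming from the glued cube, with one k-cell for each of the C(3, k) coordinate subsets and with all cellular boundary maps zero, so its Betti numbers can be read off by rank-nullity; in particular H1 is Z3, matching the abelianization of π1(T3) = Z3.

```python
# Betti numbers of the 3-torus from its standard CW structure (cube with opposite faces glued):
# comb(3, k) cells in dimension k, and every cellular boundary map equal to zero.
from math import comb

cells = [comb(3, k) for k in range(4)]     # [1, 3, 3, 1] cells in dimensions 0..3
rank_d = [0, 0, 0, 0, 0]                   # rank_d[k] = rank of the boundary map d_k (all zero here)

betti = []
for k in range(4):
    # rank-nullity for a chain complex of free abelian groups:
    # b_k = (number of k-cells) - rank(d_k) - rank(d_{k+1})
    betti.append(cells[k] - rank_d[k] - rank_d[k + 1])
print(betti)   # [1, 3, 3, 1]: H0 = Z, H1 = Z3 (the abelianization of pi1), H2 = Z3, H3 = Z
```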
Connected sums
One important topological operation is the connected sum of two 3-manifolds, written M1 # M2. In fact, from general theorems in topology, we find that for a three-manifold with a connected sum decomposition M = M1 # M2 the invariants above for M can be computed from those of the Mi. In particular π1(M) ≅ π1(M1) * π1(M2) and H1(M; Z) ≅ H1(M1; Z) ⊕ H1(M2; Z). Moreover, a 3-manifold which cannot be described as a connected sum of two 3-manifolds, neither of which is the 3-sphere, is called prime.
Second homotopy groups
For the case of a 3-manifold M = M1 # ... # Mn given by a connected sum of prime 3-manifolds, it turns out there is a nice description of the second homotopy group π2(M) as a Z[π1(M)]-module. For the special case of each π1(Mi) being infinite but not cyclic, if we take based embeddings of the 2-spheres along which the connected sum is performed, then π2(M) has an explicit presentation as a Z[π1(M)]-module generated by the classes of these spheres, giving a straightforward computation of this group.
Important examples of 3-manifolds
Euclidean 3-space
Euclidean 3-space is the most important example of a 3-manifold, as all others are defined in relation to it. This is just the standard 3-dimensional vector space over the real numbers.
3-sphere
A 3-sphere is a higher-dimensional analogue of a sphere. It consists of the set of points equidistant from a fixed central point in 4-dimensional Euclidean space. Just as an ordinary sphere (or 2-sphere) is a two-dimensional surface that forms the boundary of a ball in three dimensions, a 3-sphere is an object with three dimensions that forms the boundary of a ball in four dimensions. Many examples of 3-manifolds can be constructed by taking quotients of the 3-sphere by a finite group Γ acting freely on S3 via a map Γ → SO(4), so M = S3/Γ.
Real projective 3-space
Real projective 3-space, or RP3, is the topological space of lines passing through the origin 0 in R4. It is a compact, smooth manifold of dimension 3, and is a special case Gr(1, R4) of a Grassmannian space.
RP3 is (diffeomorphic to) SO(3), hence admits a group structure; the covering map S3 → RP3 is a map of groups Spin(3) → SO(3), where Spin(3) is a Lie group that is the universal cover of SO(3).
3-torus
The 3-dimensional torus is the product of 3 circles. That is: T3 = S1 × S1 × S1.
The 3-torus, T3 can be described as a quotient of R3 under integral shifts in any coordinate. That is, the 3-torus is R3 modulo the action of the integer lattice Z3 (with the action being taken as vector addition). Equivalently, the 3-torus is obtained from the 3-dimensional cube by gluing the opposite faces together.
A 3-torus in this sense is an example of a 3-dimensional compact manifold. It is also an example of a compact abelian Lie group. This follows from the fact that the unit circle is a compact abelian Lie group (when identified with the unit complex numbers with multiplication). Group multiplication on the torus is then defined by coordinate-wise multiplication.
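A small sketch (not from the source) of this description: points of T3 can be represented by coordinates reduced mod 1, lattice translates of a point in R3 land on the same representative, and the abelian group operation is coordinate-wise addition mod 1. Exact rational arithmetic is used only to keep the equality checks exact.

```python
# T3 as R3 modulo the integer lattice Z3, with its compact abelian group structure.
from fractions import Fraction as F

def to_torus(p):
    """Send a point of R3 (here with rational coordinates) to its representative in [0, 1)^3."""
    return tuple(x % 1 for x in p)

def add(p, q):
    """Group operation on T3: coordinate-wise addition mod 1."""
    return tuple((x + y) % 1 for x, y in zip(p, q))

p = (F(1, 4), F(17, 10), F(-2, 5))
shifted = tuple(x + n for x, n in zip(p, (3, -5, 2)))   # translate by (3, -5, 2) in Z3
assert to_torus(p) == to_torus(shifted)                 # lattice translates give the same point of T3
identity = (F(0), F(0), F(0))
assert add(to_torus(p), identity) == to_torus(p)        # (0, 0, 0) is the group identity
print(to_torus(p))                                      # (Fraction(1, 4), Fraction(7, 10), Fraction(3, 5))
```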
Hyperbolic 3-space
Hyperbolic space is a homogeneous space that can be characterized by a constant negative curvature. It is the model of hyperbolic geometry. It is distinguished from Euclidean spaces with zero curvature that define the Euclidean geometry, and models of elliptic geometry (like the 3-sphere) that have a constant positive curvature. When embedded to a Euclidean space (of a higher dimension), every point of a hyperbolic space is a saddle point. Another distinctive property is the amount of space covered by the 3-ball in hyperbolic 3-space: it increases exponentially with respect to the radius of the ball, rather than polynomially.
Poincaré dodecahedral space
The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. This shows the Poincaré conjecture cannot be stated in homology terms alone.
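A quick check (not from the source) of the claim that the Poincaré sphere is a homology sphere: by the Hurewicz theorem H1 is the abelianization of the fundamental group, and abelianizing the standard presentation of the binary icosahedral group, 〈s, t | (st)² = s³ = t⁵〉 (assumed here; other presentations give the same result), yields a 2×2 relation matrix of determinant ±1, so H1 vanishes.

```python
# H1 of the Poincare homology sphere via abelianization of the binary icosahedral group.
# Assumption: the standard presentation <s, t | (st)^2 = s^3 = t^5>. Abelianizing each relation
# gives an integer row vector in the generators (s, t); H1 is the cokernel of that matrix.
rows = [
    (2 - 3, 2 - 0),   # (st)^2 * s^-3  ->  -1*s + 2*t = 0
    (3 - 0, 0 - 5),   # s^3 * t^-5     ->   3*s - 5*t = 0
]
det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
print(abs(det))   # 1: the abelianization is trivial, so H1 = 0; with Poincare duality this
                  # makes the manifold a homology 3-sphere
```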
In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft.
However, there is no strong support for the correctness of the model, as yet.
Seifert–Weber space
In mathematics, Seifert–Weber space (introduced by Herbert Seifert and Constantin Weber) is a closed hyperbolic 3-manifold. It is also known as Seifert–Weber dodecahedral space and hyperbolic dodecahedral space. It is one of the first discovered examples of closed hyperbolic 3-manifolds.
It is constructed by gluing each face of a dodecahedron to its opposite in a way that produces a closed 3-manifold. There are three ways to do this gluing consistently. Opposite faces are misaligned by 1/10 of a turn, so to match them they must be rotated by 1/10, 3/10 or 5/10 turn; a rotation of 3/10 gives the Seifert–Weber space. Rotation of 1/10 gives the Poincaré homology sphere, and rotation by 5/10 gives 3-dimensional real projective space.
With the 3/10-turn gluing pattern, the edges of the original dodecahedron are glued to each other in groups of five. Thus, in the Seifert–Weber space, each edge is surrounded by five pentagonal faces, and the dihedral angle between these pentagons is 72°. This does not match the 117° dihedral angle of a regular dodecahedron in Euclidean space, but in hyperbolic space there exist regular dodecahedra with any dihedral angle between 60° and 117°, and the hyperbolic dodecahedron with dihedral angle 72° may be used to give the Seifert–Weber space a geometric structure as a hyperbolic manifold.
It is a quotient space of the order-5 dodecahedral honeycomb, a regular tessellation of hyperbolic 3-space by dodecahedra with this dihedral angle.
Gieseking manifold
In mathematics, the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, having volume approximately 1.01494161. It was discovered by Hugo Gieseking in 1912.
The Gieseking manifold can be constructed by removing the vertices from a tetrahedron, then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0,1,2 to the face with vertices 3,1,0 in that order. Glue the face 0,2,3 to the face 3,2,1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is π/3. The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together.
Some important classes of 3-manifolds
Graph manifold
Haken manifold
Homology spheres
Hyperbolic 3-manifold
I-bundles
Knot and link complements
Lens space
Seifert fiber spaces, Circle bundles
Spherical 3-manifold
Surface bundles over the circle
Torus bundle
Hyperbolic link complements
A hyperbolic link is a link in the 3-sphere with complement that has a complete Riemannian metric of constant negative curvature, i.e. has a hyperbolic geometry. A hyperbolic knot is a hyperbolic link with one component.
The following examples are particularly well-known and studied.
Figure eight knot
Whitehead link
Borromean rings
The classes are not necessarily mutually exclusive.
Some important structures on 3-manifolds
Contact geometry
Contact geometry is the study of a geometric structure on smooth manifolds given by a hyperplane distribution in the tangent bundle and specified by a one-form, both of which satisfy a 'maximum non-degeneracy' condition called 'complete non-integrability'. From the Frobenius theorem, one recognizes the condition as the opposite of the condition that the distribution be determined by a codimension one foliation on the manifold ('complete integrability').
Contact geometry is in many ways an odd-dimensional counterpart of symplectic geometry, which belongs to the even-dimensional world. Both contact and symplectic geometry are motivated by the mathematical formalism of classical mechanics, where one can consider either the even-dimensional phase space of a mechanical system or the odd-dimensional extended phase space that includes the time variable.
Haken manifold
A Haken manifold is a compact, P²-irreducible 3-manifold that is sufficiently large, meaning that it contains a properly embedded two-sided incompressible surface. Sometimes one considers only orientable Haken manifolds, in which case a Haken manifold is a compact, orientable, irreducible 3-manifold that contains an orientable, incompressible surface.
A 3-manifold finitely covered by a Haken manifold is said to be virtually Haken. The Virtually Haken conjecture asserts that every compact, irreducible 3-manifold with infinite fundamental group is virtually Haken.
Haken manifolds were introduced by Wolfgang Haken. Haken proved that Haken manifolds have a hierarchy, where they can be split up into 3-balls along incompressible surfaces. Haken also showed that there was a finite procedure to find an incompressible surface if the 3-manifold had one. Jaco and Oertel gave an algorithm to determine if a 3-manifold was Haken.
Essential lamination
An essential lamination is a lamination in which every leaf is incompressible and end-incompressible, the complementary regions of the lamination are irreducible, and there are no spherical leaves.
Essential laminations generalize the incompressible surfaces found in Haken manifolds.
Heegaard splitting
A Heegaard splitting is a decomposition of a compact oriented 3-manifold that results from dividing it into two handlebodies.
Every closed, orientable three-manifold may be so obtained; this follows from deep results on the triangulability of three-manifolds due to Moise. This contrasts strongly with higher-dimensional manifolds which need not admit smooth or piecewise linear structures. Assuming smoothness the existence of a Heegaard splitting also follows from the work of Smale about handle decompositions from Morse theory.
Taut foliation
A taut foliation is a codimension 1 foliation of a 3-manifold with the property that there is a single transverse circle intersecting every leaf. By transverse circle, is meant a closed loop that is always transverse to the tangent field of the foliation. Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface.
Taut foliations were brought to prominence by the work of William Thurston and David Gabai.
Foundational results
Some results are named as conjectures as a result of historical artifacts.
We begin with the purely topological:
Moise's theorem
In geometric topology, Moise's theorem, proved by Edwin E. Moise in 1952, states that any topological 3-manifold has an essentially unique piecewise-linear structure and smooth structure.
As corollary, every compact 3-manifold has a Heegaard splitting.
Prime decomposition theorem
The prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) collection of prime 3-manifolds.
A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension.
Kneser–Haken finiteness
Kneser-Haken finiteness says that for each compact 3-manifold, there is a constant C such that any collection of disjoint incompressible embedded surfaces of cardinality greater than C must contain parallel elements.
Loop and Sphere theorems
The loop theorem is a generalization of Dehn's lemma and should more properly be called the "disk theorem". It was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the Sphere theorem.
A simple and useful version of the loop theorem states that if there is a map
f: (D², ∂D²) → (M, ∂M)
with f|∂D² not nullhomotopic in ∂M, then there is an embedding with the same property.
The sphere theorem of Papakyriakopoulos gives conditions for elements of the second homotopy group of a 3-manifold to be represented by embedded spheres.
One example is the following:
Let M be an orientable 3-manifold such that π2(M) is not the trivial group. Then there exists a non-zero element of π2(M)
having a representative that is an embedding S² → M.
Annulus and Torus theorems
The annulus theorem states that if a pair of disjoint simple closed curves on the boundary of a three manifold are freely homotopic then they cobound a properly embedded annulus. This should not be confused with the high dimensional theorem of the same name.
The torus theorem is as follows: Let M be a compact, irreducible 3-manifold with nonempty boundary. If M admits an essential map of a torus, then M admits an essential embedding of either a torus or an annulus.
JSJ decomposition
The JSJ decomposition, also known as the toral decomposition, is a topological construct given by the following theorem:
Irreducible orientable closed (i.e., compact and without boundary) 3-manifolds have a unique (up to isotopy) minimal collection of disjointly embedded incompressible tori such that each component of the 3-manifold obtained by cutting along the tori is either atoroidal or Seifert-fibered.
The acronym JSJ is for William Jaco, Peter Shalen, and Klaus Johannson. The first two worked together, and the third worked independently.
Scott core theorem
The Scott core theorem is a theorem about the finite presentability of fundamental groups of 3-manifolds due to G. Peter Scott. The precise statement is as follows:
Given a 3-manifold (not necessarily compact) with finitely generated fundamental group, there is a compact three-dimensional submanifold, called the compact core or Scott core, such that its inclusion map induces an isomorphism on fundamental groups. In particular, this means a finitely generated 3-manifold group is finitely presentable.
A simplified proof has since been given, and a stronger uniqueness statement has also been proven.
Lickorish–Wallace theorem
The Lickorish–Wallace theorem states that any closed, orientable, connected 3-manifold may be obtained by performing Dehn surgery on a framed link in the 3-sphere with surgery coefficients. Furthermore, each component of the link can be assumed to be unknotted.
Waldhausen's theorems on topological rigidity
Friedhelm Waldhausen's theorems on topological rigidity say that certain 3-manifolds (such as those with an incompressible surface) are homeomorphic if there is an isomorphism of fundamental groups which respects the boundary.
Waldhausen conjecture on Heegaard splittings
Waldhausen conjectured that every closed orientable 3-manifold has only finitely many Heegaard splittings (up to homeomorphism) of any given genus.
Smith conjecture
The Smith conjecture (now proven) states that if f is a diffeomorphism of the 3-sphere of finite order, then the fixed point set of f cannot be a nontrivial knot.
Cyclic surgery theorem
The cyclic surgery theorem states that, for a compact, connected, orientable, irreducible three-manifold M whose boundary is a torus T, if M is not a Seifert-fibered space and r,s are slopes on T such that their Dehn fillings have cyclic fundamental group, then the distance between r and s (the minimal number of times that two simple closed curves in T representing r and s must intersect) is at most 1. Consequently, there are at most three Dehn fillings of M with cyclic fundamental group.
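The "distance" in this statement is concrete: writing slopes on the boundary torus as coprime pairs (p, q), the minimal geometric intersection number of two slopes is |pq' - p'q| (a standard fact, stated here as an assumption rather than taken from the source). A short sketch:

```python
# Distance between two slopes r = (p, q) and s = (p', q') on a torus, i.e. the minimal number of
# intersection points of simple closed curves representing them.
def slope_distance(r, s):
    (p, q), (p2, q2) = r, s
    return abs(p * q2 - p2 * q)

print(slope_distance((1, 0), (0, 1)))   # 1: such a pair of cyclic fillings is allowed by the theorem
print(slope_distance((1, 0), (2, 1)))   # 1: also allowed
print(slope_distance((1, 0), (3, 2)))   # 2: the theorem rules out both fillings having cyclic
                                        #    fundamental group (for M not Seifert-fibered)
```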
Thurston's hyperbolic Dehn surgery theorem and the Jørgensen–Thurston theorem
Thurston's hyperbolic Dehn surgery theorem states: if M is a cusped hyperbolic 3-manifold, then the Dehn filling M(u1, ..., un) is hyperbolic as long as a finite set of exceptional slopes Ei is avoided for the i-th cusp for each i. In addition, M(u1, ..., un) converges to M in H, the space of finite-volume hyperbolic 3-manifolds with the geometric topology, as all ui → ∞ for all ui corresponding to non-empty Dehn fillings.
This theorem is due to William Thurston and fundamental to the theory of hyperbolic 3-manifolds. It shows that nontrivial limits exist in H. Troels Jorgensen's study of the geometric topology further shows that all nontrivial limits arise by Dehn filling as in the theorem.
Another important result by Thurston is that volume decreases under hyperbolic Dehn filling. In fact, the theorem states that volume decreases under topological Dehn filling, assuming of course that the Dehn-filled manifold is hyperbolic. The proof relies on basic properties of the Gromov norm.
Jørgensen also showed that the volume function on this space is a continuous, proper function. Thus by the previous results, nontrivial limits in H are taken to nontrivial limits in the set of volumes. In fact, one can further conclude, as did Thurston, that the set of volumes of finite volume hyperbolic 3-manifolds has ordinal type ω^ω. This result is known as the Thurston-Jørgensen theorem. Further work characterizing this set was done by Gromov.
Also, Gabai, Meyerhoff & Milley showed that the Weeks manifold has the smallest volume of any closed orientable hyperbolic 3-manifold.
Thurston's hyperbolization theorem for Haken manifolds
One form of Thurston's geometrization theorem states:
If M is a compact irreducible atoroidal Haken manifold whose boundary has zero Euler characteristic, then the interior of M has a complete hyperbolic structure of finite volume.
The Mostow rigidity theorem implies that if a manifold of dimension at least 3 has a hyperbolic structure of finite volume, then it is essentially unique.
The conditions that the manifold M should be irreducible and atoroidal are necessary, as hyperbolic manifolds have these properties. However the condition that the manifold be Haken is unnecessarily strong. Thurston's hyperbolization conjecture states that a closed irreducible atoroidal 3-manifold with infinite fundamental group is hyperbolic, and this follows from Perelman's proof of the Thurston geometrization conjecture.
Tameness conjecture, also called the Marden conjecture or tame ends conjecture
The tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold.
The tameness theorem was conjectured by Marden. It was proved by Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem. It also implies the Ahlfors measure conjecture.
Ending lamination conjecture
The ending lamination theorem, originally conjectured by William Thurston and later proven by Jeffrey Brock, Richard Canary, and Yair Minsky, states that hyperbolic 3-manifolds with finitely generated fundamental groups are determined by their topology together with certain "end invariants", which are geodesic laminations on some surfaces in the boundary of the manifold.
Poincaré conjecture
The 3-sphere is an especially important 3-manifold because of the now-proven Poincaré conjecture. Originally conjectured by Henri Poincaré, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attack the problem. Perelman introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way. Several teams of mathematicians have verified that Perelman's proof is correct.
Thurston's geometrization conjecture
Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston in 1982, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print.
Grigori Perelman sketched a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery.
There are now several different manuscripts (see below) with details of the proof. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
Virtually fibered conjecture and Virtually Haken conjecture
The virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle.
The virtually Haken conjecture states that every compact, orientable, irreducible three-dimensional manifold with infinite fundamental group is virtually Haken. That is, it has a finite cover (a covering space with a finite-to-one covering map) that is a Haken manifold.
In a posting on the arXiv on 25 Aug 2009, Daniel Wise implied (by referring to a then-unpublished longer manuscript) that he had proven the virtually fibered conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Sciences.
Several more preprints have followed, including the aforementioned longer manuscript by Wise. In March 2012, during a conference at Institut Henri Poincaré in Paris, Ian Agol announced he could prove the virtually Haken conjecture for closed hyperbolic 3-manifolds. The proof built on results of Kahn and Markovic in their proof of the Surface subgroup conjecture and results of Wise in proving the Malnormal Special Quotient Theorem and results of Bergeron and Wise for the cubulation of groups. Taken together with Wise's results, this implies the virtually fibered conjecture for all closed hyperbolic 3-manifolds.
Simple loop conjecture
If f: S → T is a map of closed connected surfaces such that the induced map f*: π1(S) → π1(T) is not injective, then there exists a non-contractible simple closed
curve α in S such that the restriction of f to α is homotopically trivial. This conjecture was proven by David Gabai.
Surface subgroup conjecture
The surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface not the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list.
Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the Summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk August 4, 2009 at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arxiv in October 2009. Their paper was published in the Annals of Mathematics in 2012. In June 2012, Kahn and Markovic were given the Clay Research Awards by the Clay Mathematics Institute at a ceremony in Oxford.
Important conjectures
Cabling conjecture
The cabling conjecture states that if Dehn surgery on a knot in the 3-sphere yields a reducible 3-manifold, then that knot is a (p, q)-cable on some other knot, and the surgery must have been performed using the slope pq.
Lubotzky–Sarnak conjecture
The fundamental group of any finite volume hyperbolic n-manifold does
not have Property τ.
References
Further reading
External links
Strickland, Neil, A Bestiary of Topological Objects
Low-dimensional topology
Geometric topology | 3-manifold | [
"Mathematics"
] | 6,084 | [
"Topology",
"Low-dimensional topology",
"Geometric topology"
] |
1,018,267 | https://en.wikipedia.org/wiki/Jet%20injector | A jet injector is a type of medical injecting syringe device used for a method of drug delivery known as jet injection. A narrow, high-pressure stream of liquid is made to penetrate the outermost layer of the skin (stratum corneum) to deliver medication to targeted underlying tissues of the epidermis or dermis ("cutaneous" injection, also known as classical "intradermal" injection), fat ("subcutaneous" injection), or muscle ("intramuscular" injection).
The jet stream is usually generated by the pressure of a piston in an enclosed liquid-filled chamber. The piston is usually pushed by the release of a compressed metal spring, although devices being studied may use piezoelectric effects and other novel technologies to pressurize the liquid in the chamber. The springs of currently marketed and historical devices may be compressed by operator muscle power, hydraulic fluid, built-in battery-operated motors, compressed air or gas, and other means. Gas-powered and hydraulically powered devices may involve hoses that carry compressed gas or hydraulic fluid from separate cylinders of gas, electric air pumps, foot-pedal pumps, or other components to reduce the size and weight of the hand-held part of the system and to allow faster and less-tiring methods to perform numerous consecutive vaccinations.
Jet injectors were used for mass vaccination, and as an alternative to needle syringes for diabetics to inject insulin. However, the World Health Organization no longer recommends jet injectors for vaccination due to risks of disease transmission. Similar devices are used in other industries to inject grease or other fluid.
The term "hypospray", although better known from its usage in the 1960s television show Star Trek, is attested in the medical literature as early as 1956.
Types
A jet injector, also known as a jet gun injector, air gun, or pneumatic injector, is a medical instrument that uses a high-pressure jet of liquid medication to penetrate the skin and deliver medication under the skin without a needle. Jet injectors can be single-dose or multi-dose.
Throughout the years jet injectors have been redesigned to overcome the risk of carrying contamination to successive subjects. To try to stop the risk, researchers placed a single-use protective cap over the reusable nozzle. The protective cap was intended to act as a shield between the reusable nozzle and the patient's skin. After each injection the cap would be discarded and replaced with a sterile one. These devices were known as protector cap needle-free injectors or PCNFI. A safety test by Kelly and colleagues (2008) found a PCNFI device failed to prevent contamination. After administering injections to hepatitis B patients, researchers found hepatitis B had penetrated the protective cap and contaminated the internal components of the jet injector, showing that the internal fluid pathway and patient-contacting parts cannot safely be reused.
Researchers developed a new jet injection design by combining the drug reservoir, plunger and nozzle into a single-use disposable cartridge. The cartridge is placed onto the tip of the jet injector and, when activated, a rod pushes the plunger forward. This device is known as a disposable-cartridge jet injector (DCJI).
The International Organization for Standardization recommended abandoning the use of the name "jet injector", which is associated with a risk of cross-contamination, and referring to newer devices instead as "needle-free injectors".
Modern needle-free injector brands
Since the late 1970s, jet injectors have been increasingly used by diabetics in the United States. These devices have all been spring-loaded. At their peak, jet injectors accounted for 7% of the injector market. Currently, the only model available in the United States is the Injex 23. In the United Kingdom, the Insujet has recently entered the market. As of June 2015, the Insujet is available in the UK and a few select countries.
Researchers from the University of Twente in the Netherlands patented a Jet Injection System, comprising a microfluidic device for jet ejection and a laser-based heating system. A continuous laser beam – also called a continuous-wave laser – heats the liquid to be administered, which is launched in a droplet form across the epidermis and slows down into the tissue below.
Concerns
Since the jet injector breaks the barrier of the skin, there is a risk of blood and biological material being transferred from one user to the next. Research on the risks of cross-contamination arose immediately after the invention of jet injection technology.
There are three inherent problems with jet injectors:
Splash-back
Splash-back refers to the jet stream penetrating the outer skin at a high velocity, causing the jet stream to ricochet backward and contaminate the nozzle.
Instances of splash-back have been published by several researchers. Samir Mitragrotri visually captured splash-back after discharging a multi-use nozzle jet injector using high-speed microcinematography. Hoffman and colleagues (2001) also observed the nozzle and internal fluid pathway of the jet injector becoming contaminated.
Fluid suck-back
Fluid suck-back occurs when blood left on the nozzle of the jet injector is sucked back into the injector orifice, contaminating the next dose to be fired.
The CDC has acknowledged that the most widely used jet injector in the world, the Ped-O-Jet, sucked fluid back into the gun. "After injections, they [CDC] observed fluid remaining on the Ped-O-Jet nozzle being sucked back into the device upon its cocking and refilling for the next injection (beyond the reach of alcohol swabbing or acetone swabbing)," stated Dr. Bruce Weniger.
Retrograde flow
Retrograde flow happens when, after the jet stream penetrates the skin and creates a hole, the pressure of the jet stream causes the spray, after mixing with tissue fluids and blood, to rebound back out of the hole, against the incoming jet stream and back into the nozzle orifice.
This problem has been reported by numerous researchers.
Hepatitis B can be transmitted by less than one nanolitre so makers of injectors must ensure there is no cross-contamination between applications. The World Health Organization no longer recommends jet injectors for vaccination due to risks of disease transmission.
Numerous studies have found cross-infection of diseases from jet injections. An experiment using mice, published in 1985, showed that jet injectors would frequently transmit the viral infection lactate dehydrogenase elevating virus (LDV) from one mouse to another. Another study used the device on a calf, then tested the fluid remaining in the injector for blood. Every injector they tested had detectable blood in a quantity sufficient to pass on a virus such as hepatitis B.
From 1984 to 1985, a weight-loss clinic in Los Angeles administered human chorionic gonadotropin (hCG) with a Med-E-Jet injector. A CDC investigation found 57 out of 239 people who had received the jet injection tested positive for hepatitis B.
Jet injectors have also been found to inoculate bacteria from the environment into users. In 1988 a podiatry clinic used a jet injector to deliver local anaesthetic into patients' toes. Eight of these patients developed infections caused by Mycobacterium chelonae. The injector was stored in a container of water and disinfectant between use, but the organism grew in the container. This species of bacteria is sometimes found in tap water, and had been previously associated with infections from jet injectors.
History
19th century: Workmen in France had accidental jet injections with high-powered grease guns.
December 18, 1866: Jules-Auguste Béclard presented Dr. Jean Sales-Girons' invention, the Appareil pour l'aquapuncture, to l'Académie Impériale de Médecine in Paris. This is the earliest documented jet injector to administer water or medicine under enough pressure to penetrate the skin without the use of a needle.
1920s: Diesel engines began to be made in large quantities: thus the start of serious risk of accidental jet-injection by their fuel injectors in workshop accidents.
1935: Arnold K. Sutermeister, a mechanical engineer, witnessed a worker injure his hand from a high-pressure jet stream and theorized about using the concept to administer medicine. Sutermeister collaborated with Dr. John Roberts in creating a prototype jet injector.
1936: Marshall Lockhart, an engineer, filed a patent for his idea of a jet injector after learning of Sutermeister's invention.
1937: First published accidental jet injection by a diesel engine's fuel injector.
1947: Lockhart's jet injector, known as the Hypospray, was introduced for clinical evaluation by Dr. Robert Hingson and Dr. James Hughes.
1951: The Commission on Immunization of the Armed Forces Epidemiological Board requested the Army Medical Service Graduate School to develop "jet injection equipment specifically intended for rapid semiautomatic operation in large-scale immunization programs." This device became known as the multi-use nozzle jet injector (MUNJI).
1954–1967: Dr. Robert Hingson partook in numerous health expeditions with his charity, Brother's Brother Foundation. Hingson stated he vaccinated upwards of 2 million people across the globe using various multi-use nozzle jet injectors.
1955: Warren and colleagues (1955) reported on the introduction of a prototype multi-dose jet injector, known as the Press-O-Jet, which had successfully undergone clinical testing upon 1,685 soldiers within the U.S. Army.
1959: Abram Benenson, the Lieutenant Colonel for the Division of Immunology at Walter Reed Army Institute of Research, reported on the development of what became widely known as the Ped-O-Jet. The invention was the collaboration of Dr. Benenson and Aaron Ismach. Ismach was a civilian scientist working for the US Army Medical Equipment and Research Development Laboratory.
1961: The Department of the Army made multi-use nozzle jet injectors the standard for administering immunizations.
1961: The CDC implemented mass vaccination programs across the United States called Babies and Breadwinners to combat polio. These vaccination events used multi-use nozzle jet injectors.
1964: Aaron Ismach invented an intradermal nozzle for the Ped-O-Jet injector, which allowed delivery of the shallower smallpox vaccinations.
1964: Aaron Ismach was awarded the Exceptional Civilian Service Award at the Eighth Annual Secretary of the Army Awards ceremonies for his invention of the intradermal nozzle.
1966: Oscar Banker, an engineer, patented his invention of a portable multi-use nozzle jet injector that utilizes as its energy source. This would become known as the Med-E-Jet.
September 1966: The Star Trek series started to use its own jet injector device under the name "hypospray".
1967: Nicaraguans undergoing smallpox vaccinations nicknamed the gun-like jet injectors (Ped-O-Jet and Med-E-Jet) as "la pistola de la paz", meaning "the pistol of peace". The name "Peace Guns" stuck.
1976: The United States Agency for International Development (USAID) published a book called War on Hunger, which detailed the war against smallpox in which Ismach's jet injector gun was used to help eradicate the disease in Africa and Asia. The US government spent $150 million a year to prevent its recurrence in North America.
1986: A hepatitis B outbreak occurs amongst 57 patients at a Los Angeles clinic due to a Med-E-Jet injector.
1997: The US Department of Defense, the jet injector's biggest user, announced that it would stop using it for mass vaccinations due to concerns about infection.
2003: The US Department of Veterans Affairs recognized for the first time that a veteran acquired Hepatitis C from his military jet injections and awarded service-connection for his disability.
April 2010: A laser-based reusable microjet injector for transdermal drug delivery was made by Tae-hee Han and Jack J. Yoh.
February 13, 2013: The PharmaJet Stratis Needle-Free Injector received WHO PQS Certification.
2013: The most comprehensive review and history of jet injection to date is published in the 6th edition of the textbook Vaccines.
August 14, 2014: The U.S. Food and Drug Administration (FDA) approved the use of the PharmaJet Stratis 0.5ml Needle-free Jet Injector for delivery of one particular flu vaccine (AFLURIA by bioCSL Inc.) in people 18 through 64 years of age.
October 2017: A group of scientists publishes an academic study in the Journal of Biomedical Optics, about a new jet injection technique of jet injection by continuous-wave laser cavitation aimed to "develop a needle-free device for eliminating major global healthcare problems caused by needles".
References
External links
Problems in use of jet injectors by diabetics
Memory Alpha (Star Trek Wiki) page about the hypospray
Medical equipment
Drug delivery devices
Dosage forms
American inventions | Jet injector | [
"Chemistry",
"Biology"
] | 2,835 | [
"Pharmacology",
"Drug delivery devices",
"Medical equipment",
"Medical technology"
] |
1,018,292 | https://en.wikipedia.org/wiki/Hylozoism | Hylozoism is the philosophical doctrine according to which all matter is alive or animated, either in itself or as participating in the action of a superior principle, usually the world-soul (anima mundi). The theory holds that matter is unified with life or spiritual activity. The word is a 17th-century term formed from the Greek words ὕλη (hyle: "wood, matter") and ζωή (zoē: "life"), which was coined by the English Platonist philosopher Ralph Cudworth in 1678.
Hylozoism in Ancient Greek Philosophy
Hylozoism in Western philosophy can be traced back to ancient Greece. The Milesian philosophers Thales, Anaximander, and Anaximenes, can be described as hylozoists. Philosopher David Skrbina states that hylozoism was implicit in early Greek philosophy, and was not a doctrine that was typically challenged. "For the Milesians, matter (hyle) possessed life (zoe) as an essential quality. Something like hylozoism was simply accepted as a brute condition of reality." Though hylozoism was implicit in early Greek thought, the philosopher Heraclitus specifically used the term zoe, making him explicitly hylozoist. The hylozoism of the pre-Socratic philosophers such as Thales and Heraclitus influenced later Greek philosophers such as Plato, Aristotle, and the Stoics.
Though hylozoism was common in ancient Greek thought, the term had not been coined yet. In modern literature, hylozoism has tended to carry a negative connotation, and labeling a Greek philosopher as a hylozoist might be a vague disparagement of their thought.
Renaissance period and early modernity
During the Renaissance period in Western Europe, humanist scholars and philosophers such as Bernardino Telesio, Paracelsus, Cardanus, and Giordano Bruno revived the doctrine of hylozoism. The latter, for example, held a form of Christian pantheism wherein God is conceived as the source, cause, medium, and end of all things, and therefore all things are participatory in the ongoing Godhead. Bruno's ideas were so radical that he was excommunicated by the Catholic Church with the accusation of heresy, as well as from a few Protestant denominations, and he was eventually burned at the stake for various other beliefs that were regarded as heretical. Telesio, on the other hand, began from an Aristotelian basis and, through radical empiricism, came to believe that a living force was what informed all matter. Instead of the intellectual universals of Aristotle, he believed that life generated form.
In the Kingdom of England, some of the Cambridge Platonists approached hylozoism as well. Both Henry More and Ralph Cudworth (the Younger, 1617–1688), through their reconciliation of Platonic idealism with Christian doctrines of deific generation, came to see the divine lifeforce as the informing principle in the world. Thus, like Bruno, but not nearly to the extreme, they saw God's generative impulse as giving life to all things that exist. Nevertheless, Cudworth, the most systematic metaphysician of the Cambridge Platonist tradition, fought hylozoism. His work is primarily a critique of what he took to be the two principal forms of atheism—materialism and hylozoism.
Cudworth singled out Hobbes not only as a defender of the hylozoic atheism "which attributes life to matter", but also as one going beyond it and defending "hylopathian atheism, which attributes all to matter." Cudworth attempted to show that Hobbes had revived the doctrines of Protagoras and was therefore subject to the criticisms which Plato had deployed against Protagoras in the Theaetetus. On the side of hylozoism, Strato of Lampsacus was the official target. However, Cudworth's Dutch friends had reported to him the views which Spinoza was circulating in manuscript. Cudworth remarks in his Preface that he would have ignored hylozoism had he not been aware that a new version of it would shortly be published.
Spinoza's idealism also tends toward hylozoism. In order to hold a balance even between matter and mind, Spinoza combined materialistic with pantheistic hylozoism, by demoting both to mere attributes of the one infinite substance. Although specifically rejecting identity in inorganic matter, he, like the Cambridge Platonists, sees a life force within, as well as beyond, all matter.
Contemporary hylozoism
Immanuel Kant presented arguments against hylozoism in the third chapter of his 1786 book Metaphysische Anfangsgründe der Naturwissenschaften ("First Metaphysical Principles of Natural Science") and also in his 1781 book Kritik der reinen Vernunft ("Critique of Pure Reason"). Yet, in more recent times, scientific hylozoism – whether modified, or keeping the trend of making all beings conform to some uniform pattern, as the concept was upheld in modernity by Herbert Spencer, Hermann Lotze, and Ernst Haeckel – was often called upon as a protest against a mechanistic worldview.
In the 19th century, Haeckel developed a materialist form of hylozoism, specially against Rudolf Virchow's and Hermann von Helmholtz's mechanical views of humans and nature. In his Die Welträtsel of 1899 (The Riddle of the Universe 1901), Haeckel upheld a unity of organic and inorganic nature and derived all actions of both types of matter from natural causes and laws. Thus, his form of hylozoism reverses the usual course by maintaining that living and nonliving things are essentially the same, and by erasing the distinction between the two and stipulating that they behave by a single set of laws.
In contrast, the Argentine-German neurobiological tradition uses the term hylozoic hiatus for all of the parts of nature which can only behave lawfully or nomically and which, on account of this feature, are described as lying outside of minds and amid them – i.e. extramentally. Thereby the hylozoic hiatus becomes contraposed to minds, which are deemed capable of behaving semoviently, i.e. capable of inaugurating new causal series (semovience). Hylozoism in this contemporary neurobiological tradition is thus restricted to the portions of nature behaving nomically inside the minds, namely the minds' sensory reactions (Christfried Jakob's "sensory intonations") whereby minds react to the stimuli coming from the hylozoic hiatus or extramental realm.
Martin Buber too takes an approach that is quasi-hylozoic. By maintaining that the essence of things is identifiable and separate, although not pre-existing, he can see a soul within each thing.
The French Pythagorean and Rosicrucian alchemist, Francois Jollivet-Castelot (1874–1937), established a hylozoic esoteric school which combined the insight of spagyrics, chemistry, physics, transmutations and metaphysics. He published many books, including the 1896 publication "L’Hylozoïsme, l’alchimie, les chimistes unitaires". In his view there was no difference between spirit and matter except for the degree of frequency and other vibrational conditions.
The Mormon theologian Orson Pratt taught a form of hylozoism.
Alice A. Bailey wrote a book called The Consciousness of the Atom.
Influenced by Alice A. Bailey, Charles Webster Leadbeater, and their predecessor Madame Blavatsky, Henry T. Laurency produced voluminous writings describing a hylozoic philosophy.
Influenced by George Ivanovich Gurdjieff, the English philosopher and mathematician John Godolphin Bennett, in his four-volume work The Dramatic Universe and his book Energies, developed a six-dimensional framework in which matter-energy takes on 12 levels of hylozoic quality.
The English cybernetician Stafford Beer adopted a hylozoic position, arguing that it could be defended scientifically, and expended much effort on biological computing in consequence. This has been described as Beer's "spiritually-charged awe at the activity and powers of nature in relation to our inability to grasp them representationally". Beer claimed that "Nature does not need to make any detours; it does not just exceed our computational abilities, in effect it surpasses them in unimaginable ways." In a poem on the Irish Sea, Beer talks about nature as exceeding our capacities in a way that we can only wonder at, 'shocked' and 'dumbfounded'. In partnership with his friend Gordon Pask, who was experimenting with various chemical and bio-chemical devices, he explored the possibility that intelligence could be developed in very simple network-complex systems. In one possibly unique experiment led by Pask, they found that such a structure would 'grow' a sensing organization in response to the stimuli of different audio inputs in about half a day.
Ken Wilber embraces hylozoism to explain subjective experience and provides terms describing the ladder of subjective experience undergone by entities from atoms up to human beings in the upper-left quadrant of his Integral philosophy chart.
Physicist Thomas Brophy, in The Mechanism Demands a Mysticism, embraces hylozoism as the basis of a framework for re-integrating modern physical science with perennial spiritual philosophy. Brophy coins two additional words to stand with hylozoism as the three possible ontological stances consistent with modern physics. Thus: hylostatism (universe is deterministic, thus "static" in a four-dimensional sense); hylostochastism (universe contains a fundamentally random or stochastic component); hylozoism (universe contains a fundamentally alive aspect).
Architect Christopher Alexander has put forth a theory of the living universe, where life is viewed as a pervasive patterning that extends to what is normally considered non-living things, notably buildings. He wrote a four-volume work called The Nature of Order which explicates this theory in detail.
Philosopher and ecologist David Abram articulates and elaborates a form of hylozoism grounded in the phenomenology of sensory experience. In his books Becoming Animal and The Spell of the Sensuous, Abram suggests that matter is never entirely passive in our direct experience, holding rather that material things actively "solicit our attention" or "call our focus," coaxing the perceiving body into an ongoing participation with those things. In the absence of intervening technologies, sensory experience is inherently animistic, disclosing a material field that is animate and self-organizing from the get-go. Drawing upon contemporary cognitive and natural science as well as the perspectival worldviews of diverse indigenous, oral cultures, Abram proposes a richly pluralist and story-based cosmology, in which matter is alive through and through. Such an ontology is in close accord, he suggests, with our spontaneous perceptual experience; it calls us back to our senses and to the primacy of the sensuous terrain, enjoining a more respectful and ethical relation to the more-than-human community of animals, plants, soils, mountains, waters and weather-patterns that materially sustains us.
Bruno Latour's actor-network theory, in the sociology of science, treats non-living things as active agents and thus bears some metaphorical resemblance to hylozoism.
The metaphysics of Gilles Deleuze has been described as a form of hylozoism.
In popular culture
Art
Hylozoic Series: Sibyl, an interactive installation of Canadian artist and architect Philip Beesley, was presented in the 18th Biennale of Sydney and was on display until September 2012. Using sensors, LEDs, and shape-memory alloy, Beesley constructed an interactive environment that responded to the actions of the audience, offering a vision of how buildings in the future might move, think and feel.
Literature
In mathematician and writer Rudy Rucker's novels Postsingular and Hylozoic, the emergent sentience of all material things is described as a property of the technological singularity.
The Hylozoist is one of the Culture ships mentioned in Iain M. Banks's novel Surface Detail – appropriately, this ship is a member of the branch of Contact dealing with smart matter outbreaks.
MMORPGs
The monster "Hylozoist" (sometimes spelled "Heirozoist") in the MMORPG Ragnarok Online is a plush rabbit doll with its mouth sewn shut, possessed by the spirit of a child. Although hylozoism has nothing to do with possession, it is clear that the name was derived from this ancient philosophy.
Music
The Hylozoists are a Canadian band.
Sonic Youth references hylozoism in the second verse of the fourth track on the Sister LP.
See also
Biocentric universe
Biological naturalism
Clinamen
Daoism
Hyle
Hylomorphism
Hylopathism
Organicism
Pantheism
Pathetic fallacy
Vitalism
References
External links
Metaphysical theories
Natural philosophy
Nonduality
Pantheism
Renaissance philosophy
Vitalism | Hylozoism | [
"Biology"
] | 2,829 | [
"Non-Darwinian evolution",
"Vitalism",
"Biology theories"
] |
1,018,293 | https://en.wikipedia.org/wiki/Barrel%20organ | A barrel organ (also called roller organ or crank organ) is a French mechanical musical instrument consisting of bellows and one or more ranks of pipes housed in a case, usually of wood, and often highly decorated. The basic principle is the same as a traditional pipe organ, but rather than being played by an organist, the barrel organ is activated either by a person turning a crank, or by clockwork driven by weights or springs. The pieces of music are encoded onto wooden barrels (or cylinders), which are analogous to the keyboard of the traditional pipe organ. A person (or in some cases, a trained animal) who plays a barrel organ is known as an organ grinder.
Terminology
There are many names for the barrel organ, such as hand organ, cylinder organ, box organ (though that can also mean a positive organ), street organ, grinder organ, and Low Countries organ.
French names include orgue à manivelle ("crank organ") and orgue de Barbarie ("Barbary organ"); German names include Drehorgel ("crank organ"), Leierkasten ("brace box"), and Walzenorgel ("cylinder organ"); Hungarian names include verkli (from Austrian-German Werkl), sípláda ("whistle chest") and kintorna (from Bavarian-Austrian "Kinterne"); Italian names include organetto a manovella ("crank organ") and organo tedesco ("German organ"); the Polish name is katarynka.
However, several of these names include types of mechanical organs for which the music is encoded as book music or by holes on a punched paper tape instead of by pins on a barrel. While many of these terms refer to the physical operation of the crank, some refer to an exotic origin. The French name orgue de Barbarie, suggesting barbarians, has been explained as a corruption of, variously, the terms bara ("bread") and gwen ("wine") in the Breton language, the surname of an early barrel-organ manufacturer from Modena, Giovanni Barberi, or that of the English inventor John Burberry.
The term hurdy-gurdy is sometimes mistakenly applied to a small, portable barrel organ that was frequently played by organ grinders and buskers (street musicians), but the two terms should not be confused. Although the hurdy-gurdy is also powered by a crank and often used by street performers, it produces sound with a rosin-covered wheel rotated against tuned strings. Another key difference is that the hurdy-gurdy player is free to play any tune he or she desires, while the barrel organist is generally confined to pre-programmed tunes.
Some also confuse the barrel organ with the steam organ or calliope. In the United Kingdom barrel pianos, particularly those played in the streets, are frequently called barrel organs.
Barrel
The pieces of music (or tunes) are encoded onto the barrel using metal pins and staples. Pins are used for short notes, and staples of varying lengths for longer notes. Each barrel usually carried several different tunes. Pinning such barrels was something of an art form, and the quality of the music produced by a barrel organ is largely a function of the quality of its pinning.
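The pin-and-staple layout is in effect an early piano-roll encoding: each note becomes a mark at a given angular position on the barrel, with short notes represented as pins and sustained notes as staples whose span matches the note's duration. The following toy sketch (Python) illustrates that idea only; the note names, angles and the pin/staple threshold are purely hypothetical and are not taken from any historical pinning chart.

```python
# Toy model (hypothetical) of encoding a tune onto a barrel: each event is
# (pitch, start_angle_degrees, length_degrees); notes longer than a chosen
# threshold are laid out as staples, shorter notes as pins.
STAPLE_THRESHOLD_DEG = 10.0   # illustrative cut-off between pin and staple

tune = [                       # hypothetical three-note fragment
    ("C4", 0.0, 5.0),
    ("E4", 20.0, 15.0),
    ("G4", 45.0, 30.0),
]

def pin_barrel(events, threshold=STAPLE_THRESHOLD_DEG):
    """Return one (kind, pitch, angle, length) entry per note event."""
    layout = []
    for pitch, angle, length in events:
        kind = "staple" if length > threshold else "pin"
        layout.append((kind, pitch, angle, length))
    return layout

for mark in pin_barrel(tune):
    print(mark)   # e.g. ('pin', 'C4', 0.0, 5.0), ('staple', 'E4', 20.0, 15.0)
```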
The organ barrels must be sturdy to maintain precise alignment over time, since they play the same programming role as music rolls and have to endure significant mechanical strain. Damage to the barrel, such as warpage, would have a direct (and usually detrimental) effect on the music produced.
The size of the barrel will depend on the number of notes in the organ and the length of the tune to be played. The more notes, the longer the barrel. The longer the tune, the greater the diameter.
Since the music is hard-coded onto the barrel, the only way for a barrel organ to play a different set of tunes is to replace the barrel with another one. While not a difficult operation, barrels are unwieldy and expensive, so many organ grinders have only one barrel for their instrument.
Operation
A set of levers called keys is positioned just above the surface of the barrel. Each key corresponds to one pitch. A rod is connected to the rear of each key. The other end of the rod is a metal pin which operates a valve within the wind chest. When the instrument is played (by turning the crank), offsets on the crank shaft cause bellows to open and close to produce pressurized air. A reservoir/regulator maintains a constant pressure. A worm gear on the crank shaft causes the barrel to rotate slowly and its pins and staples lift the fronts of the keys. This causes the other end of the key to press down on the end of the rod which, in turn, activates the valve and allows air from the bellows to pass into the corresponding pipe.
To allow different tunes to be played from one barrel, the barrel can be moved laterally to bring a different set of pins and staples under the keys. Street barrel organs usually play 7 to 9 tunes, although small organs (usually the older ones) can play up to 15 tunes. Less commonly (and usually for large orchestrions) the pinning will form one continuous spiral and the barrel will be gradually moved as it rotates so that the pins remain lined up with the keys. In this case, each barrel plays only one long tune.
Usage
The barrel organ was the traditional instrument of organ grinders. With a few exceptions, organ grinders used one of the smaller, more portable versions of the barrel organ, containing perhaps one (or just a few) rank(s) of pipes and only 7 to 9 tunes. Use of these organs was limited by their weight. Most weighed 25 to 50 pounds but some were as heavy as 100 pounds.
There were many larger versions located in churches, fairgrounds, music halls, and other large establishments such as sports arenas and theaters. The large barrel organs were often powered by very heavy weights and springs, like a more powerful version of a longcase clock. They could also be hydraulically powered, with a turbine or waterwheel arrangement giving the mechanical force to turn the barrel and pump the bellows. The last barrel organs were electrically powered, or converted to electrical power. Eventually, many large barrel organs had their barrel actions removed and were converted to play a different format of music, such as cardboard books or paper rolls.
Combined barrel and manually played instruments
Especially in churches, some large barrel organs were built as "barrel and finger" organs. Such instruments are furnished with a normal organ keyboard, in addition to the automatic mechanism, making it possible to play them by hand when a human organist is available. The barrels were often out of sight.
At the beginning of the 20th century, large barrel organs intended for use as fairground organs or street organs were often converted, or newly built, to play music rolls or book music rather than barrels. This allows a much greater variety of melodies to be played.
See also
Barrel Organ Museum Haarlem (Netherlands)
Calliope (music)
Player piano
Dance organ
Fairground organ
Musical clock
Music box
Organ grinder
Pipe organ
Serinette
Street organ
Notes
References
Diagram Group. Musical Instruments of the World. New York: Facts on File, 1976.
Reblitz, Arthur A., Q. David Bowers. Treasures of Mechanical Music. New York: The Vestal Press, 1981.
Smithsonian Institution. History of Music Machines. New York: Drake Publishers, 1975.
External links
Argentinian Barrel Organ Museum - Official website
Museum of Musical Instruments in Netherlands: "From musical clock to street organ"
A 1790 John Langshaw Chamber Barrel Organ
An Open Source Virtual Barrel Organ Player Software
Project about mechanical music machines in Czech Republic
Association of Barrel organ players in Czech Republic
Recordings of historical barrel organs
The Magic of the Barrel Organ
SAYDISC with categories about "Mechanical" Music and "Musical Boxes"
Mechanical Music from Phonogrammarchiv of the Austrian Academy of Sciences
Pipe organ
Street performance
Mechanical musical instruments
French inventions
French musical instruments
Culture in Strasbourg | Barrel organ | [
"Physics",
"Technology"
] | 1,653 | [
"Physical systems",
"Mechanical musical instruments",
"Machines"
] |
1,018,300 | https://en.wikipedia.org/wiki/Anglo-Australian%20Near-Earth%20Asteroid%20Survey | The Anglo-Australian Near-Earth Asteroid Survey (AANEAS) operated from 1990 to 1996, becoming one of the most prolific programs of its type in the world. Apart from leading to the discovery of 38 near-Earth asteroids, 9 comets, 63 supernovae, several other astronomical phenomena and the delivery of a substantial proportion of all NEA astrometry obtained worldwide (e.g., 30% in 1994–95), AANEAS also led to many other scientific advances which were reported in the refereed literature.
See also
List of asteroid close approaches to Earth
List of near-Earth object observation projects
References
D. I. Steel, R. H. McNaught, G. J. Garradd, D. J. Asher and K. S. Russell, AANEAS: A Valedictory Report, Australian Journal of Astronomy, 1998
Astronomical surveys
Observational astronomy | Anglo-Australian Near-Earth Asteroid Survey | [
"Astronomy"
] | 179 | [
"Astronomical surveys",
"Observational astronomy",
"Works about astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
1,018,336 | https://en.wikipedia.org/wiki/Lawson%20criterion | The Lawson criterion is a figure of merit used in nuclear fusion research. It compares the rate of energy being generated by fusion reactions within the fusion fuel to the rate of energy losses to the environment. When the rate of production is higher than the rate of loss, the system will produce net energy. If enough of that energy is captured by the fuel, the system will become self-sustaining and is said to be ignited.
The concept was first developed by John D. Lawson in a classified 1955 paper that was declassified and published in 1957. As originally formulated, the Lawson criterion gives a minimum required value for the product of the plasma (electron) density ne and the "energy confinement time" that leads to net energy output.
Later analysis suggested that a more useful figure of merit is the triple product of density, confinement time, and plasma temperature T. The triple product also has a minimum required value, and the name "Lawson criterion" may refer to this value.
On August 8, 2021, researchers at Lawrence Livermore National Laboratory's National Ignition Facility in California reported that they had produced the first-ever successful ignition of a nuclear fusion reaction, surpassing the Lawson criterion in that experiment.
Energy balance
The central concept of the Lawson criterion is an examination of the energy balance for any fusion power plant using a hot plasma. This is shown below:
Net power = Efficiency × (Fusion − Radiation loss − Conduction loss)
Net power is the excess power beyond that needed internally for the process to proceed in any fusion power plant.
Efficiency is how much energy is needed to drive the device and how well it collects energy from the reactions.
Fusion is rate of energy generated by the fusion reactions.
Radiation loss is the energy lost as light (including X-rays) leaving the plasma.
Conduction loss is the energy lost as particles leave the plasma, carrying away energy.
Lawson calculated the fusion rate by assuming that the fusion reactor contains a hot plasma cloud which has a Gaussian curve of individual particle energies, a Maxwell–Boltzmann distribution characterized by the plasma's temperature. Based on that assumption, he estimated the first term, the fusion energy being produced, using the volumetric fusion equation.
Fusion = Number density of fuel A × Number density of fuel B × Cross section(Temperature) × Energy per reaction
Fusion is the rate of fusion energy produced by the plasma
Number density is the density in particles per unit volume of the respective fuels (or just one fuel, in some cases)
Cross section is a measure of the probability of a fusion event, which is based on the plasma temperature
Energy per reaction is the energy released in each fusion reaction
This equation is typically averaged over a population of ions which has a normal distribution. The result is the amount of energy being created by the plasma at any instant in time.
Lawson then estimated the radiation losses (dominated by bremsstrahlung), which scale as

PB ∝ N^2 T^(1/2)

where N is the number density of the cloud and T is the temperature. For his analysis, Lawson ignores conduction losses. In reality it is nearly impossible to eliminate such losses; practically all systems lose energy through mass leaving the plasma and carrying away its energy.
By equating radiation losses and the volumetric fusion rates, Lawson estimated the minimum temperature for the deuterium–tritium (D-T) reaction

D + T → He-4 (3.5 MeV) + n (14.1 MeV)

to be 30 million degrees (2.6 keV), and for the deuterium–deuterium (D-D) reaction

D + D → T (1.01 MeV) + p (3.02 MeV)
D + D → He-3 (0.82 MeV) + n (2.45 MeV)

to be 150 million degrees (12.9 keV).
Extensions into nτE
The confinement time τE measures the rate at which a system loses energy to its environment. The faster the rate of loss of energy, Ploss, the shorter the energy confinement time. It is the energy density W (energy content per unit volume) divided by the power loss density Ploss (rate of energy loss per unit volume):

τE = W / Ploss
For a fusion reactor to operate in steady state, the fusion plasma must be maintained at a constant temperature. Thermal energy must therefore be added at the same rate the plasma loses energy in order to maintain the fusion conditions. This energy can be supplied by the fusion reactions themselves, depending on the reaction type, or by supplying additional heating through a variety of methods.
For illustration, the Lawson criterion for the D-T reaction will be derived here, but the same principle can be applied to other fusion fuels. It will also be assumed that all species have the same temperature, that there are no ions present other than fuel ions (no impurities and no helium ash), and that D and T are present in the optimal 50-50 mixture. Ion density then equals electron density and the energy density of both electrons and ions together is given, according to the ideal gas law, by

W = 3nT

where T is the temperature in electronvolts (eV) and n is the particle density.
The volume rate f (reactions per volume per time) of fusion reactions is

f = nD nT <σv> = (1/4) n^2 <σv>

where σ is the fusion cross section, v is the relative velocity, and <σv> denotes an average over the Maxwellian velocity distribution at the temperature T.
The volume rate of heating by fusion is f times Ech, the energy of the charged fusion products (the neutrons cannot help to heat the plasma). In the case of the D-T reaction, Ech = 3.5 MeV.
The Lawson criterion requires that fusion heating exceeds the losses:

f Ech ≥ Ploss = W / τE

Substituting in known quantities yields:

(1/4) n^2 <σv> Ech ≥ 3nT / τE

Rearranging the equation produces:

nτE ≥ 12T / (Ech <σv>)

The quantity T/<σv> is a function of temperature with an absolute minimum. Replacing the function with its minimum value provides an absolute lower limit for the product nτE. This is the Lawson criterion.
For the deuterium–tritium reaction, the physical value is at least

nτE ≥ 1.5×10^20 s/m^3

The minimum of the product occurs near T = 26 keV.
Extension into the "triple product"
A still more useful figure of merit is the "triple product" of density, temperature, and confinement time, nTτE. For most confinement concepts, whether inertial, mirror, or toroidal confinement, the density and temperature can be varied over a fairly wide range, but the maximum attainable pressure p is a constant. When such is the case, the fusion power density is proportional to p^2 <σv>/T^2. The maximum fusion power available from a given machine is therefore reached at the temperature T where <σv>/T^2 is a maximum. By continuation of the above derivation, the following inequality is readily obtained:

nTτE ≥ 12 T^2 / (Ech <σv>)
The quantity T^2/<σv> is also a function of temperature with an absolute minimum, at a slightly lower temperature than that of T/<σv>.
For the D-T reaction, the minimum occurs at T = 14 keV. The average <σv> in this temperature region can be approximated as

<σv> ≈ 1.1×10^-24 T^2 m^3/s (T in keV),

so the minimum value of the triple product at T = 14 keV is about

nTτE ≥ 12 T^2 / (Ech <σv>) ≈ 3×10^21 keV·s/m^3
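The order of magnitude of this minimum can be checked directly from the two preceding expressions. The short Python sketch below uses the quoted quadratic reactivity approximation and Ech = 3.5 MeV from the text above; note that T^2 cancels against the approximation, so the result is insensitive to the exact temperature chosen within its range of validity.

```python
# Numerical check (sketch) of the minimum D-T triple product quoted above,
# using the quadratic reactivity approximation <sigma v> ~ 1.1e-24 * T**2 m^3/s
# (T in keV) and E_ch = 3.5 MeV = 3500 keV for the charged fusion product.
E_ch_keV = 3500.0        # energy of the alpha particle, in keV
T_keV = 14.0             # temperature near the triple-product minimum

sigma_v = 1.1e-24 * T_keV**2                 # approximate D-T reactivity, m^3/s
triple_product = 12 * T_keV**2 / (E_ch_keV * sigma_v)
print(f"{triple_product:.2e} keV*s/m^3")     # ~3.1e21, matching the text
```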
This number has not yet been achieved in any reactor, although the latest generations of machines have come close. JT-60 reported 1.53×10^21 keV·s·m^-3. For instance, the TFTR has achieved the densities and energy lifetimes needed to achieve Lawson at the temperatures it can create, but it cannot create those temperatures at the same time. ITER aims to do both.
As for tokamaks, there is a special motivation for using the triple product. Empirically, the energy confinement time τE is found to be nearly proportional to n^(1/3)/P^(2/3). In an ignited plasma near the optimum temperature, the heating power P equals fusion power and therefore is proportional to n^2 T^2. The triple product then scales as

nTτE ∝ nT · n^(1/3)/P^(2/3) ∝ nT · n^(1/3)/(n^2 T^2)^(2/3) ∝ T^(-1/3)
The triple product is only weakly dependent on temperature, as T^(-1/3). This makes the triple product an adequate measure of the efficiency of the confinement scheme.
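This exponent bookkeeping is easy to verify symbolically. The following is a minimal sketch (assuming SymPy is available) that reproduces the T^(-1/3) scaling from the two empirical proportionalities quoted above:

```python
# Symbolic check (sketch) of the tokamak scaling argument above:
# with tau_E proportional to n**(1/3) / P**(2/3) and P proportional to
# n**2 * T**2, the triple product n*T*tau_E collapses to a pure T**(-1/3) law.
import sympy as sp

n, T = sp.symbols('n T', positive=True)
P = n**2 * T**2                                   # heating power ~ fusion power
tau_E = n**sp.Rational(1, 3) / P**sp.Rational(2, 3)

triple_product = sp.simplify(n * T * tau_E)
print(triple_product)                             # T**(-1/3): density drops out
```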
Inertial confinement
The Lawson criterion applies to inertial confinement fusion (ICF) as well as to magnetic confinement fusion (MCF), but in the inertial case it is more usefully expressed in a different form. A good approximation for the inertial confinement time is the time that it takes an ion to travel over a distance R at its thermal speed

vth = (T/mi)^(1/2)

where mi denotes mean ionic mass and T is expressed in units of energy. The inertial confinement time τE can thus be approximated as

τE ≈ R / vth = R (mi/T)^(1/2)
By substitution of the above expression into the relationship nτE ≥ 12T/(Ech <σv>), we obtain

n R (mi/T)^(1/2) ≥ 12T / (Ech <σv>), i.e. nR ≥ 12 T^(3/2) / (Ech <σv> mi^(1/2))

This product must be greater than a value related to the minimum of T^(3/2)/<σv>. The same requirement is traditionally expressed in terms of mass density ρ = n·mi:

ρR ≥ 1 g/cm^2
Satisfaction of this criterion at the density of solid D-T (0.2 g/cm^3) would require a laser pulse of implausibly large energy. Assuming the energy required scales with the mass of the fusion plasma (Elaser ~ ρR^3 ~ ρ^-2), compressing the fuel to 10^3 or 10^4 times solid density would reduce the energy required by a factor of 10^6 or 10^8, bringing it into a realistic range. With a compression by 10^3, the compressed density will be 200 g/cm^3, and the compressed radius can be as small as 0.05 mm. The radius of the fuel before compression would be 0.5 mm. The initial pellet will be perhaps twice as large since most of the mass will be ablated during the compression.
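These compression figures follow from the ρR requirement and conservation of fuel mass. The Python sketch below uses the 1 g/cm^2 figure, the solid D-T density and the 10^3 compression factor quoted above, and reproduces the 0.05 mm and 0.5 mm radii and the 10^-6 energy-reduction factor.

```python
# Check (sketch) of the inertial-confinement compression arithmetic above:
# rho*R ~ 1 g/cm^2, solid D-T at 0.2 g/cm^3, compression factor 1000.
rho_solid = 0.2                                  # g/cm^3, solid D-T density
compression = 1e3

rho_compressed = rho_solid * compression         # 200 g/cm^3
R_compressed = 1.0 / rho_compressed              # cm, from rho*R ~ 1 g/cm^2
R_initial = R_compressed * compression**(1/3)    # cm, same fuel mass before compression

print(rho_compressed)              # 200.0  g/cm^3
print(R_compressed * 10)           # 0.05   mm, compressed fuel radius
print(R_initial * 10)              # 0.5    mm, fuel radius before compression
print(compression**-2)             # 1e-06, laser-energy reduction factor
```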
The fusion power times density is a good figure of merit to determine the optimum temperature for magnetic confinement, but for inertial confinement the fractional burn-up of the fuel is probably more useful. The burn-up should be proportional to the specific reaction rate (n^2 <σv>) times the confinement time (which scales as T^(-1/2)) divided by the particle density n:

burn-up ∝ n^2 <σv> · T^(-1/2) / n = n <σv> / T^(1/2)

At fixed pressure n ∝ 1/T, so this scales as <σv>/T^(3/2). Thus the optimum temperature for inertial confinement fusion maximises <σv>/T^(3/2), which is slightly higher than the optimum temperature for magnetic confinement.
Non-thermal systems
Lawson's analysis is based on the rate of fusion and loss of energy in a thermalized plasma. There is a class of fusion machines that do not use thermalized plasmas but instead directly accelerate individual ions to the required energies. The best-known examples are the migma, fusor and polywell.
When applied to the fusor, Lawson's analysis is used as an argument that conduction and radiation losses are the key impediments to reaching net power. Fusors use a voltage drop to accelerate and collide ions, resulting in fusion. The voltage drop is generated by wire cages, and these cages conduct away particles.
Polywells are improvements on this design, designed to reduce conduction losses by removing the wire cages which cause them. Regardless, it is argued that radiation is still a major impediment.
See also
Fusion energy gain factor (Q)
Notes
It is straightforward to relax these assumptions. The most difficult question is how to define when the ion and electrons differ in density and temperature. Considering that this is a calculation of energy production and loss by ions, and that any plasma confinement concept must contain the pressure forces of the plasma, it seems appropriate to define the effective (electron) density through the (total) pressure as . The factor of is included because usually refers to the density of the electrons alone, but here refers to the total pressure. Given two species with ion densities , atomic numbers , ion temperature , and electron temperature , it is easy to show that the fusion power is maximized by a fuel mix given by . The values for , , and the power density must be multiplied by the factor . For example, with protons and boron () as fuel, another factor of must be included in the formulas. On the other hand, for cold electrons, the formulas must all be divided by (with no additional factor for ).
References
External links
Mathematical derivation, archived 2019 from the original
Fusion power | Lawson criterion | [
"Physics",
"Chemistry"
] | 2,342 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
1,018,347 | https://en.wikipedia.org/wiki/Utility%20maximization%20problem | Utility maximization was first developed by utilitarian philosophers Jeremy Bentham and John Stuart Mill. In microeconomics, the utility maximization problem is the problem consumers face: "How should I spend my money in order to maximize my utility?" It is a type of optimal decision problem. It consists of choosing how much of each available good or service to consume, taking into account a constraint on total spending (income), the prices of the goods and their preferences.
Utility maximization is an important concept in consumer theory as it shows how consumers decide to allocate their income. Because consumers are modelled as being rational, they seek to extract the most benefit for themselves. However, due to bounded rationality and other biases, consumers sometimes pick bundles that do not necessarily maximize their utility. The consumer's utility-maximizing bundle is also not fixed and can change over time depending on their individual preferences for goods, price changes and increases or decreases in income.
Basic setup
For utility maximization there is a four-step process to derive consumer demand and find the utility-maximizing bundle of the consumer given prices, income, and preferences.
1) Check if Walras's law is satisfied
2) 'Bang for buck'
3) the budget constraint
4) Check for negativity
1) Walras's Law
Walras's law states that if a consumer's preferences are complete, monotone and transitive, then the optimal demand will lie on the budget line.
Preferences of the consumer
For a utility representation to exist the preferences of the consumer must be complete and transitive (necessary conditions).
Complete
Completeness of preferences indicates that all bundles in the consumption set can be compared by the consumer. For example, if the consumer has 3 bundles A,B and C then;
A ≽ B, A ≽ C, B ≽ A, B ≽ C, C ≽ B, C ≽ A, A ≽ A, B ≽ B, C ≽ C. Therefore, the consumer has complete preferences as they can compare every bundle.
Transitive
Transitivity states that individuals' preferences are consistent across bundles.
Therefore, if the consumer weakly prefers A over B (A ≽ B) and B ≽ C, this means that A ≽ C (A is weakly preferred to C).
Monotone
For a preference relation to be monotone, increasing the quantity of both goods should make the consumer strictly better off (increase their utility), and increasing the quantity of one good while holding the other quantity constant should not make the consumer worse off (same utility).
The preference relation ≽ is monotone if and only if:
1) (x + ε, y) ≽ (x, y)
2) (x, y + ε) ≽ (x, y)
3) (x + ε, y + ε) ≻ (x, y)
where ε > 0
2) 'Bang for buck'
Bang for buck is a concept in utility maximization which refers to the consumer's desire to get the best value for their money. If Walras's law has been satisfied, the optimal solution of the consumer lies at the point where the budget line and the optimal indifference curve intersect; this is called the tangency condition. To find this point, differentiate the utility function with respect to x and y to find the marginal utilities, then divide by the respective prices of the goods:

MUx / px = MUy / py

This can be solved to find the optimal amount of good x in terms of good y (or vice versa).
3) Budget constraint
The basic set-up of the budget constraint of the consumer is:

px·x + py·y ≤ I

where I denotes income. Due to Walras's law being satisfied, the constraint holds with equality:

px·x + py·y = I

The tangency condition is then substituted into this to solve for the optimal amount of the other good.
4) Check for negativity
Negativity must be checked for as the utility maximization problem can give an answer where the optimal demand of a good is negative, which in reality is not possible as this is outside the domain. If the demand for one good is negative, the optimal consumption bundle will be where 0 of this good is consumed and all income is spent on the other good (a corner solution). See figure 1 for an example when the demand for good x is negative.
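As a worked illustration of the four steps, consider a hypothetical consumer with Cobb-Douglas utility U = x^(1/2)·y^(1/2), prices px = 2 and py = 4, and income I = 100 (all of these values are invented for the example). A minimal sketch, assuming SymPy is available:

```python
# Minimal sketch of the four-step procedure for a hypothetical Cobb-Douglas
# consumer: U(x, y) = x**(1/2) * y**(1/2), px = 2, py = 4, income I = 100.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
px, py, income = 2, 4, 100
U = sp.sqrt(x) * sp.sqrt(y)

# Step 1: Cobb-Douglas preferences are complete, monotone and transitive,
#         so Walras's law applies and all income is spent on the budget line.
budget = sp.Eq(px * x + py * y, income)

# Step 2: 'bang for buck' (tangency) condition MUx/px = MUy/py.
tangency = sp.Eq(sp.diff(U, x) / px, sp.diff(U, y) / py)

# Step 3: solve the tangency condition together with the budget constraint.
optimum = sp.solve([budget, tangency], [x, y], dict=True)[0]

# Step 4: check for negativity (an interior solution needs x*, y* >= 0).
assert all(q >= 0 for q in optimum.values())
print(optimum)     # {x: 25, y: 25/2}
```

Here the tangency condition reduces to x = 2y; substituting into the budget line gives x* = 25 and y* = 12.5, both non-negative, so no corner solution is needed.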
A technical representation
Suppose the consumer's consumption set, or the enumeration of all possible consumption bundles that could be selected if there were no budget constraint, is X.
The consumption set is X = R+^n (the set of bundles of non-negative real quantities of the n commodities; the consumer cannot consume a negative amount of a commodity).
Suppose also that the price vector (p) of the n commodities is positive, p > 0,
and that the consumer's income is w; then the set of all affordable packages, the budget set, is

B(p, w) = {x ∈ X : p·x ≤ w}
The consumer would like to buy the best affordable package of commodities.
It is assumed that the consumer has an ordinal utility function, called u. It is a real-valued function with domain being the set of all commodity bundles, or

u : X → R

Then the consumer's optimal choice x(p, w) is the utility-maximizing bundle among all bundles in the budget set; the consumer's optimal demand function is:

x(p, w) = argmax u(x) subject to x ∈ B(p, w)

Finding x(p, w) is the utility maximization problem.
If u is continuous and no commodities are free of charge, then x(p, w) exists, but it is not necessarily unique. If the preferences of the consumer are complete, transitive and strictly convex, then the demand of the consumer contains a unique maximiser for all values of the price and wealth parameters. If this is satisfied, then x(p, w) is called the Marshallian demand function. Otherwise, x(p, w) is set-valued and it is called the Marshallian demand correspondence.
Utility maximisation of perfect complements
U = min {x, y}
For a minimum function with goods that are perfect complements, the same steps cannot be taken to find the utility-maximising bundle, as it is a non-differentiable function. Therefore, intuition must be used. The consumer will maximise their utility at the kink point of the highest indifference curve that intersects the budget line, where x = y. The intuition is that, as the consumer is rational, there is no point in consuming more of one good and not the other, since utility is taken at the minimum of the two (there is no gain in utility from the excess and income would be wasted). See figure 3.
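Under the same hypothetical prices and income as in the Cobb-Douglas sketch above, the kink-point condition x = y combined with the budget line gives the optimum in closed form; a minimal sketch:

```python
# Perfect complements U = min(x, y): the optimum sits at the kink x = y on the
# budget line, so x* = y* = I / (px + py).  Prices and income are illustrative.
px, py, income = 2, 4, 100

x_star = y_star = income / (px + py)
print(x_star, y_star)             # about 16.67 of each good
print(min(x_star, y_star))        # utility achieved at the optimum
```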
Utility maximisation of perfect substitutes
U = x + y
For a utility function with perfect substitutes, the utility maximising bundle can be found by differentiation or simply by inspection. Suppose a consumer finds listening to Australian rock bands AC/DC and Tame Impala perfect substitutes. This means that they are happy to spend all afternoon listening to only AC/DC, or only Tame Impala, or three-quarters AC/DC and one-quarter Tame Impala, or any combination of the two bands in any amount. Therefore, the consumer's optimal choice is determined entirely by the relative prices of listening to the two artists. If attending a Tame Impala concert is cheaper than attending the AC/DC concert, the consumer chooses to attend the Tame Impala concert, and vice versa. If the two concert prices are the same, the consumer is completely indifferent and may flip a coin to decide. To see this mathematically, differentiate the utility function to find that the MRS is constant - this is the technical meaning of perfect substitutes. As a result of this, the solution to the consumer's constrained maximization problem will not (generally) be an interior solution, and as such one must check the utility level in the boundary cases (spend entire budget on good x, spend entire budget on good y) to see which is the solution. The special case is when the (constant) MRS equals the price ratio (for example, both goods have the same price, and same coefficients in the utility function). In this case, any combination of the two goods is a solution to the consumer problem.
Reaction to changes in prices
For a given level of real wealth, only relative prices matter to consumers, not absolute prices. If consumers reacted to changes in nominal prices and nominal wealth even if relative prices and real wealth remained unchanged, this would be an effect called money illusion. The mathematical first order conditions for a maximum of the consumer problem guarantee that the demand for each good is homogeneous of degree zero jointly in nominal prices and nominal wealth, so there is no money illusion.
When the prices of goods change, the optimal consumption of these goods will depend on the substitution and income effects. The substitution effect says that if the demand for both goods is homogeneous, when the price of one good decreases (holding the price of the other good constant) the consumer will consume more of this good and less of the other, as it becomes relatively cheaper. The same goes if the price of one good increases: consumers will buy less of that good and more of the other.
The income effect occurs when a change in the prices of goods causes a change in real income. If the price of one good rises, then real income is decreased (it is more costly than before to consume the same bundle); likewise, if the price of a good falls, real income is increased (it is cheaper to consume the same bundle, so the consumer can consume more of their desired combination of goods).
Reaction to changes in income
If the consumer's income is increased, their budget line is shifted outwards and they now have more income to spend on either good x, good y, or both, depending on their preferences for each good. If both goods x and y were normal goods, then consumption of both goods would increase and the optimal bundle would move from A to C (see figure 5). If either x or y were inferior goods, then demand for these would decrease as income rises (the optimal bundle would be at point B or C).
Bounded rationality
For further information, see: Bounded rationality
In practice, a consumer may not always pick an optimal bundle. For example, it may require too much thought or too much time. Bounded rationality is a theory that explains this behaviour. Examples of alternatives to utility maximisation due to bounded rationality are satisficing, elimination by aspects and the mental accounting heuristic.
The satisficing heuristic is when a consumer defines an aspiration level and searches until they find an option that satisfies it; they then deem this option good enough and stop looking.
Elimination by aspects involves defining a required level for each aspect of the desired product (e.g. price under $100, a particular colour, etc.) and eliminating all options that do not meet a requirement, until only one product is left, which is assumed to be the product the consumer will choose.
The mental accounting heuristic: In this strategy it is seen that people often assign subjective values to their money depending on their preferences for different things. A person will develop mental accounts for different expenses, allocate their budget within these, then try to maximise their utility within each account.
Related concepts
The relationship between the utility function and Marshallian demand in the utility maximisation problem mirrors the relationship between the expenditure function and Hicksian demand in the expenditure minimisation problem. In expenditure minimisation, the utility level is given, as well as the prices of goods; the role of the consumer is to find the minimum level of expenditure required to reach this utility level.
The utilitarian social choice rule is a rule that says that society should choose the alternative that maximizes the sum of utilities. While utility-maximization is done by individuals, utility-sum maximization is done by society.
See also
Welfare maximization
Profit maximization
Choice modelling
Expenditure minimisation problem
Optimal decision
Substitution effect
Utility function
Law of demand
Marginal utility
References
External links
Anatomy of Cobb-Douglas Type Utility Functions in 3D
Rules for maximising utility by lumen learning
An example of utility maximisation
Utility maximisation definition by Economics help
Application of a utility function by Investopedia
Definition of substitute goods by Investopedia
Optimal decisions
Utility
Mathematical optimization
Business and economics portal | Utility maximization problem | [
"Mathematics"
] | 2,324 | [
"Mathematical optimization",
"Mathematical analysis"
] |
1,018,367 | https://en.wikipedia.org/wiki/Reclaimed%20water | Water reclamation is the process of converting municipal wastewater or sewage and industrial wastewater into water that can be reused for a variety of purposes . It is also called wastewater reuse, water reuse or water recycling. There are many types of reuse. It is possible to reuse water in this way in cities or for irrigation in agriculture. Other types of reuse are environmental reuse, industrial reuse, and reuse for drinking water, whether planned or not. Reuse may include irrigation of gardens and agricultural fields or replenishing surface water and groundwater. This latter is also known as groundwater recharge. Reused water also serve various needs in residences such as toilet flushing, businesses, and industry. It is possible to treat wastewater to reach drinking water standards. Injecting reclaimed water into the water supply distribution system is known as direct potable reuse. Drinking reclaimed water is not typical. Reusing treated municipal wastewater for irrigation is a long-established practice. This is especially so in arid countries. Reusing wastewater as part of sustainable water management allows water to remain an alternative water source for human activities. This can reduce scarcity. It also eases pressures on groundwater and other natural water bodies.
There are several technologies used to treat wastewater for reuse. A combination of these technologies can meet strict treatment standards and make sure that the processed water is hygienically safe, meaning free from pathogens. The following are some of the typical technologies: Ozonation, ultrafiltration, aerobic treatment (membrane bioreactor), forward osmosis, reverse osmosis, and advanced oxidation, or activated carbon. Some water-demanding activities do not require high grade water. In this case, wastewater can be reused with little or no treatment.
The cost of reclaimed water exceeds that of potable water in many regions of the world, where fresh water is plentiful. The costs of water reclamation options might be compared to the costs of alternative options which also achieve similar effects of freshwater savings, namely greywater reuse systems, rainwater harvesting and stormwater recovery, or seawater desalination.
Water recycling and reuse is of increasing importance, not only in arid regions but also in cities and contaminated environments. Municipal wastewater reuse is particularly high in the Middle East and North Africa region, in countries such as the UAE, Qatar, Kuwait and Israel.
Definition
The term "water reuse" is generally used interchangeably with terms such as wastewater reuse, water reclamation, and water recycling. A definition by the USEPA states: "Water reuse is the method of recycling treated wastewater for beneficial purposes, such as agricultural and landscape irrigation, industrial processes, toilet flushing, and groundwater replenishing (EPA, 2004)." A similar description is: "Water Reuse, the use of reclaimed water from treated wastewater, has been a long-established reality in many (semi)arid countries and regions. It helps to alleviate water scarcity by supplementing limited freshwater resources."
The water that is used as an input to the treatment and reuse processes can be from a variety of sources. Usually it is wastewater (domestic or municipal, industrial or agricultural wastewater) but it could also come from urban runoff.
Overview
Reclaimed water is water that is used more than one time before it passes back into the natural water cycle. Advances in municipal wastewater treatment technology allow communities to reuse water for many different purposes. The water is treated differently depending upon the source and use of the water as well as how it gets delivered.
Driving forces
The World Health Organization has recognized the following principal driving forces for municipal wastewater reuse:
increasing water scarcity and stress,
increasing populations and related food security issues,
increasing environmental pollution from improper wastewater disposal, and
increasing recognition of the resource value of wastewater, excreta and greywater.
In some areas, one driving force is also the implementation of advanced wastewater treatment for the removal of organic micropollutants, which leads to an overall improved water quality.
Water recycling and reuse is of increasing importance, not only in arid regions but also in cities and contaminated environments.
Already, the groundwater aquifers that are used by over half of the world population are being over-drafted. Reuse will continue to increase as the world's population becomes increasingly urbanized and concentrated near coastlines, where local freshwater supplies are limited or are available only with large capital expenditure. Large quantities of freshwater can be saved by municipal wastewater reuse and recycling, reducing environmental pollution and improving carbon footprint. Reuse can be an alternative water supply option.
Achieving more sustainable sanitation and wastewater management will require emphasis on actions linked to resource management, such as wastewater reuse or excreta reuse that will keep valuable resources available for productive uses. This in turn supports human wellbeing and broader sustainability.
Potential benefits
Water/wastewater reuse, as an alternative water source, can provide significant economic, social and environmental benefits, which are key motivators for implementing such reuse programs. These benefits include:
For cities and households: Increased water availability (drinking water substitution – keep drinking water for drinking and reclaimed water for non-drinking use such as industry, cleaning, irrigation, domestic uses, and toilet flushing).
For the environment: Reduced nutrient loads to receiving waters (i.e. rivers, canals and other surface water resources); reduced over-abstraction of surface and groundwater; enhanced environmental protection by restoration of streams, wetlands and ponds; reduced energy consumption associated with production, treatment, and distribution of water (1.2 to 2.1 kWh/m3) compared to using deep groundwater resources, water importation or desalination
Reduced manufacturing costs of using high quality reclaimed water
In agriculture: Irrigation with treated wastewater may contribute to improve production yields, reduce the ecological footprint and promote socioeconomic benefits. It may also lead to reduced application of fertilizers (i.e. conservation of nutrients and reducing the need for artificial fertilizer through soil nutrition by the nutrients existing in the treated effluents).
Reclaiming water for reuse applications instead of using freshwater supplies can be a water-saving measure. When used water is eventually discharged back into natural water sources, it can still have benefits to ecosystems, improving streamflow, nourishing plant life and recharging aquifers, as part of the natural water cycle.
Scale
Global treated wastewater reuse is estimated at 40.7 billion m3 per year, representing approximately 11% of the total domestic and manufacturing wastewater produced. Municipal wastewater reuse is particularly high in the Middle East and North Africa region, in countries such as the UAE, Qatar, Kuwait and Israel.
For the Sustainable Development Goal 6 by the United Nations, Target 6.3 states "Halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally by 2030".
Types and applications
Treated wastewater can be reused in industry (for example in cooling towers), in artificial recharge of aquifers, in agriculture, and in the rehabilitation of natural ecosystems (for example in wetlands). The main reclaimed water applications in the world are shown below:
Urban reuse
In rarer cases reclaimed water is also used to augment drinking water supplies. Most of the uses of water reclamation are non-potable uses such as washing cars, flushing toilets, cooling water for power plants, concrete mixing, artificial lakes, irrigation for golf courses and public parks, and for hydraulic fracturing. Where applicable, systems run a dual piping system to keep the recycled water separate from the potable water.
Usage types are distinguished as follows:
Unrestricted: The use of reclaimed water for non-potable applications in municipal settings, where public access is not restricted.
Restricted: The use of reclaimed water for non-potable applications in municipal settings, where public access is controlled or restricted by physical or institutional barriers, such as fencing, advisory signage, or temporal access restriction.
Agricultural reuse
Irrigation with recycled municipal wastewater can also serve to fertilize plants if it contains nutrients, such as nitrogen, phosphorus and potassium. There are benefits of using recycled water for irrigation, including the lower cost compared to some other sources and consistency of supply regardless of season, climatic conditions and associated water restrictions. When reclaimed water is used for irrigation in agriculture, the nutrient (nitrogen and phosphorus) content of the treated wastewater has the benefit of acting as a fertilizer. This can make the reuse of excreta contained in sewage attractive.
The irrigation water can be used in different ways on different crops, such as for food crops to be eaten raw or for crops which are intended for human consumption to be eaten raw or unprocessed. For processed food crops: crops which are intended for human consumption not to be eaten raw but after food processing (i.e. cooked, industrially processed). It can also be used on crops which are not intended for human consumption (e.g. pastures, forage, fiber, ornamental, seed, forest and turf crops).
Risks in agricultural reuse
In developing countries, agriculture is increasingly using untreated municipal wastewater for irrigation – often in an unsafe manner. Cities provide lucrative markets for fresh produce, so they are attractive to farmers. However, because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with urban waste directly to water their crops.
There can be significant health hazards related to using untreated wastewater in agriculture. Municipal wastewater can contain a mixture of chemical and biological pollutants. In low-income countries, there are often high levels of pathogens from excreta. In emerging nations, where industrial development is outpacing environmental regulation, there are increasing risks from inorganic and organic chemicals. The World Health Organization developed guidelines for safe use of wastewater in 2006, advocating a 'multiple-barrier' approach to wastewater use, for example by encouraging farmers to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight; applying water carefully so it does not contaminate leaves likely to be eaten raw; cleaning vegetables with disinfectant; or allowing fecal sludge used in farming to dry before being used as manure.
Drawbacks or risks often mentioned include the content of potentially harmful substances such as bacteria, heavy metals, or organic pollutants (including pharmaceuticals, personal care products and pesticides). Irrigation with wastewater can have both positive and negative effects on soil and plants, depending on the composition of the wastewater and on the soil or plant characteristics.
Environmental reuse
The use of reclaimed water to create, enhance, sustain, or augment water bodies including wetlands, aquatic habitats, or stream flow is called "environmental reuse". For example, constructed wetlands fed by wastewater provide both wastewater treatment and habitats for flora and fauna.
Industrial reuse
Treated wastewater can be reused in industry (for example in cooling towers).
Planned potable reuse
Planned potable reuse is publicly acknowledged as an intentional project to recycle water for drinking water. There are two ways in which potable water can be delivered for reuse – "Indirect Potable Reuse" (IPR) and "Direct Potable Reuse". Both these forms of reuse are described below, and commonly involve a more formal public process and public consultation program than is the case with de facto or unacknowledged reuse.
Some water agencies reuse highly treated effluent from municipal wastewater or resource recovery plants as a reliable, drought-proof source of drinking water. By using advanced purification processes, they produce water that meets all applicable drinking water standards. System reliability and frequent monitoring and testing are imperative to their meeting stringent controls.
The water needs of a community, water sources, public health regulations, costs, and the types of water infrastructure in place— such as distribution systems, man-made reservoirs, or natural groundwater basins— determine if and how reclaimed water can be part of the drinking water supply. Some communities reuse water to replenish groundwater basins. Others put it into surface water reservoirs. In these instances the reclaimed water is blended with other water supplies and/or sits in storage for a certain amount of time before it is drawn out and gets treated again at a water treatment or distribution system. In some communities, the reused water is put directly into pipelines that go to a water treatment plant or distribution system.
Modern technologies such as reverse osmosis and ultraviolet disinfection are commonly used when reclaimed water will be mixed with the drinking water supply.
Many people associate a feeling of disgust with reclaimed water and 13% of a survey group said they would not even sip it. Nonetheless, the main health risk for potable use of reclaimed water is the potential for pharmaceutical and other household chemicals or their derivatives (environmental persistent pharmaceutical pollutants) to persist in this water. This would be less of a concern if human excreta was kept out of sewage by using dry toilets or, alternatively, systems that treat blackwater separately from greywater.
Indirect potable reuse
Indirect potable reuse (IPR) means the water is delivered to the consumer indirectly. After it is purified, the reused water blends with other supplies and/or sits a while in some sort of storage, man-made or natural, before it gets delivered to a pipeline that leads to a water treatment plant or distribution system. That storage could be a groundwater basin or a surface water reservoir.
Some municipalities are using and others are investigating IPR of reclaimed water. For example, reclaimed water may be pumped into (subsurface recharge) or percolated down to (surface recharge) groundwater aquifers, pumped out, treated again, and finally used as drinking water. This technique may also be referred to as groundwater recharging. This includes slow processes of further multiple purification steps via the layers of earth/sand (absorption) and microflora in the soil (biodegradation).
IPR or even unplanned potable use of reclaimed wastewater is used in many countries, where the latter is discharged into groundwater to hold back saline intrusion in coastal aquifers. IPR has generally included some type of environmental buffer, but conditions in certain areas have created an urgent need for more direct alternatives.
IPR occurs through the augmentation of drinking water supplies with municipal wastewater treated to a level suitable for IPR followed by an environmental buffer (e.g. rivers, dams, aquifers, etc.) that precedes drinking water treatment. In this case, municipal wastewater passes through a series of treatment steps that encompasses membrane filtration and separation processes (e.g. MF, UF and RO), followed by an advanced chemical oxidation process (e.g. UV, UV+H2O2, ozone). In ‘indirect' potable reuse applications, the reclaimed wastewater is used directly or mixed with other sources.
Direct potable reuse
Direct potable reuse (DPR) means the reused water is put directly into pipelines that go to a water treatment plant or distribution system. Direct potable reuse may occur with or without "engineered storage" such as underground or above ground tanks. In other words, DPR is the introduction of reclaimed water derived from domestic wastewater after extensive treatment and monitoring to assure that strict water quality requirements are met at all times, directly into a municipal water supply system.
Reuse in space stations
Wastewater reclamation can be especially important in relation to human spaceflight. In 1998, NASA announced it had built a human waste reclamation bioreactor designed for use in the International Space Station and a crewed Mars mission. Human urine and feces are input into one end of the reactor and pure oxygen, pure water, and compost (humanure) are output from the other end. The soil could be used for growing vegetables, and the bioreactor also produces electricity.
Aboard the International Space Station, astronauts have been able to drink recycled urine due to the introduction of the ECLSS system. The system costs $250 million and has been working since May 2009. The system recycles wastewater and urine back into potable water used for drinking, food preparation, and oxygen generation. This cuts back on the need to frequently resupply the space station.
De facto wastewater reuse (unplanned potable reuse)
De facto, unacknowledged or unplanned potable reuse refers to situations where reuse of treated wastewater is practiced but is not officially recognized. For example, a sewage treatment plant from one city may be discharging effluents to a river which is used as a drinking water supply for another city downstream.
Unplanned Indirect Potable Use has existed for a long time. Large towns on the River Thames upstream of London (Oxford, Reading, Swindon, Bracknell) discharge their treated sewage ("non-potable water") into the Thames, which supplies water to London downstream. In the United States, the Mississippi River serves as both the destination of sewage treatment plant effluent and the source of potable water.
Design considerations
Distribution
Non-potable reclaimed water is often distributed with a dual piping network that keeps reclaimed water pipes completely separate from potable water pipes.
Treatment processes
There are several technologies used to treat wastewater for reuse. A combination of these technologies can meet strict treatment standards and make sure that the processed water is hygienically safe, meaning free from pathogens. Some common technologies include ozonation, ultrafiltration, aerobic treatment (membrane bioreactor), forward osmosis, reverse osmosis, advanced oxidation or activated carbon. Reclaimed water providers use multi-barrier treatment processes and constant monitoring to ensure that reclaimed water is safe and treated properly for the intended end use.
Some water-demanding activities do not require high grade water. In this case, wastewater can be reused with little or no treatment. One example of this scenario is in the domestic environment where toilets can be flushed using greywater from baths and showers with little or no treatment.
In the case of municipal wastewater, the wastewater must pass through numerous sewage treatment process steps before it can be used. Steps might include screening, primary settling, biological treatment, tertiary treatment (for example reverse osmosis), and disinfection.
Wastewater is generally treated only to the secondary level when used for irrigation.
A pump station distributes reclaimed water to users around a city. These may include golf courses, agricultural uses, cooling towers, or landfills.
Alternative options
Rather than treating municipal wastewater for reuse purposes, other options can achieve similar effects of freshwater savings:
Greywater reuse systems – at a household level, treated or untreated greywater may be used for flush toilets or to water a garden.
Rainwater harvesting and stormwater recovery – Urban design systems which incorporate rainwater harvesting and reduce runoff are known as water-sensitive urban design (WSUD) in Australia, low-impact development (LID) in the United States and sustainable urban drainage systems (SUDS) in the United Kingdom.
Seawater desalination – an energy-intensive process where salt and other minerals are removed from seawater to produce potable water for drinking and irrigation, typically through membrane filtration (reverse osmosis) or steam distillation.
Costs
The cost of reclaimed water exceeds that of potable water in many regions of the world where fresh water is plentiful. However, reclaimed water is usually sold to citizens at a cheaper rate to encourage its use. As fresh water supplies become limited by distribution costs, increased population demands, or climate change, the cost ratios will also evolve. The evaluation of reclaimed water needs to consider the entire water supply system, as it may bring important flexibility into the overall system.
Reclaimed water systems usually require a dual piping network, often with additional storage tanks, which adds to the costs of the system.
Barriers to implementation
Barriers to water reclamation may include:
Full-scale implementation and operation of water reuse schemes still face regulatory, economic, social and institutional challenges.
Low economic viability of water reuse schemes. This may partly be due to costs of water quality monitoring and identification of contaminants. Difficulties in contaminant identification may include the separation of inorganic and organic pollutants, microorganisms, colloids, and others. Full cost recovery from water reuse schemes is difficult. There is a lack of financial water pricing systems comparable to already subsidized conventional treatment plants.
Psychological barriers, sometimes referred to as the "yuck factor", can also be an impediment to implementation, particularly for direct potable reuse plans. These psychological factors are closely associated with disgust, specifically pathogen avoidance.
Health aspects
Reclaimed water is considered safe when appropriately used. Reclaimed water planned for use in recharging aquifers or augmenting surface water receives adequate and reliable treatment before mixing with naturally occurring water and undergoing natural restoration processes. Some of this water eventually becomes part of drinking water supplies.
A study published in 2009 compared the differences in water quality between reclaimed/recycled water, surface water, and groundwater. Results indicated that reclaimed water, surface water, and groundwater are more similar than dissimilar with regard to constituents. The researchers tested for 244 representative constituents typically found in water. When detected, most constituents were in the parts-per-billion and parts-per-trillion range. DEET (an insect repellant) and caffeine were found in all water types and in virtually all samples. Triclosan (in antibacterial soap and toothpaste) was found in all water types, but detected in higher levels (parts-per-trillion) in reclaimed water than in surface or groundwater. Very few hormones/steroids were detected in samples, and when detected were at very low levels. Haloacetic acids (a disinfection by-product) were found in all types of samples, even groundwater. The largest difference between reclaimed water and the other waters appears to be that reclaimed water has been disinfected and thus has disinfection byproducts (due to chlorine use).
A 2005 study found that there had been no instances of illness or disease from either microbial pathogens or chemicals, and the risks of using reclaimed water for irrigation are not measurably different from irrigation using potable water.
A 2012 study conducted by the National Research Council in the United States found that the risk of exposure to certain microbial and chemical contaminants from drinking reclaimed water does not appear to be higher than the risk experienced in some current drinking water treatment systems, and may be orders of magnitude lower. This report recommends adjustments to the federal regulatory framework that could enhance public health protection for both planned and unplanned (or de facto) reuse and increase public confidence in water reuse.
Environmental aspects
Using reclaimed water for non-potable applications saves potable water for drinking, since less of the potable supply is diverted to uses that do not require it.
It sometimes contains higher levels of nutrients such as nitrogen, phosphorus and oxygen which may help fertilize garden and agricultural plants when used for irrigation.
Fresh water makes up less than 3% of the world's water resources, and just 1% of that is readily available. Even though fresh water is scarce, only about 3% of it is extracted for direct human consumption; of the water that is withdrawn, most goes to agriculture, which uses roughly two-thirds of all fresh water.
Reclaimed water can offer a viable and effective alternative to freshwater where freshwater supplies are scarce. Reclaimed water is utilized to maintain or increase lake levels, restore wetlands, and restore river flows during hot weather and droughts, protecting biodiversity. Additionally, reclaimed water is utilized for street cleaning, irrigation of urban green spaces, and industrial processes. Reclaimed water has the advantage of being a consistent source of water supply that is unaffected by seasonal droughts and weather changes.
The usage of water reclamation decreases the pollution sent to sensitive environments. It can also enhance wetlands, which benefits the wildlife depending on that ecosystem. It also helps to reduce the effects of drought, as recycling of water reduces the draw on fresh water supplies from underground sources. For instance, the San Jose/Santa Clara Water Pollution Control Plant instituted a water recycling program to protect the San Francisco Bay area's natural salt water marshes.
The main potential risks that are associated with reclaimed wastewater reuse for irrigation purposes when the treatment is not adequate are the following:
Contamination of the food chain with microcontaminants, pathogens (i.e. bacteria, viruses, protozoa, helminths), or antibiotic resistance determinants;
Soil salinization and accumulation of various unknown constituents that might adversely affect agricultural production;
Disruption of the indigenous soil microbial communities;
Alteration of the physicochemical and microbiological properties of the soil and contribution to the accumulation of chemical/biological contaminants (e.g. heavy metals, chemicals (i.e. boron, nitrogen, phosphorus, chloride, sodium, pesticides/herbicides), natural chemicals (i.e. hormones), contaminants of emerging concern (CECs) (i.e. pharmaceuticals and their metabolites, personal care products, household chemicals and food additives and their transformation products), etc.) in it and subsequent uptake by plants and crops;
Excessive growth of algae and vegetation in canals carrying wastewater (i.e. eutrophication);
Groundwater quality degradation by the various reclaimed water contaminants, migrating and accumulating in the soil and aquifers.
Guidelines and regulations
International organizations
World Health Organization (WHO): "Guidelines for the safe use of wastewater, excreta and greywater" (2006).
United Nations Environment Programme (UNEP): "Guidelines for municipal wastewater reuse in the Mediterranean region" (2005).
United Nations Water Decade Programme on Capacity Development (UNW-DPC): Proceedings on the UNWater project "Safe use of wastewater in agriculture" (2013).
European Union
Since 26 June 2023, an EU regulation on minimum requirements for water reuse for irrigation purposes has been in force. The water quality requirements are divided into four categories, depending on what is irrigated and how the irrigation is performed. The water quality parameters included are E. coli, BOD5, total suspended solids (TSS), turbidity, Legionella, and intestinal nematodes (helminth eggs).
In the Water Framework Directive, reuse of water is mentioned as one of the possible measures to achieve the Directive's quality goals. However, this remains a relatively vague recommendation rather than a requirement: Part B of Annex VI refers to reuse as one of the "supplementary measures which Member States within each river basin district may choose to adopt as part of the programme of measures required under Article 11(4)".
Besides that, Article 12 of the Urban Wastewater Treatment Directive concerning the reuse of treated wastewater states that "treated wastewater shall be reused whenever appropriate", which some consider not specific enough to promote water reuse as it may leave too much room for interpretation as to what can be considered as an "appropriate" situation to reuse treated wastewater.
Despite the lack of common water reuse criteria at the EU level, several member states have issued their own legislative frameworks, regulations, or guidelines for different water reuse applications (e.g. Cyprus, France, Greece, Italy, and Spain).
However, an evaluation carried out by the European Commission on the water reuse standards of several member states concluded that they differed in their approach. There are important differences among the standards regarding permitted uses, parameters to be monitored, and limit values allowed. This lack of harmonization among water reuse standards could potentially create trade barriers for agricultural goods irrigated with reclaimed water. Once on the common market, the level of safety in the producing member states may not be considered sufficient by the importing countries. The most representative standards on wastewater reuse from European member states are the following:
Cyprus: Law 106 (I) 2002 Water and Soil pollution control and associated regulations (KDP 772/2003, KDP 269/2005) (Issuing Institutions: Ministry of Agriculture, Natural resources and Environment, Water Development Department).
France: Jorf num.0153, 4 July 2014. Order of 2014, related to the use of water from treated urban wastewater for irrigation of crops and green areas (Issuing Institutions: Ministry of Public Health, Ministry of Agriculture, Food and Fisheries, Ministry of Ecology, Energy and Sustainability).
Greece: CMD No 145116. Measures, limits and procedures for reuse of treated wastewater (Issuing Institutions: Ministry of Environment, Energy and Climate Change).
Italy: DM 185/2003. Technical measures for reuse of wastewater (Issuing Institutions: Ministry of Environment, Ministry of Agriculture, Ministry of Public Health).
Portugal: NP 4434 2005. Reuse of reclaimed urban water for irrigation (Issuing Institutions: Portuguese Institute for Quality).
Spain: RD 1620/2007. The legal framework for the reuse of treated wastewater (Issuing Institutions: Ministry of Environment, Ministry of Agriculture, Food and Fisheries, Ministry of Health).
By 2023, a new EU agriculture law may raise water reuse nearly fourfold, from 1.7 billion m3 to 6.6 billion m3, and cut water stress by 5%.
United States
In the U.S., the Clean Water Act of 1972 mandated elimination of the discharge of untreated waste from municipal and industrial sources to make water safe for fishing and recreation. The US federal government provided billions of dollars in grants for building sewage treatment plants around the country. Modern treatment plants, usually using oxidation and/or chlorination in addition to primary and secondary treatment, were required to meet certain standards.
Los Angeles County's sanitation districts started providing treated wastewater for landscape irrigation in parks and golf courses in 1929. The first reclaimed water facility in California was built at San Francisco's Golden Gate Park in 1932. The Water Replenishment District of Southern California was the first groundwater agency to obtain permitted use of recycled water for groundwater recharge in 1962.
Denver's Direct Potable Water Reuse Demonstration Project examined the technical, scientific, and public acceptance aspects of DPR from 1979 to 1993. A chronic lifetime whole-animal health effects study on the 1 MGD advanced treatment plant product was conducted in conjunction with a comprehensive assessment of the chemical and microbiological water quality. The $30 million study found that the water produced met all health standards and compared favorably with Denver's high quality drinking water. Further, the projected cost was lower than estimates for obtaining distant new water supplies.
Reclaimed water is not regulated by the U.S. Environmental Protection Agency (EPA), but the EPA has developed water reuse guidelines that were most recently updated in 2012. The EPA Guidelines for Water Reuse represents the international standard for best practices in water reuse. The document was developed under a Cooperative Research and Development Agreement between the EPA, the U.S. Agency for International Development (USAID), and the global consultancy CDM Smith. The Guidelines provide a framework for states to develop regulations that incorporate the best practices and address local requirements.
Trade associations
The WateReuse Association is a trade association in the United States which promotes reuse of water. According to their website, "The WateReuse Association is the nation's only trade association solely dedicated to advancing laws, policy, funding, and public acceptance of recycled water. WateReuse represents a coalition of utilities that recycle water, businesses that support the development of recycled water projects, and consumers of recycled water." The WateReuse Research Foundation was merged into the WateReuse Association on July 11, 2016.
Other countries
Canada: "Canadian guidelines for domestic reclaimed water for use in toilet and urinal flushing" (2010).
China: China National Reclaimed Water Quality Standard; China National Standard GB/T 18920-2002, GB/T 19923-2005, GB/T 18921-2002, GB 20922-2007 and GB/T 19772-2005.
Israel: Ministry of Health regulation (2005).
Japan: National Institute for Land and Infrastructure Management: Report of the Microbial Water Quality Project on Treated Sewage and Reclaimed Wastewater (2008).
Jordan: Jordanian technical base n. 893/2006 Jordan water reuse management Plan (policy).
Mexico: Mexican Standard NOM-001-ECOL-1996 governing wastewater reuse in Agriculture.
South Africa: The latest revision of the Water Services Act of 1997 relating to grey-water and treated effluent (Department of Water Affairs and Forestry, 2001).
Tunisia: Standard for the use of treated wastewater in agriculture (NT 106-109 of 1989) and list of crops that can be irrigated with treated wastewater (Ministry of Agriculture, 1994).
Australia: National level Guidelines: Government of Australia (the Natural Resource Management Ministerial Council, the Environment Protection and Heritage Council, and the Australian Health Ministers Conference (NRMMC-EPHC-AHMC)): "Guidelines for water recycling: managing health and environmental risks", Phase 1, 2006.
History
Wastewater reuse (planned or unplanned) is a practice which has been applied throughout human history and is closely connected to the development of sanitation.
Country examples
Australia
Israel
Namibia
Singapore
Water reclamation was pursued primarily due to geopolitical tensions arising from Singapore's dependency on water imported from Malaysia.
South Africa
See also
Bioretention
One Water (water management)
Water conservation
Water heat recycling
Water recycling shower
WateReuse
References
Further reading
Hoffman, Steve. Planet Water: Investing in the World’s Most Valuable Resource. New York: Wiley, 2009.
Pearce, Fred. When the Rivers Run Dry: Water-The Defining Crisis of the Twenty-First Century. Boston: Beacon Press, 2007.
Solomon, Steven. Water: The Epic Struggle for Wealth, Power, and Civilization. New York: Harper, 2010.
External links
Hydrology and urban planning
Irrigation
Recycling by material
Reuse
Sanitation
Sustainable agriculture
Sustainable technologies
Water conservation
Water supply
Water treatment | Reclaimed water | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,979 | [
"Hydrology",
"Water treatment",
"Water pollution",
"Hydrology and urban planning",
"Environmental engineering",
"Water technology",
"Water supply"
] |
1,018,482 | https://en.wikipedia.org/wiki/Palomar%20Planet-Crossing%20Asteroid%20Survey | The Palomar Planet-Crossing Asteroid Survey (PCAS) was an astronomical survey, initiated by American astronomers Eleanor Helin and Eugene Shoemaker at the U.S Palomar Observatory, California, in 1973. The program is responsible for the discovery of 95 near-Earth Objects including 17 comets, while the Minor Planet Center directly credits PCAS with the discovery of 20 numbered minor planets during 1993–1994. PCAS ran for nearly 25 years until June 1995. It had an international extension, INAS, and was the immediate predecessor of the outstandingly successful NEAT program.
Notable discoveries
The first NEO discovered by PCAS was (5496) 1973 NA, an Apollo asteroid with an exceptional orbital inclination of 68°, the most highly inclined minor planet known until 1999. In 1976, Eleanor Helin discovered 2062 Aten, the first of a new class of asteroids called the Aten asteroids, with small orbits that are never far from Earth's orbit. As a result, these objects have a particularly high probability of colliding with the Earth. In 1979, Helin discovered an Apollo-type asteroid that was later identified as the comet 4015 Wilson–Harrington. It was the first confirmation that a comet can evolve into an asteroid after it has degassed.
List of discovered minor planets
See also
Brian P. Roman
Spacewatch
List of near-Earth object observation projects
References
Publications
1973 in science
Asteroid surveys
Astronomical surveys
Near-Earth Asteroid Tracking
Astronomical discoveries by institution | Palomar Planet-Crossing Asteroid Survey | [
"Astronomy"
] | 297 | [
"Astronomical surveys",
"Works about astronomy",
"Astronomical objects"
] |
1,018,499 | https://en.wikipedia.org/wiki/International%20Near-Earth%20Asteroid%20Survey | The International Near-Earth Asteroid Survey (INAS) was an astronomical survey, organized and co-ordinated by prolific American astronomer Eleanor Helin during the 1980s. It is considered to be the international extension of the Planet-Crossing Asteroid Survey (PCAS). While PCAS operated exclusively from the U.S. Palomar Observatory in California, INAS attempted to encourage and stimulate worldwide interest in asteroids, and to expand the sky coverage and the discovery and recovery of near-Earth objects around the world.
The IAU's Minor Planet Center credits INAS with the discovery of 8 minor planets in 1986 (compared to 20 discoveries made by PCAS during 1993–1994). One of the discoveries was the 7-kilometer sized main-belt asteroid 4121 Carlin.
See also
List of near-Earth object observation projects
References
External links
Astronomical surveys
Asteroid surveys | International Near-Earth Asteroid Survey | [
"Astronomy"
] | 176 | [
"Astronomical surveys",
"Works about astronomy",
"Astronomical objects"
] |
1,018,525 | https://en.wikipedia.org/wiki/Carina%E2%80%93Sagittarius%20Arm | The Carina–Sagittarius Arm (also known as the Sagittarius Arm or Sagittarius–Carina Arm, labeled -I) is generally thought to be a minor spiral arm of the Milky Way galaxy. Each spiral arm is a long, diffuse curving streamer of stars that radiates from the Galactic Center. These gigantic structures are often composed of billions of stars and thousands of gas clouds. The Carina–Sagittarius Arm is one of the most pronounced arms in our galaxy as many HII regions, young stars and giant molecular clouds are concentrated in it.
The Milky Way is a barred spiral galaxy, consisting of a central crossbar and bulge from which two major and several minor spiral arms radiate outwards. The Carina–Sagittarius Arm lies between two major spiral arms: the Scutum–Centaurus Arm, whose near part is visible looking inward (toward the Galactic Center), with the rest lying beyond the galactic central bulge, and the Perseus Arm, which is similar in size and shape but locally much closer, seen looking outward, away from the bright, immediately obvious extent of the Milky Way under ideal observing conditions. The arm is named for its proximity to the Sagittarius and Carina constellations as seen in the night sky from Earth, in the direction of the Galactic Center.
The arm dissipates near its middle, shortly after reaching its maximal angular separation from the Galactic Center, about 80° as viewed from the Solar System. The portion extending from the galaxy's central bar is the Sagittarius Arm (Sagittarius bar); beyond the dissipated zone it continues as the Carina Arm.
Geometry
A study measured the parallaxes and proper motions of 10 massive star-forming regions in the Sagittarius Arm. Data were gathered using the BeSSeL Survey with the VLBA, and the results were combined to derive the physical properties of this section of the arm (spanning Galactocentric azimuths from about −2° to 65°). The results were that the spiral pitch angle of the arm is 7.3 ± 1.5 degrees, and the half-width of the arm was found to be 0.2 kpc. The nearest part of the arm is around 1.4 ± 0.2 kpc from the Sun.
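For illustration, a logarithmic spiral is the usual geometric model behind a quoted pitch angle. The Python sketch below shows how a 7.3° pitch angle translates into a slowly growing galactocentric radius with azimuth; the anchor radius of 6.0 kpc and the azimuth values are illustrative assumptions, not results from the study.

```python
import math

def log_spiral_radius(r0_kpc, pitch_deg, azimuth_deg, azimuth0_deg=0.0):
    """Galactocentric radius of a logarithmic spiral arm at a given azimuth,
    assuming R = R0 * exp(tan(pitch) * (phi - phi0))."""
    pitch = math.radians(pitch_deg)
    dphi = math.radians(azimuth_deg - azimuth0_deg)
    return r0_kpc * math.exp(math.tan(pitch) * dphi)

# Illustrative anchor radius of 6.0 kpc; 7.3 degrees is the pitch angle quoted above.
for azimuth in (0, 20, 40, 65):
    print(f"azimuth {azimuth:3d} deg -> R = {log_spiral_radius(6.0, 7.3, azimuth):.2f} kpc")
```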
Minor arm
In 2008, infrared observations with the Spitzer Space Telescope showed that the Carina–Sagittarius Arm has a relative paucity of young stars, in contrast with the Scutum-Centaurus Arm and Perseus Arm. This suggests that the Carina–Sagittarius Arm is a minor arm, along with the Norma Arm (Outer Arm). These two appear to be mostly concentrations of gas, sparsely sprinkled with pockets of newly formed stars.
Visible objects
A number of Messier objects and other objects visible through an amateur's telescope or binoculars are found in the Sagittarius Arm (here listed approximately in order from east to west along the arm):
M11, the Wild Duck Cluster in Scutum (RA 18h 51m)
Open Cluster M26 in Scutum (RA 18h 45m)
M16, the Eagle Nebula in Serpens (RA 18h 19m)
M17, the Omega Nebula in Sagittarius (RA 18h 20.4m)
Open Cluster M18 in Sagittarius (RA 18h 19.9m)
Globular Cluster M55 in Sagittarius (RA 19h 40m)
M24, the Small Sagittarius Star Cloud (RA 18h 17m)
Open Cluster M21 in Sagittarius (RA 18h 5m)
M8, the Lagoon Nebula in Sagittarius (RA 18h 4m)
NGC 3372, the Carina Nebula in Carina (RA 10h 45m)
See also
Galactic disc
References
External links
http://members.fcac.org/~sol/chview/chv5.htm
Messier Objects in the Milky Way (SEDS)
Milky Way arms
Galactic astronomy
Spiral galaxies | Carina–Sagittarius Arm | [
"Astronomy"
] | 843 | [
"Galactic astronomy",
"Astronomical sub-disciplines"
] |
1,018,540 | https://en.wikipedia.org/wiki/Medina%20quarter | A medina (from ) is a historical district in a number of North African cities, often corresponding to an old walled city. The term comes from the Arabic word simply meaning "city" or "town".
Historical background
Prior to the rise and intrusion of European colonial rule in North Africa, the region was home to many major cities which had long been centres of culture, commerce, and political power over many centuries.
In Algeria, the French conquest that began in 1830 and brought the country under colonial control resulted in significant destruction of the urban fabric of its historic cities. Colonial rule also led to the dismantling of many traditional urban institutions, the disruption of local culture, and even a certain level of depopulation over time. Fewer cities have preserved their pre-colonial urban fabric in Algeria by comparison with neighbouring countries, but significant remains have been preserved in historic cities such as Algiers, Tlemcen, Nedroma, and Constantine, as well as in many Saharan towns. In Algiers, most of the historic lower town was demolished and remodeled along European lines after the French conquest. The only part of the old city that remained relatively untouched was the upper town, which contained the citadel (qasaba) and the former residence of the rulers, and thus became known as the "Casbah" of Algiers.
The fate of traditional walled cities in Tunisia and Morocco, which also came under French colonial rule over the next hundred years, was quite different. The French conquest of Tunisia took place in 1881 and resulted in the establishment of a French "Protectorate", while nominally retaining the existing Tunisian monarchy. In Tunisia the French generally built new planned cities (the Villes Nouvelles) outside the established historic cities. These new planned towns were almost exclusively inhabited by European colonists while the indigenous population predominantly resided in the old districts, resulting in a certain level of racial segregation during the colonial period. Some French assimilationist policies, as witnessed in Algeria, were also implemented in Tunisia. In Tunis, the old city was preserved but it was physically linked with the European town, making it easier to police, while its traditional economic and administrative systems were marginalized, rendering it dependent on the European districts. The most important preserved historic towns or medinas today include those of Tunis, Kairouan, Mahdia, Sfax, and Sousse.
In Morocco, the Treaty of Fes established another French Protectorate over that country in 1912. The first French resident general in Morocco, Hubert Lyautey, appointed Henri Prost to oversee the urban development of cities under his control. One important colonial policy with long-term consequences was the decision to largely forego development of existing historic cities and to deliberately preserve them as sites of historic heritage, the "medinas". The French administration again built new planned cities outside the old walled cities, where European settlers largely resided with modern Western-style amenities. This was part of a larger "policy of association" adopted by Lyautey which favoured various forms of indirect colonial rule by preserving local institutions and elites, in contrast with other French colonial policies favouring assimilation. The desire to preserve historic cities was also consistent with one of the trends in European ideas about urban planning at the time which argued for the preservation of historic cities in Europe – ideas which Lyautey himself favored. Scholar Janet Abu-Lughod has argued that French urban policies and regulations created a kind of urban "apartheid" between the indigenous Moroccan urban areas – which were forced to remain stagnant in terms of urban development – and the new planned cities which were mainly inhabited by Europeans and expanded to occupy rural lands outside the city which were formerly used by Moroccans. This separation was partly softened by wealthy Moroccans who started moving into the Villes Nouvelles during the colonial period.
List of medinas
Algeria
Casbah of Algiers, a medina named after its fortress
Casbah of Dellys
Libya
Derna
Ghadames
Gharyan
Hun
Murzuk
Tripoli
Waddan
Tazirbu
Benghazi
Morocco
Asilah
Casablanca
Chefchaouen
Essaouira
Fes el Bali, the first medina of Fes (considered one of the largest car-free urban areas in the world)
Fes Jdid, the second medina of Fes
Marrakesh
Meknes
Rabat
Salé
Tangier
Taroudant
Taza
Tétouan
Tunisia
Hammamet
Kairouan
Monastir
Sfax
Sousse
Tozeur
Tunis
See also
Altstadt
References
External links
Map of Tunis medina
Carfree Cities: Morocco
Arabic architecture
Berber architecture
Islamic architecture
Maghreb
Urban planning | Medina quarter | [
"Engineering"
] | 922 | [
"Urban planning",
"Architecture"
] |
17,413,598 | https://en.wikipedia.org/wiki/Rarefaction%20%28ecology%29 | In ecology, rarefaction is a technique to assess species richness from the results of sampling. Rarefaction allows the calculation of species richness for a given number of individual samples, based on the construction of so-called rarefaction curves. This curve is a plot of the number of species as a function of the number of samples. Rarefaction curves generally grow rapidly at first, as the most common species are found, but the curves plateau as only the rarest species remain to be sampled.
The issue that occurs when sampling various species in a community is that the larger the number of individuals sampled, the more species will be found. Rarefaction curves are created by randomly re-sampling the pool of N samples multiple times and then plotting the average number of species found in each sample (1, 2, ..., N). "Thus rarefaction generates the expected number of species in a small collection of n individuals (or n samples) drawn at random from the large pool of N samples."
History
The technique of rarefaction was developed in 1968 by Howard Sanders in a biodiversity assay of marine benthic ecosystems, as he sought a model for diversity that would allow him to compare species richness data among sets with different sample sizes; he developed rarefaction curves as a method to compare the shape of a curve rather than absolute numbers of species.
Following initial development by Sanders, the technique of rarefaction has undergone a number of revisions. In a paper criticizing many methods of assaying biodiversity, Stuart Hurlbert identified a problem he saw with Sanders' rarefaction method, namely that it overestimated the number of species for a given sample size, and attempted to refine the method. The issue of overestimation was also dealt with by Daniel Simberloff, while other improvements in rarefaction as a statistical technique were made by Ken Heck in 1975.
Today, rarefaction has grown as a technique not just for measuring species diversity, but of understanding diversity at higher taxonomic levels as well. Most commonly, the number of species is sampled to predict the number of genera in a particular community; similar techniques had been used to determine this level of diversity in studies several years before Sanders quantified his individual to species determination of rarefaction. Rarefaction techniques are used to quantify species diversity of newly studied ecosystems, including human microbiomes, as well as in applied studies in community ecology, such as understanding pollution impacts on communities and other management applications.
Derivation
Deriving rarefaction:
N = total number of items
K = total number of groups
Ni = the number of items in group i (i = 1, ..., K).
Mj = the number of groups consisting of j items
From these definitions, it therefore follows that:

$$\sum_{i=1}^{K} N_i \;=\; \sum_{j=1}^{N} j\,M_j \;=\; N$$

In a rarefied sample we have chosen a random subsample n from the total N items. The relevance of a rarefied sample is that some groups may now be necessarily absent from this subsample. We therefore let:

$X_n$ = the number of groups still present in the subsample of n items

It is true that $X_n$ is less than K whenever at least one group is missing from this subsample.

Therefore the rarefaction curve, $f_n$, is defined as the expected value of $X_n$:

$$f_n = \mathrm{E}[X_n] = K - \binom{N}{n}^{-1} \sum_{i=1}^{K} \binom{N - N_i}{n}$$

From this it follows that $0 \le f(n) \le K$. Furthermore, $f(0) = 0$, $f(1) = 1$ and $f(N) = K$.
Despite being defined at discrete values of n, these curves are most frequently displayed as continuous functions.
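The expected-value formula above can be evaluated directly. The following Python sketch is a minimal implementation of the rarefaction curve from a list of per-group counts; the species counts in the example are made up for illustration.

```python
from math import comb

def rarefaction(group_counts, n):
    """Expected number of groups (e.g. species) in a random subsample of n items
    drawn without replacement: f(n) = K - C(N, n)^-1 * sum_i C(N - N_i, n)."""
    N = sum(group_counts)   # total number of items
    K = len(group_counts)   # total number of groups
    if not 0 <= n <= N:
        raise ValueError("subsample size n must satisfy 0 <= n <= N")
    absent = sum(comb(N - Ni, n) for Ni in group_counts)
    return K - absent / comb(N, n)

# Illustrative counts: three species with 5, 3 and 1 individuals
counts = [5, 3, 1]
curve = [round(rarefaction(counts, n), 3) for n in range(sum(counts) + 1)]
print(curve)   # starts at 0, equals 1 at n = 1, and reaches K = 3 at n = N = 9
```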
Correct usage
Rarefaction curves are necessary for estimating species richness. Raw species richness counts, which are used to create accumulation curves, can only be compared when the species richness has reached a clear asymptote. Rarefaction curves produce smoother lines that facilitate point-to-point or full dataset comparisons.
One can plot the number of species as a function of either the number of individuals sampled or the number of samples taken. The sample-based approach accounts for patchiness in the data that results from natural levels of sample heterogeneity. However, when sample-based rarefaction curves are used to compare taxon richness at comparable levels of sampling effort, the number of taxa should be plotted as a function of the accumulated number of individuals, not accumulated number of samples, because datasets may differ systematically in the mean number of individuals per sample.
One cannot simply divide the number of species found by the number of individuals sampled in order to correct for different sample sizes. Doing so would assume that the number of species increases linearly with the number of individuals present, which is not always true.
Rarefaction analysis assumes that the individuals in an environment are randomly distributed, the sample size is sufficiently large, that the samples are taxonomically similar, and that all of the samples have been performed in the same manner. If these assumptions are not met, the resulting curves will be greatly skewed.
Cautions and criticism
Rarefaction only works well when no taxon is extremely rare or common, or when beta diversity is very high. Rarefaction assumes that the number of occurrences of a species reflects the sampling intensity, but if one taxon is especially common or rare, its number of occurrences will reflect how extreme that taxon's abundance is rather than the intensity of sampling.
The technique does not account for specific taxa. It examines the number of species present in a given sample, but does not look at which species are represented across samples. Thus, two samples that each contain 20 species may have completely different compositions, leading to a skewed estimate of species richness.
The technique does not recognize species abundance, only species richness. A true measure of diversity accounts for both the number of species present and the relative abundance of each.
Rarefaction is unrealistic in its assumption of random spatial distribution of individuals.
Rarefaction does not provide an estimate of asymptotic richness, so it cannot be used to extrapolate species richness trends in larger samples.
References
External links
Rarefaction Calculator
EcoSim Professional Rarefaction Software
PAST (PAlaeontological STatistics)
Biodiversity
Measurement of biodiversity | Rarefaction (ecology) | [
"Biology"
] | 1,246 | [
"Biodiversity",
"Measurement of biodiversity"
] |
17,414,343 | https://en.wikipedia.org/wiki/MVGroup | MVGroup is a BitTorrent tracker and file sharing forum community that specializes in the distribution of educational media, especially documentaries. MVGroup was established in 2002 by "Merrin" and "DarkRain" (Vittorio in those days, hence MVGroup) as a DVD-ripping-and-distributing group for the eDonkey file-sharing network, and the group continues to distribute DVD rips and TV rips on both eDonkey and BitTorrent. It has continued to function since its establishment, except for a short-lived April 2008 outage caused by an error from an anti-piracy group.
On May 5, 2008, "Merrin", the co-founder of the tracker died of undisclosed long-term health problems at the age of 31. By the time of his death, MVGroup had gained over 150,000 members, and has continued to set itself apart from larger trackers, such as The Pirate Bay, by focusing on documentaries and educational material only.
See also
Comparison of BitTorrent sites
References
External links
MVGroup
DocuWiki.net - an index of documentary films on the eDonkey network
BitTorrent websites
Internet properties established in 2002
File sharing communities | MVGroup | [
"Technology"
] | 244 | [
"File sharing communities",
"Computing websites"
] |
17,414,348 | https://en.wikipedia.org/wiki/Phylogenetic%20diversity | Phylogenetic diversity is a measure of biodiversity which incorporates phylogenetic difference between species. It is defined and calculated as "the sum of the lengths of all those branches that are members of the corresponding minimum spanning path", in which 'branch' is a segment of a cladogram, and the minimum spanning path is the minimum distance between the two nodes.
This definition is distinct from earlier measures which attempted to incorporate phylogenetic diversity into conservation planning, such as the measure of 'taxic diversity' introduced by Vane-Wright, Humphries, and Williams.
The concept of phylogenetic diversity has been rapidly adopted in conservation planning, with programs such as the Zoological Society of London's EDGE of Existence programme focused on evolutionary distinct species. Similarly, the WWF's Global 200 also includes unusual evolutionary phenomena in their criteria for selecting target ecoregions.
Some studies have indicated that alpha diversity is a good proxy for phylogenetic diversity, suggesting that the term has little additional use, but a study in the Cape Floristic Region showed that while phylogenetic and species/genus diversity are very strongly correlated (R2 = 0.77 and 0.96, respectively), using phylogenetic diversity led to the selection of different conservation priorities than using species richness. It also demonstrated that PD led to greater preservation of 'feature diversity' than species richness alone.
References
Measurement of biodiversity | Phylogenetic diversity | [
"Biology"
] | 269 | [
"Biodiversity",
"Measurement of biodiversity"
] |
17,414,470 | https://en.wikipedia.org/wiki/George%20C.%20Schatz | George Chappell Schatz (born April 14, 1949), the Morrison Professor of Chemistry at Northwestern University, is a theoretical chemist best known for his seminal contributions to the fields of reaction dynamics and nanotechnology.
Early life and career
Born in Watertown, New York and raised in Sackets Harbor, New York, he obtained his B. S. in chemistry from Clarkson University in 1971. At Clarkson, he was mentored by organic chemistry professor Richard Partsch, who encouraged him to spend a summer working at Argonne National Laboratory in 1971. He went on to earn a Ph.D. from Caltech in 1976 under Aron Kuppermann. While working on his doctorate, he took classes taught by Richard Feynman on quantum electrodynamics and particle physics. Following postdoctoral work at MIT under John Ross, he joined the chemistry department at Northwestern University. Schatz is a member of the Center for Chemistry at the Space-Time Limit.
To date he has co-authored over 1000 scientific papers and co-authored two books with his colleague Mark A. Ratner: Introduction to Quantum Mechanics in Chemistry and Quantum Mechanics in Chemistry. Recently much of Schatz's research has been concerned with nanotechnology and bionanotechnology. His work has collectively received over 130,000 citations, including a 2003 article on the optical properties of nanoparticles which has been cited more than 12,000 times.
A longtime senior editor of the Journal of Physical Chemistry, he became its editor-in-chief in 2005. The journal had previously (in 1997) been split into the Journal of Physical Chemistry A (molecular physical chemistry, both theoretical and experimental) and the Journal of Physical Chemistry B (solid state, soft matter, liquids); Schatz initiated the spin-off of a third journal, the Journal of Physical Chemistry C, focusing on nanotechnology and molecular electronics.
Awards and honors
Schatz is a Fellow of the American Physical Society (1987) and a member of the National Academy of Sciences (2007) and the International Academy of Quantum Molecular Science. Schatz has won the Ahmed Zewail Prize award of the journal Chemical Physics Letters for "outstanding contributions to the theory and understanding of gas-phase reaction dynamics, plasmonics, and nanostructured materials". The biennial prize was developed by Elsevier to honor Nobel laureate Ahmed Zewail, who was a longtime editor of the journal.
References
External links
Official home page at Northwestern University
Editorial profile at the Journal of Physical Chemistry home page
1949 births
Living people
Northwestern University faculty
American physical chemists
Theoretical chemists
Members of the International Academy of Quantum Molecular Science
Members of the United States National Academy of Sciences
Fellows of the American Physical Society | George C. Schatz | [
"Chemistry"
] | 552 | [
"Theoretical chemists",
"American theoretical chemists"
] |
17,414,542 | https://en.wikipedia.org/wiki/Telechron | Telechron was an American company that manufactured electric clocks between 1912 and 1992. "Telechron" is derived from the Greek words tele, meaning "far off," and chronos, "time," thus referring to the transmission of time over long distances. Founded by Henry Ellis Warren, Telechron introduced the synchronous electric clock, which keeps time by the oscillations of the alternating current electricity that powers it from the electric power grid. Telechron had its heyday between 1925 and 1955, when it sold millions of electric clocks to American consumers.
Henry Warren: the synchronous motor and the master clock
Henry E. Warren established the company in 1912 in Ashland, Massachusetts. Initially, it was called "The Warren Clock Company," producing battery-powered clocks. These proved unreliable, however, since batteries weakened quickly, which resulted in inaccurate time-keeping. Warren saw electric motors as the solution to this problem. In 1915, he invented a self-starting synchronous motor consisting of a rotor and a coil, which was patented in 1918. A synchronous motor spins at the same rate as the cycle of the alternating current driving it. Synchronous electric clocks had been available previously, but had to be started manually. In later years, Telechron would advertise its clocks as "bringing true time," because power plants had begun to maintain frequency of the alternating current very close to an average of 60 Hz.
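To make the dependence on grid frequency concrete, the Python sketch below shows how a clock that simply counts AC cycles accumulates error whenever the delivered frequency departs from the nominal 60 Hz; the frequency profile is invented for illustration.

```python
def synchronous_clock_error(frequencies_hz, interval_s, nominal_hz=60.0):
    """Seconds gained (+) or lost (-) by a clock that advances one 'second'
    for every 60 cycles of alternating current it receives."""
    cycles_delivered = sum(f * interval_s for f in frequencies_hz)
    time_displayed = cycles_delivered / nominal_hz
    time_elapsed = interval_s * len(frequencies_hz)
    return time_displayed - time_elapsed

# One hour, sampled as six ten-minute intervals with small frequency dips
profile = [60.01, 59.97, 59.98, 60.00, 59.99, 60.02]
print(f"{synchronous_clock_error(profile, 600):+.2f} s")   # about -0.30 s slow
```

Because the error depends only on the accumulated cycle count, a utility that later runs slightly fast can cancel an earlier deficit, which is why holding the long-term average at 60 Hz was enough to keep synchronous clocks accurate.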
But such constancy did not yet exist when Warren first experimented with his synchronous motors. Irregularities in the frequency of the alternating current led not only to inaccurate time-keeping but, more seriously, to incompatible power grids in the United States, as power could not readily be transferred from one grid to another. In order to overcome these problems, Warren invented a "master clock," which he installed at the Boston Edison Company in 1916.
This master clock had two movements, one driven by a synchronous motor connected to the current produced by the power plant, the other driven by a traditional spring and pendulum. The pendulum was adjusted twice a day in accordance with time signals received from the Naval Observatory. As long as the hands of the electric clock, powered by a 60 Hz synchronous motor, moved along perfectly with those of the "traditional" clock, the power produced by the electric company was uniform. In Electrifying Time, Jim Linz writes that "in 1947, Warren Master Clocks regulated over 95 percent of the electric lines in the United States."
It is interesting to note, then, that the uniformity of alternating current in the United States, which was necessary in order to build large power grids, was initially ensured by a very traditional clock system. Furthermore, Henry Warren invented his master clock at first simply in order to guarantee that his synchronous clock motor would provide accurate time.
Telechron and Art Deco
The Telechron company's success from the 1920s into the 1950s was not solely due to the technical advantages of their clocks, although all Telechron clocks were powered by successive versions of Henry Warren's synchronous motor. Rather, the Telechron company sought to produce clocks whose designs reflected one of the fundamental principles of the Art Deco movement: to combine modern engineering (including mass-production) with the beauty of simple geometric shapes. Thus, Telechron clocks are often considered genuine pieces of art—but art affordable by all, as thousands of them were made. The company employed some of the finest designers of the time, such as Leo Ivan Bruce (1911–1973) and John P. Rainbault. In the evolution of their designs, Telechron clocks were a faithful mirror of their own time. Just as a clock like the "Administrator" (designed by Leo Ivan Bruce) reflected thirties aesthetics, so the "Dimension" had 1950s lines. Telechrons were relatively expensive compared to other clocks. In 1941, their most inexpensive alarm clock was the model 7H117 "Reporter," and it sold for $2.95, the equivalent of $30.00 in 2008 funds. But their beautiful design and amazing reliability assured a brisk market for them throughout the company's most prosperous years.
History
As noted above, Henry Warren initially named his company "The Warren Clock Company." It became "Warren Telechron" in 1926. As early as 1917, General Electric acquired a strong interest in Telechron, realizing the economic potential of Warren's invention. When Warren retired in 1943, General Electric gradually absorbed Telechron into its operations. The clocks labeled "Telechron" on the dial, as well as those labeled "General Electric" (or both "General Electric" and "Telechron" on the dials) were both made in the Ashland, Massachusetts, factory. GE clocks had their own case, dial and hand designs, as well as model names and numbers, but the internal workings of both brands of clock were always the same Telechron type of movement.
In addition to its association with GE, Telechron cooperated closely with one of America's most famous makers of traditional clocks, the Herschede company. Walter Herschede became interested in synchronous clocks in the 1920s, but did not want to risk the good name of his company by associating it too quickly with the new technology. Thus, he founded the Revere Clock Company as a division of Herschede that would market clocks driven by Telechron motors. These motors, however, were housed in the elegant cases of mantel and grandfather clocks for which Herschede was known; moreover, these clocks were equipped with chimes.
Telechron—now the "Clock and Timer Division" of GE—declined in the 1950s, mainly because batteries had become much more long-lived and reliable. Battery-powered clocks have the obvious advantage of not depending on the proximity of a power outlet, and do not require the often somewhat unattractive electric cable. Furthermore, the accuracy of the quartz clock superseded the principles of the synchronous motor. GE tried to respond to the declining market for Warren's technology by producing cheaper, less solidly manufactured clocks. Thus, plastic replaced bakelite or wood as the material for the cases; glass crystals were phased out in favor of plastic ones; and the much less durable S rotor took the place of the H rotor. Nevertheless, the decline of the synchronous clock could not be stopped. GE sold the last of its former Telechron plants in 1979. After successive attempts to revive the business remained fruitless, it closed permanently in 1992.
Nonetheless, even if Telechron's original operations have ceased, Telechron continues to exist as a brand: "Telechron" is the name used by a manufacturer of electric timers in Leland, North Carolina. Moreover, a company that spun off from one of Telechron's research labs in 1928 is still flourishing: Electric Time Company manufactures custom tower and post clocks in Medfield, Massachusetts. Electric Time is the only such company in the U.S. that still makes its own clock movements.
Limitations of Telechron technology
From a commercial point of view, it was the increased durability of batteries as well as the invention of the quartz movement that proved fatal to Telechron. From the point of view of the history of technology, however, another problem is more crucial: if the electric power grid is used as a system for the "distribution of time," as Warren himself wrote, then, in the case of a power failure, the clocks stop, and the individual consumers' Telechrons lose their connection with the master clock (and, by implication, with the time provided by the Naval Observatory). If there is a temporary power outage while the owner is out, the running clock will display the incorrect time when he returns. Warren, foreseeing this difficulty, provided his clocks with an "indicating device": a red dot that would appear on the dial whenever the power failed. This red dot alerted the consumer to the need to reset the clock (by obtaining the accurate time through the telephone, for example, or from a radio). Setting the clock would reset the indicator. The electric clock market grew rapidly in the 1930s, and Telechron's patented power interruption indicator gave his clocks an advantage over competing synchronous clocks, but by the 1950s battery-operated clocks that weren't dependent on the power grid took market share, and in the 1960s the quartz clock replaced synchronous clocks.
The problem of how to keep clocks synchronized with primary standards was solved with the radio clock, which receives time signals not through the electric grid, but from government time radio stations.
Collecting Telechron clocks
There is a growing community of hobbyists who collect Telechron clocks. An antique Telechron clock will usually come to life immediately (though sometimes noisily) when it is plugged in.
Telechron motors are easily quieted and revived by carefully drilling 2 small holes that just puncture the surface, one on the large section, and one on the small section. A very light oil is injected, and then the small holes are carefully soldered shut. If a heavy oil is used, the clock may fail to keep accurate time until the motor becomes warm.
Telechron alarm clocks
Telechron alarm clocks are particularly popular with collectors. Until about 1940, the overwhelming majority of Telechron alarm clocks had bell alarms. The entire mechanism was enclosed in a bell housing of steel. Atop the clock's coil was a metal strip that vibrated at 60 cycles per second when the alarm was tripped. This strip had a V-shaped arm attached to it, ending in a striker, which vibrated in turn against the bell housing. With the approach of war, restrictions on various metals required a reduction in their use, and the bell housing was eliminated, with only the metal strip above the coil remaining. This in itself, however, provided a loud buzz when the alarm was tripped (and was the basis of the alarm in all brands of alarm clocks for many years after the war). Post-war, very few Telechrons had bell alarms, and the bell had disappeared completely by 1960. Telechron was one of the first companies to introduce what became known as the "snooze" alarm in the early 1950s.
Notes
Further reading
Jim Linz, Electrifying Time: Telechron & G.E. Clocks, 1925–1955 (Atglen, Pa.: Schiffer, 2001)
External links
Telechron.net is the most comprehensive Telechron site, with hundreds of pictures, historical information, and descriptions of almost every clock, as well as a forum for Telechron aficionados.
The Warren Conference Center, owned by Framingham State University and located in Ashland, Massachusetts, was donated by Henry Warren and his wife Edith. Framingham State University acquired the center from Northeastern University in April 2016.
The Electric Time Company, Inc, Medfield, Mass., an outgrowth from Telechron, manufactures tower and street clocks.
Telechron Clock Motor Information on the Telechron clock motor.
ClockHistory.com Company history, patents, early synchronous clocks.
SilverdollarProductions.net History, information, picture galleries and ID pages for Telechron, GE and Revere.
Ashland, Massachusetts
Clock manufacturing companies of the United States
Horology
Clock brands
Defunct manufacturing companies based in Massachusetts | Telechron | [
"Physics"
] | 2,362 | [
"Spacetime",
"Horology",
"Physical quantities",
"Time"
] |
17,415,408 | https://en.wikipedia.org/wiki/Einstein%20%28US-CERT%20program%29 | The EINSTEIN System (part of the National Cybersecurity Protection System) is a network intrusion detection and prevention system that monitors the networks of US federal government departments and agencies. The system is developed and managed by the Cybersecurity and Infrastructure Security Agency (formerly NPPD/United States Computer Emergency Readiness Team (US-CERT)) in the United States Department of Homeland Security (DHS).
The program was originally developed to provide "situational awareness" for the civilian agencies and to "facilitate identifying and responding to cyber threats and attacks, improve network security, increase the resiliency of critical, electronically delivered government services, and enhance the survivability of the Internet." The first version examined basic network traffic and subsequent versions examined content.
EINSTEIN does not protect the network infrastructure of the private sector.
History
The Federal Computer Incident Response Capability (FedCIRC) was one of four watch centers that were protecting federal information technology when the E-Government Act of 2002 designated it the primary incident response center. With FedCIRC at its core, US-CERT was formed in 2003 as a partnership between the newly created DHS and the CERT Coordination Center, which is at Carnegie Mellon University and funded by the U.S. Department of Defense. US-CERT delivered EINSTEIN to meet statutory and administrative requirements that DHS help protect federal computer networks and the delivery of essential government services. EINSTEIN was implemented to determine if the government was under cyber attack. EINSTEIN does this by collecting flow data from all civilian agencies and comparing that flow data to a baseline.
If one Agency reported a cyber event, the 24/7 Watch at US-CERT could look at the incoming flow data and assist resolution.
If one Agency was under attack, US-CERT Watch could quickly look at other Agency feeds to determine if it was across the board or isolated.
During EINSTEIN 1, it was determined that the civilian agencies did not know the entirety of what their registered IPv4 space included. This was obviously a security concern. Once an Agency's IPv4 space was validated, it was immediately clear that the Agency had more external Internet connections or gateways than could reasonably be instrumented and protected. This gave birth to the Office of Management and Budget's Trusted Internet Connections (TIC) Initiative. The initiative was expected to reduce the government's 4,300 access points to 50 or fewer by June 2008.
Therefore, a new version of EINSTEIN was planned to "collect network traffic flow data in real time and also analyze the content of some communications, looking for malicious code, for example in e-mail attachments." Three constraints on EINSTEIN that the DHS is trying to address are the large number of access points to U.S. agencies, the low number of agencies participating, and the program's "backward-looking architecture". The expansion is known to be one of at least nine measures to protect federal networks.
Mandate
EINSTEIN is the product of U.S. congressional and presidential actions of the early 2000s including the E-Government Act of 2002 which sought to improve U.S. government services on the Internet.
The Consolidated Appropriations Act of 2016 added 6 USC 663(b)(1), which requires the Secretary of Homeland Security to "deploy, operate, and maintain" a capability to detect and prevent cybersecurity risks in network traffic in federal information systems.
The use of these systems is mandated for federal agencies by 6 USC 663 'Agency Responsibilities'. Agencies must adopt updates to the system within 6 months. The Department of Defense, Intelligence Community, and other "national security systems" are exempt.
Adoption
EINSTEIN was deployed in 2004 and until 2008 was voluntary. By 2005, three federal agencies participated and funding was available for six additional deployments. By December 2006, eight agencies participated in EINSTEIN and by 2007, DHS itself was adopting the program department-wide. By 2008, EINSTEIN was deployed at fifteen of the nearly six hundred agencies, departments and Web resources in the U.S. government.
As of September 2022, 248 federal agencies use EINSTEIN 1 and 2 "representing approximately 2.095 million users, or 99% of the total user population" and 257 agencies use E3A.
EINSTEIN 1
When it was created, EINSTEIN was "an automated process for collecting, correlating, analyzing, and sharing computer security information across the Federal civilian government."
EINSTEIN 1 was designed to resolve the six common security weaknesses that were collected from federal agency reports and identified by the OMB in or before its report for 2001 to the U.S. Congress. In addition, the program addresses detection of computer worms, anomalies in inbound and outbound traffic, and configuration management, as well as real-time trend analysis, which CISA offers to U.S. departments and agencies on the "health of the Federal.gov domain". EINSTEIN was designed to collect session data including the following fields (a minimal record sketch follows the list):
Autonomous system numbers (ASN)
ICMP type and code
Packet length
Protocol
Sensor identification and connection status (the location of the source of the data)
Source and destination IP address
Source and destination port
TCP flag information
Timestamp and duration information
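Taken together, these fields describe a compact session (flow) record. The sketch below shows one way such a record might be represented and compared against a simple volume baseline; the field names, types, and the baseline logic are illustrative assumptions, not the actual EINSTEIN data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowRecord:
    """Illustrative session record using the fields listed above (not the real EINSTEIN schema)."""
    asn: int                    # autonomous system number
    icmp_type: Optional[int]    # ICMP type, when the protocol is ICMP
    icmp_code: Optional[int]    # ICMP code, when the protocol is ICMP
    packet_length: int
    protocol: str               # e.g. "TCP", "UDP", "ICMP"
    sensor_id: str              # which sensor observed the traffic, plus its connection status
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    tcp_flags: int              # raw TCP flag bits, 0 for non-TCP traffic
    timestamp: float            # epoch seconds
    duration: float             # seconds

def exceeds_baseline(records: List[FlowRecord], baseline_per_hour: int) -> bool:
    """Hypothetical check: flag an anomaly when hourly flow volume exceeds an agency baseline."""
    return len(records) > baseline_per_hour
```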
Around 2019, CISA expanded the system to include application-layer information, such as HTTP URLs and SMTP headers.
CISA may ask for additional information in order to find the cause of anomalies EINSTEIN finds. The results of CISA's analysis are then given to the agency for disposition.
EINSTEIN 2
EINSTEIN 2 was deployed in 2008 and "identifies malicious or potentially harmful computer network activity in federal government network traffic based on specific known signatures" and generates around 30,000 alerts a day.
The EINSTEIN 2 sensor monitors each participating agency's Internet access point, "not strictly...limited to" Trusted Internet Connections, using both commercial and government-developed software. EINSTEIN could be enhanced to create an early warning system to predict intrusions.
CISA may share EINSTEIN 2 information with "federal executive agencies" according to "written standard operating procedures". CISA has no intelligence or law enforcement mission but will notify and provide contact information to "law enforcement, intelligence, and other agencies" when an event occurs that falls under their responsibility.
EINSTEIN 3
Version 3.0 of EINSTEIN has been discussed as a way to prevent attacks by "shoot[ing] down an attack before it hits its target."
The NSA is moving forward to begin a program known as “EINSTEIN 3,” which will monitor “government computer traffic on private sector sites.” (AT&T is being considered as the first private sector site.) The program plan, which was devised under the Bush administration, is controversial, given the history of the NSA and the warrantless wiretapping scandal. Many DHS officials fear that the program should not move forward because of “uncertainty about whether private data can be shielded from unauthorized scrutiny.”
Some believe the program will invade the privacy of individuals too much.
Privacy
In the Privacy Impact Assessment (PIA) for EINSTEIN 2 published in 2008, DHS gave a general notice to people who use U.S. federal networks. DHS assumes that Internet users do not expect privacy in the "To" and "From" addresses of their email or in the "IP addresses of the websites they visit" because their service providers use that information for routing. DHS also assumes that people have at least a basic understanding of how computers communicate and know the limits of their privacy rights when they choose to access federal networks. The Privacy Act of 1974 does not apply to EINSTEIN 2 data because its system of records generally does not contain personal information and so is not indexed or queried by the names of individual persons. A PIA for the first version is also available from 2004.
DHS is seeking approval for an EINSTEIN 2 retention schedule in which flow records, alerts, and specific network traffic related to an alert may be maintained for up to three years; data deemed unrelated or potentially collected in error (for example, in the case of a false alert) can be deleted.
According to the DHS privacy assessment for US-CERT's 24x7 Incident Handling and Response Center in 2007, US-CERT data is provided only to those authorized users who "need to know such data for business and security purposes" including security analysts, system administrators and certain DHS contractors. Incident data and contact information are never shared outside of US-CERT and contact information is not analyzed. To secure its data, US-CERT's center began a DHS certification and accreditation process in May 2006 and expected to complete it by the first quarter of fiscal year 2007. As of March 2007, the center had no retention schedule approved by the National Archives and Records Administration and until it does, has no "disposition schedule"—its "records must be considered permanent and nothing may be deleted". As of April 2013, DHS still had no retention schedule but was working "with the NPPD records manager to develop disposition schedules". An update was issued in May 2016.
2020 federal government data breach
Einstein failed to detect the 2020 United States federal government data breach.
See also
National Security Directive
Managed Trusted Internet Protocol Service
ADAMS, CINDER (DARPA)
References
External links
Computer security software
United States Department of Homeland Security | Einstein (US-CERT program) | [
"Engineering"
] | 1,875 | [
"Cybersecurity engineering",
"Computer security software"
] |
17,417,624 | https://en.wikipedia.org/wiki/NetExpert | NetExpert monitors and controls networks and service-impacting resources using object-oriented and expert systems technologies.
Background
NetExpert is considered an operations support system (OSS), used in managing wireline and wireless networks and services.
NetExpert is a scalable and distributable architecture that supports flexible configuration while maintaining individual component independence. Its application packages address many areas of communication services management, including fault, performance, reporting, activation, IP services, and others. These can be further tailored to individual customer environments and management requirements.
This framework consists of a set of integrated software modules and graphical user interface (GUI) development tools to enable the creation and deployment of complex management solutions. The object-oriented architecture of the NetExpert framework provides the building blocks to implement operations support and management systems using high-level tools rather than low-level programming languages.
The NetExpert framework is founded on open systems and object-oriented methodology. NetExpert supports different standards, transmission protocols, and equipment data models. NetExpert is based on the Telecommunications Management Network architecture created by the Telecommunications Standardization Sector of the International Telecommunication Union. It supports the development and deployment of applications for the main TMN management areas—fault, configuration, accounting, performance, and security—and the implementation of layered management architectures. In addition, the NetExpert framework employs expert rules and policies that replace complex programming languages and enable network analysts to model desired system behaviors by using GUI-based rule editors.
See also
Agilent Technologies
Operations support system
Communications service provider
Service management
Expert system
Notes
Reference books
Plunkett, J. (1996). Plunkett's Infotech Industry Almanac, p. 75. Google Books.
Minoli, D., Golway, T., and Smith, N. (1996). Planning and Managing ATM Networks. New York: Manning, 1997. Google Books.
Terplan, K. (1998). Telecom Operations Management Solutions with NetExpert. CRC Press LLC. Amazon, Google Books.
Terplan, K. (1999). Web-Based Systems & Network Management, p. 185. Google Books.
Terplan, K. (1999). Applications for Distributed Systems and Network Management, p. 101. Google Books.
Van Nostrand Reinhold
Network management
Telecommunications systems | NetExpert | [
"Technology",
"Engineering"
] | 460 | [
"Computer networks engineering",
"Telecommunications systems",
"Network management"
] |
17,417,840 | https://en.wikipedia.org/wiki/Russell%20Ormond%20Redman | Russell Ormond Redman (born 1951) is a Canadian astronomer and a specialist in radio astronomy who worked on the staff of the National Research Council of Canada at the Dominion Astrophysical Observatory (DAO) until he retired in 2013. In 1966, just after 9th grade, he first volunteered for the DAO in Victoria, British Columbia. His initial publication was a list of nearest stars in the 1970 Observer's Handbook, while he was still in high school. He received his Ph.D. from the California Institute of Technology in 1982, and subsequently published over 75 scientific papers.
Honors
The inner main-belt asteroid 7886 Redman, discovered by Canadian astronomer David D. Balam in 1993, has been named jointly for him and for Roderick Oliver Redman, Professor of Astronomy at Cambridge University; the two are unrelated, though both worked at the DAO during significant parts of their careers. The official naming citation was published on 10 June 1998.
References
External links
Canadian Astronomical Society
Bio at CAS
Personal profile, National Research Council of Canada
1951 births
Living people
20th-century Canadian astronomers
21st-century Canadian astronomers | Russell Ormond Redman | [
"Astronomy"
] | 219 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
17,417,986 | https://en.wikipedia.org/wiki/Vincennes%20Trace | The Vincennes Trace was a major trackway running through what are now the American states of Kentucky, Indiana, and Illinois. Originally formed by millions of migrating bison, the Trace crossed the Ohio River near the Falls of the Ohio and continued northwest to the Wabash River, near present-day Vincennes, before it crossed to what became known as Illinois. This buffalo migration route, often 12 to 20 feet wide in places, was well known and used by American Indians. Later European traders and American settlers learned of it, and many used it as an early land route to travel west into Indiana and Illinois. It is considered the most important of the traces to the Illinois country.
It was known by various names, including Buffalo Trace, Louisville Trace, Clarksville Trace, and Old Indian Road; after being improved as a turnpike, it was also known as the New Albany-Paoli Pike, among other names. The Trace's continuous use encouraged improvements over the years, including paving and roadside development. U.S. Route 150 between Vincennes, Indiana, and Louisville, Kentucky, follows a portion of this path. Sections of the improved Trace have been designated as part of a National Scenic Byway that crosses southern Indiana.
History
The Trace was created by millions of migrating bison that were numerous in the region from the Great Lakes to the Piedmont of North Carolina. It was part of a greater buffalo migration route that extended from present-day Big Bone Lick State Park in Kentucky, through Bullitt's Lick, south of present-day Louisville, and across the Falls of the Ohio River to Indiana, then northwest to Vincennes, before crossing the Wabash River into Illinois. The trail was well known among the area's natives and used for centuries. It later became known and used by European traders and white settlers who crossed the Ohio River at the Falls and followed the Trace overland to the western territories. It is considered to be the most important of the early traces leading to the Illinois country.
In Indiana the Trace's main line split into several smaller trails that converged north of Jasper, near several large ponds, or mud holes, where buffalo would wallow. Due to the large number of buffalo that used the Trace, the well-worn path was twelve to twenty feet wide in places. Various trails also converged around a major salt lick, probably near present-day French Lick, Indiana. The Trace crossed the White River at several points, including places near the present-day towns of Petersburg and Portersville, Indiana. After a major crossing at the Wabash River, the Trace split into separate trails that led west across Illinois to the Mississippi River or north to what would become Chicago. In Chicago, the Trace is called Vincennes Avenue, and after state-funded improvements and straightening, parts became State Street.
The Trace across southern Indiana became integral to early development. Two main areas of early settlement in the Indiana Territory were made along it: Vincennes to the west and Clark's Grant in the south. In the early 18th century, the French developed colonial posts in the Illinois Country by moving down the Mississippi and into its tributaries. In 1732 François-Marie Bissot, Sieur de Vincennes, founded a trading post near the Trace's Wabash River crossing; it developed as the town of Vincennes. After the American Revolutionary War, in the late 1780s the U.S. government granted land in New York, Ohio and Indiana to veterans as payment for service. The US granted "so many acres of land" to George Rogers Clark and his men for their military service in the Illinois campaign against the British during the Revolutionary War. It became known as Clark's Grant. George Rogers Clark used the Trace to return to the Louisville area after his Illinois Campaign.
As the Continentals took control of the Illinois country during the Revolutionary War, the Trace became a busy overland route, which made it a target for Indian war parties. Clark's memoirs mentioned the Trace in describing an early Indian attack on traders in 1779, after Hamilton surrendered at Fort Sackville and Clark's militia controlled Vincennes. He led his force against the Indians in the Battle of the White River Forks. Richard "Dickie" Clark (1760–c. 1784), the younger brother of General George Rogers Clark and Captain William Clark, disappeared while traveling along the Trace in 1784. He had left Clarksville, to travel alone to Vincennes. Accounts varied: one said that his horse had been found with saddlebags bearing his initials. Another account said his horse's bones were found with Clark's bags nearby. His remains were never found. There was speculation that he was killed by Indians or thieves in the area, but historian William Hayden English concluded that he probably drowned while crossing a river.
Several written accounts by explorers, the military, and settlers document the Trace's use as an overland route. In 1785 and 1786 explorer John Filson travelled by river to Vincennes and returned to the Falls of the Ohio via the Trace; he documented his travels along the road. Filson's overland route took nine days. General Josiah Harmar, Commander of the Army of the Ohio, kept a log when he led the First American Regiment on a return march from Vincennes in 1786. Following the Treaty of Greenville in 1795, settlers poured into the western territories. Many of them kept journals, recording their observations along the Trace.
In late 1799 U.S. postmaster Joseph Habersham established a mail route from Louisville through Vincennes to Kaskaskia, Illinois at the Mississippi River along the Trace. The route began on 22 March 1800 and ran every four weeks. It was extended to Cahokia, Illinois the following year. Both of these were former French colonial settlements from the early 18th century.
In 1802 William Henry Harrison, governor of the Indiana Territory, recommended that the Trace be improved as a road suitable for wagon travel, with inns developed for travelers every thirty to forty miles. By 1804 the Trace was so well known that Harrison used it as a treaty boundary with Indians. The Vincennes treaty of 1804 gave the U.S. government possession of Indiana land from south of the Trace to the Ohio River, including the Trace itself. William Rector was hired to survey the treaty land in 1805. His survey notes provide an important record of the Buffalo Trace's route. Survey maps and field notes identified forty-three miles of the old trace road from Clark's Grant to the White River in southern Indiana.
The Buffalo Trace was the primary travel route between the Louisville area and Vincennes; two-thirds of settlers traveling from the Louisville area into the interior of Indiana used the trace. Rangers were hired to protect travelers using the road, eventually doing so on horseback in 1812. During the War of 1812, Harrison assigned 150 men to patrol the Trace between Vincennes and Louisville, "so as to completely protect the citizens and the road."
Because the Trace remained the primary road across southern Indiana after the territory became a state in 1816, the state legislature had a road paved from New Albany to Vincennes as part of its internal improvements program. The road "approximated" the Trace's route. It was extended to Paoli, Indiana, after the state government leased operation of the road to a private organization as part of their negotiations to avoid bankruptcy. The paved road was called the "New Albany-Paoli Turnpike." The first stagecoach service in the state started in 1820 along the Trace; the route was from New Albany to Vincennes. The route served Floyd County, Indiana; the towns of Greenville, Galena, and Floyds Knobs in particular.
Other names for the Trace through its history have been Lan-an-zo-ki-mi-wi (or lenaswihkanawea, a Native American name meaning "bison trail" or "buffalo road"), the "Old Indian Road," the "Clarksville Trace," "Harrison's Road," the "Kentucky Road," the "Vincennes Trace," and the "Louisville Trace."
Present day
U.S. Route 150 from Vincennes to New Albany follows the path of the Trace. A large section of the original Trace can be seen south of French Lick in Orange County, Indiana, along the Springs Valley Trail System. In 2009 a section of U.S. Route 150 and the Buffalo Trace were designated as part of the Indiana Historic Pathways, a National Scenic Byway that crosses southern Indiana. In total, driving U.S. Route 150 to coincide with the Buffalo Trace is a distance of .
Parts of the Trace are now protected, including sections in the Hoosier National Forest and a small tract within Buffalo Trace Park, a preserve maintained by Harrison County, Indiana. The development of towns and highways has effaced much of the original Trace. Survey notes, plat maps and other documents provide clues as archeologists continue to discover more sections, aided by modern technologies such as GIS and GNSS.
See also
History of Indiana
References
Sources
External links
Buffalo Trace Park
Indiana's Historic Pathways
Hoosier National Forest, Buffalo Trace
Buffalo Trace Interactive Map, U.S. Forest Service
American bison
Animal migration
Floyd County, Indiana
Harrison County, Indiana
Historic trails and roads in the United States
Indiana Territory
Historic trails and roads in Indiana
U.S. Route 150 | Vincennes Trace | [
"Biology"
] | 1,909 | [
"Ethology",
"Behavior",
"Animal migration"
] |
17,418,488 | https://en.wikipedia.org/wiki/James%20Cullen%20Martin | James Cullen Martin (January 14, 1928 – April 20, 1999) was an American chemist. Known in the field as "J.C.", he specialized in physical organic chemistry with an emphasis on main group element chemistry.
Martin received his undergraduate and master's degree at Vanderbilt University. His PhD work was conducted with Paul Bartlett at Harvard. Most of his professional career was at the University of Illinois at Urbana-Champaign, where he was a colleague of Roger Adams, Speed Marvel, David Y. Curtin, Nelson J. Leonard, and Reynold C. Fuson. Late in his career, he moved back to Vanderbilt, but soon succumbed to poor health.
Professor Martin is best known for his work on bonding of main group elements. He is responsible for the hexafluorocumyl alcohol derived "Martin" bidentate ligand and a tridentate analog. With his doctoral student Daniel Benjamin Dess, he invented the Dess–Martin periodinane that is used for selective oxidation of alcohols. He is also known for the creation of the Martin's sulfurane. His later work included studies of the hexaiodobenzene dication that indicated σ-delocalization ("aromaticity") between the iodine atoms.
J.C. Martin received much recognition during his career, including Senior Research Prize from the Alexander von Humboldt Foundation and a Guggenheim Fellowship. He was chair of the Organic division of the American Chemical Society.
References
Literature
1928 births
1999 deaths
People from Dover, Tennessee
Vanderbilt University alumni
Harvard University alumni
University of Illinois Urbana-Champaign faculty
Vanderbilt University faculty
20th-century American chemists
American inorganic chemists
American organic chemists | James Cullen Martin | [
"Chemistry"
] | 340 | [
"American inorganic chemists",
"Organic chemists",
"Inorganic chemists",
"American organic chemists"
] |
17,419,031 | https://en.wikipedia.org/wiki/List%20of%20observatory%20software | The following is a list of astronomical observatory software.
Commercial software
MaximDL
Non-commercial software
See also
Space flight simulation game
List of space flight simulation games
Planetarium software
observatory software | List of observatory software | [
"Astronomy",
"Technology"
] | 37 | [
"Computing-related lists",
"Astronomy software",
"Lists of software",
"Works about astronomy"
] |
17,419,853 | https://en.wikipedia.org/wiki/Ch%C3%A9zy%20formula | The Chézy Formula is a semi-empirical resistance equation which estimates mean flow velocity in open channel conduits. The relationship was conceptualized and developed in 1768 by French physicist and engineer Antoine de Chézy (1718–1798) while designing Paris's water canal system. Chézy discovered a similarity parameter that could be used for estimating flow characteristics in one channel based on the measurements of another. The Chézy formula is a pioneering formula in the field of fluid mechanics that relates the flow of water through an open channel with the channel's dimensions and slope. It was expanded and modified by Irish engineer Robert Manning in 1889. Manning's modifications to the Chézy formula allowed the entire similarity parameter to be calculated by channel characteristics rather than by experimental measurements. Today, the Chézy and Manning equations continue to accurately estimate open channel fluid flow and are standard formulas in various fields related to fluid mechanics and hydraulics, including physics, mechanical engineering, and civil engineering.
The Chézy formula
The Chézy formula describes mean flow velocity in turbulent open channel flow and is used broadly in fields related to fluid mechanics and fluid dynamics. Open channels refer to any open conduit, such as rivers, ditches, canals, or partially full pipes. The Chézy formula is defined for uniform equilibrium and non-uniform, gradually varied flows.
The formula is written as:

$$V = C \sqrt{R_h\, S}$$

where,
$V$ is the average velocity [length/time];
$R_h$ is the hydraulic radius [length], which is the cross-sectional area of flow divided by the wetted perimeter; for a wide channel this is approximately equal to the water depth;
$S$ is the hydraulic gradient, which for uniform normal depth of flow is the slope of the channel bottom [unitless; length/length];
$C$ is Chézy's coefficient [length$^{1/2}$/time]. Values of this coefficient must be determined experimentally. Typically, these range from 30 m$^{1/2}$/s (small rough channel) to 90 m$^{1/2}$/s (large smooth channel).
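As a quick numerical illustration of the formula above, the sketch below evaluates it for an assumed wide channel; the coefficient and channel values are illustrative assumptions, not data from the article.

```python
import math

def chezy_velocity(C, R_h, S):
    """Mean flow velocity V = C * sqrt(R_h * S) (Chezy formula)."""
    return C * math.sqrt(R_h * S)

# Illustrative values: a wide channel roughly 2 m deep with a gentle slope.
C = 60.0      # Chezy coefficient, m^(1/2)/s (normally determined experimentally)
R_h = 2.0     # hydraulic radius ~ depth for a wide channel, m
S = 0.001     # channel bottom slope (dimensionless)

print(f"Chezy mean velocity: {chezy_velocity(C, R_h, S):.2f} m/s")
```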
For many years following Antoine de Chézy's development of this formula, researchers assumed that $C$ was a constant, independent of flow conditions. However, additional research proved the coefficient's dependence on the Reynolds number as well as a channel's roughness. Accordingly, although the Chézy formula does not appear to incorporate either of these terms, the Chézy coefficient empirically and indirectly represents them.
Exploring Chézy's similarity parameter
The relationship between linear momentum and deformable fluid bodies is well explored, as are the Navier–Stokes equations for incompressible flow. However, exploring the relationships foundational to the Chézy formula can be helpful towards understanding the formula in full.
To understand the Chézy similarity parameter, a simple linear momentum equation can help summarize the conservation of momentum of a control volume uniformly flowing through an open channel:

$$\sum \vec{F} = \frac{\partial}{\partial t}\int_{CV} \rho \vec{V}\, dV + \int_{CS} \rho \vec{V}\,(\vec{V}\cdot\hat{n})\, dA$$

where the sum of forces on the contents of a control volume in the open channel is equal to the time rate of change of the linear momentum of the contents of the control volume, plus the net rate of flow of linear momentum through the control surface. The momentum principle may always be used for hydrodynamic force calculations.
As long as uniform flow can be assumed, applying the linear momentum equation to a river channel flowing in one dimension means that momentum remains conserved and the forces are balanced in the direction of flow:

$$F_1 - F_2 + \omega \sin\theta - \tau_w P\, l = 0$$

Here, the hydrostatic pressure forces $F_1$ and $F_2$, the term $\tau_w P\, l$ (the shear force of friction acting on the control volume over wetted perimeter $P$ and reach length $l$), and the term $\omega \sin\theta$ (the component of the fluid's weight $\omega$ acting along the sloped channel bottom) are held in balance in the flow direction. A free-body diagram of the control volume illustrates this equilibrium of forces in open channel flow with uniform flow conditions.
Most open-channel flows are turbulent and characterized by very large Reynolds numbers. Due to the large Reynolds numbers characteristic of open channel flow, the channel shear stress proves to be proportional to the fluid density and the square of the mean flow velocity.
This can be illustrated in a series of advanced formulas which identify a shear stress similarity parameter characteristic of all turbulent open channels. Combining this parameter with the Chézy formula, the channel components, and the conservation of momentum in an open channel flow results in the Chézy relationship between mean velocity, hydraulic radius, and slope given above, as sketched below.
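A minimal sketch of that combination, assuming uniform flow in a prismatic channel and a fully turbulent boundary shear law (standard textbook steps rather than a quotation of the original development), is:

```latex
\begin{align*}
\text{Uniform flow } (F_1 = F_2):\quad
  \tau_w P\, l &= \omega \sin\theta = \rho g A\, l \sin\theta \approx \rho g A\, l\, S \\
\Rightarrow\quad \tau_w &= \rho g \frac{A}{P}\, S = \rho g R_h S
  && \text{(boundary shear stress)} \\
\text{Turbulent flow:}\quad \tau_w &\approx K \rho V^2
  && \text{($K$ depends on roughness)} \\
\Rightarrow\quad V &= \sqrt{\tfrac{g}{K}}\,\sqrt{R_h S} = C \sqrt{R_h S},
  \qquad C = \sqrt{\tfrac{g}{K}}
\end{align*}
```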
Chézy's similarity parameter and formula explain how the velocity of water flowing through a channel has a relationship with the slope and shear stress of the channel bottom, the hydraulic radius of flow, and the Chézy coefficient, which empirically incorporates several other parameters of the flowing water. This relationship is driven by the conservation of momentum present during uniform flow conditions.
Chézy's formula inspires the Manning formula
Once this relationship was established by Chézy, many engineers and physicists (see the below section Authors of flow formulas) continued to search for ways to improve Chézy's equation. Their research identified a slight oversight in Chézy's formula: they determined that the velocity's slope dependence in Chézy's formula ($V : S_0$) was reasonable, but that the velocity's dependence on the hydraulic radius ($V : R_h^{1/2}$) was not, and that the relationship was closer to $V : R_h^{2/3}$. Many formulas based on Chézy's formula have been developed since its discovery by these contemporaries and others, and differing formulas are more suitable in differing conditions.
The Chézy formula provided a substantial foundation for a new flow formula proposed in 1889 by Irish engineer Robert Manning. Manning's formula is a modified Chézy formula that combines many of his aforementioned contemporaries' work. Manning's modifications to the Chézy formula allowed the entire similarity parameter to be calculated by channel characteristics rather than by experimental measurements. The Manning equation improved Chézy's equation by better representing the relationship between $R_h$ and velocity, while also replacing the empirical Chézy coefficient ($C$) with the Manning resistance coefficient ($n$), which is also referenced in places as the Manning roughness coefficient. Unlike the Chézy coefficient ($C$), which could only be determined by field measurements, the Manning coefficient ($n$) was determined to remain constant based on the material of the wetted perimeter, allowing for a standardized table of values to be developed that could reasonably estimate flow velocity. While field measurements remain the most precise way to obtain either Chézy or Manning coefficients, the standardized values that were developed with the use of the Manning formula provided a much-desired simplicity to open-channel flow estimates.
Chézy formula vs Manning formula
The Manning formula is described elsewhere but it is included below for comparison purposes. Below, the minor modifications used by the Manning formula to improve upon the Chézy formula are clear.
Chézy formula: $V = C\sqrt{R_h\, S}$
Manning formula: $V = \dfrac{k}{n}\, R_h^{2/3}\, S^{1/2}$
Using Chézy formula with Manning coefficient
This similarity between the Chézy and Manning formulas shown above also means that the standardized Manning coefficients may be used to estimate open channel flow velocity with the Chézy formula, by using them to calculate the Chézy coefficient as shown below. Manning derived the following relationship between the Manning coefficient ($n$) and the Chézy coefficient ($C$) based upon experiments:

$$C = \frac{k}{n}\, R_h^{1/6}$$
where
$C$ is the Chézy coefficient [length$^{1/2}$/time], a function of relative roughness and Reynolds number;
$R_h$ is the hydraulic radius, which is the cross-sectional area of flow divided by the wetted perimeter (for a wide channel this is approximately equal to the water depth) [m];
$n$ is Manning's coefficient [time/length$^{1/3}$]; and
$k$ is a constant; $k = 1$ when using SI units and $k = 1.49$ when using BG units.
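A small numerical check of this conversion is sketched below; the Manning coefficient and channel values are illustrative assumptions. Computing the velocity with the converted Chézy coefficient should match evaluating the Manning formula directly.

```python
import math

def chezy_coefficient_from_manning(n, R_h, k=1.0):
    """Chezy C from Manning n: C = (k / n) * R_h**(1/6)."""
    return (k / n) * R_h ** (1.0 / 6.0)

def chezy_velocity(C, R_h, S):
    return C * math.sqrt(R_h * S)

def manning_velocity(n, R_h, S, k=1.0):
    return (k / n) * R_h ** (2.0 / 3.0) * math.sqrt(S)

# Illustrative SI values: an earth channel, n ~ 0.025, hydraulic radius 1.5 m, slope 0.0005.
n, R_h, S = 0.025, 1.5, 0.0005
C = chezy_coefficient_from_manning(n, R_h)

print(f"C from n: {C:.1f} m^(1/2)/s")
print(f"Chezy velocity:   {chezy_velocity(C, R_h, S):.3f} m/s")
print(f"Manning velocity: {manning_velocity(n, R_h, S):.3f} m/s")  # should agree
```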
Modern use of Chézy and Manning formulas
Since the Chézy formula and the Manning formula both reference a single control volume location along the channel, neither addresses the friction factor or head loss directly. However, the change in pressure head may be calculated by combining them with other formulas such as the Darcy–Weisbach equation.
The empirical aspect of the coefficient indirectly addresses the friction factor and Reynolds number and is the reason why the Chézy formula remains most accurate in certain conditions, such as river channels with non-uniform channel dimensions. Additionally, both equations are explicitly used with uniform or "steady-state" flow where the hydraulic depth is constant, due to their derivation from the conservation of momentum. In contrast, if the hydraulic conditions fluctuate in open channel flow, they are then described as gradually or rapidly varied flow, and will require further analyses beyond these two formula methods.
Since partially full pipes are not pressurized, they are considered open channels by definition. Therefore, the Manning and Chézy formulas can be applied to calculate partially full pipe flow. However, the intended use of these formulas is primarily for uniform and turbulent flow. Many other formulas that have been developed since may produce more accurate results, such as the Darcy–Weisbach equation or the Hazen–Williams equation, but lack the simplicity of the Manning or Chézy formulas.
Both formulas continue to be broadly taught and are used in open channel and fluid dynamics research. Today, the Manning formula is likely the most globally used formula for open channel uniform flow analysis, due to its simplicity, proven efficacy, and the fact that most open channel studies are concerned with turbulent flow. Chézy's formula is one of the oldest in the field of fluid mechanics, it applies to a wider range of flows than the Manning equation, and its influence continues to this day.
See also
Hydrology
Hydraulic engineering
Authors of flow formulas
Albert Brahms (1692–1758)
Antoine de Chézy (1718–1798)
Claude-Louis Navier (1785–1836)
Adhémar Jean Claude Barré de Saint-Venant (1797–1886)
Gotthilf Heinrich Ludwig Hagen (1797–1884)
Jean Léonard Marie Poiseuille (1797–1869)
Henri P. G. Darcy (1803–1858)
Julius Ludwig Weisbach (1806–1871)
Charles Storrow (1809–1904)
Robert Manning (1816–1897)
Wilhelm Rudolf Kutter (1818–1888)
Emile Oscar Ganguillet (1818–1894)
Sir George Stokes (1819–1903)
Philippe Gaspard Gauckler (1826–1905)
Henri-Émile Bazin (1829–1917)
Alphonse Fteley (1837–1903)
Frederic Stearns (1851–1919)
Ludwig Prandtl (1875–1953)
Paul Richard Heinrich Blasius (1883–1970)
Albert Strickler (1887–1963)
Cyril Frank Colebrook (1910–1997)
References
External links
History of the Chézy Formula
Eponymous equations of physics
Fluid dynamics
Piping
Scientific laws | Chézy formula | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,134 | [
"Equations of physics",
"Building engineering",
"Chemical engineering",
"Mathematical objects",
"Eponymous equations of physics",
"Equations",
"Scientific laws",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
17,420,716 | https://en.wikipedia.org/wiki/List%20of%20electric%20vehicle%20battery%20manufacturers |
List of world's largest EV cell manufacturers in 2023
List of other large EV battery manufacturers
List of smaller (<1GWh) EV battery and former cell manufacturers
See also
Electric vehicle battery
List of production battery electric vehicles
Electric vehicle industry in China
References
External links
List of Electric Vehicle Battery Manufacturers February 2010
Battery man
Lists of manufacturers
Technology-related lists | List of electric vehicle battery manufacturers | [
"Engineering"
] | 73 | [
"Electrical engineering",
"Electrical-engineering-related lists"
] |
17,420,797 | https://en.wikipedia.org/wiki/USA-201 | USA-201, also known as GPS IIR-19(M), GPS IIRM-6 and GPS SVN-48, is an American navigation satellite which forms part of the Global Positioning System. It was the sixth of eight Block IIRM satellites to be launched, and the nineteenth of twenty one Block IIR satellites overall. It was built by Lockheed Martin, using the AS-4000 satellite bus.
USA-201 was launched at 06:10 UTC on 15 March 2008, atop a Delta II carrier rocket, flight number D332, flying in the 7925-9.5 configuration. The launch took place from Space Launch Complex 17A at the Cape Canaveral Air Force Station, and placed USA-201 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37FM apogee motor.
By 18 May 2008, USA-201 was in an orbit with a perigee of , an apogee of , a period of 717.98 minutes, and 55.1 degrees of inclination to the equator. It is used to broadcast the PRN 07 signal, and operates in slot 4 of plane A of the GPS constellation. The satellite has a design life of 10 years and a mass of . As of 2012 it remains in service.
References
Spacecraft launched in 2008
GPS satellites
USA satellites | USA-201 | [
"Technology"
] | 273 | [
"Global Positioning System",
"GPS satellites"
] |
17,420,856 | https://en.wikipedia.org/wiki/DemoSat | A DemoSat is a boilerplate spacecraft used to test a carrier rocket without risking a real satellite on the launch. They are most commonly flown on the maiden flights of rockets, but have also been flown on return-to-flight missions after launch failures. Defunct satellites from cancelled programmes may be flown as DemoSats, for example the maiden flight of the Soyuz-2 rocket placed an obsolete Zenit-8 satellite onto a sub-orbital trajectory in order to test the rocket's performance.
See also
Boilerplate (spaceflight)
References
Satellites by type
+ | DemoSat | [
"Astronomy"
] | 113 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
17,421,235 | https://en.wikipedia.org/wiki/Power%20trowel | A power trowel (also known as a "power float" and "troweling machine") is a piece of light construction equipment used by construction companies and contractors to apply a smooth finish to concrete slabs.
Types
Power trowels differ in the way they are controlled:
Ride-on power trowels have two spider/rotor assemblies and are controlled by an operator sitting on a seat upon the machine, controlling the power trowel's movement with two joysticks/levers (these can be either mechanical or electronic/hydraulic). Blade pitch is controlled either by manual turn handles (usually both spiders are linked together) or by electric motors and switches. Ride-on power trowels range in size from machines weighing , up to machines weighing over . Power ranges from small single-cylinder engines all the way up to multi-fuel V8 engines. Drive systems come in two basic variations: direct mechanical drive (typically using a CVT-style clutch) and hydrostatic drive. Additionally, they are available in overlapping and non-overlapping configurations, the latter allowing the use of float pans.
Walk-behind power trowels are used by an operator walking behind the machine.
A power trowel performs the tasks of several hand tools, hand trowel, hand float, darby and concrete float.
See also
Concrete pump
Screed
Trowel
Similar vehicles
Personal hovercraft
References
External links
Hovercraft
Engineering vehicles
Construction equipment
Articles containing video clips | Power trowel | [
"Engineering"
] | 296 | [
"Construction",
"Construction equipment",
"Engineering vehicles",
"Industrial machinery"
] |
17,421,236 | https://en.wikipedia.org/wiki/Bad%20boy%20archetype | The bad boy is a cultural archetype that is variously defined and often used synonymously with the historic terms rake or cad: a male who behaves badly, especially within societal norms.
Definitions
The stereotypical "bad guy" was described by Kristina Grish in her book Addickted as "the irresistible rogue who has the dizzying ability to drive women wild" with a "laissez-faire attitude about life and love".
An article in The Independent compared the term "bad boys" with men who had a particular combination of personality traits, sometimes referred to as the "dark triad", and reported that a study found that such men were likely to have a greater number of sexual affairs.
See also
Boy next door (stock character)
Chad (slang)
Dark triad
Hybristophilia
Nice guy
Playboy (lifestyle)
Tall, dark and handsome
Toxic masculinity
References
Archetypes
Male stock characters
Stereotypes of men
Terms for men
Interpersonal relationships | Bad boy archetype | [
"Biology"
] | 198 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
17,421,384 | https://en.wikipedia.org/wiki/Chresonym | In biodiversity informatics, a chresonym is the cited use of a taxon name, usually a species name, within a publication. The term is derived from the Greek χρῆσις chresis meaning "a use" and refers to published usage of a name.
The technical meaning of the related term synonym is for different names that refer to the same object or concept. As noted by Hobart and Rozella B. Smith, zoological systematists had been using "the term (synonymy) in another sense as well, namely in reference to all occurrences of any name or set of names (usually synonyms) in the literature." Such a "synonymy" could include multiple listings, one for each place the author found a name used, rather than a summarized list of different synonyms. The term "chresonym" was created to replace this second sense of the term "synonym." The concept of synonymy is furthermore different in the zoological and botanical codes of nomenclature.
A name that correctly refers to a taxon is further termed an orthochresonym while one that is applied incorrectly for a given taxon may be termed a heterochresonym.
Orthochresonymy
Species names consist of a genus part and a species part to create a binomial name. Species names often also include a reference to the original publication of the name by including the author and sometimes the year of publication of the name. As an example, the sperm whale, Physeter catodon, was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. Thus, the name may also be referenced as Physeter catodon Linnaeus 1758. That name was also used by Harmer in 1928 to refer to the species in the Proceedings of the Linnean Society of London and of course, it has appeared in numerous other publications since then. Taxonomic catalogues, such as Catalog of Living Whales by Philip Hershkovitz, may reference this usage with a Genus+species+authorship convention that may appear to indicate a new species (a homonym) when in fact it is referencing a particular usage of a species name (a chresonym). Hershkovitz, for example refers to Physeter catodon Harmer 1928, which can cause confusion as this name+author combination really refers to the same name that Linnaeus first published in 1758.
Heterochresonymy
Nepenthes rafflesiana, a species of pitcher plant, was described by William Jack in 1835. The name Nepenthes rafflesiana as used by Hugh Low in 1848 is a heterochresonym. Cheek and Jebb (2001) explain the situation thus:
Low, ... accidentally, or otherwise, had described what we know as N. rafflesiana as Nepenthes × hookeriana and vice versa in his book "Sarawak, its Inhabitants and Productions" (1848). Masters was the first author to note this in the Gardeners' Chronicle..., where he gives the first full description and illustration of Nepenthes × hookeriana.
The description that Maxwell Tylden Masters provided in 1881 for the taxon that had previously been known to gardeners as Nepenthes hookeriana (an interchangeable form of the name for the hybrid Nepenthes × hookeriana) differs from Low's description. The International Code of Nomenclature for algae, fungi, and plants does not require that descriptions from so long ago include specification of a type specimen, and types can be chosen later to fit these old names. Since the descriptions differ, Low's and Masters' name have different types. Masters therefore created a later homonym, which, according to the rules of the code is illegitimate.
See also
Biodiversity
Synonym (taxonomy)
Glossary of scientific naming
References
Biodiversity
Taxonomy (biology) | Chresonym | [
"Biology"
] | 785 | [
"Taxonomy (biology)",
"Biodiversity"
] |
17,421,690 | https://en.wikipedia.org/wiki/Eukaryotic%20Linear%20Motif%20resource | The Eukaryotic Linear Motif (ELM) resource is a computational biology resource (developed at the European Molecular Biology Laboratory (EMBL)) for investigating short linear motifs (SLiMs) in eukaryotic proteins. It is currently the largest collection of linear motif classes with annotated and experimentally validated linear motif instances.
Linear motifs are specified as patterns using regular expression rules. These expressions are used in the ELM prediction pipeline which detects putative motif instances in protein sequences.
To improve the predictive power, context-based rules and logical filters are being developed and applied to reduce the number of false positive matches.
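As an illustration of this regular-expression approach, the sketch below scans a protein sequence for a single motif-style pattern; both the pattern and the sequence are illustrative assumptions rather than an actual ELM class definition, and the real ELM pipeline additionally applies the context filters described above.

```python
import re

# Illustrative motif pattern (not an actual ELM class definition): a
# proline-directed phosphosite followed by a basic residue, written in the
# same regular-expression style that ELM uses to define its motif classes.
MOTIF = re.compile(r"[ST]P.[KR]")

sequence = "MKTAYIAKQRSPLKDSGTPVKESPARTTSPWKQ"  # made-up protein sequence

for match in MOTIF.finditer(sequence):
    print(f"putative instance '{match.group()}' at positions "
          f"{match.start() + 1}-{match.end()}")
```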
As of 2010 ELM contained 146 different motifs that annotate more than 1300 experimentally determined instances within proteins. The current version of the ELM server provides filtering by cell compartment, phylogeny, globular domain clash (using the SMART/Pfam databases) and structure. In addition, both the known ELM instances and any positionally conserved matches in sequences similar to ELM instance sequences are identified and displayed.
See also
Phospho.ELM
Minimotif miner
References
External links
ELM home page
Protein databases
Protein domains
Protein structural motifs | Eukaryotic Linear Motif resource | [
"Biology"
] | 239 | [
"Protein structural motifs",
"Protein domains",
"Protein classification"
] |
17,422,461 | https://en.wikipedia.org/wiki/Indecomposability%20%28intuitionistic%20logic%29 | In intuitionistic analysis and in computable analysis, indecomposability or indivisibility (, from the adjective unzerlegbar) is the principle that the continuum cannot be partitioned into two nonempty pieces. This principle was established by Brouwer in 1928 using intuitionistic principles, and can also be proven using Church's thesis. The analogous property in classical analysis is the fact that every continuous function from the continuum to {0,1} is constant.
It follows from the indecomposability principle that any property of real numbers that is decided (each real number either has or does not have that property) is in fact trivial (either all the real numbers have that property, or else none of them do). Conversely, if a property of real numbers is not trivial, then the property is not decided for all real numbers. This contradicts the law of the excluded middle, according to which every property of the real numbers is decided; so, since there are many nontrivial properties, there are many nontrivial partitions of the continuum.
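One common way to state the principle formally (a sketch of the standard formulation, with notation chosen here rather than drawn from the article) is:

```latex
% Indecomposability: every decidable subset of the reals is trivial.
\[
  \forall A \subseteq \mathbb{R}.\;
  \bigl(\forall x \in \mathbb{R}.\ x \in A \lor x \notin A\bigr)
  \;\to\;
  \bigl(A = \emptyset \lor A = \mathbb{R}\bigr)
\]
```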
In constructive set theory (CZF), it is consistent to assume the universe of all sets is indecomposable—so that any class for which membership is decided (every set is either a member of the class, or else not a member of the class) is either empty or the entire universe.
See also
Indecomposable continuum
References
Constructivism (mathematics) | Indecomposability (intuitionistic logic) | [
"Mathematics"
] | 302 | [
"Mathematical logic",
"Constructivism (mathematics)"
] |
17,422,735 | https://en.wikipedia.org/wiki/Giulio%20Tononi | Giulio Tononi () is a neuroscientist and psychiatrist who holds the David P. White Chair in Sleep Medicine, as well as a Distinguished Chair in Consciousness Science, at the University of Wisconsin. He is best known for his Integrated Information Theory (IIT), a mathematical theory of consciousness, which he has proposed since 2004.
Biography
Tononi was born in Trento, Italy, and obtained an M.D. in psychiatry and a Ph.D. in neurobiology at the Sant'Anna School of Advanced Studies in Pisa, Italy.
He is an authority on sleep, and in particular the genetics and etiology of sleep. Tononi and collaborators have pioneered several complementary approaches to study sleep:
genomics
proteomics
fruit fly models
rodent models employing multiunit / local field potential recordings in behaving animals
in vivo voltammetry and microscopy
high-density EEG recordings and transcranial magnetic stimulation (TMS) in humans
large-scale computer models of sleep and wakefulness
This research has led to a comprehensive hypothesis on the function of sleep (proposed with sleep researcher Chiara Cirelli), the synaptic homeostasis hypothesis. According to the hypothesis, wakefulness leads to a net increase in synaptic strength, and sleep is necessary to reestablish synaptic homeostasis. The hypothesis has implications for understanding the effects of sleep deprivation and for developing novel diagnostic and therapeutic approaches to sleep disorders and neuropsychiatric disorders.
Tononi is a leader in the field of consciousness studies, and has co-authored a book on the subject with Nobel prize winner Gerald Edelman.
Tononi also developed the integrated information theory (IIT): a theory of what consciousness is, how it can be measured, how it is correlated with brain states, and why it fades when we fall into dreamless sleep and returns when we dream. The theory is being tested with neuroimaging, Transcranial magnetic stimulation (TMS), and computer models. His work has been described as "the only really promising fundamental theory of consciousness" by collaborator Christof Koch.
Works
References
External links
Sleep researchers
American psychiatrists
American consciousness researchers and theorists
Living people
Sant'Anna School of Advanced Studies alumni
1960 births
University of Wisconsin–Madison faculty | Giulio Tononi | [
"Biology"
] | 464 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
17,422,860 | https://en.wikipedia.org/wiki/G1.9%2B0.3 | G1.9+0.3 is a supernova remnant (SNR) in the constellation of Sagittarius. It is the youngest-known SNR in the Milky Way, resulting from an explosion whose light would have reached Earth some time between 1890 and 1908. The explosion was not seen from Earth as it was obscured by the dense gas and dust of the Galactic Center, where it occurred. The remnant's young age was established by combining data from NASA's Chandra X-ray Observatory and the VLA radio observatory. It was a type Ia supernova. The remnant has a radius of over 1.3 light-years.
Discovery
G1.9+0.3 was first identified as an SNR in 1984 from observations made with the VLA radio telescope. Because of its unusually small angular size, it was thought to be young—less than about one thousand years old. In 2007, X-ray observations made with the Chandra X-ray Observatory revealed that the object was about 15% larger than in the earlier VLA observations. Further observations made with the VLA in 2008 verified increase in size, implying it is no more than 150 years old. A more recent estimate put its observable age at 110 years as of the data collection in 2008. That study also found that it was probably triggered by the merger of two white dwarf stars.
Announcement
The discovery that G1.9+0.3 had been identified as the youngest-known Galactic SNR was announced on May 14, 2008 at a NASA press conference. In the days leading up to the announcement, NASA said that they were going "to announce the discovery of an object in our Galaxy astronomers have been hunting for more than 50 years." Before this discovery, the youngest-known Milky Way supernova remnant was Cassiopeia A, at about 330 years.
References
External links
Supernova remnants
Astronomical objects discovered in 1984
Sagittarius (constellation) | G1.9+0.3 | [
"Astronomy"
] | 397 | [
"Sagittarius (constellation)",
"Constellations"
] |
17,423,238 | https://en.wikipedia.org/wiki/Ireviken%20event | The Ireviken event was the first of three relatively minor extinction events (the Ireviken, Mulde, and Lau events) during the Silurian period. It occurred at the Llandovery/Wenlock boundary (mid Silurian, ). The event is best recorded at Ireviken, Gotland, where over 50% of trilobite species became extinct; 80% of the global conodont species also became extinct in this interval.
Anatomy of the event
The event lasted around 200,000 years, spanning the base of the Wenlock epoch. It is associated with a period of global cooling.
It comprises eight extinction "datum points"—the first four being regularly spaced, every 30,797 years, and linked to the Milankovic obliquity cycle. The fifth and sixth probably reflect maxima in the precessional cycles, with periods of around 16.5 and 19 ka. The final two datum points are spaced much further apart, and so are harder to link with Milankovic changes.
Casualties
The mechanism responsible for the event originated in the deep oceans, and made its way into the shallower shelf seas. Correspondingly, shallow-water reefs were barely affected, while pelagic and hemipelagic organisms such as the graptolites, conodonts and trilobites were hit hardest.
Geochemistry
Subsequent to the first extinctions, excursions in the δ13C and δ18O records are observed; δ13C rises from +1.4‰ to +4.5‰, while δ18O increases from −5.6‰ to −5.0‰.
See also
Anoxic event
References
Extinction events
Silurian events
Isotope excursions | Ireviken event | [
"Chemistry",
"Biology"
] | 355 | [
"Evolution of the biosphere",
"Isotope excursions",
"Extinction events",
"Isotopes"
] |
17,423,266 | https://en.wikipedia.org/wiki/PaNie | PaNie is a 25 kDa protein produced by the root rot disease-causing pathogen Pythium aphanidermatum. It stands for Pythium aphanidermatum Necrosis inducing elicitor. PaNie (aka NLPPya) belongs to a family of elicitors named the Nep1-like proteins (NLPs), which cause necrosis when injected into the leaves of dicotyledonous plants.
References
Oomycete proteins
Elicitors | PaNie | [
"Chemistry",
"Biology"
] | 103 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
17,423,952 | https://en.wikipedia.org/wiki/History%20of%20the%20steam%20engine | The first recorded rudimentary steam engine was the aeolipile mentioned by Vitruvius between 30 and 15 BC and, described by Heron of Alexandria in 1st-century Roman Egypt. Several steam-powered devices were later experimented with or proposed, such as Taqi al-Din's steam jack, a steam turbine in 16th-century Ottoman Egypt, Denis Papin's working model of the steam digester in 1679 and Thomas Savery's steam pump in 17th-century England. In 1712, Thomas Newcomen's atmospheric engine became the first commercially successful engine using the principle of the piston and cylinder, which was the fundamental type of steam engine used until the early 20th century. The steam engine was used to pump water out of coal mines.
During the Industrial Revolution, steam engines started to replace water and wind power, and eventually became the dominant source of power in the late 19th century, remaining so into the early decades of the 20th century, when the more efficient steam turbine and the internal combustion engine resulted in the rapid replacement of steam engines. The steam turbine has become the most common method by which electrical power generators are driven. Investigations are being made into the practicalities of reviving the reciprocating steam engine as the basis for the new wave of advanced steam technology.
Precursors
Early uses of steam power
The first to use steam as a way to transform heat into movement was Archytas, who propelled a wooden bird along wires using steam as propellant around 400 BC. The earliest known rudimentary steam engine and reaction steam turbine, the aeolipile, is described by a mathematician and engineer named Heron of Alexandria in 1st century Roman Egypt, as recorded in his manuscript Spiritalia seu Pneumatica.
The same device was also mentioned by Vitruvius in De Architectura about 100 years earlier. Steam ejected tangentially from nozzles caused a pivoted ball to rotate. This suggests that the conversion of steam pressure into mechanical movement was known in Roman Egypt in the 1st century; however, its thermal efficiency was low. Heron also devised a machine that used air heated in an altar fire to displace a quantity of water from a closed vessel. The weight of the water was made to pull a hidden rope to operate temple doors. Some historians have conflated the two inventions to assert, incorrectly, that the aeolipile was capable of useful work.
According to William of Malmesbury, in 1125, Reims was home to a church that had an organ powered by air escaping from compression "by heated water", apparently designed and constructed by professor Gerbertus.
Among the papers of Leonardo da Vinci dating to the late 15th century is the design for a steam-powered cannon called the Architonnerre, which works by the sudden influx of hot water into a sealed, red-hot cannon.
A rudimentary impact steam turbine was described in 1551 by Taqi al-Din, a philosopher, astronomer and engineer in 16th century Ottoman Egypt, who described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. A similar device for rotating a spit was also later described by John Wilkins in 1648. These devices were then called "mills" but are now known as steam jacks. Another similar rudimentary steam turbine is shown by Giovanni Branca, an Italian engineer, in 1629 for turning a cylindrical escapement device that alternately lifted and let fall a pair of pestles working in mortars. The steam flow of these early steam turbines, however, was not concentrated and most of its energy was dissipated in all directions. This would have led to a great waste of energy and so they were never seriously considered for industrial use.
In 1605, French mathematician David Rivault de Fleurance in his treatise on artillery wrote on his discovery that water, if confined in a bombshell and heated, would explode the shells.
In 1606, the Spaniard Jerónimo de Ayanz y Beaumont demonstrated and was granted a patent for a steam-powered water pump. The pump was successfully used to drain the inundated mines of Guadalcanal, Spain.
In 1679, French physicist Denis Papin invented the steam digester (a type of pressure cooker), which was used to extract fats from bones in a high-pressure environment and also to create bone meal.
Development of the commercial steam engine
"The discoveries that, when brought together by Thomas Newcomen in 1712, resulted in the steam engine were:"
The concept of a vacuum (i.e. a reduction in pressure below ambient)
The concept of pressure
Techniques for creating a vacuum
A means of generating steam
The piston and cylinder
In the late 15th century, the Italian polymath, engineer, painter and architect Leonardo da Vinci wrote papers describing the Architonnerre, a steam-powered cannon that used a high-pressure environment to launch large, heavy projectiles with great force. Da Vinci's design resembled a conventional cannon: a long cylindrical tube at one end served to aim the projectile, while at the other end a large chamber was used to heat water into steam. When the cannon was ready to fire, a small cap was fitted tightly over a hole on top of the cannon, causing a rapid build-up of steam; the resulting high pressure propelled the projectile towards the target with immense force. The Architonnerre was designed to shoot a projectile that weighed one Roman talent. Many of the principles employed by da Vinci for the Architonnerre were later used in the development of the steam engine.
In 1643, Evangelista Torricelli conducted experiments on suction-lift water pumps to test their limits, which turned out to be about 32 feet. (Standard atmospheric pressure supports a water column of roughly 10.3 metres, or about 34 feet; the vapor pressure of the water lowers the practical lift height.) He devised an experiment using a tube filled with mercury and inverted in a bowl of mercury (a barometer) and observed an empty space above the column of mercury, which he theorized contained nothing, that is, a vacuum.
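The limit follows directly from hydrostatics. As a rough check (a back-of-the-envelope calculation rather than a figure from the period sources), the maximum height of water that atmospheric pressure can support is

\[ h_{\max} = \frac{P_{\text{atm}}}{\rho g} \approx \frac{101{,}325\ \text{Pa}}{1000\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2} \approx 10.3\ \text{m} \approx 34\ \text{ft}, \]

with the vapor pressure of the water and dissolved air reducing this slightly in practice, consistent with the roughly 32 feet observed.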
Influenced by Torricelli, Otto von Guericke invented a vacuum pump by modifying an air pump used for pressurizing an air gun. Guericke put on a demonstration in 1654 in Magdeburg, Germany, where he was mayor. Two copper hemispheres were fitted together and air was pumped out. Weights strapped to the hemispheres could not pull them apart until the air valve was opened. The experiment was repeated in 1656 using two teams of 8 horses each, which could not separate the Magdeburg hemispheres.
Gaspar Schott was the first to describe the hemisphere experiment in his Mechanica Hydraulico-Pneumatica (1657).
After reading Schott's book, Robert Boyle built an improved vacuum pump and conducted related experiments.
Denis Papin became interested in using a vacuum to generate motive power while working with Christiaan Huygens and Gottfried Leibniz in Paris in the 1670s. Papin worked for Robert Boyle from 1676 to 1679, publishing an account of his work in Continuation of New Experiments (1680), and gave a presentation to the Royal Society in 1689. From 1690 onwards, Papin experimented with a piston to produce power from steam, building model steam engines. He experimented with atmospheric and pressure steam engines, publishing his results in 1707.
In 1663, Edward Somerset, 2nd Marquess of Worcester published a book of 100 inventions which described a method for raising water between floors employing a similar principle to that of a coffee percolator. His system was the first to separate the boiler (a heated cannon barrel) from the pumping action. Water was admitted into a reinforced barrel from a cistern, and then a valve was opened to admit steam from a separate boiler. The pressure built over the top of the water, driving it up a pipe. He installed his steam-powered device on the wall of the Great Tower at Raglan Castle to supply water through the tower. The grooves in the wall where the engine was installed were still to be seen in the 19th century. However, no one was prepared to risk money for such a revolutionary concept, and without backers the machine remained undeveloped.
Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordnance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend", which employed both vacuum and pressure. Such pumps were used for low-horsepower service for a number of years.
Thomas Newcomen was a merchant who dealt in cast iron goods. Newcomen's engine was based on the piston and cylinder design proposed by Papin. In Newcomen's engine, steam was condensed by water sprayed inside the cylinder, causing atmospheric pressure to move the piston. Newcomen's first engine was installed to pump water from a mine in 1712 at Dudley Castle in Staffordshire.
Cylinders
Denis Papin (born 22 August 1647) was a French physicist, mathematician and inventor, best known for his pioneering invention of the steam digester, the forerunner of the pressure cooker. In the mid-1670s Papin collaborated with the Dutch physicist Christiaan Huygens on an engine which drove out the air from a cylinder by exploding gunpowder inside it. Realising the incompleteness of the vacuum produced by this means, and on moving to England in 1680, Papin devised a version of the same cylinder that obtained a more complete vacuum by boiling water and then allowing the steam to condense; in this way he was able to raise weights by attaching the end of the piston to a rope passing over a pulley. As a demonstration model, the system worked, but in order to repeat the process the whole apparatus had to be dismantled and reassembled. Papin quickly saw that to make an automatic cycle the steam would have to be generated separately in a boiler; however, he did not take the project further. Papin also designed a paddle boat driven by a jet playing on a mill-wheel, in a combination of Taqi al-Din's and Savery's conceptions, and he is also credited with a number of significant devices such as the safety valve. Papin's years of research into the problems of harnessing steam were to play a key part in the development of the first successful industrial engines that soon followed his death.
Savery steam pump
The first steam engine to be applied industrially was the "fire-engine" or "Miner's Friend", designed by Thomas Savery in 1698. This was a pistonless steam pump, similar to the one developed by Worcester. Savery made two key contributions that greatly improved the practicality of the design. First, in order to allow the water supply to be placed below the engine, he used condensed steam to produce a partial vacuum in the pumping reservoir (the barrel in Worcester's example) and used that vacuum to pull the water upward. Second, in order to rapidly cool the steam to produce the vacuum, he ran cold water over the reservoir.
Operation required several valves; at the start of a cycle, when the reservoir was empty, a valve would be opened to admit steam. This valve would be closed to seal the reservoir, and the cooling water valve would be opened to condense the steam and create a partial vacuum. A supply valve would then be opened, pulling water upward into the reservoir; the typical engine could pull water up to 20 feet. This was then closed, and the steam valve reopened, building pressure over the water and pumping it upward, as in the Worcester design. This cycle essentially doubled the distance that water could be pumped for any given pressure of steam, and production examples raised water about 40 feet.
Savery's engine solved a problem that had only recently become a serious one; raising water out of the mines in southern England as they reached greater depths. Savery's engine was somewhat less efficient than Newcomen's, but this was compensated for by the fact that the separate pump used by the Newcomen engine was inefficient, giving the two engines roughly the same efficiency of 6 million foot pounds per bushel of coal (less than 1%). Nor was the Savery engine very safe because part of its cycle required steam under pressure supplied by a boiler, and given the technology of the period the pressure vessel could not be made strong enough and so was prone to explosion. The explosion of one of his pumps at Broad Waters (near Wednesbury), about 1705, probably marks the end of attempts to exploit his invention.
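To put the quoted duty in perspective, a rough calculation (assuming a bushel of coal of about 84 lb and a heating value of roughly 13,000 BTU per pound, typical period figures used here as assumptions rather than values from the source) gives an energy input per bushel of

\[ E_{\text{in}} \approx 84\ \text{lb} \times 13{,}000\ \tfrac{\text{BTU}}{\text{lb}} \times 778\ \tfrac{\text{ft·lbf}}{\text{BTU}} \approx 8.5 \times 10^{8}\ \text{ft·lbf}, \]

so a duty of 6 million foot-pounds per bushel corresponds to an overall efficiency of about

\[ \eta \approx \frac{6 \times 10^{6}}{8.5 \times 10^{8}} \approx 0.7\%, \]

in line with the "less than 1%" figure quoted above.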
The Savery engine was less expensive than Newcomen's and was produced in smaller sizes. Some builders were manufacturing improved versions of the Savery engine until late in the 18th century. Bento de Moura Portugal, FRS, introduced an ingenious improvement of Savery's construction "to render it capable of working itself", as described by John Smeaton in the Philosophical Transactions published in 1751.
Atmospheric condensing engines
Newcomen "atmospheric" engine
It was Thomas Newcomen with his "atmospheric-engine" of 1712 who can be said to have brought together most of the essential elements established by Papin in order to develop the first practical steam engine for which there could be a commercial demand. This took the shape of a reciprocating beam engine installed at surface level, driving a succession of pumps at one end of the beam. The engine, attached by chains to the other end of the beam, worked on the atmospheric, or vacuum, principle.
Newcomen's design used some elements of earlier concepts. Like the Savery design, Newcomen's engine used steam, cooled with water, to create a vacuum. Unlike Savery's pump, however, Newcomen used the vacuum to pull on a piston instead of pulling on water directly. The upper end of the cylinder was open to the atmospheric pressure, and when the vacuum formed, the atmospheric pressure above the piston pushed it down into the cylinder. The piston was lubricated and sealed by a trickle of water from the same cistern that supplied the cooling water. Further, to improve the cooling effect, he sprayed water directly into the cylinder.
The piston was attached by a chain to a large pivoted beam. When the piston pulled the beam, the other side of the beam was pulled upward. This end was attached to a rod that pulled on a series of conventional pump handles in the mine. At the end of this power stroke, the steam valve was reopened, and the weight of the pump rods pulled the beam down, lifting the piston and drawing steam into the cylinder again.
Using the piston and beam allowed the Newcomen engine to power pumps at different levels throughout the mine, as well as eliminating the need for any high-pressure steam. The entire system was isolated to a single building on the surface. Although inefficient and extremely heavy on coal (compared to later engines), these engines raised far greater volumes of water and from greater depths than had previously been possible. Over 100 Newcomen engines were installed around England by 1735, and it is estimated that as many as 2,000 were in operation by 1800 (including Watt versions).
John Smeaton made numerous improvements to the Newcomen engine, notably to the seals, and by improving these was able to almost triple their efficiency. He also preferred to use wheels instead of beams for transferring power from the cylinder, which made his engines more compact. Smeaton was the first to develop a rigorous theory of steam engine design and operation. He worked backward from the intended role to calculate the amount of power that would be needed for the task, the size and speed of a cylinder that would provide it, the size of boiler needed to feed it, and the amount of fuel it would consume. These methods were developed empirically after studying dozens of Newcomen engines in Cornwall and Newcastle, and building an experimental engine of his own at his home in Austhorpe in 1770. By the time the Watt engine was introduced only a few years later, Smeaton had built dozens of ever-larger engines into the 100 hp range.
Watt's separate condenser
While working at the University of Glasgow as an instrument maker and repairman in 1759, James Watt was introduced to the power of steam by Professor John Robison. Fascinated, Watt took to reading everything he could on the subject, and independently developed the concept of latent heat, only recently published by Joseph Black at the same university. When Watt learned that the university owned a small working model of a Newcomen engine, he pressed to have it returned from London where it was being unsuccessfully repaired. Watt repaired the machine, but found it was barely functional even when fully repaired.
After working with the design, Watt concluded that 80% of the steam used by the engine was wasted. Instead of providing motive force, it was being used to heat the cylinder. In the Newcomen design, every power stroke was started with a spray of cold water, which not only condensed the steam, but also cooled the walls of the cylinder. This heat had to be replaced before the cylinder would accept steam again. In the Newcomen engine the heat was supplied only by the steam, so when the steam valve was opened again a high proportion condensed on the cold walls as soon as it was admitted to the cylinder. It took a considerable amount of time and steam before the cylinder warmed back up and the steam started to fill it up.
Watt solved the problem of the water spray by condensing the steam in a separate vessel, placed beside the power cylinder. Once the induction stroke was complete, a valve was opened between the two, and any steam remaining in the power cylinder would condense inside this cold vessel. This created a vacuum that pulled more of the steam across, and so on until the steam was mostly condensed. The valve was then closed, and operation of the main cylinder continued as it would on a conventional Newcomen engine. As the power cylinder remained at operational temperature throughout, the system was ready for another stroke as soon as the piston was pulled back to the top. A steam jacket around the cylinder helped maintain its temperature. Watt produced a working model in 1765.
Convinced that this was a great advance, Watt entered into partnerships to provide venture capital while he worked on the design. Not content with this single improvement, Watt worked tirelessly on a series of other improvements to practically every part of the engine. Watt further improved the system by adding a small vacuum pump to pull the steam out of the cylinder into the condenser, further improving cycle times. A more radical change from the Newcomen design was closing off the top of the cylinder and introducing low-pressure steam above the piston. Now the power was not due to the difference of atmospheric pressure and the vacuum, but the pressure of the steam and the vacuum, a somewhat higher value. On the upward return stroke, the steam on top was transferred through a pipe to the underside of the piston ready to be condensed for the downward stroke. Sealing of the piston on a Newcomen engine had been achieved by maintaining a small quantity of water on its upper side. This was no longer possible in Watt's engine due to the presence of the steam. Watt spent considerable effort to find a seal that worked, eventually obtained by using a mixture of tallow and oil. The piston rod also passed through a gland on the top cylinder cover sealed in a similar way.
The piston sealing problem was due to having no way to produce a sufficiently round cylinder. Watt tried having cylinders bored from cast iron, but they were too out of round. Watt was forced to use a hammered iron cylinder. The following quotation is from Roe (1916):
"When [John] Smeaton first saw the engine he reported to the Society of Engineers that 'neither the tools nor the workmen existed who could manufacture such a complex machine with sufficient precision' "
Watt finally considered the design good enough to release in 1774, and the Watt engine was released to the market. As portions of the design could be easily fitted to existing Newcomen engines, there was no need to build an entirely new engine at the mines. Instead, Watt and his business partner Matthew Boulton licensed the improvements to engine operators, charging them a portion of the money they would save in reduced fuel costs. The design was wildly successful, and the Boulton and Watt company was formed to license the design and help new manufacturers build the engines. The two would later open the Soho Foundry to produce engines of their own.
In 1774, John Wilkinson invented a boring machine with the shaft holding the boring tool supported on both ends, extending through the cylinder, unlike the then used cantilevered borers. With this machine he was able to successfully bore the cylinder for Boulton and Watt's first commercial engine in 1776.
Watt never ceased improving his designs, further increasing the operating cycle speed and introducing governors, automatic valves, double-acting pistons, a variety of rotary power take-offs and many other improvements. Watt's technology enabled the widespread commercial use of stationary steam engines.
Humphrey Gainsborough produced a model condensing steam engine in the 1760s, which he showed to Richard Lovell Edgeworth, a member of the Lunar Society. Gainsborough believed that Watt had used his ideas for the invention; however, James Watt was not a member of the Lunar Society at this period and his many accounts explaining the succession of thought processes leading to the final design would tend to belie this story.
Power was still limited by the low pressure, the displacement of the cylinder, combustion and evaporation rates and condenser capacity. Maximum theoretical efficiency was limited by the relatively low temperature differential on either side of the piston; this meant that for a Watt engine to provide a usable amount of power, the first production engines had to be very large, and were thus expensive to build and install.
Watt double-acting and rotative engines
Watt developed a double-acting engine in which steam drove the piston in both directions, thereby increasing the engine speed and efficiency. The double-acting principle also significantly increased the output of a given physical sized engine.
Boulton & Watt developed the reciprocating engine into the rotative type. Unlike the Newcomen engine, the Watt engine could operate smoothly enough to be connected to a drive shaft – via sun and planet gears – to provide rotary power along with double-acting condensing cylinders. The earliest example was built as a demonstrator and was installed in Boulton's factory to work machines for lapping (polishing) buttons or similar. For this reason it was always known as the Lap Engine. In early steam engines the piston is usually connected by a rod to a balanced beam, rather than directly to a flywheel, and these engines are therefore known as beam engines.
Early steam engines did not provide constant enough speed for critical operations such as cotton spinning. To control speed the engine was used to pump water for a water wheel, which powered the machinery.
High-pressure engines
As the 18th century advanced, the call was for higher pressures; this was strongly resisted by Watt who used the monopoly his patent gave him to prevent others from building high-pressure engines and using them in vehicles. He mistrusted the boiler technology of the day, the way they were constructed and the strength of the materials used.
The important advantages of high-pressure engines were:
They could be made much smaller than previously for a given power output. There was thus the potential for steam engines to be developed that were small and powerful enough to propel themselves and other objects. As a result, steam power for transportation now became a practicality in the form of ships and land vehicles, which revolutionized cargo businesses, travel, military strategy, and essentially every aspect of society.
Because of their smaller size, they were much less expensive.
They did not require the significant quantities of condenser cooling water needed by atmospheric engines.
They could be designed to run at higher speeds, making them more suitable for powering machinery.
The disadvantages were:
In the low-pressure range they were less efficient than condensing engines, especially if steam was not used expansively.
They were more susceptible to boiler explosions.
The main difference between how high-pressure and low-pressure steam engines work is the source of the force that moves the piston. In the engines of Newcomen and Watt, it is the condensation of the steam that creates most of the pressure difference, causing atmospheric pressure (Newcomen) and low-pressure steam, seldom more than 7 psi boiler pressure, plus condenser vacuum (Watt), to move the piston. In a high-pressure engine, most of the pressure difference is provided by the high-pressure steam from the boiler; the low-pressure side of the piston may be at atmospheric pressure or connected to the condenser pressure. Newcomen's indicator diagram, almost all below the atmospheric line, would see a revival nearly 200 years later with the low pressure cylinder of triple expansion engines contributing about 20% of the engine power, again almost completely below the atmospheric line.
The first known advocate of "strong steam" was Jacob Leupold, whose scheme for a high-pressure engine appeared in his encyclopaedic works of the 1720s. Various projects for steam-propelled boats and vehicles also appeared throughout the century, one of the most promising being the construction of Nicolas-Joseph Cugnot, who demonstrated his "fardier" (steam wagon) in 1769. Whilst the working pressure used for this vehicle is unknown, the small size of the boiler gave an insufficient steam production rate to allow the fardier to advance more than a few hundred metres at a time before having to stop to raise steam. Other projects and models were proposed, but as with William Murdoch's model of 1784, many were blocked by Boulton and Watt.
This did not apply in the US, and in 1788 a steamboat built by John Fitch operated in regular commercial service along the Delaware River between Philadelphia, Pennsylvania, and Burlington, New Jersey, carrying as many as 30 passengers. This boat could typically make 7 to 8 miles per hour, and covered a considerable total distance during its short period of service. The Fitch steamboat was not a commercial success, as this route was adequately covered by relatively good wagon roads. In 1802, William Symington built a practical steamboat, and in 1807, Robert Fulton used a Watt steam engine to power the first commercially successful steamboat.
Oliver Evans in his turn was in favour of "strong steam" which he applied to boat engines and to stationary uses. He was a pioneer of cylindrical boilers; however, Evans' boilers did suffer several serious boiler explosions, which tended to lend weight to Watt's qualms. He founded the Pittsburgh Steam Engine Company in 1811 in Pittsburgh, Pennsylvania.
The company introduced high-pressure steam engines to the riverboat trade in the Mississippi watershed.
The first high-pressure steam engine was invented in 1800 by Richard Trevithick.
The importance of raising steam under pressure (from a thermodynamic standpoint) is that it attains a higher temperature. Thus, any engine using high-pressure steam operates at a higher temperature and pressure differential than is possible with a low-pressure vacuum engine. The high-pressure engine thus became the basis for most further development of reciprocating steam technology. Even so, around the year 1800, "high pressure" amounted to what today would be considered very low pressure, i.e. 40-50 psi (276-345 kPa), the point being that the high-pressure engine in question was non-condensing, driven solely by the expansive power of the steam, and once that steam had performed work it was usually exhausted at higher-than-atmospheric pressure. The blast of the exhausting steam into the chimney could be exploited to create induced draught through the fire grate and thus increase the rate of burning, hence creating more heat in a smaller furnace, at the expense of creating back pressure on the exhaust side of the piston.
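The point can be made explicit with the Carnot limit on heat-engine efficiency (a later theoretical result, used here purely as an illustration with assumed temperatures rather than figures from the source):

\[ \eta_{\max} = 1 - \frac{T_c}{T_h}. \]

A low-pressure engine working with steam near 100 °C (373 K) against a condenser at about 30 °C (303 K) has a theoretical ceiling of roughly 1 − 303/373 ≈ 19%, whereas steam raised to 40–50 psi has a saturation temperature of around 140–150 °C (about 413–423 K), lifting the ceiling towards 27–28%. Real engines of the period fell far short of either limit, but the larger temperature differential is what made further gains possible.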
On 21 February 1804, at the Penydarren ironworks at Merthyr Tydfil in South Wales, the first self-propelled railway steam engine or steam locomotive, built by Richard Trevithick, was demonstrated.
Cornish engine and compounding
Around 1811, Richard Trevithick was required to update a Watt pumping engine in order to adapt it to one of his new large cylindrical Cornish boilers. When Trevithick left for South America in 1816, his improvements were continued by William Sims. In parallel, Arthur Woolf developed a compound engine with two cylinders, so that steam expanded in a high-pressure cylinder before being released into a low-pressure one. Efficiency was further improved by Samuel Grose, who insulated the boiler, engine, and pipes.
Steam pressure above the piston was progressively increased and now provided much of the power for the downward stroke; at the same time condensing was improved. This considerably raised efficiency, and further pumping engines on the Cornish system (often known as Cornish engines) continued to be built new throughout the 19th century. Older Watt engines were updated to conform.
The take-up of these Cornish improvements was slow in textile manufacturing areas where coal was cheap, due to the higher capital cost of the engines and the greater wear that they suffered. The change only began in the 1830s, usually by compounding through adding another (high-pressure) cylinder.
Another limitation of early steam engines was speed variability, which made them unsuitable for many textile applications, especially spinning. In order to obtain steady speeds, early steam powered textile mills used the steam engine to pump water to a water wheel, which drove the machinery.
Many of these engines were supplied worldwide and gave reliable and efficient service over a great many years with greatly reduced coal consumption. Some of them were very large and the type continued to be built right down to the 1890s.
Corliss engine
The Corliss steam engine (patented 1849) was called the greatest improvement since James Watt. The Corliss engine had greatly improved speed control and better efficiency, making it suitable to all sorts of industrial applications, including spinning.
Corliss used separate ports for steam supply and exhaust, which prevented the exhaust from cooling the passage used by the hot steam. Corliss also used partially rotating valves that provided quick action, helping to reduce pressure losses. The valves themselves were also a source of reduced friction, especially compared to the slide valve, which typically used 10% of an engine's power.
Corliss used automatic variable cut off. The valve gear controlled engine speed by using the governor to vary the timing of the cut off. This was partly responsible for the efficiency improvement in addition to the better speed control.
Porter-Allen high speed steam engine
The Porter-Allen engine, introduced in 1862, used an advanced valve gear mechanism developed for Porter by Allen, a mechanic of exceptional ability, and was at first generally known as the Allen engine. The high speed engine was a precision machine that was well balanced, achievements made possible by advancements in machine tools and manufacturing technology.
The high speed engine ran at piston speeds from three to five times the speed of ordinary engines. It also had low speed variability. The high speed engine was widely used in sawmills to power circular saws. Later it was used for electrical generation.
The engine had several advantages. It could, in some cases, be directly coupled. If gears or belts and drums were used, they could be much smaller sizes. The engine itself was also small for the amount of power it developed.
Porter greatly improved the fly-ball governor by reducing the rotating weight and adding a weight around the shaft. This significantly improved speed control. Porter's governor became the leading type by 1880.
The efficiency of the Porter-Allen engine was good, but not equal to the Corliss engine.
Uniflow (or unaflow) engine
The uniflow engine was the most efficient type of high-pressure reciprocating engine. The principle was first patented in 1885 by Leonard Jennett Todd and was developed into practical engines in the early 20th century. In a uniflow engine, steam is admitted at each end of the cylinder, typically through poppet valves, and exhausts through ports at mid-stroke uncovered by the piston, so that the steam always flows in one direction; this keeps the hot and cold ends of the cylinder separate and reduces condensation losses. It was used in ships, steam locomotives and steam wagons but was displaced by steam turbines and later marine diesel engines.
References
Bibliography
see Thomas Tredgold
Further reading
Stuart, Robert, A Descriptive History of the Steam Engine, London: J. Knight and H. Lacey, 1824.
Steam power
Steam engines
Steam engine | History of the steam engine | [
"Physics",
"Technology"
] | 6,622 | [
"Physical quantities",
"Steam power",
"Science and technology studies",
"Power (physics)",
"History of technology",
"History of science and technology"
] |
4,182,449 | https://en.wikipedia.org/wiki/Tablet%20computer | A tablet computer, commonly shortened to tablet, is a mobile device, typically with a mobile operating system and touchscreen display processing circuitry, and a rechargeable battery in a single, thin and flat package. Tablets, being computers, have similar capabilities, but lack some input/output (I/O) abilities that other computers have. Modern tablets largely resemble modern smartphones, the only differences being that tablets are relatively larger than smartphones, with screens of 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network. Unlike laptops (which have traditionally run operating systems usually designed for desktops), tablets usually run mobile operating systems, as smartphones do.
The touchscreen display is operated by gestures executed by finger or digital pen (stylus), instead of the mouse, touchpad, and keyboard of larger computers. Portable computers can be classified according to the presence and appearance of physical keyboards. Two species of tablet, the slate and booklet, do not have physical keyboards and usually accept text and other input by use of a virtual keyboard shown on their touchscreen displays. To compensate for their lack of a physical keyboard, most tablets can connect to independent physical keyboards by Bluetooth or USB; 2-in-1 PCs have keyboards, distinct from tablets.
The form of the tablet was conceptualized in the middle of the 20th century (Stanley Kubrick depicted fictional tablets in the 1968 science fiction film 2001: A Space Odyssey) and prototyped and developed in the last two decades of that century. In 2010, Apple released the iPad, the first mass-market tablet to achieve widespread popularity. Thereafter, tablets rapidly rose in ubiquity and soon became a large product category used for personal, educational and workplace applications. Popular uses for a tablet PC include viewing presentations, video-conferencing, reading e-books, watching movies, sharing photos and more. As of 2021 there are 1.28 billion tablet users worldwide according to data provided by Statista, while Apple holds the largest manufacturer market share followed by Samsung and Lenovo.
History
The tablet computer and its associated operating system began with the development of pen computing. Electrical devices with data input and output on a flat information display existed as early as 1888 with the telautograph, which used a sheet of paper as display and a pen attached to electromechanical actuators. Throughout the 20th century devices with these characteristics have been imagined and created whether as blueprints, prototypes, or commercial products. In addition to many academic and research systems, several companies released commercial products in the 1980s, with various input/output types tried out.
Fictional and prototype tablets
Tablet computers appeared in a number of works of science fiction in the second half of the 20th century; all helped to promote and disseminate the concept to a wider audience. Examples include:
Isaac Asimov described a Calculator Pad in his novel Foundation (1951)
Stanisław Lem described the Opton in his novel Return from the Stars (1961)
Numerous similar devices were depicted in Gene Roddenberry's Star Trek: The Original Series (1966)
In the Doctor Who serial The Dominators, the educator Balan holds a tablet into which he inputs data using swipe gestures (1968)
Arthur C. Clarke's newspad was depicted in Stanley Kubrick's film 2001: A Space Odyssey (1968)
Douglas Adams described a tablet computer in The Hitchhiker's Guide to the Galaxy and the associated comedy of the same name (1978)
The science fiction TV series Star Trek: The Next Generation featured tablet computers which were designated as PADDs, notable for (as with most computers in the show) using a touchscreen interface, both with and without a stylus (1987)
A device more powerful than today's tablets appeared briefly in The Mote in God's Eye (1974)
The Star Wars franchise features datapads, first described in print in the 1991 novel Heir to the Empire, and depicted on screen in the 1999 feature film, Star Wars: Episode I – The Phantom Menace
Further, real-life projects either proposed or created tablet computers, such as:
In 1968, computer scientist Alan Kay envisioned a KiddiComp; he developed and described the concept as a Dynabook in his proposal, A personal computer for children of all ages (1972), which outlines functionality similar to that supplied via a laptop computer, or (in some of its other incarnations) a tablet or slate computer, with the exception of near eternal battery life. The target audience was children.
In 1979, the idea of a touchscreen tablet that could detect an external force applied to one point on the screen was patented in Japan by a team at Hitachi consisting of Masao Hotta, Yoshikazu Miyamoto, Norio Yokozawa and Yoshimitsu Oshima, who later received a US patent for their idea.
In 1992, Atari showed developers the Stylus, later renamed ST-Pad. The ST-Pad was based on the TOS/GEM Atari ST platform and prototyped early handwriting recognition. Shiraz Shivji's company Momentus demonstrated at the same time a failed x86 MS-DOS based pen computer with its own graphical user interface (GUI).
In 1994, the European Union initiated the NewsPad project, inspired by Clarke and Kubrick's fictional work. Acorn Computers developed and delivered an ARM-based touch screen tablet computer for this program, branding it the "NewsPad"; the project ended in 1997.
During the November 2000 COMDEX, Microsoft used the term Tablet PC to describe a prototype handheld device they were demonstrating.
In 2001, Ericsson Mobile Communications announced an experimental product named the DelphiPad, which was developed in cooperation with the Centre for Wireless Communications in Singapore, with a touch-sensitive screen, Netscape Navigator as a web browser, and Linux as its operating system.
Early tablets
Following earlier tablet computer products such as the Pencept PenPad, the Linus Write-Top, and the CIC Handwriter, in September 1989, Grid Systems released the first commercially successful tablet computer, the GridPad. All four products were based on extended versions of the MS-DOS operating system. In 1992, IBM announced (in April) and shipped to developers (in October) the ThinkPad 700T (2521), which ran the GO Corporation's PenPoint OS. Also based on PenPoint was AT&T's EO Personal Communicator from 1993, which ran on AT&T's own hardware, including their own AT&T Hobbit CPU. Apple Computer launched the Apple Newton personal digital assistant in 1993. It used Apple's own new Newton OS, initially running on hardware manufactured by Motorola and incorporating an ARM CPU, that Apple had specifically co-developed with Acorn Computers. The operating system and platform design were later licensed to Sharp and Digital Ocean, who went on to manufacture their own variants.
Pen computing was highly hyped by the media during the early 1990s. Microsoft, the dominant PC software vendor, released Windows for Pen Computing in 1992 to compete against PenPoint OS. The company launched the WinPad project, working together with OEMs such as Compaq, to create a small device with a Windows-like operating system and handwriting recognition. However, the project was abandoned two years later; instead Windows CE was released in the form of "Handheld PCs" in 1996. That year, Palm, Inc. released the first of the Palm OS based PalmPilot touch- and stylus-based PDAs, the touch-based devices initially incorporating a Motorola Dragonball (68000) CPU. Also in 1996, Fujitsu released the Stylistic 1000 tablet format PC, running Microsoft Windows 95 on a 100 MHz AMD486 DX4 CPU with 8 MB RAM, offering stylus input with the option of connecting a conventional keyboard and mouse. Intel announced a StrongARM processor-based touchscreen tablet computer in 1999, under the name WebPAD. It was later re-branded as the "Intel Web Tablet". In 2000, the Norwegian company Screen Media AS and the German company Dosch & Amand Gmbh released the "FreePad". It was based on Linux and used the Opera browser. Internet access was provided by DECT DMAP, only available in Europe, providing up to 10 Mbit/s. The device had 16 MB storage, 32 MB of RAM and an x86-compatible 166 MHz "Geode" microcontroller by National Semiconductor. The screen was 10.4" or 12.1" and was touch sensitive. It had slots for SIM cards to enable support of television set-top boxes. The FreePad was sold in Norway and the Middle East, but the company was dissolved in 2003. Sony released its Airboard tablet in Japan in late 2000 with full wireless Internet capabilities.
In the late 1990s, Microsoft launched the Handheld PC platform using their Windows CE operating system; while most devices were not tablets, a few touch enabled tablets were released on the platform such as the Fujitsu PenCentra 130 or Siemens's SIMpad. Microsoft took a more significant approach to tablets in 2002 as it attempted to define the Microsoft Tablet PC as a mobile computer for field work in business, though their devices failed, mainly due to pricing and usability decisions that limited them to their original purpose – such as the existing devices being too heavy to be held with one hand for extended periods, and having legacy applications created for desktop interfaces and not well adapted to the slate format.
Nokia had plans for an Internet tablet since before 2000. An early model was test manufactured in 2001, the Nokia M510, which ran on EPOC and featured an Opera browser, speakers and a 10-inch 800×600 screen, but it was not released because of fears that the market was not ready for it. Nokia entered the tablet space in May 2005 with the Nokia 770 running Maemo, a Debian-based Linux distribution custom-made for their Internet tablet line. The user interface and application framework layer, named Hildon, was an early instance of a software platform for generic computing in a tablet device intended for internet consumption. But Nokia did not commit to it as the sole platform for its future mobile devices, and the project competed against other in-house platforms, including the Series 60. Nokia used the term internet tablet to refer to a portable information appliance that focused on Internet use and media consumption, in the range between a personal digital assistant (PDA) and an Ultra-Mobile PC (UMPC). Nokia made two mobile phones in this line, the N900, which ran Maemo, and the N9, which ran MeeGo.
Before the release of iPad, Axiotron introduced an aftermarket, heavily modified Apple MacBook called Modbook, a Mac OS X-based tablet computer. The Modbook uses Apple's Inkwell for handwriting and gesture recognition, and uses digitization hardware from Wacom. To get Mac OS X to talk to the digitizer on the integrated tablet, the Modbook was supplied with a third-party driver.
Following the launch of the Ultra-mobile PC, Intel began the Mobile Internet Device initiative, which took the same hardware and combined it with a tabletized Linux configuration. Intel codeveloped the lightweight Moblin (mobile Linux) operating system following the successful launch of the Atom CPU series on netbooks. In 2010, Nokia and Intel combined the Maemo and Moblin projects to form MeeGo, a Linux-based operating system that supported netbooks and tablets. The first tablet using MeeGo was the Neofonie WeTab, launched in September 2010 in Germany. The WeTab used an extended version of the MeeGo operating system called WeTab OS. WeTab OS adds runtimes for Android and Adobe AIR and provides a proprietary user interface optimized for the WeTab device. On September 27, 2011, the Linux Foundation announced that MeeGo would be replaced in 2012 by Tizen.
Modern tablets
Android was the first of the 2000s-era dominating platforms for tablet computers to reach the market. In 2008, the first plans for Android-based tablets appeared. The first products were released in 2009. Among them was the Archos 5, a pocket-sized model with a 5-inch touchscreen, that was first released with a proprietary operating system and later (in 2009) released with Android 1.6. The Camangi WebStation was released in Q2 2009. The first LTE Android tablet appeared in late 2009 and was made by ICD for Verizon. This unit was called the Ultra, but a version called Vega was released around the same time. The Ultra had a 7-inch display while the Vega's was 15 inches. Many more products followed in 2010. Several manufacturers waited for Android Honeycomb, specifically adapted for use with tablets, which debuted in February 2011.
Apple is often credited for defining a new class of consumer device with the iPad, which shaped the commercial market for tablets in the following years, and was the most successful tablet at the time of its release. iPads and competing devices were tested by the U.S. military in 2011 and cleared for secure use in 2013. Its debut in 2010 pushed tablets into the mainstream. Samsung's Galaxy Tab and others followed, continuing the trends towards the features listed above. In March 2012, PC Magazine reported that 31% of U.S. Internet users owned a tablet, used mainly for viewing published content such as video and news. The top-selling line of devices was Apple's iPad with 100 million sold between its release in April 2010 and mid-October 2012, but iPad market share (number of units) dropped to 36% in 2013, with Android tablets climbing to 62%. Android tablet sales volumes were 52 million devices in 2012 and 121 million in 2013. Individual brands of Android operating system devices or compatibles followed the iPad, with Amazon's Kindle Fire at 7 million and Barnes & Noble's Nook at 5 million.
The BlackBerry PlayBook, announced in September 2010, ran the BlackBerry Tablet OS. The BlackBerry PlayBook was officially released to US and Canadian consumers on April 19, 2011. Hewlett-Packard announced that the TouchPad, running WebOS 3.0 on a 1.2 GHz Qualcomm Snapdragon CPU, would be released in June 2011. On August 18, 2011, HP announced the discontinuation of the TouchPad, due to sluggish sales. In 2013, the Mozilla Foundation announced a prototype tablet model with Foxconn which ran on Firefox OS. Firefox OS was discontinued in 2016. Canonical hinted that Ubuntu would be available on tablets by 2014. In February 2016, there was a commercial release of the BQ Aquaris Ubuntu tablet using the Ubuntu Touch operating system. Canonical terminated support for the project due to lack of market interest on April 5, 2017, and it was then adopted by UBports as a community project.
As of February 2014, 83% of mobile app developers were targeting tablets, but 93% of developers were targeting smartphones. By 2014, around 23% of B2B companies were said to have deployed tablets for sales-related activities, according to a survey report by Corporate Visions. The iPad held majority use in North America, Western Europe, Japan, Australia, and most of the Americas. Android tablets were more popular in most of Asia (China and Russia an exception), Africa and Eastern Europe. In 2015 tablet sales did not increase. Apple remained the largest seller but its market share declined below 25%. Samsung vice president Gary Riding said early in 2016 that tablets were only doing well among those using them for work. Newer models were more expensive and designed for a keyboard and stylus, which reflected the changing uses. As of early 2016, Android reigned over the market with 65%. Apple took the number 2 spot with 26%, and Windows took a distant third with the remaining 9%. In 2018, out of 4.4 billion computing devices Android accounted for 2 billion, iOS for 1 billion, and the remainder were PCs, in various forms (desktop, notebook, or tablet), running various operating systems (Windows, macOS, ChromeOS, Linux, etc.).
Since the early 2020s, various companies such as Samsung are beginning to introduce foldable technology into their tablets.
Types
Tablets can be loosely grouped into several categories by physical size, kind of operating system installed, input and output technology, and uses.
Slate
The size of a slate varies, but slates begin at 6 inches (approximately 15 cm). Some models in the larger than 10-inch (25 cm) category include the Samsung Galaxy Tab Pro 12.2 at 12.2 inches (31 cm), the Toshiba Excite at 13.3 inches (33 cm) and the Dell XPS 18 at 18.4 inches (47 cm). As of March 2013, the thinnest tablet on the market was the Sony Xperia Tablet Z at only 0.27 inches (6.9 mm) thick. On September 9, 2015, Apple released the iPad Pro with a 12.9-inch (33 cm) screen, larger than the regular iPad.
Mini tablet
Mini tablets are smaller and weigh less than slates, with typical screen sizes between 7 and 8 inches (18–20 cm). The first commercially successful mini tablets were introduced by Amazon.com (Kindle Fire), Barnes & Noble (Nook Tablet), and Samsung (Galaxy Tab) in 2011; and by Google (Nexus 7) in 2012. They operate identically to ordinary tablets but have lower specifications compared to them.
On September 14, 2012, Amazon, Inc. released an upgraded version of the Kindle Fire, the Kindle Fire HD, with higher screen resolution and more features compared to its predecessor, yet remaining only 7 inches. In October 2012, Apple released the iPad Mini with a 7.9-inch screen size, about 2 inches smaller than the regular iPad, but less powerful than the then current iPad 3. On July 24, 2013, Google released an upgraded version of the Nexus 7, with FHD display, dual cameras, stereo speakers, more color accuracy, performance improvement, built-in wireless charging, and a variant with 4G LTE support for AT&T, T-Mobile, and Verizon. In September 2013, Amazon further updated the Fire tablet with the Kindle Fire HDX. In November 2013, Apple released the iPad Mini 2, which remained at 7.9 inches and nearly matched the hardware of the iPad Air.
Phablet
Smartphones and tablets are similar devices, differentiated by the former typically having smaller screens and most tablets lacking cellular network capability. Since 2010, crossover touchscreen smartphones with screens larger than 5 inches have been released. That size is generally considered larger than a traditional smartphone, creating the hybrid category of the phablet by Forbes and other publications. "Phablet" is a portmanteau of "phone" and "tablet".
At the time of the introduction of the first phablets, they had screens of 5.3 to 5.5 inches, but as of 2017 screen sizes up to 5.5 inches are considered typical. Examples of phablets from 2017 and onward are the Samsung Galaxy Note series (newer models of 5.7 inches), the LG V10/V20 (5.7 inches), the Sony Xperia XA Ultra (6 inches), the Huawei Mate 9 (5.9 inches), and the Huawei Honor (MediaPad) X2 (7 inches).
2-in-1
A 2-in-1 PC is a hybrid or combination of a tablet and laptop computer that has features of both. Distinct from tablets, 2-in-1 PCs all have physical keyboards, but they are either concealable by folding them back and under the touchscreen ("2-in-1 convertible") or detachable ("2-in-1 detachable"). 2-in-1s typically also can display a virtual keyboard on their touchscreens when their physical keyboards are concealed or detached. Some 2-in-1s have processors and operating systems like those of laptops, such as Windows 10, while having the flexibility of operation as a tablet. Further, 2-in-1s may have typical laptop I/O ports, such as USB 3 and DisplayPort, and may connect to traditional PC peripheral devices and external displays. Simple tablets are mainly used as media consumption devices, while 2-in-1s have capacity for both media consumption and content creation, and thus 2-in-1s are often called laptop or desktop replacement computers.
There are two species of 2-in-1s:
Convertibles have a chassis design by which their physical keyboard may be concealed by flipping/folding the keyboard behind the chassis. Examples include 2-in-1 PCs of the Lenovo Yoga series.
Detachables or Hybrids have physical keyboards that may be detached from their chassis, even while the 2-in-1 is operating. Examples include 2-in-1 PCs of the Asus Transformer Pad and Book series, the iPad Pro, and the Microsoft Surface Book and Surface Pro.
Gaming tablet
Some tablets are modified by adding physical gamepad buttons such as a D-pad and thumb sticks for a better gaming experience combined with the touchscreen and all other features of a typical tablet computer. Most of these tablets are targeted to run native OS games and emulator games. Nvidia's Shield Tablet, with an 8-inch display and running Android, is an example. It runs Android games purchased from the Google Play store. PC games can also be streamed to the tablet from computers with some higher end models of Nvidia-powered video cards. The Nintendo Switch hybrid console is also a gaming tablet that runs on its own system software, features detachable Joy-Con controllers with motion controls and three gaming modes: table-top mode using its kickstand, traditional docked/TV mode and handheld mode. While not entirely an actual tablet form factor due to their sizes, some other handheld consoles, including the smaller version of the Nintendo Switch, the Nintendo Switch Lite, and the PlayStation Vita, are treated as gaming tablets or tablet replacements by the community and reviewers/publishers due to their internet browsing and multimedia capabilities.
Booklet
Booklets are dual-touchscreen tablet computers with a clamshell design that can fold like a laptop. Examples include the Microsoft Courier, which was discontinued in 2010, the Sony Tablet P (considered a flop), and the Toshiba Libretto W100.
Customized business tablet
Customized business tablets are built specifically for a business customer's particular needs from a hardware and software perspective, and delivered in a business-to-business transaction. For example, in hardware, a transportation company may find that the consumer-grade GPS module in an off-the-shelf tablet provides insufficient accuracy, so a tablet can be customized and embedded with a professional-grade antenna to provide a better GPS signal. Such tablets may also be ruggedized for field use. For a software example, the same transportation company might remove certain software functions in the Android system, such as the web browser, to reduce costs from needless cellular network data consumption of an employee, and add custom package management software. Other applications may call for a resistive touchscreen and other special hardware and software.
A table ordering tablet is a touchscreen tablet computer designed for use in casual restaurants. Such devices allow users to order food and drinks, play games and pay their bill. Since 2013, restaurant chains including Chili's, Olive Garden and Red Robin have adopted them. As of 2014, the two most popular brands were Ziosk and Presto. The devices have been criticized by servers who claim that some restaurants determine their hours based on customer feedback in areas unrelated to service.
E-reader
Any device that can display text on a screen may act as an e-reader. While traditionally e-readers are designed primarily for the purpose of reading digital e-books and periodicals, modern e-readers that use a mobile operating system such as Android have incorporated modern functionality including internet browsing and multimedia capabilities; for example, the Huawei MatePad Paper is a tablet that uses e-ink instead of a typical LCD or LED panel, focusing on reading digital content while maintaining internet and multimedia capabilities. Some e-readers such as the PocketBook InkPad Color and the ONYX BOOX NOVA 3 Color even come with a colored e-ink panel and speakers, which allow a higher degree of multimedia consumption and video playback.
The Kindle line from Amazon was originally limited to E-reading capabilities; however, an update to their Kindle firmware added the ability to browse the Internet and play audio, allowing Kindles to be alternatives to a traditional tablet, in some cases, with a more readable e-ink panel and greater battery life, and providing the user with access to wider multimedia capabilities compared to the older model.
Hardware
System architecture
Two major architectures dominate the tablet market, ARM Ltd.'s ARM architecture and Intel's and AMD's x86. Intel's x86, including x86-64 has powered the "IBM compatible" PC since 1981 and Apple's Macintosh computers since 2006. The CPUs have been incorporated into tablet PCs over the years and generally offer greater performance along with the ability to run full versions of Microsoft Windows, along with Windows desktop and enterprise applications. Non-Windows based x86 tablets include the JooJoo. Intel announced plans to enter the tablet market with its Atom in 2010. In October 2013, Intel's foundry operation announced plans to build FPGA-based quad cores for ARM and x86 processors.
ARM has been the CPU architecture of choice for manufacturers of smartphones (95% ARM), PDAs, digital cameras (80% ARM), set-top boxes, DSL routers, smart televisions (70% ARM), storage devices and tablet computers (95% ARM). This dominance began with the release of the mobile-focused and comparatively power-efficient 32-bit ARM610 processor originally designed for the Apple Newton in 1993 and ARM3-using Acorn A4 laptop in 1992. The chip was adopted by Psion, Palm and Nokia for PDAs and later smartphones, camera phones, cameras, etc. ARM's licensing model supported this success by allowing device manufacturers to license, alter and fabricate custom SoC derivatives tailored to their own products. This has helped manufacturers extend battery life and shrink component count along with the size of devices.
The multiple licensees ensured that multiple fabricators could supply near-identical products, while encouraging price competition. This forced unit prices down to a fraction of their x86 equivalents. The architecture has historically had limited support from Microsoft, with only Windows CE available, but with the 2012 release of Windows 8, Microsoft announced added support for the architecture, shipping their own ARM-based tablet computer, branded the Microsoft Surface, as well as an x86-64 Intel Core i5 variant branded as Microsoft Surface Pro. Intel tablet chip sales were 1 million units in 2012, and 12 million units in 2013. Intel chairman Andy Bryant has stated that its 2014 goal is to quadruple its tablet chip sales to 40 million units by the end of that year, as an investment for 2015.
Display
A key component among tablet computers is touch input on a touchscreen display. This allows the user to navigate easily and type with a virtual keyboard on the screen or press other icons on the screen to open apps or files. The first tablet to do this was the Linus Write-Top by Linus Technologies; the tablet featured both a stylus, a pen-like tool to aid with precision in a touchscreen device, as well as handwriting recognition. The system must respond to on-screen touches rather than clicks of a keyboard or mouse. This operation makes precise use of our eye–hand coordination.
Touchscreens usually come in one of two forms:
Resistive touchscreens are passive and respond to pressure on the screen. They allow a high level of precision, useful in emulating a pointer (as is common in tablet computers) but may require calibration. Because of the high resolution, a stylus or fingernail is often used. Stylus-oriented systems are less suited to multi-touch.
Capacitive touchscreens tend to be less accurate, but more responsive than resistive devices. Because they require a conductive material, such as a fingertip, for input, they are not common among stylus-oriented devices but are prominent on consumer devices. Most finger-driven capacitive screens do not currently support pressure input (except for the iPhone 6S and later models), but some tablets use a pressure-sensitive stylus or active pen.
Some tablets can recognize individual palms, while some professional-grade tablets use pressure-sensitive films, such as those on graphics tablets. Some capacitive touch-screens can detect the size of the touched area and the pressure used.
Since the mid-2010s, most tablets have used capacitive touchscreens with multi-touch, unlike earlier resistive touchscreen devices, for which users needed a stylus to perform inputs.
There are also electronic paper tablets, such as the Sony Digital Paper DPTS1 and the reMarkable, that use E Ink as their display technology.
Handwriting recognition
Many tablets support a stylus and support handwriting recognition. Wacom and N-trig digital pens provide approximately 2500 DPI resolution for handwriting, exceeding the resolution of capacitive touch screens by more than a factor of 10. These pens also support pressure sensitivity, allowing for "variable-width stroke-based" characters, such as Chinese/Japanese/Korean writing, due to their built-in capability of "pressure sensing". Pressure is also used in digital art applications such as Autodesk Sketchbook. Apps exist on both iOS and Android platforms for handwriting recognition and in 2015 Google introduced its own handwriting input with support for 82 languages.
Other features
After 2007, with access to capacitive screens and the success of the iPhone, other features became common, such as multi-touch features (in which the user can touch the screen in multiple places to trigger actions) and other natural user interface features, as well as flash memory solid-state storage and "instant on" warm-booting; external USB and Bluetooth keyboards also became available for tablets.
Most tablets released since mid-2010 use a version of an ARM processor for longer battery life. The ARM Cortex family is powerful enough for tasks such as internet browsing, light creative and production work and mobile games.
Other features include:
High-definition, anti-glare display and touchscreen
Lower weight and longer battery life than a comparably-sized laptop
Wireless local area and internet connectivity (usually with the Wi-Fi standard and optional mobile broadband)
Bluetooth for connecting peripherals and communicating with local devices
Ports for wired connections and charging, for example USB ports; early devices had IR support and could work as a TV remote controller
Docking station with keyboard and added connectivity
On-board flash memory and ports for removable storage
Various cloud storage services for backup and syncing data across devices, and local storage on a local area network (LAN)
Speech recognition: Google introduced voice input in Android 2.1 in 2009 and voice actions in 2.2 in 2010, with up to five languages (now around 40). Siri was introduced as a system-wide personal assistant on the iPhone 4S in 2011 and now supports nearly 20 languages. In both cases, the voice input is sent to central servers to perform general speech recognition and thus requires a network connection for more than simple commands.
Near-field communication with other compatible devices, including ISO/IEC 14443 RFID tags.
Software
Current tablet operating systems
Tablets, like conventional PCs, use several different operating systems, though dual-booting is rare. Tablet operating systems come in two classes:
Desktop computer operating systems
Mobile operating systems
Desktop OS-based tablets are currently thicker and heavier. They require more storage and more cooling and give less battery life. They can run processor-intensive graphical applications in addition to mobile apps, and have more ports.
Mobile-based tablets are the reverse, and run only mobile apps. They can use battery life conservatively because the processor is significantly smaller. This allows the battery to last much longer than the common laptop.
In Q1 2018, Android tablets had 62% of the market, Apple's iOS had 23.4% of the market and Windows 10 had 14.6% of the market. In late 2021, iOS had 55% use worldwide (varying by continent, e.g. below 50% in South America and Africa) and Android 45% use. Still, Android tablets see more use than iOS in virtually all countries, with exceptions such as the U.S. and China.
Android
Android is a Linux-based operating system that Google offers as open source under the Apache license. It is designed primarily for mobile devices such as smartphones and tablet computers. Android supports low-cost ARM systems and others. The first tablets running Android were released in 2009. Vendors such as Motorola and Lenovo delayed deployment of their tablets until after 2011, when Android was reworked to include more tablet features. Android 3.0 (Honeycomb), released in 2011, and later versions support larger screen sizes, mainly tablets, and have access to the Google Play service. Android includes an operating system, middleware and key applications. Other vendors sell customized Android tablets, such as the Kindle Fire and Nook, which are used to consume mobile content and provide their own app store rather than using the larger Google Play system, thereby fragmenting the Android market. In 2022 Google began to re-emphasize in-house Android tablet development, at this point a multi-year commitment.
Android Go
A few tablet computers are shipped with Android Go.
Fire OS
As mentioned above, Amazon Fire OS is an Android-based mobile operating system produced by Amazon for its Fire range of tablets. It is forked from Android. Fire OS primarily centers on content consumption, with a customized user interface and heavy ties to content available from Amazon's own storefronts and services.
ChromeOS
Several devices that run ChromeOS came on the market in 2017–2019, as tablets, or as 2-in-1s with touchscreen and 360-degree hinge.
HarmonyOS
HarmonyOS (HMOS) is a distributed operating system developed by Huawei to collaborate and interconnect with multiple smart devices in the Internet of Things (IoT) ecosystem. In its current multi-kernel design, the operating system selects suitable kernels from the abstraction layer for devices with diverse resources. For IoT devices, the system is known to be based on the LiteOS kernel, while for smartphones and tablets it is based on a Linux kernel layer with AOSP libraries to support Android application package (APK) apps using Android Runtime (ART) through the Ark Compiler, in addition to native HarmonyOS apps built via an integrated development environment (IDE) known as DevEco Studio.
iPadOS
The iPad runs on iPadOS. Prior to the introduction of iPadOS in 2019, the iPad ran iOS, which was created for the iPhone and iPod Touch. The first iPad was released in 2010. Although built on the same underlying Unix implementation as macOS, its user interface is radically different. iPadOS is designed for touch input from the user's fingers and has none of the features that required a stylus on earlier tablets. Apple introduced multi-touch gestures, such as moving two fingers apart or together to zoom in or out, also termed pinch to zoom. iPadOS and iOS are built for the ARM architecture.
Kindle firmware
Kindle firmware is a mobile operating system specifically designed for Amazon Kindle e-readers. It is based on a custom Linux kernel; however, it is entirely closed-source and proprietary, and only runs on Amazon Kindle line up manufactured under the Amazon brand.
Nintendo Switch system software
The Nintendo Switch system software (also known by its codename Horizon) is an updatable firmware and operating system used by the Nintendo Switch hybrid video game console/tablet and Nintendo Switch Lite handheld game console. It is based on a proprietary microkernel. The UI includes a HOME screen, consisting of the top bar, the screenshot viewer ("Album"), and shortcuts to the Nintendo eShop, News, and Settings.
PlayStation Vita system software
The PlayStation Vita system software is the official firmware and operating system for the PlayStation Vita and PlayStation TV video game consoles. It uses LiveArea as its graphical shell. The PlayStation Vita system software has one optional add-on component, the PlayStation Mobile Runtime Package. The system is built on a Unix base derived from FreeBSD and NetBSD. Due to its internet browsing and multimedia capabilities, it is treated as a gaming tablet or tablet replacement by the community and by reviewers and publishers.
Ubuntu Touch
Ubuntu Touch is an open-source (GPL) mobile version of the Ubuntu operating system originally developed in 2013 by Canonical Ltd. and continued by the non-profit UBports Foundation in 2017. Ubuntu Touch can run on a pure GNU/Linux base on phones with the required drivers, such as the Librem 5 and the PinePhone. To enable hardware that was originally shipped with Android, Ubuntu Touch makes use of the Android Linux kernel, using Android drivers and services via an LXC container, but does not use any of the Java-like code of Android. As of February 2022, Ubuntu Touch is available on 78 phones and tablets. The UBports Installer serves as an easy-to-use tool to allow inexperienced users to install the operating system on third-party devices without damaging their hardware.
Windows
Following Windows for Pen Computing for Windows 3.1 in 1991, Microsoft supported tablets running Windows XP under the Microsoft Tablet PC name. Microsoft Tablet PCs were pen-based, fully functional x86 PCs with handwriting and voice recognition functionality. Windows XP Tablet PC Edition provided pen support. Tablet support was added to both Home and Business versions of Windows Vista and Windows 7. Tablets running Windows could use the touchscreen for mouse input, hand writing recognition and gesture support. Following Tablet PC, Microsoft announced the Ultra-mobile PC initiative in 2006 which brought Windows tablets to a smaller, touch-centric form factor. In 2008, Microsoft showed a prototype of a two-screen tablet called Microsoft Courier, but cancelled the project.
In 2012, Microsoft released Windows 8, which features significant changes to various aspects of the operating system's user interface and platform which are designed for touch-based devices such as tablets. The operating system also introduced an application store and a new style of application optimized primarily for use on tablets. Microsoft also introduced Windows RT, an edition of Windows 8 for use on ARM-based devices. The launch of Windows 8 and RT was accompanied by the release of devices with the two operating systems by various manufacturers (including Microsoft themselves, with the release of Surface), such as slate tablets, hybrids, and convertibles.
Released in July 2015, Windows 10 introduces what Microsoft described as "universal apps"; expanding on Metro-style apps, these apps can be designed to run across multiple Microsoft product families with nearly identical code – including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub and Windows Holographic. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on available input devices – particularly on 2-in-1 PCs; both interfaces include an updated Start menu. Windows 10 replaced all earlier editions of Windows.
Hybrid OS operation
Several hardware companies have built hybrid devices with the possibility to work with both Android and Windows Phone operating systems (or in rare cases Windows 8.1, as with the, by now cancelled, Asus Transformer Book Duet), while Ars Technica stated: "dual-OS devices are always terrible products. Windows and Android almost never cross-communicate, so any dual-OS device means dealing with separate apps, data, and storage pools and completely different UI paradigms. So from a consumer perspective, Microsoft and Google are really just saving OEMs from producing tons of clunky devices that no one will want."
Discontinued tablet operating systems
BlackBerry 10
BlackBerry 10 (based on the QNX OS) is from BlackBerry. As a smartphone OS, it is closed-source and proprietary, and only runs on phones and tablets manufactured by BlackBerry.
One of the dominant platforms in the world in the late 2000s, its global market share was reduced significantly by the mid-2010s. In late 2016, BlackBerry announced that it would continue to support the OS, with a promise to release 10.3.3. Thereafter, BlackBerry 10 would not receive any major updates, as BlackBerry and its partners would focus more on their Android-based development.
BlackBerry Tablet OS
BlackBerry Tablet OS is an operating system from BlackBerry Ltd based on the QNX Neutrino real-time operating system designed to run Adobe AIR and BlackBerry WebWorks applications, currently available for the BlackBerry PlayBook tablet computer.
The BlackBerry Tablet OS was the first tablet operating system from QNX (now a subsidiary of RIM).
BlackBerry Tablet OS supports standard BlackBerry Java applications. Support for Android apps has also been announced, through sandboxed "app players" which can be ported by developers or installed through sideloading by users. A BlackBerry Tablet OS Native Development Kit, for developing native applications with the GNU toolchain, is currently in closed beta testing. The first device to run BlackBerry Tablet OS was the BlackBerry PlayBook tablet computer.
Application store
Apps that do not come pre-installed with the system are supplied through online distribution. These sources, termed app stores, provide centralized catalogs of software and allow "one click" on-device software purchasing, installation and updates.
Mobile device suppliers may adopt a "walled garden" approach, wherein the supplier controls what software applications ("apps") are available. Software development kits are restricted to approved software developers. This can be used to reduce the impact of malware, provide software with an approved content rating, control application quality and exclude competing vendors. Apple, Google, Amazon, Microsoft and Barnes & Noble all adopted the strategy. B&N originally allowed arbitrary apps to be installed, but, in December 2011, excluded third parties. Apple and IBM have agreed to cooperate in cross-selling IBM-developed applications for iPads and iPhones in enterprise-level accounts. Proponents of open source software say that the iPad (or such "walled garden" app store approach) violates the spirit of personal control that traditional personal computers have always provided.
Sales
Around 2010, tablet use by businesses jumped, as business began to use them for conferences, events, and trade shows. In 2012, Intel reported that their tablet program improved productivity for about 19,000 of their employees by an average of 57 minutes a day. In October 2012, display screen shipments for tablets began surpassing shipments for laptop display screens. Tablets became increasingly used in the construction industry to look at blueprints, field documentation and other relevant information on the device instead of carrying around large amounts of paper. Time described the product's popularity as a "global tablet craze" in a November 2012 article.
As of the start of 2014, 44% of US online consumers owned tablets, a significant jump from 5% in 2011. Tablet use also became increasingly common among children. A 2014 survey found that mobiles were the most frequently used object for play among American children under the age of 12. Mobiles were used more often in play than video game consoles, board games, puzzles, play vehicles, blocks and dolls/action figures. Despite this, the majority of parents said that a mobile was "never" or only "sometimes" a toy. As of 2014, nearly two-thirds of American 2- to 10-year-olds had access to a tablet or e-reader. A widespread use of tablets by adults is as a personal internet-connected TV. A 2015 study found that a third of children under five have their own tablet device.
After a fast rise in sales during the early 2010s, the tablet market had plateaued in 2015 and by Q3 2018 sales had declined by 35% from its Q3 2014 peak. In spite of this, tablet sales worldwide had surpassed sales of desktop computers in 2017, and worldwide PC sales were flat for the first quarter of 2018. In 2020 the tablet market saw a large surge in sales with 164 million tablet units being shipped worldwide due to a large demand for work from home and online learning.
2010 to 2014 figures are estimated by Gartner. 2014 to 2021 figures are estimated by IDC.
By manufacturer
By operating system
A survey conducted in March 2012 by the Online Publishers Association (OPA), now called Digital Content Next (DCN), found that 72% of tablet owners had an iPad, while 32% had an Android tablet. By 2012, Android tablet adoption had increased: 52% of tablet owners owned an iPad, while 51% owned an Android-powered tablet (percentages do not add up to 100% because some tablet owners own more than one type). By the end of 2013, Android's market share rose to 61.9%, followed by iOS at 36%. By late 2014, Android's market share rose to 72%, followed by iOS at 22.3% and Windows at 5.7%. As of early 2016, Android had 65% market share, Apple had 26% and Windows had 9% market share. In Q1 2018, Android tablets had 62% of the market, Apple's iOS had 23.4% of the market and Windows 10 had 14.6% of the market.
Source: Strategy Analytics
Use
Sleep
The blue wavelength of light from back-lit tablets may impact one's ability to fall asleep when reading at night, through the suppression of melatonin. Experts at Harvard Medical School suggest limiting tablets for reading use in the evening. Those who have a delayed body clock, such as teenagers, which makes them prone to stay up late in the evening and sleep later in the morning, may be at particular risk for increases in sleep deficiencies. A PC app such as F.lux and Android apps such as CF.lumen and Twilight attempt to decrease the impact on sleep by filtering blue wavelengths from the display. iOS 9.3 includes Night Shift that shifts the colors of the device's display to be warmer during the later hours.
By plane
Because of, among other things, the electromagnetic waves emitted by this type of device, the use of any type of electronic device during the take-off and landing phases was totally prohibited on board commercial flights. On November 13, 2013, the European Aviation Safety Agency (EASA) announced that the use of mobile terminals could be authorized on the flights of European airlines during these phases from 2014 onwards, on the condition that the cellular functions are deactivated ("airplane" mode activated). In September 2014, EASA issued guidance that allows EU airlines to permit use of tablets, e-readers, smartphones, and other portable electronic devices to stay on without the need to be in airplane mode during all parts of EU flights; however, each airline has to decide whether to allow this behavior. In the U.S., the Federal Aviation Administration allowed use of portable electronic devices during all parts of flights while in airplane mode in late 2013.
Tourism
Some French historical monuments are equipped with digital touch tablets called "HistoPad". It is an application integrated with an iPad Mini that offers augmented- and virtual-reality interaction with several parts of the visit, allowing visitors to take control of their tour in an interactive and personalized way.
Professional use for specific sectors
Some professionals – for example, in the construction industry, insurance experts, lifeguards or surveyors – use so-called rugged shelf models in the field that can withstand extreme hot or cold shocks or climatic environments. Some units are hardened against drops and screen breakage. Satellite-connectivity-equipped tablets such as the Thorium X, for example, can be used in areas where there is no other connectivity. This is a valuable feature in the aeronautical and military realms. For example, United States Army helicopter pilots are moving to tablets as electronic flight bags, which confer the advantages of rapid, convenient synchronization of large groups of users, and the seamless updating of information. US Army chaplains who are deployed in the field with the troops cite the accessibility of Army regulations, field manuals, and other critical information to help with their services; however, power generation, speakers, and a tablet rucksack are also necessary for the chaplains.
See also
Comparison of tablet computers
History of tablet computers
List of tablet PC dimensions and case sizes
Lists of mobile computers
Mobile device
Laptop–tablet convergence
References
External links
American inventions
Classes of computers | Tablet computer | [
"Technology"
] | 9,995 | [
"Computers",
"Computer systems",
"Classes of computers"
] |
4,182,509 | https://en.wikipedia.org/wiki/Oxygen%20transmission%20rate | Oxygen transmission rate (OTR) is the measurement of the amount of oxygen gas that passes through a substance over a given period. It is mostly carried out on non-porous materials, where the mode of transport is diffusion, but there are a growing number of applications where the transmission rate also depends on flow through apertures of some description.
It relates to the permeation of oxygen through packaging to sensitive foods and pharmaceuticals.
Measurement
Standard test methods are available for measuring the oxygen transmission rate of packaging materials. Completed packages, however, involve heat seals, creases, joints, and closures which often reduce the effective barrier of the package. For example, the glass of a glass bottle may have an effective total barrier but the screw cap closure and the closure liner might not.
ASTM standard test methods include:
F3136 Standard Test Method for Oxygen Gas Transmission Rate through Plastic Film and Sheeting using a Dynamic Accumulation Method
D3985 Standard Test Method for Oxygen Gas Transmission Rate Through Plastic Film and Sheeting Using a Coulometric Sensor
F1307 Standard Test Method for Oxygen Transmission Rate Through Dry Packages Using a Coulometric Sensor
F1927 Standard Test Method for Determination of Oxygen Gas Transmission Rate, Permeability and Permeance at Controlled Relative Humidity Through Barrier Materials Using a Coulometric Detector
F2622 Standard Test Method for Oxygen Gas Transmission Rate Through Plastic Film and Sheeting Using Various Sensors
Other test methods include:
The ambient oxygen ingress rate method (AOIR) an alternative method for measuring the oxygen transmission rates (OTR) of whole packages
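For non-porous films in steady state, the transmission rate measured by these methods relates to the material's permeability through a simple permeation relationship: the transmission rate per unit area equals the permeability coefficient times the oxygen partial-pressure difference, divided by the film thickness. The following is a minimal illustrative sketch of that arithmetic only; the permeability value and the function name are placeholders, not data for any real film or part of any standard.

```python
# Minimal sketch: steady-state permeation of oxygen through a non-porous film.
# OTR (per unit area) = permeability * (p1 - p2) / thickness.
# The numbers below are placeholder values chosen for illustration only.

def oxygen_transmission_rate(permeability_cc_mm, thickness_mm, delta_p_atm=0.21):
    """Return OTR in cc/(m^2*day).

    permeability_cc_mm -- permeability coefficient in cc*mm/(m^2*day*atm)
    thickness_mm       -- film thickness in mm
    delta_p_atm        -- oxygen partial-pressure difference across the film
                          (0.21 atm: air on one side, roughly zero oxygen inside)
    """
    return permeability_cc_mm * delta_p_atm / thickness_mm

# A hypothetical 0.05 mm film with permeability 40 cc*mm/(m^2*day*atm):
print(oxygen_transmission_rate(40.0, 0.05))  # -> 168.0 cc/(m^2*day)
```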
Wine
Also a factor of increasing awareness in the debate surrounding wine closures, natural corks show small variation in their oxygen transmission rate, which in turn translates to a degree of bottle variation.
See also
Moisture vapor transmission rate
Permeation
Shelf life
Oxygen scavenger
References
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009,
Massey, L K, "Permeability Properties of Plastics and Elastomers", 2003, Andrew Publishing,
Sanghyun Lee "Mass Transfer" Konkuk University, 2017
Hanne Larsen, Achim Kohler and Ellen Merethe Magnus, "Ambient oxygen ingress rate method", John Wiley & Sons, Packaging Technology and Science, Volume 13, Issue 6, pages 233–241
Footnotes
Packaging
Temporal rates | Oxygen transmission rate | [
"Physics"
] | 479 | [
"Temporal quantities",
"Temporal rates",
"Physical quantities"
] |
4,182,590 | https://en.wikipedia.org/wiki/Septum%20transversum | The septum transversum is a thick mass of cranial mesenchyme, formed in the embryo, that gives rise to parts of the thoracic diaphragm and the ventral mesentery of the foregut in the developed human being and other mammals.
Origins
The septum transversum originally arises as the most cranial part of the mesenchyme on day 22. During craniocaudal folding, it assumes a position cranial to the developing heart at the level of the cervical vertebrae. During subsequent weeks the dorsal end of the embryo grows much faster than its ventral counterpart resulting in an apparent descent of the ventrally located septum transversum. At week 8, it can be found at the level of the thoracic vertebrae.
Nerve supply
After successful craniocaudal folding the septum transversum picks up innervation from the adjacent ventral rami of spinal nerves C3, C4 and C5, thus forming the precursor of the phrenic nerve. During the descent of the septum, the phrenic nerve is carried along and assumes its descending pathway.
During embryonic development of the thoracic diaphragm, myoblast cells from the septum invade the other components of the diaphragm. They thus give rise to the motor and sensory innervation of the muscular diaphragm by the phrenic nerve.
Derivatives
The cranial part of the septum transversum gives rise to the central tendon of the diaphragm, and is the origin of the myoblasts that invade the pleuroperitoneal folds resulting in the formation of the muscular diaphragm.
The caudal part of the septum transversum is invaded by the hepatic diverticulum which divides within it to form the liver and thus gives rise to the ventral mesentery of the foregut, which in turn is the precursor of the lesser omentum, the visceral peritoneum of the liver and the falciform ligament.
Though not derived from the septum transversum, development of the liver is highly dependent upon signals originating here. Bone morphogenetic protein 2 (BMP-2), BMP-4 and BMP-7 produced by the septum transversum combine with fibroblast growth factor (FGF) signals from the cardiac mesoderm to induce part of the foregut to differentiate towards a hepatic fate.
Additional images
References
External links
Developmental biology
Digestive system
Embryology | Septum transversum | [
"Biology"
] | 523 | [
"Digestive system",
"Behavior",
"Developmental biology",
"Reproduction",
"Organ systems"
] |
4,182,653 | https://en.wikipedia.org/wiki/Launch%20Services%20Alliance | Launch Services Alliance is a "back-up" launch service provider. It is a joint venture between the multinational aerospace company Arianespace and Japanese conglomerate Mitsubishi Heavy Industries; initially, the American aerospace firm Boeing Launch Services was involved as well.
LSA was established during 2003. In the event of one of the commercial partners not being able to execute a launch on time, one of the other partners could provide an alternative service, under a set of contractual conditions agreed between the participating companies. Such transfers would be made at the customer's discretion. The LSA offered this service for the Ariane 5 and H-IIA expendable launch systems; it previously offered the Zenit-3SL as well.
History
Following the end of the Cold War and the institution of the peace dividend, the aerospace industry went through a period of consolidation, mergers and partnerships. More specifically, a glut in affordable space launches during the early 2000s placed incumbent providers under pressure to respond. During 2001, the Japanese conglomerate Mitsubishi Heavy Industries and American aerospace firm Boeing Launch Services announced the formation of a strategic alliance to cooperate on various space-related opportunities; specifically, this alliance applied to space-based communications, air traffic management, multimedia, navigation, space and communications services, launch services and space infrastructure markets.
During July 2003, the Launch Services Alliance (LSA) was originally formed, then consisting of the multinational aerospace company Arianespace, the Japanese conglomerate Mitsubishi Heavy Industries and the American aerospace firm Boeing Launch Services, which provided Sea Launch Zenit-3SL launch services at that time. The decision to switch between launchers is made by the end customer. Despite the formation of LSA, the three partner companies retained autonomy over their own operations and continued to independently market their respective commercial satellite launch capabilities. For Arianespace, its involvement in LSA represented a further diversification of its launch services.
During October 2003, LSA services were used for the first time when Arianespace transferred the DirecTV-7S satellite, which had been delayed in manufacturing, to a Zenit-3SL launch on 4 May 2004. During May 2004, the first contract for LSA services was signed for the Optus D1 satellite; Ariane 5 was assigned as the primary launch vehicle while the Zenit-3SL served as backup. During 2005, LSA announced that the organisation had signed its fifth contract.
During April 2007, the LSA was reformed by Arianespace and Mitsubishi Heavy Industries; the main change being the withdrawal of Boeing from any involvement in the venture. In fiscal year 2007, responsibility for both production and management of the H-IIA launch system was transferred to Mitsubishi Heavy Industries, and the partnership with Arianespace was expected to help the company enter the commercial launch market.
References
Commercial spaceflight
Mitsubishi Heavy Industries
Space organizations | Launch Services Alliance | [
"Astronomy"
] | 566 | [
"Astronomy organizations",
"Space organizations"
] |
4,182,759 | https://en.wikipedia.org/wiki/High-Speed%20Serial%20Interface | The High-Speed Serial Interface (HSSI) is a differential ECL serial interface standard developed by Cisco Systems and T3plus Networking primarily for use in WAN router connections. It is capable of speeds up to 52 Mbit/s with cables up to in length.
While HSSI uses a 50-pin connector physically similar to that used by SCSI-2, it requires a cable with an impedance of 110 Ω (as opposed to the 75 Ω of a SCSI-2 cable).
The physical layer of the standard is defined by EIA-613 and the electrical layer by EIA-612.
It is supported by the Linux kernel since version 3.4-rc2.
References
External links
What is HSSI?
HSSI Description
Serial buses | High-Speed Serial Interface | [
"Technology"
] | 158 | [
"Computing stubs",
"Computer hardware stubs"
] |
4,183,321 | https://en.wikipedia.org/wiki/NGC%204565 | NGC 4565 (also known as the Needle Galaxy or Caldwell 38) is an edge-on spiral galaxy about 30 to 50 million light-years away in the constellation Coma Berenices. It lies close to the North Galactic Pole and has a visual magnitude of approximately 10. It is known as the Needle Galaxy for its narrow profile. First recorded in 1785 by William Herschel, it is a prominent example of an edge-on spiral galaxy.
Characteristics
NGC 4565 is a giant spiral galaxy more luminous than the Andromeda Galaxy. Much speculation exists in the literature as to the nature of the central bulge. In the absence of clear-cut dynamical data on the motions of stars in the bulge, the photometric data alone cannot adjudicate among the various options put forth. However, its exponential shape suggested that it is a barred spiral galaxy. Studies with the help of the Spitzer Space Telescope not only confirmed the presence of a central bar but also showed a pseudobulge within it as well as an inner ring.
NGC 4565 has at least two satellite galaxies, one of which is interacting with it. It has a population of roughly 240 globular clusters, more than the Milky Way.
NGC 4565 is one of the brightest member galaxies of the Coma I Group.
This edge-on galaxy exhibits a slightly warped and extended disk under deep optical surveys, likely due to ongoing interactions with neighboring satellite galaxies or other galaxies in the Coma I group. GALEX images show the slight warp at the edge of the disc more clearly than other surveys.
Using the LOw-Frequency ARray (LOFAR), astronomers of the University of Hamburg discovered a diffuse radio halo around NGC 4565. During the observations, a warp was detected in the radio continuum of NGC 4565 that is reminiscent of a neutral hydrogen line (HI) warp, along with a slight flaring of the galaxy's radio halo. It is assumed that this flaring is caused by the warp, as the vertical intensity profiles are asymmetric, which is in agreement with the warp. According to the study, a minimum age for the warp was estimated at approximately 130 million years. This is the spectral age of the galaxy's cosmic ray electrons, during which they are transported into the warp. This indicates that NGC 4565 may be in the aftermath of a period with more intense star formation.
References
External links
National Optical Astronomical Observatory – NGC 4565
APOD (2024-06-06) - NGC 4565: Galaxy on Edge
APOD (2010-03-04) – NGC 4565: Galaxy on Edge
APOD (2009-04-28) – NGC 4565
SEDS – NGC 4565
Unbarred spiral galaxies
Coma Berenices
4565
07772
42038
038b
Astronomical objects discovered in 1785
Discoveries by William Herschel
Coma I Group | NGC 4565 | [
"Astronomy"
] | 571 | [
"Coma Berenices",
"Constellations"
] |
4,183,464 | https://en.wikipedia.org/wiki/Hexamethylene%20diisocyanate | Hexamethylene diisocyanate (HDI) is the organic compound with the formula (CH2)6(NCO)2. It is classified as an diisocyanate. It is a colorless liquid. It has sometimes been called HMDI but this not usually done to avoid confusion with Hydrogenated MDI.
Synthesis
Compared to other commercial diisocyanates, HDI is produced in relatively small quantities, accounting for (with isophorone diisocyanate) only 3.4% of the global diisocyanate market in the year 2000. It is produced by phosgenation of hexamethylene diamine.
Applications
Aliphatic diisocyanates are used in specialty applications, such as enamel coatings which are resistant to abrasion and degradation by ultraviolet light. These properties are particularly desirable in, for instance, the exterior paint applied to aircraft and vessels. HDI is also sold oligomerized as the trimer or biuret, which are used in automotive refinish coatings. Although these forms are more viscous, oligomerization reduces the volatility and toxicity. At least 3 companies sell material in this form commercially. It is also used as an activator in the process of in situ polymerization of caprolactam, i.e. the cast nylon process. HDI is also used in bisoxazolidine synthesis, as the hydroxyl group on the molecule allows for further reaction with hexamethylene diisocyanate.
Toxicity
HDI is considered toxic, and its pulmonary toxicity has been studied as well as its oligomers.
See also
Isophorone diisocyanate
Methylene diphenyl diisocyanate
Toluene diisocyanate
References
External links
NIOSH Safety and Health Topic: Isocyanates, from the website of the National Institute for Occupational Safety and Health (NIOSH)
NIOSH Pocket Guide to Chemical Hazards - Hexamethylene diisocyanate
Isocyanates
Monomers | Hexamethylene diisocyanate | [
"Chemistry",
"Materials_science"
] | 422 | [
"Isocyanates",
"Monomers",
"Functional groups",
"Polymer chemistry"
] |
4,183,474 | https://en.wikipedia.org/wiki/Isophorone%20diisocyanate | Isophorone diisocyanate (IPDI) is an organic compound in the class known as isocyanates. More specifically, it is an aliphatic diisocyanate. It is produced in relatively small quantities, accounting for (with hexamethylene diisocyanate) only 3.4% of the global diisocyanate market in the year 2000. Aliphatic diisocyanates are used, not in the production of polyurethane foam, but in special applications, such as enamel coatings which are resistant to abrasion and degradation from ultraviolet light. These properties are particularly desirable in, for instance, the exterior paint applied to aircraft.
Properties
Isophorone diisocyanate (IPDI) stands out as a cycloaliphatic diisocyanate distinguished by its two reactive isocyanate groups, exhibiting differences in reactivity between primary and secondary NCO groups. This unique property ensures high selectivity in reacting with hydroxyl-bearing compounds.
This distinctive attribute proves advantageous in processing low-viscosity prepolymers, resulting in a notably reduced residual content of monomeric diisocyanate. Furthermore, the low viscosity of IPDI-based prepolymers facilitates a decrease in solvent usage. The presence of methyl groups linked to the cyclohexane ring broadens IPDI's compatibility with resins and solvents.
The inherent cycloaliphatic ring confers heightened rigidity and a notably elevated glass transition temperature to IPDI-based products. IPDI itself is a transparent, slightly yellowish, low-viscosity liquid with a solidification point at -60 °C and boiling point at 158 °C. Semi-finished products like NCO-terminated prepolymers exhibit a low tendency to crystallize, remaining in a liquid state and facilitating easy processing.
Synthesis
Isophorone diisocyanate is produced by phosgenation of isophorone diamine in a five-step reaction sequence:
Reaction of acetone with a catalyst to form isophorone.
Isophorone reacts with HCN to form isophorone nitrile.
Isophorone nitrile reacts with ammonia and hydrogen under the influence of a catalyst. This reaction creates a mixture of isophorone diamine isomers (25% cis, 75% trans).
Reaction of isophorone diamine with phosgene to form isophorone diisocyanate.
Purification of product by distillation.
Chemistry
IPDI exists in two stereoisomers, cis and trans. Their reactivities are similar. Each stereoisomer is an unsymmetrical molecule, and thus has isocyanate groups with different reactivities. The primary isocyanate group is more reactive than the secondary isocyanate group.
Application
Isophorone diisocyanate is used in special applications:
Hard foams and coatings
Polyurethanes resins (PUR)
Leather and textile
Adhesives for battery
Elastomers and TPU
PUR fibers and laminates
Adhesives and glues
Light-stable PUR
Aqueous dispersible polymers
Safety
Isophorone diisocyanate is a highly toxic substance if inhaled. It can cause eye irritation and irreversible eye damage, as well as lung and respiratory damage. It is a skin irritant, causes allergic reactions and may cause skin corrosion on prolonged contact. It is highly hazardous to the aquatic environment.
H-statements: H315, H317, H319, H331, H334, H335, H411
P-statements: P260, P273, P280, P305+P351+P338, P308+P313
See also
Hexamethylene diisocyanate
Methylene diphenyl diisocyanate
Tetramethylxylene diisocyanate
Toluene diisocyanate
References
External links
NIOSH Safety and Health Topic: Isocyanates, from the website of the National Institute for Occupational Safety and Health (NIOSH)
Isophorone diisocyanate - NIOSH Pocket Guide to Chemical Hazards
Isocyanates | Isophorone diisocyanate | [
"Chemistry"
] | 891 | [
"Isocyanates",
"Functional groups"
] |
4,183,809 | https://en.wikipedia.org/wiki/Eimac | Eimac is a trade mark of Eimac Products, part of the Microwave Power Products Division of Communications & Power Industries. It produces power vacuum tubes for radio frequency applications such as broadcast and radar transmitters. The company name is an initialism from the names of the founders, William Eitel and Jack McCullough.
History
The San Francisco Bay area was one of the early centers of amateur radio activity and experimentation, containing about 10% of the total operators in the US. Amateur radio enthusiasts sought vacuum tubes that would perform at higher power and on higher frequencies than those then available from RCA, Western Electric, General Electric, and Westinghouse. Additionally, they required tubes that would operate with the limited voltages available from typical amateur power supplies.
While employed by the small San Francisco, California manufacturing firm of Heintz & Kaufman which manufactured custom radio equipment, Bill Eitel (amateur radio call sign W6UF) and Jack McCullough (W6CHE) convinced company president Ralph Heintz (W6XBB) to allow them to develop a transmitting tube that could operate at lower voltages than those then available to the amateur radio market, such as the RCA UV-204A or the 852. Their effort was a success and resulted in production of the HK-354. Shortly after in 1934, Eitel and McCullough left H&K to form Eitel McCullough Corp. in San Bruno California.
The first product produced under the trade mark "Eimac" was the 150T power triode. Later tubes include the 3CX5000A7 power triode and the 4X150D tetrode. The new company thrived during World War II by selling tubes to the U.S. military for use in radar equipment. Charles Litton Sr. originated glass lathe techniques which made mass production of reliable high quality power tubes possible, and resulted in the award of wartime contracts to the company.
Mass production
Contracts to provide transmission tubes for radar and other radio equipment during World War II required adaption of mass production, research to improve the reliability of tubes, and development of standardized manufacturing techniques which could be performed by unskilled workers. The workforce expanded from a few hundred to several thousand. During the war Eimac produced hundreds of thousands of radar tubes.
Welfare capitalism
A union organizing drive in 1939–40 by the strong Bay area labor movement was fought off by adoption of a strategy of welfare capitalism which included pensions and other generous benefits, profit sharing, and such extras as a medical clinic and a cafeteria. An atmosphere of cooperation and collaboration was established.
Postwar
As wartime orders ceased and a large supply of military surplus transmission tubes flooded the market the firm laid off 90% of its workers and closed its plant in Salt Lake City. Reallocation of the FM band by the FCC in 1945, however, provided an opportunity for the firm to market a superior power tetrode tube which it had developed.
Beginning in 1947, Eimac operated FM radio station KSBR from their plant in San Bruno, California, one of only two FM stations in the United States to test the new Rangertone tape recorders (adapted from the German Magnetophon recorders). In need of more space, the company moved to San Carlos in 1959. Eimac's San Carlos plant was dedicated on April 16, 1959. By that time, the company had the following subsidiaries: National Electronics, Inc., Geneva, Illinois, and Eitel-McCullough, S.A., Geneva, Switzerland. During the Cold war era, Eimac supplied U.S. military with klystron power tubes and electron power tubes used in the defense communications network, navigation, detection, ranging and fire-control radars.
In the beginning of May 1959, the company announced that its newly produced giant klystron tube powered the Massachusetts Institute of Technology's radar, which had recently established contact with the planet Venus. The super-power klystron was developed under Rome Air Development Center sponsorship. Eimac klystrons also were chosen for NATO's tropospheric scatter communications network.
In 1965, Eimac merged with Varian Associates and became known as the Eimac Division. In August 1995, Varian Associates sold the Electron Device Business to Leonard Green & Partners, a private equity fund, and members of management. Together, they formed Communications & Power Industries.
In January 2004, affiliates of The Cypress Group, a private equity fund, acquired CPI.
In February 2011, an affiliate of Veritas Capital, a private equity investment firm acquired CPI.
In 2006 CPI relocated the Eimac facility from 301 Industrial Road, San Carlos to their operation in Palo Alto.
References
Eimac building in San Carlos: https://ethw.org/File:Eitel_Mccullough.jpg
External links
Corporate Web site
Electronics companies established in 1934
Vacuum tubes
San Bruno, California
1934 establishments in California | Eimac | [
"Physics"
] | 1,020 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
4,184,055 | https://en.wikipedia.org/wiki/F-ratio%20%28oceanography%29 | In oceanic biogeochemistry, the f-ratio is the fraction of total primary production fuelled by nitrate (as opposed to that fuelled by other nitrogen compounds such as ammonium). The ratio was originally defined by Richard Eppley and Bruce Peterson in one of the first papers estimating global oceanic production. This fraction was originally believed significant because it appeared to directly relate to the sinking (export) flux of organic marine snow from the surface ocean by the biological pump. However, this interpretation relied on the assumption of a strong depth-partitioning of a parallel process, nitrification, that more recent measurements has questioned.
Overview
Gravitational sinking of organisms (or the remains of organisms) transfers particulate organic carbon from the surface waters of the ocean to its deep interior. This process is known as the biological pump, and quantifying it is of interest to scientists because it is an important aspect of the Earth's carbon cycle. Essentially, this is because carbon transported to the deep ocean is isolated from the atmosphere, allowing the ocean to act as a reservoir of carbon. This biological mechanism is accompanied by a physico-chemical mechanism known as the solubility pump which also acts to transfer carbon to the ocean's deep interior.
Measuring the flux of sinking material (so-called marine snow) is usually done by deploying sediment traps which intercept and store material as it sinks down the water column. However, this is a relatively difficult process, since traps can be awkward to deploy or recover, and they must be left in situ over a long period to integrate the sinking flux. Furthermore, they are known to experience biases and to integrate horizontal as well as vertical fluxes because of water currents. For this reason, scientists are interested in ocean properties that can be more easily measured, and that act as a proxy for the sinking flux. The f-ratio is one such proxy.
"New" and "regenerated" production
Bio-available nitrogen occurs in the ocean in several forms, including simple ionic forms such as nitrate (NO3−), nitrite (NO2−) and ammonium (NH4+), and more complex organic forms such as urea ((NH2)2CO). These forms are used by autotrophic phytoplankton to synthesise organic molecules such as amino acids (the building blocks of proteins). Grazing of phytoplankton by zooplankton and larger organisms transfers this organic nitrogen up the food chain and throughout the marine food-web.
When nitrogenous organic molecules are ultimately metabolised by organisms, they are returned to the water column as ammonium (or more complex molecules that are then metabolised to ammonium). This is known as regeneration, since the ammonium can be used by phytoplankton, and again enter the food-web. Primary production fuelled by ammonium in this way is thus referred to as regenerated production.
However, ammonium can also be oxidised to nitrate (via nitrite), by the process of nitrification. This is performed by different bacteria in two stages :
NH3 + O2 → NO2− + 3H+ + 2e−
NO2− + H2O → NO3− + 2H+ + 2e−
Crucially, this process is believed to only occur in the absence of light (or as some other function of depth). In the ocean, this leads to a vertical separation of nitrification from primary production, and confines it to the aphotic zone. This leads to the situation whereby any nitrate used in the sunlit surface ocean must have been supplied from the aphotic zone, and must have originated from organic material transported there by sinking. Primary production fuelled by nitrate is, therefore, making use of a "fresh" nutrient source rather than a regenerated one. Production by nitrate is thus referred to as new production.
The figure at the head of this section illustrates this. Nitrate and ammonium are taken up by primary producers, processed through the food-web, and then regenerated as ammonium. Some of this return flux is released into the surface ocean (where it is available again for uptake), while some is returned at depth. The ammonium returned at depth is nitrified to nitrate, and ultimately mixed or upwelled into the surface ocean to repeat the cycle.
Consequently, the significance of new production lies in its connection to sinking material. At equilibrium, the export flux of organic material sinking into the aphotic zone is balanced by the upward flux of nitrate. By measuring how much nitrate is consumed by primary production, relative to that of regenerated ammonium, one should be able to estimate the export flux indirectly.
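In practical terms, the f-ratio is usually written as the nitrate-fuelled fraction of total measured nitrogen uptake. A common formulation, sketched here for illustration (the uptake-rate symbols are generic, and the regenerated term sometimes also includes urea and other reduced nitrogen forms), is:

```latex
f \;=\; \frac{P_{\text{new}}}{P_{\text{new}} + P_{\text{reg}}}
  \;\approx\; \frac{\rho_{\mathrm{NO_3^-}}}{\rho_{\mathrm{NO_3^-}} + \rho_{\mathrm{NH_4^+}}}
```

where each ρ is the measured uptake rate of that nitrogen source, so that f approaches 1 in nitrate-dominated (new-production) regimes and 0 in ammonium-dominated (regenerated-production) regimes.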
As an aside, the f-ratio can also reveal important aspects of local ecosystem function. High f-ratio values are typically associated with productive ecosystems dominated by large, eukaryotic phytoplankton (such as diatoms) that are grazed by large zooplankton (and, in turn, by larger organisms such as fish). By contrast, low f-ratio values are generally associated with low biomass, oligotrophic food webs consisting of small, prokaryotic phytoplankton (such as Prochlorococcus) which are kept in check by microzooplankton.
Assumptions
A fundamental assumption in this interpretation of the f-ratio is the spatial separation of primary production and nitrification. Indeed, in their original paper, Eppley & Peterson noted that: "To relate new production to export requires that nitrification in the euphotic zone be negligible." However, subsequent observational work on the distribution of nitrification has found that nitrification can occur at shallower depths, and even within the photic zone.
As the adjacent diagram shows, if ammonium is indeed nitrified to nitrate in the ocean's surface waters it essentially "short circuits" the deep pathway of nitrate. In practice, this would lead to an overestimation of new production and a higher f-ratio, since some of the ostensibly new production would actually be fuelled by recently nitrified nitrate that had never left the surface ocean. After including nitrification measurements in its parameterisation, an ecosystem model of the oligotrophic subtropical gyre region (specifically the BATS site) found that, on an annual basis, around 40% of surface nitrate was recently nitrified (rising to almost 90% during summer). A further study synthesising geographically diverse nitrification measurements found high variability but no relationship with depth, and applied this in a global-scale model to estimate that up to a half of surface nitrate is supplied by surface nitrification rather than upwelling.
Although measurements of the rate of nitrification are still relatively rare, they do suggest that the f-ratio is not as straightforward a proxy for the biological pump as was once thought. For this reason, some workers have proposed distinguishing between the f-ratio and the ratio of particulate export to primary production, which they term the pe-ratio. While quantitatively different from the f-ratio, the pe-ratio shows similar qualitative variation between high productivity/high biomass/high export regimes and low productivity/low biomass/low export regimes.
In addition, a further process that potentially complicates the use of the f-ratio to estimate "new" and "regenerated" production is dissimilatory nitrate reduction to ammonium (DNRA). In low oxygen environments, such as oxygen minimum zones and seafloor sediments, chemoorganoheterotrophic microbes use nitrate as an electron acceptor for respiration, reducing it to nitrite, then to ammonium. Since, like nitrification, DNRA alters the balance in the availability of nitrate and ammonium, it has the potential to introduce inaccuracy to the calculated f-ratio. However, as DNRA's occurrence is limited to anaerobic situations, its importance is less widespread than nitrification, although it can occur in association with primary producers.
See also
Marine snow
Biological pump
References
Aquatic ecology
Biological oceanography
Chemical oceanography
Nitrates
Systems ecology
Biogeochemistry | F-ratio (oceanography) | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,703 | [
"Systems ecology",
"Environmental chemistry",
"Oxidizing agents",
"Nitrates",
"Chemical oceanography",
"Salts",
"Biogeochemistry",
"Ecosystems",
"Aquatic ecology",
"Environmental social science"
] |
4,184,326 | https://en.wikipedia.org/wiki/Compensation%20law%20of%20mortality | The compensation law of mortality (or late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species decrease with age, because the higher initial death rates in disadvantaged populations are compensated by lower pace of mortality increase with age. The age at which this imaginary (extrapolated) convergence of mortality trajectories takes place is named the "species-specific life span" (see Gavrilov and Gavrilova, 1979). For human beings, this human species-specific life span is close to 95 years (Gavrilov and Gavrilova, 1979; 1991).
Compensation law of mortality is a paradoxical empirical observation, and it represents a challenge for methods of survival analysis based on proportionality assumption (proportional hazard models). The compensation law of mortality also represents a great challenge for many theories of aging and mortality, which usually fail to explain this phenomenon. On the other hand, the compensation law follows directly from reliability theory, when the compared systems have different initial levels of redundancy.
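The convergence can be illustrated with Gompertz-type hazards, μ(x) = R·exp(αx), in which the compensation effect corresponds to the constraint ln R = ln M − αT for a common species-specific life span T and mortality level M. The following is a purely illustrative sketch under that assumption, not an analysis of any of the cited data; the parameter values are placeholders.

```python
import math

# Illustrative sketch of the compensation law with Gompertz hazards:
# mu(x) = R * exp(alpha * x).  If each population's parameters satisfy
# ln(R) = ln(M) - alpha * T, all log-mortality lines cross at the
# species-specific life span T.  T and M below are placeholder values.
T, M = 95.0, 0.5

def gompertz_hazard(age, alpha):
    R = M * math.exp(-alpha * T)      # compensation constraint fixes R from alpha
    return R * math.exp(alpha * age)

# A "disadvantaged" population (higher initial mortality, shallower slope)
# versus an "advantaged" one (lower initial mortality, steeper slope).
for age in (30, 50, 70, 90):
    high = gompertz_hazard(age, alpha=0.08)   # disadvantaged
    low = gompertz_hazard(age, alpha=0.11)    # advantaged
    print(age, round(high / low, 2))          # relative difference shrinks with age
```

In this toy example the ratio of death rates falls from about 7 at age 30 to about 1.2 at age 90, mirroring the convergence of mortality trajectories described above.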
See also
Ageing
Biodemography of human longevity
Biogerontology
Demography
Mortality
Reliability theory of aging and longevity
References
Gavrilov LA, Gavrilova NS. "Reliability Theory of Aging and Longevity." In: Masoro E.J. & Austad S.N.. (eds.): Handbook of the Biology of Aging, Sixth Edition. Academic Press. San Diego, CA, USA, 2006, 3-42.
Gavrilov LA, Gavrilova NS. Why We Fall Apart. Engineering's Reliability Theory Explains Human Aging. IEEE Spectrum, 2004, 41(9): 30–35.
Gavrilov L.A., Gavrilova N.S. "The quest for a general theory of aging and longevity". Science's SAGE KE (Science of Aging Knowledge Environment) for 16 July 2003; Vol. 2003, No. 28, 1–10. https://www.science.org/loi/sageke,
Gavrilov L.A., Gavrilova N.S. The reliability theory of aging and longevity. Journal of Theoretical Biology, 2001, 213(4): 527–545.
Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher,
Gavrilov, L.A., Gavrilova, N.S. "Determination of species length of life". Doklady Akademii Nauk SSSR, 1979, 246(2): 465–469. English translation by Plenum Publ Corp: pp. 905–908.
Gavrilov, L.A. "A mathematical model of the aging of animals". Doklady Akademii Nauk SSSR, 1978, 238(2): 490–492. English translation by Plenum Publ Corp: pp. 53–55.
Gavrilov, L.A., Gavrilova, N.S., Yaguzhinsky, L.S. "The main regularities of animal aging and death viewed in terms of reliability theory". J. General Biology [Zhurnal Obschey Biologii], 1978, 39(5): 734–742.
Population
Senescence | Compensation law of mortality | [
"Chemistry",
"Biology"
] | 715 | [
"Senescence",
"Metabolism",
"Cellular processes"
] |
4,184,621 | https://en.wikipedia.org/wiki/Chetaev%20instability%20theorem | The Chetaev instability theorem for dynamical systems states that if there exists, for the system with an equilibrium point at the origin, a continuously differentiable function V(x) such that
the origin is a boundary point of the set ;
there exists a neighborhood of the origin such that for all
then the origin is an unstable equilibrium point of the system.
This theorem is somewhat less restrictive than the Lyapunov instability theorems, since a complete sphere (circle) around the origin for which and both are of the same sign does not have to be produced.
It is named after Nicolai Gurevich Chetaev.
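A minimal worked example (a standard textbook-style illustration, not drawn from the references below): consider the scalar system and test function

```latex
\dot{x} = x^{3}, \qquad V(x) = x .
```

Here V(0) = 0, the origin is a boundary point of the set {x : V(x) > 0} = {x > 0}, and on that set dV/dt = x³ > 0 in any neighborhood of the origin. The theorem therefore gives instability of the origin, consistent with the fact that solutions starting at any x(0) > 0 move away from it.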
Applications
The Chetaev instability theorem has been used to analyze the unfolding dynamics of proteins under the effect of optical tweezers.
See also
Lyapunov function — a function whose existence guarantees stability
References
Further reading
Theorems in dynamical systems
Stability theory | Chetaev instability theorem | [
"Mathematics"
] | 179 | [
"Theorems in dynamical systems",
"Stability theory",
"Mathematical problems",
"Mathematical theorems",
"Dynamical systems"
] |
4,184,791 | https://en.wikipedia.org/wiki/10-foot%20user%20interface | In computing, 10-foot user interface, 10-foot UI or 3-meter user interface is a graphical user interface designed for televisions. Compared to desktop computer and smartphone user interfaces, it uses text and other interface elements that are much larger in order to accommodate a typical television viewing distance of . In reality, this distance varies greatly between households. Additionally, the limitations of a television's remote control necessitate extra user experience considerations to minimize user effort.
In the past, these types of human interaction design (HID) interfaces were driven by remote controls primarily using infrared (IR) code signals, which are increasingly replaced by two-way radio-frequency protocol standards such as Bluetooth, while IR is retained for certain wake-up situations. Voice interfaces are now also used to provide a near-field experience in addition to the far-field experience of devices such as smart speakers. A voice-input 10-foot user interface usually requires a device such as a smart speaker, over-the-top (OTT) TV box or smart television with Internet connectivity, supported by an advanced software operating system.
Design
The term "10-foot" or "3-meter" is used to differentiate this user interface style from those used on desktop computers, which typically assume the user's eyes are only about two feet (24 inches, 60 cm) from the display. This difference in distance from the display has a huge impact on the interface design, requiring the use of extra large fonts on a television and allowing relatively few items to be shown on a television at once. The name "10-foot user interface" is criticised for indicating a distance that is more symbolic than objective. In fact, a 1920 x 1080 pixel resolution UI has a size in space that varies with the size of the TV set. This is why, in television, distance is expressed in picture heights (H) and not in metres (or feet). Furthermore, this 10-foot distance does not correspond to the optimal viewing distance or the Lechner distance (3.2 H for 1080 HD resolution and 1.6 H for 4K UHD resolution). Nor does it represent the actual distance at which televisions are used. The actual distance is greater than 10 ft in half of all households, but above all it varies greatly between households. This is why, when designing an interface for the TV set, it’s important to position oneself at various distances to see what different parts of the population will actually see.
A 10-foot UI is almost always designed to be operated by a simple hand-held remote control. Rather than the mouse or touchscreen which are commonly used with other types of user interfaces, the remote's directional pad is the primary means of navigation. This means that a 10-foot UI needs to arrange items on screen in a way that clearly shows which item would be next in each of the four directions of the directional pad – usually a grid layout. Also, without a mouse cursor, the currently-selected item must be highlighted in some way.
Ten-foot interfaces may resemble other post-WIMP systems graphically, but do not assume the use of a touch screen.
The goal of 10-foot user interface design is normally to make the user's interaction as simple and efficient as possible, trying to achieve a more laid-back and relaxed user experience with as few button presses as possible while still having an intuitive layout, in terms of accomplishing user goals—what is often called user-centered design. Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability; however, the design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
An additional feature of 10-foot user interface design is to repurpose the on-screen display (OSD) to provide clear menu-driven interaction for users. This complements the navigation available on most handheld remote controls. The growing use of voice-based input (as found in some remote controls and smart speakers) also provides a direct control interface that enhances the user experience.
See also
Human–computer interaction
Icon design
Smart TV
User experience design
References
External links
Graphical user interfaces
Human–computer interaction
Interactive television
Multimodal interaction
Television terminology | 10-foot user interface | [
"Engineering"
] | 901 | [
"Human–computer interaction",
"Human–machine interaction"
] |
4,184,875 | https://en.wikipedia.org/wiki/Pristane | Pristane is a natural saturated terpenoid alkane obtained primarily from shark liver oil, from which its name is derived (Latin pristis, "shark"). It is also found in the stomach oil of birds in the order Procellariiformes and in mineral oil and some foods. Pristane and phytane are used in the fields of geology and environmental science as biomarkers to characterize origins and evolution of petroleum hydrocarbons and coal.
It is a transparent oily liquid that is immiscible with water, but soluble in diethyl ether, benzene, chloroform and carbon tetrachloride.
Pristane is known to induce autoimmune diseases in rodents. It is used in research to understand the pathogenesis of rheumatoid arthritis and lupus.
It is used as a lubricant, a transformer oil, an immunologic adjuvant, an anti-corrosion agent, a biological marker and an inducer of plasmacytomas, and it is used in the production of monoclonal antibodies.
Biosynthetically, pristane is derived from phytol and is used as a biomarker in petroleum studies. Tocopherols represent an alternate sedimentary source of pristane in sediments and petroleum.
Toxicity of pristane is alleviated by aconitine.
References
Alkanes
Diterpenes | Pristane | [
"Chemistry"
] | 292 | [
"Organic compounds",
"Alkanes"
] |
4,185,154 | https://en.wikipedia.org/wiki/Mixed-species%20foraging%20flock | A mixed-species feeding flock, also termed a mixed-species foraging flock, mixed hunting party or informally bird wave, is a flock of usually insectivorous birds of different species that join each other and move together while foraging. These are different from feeding aggregations, which are congregations of several species of bird at areas of high food availability.
While it is currently unknown how mixed-species foraging flocks originate, researchers have proposed a few mechanisms for their initiation. Many believe that nuclear species play a vital role in mixed-species flock initiation. Additionally, the forest structure is hypothesized to play a vital role in these flocks' formation. In Sri Lanka, for example, vocal mimicry by the greater racket-tailed drongo might have a key role in the initiation of mixed-species foraging flocks, while in parts of the American tropics packs of foraging golden-crowned warblers might play the same role.
Composition
Mixed-species foraging flocks tend to form around a "nuclear" species. Researchers believe nuclear species both stimulate the formation of a mixed-species flock and maintain the cohesion between bird species. They tend to have a disproportionately large influence on the flock. Nuclear species have a few universal qualities. Typically, they are both generalists that employ a gleaning foraging strategy and intraspecifically social birds. "Associate" or "attendant" species are birds that trail the flock only after it has entered their territory. Researchers have shown that these species tend to have a higher fitness following mixed-species foraging flocks. The third class of birds found in mixed-species flocks have been termed "sentinel" species. Unlike nuclear species, sentinels are fly-catching birds that are rarely gregarious. Their role is to alert the other birds in the mixed-species flock to the arrival of potential predators.
Benefits
Ecologists generally assume that species in the same ecological niche compete for resources. The formation of mixed-species flocks demonstrates a possible exception to this universal ecological assumption. Instead of competing with one another for limited resources, some bird species who share the same food source can co-exist in mixed-species flocks. In fact, the more similar body size, taxonomy, and foraging style two bird species are, the more likely they are to be found cooperating in mixed-species flocks. Researchers have proposed two primary evolutionary mechanisms to explain the formation of mixed-species flocks. The first mechanistic explanation is that these different bird species cooperate to gain access to more food. Studies have shown that birds in mixed-species flocks are more likely to spot potential food sources, avoid already exploited locations, and drive insects out of hiding. The second mechanistic explanation is that birds join mixed-species flocks to avoid predation. A bird reduces its risk of being eaten when it is surrounded by other birds who can be potential food for the predator instead. Other studies have hypothesized that multi-species flocks form because large groups reduce a predator's ability to single out one prey, while others have hypothesized that multi-species flocks are more likely to spot predators.
Costs
Mixed-species feeding flocks are not purely beneficial for their member species. Some bird species suffer a higher cost when joining mixed-species flocks. Studies have shown that some bird species will leave their standard optimal feeding area to travel to a worse foraging location in order to follow the path of a mixed-species flock. Birds may also be forced to change their foraging strategy in order to conform with the flock. Another third proposed cost of mixed-species flocks is an increased risk of kleptoparasitism.
In the Holarctic
In the North Temperate Zone, they are typically led by Paridae (tits and chickadees), often joined by nuthatches, treecreepers, woodpeckers (such as the downy woodpecker and lesser spotted woodpecker), kinglets, and in North America Parulidae (New World "warblers") – all insect-eating birds. This behavior is particularly common outside the breeding season.
The advantages of this behavior are not certain, but evidence suggests that it confers some safety from predators, especially for the less watchful birds such as vireos and woodpeckers, and also improves feeding efficiency, perhaps because arthropod prey that flee one bird may be caught by another.
In the Neotropics
Insectivorous feeding flocks reach their fullest development in tropical forests, where they are a typical feature of bird life. In the Neotropics the leaders or "core" members may be black-throated shrike-tanagers in southern Mexico, or three-striped warblers elsewhere in Central America. In South America, core species may include antbirds such as Thamnomanes, antshrikes, Furnariidae (ovenbirds and woodcreepers) like the buff-fronted foliage-gleaner or the olivaceous woodcreeper, or Parulidae (New World "warblers") like the golden-crowned warblers. In open cerrado habitat, it may be white-rumped or white-banded tanagers. Core species often have striking plumage and calls that attract other birds; they are often also known to be very active sentinels, providing warning of would-be predators.
But while such easy-to-locate bird species serve as a focal point for flock members, they do not necessarily initiate the flock. In one Neotropic mixed flock feeding on swarming termites, it was observed that buff-throated warbling finches were most conspicuous. As this species is not an aerial insectivore, it is unlikely to have actually initiated the flock rather than happening across it and joining in. And while Basileuterus species are initiators as well as core species, mixed flocks of Tangara species – in particular red-necked, brassy-breasted, and green-headed tanagers – often initiate formation of a larger and more diverse feeding flock, of which they are then only a less significant component.
Nine-primaried oscines make up much of almost every Neotropical mixed-species feeding flock. Namely, these birds are from families such as the cardinals, Parulidae (New World "warblers"), and in particular Passerellidae (American "sparrows") and Thraupidae (tanagers). Other members of a Neotropic mixed feeding flock may come from most of the local families of smaller diurnal insectivorous birds, and can also include woodpecker, toucans, and trogons. Most Furnariidae do not participate in mixed flocks, though there are exceptions such as Synallaxis spinetails and some species of the woodcreeper subfamily – e.g. those mentioned above or the lesser woodcreeper – are common or even "core" members. Among the tyrant flycatchers there are also some species joining mixed flocks on a somewhat regular basis, including the sepia-capped flycatcher, eared pygmy tyrant, white-throated spadebill, and Oustalet's tyrannulet.
However, even of commonly participating families not all species join mixed flocks. There are genera such as Vireo in which some species do not join mixed flocks, while others (e.g., the red-eyed vireo) will even do so in their winter quarters. Of the three subspecies groups of the yellow-rumped warbler, only one (Audubon's warbler) typically does. And while the importance of certain Thraupidae in initiating and keeping together mixed flocks has been mentioned already, for example the black-goggled tanager is an opportunistic feeder that will appear at but keep its distance from any disturbance—be it a mixed feeding flock, an army ant column or a group of monkeys – and pick off prey trying to flee.
Gnateaters are notable for their absence from these flocks, while swifts and swallows rarely join them, but will do so if there is, for example, an ant or termite swarm. Cotingidae (cotingas) are mainly opportunistic associates which rarely join flocks for long, if they do so at all; the same holds true for most Muscicapoidea (mockingbirds and relatives), though some thrushes may participate more often. And though most Tityridae rarely join mixed flocks, becards do so regularly. Tapaculos are rarely seen with mixed flocks, though the collared crescentchest, doubtfully assigned to that family, may be a regular member. Icteridae (grackles and relatives) are also not too often seen to take part in these assemblages, though caciques like the golden-winged or red-rumped cacique join mixed flocks on a somewhat more regular basis. Cuculiformes (cuckoos and allies) are usually absent from mixed feeding flocks, but some – for example, the squirrel cuckoo – can be encountered not infrequently.
Some species appear to prefer when certain others are present: Cyanolyca jays like to flock with unicolored jays and the emerald toucanets species complex. Many Icteridae associate only with related species, but the western subspecies of the yellow-backed oriole associates with jays and the band-backed wren.
Other species participate to varying extents depending on location or altitude – presumably, the different species composition of mixed flocks at varying locations allows these irregular members more or less opportunity to get food. Such species include the grey-hooded flycatcher, or the plain antvireo and the red-crowned ant tanager which are often recorded in lowland flocks but rarely join them at least in some more montane regions.
A typical Neotropic mixed feeding flock moves through the forest at about , with different species foraging in their preferred niches (on the ground, on trunks, in high or low foliage, etc.). Some species follow the flock all day, while others – such as the long-billed gnatwren – join it only as long as it crosses their own territories.
In the Old World tropics
The flocks in the Old World are often much more loosely bonded than in the Neotropics, many being only casual associations lasting the time the flock of core species spends in the attendants' territory. The more stable flocks are observed in tropical Asia, and especially Sri Lanka. Flocks there may number several hundred birds spending the entire day together, and an observer in the rain forest may see virtually no birds except when encountering a flock. For example, as a flock approaches in the Sinharaja Forest Reserve in Sri Lanka, the typical daytime quiet of the jungle is broken by the noisy calls of the orange-billed babbler and greater racket-tailed drongo, joined by species such as the ashy-headed laughingthrush, Kashmir flycatcher, and velvet-fronted nuthatch.
A mixed flock in the Cordillera Central of Luzon in the Philippines was mainly composed of bar-bellied cuckooshrikes, Philippine fairy-bluebirds, and violaceous crows. Luzon hornbills were also recorded as present. With the crows only joining later and the large hornbills probably only opportunistic attendants rather than core species, it is likely that this flock was started by one of the former species – probably the bold and vocal cuckoo-shrikes rather than the more retiring fairy-bluebirds, which are known to seek out such opportunities to forage.
African rainforests also hold mixed-species flocks, the core species including bulbuls and sunbirds, and attendants being as diverse as the red-billed dwarf hornbill and the tit-hylia, the smallest bird of Africa. Drongos and paradise-flycatchers are sometimes described as the sentinels of the flock, but they are also known to steal prey from other flock members. Acanthizidae are typical core members in New Guinea and Australia; in Australia, fairy-wrens are also significant. The core species are joined by birds of other families such as minivets.
Notes
References
External links
Zoology
Bird behavior
Ornithology
Birds | Mixed-species foraging flock | [
"Biology"
] | 2,563 | [
"Behavior by type of animal",
"Behavior",
"Animals",
"Zoology",
"Birds",
"Bird behavior"
] |
4,185,466 | https://en.wikipedia.org/wiki/Parrondo%27s%20paradox | Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is:
There exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately.
Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed. Winning strategies consisting of various combinations of losing strategies were explored in biology before Parrondo's paradox was published.
Illustrative examples
The simple example
Consider two games Game A and Game B, with the following rules:
In Game A, you lose $1 every time you play.
In Game B, you count how much money you have left — if it is an even number you win $3, otherwise you lose $5.
Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
However, consider playing the games alternatively, starting with Game B, followed by A, then by B, and so on (BABABA...). It should be easy to see that you will steadily earn a total of $2 for every two games.
Thus, even though each game is a losing proposition if played alone, because the results of Game B are affected by Game A, the sequence in which the games are played can affect how often Game B earns you money, and subsequently the result is different from the case where either game is played by itself.
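The bookkeeping can be checked with a few lines of code. The sketch below simply encodes the rules stated above (starting capital of $100, 100 rounds); the function name and structure are illustrative only.

```python
def play(sequence, capital=100, rounds=100):
    """Play the stated games in repeating order:
    'A' always loses $1; 'B' wins $3 if the capital is even, else loses $5."""
    for i in range(rounds):
        if sequence[i % len(sequence)] == "A":
            capital -= 1
        else:  # Game B
            capital += 3 if capital % 2 == 0 else -5
    return capital

print(play("A"))   # 0: Game A alone loses everything in 100 rounds
print(play("B"))   # 0: Game B alone also ends at $0 (win $3, lose $5, ...)
print(play("BA"))  # 200: alternating BABABA... gains $2 per pair of games
```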
The saw-tooth example
Consider an example in which there are two points A and B having the same altitude, as shown in Figure 1. In the first case, we have a flat profile connecting them. Here, if we leave some round marbles in the middle that move back and forth in a random fashion, they will roll around randomly but towards both ends with an equal probability. Now consider the second case where we have a saw-tooth-like profile between the two points. Here also, the marbles will roll towards either end depending on the local slope. Now if we tilt the whole profile towards the right, as shown in Figure 2, it is quite clear that both these cases will become biased towards B.
Now consider the game in which we alternate the two profiles while judiciously choosing the time between alternating from one profile to the other.
When we leave a few marbles on the first profile at point E, they distribute themselves on the plane showing preferential movements towards point B. However, if we apply the second profile when some of the marbles have crossed the point C, but none have crossed point D, we will end up having most marbles back at point E (where we started from initially) but some also in the valley towards point A given sufficient time for the marbles to roll to the valley. Then we again apply the first profile and repeat the steps (points C, D and E now shifted one step to refer to the final valley closest to A). If no marbles cross point C before the first marble crosses point D, we must apply the second profile shortly before the first marble crosses point D, to start over.
It easily follows that eventually we will have marbles at point A, but none at point B. Hence if we define having marbles at point A as a win and having marbles at point B as a loss, we clearly win by alternating (at correctly chosen times) between playing two losing games.
The coin-tossing example
A third example of Parrondo's paradox is drawn from the field of gambling. Consider playing two games, Game A and Game B with the following rules. For convenience, define $C_t$ to be our capital at time t, immediately before we play a game.
Winning a game earns us $1 and losing requires us to surrender $1. It follows that $C_{t+1} = C_t + 1$ if we win at step t and $C_{t+1} = C_t - 1$ if we lose at step t.
In Game A, we toss a biased coin, Coin 1, with probability of winning $1/2 - \epsilon$, where $\epsilon$ is some small positive constant. This is clearly a losing game in the long run.
In Game B, we first determine if our capital is a multiple of some integer $M$. If it is, we toss a biased coin, Coin 2, with probability of winning $1/10 - \epsilon$. If it is not, we toss another biased coin, Coin 3, with probability of winning $3/4 - \epsilon$. The role of the modulo $M$ provides the periodicity as in the ratchet teeth.
It is clear that by playing Game A, we will almost surely lose in the long run. Harmer and Abbott show via simulation that if $M = 3$ and $\epsilon = 0.005$, Game B is an almost surely losing game as well. In fact, Game B is a Markov chain, and an analysis of its state transition matrix (again with M=3) shows that the steady state probability of using coin 2 is 0.3836, and that of using coin 3 is 0.6164. As coin 2 is selected nearly 40% of the time, it has a disproportionate influence on the payoff from Game B, and results in it being a losing game.
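The steady-state figures just quoted can be reproduced directly from the transition matrix of Game B. In the sketch below, the coin probabilities 1/10 − ε and 3/4 − ε with ε = 0.005 and M = 3 are the standard values from the Parrondo literature and are assumed here; with these values the stationary probabilities come out as 0.3836 and 0.6164, matching the numbers above.

```python
import numpy as np

eps, M = 0.005, 3              # standard parameter values (assumed)
p_coin2 = 1/10 - eps           # used when the capital is a multiple of M
p_coin3 = 3/4 - eps            # used otherwise

# Markov chain on the capital modulo M: win -> state+1, lose -> state-1
P = np.zeros((M, M))
for s in range(M):
    p_win = p_coin2 if s == 0 else p_coin3
    P[s, (s + 1) % M] += p_win
    P[s, (s - 1) % M] += 1 - p_win

# Stationary distribution = eigenvector of P^T for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

print("P(coin 2 used):", round(pi[0], 4))       # -> 0.3836
print("P(coin 3 used):", round(1 - pi[0], 4))   # -> 0.6164
# Expected gain per round of Game B alone (negative, i.e. a losing game)
print(round(pi[0] * (2*p_coin2 - 1) + (1 - pi[0]) * (2*p_coin3 - 1), 4))  # ~ -0.0087
```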
However, when these two losing games are played in some alternating sequence - e.g. two games of A followed by two games of B (AABBAABB...), the combination of the two games is, paradoxically, a winning game. Not all alternating sequences of A and B result in winning games. For example, one game of A followed by one game of B (ABABAB...) is a losing game, while one game of A followed by two games of B (ABBABB...) is a winning game. This coin-tossing example has become the canonical illustration of Parrondo's paradox – two games, both losing when played individually, become a winning game when played in a particular alternating sequence.
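A direct Monte Carlo check of the full effect can be sketched as follows; again the parameter values are the standard ones and are assumptions rather than quotations from the article. The exact averages fluctuate from run to run, but Games A and B drift downward on their own while the periodic AABB schedule drifts upward.

```python
import random

EPS, M = 0.005, 3  # assumed standard values

def game_a(c):
    return c + (1 if random.random() < 0.5 - EPS else -1)

def game_b(c):
    p = (0.10 - EPS) if c % M == 0 else (0.75 - EPS)
    return c + (1 if random.random() < p else -1)

def mean_final_capital(schedule, steps=20_000, trials=200):
    total = 0.0
    for _ in range(trials):
        c = 0
        for t in range(steps):
            c = game_a(c) if schedule[t % len(schedule)] == "A" else game_b(c)
        total += c
    return total / trials

random.seed(1)
for schedule in ("A", "B", "AABB"):
    print(schedule, round(mean_final_capital(schedule), 1))
# Typical outcome: "A" and "B" end well below zero, "AABB" ends above zero.
```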
Resolving the paradox
The apparent paradox has been explained using a number of sophisticated approaches, including Markov chains, flashing ratchets, simulated annealing, and information theory. One way to explain the apparent paradox is as follows:
While Game B is a losing game under the probability distribution that results for $C_t$ modulo $M$ when it is played individually ($C_t$ modulo $M$ is the remainder when $C_t$ is divided by $M$), it can be a winning game under other distributions, as there is at least one state in which its expectation is positive.
As the distribution of outcomes of Game B depend on the player's capital, the two games cannot be independent. If they were, playing them in any sequence would lose as well.
The role of the capital-dependence introduced by the modulo $M$ rule now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: The individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence. A more detailed exposition of this point, along with several related examples, can be found in Philips and Feldman.
Applications
Parrondo's paradox is used extensively in game theory, and its application to engineering, population dynamics, financial risk, etc., are areas of active research. Parrondo's games are of little practical use such as for investing in stock markets as the original games require the payoff from at least one of the interacting games to depend on the player's capital. However, the games need not be restricted to their original form and work continues in generalizing the phenomenon. Similarities to volatility pumping and the two envelopes problem have been pointed out. Simple finance textbook models of security returns have been used to prove that individual investments with negative median long-term returns may be easily combined into diversified portfolios with positive median long-term returns. Similarly, a model that is often used to illustrate optimal betting rules has been used to prove that splitting bets between multiple games can turn a negative median long-term return into a positive one. In evolutionary biology, both bacterial random phase variation and the evolution of less accurate sensors have been modelled and explained in terms of the paradox. In ecology, the periodic alternation of certain organisms between nomadic and colonial behaviors has been suggested as a manifestation of the paradox. There has been an interesting application in modelling multicellular survival as a consequence of the paradox and some interesting discussion on the feasibility of it. Applications of Parrondo's paradox can also be found in reliability theory.
Name
In the early literature on Parrondo's paradox, it was debated whether the word 'paradox' is an appropriate description given that the Parrondo effect can be understood in mathematical terms. The 'paradoxical' effect can be mathematically explained in terms of a convex linear combination.
However, Derek Abbott, a leading researcher on the topic, has given a detailed answer regarding the use of the word 'paradox' in this context.
See also
Granular convection
Brownian ratchet
Game theory
List of paradoxes
Ratchet effect
Statistical mechanics
References
Further reading
John Allen Paulos, A Mathematician Plays the Stock Market, Basic Books, 2004, .
Neil F. Johnson, Paul Jefferies, Pak Ming Hui, Financial Market Complexity, Oxford University Press, 2003, .
Ning Zhong and Jiming Liu, Intelligent Agent Technology: Research and Development, World Scientific, 2001, .
Elka Korutcheva and Rodolfo Cuerno, Advances in Condensed Matter and Statistical Physics, Nova Publishers, 2004, .
Maria Carla Galavotti, Roberto Scazzieri, and Patrick Suppes, Reasoning, Rationality, and Probability, Center for the Study of Language and Information, 2008, .
Derek Abbott and Laszlo B. Kish, Unsolved Problems of Noise and Fluctuations, American Institute of Physics, 2000, .
Visarath In, Patrick Longhini, and Antonio Palacios, Applications of Nonlinear Dynamics: Model and Design of Complex Systems, Springer, 2009, .
Marc Moore, Sorana Froda, and Christian Léger, Mathematical Statistics and Applications: Festschrift for Constance van Eeden, IMS, 2003, .
Ehrhard Behrends, Fünf Minuten Mathematik: 100 Beiträge der Mathematik-Kolumne der Zeitung Die Welt, Vieweg+Teubner Verlag, 2006, .
Lutz Schimansky-Geier, Noise in Complex Systems and Stochastic Dynamics, SPIE, 2003, .
Susan Shannon, Artificial Intelligence and Computer Science, Nova Science Publishers, 2005, .
Eric W. Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 2003, .
David Reguera, José M. G. Vilar, and José-Miguel Rubí, Statistical Mechanics of Biocomplexity, Springer, 1999, .
Sergey M. Bezrukov, Unsolved Problems of Noise and Fluctuations, Springer, 2003, .
Julian Chela-Flores, Tobias C. Owen, and F. Raulin, First Steps in the Origin of Life in the Universe, Springer, 2001, .
Tönu Puu and Iryna Sushko, Business Cycle Dynamics: Models and Tools, Springer, 2006, .
Andrzej S. Nowak and Krzysztof Szajowski, Advances in Dynamic Games: Applications to Economics, Finance, Optimization, and Stochastic Control, Birkhäuser, 2005, .
Cristel Chandre, Xavier Leoncini, and George M. Zaslavsky, Chaos, Complexity and Transport: Theory and Applications, World Scientific, 2008, .
Richard A. Epstein, The Theory of Gambling and Statistical Logic (Second edition), Academic Press, 2009, .
Clifford A. Pickover, The Math Book, Sterling, 2009, .
External links
J. M. R. Parrondo, Parrondo's paradoxical games
Nature news article on Parrondo's paradox
Parrondo's Paradox - A Simulation
Parrondo's Paradox at Futility Closet
Parrondo's Paradox at Wolfram
Online Parrondo simulator
Parrondo's paradox at Maplesoft
Optimal adaptive strategies and Parrondo
Mathematical paradoxes
Game theory
Decision-making paradoxes | Parrondo's paradox | [
"Mathematics"
] | 2,601 | [
"Mathematical problems",
"Game theory",
"Mathematical paradoxes"
] |
4,185,606 | https://en.wikipedia.org/wiki/Avoidance%20response | An avoidance response is a response that prevents an aversive stimulus from occurring. It is a kind of negative reinforcement. An avoidance response is a behavior based on the concept that animals will avoid performing behaviors that result in an aversive outcome. This can involve learning through operant conditioning when it is used as a training technique. It is a reaction to undesirable sensations or feedback that leads to avoiding the behavior that is followed by this unpleasant or fear-inducing stimulus.
Whether the aversive stimulus is brought on intentionally by another or is naturally occurring, it is adaptive to learn to avoid situations that have previously yielded negative outcomes. A simple example of this is conditioned food aversion, or the aversion developed to food that has previously resulted in sickness. Food aversions can also be conditioned using classical conditioning, so that an animal learns to avoid a stimulus previously neutral that has been associated with a negative outcome. This is displayed nearly universally in animals since it is a defense against potential poisoning. A wide variety of species, even slugs, have developed the ability to learn food aversions.
Experiments
An experiment conducted by Solomon and Wynne in 1953 shows the properties of negative reinforcement. The subjects, dogs, were put in a shuttle box (a chamber containing two rectangular compartments divided by a barrier a few inches high). The dogs had the ability to move freely between compartments by going over the barrier. Both compartments had a metal floor designed to administer an unpleasant electric shock. Each compartment also had a light above it, which would turn on and off. Every few minutes, the light in the room the dog was occupying was turned off, while the other remained on. If after 10 seconds in the dark, the dog did not move to the lit compartment, a shock was delivered to the floor of the room the dog was in. The shock continued until the dog moved into the other compartment. In doing this, the dog was escaping the shock by jumping the barrier into the next room. The dog could, however, avoid the shock completely by jumping the barrier before the 10 seconds of darkness led to a shock. Each trial worked this way, with avoiding the shock as the response. In the first few trials, the dog did not move until the shocks began and then it jumped over the barrier. However, after several trials, the dog began to make avoidance responses and would jump over the barrier when the light turned off, and would not receive the shock. Many dogs never received the shock after the first trial. These results gave rise to what is known as the avoidance paradox: how can the non-occurrence of an aversive event act as a reinforcer for an avoidance response?
Because the avoidance response is adaptive, humans have learned to use it in training animals such as dogs and horses. B.F. Skinner (1938) believed that animals learn primarily through rewards and punishments, the basis of operant conditioning. The avoidance response comes into play here when punishment is administered. An animal will presumably learn to avoid the behavior that preceded this punishment. A naturally occurring example for humans would be that after a child has been burned by a red stove, he or she learns not to touch the stove when it is red. The child avoids that behavior in the future. For a non-human animal, an example would be that of invisible fences which prompt a dog to learn not to cross a certain (invisible) boundary because its collar shocks it when it does.
Disorders
Although the avoidance response is often advantageous and has developed because it is adaptive, it can sometimes be harmful or become obsessive. Such is the case in obsessive compulsive disorder (a disorder involving mental obsessions followed by actions, often performed repetitively, to relieve the anxiety of the obsessions), panic disorder, and other psychiatric disorders. In panic disorder, a person learns to avoid certain situations such as being in crowded places because when they enter these situations, a panic attack (aversive stimulus) ensues. People with obsessive compulsive disorder may learn to avoid using public restrooms because doing so produces anxiety in them (aversive stimulus).
Neuropharmacology
The posterior and intermediate lobes of the pituitary are necessary for maintenance of the avoidance response once learned. When these areas of the brain are lesioned or removed, animals display difficulty in maintaining a conditioned avoidance response.
The avoidance response can be extinguished using a procedure called "flooding" or response prevention. This is a method in which the subject is forced to remain in the fearsome or aversive situation and not allowed the opportunity to avoid it. This is sometimes used in treatment of obsessive compulsive disorder. Systematic desensitization can also be used to extinguish avoidance response behaviors.
See for example studies involving avoidance response.
See also
Conditioned avoidance response test
Escape response
Fight-or-flight response
Flight zone
Startle reaction
References
Ethology
Reflexes | Avoidance response | [
"Biology"
] | 1,003 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
4,185,921 | https://en.wikipedia.org/wiki/Gompertz%E2%80%93Makeham%20law%20of%20mortality | The Gompertz–Makeham law states that the human death rate is the sum of an age-dependent component (the Gompertz function, named after Benjamin Gompertz), which increases exponentially with age, and an age-independent component (the Makeham term, named after William Makeham). In a protected environment where external causes of death are rare (laboratory conditions, low mortality countries, etc.), the age-independent mortality component is often negligible. In this case the formula simplifies to a Gompertz law of mortality. In 1825, Benjamin Gompertz proposed an exponential increase in death rates with age.
Description
The Gompertz–Makeham law of mortality describes the age dynamics of human mortality rather accurately in the age window from about 30 to 80 years of age. At more advanced ages, some studies have found that death rates increase more slowly – a phenomenon known as the late-life mortality deceleration – but more recent studies disagree.
The decline in the human mortality rate before the 1950s was mostly due to a decrease in the age-independent (Makeham) mortality component, while the age-dependent (Gompertz) mortality component was surprisingly stable. Since the 1950s, a new mortality trend has started in the form of an unexpected decline in mortality rates at advanced ages and "rectangularization" of the survival curve.
The hazard function for the Gompertz–Makeham distribution is most often characterised as $h(x) = \alpha e^{\beta x} + \lambda$. The empirical magnitude of the beta-parameter is about 0.085, implying a doubling of mortality every $\ln(2)/\beta \approx 0.69/0.085 \approx 8$ years (Denmark, 2006).
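A short numerical sketch of these two statements is given below; the value β = 0.085 is the one quoted in the text, while α and λ are illustrative placeholders only.

```python
import math

def hazard(x, alpha, beta, lam):
    """Gompertz-Makeham force of mortality: age-dependent term plus constant term."""
    return alpha * math.exp(beta * x) + lam

beta = 0.085                      # quoted empirical value (Denmark, 2006)
alpha, lam = 3e-5, 5e-4           # illustrative values, not taken from the article

# Mortality-rate doubling time of the Gompertz component
print("doubling time:", round(math.log(2) / beta, 1), "years")   # about 8 years

for age in (30, 50, 70, 90):
    print(age, round(hazard(age, alpha, beta, lam), 4))
```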
The quantile function can be expressed in a closed-form expression using the Lambert W function.
The Gompertz law is the same as a Fisher–Tippett distribution for the negative of age, restricted to negative values for the random variable (positive values for age).
See also
Bathtub curve
Biodemography
Biodemography of human longevity
Gerontology
Demography
Life table
Maximum life span
Reliability theory of aging and longevity
References
Actuarial science
Medical aspects of death
Population
Senescence
Statistical laws
Applied probability | Gompertz–Makeham law of mortality | [
"Chemistry",
"Mathematics",
"Biology"
] | 446 | [
"Applied probability",
"Applied mathematics",
"Senescence",
"Actuarial science",
"Cellular processes",
"Metabolism"
] |
4,185,946 | https://en.wikipedia.org/wiki/Air%20Quality%20Modeling%20Group | The Air Quality Modeling Group (AQMG) is in the U.S. EPA's Office of Air and Radiation (OAR) and provides leadership and direction on the full range of air quality models, air pollution dispersion models and other mathematical simulation techniques used in assessing pollution control strategies and the impacts of air pollution sources.
The AQMG serves as the focal point on air pollution modeling techniques for other EPA headquarters staff, EPA regional Offices, and State and local environmental agencies. It coordinates with the EPA's Office of Research and Development (ORD) on the development of new models and techniques, as well as wider issues of atmospheric research. Finally, the AQMG conducts modeling analyses to support the policy and regulatory decisions of the EPA's Office of Air Quality Planning and Standards (OAQPS).
The AQMG is located in Research Triangle Park, North Carolina.
Projects maintained by the AQMG
The AQMG maintains the following specific projects:
Air Quality Analyses to Support Modeling
Air Quality Modeling Guidelines
Dispersion Modeling Computer Codes
Dispersion Modeling
Emissions Inventories For Regional Modeling
Guidance on Modeling for New NAAQS & Regional Haze
Meteorological Data Guidance and Modeling
Model Clearinghouse
Models-3/Community Multiscale Air Quality (CMAQ)
Models3 Applications Team, Outreach and Training Coordination
Multimedia Modeling
PM Data Analysis and PM Modeling
Preferred/Recommended Models Alternative Models Screening Models
Regional Ozone Modeling
Roadway Intersection Modeling
Support Center For Regulatory Air Models (SCRAM)
Urban Ozone Modeling
Visibility and Regional Haze Modeling
See also
Accidental release source terms
Bibliography of atmospheric dispersion modeling
Air Quality Modelling and Assessment Unit (AQMAU)
Air Resources Laboratory
AP 42 Compilation of Air Pollutant Emission Factors
Atmospheric dispersion modeling
Atmospheric Studies Group
:Category:Atmospheric dispersion modeling
List of atmospheric dispersion models
Met Office
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
References
Further reading
www.crcpress.com
www.air-dispersion.com
External links
UK Dispersion Modelling Bureau web site
UK ADMLC web site
Air Resources Laboratory (ARL)
Air Quality Modeling Group
Met Office web site
Error propagation in air dispersion modeling
Air pollution in the United States
Air pollution organizations
Atmospheric dispersion modeling
United States Environmental Protection Agency | Air Quality Modeling Group | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 462 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
4,186,087 | https://en.wikipedia.org/wiki/Water%20dimer | The water dimer consists of two water molecules loosely bound by a hydrogen bond. It is the smallest water cluster. Because it is the simplest model system for studying hydrogen bonding in water, it has been the target of so many theoretical (and later experimental) studies that it has been called a "theoretical Guinea pig".
Structure and properties
The ab initio binding energy between the two water molecules is estimated to be 5-6 kcal/mol, although values between 3 and 8 have been obtained depending on the method. The experimentally measured dissociation energy (including nuclear quantum effects) of (H2O)2 and (D2O)2 are 3.16 ± 0.03 kcal/mol (13.22 ± 0.12 kJ/mol) and 3.56 ± 0.03 kcal/mol (14.88 ± 0.12 kJ/mol), respectively. The values are in excellent agreement with calculations. The O-O distance of the vibrational ground-state is experimentally measured at ca. 2.98 Å; the hydrogen bond is almost linear, but the angle with the plane of the acceptor molecule is about 57°. The vibrational ground-state is known as the linear water dimer (shown in the figure to the right), which is a near prolate top (viz., in terms of rotational constants, A > B ≈ C). Other configurations of interest include the cyclic dimer and the bifurcated dimer.
History and relevance
The first theoretical study of the water dimer was an ab initio calculation published in 1968 by Morokuma and Pedersen. Since then, the water dimer has been the focus of sustained interest by theoretical chemists concerned with hydrogen bonding—a search of the CAS database up to 2006 returns over 1100 related references (73 of them in 2005). In addition to serving as a model for hydrogen bonding, (H2O)2 is thought to play a significant role in many atmospheric processes, including chemical reactions, condensation, and solar energy absorption by the atmosphere. In addition, a complete understanding of the water dimer is thought to play a key role in a more thorough understanding of hydrogen bonding in liquid and solid forms of water.
References
Forms of water
Water chemistry
Cluster chemistry
Dimers (chemistry) | Water dimer | [
"Physics",
"Chemistry",
"Materials_science"
] | 480 | [
"Cluster chemistry",
"Phases of matter",
"Dimers (chemistry)",
"Forms of water",
"Polymer chemistry",
"nan",
"Organometallic chemistry",
"Matter"
] |
4,186,476 | https://en.wikipedia.org/wiki/Salt%20tide | Salt tide is a phenomenon in which the lower course of a river, lying at low altitude with respect to sea level, becomes salty when the river's discharge is low during the dry season; it is usually worsened by astronomical high tides.
The lower course of the Xijiang (West River) in Guangdong, China has been periodically affected, and since 2004 the phenomenon has been widely reported for causing shortages in the fresh-water supply of the western part of the Pearl River Delta. The salinity of tap water at Zhuhai was reported to be as high as 800 mg per litre in late February 2006, more than three times the World Health Organization standard of 250 mg per litre.
References
"Fresh-water crisis looms for Macau - Salinity keeps rising despite assurances", South China Morning Post, Page A7 Hong Kong & Delta, published Thursday, February 23, 2006.
Hydrology
Rivers
Freshwater ecology | Salt tide | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 181 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
4,186,556 | https://en.wikipedia.org/wiki/Mermin%E2%80%93Wagner%20theorem | In quantum field theory and statistical mechanics, the Hohenberg–Mermin–Wagner theorem or Mermin–Wagner theorem (also known as Mermin–Wagner–Berezinskii theorem or Coleman theorem) states that continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions $d \le 2$. Intuitively, this theorem implies that long-range fluctuations can be created with little energy cost, and since they increase the entropy, they are favored.
This preference is because if such a spontaneous symmetry breaking occurred, then the corresponding Goldstone bosons, being massless, would have an infrared divergent correlation function.
The absence of spontaneous symmetry breaking in $d \le 2$ dimensional infinite systems was rigorously proved by David Mermin and Herbert Wagner (1966), citing a more general unpublished proof by Pierre Hohenberg (published later in 1967) in statistical mechanics. It was also reformulated later by Sidney Coleman for quantum field theory. The theorem does not apply to discrete symmetries, such as that seen in the two-dimensional Ising model.
Introduction
Consider the free scalar field $\phi$ of mass $m$ in two Euclidean dimensions. Its propagator is:
$G(x) = \langle \phi(x)\,\phi(0) \rangle = \int \frac{d^2 k}{(2\pi)^2}\, \frac{e^{i k \cdot x}}{k^2 + m^2}.$
For small $m$, $G$ is a solution to Laplace's equation with a point source:
$-\nabla^2 G = \delta^2(x).$
This is because the propagator is the reciprocal of $k^2$ in $k$ space. To use Gauss's law, define the electric field analog to be $E = \nabla G$. The divergence of the electric field is zero away from the source. In two dimensions, using a large Gaussian ring of radius $r$: $|E| \propto 1/r$, and hence $G(r) \propto \ln r$.
So that the function G has a logarithmic divergence both at small and large r.
The interpretation of the divergence is that the field fluctuations cannot stay centered around a mean. If you start at a point where the field has the value 1, the divergence tells you that as you travel far away, the field is arbitrarily far from the starting value. This makes a two dimensional massless scalar field slightly tricky to define mathematically. If you define the field by a Monte Carlo simulation, it doesn't stay put, it slides to infinitely large values with time.
This happens in one dimension too, when the field is a one dimensional scalar field, a random walk in time. A random walk also moves arbitrarily far from its starting point, so that a one-dimensional or two-dimensional scalar does not have a well defined average value.
If the field is an angle, $\theta$, as it is in the Mexican hat model where the complex field $H = |H|\, e^{i\theta}$ has an expectation value but is free to slide in the $\theta$ direction, the angle will be random at large distances. This is the Mermin–Wagner theorem: there is no spontaneous breaking of a continuous symmetry in two dimensions.
XY model transition
While the Mermin–Wagner theorem prevents any spontaneous symmetry breaking on a global scale, ordering transitions of Kosterlitz–Thouless-type may be allowed. This is the case for the XY model where the continuous (internal) $O(2)$ symmetry on a spatial lattice of dimension $d \le 2$, i.e. the (spin-)field's expectation value, remains zero for any finite temperature (quantum phase transitions remain unaffected). However, the theorem does not prevent the existence of a phase transition in the sense of a diverging correlation length $\xi$. To this end, the model has two phases: a conventional disordered phase at high temperature with dominating exponential decay of the correlation function $G(r) \propto e^{-r/\xi}$ for $r/\xi \gg 1$, and a low-temperature phase with quasi-long-range order where $G(r)$ decays according to some power law for "sufficiently large", but finite distance $r$ ($a \ll r \ll \xi$ with $a$ the lattice spacing).
Heisenberg model
We will present an intuitive way to understand the mechanism that prevents symmetry breaking in low dimensions, through an application to the Heisenberg model, that is a system of $n$-component spins $\mathbf{S}_i$ of unit length $|\mathbf{S}_i| = 1$, located at the sites of a $d$-dimensional square lattice, with nearest neighbour coupling $J$. Its Hamiltonian is
$H = -J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j .$
The name of this model comes from its rotational symmetry. Consider the low temperature behavior of this system and assume that there exists a spontaneously broken symmetry, that is a phase where all spins point in the same direction, e.g. along the $n$-th axis. Then the rotational symmetry of the system is spontaneously broken, or rather reduced to the $O(n-1)$ symmetry under rotations around this direction. We can parametrize the field in terms of independent fluctuations $\sigma_\alpha$ around this direction as follows:
$\mathbf{S} = \left( \sigma_1, \ldots, \sigma_{n-1}, \sqrt{1 - \boldsymbol{\sigma}^2} \right),$
with $|\boldsymbol{\sigma}| \ll 1$, and Taylor expand the resulting Hamiltonian. We have
$\mathbf{S}_i \cdot \mathbf{S}_j = \boldsymbol{\sigma}_i \cdot \boldsymbol{\sigma}_j + \sqrt{\left(1 - \boldsymbol{\sigma}_i^2\right)\left(1 - \boldsymbol{\sigma}_j^2\right)} \approx 1 - \tfrac{1}{2}\left( \boldsymbol{\sigma}_i - \boldsymbol{\sigma}_j \right)^2 + \ldots$
whence
$H \approx E_0 + \frac{J}{2} \sum_{\langle i,j \rangle} \left( \boldsymbol{\sigma}_i - \boldsymbol{\sigma}_j \right)^2 + \ldots$
with $E_0$ a constant. Ignoring the irrelevant constant term and passing to the continuum limit, given that we are interested in the low temperature phase where long-wavelength fluctuations dominate, we get
$H \approx \frac{J}{2} \int d^d x \, \left( \nabla \boldsymbol{\sigma}(x) \right)^2$
(with factors of the lattice spacing absorbed into $J$).
The field fluctuations are called spin waves and can be recognized as Goldstone bosons. Indeed, they are n-1 in number and they have zero mass since there is no mass term in the Hamiltonian.
To find if this hypothetical phase really exists we have to check if our assumption is self-consistent, that is if the expectation value of the magnetization, calculated in this framework, is finite as assumed. To this end we need to calculate the first order correction to the magnetization due to the fluctuations. This is the procedure followed in the derivation of the well-known Ginzburg criterion.
The model is Gaussian to first order and so the momentum space correlation function is proportional to $k^{-2}$. Thus the real space two-point correlation function for each of these modes is
$\langle \sigma(r)\, \sigma(0) \rangle \propto \frac{T}{J} \int_{1/L}^{1/a} \frac{d^d k}{(2\pi)^d} \, \frac{e^{i k \cdot r}}{k^2},$
where $a$ is the lattice spacing. The average magnetization is
$\langle S_n \rangle = \left\langle \sqrt{1 - \boldsymbol{\sigma}^2} \right\rangle \approx 1 - \tfrac{1}{2} \langle \boldsymbol{\sigma}^2 \rangle,$
and the first order correction can now easily be calculated:
$\langle \boldsymbol{\sigma}^2 \rangle = (n-1)\, \frac{T}{J} \int_{1/L}^{1/a} \frac{d^d k}{(2\pi)^d} \, \frac{1}{k^2}.$
The integral above is proportional to
$\int_{1/L}^{1/a} dk \, k^{d-3}$
and so it is finite for $d > 2$, but appears to be divergent for $d \le 2$ (logarithmically for $d = 2$).
This divergence signifies that fluctuations are large so that the expansion in the parameter $\boldsymbol{\sigma}$ performed above is not self-consistent. One can naturally expect then that beyond that approximation, the average magnetization is zero.
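The infrared behaviour of this integral is easy to exhibit numerically. The sketch below evaluates the radial integral $\int_{1/L}^{1/a} dk\, k^{d-3}$ for growing system size $L$ (lattice spacing $a = 1$); it is meant only as an illustration of the statement above, and the grid size is an arbitrary choice.

```python
import numpy as np

def fluctuation_integral(d, L, a=1.0, n=200_000):
    """Trapezoidal estimate of the integral of k**(d-3) from 1/L (infrared cutoff)
    to 1/a (ultraviolet cutoff), sampled on a logarithmic grid."""
    k = np.geomspace(1.0 / L, 1.0 / a, n)
    f = k ** (d - 3)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)))

for d in (1, 2, 3):
    print(f"d={d}:", [round(fluctuation_integral(d, L), 2) for L in (1e2, 1e4, 1e6)])
# d=1: grows linearly with L      -> fluctuations destroy the ordered state
# d=2: grows like log(L)          -> fluctuations destroy it, but only logarithmically
# d=3: tends to a finite constant -> long-range order can survive
```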
We thus conclude that for $d \le 2$ our assumption that there exists a phase of spontaneous magnetization is incorrect for all $T > 0$, because the fluctuations are strong enough to destroy the spontaneous symmetry breaking. This is a general result:
Hohenberg–Mermin–Wagner theorem. There is no phase with spontaneous breaking of a continuous symmetry for $T > 0$, in dimensions $d \le 2$ for an infinite system.
The result can also be extended to other geometries, such as Heisenberg films with an arbitrary number of layers, as well as to other lattice systems (Hubbard model, s-f model).
Generalizations
Much stronger results than absence of magnetization can actually be proved, and the setting can be substantially more general. In particular:
The Hamiltonian can be invariant under the action of an arbitrary compact, connected Lie group $G$.
Long-range interactions can be allowed (provided that they decay fast enough; necessary and sufficient conditions are known).
In this general setting, Mermin–Wagner theorem admits the following strong form (stated here in an informal way):
All (infinite-volume) Gibbs states associated to this Hamiltonian are invariant under the action of $G$.
When the assumption that the Lie group be compact is dropped, a similar result holds, but with the conclusion that infinite-volume Gibbs states do not exist.
Finally, there are other important applications of these ideas and methods, most notably to the proof that there cannot be non-translation invariant Gibbs states in 2-dimensional systems. A typical such example would be the absence of crystalline states in a system of hard disks (with possibly additional attractive interactions).
It has been proved however that interactions of hard-core type can lead in general to violations of Mermin–Wagner theorem.
Historical arguments
In 1930, Felix Bloch argued that, by diagonalizing the Slater determinant for fermions, magnetism in 2D should not exist. Some easy arguments, which are summarized below, were given by Rudolf Peierls based on entropic and energetic considerations. Lev Landau also did some work on symmetry breaking in two dimensions.
Energetic argument
One reason for the lack of global symmetry breaking is that one can easily excite long wavelength fluctuations which destroy perfect order. "Easily excited" means that the energy of those fluctuations tends to zero for large enough systems. Consider a magnetic model (e.g. the XY-model in one dimension): a chain of $N$ magnetic moments of length $L$. We use the harmonic approximation, in which the forces (torques) between neighbouring moments increase linearly with the angle of twisting $\Delta\varphi$, so that the energy due to twisting increases quadratically, $\propto J(\Delta\varphi)^2$. The total energy is the sum over all twisted pairs of magnetic moments, $E \propto J \sum_i (\Delta\varphi_i)^2$. If one considers the excited mode with the lowest energy in one dimension (see figure), then the moments on the chain of length $L$ are tilted progressively along the chain. The relative angle between neighbouring moments is the same for all pairs of moments in this mode and is $\propto 1/N$, if the chain consists of $N$ magnetic moments. It follows that the total energy of this lowest mode is $E_{1D} \propto J\,N\,(1/N)^2 = J/N$. It decreases with increasing system size and tends to zero in the thermodynamic limit $N \to \infty$, $L \to \infty$. For arbitrarily large systems it follows that the lowest modes do not cost any energy and will be thermally excited. Simultaneously, the long range order is destroyed on the chain. In two dimensions (or in a plane) the number of magnetic moments is proportional to the area of the plane, $N \propto L^2$. The energy of the lowest excited mode is then $E_{2D} \propto J\,L^2\,(1/L)^2$, which tends to a constant in the thermodynamic limit. Thus the modes will be excited at sufficiently large temperatures. In three dimensions, the number of magnetic moments is proportional to the volume, $N \propto L^3$, and the energy of the lowest mode is $E_{3D} \propto J\,L^3\,(1/L)^2 = J\,L$. It diverges with system size and will thus not be excited for large enough systems. Long range order is not affected by this mode and global symmetry breaking is allowed.
Entropic argument
An entropic argument against perfect long range order in crystals with $d \le 2$ is as follows (see figure): consider a chain of atoms/particles with an average particle distance of $\langle a \rangle$. Thermal fluctuations between particle 0 and particle 1 will lead to fluctuations of the average particle distance of the order of $\Delta$, thus the distance is given by $a = \langle a \rangle \pm \Delta$. The fluctuations between particle 1 and particle 2 will be of the same size, $\pm\Delta$. We assume that the thermal fluctuations are statistically independent (which is evident if we consider only nearest neighbour interaction), and the fluctuations between particle 0 and particle 2 (with double the distance) have to be summed statistically independently (or incoherently), giving $\pm\sqrt{2}\,\Delta$. For particles at N times the average distance, the fluctuations increase with the square root, $\pm\sqrt{N}\,\Delta$, if neighbouring fluctuations are summed independently. Although the average distance is well defined, the deviations from a perfectly periodic chain increase with the square root of the system size. In three dimensions, one has to walk along three linearly independent directions to cover the whole space; in a cubic crystal, this is effectively along the space diagonal, to get from particle $(0,0,0)$ to particle $(1,1,1)$. As one can easily see in the figure, there are six different possibilities to do this. This implies that the fluctuations on the six different pathways cannot be statistically independent, since they pass the same particles at positions such as $(1,0,0)$ and $(1,1,0)$. Now, the fluctuations of the six different ways have to be summed in a coherent way and will be of the order of $\Delta$ – independent of the size of the cube. The fluctuations stay finite and lattice sites are well defined. For the case of two dimensions, Herbert Wagner and David Mermin have proved rigorously that fluctuations of the distances increase logarithmically with system size. This is frequently called the logarithmic divergence of displacements.
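The one-dimensional part of this argument is easy to reproduce numerically: if each nearest-neighbour spacing fluctuates independently, the position of the N-th particle wanders by an amount growing as the square root of N. The parameter values in the sketch below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_spacing, sigma, N, samples = 1.0, 0.1, 6400, 1000

# Positions of particles in a 1D "crystal": cumulative sums of fluctuating spacings
spacings = mean_spacing + sigma * rng.standard_normal((samples, N))
positions = np.cumsum(spacings, axis=1)

for n in (100, 400, 1600, 6400):
    spread = positions[:, n - 1].std()
    print(f"N={n:5d}  spread={spread:.2f}  ~ sigma*sqrt(N)={sigma*np.sqrt(n):.2f}")
```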
Crystals in 2D
The image shows a (quasi-) two-dimensional crystal of colloidal particles. These are micrometre-sized particles dispersed in water and sedimented on a flat interface, thus they can perform Brownian motions only within a plane. The sixfold crystalline order is easy to detect on a local scale, since the logarithmic increase of displacements is rather slow. The deviations from the (red) lattice axis are easy to detect, too, here shown as green arrows. The deviations are basically given by the elastic lattice vibrations (acoustic phonons). A direct experimental proof of Hohenberg–Mermin–Wagner fluctuations would be, if the displacements increase logarithmic with the distance of a locally fitted coordinate frame (blue). This logarithmic divergence goes along with an algebraic (slow) decay of positional correlations. The spatial order of a 2D crystal is called quasi-long-range (see also such hexatic phase for the phase behaviour of 2D ensembles).
Interestingly, significant signatures of Hohenberg–Mermin–Wagner fluctuations have not been found in crystals but in disordered amorphous systems.
This work did not investigate the logarithmic displacements of lattice sites (which are difficult to quantify for a finite system size), but the magnitude of the mean squared displacement of the particles as function of time. This way, the displacements are not analysed in space but in the time domain. The theoretical background is given by D. Cassi, as well as F. Merkl and H. Wagner. This work analyses the recurrence probability of random walks and spontaneous symmetry breaking in various dimensions. The finite recurrence probability of a random walk in one and two dimension shows a dualism to the lack of perfect long-range order in one and two dimensions, while the vanishing recurrence probability of a random walk in 3D is dual to existence of perfect long-range order and the possibility of symmetry breaking.
Limits
Real magnets usually do not have a continuous symmetry, since the spin-orbit coupling of the electrons imposes an anisotropy. For atomic systems like graphene, one can show that monolayers of cosmological (or at least continental) size are necessary to measure a significant size of the amplitudes of fluctuations.
A recent discussion about the Hohenberg–Mermin–Wagner theorems and its limitations in the thermodynamic limit is given by Bertrand Halperin.
More recently, it was shown that the most severe physical limitation are finite-size effects in 2D, because the suppression due to infrared fluctuations is only logarithmic in the size: The sample would have to be larger than the observable universe for a 2D superconducting transition to be suppressed below ~100 K.
For magnetism, there is a similar behaviour where the sample size must approach the size of the universe to have a Curie temperature Tc in the mK range. However, because disorder and interlayer coupling compete with finite-size effects at restoring order, it cannot be said a priori which of them is responsible for the observation of magnetic ordering in a given 2D sample.
Remarks
The discrepancy between the Hohenberg–Mermin–Wagner theorem (ruling out long range order in 2D) and the first computer simulations (Alder & Wainwright), which indicated crystallization in 2D, once motivated J. Michael Kosterlitz and David J. Thouless to work on topological phase transitions in 2D. This work was awarded the 2016 Nobel Prize in Physics (together with Duncan Haldane).
See also
Elitzur's theorem
Notes
References
Eponymous theorems of physics
Quantum field theory
No-go theorems
Physics theorems
Theorems in quantum mechanics
Statistical mechanics theorems
Theorems in mathematical physics | Mermin–Wagner theorem | [
"Physics",
"Mathematics"
] | 3,096 | [
"Theorems in dynamical systems",
"Theorems in quantum mechanics",
"Quantum field theory",
"No-go theorems",
"Mathematical theorems",
"Equations of physics",
"Quantum mechanics",
"Statistical mechanics theorems",
"Eponymous theorems of physics",
"Theorems in mathematical physics",
"Statistical me... |