| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
12,747,246 | https://en.wikipedia.org/wiki/Outer%20space%20%28mathematics%29 | In the mathematical subject of geometric group theory, the Culler–Vogtmann Outer space or just Outer space of a free group Fn is a topological space consisting of the so-called "marked metric graph structures" of volume 1 on Fn. The Outer space, denoted Xn or CVn, comes equipped with a natural action of the group of outer automorphisms Out(Fn) of Fn. The Outer space was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, and it serves as a free group analog of the Teichmüller space of a hyperbolic surface. Outer space is used to study homology and cohomology groups of Out(Fn) and to obtain information about algebraic, geometric and dynamical properties of Out(Fn), of its subgroups and individual outer automorphisms of Fn. The space Xn can also be thought of as the set of isometry types of minimal free discrete isometric actions of Fn on R-trees T such that the quotient metric graph T/Fn has volume 1.
History
The Outer space was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, inspired by analogy with the Teichmüller space of a hyperbolic surface. They showed that the natural action of Out(Fn) on Xn is properly discontinuous, and that Xn is contractible.
In the same paper Culler and Vogtmann constructed an embedding, via the translation length functions discussed below, of Xn into the infinite-dimensional projective space P(RC), where C is the set of nontrivial conjugacy classes of elements of Fn. They also proved that the closure of Xn in P(RC) is compact.
Later a combination of the results of Cohen and Lustig and of Bestvina and Feighn identified this closure with the space of projective classes of "very small" minimal isometric actions of Fn on R-trees.
Formal definition
Marked metric graphs
Let n ≥ 2. For the free group Fn fix a "rose" Rn, that is, a wedge of n circles joined at a vertex v, and fix an isomorphism between Fn and the fundamental group π1(Rn, v) of Rn. From this point on we identify Fn and π1(Rn, v) via this isomorphism.
A marking on Fn consists of a homotopy equivalence f : Rn → Γ where Γ is a finite connected graph without degree-one and degree-two vertices. Up to a (free) homotopy, f is uniquely determined by the induced isomorphism f# : π1(Rn, v) → π1(Γ, f(v)), that is, by an isomorphism between Fn and π1(Γ, f(v)).
A metric graph is a finite connected graph Γ together with an assignment, to every topological edge e of Γ, of a positive real number L(e) called the length of e.
The volume of a metric graph is the sum of the lengths of its topological edges.
A marked metric graph structure on Fn consists of a marking f : Rn → Γ together with a metric graph structure L on Γ.
Two marked metric graph structures f1 : Rn → Γ1 and f2 : Rn → Γ2 are equivalent if there exists an isometry θ : Γ1 → Γ2 such that, up to free homotopy, we have θ ∘ f1 = f2.
The Outer space Xn consists of equivalence classes of all the volume-one marked metric graph structures on Fn.
Weak topology on the Outer space
Open simplices
Let f : Rn → Γ be a marking, where Γ has k topological edges. We order the edges of Γ as e1, ..., ek. Let

Δk = { x = (x1, ..., xk) ∈ Rk : xi > 0 for i = 1, ..., k and x1 + ... + xk = 1 }

be the standard (k − 1)-dimensional open simplex in Rk.
Given f, there is a natural map j : Δk → Xn, where for x = (x1, ..., xk) ∈ Δk, the point j(x) of Xn is given by the marking f together with the metric graph structure L on Γ such that L(ei) = xi for i = 1, ..., k.
One can show that j is in fact an injective map, that is, distinct points of Δk correspond to non-equivalent marked metric graph structures on Fn.
The set j(Δk) is called the open simplex in Xn corresponding to f and is denoted S(f). By construction, Xn is the union of the open simplices corresponding to all markings on Fn. Note that two open simplices in Xn are either disjoint or coincide.
Closed simplices
Let f : Rn → Γ be a marking, where Γ has k topological edges. As before, we order the edges of Γ as e1, ..., ek. Define Δk′ ⊆ Rk as the set of all x = (x1, ..., xk) ∈ Rk such that x1 + ... + xk = 1, such that each xi ≥ 0, and such that the set of all edges ei in Γ with xi = 0 is a subforest in Γ.
The map j : Δk → Xn extends to a map h : Δk′ → Xn as follows. For x in Δk put h(x) = j(x). For x ∈ Δk′ − Δk the point h(x) of Xn is obtained by taking the marking f, contracting all edges ei of Γ with xi = 0 to obtain a new marking f1 : Rn → Γ1 and then assigning to each surviving edge ei of Γ1 the length xi > 0.
It can be shown that for every marking f the map h : Δk′ → Xn is still injective. The image of h is called the closed simplex in Xn corresponding to f and is denoted by S′(f). Every point in Xn belongs to only finitely many closed simplices and a point of Xn represented by a marking f : Rn → Γ where the graph Γ is tri-valent belongs to a unique closed simplex in Xn, namely S′(f).
The weak topology on the Outer space Xn is defined by saying that a subset C of Xn is closed if and only if for every marking f : Rn → Γ the set h−1(C) is closed in Δk′. In particular, the map h : Δk′ → Xn is a topological embedding.
Points of Outer space as actions on trees
Let x be a point in Xn given by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let T be the universal cover of Γ. Thus T is a simply connected graph, that is, T is a topological tree. We can also lift the metric structure L to T by giving every edge of T the same length as the length of its image in Γ. This turns T into a metric space (T, d) which is a real tree. The fundamental group π1(Γ) acts on T by covering transformations which are also isometries of (T, d), with the quotient space T/π1(Γ) = Γ. Since the induced homomorphism f# is an isomorphism between Fn = π1(Rn) and π1(Γ), we also obtain an isometric action of Fn on T with T/Fn = Γ. This action is free and discrete. Since Γ is a finite connected graph with no degree-one vertices, this action is also minimal, meaning that T has no proper Fn-invariant subtrees.
Moreover, every minimal free and discrete isometric action of Fn on a real tree with the quotient being a metric graph of volume one arises in this fashion from some point x of Xn. This defines a bijective correspondence between Xn and the set of equivalence classes of minimal free and discrete isometric actions of Fn on real trees with volume-one quotients. Here two such actions of Fn on real trees T1 and T2 are equivalent if there exists an Fn-equivariant isometry between T1 and T2.
Length functions
Given an action of Fn on a real tree T as above, one can define the translation length function associated with this action:

ℓT : Fn → R, where ℓT(g) = min { d(t, gt) : t ∈ T } for g ∈ Fn.
For g ≠ 1 there is a (unique) isometrically embedded copy of R in T, called the axis of g, such that g acts on this axis by a translation of magnitude ℓT(g). For this reason ℓT(g) is called the translation length of g. For any g, u in Fn we have ℓT(u g u^−1) = ℓT(g), that is, the function ℓT is constant on each conjugacy class in Fn.
In the marked metric graph model of Outer space translation length functions can be interpreted as follows. Let T in Xn be represented by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let g ∈ Fn = π1(Rn). First push g forward via f# to get a closed loop in Γ and then tighten this loop to an immersed circuit in Γ. The L-length of this circuit is the translation length ℓT(g) of g.
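For the special case where the marking is the identity map on the rose itself, this recipe becomes very concrete: tightening the loop is cyclic reduction of the word, and the translation length is the sum of the petal lengths over the letters of the cyclic reduction. The following Python sketch illustrates this special case; the encoding of group elements as lists of signed generator indices is our own convention, not notation from the literature.

```python
def cyclic_reduce(word):
    """Cyclically reduce a word in a free group.

    A word is a list of pairs (generator_index, sign) with sign +1 or -1.
    First freely reduce (cancel adjacent inverse pairs), then cancel
    inverse pairs straddling the two ends of the word.
    """
    reduced = []
    for gen, sign in word:
        if reduced and reduced[-1] == (gen, -sign):
            reduced.pop()
        else:
            reduced.append((gen, sign))
    while len(reduced) >= 2 and reduced[0] == (reduced[-1][0], -reduced[-1][1]):
        reduced = reduced[1:-1]
    return reduced

def translation_length(word, petal_lengths):
    """Translation length of a word acting on the universal cover of a
    metric rose: the weighted length of its cyclic reduction."""
    return sum(petal_lengths[gen] for gen, _ in cyclic_reduce(word))

# Rose with two petals a, b of length 1/2 each (volume one).
# a b a^-1 is conjugate to b, so both have translation length 1/2.
print(translation_length([(0, 1), (1, 1), (0, -1)], [0.5, 0.5]))  # 0.5
print(translation_length([(1, 1)], [0.5, 0.5]))                   # 0.5
```

The conjugacy invariance of ℓT noted above is visible here: conjugating only changes the word by letters that cancel under cyclic reduction.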
A basic general fact from the theory of group actions on real trees says that a point of the Outer space is uniquely determined by its translation length function. Namely if two trees with minimal free isometric actions of Fn define equal translation length functions on Fn then the two trees are Fn-equivariantly isometric. Hence the map from Xn to the set of R-valued functions on Fn is injective.
One defines the length function topology or axes topology on Xn as follows. For every T in Xn, every finite subset K of Fn and every ε > 0 let

VT(K, ε) = { T′ ∈ Xn : |ℓT′(g) − ℓT(g)| < ε for every g ∈ K }.
In the length function topology for every T in Xn a basis of neighborhoods of T in Xn is given by the family VT(K, ε) where K is a finite subset of Fn and where ε > 0.
Convergence of sequences in the length function topology can be characterized as follows. For T in Xn and a sequence Ti in Xn we have Ti → T in Xn if and only if for every g in Fn we have ℓTi(g) → ℓT(g).
Gromov topology
Another topology on Xn is the so-called Gromov topology or the equivariant Gromov–Hausdorff convergence topology, which provides a version of Gromov–Hausdorff convergence adapted to the setting of an isometric group action.
When defining the Gromov topology, one should think of points of Xn as actions of Fn on R-trees.
Informally, given a tree T, another tree T′ is "close" to T in the Gromov topology if, for some large finite subtrees of T and T′ and a large finite subset of Fn, there exists an "almost isometry" between the subtrees with respect to which the (partial) actions of Fn on T and T′ almost agree. For the formal definition of the Gromov topology see the references.
Coincidence of the weak, the length function and Gromov topologies
An important basic result states that the Gromov topology, the weak topology and the length function topology on Xn coincide.
Action of Out(Fn) on Outer space
The group Out(Fn) admits a natural right action by homeomorphisms on Xn.
First we define the action of the automorphism group Aut(Fn) on Xn. Let α ∈ Aut(Fn) be an automorphism of Fn.
Let x be a point of Xn given by a marking f : Rn → Γ with a volume-one metric graph structure L on Γ. Let τ : Rn → Rn be a homotopy equivalence whose induced homomorphism at the fundamental group level is the automorphism α of Fn = π1(Rn). The element xα of Xn is given by the marking f ∘ τ : Rn → Γ with the metric structure L on Γ. That is, to get xα from x we simply precompose the marking defining x with τ.
In the real tree model this action can be described as follows. Let T in Xn be a real tree with a minimal free and discrete co-volume-one isometric action of Fn. Let α ∈ Aut(Fn). As a metric space, Tα is equal to T. The action of Fn is twisted by α: namely, for any t in T and g in Fn, the action of g on t in Tα equals the action of α(g) on t in T.
At the level of translation length functions the tree Tα is given as ℓTα(g) = ℓT(α(g)) for g ∈ Fn.
One then checks that for the above action of Aut(Fn) on Outer space Xn the subgroup of inner automorphisms Inn(Fn) is contained in the kernel of this action, that is, every inner automorphism acts trivially on Xn. It follows that the action of Aut(Fn) on Xn quotients through to an action of Out(Fn) = Aut(Fn)/Inn(Fn) on Xn. Namely, if φ ∈ Out(Fn) is an outer automorphism of Fn and if α in Aut(Fn) is an actual automorphism representing φ then for any x in Xn we have xφ = xα.
The right action of Out(Fn) on Xn can be turned into a left action via a standard conversion procedure. Namely, for φ ∈ Out(Fn) and x in Xn set
φx = xφ−1.
This left action of Out(Fn) on Xn is also sometimes considered in the literature although most sources work with the right action.
Moduli space
The quotient space Mn = Xn/Out(Fn) is the moduli space which consists of isometry types of finite connected graphs Γ without degree-one and degree-two vertices, with fundamental groups isomorphic to Fn (that is, with the first Betti number equal to n), equipped with volume-one metric structures. The quotient topology on Mn is the same as that given by the Gromov–Hausdorff distance between metric graphs representing points of Mn. The moduli space Mn is not compact; the "cusps" in Mn arise from the lengths of edges of homotopically nontrivial subgraphs (e.g. an essential circuit) of a metric graph Γ decreasing towards zero.
Basic properties and facts about Outer space
Outer space Xn is contractible and the action of Out(Fn) on Xn is properly discontinuous, as was proved by Culler and Vogtmann in their original 1986 paper where Outer space was introduced.
The space Xn has topological dimension 3n − 4. The reason is that if Γ is a finite connected graph without degree-one and degree-two vertices with fundamental group isomorphic to Fn, then Γ has at most 3n − 3 edges and it has exactly 3n − 3 edges when Γ is trivalent. Hence the top-dimensional open simplex in Xn has dimension 3n − 4.
Outer space Xn contains a specific deformation retract Kn of Xn, called the spine of Outer space. The spine Kn has dimension 2n − 3, is Out(Fn)-invariant and has compact quotient under the action of Out(Fn).
Unprojectivized Outer space
The unprojectivized Outer space cvn consists of equivalence classes of all marked metric graph structures on Fn where the volume of the metric graph in the marking is allowed to be any positive real number. The space cvn can also be thought of as the set of all free minimal discrete isometric actions of Fn on R-trees, considered up to Fn-equivariant isometry. The unprojectivized Outer space inherits the same structures that Xn has, including the coincidence of the three topologies (Gromov, axes, weak), and an Out(Fn)-action. In addition, there is a natural action of R>0 on cvn by scalar multiplication.
Topologically, cvn is homeomorphic to Xn × (0, ∞). In particular, cvn is also contractible.
Projectivized Outer space
The projectivized Outer space CVn = cvn/R>0 is the quotient space of cvn under the action of R>0 by scalar multiplication. The space CVn is equipped with the quotient topology. For a tree T in cvn its projective equivalence class is denoted [T]. The action of Out(Fn) on cvn naturally quotients through to the action of Out(Fn) on CVn. Namely, for φ ∈ Out(Fn) and [T] in CVn put [T]φ = [Tφ].
A key observation is that the map Xn → CVn, T ↦ [T], is an Out(Fn)-equivariant homeomorphism. For this reason the spaces Xn and CVn are often identified.
Lipschitz distance
The Lipschitz distance, named for Rudolf Lipschitz, for Outer space corresponds to the Thurston metric in Teichmüller space. For two points in Xn the (right) Lipschitz distance is defined as the (natural) logarithm of the maximally stretched closed path from to :
and
This is an asymmetric metric (also sometimes called a quasimetric), i.e. it only fails symmetry . The symmetric Lipschitz metric normally denotes:
The supremum is always attained and can be calculated over a finite set, the so-called candidates of x:

ΛR(x, y) = max { ℓy(γ)/ℓx(γ) : γ ∈ cand(x) },

where cand(x) is the finite set of conjugacy classes in Fn which correspond to embeddings of a simple loop, a figure of eight, or a barbell into x via the marking.

The stretching factor also equals the minimal Lipschitz constant of a homotopy equivalence carrying over the marking, i.e.

ΛR(x, y) = min { Lip(h) : h ∈ H(x, y) },

where H(x, y) is the set of continuous functions h : x → y such that the composition of h with the marking on x is freely homotopic to the marking on y.
The induced topology is the same as the weak topology, and the isometry group is Out(Fn) for both the symmetric and the asymmetric Lipschitz distance.
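For two points supported on the same rose with the identity marking, the candidate set effectively reduces to the single petals, so the stretch factor is just the largest ratio of corresponding petal lengths. A minimal sketch under that assumption (an illustration of the definition, not a general Outer-space algorithm):

```python
import math

def stretch_factor(x, y):
    """Right stretch factor between two metric roses with the identity
    marking: the supremum of length ratios is attained on a single petal."""
    return max(yi / xi for xi, yi in zip(x, y))

x, y = [0.5, 0.5], [0.25, 0.75]            # two volume-one metrics on a 2-petal rose
d_right = math.log(stretch_factor(x, y))   # asymmetric distance d_R(x, y)
d_left = math.log(stretch_factor(y, x))    # d_R(y, x); note the asymmetry
d_sym = d_right + d_left                   # symmetric Lipschitz distance
print(d_right, d_left, d_sym)              # log 1.5, log 2, their sum
```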
Applications and generalizations
The closure of cvn in the length function topology is known to consist of (Fn-equivariant isometry classes of) all very small minimal isometric actions of Fn on R-trees. Here the closure is taken in the space of all minimal isometric "irreducible" actions of Fn on R-trees, considered up to equivariant isometry. It is known that the Gromov topology and the axes topology on the space of irreducible actions coincide, so the closure can be understood in either sense. The projectivization of this closure with respect to multiplication by positive scalars gives a compact space which is the length function compactification of CVn and of Xn, analogous to Thurston's compactification of the Teichmüller space.
Analogs and generalizations of the Outer space have been developed for free products, for right-angled Artin groups, for the so-called deformation spaces of group actions and in some other contexts.
A base-pointed version of Outer space, called Auter space, for marked metric graphs with base-points, was constructed by Hatcher and Vogtmann in 1998. The Auter space shares many properties in common with the Outer space, but only comes with an action of Aut(Fn).
See also
Geometric group theory
Mapping class group
Train track map
Out(Fn)
References
Further reading
Mladen Bestvina, The topology of Out(Fn). Proceedings of the International Congress of Mathematicians, Vol. II (Beijing, 2002), pp. 373–384, Higher Education Press, Beijing, 2002.
Karen Vogtmann, On the geometry of outer space. Bulletin of the American Mathematical Society 52 (2015), no. 1, 27–46.
Geometric group theory
Geometric topology | Outer space (mathematics) | Physics,Mathematics | 3,996 |
41,359,517 | https://en.wikipedia.org/wiki/Recoil%20%28rheology%29 | Recoil is a rheological phenomenon observed only in non-Newtonian fluids that is characterized by a moving fluid's ability to snap back to a previous position when external forces are removed. Recoil is a result of the fluid's elasticity and memory where the speed and acceleration by which the fluid moves depends on the molecular structure and the location to which it returns depends on the conformational entropy. This effect is observed in numerous non-Newtonian liquids to a small degree, but is prominent in some materials such as molten polymers.
Memory
The degree to which a fluid will “remember” where it came from depends on the entropy. Viscoelastic properties in fluids cause them to snap back to entropically favorable conformations. Recoil is observed when a favorable conformation is in the fluid's recent past. However, the fluid cannot fully return to its original position due to energy losses stemming from less than perfect elasticity.
Recoiling fluids display fading memory, meaning that the longer a fluid is elongated, the less it will recover. Recoil is related to the characteristic time, an estimate of the order of magnitude of the reaction time of the system. Fluids that are described as recoiling generally have characteristic times on the order of a few seconds. Although recoiling fluids usually recover relatively small distances, some molten polymers can recover as much as 1/10 of the total elongation. This property of polymers must be accounted for in polymer processing.
Demonstrations of recoil
When a spinning rod is placed in a polymer solution, elastic forces generated by the rotation motion cause fluid to climb up the rod (a phenomenon known as the Weissenberg effect). If the torque being applied is immediately brought to a stop, the fluid recoils down the rod.
When a viscoelastic fluid being poured from a beaker is quickly cut with a pair of scissors, the fluid recoils back into the beaker.
When fluid at rest in a circular tube is subjected to a pressure drop, a parabolic flow distribution is observed that pulls the liquid down the tube. Immediately after the pressure is alleviated, the fluid recoils backward in the tube and forms a more blunt flow profile.
When Silly Putty is rapidly stretched and held at an elongated position for a short period of time, it springs back. However, if it is held at an elongated position for a longer period of time, there is very little recovery and no visible recoil.
References
Fluid dynamics
Rheology
Non-Newtonian fluids | Recoil (rheology) | Chemistry,Engineering | 493 |
2,036,407 | https://en.wikipedia.org/wiki/Earthlife%20Africa | Earthlife Africa is a South African environmental and anti-nuclear organisation founded in August 1988, in Johannesburg. Initially conceived of as a South African version of Greenpeace, the group began by playing a radical, anti-apartheid, activist role. ELA is arguably now more of a reformist lobby or pressure group. Considered by some to be a key voice in the emerging environmental justice movement, Earthlife Africa has been criticised for being too radical, and by others for "working with traditional conservation movements" in furthering the environmental struggle.
The Earthlife Africa constitution (and name) was formally adopted at the first national conference at Dal Josophat, near Paarl (outside Cape Town) during 1989. Earthlife Africa was chosen as a conscious attempt to avoid the split affecting two factions in Greenpeace who were vying for control of the organisation. ELA therefore took a different approach to the environmental struggle.
The ELA constitution was initially loosely based upon the Four Pillars of the Green Party and other movement documents. In attendance at this historical inauguration of South Africa's green movement were various members of related environmental organisations and ecology groups including Peter Lukey, Henk Coetzee, Mike Kantey, Elfrieda Strauss, David Robert Lewis, and Rachel Brown.
In December 1989 Earthlife Africa formally placed environmental issues on the agenda of the Conference for a Democratic Future.
According to Jacklyn Cock, "the concept of environmental justice was first introduced in South Africa at the Earthlife 1992 conference." Environmental justice "was articulated as a black concept and a poor concept and it took root very well". More accurately, it was the Environmental Justice Network Forum (EJNF) which was initiated at the 1992 conference hosted by Earthlife Africa on the theme "What does it mean to be green in South Africa". At this conference 325 civil society delegates resolved to redefine the environmental agenda in South Africa in broad terms and to move beyond the loose anarchist constitution which had bound members with 'values' as opposed to 'rights'. The South African National Conference on Environment and Development had already set the agenda of the green movement in 1991, and the 1992 ELA conference was thus a sequel to and precursor of later developments within the broader movement.
The exposure of pollution by Thor Chemicals, a corporation which imported toxic waste into South Africa, was the crucial turning point in the re-framing and 'browning' of environmentalism in South Africa. The exposure was carried out by Earthlife and EJNF working closely with the Legal Resources Centre, the Chemical Workers Industrial Union, affected workers and local communities.
Earthlife launched the People's Environmental Centre, the Greenhouse, in 2002.
In 2007, ELA participated in a parliamentary portfolio committee hearing into the nuclear industry, delivering submissions and hearing from widows and workers affected by the Pelindaba accident.
In September 2010, Public Enterprises Minister Barbara Hogan announced the ANC government's decision to mothball the PBMR project. The cost to the taxpayer was in the region of between R7bn and R9.5bn, spent on an unproven technology which could not produce a working reactor after more than 11 years of research.
Conveners
Maya Aberman (Cape Town branch) 2006
Nosiphiwo Msithweni (Cape Town branch) 2007
Campaigns
Apartheid is an Ecology issue
Nuclear Energy Costs the Earth Campaign (NECTEC)
Toxics Campaign focuses mainly on the prevention of proposed incinerators, through input into EIAs
Sustainable Energy and Climate Change Partnership (SECCP)
Demonstrations & Actions
1998: picket at Durban harbour against a nuclear waste ship
2008: picket against the arrival of the USS Theodore Roosevelt
2012: The records and medical files of hundreds of the workers were formally handed over to the Public Protector.
2017: Memorandum handed to Necsa relating to allegations of ill health among Necsa employees, accepted by Group CEO Phumzile Tshelane.
Publicity
1998: During a campaign against air pollution in Johannesburg, three prominent sculptures were decorated with gas masks. The group disseminates information on issues such as climate change, genetic engineering and nuclear energy.
Conferences
1991 South African National Conference on Environment and Development
1992 "What does it mean to be green in South Africa"
Legal cases
1992: Thor Chemicals is exposed, resulting in various court applications that end up testing the culpability of global corporates.
15 September 2003 Earthlife Africa - Cape Town launched a High Court application in Cape Town, seeking to review and set aside the environmental impact assessment (EIA) authorisation granted to Eskom to build a demonstration module Pebble Bed Modular Reactor (PBMR) at Koeberg, Cape Town.
2004 The clean-up operation for thousands of tons of Thor Chemicals mercury waste begins after an agreement with the British-owned chemical company to pay R24-million towards disposal costs.
2005 Earthlife Africa (Cape Town Branch) v Eskom Holdings Ltd, access to information: Necsa provided information upon request regarding former employees, in terms of the Promotion of Access to Information Act (PAIA), which was promulgated in 2000. The request was received from the South African History Archive (SAHA), who acted on behalf of Earthlife Africa (ELA), who in turn acted on behalf of the former employees.
Earthlife Africa (Cape Town) v Director General Department of Environmental Affairs and Tourism and Another (7653/03) [2005] ZAWCHC 7; 2005 (3) SA 156 (C) [2006] 2 All SA 44 (C) (26 January 2005): The decision of the director-general of the Department of Environmental Affairs & Tourism, made on 25 June 2003 in terms of s 22(3) of the Environment Conservation Act 73 of 1989, authorising Eskom Holdings' construction of a pebble bed modular reactor at Koeberg, was reviewed and set aside. The matter was remitted to the director-general with directions to afford the applicant and other interested parties an opportunity of addressing further written submissions to him along the lines set out in the judgement, within such period as he might determine, and to consider such submissions before making a decision anew on Eskom's application. Both the director-general and Eskom were ordered jointly and severally to pay the applicant's costs, including the costs of two counsel.
In 2015, SAFCEI and Earthlife Africa Jhb (ELA) joined hands in taking legal action against the government's proposed nuclear deal. SAFCEI and ELA (Jhb) filed court papers against the Department of Energy (DoE), National Parliament, NERSA and President Zuma, challenging various aspects of the nuclear procurement process.
2017: Landmark victory (22–24 February 2017) in which the government was ordered to pay punitive costs in the above cases.
2017 Earthlife Africa Johannesburg declared a major victory on Wednesday 8 March after winning South Africa's first climate change case and forcing the government to reassess the impact of a coal power plant.
2017 High Court asked once again to halt nuclear deal
See also
Conservation movement
Ecology
Ecology movement
Environmentalism
Environmental movement
Environmental protection
List of environmental organizations
Natural resource
Renewable resource
Sustainable development
Sustainability
References
Environmental organisations based in South Africa
Environmental protests
Nuclear energy in South Africa
Anti-nuclear organizations
Civic and political organisations based in Johannesburg | Earthlife Africa | Engineering | 1,456 |
1,093,310 | https://en.wikipedia.org/wiki/Fischer%20indole%20synthesis | The Fischer indole synthesis is a chemical reaction that produces the aromatic heterocycle indole from a (substituted) phenylhydrazine and an aldehyde or ketone under acidic conditions. The reaction was discovered in 1883 by Emil Fischer. Today antimigraine drugs of the triptan class are often synthesized by this method.
This reaction can be catalyzed by Brønsted acids such as HCl, H2SO4, polyphosphoric acid and p-toluenesulfonic acid or Lewis acids such as boron trifluoride, zinc chloride, and aluminium chloride.
Several reviews have been published.
Reaction mechanism
The reaction of a (substituted) phenylhydrazine with a carbonyl (aldehyde or ketone) initially forms a phenylhydrazone which isomerizes to the respective enamine (or 'ene-hydrazine'). After protonation, a cyclic [3,3]-sigmatropic rearrangement occurs producing a diimine. The resulting diimine forms a cyclic aminoacetal (or aminal), which under acid catalysis eliminates NH3, resulting in the energetically favorable aromatic indole.
Isotopic labelling studies show that the aryl nitrogen (N1) of the starting phenylhydrazine is incorporated into the resulting indole.
Buchwald modification
Via a palladium-catalyzed reaction, the Fischer indole synthesis can be effected by cross-coupling aryl bromides and hydrazones. This result supports the previously proposed intermediacy of hydrazones in the classical Fischer indole synthesis. These N-arylhydrazones undergo exchange with other ketones, expanding the scope of this method.
Application
A variant of the Fischer indolization reaction, termed the interrupted Fischer indolization by Garg and coworkers, has been used in the total synthesis of akuammiline natural products. The method has also been used in medicinal chemistry.
Indometacin preparation.
Triptan synthesis
Iprindole synthesis (phenylhydrazine + suberone → 2,3-Cycloheptenoindole).
See also
Bartoli indole synthesis
Japp–Klingemann indole synthesis
Leimgruber–Batcho indole synthesis
Larock indole synthesis
Related reactions
Madelung synthesis
Reissert synthesis
Gassman synthesis
Nenitzescu synthesis
References
Indole forming reactions
Name reactions
Emil Fischer | Fischer indole synthesis | Chemistry | 515 |
23,774,498 | https://en.wikipedia.org/wiki/C5H10S | {{DISPLAYTITLE:C5H10S}}
The molecular formula C5H10S (molar mass: 102.20 g/mol, exact mass: 102.0503 u) may refer to:
Thiane
Prenylthiol, also known as 3-methyl-2-butene-1-thiol | C5H10S | Chemistry | 73 |
2,902,813 | https://en.wikipedia.org/wiki/59%20Arietis | 59 Arietis is a star in the northern constellation of Aries. 59 Arietis is the Flamsteed designation. It is dimly visible to the naked eye with an apparent visual magnitude of 5.91. Based upon an annual parallax shift of , it is located approximately distant from the Sun. The star is moving closer to the Earth with a heliocentric radial velocity of −4.7 km/s.
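Distance estimates of this kind follow from the measured parallax via the standard relation (a sketch; p denotes the parallax angle in arcseconds):

```latex
d\,[\mathrm{pc}] \approx \frac{1}{p\,[\mathrm{arcsec}]},
\qquad 1~\mathrm{pc} \approx 3.26~\text{light-years}.
```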
The spectrum of this object is that of a subgiant star with a stellar classification of G7 IV, which would suggest it has exhausted the supply of hydrogen at its core and has begun to evolve into a giant star. It is around 1.7 billion years old with a projected rotational velocity of 1.8 km/s. The star has nearly double the mass of the Sun and almost six times the Sun's radius. It is radiating 20 times the luminosity of the Sun from its photosphere at an effective temperature of 5,044 K.
References
External links
HR 995
Image 59 Arietis
G-type subgiants
Aries (constellation)
Durchmusterung objects
Arietis, 59
020618
015514
0995 | 59 Arietis | Astronomy | 248 |
3,770,438 | https://en.wikipedia.org/wiki/Association%20for%20Symbolic%20Logic | The Association for Symbolic Logic (ASL) is an international organization of specialists in mathematical logic and philosophical logic. The ASL was founded in 1936, and its first president was Curt John Ducasse. The current president of the ASL is Phokion Kolaitis.
Publications
The ASL publishes books and academic journals. Its three official journals are:
Journal of Symbolic Logic – publishes research in all areas of mathematical logic. Founded in 1936.
Bulletin of Symbolic Logic – publishes primarily expository articles and reviews. Founded in 1995.
Review of Symbolic Logic – publishes research relating to logic, philosophy, science, and their interactions. Founded in 2008.
In addition, the ASL has a sponsored journal:
Journal of Logic and Analysis publishes research on the interactions between mathematical logic and pure and applied analysis. Founded in 2009 as an open-access successor to the Springer journal Logic and Analysis.
The organization played a part in publishing the collected writings of Kurt Gödel.
Book series
Lecture Notes in Logic
Perspectives in Logic
Books
Mathematical Logic by Joseph R. Shoenfield
Gödel Lecture Series
The Gödel Lecture Series is a series of annual ASL lectures that traces back to 1990.
The Thirty-Fifth Gödel Lecture 2024
Thomas Scanlon, (Un)decidability in fields
The Thirty-Fourth Gödel Lecture 2023
Carl Jockusch, From algorithms which succeed on a large set of inputs to the Turing degrees as a metric space
The Thirty-Third Gödel Lecture 2022
Patricia Blanchette, Formalism in Logic
The Thirty-Second Gödel Lecture 2021
Matthew Foreman, Gödel Diffeomorphisms
The Thirty-First Gödel Lecture 2020
Elisabeth Bouscaren, The ubiquity of configurations in Model Theory
The Thirtieth Gödel Lecture 2019
Sam Buss, Totality, Provability and Feasibility
The Twenty-Ninth Annual Gödel Lecture 2018
Rod Downey, Algorithmic randomness
The Twenty-Eighth Annual Gödel Lecture 2017
Charles Parsons, Gödel and the universe of sets
The Twenty-Seventh Annual Gödel Lecture 2016
Stevo Todorcevic, Basis problems in set theory
The Twenty-Sixth Annual Gödel Lecture 2015
Alex Wilkie, Complex continuations of functions definable in Ran,exp with a diophantine application
The Twenty-Fifth Annual Gödel Lecture 2014
Julia F. Knight, Computable structure theory and formulas of special forms
The Twenty-Fourth Annual Gödel Lecture 2013
Kit Fine, Truthmaker semantics
The Twenty-Third Annual Gödel Lecture 2012
John Steel, The hereditarily ordinal definable sets in models of determinacy
The Twenty-Second Annual Gödel Lecture 2011
Anand Pillay, First order theories
The Twenty-First Annual Gödel Lecture 2010
Alexander Razborov, Complexity of propositional proofs
The Twentieth Annual Gödel Lecture 2009
Richard Shore, Reverse Mathematics: the Playground of Logic
The Nineteenth Annual Gödel Lecture 2008
W. Hugh Woodin, The Continuum Hypothesis, the Ω Conjecture, and the inner model problem of one supercompact cardinal
The Eighteenth Annual Gödel Lecture 2007
Ehud Hrushovski (a lecture on his work delivered in his absence by Thomas Scanlon)
The Seventeenth Annual Gödel Lecture 2006
Per Martin-Löf, The two layers of logic
The Sixteenth Annual Gödel Lecture 2005
Menachem Magidor, Skolem–Löwenheim theorems for generalized logics
The Fifteenth Annual Gödel Lecture 2004
Michael O. Rabin, Proofs persuasions and randomness in mathematics
The Fourteenth Annual Gödel Lecture 2003
Boris Zilber, Categoricity
The Thirteenth Annual Gödel Lecture 2002
Harvey Friedman, Issues in the foundations of mathematics
The Twelfth Annual Gödel Lecture 2001
Theodore A. Slaman, Recursion Theory
The Eleventh Annual Gödel Lecture 2000
Jon Barwise (Cancelled due to death of speaker)
The Tenth Annual Gödel Lecture 1999
Stephen A. Cook, Logic and computational complexity
The Ninth Annual Gödel Lecture 1998
Alexander S. Kechris, Current Trends in Descriptive Set Theory
The Eighth Annual Gödel Lecture 1997
Solomon Feferman, Occupations and Preoccupations with Gödel: His Works and the Work
The Seventh Annual Gödel Lecture 1996
Saharon Shelah, Categoricity without compactness
The Sixth Annual Gödel Lecture 1995
Leo Harrington, Gödel, Heidegger, and Direct Perception (or, Why I am a Recursion Theorist)
The Fifth Annual Gödel Lecture 1994
Donald A. Martin, L(R): A Survey
The Fourth Annual Gödel Lecture 1993
Angus Macintyre, Logic of Real and p-adic Analysis: Achievements and Challenges
The Third Annual Gödel Lecture 1992
Joseph R. Shoenfield, The Priority Method
The Second Annual Gödel Lecture 1991
Dana Scott, Will Logicians be Replaced by Machines?
The First Annual Gödel Lecture 1990
Ronald Jensen, Inner Models and Large Cardinals
Meetings
The ASL holds two main meetings every year, one in North America and one in Europe (the latter known as the Logic Colloquium). In addition, the ASL regularly holds joint meetings with both the American Mathematical Society ("AMS") and the American Philosophical Association ("APA"), and sponsors meetings in many different countries every year.
List of presidents
Awards
The association periodically presents a number of prizes and awards.
Karp Prize
The Karp Prize is awarded by the association every five years for an outstanding paper or book in the field of symbolic logic. It consists of a cash award and was established in 1973 in memory of Professor Carol Karp.
Sacks Prize
The Sacks Prize is awarded for the most outstanding doctoral dissertation in mathematical logic. It consists of a cash award and was established in 1999 to honor Professor Gerald Sacks of MIT and Harvard.
Recipients include:
Shoenfield Prize
Inaugurated in 2007, the Shoenfield Prize is awarded every three years in two categories, book and article, recognizing outstanding expository writing in the field of logic and honoring the name of Joseph R. Shoenfield.
Recipients include:
Gödel Lecture
Inaugurated in 1990, the Gödel Lecture is the honor of being the speaker at the association's annual meeting. The award is named after Kurt Gödel.
For the complete list of speakers, please see Gödel Lecture Series above.
References
External links
ASL website
Journal of Symbolic Logic
The Review of Symbolic Logic
The Journal of Logic and Analysis
Learned societies of the United States
Mathematical logic organizations
Philosophical logic
Philosophy organizations
Organizations established in 1936 | Association for Symbolic Logic | Mathematics | 1,324 |
43,688,666 | https://en.wikipedia.org/wiki/Kerry%20International%20Dark-Sky%20Reserve | The Kerry International Dark-Sky Reserve (KIDSR; ) is a dark-sky preserve in County Kerry, Ireland. It was designated Ireland's first International Dark Sky Reserve by the International Dark-Sky Association (IDA). Kerry International Dark-Sky Reserve was awarded the Gold Tier Award on 27 January 2014, by the IDA. It was the first Gold Tier Reserve in the northern hemisphere, and is one of only four Gold Tier Dark-Sky Reserves in the world.
Location
The Kerry International Dark-Sky Reserve is approximately in size and covers nine regions:
Kells/Foilmore
Cahersiveen
Valentia Island
Portmagee
The Glen
Ballinskelligs
Waterville
Dromid
Derrynane/Caherdaniel
The Kerry Dark-Sky Group office is situated in Dungeagan, Ballinskelligs, County Kerry, Ireland.
History
The Kerry Dark-Sky Group was created in 2013 after several outreach meetings with local community groups in the Reserve, at the request of attendees at the gatherings. The purpose of the Kerry Dark-Sky Group is to promote astro-tourism in the Reserve via community projects, local outreach, and events.
Collaborations
On 5 August 2014 the Kerry International Dark-Sky Reserve officially twinned with the Aoraki Mackenzie Gold Tier Reserve in New Zealand.
References
External links
Official website
Sky’s the limit for Kerry’s astro-tourism
http://www.eturbonews.com/48918/alien-helping-lure-visitors-new-dark-sky-reserve-centre-ballinsk
http://www.rte.ie/news/player/2014/0127/20513706-south-west-kerry-named-as-europes-first-gold-tier-dark-sky-reserve/
Dark-sky preserves in the Republic of Ireland
Protected areas of County Kerry
International Dark Sky Reserves | Kerry International Dark-Sky Reserve | Astronomy | 385 |
77,904 | https://en.wikipedia.org/wiki/Television%20receive-only | Television receive-only (TVRO) is a term used chiefly in North America, South America to refer to the reception of satellite television from FSS-type satellites, generally on C-band analog; free-to-air and unconnected to a commercial DBS provider. TVRO was the main means of consumer satellite reception in the United States and Canada until the mid-1990s with the arrival of direct-broadcast satellite television services such as PrimeStar, USSB, Bell Satellite TV, DirecTV, Dish Network, Sky TV that transmit Ku signals. While these services are at least theoretically based on open standards (DVB-S, MPEG-2, MPEG-4), the majority of services are encrypted and require proprietary decoder hardware. TVRO systems relied on feeds being transmitted unencrypted and using open standards, which heavily contrasts to DBS systems in the region.
The term is also used to refer to receiving digital television "backhaul" feeds from FSS-type satellites. Reception of free-to-air satellite signals, generally Ku band Digital Video Broadcasting, for home viewing is still common in Europe and India, although the TVRO nomenclature was never used there. Free-to-air satellite signals are also very common in the People's Republic of China, as many rural locations cannot receive cable television and solely rely on satellites to deliver television signals to individual homes.
"Big ugly dish"
The term "BUD" (big ugly dish) is a colloquialism for C-Band satellite dishes used by TVRO systems. BUDs range from 4 to 16 feet in diameter, with the most popular large size being 10 feet. The name comes from their perception as an eyesore.
History
TVRO systems were originally marketed in the late 1970s. On October 18, 1979, the FCC began allowing people to have home satellite earth stations without a federal government license. The dishes were nearly in diameter, were remote controlled, and could only pick up HBO signals from one of two satellites.
Originally, the dishes used for satellite TV reception were 12 to 16 feet in diameter and made of solid fiberglass with an embedded metal coating, with later models being 4 to 10 feet and made of wire mesh and solid steel or aluminum. Early dishes cost more than $5,000, and sometimes as much as $10,000. The wider the dish was, the better its ability to provide adequate channel reception. Programming sent from ground stations was relayed from 18 satellites in geostationary orbit located 22,300 miles above the Earth. The dish had to be pointed directly at the satellite, with nothing blocking the signal. Weaker signals required larger dishes.
The dishes worked by receiving a low-power C-Band (3.7–4.2 GHz) frequency-modulated analog signal directly from the original distribution satellite – the same signal received by cable television headends. Because analog channels took up an entire transponder on the satellite, and each satellite had a fixed number of transponders, dishes were usually equipped with a modified polar mount and actuator to sweep the dish across the horizon to receive channels from multiple satellites. Switching between horizontal and vertical polarization was accomplished by a small electric servo motor that moved a probe inside the feedhorn throat at the command of the receiver (commonly called a "polarotor" setup). Higher-end receivers did this transparently, switching polarization and moving the dish automatically as the user changed channels.
By Spring of 1984, 18 C-Band satellites were in use for United States domestic communications, owned by five different companies.
The retail price for satellite receivers soon dropped, with some dishes costing as little as $2,000 by mid-1984. Dishes pointing to one satellite were even cheaper. Once a user paid for a dish, it was possible to receive even premium movie channels, raw feeds of news broadcasts or television stations from other areas. People in areas without local broadcast stations, and people in areas without cable television, could obtain good-quality reception with no monthly fees. Two open questions existed about this practice: whether the Communications Act of 1934 applied as a case of "unauthorized reception" by TVRO consumers; and to what extent it was legal for a service provider to encrypt their signals in an effort to prevent its reception.
The Cable Communications Policy Act of 1984 clarified all of these matters, making the following legal:
Reception of unencrypted satellite signals by a consumer
Reception of encrypted satellite signals by a consumer, when they have received authorization to legally decrypt it
This created a framework for the wide deployment of encryption on analog satellite signals. It further created a framework (and implicit mandate to provide) subscription services to TVRO consumers to allow legal decryption of those signals. HBO and Cinemax became the first two services to announce intent to encrypt their satellite feeds late in 1984. Others were strongly considering doing so as well. Where cable providers could compete with TVRO subscription options, it was thought this would provide sufficient incentive for competition.
HBO and Cinemax began encrypting their west coast feeds services with VideoCipher II 12 hours a day early in 1985, then did the same with their east coast feeds by August. The two networks began scrambling full time on January 15, 1986, which in many contemporary news reports was called "S-Day". This met with much protest from owners of big-dish systems, most of which had no other option at the time for receiving such channels. As required by the Cable Communications Policy act of 1984, HBO allowed dish owners to subscribe directly to their service, although at a price ($12.95 per month) higher than what cable subscribers were paying. This sentiment, and a collapse in the sales of TVRO equipment in early 1986, led to the April 1986 attack on HBO's transponder on Galaxy 1. Dish sales went down from 600,000 in 1985 to 350,000 in 1986, but pay television services were seeing dishes as something positive since some people would never have cable service, and the industry was starting to recover as a result. Through 1986, other channels that began full time encryption included Showtime and The Movie Channel on May 27, and CNN and CNN Headline News on July 1. Scrambling would also lead to the development of pay-per-view, as demonstrated by the early adoption of encryption by Request Television, and Viewer's Choice. Channels scrambled (encrypted) with VideoCipher and VideoCipher II could be defeated, and there was a black market for illegal descramblers.
By the end of 1987, 16 channels had employed encryption with another 7 planned in the first half of 1988. Packages that offered reduced rates for channels in bulk had begun to appear. At this time, the vast majority of analog satellite TV transponders still were not encrypted. On November 1, 1988, NBC began scrambling its C-band signal but left its Ku band signal unencrypted in order for affiliates to not lose viewers who could not see their advertising. Most of the two million satellite dish users in the United States still used C-band. ABC and CBS were considering scrambling, though CBS was reluctant due to the number of people unable to receive local network affiliates.
The growth of dishes receiving Ku band signals in North America was limited by the Challenger disaster, since 75 satellites were to be launched prior to the suspension of the Space Shuttle program. Only seven Ku band satellites were in use.
In addition to encryption, DBS services such as PrimeStar had been reducing the popularity of TVRO systems since the early 1990s. Signals from DBS satellites (operating in the more recent Ku band) are higher in both frequency and power (due to improvements in the solar panels and energy efficiency of modern satellites) and therefore require much smaller dishes than C-band, and the digital signals now used require far less signal strength at the receiver, resulting in a lower cost of entry. Each satellite can also carry up to 32 transponders in the Ku band, but only 24 in the C band, and several digital subchannels can be multiplexed (MCPC) or carried separately (SCPC) on a single transponder. General advances, such as HEMT, in noise reduction at microwave frequencies have also had an effect. However, a consequence of the higher frequency used for DBS services is rain fade, where viewers lose signal during a heavy downpour. C-band's immunity to rain fade is one of the major reasons the system is still used as the preferred method for television broadcasters to distribute their signal.
Popularity
TVRO systems were most popular in rural areas, beyond the broadcast range of most local television stations. The mountainous terrain of West Virginia, for example, makes reception of over-the-air television broadcasts (especially in the higher UHF frequencies) very difficult. From the late 1970s to the early 1990s DBS systems were not available, and cable television systems of the time only carried a few channels, resulting in a boom in sales of systems in the area, which led to the systems being termed the "West Virginia state flower". The term was regional, known mostly to those living in West Virginia and surrounding areas. Another reason was the large sizes of the dishes. The first satellite systems consisted of "BUDs" twelve to sixteen feet in diameter. They became much more popular in the mid-1980s when dish sizes decreased to about six to ten feet, but have always been a source of much consternation (even local zoning disputes) due to their perception as an eyesore. Neighborhoods with restrictive covenants usually still prohibit this size of dish, except where such restrictions are illegal. Support for systems dried up when strong encryption was introduced around 1994. Many long-disconnected dishes still occupy their original spots.
TVRO on ships
The term TVRO has been in use on ships since it was introduced in the 1980s. One early provider of equipment was SeaTel with its first generation of stabilized satellite antennas that was launched in 1985, the TV-at-Sea 8885 system. Until this time ships had not been able to receive television signals from satellites due to their rocking motion rendering reception impossible. The SeaTel antenna however was stabilized using electrically driven gyroscopes and thus made it possible to point to the satellite accurately enough, that is to within 2°, in order to receive a signal. The successful implementation of stabilised TVRO systems on ships immediately led to the development of maritime VSAT systems. The second generation of SeaTel TVRO systems came in 1994 and was the 2494 antenna, which got its gyro signal from the ship rather than its own gyros, improving accuracy and reducing maintenance.
As of 2010, SeaTel continues to dominate the market for stabilized TVRO systems and has, according to the Comsys group, a market share of 75%. Other established providers of stabilised satellite antennas are Intellian, KNS, Orbit, EPAK and KVH.
Current uses
Most of the free analogue channels that BUDs were built to receive have been taken offline. Due to the number of systems in existence, their lack of usefulness, and because many people consider them an eyesore, used BUDs can be purchased for very little money. As of 2009, there are 23 C-band satellites and 38 Ku/Ka band satellites.
There were over 150 channels available to people who wanted to receive subscription channels on a C-band dish via Motorola's 4DTV equipment, through two vendors: Satellite Receivers Ltd (SRL) and Skyvision. The 4DTV subscription system shut down on August 16, 2016.
The dishes themselves can be modified to receive free-to-air and DBS signals. The stock LNBs fitted to typical BUDs will usually need to be replaced with one of a lower noise temperature to receive digital broadcasts. With a suitable replacement LNB (provided there is no warping of the reflector) a BUD can be used to receive free-to-air (FTA) and DBS signals. Several companies market LNBs, LNBFs, and adaptor collars for big-dish systems. For receiving FTA signals the replacement should be capable of dual C/Ku reception with linear polarization, for DBS it will need a high band Ku LNBF using circular polarization. Older mesh dishes with perforations larger than 5mm are inefficient at Ku frequencies, because the smaller wavelengths will pass through them. Solid fiberglass dishes usually contain metal mesh with large-diameter perforations as a reflector and are usually unsuitable for anything other than C band.
Large dishes have higher antenna gain, which can be an advantage when used with DBS signals such as Dish Network and DirecTV, virtually eliminating rain fade. Restored dishes fitted with block upconverters can be used to transmit signals as well. BUDs can still be seen at antenna farms for these reasons, so that video and backhauls can be sent to and from the television network with which a station is affiliated, without interruption due to inclement weather. BUDs are also still useful for picking-up weak signals at the edge of a satellite's broadcast "footprint" – the area at which a particular satellite is aimed. For this reason, BUDs are helpful in places like Alaska, or parts of the Caribbean.
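The gain advantage of a large reflector follows from the standard aperture-gain relation for a parabolic antenna, where D is the dish diameter, λ the wavelength, and η the aperture efficiency (typically around 0.5–0.7; these figures are generic engineering assumptions, not values from this article):

```latex
G = \eta \left( \frac{\pi D}{\lambda} \right)^{2}
```

Since G grows with the square of D, doubling the dish diameter adds roughly 6 dB of gain, which is why oversized dishes ride out rain fade and weak edge-of-footprint signals.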
Modern equivalents
Large parabolic antennas similar to BUDs are still in production. New dishes differ in their construction and materials. New mesh dishes have much smaller perforations, and solid dishes are now made with steel instead of fiberglass. New systems usually include a universal LNB that is switched electronically between horizontal and vertical polarization, obviating the need for a failure-prone polar rotor. As complete systems they have a much lower noise temperature than old BUDs, and are generally better for digital Ku reception. Prices have fallen dramatically since the first BUDs were produced for several thousand dollars, to as little as $200 for an 8 ft mesh BUD sold on eBay or Amazon as of 2014. Typical uses for these systems include receiving free-to-air and subscription services.
See also
Direct-broadcast satellite television
Polar mount
References
External links
rec.video.satellite.tvro FAQ
Part 1, Part 2, Part 3, Part 4
C/Ku Band Satellite Systems – Tuning, Tracking...
How to set up and align a BUD
North American seller of 8ft, 10ft, 12ft and 13.5ft mesh TVRO antennas
US satellite TV subscription provider for BUDs
Canadian satellite TV subscription provider for BUDs
Satellite Charts and Forum for C-Band Satellite users in North America
Satellite charts for C/Ku-Band Satellites world-wide
Television technology
Broadcast engineering
Radio frequency antenna types
Antennas (radio)
Satellite television
Television terminology | Television receive-only | Technology,Engineering | 3,005 |
56,990 | https://en.wikipedia.org/wiki/Tower%20of%20Hanoi | The Tower of Hanoi (also called The problem of Benares Temple, Tower of Brahma or Lucas' Tower, and sometimes pluralized as Towers, or simply pyramid puzzle) is a mathematical game or puzzle consisting of three rods and a number of disks of various diameters, which can slide onto any rod. The puzzle begins with the disks stacked on one rod in order of decreasing size, the smallest at the top, thus approximating a conical shape. The objective of the puzzle is to move the entire stack to one of the other rods, obeying the following rules:
Only one disk may be moved at a time.
Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod.
No disk may be placed on top of a disk that is smaller than it.
With three disks, the puzzle can be solved in seven moves. The minimal number of moves required to solve a Tower of Hanoi puzzle is 2^n − 1, where n is the number of disks.
Origins
The puzzle was invented by the French mathematician Édouard Lucas, first presented in 1883 as a game discovered by "N. Claus (de Siam)" (an anagram of "Lucas d'Amiens"), and later published as a booklet in 1889 and in a posthumously-published volume of Lucas' Récréations mathématiques. Accompanying the game was an instruction booklet, describing the game's purported origins in Tonkin, and claiming that according to legend Brahmins at a temple in Benares have been carrying out the movement of the "Sacred Tower of Brahma", consisting of sixty-four golden disks, according to the same rules as in the game, and that the completion of the tower would lead to the end of the world. Numerous variations on this legend regarding the ancient and mystical nature of the puzzle popped up almost immediately.
If the legend were true, and if the priests were able to move disks at a rate of one per second, using the smallest number of moves, it would take them 2^64 − 1 seconds or roughly 585 billion years to finish, which is about 42 times the estimated current age of the universe.
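A quick check of the arithmetic, taking one year as roughly 3.156 × 10^7 seconds:

```latex
\frac{2^{64} - 1}{3.156 \times 10^{7}\ \mathrm{s/yr}}
  \approx \frac{1.845 \times 10^{19}\ \mathrm{s}}{3.156 \times 10^{7}\ \mathrm{s/yr}}
  \approx 5.85 \times 10^{11}\ \mathrm{yr} = 585\ \text{billion years}.
```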
There are many variations on this legend. For instance, in some back stories, the temple is a monastery, and the priests are monks. The temple or monastery may be in various locales including Hanoi, and may be associated with any religion. In some versions, other elements are introduced, such as the fact that the tower was created at the beginning of the world, or that the priests or monks may make only one move per day.
Solution
The puzzle can be played with any number of disks, although many toy versions have around 7 to 9 of them. The minimal number of moves required to solve a Tower of Hanoi puzzle with n disks is 2^n − 1.
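This count follows from the recursive structure described later in this section: moving n disks requires moving the top n − 1 disks twice plus one move of the largest disk, giving the recurrence

```latex
T(n) = 2\,T(n-1) + 1, \qquad T(0) = 0
\;\Longrightarrow\; T(n) = 2^{n} - 1 .
```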
Iterative solution
A simple solution for the toy puzzle is to alternate moves between the smallest piece and a non-smallest piece. When moving the smallest piece, always move it to the next position in the same direction (to the right if the starting number of pieces is even, to the left if the starting number of pieces is odd). If there is no tower position in the chosen direction, move the piece to the opposite end, but then continue to move in the correct direction. For example, if you started with three pieces, you would move the smallest piece to the opposite end, then continue in the left direction after that. When the turn is to move the non-smallest piece, there is only one legal move. Doing this will complete the puzzle in the fewest moves.
Simpler statement of iterative solution
The iterative solution is equivalent to repeated execution of the following sequence of steps until the goal has been achieved:
Move one disk from peg A to peg B or vice versa, whichever move is legal.
Move one disk from peg A to peg C or vice versa, whichever move is legal.
Move one disk from peg B to peg C or vice versa, whichever move is legal.
Following this approach, the stack will end up on peg B if the number of disks is odd and peg C if it is even.
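The three-step cycle above can be written out as a short program. The following is a minimal Python sketch (the peg names, the stack representation, and the helper names are illustrative choices, not part of any standard implementation):

def hanoi_iterative(n):
    # Pegs are stacks (lists); disk 1 is the smallest, disk n the largest.
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    moves = []
    def legal_move(x, y):
        # Move a disk between pegs x and y, in whichever direction is legal.
        if pegs[x] and (not pegs[y] or pegs[x][-1] < pegs[y][-1]):
            pegs[y].append(pegs[x].pop())
            moves.append((x, y))
        else:
            pegs[x].append(pegs[y].pop())
            moves.append((y, x))
    pairs = [("A", "B"), ("A", "C"), ("B", "C")]
    for i in range(2**n - 1):  # the minimal number of moves
        legal_move(*pairs[i % 3])
    return moves, pegs

moves, pegs = hanoi_iterative(3)
assert len(moves) == 7 and pegs["B"] == [3, 2, 1]  # odd n: tower ends on peg B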
Recursive solution
The key to solving a problem recursively is to recognize that it can be broken down into a collection of smaller sub-problems, each of which can be solved by the same general procedure that we are seeking, with the total solution then found in some simple way from those sub-problems' solutions. Each of these created sub-problems being "smaller" guarantees that the base case(s) will eventually be reached. For the Towers of Hanoi:
label the pegs A, B, C,
let n be the total number of disks, and
number the disks from 1 (smallest, topmost) to n (largest, bottom-most).
Assume all n disks are distributed in valid arrangements among the pegs, that there are m top disks on a source peg, and that all the rest of the disks are larger than the top m, so they can be safely ignored. To move m disks from a source peg to a target peg using a spare peg, without violating the rules:
Move m − 1 disks from the source to the spare peg, by the same general solving procedure. Rules are not violated, by assumption. This leaves the disk m as a top disk on the source peg.
Move the disk m from the source to the target peg, which is guaranteed to be a valid move, by the assumptions — a simple step.
Move the m − 1 disks that we have just placed on the spare, from the spare to the target peg by the same general solving procedure, so that they are placed on top of the disk m without violating the rules.
The base case is to move 0 disks (in steps 1 and 3), that is, do nothing—which does not violate the rules.
The full Tower of Hanoi solution then moves n disks from the source peg A to the target peg C, using B as the spare peg.
This approach can be given a rigorous mathematical proof with mathematical induction and is often used as an example of recursion when teaching programming.
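A direct rendering of the three recursive steps above in Python might look as follows (the generator interface and argument names are illustrative choices):

def hanoi(n, source="A", target="C", spare="B"):
    # Base case: moving zero disks requires doing nothing.
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)  # step 1
    yield (n, source, target)                       # step 2: move disk n
    yield from hanoi(n - 1, spare, target, source)  # step 3

assert len(list(hanoi(3))) == 2**3 - 1  # seven moves for three disks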
Logical analysis of the recursive solution
As in many mathematical puzzles, finding a solution is made easier by solving a slightly more general problem: how to move a tower of h (height) disks from a starting peg f = A (from) onto a destination peg t = C (to), B being the remaining third peg and assuming t ≠ f. First, observe that the problem is symmetric for permutations of the names of the pegs (symmetric group S3). If a solution is known moving from peg A to peg C, then, by renaming the pegs, the same solution can be used for every other choice of starting and destination peg. If there is only one disk (or even none at all), the problem is trivial. If h = 1, then move the disk from peg A to peg C. If h > 1, then somewhere along the sequence of moves, the largest disk must be moved from peg A to another peg, preferably to peg C. The only situation that allows this move is when all smaller h − 1 disks are on peg B. Hence, first all h − 1 smaller disks must go from A to B. Then move the largest disk and finally move the h − 1 smaller disks from peg B to peg C. The presence of the largest disk does not impede any move of the h − 1 smaller disks and can be temporarily ignored. Now the problem is reduced to moving h − 1 disks from one peg to another one, first from A to B and subsequently from B to C, but the same method can be used both times by renaming the pegs. The same strategy can be used to reduce the h − 1 problem to h − 2, h − 3, and so on until only one disk is left. This is called recursion. This algorithm can be schematized as follows.
Identify the disks in order of increasing size by the natural numbers from 0 up to but not including h. Hence disk 0 is the smallest one, and disk h − 1 the largest one.
The following is a procedure for moving a tower of h disks from a peg A onto a peg C, with B being the remaining third peg:
If h > 1, then first use this procedure to move the h − 1 smaller disks from peg A to peg B.
Now the largest disk, i.e. disk h − 1, can be moved from peg A to peg C.
If h > 1, then again use this procedure to move the h − 1 smaller disks from peg B to peg C.
By mathematical induction, it is easily proven that the above procedure requires the minimum number of moves possible and that the produced solution is the only one with this minimal number of moves. Using recurrence relations, the exact number of moves that this solution requires can be calculated by: T(h) = 2^h − 1. This result is obtained by noting that steps 1 and 3 take T(h − 1) moves, and step 2 takes one move, giving T(h) = 2T(h − 1) + 1.
Non-recursive solution
The list of moves for a tower being carried from one peg onto another one, as produced by the recursive algorithm, has many regularities. When counting the moves starting from 1, the ordinal of the disk to be moved during move m is the number of times m can be divided by 2. Hence every odd move involves the smallest disk. It can also be observed that the smallest disk traverses the pegs f, t, r, f, t, r, etc. for odd height of the tower and traverses the pegs f, r, t, f, r, t, etc. for even height of the tower. This provides the following algorithm, which is easier to carry out by hand than the recursive algorithm.
In alternate moves:
Move the smallest disk to the peg it has not recently come from.
Move another disk legally (there will be only one possibility).
For the very first move, the smallest disk goes to peg t if h is odd and to peg r if h is even.
Also observe that:
Disks whose ordinals have even parity move in the same sense as the smallest disk.
Disks whose ordinals have odd parity move in opposite sense.
If h is even, the remaining third peg during successive moves is t, r, f, t, r, f, etc.
If h is odd, the remaining third peg during successive moves is r, t, f, r, t, f, etc.
With this knowledge, a set of disks in the middle of an optimal solution can be recovered with no more state information than the positions of each disk:
Call the moves detailed above a disk's "natural" move.
Examine the smallest top disk that is not disk 0, and note what its only (legal) move would be: if there is no such disk, then we are either at the first or last move.
If that move is the disk's "natural" move, then the disk has not been moved since the last disk 0 move, and that move should be taken.
If that move is not the disk's "natural" move, then move disk 0.
Binary solution
Disk positions may be determined more directly from the binary (base-2) representation of the move number (the initial state being move #0, with all digits 0, and the final state being the one with all digits 1), using the following rules:
There is one binary digit (bit) for each disk.
The most significant (leftmost) bit represents the largest disk. A value of 0 indicates that the largest disk is on the initial peg, while a 1 indicates that it is on the final peg (right peg if number of disks is odd and middle peg otherwise).
The bitstring is read from left to right, and each bit can be used to determine the location of the corresponding disk.
A bit with the same value as the previous one means that the corresponding disk is stacked on top of the previous disk on the same peg.
(That is to say: a straight sequence of 1s or 0s means that the corresponding disks are all on the same peg.)
A bit with a different value to the previous one means that the corresponding disk is one position to the left or right of the previous one. Whether it is left or right is determined by this rule:
Assume that the initial peg is on the left.
Also assume "wrapping"—so the right peg counts as one peg "left" of the left peg, and vice versa.
Let n be the number of greater disks that are located on the same peg as their first greater disk and add 1 if the largest disk is on the left peg. If n is even, the disk is located one peg to the right; if n is odd, the disk is located one peg to the left (in case of an even number of disks, and vice versa otherwise).
For example, in an 8-disk Hanoi:
Move 0 = 00000000.
The largest disk is 0, so it is on the left (initial) peg.
All other disks are 0 as well, so they are stacked on top of it. Hence all disks are on the initial peg.
Move 255 = 11111111.
The largest disk is 1, so it is on the middle (final) peg.
All other disks are 1 as well, so they are stacked on top of it. Hence all disks are on the final peg and the puzzle is complete.
Move 216 = 11011000.
The largest disk is 1, so it is on the middle (final) peg.
Disk two is also 1, so it is stacked on top of it, on the middle peg.
Disk three is 0, so it is on another peg. Since n is odd (n = 1), it is one peg to the left, i.e. on the left peg.
Disk four is 1, so it is on another peg. Since n is odd (n = 1), it is one peg to the left, i.e. on the right peg.
Disk five is also 1, so it is stacked on top of it, on the right peg.
Disk six is 0, so it is on another peg. Since n is even (n = 2), the disk is one peg to the right, i.e. on the left peg.
Disks seven and eight are also 0, so they are stacked on top of it, on the left peg.
The source and destination pegs for the mth move can also be found elegantly from the binary representation of m using bitwise operations. To use the syntax of the C programming language, move m is from peg (m & m - 1) % 3 to peg ((m | m - 1) + 1) % 3, where the disks begin on peg 0 and finish on peg 1 or 2 according to whether the number of disks is even or odd. Another formulation is from peg (m - (m & -m)) % 3 to peg (m + (m & -m)) % 3.
Furthermore, the disk to be moved is determined by the number of times the move count (m) can be divided by 2 (i.e. the number of zero bits at the right), counting the first move as 1 and identifying the disks by the numbers 0, 1, 2, etc. in order of increasing size. This permits a very fast non-recursive computer implementation to find the positions of the disks after m moves without reference to any previous move or distribution of disks.
The count-trailing-zeros operation, which counts the number of consecutive zeros at the end of a binary number, gives a simple solution to the problem: the disks are numbered from zero, and at move m, the disk whose number equals the count of trailing zeros of m is moved the minimal possible distance to the right (circling back around to the left as needed).
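The bitwise formulas quoted above translate directly into Python. The sketch below (helper names are illustrative) replays a full solution, checking at each step that the disk given by the trailing-zero count really is on the computed source peg:

def move(m):
    # Source and destination pegs of move m (1-indexed), pegs numbered 0-2.
    return (m & (m - 1)) % 3, ((m | (m - 1)) + 1) % 3

def disk(m):
    # Number of trailing zeros of m: the disk moved (smallest disk is 0).
    return (m & -m).bit_length() - 1

n = 5                       # an odd number of disks: the tower ends on peg 2
position = [0] * n          # position[d] = current peg of disk d
for m in range(1, 2**n):
    src, dst = move(m)
    assert position[disk(m)] == src
    position[disk(m)] = dst
assert all(p == 2 for p in position)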
Gray-code solution
The binary numeral system of Gray codes gives an alternative way of solving the puzzle. In the Gray system, numbers are expressed in a binary combination of 0s and 1s, but rather than being a standard positional numeral system, the Gray code operates on the premise that each value differs from its predecessor by exactly one bit.
If one counts in Gray code of a bit size equal to the number of disks in a particular Tower of Hanoi, begins at zero and counts up, then the bit changed each move corresponds to the disk to move, where the least-significant bit is the smallest disk, and the most-significant bit is the largest.
Counting moves from 1 and identifying the disks by numbers starting from 0 in order of increasing size, the ordinal of the disk to be moved during move m is the number of times m can be divided by 2.
This technique identifies which disk to move, but not where to move it to. For the smallest disk, there are always two possibilities. For the other disks there is always one possibility, except when all disks are on the same peg, but in that case either it is the smallest disk that must be moved or the objective has already been achieved. Luckily, there is a rule that does say where to move the smallest disk to. Let f be the starting peg, t the destination peg, and r the remaining third peg. If the number of disks is odd, the smallest disk cycles along the pegs in the order f → t → r → f → t → r, etc. If the number of disks is even, this must be reversed: f → r → t → f → r → t, etc.
The position of the bit change in the Gray code solution gives the size of the disk moved at each step: 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... , a sequence also known as the ruler function, or one more than the power of 2 within the move number. In the Wolfram Language, IntegerExponent[Range[2^8 - 1], 2] + 1 gives moves for the 8-disk puzzle.
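An equivalent one-liner in Python, using the fact that the lowest set bit of m encodes the ruler value, could be:

ruler = [(m & -m).bit_length() for m in range(1, 2**8)]  # 8-disk move sizes
print(ruler[:15])  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]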
Graphical representation
The game can be represented by an undirected graph, the nodes representing distributions of disks and the edges representing moves. For one disk, the graph is a triangle:
The graph for two disks is three triangles connected to form the corners of a larger triangle.
A second letter is added to represent the larger disk. Clearly, it cannot initially be moved.
The topmost small triangle now represents the one-move possibilities with two disks:
The nodes at the vertices of the outermost triangle represent distributions with all disks on the same peg.
For h + 1 disks, take the graph of h disks and replace each small triangle with the graph for two disks.
For three disks the graph is:
call the pegs a, b, and c
list disk positions from left to right in order of increasing size
The sides of the outermost triangle represent the shortest ways of moving a tower from one peg to another one. The edge in the middle of the sides of the largest triangle represents a move of the largest disk. The edge in the middle of the sides of each next smaller triangle represents a move of each next smaller disk. The sides of the smallest triangles represent moves of the smallest disk.
In general, for a puzzle with n disks, there are 3^n nodes in the graph; every node has three edges to other nodes, except the three corner nodes, which have two: it is always possible to move the smallest disk to one of the two other pegs, and it is possible to move one disk between those two pegs except in the situation where all disks are stacked on one peg. The corner nodes represent the three cases where all the disks are stacked on one peg. The diagram for n + 1 disks is obtained by taking three copies of the n-disk diagram—each one representing all the states and moves of the smaller disks for one particular position of the new largest disk—and joining them at the corners with three new edges, representing the only three opportunities to move the largest disk. The resulting figure thus has 3^(n+1) nodes and still has three corners remaining with only two edges.
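These counts can be confirmed by brute force. The Python sketch below (the state encoding and names are illustrative choices) enumerates every state and every legal move for three disks:

from itertools import product

def hanoi_graph(n):
    # A state records the peg (0-2) of each disk; disk 0 is the smallest.
    nodes = list(product(range(3), repeat=n))
    edges = set()
    for state in nodes:
        for peg in range(3):
            # The movable disk on a peg is the smallest disk resting there.
            movable = next((d for d in range(n) if state[d] == peg), None)
            if movable is None:
                continue
            for dest in range(3):
                # Legal if dest differs and holds no smaller disk.
                if dest != peg and all(state[d] != dest for d in range(movable)):
                    new = list(state)
                    new[movable] = dest
                    edges.add(frozenset((state, tuple(new))))
    return nodes, edges

nodes, edges = hanoi_graph(3)
assert len(nodes) == 3**3             # 27 nodes
assert len(edges) == (3**4 - 3) // 2  # degree 3 everywhere except 3 corners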
As more disks are added, the graph representation of the game will resemble a fractal figure, the Sierpiński triangle. It is clear that the great majority of positions in the puzzle will never be reached when using the shortest possible solution; indeed, if the priests of the legend are using the longest possible solution (without re-visiting any position), it will take them 3^64 − 1 moves, or more than 10^23 years.
The longest non-repetitive way for three disks can be visualized by erasing the unused edges:
Incidentally, this longest non-repetitive path can be obtained by forbidding all moves from a to c.
The Hamiltonian cycle for three disks is:
The graphs clearly show that:
From every arbitrary distribution of disks, there is exactly one shortest way to move all disks onto one of the three pegs.
Between every pair of arbitrary distributions of disks there are one or two different shortest paths.
From every arbitrary distribution of disks, there are one or two different longest non-self-crossing paths to move all disks to one of the three pegs.
Between every pair of arbitrary distributions of disks there are one or two different longest non-self-crossing paths.
Let Nh be the number of non-self-crossing paths for moving a tower of h disks from one peg to another one. Then:
N1 = 2
Nh+1 = (Nh)^2 + (Nh)^3
This gives Nh to be 2, 12, 1872, 6563711232, ...
Variations
Linear Hanoi
If all moves must be between adjacent pegs (i.e. given pegs A, B, C, one cannot move directly between pegs A and C), then moving a stack of n disks from peg A to peg C takes 3^n − 1 moves. The solution uses all 3^n valid positions, always taking the unique move that does not undo the previous move. The position with all disks at peg B is reached halfway, i.e. after (3^n − 1) / 2 moves.
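One way to realize this variant is with two mutually supporting recursive cases, as in the following Python sketch (peg numbering 0, 1, 2 from left to right is an illustrative choice):

def linear_hanoi(n, src=0, dst=2):
    # Yields (disk, from_peg, to_peg); only adjacent-peg moves are produced.
    if n == 0:
        return
    if abs(src - dst) == 1:                     # neighbouring pegs
        other = 3 - src - dst
        yield from linear_hanoi(n - 1, src, other)
        yield (n, src, dst)
        yield from linear_hanoi(n - 1, other, dst)
    else:                                       # end to end, via the middle
        yield from linear_hanoi(n - 1, src, dst)
        yield (n, src, 1)
        yield from linear_hanoi(n - 1, dst, src)
        yield (n, 1, dst)
        yield from linear_hanoi(n - 1, src, dst)

assert len(list(linear_hanoi(3))) == 3**3 - 1   # 26 moves for three disks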
Cyclic Hanoi
In Cyclic Hanoi, we are given three pegs (A, B, C), which are arranged as a circle with the clockwise and the counterclockwise directions being defined as A – B – C – A and A – C – B – A, respectively. The moving direction of the disk must be clockwise. It suffices to represent the sequence of disks to be moved. The solution can be found using two mutually recursive procedures:
To move n disks counterclockwise to the neighbouring target peg:
move n − 1 disks counterclockwise to the target peg
move disk #n one step clockwise
move n − 1 disks clockwise to the start peg
move disk #n one step clockwise
move n − 1 disks counterclockwise to the target peg
To move n disks clockwise to the neighbouring target peg:
move n − 1 disks counterclockwise to a spare peg
move disk #n one step clockwise
move n − 1 disks counterclockwise to the target peg
Let C(n) and A(n) represent the sequences for moving n disks clockwise and counterclockwise, respectively. Writing a bare n for the single clockwise step of disk #n, the two procedures above give the formulas: C(n) = A(n − 1) n A(n − 1) and A(n) = A(n − 1) n C(n − 1) n A(n − 1), with C(0) = A(0) equal to the empty sequence.
The solution for the Cyclic Hanoi has some interesting properties:
The move-patterns of transferring a tower of disks from a peg to another peg are symmetric with respect to the center points.
The smallest disk is the first and last disk to move.
Groups of the smallest disk moves alternate with single moves of other disks.
The numbers of disk moves specified by C(n) and A(n) are minimal.
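The two mutually recursive procedures translate directly into code. This Python sketch returns the sequence of disk numbers moved and checks two of the properties above:

def clockwise(n):
    # C(n) = A(n-1) n A(n-1): disks moved to shift n disks one peg clockwise.
    if n == 0:
        return []
    return counterclockwise(n - 1) + [n] + counterclockwise(n - 1)

def counterclockwise(n):
    # A(n) = A(n-1) n C(n-1) n A(n-1).
    if n == 0:
        return []
    a = counterclockwise(n - 1)
    return a + [n] + clockwise(n - 1) + [n] + a

seq = counterclockwise(4)
assert seq == seq[::-1]         # symmetric with respect to the center point
assert seq[0] == seq[-1] == 1   # the smallest disk moves first and last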
With four pegs and beyond
Although the three-peg version has a simple recursive solution that has long been known, the optimal solution for the Tower of Hanoi problem with four pegs (called Reve's puzzle) was not verified until 2014, by Bousch.
For four or more pegs, however, the Frame–Stewart algorithm has been known since 1941, albeit without proof of optimality.
For the formal derivation of the exact number of minimal moves required to solve the problem by applying the Frame–Stewart algorithm (and other equivalent methods), see the following paper.
For other variants of the four-peg Tower of Hanoi problem, see Paul Stockmeyer's survey paper.
The so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.
Frame–Stewart algorithm
The Frame–Stewart algorithm is described below:
Let n be the number of disks.
Let r be the number of pegs.
Define T(n, r) to be the minimum number of moves required to transfer n disks using r pegs.
The algorithm can be described recursively:
For some k, 1 ≤ k < n, transfer the top k disks to a single peg other than the start or destination pegs, taking T(k, r) moves.
Without disturbing the peg that now contains the top k disks, transfer the remaining n − k disks to the destination peg, using only the remaining r − 1 pegs, taking T(n − k, r − 1) moves.
Finally, transfer the top k disks to the destination peg, taking T(k, r) moves.
The entire process takes 2T(k, r) + T(n − k, r − 1) moves. Therefore, the count k should be picked for which this quantity is minimum. In the 4-peg case, the optimal k equals n − round(√(2n + 1)) + 1, where round(·) is the nearest integer function. For example, in the UPenn CIS 194 course on Haskell, the first assignment page lists the optimal solution for the 15-disk and 4-peg case as 129 steps, which is obtained for the above value of k.
This algorithm is presumed to be optimal for any number of pegs; its number of moves is 2^Θ(n^(1/(r−2))) (for fixed r).
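A memoized Python sketch of the Frame–Stewart recurrence (using the classical 2^n − 1 closed form for the three-peg base case) reproduces the value quoted above:

from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, r):
    # Presumed-optimal move count for n disks on r >= 3 pegs.
    if n <= 1:
        return n
    if r == 3:
        return 2**n - 1
    return min(2 * frame_stewart(k, r) + frame_stewart(n - k, r - 1)
               for k in range(1, n))

assert frame_stewart(15, 4) == 129   # the 15-disk, 4-peg value cited above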
General shortest paths and the number 466/885
A curious generalization of the original goal of the puzzle is to start from a given configuration of the disks, in which the disks are not necessarily all on the same peg, and to arrive in a minimal number of moves at another given configuration. In general, it can be quite difficult to compute a shortest sequence of moves to solve this problem. A solution was proposed by Andreas Hinz and is based on the observation that in a shortest sequence of moves, the largest disk that needs to be moved (obviously one may ignore all of the largest disks that will occupy the same peg in both the initial and final configurations) will move either exactly once or exactly twice.
The mathematics related to this generalized problem becomes even more interesting when one considers the average number of moves in a shortest sequence of moves between two initial and final disk configurations that are chosen at random. Hinz and Chan Tat-Hung independently discovered an exact formula for the average number of moves in an n-disk Tower.
For large enough n, only the first and second terms of that formula do not converge to zero, so we get the asymptotic expression (466/885) · 2^n as n → ∞. Thus intuitively, the fraction 466/885 can be interpreted as the ratio of the labor one has to perform when going from a randomly chosen configuration to another randomly chosen configuration, relative to the difficulty of having to cross the "most difficult" path of length 2^n − 1, which involves moving all the disks from one peg to another. An alternative explanation for the appearance of the constant 466/885, as well as a new and somewhat improved algorithm for computing the shortest path, was given by Romik.
Magnetic Hanoi
In Magnetic Tower of Hanoi, each disk has two distinct sides North and South (typically colored "red" and "blue").
Disks must not be placed with like poles together—magnets in each disk prevent this illegal move.
Also, each disk must be flipped as it is moved.
Bicolor Towers of Hanoi
This variation of the famous Tower of Hanoi puzzle was offered to grade 3–6 students at 2ème Championnat de France des Jeux Mathématiques et Logiques held in July 1988.
The rules of the puzzle are essentially the same: disks are transferred between pegs one at a time. At no time may a bigger disk be placed on top of a smaller one. The difference is that now for every size there are two disks: one black and one white. Also, there are now two towers of disks of alternating colors. The goal of the puzzle is to make the towers monochrome (same color). The biggest disks at the bottom of the towers are assumed to swap positions.
Tower of Hanoy
A variation of the puzzle has been adapted as a solitaire game with nine playing cards under the name Tower of Hanoy. It is not known whether the altered spelling of the original name is deliberate or accidental.
Applications
The Tower of Hanoi is frequently used in psychological research on problem-solving. There also exists a variant of this task called Tower of London for neuropsychological diagnosis and treatment of disorders of executive function.
Zhang and Norman used several isomorphic (equivalent) representations of the game to study the impact of representational effect in task design. They demonstrated an impact on user performance by changing the way that the rules of the game are represented, using variations in the physical design of the game components. This knowledge has influenced the development of the TURF framework for the representation of human–computer interaction.
The Tower of Hanoi is also used as a backup rotation scheme when performing computer data backups where multiple tapes/media are involved.
As mentioned above, the Tower of Hanoi is popular for teaching recursive algorithms to beginning programming students. A pictorial version of this puzzle is programmed into the emacs editor, accessed by typing M-x hanoi. There is also a sample algorithm written in Prolog.
The Tower of Hanoi is also used as a test by neuropsychologists trying to evaluate frontal lobe deficits.
In 2010, researchers published the results of an experiment that found that the ant species Linepithema humile was able to solve the 3-disk version of the Tower of Hanoi problem through non-linear dynamics and pheromone signals.
In 2014, scientists synthesized multilayered palladium nanosheets with a Tower of Hanoi-like structure.
In popular culture
In the science fiction story "Now Inhale", by Eric Frank Russell, a human is held prisoner on a planet where the local custom is to make the prisoner play a game until it is won or lost before his execution. The protagonist knows that a rescue ship might take a year or more to arrive, so he chooses to play Towers of Hanoi with 64 disks. This story makes reference to the legend about the Buddhist monks playing the game until the end of the world.
In the 1966 Doctor Who story The Celestial Toymaker, the eponymous villain forces the Doctor to play a ten-piece, 1,023-move Tower of Hanoi game entitled The Trilogic Game with the pieces forming a pyramid shape when stacked.
In 2007, the concept of the Towers Of Hanoi problem was used in Professor Layton and the Diabolical Box in puzzles 6, 83, and 84, but the disks had been changed to pancakes. The puzzle was based around a dilemma where the chef of a restaurant had to move a pile of pancakes from one plate to the other with the basic principles of the original puzzle (i.e. three plates that the pancakes could be moved onto, not being able to put a larger pancake onto a smaller one, etc.)
In the 2011 film Rise of the Planet of the Apes, this puzzle, called in the film the "Lucas Tower", is used as a test to study the intelligence of apes.
The puzzle is featured regularly in adventure and puzzle games. Since it is easy to implement, and easily recognised, it is well suited to use as a puzzle in a larger graphical game (e.g. Star Wars: Knights of the Old Republic and Mass Effect). Some implementations use straight disks, but others disguise the puzzle in some other form. There is an arcade version by Sega.
A 15-disk version of the puzzle appears in the game Sunless Sea as a lock to a tomb. The player has the option to click through each move of the puzzle in order to solve it, but the game notes that it will take 32,767 moves to complete. If an especially dedicated player does click through to the end of the puzzle, it is revealed that completing the puzzle does not unlock the door.
The puzzle was first used as a Survivor challenge in Survivor Thailand in 2002, but rather than rings, the pieces were made to resemble a temple. Sook Jai threw the challenge to get rid of Jed even though Shii-Ann knew full well how to complete the puzzle.
The problem is featured as part of a reward challenge in a 2011 episode of the American version of the Survivor TV series. Both players (Ozzy Lusth and Benjamin "Coach" Wade) struggled to understand how to solve the puzzle and are aided by their fellow tribe members.
In Genshin Impact, this puzzle is shown in Faruzan's hangout quest, "Early Learning Mechanism", where she mentions seeing it as a mechanism and uses it to make a toy prototype for children. She calls it pagoda stacks.
See also
ABACABA pattern
Backup rotation scheme, a TOH application
Baguenaudier
Recursion (computer science)
"The Nine Billion Names of God", 1953 Arthur C. Clark short story with a similar premise to the game's framing story
Notes
External links
1883 introductions
1889 documents
19th-century inventions
French inventions
Mechanical puzzles
Mathematical puzzles
Articles with example C code
Articles with example Python (programming language) code
Divide-and-conquer algorithms | Tower of Hanoi | Mathematics | 6,847 |
14,483,784 | https://en.wikipedia.org/wiki/Eagle%20Cap%20Wilderness | Eagle Cap Wilderness is a wilderness area located in the Wallowa Mountains of northeastern Oregon (United States), within the Wallowa–Whitman National Forest. The wilderness was established in 1940. In 1964, it was included in the National Wilderness Preservation System. A boundary revision in 1972 and a subsequent addition enlarged the wilderness to its current size, making Eagle Cap by far Oregon's largest wilderness area.
Eagle Cap Wilderness is named after a peak in the Wallowa Mountains, which were once called the Eagle Mountains. Eagle Cap was at one time incorrectly thought to be the highest peak in the range.
Topography
The Eagle Cap Wilderness is characterized by high alpine lakes and meadows, bare granite peaks and ridges, and U-shaped glacial valleys. Thick timber is found in the lower valleys, with scattered alpine timber on the upper slopes. Elevations in the wilderness range from the lower valleys up to the summit of Sacajawea Peak, the highest point in the Wallowa Mountains, with some 30 other summits nearly as high. The wilderness is home to Legore Lake, the highest true lake in Oregon, as well as more than 60 named alpine lakes and tarns (12 of which are above 8,000 feet) and an extensive network of streams.
History
The Eagle Cap Wilderness and the surrounding country in the Wallowa–Whitman National Forest were first occupied by the ancestors of the Nez Perce Indian tribe around 1400 AD, and later by the Cayuse, the Shoshone, and the Bannocks. The wilderness was used as hunting grounds for bighorn sheep and deer and as a place to gather huckleberries. It was the summer home of the Joseph Band of the Nez Perce tribe. The first settlers moved into the Wallowa Valley in 1860. In 1930, the Eagle Cap was established as a primitive area, and in 1940 it earned wilderness designation.
Wildlife
Eagle Cap Wilderness is home to a variety of wildlife, including black bears, cougars, Rocky Mountain bighorn sheep, and mountain goats. In the summer white-tailed deer, mule deer, and Rocky Mountain elk roam the wilderness. Smaller mammals that inhabit the area year-round include the pika, pine martens, badgers, squirrels, and marmots. Birds include peregrine falcons, bald eagles, golden eagles, ferruginous hawks, and gray-crowned rosy finch. Trout can be found in many of the lakes and streams in the wilderness.
The Oregon State record golden trout was caught in the wilderness in 1987, by Douglas White. The lake where it was caught was not named.
Moose have recently returned to the wilderness; the herd now numbers about 40. There is possible evidence that grizzly bears and wolverines are returning as well. Sheep and cattle graze throughout Eagle Cap Wilderness, especially in the surroundings of Mount Nebo. At the beginning of the 1900s, sheep numbers exceeded the carrying capacity of the wilderness, but shortly after World War II, with the decline of the wool industry, sheep nearly disappeared from the Eagle Cap Wilderness.
Wolves
Wolves have returned to Eagle Cap Wilderness with no reported encounters with humans, although some losses of sheep and cattle have been attributed to wolves in the area. In 2012, a trail-cam recorded a female black wolf. Tracking of the wolf revealed at least three wolves in an area east of the Minam River. Further surveys by the end of 2012 showed a count of at least seven wolves in a pack within the Upper Minam River area. The Oregon Department of Fish and Wildlife reported a statewide total of six known packs with 46 wolves in 2013; the animals within the wilderness all belonged to a single pack, designated the Minam Pack.
The first grey wolf trapped and radio-collared by the ODFW in the area was a female, the twentieth radio-collared wolf in Oregon. Another female was later radio-collared; she dispersed from the Minam Pack and was found traveling with a male wolf within the Minam area and into the Keating Unit. Through 2019 the Minam Pack produced litters annually within the Eagle Cap Wilderness. One of the females from the Minam Pack formed a pair bond in 2014 with a male member of the Snake River Pack, forming a new pack within the Eagle Cap Wilderness, designated the Catherine Pack. The adult female was found dead in 2019, although the pack remained classified as a breeding pack through 2019.
Vegetation
Plant communities in the Eagle Cap Wilderness range from low elevation grasslands and ponderosa pine forest to alpine meadows. Engelmann spruce, larch, mountain hemlock, sub-alpine fir, and whitebark pine can be found in the higher elevations. Varieties of Indian paintbrush, sego lilies, elephanthead, larkspur, shooting star, and bluebells are abundant in the meadows. The wilderness does contain some small groves of old growth forest.
Recreation
As Oregon's largest wilderness area, Eagle Cap offers many recreational activities, including hiking, backpacking, horseback riding, hunting, fishing, camping, and wildlife watching. Winter brings backcountry skiing and snowshoeing opportunities. Several alpine huts and campsites are located throughout the McCully Basin, which are used as base camps in the winter for telemark skiing. There are 47 trailheads and an extensive network of trails in Eagle Cap, accessible from Wallowa, Union, and Baker Counties and leading to all areas of the wilderness.
Wild and Scenic Rivers
Four designated Wild and Scenic Rivers originate in Eagle Cap Wilderness—the Lostine, Eagle Creek, Minam, and Imnaha.
Lostine River
The Lostine is designated Wild and Scenic from its headwaters in the wilderness to the Wallowa–Whitman National Forest boundary. Established in 1988, the designation includes segments classified as "wild" and as "recreational." A small portion of the river is on private property.
Eagle Creek
Eagle Creek is designated Wild and Scenic from its outlet at Eagle Lake in the wilderness to the Wallowa–Whitman National Forest boundary at Skull Creek. The 1988 designation includes segments classified as "wild," "scenic," and "recreational."
Minam
The Minam River is designated Wild and Scenic from its headwaters at the south end of Minam Lake to the wilderness boundary, one-half mile downstream from Cougar Creek. In 1988, the entire designated segment was classified "wild."
Imnaha
The Imnaha River is designated Wild and Scenic from its headwaters. The designation comprises the main stem from the confluence of the North and South Forks of the Imnaha River to its mouth, and the South Fork from its headwaters to the confluence with the main stem. The 1988 designation includes "wild," "scenic," and "recreational" segments, though only a portion of the Wild and Scenic Imnaha is located within Eagle Cap Wilderness.
Lakes
See also
List of U.S. Wilderness Areas
List of old growth forests
References
External links
Eagle Cap Wilderness - Wallowa–Whitman National Forest
Eagle Cap Wilderness - Wilderness.net
EagleCapWilderness.com
Eagle Cap Wilderness - JosephOregon.com
Protected areas of Baker County, Oregon
IUCN Category Ib
Protected areas of Union County, Oregon
Protected areas of Wallowa County, Oregon
Wilderness areas of Oregon
Old-growth forests
1940 establishments in Oregon | Eagle Cap Wilderness | Biology | 1,453 |
69,461,111 | https://en.wikipedia.org/wiki/TNet | TNet is a secure top-secret-level intranet system in the White House, notably used to record information about telephone and video calls between the President of the United States and other world leaders. TNet is connected to Joint Worldwide Intelligence Communications System (JWICS), which is used more widely across different offices in the White House. Contained within TNet is an even more secure system known as NSC Intelligence Collaboration Environment (NICE).
NSC Intelligence Collaboration Environment
The NSC Intelligence Collaboration Environment (NICE) is a computer system operated by the United States National Security Council's Directorate for Intelligence Programs. A subdomain of TNet, it was created to enable staff to produce and store documents, such as presidential findings or decision memos, on top secret codeword activities. Due to the extreme sensitivity of the material held on it, only about 20 percent of NSC staff can reportedly access the system. The documents held on the system are tightly controlled and only specific named staff are able to access files.
The system became the subject of controversy during the Trump–Ukraine scandal, when a whistleblower complaint to the Inspector General of the Intelligence Community revealed that NICE had been used to store transcripts of calls between President Donald Trump and foreign leaders, apparently to restrict access to them. The system was reportedly used for this purpose from 2017, after leaks of conversations with foreign leaders. It was said to have been upgraded in the spring of 2018 to log who had accessed particular files, as a deterrent against possible leaks.
See also
Classified website
Intellipedia
Joint Worldwide Intelligence Communications System (JWICS)
NIPRNet
RIPR
SIPRNet
References
Computer systems
United States National Security Council
Wide area networks
United States government secrecy | TNet | Technology,Engineering | 350 |
1,373,722 | https://en.wikipedia.org/wiki/Continuous%20automaton | A continuous automaton can be described as a cellular automaton extended so that the valid states a cell can take are not just discrete (for example, the states consist of integers between 0 and 3), but continuous, for example, the real number range [0,1]. The cells however remain discretely separated from each other. One example is the computational verb cellular network (CVCN), in which the states of cells lie in the interval [0,1].
Such automata can be used to model certain physical reactions more closely, such as diffusion. One such diffusion model could conceivably consist of a transition function based on the average values of the neighbourhood of the cell. Many implementations of Finite Element Analysis can be thought of as continuous automata, though this degree of abstraction away from the physics of the problem is probably inappropriate.
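As an illustration, the following minimal Python sketch implements such a diffusion-like continuous automaton on a one-dimensional ring of cells; the particular averaging weights are an arbitrary illustrative choice:

def step(cells):
    # Each new state is a weighted average of a cell and its two neighbours.
    n = len(cells)
    return [0.5 * cells[i] + 0.25 * (cells[i - 1] + cells[(i + 1) % n])
            for i in range(n)]

state = [0.0] * 16
state[8] = 1.0             # an initial concentration spike
for _ in range(10):
    state = step(state)    # the spike spreads out, as in diffusion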
Continuous spatial automata resemble continuous automata in having continuous values, but they also have a continuous set of locations, rather than restricting cells to a discrete grid.
See also
Continuous spatial automaton
Cellular automaton
Reference notes
Cellular automata | Continuous automaton | Mathematics | 228 |
35,252,470 | https://en.wikipedia.org/wiki/Center%20of%20pressure%20%28terrestrial%20locomotion%29 | In biomechanics, center of pressure (CoP) is the term given to the point of application of the ground reaction force vector. The ground reaction force vector represents the sum of all forces acting between a physical object and its supporting surface. Analysis of the center of pressure is common in studies on human postural control and gait. It is thought that changes in motor control may be reflected in changes in the center of pressure. In biomechanical studies, the effect of some experimental condition on movement execution will regularly be quantified by alterations in the center of pressure.
The center of pressure is not a static outcome measure. For instance, during human walking, the center of pressure is near the heel at the time of heelstrike and moves anteriorly throughout the step, being located near the toes at toe-off. For this reason, analysis of the center of pressure will need to take into account the dynamic nature of the signal. In the scientific literature various methods for the analysis of center of pressure time series have been proposed.
Measuring CoP
CoP measurements are commonly gathered through the use of a force plate. A force plate gathers data in the anterior-posterior direction (forward and backward), the medial-lateral direction (side-to-side) and the vertical direction, as well as moments about all 3 axes. Together, these can be used to calculate the position of the center of pressure relative to the origin of the force plate.
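As a sketch of that calculation, the following Python function uses a common textbook simplification in which the plate moments are taken about the plate origin and z0 is the plate-specific vertical offset of that origin below the contact surface; actual plates should be handled according to their calibration documentation:

def center_of_pressure(fx, fy, fz, mx, my, z0=0.0):
    # CoP coordinates (m) from forces (N) and moments (N*m) about the origin.
    cop_x = (-my - fx * z0) / fz
    cop_y = (mx - fy * z0) / fz
    return cop_x, cop_y

# A purely vertical 700 N load applied 0.05 m anterior of the origin
# produces a moment of -35 N*m about the y-axis, so CoP_x recovers 0.05 m.
print(center_of_pressure(0.0, 0.0, 700.0, 0.0, -35.0))  # (0.05, 0.0)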
Relationship to balance
CoP and center of gravity (CoG) are both related to balance in that they are dependent on the position of the body with respect to the supporting surface. Center of gravity is subject to change based on posture. Center of pressure is the location on the supporting surface where the resultant vertical force vector would act if it could be considered to have a single point of application.
A shift of CoP is an indirect measure of postural sway and thus a measure of a person’s ability to maintain balance. People sway in the anterior-posterior direction (forward and backward) and the medial-lateral direction (side-to-side) when they are simply standing still. This comes as a result of small contractions of muscles in the body to maintain an upright position. An increase in sway is not necessarily an indicator of poorer balance so much as it is an indicator of decreased neuromuscular control, although it has been noted that postural sway is a precursor to a fall.
Notes
References
Benda, B.J., Riley, P.O. and Krebs, D.E. (1994). Biomechanical relationship between center of gravity and center of pressure during standing. IEEE Transactions on Rehabilitation Engineering, 2(1), 3-10.
Fernie, G.R, Gryfe, C.I., Holliday, P.J., and Llewellyn, A. (1982). The relationship of postural sway in standing to the incidence of falls in geriatric subjects. Age and Ageing, 11(1), 11-16.
Gribble, P.A., Hertel, J. (2004). Effect of Lower-Extremity Fatigue on Postural Control. Archives of Physical Medicine and Rehabilitation, 85, 589-592.
Biomechanics
Walking
Pressure | Center of pressure (terrestrial locomotion) | Physics,Mathematics | 676 |
49,144,195 | https://en.wikipedia.org/wiki/Leaf%20size | Leaf size of plants can be described using the terms megaphyll, macrophyll, mesophyll, microphyll, nanophyll and leptophyll (in descending order) in a classification devised in 1934 by Christen C. Raunkiær and since modified by others. Definitions vary, some referring to length and others to area. Raunkiaer's original definitions were by leaf area, and differed by a factor of nine at each stage. Some authors simplified the system to make it specific to particular climates, and have introduced extra terms including notophyll, picophyll, platyphyll and subleptophyll.
In ecology, microphyll and similar terms based on blade size of the leaf are used to describe a flora, for example, a "microphyll rainforest" is often defined as a forest where the dominant trees have leaves less than 7.5 cm in length.
Raunkiaer's work
Christen C. Raunkiaer proposed using leaf size as a relatively easy measurement that could be used to compare the adaptation of a plant community to dryness: "We have for a long time been aware of a series of different adaptations in the structure of plants enabling them to endure excessive evaporation, and thus allowing them to live in places where the environment determines intense evaporation, or where the conditions of water absorption of the ground are unfavourable either physically or physiologically. Examples of such structures are: (1) covering of wax, (2) thick cuticle, (3) sub-epidermal protective tissue, (4) water tissue, (5) covering of hairs, (6) covering of the stomata, (7) sinking of the stomata, (8) inclusion of the stomata in a space protected from air currents, (9) diminution of the evaporating surface, &c. The matter however is so complicated that it is very difficult to reach an exact appraisal of these adaptations in characterizing the individual plant communities biologically. ... In general we must content ourselves with showing the most frequently occurring adaptations, without going farther into the statistical investigation. ... A preliminary direct consideration of a series of evergreen phanerophytic communities ... shows that amongst the adaptations named, diminution of the transpiring surface, diminution in leaf size, is one of the adaptations generally in evidence; and since this adaptation is easy to observe and comparatively easy to measure, it is convenient to begin with it if we wish to use the statistical method on this domain."
Raunkiaer used the following size classes:
Leptophyll: less than 25 square millimetres
Nanophyll: 25–225 square millimetres
Microphyll: 225-2,025 square millimetres
Mesophyll: 2,025-18,225 square millimetres
Macrophyll: 18,225-164,025 square millimetres
Megaphyll: greater than 164,025 square millimetres
Later authors have modified the classes and have sometimes used leaf length as a simpler measure than leaf area if the leaf shape is approximately an ellipse. For example, L.J. Webb used size classes:
Microphyll: less than 2,025 square millimetres
Notophyll: 2,025–4,500 square millimetres
Mesophyll: greater than 4,500 square millimetres
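Both schemes amount to mapping a blade area to a named class. A small Python sketch of Raunkiaer's original classification (upper bounds in square millimetres, each nine times the previous one) might be:

RAUNKIAER_CLASSES = [
    (25, "leptophyll"),
    (225, "nanophyll"),
    (2025, "microphyll"),
    (18225, "mesophyll"),
    (164025, "macrophyll"),
]

def leaf_size_class(area_mm2):
    # Return the first class whose upper bound exceeds the blade area.
    for upper_bound, name in RAUNKIAER_CLASSES:
        if area_mm2 < upper_bound:
            return name
    return "megaphyll"

assert leaf_size_class(1000) == "microphyll"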
Examples of definitions
Individual plants with large leaves
Gunnera manicata, giant ornamental rhubarb;
Raphia regalis, a palm with enormous compound leaves;
Manicaria saccifera, an Amazonian palm with large, only partially divided leaves;
Marojejya darianii, big-leaf palm;
Johannesteijsmannia altifrons, Joey palm, with long undivided leaves;
Amorphophallus titanum, titan arum, which produces a single giant leaf;
Victoria amazonica, giant Amazonian waterlily, an aquatic plant with very large floating leaves.
See also
Leaf
References
Ecological metrics
Leaf morphology | Leaf size | Mathematics | 831 |
4,953,258 | https://en.wikipedia.org/wiki/Orenda | Orenda is the Haudenosaunee name for a certain spiritual energy inherent in people and their environment. It is an "extraordinary invisible power believed by the Iroquois Native Americans to pervade in varying degrees in all animate and inanimate natural objects as a transmissible spiritual energy capable of being exerted according to the will of its possessor." Orenda is a collective power of nature's energies through the living energy of all natural objects: animate and inanimate.
Anthropologist J. N. B. Hewitt notes intrinsic similarities between the Haudenosaunee concept of Orenda and the Siouan wakan or mahopa, the Algonquin manitowi, and the pokunt of the Shoshone. Across the Iroquois tribes, the concept was referred to variously as orenna or karenna by the Mohawk, Cayuga, and Oneida; urente by the Tuscarora; and iarenda or orenda by the Huron.
Orenda is present in nature: storms are said to possess orenda. A strong connection exists between prayers and songs and orenda. Through song, a bird, a shaman, or a rabbit puts forth orenda.
See also
Manitou, similar concept among Algonquian peoples
Mana
Indigenous American philosophy
Ecopsychology
Spiritual ecology
Footnotes
References
Energy (esotericism)
Vitalism
Iroquois culture
Anthropology of religion | Orenda | Biology | 288 |
140,990 | https://en.wikipedia.org/wiki/Tanning%20%28leather%29 | Tanning, or hide tanning, is the process of treating skins and hides of animals to produce leather. A tannery is the place where the skins are processed.
Historically, vegetable based tanning used tannin, an acidic chemical compound derived from the bark of certain trees, in the production of leather. An alternative method, developed in the 1800s, is chrome tanning, where chromium salts are used instead of natural tannins.
History
Tanning hide into leather involves a process which permanently alters the protein structure of skin, making it more durable and less susceptible to decomposition and coloring. The place where hides are processed is known as a tannery.
The English word for tanning is from medieval Latin tannare, a derivative of tannum (oak bark), from French tan (tanbark), ultimately from an old-Cornish word for 'oak'. These terms are related to a hypothetical Proto-Indo-European root meaning 'fir tree'. (The same word is the source of Old High German tanna meaning 'fir', related to the modern German Tannenbaum.)
Ancient civilizations used leather for waterskins, bags, harnesses and tack, boats, armour, quivers, scabbards, boots, and sandals. Tanning was being carried out by the inhabitants of Mehrgarh in Pakistan between 7000 and 3300 BCE. Around 2500 BCE, the Sumerians began using leather, affixed by copper studs, on chariot wheels.
The process of tanning was also used for boats and fishing vessels: ropes, nets, and sails were tanned using tree bark.
Formerly, tanning was considered a noxious or "odoriferous trade" and relegated to the outskirts of town, among the poor. Tanning by ancient methods is so foul-smelling that tanneries are still isolated from those towns today where the old methods are used. Skins typically arrived at the tannery dried stiff and dirty with soil and gore. First, the ancient tanners would soak the skins in water to clean and soften them. Then they would pound and scour the skin to remove any remaining flesh and fat. Hair was removed by soaking the skin in urine, painting it with an alkaline lime mixture, or simply allowing the skin to putrefy for several months then dipping it in a salt solution. After the hair was loosened, the tanners scraped it off with a knife. Once the hair was removed, the tanners would "bate" (soften) the material by pounding dung into the skin, or soaking the skin in a solution of animal brains. Bating was a fermentative process that relied on enzymes produced by bacteria found in the dung. Among the kinds of dung commonly used were those of dogs or pigeons.
Historically the actual tanning process used vegetable tanning. In some variations of the process, cedar oil, alum, or tannin was applied to the skin as a tanning agent. As the skin was stretched, it would lose moisture and absorb the agent.
Following the adoption in medicine of soaking gut sutures in a chromium (III) solution after 1840, it was discovered that this method could also be used with leather and thus was adopted by tanners.
Preparation
The tanning process begins with obtaining an animal skin. When an animal skin is to be tanned, the animal is killed and skinned before the body heat leaves the tissues. This can be done by the tanner, or by obtaining a skin at a slaughterhouse, farm, or local fur trader.
Before tanning, the skins are often dehaired, then have fat, meat and connective tissue removed. They are then washed and soaked in water with various compounds, and prepared to receive a tanning agent. They are then soaked, stretched, dried, and sometimes smoked.
Curing
Preparing hides begins by curing them with salt to prevent putrefaction of the collagen from bacterial growth during the time lag from procuring the hide to when it is processed. Curing removes water from the hides and skins using a difference in osmotic pressure. The moisture content of hides and skins is greatly reduced, and osmotic pressure increased, to the point that bacteria are unable to grow. In wet-salting, the hides are heavily salted, then pressed into packs for about 30 days. In brine-curing, the hides are agitated in a saltwater bath for about 16 hours. Curing can also be accomplished by preserving the hides and skins at very low temperatures.
Beamhouse operations
The steps in the production of leather between curing and tanning are collectively referred to as beamhouse operations. They include, in order, soaking, liming, removal of extraneous tissues (unhairing, scudding and fleshing), deliming, bating or puering, drenching, and pickling.
Soaking
In soaking, the hides are soaked in clean water to remove the salt left over from curing and increase the moisture so that the hide or skin can be further treated.
To prevent damage of the skin by bacterial growth during the soaking period, biocides, typically dithiocarbamates, may be used. Fungicides such as TCMTB may also be added later in the process, to protect wet leathers from mold growth. After 1980, the use of pentachlorophenol and mercury-based biocides and their derivatives was forbidden.
Liming
After soaking, the hides are treated with milk of lime (a basic agent) typically supplemented by "sharpening agents" (disulfide reducing agents) such as sodium sulfide, cyanides, amines, etc.
This:
Removes the hair and other keratinous matter
Removes some of the interfibrillary soluble proteins such as mucins
Causes the fibers to swell up and split up to the desired extent
Removes the natural grease and fats to some extent
Brings the collagen in the hide to a proper condition for satisfactory tannage
The weakening of hair is dependent on the breakdown of the disulfide link of the amino acid cystine, which is the characteristic of the keratin class of proteins that gives strength to hair and wools (keratin typically makes up 90% of the dry weight of hair). The hydrogen atoms supplied by the sharpening agent weaken the cystine molecular link whereby the covalent disulfide bond links are ultimately ruptured, weakening the keratin. To some extent, sharpening also contributes to unhairing, as it tends to break down the hair proteins.
The isoelectric point of the collagen (a tissue-strengthening protein unrelated to keratin) in the hide is also shifted to around pH 4.7 due to liming.
Any hairs remaining after liming are removed mechanically by scraping the skin with a dull knife, a process known as scudding.
Deliming and bating
The pH of the collagen is then reduced so the enzymes may act on it in a process known as deliming. Depending on the end use of the leather, hides may be treated with enzymes to soften them, a process called bating. In modern tanning, these enzymes are purified agents, and the process no longer requires bacterial fermentation (as from dung-water soaking) to produce them.
Pickling
Pickling prepares bated hides for mineral tanning by bringing them to a strongly acidic condition with salt and acid. Once bating is complete, the hides and skins are soaked in a bath containing common salt (sodium chloride), usually 1 quart of salt to 1 gallon of hot water. When the water cools, one fluid ounce of sulfuric acid is added. Small skins are left in this liquor for 2 days, while larger skins remain for between 1 week and as much as 2 months.
In vegetable tanning, the hides are soaked in a bath containing vegetable tannins, such as those found in gallnuts, the leaves of sumac, the leaves of certain acacia trees, and the outer green shells of walnuts, among other plants. Vegetable tanning takes longer than mineral tanning to convert rawhides into leather. Mineral tanned leather is used principally for shoes, car seats, and upholstery in homes (sofas, etc.). Vegetable tanned leather is used in leather crafting and in making small leather items, such as wallets, handbags and clothes.
Process
Chrome tanning
Chromium(III) sulfate (Cr2(SO4)3) has long been regarded as the most efficient and effective tanning agent. Chromium(III) compounds of the sort used in tanning are significantly less toxic than hexavalent chromium, although the latter can arise from inadequate waste treatment. Chromium(III) sulfate dissolves to give the hexaaquachromium(III) cation, [Cr(H2O)6]3+, which at higher pH undergoes processes called olation to give polychromium(III) compounds that are active in tanning, being the cross-linking of the collagen subunits. The chemistry of [Cr(H2O)6]3+ is more complex in the tanning bath than in water due to the presence of a variety of ligands. Some ligands include the sulfate anion, the collagen's carboxyl groups, amine groups from the side chains of the amino acids, and masking agents. Masking agents are carboxylic acids, such as acetic acid, used to suppress formation of polychromium(III) chains. Masking agents allow the tanner to further increase the pH to increase collagen's reactivity without inhibiting the penetration of the chromium(III) complexes.
Collagen is characterized by a high content of glycine, proline, and hydroxyproline, usually in the repeat -gly-pro-hypro-gly-. These residues give rise to collagen's helical structure. Collagen's high content of hydroxyproline allows cross-linking by hydrogen bonding within the helical structure. Ionized carboxyl groups (RCO2−) are formed by the action of hydroxide. This conversion occurs during the liming process, before introduction of the tanning agent (chromium salts). Later during pickling, collagen carboxyl groups are temporarily protonated for ready transport of chromium ions. During basification step of tanning, the carboxyl groups are ionized and coordinate as ligands to the chromium(III) centers of the oxo-hydroxide clusters.
Tanning increases the spacing between protein chains in collagen from 10 to 17 Å. The difference is consistent with cross-linking by polychromium species, of the sort arising from olation and oxolation.
Before the introduction of the basic chromium species in tanning, several steps are required to produce a tannable hide. The pH must be very acidic when the chromium is introduced to ensure that the chromium complexes are small enough to fit between the fibers and residues of the collagen. Once the desired level of penetration of chrome into the substance is achieved, the pH of the material is raised again to facilitate the process. This step is known as basification. In the raw state, chrome-tanned skins are greyish-blue, so are referred to as wet blue. Chrome tanning is faster than vegetable tanning (taking less than a day for this part of the process) and produces a stretchable leather which is excellent for use in handbags and garments.
After application of the chromium agent, the bath is treated with sodium bicarbonate in the basification process to increase the pH to 3.8–4.0, inducing cross-linking between the chromium and the collagen. The pH increase is normally accompanied by a gradual temperature increase up to 40 °C. Chromium's ability to form such stable bridged bonds explains why it is considered one of the most effective tanning compounds. Chromium-tanned leather can contain between 4 and 5% of chromium. This efficiency is characterized by its increased hydrothermal stability of the skin, and its resistance to shrinkage in heated water.
Vegetable tanning
Vegetable tanning uses tannins (a class of polyphenol astringent chemicals), which occur naturally in the bark and leaves of many plants. Tannins bind to the collagen proteins in the hide and coat them, causing them to become less water-soluble and more resistant to bacterial attack. The process also causes the hide to become more flexible. The primary barks processed in bark mills and used in modern times are chestnut, oak, redoul, tanoak, hemlock, quebracho, mangrove, wattle (acacia; see catechol), and myrobalans from Terminalia spp., such as Terminalia chebula. In Ethiopia, the combined vegetable oils of Niger seed (Guizotia abyssinica) and flaxseeds were used in treating the flesh side of the leather, as a means of tawing, rather than of tanning. In Yemen and Egypt, hides were tanned by soaking them in a bath containing the crushed leaves and bark of the Salam acacia (Acacia etbaica; A. nilotica kraussiana). Hides that have been stretched on frames are immersed for several weeks in vats of increasing concentrations of tannin. Vegetable-tanned hide is not very flexible. It is used for luggage, furniture, footwear, belts, and other clothing accessories.
Alternative chemicals
Wet white is a term used for leathers produced using alternative tanning methods that produce an off-white colored leather. Like wet blue, wet white is also a semifinished stage. Wet white can be produced using aldehydes, aluminum, zirconium, titanium, or iron salts, or a combination thereof. Concerns with the toxicity and environmental impact of any chromium (VI) that may form during the tanning process have led to increased research into more efficient wet white methods.
Natural tanning
The conditions present in bogs, including highly acidic water, low temperature, and a lack of oxygen, combine to preserve but severely tan the skin of bog bodies.
Tawing
Tawing is a method that uses alum and other aluminium salts, generally in conjunction with binders such as egg yolk, flour, or other salts. The hide is tawed by soaking in a warm solution of potash alum and salts. The process increases the hide's pliability, stretchability, softness, and quality. Then, the hide is air dried (crusted) for several weeks, which allows it to stabilize.
The use of alum alone for tanning rawhides is not recommended, as it shrinks the surface area of the skin, making it thicker and hard to the touch. If alum is applied to the fur, it makes the fur dull and harsh.
Post-tanning finishing
Depending on the finish desired, the leather may be waxed, rolled, lubricated, injected with oil, split, shaved, or dyed.
Health and environmental impact
The tanning process involves chemical and organic compounds that can have a detrimental effect on the environment. Agents such as chromium, vegetable tannins, and aldehydes are used in the tanning step of the process. Chemicals used in tanned leather production increase the levels of chemical oxygen demand and total dissolved solids in water when not disposed of responsibly. These processes also use large quantities of water and produce large amounts of pollutants.
Boiling and sun drying can oxidize and convert the various chromium(III) compounds used in tanning into carcinogenic hexavalent chromium, or chromium(VI). Hexavalent chromium in runoff and tannery scraps is then consumed by animals; in the case of Bangladesh, by chickens, the nation's most common source of protein. Up to 25% of the chickens in Bangladesh contained harmful levels of hexavalent chromium, adding to the national health problem load.
Chromium is not solely responsible for these diseases. Methylisothiazolinone, which is used for microbiological protection (against fungal or bacterial growth), causes problems with the eyes and skin. Anthracene, which is used as a leather tanning agent, can cause problems in the kidneys and liver and is also considered a carcinogen. Formaldehyde and arsenic, which are used for leather finishing, cause health problems in the eyes, lungs, liver, kidneys, skin, and lymphatic system and are also considered carcinogens. The waste from leather tanneries is detrimental to the environment and to the people who live in it. The use of outdated technologies is a large factor in how hazardous wastewater ends up contaminating the environment. This is especially prominent in small and medium-sized tanneries in developing countries.
The UN Leather Working Group (LWG) "provides an environmental audit protocol, designed to assess the facilities of leather manufacturers," for "traceability, energy conservation, [and] responsible management of waste products."
Alternatives
Untanned hides can be dried and made pliable by rubbing and stretching the fibers with a hide stretcher, and fatting. However the hide will revert to rawhide if not periodically replenished with fat or oil, especially if it gets wet. Many Native Americans of the arid western regions wore clothing made by this process.
Smoke tanning is listed among the conventional methods like chrome tanning and vegetable tanning. Impregnation of the hide's cells with formaldehyde (from smoke) offers some microbial and water resistance.
Associated processes
Leftover leather would historically be turned into glue. Tanners would place scraps of hides in a vat of water and let them deteriorate for months. The mixture would then be placed over a fire to boil off the water to produce glue.
A tannery may be associated with a grindery, originally a whetstone facility for sharpening knives and other sharp tools, but later could carry shoemakers' tools and materials for sale.
There are several solid and waste water treatment methodologies currently being researched, such as anaerobic digestion of solid wastes and wastewater sludge.
See also
Tanwater
Leather production processes
References
External links
"Home Tanning of Leather and Small fur Skins" (pub. 1962) hosted by the UNT Government Documents Department
Muspratt's mid-19th century technical description of the whole process.
Leathermaking
Manufacturing | Tanning (leather) | Engineering | 3,861 |
25,406,027 | https://en.wikipedia.org/wiki/Health%20and%20environmental%20impact%20of%20transport | The health and environmental impact of transport is significant because transport burns most of the world's petroleum. This causes illness and deaths from air pollution, including nitrogen oxides and particulates, and is a significant cause of climate change through emission of carbon dioxide. Within the transport sector, road transport is the largest contributor to climate change.
Environmental regulations in developed countries have reduced the individual vehicle's emission.
However, this has been offset by an increase in the number of vehicles, and increased use of each vehicle (an effect known as the Jevons paradox).
Some pathways to reduce the carbon emissions of road vehicles have been studied extensively.
Energy use and emissions vary largely between modes, causing environmentalists to call for a transition from air and road to rail and human-powered transport, and for increased transport electrification and energy efficiency.
Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transport emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog, and climate change. Health effects of transport include noise pollution and carbon monoxide emissions.
While electric cars are being built to cut down emissions at the point of use, an approach that is becoming popular among cities worldwide is to prioritize public transport, bicycles, and pedestrian movement, redirecting vehicle movement to create 20-minute neighbourhoods that promote exercise while greatly reducing vehicle dependency and pollution. Some policies include levying a congestion charge on cars travelling within congested areas during rush hour.
Types of effects
Emissions
The transportation sector is a major source of greenhouse gas emissions (GHGs) in the United States.
An estimated 30 percent of national GHGs are directly attributable to transportation—and in some regions, the proportion is even higher.
Transportation methods are the greatest contributing source of GHGs in the U.S., accounting for 47 percent of the net increase in total U.S. emissions since 1990.
Land
Other environmental effects of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transportation emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog and climate change.
Health
The health effects of transport emissions are also of concern. A recent survey of the studies on the effect of traffic emissions on pregnancy outcomes has linked exposure to emissions to adverse effects on gestational duration and possibly also intrauterine growth.
As listed above, direct effects such as noise pollution and carbon monoxide emissions harm the environment directly, alongside indirect effects. The indirect effects are often of greater consequence, which leads to a misconception, since it is frequently assumed that the initial, direct effects cause the most damage. For example, particulates, the outcome of incomplete combustion in an internal combustion engine, are not linked only to respiratory and cardiovascular problems, since they also contribute to harmful factors beyond those specific conditions. Although the environmental effects are usually listed individually, there are also cumulative effects: the synergetic consequences of transport activities, which take into account the varied direct and indirect effects on an ecosystem. Climate change is the sum total of several natural and human-made factors; 15% of global CO2 emissions are attributed to the transport sector.
Mode
The following table compares the emissions of the different transport means for passenger transport in Europe:
Aviation
Aviation emissions vary based on length of flight. For covering long distances, longer flights are a better investment of the high energy costs of take-off and landing than very short flights, yet by nature of their length they inevitably use much more energy. CO2 emissions from air travel range from 0.24 kg per passenger mile (0.15 kg/km per passenger) for short flights down to 0.18 kg per passenger mile (0.11 kg/km per passenger) for long flights. Researchers have been raising concern about the globally increasing hypermobility of society, involving frequent and often long-distance air travel and the resulting environmental and climate effects. This threatens to overcome gains made in the efficiency of aircraft and their operations. Climate scientist Kevin Anderson raised concern about the growing effect of air transport on the climate in a paper[13] and a presentation[14] in 2008. He has pointed out that even at a reduced annual rate of increase in UK passenger air travel and with the government's targeted emissions reductions in other energy use sectors, by 2030 aviation would be causing 70% of the UK's allowable emissions.
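A minimal sketch of how the per-passenger factors quoted above translate into a trip estimate. The 1,500 km threshold separating "short" from "long" flights is an assumption chosen for illustration, not a figure from the text.

```python
# Rough per-passenger CO2 estimate for a flight, using the factors quoted
# above: 0.15 kg/km for short flights, 0.11 kg/km for long flights.
# The 1,500 km short/long cutoff is an illustrative assumption.

def flight_co2_kg(distance_km: float, passengers: int = 1) -> float:
    """Estimate total CO2 in kg for `passengers` on one flight."""
    per_km = 0.15 if distance_km < 1500 else 0.11  # kg CO2 per passenger-km
    return per_km * distance_km * passengers

if __name__ == "__main__":
    print(f"500 km hop:   {flight_co2_kg(500):.0f} kg CO2 per passenger")
    print(f"6,000 km leg: {flight_co2_kg(6000):.0f} kg CO2 per passenger")
```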
Worse, aircraft emissions at stratospheric altitudes have a greater contribution to radiative forcing than do emissions at sea level, due to the effects of several greenhouse gases in the emissions apart from CO2. The other GHGs include methane (CH4), NOx (which leads to ozone, O3), and water vapor. Overall, in 2005 the radiative forcing caused by aviation amounted to 4.9% of all human-caused radiative forcing on Earth's heat balance.
Road transport
Cycling
Cycling has a low carbon-emission and low environmental footprint. A European study of thousands of urban dwellers found that daily mobility-related emissions were of per person, with car travel contributing 70% and cycling 1% (including the entire lifecycle of vehicles and fuels). 'Cyclists' had 84% lower lifecycle emissions from all daily travel than 'non-cyclists', and the more people cycled on a daily basis, the lower was their mobility-related carbon footprint. Motorists who shifted travel modes from cars to bikes as their 'main method of travel' emitted less per day. Regular cycling was most strongly associated with reduced life cycle emissions for commuting and social trips.
Changing from motorised to non-motorised travel behaviour can also have significant effects. A European study of nearly 2000 participants showed that an average person cycling 1 trip/day more and driving 1 trip/day less for 200 days a year would decrease mobility-related lifecycle emissions by about 0.5 tonnes over a year, representing a substantial share of average per capita emissions from transport (which are about 1.5 to 2.5 tonnes per year, depending on where you live).
Cars
When burned, unleaded gasoline produces of CO2 per gallon, while diesel produces . CO2 emissions originating from ethanol are disregarded by international agreements, however, so gasoline containing 10% ethanol would only be considered to produce of CO2 per gallon. The average fuel economy for new light-duty vehicles sold in the US of the 2017 model year was about 24.9 MPG, giving around of CO2 per mile. The Department of Transportation's MOBILE 6.2 model, used by regional governments to model air quality, uses a fleet average (all cars, old and new) of 20.3 mpg, giving around of CO2 per mile.
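The per-gallon figures in this paragraph were lost in extraction, but the per-mile arithmetic they support is straightforward. A minimal sketch, assuming the commonly cited US EPA factor of 8.887 kg of CO2 per gallon of gasoline (an assumption, not a figure from the text):

```python
# Per-mile CO2 from fuel economy: kg CO2 per mile = (kg CO2 per gallon) / MPG.
# 8.887 kg/gallon is the widely cited EPA gasoline factor, assumed here.

GASOLINE_KG_CO2_PER_GALLON = 8.887  # assumed EPA factor

def co2_kg_per_mile(mpg: float) -> float:
    return GASOLINE_KG_CO2_PER_GALLON / mpg

if __name__ == "__main__":
    for label, mpg in [("2017 new light-duty fleet", 24.9),
                       ("MOBILE 6.2 fleet average", 20.3)]:
        print(f"{label}: {co2_kg_per_mile(mpg):.2f} kg CO2/mile")
```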
In Europe, the European Commission enforced that from 2015 all new cars registered shall not emit more than an average of of CO2 per kilometre (kg CO2/km). The target is that by 2021 the average emissions for all new cars is of CO2 per kilometre.
Buses
On average, inner city commuting buses emit 0.29 kg of CO2 per passenger mile (0.18 kg/km per passenger), and long distance (>20 mi, >32 km) bus trips emit 0.08 kg of CO2 per passenger mile (0.05 kg/km per passenger). Road and transportation conditions vary, so some carbon calculations add 10% to the total distance of the trip to account for potential traffic jams, detours, and pit-stops that may arise.
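A minimal sketch of a trip calculation combining the per-passenger-kilometre factors quoted above with the 10% distance uplift that some carbon calculators apply:

```python
# Trip CO2 using the per-passenger-km bus factors quoted above, plus the
# optional 10% distance uplift for traffic jams, detours and pit-stops.

FACTORS = {               # kg CO2 per passenger-km
    "city_bus": 0.18,
    "long_distance_bus": 0.05,
}

def trip_co2_kg(mode: str, distance_km: float, uplift: float = 0.10) -> float:
    return FACTORS[mode] * distance_km * (1.0 + uplift)

if __name__ == "__main__":
    print(f"15 km city bus commute: {trip_co2_kg('city_bus', 15):.2f} kg CO2")
    print(f"200 km coach trip:      {trip_co2_kg('long_distance_bus', 200):.1f} kg CO2")
```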
Rail
On average, commuter rail and subway trains emit 0.17 kg of CO2 per passenger mile (0.11 kg/km per passenger), and long distance (>20 mi, >32 km) trains emit 0.19 kg of CO2 per passenger mile (0.12 kg/km per passenger). Some carbon calculations add 10% to the total trip distance to account for detours, stop-overs, and other issues that may arise.
Electric trains contribute relatively less to pollution, as the pollution happens in the power plants, which are a lot more efficient than diesel-driven engines. Generally, electric motors, even when accounting for transmission losses, are more efficient than internal combustion engines, with efficiency further improving through recuperative braking.
Trains contain many different parts that have the potential to create noise. Wheels, engines and non-aerodynamic cargo are prone to vibrate at certain speeds. Noise caused from directly neighboring railways has the potential to lessen value to nearby property. In order to combat unbearable volumes resulting from railways, US diesel locomotives are required to be quieter than 90 decibels at 25 meters away since 1979. This noise, however, has been shown to be harmless to animals, except for horses who will become skittish.
Railway cargo can be a cause of pollution. Air pollution can occur from boxcars carrying materials such as iron ore, coal, soil, or aggregates and exposing these materials to the air. This can release nitrogen oxide, carbon monoxide, sulphur dioxide, or hydrocarbons into the air. Liquid pollution can come from railways contributing to a runoff into water sources, like groundwater or rivers and can result from spillage of fuels like oil into water supplies or onto land or discharge of human waste.
When railways are built in wilderness areas, the environment is visually altered by cuttings, embankments, dikes, and stilts.
Shipping
The fleet emission average for delivery vans, trucks and big rigs is about 10.2 kg of CO2 per gallon of diesel consumed. Delivery vans and trucks average about 7.8 mpg (or 1.3 kg of CO2 per mile) while big rigs average about 5.3 mpg (or 1.92 kg of CO2 per mile).
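The diesel factor and the two per-mile figures above are mutually consistent, as a quick check shows. A minimal sketch; the 10.2 kg/gallon factor is inferred from the surviving per-mile figures rather than taken from the original text:

```python
# Per-mile CO2 for diesel freight from fuel economy. 10.2 kg CO2/gallon is
# inferred from the surviving figures (7.8 mpg ~ 1.3 kg/mile,
# 5.3 mpg ~ 1.92 kg/mile); treat it as an assumption.

DIESEL_KG_CO2_PER_GALLON = 10.2  # inferred factor

def freight_co2_kg_per_mile(mpg: float) -> float:
    return DIESEL_KG_CO2_PER_GALLON / mpg

if __name__ == "__main__":
    print(f"Delivery van (7.8 mpg): {freight_co2_kg_per_mile(7.8):.2f} kg CO2/mile")
    print(f"Big rig (5.3 mpg):      {freight_co2_kg_per_mile(5.3):.2f} kg CO2/mile")
```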
Discharges of sewage into water bodies can come from many sources, including wastewater treatment facilities, runoff from livestock operations, and vessels. These discharges have the potential to impair water quality, adversely affecting aquatic environments and increasing the risks to human health. While sewage discharges have potentially wide-ranging effects on all aquatic environments, the effects may be especially problematic in marinas, slow-moving rivers, lakes and other bodies of water with low flushing rates. Environmentally this creates invasive species that often drive other species to their extinction and cause harm to the environment and local businesses.
Emissions from ships have much more significant environmental effects: many ships travel internationally from port to port and are not seen for weeks, contributing to air and water pollution on their voyages. Emissions of greenhouse gases reduce the amount of ozone that blocks UV rays. Sulfur and nitrogen compounds emitted from ships oxidize in the atmosphere to form sulfate and nitrate. Emissions of nitrogen oxides, carbon monoxide, and volatile organic compounds (VOCs) lead to enhanced surface ozone formation and methane oxidation, depleting the ozone. The effect of international ship emissions on the distribution of chemical compounds such as NOx, CO, O3, •OH, SO2, HNO3, and sulfate has been studied using a global chemical transport model (CTM), the Oslo CTM2. In particular, the large-scale distribution and diurnal variation of the oxidants and sulfur compounds are studied interactively. Meteorological data (winds, temperature, precipitation, clouds, etc.) used as input for the CTM calculations are provided by a weather prediction model.
Shipping Emissions Factors:
The road haulage industry is contributing around 20% of the UK's total carbon emissions a year, with only the energy industry having a larger contribution, at around 39%.
Road haulage is a significant consumer of fossil fuels and associated carbon emissions – HGV vehicles account for almost 20 percent of total emissions.
Mitigation of environmental effects
Sustainable transport
Sustainable transport is transport with either a lower environmental footprint per passenger or per distance travelled, or with higher capacity. Typical sustainable transport modes are rail, bicycle and walking.
Road-rail parallel layout
Road-Rail Parallel Layout is a design option to reduce the environmental effects of new transportation routes by locating railway tracks alongside a highway. In 1984 the Paris—Lyon high-speed rail route in France had about 14% parallel layout with the highway, and in 2002, 70% parallel layout was achieved with the Cologne–Frankfurt high-speed rail line.
Involvement
Mitigation does not entirely involve large-scale changes such as road construction; everyday people can also contribute. Walking and cycling, for short or non-commute trips, can be alternative modes of transportation when travelling short or even long distances. A multi-modal trip involving walking, a bus ride, and bicycling may be counted solely as a transit trip. Economic evaluations of transportation investments often ignore the true effects of increased vehicular traffic (incremental parking, traffic accidents, and consumer costs) and the real benefits of alternative modes of transport. Most travel models do not account for the negative effects of additional vehicular traffic that result from roadway capacity expansion and overestimate the economic benefits of urban highway projects. Transportation planning indicators, such as average traffic speeds, congestion delays, and roadway level of service, measure mobility rather than accessibility.
Climate change is a factor that 67% of Europeans consider when choosing where to go on holiday. Specifically, people under the age of 30 are more likely to consider climate implications of travelling to vacation spots. 52% of young Europeans, 37% of people ages 30–64 and 25% of people aged above 65, state that in 2022 they will choose to travel by plane. 27% of young people claim they will travel to a faraway destination.
Europeans expect lifestyle changes to experience great transformation in the next 20 years. 31% of respondents to a climate survey conducted in 2021 believe that most people will no longer own their own vehicle, while 63% believe that teleworking will become the norm to reduce emissions and mitigate the effects of climate change. 48% predict that energy quotas will be individually assigned.
Influence of e-commerce
As large retail corporations have in recent years focused attention on e-commerce, many have begun to offer fast (e.g. 2-day) shipping. These fast shipping options get products and services to the hands of buyers faster than ever before, but they have negative externalities for public roads and the climate. A 2016 survey by UPS showed that 46% of online shoppers abandoned a shopping cart due to an overly long shipping time, and that 1 in 3 online shoppers look at the speed of delivery from the marketplaces they buy from. Consumers are demanding the fast delivery of goods and services. AlixPartners LLP found that consumers expect to wait an average of 4.8 days for delivery, down from 5.5 days in 2012. And the share of those who are willing to wait more than five days has declined to 60% from 74% in four years.
E-commerce shopping can be seen as the best way to reduce one's carbon footprint, yet this is only true to some extent. Shopping online is less energy intensive than driving to a physical store location and then driving back home, because shipping can take advantage of economies of scale. However, these benefits are diminished when e-commerce stores package items separately or when customers buy items separately and do not take the time to one-stop shop. Large stores with a large online presence can have millions of customers opting for these shipping benefits; as a result, they unintentionally increase carbon emissions by not consolidating purchases. Josué Velázquez-Martínez, a sustainable logistics professor at MIT, notes that "if you are willing to wait a week for shipping, you just kill 20 trees instead of 100 trees." Shipping is only less energy intensive when customers do not choose rush delivery, which includes 2-day shipping. M. Sanjayan, the CEO of Conservation International, explains that getting your online purchase delivered at home in just two days puts more polluting vehicles on the road. In addition to choosing standard shipping, consumers must be satisfied with their purchases so that they do not constantly return items; returning shipments takes back the positive contribution to the environment. Research by Vox found that in 2016 transportation overtook power plants as the top producer of carbon dioxide emissions in the US for the first time since 1979. Nearly a quarter of these transportation emissions came from trucks carrying medium- and heavy-duty loads of merchandise; these trucks are often the ones doing e-commerce shipping.
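A toy model of the consolidation effect described above: shipping n items as one parcel versus n separate rush parcels. The per-parcel and per-item figures are illustrative assumptions, not data from the text.

```python
# Toy consolidation model: a fixed per-delivery cost (van share, packaging)
# plus a marginal per-item cost. Both figures are assumed for illustration.

PER_PARCEL_KG = 1.0   # assumed fixed CO2 cost of one delivery
PER_ITEM_KG = 0.2     # assumed marginal CO2 cost per item in a parcel

def shipment_co2_kg(items: int, consolidated: bool) -> float:
    parcels = 1 if consolidated else items
    return parcels * PER_PARCEL_KG + items * PER_ITEM_KG

if __name__ == "__main__":
    n = 5
    print(f"{n} items, separate rush parcels: {shipment_co2_kg(n, False):.1f} kg CO2")
    print(f"{n} items, one consolidated box:  {shipment_co2_kg(n, True):.1f} kg CO2")
```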
Since 2009, UPS deliveries have increased by 65%. With the increase in deliveries, there is greater demand for trucks on the road, resulting in more carbon emissions in the atmosphere. More recently, there has been research into combating greenhouse gas emissions with better traffic signals. These WiFi-connected signals cut down on wait time at stop lights and reduce wasted fuel. The signals help automobiles adjust their velocity so that they can increase their chances of getting through the light, smoothing travel patterns and obtaining fuel-economy benefits. These small adjustments result in big changes in fuel savings. Cities that have started implementing smart light technology, such as San Jose, CA and Las Vegas, NV, have seen fuel savings of 15-20%. According to the United States Environmental Protection Agency, transportation is the second leading source of GHG emissions behind electricity, and the agency projects that by 2050 freight transportation emissions will pass passenger vehicle emissions. Another technological advancement is truck platooning, in which trucks send signals to neighboring trucks about their speed. This communication between vehicles reduces congestion on the roads and reduces drag, increasing fuel savings by 10 to 20%.
With these tech implementations in major cities and towns, there is the ability to reach an optimal level of pollution given the rise of e-commerce shipments. Decreasing emissions would bring the shipping market toward equilibrium, which can be achieved by consolidating packages, smart light technology, or truck platooning.
See also
References
External links
Personal Transportation Factsheet by the University of Michigan's Center for Sustainable Systems
Comparison of CO2 Emissions by Different Modes of Transport by the International Chamber of Shipping
Environmental impact by source
Articles containing video clips | Health and environmental impact of transport | Physics | 3,730 |
1,930,122 | https://en.wikipedia.org/wiki/Residual%20entropy | Residual entropy is the difference in entropy between a non-equilibrium state and the crystal state of a substance close to absolute zero. This term is used in condensed matter physics to describe the entropy at zero kelvin of a glass or plastic crystal referred to the crystal state, whose entropy is zero according to the third law of thermodynamics. It occurs if a material can exist in many different states when cooled. The most common non-equilibrium state is the vitreous state, glass.
A common example is the case of carbon monoxide, which has a very small dipole moment. As the carbon monoxide crystal is cooled to absolute zero, few of the carbon monoxide molecules have enough time to align themselves into a perfect crystal (with all of the carbon monoxide molecules oriented in the same direction). Because each molecule can freeze in either of two orientations, the crystal is locked into a state with 2^N different corresponding microstates, giving a residual entropy of S = N k_B ln 2, rather than zero.
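The two-orientation count above gives the standard textbook estimate for the molar residual entropy, a minimal sketch of which is:

```python
# Residual entropy of a CO-like crystal: each of N molecules freezes into
# one of two orientations, so S = k_B ln(2^N) = N k_B ln 2, i.e. R ln 2
# per mole. This is the classic textbook estimate for carbon monoxide.
import math

R = 8.314  # gas constant, J/(K*mol)

def residual_entropy_per_mole(orientations: int) -> float:
    """Molar residual entropy for `orientations` equivalent states per molecule."""
    return R * math.log(orientations)

if __name__ == "__main__":
    print(f"CO (2 orientations): {residual_entropy_per_mole(2):.2f} J/(K*mol)")
```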
Another example is any amorphous solid (glass). These have residual entropy, because the atom-by-atom microscopic structure can be arranged in a huge number of different ways across a macroscopic system.
The residual entropy has a somewhat special significance compared to other residual properties, in that it has a role in the framework of residual entropy scaling, which is used to compute transport coefficients (coefficients governing non-equilibrium phenomena) directly from the equilibrium property residual entropy, which can be computed directly from any equation of state.
History
One of the first examples of residual entropy was pointed out by Pauling to describe water ice. In water, each oxygen atom is bonded to two hydrogen atoms. However, when water freezes it forms a tetrahedral structure in which each oxygen atom has four hydrogen neighbors (due to neighboring water molecules). The hydrogen atoms sitting between the oxygen atoms have some degree of freedom as long as each oxygen atom has two hydrogen atoms that are 'nearby', thus forming the traditional H2O water molecule. However, it turns out that for a large number of water molecules in this configuration, the hydrogen atoms have a large number of possible configurations that meet the 2-in 2-out rule (each oxygen atom must have two 'near' (or 'in') hydrogen atoms, and two 'far' (or 'out') hydrogen atoms). This freedom exists down to absolute zero, where previously a single, unique configuration had been expected. The existence of these multiple configurations (a choice of position for each hydrogen along its O-O axis) that meet the rules at absolute zero (2-in 2-out for each O) amounts to randomness, or in other words, entropy. Thus systems that can take multiple configurations at or near absolute zero are said to have residual entropy.
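Pauling's counting argument for ice can be reproduced in a few lines. A minimal sketch: each of the 2N hydrogens has two positions, but only 6 of the 16 hydrogen configurations around a given oxygen satisfy the 2-in 2-out rule, giving roughly (3/2)^N allowed arrangements.

```python
# Pauling's estimate for the residual entropy of ice:
# W ~ 2^(2N) * (6/16)^N = (3/2)^N, so S ~ R ln(3/2) per mole.
import math

R = 8.314  # J/(K*mol)

W_per_molecule = 2**2 * (6 / 16)   # = 3/2 effective states per H2O
S_pauling = R * math.log(W_per_molecule)

print(f"Pauling residual entropy of ice: {S_pauling:.2f} J/(K*mol)")
# ~= 3.37 J/(K*mol), close to the measured value for water ice
```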
Although water ice was the first material for which residual entropy was proposed, it is generally very difficult to prepare pure defect-free crystals of water ice for studying. A great deal of research has thus been undertaken into finding other systems that exhibit residual entropy. Geometrically frustrated systems in particular often exhibit residual entropy. An important example is spin ice, which is a geometrically frustrated magnetic material where the magnetic moments of the magnetic atoms have Ising-like magnetic spins and lie on the corners of network of corner-sharing tetrahedra. This material is thus analogous to water ice, with the exception that the spins on the corners of the tetrahedra can point into or out of the tetrahedra, thereby producing the same 2-in, 2-out rule as in water ice, and therefore the same residual entropy. One of the interesting properties of geometrically frustrated magnetic materials such as spin ice is that the level of residual entropy can be controlled by the application of an external magnetic field. This property can be used to create one-shot refrigeration systems.
See also
Proton disorder in ice
Ice rules
Geometrical frustration
Notes
Thermodynamic entropy | Residual entropy | Physics | 793 |
1,496,720 | https://en.wikipedia.org/wiki/Altacast | Altacast (formerly known as Edcast and Oddcast) is a free and open-source audio encoder that can be used to create Internet streams of varying types. Many independent and commercial broadcasters use Altacast to create Internet radio stations, such as those listed on the Icecast, Loudcaster and Shoutcast station directories.
Development
The original streaming software, Oddcast, was developed from 2000 to 2010. The official site at Oddsock.org hosted streaming media tools, which included Oddcast, Stream Transcoder, Icecast Station Browser plugin, Song Requester plugin and Do Something plugin. In late November 2010, Oddsock.org was shut down.
Edcast, a fork of Oddcast, is being updated and hosted at Club RIO. In early 2012, development of Edcast was moved to Google Code and SourceForge. As of October 30, 2011, the latest stable version is 3.33.2011.1026 and the latest beta version is 3.37.2011.1214.
In September 2012, a second fork, Altacast was released. The Standalone & DSP edition are derived from GPL software and is available on GitHub, while the RadioDJ edition is written in .NET Framework and developed separately. A version 2.0 for the Standalone & DSP edition that will be SHOUTcast v2 compatible is planned for the future.
The Altacast plugin for RadioDJ no longer functions with new versions of RadioDJ v2; Altacast is not supported by the developer of RadioDJ due to legal issues.
Features
Altacast is supported on Windows. It will run in conjunction with various media players compatible with Winamp plugins, such as AIMP, JetAudio, KMPlayer, MediaMonkey, MusicBee and foobar2000, as well as a standalone encoder.
Altacast Standalone & DSP can stream to Icecast and SHOUTcast servers in Ogg Vorbis and Ogg FLAC out-of-the-box. MP3, AAC and AAC+ support can be added via the LAME encoder (lame_enc.dll), FAAC encoder (libFAAC.dll), and CT-aacPlus encoder (enc_aacplus.dll obtainable from Winamp 5.61) respectively. Adjustable settings for each encoder include bitrate (for MP3, AAC+, Ogg Vorbis), quality (for AAC, Ogg Vorbis), sample rate (22050 Hz or 44100 Hz) and channels (Parametric Stereo is available for AAC+ up till 56 kbit/s).
SHOUTcast v2 is currently not officially supported in Altacast Standalone & DSP. However, it is possible to connect to stream ID no. 1 of a SHOUTcast v2 server in legacy (v1) mode. As a temporary workaround, the SHOUTcast DSP 1.9.2 plugin for Winamp-compatible media players may be used to broadcast to alternate mount points (e.g. stream ID no. 2).
SHOUTcast v2 and Opus support is available in v1.4 onwards in the plugin.
Reception
In 2007 Oddcast was used in a document from the Department of Audio Communication of Technische Universität Berlin as part of a description of setting up an internet radio broadcast system. Use of Edcast for a similar purpose was described in a 2010 article in PCWorld magazine. A 2016 thesis for Oulu University of Applied Sciences described the use of Altacast in the implementation of an internet radio station, while a 2018 article in the Linux Journal recommended it as a compatible source client for Microsoft Windows when setting up a freeware internet radio station using Liquidsoap, Icecast and open standards. Académie d'Orléans-Tours has used web radio (internet radio) for broadcasts for students, and the use of Edcast, subsequently Altacast, in its system has been described.
See also
List of Internet radio stations
List of streaming media systems
References
External links
Audio software
Internet radio in the United States
Streaming software
Internet radio software
2001 software
Companies based in Chicago | Altacast | Engineering | 861 |
1,970,162 | https://en.wikipedia.org/wiki/List%20of%20backup%20software | This is a list of notable backup software that performs data backups. Archivers, transfer protocols, and version control systems are often used for backups but only software focused on backup is listed here. See Comparison of backup software for features.
Free and open-source software
Proprietary
Defunct software
See also
Comparison of file synchronization software
Comparison of online backup services
Data recovery
File synchronization
List of data recovery software
Remote backup service
Tape management system
Notes
References
Backup software | List of backup software | Technology | 96 |
33,743,713 | https://en.wikipedia.org/wiki/Unapproved%20Drugs%20Initiative | Unapproved Drugs Initiative is a program by the U.S Food and Drug Administration announced in June 2006 to remove unapproved drugs from the market.
Some 14 categories of drugs have been affected.
It has been controversial due to the resulting increase in some drug prices.
In April 2010, in an editorial in the New England Journal of Medicine (NEJM), A.S. Kesselheim and D.H. Solomon said that the rewards of this legislation are not calibrated to the quality or value of the information produced, that there is no evidence of meaningful improvement to public health, that it would be much less expensive for the FDA or National Institutes of Health to pay for trials themselves on widely available drugs such as colchicine, and that the cost burden falls primarily on patients or their insurers. URL Pharma posted a detailed rebuttal of the NEJM editorial.
Drugs affected
Colchicine (pill price rose from $0.09 to $4.85)
Ergotamine
Albuterol
many others
References
Food and Drug Administration | Unapproved Drugs Initiative | Chemistry | 219 |
11,798,496 | https://en.wikipedia.org/wiki/Marasmius%20stenophyllus | Marasmius stenophyllus is a fungal plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
stenophyllus
Taxa named by Camille Montagne
Fungus species | Marasmius stenophyllus | Biology | 45 |
70,148,516 | https://en.wikipedia.org/wiki/Monocrotaline | Monocrotaline (MCT) is a pyrrolizidine alkaloid that is present in plants of the Crotalaria genus. These species can synthesise MCT from amino acids, and MCT can cause liver, lung and kidney damage in various organisms. Initial stress factors are released intracellularly upon binding of MCT to BMPR2 receptors, and elevated MAPK phosphorylation levels are induced, which can cause cancer in Homo sapiens. MCT can be detoxified in rats via oxidation, followed by glutathione conjugation and hydrolysis.
Origin
MCT occurs in the seeds of certain species of the genus Crotalaria, for example, Crotalaria spectabilis and Crotalaria mucronata. MCT is a chemical with pesticide properties and therefore serves as a defence mechanism to fend off predators. However, it can also lead to the poisoning of mammals and birds.
The butterfly Utetheisa ornatrix also benefits from MCT by using it as protection. The larvae of the butterfly feed almost exclusively on Crotalaria seeds, where MCT is accumulated in their bodies. In this way, they are protected from predators such as spiders for the rest of their lives (even after pupation as butterflies).
Toxicity
MCT is an acute toxic substance. The toxicity of MCT is dose-dependent, and it can harm both organs and genetic material (genotoxicity). The organs that will be targeted are the liver (hepatotoxicity), the kidneys (nephrotoxicity) and the lungs (pneumotoxicity). MCT falls into Category 3 toxicity for oral ingestion and Category 2 toxicity for carcinogenicity according to the European Chemicals Agency (ECHA).
Studies have concluded that ingestion of MCT causes centrilobular necrosis, pulmonary fibrosis and an increase in blood urea nitrogen. These conclusions rest on the animal models used in the studies: the effects were observed in rats rather than in humans. The studies also concluded that mice are more resilient to MCT than rats, with more mice than rats surviving the experiments.
Biosynthesis of monocrotaline
The biosynthesis of MCT involves condensation of monocrotalic acid (MCA), which is derived from L-isoleucine, and retronecine, which is derived from putrescine.
MCA is formed from L-isoleucine and a synthon for propionate of uncertain origin.
Retronecine is synthesized from L-arginine via a multi-step pathway involving putrescine and spermidine intermediates:
Putrescine is converted to spermidine by addition of a propylamino group from decarboxylated S-adenosylmethionine (S-adenosylmethioninamine) (4: spermidine synthase). Spermidine and another molecule of putrescine react to form the symmetric homospermidine with loss of 1,3-diaminopropane (5: homospermidine synthase).
Oxidation to 4,4'-iminodibutanal (likely catalysed by 6: copper-dependent diamine oxidases) is followed by cyclization to pyrrolizidine-1-carbaldehyde, which is reduced to 1-hydroxymethylpyrrolizidine (likely catalysed by 7: alcohol dehydrogenase). To form the final product retronecine, 1-hydroxymethylpyrrolizidine is desaturated and hydroxylated, respectively, by unknown enzymes.
MCA and retronecine are then condensed to form MCT via an unknown mechanism.
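The multi-step retronecine branch described above can be summarized compactly as a list of (substrate, product, catalyst) steps. A purely didactic sketch, not a curated pathway-database entry; enzyme numbering follows the text, and "unknown" marks steps whose catalyst the text does not identify:

```python
# Illustrative encoding of the retronecine branch of MCT biosynthesis
# as (substrate, product, enzyme) steps, following the text above.

PATHWAY = [
    ("putrescine", "spermidine", "4: spermidine synthase"),
    ("spermidine + putrescine", "homospermidine", "5: homospermidine synthase"),
    ("homospermidine", "4,4'-iminodibutanal", "6: copper-dependent diamine oxidase (likely)"),
    ("4,4'-iminodibutanal", "pyrrolizidine-1-carbaldehyde", "cyclization"),
    ("pyrrolizidine-1-carbaldehyde", "1-hydroxymethylpyrrolizidine", "7: alcohol dehydrogenase (likely)"),
    ("1-hydroxymethylpyrrolizidine", "retronecine", "desaturation + hydroxylation (unknown)"),
    ("retronecine + monocrotalic acid", "monocrotaline", "condensation (unknown)"),
]

for substrate, product, enzyme in PATHWAY:
    print(f"{substrate:35s} -> {product:30s} [{enzyme}]")
```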
Biotransformation of monocrotaline
MCT is detoxified in rats by the liver via divergent biotransformation reactions. These reactions proceed as follows:
In rats, MCT is first oxidised by the biotransformation enzyme cytochrome P450 (CYP) to form dehydro MCT. In this phase 1 reaction, a carbon-carbon double bond is introduced in place of a single bond.
After the phase 1 reaction, the oxidised intermediate can either undergo hydrolysis to form monocrotalic acid and dihydropyrolizine or perform group transfer with glutathione to form MCA and a glutathione-conjugated dihydropyrolizine (GS-conjugation). These metabolites are more hydrophilic than MCT and could therefore be more easily excreted by the kidneys, which results in less exposure from MCT to the liver. The phase 2 reactions are thus classified as the detoxifying reactions during the biotransformation of MCT in rats.
During the phase 2 reactions, dehydro MCT, a toxic intermediate, can react with nucleophilic biological macromolecules (NuS). Addition of such molecules may result in cytotoxicity. Dehydro MCT may also undergo further toxification after hydrolysis, as dihydropyrolizine can be further oxidized to 7-dihydro-1-hydroxymethyl-5H-pyrrolizine (DHP). This intermediate can bind to DNA, which may cause genotoxicity.
Note that the biotransformation routes may differ based on the studied organism.
Mechanism of action
MCT aggregates on and activates the calcium-sensing receptor (CaSR) of pulmonary artery endothelial cells to trigger endothelial damage and, ultimately, induces pulmonary hypertension. MCT binds to the extracellular domain of the CaSR. Thereby, the assembly of CaSR is enhanced, which triggers the mobilisation of calcium signalling and damages pulmonary artery endothelial cells. In addition, MCT strengthens this effect by binding to the bone morphogenetic protein receptor type II (BMPR2), a transmembrane receptor. BMPR2 inhibition occurs, which in turn induces a blockade of BMPR1 receptor activation via phosphorylation. Inhibiting this process disturbs cell differentiation processes and ossification. Interference with these receptors induces pulmonary arterial hypertension.
MAPK is a mitogen-activated protein kinase that is activated upon BMPR2 activation. The protein kinase in turn phosphorylates p38 via a reinforced cascade of intracellular signals. It also activates p21, which has a regulating role in the cell cycle. However, MCT administration inhibits this process via a blockade of BMPR2. Cytokines such as TNF-α are released, which activate inflammation mechanisms, attracting neutrophils among others. Furthermore, inducible nitric oxide synthases (iNOS) are upregulated upon MCT-induced cellular stress, whereas endothelial NOS (eNOS) is downregulated. The cytokine TGF-β (also released by macrophages via chemotaxis during inflammation reactions, in a positive feedback loop with TNF-α) is a transforming growth factor that is upregulated as a result of the iNOS increase, contributing to pulmonary artery proliferation. Increased levels of iNOS also stimulate caspase-3 activity, which increases apoptosis levels.
See also
Oxidative stress
References
Pyrrolizidine alkaloids
Lactones
Diols | Monocrotaline | Chemistry | 1,571 |
2,070,766 | https://en.wikipedia.org/wiki/Constitutional%20growth%20delay | Constitutional delay of growth and puberty (CDGP) is a term describing a temporary delay in the skeletal growth and thus height of a child with no physical abnormalities causing the delay. Short stature may be the result of a growth pattern inherited from a parent (familial) or occur for no apparent reason (idiopathic). Typically at some point during childhood, growth slows down, eventually resuming at a normal rate. CDGP is the most common cause of short stature and delayed puberty.
Synonyms
Constitutional Delay of Growth and Adolescence (CDGA)
Constitutional Growth Delay (CGD)
See also
Idiopathic short stature
Failure to thrive
References
Developmental biology
Pediatrics
Sexuality and age
Human height | Constitutional growth delay | Biology | 145 |
23,515,456 | https://en.wikipedia.org/wiki/Mohammadia%20School%20of%20Engineering | The Mohammadia School of Engineers (abbreviated EMI) is the first engineering school to be established in Morocco. EMI was founded in 1959 by King Mohammed V as Morocco's first polytechnic; it is one of the largest schools of engineering in Morocco, and its most prestigious.
History
In 1982, by order of King Hassan II, EMI became a school combining academic and military education, in order to control students promoting communism.
The new model followed that of the École Polytechnique in Paris.
Special events
First computer in Morocco, a gift from King Baudouin of the Belgians to EMI.
First Internet node in Morocco, introduced by EMI.
First school to introduce, in 2003, the Internet country code top-level domain ".ma".
First school to introduce a bachelor's degree in Electrical Engineering.
Organization
After three years of academic studies and military training, the students take an oath before His Majesty the King of Morocco in order to receive the 'Grandes Écoles d'ingénieurs' degree, a Bac+5 qualification in the French education system and the equivalent of a master's degree. On the military side, the students graduate as reserve officers. The school consists of nine departments:
Department of Civil Engineering
Department of Computer Science
Department of Electrical Engineering
Department of Industrial Engineering
Department of Mechanical Engineering
Department of Mineral Engineering
Department of Modelling and Scientific Computing
Department of Networks & Telecommunications
Department of Process Engineering
References
External links
Official website
AIEM Europe web site
Mohammadia School of Engineering
Engineering universities and colleges
Education in Morocco
Schools of informatics
Grandes écoles
Universities and colleges established in 1959
Education in Rabat
Buildings and structures in Rabat
1959 establishments in Morocco
20th-century architecture in Morocco | Mohammadia School of Engineering | Engineering | 353 |
22,615,385 | https://en.wikipedia.org/wiki/Firehose%20instability | The firehose instability (or hose-pipe instability) is a dynamical instability of thin or elongated galaxies. The instability causes the galaxy to buckle or bend in a direction perpendicular to its long axis. After the instability has run its course, the galaxy is less elongated (i.e. rounder) than before. Any sufficiently thin stellar system, in which some component of the internal velocity is in the form of random or counter-streaming motions (as opposed to rotation), is subject to the instability.
The firehose instability is probably responsible for the fact that elliptical galaxies and dark matter haloes never have axis ratios more extreme than about 3:1, since this is roughly the axis ratio at which the instability sets in. It may also play a role in the formation of barred spiral galaxies, by causing the bar to thicken in the direction perpendicular to the galaxy disk.
The firehose instability derives its name from a similar instability in magnetized plasmas. However, from a dynamical point of view, a better analogy is with the Kelvin–Helmholtz instability, or with beads sliding along an oscillating string.
Stability analysis: sheets and wires
The firehose instability can be analyzed exactly in the case of an infinitely thin, self-gravitating sheet of stars. If the sheet experiences a small displacement h(x, t) in the z direction, the vertical acceleration for stars of velocity v_x as they move around the bend is
a_z = (∂/∂t + v_x ∂/∂x)² h = ∂²h/∂t² + 2v_x ∂²h/∂t∂x + v_x² ∂²h/∂x²
provided the bend is small enough that the horizontal velocity is unaffected. Averaged over all stars at x, this acceleration must equal the gravitational restoring force per unit mass, F(x). In a frame chosen such that the mean streaming motions are zero, this relation becomes
∂²h/∂t² + σ_x² ∂²h/∂x² = F(x)
where σ_x is the horizontal velocity dispersion in that frame.
For a perturbation of the form
h(x, t) = H exp[i(kx − ωt)]
the gravitational restoring force is
F(x) = −2πGΣ|k| h(x, t)
where Σ is the surface mass density. The dispersion relation for a thin self-gravitating sheet is then
ω² = 2πGΣ|k| − σ_x² k²
The first term, which arises from the perturbed gravity, is stabilizing, while the second term, due to the centrifugal force that the stars exert on the sheet, is destabilizing.
For sufficiently long wavelengths,
λ ≥ λ_c = σ_x² / (GΣ),
the gravitational restoring force dominates, and the sheet is stable; at shorter wavelengths the sheet is unstable. The firehose instability is precisely complementary, in this sense, to the Jeans instability in the plane, which is stabilized at short wavelengths, λ < σ_x² / (GΣ).
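A numerical sketch of the dispersion relation just derived, showing the sign change of ω² at the critical wavelength. The values of Σ and σ_x are arbitrary illustrative numbers, not data from the text:

```python
# Thin-sheet bending dispersion relation: ω² = 2πGΣ|k| − σ²k².
# Σ and σ are assumed illustrative values (SI units); the point is the
# stability boundary at λ_c = σ²/(GΣ).
import numpy as np

G = 6.674e-11          # gravitational constant, SI
sigma_v = 20e3         # horizontal velocity dispersion σ_x [m/s] (assumed)
Sigma = 50.0           # surface mass density Σ [kg/m²] (assumed)

lambda_c = sigma_v**2 / (G * Sigma)
print(f"critical wavelength λ_c ≈ {lambda_c:.3e} m")

for lam in [0.5 * lambda_c, 2.0 * lambda_c]:
    k = 2 * np.pi / lam
    omega2 = 2 * np.pi * G * Sigma * abs(k) - (sigma_v * k) ** 2
    state = "stable (ω² > 0)" if omega2 > 0 else "unstable (ω² < 0)"
    print(f"λ = {lam:.3e} m: ω² = {omega2:+.3e}, {state}")
```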
A similar analysis can be carried out for a galaxy that is idealized as a one-dimensional wire, with density that varies along the axis. This is a simple model of a (prolate) elliptical galaxy. Some unstable eigenmodes are shown in Figure 2 at the left.
Stability analysis: finite-thickness galaxies
At wavelengths shorter than the actual vertical thickness of a galaxy, the bending is stabilized. The reason is that stars in a finite-thickness galaxy oscillate vertically with an unperturbed frequency ν; like any oscillator, the phase of the star's response to the imposed bending depends entirely on whether the forcing frequency is greater than or less than its natural frequency. If the forcing frequency exceeds ν for most stars, the overall density response to the perturbation will produce a gravitational potential opposite to that imposed by the bend and the disturbance will be damped. These arguments imply that a sufficiently thick galaxy (with low ν) will be stable to bending at all wavelengths, both short and long.
Analysis of the linear normal modes of a finite-thickness slab shows that bending is indeed stabilized when the ratio of vertical to horizontal velocity dispersions exceeds about 0.3. Since the elongation of a stellar system with this anisotropy is approximately 15:1 — much more extreme than observed in real galaxies — bending instabilities were believed for many years to be of little importance. However, Fridman & Polyachenko showed that the critical axis ratio for stability of homogeneous (constant-density) oblate and prolate spheroids was roughly 3:1, not 15:1 as implied by the infinite slab, and Merritt & Hernquist found a similar result in an N-body study of inhomogeneous prolate spheroids (Fig. 1).
The discrepancy was resolved in 1994. The gravitational restoring force from a bend is substantially weaker in finite or inhomogeneous galaxies than in infinite sheets and slabs, since there is less matter at large distances to contribute to the restoring force. As a result, the long-wavelength modes are not stabilized by gravity, as implied by the dispersion relation derived above. In these more realistic models, a typical star feels a vertical forcing frequency from a long-wavelength bend that is roughly twice the frequency ν_x of its unperturbed orbital motion along the long axis. Stability to global bending modes then requires that this forcing frequency be greater than ν_z, the frequency of orbital motion parallel to the short axis. The resulting (approximate) condition
2ν_x ≳ ν_z
predicts stability for homogeneous prolate spheroids rounder than 2.94:1, in excellent agreement with the normal-mode calculations of Fridman & Polyachenko and with N-body simulations of homogeneous oblate and inhomogeneous prolate galaxies.
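The quoted critical axis ratio can be checked numerically. Inside a homogeneous ellipsoid the potential is quadratic, with oscillation frequencies ν_i² = 2πGρA_i, where the A_i are the standard ellipsoid index coefficients (summing to 2). A minimal sketch: the closed-form prolate expressions below are textbook results, while the marginal condition solved for (A_z = 4A_x, i.e. ν_z = 2ν_x) and the root-finder setup are this sketch's own framing.

```python
# Numerical check of the 2ν_x ≳ ν_z criterion for a homogeneous prolate
# spheroid with semi-axes a > b = c, solving A_short = 4 * A_long for the
# axis ratio q = a/c.
import numpy as np
from scipy.optimize import brentq

def prolate_coeffs(q: float):
    """Index coefficients (A_long, A_short) for axis ratio q = a/c > 1."""
    e = np.sqrt(1.0 - 1.0 / q**2)            # eccentricity
    A_long = (2.0 * (1.0 - e**2) / e**3) * (np.arctanh(e) - e)
    A_short = (2.0 - A_long) / 2.0           # the two short axes share the rest
    return A_long, A_short

def marginal(q: float) -> float:
    A_long, A_short = prolate_coeffs(q)
    return A_short - 4.0 * A_long            # zero where ν_z = 2 ν_x

q_crit = brentq(marginal, 1.5, 10.0)
print(f"critical axis ratio ≈ {q_crit:.2f}:1")   # ≈ 2.94:1, as quoted above
```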
The situation for disk galaxies is more complicated, since the shapes of the dominant modes depend on whether the internal velocities are azimuthally or radially biased. In oblate galaxies with radially-elongated velocity ellipsoids, arguments similar to those given above suggest that an axis ratio of roughly 3:1 is again close to critical, in agreement with N-body simulations for thickened disks. If the stellar velocities are azimuthally biased, the orbits are approximately circular and so the dominant modes are angular (corrugation) modes, δz ∝ exp(imφ). The approximate condition for stability becomes
ν ≳ mΩ
with Ω the circular orbital frequency.
Importance
The firehose instability is believed to play an important role in determining the structure of both spiral and elliptical galaxies and of dark matter haloes.
As noted by Edwin Hubble and others, elliptical galaxies are rarely if ever observed to be more elongated than E6 or E7, corresponding to a maximum axis ratio of about 3:1. The firehose instability is probably responsible for this fact, since an elliptical galaxy that formed with an initially more elongated shape would be unstable to bending modes, causing it to become rounder.
Simulated dark matter haloes, like elliptical galaxies, never have elongations greater than about 3:1. This is probably also a consequence of the firehose instability.
N-body simulations reveal that the bars of barred spiral galaxies often "puff up" spontaneously, converting the initially thin bar into a bulge or thick disk subsystem. The bending instability is sometimes violent enough to weaken the bar. Bulges formed in this way are very "boxy" in appearance, similar to what is often observed.
The firehose instability may play a role in the formation of galactic warps.
See also
Stellar dynamics
References
Astrophysics
Stability theory
Plasma instabilities | Firehose instability | Physics,Astronomy,Mathematics | 1,425 |
55,268,273 | https://en.wikipedia.org/wiki/SN%202012fr | SN 2012fr was a supernova in the NGC 1365 galaxy that was discovered by Alain Klotz on October 27, 2012.
Discovery
When Klotz, an astrophysicist from the Institut de Recherche en Astrophysique et Planetologie in France, checked the galaxy images from the TAROT La Silla observatory, the comparison of the night image of the galaxy with a reference image taken one month before clearly revealed the presence of a new star 3"W and 52"N from the nucleus of the galaxy.
After checking for objects such as asteroids that might have been in the same location, four individual images showing the new star were retrieved from the TAROT image archive. The object also did not appear in images taken several days before. On October 28 at 6:41 UTC, Emmanuel Conseil sent an email to Alain Klotz indicating he had taken an image of NGC 1365 using the Slooh robotic telescope. The picture showed the supernova candidate and was considered the first confirmation.
At 22:00 UTC, Michael Childress from the Australian National University took the first spectrum, indicating that it was a type Ia supernova 11 days before maximum light.
On October 31, 2012, the supernova was given the official designation 2012fr.
Observations
The TAROT telescope took images of NGC 1365 and NGC 1316 every night from 29 October. The preliminary light curve indicated the supernova becoming bluer before reaching maximum light.
External links
Light curves and spectra on the Open Supernova Catalog
Discovery process by Alain Klotz
Detailed optical and ultraviolet Observations
AAVSO
References
20121027
Supernovae
Discoveries by Alain Klotz
Fornax | SN 2012fr | Chemistry,Astronomy | 345 |
2,795,512 | https://en.wikipedia.org/wiki/TeenScreen | The TeenScreen National Center for Mental Health Checkups at Columbia University was a national mental health and suicide risk screening initiative for middle- and high-school age adolescents. On November 15, 2012, according to its website, the program was terminated. The organization operated as a center in the Division of Child and Adolescent Psychiatry Department at Columbia University, in New York City. The program was developed at Columbia University in 1999, and launched nationally in 2003. Screening was voluntary and offered through doctors' offices, schools, clinics, juvenile justice facilities, and other youth-serving organizations and settings. , the program had more than 2,000 active screening sites across 46 states in the United States, and in other countries including Australia, Brazil, India and New Zealand.
Screening program
Organization
The program was developed by a team of researchers at Columbia University, led by David Shaffer. The goal was to make researched and validated screening questionnaires available for voluntary identification of possible mental disorders and suicide risk in middle and high school students. The questionnaire they developed is known as the Columbia Suicide Screen, which entered into use in 1999, an early version of what is now the Columbia Health Screen. In 2003, the New Freedom Commission on Mental Health, created under the administration of George W. Bush, identified the TeenScreen program as a "model" program and recommended adolescent mental health screening become common practice.
The organization launched an initiative to provide voluntary mental health screening to all U.S. teens in 2003. The following year, TeenScreen was included in the national Suicide Prevention Resource Center's (SPRC) list of evidence-based suicide prevention programs. In 2007, it was included as an evidence-based program in the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA)'s National Registry of Evidence-based Programs and Practices. In 2009, the organization launched the TeenScreen Primary Care initiative to increase mental health screening by pediatricians and other primary care providers, the same year the U.S. Preventive Services Task Force recommended annual adolescent mental health screening as part of routine primary care, and the Institute of Medicine recommended expansion of prevention and early identification programs.
, the program was led by executive director Laurie Flynn, deputy executive director Leslie McGuire and scientific advisor Mark Olfson, M.D., alongside a National Advisory Council of healthcare professionals, educators and advocates.
As of November 15, 2012, TeenScreen has been terminated, will no longer train or register new programs, and will cease all operations by the end of the year.
Mission and locations
The mission of the TeenScreen National Center was to expand and improve the early identification of mental health problems in youth. In particular, TeenScreen aimed to find young people at risk of suicide or developing mental health disorders so they could be referred for a comprehensive mental health evaluation by a health professional. The program focuses on providing screening to young people in the 11-18 age range. From 2003 until 2012, the program was offered nationally in schools, clinics, doctors' offices and in youth service environments such as shelters and juvenile justice settings. , more than 2,000 primary care providers, schools and community-based sites in 46 states offered adolescent mental health screening through the TeenScreen National Center. In addition, the screening was also being provided in other countries including Australia, Brazil, India, New Zealand and Scotland.
Screening process
TeenScreen provided materials, training and technical help through its TeenScreen Primary Care and Schools and Communities programs for primary care providers, schools and youth-serving organizations that provided mental health screening to adolescents. A toolkit was provided, including researched and validated questionnaires, and instructions for administering, scoring and interpreting the screening responses. Primary care program materials included information on primary care referrals for clinical evaluation. In the school and community setting, the screening process was voluntary and required active parental consent and participant assent prior to screening sessions.
The validated questionnaires included items about depression, thoughts of suicide and suicide attempts, anxiety, and substance use. The screening questionnaires typically took up to ten minutes for an adolescent to complete. Once the responses to the questionnaire had been reviewed, any adolescent identified as being at possible risk for suicide or other mental health concerns would then be assessed by a health or mental health professional. The result of this assessment determined whether the adolescent should be referred for mental health services. If so, parents were involved and provided with help locating the appropriate mental health services.
Research, endorsements and responses
Recommendations and research
Mental health screening has been endorsed by the former U.S. Surgeon General David Satcher, who launched a "Call to Action" in 1999 encouraging the development and implementation of safe, effective school-based programs offering intervention, help and support to young people with mental health issues. TeenScreen is included as an evidence-based program in the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA)'s National Registry of Evidence-based Programs and Practices as a scientifically tested and reviewed intervention. In addition, the U.S. Preventive Services Task Force recommended in 2009 that mental health screening for teenagers be integrated into routine primary care appointments.
Studies have been conducted on the effectiveness and impact of mental health screening for young people. In a 2004 systematic evidence review, the U.S. Preventive Services Task Force found that there were no studies that addressed whether screening as part of primary care reduced morbidity and mortality, nor any information on the potential risks of screening. In a later review, published in 2009, the task force found that there was evidence supporting the efficacy of screening tools in identifying teenagers at risk of suicide or mental health disorders.
A team of researchers from Columbia University and the New York State Psychiatric Institute completed a randomized controlled clinical trial on the impact of suicide screening on high school students in New York State from 2002 to 2004. The study found that students who were given a questionnaire about suicide were no more likely to report suicidal thoughts after the survey than students in the control group who had not been questioned. Nor was there any greater risk for "high risk" students. A subsequent study by the researchers, in 2009, found that screening appeared to increase the likelihood that adolescents would receive treatment if they were at risk for mental health disorders or suicide.
A study published in 2011, involving 2,500 high school students, examined the value of routine mental health screening in school to identify adolescents at-risk for mental illness, and to connect those adolescents with recommended follow-up care. The research, conducted between 2005 and 2009 at six public high schools in suburban Wisconsin, found that nearly three out of four high school students identified as being at-risk for having a mental health problem were not in treatment at the time of screening. Of those students identified as at-risk, a significant majority (76.3 percent) completed at least one visit with a mental health provider within 90 days of screening. More than half (56.3 percent) received minimally adequate treatment, defined as having three or more visits with a provider, or any number of visits if termination was agreed to by the provider.
A separate study published in 2011, found that mental health screening was effective at connecting African-American middle school students from a predominantly low-income area with school-based mental health services. Researchers have also found evidence to support the addition of mental health screenings for adolescents while undergoing routine physical examinations.
Acceptance and critical responses
Recommendations endorsing adolescent mental health screening have been issued by the Institute of Medicine (IOM) and the U.S. Preventive Services Task Force (USPSTF). The American Academy of Pediatrics recommends assessment of mental health at primary care visits and suggests the use of validated screening instruments. These add to statements and recommendations to screen adolescents for mental illness from the American Medical Association (AMA), the Society for Adolescent Health and Medicine, the American Academy of Family Physicians and the National Association of Pediatric Nurse Practitioners. TeenScreen has been endorsed by a number of organizations, including the National Alliance for the Mentally Ill, and federal and state commissions such as the New Freedom Commission.
There has been opposition to mental health screening programs in general, and TeenScreen in particular, from civil liberties, parental rights, and politically conservative groups. Much of the opposition has been led by groups claiming that the organization was funded by the pharmaceutical industry. In 2011, Senator Charles E. Grassley launched an inquiry into the funding of health advocacy groups by pharmaceutical, medical-device, and insurance companies, sending a letter to TeenScreen and 33 other organizations, such as the American Cancer Society, asking about their financial ties to the pharmaceutical industry. TeenScreen replied that it did not accept money from medical companies, which satisfied Senator Grassley that TeenScreen did not receive funding from the pharmaceutical industry.
In 2005, TeenScreen was criticized following media coverage of a suit filed against a local screening program in Indiana by the parents of a teenager who had taken part in screening. The suit alleged that the screening had taken place without parental permission. The complaint led to a change in how parental consent was handled by TeenScreen sites. In 2006, the program's policy was amended so that active rather than passive consent was required from parents before screening adolescents in a school setting.
References
External links
National Registry of Evidence Based Programs and Practice
Mental health organizations based in New York (state)
Clinical psychology
Health informatics
Child and adolescent psychiatry
Suicide prevention
Organizations disestablished in 2012
Pediatric organizations | TeenScreen | Biology | 1,913 |
15,711,726 | https://en.wikipedia.org/wiki/Extinct%20radionuclide | An extinct radionuclide is a radionuclide that was formed by nucleosynthesis before the formation of the Solar System, about 4.6 billion years ago, but has since decayed to virtually zero abundance and is no longer detectable as a primordial nuclide. Extinct radionuclides were generated by various processes in the early Solar system, and became part of the composition of meteorites and protoplanets. All widely documented extinct radionuclides have half-lives shorter than 100 million years.
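The 100-million-year figure follows from exponential decay: the surviving fraction after time t is N/N0 = 2^(-t/t_half), which becomes vanishingly small once dozens of half-lives have elapsed. A minimal illustration in Python (the half-life values are well-known approximations):

    import math

    SOLAR_SYSTEM_AGE = 4.6e9  # years

    def surviving_fraction(half_life_years, elapsed_years=SOLAR_SYSTEM_AGE):
        """Fraction of an initial radionuclide population remaining
        after elapsed_years, given its half-life: N/N0 = 2**(-t/t_half)."""
        return 2.0 ** (-elapsed_years / half_life_years)

    # A nuclide with a 100-million-year half-life has gone through 46
    # half-lives since the Solar System formed:
    print(surviving_fraction(1.0e8))   # ~1.4e-14 -- effectively extinct
    # Uranium-238 (half-life ~4.47e9 years) is primordial by contrast:
    print(surviving_fraction(4.47e9))  # ~0.49 -- about half remains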
Short-lived radioisotopes that are found in nature are continuously generated or replenished by natural processes, such as cosmic rays (cosmogenic nuclides), background radiation, or the decay chain or spontaneous fission of other radionuclides.
Short-lived isotopes that are not generated or replenished by natural processes are not found in nature, so they are known as extinct radionuclides. Their former existence is inferred from a superabundance of their stable or nearly stable decay products.
Examples of extinct radionuclides include iodine-129 (the first to be noted in 1960, inferred from excess xenon-129 concentrations in meteorites, in the xenon-iodine dating system), aluminium-26 (inferred from extra magnesium-26 found in meteorites), and iron-60.
The Solar System and Earth are formed from primordial nuclides and extinct nuclides. Extinct nuclides have decayed away, but primordial nuclides still exist in their original state (undecayed). There are 251 stable primordial nuclides, and remainders of 35 primordial radionuclides that have very long half-lives.
List of extinct radionuclides
Radionuclides that are not found on Earth, but whose decay products are present, include the examples discussed above: iodine-129, aluminium-26 and iron-60.
Plutonium-244 and samarium-146 have half-lives long enough to still be present on Earth, but they have not been confirmed experimentally to be present.
Notable isotopes with shorter lives still being produced on Earth include:
Manganese-53 and beryllium-10 are produced by cosmic ray spallation on dust in the upper atmosphere.
Uranium-236 is produced in uranium ores by neutrons from other radioisotopes.
Iodine-129 is produced from tellurium-130 by cosmic-ray muons and from cosmic ray spallation of stable xenon isotopes in the atmosphere.
Radioisotopes with half-lives shorter than one million years are also produced: for example, carbon-14 by cosmic ray production in the atmosphere (half-life 5730 years).
Use in geochronology
Although the radioactive isotopes mentioned above are now effectively extinct, the record of their existence is found in their decay products, which are very useful to geologists who wish to use them as geochronometers. Their usefulness derives from several factors: their short half-lives provide high chronological resolution, and the chemical mobility of the elements involved makes it possible to date distinctive geological processes such as igneous fractionation and surface weathering. There are, however, hurdles to overcome when using extinct nuclides. High-precision isotope ratio measurements are essential, because the extinct radionuclides contribute only a small fraction of the daughter isotopes. Compounding this problem is the contribution that high-energy cosmic rays make to the already minute amounts of daughter isotopes formed from the extinct nuclides. Distinguishing the source and abundance of these effects is critical to obtaining accurate ages from extinct nuclides. Additionally, more work needs to be done to determine more precise half-lives for some of these isotopes, such as 60Fe and 146Sm.
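As an illustration of extinct-nuclide chronometry, relative formation ages of early Solar System objects can be inferred by comparing their initial parent-to-stable-isotope ratios, for example 26Al/27Al in the aluminium-magnesium system. A hedged sketch, assuming simple exponential decay and using illustrative ratio values:

    import math

    def relative_age(ratio_reference, ratio_sample, half_life):
        """Time elapsed between two objects' formation, inferred from
        their initial parent/stable-isotope ratios R = R0 * exp(-lambda*t):
        dt = ln(R_ref / R_sample) / lambda."""
        decay_const = math.log(2) / half_life
        return math.log(ratio_reference / ratio_sample) / decay_const

    # Illustrative values for the 26Al-26Mg system (26Al half-life
    # ~0.72 Myr): an object recording 26Al/27Al = 5e-5 formed about
    # 1.7 million years before one recording 1e-5.
    print(relative_age(5e-5, 1e-5, 0.72e6))  # ~1.67e6 years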
See also
Presolar grains
Radiogenic nuclide, the dual concept
Radiometric dating
List of nuclides which includes a list of radionuclides in order by half-life
References
External links
List of isotopes found and not found in nature, with half-lives
Discussion of extinct radionuclides
Geochemistry
Geochronology
Radioactivity | Extinct radionuclide | Physics,Chemistry | 856 |
37,690,762 | https://en.wikipedia.org/wiki/R136c | R136c is a star located in R136, a tight knot of stars at the centre of NGC 2070, an open cluster weighing 450,000 solar masses and containing 10,000 stars. At and 3.8 million , it is one of the most massive and one of the most luminous stars known, as well as one of the hottest, at over . It was first resolved and named by Feitzinger in 1980, along with R136a and R136b.
Description
R136c is a Wolf–Rayet star of the spectral type WN5h with a temperature of , making it one of the hottest stars known. It is one of the most massive stars known, with a mass of , and one of the most luminous, with a luminosity of 3.8 million . The extreme luminosity is produced by the CNO fusion process in its highly compressed, hot core. As is typical of Wolf–Rayet stars, R136c has been losing mass through a strong stellar wind, with speeds over and mass loss rates in excess of solar masses per year. It is strongly suspected to be a binary, owing to the detection of hard X-ray emission typical of colliding-wind binaries, but the companion is thought to make only a small contribution to the total luminosity.
Evolution
R136c is so energetic that it has already lost a substantial fraction of its initial mass, even though it is only a few million years old. It is still effectively on the main sequence, fusing hydrogen at its core via the CNO cycle, but it has convected and mixed fusion products to the surface and these create a powerful stellar wind and emission spectrum normally only seen in highly evolved stars.
Its fate depends on the amount of mass it loses before its core collapses, but is likely to result in a supernova. The most recent models for single star evolution at near-solar metallicities suggest that the most massive stars explode as highly stripped type Ic supernovae, although different outcomes are possible for binaries. Some of these supernovae are expected to produce a type of gamma-ray burst and the expected remnant is a black hole.
References
Stars in the Large Magellanic Cloud
Tarantula Nebula
Wolf–Rayet stars
Extragalactic stars
Dorado
Large Magellanic Cloud | R136c | Astronomy | 485 |
2,709,092 | https://en.wikipedia.org/wiki/Mashup%20%28web%20application%20hybrid%29 | A mashup (computer industry jargon), in web development, is a web page or web application that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup. The term implies easy, fast integration, frequently using open application programming interfaces (open API) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.
The term mashup originally comes from creating something by combining elements from two or more sources.
The main characteristics of a mashup are combination, visualization, and aggregation. It is important to make existing data more useful, for personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online.
In recent years, more and more Web applications have published APIs that enable software developers to integrate data and functions in the service-oriented architecture (SOA) style, instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end-users. They generally do not require programming skills; instead, they support visual wiring of GUI widgets, services and components together. These tools therefore contribute to a new vision of the Web, in which users are able to contribute.
The term "mashup" is not formally defined by any standard-setting body.
History
The broader context of the history of the Web provides a background for the development of mashups. Under the Web 1.0 model, organizations stored consumer data on portals and updated them regularly. They controlled all the consumer data, and the consumer had to use their products and services to get the information.
The advent of Web 2.0 introduced Web standards that were commonly and widely adopted across traditional competitors and which unlocked the consumer data. At the same time, mashups emerged, allowing mixing and matching competitors' APIs to develop new services.
The first mashups used mapping services or photo services to combine these services with data of any kind and therefore to produce visualizations of data.
In the beginning, most mashups were consumer-based, but more recently mashups have come to be seen as a concept useful to enterprises as well. Business mashups can combine existing internal data with external services to generate new views on the data.
Yahoo! Pipes was a free tool for building mashups using the Yahoo! Query Language.
Types of mashup
There are many types of mashup, such as business mashups, consumer mashups, and data mashups. The most common type of mashup is the consumer mashup, aimed at the general public.
Business (or enterprise) mashups define applications that combine their own resources, application and data, with other external Web services. They focus data into a single presentation and allow for collaborative action among businesses and developers. This works well for an agile development project, which requires collaboration between the developers and customer (or customer proxy, typically a product manager) for defining and implementing the business requirements. Enterprise mashups are secure, visually rich Web applications that expose actionable information from diverse internal and external information sources.
Consumer mashups combine data from multiple public sources in the browser and organize it through a simple browser user interface (e.g. Wikipediavision, which combines Google Maps and a Wikipedia API).
Data mashups, in contrast to consumer mashups, combine similar types of media and information from multiple sources into a single representation. The combination of all these resources creates a new and distinct Web service that was not originally provided by either source.
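As a concrete illustration of the pattern, a minimal server-side data mashup in Python might fetch JSON from two sources and join the records on a shared key. The endpoint URLs and field names below are hypothetical placeholders, not real APIs:

    import json
    from urllib.request import urlopen

    # Hypothetical endpoints -- substitute real APIs of your choice.
    BRANCHES_URL = "https://example.org/api/library-branches.json"
    GEOCODES_URL = "https://example.org/api/geocoded-addresses.json"

    def fetch_json(url):
        """Download and parse a JSON document from one source."""
        with urlopen(url) as response:
            return json.load(response)

    def mashup():
        """Join two feeds on a shared 'address' key, yielding records
        that neither source provides alone (name plus coordinates)."""
        branches = fetch_json(BRANCHES_URL)  # [{"name": ..., "address": ...}, ...]
        geocodes = fetch_json(GEOCODES_URL)  # [{"address": ..., "lat": ..., "lon": ...}, ...]
        coords = {g["address"]: (g["lat"], g["lon"]) for g in geocodes}
        return [
            {"name": b["name"], "coords": coords[b["address"]]}
            for b in branches if b["address"] in coords
        ]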
By API type
Mashups can also be categorized by the basic API type they use, but any of these can be combined with each other or embedded into other applications.
Data types
Indexed data (documents, weblogs, images, videos, shopping articles, jobs ...) used by metasearch engines
Cartographic and geographic data: geolocation software, geovisualization
Feeds, podcasts: news aggregators
Functions
Data converters: language translators, speech processing, URL shorteners...
Communication: email, instant messaging, notification...
Visual data rendering: information visualization, diagrams
Security related: electronic payment systems, ID identification...
Editors
Mashup enabler
In technology, a mashup enabler is a tool for transforming incompatible IT resources into a form that allows them to be easily combined, in order to create a mashup. Mashup enablers allow powerful techniques and tools (such as mashup platforms) for combining data and services to be applied to new kinds of resources. An example of a mashup enabler is a tool for creating an RSS feed from a spreadsheet (which cannot easily be used to create a mashup). Many mashup editors include mashup enablers, for example, Presto Mashup Connectors, Convertigo Web Integrator or Caspio Bridge.
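The spreadsheet example above can be made concrete. A minimal sketch of such an enabler in Python, assuming (purely for illustration) that the spreadsheet is exported as CSV with "title" and "description" columns:

    import csv
    import xml.etree.ElementTree as ET

    def csv_to_rss(csv_path, feed_title="Spreadsheet feed"):
        """Wrap each spreadsheet row as an RSS <item>, making the data
        consumable by any feed-based mashup tool. Assumes the CSV has
        'title' and 'description' columns (an illustrative convention)."""
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = feed_title
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                item = ET.SubElement(channel, "item")
                ET.SubElement(item, "title").text = row["title"]
                ET.SubElement(item, "description").text = row["description"]
        return ET.tostring(rss, encoding="unicode")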
Mashup enablers have also been described as "the service and tool providers, [sic] that make mashups possible".
History
Early mashups were developed manually by enthusiastic programmers. However, as mashups became more popular, companies began creating platforms for building mashups, which allow designers to visually construct mashups by connecting together mashup components.
Mashup editors have greatly simplified the creation of mashups, significantly increasing the productivity of mashup developers and even opening mashup development to end-users and non-IT experts. Standard components and connectors enable designers to combine mashup resources in all sorts of complex ways with ease. Mashup platforms, however, have done little to broaden the scope of resources accessible by mashups and have not freed mashups from their reliance on well-structured data and open libraries (RSS feeds and public APIs).
Mashup enablers evolved to address this problem, providing the ability to convert other kinds of data and services into mashable resources.
Web resources
Not all valuable data is located within organizations. In fact, the most valuable information for business intelligence and decision support is often external to the organization. With the emergence of rich web applications and online Web portals, a wide range of business-critical processes (such as ordering) have become available online. However, very few of these data sources syndicate content in RSS format, and very few of these services provide publicly accessible APIs. Mashup editors solve this problem by providing enablers or connectors.
Mashups versus portals
Mashups and portals are both content aggregation technologies. Portals are an older technology designed as an extension to traditional dynamic Web applications, in which the process of converting data content into marked-up Web pages is split into two phases: generation of markup "fragments" and aggregation of the fragments into pages. Each markup fragment is generated by a "portlet", and the portal combines them into a single Web page. Portlets may be hosted locally on the portal server or remotely on a separate server.
Portal technology defines a complete event model covering reads and updates. A request for an aggregate page on a portal is translated into individual read operations on all the portlets that form the page ("render" operations on local, JSR 168 portlets or "getMarkup" operations on remote, WSRP portlets). If a submit button is pressed on any portlet on a portal page, it is translated into an update operation on that portlet alone (processAction on a local portlet or performBlockingInteraction on a remote, WSRP portlet). The update is then immediately followed by a read on all portlets on the page.
Portal technology is about server-side, presentation-tier aggregation. It cannot be used to drive more robust forms of application integration such as two-phase commit.
Mashups differ from portals in several further respects.
The portal model has been around longer and has had greater investment and product research. Portal technology is therefore more standardized and mature. Over time, increasing maturity and standardization of mashup technology will likely make it more popular than portal technology because it is more closely associated with Web 2.0 and lately Service-oriented Architectures (SOA). New versions of portal products are expected to eventually add mashup support while still supporting legacy portlet applications. Mashup technologies, in contrast, are not expected to provide support for portal standards.
Business mashups
Mashup uses are expanding in the business environment. Business mashups are useful for integrating business and data services, as business mashups technologies provide the ability to develop new integrated services quickly, to combine internal services with external or personalized information, and to make these services tangible to the business user through user-friendly Web browser interfaces.
Business mashups differ from consumer mashups in the level of integration with business computing environments, security and access control features, governance, and the sophistication of the programming tools (mashup editors) used. Another difference between business mashups and consumer mashups is a growing trend of using business mashups in commercial software as a service (SaaS) offering.
Many of the providers of business mashups technologies have added SOA features.
Architectural aspects of mashups
The architecture of a mashup is divided into three layers:
Presentation / user interaction: this is the user interface of mashups. The technologies used are HTML/XHTML, CSS, JavaScript, and Asynchronous JavaScript and XML (Ajax).
Web Services: the product's functionality can be accessed using API services. The technologies used are XMLHTTPRequest, XML-RPC, JSON-RPC, SOAP, REST.
Data: handling data operations such as sending, storing and receiving. The technologies used are XML, JSON, KML.
Architecturally, there are two styles of mashups: Web-based and server-based. Whereas Web-based mashups typically use the user's web browser to combine and reformat the data, server-based mashups analyze and reformat the data on a remote server and transmit the data to the user's browser in its final form.
Mashups appear to be a variation of a façade pattern. That is: a software engineering design pattern that provides a simplified interface to a larger body of code (in this case the code to aggregate the different feeds with different APIs).
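A minimal sketch of that façade idea in Python, with stand-in source classes in place of real API clients (the class and method names are illustrative, not an actual library):

    class WeatherSource:
        """Stand-in for a client of one external API."""
        def current(self, city):
            return {"city": city, "temp_c": 21}   # placeholder data

    class NewsSource:
        """Stand-in for a client of another external API."""
        def headlines(self, city):
            return [f"Local story about {city}"]  # placeholder data

    class CityDashboardFacade:
        """Facade: one simplified call that aggregates several
        differently-shaped APIs behind a single interface."""
        def __init__(self):
            self._weather = WeatherSource()
            self._news = NewsSource()

        def summary(self, city):
            return {
                "weather": self._weather.current(city),
                "news": self._news.headlines(city),
            }

    print(CityDashboardFacade().summary("Toronto"))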
Mashups can be used with software provided as a service (SaaS).
After several years of standards development, mainstream businesses are starting to adopt service-oriented architectures (SOA) to integrate disparate data by making them available as discrete Web services. Web services provide open, standardized protocols to provide a unified means of accessing information from a diverse set of platforms (operating systems, programming languages, applications). These Web services can be reused to provide completely new services and applications within and across organizations, providing business flexibility.
See also
Mashup (culture)
Mashup (music)
Open Mashup Alliance
Open API
Yahoo! Pipes
Webhook
Web portal
Web scraping
References
Further reading
Ahmet Soylu, Felix Mödritscher, Fridolin Wild, Patrick De Causmaecker, Piet Desmet. 2012. "Mashups by Orchestration and Widget-based Personal Environments: Key Challenges, Solution Strategies, and an Application." Program: Electronic Library and Information Systems 46 (4): 383–428.
Endres-Niggemeyer, Brigitte ed. 2013. Semantic Mashups. Intelligent Reuse of Web Resources. Springer. (Print)
Software architecture
Web 2.0
Web 2.0 neologisms
Web development | Mashup (web application hybrid) | Engineering | 2,463 |
469,690 | https://en.wikipedia.org/wiki/National%20Water%20Carrier%20of%20Israel | The National Water Carrier of Israel (, HaMovil HaArtzi) is the largest water project in Israel, completed in 1964. Its main purpose is to transfer water from the Sea of Galilee in the north of the country to the highly populated center and the arid south and to enable efficient use of water and regulation of the water supply in the country. It is about long. Up to of water can flow through the carrier each hour, totalling 1.7 million cubic meters in a day.
The carrier consists of a system of giant pipes, open canals, tunnels, reservoirs and large scale pumping stations. Building the carrier was a considerable technical challenge as it traverses a wide variety of terrain and elevations. Most of the water works in Israel are integrated with the National Water Carrier.
History
Planning and construction
While early plans were made before the establishment of the state of Israel, detailed planning began after Israeli independence in 1948. The construction of the project, originally known as the Jordan Valley Unified Water Plan, started in 1953, during the planning phase, long before the detailed final plan was completed in 1956. The project was designed by Tahal and constructed by Mekorot. It was started during the tenure of Prime Minister David Ben-Gurion, and was completed in June 1964 under Prime Minister Levi Eshkol, at a cost of about 420 million Israeli lira (at 1964 values).
Agriculture, drinking water, Jordan's share (1964-1990s)
The National Water Carrier was inaugurated in 1964, with 80% of its water allocated to agriculture and 20% to drinking water. As time passed, increasing amounts were consumed as drinking water, and by the early 1990s the National Carrier was supplying half of the drinking water in Israel. It was forecast that by the year 2010, 80% of the National Carrier's water would go to drinking water. The reasons for the increased demand for drinking water were twofold. First, Israel saw rapid population growth, primarily in the center of the country, which increased the demand for water. Second, as the standard of living in the country rose, domestic water use increased. In addition, as a result of the 1994 Israel-Jordan Treaty of Peace, Israel agreed, among other provisions, to transfer 50 million cubic metres of water annually to Jordan.
Since 2015 (after large-scale desalination)
As of 2016, water from the Sea of Galilee was supplying approximately 10% of Israel's drinking water needs. In the previous years, the Israeli government had undertaken extensive investments in water reclamation and desalination infrastructure in the country, while promoting water conservation. This has lessened the country's reliance on the National Water Carrier and has allowed it to significantly reduce the amount of water pumped from the Sea of Galilee in an effort to restore and improve the lake's ecological environment, especially in face of severe droughts affecting the lake's intake basin in previous years. It was expected that in 2016 only about of water would be drawn from the lake for Israeli domestic water consumption, down from more than pumped annually a decade earlier.
Route
Water first enters the National Water Carrier through a several hundred meter long pipeline which is submerged under the northern part of Sea of Galilee. The water passes into a reservoir on the shore and then travels to a pumping station, initially called "Eshed Kinrot" or "Eshed Kinnarot", later renamed "Sapir" (English name: Sapir Pumping Station) after Pinhas Sapir, co-founder of Mekorot in 1937 (see also Tel Kinrot for the site).
The pipeline entering the lake is composed of nine pipes which are joined by an internal cable threaded through them. Each of these pipes includes twelve concrete pipes, each five meters long and three meters wide. As these pipes were cast, they were encased in steel pipes, sealed at the ends and floated out onto the lake. A winged star-shaped cap is mounted in a vertical section of the underwater pipe to allow water to be taken in from all directions.
Water travels to the Sapir Pumping Station on the shore of the lake, where four horizontal pumps lift the water into three pipes which subsequently join to form the pressure pipe, a long pressure-resistant steel pipe, which raises the water from 213 meters below sea level to 44 meters above sea level. From here, the water flows into the Jordan Canal, an open canal. This runs along a mountainside for most of its route. When full, the water in the canal is deep and flows purely by gravity apart from where two deep wadis intersect the course of the canal, Nahal Amud and . To overcome these obstacles, water is carried through inverted siphons.
The canal transfers the water into the Tzalmon Reservoir, a 1 hm3 operational reservoir in the Nahal Tzalmon valley. Here, the second pumping station on the course of the Water Carrier is located, the Tzalmon Pumping Station, which is designed to lift water an additional . Water then enters the Ya'akov Tunnel, which is long and 3 meters in diameter. The tunnel passes under hills near the village of Eilabun and transfers the water from the Jordan Canal to the open canal that crosses the Beit Netofa Valley, the Beit Netofa Canal. The Beit Netofa Canal carries the water 17 kilometers and was built with an oval base because of the clay soil through which it runs. The canal is 19.4 meters wide at the top and 12 meters wide at the bottom, and it is 2.60 meters deep, with the water flowing through it at a depth of 2.15 meters.
The advanced Eshkol Water Filtration Plant, completed in 2007-2008 by Mekorot, the fourth largest in the world, is located at the southwestern edge of the Beit Netofa Valley. The water first passes through two large reservoirs. The first of these is a sedimentation pond, holding about 1.5 million m³ of water, allowing suspended matter in the water to settle to the bottom, thus cleaning the water. The second reservoir is separated from the sedimentation pond by a dam and has a capacity of 4.5 million m³. Here the inflow of water from the pumping stations and open canals is regulated against the outflow into the closed pipeline. The amount allowed through depends on water demand. A special canal bypasses the reservoirs allowing water to travel straight through the carrier. Before entering the closed pipeline, final tests are performed on the water in the carrier, with chemicals added to bring the water to drinking standards. At the end of the filtration process the water enters the 108" Pipeline, which transports it 86 km to the Yarkon-Negev system near the city of Rosh HaAyin to the east of Tel Aviv and Petah Tikva.
Alternative plans
Herzl plan
The initial idea of a National Water Carrier followed several proposed solutions to the water problems of Palestine put forward before the establishment of Israel in 1948. Early ideas appeared in the 1902 book Altneuland by Theodor Herzl, in which he talked about utilizing the sources of the Jordan River for irrigation and about channeling sea water from the Mediterranean Sea near Haifa through the Beit She'an and Jordan valleys to a canal running parallel to the Jordan River and the Dead Sea, in order to produce electricity.
Hayes plan
An earlier water development scheme was proposed by Walter C. Lowdermilk in his book Palestine, Land of Promise, published in 1944. It was developed with human and financial assistance from the American Zionist Emergency Council. The book became a bestseller and was important in swaying the debate within the Truman administration concerning immigrant absorptive capacity and the Negev as part of Israel. His book served as the basis for a detailed water resource plan which was prepared by James Hayes, an engineer from the USA, who proposed utilizing all water sources in Israel (2 km3 per annum) for irrigation and the production of electricity. This would involve diverting part of the Litani River water to the Hasbani River. This water would be further transported by a dam and canal to the area south of Tel Hai, from where it would be "dropped" to produce electricity. Water would also be carried from Tel Hai to the Beit Netofa Valley, which would become a national water reservoir with a volume of about one billion cubic metres (one quarter of the Sea of Galilee's volume). An electricity generating station would be located at the reservoir's outlet, from where the water would flow into an open canal to Rafiah, which, whilst travelling south, would collect water from wadis and streams, including the waters of the Yarkon River. Hayes also asserted that the Yarmouk River would be channeled into Lake Kinneret, in order to prevent a rise in its salinity which could come about as a result of the diversion of the River Jordan, and that a joint Israeli-Jordanian dam about 5 km east of kibbutz Sha'ar HaGolan would be constructed. The Hayes plan was designed to be implemented in two stages over a 10-year period, but never materialised due to its economic infeasibility and lack of cooperation by Jordan.
Johnston Plan
Eric Johnston, the water envoy of US President Dwight Eisenhower between 1954 and 1957, developed another water plan for Israel, which became known as the Johnston Plan. In this, water from the Jordan River and Yarmuk River would be divided between Israel (40%), Jordan (45%) and Syria and Lebanon (15%). Each country would keep its right to utilize the water flowing within its borders, if it caused no harm to a neighboring country. Whilst this plan was accepted as fair by Arab water experts, it later foundered as a result of increasing tensions in the region, although it was later seriously considered by Arab leaders.
Tensions with Syria and Jordan
Since its construction, the resulting diversion of water from the Jordan River has been a source of tension with Syria and Jordan. In 1964, Syria attempted construction of a Headwater Diversion Plan that would have prevented Israel from using a major portion of its water allocation, sharply reducing the capacity of the carrier. This project and Israel's subsequent physical attack on those diversion efforts in 1965 were factors which played into regional tensions culminating in the 1967 Six-Day War. In the course of the war, Israel captured from Syria the Golan Heights, which contain some of the sources of the Sea of Galilee.
Receding Dead Sea and sinkholes
The surface of the Dead Sea has shrunk by about 33% since the 1960s, which is partly attributed to the much-reduced flow of the Jordan River since the construction of the National Water Carrier project. EcoPeace Middle East, a joint Israeli-Palestinian-Jordanian environmental group, has estimated that the annual flow into the Dead Sea from the Jordan is less than of water, compared with former flows of between and .
The water level of the Dead Sea has been declining at an annual rate of more than a metre, a decline attributed to the competition for scarce water resources in the very arid region.
One effect of the shrinking of the Dead Sea is the appearance of sinkholes along its shores. The sinkholes form as a result of the receding shoreline, where a thick layer of underground salt is left behind. When fresh water arrives in the form of heavy rains, it dissolves the salt as it sinks into the ground, forming an underground cavity, which eventually collapses under the weight of the surface ground layer. At Ein Gedi for instance, on the western coast of the Dead Sea, a large number of sinkholes have appeared in the area, which have even damaged a highway segment built in 2010, supposedly to a "sinkhole-proof" design.
The Dead Sea is shrinking further because some of the rainwater that formerly reached it now pours into the sinkholes as flash floods.
See also
Water supply and sanitation in Israel
Water politics
Water politics in the Middle East
Water politics in the Jordan River basin
References
Quotes
External links
Description of the National Water Carrier by Shmuel Kantor, former chief engineer of Mekorot, Israel's national water company
Fossil Water Reserves - Israel - from two hundred billion (109) to "several hundred billion" cubic meters of water
Infrastructure in Israel
Interbasin transfer
Sea of Galilee | National Water Carrier of Israel | Engineering,Environmental_science | 2,496 |
6,645,085 | https://en.wikipedia.org/wiki/Billiard-ball%20computer | A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
Simulating circuits with billiard balls
This model can be used to simulate Boolean circuits in which the wires of the circuit correspond to paths on which one of the balls may travel, the signal on a wire is encoded by the presence or absence of a ball on that path, and the gates of the circuit are simulated by collisions of balls at points where their paths cross. In particular, it is possible to set up the paths of the balls and the buffers around them to form a reversible Toffoli gate, from which any other Boolean logic gate may be simulated. Therefore, suitably configured billiard-ball computers may be used to perform any computational task.
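As an illustration, the two-input "interaction gate" used in such constructions can be modeled abstractly in Python, with ball presence encoded as a boolean. This is a truth-table abstraction, not a dynamical simulation: when both balls are present they deflect each other onto new paths, and when either travels alone it continues straight, so the AND of the inputs appears on a collision path and the total number of balls is conserved:

    def interaction_gate(a: bool, b: bool) -> dict:
        """Two-input billiard-ball interaction gate. Inputs are ball
        presence on paths A and B; the four output paths carry
        A AND NOT B, A AND B (two deflected balls), and NOT A AND B."""
        out = {
            "a_straight": a and not b,  # A passed through undisturbed
            "collision_1": a and b,     # A deflected by the collision
            "collision_2": a and b,     # B deflected by the collision
            "b_straight": b and not a,  # B passed through undisturbed
        }
        # Ball number is conserved: outputs sum to the number of inputs.
        assert sum(out.values()) == int(a) + int(b)
        return out

    for a in (False, True):
        for b in (False, True):
            print(a, b, interaction_gate(a, b))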
Simulating billiard balls in other models of computation
It is possible to simulate billiard-ball computers on several types of reversible cellular automaton, including block cellular automata and second-order cellular automata. In these simulations, the balls are only allowed to move at a constant speed in an axis-parallel direction, assumptions that in any case were already present in the use of the billiard ball model to simulate logic circuits. Both the balls and the buffers are simulated by certain patterns of live cells, and the field across which the balls move is simulated by regions of dead cells, in these cellular automaton simulations.
Logic gates based on billiard-ball computer designs have also been made to operate using live soldier crabs of the species Mictyris guinotae in place of the billiard balls.
See also
Unconventional computing
Fluidics
References
Models of computation
Mechanical computers
Reversible computing | Billiard-ball computer | Physics,Technology | 421 |
23,153,819 | https://en.wikipedia.org/wiki/Macrolepiota%20excoriata | Macrolepiota excoriata is a mushroom in the family Agaricaceae.
Description
The height is . The color of the mushroom is white to cream. The cap is convex to shield-shaped, arched over with a raised center, in diameter, with a brownish center and ochre-yellow to pale brown scales. The gills are white to cream. The stipe is smooth, cylindrical, has a bulbous base, and bears a ring. The spores are smooth, hyaline, and ellipsoid. The spore print is white, cream, or yellowish. The ring is whitish to white. The flesh is white and fibrous and does not change color. The mushroom is saprophytic. It is listed as a vulnerable species; the threat to the species is the overgrowth of ungrazed and unmowed meadows. The species is similar to Macrolepiota procera, although the latter is bigger.
Edibility
The flesh is white and tender, with a pleasant, hazelnut-like taste, and is best consumed while young. The odor of the species is weak. The mushroom is similar to numerous toxic species, so collecting it is problematic. The website Memento des champignons (Memento of Fungus) advises against collecting specimens by roadsides. The collection period for the mushroom is from mid-summer to late autumn.
Habitat
The mushroom can be found in North America and Europe and can be found on the ground, in fields, in lawns, or on roadsides. The species is common in Guernsey, even though most books describe it as rare there.
References
Edible fungi
Fungi described in 1774
Fungi of Europe
Fungi of North America
Agaricaceae
Taxa named by Jacob Christian Schäffer
Fungus species | Macrolepiota excoriata | Biology | 372 |
21,391,368 | https://en.wikipedia.org/wiki/Prismatic%20joint | A prismatic joint is a one-degree-of-freedom kinematic pair which constrains the motion of two bodies to sliding along a common axis, without rotation; for this reason it is often called a slider (as in the slider-crank linkage) or a sliding pair. They are often utilized in hydraulic and pneumatic cylinders.
A prismatic joint can be formed with a polygonal cross-section to resist rotation. Examples of this include the dovetail joint and linear bearings.
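In kinematic modelling the joint is often represented by a homogeneous transform whose only free parameter is the slide distance d along the joint axis, a direct expression of its single degree of freedom. A minimal sketch using NumPy:

    import numpy as np

    def prismatic_transform(axis, d):
        """4x4 homogeneous transform of a prismatic joint: translation
        by joint variable d along a fixed unit axis, with no rotation
        (the rotation block stays the identity)."""
        u = np.asarray(axis, dtype=float)
        u = u / np.linalg.norm(u)
        T = np.eye(4)
        T[:3, 3] = d * u
        return T

    # Slide a point 0.25 units along the z axis:
    p = np.array([0.1, 0.0, 0.0, 1.0])              # homogeneous coordinates
    print(prismatic_transform([0, 0, 1], 0.25) @ p)  # -> [0.1, 0.0, 0.25, 1.0]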
See also
Cylindrical joint
Degrees of freedom (mechanics)
Kinematic pair
Kinematics
Mechanical joint
Revolute joint
References
Kinematics
Rigid bodies | Prismatic joint | Physics,Technology | 135 |
59,999,226 | https://en.wikipedia.org/wiki/Seth%20G.%20Atwood | Seth Glanville Atwood (June 2, 1917 – February 21, 2010) was an American industrialist, community leader, and horological collector. He was the chairman and president of Atwood Vacuum Machine Company, one of the world's largest manufacturers of automobile body hardware, and a long-time leader of the Atwood family's business, which was involved in the manufacturing, banking and hotel industries and had over 2,500 employees. In addition, Atwood was a director of the Illinois Manufacturers' Association, and had served in the Illinois Chamber of Commerce and the Graduate School of Business at the University of Chicago.
In 1971, Seth G. Atwood founded the Time Museum at the Clock Tower Resort in Rockford, Illinois, which later became one of the leading horological museums in the world with nearly 1,500 pieces of horological collection, including atomic clocks. The museum's notable collection included ancient Chinese sundials and water clocks, early pendulum clocks, a quarter-repeater by Thomas Tompion, Breguet Sympathique Clocks, and the Patek Philippe Henry Graves Supercomplication which currently holds the title of the most expensive watch ever sold at auction, fetching 24 million US dollars (23,237,000 CHF) in Geneva on November 11, 2014. However, the museum was shut down in 1999 and its collection was sent to auctions over the years.
Early life
Seth G. Atwood was born in Rockford, Illinois on June 2, 1917. He attended Carleton College, and graduated from Stanford University with a B.A. degree in 1938. He later studied at the University of Wisconsin for a year, and obtained an M.B.A. from Harvard University in 1940. From 1942 to 1946, he served as an officer in the United States Navy, achieving the rank of lieutenant commander.
Seth G. Atwood later returned to Rockford and joined the Atwood Vacuum Machine Company, which had been founded by his father, Seth B. Atwood, and his uncle, James T. Atwood, in 1909, specializing in the manufacture of vacuum cleaners.
Family business
By 1920, the Atwood Vacuum Machine Company had already shifted its focus from manufacturing vacuum cleaners to door silencers for cars. Eventually, the company began to manufacture a complete line of automobile body hardware. Seth G. Atwood became the president of the Atwood Vacuum Machine Company in 1953 when his father became chairman of the board.
In 1967, Seth G. Atwood became the chairman of the company, and under his leadership the company became the world's "largest independent manufacturer of internal auto body hardware" in 1968. In 1970, the company re-organized and established the Automotive and Contract Division and the Mobile Products Division, employing over 2,500 employees with five plants in Canada and the United States. In 1971, the company's annual sales reached around US$50 million. In 1985, Atwood Vacuum Machine was sold to Anderson Industries in Rockford, Illinois; the company's annual sales were US$138 million at the time of the acquisition.
Seth G. Atwood also managed other businesses of his family involving banking, venture capital, hotels and real estate properties.
Timepiece collection
Time Museum
In 1971, Seth G. Atwood founded the Time Museum at the Clock Tower Resort in Rockford, Illinois. The resort had originally been built by the Atwood family in 1968. In the 1980s, the museum became one of the leading horological museums in the world, with a collection of nearly 1,500 horological pieces, including atomic clocks. The museum's notable collection included ancient Chinese sundials and water clocks by Su Song, early pendulum clocks, a quarter-repeater by Thomas Tompion, an astronomical and world time clock by Christian Gebhard, the Harrison wooden regulator clock, and the Richard Glynn mechanical equinoctial standing Ring-Dial. In the 1990s, the museum attracted over 50,000 visitors each year.
However, the museum was shut down in March 1999 when United Realty Corp., a company owned by Atwood family interests, sold the Clock Tower Resort to Regency Hotel Management. As a result, the majority of the museum's collection went to the Museum of Science and Industry in Chicago, and was on display from January 2001 to February 2004. In 2004, a campaign to raise US$35 million to buy the collection for the Time Museum failed, and the collection was broken up with its timepieces sent to auctions.
Over the years, hundreds of items from the museum's original collection went up for sale in Sotheby's auctions, and several pieces became the world's most expensive watches and clocks ever auctioned. These included the Patek Philippe Henry Graves Supercomplication and the Breguet Sympathique Clock No.128 & 5009 (Duc d'Orléans Breguet Sympathique, owned by Ferdinand Philippe, Duke of Orléans), which was originally restored by English watchmaker George Daniels at the request of Seth G. Atwood. The Patek Philippe pocket watch currently holds the title of the most expensive watch ever sold at auction, fetching US$24 million (CHF 23,237,000) at Sotheby's Geneva auction on November 11, 2014. The Breguet Sympathique Clock, on the other hand, currently ranks as one of the most expensive clocks ever sold at auction, fetching US$6.80 million at Sotheby's New York auction on December 4, 2012.
Coaxial escapement
During the quartz crisis of the 1970s, Seth G. Atwood commissioned a mechanical timepiece from English watchmaker George Daniels, intended to fundamentally improve the performance of mechanical watches. As a result, Daniels invented the coaxial escapement in 1974 and patented it in 1980. The watch for Seth G. Atwood, known as the Atwood watch, was completed in 1976.
The coaxial escapement was later used in the watches of watch manufacturers such as Omega SA.
See also
Patek Philippe Henry Graves Supercomplication
Coaxial escapement
References
Further reading
Masterpieces from the Time Museum. Volume I - III. Sotheby's. New York, 1999, 2002, 2004.
Masterpieces from the Time Museum. Volume IV. Sotheby's. New York, 2004.
External links
Time Museum
1917 births
2010 deaths
Carleton College alumni
Stanford University alumni
University of Wisconsin–Madison alumni
Harvard Business School alumni
People from Rockford, Illinois
United States Navy officers
Horology
Military personnel from Illinois | Seth G. Atwood | Physics | 1,334 |
2,974,121 | https://en.wikipedia.org/wiki/Higgs%20phase | In theoretical physics, it is often important to consider the many phases, connected by phase transitions, in which the vacuum of a gauge theory may be found.
Gauge symmetries may be spontaneously broken by the Higgs mechanism. In more general theories, such as those relevant in string theory, there are often many Higgs fields that transform in different representations of the gauge group.
If they transform in the adjoint representation or a similar representation, the original gauge symmetry is typically broken to a product of U(1) factors. Because U(1) describes electromagnetism including the Coulomb field, the corresponding phase is called a Coulomb phase.
If the Higgs fields that induce the spontaneous symmetry breaking transform in other representations, the Higgs mechanism often breaks the gauge group completely and no U(1) factors are left. In this case, the corresponding vacuum expectation values describe a Higgs phase.
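A standard illustration for an SU(2) gauge theory (a sketch, not tied to any particular model):

    % Adjoint Higgs: SU(2) -> U(1)  (Coulomb phase)
    \langle \phi \rangle = v\,T^3 , \qquad [\,T^3, \langle \phi \rangle\,] = 0 ,

    % Fundamental (doublet) Higgs: SU(2) fully broken  (Higgs phase)
    \langle \Phi \rangle = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix} , \qquad
    T^a \langle \Phi \rangle \neq 0 \quad \text{for all } a = 1, 2, 3 .

In the first case the generator T^3 commutes with the expectation value, so the U(1) it generates survives and the theory sits in a Coulomb phase; in the second case no generator annihilates the vacuum expectation value, the gauge group is broken completely, and the theory sits in a Higgs phase.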
Using the representation of a gauge theory in terms of a D-brane, for example D4-brane combined with D0-branes, the Coulomb phase describes D0-branes that have left the D4-branes and carry their own independent U(1) symmetries. The Higgs phase describes D0-branes dissolved in the D4-branes as instantons.
References
Gauge theories | Higgs phase | Physics | 282 |
33,234,902 | https://en.wikipedia.org/wiki/Interpersonal%20complementarity%20hypothesis | Interpersonal complementarity hypothesis asserts that individuals often behave in ways that evoke complementary or reciprocal behavior from others. More specifically, this hypothesis predicts that positive behaviors evoke positive behaviors, negative behaviors evoke negative behaviors, dominant behaviors evoke submissive behaviors, and vice versa.
Essentially, each action carried out by a member of a group has the ability to elicit predictable actions from other group members. For example, individuals who display evidence of positive behavior (e.g., smiling, behaving cooperatively) tend to trigger positively valenced behaviors from others. In much the same way, group members who behave in a docile or submissive fashion tend to elicit complementary, dominant behaviors from other members of the group. This behavioral congruency, as it applies to obedience and authority, has been illustrated in several studies assessing power hierarchies present in groups. These studies highlight the increased comfort experienced by individuals when the power or status behavior of others complements their own (e.g., a "leader" preferring a "follower").
See also
Interpersonal compatibility
Authority
Obedience (human behavior)
References
Behavior
Interpersonal relationships | Interpersonal complementarity hypothesis | Biology | 237 |
76,426,230 | https://en.wikipedia.org/wiki/Icaroscope | An icaroscope is a telescope-like nonlinear optical device that enables viewing of both very bright and dark objects in the same image simultaneously. The problem the icaroscope was designed to solve was observing enemy aircraft approaching with the sun behind them, when the bright sun in a clear sky dazzles the observer and masks aircraft near the sun's disc. In the icaroscope, the scene is not viewed directly; instead it is briefly projected onto a screen coated with a special phosphor, and this screen is then shown to the viewer. The specific silver-activated zinc-cadmium sulphide phosphor has a short afterglow even in areas saturated by the full brightness of the sun. By rapidly exposing the phosphor, allowing it to decay for around 5 ms, and showing it to the viewer, the effect is to attenuate the brightness of the sun's disc by about 500 times, allowing details near it to be clearly seen. The icaroscope repeats this process at a rate of 90 Hz, permitting continuous observation.
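The quoted figures imply an afterglow time constant of roughly 0.8 ms if, purely as a simplifying assumption, the phosphor decay is modeled as a single exponential I(t) = I0 * exp(-t/tau); real phosphor decay curves are generally more complex. A short Python check:

    import math

    def time_constant(delay_s, attenuation):
        """Afterglow time constant implied by an exponential decay
        I(t) = I0 * exp(-t/tau) that attenuates by the given factor
        after the given delay (a simplifying assumption)."""
        return delay_s / math.log(attenuation)

    print(time_constant(5e-3, 500))  # ~8.0e-4 s, i.e. about 0.8 ms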
Development of the icaroscope was carried out during the Second World War at the Institute of Optics by Brian O'Brien, Franz Urbach, and other researchers. The device is named for Icarus, the mythological figure known for flying too close to the sun.
References
Nonlinear optics
Science and technology during World War II
Telescope types | Icaroscope | Technology | 285 |
35,735,664 | https://en.wikipedia.org/wiki/Powermat%20Technologies | Powermat Technologies Ltd. is a developer of wireless power solutions. The company licenses intellectual property (IP), selling charging spots to public venues along with the software to support their maintenance, management, and consumer interaction.
The company's inductive charging technology has been adopted by the Power Matters Alliance (PMA) and is the platform adopted by Duracell, General Motors, Starbucks and AT&T.
Products
Powermat manufactures both receivers and transmitters for the mobile industry, consumers, and public venues. It licenses its technology, which enables compliance with the AirFuel (formerly PMA) and Qi standards. Furthermore, Powermat operates a software service system to allow venue owners to control and manage their installed wireless power networks, each of which consists of charging spots and a gateway.
Technology
The company's technology is based upon inductively coupled power transfer: varying the current in the primary induction coil within a transmitter generates an alternating magnetic field from within a charging spot. The receiver contains a second induction coil, in the handheld device, that takes power from the magnetic field and converts it back into electric current to charge the device battery.
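The underlying relation is mutual induction: the voltage induced in the receiver coil is proportional to the rate of change of the transmitter current. As a sketch:

    V_2(t) = -M \, \frac{\mathrm{d}I_1(t)}{\mathrm{d}t} , \qquad
    M = k \sqrt{L_1 L_2} , \quad 0 \le k \le 1 ,

where I_1 is the transmitter coil current, L_1 and L_2 are the self-inductances of the two coils, and k is the coupling coefficient determined by coil geometry and alignment.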
An additional part of the technology is the System Control Communication:
Data over Coil (DoC) – the Rx sends feedback to Tx by changing the load seen by the Tx coil. The protocol is frequency-based signaling, which enables fast response by the transmitter. Each receiver is equipped with a unique ID (RxID), enabling the system, when installed in public venues, to recognize users and communicate with them. The RxID is communicated as part of the data over coil to the Tx.
History
The company was founded in 2006 by Ran Poliakine. Its first products were launched in 2009. In 2011, General Motors announced that it would integrate Powermat's wireless charging technology into certain vehicles in its 2013 Chevrolet Volt line and would also invest in the private company. In the same year, Powermat also partnered with Leyden Energy, manufacturer of advanced lithium-imide (Li-imide) batteries, in order to develop wirelessly chargeable batteries, and with Arconas, provider of public seating, to incorporate wireless charging directly into airport seating and lounge areas. Among the first airport installations were those at Chicago O'Hare International Airport, Aspen–Pitkin Airport, Eppley Airfield in Omaha, and Toronto Pearson International Airport.
Powermat and Procter & Gamble created a joint venture under the Duracell Powermat brand that began operations in January 2012. The entertainer Jay-Z signed on as the "face and voice" of the venture and took an equity stake in the company. As part of a partnership with Madison Square Garden begun in mid-2012, the arena features Duracell Powermat charging surfaces in a number of suites and other areas. In addition, Duracell Powermat charging spots were embedded in the tables of Jay-Z's 40/40 Club in New York City.
A year later, Powermat Technologies, along with Procter & Gamble, founded the Power Matters Alliance, an alliance of semiconductor and consumer electronics companies as well as governmental organizations, dedicated to advancing smart and environmentally sound wireless power. AT&T and Starbucks are board members, and among the Alliance's members are Samsung, LG, HTC, BlackBerry, Huawei, ZTE, Texas Instruments, STMicroelectronics, Broadcom, Fairchild Semiconductor, Freescale, IDT, Otterbox, Incipio and Skech.
In October 2012, Powermat and Starbucks announced a pilot program to install Powermat charging surfaces in store tabletops at 17 Boston-area locations. The technology is consistent with Starbucks' environmentally friendly guidelines. When the pilot ended in July 2013, Starbucks decided to bring Powermat's wireless charging technology to additional locations in Silicon Valley. Powermat also announced that it had acquired Powerkiss, a provider of integrated wireless charging solutions. Powerkiss, headquartered in Helsinki, Finland, had deployed wireless charging hot spots across Europe since its founding in 2008. In November 2013, the company announced a deployment at some Coffee Bean and Tea Leaf locations. Additional Powermat systems were installed at McDonald's restaurants in New York City and in select locations in Europe. In January 2014, the company and Flextronics agreed to collaborate on embedding wireless power in electronic mobile devices.
In March 2015, Samsung included wireless charging in its Galaxy S6 mobile phone series. In June 2015, Powermat and DuPont launched the DuPont Corian charging surface, bringing wireless charging to solid-surface countertops.
In January 2016, Powermat rolled out its charging technology at 150 Starbucks stores in Chicago.
In December 2016, Elad Dubzinski was appointed the company's chief executive officer.
In September 2017, Apple announced at its annual Keynote event that the new iPhone line-up (iPhone 8, iPhone 8 Plus and iPhone X) would feature inductive wireless charging (Qi standard). This announcement had a significant effect on the wireless charging market, and Powermat was quick to announce that all existing charging spots would be compatible with Qi in order to support iPhone users.
A few months later, at the Consumer Electronics Show (CES) in January 2018, Powermat announced its membership in the Wireless Power Consortium, developer of the Qi standard, and introduced its SmartInductive technology.
In September 2018, Powermat HQ moved to a new office in the city of Petah Tikva, Israel.
References
Wireless energy transfer
Consumer electronics brands
Privately held companies of Israel
Electronics companies of Israel
Israeli companies established in 2006
Wireless
Wireless transmitters
Israeli brands
Electronics companies established in 2006 | Powermat Technologies | Engineering | 1,148 |
182,146 | https://en.wikipedia.org/wiki/Orbital%20mechanics | Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Orbital mechanics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
History
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, publishing his first two laws in 1609 and his third law in 1619. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmond Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
Practical techniques
Rules of thumb
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and actually slow down relative to the leading craft, missing the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings over multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
Laws of astrodynamics
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
Escape velocity
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by

$\epsilon_p = -\frac{GM}{r}$

where G is the gravitational constant and r is the distance between the two bodies;

while the specific kinetic energy of an object is given by

$\epsilon_k = \frac{v^2}{2}$

where v is its velocity;

and so the total specific orbital energy is

$\epsilon = \epsilon_k + \epsilon_p = \frac{v^2}{2} - \frac{GM}{r}$

Since energy is conserved, $\epsilon$ cannot depend on the distance, $r$, from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite $r$ only if this quantity is nonnegative, which implies

$v \ge \sqrt{\frac{2GM}{r}}$
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
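As a quick numerical check of the escape-velocity formula, the short Python sketch below evaluates $\sqrt{2GM/r}$ at the Earth's surface and at 1 AU from the Sun; the physical constants are standard published values rounded to four digits.

import math

G = 6.6743e-11   # gravitational constant, m^3/(kg*s^2)

def escape_velocity(M, r):
    """Escape velocity sqrt(2*G*M/r), in m/s, for a mass M (kg) at distance r (m)."""
    return math.sqrt(2 * G * M / r)

print(escape_velocity(5.972e24, 6.371e6))   # Earth's surface: ~11,200 m/s
print(escape_velocity(1.989e30, 1.496e11))  # Sun at 1 AU: ~42,100 m/s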
Formulae for free orbits
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:

$r = \frac{p}{1 + e\cos\theta} = \frac{h^2/\mu}{1 + e\cos\theta}$

$\mu = G(m_1 + m_2)$ is called the gravitational parameter. $m_1$ and $m_2$ are the masses of objects 1 and 2, and $h$ is the specific angular momentum of object 2 with respect to object 1. The parameter $\theta$ is known as the true anomaly, $p$ is the semi-latus rectum, while $e$ is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
Circular orbits
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity:

$\frac{v^2}{r} = \frac{GM}{r^2}$

So,

$v^2 = \frac{GM}{r}$

Therefore,

$v = \sqrt{\frac{GM}{r}}$

where $G$ is the gravitational constant, equal to

6.6743 × 10⁻¹¹ m³/(kg·s²)

To properly use this formula, the units must be consistent; for example, $M$ must be in kilograms, and $r$ must be in meters. The answer will be in meters per second.

The quantity $GM$ is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.

Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by $\sqrt{2}$:

$v_{esc} = \sqrt{2}\sqrt{\frac{GM}{r}} = \sqrt{\frac{2GM}{r}}$

To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore,

$\frac{v^2}{2} \ge \frac{GM}{r}$, i.e. $v \ge \sqrt{\frac{2GM}{r}}$
Elliptical orbits
If $0 < e < 1$, then the denominator of the equation of free orbits varies with the true anomaly $\theta$, but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis $r_p$, which is given by:

$r_p = \frac{p}{1 + e}$

The maximum value of $r$ is reached when $\theta = 180°$. This point is called the apoapsis, and its radial coordinate, denoted $r_a$, is

$r_a = \frac{p}{1 - e}$

Let $2a$ be the distance measured along the apse line from periapsis to apoapsis, as illustrated in the equation below:

$2a = r_p + r_a$

Substituting the equations above, we get:

$a = \frac{p}{1 - e^2}$

a is the semimajor axis of the ellipse. Solving for $p$, and substituting the result in the conic section curve formula above, we get:

$r = \frac{a(1 - e^2)}{1 + e\cos\theta}$
Orbital period
Under standard assumptions the orbital period ($T$) of a body traveling along an elliptic orbit can be computed as:

$T = 2\pi\sqrt{\frac{a^3}{\mu}}$

where:

$\mu$ is the standard gravitational parameter,

$a$ is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis ($r = a$),
For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law).
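As a numerical illustration of the period formula (the orbit radii below are illustrative round values), evaluating $T = 2\pi\sqrt{a^3/\mu}$ with the Earth's gravitational parameter recovers the familiar geostationary and low-Earth-orbit periods:

import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2

def orbital_period(a, mu=MU_EARTH):
    """Period (s) of an elliptic orbit with semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a**3 / mu)

print(orbital_period(42_164_000))   # geostationary radius: ~86,164 s (one sidereal day)
print(orbital_period(6_778_000))    # ~400 km altitude LEO: ~5,554 s (~92.6 min)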
Velocity
Under standard assumptions the orbital speed ($v$) of a body traveling along an elliptic orbit can be computed from the vis-viva equation as:

$v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)}$

where:

$\mu$ is the standard gravitational parameter,

$r$ is the distance between the orbiting bodies,

$a$ is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory has the same form, $v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)}$, with the semi-major axis $a$ taken as negative.
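The vis-viva equation is straightforward to evaluate numerically. The sketch below computes the speed at perigee and apogee of a hypothetical elliptical Earth orbit; the orbit's dimensions are illustrative values, not a specific mission.

import math

MU_EARTH = 3.986004418e14   # m^3/s^2

def vis_viva(r, a, mu=MU_EARTH):
    """Orbital speed (m/s) at radius r (m) on an orbit with semi-major axis a (m)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r_p, r_a = 6.678e6, 4.2164e7   # hypothetical perigee and apogee radii, m
a = (r_p + r_a) / 2            # semi-major axis of the ellipse
print(vis_viva(r_p, a))        # fastest, at perigee: ~10,150 m/s
print(vis_viva(r_a, a))        # slowest, at apogee:  ~1,610 m/s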
Energy
Under standard assumptions, the specific orbital energy ($\epsilon$) of an elliptic orbit is negative and the orbital energy conservation equation (the vis-viva equation) for this orbit can take the form:

$\frac{v^2}{2} - \frac{\mu}{r} = -\frac{\mu}{2a} = \epsilon < 0$

where:

$v$ is the speed of the orbiting body,

$r$ is the distance of the orbiting body from the center of mass of the central body,

$a$ is the semi-major axis,

$\mu$ is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:

the time-average of the specific potential energy is equal to $2\epsilon = -\frac{\mu}{a}$,

the time-average of $r^{-1}$ is $a^{-1}$,

the time-average of the specific kinetic energy is equal to $-\epsilon = \frac{\mu}{2a}$.
Parabolic orbits
If the eccentricity equals 1, then the orbit equation becomes:

$r = \frac{h^2/\mu}{1 + \cos\theta}$

where:

$r$ is the radial distance of the orbiting body from the mass center of the central body,

$h$ is the specific angular momentum of the orbiting body,

$\theta$ is the true anomaly of the orbiting body,

$\mu$ is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e = 1 is zero, and is given by:

$\epsilon = \frac{v^2}{2} - \frac{\mu}{r} = 0$

where:

$v$ is the speed of the orbiting body.

In other words, the speed anywhere on a parabolic path is:

$v = \sqrt{\frac{2\mu}{r}}$
Hyperbolic orbits
If $e > 1$, the orbit formula,

$r = \frac{h^2/\mu}{1 + e\cos\theta}$

describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when $\cos\theta = -1/e$. We denote this value of true anomaly

$\theta_\infty = \cos^{-1}\left(-\frac{1}{e}\right)$

since the radial distance approaches infinity as the true anomaly approaches $\theta_\infty$, known as the true anomaly of the asymptote. Observe that $\theta_\infty$ lies between 90° and 180°. From the trigonometric identity $\sin^2\theta + \cos^2\theta = 1$ it follows that:

$\sin\theta_\infty = \frac{\sqrt{e^2 - 1}}{e}$
Energy
Under standard assumptions, the specific orbital energy ($\epsilon$) of a hyperbolic trajectory is greater than zero and the orbital energy conservation equation for this kind of trajectory takes the form:

$\epsilon = \frac{v^2}{2} - \frac{\mu}{r} = -\frac{\mu}{2a}$

where:

$v$ is the orbital velocity of the orbiting body,

$r$ is the radial distance of the orbiting body from the central body,

$a$ is the negative semi-major axis of the orbit's hyperbola,

$\mu$ is the standard gravitational parameter.
Hyperbolic excess velocity
Under standard assumptions the body traveling along a hyperbolic trajectory will attain at infinity an orbital velocity called hyperbolic excess velocity ($v_\infty$) that can be computed as:

$v_\infty = \sqrt{\frac{\mu}{-a}}$

where:

$\mu$ is the standard gravitational parameter,

$a$ is the negative semi-major axis of the orbit's hyperbola.

The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by

$2\epsilon = C_3 = v_\infty^2$
Calculating trajectories
Kepler's equation
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:

$M = E - e\sin E$

where M is the mean anomaly, E is the eccentric anomaly, and $e$ is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of $\theta$ from periapsis is broken into two steps:
Compute the eccentric anomaly $E$ from the true anomaly $\theta$, using $\tan\frac{E}{2} = \sqrt{\frac{1-e}{1+e}}\,\tan\frac{\theta}{2}$

Compute the time-of-flight from the eccentric anomaly, using $t = \sqrt{\frac{a^3}{\mu}}\left(E - e\sin E\right)$
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in $E$, meaning it cannot be solved for $E$ algebraically. Kepler's equation can, however, be solved for $E$ analytically by Lagrange inversion; the resulting solution, valid for all real values of the eccentricity, is an infinite series, and evaluating it term by term yields $E$ as a power series in the mean anomaly $M$.
Alternatively, Kepler's equation can be solved numerically. First one must guess a value of $E$ and solve for time-of-flight; then adjust $E$ as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
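A minimal sketch of that numerical approach in Python, applying Newton's method directly to Kepler's equation to find $E$ for a given mean anomaly $M$ (the tolerance and starting guess are conventional choices):

import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi          # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Newton iteration did not converge")

E = eccentric_anomaly(M=1.0, e=0.3)
print(E, E - 0.3 * math.sin(E))            # the second value reproduces M = 1.0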
The main difficulty with this approach is that it can take prohibitively long to converge for extreme elliptical orbits. For near-parabolic orbits, eccentricity $e$ is nearly 1, and substituting $e = 1$ into the formula for mean anomaly, $E - \sin E$, we find ourselves subtracting two nearly-equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
Conic orbits
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight calculations, are far more complicated, especially for near-circular and hyperbolic orbits.
The patched conic approximation
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet and in most cases Hohmann severely overestimates delta-v, and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the 'Patched Conic Approximation' technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and to model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travels, in what is known today as a gravity assist.
The size of the "neighborhoods" (or spheres of influence) varies with radius $r_{SOI}$:

$r_{SOI} = a_p\left(\frac{m_p}{m_s}\right)^{2/5}$

where $a_p$ is the semimajor axis of the planet's orbit relative to the Sun; $m_p$ and $m_s$ are the masses of the planet and Sun, respectively.
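Evaluating this sphere-of-influence formula for the Earth gives a radius of roughly 0.92 million km (the constants are standard rounded values):

a_earth = 1.496e11   # semi-major axis of Earth's orbit, m
m_earth = 5.972e24   # mass of the Earth, kg
m_sun = 1.989e30     # mass of the Sun, kg

r_soi = a_earth * (m_earth / m_sun) ** (2.0 / 5.0)
print(r_soi)         # ~9.2e8 m, i.e. ~0.92 million km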
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
The universal variable formulation
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
Perturbations
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors $\mathbf{r}_0$ and $\mathbf{v}_0$ at a given epoch $t_0$. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as $\mathbf{r}_0(t)$ and the velocity element as $\mathbf{v}_0(t)$, indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions $\mathbf{r}_0(t)$ and $\mathbf{v}_0(t)$.
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
Orbital maneuver
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
Orbital transfer
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires the minimal delta-v (a numerical sketch follows this list).
A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of orbits is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low thrust engines (such as electrical propulsion), if the initial orbit is supersynchronous to the final desired circular orbit then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. This method however takes much longer due to the low thrust.
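As referenced above, a numerical sketch of the two-burn Hohmann budget between coplanar circular orbits, using the standard textbook formulas (the low-orbit and geostationary radii are illustrative):

import math

MU_EARTH = 3.986004418e14   # m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Delta-v (m/s) of the two burns of a Hohmann transfer from radius r1 to radius r2."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # enter transfer ellipse
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # circularize at r2
    return dv1, dv2

dv1, dv2 = hohmann_delta_v(6.678e6, 4.2164e7)   # ~300 km LEO to geostationary radius
print(dv1, dv2, dv1 + dv2)                      # ~2,430 + ~1,470 = ~3,900 m/s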
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
Gravity assist and the Oberth effect
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's Third Law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
The Oberth effect can be employed, particularly during a gravity assist operation. This effect is that use of a propulsion system works better at high speeds, and hence course changes are best done when close to a gravitating body; this can multiply the effective delta-v.
Interplanetary Transport Network and fuzzy orbits
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's Trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice keeping to the trajectory requires some course corrections). The biggest problem with them is that they can be exceedingly slow, taking many years. In addition, launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth–Sun L1 point and returned using very little propellant.
See also
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
References
Further reading
Many of the options, procedures, and supporting theory are covered in standard works on the subject.
External links
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph | Orbital mechanics | Engineering | 5,270 |
47,591,700 | https://en.wikipedia.org/wiki/NGC%206886 | NGC 6886 is a planetary nebula in the constellation Sagitta. It was discovered by Ralph Copeland on September 17, 1884. It is distant from Earth, and is composed of a hot central post-AGB star that has 55% of the Sun's mass yet 2700 ± 850 its luminosity, with a surface temperature of 142,000 K. The planetary nebula is thought to have been expanding for between 1280 and 1600 years.
References
External links
Planetary nebulae
Sagitta
6886
Astronomical objects discovered in 1884 | NGC 6886 | Astronomy | 109 |
561,585 | https://en.wikipedia.org/wiki/Master%20theorem%20%28analysis%20of%20algorithms%29 | In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis for many recurrence relations that occur in the analysis of divide-and-conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences. The name "master theorem" was popularized by the widely used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein.
Not all recurrence relations can be solved by this theorem; its generalizations include the Akra–Bazzi method.
Introduction
Consider a problem that can be solved using a recursive algorithm such as the following:
procedure p(input x of size n):
    if n < some constant k:
        Solve x directly without recursion
    else:
        Create a subproblems of x, each having size n/b
        Call procedure p recursively on each subproblem
        Combine the results from the subproblems
The above algorithm divides the problem into a number of subproblems recursively, each subproblem being of size $n/b$. Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have $a$ child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem $n$ passed to that instance of the recursive call and given by $f(n)$. The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree.

The runtime of an algorithm such as the above on an input of size $n$, usually denoted $T(n)$, can be expressed by the recurrence relation

$T(n) = a\,T\!\left(\frac{n}{b}\right) + f(n)$

where $f(n)$ is the time to create the subproblems and combine their results in the above procedure. This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done. The master theorem allows many recurrence relations of this form to be converted to Θ-notation directly, without doing an expansion of the recursive relation.
Generic form
The master theorem always yields asymptotically tight bounds to recurrences from divide and conquer algorithms that partition an input into smaller subproblems of equal sizes, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work that they perform at the top level of their recursion (to divide the problems into subproblems and then combine the subproblem solutions) together with the time taken in the recursive calls of the algorithm. If $T(n)$ denotes the total time for the algorithm on an input of size $n$, and $f(n)$ denotes the amount of time taken at the top level of the recurrence, then the time can be expressed by a recurrence relation that takes the form:

$T(n) = a\,T\!\left(\frac{n}{b}\right) + f(n)$

Here $n$ is the size of an input problem, $a$ is the number of subproblems in the recursion, and $b$ is the factor by which the subproblem size is reduced in each recursive call ($b > 1$). Crucially, $a$ and $b$ must not depend on $n$. The theorem below also assumes that, as a base case for the recurrence, $T(n) = \Theta(1)$ when $n$ is less than some bound $\kappa > 0$, the smallest input size that will lead to a recursive call.
Recurrences of this form often satisfy one of the three following regimes, based on how the work to split/recombine the problem, $f(n)$, relates to the critical exponent $c_{\mathrm{crit}} = \log_b a$:

Case 1 (work dominated by the subproblems): if $f(n) = O(n^c)$ for some $c < c_{\mathrm{crit}}$, then $T(n) = \Theta(n^{c_{\mathrm{crit}}})$.

Case 2 (work comparable at each level): if $f(n) = \Theta(n^{c_{\mathrm{crit}}} \log^k n)$ for some $k \ge 0$, then $T(n) = \Theta(n^{c_{\mathrm{crit}}} \log^{k+1} n)$.

Case 3 (work dominated by the top level): if $f(n) = \Omega(n^c)$ for some $c > c_{\mathrm{crit}}$, and $f$ satisfies the regularity condition $a f(n/b) \le k f(n)$ for some constant $k < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

A useful extension of Case 2 handles all values of $k$: if $f(n) = \Theta(n^{c_{\mathrm{crit}}} \log^k n)$, then for $k > -1$ the solution is $\Theta(n^{c_{\mathrm{crit}}} \log^{k+1} n)$; for $k = -1$ it is $\Theta(n^{c_{\mathrm{crit}}} \log \log n)$ (case 2b); and for $k < -1$ it is $\Theta(n^{c_{\mathrm{crit}}})$.
Examples
Case 1 example
$T(n) = 8T\!\left(\frac{n}{2}\right) + 1000n^2$

As one can see from the formula above:

$a = 8$, $b = 2$, so $c_{\mathrm{crit}} = \log_b a = \log_2 8 = 3$

$f(n) = 1000n^2 = O(n^c)$, where $c = 2$

Next, we see if we satisfy the case 1 condition:

$c < c_{\mathrm{crit}}$, since $2 < 3$.

It follows from the first case of the master theorem that

$T(n) = \Theta(n^{c_{\mathrm{crit}}}) = \Theta(n^3)$

(This result is confirmed by the exact solution of the recurrence relation, which is $T(n) = 1001n^3 - 1000n^2$, assuming $T(1) = 1$).
Case 2 example
$T(n) = 2T\!\left(\frac{n}{2}\right) + 10n$

As we can see in the formula above the variables get the following values:

$a = 2$, $b = 2$, $c_{\mathrm{crit}} = \log_b a = \log_2 2 = 1$

$f(n) = 10n = \Theta(n^c \log^k n)$ where $c = 1$, $k = 0$

Next, we see if we satisfy the case 2 condition:

$c = c_{\mathrm{crit}}$, and therefore, $c$ and $c_{\mathrm{crit}}$ are equal

So it follows from the second case of the master theorem:

$T(n) = \Theta(n^{c_{\mathrm{crit}}} \log^{k+1} n) = \Theta(n \log n)$

Thus the given recurrence relation was in $\Theta(n \log n)$.

(This result is confirmed by the exact solution of the recurrence relation, which is $T(n) = n + 10n\log_2 n$, assuming $T(1) = 1$).
Case 3 example
$T(n) = 2T\!\left(\frac{n}{2}\right) + n^2$

As we can see in the formula above the variables get the following values:

$a = 2$, $b = 2$, $c_{\mathrm{crit}} = \log_b a = \log_2 2 = 1$

$f(n) = n^2 = \Omega(n^c)$, where $c = 2$

Next, we see if we satisfy the case 3 condition:

$c > c_{\mathrm{crit}}$, and therefore, yes, $2 > 1$

The regularity condition also holds:

$2\left(\frac{n^2}{4}\right) = \frac{n^2}{2} \le k n^2$, choosing $k = \frac{1}{2}$

So it follows from the third case of the master theorem:

$T(n) = \Theta(f(n)) = \Theta(n^2)$

Thus the given recurrence relation was in $\Theta(n^2)$, that complies with the $f(n)$ of the original formula.

(This result is confirmed by the exact solution of the recurrence relation, which is $T(n) = 2n^2 - n$, assuming $T(1) = 1$.)
Inadmissible equations
The following equations cannot be solved using the master theorem:
$T(n) = 2^n T\!\left(\frac{n}{2}\right) + n^n$: a is not a constant; the number of subproblems should be fixed

$T(n) = 2T\!\left(\frac{n}{2}\right) + \frac{n}{\log n}$: non-polynomial difference between $f(n)$ and $n^{\log_b a}$ (see below; extended version applies)

$T(n) = 64T\!\left(\frac{n}{8}\right) - n^2 \log n$: $f(n)$, which is the combination time, is not positive

$T(n) = T\!\left(\frac{n}{2}\right) + n(2 - \cos n)$: case 3 but regularity violation.
In the second inadmissible example above, the difference between $f(n)$ and $n^{\log_b a}$ can be expressed with the ratio $\frac{f(n)}{n^{\log_b a}} = \frac{n/\log n}{n} = \frac{1}{\log n}$. It is clear that $\log n < n^\epsilon$ for any constant $\epsilon > 0$. Therefore, the difference is not polynomial and the basic form of the Master Theorem does not apply. The extended form (case 2b) does apply, with $k = -1$, giving the solution $T(n) = \Theta(n \log \log n)$.
Application to common algorithms
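Classic applications include binary search, with $T(n) = T(n/2) + \Theta(1)$ giving $\Theta(\log n)$, and merge sort, with $T(n) = 2T(n/2) + \Theta(n)$ giving $\Theta(n \log n)$. The Python sketch below mechanizes the case analysis above for recurrences with $f(n) = \Theta(n^c \log^k n)$; for such polynomial f the regularity condition of case 3 holds automatically.

import math

def master_theorem(a, b, c, k=0):
    """Classify T(n) = a*T(n/b) + Θ(n^c log^k n), assuming a >= 1 and b > 1."""
    crit = math.log(a, b)                     # critical exponent log_b(a)
    if c < crit and not math.isclose(c, crit):
        return f"Θ(n^{crit:g})"               # case 1: the leaves dominate
    if c > crit and not math.isclose(c, crit):
        return f"Θ(n^{c:g} log^{k:g} n)"      # case 3: the root dominates
    if k > -1:                                # case 2 (extended)
        return f"Θ(n^{crit:g} log^{k + 1:g} n)"
    if k == -1:
        return f"Θ(n^{crit:g} log log n)"     # case 2b
    return f"Θ(n^{crit:g})"

print(master_theorem(1, 2, 0))   # binary search: Θ(n^0 log^1 n) = Θ(log n)
print(master_theorem(2, 2, 1))   # merge sort:    Θ(n^1 log^1 n) = Θ(n log n)
print(master_theorem(8, 2, 2))   # case 1 example above: Θ(n^3)
print(master_theorem(2, 2, 2))   # case 3 example above: Θ(n^2 log^0 n) = Θ(n^2)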
See also
Akra–Bazzi method
Asymptotic complexity
Notes
References
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 2001. . Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. . The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270.
Asymptotic analysis
Theorems in computational complexity theory
Recurrence relations
Analysis of algorithms | Master theorem (analysis of algorithms) | Mathematics | 1,355 |
11,388,276 | https://en.wikipedia.org/wiki/Two-dimensionalism | Two-dimensionalism is an approach to semantics in analytic philosophy. It is a theory of how to determine the sense and reference of a word and the truth-value of a sentence. It is intended to resolve the puzzle: How is it possible to discover empirically that a necessary truth is true? Two-dimensionalism provides an analysis of the semantics of words and sentences that makes sense of this possibility. The theory was first developed by Robert Stalnaker, but it has been advocated by numerous philosophers since, including David Chalmers.
Two-dimensional semantic analysis
According to two-dimensionalism, any statement, for example "Water is H2O", is taken to express two distinct propositions, often referred to as a primary intension and a secondary intension, which together compose its meaning.
The primary intension of a word or sentence is its sense, i.e., the idea or method by which we find its referent. In other words, it is how we identify something in any possible world before knowing its actual nature. The primary intension of "water" might be a description, such as watery stuff or "the clear, drinkable liquid that fills oceans and lakes." The entity identified by this intension could vary in different hypothetical worlds. In the twin Earth thought experiment, for example, inhabitants might use "water" to mean their equivalent of water, even if its chemical composition is not H2O but XYZ. Thus, for that world, "water" does not refer to H2O.
The secondary intension of "water" is whatever "water" refers to in this world. It's determined after we discover water's actual composition in our world. So, if we assign "water" the primary intension watery stuff, then the secondary intension of "water" is H2O, since H2O is watery stuff in this world. The secondary intension of "water" in our world is H2O, which is H2O in every world because unlike watery stuff it is impossible for H2O to be other than H2O. When considered according to its secondary intension, "Water is H2O" is true in every world. This explains how "water is XYZ" can be conceivable (using the primary intension) but not possible (using the secondary intension).
Impact
If two-dimensionalism is workable it solves some very important problems in the philosophy of language. Saul Kripke has argued that "Water is H2O" is an example of a necessary truth which is true a posteriori, since we had to discover that water was H2O, but given that it is true (which it is) it cannot be false. It would be absurd to claim that something that is water is not H2O, for these are known to be identical.
However, this contention that one and the same proposition can be both a posteriori and necessary is considered absurd by some philosophers (as is Kripke's paired claim that the same proposition can be both a priori and contingent).
For example, Robert Stalnaker's account of knowledge represents knowledge as a relation on possible worlds, which entails that it is impossible for a proposition to fail to be a priori given that it is necessary. This can be proven as follows: If a proposition P is necessary it is true in all possible worlds. If P is true at all possible worlds and what we know are sets of possible worlds, then it is not possible not to know that P, for P is the case at all possible worlds in the set of worlds that we know. So if P is necessary then we know it necessarily, and ipso facto we know it a priori.
Under two-dimensionalism, the problem disappears. The primary intension of "Water is H2O" is the a posteriori component, since it is contingent that the referent of "water" is H2O, while the secondary intension is the necessary component of the sentence, since it is necessary that the stuff we in fact call water is H2O. Neither intension gives us both a necessary and an a posteriori component. But one gets the false impression that the sentence expresses a necessary a posteriori proposition because this single sentence expresses two propositions, one a posteriori and one necessary.
In the philosophy of mind
Two-dimensional semantics has been used by David Chalmers to counter objections to the various arguments against materialism in the philosophy of mind. Specifically, Chalmers deploys two-dimensional semantics to "bridge the (gap between) epistemic and modal domains" in arguing from knowability or epistemic conceivability to what is necessary or possible (modalities).
The reason Chalmers employs two-dimensional semantics is to avoid objections to conceivability implying possibility. For instance, it is claimed that we can conceive of water not having been H2O, but that it is not possible that water isn't H2O. Chalmers replies that it is 1-possible that water wasn't H2O, because we can imagine another substance XYZ with watery properties, but that it is not 2-possible. Hence, objections to conceivability implying possibility are unfounded when these words are used more carefully.
Chalmers then advances the following "two-dimensional argument against materialism". Define P as all physical truths about the universe and Q as a truth about phenomenal experience, such as that someone is conscious. Let "1-possible" refer to possibility relative to primary intension and "2-possible" relative to secondary intension.
P&~Q is conceivable [i.e., zombies are conceivable]
If P&~Q is conceivable, then P&~Q is 1-possible
If P&~Q is 1-possible, then P&~Q is 2-possible or Russellian monism is true.
If P&~Q is 2-possible, materialism is false.
Materialism is false or Russellian monism is true.
Criticism
Scott Soames is a notable opponent of two-dimensionalism, which he sees as an attempt to revive Russellian–Fregean descriptivism and to overturn what he sees as a "revolution" in semantics begun by Kripke and others. Soames argues that two-dimensionalism stems from a misreading of passages in Kripke (1980) as well as Kaplan (1989).
See also
David Kaplan
References
Sources
External links
Two-Dimensional Semantics (Internet Encyclopedia of Philosophy)
Two-Dimensional Semantics (Stanford Encyclopedia of Philosophy)
Assertion by Robert Stalnaker
Two dimensional semantics--the basics Christian Nimtz
The Case of Hyper-intensionality in Two-Dimensional Modal Semantics: Alexandra Arapinis
Two-Dimensionalism and Kripkean A Posteriori Necessity Kai-Yee Wong
Sentence-Relativity and the Necessary A Posteriori Kai-Yee Wong
Two Dimensional Semantics by David Chalmers
The Foundations of Two Dimensional Semantics by David Chalmers
The Two Dimensional Argument against Materialism by David Chalmers
Two Dimensional Modal Logic by Gary Hardegree
Modal logic
Semantics
Theories of language
Modal metaphysics | Two-dimensionalism | Mathematics | 1,500 |
20,830,595 | https://en.wikipedia.org/wiki/Relative%20abundance%20distribution | In ecology the relative abundance distribution (RAD) or species abundance distribution species abundance distribution (SAD) describes the relationship between the number of species observed in a field study as a function of their observed abundance. The SAD is one of ecology's oldest and most universal laws – every community shows a hollow curve or hyperbolic shape on a histogram with many rare species and just a few common species. When plotted as a histogram of number (or percent) of species on the y-axis vs. abundance on an arithmetic x-axis, the classic hyperbolic J-curve or hollow curve is produced, indicating a few very abundant species and many rare species. The SAD is central prediction of the Unified neutral theory of biodiversity.
Starting in the 1970s and running unabated to the present day, mechanistic models (models attempting to explain the causes of the hollow curve SAD) and alternative interpretations and extensions of prior theories have proliferated to an extraordinary degree. The graphs obtained in this manner are typically fitted to a Zipf–Mandelbrot law, the exponent of which serves as an index of biodiversity in the ecosystem under study.
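The hollow curve is easy to reproduce in simulation. The following Python sketch draws a synthetic community from a lognormal abundance distribution (one common SAD model; all parameter values are arbitrary illustrative choices) and tallies the number of species per abundance class, showing many rare and few common species.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic community: 200 species with lognormally distributed abundances
abundances = np.ceil(rng.lognormal(mean=2.0, sigma=1.5, size=200)).astype(int)

# Number of species in each abundance class: the classic hollow curve
counts, edges = np.histogram(abundances, bins=np.arange(1, abundances.max() + 2))
for abundance, n_species in zip(edges[:-1], counts):
    if n_species:
        print(f"abundance {abundance:>4}: {n_species} species")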
Notes and references
Ecological metrics | Relative abundance distribution | Mathematics | 241 |
1,566,437 | https://en.wikipedia.org/wiki/Physiologically%20based%20pharmacokinetic%20modelling | Physiologically based pharmacokinetic (PBPK) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion (ADME) of synthetic or natural chemical substances in humans and other animal species. PBPK modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals.
PBPK models strive to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the phenomena involved in the complex ADME processes. A large degree of residual simplification and empiricism is still present in those models, but they have an extended domain of applicability compared to that of classical, empirical function based, pharmacokinetic models. PBPK models may have purely predictive uses, but other uses, such as statistical inference, have been made possible by the development of Bayesian statistical tools able to deal with complex models. That is true for both toxicity risk assessment and therapeutic drug development.
PBPK models try to rely a priori on the anatomical and physiological structure of the body, and to a certain extent, on biochemistry. They are usually multi-compartment models, with compartments corresponding to predefined organs or tissues, with interconnections corresponding to blood or lymph flows (more rarely to diffusions). A system of differential equations for concentration or quantity of substance on each compartment can be written, and its parameters represent blood flows, pulmonary ventilation rate, organ volumes etc., for which information is available in scientific publications. Indeed, the description they make of the body is simplified and a balance needs to be struck between complexity and simplicity. Besides the advantage of allowing the recruitment of a priori information about parameter values, these models also facilitate inter-species transpositions or extrapolation from one mode of administration to another (e.g., inhalation to oral). An example of a 7-compartment PBPK model, suitable to describe the fate of many solvents in the mammalian body, is given in the Figure on the right.
History
The first pharmacokinetic model described in the scientific literature was in fact a PBPK model. It led, however, to computations intractable at that time. The focus shifted then to simpler models, for which analytical solutions could be obtained (such solutions were sums of exponential terms, which led to further simplifications). The availability of computers and numerical integration algorithms marked a renewed interest in physiological models in the early 1970s. For substances with complex kinetics, or when inter-species extrapolations were required, simple models were insufficient and research continued on physiological models.
By 2010, hundreds of scientific publications had described and used PBPK models, and at least two private companies have based their business on their expertise in this area.
Building a PBPK model
The model equations follow the principles of mass transport, fluid dynamics, and biochemistry in order to simulate the fate of a substance in the body.
Compartments are usually defined by grouping organs or tissues with similar blood perfusion rate and lipid content (i.e. organs for which chemicals' concentration vs. time profiles will be similar). Ports of entry (lung, skin, intestinal tract...), ports of exit (kidney, liver...) and target organs for therapeutic effect or toxicity are often left separate. Bone can be excluded from the model if the substance of interest does not distribute to it. Connections between compartment follow physiology (e.g., blood flow in exit of the gut goes to liver, etc.)
Basic transport equations
Drug distribution into a tissue can be rate-limited by either perfusion or permeability. Perfusion-rate-limited kinetics apply when the tissue membranes present no barrier to diffusion. Blood flow, assuming that the drug is transported mainly by blood, as is often the case, is then the limiting factor to distribution in the various cells of the body. That is usually true for small lipophilic drugs. Under perfusion limitation, the instantaneous rate of entry for the quantity of drug in a compartment is simply equal to (blood) volumetric flow rate through the organ times the incoming blood concentration. In that case, for a generic compartment i, the differential equation for the quantity Qi of substance, which defines the rate of change in this quantity, is:

$\frac{dQ_i}{dt} = F_i\left(C_{art} - \frac{Q_i}{P_i V_i}\right)$

where Fi is blood flow (noted Q in the Figure above), Cart the incoming arterial blood concentration, Pi the tissue over blood partition coefficient and Vi the volume of compartment i.
A complete set of differential equations for the 7-compartment model shown above would consist of one such mass-balance equation per compartment, coupled through the arterial blood concentration and through the venous blood pool that collects the outflows of all tissues.
The above equations include only transport terms and do not account for inputs or outputs.
Those can be modeled with specific terms, as in the following.
Modeling inputs
Modeling inputs is necessary to come up with a meaningful description of a chemical's pharmacokinetics. The following examples show how to write the corresponding equations.
Ingestion
When dealing with an oral bolus dose (e.g. ingestion of a tablet), first order absorption is a very common assumption. In that case the gut equation is augmented with an input term, with an absorption rate constant Ka:

$\frac{dQ_{gut}}{dt} = F_{gut}\left(C_{art} - \frac{Q_{gut}}{P_{gut} V_{gut}}\right) + K_a Q_{ing}$

That requires defining an equation for the quantity ingested and present in the gut lumen:

$\frac{dQ_{ing}}{dt} = -K_a Q_{ing}$

In the absence of a gut compartment, input can be made directly in the liver. However, in that case local metabolism in the gut may not be correctly described. The case of approximately continuous absorption (e.g. via drinking water) can be modeled by a zero-order absorption rate (here Ring in units of mass over time):

$\frac{dQ_{gut}}{dt} = F_{gut}\left(C_{art} - \frac{Q_{gut}}{P_{gut} V_{gut}}\right) + R_{ing}$
More sophisticated gut absorption models can be used. In those models, additional compartments describe the various sections of the gut lumen and tissue. Intestinal pH, transit times and the presence of active transporters can be taken into account.
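The pieces above assemble into a small simulation: transport equations for each compartment plus a first-order absorption input. The following Python sketch integrates a deliberately reduced model with a gut, a fat compartment and a blood pool; the compartment set and all parameter values are made-up illustrative numbers, not a curated PBPK model, and elimination is omitted so that total mass is conserved.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (made-up values, not a curated model)
F = {"gut": 60.0, "fat": 5.0}                 # blood flows, L/h
V = {"gut": 1.0, "fat": 10.0, "blood": 5.0}   # volumes, L
P = {"gut": 3.0, "fat": 50.0}                 # tissue:blood partition coefficients
ka = 1.0                                      # first-order absorption rate, 1/h

def rhs(t, y):
    q_ing, q_gut, q_fat, q_blood = y
    c_art = q_blood / V["blood"]
    dq_ing = -ka * q_ing
    dq_gut = F["gut"] * (c_art - q_gut / (P["gut"] * V["gut"])) + ka * q_ing
    dq_fat = F["fat"] * (c_art - q_fat / (P["fat"] * V["fat"]))
    dq_blood = -(dq_gut - ka * q_ing) - dq_fat   # venous return closes the mass balance
    return [dq_ing, dq_gut, dq_fat, dq_blood]

# 100 mg oral bolus in the gut lumen at t = 0, simulated for 24 h
sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])   # amounts (mg) in lumen, gut, fat and blood after 24 h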
Skin depot
The absorption of a chemical deposited on skin can also be modeled using first order terms. It is best in that case to separate the skin from the other tissues, to further differentiate exposed skin and non-exposed skin, and differentiate viable skin (dermis and epidermis) from the stratum corneum (the actual skin upper layer exposed). This is the approach taken in [Bois F., Diaz Ochoa J.G. Gajewska M., Kovarich S., Mauch K., Paini A., Péry A., Sala Benito J.V., Teng S., Worth A., in press, Multiscale modelling approaches for assessing cosmetic ingredients safety, Toxicology. doi: 10.1016/j.tox.2016.05.026]
Unexposed stratum corneum exchanges with the underlying viable skin by passive diffusion, at a rate proportional to the exchange surface area (the total skin surface area times the fraction of skin unexposed), to the concentration gradient between the two layers, and to the stratum corneum over viable skin partition coefficient. Analogous equations, written with the exposed fraction of the skin surface area, describe the exposed stratum corneum and the viable skin beneath it; the exposed stratum corneum receives in addition the input term for the quantity of chemical deposited. The two viable skin compartments (exposed and unexposed) are perfusion-limited: they feed from arterial blood and return to venous blood, as in the generic compartment equation above.
More complex diffusion models have been published [reference to add].
Intra-venous injection
Intravenous injection is a common clinical route of administration. A bolus injection can be modeled by assigning the administered dose directly to the venous blood compartment at time zero; a constant-rate infusion adds a zero-order input term to the venous blood equation for the duration of the infusion.
Inhalation
Inhalation occurs through the lung and is hardly dissociable from exhalation: at each breath, chemical enters the body with inhaled air while part of it is washed out with exhaled air. Uptake is usually modeled as an exchange between alveolar air and blood, governed by the alveolar ventilation rate, the pulmonary blood flow, and the blood over air partition coefficient.
Modelling metabolism
There are several ways metabolism can be modeled. For some models, a linear excretion rate is preferred. This can be accomplished with a simple differential equation. Otherwise a Michaelis-Menten equation, as follows, is generally appropriate for a more accurate result:

$\frac{dQ_{met}}{dt} = \frac{V_{max}\,C}{K_m + C}$

where $V_{max}$ is the maximum metabolic rate, $K_m$ is the concentration at which the rate is half-maximal, and $C$ is the substrate concentration in the metabolizing compartment.
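A short sketch makes the saturable behaviour visible: the rate is nearly proportional to concentration well below Km, and approaches the ceiling Vmax well above it (parameter values are illustrative).

def michaelis_menten_rate(c, v_max=10.0, k_m=2.0):
    """Metabolic rate at substrate concentration c (illustrative units)."""
    return v_max * c / (k_m + c)

for c in (0.1, 1.0, 10.0, 100.0):
    print(c, michaelis_menten_rate(c))   # ~linear for c << k_m, -> v_max for c >> k_m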
Uses of PBPK modeling
PBPK models are compartmental models like many others, but they have a few advantages over so-called "classical" pharmacokinetic models, which are less grounded in physiology. PBPK models can first be used to abstract and eventually reconcile disparate data (from physicochemical or biochemical experiments, in vitro or in vivo pharmacological or toxicological experiments, etc.) They give also access to internal body concentrations of chemicals or their metabolites, and in particular at the site of their effects, be it therapeutic or toxic. Finally they also help interpolation and extrapolation of knowledge between:
Doses: e.g., from the high concentrations typically used in laboratory experiments to those found in the environment
Exposure duration: e.g., from continuous to discontinuous, or single to multiple exposures
Routes of administration: e.g., from inhalation exposures to ingestion
Species: e.g., transpositions from rodents to human, prior to giving a drug for the first time to subjects of a clinical trial, or when experiments on humans are deemed unethical, such as when the compound is toxic without therapeutic benefit
Individuals: e.g., from males to females, from adults to children, from non-pregnant women to pregnant
From in vitro to in vivo.
Some of these extrapolations are "parametric" : only changes in input or parameter values are needed to achieve the extrapolation (this is usually the case for dose and time extrapolations). Others are "nonparametric" in the sense that a change in the model structure itself is needed (e.g., when extrapolating to a pregnant female, equations for the foetus should be added).
Owing to the mechanistic basis of PBPK models, another potential use of PBPK modeling is hypothesis testing. For example, if a drug compound showed lower-than-expected oral bioavailability, various model structures (i.e., hypotheses) and parameter values can be evaluated to determine which models and/or parameters provide the best fit to the observed data. If the hypothesis that metabolism in the intestines was responsible for the low bioavailability yielded the best fit, then the PBPK modeling results support this hypothesis over the other hypotheses evaluated.
As such, PBPK modeling can be used, inter alia, to evaluate the involvement of carrier-mediated transport, clearance saturation, enterohepatic recirculation of the parent compound, extra-hepatic/extra-gut elimination; higher in vivo solubility than predicted in vitro; drug-induced gastric emptying delays; gut loss and regional variation in gut absorption.
Limits and extensions of PBPK modeling
Each type of modeling technique has its strengths and limitations. PBPK modeling is no exception. One limitation is the potential for a large number of parameters, some of which may be correlated. This can lead to the issues of parameter identifiability and redundancy. However, it is possible (and commonly done) to model explicitly the correlations between parameters (for example, the non-linear relationships between age, body-mass, organ volumes and blood flows).
After numerical values are assigned to each PBPK model parameter, specialized or general computer software is typically used to numerically integrate a set of ordinary differential equations like those described above, in order to calculate the numerical value of each compartment at specified values of time (see Software). However, if such equations involve only linear functions of each compartmental value, or under limiting conditions (e.g., when input values remain very small) that guarantee such linearity is closely approximated, such equations may be solved analytically to yield explicit equations (or, under those limiting conditions, very accurate approximations) for the time-weighted average (TWA) value of each compartment as a function of the TWA value of each specified input.
PBPK models can rely on chemical property prediction models (QSAR models or predictive chemistry models) on one hand. For example, QSAR models can be used to estimate partition coefficients. They also extend into, but are not destined to supplant, systems biology models of metabolic pathways. They are also parallel to physiome models, but do not aim at modelling physiological functions beyond fluid circulation in detail. In fact the above four types of models can reinforce each other when integrated.
References
Further references:
Reddy M. et al. (2005) Physiologically Based Pharmacokinetic Modeling : Science and Applications, Wiley-Interscience.
Peters S.A (2012) Physiologically-Based Pharmacokinetic (PBPK) Modeling and Simulations, Wiley.
Forums
Ecotoxmodels is a website on mathematical models in ecotoxicology.
Software
Dedicated software:
BioDMET
GastroPlus
Maxsim2
PK-Sim
PKQuest
PSE: gCOAS
Simcyp Simulator
ADME Workbench
General software:
ADAPT 5
Berkeley Madonna
COPASI: Biochemical System Simulator
Ecolego
Free simulation software: GNU MCSIM
GNU Octave
Matlab PottersWheel
ModelMaker
PhysioLab
R deSolve package
SAAM II
Phoenix WinNonlin/NLME/IVIVC/Trial Simulator
Toxicology
Pharmacokinetics
Pharmaceutics | Physiologically based pharmacokinetic modelling | Chemistry,Environmental_science | 2,736 |
39,332,819 | https://en.wikipedia.org/wiki/Gulp%3A%20Adventures%20on%20the%20Alimentary%20Canal | Gulp: Adventures on the Alimentary Canal is a nonfiction work by science author Mary Roach, published in April 2013 by W.W. Norton & Company.
Topics covered
The book covers 17 topics:
Nose Job: Tasting has little to do with taste
I'll Have the Putrescine: Your pet is not like you.
Liver and Opinions: Why we eat what we eat and despise the rest
The Longest Meal: Can thorough chewing lower the national debt?
Hard to Stomach: The acid relationship of William Beaumont and Alexis St. Martin.
Spit Gets a Polish: Someone ought to bottle the stuff
A Bolus of Cherries: Life at the oral processing lab
Big Gulp: How to survive being swallowed alive
Dinner's Revenge: Can the eaten eat back?
Stuffed: The science of eating yourself to death
Up Theirs: The alimentary canal as criminal accomplice
Inflammable You: Fun with hydrogen and methane
Dead Man's Bloat: And other diverting tales from the history of flatulence research.
Smelling a Rat: Does noxious flatus do more than clear a room?
Eating Backward: Is the digestive tract a two-way street?
I'm All Stopped Up: Elvis Presley's megacolon, and other ruminations on death by constipation.
The Ick Factor: We can cure you, but there's just one thing
Reviews
Maslin, Janet (April 4, 2013). Food and You, From One End to the Other. Books of the Times (The New York Times). Retrieved June 1, 2013.
Publishers Weekly. (January 21, 2013) Gulp: Adventures on the Alimentary Canal Starred Review (Publishers Weekly.) Retrieved June 1, 2013.
References
External links
Gulp Video Book Trailer
Mary Roach's website.
Gulp page on Publisher's Site, W.W. Norton and Company
Roach talks about Stiff on NPR
Roach talks about Gulp on Bullseye with Jesse Thorn
2013 non-fiction books
Biology books
Digestive system
Popular science books
W. W. Norton & Company books | Gulp: Adventures on the Alimentary Canal | Biology | 421 |
4,295,839 | https://en.wikipedia.org/wiki/COGO | COGO is a suite of programs used in civil engineering for modelling horizontal and vertical alignments and solving coordinate geometry problems. Cogo alignments are used as controls for the geometric design of roads, railways, and stream relocations or restorations.
COGO was originally a subsystem of MIT's Integrated Civil Engineering System (ICES), developed in the 1960s. Other ICES subsystems included STRUDL, BRIDGE, LEASE, PROJECT, ROADS and TRANSET, and the internal languages ICETRAN and CDL. Evolved versions of COGO are still widely used.
Some basic types of elements of COGO are points, Euler spirals, lines and horizontal curves (circular arcs).
More complex elements can be developed such as alignments or chains which are made up of a combination of points, curves or spirals.
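To make the element and alignment ideas concrete, here is a minimal sketch of a classic coordinate-geometry operation, locating new points by bearing and distance and chaining them into a simple traverse; the function name and all coordinates are hypothetical and do not reproduce the original ICES COGO command syntax.

# Minimal COGO-style sketch: compute a point from a known point, a bearing
# (degrees clockwise from north) and a distance, then chain points into a traverse.
import math

def locate(north, east, bearing_deg, distance):
    b = math.radians(bearing_deg)
    # Surveying convention: delta-north uses cos, delta-east uses sin.
    return (north + distance * math.cos(b), east + distance * math.sin(b))

p1 = (1000.0, 2000.0)            # known control point (N, E), hypothetical
p2 = locate(*p1, 45.0, 100.0)    # 100 units on a bearing of N45E
p3 = locate(*p2, 135.0, 50.0)    # then 50 units on a bearing of S45E
print(p2, p3)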
See also
Civil engineering software
References
"Engineer's Guide to ICES COGO I", R67-46, Civil Engineering Dept MIT (Aug 1967)
"An Integrated Computer System for Engineering Problem Solving", D. Roos, Proc SJCC 27(2), AFIPS (Spring 1965). Sammet 1969, pp.615-620.
Mathematical software
Surveying
History of software | COGO | Mathematics,Technology,Engineering | 255 |
8,894,516 | https://en.wikipedia.org/wiki/BioCyc%20database%20collection | The BioCyc database collection is an assortment of organism specific Pathway/Genome Databases (PGDBs) that provide reference to genome and metabolic pathway information for thousands of organisms. As of July 2023, there were over 20,040 databases within BioCyc. SRI International, based in Menlo Park, California, maintains the BioCyc database family.
Categories of Databases
Based on the manual curation done, BioCyc database family is divided into 3 tiers:
Tier 1: Databases which have received at least one year of literature based manual curation. Currently there are seven databases in Tier 1. Out of the seven, MetaCyc is a major database that contains almost 2500 metabolic pathways from many organisms. The other important Tier 1 database is HumanCyc which contains around 300 metabolic pathways found in humans. The remaining five databases include, EcoCyc (E. coli), AraCyc (Arabidopsis thaliana), YeastCyc (Saccharomyces cerevisiae), LeishCyc (Leishmania major Friedlin) and TrypanoCyc (Trypanosoma brucei).
Tier 2: Databases that were computationally predicted but have received moderate manual curation (most with 1–4 months curation). Tier 2 Databases are available for manual curation by scientists who are interested in any particular organism. Tier 2 databases currently contain 43 different organism databases.
Tier 3: Databases that were computationally predicted by PathoLogic and received no manual curation. As with Tier 2, Tier 3 databases are also available for curation for interested scientists.
Software tools
The ontological resource contains a variety of software tools for searching, visualizing, comparing, and analyzing genome and pathway information. It includes a genome browser, and browsers for metabolic and regulatory networks. The website also includes tools for painting large-scale ("omics") datasets onto metabolic and regulatory networks, and onto the genome.
Use in Research
Since the BioCyc database family comprises a long list of organism-specific databases, with data at different systems levels of a living system, its use in research spans a wide variety of contexts. Two studies are highlighted here that illustrate two different kinds of use: one on a genome scale and the other on identifying specific SNPs (single nucleotide polymorphisms) within a genome.
AlgaGEM
AlgaGEM is a genome scale metabolic network model for a compartmentalized algae cell developed by Gomes de Oliveira Dal’Molin et al. based on the Chlamydomonas reinhardtii genome. It has 866 unique ORFs, 1862 metabolites, 2499 gene-enzyme-reaction-association entries, and 1725 unique reactions. One of the Pathway databases used for reconstruction is MetaCyc.
SNPs
The study by Shimul Chowdhury et al. showed that associations between maternal SNPs and metabolites involved in the homocysteine, folate, and transsulfuration pathways differed in cases with congenital heart defects (CHDs) as opposed to controls. The study used HumanCyc to select candidate genes and SNPs.
References
Biochemistry databases
Genome databases
Biotechnology
Metabolomic databases
SRI International | BioCyc database collection | Chemistry,Biology | 659 |
76,983,517 | https://en.wikipedia.org/wiki/Wolfgang%20Gustav%20Triest | Wolfgang Gustav Triest (1946) was an American civil engineer. He was the founder of the Triest Construction Company in New York, and had homes in Annapolis, Maryland, and Southampton, Long Island.
Early life
Wolfgang Gustav Triest, also known as W. G. Triest, was born in New York City, New York.
Career
In 1898, Frederick Snare and W.G. Triest founded the Snare & Triest Company. The Snare & Triest Company was incorporated in 1900. The Snare & Triest Company was renamed the Frederick Snare Corporation Contracting Engineers in the early 1920s. Around 1921, Snare and Triest agreed to part ways. He continued in heavy construction under the firm, Triest Contracting Corporation, subway and bridge builders of New York.
Triest was involved in the construction of the Brooklyn Bridge. His company also built part of the subway tunnel on the IND line, part of the Queens–Midtown Tunnel, and the cutoff wall of Merriman Dam in Wawarsing, New York.
Death
W.G. Triest died in 1946. He was a resident of Great Neck in Nassau County at the time of his death.
References
1946 deaths
Civil engineers
American civil engineers
Civil engineering contractors
American civil engineering contractors
American bridge engineers | Wolfgang Gustav Triest | Engineering | 259 |
436,324 | https://en.wikipedia.org/wiki/PhyloCode | The International Code of Phylogenetic Nomenclature, known as the PhyloCode for short, is a formal set of rules governing phylogenetic nomenclature. Its current version is specifically designed to regulate the naming of clades, leaving the governance of species names up to the rank-based nomenclature codes (ICN, ICNCP, ICNP, ICZN, ICVCN).
The PhyloCode is associated with the International Society for Phylogenetic Nomenclature (ISPN). The companion volume, Phylonyms, establishes 300 taxon names under PhyloCode, serving as examples for those unfamiliar with the code. RegNum is an associated online database for registered clade names.
The PhyloCode regulates phylogenetic nomenclature by providing rules for deciding which associations of names and definitions are considered established, which of those will be considered homonyms or synonyms, and which one of a set of synonyms or homonyms will be considered accepted (generally the one registered first; see below). The PhyloCode only governs the naming of clades, not of paraphyletic or polyphyletic groups, and only allows the use of specimens, species, and apomorphies as specifiers (anchors).
Phylogenetic nomenclature
Unlike rank-based nomenclatural codes (ICN, ICZN, ICNB), the PhyloCode does not require the use of ranks, although it does optionally allow their use. The rank-based codes define taxa using a rank (such as genus, family, etc.) and, in many cases, a type specimen or type subtaxon. The exact content of a taxon, other than the type, is not specified by the rank-based codes.
In contrast, under phylogenetic nomenclature, the content of a taxon is delimited using a definition that is based on phylogeny (i.e., ancestry and descent) and uses specifiers (e.g., species, specimens, apomorphies) to indicate actual organisms. The formula of the definition indicates an ancestor. The defined taxon, then, is that ancestor and all of its descendants. Thus, the content of a phylogenetically defined taxon relies on a phylogenetic hypothesis.
The following are examples of types of phylogenetic definition (capital letters indicate specifiers):
Node-based: "the clade originating with the most recent common ancestor of A and B" or "the least inclusive clade containing A and B"
Branch-based: "the clade consisting of A and all organisms or species that share a more recent common ancestor with A than with Z" or "the most inclusive clade containing A but not Z." Another term for definitions of this sort is stem-based.
Apomorphy-based: "the clade originating with the first organism or species to possess apomorphy M inherited by A".
Other types of definition are possible as well, taking into account not only organisms' phylogenetic relations and apomorphies but also whether or not related organisms are extant.
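A node-based definition can be made operational on a concrete tree: the minimal sketch below finds the least inclusive clade containing two specifiers A and B, i.e., their most recent common ancestor and all of its descendants. The tree, taxon names, and representation are hypothetical.

# Minimal sketch: membership of the node-based clade "the least inclusive
# clade containing A and B" on a rooted tree. Tree and names hypothetical.
tree = {"root": ["x", "Z"], "x": ["A", "y"], "y": ["B", "C"]}

def path_to(node, cur="root"):
    if cur == node:
        return [cur]
    for child in tree.get(cur, []):
        p = path_to(node, child)
        if p:
            return [cur] + p
    return None

def clade(a, b):
    pa, pb = path_to(a), path_to(b)
    mrca = [x for x, y in zip(pa, pb) if x == y][-1]   # deepest shared ancestor
    members, stack = set(), [mrca]
    while stack:                                       # collect all descendants
        n = stack.pop()
        members.add(n)
        stack.extend(tree.get(n, []))
    return members

print(sorted(clade("A", "B")))   # the MRCA of A and B plus all its descendants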
The following table gives examples of phylogenetic definitions of clades that also have ranks in traditional nomenclature. When all the specifiers in a node-based definition are extant specimens or species, as in the following definition of Mammalia, a crown group is defined. (The traditional definition of Mammalia is less restrictive, including some fossil groups outside of the crown group.)
Versions
The PhyloCode has gone through several revisions. As of 2020, the current version is 6, released on the website on June 8, 2020.
Organization
As with other nomenclatural codes, the rules of the PhyloCode are organized as articles, which in turn are organized as chapters. Each article may also contain notes, examples, and recommendations.
Table of contents
Preface (including Literature Cited)
Preamble
Division I. Principles
Division II. Rules
Chapter I. Taxa (Arts. 1–3)
Chapter II. Publication (Arts. 4–5)
Chapter III. Names (Arts. 6–8)
Chapter IV. Clade Names (Arts. 9–11)
Chapter V. Selection of Established Names (Arts. 12–15)
Chapter VI. Provisions for Hybrids (Art. 16)
Chapter VII. Orthography (Arts. 17–18)
Chapter VIII. Authorship of Names (Art. 19)
Chapter IX. Citation of Authors and Registration Numbers (Art. 20)
Chapter X. Species Names (Art. 21)
Chapter XI. Governance (Art. 22)
Glossary
Appendices
Appendix A. Registration Procedures and Data Requirements
Appendix B. Code of Ethics
Registration database
Once implemented, the PhyloCode will be associated with a registration database, called RegNum, which will store all clade names and definitions that will be considered acceptable. It is hoped that this will provide a publicly usable tool for associating clade names with definitions, which could then be associated with sets of subtaxa or specimens through phylogenetic tree databases (such as TreeBASE).
As currently planned, however, the most important use of RegNum will be the decision of which one of a number of synonyms or homonyms will be considered accepted: the one with the lowest registration number, except in cases of conservation.
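The precedence rule just described can be sketched as a small selection function; the data model, field names, and registration numbers below are hypothetical, and the handling of conserved names is a simplification of the actual conservation provisions.

# Minimal sketch of RegNum-style precedence among synonyms: the lowest
# registration number is accepted, unless a name has been conserved.
def accepted(entries):
    conserved = [e for e in entries if e.get("conserved", False)]
    pool = conserved if conserved else entries
    return min(pool, key=lambda e: e["regnum"])

synonyms = [
    {"name": "Cladus_a", "regnum": 105},
    {"name": "Cladus_b", "regnum": 42},
]
print(accepted(synonyms)["name"])   # Cladus_b: lowest registration number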
History
(Condensed from the PhyloCode's Preface.)
The PhyloCode grew out of a workshop at Harvard University in August 1998, where decisions were made about its scope and content. Many of the workshop participants, together with several other people who subsequently joined the project, served as an advisory group. In April 2000, a draft was made public on the web and comments were solicited from the scientific community.
A second workshop was held at Yale University in July 2002, at which some modifications were made in the rules and recommendations of the PhyloCode. Other revisions have been made from time to time as well.
The First International Phylogenetic Nomenclature Meeting, which took place from July 6, 2004, to July 9, 2004, in Paris, France, was attended by about 70 systematic and evolutionary biologists from 11 nations. This was the first open, multi-day conference that focused entirely on phylogenetic nomenclature, and it provided the venue for the inauguration of a new association, the International Society for Phylogenetic Nomenclature (ISPN). The ISPN membership elects the Committee on Phylogenetic Nomenclature (CPN), which has taken over the role of the advisory group that oversaw the earlier stages of development of the PhyloCode.
The Second International Phylogenetic Nomenclature Meeting took place from June 28, 2006, to July 2, 2006, at Yale University (New Haven, Connecticut, U.S.).
The Third International Phylogenetic Nomenclature Meeting took place from July 21, 2008, to July 22, 2008, at Dalhousie University (Halifax, Nova Scotia, Canada).
The PhyloCode went into effect with the publication of the companion volume, Phylonyms, in 2020.
Influences
The theoretical foundation of the PhyloCode was developed in a series of papers by de Queiroz and Gauthier, which was foreshadowed by earlier suggestions that a taxon name could be defined by reference to a part of a phylogenetic tree.
Whenever possible, the writers of the PhyloCode used the draft BioCode, which attempted to unify the rank-based approach into a single code, as a model. Thus, the organization of the PhyloCode, some of its terminology, and the wording of certain rules are derived from the BioCode. Other rules are derived from one or more of the rank-based codes, particularly the botanical and zoological codes. However, many rules in the PhyloCode have no counterpart in any code based on taxonomic ranks because of fundamental differences in the definitional foundations of the alternative systems. Note that the PhyloCode does not govern the names of species, whose rules of availability, typification, etc., remain regulated by the requisite traditional Code of Nomenclature.
Future
The PhyloCode is controversial and has inspired considerable criticism from some taxonomists. Although it was inaugurated decades ago, the number of supporters for widespread adoption of the PhyloCode is still small, and the publication of PhyloCode literature stagnated in the mid-2010s before accelerating after the publication of Phylonyms in 2020 and the launch of the Bulletin of Phylogenetic Nomenclature, a journal dedicated to the publication of nomenclatural acts (especially definitions of taxon names) valid under the PhyloCode. To be valid under the PhyloCode, taxon names and associated definitions must be registered in the RegNum database.
A list of published critiques of the PhyloCode can be found on the ISPN's website, as can a list of rebuttals.
References
Literature
including proposal, but without the 150 supporting signatories
External links
The PhyloCode (current draft)
International Society for Phylogenetic Nomenclature
International Society for Phylogenetic Nomenclature Discussion Forum
Literature on Phylogenetic Nomenclature
RegNum, the official repository of phylogenetic clade names generated according to the rules of the PhyloCode
Christine Soares, What's in a Name?, Scientific American, (November 2004).
PhyloCode debate
What if we decide to rename every living thing on Earth?, Discovery Magazine'', (04.28.2005)
Nomenclature codes | PhyloCode | Biology | 1,903 |
3,180,026 | https://en.wikipedia.org/wiki/DRAM%20price%20fixing%20scandal | In 2002, the United States Department of Justice, under the Sherman Antitrust Act, began a probe into the activities of dynamic random-access memory (DRAM) manufacturers in response to claims by US computer makers, including Dell and Gateway, that inflated DRAM pricing was causing lost profits and hindering their effectiveness in the marketplace.
To date, five manufacturers have pleaded guilty to their involvement in an international price-fixing conspiracy between July 1, 1998, and June 15, 2002, including Hynix, Infineon, Micron Technology, Samsung, and Elpida.
2002–2005
"In December 2003, the Department charged Alfred P. Censullo, a Regional Sales Manager for Micron Technology Inc., with obstruction of justice in violation of 18 U.S.C. § 1503. Censullo pleaded guilty to the charge and admitted to having withheld and altered documents responsive to a grand jury subpoena served on Micron in June 2002."
On 20 October 2004, Infineon also pleaded guilty. The company was fined US$160M for its involvement, then the third largest antitrust fine in US history. In April 2005, Hynix Semiconductor was fined US$185M after they also admitted guilt. In October 2005, Samsung entered a guilty plea in connection with the cartel.
On 5 April 2006, Sun Woo Lee, Senior Manager of DRAM at Samsung Electronics, entered into a plea bargain with the US Government for his involvement in the price fixing conspiracy. Following the plea agreement he was sentenced to 8 months in prison and fined US$250,000. Lee was subsequently promoted to President of Samsung Germany in 2009, and then President of Samsung Europe in 2014.
On 19 May 2010, European antitrust regulators fined nine semiconductor manufacturers a total of more than €331 million for the cartel that operated in 2002. The companies fined were Samsung Electronics, Infineon, Hynix Semiconductor, Elpida Memory, NEC Electronics, Hitachi, Toshiba, Mitsubishi Electric and Nanya Technology. Micron Technology received immunity for blowing the whistle on the cartel and was not fined for its involvement.
2017–2018
On 27 April 2018, Hagens Berman filed a class-action lawsuit against Samsung, Hynix, and Micron in U.S. District Court alleging the trio engaged in DRAM price fixing causing prices to skyrocket through 2016 and 2017. Between June 2016 and January 2018, the price of DRAM nearly tripled.
References
External links
Four Infineon Technologies Executives Agree to Plead Guilty in International DRAM Price-Fixing Conspiracy
Korean Company Hynix Agrees to Plead Guilty to Price Fixing
Anti-competitive practices
Cartels
Price fixing convictions | DRAM price fixing scandal | Technology | 549 |
33,622,416 | https://en.wikipedia.org/wiki/Lactarius%20maculatipes | Lactarius maculatipes is a member of the large milk-cap genus Lactarius in the order Russulales. The species was described as new to science by mycologist Gertrude S. Burlingham in 1942.
See also
List of Lactarius species
References
External links
maculatipes
Fungi described in 1942
Fungi of North America
Fungus species | Lactarius maculatipes | Biology | 75 |
75,479,694 | https://en.wikipedia.org/wiki/Verinurad | Verinurad is a selective URAT1 inhibitor developed for gout and heart failure by AstraZeneca.
References
Drugs developed by AstraZeneca
Nitriles
Naphthalenes
Pyridines
Thioethers
Carboxylic acids | Verinurad | Chemistry | 54 |
72,825,844 | https://en.wikipedia.org/wiki/The%20Little%20Princess%20Trust | The Little Princess Trust is a UK children's charity based in Hereford.
The charity provides free, real hair wigs to children and young people up to the age of 24 who have lost their own hair due to cancer treatment or to other conditions such as Alopecia.
History
The charity was founded by Wendy and Simon Tarplee in memory of their daughter Hannah Tarplee. Hannah was diagnosed with cancer when she was four and lost her hair during chemotherapy.
The Tarplees had problems finding a suitable wig for Hannah before she died in 2005.
The charity is also a significant supporter of childhood cancer research in the UK, and one study at Manchester University NHS Foundation Trust, funded by The Little Princess Trust, has revealed an innovative new treatment for children with acute myeloid leukaemia who were previously on a palliative care pathway.
The Little Princess Trust moved into its own headquarters in 2021; the new premises are called The Hannah Tarplee Building.
References
Hair
Wigs
Cancer
Charities based in the United Kingdom | The Little Princess Trust | Biology | 206 |
2,918,518 | https://en.wikipedia.org/wiki/Atom%20transfer%20radical%20polymerization | Atom transfer radical polymerization (ATRP) is an example of a reversible-deactivation radical polymerization. Like its counterpart, ATRA, or atom transfer radical addition, ATRP is a means of forming a carbon-carbon bond with a transition metal catalyst. Polymerization from this method is called atom transfer radical addition polymerization (ATRAP). As the name implies, the atom transfer step is crucial in the reaction responsible for uniform polymer chain growth. ATRP (or transition metal-mediated living radical polymerization) was independently discovered by Mitsuo Sawamoto and by Krzysztof Matyjaszewski and Jin-Shan Wang in 1995.
A typical ATRP reaction is built around the reversible activation–deactivation equilibrium Pn–X + CuI/L ⇌ Pn• + X–CuII/L, in which dormant chains Pn–X are activated to propagating radicals Pn• and then deactivated again.
Overview of ATRP
ATRP usually employs a transition metal complex as the catalyst with an alkyl halide as the initiator (R-X). Various transition metal complexes, namely those of Cu, Fe, Ru, Ni, and Os, have been employed as catalysts for ATRP. In an ATRP process, the dormant species is activated by the transition metal complex to generate radicals via a one-electron transfer process. Simultaneously, the transition metal is oxidized to a higher oxidation state. This reversible process rapidly establishes an equilibrium that is predominantly shifted to the side with very low radical concentrations. The number of polymer chains is determined by the number of initiators. Each growing chain has the same probability of propagating with monomers to form living/dormant polymer chains (R-Pn-X). As a result, polymers with similar molecular weights and narrow molecular weight distribution can be prepared.
ATRP reactions are very robust in that they are tolerant of many functional groups like allyl, amino, epoxy, hydroxy, and vinyl groups present in either the monomer or the initiator. ATRP methods are also advantageous due to the ease of preparation, commercially available and inexpensive catalysts (copper complexes), pyridine-based ligands, and initiators (alkyl halides).
Components of normal ATRP
There are five important variable components of atom transfer radical polymerizations. They are the monomer, initiator, catalyst, ligand, and solvent. The following section breaks down the contributions of each component to the overall polymerization.
Monomer
Monomers typically used in ATRP are molecules with substituents that can stabilize the propagating radicals; for example, styrenes, (meth)acrylates, (meth)acrylamides, and acrylonitrile. ATRP is successful at leading to polymers of high number average molecular weight and low dispersity when the concentration of the propagating radical balances the rate of radical termination. Yet, the propagating rate is unique to each individual monomer. Therefore, it is important that the other components of the polymerization (initiator, catalyst, ligand, and solvent) are optimized in order for the concentration of the dormant species to be greater than that of the propagating radical while being low enough as to prevent slowing down or halting the reaction.
Initiator
The number of growing polymer chains is determined by the initiator. To ensure a low polydispersity and a controlled polymerization, the rate of initiation must be as fast as, or preferably faster than, the rate of propagation. Ideally, all chains will be initiated in a very short period of time and will propagate at the same rate. Initiators are typically chosen to be alkyl halides whose frameworks are similar to that of the propagating radical. Alkyl halides such as alkyl bromides are more reactive than alkyl chlorides; both offer good molecular weight control. The shape or structure of the initiator influences polymer architecture. For example, initiators with multiple alkyl halide groups on a single core can lead to a star-like polymer shape. Furthermore, α-functionalized ATRP initiators can be used to synthesize hetero-telechelic polymers with a variety of chain-end groups.
Catalyst
The catalyst is the most important component of ATRP because it determines the equilibrium constant between the active and dormant species. This equilibrium determines the polymerization rate. An equilibrium constant that is too small may inhibit or slow the polymerization while an equilibrium constant that is too large leads to a wide distribution of chain lengths.
There are several requirements for the metal catalyst:
There needs to be two accessible oxidation states that are differentiated by one electron
The metal center needs to have reasonable affinity for halogens
The coordination sphere of the metal needs to be expandable when it is oxidized as to accommodate the halogen
The transition metal catalyst should not lead to significant side reactions, such as irreversible coupling with the propagating radicals and catalytic radical termination
The most studied catalysts are those that include copper, which has shown the most versatility with successful polymerizations for a wide selection of monomers.
Ligand
One of the most important aspects of an ATRP reaction is the choice of the ligand, which is used in combination with the (traditionally copper halide) catalyst to form the catalyst complex. The main function of the ligand is to solubilize the copper halide in the chosen solvent and to adjust the redox potential of the copper. This changes the activity and dynamics of the halogen exchange reaction and the subsequent activation and deactivation of the polymer chains during polymerization, therefore greatly affecting the kinetics of the reaction and the degree of control over the polymerization. Different ligands should be chosen based on the activity of the monomer and the choice of metal for the catalyst. As copper halides are primarily used as the catalyst, amine-based ligands are most commonly chosen. Ligands with higher activities are being investigated as ways to decrease the catalyst concentration in the reaction, since a more active catalyst complex leads to a higher concentration of deactivator in the reaction. However, an overly active catalyst can lead to a loss of control and increase the polydispersity of the resulting polymer.
Solvents
Toluene, 1,4-dioxane, xylene, anisole, DMF, DMSO, water, methanol, acetonitrile, or even the monomer itself (described as a bulk polymerization) are commonly used.
Kinetics of normal ATRP
Reactions in atom transfer radical polymerization
Initiation
Quasi-steady state
Other chain breaking reactions (such as irreversible radical termination and transfer reactions) should also be considered.
ATRP equilibrium constant
The radical concentration in normal ATRP can be calculated via the following equation: [Pn•] = KATRP · [Pn–X] · [CuI/L] / [X–CuII/L]
It is important to know the KATRP value to adjust the radical concentration. The KATRP value depends on the homolytic bond cleavage energy of the alkyl halide and on the redox potential of the Cu catalyst with different ligands. Given two alkyl halides (R1-X and R2-X) and two ligands (L1 and L2), there will be four combinations between different alkyl halides and ligands. Let KijATRP refer to the KATRP value for Ri-X and Lj. If we know three of these four combinations, the fourth one can be calculated from the relation K11ATRP · K22ATRP = K12ATRP · K21ATRP; for example, K22ATRP = K12ATRP · K21ATRP / K11ATRP.
The KATRP values for different alkyl halides and different Cu catalysts can be found in literature.
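As a numerical illustration of the two relations above, the sketch below computes a missing K22ATRP from three known combinations and then a steady-state radical concentration; all numerical values are hypothetical and chosen only to show the arithmetic.

# Minimal sketch: K_ATRP algebra and radical concentration. All values hypothetical.
K11, K12, K21 = 1e-8, 1e-6, 1e-9     # K_ATRP for three halide/ligand combinations
K22 = K12 * K21 / K11                # fourth combination, from K11*K22 = K12*K21

RX, CuI, CuII = 0.05, 0.005, 5e-4    # [Pn-X], [CuI/L], [X-CuII/L] in mol/L
radicals = K22 * RX * CuI / CuII     # [Pn*] from the ATRP equilibrium expression
print(f"K22 = {K22:.1e}, [Pn*] = {radicals:.1e} mol/L")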
Solvents have significant effects on the KATRP values. The KATRP value increases dramatically with the polarity of the solvent for the same alkyl halide and the same Cu catalyst. The polymerization must take place in a solvent/monomer mixture, which gradually changes to a solvent/monomer/polymer mixture. The KATRP value can change by a factor of 10,000 on switching the reaction medium from pure methyl acrylate to pure dimethyl sulfoxide.
Activation and deactivation rate coefficients
Deactivation rate coefficient, kd, values must be sufficiently large to obtain low dispersity. The direct measurement of kd is difficult though not impossible. In most cases, kd may be calculated from known KATRP and ka. Cu complexes providing very low kd values are not recommended for use in ATRP reactions.
Retention of chain end functionality
A high level of retention of chain end functionality is typically desired. However, determination of the loss of chain end functionality based on 1H NMR and mass spectrometry methods cannot provide precise values. As a result, it is difficult to identify the contributions of different chain breaking reactions in ATRP. One simple rule in ATRP is the principle of halogen conservation: the total amount of halogen in the reaction system must remain constant. From this rule, the level of retention of chain end functionality can be precisely determined in many cases. The precise determination of the loss of chain end functionality enabled further investigation of the chain breaking reactions in ATRP.
Advantages and disadvantages of ATRP
Advantages
ATRP enables the polymerization of a wide variety of monomers with different chemical functionalities, proving to be more tolerant of these functionalities than ionic polymerizations. It provides increased control of molecular weight, molecular architecture and polymer composition while maintaining a low polydispersity (1.05-1.2). The halogen remaining at the end of the polymer chain after polymerization allows for facile post-polymerization chain-end modification into different reactive functional groups. The use of multi-functional initiators facilitates the synthesis of lower-arm star polymers and telechelic polymers. External visible light stimulation ATRP has a high responding speed and excellent functional group tolerance.
Disadvantages
The most significant drawback of ATRP is the high concentration of catalyst required for the reaction. This catalyst standardly consists of a copper halide and an amine-based ligand. The removal of the copper from the polymer after polymerization is often tedious and expensive, limiting ATRP's use in the commercial sector. However, researchers are currently developing methods which would limit the necessary catalyst concentration to the ppm level. ATRP is also a traditionally air-sensitive reaction, normally requiring freeze-pump-thaw cycles. However, techniques such as activators generated by electron transfer (AGET) ATRP provide potential alternatives which are not air-sensitive. A final disadvantage is the difficulty of conducting ATRP in aqueous media.
Different ATRP methods
Activator regeneration ATRP methods
In a normal ATRP, the concentration of radicals is determined by the KATRP value, concentration of dormant species, and the [CuI]/[CuII] ratio. In principle, the total amount of Cu catalyst should not influence polymerization kinetics. However, the loss of chain end functionality slowly but irreversibly converts CuI to CuII. Thus initial [CuI]/[I] ratios are typically 0.1 to 1. When very low concentrations of catalysts are used, usually at the ppm level, activator regeneration processes are generally required to compensate the loss of CEF and regenerate a sufficient amount of CuI to continue the polymerization. Several activator regeneration ATRP methods were developed, namely ICAR ATRP, ARGET ATRP, SARA ATRP, eATRP, and photoinduced ATRP. The activator regeneration process is introduced to compensate the loss of chain end functionality, thus the cumulative amount of activator regeneration should roughly equal the total amount of the loss of chain end functionality.
ICAR ATRP
Initiators for continuous activator regeneration (ICAR) is a technique that uses conventional radical initiators to continuously regenerate the activator, lowering its required concentration from thousands of ppm to <100 ppm; making it an industrially relevant technique.
ARGET ATRP
Activators regenerated by electron transfer (ARGET) employs non-radical forming reducing agents for regeneration of CuI. A good reducing agent (e.g. hydrazine, phenols, sugars, ascorbic acid) should only react with CuII and not with radicals or other reagents in the reaction mixture.
SARA ATRP
A typical SARA ATRP employs Cu0 as both supplemental activator and reducing agent (SARA). Cu0 can activate alkyl halide directly but slowly. Cu0 can also reduce CuII to CuI. Both processes help to regenerate CuI activator. Other zerovalent metals, such as Mg, Zn, and Fe, have also been employed for Cu-based SARA ATRP.
eATRP
In eATRP the activator CuI is regenerated via electrochemical process. The development of eATRP enables precise control of the reduction process and external regulation of the polymerization. In an eATRP process, the redox reaction involves two electrodes. The CuII species is reduced to CuI at the cathode. The anode compartment is typically separated from the polymerization environment by a glass frit and a conductive gel. Alternatively, a sacrificial aluminum counter electrode can be used, which is directly immersed in the reaction mixture.
Photoinduced ATRP
The direct photoreduction of transition metal catalysts in ATRP and/or photo-assisted activation of alkyl halides is particularly interesting, because such a procedure allows ATRP to be performed with ppm levels of catalysts without any other additives.
Other ATRP methods
Reverse ATRP
In reverse ATRP, the catalyst is added in its higher oxidation state. Chains are activated by conventional radical initiators (e.g. AIBN) and deactivated by the transition metal. The source of transferable halogen is the copper salt, so this must be present in concentrations comparable to the transition metal.
SR&NI ATRP
A mixture of radical initiator and active (lower oxidation state) catalyst allows for the creation of block copolymers (contaminated with homopolymer) which is impossible using standard reverse ATRP. This is called SR&NI (simultaneous reverse and normal initiation ATRP).
AGET ATRP
Activators generated by electron transfer uses a reducing agent unable to initiate new chains (instead of organic radicals) as regenerator for the low-valent metal. Examples are metallic copper, tin(II), ascorbic acid, or triethylamine. It allows for lower concentrations of transition metals, and may also be possible in aqueous or dispersed media.
Hybrid and bimetallic systems
This technique uses a variety of different metals/oxidation states, possibly on solid supports, to act as activators/deactivators, possibly with reduced toxicity or sensitivity. Iron salts can, for example, efficiently activate alkyl halides but requires an efficient Cu(II) deactivator which can be present in much lower concentrations (3–5 mol%)
Metal-free ATRP
Trace metal catalyst remaining in the final product has limited the application of ATRP in biomedical and electronic fields. In 2014, Craig Hawker and coworkers developed a new catalysis system involving photoredox reaction of 10-phenothiazine. The metal-free ATRP has been demonstrated to be capable of controlled polymerization of methacrylates. This technique was later expanded to polymerization of acrylonitrile by Matyjaszewski et al.
Mechano/sono-ATRP
Mechano/sono-ATRP uses mechanical forces, typically ultrasonic agitation, as an external stimulus to induce the (re)generation of activators in ATRP. Esser-Kahn et al. demonstrated the first example of mechanoATRP, using the piezoelectricity of barium titanate to reduce Cu(II) species. Matyjaszewski et al. later improved the technique by using nanometer-sized and/or surface-functionalized barium titanate or zinc oxide particles, achieving superior rate and control of polymerization, as well as temporal control, with ppm levels of copper catalyst. In addition to piezoelectric particles, water and carbonates were found to mediate mechano/sono-ATRP. Mechanochemically homolyzed water molecules undergo radical addition to monomers, which in turn reduces Cu(II) species. Mechanically unstable Cu(II)–carbonate complexes form in the presence of insoluble carbonates and oxidize dimethyl sulfoxide, the solvent, generating Cu(I) species and carbon dioxide.
Biocatalytic ATRP
Metalloenzymes have been used for the first time as ATRP catalysts, in parallel and independently, by the research teams of Fabio Di Lena and Nico Bruns. This pioneering work has paved the way to the emerging field of biocatalytic reversible-deactivation radical polymerization.
Polymers synthesized through ATRP
Polystyrene
Poly (methyl methacrylate)
Polyacrylamide
See also
Heteropolymer
Radical (chemistry)
Reversible addition−fragmentation chain-transfer polymerization
Nitroxide mediated radical polymerization
External links
About ATRP - Matyjaszewski Polymer Group
References
Polymerization reactions | Atom transfer radical polymerization | Chemistry,Materials_science | 3,527 |
40,036,842 | https://en.wikipedia.org/wiki/Hart%20Energy | Hart Energy, based in Houston, publishes online newspapers and magazines covering the petroleum industry and provides related research and consulting services.
History
The company was founded in Denver in 1973. Phillips International acquired the company in 1991 and sold it in 2000 for over $100 million. In March 2004, the company was acquired by management and changed its name to Hart Energy Publishing, LP.
In October 2010, Hart Energy acquired Rextag Strategies Mapping & Data Services.
On May 2, 2013, Hart Energy acquired Subsea Engineering News.
Hart Energy features an annual listing of influential women in energy.
References
External links
Rextag - Hart Energy Mapping & Data Services
Companies based in Houston
Petroleum in Texas
Petroleum industry
Publishing companies established in 1973
Publishing companies of the United States | Hart Energy | Chemistry | 148 |
40,370,137 | https://en.wikipedia.org/wiki/International%20Oil%20and%20Gas%20University | Yagshygeldi Kakayev International Oil and Gas University () is a university located in Ashgabat, the main university of the Turkmenistan oil and gas community. It was founded on May 25, 2012 as the Turkmen State Institute of Oil and Gas. On August 10, 2013, it became an international university. A branch of the University operate in Balkanabat.
History
The university was created to improve existing work on the diversification of exports of Turkmen minerals to the world market and to implement high-quality development programs for the oil and gas industry. The President of Turkmenistan signed decree № PP-6081 establishing the Turkmen State Institute of Oil and Gas, which was placed under the supervision of the Ministry of Education of Turkmenistan.
On August 10, 2013, "in order to radically improve the training of highly qualified specialists for the oil and gas industry", it was renamed the International University of Oil and Gas.
On February 12, 2019, by the Resolution of the Assembly of Turkmenistan, the International University of Oil and Gas was named after the political and public figure Yagshygeldi Kakayev.
As of June 2022, the school’s rector and administrative leader is Atamanov Bayrammyrat Yaylymovich.
Education
The institute offers about twenty specialties in areas including geology, exploration and mining, chemical engineering, computer technology, construction and architecture, manufacturing machinery and equipment, energy, economics and management in industry, and management.
The institute has seven faculties and 27 departments. Courses are taught by some 250 teachers, including 6 doctors of sciences (5 of them professors) and 33 candidates of sciences (14 of them professors).
Faculties
Geology
Exploration and development of mineral resources
Chemical Engineering
Computer Technology
Engineering and Architecture
Technological machinery and equipment
Energy
Economics and management in the industry
Management
Campus
The campus was built in the southern part of Ashgabat, in an area being developed as a new business and cultural center of the capital of Turkmenistan. The 17-storey office building was built by the Turkish company «Renaissance»; the project started in 2010. The opening ceremony took place on September 1, 2012, with the participation of President of Turkmenistan Gurbanguly Berdimuhamedow.
The building symbolically resembles an oil rig. The university complex covers an area of 30 hectares and consists of a main 18-storey building and five academic buildings; its 86 classrooms can accommodate up to 3,000 students at a time. The complex houses assembly and meeting rooms, a museum, an archive, a library (with 250 seats) whose reading rooms are equipped with multimedia equipment, a Center for Information Technology, a cafe, a clinic, and grocery and department stores. Classrooms and laboratories are equipped with modern equipment.
The university museum maintains an archival fund documenting oil and gas production, the development of the oil and gas industry, and the national economy of Turkmenistan.
In 2012, the building was recognized as the best building of the CIS according to the International Union of Architects Association of the CIS.
Dormitories
Six dormitories were constructed, one for each faculty, each designed for 230 residents. Rooms are double-occupancy, and each kitchen is equipped with household appliances.
Sport complex
The university operates an indoor sports complex with a boxing ring, gym and swimming pool. The multi-purpose sports hall has courts for football, basketball, volleyball, tennis and other sports, and a separate gymnasium. There are showers. Classes are also held in the open air on an outdoor sports field with natural grass.
References
Buildings and structures in Ashgabat
Universities in Turkmenistan
Universities and colleges established in 2012
Petroleum engineering schools
2012 establishments in Turkmenistan | International Oil and Gas University | Engineering | 715 |
4,646,870 | https://en.wikipedia.org/wiki/Spartan%20%28chemistry%20software%29 | Spartan is a molecular modelling and computational chemistry application from Wavefunction. It contains code for molecular mechanics, semi-empirical methods, ab initio models, density functional models, post-Hartree–Fock models, and thermochemical recipes including G3(MP2) and T1. Quantum chemistry calculations in Spartan are powered by Q-Chem.
Primary functions are to supply information about structures, relative stabilities and other properties of isolated molecules. Molecular mechanics calculations on complex molecules are common in the chemical community. Quantum chemical calculations, including Hartree–Fock method molecular orbital calculations, but especially calculations that include electronic correlation, are more time-consuming in comparison.
Quantum chemical calculations are also called upon to furnish information about mechanisms and product distributions of chemical reactions, either directly by calculations on transition states, or based on Hammond's postulate, by modeling the steric and electronic demands of the reactants. Quantitative calculations, leading directly to information about the geometries of transition states, and about reaction mechanisms in general, are increasingly common, while qualitative models are still needed for systems that are too large to be subjected to more rigorous treatments. Quantum chemical calculations can supply information to complement existing experimental data or replace it altogether, for example, atomic charges for quantitative structure-activity relationship (QSAR) analyses, and intermolecular potentials for molecular mechanics and molecular dynamics calculations.
Spartan applies computational chemistry methods (theoretical models) to many standard tasks that provide calculated data applicable to the determination of molecular shape conformation, structure (equilibrium and transition state geometry), NMR, IR, Raman, and UV-visible spectra, molecular (and atomic) properties, reactivity, and selectivity.
Computational abilities
The software provides molecular mechanics calculations with the Merck Molecular Force Field (MMFF, used for its validation test suite), MMFF with extensions, and SYBYL force fields, as well as semi-empirical calculations with MNDO/MNDO(D), Austin Model 1 (AM1), PM3, Recife Model 1 (RM1), and PM6.
Hartree–Fock, self-consistent field (SCF) methods, available with implicit solvent (SM8).
Restricted, unrestricted, and restricted open-shell Hartree–Fock
Density functional theory (DFT) methods, available with implicit solvent (SM8).
Standard functionals: BP, BLYP, B3LYP, EDF1, EDF2, M06, ωB97X-D
Exchange functionals: HF, Slater-Dirac, Becke88, Gill96, GG99, B(EDF1), PW91
Correlation functionals: VWN, LYP, PW91, P86, PZ81, PBE.
Combination or hybrid functionals: B3PW91, B3LYP, B3LYP5, EDF1, EDF2, BMK
Truhlar group functionals: M05, M05-2X, M06, M06-L, M06-2X, M06-HF
Head-Gordon group functionals: ωB97, ωB97X, ωB97X-D
Coupled cluster methods.
CCSD, CCSD(T), CCSD(2), OD, OD(T), OD(2), QCCD, VOD, VOD(2), VQCCD
Møller–Plesset methods.
MP2, MP3, MP4, RI-MP2
Excited state methods.
Time-dependent density functional theory (TDDFT)
Configuration interaction: CIS, CIS(D), QCIS(D), quadratic configuration interaction (QCISD(T)), RI-CIS(D)
Quantum chemistry composite methods, thermochemical recipes.
T1, G2, G3, G3(MP2)
Tasks performed
Available computational models provide molecular, thermodynamic, QSAR, atomic, graphical, and spectral properties. A calculation dialogue provides access to the following computational tasks:
Energy – For a given geometry, provides energy and associated properties of a molecule or system. If quantum chemical models are employed, the wave function is calculated.
Equilibrium molecular geometry - Locates the nearest local minimum and provides energy and associated properties.
Transition state geometry - Locates the nearest first-order saddle point (a maximum in a single dimension and minima in all others) and provides energy and associated properties.
Equilibrium conformer – Locates lowest-energy conformation. Often performed before calculating structure using a quantum chemical model.
Conformer distribution – Obtains a selection of low-energy conformers. Commonly used to identify the shapes a specific molecule is likely to adopt and to determine a Boltzmann distribution for calculating average molecular properties (see the sketch after this list).
Conformer library – Locates lowest-energy conformer and associates this with a set of conformers spanning all shapes accessible to the molecule without regard to energy. Used to build libraries for similarity analysis.
Energy profile – Steps a molecule or system through a user defined coordinate set, providing equilibrium geometries for each step (subject to user-specified constraints).
Similarity analysis – quantifies the likeness of molecules (and optionally their conformers) based on either structure or chemical function (Hydrogen bond acceptors–donors, positive–negative ionizables, hydrophobes, aromatics). Quantifies likeness of a molecule (and optionally its conformers) to a pharmacophore.
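As referenced under "Conformer distribution" above, Boltzmann populations follow w_i = exp(−E_i/RT) / Σ_j exp(−E_j/RT). A minimal sketch, with hypothetical relative conformer energies:

# Minimal sketch: Boltzmann weights for conformers from relative energies.
# Energies are hypothetical, in kJ/mol relative to the lowest-energy conformer.
import math

R, T = 8.314e-3, 298.15               # kJ/(mol*K) and K
energies = [0.0, 1.2, 3.5]            # hypothetical conformer energies

factors = [math.exp(-e / (R * T)) for e in energies]
Z = sum(factors)
weights = [f / Z for f in factors]
print([round(w, 3) for w in weights]) # fractional population of each conformer
# A Boltzmann-averaged property is then sum(w * p for w, p in zip(weights, props)).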
Graphical user interface
The software contains an integrated graphical user interface. Touch screen operations are supported for Windows 7 and 8 devices. Construction of molecules in 3D is facilitated with molecule builders (organic, inorganic, peptide, nucleotide, and substituent builders are included). 2D construction is supported for organic molecules with a 2D sketch palette. The Windows version can access ChemDraw; ChemDraw versions 9.0 or later may also be used for molecule building in 2D. A calculations dialogue is used for specification of task and computational method. Data from calculations are displayed in dialogues or as text output. Additional data analysis, including linear regression, is possible from an internal spreadsheet.
Graphical models
Graphical models, especially molecular orbitals, electron density, and electrostatic potential maps, are a routine means of molecular visualization in chemistry education.
Surfaces:
Molecular orbitals (highest occupied, lowest unoccupied, and others)
Electron density – The density, ρ(r), is a function of the coordinates r, defined such that ρ(r)dr is the number of electrons inside a small volume dr. This is what is measured in an X-ray diffraction experiment. The density may be portrayed in terms of an isosurface (isodensity surface) with the size and shape of the surface being given by the value (or percentage of enclosure) of the electron density.
Spin density – The density, ρspin(r), is defined as the difference in electron density formed by electrons of α spin, ρα(r), and the electron density formed by electrons of β spin, ρβ(r). For closed-shell molecules (in which all electrons are paired), the spin density is zero everywhere. For open-shell molecules (in which one or more electrons are unpaired), the spin density indicates the distribution of unpaired electrons. Spin density is an indicator of reactivity of radicals.
Van der Waals radius (surface)
Solvent accessible surface area
Electrostatic potential – The potential, εp, is defined as the energy of interaction of a positive point charge located at p with the nuclei and electrons of a molecule. A surface for which the electrostatic potential is negative (a negative potential surface) delineates regions in a molecule which are subject to electrophilic attack.
Composite surfaces (maps):
Electrostatic potential map (electrophilic indicator) – The most commonly employed property map is the electrostatic potential map. This gives the potential at locations on a particular surface, most commonly a surface of electron density corresponding to overall molecular size.
Local ionization potential map – Defined as the sum over orbital electron densities, ρi(r), times absolute orbital energies, |εi|, divided by the total electron density, ρ(r). The local ionization potential reflects the relative ease of electron removal ("ionization") at any location around a molecule. For example, a surface of "low" local ionization potential for sulfur tetrafluoride demarks the areas which are most easily ionized (a numerical sketch follows this list).
LUMO map (nucleophilic indicator) – Maps of molecular orbitals may also lead to graphical indicators. For example, the LUMO map, wherein the (absolute value) of the lowest-unoccupied molecular orbital (the LUMO) is mapped onto a size surface (again, most commonly the electron density), providing an indication of nucleophilic reactivity.
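As the sketch referenced above, here is a toy numerical evaluation of the local ionization potential I(r) = Σi ρi(r)·|εi| / ρ(r); the orbital densities, energies, and grid are invented values used only to show the weighted average.

# Minimal sketch: local ionization potential I(r) = sum_i rho_i(r)*|e_i| / rho(r),
# evaluated on a toy grid of three points. All numbers are hypothetical.
orbital_energies = [-0.5, -0.3]          # orbital energies in hartree
orbital_density = [
    [0.10, 0.40, 0.10],                  # rho_1 at the three grid points
    [0.05, 0.20, 0.50],                  # rho_2 at the same points
]

total = [sum(col) for col in zip(*orbital_density)]   # total density rho(r)
local_ip = [
    sum(rho[j] * abs(e) for rho, e in zip(orbital_density, orbital_energies)) / total[j]
    for j in range(len(total))
]
print(local_ip)   # lower values mark points that are more easily ionized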
Spectral calculations
Available spectra data and plots for:
Infrared spectroscopy (IR) spectra
Fourier transform spectroscopy (FT-IR)
Raman spectra
Nuclear magnetic resonance (NMR) spectra
1H chemical shifts and coupling constants (empirical)
13C chemical shifts, Boltzmann averaged shifts, and 13C DEPT spectra
2D H vs H spectra
COSY plots
2D C vs H spectra
Heteronuclear single-quantum correlation spectroscopy (HSQC) spectra
HMBC spectra
UV/vis spectra
Experimental spectra may be imported for comparison with calculated spectra: IR and UV/vis spectra in Joint Committee on Atomic and Molecular Physical Data (JCAMP) (.dx) format and NMR spectra in Chemical Markup Language (.cml) format. Access to public domain spectral databases is available for IR, NMR, and UV/vis spectra.
Databases
Spartan accesses several external databases.
Quantum chemical calculations databases:
Spartan Spectra & Properties Database (SSPD) – a set of about 252,000 molecules, with structures, energies, NMR and IR spectra, and wave functions calculated using the EDF2 density functional theory with the 6-31G* basis set.
Spartan Molecular Database (SMD) – a set of about 100,000 molecules calculated with the following models:
Hartree–Fock with 3-21G, 6-31G*, and 6-311+G** basis sets
B3LYP density functional with 6-31G* and 6-311+G** basis sets
EDF1 density functional with 6-31G* basis set
MP2 with 6-31G* and 6-311+G** basis sets
G3(MP2)
T1
Experimental databases:
NMRShiftDB – an open-source database of experimental 1H and 13C chemical shifts.
Cambridge Structural Database (CSD) - a large repository of small molecule organic and inorganic experimental crystal structures of about 600,000 entries.
NIST database of experimental IR and UV/vis spectra.
Major release history
1991 Spartan version 1 Unix
1993 Spartan version 2 Unix
1994 Mac Spartan Macintosh
1995 Spartan version 3 Unix
1995 PC Spartan Windows
1996 Mac Spartan Plus Macintosh
1997 Spartan version 4 Unix
1997 PC Spartan Plus Windows
1999 Spartan version 5 Unix
1999 PC Spartan Pro Windows
2000 Mac Spartan Pro Macintosh
2002 Spartan'02 Unix, Linux, Windows, Mac
Windows, Macintosh, Linux versions
2004 Spartan'04
2006 Spartan'06
2008 Spartan'08
2010 Spartan'10
2013 Spartan'14
2016 Spartan'16
2018 Spartan'18
2021 Spartan'20
See also
Q-Chem quantum chemistry software
Molecular design software
Molecule editor
Comparison of software for molecular mechanics modeling
List of software for Monte Carlo molecular modeling
Quantum chemistry composite methods
List of quantum chemistry and solid state physics software
References
External links
, Wavefunction, Inc.
Molecular modelling software
Computational chemistry software
Electronic structure methods
Monte Carlo molecular modelling software | Spartan (chemistry software) | Physics,Chemistry | 2,458 |
28,779,877 | https://en.wikipedia.org/wiki/Atmospheric%20optics | Atmospheric optics is "the study of the optical characteristics of the atmosphere or products of atmospheric processes .... [including] temporal and spatial resolutions beyond those discernible with the naked eye". Meteorological optics is "that part of atmospheric optics concerned with the study of patterns observable with the naked eye". Nevertheless, the two terms are sometimes used interchangeably.
Meteorological optical phenomena, as described in this article, are concerned with how the optical properties of Earth's atmosphere cause a wide range of optical phenomena and visual perception phenomena.
Examples of meteorological phenomena include:
The blue color of the sky. This is from Rayleigh scattering, which sends more of the higher-frequency, shorter-wavelength (blue) sunlight into the eye of an observer than light of other frequencies/wavelengths.
The reddish color of the Sun when it is observed through a thick atmosphere, as during a sunrise or sunset. This is because long-wavelength (red) light is scattered less than blue light. The red light reaches the observer's eye, whereas the blue light is scattered out of the line of sight.
Other colours in the sky, such as glowing skies at dusk and dawn. These are from additional particulate matter in the sky that scatter different colors at different angles.
Halos, afterglows, coronas, polar stratospheric clouds, and sun dogs. These are from scattering, or refraction, by ice crystals and from other particles in the atmosphere. They depend on different particle sizes and geometries.
Mirages. These are optical phenomena in which light rays are bent due to thermal variations in the refractive index of air, producing displaced or heavily distorted images of distant objects. Other optical phenomena associated with this include the Novaya Zemlya effect, in which the Sun has a distorted shape and rises earlier or sets later than predicted. A spectacular form of refraction, called the Fata Morgana, occurs with a temperature inversion, in which objects on the horizon or even beyond the horizon (e.g. islands, cliffs, ships, and icebergs) appear elongated and elevated, like "fairy tale castles".
Rainbows. These result from a combination of internal reflection and dispersive refraction of light in raindrops. Because rainbows are seen on the opposite side of the sky from the Sun, rainbows are more visible the closer the Sun is to the horizon. For example, if the Sun is overhead, any possible rainbow appears near an observer's feet, making it hard to see, and involves very few raindrops between the observer's eyes and the ground, making any rainbow very sparse.
Other phenomena that are remarkable because they are forms of visual illusions include:
Crepuscular rays,
Anticrepuscular rays, and
The apparent size of celestial objects such as the Sun and Moon.
History
A book on meteorological optics was published in the sixteenth century, but there have been numerous books on the subject since about 1950. The topic was popularised by the wide circulation of a book by Marcel Minnaert, Light and Color in the Open Air, in 1954.
Sun and Moon size
In the Book of Optics (1011–22 AD), Ibn al-Haytham argued that vision occurs in the brain, and that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. Arguing against Ptolemy's refraction theory for why people perceive the Sun and Moon larger at the horizon than when they are higher in the sky, he redefined the problem in terms of perceived, rather than real, enlargement. He said that judging the distance of an object depends on there being an uninterrupted sequence of intervening bodies between the object and the observer. Critically, Ibn al-Haytham said that judging the size of an object depends on its judged distance: an object that appears near appears smaller than an object having the same image size on the retina that appears far. With the overhead Moon, there is no uninterrupted sequence of intervening bodies. Hence it appears far and small. With a horizon Moon, there is an uninterrupted sequence of intervening bodies: all the objects between the observer and the horizon, so the Moon appears far and large. Through works by Roger Bacon, John Pecham, and Witelo based on Ibn al-Haytham's explanation, the Moon illusion gradually came to be accepted as a psychological phenomenon, with Ptolemy's theory being rejected in the 17th century.
For over 100 years, research on the Moon illusion has been conducted by vision scientists who invariably have been psychologists specializing in human perception. After reviewing the many different explanations in their 2002 book The Mystery of the Moon Illusion, Ross and Plug concluded "No single theory has emerged victorious".
Sky coloration
The color of light from the sky is a result of Rayleigh scattering of sunlight, which results in a perceived blue color. On a sunny day, Rayleigh scattering gives the sky a blue gradient, darkest around the zenith and brightest near the horizon. Light rays coming from the zenith take the shortest possible path through the air mass, yielding less scattering. Light rays coming from the horizon take the longest possible path through the air, yielding more scattering.
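The inverse-fourth-power wavelength dependence of Rayleigh scattering makes the blue/red contrast easy to quantify. A minimal sketch in Python (the 450 nm and 650 nm wavelengths are illustrative choices, not values from the text):

# Rayleigh scattering intensity scales as 1/wavelength^4.
# Compare scattering of blue (~450 nm) and red (~650 nm) light.
blue_nm, red_nm = 450.0, 650.0
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")
# ~4.4x: short wavelengths dominate the scattered skylight (blue sky),
# while the direct beam from a low Sun is left reddened.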
The blueness is weakest at the horizon because the blue light coming from great distances is also preferentially scattered. This results in a red shift of the distant light sources that is compensated by the blue hue of the scattered light in the line of sight. In other words, the red light scatters also; if it does so at a point a great distance from the observer, it has a much higher chance of reaching the observer than blue light. At distances nearing infinity, the scattered light is therefore white. Distant clouds or snowy mountaintops will seem yellow for that reason; that effect is not obvious on clear days, but is very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight.
The scattering due to molecule-sized particles (as in air) is greater in the forward and backward directions than it is in the lateral direction. Individual water droplets exposed to white light will create a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the set of colored rings and create a washed-out white color. Dust from the Sahara moving around the southern periphery of the subtropical ridge enters the southeastern United States during the summer, which changes the sky from a blue to a white appearance and leads to an increase in red sunsets. Its presence negatively affects air quality during the summer since it adds to the count of airborne particulates.
The sky can turn a multitude of colors such as red, orange, pink and yellow (especially near sunset or sunrise) and black at night. Scattering effects also partially polarize light from the sky, most pronounced at an angle of 90° from the Sun.
Sky luminance distribution models have been recommended by the International Commission on Illumination (CIE) for the design of daylighting schemes. Recent developments relate to “all sky models” for modelling sky luminance under weather conditions ranging from clear sky to overcast.
Cloud coloration
The color of a cloud, as seen from the Earth, tells much about what is going on inside the cloud. Dense deep tropospheric clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top. Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from a very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. Thin clouds may look white or appear to have acquired the color of their environment or background. High tropospheric and non-tropospheric clouds appear mostly white if composed entirely of ice crystals and/or supercooled water droplets.
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets, which may combine to form droplets large enough to fall as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, it may be that a percentage of the light which enters the cloud is not reflected back out before it is absorbed. A simple example of this is being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.
Other colors occur naturally in clouds. Bluish-grey is the result of light scattering within the cloud. In the visible spectrum, blue and green are at the short end of light's visible wavelengths, while red and yellow are at the long end. The short rays are more easily scattered by water droplets, and the long rays are more likely to be absorbed. The bluish color is evidence that such scattering is being produced by rain-sized droplets in the cloud. A cumulonimbus cloud emitting green is a sign of a severe thunderstorm, capable of heavy rain, hail, strong winds and possible tornadoes. The exact cause of green thunderstorms is still unknown, but it may be due to reddened sunlight passing through very optically thick clouds. Yellowish clouds may occur in the late spring through early fall months during forest fire season. The yellow color is due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are sometimes seen in urban areas with high air pollution levels.
Red, orange and pink clouds occur almost entirely at sunrise and sunset and are the result of the scattering of sunlight by the atmosphere. When the angle between the Sun and the horizon is less than 10 degrees, as it is just after sunrise or just prior to sunset, sunlight becomes too red, because scattering removes the shorter wavelengths, for any colors other than those with a reddish hue to be seen. The clouds do not become that color; they are reflecting long and unscattered rays of sunlight, which are predominant at those hours. The effect is much as if a person were to shine a red spotlight on a white sheet. In combination with large, mature thunderheads this can produce blood-red clouds. Clouds look darker in the near-infrared because water absorbs solar radiation at those wavelengths.
Halos
A halo (from Greek ἅλως; also known as a nimbus, icebow or gloriole) is an optical phenomenon produced by the interaction of light from the Sun or Moon with ice crystals in the atmosphere, resulting in colored or white arcs, rings or spots in the sky. Many halos are positioned near the Sun or Moon, but others are elsewhere and even in the opposite part of the sky. They can also form around artificial lights in very cold weather when ice crystals called diamond dust are floating in the nearby air.
There are many types of ice halos. They are produced by the ice crystals in cirrus or cirrostratus clouds high in the upper troposphere, at an altitude of 5 to 10 km (3 to 6 mi), or, during very cold weather, by ice crystals called diamond dust drifting in the air at low levels. The particular shape and orientation of the crystals are responsible for the type of halo observed. Light is reflected and refracted by the ice crystals and may split into colors because of dispersion. The crystals behave like prisms and mirrors, refracting and reflecting sunlight between their faces, sending shafts of light in particular directions. For circular halos, the preferred angular distances are 22 and 46 degrees from the Sun or Moon. Atmospheric phenomena such as halos have been used as part of weather lore as an empirical means of weather forecasting, with their presence indicating the approach of a warm front and its associated rain.
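The 22° and 46° radii follow from the minimum deviation of light through the two effective prism angles of a hexagonal ice crystal: 60° between alternate side faces and 90° between a side face and a basal face. A minimal sketch, assuming the standard refractive index of ice, n ≈ 1.31:

import math

def min_deviation(prism_angle_deg, n=1.31):
    # Minimum deviation through a prism: D = 2*arcsin(n*sin(A/2)) - A
    a = math.radians(prism_angle_deg)
    return math.degrees(2 * math.asin(n * math.sin(a / 2)) - a)

print(f"60 deg prism -> halo radius ~{min_deviation(60):.0f} deg")  # ~22
print(f"90 deg prism -> halo radius ~{min_deviation(90):.0f} deg")  # ~46

Rays passing near minimum deviation pile up there, which is why each halo has a sharp inner edge at roughly these radii.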
Sun dogs
Sun dogs are a common type of halo, with the appearance of two subtly-colored bright spots to the left and right of the Sun, at a distance of about 22° and at the same elevation above the horizon. They are commonly caused by plate-shaped hexagonal ice crystals. These crystals tend to become horizontally aligned as they sink through the air, causing them to refract the sunlight to the left and right, resulting in the two sun dogs.
As the Sun rises higher, the rays passing through the crystals are increasingly skewed from the horizontal plane. Their angle of deviation increases and the sundogs move further from the Sun. However, they always stay at the same elevation as the Sun. Sun dogs are red-colored at the side nearest the Sun. Farther out the colors grade to blue or violet. However, the colors overlap considerably and so are muted, rarely pure or saturated. The colors of the sun dog finally merge into the white of the parhelic circle (if the latter is visible).
It is theoretically possible to predict the forms of sun dogs as they would be seen on other planets and moons. Mars might have sun dogs formed by both water ice and CO2 ice. On the giant gas planets (Jupiter, Saturn, Uranus and Neptune) the clouds of ammonia, methane, and other substances contain crystals that can produce halos with four or more sun dogs.
Glory
A common optical phenomenon involving water droplets is the glory. A glory is an optical phenomenon, appearing much like an iconic Saint's halo about the head of the observer, produced by light backscattered (a combination of diffraction, reflection and refraction) towards its source by a cloud of uniformly sized water droplets. A glory has multiple colored rings, with red colors on the outermost ring and blue/violet colors on the innermost ring.
The angular size is much smaller than that of a rainbow, ranging between 5° and 20°, depending on the size of the droplets. The glory can only be seen when the observer is directly between the Sun and the cloud of refracting water droplets. Hence, it is commonly observed from the air, with the glory surrounding the airplane's shadow on clouds (often called the glory of the pilot). Glories can also be seen from mountains and tall buildings, when there are clouds or fog below the level of the observer, or on days with ground fog. The glory is related to the optical phenomenon anthelion.
Rainbow
A rainbow is a narrow, multicoloured semicircular arc due to dispersion of white light by a multitude of drops of water, usually in the form of rain, when they are illuminated by sunlight. Hence, when conditions are right, a rainbow always appears in the section of sky directly opposite the Sun. For an observer on the ground, the amount of the arc that is visible depends on the height of the sun above the horizon. It is a full semicircle with an angular radius of 42° when the sun is at the horizon. But as the sun rises in the sky, the arc grows smaller and ceases to be visible when the sun is more than 42° above the horizon. To see more than a semicircular bow, an observer would have to be able to look down on the drops, say from an airplane or a mountaintop. Rainbows are most common during afternoon rain showers in summer.
A single reflection off the backs of an array of raindrops produces a rainbow with an angular size that ranges from 40° to 42°, with red on the outside and blue/violet on the inside. This is known as the primary bow. A fainter secondary bow is often visible some 10° outside the primary bow. It is due to two internal reflections within a drop, which produce a bow with an angular size of 50.5° to 54°. The resulting secondary arc is some 3° wide and the colours are reversed, with blue/violet on the outside. The region between a double rainbow is often noticeably darker than the sky within the primary bow and beyond the secondary bow; it is known as Alexander's dark band. The reason for this apparent reduction in sky brightness is that, while light from the sky enclosed within the primary bow and light beyond the secondary bow both come from droplet reflection, there is no mechanism for the region between the bows to reflect light in the direction of the observer. Generally speaking, larger droplets make for brighter bows.
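The primary and secondary angles can be recovered from ray optics: a ray entering a spherical drop at incidence i, refracting to r by Snell's law, is deviated in total by D = 2(i − r) + k(180° − 2r) after k internal reflections, and each bow sits at the incidence angle that minimizes D, where outgoing rays concentrate. A minimal numeric sketch, assuming n ≈ 1.333 for water:

import math

def deviation(i_deg, k, n=1.333):
    # Total deviation (degrees) after k internal reflections in a drop
    i = math.radians(i_deg)
    r = math.asin(math.sin(i) / n)  # Snell's law at entry
    return math.degrees(2 * (i - r)) + k * (180.0 - math.degrees(2 * r))

for k in (1, 2):
    d_min = min(deviation(i / 10.0, k) for i in range(1, 900))
    # The bow is seen 180 - D from the antisolar point's surroundings
    # for k=1, and D - 180 for k=2
    radius = 180.0 - d_min if k == 1 else d_min - 180.0
    print(f"k={k}: bow radius ~{radius:.0f} deg")  # ~42 and ~51

With wavelength-dependent n, the same calculation spreads these radii into the colored bands and reverses the color order of the secondary bow.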
A rainbow spans a continuous spectrum of colors; the distinct bands (including the number of bands) are an artifact of human color vision, and no banding of any type is seen in a black-and-white photograph of a rainbow (only a smooth gradation of intensity to a maximum, then fading to a minimum at the other side of the arc). For colors seen by a normal human eye, the most commonly cited and remembered sequence, in English, is Isaac Newton's sevenfold red, orange, yellow, green, blue, indigo and violet (popularly memorized by mnemonics like Roy G. Biv).
Mirage
A mirage is a naturally occurring optical phenomenon in which light rays are bent to produce a displaced image of distant objects or the sky. The word comes to English via the French mirage, from the Latin mirari, meaning "to look at, to wonder at". This is the same root as for "mirror" and "to admire".
In contrast to a hallucination, a mirage is a real optical phenomenon which can be captured on camera, since light rays actually are refracted to form the false image at the observer's location. What the image appears to represent, however, is determined by the interpretive faculties of the human mind. For example, inferior images on land are very easily mistaken for the reflections from a small body of water.
Mirages can be categorized as "inferior" (meaning lower), "superior" (meaning higher) and "Fata Morgana", one kind of superior mirage consisting of a series of unusually elaborate, vertically stacked images, which form one rapidly changing mirage.
Green flash
Green flashes and green rays are optical phenomena that occur shortly after sunset or before sunrise, when a green spot is visible, usually for no more than a second or two, above the Sun, or a green ray shoots up from the sunset point. Green flashes are actually a group of phenomena stemming from different causes, and some are more common than others. Green flashes can be observed from any altitude (even from an aircraft). They are usually seen at an unobstructed horizon, such as over the ocean, but are possible over cloud tops and mountain tops as well.
A green flash from the Moon and bright planets at the horizon, including Venus and Jupiter, can also be observed.
Fata Morgana
This optical phenomenon occurs because rays of light are strongly bent when they pass through air layers of different temperatures in a steep thermal inversion where an atmospheric duct has formed. A thermal inversion is an atmospheric condition where warmer air exists in a well-defined layer above a layer of significantly cooler air. This temperature inversion is the opposite of what is normally the case; air is usually warmer close to the surface, and cooler higher up. In calm weather, a layer of significantly warmer air can rest over colder dense air, forming an atmospheric duct which acts like a refracting lens, producing a series of both inverted and erect images.
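How strong the inversion must be can be estimated from the way air's refractive index varies with temperature and pressure: a duct forms when the vertical refractive-index gradient bends rays at least as sharply as the Earth's surface curves away, i.e. when |dn/dh| ≥ 1/R_earth. A rough order-of-magnitude sketch in Python, assuming standard sea-level values (none of these numbers are from the text):

# Estimate the inversion strength needed to form an atmospheric duct.
# Air refractivity (n - 1) is proportional to P/T, so
# dn/dh = (n - 1) * [ (1/P) dP/dh - (1/T) dT/dh ].
g, R_d = 9.81, 287.0            # gravity (m/s^2), dry-air gas constant
T, n_minus_1 = 288.0, 2.9e-4    # ~15 C air, optical refractivity at sea level
R_earth = 6.371e6               # Earth radius (m)

dlnP_dh = -g / (R_d * T)        # hydrostatic pressure term, (1/P) dP/dh
# Duct condition dn/dh = -1/R_earth, solved for the temperature gradient:
dT_dh = T * (dlnP_dh + 1.0 / (n_minus_1 * R_earth))
print(f"Critical inversion ~{dT_dh:.2f} K per metre")  # ~0.12 K/m

That is, temperature must increase with height by roughly 12 °C per 100 m within the layer, far steeper than ordinary inversions, which is why ducting mirages are rare and localized.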
A Fata Morgana is an unusual and very complex form of mirage, a form of superior mirage, which, like many other kinds of superior mirage, is seen in a narrow band right above the horizon. The name is an Italian phrase, derived from the Vulgar Latin word for "fairy" (fata) and the Arthurian sorceress Morgan le Fay, from a belief that these mirages, often seen in the Strait of Messina, were fairy castles in the air, or false land created by her witchcraft to lure sailors to their deaths. Although the term Fata Morgana is sometimes incorrectly applied to other, more common kinds of mirages, the true Fata Morgana is not the same as an ordinary superior mirage, and is certainly not the same as an inferior mirage.
Fata Morgana mirages tremendously distort the object or objects which they are based on, such that the object often appears to be very unusual, and may even be transformed in such a way that it is completely unrecognizable. A Fata Morgana can be seen on land or at sea, in polar regions or in deserts. This kind of mirage can involve almost any kind of distant object, including such things as boats, islands, and coastline.
A Fata Morgana is not only complex, but also rapidly changing. The mirage comprises several inverted (upside down) and erect (right side up) images that are stacked on top of one another. Fata Morgana mirages also show alternating compressed and stretched zones.
Novaya Zemlya effect
The Novaya Zemlya effect is a polar mirage caused by high refraction of sunlight between atmospheric thermoclines. The Novaya Zemlya effect gives the impression that the sun is rising earlier or setting later than it actually should (astronomically speaking). Depending on the meteorological situation, the effect will present the Sun as a line or a square (sometimes referred to as the "rectangular sun"), made up of flattened hourglass shapes. The mirage requires rays of sunlight to travel within an inversion layer for hundreds of kilometres, and depends on the inversion layer's temperature gradient. The sunlight must bend with the curvature of the Earth over that distance to allow an apparent elevation rise of 5 degrees for the Sun's disk to be seen.
The first person to record the phenomenon was Gerrit de Veer, a member of Willem Barentsz' ill-fated third expedition into the polar region. Novaya Zemlya, the archipelago where de Veer first observed the phenomenon, lends its name to the effect.
Crepuscular rays
Crepuscular rays are near-parallel rays of sunlight moving through the Earth's atmosphere that appear to diverge because of linear perspective. They often occur when objects such as mountain peaks or clouds partially block the Sun's rays. Various airborne compounds scatter the sunlight and make these rays visible, through diffraction, reflection, and scattering.
Crepuscular rays can also occasionally be viewed underwater, particularly in arctic areas, appearing from ice shelves or cracks in the ice. They can also be seen on days when the Sun strikes the clouds at just the right angle.
There are three primary forms of crepuscular rays:
Rays of light penetrating holes in low clouds (also called "Jacob's Ladder").
Beams of light diverging from behind a cloud.
Pale, pinkish or reddish rays that radiate from below the horizon. These are often mistaken for light pillars.
They are commonly seen near sunrise and sunset, when tall clouds such as cumulonimbus and mountains can be most effective at creating these rays.
Anticrepuscular rays
Anticrepuscular rays, while parallel in reality, are sometimes visible in the sky in the direction opposite the Sun. They appear to converge again at the distant horizon.
Atmospheric refraction
Atmospheric refraction influences the apparent position of astronomical and terrestrial objects, usually causing them to appear higher than they actually are. For this reason, navigators, astronomers, and surveyors observe positions when these effects are minimal. Sailors will only shoot a star when it is 20° or more above the horizon, astronomers try to schedule observations when an object is highest in the sky, and surveyors try to observe in the afternoon when refraction is at a minimum.
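The size of the effect is often estimated with Bennett's empirical formula, which gives the refraction in arcminutes from the apparent altitude in degrees and reproduces the well-known lift of about 34 arcminutes, roughly one solar diameter, at the horizon. A minimal sketch, assuming a standard atmosphere:

import math

def bennett_refraction_arcmin(h_deg):
    # Bennett's formula: R = cot(h + 7.31/(h + 4.4)) arcminutes,
    # with h the apparent altitude in degrees
    arg = math.radians(h_deg + 7.31 / (h_deg + 4.4))
    return 1.0 / math.tan(arg)

for h in (0, 5, 10, 45):
    print(f"altitude {h:2d} deg -> refraction ~{bennett_refraction_arcmin(h):4.1f}'")
# altitude 0 gives ~34': the Sun is geometrically below the horizon
# when its disk appears to touch it.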
Atmospheric diffraction
Atmospheric diffraction is a visual effect caused when sunlight is bent by particles suspended in the air.
See also
Alpenglow
Nacreous cloud
Noctilucent cloud
Sunset#Colors
Sunrise#Colors
References
Applied and interdisciplinary physics
Atmospheric optical phenomena
Optics
Scattering, absorption and radiative transfer (optics) | Atmospheric optics | Physics,Chemistry | 4,929 |
14,205,596 | https://en.wikipedia.org/wiki/Shaheen-III | The Shaheen-III (lit. Falcon) is a land-based medium-range ballistic missile, first test-fired by the Pakistani military on 9 March 2015.
Development began in secrecy in the early 2000s in response to India's Agni-III. The Shaheen-III was successfully tested on 9 March 2015 with a range of 2,750 km, which enables it to strike all of India and reach deep into the Middle East and parts of North Africa. The Shaheen-III, according to its program manager, the Strategic Plans Division, is "18 times faster than speed of sound and designed to reach the Indian islands of Andaman and Nicobar so that India cannot use them as "strategic bases" to establish a second strike capability."
The Shaheen program is based on solid-fuel systems, in contrast to the Ghauri program, which is primarily based on liquid fuel. With its successful launch, the Shaheen-III surpassed the range of the Shaheen-II, making it the longest-range missile launched by the Pakistani military.
The Pakistani military has not commented on its deployment, but the Shaheen-III is currently deemed operational in the strategic command of the Pakistan Army.
Overview
Development history
Development of a long-range space launch vehicle began in 1999, with the aim of producing rocket engines of substantially greater range. The Indian military had moved its strategic commands to the east, and the required range was determined by the need to be able to target the Nicobar and Andaman Islands in the eastern part of the Indian Ocean, which are "developed as strategic bases" where the "Indian military might think of putting its weapons", according to Shaheen-III's program manager, the Strategic Plans Division (SPD). With this mission, the Shaheen-III was actively pursued along with the Ghauri-III.
In 2000, SUPARCO concluded at least two design studies for its space launch vehicle. Two earlier designs were shown at IDEAS in 2002, centered on developing a space booster based on the design technologies of the Shaheen-I. Since then, the Shaheen owes its existence largely to the joint efforts of the NDC of NESCOM and the Space Research Commission.
The Shaheen-III was shrouded in top secrecy, and very little information was available to the public, most of it provided at IDEAS 2002. The majority of the effort and funding had gone to the Ghauri-III, intended to provide strike capability in the eastern region of India. In May 2000, the Ghauri-III was cancelled owing to its less advanced design and limited technological gain. Despite strong advocacy by Abdul Qadeer Khan for making the Ghauri-III program feasible, it was terminated by then-President Pervez Musharraf, who made the funding available for the Shaheen-III program, to be led by Samar Mubarakmand.
The Shaheen-III was initially proposed as a space booster for the space program, to make it possible to carry satellite payloads. In a press conference held in Lahore in 2009, Samar Mubarakmand stated that "Pakistan would launch its own satellite in April 2011." Although Dr. Mubarakmand neither confirmed nor denied the Shaheen program's existence, rumors and speculation about the program continued.
After years of speculation, the Shaheen-III was eventually revealed and tested on 9 March 2015 with a 2,750 km (1,700-mile) range. The Shaheen-III uses the WS21200 transporter erector launcher (TEL), manufactured in China by Wanshan Special Vehicle.
Testing
On 9 March 2015, the ISPR released a press statement announcing the successful test of the Shaheen-III, conducted from Pakistan's southern coast on the Indian Ocean.
Military officials from the JS HQ, along with SPD scientists and engineers, oversaw the launch of the system and witnessed the impact point in the Arabian Sea. According to reports summarized by NTI, a series of tests of the rocket engine nozzles had taken place before the eventual flight test in 2015.
On 20 January 2021, the ISPR released a press statement stating that a successful test of Shaheen-III aimed at "revalidating various design & tech parameters of weapon system" was conducted.
On 9 April 2022, the ISPR released a press statement stating that a successful test of Shaheen-III aimed at "re-validating various design and technical parameters of the weapon system" was conducted.
Analysis
Strategic prospect
Several Pakistani nuclear and military strategists have reportedly noted that the Shaheen-III has a range greater than that of any other missile system in service with Pakistan. Earlier tests of the Shaheen-III reached a maximum range of about 2,500 km, which meant it could reach all parts of India, even the northeastern and eastern frontier.
Air Marshal Shahid Latif, a retired senior commander in the Pakistan Air Force, was reported to have said: "Now, India doesn't have its safe havens any more. It's all a reaction to India, which has now gone even for tests of extra-regional missiles. It sends a [very] loud message: If you hurt us, we are going to hurt you back!".
Mansoor Ahmad, a professor of strategic studies at Quaid-i-Azam University in Islamabad, stated that Pakistan's military "is not interested in a tit-for-tat arms race with India," and speculated that developmental work may be under way to make the missile capable of delivering multiple warheads, which would make it harder to defend against. Pakistan would later test the Ababeel missile with this capability.
Peace prospect
In the view of political scientist Dr. Farrukh Saleem, the Shaheen-III seems to be a reaction to India's Integrated Guided Missile Development Program. Dr. Saleem stressed that "Pakistan seem to be aiming at competing with India and Pakistan's aims seem to revolve around the creation of a credible deterrence, and a credible deterrence is bound to strengthen strategic stability."
See also
Pakistan and its Nuclear Deterrent Program
Medium-range ballistic missile
Ababeel, a development of the Shaheen-III with an enlarged payload fairing containing a MIRV bus
Notes
References
External links
Shaheen III test fire video
Image of Shaheen 3 Missile on Launchpad
2015 in Pakistan
Embedded systems
Medium-range ballistic missiles of Pakistan
Military equipment introduced in the 2010s
Nuclear missiles of Pakistan | Shaheen-III | Technology,Engineering | 1,355 |
30,985,684 | https://en.wikipedia.org/wiki/Advanced%20Facer-Canceler%20System | The Advanced Facer Canceller System (AFCS) is an electro-mechanical mail-handling system: a high-speed machine used by the US Postal Service to cull, face, and cancel letter mail through a series of automated operations. The AFCS was first implemented in 1992 and is capable of processing 30,000 pieces of mail per hour.
References
Letter sorters get faster, smarter
Advanced Facer Canceler System 200
United States Postal Service
1992 establishments in the United States
Mail sorting
Postal history
Postal systems
United States Postal Service | Advanced Facer-Canceler System | Technology | 107 |
75,654 | https://en.wikipedia.org/wiki/Hyperthermia | Hyperthermia, also known simply as overheating, is a condition in which an individual's body temperature is elevated beyond normal due to failed thermoregulation. The person's body produces or absorbs more heat than it dissipates. When extreme temperature elevation occurs, it becomes a medical emergency requiring immediate treatment to prevent disability or death. Almost half a million deaths are recorded every year from hyperthermia.
The most common causes include heat stroke and adverse reactions to drugs. Heat stroke is an acute temperature elevation caused by exposure to excessive heat, or combination of heat and humidity, that overwhelms the heat-regulating mechanisms of the body. The latter is a relatively rare side effect of many drugs, particularly those that affect the central nervous system. Malignant hyperthermia is a rare complication of some types of general anesthesia. Hyperthermia can also be caused by a traumatic brain injury.
Hyperthermia differs from fever in that the body's temperature set point remains unchanged. The opposite is hypothermia, which occurs when the temperature drops below that required to maintain normal metabolism. The term is from Greek ὑπέρ, hyper, meaning "above", and θέρμος, thermos, meaning "heat".
Classification
In humans, hyperthermia is defined as a temperature greater than 37.5–38.3 °C (99.5–100.9 °F), depending on the reference used, that occurs without a change in the body's temperature set point.
The normal human body temperature can be as high as 37.7 °C (99.9 °F) in the late afternoon. Hyperthermia requires an elevation from the temperature that would otherwise be expected. Such elevations range from mild to extreme; body temperatures above 40 °C (104 °F) can be life-threatening.
Signs and symptoms
An early stage of hyperthermia can be "heat exhaustion" (or "heat prostration" or "heat stress"), whose symptoms can include heavy sweating, rapid breathing and a fast, weak pulse. If the condition progresses to heat stroke, then hot, dry skin is typical as blood vessels dilate in an attempt to increase heat loss. An inability to cool the body through perspiration may cause dry skin. Hyperthermia from neurological disease may include little or no sweating, cardiovascular problems, and confusion or delirium.
Other signs and symptoms vary. Accompanying dehydration can produce nausea, vomiting, headaches, and low blood pressure and the latter can lead to fainting or dizziness, especially if the standing position is assumed quickly.
In severe heat stroke, confusion and aggressive behavior may be observed. Heart rate and respiration rate will increase (tachycardia and tachypnea) as blood pressure drops and the heart attempts to maintain adequate circulation. The decrease in blood pressure can then cause blood vessels to contract reflexively, resulting in a pale or bluish skin color in advanced cases. Young children, in particular, may have seizures. Eventually, organ failure, unconsciousness and death will result.
Causes
Heat stroke occurs when thermoregulation is overwhelmed by a combination of excessive metabolic production of heat (exertion), excessive environmental heat, and insufficient or impaired heat loss, resulting in an abnormally high body temperature. In severe cases, temperatures can exceed 40 °C (104 °F). Heat stroke may be non-exertional (classic) or exertional.
Exertional
Significant physical exertion in hot conditions can generate heat beyond the ability to cool, because, in addition to the heat, humidity of the environment may reduce the efficiency of the body's normal cooling mechanisms. Human heat-loss mechanisms are limited primarily to sweating (which dissipates heat by evaporation, assuming sufficiently low humidity) and vasodilation of skin vessels (which dissipates heat by convection proportional to the temperature difference between the body and its surroundings, according to Newton's law of cooling). Other factors, such as insufficient water intake, consuming alcohol, or lack of air conditioning, can worsen the problem.
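For reference, the two heat-loss channels described above can be written schematically (the symbols are generic, not from the text): convective loss follows Newton's law of cooling, while evaporative loss is set by the latent heat of the sweat that actually evaporates:

\[
  \dot{Q}_{\mathrm{conv}} = hA\,(T_{\mathrm{skin}} - T_{\mathrm{air}}),
  \qquad
  \dot{Q}_{\mathrm{evap}} = \dot{m}_{\mathrm{sweat}}\, L_v ,
\]

where h is a convective heat-transfer coefficient, A the body surface area, and L_v ≈ 2.4 MJ/kg the latent heat of vaporization of water near skin temperature. The convective term changes sign when the air is hotter than the skin, which is why evaporation becomes the only effective channel in extreme heat.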
The increase in body temperature that results from a breakdown in thermoregulation affects the body biochemically. Enzymes involved in metabolic pathways within the body such as cellular respiration fail to work effectively at higher temperatures, and further increases can lead them to denature, reducing their ability to catalyse essential chemical reactions. This loss of enzymatic control affects the functioning of major organs with high energy demands such as the heart and brain. Loss of fluid and electrolytes cause heat cramps – slow muscular contraction and severe muscular spasm lasting between one and three minutes. Almost all cases of heat cramps involve vigorous physical exertion. Body temperature may remain normal or a little higher than normal and cramps are concentrated in heavily used muscles.
Situational
Situational heat stroke occurs in the absence of exertion. It mostly affects the young and elderly. In the elderly in particular, it can be precipitated by medications that reduce vasodilation and sweating, such as anticholinergic drugs, antihistamines, and diuretics. In this situation, the body's tolerance for high environmental temperature may be insufficient, even at rest.
Heat waves are often followed by a rise in the death rate, and these 'classical hyperthermia' deaths typically involve the elderly and infirm. This is partly because thermoregulation involves cardiovascular, respiratory and renal systems which may be inadequate for the additional stress because of the existing burden of aging and disease, further compromised by medications. During the July 1995 heatwave in Chicago, there were at least 700 heat-related deaths. The strongest risk factors were being confined to bed, and living alone, while the risk was reduced for those with working air conditioners and those with access to transportation. Even then, reported deaths may be underestimated as diagnosis can be mis-classified as stroke or heart attack.
Drugs
Some drugs cause excessive internal heat production. The rate of drug-induced hyperthermia is higher where use of these drugs is higher.
Many psychotropic medications, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic antidepressants, can cause hyperthermia. Serotonin syndrome is a rare adverse reaction to overdose of these medications or the use of several simultaneously. Similarly, neuroleptic malignant syndrome is an uncommon reaction to neuroleptic agents. These syndromes are differentiated by other associated symptoms, such as tremor in serotonin syndrome and "lead-pipe" muscle rigidity in neuroleptic malignant syndrome.
Recreational drugs such as amphetamines and cocaine, PCP, dextromethorphan, LSD, and MDMA may cause hyperthermia.
Malignant hyperthermia is a rare reaction to common anesthetic agents (such as halothane) or the paralytic agent succinylcholine. Those who have this reaction, which is potentially fatal, have a genetic predisposition.
The use of anticholinergics, more specifically muscarinic antagonists, is thought to cause mild hyperthermic episodes due to their parasympatholytic effects. The sympathetic nervous system, also known as the "fight-or-flight" response, dominates by raising catecholamine levels through the blocked action of the "rest and digest" system.
Drugs that decouple oxidative phosphorylation may also cause hyperthermia. Of this group of drugs, the best known is 2,4-dinitrophenol, which was used as a weight-loss drug until the dangers of its use became apparent.
Personal protective equipment
Those working in industry, in the military, or as first responders may be required to wear personal protective equipment (PPE) against hazards such as chemical agents, gases, fire, small arms and improvised explosive devices (IEDs). PPE includes a range of hazmat suits, firefighting turnout gear, body armor and bomb suits, among others. Depending on design, the wearer may be encapsulated in a microclimate, due to an increase in thermal resistance and decrease in vapor permeability. As physical work is performed, the body's natural thermoregulation (i.e. sweating) becomes ineffective. This is compounded by increased work rates, high ambient temperature and humidity levels, and direct exposure to the sun. The net effect is that desired protection from some environmental threats inadvertently increases the threat of heat stress.
The effect of PPE on hyperthermia has been noted in fighting the 2014 Ebola virus epidemic in Western Africa. Doctors and healthcare workers were only able to work for 40 minutes at a time in their protective suits, fearing heat stroke.
Other
Other rare causes of hyperthermia include thyrotoxicosis and an adrenal gland tumor, called pheochromocytoma, both of which can cause increased heat production. Damage to the central nervous system from brain hemorrhage, traumatic brain injury, status epilepticus, and other kinds of injury to the hypothalamus can also cause hyperthermia.
Pathophysiology
A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat.
In contrast, hyperthermia occurs when the body temperature rises without a change in the heat control centers.
Some of the gastrointestinal symptoms of acute exertional heatstroke, such as vomiting, diarrhea, and gastrointestinal bleeding, may be caused by barrier dysfunction and subsequent endotoxemia. Ultraendurance athletes have been found to have significantly increased plasma endotoxin levels. Endotoxin stimulates many inflammatory cytokines, which in turn may cause multiorgan dysfunction. Experimentally, monkeys treated with oral antibiotics prior to induction of heat stroke do not become endotoxemic.
There is scientific support for the concept of a temperature set point; that is, maintenance of an optimal temperature for the metabolic processes that life depends on. Nervous activity in the preoptic-anterior hypothalamus of the brain triggers heat-losing (sweating, etc.) or heat-generating (shivering and muscle contraction, etc.) activities through stimulation of the autonomic nervous system. The pre-optic anterior hypothalamus has been shown to contain warm-sensitive, cool-sensitive, and temperature-insensitive neurons, which determine the body's temperature set point. As the temperature these neurons are exposed to rises above 37 °C (98.6 °F), the rate of electrical discharge of the warm-sensitive neurons increases progressively. Cold-sensitive neurons increase their rate of electrical discharge progressively below 37 °C (98.6 °F).
Diagnosis
Hyperthermia is generally diagnosed by the combination of unexpectedly high body temperature and a history that supports hyperthermia instead of a fever. Most commonly this means that the elevated temperature has occurred in a hot, humid environment (heat stroke) or in someone taking a drug for which hyperthermia is a known side effect (drug-induced hyperthermia). The presence of signs and symptoms related to hyperthermia syndromes, such as extrapyramidal symptoms characteristic of neuroleptic malignant syndrome, and the absence of signs and symptoms more commonly related to infection-related fevers, are also considered in making the diagnosis.
If fever-reducing drugs lower the body temperature, even if the temperature does not return entirely to normal, then hyperthermia is excluded.
Prevention
When ambient temperature is excessive, humans and many other animals cool themselves below ambient by evaporative cooling of sweat (or other aqueous liquid; saliva in dogs, for example); this helps prevent potentially fatal hyperthermia. The effectiveness of evaporative cooling depends upon humidity. Wet-bulb temperature, which takes humidity into account, or more complex calculated quantities such as wet-bulb globe temperature (WBGT), which also takes solar radiation into account, give useful indications of the degree of heat stress and are used by several agencies as the basis for heat-stress prevention guidelines. (Wet-bulb temperature is essentially the lowest skin temperature attainable by evaporative cooling at a given ambient temperature and humidity.)
A sustained wet-bulb temperature exceeding 35 °C (95 °F) is likely to be fatal even to fit and healthy people unclothed in the shade next to a fan; at this temperature, environmental heat gain instead of loss occurs. To date, wet-bulb temperatures have only very rarely exceeded 30 °C (86 °F) anywhere, although significant global warming may change this.
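Wet-bulb temperature can be computed from air temperature and relative humidity by solving the psychrometric balance e = e_s(T_w) − γP(T − T_w) for T_w. A minimal sketch using the Magnus saturation-vapor-pressure approximation and a standard aspirated-psychrometer constant (the coefficients and the 40 °C / 50% example are illustrative assumptions, not values from the text):

import math

def e_sat(t_c):
    # Saturation vapor pressure in hPa (Magnus approximation)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def wet_bulb(t_c, rh_pct, p_hpa=1013.25, gamma=6.62e-4):
    # Solve e_sat(Tw) - gamma*P*(T - Tw) = RH * e_sat(T) by bisection
    e = rh_pct / 100.0 * e_sat(t_c)
    lo, hi = -40.0, t_c
    for _ in range(60):
        tw = 0.5 * (lo + hi)
        if e_sat(tw) - gamma * p_hpa * (t_c - tw) < e:
            lo = tw  # estimate too dry: true Tw is higher
        else:
            hi = tw
    return tw

print(f"Tw ~ {wet_bulb(40, 50):.1f} C")  # ~30 C on a 40 C day at 50% RH

Note how an ordinary hot, humid day already approaches the survivability threshold quoted above.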
In cases of heat stress caused by physical exertion, hot environments, or protective equipment, prevention or mitigation by frequent rest breaks, careful hydration, and monitoring body temperature should be attempted. However, in situations in which a person must be exposed to a hot environment for a prolonged period or must wear protective equipment, a personal cooling system is required as a matter of health and safety. There are a variety of active or passive personal cooling systems; these can be categorized by their power sources and whether they are person- or vehicle-mounted.
Because of the broad variety of operating conditions, these devices must meet specific requirements concerning their rate and duration of cooling, their power source, and their adherence to health and safety regulations. Among other criteria are the user's need for physical mobility and autonomy. For example, active-liquid systems operate by chilling water and circulating it through a garment; the skin surface area is thereby cooled through conduction. This type of system has proven successful in certain military, law enforcement, and industrial applications. Bomb-disposal technicians wearing special suits to protect against improvised explosive devices (IEDs) use a small, ice-based chiller unit that is strapped to one leg; a liquid-circulating garment, usually a vest, is worn over the torso to maintain a safe core body temperature. By contrast, soldiers traveling in combat vehicles can face extreme microclimate temperatures and require a multiple-user, vehicle-powered cooling system with rapid connection capabilities. Requirements for hazmat teams, the medical community, and workers in heavy industry vary further.
Treatment
The underlying cause must be removed. Mild hyperthermia caused by exertion on a hot day may be adequately treated through self-care measures, such as increased water consumption and resting in a cool place. Hyperthermia that results from drug exposure requires prompt cessation of that drug, and occasionally the use of other drugs as countermeasures.
Antipyretics (e.g., acetaminophen, aspirin, other nonsteroidal anti-inflammatory drugs) have no role in the treatment of heatstroke because antipyretics interrupt the change in the hypothalamic set point caused by pyrogens; they are not expected to work on a healthy hypothalamus that has been overloaded, as in the case of heatstroke. In this situation, antipyretics actually may be harmful in patients who develop hepatic, hematologic, and renal complications because they may aggravate bleeding tendencies.
When body temperature is significantly elevated, mechanical cooling methods are used to remove heat and to restore the body's ability to regulate its own temperatures. Passive cooling techniques, such as resting in a cool, shady area and removing clothing can be applied immediately. Active cooling methods, such as sponging the head, neck, and trunk with cool water, remove heat from the body and thereby speed the body's return to normal temperatures. When methods such as immersion are impractical, misting the body with water and using a fan have also been shown to be effective.
Sitting in a bathtub of tepid or cool water (immersion method) can remove a significant amount of heat in a relatively short period of time. It was once thought that immersion in very cold water is counterproductive, as it causes vasoconstriction in the skin and thereby prevents heat from escaping the body core. However, a British analysis of various studies stated: "this has never been proven experimentally. Indeed, a recent study using normal volunteers has shown that cooling rates were fastest when the coldest water was used." The analysis concluded that iced water immersion is the most-effective cooling technique for exertional heat stroke. No superior cooling method has been found for non-exertional heat stroke. Thus, aggressive ice-water immersion remains the gold standard for life-threatening heat stroke.
When the body temperature reaches about 40 °C (104 °F), or if the affected person is unconscious or showing signs of confusion, hyperthermia is considered a medical emergency that requires treatment in a proper medical facility. Cardiopulmonary resuscitation (CPR) may be necessary if the person goes into cardiac arrest. In a hospital, more aggressive cooling measures are available, including intravenous hydration, gastric lavage with iced saline, and even hemodialysis to cool the blood.
Epidemiology
Hyperthermia affects those who are unable to regulate their body heat, mainly due to environmental conditions. The main risk factor for hyperthermia is the lack of ability to sweat. People who are dehydrated or who are older may not produce the sweat they need to regulate their body temperature. High heat conditions can put certain groups at risk for hyperthermia including: physically active individuals, soldiers, construction workers, landscapers and factory workers. Some people that do not have access to cooler living conditions, like people with lower socioeconomic status, may have a difficult time fighting the heat. People are at risk for hyperthermia during high heat and dry conditions, most commonly seen in the summer.
Various cases of different types of hyperthermia have been reported. A research study was published in March 2019 that looked into multiple case reports of drug induced hyperthermia. The study concluded that psychotropic drugs such as anti-psychotics, antidepressants, and anxiolytics were associated with an increased heat-related mortality as opposed to the other drugs researched (anticholinergics, diuretics, cardiovascular agents, etc.). A different study was published in June 2019 that examined the association between hyperthermia in older adults and the temperatures in the United States. Hospitalization records of elderly patients in the US between 1991 and 2006 were analyzed and concluded that cases of hyperthermia were observed to be highest in regions with arid climates. The study discussed finding a disproportionately high number of cases of hyperthermia in early seasonal heat waves indicating that people were not yet practicing proper techniques to stay cool and prevent overheating in the early presence of warm, dry weather.
In urban areas people are at an increased susceptibility to hyperthermia. This is due to a phenomenon called the urban heat island effect. Since the 20th century in the United States, the north-central region (Ohio, Indiana, Illinois, Missouri, Iowa, and Nebraska) was the region with the highest morbidity resulting from hyperthermia. Northeastern states had the next highest. Regions least affected by heat wave-related hyperthermia causing death were Southern and Pacific Coastal states. Northern cities in the United States are at greater risk of hyperthermia during heat waves due to the fact that people tend to have a lower minimum mortality temperature at higher latitudes. In contrast, cities residing in lower latitudes within the continental US typically have higher thresholds for ambient temperatures. In India, hundreds die every year from summer heat waves, including more than 2,500 in the year 2015. Later that same summer, the 2015 Pakistani heat wave killed about 2,000 people. An extreme 2003 European heat wave caused tens of thousands of deaths.
Causes of hyperthermia include dehydration, use of certain medications, use of cocaine and amphetamines, or excessive alcohol use. Body temperatures greater than 37.5–38.3 °C (99.5–100.9 °F) can be diagnosed as hyperthermia. As body temperatures increase or excessive body temperatures persist, individuals are at a heightened risk of developing progressive conditions. Greater-risk complications of hyperthermia include heat stroke, organ malfunction, organ failure, and death. There are two forms of heat stroke: classical heatstroke and exertional heatstroke. Classical heatstroke occurs from extreme environmental conditions, such as heat waves. Those most commonly affected by classical heatstroke are the very young, the elderly, or the chronically ill. Exertional heatstroke appears in individuals after vigorous physical activity and is displayed most commonly in healthy 15- to 50-year-old people. Sweating is often present in exertional heatstroke. The associated mortality rate of heatstroke is 40 to 64%.
Research
Hyperthermia can also be deliberately induced using drugs or medical devices, and is being studied and applied in clinical routine as a treatment of some kinds of cancer. Research has shown that medically controlled hyperthermia can shrink tumours; a high body temperature damages cancerous cells by destroying proteins and structures within each cell. Hyperthermia has also been studied to investigate whether it makes cancerous tumours more susceptible to radiation, which has allowed hyperthermia to be used to complement other forms of cancer therapy. Various techniques of hyperthermia in the treatment of cancer include local or regional hyperthermia, as well as whole-body techniques.
See also
Effects of climate change on human health
Space blanket
References
External links
Tips to Beat the Heat —American Red Cross
Extreme Heat—CDC Emergency Preparedness and Response
Workplace Safety and Health Topics: Heat Stress—CDC and NIOSH
Excessive Heat Events Guidebook—US EPA
Physiological Responses to Exercise in the Heat—US National Academies
Causes of death
Heat waves
Medical emergencies
Weather and health
Physiology
Thermoregulation | Hyperthermia | Biology | 4,496 |
20,897,472 | https://en.wikipedia.org/wiki/Simple-homotopy%20equivalence | In mathematics, particularly the area of topology, a simple-homotopy equivalence is a refinement of the concept of homotopy equivalence. Two CW-complexes are simple-homotopy equivalent if they are related by a sequence of collapses and expansions (inverses of collapses), and a homotopy equivalence is a simple homotopy equivalence if it is homotopic to such a map.
The obstruction to a homotopy equivalence being a simple homotopy equivalence is its Whitehead torsion, τ(f).
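In symbols, a minimal statement (assuming f : X → Y is a homotopy equivalence of finite connected CW-complexes):

\[
  \tau(f) \in \operatorname{Wh}(\pi_1(Y)),
  \qquad
  f \ \text{is a simple homotopy equivalence} \iff \tau(f) = 0 .
\]

The Whitehead group Wh(π₁(Y)) is a quotient of K₁ of the integral group ring, so simple homotopy theory refines homotopy theory by algebraic K-theoretic data.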
A homotopy theory that studies simple-homotopy types is called simple homotopy theory.
See also
Discrete Morse theory
References
Homotopy theory
Equivalence (mathematics) | Simple-homotopy equivalence | Mathematics | 142 |
31,834,133 | https://en.wikipedia.org/wiki/H.%20B.%20Walikar | H. B. Walikar (born 18 March 1951) was the vice-chancellor of the Karnatak University in Dharwad, India.
Walikar was arrested on corruption charges and removed from office.
References
Kannada people
Graph theorists
People from Bijapur district, Karnataka
1951 births
Living people
Academic staff of Karnatak University | H. B. Walikar | Mathematics | 72 |
40,344,750 | https://en.wikipedia.org/wiki/Stropharia%20pseudocyanea | Stropharia pseudocyanea is a mushroom in the family Strophariaceae.
References
Strophariaceae
Fungi of Europe
Fungi described in 1823
Fungus species | Stropharia pseudocyanea | Biology | 35 |
6,517,510 | https://en.wikipedia.org/wiki/Elevator%20test%20tower | An elevator test tower is a structure usually 100 to over 200 metres (300 feet to over 600 feet) tall that is designed to evaluate the stress and fatigue limits of specific elevator cars in a controlled environment. Tests are also carried out in the test tower to ensure reliability and safety in current elevator designs and address any failures that may arise.
Examples of an elevator test tower are the National Lift Tower in Northampton, England; the Solae Tower in Inazawa, Japan; and the TK Elevator Test Tower in Rottweil, Germany (owned by ThyssenKrupp).
History
In 1888, Otis completed an elevator test tower at their factory in Yonkers, New York; this was possibly the first elevator test tower in the United States.
See also
List of elevator test towers
References
Elevator test towers
Towers | Elevator test tower | Engineering | 167 |
64,329,231 | https://en.wikipedia.org/wiki/Beet%20pseudoyellows%20virus | Beet pseudoyellows virus (BPYV) is a species of virus in the genus Crinivirus.
The virus was first recognised by James E. Duffus of the United States Department of Agriculture, and reported in 1975 under the title 'A new type of whitefly-transmitted disease – a link to the aphid-transmitted viruses'. Beet (Beta vulgaris) in a research greenhouse unexpectedly presented symptoms characteristic of the aphid-vectored virus Beet yellows virus, despite no aphids being present. Instead, greenhouse whiteflies (Trialeurodes vaporariorum) were present and determined to be the vector. The presumed new species of virus was designated 'Beet pseudo-yellows virus' (note the hyphen, omitted in the currently accepted name). Further investigation revealed the virus typically causes stunting, interveinal yellowing, and/or chlorotic spotting in its hosts, and that at least an additional 36 species of plants from various families are susceptible to infection.
Images
References
External links
Image and general information at Seminis
Images at Texas A&M AgriLife Extension
Rfam entry for 3'-terminal pseudoknot of BPYV
Crinivirus | Beet pseudoyellows virus | Biology | 249 |
13,775,580 | https://en.wikipedia.org/wiki/Hunter%20Wheel | The Hunter Wheel was a device intended to improve the propulsion of steam-powered ships and evaluated in the middle 1840s. At the time, as ships were transitioning from sail to steam engine power, the understanding of the principles of hydrodynamics and efficient use of steam was in its infancy.
Concept
The vertically mounted paddle wheel, at the side or at the stern, was the first propulsion scheme used with steam power, but naval authorities were concerned about the vulnerability of the wheels to damage, whether in combat or peacetime use, and sought to increase the efficiency of ship designs as the navies of the world began to switch from wooden hulls to iron ones. The only competition to wheel designs was John Ericsson's patent screw, which was just then entering its first trials.
Lt. William W. Hunter and Benjamin Harris proposed a new wheel design, which consisted of a conventional paddle wheel drum placed horizontally within the vessel, below the water-line. The paddles were so arranged as to project from a suitable opening in the side of the ship, at right angles to the keel. Water was kept from entering by a cofferdam placed around the paddle wheel drum and against the side of the ship.
USS Union evaluation, 1843
The Hunter wheel was tested in 1843 on the USS Union, which had been modified to accept the device. It was discovered that Union's engines wasted too much energy uselessly driving the paddle wheels through the water-filled cofferdam inside the ship.
In USS Water Witch, 1845
Hunter's wheel was also tested in USS Water Witch; again, the wheels lost much of their power pushing water through the encased area inside the hull, forfeiting between 50 and 70 percent of their potential power.
Revenue cutter George M. Bibb, 1845
Also in 1845, the Revenue Marine ordered a cutter fitted with Hunter's wheel from Knapp's Fort Pitt Foundry, the George M. Bibb, which sank at her moorings in Cincinnati on her trials and was converted to side wheels before entering service.
Final evaluation on USS Allegheny, 1847
The Hunter design was also used in the construction of USS Allegheny, but it was confirmed as being unreliable and inefficient, and Allegheny was later converted to screw propulsion. This appears to have been the last test of Hunter's wheel.
See also
Paddle steamer
References
Marine propulsion
1,132,416 | https://en.wikipedia.org/wiki/Porte-coch%C3%A8re | A porte-cochère is a doorway to a building or courtyard, "often very grand", through which vehicles can enter from the street; it may also be a covered porch-like structure at a main or secondary entrance to a building through which originally a horse and carriage, and today a motor vehicle, can pass to provide arriving and departing occupants protection from the elements.
Portes-cochères are still found on such structures as major public buildings and hotels, providing covered access for visitors and guests arriving by motorized transport.
A porte-cochère, a structure for vehicle passage, is to be distinguished from a portico, a columned porch or entry for human, rather than vehicular, traffic.
History
The porte-cochère was a feature of many late 18th- and 19th-century mansions and public buildings. A well-known example is at Buckingham Palace in London. A portico at the White House in Washington, D.C. is often confused with a porte-cochère, where a raised vehicle ramp gives an architectural portico the functionality of the latter.
Today portes-cochères are found at both elaborate private homes and such public buildings as churches, hotels, health facilities, and schools. Portes-cochères differ from carports in that vehicles pass through for passengers to board or exit rather than being parked beneath the covered area.
Guard stones are often found at the foot of portes-cochère, acting as protective bollards to prevent vehicles from damaging the structure.
Gallery
See also
Glossary of architecture
References
Architectural elements | Porte-cochère | Technology,Engineering | 322 |
5,795,063 | https://en.wikipedia.org/wiki/Philanthrocapitalism | Philanthrocapitalism or philanthropic capitalism is a way of doing philanthropy, which mirrors the way that business is done in the for-profit world. It may involve venture philanthropy that actively invests in social programs to pursue specific philanthropic goals that would yield return on investment over the long term, or in a more passive form whereby "social investors" benefit from investing in socially-responsible programs.
History
The term appears as early as February 2006 in The Economist, and was popularized by Matthew Bishop and Michael Green in their 2008 book Philanthrocapitalism: How the Rich Can Save The World. The book was endorsed by Bill Clinton, who wrote in its foreword that this concept drives the Clinton Foundation. The shift in implementing business models in charity is not a new concept – John D. Rockefeller and Andrew Carnegie sought to apply their business strategies in their philanthropy in the 20th century. Since then, a significant increase in charity spending by other organizations such as the Bill & Melinda Gates Foundation and Chan Zuckerberg Initiative, both described as examples of philanthrocapitalism, has been noted.
In December 2015, Mark Zuckerberg and his spouse Priscilla Chan pledged to donate over the decades 99% of their Facebook shares, then valued at $45 billion, to the Chan Zuckerberg Initiative, a newly created LLC with focuses on health and education.
These more modern organizations differ from other groups or organizations since their funds come more from the private capital of an individual rather than donors or profit from physical products. The integration of business models in charity foundations has focused on a symbiotic relationship between social responsibility and the local, national, and international markets. Philanthrocapitalism has been compared and contrasted with altruism due to the similar stated goals of the movements’ advocates.
Criticism
There are many criticisms of philanthrocapitalism, beginning with its limited transparency and accountability. There are also concerns that private philanthropy erodes support for governmental spending on public services. The main worry is that, collectively, the practice can create tax revenue problems for the government: donations still go towards philanthropy, but some public services may never receive those funds. Because of this, critics such as John Cassidy have expressed concern that the wealth of a few may determine which organizations receive the most funding.
Sociology professor Linsey McGoey has written that many current and past philanthropists amassed their fortunes by predatory business practices which enhanced the very social problems their philanthropy is intended to alleviate. Finally there are concerns of the existence of ulterior motives. These ulterior motives can range from business owners avoiding capital-gains taxes by donating their company's excess stock instead of selling it and estate taxes which would be assessed onto their family to collecting tax credits from the government.
Limited liability companies
Some philanthropists have decided to forego the foundation route in favor of utilizing a limited liability company (LLC) to pursue their philanthropic goals. This allows the organization to avoid three main constraints on foundations, as follows:
Foundations must give away 5% of assets annually
Foundations must disclose where the grants are going and generally can only give to 501(c)(3) registered charities
Foundations must avoid funding or even advocating for a side in politics
The LLC structure allows the philanthropist to keep their initiatives private, although there is no requirement that they do. An LLC is allowed to support for-profit companies that they feel support their mission, and is therefore permitted to make and keep any profits made on such an investment. An LLC can also openly support politicians with whom they agree and advocate for policy positions, even authoring policy positions that elected officials may opt to use. Lastly, the original donor, such as Zuckerberg, retains control over the shares donated; had he donated shares to a foundation, they would no longer be his to control.
A Partial List of Philanthropic LLCs:
Chan Zuckerberg Initiative
Arnold Ventures LLC
Omidyar Network
Emerson Collective
See also
Impact investing
Social entrepreneurship
Microfinance
References
Sources
The Economist, "The Birth of Philanthrocapitalism".
Matthew Bishop and Michael Green, Philanthrocapitalism: How the Rich Can Save The World, http://www.philanthrocapitalism.net
A debate about philanthrocapitalism has run on Opendemocracy.net and another on the Global Philanthropy Forum at https://www.philanthropyforum.org/forum/Discussion_Forum1.asp
Philanthropy
Social finance | Philanthrocapitalism | Biology | 928 |
1,848,778 | https://en.wikipedia.org/wiki/Great%20hall | A great hall is the main room of a royal palace, castle or a large manor house or hall house in the Middle Ages. It continued to be built in the country houses of the 16th and early 17th centuries, although by then the family used the great chamber for eating and relaxing. At that time the word "great" simply meant big and had not acquired its modern connotations of excellence. In the medieval period, the room would simply have been referred to as the "hall" unless the building also had a secondary hall. The term "great hall" has been mainly used for surviving rooms of this type for several centuries to distinguish them from the different type of hall found in post-medieval houses. Great halls were found especially in France, England and Scotland, but similar rooms were also found in some other European countries.
A typical great hall was a rectangular room between one and a half and three times as long as it was wide, and also higher than it was wide. It was entered through a screens passage at one end, and had windows on the long sides, often including a large bay window. There was usually a minstrels' gallery above the screens passage. The screens passage was divided from the hall by a timber screen with two openings. The portion of the screen between these openings could be movable, such as the one at Rufford Old Hall. At the other end of the hall was the dais where the high table was situated. The ceiling above the dais was often ornamented to denote its higher status. The lord's family's more private rooms lay beyond the dais end of the hall, and the kitchen, buttery and pantry were on the opposite side of the screens passage. The dais end is generally referred to as the 'upper' end, and the screens end as the 'lower' end.
Even royal and noble residences had few living rooms until late in the Middle Ages, and a great hall was a multifunctional room. It was used for receiving guests and it was the place where the household would dine together, including the lord of the house, his gentleman attendants and at least some of the servants. At night some members of the household might sleep on the floor of the great hall.
Evolution
From the fall of the Roman Empire to the Renaissance, the hall was at the heart of residential complexes. Early examples were timber built and have vanished, only being known from documentary sources like Beowulf, and excavations. Archaeologists have uncovered Anglo-Saxon halls from the highest social levels at the palaces of Yeavering (Northumberland) and Cheddar (Somerset). The halls at both palaces were 120 feet (37m) long, that at Yeavering being seventh century and that at Cheddar (the first of several) being ninth century. Saxon halls were routinely aisled and occasionally had side walls that were bowed out in plan. At this point the hall was merely the largest of several detached structures, rather than being a room within a single building.

From later Saxon times, the standard manorial plan began to emerge - the excavated tenth century hall at Sulgrave (Northamptonshire) has a definite 'high' end with an attached stone chamber wing and 'low' end with a cross-passage, services and detached kitchen. In the late tenth century, first floor stone halls began to be built in both France and England, partly for reasons of security. This form would become the basis for the hall keep. Examples can be seen at Langeais Castle (France), Richmond Castle (England) and Chepstow Castle (Wales), as well as on the Bayeux Tapestry.

Many large ground floor aisled halls were built in England following the Norman Conquest, as the key room in the new feudal society. The greatest was that at Westminster Palace, built by William Rufus as a setting for secular royal events. Even ground floor halls were increasingly built of stone as the material became more widely available, though in thickly forested areas timber remained the material of choice. From the 13th century, improved carpentry techniques meant that roofs could span greater distances, eliminating the need for aisles, and by c.1300, the standard hall plan with the dais and great chamber at the upper end and the entrance, screens passage and services at the lower end had become commonplace. After this time, the function of the hall began to narrow to solely a dining and circulation space, and architectural developments reflected that, with the rise of the wall fireplace and bay window (also known as an oriel) creating a more pleasant and specialised chamber. It was formerly considered that the decline of the hall began with the decline of feudalism in the 14th century. More recent scholarship, however, is of the opinion that the great hall retained vitality into the sixteenth century, with many of the most impressive halls being later, like those of Eltham Palace (1475-80) and Hampton Court Palace (1532-35).
Architectural detail
The hall would originally have had a central hearth, with the smoke rising through a vent in the roof. Examples can be seen at Stokesay Castle and Ludlow Castle. Chimneys were later added, and it would then have one of the largest fireplaces of the palace, manor house or castle, frequently big enough to walk and stand inside. Where there was a wall fireplace, it was generally at the dais end of the hall with the bay window, as at Raglan Castle, so the lord could get the most heat and light. The hearth was used for heating and also for some of the cooking, although most houses had a dedicated kitchen for the bulk of the cooking. The fireplace would commonly have an elaborate overmantel with stone or wood carvings or plasterwork which might contain coats of arms, heraldic mottoes (usually in Latin), caryatids or another adornment. In the upper halls of French manor houses, the fireplaces were usually very large and elaborate.
The great hall typically had the finest decorations in the house, extending even to the window frame mouldings of the outer wall. Many French manor houses have very beautifully decorated external frames on the large mullioned windows that light the hall. This decoration clearly marked the window as belonging to the lord's private hall, which was also where guests slept.
In western France, the early manor houses were centred on a ground-floor hall. Later, the hall reserved for the lord and his high-ranking guests was moved up to the first-floor level. This was called the salle haute or upper hall (or "high room"). In some of the larger three-storey manor houses, the upper hall was as high as the second storey roof. The smaller ground-floor hall or salle basse remained, but was for receiving guests of any social order. It is very common to find these two halls superimposed, one on top of the other, in larger manor houses in Normandy and Brittany. Access from the ground-floor hall to the upper (great) hall was normally via an external staircase tower. The upper hall often contained the lord's bedroom and living quarters off one end.
In Scotland, six common furnishings were present in the sixteenth-century hall: the high table and principal seat; side tables for others; the cupboard and silver plate; the hanging chandelier, often called the 'hart-horn' made of antler; ornamental weapons, commonly a halberd; and the cloth and napery used for dining.
Occasionally the great hall would have an early listening device system, allowing conversations to be heard in the lord's bedroom above. In Scotland, these devices are called a laird's lug. In many French manor houses, there are small peepholes from which the lord could observe what was happening in the hall. This type of hidden peephole is called a judas in French. In England, such an opening is referred to as a squint and there are two connecting the hall and great chamber in Stokesay Castle.
Examples
Many great halls still exist to this day. Three very large surviving royal halls are Westminster Hall, Ridderzaal in Binnenhof and the Vladislav Hall in Prague Castle (although the latter was used only for public events and never as a great hall). Penshurst Place in Kent, England, has a little-altered 14th century example, and Great Chalfield Manor has a similarly intact 15th century one. At the scale of yeoman housing, a restored 15th century hall can be seen in Bayleaf Farmhouse, now at the Weald and Downland Living Museum. Surviving 16th and early 17th century specimens in Britain are numerous, for example those at Eltham Palace (England), Longleat (England), Deene Park (England), Burghley House (England), Bodysgallen Hall (Wales), Darnaway Castle (Scotland), Muchalls Castle (Scotland) and Crathes Castle (Scotland). There are numerous ruined examples, notably at Linlithgow Palace (Scotland), Kenilworth Castle (England) and Raglan Castle (Wales).
Survival
The domestic and monastic model applied to collegiate institutions during the Middle Ages. A few university colleges, including Merton College, Oxford (1277), Peterhouse, Cambridge (1290), University College, Durham (between 1284 and 1311, originally for the Prince Bishop of Durham), Trinity Hall, Cambridge (1350), and New College, Oxford (14th century), have medieval halls which are still used as dining rooms on a daily basis; many other colleges have later halls built in a similar medieval style, as do the Inns of Court and the Livery Companies in London. The "high table" (often on a small dais or stage at the top of the hall, furthest away from the screens passage) seats dons (at the universities) and Masters of the Bench (at the Inns of Court), whilst students (at the universities) and barristers or students (at the Inns of Court) dine at tables placed at right angles to the high table and running down the body of the hall, thus maintaining the hierarchical arrangement of the medieval domestic, monastic or collegiate household. Numerous more recently founded schools and institutions have halls and dining halls based on medieval great halls or monastic refectories.
Decline and revival
From the 15th century onwards, halls lost most of their traditional functions to more specialised rooms: first, for family members and guests, to the great chamber, parlours and withdrawing rooms; and later, for servants, who finally achieved their own servants' hall to eat in and servants' bedrooms in attics or basements. By the late 16th century, the great hall was beginning to lose its purpose. Increasing centralization of power in royal hands meant that men of good social standing were less inclined to enter the service of a lord to obtain his protection, and so the size of the inner noble household shrank.
As the social gap between master and servant grew, the family retreated, usually to the first floor, to private rooms. In fact, in early times servants were not normally allowed to use the same staircases as nobles to access the great hall of larger castles, and servants' staircases are still extant in places such as Muchalls Castle. Other reception and living rooms in country houses became more numerous, specialised and important, and by the late 17th century, the halls of many new houses were simply vestibules, passed through to get to somewhere else, but not lived in. Several great halls, like that at Great Hall in Lancashire, were downsized to create two rooms. From the 16th century onwards, it was common to insert a floor into the smaller halls to create a lower entrance hall and a commodious first floor chamber.
The halls of late 17th, 18th and 19th-century country houses and palaces usually functioned almost entirely as impressive entrance points to the house, and for large scale entertaining, as at Christmas, for dancing, or when a touring company of actors performed. With the arrival of ballrooms and dedicated music rooms in the largest houses by the late 17th century, these functions too were lost. Where large halls survived, it was usually due to continuing institutional use, especially as a courtroom. This change of use preserved the halls of Winchester, Oakham and Leicester Castles. Other halls, like that at Eltham Palace, remained standing in a neglected state as barns. There was a revival of the great hall concept in the late 19th and early 20th centuries, with large halls used for banqueting and entertaining (but not as eating or sleeping places for servants) featuring in some houses of this period as part of a broader medieval revival, for example Thoresby Hall. Some medieval halls were also restored from neglect or ruin, like that at Mayfield Palace, which now serves Mayfield School.
In popular culture
In the Harry Potter franchise of books, films, and video games, the Great Hall within Hogwarts is the site of meals, feasts, assemblies, and awards ceremonies.
Winchester Castle's Great Hall is an important site in British history; it was the location of the trial of Walter Raleigh and partially of the Bloody Assizes and it also contains a well-preserved imitative Arthurian Round Table.
See also
Banquet hall
Dining hall
Great room
Hall and parlor house
Manor house
Mead hall
Moot hall
Refectory
Tapestry
Notes
External links
Architecture in the United Kingdom
Rooms
Castle architecture | Great hall | Engineering | 2,725 |
38,385,634 | https://en.wikipedia.org/wiki/List%20of%20filename%20extensions%20%28S%E2%80%93Z%29 | This alphabetical list of filename extensions contains extensions of notable file formats used by multiple notable applications or services.
A–E
F–L
M–R
S
T
U
V
W
X
Y
Z
See also
List of filename extensions
List of file formats
References
External links
File Extension Resource
The File Extensions Resource
File information site
File Extension Database
File format finder
List of file types
S
S | List of filename extensions (S–Z) | Technology | 78 |
18,222,616 | https://en.wikipedia.org/wiki/Teo%20Spiller | Teo Spiller (born December 4, 1965, in Ljubljana) is a Slovenian digital artist who has been active in the net.art movement since 1995. Spiller is notable for being one of the first artists to sell a piece of Internet art to a museum or collector. He was an assistant professor at Arthouse College in Ljubljana.
net.art Career
In May 1999 Spiller sold his work Megatronix to Mestna galerija Ljubljana for approximately . This made Spiller one of the first net.art artists to sell a piece to a gallery or collector. Sale negotiations between Spiller and the buyer were conducted through the use of an open online forum. His other notable net.art projects include Hommage to Mondrian, Nice Page, Caprices for Netscape and Esmeralda.
In 2000 Spiller organized an international art event called INFOS 2000. This was an offline net.art contest addressing how new multimedia has contextually and aesthetically obscured the borders between net.art and CD-ROM art. He also produced many fine art works reflecting the aesthetics of new media.
In 2004 Spiller launched X-lam, a different media for viewing images, in collaboration with Tadej Komavec. It works best in low-light conditions. A 'stick' contains a series of blinking diodes; by moving the eyes quickly the viewer can briefly see an image floating in open space. Spiller exhibited X-lam at the 10th Cairo International Biennale, alongside other viewing technologies like stereograms and streaming textuality, which he later used in the installation Intruders (Kino Siska, 2012).
In 2007 Spiller declined to participate in the U3 Triennial of Contemporary Arts due to conflicting ideas concerning the presentation of net.art in a gallery.
In 2008 Spiller launched Real3Dfriend, a project that questions the ethics of virtual-reality worlds such as Second Life and critiques virtual-reality systems for being too commercial and lacking basis in humanist values.
In 2011 Spiller launched projects in new media textuality and new media semiotics, which combined the artist's writing with net.art projects like SPAM sonnet and news sonnets.
Combining visual media and machinery, Spiller built the robot Laboro to explore the concept of cyborg artistry (a combination of human and machinery within the artistic process). This resulted in "Wooden In/Form/Ations" and robot-generated graphics such as President Obama following the execution of Osama bin Laden.
Solo exhibitions
Galerija Commerce, Ljubljana, Slovenia
Klub Cankarjev dom, Ljubljana, Slovenia
Gallery Rael Artel, Pärnu, Estonia
INFOS 2000 (off-line) net.art contest
Installation "Inside the web server", Hevreka!05, Ljubljana, Slovenia, 2005
"Sshh!", KUD France Prešeren, Ljubljana, Slovenia, 2010
"Wooden In/form/Ation", Trubar Literature House, Ljubljana, Slovenia, 2011
"LIFE", Merlin Theatre, Budapest, Hungary, 2011
"Intruders", Kino Šiška, Ljubljana, Slovenia, 2012
Group exhibitions
film+arc, Graz; Austria, 1997
Ostranenie 97, Dessau, Germany, 1997
Digital Graphic Art on Paper, Ljubljana Municipal Museum, 1999
Masters of Graphic Arts, Győr, Hungary, 2001
Break 2.3, Ljubljana, Slovenia, 2005
Territories, Identities, Nets, Museum of Modern Art, Ljubljana, Slovenia, 2005
Device-art 2006, Kontejner Zagreb/Blasthaus San Francisco, 2006, Croatia/US
10th Cairo International Biennale, Cairo, Egypt, 2006/2007
Kiparstvo danes, Celje, Slovenia, 2010
Interruption - 30th Ljubljana Biennial of Graphic art, Ljubljana, 2013
References
External links
Official website
Videos about his work
Gallery of his work
Artist Career: his lectures about being a contemporary artist
Megatronix
Mestna galerija Ljubljana
Net.artists
Artists from Ljubljana
New media artists
Slovenian digital artists
Living people
1965 births | Teo Spiller | Technology | 821 |
1,514,954 | https://en.wikipedia.org/wiki/Load-balanced%20switch | A load-balanced switch is a switch architecture which guarantees 100% throughput with no central arbitration at all, at the cost of sending each packet across the crossbar twice. Load-balanced switches are a subject of research for large routers scaled past the point of practical central arbitration.
Introduction
Internet routers are typically built using line cards connected with a switch. Routers supporting moderate total bandwidth may use a bus as their switch, but high bandwidth routers typically use some sort of crossbar interconnection. In a crossbar, each output connects to one input, so that information can flow through every output simultaneously. Crossbars used for packet switching are typically reconfigured tens of millions of times per second. The schedule of these configurations is determined by a central arbiter, for example a Wavefront arbiter, in response to requests by the line cards to send information to one another.
Perfect arbitration would result in throughput limited only by the maximum throughput of each crossbar input or output. For example, if all traffic coming into line cards A and B is destined for line card C, then the maximum traffic that cards A and B can process together is limited by C. Perfect arbitration has been shown to require massive amounts of computation that scale up much faster than the number of ports on the crossbar. Practical systems use imperfect arbitration heuristics (such as iSLIP) that can be computed in reasonable amounts of time.
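For illustration, the sketch below shows a single request/grant/accept round in the spirit of round-robin heuristics such as iSLIP. It is deliberately simplified (real iSLIP keeps rotating per-port grant and accept pointers and runs multiple iterations per slot), and all names in it are invented for this example:

```python
# Simplified one-iteration request/grant/accept matching (illustrative only;
# real iSLIP maintains round-robin pointers and iterates to fill the match).
def match(requests, n):
    """requests: set of (input, output) pairs that have queued cells.
    Returns a conflict-free matching as a dict {input: output}."""
    grant = {}                        # each output grants one requesting input
    for o in range(n):
        requesters = sorted(i for i, out in requests if out == o)
        if requesters:
            grant[o] = requesters[0]  # a rotating pointer would vary this pick
    accept = {}                       # each input accepts one granting output
    for o, i in sorted(grant.items()):
        if i not in accept:
            accept[i] = o
    return accept

print(match({(0, 2), (1, 2), (1, 0)}, 3))  # {1: 0, 0: 2}: no port used twice
```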
A load-balanced switch is not related to a load balancing switch, which refers to a kind of router used as a front end to a farm of web servers to spread requests to a single website across many servers.
Basic architecture
As shown in the figure to the right, a load-balanced switch has N input line cards, each of rate R, each connected to N buffers by a link of rate R/N. Those buffers are in turn each connected to N output line cards, each of rate R, by links of rate R/N. The buffers in the center are partitioned into N virtual output queues.
Each input line card spreads its packets evenly to the N buffers, something it can clearly do without contention. Each buffer writes these packets into a single buffer-local memory at a combined rate of R. Simultaneously, each buffer sends packets at the head of each virtual output queue to each output line card, again at rate R/N to each card. The output line card can clearly forward these packets out the line with no contention.
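A toy discrete-time model of these two stages is sketched below. It is an illustration under idealized assumptions (fixed-size cells, one cell per rate-R/N link per slot), and the class and variable names are invented, not taken from any real router codebase:

```python
from collections import deque

class LoadBalancedSwitch:
    """Toy model: N inputs spread cells round-robin over N buffers (first
    mesh); each buffer keeps N virtual output queues (VOQs) and forwards at
    most one cell per output per slot over fixed rate-R/N links (second mesh)."""
    def __init__(self, n):
        self.n = n
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]
        self.next_buf = [0] * n            # per-input round-robin pointer

    def arrive(self, inp, out, cell):
        b = self.next_buf[inp]             # spread regardless of destination
        self.next_buf[inp] = (b + 1) % self.n
        self.voq[b][out].append(cell)

    def slot(self):
        delivered = [[] for _ in range(self.n)]
        for b in range(self.n):
            for o in range(self.n):
                if self.voq[b][o]:         # no arbitration: links are fixed
                    delivered[o].append(self.voq[b][o].popleft())
        return delivered
```

Because every input spreads its cells uniformly and every buffer drains each virtual output queue at a fixed rate, no admissible traffic pattern can persistently overload one buffer, which is the informal intuition behind the throughput guarantee.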
Each buffer in a load-balanced switch acts as a shared-memory switch, and a load-balanced switch is essentially a way to scale up a shared-memory switch, at the cost of additional latency associated with forwarding packets at rate R/N twice.
The Stanford group investigating load-balanced switches is concentrating on implementations where the number of buffers is equal to the number of line cards. One buffer is placed on each line card, and the two interconnection meshes are actually the same mesh, supplying rate 2R/N between every pair of line cards. But the basic load-balanced switch architecture does not require that the buffers be placed on the line cards, or that there be the same number of buffers and line cards.
One interesting property of a load-balanced switch is that, although the mesh connecting line cards to buffers is required to connect every line card to every buffer, there is no requirement that the mesh act as a non-blocking crossbar, nor that the connections be responsive to any traffic pattern. Such a connection is far simpler than a centrally arbitrated crossbar.
Keeping packets in-order
If two packets destined for the same output arrive back-to-back at one line card, they will be spread to two different buffers, which could have two different occupancies, and so the packets could be reordered by the time they are delivered to the output. Although reordering is legal, it is typically undesirable because TCP does not perform well with reordered packets.
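A toy example of the hazard, with invented packet names: the second packet lands in an emptier buffer and overtakes the first.

```python
from collections import deque

buf_a = deque(["old1", "old2", "p1"])   # p1 arrives first, behind two cells
buf_b = deque(["p2"])                   # p2 arrives next, into an empty buffer
arrivals = []
while buf_a or buf_b:                   # each buffer sends one cell per slot
    for buf in (buf_a, buf_b):
        if buf:
            arrivals.append(buf.popleft())
print(arrivals.index("p2") < arrivals.index("p1"))   # True: p2 overtakes p1
```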
By adding yet more latency and buffering, the load-balanced switch can maintain packet order within flows using only local information. One such algorithm is FOFF (Fully Ordered Frames First). FOFF has the additional benefits of removing any vulnerability to pathological traffic patterns, and providing a mechanism for implementing priorities.
Implementations
Single chip crossbar plus load-balancing arbiter
The Stanford University Tiny Tera project (see Abrizio) introduced a switch architecture that required at least two chip designs for the switching fabric itself (the crossbar slice and the arbiter). Upgrading the arbiter to include load-balancing and combining these devices could have reliability, cost and throughput advantages.
Single global router
Since the line cards in a load-balanced switch do not need to be physically near one another, one possible implementation is to use an entire continent- or global-sized backbone network as the interconnection mesh, and core routers as the "line cards". Such an implementation suffers from having all latencies increased to twice the worst-case transmission latency. But it has a number of intriguing advantages:
Large backbone packet networks typically have massive overcapacity (10x or more) to deal with imperfect capacity planning, congestion, and other problems. A load-balanced switch backbone can deliver 100% throughput with an overcapacity of just 2x, as measured across the whole system.
The underpinnings of large backbone networks are usually optical channels that cannot be quickly switched. These map well to the constant-rate 2R/N channels of the load-balanced switch's mesh.
No route tables need be changed based on global congestion information, because there is no global congestion.
Rerouting in the case of a node failure does require changing the configuration of the optical channels. But the reroute can be precomputed (there are only a finite number of nodes that can fail), and the reroute causes no congestion that would then require further route table changes.
References
External links
Optimal Load-Balancing I. Keslassy, C. Chang, N. McKeown, and D. Lee
Scaling Internet Routers Using Optics I. Keslassy, S. Chuang, K. Yu, D. Miller, M. Horowitz, O. Solgaard, and N. McKeown
Computer networking
Media access control | Load-balanced switch | Technology,Engineering | 1,319 |
1,643,492 | https://en.wikipedia.org/wiki/Cosmic%20latte | Cosmic latte is the average color of the galaxies of the universe as perceived from the Earth, found by a team of astronomers from Johns Hopkins University (JHU). In 2002, Karl Glazebrook and Ivan Baldry determined that the average color of the universe was a greenish white, but they soon corrected their analysis in a 2003 paper in which they reported that their survey of the light from over 200,000 galaxies averaged to a slightly beigeish white. The hex triplet value for cosmic latte is #FFF8E7.
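To make the hex triplet concrete, here is a trivial sketch unpacking it into its 8-bit sRGB components (the variable names are ours):

```python
hex_triplet = "FFF8E7"                       # cosmic latte
r, g, b = (int(hex_triplet[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)                               # 255 248 231: a warm near-white
```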
Discovery of the color
Finding the average color of the universe was not the focus of the study; rather, it examined the spectral analysis of different galaxies to study star formation. Like Fraunhofer lines, the dark lines displayed in the study's spectral ranges reveal older and younger stars, and allowed Glazebrook and Baldry to determine the age of different galaxies and star systems. The study revealed that the overwhelming majority of stars formed about 5 billion years ago. Because these stars would have been "brighter" in the past, the color of the universe changes over time, shifting from blue to red as more blue stars change to yellow and eventually red giants.
As light from distant galaxies reaches the Earth, the average "color of the universe" (as seen from Earth) tends towards pure white, due to the light coming from the stars when they were much younger and bluer.
Naming the color
The corrected color was initially published on the Johns Hopkins University (JHU) News website and updated on the team's initial announcement. Multiple news outlets, including NPR and the BBC, displayed the color in stories, and some relayed Glazebrook's request on the announcement asking for name suggestions, which jokingly added that all were welcome as long as they were not "beige".
A vote on names for the new color was then held among the JHU astronomers involved.
Though Drum's suggestion of "cappuccino cosmico" received the most votes, the researchers favored Drum's other suggestion, "cosmic latte". "Latte" means "milk" in Italian, Galileo's native language, and the similar "lattea" means "milky", as in the Italian term for the Milky Way, "Via Lattea". They enjoyed the fact that the color would be similar to the Milky Way's average color as well, as it is part of the sum of the universe. They also claimed to be "caffeine biased".
See also
References
External links
Official project website: The Cosmic Spectrum (archived 2016) from Professor Karl Glazebrook's website
Color
Physical cosmology
Shades of white
| Cosmic latte | Physics,Astronomy | 545 |
15,962,940 | https://en.wikipedia.org/wiki/Rarian | Rarian is a document cataloging system (formerly known as Spoon). It manages documentation metadata, as specified by the Open Source Metadata Framework (OMF). Rarian is used by the GNOME desktop help browser, Yelp, and was designed as a replacement for ScrollKeeper. It provides an API.
References
External links
Rarian
Open Source Metadata Framework
Freedesktop.org libraries
GNOME
KDE
Metadata | Rarian | Technology | 84 |
10,699,094 | https://en.wikipedia.org/wiki/Public%20Transport%20Information%20and%20Priority%20System | The Public Transport Information and Priority System, abbreviated PTIPS, is a computer-based system used in New South Wales, Australia, that brings together information about public transport entities, such as buses. Where applicable, PTIPS can also provide transport vehicles with priority at traffic signals.
PTIPS consists of a number of hardware and software components installed on board buses which communicate wirelessly with a central set of servers. PTIPS also relies on an interface with the Sydney Coordinated Adaptive Traffic System (SCATS), which provides the priority feature, and on bus/route/timetable data provided by bus organisations and government authorities.
PTIPS provides:
Real-time tracking of bus location and status
Traffic light priority for late running buses
Bus/Timetable performance and reliability reports
Real-time Bus arrival information for bus stops
How PTIPS works
PTIPS works by combining, on the one hand, schedule and route path information for buses performing timetabled services (as opposed to, say, charter trips), and on the other hand, live location data transmitted by the buses to PTIPS.
PTIPS receives XML data files from the bus operators, which contain information relating to planned trips (for example, route paths, trips & schedules, bus stops etc.)
Each bus that PTIPS tracks is equipped with a hardware device that records its location via GPS, and transmits it to the central PTIPS servers via the cellular radio communications network. Buses transmit these messages at certain intervals (which are configurable, and which vary depending on what the bus is doing), and also when they pass certain points along their intended route. Apart from GPS location, the transmitted messages also include information about the vehicle and which trip it is doing.
With the above information, PTIPS can compare the location of a bus performing a certain trip, at a certain point in time, with where it should be, based on the planned route and timetable data.
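As an illustrative sketch of that comparison (the function and data layout below are invented for illustration and are not the actual PTIPS interfaces), a trip can be modeled as timing points along the route, with lateness estimated by interpolating the scheduled time at the bus's reported position:

```python
from bisect import bisect_right

def estimate_delay_s(timetable, reported_dist_m, reported_time_s):
    """timetable: [(distance_along_route_m, scheduled_time_s), ...] sorted
    by distance. Returns seconds behind schedule (negative means early)."""
    dists = [d for d, _ in timetable]
    i = max(bisect_right(dists, reported_dist_m) - 1, 0)
    d0, t0 = timetable[i]
    if i + 1 < len(timetable) and timetable[i + 1][0] > d0:
        d1, t1 = timetable[i + 1]
        frac = (reported_dist_m - d0) / (d1 - d0)
        scheduled = t0 + frac * (t1 - t0)   # interpolate between timing points
    else:
        scheduled = t0
    return reported_time_s - scheduled

# A bus at 1.5 km, 260 s into a trip scheduled to be there at 180 s:
print(estimate_delay_s([(0, 0), (1000, 120), (2500, 300)], 1500, 260))  # 80.0
```

A bus estimated to be running later than some operator-defined threshold would then be a candidate for a signal priority request to SCATS.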
Real time apps
Transport for NSW worked with several developers in late 2012 to create and release smartphone applications with access to the real-time bus data provided by PTIPS. Released in December, several iOS and Android apps went live on their respective app stores, allowing customers to track where their buses were in real time, as well as any delays or timetable changes as they occurred. It was initially trialled on Sydney Buses' route 400.
In 2013, this real-time data was further expanded to provide live information from Sydney Trains, and private bus operators Hillsbus and Busways Blacktown, and was eventually rolled out across bus operators in Greater Sydney.
In 2020, Transport for NSW started working with bus operators to introduce real-time tracking to regional bus services. As of March 2022, PTIPS-assisted real-time tracking was available for the regional centres of Albury, Armidale, Bathurst, Bega, Coffs Harbour, Dubbo, Forbes, Grafton, Nowra, Parkes, Port Macquarie, Queanbeyan, Tamworth, Tweed Heads and Wagga Wagga.
References
Sample of Realtime Data Government of New South Wales
Contract ID: PTIPS Roads & Traffic Authority Retrieved on 16 April 2007.
Priority bus green lights scrapped Sydney Morning Herald Retrieved on 16 April 2007.
Bus transport in New South Wales
Intelligent transportation systems | Public Transport Information and Priority System | Technology | 670 |
1,181,818 | https://en.wikipedia.org/wiki/Signed-digit%20representation | In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers.
Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries. In the binary numeral system, one special case of signed-digit representation is the non-adjacent form, which can offer speed benefits with minimal space overhead.
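As an illustration, the non-adjacent form can be computed with the standard textbook recurrence; the sketch below emits digits in {−1, 0, 1}, least significant first:

```python
def naf(n):
    digits = []
    while n != 0:
        if n % 2:                   # odd: pick +1 or -1 so that n - d is
            d = 2 - (n % 4)         # divisible by 4, forcing a following 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

print(naf(7))   # [-1, 0, 0, 1]: 7 = 8 - 1, no two adjacent nonzero digits
```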
History
Challenges in calculation stimulated early authors Colson (1726) and Cauchy (1840) to use signed-digit representation. The further step of replacing negated digits with new ones was suggested by Selling (1887) and Cajori (1928).
In 1928, Florian Cajori noted the recurring theme of signed digits, starting with Colson (1726) and Cauchy (1840). In his book History of Mathematical Notations, Cajori titled the section "Negative numerals". For completeness, Colson uses examples and describes addition (pp. 163–4), multiplication (pp. 165–6) and division (pp. 170–1) using a table of multiples of the divisor. He explains the convenience of approximation by truncation in multiplication. Colson also devised an instrument (Counting Table) that calculated using signed digits.
Eduard Selling advocated inverting the digits 1, 2, 3, 4, and 5 to indicate the negative sign. He also suggested snie, jes, jerd, reff, and niff as names to use vocally. Most of the other early sources used a bar over a digit to indicate a negative sign for it. Another German usage of signed-digits was described in 1902 in Klein's encyclopedia.
Definition and properties
Digit set
Let D be a finite set of numerical digits with cardinality b (if b = 1, then the positional number system is trivial and only represents the trivial ring), with each digit denoted as d_i for 0 ≤ i < b; b is known as the radix or number base. D can be used for a signed-digit representation if it is associated with a unique function f : D → Z such that f(d_i) ≠ f(d_j) for all i ≠ j.
This function, f, is what rigorously and formally establishes how integer values are assigned to the symbols/glyphs in D. One benefit of this formalism is that the definition of "the integers" (however they may be defined) is not conflated with any particular system for writing/representing them; in this way, these two distinct (albeit closely related) concepts are kept separate.
D can be partitioned into three distinct sets D_+, D_0, and D_−, representing the positive, zero, and negative digits respectively, such that all digits d in D_+ satisfy f(d) > 0, all digits d in D_0 satisfy f(d) = 0, and all digits d in D_− satisfy f(d) < 0. The cardinality of D_+ is b_+, the cardinality of D_0 is b_0, and the cardinality of D_− is b_−, giving the number of positive, zero, and negative digits respectively, such that b = b_+ + b_0 + b_−.
Balanced form representations
Balanced form representations are representations where for every positive digit d, there exists a corresponding negative digit d′ such that f(d′) = −f(d). It follows that b_+ = b_−. Only odd bases can have balanced form representations, as otherwise some digit would have to be the opposite of itself and hence have value 0, duplicating the zero digit; with one zero digit and matched pairs of positive and negative digits, b = 2b_+ + 1 is odd. In balanced form, the negative digits are usually denoted as positive digits with a bar over the digit, as d̄ = −d. For example, the digit set of balanced ternary would be {1̄, 0, 1} with f(1̄) = −1, f(0) = 0, and f(1) = 1. This convention is also adopted in finite fields of odd prime order.
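As a minimal illustration of balanced form, here is a sketch converting an integer to balanced ternary; 'T' is used as an ASCII stand-in for the barred digit (a common convention, not notation from this article):

```python
def balanced_ternary(n):
    digits = []                     # least significant digit first
    while n != 0:
        r = n % 3
        if r == 2:                  # remainder 2 becomes digit -1 plus a carry
            r, n = -1, n + 3
        digits.append(r)
        n //= 3
    return digits[::-1] or [0]

sym = {1: "1", 0: "0", -1: "T"}
print("".join(sym[d] for d in balanced_ternary(8)))   # "10T": 9 - 1 = 8
```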
Dual signed-digit representation
Every digit set D has a dual digit set D′ given by the inverse order of the digits, with an isomorphism g : D → D′ defined by −f = f′ ∘ g, where f′ is the value function of D′. As a result, for any signed-digit representation of a number system ring N constructed from D with valuation v, there exists a dual signed-digit representation of N constructed from D′ with valuation v′, related digit-wise by −v = v′ ∘ g, where − is the additive inverse operator of N. The digit set for balanced form representations is self-dual.
For integers
Given the digit set D and function f as defined above, let us define an integer endofunction T : Z → Z as the following: T(n) = (n − f(d)) / b, where d is the unique digit such that f(d) ≡ n (mod b).
If the only periodic point of T is the fixed point 0, then the set of all signed-digit representations of the integers using D is given by the Kleene plus D^+, the set of all finite concatenated strings of digits d_n … d_0 with at least one digit, with n a non-negative integer. Each signed-digit representation has a valuation v : D^+ → Z given by v(d_n … d_0) = Σ_{i=0}^{n} f(d_i) b^i.
Examples include balanced ternary with digits {1̄, 0, 1}.
Otherwise, if there exists a non-zero periodic point of T, then there exist integers that are represented by an infinite number of non-zero digits in D. Examples include the standard decimal numeral system with the digit set {0, 1, …, 9}, which requires an infinite number of the digit 9 to represent the additive inverse −1, as …999 = −1, and the positional numeral system with the digit set {1, …, 10} and b = 10, which requires an infinite number of the digit 9 to represent the number 0 (infinitely many nines to the left of a final digit of value ten).
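A sketch of computing the valuation above for a digit string, assuming a lookup table f from glyphs to integer values ('T' for the −1 digit of balanced ternary is our ASCII convention):

```python
def valuation(digits, b, f):
    """Value of the string d_n ... d_0: sum of f(d_i) * b**i."""
    return sum(f[d] * b**i for i, d in enumerate(reversed(digits)))

f3 = {"T": -1, "0": 0, "1": 1}   # balanced ternary digit values
print(valuation("10T", 3, f3))   # 9 + 0 - 1 = 8
```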
For decimal fractions
If the integers can be represented by the Kleene plus D^+, then the set of all signed-digit representations of the decimal fractions, or b-adic rationals, is given by D^+ × P × D^*, the Cartesian product of the Kleene plus D^+ (the set of all finite concatenated strings of digits with at least one digit), the singleton P consisting of the radix point ("." or ","), and the Kleene star D^* (the set of all finite concatenated strings of digits, possibly empty). Each signed-digit representation d_m … d_0 . e_1 … e_n has a valuation v = Σ_{i=0}^{m} f(d_i) b^i + Σ_{j=1}^{n} f(e_j) b^{−j}.
For real numbers
If the integers can be represented by the Kleene plus D^+, then the set of all signed-digit representations of the real numbers is given by D^+ × P × D^N, the Cartesian product of the Kleene plus D^+ (the set of all finite concatenated strings of digits with at least one digit), the singleton P consisting of the radix point ("." or ","), and the Cantor space D^N (the set of all right-infinite concatenated strings of digits). Each signed-digit representation d_m … d_0 . e_1 e_2 … has a valuation v = Σ_{i=0}^{m} f(d_i) b^i + Σ_{j=1}^{∞} f(e_j) b^{−j}.
The infinite series always converges to a finite real number.
For other number systems
All base-b numerals can be represented as a subset of D^Z, the set of all doubly infinite sequences of digits in D, where Z is the set of integers, and the ring of base-b numerals is represented by a formal power series ring in b and b^{−1}, the doubly infinite series Σ_{i ∈ Z} a_i b^i, where a_i = f(d_i) for digits d_i ∈ D.
Integers modulo powers of b
The set of all signed-digit representations of the integers modulo b^n is given by the set D^n, the set of all finite concatenated strings of digits of length n. Each signed-digit representation d_{n−1} … d_0 has a valuation v ≡ Σ_{i=0}^{n−1} f(d_i) b^i (mod b^n).
Prüfer groups
A Prüfer group is the quotient group of the integers and the b-adic rationals. The set of all signed-digit representations of the Prüfer group is given by the Kleene star D^*, the set of all finite concatenated strings of digits e_1 … e_n. Each signed-digit representation has a valuation v = Σ_{j=1}^{n} f(e_j) b^{−j} mod 1.
Circle group
The circle group is the quotient group of the integers and the real numbers. The set of all signed-digit representations of the circle group is given by the Cantor space D^N, the set of all right-infinite concatenated strings of digits e_1 e_2 …. Each signed-digit representation has a valuation v = Σ_{j=1}^{∞} f(e_j) b^{−j} mod 1.
The infinite series always converges.
b-adic integers
The set of all signed-digit representations of the b-adic integers is given by the Cantor space D^N, the set of all left-infinite concatenated strings of digits … d_1 d_0. Each signed-digit representation has a valuation v = Σ_{i=0}^{∞} f(d_i) b^i, which converges in the b-adic metric.
b-adic solenoids
The set of all signed-digit representations of the b-adic solenoids is given by the Cantor space D^Z, the set of all doubly infinite concatenated strings of digits. Each signed-digit representation has a valuation v = Σ_{i ∈ Z} f(d_i) b^i, where the non-negative powers converge b-adically and the negative powers converge in the real numbers.
In written and spoken language
Indo-Aryan languages
The oral and written forms of numbers in the Indo-Aryan languages use a negative numeral (e.g., "un" in Hindi and Bengali, "un" or "unna" in Punjabi, "ekon" in Marathi) for the numbers between 11 and 90 that end with a nine. The numbers followed by their names are shown for Punjabi below (the prefix "ik" means "one"):
19 unni, 20 vih, 21 ikki
29 unatti, 30 tih, 31 ikatti
39 untali, 40 chali, 41 iktali
49 unanja, 50 panjah, 51 ikvanja
59 unahat, 60 sath, 61 ikahat
69 unattar, 70 sattar, 71 ikhattar
79 unasi, 80 assi, 81 ikiasi
89 unanve, 90 nabbe, 91 ikinnaven.
Similarly, the Sesotho language utilizes negative numerals to form 8's and 9's.
8 robeli (/Ro-bay-dee/) meaning "break two" i.e. two fingers down
9 robong (/Ro-bong/) meaning "break one" i.e. one finger down
Classical Latin
In Classical Latin, the integers 18 and 19 had no spoken or written form that included the corresponding parts for "eight" or "nine" in practice, despite such forms being in existence. Instead, in Classical Latin,
18 = duodēvīgintī ("two taken from twenty"), (IIXX or XIIX),
19 = ūndēvīgintī ("one taken from twenty"), (IXX or XIX)
20 = vīgintī ("twenty"), (XX).
For the subsequent integer numerals [28, 29, 38, 39, ..., 88, 89], the additive form was generally much more common in the language; for the listed numbers, however, the subtractive form above was still preferred. Hence, approaching thirty, numerals were expressed as:
28 = duodētrīgintā ("two taken from thirty"), less frequently also vīgintī octō / octō et vīgintī ("twenty-eight / eight and twenty"), (IIXXX or XXIIX versus XXVIII, the latter having been fully outcompeted).
29 = ūndētrīgintā ("one taken from thirty"), although the less preferred additive form was also at their disposal.
This is one of the main foundations of contemporary historians' reasoning for why the subtractive I- and II- forms were so common in this range of cardinals compared to other ranges. The numerals 98 and 99 could also be expressed in both forms, yet "two to hundred" might have sounded a bit odd; clear evidence is the scarce occurrence of these numbers written down in a subtractive fashion in authentic sources.
Finnish language
Another language retaining this feature, by now only in traces yet still in active use today, is Finnish, where the spelled-out numerals are used this way whenever a digit of 8 or 9 occurs. The scheme is as follows:
1 = "yksi" (Note: yhd- or yht- mostly when about to be declined; e.g. "yhdessä" = "together, as one [entity]")
2 = "kaksi" (Also note: kahde-, kahte- when declined)
3 = "kolme"
4 = "neljä"
...
7 = "seitsemän"
8 = "kah(d)eksan" (two left [for it to reach it])
9 = "yh(d)eksän" (one left [for it to reach it])
10 = "kymmenen" (ten)
The list above is no special case; the pattern consequently appears in larger cardinals as well, e.g.:
399 = "kolmesataayhdeksänkymmentäyhdeksän"
Emphasis on these attributes stays present even in the shortest colloquial forms of the numerals:
1 = "yy"
2 = "kaa"
3 = "koo"
...
7 = "seiska"
8 = "kasi"
9 = "ysi"
10 = "kymppi"
However, this phenomenon has no influence on written numerals; the Finnish use the standard Western Arabic decimal notation.
Time keeping
In the English language it is common to refer to times as, for example, 'seven to three', 'to' performing the negation.
Other systems
There exist other signed-digit bases in which the base b is not equal to the number of digits in the digit set. A notable example of this is Booth encoding, which has a digit set of values {−1, 0, 1} with b_+ = 1 and b_− = 1, but which uses a base b = 2. The standard binary numeral system would only use digits of value {0, 1}.
Note that non-standard signed-digit representations are not unique. The non-adjacent form (NAF) of Booth encoding does guarantee a unique representation for every integer value; however, this only applies for integer values. For example, consider the following repeating binary numbers in NAF (writing the −1 digit in parentheses): 0.101010… and 1.0(−1)0(−1)… are both valid NAF strings, and both represent the value 2/3.
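A quick numerical check of the example above, truncating each repeating expansion after about 60 fractional bits (this snippet is illustrative only):

```python
# Both truncated series should approach 2/3 = 0.666666...
a = sum(2.0**-i for i in range(1, 60, 2))        # 0.101010... (digits +1)
b = 1 - sum(2.0**-i for i in range(2, 60, 2))    # 1.0(-1)0(-1)... (digits -1)
print(round(a, 12), round(b, 12), round(2 / 3, 12))
```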
See also
Balanced ternary
Negative base
Redundant binary representation
Notes and references
J. P. Balantine (1925) "A Digit for Negative One", American Mathematical Monthly 32:302.
Lui Han, Dongdong Chen, Seok-Bum Ko, Khan A. Wahid "Non-speculative Decimal Signed Digit Adder" from Department of Electrical and Computer Engineering, University of Saskatchewan.
Non-standard positional numeral systems
Number theory
Ring theory
Arithmetic dynamics
Coding theory
Formal languages
Sign (mathematics) | Signed-digit representation | Mathematics | 2,722 |
7,239,222 | https://en.wikipedia.org/wiki/Australian%20Antarctic%20Building%20System | Australian Antarctic Building System, or AANBUS, is a modular construction system used by the Australian Government Antarctica Division for buildings in Antarctica. The individual modules resemble shipping containers. Each module is approximately 3.6 metres by 6 metres by 4 metres high.
Buildings built using the AANBUS modules are placed on concrete footings anchored into the ground and do not need external guy wires to anchor and support them. The modules are built of steel with attached insulation and vapor barriers.
The modular design provides improved shipping, speed of assembly in the short Antarctic summer, and better testing before shipping. The ability to test the assembled modules allows corrections to be made at construction sites in temperate climates, with easy access to parts and equipment, rather than at remote Antarctic locations where shipping in replacement parts is an arduous undertaking.
Further reading
External links
Australia’s Antarctic Buildings: AANBUS
Characteristics of the Australian Antarctic Building System
References
Architecture in Australia
Australian Antarctic Territory
Technology related to buildings in Antarctica | Australian Antarctic Building System | Engineering | 199 |
48,303,871 | https://en.wikipedia.org/wiki/Exserohilum%20inaequale | Exserohilum inaequale is a species of fungus in the family Pleosporaceae. Found in Nigeria, it was described as new to science in 1984. It differs from other Exserohilum species in the size, shape, and septation of its conidia. Additionally, the septa are comparatively dark and thick.
References
External links
Fungi described in 1984
Pleosporaceae
Fungi of Africa
Fungus species | Exserohilum inaequale | Biology | 91 |
36,891,206 | https://en.wikipedia.org/wiki/Lowland%20kagu | The lowland kagu (Rhynochetos orarius) is a large, extinct species of kagu. It was endemic to the island of New Caledonia in Melanesia in the south-west Pacific region. It was described from subfossil bones found at the Pindai Caves paleontological site on the west coast of Grande Terre. The holotype is a right tibiotarsus (NCP 700), held by the Muséum national d'histoire naturelle in Paris. The specific epithet comes from the Latin orarius (of the coast) from its presumed lowland distribution, as opposed to its congener the living kagu R. jubatus.
Description
The general proportions of the various bones of the lowland kagu are very similar to those of the kagu. They differ in the greater size of the extinct species in averaging about 15% larger, with no overlap between the hindlimb elements and only rare overlap between those of the wings. The describers postulate that R. orarius and R. jubatus were lowland and highland forms, respectively.
Taxonomic uncertainty
In 2018, Jörn Theuerkauf and Roman Gula argued that R. orarius was not a valid species. They claimed that Balouet and Olson had overstated the larger size of R. orarius, and had assigned all the specimens they found to R. orarius but none to R. jubatus, which would be unlikely if two kagu species had coexisted on the same island; that the extant kagu is also found in the lowlands, making speciation unlikely; and that no other two kagu species in Oceania share the same island. Instead, they proposed that there was only one kagu species in the Holocene of New Caledonia, R. jubatus, which decreased in average size after human colonization as a result of hunters and introduced predators like dogs favoring the capture of larger animals. This very same possibility had been raised by Balouet and Olson in their original paper and discounted as unlikely, but Theuerkauf and Gula pointed out that similar rapid size changes have been documented in other vertebrates when exposed to new competitors and predators.
References
Extinct birds of New Caledonia
Holocene extinctions
Birds described in 1989
Fossil taxa described in 1989
Late Quaternary prehistoric birds
Taxa named by Jean-Christophe Balouet
Controversial bird taxa | Lowland kagu | Biology | 491 |
25,463,443 | https://en.wikipedia.org/wiki/Google%20Japanese%20Input | Google Japanese Input is an input method published by Google for the entry of Japanese text on a computer. Since its dictionaries are generated automatically from the Internet, it supports typing of personal names, Internet slang, neologisms and related terms. Google Japanese Input can be used on Windows, macOS, and ChromeOS.
Google also releases an open-source version under the name mozc. It can be used on Linux, Windows, macOS, Android, and ChromeOS. It does not use Google's closed-source algorithms for generating dictionary data from online sources.
See also
Google IME
Google Pinyin
References
External links
Japanese Input
Input methods
Japanese-language computing
2009 software | Google Japanese Input | Technology | 136 |
1,834,679 | https://en.wikipedia.org/wiki/StankDawg | David Blake (born 1971), also known as StankDawg, is the founder of the hacking group Digital DawgPound (DDP) and a long-time member of the hacking community. He is known for being a regular presenter at multiple hacking conferences, but is best known as the creator of the "Binary Revolution" initiative, including being the founding host and producer of Binary Revolution Radio, a long-running weekly Internet radio show which ran 200 episodes from 2003 to 2007.
Biography
Blake was born in Newport News, Virginia on September 13, 1971. He received an AAS (Associate in Applied Science) degree from the University of Kentucky in 1992, and has a BS in Computer Science from Florida Atlantic University as well as a CEH certificate. He presently lives and works as a computer programmer/analyst in Orlando, Florida. Blake is a member of the International High IQ society.
Hacking
StankDawg is a staff writer for the well-known hacker periodical 2600: The Hacker Quarterly, as well as the now-defunct Blacklisted! 411 magazine. He has also been a contributing writer to several independent zines such as Outbreak, Frequency, and Radical Future. He has been a frequent co-host of Default Radio and was a regular on Radio Freek America. Additionally, he has appeared on GAMERadio, Infonomicon, The MindWar, Phreak Phactor, and HPR (Hacker Public Radio).
He has presented at technology conferences such as DEF CON, H.O.P.E., and Interz0ne. David has been very outspoken about many topics, some of which have drawn negative feedback from different sources. His most controversial presentation, "Hacking Google AdWords" at DEF CON 13, drew criticism from such people as Jason Calacanis, among others. His presentation at the fifth H.O.P.E. conference drew some surprise from the AS/400 community.
StankDawg appeared as a subject on the television show The Most Extreme on Animal Planet where he demonstrated the vulnerabilities of wireless internet connections.
Blake chose the handle "StankDawg" in college, where he started a local hacking group which became known as the "Digital DawgPound".
Digital DawgPound
The Digital DawgPound (more commonly referred to as the "DDP") is a group of hackers, best known for a series of articles in hacker magazines such as 2600: The Hacker Quarterly and Make, the long-running webcast Binary Revolution Radio, and a very active set of forums with posts from high-profile hackers such as Strom Carlson, decoder, Phiber Optik and many more. The stated mission of the DDP is to propagate a more positive image of hackers than the negative mass media stereotype. The group welcomes new members who want to learn about hacking, and attempts to teach them more positive aspects and steer them away from the negative aspects by reinforcing the hacker ethic. Their goal is to show that hackers can, and regularly do, make positive contributions not only to technology, but to society as a whole.
History
The DDP was founded and named by StankDawg. His stated reasons were that he had made many friends in the hacking scene and thought that it would be useful to have everyone begin working together in a more organized fashion. He was motivated by the fact that there had been other well-known Hacker Groups in the 1980s who had accomplished great things in the hacking world such as the LoD and the MoD. In 1988, while a junior in high school, StankDawg came up with the name on his way to the "Sweet 16" computer programming competition. He jokingly referred to his teammates as "The Digital Dawgpound".
StankDawg lurked in the shadows of the hacking world for many years throughout college under many different pseudonyms. In 1997 he popped his head out into the public and became more active on IRC and many smaller hacking forums. He saw people he thought were insanely brilliant who seemed to share the same mindset and positive attitude towards hacking that he had, so he decided to approach a couple of them to see if anyone would be interested in forming a group and working together. There was always a huge emphasis not only on technical competence and variety, but also on strength of character and integrity. The team was a mix of hackers, programmers, phone phreakers, security professionals, and artists. They had experience in multiple programming languages and operating systems. DDP members are not only good programmers and hackers, but more importantly, good people. By 1999 the DDP had its first official members, and from this partnership creativity flowed.
The DDP communicated and worked together on StankDawg's personal site, which was open to anyone who wanted to join in on the fun. StankDawg was never comfortable with the fact that it was his name that was on the domain and that many people who were coming to the site were coming because of his articles or presentations but not really appreciating all of the other great contributions from other community members that were around. In 2002, after watching the web site grow quickly, it was decided that a new community needed to be created for these like-minded hackers who were gathering. This was the start of the biggest DDP project called Binary Revolution which was an attempt at starting a true "community" of hackers. As the site grew, so did the DDP roster.
Members
Over the years, DDP membership has included several staff writers for 2600: The Hacker Quarterly and Blacklisted! 411 magazine, including StankDawg and bland_inquisitor. They frequently publish articles, provide content, and appear in many media outlets across the Internet. DDP members are also regular speakers at hacking conferences such as DEF CON, H.O.P.E., Interz0ne, Notacon, and many smaller, more regional cons.
Some DDP members hold memberships in Mensa and the International High IQ Society. StankDawg is very proud of the diversity of the team and has spoken about this many times on Binary Revolution Radio. Membership spans both coasts of the United States and Europe, and has included members from Jamaica, Brazil, and many other countries.
Recognition
The DDP maintains a blog which they refer to as a "blawg" (a play on the intentionally misspelled word "Dawg"). Posts by DDP members have been featured on other technology-related sites such as those of Make Magazine, HackADay, Hacked Gadgets, and others.
Binary Revolution
In 2003, StankDawg moved the forums from his personal site over to a new site as part of a project called the Binary Revolution which he considered a "movement" towards a more positive hacking community.
This "Binary Revolution" is the best known of the DDP projects and is commonly referred to simply as "BinRev". This project was created in an attempt to bring the hacking community back together, working towards a common, positive goal of reclaiming the name of hackers. The Binary Revolution emphasizes positive aspects of hacking and projects that help society. It does this in a variety of outlets including monthly meetings, the weekly radio show Binary Revolution Radio(BRR), a video-based series of shows called HackTV, and very active message board forums.
BinRev is more than just a radio show or forums, although they are certainly the most well-known of the projects. It is actually composed of many parts.
Binary Revolution Radio
Binary Revolution Radio, often shortened to "BRR", was one part of the binrev community. Started and hosted by Blake in 2003, it featured different co-hosts each week, and covered different aspects of hacker culture and computer security.
It was broadcast via internet stream, usually prerecorded in Florida on a weekend, and then edited and released on the following Tuesday on the DDP Hack Radio stream at 9:30pm EST. Topics included phreaking, identity theft, cryptography, operating systems, programming languages, free and open source software, Wi-Fi and Bluetooth, social engineering, cyberculture, and information about various hacker conventions such as PhreakNIC, ShmooCon, H.O.P.E., and DEF CON.
In July 2005 Blake announced that he was going to take a break, and so for the third season, the show was produced by Black Ratchet and Strom Carlson (who had been frequent co-hosts during Blake's run). During the time that they hosted the program, the format rotated between the standard prerecorded format, and a live format which included phone calls from listeners.
Blake returned to the show in May 2006. He maintained the prerecorded format and brought more community input into the show by bringing on more members of the Binary Revolution community. For the first episode of the fourth season, BRR had its first-ever broadcast in front of a live audience, during the HOPE Number Six convention in New York City in June 2006.
The final episode, #200, aired on October 30, 2007; the marathon installment clocked in at 7 hours and 12 minutes.
Notable co-hosts
Tom Cross (as "Decius")
Elonka Dunin
Jason Scott
Lance James
Mark Spencer
Virgil Griffith
MC Frontalot
Lucky225
Strom Carlson
Black Ratchet
BinRev Meetings
As the forums grew, there were many posts from people looking for others in their area where hacker meetings did not exist or were not approachable for some reason. Those places that did have meetings published sparse information. Binary Revolution meetings were started as an answer to these problems and as a place for forum members to get together. BinRev offers free web hosting to all meetings to help organize them, keep communications alive, and help projects grow. Some meetings are in large cities like Chicago and Orlando while others are in small towns. Anyone can start their own BinRev meeting by asking in the BinRev forums.
BinRev.net
"BRnet" is the official IRC network of Binary Revolution. It is active at all hours of the day and contains a general #binrev channel but also contains many other channels for more specific and productive discussion.
HackTV
In mid-2003, StankDawg released an Internet video show entitled "HackTV", the first Internet television show about hacking, which has since grown into a series of several different shows. Episodes were released irregularly, since most were filmed by StankDawg in South Florida, where he lived at the time. The team wanted the show to appear professional in terms of quality, but this made cooperating over the Internet difficult: sharing large video files was cumbersome, and encoded video caused editing problems and quality concerns. The original show was released as full-length 30-minute episodes. This was also a problem, since it became more and more difficult to get enough material for full-length episodes. There was also some content that was related to hacking only on a fringe level, which StankDawg did not feel was appropriate to include in the show. This led to other ideas.
HackTV:Underground
In light of the difficulties of putting together the full HackTV original show, and in an attempt to make the show more accessible for community contributions, StankDawg launched a new series that was less focused on format and video quality and more on content and ease of participation. This series was titled "HackTV:Underground", or "HTV:U" for short. It allowed anyone to contribute content in any format and at any length or video quality; people could film with basic camera-phone-quality video if that was the only way to get the content. One episode of HTV:U was used by the G4techTV show "Torrent".
HackTV:Pwned
This series of HackTV was a prank style show, similar to the popular "Punk'd" show on MTV at the time. Even the logo is an obvious parody of the Punk'd logo. This series contains pranks that mostly took place at conferences, but is also open to social engineering and other light-hearted content.
DocDroppers
The DocDroppers project is a community project to create a centralized place to store hacking articles and information while still maintaining some formatting and readability. Old ASCII text files exist scattered across the Internet, but they come and go quickly and are difficult to find; they are usually minimally formatted and sometimes difficult to read. DocDroppers allows users to submit articles to a centralized place where they can be searched, easily maintained, and easily read and referenced.
More recently, the project has grown to include encyclopedia-style entries on many hacking topics after many were deleted from sites such as Wikipedia. This has led DocDroppers to include a section on hacker history and culture among its content.
Selected writing
"Stupid Webstats Tricks", Autumn 2005, 2600 Magazine
"Hacking Google AdWords", Summer 2005, 2600 Magazine
"Disposable Email Vulnerabilities", Spring 2005, 2600 Magazine
"How to Hack The Lottery", Fall 2004, 2600 Magazine
"Robots and Spiders", Winter 2003, 2600 Magazine
"A History of 31337sp34k", Fall 2002, 2600 Magazine
"Transaction Based Systems", Spring 2002, 2600 Magazine
"Batch vs. Interactive", Summer 1999, 2600 Magazine
Selected presentations
"The Art of Electronic Deduction", July 2006, H.O.P.E. Number Six (presented again at Interz0ne 5, Saturday March 11, 2006)
"Hacking Google AdWords", July 2005, DEF CON 13
"AS/400: Lifting the veil of obscurity", July 2004, The fifth H.O.P.E.
Projects
Projects that StankDawg was directly involved in creating/maintaining in addition to the ones mentioned above.
DDP HackRadio - A streaming radio station with a schedule of hacking and tech related shows.
Binary Revolution Magazine - The printed hacking magazine put out by the DDP.
Hacker Events - A calendar for all hacking conferences, events, meetings, or other related gatherings.
Hacker Media - A portal for all hacking, phreaking, and other related media shows.
Phreak Phactor - The world's first Hacking reality radio show.
WH4F - "Will Hack For Food" gives secure disposable temporary email accounts.
References
External links
Other links that were mentioned or referred to in this entry:
StankDawg's personal site.
The Digital DawgPound - official site.
BinRev IRC - Binary Revolution official IRC channel web site & BinRev IRC - Official Binary Revolution IRC network.
HPR - "Hacker Public Radio" is a daily hacking and technology radio show created by the DDP, infonomicon and others. It has many different hosts.
BRR Archive - Archive of the hacking radio show presented by members of the DDP (07/2003-10/2007).
Binary Revolution Meetings - Monthly hacker meetings that encourage participation and offers free hosting for all meetings.
HackTV - The Internet's first full-length regular Hacking video show.
Old Skool Phreak - Home of many phreaking related text files.
RFA Archive - Weekly Radio show about Technology, Privacy and Freedom (02/2002 - 02/2004).
1971 births
Living people
American computer programmers
Hackers
Internet radio in the United States
Phreaking
People from Newport News, Virginia | StankDawg | Technology | 3,220 |
38,272,708 | https://en.wikipedia.org/wiki/Virut | Virut is a cybercrime malware botnet, operating at least since 2006, and one of the major botnets and malware distributors on the Internet. In January 2013, its operations were disrupted by the Polish organization Naukowa i Akademicka Sieć Komputerowa.
Characteristics
Virut is a malware botnet that is known to be used for cybercrime activities such as DDoS attacks, spam (in collaboration with the Waledac botnet), fraud, data theft, and pay-per-install activities. It spreads through executable file infection (through infected USB sticks and other media), and more recently, through compromised HTML files (thus infecting vulnerable browsers visiting compromised websites). It has infected computers associated with at least 890,000 IP addresses in Poland. In 2012, Symantec estimated that the botnet had control of over 300,000 computers worldwide, primarily in Egypt, Pakistan and Southeast Asia (including India). A Kaspersky report listed Virut as the fifth-most widespread threat in the third quarter of 2012, responsible for 5.5% of computer infections.
History
The Virut botnet has been active since at least 2006.
On 17 January 2013, the Polish research and development organization, data networks operator, and operator of the Polish ".pl" top-level domain registry, Naukowa i Akademicka Sieć Komputerowa (NASK), took over twenty-three domains used by Virut in an attempt to shut it down. A NASK spokesperson stated that it was the first time NASK engaged in such an operation (taking over domains), owing to the major threat that the Virut botnet posed to the Internet. It is likely Virut will not be shut down completely, as some of its control servers are located at Russian ".ru" top-level domain name registrars outside the reach of the Polish NASK. Further, the botnet is able to look up alternate backup hosts, enabling the criminals operating it to reestablish control over the network.
See also
Command and control (malware)
Zombie (computer science)
Trojan horse (computing)
Botnet
Alureon
Conficker
Gameover ZeuS
ZeroAccess botnet
Regin (malware)
Zeus (malware)
Timeline of computer viruses and worms
References
Internet security
Distributed computing projects
Spamming
Botnets
Cybercrime in India | Virut | Engineering | 496 |
74,992,891 | https://en.wikipedia.org/wiki/Tremella%20fibulifera | Tremella fibulifera is a species of fungus in the family Tremellaceae. It produces soft, whitish, lobed to frondose, gelatinous basidiocarps (fruit bodies) and is parasitic on other fungi on dead branches of broad-leaved trees. It was originally described from Brazil.
Taxonomy
Tremella fibulifera was first published in 1895 by German mycologist Alfred Möller based on a collection made in Brazil.
Description
Fruit bodies are soft, gelatinous, whitish, up to 2.5 cm (1 in) across, and lobed. Microscopically, the basidia are tremelloid (subglobose, with oblique to vertical septa), 4-celled, 13 to 18 by 9 to 16 μm. The basidiospores are ellipsoid, smooth, 7 to 10 by 6 to 7 μm.
Similar species
Tremella subfibulifera, also described from Brazil, appears macroscopically identical but differs microscopically in having slightly smaller basidiospores (5.5 to 10 by 4 to 6 μm). DNA sequencing has shown that it is a distinct species. Several other species, including Tremella olens and Tremella neofibulifera, are macroscopically similar and belong within the T. fibulifera complex, but occur in Asia or Australia.
Habitat and distribution
Tremella fibulifera is a parasite on lignicolous fungi, but its host species is unknown, though collections have been noted on pyrenomycetes. It is found on dead, attached or fallen branches of broad-leaved trees.
The species is currently known from Brazil, Colombia, Costa Rica, Panama, Venezuela (as T. olens), and Jamaica (as T. olens).
References
fibulifera
Fungi described in 1895
Fungi of South America
Fungus species | Tremella fibulifera | Biology | 399 |
35,736,883 | https://en.wikipedia.org/wiki/Lidanserin | Lidanserin (INN; ZK-33,839) is a drug which acts as a combined 5-HT2A and α1-adrenergic receptor antagonist. It was developed as an antihypertensive agent but was never marketed.
See also
Glemanserin
Pruvanserin
Roluperidone
Volinanserin
Lenperone
Iloperidone
Ketanserin
References
5-HT2A antagonists
Abandoned drugs
Alpha-1 blockers
Antihypertensive agents
Ketones
4-Fluorophenyl compounds
Piperidines
Pyrrolidones | Lidanserin | Chemistry | 128 |
17,276,387 | https://en.wikipedia.org/wiki/5%2C10-Methenyltetrahydrofolate | 5,10-Methenyltetrahydrofolate (5,10-CH=THF) is a form of tetrahydrofolate that is an intermediate in metabolism. 5,10-CH=THF is a coenzyme that accepts and donates methenyl (CH=) groups.
It is produced from 5,10-methylenetetrahydrofolate by either a NAD+ dependent methylenetetrahydrofolate dehydrogenase, or a NADP+ dependent dehydrogenase. It can also be produced as an intermediate in histidine catabolism, by formiminotransferase cyclodeaminase, from 5-formiminotetrahydrofolate.
5,10-CH=THF is a substrate for methenyltetrahydrofolate cyclohydrolase, which converts it into 10-formyltetrahydrofolate.
Interactive pathway map
References
Folates
Coenzymes | 5,10-Methenyltetrahydrofolate | Chemistry | 212 |
43,349,629 | https://en.wikipedia.org/wiki/T%20Centauri | T Centauri is a variable star located in the far southern constellation Centaurus. It varies between magnitudes 5.56 and 8.44 over 181.4 days, making it intermittently visible to the naked eye. Pulsating between spectral classes K0:e and M4II:e, it has been classed as a semiregular variable, though Sebastian Otero of the American Association of Variable Star Observers has noted its curve more aligned with RV Tauri variable stars and has classified it as one.
The variability of the star was discovered in 1894 by Ernest Elliott Markwick, and independently by Williamina Fleming in 1895.
References
Centaurus
Semiregular variable stars
Centauri, T
Durchmusterung objects
119090
066825
K-type giants
5147
M-type bright giants
Asymptotic-giant-branch stars
RV Tauri variables | T Centauri | Astronomy | 182 |
64,936,230 | https://en.wikipedia.org/wiki/Ghost%20Gunner | Ghost Gunner is an American desktop CNC mill manufactured in Austin, Texas. It specializes in the making of firearms as well as finishing 0%–80% receivers. It was launched in October 2014 by Cody Wilson and the founders of Defense Distributed.
History
Ghost Gunner began as a limited series of CNC mills produced by Defense Distributed in a crowdfunding sale to its mailing list in October 2014. Spring 2015 shipments sold out almost immediately, and its first media reviewer noted the machine "...worked so well that it may signal a new era in the gun control debate, one where the barrier to legally building an untraceable, durable, and deadly semiautomatic rifle has reached an unprecedented low point in cost and skill."
Products
Since 2014, Ghost Gunner has issued 3 generations of its CNC mill, with the latest being the Ghost Gunner 3. The second version, named Ghost Gunner 2, is open-source hardware, allowing third party manufacturers to sell their own versions.
Ghost Gunner had sold over 6,000 units worldwide. The most recent version of the Ghost Gunner accepts "zero percent receivers", solid blocks of aluminum that are milled into a partial lower receiver of an AR-15 style rifle. These contrast with the 80 percent receivers first supported by the Ghost Gunner.
Political controversy
Ghost Gunner is cited by politicians and the media as the most popular machine tool used to produce ghost guns.
In May 2024, San Diego County, joined by The Giffords Law Center, brought suit against Ghost Gunner in California state court arguing that it violated a state law "blocking gun-making milling machines" in developing and selling the Coast Runner CNC.
References
External links
Ghost Gunner and "zero percent" Receivers
Companies established in 2014
Firearm construction
Numerical control
Open-source hardware | Ghost Gunner | Engineering | 363 |
76,347,341 | https://en.wikipedia.org/wiki/Temporary%20Cyber%20Operations%20Act | The Temporary Cyber Operations Act () is proposed Dutch legislation that will relax the restrictions on data interception and surveillance in the Intelligence and Security Services Act. It is intended to be used to defend against cyberattacks by other countries.
It was adopted by the Dutch House of Representatives in October 2023. The law was accepted by the Dutch Senate in March 2024.
References
Proposed laws
Computer law
Dutch legislation | Temporary Cyber Operations Act | Technology | 82 |
2,670,377 | https://en.wikipedia.org/wiki/Gamma%20Sculptoris | Gamma Sculptoris, Latinized from γ Sculptoris, is a single, orange-hued star in the constellation Sculptor. Based upon an annual parallax shift of 17.90 mas as seen from Earth, this star is located about 182 light years from the Sun. It is bright enough to be visible to the naked eye with an apparent visual magnitude of 4.41. It is moving away from the Sun with a radial velocity of +15.6 km/s.
This is an evolved K-type giant star with a stellar classification of K1 III. At the age of 2.47 billion years it is a red clump star on the horizontal branch, which means it is generating energy through helium fusion at its core. The star has 1.60 times the mass of the Sun and it has expanded to 12 times the Sun's radius. It is radiating 72 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,578 K.
References
K-type giants
Horizontal-branch stars
Sculptor (constellation)
Sculptoris, Gamma
CD-33 16476
9821
219784
115102
8863 | Gamma Sculptoris | Astronomy | 238 |
4,442,837 | https://en.wikipedia.org/wiki/NGC%204463 | NGC 4463 is an open cluster in the constellation Musca. The young planetary nebula He 2-86 is believed to be a member of the cluster.
References
External links
4463
Open clusters
Musca | NGC 4463 | Astronomy | 44 |
20,628,314 | https://en.wikipedia.org/wiki/Overhang%20%28architecture%29 | In architecture, an overhang is a protruding structure that may provide protection for lower levels. Overhangs on two sides of Pennsylvania Dutch barns protect doors, windows, and other lower-level structures. Overhangs on all four sides of barns and larger, older farmhouses are common in Swiss architecture. An overhanging eave is the edge of a roof, protruding outwards from the side of the building, generally to provide weather protection.
History
Overhangs are also common in medieval Indian architecture—especially Mughal architecture of the 16th–18th century, where they are called chhajja, often supported by ornate corbels and also seen in Hindu temple architecture. Later, these were adopted by Indo-Saracenic architecture, which flourished during the British Raj. Extensive overhangs were incorporated in early Buddhist architecture; were seen in early Buddhist temples; and later became part of Tibetan architecture, Chinese architecture, and eventually, traditional Japanese architecture, where they were a striking feature.
In late-medieval and Renaissance Europe, the upper stories of timber-framed houses often overhung the story below, the overhang being called a "jetty". This technique declined by the beginning of the 18th century as building with brick or stone became common.
By the 17th century, overhangs were one of the most common features of American colonial architecture in New England and Connecticut. This style featured an overhanging or jettied second story, which usually ran across the front of the house or sometimes around it; these dwellings were known as garrison houses. In the early 20th century, the style was adopted by Prairie School architecture and architects like Frank Lloyd Wright, thus making its way into modern architecture. An overhang may also refer to an awning or other protective elements.
Gallery
See also
Where eaves continue in the same plane over an ell (projection), this part of the roof is instead considered a catslide; if it extends across a full façade, the building may be a saltbox house.
Eaves
Five-foot way
Cantilever
References
Architectural elements
Roofs | Overhang (architecture) | Technology,Engineering | 417 |
11,421,607 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Me28S-Am982 | In molecular biology, Small nucleolar RNA Me28S-Am982 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Me28S-Am982 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. It is predicted that this family directs 2'-O-methylation of the 28S rRNA at position A-982.
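As an illustration of how such conserved motifs can be located computationally, the following Python sketch scans an RNA sequence for the C box (UGAUGA) and D box (CUGA) motifs named above. The sequence shown is hypothetical, not the actual Me28S-Am982 sequence.

def find_motif(seq, motif):
    """Return all 0-based start positions of motif in seq."""
    positions = []
    start = seq.find(motif)
    while start != -1:
        positions.append(start)
        start = seq.find(motif, start + 1)
    return positions

rna = "GGUGAUGAAACCUUGGCAAUGCUGAGG"  # hypothetical snoRNA fragment
print("C box (UGAUGA) at:", find_motif(rna, "UGAUGA"))  # [2]
print("D box (CUGA) at:", find_motif(rna, "CUGA"))      # [21]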
References
External links
Small nuclear RNA | Small nucleolar RNA Me28S-Am982 | Chemistry | 210 |
78,443,075 | https://en.wikipedia.org/wiki/Iscartrelvir | Iscartrelvir is an investigational new drug developed by the Westlake University for the treatment of COVID-19. It targets the SARS-CoV-2 3CL protease, which is crucial for the replication of the virus responsible for COVID-19.
See also
3CLpro-1
Rupintrivir
References
Amines
Anilines
Benzamides
Bromobenzene derivatives
Nitrobenzenes
Cyclohexanes
Isoquinolines | Iscartrelvir | Chemistry | 101 |
34,003,244 | https://en.wikipedia.org/wiki/Olog | The theory of ologs is an attempt to provide a rigorous mathematical framework for knowledge representation, construction of scientific models and data storage using category theory, linguistic and graphical tools. Ologs were introduced in 2012 by David Spivak and Robert Kent.
Etymology
The term "olog" is short for "ontology log". "Ontology" derives from onto-, from the Greek ὤν, ὄντος "being; that which is", present participle of the verb εἰμί "be", and -λογία, -logia: science, study, theory.
Mathematical formalism
An olog for a given domain is a category whose objects are boxes labeled with phrases (more specifically, singular indefinite noun phrases) relevant to the domain, and whose morphisms are directed arrows between the boxes, labeled with verb phrases also relevant to the domain. These noun and verb phrases combine to form sentences that express relationships between objects in the domain.
In every olog, the objects exist within a target category. Unless otherwise specified, the target category is taken to be Set, the category of sets and functions.

The boxes in the olog's diagram represent objects of Set. For example, the box containing the phrase "an amino acid" represents the set of all amino acids, and the box containing the phrase "a side chain" represents the set of all side chains. The arrow labeled "has" that points from "an amino acid" to "a side chain" represents the function that maps each amino acid to its unique side chain.
Another target category that can be used is the Kleisli category of the power set monad P. Given a set A, P(A) is then the power set of A. The natural transformation η maps an element a to the singleton {a}, and the natural transformation μ maps a set of sets to its union. The Kleisli category is the category with the objects matching those in Set, and morphisms that establish binary relations. Given a morphism f : A → P(B), and given a ∈ A and b ∈ B, we define the morphism R by saying that aRb whenever b ∈ f(a). The verb phrases used with this target category would need to make sense with objects that are subsets: for example, "is related to" or "is greater than".
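A minimal Python sketch of this construction, under the reading just given: eta plays the role of the unit η, mu plays the role of the join μ, and Kleisli composition chains relation-like morphisms A → P(B). The example relations are invented for illustration.

def eta(a):
    """Unit: maps an element to the singleton {a}."""
    return frozenset([a])

def mu(sets):
    """Join: maps a collection of sets to its union."""
    return frozenset().union(*sets)

def kleisli_compose(f, g):
    """Given f : A -> P(B) and g : B -> P(C), return their composite A -> P(C)."""
    return lambda a: mu(g(b) for b in f(a))

# A Kleisli morphism is exactly a binary relation: aRb holds whenever b is in f(a).
f = lambda a: frozenset({a, a + 1})  # an invented relation on integers
g = lambda b: frozenset({2 * b})
print(sorted(kleisli_compose(f, g)(3)))  # [6, 8]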
Another possible target category is the Kleisli category of probability distributions, called the Giry monad. This provides a generalization of Markov decision processes.
Ologs and databases
An olog C can also be viewed as a database schema. Every box (object of C) in the olog is a table T and the arrows (morphisms) emanating from the box are columns in T. The assignment of a particular instance to an object of C is done through a functor I : C → Set. In the example above, the box "an amino acid" will be represented as a table whose number of rows is equal to the number of types of amino acids and whose number of columns is three, one column for each arrow emanating from that box.
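The following Python sketch illustrates this reading with plain dictionaries: each box becomes a table, each outgoing arrow becomes a column, and an instance functor I fills the tables with rows. Only the "has" arrow from the amino acid example is shown, and the rows are illustrative.

# Schema: box -> list of columns (one per outgoing arrow).
schema = {
    "an amino acid": ["has (a side chain)"],
    "a side chain": [],
}

# Instance: the functor I assigns a set of rows to each box.
instance = {
    "an amino acid": {
        "glycine": {"has (a side chain)": "a hydrogen atom"},
        "alanine": {"has (a side chain)": "a methyl group"},
    },
    "a side chain": {"a hydrogen atom": {}, "a methyl group": {}},
}

# "has" is a function: each amino acid maps to exactly one existing side chain.
for row in instance["an amino acid"].values():
    assert row["has (a side chain)"] in instance["a side chain"]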
Relations between ologs
"Communication" between different ologs which in practice can be communication between different models or world-views is done using functors. Spivak coins the notions of a 'meaningful' and 'strongly meaningful' functors. Let and be two ologs, , functors (see the section on ologs and databases) and a functor. is called a schema mapping. We say that a is meaningful if there exists a natural transformation (the pullback of J by F).
Taking as an example and as two different scientific models, the functor is meaningful if "predictions", which are objects in , made by the first model can be translated to the second model .
We say that is strongly meaningful if given an object we have . This equality is equivalent to requiring to be a natural isomorphism.
Sometimes it will be hard to find a meaningful functor from to . In such a case we may try to define a new olog which represents the common ground of and and find meaningful functors and .
If communication between ologs is limited to a two-way communication as described above then we may think of a collection of ologs as nodes of a graph and of the edges as functors connecting the ologs. If a simultaneous communication between more than two ologs is allowed then the graph becomes a symmetric simplicial complex.
Rules of good practice
Spivak provides some rules of good practice for writing an olog whose morphisms have a functional nature (see the first example in the section Mathematical formalism). The text in a box should adhere to the following rules:
begin with the word "a" or "an". (Example: "an amino acid").
refer to a distinction made and recognizable by the olog's author.
refer to a distinction for which there is a well-defined functor to Set, i.e. an instance can be documented. (Example: there is a set of all amino acids).
declare all variables in a compound structure. (Example: instead of writing in a box "a man and a woman", write "a man x and a woman y" or "a pair (x, y) where x is a man and y is a woman").
The first three rules ensure that the objects (the boxes) defined by the olog's author are well-defined sets. The fourth rule improves the labeling of arrows in an olog.
Applications
This concept was used in a paper published in the December 2011 issue of BioNanoScience by David Spivak and others to establish a scientific analogy between spider silk and musical composition.
See also
Hypergraph
Modeling language
Ontology language
Operad theory
Orgology
Universal algebra
Universal logic
References
External links
Category theory
Ontology (information science) | Olog | Mathematics | 1,168 |
5,499,512 | https://en.wikipedia.org/wiki/Matching%20law | In operant conditioning, the matching law is a quantitative relationship that holds between the relative rates of response and the relative rates of reinforcement in concurrent schedules of reinforcement. For example, if two response alternatives A and B are offered to an organism, the ratio of response rates to A and B equals the ratio of reinforcements yielded by each response. This law applies fairly well when non-human subjects are exposed to concurrent variable interval schedules (but see below); its applicability in other situations is less clear, depending on the assumptions made and the details of the experimental situation. The generality of applicability of the matching law is subject of current debate.
The matching law can be applied to situations involving a single response maintained by a single schedule of reinforcement if one assumes that alternative responses are always available to an organism, maintained by uncontrolled "extraneous" reinforcers. For example, an animal pressing a lever for food might pause for a drink of water.
The matching law was first formulated by R.J. Herrnstein (1961) following an experiment with pigeons on concurrent variable interval schedules. Pigeons were presented with two buttons in a Skinner box, each of which led to varying rates of food reward. The pigeons tended to peck the button that yielded the greater food reward more often than the other button, and the ratio of their response rates on the two buttons matched the ratio of their reward rates on the two buttons.
Mathematical statement
If R1 and R2 are the rates of responses on two schedules that yield obtained (as distinct from programmed) rates of reinforcement Rf1 and Rf2, the strict matching law holds that the relative response rate R1 / (R1 + R2) matches, that is, equals, the relative reinforcement rate Rf1 / (Rf1 + Rf2). That is,

R1 / (R1 + R2) = Rf1 / (Rf1 + Rf2)

This relationship can also be stated in terms of response and reinforcement ratios:

R1 / R2 = Rf1 / Rf2
Alternatively stated, there exists a constant k for an individual animal such that Ri = k · Rfi for any schedule i. That is, for an individual animal, the rate of response is proportional to the rate of reinforcement for any task.
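A worked example with hypothetical reinforcement rates, applying the equations above (a Python sketch):

# Hypothetical obtained reinforcement rates on two concurrent schedules.
rf1, rf2 = 40.0, 20.0

# Strict matching predicts the relative response rate ...
print(rf1 / (rf1 + rf2))  # R1 / (R1 + R2) = 0.666...
# ... and the response ratio.
print(rf1 / rf2)          # R1 / R2 = 2.0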
Deviations from matching, and the generalized matching law
A recent review by McDowell reveals that Herrnstein's original equation fails to accurately describe concurrent-schedule data under a substantial range of conditions. Three deviations from matching have been observed: undermatching, overmatching, and bias. Undermatching means that the response proportions are less extreme than the law predicts. Undermatching can happen if subjects switch too often between the two response options, a tendency that may be strengthened by reinforcers that happen to occur just after a subject switches. A changeover delay may be used to reduce the effectiveness of such post-switch reinforcers; typically, this is a 1.5 second interval after a switch during which no reinforcer is presented. Overmatching is the opposite of undermatching, and is less common. Here the subject's response proportions are more extreme than the reinforcement proportions. Overmatching may occur if there is a penalty for switching. A final deviation is bias, which occurs when subjects spend more time on one alternative than the matching equation predicts. This may happen if a subject prefers a certain environment, area in a laboratory, or method of responding.
These failures of the matching law have led to the development of the "generalized matching law", which has parameters that reflect the deviations just described. This law is a power function generalization of strict matching (Baum, 1974), and it has been found to fit a wide variety of matching data:

R1 / R2 = b (Rf1 / Rf2)^s

This is more conveniently expressed in logarithmic form:

log(R1 / R2) = log(b) + s · log(Rf1 / Rf2)

The constants b and s are referred to as "bias" and "sensitivity" respectively. "Bias" reflects any tendency the subject may have to prefer one response over the other. "Sensitivity" reflects the degree to which the reinforcement ratio actually impacts the choice ratio. When this equation is plotted, the result is a straight line; sensitivity changes the slope and bias changes the intercept of this line.
The generalized matching law accounts for high proportions of the variance in most experiments on concurrent variable interval schedules in non-humans. Values of b often depend on details of the experiment set up, but values of s are consistently found to be around 0.8, whereas the value required for strict matching would be 1.0.
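The log form makes estimation straightforward: plotting log response ratios against log reinforcement ratios and fitting a line recovers s as the slope and log(b) as the intercept. A Python sketch with invented data (chosen so that s comes out near the typical 0.8):

import math

rf_ratios = [0.25, 0.5, 1.0, 2.0, 4.0]      # Rf1/Rf2 in five conditions (invented)
r_ratios = [0.36, 0.60, 1.05, 1.80, 3.10]   # observed R1/R2 (invented)

xs = [math.log(v) for v in rf_ratios]
ys = [math.log(v) for v in r_ratios]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least squares: slope = sensitivity s, intercept = log(bias b).
s = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = math.exp(my - s * mx)
print(f"sensitivity s = {s:.2f}, bias b = {b:.2f}")  # s = 0.78, b = 1.05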
The concurrent VI VI choice situation involves strong negative feedback: the longer the subject refrains from responding to an alternative, the higher its probability of payoff there, so switching is encouraged.
Processes underlying the distribution of responses
There are three ideas on how humans and animals maximize reinforcement: molecular maximizing, molar maximizing, and melioration.
molecular maximizing: organisms always choose whichever response alternative is most likely to be reinforced at the time.
molar maximizing: organisms distribute their responses among various alternatives so as to maximize the amount of reinforcement they earn over the long run.
melioration: literally meaning to "make better"; organisms respond so as to improve the local rates of reinforcement for response alternatives. Behavior keeps shifting towards the better of two alternatives until the ratios are equal, which produces matching.
Theoretical importance
The matching law is theoretically important for several reasons. First, it offers a simple quantification of behavior that can be applied to a number of situations. Second, it offers a lawful account of choice. As Herrnstein (1970) expressed it, under an operant analysis, choice is nothing but behavior set into the context of other behavior. The matching law thus challenges the idea that choice is an unpredictable outcome of free will, just as B.F. Skinner and others have argued. However, this challenge becomes serious only if it applies to human behavior as well as to the behavior of pigeons and other animals. When human participants perform under concurrent schedules of reinforcement, matching has been observed in some experiments, but wide deviations from matching have been found in others. Finally, if nothing else, the matching law is important because it has generated a great deal of research that has widened our understanding of operant control.
Relevance to psychopathology
The matching law, and the generalized matching law, have helped behavior analysts to understand some complex human behaviors, especially the behavior of children in certain conflict situations. James Snyder and colleagues have found that response matching predicts the use of conflict tactics by children and parents during conflict bouts. This matching rate predicts future arrests. Even children's use of deviant talk appears to follow a matching pattern.
Notes
References
Baum, W.M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22, 231–42.
Bradshaw, C.M.; Szabadi, E. & Bevan, P. (1976). Behavior of humans in variable-interval schedules of reinforcement Journal of the Experimental Analysis of Behavior, 26, 135–41.
Davison, M. & McCarthy, D. (1988). The matching law: A research review. Hillsdale, NJ: Erlbaum.
Herrnstein, R.J. (1961). Relative and absolute strength of responses as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behaviour, 4, 267–72.
Herrnstein, R.J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–66.
Horne, P.J. & Lowe, C.F. (1993). Determinants of human performance on concurrent schedules. Journal of the Experimental Analysis of Behavior, 59, 29–60. .
Poling, A., Edwards, T. L., Weeden, M., & Foster, T. (2011). The matching law. Psychological Record, 61(2), 313-322.
Simon, C., & Baum, W. M. (2017). Allocation of Speech in Conversation. Journal of Experimental Analysis of Behavior, 107.
Behavioral concepts
Behaviorism | Matching law | Biology | 1,608 |
248,948 | https://en.wikipedia.org/wiki/Section%20sign | The section sign (§) is a typographical character for referencing individually numbered sections of a document; it is frequently used when citing sections of a legal code. It is also known as the section symbol, section mark, double-s, or silcrow. In other languages it may be called the "paragraph symbol" (for example, ).
Use
The section sign is often used when referring to a specific section of a legal code. For example, in Bluebook style, "Title 16 of the United States Code Section 580p" becomes "16 U.S.C. §580p". The section sign is frequently used along with the pilcrow (or paragraph sign), , to reference a specific paragraph within a section of a document.
While § is usually read in spoken English as the word "section", many other languages use the word "paragraph" exclusively to refer to a section of a document (especially of legal text), and use other words to describe a paragraph in the English sense. Consequently, in those cases "§" may be read as "paragraph", and may occasionally also be described as a "paragraph sign", but this is a description of its usage, not a formal name.
When duplicated, as §§, it is read as the plural "sections". For example, "§§13–21" would be read as "sections 13 through 21", much as "pp." (pages) is the plural of "p.", meaning page.
It may also be used with footnotes when the asterisk (*), dagger (†), and double dagger (‡) have already been used on a given page.
It is common practice to follow the section sign with a non-breaking space so that the symbol is kept with the section number being cited.
The section sign is itself sometimes a symbol of the justice system, in much the same way as the Rod of Asclepius is used to represent medicine. For example, Austrian courts use the symbol in their logo.
Unicode
The section sign appeared in several early computer text encodings. It was placed at 0xA7 (167) in ISO-8859-1, a position that was inherited by Unicode as code point U+00A7.
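For illustration, the code point and its common encodings can be checked directly (a Python sketch):

ch = "\u00a7"                      # SECTION SIGN
print(ch, ord(ch))                 # § 167
print(ch.encode("utf-8"))          # b'\xc2\xa7'
print(ch.encode("latin-1"))        # b'\xa7' (ISO-8859-1 position 0xA7)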
Origin
Two possible origins are often posited for the section sign: most probably, that it is a ligature formed by the combination of two S glyphs (from the Latin signum sectiōnis). Some scholars, however, are skeptical of this explanation.
Others have theorized that it is an adaptation of the Ancient Greek (paragraphos), a catch-all term for a class of punctuation marks used by scribes with diverse shapes and intended uses.
The modern form of the sign, with its modern meaning, has been in use since the 13th century.
In literature
In Jaroslav Hašek's The Good Soldier Švejk, the symbol is used repeatedly to mean "bureaucracy". In his English translation of 1930, Paul Selver translated it as "red tape".
See also
Scilicet ("it may be known") is sometimes rendered using a § mark instead of "viz."
Explanatory footnotes
References
External links
Punctuation
Typographical symbols | Section sign | Mathematics | 646 |