| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
2,239,822 | https://en.wikipedia.org/wiki/Robert%20Kraft%20%28astronomer%29 | Robert Paul Kraft (June 16, 1927 – May 26, 2015) was an American astronomer. He performed pioneering work on Cepheid variables, stellar rotation, novae, and the chemical evolution of the Milky Way. His name is also associated with the Kraft break: the abrupt change in the average rotation rate of main sequence stars around spectral type F8.
Career
Kraft served as director of the Lick Observatory (1981–1991), president of the American Astronomical Society (1974–1976), and president of the International Astronomical Union (1997–2000).
He received his B.S. from the University of Washington in 1947, his M.S. in mathematics from the University of Washington in 1949, and his PhD from the University of California, Berkeley. He died in 2015.
Honors
Awards
Helen B. Warner Prize for Astronomy (1962)
Henry Norris Russell Lectureship (1995)
Bruce Medal (2005)
National Academy of Sciences
Named after him
Asteroid 3712 Kraft
References
Further reading
External links
Sandra Faber, "Robert P. Kraft", Biographical Memoirs of the National Academy of Sciences (2022)
1927 births
2015 deaths
Members of the United States National Academy of Sciences
20th-century American astronomers
21st-century American astronomers
Scientists from Seattle
University of Washington alumni
University of California, Berkeley alumni
Presidents of the International Astronomical Union | Robert Kraft (astronomer) | [
"Astronomy"
] | 264 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
2,239,896 | https://en.wikipedia.org/wiki/Renzapride | Renzapride is a prokinetic agent and antiemetic which acts as a full 5-HT4 agonist and partial 5-HT3 antagonist. It also functions as a 5-HT2B antagonist and has some affinity for the 5-HT2A and 5-HT2C receptors.
Renzapride was being developed by Alizyme plc of the United Kingdom. In May 2016, EndoLogic LLC, a US-based pharmaceutical and medical device company, acquired the US and worldwide patent rights to Renzapride.
EndoLogic confirmed the cardiac safety of renzapride through a “Thorough QTc” study and in 2019 sold the rights to Atlantic Healthcare plc, a specialist pharmaceutical company.
Atlantic Healthcare is focusing on the development of renzapride for the management of gastrointestinal (GI) motility in a number of rare diseases, including systemic scleroderma and cystic fibrosis, both of which are associated with chronic GI motility problems and for which there are no approved therapies.
Clinical trials
In nine diabetic patients with autonomic neuropathy, renzapride reduced the mean lag phase of gastric emptying by 20–26 min at all doses (p < 0.01).
In Phase 2a studies in subjects with constipation, renzapride was shown to accelerate colonic transit vs placebo (p = 0.016; p = 0.009) (Ref: ATL 1251/001/CL) as well as to increase daily stool frequency (p < 0.005) (Ref: ATL 1251/025/CL).
Renzapride has been assessed in Phase II clinical trials with a total of 578 patients with constipation-predominant irritable bowel syndrome (IBS-C). As compared with placebo, the treatment groups reported better relief of their overall symptoms, namely abdominal pain and discomfort, increase in the number of pain free days, improved stool frequency, consistency and ease of passage of bowel movements. There were no significant differences in the reported Serious Adverse Events between treatment and placebo groups.
In the largest of these Phase II trials, 510 subjects with IBS-C received either 1, 2 or 4 mg QD renzapride, or placebo QD for 12 weeks. The Weekly responder rate based on subject's assessment of whether they had relief from abdominal pain and/or discomfort associated with IBS during weeks 5-12 was 56% (renzapride 4 mg) vs 49% (placebo). For females the treatment effect was larger, 61% (renzapride 4 mg) vs 49% (placebo). Statistically significant effects in favour of renzapride were observed for improvements in stool consistency and increased bowel movements.
In the Phase III clinical trial in IBS-C, 1798 female patients received renzapride 2 mg twice daily, renzapride 4 mg once daily, or placebo, for 12 weeks. The mean number of months with relief of overall symptoms was 0.6, 0.55 and 0.44 for renzapride 2 mg twice a day, renzapride 4 mg once a day and placebo, respectively, with both renzapride doses being statistically superior to placebo (p = 0.004 and p = 0.027, respectively). On responder analysis, the proportion of responders was 33.2%, 29.8%, and 24.3% for renzapride 2 mg twice a day, renzapride 4 mg once a day and placebo, respectively.
The 8.9% delta between renzapride 2 mg twice daily and placebo compares favourably with other FDA approved therapies (Ford ).
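The quoted delta follows directly from the responder rates above (a trivial arithmetic check; the dictionary labels are illustrative):

```python
# Responder rates (%) reported for the Phase III trial described above.
rates = {"renzapride 2 mg bid": 33.2, "renzapride 4 mg qd": 29.8, "placebo": 24.3}

# The "delta" quoted in the text is the 2 mg bid arm minus placebo.
delta_2mg = round(rates["renzapride 2 mg bid"] - rates["placebo"], 1)
assert delta_2mg == 8.9
```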
References
5-HT3 antagonists
Abandoned drugs
Anilines
Benzamides
Chloroarenes
Nitrogen heterocycles
Phenol ethers
Heterocyclic compounds with 2 rings | Renzapride | [
"Chemistry"
] | 831 | [
"Drug safety",
"Abandoned drugs"
] |
2,239,927 | https://en.wikipedia.org/wiki/Asymmetric%20federalism | Asymmetric federalism or asymmetrical federalism is found in a federation or other types of union in which different constituent states possess different powers: one or more of the substates has considerably more autonomy than the other substates, although they have the same constitutional status. This is in contrast to symmetric federalism, where no distinction is made between constituent states. As a result, it is frequently proposed as a solution to the dissatisfaction that arises when one or more constituent units feel significantly different needs from the others, as the result of an ethnic, linguistic or cultural difference.
The difference between an asymmetric federation and a federacy is indistinct. A federacy is essentially an extreme case of an asymmetric federation, either due to large differences in the level of autonomy or to the rigidity of the constitutional arrangements. An asymmetric federation, however, must have a federal constitution, and all states in the federation have the same formal status ("state"), while in a federacy the autonomous substate has a different status ("autonomous region").
Types
Asymmetrical federalism can be divided into two types of agreements or arrangements. The first type resolves differences in legislative powers, representation in central institutions, and rights and obligations that are set in the constitution. This type of asymmetry can be called de jure asymmetry (Brown 2). The second type reflects agreements which come out of national policy, opting out, and (depending on one's definition of the term) bilateral and ad hoc deals with specific provinces, none of which are entrenched in the constitution. This type of asymmetry is known as de facto asymmetry. The Canadian federation uses a combination of these, which make up its asymmetrical character.
National examples
Canada
The Constitution of Canada is broadly symmetric but contains certain specific sections that apply only to certain provinces. In practice, a degree of asymmetry is created as a result of the evolution of the Canadian federal experiment, individual federal-provincial agreements, and judicial interpretation. Asymmetrical federalism has been much discussed as a formula for stability in Canada, meeting the aspirations of French-speaking Quebec for control over its cultural and social life without removing it from the national federation, where it coexists with nine largely English-speaking provinces.
The most prominent example of asymmetric federalism in Canada is the constitutional requirement that three Supreme Court justices must come from Quebec. The nine other provinces are each entitled to fair representation in the Supreme Court, but their entitlement is based on convention rather than enshrined in the constitution.
A recent example of asymmetry in the Canadian federation can be found in the terms of the September 2004 federal-provincial-territorial agreement on health care and the financing thereof. The Government of Quebec supported the broader agreement but insisted on a separate communiqué in which it was specified, among other things, that Quebec will apply its own wait time reduction plan in accordance with the objectives, standards and criteria established by the relevant Quebec authorities; that the Government of Quebec will report to Quebecers on progress in achieving its objectives, and will use comparable indicators, mutually agreed to with other governments; and that funding made available by the Government of Canada will be used by the Government of Quebec to implement its own plan for renewing Quebec's health system.
For example, Quebec operates its own pension plan, while the other nine provinces are covered by the federal/provincial Canada Pension Plan. Quebec has extensive authority over employment and immigration issues within its borders, matters that are handled by the federal government in all the other provinces.
Such an arrangement has led to criticism in the English-speaking provinces, where there is fear that Quebec is enjoying favouritism in the federal system. It, however, provides a useful lever for those who want to decentralize the structure as a whole, transferring more powers from the centre to the provinces overall, a trend that dominated Canadian politics for decades.
Czechoslovakia
The Second Czechoslovak Republic (1938–1939) was divided into five lands, with the land of Slovakia given a higher degree of autonomy than the other lands, often regarded as a de facto federalist devolution. From 1945 to 1968, Czechoslovakia operated under an asymmetric federal model, and the Slovak National Council appointed a Chairman of the Board of Trustees, de facto the Prime Minister of Slovakia. In 1968 asymmetric federalism was officially abandoned, and the constitution was changed to a federal republic with the creation of the Slovak Socialist Republic and the Czech Socialist Republic with a new Czech National Council, but the ruling Communist Party of Czechoslovakia retained an asymmetrical partisan model with only a Communist Party of Slovakia and no Czech Communist Party until 1990.
Germany
The Basic Law, Germany's constitution, is broadly symmetric with some exceptions. Article 138 provides that changes "rules governing the notarial profession" in a southern German state require the consent of the state legislatures. Article 141 exempts Bremen from the requirement that German schools provide religious education.
India
The Government of India (referred to as the Union Government or Central Government) was established by the Constitution of India, and is the governing authority of a federal union of 28 states and 8 union territories.
The governance of India is based on a tiered federal system, wherein the Constitution of India assigns the subjects on which each tier of government exercises powers.
An intrinsic characteristic of Indian federalism is that it is designed to be asymmetric where necessary. Until 2019, Article 370 made special provisions for the state of Jammu and Kashmir as per its Instrument of Accession. Article 371-371J make special provisions for the states of Andhra Pradesh, Arunachal Pradesh, Assam, Gujarat, Goa, Karnataka, Mizoram, Manipur, Maharashtra, Nagaland, Sikkim, and Telangana.
Indonesia
In Indonesia, although the form of state is unitary, four regions were given the special status of autonomy (keistimewaan) as provinces: Aceh, Jakarta, Yogyakarta and 5 provinces in West Papua. These regions were given special statuses based on the constitutional laws of special autonomy (Undang-Undang Keistimewaan Daerah) with each having their own degree of autonomy:
Aceh exercises Sharia law with the Aceh traditional system of government instead of using the unitary system the other provinces have. Aceh was also granted the rights over the participation of regional parties in their province, unlike other provinces.
Jakarta is the capital city and, unlike other cities in Indonesia which were granted a second-tier of country subdivision or the same degree as a regency, exercises the autonomous power of a first-tier level of country subdivision.
Yogyakarta was granted special status over the exercise and involvement of the royal family of Keraton Jogjakarta and Kadipaten Pakualaman, where the Sultan of Jogjakarta rules the province, taking the place of a governor in other provinces. Acting as his deputy is the Adipati of Pakualam. The two rule as the executive leaders of Jogjakarta.
Papua was granted a special status over the exercise of legislative power. Papua has a separate legislative council, the MRP (Majelis Rakyat Papua/Papuan People's Assembly), which has legislative power over Papua inside the People's Consultative Assembly, the Legislative Council of Indonesia. However, the status of Papua has been criticized due to intervention from Jakarta. International human rights activists have called Papua a 'fake autonomous province' due to the lack of real autonomy in the field.
Italy
In Italy, five regions (namely Sardinia, Sicily, Trentino-Alto Adige/Südtirol, Aosta Valley and Friuli-Venezia Giulia) have been granted special status of autonomy. Their statutes are constitutional laws approved by the Italian Parliament, granting them relatively broad powers in relation to legislation and administration, but also significant financial autonomy. They keep between 60% (Friuli-Venezia Giulia) and 100% (Sicily) of all taxes and decide how to spend the revenues. These regions became autonomous in order to take into account that they host linguistic minorities (German-speaking in Trentino-Alto Adige/Südtirol, Arpitan-speaking in Aosta Valley, Friulian and Slovenian-speaking in Friuli-Venezia Giulia) or are geographically isolated (the two islands, but also Friuli-Venezia Giulia).
Malaysia
Malaysia is a federation of 13 states formed in 1963 by the merger of the independent Federation of Malaya and the formerly British colonies of Singapore, Sabah, and Sarawak.
Under the terms of the federation, Sabah and Sarawak are granted significant autonomy in excess of that exercised by the 11 Malayan states, most notably the control over immigration to these two states.
Singapore was a part of Malaysia until 1965. During its time as a state of Malaysia, Singapore enjoyed autonomy in setting labour and education policies.
Russia
The Russian Federation consists of 83 federal subjects, all equal in federal matters but divided into six types that enjoy more or less different levels of autonomy.
A republic is the most autonomous type of subject. Each has its own constitution and its own official language (alongside Russian, which is official throughout the federation) and is meant to be home to a specific ethnic minority. An autonomous okrug is also home to a substantial ethnic minority, but is not allowed to have its own constitution and official language. Oblasts, krais, and the autonomous oblast are subjects without a substantial ethnic minority, with rights otherwise broadly equal to those of an autonomous okrug. A federal city is a major city that functions as a separate region.
Previously, the Soviet Union often demonstrated traits of asymmetric federalism, including defining the Russian SFSR's constitution inside of the 1936 Soviet Constitution, subnational asymmetric federalism (especially within the Russian SFSR but also in other SSRs), and giving the Russian SFSR the most representation in the Supreme Soviet, particularly the Soviet of Nationalities, where each autonomous area of the Russian SFSR was granted additional representation. At the same time, the Russian SFSR did not have its own Communist Party branch, unlike the other Soviet republics, whose party First Secretaries served as de facto heads of state.
Spain
In Spain, which is either called an "imperfect federation" or a "federation in all but its name", the central government has granted different levels of autonomy to its substates, considerably more to the autonomous communities of Catalonia, the Basque Country, Valencia, Andalusia, Navarre and Galicia and considerably less to the others, out of respect for nationalist sentiment and rights these regions have enjoyed historically.
United Kingdom
In the United Kingdom, England has no self-government and is ruled directly by the British Parliament, but Northern Ireland, Scotland, and Wales have varying degrees of autonomy. Many people, such as the Yorkshire Party, believe that this asymmetrical devolution of powers (most notably to the Scottish Parliament and Welsh Parliament) is unfair, giving rise to the ongoing West Lothian question. According to its constitution, the United Kingdom is a unitary state, not a federal one, and the British Parliament remains sovereign, though some groups such as the Federal Union seek to change this, and Winston Churchill was famously in favour of a British federation.
References
Citations
Sources
Federalism
Fed
Canadian political phrases | Asymmetric federalism | [
"Physics"
] | 2,312 | [
"Symmetry",
"Asymmetry"
] |
2,240,299 | https://en.wikipedia.org/wiki/Freiman%27s%20theorem | In additive combinatorics, a discipline within mathematics, Freiman's theorem is a central result which indicates the approximate structure of sets whose sumset is small. It roughly states that if is small, then can be contained in a small generalized arithmetic progression.
Statement
If $A$ is a finite subset of $\mathbb{Z}$ with $|A+A| \le K|A|$, then $A$ is contained in a generalized arithmetic progression of dimension at most $d(K)$ and size at most $f(K)|A|$, where $d(K)$ and $f(K)$ are constants depending only on $K$.
Examples
For a finite set $A$ of integers, it is always true that

$|A + A| \ge 2|A| - 1,$

with equality precisely when $A$ is an arithmetic progression.

More generally, suppose $A$ is a subset of a finite proper generalized arithmetic progression $P$ of dimension $d$ such that $|P| \le C|A|$ for some real $C \ge 1$. Then $|P + P| \le 2^d |P|$, so that

$|A + A| \le |P + P| \le 2^d |P| \le C 2^d |A|.$
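The sumset lower bound $|A+A| \ge 2|A| - 1$, with equality exactly for arithmetic progressions, is easy to check numerically. A small Python sketch (the `sumset` helper is defined here, not taken from any library):

```python
def sumset(A, B):
    """Return the sumset A + B = {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

# An arithmetic progression attains the lower bound |A+A| = 2|A| - 1.
ap = set(range(3, 30, 4))            # {3, 7, 11, ..., 27}, 7 elements
assert len(sumset(ap, ap)) == 2 * len(ap) - 1

# A "generic" set exceeds the bound.
generic = {0, 1, 4, 9, 16}
assert len(sumset(generic, generic)) > 2 * len(generic) - 1
```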
History of Freiman's theorem
This result is due to Gregory Freiman (1964, 1966). Much interest in it, and applications, stemmed from a new proof by Imre Z. Ruzsa (1992,1994). Mei-Chu Chang proved new polynomial estimates for the size of arithmetic progressions arising in the theorem in 2002. The current best bounds were provided by Tom Sanders.
Tools used in the proof
The proof presented here follows the proof in Yufei Zhao's lecture notes.
Plünnecke–Ruzsa inequality
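The heading refers to the following standard statement, reproduced here in a commonly used form (as in Zhao's lecture notes) since it is invoked later in the proof:

```latex
\textbf{Pl\"unnecke--Ruzsa inequality.}\quad
\text{If } A \text{ is a finite subset of an abelian group with }
|A+A| \le K|A|, \text{ then for all non-negative integers } m, n,
\qquad |mA - nA| \le K^{m+n}|A|.
```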
Ruzsa covering lemma
The Ruzsa covering lemma states the following:
Let $A$ and $S$ be finite subsets of an abelian group with $S$ nonempty, and let $K$ be a positive real number. Then if $|A + S| \le K|S|$, there is a subset $T$ of $A$ with at most $K$ elements such that $A \subseteq T + S - S$.
This lemma provides a bound on how many copies of one needs to cover , hence the name. The proof is essentially a greedy algorithm:
Proof: Let $T$ be a maximal subset of $A$ such that the sets $t + S$ for $t \in T$ are all disjoint. Then $|T + S| = |T| \cdot |S|$, and also $|T + S| \le |A + S| \le K|S|$, so $|T| \le K$. Furthermore, for any $a \in A$, there is some $t \in T$ such that $t + S$ intersects $a + S$, as otherwise adding $a$ to $T$ contradicts the maximality of $T$. Thus $a \in T + S - S$, so $A \subseteq T + S - S$.
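The greedy argument in this proof translates directly into code. A hedged Python sketch (`ruzsa_cover` is an illustrative name; it returns the set T built by the greedy step and then verifies the covering property):

```python
def ruzsa_cover(A, S):
    """Greedily build T, a subset of A whose translates t + S are pairwise
    disjoint; the covering-lemma proof guarantees A is contained in T + S - S
    and that |T| <= |A + S| / |S|."""
    T, covered = [], set()
    for a in sorted(A):
        translate = {a + s for s in S}
        if translate.isdisjoint(covered):
            T.append(a)
            covered |= translate
    return T

A = {0, 1, 2, 10, 11, 12}
S = {0, 1, 2}
T = ruzsa_cover(A, S)

S_minus_S = {s - t for s in S for t in S}
cover = {t + d for t in T for d in S_minus_S}          # the set T + S - S
assert A <= cover                                       # A is covered
assert len(T) * len(S) <= len({a + s for a in A for s in S})  # |T| <= |A+S|/|S|
```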
Freiman homomorphisms and the Ruzsa modeling lemma
Let $s \ge 2$ be a positive integer, and let $\Gamma$ and $\Gamma'$ be abelian groups. Let $A \subseteq \Gamma$ and $B \subseteq \Gamma'$. A map $\varphi \colon A \to B$ is a Freiman $s$-homomorphism if

$\varphi(a_1) + \cdots + \varphi(a_s) = \varphi(a_1') + \cdots + \varphi(a_s')$

whenever $a_1 + \cdots + a_s = a_1' + \cdots + a_s'$ for any $a_1, \ldots, a_s, a_1', \ldots, a_s' \in A$.

If in addition $\varphi$ is a bijection and $\varphi^{-1} \colon B \to A$ is a Freiman $s$-homomorphism, then $\varphi$ is a Freiman $s$-isomorphism.

If $\varphi$ is a Freiman $s$-homomorphism, then $\varphi$ is a Freiman $t$-homomorphism for any positive integer $t$ such that $2 \le t \le s$.
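The defining condition, that equal s-fold sums must map to equal s-fold sums, can be checked by brute force on small sets. A Python sketch (the helper `is_freiman_hom` is hypothetical, and the check is exponential in s, so it is only for toy examples):

```python
from itertools import product

def is_freiman_hom(phi, A, s):
    """Brute-force check of the Freiman s-homomorphism condition:
    whenever a_1 + ... + a_s = a'_1 + ... + a'_s with all terms in A,
    the images under phi must satisfy the same relation."""
    for xs, ys in product(product(A, repeat=s), repeat=2):
        if sum(xs) == sum(ys) and sum(phi(x) for x in xs) != sum(phi(y) for y in ys):
            return False
    return True

A = [0, 1, 2, 3]
assert is_freiman_hom(lambda x: 5 * x + 2, A, 2)   # affine maps always qualify
# Squaring fails: 0 + 3 = 1 + 2 but 0^2 + 3^2 != 1^2 + 2^2.
assert not is_freiman_hom(lambda x: x * x, A, 2)
```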
Then the Ruzsa modeling lemma states the following:
Let $A$ be a finite set of integers, and let $s \ge 2$ be a positive integer. Let $N$ be a positive integer such that $N \ge |sA - sA|$. Then there exists a subset $A'$ of $A$ with cardinality at least $|A|/s$ such that $A'$ is Freiman $s$-isomorphic to a subset of $\mathbb{Z}/N\mathbb{Z}$.
The last statement means there exists some Freiman $s$-homomorphism between the two subsets.
Proof sketch: Choose a prime $q$ sufficiently large such that the modulo-$q$ reduction map $\pi_q \colon \mathbb{Z} \to \mathbb{Z}/q\mathbb{Z}$ is a Freiman $s$-isomorphism from $A$ to its image in $\mathbb{Z}/q\mathbb{Z}$. Let $\psi_q \colon \mathbb{Z}/q\mathbb{Z} \to \mathbb{Z}$ be the lifting map that takes each member of $\mathbb{Z}/q\mathbb{Z}$ to its unique representative in $\{1, \ldots, q\}$. For nonzero $\lambda \in \mathbb{Z}/q\mathbb{Z}$, let $\cdot\lambda \colon \mathbb{Z}/q\mathbb{Z} \to \mathbb{Z}/q\mathbb{Z}$ be the multiplication-by-$\lambda$ map, which is a Freiman $s$-isomorphism. Let $B$ be the image $(\cdot\lambda \circ \pi_q)(A)$. Choose a suitable subset $B'$ of $B$ with cardinality at least $|B|/s$ such that the restriction of $\psi_q$ to $B'$ is a Freiman $s$-isomorphism onto its image, and let $A' \subseteq A$ be the preimage of $B'$ under $\cdot\lambda \circ \pi_q$. Then the restriction of $\psi_q \circ \cdot\lambda \circ \pi_q$ to $A'$ is a Freiman $s$-isomorphism onto its image $\psi_q(B')$. Lastly, there exists some choice of nonzero $\lambda$ such that the restriction of the modulo-$N$ reduction $\mathbb{Z} \to \mathbb{Z}/N\mathbb{Z}$ to $\psi_q(B')$ is a Freiman $s$-isomorphism onto its image. The result follows after composing this map with the earlier Freiman $s$-isomorphism.
Bohr sets and Bogolyubov's lemma
Though Freiman's theorem applies to sets of integers, the Ruzsa modeling lemma allows one to model sets of integers as subsets of finite cyclic groups. So it is useful to first work in the setting of a finite field, and then generalize results to the integers. The following lemma was proved by Bogolyubov:
Let $A \subseteq \mathbb{F}_2^n$ and let $\alpha = |A|/2^n$. Then $2A - 2A$ contains a subspace of $\mathbb{F}_2^n$ of dimension at least $n - \alpha^{-2}$.
Generalizing this lemma to arbitrary cyclic groups requires an analogous notion to “subspace”: that of the Bohr set. Let $R$ be a subset of $\mathbb{Z}/N\mathbb{Z}$ where $N$ is a prime. The Bohr set of dimension $|R|$ and width $\varepsilon$ is

$\operatorname{Bohr}(R, \varepsilon) = \{ x \in \mathbb{Z}/N\mathbb{Z} : \|rx/N\| \le \varepsilon \text{ for all } r \in R \},$

where $\|\theta\|$ is the distance from $\theta$ to the nearest integer. The following lemma generalizes Bogolyubov's lemma:

Let $A \subseteq \mathbb{Z}/N\mathbb{Z}$ and $\alpha = |A|/N$. Then $2A - 2A$ contains a Bohr set of dimension at most $\alpha^{-2}$ and width $1/4$.
Here the dimension of a Bohr set is analogous to the codimension of a set in . The proof of the lemma involves Fourier-analytic methods. The following proposition relates Bohr sets back to generalized arithmetic progressions, eventually leading to the proof of Freiman's theorem.
Let $X$ be a Bohr set in $\mathbb{Z}/N\mathbb{Z}$ of dimension $d$ and width $\varepsilon$. Then $X$ contains a proper generalized arithmetic progression of dimension at most $d$ and size at least $(\varepsilon/d)^d N$.
The proof of this proposition uses Minkowski's theorem, a fundamental result in geometry of numbers.
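For small moduli a Bohr set can be enumerated directly from its definition: x belongs to the set when, for every frequency r, the fractional part rx/N lies within the width of an integer. A Python sketch (the `bohr_set` helper is illustrative, not from any library):

```python
def bohr_set(N, R, width):
    """Enumerate {x in Z/N : ||r*x/N|| <= width for all r in R},
    where ||t|| denotes the distance from t to the nearest integer."""
    def dist_to_int(t):
        return abs(t - round(t))
    return {x for x in range(N)
            if all(dist_to_int(r * x / N) <= width for r in R)}

B = bohr_set(101, {1}, 0.1)        # dimension 1, width 1/10
assert 0 in B                      # a Bohr set always contains 0
assert B == set(range(0, 11)) | set(range(91, 101))
```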
Proof
By the Plünnecke–Ruzsa inequality, $|8A - 8A| \le K^{16}|A|$. By Bertrand's postulate, there exists a prime $N$ such that $|8A - 8A| \le N \le 2K^{16}|A|$. By the Ruzsa modeling lemma, there exists a subset $A'$ of $A$ of cardinality at least $|A|/8$ such that $A'$ is Freiman 8-isomorphic to a subset $B \subseteq \mathbb{Z}/N\mathbb{Z}$.
By the generalization of Bogolyubov's lemma, $2B - 2B$ contains a proper generalized arithmetic progression of dimension $d$ at most $(1/(8 \cdot 2K^{16}))^{-2} = 256K^{32}$ and size at least $(1/(4d))^d N$. Because $A'$ and $B$ are Freiman 8-isomorphic, $2A' - 2A'$ and $2B - 2B$ are Freiman 2-isomorphic. Then the image under the 2-isomorphism of the proper generalized arithmetic progression in $2B - 2B$ is a proper generalized arithmetic progression in $2A' - 2A' \subseteq 2A - 2A$ called $P$.
But $P + A \subseteq 3A - 2A$, since $P \subseteq 2A - 2A$. Thus

$|P + A| \le |3A - 2A| \le K^5|A|,$

so by the Ruzsa covering lemma $A \subseteq T + P - P$ for some $T \subseteq A$ of cardinality at most $K^5 (4d)^d$. Then $T + P - P$ is contained in a generalized arithmetic progression of dimension $|T| + d$ and size at most $2^{|T|} 2^d |P| \le 2^{|T|+d} |2A - 2A| \le 2^{|T|+d} K^4 |A|$, completing the proof.
Generalizations
A result due to Ben Green and Imre Ruzsa generalized Freiman's theorem to arbitrary abelian groups. They used an analogous notion to generalized arithmetic progressions, which they called coset progressions. A coset progression of an abelian group is a set for a proper generalized arithmetic progression and a subgroup of . The dimension of this coset progression is defined to be the dimension of , and its size is defined to be the cardinality of the whole set. Green and Ruzsa showed the following:
Let $A$ be a finite set in an abelian group $G$ such that $|A + A| \le K|A|$. Then $A$ is contained in a coset progression of dimension at most $d(K)$ and size at most $f(K)|A|$, where $d(K)$ and $f(K)$ are functions of $K$ that are independent of $G$.
Green and Ruzsa provided upper bounds of $d(K) = CK^4 \log(K + 2)$ and $f(K) = e^{CK^4 \log^2(K + 2)}$ for some absolute constant $C$.
Terence Tao (2010) also generalized Freiman's theorem to solvable groups of bounded derived length.
Extending Freiman's theorem to an arbitrary nonabelian group is still open. Results for doubling constants $K < 2$, when a set has very small doubling, are referred to as Kneser theorems.
The polynomial Freiman–Ruzsa conjecture is a generalization published in a paper by Imre Ruzsa but credited by him to Katalin Marton. It states that if a subset $A$ of a group $G$ (a power of a cyclic group) has doubling constant such that $|A + A| \le K|A|$, then $A$ is covered by a number of cosets, polynomial in $K$, of some subgroup $H \subseteq G$ with $|H| \le |A|$. In 2012 Tom Sanders gave an almost polynomial bound of the conjecture for abelian groups. In 2023 a solution over a field of characteristic 2 was posted as a preprint by Tim Gowers, Ben Green, Freddie Manners and Terry Tao. This proof was completely formalized in the Lean 4 formal proof language, a collaborative project that marked an important milestone in terms of mathematicians successfully formalizing contemporary mathematics.
See also
Markov spectrum
Plünnecke–Ruzsa inequality
Kneser's theorem (combinatorics)
References
Further reading
Sumsets
Theorems in number theory | Freiman's theorem | [
"Mathematics"
] | 1,635 | [
"Mathematical theorems",
"Combinatorics",
"Theorems in number theory",
"Sumsets",
"Mathematical problems",
"Number theory"
] |
2,240,310 | https://en.wikipedia.org/wiki/Fermionic%20field | In quantum field theory, a fermionic field is a quantum field whose quanta are fermions; that is, they obey Fermi–Dirac statistics. Fermionic fields obey canonical anticommutation relations rather than the canonical commutation relations of bosonic fields.
The most prominent example of a fermionic field is the Dirac field, which describes fermions with spin-1/2: electrons, protons, quarks, etc. The Dirac field can be described as either a 4-component spinor or as a pair of 2-component Weyl spinors. Spin-1/2 Majorana fermions, such as the hypothetical neutralino, can be described as either a dependent 4-component Majorana spinor or a single 2-component Weyl spinor. It is not known whether the neutrino is a Majorana fermion or a Dirac fermion; observing neutrinoless double-beta decay experimentally would settle this question.
Basic properties
Free (non-interacting) fermionic fields obey canonical anticommutation relations; i.e., involve the anticommutators {a, b} = ab + ba, rather than the commutators [a, b] = ab − ba of bosonic or standard quantum mechanics. Those relations also hold for interacting fermionic fields in the interaction picture, where the fields evolve in time as if free and the effects of the interaction are encoded in the evolution of the states.
It is these anticommutation relations that imply Fermi–Dirac statistics for the field quanta. They also result in the Pauli exclusion principle: two fermionic particles cannot occupy the same state at the same time.
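Both facts can be illustrated for a single fermionic mode, where the annihilation and creation operators act as 2×2 matrices on the basis (vacuum, occupied). This is a toy sketch in plain Python, not the field-theoretic relations themselves; it verifies {a, a†} = 1 and that double occupation vanishes:

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    """2x2 matrix sum."""
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

# Single fermionic mode on the basis (|0>, |1>) = (vacuum, occupied).
a     = [[0, 1], [0, 0]]   # annihilation operator
a_dag = [[0, 0], [1, 0]]   # creation operator

# Canonical anticommutation relation {a, a_dag} = 1.
assert matadd(matmul(a, a_dag), matmul(a_dag, a)) == [[1, 0], [0, 1]]

# Pauli exclusion: creating two quanta in the same mode gives zero.
assert matmul(a_dag, a_dag) == [[0, 0], [0, 0]]
```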
Dirac fields
The prominent example of a spin-1/2 fermion field is the Dirac field (named after Paul Dirac), denoted by $\psi(x)$. The equation of motion for a free spin-1/2 particle is the Dirac equation,

$(i\gamma^\mu \partial_\mu - m)\psi(x) = 0,$

where $\gamma^\mu$ are gamma matrices and $m$ is the mass. The simplest possible solutions to this equation are plane wave solutions, $\psi_1 = u(p)\, e^{-ip \cdot x}$ and $\psi_2 = v(p)\, e^{ip \cdot x}$. These plane wave solutions form a basis for the Fourier components of $\psi(x)$, allowing for the general expansion of the wave function as follows,

$\psi(x) = \int \frac{d^3 p}{(2\pi)^3} \frac{1}{\sqrt{2E_p}} \sum_s \left( a^s_{\mathbf p}\, u^s(p)\, e^{-ip \cdot x} + b^{s\dagger}_{\mathbf p}\, v^s(p)\, e^{ip \cdot x} \right)$
Here $u$ and $v$ are spinors labelled by their spin $s$ and spinor indices. For the electron, a spin-1/2 particle, $s = +1/2$ or $s = -1/2$. The energy factor is the result of having a Lorentz invariant integration measure. In second quantization, $\psi(x)$ is promoted to an operator, so the coefficients of its Fourier modes must be operators too. Hence, $a^s_{\mathbf p}$ and $b^{s\dagger}_{\mathbf p}$ are operators. The properties of these operators can be discerned from the properties of the field. $a^s_{\mathbf p}$ and $b^{s\dagger}_{\mathbf p}$ obey the anticommutation relations:

$\{ a^r_{\mathbf p}, a^{s\dagger}_{\mathbf q} \} = \{ b^r_{\mathbf p}, b^{s\dagger}_{\mathbf q} \} = (2\pi)^3 \delta^3(\mathbf p - \mathbf q)\, \delta^{rs},$

with all other anticommutators vanishing.

We impose an anticommutator relation (as opposed to a commutation relation as we do for the bosonic field) in order to make the operators compatible with Fermi–Dirac statistics. By putting in the expansions for $\psi(x)$ and $\bar\psi(x)$, the anticommutation relations for the coefficients can be computed.
In a manner analogous to non-relativistic annihilation and creation operators and their commutators, these algebras lead to the physical interpretation that creates a fermion of momentum p and spin s, and creates an antifermion of momentum q and spin r. The general field is now seen to be a weighted (by the energy factor) summation over all possible spins and momenta for creating fermions and antifermions. Its conjugate field, , is the opposite, a weighted summation over all possible spins and momenta for annihilating fermions and antifermions.
With the field modes understood and the conjugate field defined, it is possible to construct Lorentz invariant quantities for fermionic fields. The simplest is the quantity $\bar\psi \psi$. This makes the reason for the choice of $\bar\psi = \psi^\dagger \gamma^0$ clear. This is because the general Lorentz transform on $\psi$ is not unitary, so the quantity $\psi^\dagger \psi$ would not be invariant under such transforms, so the inclusion of $\gamma^0$ is to correct for this. The other possible non-zero Lorentz invariant quantity, up to an overall conjugation, constructible from the fermionic fields is $\bar\psi \gamma^\mu \partial_\mu \psi$.
Since linear combinations of these quantities are also Lorentz invariant, this leads naturally to the Lagrangian density for the Dirac field by the requirement that the Euler–Lagrange equation of the system recover the Dirac equation.

$\mathcal{L} = \bar\psi \left( i\gamma^\mu \partial_\mu - m \right) \psi$

Such an expression has its indices suppressed. When reintroduced the full expression is

$\mathcal{L} = \bar\psi_a \left( i\gamma^\mu_{ab} \partial_\mu - m\,\delta_{ab} \right) \psi_b$
The Hamiltonian (energy) density can also be constructed by first defining the momentum canonically conjugate to $\psi(x)$, called $\pi(x)$:

$\pi(x) \equiv \frac{\partial \mathcal{L}}{\partial(\partial_0 \psi)} = i \psi^\dagger(x)$

With that definition of $\pi(x)$, the Hamiltonian density is:

$\mathcal{H} = \bar\psi \left( -i \vec\gamma \cdot \nabla + m \right) \psi,$

where $\nabla$ is the standard gradient of the space-like coordinates, and $\vec\gamma$ is a vector of the space-like $\gamma$ matrices. It is surprising that the Hamiltonian density doesn't depend on the time derivative of $\psi$, directly, but the expression is correct.
Given the expression for $\psi(x)$ we can construct the Feynman propagator for the fermion field:

$S_F(x - y) = \langle 0 | T\left( \psi(x) \bar\psi(y) \right) | 0 \rangle$

We define the time-ordered product for fermions with a minus sign due to their anticommuting nature:

$T\left( \psi(x)\bar\psi(y) \right) = \theta(x^0 - y^0)\, \psi(x)\bar\psi(y) - \theta(y^0 - x^0)\, \bar\psi(y)\psi(x)$

Plugging our plane wave expansion for the fermion field into the above equation yields:

$S_F(x - y) = \int \frac{d^4 p}{(2\pi)^4} \, \frac{i(\not p + m)}{p^2 - m^2 + i\epsilon} \, e^{-ip \cdot (x - y)}$
where we have employed the Feynman slash notation. This result makes sense since the factor

$\frac{i(\not p + m)}{p^2 - m^2 + i\epsilon}$

is just the inverse of the operator $(\not p - m)$ acting on $\psi$ in the momentum-space Dirac equation. Note that the Feynman propagator for the Klein–Gordon field has this same property. Since all reasonable observables (such as energy, charge, particle number, etc.) are built out of an even number of fermion fields, the commutation relation vanishes between any two observables at spacetime points outside the light cone. As we know from elementary quantum mechanics two simultaneously commuting observables can be measured simultaneously. We have therefore correctly implemented Lorentz invariance for the Dirac field, and preserved causality.
More complicated field theories involving interactions (such as Yukawa theory, or quantum electrodynamics) can be analyzed too, by various perturbative and non-perturbative methods.
Dirac fields are an important ingredient of the Standard Model.
See also
Dirac equation
Spin–statistics theorem
Spinor
Composite Field
Auxiliary Field
References
Peskin, M and Schroeder, D. (1995). An Introduction to Quantum Field Theory, Westview Press. (See pages 35–63.)
Srednicki, Mark (2007). Quantum Field Theory, Cambridge University Press.
Weinberg, Steven (1995). The Quantum Theory of Fields, (3 volumes) Cambridge University Press.
Quantum field theory
Spinors | Fermionic field | [
"Physics"
] | 1,380 | [
"Quantum field theory",
"Quantum mechanics"
] |
2,240,347 | https://en.wikipedia.org/wiki/Generalized%20arithmetic%20progression | In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it.
A semilinear set generalizes this idea to multiple dimensions – it is a set of vectors of integers, rather than a set of integers.
Finite generalized arithmetic progression
A finite generalized arithmetic progression, or sometimes just generalized arithmetic progression (GAP), of dimension $d$ is defined to be a set of the form

$\{ x_0 + \ell_1 x_1 + \cdots + \ell_d x_d : 0 \le \ell_i < L_i \text{ for } 1 \le i \le d \},$

where $x_0, x_1, \ldots, x_d \in \mathbb{Z}$ and $L_1, \ldots, L_d$ are positive integers. The product $L_1 L_2 \cdots L_d$ is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper. Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into $\mathbb{Z}$. This projection is injective if and only if the generalized arithmetic progression is proper.
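A small Python sketch makes the size-versus-cardinality distinction concrete (the `gap` helper is illustrative, not from any library):

```python
from itertools import product

def gap(x0, steps, lengths):
    """The generalized arithmetic progression
    {x0 + l_1*x_1 + ... + l_d*x_d : 0 <= l_i < L_i}, returned as a set."""
    return {x0 + sum(l * x for l, x in zip(ls, steps))
            for ls in product(*(range(L) for L in lengths))}

# The running example from the text: start at 17, add either 3 or 5.
P = gap(17, [3, 5], [2, 2])     # size 2 * 2 = 4
assert P == {17, 20, 22, 25}
assert len(P) == 4              # cardinality equals size: P is proper

# With steps 1 and 2 some elements coincide, so the GAP is not proper.
Q = gap(0, [1, 2], [3, 3])      # size 9, but fewer distinct elements
assert len(Q) < 9
```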
Semilinear sets
Formally, an arithmetic progression of ℕ^n is an infinite sequence of the form v, v + v′, v + 2v′, v + 3v′, …, where v and v′ are fixed vectors in ℕ^n, called the initial vector and common difference respectively. A subset of ℕ^n is said to be linear if it is of the form
{v_0 + k_1 v_1 + … + k_m v_m : k_1, …, k_m ∈ ℕ}
where m is some integer and v_0, …, v_m are fixed vectors in ℕ^n. A subset of ℕ^n is said to be semilinear if it is a finite union of linear sets.
The semilinear sets are exactly the sets definable in Presburger arithmetic.
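To make the notion of a linear set concrete, here is a small Python sketch (an illustration added here, not from the article; the function name and example vectors are mine). Since a linear set is infinite, the coefficients are truncated for display:

```python
from itertools import product

def linear_set(v0, periods, bound):
    # Enumerate v0 + k1*v1 + ... + km*vm with each coefficient ki < bound.
    # (The true linear set is infinite; the bound just truncates it for display.)
    pts = set()
    for ks in product(range(bound), repeat=len(periods)):
        pts.add(tuple(c + sum(k * v[i] for k, v in zip(ks, periods))
                      for i, c in enumerate(v0)))
    return sorted(pts)

# Linear subset of N^2 with initial vector (1, 0) and periods (2, 0) and (0, 3)
pts = linear_set((1, 0), [(2, 0), (0, 3)], bound=3)
print(pts)  # (1, 0), (1, 3), (1, 6), (3, 0), ... — 9 points
```

A semilinear set would simply be the union of several such enumerations.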
See also
Freiman's theorem
References
Algebra
Combinatorics
Arithmetic series | Generalized arithmetic progression | [
"Mathematics"
] | 344 | [
"Discrete mathematics",
"Algebra",
"Combinatorics"
] |
2,240,363 | https://en.wikipedia.org/wiki/Television%20antenna | A television antenna, also called a television aerial (in British English), is an antenna specifically designed for use with a television receiver (TV) to receive terrestrial over-the-air (OTA) broadcast television signals from a television station. Terrestrial television is broadcast on frequencies from about 47 to 250 MHz in the very high frequency (VHF) band, and 470 to 960 MHz in the ultra high frequency (UHF) band in different countries.
Television antennas are manufactured in two different types: indoor and outdoor antennas. Indoor antennas are designed to be located on top of or next to the television set, but are ideally placed near a window in a room and as high up as possible for the best reception. The most common types of indoor antennas are the dipole ("rabbit ears"), which work best for VHF channels, and loop antennas, which work best for UHF. Outdoor antennas on the other hand are designed to be mounted on a mast on top of the owner's house, or in a loft or attic where the dry conditions and increased elevation are advantageous for reception and antenna longevity. Outdoor antennas are more expensive and difficult to install but are necessary for adequate reception in fringe areas far from television stations; the most common types of these are the Yagi, log periodic, and (for UHF) the multi-bay reflective array antenna.
Description
The purpose of the antenna is to intercept radio waves from the desired television stations and convert them to tiny radio frequency alternating currents which are applied to the television's tuner, which extracts the television signal. The antenna is connected to the television with a specialized cable designed to carry radio current, called transmission line. Earlier antennas used a flat cable called 300 ohm twin-lead. The standard today is 75 ohm coaxial cable, which is less susceptible to interference and which plugs into an F connector or Belling-Lee connector (depending on region) on the back of the TV. To convert the signal from antennas that use a twin-lead line to the modern coaxial cable input, a small transformer called a balun is used in the line.
In most countries, television broadcasting is allowed in the very high frequency (VHF) band from 47 to 68 MHz, called VHF low band or band I in Europe; 174 to 216 MHz, called VHF high band or band III in Europe, and in the ultra high frequency (UHF) band from 470 to 698 MHz, called band IV and V in Europe. The boundaries of each band vary somewhat in different countries. Radio waves in these bands travel by line-of-sight; they are blocked by hills and the visual horizon, limiting a television station's reception area to a range that depends on terrain.
Analog vs. digital
In the previous standard analog television, used before 2006, the VHF and UHF bands required separate tuners in the television receiver, which had separate antenna inputs. The wavelength of a radio wave equals the speed of light (c), divided by the frequency. The above frequency bands cover a 15:1 wavelength ratio, or almost 4 octaves. It is difficult to design a single antenna to receive such a wide wavelength range, and there is an octave gap from 216 to 470 MHz between the VHF and UHF frequencies. So traditionally, separate antennas (outdoor antennas with separate sets of elements on a single support boom) have been used to receive the VHF and UHF channels.
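The 15:1 figure follows directly from λ = c/f; a quick Python check (an illustration added here, using the 47 MHz and 698 MHz band edges quoted above):

```python
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    """Free-space wavelength from frequency (lambda = c / f)."""
    return C / freq_hz

longest = wavelength_m(47e6)    # lowest VHF band edge quoted above
shortest = wavelength_m(698e6)  # highest UHF band edge quoted above
print(f"{longest:.2f} m to {shortest:.2f} m, ratio {longest / shortest:.1f}:1")
# -> 6.38 m to 0.43 m, ratio 14.9:1 — almost the 15:1 quoted in the text
```

The same arithmetic shows the 216–470 MHz gap spans slightly more than one octave (470/216 ≈ 2.2), which is why separate VHF and UHF element sets are practical.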
Starting in 2006, many countries in the world switched from broadcasting using an older analog television standard to newer digital television (DTV). However, the same broadcast frequencies are generally used, so the antennas used for the older analog television will also receive the new DTV broadcasts. Sellers often claim to supply a special digital or high-definition television (HDTV) antenna advertised as a replacement for an existing analog television antenna; at best this is misinformation to generate sales of unneeded equipment. At worst, it may leave the viewer with a UHF-only antenna in a local market (particularly in North America) where some digital stations remain on their original high VHF or low VHF frequencies.
Reception issues
Places unable to be reached by television broadcast transmitters are known as black spots in Australia. In East Germany, the areas that could not receive western TV signals were referred to as the Tal der Ahnungslosen, or Valley of the Clueless.
Indoor
Indoor antennas may be mounted on the television itself or stand on a table next to it, connected to the television by a short feed line. Due to space constraints, indoor antennas cannot be as large and elaborate as outdoor antennas, they are not mounted at as high an elevation, and the building walls block some of the radio waves; for these reasons, indoor antennas generally do not give as good reception as outdoor antennas. They are often perfectly adequate in urban and suburban areas, which are usually within the strong radiation footprint of local television stations. Still, in rural fringe reception areas, only an outdoor antenna may give adequate reception. A few of the simplest indoor antennas are described below, but a great variety of designs and types exist. Many have a dial on the antenna with a number of different settings to alter the antenna's reception pattern. This should be rotated with the set on while looking at the screen until the best picture is obtained.
Rabbit ears
The oldest and most widely used (at least in the United States) indoor antenna is the rabbit ears or bunny ears, which are often provided with new television sets. It is a simple half-wave dipole antenna used to receive the VHF television bands, consisting in the US of 54 to 88 MHz (band I) and 174 to 216 MHz (band III). It is constructed of two telescoping rods attached to a base, which extend out to approximately one-quarter wavelength at 54 MHz (about 1.4 m) and can be collapsed when not in use. For best reception, the rods should be adjusted to be a little less than one-quarter wavelength at the frequency of the television channel being received. However, the dipole has a wide bandwidth, so often adequate reception is achieved without adjusting the length.
The measured gain of rabbit ears is low, about −2 dBi, or −4 dB with respect to a half wave dipole. This means it is not as directional and sensitive to distant stations as a large rooftop antenna. Still, its wide-angle reception pattern may allow it to receive several stations located in different directions without requiring readjustment when the channel is changed. Dipole antennas are bi-directional; that is, they have two main lobes in opposite directions, 180° apart. Instead of being fixed in position like other antennas, the elements are mounted on ball-and-socket joints. They can be adjusted to various angles in a V shape, allowing them to be moved out of the way in crowded quarters. Another reason for the V shape is that when receiving channels at the top of the band with the rods fully extended, the antenna elements will typically resonate at their 3rd harmonic. In this mode, the direction of maximum gain (the main lobe) is no longer perpendicular to the rods. Still, the radiation pattern will have lobes at an angle to the rods, making it advantageous to be able to adjust them to various angles.
Whip antenna
Some portable televisions use a whip antenna. This consists of a single telescoping rod attached to the television, which can be retracted when not in use. It functions as a quarter-wave monopole antenna. The other side of the feedline is connected to the ground plane on the TV's circuit board, which acts as ground. The whip antenna generally has an omnidirectional reception pattern, with maximum sensitivity in directions perpendicular to the antenna axis and gain similar to rabbit ears.
Loop antenna
The UHF channels are often received by a single turn loop antenna. Since a rabbit ears antenna only covers the VHF bands, it is often combined with a UHF loop mounted on the same base to cover all the TV channels. This of course also depends by country and region: for example in the UK and Ireland, terrestrial TV broadcasts are only on the UHF band, meaning that a loop antenna is necessary and the rabbit ears would only be useful for FM radio reception.
Flat antenna
A more recent phenomenon for indoor antennas are flat antennas, which are lightweight, thin, and usually square-shaped with the claim of having more omnidirectional reception. They are also marketed as being more in line with modern minimalistic home designs. Flat antennas may have a stand or could be hung on a wall or a window. Internally, the thin, flat square is a loop antenna with its circular metallic wiring embedded into conductive plastic.
Outdoor
When a higher-gain antenna is needed to achieve adequate reception in suburban or fringe reception areas, an outdoor directional antenna is usually used. Although most simple antennas have null directions where they have zero response, the directions of useful gain are very broad. In contrast, directional antennas can have an almost unidirectional radiation pattern, so the correct end of the antenna must be pointed at the TV station. As an antenna design provides higher gain (compared to a dipole), the main lobe of the radiation pattern becomes narrower. Outdoor antennas provide up to a 15 dB gain in signal strength and 15-20 dB greater rejection of ghost signals in analog TV. Combined with a signal increase of 14 dB due to height and 11 dB due to lack of attenuating building walls, an outdoor antenna can result in a signal strength increase of up to 40 dB at the TV receiver.
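Decibel figures like these add, while the underlying power ratios multiply; a short Python sketch (an illustration added here, using the dB numbers quoted above):

```python
def db_to_power_ratio(db):
    # A decibel gain corresponds to a power ratio of 10^(dB / 10).
    return 10 ** (db / 10)

antenna_gain_db = 15  # outdoor antenna gain (figure from the text)
height_gain_db = 14   # extra signal from mounting height (figure from the text)
wall_gain_db = 11     # no attenuating building walls (figure from the text)

total_db = antenna_gain_db + height_gain_db + wall_gain_db
print(total_db, db_to_power_ratio(total_db))  # 40 dB, a 10,000x power ratio
```

A 40 dB improvement is thus a ten-thousand-fold increase in received power, which is why outdoor mounting can make the difference between no picture and solid reception in fringe areas.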
Outdoor antenna designs are often based on the Yagi–Uda antenna or log-periodic dipole array (LPDA). These are composed of multiple half-wave dipole elements, consisting of metal rods approximately half of the wavelength of the television signal, mounted in a line on a support boom. These act as resonators; the electric field of the incoming radio wave pushes the electrons in the rods back and forth, creating standing waves of oscillating voltage in the rods. The antenna can have a smaller or larger number of rod elements; in general, the more elements, the higher the gain and the more directional. Another design used mainly for UHF reception is the reflective array antenna, consisting of a vertical metal screen with multiple dipole elements mounted in front of it.
The television broadcast bands are too wide in frequency to be covered by a single antenna, so the two options are separate antennas for the VHF and UHF bands or a combination (combo) VHF/UHF antenna. A VHF/UHF antenna combines two antennas feeding the same feedline mounted on the same support boom. The longer elements that pick up VHF frequencies are located at the back of the boom and often function as a log-periodic antenna. The shorter elements that receive the UHF stations are located at the front of the boom and often function as a Yagi antenna.
Since directional antennas must be pointed at the transmitting antenna, this is a problem when the television stations to be received are located in different directions. In this case, two or more directional rooftop antennas, each pointed at a different transmitter, are often mounted on the same mast and connected to one receiver; for best performance, filter or matching circuits are used to keep each antenna from degrading the performance of the others connected to the same transmission line. An alternative is to use a single antenna mounted on a rotator, a remote servo system that rotates the antenna to a new direction when a dial next to the television is turned.
Sometimes television transmitters are deliberately located such that receivers in a given region need only receive transmissions in a relatively narrow band of the full UHF television spectrum and from the same direction, hence allowing the use of a higher gain grouped aerial.
Installation
Antennas are commonly placed on rooftops and sometimes in attics. Placing an antenna indoors significantly attenuates the level of the available signal. Directional antennas must be pointed at the transmitter they are receiving; in most cases great accuracy is not needed. In a given region, it is sometimes arranged that all television transmitters are located in roughly the same direction and use frequencies spaced closely enough that a single antenna suffices for all. A single transmitter location may transmit signals for several channels. CABD (communal antenna broadcast distribution) is a system installed inside a building to receive free-to-air TV/FM signals transmitted via radio frequencies and distribute them to the audience.
Analog television signals are susceptible to ghosting in the image: multiple closely spaced images giving the impression of blurred and repeated images of edges in the picture. This is due to the signal being reflected from nearby objects (buildings, trees, mountains); several copies of the signal, of different strengths and subject to different delays, are picked up. The reflections differ from channel to channel. Careful positioning of the antenna can produce a compromise position which minimizes the ghosts on different channels. Ghosting is also possible if multiple antennas connected to the same receiver pick up the same station, especially if the cables connecting them to the splitter/merger are of different lengths or the antennas are too close together. Analog television is being replaced by digital, which is not subject to ghosting; the same reflected signal that causes ghosting in an analog signal would produce no viewable content at all in digital. However, when digital reception does suffer interference, the degradation in image quality is far more severe.
Rooftop and other outdoor antennas
Aerials are attached to roofs in various ways, usually on a pole to elevate the antenna above the roof. This is generally sufficient in most areas. In some places, however, such as a deep valley or near taller structures, the antenna may need to be placed significantly higher, using a guyed mast. The wire connecting the antenna indoors is referred to as the downlead or drop, and the longer the downlead is, the greater the signal degradation in the wire. Certain cables may help reduce this tendency.
The higher the antenna is placed, the better it will perform. An antenna of higher gain will be able to receive weaker signals from its preferred direction. Intervening buildings, topographical features (mountains), and dense forests will weaken the signal; in many cases, the signal will be reflected such that a usable signal is still available. There are physical dangers inherent to high or complex antennas, such as the structure falling or being destroyed by weather. There are also varying local ordinances which restrict and limit such things as the height of a structure without obtaining permits. For example, in the United States, the Telecommunications Act of 1996 allows any homeowner to install "An antenna that is designed to receive local television broadcast signals" but that "masts higher than above the roof-line may be subject to local permitting requirements."
Indoor antennas
As discussed previously, antennas may be placed indoors where signals are strong enough to overcome antenna shortcomings. The antenna is simply plugged into the television receiver and placed conveniently, often on the top of the receiver ("set-top"). Sometimes, the position needs to be experimented with to get the best picture. Indoor antennas can also benefit from RF amplification, commonly called a TV booster. Reception from indoor antennas can be problematic in weak signal areas.
Attic installation
Sometimes, it is desirable not to put an antenna on the roof; in these cases, antennas designed for outdoor use are often mounted in the attic or loft, although antennas designed for attic use are also available. Putting an antenna indoors significantly decreases its performance due to lower elevation above ground level and intervening walls; however, in strong signal areas, reception may be satisfactory. One layer of asphalt shingles, roof felt, and a plywood roof deck is considered to attenuate the signal to about half.
Multiple antennas, rotators
It is sometimes desired to receive signals from transmitters which are not in the same direction. This can be achieved, for one station at a time, by using a rotator operated by an electric motor to turn the antenna as desired. Alternatively, two or more antennas, each pointing at a desired transmitter and coupled by appropriate circuitry, can be used. To prevent the antennas from interfering with each other, the vertical spacing between the booms must be at least half the wavelength of the lowest frequency to be received (distance = λ/2). The wavelength at 54 MHz (Channel 2) is about 5.55 m (from λ × f = c), so the antennas must be a minimum of about 2.8 m apart. It is also important that the cables connecting the antennas to the signal splitter/merger be precisely the same length to prevent phasing issues, which cause ghosting with analog reception. That is, the antennas might both pick up the same station; the signal from the one with the shorter cable will reach the receiver slightly sooner, supplying the receiver with two pictures slightly offset. There may be phasing issues even with the same length of down-lead cable. Band-pass filters or signal traps may help to reduce this problem.
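The half-wavelength spacing rule can be computed directly; a small Python sketch (an illustration added here, not part of the article):

```python
C = 299_792_458  # speed of light in m/s

def min_vertical_spacing_m(lowest_freq_hz):
    """Half of the free-space wavelength at the lowest received frequency."""
    return C / lowest_freq_hz / 2

spacing = min_vertical_spacing_m(54e6)  # Channel 2, the lowest US VHF channel
print(f"minimum boom spacing: {spacing:.2f} m")  # -> 2.78 m
```

If only UHF channels need to be received, the lowest frequency is much higher and the required spacing shrinks to well under half a metre.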
For side-by-side placement of multiple antennas, as is common in a space of limited height such as an attic, they should be separated by at least one full wavelength of the lowest frequency to be received at their closest point.
When multiple antennas are used, often one is for a range of co-located stations and the other is for a single transmitter in a different direction.
Safety
TV antennas are good conductors of electricity and attract lightning, acting as a lightning rod. A lightning arrester is usually used to protect against this. A large grounding rod connected to both the antenna and the mast or pole is required.
Properly installed masts, especially tall ones, are guyed with galvanized cable; no insulators are needed. They are designed to withstand worst-case weather conditions in the area and are positioned so that they do not interfere with power lines if they fall.
There is an inherent danger in being on the rooftop of a house, required for installing or adjusting a television antenna.
See also
Broadcast television systems
Radio masts and towers, sometimes called Radio and TV antennas
Satellite dish
Satellite television
Terrestrial television
References
External links
Article on the basic theory of TV aerials and their use
See Which TV Stations You Can Get on a Map
'Up on the roof' antenna page
Antennas (radio)
Radio electronics
Radio frequency antenna types
Radio frequency propagation
Radio technology
Antenna | Television antenna | [
"Physics",
"Technology",
"Engineering"
] | 3,703 | [
"Information and communications technology",
"Radio electronics",
"Physical phenomena",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Radio technology",
"Waves"
] |
2,240,615 | https://en.wikipedia.org/wiki/Comparison%20of%20SSH%20clients | An SSH client is a software program which uses the secure shell protocol to connect to a remote computer. This article compares a selection of notable clients.
General
Platform
The operating systems or virtual machines the SSH clients are designed to run on without emulation include several possibilities:
Partial indicates that while it works, the client lacks important functionality compared to versions for other OSs but may still be under development.
The list is not exhaustive, but rather reflects the most common platforms today.
Technical
Features
Authentication key algorithms
This table lists standard authentication key algorithms implemented by SSH clients. Some SSH implementations include both server and client implementations and support custom non-standard authentication algorithms not listed in this table.
See also
Comparison of SSH servers
Comparison of FTP client software
Comparison of remote desktop software
References
Cryptographic software
Internet Protocol based network software
SSH clients
Secure Shell | Comparison of SSH clients | [
"Mathematics"
] | 172 | [
"Cryptographic software",
"Mathematical software"
] |
2,240,709 | https://en.wikipedia.org/wiki/Diethylene%20glycol | Diethylene glycol (DEG) is an organic compound with the formula (HOCH2CH2)2O. It is a colorless, practically odorless, and hygroscopic liquid with a sweetish taste. It is a four-carbon dimer of ethylene glycol. It is miscible with water, alcohol, ether, acetone, and ethylene glycol. DEG is a widely used solvent. It can be a normal ingredient in various consumer products, and it can be a contaminant. DEG has also been misused to sweeten wine and beer, and to viscosify oral and topical pharmaceutical products. Its use has resulted in many epidemics of poisoning since the early 20th century.
Preparation
DEG is produced by the partial hydrolysis of ethylene oxide. Depending on the conditions, varying amounts of DEG and related glycols are produced. The resulting product is two ethylene glycol molecules joined by an ether bond.
"Diethylene glycol is derived as a co-product with ethylene glycol (MEG) and triethylene glycol. The industry generally operates to maximize MEG production. Ethylene glycol is by far the largest volume of the glycol products in a variety of applications. Availability of DEG will depend on demand for derivatives of the primary product, ethylene glycol, rather than on DEG market requirements."
Structure of DEG and related polyols
Diethylene glycol is one of several glycols derived from ethylene oxide. Glycols related to and co-produced with diethylene glycol and having the formula HOCH2CH2(OCH2CH2)nOH are:
n = 0 ethylene glycol ("antifreeze"); monoethylene glycol MEG
n = 1 DEG
n = 2 triethylene glycol, TEG, or triglycol
n = 3 tetraethylene glycol
n = 4 pentaethylene glycol
n > 4 polyethylene glycol
These compounds are all hydrophilic, more so than most diols, by virtue of the ether functionality.
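As an illustrative aside (a Python sketch added here, not part of the article; the function name is mine), the molar mass of each homolog follows from the shared formula HOCH2CH2(OCH2CH2)nOH, i.e. C(2n+2)H(4n+6)O(n+2):

```python
MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard atomic masses, g/mol

def glycol_molar_mass(n):
    # HOCH2CH2(OCH2CH2)nOH has formula C(2n+2) H(4n+6) O(n+2).
    c, h, o = 2 * n + 2, 4 * n + 6, n + 2
    return c * MASS["C"] + h * MASS["H"] + o * MASS["O"]

for n, name in [(0, "ethylene glycol"), (1, "diethylene glycol"),
                (2, "triethylene glycol")]:
    print(f"n = {n}: {name}, {glycol_molar_mass(n):.2f} g/mol")
```

For n = 1 this gives about 106.12 g/mol, the familiar molar mass of DEG; each additional OCH2CH2 unit adds about 44.05 g/mol.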
Uses
Diethylene glycol is used in the manufacture of saturated and unsaturated polyester resins, polyurethanes, and plasticizers. DEG is a precursor to morpholine and 1,4-dioxane. It is a solvent for nitrocellulose, resins, dyes, oils, and other organic compounds. It is a humectant for tobacco, cork, printing ink, and glue. It is also a component of brake fluid, lubricants, wallpaper strippers, artificial fog and haze solutions, and heating/cooking fuel. In personal care products (e.g. skin cream and lotions and deodorants), DEG is often replaced by selected diethylene glycol ethers. Most types of ethylene glycol antifreeze contain a few percent of diethylene glycol, present as a by-product of ethylene glycol production.
DEG is an important industrial desiccant. It absorbs water from natural gas, minimizing the formation of methane hydrates, which can block pipes.
Toxicology
The toxicity of DEG was discovered in 1937. The toxic dose is 0.14 mg/kg body weight and the lethal dose between 1.0 and 1.63 g/kg. Some suggest that the LD50 in adults is about 1 mL/kg, while others suggest that that is the LD30. Because of its adverse effects, DEG is rarely allowed in foods and drugs. The U.S. Code of Federal Regulations allows no more than 0.2% of diethylene glycol in polyethylene glycol when the latter is used as a food additive. In Australia, it is only allowed at less than 0.25% w/w of DEG as an impurity in polyethylene glycol (PEG), even in toothpaste.
Diethylene glycol has "moderate to low" acute toxicity in animal experiments. The LD50 for small mammals is between 2 and 25 g/kg, less toxic than its relative ethylene glycol but still capable of causing toxicity in humans (in high concentrations only). It appears that diethylene glycol may be more hazardous to humans than implied by oral toxicity data in laboratory animals.
Toxicokinetics
Although there is limited information about toxicokinetics in humans, observations in mass poisonings and experimental studies suggest the following information:
Absorption and distribution
The principal method of absorption is through oral ingestion. Dermal absorption is very low, unless it is administered on broken or damaged skin. After ingestion, DEG is absorbed via the gastrointestinal tract and distributed by the bloodstream throughout the body, reaching peak blood concentrations within 30 to 120 minutes. In the liver DEG is metabolized by enzymes.
Metabolism and elimination
At first, scientists thought that DEG was converted in the liver to ethylene glycol, which is poisonous because of the metabolic production of glycolic acid, glyoxylic acid, and ultimately oxalic acid. The major cause of ethylene glycol toxicity is the accumulation of glycolic acid in the body, but accumulation of calcium oxalate crystals in the kidneys can also lead to acute kidney failure. In the case of DEG, calcium oxalate crystals are not deposited in the kidneys, implying that ethylene glycol is not on the DEG metabolic pathway. Rat models suggest that DEG is metabolized in the liver by the enzyme NAD-dependent alcohol dehydrogenase (ADH) to a hydrogen ion, NADH, and 2-hydroxyethoxyacetaldehyde (C4H8O3). Shortly after that, 2-hydroxyethoxyacetaldehyde (C4H8O3) is metabolized by the enzyme aldehyde dehydrogenase (ALDH) to the weak acid 2-hydroxyethoxyacetic acid (HEAA), chemical formula C4H8O4. Later, HEAA leaves the liver through the bloodstream, being partially filtered in the kidneys for elimination.
Mechanisms
Based on available literature, scientists suggest that unmetabolized DEG and HEAA are partially reabsorbed through glomerular filtration. As a consequence, the weak acid HEAA and its metabolites may cause renal delay, leading to metabolic acidosis and further liver and kidney damage.
Signs and symptoms
The symptoms of poisoning typically occur in three characteristic intervals:
First phase: Gastrointestinal symptoms, such as nausea, vomiting, abdominal pain, and diarrhea, develop. Some patients may develop early neurological symptoms like altered mental status, central nervous system depression, and coma, as well as mild hypotension.
Second phase: In one to three days after ingestion (and depending on the dose ingested), patients develop a metabolic acidosis, which causes acute kidney failure, oliguria, increasing serum creatinine concentrations, and later anuria. Other symptoms reported and secondary to acidosis and/or kidney failure are: hypertension, tachycardia and other cardiac dysrhythmias, pancreatitis, and hyperkalemia or mild hyponatremia.
Final phase: At least five to ten days after ingestion, most of the symptoms are related to neurological complications, such as: progressive lethargy, facial paralysis, dysphonia, dilated and nonreactive pupils, quadriplegia, and coma, leading to death.
Treatment
Fomepizole or ethanol should be quickly administered to prevent diethylene glycol being metabolized to the compound or compounds that cause renal damage.
Fomepizole: an alcohol dehydrogenase (ADH) inhibitor with 8000 times more affinity than ethanol. This treatment has minimal adverse effects. However, it is very expensive (about $3000 U.S. per treatment).
Ethanol: ethanol is a competitive ADH substrate. A constant blood concentration of 1 to 1.5 g/L (corresponding to 0.5 to 0.75 mg/L in the breath) should be maintained to acceptably saturate the enzyme. An initial dose of 0.6 to 0.7 g ethanol per kilogram body weight should be given (about 0.8 mL/kg or 0.013 fl.oz/lb). This will cause ethanol intoxication. To avoid adverse effects, frequent serum monitoring and dosage adjustments should be done.
For late diagnosis, when ethanol or fomepizole is ineffective, because DEG has already been metabolized, hemodialysis becomes the only treatment available. Hemodialysis may be administered either alone or in combination with ethanol or fomepizole.
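As a back-of-envelope check of the ethanol figures above (illustrative arithmetic only, not clinical guidance; the 0.789 g/mL density of pure ethanol is an assumed standard value, and the function name is mine):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # pure ethanol at ~20 C (assumed value)

def loading_dose_ml(body_weight_kg, dose_g_per_kg=0.65):
    # Volume of pure ethanol carrying the stated loading dose in grams.
    return body_weight_kg * dose_g_per_kg / ETHANOL_DENSITY_G_PER_ML

per_kg_ml = loading_dose_ml(1)  # ~0.82 mL/kg, matching the ~0.8 mL/kg above
print(f"{loading_dose_ml(70):.0f} mL of pure ethanol for a 70 kg adult")
```

The mid-range dose of 0.65 g/kg divided by the density reproduces the roughly 0.8 mL/kg figure quoted in the text; clinical preparations are diluted, so the administered volume is correspondingly larger.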
Prognosis
The prognosis depends on prompt diagnosis and treatment, owing to the high mortality from DEG intoxication. Patients who survive but develop kidney failure remain dialysis-dependent. All patients are likely to suffer significant morbidity.
Epidemiology
The physical properties of diethylene glycol make it an excellent counterfeit for pharmaceutical-grade glycerine (also called glycerol) or propylene glycol, and has caused many deaths in different countries. Incidents include its use in China as a component of cheap toothpaste, and by winemakers in Europe as an adulterant to create a "sweet" wine.
1937 – The Massengill incident (United States)
In 1937, S. E. Massengill Co., a Tennessee drug company, manufactured sulfanilamide dissolved in diethylene glycol to create a liquid form of the drug. The company tested the new product, Elixir sulfanilamide, for viscosity, appearance and fragrance. At the time, the food and drug laws did not require toxicological analysis before releasing for sale. When 105 people died in 15 states during the months of September and October, the trail led back to the elixir, and the toxic potential of this chemical was revealed. This episode was the impetus for the Federal Food, Drug, and Cosmetic Act of 1938. This law, though extensively amended in subsequent years, remains the central foundation of FDA regulatory authority to the present day.
1969 – South Africa
In Cape Town, South Africa, seven children developed vomiting, diarrhea, and dehydration, and died of kidney failure after administration of over-the-counter sedatives. Soon, patients started to present anuria, acidic breathing, hepatomegaly, and unresponsiveness. Patients were treated with fluid hydration and correction of acidosis, but some were not able to survive. Postmortem examination revealed damage in the kidneys and liver, and laboratory testing found DEG instead of propylene glycol in the sedatives.
1985 – Spain
Patients being treated for burns developed sudden anuric kidney failure. Further investigation revealed all patients were treated with topical silver sulfadiazine ointment that contained 7 g/kg of DEG. This event caused the death of five patients.
1985 – Wine scandal (Austria)
During the month of July 1985, Austrian wines were found to contain up to 1,000 parts per million of DEG, giving them a desirable sweetness. Austrian wine was banned in many countries and the U.S. Bureau of Alcohol, Tobacco and Firearms started to test all imported wine.
In November, The New York Times published a wine recall that the Federal Government released after the Bureau of Alcohol, Tobacco and Firearms tested 1,000 bottles. 45 Austrian, 5 German and 12 Italian wines tested positive for DEG. Some wines contained less than 10 parts per million of DEG, a small amount that could not be detected by laboratory analysis in Europe. This triggered the installation of more sensitive laboratory equipment in Banfi laboratories, Italy, and stronger alcohol regulations in Austria.
After recalling millions of wine bottles, the Austrian Government experienced difficulty in finding a way to destroy the product. During September 1986, the Ministry of Public Works started testing a mixture of wine with salt to melt hazardous ice during winter. The primary results revealed that the mixture was more effective than using salt alone. The next year, an Austrian electric power plant (Österreichische Draukraftwerke) in Carinthia announced that technicians developed a way to produce energy through burning 30 million liters of contaminated wine.
1986 – India
At a hospital in Bombay, India, patients admitted for a variety of health problems were prescribed glycerin for its osmotic diuretic effect, but began to develop kidney failure. Fourteen patients received hemodialysis, but the treatment failed. The episode resulted in the deaths of 21 patients and the discovery that the glycerin was contaminated with 18.5% v/v DEG.
1990 – Nigeria
During the summer months, 47 children were admitted to the Jos University Teaching Hospital, Nigeria, with anuria, fever and vomiting. The children later developed kidney failure and died. All had received paracetamol (acetaminophen) syrup to treat upper respiratory infections associated with malaria. Once physicians identified a suspect paracetamol syrup, samples were shipped to the Centers for Disease Control and Prevention (CDC) in the U.S., which identified DEG. It was assumed that DEG had been used as a substitute for propylene glycol, and the incident prompted the Nigerian government to develop pharmaceutical quality-control guidelines.
1990–1992 – Bangladesh
In Bangladesh between 1990 and 1992, 339 children developed kidney failure, and most of them died, after being given paracetamol (acetaminophen) syrup contaminated with diethylene glycol. The outbreak forced the government to ban the sale of paracetamol elixirs in December 1992, after which admissions for kidney failure declined by 53% and admissions for unexplained kidney failure by 84%.
1992 – Argentina
A propolis syrup manufactured by Huilen Laboratories in Buenos Aires, Argentina, contained between 24 and 66.5% DEG, and caused the death of 29 people.
1995–1996 – Haiti
In the poorest country in the Western Hemisphere, disease outbreaks are not recognized unless widespread or unusual. Between November 1995 and June 1996, almost 109 children admitted to the University Hospital in Port-au-Prince, Haiti, presented with acute kidney failure. By June 1996, with no idea what was causing the epidemic, the Pan American Health Organization (PAHO) Haiti representative contacted the World Health Organization (WHO, the parent agency of PAHO), and WHO requested that the Centers for Disease Control and Prevention investigate.
Lead CDC investigator Dr. Katherine O'Brien conducted a case-control investigation, looking for potential clues to the epidemic. The study revealed a strong association between ingestion of two locally produced acetaminophen liquid products (Afebril and Valodon) and illness. Laboratory testing at the CDC of samples obtained from parents revealed significant contamination with DEG. The factory of the manufacturer, Pharval, was subsequently investigated by Dr. Joel Selanikio (also of the CDC, and an Epidemic Intelligence Service classmate of O'Brien). Testing of medication samples taken from the factory, by both the CDC and an independent commercial lab in Miami, revealed DEG contamination of 16.4% and higher. With the technology available at the time, the CDC determined that the glycerin used in the syrup preparation was contaminated with approximately 24% DEG. As a result of the case-control findings and the subsequent factory investigation, the Ministry of Health issued public warnings, and bottles of the two medications were removed from pharmacy shelves and destroyed. These measures quickly ended the epidemic.
Only 88 of the children's deaths were recalled by doctors or documented in medical records. Nearly half of the victims were under the age of two.
By the end of June 1996, the FDA had traced the counterfeit glycerin back to Chemical Trading and Consulting (a German broker), which had bought 72 barrels of the syrup from Vos B.V., a Dutch company. Vos's records revealed the syrup had been bought from Sinochem International Chemicals Company through a German trader, Metall-Chemie. In July 1996, the American Embassy in China contacted Sinochem and requested a list of Chinese glycerin makers, but the company refused to reveal the names. It was not until September 1996 that Sinochem identified the manufacturer of the tainted syrup as the Tianhong Fine Chemicals Factory. While the FDA tried to determine Tianhong's address, Chinese officials were reluctant to become involved. A year and a half after the FDA began to trace the poisonous shipments, an inspector, Ted Sze, finally visited the Tianhong Fine Chemicals Factory in Dalian, northeastern China; by then there was nothing to be done, as the plant had already been shut down. Dutch authorities assessed a $250,000 fine against Vos B.V. for not alerting anyone when its own tests of the syrup found impurities.
2006 – China
Wang Guiping, a tailor, discovered how easy it was to enter China's pharmaceutical supply business and earn extra money. Records revealed that, to fool buyers, Wang falsified his license and laboratory analysis reports.
Wang declared that after making his first order of counterfeit syrup, he swallowed some of it; once satisfied that he was fine, he shipped it to Qiqihar No. 2 Pharmaceutical in 2005. Some time later, Wang found a reference to diethylene glycol in a chemistry book, and he performed no taste test on the second batch of syrup, containing diethylene glycol, that he manufactured for Qiqihar No. 2 Pharmaceutical. The counterfeit syrup ended up in ampules of Amillarisin A, a medication for gall bladder problems; a special pediatric enema fluid; blood vessel disease injections; an intravenous pain reliever; and an arthritis medication.
In April 2006, the Guangdong Province Hospital of Guangzhou began administering Amillarisin A to their patients. Soon thereafter, patients died after receiving the medication. Wang was caught and Qiqihar No. 2 Pharmaceutical was shut down by the authorities. Besides Wang, five employees were prosecuted.
2006 – Panama
In late September 2006, the Arnulfo Arias Madrid Hospital in Panama City was filling with patients presenting contradictory symptoms. The symptoms seemed to match Guillain–Barré syndrome, but these patients were also losing their ability to urinate, a symptom not associated with Guillain–Barré. With the death rate of the mysterious illness nearing 50%, hospital management decided to isolate all affected patients in a single large room so that doctors could compare notes and theories. Soon, patients from other parts of the country began arriving at hospitals. Doctors had no idea what was happening: the mysterious illness was striking elderly citizens with histories of hypertension and diabetes. About half were receiving lisinopril (a blood pressure medicine), and many did not clearly remember whether they had been taking other drugs. Suspecting something wrong with the lisinopril, authorities removed it from pharmacies while the U.S. Food and Drug Administration conducted laboratory analyses, which showed the blood pressure drug was safe; CDC epidemiologists were then invited to participate.
When a patient admitted for a heart attack developed the mysterious illness in the hospital, Dr. Nestor Sosa, an infectious disease specialist, analyzed the medical record and noticed the expectorant cough syrup: because patients treated with lisinopril develop a cough (a common side effect of ACE inhibitors), they were routinely prescribed an expectorant. Biological samples and the syrup were immediately sent by jet to the CDC for analysis. When urine analyses for a series of metals, pesticides and their metabolites came back negative, CDC scientists recalled the Nigeria and Haiti incidents. Using modern laboratory equipment, the CDC analyzed the samples and confirmed the result: they contained approximately 8% v/v DEG. The raw glycerin was later analyzed and found to contain 22.2% v/v DEG.
The Panamanian Government mounted a nationwide campaign, collecting around 6,000 bottles of cough syrup, along with three other products made with the tainted glycerin, manufactured by the Social Security laboratories. The 46 barrels of syrup had been bought by the Social Security laboratories through a Panamanian middleman, Grupo Comercial Medicom, which bought the product from Rasfer Internacional, a Spanish company; Rasfer in turn received the product from CNSC Fortune Way, which had bought it from the Taixing Glycerine Factory. At the request of the United States, the State Food and Drug Administration of China investigated Taixing Glycerine Factory and CNSC Fortune Way, but the agency concluded the matter was not under its jurisdiction because the factory was not certified to make medicine.
Taixing sold the syrup as "TD glycerin", wording that neither Chinese inspectors nor Spanish medical authorities understood; "TD" in fact abbreviated a Chinese word meaning "substitute". A New York Times reporter tried to obtain a comment from CNSC Fortune Way at CPHI Worldwide (the world's largest annual pharmaceutical convention), held in Milan, Italy, in 2007, but its representatives refused to comment.
In August 2009, the Supreme Court decided to send the diethylene glycol file to the Public Ministry for an extension of the investigation. The following month, the Toxicology Department of the Institute of Legal Medicine and Forensic Science published a list of 1,155 people whose medicine bottles had tested positive for DEG; only about 3,000 of the 6,000 bottles collected had been analyzed. The fiscal attorney urged affected citizens to approach the Public Ministry to update the official legal file and undergo further medical analysis. Two months later, findings revealed that 145 people had been proven to have died of DEG intoxication, with 953 further lawsuits still to be investigated by the district attorney.
The New York Times reported that Taixing was closed by the Chinese government and that CNSC Fortune Way, also owned by the Chinese government, was never sanctioned. In Spain, Rasfer Internacional declared bankruptcy after Medicom's lawyer filed a $400 million lawsuit in July 2008. Spanish authorities are prosecuting Asunción Criado, general manager of Rasfer Internacional, S.A., and await the hearings of the Panamanian citizens René Luciani (former Social Security director) and Jéssica Rodríguez (former national purchasing director). Meanwhile, in Panama, De la Cruz, the legal representative of Medicom, remains in jail pending trial; seventeen other people have also been charged in relation to the incident. Panama awaits the extradition of Criado for her hearing.
Panama's case led the CDC to set a standardized methodology for DEG identification, in the hope of enabling a more timely response to future events. The agency also identified urinary DEG as a biomarker of DEG exposure. The United States Food and Drug Administration also issued an industry guidance document "intended to alert pharmaceutical manufacturers, pharmacy compounders, repackers, and suppliers to the potential public health hazard of glycerin contaminated with diethylene glycol (DEG)" and recommended appropriate testing procedures for the use of glycerin.
During June 2011, the number of confirmed deaths according to the official list rose to 219 victims.
2007 – Worldwide toothpaste incident
In May 2007, a Panamanian named Eduardo Arias discovered a 59-cent toothpaste whose label listed DEG as an ingredient. Panamanian officials traced the toothpaste to a local company in the Colón Free Trade Zone, which had bought the product in China and had already re-exported toothpaste to Costa Rica, the Dominican Republic and Haiti, prompting Panama to issue a local warning. By the end of the month, the Chinese government had committed to investigate the supposedly tainted toothpaste recalled in Panama and the Dominican Republic, but stated, citing a study from 2000, that a toothpaste containing 15.6% DEG was not dangerous.
On June 1, 2007, the FDA warned consumers to avoid toothpaste from China, although there was no information on whether such toothpastes had already entered the US, and began testing all imported Chinese toothpaste. Days later, Colgate-Palmolive found counterfeit toothpaste bearing its name, contaminated with DEG, at dollar-type discount stores in New York, New Jersey, Pennsylvania and Maryland. The toothpaste was labeled "Manufactured in South Africa" and contained misspellings such as "isclinically", "SOUTH AFRLCA" and "South African Dental Assoxiation". Although no serious injuries were reported, several people in the eastern US reported headaches and pain after using the product. It was later discovered that a great number of the tainted tubes had ended up in hospitals for the mentally ill, prisons, juvenile detention centers, other hospitals and many other state institutions.
In July 2007, health authorities in the UK detected counterfeit Sensodyne toothpaste on sale at a car boot sale in Derbyshire. Other countries that recalled Chinese-made toothpaste included Belize, Canada, Mozambique, Saudi Arabia, New Zealand, Spain, Italy, Japan and Ireland; in addition, a hotel supplier in Indianapolis, Indiana, had distributed Chinese toothpaste in Barbados, Belgium, Bermuda, Britain, Canada, the Dominican Republic, France, Germany, Ireland, Italy, Mexico, Spain, Switzerland, the Turks and Caicos, the United Arab Emirates and the United States. What began as a local alert revealed a global problem spanning more than 30 countries and more than 30 brands. The worldwide outcry led Chinese officials to ban the practice of using diethylene glycol in toothpaste.
2008 – Nigeria
In late November 2008, infants began to die after developing unexplained fevers and vomiting. Investigations revealed that all had taken the medicine My Pikin Baby, a teething mixture tainted with diethylene glycol. The poison caused the deaths of at least 84 Nigerian children between the ages of two months and seven years.
The Nigerian government traced the diethylene glycol to an unlicensed chemical dealer in Lagos, who had sold it to a local pharmaceutical manufacturer. Barewa Pharmaceuticals was shut down, the product was pulled from the shelves, and 12 people were arrested in connection with the incident. As the second incident in Nigeria involving counterfeit glycerine, it prompted the Nigerian National Agency for Food and Drug Administration and Control (NAFDAC) to adopt zero tolerance for counterfeits.
2019/2020 – Brazil
In December 2019, some people in the city of Belo Horizonte, initially all from the same neighbourhood, started having symptoms such as nausea, vomiting, abdominal pain, acute kidney failure, facial nerve paralysis, blurred vision, temporary blindness and sensory changes. On 9 January 2020, a police report indicated quantities of diethylene glycol in one brand of beer from the small upscale brewery Backer that could have poisoned 18 people in Belo Horizonte and other cities in Minas Gerais state. On 17 January, the police confirmed the fourth death from symptoms matching DEG poisoning, and DEG contamination had been found in eight brands of beer from the same brewery. On 9 June, the police indicted 11 people, including brewery owners and employees, for manslaughter, unintentional bodily harm and food contamination. On 18 July, the 10th victim died in a Belo Horizonte hospital, a 65-year-old man who had been hospitalized since December 2019 due to the poisoning. The investigation revealed that DEG had been used as a coolant for the brewery equipment, in what should have been a closed circuit, but an undetected leak in the system contaminated some batches of beer.
2020 – India
In the first week of 2020, around 17 children from Ramnagar, in the union territory of Jammu and Kashmir, were hospitalised, more than half of whom died of kidney failure. After investigating, the regional drug controller authorities found that a faulty batch of the Coldbest PC cough syrup contained 34.97% diethylene glycol, which caused the poisonings and subsequent renal failures. The product was recalled, and after an investigation the Drug Controller General of India, V. G. Somani, said at India Pharma 2020 that good manufacturing practice (GMP) had not been followed and that there was negligence in the production process itself. The Himachal Pradesh government is filing a criminal case against the company and its executives.
2022 – India/Gambia/Indonesia/Uzbekistan
The WHO issued a medical product alert for four "contaminated" Indian pediatric medicines, cough and cold syrups produced by Maiden Pharmaceuticals Limited of Sonepat, Haryana, saying these drugs, identified in the Gambia, had been potentially linked with acute kidney injuries and 70 deaths among children in the west African country.
The WHO said laboratory analysis of samples of each of the four products confirmed that they contained unacceptable amounts of diethylene glycol and ethylene glycol as contaminants.
Subsequently, on 21 October 2022, 99 children were reported dead in Indonesia after ingesting cough syrups, and as a result the Indonesian authorities banned all syrup medicines. However, they advised that the syrups suspected of causing the deaths in the Gambia were not sold locally in Indonesia.
See also
Counterfeit medications
Ethylene
Ethylene glycol poisoning
Polyethylene glycol
1985 diethylene glycol wine scandal
References
Sources
Merck Index, 12th Edition, 3168.
Alcohol solvents
Diols
Adulteration
Mass poisoning
Nephrotoxins
Glycol ethers | Diethylene glycol | [
"Chemistry"
] | 6,208 | [
"Adulteration",
"Drug safety"
] |
2,240,873 | https://en.wikipedia.org/wiki/Scripted%20sequence | In video games, a scripted sequence is a pre-defined series of events that occur when triggered by player location or actions that play out in the game engine.
Function
Some scripted sequences are used to play short cutscenes over which the player has little control. More commonly, however, they are used in games such as Half-Life or Call of Duty to introduce new enemies or challenges in a seemingly surprising manner while the player is still playing, or to present further plot points without interrupting the player to make them watch a cutscene. The intended result of this style of presentation is to increase immersion and to maintain a smooth-flowing experience that keeps the player's interest.
A scripted sequence can be triggered in a number of ways, such as by a timer, a checkpoint, or the player's progress through the game. For players who speedrun video games, skipping scripted sequences that would otherwise slow down their completion time requires skill, and being able to manipulate the game's hit boxes so that a sequence is never triggered is necessary for fast completions.
Examples in-game
Half-Life uses scripted sequences throughout the game (aside from one short cutscene). Walking near other characters can trigger scripted sequences such as dialog. These dialog sequences tell the game's story in a different manner and are sometimes there simply for entertainment purposes.
Gears of War uses scripted sequences between sections of gameplay to provide objective reminders and tell the game's story without the use of cutscenes. The game triggers a playable scripted sequence once all of the enemies in an area have been cleared; usually these sequences play while the player moves to the next area.
Resident Evil 4 has many examples of scripted sequences that utilize a quick time event to feature more action-packed game play. As the player navigates the level, they must react to the event to continue.
Criticisms
Games such as Call of Duty have been criticized for their reliance on these sequences, as many feel they guide the player through the game by the invisible hand of the developers, blocking progression with invisible walls until a scripted sequence has played out. The use of scripted sequences may also diminish replay value, as the surprise effect is lost on subsequent play-throughs.
References
Video game terminology
Fiction forms | Scripted sequence | [
"Technology"
] | 459 | [
"Computing terminology",
"Video game terminology"
] |
2,241,243 | https://en.wikipedia.org/wiki/Sodium%20ferulate | Sodium ferulate, the sodium salt of ferulic acid, is a compound used in traditional Chinese medicine thought to be useful for treatment of cardiovascular and cerebrovascular diseases and to prevent thrombosis, although there is no high-quality clinical evidence for such effects. It is found in the root of Angelica sinensis. As of 2005, it was under preliminary clinical research in China. Ferulic acid can also be extracted from the root of the Chinese herb Ligusticum chuanxiong.
Kraft Foods patented the use of sodium ferulate to mask the aftertaste of the artificial sweetener acesulfame potassium.
References
Dietary supplements
Food additives
Bitter-masking compounds
O-methylated hydroxycinnamic acids
Salts of carboxylic acids
Organic sodium salts | Sodium ferulate | [
"Chemistry"
] | 165 | [
"Salts of carboxylic acids",
"Organic sodium salts",
"Salts"
] |
2,241,254 | https://en.wikipedia.org/wiki/OpenEHR | openEHR is an open standard specification in health informatics that describes the management and storage, retrieval and exchange of health data in electronic health records (EHRs). In openEHR, all health data for a person is stored in a "one lifetime", vendor-independent, person-centred EHR. The openEHR specifications include an EHR Extract specification but are otherwise not primarily concerned with the exchange of data between EHR-systems as this is the focus of other standards such as EN 13606 and HL7.
The openEHR specifications are maintained by the openEHR Foundation, a not-for-profit foundation supporting the open research, development, and implementation of openEHR EHRs. The specifications are based on a combination of 15 years of European and Australian research and development into EHRs and new paradigms, including what has become known as the archetype methodology for specification of content.
The openEHR specifications include information and service models for the EHR, demographics, clinical workflow and archetypes. They are designed to be the basis of a medico-legally sound, distributed, versioned EHR infrastructure.
Architecture
The architecture of the openEHR specifications as a whole consists of the following key elements:
information models (aka 'Reference Model');
the archetype formalism;
the portable archetype query language;
service models / APIs.
The use of the first two enables the development of 'archetypes' and 'templates', which are formal models of clinical and related content, and constitute a layer of de facto standards of their own, far more numerous than the base specifications on which they are built. The query language enables queries to be built based on the archetypes, rather than physical database schemata, thus decoupling queries from physical persistence details. The service models define access to key back-end services, including the EHR Service and Demographics Service, while a growing set of lightweight REST-based APIs based on archetype paths are used for application access.
The openEHR Architecture Overview provides a summary of the architecture and the detailed specifications.
Reference model
A central part of the openEHR specifications is the set of information models, known in openEHR as 'reference models'. The models constitute the base information models for openEHR systems, and define the invariant semantics of the Electronic Health Record (EHR), EHR Extract, and Demographics model, as well as supporting data types, data structures, identifiers and useful design patterns.
Some of the key classes in the EHR component are the ENTRY classes, whose subtypes include OBSERVATION, EVALUATION, INSTRUCTION, ACTION and ADMIN_ENTRY, as well as the Instruction State Machine, a state machine defining a standard model of the lifecycle of interventions, including medication orders, surgery and other therapies.
Archetypes and multi-level modelling
A key innovation in the openEHR framework is to leave all specification of clinical information out of the information model (also known as "reference model") and instead to provide a powerful means of expressing definitions of the content clinicians and patients need to record that can be directly consumed at runtime by systems built on the Reference Model. This is justified by the need to deal scalably with the generic problem in health of a very large, growing, and ever-changing set of information types.
Clinical content is specified in terms of two types of artefact which exist outside the information model. The first, known as "archetypes" provides a place to formally define re-usable data point and data group definitions, i.e. content items that will be re-used in numerous contexts. Typical examples include "systemic arterial blood pressure measurement" and "serum sodium". Many such data points occur in logical groups, e.g. the group of data items to document an allergic reaction, or the analytes in a liver function test result. Some archetypes contain numerous data points, e.g. 50, although a more common number is 10–20. A collection of archetypes can be understood as a "library" of re-usable domain content definitions, with each archetype functioning as a "governance unit", whose contents are co-designed, reviewed and published.
The second kind of artefact is known in openEHR as a "template", and is used to logically represent a use case-specific data-set, such as the data items making up a patient discharge summary, or a radiology report. A template is constructed by referencing relevant items from a number of archetypes. A template might only require one or two data points or groups from each archetype. In terms of the technical representation, openEHR templates cannot violate the semantics of the archetypes from which they are constructed. Templates are almost always developed for local use by software developers and clinical analysts. Templates are typically defined for GUI screen forms, message definitions and document definitions, and as such, correspond to "operational" content definitions.
The justification for the two layers of models over and above the information model is that if data set definitions consist of pre-defined data points from a library of such definitions, then all recorded data (i.e. instances of templates) will ultimately just be instances of the standard content definitions. This provides a basis for standardised querying to work. Without the archetype "library" level, every data set (i.e. chunk of operational content) is uniquely defined and a standard approach to querying is difficult.
Accordingly, openEHR defines a method of querying based on archetypes, known as AQL (Archetype Querying Language).
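As an illustration (not drawn from the specification text above), an AQL query for raised blood pressure readings can be phrased against the published openEHR blood pressure observation archetype. The at-coded paths below are node identifiers defined inside that archetype and should be read as indicative rather than authoritative — in practice they are taken from the archetype itself:

```aql
-- Return systolic/diastolic readings with systolic >= 140 for a given EHR.
-- Archetype id and at-codes follow the published blood_pressure archetype;
-- verify exact paths against the archetype before use.
SELECT
    o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic,
    o/data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude AS diastolic
FROM EHR e[ehr_id/value = $ehrUid]
    CONTAINS COMPOSITION c
        CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.blood_pressure.v1]
WHERE o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude >= 140
```

Because the FROM and SELECT clauses reference archetype identifiers and paths rather than tables and columns, the same query can in principle run unchanged on any conformant openEHR repository, whatever its physical schema.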
Notably, openEHR has been used to model shared care plans, with archetypes designed to accommodate the concepts of the shared care plan.
While individual health records may be vastly different in content, the core information in openEHR data instances always complies to archetypes. The way this works is by creating archetypes which express clinical information in a way that is highly reusable, even universal in some cases.
Archetype formalism
openEHR archetypes are expressed in "Archetype Definition Language", an openEHR public specification. Two versions are available: ADL 1.4, and ADL 2, a new release with better support for specialisation, redefinition and annotations, among other improvements. The 1.4 release of ADL and its "object model" counterpart Archetype Object Model (AOM) are the basis for the CEN and ISO "Archetype Definition Language" standard (ISO standard 13606-2).
Templates have historically been developed in a simple, de facto industry-developed XML format, known as ".oet", after the file extension. ADL 2 defines a way to express templates seamlessly with archetypes, using extensions of the ADL language.
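As a rough sketch of the formalism, an ADL 1.4 archetype constraining a single quantity data point has the following overall shape. This is heavily trimmed and uses a hypothetical archetype id; it illustrates the section layout (header, concept, language, definition, ontology) rather than being a valid publishable archetype:

```adl
archetype (adl_version=1.4)
    openEHR-EHR-OBSERVATION.body_weight_demo.v1  -- hypothetical id, for illustration

concept
    [at0000]  -- Body weight (demo)

language
    original_language = <[ISO_639-1::en]>

definition
    OBSERVATION[at0000] matches {  -- root node of the archetype
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {1..*} matches {
                    EVENT[at0002] matches {
                        data matches {
                            ITEM_TREE[at0003] matches {
                                items matches {
                                    ELEMENT[at0004] matches {  -- the reusable data point
                                        value matches {
                                            C_DV_QUANTITY <
                                                list = <
                                                    ["1"] = <
                                                        units = <"kg">
                                                        magnitude = <|0.0..1000.0|>
                                                    >
                                                >
                                            >
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

ontology
    term_definitions = <
        ["en"] = <
            items = <
                ["at0000"] = <
                    text = <"Body weight (demo)">
                    description = <"Illustrative root concept">
                >
                ["at0004"] = <
                    text = <"Weight">
                    description = <"Measured body weight">
                >
            >
        >
    >
```

A template for a specific data set (say, a discharge summary) would then reference ELEMENT[at0004], alongside items from other archetypes, rather than re-defining the constraint.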
Quality assurance of archetypes
Various principles for developing archetypes have been identified. For example, a set of openEHR archetypes needs to be quality managed to conform to a number of axioms such as being mutually exclusive. The archetypes can be managed independently from software implementations and infrastructure, in the hands of clinician groups to ensure they meet the real needs on the ground. Archetypes are designed to allow the specification of clinical knowledge to evolve and develop over time. Challenges in implementation of information designs expressed in openEHR centre on the extent to which actual system constraints are in harmony with the information design.
In the field of Electronic health records there are a number of existing information models with overlaps in their scope which are difficult to manage, such as between HL7 V3 and SNOMED CT. The openEHR approach faces harmonisation challenges unless used in isolation.
International collaboration
Following the openEHR approach, the use of shared and governed archetypes globally would ensure openEHR health data could be consistently manipulated and viewed, regardless of the technical, organisational and cultural context. This approach also means the actual data models used by any EHR are flexible, given that new archetypes may be defined to meet future needs of clinical record keeping. Recently, work in Australia has demonstrated how archetypes and templates may be used to facilitate the use of legacy health record and message data in an openEHR health record system, and output standardised messages and CDA documents.
The prospect of gaining agreement on design and on forms of governance at the international level remains speculative, with influences ranging from the diverse medico-legal environments to cultural variations, to technical variations such as the extent to which a reference clinical terminology is to be integral.
The openEHR framework is consistent with the Electronic Health Record Communication Standard (ISO 13606), and the Archetype Object Model 2 (AOM2) has been officially accepted by ISO TC 215 as the draft specification for the 2017 revision of ISO 13606:2.
International adoption
openEHR archetypes are being used by the National e-Health Transition Authority of Australia, the UK NHS Health and Social Care Information Centre (HSCIC), the Norwegian Nasjonal IKT organisation, and the Slovenian Ministry of Health.
openEHR has been selected as the basis for the standardised EHR in Brazil.
It is beginning to be utilised in commercial solutions throughout the world, including those produced by the openEHR Industry Partners.
Clinical Knowledge Manager (CKM)
One of the outcomes of openEHR modelling approach is the open development of archetypes, templates and terminology subsets to represent health data. Due to the open nature of openEHR, these structures are publicly available to be used and implemented in health information systems. Community users are able to share, discuss and approve these structures in a collaborative repository known as the Clinical Knowledge Manager (CKM). Some currently used openEHR CKMs:
openEHR Clinical Knowledge Manager
NEHTA Clinical Knowledge Manager
UK Clinical Knowledge Manager
Norwegian National ICT Clinical Knowledge Manager
Slovenian MoH Clinical Knowledge Manager
See also
Archetype (information science)
European Institute for Health Records
Electronic Health Record Communication (ISO/CEN EN 13606 - EHRcom)
Health Level 7
Health Informatics Service Architecture (HISA)
HIPAA
ProRec
SNOMED CT
References
External links
openEHR Foundation website
openEHR specifications
openEHR 2015 white paper
Health informatics
Standards for electronic health records | OpenEHR | [
"Biology"
] | 2,156 | [
"Health informatics",
"Medical technology"
] |
2,241,544 | https://en.wikipedia.org/wiki/Br%C3%BA%20na%20B%C3%B3inne | (, "mansion or palace of the Boyne"), also called the Boyne Valley tombs, is an ancient monument complex and ritual landscape in County Meath, Ireland, located in a bend of the River Boyne. It is one of the world's most important Neolithic landscapes, comprising at least ninety monuments including passage tombs, burial mounds, standing stones and enclosures. The site is dominated by the passage tombs of Newgrange (), Knowth () and Dowth (), built during the 32nd century BC. Together these have the largest assemblage of megalithic art in Europe. The associated archaeological culture is called the "Boyne culture".
Brú na Bóinne is also an important archaeoastronomical site; several of the passage tombs are aligned with the winter solstice and equinoxes. The area continued to be a site of ritual and ceremonial activity in the later Bronze Age and Iron Age. In Irish mythology, the tombs are said to be portals to the Otherworld and dwellings of the deities, particularly The Dagda and his son Aengus. They began to be studied by antiquarians in the 18th century, and archaeological excavations began in the 20th century, when some of the passage tombs underwent restoration.
Since 1993, the site has been a World Heritage Site designated by UNESCO as "Brú na Bóinne - Archaeological Ensemble of the Bend of the Boyne".
Location
The area is located eight kilometers west of Drogheda in County Meath, Ireland, in a bend of the River Boyne. It is around 40 kilometers north of Dublin.
Brú na Bóinne is surrounded on its southern, western and eastern sides by the Boyne; additionally, a small tributary of the Boyne, the River Mattock, runs along the northern edge, almost completely surrounding Brú na Bóinne with water. All but two of the prehistoric sites are on this river peninsula.
Description
The area has been a centre of human settlement for at least 6,000 years, but the major structures date to around 5,000 years ago, from the Neolithic period.
The site is a complex of Neolithic mounds, chamber tombs, standing stones, henges and other prehistoric enclosures, some dating from as early as the 35th–32nd centuries BC. The site thus predates the Egyptian pyramids and was built with sophistication and a knowledge of science and astronomy, which is most evident in the passage grave of Newgrange. The site is often referred to as the "Bend of the Boyne", and this is often (incorrectly) taken to be a translation of Brú na Bóinne. The associated archaeological culture is often called the Boyne culture.
The site covers 780 ha (1,927 acres) and contains around 40 passage graves, as well as other prehistoric sites and later features. The majority of the monuments are concentrated on the north side of the river. The most well-known sites within Brú na Bóinne are the passage graves of Newgrange, Knowth and Dowth, all known for their collections of megalithic art. Each stands on a ridge within the river bend and two of the tombs, Knowth and Newgrange, appear to contain stones re-used from an earlier monument at the site. Newgrange is the central mound of the Boyne Valley passage grave cemetery, the circular cairn in which the cruciform burial chamber is sited having a diameter of over 100 metres. Knowth and Dowth are of comparable size. There is no in situ evidence for earlier activity at the site, save for the spotfinds of flint tools left by Mesolithic hunters.
The passage tombs were constructed beginning in around 3,300 BC and work stopped around 2,900 BC. The three largest tombs of Newgrange, Knowth and Dowth may have been constructed to be visible from each other and from northern and southern approaches along the River Boyne, as part of a scheme to "bind the previously disparate elements of the extended passage tomb cemetery into a more clearly defined prehistoric numinous precinct". The area continued to be used for habitation and ritual purposes until the early Bronze Age, during which a number of embanked, pit and wooden post circles (collectively referred to as "henges") were built. Artifacts from the later Bronze Age are comparatively inconspicuous: some cist and ring ditch burials and burnt mounds. For the Iron Age there is only evidence of sporadic activity, such as burials near Knowth and at Rosnaree. Valuable items from the Roman period such as coins and jewelry were found as votive offerings near Newgrange.
Numerous other enclosure and megalith sites have been identified within the river bend and have been given simple letter designations, such as the M Enclosures. In addition to the three large tombs, several other ceremonial sites constitute the complex including:
Cloghalea Henge
Townleyhall passage grave
Monknewtown henge and ritual pond
Newgrange cursus
Astronomical alignments
Each of the three main megalith sites is archaeoastronomically significant. Newgrange and Dowth have winter solstice solar alignments, while Knowth is oriented towards the March equinox (spring equinox) and the September equinox (autumn equinox). In addition, the immediate environs of the main sites have been investigated for other possible alignments. The layout and design of the Brú na Bóinne complex across the valley has also been studied for astronomical significance.
Visitor centre
All access to Newgrange and Knowth is by guided tour only, with tours beginning at the Visitor Centre, opened in 1997 in Donore, County Meath. The visitor centre is located on the south side of the River Boyne; the monuments are on the north side and are accessed via a shuttle with a tour guide.
Public transport access
Bus Éireann route 163 operates between Drogheda and the Brú na Bóinne Visitor Centre via Donore.
The nearest railway station is Drogheda railway station approximately 9 kilometres distant.
Brú na Bóinne National Park
The site will form the basis of a national park. In September 2023 the state bought Dowth Hall and 552 acres of surrounding land.
See also
List of archaeoastronomical sites by country
References
Further reading
Lewis-Williams, D. and Pearce, D., Inside the Neolithic Mind, Thames and Hudson, London, 2005,
O'Kelly, M. J., Newgrange: archaeology, art, and legend, London: Thames and Hudson, Ltd., 1982.
Stout, Geraldine, Newgrange and the Bend of the Boyne, 2002, Cork University Press, , 9781859183410, google books
External links
UNESCO's World Heritage Site description
Official website
Newgrange.com
Knowth.com
Brú na Bóinne - Archaeological Ensemble of the Bend of the Boyne UNESCO Collection on Google Arts and Culture
Brú na Bóinne in myth and folklore
Archaeological sites in County Meath
Neolithic sites of Europe
World Heritage Sites in the Republic of Ireland
Megalithic monuments in Ireland
Tourist attractions in County Meath
Boyne culture
Bronze Age sites in Europe
Archaeological cultures in Ireland
Buildings and structures completed in the 4th millennium BC
Winter solstice
Spring equinox
Autumn equinox | Brú na Bóinne | [
"Astronomy"
] | 1,518 | [
"Astronomical events",
"Winter solstice"
] |
2,241,832 | https://en.wikipedia.org/wiki/Renal%20osteodystrophy | Renal osteodystrophy is currently defined as an alteration of bone morphology in patients with chronic kidney disease (CKD). It is one measure of the skeletal component of the systemic disorder of chronic kidney disease-mineral and bone disorder (CKD-MBD). The term "renal osteodystrophy" was coined in 1943, 60 years after an association was identified between bone disease and kidney failure.
The traditional types of renal osteodystrophy have been defined on the basis of turnover and mineralization as follows: 1) mild, slight increase in turnover and normal mineralization; 2) osteitis fibrosa, increased turnover and normal mineralization; 3) osteomalacia, decreased turnover and abnormal mineralization; 4) adynamic, decreased turnover and acellularity; and, 5) mixed, increased turnover with abnormal mineralization. A Kidney Disease: Improving Global Outcomes report has suggested that bone biopsies in patients with CKD should be characterized by determining bone turnover, mineralization, and volume (TMV system).
On the other hand, CKD-MBD is defined as a systemic disorder of mineral and bone metabolism due to CKD manifested by either one or a combination of the following: 1) abnormalities of calcium, phosphorus, PTH, or vitamin D metabolism; 2) abnormalities in bone turnover, mineralization, volume, linear growth, or strength (renal osteodystrophy); and 3) vascular or other soft-tissue calcification.
Signs and symptoms
Renal osteodystrophy may exhibit no symptoms; if it does show symptoms, they can include:
Bone pain
Joint pain
Muscle pain
Itching
Bone deformation
Bone fracture
The broader concept of chronic kidney disease-mineral and bone disorder (CKD-MBD) is not only associated with fractures but also with cardiovascular calcification, poor quality of life and increased morbidity and mortality in CKD patients (the so-called bone-vascular axis). These clinical consequences are acquiring such an importance that scientific working groups (such as the ERA CKD-MBD Working Group) or international initiatives are trying to promote research in the field including basic, translational and clinical research.
Pathogenesis
Renal osteodystrophy has been classically described to be the result of hyperparathyroidism secondary to hyperphosphatemia combined with hypocalcemia, both of which are due to decreased excretion of phosphate by the damaged kidney.
Low activated vitamin D3 levels are a result of the damaged kidneys' inability to convert vitamin D3 into its active form, calcitriol, and result in further hypocalcemia. High levels of fibroblast growth factor 23 seem now to be the most important cause of decreased calcitriol levels in CKD patients.
In CKD, the excessive production of parathyroid hormone increases the bone resorption rate and leads to histologic bone signs of secondary hyperparathyroidism. However, in other situations, the initial increase in parathyroid hormone and bone remodeling may be slowed excessively by a multitude of factors, including age, ethnic origin, sex, and treatments such as vitamin D, calcium salts, calcimimetics, steroids, and so forth, leading to low bone turnover or adynamic bone disease.
Both high and low bone turnover diseases are currently observed equally in CKD patients treated by dialysis, and all types of renal osteodystrophy are associated with an increased risk of skeletal fractures, reduced quality of life, and poor clinical outcomes.
Diagnosis
Renal osteodystrophy is usually diagnosed after treatment for end-stage kidney disease begins; however, the CKD-MBD starts early in the course of CKD. In advanced stages, blood tests will indicate decreased calcium and calcitriol (vitamin D) and increased phosphate and parathyroid hormone levels. In earlier stages, serum calcium and phosphate levels are normal at the expense of high parathyroid hormone and fibroblast growth factor-23 levels. X-rays will also show bone features of renal osteodystrophy (subperiosteal bone resorption, chondrocalcinosis at the knees and pubic symphysis, osteopenia and bone fractures) but may be difficult to differentiate from those of other conditions. Since the diagnosis of these bone abnormalities cannot be obtained correctly by current clinical, biochemical, and imaging methods (including measurement of bone-mineral density), bone biopsy has been, and still remains, the gold standard analysis for assessing the exact type of renal osteodystrophy.
Differential diagnosis
To confirm the diagnosis, renal osteodystrophy must be characterized by determining bone turnover, mineralization, and volume (TMV system) (bone biopsy). All forms of renal osteodystrophy should also be distinguished from other bone diseases which may equally result in decreased bone density (related or unrelated to CKD):
osteoporosis
osteopenia
osteomalacia
brown tumor should be considered as the top-line diagnosis if a mass-forming lesion is present.
Treatment
Treatment for renal osteodystrophy includes the following:
calcium and/or native vitamin D supplementation
restriction of dietary phosphate (especially inorganic phosphate contained in additives)
phosphate binders such as calcium carbonate, calcium acetate, sevelamer hydrochloride or carbonate, lanthanum carbonate, sucroferric oxyhydroxide, ferric citrate among others
active forms of vitamin D (calcitriol, alfacalcidol, paricalcitol, maxacalcitol, doxercalciferol, among others)
cinacalcet
renal transplantation
haemodialysis five times a week is thought to be of benefit
parathyroidectomy for symptomatic medication refractive end stage disease
Prognosis
Recovery from renal osteodystrophy has been observed following kidney transplantation. Renal osteodystrophy is a chronic condition with a conventional hemodialysis schedule. Nevertheless, it is important to consider that the broader concept of CKD-MBD, which includes renal osteodystrophy, is not only associated with bone disease and increased risk of fractures but also with cardiovascular calcification, poor quality of life and increased morbidity and mortality in CKD patients (the so-called bone-vascular axis). Actually, bone may now be considered a new endocrine organ at the heart of CKD-MBD.
References
External links
Renal Osteodystrophy - NKUDIC, NIH
Kidney diseases
Histopathology
1940s neologisms | Renal osteodystrophy | [
"Chemistry"
] | 1,396 | [
"Histopathology",
"Microscopy"
] |
2,241,862 | https://en.wikipedia.org/wiki/Resinous%20glaze | Resinous glaze is an alcohol-based solution of various types of food-grade shellac. The shellac is derived from the raw material sticklac, which is a resin scraped from the branches of trees left from when the small insect, Kerria lacca (also known as Laccifer lacca), creates a hard, waterproof cocoon. When used in food and confections, it is also known as confectioner's glaze, pure food glaze, natural glaze, or confectioner's resin. When used on medicines, it is sometimes called pharmaceutical glaze.
Pharmaceutical glaze may contain 20–51% shellac dissolved in non-denatured ethyl alcohol (grain alcohol; denatured alcohol is poisonous), along with waxes and titanium dioxide as an opacifying agent. Confectioner's glaze used for candy contains roughly 35% shellac, while the remaining components are volatile organic compounds that evaporate after the glaze is applied.
Pharmaceutical glaze is used by the drug and nutritional supplement industry as a coating material for tablets and capsules. It serves to improve the product's appearance, extend shelf life and protect it from moisture, as well as provide a solid finishing film for pre-print coatings. It also serves to mask unpleasant odors and aid in the swallowing of the tablet.
The shellac coating is insoluble in stomach acid and may make the tablet difficult for the body to break down or assimilate. For this reason, it can also be used as an ingredient in time-released, sustained or delayed-action pills. The product is listed on the U.S. Food and Drug Administration's (FDA) inactive ingredient list.
Shellac is labeled as GRAS (generally recognized as safe) by the US FDA and is used as glaze for several types of foods, including some fruit, coffee beans, chewing gum, and candy. Examples of candies containing shellac include candy corn, Hershey's Whoppers and Milk Duds, Nestlé's Raisinets and Goobers, Tootsie Roll Industries's Junior Mints and Sugar Babies, Jelly Belly's jelly beans and Mint Cremes, Russell Stover's jelly beans, and several candies by Godiva Chocolatier and Gertrude Hawk. M&M's do not contain shellac.
A competing non-animal-based product is zein, a corn protein. It is preferred by some vegans because shellac production can kill many insects.
References
Pharmacy
Food additives | Resinous glaze | [
"Chemistry"
] | 538 | [
"Pharmacology",
"Pharmacy"
] |
2,242,167 | https://en.wikipedia.org/wiki/Virtual%20retinal%20display | A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye.
History
In the past, similar systems were made by projecting a defocused image directly in front of the user's eye on a small "screen", normally in the form of large glasses. The user focused their eyes on the background, where the screen appeared to be floating. The disadvantages of these systems were the limited area covered by the "screen", the high weight of the small televisions used to project the display, and the fact that the image would appear focused only if the user was focusing at a particular "depth". Limited brightness made them useful only in indoor settings as well.
Only recently have a number of developments made a true VRD system practical. In particular, the development of high-brightness LEDs has made the displays bright enough to be used during the day, and adaptive optics have allowed systems to dynamically correct for irregularities in the eye (although this is not always needed). The result is a high-resolution screenless display with excellent color gamut and brightness, far better than the best television technologies.
The VRD was invented by Kazuo Yoshinaka of Nippon Electric Co. in 1986. Later work at the University of Washington in the Human Interface Technology Lab resulted in a similar system in 1991. Most of the research into VRDs to date has been in combination with various virtual reality systems. In this role VRDs have the potential advantage of being much smaller than existing television-based systems. They share some of the same disadvantages however, requiring some sort of optics to send the image into the eye, typically similar to the sunglasses system used with previous technologies. It also can be used as part of a wearable computer system.
A Washington-based startup, MicroVision, Inc., has sought to commercialize VRD. Founded in 1993, MicroVision's early development work was financed by US government defense contracts and resulted in the prototype head-mounted display called Nomad.
In 2018, Intel announced Vaunt, a set of smart glasses that are designed to appear like conventional glasses, which use retinal projection via a vertical-cavity surface-emitting laser and holographic grating. Intel gave up on this project, and sold the technology to North.
In the same year, QD Laser, a Japanese laser maker spun off from Fujitsu, developed the first commercialized true VRD RETISSA Display. In the following year, the firm started to sell the successor VRD RETISSA Display II, which featured a higher resolution equivalent to 720p.
In 2023 Sony produced a compact camera with an integrated Retissa Neoviewer retinal projection device, for release in the US. The resolution of the retinal display only (not the camera) is claimed by the manufacturers to be nominally equivalent to 720P.
Although "not a medical device" it is hoped that the retinal projection viewer may be of particular value to some visually impaired users, and the adaptation was heavily subsidised by Sony. Because of the novel user experience, and limited availability, potential buyers were strongly encouraged to participate in "touch-and-try" events to see if the technology is useful to their particular circumstances before committing to a purchase.
See also
Augmented reality
Bionic contact lens
Google Glass
Head-up display
List of emerging technologies
Magic Leap
Optical head-mounted display
Physics of the Future
Smartglasses
Visual prosthetic
References
External links
Animations of how a VRD works
Display technology
Multimodal interaction
Virtual reality
Mixed reality
Japanese inventions | Virtual retinal display | [
"Engineering"
] | 769 | [
"Electronic engineering",
"Display technology"
] |
2,242,314 | https://en.wikipedia.org/wiki/Hygrine | Hygrine is a pyrrolidine alkaloid, found mainly in coca leaves (0.2%). It was first isolated by Carl Liebermann in 1889 (along with a related compound cuscohygrine) as an alkaloid accompanying cocaine in coca. Hygrine is extracted as a thick yellow oil, having a pungent taste and odor.
See also
Coca alkaloids
Pseudotropine
Troparil
References
Pyrrolidine alkaloids
Alkaloids found in Erythroxylum coca
Ketones | Hygrine | [
"Chemistry"
] | 117 | [
"Ketones",
"Alkaloids by chemical classification",
"Pyrrolidine alkaloids",
"Functional groups"
] |
2,242,435 | https://en.wikipedia.org/wiki/Beam%20crossing | A beam crossing in a particle collider occurs when two packets of particles, going in opposite directions, reach the same point in space. Most of the particles in each packet cross each other, but a few may collide, producing other particles that may be observed in a particle detector. In a linear collider there is only one location where beam crossings occur, while in a modern accelerator ring there are a few locations (LHC, for example, has four); it is at these points that detectors are placed.
References
Experimental particle physics
Accelerator physics | Beam crossing | [
"Physics"
] | 114 | [
"Applied and interdisciplinary physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Particle physics stubs",
"Accelerator physics"
] |
2,242,567 | https://en.wikipedia.org/wiki/Galveston%20Seawall | The Galveston Seawall is a seawall in Galveston, Texas, that was built after the Galveston hurricane of 1900 for protection from future hurricanes. Construction began in September 1902, and the initial segment was completed on July 29, 1904. From 1904 to 1963, the seawall was extended from to over .
Description
Although the Seawall performed as intended, it created an unintended and insurmountable consequence: passive erosion resulting in the gradual disappearance of the once-wide beach and the resort business with it. "Within twenty years, the city had lost one hundred yards of sand. People who once watched auto racing on a wide beach were left with a narrow strip of sand at low tide and a gloomy vista of waves on rocks when the tide was high." Houston soon overtook Galveston as the major city in the region.
Reporting in the aftermath of the 1983 Hurricane Alicia, the Corps of Engineers estimated that $100 million in damage was avoided because of the seawall. On September 13, 2008, Hurricane Ike's large waves over-topped the seawall. As a result, a commission was established by the Texas governor to investigate preparing for and mitigating future disasters.
A proposal has been put forth to build an "Ike Dike", a massive levee system that would protect Galveston Bay and the important industrial facilities that line the coast and the Houston Ship Channel from a future, potentially more destructive storm. The proposal has gained widespread support from a variety of business interests. However, it never passed the conceptual stage. Since 2009, many similar propositions for a more practical layered network, consisting of smaller local levees and natural protections, have been put forward by the SSPEED Center at Rice University and the University of Houston. These proposals include a surge gate at the mouth of the Houston Ship Channel connecting adjacent high ground near the Fred Hartman Bridge, and hard protections for the west shore of Galveston Bay and around the densely developed east end of Galveston Island. Also included is the proposed lower coastal Lone Star Coastal National Recreation Area.
Texas F.M. 3005 is known as Seawall Boulevard where it runs along the seawall. The sidewalk adjacent to Seawall Boulevard on top of the seawall is claimed to be the longest continuous sidewalk in the world at long.
The seawall is long. It is approximately high and thick at its base. The seawall was listed in the National Register of Historic Places in 1977 and designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2001.
Many miles of the seawall are painted with murals. These huge murals are painted by children and depict underwater life.
Gallery
See also
1900 Storm Memorial, installed along the Seawall
The Dolphins (sculpture), installed along the Seawall
National Register of Historic Places listings in Galveston County, Texas
References
Further reading
(Diagrams of the movable concrete mixer plant used for construction of the seawall)
(Diagram and description of the geometry of the seawall to dissipate wave energy)
External links
One-hundred-year-old photos of the Galveston seawall
Buildings and structures in Galveston, Texas
Buildings and structures on the National Register of Historic Places in Texas
Dikes in the United States
Galveston Hurricane of 1900
Historic Civil Engineering Landmarks
National Register of Historic Places in Galveston County, Texas
Seawalls
Tourist attractions in Galveston, Texas
1904 establishments in Texas | Galveston Seawall | [
"Engineering"
] | 677 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
2,242,641 | https://en.wikipedia.org/wiki/Shock%20sensitivity | Shock sensitivity is a comparative measure of the sensitivity to sudden compression (by impact or blast) of an explosive chemical compound. Determination of the shock sensitivity of a material intended for practical use is one important aspect of safety testing of explosives. A variety of tests and indices are in use, of which one of the more common is the Rotter Impact Test with results expressed as FoI (Figure of Insensitivity.) At least four other impact tests are in common use, while various "gap tests" are used to measure sensitivity to blast shock.
Sensitivities vary widely
A few materials such as nitrogen triiodide cannot be touched at all without detonating, and so are of purely academic interest. Some other compounds with a high sensitivity to shock, such as nitroglycerin and acetone peroxide, may detonate from a firm jolt and so cannot be legally transported in pure form. Acetone peroxide is often used by amateurs and terrorists as a means to detonate other explosives as well as acting as the main blasting agent, often resulting in injuries or death to those who underestimate its sensitivity. A number of methods are known to desensitize nitroglycerine so that it can be transported for medical uses, and it is also incorporated into other less sensitive explosives, such as dynamites and gelignites.
Many practical commercial materials of intermediate sensitivity, such as gelignites and water gel explosives, can be safely handled, as they will not explode from casual shocks such as being dropped or lightly knocked by a tool. However, they may explode if struck forcefully by a metal tool, and would certainly explode in the barrel if they were used in an artillery shell. Reliable initiation of such materials requires the small explosion of a detonator. Other explosive materials, such as Armstrong's mixture, are also used in commercial markets and even sold to the public in the form of fireworks, cap guns and party poppers.
Still less sensitive materials such as blasting agents like ANFO, are so insensitive that the impulse from the detonator must be amplified by an explosive booster charge to secure reliable detonation. Some polymer bonded explosives — especially those based on TATB — are designed for use in insensitive munitions, which are unlikely to detonate even if struck by another explosive weapon.
Explosives | Shock sensitivity | [
"Chemistry"
] | 476 | [
"Explosives",
"Explosions"
] |
2,242,713 | https://en.wikipedia.org/wiki/Karel%20Domin | Karel Domin (4 May 1882, Kutná Hora, Kingdom of Bohemia – 10 June 1953, Prague) was a Czech botanist and politician.
After gymnasium school studies in Příbram, he studied botany at Charles University in Prague, graduating in 1906. Between 1911 and 1913 he published several important articles on Australian taxonomy. In 1916 he was named professor of botany. Domin specialised in phytogeography, geobotany and plant taxonomy. He became a member of the Czechoslovak Academy of Sciences, published many scientific works and founded a botany institute at the university. The Domin scale, a commonly used means of classifying the cover or abundance of a plant species found within a standard sample area, is named after him. Domin edited the exsiccata series Flora Čechoslovenica exsiccata (1929–1936) together with Vladimír Krajina.
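The Domin scale is usually described as a cover-abundance scale: each species in a quadrat is assigned a value based on how much of the sample area it covers (values below 4% cover are instead scored by individual counts). The band boundaries below are the commonly quoted ones and should be treated as an illustrative assumption rather than a definitive standard; a minimal sketch in Python:

```python
# Commonly quoted Domin (Domin-Krajina) cover bands for covers >= 4%.
# Treat these boundaries as an illustrative assumption, not a standard.
DOMIN_BANDS = [  # (upper bound of % cover, Domin value)
    (10, 4), (25, 5), (33, 6), (50, 7), (75, 8), (90, 9), (100, 10),
]

def domin_value(percent_cover: float) -> int:
    """Map a percentage cover (4-100%) to its Domin scale value.
    Covers below 4% are scored 1-3 (or '+') by counting individuals,
    which a cover percentage alone cannot distinguish."""
    if not 4 <= percent_cover <= 100:
        raise ValueError("expected a cover between 4 and 100 percent")
    for upper, value in DOMIN_BANDS:
        if percent_cover <= upper:
            return value
    raise AssertionError("unreachable: bands cover 4-100%")
```

For example, a species covering about a fifth of the quadrat falls in the 11–25% band and scores 5 under these assumed boundaries.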
In the academic year 1933-34 he was rector of Charles University and was one of the participants of a struggle for ancient academic insignia between the Czech and German universities of Prague (the insigniáda) that resulted in street-fights and looting. From 1935 to 1939 he was a member of parliament; after the Munich Agreement, he co-founded a traditionalist political movement (Akce národní obrody).
He is considered the man who is the most responsible for the creation of Tatra National Park.
References
External links
Short biography (in Czech)
1882 births
1953 deaths
People from Kutná Hora
People from the Kingdom of Bohemia
Czechoslovak National Democracy politicians
Party of National Unity (Czechoslovakia) politicians
Members of the Chamber of Deputies of Czechoslovakia (1935–1939)
Czech botanists
Czech politicians
Czechoslovak fascists
Academic staff of Charles University
Charles University alumni
Burials at Vyšehrad Cemetery | Karel Domin | [
"Biology"
] | 359 | [] |
2,242,786 | https://en.wikipedia.org/wiki/VarioRam | VarioRam is a Porsche-patented engine induction system that was first introduced on the 1992 model year 964 Carrera RS motorsport special. It became standard on the 911 series starting from the M64/21 engine on the model year 1996.
As the name suggests, VarioRam varies the effective length of the inlet ducting depending upon engine load and speed. A long intake length at low rpm provides a better pulse tune, because the pulse does not travel back and forth as many times; this aids low-end torque, as the high-pressure pulse may otherwise contribute to reversion and thereby decreased volumetric efficiency. At higher engine RPM, the intake length is reduced. The result is a flatter torque curve, with more torque available at low- and mid-range engine speeds compared to a similar non-VarioRam engine.
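The relationship between runner length and the engine speed it favours can be illustrated with a textbook quarter-wave approximation. This is a simplified model, not Porsche's actual design method; the runner lengths and the choice of harmonic below are assumptions for the example.

```python
# Simplified quarter-wave intake tuning sketch (illustrative assumptions,
# not Porsche's design method): the runner behaves like a quarter-wave
# resonator, and tuning occurs when that resonance lines up with a
# harmonic of the intake valve events.
SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def tuned_rpm(runner_length_m: float, harmonic: int = 3) -> float:
    """Engine speed at which the runner's quarter-wave resonance matches
    the k-th harmonic of the intake events (on a four-stroke engine, one
    intake event per cylinder every two revolutions, i.e. rpm/120 Hz)."""
    quarter_wave_hz = SPEED_OF_SOUND / (4.0 * runner_length_m)
    return 120.0 * quarter_wave_hz / harmonic

for length in (0.35, 0.60):  # hypothetical short vs long runner, metres
    print(f"{length:.2f} m runner -> tuned near {tuned_rpm(length):.0f} rpm")
```

Under these assumptions the longer runner tunes near a lower engine speed, which is the behaviour VarioRam exploits: long ducting for low-rpm torque, short ducting at high rpm.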
References
Porsche
Engine technology | VarioRam | [
"Technology"
] | 174 | [
"Engine technology",
"Engines"
] |
2,242,794 | https://en.wikipedia.org/wiki/Specious%20present | The specious present is the time duration wherein one's perceptions are considered to be in the present.
Description
The term was coined by E. Robert Kelly, who wrote under the pseudonym "E. R. Clay". In The Alternative: A Study in Psychology (1882), he wrote:
The concept was further developed by philosopher William James. James defined the specious present to be "the prototype of all conceived times... the short duration of which we are immediately and incessantly sensible". C. D. Broad in "Scientific Thought" (1930) further elaborated on the concept of the specious present, arguing that it may be construed as the temporal equivalent of a sensory datum.
The specious present can be classed as a 'thick' conception of time perception, to be contrasted with 'thin' conceptions that see the present as instantaneous.
The concept raises some seemingly paradoxical problems. For example, Robin Le Poidevin notes that the specious present amounts to a duration in which events are both simultaneous and successive: "What we perceive, we perceive as present—as going on right now. Can we perceive a relation between two events without also perceiving the events themselves? If not, then it seems we perceive both events as present, in which case we must perceive them as simultaneous, and so not as successive after all."
Notes
References
Andersen, Holly, and Rick Grush, "A brief history of time-consciousness: historical precursors to James and Husserl", To appear in the Journal of the History of Philosophy.
Le Poidevin, Robin, "The Experience and Perception of Time", The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), Edward N. Zalta (ed.)
Hodder, A. (1901). The adversaries of the sceptic; or, The specious present, a new inquiry into human knowledge. Chapter II, The Specious Present. London: S. Sonnenschein pp. 36–56.
External links
Concepts in the philosophy of mind
Time in life
Perception | Specious present | [
"Physics"
] | 427 | [
"Present",
"Physical quantities",
"Time",
"Spacetime",
"Time in life"
] |
2,242,834 | https://en.wikipedia.org/wiki/Mole%20%28architecture%29 | A mole is a massive structure, usually of stone, used as a pier, breakwater, or a causeway separating two bodies of water. A mole may have a wooden structure built on top of it that resembles a wooden pier. The defining feature of a mole, however, is a road that water cannot freely flow under, unlike a true pier. The oldest known mole is at Wadi al-Jarf, an ancient Egyptian harbor complex on the Red Sea, constructed .
The word comes from Middle French mole, ultimately from Latin mōlēs, meaning a large mass, especially of rock; it has the same root as molecule and mole, the chemical unit of measurement.
Heptastadion
Notable in antiquity was the Heptastadion, a giant mole built in the 3rd century BC in the city of Alexandria, Egypt to join the city to Pharos Island where the Pharos lighthouse stood. The causeway formed a barrier separating Alexandria's oceanfront into two distinct harbours, an arrangement which had the advantage of protecting the harbours from the force of the strong westerly coastal current. The Heptastadion is also believed to have served as an aqueduct while Pharos was inhabited, and geophysical research indicates that it was part of the road network of the ancient city. Silting over the years resulted in the former dyke disappearing under several metres of accumulated silt and soil upon which the Ottomans built a town from 1517 onwards. Part of the modern city of Alexandria is now built on the site.
Stone quaysides
Stone quaysides are sometimes called moles. A well-known example is the Molo in Venice. It is the site of the Doge's Palace and two pillars which form a gateway to the sea. It has been depicted numerous times by artists such as Canaletto.
English Tangier
The Kingdom of England acquired the north African city of Tangier as English Tangier in 1661 as part of King Charles II's marriage settlement with the Portuguese princess Catherine of Braganza, who became Queen of England and Scotland.
A mole (a large breakwater) was then designed to improve the harbour and was planned to be long. The cost was about £340,000, and the improved harbour was to be long, deep at low tide, and capable of keeping out the roughest of seas. Work began on the mole in August 1663 and continued for some years under a succession of Governors.
With an improved harbour the town could have played the same role that Gibraltar later played in British naval strategy.
However, Parliament expressed concern about the cost of maintaining the Tangier garrison, and by 1680 King Charles II had threatened to give up Tangier unless the supplies were voted for its sea defences. A crippling blockade by the Jaysh al-Rifi finally forced the English to withdraw from Tangier in 1683. The King gave secret orders to abandon the city, level the fortifications, destroy the harbour, and evacuate the troops. Samuel Pepys was present at the evacuation and wrote an account of it.
San Francisco Bay Area
In the San Francisco Bay Area in California, there were several moles, combined causeways and wooden piers or trestles extending from the eastern shore and utilized by various railroads, such as the Key System, Southern Pacific Railroad (two), and Western Pacific Railroad: the Alameda Mole, the Oakland Mole, and the Western Pacific Mole. By extending the tracks the railroads could get beyond the shallow mud flats and reach the deeper waters of the Bay that could be navigated by the Bay Ferries. A train fell off the Alameda Mole through an open drawbridge in 1890 killing several people. None of the four Bay Area moles survive today, although the causeway portions of each were incorporated into the filling in of large tracts of marshland for harbor and industrial development.
A large mole was completed in 1947 at the San Francisco Naval Shipyard in the Bayview-Hunters Point neighborhood of San Francisco to accommodate the large Hunters Point gantry crane. The mole required of fill.
Namibia
In Swakopmund, on the coast of Namibia, a mole was built in 1899. Designed by the engineer F. W. Oftloff, it was intended to develop the city's harbour. However, the Benguela Current continually deposited sand onto the mole until it became a promontory. The adjacent area has since become a popular leisure beach, known as the Mole Beach.
World War II
Dunkirk evacuation
The two concrete moles protecting the outer harbour at Dunkirk played a significant part in the evacuation of British and French troops during World War II in May to June 1940. The harbour had been made unusable by German bombing and it was clear that troops were not going to be taken directly off the beaches fast enough. Naval captain William Tennant had been placed ashore to take charge of the navy shore parties and organise the evacuation. Tennant had what proved to be the highly successful idea of using the East Mole to take off troops. The moles had never been designed to dock ships, but despite this, the majority of troops rescued from Dunkirk were taken off in this way. James Campbell Clouston, pier master on the east mole, organised and regulated the flow of men on that site.
Churchill Barriers
The Churchill Barriers are a series of four causeways in the Orkney Islands with a total length of 1.5 miles (2.4 km). They link the Orkney Mainland in the north to the island of South Ronaldsay via Burray and the two smaller islands of Lamb Holm and Glimps Holm.
The barriers were built in the 1940s as naval defences to protect the anchorage at Scapa Flow. They were commissioned following the sinking of HMS Royal Oak in 1939 by German U-boat U-47 which had penetrated the existing defences of sunken blockships and anti-submarine nets. The barriers now serve as road links, carrying the A961 road from Kirkwall to Burwick.
See also
References
External links
Building engineering
da:Mole
es:Muelle (construcción)
lt:Molas
no:Molo
pl:Molo
sv:Pir | Mole (architecture) | [
"Engineering"
] | 1,238 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
2,242,975 | https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition | Wavelet packet decomposition (WPD; sometimes known simply as wavelet packets or the subband tree), originally known as optimal subband tree structuring (SB-TS), is a wavelet transform in which the discrete-time (sampled) signal is passed through more filters than in the discrete wavelet transform (DWT).
Introduction
In the DWT, each level is calculated by passing only the previous wavelet approximation coefficients (cAj) through discrete-time low- and high-pass quadrature mirror filters. However, in the WPD, both the detail (cDj (in the 1-D case), cHj, cVj, cDj (in the 2-D case)) and approximation coefficients are decomposed to create the full binary tree.
For n levels of decomposition the WPD produces 2^n different sets of coefficients (or nodes) as opposed to sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy.
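The full-tree recursion and the coefficient-count invariance can be sketched in a few lines (a minimal illustration using orthonormal Haar filters; the function names haar_step and wpd are arbitrary, not drawn from any of the toolboxes mentioned in this article):

```python
import numpy as np

def haar_step(x):
    """One analysis step: downsampled low-pass and high-pass outputs."""
    even, odd = x[0::2], x[1::2]
    lo = (even + odd) / np.sqrt(2.0)   # approximation coefficients
    hi = (even - odd) / np.sqrt(2.0)   # detail coefficients
    return lo, hi

def wpd(x, levels):
    """Wavelet packet decomposition: unlike the DWT, both the low-pass
    and the high-pass branch are split again, giving a full binary tree."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes  # 2**levels nodes at the deepest level

signal = np.arange(16, dtype=float)
leaves = wpd(signal, 3)
print(len(leaves))                        # 8 nodes (2**3)
print(sum(leaf.size for leaf in leaves))  # 16 coefficients: no redundancy
```

Because the Haar filter pair is orthonormal, the total energy of the coefficients equals that of the input, which is one way to sanity-check such an implementation.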
From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produce a more desirable representation for a particular signal. There are several algorithms for subband tree structuring that find a set of optimal bases that provide the most desirable representation of the data relative to a particular cost function (entropy, energy compaction, etc.).
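A common choice of cost function is the non-normalized Shannon entropy used in Coifman and Wickerhauser's best-basis algorithm. The sketch below (a hedged illustration with Haar filters; split, entropy_cost and best_basis are illustrative names) keeps a node whole unless splitting it lowers the total cost:

```python
import numpy as np

def entropy_cost(x):
    """Additive, non-normalized Shannon entropy cost: -sum(p*log p), p = x_i^2."""
    p = x[np.abs(x) > 1e-12] ** 2
    return float(-np.sum(p * np.log(p)))

def split(x):
    """One orthonormal Haar analysis step (low-pass, high-pass)."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

def best_basis(x, max_level):
    """Recursively compare a node's cost with its best children's total cost."""
    if max_level == 0 or x.size < 2:
        return [x]
    lo, hi = split(x)
    children = best_basis(lo, max_level - 1) + best_basis(hi, max_level - 1)
    if sum(entropy_cost(c) for c in children) < entropy_cost(x):
        return children
    return [x]

# For a constant signal every high-pass branch is zero and stays unsplit,
# so the selected basis has fewer nodes than the full 8-leaf tree:
basis = best_basis(np.ones(8), 3)
print(len(basis))  # 4
```

Other additive cost functions (energy concentration, log-energy, and so on) can be dropped into entropy_cost without changing the search itself.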
There were relevant studies in signal processing and communications fields to address the selection of subband trees (orthogonal basis) of various kinds, e.g. regular, dyadic, irregular, with respect to performance metrics of interest including energy compaction (entropy), subband correlations and others.
Discrete wavelet transform theory (continuous in the time variable) offers an approximation to transform discrete (sampled) signals. In contrast, the discrete-time subband transform theory enables a perfect representation of already sampled signals.
Gallery
Applications
Wavelet packets were successfully applied in preclinical diagnosis.
Wavelet packet decomposition proves advantageous for capturing intricate patterns and variations in the electrochemical signals, which can be indicative of the battery's health and degradation over time. By breaking down the complex battery signal into its constituent frequency components, wavelet packet decomposition allows for a more detailed analysis of the underlying characteristics associated with different stages of battery aging.
Wavelet packet decomposition is employed as a preprocessing step to decompose vibration signals acquired from the wind turbine gearbox into multiple frequency bands, capturing both high and low-frequency components. This decomposition allows for the extraction of essential features related to fault signatures at different scales, enabling a more comprehensive analysis of the gearbox's health status. It helps to improve the accuracy and efficiency of fault detection and classification, especially in the complex and critical domain of wind turbine gearbox systems.
In the context of rainfall forecasting, wavelet packet decomposition proves valuable for capturing the complex and multi-scale patterns in precipitation data. It can decompose the original monthly rainfall time series into various sub-series corresponding to different frequency. This decomposition is instrumental in unveiling hidden patterns and trends within the data, which can be crucial for improving the forecasting accuracy.
Moisture detection in timber is crucial for assessing its structural integrity and preventing potential issues such as decay and damage. Wavelet Packet Decomposition is a powerful signal processing technique that offers a multi-resolution analysis of the timber's moisture content. This approach allows for a detailed examination of the signal at different frequency bands, providing a more comprehensive understanding of the moisture distribution within the material.
Researchers employ wavelet packet decomposition to analyze the seismic response of structures, enabling a finer resolution in both time and frequency domains. This detailed analysis allows for the identification of subtle changes in the structural response that may signify damage. By decomposing the seismic response into its constituent frequency components, the researchers gain insights into the time-varying characteristics of the structural behavior. This is crucial for identifying dynamic changes in the structure's response over time, which may indicate the presence and extent of damage.
In the context of forecasting oil futures prices, the multiresolution nature of wavelet packet decomposition enables the forecasting model to capture both high and low-frequency components in the time series, thereby improving the ability to capture the complex patterns and fluctuations inherent in financial data.
References
External links
An implementation of wavelet packet decomposition can be found in MATLAB wavelet toolbox.
An implementation for R can be found in the wavethresh package.
An illustration and implementation of wavelet packets along with its code in C++ can be found at:
JWave: An implementation in Java for 1-D and 2-D wavelet packets using Haar, Daubechies, Coiflet, and Legendre wavelets.
Wavelets
Signal processing | Wavelet packet decomposition | [
"Technology",
"Engineering"
] | 1,004 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
2,242,981 | https://en.wikipedia.org/wiki/Downwinders | Downwinders were individuals and communities, in the United States, in the intermountain West between the Cascade and Rocky Mountain ranges primarily in Arizona, Nevada, New Mexico, and Utah but also in Oregon, Washington, and Idaho who were exposed to radioactive contamination or nuclear fallout from atmospheric or underground nuclear weapons testing, and nuclear accidents.
More generally, the term can also include those communities and individuals who are exposed to ionizing radiation and other emissions due to the regular production and maintenance of coal ash, nuclear weapons, nuclear power, nuclear waste, and geothermal energy. In regions near U.S. nuclear sites, downwinders may be exposed to releases of radioactive materials into the environment that contaminate their groundwater systems, food chains, and the air they breathe. Some downwinders may have suffered acute exposure due to their involvement in uranium mining and nuclear experimentation.
Several severe adverse health effects, such as an increased incidence of cancers, thyroid diseases, CNS neoplasms, and possibly female reproductive cancers that could lead to congenital malformations have been observed in Hanford, Washington, "downwind" communities exposed to nuclear fallout and radioactive contamination. The impact of nuclear contamination on an individual is generally estimated as the result of the dose of radiation received and the duration of exposure, using the linear no-threshold model (LNT). Sex, age, race, culture, occupation, class, location, and simultaneous exposure to additional environmental toxins are also significant, but often overlooked, factors that contribute to the health effects on a particular "downwind" community.
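The LNT model's arithmetic is itself simple: excess risk is taken to be directly proportional to cumulative dose, with no threshold below which the risk vanishes. A minimal sketch (the 5%-per-sievert coefficient is an illustrative assumption, not a value taken from this article):

```python
def lnt_excess_risk(dose_sv, risk_per_sv=0.05):
    """Linear no-threshold model: excess lifetime risk scales linearly with
    cumulative dose, so halving the dose halves the estimated excess risk.
    The default coefficient is illustrative, not a dosimetry constant."""
    return dose_sv * risk_per_sv

# 100 mSv (0.1 Sv) cumulative dose under these assumptions:
print(lnt_excess_risk(0.1))  # about 0.005, i.e. a 0.5% excess risk
```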
Nuclear testing
Between 1945 and 1980, the United States, the U.S.S.R., the United Kingdom, France and China exploded 504 nuclear devices in atmospheric tests at thirteen primary sites yielding the explosive equivalent of 440 megatons of TNT. Of these atmospheric tests, 330 were conducted by the United States. Accounting for all types of nuclear tests, official counts show that the United States has conducted 1,054 nuclear weapons tests to date, involving at least 1,151 nuclear devices, most of which occurred at Nevada Test Site and the Pacific Proving Grounds in the Marshall Islands, with ten other tests taking place at various locations in the United States, including Alaska, Colorado, Mississippi, and New Mexico. There have been an estimated 2,000 nuclear tests conducted worldwide; the number of nuclear tests conducted by the United States alone is currently more than the sum of nuclear testing done by all other known nuclear states (the USSR, United Kingdom, France, China, India, Pakistan, and North Korea) combined.
These nuclear tests infused vast quantities of radioactive material into the world's atmosphere, resulting in widely dispersed radiation and its subsequent deposition as global fallout.
Exposure
Aboveground nuclear explosions produce a characteristic mushroom cloud, which moves downwind as it reaches its stabilization height. Dispersion of the radioactive elements causes vertical and lateral cloud movement, spreading radioactive materials over adjacent regions. While the large particles settle nearby the site of the detonation, smaller particles and gases may be dispersed around the world. Additionally, some explosions injected radioactive material into the stratosphere, more than 10 kilometers above ground level, meaning it may float there for years before being subsequently deposited uniformly around the earth. Global fallout is the result, which exposes everything to an elevated level of man-made background radiation. While "downwinders" refers to those who live and work closest to the explosion site and are thus most acutely affected, there is a global effect of increased health risks due to ionizing radiation in the atmosphere.
Health effects
The earliest concerns raised about the health effects of exposure to nuclear fallout had to do with fears of genetic alterations that may occur among the offspring of those most exposed. However, the observed inheritable effects of radiation exposure by groups with histories of acute risk are considered minimal compared with the significant increase in thyroid cancer, leukemia and certain solid tumors that have developed within a decade or more after exposure. As studies of biological samples (including bone, thyroid glands and other tissues) have been undertaken, it has become increasingly clear that specific radionuclides in fallout are implicated in fallout-related cancers and other late effects.
Ionizing radiation contained in fallout from nuclear testing is especially damaging to dividing cells. For this reason, fetuses and infants are especially vulnerable to injury. Such cellular damage may later manifest as leukemia and other cancers in children. In 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation reported on fetal and infant deaths caused by radiation.
In 1980, the American popular weekly magazine People reported that of about 220 cast and crew who filmed the 1956 movie The Conqueror on location near St. George, Utah, ninety-one had come down with cancer; fifty eventually died of it, forty-six of them by 1980. Among the cancer deaths were John Wayne, Pedro Armendáriz and Susan Hayward, the stars of the film. However, the lifetime odds of developing cancer for men in the U.S. population are 43 percent and the odds of dying of cancer are 23 percent (38 percent and 19 percent, respectively, for women). This places the cancer mortality rate for the 220 primary cast and crew near the expected average, though this statistic does not include the Native American Paiute extras in the film.
Current status
After adopting the Comprehensive Nuclear Test Ban Treaty in 1996, the U.S. and several other nuclear states pledged to stop nuclear testing. The United States Senate has not yet ratified the treaty, although the United States stopped testing in 1992. The final US test series was Operation Julin, in September 1992. Three countries have tested nuclear weapons since the CTBT opened for signature in 1996. India and Pakistan both carried out two sets of tests in 1998. North Korea carried out six announced tests, one each in 2006, 2009, 2013, two in 2016 and one in 2017.
In 2011, the US Senate designated January 27 as a national day of remembrance for Americans who, during the Cold War, worked and lived downwind from nuclear testing sites.
For many years, Senator Ben Ray Luján and other members of Congress have attempted to get compensation for those affected by the Trinity test. In 2023, after the film Oppenheimer brought renewed attention to the test, the United States Senate approved the New Mexico downwinders' inclusion in the Radiation Exposure Compensation Act amendment. To become law, the bill would also need to be passed by the United States House of Representatives.
Specific test sites
New Mexico
On July 16, 1945, the United States military conducted the world's first test of an atomic bomb near Alamogordo, New Mexico. Code-named Trinity, this explosion also created the world's first victims of an atomic bomb: residents of New Mexico.
Years before the test, scientists warned of the risks for civilians of atomic testing. In their memorandum of March 1940, Manhattan Project physicists Otto Frisch and Rudolf Peierls warned: “Owing to the spread of radioactive substances with the wind, the bomb could probably not be used without killing large numbers of civilians, and this may make it unsuitable as a weapon for use by this country.” At the very least, they suggested that “[I]t would be very important to have an organization which determines the exact extent of the danger area, by means of ionization measurements, so that people can be warned from entering it.” Federal officials for the most part ignored these warnings but a last minute small team to monitor some of the radiation was assembled. “New Mexico residents were neither warned before the 1945 Trinity blast, informed of health hazards afterward, nor evacuated before, during, or after the test."
Nevada
From 1951 to 1962, the Nevada Test Site (NTS) was a primary site for both surface and above-ground nuclear testing, with 100 tests at or above ground level, all of which released significant amounts of radioactive material into the atmosphere. Atmospheric testing was halted in 1958 after a testing moratorium was agreed upon with the Soviet Union. The Soviets broke the agreement in 1961, and both sides resumed testing. Two American test series followed: Operation Nougat and then Operation Storax. The Partial Nuclear Test Ban Treaty went into effect in 1963, banning all above-ground testing. Further tests were conducted underground and, with the exception of a few test failures, did not release fallout.
In the 1950s, people who lived in the vicinity of the NTS were encouraged to sit outside and watch the mushroom clouds that were created by nuclear bomb explosions. Many were given radiation badges to wear on their clothes, which the Atomic Energy Commission later collected to gather data about radiation levels.
In a report released in 1997, the National Cancer Institute determined that the nearly ninety atmospheric tests at the Nevada Test Site (NTS) left high levels of radioactive iodine-131 (5.5 exabecquerels, EBq) across a large area of the continental United States, especially in the years 1952, 1953, 1955, and 1957.
The National Cancer Institute report estimates that doses received in these years are estimated to be large enough to produce 10,000 to 75,000 additional cases of thyroid cancer in the U.S.
A 1999 review of the 1997 report found its estimates of collective doses to be in "good agreement" and that they "should provide confidence that the NCI estimate is not grossly under or over the actual value."
A 2006 report, published by the Scientific Research Society, estimates that about 22,000 additional radiation-related cancers and 2,000 additional deaths from radiation-related leukemia are expected to occur in the United States because of external and internal radiation from both NTS and global fallout.
A 2010 report evaluating data on thyroid cancer incidence from 1973 to 2004 also supported a relationship between exposure from fallout and increased thyroid cancer incidence.
The threat of downwind exposure to radioactivity remaining at the Nevada Test Site from nuclear weapons tests was still an issue as late as 2007. The Pentagon planned to test a 700-ton ammonium nitrate-and-fuel oil "bunker buster" weapon. The planned "Divine Strake" test would have raised a large mushroom cloud of contaminated dust that could have blown toward population centers such as Las Vegas, Boise, Salt Lake City, and St. George, Utah. This project was cancelled in February 2007, in large part due to political pressure inspired by the threat of downwind exposure to radioactivity.
Hanford
While many downwinders were exposed to weapons testing, millions more have been affected by radioactive fallout due to U.S. sites engaged in the production of nuclear weapons and/or nuclear power. For example, Hanford is a former nuclear weapons production site located in south central Washington state, where the Washington state Department of Health collaborated with the citizen-led Hanford Health Information Network (HHIN) to publicize significant data about the health effects of Hanford's operations. Established in 1943, Hanford released radioactive materials into the air, water and soil, releases which largely resulted from the site's routine operation, though some were also due to accidents and intentional releases.
By February 1986, mounting citizen pressure forced the U.S. Department of Energy to release to the public 19,000 pages of previously unavailable historical documents about Hanford's operations. These reports revealed there had been radioactive materials released into the air and Columbia River. The reactors used large amounts of water from the river for cooling, which caused materials in the river water to become radioactive as they passed through the reactor. The water and the radioactive materials it contained were released into the river after passing through the reactors, thus contaminating both groundwater systems and aquatic animals downstream as far west as the Washington and Oregon coasts.
The Hanford Thyroid Disease Study, an epidemiologic study of the relationship between estimated exposure doses to radioiodine and incidence of thyroid disease amongst Hanford's downwinders, led by the Fred Hutchinson Cancer Center, was inconclusive. A consolidated lawsuit brought by two thousand Hanford downwinders for personal injury against the contractors that ran Hanford has been in the court system for many years. The defense in the litigation is fully funded by taxpayer dollars due to Price Anderson indemnification agreements. The first six bellwether plaintiffs went to trial in 2005, to test the legal issues applying to the remaining plaintiffs in the suit.
In October 2015, the Department of Energy resolved the final cases. The DOE paid more than $60 million in legal fees and $7 million in damages.
Marshall Islands
While the term "downwinders" generally connotes nuclear fallout victims based in the continental U.S. near sites such as Hanford and the NTS, the population of the Marshall Islands bore the brunt of nuclear testing under the United States' Pacific Proving Grounds program. Now known officially as the Republic of the Marshall Islands, it was a United Nations Trust Territory administered by the United States from 1944 to 1979, years during which the United States tested 66 nuclear weapons in the Marshall Islands.
One of these many tests, the March 1, 1954, explosion of Castle Bravo, a U.S. thermonuclear device, was responsible for most of the radiation the Marshall Islanders were exposed to. The fallout-related doses of this single test are believed to be the highest recorded in the history of worldwide nuclear testing. Many of the Marshall Islands which were part of the Pacific Proving Grounds remain contaminated by nuclear fallout, and many of those downwinders who were living on the islands at the time of testing have suffered from a highly increased incidence of several types of cancers and birth defects.
Effects of radiation on female downwinders
The primary long-term health hazard associated with exposure to ionizing radiation as a result of nuclear fallout is an increased risk for cancers of the thyroid, other solid tumor cancers, and leukemia. The relationship between radiation exposure and subsequent cancer risk is considered "the best understood, and certainly the most highly quantified, dose-response relationship for any common environmental human carcinogen", according to a report by the National Cancer Institute. Overall, men in the United States develop cancer at a rate 22% higher than that of women. However, women develop cancer from radiation at a rate 37.5% to 52% higher than that of men. In recent years, studies conducted by both the National Research Council and the EPA have confirmed that, compared to men, women are at a significantly higher risk of radiation-induced cancers, and that women's sensitivity to radiation-induced cancers is much higher than was previously estimated. The increased radiosensitivity of certain organs in women, such as the breast, ovaries, and thyroid, is likely the cause of this difference.
In the EPA's 1999 Federal Guidance Report #13 (FGR 13), Cancer Risk Coefficients for Environmental Exposure to Radionuclides, the authors conclude that women have a 48 percent higher radionuclide-related cancer mortality risk than men. Further evidence of sex-based disparities in radiation-induced cancers was published in the 2006 report by the National Research Council's Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation (known as the BEIR VII report), which found that women's risk due to radiation exposure exceeded men's by 37.5 percent. When one considers rates of cancer incidence separately from rates of cancer fatality, the sex disparities are even greater. The BEIR VII Committee concluded that, given the same level of radiation exposure, women are 52 percent more likely than men to develop cancer, while the EPA report puts the estimate of the difference as high as 58 percent.
The differences in risk are even greater when considering organ-specific cancers, especially given that both reports identify breast, ovarian, lung, colon, and thyroid tissues as the most radiosensitive among women. For example, the FGR 13 has estimated that the ratio of thyroid cancer incidence for women as compared to men is 2.14, while the findings of BEIR VII suggest that women are even more vulnerable to radiation-induced thyroid cancer at a ratio of 4.76.
As increasing concerns are raised regarding the environmental risks related to breast cancer, the BEIR VII report cited evidence suggesting that "radiation may interact synergistically with other risk factors for breast cancer", raising the possibility that endocrine-disrupting chemicals like PCBs and dioxins might combine with radiation to increase risks beyond those that either would cause separately. A related concern is that radionuclides may be passed on through breast milk, causing some women who are downwinders to be reluctant to breastfeed their children. While reducing the radioactive intake of their infants is an important preventative measure, it denies these women the opportunity to take a preventative measure for their own health: breastfeeding has been widely documented as a practice that can reduce women's risk of developing breast cancer. By refraining from breastfeeding, women downwinders' risk of breast cancer becomes even more elevated.
Pregnancy and birth outcomes
Evidence about radiation-related pregnancy and birth outcomes comes from studies of nuclear bomb and test site survivors and studies of those exposed to diagnostic and therapeutic radiation. Mounting research indicates that above certain levels of radiation a miscarriage will result. It is also clear that fetal malformations are a greater risk if a woman is exposed to high doses of nuclear-related radiation in early pregnancy, when organs are being formed.
If acute radiation exposure occurs in the first ten days following conception, when few cells have formed, it is likely that the embryo will fail to develop and spontaneous abortion will occur.
Fetal malformations are most likely to occur if a pregnant woman receives a radiation dose >500 mSv between the 10th and 40th day of pregnancy, the period of organogenesis during which the organs are formed.
After the 40th day, the effects of radiation exposure are likely to include low birth weight, delayed growth, and possible mental deficits rather than fetal malformations.
Radiation doses above 4,000 mSv are likely to kill both the mother and the fetus.
It has been shown that radiation damage including genome instability and carcinogenesis may occur transgenerationally in both males and females.
The effects of radiation on fetal formation are also particularly relevant as a women's health issue to the extent that female fetuses' ova are formed while the fetus is still in utero.
Adverse effects on a mother carrying a female fetus may therefore be multigenerational and increase both the daughter's and grandchildren's risks for ovarian cancer, infertility, and other reproductive developmental problems.
Compensation
In 1990, the U.S. Congress passed the Radiation Exposure Compensation Act (RECA), providing financial assistance to individuals who suffered from radiation exposure-related diseases, including lung cancer, leukemia, multiple myeloma, lymphomas, thyroid cancer, breast cancer, as well as nonmalignant respiratory conditions such as lung fibrosis and pulmonary fibrosis. This law specifically aimed to compensate former uranium miners who fell ill during the time when the U.S. Government was the sole purchaser of uranium. Since its establishment in 1990, RECA has provided benefits exceeding $2.5 billion to over 39,000 claimants.
Initially, RECA defined eligible people and covered diseases narrowly, and complaints about these limitations led to efforts to amend the act. In 1999, recognizing the need to change the compensation process under RECA, four bills aimed at amending the act were submitted in the U.S. Congress. Advocacy initiatives were directed towards extending coverage to additional occupations, lowering the standard of proof for uranium miners, eliminating distinctions between smokers and nonsmokers, and increasing compensation for eligible individuals; these efforts led to amendments to RECA approved in 2000, which expanded coverage and modified eligibility criteria to assist affected groups.
Downwinders eligible for compensation include those living in specified counties of Nevada, Utah, and Arizona for at least two years between January 1951 and October 1958, or during July 1962, periods when the United States conducted above-ground nuclear tests without warning, and who are able to show correlations between certain diseases and their personal exposure to nuclear radiation. Miners' compensation covers workers employed in uranium mines in five states (Colorado, New Mexico, Arizona, Wyoming, and Utah) between January 1947 and December 1971. Uranium miners are eligible for $100,000, and onsite participants are eligible for $75,000.
Navajo-related obstacles
There are particular obstacles to receiving needed health care and compensation faced by many widows and widowers of Navajo uranium miners, who were affected by disproportionately high incidences of fatal lung cancer caused by radon exposure. In fact, the health effects of radon were first widely acknowledged when Mormon and Native American miners who rarely smoked (smoking being the main cause of lung cancer) showed high incidences of lung cancer. Modern mining practices greatly reduce the danger from radon, also present in many coal mines, through proper ventilation. One problem for Navajo widows and widowers seeking the federal benefits for which they are qualified is the requirement that they document their marriages, although many were married in the 1930s and 1940s in undocumented tribal ceremonies. Language and cultural barriers pose further obstacles to Navajo downwinders; since many elderly Navajos do not speak English, their children bear the responsibility to do the research and procure from a tribal law judge a validation certificate of their tribal marriage. Similarly, it is difficult to obtain the outdated medical and occupational documentation that the government required, even though the government's and the uranium companies' own records for Navajo miners are sparse and difficult to access.
See also
Chernobyl disaster
Daigo Fukuryū Maru
Nevada-Semipalatinsk
New Zealand's nuclear-free zone
Project 4.1
Windscale fire
Atomic veteran
The Plutonium Files
References
External links
Nuclear Testing and the Downwinders from the website of the Utah State Historical Society
View documentary short "Downwinders: The People of Parowan"
Article "A people's truth"
Excerpt from Plutopia
Rice, James. Downwind of the Atomic State: Atmospheric Testing and the Rise of the Risk Society. (New York University Press, 2023): https://nyupress.org/9781479815340/downwind-of-the-atomic-state/
Anti–nuclear weapons movement
Environmental issues in the United States
Nuclear weapons testing
Radiation health effects
History of the Southwestern United States
History of Nevada
History of Utah
History of Washington (state)
20th century in the Marshall Islands
Environment of Nevada
Environment of Utah
Environment of Washington (state)
Environment of the Marshall Islands | Downwinders | [
"Chemistry",
"Materials_science",
"Technology"
] | 4,635 | [
"Radiation health effects",
"Nuclear weapons testing",
"Environmental impact of nuclear power",
"Radiation effects",
"Radioactivity"
] |
2,243,048 | https://en.wikipedia.org/wiki/Bojay%C3%A1%20massacre | The Bojayá massacre () was a massacre that occurred on May 2, 2002, in the town of Bellavista, Bojayá Municipality, Chocó Department, Colombia. Revolutionary Armed Forces of Colombia (FARC) guerrillas attacked the town in an attempt to take control of the Atrato River region from United Self-Defense Forces of Colombia (AUC) paramilitaries. During the fighting, a gas cylinder bomb (known in Spanish as a pipeta or cilindro bomba) launched at the AUC paramilitaries positioned by the walls of a church from a FARC mortar went through the roof of the church instead, landing on the altar inside and detonating. 119 civilians died in the attack; approximately 300 inhabitants of the town had taken refuge in the church, and 79 died in the explosion.
Background
A 2001 publication prepared by the Colombian Ministry of National Defence, "Annual Report on Human Rights and International Humanitarian Law, 2000", provided the following description of the situation in Bojayá:"The armed confrontation in the region between the guerrillas and the illegal self-defence forces is very violent due to the economic and strategic interests in play, including, among others: drug trafficking, the inter-oceanic connection, the development of megaprojects like the Panamerican Highway, and the proximity of ports and hydroelectric stations. The region furthermore represents advantages for these groups as a route for the import of arms and supplies from Central America and to provide favourable routes for drug trafficking."
Preceding events
At least 250 paramilitary combatants moved in to Bellavista, the administrative centre of the municipality of Bojayá, on 21 April 2002. They remained there despite protests by local residents. The UNHCHR sent an official communication to the Colombian government on April 23 expressing their concern regarding the presence of the paramilitaries and the possible consequences for the local people. The Ombudsman's Office of Colombia also visited the region on April 26 and released an early warning regarding the threat of an armed confrontation in the area.
Intense fighting broke out on May 1 in a neighboring town, Vigía del Fuerte, and spread to Bellavista later in the day. Around 300 residents took shelter in the local church, 100 in the adjoining parsonage, and another 100 in the Augustinian Missionary residence, over the course of the night.
Details of the attack
According to the official UN investigation report, in the morning of May 2 the AUC paramilitaries had established positions around the church, using the rare concrete buildings and the cement wall around the church yard for protection. The FARC took up positions to the north (in Barrio Pueblo Nuevo), and began launching gas cylinder bombs (pipetas) toward the paramilitary positions. Two of the bombs landed nearby and the third went through the roof of the church, where it exploded on the altar.
The UN investigation found the FARC in violation of several principles of international humanitarian law, including an indiscriminate attack causing unnecessary civilian casualties, failure to distinguish between civilian and combatant, failure to take efforts to protect civilians from avoidable harm, and attacks against cultural property. Prohibitions against these acts are found in Common Article 3 of the 1949 Geneva Conventions and Articles 4, 13, and 16 of Additional Protocol II. The UN also considered the FARC responsible for the forced displacement of civilians generated as a consequence of the attack on the church, placing the act in violation of Article 17 of Protocol II.
The UN found the AUC to be in violation of various aspects of international humanitarian law, including using civilians as human shields, failing to protect civilians from the effects of their military operations, and for causing massive forced displacement of civilian populations in the region due to their acts, threats and combat operations in the area. Given reports of theft by the AUC of goods, equipment and vehicles belonging to local residents, the UN also found the AUC guilty of pillage (a violation of Article 17 of Protocol II).
The UNHCHR additionally found that the Colombian government had failed to act to prevent the massive human suffering which ensued from the events in Bojayá, suffering that was predicted and of which the government was explicitly warned beforehand.
Bellavista Nuevo
The attack caused significant physical damage to Bellavista. In its aftermath, some 4,000 citizens fled Bojayá, including all residents of Bellavista. Five days after the attack the Colombian government announced that a new town would be constructed. Support from the residents of Bellavista for this decision was largely favourable, though not unanimous. Siting and planning was undertaken by graduate students at Universidad Javeriana in Bogotá, who selected a location roughly one kilometre from the old town for its low risk of flooding.
The buildings of the new settlement were well-constructed, and connected to a network of municipal utilities and services. The original town of Bellavista was abandoned, and is now referred to as Bellavista Viejo ("Old Bellavista"). The new settlement took the name Bellavista Nuevo.
Responsibility of the state
The First Administrative Court of Quibdo, Chocó sentenced the Colombian State to a billion and a half Colombian peso compensation to relatives of two of the dead victims on May 29, 2008. It ruled the State was administratively responsible and had neglected to protect its citizens, despite the warnings of the ombudsman.
Death of perpetrator
At dawn on 22 February 2012, nearly 10 years after the event, a Colombian Air Force Embraer EMB 314 Super Tucano identified the camp of FARC's 57th Front, 15 kilometers north of Bojayá near the border with Panama. The Super Tucano dropped two high-precision bombs, destroying the camp and killing six FARC rebels (including Pedro Alfonso Alvarado, a.k.a. "Mapanao"), who are believed to have been responsible for the massacre.
See also
List of massacres in Colombia
Notes
2002 mass shootings
21st-century mass murder in Colombia
Spree shootings in Colombia
Improvised explosive device bombings in 2002
Political repression in Colombia
Massacres committed by FARC
Massacres of the Colombian conflict
May 2002 events in South America
2002 murders in Colombia
Massacres in 2002
2002 building bombings
Church bombings
Attacks on churches in South America
Building bombings in Colombia
Gas explosions
Terrorist incidents in Colombia in the 2000s
Terrorist incidents involving incendiary devices
Chocó Department
Terrorist incidents in South America in 2002 | Bojayá massacre | [
"Chemistry"
] | 1,308 | [
"Natural gas safety",
"Gas explosions"
] |
2,243,349 | https://en.wikipedia.org/wiki/Cranberry%20glass | Cranberry glass or Gold Ruby glass is a red glass made by adding gold salts or colloidal gold to molten glass. Tin, in the form of stannous chloride, is sometimes added in tiny amounts as a reducing agent. The glass is used primarily in expensive decorations.
Production
Cranberry glass is made in craft production rather than in large quantities, due to the high cost of the gold. The gold chloride is made by dissolving gold in a solution of nitric acid and hydrochloric acid (aqua regia). The glass is typically hand blown or molded. The finished, hardened glass is a type of colloid, a solid phase (gold) dispersed inside another solid phase (glass).
History
The origins of cranberry glass making are unknown, but many historians believe a form of this glass was first made in the late Roman Empire. This is evidenced by the Lycurgus Cup in the British Museum's collection, a 4th-century Roman glass cage cup made of a dichroic glass, which shows a different colour depending on whether light is passing through it or reflecting from it: red (gold salts) when lit from behind and green (silver salts) when lit from in front. The Kitab al-Asrar, an Arabic work attributed to Abu Bakr al-Razi, contains one of the earliest modern descriptions of the preparation of gold ruby glass.
The craft was then lost and rediscovered in the 17th century Bohemian period by either Johann Kunckel in Potsdam or by the Florentine glassmaker Antonio Neri. Neither of them knew the mechanism which yielded the colour, however. Chemist and winner of the 1925 Nobel Prize in Chemistry Richard Adolf Zsigmondy was able to understand and explain that small colloids of gold were responsible for the red colour.
The most famous period of cranberry glass production was in 19th century Britain during the Victorian Era.
Legend holds that cranberry glass was first discovered when a noble tossed a gold coin into a mixture of molten glass. This legend is almost certainly not true, as the gold must be dissolved in aqua regia before being added to the molten glass.
Cranberry glass creations were most popular as a table display, often holding candy or flowers.
Cranberry glass was also frequently used for wine glasses, decanters, and finger bowls. Cranberry glass was also well known for its use in "Mary Gregory" glass. This glass had a white enamel fired onto the glass in a design, usually with a romantic theme.
See also
Purple of Cassius
Heart of Glass, a 1976 film by Werner Herzog on the secret of ruby glass
References
External links
Glass art
Glass trademarks and brands
Glass compositions
Craft materials | Cranberry glass | [
"Chemistry"
] | 552 | [
"Glass compositions",
"Glass chemistry"
] |
610,760 | https://en.wikipedia.org/wiki/Bioacoustics | Bioacoustics is a cross-disciplinary science that combines biology and acoustics. Usually it refers to the investigation of sound production, dispersion and reception in animals (including humans). This involves neurophysiological and anatomical basis of sound production and detection, and relation of acoustic signals to the medium they disperse through. The findings provide clues about the evolution of acoustic mechanisms, and from that, the evolution of animals that employ them.
In underwater acoustics and fisheries acoustics the term is also used to mean the effect of plants and animals on sound propagated underwater, usually in reference to the use of sonar technology for biomass estimation. The study of substrate-borne vibrations used by animals is considered by some a distinct field called biotremology.
History
For a long time humans have employed animal sounds to recognise and find them. Bioacoustics as a scientific discipline was established by the Slovene biologist Ivan Regen, who began to study insect sounds systematically. In 1925 he used a special stridulatory device to play in a duet with an insect. Later, he put a male cricket behind a microphone and female crickets in front of a loudspeaker. The females moved not towards the male but towards the loudspeaker. Regen's most important contribution to the field, apart from the realization that insects also detect airborne sounds, was the discovery of the tympanal organ's function.
Relatively crude electro-mechanical devices available at the time (such as phonographs) allowed only for crude appraisal of signal properties. More accurate measurements were made possible in the second half of the 20th century by advances in electronics and utilization of devices such as oscilloscopes and digital recorders.
The most recent advances in bioacoustics concern the relationships among the animals and their acoustic environment and the impact of anthropogenic noise. Bioacoustic techniques have recently been proposed as a non-destructive method for estimating biodiversity of an area.
Importance
In the terrestrial environment, animals often use light for sensing distance, since light propagates well through air. Underwater, sunlight reaches only to tens of meters in depth. Sound, however, propagates readily through water and across considerable distances. Many marine animals can see well but use hearing for communication and for sensing distance and location. The relative importance of audition versus vision in an animal can be gauged by comparing the number of auditory and optic nerves.
Beginning in the 1950s and 1960s, studies of dolphin echolocation behavior using high-frequency click sounds revealed that many different marine mammal species make sounds, which can be used to detect and identify species underwater. Much research in bioacoustics has been funded by naval research organizations, as biological sound sources can interfere with military uses underwater.
Methods
Listening is still one of the main methods used in bioacoustical research. Little is known about neurophysiological processes that play a role in production, detection and interpretation of sounds in animals, so animal behaviour and the signals themselves are used for gaining insight into these processes.
Bioacoustics has also helped to pave the way for new emerging methods such as ecoacoustics (or acoustic ecology), an interdisciplinary field of research that studies the sounds produced by ecosystems, including biological, geophysical and anthropogenic sources. It examines how these sounds interact with the environment, providing insights into biodiversity, habitat health and ecological processes. By analysing soundscapes, ecoacoustics helps monitor environmental changes, assess conservation efforts and detect human impacts on natural systems.
Acoustic signals
An experienced observer can use animal sounds to recognize a "singing" animal species, its location and condition in nature. Investigation of animal sounds also includes signal recording with electronic recording equipment. Due to the wide range of signal properties and media they propagate through, specialized equipment may be required instead of the usual microphone, such as a hydrophone (for underwater sounds), detectors of ultrasound (very high-frequency sounds) or infrasound (very low-frequency sounds), or a laser vibrometer (substrate-borne vibrational signals). Computers are used for storing and analysis of recorded sounds. Specialized sound-editing software is used for describing and sorting signals according to their intensity, frequency, duration and other parameters.
Animal sound collections, managed by museums of natural history and other institutions, are an important tool for systematic investigation of signals. Many effective automated methods involving signal processing, data mining, machine learning and artificial intelligence techniques have been developed to detect and classify the bioacoustic signals.
Sound production, detection, and use in animals
Scientists in the field of bioacoustics are interested in anatomy and neurophysiology of organs involved in sound production and detection, including their shape, muscle action, and activity of neuronal networks involved. Of special interest is coding of signals with action potentials in the latter.
But since the methods used for neurophysiological research are still fairly complex and understanding of relevant processes is incomplete, more trivial methods are also used. Especially useful is observation of behavioural responses to acoustic signals. One such response is phonotaxis – directional movement towards the signal source. By observing response to well defined signals in a controlled environment, we can gain insight into signal function, sensitivity of the hearing apparatus, noise filtering capability, etc.
Biomass estimation
Biomass estimation is a method of detecting and quantifying fish and other marine organisms using sonar technology. As the sound pulse travels through water it encounters objects that are of different density than the surrounding medium, such as fish, that reflect sound back toward the sound source. These echoes provide information on fish size, location, and abundance. The basic components of the scientific echo sounder hardware function is to transmit the sound, receive, filter and amplify, record, and analyze the echoes. While there are many manufacturers of commercially available "fish-finders," quantitative analysis requires that measurements be made with calibrated echo sounder equipment, having high signal-to-noise ratios.
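The ranging principle behind such echo sounders can be sketched in a few lines of code (the function name is my own, and the sound speed is a typical seawater value assumed for illustration; real systems correct it for temperature, salinity, and depth):

```python
# Sketch of the echo-sounder ranging principle: a sound pulse travels to a
# target and back, so range = speed_of_sound * round_trip_time / 2.

SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s; a typical value, assumed for this sketch

def echo_range(round_trip_time_s: float, c: float = SPEED_OF_SOUND_SEAWATER) -> float:
    """Distance in meters to the target that reflected the pulse."""
    return c * round_trip_time_s / 2.0

# A pulse whose echo returns after 0.1 s comes from a target about 75 m away.
print(echo_range(0.1))  # 75.0
```

Quantitative biomass work then maps echo strength to fish size and abundance, which is why calibrated equipment with a high signal-to-noise ratio is required.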
Animal sounds
Sounds used by animals that fall within the scope of bioacoustics include a wide range of frequencies and media, and are often not "sound" in the narrow sense of the word (i.e. compression waves that propagate through air and are detectable by the human ear). Katydid crickets, for example, communicate by sounds with frequencies higher than 100 kHz, far into the ultrasound range. Lower, but still in ultrasound, are sounds used by bats for echolocation. A segmented marine worm Leocratides kimuraorum produces one of the loudest popping sounds in the ocean at 157 dB, frequencies 1–100 kHz, similar to the snapping shrimps. On the other side of the frequency spectrum are low frequency-vibrations, often not detected by hearing organs, but with other, less specialized sense organs. The examples include ground vibrations produced by elephants whose principal frequency component is around 15 Hz, and low- to medium-frequency substrate-borne vibrations used by most insect orders. Many animal sounds, however, do fall within the frequency range detectable by a human ear, between 20 and 20,000 Hz. Mechanisms for sound production and detection are just as diverse as the signals themselves.
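The frequency bands mentioned above can be made concrete with a minimal sketch (the function name is my own; the 20 Hz and 20,000 Hz boundaries of human hearing are from the text):

```python
def classify_frequency(hz: float) -> str:
    """Classify a signal frequency relative to the human hearing range
    of roughly 20 Hz to 20,000 Hz."""
    if hz < 20.0:
        return "infrasound"
    if hz > 20000.0:
        return "ultrasound"
    return "audible"

# Elephant ground vibrations (~15 Hz) fall below human hearing, while
# katydid signals (>100 kHz) and bat echolocation clicks are ultrasonic.
print(classify_frequency(15))      # infrasound
print(classify_frequency(440))     # audible
print(classify_frequency(100000))  # ultrasound
```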
Plant sounds
In a series of scientific journal articles published between 2013 and 2016, Monica Gagliano of the University of Western Australia extended the science to include plant bioacoustics.
See also
Acoustic ecology
Acoustical oceanography
Animal communication
Animal language
Anthropophony
Biomusic
Biophony
Diffusion (acoustics)
Field recording
Frog hearing and communication
List of animal sounds
List of Bioacoustics Software
Music therapy
Natural sounds
Soundscape ecology
Underwater acoustics
Vocal learning
Whale sound
Zoomusicology
Phonology
References
Further reading
Ewing A.W. (1989): Arthropod bioacoustics: Neurobiology and behaviour. Edinburgh: Edinburgh University Press.
Fletcher N. (2007): Animal Bioacoustics. IN: Rossing T.D. (ed.): Springer Handbook of Acoustics, Springer.
External links
ASA Animal Bioacoustics Technical Committee
BioAcoustica: Wildlife Sounds Database
The British Library Sound Archive has 150,000 recordings of over 10,000 species.
International Bioacoustics Council links to many bioacoustics resources.
Borror Laboratory of Bioacoustics at The Ohio State University has a large archive of animal sound recordings.
Listen to Nature 400 examples of animal songs and calls
Wildlife Sound Recording Society
Bioacoustic Research Program at the Cornell Lab of Ornithology distributes a number of different free bioacoustics synthesis & analysis programs.
Macaulay Library at the Cornell Lab of Ornithology is the world's largest collection of animal sounds and associated video.
Xeno-canto A collection of bird vocalizations from around the world.
Acoustics
Zoosemiotics
Soundscape ecology
Sound
Noise
Hearing | Bioacoustics | [
"Physics",
"Biology"
] | 1,811 | [
"Behavior",
"Ethology",
"Zoosemiotics",
"Classical mechanics",
"Acoustics",
"Ecological techniques",
"Soundscape ecology"
] |
610,773 | https://en.wikipedia.org/wiki/Per-unit%20system | In the power systems analysis field of electrical engineering, a per-unit system is the expression of system quantities as fractions of a defined base unit quantity. Calculations are simplified because quantities expressed as per-unit do not change when they are referred from one side of a transformer to the other. This can be a pronounced advantage in power system analysis where large numbers of transformers may be encountered. Moreover, similar types of apparatus will have the impedances lying within a narrow numerical range when expressed as a per-unit fraction of the equipment rating, even if the unit size varies widely. Conversion of per-unit quantities to volts, ohms, or amperes requires a knowledge of the base that the per-unit quantities were referenced to. The per-unit system is used in power flow, short circuit evaluation, motor starting studies etc.
The main idea of a per unit system is to absorb large differences in absolute values into base relationships. Thus, representations of elements in the system with per unit values become more uniform.
A per-unit system provides units for power, voltage, current, impedance, and admittance. With the exception of impedance and admittance, any two units are independent and can be selected as base values; power and voltage are typically chosen. All quantities are specified as multiples of selected base values. For example, the base power might be the rated power of a transformer, or perhaps an arbitrarily selected power which makes power quantities in the system more convenient. The base voltage might be the nominal voltage of a bus. Different types of quantities are labeled with the same symbol (pu); it should be clear whether the quantity is a voltage, current, or other unit of measurement.
Purpose
There are several reasons for using a per-unit system:
Similar apparatus (generators, transformers, lines) will have similar per-unit impedances and losses expressed on their own rating, regardless of their absolute size. Because of this, per-unit data can be checked rapidly for gross errors. A per unit value out of normal range is worth looking into for potential errors.
Manufacturers usually specify the impedance of apparatus in per unit values.
Use of the constant √3 is reduced in three-phase calculations.
Per-unit quantities are the same on either side of a transformer, independent of voltage level
By normalizing quantities to a common base, both hand and automatic calculations are simplified.
It improves numerical stability of automatic calculation methods.
Per unit data representation yields important information about relative magnitudes.
The per-unit system was developed to make manual analysis of power systems easier. Although power-system analysis is now done by computer, results are often expressed as per-unit values on a convenient system-wide base.
Base quantities
Generally base values of power and voltage are chosen. The base power may be the rating of a single piece of apparatus such as a motor or generator. If a system is being studied, the base power is usually chosen as a convenient round number such as 10 MVA or 100 MVA. The base voltage is chosen as the nominal rated voltage of the system. All other base quantities are derived from these two base quantities. Once the base power and the base voltage are chosen, the base current and the base impedance are determined by the natural laws of electrical circuits. The base value should only be a magnitude, while the per-unit value is a phasor. The phase angles of complex power, voltage, current, impedance, etc., are not affected by the conversion to per unit values.
The purpose of using a per-unit system is to simplify conversion between different transformers. Hence, it is appropriate to illustrate the steps for finding per-unit values for voltage and impedance. First, let the base power (S) of each end of a transformer become the same. Once every S is set on the same base, the base voltage and base impedance for every transformer can easily be obtained. Then, the real numbers of impedances and voltages can be substituted into the per-unit calculation definition to get the answers for the per-unit system. If the per-unit values are known, the real values can be obtained by multiplying by the base values.
By convention, the following two rules are adopted for base quantities:
The base power value is the same for the entire power system of concern.
The ratio of the voltage bases on either side of a transformer is selected to be the same as the ratio of the transformer voltage ratings.
With these two rules, a per-unit impedance remains unchanged when referred from one side of a transformer to the other. This allows the ideal transformer to be eliminated from a transformer model.
Relationship between units
The relationship between units in a per-unit system depends on whether the system is single-phase or three-phase.
Single-phase
Assuming that the independent base values are power and voltage, we have:
P_base = 1 pu and V_base = 1 pu
Alternatively, the base value for power may be given in terms of reactive or apparent power, in which case we have, respectively,
Q_base = 1 pu
or
S_base = 1 pu
The rest of the units can be derived from power and voltage using the equations S = IV, P = S cos φ, and V = IZ (Ohm's law), impedance being represented by Z = R + jX. We have:
I_base = S_base / V_base
Z_base = V_base / I_base = V_base² / S_base
Y_base = 1 / Z_base
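A minimal sketch of deriving the remaining base quantities from a chosen base power and base voltage (the function and variable names are my own; the relations are the standard single-phase definitions):

```python
def single_phase_bases(s_base_va: float, v_base_v: float) -> dict:
    """Derive base current, impedance, and admittance from base power and voltage."""
    i_base = s_base_va / v_base_v        # I_base = S_base / V_base
    z_base = v_base_v ** 2 / s_base_va   # Z_base = V_base^2 / S_base
    y_base = 1.0 / z_base                # Y_base = 1 / Z_base
    return {"I_base": i_base, "Z_base": z_base, "Y_base": y_base}

# Example: a 10 kVA single-phase base at 240 V.
bases = single_phase_bases(10e3, 240.0)
print(bases["Z_base"])  # 5.76 (ohms)
```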
Three-phase
Power and voltage are specified in the same way as single-phase systems. However, due to differences in what these terms usually represent in three-phase systems, the relationships for the derived units are different. Specifically, power is given as total (not per-phase) power, and voltage is line-to-line voltage.
In three-phase systems the equations relating power, voltage, current, and impedance also hold. The apparent power now equals S_base = √3 × V_base × I_base, so that the derived bases are I_base = S_base / (√3 × V_base) and Z_base = V_base² / S_base.
Example of per-unit
As an example of how per-unit is used, consider a three-phase power transmission system that deals with powers of the order of 500 MW and uses a nominal voltage of 138 kV for transmission. We arbitrarily select , and use the nominal voltage 138 kV as the base voltage . We then have:
If, for example, the actual voltage at one of the buses is measured to be 136 kV, we have V_pu = 136 kV / 138 kV ≈ 0.9855 pu.
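This example can be sketched in code. The 138 kV base and the 136 kV measurement are from the text; the base power value did not survive extraction, so the 500 MVA used below is an assumed illustrative value:

```python
import math

S_BASE = 500e6   # VA; assumed for illustration only
V_BASE = 138e3   # V, line-to-line (from the text)

# Three-phase derived bases.
i_base = S_BASE / (math.sqrt(3) * V_BASE)  # base current, A
z_base = V_BASE ** 2 / S_BASE              # base impedance, ohms

# A bus voltage measured at 136 kV, expressed in per-unit:
v_pu = 136e3 / V_BASE
print(round(v_pu, 4))  # 0.9855
```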
Per-unit system formulas
The following tabulation of per-unit system formulas is adapted from Beeman's Industrial Power Systems Handbook.
In transformers
It can be shown that voltages, currents, and impedances in a per-unit system will have the same values whether they are referred to primary or secondary of a transformer.
For instance, for voltage, we can prove that the per-unit voltages of the two sides of the transformer, side 1 and side 2, are the same. Here, the per-unit voltages of the two sides are E1pu and E2pu respectively. Since E1 = (N1/N2) × E2 and, by convention, Vbase1 = (N1/N2) × Vbase2, we have E1pu = E1 / Vbase1 = E2 / Vbase2 = E2pu.
(source: Alexandra von Meier Power System Lectures, UC Berkeley)
E1 and E2 are the voltages of sides 1 and 2 in volts. N1 is the number of turns the coil on side 1 has. N2 is the number of turns the coil on side 2 has. Vbase1 and Vbase2 are the base voltages on sides 1 and 2.
For current, we can prove similarly that the per-unit currents of the two sides are the same: I1pu = I1 / Ibase1 = I2 / Ibase2 = I2pu.
(source: Alexandra von Meier Power System Lectures, UC Berkeley)
where I1,pu and I2,pu are the per-unit currents of sides 1 and 2 respectively. In this, the base currents Ibase1 and Ibase2 are related in the opposite way that Vbase1 and Vbase2 are related, in that Ibase1 / Ibase2 = Vbase2 / Vbase1.
The reason for this relation is power conservation:
Sbase1 = Sbase2
The full load copper loss of a transformer in per-unit form is equal to the per-unit value of its resistance: P_cu,FL,pu = I_pu² × R_pu = R_pu, since I_pu = 1 at full load.
Therefore, it may be more useful to express the resistance in per-unit form as it also represents the full-load copper loss.
As stated above, there are two degrees of freedom within the per unit system that allow the engineer to specify any per unit system. The degrees of freedom are the choice of the base voltage (V) and the base power (S). By convention, a single base power (S) is chosen for both sides of the transformer, and its value is equal to the rated power of the transformer. By convention, there are actually two different base voltages that are chosen, Vbase1 and Vbase2, which are equal to the rated voltages for either side of the transformer. By choosing the base quantities in this manner, the transformer can be effectively removed from the circuit as described above. For example:
Take a transformer that is rated at 10 kVA and 240/100 V. The secondary side has an impedance equal to 1∠0° Ω. The base impedance on the secondary side is equal to Zbase2 = Vbase2² / Sbase = (100 V)² / 10,000 VA = 1 Ω.
This means that the per unit impedance on the secondary side is 1∠0° Ω / 1 Ω = 1∠0° pu. When this impedance is referred to the other side, the impedance becomes (240/100)² × 1∠0° Ω = 5.76∠0° Ω.
The base impedance for the primary side is calculated the same way as for the secondary: Zbase1 = Vbase1² / Sbase = (240 V)² / 10,000 VA = 5.76 Ω.
This means that the per unit impedance is 5.76∠0° Ω / 5.76 Ω = 1∠0° pu, which is the same as when calculated from the other side of the transformer, as would be expected.
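The numbers in this example can be checked with a short sketch (variable names are my own):

```python
# Transformer rated 10 kVA, 240/100 V, with a 1-ohm secondary-side impedance.
S_BASE = 10e3       # VA, same base power on both sides
V1, V2 = 240.0, 100.0

z2 = 1.0                        # secondary-side impedance, ohms
z_base2 = V2 ** 2 / S_BASE      # 1 ohm
z2_pu = z2 / z_base2            # 1 pu

# Refer the impedance to the primary by the square of the turns ratio.
z1 = (V1 / V2) ** 2 * z2        # 5.76 ohms
z_base1 = V1 ** 2 / S_BASE      # 5.76 ohms
z1_pu = z1 / z_base1            # 1 pu, the same as on the secondary

print(round(z2_pu, 6), round(z1_pu, 6))  # 1.0 1.0
```

The per-unit impedance comes out identical on both sides, which is exactly why the ideal transformer can be dropped from the per-unit circuit model.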
Another useful tool for analyzing transformers is the base change formula, which allows the engineer to go from a base impedance with one set of base voltage and base power to another base impedance for a different set of base voltage and base power. This becomes especially useful in real life applications where a transformer with a secondary side voltage of 1.2 kV might be connected to the primary side of another transformer whose rated voltage is 1 kV. The formula is Z_pu,new = Z_pu,old × (V_base,old / V_base,new)² × (S_base,new / S_base,old).
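The base-change relation described here is the standard one; a sketch (function name and numeric values are my own, using the 1.2 kV / 1 kV scenario from the text):

```python
def rebase_z_pu(z_pu_old, v_base_old, v_base_new, s_base_old, s_base_new):
    """Convert a per-unit impedance from one (V_base, S_base) pair to another:
    Z_pu,new = Z_pu,old * (V_base,old / V_base,new)**2 * (S_base,new / S_base,old)."""
    return z_pu_old * (v_base_old / v_base_new) ** 2 * (s_base_new / s_base_old)

# An impedance of 0.1 pu on a 1.2 kV, 10 MVA base, re-expressed on a
# 1 kV, 10 MVA base (illustrative numbers):
z_new = rebase_z_pu(0.1, 1.2e3, 1.0e3, 10e6, 10e6)
print(round(z_new, 4))  # 0.144
```

Applying the conversion in reverse recovers the original per-unit value, which is a convenient sanity check in hand calculations.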
References
Electrical engineering
Electric power
Power engineering | Per-unit system | [
"Physics",
"Engineering"
] | 1,980 | [
"Physical quantities",
"Energy engineering",
"Power (physics)",
"Electric power",
"Power engineering",
"Electrical engineering"
] |
610,789 | https://en.wikipedia.org/wiki/Knowledge%20level%20modeling | Knowledge level modeling is the process of theorizing over observations about a world and, to some extent, explaining the behavior of an agent as it interacts with its environment.
Crucial to the understanding of knowledge level modeling are Allen Newell's notions of the knowledge level, operators, and an agent's goal state.
The knowledge level refers to the knowledge an agent has about its world.
Operators are what can be applied to an agent to affect its state.
An agent's goal state is the status reached after the appropriate operators have been applied to transition from a previous, non-goal state.
Essentially, knowledge level modeling involves evaluating an agent's knowledge of the world and all possible states and with that information constructing a model that depicts the interrelations and pathways between the various states. With this model, various problem solving methods (i.e. prediction, classification, explanation, tutoring, qualitative reasoning, planning, etc.) can be viewed in a uniform fashion. This modeling aspect is crucial in cognitive architectures for intelligent agents.
In "Applications of Abduction: Knowledge-Level Modeling", Menzies proposes a new knowledge level modeling approach, called KLB, which specifies that "a knowledge base should be divided into domain-specific facts and domain-independent abstract problem solving inference procedures." In his method, abductive reasoning is used to find assumptions which, when combined with theories, achieve the desired goals of the system.
See also
Knowledge level
Knowledge engineering
References
Knowledge engineering | Knowledge level modeling | [
"Engineering"
] | 336 | [
"Systems engineering",
"Knowledge engineering"
] |
610,813 | https://en.wikipedia.org/wiki/Tarnish | Tarnish is a thin layer of corrosion that forms over copper, brass, aluminum, magnesium, neodymium and other similar metals as their outermost layer undergoes a chemical reaction. Tarnish does not always result from the sole effects of oxygen in the air. For example, silver needs hydrogen sulfide to tarnish, although it may tarnish with oxygen over time. It often appears as a dull, gray or black film or coating over metal. Tarnish is a surface phenomenon that is self-limiting, unlike rust. Only the top few layers of the metal react. The layer of tarnish seals and protects the underlying layers from reacting.
Tarnish preserves the underlying metal in outdoor use, and in this form is called chemical patina. Unlike the wear patina valued in applications such as copper roofing and outdoor copper, bronze, and brass statues and fittings, chemical patina is generally more uneven and is often considered undesirable. Patina is the name given to tarnish on copper-based metals, while toning is a term for the type of tarnish that forms on coins.
Chemistry
Tarnish is a product of a chemical reaction between a metal and a nonmetal compound, especially oxygen and sulfur dioxide. It is usually a metal oxide, the product of oxidation; sometimes it is a metal sulfide. The metal oxide sometimes reacts with water to make the hydroxide, or with carbon dioxide to make the carbonate. It is a chemical change. There are various methods to prevent metals from tarnishing.
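For silver, for example, the overall tarnishing reaction with hydrogen sulfide in air is commonly written as follows (an illustrative balanced equation, not stated in the text above):

```latex
4\,\mathrm{Ag} + 2\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow 2\,\mathrm{Ag_2S} + 2\,\mathrm{H_2O}
```

The black silver sulfide layer is the visible tarnish film.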
Prevention and removal
Using a thin coat of polish can prevent tarnish from forming over these metals. Tarnish can be removed by using steel wool, sandpaper, emery paper, baking soda or a file to rub or polish the metal's dull surface. Fine objects (such as silverware) may have the tarnish electrochemically reversed (non-destructively) by resting the objects on a piece of aluminium foil in a pot of boiling water with a small amount of salt or baking soda, or it may be removed with a special polishing compound and a soft cloth. Gentler abrasives, such as calcium carbonate, are often used by museums to clean tarnished silver as they cannot damage or scratch the silver and will not leave unwanted residues.
References
Chemical reactions
Metals
Metalworking terminology
Corrosion | Tarnish | [
"Chemistry",
"Materials_science"
] | 469 | [
"Metals",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"nan",
"Materials degradation"
] |
610,845 | https://en.wikipedia.org/wiki/Urban%20forestry | Urban forestry is the care and management of single trees and tree populations in urban settings for the purpose of improving the urban environment. Urban forestry involves both planning and management, including the programming of care and maintenance operations of the urban forest. Urban forestry advocates the role of trees as a critical part of the urban infrastructure. Urban foresters plant and maintain trees, support appropriate tree and forest preservation, conduct research and promote the many benefits trees provide. Urban forestry is practiced by municipal and commercial arborists, municipal and utility foresters, environmental policymakers, city planners, consultants, educators, researchers and community activists.
Benefits
Environmental and health impacts
Heat waves cause 1,300 deaths each year in the United States alone, which is more than any other weather-related event. As temperatures continue to rise due to global warming, this number is expected to increase in coming years. The risk is exacerbated for low-income households who do not have access to air conditioning, as well as heat-sensitive populations such as the elderly, infants, and those who have chronic health problems. Urban forests mitigate the urban heat island effect through evapotranspiration and the shading of streets and buildings. Reforesting a 10-meter radius corresponds to a 0.7 degree Celsius decrease in daytime air temperature, compared to a 1.3 degree decrease in a 30-meter radius, and over 1.5 degrees in a 60 or 90 meter radius. This reduces the risk of heat stroke, decreases cooling costs, and improves general well-being. Trees are estimated to prevent about 1,200 heat-related deaths annually in the U.S. Urban forests improve air quality by absorbing pollutants such as ozone, nitrogen dioxide, ammonia, and particulate matter as well as performing carbon sequestration. Communities with better air quality measures demonstrate lower levels of childhood asthma. Urban forestry can be an important tool for stormwater management as trees intercept rainwater in the canopy, and can slow down, filter and pump water back into the atmosphere via their roots. Other benefits include noise control, traffic control, and glare and reflection control.
Mental health impacts
Forests that have been included in urban environments have shown beneficial effects for the residents who live there. Urban forestry has been shown to promote psychological healing, stress recovery, and to improve concentration and productivity.
A 2018 study asked low income residents of Philadelphia "how often they felt nervous, hopeless, restless, depressed and worthless." As an experimental mental health intervention, trash was removed from vacant lots. Some of the vacant lots were "greened", with plantings of trees, grass, and small fences. Residents near the "greened" lots who had incomes below the poverty line reported a decrease in feelings of depression of 68%, while residents with incomes above the poverty line reported a decrease of 41%. Removing trash from vacant lots without installing landscaping did not have an observable mental health impact.
Urban forests and green spaces have been associated with milder ADHD symptoms. Children with ADHD struggle with directed attention - a type of attention that is a part of the theory of attention restoration. Directed attention entails "periods of deliberate focus" and requires breaks to continue productivity.
A case study conducted in Belgrade, Serbia evaluated the mental health benefits of The Faculty Urban Forest for a younger population. Time spent in an arboretum is shown to benefit mental health by lowering depression, anxiety, and stress. An ideal forest environment for psychological improvement should have an extensive assortment of coniferous and broadleaved species displaying vibrant colors. These conditions provide a multitude of sensory experiences, which can be experienced with infrastructure such as benches, picnic tables, and pavilions. It is important to provide handicap-accessible options and locate urban forests close to public transportation so that they can support daily visits and restorative experiences for all.
Tree canopy inequity
Urban tree canopy inequity is defined by American Forests as the uneven distribution of urban trees in neighborhoods that are socioeconomically disadvantaged. Neighborhoods that lack sufficient canopy cover, compared with areas that have suitable cover, lose access to the benefits that trees provide and experience other social inequalities. Many of the neighborhoods most affected by this inequity are those where minority and impoverished populations reside. The inequity is driven by social factors such as environmental racism, an environmental injustice that disproportionately affects racial groups through policies or practices such as redlining. As a result, poorer and predominantly minority neighborhoods tend to have less canopy cover.
This inequitable distribution of canopy cover in lower socioeconomic neighborhoods has many social impacts that raise environmental justice concerns. Urban tree canopy inequity limits access to the beneficial ecosystem services that trees provide. Populations living in neighborhoods without suitable canopy cover miss out on protection from the impacts of climate change and the urban heat island effect, filtering of air and water pollution, interception of rainfall surface runoff, and energy savings on air conditioning from tree shade. These populations also miss the psychological benefits that trees provide, such as areas that foster social cohesion and congregation, recreation, and nature aesthetics. One illustration of these social benefits is a research study reviewed by The Children & Nature Network, an organization active in the movement of connecting children and their families to nature. The researchers conclude that tree cover and species composition have a positive effect on the academic performance of primary school-aged children, especially those enrolled in socio-economically challenged schools.
Many environmental impacts are associated with urban tree canopy inequity. A lack of trees, combined with a heavy presence of impervious surfaces such as houses, sidewalks, and parking lots, contributes to the heat island effect and removes the temperature moderation trees provide, making temperatures in these areas more extreme. Air quality also suffers, since trees remove carbon and pollutants from the air through sequestration and store them in their structures. Without trees to increase infiltration and help reduce and manage rainwater runoff, chemicals and other harmful pollutants are more likely to collect on sidewalks and roads and enter water sources. Biodiversity and animal habitat also decline in these areas, as trees are not available for animals and plants to utilize.
As urban tree canopy inequity remains present in lower-socioeconomically disadvantaged neighborhoods, impacting the livelihoods and environmental state of these areas and populations, efforts are being made by urban foresters, city officials, and organizations to address this problem and present solutions. Examples of these efforts include the United States Forest Service outreach and education programs. Organizations such as Casey Tree's Community Tree Planting projects, the Greening of Detroit program, and American Forests help to implement programs and initiatives within cities and neighborhoods to engage volunteers, preserve and care for the urban forest and promote educational and career opportunities for the public. Case studies, such as one based in Washington D.C., analyze and contribute to the knowledge of urban tree canopy inequity by utilizing various methods including interviews, collaboration with private and public organizations, and community outreach that are successful and help present solutions for urban tree canopy inequity. The use of online applications such as i-Tree and its associated tree and forest assessment tools, and Tree Equity Score, along with many others, assist urban foresters, professionals, and students in conducting research on urban areas and presenting planning solutions to urban tree canopy inequity.
Inequities in environmental and health impacts
In the 1930s as part of the New Deal, the federal government started implementing unfair redlining policies, which classified certain neighborhoods as "risky" areas for banks and mortgage lenders to approve in funding home investments. Demographics of these communities typically included higher percentages of Black, African American, and Latino community members. These redlining policies led to overall lack in investment in these areas, including lack of equitable investment in environmental resources. There remain lower percentages of tree canopy coverage in nearly every U.S. city that had formerly redlined neighborhoods, including the three most populous U.S. cities, New York, Chicago, and Los Angeles. People living in urban communities with significantly lower percentages of urban trees do not benefit from the same environmental and health impacts as those in communities with greater tree populations.
In New York, specifically, the South Bronx has far fewer trees than New York City neighborhoods with higher income levels. Tree canopy coverage in the Bronx, in general, is the lowest of all five New York City boroughs. There is only 19.86% canopy coverage provided by street trees, which is much lower than the citywide average of 23.98%. The New York City Department of Health's Heat Vulnerability Index (HVI) measures how the risk of heat related illness and death differs across city neighborhoods, and it shows that New York City neighborhoods that are more heat-vulnerable, such as those in the South Bronx, tend to have lower tree canopy coverage. Populations living in these communities are at greater risk for heat related illness, such as heat stroke, or death due to heat waves. Health outcomes associated with air pollution, such as asthma, are also worse off in neighborhoods with lower tree canopy coverage. In the Bronx, childhood asthma rates are disproportionately high. Children in the Bronx visit the emergency room for asthma 2x the rate of children in other boroughs. There are approximately 17% of children (age 13 and younger) suffering from asthma in the Bronx, compared to the citywide average of 11%.
Impacts on wildlife
Urban forests in the built environment affect urban wildlife in several ways. An urban habitat can significantly alter the ecology and behavior of the wildlife that lives there. The interactions between humans and wildlife, and the impacts of urbanization on wildlife populations, shape cities across the world.
Disturbances
Disturbances in urban forests occur more frequently and at higher intensities than in natural forests. Changes in the urban landscape can increase competition for resources among species on fragmented areas of land, raising stress for urban wildlife. Urban wildlife is also exposed to warmer temperatures and higher levels of pollution, as cities significantly alter the natural environment.
The construction of urban infrastructure requires deforestation, leveling, and other activities that lead to habitat fragmentation, reduced genetic diversity, and changes in behavior. Urban wildlife is also exposed to higher amounts of toxic substances, including heavy metals, road treatments, or pesticides from lawns that can lead to abnormal reproduction or development. Consumption of prey species by domesticated pets, such as dogs and cats, also leads to an increased mortality rate in urban habitats. Urban forests are essential to creating habitats for wildlife within cities, and many species have adapted to living in the disturbed conditions of the built environment by utilizing urban green-spaces. Research has shown diverse green-spaces to be better suited for wildlife. For example, in Krakow, Poland, the species richness of owls was higher in parts of the city with varied land uses than more homogeneous areas. Additional support for land-use diversity in urban areas is provided in a study showing the importance of leaving dead and decaying trees on the landscape for wildlife habitat.
Urban forests can alter natural diets by providing dietary supplements to wildlife in the form of fruit or nut-producing ornamental plants, trash, or even domestic pets like cats. By examining coyote scat and using stable isotope analysis, it was estimated that about 22% of the scat or 38% of the urban coyote diet was from human-created sources. Wildlife is also attracted to urban forests for their increased surface waters due to reduced runoff in these areas. Having wildlife interacting around humans in urban areas can create conflicts between humans and animals. A case study in Aspen, Colorado observed the foraging habits of bears, tracking their movements using GPS collars, and found that bears visited forested areas in the city with fruit-bearing trees for food. Alternatively, in a study on the behavioral ecology of urban deer populations, the authors discussed the difficulty of managing this species due to its positive public perception as an aesthetically pleasing animal. Proper species selection, placement of trees, and other urban forest management strategies can be utilized to mitigate human-wildlife conflicts in cities.
Ecosystem Services
Cultural Services
Cultural services are non-material benefits (such as aesthetics and spiritual enrichment) that can be obtained from an ecosystem. Certain tree species have cultural value to different groups of people, and different tree species provide a range of different aesthetic values. The tree species that urban foresters plant affect many cultural benefits provided by urban forestry, such as an increase in physical health, psychological health, social health, property values, community economic development, and tourism. Understanding the values and interests of the different stakeholders in the community can help improve the cultural services provided through urban forestry.
Regulating Services
Trees are important in regulating ecosystem processes; they contribute to filtering air pollution, microclimate regulation, carbon dioxide sequestration, and reducing climate change. Trees can reduce the urban heat island effect through shading paved areas, aiding in airflow, and evapotranspiration. When planted and managed properly, these cooling benefits extend past the city itself. If not planted in locations ideal for their survival, trees will be vulnerable to disease. Diseased trees provide decreased ecosystem services, making it important for urban forestry to be a part of the planning and management of the urban canopy. Trees in urban environments can also aid in stormwater management and reduce the risk of flash floods by intercepting rainfall in the tree canopy. Tree canopy interception can also minimize the amount of sediment and nutrient contamination that occurs downstream. This is now a focus in cities around the world through using water sensitive urban design (WSUD) in urban forestry. Urban forests protect watershed health by utilizing riparian and street buffering with urban forestry practices.
Provisioning Services
There are many different tree species that provide provisioning services in the urban forest. These services have a variety of names, including urban agriculture and edible green infrastructure. Wild food products produced from trees pose a variety of benefits to the residents in that area. They can supply food to local residents and wildlife and increase biodiversity in the community. These trees can be harvested by local residents with minimal education on urban foraging. Some examples of urban agriculture are fruit trees and rooftop gardens. While fruit trees can provide produce and many other benefits, they can also create a mess if the produce is not harvested and fruit is left on the ground. Proper pruning can help reduce the mess created but not eliminate it. An urban forest that can provide produce significantly cuts down on food transportation from distant farms and therefore lowers carbon emissions annually. Urban wood utilization is an often overlooked provisioning service. Almost 70% of urban wood is wasted while only 25% is recycled and/or reused. Urban wood that is reused can be turned into useful products, such as furniture or bioenergy.
Supporting Services
The supporting ecosystem services are necessary for the production of all other ecosystem services. Some of these services include biomass production, nutrient cycling, soil formation, and biodiversity. Additionally, proper management of urban forests can provide habits for native wildlife, including endangered species. Urban forests that include a large range of native and exotic trees provide a large range of habitats for wildlife. It has been shown in Sweden that certain endangered bird species mainly inhabit urban forests where certain trees are planted. One Swedish city contains two thirds of the red-list endangered species of the area by including endangered plants and habitats for endangered wildlife.
Wildlife Habitat Management
The urban forest provides habitat for many wildlife species, including song birds, squirrels and other small mammals, and insects. The urban forest provides the basics that animals need for survival; food, water, shelter, and space or habitat. Fruit or mast producing trees provide food sources, trees and other vegetation provide shelter and habitats, and artificial water sources in cities and their parks provide water. The urban forest can be planned and managed in the context of the wildlife populations in the area, increasing the population of desired species or decreasing the population of undesirable or invasive ones based on the biological and/or cultural carrying capacity of the municipality.
Biodiversity and Threatened/Endangered Species
Biodiversity has been declining across the world due to climate change, deforestation, and the destruction of critical habitats. Preserving and bolstering biodiversity ensures that ecosystems of all kinds function properly, allowing the benefits of ecosystem services to be realized. Urbanization holds potential solutions to achieve high levels of biodiversity when managed correctly. In the United States, the Endangered Species Act's language acts as a means to protect not only listed species but also the conservation of their habitats to sustain them, many of which are found in urban areas. Multiple transcontinental research projects on urban wildlife have found a consistent positive correlation between human population density and species richness across all vertebrate taxonomic groups. Urban areas provide and maintain a mosaic of diverse wildlife habitat to support existing and introduced fauna. Urban Forestry Management Plans in conjunction with Wildlife Management Plans can support and improve urban biodiversity by including the following attributes: routine tree inventories to identify a biodiversity baseline for goal setting, intentional tree planting of hardy species to promote biodiversity, and the preservation and improvement of urban parks and woodlots as vital wooded and edge habitats. Challenges to managing for biodiversity and endangered species include the difficulties in creating and managing artificial, fragmented, yet diverse habitat types simultaneously in the context of social problems such as poverty and crime.
Undesirable and Invasive Species
Invasive species are nonnative plants, animals, microbial pathogens, and fungi that cause environmental and/or economic damage. These species have a number of negative effects on forests, both wild and urban, ranging from being a nuisance to compromising and killing native trees. Oftentimes, invasive species are introduced via urban areas that serve as transportation hubs, meaning that the urban forest is typically the first to be affected by them, and can also serve as the first line of defense to keep them from invading native forests. Depriving undesirable wildlife of one of the basics of survival prevents it from inhabiting an area: trees and vegetation can be altered to decrease habitat space, fewer fruit-producing trees can be planted, or fallen fruit can be cleaned up to limit food sources. In response to the growing prevalence of these pests and pathogens, many municipalities have begun planting disease- and pest-resistant cultivars, such as modified American Elm and Ash trees, to prevent the spread of the fungal Dutch Elm Disease and Emerald Ash Borer infestations, respectively. There are also rising regulations against the planting of invasive tree species that harm naturally occurring ecosystems because they can outcompete native species for resources or attract undesirable wildlife. In April 2019, the state of Indiana enacted the Terrestrial Plants Rule, banning 44 invasive nursery species that cause harm to the urban forest and attract undesirable wildlife, including tree-of-heaven, honeysuckle and autumn olive. The Bradford Pear, a common landscape tree, has been banned from the state of Ohio and the cities of Charlotte, North Carolina and Pittsburgh, Pennsylvania, as it is known to spread quickly, crowding out native vegetation types from grasses to hardwood trees and further fragmenting and damaging the habitat of native animals.
Social impact
Urban forest related events such as planting festivals can significantly reduce social isolation problems, enhance people's experience and raise environmental awareness. Urban forests also encourage more active lifestyles by providing space for exercise and are associated with reduced stress and overall emotional well-being. Urban forests may also provide products such as timber or food, and deliver economic benefits such as increased property values and the attraction of tourism, businesses and investment. Street trees, if managed and cared for, are beneficial in creating sustainable and healthy communities.
Case study
The City of Denver Department of Parks and Recreation website hosts interactive online tools that allow residents to view the financial impact of healthy tree planting on their neighborhoods. In the Washington-Virginia Vale neighborhood, the city website cites 2,002 individual trees as having been planted and maintained by the City Forester. These trees are estimated to provide an annual ecosystem benefit of $159,521, most of it in property benefits, which contribute $143,331 of the total. The majority of these trees are between 0 and 12 feet tall and are a mix of mostly elm, maple, pine, and locust species.
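The figures above imply a simple per-tree average, sketched here in Python (a back-of-the-envelope illustration of the cited numbers, not an official city calculation):

```python
trees = 2002                 # trees planted in Washington-Virginia Vale
annual_benefit = 159_521     # total annual ecosystem benefit, USD
property_benefit = 143_331   # portion attributed to property values, USD

per_tree = annual_benefit / trees                    # about 79.7 USD per tree per year
property_share = property_benefit / annual_benefit   # about 0.90, i.e. roughly 90% of the total
```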
Economic impacts
Lifespan value
Trees serve an economic function within the urban forest, providing various monetary benefits. It is estimated that there are around 3.8 billion trees in urban areas of the United States, equating to $2.4 trillion in overall structural value. In addition, environmental and social benefits such as air quality, climate regulation, water flow, real estate value, and community well-being can be quantified to determine their economic impact. Examples of the economic value created by the urban forest include an annual $4.7 billion in air pollution removal and $3.8 billion in carbon sequestration, while recreational experiences have the potential to surpass $2 billion in annual value. These are national estimates for the United States and may vary by location.
The value of an urban forest is estimated by quantifying social and ecosystem services, then assigning those services monetary worth, which are often based on market value. Modeling tools, such as i-Tree, are used by urban foresters to accurately assess the effects of an urban forest's structure; this information is used to quantify ecosystem services and ultimately the economic value of the forest across a variety of locations. By creating these models, urban foresters are able to quantify and communicate the value of the urban forest to stakeholders and the general public. These evaluations can be used to influence the amount of money allocated to tree management by the government and general populace. Trees may live a long and healthy life if they continue to receive proper management in the form of maintenance and pruning, which sustains the value of the urban forest. Moreover, after death, trees have the potential to remain profitable to the community— if utilized correctly.
Post life value
Typically, wood products such as lumber and wood pellets are associated with rural forestry and logging. Annually, urban forestry creates 14.8 metric tons of wood waste in the United States through pruning and removal. Within urban forestry there are initiatives to use this waste as wood products such as fuel, lumber, art, and more. These initiatives seek to extend the value of urban trees after their lives. One such initiative is the Virginia Urban Wood Group, a nonprofit with the mission to, "enhance the quality of life through the Stewardship of our Commonwealth's urban and community trees." The Virginia Urban Wood Group promotes the production and sale of wood products sourced from urban wood waste. The group connects governmental and commercial professionals such as arborists, municipal foresters, mills, carpenters, and more. Another group contributing to the urban wood waste industry is Wisconsin Urban Wood. This group collects suitable removed trees from local businesses and arborists and sells the wood to local mills. While urban lumber may not be as high of a grade quality as forest grown lumber, these products are suitable for smaller projects such as woodworking and artisan furniture. Some localities use their urban lumber to reduce costs on amenity construction— they use their wood to build their picnic tables and benches. Additionally, some urban wood initiatives seek the use of reclaimed wood to decrease the use of freshly cut lumber.
Practice
Urban forestry is a practical discipline, which includes Tree planting, care, and protection, and the overall management of trees as a collective resource. The urban environment can present many arboricultural challenges such as limited root and canopy space, poor soil quality, deficiency or excess of water and light, heat, pollution, mechanical and chemical damage to trees, and mitigation of tree-related hazards. Among those hazards are mostly non-immediate risks like the probability that individual trees will not withstand strong winds (as during a thunderstorm) and damage parking cars or injure passing pedestrians.
Although quite striking in an urban environment, large trees in particular present a continuing dilemma for the field of urban forestry due to the stresses that urban trees undergo from automobile exhaust, constraining hardscape and building foundations, and physical damage (Pickett et al. 2008). Urban forestry also challenges the arborists that tend the trees. The lack of space requires greater use of rigging skills and traffic and pedestrian control. The many constraints that the typical urban environment places on trees limits the average lifespan of a city tree to only 32 years – 13 years if planted in a downtown area – which is far short of the 150-year average life span of trees in rural settings (Herwitz 2001).
Management challenges for urban forestry include maintaining a tree and planting-site inventory, quantifying and maximizing the benefits of trees, minimizing costs, obtaining and maintaining public support and funding, and establishing laws and policies for trees on public and private land. Urban forestry also raises social issues that must be addressed for it to be seen as an advantage rather than a burden on the environment; these include underfunding, which leads to inadequate maintenance of urban trees. In the UK, the National Urban Forestry Unit produced a series of archived case studies on best practice in urban forestry.
Training and Credentials
Within the profession and practice of urban forestry, training and credentials are often a prerequisite to proper and efficient management. Skills within urban forestry may consist of community-based tree stewardship, restoration of neglected spaces, urban canopy monitoring and maintenance, and building social cohesion in urban neighborhoods. Higher education, field experience, and credentials are used to effectively develop and verify these goals. Achievement of the above training can provide prospects for commercial or governmental career opportunities such as a Certified Arborist, Certified Forester, Urban Forester, Professional Consulting Forester, Forestry Technician, and many more.
Higher education in urban forestry is one method of training for aspiring urban foresters. Careers in urban forestry often require higher education concentrating in urban forestry, arboriculture, forestry, horticulture, natural resource management, urban planning, or environmental science. These interdisciplinary educational disciplines provide crucial knowledge for urban foresters, including collecting attribute data on the urban forest and implementing best management practices. Precise data on the urban forest is often scarce and out of date due to the difficulty of traditional sampling approaches; higher education provides insight into modern technologies, such as remote sensing, that analyze the urban forest and generate accurate data with more precise details on urban tree canopy, individual tree metrics, species, and age structures. This educational training creates a path to becoming a credible urban forester.
While in-classroom education is one method of training, experiential learning is highly recommended in order to hone the more technical aspects of the field, such as tree inventory, planting, and pest management. Field work also builds social skills: community and client-based relationships often require social expertise to resolve conflict. Through field training and client interaction, skills in conflict management are acquired, including, but not limited to, effective listening, participatory planning, and leadership. Social engagement is increasingly necessary when working with marginalized communities, formatting budget plans, managing aesthetics, and handling other urban forestry responsibilities. Through internships, job experience, and field training opportunities, many skills are developed that are crucial for professions in urban forestry.
Earning credentials and certifications through professional organizations, such as the International Society of Arboriculture (ISA) and the Tree Care Industry Association (TCIA), is often a specific qualification for becoming an urban forester. The ISA, for example, is a global organization that offers an array of certifications and qualifications, including ISA Certified Arborist. According to a 2020 survey, urban forestry employers most desired employees who possessed the ISA Certified Arborist credential, followed by a commercial pesticide applicator license and a commercial driver's license. Such credentials require a minimum period of on-the-job training followed by a written and/or practical exam; online course material and study guides can be purchased, for example through the ISA's website, and computer-based or paper exams can then be taken to officially earn a credential. As urban forestry focuses on the extensive management of trees, it is important to note that these organizations are geared toward credentialing arborists, who manage trees intensively. The TCIA is another professional organization that sets standards for tree care firms and provides education and information through publications, conferences, and workshops. While the TCIA is designed to provide tree care firms with training and certification, certain programs, such as the Electrical Hazards Awareness Program (EHAP), may benefit those in urban forestry. An urban forester who directly manages street trees, for example, may find the EHAP useful for their management decisions, because street trees are often affected by overhead and/or underground utilities.
Higher education, field work, and credentials are all methods of training that provide experiences for someone pursuing a career in urban forestry. This training is crucial to establishing trust among urban forest stakeholders and upholding professionalism in the urban forest industry.
Street trees
A street tree is any tree growing in a city thoroughfare, whether between the sidewalk and the curb or in an unimproved right-of-way. Street trees provide valuable ecosystem services, including stormwater mitigation, air pollutant removal, and shade to mitigate the urban heat island effect. Since street trees are often planted in areas with a high percentage of impervious surfaces, they are an important fraction of an area's overall urban tree cover. When planting street trees, there are many factors to consider and difficulties to overcome: depending on climate, soil moisture, nutrient dynamics, and location, much planning goes into planting street trees, and if done incorrectly these trees can cost a municipality time and money to maintain and remove. Urban site conditions, species selection, and tree management are three key aspects of cultivating street trees.
Urban sites present many challenges to street trees because of their adverse conditions. Limited soil volume, high soil compaction, and intense microclimates are common where street trees are planted. Because of these adverse conditions, street trees typically have lower growth rates and lower survival rates than trees planted in nurseries or more natural settings. There are also conflicts between tree parts and urban infrastructure because of dense urban environments. Tree roots are known to inflict costly damage by fracturing pavement, which is a common cause for tree removal. In order to receive the full benefit of ecosystem services of street trees, urban foresters aim to minimize these conflicts and provide young trees with the highest opportunity to reach maturity.
A guiding principle of urban forestry is to plant the right tree in the right place. Certain species are more tolerant of adverse urban conditions than others, and urban foresters strive to select species that will maximize benefits and minimize costs for a specific site. For example, yellow-poplar (Liriodendron tulipifera) is known to be intolerant of poor urban soils and therefore is rarely used as a street tree. No tree species is perfectly suited for every site, so the characteristics of each species are scrutinized to determine its suitability for planting as a street tree. Important characteristics of street tree species include tolerance of alkaline soils, compacted soils, low soil volume, de-icing salts, and drought, along with good structure. Blackgum (Nyssa sylvatica) and swamp white oak (Quercus bicolor) are species renowned for their adaptability to urban environments, but even they have drawbacks, such as blackgum being difficult to transplant. The London plane (Platanus × hispanica) has been planted in cities all over the world due to its high tolerance of urban environments.
Planning is an important step in the establishment of street trees. Policies and guidelines benefit the street tree planning process by lowering costs and improving the health and safety of a municipality; studies have shown that municipalities that do not abide by them incur higher economic and environmental costs. Models and formulas may also be used to ensure adequate species diversity, giving more resilience to disturbances and stressors. An example of a formula that municipalities abide by in planning is Santamour's 10-20-30 rule, which allows no more than 10% of trees from the same species, no more than 20% from the same genus, and no more than 30% from the same family. The Species Selection Model focuses on procedures that identify suitable street trees by surveying common species used in urban areas. The Analytic Hierarchy Process is a three-layer structure consisting of an objective, criteria, and factors; factors in street tree establishment may include tree height, DBH, canopy density, and drought resistance. Planning for the physical tree planting should consider bare-root and balled-and-burlapped (B&B) trees. When deciding between bare-root and B&B stock, species, age, street traffic intensity, site type, wound presence, and the dimensions of sidewalk pit cuts should be examined.
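Santamour's 10-20-30 thresholds reduce to a simple proportion check over an inventory. The sketch below applies them to a small hypothetical inventory; the species tallies are invented for illustration and are not from any real municipal dataset.

```python
from collections import Counter

# Hypothetical street-tree tallies: (species, genus, family, count).
tallies = [
    ("Acer rubrum", "Acer", "Sapindaceae", 12),
    ("Acer saccharum", "Acer", "Sapindaceae", 10),
    ("Quercus bicolor", "Quercus", "Fagaceae", 9),
    ("Nyssa sylvatica", "Nyssa", "Nyssaceae", 8),
    ("Platanus x hispanica", "Platanus", "Platanaceae", 61),
]

def check_10_20_30(records):
    """Return (level, taxon, share) for taxa exceeding Santamour's caps."""
    total = sum(r[3] for r in records)
    caps = [("species", 0, 0.10), ("genus", 1, 0.20), ("family", 2, 0.30)]
    violations = []
    for level, idx, cap in caps:
        counts = Counter()
        for r in records:
            counts[r[idx]] += r[3]  # sum tree counts per taxon
        for taxon, n in counts.items():
            if n / total > cap:
                violations.append((level, taxon, n / total))
    return violations

for level, taxon, share in check_10_20_30(tallies):
    print(f"{level}: {taxon} at {share:.0%} exceeds the cap")
```

With 61 of 100 trees being London plane, the species (10%), genus (20%), and family (30%) caps are all exceeded for that taxon, and the Acer genus (22%) also exceeds its cap, flagging the inventory as insufficiently diverse.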
Case study: Nashville tree density increase bill
In late March 2019, the Nashville Metro Council announced its plan to cut down 21 cherry trees from Riverfront Park so that a temporary outdoor stage could be constructed for the NFL draft. Immediate public outcry from residents, including a Change.org petition that garnered over 80,000 signatures, pressured the city and the NFL to revise the plan so that only 10 trees would be uprooted and relocated, leaving the remaining ones untouched.
Following these events, Vice Mayor Jim Shulman contacted the Nashville Tree Conservation Corps, a non-profit that "works to promote, preserve, protect, and plant the tree canopy in Davidson County", in order to prevent future incidents like this one. Thanks to the efforts of the bill's lead sponsor, council member Jeff Syracuse, and the Nashville Tree Conservation Corps, the new public tree bill BL2021-829 was signed into law more than two years later, on August 19, 2021.
BL2021-829 "seeks to promote transparency and oversight within Metro departments regarding tree removal and replacement." The bill's passage signaled the Nashville Metro Council's commitment to maintaining urban green space as the city underwent a development boom. Starting in 2015, Nashville experienced a surge of construction projects, ranging from high-rise residential towers to a $220 million office building. This development has shown no signs of slowing down and, if left unchecked, risks actively contributing to the environmental degradation of the city.
The law specifically addresses the removal of public trees by requiring that the public be notified of their removal at least two weeks beforehand and that replacements are planted. Key features of the law include the formation of a Tree Working Group, which reviews tree-related policies, and a Tree Review Panel made up of representatives from Water, Parks, Codes, Transportation, General Services, Planning, and the Mayor's Office who oversee public trees. It also mandates that a countywide tree canopy study be conducted every five years in order to keep accurate data on the public trees.
This kind of regulation of the city's trees promises both environmental and social benefits. Maintaining a healthy population of public trees helps to reduce environmental problems common in urban landscapes, such as air pollution and waste heat. Prioritizing tree growth also supports biophilic urban design, which has been shown to have health benefits and to facilitate stronger social and emotional connections among people.
Planning
There are many benefits, costs, and challenges to planning an urban forest. Urban forests provide both ecosystem services and disservices that are considered prior to planning. When placed in the right spots, urban forests provide services such as improved air quality, noise reduction, temperature mitigation, and stormwater mitigation, and urban forest planning is used to maximize these benefits by thoughtfully placing trees in the best locations. Challenges faced during planning include managing the disservices from trees, valuing their services, the loss/replacement cost of green infrastructure, and the cost of remediating gray infrastructure interference. A major loss of green infrastructure could alter the sense of place, community identity, and social cohesion of a municipality.
When planning an urban forest there are several practices that can be used. Many municipalities put plans for an urban forest into an official document such as a master plan. While not every city can implement an urban forest plan, it is possible to implement plans for specific areas, such as parks, that would help increase the canopy cover of a municipality.
During the creation of the urban forest management plan, criteria and goals are usually outlined in the plan early in the planning process. Determining criteria is done by assessing the current state of the urban forest and then incorporating criteria for performance goals into the management plan. Assessment is the first step in planning and provides necessary information on the forest extent, age distribution, tree health, and species diversity. Once the assessment is completed, the next step becomes deciding what criteria—or indicators—to incorporate into the plan so that there are set performance goals. Incorporating indicators into the management plan makes it easier to track the progress of the urban forest and whether goals are being met. Criteria/indicators typically focus on a category of urban forest management and usually include subjects such as:
The urban forest vegetation and its characteristics such as canopy cover, age distributions, and species diversity.
Having a community focus that involves industry cooperation, and community and stakeholder involvement.
The planning of the urban forest and whether it is successful in the management and funding of the urban forest.
The incorporation of indicators into management plans is a strong aid in the implementation and revision of those plans and helps reach the goals within them.
A key part of a master plan is to map spaces where trees will be planted. In the paper A methodology to select the best locations for new urban forests using multicriteria analysis, three different steps are outlined for determining tree planting areas. The first stage is an excluding stage, which uses a set of criteria to exclude poor locations and indicate potential locations for planting. Second is a suitability stage, which evaluates the potential locations to determine a more selective group of suitable spots. Finally, the feasibility stage is a final test to determine if the suitable locations are the most feasible planting areas with minimal site use conflicts.
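The three stages above amount to successive filters over candidate sites. The sketch below illustrates that staged screening with made-up sites and criteria; the field names, thresholds, and data are invented for illustration and are not taken from the cited paper.

```python
# Hypothetical candidate planting sites with illustrative attributes.
candidate_sites = [
    {"name": "lot_a", "contaminated": False, "slope_pct": 4,  "suitability": 0.8, "use_conflicts": 0},
    {"name": "lot_b", "contaminated": True,  "slope_pct": 2,  "suitability": 0.9, "use_conflicts": 0},
    {"name": "lot_c", "contaminated": False, "slope_pct": 25, "suitability": 0.7, "use_conflicts": 1},
    {"name": "lot_d", "contaminated": False, "slope_pct": 6,  "suitability": 0.4, "use_conflicts": 0},
]

# Stage 1 (excluding): drop clearly unusable sites.
potential = [s for s in candidate_sites
             if not s["contaminated"] and s["slope_pct"] < 20]

# Stage 2 (suitability): keep only sites above a suitability score.
suitable = [s for s in potential if s["suitability"] >= 0.5]

# Stage 3 (feasibility): drop sites with competing land-use conflicts.
feasible = [s["name"] for s in suitable if s["use_conflicts"] == 0]

print(feasible)  # only lot_a survives all three stages
```

Each stage narrows the pool, so expensive feasibility checks are only applied to sites that already passed the cheaper exclusion and suitability screens.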
The management of urban forest planning falls into many hands. During the writing of a plan, input from professionals and citizens is taken into consideration. When designing the plan and determining planting locations, landscape architects, arborists, and urban foresters provide valuable input on what trees to plant and where, in order to ensure an urban forest that is long-lived and healthy. The public works department and planning commissioners also play a role, making sure that no trees are planted where they may interfere with emergency operations, underground or above-ground utilities, or public safety. Planning for an urban forest thus involves input from a variety of people and consideration of how trees affect the community they grow in.
Assessment
Urban forest assessment is a strategy used within broader management and planning operations that allows urban foresters to better understand and care for the forest resource at hand. It allows aspects of the forest, such as ecosystem services and benefits, species composition, canopy distribution, and health, to be monitored and predicted for current and future management needs. Data from urban forest assessments can be useful not only in informing foresters but also in quantifying benefits, showing members of the public the importance of preserving and protecting trees in urban settings. Urban forest assessments are becoming integral to urban communities as they plan for and care for their trees; cities such as Tallahassee, Florida have incorporated assessment into their urban forest master plans. Within the United States, the USDA Forest Service has provided resources to inform foresters and community members about the importance of these assessments and the benefits of conducting them.
Urban forestry planning and management methods are key to creating and maintaining an urban forest that produces sustainable benefits for the surrounding community. Stakeholders, such as individual citizens, local volunteer groups, and political figures, are often involved in the urban forest planning and management processes within municipalities, and diverse stakeholder groups allow a comprehensive plan to develop, with unique elements brought to attention by each group. Urban forest assessments have the potential to increase the economic, social, and cultural benefits the forest provides to the community. An urban forest plan addresses land use, transportation, infrastructure, and green space, because they all affect the urban forest structure; each municipality determines the relative importance of these elements and the proper actions to protect the urban forest's function and role in the area.
An assessment must first be completed before any benefits are gained. There are generally two basic ways that urban forests are assessed. The bottom-up approach is a field inventory completed by crews on the ground; this process is detailed and provides the forest information needed for management decisions. The top-down approach utilizes aerial and satellite imagery to discern canopy cover, plantable space, and impervious surfaces at low cost. Different tools are available to complete these assessments. i-Tree is a set of tools cooperatively created and maintained by the USDA Forest Service and other organizations. i-Tree Eco is commonly used for bottom-up assessment and uses field data collected by the user to quantify the value and benefits of the trees. The i-Tree software also has tools for top-down approaches: i-Tree Landscape uses the National Land Cover Database (NLCD) along with other layers to provide information about canopy cover, plantable space, ecological benefits, and more, while i-Tree Canopy allows the user to interpret aerial and satellite imagery to determine land cover on a smaller scale than i-Tree Landscape.
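The top-down, point-sampling idea behind tools like i-Tree Canopy can be sketched statistically: classify random points on imagery as canopy or not, then estimate cover with a binomial standard error. The "image" below is a synthetic grid standing in for real aerial data, so the numbers are purely illustrative.

```python
import random
import math

random.seed(42)
width, height = 100, 100
# Synthetic landscape: roughly a third of the 10x10-cell blocks are treed.
true_canopy = {(x, y) for x in range(width) for y in range(height)
               if (x // 10 + y // 10) % 3 == 0}

# Classify n random points as canopy / not canopy.
n_points = 500
hits = 0
for _ in range(n_points):
    x, y = random.randrange(width), random.randrange(height)
    if (x, y) in true_canopy:
        hits += 1

# Proportion estimate with its binomial standard error.
p_hat = hits / n_points
se = math.sqrt(p_hat * (1 - p_hat) / n_points)
print(f"estimated canopy cover: {p_hat:.1%} +/- {1.96 * se:.1%} (95% CI)")
```

The same arithmetic scales to real imagery: more sample points shrink the standard error at the cost of more manual (or automated) point classification.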
Impact of Climate Change
Cities and urban areas are more vulnerable to the growing impacts of climate change due to their high proportion of paved surfaces, increased pollution, denser human population, and concentration of built structures. This leads to the urban heat island phenomenon, in which urban areas with large amounts of impermeable, heat-absorbing surfaces are measurably warmer than surrounding areas, particularly those with more natural cover. As climate change impacts Earth, it will continue to disproportionately affect urban areas, and the warming will continue. This poses challenges for urban foresters, as tree species will be pushed out of their distribution ranges when conditions change and become unfavorable. Trees and the urban canopy are vital in mitigating these heat effects and other challenges; they are an asset to communities, which is why the planning and implementation of adaptation strategies are coming to the forefront.
Since cities are heavily impacted by climate change, urban forestry professionals need to adopt strategies that will lessen its effects. Many cities have created management plans to address this issue. The city of Chicago, Illinois created a forest vulnerability assessment and synthesis in 2017 that examines the current state of its forest and what the future could look like; it found that the distribution of native tree species will change and that stressors like drought, heat, and flooding will make trees more vulnerable to pests and disease. A report published by the United States Department of Agriculture addresses ways an urban forestry program can mitigate the impacts of climate change. Strategies include maintaining natural processes (restoring riparian buffers and using prescribed fire), promoting integrated pest management, sustaining native animal habitat, and reducing landscape fragmentation, among others. Another recent study points out that public action is also a large part of combating climate change: an urban forestry program is only as strong as its community support, and if the public does not see the urgency of climate change or understand the science behind the program's actions, progress will be slow.
Strategically planting trees is a proven method of climate change adaptation and mitigation. The city of Houston has developed a simple yet effective framework for tree planting to fight the increasingly noticeable effects of climate change. Native "super" tree species have been identified by a ranking system that examines the best combination of carbon dioxide absorption, absorption of other air pollutants, flood mitigation, and the ability to thrive under projected future climate conditions. Regions of the municipality experiencing disproportionately poor air quality, flooding, elevated heat, and high rates of health concerns are then mapped to plan for large-scale planting of ideal tree species. This framework can be altered and applied to any municipality to improve conditions worsened by climate change. The US Forest Service has also identified potential strategies for creating more resilient urban forests prepared for more unpredictable conditions. Important to this is enhancing the taxonomic, structural, and functional diversity of trees in the urban forest. One way to accomplish diversity is to implement the 30-20-10 rule, which states that no more than 30% of the trees should belong to a single taxonomic family, no more than 20% to a single genus, and no more than 10% to a single species.
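A Houston-style "super tree" ranking can be sketched as a combined score across the named benefit criteria. The species below are real trees common in the Gulf Coast region, but the scores and criteria weights are invented for illustration; Houston's actual ranking methodology is not reproduced here.

```python
# Benefit criteria named in the framework above.
criteria = ["co2_uptake", "pollutant_removal", "flood_mitigation", "climate_fit"]

# Hypothetical normalized scores (0-1) per species, per criterion.
scores = {
    "Quercus virginiana": {"co2_uptake": 0.9, "pollutant_removal": 0.8,
                           "flood_mitigation": 0.7, "climate_fit": 0.9},
    "Taxodium distichum": {"co2_uptake": 0.7, "pollutant_removal": 0.6,
                           "flood_mitigation": 0.9, "climate_fit": 0.8},
    "Ulmus crassifolia":  {"co2_uptake": 0.6, "pollutant_removal": 0.7,
                           "flood_mitigation": 0.6, "climate_fit": 0.7},
}

# Rank species by the sum of their criterion scores, highest first.
ranked = sorted(scores, key=lambda sp: sum(scores[sp][c] for c in criteria),
                reverse=True)
print(ranked[0])  # species with the highest combined benefit score
```

A real implementation would weight the criteria differently per neighborhood (e.g. favoring flood mitigation in flood-prone areas), which amounts to multiplying each score by a locally chosen weight before summing.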
By country
Brazil
The Amazon rainforest is world famous for its ability to sequester carbon from the atmosphere, and since the 1960s cities in Brazilian Amazonia have been integrally linked with their surrounding forest. Modern urbanization has degraded those forests, depleting ecosystem services that are vital to city functioning. Invasive species are a large issue in Brazilian urban forest conservation: in a survey of 29 Amazonian urban forests, 34.7% of all identified species were exotic or invasive, while 65.3% were native. Urban forest development and management in Brazil is supported by legislation; the 2012 Brazilian Forest Code states that city halls can require green areas in residential allotments, commercial property, and public infrastructure.
Curitiba's RPPNM
Curitiba is internationally known as a pioneer city in conservation efforts. Since 2006, Curitiba has run the Municipal Private Natural Heritage Reserves (RPPNM) project, which allows owners of relevant native areas within the city to turn them into privately owned natural reserves in exchange for being able to transfer that area's constructive potential elsewhere. Instead of building on an area of Atlantic Forest, the owner can add what could have been built there to another site, allowing the receiving building to surpass the usual urbanistic height and density limits, thus preserving the forest with zero net urban impact. The project won 2006's UNEP-Bayer Young Environmental Envoy programme.
Canada
With over 75% of Canadians living in urban areas, urban forests play an important role in the daily lives of Canadian citizens, providing numerous environmental and health benefits. Over time, the practice of urban forestry in Canada has changed. In the 1960s, Erik Jorgensen of the University of Toronto coined the seemingly oxymoronic term "urban forestry" while assisting a master's student with his curriculum. After this milestone, however, urban forestry faded into the background, with few accounts of it being practiced. As urban forestry gained recognition globally and its importance was realized, Canada began creating Urban Forest Management Plans (UFMPs). These plans focus on maintenance, improving canopy cover, enhancing tree species diversity, and educational programs, without focusing on the economic or environmental services urban forests provide. Today, Canada is conducting studies to address the gaps within its urban forestry programs. Because urban forestry is practiced under different departments, labels, and disciplines, the true extent of urban forestry in Canada is unknown.
University of Toronto
During the 1960s, the University of Toronto was home to some of the most significant forest pathology developments of the decade. Two professors at the university, Jorgensen and media professor Marshall McLuhan, were given the catalyst to pioneer the discipline of "urban forestry" when the crisis of Dutch elm disease threatened the 90% elm monoculture at the university. What made this new discipline different from prior urban tree management strategies was its sense of scale: before the 1960s, urban trees were managed on a tree-by-tree basis. Dutch elm disease finally convinced forest pathologists at the school to consider the urban forest at a systems level, where small changes can create forest-wide effects if not properly managed. In 1962 this thinking gave Jorgensen a convincing enough argument to secure funding for the world's first "Shade Tree Research Laboratory", housed in an old dairy plant that the university owned. By 1965 the University of Toronto had its first official urban forestry course, "The Study of Urban Forestry", taught by Jorgensen. One year later, department head Dean Sisam applied the term to the previously known courses of "arboriculture and parks management", and three years after that the university began offering diplomas in urban forestry, producing seven graduates by 1982. The University of Toronto's program has continued to grow significantly into current times, inspiring many other institutions to offer similar diplomas as the discipline diffused across the globe.
Erik Jorgensen
Erik Jorgensen began as a forest pathologist for the federal government in Denmark, then moved to Toronto in 1959 to begin studies on Dutch elm disease (DED), which at the time was spreading through North America at extreme rates and killing thousands of elm trees in its path. He was a professor of forest pathology at the University of Toronto throughout the 1960s. While being interviewed for a newspaper article in 1969, he defined urban forestry as "a specialized branch that has as its objective the cultivation and management of city trees". He continued his career at the University of Toronto, and his laboratory became increasingly devoted to shade tree research in Canada. Jorgensen continued to define and justify the importance of urban forestry through conference papers published by the Shade Tree Research Laboratory throughout the 1970s and 1980s. He left the university in 1973 to lead a national urban forestry program in Ottawa, Canada.
China
Urban Green Space Development
With a rapidly growing population, China has started developing strategies to improve urban life, promoting the concept of "making forests enter cities and making cities embrace forests". The creation of the "National Forest City" title in 2004 has incentivized urban forest development, and the program has led to significant positive changes in the quality and quantity of green space in many Chinese cities. Currently, 58 cities have been awarded the title. While changes have been made, inequity of recreational green space may still be a challenge: in a case study of Wuhan, China, green spaces overall were found to be equally distributed, but public parks were not. These findings suggest that some social groups and populations cannot equally enjoy the recreational and health benefits of public green spaces.
Nanjing
The Nanjing Vertical Forest project, designed by Stefano Boeri of Stefano Boeri Architetti, consists of two towers: a 200-meter tower that will hold office spaces, a museum, a rooftop club, and a green architecture school, and a 108-meter tower that will include a Hyatt hotel and swimming pool. With construction now complete, native trees, shrubs, and perennials are being installed: 600 tall trees, 200 medium-sized trees, and 2,500 cascading plants and shrubs will be planted on the building facades. The project is expected to absorb 18 tonnes of carbon dioxide while providing 16.5 tonnes of oxygen annually.
Shanghai
A 99 km long and 100 m wide forest belt surrounding the city of Shanghai was completed in 2003, significantly reducing the city's heat island problem.
Another pilot project, by the Shanghai Municipal Agricultural Commission, aims to convert 35% of the total area of Shanghai to urban forest. The project introduced a forest network of "two rings, eight lines, five zones, multi-corridors, multi-grids, and one chain": two ring-shaped forests (an inner ring 500 m wide and 97 km long surrounding the central district, and an outer ring 180 km long in suburban land); eight longitudinal forest belts 1,000 m wide along expressways and major rivers; five large forest parks, each about 30 km2, scattered in the suburbs; multiple green corridors 25 to 500 m wide; grids of forests along the seashore and in industrial areas; and one chain linking various habitats.
Japan
In recent years, there has been a significant national effort to deploy urban reforestation research initiatives in Japanese metropolitan areas; current research evaluates tree count, species richness, and carbon sequestration capacity. The Tokyo area has planted 420,563 trees bordering 2,712 kilometers of streets, and within 4,177 ha of urban parks in Tokyo there are over 1.5 million planted trees. The urban forest in Tokyo has been managed by the Japan Greenery Research and Development Center Foundation since 1973.
History
The first planting of camphor trees alongside rural roads is estimated to have happened around the 3rd century AD, and the first record of a government policy ordering roadside tree planting dates to 759 AD. Cherry, willow, and Japanese pagoda trees were planted adjacent to Kyoto streets by the 9th century. In the Ginza area, cherry and pine trees were planted along sidewalks 5 meters apart in 1873; because these trees grew poorly, they were replaced with Shidareyanagi willow trees in 1880. Japanese maple was also one of the most popularly planted species. In 1907, the city of Tokyo carried out a massive urban planting of the healthiest and most dependable street trees that had survived. The ginkgo was among the first widely successful and popular street trees in Tokyo, which is why the tree is now planted along streets and in parks around the world.
In the late 1960s, street trees were used to address urban environmental problems such as air and noise pollution, and the Tokyo Olympic Games gave the government a further reason to plant more trees in the city. There were 12,000 street trees planted in Tokyo by 1965. The species composition of street trees changed dramatically from 1980 to 1996, as dogwood, cherry, and Japanese zelkova skyrocketed in popularity and were planted extensively. There were 420,564 street trees planted in Tokyo by 1997.
India
The majority of Indian cities, excluding Chandigarh and Gandhinagar, have very low urban forest availability per capita compared to U.S., Australian, and European cities. There are, however, strong urban forestation initiatives in New Delhi, the capital of India, where 20% of land cover is currently green space. The parks and garden society has recently been put in charge of urban forestry initiatives. Two biodiversity parks and nine city forests have been constructed in Delhi, and nine more city forests are in the planning process. Roads in Delhi are identified by the tree species planted beside them (e.g., Vigyan Path with Toona ciliata).
Tree planting is promoted in the state of Gujarat through its association with religious practices in numerous belief systems. In the Puranas (Hindu religious texts), each planet, constellation, and zodiac sign has its own preferred tree, and planting these trees is said to benefit human life and luck. In Gandhinagar city, six hectares of land are planted with trees in acknowledgement of these religious beliefs, as a dedicated space for giving life and love to the trees for the health and prosperity of the forest.
The Kerwa Forest Area
A case study of the Kerwa Forest Area, 10 km from the city of Bhopal, India, evaluated the effects of human impact on its capacity to provide ecosystem services. Bhopal's swift urbanization has negatively impacted ecosystems in the Kerwa Forest Area; due to human impacts, few ecosystem services, such as carbon sequestration and biodiversity conservation, were evident enough to be measured in the study. The forest is still able to filter stormwater and provide a stable drinking-water supply for Bhopal city residents: 40% of Bhopal citizens rely on the Upper Lake, a reservoir within the Kerwa Forest, for drinking water. Forest degradation has increased runoff from the Kerwa Forest Area, which alters water quality in the lake, and direct overland flow transports excess nutrients from adjacent agricultural fields to the lake, causing eutrophication and reducing lake biodiversity. The Kerwa Forest Area is under critical environmental stress, yet supplies ecosystem services necessary to the health and wellbeing of Bhopal residents.
Carbon Sequestration Potential
Native tree species in India have large potential for carbon sequestration in urban areas with high greenhouse gas concentrations. Teak (Tectona grandis), a highly productive species, can sequester more carbon in less time than other native trees, so plantings of T. grandis in areas undergoing rapid urbanization can act as carbon sinks for excess carbon dioxide emissions. A mix of native species, however, is often more ecologically valuable and will provide more ecosystem services.
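As a back-of-the-envelope illustration of the arithmetic behind such estimates (the carbon figure below is hypothetical, not a measured value from the studies discussed here), a stand's sequestered carbon mass is commonly converted to a CO2-equivalent using the molar-mass ratio of CO2 to carbon:

```python
# Illustrative sketch only: the 500 kg carbon figure is invented,
# not a measured value for T. grandis or any other species.
CO2_PER_C = 44.01 / 12.01  # molar mass of CO2 over molar mass of C, ~3.67

def co2_equivalent_kg(carbon_kg: float) -> float:
    """CO2-equivalent mass for a given mass of sequestered carbon."""
    return carbon_kg * CO2_PER_C

# A stand storing 500 kg of carbon corresponds to roughly 1,832 kg of
# CO2 withdrawn from the atmosphere.
print(round(co2_equivalent_kg(500.0)))
```

Only the conversion ratio is standard chemistry; the per-stand carbon mass would come from field measurement or allometric models.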
Scandinavia
History
Following urbanization in Europe, rapid city expansion pushed forests to the edges of cities, leaving the only urban greenspaces in the private hands of monarchs, religious establishments, and others in positions of power. Over time, as democracies emerged, the public was able to express interest in public recreational areas. Urban forest development was initially dictated by the wealthy and upper classes, but in the second half of the 19th century direct government intervention increased and more urban greenspaces began opening to the public. The development of urban greenspaces created a need for their management, and the urban forester profession became commonplace. Forestry experts became more involved in forest and green-service management as localities and national forest services took responsibility for these areas.
Practices
According to a study published in the Scandinavian Journal of Forest Research, an average of 53% of forest land within a Danish municipality is owned by the municipality itself. While the figure varies with the size of the municipality, it serves as a general statistic. Compared to the other Scandinavian countries, Denmark's municipalities are unique in that they regularly buy and sell land to the private sector; this exchange results in a variety of owners of the green spaces within Denmark's urbanized areas. Only around a quarter of municipalities in Denmark have woodland policies in place for managing their urban forests; the others either have a stand-alone policy (around 20%) or no policy at all (roughly 30%). In recent years, the budget for park and tree maintenance in most places appears to be steadily dwindling. Sweden has likewise transitioned to a more conservation-oriented, active-management mindset. In Sweden, urban forests and green spaces are classified into five zones based on size and use; after classification, recommendations for future improvements and management strategies are formed. In addition to zone classification, i-Tree inventories are used for the assessment and management planning of urban green spaces. Swedish municipalities are constantly innovating and adapting their management strategies for the old-growth forests in central urban areas and the younger forests on the outskirts.
Composition
Most of the species in Scandinavian urban forests are native, and a majority of people state a preference for native species. Common species include Norway Spruce (Picea abies), Scots Pine (Pinus sylvestris), Silver Birch (Betula pendula), and Moor Birch (Betula pubescens). Urban forests also tend to be fairly irregular in age and tree placement, though general favor tends to be shown towards older trees. Visibility is rated as a priority in the design of these places and is a common issue faced by managing officials. Surveys conducted across Finland, Denmark, and Sweden indicate that approximately 53% of urban canopy cover is managed directly by municipal governments, while the rest is under private ownership.
South Africa
History
Cape Town's indigenous flora, fynbos, is characterized by low-lying shrubbery with few trees. In response to the Cape's natural timber deficiency, alien tree species were introduced during the Dutch occupation, beginning in 1652, to support a growing population and economy. Foreign settlers planted trees in cities, alongside new roads, and around private dwellings, and Cape foresters developed new methods for growing exotic trees in the unfamiliar climate. These methods, which began in the Cape, later spread to other South African colonies. Many South African towns remain characterized by roadside rows of exotic trees, planted from as early as the 17th century.
United Kingdom
In the UK, urban forestry was pioneered at the turn of the 20th century by the Midland Reafforesting Association, whose focus was the Black Country. England's Community Forests programme was established in 1990 by the then Countryside Commission as a pilot project to demonstrate the potential contribution of environmental improvement to economic and social regeneration. Each Community Forest was established as a partnership between local authorities and local, regional and national partners including the Forestry Commission and Natural England. Collectively, this work has formed the largest environmental regeneration initiative in England. In the mid-1990s the National Urban Forestry Unit (NUFU) grew out of a Black Country Urban Forestry Unit and promoted urban forestry across the UK, notably including the establishment of the Black Country Urban Forest. As urban forestry became more mainstream in the 21st century, NUFU was wound up, and its advocacy role is now carried on by organisations such as The Wildlife Trusts and the Woodland Trust.
United States
History
Tree warden laws in the New England states are important examples of some of the earliest and most far-sighted state urban forestry and forest conservation legislation. In 1896, the Massachusetts legislature passed the first tree warden law, and the other five New England states soon followed suit: Connecticut, Rhode Island, and New Hampshire in 1901, Vermont in 1904, and Maine in 1919. (Kinney 1972, Favretti 1982, Campanella 2003).
As villages and towns grew in population and wealth, ornamentation of public, or common, spaces with shade trees also increased. However, the ornamentation of public areas did not evolve into a social movement until the late 18th century, when private individuals seriously promoted and sponsored public beautification with shade and ornamental trees (Favretti 1982, Lawrence 1995). Decades later, around 1850, institutions and organizations were founded to promote ornamentation through private means (Egleston 1878, Favretti 1982). In the 1890s, New England's "Nail" laws enabled towns to take definitive steps to distinguish which shade trees were public. Chapter 196 of the 1890 Massachusetts Acts and Resolves stated that a public shade tree was to be designated by driving a nail or spike, with the letter M plainly impressed on its head, into the relevant trunk. Connecticut passed a similar law in 1893, except its certified nails and spikes bore the letter C (Northrup 1887).
The rapid urbanization of American cities in the late 19th century concerned many who saw it as encouraging the intellectual separation of humanity and nature (Rees 1997). By the end of the 19th century, social reformers were just beginning to understand the relationship between developing parks in urban areas and "[engendering] a better society" (Young 1995:536). At this time, parks and trees were seen less as a way for urban dwellers to experience nature and more as a means of acculturation and control for newly arrived immigrants and their children (e.g., areas to encourage "structured play" and thus deter youth crime) (Pincetl and Gearin 2005). Other prominent public intellectuals were interested in exploring the synergy between ecological and social systems, including American landscape architect Frederick Law Olmsted, designer of 17 major U.S. urban parks and a visionary in seeing the value of green space and trees as a fundamental part of metropolitan infrastructure (Young 2009). To Olmsted, unity between nature and urban dwellers was not only physical but also spiritual: "Gradually and silently the charm comes over us; the beauty has entered our souls; we know not exactly when or how, but going away we remember it with a tender, subdued, filial-like joy" (Beveridge and Schuyler 1983, cited in Young 2009:320). The conscious inclusion of trees in urban designs for American cities such as Chicago, San Francisco, and Minneapolis was also inspired by Paris's urban forest and its broad, tree-lined boulevards, as well as by the English romantic landscape movement (Zube 1973).
The belief of early park proponents that green cover promotes social cohesion has been corroborated by more recent research linking trees to stronger ties among neighbors, more adult supervision of children in outdoor areas, greater use of neighborhood common areas, and fewer property and violent crimes (Kuo et al. 1998, Kuo and Sullivan 2001, Kuo 2003).
Many municipalities throughout the United States employ community-level tree ordinances to empower planning officials to regulate the planting, maintenance, and preservation of trees. Tree ordinances emerged largely as a response to the Dutch elm disease that plagued cities from the 1930s to the 1960s, and grew in response to urban development, loss of urban tree canopy, and rising public concern for the environment (Wolf 2003). The 1980s saw the beginning of a second generation of ordinances with higher standards and specific foci, as communities sought greater environmental harmony between new development and existing infrastructure. These new ordinances, legislated by local governments, may include specific provisions such as the diameter of trees and the percentage of trees to be protected during construction activities (Xiao 1995). The implementation of these tree ordinances is greatly aided by community tree advocates who conduct public outreach and education aimed at increasing environmental concern for urban trees, such as through National Arbor Day celebrations and the USDA Urban and Community Forestry Program (Dwyer et al. 2000, Hunter and Rinner 2004, Norton and Hannon 1997, Wall et al. 2006). Much of the work on the ground is performed by non-profits funded by private donations and government grants.
Policy on urban forestry is less contentious and partisan than many other forestry issues, such as resource extraction in national forests. However, the uneven distribution of healthy urban forests across the landscape has become a growing concern in the past 20 years. This is because the urban forest has become an increasingly important component of bioregional ecological health with the expanding ecological footprint of urban areas. Based on American Forests' Urban Ecosystem Analyses conducted over the past six years in ten cities, an estimated 634,407,719 trees have been lost from metropolitan areas across the U.S. as the result of urban and suburban development (American Forests 2011). This is often due to the failure of municipalities to integrate trees and other elements of the green infrastructure into their day-to-day planning and decision-making processes (American Forests 2002). The inconsistent quality of urban forestry programs on the local level ultimately impacts the regional context in which contiguous urban forests reside, and is greatly exacerbated by suburban sprawl as well as other social and ecological effects (Webb et al. 2008). The recognition of this hierarchical linkage among healthy urban forests and the effectiveness of broader ecosystem protection goals (e.g., maintaining biodiversity and wildlife corridors), highlights the need for scientists and policymakers to gain a better understanding of the socio-spatial dynamics that are associated with tree canopy health at different scales (Wu 2008).
Wardens
The New England region created urban forestry policies that laid the foundation for urban areas everywhere. Initially, surface-level policies, such as nail laws and the introduction of tree wardens, were created to protect street trees. Nail laws marked street trees with a nail to designate them as the city's responsibility and to protect them from citizens who wanted to cut them down or otherwise harm them. Tree wardens were required in Massachusetts starting in 1896 to protect these urban trees, and other New England states quickly followed suit. Each municipality was required to have its own tree warden, someone knowledgeable enough about trees to decide how to care for them properly. Some larger municipalities paid these wardens, but many smaller municipalities had to recruit volunteers for the position. The warden's job is to protect the trees and, at the same time, protect the public from the trees: even though shade trees may seem harmless, they can pose risks to public safety. It is the warden's job to preserve as many trees as possible while keeping the public safe.
The responsibilities of tree wardens have grown and shifted over the years. While each municipality has a tree warden in charge of overseeing the urban forest, wardens now have less time to manage each individual tree. Tree wardens are still required to approve the pruning and trimming of any public tree, but they need not be as directly involved: rather than being present whenever a tree is maintained, wardens can now rely on certified arborists and educational programs, allowing other people and companies to maintain the trees they approve. The scope of the job has also increased in modern times. While wardens used to primarily ensure that street trees were cared for and did not cause problems, they now have to manage the entire urban forest, which involves a great deal of planning and compliance with countless regulations.
As society has progressed and technology has improved, the roles of tree wardens have adapted. For instance, power lines have become a major issue for public trees, and utility forestry has developed substantially; wardens now build relationships with utility foresters to ensure proper spacing between power lines and public trees. Tree wardens and urban forest ordinances are also no longer restricted to New England but now span the entire United States. While they generally follow similar guidelines, their policies can vary considerably. To keep policies fairly uniform, the Tree City USA program was created by the Arbor Day Foundation in 1976.
Australia
Australian urban forestry involves the care and management of single trees and tree populations throughout urban Australia, improving the livability of the country's cities. The establishment and progression of urban forestry in Australia have helped alleviate the impacts of the country's harsh climatic conditions in urban areas. The present focus is on improving tree species adaptability, resiliency, and diversity so that these benefits continue under increasingly harsh climatic conditions.
History
The first calls for conserving woodland areas in and around cities arose in the 1970s in response to increasing urbanisation, the consequent demand for recreational green space, and growing awareness of the need to protect native wildlife. BOBITS – bits of bush in the suburbs – was a popular term at the time for these early "simplified versions of Australia's native forests", which flourished in Australian cities under the environmental policies of then prime minister Bob Hawke, summarised in his government's 1988 Greenhouse 21C: A Plan of Action for a Sustainable Future strategy to reduce greenhouse gas emissions. This and many other efforts to implement urban forestry in Australia were stimulated by CSIRO forester John French, who drew inspiration from ongoing efforts in North America. Australia's understanding of urban forestry evolved during a second period to include all spaces used by the urban population; known as the "city forest", this vision of urban forestry incorporated the economic value of urban trees and a focus on ecosystem services. The third and present period in Australia's urban forestry history, known as the "city in a forest", considers the ongoing efforts to include urban forestry as a solution to environmental and health problems.
Climate challenges
Complications of global climate change are exacerbated within certain regions of Australia based on location and exposure to climate factors. Australia is susceptible to variable climatic intensities due to the Southern Annular Mode (SAM), a dynamic circulation pattern promoting warm and dry conditions via cold front redirection. Such conditions intensify standalone climate change challenges, particularly throughout the southern half of Australia. Since nearly 90% of Australians inhabit urban areas, adaptable, efficient and cost-effective methods of climate change mitigation may limit negative human consequences. Establishing, maintaining and retaining trees and shrubs in the urban environment is a nature-based solution with potential to mitigate some impacts that climate change has on Australia's urban population. Municipalities in Australia are exploring the benefits of urban forestry for their regional needs.
Canberra
A 2020 study of Australia's capital, Canberra, explored opportunities for living infrastructure to mitigate conditions like increasing temperature and drought. The authors identified urban forests as one of four kinds of living infrastructure with the potential to provide ecosystem services like cooling, carbon sequestration, and improved livability. While hopeful that a high-quality urban forest can provide these benefits, the authors emphasized the importance of planning and collaboration across diverse stakeholders for successful implementation.
Adelaide
A study of Adelaide, in South Australia, the driest state in the country, examined the potential of green roofs to combat the urban heat island effect. The study found that 30% green roof coverage significantly reduced temperature, electricity usage, and cost. Researchers concluded that green roofs and similar green infrastructure have the potential to mitigate urban heat island effects in this region.
However, climate change impacts also bring challenges to the existing urban forest. A 2019 study of 22 southeastern Australian suburbs showed that over half (53%) of the existing tree species were vulnerable to heat and/or moisture stress. A study of tree health decline in Melbourne in 2017 found significant negative relationships between tree health and climate conditions for every species studied; researchers concluded that drought was the primary factor inducing decline, increasing tree vulnerability to secondary stressors like pests. They emphasized the need to plant trees better suited to the region, given predictions of hotter and drier conditions in the coming years.
Ongoing efforts
Australian cities have outlined urban forestry initiatives and visions to guide future regulation of climate change challenges, as seen in the 2014 Urban Forestry Strategy Guide. Cities have set goals to double tree canopy coverage and encourage tree species biodiversity by monitoring taxonomic composition of urban forests.
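One standard way to monitor the taxonomic composition of an urban forest, as these strategies call for, is to compute a diversity index over the tree inventory. The sketch below uses the Shannon index, H = -sum(p_i * ln p_i); the species and counts are invented for illustration and are not drawn from any Australian inventory:

```python
import math
from collections import Counter

def shannon_index(species_counts):
    """Shannon diversity index for a mapping of species -> tree count."""
    total = sum(species_counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in species_counts.values() if n > 0)

# Hypothetical inventory; higher H indicates a more diverse tree stock.
inventory = Counter({"Ginkgo biloba": 120,
                     "Zelkova serrata": 80,
                     "Prunus serrulata": 50})
h = shannon_index(inventory)  # ~1.04 for this made-up mix
```

A single-species stock gives H = 0, so tracking H over time is one simple check on whether a biodiversity goal is being met.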
Constraints
Resolving limitations will require coordinated efforts among cities, regions, and countries (Meza, 1992; Nilsson, 2000; Valencia, 2000).
Loss of green space is continuous as cities expand and densification occurs; available growing space is limited in city centres. This problem is compounded by pressure to convert green space, parks, etc. into building sites (Glickman, 1999).
Inadequate space is allowed for the root system. Research indicates that healthy large maturing trees require approximately 1,000 cubic feet of soil.
Poor soil is used when planting specimens.
Incorrect and neglected staking or usage of tree shelters leads to bark damage.
Larger, more mature trees are often used to provide scale and a sense of establishment to a scheme. These trees grow more slowly and do not thrive in alien soils whilst smaller specimens can adapt more readily to existing conditions.
Lack of information on the tolerances of urban tree cultivars to environmental constraints.
Poor tree selection, which leads to problems in the future.
Poor nursery stock and failure of post-planting care.
Limited genetic diversity of the tree stock planted (especially the planting of clonal material).
Too few communities have working tree inventories and very few have urban forest management plans.
Lack of public awareness about the benefits of healthy urban forests.
Poor tree care practices by citizens and untrained arborists.
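To give a sense of scale for the soil-volume guideline in the list above (approximately 1,000 cubic feet per large maturing tree), the sketch below converts it to metric units and to an example planting-pit footprint. The 0.9 m usable soil depth is an illustrative assumption, not a figure from the research cited:

```python
CUBIC_FEET_PER_CUBIC_METER = 35.3147  # unit conversion factor

def required_soil_m3(cubic_feet: float = 1000.0) -> float:
    """Convert a soil-volume guideline from cubic feet to cubic meters."""
    return cubic_feet / CUBIC_FEET_PER_CUBIC_METER

def pit_side_length_m(volume_m3: float, depth_m: float = 0.9) -> float:
    """Side length of a square pit of the given depth holding volume_m3.

    The 0.9 m depth is an illustrative assumption for usable soil depth.
    """
    area = volume_m3 / depth_m
    return area ** 0.5

vol = required_soil_m3()       # ~28.3 cubic meters
side = pit_side_length_m(vol)  # ~5.6 m per side at 0.9 m depth
print(f"{vol:.1f} m^3, {side:.1f} m per side")
```

The arithmetic makes concrete why city centres struggle to provide adequate rooting space: a pit several metres on a side rarely fits under a paved streetscape.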
Organizations
American Forests
Casey Trees
Friends of the Urban Forest
Greening of Detroit
Hantz Woodlands
International Society of Arboriculture
National Urban Forestry Unit
Society of American Foresters
Trees Atlanta
American Society of Consulting Arborists
See also
Arboriculture
European Arboricultural Council
Forestry
Garden city movement
Horticulture
i-Tree
Landscape architecture
Million Tree Initiative
Natural resource management
Planting strategy
Silviculture
Tree care
Urban forest inequity
Urban reforestation
References
Notes
Further reading
American Forests. 2002. "Urban Ecosystem Analysis, Knox County, Tennessee." American Forests. Available online as a pdf (archived page).
American Forests. 2011. Urban Ecosystem Analysis. Available online (archived page)
Barro, S. C., Gobster, P. H., Schroeder, H. W. & Bartram, S. M. 1997. "What Makes a Big Tree Special? Insights from the Chicagoland Treemendous Trees Program." Journal of Arboriculture, 23(6), 239–49.
Campanella, T.J. 2003. Republic of shade: New England and the American elm. Yale University Press, New Haven, CT.
Coder, K. 1996. Cultural aspects of trees: traditions and myth. Athens, GA: Cooperative Extension Service, Forest Resources Unit, University of Georgia.
Dwyer, J. F., McPherson, E. G., Schroeder, H. W., & Rowntree, R. A. 1992. Assessing the Benefits and Costs of the Urban Forest. Journal of Arboriculture, 18(5), 227–234.
Dwyer, J. F., Nowak, D. J., Noble, M. H. & Sisinni, S. M. 2000. "Connecting People with Ecosystems in the 21st Century: an assessment of our nation's urban forests." General technical report PNW ; GTR-490 Portland: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station.
Dwyer, J. F., Schroeder, H. W. & Gobster, P. H. 1991. "The Significance of Urban Trees and Forests: Toward a Deeper Understanding of Values." Journal of Arboriculture, 17(10), 276–84.
Egleston, N.H. 1878. Villages and village life with hints for their improvement. Harper and Brothers, Publishers, New York.
European Union, Commission, Brussels. 2016. Urban and Periurban Forests: Management, Monitoring and Ecosystem Services. EMONFUR LIFE+ Project Experiences.
Fernow, B.E. 1910. The care of trees in lawn, street and park. Henry Holt and Company, New York.
Glickman, D. 1999. "Building Cities of Green". 1999 National Urban Forest of Conference. American Forests, Washington, DC. pp. 4–7.
Hansen-Moller, J. & Oustrup, L. 2004. "Emotional, physical/functional and symbolic aspects of an urban forest in Denmark to nearby residents." Scandinavian Journal of Forest Research, 19, 56–64.
Hanson, Michael L. (1990). Urban & Community Forestry, a Guide for the Interior Western United States, USDA Forest Service, Intermountain Region, Ogden, Utah.
Hastie, C. 2003. The Benefits of Urban Trees. Warwick District Council, UK.
Herwitz, E. 2001. Trees at Risk: Reclaiming an Urban Forest. Worcester, MA: Chandler House Press.
Jones, O. & Cloke, P. 2002. Tree Cultures: The Place of Trees and Trees in Their Place. Oxford and New York: Berg.
Kaplan, R. & Kaplan, S. 1989. The Experience of Nature: A Psychological Perspective. Cambridge: Cambridge University Press.
Kaplan, R. 1992. Urban Forestry and the Workplace (No. NC-163). Chicago, IL: USDA Forest Service, North Central Forest Experiment Station.
Kellert, S. R. & Wilson, E. O. 1993. The Biophilia Hypothesis. Washington, D.C.: Island Press/ Shearwater Books.
Kinney, J. P. 1972. The development of forest law in America including legislation in America prior to March 4, 1789. Arno Press, New York.
Konijnendijk, C. C., Nilsson, K., Randrup, T. B., Schipperijn, J. (Eds.) 2005. Urban Forests and Trees – A Reference Book. Springer. (Print) 978-3-540-27684-5 (Online)
Kuo, F. E. 2003. "The Role of Arboriculture in a Healthy Social Ecology." Journal of Arboriculture, 29(3).
Lohr, V. I., Caroline H. Pearson-Mims, John Tarnai, and Don A. Dillman. 2004. How Urban Residents Rate and Rank the Benefits and Problems Associated with Trees in Cities. Journal of Arboriculture, 30(1), 28–35.
McPherson, E. G. & Simpson, J. R. (2000). Reducing Air Pollution Through Urban Forestry. Proceedings of the 48th meeting of California Pest Council (available online, pdf file).
McPherson, E. G. 1994. Using Urban Forests for Energy Efficiency and Carbon Storage. Journal of Forestry, 92(10), 36–41.
McPherson, E. G., & Rowntree, R. A. 1993. Energy Conservation Potential of Urban Tree Planting. Journal of Arboriculture, 19(6), 321–331.
McPherson, E. G., Simpson, J. R. & Scott, K. (2002). Actualizing Microclimate and Air Quality Benefits with Parking Lot Shade Ordinances. Wetter und Leben 4: 98 (available online, pdf file).
McPherson, E. G. 1998. Structure and sustainability of Sacramento's urban forest. Journal of Arboriculture 24(4):174–90.
Meza, H.M.B. 1992. "Current Situation of the Urban Forest in Mexico City". J. Arbor., 18: 33-36
Morales, D. J., Micha, F. R., & Weber, R. L. 1983. Two Methods of Valuating Trees on Residential Sites. Journal of Arboriculture, 9(1), 21–24.
Mudrack, L. 1980. "Urban Vegetation: A Reference for New York Communities". New York Department of Environmental Conservation.
Nilsson, K., Randrup, T.B., and Wandell, B.I.M. 2000. "Trees in the Environment". Oxford University Press, New York, NY.
Northrup, B. G. 1887. Arbor Day: Its history and aims, and how to secure them. Rep. Sec. Connecticut Board of Agric. 13 p.
Norton, B. G., & Hannon, B. 1997. "Environmental values: A place-based theory." Environmental Ethics, 19(3), 227–45.
Nowak, D., & Wheeler, J. Program Assistant, ICLEI. February 2006.
Nowak, D. (1993). Plant Chemical Emissions. Miniature Roseworld 10 (1) (available online, pdf file).
Nowak, D. (1995). Trees Pollute? A "Tree Explains It All". Proceedings of the 7th National Urban Forest Conference (available online, pdf file).
Nowak, D. (2000). Tree Species Selection, Design, and Management to Improve Air Quality Construction Technology. Annual meeting proceedings of the American Society of Landscape Architects (available online, pdf file).
Nowak, D. The Effects of Urban Trees on Air Quality USDA Forest Service (available online, pdf file).
Orland, B., Vining, J., & Ebreo, A. 1992. The Effect of Street Trees on Perceived Values of Residential Property. Environment and Behavior, 24(3), 298–325.
Pickett, S. T. A., Cadenasso, M. L., Grove, J. M., Nilon, C. H., Pouyat, R. V., Zipperer, W. C. & Costanza, R. 2008. "Urban Ecological Systems: Linking Terrestrial Ecological, Physical, and Socioeconomic Components of Metropolitan Areas." Urban Ecology, 99–122.
Pincetl, S. & Gearin, E. 2005. "The reinvention of public green space." Urban Geography, 26(5), 365–84.
Rees, W. E. 1997. "Urban ecosystems: the human dimension." Urban Ecosystems, 1:1, 63–75.
Simpson, J. R., & McPherson, E. G. 1996. Potential of Tree Shade for Reducing Residential Energy Use in California. Journal of Arboriculture, 22(1), 10–18.
Solotaroff, W. 1911. Shade-trees in towns and cities. John Wiley & Sons, New York.
USDA Forest Service. 2003. Benefits of Urban Trees: Urban and Community Forestry: Improving Our Quality of Life. In Southern Region (Ed.), Urban Forestry Manual. Athens, GA: USDA Forest Service.
USDA Forest Service. 2004. Urban Forestry Manual – Benefits and Costs of the Urban Forest. Athens, GA: USDA Forest Service.
Valencia, R.L. 2000. Management of Green Area in Mexico City. Presentation to the 20th Session of the North American Forestry Commission, June 6–10, St. Andrews, Canada.
Wall, B. W. T. J. S., and Stephen E. Miller 2006. "An Econometric Study of the Factors Influencing Participation in Urban and Community Forestry Programs in the United States." Arboriculture & Urban Forestry, 32(5), 221–28.
Webb, T. J., Bengston, D. N. & Fan, D. P. 2008. "Forest value orientations in Australia: An application of computer content analysis." Environmental Management, 41:1, 52–63.
Wolf, K. L. 1998. Enterprising landscapes: Business districts and the urban forest. In C. Kollin (Ed.), Cities by Nature's Design: Proceedings of the 8th National Urban Forest Conference. Washington, D.C.: American Forests.
Wolf, K. L. 1999. Grow for the Gold: Trees in Business Districts. Olympia, WA: Washington State Department of Natural Resources.
Wolf, K. L. 2003. "Introduction to Urban and Community Forestry Programs in the United States." Landscape Planning and Horticulture (Japan), 4(3), 19–28.
Wolf, K. L. 2004. Economics and Public Value of Urban Forests. Urban Agriculture Magazine, 13 (Special Issue on Urban and Periurban Forestry), 31–33.
Wolf, K. L. 2007. City Trees and Property Values. Arborist News, 34–36.
Wu, Jianguo. 2008. "Toward a Landscape Ecology of Cities: Beyond Buildings, Trees, and Urban Forests." in Ecology, Planning, and Management of Urban Forests: International Perspectives, edited by M. M. Carreiro. New York: Springer.
Xiao, H. 1995. "Local ordinances to protect private trees: A field investigation & analysis." Ypsilanti, Michigan: Eastern Michigan University.
Young, Robert. 2009. "Interdisciplinary Foundations of Urban Ecology." Urban Ecosystems 12:311-331.
Young, T. 1995. "Modern urban parks." Geographical Review, 85(4), 535.
Zube, E. H. 1973. "The Natural History of Urban Trees." Natural History, 82, 48–51.
Dean, J. (2009). Seeing Trees, Thinking Forests: Urban Forestry at the University of Toronto in the 1960s. In Method and Meaning in Canadian Environmental History (pp. 236–253). Nelson Education Ltd.
Jorgensen, E. (1977). Vegetation Needs and Concerns in Urban Areas. The Forestry Chronicle, 267–270
Kenney, W. A. (2003). A strategy for Canada's urban forests. The Forestry Chronicle, 79(4), 785–789
Ordóñez, C., & Duinker, P. N. (2013). An analysis of urban forest management plans in Canada: Implications for urban forest management. Landscape and Urban Planning, 116, 36–47
Prebble, M. (1970). Organizational Developments and Program Adjustments in the Canadian Forestry Service. The Forestry Chronicle, 154–156
Rosen, M. R., & Kenney, W. A. (n.d.). Urban forestry Trends in Canada
Biophilia hypothesis | Urban forestry | [
"Biology"
] | 18,159 | [
"Biological hypotheses",
"Biophilia hypothesis"
] |
610,876 | https://en.wikipedia.org/wiki/Belarus%20%28tractor%29 | Belarus (, earlier ) is a series of four-wheeled tractors produced since 1950 at Minsk Tractor Works, MTZ (; ) in Minsk, Belarus.
These tractors are very well known throughout the Commonwealth of Independent States and are exported to more than 100 countries worldwide, including the United States and Canada.
History
Up to the 1950s MTZ had not produced wheeled tractors; tracked crawler tractors were more common. These early tractors were essentially reclaimed tanks, with the gun turret removed and a flatbed, winch, crane or dozer blade added; they saw more use in land reclamation and forestry than in agriculture. This was largely because the tanks were unsuited to large-scale cultivation: their engines, transmissions and tracks were unreliable when pulling loads for the long periods agriculture requires, a duty they had not been designed for. New designs were put into production during 1950, and the new MTZ wheeled tractor was born. These tractors were built to the three main concepts of Soviet engineering: reliability, simplicity and value for money.
Some 3 million tractors have been built at the Minsk Tractor Works since 1948. In 2010, distribution of Belarus tractors in the United States and Canada was re-established through a local distributor, MTZ Equipment Ltd.
One contributing factor was that the factory had started making tractors with compression-ignition diesel engines meeting current emissions standards, including Tier 3/4i/4 in the United States and Canada and Euro 3a, 3b and 4 in Europe.
In February 2014, the MTZ brand made its first appearance at the National Farm Machinery Show in Louisville, Kentucky, with the MTZ 1220 tractor model.
A Belarus model 1523.3 tractor was gifted to Vladimir Putin on his 70th birthday by President of Belarus Alexander Lukashenko.
Licensing
Azerbaijan
In December 2004 the Ganja Auto Plant restarted manufacturing, and the first vehicle built at the factory was sold. In 2008, the plant produced about 600 cars and tractors.
Pakistan
Fecto Group first introduced Soviet-brand tractors in West Pakistan in 1962. Initially, tractors were imported from the USSR; the Fecto Belarus tractor assembly plant was later built in Lahore in the 1980s. Since then, tractor models ranging from 25 H.P. to 80 H.P. have been introduced. These include:
MTZ-50 (55 H.P.), C.B.U.
MTZ-50 (55 H.P.), locally assembled
T-25 A (25 H.P.)
UMZ-60 (60 H.P.)
Belarus-510 (57 H.P.)
Belarus-520 (62 H.P.)
Belarus-511 (57 H.P.)
Belarus-800 (80 H.P.)
Romania
In October 2010 the Romanian company IRUM in Reghin, specializing in articulated lumber tractors, started assembling Belarus TAG tractors for the local market from imported knock-down (KD) kits.
Serbia
In 2011 the Serbian company Agropanonka started assembling 3,000 tractors per year for its local market.
Tajikistan
In September 2012, Tajikistan and Belarus reached an agreement to set up a company producing Belarus tractors in the south of Tajikistan. The new facility will produce 250 tractors per year, with capacity planned to increase to 1,500 by 2017. There are plans to export some of the tractors produced in Tajikistan to other countries in the region.
Cambodia
In March 2013, a Belarus tractor assembly plant opened near Phnom Penh in Cambodia. The joint project involves supplying more than 400 tractors of various models annually to the markets of Cambodia and surrounding countries.
References
External links
Belarusian brands
History of Minsk
Economy of Minsk
Tractors
Tractors of the Soviet Union | Belarus (tractor) | [
"Engineering"
] | 739 | [
"Engineering vehicles",
"Tractors"
] |
610,881 | https://en.wikipedia.org/wiki/Dry%20well | A dry well or drywell is an underground structure that disposes of unwanted water, most commonly surface runoff and stormwater, in some cases greywater or water used in a groundwater heat pump.
It is a gravity-fed, vertical underground system that can capture surface water from impervious surfaces, then store and gradually infiltrate the water into the groundwater aquifer.
Such a structure is also called a dead well, absorbing well, or negative well, and in the United Kingdom a soakaway or soakage pit, and in Australia a soakwell or soak pit.
Design
Dry wells are excavated pits that may be filled with aggregate or air and are often lined with a perforated casing. The casings consist of perforated chambers made out of plastic or concrete and may be lined with geotextile. They provide high stormwater infiltration capacity while also having a relatively small footprint.
A dry well receives water from entry pipes at its top. It can be used as part of a broader stormwater drainage network or on smaller scales, such as collecting stormwater from building roofs. It is used in conjunction with pretreatment measures such as bioswales or sediment chambers to prevent groundwater contamination.
The depth of the dry well allows the water to penetrate soil layers with poor infiltration such as clays into more permeable layers of the vadose zone such as sand.
Simple dry wells consist of a pit filled with gravel, riprap, rubble, or other debris. Such pits resist collapse but do not have much storage capacity because their interior volume is mostly filled by stone. A more advanced dry well defines a large interior storage volume by a concrete or plastic chamber with perforated sides and bottom. These dry wells are usually buried completely so that they do not take up any land area. The dry wells for a parking lot's storm drains are usually buried below the same parking lot.
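The storage trade-off described above — gravel fill resists collapse but sacrifices most of the interior volume, while a perforated chamber stores water in nearly the whole excavation — can be sketched numerically. The porosity figure below is an illustrative assumption (clean gravel is often quoted at roughly 30–40% voids), not a design value:

```python
import math

def dry_well_storage_m3(diameter_m: float, depth_m: float, porosity: float = 1.0) -> float:
    """Usable water storage of a cylindrical dry well.

    porosity = 1.0 models an open perforated chamber;
    ~0.35 (an illustrative value) models a clean-gravel fill.
    """
    excavated_volume = math.pi * (diameter_m / 2) ** 2 * depth_m
    return excavated_volume * porosity

# Same excavation, two construction styles:
chamber = dry_well_storage_m3(1.2, 3.0)                # open interior
gravel = dry_well_storage_m3(1.2, 3.0, porosity=0.35)  # mostly stone

print(f"open chamber: {chamber:.2f} m^3")  # ~3.39 m^3
print(f"gravel fill:  {gravel:.2f} m^3")   # ~1.19 m^3
```

On these assumed numbers, the gravel-filled pit stores only about a third of what the chambered design holds — which is why the more advanced designs mentioned above use a hollow perforated casing despite the extra cost.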
Related concepts
A sump in a basement can be built in dry well form, allowing the sump pump to cycle less frequently (handling only occasional peak demand). A French drain can resemble a horizontal dry well that is not covered. A larger open pit or artificial swale that receives stormwater and dissipates it into the ground is called an infiltration basin or recharge basin. In places where the amount of water to be dispersed is not as large, a rain garden can be used instead.
A covered pit that disposes of the water component of sewage by the same principle as a dry well is called a cesspool. A septic drain field operates on the same slow-drain/large-area principle as an infiltration basin.
See also
References
DRYWELLS, Environmental Services, City of Portland, OR
New Jersey Stormwater - Best Management Practices Manual, Chapter 9.3 Standard for Dry Wells, February 2004
Philadelphia Watershed, Dry Well, Philadelphia Water Department
Water Quality Division: Permits: Drywell Registration, Arizona Department of Environmental Water Quality
External links
Non-residential drywells are regulated in the U.S. to protect drinking water sources - U.S. Environmental Protection Agency
Photos of a reinforced concrete drywell installation
Photos of Australian Soakwell Installation
Environmental engineering
Subterranea (geography)
Stormwater management
Hydraulic structures
Sewerage
Drainage | Dry well | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 664 | [
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
610,897 | https://en.wikipedia.org/wiki/French%20drain | A French drain (also known by other names including trench drain, blind drain, rubble drain, and rock drain) is a trench filled with gravel or rock, or both, with or without a perforated pipe that redirects surface water and groundwater away from an area. The perforated pipe is called a weeping tile (also called a drain tile or perimeter tile). When the pipe is draining, it "weeps", or exudes liquids. It was named during a time period when drainpipes were made from terracotta tiles.
French drains are primarily used to prevent ground and surface water from penetrating or damaging building foundations and as an alternative to open ditches or storm sewers for streets and highways. Alternatively, French drains may be used to distribute water, such as a septic drain field at the outlet of a typical septic tank sewage treatment system. French drains are also used behind retaining walls to relieve ground water pressure.
History
The earliest forms of French drains were simple ditches that were pitched from a high area to a lower one and filled with gravel. These may have been invented in France, but Henry Flagg French (1813–1885) of Concord, Massachusetts, a lawyer and Assistant U.S. Treasury Secretary, described and popularized them in Farm Drainage (1859). French's own drains were made of sections of ordinary roofing tile that were laid with a gap in between the sections to admit water. Later, specialized drain tiles were designed with perforations. To prevent clogging, the size of the gravel varied from coarse in the center to fine on the outside and was selected contingent on the gradation of the surrounding soil. The sizes of particles were critical to prevent the surrounding soil from washing into the pores, i. e., voids between the particles of gravel and thereby clogging the drain. The later development of geotextiles greatly simplified this technique.
Subsurface drainage systems have been used for centuries. They have many forms that are similar in design and function to the traditional French drain.
Structure
Ditches are dug manually or by a trencher. An inclination of 1 in 100 to 1 in 200 is typical. Lining the bottom of the ditch with clay or plastic pipe increases the volume of water that can flow through the drain. Modern French drain systems are made of perforated pipe, for example weeping tile surrounded by sand or gravel, and geotextile or landscaping textile. Landscaping textiles prevent migration of the drainage material and prevent soil and roots from entering and clogging the pipe. The perforated pipe provides a minor subterranean volume of storage for water, yet the prime purpose is drainage of the area along the full length of the pipe via its perforations and to discharge any surplus water at its terminus. The direction of percolation depends on the relative conditions within and without the pipe.
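The typical gradients quoted above translate directly into a required fall over the trench's run: a 1-in-100 slope drops one unit of height per 100 units of length. A minimal sketch, with a hypothetical trench length:

```python
def required_drop_m(run_m: float, gradient: float) -> float:
    """Vertical fall needed over a run of `run_m` metres at a slope of 1 in `gradient`."""
    return run_m / gradient

# A hypothetical 20 m trench at the gradients quoted above:
print(required_drop_m(20, 100))  # 1 in 100 -> 0.2 m of fall
print(required_drop_m(20, 200))  # 1 in 200 -> 0.1 m of fall
```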
Variants
Variations of French drains include:
Curtain drain This form comprises a perforated pipe surrounded by gravel. It is similar to the traditional French drain, the gravel or aggregate material of which extends to the surface of the ground and is uncovered to permit collection of water, except that a curtain drain does not extend to the surface and instead is covered by soil, in which turf grass or other vegetation may be planted, so that the drain is concealed.
Filter drain This form drains groundwater.
Collector drain This form combines drainage of groundwater and interception of surface water or run off water, and may connect into the underground pipes so as to rapidly divert surface water; it preferably has a cleanable filter to avoid migration of surface debris to the subterranean area that would clog the pipes.
Interceptor drain
Dispersal drain This form distributes waste water that a septic tank emits.
Fin drain This form comprises a subterranean perforated pipe from which extends perpendicularly upward along its length a thin vertical section, denominated the "fin", of aggregate material for drainage to the pipe. This form is less expensive to build than a traditional French drain.
A French drain can end, i.e., open at a downhill slope, dry well, or rain garden where plants absorb and hold the drained water. This is useful if city water systems or other wastewater areas are unavailable.
Depending on the expected level and volume of rainwater or runoff, French drains can be widened or also fitted on two or three underground drainpipes. Multiple pipes also provide for redundancy, in case one pipe becomes overfilled or clogged by a rupture or defect in the piping. A pipe might become overfilled if it is on a side of the drain which receives a much larger volume of water, such as one pipe being closer to an uphill slope, or closer to a roofline that drips near the French drain. When a pipe becomes overfilled, water can seep sideways into a parallel pipe, as a form of load-balancing, so that neither pipe becomes slowed by air bubbles, as might happen in a full-pipe with no upper air space.
Filters, made from permeable materials (typically non-woven fabric, sometimes with sand and gravel), are placed around the drainage pipe or envelope to restrict the migration of fines from the surrounding soils. Envelopes are the gravel, stone, or rock surrounding the pipe: permeable materials placed around pipe or drainage products to improve flow conditions in the area immediately around the drain and to improve bedding and structural backfill conditions.
Installation
French drains are often installed around a home foundation in two ways:
Buried around the external side of the foundation wall
Installed underneath the basement floor on the inside perimeter of the basement
In most homes, an external French drain or drain tile is installed around the foundation walls before the soil is backfilled. It is laid on the bottom of the excavated area, and a layer of stone is laid on top. A filter fabric is often laid on top of the stone to keep fine sediments and particles from entering. Once the drain is installed, the area is backfilled, and the system is left alone until it clogs.
Other uses
French drains can be used in farmers' fields for the tile drainage of waterlogged fields. Such fields are called "tiled". Weeping tiles can be used anywhere that soil needs to be drained.
Weeping tiles are used for the opposite reason in septic drain fields for septic tanks. Clarified sewage from the septic tank is fed into weeping tiles buried shallowly in the drain field. The weeping tile spreads the liquid throughout the drain field.
Legislation
In the US, municipalities may require permits for building drainage systems as federal law requires water sent to storm drains to be free of specific contaminants and sediment.
In the UK, local authorities may have specific requirements for the outfall of a French drain into a ditch or watercourse.
See also
References
External links
Non-residential French drains are regulated in the U.S. – US EPA
How to Install French Drains ()
What are French drains? Why are they called French drains?
Drainage
Environmental engineering
Foundations (buildings and structures)
Hydraulic structures
Sewerage
Stormwater management
Water streams | French drain | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,446 | [
"Structural engineering",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Foundations (buildings and structures)",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
610,989 | https://en.wikipedia.org/wiki/Diorama | A diorama is a replica of a scene, typically a three-dimensional model either full-sized or miniature. Sometimes it is enclosed in a glass showcase for a museum. Dioramas are often built by hobbyists as part of related hobbies such as military vehicle modeling, miniature figure modeling, or aircraft modeling.
In the United States, from around 1950 onward, natural history dioramas in museums became less fashionable, and many were removed, dismantled, or destroyed.
Etymology
Artists Louis Daguerre and Charles Marie Bouton coined the name "diorama" for a theatrical system that used variable lighting to give a translucent painting the illusion of depth and movement. It derives from Greek δια- (through) + ὅραμα (visible image) = "see-through image." The first use in reference to museum displays is recorded in 1902, although such displays existed before.
Modern
The current, popular understanding of the term "diorama" denotes a partially three-dimensional, full-size replica or scale model of a landscape typically showing historical events, nature scenes, or cityscapes, for purposes of education or entertainment.
One of the first uses of dioramas in a museum was in Stockholm, Sweden, where the Biological Museum opened in 1893 with several dioramas over three floors. Dioramas were also implemented by the Grigore Antipa National Museum of Natural History in Bucharest, Romania, which became a source of inspiration for many important museums in the world, such as the American Museum of Natural History in New York and the Great Oceanographic Museum in Berlin.
Miniature
Miniature dioramas are typically much smaller, and use scale models and landscaping to create historical or fictional scenes. Such a scale model-based diorama is used, for example, in Chicago's Museum of Science and Industry to display railroading. This diorama employs a common model railroading scale of 1:87 (HO scale). Hobbyist dioramas often use scales such as 1:35 or 1:48.
An early, and exceptionally large, example was created between 1830 and 1838 by a British Army officer, William Siborne, and represents the Battle of Waterloo at about 7.45 pm on 18 June 1815. The diorama used around 70,000 model soldiers in its construction. It is now part of the collection of the National Army Museum in London.
Sheperd Paine, a prominent hobbyist, popularized the modern miniature diorama beginning in the 1970s.
Full-size
Modern museum dioramas may be seen in most major natural-history museums. Typically, these displays simulate a tilted plane effect to represent what would otherwise be a level surface, incorporating a painted background of distant objects. The displays often use false perspective, carefully modifying the scale of objects placed on the plane to reinforce an illusion through depth perception, in which objects of identical real-world size placed farther from the observer appear smaller than those closer. Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic-appearing view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing, since in this case there is no distraction by the binocular perception of depth.
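The false-perspective technique described above can be stated numerically: under a simple pinhole approximation, an object keeps the same angular size if it is scaled by the ratio of its actual placement distance to the distance it is meant to simulate. A hedged sketch (the function name and figures are illustrative, not taken from any museum's practice):

```python
def false_perspective_scale(simulated_distance_m: float, placement_distance_m: float) -> float:
    """Scale factor that preserves an object's angular size under a pinhole model."""
    return placement_distance_m / simulated_distance_m

# A figure meant to read as 30 m away, but actually placed 3 m from the viewer,
# should be built at one-tenth size; a 1.8 m person becomes an ~18 cm model.
print(false_perspective_scale(30, 3))  # -> 0.1
```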
Uses
Miniature dioramas may be used to represent scenes from historic events. A typical example of this type is the dioramas to be seen at Norway's Resistance Museum in Oslo, Norway.
Landscapes built around model railways can also be considered dioramas, even though they often have to compromise scale accuracy for better operating characteristics.
Hobbyists also build dioramas of historical or quasi-historical events using a variety of materials, including plastic models of military vehicles, ships or other equipment, along with scale figures and landscaping.
In the 19th and beginning 20th century, building dioramas of sailing ships had been a popular handcraft of mariners. Building a diorama instead of a normal model had the advantage that in the diorama, the model was protected inside the framework and could easily be stowed below the bunk or behind the sea chest. Nowadays, such antique sailing ship dioramas are valuable collectors' items.
One of the largest dioramas ever created was a model of the entire State of California built for the Panama–Pacific International Exposition of 1915 and that for a long time was installed in San Francisco's Ferry Building.
Dioramas are widely used in the American educational system, mostly in elementary and middle schools. They are often made to represent historical events, ecological biomes, cultural scenes, or to visually depict literature. They are usually made from a shoebox and contain a trompe-l'œil in the background contrasted with two or three-dimensional models in the foreground. In California elementary schools, a popular assignment has fourth graders making a Spanish mission diorama to learn about the California Spanish missions.
Burmese-Chinese brothers Aw Boon Haw and Aw Boon Par, the developers of Tiger Balm, opened Haw Par Villa in 1937 in Singapore, where statues and dioramas were commissioned to teach traditional Chinese values. Today, the site contains over 150 giant dioramas depicting scenes from Chinese Literature, folklore, legends, history, philosophy and statuary of key Chinese religions, Taoism, Buddhism and Confucianism. The best-known attraction in Haw Par Villa is the Ten Courts of Hell, which features gruesome depictions of Hell in Chinese mythology and in Buddhism. Other major attractions include dioramas of scenes from Journey to the West, Fengshen Bang, The Twenty-four Filial Exemplars and the 12 animals in the Chinese zodiac. The park was a major local attraction during the 1970s and 1980s; it is estimated that the park then welcomed at least 1 million annual visitors, and is considered as part of Singapore's cultural heritage.
Historic
Daguerre and Bouton
The Diorama was a popular entertainment that originated in Paris in 1822. An alternative to the also popular "Panorama" (panoramic painting), the Diorama was a theatrical experience viewed by an audience in a highly specialized theatre. As many as 350 patrons would file in to view a landscape painting that would change its appearance both subtly and dramatically. Most would stand, though limited seating was provided. The show lasted 10 to 15 minutes, after which time the entire audience (on a massive turntable) would rotate to view a second painting. Later models of the Diorama theater even held a third painting.
The proscenium was 7.3 meters (about 24 feet) wide by 6.4 meters (about 21 feet) high. Each scene was hand-painted on linen, which was made transparent in selected areas. A series of these multi-layered, linen panels were arranged in a deep, truncated tunnel, then illuminated by sunlight re-directed via skylights, screens, shutters, and colored blinds. Depending on the direction and intensity of the skillfully manipulated light, the scene would appear to change. The effect was so subtle and finely rendered that both critics and the public were astounded, believing they were looking at a natural scene.
The inventors and proprietors of the Diorama were Charles-Marie Bouton (1781–1853), a Troubadour painter who also worked at the Panorama under Pierre Prévost, and Louis Jacques Mandé Daguerre (1787–1851), formerly a decorator, manufacturer of mirrors, painter of Panoramas, and designer and painter of theatrical stage illusions. Daguerre would later co-invent the daguerreotype, the first widely used method of photography.
A second diorama in Regent's Park in London was opened by an association of British men (having bought Daguerre's tableaux) in 1823, a year after the debut of Daguerre's Paris original. The building was designed by Augustus Charles Pugin. Bouton operated the Regent's Park diorama from 1830 to 1840, when it was taken over by his protégé, the painter Charles-Caïus Renoux.
The Regent's Park diorama was a popular sensation, and spawned immediate imitations. British artists like Clarkson Stanfield and David Roberts produced ever-more elaborate (moving) dioramas through the 1830s; sound effects and even living performers were added. Some "typical diorama effects included moonlit nights, winter snow turning into a summer meadow, rainbows after a storm, illuminated fountains," waterfalls, thunder and lightning, and ringing bells. A diorama painted by Daguerre is currently housed in the church of the French town Bry-sur-Marne, where he lived and died.
Daguerre diorama exhibitions (R.D. Wood, 1993)
Exhibition venues : Paris (Pa.1822-28) : London (Lo.1823-32) : Liverpool (Li.1827-32) : Manchester (Ma.1825-27) : Dublin (Du.1826-28) : Edinburgh (Ed.1828-36)
The Valley of Sarnen :: (Pa.1822-23) : (Lo.1823-24) : (Li.1827-28) : (Ma.1825) : (Du.1826-27) : (Ed. 1828-29 & 1831)
The Harbour of Brest :: (Pa.1823) : (Lo.1824-25 & 1837) : (Li.1825-26) : (Ma.1826-27) : (Ed. 1834–35)
The Holyrood Chapel :: (Pa.1823-24) : (Lo.1825) : (Li.1827-28) : (Ma.1827) : (Du.1828) : (Ed.1829-30)
The Roslin Chapel :: (Pa.1824-25) : (Lo.1826-27) : (Li.1828-29) : (Du.1827-28) : (Ed.1835)
The Ruins in a Fog :: (Pa.1825-26) : (Lo.1827-28) : (Ed.1832-33)
The Village of Unterseen :: (Pa.1826-27) : (Lo.1828-29) : (Li.1832) : (Ed.1833-34 & 1838)
The Village of Thiers :: (Pa.1827-28) : (Lo.1829-30) : (Ed. 1838–39)
The Mont St. Godard :: (Pa.1828-29) : (Lo.1830-32) : (Ed.1835-36)
Gottstein
Until 1968, Britain boasted a large collection of dioramas. These collections were originally housed in the Royal United Services Institute Museum, (formerly the Banqueting House), in Whitehall. When the museum closed, the various exhibits and their 15 known dioramas were distributed to smaller museums throughout England and elsewhere, some ending up in Canada. These dioramas were the brainchild of the wealthy furrier Otto Gottstein (1892–1951) of Leipzig, a Jewish immigrant from Hitler's Germany, who was an avid collector and designer of flat model figures called flats. In 1930, Gottstein's influence is first seen at the Leipzig International Exhibition, along with the dioramas of Hahnemann of Kiel, Biebel of Berlin and Muller of Erfurt, all displaying their own figures, and those commissioned from such as Ludwig Frank in large diorama form.
In 1933, Gottstein left Germany, and in 1935 founded the British Model Soldier Society. Gottstein persuaded designer and painter friends in both Germany and France to help in the construction of dioramas depicting notable events in English history. But due to the war, many of the figures arrived in England incomplete. The task of turning Gottstein's ideas into reality fell to his English friends and those friends who had managed to escape from the Continent. Dennis (Denny) C. Stokes, a talented painter and diorama maker in his own right, was responsible for the painting of the backgrounds of all the dioramas, creating a unity seen throughout the whole series. Denny Stokes was given the overall supervision of the fifteen dioramas.
The Landing of the Romans under Julius Caesar in 55 B.C.
The Battle of Hastings
The Storming of Acre (figures by Muller)
The Battle of Crecy (figures by Muller)
The Field of the Cloth of Gold
Queen Elizabeth reviewing her troops at Tilbury
The Battle of Marston Moor
The Battle of Blenheim (painted by Douchkine)
The Battle of Plessey
The Battle of Quebec (engraved by Krunert of Vienna)
The Old Guard at Waterloo
The Charge of the Light Brigade
The Battle of Ulundi (figures by Ochel and Petrocochino/Paul Armont)
The Battle of Fleurs
The D-Day landings
Krunert, Schirmer, Frank, Frauendorf, Maier, Franz Rieche, and Oesterrich were also involved in the manufacture and design of figures for the various dioramas. Krunert (a Viennese), like Gottstein an exile in London, was given the job of engraving for The Battle of Quebec. The Death of Wolfe was found to be inaccurate and had to be redesigned. The names of the vast majority of painters employed by Gottstein are mostly unknown, most lived and worked on the continent, among them Gustave Kenmow, Leopold Rieche, L. Dunekate, M. Alexandre, A. Ochel, Honey Ray, and, perhaps Gottstein's top painter, Vladimir Douchkine (a Russian émigré who lived in Paris). Douchkine was responsible for painting two figures of the Duke of Marlborough on horseback for The Blenheim Diorama, one of which was used, the other, Gottstein being the true collector, was never released.
Denny Stokes painted all the backgrounds of all the dioramas, Herbert Norris, the Historical Costume Designer, whom J. F. Lovel-Barnes introduced to Gottstein, was responsible for the costume design of the Ancient Britons, the Normans and Saxons, some of the figures of The Field of the Cloth of Gold and the Elizabethan figures for Queen Elizabeth at Tilbury. J.F. Lovel-Barnes was responsible for The Battle of Blenheim, selecting the figures, and arrangement of the scene. Due to World War II, when flat figures became unavailable, Gottstein completed his ideas by using Greenwood and Ball's 20 mm figures. In time, a fifteenth diorama was added, using these 20 mm figures, this diorama representing the D-Day landings. When all the dioramas were completed, they were displayed along one wall in the Royal United Services Institute Museum. When the museum was closed the fifteen dioramas were distributed to various museums and institutions. The greatest number are to be found at the Glenbow Museum, (130-9th Avenue, S. E. Calgary, Alberta, Canada): RE: The Landing of the Romans under Julius Caesar in 55 BC, Battle Of Crecy, The Battle of Blenheim, The Old Guard at Waterloo and The Charge of the Light Brigade at Balaclava.
The state of these dioramas is a matter of debate; John Garratt (The World of Model Soldiers) claimed in 1968 that the dioramas "appear to have been partially broken up and individual figures have been sold to collectors". According to the Glenbow Institute (Barry Agnew, curator), "the figures are still in reasonable condition, but the plaster groundwork has suffered considerable deterioration". There are no photographs available of the dioramas. The Battle of Hastings diorama was to be found in the Old Town Museum, Hastings, and is still in reasonable condition. It shows the Norman cavalry charging up Senlac Hill toward the Saxon lines.
The Storming of Acre is in the Museum of Artillery at the Rotunda, Woolwich. John Garratt, in Encyclopedia of Model Soldiers, states that The Field of the Cloth of Gold was in the possession of the Royal Military School of Music, Kneller Hall; according to the curator, the diorama had not been in his possession since 1980, nor is it listed in their Accession Book, so the whereabouts of this diorama is unknown.
The Battle of Ulundi is housed in the Staffordshire Regiment Museum at Whittington near Lichfield in Staffordshire, UK
Wong
San Francisco, California artist Frank Wong (born 22 September 1932) created dioramas that depict the San Francisco Chinatown of his youth during the 1930s and 1940s. In 2004, Wong donated seven miniatures of scenes of Chinatown, titled "The Chinatown Miniatures Collection", to the Chinese Historical Society of America (CHSA). The dioramas are on permanent display in CHSA's Main Gallery:
"The Moon Festival"
"Shoeshine Stand"
"Chinese New Year"
"Chinese Laundry"
"Christmas Scene"
"Single Room"
"Herb Store"
Documentary
San Francisco filmmaker James Chan is producing and directing a documentary about Wong and the "changing landscape of Chinatown" in San Francisco. The documentary is tentatively titled, "Frank Wong's Chinatown".
Other
Painters of the Romantic era like John Martin and Francis Danby were influenced to create large and highly dramatic pictures by the sensational dioramas and panoramas of their day. In one case, the connection between life and diorama art became intensely circular. On 1 February 1829, John Martin's brother Jonathan, known as "Mad Martin," set fire to the roof of York Minster. Clarkson Stanfield created a diorama re-enactment of the event, which premiered on 20 April of the same year; it employed a "safe fire" via chemical reaction as a special effect. On 27 May, the "safe" fire proved to be less safe than planned: it set a real fire in the painted cloths of the imitation fire, which burned down the theater and all of its dioramas.
Nonetheless, dioramas remained popular in England, Scotland, and Ireland through most of the 19th century, lasting until 1880.
A small scale version of the diorama called the Polyrama Panoptique could display images in the home and was marketed from the 1820s.
Natural history
Natural history dioramas seek to imitate nature and, since their conception in the late 19th century, aim to "nurture a reverence for nature [with its] beauty and grandeur". They have also been described as a means to visually preserve nature as different environments change due to human involvement. They were extremely popular during the first half of the 20th century, both in the US and UK, later on giving way to television, film, and new perspectives on science.
Like historical dioramas, natural history dioramas are a mix of two- and three-dimensional elements. What sets natural history dioramas apart from other categories is the use of taxidermy in addition to the foreground replicas and painted background. The use of taxidermy means that natural history dioramas derive not only from Daguerre's work, but also from that of taxidermists, who were used to preparing specimens for either science or spectacle. It was only with the dioramas' precursors (and, later on, dioramas) that both these objectives merged. Popular diorama precursors were produced by Charles Willson Peale, an artist with an interest in taxidermy, during the early 19th century. To present his specimens, Peale "painted skies and landscapes on the back of cases displaying his taxidermy specimens". By the late 19th century, the British Museum held an exhibition featuring taxidermy birds set on models of plants.
The first habitat diorama created for a museum was constructed by taxidermist Carl Akeley for the Milwaukee Public Museum in 1889, where it is still held. Akeley set taxidermy muskrats in a three-dimensional re-creation of their wetland habitat with a realistic painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and otherwise immersive environments in the museum.
A predecessor of Akeley, naturalist and taxidermist Martha Maxwell created a famous habitat diorama for the first World's Fair in 1876. The complex diorama featured taxidermied animals in realistic action poses, running water, and live prairie dogs. It is speculated that this display was the first of its kind outside of a museum. Maxwell's pioneering diorama work is said to have influenced major figures in taxidermy history who entered the field later, such as Akeley and William Temple Hornaday.
Concern for accuracy soon followed: groups of scientists, taxidermists, and artists would go on expeditions to ensure accurate backgrounds and collect specimens, though some were donated by game hunters. Natural history dioramas reached the peak of their grandeur with the opening of the Akeley Hall of African Mammals in 1936, which featured large animals, such as elephants, surrounded by even larger scenery. Nowadays, various institutions lay claim to notable dioramas: the Milwaukee Public Museum still displays the world's first diorama, created by Akeley; the American Museum of Natural History, in New York, has what might be the world's largest diorama, a life-size replica of a blue whale; the Biological Museum in Stockholm, Sweden, is known for its three dioramas, all created in 1893 and all in original condition; and the Powell-Cotton Museum, in Kent, UK, is known for having the world's oldest unchanged room-sized diorama, built in 1896.
Construction
Natural history dioramas typically consist of three parts:
The painted background
The foreground
Taxidermy specimens
Preparations for the background begin in the field, where an artist takes photographs and sketches reference pieces. Once back at the museum, the artist has to depict the scenery with as much realism as possible. The challenge lies in the fact that the wall used is curved: this allows the background to surround the display without seams joining different panels. At times the wall also curves upward to meet the light above and form a sky. By having a curved wall, whatever the artist paints will be distorted by perspective; it is the artist's job to paint in such a way that minimises this distortion.
The foreground is created to mimic the ground, plants and other accessories to scenery. The ground, hills, rocks, and large trees are created with wood, wire mesh, and plaster. Smaller trees are either used in their entirety or replicated using casts. Grasses and shrubs can be preserved in solution or dried to then be added to the diorama. Ground debris, such as leaf litter, is collected on site and soaked in wallpaper paste for preservation and presentation in the diorama. Water is simulated using glass or plexiglass with ripples carved on the surface. For a diorama to be successful, the foreground and background must merge, so both artists have to work together.
Taxidermy specimens are usually the centrepiece of dioramas. Since they must entertain, as well as educate, specimens are set in lifelike poses, so as to convey a narrative of an animal's life. Smaller animals are usually made with rubber moulds and painted. Larger animals are prepared by first making a clay sculpture of the animal. This sculpture is made over the actual, posed skeleton of the animal, with reference to moulds and measurements taken on the field. A papier-mâché mannequin is prepared from the clay sculpture, and the animal's tanned skin is sewn onto the mannequin. Glass eyes substitute the real ones.
If an animal is large enough, the scaffolding that holds the specimen needs to be incorporated into the foreground design and construction.
Toy examples
Lego
Lego dioramas are dioramas built from Lego pieces. They range from small vignettes to large, table-sized displays and are sometimes constructed as a collaboration between two or more builders. Some hobbyists specialize in building Lego dioramas.
Playmobil
Playmobil dioramas are dioramas built from Playmobil pieces.
See also
Armor Modeling and Preservation Society
Cosmorama
Cyclorama
Model airport
Moving panorama
Myriorama
Nativity scene
Model figure
Tableau vivant
Toy
Toy soldier
Notes
References
Dioramas Muzeul National de Istorie Naturala Grigore Antipa
External links
R. D. Wood's Essays on the early history of photography and the Diorama
The world's largest collection of antique sailing ship dioramas
World War II Dioramas in 1:35 scale
Audiovisual introductions in 1822
Scale modeling
Figurines
Visual arts genres
Landscape art by medium
1820s neologisms | Diorama | [
"Physics"
] | 5,196 | [
"Scale modeling"
] |
611,043 | https://en.wikipedia.org/wiki/Chalk%20River%20Laboratories | Chalk River Laboratories (; also known as CRL, Chalk River Labs and formerly Chalk River Nuclear Laboratories, CRNL) is a Canadian nuclear research facility in Deep River, about north-west of Ottawa.
CRL is a site of significant research and development to support and advance nuclear technology, particularly CANDU reactor technology. CRL has expertise in physics, metallurgy, chemistry, biology, and engineering and hosts unique research facilities. For example, Bertram Brockhouse, a professor at McMaster University, received the 1994 Nobel Prize in Physics for his pioneering work in neutron spectroscopy while at CRL from 1950 to 1962. Sir John Cockcroft was an early director of CRL and also a Nobel laureate. Until the shutdown of its nuclear reactor in 2018, CRL produced a large share of the world's supply of medical radioisotopes. It is owned by the Canadian Nuclear Laboratories subsidiary of Atomic Energy of Canada Limited and operated under contract by the Canadian National Energy Alliance, a private-sector consortium led by AtkinsRéalis.
History
In 1952, Atomic Energy of Canada Limited (AECL) was created by the government to promote the peaceful use of nuclear energy. AECL also took over the operation of Chalk River from the NRC. Since the 1950s, AECL has operated various nuclear research reactors to produce nuclear material for medical and scientific applications. At one point, the Chalk River Laboratories produced about one-third of the world's medical isotopes and half of the North American supply. Despite the declaration of peaceful use, from 1955 to 1985, Chalk River facilities supplied about of plutonium, in the form of spent reactor fuel, to the U.S. Department of Energy to be used in the production of nuclear weapons. (The bomb dropped on Nagasaki, Japan, used about of plutonium.)
Canada's first nuclear power plant, a partnership between AECL and Hydro-Electric Power Commission of Ontario, went online in 1962 near the site of Chalk River Laboratories. This reactor, Nuclear Power Demonstration (NPD), was a demonstration of the CANDU reactor design, one of the world's safest and most successful nuclear reactors.
The Deep River neutron monitor once operated at Chalk River.
1952 NRX incident
Chalk River was also the site of two nuclear accidents in the 1950s. The first incident occurred on December 12, 1952, when there was a power excursion and partial loss of coolant in the NRX reactor, which resulted in significant damage to the core. The control rods could not be lowered into the core because of mechanical problems and human errors. Three rods did not reach their destination and were taken out again by accident. The fuel rods were overheated, resulting in a meltdown. Hydrogen explosions seriously damaged the reactor and the reactor building. The seal of the reactor vessel was blown up four feet, and of radioactive water were found in the cellar of the building. This water was dumped in ditches around from the border of the Ottawa River. During this accident, some of radioactive material was released. Future U.S. president Jimmy Carter, then a U.S. Navy officer in Schenectady, New York, was part of a team of 26 men, including 13 U.S. Navy volunteers, involved in the hazardous cleanup. Fourteen months later, the reactor was in use again.
1958 NRU incident
The second accident, in 1958, involved a fuel rupture and fire in the National Research Universal reactor (NRU) reactor building. Some fuel rods were overheated. With a robotic crane, one of the rods with metallic uranium was pulled out of the reactor vessel. When the arm of the crane moved away from the vessel, the uranium caught fire, and the rod broke. The rod's largest part fell into the containment vessel, still burning. The whole building was contaminated. The valves of the ventilation system were opened, and a large area outside the building was contaminated. The fire was extinguished by scientists and maintenance men in protective clothing running along the hole in the containment vessel with buckets of wet sand, throwing the sand down when they passed the smoking entrance.
Both accidents required a major cleanup effort involving many civilian and military personnel. Follow-up health monitoring of these workers has not revealed any adverse impacts from the two accidents. However, the Canadian Coalition for Nuclear Responsibility, an anti-nuclear watchdog group, notes that some cleanup workers who were part of the military contingent assigned to the NRU reactor building unsuccessfully applied for a military disability pension due to health damages.
2007 shutdown
On November 18, 2007, the NRU, which made medical radioisotopes, was shut down for routine maintenance. This shutdown was extended when AECL, in consultation with the Canadian Nuclear Safety Commission (CNSC), decided to connect seismically-qualified emergency power supplies (EPS) to two of the reactor's cooling pumps (in addition to the AC and DC backup power systems already in place), which had been required as part of its August 2006 operating licence issued by the CNSC. This resulted in a worldwide shortage of radioisotopes for medical treatments because Chalk River made the majority of the world's supply of medical radioisotopes, including two-thirds of the world's technetium-99m.
On December 11, 2007, the House of Commons of Canada, acting on independent expert advice, passed emergency legislation authorizing the restarting of the NRU reactor and its operation for 120 days (counter to the decision of the CNSC), which was passed by the Senate and received Royal Assent on December 12. Prime Minister Stephen Harper criticized the CNSC for this shutdown, which "jeopardized the health and safety of tens of thousands of Canadians", insisting that there was no risk, contrary to the testimony of then-CNSC President & CEO Linda Keen. She would later be fired for ignoring a decision by Parliament to restart the reactor, reflecting its policy that the safety of citizens requiring essential nuclear medicine should be taken into account in assessing the overall safety concerns of the reactor's operation.
The NRU reactor was restarted on December 16, 2007.
2008 radioactive leakage
On December 5, 2008, heavy water containing tritium leaked from the NRU.
In its formal report to the CNSC, filed on December 9, 2008 (when the volume of leakage was determined to meet the requirement for such a report) AECL mentioned that of heavy water were released from the reactor, about 10% of which evaporated and the rest contained, but affirmed that the spill was not serious and did not present a threat to public health. The amount that evaporated to the atmosphere is considered to be minor, accounting for less than a thousandth of the regulatory limit.
In an unrelated incident, the same reactor had been leaking of light water per day from a crack in a weld of the reactor's reflector system. This water was systematically collected, purified in an on-site Waste Treatment Centre, and eventually released to the Ottawa River in accordance with CNSC, Health Canada, and Ministry of the Environment regulations. Although the leakage was not a concern to the CNSC from a health, safety or environmental perspective, AECL made plans for a repair to reduce the current leakage rate for operational reasons.
2009 NRU reactor shutdown
In mid-May 2009, the heavy water leak at the base of the NRU reactor vessel, first detected in 2008 (see above), returned at a greater rate and prompted another temporary shutdown until August 2010. The lengthy shutdown was necessary to first completely defuel the entire reactor, then ascertain the full extent of the corrosion to the vessel, and finally to effect the repairs, all with remote and restricted access from a minimum distance of due to the residual radioactivity in the reactor vessel. The 2009 shutdown occurred at a time when only one of the other four worldwide regular medical isotope sourcing reactors was producing, resulting in a worldwide shortage.
NRU shutdown in March 2018
The NRU reactor licence expired in 2016. However, the licence was extended to March 31, 2018. The reactor was shut down for the last time at 7 p.m. on March 31, 2018, and has entered a "state of storage" before decommissioning operations which will continue for many years within the scope of future operating or decommissioning licences issued by the CNSC.
Modernization and decommissioning
The site remains in active use as of 2024. In 2016, 1.2 billion CAD was allotted over ten years to decommission 120 old buildings and build new ones. The new buildings were completed starting in 2020, as the Canadian Nuclear Laboratories Research Facilities.
In May 2023, it was announced that the world's first micro-modular reactor, from Global First Power (GFP), is to be built at Chalk River Laboratories and will be used to power the CNL campus as a demonstration unit. It is then expected that multiple microreactors, each the size of a shipping container, will be built at CNL and transported to remote northern communities where they will replace the existing diesel generator infrastructure, saving some 200 million litres of fuel.
Major facilities
ZEEP – Zero Energy Experimental Pile Reactor (1945–1973).
NRX – NRX Reactor (1947–1992).
NRU – National Research Universal 135 MW (thermal) Reactor (1957–2018).
CNBC – Canadian Neutron Beam Centre (ended operation along with NRU in 2018).
PTR – Pool Test 10 kW Reactor (1957–1990).
ZED-2 – Zero Energy Deuterium 200W Reactor (1960–present).
NPD – Nuclear Power Demonstration 20MW(e) reactor; located north of CRL in Rolphton, Ontario (1960–1987).
SLOWPOKE – Safe Low-Power Kritical Experiment 5 kW Reactor (1970–1976); moved to the University of Toronto in 1971.
TASCC – Tandem Accelerator Superconducting Cyclotron (1986–1996).
MAPLE-1 – Multipurpose Applied Physics Lattice Experiment Reactor (2000–2008; cancelled).
MAPLE-2 – Multipurpose Applied Physics Lattice Experiment Reactor (2003–2008; cancelled).
CRIPT – Cosmic Ray Inspection and Passive Tomography
See also
Atomic Energy of Canada Limited
CANDU reactor
Lew Kowarski
George Laurence
Manhattan Project
Nuclear power in Canada
Petten nuclear reactor, a nuclear reactor in the Netherlands that produces Europe's supply of isotopes for nuclear medicine
Science and technology in Canada
References
Further reading
Robert Bothwell, "Nucleus. The History of Atomic Energy of Canada Limited", University of Toronto Press, 1988.
http://www.cnl.ca/en/home/default.aspx
External links
NRC Archives Photographs - Physics- Atomic Energy Project collection
Radioisotopes produced at Chalk River
Nuclear Accidents at Chalk River: The Human Fallout
What are the details of the accident at Chalk River's NRX reactor in 1952? (Canadian Nuclear FAQ, Dr. Jeremy Whitlock)
What are the details of the accident at Chalk River's NRU reactor in 1958? (Canadian Nuclear FAQ, Dr. Jeremy Whitlock)
AM 530 kHz CKML (see article) Emergency Broadcast Information Only (CRTC Approval November 25, 1998)
Atomic Energy of Canada Limited
1944 establishments in Ontario
2007 in Canadian politics
Buildings and structures in Renfrew County
Nuclear accidents and incidents
Nuclear medicine organizations
Nuclear power stations in Ontario
Nuclear research institutes
Nuclear technology in Canada
Public–private partnership projects in Canada
Research institutes in Canada
Research institutes established in 1944
Energy accidents and incidents in Canada
Federal government buildings in Ontario | Chalk River Laboratories | [
"Chemistry",
"Engineering"
] | 2,376 | [
"Nuclear research institutes",
"Nuclear organizations",
"Nuclear accidents and incidents",
"Nuclear medicine organizations",
"Radioactivity"
] |
611,074 | https://en.wikipedia.org/wiki/Point%20mutation | A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. Point mutations have a variety of effects on the downstream protein product—consequences that are moderately predictable based upon the specifics of the mutation. These consequences can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations), with regard to protein production, composition, and function.
Causes
Point mutations usually take place during DNA replication. DNA replication occurs when one double-stranded DNA molecule creates two single strands of DNA, each of which is a template for the creation of the complementary strand. A single point mutation alters only one base, but this can change the downstream protein: changing one purine or pyrimidine may change the amino acid that the codon specifies.
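The template relationship described above can be sketched with a minimal base-pairing function (an illustration of Watson–Crick complementarity only, not of the replication machinery itself; the example sequence is arbitrary):

```python
# Watson-Crick pairing: A pairs with T, G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template: str) -> str:
    """Return the complementary strand of a DNA template, read 5'->3'."""
    return "".join(PAIRS[base] for base in reversed(template))

print(complementary_strand("ATGGTGCAC"))  # GTGCACCAT
```

A mistake by the polymerase at any one position in this pairing step is what produces a substitution-type point mutation.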
Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention.
There are multiple ways for point mutations to occur. First, ultraviolet (UV) light and higher-frequency light have ionizing capability, which in turn can affect DNA. Second, reactive oxygen molecules with free radicals, a byproduct of cellular metabolism, can be very harmful to DNA; these reactants can lead to both single-stranded and double-stranded DNA breaks. Third, bonds in DNA eventually degrade, creating another challenge to maintaining the integrity of DNA. Finally, replication errors can lead to substitution, insertion, or deletion mutations.
Categorization
Transition/transversion categorization
In 1959 Ernst Freese coined the terms "transitions" and "transversions" to categorize different types of point mutations. Transitions are the replacement of a purine base with another purine, or of a pyrimidine with another pyrimidine. Transversions are the replacement of a purine with a pyrimidine or vice versa. There is a systematic difference in mutation rates for transitions (alpha) and transversions (beta). Transition mutations are about ten times more common than transversions.
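The transition/transversion distinction can be expressed as a small helper function (a sketch; the two base classes are the standard purines A, G and pyrimidines C, T):

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def substitution_type(ref: str, alt: str) -> str:
    """Classify a single-base substitution as a transition or a transversion."""
    if ref == alt:
        raise ValueError("not a substitution")
    # A transition stays within one chemical class; a transversion crosses classes.
    same_class = {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES
    return "transition" if same_class else "transversion"

print(substitution_type("A", "G"))  # transition (purine -> purine)
print(substitution_type("C", "G"))  # transversion (pyrimidine -> purine)
```

Of the twelve possible substitutions, four are transitions and eight are transversions, which makes the roughly ten-to-one observed excess of transitions all the more striking.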
Functional categorization
Nonsense mutations include stop-gain and start-loss. Stop-gain is a mutation that results in a premature termination codon (a stop was gained), which signals the end of translation. This interruption causes the protein to be abnormally shortened. The number of amino acids lost mediates the impact on the protein's functionality and whether it will function at all. Stop-loss is a mutation in the original termination codon (a stop was lost), resulting in abnormal extension of a protein's carboxyl terminus. Start-gain creates an AUG start codon upstream of the original start site. If the new AUG is near the original start site, in-frame within the processed transcript and downstream of a ribosomal binding site, it can be used to initiate translation. The likely effect is additional amino acids added to the amino terminus of the original protein. Frame-shift mutations are also possible in start-gain mutations, but typically do not affect translation of the original protein. Start-loss is a point mutation in a transcript's AUG start codon, resulting in the reduction or elimination of protein production.
Missense mutations code for a different amino acid. A missense mutation changes a codon so that a different protein is created, a non-synonymous change. Conservative mutations result in an amino acid change. However, the properties of the amino acid remain the same (e.g., hydrophobic, hydrophilic, etc.). At times, a change to one amino acid in the protein is not detrimental to the organism as a whole. Most proteins can withstand one or two point mutations before their function changes. Non-conservative mutations result in an amino acid change that has different properties than the wild type. The protein may lose its function, which can result in a disease in the organism. For example, sickle-cell disease is caused by a single point mutation (a missense mutation) in the beta-hemoglobin gene that converts a GAG codon into GUG, which encodes the amino acid valine rather than glutamic acid. The protein may also exhibit a "gain of function" or become activated, such is the case with the mutation changing a valine to glutamic acid in the BRAF gene; this leads to an activation of the RAF protein which causes unlimited proliferative signalling in cancer cells. These are both examples of a non-conservative (missense) mutation.
Silent mutations code for the same amino acid (a "synonymous substitution"). A silent mutation does not affect the functioning of the protein. A single nucleotide can change, but the new codon specifies the same amino acid, so the protein is unchanged. This type of change is called a synonymous change, since the old and new codons code for the same amino acid. This is possible because 64 codons specify only 20 amino acids. Different codons can, however, lead to different protein expression levels.
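These functional categories can be distinguished programmatically by translating the affected codon before and after the change. The sketch below uses only a small excerpt of the standard genetic code, just enough for the examples; a real tool would carry the full 64-codon table:

```python
# Excerpt of the standard genetic code (RNA codons).
CODON_TABLE = {
    "GAG": "Glu", "GAA": "Glu", "GUG": "Val",
    "UAU": "Tyr", "UAC": "Tyr", "UAA": "Stop",
}

def classify(ref_codon: str, alt_codon: str) -> str:
    """Classify a codon-level point mutation as silent, missense, or nonsense."""
    before, after = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if after == "Stop":
        return "nonsense"   # premature termination codon gained
    if after == before:
        return "silent"     # synonymous substitution
    return "missense"       # a different amino acid is encoded

print(classify("GAG", "GUG"))  # missense: Glu -> Val (the sickle-cell change)
print(classify("UAU", "UAC"))  # silent: both codons specify Tyr
print(classify("UAU", "UAA"))  # nonsense: Tyr codon becomes a stop codon
```

Distinguishing conservative from non-conservative missense changes would additionally require comparing the physicochemical properties of the two amino acids.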
Single base pair insertions and deletions
Sometimes the term point mutation is used to describe insertions or deletions of a single base pair (which has more of an adverse effect on the synthesized protein due to the nucleotides' still being read in triplets, but in different frames: a mutation called a frameshift mutation).
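Why a single-base insertion is so disruptive can be illustrated by splitting a sequence into triplets before and after the insertion (a sketch with an arbitrary example sequence):

```python
def codons(seq: str) -> list:
    """Split a sequence into complete triplets (the reading frame)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "AUGGCUUGGAAU"
shifted = original[:4] + "C" + original[4:]  # insert one base after position 4

print(codons(original))  # ['AUG', 'GCU', 'UGG', 'AAU']
print(codons(shifted))   # ['AUG', 'GCC', 'UUG', 'GAA'] - every downstream codon changes
```

A substitution alters at most one codon, but the insertion shifts the frame so that every codon downstream of the mutation is read differently.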
General consequences
Point mutations that occur in non-coding sequences are most often without consequences, although there are exceptions. If the mutated base pair is in the promoter sequence of a gene, then the expression of the gene may change. Also, if the mutation occurs in the splicing site of an intron, then this may interfere with correct splicing of the transcribed pre-mRNA.
By altering just one amino acid, the entire peptide may change, thereby changing the entire protein. The new protein is called a protein variant. If the original protein functions in cellular reproduction then this single point mutation can change the entire process of cellular reproduction for this organism.
Point germline mutations can lead to beneficial as well as harmful traits or diseases. This leads to adaptations based on the environment where the organism lives. An advantageous mutation can create an advantage for that organism and lead to the trait's being passed down from generation to generation, improving and benefiting the entire population. The scientific theory of evolution is greatly dependent on point mutations in cells. The theory explains the diversity and history of living organisms on Earth. In relation to point mutations, it states that beneficial mutations allow the organism to thrive and reproduce, thereby passing its positively affected mutated genes on to the next generation. On the other hand, harmful mutations cause the organism to die or be less likely to reproduce in a phenomenon known as natural selection.
Mutations can have both short-term and long-term effects. A short-term effect is the halting of the cell cycle at numerous points: a codon coding for the amino acid glycine may, for instance, be changed to a stop codon, causing the proteins that should have been produced to be deformed and unable to complete their intended tasks. Because mutations can affect the DNA and thus the chromatin, they can prevent mitosis from occurring due to the lack of a complete chromosome. Problems can also arise during transcription and DNA replication. All of these prevent the cell from reproducing and thus lead to the death of the cell. A long-term effect can be a permanent change to a chromosome, which can lead to a mutation. Such mutations can be either beneficial or detrimental; cancer is an example of how they can be detrimental.
Other effects of point mutations, or single nucleotide polymorphisms in DNA, depend on the location of the mutation within the gene. For example, if the mutation occurs in the region of the gene responsible for coding, the amino acid sequence of the encoded protein may be altered, causing a change in the function, protein localization, stability of the protein or protein complex. Many methods have been proposed to predict the effects of missense mutations on proteins. Machine learning algorithms train their models to distinguish known disease-associated mutations from neutral ones, whereas other methods do not explicitly train their models; almost all methods, however, exploit evolutionary conservation, assuming that changes at conserved positions tend to be more deleterious. While the majority of methods provide a binary classification of the effects of mutations into damaging and benign, a new level of annotation is needed to explain why and how these mutations damage proteins.
Moreover, if the mutation occurs in the region of the gene where transcriptional machinery binds to the protein, the mutation can affect the binding of the transcription factors because the short nucleotide sequences recognized by the transcription factors will be altered. Mutations in this region can affect rate of efficiency of gene transcription, which in turn can alter levels of mRNA and, thus, protein levels in general.
Point mutations can have several effects on the behavior and reproduction of a protein depending on where the mutation occurs in the amino acid sequence of the protein. If the mutation occurs in the region of the gene that is responsible for coding for the protein, the amino acid may be altered. This slight change in the sequence of amino acids can cause a change in the function, activation of the protein meaning how it binds with a given enzyme, where the protein will be located within the cell, or the amount of free energy stored within the protein.
If the mutation occurs in the region of the gene where transcriptional machinery binds to the protein, the mutation can affect the way in which transcription factors bind to the protein. The mechanisms of transcription bind to a protein through recognition of short nucleotide sequences. A mutation in this region may alter these sequences and, thus, change the way the transcription factors bind to the protein. Mutations in this region can affect the efficiency of gene transcription, which controls both the levels of mRNA and overall protein levels.
Specific diseases caused by point mutations
Cancer
Point mutations in multiple tumor suppressor proteins cause cancer. For instance, point mutations in Adenomatous Polyposis Coli promote tumorigenesis. A novel assay, Fast parallel proteolysis (FASTpp), might help swift screening of specific stability defects in individual cancer patients.
Neurofibromatosis
Neurofibromatosis is caused by point mutations in the Neurofibromin 1 or Neurofibromin 2 gene.
Sickle-cell anemia
Sickle-cell anemia is caused by a point mutation in the β-globin chain of hemoglobin, causing the hydrophilic amino acid glutamic acid to be replaced with the hydrophobic amino acid valine at the sixth position.
The β-globin gene is found on the short arm of chromosome 11. The association of two wild-type α-globin subunits with two mutant β-globin subunits forms hemoglobin S (HbS). Under low-oxygen conditions (being at high altitude, for example), the absence of a polar amino acid at position six of the β-globin chain promotes the non-covalent polymerisation (aggregation) of hemoglobin, which distorts red blood cells into a sickle shape and decreases their elasticity.
Hemoglobin is a protein found in red blood cells, and is responsible for the transportation of oxygen through the body. There are two subunits that make up the hemoglobin protein: beta-globins and alpha-globins.
Beta-hemoglobin is created from the genetic information on the HBB, or "hemoglobin, beta" gene found on chromosome 11p15.5. A single point mutation in this polypeptide chain, which is 147 amino acids long, results in the disease known as Sickle Cell Anemia.
Sickle-cell anemia is an autosomal recessive disorder that affects 1 in 500 African Americans, and is one of the most common blood disorders in the United States. The single replacement of the sixth amino acid in the beta-globin, glutamic acid, with valine results in deformed red blood cells. These sickle-shaped cells cannot carry nearly as much oxygen as normal red blood cells and they get caught more easily in the capillaries, cutting off blood supply to vital organs. The single nucleotide change in the beta-globin means that even the smallest of exertions on the part of the carrier results in severe pain and even heart attack. Below is a chart depicting the first thirteen amino acids in the normal and abnormal sickle cell polypeptide chain.
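The codon-level change underlying the Glu→Val substitution can be sketched in a few lines of Python. This is an illustrative sketch: the wild-type codon GAG and the sickle codon GTG are the documented beta-globin codon-6 alleles, while the tiny codon table below covers only the codons needed for the example.

```python
# Minimal sketch of the sickle-cell point mutation: the A->T substitution
# changes codon 6 of beta-globin from GAG (glutamic acid) to GTG (valine).
# The codon table is deliberately tiny; it covers only the codons used here.
CODON_TABLE = {"GAG": "Glu", "GAA": "Glu", "GTG": "Val", "GTA": "Val"}

def translate_codon(codon: str) -> str:
    return CODON_TABLE[codon.upper()]

normal_codon6 = "GAG"   # wild-type beta-globin, codon 6
sickle_codon6 = "GTG"   # single A->T point mutation (missense)

# The two codons differ at exactly one position: a point mutation.
diffs = sum(a != b for a, b in zip(normal_codon6, sickle_codon6))
print(f"{normal_codon6}->{sickle_codon6}: {diffs} nucleotide changed, "
      f"{translate_codon(normal_codon6)}->{translate_codon(sickle_codon6)}")
```

A single-nucleotide change is enough to flip the residue from a hydrophilic to a hydrophobic amino acid, which is the molecular basis of the disease described above.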
Tay–Sachs disease
The cause of Tay–Sachs disease is a genetic defect that is passed from parent to child. This genetic defect is located in the HEXA gene, which is found on chromosome 15.
The HEXA gene makes part of an enzyme called beta-hexosaminidase A, which plays a critical role in the nervous system. This enzyme helps break down a fatty substance called GM2 ganglioside in nerve cells.
Mutations in the HEXA gene disrupt the activity of beta-hexosaminidase A, preventing the breakdown of the fatty substances. As a result, the fatty substances accumulate to deadly levels in the brain and spinal cord. The buildup of GM2 ganglioside causes progressive damage to the nerve cells. This is the cause of the signs and symptoms of Tay-Sachs disease.
Repeat-induced point mutation
In molecular biology, repeat-induced point mutation or RIP is a process by which DNA accumulates G:C to A:T transition mutations. Genomic evidence indicates that RIP occurs or has occurred in a variety of fungi while experimental evidence indicates that RIP is active in Neurospora crassa, Podospora anserina, Magnaporthe grisea, Leptosphaeria maculans, Gibberella zeae and Nectria haematococca. In Neurospora crassa, sequences mutated by RIP are often methylated de novo.
RIP occurs during the sexual stage in haploid nuclei after fertilization but prior to meiotic DNA replication. In Neurospora crassa, repeat sequences of at least 400 base pairs in length are vulnerable to RIP. Repeats with as low as 80% nucleotide identity may also be subject to RIP. Though the exact mechanisms of repeat recognition and mutagenesis are poorly understood, RIP results in repeated sequences undergoing multiple transition mutations.
The RIP mutations do not seem to be limited to repeated sequences. Indeed, for example, in the phytopathogenic fungus L. maculans, RIP mutations are found in single copy regions, adjacent to the repeated elements. These regions are either non-coding regions or genes encoding small secreted proteins including avirulence genes.
The degree of RIP within these single copy regions was proportional to their proximity to repetitive elements.
Rep and Kistler have speculated that the presence of highly repetitive regions containing transposons may promote mutation of resident effector genes. The presence of effector genes within such regions is therefore suggested to promote their adaptation and diversification when exposed to strong selection pressure.
As RIP mutation is traditionally observed to be restricted to repetitive regions and not single copy regions, Fudal et al. suggested that leakage of RIP mutation might occur within a relatively short distance of a RIP-affected repeat. Indeed, this has been reported in N. crassa whereby leakage of RIP was detected in single copy sequences at least 930 bp from the boundary of neighbouring duplicated sequences.
Elucidating the mechanism by which repeated sequences are detected and targeted by RIP may help explain how the flanking sequences are also affected.
Mechanism
RIP causes G:C to A:T transition mutations within repeats; however, the mechanism that detects the repeated sequences is unknown. RID is the only known protein essential for RIP. It is a DNA methyltransferase-like protein that, when mutated or knocked out, results in loss of RIP. Deletion of the rid homolog in Aspergillus nidulans, dmtA, results in loss of fertility, while deletion of the rid homolog in Ascobolus immersens, masc1, results in fertility defects and loss of methylation induced premeiotically (MIP).
Consequences
RIP is believed to have evolved as a defense mechanism against transposable elements, which resemble parasites by invading and multiplying within the genome.
RIP creates multiple missense and nonsense mutations in the coding sequence. This hypermutation of G-C to A-T in repetitive sequences eliminates functional gene products of the sequence (if there were any to begin with). In addition, many of the C-bearing nucleotides become methylated, thus decreasing transcription.
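The RIP signature, G:C to A:T transitions confined to duplicated sequence, can be illustrated with a toy simulation in Python. The sequence and the per-site mutation probability are invented for illustration; they are not measured RIP parameters.

```python
import random

def rip_mutate(seq: str, rate: float, rng: random.Random) -> str:
    """Apply RIP-style transitions to one strand: C->T directly, and
    G->A (which corresponds to a C->T event on the complementary strand).
    A:T positions are never touched, matching the G:C -> A:T signature."""
    out = []
    for base in seq:
        if base == "C" and rng.random() < rate:
            out.append("T")
        elif base == "G" and rng.random() < rate:
            out.append("A")
        else:
            out.append(base)
    return "".join(out)

rng = random.Random(0)                    # fixed seed for reproducibility
repeat = "ATGCGCGATCCGGCTA" * 4           # toy duplicated sequence
mutated = rip_mutate(repeat, rate=0.3, rng=rng)

identity = sum(a == b for a, b in zip(repeat, mutated)) / len(repeat)
print(f"identity after simulated RIP: {identity:.0%}")
```

Because only G:C pairs can mutate, the identity between the original and RIPped copy decays toward the A:T fraction of the sequence, which is why heavily RIPped repeats are both AT-rich and riddled with nonsense codons.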
Use in molecular biology
Because RIP is so efficient at detecting and mutating repeats, fungal biologists often use it as a tool for mutagenesis. A second copy of a single-copy gene is first transformed into the genome. The fungus must then mate and go through its sexual cycle to activate the RIP machinery. Many different mutations within the duplicated gene are obtained from even a single fertilization event so that inactivated alleles, usually due to nonsense mutations, as well as alleles containing missense mutations can be obtained.
History
The cellular reproduction process of meiosis was discovered by Oscar Hertwig in 1876. Mitosis was discovered several years later in 1882 by Walther Flemming.
Hertwig studied sea urchins and noticed that each egg contained one nucleus prior to fertilization and two nuclei after. This discovery showed that a single spermatozoon could fertilize an egg, and provided early evidence for the process of meiosis. Hermann Fol continued Hertwig's research by testing the effects of injecting several spermatozoa into an egg, and found that the process did not work with more than one spermatozoon.
Flemming began his research of cell division in 1868. The study of cells was an increasingly popular topic in this time period. By 1873, Schneider had already begun to describe the steps of cell division, and Flemming furthered this description in 1874 and 1875 by explaining the steps in more detail. He also disputed Schneider's finding that the nucleus separated into rod-like structures, suggesting instead that the nucleus actually separated into threads that in turn separated. Flemming concluded that cells replicate through cell division, more specifically mitosis.
Matthew Meselson and Franklin Stahl are credited with demonstrating the mechanism of DNA replication. Watson and Crick acknowledged that the structure of DNA indicated some form of replicating process, but little research was done on this aspect of DNA until after their work. Various possible methods of DNA replication were considered, but none was confirmed until Meselson and Stahl. By introducing a heavy isotope into some DNA and tracing its distribution, Meselson and Stahl were able to show that DNA replicates semi-conservatively.
See also
Missense mRNA
PAM matrix
References
External links
Modification of genetic information
Mutation
Molecular biology | Point mutation | [
"Chemistry",
"Biology"
] | 4,016 | [
"Biochemistry",
"Modification of genetic information",
"Molecular genetics",
"Molecular biology"
] |
611,177 | https://en.wikipedia.org/wiki/Perchlorate | A perchlorate is a chemical compound containing the perchlorate ion, ClO4−, the conjugate base of perchloric acid (ionic perchlorate). As counterions, there can be metal cations, quaternary ammonium cations or other ions, for example, the nitronium cation (NO2+).
The term perchlorate can also describe perchlorate esters or covalent perchlorates. These are organic compounds that are alkyl or aryl esters of perchloric acid. They are characterized by a covalent bond between an oxygen atom of the ClO4 moiety and an organyl group.
In most ionic perchlorates, the perchlorate anion is non-coordinating. The majority of ionic perchlorates are commercially produced salts commonly used as oxidizers for pyrotechnic devices and for their ability to control static electricity in food packaging. Additionally, they have been used in rocket propellants, fertilizers, and as bleaching agents in the paper and textile industries.
Perchlorate contamination of food and water endangers human health, primarily affecting the thyroid gland.
Ionic perchlorates are typically colorless solids that exhibit good solubility in water, dissociating to give the perchlorate ion. Many perchlorate salts also exhibit good solubility in non-aqueous solvents. Four perchlorates are of primary commercial interest: ammonium perchlorate (NH4ClO4), perchloric acid (HClO4), potassium perchlorate (KClO4) and sodium perchlorate (NaClO4).
Production
Perchlorate salts are typically manufactured through the process of electrolysis, which involves oxidizing aqueous solutions of corresponding chlorates. This technique is commonly employed in the production of sodium perchlorate, which finds widespread use as a key ingredient in rocket fuel. Perchlorate salts are also commonly produced by reacting perchloric acid with bases, such as ammonium hydroxide or sodium hydroxide. Ammonium perchlorate, which is highly valued, can also be produced via an electrochemical process.
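The electrolytic route is a two-electron anodic oxidation (ClO3− + H2O → ClO4− + 2 H+ + 2 e−), so the theoretical yield follows Faraday's law. The back-of-the-envelope sketch below assumes 100% current efficiency, which is an idealization that industrial cells do not achieve; the 1000 A / 24 h cell is a hypothetical example, not a plant figure.

```python
# Theoretical sodium perchlorate yield from chlorate electro-oxidation:
#   ClO3- + H2O -> ClO4- + 2 H+ + 2 e-   (n = 2 electrons per ion)
# Assumes 100% current efficiency, an idealization for illustration only.
FARADAY = 96485.0      # C/mol of electrons
M_NACLO4 = 122.44      # g/mol (Na 22.99 + Cl 35.45 + 4 x 16.00)

def theoretical_yield_g(current_a: float, hours: float, n_electrons: int = 2) -> float:
    charge = current_a * hours * 3600.0              # total charge in coulombs
    moles = charge / (n_electrons * FARADAY)         # moles of perchlorate formed
    return moles * M_NACLO4

# A hypothetical 1000 A cell running for 24 h:
mass = theoretical_yield_g(1000.0, 24.0)
print(f"theoretical NaClO4 yield: {mass / 1000:.1f} kg")
```

Real cells lose some charge to side reactions such as oxygen evolution, so actual yields are lower than this upper bound.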
Perchlorate esters are formed by nucleophilic substitution of a perchlorate salt onto an alkylating agent, in the presence of a nucleophilic catalyst.
Uses
The dominant use of perchlorates is as oxidizers in propellants for rockets, fireworks and highway flares. Of particular value is ammonium perchlorate composite propellant as a component of solid rocket fuel. In a related but smaller application, perchlorates are used extensively within the pyrotechnics industry and in certain munitions and for the manufacture of matches.
Perchlorate is used to control static electricity in food packaging. Sprayed onto containers, it stops statically charged food from clinging to plastic or paper/cardboard surfaces.
Niche uses include lithium perchlorate, which decomposes exothermically to produce oxygen, useful in oxygen "candles" on spacecraft, submarines, and in other situations where a reliable backup oxygen supply is needed.
Potassium perchlorate has, in the past, been used therapeutically to help manage Graves' disease. It impedes production of the thyroid hormones that contain iodine.
As perchlorate is generally a non-complexing anion and its sodium salt is particularly soluble, it is commonly used as a background, or supporting, electrolyte in solution chemistry, electrophoresis, and electrochemistry. Although used as a powerful oxidizer in propulsive powders and explosives, the perchlorate anion is, perhaps surprisingly, a weak oxidant in aqueous solution because kinetic limitations severely hinder electron transfer.
Chemical properties
The perchlorate ion is the least redox reactive of the generalized chlorates. Perchlorate contains chlorine in its highest oxidation number (+7). A table of reduction potentials of the four chlorates shows that, contrary to expectation, perchlorate in aqueous solution is the weakest oxidant among the four.
These data show that the perchlorate and chlorate are stronger oxidizers in acidic conditions than in basic conditions.
Gas phase measurements of heats of reaction (which allow computation of ΔfH°) of various chlorine oxides do follow the expected trend: dichlorine heptoxide (Cl2O7) exhibits the largest endothermic value of ΔfH° (238.1 kJ/mol) while dichlorine monoxide (Cl2O) exhibits the lowest (80.3 kJ/mol).
Weak base and weak coordinating anion
As perchloric acid is one of the strongest mineral acids, perchlorate is a weak base in the sense of Brønsted–Lowry acid–base theory.
As it is also generally a weakly coordinating anion, perchlorate is commonly used as a background, or supporting, electrolyte.
Weak oxidant in aqueous solution due to kinetic limitations
Perchlorate compounds oxidize organic compounds, especially when the mixture is heated. The explosive decomposition of ammonium perchlorate is catalyzed by metals and heat.
As perchlorate is a weak Lewis base (i.e., a weak electron-pair donor) and a weak nucleophile, it is also a very weakly coordinating anion. This is why it is often used as a supporting electrolyte to study the complexation and chemical speciation of many cations in aqueous solution, or in electroanalytical methods (voltammetry, electrophoresis, etc.). Although perchlorate reduction is thermodynamically favorable, and perchlorate is therefore expected to be a strong oxidant, in aqueous solution it is most often a practically inert species behaving as an extremely slow oxidant because of severe kinetic limitations. The metastable character of perchlorate in the presence of reducing cations such as Fe2+ in solution is due to the difficulty of forming an activated complex that facilitates electron transfer and the exchange of oxo groups in the opposite direction. These strongly hydrated cations cannot form a sufficiently stable coordination bridge with one of the four oxo groups of the perchlorate anion. Although thermodynamically a mild reductant, the Fe2+ ion exhibits a stronger tendency to remain coordinated by water molecules, forming the corresponding hexa-aquo complex in solution. The high activation energy of cation binding with perchlorate to form a transient inner-sphere complex more favourable to electron transfer considerably hinders the redox reaction. The redox reaction rate is limited by the formation of a favorable activated complex involving an oxo bridge between the perchlorate anion and the metal cation. It depends on the molecular orbital rearrangement (HOMO and LUMO orbitals) necessary for fast oxygen atom transfer (OAT) and the associated electron transfer, as studied experimentally by Henry Taube (1983 Nobel Prize in Chemistry) and theoretically by Rudolph A. Marcus (1992 Nobel Prize in Chemistry), both awarded for their respective works on the mechanisms of electron-transfer reactions with metal complexes and in chemical systems.
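The kinetics-versus-thermodynamics point can be made quantitative with ΔG° = -nFE°. The sketch below uses the textbook standard potential E° ≈ +1.39 V for the eight-electron ClO4−/Cl− couple in acid; this is an approximate literature value assumed here for illustration.

```python
# Thermodynamics of ClO4- + 8 H+ + 8 e- -> Cl- + 4 H2O.
# E° ≈ +1.39 V (approximate textbook value, assumed here). Despite the
# large negative ΔG°, the reaction is kinetically inert in water because
# the oxygen-atom/electron transfer lacks a low-energy pathway.
FARADAY = 96485.0   # C/mol of electrons

def gibbs_kj_per_mol(n_electrons: int, e_standard_v: float) -> float:
    """Standard Gibbs energy change, ΔG° = -nFE°, in kJ/mol."""
    return -n_electrons * FARADAY * e_standard_v / 1000.0

dg = gibbs_kj_per_mol(8, 1.39)
print(f"ΔG° ≈ {dg:.0f} kJ/mol")   # strongly favorable, roughly -1073 kJ/mol
```

The hugely negative ΔG° shows why perchlorate "should" be a powerful oxidant; the preceding paragraph explains why, absent a viable oxo-bridged activated complex, that driving force goes unused in solution.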
In contrast to cations such as Fe2+, which remain unoxidized in deaerated perchlorate aqueous solutions free of dissolved oxygen, other cations such as Ru(II) and Ti(III) can form a more stable bridge between the metal centre and one of the oxo groups of the perchlorate anion. For the inner-sphere electron transfer mechanism of perchlorate reduction to operate, the anion must quickly transfer an oxygen atom to the reducing cation. When this is the case, metal cations can readily reduce perchlorate in solution: Ru(II) can reduce perchlorate to chlorate, while V(II), V(III), Mo(III), Cr(II) and Ti(III) can reduce perchlorate to chloride.
Some metal complexes, especially those of rhenium, and some metalloenzymes can catalyze the reduction of perchlorate under mild conditions. Perchlorate reductase (see below), a molybdoenzyme, also catalyzes the reduction of perchlorate. Both the Re- and Mo-based catalysts operate via metal-oxo intermediates.
Microbiology
Over 40 phylogenetically and metabolically diverse microorganisms capable of growth using perchlorate as an electron acceptor have been isolated since 1996. Most originate from the Pseudomonadota, but others include the Bacillota, Moorella perchloratireducens and Sporomusa sp., and the archaeon Archaeoglobus fulgidus. With the exception of A. fulgidus, microbes that grow via perchlorate reduction utilize the enzymes perchlorate reductase and chlorite dismutase, which together reduce perchlorate to chloride. In the process, free oxygen (O2) is generated.
Natural abundance
Terrestrial abundance
Perchlorate is created by lightning discharges in the presence of chloride. Perchlorate has been detected in rain and snow samples from Florida and Lubbock, Texas. It is also present in Martian soil.
Naturally occurring perchlorate at its most abundant can be found commingled with deposits of sodium nitrate in the Atacama Desert of northern Chile. These deposits have been heavily mined as sources for nitrate-based fertilizers. Chilean nitrate is in fact estimated to be the source of much of the perchlorate imported to the U.S. (1909–1997). Results from surveys of ground water, ice, and relatively unperturbed deserts have been used to estimate a "global inventory" of natural perchlorate presently on Earth.
On Mars
Perchlorate was detected in Martian soil at the level of ~0.6% by weight. At the Phoenix landing site it was shown to be present as a mixture of roughly 60% calcium perchlorate (Ca(ClO4)2) and 40% magnesium perchlorate (Mg(ClO4)2). Dissolved, these salts act as antifreeze and substantially lower the freezing point of water. Based on the temperature and pressure conditions on present-day Mars at the Phoenix lander site, conditions would allow a perchlorate salt solution to be stable in liquid form for a few hours each day during the summer.
The possibility that the perchlorate was a contaminant brought from Earth was eliminated by several lines of evidence. The Phoenix retro-rockets used ultra pure hydrazine and launch propellants consisting of ammonium perchlorate or ammonium nitrate. Sensors on board Phoenix found no traces of ammonium nitrate, and thus the nitrate in the quantities present in all three soil samples is indigenous to the Martian soil. Perchlorate is widespread in Martian soils at concentrations between 0.5 and 1%. At such concentrations, perchlorate could be an important source of oxygen, but it could also become a critical chemical hazard to astronauts.
In 2006, a mechanism was proposed for the formation of perchlorates that is particularly relevant to the discovery of perchlorate at the Phoenix lander site. It was shown that soils with high concentrations of chloride converted to perchlorate in the presence of titanium dioxide and sunlight/ultraviolet light. The conversion was reproduced in the lab using chloride-rich soils from Death Valley. Other experiments have demonstrated that the formation of perchlorate is associated with wide band gap semiconducting oxides. In 2014, it was shown that perchlorate and chlorate can be produced from chloride minerals under Martian conditions via UV using only NaCl and silicate.
Further findings of perchlorate and chlorate in the Martian meteorite EETA79001 and by the Mars Curiosity rover in 2012–2013 support the notion that perchlorates are globally distributed throughout the Martian surface. At concentrations approaching 0.5%, exceeding levels toxic to humans, Martian perchlorates would present a serious challenge to human settlement, as well as to microorganisms. On the other hand, the perchlorate would provide a convenient source of oxygen for the settlements.
On September 28, 2015, NASA announced that analyses of spectral data from the Compact Reconnaissance Imaging Spectrometer for Mars instrument (CRISM) on board the Mars Reconnaissance Orbiter from four different locations where recurring slope lineae (RSL) are present found evidence for hydrated salts. The hydrated salts most consistent with the spectral absorption features are magnesium perchlorate, magnesium chlorate and sodium perchlorate. The findings strongly support the hypothesis that RSL form as a result of contemporary water activity on Mars.
Contamination in environment
Perchlorates are of concern because of uncertainties about toxicity and health effects at low levels in drinking water, impact on ecosystems, and indirect exposure pathways for humans due to accumulation in vegetables. They are water-soluble, exceedingly mobile in aqueous systems, and can persist for many decades under typical groundwater and surface water conditions.
Industrial origin
Perchlorates are used mostly in rocket propellants but also in disinfectants, bleaching agents, and herbicides. Perchlorate contamination is caused during both the manufacture and ignition of rockets and fireworks. Fireworks are also a source of perchlorate in lakes. Removal and recovery methods of these compounds from explosives and rocket propellants include high-pressure water washout, which generates aqueous ammonium perchlorate.
In U.S. drinking water
In 2000, perchlorate contamination beneath the former Olin Corporation flare manufacturing facility in Morgan Hill, California, was first discovered, several years after the plant had closed. The plant had used potassium perchlorate as one of its ingredients during its 40 years of operation. By late 2003, the State of California and the Santa Clara Valley Water District had confirmed a groundwater plume extending over nine miles through residential and agricultural communities.
The California Regional Water Quality Control Board and the Santa Clara Valley Water District have engaged in a major outreach effort, and a water well testing program has been underway for about 1,200 residential, municipal, and agricultural wells. Large ion exchange treatment units are operating in three public water supply systems, which include seven municipal wells with perchlorate detection. The potentially responsible parties, Olin Corporation and Standard Fuse Incorporated, have been supplying bottled water to nearly 800 households with private wells, and the Regional Water Quality Control Board has been overseeing cleanup efforts.
The source of perchlorate in California was mainly attributed to two manufacturers in the southeast portion of the Las Vegas Valley in Nevada, where perchlorate has been produced for industrial use. This led to perchlorate release into Lake Mead in Nevada and the Colorado River which affected regions of Nevada, California and Arizona, where water from this reservoir is used for consumption, irrigation and recreation for approximately half the population of these states. Lake Mead has been attributed as the source of 90% of the perchlorate in Southern Nevada's drinking water. Based on sampling, perchlorate has been affecting 20 million people, with highest detection in Texas, southern California, New Jersey, and Massachusetts, but intensive sampling of the Great Plains and other middle state regions may lead to revised estimates with additional affected regions. An action level of 18 μg/L has been adopted by several affected states.
In 2001, the chemical was detected at levels as high as 5 μg/L at Joint Base Cape Cod (formerly Massachusetts Military Reservation), over the Massachusetts then state regulation of 2 μg/L.
As of 2009, low levels of perchlorate had been detected in both drinking water and groundwater in 26 states in the U.S., according to the Environmental Protection Agency (EPA).
In food
In 2004, the chemical was found in cow's milk in California at an average level of 1.3 parts per billion (ppb, or μg/L), which may have entered the cows through feeding on crops exposed to water containing perchlorates.
A 2005 study suggested human breast milk had an average of 10.5 μg/L of perchlorate.
From minerals and other natural occurrences
In some places, there is no clear source of perchlorate, and it may be naturally occurring. Natural perchlorate on Earth was first identified in terrestrial nitrate deposits/fertilizers of the Atacama Desert in Chile as early as the 1880s and was for a long time considered a unique perchlorate source. The perchlorate released from historic use of Chilean-nitrate-based fertilizer, which the U.S. imported by the hundreds of tons in the early 20th century, can still be found in some groundwater sources of the United States, for example Long Island, New York. Recent improvements in analytical sensitivity using ion chromatography based techniques have revealed a more widespread presence of natural perchlorate, particularly in subsoils of the Southwest USA, salt evaporites in California and Nevada, Pleistocene groundwater in New Mexico, and even in extremely remote places such as Antarctica. The data from these studies and others indicate that natural perchlorate is globally deposited on Earth, with subsequent accumulation and transport governed by local hydrologic conditions.
Despite its importance to environmental contamination, the specific source and processes involved in natural perchlorate production remain poorly understood. Laboratory experiments in conjunction with isotopic studies have implied that perchlorate may be produced on earth by oxidation of chlorine species through pathways involving ozone or its photochemical products. Other studies have suggested that perchlorate can also be formed by lightning activated oxidation of chloride aerosols (e.g., chloride in sea salt sprays), and ultraviolet or thermal oxidation of chlorine (e.g., bleach solutions used in swimming pools) in water.
From nitrate fertilizers
Although perchlorate as an environmental contaminant is usually associated with the manufacture, storage, and testing of solid rocket motors, attention has also focused on perchlorate contamination as a side effect of the use of natural nitrate fertilizer and its release into ground water. The use of naturally contaminated nitrate fertilizer contributes to the infiltration of perchlorate anions into ground water and threatens the water supplies of many regions in the US.
One of the main sources of perchlorate contamination from natural nitrate fertilizer use was found to be fertilizer derived from Chilean caliche (calcium carbonate), because Chile has a rich source of naturally occurring perchlorate anion. Perchlorate concentration was highest in Chilean nitrate, ranging from 3.3 to 3.98%. Perchlorate in the solid fertilizer ranged from 0.7 to 2.0 mg g−1, a variation of less than a factor of 3, and it is estimated that sodium nitrate fertilizers derived from Chilean caliche contain approximately 0.5–2 mg g−1 of perchlorate anion. The direct ecological effect of perchlorate is not well known; its impact can be influenced by factors including rainfall and irrigation, dilution, natural attenuation, soil adsorption, and bioavailability. Quantification of perchlorate concentrations in nitrate fertilizer components via ion chromatography revealed that horticultural fertilizer components contained perchlorate ranging between 0.1 and 0.46%.
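The reported fertilizer concentrations translate into appreciable perchlorate loads at field scale; the unit-conversion sketch below uses the 0.5–2.0 mg/g range from the text, while the 200 kg/ha application rate is a hypothetical illustration, not a figure from the source.

```python
# Perchlorate delivered per hectare by Chilean-nitrate-derived fertilizer.
# Concentrations of 0.5-2.0 mg perchlorate per g of fertilizer are from
# the text; the 200 kg/ha application rate is a hypothetical example.
def perchlorate_load_kg_per_ha(conc_mg_per_g: float, applied_kg_per_ha: float) -> float:
    # mg/g is numerically equivalent to g/kg, so the load scales directly.
    return conc_mg_per_g * applied_kg_per_ha / 1000.0   # kg perchlorate per ha

low = perchlorate_load_kg_per_ha(0.5, 200.0)
high = perchlorate_load_kg_per_ha(2.0, 200.0)
print(f"{low:.2f}-{high:.2f} kg perchlorate per hectare")
```

Even at the low end, repeated annual applications would add perchlorate to soil at rates far above natural atmospheric deposition, which is why fertilizer history matters when tracing groundwater contamination.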
Environmental cleanup
There have been many attempts to eliminate perchlorate contamination. Current remediation technologies for perchlorate have the drawbacks of high cost and difficult operation, so there has been interest in developing systems that offer economical and green alternatives.
Treatment ex situ and in situ
Several technologies can remove perchlorate, via treatments ex situ (away from the location) and in situ (at the location).
Ex situ treatments include ion exchange using perchlorate-selective or nitrite-specific resins, bioremediation using packed-bed or fluidized-bed bioreactors, and membrane technologies via electrodialysis and reverse osmosis. In ex situ treatment via ion exchange, contaminant ions are attracted to and adhere to the ion exchange resin because the resin and the contaminant ions carry opposite charges. As a contaminant ion adheres to the resin, another charged ion is released into the water being treated; the ion is thus exchanged for the contaminant. Ion exchange technology has the advantages of being well suited to perchlorate treatment and of high volume throughput, but it does not treat chlorinated solvents. In addition, the ex situ technology of liquid-phase carbon adsorption is employed, in which granular activated carbon (GAC) is used to eliminate low levels of perchlorate; pretreatment may be required when using GAC for perchlorate elimination.
In situ treatments, such as bioremediation via perchlorate-selective microbes and permeable reactive barriers, are also being used to treat perchlorate. In situ bioremediation has the advantages of minimal above-ground infrastructure and the ability to treat chlorinated solvents, perchlorate, nitrate, and RDX simultaneously. However, it may negatively affect secondary water quality. The in situ technology of phytoremediation could also be utilized, although the mechanism of perchlorate phytoremediation is not yet fully understood.
Bioremediation using perchlorate-reducing bacteria, which reduce perchlorate ions to harmless chloride, has also been proposed.
Health effects
Thyroid inhibition
Perchlorate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Thus, it has been used to treat hyperthyroidism since the 1950s. At very high doses (70,000–300,000 ppb) the administration of potassium perchlorate was considered the standard of care in the United States, and it remains the approved pharmacologic intervention in many countries.
In large amounts, perchlorate interferes with iodine uptake into the thyroid gland. In adults, the thyroid gland helps regulate the metabolism by releasing hormones, while in children, the thyroid helps in proper development. The NAS, in its 2005 report Health Implications of Perchlorate Ingestion, emphasized that this effect, also known as iodide uptake inhibition (IUI), is not an adverse health effect. However, in January 2008, California's Department of Toxic Substances Control stated that perchlorate is becoming a serious threat to human health and water resources. In 2010, the EPA's Office of the Inspector General determined that the agency's own perchlorate reference dose (RfD) of 24.5 parts per billion protects against all human biological effects from exposure, a finding significant because the federal government is responsible for groundwater contamination at US military bases. This finding reflected a significant shift in policy at the EPA toward basing its risk assessment on non-adverse effects such as IUI instead of adverse effects. The Office of the Inspector General also found that because the EPA's perchlorate reference dose is conservative and protective of human health, further reducing perchlorate exposure below the reference dose does not effectively lower risk.
Because of ammonium perchlorate's adverse effects upon children, Massachusetts set its maximum allowed limit of ammonium perchlorate in drinking water at 2 parts per billion (2 ppb = 2 micrograms per liter).
Perchlorate affects only thyroid hormone. Because it is neither stored nor metabolized, effects of perchlorate on the thyroid gland are reversible, though effects on brain development from lack of thyroid hormone in fetuses, newborns, and children are not.
Toxic effects of perchlorate have been studied in a survey of industrial plant workers who had been exposed to perchlorate, compared to a control group of other industrial plant workers who had no known exposure to perchlorate. After undergoing multiple tests, workers exposed to perchlorate were found to have a significant systolic blood pressure rise compared to the workers who were not exposed to perchlorate, as well as a significant decreased thyroid function compared to the control workers.
A study involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate can temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; perchlorate is thus a known goitrogen). The EPA converted this dose into a reference dose of 0.0007 mg/(kg·d) by dividing it by the standard intraspecies uncertainty factor of 10. The agency then calculated a "drinking water equivalent level" of 24.5 ppb by assuming standard values for a person's body weight and daily drinking water consumption over a lifetime.
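The derivation of the 24.5 ppb drinking water equivalent level is simple arithmetic. The sketch below uses EPA's customary default exposure assumptions (70 kg body weight, 2 L/day water intake); these defaults are assumptions here, since the exact figures were elided in the text, but they reproduce the stated 24.5 ppb result.

```python
# Reference dose -> drinking water equivalent level (DWEL).
# Point of departure 0.007 mg/kg/day and uncertainty factor 10 are from
# the text. Body weight (70 kg) and water intake (2 L/day) are EPA's
# customary defaults, assumed here because the figures were elided.
point_of_departure = 0.007      # mg/kg/day (no-effect level in volunteers)
uncertainty_factor = 10.0       # standard intraspecies factor
rfd = point_of_departure / uncertainty_factor        # 0.0007 mg/kg/day

body_weight_kg = 70.0           # assumed EPA default
intake_l_per_day = 2.0          # assumed EPA default
dwel_mg_per_l = rfd * body_weight_kg / intake_l_per_day

# In dilute aqueous solution, 1 µg/L is approximately 1 ppb.
print(f"RfD = {rfd:.4f} mg/kg/day, DWEL = {dwel_mg_per_l * 1000:.1f} ppb")
```

Working the numbers backward (0.0245 mg/L ÷ 0.0007 mg/kg/day = 35 kg·day/L) confirms the 70 kg / 2 L/day assumption is the one consistent with the published 24.5 ppb figure.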
In 2006, a study reported a statistical association between environmental levels of perchlorate and changes in thyroid hormones of women with low iodine. The study authors were careful to point out that hormone levels in all the study subjects remained within normal ranges. The authors also indicated that they did not originally normalize their findings for creatinine, which would have essentially accounted for fluctuations in the concentrations of one-time urine samples like those used in this study. When the Blount research was re-analyzed with the creatinine adjustment made, the study population limited to women of reproductive age, and results not shown in the original analysis, any remaining association between the results and perchlorate intake disappeared. Soon after the revised Blount Study was released, Robert Utiger, a doctor with the Harvard Institute of Medicine, testified before the US Congress and stated: "I continue to believe that that reference dose, 0.007 milligrams per kilo (24.5 ppb), which includes a factor of 10 to protect those who might be more vulnerable, is quite adequate."
In 2014, a study was published, showing that environmental exposure to perchlorate in pregnant women with hypothyroidism is associated with a significant risk of low IQ in their children.
Lung toxicity
Some studies suggest that perchlorate has pulmonary toxic effects as well. Studies have been performed on rabbits where perchlorate has been injected into the trachea. The lung tissue was removed and analyzed, and it was found that perchlorate injected lung tissue showed several adverse effects when compared to the control group that had been intratracheally injected with saline. Adverse effects included inflammatory infiltrates, alveolar collapse, subpleural thickening, and lymphocyte proliferation.
Aplastic anemia
In the early 1960s, potassium perchlorate used to treat Graves' disease was implicated in the development of aplastic anemia (a condition in which the bone marrow fails to produce new blood cells in sufficient quantity) in thirteen patients, seven of whom died. Subsequent investigations have indicated the connection between administration of potassium perchlorate and the development of aplastic anemia to be "equivocal at best", meaning the evidence for a causal link was ambiguous, and it appeared that a contaminant had poisoned the thirteen patients.
Regulation in the U.S.
Water
In 1998, perchlorate was included in the U.S. EPA Contaminant Candidate List, primarily due to its detection in California drinking water.
In 2002, the EPA completed its draft toxicological review of perchlorate and proposed a reference dose of 0.00003 milligrams per kilogram per day (mg/kg/day), based primarily on studies that identified neurodevelopmental deficits in rat pups linked to maternal exposure to perchlorate.
In 2003, a federal district court in California found that the Comprehensive Environmental Response, Compensation and Liability Act applied, because perchlorate is ignitable, and therefore was a "characteristic" hazardous waste.
Subsequently, the U.S. National Research Council of the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher reference dose (RfD) of 0.0007 mg/kg/day, based primarily on a 2002 study by Greer et al. In that study, 37 adult human subjects were divided into four groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposure group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by NAS was accepted by EPA and added to its Integrated Risk Information System (IRIS).
The NAS report described the lowest exposure level from Greer et al. as a "no-observed-effect level" (NOEL). However, there was actually an effect at that level, although it was not statistically significant, largely due to the small size of the study population (four of seven subjects showed a slight decrease in iodide uptake).
Reduced iodide uptake was not considered to be an adverse effect, even though it is a precursor to an adverse effect, hypothyroidism. Therefore, additional safety factors would be necessary when extrapolating from the point of departure to the RfD.
Consideration of data uncertainty was insufficient because the Greer et al. study reflected only a 14-day (i.e., acute) exposure of healthy adults, and no additional safety factors were considered to protect sensitive subpopulations such as breastfeeding newborns.
Although there has generally been consensus with the Greer et al. study, there has been no consensus with regard to developing a perchlorate RfD. One of the key differences results from how the point of departure is viewed (i.e., NOEL or "lowest-observed-adverse-effect level", LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD.
In early 2006, EPA issued a "Cleanup Guidance" and recommended a Drinking Water Equivalent Level (DWEL) for perchlorate of 24.5 μg/L. Both DWEL and Cleanup Guidance were based on a 2005 review of the existing research by the National Academy of Sciences (NAS).
Lacking a federal drinking water standard, several states subsequently published their own standards for perchlorate including Massachusetts in 2006 and California in 2007. Other states, including Arizona, Maryland, Nevada, New Mexico, New York, and Texas have established non-enforceable, advisory levels for perchlorate.
In 2008, EPA issued an interim drinking water health advisory for perchlorate and with it a guidance and analysis concerning the impacts on the environment and drinking water. California also issued guidance regarding perchlorate use. Both the Department of Defense and some environmental groups voiced questions about the NAS report, but no credible science has emerged to challenge the NAS findings.
In February 2008, the U.S. Food and Drug Administration (FDA) reported that U.S. toddlers on average were being exposed to more than half of EPA's safe dose from food alone. In March 2009, a Centers for Disease Control study found 15 brands of infant formula contaminated with perchlorate and that combined with existing perchlorate drinking water contamination, infants could be at risk for perchlorate exposure above the levels considered safe by EPA.
In 2010, the Massachusetts Department of Environmental Protection set a 10-fold lower RfD (0.07 μg/kg/day) than the NAS RfD, using a much higher uncertainty factor of 100. They also calculated an infant drinking-water value, which neither the US EPA nor CalEPA had done.
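The relationship between the two reference doses is simple arithmetic: both can be derived from the 0.007 mg/(kg·d) exposure level in the Greer study, with the uncertainty factor as the only difference. Treating the Massachusetts point of departure as the same Greer value is an assumption here; the text states only the factor of 100.

```python
# Comparison of the NAS/EPA and Massachusetts reference doses for perchlorate.
# Both are assumed here to start from the 0.007 mg/(kg·d) lowest exposure level
# in the Greer et al. study; the text states only the uncertainty factors.

point_of_departure_mg = 0.007   # mg/(kg·d)

rfd_nas_mg = point_of_departure_mg / 10    # NAS/EPA uncertainty factor of 10
rfd_mass_mg = point_of_departure_mg / 100  # Massachusetts uncertainty factor of 100

# Express in µg/(kg·d) to match the units quoted in the text.
print(round(rfd_nas_mg * 1000, 2))    # 0.7 µg/(kg·d)
print(round(rfd_mass_mg * 1000, 2))   # 0.07 µg/(kg·d), i.e. 10-fold lower
```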
On February 11, 2011, EPA determined that perchlorate meets the Safe Drinking Water Act criteria for regulation as a contaminant. The agency found that perchlorate may have an adverse effect on the health of persons and is known to occur in public water systems with a frequency and at levels that it presents a public health concern. Since then EPA has continued to determine what level of contamination is appropriate. EPA prepared extensive responses to submitted public comments.
In 2016, the Natural Resources Defense Council (NRDC) filed a lawsuit to accelerate EPA's regulation of perchlorate.
In 2019, EPA proposed a Maximum Contaminant Level of 0.056 mg/L for public water systems.
On June 18, 2020, EPA announced that it was withdrawing its 2011 regulatory determination and its 2019 proposal, stating that it had taken "proactive steps" with state and local governments to address perchlorate contamination. In September 2020 NRDC filed suit against EPA for its failure to regulate perchlorate, and stated that 26 million people may be affected by perchlorate in their drinking water. On March 31, 2022, the EPA announced that a review confirmed its 2020 decision. Following the NRDC lawsuit, in 2023 the US Court of Appeals for the DC Circuit ordered EPA to develop a perchlorate standard for public water systems. EPA stated that it will publish a proposed standard for perchlorate in 2025, and issue a final rule in 2027.
Covalent perchlorates
Although perchlorate is typically found as a non-coordinating anion, a few metal complexes are known. Hexaperchloratoaluminate and tetraperchloratoaluminate are strong oxidising agents.
Several perchlorate esters are known. For example, methyl perchlorate is a high energy material that is a strong alkylating agent. Chlorine perchlorate is a covalent inorganic analog.
Safety
As discussed above, perchlorate competes with iodide for uptake in the thyroid gland. In the presence of reductants, perchlorate forms potentially explosive mixtures. The PEPCON disaster destroyed an ammonium perchlorate production plant when a fire caused the ammonium perchlorate stored on site to react with the aluminum of its storage tanks and explode.
References
External links
NAS Report: The Health Effects of Perchlorate Ingestion
NRDC's criticism of NAS report
Environment California report (Executive Summary with link to full text)
Macho Moms: Perchlorate pollutant masculinizes fish: Science News Online, August 12, 2006
New Scientist Space Blog: Phoenix discovery may be bad for Mars life
State Threatening to Sue Military over Water Pollution, Associated Press, May 19, 2003.
Health Effects of Perchlorate from Spent Rocket, SpaceDaily.com, July 11, 2002.
Dept of Defense, Dept of Energy, and US Environmental Protection Agency's Strategic Environmental Research and Development Program, Elimination of Perchlorate Oxidizers from Pyrotechnic Flare Compositions, 2009
Endocrine disruptors
Non-coordinating anions
Oxidizing agents
Pyrotechnic oxidizers | Perchlorate | [
"Chemistry"
] | 7,149 | [
"Redox",
"Endocrine disruptors",
"Coordination chemistry",
"Perchlorates",
"Oxidizing agents",
"Salts",
"Non-coordinating anions"
] |
611,229 | https://en.wikipedia.org/wiki/Dissection | Dissection (from Latin "to cut to pieces"; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved, are carried out by medical students in medical schools as part of the teaching of subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab.
Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives including virtual dissection of computer models.
In the field of surgery, the term "dissection" or "dissecting" means more specifically the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure.
Overview
Plant and animal bodies are dissected to analyze the structure and function of their components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. Zoötomy is sometimes used to describe "dissection of an animal".
Human dissection
A key principle in the dissection of human cadavers (sometimes called androtomy) is the prevention of the transmission of disease to the dissector. Preventive measures include the wearing of protective gear, ensuring the environment is clean, careful dissection technique, and pre-dissection testing of specimens for the presence of HIV and hepatitis viruses. Specimens are dissected in morgues or anatomy labs. When provided, they are evaluated for use as a "fresh" or "prepared" specimen. A "fresh" specimen may be dissected within some days, retaining the characteristics of a living specimen, for the purposes of training. A "prepared" specimen may be preserved in solutions such as formalin and pre-dissected by an experienced anatomist, sometimes with the help of a diener. This preparation is sometimes called prosection.
Most dissection involves the careful isolation and removal of individual organs, called the Virchow technique. An alternative, more cumbersome technique involves the removal of the entire organ block, called the Letulle technique. This technique allows a body to be sent to a funeral director without waiting for the sometimes time-consuming dissection of individual organs. The Rokitansky method involves an in situ dissection of the organ block, and the technique of Ghon involves dissection of three separate blocks of organs: the thoracic and cervical organs, the gastrointestinal and abdominal organs, and the urogenital organs. Dissection of individual organs involves accessing the area in which the organ is situated and systematically severing the anatomical connections of that organ to its surroundings. For example, when removing the heart, connections such as the superior vena cava and inferior vena cava are separated. If pathological connections exist, such as a fibrous pericardium, then this may be deliberately dissected along with the organ.
Autopsy and necropsy
Dissection is used to help to determine the cause of death in autopsy (called necropsy in other animals) and is an intrinsic part of forensic medicine.
History
Classical antiquity
Human dissections were carried out by the Greek physicians Herophilus of Chalcedon and Erasistratus of Chios in the early part of the third century BC. Before then, animal dissection had been carried out systematically starting from the fifth century BC. During this period the first systematic exploration of full human anatomy was performed, rather than knowledge being gleaned incidentally from 'problem-solution' investigation. While there was a deep taboo in Greek culture concerning human dissection, there was at the time a strong push by the Ptolemaic government to build Alexandria into a hub of scientific study. For a time, Roman law forbade dissection and autopsy of the human body, so anatomists relied on the cadavers of animals or made observations of human anatomy from the injuries of the living. Galen, for example, dissected the Barbary macaque and other primates, assuming their anatomy was basically the same as that of humans, and supplemented these observations with knowledge of human anatomy acquired while tending to wounded gladiators.
Celsus wrote in On Medicine I Proem 23, "Herophilus and Erasistratus proceeded in by far the best way: they cut open living men - criminals they obtained out of prison from the kings and they observed, while their subjects still breathed, parts that nature had previously hidden, their position, color, shape, size, arrangement, hardness, softness, smoothness, points of contact, and finally the processes and recesses of each and whether any part is inserted into another or receives the part of another into itself."
Galen was another such writer who was familiar with the studies of Herophilus and Erasistratus.
India
The ancient societies rooted in India left behind artwork on how to kill animals during a hunt. These images, showing how to kill most effectively depending on the game being hunted, relay an intimate knowledge of both external and internal anatomy, as well as the relative importance of organs. The knowledge was mostly gained by hunters preparing recently captured prey. Once the roaming lifestyle was no longer necessary, it was replaced in part by the civilization that formed in the Indus Valley. Little remains from this time to indicate whether or not dissection occurred, as the civilization was lost during the migration of the Aryan people.
Early in the history of India (2nd to 3rd century), the Arthashastra described the four ways that death can occur and their symptoms: drowning, hanging, strangling, or asphyxiation. According to that source, an autopsy should be performed in any case of untimely demise.
The practice of dissection flourished during the 7th and 8th centuries, when medical education was standardized. This created a need to better understand human anatomy in order to train educated surgeons. Dissection was limited by the religious taboo on cutting the human body, which shaped the approach taken: the tissues were loosened in streams of water before the outer layers were sloughed off with soft implements to reach the musculature. To perfect the technique of slicing, prospective students practised on gourds and squash. These techniques of dissection gave rise to an advanced understanding of anatomy and enabled practitioners to perform procedures still used today, such as rhinoplasty.
During medieval times the anatomical teachings from India spread throughout the known world; however, the practice of dissection was stunted by Islam. Dissection at a university level was not seen again until 1827, when it was performed by the student Pandit Madhusudan Gupta. In the decades that followed, university teachers had to continually push against the social taboos surrounding dissection, until around 1850, when the universities decided that it was more cost-effective to train Indian doctors than to bring them in from Britain. Indian medical schools were, however, training female doctors well before those in England.
The current state of dissection in India is deteriorating: the number of hours spent in dissection labs during medical school has decreased substantially over the last twenty years. The future of anatomy education will probably be a mix of traditional methods and integrative computer-based learning. The use of dissection in the early stages of medical training has been shown to be more effective for retention of the intended information than simulated alternatives, although computer-generated experiences are useful for review in the later stages. The combination of these methods is intended to strengthen students' understanding of and confidence in anatomy, a subject that is infamously difficult to master. There is a growing need for anatomists to continue the long tradition of anatomy education, seeing as most anatomy labs are taught by graduates hoping to complete degrees in anatomy.
Islamic world
From the beginning of the Islamic faith in 610 A.D., Shari'ah law has applied to a greater or lesser extent within Muslim countries, supported by Islamic scholars such as Al-Ghazali. Islamic physicians such as Ibn Zuhr (Avenzoar) (1091–1161) in Al-Andalus, Saladin's physician Ibn Jumay during the 12th century, Abd el-Latif in Egypt, and Ibn al-Nafis in Syria and Egypt in the 13th century may have practiced dissection, but it remains ambiguous whether or not human dissection was practiced. Ibn al-Nafis, a physician and Muslim jurist, suggested that the "precepts of Islamic law have discouraged us from the practice of dissection, along with whatever compassion is in our temperament", indicating that while there was no law against it, it was nevertheless uncommon. Islam dictates that the body be buried as soon as possible, barring religious holidays, and that there be no other means of disposal such as cremation. Prior to the 10th century, dissection was not performed on human cadavers. The book Al-Tasrif, written by Al-Zahrawi in 1000 A.D., details surgical procedures that differed from the previous standards. It was an educational text of medicine and surgery which included detailed illustrations; it was later translated and took the place of Avicenna's The Canon of Medicine as the primary teaching tool in Europe from the 12th century to the 17th century. There were some who were willing to dissect humans up to the 12th century, for the sake of learning, after which it was forbidden. This attitude remained constant until 1952, when the Islamic School of Jurisprudence in Egypt ruled that "necessity permits the forbidden". This decision allowed for the investigation of questionable deaths by autopsy. In 1982, a fatwa decided that if it serves justice, autopsy is worth the disadvantages. Though Islam now approves of autopsy, the Islamic public still disapproves. Autopsy is prevalent in most Muslim countries for medical and judicial purposes.
In Egypt it holds an important place within the judicial structure, and is taught at all the country's medical universities. In Saudi Arabia, whose law is completely dictated by Shari'ah, autopsy is viewed poorly by the population but can be compelled in criminal cases; human dissection is sometimes found at university level. Autopsy is performed for judicial purposes in Qatar and Tunisia. Human dissection is present in the modern day Islamic world, but is rarely published on due to the religious and social stigma.
Tibet
Tibetan medicine developed a rather sophisticated knowledge of anatomy, acquired from long-standing experience with human dissection. Tibetans had adopted the practice of sky burial because of the country's hard ground, frozen for most of the year, and the lack of wood for cremation. A sky burial begins with a ritual dissection of the deceased, and is followed by the feeding of the parts to vultures on the hill tops. Over time, Tibetan anatomical knowledge found its way into Ayurveda and to a lesser extent into Chinese medicine.
Christian Europe
Throughout the history of Christian Europe, the dissection of human cadavers for medical education has experienced various cycles of legalization and proscription in different countries. Dissection was rare during the Middle Ages, but it was practised, with evidence from at least as early as the 13th century. The practice of autopsy in Medieval Western Europe is "very poorly known" as few surgical texts or conserved human dissections have survived.
A modern Jesuit scholar has claimed that the Christian theology contributed significantly to the revival of human dissection and autopsy by providing a new socio-religious and cultural context in which the human cadaver was no longer seen as sacrosanct.
A non-existent edict of the 1163 Council of Tours and an early 14th-century decree of Pope Boniface VIII have mistakenly been identified as prohibiting dissection and autopsy; misunderstanding or extrapolation from these edicts may have contributed to reluctance to perform such procedures. The Middle Ages witnessed the revival of an interest in medical studies, including human dissection and autopsy.
Frederick II (1194–1250), the Holy Roman Emperor, decreed that anyone studying to be a physician or a surgeon must attend a human dissection, which would be held no less frequently than every five years. Some European countries began legalizing the dissection of executed criminals for educational purposes in the late 13th and early 14th centuries. Mondino de Luzzi carried out the first recorded public dissection around 1315. At this time, autopsies were carried out by a team consisting of a Lector, who lectured; the Sector, who did the dissection; and the Ostensor, who pointed to features of interest.
The Italian Galeazzo di Santa Sofia made the first public dissection north of the Alps in Vienna in 1404.
Vesalius in the 16th century carried out numerous dissections in his extensive anatomical investigations. He was attacked frequently for his disagreement with Galen's opinions on human anatomy. Vesalius was the first to lecture and dissect the cadaver simultaneously.
The Catholic Church is known to have ordered an autopsy on conjoined twins Joana and Melchiora Ballestero in Hispaniola in 1533 to determine whether they shared a soul. They found that there were two distinct hearts, and hence two souls, based on the ancient Greek philosopher Empedocles, who believed the soul resided in the heart.
Human dissection was also practised by Renaissance artists. Though most chose to focus on the external surfaces of the body, some like Michelangelo Buonarroti, Antonio del Pollaiuolo, Baccio Bandinelli, and Leonardo da Vinci sought a deeper understanding. However, there were no provisions for artists to obtain cadavers, so they had to resort to unauthorised means, as indeed anatomists sometimes did, such as grave robbing, body snatching, and murder.
Anatomization was sometimes ordered as a form of punishment, as, for example, in 1806 to James Halligan and Dominic Daley after their public hanging in Northampton, Massachusetts.
In modern Europe, dissection is routinely practised in biological research and education, in medical schools, and to determine the cause of death in autopsy. It is generally considered a necessary part of learning and is thus accepted culturally. It sometimes attracts controversy, as when Odense Zoo decided to dissect lion cadavers in public before a "self-selected audience".
Britain
In Britain, dissection remained entirely prohibited from the end of the Roman conquest and through the Middle Ages to the 16th century, when a series of royal edicts gave specific groups of physicians and surgeons some limited rights to dissect cadavers. The permission was quite limited: by the mid-18th century, the Royal College of Physicians and Company of Barber-Surgeons were the only two groups permitted to carry out dissections, and had an annual quota of ten cadavers between them. As a result of pressure from anatomists, especially in the rapidly growing medical schools, the Murder Act 1752 allowed the bodies of executed murderers to be dissected for anatomical research and education. By the 19th century this supply of cadavers proved insufficient, as the public medical schools were growing, and the private medical schools lacked legal access to cadavers. A thriving black market arose in cadavers and body parts, leading to the creation of the profession of body snatching, and the infamous Burke and Hare murders in 1828, when 16 people were murdered for their cadavers, to be sold to anatomists. The resulting public outcry led to the passage of the Anatomy Act 1832, which increased the legal supply of cadavers for dissection.
By the 21st century, the availability of interactive computer programs and changing public sentiment led to renewed debate on the use of cadavers in medical education. The Peninsula College of Medicine and Dentistry in the UK, founded in 2000, became the first modern medical school to carry out its anatomy education without dissection.
United States
In the United States, dissection of frogs became common in college biology classes from the 1920s, and were gradually introduced at earlier stages of education. By 1988, some 75 to 80 percent of American high school biology students were participating in a frog dissection, with a trend towards introduction in elementary schools. The frogs are most commonly from the genus Rana. Other popular animals for high-school dissection at the time of that survey were, among vertebrates, fetal pigs, perch, and cats; and among invertebrates, earthworms, grasshoppers, crayfish, and starfish. About six million animals are dissected each year in United States high schools (2016), not counting medical training and research. Most of these are purchased already dead from slaughterhouses and farms.
Dissection in U.S. high schools became prominent in 1987, when a California student, Jenifer Graham, sued to require her school to let her complete an alternative project. The court ruled that mandatory dissections were permissible, but that Graham could ask to dissect a frog that had died of natural causes rather than one that was killed for the purposes of dissection; the practical impossibility of procuring a frog that had died of natural causes in effect let Graham opt out of the required dissection. The suit gave publicity to anti-dissection advocates. Graham appeared in a 1987 Apple Computer commercial for the virtual-dissection software Operation Frog. The state of California passed a Student's Rights Bill in 1988 requiring that objecting students be allowed to complete alternative projects. Opting out of dissection increased through the 1990s.
In the United States, 17 states along with Washington, D.C. have enacted dissection-choice laws or policies that allow students in primary and secondary education to opt out of dissection. Other states including Arizona, Hawaii, Minnesota, Texas, and Utah have more general policies on opting out on moral, religious, or ethical grounds. To overcome these concerns, J. W. Mitchell High School in New Port Richey, Florida, in 2019 became the first US high school to use synthetic frogs for dissection in its science classes, instead of preserved real frogs.
As for the dissection of cadavers in undergraduate and medical school, traditional dissection is supported by professors and students, with some opposition, limiting the availability of dissection. Upper-level students who have experienced this method along with their professors agree that "Studying human anatomy with colorful charts is one thing. Using a scalpel and an actual, recently-living person is an entirely different matter."
Acquisition of cadavers
The way in which cadaveric specimens are obtained differs greatly according to country. In the UK, donation of a cadaver is wholly voluntary. Involuntary donation plays a role in about 20 percent of specimens in the US and almost all specimens donated in some countries such as South Africa and Zimbabwe. Countries that practice involuntary donation may make available the bodies of dead criminals or unclaimed or unidentified bodies for the purposes of dissection. Such practices may lead to a greater proportion of the poor, homeless and social outcasts being involuntarily donated. Cadavers donated in one jurisdiction may also be used for the purposes of dissection in another, whether across states in the US, or imported from other countries, such as with Libya. As an example of how a cadaver is donated voluntarily, a funeral home, in conjunction with a voluntary donation program, identifies a deceased person who was part of the program. After broaching the subject with relatives in a diplomatic fashion, the body is transported to a registered facility, where it is tested for the presence of HIV and hepatitis viruses and then evaluated for use as a "fresh" or "prepared" specimen.
Disposal of specimens
Cadaveric specimens for dissection are, in general, disposed of by cremation. The deceased may then be interred at a local cemetery. If the family wishes, the ashes of the deceased are then returned to the family. Many institutes have local policies to engage, support and celebrate the donors. This may include the setting up of local monuments at the cemetery.
Use in education
Human cadavers are often used in medicine to teach anatomy or surgical instruction. Cadavers are selected according to their anatomy and availability. They may be used as part of dissection courses involving a "fresh" specimen so as to be as realistic as possible—for example, when training surgeons. Cadavers may also be pre-dissected by trained instructors. This form of dissection involves the preparation and preservation of specimens for a longer time period and is generally used for the teaching of anatomy.
Alternatives
Some alternatives to dissection may present educational advantages over the use of animal cadavers, while eliminating perceived ethical issues. These alternatives include computer programs, lectures, three dimensional models, films, and other forms of technology. Concern for animal welfare is often at the root of objections to animal dissection. Studies show that some students reluctantly participate in animal dissection out of fear of real or perceived punishment or ostracism from their teachers and peers, and many do not speak up about their ethical objections.
One alternative to the use of cadavers is computer technology. At Stanford Medical School, software combines X-ray, ultrasound and MRI imaging for display on a screen as large as a body on a table. In a variant of this, a "virtual anatomy" approach being developed at New York University, students wear three dimensional glasses and can use a pointing device to "[swoop] through the virtual body, its sections as brightly colored as living tissue." This method is claimed to be "as dynamic as Imax [cinema]".
Advantages and disadvantages
Proponents of animal-free teaching methodologies argue that alternatives to animal dissection can benefit educators by increasing teaching efficiency and lowering instruction costs while affording teachers an enhanced potential for the customization and repeat-ability of teaching exercises. Those in favor of dissection alternatives point to studies which have shown that computer-based teaching methods "saved academic and nonacademic staff time ... were considered to be less expensive and an effective and enjoyable mode of student learning [and] ... contributed to a significant reduction in animal use" because there is no set-up or clean-up time, no obligatory safety lessons, and no monitoring of misbehavior with animal cadavers, scissors, and scalpels.
With software and other non-animal methods, there is also no expensive disposal of equipment or hazardous material removal. Some programs also allow educators to customize lessons and include built-in test and quiz modules that can track student performance. Furthermore, animals (whether dead or alive) can be used only once, while non-animal resources can be used for many years—an added benefit that could result in significant cost savings for teachers, school districts, and state educational systems.
Several peer-reviewed comparative studies examining information retention and performance of students who dissected animals and those who used an alternative instruction method have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection.
Some reports state that students' confidence, satisfaction, and ability to retrieve and communicate information was much higher for those who participated in alternative activities compared to dissection. Three separate studies at universities across the United States found that students who modeled body systems out of clay were significantly better at identifying the constituent parts of human anatomy than their classmates who performed animal dissection.
Another study found that students preferred using clay modeling over animal dissection and performed just as well as their cohorts who dissected animals.
In 2008, the National Association of Biology Teachers (NABT) affirmed its support for classroom animal dissection stating that they "Encourage the presence of live animals in the classroom with appropriate consideration to the age and maturity level of the students ... NABT urges teachers to be aware that alternatives to dissection have their limitations. NABT supports the use of these materials as adjuncts to the educational process but not as exclusive replacements for the use of actual organisms."
The National Science Teachers Association (NSTA) "supports including live animals as part of instruction in the K-12 science classroom because observing and working with animals firsthand can spark students' interest in science as well as a general respect for life while reinforcing key concepts" of biological sciences. NSTA also supports offering dissection alternatives to students who object to the practice.
The NORINA database lists over 3,000 products which may be used as alternatives or supplements to animal use in education and training. These include alternatives to dissection in schools. InterNICHE has a similar database and a loans system.
Additional images
See also
1788 Doctors' riot in New York City
Vivisection
Forensics
Andreas Vesalius, founder of modern anatomy
Jean-Joseph Sue, 18th century surgeon and anatomist
Notes
References
Further reading
C. Celsus, On Medicine, I, Proem 23, 1935, translated by W. G. Spencer (Loeb Classics Library, 1992).
Claire Bubb. 2022. Dissection in Classical Antiquity: A Social and Medical History. Cambridge: Cambridge University Press.
External links
How to dissect a frog
Dissection Alternatives
Human Dissections
Virtual Frog Dissection
Alternatives To Animal Dissection in School Science Classes
Research Project on Death and Dead Bodies, last conference: "Death and Dissection" July 2009, Berlin, Germany
Evolutionary Biology Digital Dissection Collections Dissection photographs for study and teaching from the University at Buffalo
The Free Dictionary
Biological techniques and tools
Forensic pathology
Corpses | Dissection | [
"Biology"
] | 5,423 | [
"nan"
] |
611,249 | https://en.wikipedia.org/wiki/Bell%20tower | A bell tower is a tower that contains one or more bells, or that is designed to hold bells even if it has none. Such a tower commonly serves as part of a Christian church, and will contain church bells, but there are also many secular bell towers, often part of a municipal building, an educational establishment, or a tower built specifically to house a carillon. Church bell towers often incorporate clocks, and secular towers usually do, as a public service.
The term campanile, from the Italian campanile, which in turn derives from campana, meaning "bell", is synonymous with bell tower; though in English usage campanile tends to be used to refer to a free-standing bell tower. A bell tower may also in some traditions be called a belfry, though this term may also refer specifically to the substructure that houses the bells and the ringers rather than the complete tower.
The tallest free-standing bell tower in the world is the Mortegliano Bell Tower, in the Friuli-Venezia Giulia region of Italy.
Purpose
Bells are rung from a tower to enable them to be heard at a distance. Church bells can signify the time for worshippers to go to church for a communal service, and can be an indication of the fixed times of daily Christian prayer, called the canonical hours, which number seven and are contained in breviaries. They are also rung on special occasions such as a wedding, or a funeral service. In some religious traditions they are used within the liturgy of the church service to signify to people that a particular part of the service has been reached.
A bell tower may have a single bell, or a collection of bells which are tuned to a common scale. They may be stationary and chimed, rung randomly by swinging through a small arc, or swung through a full circle to enable the high degree of control of English change ringing. They may house a carillon or chimes, in which the bells are sounded by hammers connected via cables to a keyboard. These can be found in many churches and secular buildings in Europe and America including college and university campuses.
A variety of electronic devices exist to simulate the sound of bells, but any substantial tower in which a considerable sum of money has been invested will generally have a real set of bells.
Some churches have an exconjuratory in the bell tower, a space where ceremonies were conducted to ward off weather-related calamities, like storms and excessive rain. The main bell tower of the Cathedral of Murcia has four.
In Christianity, many churches ring their church bells from belltowers three times a day, at 9 am, 12 pm and 3 pm to summon the Christian faithful to recite the Lord's Prayer; the injunction to pray the Lord's Prayer thrice daily was given in Didache 8, 2 f., which, in turn, was influenced by the Jewish practice of praying thrice daily found in the Old Testament, specifically in Psalm 55:17, which suggests "evening and morning and at noon", and Daniel 6:10, in which the prophet Daniel prays thrice a day. The early Christians thus came to pray the Lord's Prayer at 9 am, 12 pm and 3 pm; as such, in Christianity, many Lutheran and Anglican churches ring their church bells from belltowers three times a day: in the morning, at noon and in the evening, calling Christians to recite the Lord's Prayer. Many Catholic Christian churches ring their bells thrice a day, at 6 a.m., noon, and 6 p.m., to call the faithful to recite the Angelus, a prayer recited in honour of the Incarnation of God. Oriental Orthodox Christians, such as Copts and Indians, use a breviary such as the Agpeya and Shehimo to pray the canonical hours seven times a day while facing in the eastward direction; church bells are tolled, especially in monasteries, to mark these seven fixed prayer times.
The Christian tradition of the ringing of church bells from a belltower is analogous to Islamic tradition of the adhan (call to prayer) from a minaret.
Old bell towers which are no longer used for their original purpose may be kept for their historic or architectural value, though in countries with a strong campanological tradition they often continue to have the bells rung.
History
Europe
In 400 AD, Paulinus of Nola introduced church bells into the Christian Church. By the 11th century, bells housed in belltowers became commonplace.
Historic bell towers exist throughout Europe. The Irish round towers are thought to have functioned in part as bell towers. Famous medieval European examples include Bruges (Belfry of Bruges), Ypres (Cloth Hall, Ypres), Ghent (Belfry of Ghent). Perhaps the most famous European free-standing bell tower, however, is the so-called "Leaning Tower of Pisa", which is the campanile of the Duomo di Pisa in Pisa, Italy. In 1999 thirty-two Belgian belfries were added to UNESCO's list of World Heritage Sites. In 2005 this list was extended with one Belgian and twenty-three Northern French belfries and has since been known as Belfries of Belgium and France. Most of these were attached to civil buildings, mainly city halls, as symbols of the greater power the cities in the region gained in the Middle Ages; a small number of buildings not connected with a belfry, such as bell towers of—or with their—churches, also occur on this same list. In the Middle Ages, cities sometimes kept their important documents in belfries. Not all are on a large scale; the "bell" tower of Katúň, in Slovakia, is typical of the many more modest structures that were once common in country areas. Archaic wooden bell towers survive adjoining churches in Lithuania as well as in some parts of Poland.
In Orthodox Eastern Europe bell ringing also has a strong cultural significance (Russian Orthodox bell ringing), and churches were constructed with bell towers (see also List of tall Orthodox Bell towers).
China
Bell towers (Chinese: Zhonglou, Japanese: Shōrō) are common in China and the countries of related cultures. They may appear both as part of a temple complex and as an independent civic building, often paired with a drum tower, as well as in local church buildings. Among the best known examples are the Bell Tower (Zhonglou) of Beijing and the Bell Tower of Xi'an.
Gallery
See also
Bell-gable
Clock tower
Conjuratory
Minaret
Octagon on cube
Zvonnitsa
References and notes
External links
Belfries of Belgium and France, UNESCO World Heritage Centre entry
Les Beffrois – France, Belgique, Pays-Bas, blog describing several bell towers (in French)
All Saints Bell Tower
Towers
Tower | Bell tower | [
"Engineering"
] | 1,420 | [
"Structural engineering",
"Towers"
] |
611,253 | https://en.wikipedia.org/wiki/Slot%202 | Slot 2 refers to the physical and electrical specification for the 330-lead Single Edge Contact Cartridge (or edge-connector) used by Intel's Pentium II Xeon and Pentium III Xeon.
When first introduced, Slot 1 Pentium IIs were intended to replace the Pentium and Pentium Pro processors in the home, desktop, and low-end symmetric multiprocessing (SMP) markets. The Pentium II Xeon, which was aimed at multiprocessor workstations and servers, was largely similar to the ordinary Pentium II, being based on the same P6 Deschutes core, differing by offering a choice of L2 cache capacity (512, 1024, or 2048 KB) and by operating the cache at the core frequency (the Pentium II used cheaper third-party SRAM chips, running at 50% of CPU speed, to reduce cost).
Because the design of the 242-lead Slot 1 connector did not support the full-speed L2 cache of the Xeon, an extended 330-lead connector was developed. This new connector, dubbed 'Slot 2', was used for the Pentium II Xeon (codenamed 'Drake') and Pentium III Xeon (codenamed 'Tanner' and 'Cascades'). Slot 1 was finally replaced by the Socket 370 with the revised Pentium III codenamed Tualatin for low-power dual-processor servers, and Slot 2 by the Socket 603 with the Pentium 4-based Xeon (codenamed Foster) for workstations and quad-processor servers.
See also
List of Intel microprocessors
List of Intel Xeon microprocessors
Slot A
Slot 1
References
Intel CPU sockets | Slot 2 | [
"Technology"
] | 355 | [
"Computing stubs",
"Computer hardware stubs"
] |
611,421 | https://en.wikipedia.org/wiki/Hostname | In computer networking, a hostname (archaically nodename) is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication, such as the World Wide Web. Hostnames may be simple names consisting of a single word or phrase, or they may be structured. Each hostname usually has at least one numeric network address associated with it for routing packets for performance and other reasons.
Internet hostnames may have the name of a Domain Name System (DNS) domain appended, separated from the host-specific label by a period ("dot"). In the latter form, a hostname is also called a domain name. If the domain name is completely specified, including a top-level domain of the Internet, then the hostname is said to be a fully qualified domain name (FQDN). Hostnames that include DNS domains are often stored in the Domain Name System together with the IP addresses of the hosts they represent, for the purpose of mapping the hostname to an address, or the reverse process.
Internet hostnames
On the Internet, a hostname is a domain name assigned to a host computer. This is usually a combination of the host's local name with its parent domain's name. For example, en.wikipedia.org consists of a local hostname (en) and the domain name wikipedia.org. This kind of hostname is translated into an IP address via the local hosts file, or the DNS resolver. It is possible for a single host computer to have several hostnames but generally, the operating system of the host prefers to have one hostname that the host uses for itself.
Any domain name can also be a hostname, as long as the restrictions mentioned below are followed. So, for example, both en.wikipedia.org and wikipedia.org are hostnames because they both have IP addresses assigned to them. A hostname may be a domain name if it is properly organized into the domain name system. A domain name may be a hostname if it has been assigned to an Internet host and associated with the host's IP address.
Syntax
Hostnames are composed of a sequence of labels concatenated with dots. For example, "en.wikipedia.org" is a hostname. Each label must be 1 to 63 octets long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.
The Internet standards (Request for Comments) for protocols specify that labels may contain only the ASCII letters a through z (in a case-insensitive manner), the digits 0 through 9, and the hyphen-minus character ('-'). The original specification of hostnames required that labels start with an alpha character and not end with a hyphen. However, a subsequent specification permitted hostname labels to start with digits. Internationalized domain names are stored in the Domain Name System as ASCII strings using Punycode transcription.
While a hostname may not contain other characters, such as the underscore character (_), other DNS names may contain the underscore. Systems such as DomainKeys and service records use the underscore as a means to assure that their special character is not confused with hostnames. For example, _http._sctp.www.example.com specifies a service pointer for an SCTP-capable webserver host (www) in the domain example.com. Notwithstanding the standard, Chrome, Firefox, Internet Explorer, Edge, and Safari allow underscores in hostnames, although cookies in IE do not work correctly if any part of the hostname contains an underscore character.
However, it is valid to attempt to resolve a hostname that consists of an underscore. E.g. _.example.com. This is used by RFC 7816 to reduce the amount of information that is made available to intermediate DNS servers during an iterative query. The Query Name Minimisation feature is enabled by default in BIND 9.14.0.
The hostname en.wikipedia.org is composed of the DNS labels en (hostname or leaf domain), wikipedia (second-level domain), and org (top-level domain). Labels such as 2600 and 3abc may be used in hostnames, but -hi-, _hi_, and *hi* are invalid.
A hostname is considered to be a fully qualified domain name (FQDN) when all labels up to and including the top-level domain name (TLD) are specified. The hostname en.wikipedia.org terminates with the top-level domain org and is thus fully qualified. Depending on the operating system DNS software implementation, an unqualified hostname may be automatically combined with a default domain name configured into the system in order to complete the fully qualified domain name. As an example, a student at MIT may be able to send mail to "joe@csail" and have it automatically qualified by the mail system to be sent to joe@csail.mit.edu.
General guidelines on choosing a good hostname are outlined in RFC 1178.
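The label and length rules above can be sketched as a small Python check (illustrative only; it covers the letter/digit/hyphen rules and length limits described here, not internationalized names):

```python
import re

# One label: 1-63 characters of letters, digits, and hyphen-minus,
# neither starting nor ending with a hyphen (leading digits are
# allowed, per the relaxed rule mentioned above).
LABEL_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name):
    """Check a hostname against the syntax rules described above."""
    if name.endswith("."):   # a single trailing dot (FQDN form) is allowed
        name = name[:-1]
    if len(name) > 253:      # overall limit, including the delimiting dots
        return False
    return all(LABEL_RE.match(label) for label in name.split("."))
```

For example, is_valid_hostname("en.wikipedia.org") and is_valid_hostname("2600.example.org") return True, while labels such as -hi- and _hi_ are rejected.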
Example
saturn and jupiter may be the hostnames of two devices connected to a network named PC. Within PC, the devices are addressed by their hostnames. The domain names of the devices are saturn.pc and jupiter.pc, respectively. If PC is registered as a second-level domain name in the Internet, e.g., as pc.example, the hosts may be addressed by the fully qualified domain names saturn.pc.example and jupiter.pc.example.
See also
Domain hijacking
References
Computer networking
Identifiers | Hostname | [
"Technology",
"Engineering"
] | 1,178 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
611,452 | https://en.wikipedia.org/wiki/Medicine%20bag | A medicine bag is usually a small pouch, worn by some Indigenous peoples of the Americas, that contains sacred items. A personal medicine bag may contain objects that symbolize personal well-being and tribal identity. Traditionally, medicine bags are worn under the clothing. Their contents are private, and often of a personal and religious nature.
See also
Medicine man
Midewiwin
Medicine wheel
References
Bags
Native American religion
Religious objects
Traditional medicine
Amulets
Talismans | Medicine bag | [
"Physics"
] | 91 | [
"Religious objects",
"Physical objects",
"Matter"
] |
611,460 | https://en.wikipedia.org/wiki/Localization%20of%20a%20category | In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; it in general makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy; and so large classes of homotopy equivalent spaces. Calculus of fractions is another name for working in a localized category.
Introduction and motivation
A category C consists of objects and morphisms between these objects. The morphisms reflect relations between the objects. In many situations, it is meaningful to replace C by another category C′ in which certain morphisms are forced to be isomorphisms. This process is called localization.
For example, in the category of R-modules (for some fixed commutative ring R) the multiplication by a fixed element r of R is typically (i.e., unless r is a unit) not an isomorphism:

M → M, m ↦ rm.

The category that is most closely related to R-modules, but where this map is an isomorphism, turns out to be the category of S⁻¹R-modules. Here S⁻¹R is the localization of R with respect to the (multiplicatively closed) subset S consisting of all powers of r, that is, S = {1, r, r², r³, ...}.

The expression "most closely related" is formalized by two conditions: first, there is a functor

φ : Mod(R) → Mod(S⁻¹R)

sending any R-module to its localization with respect to S. Moreover, given any category C and any functor

F : Mod(R) → C

sending the multiplication map by r on any R-module (see above) to an isomorphism of C, there is a unique functor

G : Mod(S⁻¹R) → C

such that G ∘ φ = F.
Localization of categories
The above examples of localization of R-modules is abstracted in the following definition. In this shape, it applies in many more examples, some of which are sketched below.
Given a category C and some class W of morphisms in C, the localization C[W−1] is another category which is obtained by inverting all the morphisms in W. More formally, it is characterized by a universal property: there is a natural localization functor C → C[W−1] and given another category D, a functor F: C → D factors uniquely over C[W−1] if and only if F sends all arrows in W to isomorphisms.
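In symbols, the universal property just stated can be written as follows (the name Q for the localization functor is supplied here for readability and is not fixed by the text):

```latex
% Localization functor and its universal property
\[
  Q \colon \mathcal{C} \longrightarrow \mathcal{C}[W^{-1}],
  \qquad Q(w) \ \text{invertible for every } w \in W,
\]
\[
  \forall\, F \colon \mathcal{C} \to \mathcal{D} \ \text{with } F(w) \ \text{invertible for all } w \in W,
  \ \exists!\, \bar{F} \colon \mathcal{C}[W^{-1}] \to \mathcal{D}
  \ \text{such that } \bar{F} \circ Q = F.
\]
```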
Thus, the localization of the category is unique up to unique isomorphism of categories, provided that it exists. One construction of the localization is done by declaring that its objects are the same as those in C, but the morphisms are enhanced by adding a formal inverse for each morphism in W. Under suitable hypotheses on W, the morphisms from object X to object Y are given by roofs

X ← X′ → Y

(where X′ is an arbitrary object of C and the left-hand map f : X′ → X is in the given class W of morphisms), modulo certain equivalence relations. These relations turn the map going in the "wrong" direction into an inverse of f. This "calculus of fractions" can be seen as a generalization of the construction of rational numbers as equivalence classes of pairs of integers.
This procedure, however, in general yields a proper class of morphisms between X and Y. Typically, the morphisms in a category are only allowed to form a set. Some authors simply ignore such set-theoretic issues.
Model categories
A rigorous construction of localization of categories, avoiding these set-theoretic issues, was one of the initial reasons for the development of the theory of model categories: a model category M is a category in which there are three classes of maps; one of these classes is the class of weak equivalences. The homotopy category Ho(M) is then the localization with respect to the weak equivalences. The axioms of a model category ensure that this localization can be defined without set-theoretical difficulties.
Alternative definition
Some authors also define a localization of a category C to be an idempotent and coaugmented functor. A coaugmented functor is a pair (L,l) where L:C → C is an endofunctor and l:Id → L is a natural transformation from the identity functor to L (called the coaugmentation). A coaugmented functor is idempotent if, for every X, both maps L(lX),lL(X):L(X) → LL(X) are isomorphisms. It can be proven that in this case, both maps are equal.
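The idempotence condition can be rendered compactly in LaTeX (same notation as the paragraph above):

```latex
% Coaugmented functor (L, l) and the idempotence condition
\[
  L \colon \mathcal{C} \to \mathcal{C},
  \qquad l \colon \mathrm{Id}_{\mathcal{C}} \Rightarrow L,
\]
\[
  L(l_X),\; l_{L(X)} \colon L(X) \xrightarrow{\ \cong\ } L(L(X))
  \quad \text{for every object } X .
\]
```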
This definition is related to the one given above as follows: applying the first definition, there is, in many situations, not only a canonical functor Q : C → C[W⁻¹], but also a functor in the opposite direction,

U : C[W⁻¹] → C.

For example, modules over the localization S⁻¹R of a ring are also modules over R itself, giving a restriction functor

U : Mod(S⁻¹R) → Mod(R).

In this case, the composition

L = U ∘ Q : C → C

is a localization of C in the sense of an idempotent and coaugmented functor.
Examples
Serre's C-theory
Serre introduced the idea of working in homotopy theory modulo some class C of abelian groups. This meant that groups A and B were treated as isomorphic, if for example A/B lay in C.
Module theory
In the theory of modules over a commutative ring R, when R has Krull dimension ≥ 2, it can be useful to treat modules M and N as pseudo-isomorphic if M/N has support of codimension at least two. This idea is much used in Iwasawa theory.
Derived categories
The derived category of an abelian category is much used in homological algebra. It is the localization of the category of chain complexes (up to homotopy) with respect to the quasi-isomorphisms.
Quotients of abelian categories
Given an abelian category A and a Serre subcategory B, one can define the quotient category A/B, which is an abelian category equipped with an exact functor from A to A/B that is essentially surjective and has kernel B. This quotient category can be constructed as a localization of A by the class of morphisms whose kernel and cokernel are both in B.
Abelian varieties up to isogeny
An isogeny from an abelian variety A to another one B is a surjective morphism with finite kernel. Some theorems on abelian varieties require the idea of abelian variety up to isogeny for their convenient statement. For example, given an abelian subvariety A1 of A, there is another subvariety A2 of A such that A1 × A2 is isogenous to A (Poincaré's reducibility theorem: see for example Abelian Varieties by David Mumford). To call this a direct sum decomposition, we should work in the category of abelian varieties up to isogeny.
Related concepts
The localization of a topological space, introduced by Dennis Sullivan, produces another topological space whose homology is a localization of the homology of the original space.
A much more general concept from homotopical algebra, including as special cases both the localization of spaces and of categories, is the Bousfield localization of a model category. Bousfield localization forces certain maps to become weak equivalences, which is in general weaker than forcing them to become isomorphisms.
See also
Simplicial localization
References
Category theory
Localization (mathematics) | Localization of a category | [
"Mathematics"
] | 1,575 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
611,537 | https://en.wikipedia.org/wiki/Magnesite | Magnesite is a mineral with the chemical formula (magnesium carbonate). Iron, manganese, cobalt, and nickel may occur as admixtures, but only in small amounts.
Occurrence
Magnesite occurs as veins in and an alteration product of ultramafic rocks, serpentinite and other magnesium rich rock types in both contact and regional metamorphic terrains. These magnesites are often cryptocrystalline and contain silica in the form of opal or chert.
Magnesite is also present within the regolith above ultramafic rocks as a secondary carbonate within soil and subsoil, where it is deposited as a consequence of dissolution of magnesium-bearing minerals by carbon dioxide in groundwaters.
Formation
Magnesite can be formed via talc carbonate metasomatism of peridotite and other ultramafic rocks. Magnesite is formed via carbonation of olivine in the presence of water and carbon dioxide at elevated temperatures and high pressures typical of the greenschist facies.
Magnesite can also be formed via the carbonation of magnesium serpentine (lizardite) via the following reaction:
2 Mg3Si2O5(OH)4 + 3 CO2 → Mg3Si4O10(OH)2 + 3 MgCO3 + 3 H2O
However, when performing this reaction in the laboratory, the trihydrated form of magnesium carbonate (nesquehonite) will form at room temperature. This very observation led to the postulation of a "dehydration barrier" being involved in the low-temperature formation of anhydrous magnesium carbonate. Laboratory experiments with formamide, a liquid resembling water, have shown how no such dehydration barrier can be involved. The fundamental difficulty to nucleate anhydrous magnesium carbonate remains when using this non-aqueous solution. Not cation dehydration, but rather the spatial configuration of carbonate anions creates the barrier in the low-temperature nucleation of magnesite.
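As a quick sanity check, the carbonation reaction above can be verified to be balanced by counting atoms on each side. The following Python sketch (a minimal, illustrative formula parser, not a general chemistry library) does that:

```python
import re
from collections import Counter

def parse(formula):
    """Count atoms in a formula such as Mg3Si2O5(OH)4
    (parenthesized groups are expanded by repetition)."""
    def expand(m):
        inner, mult = m.group(1), int(m.group(2) or "1")
        return inner * mult
    while "(" in formula:
        formula = re.sub(r"\(([^()]*)\)(\d*)", expand, formula)
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n or "1")
    return counts

def side(terms):
    """Total atom counts for a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in parse(formula).items():
            total[element] += coeff * n
    return total

# 2 Mg3Si2O5(OH)4 + 3 CO2 -> Mg3Si4O10(OH)2 + 3 MgCO3 + 3 H2O
lhs = side([(2, "Mg3Si2O5(OH)4"), (3, "CO2")])
rhs = side([(1, "Mg3Si4O10(OH)2"), (3, "MgCO3"), (3, "H2O")])
assert lhs == rhs  # the reaction is balanced
```

Both sides come out to Mg6 Si4 C3 O24 H8, confirming the stoichiometry quoted above.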
Magnesite has been found in modern sediments, caves and soils. Its low-temperature formation is known to require alternations between precipitation and dissolution intervals. The low-temperature formation of magnesite might well be of significance toward large-scale carbon sequestration. A major step toward the industrial production of magnesite at atmospheric pressure and a temperature of 316 K was described by Vandeginste. In those experiments, small additions of hydrochloric acid alternated periodically with additions of sodium carbonate solution. Also new was the very short duration, of only a few hours, of the alternating dissolution and precipitation cycles.
Magnesite was detected in meteorite ALH84001 and on planet Mars itself. Magnesite was identified on Mars using infrared spectroscopy from satellite orbit. Near Jezero Crater, Mg-carbonates have been detected and reported to have formed in a lacustrine environment. Controversy still exists over the temperature of formation of these carbonates. Low-temperature formation has been suggested for the magnesite from the Mars-derived ALH84001 meteorite.
Magnesium-rich olivine (forsterite) favors production of magnesite from peridotite. Iron-rich olivine (fayalite) favors production of magnetite-magnesite-silica compositions.
Magnesite can also be formed by way of metasomatism in skarn deposits, in dolomitic limestones, associated with wollastonite, periclase, and talc.
Resistant to high temperature and able to withstand high pressure, magnesite has been proposed to be one of the major carbonate-bearing phases in Earth's mantle and a possible carrier for deep carbon reservoirs. For similar reasons, it is found in metamorphosed peridotite rocks in the Central Alps, Switzerland, and in high-pressure eclogitic rocks from Tianshan, China.
Magnesite can also precipitate in lakes in presence of bacteria either as hydrous Mg-carbonates or magnesite.
Uses
Refractory material
Similar to the production of lime, magnesite can be burned in the presence of charcoal to produce MgO, which, in the form of a mineral, is known as periclase. Large quantities of magnesite are burnt to make magnesium oxide: an important refractory (heat-resistant) material used as a lining in blast furnaces, kilns and incinerators.
Calcination temperatures determine the reactivity of resulting oxide products and the classifications of light burnt and dead burnt refer to the surface area and resulting reactivity of the product (this is typically determined by an industry metric of the iodine number).
'Light burnt' product generally refers to calcination commencing at 450 °C and proceeding to an upper limit of 900 °C – which results in good surface area and reactivity.
Above 900 °C, the material loses its reactive crystalline structure and reverts to the chemically inert 'dead-burnt' product- which is preferred for use in refractory materials such as furnace linings.
In fire assay, magnesite cupels can be used for cupellation, as the magnesite cupel will resist the high temperatures involved.
Other uses
Magnesite can also be used as a binder in flooring material (magnesite screed). Furthermore, it is being used as a catalyst and filler in the production of synthetic rubber and in the preparation of magnesium chemicals and fertilizers.
Research is proceeding to evaluate the practicality of sequestering the greenhouse gas carbon dioxide in magnesite on a large scale. This has focused on peridotites from ophiolites (obducted mantle rocks on crust) where magnesite can be created by letting carbon dioxide react with these rocks. Some progress has been made in ophiolites from Oman. But the major problem is that these artificial processes require sufficient porosity-permeability so that the fluids can flow but this is hardly the case in peridotites.
Artworks
Magnesite can be cut, drilled, and polished to form beads that are used in jewelry-making. Magnesite beads can be dyed into a broad spectrum of bold colors, including a light blue color that mimics the appearance of turquoise.
The Japanese-American artist Isamu Noguchi used magnesite as a sculptural material for some of his artworks.
Isotopic structure
A recent advancement in the field of stable isotope geochemistry is the study of the isotopic structure of minerals and molecules. This requires studying molecules at high resolution, looking at the bonding scenario (how heavy isotopes are bonded to each other), which yields knowledge of the stability of a molecule depending on its isotopic structure.
Isotopically substituted molecules have a higher mass. As a consequence, their vibrational frequencies are reduced and the molecules develop a lower zero-point energy (see Kinetic isotope effect).
The abundances of certain bonds in certain molecules are sensitive to the temperature at which they formed (e.g., the abundance of 13C16O18O in carbonates, i.e., of the 13C-18O bond). This information has been exploited to form the foundation of clumped isotope geochemistry. Clumped isotope thermometers have been established for carbonate minerals like dolomite, calcite, and siderite, and for non-carbonate compounds like methane and oxygen. Depending on the strength of the cation-carbonate oxygen bonds (i.e., Mg-O, Ca-O), different carbonate minerals can form or preserve clumped isotopic signatures differently.
Measurements and reporting
Clumped isotopic analysis has certain aspects to it. These are:
Digestion, analysis and acid fractionation correction
Clumped isotopic analysis is usually done by gas source mass spectrometry, where the CO2 liberated from magnesite by phosphoric acid digestion is fed into an isotope ratio mass spectrometer. One needs to ensure that the liberation of CO2 from magnesite is complete. Digesting magnesite is difficult: it takes a long time, and different labs report different digestion times and temperatures (from 12 hours at 100 °C to 1 hour at 90 °C in phosphoric acid). Because digestion occurs at such high temperatures, some of the ¹³C–¹⁸O bonds in the liberated CO2 are broken (reducing the abundance of 'clumped' CO2) during phosphoric acid digestion of carbonates. To account for this analytical artifact, a correction called the 'acid fractionation correction' is applied to the magnesite clumped isotope value obtained at the digestion temperature.
Since the CO2 gas liberated from the carbonate mineral during acid digestion leaves one oxygen atom behind, a fractionation occurs, and the isotopic composition of the analyzed CO2 gas must be corrected for it. For magnesite, the most reliable fractionation factor (α) equation is given as:
10³ ln(α) = [(6.845 ± 0.475) × 10⁵ / T²] + (4.22 ± 0.08); T in K
Different researchers have also used other fractionation factors like dolomite fractionation factor.
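As a worked illustration of the fractionation factor equation above, the sketch below evaluates 10³ ln(α) at two temperatures using the central values of the coefficients. It is purely illustrative arithmetic, not an endorsement of any particular calibration:

```javascript
// Evaluate 10^3 ln(alpha) = 6.845e5 / T^2 + 4.22, with T in kelvin,
// using the central estimates from the equation above.
function thousandLnAlpha(tKelvin) {
  return 6.845e5 / (tKelvin * tKelvin) + 4.22;
}

// Fractionation is large at low temperature and small at high temperature,
// which is why precipitation temperature can be recovered from isotope ratios.
const low = thousandLnAlpha(298.15);   // 25 °C, ~11.92
const high = thousandLnAlpha(573.15);  // 300 °C, ~6.30
console.log(low.toFixed(2), high.toFixed(2));
```

The strong 1/T² dependence is what makes the magnesite-fluid pair usable as a thermometer.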
Standards
While measuring samples of unknown composition, standard materials must also be measured (see Reference materials for stable isotope analysis). Internal standards and reference materials allow routine monitoring of the analytical session. The standard materials are mainly calcite and marble.
Δ47 – Temperature calibration
To convert clumped isotope data into temperature, a calibration curve is required that expresses the temperature dependence of clumped isotope composition. No mineral-specific calibration exists for magnesite. Experimental data in which the mineral precipitation temperature and the clumped-isotope-derived temperature do not match point to the need for such a calibration. The mismatch arises because bonding in magnesite differs from that in calcite/dolomite and/or because acid digestion is conducted at higher temperature.
Magnesite-water and CO2-magnesite isotope fractionation factors
Using the clumped-isotope-derived temperature, the C and O isotopic composition of the parental fluid can be calculated from known magnesite-fluid isotope fractionation factors, since fractionation is temperature dependent. The magnesite-fluid O and C isotope fractionation factors reported in the literature do not agree with each other, and the fractionation behaviors have not been substantiated by experimental observation.
Factors controlling isotopic structure in magnesite
Conversion from hydrous Mg-carbonates to magnesite
At low temperature, hydrous Mg-carbonates (hydromagnesite, nesquehonite, etc.) form rather than magnesite. These phases can be converted into magnesite by changing the temperature, through mineral dissolution-precipitation or dehydration. The isotope effects associated with this conversion can control the isotopic composition of the precipitated magnesite.
Disequilibrium
Disequilibrium processes such as degassing and rapid CO2 uptake modify the clumped isotopic composition of carbonate minerals, especially at low temperatures. They variably enrich or deplete the system in the heavy isotopes of C and O; since clumped isotope abundance depends on the abundances of the C and O isotopes, it is modified as well. Another very prominent effect is that of the pH of the precipitating fluid: as pH changes, the dissolved inorganic carbon (DIC) pool is affected and the isotopic composition of the precipitating carbonate changes.
Mineral structure and later thermal effects
Crystalline and cryptocrystalline magnesites have very different mineral structures. While crystalline magnesite has a well-developed crystal structure, cryptocrystalline magnesite is amorphous, mostly an aggregate of fine grains. Since clumped isotopic composition depends on specific bonding, this difference in structure very likely affects the way clumped isotopic signatures are recorded. As a result, the pristine signatures of the two forms may be modified differently by later thermal events such as diagenesis or burial heating.
Information on formation from isotopic structure
Clumped isotopes have been used to interpret the conditions of magnesite formation and the isotopic composition of the precipitating fluid. Within ultramafic complexes, magnesite is found in veins and stockworks in cryptocrystalline form, as well as within carbonated peridotite units in crystalline form. The cryptocrystalline forms are mostly variably weathered and yield low formation temperatures. Coarse magnesites, on the other hand, yield very high temperatures, indicating a hydrothermal origin. It is speculated that the coarse, high-temperature magnesites formed from mantle-derived fluids, whereas the cryptocrystalline ones were precipitated by circulating meteoric water, took up carbon from the dissolved inorganic carbon pool and soil carbon, and were affected by disequilibrium isotope effects.
Magnesites forming in lakes and playa settings are generally enriched in the heavy isotopes of C and O because of evaporation and CO2 degassing, which is reflected in very low clumped-isotope-derived temperatures. These magnesites are affected by the pH effect, biological activity, and the kinetic isotope effect associated with degassing. Magnesite forms as surface moulds in such conditions but more commonly occurs as hydrous Mg-carbonates, since their precipitation is kinetically favored. Most of the time, these derive their carbon from DIC or from nearby ultramafic complexes (e.g., Atlin Playa, British Columbia, Canada).
Magnesites in metamorphic rocks, on the other hand, indicate very high formation temperatures. The isotopic composition of the parental fluid is also heavy, as is typical of metamorphic fluids. This has been verified by fluid-inclusion-derived temperatures as well as by traditional O isotope thermometry on co-precipitating quartz-magnesite pairs.
Often, magnesite records a lower clumped isotope temperature than associated dolomite and calcite. The reason may be that calcite and dolomite form earlier, at higher temperature (from mantle-like fluids), which raises the Mg/Ca ratio of the fluid enough to precipitate magnesite. Over time the fluid cools and evolves by mixing with other fluids, so by the time magnesite forms, it records a lower temperature. Thus the presence of associated carbonates exerts a control on magnesite's isotopic composition.
The origin of Martian carbonates can be deconvolved with the application of clumped isotopes: the source of the CO2 and the climatic-hydrologic conditions on Mars can be assessed from these rocks. A recent study implementing clumped isotope thermometry has shown that the carbonates in ALH84001 formed at low temperature under evaporative conditions from subsurface water, with the CO2 derived from the Martian atmosphere.
Occupational safety and health
People can be exposed to magnesite in the workplace by inhaling it, skin contact, and eye contact.
United States
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for magnesite exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
References
Smithsonian Rock and Gem
Magnesium minerals
Carbonate minerals
Calcite group
Trigonal minerals
Minerals in space group 167
Luminescent minerals
Evaporite
Web development

Web development is the work involved in developing a website for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
Among Web professionals, "Web development" usually refers to the main non-design aspects of building Web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, Web development teams can consist of hundreds of people (Web developers) and follow standard methods like Agile methodologies while developing Web sites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of Web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for the behavior and visuals that run in the user's browser, while back-end developers deal with the servers. Since the commercialization of the Web, the industry has boomed and the Web has become one of the most widely used technologies ever.
Evolution of the World Wide Web and web development
Origin/ Web 1.0
Tim Berners-Lee created the World Wide Web in 1989 at CERN.
The primary goal in the development of the Web was to fulfill the automated information-sharing needs of academics affiliated with institutions and various global organizations. Consequently, HTML was developed in 1993.
Web 1.0 is described as the first paradigm wherein users could only view material and provide a small amount of information. Core protocols of web 1.0 were HTTP, HTML and URI.
Web 2.0
Web 2.0, a term popularised by Dale Dougherty, then vice president of O'Reilly, during a 2004 conference held with MediaLive International, marks a shift in internet usage, emphasizing interactivity.
Web 2.0 introduced increased user engagement and communication. It evolved from the static, read-only nature of Web 1.0 and became an integrated network for engagement and communication. It is often referred to as a user-focused, read-write online network.
In the realm of Web 2.0 environments, users now have access to a platform that encourages sharing activities such as creating music, files, images, and movies. The architecture of Web 2.0 is often considered the "backbone of the internet," using standardized XML (Extensible Markup Language) tags to authorize information flow from independent platforms and online databases.
Web 3.0
Web 3.0, considered the third and current version of the web, was introduced in 2014. The concept envisions a complete redesign of the web. Key features include the integration of metadata, precise information delivery, and improved user experiences based on preferences, history, and interests.
Web 3.0 aims to turn the web into a sizable, organized database, providing more functionality than traditional search engines. Users can customize navigation based on their preferences, and the core ideas involve identifying data sources, connecting them for efficiency, and creating user profiles.
This version is sometimes also known as Semantic Web.
Evolution of web development technologies
The journey of web development technologies began with simple HTML pages in the early days of the internet. Over time, advancements led to the incorporation of CSS for styling and JavaScript for interactivity. This evolution transformed static websites into dynamic and responsive platforms, setting the stage for the complex and feature-rich web applications we have today.
Static HTML Pages (1990s)
Introduction of CSS (late 1990s)
JavaScript and Dynamic HTML (1990s - early 2000s)
AJAX (1998)
Rise of Content management systems (CMS) (mid-2000s)
Mobile web (late 2000s - 2010s)
Single-page applications (SPAs) and front-end frameworks (2010s)
Server-side JavaScript (2010s)
Microservices and API-driven development (2010s - present)
Progressive web apps (PWAs) (2010s - present)
JAMstack Architecture (2010s - present)
WebAssembly (Wasm) (2010s - present)
Serverless computing (2010s - present)
AI and machine learning integration (2010s - present)
Web development in future will be driven by advances in browser technology, Web internet infrastructure, protocol standards, software engineering methods, and application trends.
Web development life cycle
The web development life cycle is a method that outlines the stages involved in building websites and web applications. It provides a structured approach, ensuring optimal results throughout the development process.
A typical Web Development process can be divided into 7 steps.
Analysis
Debra Howcraft and John Carroll proposed a methodology in which the web development process is divided into sequential steps. They discussed different aspects of the analysis phase.
Phase one involves crafting a web strategy and analyzing how a website can effectively achieve its goals. Keil et al.'s research identifies the primary reasons for software project failures as a lack of top management commitment and misunderstandings of system requirements. To mitigate these risks, Phase One establishes strategic goals and objectives, designing a system to fulfill them. The decision to establish a web presence should ideally align with the organization's corporate information strategy.
The analysis phase can be divided into 3 steps:
Development of a web strategy
Defining objectives
Objective analysis
During this phase, the previously outlined objectives and available resources undergo analysis to determine their feasibility. This analysis is divided into six tasks, as follows:
Technology analysis: Identification of all necessary technological components and tools for constructing, hosting, and supporting the site.
Information analysis: Identification of user-required information, whether static (web page) or dynamic (pulled "live" from a database server).
Skills analysis: Identification of the diverse skill sets necessary to complete the project.
User analysis: Identification of all intended users of the site, a more intricate process due to the varied range of users and technologies they may use.
Cost analysis: Estimation of the development cost for the site or an evaluation of what is achievable within a predefined budget.
Risk analysis: Examination of any major risks associated with site development.
Following this analysis, a more refined set of objectives is documented. Objectives that cannot be presently fulfilled are recorded in a Wish List, constituting part of the Objectives Document. This documentation becomes integral to the iterative process during the subsequent cycle of the methodology.
Planning: sitemap and wireframe
It is crucial for web developers to be engaged in formulating a plan and determining the optimal architecture and selecting the frameworks. Additionally, developers/consultants play a role in elucidating the total cost of ownership associated with supporting a website, which may surpass the initial development expenses.
Key aspects in this step are:
Sitemap creation
Wireframe creation
Tech stack
Design and layout
Following the analysis phase, the development process moves on to the design phase, which is guided by the objectives document. Recognizing the incremental growth of websites and the potential lack of good design architecture, the methodology includes iteration to account for changes and additions over the life of the site. The design phase, divided into information design and graphic design, results in a detailed Design Document that describes the structure of the website, database data structures, and CGI scripts.
The following step, design testing, focuses on early, low-cost testing to identify inconsistencies or flaws in the design. This entails comparing the website's design to the goals and objectives outlined in the first three steps. Phases One and Two involve an iterative loop in which objectives in the Objectives Document are revisited to ensure alignment with the design. Any objectives that are removed are added to the Wish List for future consideration.
Key aspects in this step are:
Page layouts
Review
Approval
Content creation
No matter how visually appealing a website is, good communication with clients is critical. The primary purpose of content production is to create a communication channel through the user interface by delivering relevant information about your firm in an engaging and easily understandable manner. This includes:
Developing appealing calls to action
Making creative headlines
Content formatting for readability
Carrying out line editing
Text updating throughout the site development process.
The stage of content production is critical in establishing the branding and marketing of your website or web application. It serves as a platform for defining the purpose and goals of your online presence through compelling and convincing content.
Development
During this critical stage, the website is built while keeping its fundamental goal in mind, paying close attention to all graphic components to assure the establishment of a completely working site.
The procedure begins with the development of the main page, which is followed by the production of interior pages. The site's navigational structure is being refined in particular.
During this development phase, key functionality such as the Content Management System, interactive contact forms, and shopping carts are activated.
The coding process includes creating all of the site's software and installing it on the appropriate Web servers. This can range from simple things like posting to a Web server to more complex tasks like establishing database connections.
Testing, review and launch
In any web project, the testing phase is incredibly intricate and difficult. Because web apps are frequently designed for a diverse and often unknown user base running in a range of technological environments, their complexity exceeds that of traditional Information Systems (IS). To ensure maximum reach and efficacy, the website must be tested in a variety of contexts and technologies. The website moves to the delivery stage after gaining final approval from the designer. To ensure its preparation for launch, the quality assurance team performs rigorous testing for functionality, compatibility, and performance.
Additional testing is carried out, including integration, stress, scalability, load, resolution, and cross-browser compatibility. When the approval is given, the website is pushed to the server via FTP, completing the development process.
Key aspects in this step are:
Test for broken links
Use code validators
Check browser
Maintenance and updating
The web development process goes beyond deployment to include a variety of post-deployment tasks.
Websites, for example, are frequently under ongoing maintenance, with new items uploaded daily. Maintenance costs increase immensely as the site grows. The accuracy of content on a website is critical, demanding continuous monitoring to verify that both information and links, particularly external links, are kept up to date. Adjustments are made in response to user feedback, and regular support and maintenance actions are carried out to maintain the website's long-term effectiveness.
Traditional development methodologies
Debra Howcraft and John Carroll discussed a few traditional web development methodologies in their research paper:
Waterfall: The waterfall methodology comprises a sequence of cascading steps, addressing the development process with minimal iteration between each stage. However, a significant drawback when applying the waterfall methodology to the development of websites (as well as information systems) lies in its rigid structure, lacking iteration beyond adjacent stages. Any methodology used for the development of Web-sites must be flexible enough to cope with change.
Structured Systems Analysis and Design Method (SSADM): Structured Systems Analysis and Design Method (SSADM) is a widely used methodology for systems analysis and design in information systems and software engineering. Although it does not cover the entire lifecycle of a development project, it places a strong emphasis on the stages of analysis and design in the hopes of minimizing later-stage, expensive errors and omissions.
Prototyping: Prototyping is a software development approach in which a preliminary version of a system or application is built to visualize and test its key functionalities. The prototype serves as a tangible representation of the final product, allowing stakeholders, including users and developers, to interact with it and provide feedback.
Rapid Application Development: Rapid Application Development (RAD) is a software development methodology that prioritizes speed and flexibility in the development process. It is designed to produce high-quality systems quickly, primarily through the use of iterative prototyping and the involvement of end-users. RAD aims to reduce the time it takes to develop a system and increase the adaptability to changing requirements.
Incremental Prototyping: Incremental prototyping is a software development approach that combines the principles of prototyping and incremental development. In this methodology, the development process is divided into small increments, with each increment building upon the functionality of the previous one. At the same time, prototypes are created and refined in each increment to better meet user requirements and expectations.
Key technologies in web development
Developing a fundamental knowledge of client-side and server-side dynamics is crucial.
The goal of front-end development is to create a website's user interface and visual components that users may interact with directly. On the other hand, back-end development works with databases, server-side logic, and application functionality. Building reliable and user-friendly online applications requires a comprehensive approach, which is ensured by collaboration between front-end and back-end engineers.
Front-end development
Front-end development is the process of designing and implementing the user interface (UI) and user experience (UX) of a web application. It involves creating visually appealing and interactive elements that users interact with directly. The primary technologies and concepts associated with front-end development include:
Technologies
The 3 core technologies for front-end development are:
HTML (Hypertext Markup Language): HTML provides the structure and organization of content on a webpage.
CSS (Cascading Style Sheet): Responsible for styling and layout, CSS enhances the presentation of HTML elements, making the application visually appealing.
JavaScript: Used to add interactivity to web pages. Advances in JavaScript have given rise to many popular front-end frameworks such as React, Angular, and Vue.js.
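To illustrate how the three core technologies fit together, the sketch below assembles a minimal page as a string: HTML for structure, a `<style>` block for presentation, and a `<script>` block for behavior. The page content and function name are invented for the example:

```javascript
// Illustrative only: build a tiny page combining the three front-end layers.
// HTML gives structure, CSS gives presentation, JavaScript gives behavior.
function buildPage(title, greeting) {
  return `<!DOCTYPE html>
<html>
<head>
  <title>${title}</title>
  <style>
    /* CSS: presentation */
    h1 { color: steelblue; font-family: sans-serif; }
  </style>
</head>
<body>
  <!-- HTML: structure -->
  <h1 id="msg">${greeting}</h1>
  <script>
    // JavaScript: behavior — react to user interaction
    document.getElementById('msg').addEventListener('click', () => {
      document.getElementById('msg').textContent = 'Clicked!';
    });
  </script>
</body>
</html>`;
}

const page = buildPage('Demo', 'Hello, web');
console.log(page.includes('<style>'), page.includes('<script>')); // true true
```

In practice the three layers usually live in separate files served together, but the division of responsibility is the same.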
User interface design
User interface and user experience design focus on creating interfaces that are intuitive, accessible, and enjoyable for users. This involves understanding user behavior, conducting usability studies, wireframing, prototyping, and applying design principles to improve the overall satisfaction of users interacting with a website or application. Some popular tools used for UI wireframing are:
Sketch for detailed, vector-based design
Moqups for beginners
Figma for a free wireframe app
UXPin for handing off design documentation to developers
MockFlow for project organization
Justinmind for interactive wireframes
Uizard for AI-assisted wireframing
Another key aspect to keep in mind while designing is web accessibility: ensuring that digital content is available and usable by people of all abilities. This involves adhering to standards like the Web Content Accessibility Guidelines (WCAG), implementing features like alternative text for images, and designing with diverse user needs in mind, including those of people with disabilities.
Responsive design
It is important to ensure that web applications are accessible and visually appealing across various devices and screen sizes. Responsive design uses CSS media queries and flexible layouts to adapt to different viewing environments.
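In production, responsive behavior is expressed declaratively with CSS media queries; the decision logic they encode can be sketched in plain JavaScript as a breakpoint lookup. The breakpoint widths below are common conventions, not part of any standard:

```javascript
// Breakpoint selection, mimicking what media queries such as
// @media (max-width: 600px) { ... } decide declaratively in CSS.
// The widths are illustrative conventions, not a specification.
function layoutFor(viewportWidth) {
  if (viewportWidth < 600) return 'single-column';   // phones
  if (viewportWidth < 1024) return 'two-column';     // tablets
  return 'three-column';                             // desktops
}

console.log(layoutFor(375));  // "single-column"
console.log(layoutFor(1440)); // "three-column"
```

Real responsive layouts also rely on flexible grids and fluid images, so content reflows continuously rather than only at fixed breakpoints.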
Front-end frameworks
A framework is a high-level solution for reusing software components, a step beyond simple library-based reuse, that allows developers to share the common functions and generic logic of a domain's applications.
Frameworks and libraries are essential tools that expedite the development process. These tools enhance developer productivity and contribute to the maintainability of large-scale applications. Some popular front-end frameworks are:
React: A JavaScript library for building user interfaces, maintained by Facebook. It allows developers to create reusable UI components.
Angular: A TypeScript-based front-end framework developed and maintained by Google. It provides a comprehensive solution for building dynamic single-page applications.
Vue.js: A progressive JavaScript framework that is approachable yet powerful, making it easy to integrate with other libraries or existing projects.
State management
Managing the state of a web application to ensure data consistency and responsiveness. State management libraries like Redux (for React) or Vuex (for Vue.js) play a crucial role in complex applications.
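The pattern that libraries like Redux implement can be sketched in a few lines: a single state object that changes only when actions are dispatched through a reducer. This is a simplified illustration of the idea, not Redux's actual API surface, and the counter reducer is a hypothetical example:

```javascript
// Minimal Redux-style store: state changes only via dispatched actions,
// so every part of the UI reads one consistent source of truth.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // reducer computes the next state
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// Hypothetical example reducer: a counter.
const counter = (state, action) =>
  action.type === 'increment' ? { count: state.count + 1 } : state;

const store = createStore(counter, { count: 0 });
store.dispatch({ type: 'increment' });
console.log(store.getState().count); // 1
```

Centralizing updates this way is what makes data flow predictable in large single-page applications.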
Back-end development
Back-end development involves building the server-side logic and database components of a web application. It is responsible for processing user requests, managing data, and ensuring the overall functionality of the application. Key aspects of back-end development include:
Server/ cloud instance
An essential component of a web application's architecture is a server or cloud instance. A cloud instance is a virtual server that is created, delivered, and hosted on a public or private cloud and accessed via the Internet. It behaves like a physical server, but it can be moved seamlessly between physical machines, and several instances can run on a single server. It is therefore highly dynamic, scalable, and economical.
Databases
Database management is crucial for storing, retrieving, and managing data in web applications. Various database systems, such as MySQL, PostgreSQL, and MongoDB, play distinct roles in organizing and structuring data. Effective database management ensures the responsiveness and efficiency of data-driven web applications. There are 3 types of databases:
Relational databases: Structured databases that use tables to organize and relate data. Common Examples include - MySQL, PostgreSQL and many more.
NoSQL databases: NoSQL databases are designed to handle unstructured or semi-structured data and can be more flexible than relational databases. They come in various types, such as document-oriented, key-value stores, column-family stores, and graph databases. Examples: MongoDB, Cassandra, ScyllaDB, CouchDB, Redis.
Document stores: Document stores store data in a semi-structured format, typically using JSON or XML documents. Each document can have a different structure, providing flexibility. Examples: MongoDB, CouchDB.
Key-value stores: Key-value stores store data as pairs of keys and values. They are simple and efficient for certain types of operations, like caching. Examples: Redis, DynamoDB.
Column-family stores: Column-family stores organize data into columns instead of rows, making them suitable for large-scale distributed systems and analytical workloads. Examples: Apache Cassandra, HBase.
Graph databases: Graph databases are designed to represent and query data in the form of graphs. They are effective for handling relationships and network-type data. Examples: Neo4j, Amazon Neptune.
In-memory databases: In-memory databases store data in the system's main memory (RAM) rather than on disk. This allows for faster data access and retrieval. Examples: Redis, Memcached.
Time-series databases: Time-series databases are optimized for handling time-stamped data, making them suitable for applications that involve tracking changes over time. Examples: InfluxDB, OpenTSDB.
NewSQL databases: NewSQL databases aim to provide the scalability of NoSQL databases while maintaining the ACID properties (Atomicity, Consistency, Isolation, Durability) of traditional relational databases. Examples: Google Spanner, CockroachDB.
Object-oriented databases: Object-oriented databases store data in the form of objects, which can include both data and methods. They are designed to work seamlessly with object-oriented programming languages. Examples: db4o, ObjectDB.
The choice of a database depends on various factors such as the nature of the data, scalability requirements, performance considerations, and the specific use case of the application being developed. Each type of database has its strengths and weaknesses, and selecting the right one involves considering the specific needs of the project.
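As a concrete illustration of the key-value model listed above, the sketch below implements a tiny in-memory store with the set/get/expiry operations that systems like Redis or Memcached provide at scale. It is illustrative only; real key-value stores add persistence, eviction policies, and networking:

```javascript
// Toy in-memory key-value store with optional time-to-live (TTL),
// illustrating the data model behind caches like Redis or Memcached.
class KeyValueStore {
  constructor() {
    this.data = new Map(); // key -> { value, expiresAt }
  }
  set(key, value, ttlMs) {
    const expiresAt = ttlMs ? Date.now() + ttlMs : Infinity;
    this.data.set(key, { value, expiresAt });
  }
  get(key) {
    const entry = this.data.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // lazy expiry on read
      this.data.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const cache = new KeyValueStore();
cache.set('session:42', { user: 'alice' }, 60000); // hypothetical session data
console.log(cache.get('session:42').user); // "alice"
```

The simplicity of the key-value interface is exactly why such stores excel at caching, as noted in the list above.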
Application programming interface (APIs)
Application Programming Interfaces are sets of rules and protocols that allow different software applications to communicate with each other. APIs define the methods and data formats that applications can use to request and exchange information.
RESTful APIs and GraphQL are common approaches for defining and interacting with web services.
Types of APIs
Web APIs: These are APIs that are accessible over the internet using standard web protocols such as HTTP. RESTful APIs are a common type of web API.
Library APIs: These APIs provide pre-built functions and procedures that developers can use within their code.
Operating System APIs: These APIs allow applications to interact with the underlying operating system, accessing features like file systems, hardware, and system services.
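The request/response contract of a RESTful web API can be sketched without a framework: a route maps an HTTP method and path to a handler that returns a status code and a JSON-style body. The `/users/:id` resource and its data are hypothetical; a real service would sit behind an HTTP server or a framework such as Express:

```javascript
// Minimal REST-style dispatch: (method, path) -> { status, body }.
// The routing logic is shown in isolation, without a network layer.
const users = { 1: { id: 1, name: 'Ada' } }; // hypothetical data store

function getUser(params) {
  const user = users[params.id];
  return user ? { status: 200, body: user }
              : { status: 404, body: { error: 'not found' } };
}

function dispatch(method, path) {
  // Match a single ":id" segment; real routers generalize this.
  const m = path.match(/^\/users\/(\d+)$/);
  if (method === 'GET' && m) {
    return getUser({ id: Number(m[1]) });
  }
  return { status: 404, body: { error: 'no route' } };
}

console.log(dispatch('GET', '/users/1').status); // 200
console.log(dispatch('GET', '/users/9').status); // 404
```

The same resource-plus-verbs structure is what makes REST APIs predictable for clients: the URL names the resource, and the HTTP method names the operation.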
Server-side languages
Programming languages aimed at server execution, as opposed to client browser execution, are known as server-side languages. These programming languages are used in web development to perform operations including data processing, database interaction, and the creation of dynamic content that is delivered to the client's browser. A key element of server-side programming is server-side scripting, which allows the server to react to client requests in real time.
Some popular server-side languages are:
PHP: PHP is a widely used, open-source server-side scripting language. It is embedded in HTML code and is particularly well-suited for web development.
Python: Python is a versatile, high-level programming language used for a variety of purposes, including server-side web development. Frameworks like Django and Flask make it easy to build web applications in Python.
Ruby: Ruby is an object-oriented programming language, and it is commonly used for web development. Ruby on Rails is a popular web framework that simplifies the process of building web applications.
Java: Java is a general-purpose, object-oriented programming language. Java-based frameworks like Spring are commonly used for building enterprise-level web applications.
Node.js (JavaScript): While JavaScript is traditionally a client-side language, Node.js enables developers to run JavaScript on the server side. It is known for its event-driven, non-blocking I/O model, making it suitable for building scalable and high-performance applications.
C# (C Sharp): C# is a programming language developed by Microsoft and is commonly used in conjunction with the .NET framework for building web applications on the Microsoft stack.
ASP.NET: ASP.NET is a web framework developed by Microsoft, and it supports languages like C# and VB.NET. It simplifies the process of building dynamic web applications.
Go (Golang): Go is a statically typed language developed by Google. It is known for its simplicity and efficiency and is increasingly being used for building scalable and high-performance web applications.
Perl: Perl is a versatile scripting language often used for web development. It is known for its powerful text-processing capabilities.
Swift: Developed by Apple, Swift is used for server-side development in addition to iOS and macOS app development.
Lua: Lua is used for some embedded web servers, e.g. the configuration pages on routers running OpenWrt.
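As a minimal illustration of what any of these server-side languages does, the sketch below uses Python's standard-library WSGI interface (no framework) to generate a dynamic response per request; the handler and content are illustrative, not a recommended production setup.

```python
from datetime import datetime

def app(environ, start_response):
    # A WSGI application: the server calls this once per request, so the
    # response body is computed dynamically each time rather than served
    # as a static file.
    body = f"<h1>Hello</h1><p>Generated at {datetime.now():%H:%M:%S}</p>".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it locally (blocks until interrupted):
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

The same role is filled by PHP scripts, Express handlers in Node.js, or controllers in frameworks such as Django, Rails, and Spring.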
Security measures
Security measures must be implemented to protect against common vulnerabilities, including SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Authentication and authorization mechanisms are crucial for securing data and controlling user access.
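For instance, SQL injection is conventionally prevented by using parameterized queries instead of string concatenation; a small sketch using Python's built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(conn, name):
    # The ? placeholder lets the driver bind the value safely, so input like
    # "' OR '1'='1" is treated as data, never as executable SQL.
    return conn.execute("SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user(conn, "alice"))          # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))    # [] -- the injection attempt matches nothing
```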
Testing, debugging and deployment
Thorough testing and debugging processes are essential for identifying and resolving issues in a web application. Testing may include unit testing, integration testing, and user acceptance testing. Debugging involves pinpointing and fixing errors in the code, ensuring the reliability and stability of the application.
Unit Testing: Testing individual components or functions to verify that they work as expected.
Integration Testing: Testing the interactions between different components or modules to ensure they function correctly together.
Continuous Integration and Deployment (CI/CD): CI/CD pipelines automate testing, deployment, and delivery processes, allowing for faster and more reliable releases.
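A unit test in this sense can be as small as the following sketch, which uses Python's built-in unittest module to check a hypothetical slugify helper:

```python
import unittest

def slugify(title):
    """Turn an article title into a URL slug (lowercase, hyphen-separated)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        self.assertEqual(slugify("  Web   Development "), "web-development")

# Run the suite programmatically (equivalent to `python -m unittest` on the CLI).
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest))
```

In a CI/CD pipeline, suites like this run automatically on every commit, and a failing test blocks the deployment step.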
Full-stack development
Full-stack development refers to the practice of designing, building, and maintaining the entire software stack of a web application. This includes both the frontend (client-side) and backend (server-side) components, as well as the database and any other necessary infrastructure. A full-stack developer is someone who has expertise in working with both the frontend and backend technologies, allowing them to handle all aspects of web application development.
MEAN (MongoDB, Express.js, Angular, Node.js) and MERN (MongoDB, Express.js, React, Node.js) are popular full-stack development stacks that streamline the development process by providing a cohesive set of technologies.
Web development tools and environments
Efficient web development relies on a set of tools and environments that streamline the coding and collaboration processes:
Integrated development environments (IDEs): Tools like Visual Studio Code, Atom, and Sublime Text provide features such as code highlighting, autocompletion, and version control integration, enhancing the development experience.
Version control: Git is a widely used version control system that allows developers to track changes, collaborate seamlessly, and roll back to previous versions if needed.
Collaboration tools: Communication platforms like Slack, project management tools such as Jira, and collaboration platforms like GitHub facilitate effective teamwork and project management.
Security practices in web development
Security is paramount in web development to protect against cyber threats and ensure the confidentiality and integrity of user data. Best practices include encryption, secure coding practices, regular security audits, and staying informed about the latest security vulnerabilities and patches.
Common threats: Developers must be aware of common security threats, including SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).
Secure coding practices: Adhering to secure coding practices involves input validation, proper data sanitization, and ensuring that sensitive information is stored and transmitted securely.
Authentication and authorization: Implementing robust authentication mechanisms, such as OAuth or JSON Web Tokens (JWT), ensures that only authorized users can access specific resources within the application.
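One of the secure coding practices listed above, escaping untrusted input before rendering it, can be sketched with Python's standard library (the wrapper function is hypothetical):

```python
from html import escape

def render_comment(user_input):
    # Escaping turns markup characters into HTML entities, so the browser
    # treats untrusted input as text rather than executable HTML/JavaScript.
    return f"<p class='comment'>{escape(user_input)}</p>"

malicious = "<script>alert('xss')</script>"
print(render_comment(malicious))
# <p class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

Template engines such as Jinja2 or React's JSX apply this kind of escaping by default, which is why bypassing it manually is a common source of XSS bugs.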
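The idea behind signed tokens such as JWTs, that the server can verify a token has not been tampered with, can be sketched with an HMAC over the payload; this is a simplified illustration (no header, no expiry, hypothetical secret), not a production JWT implementation:

```python
import base64, hashlib, hmac, json

SECRET = b"server-side-secret"  # hypothetical key; never hard-code one in production

def sign_token(payload: dict) -> str:
    # Issue a tamper-evident token: base64(payload) plus an HMAC signature,
    # the same structure (simplified) that underlies JWTs.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was forged or tampered with
    return json.loads(base64.urlsafe_b64decode(body))

token = sign_token({"user": "alice", "role": "admin"})
print(verify_token(token))             # {'user': 'alice', 'role': 'admin'}
print(verify_token(token[:-1] + "x"))  # None (tampered signature rejected)
```

Real applications should use a vetted library (e.g. PyJWT) rather than hand-rolled signing, but the verification principle is the same.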
Agile methodology in web development
Agile manifesto and principles
Agile is a set of principles and values for software development that prioritize flexibility, collaboration, and customer satisfaction. The four key values are:
Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.
Agile concepts in web development
Iterative and incremental development: Building and refining a web application through small, repeatable cycles, enhancing features incrementally with each iteration.
Scrum and Kanban: Employing agile frameworks like Scrum for structured sprints or Kanban for continuous flow to manage tasks and enhance team efficiency.
Cross-functional teams: Forming collaborative teams with diverse skill sets, ensuring all necessary expertise is present for comprehensive web development.
Customer collaboration: Engaging customers throughout the development process to gather feedback, validate requirements, and ensure the delivered product aligns with expectations.
Adaptability to change: Embracing changes in requirements or priorities even late in the development process to enhance the product's responsiveness to evolving needs.
User stories and backlog: Capturing functional requirements through user stories and maintaining a backlog of prioritized tasks to guide development efforts.
Continuous integration and continuous delivery (CI/CD): Implementing automated processes to continuously integrate code changes and deliver updated versions, ensuring a streamlined and efficient development pipeline.
See also
Outline of web design and web development
Web design
Web development tools
Web application development
Web developer
References | Web development | [
"Engineering"
] | 5,633 | [
"Software engineering",
"Web development"
] |
611,768 | https://en.wikipedia.org/wiki/International%20nonproprietary%20name | An International Nonproprietary Name (INN) is an official generic and nonproprietary name given to a pharmaceutical substance or an active ingredient, encompassing compounds, peptides and low-molecular-weight proteins (e.g., insulin, hormones, cytokines), as well as complex biological products, such as those used for gene therapy. INNs are intended to make communication more precise by providing a unique standard name for each active ingredient, to avoid prescribing errors. The INN system was initiated by the World Health Organization (WHO) in 1953.
Having unambiguous standard names for each pharmaceutical substance (standardization of drug nomenclature) is important because a drug may be sold under many different brand names, or a branded medication may contain more than one drug. For example, the branded medications Celexa, Celapram and Citrol all contain the same active ingredient whose INN is citalopram. The antibacterial medication known as co-trimoxazole as well as those under the brand names Bactrim and Septran all contain two active ingredients easily recognisable by their INN: trimethoprim and sulfamethoxazole.
The WHO publishes INNs in English, Latin, French, Russian, Spanish, Arabic, and Chinese, and a drug's INNs are often cognate across most or all of the languages, with minor spelling or pronunciation differences, for example: paracetamol (en), paracetamolum (la), paracétamol (fr) and парацетамол (ru). An established INN is known as a recommended INN (rINN), while a name that is still being considered is called a proposed INN (pINN).
National nonproprietary names such as British Approved Names (BAN), Dénominations Communes Françaises (DCF), Japanese Adopted Names (JAN) and United States Adopted Names (USAN) are nowadays, with rare exceptions, identical to the INN.
INN stems
Each drug's INN is unique but may contain a stem that is shared with other drugs of the same class. In this context, a stem is a syllable (or syllables) created to evoke in the name the pharmacological mechanism of action or the chemical structure of the substance. Stems are mostly placed word-finally (suffixes), but in some cases word-initial stems (prefixes) are used. For example, the beta blocker drugs propranolol and atenolol share the stem -olol (as a suffix), and the benzodiazepine drugs lorazepam and diazepam share the stem -azepam (also a suffix). The list of stems in use is collected in a publication informally known as the Stem Book.
Some examples of stems are:
-anib for angiogenesis inhibitors (e.g. pazopanib)
-anserin for serotonin receptor antagonists, especially 5-HT2 antagonists (e.g. ritanserin and mianserin)
-arit for antiarthritic agents (e.g. lobenzarit)
-ase for enzymes (e.g. alteplase)
-azepam or -azolam for benzodiazepines (e.g. diazepam and alprazolam)
-caine for local anaesthetics (e.g. procaine)
cef- for cephalosporins (e.g. cefalexin)
-coxib for COX-2 inhibitors, a type of anti-inflammatory drugs (e.g. celecoxib)
-grel- or -grel for platelet aggregation inhibitors (e.g. anagrelide, cangrelor, clopidogrel)
-mab for monoclonal antibodies (e.g. infliximab); see Nomenclature of monoclonal antibodies
-meran for messenger RNA products (e.g. tozinameran and elasomeran)
-nab- or nab- for cannabinoid receptor agonists (e.g. cannabidiol, dronabinol, nabilone)
-olol for beta blockers (e.g. atenolol)
-pril for angiotensin-converting enzyme (ACE) inhibitors (e.g. captopril)
-sartan for angiotensin II receptor antagonists (e.g. losartan)
-tinib for tyrosine kinase inhibitors (e.g. imatinib)
-vastatin for HMG-CoA reductase inhibitors, a group of cholesterol-lowering agents (e.g. atorvastatin and simvastatin)
-vir for antivirals (e.g. remdesivir and ritonavir)
-navir for antiretroviral protease inhibitors (e.g. darunavir)
-ciclovir for bicyclic heterocycle antivirals (e.g. aciclovir and famciclovir)
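Because stems are mostly word-final, a crude classifier can guess a drug's class from its INN suffix. The sketch below hard-codes a tiny hypothetical subset of the WHO stem list and is illustrative only; real INN analysis must consult the full Stem Book, since stems can collide or appear in unrelated names.

```python
# Hypothetical minimal subset of the WHO stem list, keyed by suffix stem.
STEM_CLASSES = {
    "-olol": "beta blocker",
    "-pril": "ACE inhibitor",
    "-sartan": "angiotensin II receptor antagonist",
    "-vastatin": "HMG-CoA reductase inhibitor",
    "-mab": "monoclonal antibody",
    "-tinib": "tyrosine kinase inhibitor",
}

def classify(inn: str) -> str:
    # Match word-final stems; longest suffix wins, so "atorvastatin"
    # matches -vastatin rather than any shorter overlapping stem.
    for stem in sorted(STEM_CLASSES, key=len, reverse=True):
        if inn.lower().endswith(stem.lstrip("-")):
            return STEM_CLASSES[stem]
    return "unknown stem"

print(classify("atenolol"))      # beta blocker
print(classify("losartan"))      # angiotensin II receptor antagonist
print(classify("atorvastatin"))  # HMG-CoA reductase inhibitor
```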
School of INN
The School of INN is a WHO International Nonproprietary Name Programme initiative launched in 2019, which aims to provide information to pharmacy, medical and health students, as well as health professionals and other stakeholders on how an INN is designed and constructed.
Users can take self-administered courses on several topics using this free and open source learning platform. For example, the course An Introduction to Drug Nomenclature and INN provides the user with a general overview of drug nomenclature and how INN are obtained and constructed. The course Learning Clinical Pharmacology (ATC classification, INN system) provides the student with the first steps to learn pharmacology using INN stems.
Registered students can take other courses provided by the School of INN, such as the Stem in a pill course, in which each topic or course contains information correlating INN and pharmacology for a given stem, including indications, mechanism of action, pharmacokinetics, contraindications, and drug interactions for the drugs sharing the stem.
There is also a "How to ..." section about INN Programme services and MedNet INN, which enables users to carry out searches in the INN database to retrieve information on INNs, their chemical information and ATC codes, amongst other things.
The School of INN has created pilot sites in collaboration with several universities around the globe: the University of the Western Cape (South Africa), the University of Eastern Piedmont (Italy), Université Grenoble Alpes (France), and Ramon Llull University and the University of Alcalá in Spain. These pilot sites are involved in disseminating the use of INN, teaching based on INN and related research activities.
Linguistics
Stems and roots
The term stem is not used consistently in linguistics. It has been defined as a form to which affixes (of any type) can be attached. Under a different and apparently more common view, this is the definition of a root, while a stem consists of the root plus optional derivational affixes, meaning that it is the part of a word to which inflectional affixes are added. INN stems employ the first definition, while under the more common alternative they would be described as roots.
Translingual communication
Pharmacology and pharmacotherapy (like health care generally) are universally relevant around the world, making translingual communication about them an important goal. An interlingual perspective is thus useful in drug nomenclature. The WHO issues INNs in English, Latin, French, Russian, Spanish, Arabic, and Chinese. A drug's INNs are often cognates across most or all of the languages, but they also allow small inflectional, diacritic, and transliterational differences that are usually transparent and trivial for nonspeakers (as is true of most international scientific vocabulary). For example, although ibuprofenum (la) has an inflectional difference from ibuprofen (en), and although ibuprofène (fr) has a diacritic difference, the differences are trivial; users can easily recognize the "same word". Although Ибупрофе́н (ru) and ibuprofen (en) have a transliteration difference, they sound similar, and for Russian speakers who can recognize Latin script or English speakers who can recognize Cyrillic script, they look similar; users can recognize the "same word". Thus, INNs make medicines bought anywhere in the world as easily identifiable as possible to people who do not speak that language. Notably, the "same word" principle allows health professionals and patients who do not speak the same language to communicate to some degree and to avoid potentially life-threatening confusions from drug interactions.
Spelling regularization
To facilitate the translation and pronunciation of INN, "f" should be used instead of "ph", "t" instead of "th", "e" instead of "ae" or "oe", and "i" instead of "y"; the use of the letters "h" and "k" should be avoided. Thus a predictable spelling system, approximating phonemic orthography, is used, as follows:
ae or oe is replaced by e (e.g. estradiol vs. oestradiol)
ph is replaced by f (e.g. amfetamine vs. amphetamine)
th is replaced by t (e.g. levmetamfetamine vs. levo-methamphetamine)
y is replaced by i (e.g. aciclovir vs. acyclovir)
h and k are avoided where possible
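These substitution rules can be applied mechanically. The naive sketch below implements them as ordered string replacements (the digraphs before the single letters); the real WHO process involves judgment and exceptions, so this is illustrative only.

```python
# Ordered substitution rules from the INN spelling guidance:
# digraphs first, then single-letter replacements.
RULES = [("ae", "e"), ("oe", "e"), ("ph", "f"), ("th", "t"), ("y", "i")]

def to_inn_spelling(name: str) -> str:
    # Apply each rule left-to-right across the whole name.
    for old, new in RULES:
        name = name.replace(old, new)
    return name

print(to_inn_spelling("oestradiol"))   # estradiol
print(to_inn_spelling("amphetamine"))  # amfetamine
print(to_inn_spelling("acyclovir"))    # aciclovir
```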
Names for radicals and groups (salts, esters, and so on)
Many drugs are supplied as salts, with a cation and an anion. The way the INN system handles these is explained by the WHO at its "Guidance on INN" webpage. For example, amfetamine and oxacillin are INNs, whereas various salts of these compounds – e.g., amfetamine sulfate and oxacillin sodium – are modified INNs (INNM).
Comparison of naming standards
Several countries had created their own nonproprietary naming system before the INN was created, and in many cases, the names created under the old systems continue to be used in those countries. As one example, in English the INN name for a common painkiller is paracetamol; the table below gives the alternative names for this in different systems:
Other naming systems not listed above include France's Dénomination Commune Française (DCF) and Italy's Denominazione Comune Italiana (DCIT).
See also
Generic drug
References
Further reading
External links
Pharmacology
Names
1953 establishments
World Health Organization
Pharmacological classification systems | International nonproprietary name | [
"Chemistry"
] | 2,243 | [
"Pharmacological classification systems",
"Pharmacology",
"Medicinal chemistry"
] |
611,954 | https://en.wikipedia.org/wiki/Rescue%20of%20Jews%20during%20the%20Holocaust | During World War II, some individuals and groups helped Jews and others escape the Holocaust conducted by Nazi Germany.
The support, or at least absence of active opposition, of the local population was essential to Jews attempting to hide but often lacking in Eastern Europe. Those in hiding depended on the assistance of non-Jews. Having money, social connections with non-Jews, a non-Jewish appearance, perfect command of the local language, determination, and luck played a major role in determining survival. Jews in hiding were hunted down with the assistance of local collaborators and rewards offered for their denunciation. The death penalty was sometimes enforced on people hiding them, especially in eastern Europe, including Poland. Rescuers' motivations varied on a spectrum from altruism to expecting sex or material gain; it was not uncommon for helpers to betray or murder Jews if their money ran out.
Jews were hidden or saved by non-Jews throughout Nazi-occupied Europe. The Catholic Church and the Vatican opposed the systematic murder of Jews, and in Italy the Mussolini government refused to deport Jews or to participate in their mass murder. Many diplomats were involved in efforts to help Jews escape, such as by providing documents that allowed safe transit.
Since 1953, Israel's Holocaust memorial, Yad Vashem, has recognized 26,973 people as Righteous Among the Nations. Yad Vashem's Holocaust Martyrs' and Heroes' Remembrance Authority, headed by an Israeli Supreme Court justice, recognizes rescuers of Jews as Righteous Among the Nations to honor non-Jews who risked their lives during the Holocaust to save Jews from extermination by Nazi Germany.
By country
Poland
Poland had a very large Jewish population, and, according to Norman Davies, more Jews were both killed and rescued in Poland than in any other nation, the rescue figure usually being put at between 100,000 and 150,000. The memorial at Bełżec extermination camp commemorates 600,000 murdered Jews and 1,500 Poles who tried to save Jews. 6,532 men and women have been recognized as rescuers by Yad Vashem in Israel, more than from any other country in the world, constituting the largest national contingent. Martin Gilbert wrote that "Poles who risked their own lives to save the Jews were indeed the exception. But they could be found throughout Poland, in every town and village."
Poland during the Holocaust of World War II was under total enemy control: initially, half of Poland was occupied by the Germans, as the General Government and Reichskommissariat; the other half by the Soviets, along with the territories of today's Belarus and Ukraine. The death penalty threatened individuals hiding Jews, as well as their families. The list of Polish citizens officially recognized as Righteous includes 700 names of those who lost their lives while trying to help their Jewish neighbors. There were also groups, such as the Polish Żegota organization, that took drastic and dangerous steps to rescue victims. Witold Pilecki, a member of the Armia Krajowa (the Polish Home Army), organized a resistance movement in Auschwitz from 1940, and Jan Karski tried to spread the word of the Holocaust.
When Home Army (AK) intelligence discovered the true fate of transports leaving the Jewish ghetto, the Council to Aid Jews – Rada Pomocy Żydom (codename Żegota) – was established in late 1942 in co-operation with church groups. The organization saved thousands. Emphasis was placed on protecting children, as it was nearly impossible to intervene directly against the heavily guarded transports. False papers were prepared, and children were distributed among safe houses and church networks. Two women founded the movement: the Catholic writer and activist Zofia Kossak-Szczucka and the socialist Wanda Filipowicz. Some of its members had been involved in Polish nationalist movements, which were themselves anti-Jewish but which became appalled by the barbarity of the Nazi mass murders. In an emotional protest prior to the foundation of the council, Kossak wrote that Hitler's race murders were a crime about which it was not possible to remain silent. While Polish Catholics might still feel Jews were "enemies of Poland", Kossak wrote that protest was required: "God requires this protest from us... It is required of a Catholic conscience... The blood of the innocent calls for vengeance to the heavens."
In the 1948–49 Żegota Case, the Stalin-backed regime established in Poland after the war secretly tried and imprisoned the leading survivors of Żegota as part of a campaign to eliminate and besmirch resistance heroes who might threaten the new regime.
Jews were aided also by diplomats outside Poland. The Ładoś Group was a group of Polish diplomats and Jewish activists who created in Switzerland a system of illegal production of Latin American passports aimed at saving European Jews from the Holocaust. About 10,000 Jews received such passports, of whom over 3,000 were saved. The group's efforts are documented in the Eiss Archive. Jews were also helped by Henryk Sławik in Hungary, who helped save over 30,000 Polish refugees, including 5,000 Polish Jews, by giving them false Polish passports with a Catholic designation, and by Tadeusz Romer in Japan.
Greece
The Foundation for the Advancement of Sephardic Studies and Culture writes "One cannot forget the repeated initiatives of the head of the Greek Christian Orthodox Metropolitan See of Thessaloniki, Gennadios, against the deportations, and most of all, the official letter of protest signed in Athens on March 23, 1943, by Archbishop Damaskinos of the Greek Orthodox Church, along with 27 prominent leaders of cultural, academic and professional organizations. The document, written in a very sharp language, refers to unbreakable bonds between Christian Orthodox and Jews, identifying them jointly as Greeks, without differentiation. It is noteworthy that such a document is unique in the whole of occupied Europe, in character, content and purpose".
The 275 Jews of the island of Zakynthos, however, survived the Holocaust. When the island's mayor, Loukas Karrer (Λουκάς Καρρέρ), was presented with the German order to hand over a list of Jews, Bishop Chrysostomos returned to the amazed Germans with a list of two names; his and the mayor's. Moreover, the Bishop wrote a letter to Hitler himself stating that the Jews of the island were under his supervision. In the meantime the island's population hid every member of the Jewish community. When the island was almost levelled by the great earthquake of 1953, the first relief came from the state of Israel, with a message that read "The Jews of Zakynthos have never forgotten their Mayor or their beloved Bishop and what they did for us."
The Jewish community of Volos, one of the most ancient in Greece, suffered fewer losses than any other Jewish community in Greece thanks to the timely and dynamic intervention and mobilization of the massive communist-leftist partisan movement EAM-ELAS (National Liberation Front – Greek People's Liberation Army) and the successful cooperation of Joachim, head of the Greek Christian Orthodox Metropolitan See of Demetrias, with the chief rabbi of Volos, Moses Pesach, in evacuating the Jewish people from Volos after the events in Thessaloniki (the deportation of that city's Jews to concentration camps).
Princess Alice of Battenberg and Greece, who was the wife of Prince Andrew of Greece and Denmark and the mother of Prince Philip, Duke of Edinburgh, and mother-in-law of Queen Elizabeth II of the United Kingdom, stayed in occupied Athens during the Second World War, sheltering Jewish refugees, for which she is recognized as "Righteous Among the Nations" at Yad Vashem.
Although the Germans and Bulgarians deported a great number of Greek Jews, others were successfully hidden by their Greek neighbors.
82-year-old Simon Danieli traveled from Israel to his birthplace in Veria to thank the descendants of the people who helped him and his family escape Nazi persecution during World War II. Danieli was 13 in 1942 when his family—father Joseph, a grain merchant, mother Buena, and nine siblings—fled Veria to escape the increasingly frequent atrocities committed by Nazi forces against the city's Jews. They ended up in the small nearby village of Sykies, where the family was taken in by Giorgos and Panayiota Lanara, who offered them shelter, food and a hiding place in the woods, helped also by a priest, Nestoras Karamitsopoulos. The Nazis, however, soon stormed Sykies, where around 50 more Jews from Veria had also taken refuge. They questioned the priest about the whereabouts of the Jews, but when Karamitsopoulos refused to answer, they began raiding people's homes. They found Jews hidden in eight homes and promptly set fire to the houses. They also turned their wrath on the priest, torturing him and pulling out his beard, according to Danieli.
France
Père Marie-Benoît was a French Capuchin priest who helped smuggle approximately 4,000 Jews to safety from Nazi-occupied Southern France and was subsequently recognized by Yad Vashem as a Righteous Among the Nations in 1966. The French town of Le Chambon-sur-Lignon sheltered several thousand Jews. The Brazilian diplomat Luis Martins de Souza Dantas illegally issued Brazilian diplomatic visas to hundreds of Jews in France during the Vichy government, saving them from almost certain death. Si Kaddour Benghabrit, the religious head of the Islamic Center of France, helped more than a thousand Jews by providing forged identity papers to the Jews of Paris during the German occupation of France. He also managed to hide many Jewish families in the rooms of the Paris Mosque as well as in the residences and women's prayer areas.
Belgium
In April 1943, members of the Belgian resistance held up the twentieth convoy train to Auschwitz and freed 231 people. Several local governments did all they could to slow down or block the registration processes for Jews that they were obliged to perform by the Nazis. Many people saved children by hiding them away in private houses and boarding schools. Of the approximately 50,000 Jews in Belgium in 1940, about 25,000 were deported, of whom only about 1,250 survived. Marie and Emile Taquet sheltered Jewish boys in a residential school or home. Bruno Reynders was a Belgian monk who defied the Nazis: implementing the directive of Pope Pius XII to save the Jews, he worked with local orphanages, Catholic nuns and the Belgian underground to forge false identities for Jewish children whose parents willingly gave them up in an attempt to spare their lives in the face of deportation to the death camps. Père Bruno risked his life for his values and to save the lives of an estimated 400 Jewish children, and is honored as a Righteous Gentile at Yad Vashem.
L'abbé Joseph André was another Catholic priest who secured safe hiding places for Jewish children and adults with Belgian families, orphanages and other institutions.
Denmark
The Jewish community in Denmark remained relatively unaffected by Germany's occupation of Denmark on 9 April 1940. The Germans allowed the Danish government to remain in office, and this cabinet rejected the notion that any "Jewish question" should exist in Denmark. No legislation was passed against Jews and the yellow badge was not introduced in Denmark. In August 1943, this situation collapsed when the Danish government refused to introduce the death penalty as demanded by the Germans following a series of strikes and popular protests, and the German authorities forced the Danish government to shut down. During these events, German diplomat Georg Ferdinand Duckwitz tipped off Danish politician Hans Hedtoft that the Danish Jews would be deported to Germany following the collapse of the Danish government. Hedtoft alerted the Danish resistance, and the Jewish leader C.B. Henriques informed the acting Chief Rabbi Marcus Melchior (Chief Rabbi Max Friediger having already been arrested as a hostage on 29 August 1943), who urged the community to go into hiding during the service on 29 September 1943. During the following weeks, more than 7,200 members of Denmark's 8,000-strong Jewish community were ferried to neutral Sweden hidden in fishing boats. A small number of Jews, some 450 in all, were captured by the Germans and shipped to Theresienstadt. Danish officials were able to ensure that these prisoners were not shipped to extermination camps, and Danish Red Cross inspections and food packages kept attention focused on the Danish Jews. Swedish Count Folke Bernadotte secured their release and transport to Denmark in the final days of the war.
Netherlands
Based on its 1940 population of 9 million, the 5,516 Dutch rescuers recognized represent the largest per capita number: 1 in 1,700 Dutch citizens was awarded the Righteous Among the Nations medal. Notable rescuers include:
Willem Arondeus, Dutch artist and resistance fighter who helped forge documents allowing Jewish families to flee the country
Gertruida Wijsmuller-Meijer, who helped save about 10,000 Jewish children from Germany and Austria just before the outbreak of the war (Kindertransport) and on the last transport ship leaving the Netherlands to the UK in May 1940.
Jan Zwartendijk, who as a Dutch consular representative in Kaunas, Lithuania, issued exit visas used by between 6,000 and 10,000 Jewish refugees.
Those who hid and helped Anne Frank and her family, like Miep Gies.
Caecilia Loots, a teacher and antifascist resistance member, who saved Jewish children during the war.
Marion van Binsbergen helped save approximately 150 Dutch Jews, most of them children, throughout the German occupation of the Netherlands.
Tina Strobos, rescued over 100 Jews by hiding them in her house and providing them with forged paperwork to escape the country.
Jan van Hulst (18 December 1903 – 1 August 1975), instrumental in preventing Jews from being deported and murdered during the Holocaust.
The participants of the so-called "Amsterdam dock strike" (better known as the February strike), in which about 300,000 to 500,000 people took part on 25 and 26 February 1941, the first strike against the persecution of the Jews in Nazi-occupied Europe.
The village of Nieuwlande (117 inhabitants), which set up a quota for residents to rescue Jews.
Serbia
After the invasion of Yugoslavia, the country was occupied by Germany, and some regions were occupied by Italy, Hungary, Bulgaria and Albania. A joint German-Italian puppet state called the Independent State of Croatia was installed. After a bombing campaign on major Serbian cities, a German puppet regime, Nedić's Serbia, led by Milan Nedić, was installed. In collaboration with the German Army, Serbian Chetnik collaborators, along with the Serbian Volunteer Corps and the Serbian State Guard, assisted in the persecution of Jews in Serbia proper, in the Hungarian-occupied Vojvodina region, and in the territory held by the Croatian Ustashas. Serbian Jews who were not transported to concentration camps in Germany were either murdered in Nazi concentration camps within Serbia (Sajmište and Banjica, the latter jointly controlled by Nedić's government and the German Army) or transported to the Ustasha-controlled concentration camp Jasenovac and murdered there. Jews living in Hungarian-occupied regions faced mass executions, the most notorious being the Novi Sad raid in 1942.
Serbian civilians were involved in saving thousands of Yugoslavian Jews during this period. Miriam Steiner-Aviezer, a researcher into Yugoslavian Jewry and a member of Yad Vashem's Righteous Gentiles committee, states: "The Serbs saved many Jews. Contrary to their present image in the world, the Serbs are a friendly, loyal people who will not abandon their neighbors." As of 2017, Yad Vashem recognizes 135 Serbians as Righteous Among the Nations, the highest number of any Balkan country.
Bulgaria
Bulgaria joined the Axis powers in March 1941 and took part in the invasion of Yugoslavia and Greece. The Nazi-allied government of Bulgaria, led by Bogdan Filov, fully and actively assisted in the Holocaust in occupied areas. On Passover 1943, Bulgaria rounded up the great majority of Jews in Greece and Yugoslavia, transported them through Bulgaria, and handed them off to German transport to Treblinka, where almost all were murdered. The Nazi-allied government of Bulgaria deported a higher percentage of Jews (from the areas of Greece and the Republic of Macedonia) than did the German occupiers in the region. In Bulgarian-occupied Greece, the Bulgarian authorities arrested the majority of the Jewish population on Passover 1943. The territories of Greece, Macedonia and other nations occupied by Bulgaria during World War II were not considered Bulgarian—they were only administered by Bulgaria, but Bulgaria had no say as to the affairs of these lands.
Bulgaria's active participation in the Holocaust did not, however, extend to its pre-war territory. After protests by Archbishop Stefan of Sofia and the intervention of Dimitar Peshev, the planned deportation of the Bulgarian Jews (about 50,000) was stopped and deportation to the concentration camps was refused. Bulgaria was officially thanked by the government of Israel despite having been an ally of Nazi Germany.
Dimitar Peshev was the Deputy Speaker of the National Assembly of Bulgaria and Minister of Justice during World War II. He rebelled against the pro-Nazi cabinet and prevented the deportation of Bulgaria's 48,000 Jews, aided by the strong opposition of the Bulgarian Orthodox Church. Although Peshev had been involved in various anti-Semitic legislation passed in Bulgaria during the early years of the war, the government's decision of 8 March 1943 to deport the country's 48,000 Jews was too much for him. After being informed of the deportation, Peshev tried several times to see Prime Minister Bogdan Filov, who refused to receive him. He then went to see Interior Minister Petar Gabrovski, insisting that he cancel the deportations. After much persuasion, Gabrovski finally called the governor of Kyustendil and instructed him to stop preparations for the Jewish deportations. By 5:30 p.m. on 9 March, the order was cancelled. After the war, Peshev was charged with anti-Semitism and anti-Communism by the Soviet courts and sentenced to death. After an outcry from the Jewish community, his sentence was commuted to 15 years' imprisonment, though he was released after just one year. His deeds went unrecognized after the war, and he lived in poverty in Bulgaria. It was not until 1973 that he was awarded the title of Righteous Among the Nations. He died the same year.
Portugal
Historians have estimated that up to one million refugees fled from the Nazis through Portugal during World War II, an impressive number considering the size of the country's population at that time (circa 6 million). Portugal remained neutral within the overall objectives of the Anglo-Portuguese Alliance, and that astute policy, maintained under precarious conditions, made it possible for Portugal to contribute to the rescue of a large number of refugees. Portuguese Prime Minister António de Oliveira Salazar allowed all international Jewish organizations—HIAS, HICEM, the American Jewish Joint Distribution Committee, World Jewish Congress, and Portuguese Jewish relief committees—to establish themselves in Lisbon. In 1944, in Hungary, the diplomats Carlos Sampaio Garrido and Carlos de Liz-Texeira Branquinho, coordinating with Salazar and risking their lives, also helped many Jews escape the Nazis and their Hungarian allies. In June 1940, when Germany invaded France, the Portuguese consul in Bordeaux, Aristides de Sousa Mendes, issued visas indiscriminately to a population in panic, without requesting the prior authorization from Lisbon that he was supposed to obtain. On 20 June, the British Embassy in Lisbon accused the consul in Bordeaux of improperly charging money for issuing visas, and Sousa Mendes was recalled to Lisbon. The number of visas issued by Sousa Mendes cannot be determined; a 1999 study by the Yad Vashem historian Dr. Avraham Milgram, published by the Shoah Resource Center, International School for Holocaust Studies, asserts that there is a great difference between reality and the myth created by the generally cited numbers. Sousa Mendes never lost his title: he continued to be listed in the Portuguese Diplomatic Yearbook until 1954 and continued to receive his full consul's salary of 1,593 escudos until the day he died. Other Portuguese credited with saving Jews during the war are Professor Francisco Paula Leite Pinto and Moisés Bensabat Amzalak.
A devoted Jew, and a Salazar supporter, Amzalak headed the Lisbon Jewish community for more than fifty years (from 1926 until 1978). Leite Pinto, General Manager of the Portuguese railways, together with Amzalak, organized several trains, coming from Berlin and other cities, loaded with refugees.
Spain
In Franco's Spain, several diplomats contributed very actively to rescue Jews during the Holocaust. The two most prominent ones were Ángel Sanz Briz (the Angel of Budapest), who saved around five thousand Hungarian Jews by providing them Spanish passports, and Eduardo Propper de Callejón, who helped thousands of Jews to escape from France to Spain. Other diplomats with a relevant role were Bernardo Rolland de Miota (consul of Spain at Paris), José Rojas Moreno (ambassador at Bucharest), Miguel Ángel de Muguiro (diplomat at the embassy in Budapest), Sebastián Romero Radigales (consul at Athens), Julio Palencia Tubau, (diplomat at the embassy in Sofía), Juan Schwartz Díaz-Flores (consul at Vienna) and José Ruiz Santaella (diplomat at the embassy in Berlin).
Lithuania
According to data available at Yad Vashem, 904 rescuers of Jews in Lithuania had been identified by 1 January 2019, whereas the catalogue compiled by the Vilna Gaon State Jewish Museum lists 2,300 Lithuanians who rescued Jews, among them 159 members of the clergy.
Following the occupation of Poland by Nazi Germany and the Soviet Union in September 1939, the Republic of Lithuania accepted and accommodated Polish and Jewish refugees as well as soldiers of the defeated Polish army. Some of these refugees were later saved from the Soviets (and eventually from the Nazis) by the Japanese consul-general Chiune Sugihara and by Jan Zwartendijk, director of the Philips plants in Lithuania and part-time acting consul of the Netherlands, after the occupation of Lithuania by the Soviet Union on June 15, 1940.
Chiune Sempo Sugihara, Japanese Consul-General in Kaunas, Lithuania, in 1939–1940, issued thousands of visas, in defiance of explicit orders from the Japanese foreign ministry, to Jews fleeing Kaunas after the occupation of Lithuania by the Soviet Union. The last foreign diplomat to leave Kaunas, Sugihara continued stamping visas from the open window of his departing train. After the war, Sugihara was dismissed from the Japanese foreign service, ostensibly due to downsizing.
As in other countries, rescuers in Lithuania came from all layers of society. The most iconic figures are the librarian Ona Šimaitė, the doctor Petras Baublys, the writer Kazys Binkis and his wife, the journalist Sofija Binkienė, the musician Vladas Varčikas, the writer and translator Danutė Zubovienė (Čiurlionytė) and her husband Vladimiras Zubovas, the doctor Elena Kutorgienė, the aviator Vladas Drupas, the doctor Pranas Mažylis, the Catholic priest Juozapas Stakauskas, the teacher Vladas Žemaitis, the Catholic nun Maria Mikulska and others. In the village of Šarnelė (Plungė district), the Straupiai family (Jonas and Bronislava Straupiai, together with their neighbours Adolfina and Juozas Karpauskai) saved 26 people (9 families).
Citizens of Lithuania and of foreign countries who rescued people on the territory of Lithuania, as well as citizens of Lithuania who rescued people abroad, are awarded the Life Saving Cross. The President of Lithuania honors rescuers of Jews every year on the National Memorial Day for the Genocide of Lithuanian Jews, which is marked on September 23 to commemorate the liquidation of the Vilna Ghetto on that day in 1943.
Albania
Unlike many other Eastern European countries under Nazi occupation, Albania—which has a mixed Muslim and Christian population and a tradition of tolerance—became a safe haven for Jews. At the end of 1938, Albania was the only remaining country in Europe that still issued visas to Jews through its embassy in Berlin. Following the Nazi occupation of Albania, the country refused to hand over its small Jewish population to the Germans, sometimes even providing Jewish families with forged documents. During the war, about 2,000 Jews sought refuge in Albania, and many of them took shelter in rural parts of the country where they were protected by the local population. At the end of the war, Albania's Jewish population was greater than it was prior to the war, making it the only country in Europe where the Jewish population increased during World War II. Out of two thousand Jews in total, only five Albanian Jews perished at the hands of the Nazis. They were discovered by the Germans and subsequently deported to Pristina.
Between February and March in 1939, King Zog I of Albania granted asylum to 300 Jewish refugees before being overthrown by the Italian fascists in April the same year. When the Italians requisitioned the Albanian puppet government to expel its Jewish refugees, the Albanian leaders refused, and in the following years, 400 more Jewish refugees found sanctuary in Albania.
Refik Veseli was the first Albanian to be awarded the title Righteous Among the Nations, having declared afterwards that betraying the Jews "would have disgraced his village and his family. At minimum his home would be destroyed and his family banished". On 21 July 1992, Mihal Lekatari, an Albanian partisan from Kavajë, was recognized as Righteous Among the Nations. Lekatari is noted for stealing blank identity papers from the municipality of Harizaj and distributing identity papers with Muslim names on them to Jewish refugees. In 1997, Albanian Shyqyri Myrto was honored for rescuing Jews, with the Anti-Defamation League's Courage to Care Award presented to his son, Arian Myrto. In 2006, a plaque honoring the compassion and courage of Albania during the Holocaust was dedicated in The Holocaust Memorial Park in Sheepshead Bay in Brooklyn, New York, with the Albanian ambassador to the United Nations in attendance.
During the war, some parts of Kosovo and Macedonia which were occupied by the Axis powers were annexed to Albania, and an estimated 600 Jews were captured in these territories, and consequently killed.
Finland
The government of Finland generally refused to deport Finnish Jews to Germany; Finnish government officials are said to have told German envoys that "Finland has no Jewish problem". However, the secret police ValPo deported eight Jews in 1942 who were refugees seeking asylum in Finland, and it seems highly likely that Finland also deported Soviet POWs, among them a number of Jews. The majority of Finnish Jews, however, were protected by the government's co-belligerence with Germany; Finnish Jewish men joined the Finnish army and fought at the front.
The most notable Finnish individual involved in aiding the Jews was Algoth Niska (1888–1954). Niska had been a smuggler during the Finnish prohibition but ran into financial trouble after its end in 1932, so when Albert Amtmann, an Austrian-Jewish acquaintance, expressed his concerns over his people's position in Europe, Niska quickly saw a business opportunity in smuggling Jews out of Germany. A modus operandi was quickly established: Niska would forge Finnish passports and Amtmann would find the customers, who with their new passports would be able to cross the border out of Germany. All in all, Niska forged passports for 48 Jews during 1938 and earned 2.5 million Finnish marks ($890,000 or £600,000 in today's money) selling them. Only three of the Jews are known to have survived the Holocaust, while twenty were certainly caught; the fates of the other twenty-five are not known. Involved in the operation with Niska and Amtmann were Major Rafael Johannes Kajander, Axel Belewicz and Belewicz's girlfriend Kerttu Ollikainen, whose job was to steal the forms on which the passports were forged.
Italy
Despite Benito Mussolini's close alliance with Hitler, Italy did not adopt Nazism's genocidal ideology towards the Jews. The Nazis were frustrated by the Italian forces' refusal to co-operate in the roundups of Jews, and no Jews were deported from Italy prior to the Nazi occupation of the country following the Italian capitulation in September 1943. In Italian-occupied Croatia, the Nazi envoy Siegfried Kasche advised Berlin that Italian forces had "apparently been influenced" by Vatican opposition to German anti-Semitism. As anti-Axis feeling grew in Italy, the use of Vatican Radio to broadcast papal disapproval of race murder and anti-Semitism angered the Nazis. Mussolini was overthrown in July 1943, and the Nazis moved to occupy Italy, commencing a round-up of Jews. Although thousands were caught, the great majority of Italy's Jews were saved. As in other nations, Catholic networks were heavily engaged in rescue efforts.
In Fiume (northern Italy, today Croatian Rijeka), Giovanni Palatucci, after the promulgation of racial laws against Jews in 1938 and at the beginning of war in 1940, as chief of the Foreigners' Office, forged documents and visas to Jews threatened by deportation. He managed to destroy all documented records of some 5,000 Jewish refugees living in Fiume, issuing them false papers and providing them with funds. Palatucci then sent the refugees to a large internment camp in southern Italy protected by his uncle, Giuseppe Maria Palatucci, the Catholic Bishop of Campagna. Following the 1943 capitulation of Italy, Fiume was occupied by the Nazis. Palatucci remained as head of the police administration without real powers. He continued to clandestinely help Jews and maintain contact with the Resistance, until his activities were discovered by the Gestapo. The Swiss Consul to Trieste, a close friend of his, offered him a safe pass to Switzerland, but Giovanni Palatucci sent his young Jewish fiancée instead. Palatucci was arrested on 13 September 1944. He was condemned to death, but the sentence was later commuted to deportation to Dachau, where he died.
On 19 July 1944, the Gestapo rounded up the nearly 2000 Jewish inhabitants of the island of Rhodes, which had been governed by Italy since 1912. Of the approximately 2,000 Rhodesli Jews who were deported to Auschwitz and elsewhere, only 104 survived.
Giorgio Perlasca, who posed as the consul-general of Spain under the Spanish ambassador in Budapest, was able to put under his protection thousands of Jews and non-Jews destined for concentration camps.
The cycling champion Gino Bartali had hidden a Jewish family in his cellar and, according to one of the survivors, saved their lives in doing so. He also used his fame to carry messages and documents to the Italian Resistance and fugitive Jews. Bartali cycled from Florence through Tuscany, Umbria and Marche, many times traveling as far afield as Assisi, all the while wearing the racing jersey emblazoned with his name.
Calogero Marrone was the chief of the Civil Registry office in the municipality of Varese and issued hundreds of fake identity cards in order to save Jews and anti-fascists. He was arrested after an anonymous tip-off and died in the Dachau concentration camp.
Martin Gilbert wrote that, in October 1943, with the SS occupying Rome and determined to deport the city's 5000 Jews, the Vatican clergy had opened the sanctuaries of the Vatican to all "non-Aryans" in need of rescue in an attempt to forestall the deportation. "Catholic clergy in the city acted with alacrity", wrote Gilbert. "At the Capuchin convent on the Via Siciliano, Father Benoit saved a large number of Jews by providing them with false identification papers [...] by the morning of October 16, a total of 4,238 Jews had been given sanctuary in the many monasteries and convents of Rome. A further 477 Jews had been given shelter in the Vatican and its enclaves." Gilbert credited the rapid rescue efforts of the Church with saving over four-fifths of Roman Jews.
Other Righteous Catholic rescuers in Italy included Elisabeth Hesselblad. She and two British women, Mother Riccarda Beauchamp Hambrough and Sister Katherine Flanagan, have been beatified for reviving the Swedish Bridgettine Order of nuns and hiding scores of Jewish families in their convent. The churches, monasteries and convents of Assisi formed the Assisi Network and served as a safe haven for Jews. Gilbert credits the network established by Bishop Giuseppe Placido Nicolini and Abbot Rufino Niccaci of the Franciscan Monastery with saving 300 people. Other Italian clerics honored by Yad Vashem include the theology professor Fr Giuseppe Girotti of the Dominican Seminary of Turin, who saved many Jews before being arrested and sent to Dachau, where he died in 1945; Fr Arrigo Beccari, who protected around 100 Jewish children in his seminary and among local farmers in the village of Nonantola in central Italy; and Don Gaetano Tantalo, a parish priest who sheltered a large Jewish family. Of Italy's 44,500 Jews, some 7,680 were murdered in the Nazi Holocaust.
Vatican City State
In the 1930s, Pope Pius XI urged Mussolini to ask Hitler to restrain the anti-Semitic actions taking place in Germany. In 1937, the Pope issued the encyclical Mit brennender Sorge ("With Burning Concern"), in which he asserted the inviolability of human rights.
Pius XII
Pope Pius XII succeeded Pius XI on the eve of war in 1939. He used diplomacy to aid the victims of the Holocaust, and directed the Church to provide discreet aid. His encyclicals such as Summi Pontificatus and Mystici corporis preached against racism—with specific reference to Jews: "there is neither Gentile nor Jew, circumcision nor uncircumcision". His 1942 Christmas radio address denounced the murder of "hundreds of thousands" of "faultless" people because of their "nationality or race". The Nazis were furious, and the Reich Security Main Office, responsible for the deportation of Jews, called him the "mouthpiece of the Jewish war criminals". Pius XII intervened to attempt to block Nazi deportations of Jews in various countries.
Following the capitulation of Italy, Nazi deportations of Jews to death camps began. Pius XII protested at diplomatic levels, while several thousand Jews found refuge in Catholic networks. On 27 June 1943, Vatican Radio broadcast a papal injunction: "He who makes a distinction between Jews and other men is being unfaithful to God and is in conflict with God's commands".
When the Nazis came to Rome in search of Jews, the Pope had already, days earlier, ordered that the sanctuaries of Vatican City be opened to all "non-Aryans" in need of refuge. According to Martin Gilbert, by the morning of 16 October, "a total of 477 Jews had been given shelter in the Vatican and its enclaves, while another 4,238 had been given sanctuary in the many monasteries and convents of Rome. Only 1,015 of Rome's 6,730 Jews were seized that morning". Upon receiving news of the roundups on the morning of 16 October, the Pope immediately instructed Cardinal Secretary of State Maglione to make a protest to the German ambassador. After the meeting, the ambassador gave orders for a halt to the arrests. Earlier, the Pope had helped the Jews of Rome by offering gold towards the 50 kg ransom demanded by the Nazis.
Other noted rescuers assisted by Pius were Pietro Palazzini, Giovanni Ferrofino, Giovanni Palatucci, Pierre-Marie Benoit and others. When Archbishop Giovanni Montini (later Pope Paul VI) was offered an award for his rescue work by Israel, he said he had only been acting on the orders of Pius XII.
Pius' diplomatic representatives lobbied on behalf of Jews across Europe, including in Vichy France, Hungary, Romania, Bulgaria, Croatia and Slovakia, Germany itself and elsewhere. Many papal nuncios played important roles in the rescue of Jews, among them Giuseppe Burzio, the Vatican Chargé d'Affaires in Slovakia; Filippo Bernardini, Nuncio to Switzerland; and Angelo Roncalli, the Nuncio to Turkey. Angelo Rotta, the wartime Nuncio to Budapest and Andrea Cassulo, the Nuncio to Bucharest have been recognized as Righteous Among the Nations.
From 1942, Pius directly protested to the Bratislava government against the deportations of Slovakian Jews. He made a direct intervention in Hungary to lobby for an end to Jewish deportations in 1944, and on 4 July, the Hungarian leader, Admiral Horthy, told Berlin that deportations of Jews must cease, citing protests by the Vatican, the King of Sweden and the Red Cross. The pro-Nazi, anti-Semitic Arrow Cross Party seized power in October, and a campaign of murder of the Jews commenced. The neutral powers led a major rescue effort and Pius' representative, Angelo Rotta, took the lead in establishing an "international ghetto", marked by the emblems of the Swiss, Swedish, Portuguese, Spanish and Vatican legations, and providing shelter for some 25,000 Jews.
In Rome, some 4,000 Italian Jews and escaped prisoners of war avoided deportation, many of them hidden in safe houses or evacuated from Italy by a resistance group organized by the Irish-born priest and Vatican official Hugh O'Flaherty. Msgr. O'Flaherty used his political connections to help secure sanctuary for dispossessed Jews. The wife of the Irish ambassador, Delia Murphy, assisted him.
Norway
During the occupation of Norway by Nazi Germany, its Jewish community was subject to persecution and deported to extermination camps. Although at least 764 Jews in Norway were killed, over 1,000 were rescued with the help of non-Jewish Norwegians who risked their lives to smuggle the refugees out, typically to Sweden. To date, 67 of these individuals have been recognized by Yad Vashem as being Righteous Among the Nations. Yad Vashem has also recognized the Norwegian resistance movement collectively.
China
Ho Feng Shan, the Chinese Consul in Vienna, began issuing visas for Shanghai to Jews on humanitarian grounds; part of the city at this time was still under the control of the Republic of China. Between 1933 and 1941, the Chinese city of Shanghai, parts of which were under Japanese occupation, accepted unconditionally over 18,000 Jewish refugees escaping the Holocaust in Europe, a number greater than those taken in by Canada, New Zealand, South Africa and British India combined during World War II. After 1943, the occupying Nazi-aligned Japanese confined the Jewish refugees of Shanghai to an area known as the Shanghai ghetto. Many of the Jewish refugees in Shanghai migrated to the United States and Israel after 1948 due to the Chinese Civil War (1946–1950).
Japan
The Japanese government ensured Jewish safety in China, Japan and Manchuria. Japanese Army General Hideki Tōjō received Jewish refugees in accordance with Japanese national policy and rejected German protest. Chiune Sugihara, Kiichiro Higuchi, and Fumimaro Konoe helped thousands of Jews escape the Holocaust from occupied Europe.
Bolivia
Between 1938 and 1941, around 20,000 Jews were given visas for Bolivia under an agricultural visa program. Although most moved on to the neighboring countries of Argentina, Uruguay and Chile, some stayed and created a Jewish Community in Bolivia.
The Philippines
In a notable humanitarian act, Manuel L. Quezon, the first President of the Commonwealth of the Philippines, in cooperation with United States High Commissioner Paul V. McNutt, facilitated the entry into the Philippines of Jewish refugees fleeing fascist regimes in Europe, while taking on critics who had been convinced by fascist propaganda that Jewish settlement was a threat to the country. Quezon and McNutt proposed to settle 30,000 refugee families on Mindanao and 40,000–50,000 refugees on Polillo. Quezon gave, as a 10-year loan to Manila's Jewish Refugee Committee, land beside the Quezon family home in Marikina. The land would house homeless refugees in Marikina Hall, dedicated on 23 April 1940.
Leaders and diplomats
Per Anger – Swedish diplomat in Budapest who originated the idea of issuing provisional passports to Hungarian Jews to protect them from arrest and deportation to camps. Anger collaborated with Raoul Wallenberg to save the lives of thousands of Jews.
Władysław Bartoszewski – Polish Żegota activist.
Count Folke Bernadotte of Wisborg – Swedish diplomat, who negotiated the release of 27,000 people (a significant number of whom were Jews) to hospitals in Sweden.
Jacob (Jack) Benardout – British diplomat to the Dominican Republic before and during World War II. Issued numerous Dominican Republic visas to Jews in Germany. Only 16 Jewish families arrived in the Dominican Republic (the other Jews dispersed to countries along the way, e.g. Britain and America), and these families created the Jewish community of the Dominican Republic.
Hiram Bingham IV – American Vice Consul in Marseilles, France, 1940–1941.
José Castellanos Contreras – a Salvadorean army colonel and diplomat who, while working as El Salvador's Consul General in Geneva from 1942 to 1945, and in conjunction with George Mantello, helped save at least 13,000 Central European Jews from Nazi persecution by providing them with false papers of Salvadorean nationality.
Georg Ferdinand Duckwitz – German diplomatic attaché in Denmark. Alerted Danish politician Hans Hedtoft to the imminent German plans to deport Denmark's Jewish community, thus enabling the subsequent rescue of the Danish Jews.
Harald Edelstam – Swedish diplomat in Norway who helped to protect and smuggle hundreds of Jews and Norwegian resistance fighters to Sweden.
Gisi Fleischmann led the Bratislava Working Group, one of the most important rescue groups, in partnership with Rabbi Chaim Michael Dov Weissmandl. They successfully negotiated with the Nazis in early 1942 to stop the transports from Slovakia and a few months later, via the Europa plan, to try to stop transports from other parts of Europe. They demanded bombing of the rail lines to Auschwitz and authored/distributed the Auschwitz Report in 1944.
Frank Foley – British MI6 agent undercover as a passport officer in Berlin, saved around 10,000 people by issuing forged passports to Britain and the British Mandate of Palestine.
Rafael Leónidas Trujillo – the Dominican dictator promised to receive 100,000 Jewish refugees into the Dominican Republic in 1938, when Franklin D. Roosevelt organized an international conference at Evian to discuss the persecution of the Jews. The Dominican Republic was the only nation to accept Jewish immigrants after the conference. DORSA (the Dominican Republic Settlement Association) was formed to settle Jews on the northern coast. 5,000 visas were issued, but only 645 European Jews reached the settlement. The refugees were assigned land and cattle, and the town of Sosúa was founded. For each person taken in, Trujillo was paid 5,000 dollars in gold from Jewish International in New York. Other refugees settled in the capital, Santo Domingo.
Albert Göring – German businessman (and younger brother of leading Nazi Hermann Göring) who helped Jews and dissidents survive in Germany.
Paul Grüninger – Swiss commander of police who provided falsely dated papers to over 3,000 refugees so they could escape Austria following the Anschluss.
Carlos María Gurméndez - Uruguayan ambassador to the Netherlands who sheltered German and Dutch Jews in the Uruguayan embassy and assisted with their travel to Uruguay and the United States.
Kiichiro Higuchi – Japanese lieutenant general who saved 20,000 Jewish refugees.
Wilm Hosenfeld – German officer who helped pianist Wladyslaw Szpilman, a Polish Jew, among many others.
Seishirō Itagaki – Japanese Army Minister who proposed and adopted a Japanese national policy to receive Jewish refugees.
Lyndon B. Johnson – Future President of the United States who, as a member of the United States House of Representatives in 1938, helped Austrian conductor Erich Leinsdorf gain permanent residency in the United States. Johnson later helped Jews enter the U.S. through Latin America and become workers on National Youth Administration projects in Texas.
Prince Constantin Karadja – Romanian diplomat, who saved over 51,000 Jews from deportation and extermination, as credited by Yad Vashem in 2005.
Jan Karski – Polish emissary of Armia Krajowa to Western Allies and eye-witness of the Holocaust.
Necdet Kent – Turkish Consul General at Marseille, who granted Turkish citizenship to hundreds of Jews. At one point, he entered an Auschwitz-bound train at enormous personal risk to save from deportation 70 Jews, to whom he had granted Turkish citizenship.
Fumimaro Konoe – Japanese Prime Minister who adopted a Japanese national policy to receive Jewish refugees.
Zofia Kossak-Szczucka – Polish founder of Żegota.
Hillel Kook (aka Peter Bergson) established a US-based rescue group, which had considerable support in the Congress and Senate. The group's activism was the major factor forcing President Roosevelt to establish the War Refugee Board in January 1944. One of the WRB's important actions was initiation and sponsoring of the Wallenberg mission to Budapest.
Carl Lutz – Swiss consul in Budapest, protected tens of thousands of Jews in Hungary.
Luis Martins de Souza Dantas – Brazilian in charge of the Brazilian diplomatic mission in France. He granted Brazilian visas to several Jews and other minorities persecuted by the Nazis. He was proclaimed as Righteous among the Nations in 2003.
George Mantello (b. Mandl Gyorgy) – El Salvador's honorary consul for Hungary, Romania, and Czechoslovakia – provided Salvadoran protection papers for thousands of Jews. He spearheaded an unprecedented Swiss grassroots protest and press campaign. It led to Roosevelt, Churchill and other world leaders threatening Hungary's ruler, regent Miklos Horthy, with post-war retribution if the transports did not stop. That ended the deportation of Jews from Hungary to Auschwitz.
Boris III of Bulgaria – King of Bulgaria from 1918 to 1943. Resisted demands from Hitler to deport the Jews, resulting in all 50,000 being spared. Boris died in 1943 after a meeting with Hitler.
Paul V. McNutt – United States High Commissioner of the Philippines, 1937–1939, who facilitated the entry of Jewish refugees into the Philippines.
Helmuth James Graf von Moltke – adviser to Nazi Germany on international law; active in Kreisau Circle resistance group, sent Jews to safe-haven countries.
Delia Murphy – wife of Dr. Thomas J. Kiernan, Irish minister in Rome 1941–1946, who worked with Hugh O'Flaherty and was part of the network that saved the lives of POWs and Jews in the hands of the Gestapo.
Jean-Marie Musy – toward the end of the war, negotiated with Himmler on behalf of Recha Sternbuch to rescue large numbers of Jews in the concentration camps.
Giovanni Palatucci – Italian police official who saved several thousand.
Giorgio Perlasca – Italian. When Ángel Sanz Briz was ordered to leave Hungary, he falsely claimed to be his substitute and saved some thousands more Jews.
Dimitar Peshev – Deputy Speaker of the Bulgarian Parliament, who played a major role in rescuing Bulgaria's 48,000 Jews, the entire Jewish population of Bulgaria at the time.
Frits Philips – Dutch industrialist who saved 382 Jews by insisting to the Nazis that they were indispensable employees of Philips.
Witold Pilecki – the only person who volunteered to be imprisoned in Auschwitz, organized a resistance inside the camp and as a member of Armia Krajowa sent the first reports on the camp atrocities to the Polish Government in Exile, from where they were passed to the rest of the Western Allies.
Karl Plagge – a major in the Wehrmacht Heer who issued work permits in order to save almost 1,000 Jews (see The Search for Major Plagge: The Nazi Who Saved Jews, by Michael Good)
Enver Hoxha – led the resistance against the Germans and Italians in Albania. Hoxha refused to allow the Germans or collaborationists to deport a single Jew; as a result, Albania was the only country in Europe to have an increased Jewish population after the war.
Mehmet Shehu – a resistance fighter in Albania who allowed Jews to enter Albania and refused to hand them over to the Germans during the occupation.
Eduardo Propper de Callejón – First Secretary in the Spanish embassy in Paris who stamped and signed passports almost non-stop for four days in 1940 to let Jewish refugees escape to Spain and Portugal.
Traian Popovici – Romanian mayor of Cernăuţi (Chernivtsi) who saved 20,000 Jews of Bukovina.
Manuel L. Quezon – President of the Commonwealth of the Philippines, 1935–1941, assisted in resettling Jewish refugees on the island of Mindanao.
Florencio Rivas – Consul General of Uruguay in Germany, who allegedly hid one hundred and fifty Jews during Kristallnacht and later provided them with passports.
Gilberto Bosques Saldívar – General Consul of Mexico in Marseilles, France. For two years, he issued Mexican visas to around 40,000 Jews, Spaniards and political refugees, allowing them to escape to Mexico and other countries. He was imprisoned by the Nazis in 1943 and released to Mexico in 1944.
Ángel Sanz Briz – Spanish consul in Hungary. Together with Giorgio Perlasca, he saved more than 5,000 Jews in Budapest by issuing Spanish passports to them.
Abdol-Hossein Sardari – Head of Consular affairs at the Iranian Embassy in Paris. He saved many Iranian Jews and gave 500 blank Iranian passports to an acquaintance of his, to be used by non-Iranian Jews in France.
Oskar Schindler – German businessman whose efforts to save his 1,200 Jewish workers were recounted in the book Schindler's Ark and the film Schindler's List.
Rabbi Solomon Schonfeld – set up a UK-based rescue committee and rescued many thousands of Jews.
Eduard Schulte – German industrialist, the first to inform the Allies about the mass extermination of Jews.
Irena Sendler – Polish head of Zegota children's department who saved 2,500 Jewish children.
Ho Feng Shan – Chinese Consul in Vienna who freely issued visas to Jews.
Henryk Slawik – Polish diplomat who saved 5,000–10,000 people in Budapest, Hungary.
Aristides de Sousa Mendes – Portuguese diplomat in Bordeaux, who signed about 30,000 visas to help Jews and persecuted minorities to escape the Nazis and The Holocaust.
Recha Sternbuch rescued large numbers of Jews with the help of her husband Yitzchak by smuggling them into Switzerland from Austria, by distributing protection papers, by negotiating with Himmler with help of Jean-Marie Musy to save Jews in the concentration camps as the Germans were retreating, and by rescuing the Jews who arrived to Bergen-Belsen by train from Hungary.
Chiune Sugihara – Japanese consul to Lithuania, 2,140 (mostly Polish) Jews and an unknown number of additional family members were saved by passports, many unauthorized, provided by him in 1940.
Hideki Tōjō – General and Prime Minister of Japan who received Jewish refugees in Manchuria and rejected German protests.
Selâhattin Ülkümen – Turkish diplomat who saved the lives of some 42 Jewish Turkish families, more than 200 persons, among a Jewish community of some 2000 after the Germans occupied the island of Rhodes in 1944.
Raoul Wallenberg – Swedish diplomat. Wallenberg saved the lives of tens of thousands of Jews condemned to certain death by the Nazis during World War II. In January 1945, Wallenberg was imprisoned at the headquarters of Rodion Malinovsky in Debrecen and disappeared. He is believed to have been poisoned in the Lubyanka Building by the NKVD torturer Grigory Mairanovsky.
Sir Nicholas Winton – British stockbroker who organized the Czech Kindertransport, which sent 669 children (most of them Jewish) to foster parents in England and Sweden from Czechoslovakia and Austria after Kristallnacht. Sir Nicholas was nominated for the 2008 Nobel Peace Prize.
Namik Kemal Yolga – A Vice-Consul at the Turkish Embassy in Paris who saved numerous Turkish Jews from deportation.
Guelfo Zamboni – Consul General at Thessaloniki who gave false papers to save the lives of over 300 Jews residing there.
Raymond Geist – Consul General at the American embassy in Berlin. While he was posted in Berlin from 1929 to 1939 he personally intervened with Nazi officials to save those (German Jews as well as opponents of the Nazi regime), who were under the threat of being imprisoned in concentration camps and issued more than 50,000 visas to save their lives. According to the TV series Genius, he was the one who issued visas to Albert Einstein and his family even when he was under orders from J. Edgar Hoover, who was at that time the Director of the FBI to not to give the visas till Albert Einstein signed a declaration confirming that he was not a member of the Communist Party. He was awarded the Order of Merit by the German Federal Republic in 1954.
Religious figures
Catholic officials
Pope Pius XII, preached against racism in encyclicals like Summi Pontificatus. Used Vatican Radio to denounce race murders and anti-Semitism. Directly lobbied Axis officials to stop Jewish deportations. Opened the sanctuaries of the Vatican to Rome's Jews during the Nazi roundup.
Monsignor Hugh O'Flaherty CBE – Irish Catholic priest who saved more than 6,500 Allied soldiers and Jews; known as the "Scarlet Pimpernel of the Vatican". Retold in the film The Scarlet and the Black.
Filippo Bernardini, papal nuncio to Switzerland.
Giuseppe Burzio, the Vatican Chargé d'Affaires in Slovakia. Protested the anti-Semitism and totalitarianism of the Tiso regime. Burzio advised Rome of the deteriorating situation for Jews in the Nazi puppet state, sparking Vatican protests on behalf of Jews.
Angelo Roncalli, the nuncio to Turkey saved a number of Croatian, Bulgarian and Hungarian Jews by assisting their migration to Palestine. Roncalli succeeded Pius XII as Pope John XXIII, and always said that he had been acting on the orders of Pius XII in his actions to rescue Jews.
Andrea Cassulo, papal nuncio in Romania. Appealed directly to Marshall Antonescu to limit the deportations of Jews to Nazi concentration camps planned for the summer of 1942.
Cardinal Gerlier of France refused to hand over Jewish children being sheltered in Catholic homes. In September 1942, eight Jesuits were arrested for sheltering hundreds of children on Jesuit properties, and Pius XII's Secretary of State, Cardinal Maglione, protested to the Vichy Ambassador.
Giuseppe Marcone, apostolic visitor to Croatia, lobbied Croat regime, saved 1000 Jewish partners in mixed marriages.
Archbishop Aloysius Stepinac of Zagreb, condemned Croat atrocities against both Serbs and Jews, and himself saved a group of Jews. He declared publicly in the spring of 1942 that it was "forbidden to exterminate Gypsies and Jews because they are said to belong to an inferior race".
Bishop Pavel Gojdič protested the persecution of Slovak Jews. Gojdic was beatified by the Church and recognized as Righteous Among the Nations by Yad Vashem.
Angelo Rotta, papal nuncio to Hungary. Actively protested Hungary's mistreatment of the Jews, and helped persuade Pope Pius XII to lobby the Hungarian leader Admiral Horthy to stop their deportation. He issued protective passports for Jews and 15,000 safe conduct passes – the nunciature sheltered some 3000 Jews in safe houses. An "International Ghetto" was established, including more than 40 safe houses marked by the Vatican and other national emblems. 25,000 Jews found refuge in these safe houses. Elsewhere in the city, Catholic institutions hid several thousand more Jewish people.
Archbishop Johannes de Jong, later Cardinal, of Utrecht, Netherlands, who drew up together with Titus Brandsma O.Carm. († Dachau, 1942) a letter in which he called for all Catholics to assist persecuted Jews, and in which he openly condemned the Nazi German "deportation of our Jewish fellow citizens" (From: Herderlijk Schrijven, read from all pulpits on Sunday 26 January 1942).
Archbishop Jules-Géraud Saliège of Toulouse – led a number of French bishops (including Monseigneur Théas, Bishop of Montauban; Monseigneur Delay, Bishop of Marseilles; Cardinal Gerlier, Archbishop of Lyon; Monseigneur Vansteenberghe of Bayonne; and Monseigneur Moussaron, Archbishop of Albi) in denouncing roundups and mistreatment of Jews in France, spurring greater resistance.
Père Marie-Benoît, Capuchin priest who saved many Jews in Marseille and later in Rome where he became known among the Jewish community as "father of the Jews".
Mother Matylda Getter's Franciscan Sisters of the Family of Mary sheltered Jewish children escaping the Warsaw Ghetto. Getter's convent rescued more than 750.
Alfred Delp S.J., a Jesuit priest who helped Jews escape to Switzerland while rector of St. Georg Church in suburban Munich; also involved with the Kreisau Circle. Executed 2 February 1945 in Berlin.
Rufino Niccacci, a Franciscan friar and priest who sheltered Jewish refugees in Assisi, Italy, from September 1943 through June 1944.
Maximilian Kolbe – Polish Conventual Franciscan friar. During the Second World War, in the friary, Kolbe provided shelter to people from Greater Poland, including 2,000 Jews. He was also active as a radio amateur, vilifying Nazi activities through his reports.
Bernhard Lichtenberg – German Catholic priest at Berlin's Cathedral. Sent to Dachau because he prayed for Jews at Evening Prayer.
Sára Salkaházi – a Hungarian Roman Catholic nun who sheltered approximately 100 Jews in Budapest.
Margit Slachta, of the Hungarian Social Service Sisterhood, went to Rome to encourage papal action against the Jewish persecutions. In Hungary, she had sheltered the persecuted and protested forced labour and antisemitism. In 1944, Pius appealed directly to the Hungarian government to halt the deportation of the Jews of Hungary. Her Sisters of Social Service saved thousands of Hungarian Jews; among them was Sister Sára Salkaházi, recognized by Yad Vashem as well as beatified.
Others
Archbishop Damaskinos – Archbishop of Athens during the German occupation. He formally protested the deportation of Jews and quietly ordered churches under his jurisdiction to issue fake Christian baptismal certificates to Jews fleeing the Nazis. Thousands of Greek Jews in and around Athens were thus able to claim that they were Christian and were thus saved.
Archbishop Stefan of Sofia – Bishop of Sofia and Exarch of Bulgaria, actively supported Dimitar Peshev's pressure against the Bulgarian government to cancel the deportation of the 48,000 Bulgarian Jews.
Bishop George Bell - Bishop of Chichester, England and friend of Dietrich Bonhoeffer. In 1936 Bell received the chair of the International Christian Committee for German Refugees, and in that role he especially supported Jewish Christians, who at that time were supported by neither Jewish nor Christian organizations. He provided a temporary home for exiled Jewish children in his own official residence.
Dietrich Bonhoeffer – a German Lutheran pastor who joined the Abwehr (a German military intelligence organization) which was also the center of the anti-Hitler resistance, and was involved in operations to help German Jews escape to Switzerland. Arrested by the Nazis, he was hanged on 5 April 1945, not long before the war ended.
Metropolitan Bishop Chrysostomos of Zakynthos, who, when ordered by the Axis occupying forces to submit a list of all Jews on the island, submitted a document bearing just two names: his own and the mayor's. Consequently, all 275 Zante Jews were saved.
Omelyan Kovch – Ukrainian Greek Catholic priest who was deported to Majdanek for helping thousands of Jews. He was beatified by Pope John Paul II.
Dimitar Peshev was the Deputy Speaker of the National Assembly of Bulgaria and Minister of Justice (1935–1936), before World War II. He rebelled against the pro-Nazi cabinet and prevented the deportation of Bulgaria's 48,000 Jews, and was bestowed the title of "Righteous Among the Nations".
Leopold Socha was a Polish sewage inspector in the city of Lwów (now Lviv, Ukraine). During the Holocaust, Socha used his knowledge of the city's sewage system to shelter a group of Jews from Nazi Germans and their supporters of different nationalities. In 1978, he was recognized by the State of Israel as Righteous Among the Nations.
Andrey Sheptytsky – Metropolitan Archbishop of the Ukrainian Greek Catholic Church, harbored hundreds of Jews in his residence and in Greek Catholic monasteries. He also issued the pastoral letter, "Thou Shalt Not Kill", to protest Nazi atrocities.
André and Magda Trocmé – A French Reformed pastor and his wife who led the Le Chambon-sur-Lignon village movement that saved 3,000–5,000 Jews.
Maria Skobtsova – Russian Orthodox nun who ran a shelter for alcoholics, drug addicts and homeless people; the shelter was also open for refugees who had fled from the Soviet Union. During the first three years of the war she also took in several hundred Jewish people fearing persecution. She died in Ravensbrück concentration camp near the end of the war, after almost two years in the camp. Canonized by the Eastern Orthodox Church as a saint, she is also named a Righteous Among the Nations by Yad Vashem.
Quakers
The Religious Society of Friends, known as Quakers, from 1933 played a major role in assisting and saving Jews through their international network of centres (Berlin, Paris, Vienna) and organizations. In 1947, the Nobel Peace Prize was awarded to the Friends Service Council and to the American Friends Service Committee. Also individual Friends did rescue work.
Bertha Bracey – As secretary of the Germany Emergency Commission, set up 7 April 1933, in Britain, she raised awareness for the dangers of the Nazi philosophy. With voluntary workers, she handled appeals for assistance from Germany, Austria and Czechoslovakia and contributed substantially to the Kindertransport which brought 10,000 children to England.
Elisabeth Abegg – On 23 May 1967, Yad Vashem recognized German Quaker Elisabeth Abegg as Righteous Among the Nations. She helped many Jewish people by offering them accommodation in her home or directing them to hiding places elsewhere.
Kees Boeke and Betty Boeke-Cadbury – On 4 July 1991, Yad Vashem recognized Cornelis Boeke and his wife Beatrice Boeke-Cadbury as Righteous Among the Nations for hiding Jewish children in Bilthoven.
Laura van den Hoek Ostende – On 29 September 1994, Yad Vashem recognized Dutch Quaker Laura van den Hoek Ostende-van Honk as Righteous Among the Nations for hiding Jews in Putten, Hilversum and Amsterdam.
Mary Elmes – On 23 January 2013, Yad Vashem recognized Irish Quaker Mary Elisabeth Elmes as Righteous Among the Nations for rescuing Jewish children in France.
Auguste Fuchs-Bucholz and Fritz Fuchs – On 11 August 2009, Yad Vashem recognized German Quakers Auguste Fuchs-Bucholz and Fritz Fuchs as Righteous Among the Nations.
Carl Hermann and Eva Hermann-Lueddecke – On 19 January 1976, Yad Vashem recognized German Quakers Carl Hermann and Eva Hermann-Lueddecke as Righteous Among the Nations.
Gilbert Lesage – On 14 January 1985, Yad Vashem recognized French Quaker Gilbert Lesage as Righteous Among the Nations.
Gertrud Luckner – On 15 February 1966, Yad Vashem recognized German Quaker Gertrud Luckner as Righteous Among the Nations.
Ernst Lusebrink and Elfriede Lusebrink-Bokenkruger – On 11 August 2009, Yad Vashem recognized German Quakers Ernst Lusebrink and Elfriede Lusebrink-Bokenkruger as Righteous Among the Nations.
Geertruida Pel and Trijntje Pfann – On 15 August 2012, Yad Vashem recognized Dutch Quaker Geertruida Pel and her daughter Trijntje Pfann as Righteous Among the Nations.
Lili Pollatz-Engelsmann and Manfred Pollatz – On 3 December 2013, Yad Vashem recognized German Quakers Lili Louise Pollatz-Engelsmann and Erwin Herbert Manfred Pollatz as Righteous Among the Nations for hiding German and Dutch Jewish children in their home in Haarlem, Netherlands. (Wijnberg, I., and Hollaender, A., Er wacht nog een kind..., De quakers Lili en Manfred Pollatz, hun school en kindertehuis in Haarlem 1934–1945, AMB Diemen, 2014.)
Ilse Schwersensky-Zimmermann and Gerhard Schwersensky – On 2 May 1985, Yad Vashem recognized German Quakers Gerhard Schwersensky and Ilse Schwersensky-Zimmermann as Righteous Among the Nations for hiding Jews in Berlin.
Villages helping Jews
Yaruga, Ukraine
Le Chambon-sur-Lignon, in the Haute-Loire département in France, which saved up to 5,000 Jews.
In occupied Poland, among the hundreds of villages involved, some of the most notable included Głuchów near Łańcut, with everyone engaged, as well as the villages of Główne, Ozorków, Borkowo near Sierpc, Dąbrowica near Ulanów, Głupianka near Otwock, and Teresin near Chełm. In Cisie near Warsaw, 25 Poles were caught hiding Jews; all were killed and the village was burned to the ground as punishment. In Gołąbki, Jerzy and Irena Krępeć provided a hiding place for as many as 30 Jews on their farm and set up homeschooling for all children, Christian and Jewish together; their actions were "an open secret in the village," and other villagers helped "if only to provide a meal." Another farm couple, Alfreda and Bolesław Pietraszek, provided shelter for Jewish families totalling 18 people in Ceranów near Sokołów Podlaski, and their neighbors brought food to those being rescued. In Markowa, where 17 Jews survived the war in hiding with their Christian neighbors, the entire Polish family of Józef and Wiktoria Ulma, including six children and an unborn child, was shot dead by the Germans for hiding the Szall and Goldman families. Dorota and Antoni Szylar hid seven members of the Weltz family; Julia and Józef Bar hid five members of the Reisenbach family; Michał Bar hid Jakub Lorbenfeld; and Jan and Weronika Przybylak hid Jakub Einhorn.
Tršice, Czech Republic, many people from this village helped hide a Jewish family; six of them were given the honorific of Righteous Among the Nations.
Nieuwlande, Netherlands – during the war, this small village contained 117 inhabitants. Most households in the village and surrounding area cooperated to shelter Jews, thus making it difficult for anyone in the small village to betray their neighbors. Dozens of Jews were thus saved. Over 200 inhabitants have been honored by Yad Vashem.
Moissac, France – There was a Jewish boarding home and orphanage in this town. When the mayor was told that the Nazis were coming, the older students would go camping for several days, while the younger students were boarded with families in the area, who were told to treat them as members of their immediate family; the oldest students hid in the house. When it became too dangerous for the students to stay there any longer, the residents made sure that every student had a safe place to go to. If the students had to move again, the counsellors from the boarding house arranged for a new place and even escorted them to the new housing.
The Portuguese cities of Figueira da Foz, Porto, Coimbra, Curia, Ericeira and Caldas da Rainha were assigned to house refugees. They were pleasant resorts with many available hotels. The refugees led totally ordinary lives. They were allowed to circulate freely within town limits, practice their religions, and enroll their children in local schools. "Here we were given freedom of movement; we were allowed to go on outings and live as we wished", said Ben-Zwi Kalischer. Those times were captured on films that can be found at the Steven Spielberg Film and Video Archive.
Oľšavica, Slovakia
Others
The American Jewish Joint Distribution Committee
The Jewish Labor Committee
See also
Arab rescue efforts during the Holocaust
British Hero of the Holocaust
Jewish settlement in the Japanese Empire
Rescue of Roma during the Porajmos
Rescuer (genocide)
Footnotes
Citations
Sources
Further reading
External links
The Jewish Foundation for the Righteous: Stories of Moral Courage
About the "Righteous Among the Nations" Program at Yad Vashem
Lists of people by activity
People of the Holocaust
Responses to genocide
The Holocaust-related lists
Sobolev space

In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of L^p-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that weak solutions of some important partial differential equations exist in appropriate Sobolev spaces, even when there are no strong solutions in spaces of continuous functions with the derivatives understood in the classical sense.
Motivation
In this section and throughout the article, Ω is an open subset of ℝⁿ.
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous), and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class C¹ — see Differentiability classes). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space C¹ (or C², etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations.
Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms. A typical example is measuring the energy of a temperature or velocity distribution by an L²-norm. It is therefore important to develop a tool for differentiating Lebesgue space functions.
The integration by parts formula yields that for every u ∈ C^k(Ω), where k is a natural number, and for all infinitely differentiable functions with compact support φ ∈ C_c^∞(Ω),

    ∫_Ω u D^α φ dx = (−1)^{|α|} ∫_Ω φ D^α u dx,

where α is a multi-index of order |α| = k and we are using the notation:

    D^α f = ∂^{|α|} f / (∂x₁^{α₁} ⋯ ∂x_n^{α_n}).

The left-hand side of this equation still makes sense if we only assume u to be locally integrable. If there exists a locally integrable function v, such that

    ∫_Ω u D^α φ dx = (−1)^{|α|} ∫_Ω φ v dx   for all φ ∈ C_c^∞(Ω),

then we call v the weak α-th partial derivative of u. If there exists a weak α-th partial derivative of u, then it is uniquely defined almost everywhere, and thus it is uniquely determined as an element of a Lebesgue space. On the other hand, if u ∈ C^k(Ω), then the classical and the weak derivative coincide. Thus, if v is a weak α-th partial derivative of u, we may denote it by D^α u := v.
For example, the function

    f(x) = 1 + x   for −1 < x < 0,
           10      for x = 0,
           1 − x   for 0 < x < 1,
           0       otherwise,

is not continuous at zero, and not differentiable at −1, 0, or 1. Yet the function

    g(x) = 1    for −1 < x < 0,
           −1   for 0 < x < 1,
           0    otherwise,

satisfies the definition for being the weak derivative of f(x), which then qualifies as being in the Sobolev space W^{1,p} (for any allowed p, see definition below).
The Sobolev spaces combine the concepts of weak differentiability and Lebesgue norms.
Sobolev spaces with integer k
One-dimensional case
In the one-dimensional case, the Sobolev space W^{k,p}(ℝ) for 1 ≤ p < ∞ is defined as the subset of functions f in L^p(ℝ) such that f and its weak derivatives up to order k have a finite L^p norm. As mentioned above, some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that the (k−1)-th derivative f^{(k−1)} is differentiable almost everywhere and is equal almost everywhere to the Lebesgue integral of its derivative (this excludes irrelevant examples such as Cantor's function).
With this definition, the Sobolev spaces admit a natural norm,

    ||f||_{k,p} = ( Σ_{i=0}^{k} ||f^{(i)}||_p^p )^{1/p} = ( Σ_{i=0}^{k} ∫ |f^{(i)}(t)|^p dt )^{1/p}.

One can extend this to the case p = ∞, with the norm then defined using the essential supremum by

    ||f||_{k,∞} = max_{i=0,…,k} ||f^{(i)}||_∞ = max_{i=0,…,k} ess sup_t |f^{(i)}(t)|.

Equipped with the norm ||·||_{k,p}, W^{k,p} becomes a Banach space. It turns out that it is enough to take only the first and last term in the sequence, i.e., the norm defined by

    ||f^{(k)}||_p + ||f||_p

is equivalent to the norm above (i.e. the induced topologies of the norms are the same).
The case
Sobolev spaces with p = 2 are especially important because of their connection with Fourier series and because they form a Hilbert space. A special notation has arisen to cover this case, since the space is a Hilbert space:

    H^k = W^{k,2}.

The space H^k can be defined naturally in terms of Fourier series whose coefficients decay sufficiently rapidly, namely,

    H^k(𝕋) = { f ∈ L²(𝕋) : Σ_{n=−∞}^{∞} (1 + n² + n⁴ + ⋯ + n^{2k}) |f̂(n)|² < ∞ },

where f̂ is the Fourier series of f, and 𝕋 denotes the 1-torus. As above, one can use the equivalent norm

    ||f||² = Σ_{n=−∞}^{∞} (1 + |n|²)^k |f̂(n)|².

Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient f̂(n) by in.
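As a numerical illustration of the Fourier-side norm (the sample count and test function are arbitrary choices), the H¹ norm of f(x) = sin x on the torus, with the normalized measure dx/(2π), can be computed both from discrete Fourier coefficients and by direct quadrature of |f|² + |f′|²; both give 1.

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x)

# Fourier coefficients w.r.t. normalized measure: fhat(n) = (1/2π) ∫ f e^{-inx} dx;
# for a band-limited function the DFT recovers them exactly.
fhat = np.fft.fft(f) / N
n = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies 0, 1, ..., -1

h1_fourier = np.sum((1 + n**2) * np.abs(fhat) ** 2)   # Σ (1+|n|²) |fhat(n)|²

# Direct side: (1/2π) ∫ (|f|² + |f'|²) dx, with f' = cos x in closed form.
h1_direct = np.mean(f**2 + np.cos(x) ** 2)
```

Since sin x has only the frequencies n = ±1 with |f̂(±1)| = 1/2, the Fourier sum is 2·(1/4) + 2·(1/4) = 1, matching the quadrature of sin² + cos² = 1.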
Furthermore, the space H^k admits an inner product, like the space H⁰ = L². In fact, the H^k inner product is defined in terms of the L² inner product:

    ⟨u, v⟩_{H^k} = Σ_{i=0}^{k} ⟨D^i u, D^i v⟩_{L²}.

The space H^k becomes a Hilbert space with this inner product.
Other examples
In one dimension, some other Sobolev spaces permit a simpler description. For example, W^{1,1}(0, 1) is the space of absolutely continuous functions on (0, 1) (or rather, equivalence classes of functions that are equal almost everywhere to such), while W^{1,∞}(I) is the space of bounded Lipschitz functions on I, for every interval I. However, these properties are lost or not as simple for functions of more than one variable.
All spaces W^{k,∞} are (normed) algebras, i.e. the product of two elements is once again a function of this Sobolev space, which is not the case for p < ∞. (E.g., functions behaving like |x|^{−1/3} at the origin are in L², but the product of two such functions is not in L².)
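This failure can be seen from the closed form of the truncated integral ∫_ε¹ x^{−β} dx, which stays bounded as ε → 0 exactly when β < 1; squaring |x|^{−1/3} doubles the exponent in the L² integral from 2/3 to 4/3. A small sketch, illustrative only (the cutoff ε is an arbitrary choice):

```python
def trunc_int(beta, eps):
    # ∫_eps^1 x^(-beta) dx in closed form, valid for beta != 1
    return (1 - eps ** (1 - beta)) / (1 - beta)

# f(x) = |x|^(-1/3): the L² integrand is x^(-2/3), which stays bounded ...
finite = trunc_int(2 / 3, 1e-12)      # tends to 3 as eps -> 0
# ... but for f·f the L² integrand is x^(-4/3), which blows up:
divergent = trunc_int(4 / 3, 1e-12)   # grows without bound as eps -> 0
```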
Multidimensional case
The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that f be the integral of f′ does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory.
A formal definition now follows. Let k be a natural number and 1 ≤ p ≤ ∞. The Sobolev space W^{k,p}(Ω) is defined to be the set of all functions f on Ω such that for every multi-index α with |α| ≤ k, the mixed partial derivative

    D^α f = ∂^{|α|} f / (∂x₁^{α₁} ⋯ ∂x_n^{α_n})

exists in the weak sense and is in L^p(Ω), i.e.

    ||D^α f||_{L^p} < ∞.

That is, the Sobolev space is defined as

    W^{k,p}(Ω) = { f ∈ L^p(Ω) : D^α f ∈ L^p(Ω) for all |α| ≤ k }.

The natural number k is called the order of the Sobolev space W^{k,p}(Ω).

There are several choices for a norm for W^{k,p}(Ω). The following two are common and are equivalent in the sense of equivalence of norms:

    ||f||_{W^{k,p}(Ω)} = ( Σ_{|α| ≤ k} ||D^α f||_{L^p(Ω)}^p )^{1/p}   for 1 ≤ p < ∞   (and max_{|α| ≤ k} ||D^α f||_{L^∞(Ω)} for p = ∞),

and

    ||f||'_{W^{k,p}(Ω)} = Σ_{|α| ≤ k} ||D^α f||_{L^p(Ω)}   for 1 ≤ p < ∞   (and Σ_{|α| ≤ k} ||D^α f||_{L^∞(Ω)} for p = ∞).

With respect to either of these norms, W^{k,p}(Ω) is a Banach space. For p < ∞, W^{k,p}(Ω) is also a separable space. It is conventional to denote W^{k,2}(Ω) by H^k(Ω), for it is a Hilbert space with the norm ||·||_{W^{k,2}(Ω)}.
Approximation by smooth functions
It is rather hard to work with Sobolev spaces relying only on their definition. It is therefore interesting to know that by the Meyers–Serrin theorem a function f ∈ W^{k,p}(Ω) can be approximated by smooth functions. This fact often allows us to translate properties of smooth functions to Sobolev functions. If p is finite and Ω is open, then there exists for any f ∈ W^{k,p}(Ω) an approximating sequence of functions f_m ∈ C^∞(Ω) ∩ W^{k,p}(Ω) such that:

    ||f_m − f||_{W^{k,p}(Ω)} → 0.

If Ω has Lipschitz boundary, we may even assume that the f_m are the restriction of smooth functions with compact support on all of ℝⁿ.
Examples
In higher dimensions, it is no longer true that, for example, W^{1,1} contains only continuous functions. For example, 1/|x| belongs to W^{1,1}(B³), where B³ is the unit ball in three dimensions. For k > n/p the space W^{k,p}(Ω) will contain only continuous functions, but for which k this is already true depends both on p and on the dimension. For example, as can be easily checked using spherical polar coordinates, for the function f : Bⁿ → ℝ ∪ {∞} defined on the n-dimensional unit ball by f(x) = |x|^{−α} (with α > 0) we have:

    f ∈ W^{k,p}(Bⁿ)  ⟺  α < n/p − k.
Intuitively, the blow-up of f at 0 "counts for less" when n is large since the unit ball has "more outside and less inside" in higher dimensions.
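For f(x) = |x|^{−α} on the n-dimensional unit ball, the membership criterion α < n/p − k can be packaged as a one-line exponent check (a sketch; the helper name is ours, not standard): in spherical coordinates the L^p integral of the k-th derivatives, which scale like r^{−α−k}, reduces to ∫₀¹ r^{(−α−k)p + n − 1} dr, finite exactly when the exponent exceeds −1.

```python
def radial_power_in_Wkp(alpha, k, n, p):
    """Is f(x) = |x|^(-alpha) in W^{k,p} of the n-dimensional unit ball?

    The k-th derivatives scale like r^(-alpha - k), so the L^p integral
    reduces to the radial integral of r^((-alpha - k) p + n - 1), which is
    finite iff the exponent is > -1, i.e. iff alpha < n/p - k.
    """
    return (-alpha - k) * p + n - 1 > -1

in_w11 = radial_power_in_Wkp(alpha=1, k=1, n=3, p=1)  # 1/|x| in W^{1,1}(B³)
in_w12 = radial_power_in_Wkp(alpha=1, k=1, n=3, p=2)  # 1/|x| in W^{1,2}(B³)?
```

This reproduces the example from the text: 1/|x| lies in W^{1,1} of the three-dimensional unit ball, but not in W^{1,2}.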
Absolutely continuous on lines (ACL) characterization of Sobolev functions
Let 1 ≤ p ≤ ∞. If a function f is in W^{1,p}(Ω), then, possibly after modifying the function on a set of measure zero, the restriction to almost every line parallel to the coordinate directions in ℝⁿ is absolutely continuous; what's more, the classical derivatives along the lines that are parallel to the coordinate directions are in L^p(Ω). Conversely, if the restriction of f to almost every line parallel to the coordinate directions is absolutely continuous, then the pointwise gradient ∇f exists almost everywhere, and f is in W^{1,p}(Ω) provided f and |∇f| are both in L^p(Ω). In particular, in this case the weak partial derivatives of f and the pointwise partial derivatives of f agree almost everywhere. The ACL characterization of the Sobolev spaces was established by Otto M. Nikodym (1933); see .
A stronger result holds when p > n. A function in W^{1,p}(Ω) is then, after modifying on a set of measure zero, Hölder continuous of exponent γ = 1 − n/p, by Morrey's inequality. In particular, if p = ∞ and Ω has Lipschitz boundary, then the function is Lipschitz continuous.
Functions vanishing at the boundary
The Sobolev space W^{1,2}(Ω) is also denoted by H¹(Ω). It is a Hilbert space, with an important subspace H¹₀(Ω) defined to be the closure in H¹(Ω) of the infinitely differentiable functions compactly supported in Ω. The Sobolev norm defined above reduces here to

    ||f||_{H¹} = ( ||f||²_{L²} + ||∇f||²_{L²} )^{1/2}.

When Ω has a regular boundary, H¹₀(Ω) can be described as the space of functions in H¹(Ω) that vanish at the boundary, in the sense of traces (see below). When n = 1, if Ω = (a, b) is a bounded interval, then H¹₀(a, b) consists of continuous functions on [a, b] of the form

    f(x) = ∫_a^x f′(t) dt,

where the generalized derivative f′ is in L²(a, b) and has 0 integral, so that f(b) = f(a) = 0.

When Ω is bounded, the Poincaré inequality states that there is a constant C = C(Ω) such that:

    ∫_Ω |f|² ≤ C² ∫_Ω |∇f|²   for all f ∈ H¹₀(Ω).

When Ω is bounded, the injection from H¹₀(Ω) to L²(Ω) is compact. This fact plays a role in the study of the Dirichlet problem, and in the fact that there exists an orthonormal basis of L²(Ω) consisting of eigenvectors of the Laplace operator (with Dirichlet boundary condition).
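On Ω = (0, 1) the best Poincaré constant is C = 1/π, the reciprocal square root of the smallest Dirichlet eigenvalue π² of −u″. A finite-difference sketch (the grid size is an arbitrary illustrative choice) recovers it:

```python
import numpy as np

N = 400                           # interior grid points
h = 1.0 / (N + 1)
# Tridiagonal finite-difference matrix for -u'' with u(0) = u(1) = 0
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
lam1 = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue, approx pi^2
C = 1.0 / np.sqrt(lam1)           # best Poincare constant, approx 1/pi
```

The discrete eigenvalue converges to π² at rate O(h²), so even this modest grid pins down C to several digits.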
Traces
Sobolev spaces are often considered when investigating partial differential equations. It is essential to consider boundary values of Sobolev functions. If u ∈ C(Ω̄), those boundary values are described by the restriction u|_{∂Ω}. However, it is not clear how to describe values at the boundary for u ∈ W^{1,p}(Ω), as the n-dimensional measure of the boundary is zero. The following theorem resolves the problem:

Trace theorem. Assume Ω is bounded with Lipschitz boundary. Then there exists a bounded linear operator T : W^{1,p}(Ω) → L^p(∂Ω) such that Tu = u|_{∂Ω} for all u ∈ W^{1,p}(Ω) ∩ C(Ω̄).

Tu is called the trace of u. Roughly speaking, this theorem extends the restriction operator to the Sobolev space W^{1,p}(Ω) for well-behaved Ω. Note that the trace operator T is in general not surjective, but for 1 < p < ∞ it maps continuously onto the Sobolev–Slobodeckij space W^{1−1/p, p}(∂Ω).

Intuitively, taking the trace costs 1/p of a derivative. The functions u in W^{1,p}(Ω) with zero trace, i.e. Tu = 0, can be characterized by the equality

    W^{1,p}_0(Ω) = { u ∈ W^{1,p}(Ω) : Tu = 0 },

where

    W^{1,p}_0(Ω) := { u ∈ W^{1,p}(Ω) : there exist u_m ∈ C_c^∞(Ω) with u_m → u in W^{1,p}(Ω) }.

In other words, for Ω bounded with Lipschitz boundary, trace-zero functions in W^{1,p}(Ω) can be approximated by smooth functions with compact support.
Sobolev spaces with non-integer k
Bessel potential spaces
For a natural number k and 1 < p < ∞, one can show (by using Fourier multipliers) that the space W^{k,p}(ℝⁿ) can equivalently be defined as

    W^{k,p}(ℝⁿ) = H^{k,p}(ℝⁿ) := { f ∈ L^p(ℝⁿ) : F⁻¹[(1 + |ξ|²)^{k/2} F f] ∈ L^p(ℝⁿ) },

with the norm

    ||f||_{H^{k,p}(ℝⁿ)} := || F⁻¹[(1 + |ξ|²)^{k/2} F f] ||_{L^p(ℝⁿ)},

where F denotes the Fourier transform. This motivates Sobolev spaces with non-integer order, since in the above definition we can replace k by any real number s. The resulting spaces

    H^{s,p}(ℝⁿ) := { f ∈ S′(ℝⁿ) : F⁻¹[(1 + |ξ|²)^{s/2} F f] ∈ L^p(ℝⁿ) }

are called Bessel potential spaces (named after Friedrich Bessel). They are Banach spaces in general and Hilbert spaces in the special case p = 2.

For an open set Ω ⊆ ℝⁿ and s ≥ 0, H^{s,p}(Ω) is the set of restrictions of functions from H^{s,p}(ℝⁿ) to Ω, equipped with the norm

    ||f||_{H^{s,p}(Ω)} := inf { ||g||_{H^{s,p}(ℝⁿ)} : g ∈ H^{s,p}(ℝⁿ), g|_Ω = f }.
Again, Hs,p(Ω) is a Banach space and in the case p = 2 a Hilbert space.
Using extension theorems for Sobolev spaces, it can be shown that also W^{k,p}(Ω) = H^{k,p}(Ω) holds in the sense of equivalent norms, if Ω is a domain with uniform C^k-boundary, k a natural number and 1 < p < ∞. By the embeddings

    H^{k+1,p}(ℝⁿ) ↪ H^{s′,p}(ℝⁿ) ↪ H^{s,p}(ℝⁿ) ↪ H^{k,p}(ℝⁿ),   k ≤ s ≤ s′ ≤ k + 1,

the Bessel potential spaces H^{s,p}(ℝⁿ) form a continuous scale between the Sobolev spaces W^{k,p}(ℝⁿ). From an abstract point of view, the Bessel potential spaces occur as complex interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms it holds that

    [W^{k,p}(ℝⁿ), W^{k+1,p}(ℝⁿ)]_θ = H^{s,p}(ℝⁿ),

where:

    0 < θ < 1,   s = (1 − θ)k + θ(k + 1) = k + θ.
Sobolev–Slobodeckij spaces
Another approach to define fractional-order Sobolev spaces arises from the idea to generalize the Hölder condition to the L^p-setting. For 1 ≤ p < ∞, θ ∈ (0, 1) and f ∈ L^p(Ω), the Slobodeckij seminorm (roughly analogous to the Hölder seminorm) is defined by

    [f]_{θ,p,Ω} = ( ∫_Ω ∫_Ω |f(x) − f(y)|^p / |x − y|^{θp + n} dx dy )^{1/p}.

Let s > 0 be not an integer and set θ = s − ⌊s⌋ ∈ (0, 1). Using the same idea as for the Hölder spaces, the Sobolev–Slobodeckij space W^{s,p}(Ω) is defined as

    W^{s,p}(Ω) := { f ∈ W^{⌊s⌋,p}(Ω) : sup_{|α| = ⌊s⌋} [D^α f]_{θ,p,Ω} < ∞ }.

It is a Banach space for the norm

    ||f||_{W^{s,p}(Ω)} := ||f||_{W^{⌊s⌋,p}(Ω)} + sup_{|α| = ⌊s⌋} [D^α f]_{θ,p,Ω}.
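For a hand-checkable case of the Slobodeckij double integral ∫_Ω ∫_Ω |f(x) − f(y)|^p / |x − y|^{θp+n} dx dy, take f(x) = x² on Ω = (0, 1) with θ = 1/2 and p = 2, so the kernel exponent is θp + n = 2 and the integrand simplifies to (x + y)², whose integral over the unit square is 7/6; hence the seminorm is √(7/6) ≈ 1.0801. A midpoint-rule sketch (grid size is an arbitrary illustrative choice; the measure-zero diagonal x = y is simply skipped):

```python
import numpy as np

N = 400
t = (np.arange(N) + 0.5) / N                 # midpoint grid on (0, 1)
x, y = np.meshgrid(t, t, indexing="ij")

f = lambda u: u**2
num = np.abs(f(x) - f(y)) ** 2               # |f(x) - f(y)|^p with p = 2
den = np.abs(x - y) ** 2                     # |x - y|^(theta*p + n) = |x - y|^2
off_diag = x != y                            # skip the measure-zero diagonal
integral = np.sum(num[off_diag] / den[off_diag]) / N**2
seminorm = np.sqrt(integral)                 # approx sqrt(7/6)
```

Skipping the diagonal introduces an O(1/N) quadrature error, small at this resolution.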
If Ω is suitably regular in the sense that there exist certain extension operators, then the Sobolev–Slobodeckij spaces also form a scale of Banach spaces, i.e. one has the continuous injections or embeddings

    W^{k+1,p}(Ω) ↪ W^{s′,p}(Ω) ↪ W^{s,p}(Ω) ↪ W^{k,p}(Ω),   k ≤ s ≤ s′ ≤ k + 1.

There are examples of irregular Ω such that W^{1,p}(Ω) is not even a vector subspace of W^{s,p}(Ω) for 0 < s < 1 (see Example 9.1 of ).

From an abstract point of view, the spaces W^{s,p}(Ω) coincide with the real interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms the following holds:

    W^{s,p}(Ω) = (W^{k,p}(Ω), W^{k+1,p}(Ω))_{θ,p},   k ∈ ℕ,  s ∈ (k, k + 1),  θ = s − ⌊s⌋.
Sobolev–Slobodeckij spaces play an important role in the study of traces of Sobolev functions. They are special cases of Besov spaces.
The constant arising in the characterization of the fractional Sobolev space Ws,p(Ω) can be characterized through the Bourgain–Brezis–Mironescu formula:

lim_{s↗1} (1 − s) ∫Ω ∫Ω |f(x) − f(y)|^p / |x − y|^{sp + n} dx dy = K(p, n) ∫Ω |∇f(x)|^p dx;

and the condition

liminf_{s↗1} (1 − s) ∫Ω ∫Ω |f(x) − f(y)|^p / |x − y|^{sp + n} dx dy < ∞

characterizes those functions of Lp(Ω) that are in the first-order Sobolev space W1,p(Ω).
Extension operators
If Ω is a domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive "cone condition") then there is an operator A mapping functions of Ω to functions of Rn such that:

Au(x) = u(x) for almost every x in Ω, and
A : Wk,p(Ω) → Wk,p(Rn) is continuous for any 1 ≤ p ≤ ∞ and integer k.

We will call such an operator A an extension operator for Ω.
Case of p = 2
Extension operators are the most natural way to define Hs(Ω) for non-integer s (we cannot work directly on Ω since taking the Fourier transform is a global operation). We define Hs(Ω) by saying that u ∈ Hs(Ω) if and only if Au ∈ Hs(Rn). Equivalently, complex interpolation yields the same Hs(Ω) spaces so long as Ω has an extension operator. If Ω does not have an extension operator, complex interpolation is the only way to obtain the Hs(Ω) spaces.
As a result, the interpolation inequality still holds.
Extension by zero
Like above, we define Hs0(Ω) to be the closure in Hs(Ω) of the space C∞c(Ω) of infinitely differentiable compactly supported functions. Given the definition of a trace, above, we may state the following
If u ∈ Hs0(Ω) we may define its extension by zero ũ in the natural way, namely

ũ(x) = u(x) if x ∈ Ω, and ũ(x) = 0 otherwise.

Theorem. Suppose s ≥ 0 is not of the form k + 1/2 for a nonnegative integer k. Then for u ∈ Hs0(Ω), its extension by zero ũ is an element of Hs(Rn). Furthermore,

||ũ||Hs(Rn) ≤ C ||u||Hs(Ω)

for a constant C independent of u.
In the case of the Sobolev space W1,p(Ω) for 1 ≤ p ≤ ∞, extending a function u by zero will not necessarily yield an element of W1,p(Rn). But if Ω is bounded with Lipschitz boundary (e.g. ∂Ω is C1), then for any bounded open set O such that Ω⊂⊂O (i.e. Ω is compactly contained in O), there exists a bounded linear operator

E : W1,p(Ω) → W1,p(Rn),

such that for each u ∈ W1,p(Ω): Eu = u a.e. on Ω, Eu has compact support within O, and there exists a constant C depending only on p, Ω, O and the dimension n, such that

||Eu||W1,p(Rn) ≤ C ||u||W1,p(Ω).

We call Eu an extension of u to Rn.
Sobolev embeddings
It is a natural question to ask if a Sobolev function is continuous or even continuously differentiable. Roughly speaking, sufficiently many weak derivatives (i.e. large k) result in a classical derivative. This idea is generalized and made precise in the Sobolev embedding theorem.
Write Wk,p for the Sobolev space of some compact Riemannian manifold of dimension n. Here k can be any real number, and 1 ≤ p ≤ ∞. (For p = ∞ the Sobolev space Wk,∞ is defined to be the Hölder space Cn,α where k = n + α and 0 < α ≤ 1.) The Sobolev embedding theorem states that if k ≥ m and k − n/p ≥ m − n/q then

Wk,p ⊆ Wm,q

and the embedding is continuous. Moreover, if k > m and k − n/p > m − n/q then the embedding is completely continuous (this is sometimes called Kondrachov's theorem or the Rellich–Kondrachov theorem). Functions in Wm,∞ have all derivatives of order less than m continuous, so in particular this gives conditions on Sobolev spaces for various derivatives to be continuous. Informally these embeddings say that to convert an Lp estimate to a boundedness estimate costs 1/p derivatives per dimension.
There are similar variations of the embedding theorem for non-compact manifolds such as . Sobolev embeddings on that are not compact often have a related, but weaker, property of cocompactness.
See also
Sobolev mapping
Souček space
Besov space
Triebel–Lizorkin space
Notes
References
S. L. Sobolev, "On a theorem of functional analysis"; translation of Mat. Sb., 4 (1938) pp. 471–497.
External links
Eleonora Di Nezza, Giampiero Palatucci, Enrico Valdinoci (2011). "Hitchhiker's guide to the fractional Sobolev spaces".
Fourier analysis
Fractional calculus
Function spaces

Hadamard's inequality
https://en.wikipedia.org/wiki/Hadamard%27s%20inequality

In mathematics, Hadamard's inequality (also known as Hadamard's theorem on determinants) is a result first published by Jacques Hadamard in 1893. It is a bound on the determinant of a matrix whose entries are complex numbers in terms of the lengths of its column vectors. In geometrical terms, when restricted to real numbers, it bounds the volume in Euclidean space of n dimensions marked out by n vectors vi for 1 ≤ i ≤ n in terms of the lengths of these vectors ||vi||.
Specifically, Hadamard's inequality states that if N is the matrix having columns vi, then

|det(N)| ≤ ∏_{i=1}^{n} ||vi||.
If the n vectors are non-zero, equality in Hadamard's inequality is achieved if and only if the vectors are orthogonal.
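As a quick numerical sanity check of the bound and its equality case (a standalone sketch; the matrices below are illustrative examples, not from the source), one can compare |det(N)| against the product of the column lengths:

```python
from math import sqrt, prod

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def column_norm_product(m):
    # Product of the Euclidean lengths of the columns (Hadamard's bound).
    return prod(sqrt(sum(m[i][j]**2 for i in range(3))) for j in range(3))

# Orthogonal columns (1,2,2), (2,1,-2), (2,-2,1): equality, |det| = 3*3*3.
N = [[1, 2, 2],
     [2, 1, -2],
     [2, -2, 1]]
print(abs(det3(N)), column_norm_product(N))               # 27 27.0

# Non-orthogonal columns: strict inequality, |det| = 1 < 2.
M = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(abs(det3(M)), round(column_norm_product(M), 12))    # 1 2.0
```

Scaling any column scales both sides of the inequality by the same factor, which is why the proof below can reduce to unit-length columns.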
Alternate forms and corollaries
A corollary is that if the entries of an n by n matrix N are bounded by B, so |Nij| ≤ B for all i and j, then

|det(N)| ≤ B^n n^{n/2}.

In particular, if the entries of N are +1 and −1 only then

|det(N)| ≤ n^{n/2}.
In combinatorics, matrices N for which equality holds, i.e. those with orthogonal columns, are called Hadamard matrices.
More generally, suppose that N is a complex matrix of order n, whose entries are bounded by |Nij| ≤ 1, for each i, j between 1 and n. Then Hadamard's inequality states that

|det(N)| ≤ n^{n/2}.
Equality in this bound is attained for a real matrix N if and only if N is a Hadamard matrix.
A positive-semidefinite matrix P can be written as N*N, where N* denotes the conjugate transpose of N (see Decomposition of a semidefinite matrix). Then, with vi the columns of N,

det(P) = |det(N)|^2 ≤ ∏_{i} ||vi||^2 = ∏_{i} Pii.
So, the determinant of a positive definite matrix is less than or equal to the product of its diagonal entries. Sometimes this is also known as Hadamard's inequality.
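This diagonal form of the bound can be illustrated numerically by forming the Gram matrix P = MᵀM of a real matrix M (a standalone sketch; the example matrix is arbitrary):

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def gram(m):
    # P = M^T M: entry (i, j) is the dot product of columns i and j of M.
    return [[sum(m[k][i] * m[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

M = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
P = gram(M)                      # positive semidefinite by construction
diag_product = P[0][0] * P[1][1] * P[2][2]
print(det3(P), diag_product)     # 1 4 -> det(P) <= product of diagonal
```

Here the diagonal entries of P are the squared column lengths of M, so this is the same Hadamard bound in disguise.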
Proof
The result is trivial if the matrix N is singular, so assume the columns of N are linearly independent. By dividing each column by its length, it can be seen that the result is equivalent to the special case where each column has length 1. In other words, if ei are unit vectors and M is the matrix having the ei as columns, then

|det(M)| ≤ 1,     (1)

and equality is achieved if and only if the vectors are an orthogonal set. The general result now follows:

|det(N)| = (∏_{i} ||vi||) |det(M)| ≤ ∏_{i} ||vi||.
To prove (1), consider P = M*M where M* is the conjugate transpose of M, and let the eigenvalues of P be λ1, λ2, … λn. Since the length of each column of M is 1, each entry in the diagonal of P is 1, so the trace of P is n. Applying the inequality of arithmetic and geometric means,

det(P) = ∏_{i} λi ≤ ((1/n) ∑_{i} λi)^n = ((1/n) tr(P))^n = 1,

so

|det(M)| = √(det(P)) ≤ 1.

If there is equality then the λi must all be equal and their sum is n, so they must all be 1. The matrix P is Hermitian, therefore diagonalizable, so it is the identity matrix—in other words the columns of M are an orthonormal set and the columns of N are an orthogonal set. Many other proofs can be found in the literature.
See also
Fischer's inequality
Notes
References
Further reading
Inequalities
Determinants

Cyclic model
https://en.wikipedia.org/wiki/Cyclic%20model

A cyclic model (or oscillating model) is any of several cosmological models in which the universe follows infinite, or indefinite, self-sustaining cycles. For example, the oscillating universe theory briefly considered by Albert Einstein in 1930 theorized a universe following an eternal series of oscillations, each beginning with a Big Bang and ending with a Big Crunch; in the interim, the universe would expand for a period of time before the gravitational attraction of matter causes it to collapse back in and undergo a bounce.
Overview
In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. In 1922, Alexander Friedmann introduced the Oscillating Universe Theory. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: according to the second law of thermodynamics, entropy can only increase. This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology. In 2011, a five-year survey of 200,000 galaxies and spanning 7 billion years of cosmic time confirmed that "dark energy is driving our universe apart at accelerating speeds."
One new cyclic model is the brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time. The theory could potentially explain why a repulsive form of energy known as the cosmological constant, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model.
A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill.
Other cyclic models include conformal cyclic cosmology and loop quantum cosmology.
The Steinhardt–Turok model
In this cyclic model, two parallel orbifold planes or M-branes collide periodically in a higher-dimensional space. The visible four-dimensional universe lies on one of these branes. The collisions correspond to a reversal from contraction to expansion, or a Big Crunch followed immediately by a Big Bang. The matter and radiation we see today were generated during the most recent collision in a pattern dictated by quantum fluctuations created before the branes. After billions of years the universe reached the state we observe today; after additional billions of years it will ultimately begin to contract again. Dark energy corresponds to a force between the branes, and serves the crucial role of solving the monopole, horizon, and flatness problems. Moreover, the cycles can continue indefinitely into the past and the future, and the solution is an attractor, so it can provide a complete history of the universe.
As Richard C. Tolman showed, the earlier cyclic model failed because the universe would undergo inevitable thermodynamic heat death. However, the newer cyclic model evades this by having a net expansion each cycle, preventing entropy from building up. However, there remain major open issues in the model. Foremost among them is that colliding branes are not understood by string theorists, and nobody knows if the scale invariant spectrum will be destroyed by the big crunch. Moreover, as with cosmic inflation, while the general character of the forces (in the ekpyrotic scenario, a force between branes) required to create the vacuum fluctuations is known, there is no candidate from particle physics.
The Baum–Frampton model
This more recent cyclic model of 2007 assumes an exotic form of dark energy called phantom energy, which possesses negative kinetic energy and would usually cause the universe to end in a Big Rip. This condition is achieved if the universe is dominated by dark energy with a cosmological equation of state parameter w = p/ρ satisfying the condition w < −1, for energy density ρ and pressure p. By contrast, Steinhardt–Turok assume w ≥ −1. In the Baum–Frampton model, a septillionth (or less) of a second (i.e. 10−24 seconds or less) before the would-be Big Rip, a turnaround occurs and only one causal patch is retained as our universe. The generic patch contains no quark, lepton or force carrier; only dark energy – and its entropy thereby vanishes. The adiabatic process of contraction of this much smaller universe takes place with constant vanishing entropy and with no matter including no black holes which disintegrated before turnaround.
The idea that the universe "comes back empty" is a central new idea of this cyclic model, and avoids many difficulties confronting matter in a contracting phase such as excessive structure formation, proliferation and expansion of black holes, as well as going through phase transitions such as those of QCD and electroweak symmetry restoration. Any of these would tend strongly to produce an unwanted premature bounce, simply to avoid violation of the second law of thermodynamics. The condition w < −1 may be logically inevitable in a truly infinitely cyclic cosmology because of the entropy problem. Nevertheless, many technical backup calculations are necessary to confirm consistency of the approach. Although the model borrows ideas from string theory, it is not necessarily committed to strings, or to higher dimensions, yet such speculative devices may provide the most expeditious methods to investigate the internal consistency. The value of w in the Baum–Frampton model can be made arbitrarily close to, but must be less than, −1.
Other cyclic models
Conformal cyclic cosmology—a general relativity based theory by Roger Penrose in which the universe expands until all the matter decays and is turned to light—so there is nothing in the universe that has any time or distance scale associated with it. This permits it to become identical with the Big Bang, so starting the next cycle.
Loop quantum cosmology which predicts a "quantum bridge" between contracting and expanding cosmological branches.
See also
Physical cosmologies:
Big Bounce
Conformal cyclic cosmology
Religion:
Bhavacakra
Cycles of time in Hinduism
Eternal return
Historic recurrence
Kalachakra
Wheel of time
References
Further reading
S. W. Hawking and G. F. R. Ellis, The large-scale structure of space-time (Cambridge, 1973).
External links
Paul J. Steinhardt, Department of Physics, Princeton University
Paul H. Frampton, Department of Physics and Astronomy, The University of North Carolina at Chapel Hill
"The Cyclic Universe": A Talk with Neil Turok
Roger Penrose—Cyclical Universe Model
Physical cosmology
String theory
1920s in science

Potential well
https://en.wikipedia.org/wiki/Potential%20well

A potential well is the region surrounding a local minimum of potential energy. Energy captured in a potential well is unable to convert to another type of energy (kinetic energy in the case of a gravitational potential well) because it is captured in the local minimum of a potential well. Therefore, a body may not proceed to the global minimum of potential energy, as it would naturally tend to do due to entropy.
Overview
Energy may be released from a potential well if sufficient energy is added to the system such that the local maximum is surmounted. In quantum physics, potential energy may escape a potential well without added energy due to the probabilistic characteristics of quantum particles; in these cases a particle may be imagined to tunnel through the walls of a potential well.
The graph of a 2D potential energy function is a potential energy surface that can be imagined as the Earth's surface in a landscape of hills and valleys. Then a potential well would be a valley surrounded on all sides with higher terrain, which thus could be filled with water (e.g., be a lake) without any water flowing away toward another, lower minimum (e.g. sea level).
In the case of gravity, the region around a mass is a gravitational potential well, unless the density of the mass is so low that tidal forces from other masses are greater than the gravity of the body itself.
A potential hill is the opposite of a potential well, and is the region surrounding a local maximum.
Quantum confinement
Quantum confinement can be observed once the diameter of a material is of the same magnitude as the de Broglie wavelength of the electron wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials.
A particle behaves as if it were free when the confining dimension is large compared to the wavelength of the particle. During this state, the bandgap remains at its original energy due to a continuous energy state. However, as the confining dimension decreases and reaches a certain limit, typically in nanoscale, the energy spectrum becomes discrete. As a result, the bandgap becomes size-dependent. As the size of the particles decreases, the electrons and electron holes come closer, and the energy required to activate them increases, which ultimately results in a blueshift in light emission.
Specifically, the effect describes the phenomenon resulting from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. In current application, a quantum dot such as a small sphere confines in three dimensions, a quantum wire confines in two dimensions, and a quantum well confines only in one dimension. These are also known as zero-, one- and two-dimensional potential wells, respectively. In these cases they refer to the number of dimensions in which a confined particle can act as a free carrier. See external links, below, for application examples in biotechnology and solar cell technology.
Quantum mechanics view
The electronic and optical properties of materials are affected by size and shape. Well-established technical achievements, including quantum dots, were derived from size manipulation and investigated for their theoretical corroboration of the quantum confinement effect. The major part of the theory is that the behaviour of the exciton resembles that of an atom as its surrounding space shortens. A rather good approximation of an exciton's behaviour is the 3-D model of a particle in a box. The solution of this problem provides a direct mathematical connection between energy states and the dimension of space. Decreasing the volume or the dimensions of the available space increases the energy of the states. Shown in the diagram is the change in electron energy level and bandgap between nanomaterial and its bulk state.
The following equation shows the relationship between energy level and the dimension of the confining space, in the standard particle-in-a-box approximation for a cubic box of side L:

E(nx, ny, nz) = (ħ²π² / 2mL²)(nx² + ny² + nz²),  nx, ny, nz = 1, 2, 3, …

so the allowed energies grow as 1/L² as the confining dimension L shrinks.
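To get a sense of the scale at which this matters, the sketch below evaluates the ground-state energy of the standard infinite-well (particle-in-a-box) model for an electron; the box sizes (10 nm and 1 nm) are illustrative assumptions, not values from the text:

```python
from math import pi

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electronvolt

def box_ground_state_ev(L):
    # E(1,1,1) = (hbar^2 * pi^2 / (2 m L^2)) * (1 + 1 + 1), in eV.
    return (HBAR**2 * pi**2) / (2 * M_E * L**2) * 3 / EV

# Confinement energy grows as 1/L^2: shrinking the box 10x raises E 100x,
# which is the blueshift described above.
print(f"{box_ground_state_ev(10e-9):.4f} eV")  # ~0.0113 eV for a 10 nm box
print(f"{box_ground_state_ev(1e-9):.2f} eV")   # ~1.13 eV for a 1 nm box
```

At 10 nm the confinement energy is negligible next to typical semiconductor bandgaps; at 1 nm it is comparable to them, which is why the effect only appears at the nanoscale.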
Research results provide an alternative explanation of the shift of properties at nanoscale. In the bulk phase, the surfaces appear to control some of the macroscopically observed properties. However, in nanoparticles, surface molecules do not obey the expected configuration in space. As a result, surface tension changes tremendously.
Classical mechanics view
The Young–Laplace equation can give a background on the investigation of the scale of forces applied to the surface molecules:

ΔP = γ (1/R₁ + 1/R₂),

which for a sphere of radius R reduces to ΔP = 2γ/R.

Under the assumption of spherical shape, solving the Young–Laplace equation for radii on the nanometre scale gives estimated pressure differences ΔP on the order of gigapascals (GPa). The smaller the radius, the greater the pressure. The increase in pressure at the nanoscale results in strong forces toward the interior of the particle. Consequently, the molecular structure of the particle appears to be different from the bulk mode, especially at the surface. These abnormalities at the surface are responsible for changes of inter-atomic interactions and bandgap.
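As an order-of-magnitude sketch of this estimate, the spherical form ΔP = 2γ/r can be evaluated for nanometre radii; the surface tension value γ = 1 N/m used here is an assumed, illustrative figure (roughly typical of solid surface energies), not a value from the text:

```python
def laplace_pressure_gpa(gamma, r):
    # Spherical Young-Laplace: delta_P = 2 * gamma / r, converted to GPa.
    return 2 * gamma / r / 1e9

GAMMA = 1.0  # assumed surface tension/energy, N/m (illustrative)
for r_nm in (100, 10, 1):
    dp = laplace_pressure_gpa(GAMMA, r_nm * 1e-9)
    print(f"r = {r_nm:>3} nm -> {dp:.2f} GPa")
# r = 100 nm -> 0.02 GPa;  r = 10 nm -> 0.20 GPa;  r = 1 nm -> 2.00 GPa
```

The 1/r scaling is the whole story: dropping from 100 nm to 1 nm raises the internal pressure a hundredfold, into the GPa range quoted above.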
See also
Quantum well
Finite potential well
Quantum dot
References
External links
Semiconductor Fundamental
Band Theory of Solid
Quantum dots synthesis
Biological application
Quantum mechanical potentials
Classical mechanics

Bract
https://en.wikipedia.org/wiki/Bract

In botany, a bract is a modified or specialized leaf, especially one associated with a reproductive structure such as a flower, inflorescence axis, or cone scale.
Bracts are usually different from foliage leaves; they may be of a different size, color, shape, or texture. Typically, they also look different from the parts of the flower, such as the petals or sepals.
A plant having bracts is referred to as bracteate or bracteolate, while one that lacks them is referred to as ebracteate or ebracteolate.
Variants
Some bracts are brightly coloured and serve the function of attracting pollinators, either together with the perianth or instead of it. Examples of this type of bract include those of Euphorbia pulcherrima (poinsettia) and Bougainvillea: both of these have large colourful bracts surrounding much smaller, less colourful flowers.
In grasses, each floret (flower) is enclosed in a pair of papery bracts, called the lemma (lower bract) and palea (upper bract), while each spikelet (group of florets) has a further pair of bracts at its base called glumes. These bracts form the chaff removed from cereal grain during threshing and winnowing.
Bats may detect acoustic signals from dish-shaped bracts such as those of Marcgravia evenia.
A prophyll is a leaf-like structure, such as a bracteole, subtending (extending under) a single flower or pedicel. The term can also mean the lower bract on a peduncle.
The frequently showy pair of bracts of Euphorbia species in subgenus Lacanthis are the cyathophylls.
Bracts subtend the cone scales in the seed cones of many conifers, and in some cases, such as Pseudotsuga, they extend beyond the cone scales.
Bracteole
A small bract is called a bracteole or bractlet. Technically this is any bract that arises on a pedicel instead of subtending it.
Involucral bracts
Bracts that appear in a whorl subtending an inflorescence are collectively called an involucre. An involucre is a common feature beneath the inflorescences of many Apiaceae, Asteraceae, Dipsacaceae and Polygonaceae. Each flower in an inflorescence may have its own whorl of bracts, in this case called an involucel. In this case they may be called chaff, paleas, or receptacular bracts and are usually minute scales or bristles. Many asteraceous plants have bracts at the base of each inflorescence.
The term involucre is also used for a highly conspicuous bract or bract pair at the base of an inflorescence. In the family Betulaceae, notably in the genera Carpinus and Corylus, the involucre is a leafy structure that protects the developing nuts. Beggar-tick (Bidens comosa) has narrow involucral bracts surrounding each inflorescence, each of which also has a single bract below it. There is then a pair of leafy bracts on the main stem and below those a pair of leaves.
Epicalyx
An epicalyx, which forms an additional whorl around the calyx of a single flower, is a modification of bracteoles. In other words, the epicalyx is a group of bracts, resembling a calyx or bracteoles, forming a whorl outside the calyx. It is a calyx-like extra whorl of floral appendages. Each individual segment of the epicalyx is called an episepal because it resembles a sepal. Episepals are present in the hibiscus family, Malvaceae. Fragaria (strawberries) may or may not have an epicalyx.
Spathe
A spathe is a large bract or pair of bracts forming a sheath to enclose the flower cluster of such plants as palms, arums, irises, crocuses, and dayflowers (Commelina). Zephyranthes tubispatha in the Amaryllidaceae derives its specific name from its tubular spathe. In many arums (family Araceae), the spathe is petal-like, attracting pollinators to the flowers arranged on a type of spike called a spadix.
See also
Pedicel
Flower
Nectary
Glossary of botanical terms
References
Plant morphology
Leaves

Tree house
https://en.wikipedia.org/wiki/Tree%20house

A tree house, tree fort or treeshed is a platform or building constructed around, next to or among the trunk or branches of one or more mature trees while above ground level. Tree houses can be used for recreation, work space, habitation, a hangout space and observation. People occasionally connect ladders or staircases to get up to the platforms.
History
Prehistoric hypotheses
Building tree platforms or nests as a shelter from dangers on the ground is a habit of all the great apes, and may have been inherited by humans. It is true that evidence of prehistoric human-made tree houses has never been found by paleoanthropologists, but remains of wooden tree houses would not have survived. However, evidence for cave accommodation, terrestrial human-made rock shelters, and bonfires should be possible to find if they had existed, but is scarce from earlier than 40,000 years ago. This has led to a speculative hypothesis that archaic humans may have lived in trees until about 40,000 years ago. The skeletal changes due to the evolution of human bipedalism started at least four million years ago, but early bipedal hominins may still have spent some time in trees and retained some tree-climbing abilities. Early terrestrial bipedalism is supported by evidence such as fossilized bones and footprints (like the Laetoli footprints). According to the savannah hypothesis, this evolution happened as an effect of early humans adapting to life on the ground in savannah environments, partly for more energy-efficient locomotion.
Among indigenous people
Even today, tree houses are built by some indigenous people in order to escape the danger and adversity on the ground in some parts of the tropics. It has been claimed that the majority of the Korowai clans, a Papuan tribe in the southeast of Irian Jaya, live in tree houses on their isolated territory as protection against a tribe of neighbouring head-hunters, the Citak. The BBC revealed in 2018 that the Korowai had constructed some very high tree houses "for the benefit of overseas programme makers" and did not actually live in them. However, the Korowai people still build tree houses, not elevated but fastened to the trunks of tall trees, to protect occupants and store food away from scavenging animals.
In modern societies
Trees have historically been integrated into the construction of buildings, for example the walls of a chapel, to provide support to a structure built around them. Modern tree houses are usually built as play areas for children or for leisure purposes, but may also be used as accommodation in hotels or residential applications. In this case, the main part of the structure is built with more typical construction materials. The use of tree houses in this manner is part of a movement towards the practice of "living architecture".
Tree houses may be considered as an option for building eco-friendly houses in forested areas, because unlike more typical forms of housing, they do not require the clearing of trees.
Support methods and technology
There are numerous techniques to fasten the structure to the tree which seek to minimize tree damage.
The construction of modern tree houses usually starts with the creation of a rigid platform, on which the house will be placed; the platform will lean (possibly on the corners) on the branches. In case there are not enough suitable supports, the methods to support the platform are:
Struts and stilts
Struts and stilts are used to transfer weight to a lower elevation or straight to the ground; tree houses supported by stilts weigh much less on the tree and help to prevent stress, potential strain, and injury caused by puncture holes. Stilts are typically anchored into the ground with concrete, although new designs such as the "Diamond Pier" speed installation and are less invasive for the root system. Stilts are considered the easiest method of supporting larger tree houses and can also increase structural support and safety.
Stay rods
Stay rods are used to transfer weight to a higher elevation. These systems are particularly useful to control movements caused by wind or tree growth. However, they are used less often due to the natural limits of the system. Higher elevation and more branches tailing off decrease capacity and increase wind sensitivity. Building materials for hanging include ropes, wire cables, tension fasteners, and springs.
Friction and tension fasteners
Friction and tension fasteners are the most common noninvasive methods of securing tree houses. They do not use nails, screws or bolts, but instead grip the beams to the trunk by means of counter-beam, threaded bars, or tying.
Invasive methods
Invasive methods are all methods that use nails, screws, bolts, kingpins, etc. Because these methods require punctures in the tree, they must be planned properly in order to minimize stress. Not all species of plants suffer from puncture in the same way, depending partly on whether the sap conduits run in the pith or in the bark. Nails are generally not recommended. A special kind of bolt developed in the 1990s called a treehouse attachment bolt can support greater weights than earlier methods.
Popularity
Since the mid-1990s, recreational tree houses have enjoyed a rise in popularity in countries such as the United States and parts of Europe. This has been due to increased disposable income, better technology for builders, research into safe building practices and an increased interest in environmental issues, particularly sustainable living. This growing popularity is also reflected in a rise of social media channels, websites, and television shows specially dedicated to featuring tree houses around the world.
Increased popularity has, in turn, given rise to demand for businesses covering all building and design work for clients. There are over 30 businesses in Europe and the US specializing in the construction of tree houses of various degrees of permanence and sophistication, from children's play structures to fully functioning homes.
Popularity of tree house hotels is equally growing due to the popularity in the glamping and unique accommodation industries with a number of booking websites offering accommodation in tree houses.
Building regulations
Many areas of the world have no specific planning laws for tree houses, so the legal issues can be confusing to both the builder and the local planning departments. Treehouses can be exempt, partially regulated or fully regulated - depending on the locale.
In some cases, tree houses are exempted from standard building regulations, as they are considered outside of the regulations specification. An exemption may be given to a builder if the tree house is in a remote or non-urban location. Alternatively, a tree house may be included in the same category as structures such as garden sheds, sometimes called a "temporary structure". There may be restrictions on height, distance from boundary and privacy for nearby properties. There are various grey areas in these laws, as they were not specifically designed for tree-borne structures. A very small number of planning departments have specific regulations for tree houses, which set out clearly what may be built and where. For safety during the tree house construction, it is usually best to do as much work as possible on the ground, taking long-term viability into consideration.
Protest communities
The tree house has been central to various environmental protest communities around the world, in a technique, popularized, known as tree sitting. This method may be used in protests against proposed road building or old-growth forestry operations. Tree houses are used as a method of defence from which it is difficult and costly to safely evict the protesters and begin work. Julia Butterfly Hill is a particularly well known tree sitter who occupied a Californian redwood for 738 days (from December 1997 to December 1999), saving the tree and others in the immediate area. Her accommodation consisted of two platforms above the ground.
Gallery
See also
Cubby-hole
Fab Tree Hab, a hypothetical ecological home design
Out'n'About, a tree house oriented bed and breakfast in Cave Junction, Oregon
Stilt house
Tree climbing
Treehouse Masters, an American TV series featuring a builder of custom treehouses
Nest
Wendy house
Treefort Music Fest
References
Further reading
Andreas Wenning: Treehouses: Small Spaces in Nature. 3rd, extended edition. DOM Publishers, Berlin 2015.
External links
Would You Live in a Treehouse? (The Atlantic)
Primitive ancient tree house making Kerala
Garden features
House types
Huts
Trees
Architecture related to utopias
Woodworking | Tree house | [
"Engineering"
] | 1,696 | [
"Architecture related to utopias",
"Architecture"
] |
612,277 | https://en.wikipedia.org/wiki/Hitachi%20HD64180 | The HD64180 is a Z80-based embedded microprocessor developed by Hitachi with an integrated memory management unit (MMU) and on-chip peripherals. It appeared in 1985. The Hitachi HD64180 "Super Z80" was later licensed to Zilog and sold by them as the Z64180 and with some enhancements as the Zilog Z180.
Overview
The HD64180 has the following features:
Execution and bus access clock rates up to 10 MHz.
Memory Management Unit supporting 512K bytes of memory (one megabyte for the HD64180 packaged in a PLCC)
I/O space of 64K addresses
12 new instructions, including an 8-bit by 8-bit integer multiply and a non-destructive AND, plus an illegal instruction trap vector
Two channel Direct Memory Access Controller (DMAC)
Programmable wait state generator
Programmable DRAM refresh
Two channel Asynchronous Serial Communication Interface (ASCI)
Two channel 16-bit Programmable Reload Timer (PRT)
1-channel Clocked Serial I/O Port (CSI/O)
Programmable Vectored Interrupt Controller
The HD64180 has a pipelined execution unit which processes most instructions in fewer clock cycles than the Z80. The most improved group is the block instructions, such as LDIR, CPIR, INIR and OTDR: on the Z80 these take 21 T-states per iteration, while on the HD64180 they take 14.
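As a rough illustration of the improvement, the per-iteration timings (21 T-states per byte on the Z80, 14 on the HD64180) translate into the following transfer times for a 64 KB LDIR block move at the 10 MHz maximum clock; the sketch ignores instruction setup and loop-exit overhead.

```python
# Sketch: wall-clock time for a 64 KB LDIR block move, using the
# per-iteration figures above (21 T-states/byte on the Z80, 14 on the
# HD64180) at a 10 MHz clock. Setup and loop-exit overhead are ignored.
CLOCK_HZ = 10_000_000
BYTES = 64 * 1024

z80_ms = BYTES * 21 / CLOCK_HZ * 1000      # about 137.6 ms
hd64180_ms = BYTES * 14 / CLOCK_HZ * 1000  # about 91.8 ms
print(f"Z80: {z80_ms:.1f} ms, HD64180: {hd64180_ms:.1f} ms")
```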
The on-chip DMAC makes block memory transfers possible at a rate faster than the LDIR/LDDR instructions. The on-chip wait-state generator makes it possible to access slower hardware selectively, on a per-device basis, as is done for the TRS-80 Model 4's balky keyboard. The on-chip ASCI makes it possible to implement additional RS-232 serial ports.
The HD64180 will not execute the "undocumented" Z80 instructions, particularly the ones that access the index registers IX and IY as 8-bit halves. The Hitachi CPU treats them as illegal instructions and accordingly executes the illegal instruction trap, redirecting the PC register to address zero.
Usage
The Micromint SB180, SemiDisk Systems DT42 CP/M computers, and Olivetti CWP 1 and ETV 210s videotypewriters (also running ROM-based CP/M 2.2) were based on the Hitachi HD64180. The XLR8er upgrade board for the TRS-80 Model 4 also used it. On the Victor HC-90 and HC-95 MSX2 computer, the HD64B180 was used for its turbo mode next to the regular Z80.
See also
Zilog Z180
References
Further reading
External links
HD64180 High Integration CMOS Microcontrollers Family Datasheet
HD64180 8-Bit High Integration CMOS Microprocessor User Manual
HD64180 8-Bit High Integration CMOS Microprocessor Data Book
Arthur-PC, homemade HD64180 based computer
Hitachi products
Embedded microprocessors
8-bit microprocessors | Hitachi HD64180 | [
"Technology"
] | 664 | [
"Computing stubs",
"Computer hardware stubs"
] |
612,330 | https://en.wikipedia.org/wiki/Last.fm | Last.fm is a music website founded in the United Kingdom in 2002. Utilizing a music recommender system known as "Audioscrobbler," Last.fm creates a detailed profile of each user's musical preferences by recording the details of the tracks they listen to, whether from Internet radio stations or from the user's computer or portable music devices. This information is transferred ("scrobbled") to Last.fm's database via the music player (including, among others, Spotify, Deezer, Tidal, Qobuz, MusicBee, SoundCloud, and Anghami) or through a plug-in installed in the user's music player. The data is then displayed on the user's profile page and compiled to create reference pages for individual artists.
On 30 May 2007, it was acquired by CBS Corporation through its streaming division CBS Interactive, which is now part of Paramount Global, for £140 million (US$280 million).
The site previously offered a radio streaming service, which was discontinued on 28 April 2014. The ability to access the extensive catalogue of music stored on the site was later removed entirely and replaced by links to YouTube and Spotify where available.
History
The current Last.fm website was developed from two separate sources, Last.fm and Audioscrobbler, which were merged in 2005. Audioscrobbler began as a computer science project by Richard Jones while he was attending the University of Southampton School of Electronics and Computer Science in the United Kingdom. The term scrobbling is defined as the process of finding, processing, and distributing information related to people, music, and other data. Jones developed the first plugins and subsequently opened an API to the community, which led to support for many music players across different operating system platforms. Audioscrobbler was initially limited to tracking which songs its users played on registered computers, enabling charting and collaborative filtering.
Audioscrobbler and Last.fm (2002–2006)
Last.fm was founded in 2002 by Felix Miller, Martin Stiksel, Michael Breidenbruecker, and Thomas Willomitzer, all hailing from Germany or Austria. Initially established as an Internet radio station and music community site, it utilized similar music profiles to generate dynamic playlists. The site’s name cleverly employs a domain hack using .fm, the top level domain of Micronesia, which is popular among FM radio-related sites. The "love" and "ban" buttons enabled users to gradually customize their profiles. Last.fm won the Europrix in 2002 and was nominated for the Prix Ars Electronica in 2003.
The Audioscrobbler and Last.fm teams began collaborating closely, moving into the same offices in Whitechapel, London. By 2003, Last.fm was fully integrated with Audioscrobbler profiles, allowing input through either an Audioscrobbler plugin or a Last.fm station. The sites also shared numerous community forums, although some were unique to each site. The original Audioscrobbler site at the audioscrobbler.com domain name was entirely merged into the new Last.fm site on 9 August 2005. Subsequently, Audioscrobbler.net was launched as a separate development-oriented site on 5 September 2005. At the bottom of each Last.fm page, there was an Audioscrobbler "slogan" that changed each time the page was refreshed. Based on well-known sayings or advertisements, these slogans originally appeared at the top of the Audioscrobbler website pages and were created and contributed by the original site members.
An update to the site was implemented on 14 July 2006, which introduced a new software application for playing Last.fm radio streams and logging tracks played with other media players. Other changes included improvements to the friends system, updating it to require a two-way friendship, the addition of the Last.fm "Dashboard" where users can view relevant information for their profiles on a single page, expanded options for purchasing music from online retailers, and a new visual design for the website (including an optional black colour scheme).
The site began expanding its language offerings on 15 July 2006, starting with a Japanese version. Currently, the site is available in German, Spanish, French, Italian, Polish, Portuguese, Swedish, Russian, Turkish, and Simplified Chinese. In late 2006, Last.fm won the award for Best Community Music Site at the BT Digital Music Awards held in October. Last.fm also partnered with EMI on the Tuneglue-Audiomap project. In January 2007, Last.fm was nominated for Best Website at the NME Awards.
CBS acquisition and redesign (2007–2009)
At the end of April 2007, rumours surfaced regarding negotiations between CBS and Last.fm, indicating that CBS intended to acquire Last.fm for approximately £225 million ($449 million). In May 2007, it was announced that Channel 4 Radio would broadcast a weekly show called "Worldwide Chart," reflecting the listening habits of Last.fm users worldwide. On 30 May 2007, it was revealed that Last.fm had been acquired by CBS for £140 million, with Last.fm's current management team remaining in place. In July 2008, the "new generation" Last.fm was launched, featuring a completely new layout, color scheme, and several new features, alongside the removal of some old ones. However, this redesign was met with dissatisfaction among some users, who complained about the "unappealing and non-user-friendly layout," bugs, and slow performance. Nonetheless, a month after the redesign, a CBS press release credited it with generating a 20% increase in the site's traffic.
Last.fm debuted Portishead's album Third on 21 April 2008, a week before its release. It was made available as a free stream on the website, attracting 327,000 listeners in 24 hours. It was the first time Last.fm made an album available before its release.
On 22 February 2009, TechCrunch reported that "[the] RIAA asked social music service Last.fm for data about its users' listening habits to find individuals with unreleased tracks on their computers. And Last.fm, which is owned by CBS, allegedly handed the data over to the RIAA." This led to several public statements from both Last.fm and TechCrunch, with Last.fm denying that it had shared any personal data with the RIAA. The request was reportedly prompted by the leak of U2's then-unreleased album No Line on the Horizon and its subsequent widespread distribution through peer-to-peer file sharing services such as BitTorrent.
Three months later, on 22 May 2009, TechCrunch reported that it was CBS, the parent company of Last.fm, that had handed over the data. Last.fm once again denied this allegation, asserting that CBS could not have provided the data without Last.fm's knowledge.
Changes to streaming and access on other platforms (2009–2011)
On 24 March 2009, Last.fm announced a change in its free streaming policy. According to the blog post, "[...] In the United States, United Kingdom, and Germany, nothing will change. In all other countries, listening to Last.fm Radio will soon require a subscription of €3.00 per month." This change took effect on 22 April 2009. The announcement sparked a wave of disappointment among users, leading to a decline in data submissions, refusal to update signatures or avatars, and even account deletions.
On 11 September 2009, CBS Radio announced that Last.fm programming would be available for the first time on four major market FM stations through their HD Radio multicasts. This included KCBS-HD2 in Los Angeles, KITS-HD3 in San Francisco, WWFS-HD2 in New York City, and WXRT-HD3 in Chicago. The programming, which primarily featured music aggregated from Last.fm's user-generated weekly music charts, as well as live performances and interviews from the Last.fm studios in New York City, debuted on 5 October.
On 12 April 2010, Last.fm announced the removal of the option to preview entire tracks, redirecting users instead to sites such as the free Hype Machine and the pay-to-listen service MOG for this purpose. This decision provoked a significant negative reaction from some members of the Last.fm user community, who perceived the removal as a hindrance to lesser-known and unsigned artists' ability to gain exposure for their music, as well as to the overall enjoyment of the site. A new "Play direct from artist" feature was introduced shortly thereafter, allowing artists to select individual tracks for users to stream in full.
The ability to listen to custom radio stations, such as "personal tag radio" and "loved tracks radio," was withdrawn on 17 November 2010. This change provoked an angry response among users. Last.fm stated that the move was due to licensing reasons. The change meant that a tag radio stream would now include all music tagged as such, rather than just that tagged by individual users, effectively broadening the number of tracks that could be streamed under any one tag set.
Website and desktop application redesigns (2012–2013)
In March 2012, Last.fm was breached by hackers, resulting in the compromise of more than 43 million user accounts. The full extent of the breach, along with its connection to similar attacks against Tumblr, LinkedIn, and Myspace during the same timeframe, was not confirmed until August 2016. The passwords were encrypted using an outdated, unsalted MD5 hash. Last.fm informed users of the attack in June 2012.
On 14 February 2012, Last.fm announced the launch of a new beta desktop client for public testing. The new scrobbler was subsequently released for all users on 15 January 2013.
On 12 July 2012, Last.fm announced a new website redesign that was open to public beta, inviting feedback from users participating in the testing phase. The redesign officially went live for all users on 2 August 2012.
While technology websites received the redesign positively, many users expressed dissatisfaction with the changes on the website's forum.
On 19 June 2012, Last.fm launched Last.fm Originals, a new website featuring exclusive performances and interviews with various musical artists.
On 13 December 2012, it was announced that Last.fm would discontinue its radio service after January 2013 for subscribers in all countries except the United States, United Kingdom, Germany, Canada, Ireland, Australia, New Zealand, and Brazil. Additionally, radio in the desktop client would require a subscription in the US, UK, and Germany, although the website radio would remain free in those countries.
End of radio streaming and redesign (2014–present)
In January 2014, the website announced on-demand integration with Spotify and introduced a new YouTube-powered radio player. With the introduction of the YouTube player, the standard radio service became a subscriber-only feature.
On 26 March 2014, Last.fm announced that it would discontinue its streaming radio service on 28 April 2014. In a statement, the site indicated that the decision was made to "focus on improving scrobbling and recommendations".
On 15 April 2015, Last.fm released a subscriber-exclusive beta version of a new website redesign. Digital Spy described user reactions on the site's forums during the week of the redesign as "universally negative".
In 2016, Music Manager was discontinued, and music uploaded to the site by musicians and record labels became inaccessible. After the integration with Spotify, these tracks could still be played and downloaded where the option was available; however, following the change, artists themselves were unable to access their songs in the Last.fm catalog.
The website experienced a slight revival during the COVID-19 pandemic, beginning in 2020, linked to its popularity within music communities on the communication platform Discord. Last.fm celebrated its twentieth anniversary in 2022. Third-party developers have created programs that integrate users' listening statistical data with Discord, including a popular bot from the Netherlands that has over 400,000 total users.
Funding and staff
Last.fm Ltd is funded through the sale of online advertising space and monthly user subscriptions.
Funding prior to CBS acquisition
In 2004, the company received its first round of angel money from Peter Gardner, an investment banker who was introduced to the founders as early as 2002. A second round was led by Stefan Glaenzer, joined by Joi Ito and Reid Hoffman, who also purchased shares from Michael Breidenbruecker. In 2006, the company secured its first round of venture capital funding from European investors Index Ventures, whose General Partners Neil Rimer and Danny Rimer joined Last.fm's board of directors, which included Felix Miller, Martin Stiksel, and Stefan Glaenzer (chair).
Original founders Felix Miller, Martin Stiksel, and Richard Jones left the company in the summer of 2009.
Features
User accounts
The free user account provides access to all the main features listed below. Registered users are also able to send and receive private messages. The newly launched Last.fm Pro user account adds additional features to the free tier, the most notable being the ability to change usernames and gain early access to new features.
Profile
A Last.fm user can build a musical profile using any or all of several methods: by listening to their personal music collection on a music player application on a computer or an iPod with an Audioscrobbler plugin, or by listening to the Last.fm Internet radio service, either through the Last.fm client or the embedded player. All songs played are added to a log from which personal top artist/track bar charts and musical recommendations are calculated.
Last.fm automatically generates a profile page for every user, which includes basic information such as their username, avatar, date of registration, and the total number of tracks played. There is also a Shoutbox for public messages. Profile pages are visible to all, along with a list of top artists and tracks, as well as the 10 most recently played tracks (which can be expanded). Each user's profile features a 'Taste-o-Meter' that provides a rating of how compatible the user's music taste is.
Recommendations
Last.fm includes a personal recommendations page that is only visible to the user and lists suggested new music and events, all tailored to the user's preferences. Recommendations are calculated using a collaborative filtering algorithm, allowing users to browse and hear previews of a list of artists not featured on their own profiles but present on those of others with similar musical tastes.
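Last.fm has not published its production algorithm; the following is only a minimal sketch of user-based collaborative filtering of the general kind described above, using cosine similarity over per-artist play-count vectors (all data and function names are illustrative).

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sparse play-count vectors keyed by artist."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, others, top_n=3):
    """Score artists the user has not played by similar users' play counts."""
    scores = {}
    for other in others:
        sim = cosine_similarity(user, other)
        for artist, plays in other.items():
            if artist not in user:
                scores[artist] = scores.get(artist, 0.0) + sim * plays
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

A production system would additionally normalise for artist popularity and overall user activity, but the underlying idea is to weight unheard artists by how similar their listeners' profiles are to the user's own.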
Artist pages
Once an artist has had a track or tracks "scrobbled" by at least one user, Last.fm automatically generates a main artist page. This page displays details such as the total number of plays, the total number of listeners, the most popular weekly and overall tracks, the top weekly listeners, a list of similar artists, the most popular tags, and a shoutbox for messages. Additionally, there are links to events, album and individual track pages, and similar artists radio. Official music videos and other videos imported from YouTube may also be viewed on the relevant artist and track pages.
Users may contribute relevant biographical details and other information to any artist's main page in the form of a wiki. Edits are regularly moderated to prevent vandalism. A photograph of the artist may also be added. If more than one photograph is submitted, the most popular one is chosen by public vote. User-submitted content is licensed for use under the Creative Commons Attribution Share-Alike License and the GNU Free Documentation License.
Last.fm currently cannot disambiguate artists with the same name; a single artist profile is shared between valid artists with identical names. Additionally, Last.fm and its users do not differentiate between the composer and the artist of music, which can lead to confusion in classical music genres.
Charts
One notable feature of Last.fm is the semi-automatic weekly generation and archiving of detailed personal music charts and statistics, which contribute to profile building. Users have access to several different charts, including Top Artists, Top Tracks, and Top Albums. Each of these charts is based on the actual number of listeners for the track, album, or artist, recorded through an Audioscrobbler plugin or the Last.fm radio stream.
Additionally, charts are available for the top tracks by each artist in the Last.fm system, as well as the top tracks for individual albums (when the tagging information of the audio file is available). Artist profiles also keep track of a short list of Top Fans, calculated using a formula designed to reflect the importance of an artist in a fan's profile, balancing users who listen to hundreds of tracks against those who listen to only a few.
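The exact Top Fans formula has not been published; purely as an illustration of the balancing described above (so that heavy listeners do not automatically outrank light ones), one hypothetical approach ranks fans by the share of their total listening devoted to the artist rather than by raw play counts.

```python
def top_fans(artist, users, top_n=3):
    """Hypothetical ranking: fraction of each user's total plays spent on
    the artist, rather than raw play counts. `users` maps user name to a
    dict of per-artist play counts. Not Last.fm's actual formula."""
    scores = {}
    for name, plays in users.items():
        total = sum(plays.values())
        if total and artist in plays:
            scores[name] = plays[artist] / total
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Under this scheme, a user whose five plays are all of the artist outranks one with fifty plays of the artist among five hundred in total.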
As the information generated is largely compiled from the ID3 data from audio files "scrobbled" from users' own computers, inaccuracies and misspellings can lead to numerous errors in the listings. Tracks with ambiguous punctuation are particularly prone to separate listings, which can dilute the apparent popularity of a track. Artists or bands with the same name are not always differentiated. The system attempts to consolidate different artist tags into a single artist profile and has recently made efforts to harmonize track names.
Global charts
Last.fm generates weekly "global" charts of the top 400 artists and tracks listened to by all Last.fm users.
The results differ significantly from traditional commercial music charts provided by the UK Top 40, Billboard, Soundscan, and others, which are based on radio plays or sales. Last.fm charts are less volatile, and a new album's release may continue to be reflected in play data for many months or even years after it drops out of commercial charts. For example, The Beatles have consistently ranked among the top five bands on Last.fm, reflecting the enduring popularity of their music regardless of current album sales. Significant events, such as the release of a highly anticipated album or the death of an artist, can have a substantial impact on the charts.
The Global Tag Chart displays the 100 most popular tags used to describe artists, albums, and tracks. This is based on the total number of times the tag has been applied by Last.fm users since the tagging system was first introduced and does not necessarily reflect the number of users currently listening to any of the related "global tag radio" stations.
Radio stations
Last.fm previously offered customized virtual "radio stations" consisting of uninterrupted audio streams of individual tracks selected from the music files in the music library. This service was discontinued on 28 April 2014.
Stations could be based on the user's personal profile, the user's "musical neighbours," or the user's "friends." Additionally, stations could be created based on tags, provided enough music was associated with the same tag. Users could also create stations on the fly, and each artist page allowed the selection of a "similar artists" or "artist fan" radio station. In May 2009, Last.fm introduced Visual Radio, an enhanced version of Last.fm radio. This update brought features such as an artist slideshow and combo stations, which allowed users to listen to stations consisting of common similar artists or up to three artists or three tags.
Under the terms of the station's "radio" license, listeners may not select specific tracks (except as previews) or choose the order in which they are played. However, any of the tracks played may be skipped or banned completely. The appropriate royalties are paid to the copyright holders of all streamed audio tracks in accordance with UK law. The radio stream utilizes an MP3 format encoded at 128 kbit/s and 44.1 kHz, which can be played using the in-page Flash player or the downloaded Last.fm client. Community-supported players are also available, along with a proxy that allows users to utilize a media player of their choice.
On 24 March 2009, Last.fm announced that access to Last.fm Radio would require a subscription of €3.00 per month for users residing outside the US, UK, and Germany. This change was initially set to take effect on 30 March, but was postponed until 22 April. This decision resulted in over 1,000 comments on the Last.fm blog, most of which were negative.
Streaming and radio services were discontinued by Last.fm on 28 April 2014, allowing the platform to "focus on its core product, the scrobbling experience." Despite the discontinuation of streaming, the website still generates recommendations based on a user's existing library.
Player
An "in-page" player is automatically provided for all listeners using an HTML5-enabled browser or with Adobe Flash installed on their computers. However, users must download and install the Last.fm client if they wish to include information about tracks played from their own digital music collection in their personal music profile.
Prior to August 2005, Last.fm generated an open stream that could be played in the user's music player of choice, accompanied by a browser-based player control panel. This approach proved challenging to support and has been officially discontinued. The Last.fm client is currently the only officially supported music player for streaming customized Last.fm radio on desktop computers. The current version integrates the music player functions with the plugin that transmits all track data to the Last.fm server, effectively replacing the separate Last.fm Player and standalone track submission plugins. It is also free software licensed under the GNU General Public License and is available for Linux, Mac OS X, and Microsoft Windows operating systems.
The player allows users to enter the name of any artist or tag, presenting a choice of several similar artist stations or global tag stations. Alternatively, users may play Recommendation radio or any of their personal radio stations without needing to visit the website.
The player displays the name of the currently playing station and track, the song's artist, title, and track length, as well as album details, the artist's photo and biographical information, album cover art when available, lists of similar artists, the most popular tags, and top fans. Several buttons allow users to love, skip, or ban a song. The love button adds the song to the user's loved tracks list, while the ban button ensures that the song will not be played again. Both features affect the user's profile, whereas the skip button does not. Other buttons enable users to tag or recommend the currently playing track. Additional features offered by the application include minor editing of the user's profile, such as removing recently played artists and songs from the loved, banned, or previously played track lists; lists of friends and neighbors; lists of tags; and a list of previously played radio stations. Users can also open their full Last.fm profile page directly from the player.
The client also allows users to install player plugins that integrate with various standalone media players, enabling the submission of tracks played in those programs.
In the latest version of the Last.fm Player application, users can choose to utilize an external player. When this option is selected, the Last.fm Player provides a local URL through which the Last.fm music stream is proxied. Users can then open this URL in their preferred media player.
A new version of the desktop client, which had been in beta since early 2012, was released on 15 January 2013. This version disabled the radio function for free users; a paid subscription is required to access that feature.
Last.fm has also developed client software for mobile phones running the iPhone OS, BlackBerry OS, and the Android OS. These apps have only been released in the United States, United Kingdom, and Germany, with the company claiming for four years that it was negotiating licenses to make streaming available in other countries.
Last.fm experienced an outage lasting more than 22 hours on 10 June 2014, marking one of the longest interruptions the company has faced. However, the company remained in contact with visitors through a status page.
Scrobbling
In addition to automatically tracking music played via Last.fm's radio, users can also contribute (scrobble) listening data to their Last.fm profile from other streaming sites or by tracking music played locally on their personal devices. Scrobbling is possible with music stored and played locally through software on devices such as PCs, mobile phones, tablets, and standalone (hardware) media players; indeed, these were the only methods of scrobbling listening data before the launch of the Last.fm radio service and again after its discontinuation.
Certain sites and media players have built-in capabilities to upload (scrobble) listening data, while for others, users must download and install a plugin for their music player. This plugin automatically submits the artist and title of the song after either half of the song or the first four minutes has played, whichever comes first. If the track is shorter than 30 seconds (31 seconds in iTunes) or lacks metadata (ID3, CDDB, etc.), the track will not be submitted. To accommodate dial-up users or those listening to music while offline, caching of the data and submission in bulk is also possible.
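The submission rule just described reduces to a small function. This is a sketch of the stated rule, not actual client code; it uses the general 30-second minimum rather than the 31-second iTunes variant.

```python
def scrobble_threshold(duration_s):
    """Seconds of playback after which a track is submitted: half the
    track or four minutes, whichever comes first. Tracks shorter than
    30 seconds are never submitted (returns None)."""
    if duration_s < 30:
        return None
    return min(duration_s / 2, 4 * 60)
```

For example, a 200-second song scrobbles after 100 seconds, while a 10-minute track scrobbles at the 4-minute cap.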
Other third-party applications
Supported applications
Build Last.fm
As of March 2008, the website introduced a section titled "Build," where third-party applications can be submitted for review and subsequently posted to the page.
SXSW Band-Aid
Last.fm partnered with the SXSW festival to create an application embedded in the corresponding group page that filters the various artists at the festival based on a user's listening statistics. It also utilizes Last.fm's recommendation service to suggest other performing artists that the user has not yet listened to.
See also
List of Internet radio stations
List of online music databases
References
External links
Last.fm – official site
Audioscrobbler development site
The Old Last.fm
Free Last.fm Music Streamer Plugin for Chrome
Tiny webcaster Last.fm causes major online splash, Rockbites, 22 July 2003
Last.fm: Music to Listeners' Ears, Wired, 7 July 2003
The Musical Myware, Audio presentation by CEO Felix Miller, IT Conversations, 7 March 2006
Guardian Unlimited Interview, The Guardian Interview with Last.fm co-founder, Martin Stiksel, 4 November 2006
The Celestial Jukebox, New Statesman on the story of Last.fm, June 2009
Last.fm music charts widget
Last.fm for PC alternative download
Last.fm Down-Time Monitoring Tool
2002 establishments in the United Kingdom
Android (operating system) software
BlackBerry software
British music websites
CBS Interactive websites
Domain hacks
Online music stores of the United Kingdom
Internet properties established in 2002
Internet radio in the United Kingdom
IOS software
Online music and lyrics databases
Recommender systems
Social cataloging applications
Software that uses Qt
Windows Phone software
2007 mergers and acquisitions | Last.fm | [
"Technology"
] | 5,513 | [
"Information systems",
"Recommender systems"
] |
612,341 | https://en.wikipedia.org/wiki/Haploidisation | Haploidisation is the process of halving the chromosomal content of a cell, producing a haploid cell. Within the normal reproductive cycle, haploidisation is one of the major functional consequences of meiosis, the other being a process of chromosomal crossover that mingles the genetic content of the parental chromosomes. Usually, haploidisation creates a monoploid cell from a diploid progenitor, or it can involve halving of a polyploid cell, for example to make a diploid potato plant from a tetraploid lineage of potato plants.
If haploidisation is not followed by fertilisation, the result is a haploid lineage of cells. For example, experimental haploidisation may be used to recover a strain of haploid Dictyostelium from a diploid strain. It sometimes occurs naturally in plants when meiotically reduced cells (usually egg cells) develop by parthenogenesis.
Haploidisation was one of the procedures used by Japanese researchers to produce Kaguya, a mouse which had same-sex parents; two haploids were then combined to make the diploid mouse.
Haploidisation commitment is a checkpoint in meiosis which follows the successful completion of premeiotic DNA replication and recombination commitment.
See also
Polyploidy
Ploidy
References
Genetics | Haploidisation | [
"Biology"
] | 284 | [
"Genetics"
] |
612,349 | https://en.wikipedia.org/wiki/KOI8-R | KOI8-R (RFC 1489) is an 8-bit character encoding, derived from the KOI-8 encoding by the programmer Andrei Chernov in 1993 and designed to cover Russian, which uses a Cyrillic alphabet. KOI8-R was based on Russian Morse code, which was created from a phonetic version of Latin Morse code. As a result, Russian Cyrillic letters are in pseudo-Roman order rather than the normal Cyrillic alphabetical order. Although this may seem unnatural, if the 8th bit is stripped, the text is partially readable in ASCII and may convert to syntactically correct KOI-7. For example, "Код Обмена Информацией" in KOI8-R becomes kOD oBMENA iNFORMACIEJ (the Russian meaning of the "KOI" acronym).
KOI8 stands for Kod Obmena Informatsiey, 8 bit (Код Обмена Информацией, 8 бит), which means "Code for Information Exchange, 8 bit". In Microsoft Windows, KOI8-R is assigned the code page number 20866; in IBM systems, it is assigned code page 878. KOI8-R also happens to cover Bulgarian, but it has not been used for that purpose since CP1251 was accepted. The use of these older code pages is being replaced with Unicode as a more common way to represent Cyrillic together with other languages.
Unicode is preferred to KOI-8 and its variants or other Cyrillic encodings in modern applications, especially on the Internet, making UTF-8 the dominant encoding for web pages. KOI8-R, the most popular variant, is used by less than 0.004% of websites, which are mainly in Russian and Bulgarian; even for those languages, other encodings are now preferred. For further discussion of Unicode's complete coverage of 436 Cyrillic letters/code points, including for Old Cyrillic, and how single-byte character encodings, such as Windows-1251 and KOI8 variants, cannot provide this, see Cyrillic script in Unicode.
Character set
The following table shows the KOI8-R encoding. Each character is shown with its equivalent Unicode code point.
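The full table is best consulted in a character-set reference, but the mapping can also be inspected programmatically. This Python sketch prints a few KOI8-R bytes alongside their Unicode code points via the standard `koi8_r` codec (the chosen bytes are illustrative examples):

```python
# Print selected KOI8-R bytes with their Unicode equivalents.
for byte in (0xC1, 0xC2, 0xE1, 0xE2):
    ch = bytes([byte]).decode("koi8_r")
    print(f"0x{byte:02X} -> U+{ord(ch):04X} ({ch})")
# 0xC1 -> U+0430 (а)
# 0xC2 -> U+0431 (б)
# 0xE1 -> U+0410 (А)
# 0xE2 -> U+0411 (Б)
```

The output makes the pseudo-Roman layout visible: lowercase а, б sit at 0xC1, 0xC2 and uppercase А, Б at 0xE1, 0xE2, mirroring the ASCII positions of A and B.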
See also
KOI8-B, a derivation of KOI8-R with only the letter subset implemented
KOI8-U, another derivative encoding which adds Ukrainian characters
KOI character encodings
RELCOM
Windows-1251, another common Cyrillic character encoding
References
Further reading
External links
Universal Cyrillic decoder, an online program that may help recover Cyrillic texts with broken KOI8-R or other character encodings.
Character sets
Computing in the Soviet Union | KOI8-R | [
"Technology"
] | 546 | [
"Computing in the Soviet Union",
"History of computing"
] |
612,467 | https://en.wikipedia.org/wiki/Xenomorph | The xenomorph (also known as a Xenomorph XX121, Internecivus raptus, or simply the alien or the creature) is a fictional endoparasitoid extraterrestrial species that serves as the main antagonist of the Alien and Alien vs. Predator franchises.
The species made its debut in the film Alien (1979) and reappeared in the sequels Aliens (1986), Alien 3 (1992), Alien Resurrection (1997), and Alien: Romulus (2024). The species returns in the prequel series, first with a predecessor in Prometheus (2012) and a further evolved form in Alien: Covenant (2017), and the 2019 short films Alien: Containment, Specimen, Night Shift, Ore, Harvest, and Alone. It also featured in the crossover films Alien vs. Predator (2004) and Aliens vs. Predator: Requiem (2007), with the skull and tail of one of the creatures respectively appearing briefly in Predator 2 (1990), Predator: Concrete Jungle (2005), Predators (2010), and The Predator (2018), as a protagonist (named 6) in the video game Aliens vs. Predator (2010), and will return in the upcoming FX television series Alien: Earth (2025). In addition, the xenomorph appears in various literature and video game spin-offs from the franchises.
The xenomorph's design is credited to Swiss surrealist and artist H. R. Giger, originating in a lithograph titled Necronom IV and refined for the series's first film, Alien. The practical effects for the xenomorph's head were designed and constructed by Italian special effects designer Carlo Rambaldi. Species design and life cycle have been extensively augmented, sometimes inconsistently, throughout each film.
Unlike many other extraterrestrial races in film and television science fiction (such as the Daleks and Cybermen in Doctor Who, or the Klingons and Borg in Star Trek), the xenomorphs are not sapient: they lack a technological civilization of any kind, and are instead primal, predatory creatures with no higher goal than the preservation and propagation of their own species by any means necessary, up to and including the elimination of other lifeforms that may pose a threat to their existence. Like wasps or termites, xenomorphs are eusocial, with a single fertile queen breeding a caste of warriors, workers, or other specialist strains. The xenomorphs' biological life cycle involves traumatic implantation of endoparasitoid larvae inside living hosts; these "chestburster" larvae erupt from the host's body after a short incubation period, mature into adulthood within hours, and seek out more hosts for implantation.
Concept and creation
The script for the 1979 film Alien was initially drafted by Dan O'Bannon and Ronald Shusett. O'Bannon drafted an opening in which the crew of a mining ship are sent to investigate a mysterious message on an alien planet. He eventually settled on the threat being an alien creature; however, he could not conceive of an interesting way for it to get onto the ship. Inspired after waking from a dream, Shusett said, "I have an idea: the monster screws one of them", planting its egg in his body, and then bursting out of his chest. Both realized the idea had never been done before, and it subsequently became the core of the film. "This is a movie about alien interspecies rape", O'Bannon said in the documentary Alien Evolution. "That's scary because it hits all of our buttons." O'Bannon felt that the symbolism of "homosexual oral rape" was an effective means of discomforting male viewers.
The title of the film was decided late in the script's development. O'Bannon had quickly dropped the film's original title, Star Beast, but could not think of a name to replace it. "I was running through titles, and they all stank", O'Bannon said in an interview, "when suddenly, that word alien just came out of the typewriter at me. Alien. It's a noun and it's an adjective." The word alien subsequently became the title of the film and, by extension, the name of the creature itself.
Prior to writing the script to Alien, O'Bannon had been working in France for Chilean cult director Alejandro Jodorowsky's planned adaptation of Frank Herbert's classic science-fiction novel Dune. Also hired for the project was Swiss surrealist artist H. R. Giger. Giger showed O'Bannon his nightmarish, monochromatic artwork, which left O'Bannon deeply disturbed. "I had never seen anything that was quite as horrible and at the same time as beautiful as his work" he remembered later. The Dune film collapsed, but O'Bannon would remember Giger when Alien was greenlit, and suggested to director Ridley Scott that he be brought on to design the Alien, saying that if he were to design a monster, it would be truly original.
After O'Bannon handed him a copy of Giger's book Necronomicon, Scott immediately saw the potential for Giger's designs, and chose Necronom IV, a print Giger completed in 1976, as the basis for the Alien's design, citing its beauty and strong sexual overtones. That the creature could just as easily have been male or female was also a strong factor in the decision to use it. "It could just as easily fuck you before it killed you," said line producer Ivor Powell, "[which] made it all the more disconcerting." 20th Century Fox was initially wary of allowing Giger onto the project, saying that his works would be too disturbing for audiences, but eventually relented. Giger initially offered to completely design the Alien from scratch, but Scott mandated that he base his work on Necronom IV, saying that to start over from the beginning would be too time-consuming. Giger initially signed on to design the adult, egg, and chestburster forms, but ultimately also designed the alien planetoid LV-426 and the Space Jockey alien vessel.
Giger conceived the Alien as being vaguely human but a human in full armor, protected from all outside forces. He mandated that the creature have no eyes because he felt that it made them much more frightening if you could not tell they were looking at you. Giger also gave the Alien's mouth a second inner set of pharyngeal jaws located at the tip of a long, tongue-like proboscis which could extend rapidly for use as a weapon. His design for the creature was heavily influenced by an aesthetic he had created and termed biomechanical, a fusion of the organic and the mechanic. His mock-up of the Alien was created using parts from an old Rolls-Royce car, rib bones and the vertebrae from a snake, molded with plasticine. The Alien's animatronic head, which contained 900 moving parts, was designed and constructed by special effects designer Carlo Rambaldi. Giger and Rambaldi together would win the 1980 Academy Award for Visual Effects for their design of the Alien.
Scott decided on the man-in-suit approach for creating the creature onscreen. Initially circus performers were tried, then multiple actors together in the same costume, but neither proved scary. Deciding that the creature would be scarier the closer it appeared to a human, Scott decided that a single, very tall, very thin man be used. Scott was inspired by a photograph of Leni Riefenstahl standing next to a Nuba man. The casting director found rail-thin graphic designer Bolaji Badejo in a London pub. Badejo went to tai chi and mime classes to learn how to slow down his movements.
Giger's design for the Alien evoked many contradictory sexual images. As critic Ximena Gallardo notes, the creature's combination of sexually evocative physical and behavioral characteristics creates "a nightmare vision of sex and death. It subdues and opens the male body to make it pregnant, and then explodes it in birth. In its adult form, the alien strikes its victims with a rigid phallic tongue that breaks through skin and bone. More than a phallus, however, the retractable tongue has its own set of snapping, metallic teeth that connects it to the castrating vagina dentata."
Name
This creature has no specific name; it was called an alien and an organism in the first film. It has also been referred to as a creature, a serpent, a beast, a dragon, a monster, a nasty, or simply, a thing. The term xenomorph (lit. "alien form" from the Greek xeno-, which translates as either "other" or "strange", and -morph, which denotes shape) was first used by the character Lieutenant Gorman in Aliens with reference to generic extraterrestrial life. The term was erroneously assumed by some fans to refer specifically to this creature, and the word was used by the producers of some merchandise.
The species' binomial names are given in Latin as either Internecivus raptus (meant as "murderous thief") in the Alien Quadrilogy DVD or Lingua foeda acheronsis (meant as "foul tongue from Acheron") in some comic books. The main Alien from Alien vs. Predator is listed in the credits as "Grid", after a grid-like wound received during the film from a Predator's razor net. Alien: Covenant actually credits the Alien as Xenomorph, while also listing a different variety of the creature as the Neomorph. In The Weyland-Yutani Report, the Alien encountered by the Nostromo was specifically referred to as "Xenomorph XX121", and this name is spoken out loud by the android Rook in Alien: Romulus.
Characteristics
At its core, the xenomorph is a hostile parasitic pathogen (Chemical A0-3959X.91) whose mutable mechanisms are signaled by perturbances to its chemistry. It evolves to assume biological and physiological traits of its host, thereby enabling it to adapt to its environment. As the film series has progressed, the creature's design has been modified in many ways, including differing numbers of fingers and limb joints and variations in the design of the Alien's head.
Appearance
When standing upright, the Aliens are bipedal in form, though, depending on their host species, they will adopt either a hunched stance or remain fully erect when walking, sprinting, or in hotter environments. Their overall stance and general behavior seem to result from the mixture of the respective DNA of the embryo and its host. They have a skeletal, biomechanical appearance and are usually colored in muted shades of black, gray, blue or bronze. Their body heat matches the ambient temperature of the environment in which they are found, so they do not radiate heat, making them indistinguishable from their surroundings through thermal imaging. In most of the films, adult Aliens are capable of running and crawling along ceilings, walls, and other hard surfaces. They have great physical strength, having been shown to be capable of breaking through welded steel doors over time.
Aliens have segmented, blade-tipped tails. The sharp tip was initially a small, scorpion-like barb, but from Aliens onwards the blade design increased in size and changed in appearance to more closely resemble a slashing weapon. From Alien Resurrection onwards, the tails have a flat ridge of spines at the base of the blade. This was introduced to help them swim convincingly, and was left intact in the subsequent crossovers. The original shooting script for Aliens and the novelization both featured a scene in which Lieutenant Gorman is "stung" by the barb tail and rendered unconscious; in the final cut of the movie, Gorman is knocked out by falling crates. As a weapon, the strength of the tail is very effective, having been shown to be strong enough to impale and lift a Predator with seemingly little effort.
They have elongated, cylindrical skulls with eyes underneath the "visor". In the novelization of Alien, the character Ash speculates that the xenomorphs "see" by way of electrical impulse, similar to some fish's Ampullae of Lorenzini. This method is illustrated in the original Alien vs Predator PC game and reused for the Predalien 28 years later. The Alien's inner set of jaws is powerful enough to smash through bone and metal. How the creatures see is uncertain; in Alien 3, a spherical lens was used to illustrate the Alien's point of view, so, when the film was projected anamorphically, the image exhibited severe distortion. In the novelization of the movie Alien, the creature is held mesmerized by a spinning green light for several minutes.
In Aliens, the adult creatures have a more textured head rather than a smooth carapace. In the commentary for Aliens, it was speculated that this was part of the maturation of the creatures, as they had been alive far longer than the original Alien, although James Cameron stated that he simply left the carapace off because he liked them better that way. The smooth design of the carapace would be used again in Alien 3 and Alien Resurrection, although made narrower with a longer muzzle and more prominent chin. This design would be kept in Alien versus Predator, and abandoned in Aliens vs. Predator: Requiem in favor of the ribbed design.
Throughout their appearances, human-spawned Aliens have been shown to have different numbers of fingers. In Alien, the creature has webbed, six-fingered hands. In Aliens, the number of fingers is reduced to three, with two "paired" and a single, opposable thumb. The fingers are also shown to be much longer and more skeletal. In Alien Resurrection, the number of digits is increased to four, with two long middle fingers and a pair of thumbs. This design is kept in the Alien vs. Predator films, though the hands were made bulkier in order to make the Aliens seem more formidable against the Predators.
Aliens have been alternatively portrayed as both plantigrade and digitigrade organisms, usually relative to their hosts. Human-spawned Aliens were usually portrayed as having humanoid hind limbs, while in Alien 3 the featured Alien sported double-jointed legs due to its quadrupedal host. This characteristic would be continued in Alien Resurrection for the human-spawned Aliens. Tom Woodruff, who had previously played the "dog-alien" in Alien 3, described the human-spawned Aliens in Resurrection as feeling more like a dog than the previous creature, despite having been born from human hosts. The human-spawned Alien warriors would revert to a plantigrade posture in Alien vs. Predator.
Physiology
Alien blood contains concentrated hydrofluoric acid and sulfuric acid and is capable of corroding almost any substance on contact with alarming speed. It is dull yellow in color and appears to be pressurized inside the body so that it spurts out with great force when the creature is wounded. Ron Cobb suggested the idea of the Alien having acidic blood as a plausible means to make the creature "unkillable"; if one were to use traditional firearms or explosives to attack it, its blood would eat through the hull of the ship. The Alien novelization suggests that, at least at the "facehugger" stage, the acid is not blood but a fluid maintained under pressure between a double layer of skin. In the Aliens vs. Predator: Requiem documentary "Science of the Alien", it is hypothesized that the Aliens' acid blood could contain sulfuric acid due to its corrosiveness and the conspicuously toxic effects on living human tissue. The documentary also speculates that Aliens are immune to their own acidic and toxic liquids due to an endobiological build-up, similar to the human stomach's ability to protect itself from its own digestive fluids. The documentary takes this hypothesis one step further and speculates that the Alien organism's protection system against its own acidic blood is a bio-organically produced Teflon-like insulation. In the original Alien, the facehugger is shown to be able to "spit" acid, dissolving the faceplate of Kane's helmet and allowing the creature immediate access inside. This ability is also exhibited by adult Aliens in Alien 3 and Alien Resurrection; much like a spitting cobra, they use it to blind their victims.
Aliens can produce a thick, strong resin that they use to build their hives and to cocoon their victims, and they can use the walls of their hives as camouflage. Aliens also salivate heavily in the form of a sticky, clear slime; while not a toxic substance in and of itself, it is common for the Alien films to use it as a suspense-building device, wherein a character will notice the falling saliva before noticing its source lying in wait above them.
Intelligence
The species has displayed observational learning and problem-solving skills during various events: aboard the USM Auriga in Alien Resurrection, in the crossover film Alien vs. Predator, at the LV-426 colony Hadley's Hope in Aliens, and in Alien 3 when the inmates try to trap the alien. It has also shown the ability to operate machinery at a very basic level, with the queen in Aliens depicted operating an elevator.
On the USM Auriga in Alien Resurrection, the aliens kill one of their own, using its blood to melt through their enclosure and escape (according to the novelization, it was inspired to do so from genetic memories inherited from the original Ripley); in Alien vs. Predator, they use a similar strategy to free the queen from her chains. An alien also uses acid spurting from its severed tail as an improvised weapon by flicking it, indicating awareness of the effects of their acid blood.
In the original film, it is implied that the alien cut the lights on board the Nostromo. On LV-426, the xenomorphs cut power in a section of the complex to gain access to the humans.
The novel for the film Aliens includes a scene where Bishop speculates on the reason why the queen established her "nest" at the base's main power plant. His reasons range from an animalistic drive for warmth to an intentional strategic selection (any attacker could not destroy her without destroying the entire facility). In the director's commentary for Aliens, James Cameron noted that the creatures in Aliens had been alive for far longer than the Alien in the original, and had more time to learn about their environment. In Alien 3, Ripley and the inmates try luring the Alien into the lead works. It becomes obvious that the Alien recognized the trap and the danger it held. At one point, it hesitates to enter the lead works. Later, it hunts down most of the prisoners just before going into the lead works.
Life cycle
Aliens are eusocial life-forms with a caste system ruled over by a queen. Their life cycle comprises several distinct stages: they begin their lives as an egg, which hatches a parasitoid larval form known as a facehugger, which then attaches itself to a living host by, as its name suggests, latching onto its face. In the Alien 3 novelization, Ripley commented that this parasitoid would probably be able to use any host from as small as a cat to as large as an Asian elephant.
The facehugger then "impregnates" the host with an embryo, known as a "chestburster". During this time, the host is kept in an unconscious state with normal vital functions. After depositing the embryo inside the host, the facehugger dies and releases its hold on its victim's face and head, as shown in Alien and Aliens. The host will then experience a short period of near-symptomless recovery during which the embryo is in gestation, followed by the sudden and painful eruption of the chestburster from the host's chest, resulting in their death. The chestburster then matures to an adult phase, shedding its skin and replacing its cells with polarized silicon.
Due to horizontal gene transfer during the gestation period, the alien also takes on some of the basic physical attributes of the host from which it was born (something noticed by Ripley in Alien 3, when the xenomorph plaguing the complex moved on four limbs, having gestated within a quadruped (a dog in the theatrical release and an ox in the director's cut) whereas all the others she had previously seen had gestated within humans/bipeds), allowing the individual alien to adapt to the host's environment (breathe the air, etc.). This is also shown in the two live-action crossover films, Alien vs. Predator (2004) and Aliens vs. Predator: Requiem (2007), where an embryo, having gestated within a Predator/Yautja, displayed Predator/Yautja physical traits (arthropod-like mandibles) from eruption onwards.
This process of horizontal gene transfer is also shown to be two-way; in Alien Resurrection (film and novelization), Ripley's clone, Ripley-8, is shown exhibiting numerous xenomorph characteristics, physical and behavioral; this is touched on more in the novelization (chapter 4), where it is described that when a host is infested with a xenomorph embryo, it does not just infest the host like a parasite, but also like a virus, "a major breakthrough in adaptive evolution ... a way to guarantee that any host, any host at all, would provide whatever it was the developing embryo needed, even if/when the host's body was inadequate."
The adult phase of the alien is known by various different names. The adult aliens have been referred to as "drones", "warriors", "workers", and sometimes "soldiers", similar to the way ants have been defined. The names of the adult phase have also been used to name different types of adult phases of the alien in numerous sources, including video games, comic books, novels, and the films, but only in the commentaries by the team who created the films. No official name has been given to the adult stage of the alien in the films themselves.
Queen
Queen aliens are significantly larger and stronger than normal adults. Their body structure also differs, having two pairs of arms, one large and one small. The queen's head is larger than those of other adult Aliens and is protected by a large, flat crest, like a crown, which varies from queen to queen. Unlike other aliens, the queen's external mouth is separately segmented from the rest of her head, allowing her to turn her mouth left and right almost to the point where it is facing perpendicular to the direction of the rest of her head. In the second film, Aliens, unlike other adults and queens, the queen had high-heel protrusions from her feet.
Egg-laying Alien queens possess an immense ovipositor attached to their lower torso, similar to a queen termite's. Like some insect queens, there appears to be no need for an Alien queen's eggs to be fertilized. When attached to her ovipositor, the queen is supported by a "biomechanical throne" that consists of a lattice of struts resembling massive insect legs.
In the original cut of Alien, the Alien possessed a complete lifecycle, with the still-living bodies of its victims converted into eggs. However, the scene showing the crew converted into eggs was cut for reasons of pacing, leaving the ultimate origin of the eggs obscure. This allowed Aliens director James Cameron to introduce a concept he had initially conceived for a spec script called Mother, a massive mother Alien queen which laid eggs and formed the basis for the Aliens' life cycle. Cameron conceived the queen as a monstrous analogue to Ripley's own maternal role in the film. In that vein, some critics have compared it to Grendel's mother.
The queen was designed by Cameron in collaboration with special effects artist Stan Winston, based upon an initial painting Cameron had done at the start of the project. The Winston Studio created a test foamcore queen before constructing the full hydraulic puppet which was used for most of the scenes involving the large Alien. Two people were inside working the twin sets of arms, and puppeteers off-screen worked its jaws and head. Although at the end of the film, the queen was presented full-body fighting the power-loader, the audience never sees the legs of the queen, save those of the small-scale puppet that appears only briefly. In Aliens, Cameron used very selective camera-angles on the queen, using the 'less is more' style of photography. Subsequently, the movie won an Oscar for Visual Effects. An adult queen was to reappear in Alien Resurrection. The original mechanical head previously used in Aliens was provided by Bob Burns and was an altered design. It was repainted with a blend of green and brown, giving it a shimmering, insect-like quality. This color concept would be abandoned in Alien vs. Predator in favour of the original black color scheme.
In the climax of Alien vs. Predator, the queen's basic design was altered to make her more "streamlined" in appearance and her overall size was increased to six meters (20 feet) tall. Other changes include the removal of the "high-heel" protrusions on her legs, including additional spines on her head and making her waist thinner because there was no need for puppeteers inside her chest. The animatronic queen had 47 points of hydraulic motion.
Aliens vs. Predator: Requiem (2007) introduced a younger form of the full-grown queen, albeit with traits inherited from its Predator host. Recalling the facehugger's method of embryo implantation, the Predalien uses its inner mouth to directly deposit multiple chestburster embryos into pregnant female hosts, also using its mandibles to latch on the faces of said hosts, completely bypassing the need for facehuggers. This is explained by the Brothers Strause as a means of quickly building an army of Aliens before the young queen evolves into its sedentary, egg-laying state.
Egg
Adult xenomorphs are capable of creating their own reproductive egg ('ovamorph') by embedding their prey into an organic substance that (in theory) metabolically reacts to merge host-parasite genetic material. The entire process is xeno-dominant, resulting in a facehugger. The eggs laid by the queen are ellipsoidal, leathery objects between one-half and one meter (two and three feet) high with a four-lobed opening at the top. The eggs can remain in a stasis mode for years, possibly indefinitely, until nearby movement is detected. As a potential host approaches, the egg's lobes unfold like flower petals, and the parasitic facehugger extracts itself from the egg and attaches itself to the potential host.
Giger initially designed the eggs with a much more obvious vaginal appearance, complete with an "inner and outer vulva". The producers complained that Catholic countries would ban the film if the allusion was too strong, so Giger doubled the lobes to four so that, in his words, "seen from above, they would form the cross that people in Catholic countries are so fond of looking at".
The interior of the original egg was composed of "Nottingham lace" (caul fat), which is the lining of a cow's stomach. In the first film, the quick shot of the facehugger erupting from the egg was done with sheep's intestine. Initially, the egg remained totally stationary except for the hydraulic movement of the lobes; however, by Alien Resurrection, the entire egg was made to ripple as it opened. In the Director's Cut of Alien, an additional scene shows still living crew members being cocooned into new eggs, either morphing into a new embryo or acting as a food source for the facehugger inside the egg. According to the novelization for Resurrection, the Egg, in and of itself, could be considered a living organism in its own right.
Facehugger
A facehugger is the second stage in the Alien's metamorphosis. It has eight long, finger-like legs, which allow it to crawl rapidly, and a long tail adapted for making great leaps. These particular appendages give it an appearance somewhat comparable to chelicerate arthropods such as arachnids and horseshoe crabs.
The facehugger is a parasitoid; its only purpose is to make contact with the host's mouth for the implantation process by gripping its legs around the victim's head and wrapping its tail around the host's neck. Upon making contact, the facehugger administers a cynose-based paralytic to the host in order to render it unconscious and immobile. During a successful attachment, the facehugger will insert an ovipositor down the host's throat while simultaneously implanting an embryo. The host is kept alive, and the creature breathes for the host. Attempts to remove facehuggers generally prove fatal to the host, as the parasitoid will respond by tightening its tail around the host's neck, and its acidic blood prevents it from being cut away. In addition, its grip on the host's head is strong enough to tear the host's face off if it is forcibly removed.
Once the Alien embryo is implanted, the facehugger will remain attached until the implant is secure, which can take anywhere from less than a minute to 16 hours. Once this happens, the parasite detaches, crawls away, and dies. The victim awakens with no awareness of the implantation, believing themselves to have been asleep, and appears to have a normal, healthy bodily function.
According to AVPR: Science of the Xenomorph, a behind-the-scenes documentary on Aliens vs. Predator: Requiem, it is theorized that facehuggers may implant a viral agent that "commands" the host's cells to grow the chestburster, as opposed to an implanted embryo. This is an alternate explanation to horizontal gene transfer as to how the resulting xenomorph is able to adopt the characteristics of its host.
Giger's original design for the facehugger was a much larger creature with eyes and a spring-loaded tail. Later, in response to comments from the filmmakers, Giger reduced the creature's size substantially. At first, Giger assumed that the facehugger would wrap around the outside of the astronaut's helmet, but Scott decided that it would have far more impact if the facehugger was revealed once the helmet was removed. Scott and Giger realized that the facehugger should burn through the helmet's faceplate with its acid blood; subsequent redesigns of the space helmet included a far larger faceplate to allow for this.
Dan O'Bannon initially conceived the facehugger as somewhat resembling an octopus, possessing tentacles. However, when he received H. R. Giger's designs, which substituted finger-like digits for tentacles, he found Giger's design concept to be superior. Since no one was available at the time, O'Bannon decided to design the facehugger prop himself. The technical elements of the musculature and bone were added by Ron Cobb. Giger's initial design for the smaller facehugger had the fingers facing forward, but O'Bannon's redesign shifted the legs to the side. When the foam rubber sculpture of the facehugger was produced, O'Bannon asked that it should remain unpainted, believing the rubber, which resembled human skin, was more plausible.
There has been some debate about the sexual appearance of the facehugger, some saying it unmistakably resembles female genitalia.
In Aliens, the facehuggers were redesigned by Stan Winston so that they would be capable of movement. Unlike the creatures in the first film, the creatures would take a much more active role in impregnating their victims. When Ripley throws one off her, the facehugger is now capable of scuttling across the floor and leaping at its prey, wrapping its tail around the victim's throat. The facehugger is also shown to be capable of independently surviving outside of its egg. Due to the film's budget, only two fully working facehuggers were built.
In Alien 3, another addition was planned but ultimately dropped: a "super-facehugger" that would carry the embryo of the queen Alien. This super-facehugger is briefly glimpsed in the assembly cut of Alien 3 but not identified as such. It also made a brief appearance in the canonical novel Alien: Sea of Sorrows, set after the events of Alien Resurrection and centered on the grandson of Ripley 8, Ellen Ripley's clone.
Chestburster
After impregnation, facehuggers die and the embryo's host wakes up afterward, showing no considerable outward negative symptoms and a degree of amnesia regarding events at the time of implantation. Symptoms build acutely after detachment of the facehugger, the most common being sore throat, slight nausea, increased congestion, and moderate to extreme hunger. In later stages where the incubation period is extended in preparation of a queen birth, symptoms will include a shortness of breath, exhaustion, and hemorrhaging (detectable through biological scanners and present in nosebleeds or other seemingly random bleeding incidents), as well as chest pains caused by a lack of space due to the chestburster's presence or even premature attempts to escape the host.
The incubating embryo takes on some of the host's DNA or traits, such as bipedalism, quadrupedalism, possessing the mandibles of a Predator, and other structural changes that enable adaptation to its new environment. According to Weyland-Yutani medical scientists in Aliens: Colonial Marines, the chestburster will draw nutrients from the host's body in order to develop a placenta as it grows, attaching itself to several major organs in the process. The placenta has cancerous qualities, such that even if the embryo were removed surgically, the placenta would simply cause the affected organs to shut down, resulting in death; the only exceptions to this are from human-xenomorph hybrid hosts like the cloned Ripley 8, who survived an extraction procedure without issue.
Over the course of one to 24 hours—indeterminable in some cases, and sometimes up to a week, in the case of some queens—the embryo develops into a chestburster, at which point, it emerges, violently and fatally ripping open the chest of the host.
There is no on-screen explanation of the reasons for the different incubation times. Fully-grown aliens may avoid harming species acting as hosts for un-emerged chestbursters, though this may only be in the case of a queen embryo.
When a chestburster erupts from the body of a human host, it is less than tall, although the embryo can vary in size from a guinea pig to a large dog depending on the size and species of the host. Its appearance and adaptive characteristics are also determined by the host. Typically, its first instinct upon emerging is to flee and hide until full maturation, as well as find a source of nutrition. However, it soon undergoes a dramatic growth spurt, reaching adult size in a matter of hours; in Alien, the chestburster had grown to in height by the time the Nostromo crew located it again. The chestburster is shown to have molted before reaching maturity. In Aliens vs. Predator: Requiem, Alien warriors who are still growing are shown, displaying shed skin. In the unrated cut, the Predalien is shown wiping off its final molted skin at the film's start.
The chestburster was designed by Alien director Ridley Scott and constructed by special effects artist Roger Dicken. Giger had produced a model of a chestburster that resembled a "degenerate plucked turkey" and was far too large to fit inside a ribcage. Much to Giger's dismay, his model reduced the production team to fits of laughter on sight. Scott drafted a series of alternative designs for the chestburster based on the philosophy of working "back [from the adult] to the child" and ultimately produced "something phallic". The chestburster in the original Alien was armless, but arms were added in Aliens to facilitate the creature crawling its way out of its host's corpse. This concept would be abandoned in Alien Resurrection, but it would return in Alien: Covenant.
Cocoon
The xenomorph lifecycle is expanded in the movie Alien: Romulus with the introduction of a "cocoon" stage, which bridges the gap between the chestburster and the fully-grown adult xenomorph stages as witnessed by the characters Bjorn and Kay while aboard the derelict Renaissance space station. It is shown that a chestburster which had emerged from their crewmate Navarro had attached itself to a wall and built a biomechanical protective cocoon around itself after shedding its skin. While inside the cocoon, the chestburster transformed into a fully-grown adult drone xenomorph.
Alternative forms
Aliens take on various forms depending on the characteristics of their hosts. Most of the Aliens seen to date have been human-spawned, but a number of Aliens born from other hosts have also been seen. Some of these are different variants or species altogether, such as the Neomorph and Deacon.
"Dragon"
The "Dog Alien" or "Ox Alien" (also known as the "Runner Alien" in the expanded universe stories), and referred to in-film as "Dragon", was introduced in Alien 3. The creature shares the same basic physical configuration and instincts as the Aliens shown in the previous films, although there are several differences due to the host it was spawned from (a dog in the theatrical cut, an ox in the novelized version and the assembly cut). The dog Alien in its chestburster form is a miniature version of the adult, unlike the larval human- and Predator-spawned chestbursters. The adult is primarily quadrupedal, has digitigrade hind legs, and lacks the dorsal tubes of the human-spawned variety. Its main behavioral difference is that, like a dog or other quadrupedal animal, it uses its mouth rather than its front legs as its primary weapon, mauling its victims with its teeth. This Alien, even when actively provoked, would not attack or kill Ripley, due to the queen growing inside her. This changed towards the movie's climax, at which point the creature, after surviving a torrent of molten lead, burst from the liquid and went on a rampage, pursuing Ripley and presumably attempting to kill her until she destroyed it by showering it with freezing water, causing it to explode from thermal shock.
Originally, H. R. Giger was approached on July 28, 1990, by David Fincher and Tim Zinnemann, and was asked to redesign his own creations for Alien 3. Giger's new designs included an aquatic face-hugger and a four-legged version of the adult Alien. Giger said in an interview "I had special ideas to make it more interesting. I designed a new creature, which was much more elegant and beastly, compared to my original. It was a four-legged Alien, more like a lethal feline—a panther or something. It had a kind of skin that was built up from other creatures—much like a symbiosis." However, when Tom Woodruff and Alec Gillis of Amalgamated Dynamics told Giger that they had their own design, Giger expressed himself as "very upset" and that the creature he had especially designed was his "baby". Even after the production severed contact, Giger continued to fax suggestions to Fincher and made full-scale drawings and a sculpt of the Alien, all of which were rejected.
Giger would later be angered by the end credits of the released film presenting him as merely the creator of the original creature, and the fact that ADI personnel gave a series of interviews that minimized Giger's contribution. Fox eventually reimbursed Giger, but only after he refused to be interviewed for their behind-the-scenes documentary of Alien 3.
However, Giger would comment that he thought the resulting film was "okay" and that the Alien was "better than in the second film".
Newborn
In Alien Resurrection, due to significant genetic tampering in an attempt to recover DNA from the deceased Ellen Ripley and the Alien queen within her, the resulting cloned Aliens show a number of minor human traits. The cloned queen inherits a perversion of a human womb, and as a result, it ceases to lay eggs and gives birth to a humanoid mutant hybrid. Physically, the human/Alien Newborn is very different from other alien young, being larger, with pale, translucent skin, a skull-shaped face with eyes, a human tongue, and a complete absence of a tail. The Newborn fails to bond with its Alien queen mother, killing it, and imprinting on the Ripley clone instead.
The Newborn creature was originally scripted by Joss Whedon as being an eyeless, ivory-white quadruped with red veins running along the sides of its head. It had an inner jaw, with the addition of a pair of pincers on the sides of its head. These pincers would have been used to immobilize its prey as it drained it of blood through the inner jaw. The creature was originally going to rival the queen in size, but Jean-Pierre Jeunet asked ADI to make the human/Alien hybrid, known as the Newborn, more human than Alien. The Newborn's eyes and nose were added to improve its expressions to make it a character, rather than just a "killing machine", and give it depth as a human-like creature.
Predalien
This variation is the result of a facehugger impregnating a Predator. The "Predalien" was first depicted in a painting by Dave Dorman, and subsequently featured in the Aliens versus Predator comics and games. A Predalien chestburster debuted in the final scene of Alien vs. Predator (2004), but the creature did not make a full on-screen appearance as an adult until Aliens vs. Predator: Requiem (2007).
The Predalien shares many characteristics with its hosts, such as long hair-like appendages, mandibles, skin color, blood that glows in the dark (though still acidic), and similar vocalizations. It is a large, bulky creature, and possesses physical strength greater than that of human-spawned Aliens. Like human-born Aliens, it is also shown to be stronger than its host species, as evidenced by its ability to pin, push, and knock a Predator away with ease.
Deacon
The dark-blue Deacon is a different species that makes an appearance in Prometheus, though it clearly shares traits similar to the xenomorph, including a similar life-cycle. The Deacon is the result of a "Trilobite" (which takes its name from a group of extinct marine arthropods), a large facehugger-like creature, attacking and impregnating an Engineer. After some time, it will burst out of its host, with the notable difference that it is "born" almost fully developed. Its fate is unknown, though the tie-in comic book Prometheus: Fire and Stone, also set on LV-223, features a mutated mountain with acidic veins which are implied to be the heavily mutated Deacon's deadly back spines.
Neomorph
The pale-white Neomorph is featured in Alien: Covenant. It was created through exposure to spores found growing on the Engineer homeworld. The embryonic Neomorph gestates inside the host until it bursts out from wherever in the host it has taken hold (one is seen gaining entry through the ear and emerging from the spine, while a second, inhaled through the nose, later erupts from the host's throat; other means of entry and egress are not made clear), using mostly its head, which is sharp and pointed, not unlike the Deacon's. The Deacon and Neomorph share the same type of pharyngeal jaw (similar to that of a moray eel), among other distinctly less biomechanical traits than the traditional xenomorph, though the latter does share with the Neomorph a tail strong enough to cause grievous injury; at one point, a violently thrashing Neomorph tail is seen to instantly remove a human jaw. This behavior is just one of several demonstrating the Neomorph's far more feral nature; they are voracious predators, often eating the corpses of their victims, and they appear to lack their xenomorph cousins' hive structure, possibly since they propagate through mutated animal life.
Offspring
The Offspring, featured in Alien: Romulus, is the result of pregnant character Kay injecting a serum derived from the Xenomorph's genome into her neck, leading to a rapid mutation of her unborn fetus. The creature is violently birthed in an egg, hatches, and rapidly grows to over 8 feet tall. It possesses fleshy skin, black eyes, a tail, a Xenomorph-like tongue with teeth, dorsal tubes, and overall facial similarities to the Engineers. It terrorizes the remaining crew of the Corbelan, damaging the android Andy and feeding off of its mother Kay before pursuing Rain, but is finally defeated by her jettisoning it into the planetary rings below. Although there is no evidence that the Offspring possesses higher thinking, it smiles when in an advantageous position.
See also
Dracunculiasis, a real parasitic infection by a worm (up to 1 m long) that emerges from the body one year after infection
Notes
References
Further reading
Alien – Released on May 25, 1979 – On-line script. Retrieved March 2, 2007.
Aliens – Released on June 18, 1986 – On-line script. Retrieved March 2, 2007.
Alien 3 – Released on May 22, 1992 – On-line script. Retrieved March 2, 2007.
Alien: Resurrection – Released on November 26, 1997 – On-line script. Retrieved March 2, 2007.
Aliens versus Predator (computer game).
Aliens versus Predator 2 (computer game).
Aliens Colonial Marines Technical Manual, HarperCollins 1996, .
Aliens: A Comic Book Adventure (computer game)
The Anchorpoint Essays, DNA Reflex
Xenomorph Types at Alien vs. Predator Central
External links
Alien (franchise) characters
Action film villains
Alien vs. Predator (franchise) characters
Biological weapons in popular culture
Film characters introduced in 1979
Fictional blind characters
Fictional characters with superhuman strength
Fictional extraterrestrial species and races
Fictional hybrid species and races
Fictional mass murderers
Fictional monsters
Fictional parasite characters
Fictional rapists
Fictional superorganisms
H. R. Giger
Horror film villains
Science fiction film characters | Xenomorph | [
"Biology"
] | 9,902 | [
"Superorganisms",
"Fictional superorganisms",
"Biological weapons in popular culture",
"Biological warfare"
] |
612,479 | https://en.wikipedia.org/wiki/Hydrogen%20line | The hydrogen line, 21 centimeter line, or H I line is a spectral line that is created by a change in the energy state of solitary, electrically neutral hydrogen atoms. It is produced by a spin-flip transition, which means the direction of the electron's spin is reversed relative to the spin of the proton. This is a quantum state change between the two hyperfine levels of the hydrogen 1s ground state. The electromagnetic radiation producing this line has a frequency of 1420.405751768 MHz (about 1.42 GHz), which is equivalent to a wavelength of about 21.106 cm in a vacuum. According to the Planck–Einstein relation E = hν, the photon emitted by this transition has an energy of about 5.87 μeV [9.41 × 10⁻²⁵ J]. The constant of proportionality, h, is known as the Planck constant.
The hydrogen line frequency lies in the L band, which is located in the lower end of the microwave region of the electromagnetic spectrum. It is frequently observed in radio astronomy because those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light. The existence of this line was predicted by Dutch astronomer H. van de Hulst in 1944, then directly observed by E. M. Purcell and his student H. E. Ewen in 1951. Observations of the hydrogen line have been used to reveal the spiral shape of the Milky Way, to calculate the mass and dynamics of individual galaxies, and to test for changes to the fine-structure constant over time. It is of particular importance to cosmology because it can be used to study the early Universe. Due to its fundamental properties, this line is of interest in the search for extraterrestrial intelligence. This line is the theoretical basis of the hydrogen maser.
Cause
An atom of neutral hydrogen consists of an electron bound to a proton. The lowest stationary energy state of the bound electron is called its ground state. Both the electron and the proton have intrinsic magnetic dipole moments ascribed to their spin, whose interaction results in a slight increase in energy when the spins are parallel, and a decrease when antiparallel. The fact that only parallel and antiparallel states are allowed is a result of the quantum mechanical discretization of the total angular momentum of the system. When the spins are parallel, the magnetic dipole moments are antiparallel (because the electron and proton have opposite charge), thus one would expect this configuration to actually have lower energy just as two magnets will align so that the north pole of one is closest to the south pole of the other. This logic fails here because the wave functions of the electron and the proton overlap; that is, the electron is not spatially displaced from the proton, but encompasses it. The magnetic dipole moments are therefore best thought of as tiny current loops. As parallel currents attract, the parallel magnetic dipole moments (i.e., antiparallel spins) have lower energy.
In the ground state, the spin-flip transition between these aligned states has an energy difference of about 5.87 × 10⁻⁶ eV. When applied to the Planck relation, this gives:
E = hν = hc/λ,
where λ is the wavelength of an emitted photon, ν is its frequency, E is the photon energy, h is the Planck constant, and c is the speed of light in a vacuum. In a laboratory setting, the hydrogen line parameters have been more precisely measured as:
λ ≈ 21.106 cm
ν = 1420.405751768 MHz
in a vacuum.
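As a concrete check of the Planck-relation arithmetic, the sketch below recomputes the wavelength and photon energy from the frequency (an illustrative calculation; the exact 2019 SI constants and the commonly quoted line frequency of 1420.405751768 MHz are taken as given):

```python
# Recompute the 21 cm line's wavelength and photon energy from its frequency.
h = 6.62607015e-34       # Planck constant, J*s (exact in the 2019 SI)
c = 299_792_458.0        # speed of light in vacuum, m/s (exact)
eV = 1.602176634e-19     # one electronvolt in joules (exact)

nu = 1_420_405_751.768   # assumed hydrogen-line frequency, Hz

wavelength = c / nu                 # metres
energy_J = h * nu                   # photon energy, joules
energy_ueV = energy_J / eV * 1e6    # photon energy, micro-electronvolts

print(f"wavelength    ~ {wavelength * 100:.3f} cm")   # ~21.106 cm
print(f"photon energy ~ {energy_J:.4e} J ({energy_ueV:.2f} ueV)")
```

The result reproduces the roughly 21 cm wavelength and shows that the photon carries only a few micro-electronvolts of energy.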
This transition is highly forbidden, with an extremely small transition rate of about 2.9 × 10⁻¹⁵ s⁻¹ and a mean lifetime of the excited state of around 11 million years. Collisions of neutral hydrogen atoms with electrons or other atoms can help promote the emission of 21 cm photons. A spontaneous occurrence of the transition is unlikely to be seen in a laboratory on Earth, but it can be artificially induced through stimulated emission using a hydrogen maser. It is commonly observed in astronomical settings such as hydrogen clouds in our galaxy and others. Because of the uncertainty principle, its long lifetime gives the spectral line an extremely small natural width, so most broadening is due to Doppler shifts caused by bulk motion or nonzero temperature of the emitting regions.
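A hedged back-of-the-envelope check of those lifetime figures (the transition rate of 2.9 × 10⁻¹⁵ s⁻¹ is the commonly quoted value and is assumed here): the mean lifetime is the reciprocal of the rate, and the natural linewidth follows as Δν = A / 2π.

```python
import math

# Assumed spontaneous transition rate (Einstein A coefficient) of the 21 cm line.
A = 2.9e-15                          # s^-1

lifetime_s = 1.0 / A                 # mean lifetime of the excited state, s
lifetime_yr = lifetime_s / (365.25 * 24 * 3600)
natural_width = A / (2 * math.pi)    # natural linewidth, Hz

print(f"mean lifetime     ~ {lifetime_yr:.2e} years")   # ~1.1e7 (11 million years)
print(f"natural linewidth ~ {natural_width:.1e} Hz")
```

The ~10⁻¹⁶ Hz natural width is why essentially all observed broadening of the line is Doppler broadening.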
Discovery
During the 1930s, it was noticed that there was a radio "hiss" that varied on a daily cycle and appeared to be extraterrestrial in origin. After initial suggestions that this was due to the Sun, it was observed that the radio waves seemed to propagate from the centre of the Galaxy. These discoveries were published in 1940 and were noted by Jan Oort, who knew that significant advances could be made in astronomy if there were emission lines in the radio part of the spectrum. He referred this to Hendrik van de Hulst who, in 1944, predicted that neutral hydrogen could produce radiation at a frequency of 1420 MHz due to two closely spaced energy levels in the ground state of the hydrogen atom.
The 21 cm line (1420.4 MHz) was first detected in 1951 by Ewen and Purcell at Harvard University, and published after their data was corroborated by Dutch astronomers Muller and Oort, and by Christiansen and Hindman in Australia. After 1952 the first maps of the neutral hydrogen in the Galaxy were made, and revealed for the first time the spiral structure of the Milky Way.
Uses
In radio astronomy
The 21 cm spectral line appears within the radio spectrum (specifically, in the L band of the UHF range, within the microwave window). Electromagnetic energy in this range can easily pass through the Earth's atmosphere and be observed from the Earth with little interference. The hydrogen line can readily penetrate clouds of interstellar cosmic dust that are opaque to visible light. Assuming that the hydrogen atoms are uniformly distributed throughout the galaxy, each line of sight through the galaxy will reveal a hydrogen line, the lines differing only in their Doppler shifts. Hence, by assuming circular motion, one can calculate the relative speed of each arm of our galaxy. The rotation curve of our galaxy has been calculated using the hydrogen line, and from the plot of the rotation curve and the velocity it is possible to determine the distance to a given point within the galaxy. However, a limitation of this method is that departures from circular motion are observed at various scales.
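The Doppler-shift step described above can be written as a one-line function (a sketch using the non-relativistic approximation; the example observed frequency is purely illustrative):

```python
C_KM_S = 299_792.458           # speed of light, km/s
NU_REST_MHZ = 1420.405751768   # hydrogen-line rest frequency, MHz

def radial_velocity(nu_observed_mhz: float) -> float:
    """Line-of-sight velocity (km/s) of hydrogen gas from its Doppler-shifted
    21 cm line; positive means the gas is receding (observed frequency lower
    than the rest frequency)."""
    return C_KM_S * (NU_REST_MHZ - nu_observed_mhz) / NU_REST_MHZ

# Gas observed 1 MHz below the rest frequency is receding at roughly 211 km/s.
print(radial_velocity(NU_REST_MHZ - 1.0))
```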
Hydrogen line observations have been used indirectly to calculate the mass of galaxies, to put limits on any changes over time of the fine-structure constant, and to study the dynamics of individual galaxies. The magnetic field strength of interstellar space can be measured by observing the Zeeman effect on the 21-cm line; a task that was first accomplished by G. L. Verschuur in 1968. In theory, it may be possible to search for antihydrogen atoms by measuring the polarization of the 21-cm line in an external magnetic field.
Deuterium has a similar hyperfine spectral line at 91.6 cm (327 MHz), and the relative strength of the 21 cm line to the 91.6 cm line can be used to measure the deuterium-to-hydrogen (D/H) ratio. One group in 2007 reported D/H ratio in the galactic anticenter to be 21 ± 7 parts per million.
In cosmology
The line is of great interest in Big Bang cosmology because it is the only known way to probe the cosmological "dark ages" from recombination (when stable hydrogen atoms first formed) to the reionization epoch. After including the redshift range for this period, this line will be observed at frequencies from 200 MHz to about 15 MHz on Earth. It potentially has two applications. First, by mapping the intensity of redshifted 21 centimeter radiation it can, in principle, provide a very precise picture of the matter power spectrum in the period after recombination. Second, it can provide a picture of how the universe was re‑ionized, as neutral hydrogen which has been ionized by radiation from stars or quasars will appear as holes in the 21 cm background.
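That frequency range follows directly from redshifting the rest frequency, ν_obs = ν_rest / (1 + z); a minimal sketch (endpoints chosen to mirror the band quoted above):

```python
NU_REST_MHZ = 1420.4  # 21 cm rest frequency, MHz

def observed_frequency(z: float) -> float:
    """Frequency (MHz) at which 21 cm emission from redshift z arrives."""
    return NU_REST_MHZ / (1.0 + z)

def redshift_at(nu_obs_mhz: float) -> float:
    """Redshift whose 21 cm emission arrives at the given frequency."""
    return NU_REST_MHZ / nu_obs_mhz - 1.0

# The 200 MHz and 15 MHz ends of the band correspond to roughly z ~ 6
# (around the end of reionization) and z ~ 94 (deep in the dark ages).
print(redshift_at(200.0), redshift_at(15.0))
```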
However, 21 cm observations are very difficult to make. Ground-based experiments to observe the faint signal are plagued by interference from television transmitters and the ionosphere, so they must be made from very secluded sites with care taken to eliminate interference. Space-based experiments, even on the far side of the Moon (where they would be sheltered from interference from terrestrial radio signals), have been proposed to compensate for this. Little is known about other foreground effects, such as synchrotron emission and free–free emission from the galaxy. Despite these problems, 21 cm observations, along with space-based gravitational wave observations, are generally viewed as the next great frontier in observational cosmology, after the cosmic microwave background polarization.
Relevance to the search for non-human intelligent life
The Pioneer plaque, attached to the Pioneer 10 and Pioneer 11 spacecraft, portrays the hyperfine transition of neutral hydrogen and used the wavelength as a standard scale of measurement. For example, the height of the woman in the image is displayed as eight times 21 cm, or 168 cm. Similarly the frequency of the hydrogen spin-flip transition was used for a unit of time in a map to Earth included on the Pioneer plaques and also the Voyager 1 and Voyager 2 probes. On this map, the position of the Sun is portrayed relative to 14 pulsars whose rotation period circa 1977 is given as a multiple of the frequency of the hydrogen spin-flip transition. It is theorized by the plaque's creators that an advanced civilization would then be able to use the locations of these pulsars to locate the Solar System at the time the spacecraft were launched.
The 21 cm hydrogen line is considered a favorable frequency by the SETI program in their search for signals from potential extraterrestrial civilizations. In 1959, Italian physicist Giuseppe Cocconi and American physicist Philip Morrison published "Searching for interstellar communications", a paper proposing the 21 cm hydrogen line and the potential of microwaves in the search for interstellar communications. According to George Basalla, the paper by Cocconi and Morrison "provided a reasonable theoretical basis" for the then-nascent SETI program. Similarly, Pyotr Makovetsky proposed SETI use a frequency which is equal to either
π × 1420.406 MHz ≈ 4462.34 MHz
or
2π × 1420.406 MHz ≈ 8924.67 MHz
Since π is an irrational number, such a frequency could not possibly be produced in a natural way as a harmonic, and would clearly signify its artificial origin. Such a signal would not be overwhelmed by the H I line itself, or by any of its harmonics.
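Makovetsky's suggestion amounts to multiplying the hydrogen-line frequency by π or 2π; a minimal sketch (the frequency value used is the commonly quoted one):

```python
import math

NU_H_MHZ = 1420.405751768  # hydrogen-line frequency, MHz

pi_freq = math.pi * NU_H_MHZ          # ~4462.34 MHz
two_pi_freq = 2 * math.pi * NU_H_MHZ  # ~8924.67 MHz

print(f"pi  x H line = {pi_freq:.2f} MHz")
print(f"2pi x H line = {two_pi_freq:.2f} MHz")
```

Because π is irrational, neither value coincides with any integer harmonic of the line itself.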
See also
Balmer series
Chronology of the universe
Dark Ages Radio Explorer
Hydrogen spectral series
H-alpha, the visible red spectral line with wavelength of 656.28 nanometers
Rydberg formula
Timeline of the Big Bang
Footnotes
References
Further reading
Cosmology
External links
— PAST experiment description
Hydrogen physics
Emission spectroscopy
Radio astronomy
Physical cosmology
Astrochemistry
Hydrogen | Hydrogen line | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,181 | [
"Astronomical sub-disciplines",
"Spectrum (physical sciences)",
"Theoretical physics",
"Emission spectroscopy",
"Astrophysics",
"Radio astronomy",
"Astrochemistry",
"nan",
"Spectroscopy",
"Physical cosmology"
] |
612,526 | https://en.wikipedia.org/wiki/Freddy%20%28weather%29 | Freddy is an animated weatherman. The short animated series covers 20 different weather conditions, complete with sound effects. It is currently shown in Hong Kong on the TVB television channel during the weather report. Freddy's purpose is to show the next day's predicted weather. Freddy was created in Milwaukee, Wisconsin and TVB pays license fees for his continual use.
Appearance
Freddy wears a yellow suit and has a pink face; his design has not generally changed over the years.
Freddy and the weather
Freddy's actions give a short forecast of the day's weather:
For extreme heat, he melts;
For lightning, he gets struck by lightning and runs away;
For rain, a rain cloud drifts in and he opens his umbrella;
For heavy rain, the rain fills the screen;
For strong winds, he is blown away;
For cold, he turns to ice;
For fog, he holds up a lantern for light;
For sunny conditions, a flower sprouts, he picks it and walks.
At the same time, there are various sound effects including falling rain, thunder, blowing wind, footsteps, and Freddy's whistling and reactive cries such as "oooh", "aaah" and "awww".
Other markets
Freddy (also known as "Freddy Forecast" or "Freddy the Forecaster") has been shown in 19 television markets in the United States, including KNOE-TV in Monroe, Louisiana; KTVO in Kirksville, Missouri; and WJBF in Augusta, Georgia. The series also ran in Perth, Australia for many years.
In Shenzhen, China, Shenzhen Television's news bulletin at noon uses a similar character and features Shenzhen's skyline in the background.
Hong Kong (TVB)
In Hong Kong, where Freddy is known by a separate Cantonese and Mandarin name, the character is utilised on TVB's weather forecasts, for both its English (TVB Pearl) and Chinese (TVB Jade) channels. In 1993, he became computer-animated.
The skyline behind him on TVB's broadcasts is regularly updated to keep abreast of the ever-changing skyline of Hong Kong. It appears as if he is walking on the water surface of Victoria Harbour. As of January 2008, Freddy was upgraded to his third incarnation.
During the 1980s weather reports, in case of fine weather Freddy would pick flowers, which was criticised as an example of destroying public property. After receiving complaints, TVB changed this in the 1990s to him walking and whistling merrily.
References
Toon Town: the city's homegrown comics, HK Magazine No. 585. Retrieved 29 July 2005.
TVB
Television mascots
Cartoon mascots
Male characters in television
Fictional Hong Kong people
Weather prediction
Television weather presenters
Chinese mascots | Freddy (weather) | [
"Physics"
] | 571 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
612,757 | https://en.wikipedia.org/wiki/Piper%20Alpha | Piper Alpha was an oil platform located in the North Sea north-east of Aberdeen, Scotland. It was operated by Occidental Petroleum (Caledonia) Limited (OPCAL) and began production in December 1976, initially as an oil-only platform but later converted to add gas production.
Piper Alpha exploded and collapsed under the effect of sustained gas jet fires on the night of 6–7 July 1988, killing 165 of the men on board (30 of whose bodies were never recovered), as well as two rescuers. Sixty-one workers escaped and survived. The total insured loss was about £1.7 billion, making it one of the costliest man-made catastrophes ever. At the time of the disaster, the platform accounted for roughly 10% of North Sea oil and gas production and was the world's single largest oil producer. The accident is the worst offshore oil and gas disaster ever in terms of lives lost, and is comparable only to the Deepwater Horizon disaster in terms of industry impact. The inquiry blamed it on inadequate maintenance and safety procedures by Occidental, though no charges were brought. A separate civil suit resulted in a finding of negligence against two workers who were killed in the accident.
A memorial sculpture is located in the Rose Garden of Hazlehead Park in Aberdeen.
Piper oilfield
Four companies (Occidental Petroleum (UK) Ltd, Getty Oil International (England) Ltd, Allied Chemical (Great Britain) Ltd, and Thomson Scottish Associates Ltd) formed a joint venture and obtained an oil-exploration licence in 1972. They discovered the Piper oilfield in January 1973, and began fabrication of the platform, pipelines, and onshore support structures. Oil production started in December 1976, less than four years after discovery (a record rarely beaten since), with about of oil per day, later increasing to . Production declined to by 1988.
A large, fixed platform, Piper Alpha was located in the Piper oilfield, around northeast of Aberdeen in of water. Piper Alpha produced crude oil and natural gas from 36 wells. OPCAL built the Flotta oil terminal in the Orkney Islands to receive and process oil from the Piper, Claymore (both operated by OPCAL), and Tartan (Texaco) oilfields, each with its own platform. One diameter main oil pipeline ran from Piper Alpha to Flotta.
The Piper platform was the hub of a network of pipelines connecting it to nearby platforms and to shore. The Tartan field fed oil to Claymore, with the co-mingled oil flowing from Claymore through a short pipeline to join the Piper-Flotta line some to the west of Piper.
Separate diameter gas pipelines were run from the Tartan platform to Piper, and from Piper to the Total-operated manifold compression platform MCP-01 some to the northwest. Another line connected Claymore to Piper, primarily to provide gas from Piper to the Claymore gas lift system. MCP-01 would receive the gas from Piper and Tartan, as well as from the Frigg gas field (through a separate pipeline), and send the resulting stream to St Fergus Gas Terminal through a , 2 × 32-inch pipeline.
The inventory of the pipelines was significant, with the main oil line to Flotta containing around 70,000 tonnes of oil and the three gas lines linking Piper to the surrounding platforms close to 2,000 tonnes of high-pressure gas. The pressure in the Tartan–Piper and Piper–MCP-01 pipelines was around 127 bar.
Construction and layout
Piper Alpha's production facilities were designed by Bechtel in London. The supporting platform jacket, as well as the topside module structures and buildings, were designed by McDermott Hudson. The eight-legged jacket structure was constructed by J. Ray McDermott in Ardersier, Inverness-shire, and Union Industrielle et d'Entreprise in Le Havre, France, with the sections united in Ardersier before being towed out during 1975. The jacket weighed around 14,000 tonnes and was tall, of which a length of was permanently submerged. Four clusters of foundation piles extended a further below the seabed. Topside modules weighing about 10,000 tonnes in aggregate were lifted from a crane ship and installed over the jacket in late 1976.
Piper's hydrocarbon facilities and principal utilities were distributed in four main modules (A, B, C, and D) separated by firewalls and sitting atop the steel jacket. Above the main modules were a drilling derrick, various utilities, the living quarters, flare booms, two pedestal cranes, and the helideck.
For safety reasons, the modules were organized so that the most dangerous platform operations took place far from the personnel areas. However, the conversion from oil to gas broke this safety concept, bringing sensitive areas together; for example, the gas compression module ended up next to the control room. The proximity of these two areas played a significant role in the accident.
The hydrocarbon inventory actually held within the platform was small in comparison with that contained in the pipelines, standing at around 80 tonnes of process fluids and 160 tonnes of diesel fuel (which was stored in tanks located above module C).
Upgrades and production modes
In 1978, major works were carried out to enable the platform to meet British government gas-conservation requirements, and to avoid waste from the flaring of excess gas. A gas conservation module (GCM) was added, built on top of module B. After this work, Piper Alpha operated in what was known as "phase-2 mode", i.e., using the GCM facility. In phase-2 mode, the GCM would treat the gas obtained in the crude oil separation process, separate condensate (or natural gas liquids, NGL) from it, reinject the condensate in the oil export pipeline to Flotta, and compress the gas for export to the pipeline to MCP-01. From the end of 1980 until July 1988, phase-2 mode was its normal operating state.
In the late 1980s, major construction, maintenance, and upgrade works were planned by Occidental, and by July 1988 the rig was already well into a major revamp, with six projects identified, including the change-out of the GCM unit. This meant that the rig had returned to its initial "phase-1 mode", i.e., operating without the GCM unit, only three days before the accident. Despite the complex and demanding work schedule, Occidental decided to continue operating the platform in phase-1 mode throughout this period rather than shutting it down, as had originally been planned. The planning and controls put in place were thought to be adequate. At the time of the accident, Piper weighed around 34,000 tonnes. It continued to export oil at just under per day (or 10% of the entire production of the UK sector of the North Sea, which made it the world's single largest oil producer) and to export Tartan gas at some per day at standard conditions during this period.
Events on 6 and 7 July 1988
Because the platform was completely destroyed, and many of those involved died, analysis of events can only suggest a possible chain of events based on known facts. Some witnesses to the events question the official timeline.
Preliminary events
At 07:45, 6 July 1988, the permit-to-work forms for the day shift were issued and signed. Of the two condensate pumps, both located in module C, pump B was operating to displace the platform's condensate for transport to the coast, while pump A was due for maintenance. Two permits were issued to that effect, one for a pump overhaul and another for the removal of the pump's pressure safety valve (PSV #504), which was due for recertification. During the day, pump A was electrically and mechanically isolated, but containment was not broken. The PSV itself, however, was removed. The open condensate pipe upstream of the PSV was temporarily sealed with a disk cover (a flat metal disc also called a blind flange or blank flange). It was only hand-tightened. Because the work could not be completed by 18:00, the blind flange remained in place. The on-duty engineer filled in information on the PSV removal permit to the effect that pump A was not ready and must not be switched on under any circumstances. However, this information was not recorded on the pump overhaul permit.
The night shift started at 18:00 with 62 men running Piper Alpha. As the on-duty custodian was busy, the engineer neglected to inform him of the condition of pump A. Instead, he placed the PSV permit in the control centre and left. This permit later disappeared and was never found.
At 19:00 the diesel-driven fire pumps were put under manual control. Like many other offshore platforms, Piper Alpha had an automatic fire-fighting system, driven by both diesel and electric fire pumps. The pumps were designed to suck in large amounts of sea water for firefighting and had automatic controls to start them in case of fire. However, the Piper Alpha procedure adopted by the offshore installation manager (OIM) required manual control of the diesel pumps whenever divers were in the water (as they were for about 12 hours a day during summer) although in reality, the risk was not seen as significant, unless a diver was closer than from any of the four level caged intakes. A recommendation from an earlier audit had suggested that a procedure be developed to keep the pumps in automatic mode if divers were not working in the vicinity of the intakes, as was the practice on the Claymore platform, but this was never implemented.
At 21:45, condensate pump B stopped and could not be restarted. This was likely due to the formation of hydrates and the consequent blockage of gas compression pipework, following problems with the methanol system. The operators were anxious to reinstate condensate pumping capacity. Failure to do so would have meant needing to stop the gas compressors and venting to the flare all the gas that could not be processed.
Around 21:52 a search was made through the documents to determine whether condensate pump A could be started. The permit for the pump A overhaul was found, but the one for its PSV removal was not. The valve was at a distance from the pump, so the permits were stored in different boxes, as they were sorted by location. Because the overhaul had only just started that day, with no equipment removed or containment broken, the operators were under the impression that the pump could be put back in operation quickly and safely. None of those present were aware that a vital part of the machine had been removed; the missing valve was not noticed by anyone, particularly as the blind flange replacing it was several metres above floor level and obstructed from view.
Explosion in module C and initial reactions
At or shortly before 22:00, gas was reintroduced into pump A, filling it. The loosely fitted flange did not withstand the resulting pressure, and gas audibly leaked out at high pressure, drawing the attention of several men and triggering multiple gas alarms. Before anyone could act, the gas ignited and exploded. The source of ignition is unclear; the later investigation pointed to hot work, hot surfaces, broken light fittings or an electrostatic spark as potential sources (electrical equipment in the surroundings was rated for hazardous areas). The platform, originally built for oil production only, was not of blast-proof design, so the firewalls were not designed to withstand explosions. The blast blew through the firewalls, made up of variously sized panels bolted together, separating module C from the adjoining modules B and D (the latter of which housed the control room). As a result, the control room was almost entirely destroyed. Panels around module B were also dislodged, one of them rupturing a small condensate pipe and creating another fire.
Immediately after the explosion, control room operator Geoff Bollands, who had witnessed the alarms going off in the control room and subsequently survived the blast, activated the rig's emergency stop button before escaping. This closed isolation valves in the wells and sea riser lines and ceased all oil and gas production. Theoretically, the platform would then have been isolated from the flow of oil and gas and the fire contained. The gas pipelines connecting Piper to Tartan and Claymore could only be isolated using separate push buttons, which were not actuated; however, the riser isolation valves probably closed due to loss of power supply in the explosion. (At any rate, the flare continued to burn until 23:30, indicating a leak in the Claymore riser isolation valve.)
The control room of Piper Alpha was abandoned. The rig's design made no allowance for the destruction of the control room, and the platform's organization disintegrated. As the diesel fire pumps had been switched to manual activation, the fire water system could not function properly; moreover, the pumps' vulnerable location in module D, adjacent to the failed division from module C, made it impossible for the crew to reach them and start them by hand. Electrical power quickly failed too, as cables were routed through vulnerable production areas without redundancy. After the main generator tripped, the emergency generator did not take over; the drilling generator started but subsequently failed. Some battery-run systems remained operational for a while, and the emergency lighting functioned only briefly before failing. The failure of power generation also made the electric fire pumps inoperable. Despite Bollands' activation of the emergency shutdown, no alarms warned workers of the unfolding disaster, as the public announcement and general alarm system had been impaired. Multiple mayday calls were made by radio operator David Kinrade starting at 22:04, before the radio room had to be abandoned at 22:08.
At 22:06 the heat from the flames ruptured crude oil pipework and processing vessels in module B. The released oil ignited, and the resulting pool fire created a black plume of smoke characteristic of oil fires, visible from nearby ships. There is evidence that isolation of the produced oil pipeline was not effective, which may have left a route open for oil to flow into the fire under backpressure from the Claymore oil pipeline. The burning oil later dripped onto a lower platform used by the rig for diving operations. The platform floor consisted of steel grates, which under normal circumstances would have allowed the burning oil to drip harmlessly into the sea, but divers on the previous shift had placed rubber matting on the metal grate (likely to cushion their bare feet from the sharp metal), allowing the oil to form a burning puddle on the platform. After conferring with Bollands and others while still on the main production deck, lead production operator Robert Vernon and safety officer Robert Carroll donned breathing apparatus and left for the diesel fire pumps in an attempt to start them manually. The pair were never seen again.
Shortly before 22:20, the OIMs of Tartan and Claymore became aware that an explosion had taken place on Piper Alpha and that a fire was raging. However, both decided not to shut down production and instead to await orders to that effect from Aberdeen. By this time, 70 to 80 men had mustered in the canteen, with access to the lifeboats or the helideck made impossible by smoke and fire. The room was becoming increasingly hot and full of smoke. Piper's OIM did not order an evacuation.
Subsequent gas pipeline ruptures and platform collapse
At 22:20, in a domino effect, the heat from the burning oil collecting on the diving platform caused the nearby Tartan pipeline to rupture violently. This discharged enormous amounts of highly flammable gas (some 30 tonnes in the first minute of the release alone), which immediately ignited into a massive jet fire. The heat and vibrations of the fire were felt by crews in vessels as far away as from the rig. From that moment on, the platform's destruction was inevitable. The potential for such an extreme escalation scenario was known to Occidental; a report commissioned by the company in 1986 stated that "the gas pipelines would take hours to depressurize because of their capacity. This could result in a high pressure gas fire on the cellar deck that would be virtually impossible to fight, and the protection systems would not be effective in providing the cooling needed for the duration of the depressurisation".
The MCP-01 pipeline failed at 22:50 as a result of the same domino effect, and the ensuing jet fire shot huge flames over into the air. Personnel still left alive were either desperately sheltering in the scorched, smoke-filled accommodation block or leaping from the various deck levels, including the helideck, into the North Sea.
The Claymore gas line ruptured at 23:20, adding even more fuel to the already massive jet fires on board Piper Alpha. At this point the Claymore OIM had received orders from Aberdeen to shut down production, and the gas flowline to Piper with it. He had initiated a pipeline blowdown (depressurization) but this was not yet complete at the moment of the rupture. Tartan's gas pipeline had been shut down around 22:30, with its blowdown commencing around 23:20.
Around 23:45, with critical support structures failing from the intense heat, the platform began to collapse. One of the cranes fell first, followed by the drilling derrick. The generation and utilities module (D) and the fireproofed accommodation block, still occupied by crewmen who had sheltered there, then slipped into the North Sea. By 00:45, 7 July, almost all of Piper Alpha was gone, with only module A still standing.
Rescue operations
Nearby vessels and rescue craft
Vessels that were close to Piper Alpha at the moment of the first explosion included MSV Tharos, a large semi-submersible firefighting, diving/rescue, and accommodation vessel; standby safety vessel MV Silver Pit, which immediately sent her fast rescue boat towards Piper; Maersk Cutter, which started dousing the drilling floor of Piper with her fire monitors as early as ten minutes after the blast; Lowland Cavalier, which had no monitors but immediately deployed a workboat; and converted supply ship Sandhaven, which was the standby vessel for Santa Fe 135, a semi-submersible drilling rig several miles away, and had her fast rescue craft in the water minutes after she spotted the first fire on Piper Alpha. Other vessels that attended the operation later were Loch Shuna, Maersk Logger and Maersk Leader. Overall, 11 fast rescue craft (FRCs) from nearby vessels were involved in the rescue operations.
Tharos launched her Sikorsky S-76 helicopter at 22:11 but it was unable to land on Piper due to smoke. At 22:23 Tharos received a message from Piper: "People majority in galley area. Tharos come. Gangway. Hoses. Getting bad." She drew alongside Piper Alpha around 22:30 and used her water cannon to cool the platform, which helped survivors escape from the pipe deck and helideck. Attempts to deploy her extendable gangway over to Piper were unsuccessful. One survivor who jumped when the Tartan riser failed swam to Tharos and climbed out unaided. When the MCP-01 riser failed, Tharos withdrew to away. The MSV was equipped with a hospital staffed by an offshore medic assisted by diver paramedics from a saturation diving team. A triage and reception area was set up on the vessel's helideck to receive injured casualties.
Silver Pit's FRC was launched within two minutes of the first explosion and rescued the first nine people from the northwest corner within 13 minutes. She rescued a total of 29 people, with Silver Pit herself rescuing a further eight. When the Tartan riser failed, Silver Pit withdrew to away. When the MCP-01 riser failed, rope on the deck began to smoulder and the vessel withdrew further away.
Lowland Cavalier deployed a workboat that picked up two people who had fallen from a rope at the northwest corner. When the first gas riser failed, the workboat crew sheltered in the water.
Sandhaven's FRC picked up four men who had climbed down ropes. She returned and picked up two more when the MCP-01 riser failed. At that moment, the craft's propeller got entangled in debris. The boat was engulfed in the fire, throwing the survivors and the three crew into the water. All perished with the exception of coxswain Iain Letham. He was picked up from the sea one hour later with his lifejacket and safety helmet melted by the scorching heat.
Support vessel Maersk Cutter began using her fire monitors ten minutes after the explosion.
Aircraft
A mayday launched by Lowland Cavalier at 22:01 was relayed to a rescue coordination centre, which instructed RAF Kinloss station to scramble a Hawker Siddeley Nimrod maritime patrol aircraft. This was to be sent to the area to act as flying communications platform, handling the signals from helicopters and reporting them back. At 22:22 and 22:28 Sea King helicopters Rescue 137 and Rescue 131 took off from RAF Lossiemouth and RAF Boulmer respectively. Shetland Coastguard helicopter Rescue 117 took off at 22:45. Sea King Rescue 138 left Lossiemouth at 22:51. The Nimrod took off from Kinloss at 22:55 using the designation Rescue 01.
When Rescue 01 was still about from Piper, the aircraft crew radioed the rescue coordination centre that they could already see the fire. Rescue 01 arrived at the scene at 23:27. Three minutes later the first search-and-rescue helicopter, Rescue 137, reached Tharos, followed by the arrival of Rescue 117, Rescue 138 and Rescue 131 at 23:44, 23:48 and 23:53 hours respectively. Tharos requested Rescue 138 to evacuate 12 nonessential personnel to make room for incoming casualties. The helicopter transferred them to nearby Ocean Victory, before returning with paramedics. The search-and-rescue helicopters made unsuccessful sweeps for survivors in the water and ferried injured survivors from rescue vessels to Tharos and to Aberdeen Royal Infirmary.
A civilian Sikorsky S-61 of Bristow Helicopters carrying a medical emergency team arrived at the scene by 1:20 on 7 July. At 2:00 another helicopter delivered to Tharos the Offshore Specialist Team from Aberdeen Royal Infirmary, along with a significant amount of medical equipment. The last survivors were picked up by Rescue 138 from Tharos at 7:25. By 8:15, 63 personnel (among them one survivor who subsequently died, and the surviving member of the crew of Sandhaven's FRC) had been brought ashore. Aircraft continued searching the area around the platform until the afternoon.
Casualties and survivors
At the time of the disaster, 226 people were on the platform; 165 died and 61 survived. Two men from the Sandhaven were also killed in attempts to pick up survivors in a fast rescue boat. Of the 135 deceased whose bodies were recovered, the vast majority died from inhalation of smoke and gas; only four died from burns, and several others from injuries sustained when jumping into the sea. Thirty bodies were never recovered.
Since both the lifeboats and the helideck were impaired by smoke or flames, all the survivors were among those who jumped into the water from various decks or climbed down knotted ropes. Five survivors jumped off the helideck from a height of into the North Sea. Luckily, the sea conditions were calm on the evening of the disaster. The largest number of survivors (37 out of 61) were recovered by MV Silver Pit or her fast rescue boat, whose coxswain James Clark later received the George Medal, as did Iain Letham of the Sandhaven. Others awarded the George Medal were Charles Haffey from Methil, Andrew Kiloh from Aberdeen, and James McNeill from Oban. Sandhaven crewmates Malcolm Storey, from Alness, and Brian Batchelor, from Scunthorpe, were awarded George Medals posthumously.
Aftermath
Controversy exists about whether time was sufficient for a more effective emergency evacuation. Systems critical for emergency management such as the public announcement/general alarm, emergency power, safe haven, and – crucially – the lifeboats, were destroyed or impaired due to poor platform design. Executing the actions described in the emergency response plan became effectively impossible. Additionally, the OIM was perhaps not capable of thinking outside the established procedures and of ordering an improvised evacuation.
It was estimated that the fires produced flames about 200 metres high, with a peak heat release rate of about 100 gigawatts, or three times the total power consumption of the United Kingdom.
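The ~100 gigawatt peak can be sanity-checked against the release rate reported earlier: some 30 tonnes of gas escaped in the first minute after the Tartan riser ruptured. Assuming a typical natural-gas heating value of about 50 MJ/kg (an assumption, not a figure from the source), that single jet fire alone would release heat at tens of gigawatts:

```python
# Order-of-magnitude check on the peak heat release rate.
gas_released_kg = 30_000       # ~30 tonnes in the first minute (from the text)
duration_s = 60
heating_value_j_per_kg = 50e6  # assumed lower heating value for natural gas

mass_rate = gas_released_kg / duration_s      # 500 kg/s
power_w = mass_rate * heating_value_j_per_kg  # 2.5e10 W
print(f"{power_w / 1e9:.0f} GW")              # prints "25 GW"
```

The Tartan jet fire alone works out to about 25 GW, so a combined peak near 100 GW once the oil pool fires and the later MCP-01 and Claymore ruptures are added is of the right order.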
Around 670 tonnes of oil were spilled in the accident. On 9 July a slick long and wide was reported. Force 4 conditions, together with dispersant sprayed from a supply vessel, helped disperse it.
Only two downhole safety valves failed to close, and five oil wells were left burning. The fires were eventually extinguished by a team onboard Tharos led by firefighter Red Adair, who had been asked to intervene by Occidental chairman Armand Hammer. A relief well was started on 14 July. The wells were capped by 22 July by fitting new valves on top, which allowed introduction of kill fluids.
The accommodation modules where the majority of those onboard had taken refuge were recovered from the seabed in late 1988. They were transported to Flotta, where they were searched by a team led by twenty officers of Grampian Police and including divers as well as Occidental, Department of Energy and Health and Safety Executive personnel. The bodies of 87 men were found inside. The remains of the platform were toppled into the sea on 28 March 1989.
The total insured loss of the disaster was about £1.7 billion, making it one of the costliest man-made catastrophes ever. The event had a considerable impact on North Sea oil and gas production. Piper, Tartan, and Claymore were not the only fields affected: Scapa, Highlander and Petronella also had to wait up to 13 months before half production was regained. The total deferred production amounted to of oil.
Inquiry and safety recommendations
The Piper Alpha Public Inquiry was set up in November 1988 to establish the cause of the disaster. It was chaired by the Scottish judge William Cullen. A variety of sources of evidence were used, including eyewitness accounts from survivors and crews of nearby vessels, data from nearby platforms, the recovery of the deceased, debris collected from the seabed, documentation available ashore, and accounts from 'back-to-back' personnel who had recently worked on Piper Alpha. Highly unusual for an inquiry of this scope was the fact that the entire scene of the accident had effectively disappeared into the sea. The inquiry decided against the recovery of the process modules from the seabed, due to the time required, the hazards involved, and the low chance that that evidence could actually prove useful for the investigation. The living quarters had, however, been recovered, and this allowed for the collection of key documents to support the investigation.
After 180 days of proceedings extending for 13 months, the report Public Inquiry into the Piper Alpha Disaster (short: Cullen Report) was issued in November 1990. It concluded that the initial condensate leak was the result of maintenance work being carried out simultaneously on a pump and related safety valve. The inquiry was critical of Piper Alpha's operator, Occidental, which was found guilty of having inadequate maintenance and safety procedures, but no criminal charges were ever brought against the company.
The second part of the report made 106 recommendations for changes to North Sea safety procedures:
Thirty-seven recommendations covered procedures for operating equipment, 32 the information of platform personnel, 25 the design of platforms and 12 the information of emergency services.
Responsibility for implementation lay with the regulator for 57 of them, with the operators for 40, with the industry as a whole for 8, and with standby-ship owners for 1.
The recommendations led to the enactment of the Offshore Safety Act 1992 and the making of the Offshore Installations (Safety Case) Regulations 1992.
Most significant of these recommendations was that operators were required to present a safety case and that the responsibility for enforcing safety in exploitation operations in the part of the North Sea apportioned to the UK should be moved from the Department of Energy to the Health and Safety Executive, as having both production and safety overseen by the same agency was a conflict of interest.
Civil suit
Occidental and their insurers, Lloyd's of London, paid survivors and families of the dead a total of $220 million in settlements. Later, Lloyd's and Elf Enterprise Caledonia Ltd, successors to Occidental Petroleum, brought civil proceedings against a number of contractor companies who were working on Piper at the time of the accident. Based on the argument that the responsibility for the accident should be shared among the platform operator and the contractors working onboard immediately prior to the accident, Elf was seeking to recover from the defendants a part of the monies paid to the injured and the affected families. Among the companies involved were British Telecommunications, Wood Group, and Stena Offshore.
Giving verdict in what was then the longest civil trial in Scottish history, in 1997 Lord Caplan ruled that two workers who were killed in the explosion, Robert Vernon (who had posthumously received the Queen's Commendation for Bravery) and Terence Sutton, were to blame for the accident. Lord Caplan found that Sutton had failed to tighten the bolts of the blind flange at the suction side of the removed PSV, and that Vernon had put the pump back in operation without checking its status first. Vernon was employed by Occidental and Sutton by contractor Score (UK) Ltd. The finding against Vernon and Sutton was controversial among the families of the victims.
Insurance claims
The disaster led to insurance claims of around US$1.4 billion, making it at that time the largest insured man-made catastrophe. The insurance and reinsurance claims process revealed serious weaknesses in the way insurers at Lloyd's of London and elsewhere kept track of their potential exposures and led to their procedures being reformed.
One of the 1997 rulings by Lord Caplan was that, although in principle the contractors were obliged to indemnify Elf, Elf's insurers could not claim back monies from the defendant contractors, because Elf had already largely been indemnified by Lloyd's. Elf and Lloyd's appealed this finding in Scotland to the Inner House of the Court of Session, which decided in their favour in December 1999. The contractors then appealed the decision to the House of Lords in London in November 2001, but their appeal was rejected. As a result, Elf and their insurers were able to recoup £136 million with accrued interest.
Legacy
The accident is the world's worst offshore oil and gas disaster in terms of lives lost. Only the 2010 Deepwater Horizon tragedy has had a comparable impact on the industry.
Survivors and relatives of those who died went on to form the Piper Alpha Families and Survivors' Association, which campaigns on North Sea safety issues. A lasting effect of the Piper Alpha disaster was the establishment of the Offshore Industry Liaison Committee, the trade union for oil and gas rig workers. The union, while still in the form of an unofficial committee drawn from different North Sea rigs, organized large strikes in the summers of 1989 and 1990.
Piper Bravo was installed in 1992 to replace Alpha and commenced production in February 1993. A wreck buoy marking Alpha's remains was installed and lies approximately from Bravo.
Beginning in 1998, one month after the 10th anniversary, professor David Alexander, director of the Aberdeen Centre for Trauma Research at Robert Gordon University carried out a study into the long-term psychological and social effects of Piper Alpha. He managed to find 36 survivors who agreed to give interviews or complete questionnaires. Almost all of this group reported psychological problems. More than 70% of those interviewed reported psychological and behavioural symptoms of post-traumatic stress disorder. Twenty-eight (or 78%) said they had difficulty in finding employment following the disaster; some offshore employers apparently regarded Piper Alpha survivors as "Jonahs" – bringers of bad luck, who would not be welcome on other rigs and platforms. The family members of the dead and surviving victims also reported various psychological and social problems. Alexander also stated, "some of these lads are stronger than before Piper. They've learned things about themselves, changed their values, some relationships became stronger. People realised they have strengths they didn't know they had. There was a lot of heroism took place."
In 2013, on the 25th anniversary of the tragedy, trade association Oil and Gas UK organized a three-day conference in Aberdeen to reflect on lessons learned from Piper Alpha and industry safety issues in general.
In process safety
The Piper Alpha disaster and the Cullen Report are milestones in the development of process safety. Its effects on the offshore oil and gas industry can be compared with those the Flixborough disaster had on the onshore chemical and petroleum process industry in the 1970s. The Cullen Report put a strong emphasis on the importance of a robust safety management system (SMS). The requirement for a safety management system to be in place was introduced in British legislation in the wake of Piper Alpha. Elements of process safety management that failed on Piper Alpha included:
Permit-to-work, and in particular the mechanism of permit handover. The whole chain of events commenced with the attempt to start up a pump which was actually under maintenance.
Company audits, which did not pick up on the systemic failings of the permit-to-work system. The Cullen Report included a recommendation to shift the regulatory regime to a greater focus on SMS audit rather than on inspection.
Contractor management. It had been the first day on the platform for the production operator, who was a contractor and was left on his own without any operating procedures.
Management of change. The platform, originally designed for oil production only, was retrofitted to handle gas. The change was not properly thought out and assessed, as demonstrated by the placement of critical gas facilities next to the unprotected control room.
Asset integrity, by way of inspection and maintenance. Safety-critical systems such as liferafts, fire pumps, or emergency lighting do not seem to have received proper attention.
In general, Piper Alpha marked a watershed moment in that it ushered in a greater focus on process safety management and on a risk-based, rather than purely prescriptive, hazard management. As a result of the tragedy, the Safety Case Regulations came into force in 1992. By late 1993, a safety case had to be submitted to the Health and Safety Executive for every platform and rig in British waters (including the exclusive economic zone). The safety case must describe and justify the design, inherent hazards and residual risk in the spirit of the ALARP (as low as reasonably practicable) principle, as well as the means of managing such residual risk. The safety case must be maintained up to date through the lifecycle of the installation.
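The ALARP principle mentioned above is often demonstrated quantitatively in safety cases by comparing the cost of a risk-reduction measure against the value of the risk reduction it buys, scaled by a "gross disproportion" factor. The sketch below illustrates that reasoning only; the value-of-preventing-a-fatality (VPF) and disproportion factor used here are illustrative assumptions, not prescribed figures.

```python
def reasonably_practicable(cost, fatalities_averted,
                           vpf=2e6, disproportion=10):
    """ALARP screening sketch: a risk-reduction measure is reasonably
    practicable unless its cost is grossly disproportionate to the
    benefit (statistical fatalities averted times the VPF)."""
    benefit = fatalities_averted * vpf
    return cost <= disproportion * benefit

# A 1 million pound upgrade expected to avert 0.1 statistical fatalities
# over the installation's life passes the screening test...
print(reasonably_practicable(1e6, 0.1))   # prints True
# ...while a 50 million pound measure buying the same risk reduction
# would be judged grossly disproportionate.
print(reasonably_practicable(50e6, 0.1))  # prints False
```

In practice such calculations form only one input to an ALARP demonstration, alongside good practice, engineering judgement, and qualitative argument.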
The safety case regime has been ascribed a measure of success in promoting safer facility design and management of offshore operations in the United Kingdom. Trade association Oil and Gas UK linked a significant fall in lost time injury frequency rate observed since 1997 to the introduction of the regime. The decrease in the number of accidental hydrocarbon release events in the British offshore oil and gas industry has also been correlated with the new regulatory approach. A study commissioned by the Health and Safety Executive found that the regime heightened awareness of risks throughout the industry and set in motion a more structured decision-making process targeting risk reduction efforts, safety management system improvements, and a better safety culture. According to another source, Piper was the catalyst for a development from an unsystematic, albeit well-meaning, collection of standards and processes to a systematized approach specific to safety.
However, some criticism of the safety case approach has also been voiced, pointing to implementation and communication problems as well as issues with the supporting safety studies. The industry's cost-cutting initiatives as well as the handling of workers' involvement in the development of safety cases have also been identified as potential factors of degradation of the safety case regime.
The safety case regime has been adopted outside the United Kingdom, both as a regulatory instrument (for example in Australia, Malaysia, and Norway, among others) and as a voluntary initiative taken by several oil companies. In the United States, the American Petroleum Institute's Recommended Practice 75 for Development of a Safety and Environmental Management Program for Outer Continental Shelf (OCS) Operations and Facilities was issued, at least in part, in response to the tragedy.
In terms of facility design, some of the Cullen Report's recommendations have become tenets for the safe design of offshore oil and gas installations:
Systematic identification and assessment of fire and explosion hazards.
Analysis of and protection against smoke and gas ingress as well as survivability against fire and explosion of a temporary refuge (usually within, and potentially extending to the entirety of, the living quarters), where the crew could muster and wait out the accident, while arrangements for emergency management and/or facility evacuation are put in place.
Analysis of escape routes and means of evacuation, with due regards to their survivability, accessibility and redundancy.
Analysis of the survivability of safety-critical systems required for emergency management, such as emergency shutdown valves (in particular those along hydrocarbon risers), primary structural elements, hydrocarbon piping and vessels, fire pumps, firewater distribution and deluge, control and radio rooms, public announcement and general alarm system, emergency sources of power, and emergency lighting, all of which failed on Piper, in addition to the aforementioned impairment of escape routes and the safe muster area.
These analyses, which are called "forthwith studies" by the Cullen Report, are now standard engineering deliverables in the design of offshore oil and gas facilities. Quantified risk assessment (QRA) also became more common, particularly in support of ALARP arguments. One effect of these studies was that a rectangular (rather than square) layout became common for new North Sea platforms, to allow for increased spacing between vulnerable areas and major hazard modules. For the same reason, bridge-linked platforms became more common to increase separation from the accommodation module. Other lessons learnt in design were the importance of blast walls in protecting safety-critical systems; the need to minimize congestion and promote natural ventilation in process areas, to decrease the chance of explosions; the need to ensure that the temporary refuge HVAC system is capable of repelling smoke and gas ingress through positive pressurization and gas-tight dampers automatically actuated by smoke and gas detectors; redundancy of critical communication systems, such as radio and public address; remote start of fire pumps; the need to optimize the location and fireproofing of riser emergency shutdown valves; and assessment of the need for subsea pipeline isolation valves, to limit the amount of hydrocarbon available for fire escalation in case of riser or riser valve failure. The resulting changes in the design philosophy of offshore facilities have therefore been towards an inherently safer design (ISD) concept.
In the same spirit, companies also sought to decrease the number of operators needed to run offshore facilities, in an attempt to reduce human exposure to major accidents. The first totally unoccupied (normally unmanned) installation, in the Amethyst gas field, was commissioned in September 1990. There is debate as to whether unmanned facilities are actually beneficial in terms of decreasing risk to the workers, given the requirement to transfer personnel to and from the platform (for inspection and maintenance activities), which itself carries risks associated with helicopter flights, boating, and boat-to-platform personnel transfer.
Memorials
A condolence banner was gifted in 1989 by the Victorian Trades Hall Council of Australia, and is now in the Aberdeen Maritime Museum.
On 6 July 1991, the third anniversary of the disaster, a memorial sculpture was unveiled by the Queen Mother in the Rose Garden within Hazlehead Park in Aberdeen. In it are three figures of oil workers, one facing west and representing the physical nature of offshore activities, one facing east and representing eternal movement and youth and the central one, facing north and whose left hand holds a pool of oil sculpted in the shape of an unwinding spiral. It was created by Sue Jane Taylor, a Scottish sculptor who based much of her work around what she saw in and around the oil industry and had actually visited Piper Alpha in 1987. One of the survivors was used as a model for one of the figures. Also in 1991, Scottish composer James MacMillan wrote Tuireadh, a piece for clarinet and string quartet, as a musical complement to the memorial sculpture.
A memorial stone was erected in 1992 in Strathclyde Country Park to commemorate the men lost from that region.
The Piper Alpha Window was created in 1994 by congregation member Jennifer Jane Bayliss for Ferryhill Church in Aberdeen.
The Oil Chapel in the Kirk of St Nicholas, Aberdeen was dedicated in 1990 to mark 25 years of North Sea oil. The chapel hosts a book of remembrance to all those who have died offshore in British waters.
In the media
The incident was featured in the 1990 STV documentary television series Rescue, about the RAF Search and Rescue Force at RAF Lossiemouth, in the episode "Piper Alpha". Coincidentally, the film crew had been documenting the rescue teams at Lossiemouth at the time of accident and were able to accompany the helicopter during the Piper Alpha disaster, filming events as they happened from helicopter Rescue 138.
The disaster was featured in the first episode of the BBC television series Disaster, aired in January 1997.
In 1998, on the occasion of the 10th anniversary, Prospero Productions of Australia released the documentary Paying for the Piper. It was written and produced by Ed Punchard, who was one of the divers that managed to escape the inferno. The film follows Punchard's return to Scotland to confront his past and culminates in a meeting with Occidental officers.
In 2004, National Geographic featured this incident in its Seconds from Disaster documentary as the episode "Explosion in the North Sea".
On 6 July 2008, BBC Radio 3 broadcast a 90-minute play by Stephen Phelps entitled Piper Alpha. Based on the actual evidence given to the Cullen Inquiry, the events of that night were retold 20 years to the minute after they happened.
Also in 2008, to mark the 20th anniversary of the disaster, a stage play, Lest We Forget, was commissioned by Aberdeen Performing Arts and written by playwright Mike Gibb. It was performed in Aberdeen in the week leading up to the anniversary, with the final performance on 6 July 2008, the 20th anniversary.
In 2011, Lee Hutcheon produced and directed The Men of Piper Alpha, a documentary featuring several interviews with survivors.
In 2013, on the 25th anniversary of the disaster, the video Remembering Piper: The Night That Changed Our World was released by British offshore oil and gas industry initiative Step Change in Safety. It incorporates passages from the BBC radio play and artwork by Sue Jane Taylor.
The documentary film Fire in the Night was also released in 2013. It was made by Berriff McGinty Films and co-produced by STV. Producer and cameraman Paul Berriff had been with Sea King Rescue 138 during the filming of the Rescue series.
In 2017, the episode "Oil Rig Explosion" of the Smithsonian Channel documentary series Make It Out Alive! focused on the disaster, with interviews with, among others, Geoff Bollands, Iain Letham, Charles Haffey, and Paul Berriff.
In 2018, a special edition of the Monopoly board game was released to commemorate the 30th anniversary of the tragedy. It was sponsored by a number of companies working in the North Sea offshore oil and gas industry, including majors such as Shell, whose Brent platforms substituted for the four train station squares. The game release was part of a fundraiser for the maintenance costs of the memorial in Hazlehead Park. The box lid prominently featured Piper Alpha imagery and a "Piper Alpha 30th Anniversary" title, which led the game to be referred to as the "Piper Alpha Monopoly". The reactions of some of the survivors and victims' families were negative, calling the game "callous" and a "sick joke". The game was re-released with a different design to clarify that it was really an oil-and-gas (and not a Piper Alpha-themed) Monopoly edition.
Also in 2018, the disaster was featured on the History documentary series James Nesbitt's Disasters That Changed Britain. Testimonials were heard from survivors and relatives of victims.
In 2023, to mark the 35th anniversary, writer Mike Gibb adapted his stage play as a novel titled I Had Never Heard a City Cry Before, a quote from the script.
See also
Alexander L. Kielland
Ocean Ranger
Mumbai High disaster
Deepwater Horizon explosion
Explanatory notes
References
Bibliography
Volume 1 (archived from the original on 2 May 2007, retrieved 18 December 2005).
Volume 2 (archived from the original on 8 February 2007, retrieved 18 December 2005).
Volume 3 (archived from the original on 25 August 2007, retrieved 18 December 2005).
Volume 4 (archived from the original on 25 August 2007, retrieved 18 December 2005).
Volume 5 (archived from the original on 25 August 2007, retrieved 18 December 2005).
Volume 6 (archived from the original on 3 December 2006, retrieved 18 December 2005).
Volume 1 (archived from the original on 6 November 2023, retrieved 20 December 2023).
Volume 2 (archived from the original on 16 December 2023, retrieved 20 December 2023).
Further reading
External links
Links (archived) to all the opinions of the Lords of the Court of Session at first instance and in reclaiming motions of the civil proceedings
"On This Day" (archived) – BBC News article (6 July 1988)
"Piper Alpha Case History" (archived) by the Center for Chemical Process Safety of AIChE
"Piper 25 Conference – Steve Rae" – presentation by a survivor, video on YouTube
"Piper Alpha 25th Anniversary Rededication and Act of Remembrance" from Offshore Energies UK's channel on Vimeo
"Piper Alpha Disaster" (archived) on Education Scotland's website
Collapsed oil platforms
Oil platforms off Scotland
Natural gas platforms
1988 disasters in the United Kingdom
1988 in Scotland
Gas explosions in the United Kingdom
Explosions in Scotland
History of Aberdeen
History of the North Sea
July 1988 events in the United Kingdom
North Sea energy
1988 industrial disasters
Industrial fires and explosions in the United Kingdom
Explosions in 1988
Oil platform disasters
Maritime incidents involving engineering failures
Maritime incidents in 1988
Public inquiries in Scotland
1976 establishments in Scotland
1988 disestablishments in Scotland
Building and structure collapses in the United Kingdom
Building and structure collapses caused by fire | Piper Alpha | [
"Engineering"
] | 9,634 | [
"Structural engineering",
"Natural gas platforms"
] |
612,837 | https://en.wikipedia.org/wiki/Qin%20Jiushao | Qin Jiushao (, ca. 1202–1261), courtesy name Daogu (道古), was a Chinese mathematician, meteorologist, inventor, politician, and writer. He is credited for discovering Horner's method as well as inventing Tianchi basins, a type of rain gauge instrument used to gather meteorological data.
Biography
Although Qin Jiushao was born in Ziyang, Sichuan, his family came from Shandong province. He is regarded as one of the greatest mathematicians in Chinese history. This is especially remarkable because Qin did not devote his life to mathematics. He was accomplished in many other fields and held a series of bureaucratic positions in several Chinese provinces.
Qin wrote Shùshū Jiǔzhāng ("Mathematical Treatise in Nine Sections") in 1247 CE. This treatise covered a variety of topics including indeterminate equations and the numerical solution of certain polynomial equations up to 10th order, as well as discussions on military matters and surveying. In the treatise Qin included a general form of the Chinese remainder theorem, using the Da yan shu (大衍术) algorithm to solve it. In geometry, he discovered "Qin Jiushao's formula" for finding the area of a triangle from the given lengths of its three sides. This formula is the same as Heron's formula, proved by Heron of Alexandria about 60 CE, though knowledge of the formula may go back to Archimedes.
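The kind of remainder problem solved by the Da yan shu, and the equivalence of Qin's triangle-area formula with Heron's, can be illustrated with a short Python sketch (the function names and example values here are illustrative, not taken from Qin's treatise):

```python
from math import sqrt

def qin_area(a, b, c):
    """Qin Jiushao's formula for the area of a triangle with sides a, b, c."""
    return sqrt(0.25 * (a * a * c * c - ((a * a + c * c - b * b) / 2) ** 2))

def heron_area(a, b, c):
    """Heron's formula, algebraically equivalent to Qin's."""
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli, the kind of
    remainder problem handled by the Da yan shu."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

print(qin_area(3, 4, 5))          # 6.0 (right triangle)
print(heron_area(3, 4, 5))        # 6.0
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```

Expanding Qin's expression algebraically recovers Heron's formula, so the two functions agree for any valid triangle.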
As precipitation was important to agriculture and food production, Qin developed precipitation gauges that were widely used in 1247, during the Mongol Empire/Southern Song dynasty period, to gather meteorological data; Qin later recorded the application of rainfall measurements in the mathematical treatise. The book also discusses the use of large snow gauges made from bamboo, situated in mountain passes and uplands, which are speculated to be the first reference to snow measurement.
Qin recorded the earliest explanation of how Chinese calendar experts calculated astronomical data according to the timing of the winter solstice. Among his accomplishments are the introduction of techniques for solving certain types of algebraic equations using a numerical algorithm (equivalent to the 19th-century Horner's method) and for finding sums of arithmetic series. He also introduced the use of the zero symbol into written Chinese mathematics.
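The numerical algorithm equivalent to Horner's method reduces polynomial evaluation to repeated multiply-and-add over the coefficients; a minimal Python sketch (the example polynomial is chosen for illustration):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, with coefficients listed from the
    highest degree down, by repeated multiply-and-add."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # 5
```

For a degree-n polynomial this uses n multiplications and n additions, instead of computing each power of x separately.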
After he completed his work on mathematics, he ventured into politics. As a government official he was boastful, corrupt, and was accused of bribery and of poisoning his enemies. As a result, he was relieved of his duties multiple times. Yet in spite of these problems he managed to become very wealthy (Katz, 1993).
Main work
Shushu Jiuzhang (Mathematical Treatise in Nine Sections) (1248)
References
Bibliography
Guo, Shuchun. Encyclopedia of China (Mathematics Volume), 1st ed.
Qin Jiushao, . (Chinese History Timeline), 2007.
Ulrich Libbrecht: Chinese Mathematics in the Thirteenth Century (The Shu-Shu-Chiu-Chang of Chin Chiu shao) Dover Publication
Victor J. Katz "A history of mathematics: an introduction." New York (1993).
External links
Simon Fraser University biography for "Qin Jiushao"
1200s births
1261 deaths
13th-century Chinese mathematicians
13th-century Chinese writers
Chinese inventors
Chinese meteorologists
Medieval Chinese mathematicians
Number theorists
Politicians from Ziyang
Scientists from Sichuan
Song dynasty government officials
Song dynasty science writers
Writers from Ziyang | Qin Jiushao | [
"Mathematics"
] | 676 | [
"Number theorists",
"Number theory"
] |
612,849 | https://en.wikipedia.org/wiki/IBiquity | iBiquity Digital Corporation was a company formed by the merger of USA Digital Radio and Lucent Digital Radio. Based in Columbia, Maryland, with additional offices in Basking Ridge, New Jersey, Los Angeles, California, and Auburn Hills, Michigan, iBiquity was a privately held intellectual properties company with investors in the technology, broadcasting, manufacturing, media, and financial industries.
About
IBOC can operate on both AM band and FM band broadcasts, either in a digital-only mode or in a "hybrid" digital+analog mode. Stations can split the digital bandwidth to carry multiple audio program streams (called HD2 or HD3 multicast channels) as well as on-screen text data such as song title and artist, traffic, and weather information. Nearly 2,000 stations in the US broadcast with this system. The technology is marketed under the trademark HD Radio. It is the only technology approved by the Federal Communications Commission for digital AM and FM broadcasting in the United States. Due in large part to its ability to deliver digital audio services while leveraging the existing analog spectrum (by broadcasting digital information on the sidebands), commercial implementation of the technology is gaining momentum in several countries, including Canada, Mexico, and the Philippines. Testing and demonstrations of the system are underway in China, Colombia, Germany, Indonesia, Jamaica, New Zealand, Poland, Switzerland, Thailand, and Ukraine, among other countries.

According to iBiquity Digital, holder of the HD Radio trademark, the "HD" in "HD Radio" does not stand for "High Definition" or "Hybrid Digital"; it is simply part of the trademark and does not have any meaning on its own. On September 2, 2015, iBiquity announced that DTS was purchasing it for US$172 million, bringing the HD Radio technology under the same banner as DTS's eponymous theater surround sound systems.
References
External links
HD Radio
Defunct electronics companies of the United States
Digital radio
Companies based in Columbia, Maryland
Somerset County, New Jersey
Oakland County, Michigan
Broadcast engineering
HD Radio | IBiquity | [
"Engineering"
] | 420 | [
"Broadcast engineering",
"Electronic engineering"
] |
612,874 | https://en.wikipedia.org/wiki/Acylation | In chemistry, acylation is a broad class of chemical reactions in which an acyl group () is added to a substrate. The compound providing the acyl group is called the acylating agent. The substrate to be acylated and the product include the following:
alcohols, esters
amines, amides
arenes or alkenes, ketones
A particularly common type of acylation is acetylation, the addition of the acetyl group. Closely related to acylation is formylation, which employs sources of "HCO+" in place of "RCO+".
Examples
Because they form a strong electrophile when treated with Lewis acids, acyl halides are commonly used as acylating agents. For example, Friedel–Crafts acylation uses acetyl chloride (CH3COCl) as the agent and aluminum chloride (AlCl3) as a catalyst to add an acetyl group to benzene, yielding acetophenone and hydrogen chloride:

C6H6 + CH3COCl → C6H5COCH3 + HCl
This reaction is an example of electrophilic aromatic substitution.
Acyl halides and acid anhydrides of carboxylic acids are also common acylating agents. In some cases, active esters exhibit comparable reactivity. All react with amines to form amides and with alcohols to form esters by nucleophilic acyl substitution.
Acylation can be used to prevent rearrangement reactions that would normally occur in alkylation. To do this an acylation reaction is performed, then the carbonyl is removed by Clemmensen reduction or a similar process.
Acylation in biology
Protein acylation is the post-translational modification of proteins via the attachment of functional groups through acyl linkages. Protein acylation has been observed as a mechanism controlling biological signaling. One prominent type is fatty acylation, the addition of fatty acids to particular amino acids (e.g. myristoylation, palmitoylation or palmitoleoylation). Different types of fatty acids engage in global protein acylation. Palmitoleoylation is an acylation type where the monounsaturated fatty acid palmitoleic acid is covalently attached to serine or threonine residues of proteins. Palmitoleoylation appears to play a significant role in the trafficking, targeting, and function of Wnt proteins.
See also
Hydroacylation
Acetyl
Ketene
References
Organic reactions | Acylation | [
"Chemistry"
] | 494 | [
"Organic reactions"
] |
612,913 | https://en.wikipedia.org/wiki/Aminoacylation | Aminoacylation is the process of adding an aminoacyl group to a compound.
See also
Acylation
tRNA aminoacylation
Transfer RNA-like structures
References
Organic reactions | Aminoacylation | [
"Chemistry"
] | 37 | [
"Chemical reaction stubs",
"Organic reactions"
] |
612,928 | https://en.wikipedia.org/wiki/PuTTY | PuTTY () is a free and open-source terminal emulator, serial console and network file transfer application. It supports several network protocols, including SCP, SSH, Telnet, rlogin, and raw socket connection. It can also connect to a serial port. The name "PuTTY" has no official meaning.
PuTTY was originally written for Microsoft Windows, but it has been ported to various other operating systems. Official ports are available for some Unix-like platforms, with work-in-progress ports to other platforms, and unofficial ports have been contributed to platforms such as Symbian, Windows Mobile and Windows Phone.
PuTTY was written and is maintained primarily by Simon Tatham, a British programmer.
Features
PuTTY supports many variations on the secure remote terminal, and provides user control over the SSH encryption key and protocol version, alternate ciphers such as AES, 3DES, RC4, Blowfish, DES, and public-key authentication. PuTTY uses its own format of key files – PPK (protected by Message Authentication Code). PuTTY supports SSO through GSSAPI, including user provided GSSAPI DLLs. It also can emulate control sequences from xterm, VT220, VT102 or ECMA-48 terminal emulation, and allows local, remote, or dynamic port forwarding with SSH (including X11 forwarding). The network communication layer supports IPv6, and the SSH protocol supports the zlib@openssh.com delayed compression scheme. It can also be used with local serial port connections.
PuTTY comes bundled with command-line SCP and SFTP clients, called "pscp" and "psftp" respectively, and plink, a command-line connection tool, used for non-interactive sessions.
PuTTY does not support session tabs directly, but many wrappers are available that do.
History
PuTTY development began late in 1998, and was a usable SSH-2 client by October 2000.
Components
PuTTY consists of several components:
PuTTY – the Telnet, rlogin, and SSH client itself, which can also connect to a serial port
PSCP – an SCP client, i.e. command-line secure file copy; can also use SFTP to perform transfers
PSFTP – an SFTP client, i.e. general file transfer sessions much like FTP
PuTTYtel – a Telnet-only client
Plink – a command-line interface to the PuTTY back ends, usually used for SSH tunneling
Pageant – an SSH authentication agent for PuTTY, PSCP and Plink
PuTTYgen – an RSA, DSA, ECDSA and EdDSA key generation utility
pterm – (Unix version only) an X11 client which supports the same terminal emulation as PuTTY
See also
Comparison of SSH clients
Tera Term
mintty
WinSCP
minicom
References
External links
1998 software
Cross-platform free software
Cryptographic software
Free communication software
Free software programmed in C
Free terminal emulators
Portable software
Secure Shell
SSH File Transfer Protocol clients
Software using the MIT license
Symbian software
Telnet | PuTTY | [
"Mathematics"
] | 641 | [
"Cryptographic software",
"Mathematical software"
] |
612,995 | https://en.wikipedia.org/wiki/Synroc | Synroc, a portmanteau of "synthetic rock", is a means of safely storing radioactive waste. It was pioneered in 1978 by a team led by Professor Ted Ringwood at the Australian National University, with further research undertaken in collaboration with ANSTO at research laboratories in Lucas Heights.
Manufacture
Synroc is composed of three titanate minerals – hollandite, zirconolite and perovskite – plus rutile and a small amount of metal alloy. These are combined into a slurry, to which is added a portion of high-level liquid nuclear waste. The mixture is dried and calcined to produce a powder.
The powder is then compressed in a process known as hot isostatic pressing (HIP), in which it is compacted at elevated temperature within a bellows-like stainless steel container.
The result is a cylinder of hard, dense, black synthetic rock.
Comparisons
If stored in liquid form, nuclear waste can enter the environment and the waterways, causing widespread damage. Storing it as a solid greatly minimises these risks.
Unlike borosilicate glass, which is amorphous, Synroc is a ceramic that incorporates the radioactive waste into its crystal structure. Naturally occurring rocks can store radioactive materials for long periods; the aim of Synroc is to imitate this by converting liquid waste into a crystalline structure. Synroc-based glass composite materials (GCM) combine the process and chemical flexibility of glass with the superior chemical durability of ceramics, and can achieve higher waste loadings.
Different types of Synroc waste forms (ratios of component minerals, specific HIP pressures and temperatures, etc.) can be developed for the immobilisation of different types of waste. Only zirconolite and perovskite can accommodate actinides. The exact proportions of the main phases vary depending on the HLW composition. For example, Synroc-C is designed to contain about 20% by weight of calcined HLW and it consists of approximately (% by weight): 30 – hollandite; 30 – zirconolite; 20 – perovskite and 20 – Ti-oxides and other phases. Immobilising weapons-grade plutonium or transuranium wastes instead of bulk HLW may essentially change the Synroc phase composition to a primarily zirconolite-based or pyrochlore-based ceramic. The starting precursor for Synroc-C fabrication contains ~57% by weight TiO2 and 2% by weight metallic Ti. The metallic titanium provides reducing conditions during ceramic synthesis and helps decrease volatilisation of radioactive cesium.
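As a simple check of the round figures quoted above, the Synroc-C phase split can be restated as a mass balance per kilogram of waste form (a toy illustration of the article's approximate numbers, not a process specification):

```python
# Approximate Synroc-C phase fractions by weight, as quoted above.
phases = {
    "hollandite": 0.30,
    "zirconolite": 0.30,
    "perovskite": 0.20,
    "Ti-oxides and other phases": 0.20,
}
waste_loading = 0.20  # ~20 wt% calcined HLW incorporated in the ceramic

# Convert fractions to grams per kilogram of finished waste form.
grams_per_kg = {name: frac * 1000 for name, frac in phases.items()}
for name, grams in grams_per_kg.items():
    print(f"{name}: {grams:.0f} g per kg of Synroc-C")
print(f"calcined HLW incorporated: {waste_loading * 1000:.0f} g per kg")
```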
Synroc is not a disposal method. Synroc still has to be stored. Even though the waste is held in a solid lattice and prevented from spreading, it is still radioactive and can have a negative effect on its surroundings. Synroc is a superior method of nuclear waste storage because it minimises leaching.
Production use
In 1997 Synroc was tested with real HLW using technology developed jointly by ANSTO and the US DoE's Argonne National Laboratory.
In January 2010, the United States Department of Energy selected hot isostatic pressing (HIP) for processing waste at the Idaho National Laboratory.
In April 2008, the Battelle Energy Alliance signed a contract with ANSTO to demonstrate the benefits of Synroc in processing waste managed by Batelle as part of its contract to manage the Idaho National Laboratory.
Synroc was chosen in April 2005 for a multimillion-dollar "demonstration" contract to eliminate of plutonium-contaminated waste at British Nuclear Fuel's Sellafield plant, on the northwest coast of England.
References
External links
Synroc Wasteform (from World Nuclear Association)
Canberra Observer report on 2005 contract
ANSTO
The Synroc Website
Radioactive waste
Synthetic materials | Synroc | [
"Chemistry",
"Technology"
] | 799 | [
"Synthetic materials",
"Environmental impact of nuclear power",
"Hazardous waste",
"Radioactivity",
"Chemical synthesis",
"Radioactive waste"
] |
613,092 | https://en.wikipedia.org/wiki/Aminoacyl%20tRNA%20synthetase | An aminoacyl-tRNA synthetase (aaRS or ARS), also called tRNA-ligase, is an enzyme that attaches the appropriate amino acid onto its corresponding tRNA. It does so by catalyzing the transesterification of a specific cognate amino acid or its precursor to one of all its compatible cognate tRNAs to form an aminoacyl-tRNA. In humans, the 20 different types of aa-tRNA are made by the 20 different aminoacyl-tRNA synthetases, one for each amino acid of the genetic code.
This is sometimes called "charging" or "loading" the tRNA with an amino acid. Once the tRNA is charged, a ribosome can transfer the amino acid from the tRNA onto a growing peptide, according to the genetic code. Aminoacyl tRNA therefore plays an important role in RNA translation, the expression of genes to create proteins.
Mechanism
The synthetase first binds ATP and the corresponding amino acid (or its precursor) to form an aminoacyl-adenylate, releasing inorganic pyrophosphate (PPi). The adenylate-aaRS complex then binds the appropriate tRNA molecule's D arm, and the amino acid is transferred from the aa-AMP to either the 2'- or the 3'-OH of the last tRNA nucleotide (A76) at the 3'-end.
The mechanism can be summarized in the following reaction series:
Amino Acid + ATP → Aminoacyl-AMP + PPi
Aminoacyl-AMP + tRNA → Aminoacyl-tRNA + AMP
Summing the reactions, the highly exergonic overall reaction is as follows:
Amino Acid + tRNA + ATP → Aminoacyl-tRNA + AMP + PPi
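The stoichiometry of the two partial reactions and their sum can be checked with a toy bookkeeping sketch in Python (species names are plain labels; no chemistry or energetics is modeled):

```python
from collections import Counter

def react(pool, consumed, produced):
    """Consume one unit of each reactant and add one unit of each product."""
    for s in consumed:
        assert pool[s] > 0, f"missing reactant: {s}"
        pool[s] -= 1
    for s in produced:
        pool[s] += 1

pool = Counter({"AminoAcid": 1, "ATP": 1, "tRNA": 1})

# Step 1: amino acid activation, releasing inorganic pyrophosphate
react(pool, ["AminoAcid", "ATP"], ["Aminoacyl-AMP", "PPi"])
# Step 2: transfer of the aminoacyl group onto the tRNA, releasing AMP
react(pool, ["Aminoacyl-AMP", "tRNA"], ["Aminoacyl-tRNA", "AMP"])

# Net result matches the summed reaction: Aminoacyl-tRNA + AMP + PPi
print(+pool)  # unary + drops zero-count species
```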
Some synthetases also mediate an editing reaction to ensure high fidelity of tRNA charging. If the tRNA is found to be improperly charged, the aminoacyl-tRNA bond is hydrolyzed. This can happen when two amino acids have different properties even if they have similar shapes—as is the case with valine and threonine.
The accuracy of aminoacyl-tRNA synthetase is so high that it is often paired with the word "superspecificity" when it is compared to other enzymes involved in metabolism. Although not all synthetases have a domain with the sole purpose of editing, they make up for it by having specific binding and activation of their affiliated amino acids. Another contribution to the accuracy of these synthetases is the ratio of concentrations of aminoacyl-tRNA synthetase and its cognate tRNA. Since tRNA synthetase improperly acylates the tRNA when the synthetase is overproduced, a limit must exist on the levels of aaRSs and tRNAs in vivo.
Classes
There are two classes of aminoacyl tRNA synthetase, each composed of ten enzymes:
Class I has two highly conserved sequence motifs. It aminoacylates at the 2'-OH of a terminal adenosine nucleotide on tRNA, and it is usually monomeric or dimeric (one or two subunits, respectively).
Class II has three highly conserved sequence motifs. It aminoacylates at the 3'-OH of a terminal adenosine on tRNA, and is usually dimeric or tetrameric (two or four subunits, respectively). Although phenylalanine-tRNA synthetase is class II, it aminoacylates at the 2'-OH.
The amino acids are attached to the hydroxyl (-OH) group of the adenosine via the carboxyl (-COOH) group.
Regardless of where the aminoacyl is initially attached to the nucleotide, the 2'-O-aminoacyl-tRNA will ultimately migrate to the 3' position via transesterification.
Bacterial aminoacyl-tRNA synthetases can be grouped as follows:
Amino acids which use class II aaRS seem to be evolutionarily older.
Structures
Both classes of aminoacyl-tRNA synthetases are multidomain proteins. In a typical scenario, an aaRS consists of a catalytic domain (where both the above reactions take place) and an anticodon binding domain (which interacts mostly with the anticodon region of the tRNA). Transfer-RNAs for different amino acids differ not only in their anticodon but also at other points, giving them slightly different overall configurations. The aminoacyl-tRNA synthetases recognize the correct tRNAs primarily through their overall configuration, not just through their anticodon. In addition, some aaRSs have additional RNA binding domains and editing domains that cleave incorrectly paired aminoacyl-tRNA molecules.
The catalytic domains of all the aaRSs of a given class are found to be homologous to one another, whereas class I and class II aaRSs are unrelated to one another. The class I aaRSs feature a cytidylyltransferase-like Rossmann fold seen in proteins like glycerol-3-phosphate cytidylyltransferase, nicotinamide nucleotide adenylyltransferase and archaeal FAD synthase, whereas the class II aaRSs have a unique fold related to biotin and lipoate ligases.
The alpha helical anticodon binding domain of arginyl-, glycyl- and cysteinyl-tRNA synthetases is known as the DALR domain after characteristic conserved amino acids.
Aminoacyl-tRNA synthetases have been studied kinetically, showing that Mg2+ ions play an active catalytic role; aaRSs therefore have a degree of magnesium dependence. Increasing the Mg2+ concentration leads to an increase in the equilibrium constants of the aminoacyl-tRNA synthetases' reactions. Although this trend is seen in both class I and class II synthetases, the magnesium dependence of the two classes is very distinct: class II synthetases require two or (more frequently) three Mg2+ ions, while class I requires only one Mg2+ ion.
Besides their lack of overall sequence and structure similarity, class I and class II synthetases feature different ATP recognition mechanisms. While class I binds via interactions mediated by backbone hydrogen bonds, class II uses a pair of arginine residues to establish salt bridges to its ATP ligand. This oppositional implementation is manifested in two structural motifs, the Backbone Brackets and Arginine Tweezers, which are observable in all class I and class II structures, respectively. The high structural conservation of these motifs suggests that they must have been present since ancient times.
Evolution
Most of the aaRSs of a given specificity are evolutionarily closer to one another than to aaRSs of another specificity. However, AsnRS and GlnRS group within AspRS and GluRS, respectively. Most of the aaRSs of a given specificity also belong to a single class. However, there are two distinct versions of the LysRS - one belonging to the class I family and the other belonging to the class II family.
The molecular phylogenies of aaRSs are often not consistent with accepted organismal phylogenies. That is, they violate the so-called canonical phylogenetic pattern shown by most other enzymes for the three domains of life - Archaea, Bacteria, and Eukarya. Furthermore, the phylogenies inferred for aaRSs of different amino acids often do not agree with one another. In addition, aaRS paralogs within the same species show a high degree of divergence between them. These are clear indications that horizontal transfer has occurred several times during the evolutionary history of aaRSs.
A widespread belief in the evolutionary stability of this superfamily, meaning that every organism has all the aaRSs for their corresponding amino acids, is misconceived. A large-scale genomic analysis on ~2500 prokaryotic genomes showed that many of them miss one or more aaRS genes whereas many genomes have 1 or more paralogs. AlaRS, GlyRS, LeuRS, IleRS and ValRS are the most evolutionarily stable members of the family. GluRS, LysRS and CysRS often have paralogs, whereas AsnRS, GlnRS, PylRS and SepRS are often absent from many genomes.
With the exception of AlaRS, it has been discovered that 19 out of the 20 human aaRSs have added at least one new domain or motif. These new domains and motifs vary in function and are observed in various forms of life. A common novel function within human aaRSs is providing additional regulation of biological processes. There exists a theory that the increasing number of aaRSs that add domains is due to the continuous evolution of higher organisms with more complex and efficient building blocks and biological mechanisms. One key piece of evidence for this theory is that after a new domain is added to an aaRS, the domain becomes fully integrated, and its functionality is conserved from that point on.
As genetic efficiency evolved in higher organisms, 13 new domains with no obvious association with the catalytic activity of aaRSs genes have been added.
Application in biotechnology
In some of the aminoacyl tRNA synthetases, the cavity that holds the amino acid can be mutated and modified to carry unnatural amino acids synthesized in the lab, and to attach them to specific tRNAs. This expands the genetic code, beyond the twenty canonical amino acids found in nature, to include an unnatural amino acid as well. The unnatural amino acid is coded by a nonsense (TAG, TGA, TAA) triplet, a quadruplet codon, or in some cases a redundant rare codon. The organism that expresses the mutant synthetase can then be genetically programmed to incorporate the unnatural amino acid into any desired position in any protein of interest, allowing biochemists or structural biologists to probe or change the protein's function. For instance, one can start with the gene for a protein that binds a certain sequence of DNA, and, by directing an unnatural amino acid with a reactive side-chain into the binding site, create a new protein that cuts the DNA at the target-sequence, rather than binding it.
By mutating aminoacyl tRNA synthetases, chemists have expanded the genetic codes of various organisms to include lab-synthesized amino acids with all kinds of useful properties: photoreactive, metal-chelating, xenon-chelating, crosslinking, spin-resonant, fluorescent, biotinylated, and redox-active amino acids. Another use is introducing amino acids bearing reactive functional groups for chemically modifying the target protein.
Mutations of specific aminoacyl-tRNA synthetases have been correlated with the causation of certain diseases (such as neuronal pathologies, cancer, disturbed metabolic conditions, and autoimmune disorders). Charcot-Marie-Tooth (CMT) disease is the most frequent heritable disorder of the peripheral nervous system (a neuronal disease) and is caused by heritable mutations in glycyl-tRNA synthetase and tyrosyl-tRNA synthetase. Diabetes, a metabolic disease, induces oxidative stress, which triggers a build-up of mitochondrial tRNA mutations. It has also been discovered that tRNA synthetases may be partially involved in the etiology of cancer: a high level of expression or modification of aaRSs has been observed within a range of cancers. A common outcome of aaRS mutations is a disturbance of dimer shape/formation, which has a direct relationship with enzyme function. These correlations between aaRSs and certain diseases have opened a new door to synthesizing therapeutics.
Noncatalytic domains
The novel domain additions to aaRS genes are accretive and progressive up the Tree of Life. The strong evolutionary pressure for these small non-catalytic protein domains suggested their importance. Findings beginning in 1999 and later revealed a previously unrecognized layer of biology: these proteins control gene expression within the cell of origin, and when released exert homeostatic and developmental control in specific human cell types, tissues and organs during adult or fetal development or both, including pathways associated with angiogenesis, inflammation, the immune response, the mechanistic target of rapamycin (mTOR) signalling, apoptosis, tumorigenesis, and interferon gamma (IFN-γ) and p53 signalling.
Substrate depletion
In 2022, it was discovered that aminoacyl-tRNA synthetases may incorporate alternative amino acids during shortages of their precursors. In particular, tryptophanyl-tRNA synthetase (WARS1) will incorporate phenylalanine during tryptophan depletion, essentially inducing a W>F codon reassignment. Depletion of the other substrate of aminoacyl-tRNA synthetases, the cognate tRNA, may be relevant to certain diseases, e.g. Charcot–Marie–Tooth disease. It was shown that CMT-mutant glycyl-tRNA synthetase variants are still able to bind tRNA-Gly but fail to release it, leading to depletion of the cellular pool of glycyl-tRNA-Gly, which in turn results in stalling of the ribosome on glycine codons during mRNA translation.
Clinical
Mutations in the mitochondrial enzyme have been associated with a number of genetic disorders including Leigh syndrome, West syndrome and CAGSSS (cataracts, growth hormone deficiency, sensory neuropathy, sensorineural hearing loss and skeletal dysplasia syndrome).
Prediction servers
ICAARS: B. Pawar, and GPS Raghava (2010) Prediction and classification of aminoacyl tRNA synthetases using PROSITE domains. BMC Genomics 2010, 11:507
MARSpred:
Prokaryotic AARS database:
See also
TARS (gene)
AARS2 (gene)
References
External links
EC 6.1
Protein biosynthesis | Aminoacyl tRNA synthetase | [
"Chemistry"
] | 2,968 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
613,103 | https://en.wikipedia.org/wiki/United%20States%20National%20Radio%20Quiet%20Zone | The National Radio Quiet Zone (NRQZ) is a large area of land in the United States designated as a radio quiet zone, in which radio transmissions are restricted by law to facilitate scientific research and the gathering of military intelligence. About half of the zone is located in the Blue Ridge Mountains of west-central Virginia while the other half is in the Allegheny Mountains of east-central West Virginia; a small part of the zone is in the southernmost tip of the Maryland panhandle.
Location
The Quiet Zone is an approximate rectangle of land, roughly 107 miles (172 km) along the north edge, 110 miles (177 km) along the south edge, and 121 miles (195 km) along the east and west edges, comprising approximately 13,000 square miles (34,000 km2). It straddles the borders of Virginia and West Virginia, and also includes a small part of Maryland. The NRQZ is centered between the Green Bank Observatory in Green Bank, West Virginia, and Sugar Grove Station in Sugar Grove, West Virginia. It includes all land with latitudes between 37° 30′ 0.4″ N and 39° 15′ 0.4″ N, and longitudes between 78° 29′ 59.0″ W and 80° 29′ 59.2″ W.
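Because the zone is defined by simple latitude/longitude bounds, a point-in-rectangle test, plus a rough spherical-Earth estimate of the rectangle's size, is easy to sketch. The coordinates below are the bounds quoted above, rounded to four decimal places; the size estimate is only approximate:

```python
import math

# NRQZ bounding box in decimal degrees (from the coordinates quoted above).
LAT_S, LAT_N = 37.5001, 39.2501      # 37° 30′ 0.4″ N .. 39° 15′ 0.4″ N
LON_W, LON_E = -80.4998, -78.4997    # 80° 29′ 59.2″ W .. 78° 29′ 59.0″ W

def in_nrqz(lat, lon):
    """True if (lat, lon) falls inside the NRQZ bounding rectangle."""
    return LAT_S <= lat <= LAT_N and LON_W <= lon <= LON_E

# Rough size of the rectangle, treating the Earth as a sphere.
MILES_PER_DEG = 69.09  # approximate length of one degree of latitude
ns_miles = (LAT_N - LAT_S) * MILES_PER_DEG
mid_lat = math.radians((LAT_N + LAT_S) / 2)
ew_miles = (LON_E - LON_W) * MILES_PER_DEG * math.cos(mid_lat)

print(in_nrqz(38.433, -79.840))          # Green Bank Observatory -> True
print(round(ns_miles), round(ew_miles))  # -> 121 108 (miles, roughly)
```

The east–west extent at the rectangle's mid-latitude comes out to roughly 108 miles, consistent with the overall area of about 13,000 square miles.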
Restrictions
Most broadcast transmitters in the central area of the Quiet Zone are required to operate at reduced power and use directional antennas. This makes cable and satellite essential for acceptable television in much of the region. Restrictions of transmissions are strictest within ten miles (16 km) of the Green Bank and Sugar Grove facilities, where most omnidirectional and high-power transmissions are prohibited.
Not all radio transmissions are prohibited in the Quiet Zone. For example, emergency service (police, fire, and ambulance) radios and CB radios are permitted. However, owners of high-power transmitters, including television stations in the Harrisonburg–Staunton and Charlottesville markets, must coordinate their operations with the Green Bank Observatory. The only broadcast radio stations in the core of the Quiet Zone are part of the Allegheny Mountain Radio network, with one daytimer AM station in Frost, West Virginia, ten miles from the observatory, and low-powered FM stations in Monterey, Virginia and Marlinton, West Virginia. Exceptions to restrictions are usually determined case by case, with preference given to public safety concerns, such as for remote alarm systems, repeaters for emergency services, and NOAA Weather Radio.
The most severe restrictions to the general public are imposed within a 20-mile (32 km) radius of the Green Bank Observatory. The Observatory polices the area actively for devices emitting excessive electromagnetic radiation such as microwave ovens, Wi-Fi access points, and faulty electrical equipment and asks people to stop using such equipment. It does not have enforcement power (although the FCC can impose a fine of $50 on violators) but will work with residents to find solutions.
Cellular telephone use in the central area of the zone is also very restricted.
Zones of protection
The Green Bank Interference Protection Group maintains policies to manage radio-frequency interference (RFI) by dividing into five zones based on available legal instruments. The National Radio Quiet Zone Administrator at the Green Bank Observatory manages the enforcement policies.
Zone 1 and Zone 2 are located within the property of the Green Bank Observatory. The entire property is designated Zone 1 except small portions (such as housing, visitor, and laboratory areas) that are designated Zone 2. Zone 1, the Radio Astronomy Instrument Zone, restricts intentional radiators to those that are deemed essential. All unintentional radiators must be operated within the recommendations on protection criteria used for radio astronomical measurements. Gasoline-powered motor vehicles are prohibited in Zone 1 as their spark-ignition engines generate significant radio interference, resulting in the requirement that all vehicles and equipment be diesel-powered. Zone 2, the Observatory Building Zone, allows intentional radiators licensed by the National Radio Quiet Zone but not other radiators such as Wi-Fi, cordless phones, and other wireless equipment. Certain types of unintentional radiators are allowed. Digital cameras are prohibited, although film photography is allowed.
Zone 3 and Zone 4 are governed by the Radio Astronomy Zoning Act, Chapter 37A of the West Virginia Code. It strictly regulates radio transmitters in the area surrounding the Green Bank Observatory. Zone 3, the inner of the two areas, has the greatest restriction; it is surrounded by Zone 4, in which progressively greater emissions are allowed at greater distances. Within these zones, interference to observations is identified and documented. The owners of the offending equipment are visited personally to request cooperation in eliminating the interference. Enforcement is used as a last resort. Enforcement in Zone 4 may be more lenient than the limit set by Chapter 37A.
Zone 5 is the outermost part of the National Radio Quiet Zone.
Uses
The Federal Communications Commission (FCC) created the Quiet Zone in 1958 to protect the radio telescopes at Green Bank and Sugar Grove from harmful interference. Today, the Green Bank Observatory oversees the Quiet Zone.
The Quiet Zone also protects the antennas and receivers of the U.S. Navy's Information Operations Command (NIOC) at Sugar Grove. The NIOC is the location of electronic intelligence-gathering systems and is today said to be a key station in the ECHELON system operated by the National Security Agency (NSA).
The area has also attracted people who believe they suffer from electromagnetic hypersensitivity, though scientific experiments have shown this condition is caused by the nocebo effect rather than electromagnetic waves.
Counties inside the NRQZ
Maryland counties
Extreme southern Garrett
Virginia counties
See also List of radio stations in Virginia, which includes several AM and FM stations within the zone.
Western Albemarle
Alleghany
Amherst, except for the southern quarter
Extreme northern Appomattox
Augusta
Bath
Extreme northern Bedford
Northern Botetourt
Northwestern Buckingham
Northern Craig
Western Greene
Highland
Nelson
Western Page
Rockbridge
Rockingham, except for a small area in the extreme eastern part
Western Shenandoah
West Virginia counties
See also List of radio stations in West Virginia, which includes several AM and FM stations within the zone.
Barbour, except for a small area in the north
Extreme eastern Braxton
Grant, except for an area in the north
Eastern Greenbrier
Southwestern Hampshire
Hardy
Southeastern Harrison
Eastern Lewis
Extreme southern Mineral
Northeastern and east central Monroe
Extreme eastern Nicholas
Pendleton
Pocahontas
Two areas in extreme southwestern and southeastern Preston
Randolph
Extreme southern Taylor
Tucker, except for an area in the extreme northern part
Upshur
Central and eastern Webster
Cities inside the NRQZ
Virginia cities
Buena Vista
The western half of Charlottesville, including much of the University of Virginia grounds
Covington
Harrisonburg
Lexington
Staunton
Waynesboro
West Virginia cities
Buckhannon
Elkins
Weston
Outside
Clarksburg, West Virginia, and Lynchburg, Virginia, are just outside the Quiet Zone.
See also
Electromagnetic interference
Radio silence
Cone of Silence, a fictional device from the 1960s American television series Get Smart
References
External links
Official website
"The Town Without Wi‑Fi", Washingtonian (January 2015)
"The Town Where Wi-Fi Is Banned: The Green Bank Telescope and the Quiet Zone", YouTube (October 2016)
United States communications regulation
Communications in West Virginia
Electromagnetic compatibility
Mass media in West Virginia
Radio regulations
1958 establishments in West Virginia
Communications in Virginia
Mass media in Virginia
1958 establishments in Virginia
1958 establishments in Maryland
Communications in Maryland
Mass media in Maryland
Garrett County, Maryland
Federal Communications Commission
Radio astronomy | United States National Radio Quiet Zone | [
"Astronomy",
"Engineering"
] | 1,472 | [
"Radio electronics",
"Electromagnetic compatibility",
"Radio astronomy",
"Electrical engineering",
"Astronomical sub-disciplines"
] |
613,257 | https://en.wikipedia.org/wiki/Advanced%20Photon%20Source | The Advanced Photon Source (APS) at Argonne National Laboratory (in Lemont, Illinois) is a storage-ring-based high-energy X-ray light source facility. It is one of five X-ray light sources owned and funded by the U.S. Department of Energy Office of Science. The APS began operation on March 26, 1995. It is operated as a user facility, meaning that it is open to the world’s scientific community, and more than 5,500 researchers make use of its resources each year.
How APS works
The APS uses a series of particle accelerators to push electrons up to nearly the speed of light, and then injects them into a storage ring that is roughly two-thirds of a mile around. At every bend in the track, these electrons emit synchrotron radiation in the form of ultrabright X-rays. Scientists at 65 experiment stations around the ring use these X-rays for basic and applied research in a number of fields.
Scientists use the X-rays generated by the APS to peer inside batteries, with the goal of creating longer-lasting, faster-charging energy storage devices; to improve 3D printing for more durable materials; to learn more about the behavior of charged particles in order to improve electronics; and to map the brain to understand more about neurological diseases. APS research played a role in the development of the COVID-19 vaccines in use in the United States.
The Experiment Hall surrounds the storage ring and is divided into 35 sectors, each of which has access to two X-ray beamlines: one at an insertion device and one at a bending magnet. Each sector also corresponds to a lab/office module offering immediate access to the beamline.
Two Nobel prizes in Chemistry have been granted for work performed in part at the APS. The 2009 prize was awarded for the discovery of the structure of the ribosome, and the 2012 prize for the structure of G protein-coupled receptors.
APS upgrade
The APS underwent an upgrade that saw the original storage ring replaced with a new multi-bend achromat lattice. Construction of nine new feature beamlines and enhancements to 15 existing beamlines will be completed in 2024 and 2025. The result will be X-rays that are up to 500 times brighter than those generated before the upgrade, and beamlines that will enable greater focusing ability to examine smaller materials in sharper detail. The installation period for the new storage ring began on April 24, 2023, and was completed roughly 12 months later, with stored beam demonstrated April 20, 2024.
See also
Keith Moffat
EPICS
References
External links
Lightsources.org
Argonne National Laboratory
Synchrotron radiation facilities | Advanced Photon Source | [
"Materials_science"
] | 557 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
613,271 | https://en.wikipedia.org/wiki/Fucoxanthin | Fucoxanthin is a xanthophyll, with formula C42H58O6. It is found as an accessory pigment in the chloroplasts of brown algae and most other heterokonts, giving them a brown or olive-green color. Fucoxanthin absorbs light primarily in the blue-green to yellow-green part of the visible spectrum, peaking at around 510-525 nm by various estimates and absorbing significantly in the range of 450 to 540 nm.
Function
Carotenoids are pigments produced by plants and algae and play a role in light harvesting as part of the photosynthesis process. Xanthophylls are a subset of carotenoids, identified by the fact that they are oxygenated either as hydroxyl groups or as epoxide bridges. This makes them more water soluble than carotenes such as beta-carotene. Fucoxanthin is a xanthophyll that contributes more than 10% of the estimated total production of carotenoids in nature. It is an accessory pigment found in the chloroplasts of many brown macroalgae, such as Fucus spp., and of the golden-brown unicellular microalgae, the diatoms. It absorbs blue and green light in the 450-540 nm band, imparting a brownish-olive color to algae.
Fucoxanthin has a highly unique structure that contains both an epoxide bond and hydroxyl groups along with an allenic bond (two adjacent carbon-carbon double bonds) and a conjugated carbonyl group (carbon-oxygen double bond) in the polyene chain. All of these features provide fucoxanthin with powerful antioxidant activity.
In macroalgal plastids, fucoxanthin acts as an antenna for light harvesting and energy transfer in the photosystem light-harvesting complexes. In diatoms like Phaeodactylum tricornutum, fucoxanthin is protein-bound along with chlorophyll to form a light-harvesting protein complex. Fucoxanthin is the dominant carotenoid, responsible for up to 60% of the energy transfer to chlorophyll a in diatoms. When bound to protein, the absorption spectrum of fucoxanthin expands from 450-540 nm to 390-580 nm, a range that is useful in aquatic environments.
Sources
Fucoxanthin is present in brown seaweeds and diatoms and was first isolated from Fucus, Dictyota, and Laminaria by Willstätter and Page in 1914. Seaweeds are commonly consumed in south-east Asia and certain countries in Europe, while diatoms are single-cell planktonic microalgae characterized by a golden-brown color, due to their high content of fucoxanthin. Generally, diatoms contain up to 4 times more fucoxanthin than seaweed, making diatoms a viable source for fucoxanthin industrially. Diatoms can be grown in controlled environments (such as photobioreactors). Brown seaweeds are mostly grown in the open sea, often exposed to metals and metalloids.
Bioavailability
Limited studies of fucoxanthin in humans indicate low bioavailability.
See also
Chlorophyll
References
Carotenoids
Epoxides
Brown algae
Diatom biology | Fucoxanthin | [
"Biology"
] | 717 | [
"Biomarkers",
"Algae",
"Brown algae",
"Carotenoids"
] |
613,351 | https://en.wikipedia.org/wiki/List%20of%20computing%20people | This is a list of people who are important or notable in the field of computing, but who are not primarily computer scientists or programmers.
A
Alfred Aho, co-developer of the AWK
Leonard Adleman, encryption (RSA)
Marc Andreessen, co-founder of Netscape Communications Corporation
B
Tim Berners-Lee, inventor of the World Wide Web
Stephen Bourne, developer of the Bourne shell
C
John Carmack, realtime computer game graphics, id Software
Noam Chomsky, linguist, language theorist (Chomsky hierarchy) and social critic
D
Theo de Raadt, founder of the OpenBSD and OpenSSH projects
E
J. Presper Eckert, ENIAC
Larry Ellison, co-founder of Oracle Corporation
Marc Ewing, creator of Red Hat Linux
F
G
Bill Gates, co-founder and Chairman of Microsoft
James Gosling, "father" of the Java programming language
H
Grace Hopper, pioneer of computer programming who invented one of the first linkers
I
Jonathan Ive, Senior Vice President of Industrial Design at Apple
J
Steve Jobs, co-founder and CEO of Apple
Bill Joy, co-founder of Sun Microsystems, BSD
K
Brian Kernighan, Dennis Ritchie, C programming language
Donald Knuth, The Art of Computer Programming, TeX
L
Rasmus Lerdorf, creator of the PHP Scripting Language
Lawrence Lessig, professor of law and founder of the Creative Commons
Ada Lovelace, often regarded as the first computer programmer
M
John William Mauchly, ENIAC
John McCarthy, LISP programming language
Bob Miner, co-founder of Oracle Corporation
Marvin Minsky, AI luminary
Gordon E. Moore, co-founder of Intel, Moore's Law
Elon Musk, co-founder of PayPal, Tesla, Inc. and SpaceX
N
Roger Needham, computer security pioneer, co-designer of the Needham–Schroeder protocol
John von Neumann, theoretical computer science
Robert Noyce, co-founder of Intel and co-inventor of the integrated circuit
P
Sir John Anthony Pople, pioneer in computational chemistry
Jon Postel, Internet pioneer, founder of IANA
Q
R
Eric Raymond, Open source movement luminary
Dennis Ritchie, Unix operating system and C programming language
Ron Rivest, encryption (RSA)
Guido van Rossum, creator of the Python programming language and its Benevolent Dictator For Life
S
Adi Shamir, encryption (RSA)
Mark Shuttleworth, founder of Canonical
Richard Stallman, founder of GNU
Olaf Storaasli, NASA Finite element machine
Bjarne Stroustrup, founder of C++
T
Ken Thompson, Unix and Plan 9 operating systems
Linus Torvalds, Linux
Alan Turing, British mathematician and cryptographer
U
V
W
Prof. Joseph Weizenbaum, computer critic
Kevin Warwick, cyborg scientist, implant self-experimenter
Niklaus Wirth, developed Pascal
Peter J. Weinberger, co-developer of the AWK language
Sophie Wilson, designer of the ARM instruction set
Stephen Wolfram, founder of Wolfram Research, physicist, software developer, mathematician
Steve Wozniak, co-founder of Apple; creator of the Apple I and Apple II computers
X
Y
Z
Jill Zimmerman, James M. Beall Professor of Mathematics and Computer Science at Goucher College
Konrad Zuse, built one of the first computers
Mark Zuckerberg, co-founder of Facebook
See also
List of programmers
List of computer scientists
List of pioneers in computer science
List of Russian IT developers
Computing
Computing people | List of computing people | [
"Technology"
] | 682 | [
"Computing-related lists"
] |
613,362 | https://en.wikipedia.org/wiki/Solar%20storm | A solar storm is a disturbance on the Sun, which can emanate outward across the heliosphere, affecting the entire Solar System, including Earth and its magnetosphere, and is the cause of space weather in the short-term with long-term patterns comprising space climate.
Types
Solar storms include:
Solar flare, a large explosion in the Sun's atmosphere caused by tangling, crossing or reorganizing of magnetic field lines
Coronal mass ejection (CME), a massive burst of plasma from the Sun, sometimes associated with solar flares
Geomagnetic storm, the interaction of the Sun's outburst with Earth's magnetic field
Solar particle event (SPE), proton or energetic particle (SEP)
See also
List of solar storms
Aurora, a luminous phenomenon induced by ionization and excitation of constituents of a planet's upper atmosphere
Heliophysics, the scientific study of the Sun and region of space affected by the Sun
Magnetic cloud, a transient disturbance in the solar wind
Solar cycle, an 11-year cycle of sunspot activity
Solar prominence, a plasma and magnetic structure in the Sun's corona
Solar wind, the stream of particles and plasma emanating from the Sun
Active region, where most solar flares and coronal mass ejections originate
References
Storm
Space weather
Geomagnetic storms
Space hazards
"Physics"
] | 279 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
613,367 | https://en.wikipedia.org/wiki/Nightgown | A nightgown, nightie or nightdress is a loosely hanging item of nightwear, and is commonly worn by women and girls. A nightgown is made from cotton, silk, satin, or nylon and may be decorated with lace appliqués or embroidery at the bust and hem.
A nightgown may have any neckline, and may have sleeves of any type, or be sleeveless, and any shoulder strap or back style. The length of a nightgown may vary from hip-length to floor-length. A short nightgown can be called a "shortie" or a "babydoll", depending on the style. The sweep (taper from top to bottom) of the night gown can vary from virtually straight, to full circle sweep, like the Olga gown. A slip nightgown may be used as a nightgown or as a full slip. Nightgowns may be worn with a matching outer garment such as a robe, a sheer chiffon peignoir, or a dressing gown, to make them appropriate for receiving guests.
History
Early nightgowns
The Dictionary of Fashion History highlights the use of the term "nightgown" as early as 1530, when French linguist John Palsgrave translated "sloppe" to nightgown in his own textbook. There is no indication, however, whether the term referred to sleepwear or to an item of clothing with a different purpose. Additionally, there is little evidence of designated sleepwear prior to the 16th century. European portraiture from the Middle Ages suggests men and women commonly slept naked, sometimes with a nightcap. Some historians suggest the lack of record of early sleepwear is due to social attitudes: sleepwear was widely regarded as a private matter within households until it became more popularized.
Modern nightgowns originate from nightshirts on men, or night-chemises on women, which date back to as early as the 16th century. Nightshirts and night-chemises tended to just be day shirts or undergarments and were similarly ankle-length, shapeless articles with varying collars. Nightshirts resembled tunics, which were worn by both sexes for centuries in Egypt and Rome. They tended to be made from white linen so as to be easily washed and absorbent. Nobles and lords, however, wore embroidered nightshirts.
It was not until the late 17th century that sleepwear developed its own identity in Western Europe, and higher-class women began to wear chemise-like gowns exclusively to bed, known as nightshifts. Nightshifts developed more shape when the negligée was born in France in the early 18th century. The negligée was typically made with soft-sheer fabric and was tighter around the waist, but still loose-fitting for comfort. It was also a sign of wealth and is regarded as the first women's nightwear to be used widely and a predecessor to the modern nightgown. According to historians Willet and Phillis Cunnington, only small adjustments were made to nightdresses up until the late 19th century because of social attitudes; trimmings of lace or tighter fits were viewed "as a sign of depravity that went against the highest principles of prudery in the English lady".
Nightgowns as dressing gowns: 18th and early 19th centuries
Prior to the late 19th century, the term "nightgown" referred not to sleepwear but rather to informal wear. The nightgown was a "version of a modern dressing gown" and tended to be worn around the house or to occasions when formal attire was not necessary. This garment was actually a banyan, a T-shirt-shaped robe adopted by the British from India, which became known as a "nightgown", dressing gown or "morning gown" in the early 1700s due to its casual nature.
Nightgowns, or dressing gowns, were predominantly worn by men. English variations of the nightgown or dressing gown were influenced by similar gowns from India, Japan and the Middle East. In the early 18th century, the kimono style became popular. It was loose fitting and fit over men like a coat. The gown "consists of two widths of fabric seamed at center back up to the neck, where it joins a small rectangle of fabric to build up the neckline. Each width of fabric then falls over the shoulders to create the gown front. Additional widths of fabric form the sleeves. V-shaped inserts could be sewn at the side seams for additional fullness."
Nightgowns were also typically made from cotton or silk (damask, brocade, velvet, taffeta, and satin) or wool with linings using satin or lutestring in a bright, contrasting color. The material varied based on the weather and the person wearing the gown. In colder seasons, nightgowns would have fur linings. Trade throughout Europe and Asia from the 16th to 18th centuries led to the foreign fabrics and styles used for nightgowns in Western Europe and America. Exotic associations popularized the nightgown, especially in the kimono or banyan style. Fashion historian Patricia Cunningham has also suggested “the cut of the gown may derive from Persian and Turkish caftans”.
Nightgowns or dressing gowns also increased in demand because of portraiture and implications of status. The garment is seen throughout portraits in the 17th and 18th centuries. "The adoption of the gown by the English may date from the 16th century when Henry VIII wore what appears to be an Islamic caftan with frogged closure for a portrait by Holbein". Cunningham points to paintings "The Apothecary's Shop" (1752) and "The Concert" (1741) which "illustrate the apparent appropriateness of the gown for both professional and social occasions".
19th and 20th centuries
From 1840 to 1900, stylistic changes were made to nightgowns' necklines, collars, sleeves, bodices and closures. "Embellishments such as frills, ruffles, tucks, ribbons, lace, beading, openwork and embroidery would often be added to necklines, collars, bodices, sleeves, cuffs, and skirts." The traditional nightshirt was replaced by pyjamas across the Western world after they were adopted from India in 1870. Pyjamas soon became an essential item in men's wardrobes because of their comfort and exotic connotations. Female pyjamas were introduced in 1886 and were a combination of a nightgown and pants "that required 4 ½ yards of calico or flannel fabric. The top had a high collar and a buttoned-down front, and there were frills at the wrists and at the knees." French designer Coco Chanel was the first to release a line of attractive women's pyjamas, which coaxed along their popularity.
Despite the introduction of pyjamas, the popularity of the nightgown grew drastically in the 1920s. Between 1920 and 1940, nightgowns did not curve the body but draped down in a straight line. This is widely attributed to French designer Madeleine Vionnet, who rejected corsets and became famous for "cutting fabric along the bias". Her styles accentuated curves while also providing fluidity and comfort. During this time, nightgowns also moved from domestic use to fashion statements. In 1933, trend setter Mary d'Erlanger wore a nightgown cut low in the front and back to a ball in New York, popularizing the elegance of the style. This style, now referred to as the "slip dress", made a resurgence in the 1990s. The middle of the 20th century saw more tailored nightgowns which were full skirted with figure-hugging bodices, reflecting trends of the time. By the 1960s, nightgowns were completely diversified, found in varieties of lengths, patterns, and fabrics.
21st century
In the 21st century, nightgowns are predominantly worn by women. Common modern nightgown styles are made from cotton, satin, silk or lace and have embroidery or lace details with thin shoulder straps. Nightgowns have several different variations. Longer, cotton nightgowns are often referred to as "Victorian Nightgowns", having been influenced by similar styles in the late 19th century. Shorter nightgowns are also known as "nighties" and a common style is the "babydoll" nightgown which is generally lace and silk with a V-neckline. Other variations are the "shirt style" nightgown or the "slip dress" nightgown.
The variety of nightgown styles has pushed into daywear and is also often seen on the runway. Nightgown influence has been seen in street style clothes as well as high fashion. In the 1990s, designer Calvin Klein developed a line of nightgown-style dresses which were simple, silk gowns that were short or reached the floor. As recently as 2019, actor Gwyneth Paltrow wore a sheer yellow lace nightgown on the red carpet at the Met Gala with a high neck and frilled collar much like one from the 19th century. Other trends like beach slip tunics have been linked to the first variations of the nightgown, or nightdress, in the 17th and 18th centuries. Some scholars suggest that as daywear has become more relaxed over time, it has allowed for the nightgown to be repurposed into different styles people wear every day.
Gallery
References
Notes
Bibliography
External links
Yvette Mahe, “History of Sleepwear: Fashion in Time,” History of Fashion, March 27, 2015
Paulla Estes and Niki Foster, “What Are the Different Types of Nightgowns?,” wiseGEEK (Conjecture Corporation, April 5, 2020)
Justin Parkinson, “When Pyjamas Ruled the Fashion World,” BBC News (BBC, January 31, 2016)
Boyd, Sarah. Transforming Sleepwear Into Pajama-Chic Daywear. Forbes. Forbes Magazine, April 21, 2016.
metmuseum.org
Colin McDowell, Madeleine Vionnet (1876-1975), The Business of Fashion (The Business of Fashion, August 23, 2015),
Margaret Swain, "The Patchwork Dressing Gown", Costume 18, no. 1 (January 1984): pp. 59-65
Boucher François and Yvonne Deslandres, 20000 Years of Fashion: the History of Costume and Personal Adornment (New York: Harry N. Abrams, 1987)
Gabriele Stein, "Word Studies in the Renaissance", Oxford Scholarship Online, 2017
Valerie Cumming, Cecil Willet Cunnington, and Phillis Cunnington, The Dictionary of Fashion History (London: Bloomsbury Academic, 2017).
Elizabeth Ewing, Dress and Undress: a History of Women's Underwear (London: Batsford, 1978)
Gown
Lingerie
Gowns | Nightgown | [
"Biology"
] | 2,265 | [
"Behavior",
"Sleep",
"Nightwear"
] |
613,484 | https://en.wikipedia.org/wiki/Height%20gauge | A height gauge is a measuring device used for determining the height of objects, and for marking of items to be worked on.
These measuring tools are used in metalworking or metrology to either set or measure vertical distances; the pointer is sharpened to allow it to act as a scriber and assist in marking out work pieces.
Devices similar in concept, with lower resolutions, are used in health care settings (health clinics, surgeries) to find the height of people, in which context they are called stadiometers.
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed, or the height gauge may have provision to adjust the scale. This is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then, with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as adjusting for any errors in a damaged or resharpened probe.
In the toolroom, the distinction between a height gauge and a surface gauge is that a height gauge has a measuring head (whether vernier, fine rack and pinion with dial, or linear encoder with digital display), whereas a surface gauge has only a scriber point. Both are typically used on a surface plate and have a heavy base with an accurately flat, smooth underside.
References
Length, distance, or range measuring devices
Metalworking measuring instruments
Vertical extent | Height gauge | [
"Physics",
"Mathematics"
] | 327 | [
"Vertical extent",
"Physical quantities",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities"
] |
613,557 | https://en.wikipedia.org/wiki/Calculus%20of%20constructions | In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants.
Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity).
General traits
The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as terms to types, types to types, and types to terms.
The CoC is strongly normalizing, and hence consistent.
Usage
The CoC has been developed alongside the Coq proof assistant. As features were added (or possible liabilities removed) to the theory, they became available in Coq.
Variants of the CoC are used in other proof assistants, such as Matita and Lean.
The basics of the calculus of constructions
The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions").
Terms
A term in the calculus of constructions is constructed using the following rules:
T is a term (also called type);
P is a term (also called prop, the type of all propositions);
Variables (x, y, ...) are terms;
If A and B are terms, then so is (A B);
If A and B are terms and x is a variable, then the following are also terms:
(λx:A. B),
(∀x:A. B).
In other words, the term syntax, in Backus–Naur form, is then:
e ::= T | P | x | (e e) | (λx:e. e) | (∀x:e. e)
The calculus of constructions has five kinds of objects:
proofs, which are terms whose types are propositions;
propositions, which are also known as small types;
predicates, which are functions that return propositions;
large types, which are the types of predicates (P → P is an example of a large type);
T itself, which is the type of large types.
β-equivalence
As with the untyped lambda calculus, the calculus of constructions uses a basic notion of equivalence of terms, known as β-equivalence. This captures the meaning of λ-abstraction:
(λx:A. B) N =β B[x:=N],
where B[x:=N] denotes the result of substituting the term N for the free variable x in the term B.
β-equivalence is a congruence relation for the calculus of constructions, in the sense that
If A =β A′ and B =β B′, then (A B) =β (A′ B′), (λx:A. B) =β (λx:A′. B′), and (∀x:A. B) =β (∀x:A′. B′).
Judgments
The calculus of constructions allows proving typing judgments:
x₁:A₁, x₂:A₂, ..., xₙ:Aₙ ⊢ t : B,
which can be read as the implication
If variables x₁, ..., xₙ have, respectively, types A₁, ..., Aₙ, then term t has type B.
The valid judgments for the calculus of constructions are derivable from a set of inference rules. In the following, we use Γ to mean a sequence of type assignments
x₁:A₁, x₂:A₂, ..., xₙ:Aₙ; we use A, B, C, D to mean terms; and we use K, L to mean either T or P. We shall write B[x:=N] to mean the result of substituting the term N for the free variable x in the term B.
An inference rule is written in the form
Γ ⊢ A : B   ⟹   Γ′ ⊢ C : D,
which means
if Γ ⊢ A : B is a valid judgment, then so is Γ′ ⊢ C : D.
Inference rules for the calculus of constructions
1. ⊢ P : T
2. Γ ⊢ A : K  ⟹  Γ, x:A ⊢ x : A  (where x does not occur in Γ)
3. Γ ⊢ A : K  and  Γ ⊢ C : D  ⟹  Γ, x:A ⊢ C : D
4. Γ ⊢ A : K  and  Γ, x:A ⊢ B : L  ⟹  Γ ⊢ (∀x:A. B) : L
5. Γ ⊢ (∀x:A. B) : L  and  Γ, x:A ⊢ N : B  ⟹  Γ ⊢ (λx:A. N) : (∀x:A. B)
6. Γ ⊢ M : (∀x:A. B)  and  Γ ⊢ N : A  ⟹  Γ ⊢ (M N) : B[x:=N]
Defining logical operators
The calculus of constructions has very few basic operators: the only logical operator for forming propositions is ∀. However, this one operator is sufficient to define all the other logical operators:
A ⇒ B ≡ ∀x:A. B  (where x is not free in B)
A ∧ B ≡ ∀C:P. (A ⇒ B ⇒ C) ⇒ C
A ∨ B ≡ ∀C:P. (A ⇒ C) ⇒ (B ⇒ C) ⇒ C
¬A ≡ ∀C:P. (A ⇒ C)
A ⇔ B ≡ (A ⇒ B) ∧ (B ⇒ A)
∃x:A. B ≡ ∀C:P. (∀x:A. (B ⇒ C)) ⇒ C
Defining data types
The basic data types used in computer science can be defined within the calculus of constructions:
Booleans: ∀A:P. A ⇒ A ⇒ A
Naturals: ∀A:P. (A ⇒ A) ⇒ (A ⇒ A)
Product A × B: A ∧ B
Disjoint union A + B: A ∨ B
Note that Booleans and Naturals are defined in the same way as in Church encoding. However, additional problems arise from propositional extensionality and proof irrelevance.
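These typed definitions mirror the untyped Church encodings, so their computational behavior can be sketched in any language with first-class functions. Below is an illustrative Python sketch (the helper names `to_int`, `pair`, `fst`, and `snd` are ours, and the CoC type annotations are necessarily dropped):

```python
# Untyped counterparts of the impredicative data-type encodings.
# Booleans: a Boolean picks one of its two arguments.
true  = lambda a: lambda b: a
false = lambda a: lambda b: b

# Naturals: the numeral n applies a function f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

# Product: a pair hands its components to a continuation, just as the
# conjunction-style encoding quantifies over a result type C.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)
snd  = lambda p: p(lambda a: lambda b: b)

three = succ(succ(succ(zero)))
print(to_int(three))                     # 3
print(true("yes")("no"))                 # yes
print(fst(pair(1)(2)), snd(pair(1)(2)))  # 1 2
```

In the CoC itself each of these terms carries a ∀-type; Python merely demonstrates the reduction behavior.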
See also
Pure type system
Lambda cube
System F
Dependent type
Intuitionistic type theory
Homotopy type theory
References
Sources
Also available freely accessible online: Note terminology is rather different. For instance, () is written [x : A] B.
— An application of the CoC
Dependently typed programming
Lambda calculus
Type theory | Calculus of constructions | [
"Mathematics"
] | 890 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
613,611 | https://en.wikipedia.org/wiki/NGC%20300 | NGC 300 (also known as Caldwell 70 or the Sculptor Pinwheel Galaxy) is a spiral galaxy in the constellation Sculptor. It was discovered on 5 August 1826 by Scottish astronomer James Dunlop. It is one of the closest galaxies to the Local Group, and it most likely lies between the latter and the Sculptor Group. It is the brightest of the five main spirals in the direction of the Sculptor Group. It is inclined at an angle of 42° when viewed from Earth and shares many characteristics of the Triangulum Galaxy. It is 94,000 light-years in diameter, somewhat smaller than the Milky Way, and has an estimated mass of (2.9 ± 0.2) × 1010 .
Nearby galaxies and group information
NGC 300 and the irregular galaxy NGC 55 have traditionally been identified as members of the Sculptor Group, a nearby group of galaxies in the constellation of the same name. However, recent distance measurements indicate that these two galaxies actually lie in the foreground. It is likely that NGC 300 and NGC 55 form a gravitationally bound pair.
Distance estimates
In 1986, Allan Sandage estimated the distance to NGC 300 to be 5.41 Mly (1.66 Mpc). By 1992, this had been updated to 6.9 Mly (2.1 Mpc) by Freedman et al. In 2006, this was revised by Karachentsev et al. to be (). At about the same time, the tip of the red giant branch (TRGB) method was used to produce an estimate of () using edge detection and () using maximum likelihood. These results were consistent with estimates using near-infrared photometry of Cepheid variables by Gieren et al. 2005 that provided an estimate of (). Combining the recent TRGB and Cepheid estimates the distance to NGC 300 is estimated at ().
NGC 300-OT
On a CCD image obtained on May 14, 2008, amateur astronomer L.A.G. Berto Monard discovered a bright optical transient (OT) in NGC 300 that is designated NGC 300-OT. It is located at RA: and DEC: in a spiral arm containing active star formation. Its broad-band magnitude was 14.3 in that image. An earlier image (from April 24, 2008), taken just after NGC 300 reemerged from behind the Sun, evidenced an already brightening OT at ~16.3 magnitude. No brightening was detected on a February 8, 2008 image or on any earlier ones. The transient's peak measured magnitude was 14.69 on May 15, 2008.
At discovery, the transient had an absolute magnitude of , making it faint in comparison to a typical core-collapse supernova but bright in comparison to a classical nova. Additionally, the photometric and spectroscopic properties of the OT imply that it is not a luminous blue variable either. Since its peak, brightness dropped smoothly through September 2008 while becoming continuously redder. After September 2008, brightness continued to fall at a lower rate in the optical spectrum but with strong Hα emissions. Further, the optical spectrum is mostly made up of fairly narrow Hydrogen Balmer and Ca II emission lines coupled with strong Ca II H&K absorption. Research into historical Hubble images provides an accurate upper bound on the progenitor star's brightness. This suggested a low-mass main sequence star as progenitor with the transient resulting from a stellar merger similar to red Galactic nova V838 Monocerotis. Analysis of historical images of the area of the OT suggests with 70% certainty that the progenitor formed in a burst of stars around 8–13 Myr ago and implies the progenitor's mass to be 12–25 M⊙ assuming the OT is due to an evolving massive star.
However, in 2008 a bright mid-infrared progenitor to the transient was discovered in historical Spitzer data. This was a star that was obscured by dust, with energy distribution analogous to a black-body of AU and radiating at K with . This demonstrated that the transient was associated with an energetic explosion of a low-mass ≈ 10 M⊙ star. The transient's low luminosity as compared to a typical core-collapse supernova, combined with its spectral attributes and dust-covered properties, makes it nearly identical to NGC 6946's SN 2008S.
The spectrum of NGC 300-OT observed with Spitzer shows strong, broad emission features at 8 μm and 12 μm. Such features are also seen in Galactic carbon-rich protoplanetary nebulae.
SN 2010da
On May 23, 2010, Monard discovered another transient object of 16th magnitude, denoted as SN 2010da. The optical transient was detected 15".9 west and 16".8 north the center of the galaxy at coordinates 00 55 04.86 −37 41 43.7.
Two sets of independent follow-up spectroscopy data suggested that this was again another optical transient rather than a supernova, possibly an outbursting luminous blue variable star according to one spectrum, as earlier predicted from the nature of the candidate mid-infrared progenitor. The transient faded by 0.5–0.7 mag in 9 days, much faster than the 2008 transient in NGC 300.
Other Novae, Supernovae, and Transients
AT 2019qyl was discovered on 26 September 2019, at magnitude 17.1. It was initially classified as a type IIn/LBV, but later analysis classified the star as a classical nova.
SN 2020acli (type IIn-pec, mag. 18.4205) was discovered by the Distance Less Than 40 Mpc Survey (DLT40) on 12 December 2020.
AT 2024oth (type unknown, mag. 19.85) was discovered by BlackGEM on 27 June 2024.
AT 2024txt (type unknown, mag. 19.77) was discovered by Pan-STARRS on 29 July 2024.
Binary black hole system
An x-ray source in NGC 300 is designated NGC 300 X-1. Astronomers speculate that NGC 300 X-1 is a new kind of Wolf-Rayet + stellar black hole binary system similar to the confirmed such system IC 10 X-1. Their shared properties include an orbital period of 32.8 hours. The black hole has a mass of 17 ± 4 and the WR star has a mass of . Both objects orbit each other at a distance of about 18.2 .
WO star
There is an oxygen-sequence Wolf-Rayet star (WO4 type), known as STWR 13, located in one of the bright H II regions in NGC 300.
Notes
Average(1.845 ± 0.125, 1.86 ± 0.072) = ((1.845 + 1.86) / 2) ± ((0.125² + 0.072²)^0.5 / 2) = 1.86 ± 0.07
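The combination used in this note, the arithmetic mean of the two central values with half the root-sum-square of their uncertainties, can be sketched as follows (a hypothetical helper for illustration, not code from the cited papers):

```python
import math

def combine(a, da, b, db):
    """Unweighted average of two (value, uncertainty) estimates:
    mean of the values, half the root-sum-square of the uncertainties."""
    return (a + b) / 2, math.sqrt(da**2 + db**2) / 2

# TRGB estimate 1.845 ± 0.125 Mpc combined with Cepheid estimate 1.86 ± 0.072 Mpc
value, error = combine(1.845, 0.125, 1.86, 0.072)
print(f"{value:.2f} ± {error:.2f} Mpc")
```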
See also
List of NGC objects (1–1000)
References
External links
Confirmation image of SN 2010da (2010-05-24) / Wikisky DSS2 zoom-in of same region
Unbarred spiral galaxies
Sculptor (constellation)
0300
003238
070b
00525-3757
-06-03-005
18260805
Articles containing video clips
Virgo Supercluster
Discoveries by James Dunlop | NGC 300 | [
"Astronomy"
] | 1,490 | [
"Constellations",
"Sculptor (constellation)"
] |
613,622 | https://en.wikipedia.org/wiki/NGC%2055 | NGC 55, also known as the String of Pearls Galaxy, is a Magellanic type barred spiral galaxy located about 6.5 million light-years away in the constellation Sculptor. It was discovered on 7 July 1826 by Scottish astronomer James Dunlop. Along with its neighbor NGC 300, it is one of the closest galaxies to the Local Group, probably lying between the Milky Way and the Sculptor Group. It has an estimated mass of (2.0 ± 0.4) × 1010 .
Nearby galaxies and group information
NGC 55 and the spiral galaxy NGC 300 have traditionally been identified as members of the Sculptor Group, a nearby group of galaxies in the constellation of the same name. However, recent distance measurements indicate that the two galaxies actually lie in the foreground.
It is likely that NGC 55 and NGC 300 form a gravitationally bound pair.
Visual appearance
The Webb Society Deep-Sky Observer's Handbook writes the following about NGC 55: "Nearly edge-on and appears asymmetrical with some signs of dust near the bulge, which is diffuse, broad and somewhat elongated with the south edge sharp; southeast of the bulge it is strongly curved and lined with 4 or 5 faint knots; north edge of the curve is sharp." Burnham calls it "one of the outstanding galaxies of the southern heavens", somewhat resembling a smaller version of the Large Magellanic Cloud.
See also
NGC 4236
NGC 4631
List of NGC objects (2001–3000)
Notes
average(6.9 ± 0.7, 7.5 ± 1.1) = ((6.9 + 7.5) / 2) ± ((0.7² + 1.1²)^0.5 / 2) = 7.2 ± 0.7
References
External links
NGC 55 in Sculptor
SEDS: Spiral Galaxy NGC 55
NGC 0055
NGC 0055
NGC 0055
0055
-07-01-013
001014
072b
18260707
Virgo Supercluster
Discoveries by James Dunlop | NGC 55 | [
"Astronomy"
] | 404 | [
"Constellations",
"Sculptor (constellation)"
] |
613,749 | https://en.wikipedia.org/wiki/Entablature | An entablature (; nativization of Italian , from "in" and "table") is the superstructure of moldings and bands which lies horizontally above columns, resting on their capitals. Entablatures are major elements of classical architecture, and are commonly divided into the architrave (the supporting member immediately above; equivalent to the lintel in post and lintel construction), the frieze (an unmolded strip that may or may not be ornamented), and the cornice (the projecting member below the pediment). The Greek and Roman temples are believed to be based on wooden structures, the design transition from wooden to stone structures being called petrification.
Overview
The structure of an entablature varies with the orders of architecture. In each order, the proportions of the subdivisions (architrave, frieze, cornice) are defined by the proportions of the column. In Roman and Renaissance interpretations, it is usually approximately a quarter of the height of the column. Variants of entablature that do not fit these models are usually derived from them.
Doric
In the pure classical Doric order entablature is simple. The architrave, the lowest band, is split, from bottom to top, into the guttae, the regulae, and the taenia.
The frieze is dominated by the triglyphs, vertically channelled tablets, separated by metopes, which may or may not be decorated. The triglyphs sit on top of the taenia, a flat, thin, horizontal protrusion, and are finished at the bottom by decoration (often ornate) of 'drops' called guttae, which belong to the top of the architrave. The top of the triglyphs meet the protrusion of the cornice from the entablature. The underside of this protrusion is decorated with mutules, tablets that are typically finished with guttae.
The cornice is split into the soffit, the corona, and the cymatium. The soffit is simply the exposed underside. The corona and the cymatium are the principal parts of the cornice.
Ionic
The Ionic order of entablature adds the fascia in the architrave, which are flat horizontal protrusions, and the dentils under the cornice, which are tooth-like rectangular block moldings.
Corinthian
The Corinthian order adds a far more ornate cornice, divided, from bottom to top, into the cyma reversa, the dentils, the ovolo, the modillions, the fascia, and the cyma recta. The modillions are ornate brackets, similar in use to dentils, but often in the shape of acanthus leaves.
The frieze is sometimes omitted—for example, on the portico of the caryatides of the Erechtheum—and probably did not exist as a structure in the temple of Diana at Ephesus. Neither is it found in the Lycian tombs, which are reproductions in the rock of timber structures based on early Ionian work. The entablature is essentially an evolution of the primitive lintel, which spans two posts, supporting the ends of the roof rafters.
Non-classical architecture
The entablature together with the system of classical columns occurs rarely outside classical architecture. It is often used to complete the upper portion of a wall where columns are not present, and in the case of pilasters (flattened columns or projecting from a wall) or detached or engaged columns it is sometimes profiled around them. The use of the entablature, irrespective of columns, appeared after the Renaissance.
See also
Classical order
Classical architecture
Prastara, an entablature in the Hindu temple architecture
References
External links
Architectural elements | Entablature | [
"Technology",
"Engineering"
] | 795 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
8,842,207 | https://en.wikipedia.org/wiki/Cancer%20Genome%20Project | The Cancer Genome Project is part of the cancer, aging, and somatic mutation research based at the Wellcome Trust Sanger Institute in the United Kingdom. It aims to identify sequence variants/mutations critical in the development of human cancers. Like The Cancer Genome Atlas project within the United States, the Cancer Genome Project represents an effort in the War on Cancer to improve cancer diagnosis, treatment, and prevention through a better understanding of the molecular basis of the disease. The Cancer Genome Project was launched by Michael Stratton in 2000, and Peter Campbell is now the group leader of the project. The project works to combine knowledge of the human genome sequence with high throughput mutation detection techniques.
The project operates within the scope of the International Cancer Genome Consortium, working with the other participating organizations and countries to build a database of genomic changes present in different types of cancer. The somatic mutation information gathered by the project can be located in the COSMIC database. The Wellcome Trust Sanger Institute's project currently has several internal partners that each focus on different types of cancer and mutagenesis utilizing different methods. Research goes beyond just sequencing to include therapeutic biomarker discoveries made utilizing bioinformatics programs. Among these discoveries are drug sensitivity biomarkers and inhibitor biomarkers. These discoveries paired with the evolution of DNA sequencing technologies to next-generation sequencing techniques, are important in potential disease treatment and may even help lead to more personalized medicine for cancer patients.
Goals
The goals of the project are to help sequence and catalog different cancer genomes. Beyond just sequencing the project's internal partners each have different areas of focus that will assist in the project's overall goal of determining unique ways for early detection of cancer, better prevention, and improved treatment for patients.
Partners
The following groups are internal partners at the Wellcome Trust Sanger Institute with labs involved with the Cancer Genome Project that are each carrying out different areas of research involving cancer genomics, other diseases, and therapy improvements for both of the aforementioned.
Garnett Group
The Garnett group is headed by Mathew Garnett. They work to improve current cancer therapies by determining how alterations in the DNA of cells results in cancer and the implications this has involving patient responses to therapy and its potential improvement. The current research being carried out by the group includes the genomics of drug sensitivity, mapping synthetic-lethal dependencies in cancer cells, a new generation of organoid cancer models, and precision organoid models to study cancer gene function.
Jackson Group
The Jackson group is led by Steve Jackson, and their research focuses on how cells utilize DNA-damage response (DDR) to discover and mend damaged cellular DNA. The research they are conducting have large implications involving diseases that result from loss of function of the DDR system, such as cancer, neurodegenerative diseases, infertility, immunodeficiency, and premature aging.
Liu Group
Pentao Liu leads the Liu group, which utilizes genetics, genomics, and cell biology in mice to study the role of gene functions in the development of normal cells and tissues as well as the development of various diseased cells and tissue, including cancer. The group invests a large interest in lineage choice, stem cell self-renewal, and differentiation, which would have implications in early detection, prevention, and therapy options for cancer and other genetic diseases.
McDermott Group
Ultan McDermott heads the McDermott Group. The group utilizes next-generation sequencing technologies, genetic screens, and bioinformatics to increase the knowledge of the effect that cancer genomes have on drug sensitivity and resistance in relation to patients. The different types of genetic screens being used include CRISPR, chemical mutagenesis, and RNAi. The main areas of focus by the group involve the pharmacogenomics of cancer and genetic screens to build a reserve of drug resistances in cancer.
Nik-Zainal Group
The leader of the Nik-Zainal group is Serena Nik-Zainal. The group uses computational methods to identify the unique signature of mutagenesis in somatic cells to help increase the understanding of how mutations in DNA contribute to aging and cancer. As more cancer genomes are sequenced the information the group generates will encompass a more robust collection, allowing for understanding of how mutations lead to different types and even subtypes of cancer.
Vassiliou Group
The Vassiliou group is led by George Vassiliou, and they focus on hematological cancer. The group studies how different genes and their pathways assist in the evolution of blood cancers, with an ultimate goal of developing treatment that will increase the quality and length of life of patients.
Voet Group
Thierry Voet leads the Voet group. The group utilizes single cell genome variants and its transcribed RNA to study the rate of mutation, genomic instability in gametogenesis and embryogenesis, and the effects of cellular heterogeneity on health and disease.
Research
In an attempt to better understand the mechanics of the mutations that lead to the development of cancer the Nik-Zainal group carried out a study that involved the cataloging of the somatic mutations for 21 different breast cancers. The group then utilized mathematical methods to help determine the unique mutational signatures of the underlying processes leading to the evolution from healthy to diseased tissue for each of the sampled cancers. The results showed that the mutations included several single and double nucleotide substitutions that were able to be differentiated. The unique mutations for each cancer allowed for the 21 samples to be categorized based on type and subtype of cancer, showing a relationship between mutations and the type of resulting cancer. While the group was able to identify these mutations they were unable to determine the underlying mechanisms resulting in them.
The McDermott group in participation with other labs worked to find new treatment possibilities for acute myeloid leukemia (AML), an aggressive cancer with a poor prognosis. They accomplished this by designing a CRISPR genome-wide screening tool to locate areas in the genome that would be more susceptible to treatment in the AML cells. The research identified 492 genes essential to the function of the AML cells that would be accessible as therapeutic targets. The group validated the obtained results by genetic and pharmacological inhibition of select genes. Inhibition of one of the selected genes, KAT2A, was able to suppress the growth of the AML cells across several genotypes while leaving noncancerous cells undamaged. The results from this study propose several promising therapeutic options for AML that will need to be further investigated.
See also
Cancer genome sequencing
The Cancer Genome Atlas
International Cancer Genome Consortium
COSMIC cancer database
References
External links
Francis S. Collins and Anna D. Barker. "Mapping the Cancer Genome". Scientific American, February 2007
Cancer Genome Project Website
International Cancer Genome Consortium Website
Cancer genome databases
Human genome projects
Medical genetics in the United Kingdom
Science and technology in Cambridgeshire
South Cambridgeshire District
Wellcome Trust | Cancer Genome Project | [
"Biology"
] | 1,406 | [
"Human genome projects",
"Genome projects"
] |
8,843,067 | https://en.wikipedia.org/wiki/Silo%20%28library%29 | Silo is a computer data format and library developed at Lawrence Livermore National Laboratory (LLNL) for storing rectilinear, curvilinear, unstructured, or point meshes in 2D and 3D. It supports data upon those meshes, including scalar, vector, and tensor variables; volume fraction-based materials; and mass fraction-based species. It fully supports block structured adaptive mesh refinement (AMR) meshes by way of mesh blocks structured in a hierarchy. Silo sits on top of other low-level storage libraries such as PDB, NetCDF, and HDF5.
Currently, VisIt, an open source software package with its start at LLNL, supports the Silo format for visualization and analysis, among many other formats. As of version 4.8 (July 2010), the Silo source code is available under the standard BSD open-source license. The source code for two compression libraries that were part of previous releases of the Silo library, the Hzip and FPzip compression libraries, is not available under the terms of the BSD license. For this reason, two different releases of the Silo source code are made available.
References
Numerical software
Lawrence Livermore National Laboratory | Silo (library) | [
"Mathematics"
] | 262 | [
"Numerical software",
"Mathematical software"
] |
8,843,405 | https://en.wikipedia.org/wiki/Producer%E2%80%93consumer%20problem | In computing, the producer-consumer problem (also known as the bounded-buffer problem) is a family of problems described by Edsger W. Dijkstra since 1965.
Dijkstra found the solution for the producer-consumer problem as he worked as a consultant for the Electrologica X1 and X8 computers: "The first use of producer-consumer was partly software, partly hardware: The component taking care of the information transport between store and peripheral was called 'a channel' ... Synchronization was controlled by two counting semaphores in what we now know as the producer/consumer arrangement: the one semaphore indicating the length of the queue, was incremented (in a V) by the CPU and decremented (in a P) by the channel, the other one, counting the number of unacknowledged completions, was incremented by the channel and decremented by the CPU. [The second semaphore being positive would raise the corresponding interrupt flag.]"
Dijkstra wrote about the unbounded buffer case: "We consider two processes, which are called the 'producer' and the 'consumer' respectively. The producer is a cyclic process and each time it goes through its cycle it produces a certain portion of information, that has to be processed by the consumer. The consumer is also a cyclic process and each time it goes through its cycle, it can process the next portion of information, as has been produced by the producer ... We assume the two processes to be connected for this purpose via a buffer with unbounded capacity."
He wrote about the bounded buffer case: "We have studied a producer and a consumer coupled via a buffer with unbounded capacity ... The relation becomes symmetric, if the two are coupled via a buffer of finite size, say N portions"
And about the multiple producer-consumer case: "We consider a number of producer/consumer pairs, where pair_i is coupled via an information stream containing n_i portions. We assume ... the finite buffer that should contain all portions of all streams to have a capacity of 'tot' portions."
Per Brinch Hansen and Niklaus Wirth soon saw the problem with semaphores: "I have come to the same conclusion with regard to semaphores, namely that they are not suitable for higher level languages. Instead, the natural synchronization events are exchanges of message."
Dijkstra's bounded buffer solution
The original semaphore bounded buffer solution was written in ALGOL style. The buffer can store N portions or elements. The "number of queueing portions" semaphore counts the filled locations in the buffer, the "number of empty positions" semaphore counts the empty locations in the buffer, and the semaphore "buffer manipulation" works as a mutex for the buffer put and get operations. If the buffer is full, that is, the number of empty positions is zero, the producer thread will wait in the P(number of empty positions) operation. If the buffer is empty, that is, the number of queueing portions is zero, the consumer thread will wait in the P(number of queueing portions) operation. The V() operations release the semaphores; as a side effect, a thread can move from the wait queue to the ready queue. The P() operation decreases the semaphore value, blocking the caller when the value is zero; the V() operation increases the semaphore value.
begin integer number of queueing portions, number of empty positions,
buffer manipulation;
number of queueing portions:= 0;
number of empty positions:= N;
buffer manipulation:= 1;
parbegin
producer: begin
again 1: produce next portion;
P(number of empty positions);
P(buffer manipulation);
add portion to buffer;
V(buffer manipulation);
V(number of queueing portions); goto again 1 end;
consumer: begin
again 2: P(number of queueing portions);
P(buffer manipulation);
take portion from buffer;
                  V(buffer manipulation);
V(number of empty positions);
process portion taken; goto again 2 end
parend
end
As of C++20, semaphores are part of the standard library, so Dijkstra's solution can easily be written in modern C++. The variable buffer_manipulation is a mutex; the semaphore feature of acquiring in one thread and releasing in another is not needed here. The lock_guard statement instead of a lock() and unlock() pair is C++ RAII: the lock_guard destructor ensures lock release even in case of an exception. This solution can handle multiple consumer threads and/or multiple producer threads.
#include <thread>
#include <mutex>
#include <semaphore>
constexpr int N = 4;  // buffer capacity; Portion is the application-defined data type
std::counting_semaphore<N> number_of_queueing_portions{0};
std::counting_semaphore<N> number_of_empty_positions{N};
std::mutex buffer_manipulation;
void producer() {
for (;;) {
Portion portion = produce_next_portion();
number_of_empty_positions.acquire();
{
std::lock_guard<std::mutex> g(buffer_manipulation);
add_portion_to_buffer(portion);
}
number_of_queueing_portions.release();
}
}
void consumer() {
for (;;) {
number_of_queueing_portions.acquire();
Portion portion;
{
std::lock_guard<std::mutex> g(buffer_manipulation);
portion = take_portion_from_buffer();
}
number_of_empty_positions.release();
process_portion_taken(portion);
}
}
int main() {
std::thread t1(producer);
std::thread t2(consumer);
t1.join();
t2.join();
}
Using monitors
Per Brinch Hansen defined the monitor: I will use the term monitor to denote a shared variable and the set of meaningful operations on it. The purpose of a monitor is to control the scheduling of resources among individual processes according to a certain policy. Tony Hoare laid a theoretical foundation for the monitor.
bounded buffer: monitor
begin buffer:array 0..N-1 of portion;
head, tail: 0..N-1;
count: 0..N;
nonempty, nonfull: condition;
procedure append(x: portion);
begin if count = N then nonfull.wait;
note 0 <= count < N;
buffer[tail] := x;
tail := tail (+) 1;
count := count + 1;
nonempty.signal
end append;
  procedure remove(result x: portion);
begin if count = 0 then nonempty.wait;
note 0 < count <= N;
x := buffer[head];
head := head (+) 1;
count := count - 1;
nonfull.signal
end remove;
head := 0; tail := 0; count := 0;
end bounded buffer;
The monitor is an object that contains the variables buffer, head, tail and count to realize a circular buffer, the condition variables nonempty and nonfull for synchronization, and the methods append and remove to access the bounded buffer. The monitor operation wait corresponds to the semaphore operation P or acquire; signal corresponds to V or release. The circled operation (+) is addition modulo N. The pseudocode, presented in Pascal style, shows a Hoare monitor. A Mesa monitor uses while count instead of if count. A C++ version is:
#include <cassert>             // assert
#include <condition_variable>  // std::condition_variable
#include <mutex>               // std::mutex, std::unique_lock

constexpr int N = 4;  // buffer capacity; Portion is the application-defined data type

class Bounded_buffer {
Portion buffer[N]; // 0..N-1
unsigned head, tail; // 0..N-1
unsigned count; // 0..N
std::condition_variable nonempty, nonfull;
std::mutex mtx;
public:
void append(Portion x) {
std::unique_lock<std::mutex> lck(mtx);
nonfull.wait(lck, [&]{ return !(N == count); });
assert(0 <= count && count < N);
buffer[tail++] = x;
tail %= N;
++count;
nonempty.notify_one();
}
Portion remove() {
std::unique_lock<std::mutex> lck(mtx);
nonempty.wait(lck, [&]{ return !(0 == count); });
assert(0 < count && count <= N);
Portion x = buffer[head++];
head %= N;
--count;
nonfull.notify_one();
return x;
}
Bounded_buffer() {
head = 0; tail = 0; count = 0;
}
};
The C++ version needs an additional mutex for technical reasons. It uses assert to enforce the preconditions for the buffer add and remove operations.
Using channels
The very first producer-consumer solution in the Electrologica computers used 'channels'. Hoare defined channels: An alternative to explicit naming of source and destination
would be to name a port through which communication is to take place. The port names would be local to the processes, and the manner in which pairs of ports are to be connected by channels could be declared in the head of a parallel command. Brinch Hansen implemented channels in the programming languages Joyce and Super Pascal. The Plan 9 operating system's programming language Alef and the Inferno operating system's programming language Limbo also have channels. The following C source code compiles on Plan 9 from User Space:
#include "u.h"
#include "libc.h"
#include "thread.h"
enum { STACK = 8192 };
void producer(void *v) {
Channel *ch = v;
for (uint i = 1; ; ++i) {
sleep(400);
print("p %d\n", i);
sendul(ch, i);
}
}
void consumer(void *v) {
Channel *ch = v;
for (;;) {
uint p = recvul(ch);
print("\t\tc %d\n", p);
sleep(200 + nrand(600));
}
}
void threadmain(int argc, char **argv) {
int (*mk)(void (*fn)(void*), void *arg, uint stack);
mk = threadcreate;
Channel *ch = chancreate(sizeof(ulong), 1);
mk(producer, ch, STACK);
mk(consumer, ch, STACK);
recvp(chancreate(sizeof(void*), 0));
threadexitsall(0);
}
The program entry point is at function threadmain. The function call ch = chancreate(sizeof(ulong), 1) creates the channel, the function call sendul(ch, i) sends a value into the channel and the function call p = recvul(ch) receives a value from the channel. The programming language Go has channels, too. A Go example:
package main
import (
"fmt"
"math/rand"
"time"
)
var sendMsg = 0
func produceMessage() int {
time.Sleep(400 * time.Millisecond)
sendMsg++
fmt.Printf("sendMsg = %v\n", sendMsg)
return sendMsg
}
func consumeMessage(recvMsg int) {
fmt.Printf("\t\trecvMsg = %v\n", recvMsg)
time.Sleep(time.Duration(200+rand.Intn(600)) * time.Millisecond)
}
func main() {
ch := make(chan int, 3)
go func() {
for {
ch <- produceMessage()
}
}()
for recvMsg := range ch {
consumeMessage(recvMsg)
}
}
The Go producer-consumer solution uses the main goroutine as the consumer and creates a new, unnamed goroutine for the producer. The two goroutines are connected with the channel ch, which can queue up to three int values. The statement ch := make(chan int, 3) creates the channel, the statement ch <- produceMessage() sends a value into the channel, and the loop for recvMsg := range ch receives values from the channel. The allocation of memory resources, the allocation of processing resources, and the synchronization of resources are done by the programming language automatically.
Without semaphores or monitors
Leslie Lamport documented a bounded buffer producer-consumer solution for one producer and one consumer: We assume that the buffer can hold at most b messages, b >= 1. In our solution, we let k be a constant greater than b, and let s and r be integer variables assuming values between 0 and k-1. We assume that initially s=r and the buffer is empty.
By choosing k to be a multiple of b, the buffer can be implemented as an array B [0: b - 1]. The producer simply puts each new message into B[s mod b], and the consumer takes each message from B[r mod b]. The algorithm is shown below, generalized for infinite k.
Producer:
L: if (s - r) mod k = b then goto L fi;
put message in buffer;
s := (s + 1) mod k;
goto L;
Consumer:
L: if (s - r) mod k = 0 then goto L fi;
take message from buffer;
r := (r + 1) mod k;
goto L;
The Lamport solution uses busy waiting in the thread instead of waiting in the scheduler. It neglects the impact of a scheduler thread switch at an inconvenient time: if the first thread has read a variable's value from memory, the scheduler switches to a second thread that changes the value, and then switches back, the first thread will use the old value of the variable, not the current one. Atomic read-modify-write operations solve this problem. Modern C++ offers atomic variables and operations for multi-thread programming. The following busy-waiting C++11 solution for one producer and one consumer uses the atomic read-modify-write operations fetch_add and fetch_sub on the atomic variable count.
#include <atomic>   // std::atomic
#include <thread>   // std::thread

enum { N = 4 };
Message buffer[N];
std::atomic<unsigned> count {0};
void producer() {
unsigned tail {0};
for (;;) {
Message message = produceMessage();
while (N == count)
; // busy waiting
buffer[tail++] = message;
tail %= N;
count.fetch_add(1, std::memory_order_relaxed);
}
}
void consumer() {
unsigned head {0};
for (;;) {
while (0 == count)
; // busy waiting
Message message = buffer[head++];
head %= N;
count.fetch_sub(1, std::memory_order_relaxed);
consumeMessage(message);
}
}
int main() {
std::thread t1(producer);
std::thread t2(consumer);
t1.join();
t2.join();
}
The circular buffer index variables head and tail are thread-local and therefore not relevant for memory consistency. The variable count controls the busy waiting of the producer and consumer thread.
See also
Atomic operation
Design pattern
FIFO
Pipeline
Channel
Implementation in Java: Java Message Service
References
Further reading
Mark Grand Patterns in Java, Volume 1, A Catalog of Reusable Design Patterns Illustrated with UML
C/C++ Users Journal (Dr.Dobb's) January 2004, "A C++ Producer-Consumer Concurrency Template Library", by Ted Yuan, is a ready-to-use C++ template library. The small template library source code and examples can be found here
Ioan Tinca, The Evolution of the Producer-Consumer Problem in Java
Articles with example Java code
Concurrency (computer science)
Edsger W. Dijkstra
Problems in computer science | Producer–consumer problem | [
"Technology"
] | 3,532 | [
"Problems in computer science",
"Computer science"
] |
8,843,579 | https://en.wikipedia.org/wiki/Tiangou | The tiangou () is a legendary creature from China. The tiangou resembles a black dog or meteor, and is thought to eat the Sun or Moon during an eclipse.
Tiangou eating the Sun
As a good spirit, it has the appearance of a white-headed fox. It brings peace and tranquility, and gives protection from all sorts of troubles and robbers. It is referred to by astrologers as a constellation guardian of welfare. This constellation consists of seven stars. In ancient China it was called "Dog" (in the constellation Ship).
As a bad spirit, it is a black dog that eats the Moon. According to the legends, as an interpretation of a lunar eclipse, after Houyi shot down the nine Suns in the sky, he was awarded with an immortality-granting pill by the Queen Mother of the West. Before he could eat it, his wife Chang'e ate it, hoping she could maintain her youth. Chang'e felt her body getting lighter and flew away. Seeing this, a black dog that Hou Yi was rearing went inside her room and licked the remains of the pill. He then chased after Chang'e, getting bigger and bigger. Chang'e, terrified, hid on the Moon. The black dog then ate the Moon, along with Chang'e.
After being informed of this, the Queen Mother of the West captured the dog. Surprised to see that the dog was actually Hou Yi's, she assigned him to guard the gates of heavens and bestowed upon him the title of Tiangou. Tiangou spat the Moon and Chang'e back out, and Chang'e continued living on the Moon.
Battle with Zhang Xian
Zhang Xian () is the enemy of the tiangou. It is said that he protects his children from the dog god with his bow and arrows. He is often depicted aiming at the sky, waiting for the beast to appear.
He is the god of birth and the protector of male children. Many sought for him to give them male offspring and to protect their living sons.
Meaning in Japan
The term tengu and the characters used to write it may be borrowed from the name of tiangou, though this is still to be confirmed. Despite the characters, both creatures are independent mythological creatures with no common ancestor or origin. A tengu is usually depicted as a bird or man with a long nose and other bird-like characteristics, while the tiangou is a dog.
See also
Bulgae (Korea)
Dog (Chinese mythology)
Fenrir, the wolf of Norse mythology whose sons Sköll and Hati Hróðvitnisson swallow the Sun and Moon during Ragnarök
Hellhound
Tengu (Japan)
References
de Visser, M. W. (1908). "The Tengu". Transactions of the Asiatic Society of Japan 34 (2): pp. 25–99. Z. P. Maruya & Co.
Yaoguai
Black dogs (folklore)
Eclipses
Dogs in Chinese mythology | Tiangou | [
"Astronomy"
] | 621 | [
"Astronomical events",
"Eclipses"
] |
8,843,849 | https://en.wikipedia.org/wiki/ClearForest | ClearForest was an Israeli software company that developed and marketed text analytics and text mining solutions.
History
Founded in 1998, ClearForest had its headquarters just outside Boston and a development center in Or Yehuda. The company was acquired by Reuters in April 2007. It now markets its services under the names Calais, OpenCalais, and OneCalais.
ClearForest was previously venture-backed; its last funding round was led by Greylock Ventures and closed in 2005. Other investors included DB Capital Partners, Pitango, Walden Israel, Booz Allen, JP Morgan Partners and HarbourVest Partners.
On February 7, 2008 Reuters announced the launch of Open Calais, a named-entity recognition and semantic analysis service that uses ClearForest technology.
On April 30, 2007, Reuters announced that it would acquire ClearForest. Sources estimate the acquisition at $25 million.
Solutions and Products
ClearForest offers several hosted solutions, including:
OpenCalais, a free web service and open API (for commercial and non-commercial use) that performs named-entity recognition and enables automatic metadata generation using the ClearForest financial module.
Semantic Web Services (SWS), an on-demand service that makes ClearForest's natural language processing tools available as a standard web service. A subset of ClearForest's capabilities is available via SWS at no cost.
Gnosis, a free Firefox extension that uses SWS to analyze the content of a web page. Gnosis identifies named entities such as people, companies, organizations, geographies and products on the page being viewed. Gnosis also automatically processes pages from Wikipedia, providing additional links for people, geographies and other entities which were not explicitly linked within the subject article.
Harvest, a real-time machine-readable news service that uses SWS to process a company's news and document feeds and return machine-readable information about people, companies, locations and over 200 other types of entities, facts and events.
ClearForest also offers Text Analytics solutions targeted at specific business problems, including:
Equity valuation for hedge funds and alternative investments firms
Metadata & database creation for publishers and information providers/services
Tapping "voice of customer" for market and survey research firms
Quality Early Warning for vehicle, capital equipment & durable goods manufacturers
See also
Economy of Israel
References
External links
ClearForest web site
ClearForest semantic web services and Gnosis Firefox extension web site
Software companies based in Massachusetts
Natural language processing
Computational linguistics
Software companies established in 1998
Software companies of Israel
1998 establishments in Massachusetts
Defunct software companies of the United States | ClearForest | [
"Technology"
] | 525 | [
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
8,844,844 | https://en.wikipedia.org/wiki/Intuitor | Intuitor is a website promoting creative learning as both a method of enlightenment and a cultural theme in its own right. Created in 1996, two of its earliest features were instructions for the founder's own four-handed chess variant Forchess and an essay entitled Why Now Is the Most Exciting Time in History to Be Alive. Today, its eclectic format includes educational treatments of physics, statistics, and chess, as well as calls for paradigm shifts such as the adoption of hexadecimal for representing numbers in everyday use.
Insultingly Stupid Movie Physics
Intuitor's most well-known feature is Insultingly Stupid Movie Physics (ISMP), which produces original scientific critiques of contemporary cinema and television. Its main gimmick is a physics rating system parodying the explicit content ratings of the Motion Picture Association of America. Its movie reviews seek to promote a greater understanding of and appreciation for science by lampooning scientific portrayals in pop-culture. It has been cited on popular websites such as Fark and Slashdot, on radio programs throughout the U.S. and Canada, and in major print media. The ISMP was also Something Awful's awful link of the day on June 14, 2006. In calling for "Decency in Movie Physics", ISMP has named the science-fiction film The Core as the "Worst Physics Movie Ever".
External links
Intuitor
American educational websites | Intuitor | [
"Technology"
] | 288 | [
"Computing stubs",
"World Wide Web stubs"
] |
8,845,047 | https://en.wikipedia.org/wiki/Design%20History%20Society | The Design History Society is an arts history organisation founded in 1977 to promote and support the study and understanding of design history. The Society undertakes a range of charitable activities intended to encourage and support research and scholarship, to offer information and create networking opportunities, to foster student participation and public recognition of the subject, and to support regional links and events. The Society welcomes members from related disciplines such as anthropology, architecture and art history, business history, the history of science and technology, craft history, cultural studies, economic and social history, design and design management studies. An elected Executive Committee and Board of Trustees works to enable the activities of the Society, and to ensure that design history is appropriately represented in higher education and research bodies in the UK.
Journal
The Journal of Design History is published quarterly by Oxford University Press on behalf of the Design History Society (J Design Hist). It is the leading journal in its field and plays an active role in the development of design history, including the history of crafts and applied arts, as well as contributing to the broader fields of visual and material culture studies. The journal includes a regular book reviews section, lists books received, and from time to time publishes special issues.
Conference
The annual DHS Conference provides an international platform for interdisciplinary approaches to research and critical debate in design history. Hosted each year by a different partner institution, the conference aims to further global dialogues on design and its histories. See Past Conferences below.
Funding
A range of annual Research Grants encourage debate and research in design history. Individual grants are awarded to support particular research activities, including exhibitions, publication costs, travel and conference attendance, and scholarship in non-Western, post-colonial and other underrepresented areas of research.
The Day Symposium Grant supports DHS members who wish to discuss and disseminate new design history research by convening a one-day symposium.
The Outreach Grant assists DHS members convening a public event to promote design history beyond a traditional academic setting.
Student members benefit from a Student Travel Award and DHS Conference Bursary scheme.
Prizes
Launched in 2017, the Design Writing Prize recognises outstanding writing that engages academic and non-academic audiences in critical and contemporary issues in design.
The Student Essay Prize, established in 1997, is awarded to one undergraduate and one postgraduate essay each year to celebrate excellence in student writing in design history.
Events
In 2018, the DHS launched a rolling calendar of events and activities convened by trustees working with relevant educational, professional and cultural partners. These events create opportunities for engagement beyond the annual conference, support teaching and learning at all levels of design historical education, and aim to reach audiences, both internationally and across the UK.
Membership
Design History Society members include all those interested in design: students, designers, lecturers, historians, researchers, craftspeople, manufacturers, archivists, curators, librarians and collectors. The Society offers membership rates for individuals and institutional members. A concessionary rate is also available for students, full and part-time, the unwaged and seniors. Membership under all categories is administered by Oxford University Press.
Past conferences
References
External links
https://www.designhistorysociety.org/
Design institutions
Arts organizations established in 1977
History organisations based in the United Kingdom | Design History Society | [
"Engineering"
] | 667 | [
"Design",
"Design institutions"
] |
8,845,050 | https://en.wikipedia.org/wiki/Smoke%20hole | A smoke hole (smokehole, smoke-hole) is a hole in a roof for the smoke from a fire to vent. Before the invention of the smoke hood or chimney, many dwellings had smoke holes to allow the smoke from the hearth to escape. Pre-modern English homes with unglazed windows or thatch roofs required no special vent for smoke. These structures typically had only one story for living spaces, and inhabitants made do with a band of relatively clear air near the ground.
Smoke holes in buildings
Smoke holes were often built in such a way that they would not leak water, for example with a covering or by placement in the gables. In the Native American longhouse, smoke holes occur as square openings at intervals along the roof.
Smoke holes for tents
In the Native American plains-style tipi, the smoke hole consisted of an easily accessible smoke-flap vent positioned around the apex of the interior beams; the flaps were extended outward on poles to open the vent. In modern ceremonial tipis this vent is built in the traditional fashion.
Sami tents called a lavvu also have a smoke hole from which smoke from a campfire is vented out the top. Unlike the Native American tipi however, there are no smoke flaps, just a round hole at the top of the tent.
Gallery
In popular culture
In the book It by Stephen King, the members of the losers club build a pit in their club, which they fill with green branches and set them on fire to create smoke. One of them talks about the ritual use of smoke-holes by Native Americans.
References
External links
Traditional Native American dwellings
Architectural elements
Chimneys | Smoke hole | [
"Technology",
"Engineering"
] | 324 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
8,846,283 | https://en.wikipedia.org/wiki/Cyclic%20negation | In many-valued logic with linearly ordered truth values, cyclic negation is a unary truth function that takes a truth value n and returns n − 1 as value if n is not the lowest value; otherwise it returns the highest value.
For example, let the set of truth values be {0,1,2}, let ~ denote negation, and let p be a variable ranging over truth values. For these choices, if p = 0 then ~p = 2; if p = 1 then ~p = 0; and if p = 2 then ~p = 1.
Cyclic negation was originally introduced by the logician and mathematician Emil Post.
References
. See in particular pp. 188–189.
Mathematical logic | Cyclic negation | [
"Mathematics"
] | 139 | [
"Mathematical logic stubs",
"Mathematical logic"
] |
8,846,521 | https://en.wikipedia.org/wiki/BCJR%20algorithm | The Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is an algorithm for maximum a posteriori decoding of error correcting codes defined on trellises (principally convolutional codes). The algorithm is named after its inventors: Bahl, Cocke, Jelinek and Raviv. This algorithm is critical to modern iteratively-decoded error-correcting codes, including turbo codes and low-density parity-check codes.
Steps involved
Based on the trellis:
Compute forward probabilities
Compute backward probabilities
Compute smoothed probabilities based on other information (e.g. noise variance for AWGN, bit crossover probability for a binary symmetric channel)
Variations
SBGT BCJR
Berrou, Glavieux and Thitimajshima simplification.
Log-Map BCJR
Implementations
The Susa framework implements the BCJR algorithm for forward error correction codes and channel equalization in C++.
See also
Forward-backward algorithm
Maximum a posteriori (MAP) estimation
Hidden Markov model
References
External links
The online textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay, discusses the BCJR algorithm in chapter 25.
The implementation of BCJR algorithm in Susa signal processing framework
Error detection and correction | BCJR algorithm | [
"Engineering"
] | 272 | [
"Error detection and correction",
"Reliability engineering"
] |
8,846,939 | https://en.wikipedia.org/wiki/Mobile%20Suit%20Gundam%20Alive | is a Gundam comic title created by Mizuho Takayama (illustration) and Yuka Minakawa (script). It was premiered on the November 2006 issue of Comic Bom Bom. The prologue chapter Gundam ALIVE Episode 0 is published in Gundam Magazine which was bundled with the November 2006 issue of Comic Bom Bom.
This is a completely new story with no relation to previous Gundam series. It is set in the 21st century (the present day), with Japan as the main stage, where a mysterious army has invaded with super weapons called Mobile Suits; the protagonist stands against them with a mobile suit called "Gundam".
Story
The year is 200X. A mysterious robot fell from the sky into Tokyo. The robot is called a "Mobile Suit". Meanwhile, another Mobile Suit named "Gundam" crash landed near Kurono...
Characters
Earth Liberation Organization Army
Main characters
Tokio Kurono (黒野時夫(くろのときお)) (Kurono Tokio (kuro no Tokio))
The main protagonist of the story. A student and a regular of the soccer club, Kurono has always believed that he possesses some hidden talent. He first piloted the "RX-78-2 Gundam", then the "RX-93 Nu Gundam", and later pilots the "System-∀99 Turn A Gundam". At the beginning of the story he becomes a pilot to fend off the "MS-06 Zakus" that were chasing Aiko. He is still an amateur at piloting mobile suits (MS) and was beaten badly in his first few confrontations with the Ministry army. Currently he is undergoing MS training at the Nishimikado House.
In the second battle against the "GAT-X207 Blitz Gundam", Kurono showed Newtype abilities.
Aiko Anna Pruna (アイコ・アンナ・プルナ) (Aiko An'na puruna)
A spacenoid who is a member of the Earth Liberation Organization Army. The original pilot of the "RX-78-2 Gundam" and a pilot of the heavy fighter "G-Fighter", she eventually becomes the captain of the "SCV-70 White Base". Aiko and her brother were attacked while transporting a new MS and crash-landed on Earth, where she meets Kurono. Aiko was amazed at Kurono's ability to pilot the MS when he took over the controls from her.
She appears to be from the future of another dimension. She was really excited when Kurono brought milk for her, as she says that farming in space colonies is difficult and milk is expensive there.
Sakuya Nishimikado (西御門朔也(にしみかど さくや)) (Nishimikado Sakuya (Ni shimi kado sakuya))
A student at the same school Kurono attends. He is the 18th head of the Nishimikado family, which has been protecting the "Gundam" for 500 years, and an expert at piloting mobile suits.
The ancestor of the Nishimikado family is Morris Aluna Pruna, Aiko's brother, who also fell to Earth but wound up in a time 500 years before the events of the story; this makes Aiko Sakuya's grandaunt. Sakuya also pilots Morris's "MSZ-006 Zeta Gundam", now outfitted with "musha" armour.
Retsu Domyouji (道明寺烈(どうみょうじ れつ)) (Doumyōji Retsu (dō myōji retsu))
The pilot of the "GF13-017NJII Shining Gundam" and a childhood friend of Sakuya. A monk of the Domyouji family, he broke the rule against fighting when he used the Shining Gundam to save Kurono.
K2
A member of the Organization Colony Government, serving in the Ministry of the Environment military; he later joined the Earth Liberation Organization Army. He first appeared piloting a "GX-9900 Gundam X" and later pilots the MS "GX-9901-DX Gundam Double X". He and his troops descended to Earth in pursuit of the Gundam, and he later retrieved an H.L.V from the ocean. He is always seen wearing sunglasses. His real name is Ardito Austin (アルディート・オースティン) (Arudīto ōsutin).
Ministry of Environment Colonial Government Military
Heinrich (ハインリヒ)
A man who came to Tokio's world in Episode 5; he holds the rank of captain in the Ministry of the Environment's Environmental Conservation Special Task Force. He pilots the "GAT-X105 Strike Gundam", then the "MA-08 Big Zam", and finally the "JDG-00X Devil Gundam", the most dangerous machine, which is ultimately swallowed by the gate.
Grand (グランド)
A man who came to Tokio's world in Episode 5; like Rosa, he is a member of the special forces. He pilots the "GAT-X102 Duel Gundam", then the "GAT-X131 Calamity Gundam", and later the "MRX-010 Psycho Gundam Mk-II".
Rosa (ローザ)
A woman who came to Tokio's world in Episode 5, one of the special forces. She pilots the "GAT-X103 Buster Gundam", then the "GAT-X370 Raider Gundam", and later the "MRX-009 Psycho Gundam".
Joras (ジョラス)
A man who came to Tokio's world in Episode 5, one of the special forces; he is notably sadistic and brutal. He pilots the GAT-X207 Blitz Gundam and later the GAT-X252 Forbidden Gundam.
Young man from the special forces
His real name is unknown. He came to Tokio's world in Episode 5 and is one of the special forces. He is the pilot of the "GAT-X303 Aegis Gundam".
Douglas C. Elbrus (ダグラス・C・エルブラス Dagurasu C eruburasu)
The main antagonist and true villain of the story, he is the director of the colony Ministry of the Environment. His mobile suit is the "ZGMF-X13A Providence Gundam"; in the final battle he pilots the "JDG-00X Devil Gundam" in its Devil Colony form.
Other characters
Tokio's sister (時夫の姉) (Tokio no ane)
Her name has not been mentioned. After their parents died, she and her brother Tokio lived together. She took Aiko in to live with them and treats her like a member of the family.
Domyouji family head (道明寺家当主) (Doumyōji-ka tōshu)
His name has not been mentioned. He is Retsu Domyouji's father and the current head of the Domyouji family, the Keykeeper that has protected the Gundam for many years.
Yamanouchi (山之内) (Yamanouchi)
The butler of the Nishimikado family, who supports Sakuya in various ways.
Morris Aluna Pruna (モーリス・エルナ・プルナ) (Mōrisu Eruna puruna)
Aiko's brother, a member of the Time Sphere Liberation Army and the former pilot of the MSZ-006 Zeta Gundam. He went missing after Aiko and the Gundam were attacked by the GX-9900 Gundam X during transport; in fact he fell through a time slip and ended up in Japan 500 years in the past, where he founded the Nishimikado family and remained in the area. His MSZ-006 Zeta Gundam was passed down to later generations of the Nishimikado family.
Mobile Suits and Mobile Armor
Many units from previous Gundam series appear in this series. The following is a list of the main units.
Gundam (Gundam-type)
Twenty original Gundam mobile suits from previous Gundam series appear in the manga's ALIVE timeline as counterparts of their original incarnations.
RX-78-2 Gundam (ガンダム)
A close-quarters-combat Gundam-type mobile suit belonging to the Earth Liberation Army, piloted first by Aiko and now by Kurono. In episode 6 Aiko speaks of it as a "special machine". Episode 11 reveals that this Gundam is equipped with some sort of Psycommu system.
RX-78-2 Gundam Real Type (ガンダム リアルタイプカラー)
A variant machine piloted by Tokio in simulation only. This Gundam's coloring is based on the movie poster by Kunio Okawara and the plastic model released by Bandai afterwards.
RX-93 ν Gundam (Nu Gundam) (νガンダム)
A Newtype-use general-purpose Gundam-type mobile suit equipped with all-range attack-and-defense weapons called "fin funnels", piloted by Kurono after he returns to the 24th century, where he battles the special forces and the gigantic Gundam-type Devil Gundam.
FA-93HWS ν Gundam Heavy Weapons System Type (νガンダム ヘビー・ウエポン・システム装備型)
A variant of the ν Gundam fitted with heavy armor, used in battle against three Gundam-type mobile suits and the Ministry's forces.
System-∀99 ∀ Gundam (Turn A Gundam) (∀ガンダム)
An interstellar-warfare Gundam-type mobile suit with special equipment called the "Moonlight Butterfly" (月光蝶 Gekkouchou), piloted by Kurono in the final battle against the Devil Gundam in its Devil Colony form.
MSZ-006 Zeta Gundam (Ζガンダム)
A transformable attack-use Gundam-type mobile suit, first piloted by Morris and now by Sakuya. Since its fall to Earth, the Zeta Gundam has been passed down through the Nishimikado family for 500 years. It is now clad in musha armour, with a katana as its main armament, although in episode 0 Sakuya is seen piloting it without the armour.
GF13-017NJ Shining Gundam (シャイニングガンダム)
A close-quarters-combat Gundam-type mobile fighter with no ranged weapons, fighting only in a martial-arts style. Disguised as a Great Buddha, it has existed since the Sengoku era and fought against the Nishimikado Zeta Gundam during those 500 years. It is piloted by Retsu Domyouji.
RGZ-91 Re-GZ (リ・ガズィ)
A transformable attack-use pseudo-Gundam-type mobile suit that appears only in episode 17, in the final battle against the Devil Gundam and the Gundam Head.
GX-9900 Gundam X (ガンダムX)
A satellite-system-equipped Gundam-type mobile suit of the Ministry of the Environment military, armed with the "Satellite Cannon" and piloted by K2, who used it to attack Aiko and Morris. It appears once in Tokio's world, dueling Kurono's Gundam and taking heavy damage; it is later upgraded into a new form called the Gundam X Divider.
GX-9900-DV Gundam X Divider (ガンダムXディバイダー)
An upgraded version of the original Gundam X, refitted with the Divider equipment after being heavily damaged by Kurono's Gundam in their first battle, and used by K2 in his rematch against Kurono.
GX-9901-DX Gundam Double X (ガンダムダブルエックス)
A satellite-system-equipped Gundam-type mobile suit, the successor to the Gundam X, armed with the "Twin Satellite Cannon" and piloted by K2 after he defects to the Time Sphere Liberation Army.
GAT-X105 Strike Gundam (ストライクガンダム)
A multi-mode Gundam-type mobile suit piloted by Heinrich, which emerges from the dimensional gate with four other Gundam-type mobile suits in episode 7.
GAT-X105+AQM/E-X01 Aile Strike Gundam (アレイストライクガンダム)
A variant of the Strike Gundam equipped with the Aile Striker pack for high-mobility aerial combat.
GAT-X105+AQM/E-X02 Sword Strike Gundam (ソードストライクガンダム)
A variant of the Strike Gundam equipped with the Sword Striker pack for close-quarters combat.
GAT-X102 Duel Gundam (デュエルガンダム)
A general-purpose Gundam-type mobile suit wearing Assault Shroud armor, piloted by Grand in the first battle against Kurono's Gundam and Retsu's Shining Gundam.
GAT-X103 Buster Gundam (バスターガンダム)
An artillery-use Gundam-type mobile suit piloted by Rosa, who first attacks Kurono's Gundam and Retsu's Shining Gundam.
GAT-X207 Blitz Gundam (ブリッツガンダム)
A stealth-use Gundam-type mobile suit piloted by Joras, which first battles Kurono's Gundam but is held off by Retsu's Shining Gundam, which comes to Kurono's aid.
GAT-X303 Aegis Gundam (イージスガンダム)
A transformable assault Gundam-type mobile suit piloted by the unnamed Aegis pilot in the first battle against Kurono's Gundam, Retsu's Shining Gundam, and Sakuya's Zeta Gundam.
GAT-X131 Calamity Gundam (カラミティガンダム)
An artillery Gundam-type mobile suit piloted by the special-forces member Grand, appearing in episodes 15 and 16 along with two other new Gundams.
GAT-X252 Forbidden Gundam (フォビドゥンガンダム)
A transformable high-speed assault Gundam-type mobile suit piloted by Joras.
See also
Fusion Clashes: Gundam Battle-Rave - A second multiverse
Mobile War History Gundam Burai
Memory Despair Gundam SEQUEL
Mobile Suit Gundam Eight
References
External links
Yuka Minakawa homepage
review of Gundam ALIVE
Kodansha manga
Children's manga
Alive
Real robot anime and manga
Multiverse | Mobile Suit Gundam Alive | [
"Astronomy"
] | 2,917 | [
"Astronomical hypotheses",
"Multiverse"
] |
8,848,462 | https://en.wikipedia.org/wiki/Larry%20Sandler%20Memorial%20Award | The Larry Sandler Memorial Award is a prestigious international award given for research in the Drosophila community. The award is given for the best dissertation of the preceding year, and is given at the annual Drosophila Research Conference. Awardees may be nominated only by their graduate advisors.
The awardees give the Larry Sandler Memorial Lecture at the annual Drosophila Research Conference. The award honors Dr. Larry Sandler.
Award recipients
1988 Bruce Edgar
1989 Kate Harding
1990 Michael Dickinson
1991 Maurice Kernan
1992 Doug Kellogg
1993 David Schneider
1994 Kendal Broadie
1995 David Begun
1996 Chaoyong Ma
1997 Abby Dernburg
1998 Nir Hacohen
1999 Terence Murphy
2000 Bin Chen
2001 James Wilhelm
2002 Matthew C. Gibson
2003 Sinisa Urban
2004 Sean McGuire
2005 Elissa Hallem
2006 Daniel Ortiz-Barrientos
2007 Yu-Chiun Wang
2008 Adam A. L. Friedman
2009 Timothy T. Weil
2010 Leonardo B. Koerich
2011 Daniel Babcock
2012 Stephanie Turner Chen
2013 Weizhe Hong
2014 Ruei-Jiun Hung
2015 Zhao Zhang
2016 Alejandra Figueroa-Clarevega
2017 Danny E. Miller
2018 Lucy Liu
2019 Laura Seeholzer
2020 Balint Kacsoh
2021 Ching-Ho Chang
2022 Lianna Wat
2023 James O'Connor
2024 Sherzod A. Tokamov
Former chairs of the Award
1988 Chair: Barry Ganetzky
1989 Chair: Barry Ganetzky
1990 Chair: Barry Ganetzky
1991 Chair:
1992 Chair:
1993 Chair:
1994 Chair:
1995 Chair:
1996 Chair: Margaret Fuller ("Minx" Fuller)
1997 Chair: Larry Goldstein
1998 Chair: R. Scott Hawley
1999 Chair: Bill Sullivan
2000 Chair: Bill Saxton
2001 Chair: Lynn Cooley
2002 Chair: Steve DiNardo
2003 Chair: Amanda Simcox ("Mandy Simcox")
2004 Chair: Ross Cagan
2005 Chair: Gerold Schübiger
2006 Chair: R. Scott Hawley
2007 Chair: Helen Salz
2008 Chair: Mariana Wolfner
2009 Chair: John Carlson
2010 Chair: Robin Wharton
2011 Chair: Claude Desplan
2012 Chair: Richard Mann
2013 Chair: Kenneth Irvine
2014 Chair: Marc Freeman
2015 Chair: Erika Bach
2016 Chair: Daniela Drummond-Barbosa
2017 Chair: Bob Duronio
2018 Chair: Kim McCall
2019 Chair: Daniel Barbash
2020 Chair: Barbara Mellone
2021 Chair: Guy Tanentzapf
2022 Chair: Alissa Armstrong
2023 Chair: Tim Mosca
2024 Chair: Elizabeth Rideout
See also
List of biology awards
References
Biology awards
Awards established in 1988
Early career awards
Awards for scholarly publications | Larry Sandler Memorial Award | [
"Technology"
] | 530 | [
"Science and technology awards",
"Science award stubs",
"Biology awards"
] |
8,848,840 | https://en.wikipedia.org/wiki/Thomas%20Hunt%20Morgan%20Medal | The Thomas Hunt Morgan Medal is awarded by the Genetics Society of America (GSA) for lifetime contributions to the field of genetics.
The medal is named after Thomas Hunt Morgan, the 1933 Nobel Prize winner, who received this award for his work with Drosophila and his "discoveries concerning the role played by the chromosome in heredity." Morgan recognized that Drosophila, which could be bred quickly and inexpensively, had large quantities of offspring and a short life cycle, would make an excellent organism for genetic studies. His studies of the white-eye mutation and discovery of sex-linked inheritance provided the first experimental evidence that chromosomes are the carriers of genetic information. Subsequent studies in his laboratory led to the discovery of recombination and the first genetic maps.
In 1981 the GSA established the Thomas Hunt Morgan Medal for lifetime achievement to honor this classical geneticist who was among those who laid the foundation for modern genetics.
Laureates
Source: Genetics Society of America
See also
List of genetics awards
References
Genetics awards
Awards established in 1981 | Thomas Hunt Morgan Medal | [
"Technology"
] | 208 | [
"Science and technology awards",
"Science award stubs"
] |
8,849,222 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Quantum%20Chemistry | The International Journal of Quantum Chemistry is a peer-reviewed scientific journal publishing original, primary research and review articles on all aspects of quantum chemistry, including an expanded scope focusing on aspects of materials science, biochemistry, biophysics, quantum physics, quantum information theory, etc.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.444.
It was established in 1967 by Per-Olov Löwdin. In 2011, the journal moved to an in-house editorial office model, in which a permanent team of full-time, professional editors is responsible for article scrutiny and editorial content.
References
External links
Chemistry journals
Academic journals established in 1967
Hybrid open access journals
Wiley (publisher) academic journals
English-language journals
Computational chemistry | International Journal of Quantum Chemistry | [
"Chemistry"
] | 150 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Theoretical chemistry",
"Computational chemistry stubs",
"Computational chemistry",
"Physical chemistry stubs"
] |
8,849,356 | https://en.wikipedia.org/wiki/Bertrand%27s%20box%20paradox | Bertrand's box paradox is a veridical paradox in elementary probability theory. It was first posed by Joseph Bertrand in his 1889 work Calcul des Probabilités.
There are three boxes:
a box containing two gold coins,
a box containing two silver coins,
a box containing one gold coin and one silver coin.
A coin withdrawn at random from one of the three boxes happens to be a gold coin. What is the probability that the other coin from the same box is also a gold coin?
A veridical paradox is a paradox whose correct solution seems counterintuitive. It may seem intuitive that the probability that the remaining coin is gold should be 1/2, but the probability is actually 2/3. Bertrand showed that if 1/2 were correct, it would result in a contradiction, so 1/2 cannot be correct.
This simple but counterintuitive puzzle is used as a standard example in teaching probability theory. The solution illustrates some basic principles, including the Kolmogorov axioms.
Solution
The problem can be reframed by describing the boxes as each having one drawer on each of two sides. Each drawer contains a coin. One box has a gold coin on each side (GG), one a silver coin on each side (SS), and the other a gold coin on one side and a silver coin on the other (GS). A box is chosen at random, a random drawer is opened, and a gold coin is found inside it. What is the chance of the coin on the other side being gold?
The following reasoning appears to give a probability of 1/2:
Originally, all three boxes were equally likely to be chosen.
The chosen box cannot be box SS.
So it must be box GG or GS.
The two remaining possibilities are equally likely. So the probability that the box is GG, and the other coin is also gold, is 1/2.
The reasoning for the 2/3 is as follows:
Originally, all six coins were equally likely to be chosen.
The chosen coin cannot be from drawer S of box GS, or from either drawer of box SS.
So it must come from the G drawer of box GS, or either drawer of box GG.
The three remaining possibilities are equally likely, so the probability that the drawer is from box GG is 2/3.
Bertrand's purpose for constructing this example was to show that merely counting cases is not always proper. Instead, one should sum the probabilities that the cases would produce the observed result.
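Bertrand's prescription — sum probabilities over cases rather than merely counting them — can be checked by exhaustively enumerating the six equally likely (box, drawer) outcomes; a minimal sketch in Python:

```python
from fractions import Fraction

# The three boxes, each with two drawers holding a coin (G = gold, S = silver).
boxes = {"GG": ["G", "G"], "SS": ["S", "S"], "GS": ["G", "S"]}

# All six equally likely (box, drawer) outcomes.
outcomes = [(name, d) for name, coins in boxes.items() for d in range(2)]

# Condition on the observed event: the opened drawer shows a gold coin.
gold_draws = [(name, d) for name, d in outcomes if boxes[name][d] == "G"]

# Among those, count the outcomes where the other drawer also holds gold.
both_gold = [(name, d) for name, d in gold_draws if boxes[name][1 - d] == "G"]

p = Fraction(len(both_gold), len(gold_draws))
print(p)  # 2/3
```

Counting boxes instead of drawers would give the erroneous 1/2; the enumeration makes clear that box GG contributes two of the three gold-showing outcomes.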
Experimental data
A survey of Psychology freshmen taking an introductory probability course was conducted to assess their solutions to the similar three-card problem. In the three-card problem, three cards are placed into a hat. One card is red on both sides, one is white on both sides, and one is white on one side and red on the other. If a card pulled from the hat is red on one side, the probability of the other side also being red is 2/3.
53 students participated and were asked for the probability of the other side being red. 35 incorrectly responded with 1/2; only 3 students correctly responded with 2/3.
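The three-card answer of 2/3 can also be checked by simulation; a quick Monte Carlo sketch (the card labels and trial count are arbitrary):

```python
import random

rng = random.Random(42)
# Cards: red/red, white/white, and red/white.
cards = [("R", "R"), ("W", "W"), ("R", "W")]

shown_red = other_red = 0
for _ in range(100_000):
    card = rng.choice(cards)          # pick a card at random
    side = rng.randrange(2)           # pick the face shown at random
    if card[side] == "R":
        shown_red += 1
        other_red += card[1 - side] == "R"

print(other_red / shown_red)          # ≈ 0.667, converging to 2/3
```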
Related problems
Other veridical paradoxes of probability include:
Boy or Girl paradox
Monty Hall problem
Three Prisoners problem
Two envelopes problem
Sleeping Beauty problem
The Monty Hall and Three Prisoners problems are identical mathematically to Bertrand's Box paradox. The construction of the Boy or Girl paradox is similar, essentially adding a fourth box with a gold coin and a silver coin. Its answer is controversial, based on how one assumes the "drawer" was chosen.
References
Nickerson, Raymond (2004). Cognition and Chance: The psychology of probabilistic reasoning, Lawrence Erlbaum. Ch. 5, "Some instructive problems: Three cards", pp. 157–160.
Michael Clark, Paradoxes from A to Z, p. 16;
Howard Margolis, Wason, Monty Hall, and Adverse Defaults.
External links
Estimating the Probability with Random Boxes and Names, a simulation
Probability theory paradoxes
Probability problems | Bertrand's box paradox | [
"Mathematics"
] | 817 | [
"Probability problems",
"Mathematical problems",
"Probability theory paradoxes",
"Mathematical paradoxes"
] |
8,849,394 | https://en.wikipedia.org/wiki/Launch%20commit%20criteria | Launch commit criteria are the criteria which must be met in order for the countdown and launch of a Space Shuttle or other launch vehicle to continue. These criteria relate to safety issues and the general success of the launch, as opposed to supplemental data.
Atlas V
Launch commit criteria for Atlas V launches are similar to those used for the Atlas V launch of the Mars Science Laboratory:
wind at the launch pad exceeds
ceiling less than or visibility less than
upper-level conditions containing wind shear that could lead to control problems for the launch vehicle.
cloud layer greater than thick that extends into freezing temperatures
cumulus clouds with tops that extend into freezing temperatures within
of the edge of a thunderstorm that is producing lightning for 30 minutes after the last lightning is observed.
field mill instrument readings within of the launch pad or the flight path exceed +/- 1,500 volts per meter for 15 minutes after they occur
thunderstorm anvil is within of the flight path
thunderstorm debris cloud is within or fly through a debris cloud for three hours
Do not launch through disturbed weather that has clouds that extend into freezing temperatures and contain moderate or greater precipitation, or launch within of disturbed weather adjacent to the flight path
Do not launch through cumulus clouds formed as the result of or directly attached to a smoke plume
Falcon 9
NASA has identified that the Falcon 9 vehicle cannot be launched under the following conditions:
sustained wind at the level of the launch pad in excess of ,
upper-level conditions containing wind shear that could lead to control problems for the launch vehicle,
launch through a cloud layer greater than thick that extends into freezing temperatures,
launch within of cumulus clouds with tops that extend into freezing temperatures,
within of the edge of a thunderstorm that is producing lightning within 30 minutes after the last lightning is observed,
within of an attached thunderstorm anvil cloud,
within of disturbed weather clouds that extend into freezing temperatures and contain moderate or greater precipitation,
within of a thunderstorm debris cloud,
through cumulus clouds formed as the result of or directly attached to a smoke plume.
The following should delay launch:
delay launch for 15 minutes if field mill instrument readings within of the launch pad exceed +/- 1,500 volts per meter, or +/- 1,000 volts per meter,
delay launch for 30 minutes after lightning is observed within of the launch pad or the flight path.
Unique for Crew Dragon launches of the Falcon 9:
downrange weather has a high chance of violating, or is violating, splashdown limits (wind, wave, lightning, and precipitation) in the event of a launch escape
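Operationally, criteria like these amount to a conjunction of go/no-go rules evaluated against observed weather. A toy evaluator is sketched below; the rule names follow the lists above, but the numeric thresholds are illustrative placeholders rather than actual flight rules, except the field-mill magnitude, which the criteria above give as 1,500 volts per meter:

```python
from dataclasses import dataclass

@dataclass
class Weather:
    pad_wind_knots: float
    minutes_since_lightning: float
    nmi_to_anvil: float
    field_mill_v_per_m: float

# Illustrative placeholder limits (not actual flight rules), except the
# field-mill magnitude, which the criteria above state as 1,500 V/m.
MAX_PAD_WIND = 30.0
LIGHTNING_HOLD_MIN = 30.0
MIN_ANVIL_STANDOFF_NMI = 10.0
MAX_FIELD_MILL_V_PER_M = 1500.0

def violations(w: Weather) -> list[str]:
    """Return the list of violated launch commit criteria (empty = go)."""
    v = []
    if w.pad_wind_knots > MAX_PAD_WIND:
        v.append("pad wind")
    if w.minutes_since_lightning < LIGHTNING_HOLD_MIN:
        v.append("lightning hold")
    if w.nmi_to_anvil < MIN_ANVIL_STANDOFF_NMI:
        v.append("anvil standoff")
    if abs(w.field_mill_v_per_m) > MAX_FIELD_MILL_V_PER_M:
        v.append("field mill")
    return v

print(violations(Weather(22.0, 45.0, 8.0, 900.0)))  # ['anvil standoff']
```

Every rule must pass for the countdown to proceed; a single violation is sufficient to hold the launch.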
Space Shuttle
Weather
The weather conditions NASA required during countdown and launch were specified for "prior to loading external tank propellant" and "after loading propellant has begun". Weather forecasts were provided by the 45th Weather Squadron at nearby Patrick Air Force Base with concerns such as thunderstorms, winds, low cloud ceilings, or anvil clouds noted in the report.
Prior to loading propellant
Tanking was not to begin if the 24-hour average temperature had been below , the wind was observed or forecast to exceed for the next three-hour period, or the forecast gave greater than a 20% chance of lightning within five nautical miles of the launch pad during the first hour of tanking.
After propellant loading was underway
After tanking began, the countdown was not to be continued, nor the Shuttle launched, if any of the following weather criteria were exceeded:
Temperature
Once propellant loading had begun, the countdown was to be stopped if the temperature remained above for more than 30 consecutive minutes. The minimum temperature at which the countdown could proceed was determined by a table of temperatures based on wind speed and relative humidity, ranging from (high humidity, high winds) to (low humidity, low winds). In no case was the Space Shuttle to be launched if the temperature was degrees or colder.
Wind
For launch, the wind constraints at the launch pad varied slightly for each mission. The peak wind speed allowable was . However, when the wind direction was between 100 degrees and 260 degrees, the peak speed varied and could be as low as .
Precipitation
None was allowed to exist at the launch pad or within the flight path.
References
External links
Launch Weather Forecast for Cape Canaveral from Patrick Space Force Base
Example pre-launch weather report from MAVEN pre-launch press conference
Spaceflight | Launch commit criteria | [
"Astronomy"
] | 864 | [
"Spaceflight",
"Outer space"
] |
8,849,460 | https://en.wikipedia.org/wiki/Split-ring%20resonator | A split-ring resonator (SRR) is an artificially produced structure common to metamaterials. Its purpose is to produce the desired magnetic susceptibility (magnetic response) in various types of metamaterials up to 200 terahertz.
Background
Split ring resonators (SRRs) consist of a pair of concentric metallic rings, etched on a dielectric substrate, with slits etched on opposite sides. SRRs can produce the effect of being electrically smaller when responding to an oscillating electromagnetic field. These resonators have been used for the synthesis of left-handed and negative refractive index media, where the necessary value of the negative effective permeability is due to the presence of the SRRs. When an array of electrically small SRRs is excited by means of a time-varying magnetic field, the structure behaves as an effective medium with negative effective permeability in a narrow band above SRR resonance. SRRs have also been coupled to planar transmission lines for the synthesis of metamaterial transmission lines.
These media create the necessary strong magnetic coupling to an applied electromagnetic field not otherwise available in conventional materials. For example, an effect such as negative permeability is produced with a periodic array of split ring resonators.
A single-cell SRR has a pair of enclosed loops with splits in them at opposite ends. The loops are made of nonmagnetic metal like copper and have a small gap between them. The loops can be concentric or square, and gapped as needed. A magnetic flux penetrating the metal rings will induce rotating currents in the rings, which produce their own flux to enhance or oppose the incident field (depending on the SRR resonant properties). This field pattern is dipolar. The small gaps between the rings produces large capacitance values, which lowers the resonating frequency. Hence the dimensions of the structure are small compared to the resonant wavelength. This results in low radiative losses and very high quality factors.
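The claim that the large gap capacitance lowers the resonant frequency follows from the standard LC-resonance relation f₀ = 1/(2π√(LC)); a small numeric illustration (the inductance and capacitance values are arbitrary, chosen only to show the scaling):

```python
import math

def resonant_frequency_hz(L: float, C: float) -> float:
    """f0 = 1 / (2*pi*sqrt(L*C)) for an LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 1e-9                              # 1 nH ring inductance (illustrative)
for C in (1e-15, 4e-15, 16e-15):      # quadrupling C halves f0
    f0 = resonant_frequency_hz(L, C)
    print(f"C = {C:.0e} F -> f0 = {f0 / 1e9:6.1f} GHz")
```

Because f₀ scales as 1/√C, the large capacitance across the narrow inter-ring gap pushes the resonant wavelength far above the ring's own size, as described above.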
The split ring resonator was a microstructure design featured in the 1999 paper by Pendry et al., "Magnetism from Conductors and Enhanced Nonlinear Phenomena". It proposed that the split ring resonator design, built out of nonmagnetic material, could produce magnetic activity unseen in natural materials. For the simple microstructure design of an array of conducting cylinders with an applied external field parallel to the cylinders, it is shown that the effective permeability can be written as follows (this model is very limited, and the effective permeability cannot be less than zero or greater than one):

\mu_{\mathrm{eff}} = 1 - \frac{\pi r^2/a^2}{1 + 2\sigma i/(\omega r \mu_0)}
where σ is the resistance of the cylinder surface per unit area, a is the spacing of the cylinders, ω is the angular frequency, μ₀ is the permeability of free space, and r is the radius. Moreover, when gaps are introduced into a double-cylinder design similar to the image above, the gaps produce a capacitance. This capacitor-and-inductor microstructure introduces a resonance that amplifies the magnetic effect, and the new form of the effective permeability resembles the familiar resonant response known from plasmonic materials:

\mu_{\mathrm{eff}} = 1 - \frac{\pi r^2/a^2}{1 + 2\sigma i/(\omega r \mu_0) - 3dc_0^2/(\pi^2 \omega^2 r^3)}

where d is the spacing of the concentric conducting sheets and c₀ is the speed of light in free space. The final design replaces the double concentric cylinders with a pair of flat concentric c-shaped sheets, placed on each side of a unit cell, with the unit cells stacked on top of each other at a spacing l. The final result for the effective permeability is

\mu_{\mathrm{eff}} = 1 - \frac{\pi r^2/a^2}{1 + 2l\sigma_1 i/(\omega r \mu_0) - 3lc_0^2/(\pi \omega^2 \ln(2c/d)\, r^3)}

where c is the thickness of the c-shaped sheet and σ₁ is the resistance per unit length of the sheets measured around the circumference.
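The resonant magnetic response described above is often summarized by a generic Lorentzian model, μ_eff(ω) = 1 − Fω²/(ω² − ω₀² + iΓω), commonly used for SRR media; a sketch with illustrative parameters (F, ω₀, and Γ are arbitrary here) shows μ_eff enhanced below resonance and negative in a band just above it:

```python
import math

def mu_eff(omega: float, F: float = 0.4,
           omega0: float = 2 * math.pi * 5e9,
           gamma: float = 2 * math.pi * 5e7) -> complex:
    """Lorentzian SRR permeability: 1 - F*w^2 / (w^2 - w0^2 + i*gamma*w)."""
    return 1.0 - F * omega**2 / (omega**2 - omega0**2 + 1j * gamma * omega)

# Below resonance mu_eff is enhanced; just above resonance Re(mu_eff) is
# negative, up to the "magnetic plasma" frequency omega0 / sqrt(1 - F).
for f in (4.0e9, 5.05e9, 6.0e9):
    m = mu_eff(2 * math.pi * f)
    print(f"{f / 1e9:.2f} GHz: Re(mu_eff) = {m.real:+.2f}")
```

The narrow band of negative Re(μ_eff) above resonance is what the wire-plus-SRR composites described below exploit to obtain a negative refractive index.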
Characteristics
The split ring resonator and the metamaterial itself are composite materials. Each SRR has an individually tailored response to the electromagnetic field. However, the periodic construction of many SRR cells is such that the electromagnetic wave interacts as if these were homogeneous materials. This is similar to how light interacts with everyday materials: although materials such as glass or lenses are made of atoms, an averaging, macroscopic effect is produced.
The SRR is designed to mimic the magnetic response of atoms, only on a much larger scale. Also, as part of periodic composite structure, the SRR is designed to have a stronger magnetic coupling than is found in nature. The larger scale allows for more control over the magnetic response, while each unit is smaller than the radiated electromagnetic wave.
SRRs are much more active than ferromagnetic materials found in nature. The pronounced magnetic response in such lightweight materials demonstrates an advantage over heavier, naturally occurring materials. Each unit can be designed to have its own magnetic response. The response can be enhanced or lessened as desired. In addition, the overall effect reduces power requirements.
SRR configuration
There are a variety of split-ring resonators and periodic structures: rod-split-rings, nested split-rings, single split rings, deformed split-rings, spiral split-rings, and extended S-structures. These variations have achieved different results, including smaller and higher-frequency structures; research involving some of these types is discussed throughout the article.
To date (December 2009) the capability for desired results in the visible spectrum has not been achieved. However, in 2005 it was noted that, physically, a nested circular split-ring resonator must have an inner radius of 30 to 40 nanometers for success in the mid-range of the visible spectrum. Microfabrication and nanofabrication techniques may utilize direct laser beam writing or electron beam lithography depending on the desired resolution.
Various configurations
Split-ring resonators (SRR) are one of the most common elements used to fabricate metamaterials. Split-ring resonators are made of non-magnetic materials and were initially fabricated from circuit board material to create metamaterials.
Looking at the image directly to the right, it can be seen that at first a single SRR looks like an object with two square perimeters, with each perimeter having a small section removed. This results in square "C" shapes on fiberglass printed circuit board material. In this type of configuration it is actually two concentric bands of non-magnetic conductor material. There is one gap in each band placed 180° relative to each other. The gap in each band gives it the distinctive "C" shape, rather than a totally circular or square shape. Then multiple cells of this double band configuration are fabricated onto circuit board material by an etching technique and lined with copper wire strip arrays. After processing, the boards are cut and assembled into an interlocking unit. It is constructed into a periodic array with a large number of SRRs.
There are now a number of different configurations that use the SRR nomenclature.
Demonstrations
A periodic array of SRRs was used for the first demonstration of a negative index of refraction. For this demonstration, square shaped SRRs, with the lined wire configurations, were fabricated into a periodic, arrayed, cell structure. This is the substance of the metamaterial. Then a metamaterial prism was cut from this material. The prism experiment demonstrated a negative index of refraction for the first time in the year 2000; the paper about the demonstration was submitted to the journal Science on January 8, 2001, accepted on February 22, 2001 and published on April 6, 2001.
Just before this prism experiment, Pendry et al. were able to demonstrate that a three-dimensional array of intersecting thin wires could be used to create negative values of ε. In a later demonstration, a periodic array of copper split-ring resonators produced an effective negative μ. In 2000 Smith et al. were the first to successfully combine the two arrays and produce a so-called left-handed material, which has negative values of ε and μ for a band of frequencies in the GHz range.
SRRs were first used to fabricate left-handed metamaterials for the microwave range, and several years later for the terahertz range. By 2007, experimental demonstration of this structure at microwave frequencies had been achieved by many groups. In addition, SRRs have been used for research in acoustic metamaterials. The arrayed SRRs and wires of the first left-handed metamaterial were melded into alternating layers. This concept and methodology were then applied to (dielectric) materials with optical resonances producing negative effective permittivity for certain frequency intervals, resulting in "photonic bandgap frequencies". Another analysis showed left-handed materials to be fabricated from inhomogeneous constituents, which nevertheless result in a macroscopically homogeneous material. SRRs have been used to focus a signal from a point source, increasing the transmission distance for near-field waves. Furthermore, another analysis showed that SRRs with a negative index of refraction are capable of high-frequency magnetic response, which created an artificial magnetic device composed of non-magnetic materials (dielectric circuit board).
The resonance phenomena that occurs in this system is essential to achieving the desired effects.
SRRs also exhibit a resonant electric response in addition to their resonant magnetic response. The response, when combined with an array of identical wires, is averaged over the whole composite structure, which results in effective values, including the refractive index. The original logic behind SRRs specifically, and metamaterials generally, was to create a structure imitating an arrayed atomic structure, only on a much larger scale.
Several types of SRR
In research on metamaterials, and specifically on negative refractive index, there are different types of split-ring resonators. Of the examples mentioned below, most have a gap in each ring; in other words, in a double-ring structure, each ring has a gap.
There is the 1-D Split-Ring Structure with two square rings, one inside the other. One set of cited "unit cell" dimensions would be an outer square of 2.62 mm and an inner square of 0.25 mm. 1-D structures such as this are easier to fabricate compared with constructing a rigid 2-D structure.
The Symmetrical-Ring Structure is another classic example. As the nomenclature describes, it consists of two rectangular D-type configurations of exactly the same size, lying flat, side by side, in the unit cell; these are not concentric. One set of cited dimensions is 2 mm on the shorter side and 3.12 mm on the longer side. The gaps in each ring face each other in the unit cell.
The Omega Structure, as the nomenclature describes, has an Ω-shaped ring structure. There are two of these, standing vertical, side by side, instead of lying flat, in the unit cell. In 2005 these were considered to be a new type of metamaterial. One set of cited dimensions is annular parameters of R = 1.4 mm and r = 1 mm, with a straight edge of 3.33 mm.
Another new metamaterial in 2005 was a coupled S-shaped structure. There are two vertical S-shaped structures, side by side, in a unit cell. There is no gap as in the ring structure; however, there is a space between the top and middle parts of the S and space between the middle part and bottom part of the S. Furthermore, it still has the properties of having an electric plasma frequency and a magnetic resonant frequency.
Research
On May 1, 2000, research was published about an experiment which involved conducting wires placed symmetrically within each cell of a periodic split-ring resonator array. This effectively achieved negative permeability and permittivity for electromagnetic waves in the microwave regime. The concept was and still is used to build interacting elements smaller than the applied electromagnetic radiation. In addition, the spacing between the resonators is much smaller than the wavelength of the applied radiation.
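A common way to model this collective magnetic response is a Lorentz-type effective permeability, in the spirit of Pendry's SRR analysis. The sketch below is illustrative only: the fill factor `F`, resonance frequency `f0`, and damping `gamma` are assumed values, not parameters from the cited experiment.

```python
def mu_eff(f, F=0.35, f0=10e9, gamma=0.2e9):
    """Lorentz-type effective relative permeability of an SRR array.

    f is the frequency in Hz. F (fill factor), f0 (magnetic resonance
    frequency), and gamma (damping rate) are illustrative assumptions.
    """
    return 1.0 - F * f**2 / (f**2 - f0**2 + 1j * gamma * f)

# Below the resonance Re(mu) is enhanced; just above it, Re(mu) swings
# negative -- combined with wires providing negative permittivity, this
# opens a negative-refractive-index band.
for f in (9.0e9, 10.3e9, 11.0e9):
    print(f"{f / 1e9:5.1f} GHz  Re(mu_eff) = {mu_eff(f).real:+.2f}")
```

With these assumed parameters, the real part of the permeability is positive and enhanced below 10 GHz and negative in a narrow band just above it.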
Additionally, the splits in the ring allow the SRR unit to achieve resonance at wavelengths much larger than the diameter of the ring. The unit is designed to generate a large capacitance, lower the resonant frequency, and concentrate the electric field. Combined, the units form a periodic medium. Furthermore, the multiple-unit structure has strong magnetic coupling with low radiative losses.
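The capacitance/inductance picture above can be made concrete by treating the SRR as an LC circuit with resonant frequency f0 = 1/(2π√(LC)). The sketch below uses crude textbook approximations for the loop inductance and the inter-ring capacitance, and all dimensions are hypothetical, not taken from any cited design.

```python
import math

mu0 = 4e-7 * math.pi      # vacuum permeability (H/m)
eps0 = 8.854e-12          # vacuum permittivity (F/m)

# Hypothetical SRR dimensions (not from a cited design)
r = 1.5e-3                # mean ring radius (m)
w = 0.2e-3                # trace width (m)
t = 35e-6                 # trace thickness (m)
d = 0.1e-3                # spacing between inner and outer rings (m)

# Loop inductance, thin-wire approximation with an effective wire radius
a = (w + t) / 2
L = mu0 * r * (math.log(8 * r / a) - 2)

# Rough distributed capacitance between the two concentric rings
C = eps0 * math.pi * r * w / d

f0 = 1 / (2 * math.pi * math.sqrt(L * C))
wavelength = 3e8 / f0
print(f"f0 = {f0 / 1e9:.1f} GHz, free-space wavelength = {wavelength * 1e3:.0f} mm")
print(f"ring diameter = {2 * r * 1e3:.0f} mm (much smaller than the wavelength)")
```

With these assumed numbers the resonance lands near 8 GHz, where the free-space wavelength (roughly 38 mm) is more than ten times the 3 mm ring diameter, the sub-wavelength behavior described above.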
Research has also covered variations in magnetic resonances for different SRR configurations.
Research has continued into terahertz radiation with SRRs. Other related work has fashioned metamaterial configurations with fractals and non-SRR structures, which can be constructed from materials such as periodic metallic crosses or ever-widening concentric ring structures known as Swiss rolls. Permeability at only the red wavelength of 780 nm has also been analyzed, along with other related work.
See also
History of metamaterials
Superlens
Quantum metamaterials
Metamaterial cloaking
Photonic metamaterials
Metamaterial antennas
Nonlinear metamaterials
Photonic crystal
Seismic metamaterials
Acoustic metamaterials
Metamaterial absorber
Plasmonic metamaterials
Terahertz metamaterials
Tunable metamaterials
Transformation optics
Theories of cloaking
Academic journals
Metamaterials (journal)
Metamaterials books
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
References
Further reading
Shepard, K. W., et al. "Split-ring resonator for the Argonne Superconducting Heavy Ion Booster." IEEE Transactions on Nuclear Science, Vol. NS-24, No. 3, June 1977.
External links
Video: John Pendry lecture: The science of invisibility April 2009, SlowTV
Split Ring Resonator Calculator: Online tool to calculate the LC equivalent circuit and resonant frequency of SRR and CSRR topologies.
Resonators
Materials science
Electromagnetic radiation
Metamaterials
Scattering, absorption and radiative transfer (optics)
Optical materials | Split-ring resonator | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,858 | [
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Applied and interdisciplinary physics",
"Metamaterials",
"Electromagnetic radiation",
"Materials science",
"Materials",
"Radiation",
"Optical materials",
"Scattering",
"nan",
"Matter"
] |
8,849,555 | https://en.wikipedia.org/wiki/Steam%20drum | A steam drum is a standard feature of a water-tube boiler. It is a reservoir of water/steam at the top end of the water tubes. The drum stores the steam generated in the water tubes and acts as a phase-separator for the steam/water mixture. The difference in densities between hot and cold water helps in the accumulation of the "hotter"-water/and saturated-steam into the steam-drum.
History
Early boilers were designed with four or three drums, like the Stirling boiler: a single drum at the bottom and three drums on the top, connected through a network of tubes welded to the drums above and the single drum below. Rising demands on steam capacity, pressure and temperature led to bi-drum and single-drum boilers.
Working
The separated steam is drawn out from the top section of the drum and distributed for process use. Further heating of the saturated steam produces superheated steam, normally used to drive a steam turbine: saturated steam is drawn off the top of the drum and re-enters the furnace through a superheater. The steam and water mixture enters the steam drum through riser tubes; drum internals, consisting of a demister, separate the water droplets from the steam, producing dry steam. The saturated water at the bottom of the steam drum flows down through the downcomer pipe, normally unheated, to headers and the water drum. Its accessories include a safety valve, water-level indicator and level controller. Boiler feed-water is also fed to the steam drum through a feed pipe extending inside the drum, along the length of the steam drum.
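The need for the level controller to match feedwater flow to steam demand can be illustrated with a minimal lumped mass balance on the drum water inventory. This sketch ignores shrink/swell and pressure dynamics, and all flows and geometry are hypothetical.

```python
def simulate_drum_level(level0_m, feed_kg_s, steam_kg_s,
                        drum_area_m2=4.0, rho_water=740.0,
                        dt=1.0, steps=60):
    """Integrate d(level)/dt = (feed - steam) / (rho * area).

    A deliberately crude model: constant water density, a prismatic drum,
    and no shrink/swell effects. All default values are assumptions.
    """
    level = level0_m
    for _ in range(steps):
        level += (feed_kg_s - steam_kg_s) / (rho_water * drum_area_m2) * dt
    return level

# Balanced flows hold the level; a feedwater deficit drains the drum.
print(simulate_drum_level(0.50, feed_kg_s=25.0, steam_kg_s=25.0))
print(simulate_drum_level(0.50, feed_kg_s=20.0, steam_kg_s=25.0))
```

When feedwater matches the steam draw, the level holds; a sustained deficit steadily lowers it, which is why the level controller modulates feedwater flow.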
A steam drum may be used alone or together with a mud drum/feedwater drum located at a lower level. A boiler with both a steam drum and a mud/water drum is called a bi-drum boiler; a boiler with only a steam drum is called a mono-drum boiler. The bi-drum construction is normally intended for lower pressure ratings, while the mono-drum is mostly designed for higher pressure ratings.
On steam locomotives the steam drum is also called a steam dome.
Types of steam drums
Three-drum/four-drum boilers – the veterans of boiler design; although largely superseded, they are still used in some industries.
Bi-drum boilers – used for both power generation and process steam generation. For power generation they are now seldom used, having been replaced by single-drum boilers, because bi-drum boilers are non-reheat units; given the high heat rate of the plant, a single-drum or once-through boiler is more feasible. For process steam generation, bi-drum boilers remain common because they can adapt to high load fluctuations and respond well to load changes.
Single-drum boilers – used mainly in power plants for power generation. Their pressure limit is higher than that of bi-drum boilers because stress concentration is reduced to a greater extent. There is only one drum, to which the downcomers are welded. Single-drum boilers suit both reheat and non-reheat designs. They can be built as corner-tube boilers, where no frame is required because the downcomers themselves serve that purpose, or as top-supported boilers, where the whole boiler assembly needs an external frame and is supported by the top drum.
See also
Once-through steam generator, does not have a steam drum
References
Steam boiler components
Boilers
Steam generators | Steam drum | [
"Chemistry"
] | 705 | [
"Boilers",
"Pressure vessels"
] |