| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
67,415,595 | https://en.wikipedia.org/wiki/Circinella | Circinella is a genus of fungi belonging to the family Syncephalastraceae. It was first described by Philippe Édouard Léon Van Tieghem and George Le Monnier in 1873.
The genus has a cosmopolitan distribution.
Species:
Circinella angarensis
Circinella chinensis
Circinella glomerata
References
Fungi
Taxa described in 1873
Taxa named by Philippe Édouard Léon Van Tieghem
Fungus genera | Circinella | [
"Biology"
] | 92 | [
"Fungi"
] |
67,415,621 | https://en.wikipedia.org/wiki/Waterford%20Flight | The Waterford Flight is a set of locks on the Erie Canal in upstate New York. Erie Canal Locks E-2 through E-6 make up the combined flight at Waterford, which lifts vessels from the Hudson River to the Mohawk River, bypassing Cohoes Falls. Built in 1915, the Waterford Flight is still in use today as part of the New York State Canal System, which is open to public and commercial traffic. The Waterford Flight is the series of locks with the highest elevation gain relative to its length of any canal lock system in the United States.
Planning
The original route of the Erie Canal bypassed Cohoes Falls to the south through the city of Cohoes. At the turn of the 20th century, plans for an enlarged Erie Canal were being drawn up to accommodate more traffic and larger vessels. However, instead of constructing canals from scratch, as had been done previously, the plan proposed "canalizing" the local rivers. Near Cohoes, the plan involved routing a channel from the Mohawk River north of Cohoes Falls directly to the Hudson River at Waterford. Traffic would then flow directly from the Hudson to the Mohawk via the Waterford Flight and completely bypass the old canals from Albany to Cohoes. The Troy Federal Lock and Dam would serve as the unofficial start of the Erie Canal, with the first lock of the Waterford Flight being the official beginning, hence it being named E-2 to this day.
Construction
Construction of the Waterford Flight began in 1905 and took 10 years to complete. The five massive locks dwarfed the previous iterations and were built to mandated dimensions, which became the standard on the Barge Canal System and have been maintained to this day. In addition to the locks, there are two large guard gates at the northern end of the flight. These gates can be lowered to block additional flow of water from the Mohawk River through the locks to prevent damage during floods. The locks have undergone periods of major restoration in recent decades, including replacement of lock doors and resurfacing of the concrete within the locks themselves.
Present status
Today, the Waterford Flight remains in use and is managed by the New York State Canal Corporation. The site of the Waterford Flight is also home to Lock 6 State Canal Park, which follows the length of the canal between the Hudson and the Mohawk and allows public access to the locks and a boat ramp at the northern end. The set of locks was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2011.
References
Erie Canal
Historic Civil Engineering Landmarks
Locks of the United States
Transport infrastructure completed in 1915 | Waterford Flight | [
"Engineering"
] | 528 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
67,415,899 | https://en.wikipedia.org/wiki/Microbial%20pathogenesis | Microbial pathogenesis is a field of microbiology that started at least as early as 1988, with the identification of Falkow's three criteria, also known as the molecular Koch's postulates. In 1996, Fredricks and Relman proposed a seven-point list of "Molecular Guidelines for Establishing Microbial Disease Causation," because of "the discovery of nucleic acids" by Watson and Crick "as the source of genetic information and as the basis for precise characterization of an organism." The subsequent development of the "ability to detect and manipulate these nucleic acid molecules in microorganisms has created a powerful means for identifying previously unknown microbial pathogens and for studying the host-parasite relationship."
Postulates for the detection of microbial pathogens
In 1996, Fredricks and Relman suggested the following postulates for the novel field of microbial pathogenesis.
(i) A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased, and not in those organs that lack pathology.
(ii) Fewer, or no, copies of pathogen-associated nucleic acid sequences should occur in hosts or tissues without disease.
(iii) With resolution of disease, the copy number of pathogen-associated nucleic acid sequences should decrease or become undetectable. With clinical relapse, the opposite should occur.
(iv) When sequence detection predates disease, or sequence copy number correlates with severity of disease or pathology, the sequence-disease association is more likely to be a causal relationship.
(v) The nature of the microorganism inferred from the available sequence should be consistent with the known biological characteristics of that group of organisms.
(vi) Tissue-sequence correlates should be sought at the cellular level: efforts should be made to demonstrate specific in situ hybridization of microbial sequence to areas of tissue pathology and to visible microorganisms or to areas where microorganisms are presumed to be located.
(vii) These sequence-based forms of evidence for microbial causation should be reproducible.
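Postulate (iv) is assessed statistically in practice. As a hypothetical illustration (all values below are invented, and the choice of a Spearman rank correlation is ours, not Fredricks and Relman's), one might test whether sequence copy number rises monotonically with disease severity:

```python
import numpy as np
from scipy.stats import spearmanr

# Invented qPCR copy numbers (copies/mL) and clinical severity scores
# for ten hypothetical patients, for illustration only.
copy_number = np.array([1e2, 5e2, 1e3, 8e3, 2e4, 9e4, 3e5, 1e6, 4e6, 2e7])
severity = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5])

rho, p_value = spearmanr(copy_number, severity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A strong positive rank correlation, reproduced across cohorts (postulate vii),
# would support, though not prove, a causal sequence-disease association.
```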
References
Microbiology
Diseases and disorders
Epidemiology
Cause (medicine) | Microbial pathogenesis | [
"Chemistry",
"Biology",
"Environmental_science"
] | 472 | [
"Epidemiology",
"Microbiology",
"Environmental social science",
"Microscopy"
] |
67,415,903 | https://en.wikipedia.org/wiki/Actinium%28III%29%20iodide | Actinium(III) iodide is a salt of the radioactive metal actinium. It is a white crystalline solid. This compound was made by heating actinium oxide with a mixture of aluminium metal and iodine at 700 °C for two hours.
References
Actinium compounds
Iodides
Actinide halides | Actinium(III) iodide | [
"Chemistry"
] | 65 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
67,416,043 | https://en.wikipedia.org/wiki/Actinium%28III%29%20sulfide | Actinium(III) sulfide is a radioactive compound of actinium with the formula Ac2S3. This salt was prepared by heating actinium(III) oxalate at 1400 °C for 6 minutes in a mixture of carbon disulfide and hydrogen sulfide. The product was confirmed to be actinium(III) sulfide by X-ray diffraction.
References
Actinium compounds
Sesquisulfides | Actinium(III) sulfide | [
"Chemistry"
] | 90 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
67,416,400 | https://en.wikipedia.org/wiki/Actinium%28III%29%20phosphate | Actinium(III) phosphate is a white-colored chemical compound of the radioactive element actinium. This compound was created by reacting actinium(III) chloride with monosodium phosphate in aqueous hydrochloric acid. This resulted in the hemihydrate AcPO4·1/2H2O, whose structure was confirmed by X-ray diffraction to match that of lanthanum phosphate. To obtain the anhydrous form, it was heated to 700 °C, which resulted in a solid that was black (presumably due to the presence of impurities) and whose X-ray structure did not correspond to that of other known actinide phosphates.
References
Actinium compounds
Phosphates | Actinium(III) phosphate | [
"Chemistry"
] | 150 | [
"Salts",
"Phosphates",
"Inorganic compounds",
"Inorganic compound stubs"
] |
67,416,519 | https://en.wikipedia.org/wiki/Ultrafilter%20on%20a%20set | In the mathematical field of set theory, an ultrafilter on a set X is a maximal filter on X. In other words, it is a collection of subsets of X that satisfies the definition of a filter on X and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of X that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set X can also be characterized as a filter on X with the property that for every subset A of X, either A or its complement X ∖ A belongs to the ultrafilter.
Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set ℘(X) and the partial order is subset inclusion ⊆. This article deals specifically with ultrafilters on a set and does not cover the more general notion.
There are two types of ultrafilter on a set. A principal ultrafilter on X is the collection of all subsets of X that contain a fixed element x of X. The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exist models of ZF where every ultrafilter on a set is principal.
Ultrafilters have many applications in set theory, model theory, and topology. Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure.
Definitions
Given an arbitrary set X, an ultrafilter on X is a non-empty family U of subsets of X such that:
Proper or non-degenerate: The empty set is not an element of U.
Upward closed in X: If A ∈ U and if B is any superset of A (that is, if A ⊆ B ⊆ X) then B ∈ U.
π-system: If A and B are elements of U then so is their intersection A ∩ B.
If A ⊆ X then either A or its complement X ∖ A is an element of U.
Properties (1), (2), and (3) are the defining properties of a filter on X. Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper, although a filter might be described as "proper" for emphasis.
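To make these defining conditions concrete, here is a minimal, illustrative Python sketch (the function names are ours, not a standard API) that checks all four of them by brute force on a finite set:

```python
from itertools import combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def is_ultrafilter(X, U):
    """Check the four defining conditions of an ultrafilter U on a finite set X."""
    X = frozenset(X)
    U = {frozenset(A) for A in U}
    if not U or frozenset() in U:                 # (1) non-empty and proper
        return False
    for A in U:
        for B in powerset(X):
            if A <= B and B not in U:             # (2) upward closed in X
                return False
            if B in U and A & B not in U:         # (3) closed under intersection
                return False
    # (4) exactly one of A and its complement X \ A belongs to U
    return all((A in U) != (X - A in U) for A in powerset(X))

# The principal ultrafilter at the point 1 of X = {1, 2, 3}:
X = {1, 2, 3}
U = [A for A in powerset(X) if 1 in A]
print(is_ultrafilter(X, U))  # True
```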
A filter subbase is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in some (proper) filter. The smallest (relative to ⊆) filter containing a given filter subbase is said to be generated by the filter subbase.
The upward closure in X of a family of sets P is the set
P↑X := {S : A ⊆ S ⊆ X for some A ∈ P}.
A prefilter or filter base is a non-empty and proper (i.e. ∅ ∉ P) family of sets P that is downward directed, which means that if A, B ∈ P then there exists some C ∈ P such that C ⊆ A ∩ B. Equivalently, a prefilter is any family of sets P whose upward closure P↑X is a filter, in which case this filter is called the filter generated by P, and P is said to be a filter base for that filter.
The dual in X of a family of sets P is the set X ∖ P := {X ∖ B : B ∈ P}. For example, the dual of the power set ℘(X) is itself: X ∖ ℘(X) = ℘(X).
A family of sets is a proper filter on X if and only if its dual is a proper ideal on X ("proper" means not equal to the power set ℘(X)).
Generalization to ultra prefilters
A family U of subsets of X is called ultra if ∅ ∉ U and any of the following equivalent conditions are satisfied:
For every set S ⊆ X there exists some set B ∈ U such that B ⊆ S or B ⊆ X ∖ S (or equivalently, such that B ∩ S equals B or ∅).
For every set S ⊆ ∪U there exists some set B ∈ U such that B ∩ S equals B or ∅.
Here, ∪U is defined to be the union of all sets in U.
This characterization of "U is ultra" does not depend on the set X, so mentioning the set X is optional when using the term "ultra."
For every set S (not necessarily even a subset of X) there exists some set B ∈ U such that B ∩ S equals B or ∅.
If U satisfies this condition then so does every superset V ⊇ U. In particular, a set V is ultra if and only if ∅ ∉ V and V contains as a subset some ultra family of sets.
A filter subbase that is ultra is necessarily a prefilter.
The ultra property can now be used to define both ultrafilters and ultra prefilters:
An ultra prefilter is a prefilter that is ultra. Equivalently, it is a filter subbase that is ultra.
An ultrafilter on X is a (proper) filter on X that is ultra. Equivalently, it is any filter on X that is generated by an ultra prefilter.
Ultra prefilters as maximal prefilters
To characterize ultra prefilters in terms of "maximality," the following relation is needed.
Given two families of sets C and F, the family C is said to be coarser than F, and F is finer than and subordinate to C, written C ≤ F, if for every C₀ ∈ C there is some F₀ ∈ F such that F₀ ⊆ C₀. The families C and F are called equivalent if C ≤ F and F ≤ C. The families C and F are comparable if one of these sets is finer than the other.
The subordination relationship ≤ is a preorder, so the above definition of "equivalent" does form an equivalence relation.
If C ⊆ F then C ≤ F, but the converse does not hold in general.
However, if F is upward closed, such as a filter, then C ≤ F if and only if C ⊆ F.
Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.
If two families of sets M and N are equivalent then either both M and N are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase).
In particular, if a filter subbase is not also a prefilter, then it is not equivalent to the filter or prefilter that it generates. If F and G are both filters on X then F and G are equivalent if and only if F = G. If a proper filter (resp. ultrafilter) is equivalent to a family of sets M then M is necessarily a prefilter (resp. ultra prefilter).
Using the following characterization, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination:
An arbitrary family of sets is a prefilter if and only if it is equivalent to a (proper) filter.
An arbitrary family of sets is an ultra prefilter if and only if it is equivalent to an ultrafilter.
A maximal prefilter on X is a prefilter U that satisfies any of the following equivalent conditions:
U is ultra.
U is maximal with respect to ≤, meaning that if a prefilter P satisfies U ≤ P then P ≤ U.
There is no prefilter properly subordinate to U.
If a (proper) filter F on X satisfies U ≤ F then F ≤ U.
The filter on X generated by U is ultra.
Characterizations
There are no ultrafilters on the empty set, so it is henceforth assumed that X is nonempty.
A filter subbase U on X is an ultrafilter on X if and only if any of the following equivalent conditions hold:
for any S ⊆ X, either S ∈ U or X ∖ S ∈ U;
U is a maximal filter subbase on X, meaning that if S is any filter subbase on X then U ⊆ S implies U = S.
A (proper) filter U on X is an ultrafilter on X if and only if any of the following equivalent conditions hold:
U is ultra;
U is generated by an ultra prefilter;
For any subset S ⊆ X, S ∈ U or X ∖ S ∈ U.
So an ultrafilter U decides for every S ⊆ X whether S is "large" (i.e. S ∈ U) or "small" (i.e. X ∖ S ∈ U).
For each subset A of X, either A is in U or (X ∖ A) is.
This condition can be restated as: ℘(X) is partitioned by U and its dual X ∖ U.
The sets P and X ∖ P are disjoint for all prefilters P on X.
℘(X) ∖ U = {S ⊆ X : S ∉ U} is an ideal on X.
For any finite family S₁, …, Sₙ of subsets of X (where n ≥ 1), if S₁ ∪ ⋯ ∪ Sₙ ∈ U then Sᵢ ∈ U for some index i.
In words, a "large" set cannot be a finite union of sets none of which is large.
For any R, S ⊆ X, if R ∪ S = X then R ∈ U or S ∈ U.
For any R, S ⊆ X, if R ∪ S ∈ U then R ∈ U or S ∈ U (a filter with this property is called a prime filter).
For any R, S ⊆ X, if R ∪ S ∈ U and R ∩ S = ∅ then either R ∈ U or S ∈ U.
U is a maximal filter; that is, if F is a filter on X such that U ⊆ F then U = F. Equivalently, U is a maximal filter if there is no filter F on X that contains U as a proper subset (that is, no filter is strictly finer than U).
Grills and filter-grills
If B is a family of subsets of X then its grill in X is the family
B^#X := {S ⊆ X : S ∩ T ≠ ∅ for all T ∈ B},
where B^# may be written if X is clear from context.
For example, ∅^#X = ℘(X), and if ∅ ∈ B then B^#X = ∅.
If A ⊆ B then B^#X ⊆ A^#X, and moreover, if B is a filter subbase then B ⊆ B^#X.
The grill B^#X is upward closed in X if and only if ∅ ∉ B, which will henceforth be assumed. Moreover, the double grill B^#X#X equals the upward closure B↑X, so that B is upward closed in X if and only if B^#X#X = B.
The grill of a filter on X is called a filter-grill on X. For any non-empty B ⊆ ℘(X), B is a filter-grill on X if and only if (1) B is upward closed in X, and (2) for all sets R and S, if R ∪ S ∈ B then R ∈ B or S ∈ B. The grill operation F ↦ F^#X induces a bijection
# : Filters(X) → FilterGrills(X),
whose inverse is also given by B ↦ B^#X. If F ∈ Filters(X), then F is a filter-grill on X if and only if F = F^#X, or equivalently, if and only if F is an ultrafilter on X. That is, a filter on X is a filter-grill if and only if it is ultra. For any non-empty F ⊆ ℘(X), F is both a filter on X and a filter-grill on X if and only if (1) ∅ ∉ F and (2) for all R, S ⊆ X, the following equivalences hold:
R ∪ S ∈ F if and only if R ∈ F or S ∈ F, and R ∩ S ∈ F if and only if R ∈ F and S ∈ F.
Free or principal
If B is any non-empty family of sets then the kernel of B is the intersection of all sets in B, denoted ker B.
A non-empty family of sets B is called:
free if ker B = ∅, and fixed otherwise (that is, if ker B ≠ ∅).
principal if ker B ∈ B.
principal at a point if ker B ∈ B and ker B is a singleton set; in this case, if ker B = {x} then B is said to be principal at x.
If a family of sets B is fixed then B is ultra if and only if some element of B is a singleton set, in which case B will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter B is ultra if and only if ker B is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.
The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.
Every filter on X that is principal at a single point is an ultrafilter, and if in addition X is finite, then there are no ultrafilters on X other than these. In particular, if a set X has finite cardinality n, then there are exactly n ultrafilters on X, namely the ultrafilters generated by each singleton subset of X. Consequently, free ultrafilters can only exist on an infinite set.
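This dichotomy can be verified by brute force on a small example. The following illustrative sketch (self-contained, with invented function names) enumerates every family of subsets of a three-element set and finds that exactly three of them are ultrafilters, namely the principal ultrafilters at each point:

```python
from itertools import combinations

def powerset(X):
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def is_ultrafilter(X, U):
    P = powerset(X)
    return (bool(U) and frozenset() not in U
            and all(B in U for A in U for B in P if A <= B)  # upward closed
            and all(A & B in U for A in U for B in U)        # intersections
            and all((A in U) != (X - A in U) for A in P))    # A or complement

X = frozenset({0, 1, 2})
subsets = powerset(X)
# All 2^8 = 256 families of subsets of X:
families = [set(f) for r in range(len(subsets) + 1)
            for f in combinations(subsets, r)]
ultra = [U for U in families if is_ultrafilter(X, U)]
print(len(ultra))  # 3 -- one principal ultrafilter per point of X
```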
Examples, properties, and sufficient conditions
If X is an infinite set then there are as many ultrafilters over X as there are families of subsets of X; explicitly, if X has infinite cardinality κ then the set of ultrafilters over X has the same cardinality as the set of all families of subsets of X, that cardinality being 2^(2^κ).
If U and S are families of sets such that U is ultra, ∅ ∉ S, and U ≤ S, then S is necessarily ultra.
A filter subbase U that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by U to be ultra.
Suppose U ⊆ ℘(X) is ultra and Y is a set.
The trace U|Y := {B ∩ Y : B ∈ U} is ultra if and only if it does not contain the empty set.
Furthermore, at least one of the sets U|Y ∖ {∅} and U|(X ∖ Y) ∖ {∅} will be ultra (this result extends to any finite partition of X).
If F₁, …, Fₙ are filters on X, U is an ultrafilter on X, and F₁ ∩ ⋯ ∩ Fₙ ≤ U, then there is some Fᵢ that satisfies Fᵢ ≤ U.
This result is not necessarily true for an infinite family of filters.
The image under a map f : X → Y of an ultra set U ⊆ ℘(X) is again ultra, and if U is an ultra prefilter then so is f(U). The property of being ultra is preserved under bijections. However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if X has more than one point and if the range of f : X → Y consists of a single point {y} then {{y}} is an ultra prefilter on Y but its preimage is not ultra. Alternatively, if U is a principal filter generated by a point in Y ∖ f(X) then the preimage of U contains the empty set and so is not ultra.
The elementary filter induced by an infinite sequence, all of whose points are distinct, is not an ultrafilter. If n = 2, then Uₙ denotes the set consisting of all subsets of X having cardinality n, and if X contains at least 2n − 1 (= 3) distinct points, then Uₙ is ultra but it is not contained in any prefilter. This example generalizes to any integer n > 1 and also to n = 1 if X contains more than one element. Ultra sets that are not also prefilters are rarely used.
For every and every let If is an ultrafilter on then the set of all such that is an ultrafilter on
Monad structure
The functor associating to any set X the set U(X) of all ultrafilters on X forms a monad, called the ultrafilter monad. The unit map
X → U(X)
sends any element x ∈ X to the principal ultrafilter given by x.
This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets, which gives a conceptual explanation of this monad.
Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of set. So in this sense, ultraproducts are categorically inevitable.
The ultrafilter lemma
The ultrafilter lemma was first proved by Alfred Tarski in 1930.
The ultrafilter lemma is equivalent to each of the following statements:
For every prefilter P on a set X there exists a maximal prefilter on X subordinate to P.
Every proper filter subbase on a set X is contained in some ultrafilter on X.
A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.
The following results can be proven using the ultrafilter lemma.
A free ultrafilter exists on a set X if and only if X is infinite. Every proper filter is equal to the intersection of all ultrafilters containing it. Since there are filters that are not ultra, this shows that the intersection of a family of ultrafilters need not be ultra. A family of sets F can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of F is infinite.
Relationships to other statements under ZF
Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF. That is, there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal.
Every filter that contains a singleton set is necessarily an ultrafilter, and given x ∈ X, the definition of the discrete ultrafilter {S ⊆ X : x ∈ S} does not require more than ZF.
If is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets.
In particular, if is finite then the ultrafilter lemma can be proven from the axioms ZF.
The existence of free ultrafilters on infinite sets can be proven if the axiom of choice is assumed.
More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements.
However, the ultrafilter lemma is strictly weaker than the axiom of choice.
While free ultrafilters can be proven to exist, it is not possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible.
Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set X is equal to the cardinality of ℘(℘(X)), where ℘(X) denotes the power set of X.
Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz, and Kantorovitch, improved by Hausdorff).
Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice.
Statements that cannot be deduced
The ultrafilter lemma is a relatively weak axiom. For example, each of the statements in the following list cannot be deduced from ZF together with only the ultrafilter lemma:
A countable union of countable sets is a countable set.
The axiom of countable choice (ACC).
The axiom of dependent choice (ADC).
Equivalent statements
Under ZF, the ultrafilter lemma is equivalent to each of the following statements:
The Boolean prime ideal theorem (BPIT).
Stone's representation theorem for Boolean algebras.
Any product of Boolean spaces is a Boolean space.
Boolean Prime Ideal Existence Theorem: Every nondegenerate Boolean algebra has a prime ideal.
Tychonoff's theorem for Hausdorff spaces: Any product of compact Hausdorff spaces is compact.
If {0, 1} is endowed with the discrete topology then for any set I, the product space {0, 1}^I is compact.
Each of the following versions of the Banach-Alaoglu theorem is equivalent to the ultrafilter lemma:
Any equicontinuous set of scalar-valued maps on a topological vector space (TVS) is relatively compact in the weak-* topology (that is, it is contained in some weak-* compact set).
The polar of any neighborhood of the origin in a TVS is a weak-* compact subset of its continuous dual space.
The closed unit ball in the continuous dual space of any normed space is weak-* compact.
If the normed space is separable then the ultrafilter lemma is sufficient but not necessary to prove this statement.
A topological space X is compact if every ultrafilter on X converges to some limit.
A topological space X is compact if and only if every ultrafilter on X converges to some limit.
The addition of the words "and only if" is the only difference between this statement and the one immediately above it.
The Alexander subbase theorem.
The Ultranet lemma: Every net has a universal subnet.
By definition, a net in X is called an ultranet or a universal net if for every subset S ⊆ X, the net is eventually in S or eventually in X ∖ S.
A topological space X is compact if and only if every ultranet on X converges to some limit.
If the words "and only if" are removed then the resulting statement remains equivalent to the ultrafilter lemma.
A convergence space X is compact if every ultrafilter on X converges.
A uniform space is compact if it is complete and totally bounded.
The Stone–Čech compactification Theorem.
Each of the following versions of the compactness theorem is equivalent to the ultrafilter lemma:
If is a set of first-order sentences such that every finite subset of has a model, then has a model.
If is a set of zero-order sentences such that every finite subset of has a model, then has a model.
The completeness theorem: If is a set of zero-order sentences that is syntactically consistent, then it has a model (that is, it is semantically consistent).
Weaker statements
Any statement that can be deduced from the ultrafilter lemma (together with ZF) is said to be weaker than the ultrafilter lemma.
A weaker statement is said to be strictly weaker if, under ZF, it is not equivalent to the ultrafilter lemma.
Under ZF, the ultrafilter lemma implies each of the following statements:
The Axiom of Choice for Finite sets (ACF): Given a family of non-empty finite sets, their product is not empty.
A countable union of finite sets is a countable set.
However, ZF with the ultrafilter lemma is too weak to prove that a countable union of countable sets is a countable set.
The Hahn–Banach theorem.
In ZF, the Hahn–Banach theorem is strictly weaker than the ultrafilter lemma.
The Banach–Tarski paradox.
In fact, under ZF, the Banach–Tarski paradox can be deduced from the Hahn–Banach theorem, which is strictly weaker than the Ultrafilter Lemma.
Every set can be linearly ordered.
Every field has a unique algebraic closure.
Non-trivial ultraproducts exist.
The weak ultrafilter theorem: A free ultrafilter exists on ℕ.
Under ZF, the weak ultrafilter theorem does not imply the ultrafilter lemma; that is, it is strictly weaker than the ultrafilter lemma.
There exists a free ultrafilter on every infinite set;
This statement is actually strictly weaker than the ultrafilter lemma.
ZF alone does not even imply that there exists a non-principal ultrafilter on some set.
Completeness
The completeness of an ultrafilter U on a powerset is the smallest cardinal κ such that there are κ elements of U whose intersection is not in U. The definition of an ultrafilter implies that the completeness of any powerset ultrafilter is at least ℵ₀. An ultrafilter whose completeness is greater than ℵ₀—that is, such that the intersection of any countable collection of elements of U is still in U—is called countably complete or σ-complete.
The completeness of a countably complete nonprincipal ultrafilter on a powerset is always a measurable cardinal.
The Rudin–Keisler ordering (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of powerset ultrafilters defined as follows: if U is an ultrafilter on ℘(X), and V an ultrafilter on ℘(Y), then V ≤RK U if there exists a function f : X → Y such that
C ∈ V if and only if f⁻¹(C) ∈ U
for every subset C of Y.
Ultrafilters U and V are called Rudin–Keisler equivalent, denoted U ≡RK V, if there exist sets A ∈ U and B ∈ V and a bijection f : A → B that satisfies the condition above. (If X and Y have the same cardinality, the definition can be simplified by fixing A = X, B = Y.)
It is known that ≡RK is the kernel of ≤RK, i.e., that U ≡RK V if and only if U ≤RK V and V ≤RK U.
Ultrafilters on ℘(ω)
There are several special properties that an ultrafilter on ℘(ω), where ω is the set of natural numbers, may possess, which prove useful in various areas of set theory and topology.
A non-principal ultrafilter U is called a P-point (or weakly selective) if for every partition {Cₙ : n < ω} of ω such that Cₙ ∉ U for all n < ω, there exists some A ∈ U such that A ∩ Cₙ is a finite set for each n.
A non-principal ultrafilter U is called Ramsey (or selective) if for every partition {Cₙ : n < ω} of ω such that Cₙ ∉ U for all n < ω, there exists some A ∈ U such that A ∩ Cₙ is a singleton set for each n.
It is a trivial observation that all Ramsey ultrafilters are P-points. Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters.
In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters. Therefore, the existence of these types of ultrafilters is independent of ZFC.
P-points are called as such because they are topological P-points in the usual topology of the space βω ∖ ω of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of the set [ω]² of two-element subsets of ω, there exists an element of the ultrafilter that has a homogeneous color.
An ultrafilter on ℘(ω) is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal powerset ultrafilters.
See also
Notes
Proofs
References
Bibliography
Further reading
Families of sets
Nonstandard analysis
Order theory | Ultrafilter on a set | [
"Mathematics"
] | 5,075 | [
"Mathematical objects",
"Infinity",
"Combinatorics",
"Families of sets",
"Basic concepts in set theory",
"Nonstandard analysis",
"Mathematics of infinitesimals",
"Model theory",
"Order theory"
] |
67,417,478 | https://en.wikipedia.org/wiki/Mie%20potential | The Mie potential is an interaction potential describing the interactions between particles on the atomic level. It is mostly used for describing intermolecular interactions, but at times also for modeling intramolecular interaction, i.e. bonds.
The Mie potential is named after the German physicist Gustav Mie; yet the history of intermolecular potentials is more complicated. The Mie potential is the generalized case of the Lennard-Jones (LJ) potential, which is perhaps the most widely used pair potential.
The Mie potential is a function of r, the distance between two particles, and is written as
V(r) = C ε [ (σ/r)^n − (σ/r)^m ]    (1)
with
C = n/(n − m) · (n/m)^(m/(n − m)).
The Lennard-Jones potential corresponds to the special case where n = 12 and m = 6 in Eq. (1).
In Eq. (1), ε is the dispersion energy, and σ indicates the distance at which V = 0, which is sometimes called the "collision radius." The parameter σ is generally indicative of the size of the particles involved in the collision. The parameters n and m characterize the shape of the potential: n describes the character of the repulsion and m describes the character of the attraction.
The attractive exponent m = 6 is physically justified by the London dispersion force, whereas no justification for a particular value of the repulsive exponent n is known. The repulsive steepness parameter n has a significant influence on the modeling of thermodynamic derivative properties, e.g. the compressibility and the speed of sound. Therefore, the Mie potential is a more flexible intermolecular potential than the simpler Lennard-Jones potential.
The Mie potential is used today in many force fields in molecular modeling. Typically, the attractive exponent is chosen to be m = 6, whereas the repulsive exponent n is used as an adjustable parameter during the model fitting.
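As a minimal illustration (the function and parameter names below are ours), the following Python sketch evaluates Eq. (1) and checks that the (12, 6) case reproduces the Lennard-Jones potential:

```python
import numpy as np

def mie_potential(r, n, m, sigma, epsilon):
    """Mie potential V(r) from Eq. (1), assuming n > m > 0."""
    c = n / (n - m) * (n / m) ** (m / (n - m))  # prefactor C
    sr = sigma / np.asarray(r)
    return c * epsilon * (sr**n - sr**m)

r = np.linspace(0.9, 3.0, 5)
# With n = 12 and m = 6 the prefactor C equals 4, giving the familiar
# Lennard-Jones form 4*epsilon*((sigma/r)**12 - (sigma/r)**6):
print(mie_potential(r, 12, 6, sigma=1.0, epsilon=1.0))
print(4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6))  # identical values
```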
Thermophysical properties of the Mie substance
As with Lennard-Jonesium, the theoretical substance defined by particles interacting through the Lennard-Jones potential, a class of Mie substances exists, defined as single-site spherical particles interacting through a given Mie potential. Since an infinite number of Mie potentials exist (using different n, m parameters), equally many Mie substances exist, as opposed to Lennard-Jonesium, which is uniquely defined. For practical applications in molecular modelling, the Mie substances are mostly relevant for modelling small molecules, e.g. noble gases, and for coarse-grain modelling, where larger molecules, or even a collection of molecules, are simplified in their structure and described by a single Mie particle. However, more complex molecules, such as long-chained alkanes, have successfully been modelled as homogeneous chains of Mie particles. As such, the Mie potential is useful for modelling far more complex systems than those whose behaviour is accurately captured by "free" Mie particles.
Thermophysical properties of both the Mie fluid and chain molecules built from Mie particles have been the subject of numerous papers in recent years. Investigated properties include virial coefficients as well as interfacial, vapor-liquid equilibrium, and transport properties. Based on such studies, the relation between the shape of the interaction potential (described by n and m) and the thermophysical properties has been elucidated.
Also, many theoretical (analytical) models have been developed for describing thermophysical properties of Mie substances and chain molecules formed from Mie particles, such as several thermodynamic equations of state and models for transport properties.
It has been observed that many combinations of different exponents (n, m) can yield similar phase behaviour, and that this degeneracy is captured by the parameter
α = C [ 1/(m − 3) − 1/(n − 3) ],
where C is the prefactor from Eq. (1); fluids with different exponents, but the same α-parameter, will exhibit the same phase behavior.
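A small sketch makes the α computation concrete (the function name is ours; the formula is the one given above):

```python
def mie_alpha(n, m):
    """Degeneracy parameter alpha for a Mie (n, m) potential, n > m > 3."""
    c = n / (n - m) * (n / m) ** (m / (n - m))  # prefactor C from Eq. (1)
    return c * (1.0 / (m - 3) - 1.0 / (n - 3))

# Lennard-Jones case (n = 12, m = 6): C = 4, so alpha = 4*(1/3 - 1/9) = 8/9
print(mie_alpha(12, 6))  # 0.888...
```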
Mie potential used in molecular modeling
Due to its flexibility, the Mie potential is a popular choice for modelling real fluids in force fields. It is used as an interaction potential in many molecular models today. Several (reliable) united-atom transferable force fields are based on the Mie potential, such as that developed by Potoff and co-workers. The Mie potential has also been used for coarse-grain modeling. Electronic tools are available for building Mie force field models for both united-atom force fields and transferable force fields. The Mie potential has also been used for modeling small spherical molecules (i.e. directly the Mie substance - see above); such molecular models have only the parameters of the Mie potential itself.
References
Thermodynamics
Intermolecular forces
Computational chemistry
Quantum mechanical potentials | Mie potential | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 940 | [
"Molecular physics",
"Quantum mechanics",
"Intermolecular forces",
"Materials science",
"Quantum mechanical potentials",
"Computational chemistry",
"Theoretical chemistry",
"Thermodynamics",
"Dynamical systems"
] |
67,417,515 | https://en.wikipedia.org/wiki/Land%20Art%20Generator%20Initiative | Land Art Generator Initiative (LAGI), founded by Elizabeth Monoian and Robert Ferry, is an organization dedicated to devising alternative energy solutions through sustainable design and public art by providing platforms for scientists and engineers to collaborate with artists, architects and other creatives on public art projects that generate sustainable energy infrastructure. Since 2010, LAGI has hosted biennial international competitions encouraging artists to design public art that produces renewable energy. Sites for these contests have included Abu Dhabi, United Arab Emirates; Copenhagen, Denmark; New York City; and Santa Monica, California. Land Art Generator Initiative also led efforts that have resulted in the world's first Solar Mural artworks.
LAGI International Design Competitions
Every two years since 2010, Land Art Generator Initiative has conducted international competitions leading design teams from over forty countries to create art-based solutions to renewable energy challenges.
2010 - Abu Dhabi, United Arab Emirates
2012 - Freshkills Park, New York
2014 - Copenhagen, Denmark
2016 - Santa Monica, California
2018 - Melbourne, Australia
2019 - Abu Dhabi, United Arab Emirates
2020 - Fly Ranch, Nevada
2022 - Mannheim, Germany
Solar Mural Artworks
The world's first Solar Mural artworks, developed through leadership from the Land Art Generator Initiative, are located in San Antonio, Texas. These artworks are the result of an advanced photovoltaic film technology that allows light to filter through an image-printed film adhered to solar panels. The first is a stand-alone work called La Monarca. The world's first wall-mounted Solar Mural artwork is on the facade of Brackenridge Elementary School.
References
Energy organizations
Sustainable design
Public art | Land Art Generator Initiative | [
"Engineering"
] | 328 | [
"Energy organizations"
] |
67,418,621 | https://en.wikipedia.org/wiki/Coral%20reefs%20of%20Solomon%20Islands | The Coral reefs of Solomon Islands consist of the reefs around six major islands and over 986 smaller islands, in Oceania, to the east of Papua New Guinea and northwest of Vanuatu. Solomon Islands lie between latitudes 5° and 13°S, and longitudes 155° and 169°E. The Santa Cruz Islands are situated north of Vanuatu and are especially isolated from the other islands. The Solomon Islands (the Solomons) has the 22nd largest Exclusive Economic Zone, lying within the Pacific Ocean.
The Solomons has a rich and diverse marine life, including coral reefs and seagrass meadows. The islands are part of the Coral Triangle, the region of the western Pacific with the World's greatest diversity of corals and coral reef species. The recognizable reef systems in the Solomons are: fringing reef, patch reef, barrier reef, atoll reefs and lagoon environment. The baseline survey of marine biodiversity in 2004, identified the Solomons as having the second highest diversity of corals in the World, second only to the Raja Ampat Islands in eastern Indonesia.
The coral reefs of the Solomons make up a substantial total coral reef area. There are 113 Locally Managed Marine Areas (LMMA) containing an estimated 155 no-take zones in the Solomons. The largest LMMA, with a contiguous no-take zone, is on Tetepare Island.
More than 90% of inshore coastal areas, reefs and islets in the Solomons are owned and managed under the customary marine tenure system, under which family units, clans, or tribes have rights to access and use marine resources. This kinship group ownership system is recognised under the Solomon Islands Constitution. The methods of management of marine resources under the customary marine tenure system include limited entry, closed seasons, closed areas, size limits, species prohibitions and restrictions on the use of fishing equipment. The success or failure of conservation efforts on coral reefs largely depends on the attitudes of the communities owning them.
Areas of high biodiversity and conservation value
A total of 12 offshore sites and 53 inshore sites have been identified as Key Biodiversity Areas (KBAs) - areas of high biodiversity and conservation value. The highest-scoring sites were Marovo Lagoon and the Arnavon Community Marine Conservation Area. The offshore sites include Roncador Reef, Ontong Java Atoll, Tikopia, and Vanikoro, an island in the Santa Cruz group.
KBAs include:
Reef sites and lagoons within Western Province, including Kennedy Island (Kasolo Island), Tetepare Island, Marovo Lagoon, the Zaira Resource Management Area on the western coast of Vangunu Island and New Georgia Island, and Mushroom Island at the edge of Roviana Lagoon, on the southern side of New Georgia.
Reef sites and lagoons within Choiseul Province, including: Zinoa Island, located on the south-west side of Choiseul Island, which includes the Zinoa Marine Conservation Area, covering 150 ha and consisting of two small islands and associated reefs; Rabakel, a marine protected area (MPA) at the northern end of Choiseul Island that includes a stretch of fringing coral reef; Moli Island, off the western side of Choiseul Island, which has a locally managed marine area (LMMA) that includes a stretch of fringing coral reef; and Muzo Island, located towards the southern end of Choiseul Island, which is fringed by a coral reef and sheltered from strong wave action by a barrier reef directly to the south.
Reef sites and lagoons within Isabel Province including: the Arnavon Community Marine Conservation Area.
Reef sites and lagoons within Malaita Province, including Langa Langa Lagoon or Akwalaafu; Lau Lagoon; the coral reef lagoon off the coast of Fanalei and Walande villages on the eastern side of Malaita Island; and Ndai or Dai, a small elevated coral reef island off the northern end of Malaita Island.
Reef sites in Rennell and Bellona Province, including Rennell Island and the Indispensable Reefs, a chain of three large coral atolls in the Coral Sea south of Rennell Island. The atolls enclose deep lagoons. North Reef has two narrow openings in the north and northwest, with no islets. Middle Reef has a small islet located near the center of the reef.
Structure of the reefs of Solomon Islands
Solomon Islands is located at the edge of the Solomon Sea Plate, the Pacific Plate and other oceanic tectonic plates. These autochthonous geological systems have tilted to push up and create some of the islands, while others are volcanic in origin.
The coral reefs are built by the carbonate-based skeletons of a variety of animals and algae. Slowly, over time, the reefs have built up to the surface of the oceans. Coral reefs are found in shallow, warm salt water, where sunlight filters through clear water and allows microscopic organisms to live and reproduce. Coral reefs are composed of tiny, fragile animals known as coral polyps. Coral reefs are important because of their biodiversity, and are at risk from the detrimental effects of human action and inaction, such as overfishing, and heightened levels of nutrients in the water due to pollution from human waste, which feeds the growth of macroalgae species (seaweed).
The coral reefs of the Solomons are predominantly fringing reefs, although Ontong Java Atoll and Sikaiana are examples of coral atolls. Long submerged barrier reefs are uncommon in the Solomons, although there are some examples, including in the Reef Islands, where a line of four reefs stretches westwards and the Great Reef lies further north.
The fringing reefs of the Solomons are found off the mountainous volcanic islands of the Solomon Islands archipelago, which includes Choiseul, the Shortland Islands, the New Georgia Islands, Santa Isabel, the Russell Islands, the Florida Islands, Tulagi, Malaita, Maramasike, Ulawa, Owaraha (Santa Ana), Makira (San Cristobal), and the main island of Guadalcanal. Bougainville Island is the largest island in the archipelago, while it is geographically part of the Solomon Islands archipelago, it is politically an autonomous region of Papua New Guinea. The volcanic islands with fringing reefs include the remote, tiny outliers, Tikopia, Anuta, and Fatutaka.
The largest coral reef systems in the Solomons are located where large lagoons are protected by raised or semi-submerged barrier reefs or by raised limestone islands, such as Marovo Lagoon and Roviana Lagoon (on the southern side of New Georgia Island). There are some large lagoon complexes that are protected by volcanic islands, raised islands, sand cays, or barrier reefs. Examples of such reefs are (1) Marau Sound in the eastern part of Guadalcanal; (2) Lau Lagoon on the northeast coast of Malaita; (3) in the vicinity of Vangunu in southeastern New Georgia; (4) in Gizo, on Vonavona island, and the lagoon of New Georgia; (5) along the northeastern coast of Choiseul; (6) on both sides of Manning Strait between Choiseul and Santa Isabel Island; (7) and adjacent to the Shortland Islands near Bougainville.
Sikaiana (formerly called the Stewart Islands) is a small atoll northeast of Malaita; its lagoon, known as Te Moana, is totally enclosed by the coral reef. Sikaiana is an example of a coral atoll formed from an oceanic volcano: a coral reef grows around the shore of the volcano and then, over several million years, the volcano becomes extinct, erodes and subsides completely beneath the surface of the ocean. The reef and the small coral islets on top of it are all that is left of the original island, and a lagoon has taken the place of the former volcano. For the atoll to persist, the coral reef must be maintained at the sea surface, with coral growth matching any relative change in sea level (subsidence of the island or rising oceans). On the atolls, an annular reef rim surrounds the lagoon, and may include natural reef channels.
Rennell is the second largest raised coral atoll in the world. Rennell Island is an example of a reef island, also formed from an oceanic volcano, that has a completely closed rim of dry land, with the remnants of a lagoon that has no direct connection to the open sea.
The Marovo Lagoon is the second largest saltwater lagoon in the world; Huvadhu Atoll in the Maldives is the largest. Marovo Lagoon is surrounded by Vangunu Island and Nggatokae Island, both extinct volcanic islands. It is part of the New Georgia Islands, which are located to the northwest of Guadalcanal, and is protected by a double barrier reef system.
The volcanic island of Malaita includes the Langa Langa Lagoon (or Akwalaafu) on the northwest coast and the Lau Lagoon on the northeast coast. The Lau Lagoon contains about 60 artificial islands built on the reef. The people of the Lau Lagoon continue to live on the reef islands.
State of the reefs of the Solomon Islands
Surveys of marine biodiversity
The baseline survey of marine biodiversity in 2004, identified 494 coral species, including nine potentially new species and extended the known range of 122 coral species to include the Solomons. The 2004 survey also recorded 1,019 species of reef fish, of which 47 were species range extensions.
There are 3 species of pearl oysters: Black-lip oyster (Pinctada margaritifera), White-lip oyster (Pinctada maxima), and Pteria penguin.
Six species of giant clam (Tridacna) occur: Tridacna crocea; Hippopus hippopus; Tridacna squamosa; Tridacna maxima; Tridacna derasa (southern giant clam or smooth giant clam); and Tridacna gigas. These giant clams have been overharvested and restrictions now protect the resource for local subsistence use only. Only farmed shells are allowed to be marketed commercially.
Erect coralline algae (red algae) and Halimeda (green macroalgae) were commonly found throughout the Solomons, particularly at depths greater than 15 meters.
Reef sites and lagoons within Western Province were surveyed in 2014 near Gizo, Munda, Marovo Lagoon and Nono Lagoon, both located in the New Georgia Islands. These areas had an average Live Coral Cover (LCC) ranging from 18 to 49%. The sites with the highest LCC in the Western Province and second highest in the Solomons were on the exposed side of the fringing reef near Marovo Lagoon measuring an average of 49% LCC. The exposed side of the fringing reef of Marovo Lagoon had an average of 38% LCC. The sites with the lowest live coral cover were found near Munda with an average of 18% LCC. The exposed side of the fringing reef of Marovo Lagoon was dominated by Acropora, followed by Porites, Montipora, Millepora, and Echinopora. Munda was dominated by Acropora, followed by Porites, Montipora and Agariciidae.
Nono Lagoon had an average overall LCC of 32%; with individual sites ranging from 20% to 50% LCC. The lagoon floor cover included leather corals, sponges, and other soft corals such as (Sinularia and Sacrophotons). The algal community within Nono Lagoon was dominated by crustose coralline algae which accounted for 47% of the total algae observed, with the next most dominant algae being turf algae accounting for 29% of the total algae.
The coral diversity in the Western Province was evenly spread among common reef-building genera including Porites, Acropora, Montipora, Pocillopora, Pocillopora, Turbinaria and Millepora (hydrocorals). Isopora was one of the most abundant coral genera in the reefs off Gizo.
Reef sites within Temotu Province were surveyed in 2014 around the Reef Islands, Utupua, Vanikoro and Tinakula and were identified as having some of the highest coral diversity when compared to the other sites surveyed in the Solomons. The Reef Islands are small, uplifted coral islands with an average of 31% LCC. These reefs have the highest percentage of erect coralline algae of all the areas surveyed in the Solomons and are abundant with Halimeda (green macroalgae), which accounted for 38% of the total algae measured. Crown-of-thorns starfish (Acanthaster planci) were found on the reefs in numbers that indicated an active outbreak was occurring. Vanikoro and Utupua are both small atolls found within Temotu Province that had relatively high live coral cover of 42% and 44% respectively. Utupua had an average algal cover of 38% on its reefs, dominated by erect coralline algae, which accounted for 44% of the algal community. Tinakula had lower levels of live coral cover because the volcano of Tinakula had erupted in 2012, and an essentially new reef system was beginning to form around the half of the island's reef that was damaged in the eruption. The Reef Islands and Utupua were dominated by Acropora and Porites. Tinakula was dominated by Acropora, Porites, Montipora and Millepora. Vanikoro was also dominated by crustose coralline algae at 32%; however, there was a notable amount of erect coralline algae, particularly Halimeda, accounting for 21% of the total algae observed. A total of 40 different genera were observed at Vanikoro, which was dominated by Acropora, followed by Porites.
Reef sites and lagoons within Isabel Province were surveyed in 2014 around Kerehikapa and Sikopo within the boundaries of the Arnavon Community Marine Conservation Area, as well as the area around Malakobi, which lies outside the conservation area. The site at Kerehikapa had an average of 51% LCC, which was the highest average live coral cover of all areas surveyed in the Solomons. One site had the highest overall with 69% LCC. Algae accounted for an average of 30% around Kerehikapa and was composed predominately of crustose coralline algae and turf algae. Kerehikapa was dominated by Acropora, followed by Porites. These two dominant genera accounted for over 40% of the coral observed at this site.
Sikopo had only 30% LCC compared to Kerehikapa, with significant amounts of coral rubble and very little reef structure, which may have been damage caused by the tsunami that struck Isabel Province in April 2007. The dominant genera around Sikopo were Acropora and Montipora, with Cyphastrea present at greater levels than in other regions of the Solomons. The survey outside of the conservation area at Malakobi identified live coral cover ranging widely from 8% to 45%. The average algal cover of 52% was evenly spread among crustose coralline algae, erect coralline algae, macroalgae and turf algae. Malakobi was dominated by Acropora, Isopora and Porites.
Threats to reefs and marine ecosystems
The reefs in the Solomons are exposed to the effects of pollution and over-utilisation of reef resources by the residents of the islands, and face detrimental effects from a variety of factors that are the consequence of climate change, such as bleaching, an increased number of destructive tropical cyclones, and the potential threat of ocean acidification, which would result from a lowered pH of ocean waters as increased carbon dioxide emissions into the atmosphere cause more CO2 to dissolve into the ocean.
There are a variety of human activities that are threats to reefs and marine ecosystems, including: Blast fishing using dynamite; the harvesting of coral for lime production for betel nut chewing; mining of corals for construction purposes (e.g., building of seawalls and other structures); and fishing of species that are important in maintaining the functioning of the reef ecosystem. The forms of pollution include: sewage and waste from houses or factories, such as the processing of fish; sedimentation from run-off following removal of trees through logging in the forests; and plastics and other non-degradable refuse entering the ocean.
Bleaching
There was a 36-month bleaching event on the reefs of the Solomons in 2014–2017. The bleaching was a consequence of an increase in ocean temperatures that happened during the El Niño events. Bleaching events can alter the community composition of coral reefs by changing the relative abundance of corals, based on the susceptibility of different species of corals to bleaching-induced mortality. Branching corals such as Acropora (staghorn corals) tend to be more susceptible to bleaching-induced mortality.
Bleaching is a process that expels the photosynthetic algae, called zooxanthellae, from the corals' polyps. These algae are vital to the reef's life because they provide the coral with nutrients; they are also responsible for its color. The process is called bleaching because when the algae are ejected from the coral reef, the animal loses its pigment. Zooxanthella densities are continually changing; bleaching is an extreme example of what naturally happens.
Surveys in 2016 identified that the reefs around New Georgia Island and Kolombangara Island in Western Province had substantially survived the thermal anomalies that caused extensive coral bleaching in other parts of the Indo-Pacific, despite the heat stress from the high water temperatures recorded, including at depth, during an El Niño event.
The survey of reefs in 2018 at 13 sites in the Western Province, and at 4 islands (Mbabanga, Tetepare, Uepi, Gatokae) in the New Georgia Islands, concluded that reef-building capacity (percentage cover of reef-building organisms: hard corals and coralline algae), as the indicator of the resilience of reefs to large-scale disturbances, was high across the sites. The average percent cover of active reef builders at these sites in Western Province (44.6%) was comparable to that of remote uninhabited islands in the Central Pacific (45.2%), and substantially greater than that of inhabited islands (27.3%) in the same region. The conclusion reached was that the surveyed reefs in the Western Province remain healthy and functional in the face of recent global and local stressors, and that this relatively healthy status likely reflects local variation in seawater temperature and low levels of bleaching stress.
However, in January 2021, the reefs around Marovo Lagoon including the Zaira Resource Management Area on the western coast of Vangunu Island, and New Georgia Island were reported as suffering a widespread coral bleaching event.
References
Coral reefs
Geography of the Solomon Islands | Coral reefs of Solomon Islands | [
"Biology"
] | 4,063 | [
"Biogeomorphology",
"Coral reefs"
] |
67,418,850 | https://en.wikipedia.org/wiki/Cryptojacking | Cryptojacking is the act of exploiting a computer to mine cryptocurrencies, often through websites, against the user's will or while the user is unaware. One notable piece of software used for cryptojacking was Coinhive, which was used in over two-thirds of cryptojacking incidents before its March 2019 shutdown. The cryptocurrencies most often mined are privacy coins—coins with hidden transaction histories—such as Monero and Zcash.
Like most malicious attacks on the computing public, the motive is profit, but unlike other threats, it is designed to remain completely hidden from the user. Cryptojacking malware can lead to slowdowns and crashes due to straining of computational resources.
Bitcoin mining by personal computers infected with malware is being challenged by dedicated hardware, such as FPGA and ASIC platforms, which are more efficient in terms of power consumption and thus may have lower costs than theft of computing resources.
Notable events
In June 2011, Symantec warned about the possibility that botnets could mine covertly for bitcoins. Malware used the parallel processing capabilities of GPUs built into many modern video cards. Although the average PC with an integrated graphics processor is virtually useless for bitcoin mining, tens of thousands of PCs laden with mining malware could produce some results.
In mid-August 2011, bitcoin mining botnets were detected, and less than three months later, bitcoin mining trojans had infected Mac OS X.
In April 2013, electronic sports organization E-Sports Entertainment was accused of hijacking 14,000 computers to mine bitcoins; the company later settled the case with the State of New Jersey.
German police arrested two people in December 2013 who customized existing botnet software to perform bitcoin mining, which police said had been used to mine at least $950,000 worth of bitcoins.
For four days in December 2013 and January 2014, Yahoo! Europe hosted an ad containing bitcoin mining malware that infected an estimated two million computers using a Java vulnerability.
Another software, called Sefnit, was first detected in mid-2013 and has been bundled with many software packages. Microsoft has been removing the malware through its Microsoft Security Essentials and other security software.
Several reports of employees or students using university or research computers to mine bitcoins have been published. On February 20, 2014, a member of the Harvard community was stripped of his or her access to the university's research computing facilities after setting up a Dogecoin mining operation using a Harvard research network, according to an internal email circulated by Faculty of Arts and Sciences Research Computing officials.
Ars Technica reported in January 2018 that YouTube advertisements contained JavaScript code that mined the cryptocurrency Monero.
In 2021, multiple zero-day vulnerabilities were found on Microsoft Exchange servers, allowing remote code execution. These vulnerabilities were exploited to mine cryptocurrency.
Detection
Traditional countermeasures against cryptojacking are host-based and not well suited to corporate networks. A potential solution is a network-based approach called Crypto-Aegis, which uses machine learning to detect cryptocurrency-related activity in network traffic, even when it is encrypted or mixed with non-malicious data.
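As a rough illustration of the kind of network-based detection described above (not the actual Crypto-Aegis feature set or model, which are specific to that work), a classifier can be trained on simple per-flow statistics, since packet sizes and inter-arrival times remain observable even when payloads are encrypted. The features, thresholds and synthetic training data below are assumptions for demonstration only.

```python
# Sketch: classifying network flows as mining/benign from per-flow statistics.
# Illustrative only: not the Crypto-Aegis feature set or model; the features
# and the synthetic data below are made-up assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_flows(n, mining):
    """Generate toy flows: mining traffic tends to consist of small, regular,
    long-lived packet exchanges (share submissions at steady intervals)."""
    mean_pkt = rng.normal(120 if mining else 700, 30, n)    # mean packet size (bytes)
    iat_std = rng.normal(0.05 if mining else 0.8, 0.02, n)  # inter-arrival jitter (s)
    duration = rng.normal(3000 if mining else 60, 20, n)    # flow duration (s)
    return np.column_stack([mean_pkt, iat_std, duration])

X = np.vstack([synth_flows(500, True), synth_flows(500, False)])
y = np.array([1] * 500 + [0] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real traffic the two classes overlap far more than in this toy setup, which is why published systems engineer richer flow features and validate against mixed benign workloads.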
References
Cryptocurrencies
Malware
Security breaches
Cybercrime | Cryptojacking | [
"Technology"
] | 690 | [
"Malware",
"Computer security exploits"
] |
67,419,030 | https://en.wikipedia.org/wiki/Target%202035 | Target 2035 is a global effort, or movement, to discover open-science pharmacological modulators for every protein in the human proteome by the year 2035. The effort is led by the Structural Genomics Consortium with the intention that the movement evolves organically. Target 2035 was born out of the success that chemical probes have had in elevating or de-prioritizing the therapeutic potential of protein targets. The availability of open-access pharmacological tools is a largely unmet need in drug discovery, especially for the dark proteome.
The first five years will include building mechanisms (Phase 1 below) which allow researchers to find collaborators with like-minded goals towards discovering a pharmacological tool for a specific protein or protein family, and making it open access (without encumbrances due to intellectual property). One strategic goal is seeding new open-science programs covering components of the drug discovery pipeline, with the aim of bringing medicines to the bedside equitably, affordably and rapidly. Phase 1 will also build a framework that welcomes new and (re-)emerging enabling technologies in hit-finding and characterization. An update on the progress was published.
Target 2035 will draw on successes from past and current publicly funded programs including the National Institutes of Health (NIH) Illuminating the Druggable Genome initiative for under-explored kinases, GPCRs and ion channels, the Innovative Medicines Initiative's RESOLUTE project on human SLCs, the Innovative Medicines Initiative's Enabling and Unlocking Biology in the Open (EUbOPEN), and the Innovative Medicines Initiative's Unrestricted Leveraging of Targets for Research Advancement and Drug Discovery. The NIH recently reiterated its commitment to making its data open, in part to mitigate the tens of billions of dollars lost to irreproducible research.
Target 2035 will collaborate with the Chemical Probes Portal and open science platforms, e.g. Just One Giant Lab, in order to spread awareness and education of best practices for chemical modulators and the benefits of open science, respectively.
The following draft plan has been outlined in a white paper.
Phase 1
The first phase, from 2020 to 2025, would be structured to build the foundation for a concerted global effort, and would aim to collect, characterize and make available existing pharmacological modulators for key representatives from all proteins families in the current druggable genome (~4,000 proteins), as well as to develop critical and centralized infrastructure to facilitate data collection, curation, dissemination, and mining that will power the scientific community worldwide. This phase might also create centralized facilities to provide quantitative genome-scale biochemical and cell-based profiling assays to the federated community, as well as to coordinate the development of new technologies to extend the definition of druggability. This first phase will complement and extend ongoing efforts to create chemical tools and chemogenomic libraries to blanket priority gene families, such as kinases and epigenetics families.
One year into Target 2035, the project has yielded infrastructure to house data on chemogenomic compounds reported in the literature. A progress update was published recently. Towards the development of new technologies, Target 2035 started a new initiative, the Critical Assessment of Computational Hit-Finding Experiments (CACHE), aimed at benchmarking computational methods for hit-finding. The first competition - finding ligands for the WD40 domain of LRRK2 - started in March 2022, and the first round of predictions has been submitted. In the meantime, a call for applications for the second CACHE benchmark - predicting ligands for the RNA-binding domain of Nsp13 - has been posted.
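For context, below is a minimal sketch of the simplest class of hit-finding baseline that benchmarks like CACHE compare computational methods against: ligand-based similarity search with RDKit. The SMILES strings and the 0.4 similarity cutoff are made-up placeholders, and CACHE participants use far more sophisticated structure-based and machine-learning methods.

```python
# Sketch: ligand-based virtual screening by Tanimoto similarity to a known
# active. Purely illustrative of a hit-finding baseline; the molecules and
# the 0.4 cutoff are arbitrary placeholders, not CACHE methodology.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

known_active = Chem.MolFromSmiles("c1ccc2[nH]ccc2c1")   # indole (placeholder)
library = {
    "cand_1": "c1ccc2[nH]c(C)cc2c1",   # 2-methylindole
    "cand_2": "CCO",                   # ethanol
    "cand_3": "c1ccc2occc2c1",         # benzofuran
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(known_active, 2, nBits=2048)
for name, smi in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(ref_fp, fp)
    flag = "HIT" if sim >= 0.4 else "   "
    print(f"{flag} {name}: Tanimoto = {sim:.2f}")
```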
Phase 2
The second phase, from 2025 to 2035, will be to apply the new technologies and infrastructure to generate a complete set of pharmacological modulators for >90% of the ~20,000 proteins encoded by the genome. "Target 2035" sounds ambitious, but its concept and practicality are on firm ground based on a number of pilot studies, which revealed the following success parameters:
Collaborate with the pharmaceutical sector to access unparalleled expertise, experience, materials, and logistics
Establish clear and quantitative quality criteria for the output (target chemical tool profiles) to provide focus
Organize the project around protein families – it is the most efficient, practical and scientifically sound way to divide this large project into teams
Establish clear open science principles to eliminate or reduce conflicts of interest, to reduce legal encumbrances, and to encourage participation by the community.
References
External links
Drug discovery
Open science
Chemical biology | Target 2035 | [
"Chemistry",
"Biology"
] | 940 | [
"Life sciences industry",
"Drug discovery",
"nan",
"Medicinal chemistry",
"Chemical biology"
] |
67,419,336 | https://en.wikipedia.org/wiki/Cadmium%20cycle | The cadmium cycle is a biogeochemical cycle of dispersion and deposition of cadmium through the atmosphere, biosphere, pedosphere, and hydrosphere. Cadmium typically exists in the environment with an oxidation state of +2, but can also be found with an oxidation state of +1, though this is quite uncommon.
Sources
Atmospheric sources are dominated by anthropogenic emissions (non-ferrous metal production, fossil fuel combustion, iron and steel production, waste disposal, and cement production), with minor introduction of cadmium through natural emissions (volcanoes, dust, biomass burning, and sea spray). Cadmium introduced as powders and aerosols through anthropogenic sources and natural sources can be detected in almost all corners of the globe. Cadmium is highly soluble and cadmium concentrations are rapidly depleted after wind transport as particles, aerosols, and water droplets. Typically, cadmium deposition decreases latitudinally from the source.
Terrestrial cycling
The majority of cadmium deposition to soils and freshwater is due to anthropogenic atmospheric emissions, contaminants in biosolids, and contaminants in fertilizers. Dry deposition accounts for 30-70% of terrestrial inputs. Cadmium is highly mobile in soils and becomes mineral-associated over time. Higher pH and temperature favor cadmium incorporation into minerals, while lower pH and temperature makes cadmium more soluble. Dissolved cadmium circulates through freshwater systems before introduction to larger bodies of water. In rivers, dissolved cadmium ranges from nanomolar to micromolar concentrations.
Oceanic cycling
The vast majority of marine cadmium (80-90%) comes from wet deposition. Cadmium behaves similarly to nutrients such as phosphate and zinc: dissolved concentrations depend heavily on uptake, assimilation, and deposition by phytoplankton and diatoms. Dissolved cadmium concentrations are sub-nanomolar in the surface ocean and increase with depth, with a maximum in the thermocline. Like other nutrients, cadmium is lowest in the North Atlantic (~0.3 nM). Higher concentrations (up to 1 nM) occur in the deep Indian, Southern, and Pacific oceans due to water mass aging during thermohaline circulation. Coastal waters range from 0.2 to 0.9 nM, denoting a significant terrestrial input.
See also
Cadmium
Carbonic anhydrase
Diatoms
References
Biogeochemical cycle
Cadmium | Cadmium cycle | [
"Chemistry"
] | 505 | [
"Biogeochemical cycle",
"Biogeochemistry"
] |
67,421,377 | https://en.wikipedia.org/wiki/Connecting%20Organizations%20for%20Regional%20Disease%20Surveillance | The Connecting Organizations for Regional Disease Surveillance (CORDS) is a "regional infectious disease surveillance network that neighboring countries worldwide are organizing to control cross-border outbreaks at their source." In 2012, CORDS was registered as a legal, non-profit international organization in Lyon, France. As of 2021, CORDS was composed of "six regional member networks, working in 28 countries in Africa, Asia, the Middle East and Europe."
Synopsis
CORDS are "distinct from more formal networks in geographic regions designated by the World Health Organization (WHO)... Some of these regional networks existed before the sudden 2003 outbreak of SARS," for example:
the Pacific Public Health Surveillance Network (PPHSN) (1996),
the Mekong Basin Disease Surveillance (MBDS) network (1999), and
the East African Integrated Disease Surveillance Network (EAIDSNet) (2000)
the Southeastern European Health Network (SEEHN) (2001)
the Asia Partnership on Emerging Infectious Diseases Research (APEIR) (2006)
the SACIDS Foundation for One Health (SACIDS) of the Southern African Development Community (2008)
the Southeast European Center for Surveillance and Control of Infectious Diseases (SECID) (2013)
History
CORDS grew out of the 1960s-era Organisations de Coordination et de Cooperation pour la lutte contre les Grandes Endemies (OCCGE), an African network that was reformed in 1987 to add the West African Health Community (WAHC), giving birth to the West African Health Organisation (WAHO).
The PPHSN was formed in 1996 in order to "streamline" members' "disease reporting and response". In 1997, the PPHSN set up PacNet, in order to "share timely information on disease outbreaks" and "to ensure appropriate action was taken in response to public health threats."
In 2000, the Global Outbreak Alert and Response Network was formalized by the WHO.
In 2001, was formed the Southeastern European Health Network (SEEHN) which grouped the governments of Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Moldova, Montenegro, Romania, and the Former Yugoslav Republic of Macedonia.
In 2003, Israel, Jordan and the Palestinian Authority established the Middle East Consortium on Infectious Disease Surveillance (MECIDS).
The growth of CORDS can be categorised into several overlapping phases:
from 1996 to 2007, the effort was to train and connect people to contain local epidemics
from 2003 to 2009, the effort was aimed to enhance "cross-border and national surveillance systems to address regional threats", including a particular focus of EAIDSNet on zoonotic diseases
from 2006 to at least 2017, the focus was to strengthen "preparedness for pandemics and other public health threats of regional and global scale."
In 2005, the International Health Regulations (IHR) mandated official reporting of certain types of disease outbreaks to WHO.
In 2007, the Rockefeller Foundation (RF) used funds from the Nuclear Threat Initiative (NTI) to convene, in Bellagio, "regional surveillance networks from across the globe to initiate a dialogue about how to harness lessons learned, emerging technologies, and nascent support." In 2009 the RF used funds from NTI to "create a community of practice" named CORDS, which in 2012 was concretized in Lyon, France as a legal, non-profit international organization.
CORDS convened the 1st Global Conference on Regional Disease Surveillance Networks at the Prince Mahidol Award Conference in 2013.
References
Public health
Epidemiology
2012 establishments in France
Public health organizations
Infectious disease organizations
Bioinformatics organizations
Disaster management tools
Emergency communication
Warning systems
Organizations established in 2012
Organizations based in Lyon
Non-profit organizations based in France
European medical and health organizations | Connecting Organizations for Regional Disease Surveillance | [
"Technology",
"Engineering",
"Biology",
"Environmental_science"
] | 750 | [
"Epidemiology",
"Bioinformatics organizations",
"Safety engineering",
"Measuring instruments",
"Bioinformatics",
"Warning systems",
"Environmental social science"
] |
67,421,597 | https://en.wikipedia.org/wiki/Web%20Application%20Open%20Platform%20Interface | Web Application Open Platform Interface better known as WOPI is a protocol that enables a client to access and change files stored on a server. The protocol was first released as v0.1 by Microsoft in January 2012, but as of November 2020 the current specification is v12.2. The protocol has been adopted by applications outside of Microsoft, such as by Google, ownCloud and Nextcloud.
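As a loose sketch of the protocol's shape: a WOPI host exposes HTTP endpoints that a client (such as an online document editor) calls to discover file metadata and fetch contents. The sketch below shows the two core host operations, CheckFileInfo and GetFile, using Flask; the in-memory store, the dummy access-token check, the port, and the small subset of response fields shown are illustrative assumptions rather than a complete implementation of the specification.

```python
# Sketch of a minimal WOPI host: CheckFileInfo returns file metadata as JSON,
# GetFile returns the raw contents. Storage, token validation and the field
# subset below are simplified placeholders, not the full WOPI specification.
import io
from flask import Flask, abort, jsonify, request, send_file

app = Flask(__name__)
FILES = {"doc123": {"name": "report.docx", "owner": "alice",
                    "version": "1", "data": b"fake docx bytes"}}

def authorized() -> bool:
    # Real hosts validate an access token issued for this file/user pair.
    return request.args.get("access_token") == "secret-token"

@app.get("/wopi/files/<file_id>")
def check_file_info(file_id):
    if not authorized():
        abort(401)
    f = FILES.get(file_id)
    if f is None:
        abort(404)
    return jsonify({
        "BaseFileName": f["name"],
        "OwnerId": f["owner"],
        "Size": len(f["data"]),
        "Version": f["version"],
        "UserCanWrite": True,
    })

@app.get("/wopi/files/<file_id>/contents")
def get_file(file_id):
    if not authorized():
        abort(401)
    f = FILES.get(file_id)
    if f is None:
        abort(404)
    return send_file(io.BytesIO(f["data"]), download_name=f["name"])

if __name__ == "__main__":
    app.run(port=5000)
```

A WOPI client would call GET /wopi/files/doc123?access_token=... first to learn the file's name, size and permissions, then fetch /contents to open it for editing.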
References
Application layer protocols
Internet properties established in 2012
2012 software | Web Application Open Platform Interface | [
"Technology"
] | 94 | [
"Computing stubs",
"Computer network stubs"
] |
67,421,989 | https://en.wikipedia.org/wiki/Anton%20Pannekoek%20Institute%20for%20Astronomy | Anton Pannekoek Institute for Astronomy is one of the research institutes of the Faculty of Science of the University of Amsterdam. It is named after the Dutch astronomer and Marxist Anton Pannekoek.
References
University of Amsterdam
Astronomy institutes and departments
Astronomy in the Netherlands | Anton Pannekoek Institute for Astronomy | [
"Astronomy"
] | 54 | [
"Astronomy institutes and departments",
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
62,165,952 | https://en.wikipedia.org/wiki/Neopentylene%20fluorophosphate | Neopentylene fluorophosphate, also known as NPF, is an organophosphate compound that is classified as a nerve agent. It has a comparatively low potency, but is stable and persistent, with a delayed onset of action and long duration of effects.
See also
Diisopropyl fluorophosphate
IPTBO
References
Organophosphate insecticides
Acetylcholinesterase inhibitors
Phosphorofluoridates
Dioxaphosphorinanes | Neopentylene fluorophosphate | [
"Chemistry"
] | 104 | [
"Phosphorofluoridates",
"Functional groups",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
62,167,884 | https://en.wikipedia.org/wiki/Dynamically%20Redefined%20Character%20Set | The Dynamically Redefined Character Set, or DRCS for short, was a feature of Digital Equipment Corporation's smart terminals starting with the VT200 series in 1983. DRCS added a RAM buffer where new glyphs could be uploaded from the host system using the Sixel data format.
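A rough sketch of what a DRCS upload looks like on the wire: the host sends a DECDLD device control string containing the glyph bitmap encoded as sixel data (each character encodes six vertical pixels of one column), then designates and invokes the soft character set. The parameter order and set designator below follow DEC's VT-series documentation as best recalled here; treat them as assumptions and verify against the terminal's programmer reference.

```python
# Sketch: building a DECDLD device control string that uploads one soft glyph.
# The parameter order (Pfn;Pcn;Pe;Pcmw;Pss;Pt;Pcmh;Pcss), the " @" set name
# and the selection sequence are assumptions based on DEC's published VT
# documentation; verify against the terminal manual before relying on them.
import sys

DCS, ST = "\x1bP", "\x1b\\"

def sixel_bands(bitmap):
    """Encode a 0/1 bitmap (top row first) as DECDLD sixel data: one
    character per column per 6-pixel band (chr(63 + bits), bit 0 = top row
    of the band), bands separated by '/'."""
    h, w = len(bitmap), len(bitmap[0])
    bands = []
    for top in range(0, h, 6):
        cols = []
        for x in range(w):
            bits = 0
            for dy in range(min(6, h - top)):
                if bitmap[top + dy][x]:
                    bits |= 1 << dy
            cols.append(chr(63 + bits))
        bands.append("".join(cols))
    return "/".join(bands)

checker = [[(x + y) % 2 for x in range(8)] for y in range(10)]  # 8x10 glyph

decdld = (DCS + "1;1;2;8;1;1;10;0"   # Pfn;Pcn;Pe;Pcmw;Pss;Pt;Pcmh;Pcss
          + "{ @"                    # '{' then Dscs = ' @' names the soft set
          + sixel_bands(checker) + ST)
sys.stdout.write(decdld)
# Designate the soft set into G1, shift it in, print one glyph, shift out:
sys.stdout.write("\x1b) @\x0e!\x0f\n")
```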
References
DEC hardware | Dynamically Redefined Character Set | [
"Technology"
] | 64 | [
"Computing stubs",
"Natural language and computing",
"Character encoding"
] |
62,168,097 | https://en.wikipedia.org/wiki/Bivariant%20theory | In mathematics, a bivariant theory was introduced by Fulton and MacPherson, in order to put a ring structure on the Chow group of a singular variety; the resulting ring is called an operational Chow ring.
On technical levels, a bivariant theory is a mix of a homology theory and a cohomology theory. In general, a homology theory is a covariant functor from the category of spaces to the category of abelian groups, while a cohomology theory is a contravariant functor from the category of (nice) spaces to the category of rings. A bivariant theory is a functor both covariant and contravariant; hence, the name “bivariant”.
Definition
Unlike a homology theory or a cohomology theory, a bivariant class is defined for a map, not a space.
Let $f\colon X \to Y$ be a map. For such a map, we can consider the fiber square
$$\begin{array}{ccc} X' & \longrightarrow & X \\ \downarrow & & \downarrow f \\ Y' & \xrightarrow{\;g\;} & Y \end{array}$$
(for example, a blow-up.) Intuitively, the consideration of all the fiber squares like the above can be thought of as an approximation of the map $f$.
Now, a bivariant class of $f$ is a family of group homomorphisms indexed by the fiber squares:
$$c\colon A_k(Y') \to A_{k-p}(X') \quad \text{(for some fixed integer } p\text{)},$$
satisfying certain compatibility conditions.
Operational Chow ring
The basic question was whether there is a cycle map
$$A^*(X) \to H^*(X).$$
If X is smooth, such a map exists, since $A^*(X)$ is then the usual Chow ring of X. It has been shown that rationally there is no such map with good properties even if X is a linear variety, roughly a variety admitting a cell decomposition. The same work also notes that Voevodsky's motivic cohomology ring is "probably more useful" than the operational Chow ring for a singular scheme (§ 8 of loc. cit.)
References
Dan Edidin and Matthew Satriano, Towards an intersection Chow cohomology for GIT quotients
The last two lectures of Vakil, Math 245A Topics in algebraic geometry: Introduction to intersection theory in algebraic geometry
External links
nLab- bivariant cohomology theory
Abelian group theory
Algebraic geometry
Cohomology theories
Functors
Homology theory | Bivariant theory | [
"Mathematics"
] | 445 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory",
"Functors",
"Algebraic geometry"
] |
62,169,356 | https://en.wikipedia.org/wiki/List%20of%20Digital%20Accessible%20Information%20System%20software | Digital Accessible Information System (DAISY) books can be heard on standalone DAISY players, computers using DAISY playback software, mobile phones, and MP3 players (with limited navigation). DAISY books can be distributed on a CD/DVD, memory card or through the Internet.
A computerized text DAISY book can be read using refreshable Braille display or screen-reading software, printed as Braille book on paper, converted to a talking book using synthesised voice or a human narration, and also printed on paper as large print book. In addition, it can be read as large print text on computer screen.
Software players
Currently available software-based players include, in alphabetical order:
Discontinued software players
AMIS - Adaptive Multimedia Information System: a discontinued open-source self-voicing player for Windows XP, Vista and 7 that works with several screen readers and is available in many languages. It was developed by the DAISY Consortium.
Android Daisy ePub Reader: a discontinued open-source project for the Android platform, last updated in 2013
AnyDaisy: a discontinued extension for Firefox 3.x by Benetech (does not work in Firefox 4 or above)
ButtercupReader: a discontinued web-based silverlight application for DAISY 3 books
DAISYPlayer: discontinued free player for Microsoft Windows; only available in Spanish
DaisyWorm: player for DAISY 2.02 (2002) and DAISY 3 (2005), for iPhone, iPod touch and iPad; iOS 4 or higher, released commercially in 2010, since discontinued
Darwin Reader: a discontinued reader for Android, reads DAISY 2.02 and 3.0 text and audio books
Go Read: an open source DAISY reader for older Android devices (will not install on Android 10)
GoDaisy: discontinued online DAISY player, in Swedish
InDaisy Reader, a discontinued player for iPhone and iPod, accessible with VoiceOver; supports Daisy 2.02 and Daisy 3
Kolibre Vadelma, a discontinued open source DAISY 2.02-player supporting DAISY Online. Downloads and build instructions available for the Raspberry Pi-platform, compile instructions available for Debian Linux.
MAX DaisyPlayer, a discontinued free player for Microsoft Windows.
Mobile DAISY Player, a discontinued commercial player for Symbian phones
Read2Go: a discontinued accessible, commercial e-book reader for Apple iOS devices (iPad, iPhone, iPod Touch), specifically for books from Bookshare, an online library for people with print disabilities; developed by Benetech
Read:OutLoud 6 (discontinued commercial program for Mac OS and Microsoft Windows)
Read:OutLoud Bookshare Edition (discontinued)
ReadHear (commercial, discontinued; for Mac OS and Microsoft Windows)
Server tools
Daisy Uppsala Archive Project, server-side system for managing DAISY files
Online Daisy Delivery Technology, open-source software to deliver DAISY books online
Hardware players
There are a wide range of hardware products available that can play DAISY content, usually in a portable form factor. Some of these devices are dedicated to playback of books, while others focus on other functionality, such as PDA or mobile Internet access, and offer DAISY playback as either a feature of the unit or as a software add-on.
A short (incomplete) list of products that have built-in support for DAISY playback includes:
American Printing House for the Blind, Inc., Book Port Plus and Book Port DT
Pratsam Mobile, a portable handheld DAISY player that supports cellular networks, the DAISY Online Delivery Protocol, customized for use by the blind and visually impaired
Victor Reader Stream, a hand-held portable DAISY player for the blind, visually handicapped and print impaired, produced by HumanWare
Victor Reader Wave, also by HumanWare, is a portable CD player that can play DAISY content from CD media
BookSense, a similar, smaller unit produced by GW Micro; the advanced XT model features built-in flash memory and Bluetooth headset support for playback, as well as an FM radio
The National Library Service for the Blind and Physically Handicapped (NLS) in the United States has developed a proprietary DAISY player designed for use by its print-disabled patrons. The player will replace the aging cassette-based distribution system.
SensePlayer by HIMS is an advanced, accessible multimedia player that includes a field recorder, a handheld tactile keyboard for smartphones and tablets, and a customized portable OCR device. The SensePlayer is based on the Android operating system and is paired with a classical, tactile keyboard.
Production systems
Add-ins or extensions to create DAISY files from office software are also available:
Microsoft and Sonata Software created a Save as DAISY add-in for Microsoft Word to convert Office Open XML text documents to DAISY.
odt2daisy (OpenOffice.org Export As DAISY): an extension for Apache OpenOffice and LibreOffice that exports OpenDocument Text to DAISY XML or to Full DAISY (both XML and audio).
Other tools for DAISY production include:
List of products by the DAISY Consortium
Anemone Daisy Maker, an open-source program to make Daisy books from recordings with optional text and timings data
Book Wizard Producer
DAISY Demon, an open-source shell around the DAISY Pipeline to help automate the production of DAISY talking books, MP3, ePub, Word and HTML from XML file; developed by the Open University
DAISY Pipeline
daisy-validator
Dolphin Publisher
Obi: DAISY/Accessible EPUB 3 production tool
Pipeline GUI
PipeOnline, a web interface for the DAISY Pipeline
PLEXTALK Recording Software
Pratsam Producer, a production system for producing DAISY (with or without audio), import and management of PDF and XML, content quality measuring tools, automatic export of XHTML, DTBook, EPUB or Microsoft Word documents
Tobi: an authoring tool for DAISY and EPUB 3 talking books
References
External links
DAISY Consortium
DaisyNow.Net - The first online DAISY delivery web application
Daisy 3: A Standard for Accessible Multimedia Books
Accessible information
Audiobooks
Blindness equipment
Markup languages
XML-based standards
Open formats | List of Digital Accessible Information System software | [
"Technology"
] | 1,222 | [
"Computer standards",
"XML-based standards"
] |
62,170,079 | https://en.wikipedia.org/wiki/Bzigo | Bzigo is a technology startup company that develops autonomous devices for pest control. The company was founded by Nadav Benedek and Saar Wilf, who are both alumni of the Israel Defense Forces' intelligence Unit 8200.
Technology
The Bzigo device scans a room for mosquitoes using specialized optics and computer vision algorithms to identify flight patterns.
Once it detects that a mosquito has landed, the device marks its location with a pointer and sends a message to a phone application, allowing the recipient to locate the pest and kill it.
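As an illustration of the general approach (not Bzigo's proprietary algorithm, whose optics and flight-pattern models are not public), small fast-moving objects can be picked out of a static camera feed with background subtraction and size filtering; the camera index and area thresholds below are arbitrary placeholders.

```python
# Sketch: detecting small moving objects (e.g., flying insects) in a static
# camera feed via background subtraction. Illustrative only -- not Bzigo's
# algorithm; the area thresholds and camera index are placeholder values.
import cv2

cap = cv2.VideoCapture(0)                      # default webcam
bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                     # foreground = moving pixels
    mask = cv2.medianBlur(mask, 5)             # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if 2 <= cv2.contourArea(c) <= 60:      # keep mosquito-sized blobs only
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A real system would additionally track blob trajectories over time, both to distinguish a mosquito's erratic flight from dust and lighting changes and to detect the landing event described above.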
References
External links
Insect control
Consumer electronics brands
Smart devices
Indoor positioning system
American companies established in 2016
Electronics companies established in 2016 | Bzigo | [
"Technology"
] | 134 | [
"Home automation",
"Wireless locating",
"Wireless networking",
"Indoor positioning system",
"Smart devices"
] |
62,170,594 | https://en.wikipedia.org/wiki/Jenny%20Pickerill | Jenny Pickerill (born 23 November 1973) is a Professor of Environmental Geography and Head of Department at the University of Sheffield. Her work considers how people value and use the environment, the impact of social justice on environmental policy, and ways to change social practice.
Early life and education
Pickerill studied geography at Newcastle University. She moved to Scotland for her graduate studies, where she specialised in geographic information systems at the University of Edinburgh. She returned to Newcastle for her doctoral degree, where she earned her PhD in geography in 2000. During her PhD, Pickerill worked briefly at Lancaster University on a project with Bronislaw Szerszynski.
Research and career
Pickerill started her independent research career at Curtin University in Perth, where she studied the internet activism of Australian environmentalists. Pickerill was made a lecturer in human geography at the University of Leicester in 2003. She spent 2008 as a visiting fellow at the Oxford Internet Institute and moved to the University of Sheffield in 2014. Pickerill works on environmental geography, in particular how people use and value the environment. This aspect of her work has involved the use of social science to investigate the complicated relationships between humans and the environment. Pickerill has explored grassroots initiatives that tackle environmental challenges. She has studied how environmental activists share their understanding of the environment using technology and how they frame their message. She is also interested in environmental activists who choose to protect one aspect of the environment whilst ignoring another. Her work recognises that environmental issues often overlap with other forms of inequality, including racism, colonialism and neo-liberalism. Activist movements often incorporate people from a range of social categories, and Pickerill has examined this in the Occupy movement, the anti-war movement and the environmental movement in Australia.
Pickerill has studied the impact of experimental solutions on environmental challenges and role of students in redesigning their future. This has included ways to self-build safe, environmentally friendly housing. She has revealed that women are not well represented in eco-building communities. She is currently investigating the potential for eco-communities in environmentally friendly, sustainable cities.
Selected publications
Alongside her academic publications, Pickerill has written for The Conversation.
References
1973 births
Living people
Environmental scientists
Alumni of Newcastle University
Alumni of the University of Edinburgh
Academics of the University of Sheffield
Academic staff of Curtin University
Academics of the University of Leicester | Jenny Pickerill | [
"Environmental_science"
] | 486 | [
"Environmental scientists",
"British environmental scientists"
] |
62,170,643 | https://en.wikipedia.org/wiki/Artificial%20lateral%20line | An Artificial Lateral Line (ALL) is a biomimetic lateral line system. A lateral line is a system of sensory organs in aquatic animals, such as fish, that serves to detect movement, vibration, and pressure gradients in their environment. An artificial lateral line is an artificial biomimetic array of distinct mechanosensory transducers that similarly permits the formation of a spatial-temporal image of sources in the immediate vicinity based on their hydrodynamic signatures; the purpose is to assist in obstacle avoidance and object tracking. The biomimetic lateral line system has the potential to improve navigation in underwater vehicles when vision is partially or fully compromised. Underwater navigation is challenging due to the rapid attenuation of radio frequency and Global Positioning System signals. In addition, ALL systems can overcome some of the drawbacks of traditional localization techniques like SONAR and optical imaging.
The basic component of either a natural or artificial lateral line is a neuromast, a mechanoreceptive organ that allows the sensing of mechanical changes in water. Hair cells serve as the basic unit in flow and acoustic sensing. Some animals (like arthropods) use a single hair cell for this function, while other creatures, like fish, use a bundle of hair cells to achieve pointwise sensing. The fish lateral line consists of thousands of hair cells. In fish, a neuromast is a fine hair-like structure that uses rate coding to transmit the directionality of a stimulus; each neuromast has a direction of maximum sensitivity, providing directionality.
Biomimetic features
Neuromast
In the artificial lateral line, the neuromast's function is carried out by transducers. These tiny structures employ various systems such as hot-wire anemometry, optoelectronics or piezoelectric cantilevers to detect mechanical changes in water. Neuromasts are primarily classified into two types based on their location: superficial neuromasts, located on the skin, are used for velocity sensing to locate moving targets, whereas canal neuromasts, located below the epidermis and enclosed in a canal, utilize the pressure gradient between the inlet and outlet for object detection and avoidance. Fish use superficial neuromasts for rheotaxis and station holding as well.
Out of all the sensing techniques employed, only hot-wire anemometry is non-directional: it can accurately measure particle motion in the medium but not the direction of flow. A hot-wire anemometer can nevertheless resolve particle motions down to hundreds of nanometers, making it comparable to a neuromast in similar flow. In a simplified hot-wire sensor, a current-carrying conductor heats up due to Joule heating; the flow around the wire cools it, and the change in current required to restore the original temperature is the output. In another variant, the change in resistivity of the material with respect to the change in temperature of the hot wire is used as the output.
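A common way to turn such a sensor's reading into a flow-speed estimate is King's law, which relates the bridge voltage E to the flow velocity U as E² = A + B·Uⁿ, with A and B (and often n) found by calibration. The sketch below fits the constants from made-up calibration points and then inverts the relation; none of the numbers correspond to a real probe.

```python
# Sketch: calibrating a hot-wire anemometer with King's law, E^2 = A + B*U^n,
# then inverting it to estimate velocity from a measured bridge voltage.
# The calibration data and the fixed exponent n = 0.45 are illustrative only.
import numpy as np

n = 0.45                                         # typical King's-law exponent
U_cal = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # calibration speeds (m/s)
E_cal = np.array([1.31, 1.38, 1.47, 1.58, 1.72]) # bridge voltages (V), made up

# E^2 is linear in U^n, so fit A and B by least squares.
B, A = np.polyfit(U_cal**n, E_cal**2, 1)

def velocity(E):
    """Invert King's law: U = ((E^2 - A) / B)^(1/n)."""
    return ((E**2 - A) / B) ** (1.0 / n)

print(f"A = {A:.3f}, B = {B:.3f}")
print(f"E = 1.52 V  ->  U ~ {velocity(1.52):.2f} m/s")
```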
Division of labor
There is a division of labor in these systems: superficial neuromasts located on the epidermis sense low frequencies as well as direct current (steady flow), while the canal neuromasts located beneath the epidermis, enclosed in canals, detect alternating current using pressure gradients.
Cupula
The cupula is a gelatinous sack covering the hair-like neuromast protruding from the skin, a feature that developed over time and provides a better response to the flow field. Cupular fibrils extend from the hair-like neuromast. The cupula attenuates low-frequency signals by virtue of its inertia and amplifies higher-frequency signals due to leverage. In addition, these extended structures provide better sensitivity when the neuromast is submerged in the boundary layer. Recent studies use drop casting, in which a HA-MA solution is dripped over electrospun scaffolding to create a gravity-driven, prolate-spheroid-shaped cupula. Experimental comparison between the naked sensor and the newly developed sensor reveals positive results.
Canals
Canal neuromasts are enclosed in canals that run across the body. These canals filter out low-frequency flow that could saturate the system. A certain pattern is found in the concentration of neuromasts along the body among aquatic species: the canal system runs along the body in a single line that tends to branch out near the head. In fish, the canal location is suggestive of the hydrodynamic information that is available during swimming.
Canal distribution along the body
Commonly, the canal concentration peaks near the nose and drops significantly over the rest of the body. This trend is found in fish of varying sizes that occupy different habitats, and across a variety of species. Some studies hypothesize a close connection between canal location and bone development, and how the two are morphologically constrained. The exact placement of canals varies across species and can be a suggestive sign of functional role rather than developmental constraint.
Canal flexibility
The flexibility of the canal system has a significant effect on low-frequency signal attenuation, and the flexibility of the sensing element placed in the canal may add to the sensitivity of the canal artificial lateral line (CALL) system. Experimental data show that this factor creates a significant jump in the sensitivity of the system. Geometric improvements in the canal system and optimization of the sensing equipment could yield further gains.
Constrictions in canals near neuromast
At higher pressure gradients, the voltage output of devices with wall constrictions near the sensors in the canal artificial lateral line (CALL) was much more sensitive; according to Y. Jiang, Z. Ma, J. Fu, et al., their system could perceive a pressure gradient as low as 3.2 × 10⁻³ Pa/5 mm, comparable to that of Cottus bairdii found in nature. Additionally, this feature attenuates low-frequency hydrodynamic signals.
Applications
Navigation in shallow water bodies presents a challenge, especially for submersible vehicles. Flow fluctuations may adversely affect the trajectory of the craft, making online detection and real-time reaction an absolute necessity for adaptability.
Progress in the field of artificial lateral lines has benefited various fields other than underwater navigation. A major example is seismic imaging: the idea of selective frequency response in superficial neuromasts has encouraged scientists to design new methods of generating seismic images of features under the ocean using half the data, producing higher-resolution images than traditional methods while also saving processing time.
Similar systems
The electrosensory lateral line (ELL) employs passive electrolocation, except in certain groups of freshwater fish that utilize active electrolocation to emit and receive electric fields. Although it fulfils a similar role, the ELL can be distinguished from the lateral line system by the acute difference in their modes of operation.
Integumentary sensory organs (ISOs) are dome-shaped sensory organs found in the cranial region of crocodiles. They form a collection of sensory organs that can detect mechanical, pH and thermal changes. These mechanoreceptors are classified into two types: slowly adapting receptors (SA), which sense steady flow, and rapidly adapting receptors (RA), which sense oscillatory stimuli. ISOs can potentially detect the direction of a disturbance with high accuracy in 3D space. The whiskers of harbor seals are another example. In addition, some microorganisms use hydrodynamic imaging to hunt prey.
References
Biomimetics
Fish nervous system
Sensory organs in animals | Artificial lateral line | [
"Engineering",
"Biology"
] | 1,619 | [
"Bioinformatics",
"Bionics",
"Biological engineering",
"Biomimetics"
] |
62,172,368 | https://en.wikipedia.org/wiki/Gemini%20Mountains%2C%20Queensland | Gemini Mountains is a rural locality in the Isaac Region, Queensland, Australia. In the , Gemini Mountains had a population of 65 people.
Geography
The Goonyella railway line forms most of the western boundary of the locality, with Mount McLaren railway station serving Graincorp's grain handling facility ().
One of the four segments of the Peak Range National Park is in the south of the locality.
The locality contains the following mountains:
Fletchers Awl ()
Mount Castor ()
Mount Commissioner ()
Mount Mclaren ()
Mount Pollux ()
Mount Saddleback ()
Red Riding Hood ()
The land use is predominantly crop growing with some grazing on native vegetation.
History
The locality takes its name from the mountain range Gemini Mountains (), which consists of two volcanic peaks, Mount Castor and Mount Pollux. Castor and Pollux were the Gemini twins in Greek and Roman mythology.
Demographics
In the , Gemini Mountains had a population of 51 people.
In the , Gemini Mountains had a population of 65 people.
Education
There are no schools in Gemini Mountains. The nearest government primary schools are:
Moranbah State School in Moranbah to the north
Dysart State School in neighbouring Dysart to the east
Clermont State School in Clermont to the south
Kilcummin State School in neighbouring Kilcummin to the west.
The nearest government secondary schools are:
Moranbah State High School in Moranbah to the north
Dysart State High School in neighbouring Dysart to the east
Clermont State High School in Clermont to the south
References
Isaac Region
Localities in Queensland
Castor and Pollux | Gemini Mountains, Queensland | [
"Astronomy"
] | 323 | [
"Castor and Pollux",
"Astronomical myths"
] |
62,173,649 | https://en.wikipedia.org/wiki/Bimetallic%20nanoparticle | A bimetallic nanoparticle is a combination of two different metals that exhibits several new and improved properties. Bimetallic nanomaterials can take the form of alloys, core-shell structures, or contact aggregates. Due to their novel properties, they have gained a lot of attention among the scientific and industrial communities. When used as catalysts, they show improved activity compared to their monometallic counterparts. They are cost-effective, stable alternatives that exhibit high activity and selectivity, and hence a lot of effort has been put into the advancement of these catalysts. The combination or type of metals present, how they are combined, and their size determine their properties.
Since two distinct metals are combined, optimizing their properties through manipulation is possible, and there is a lot of flexibility in designing a bimetallic nanoparticle for specific applications. Several techniques have been developed for their synthesis and accurate characterization. The improved electronic properties that arise from bi-metallization are the most important of the novel properties. Electronic effects involve charge transfer or orbital hybridization between the constituent metals, while structural changes can result from alloy formation. The chemical and environmental parameters during synthesis play a role in determining the structural properties; in particular, the difference in the reduction rates of the different metal precursors determines the final structural properties of the nanomaterial.
The synthesis of bimetallic nanoparticles can be done using co-reduction, successive reduction, reduction of complexes containing both the metals and electrochemical methods. Co-reduction and successive reduction methods are the most popular preparative techniques.
Methods of synthesis
Co-reduction method
The co-reduction method is similar to the reduction method used in the synthesis of monometallic nanoparticles, the difference being that two metal precursors are used instead of one. The two precursors, along with the stabilizing agent, are completely dissolved in a suitable solvent, where the metals are present in their ionic states. To convert them into their zerovalent states, a reducing agent is added. The lighter transition metals have lower reduction potentials, which means they are reduced less readily; when present in their zerovalent states, these metals tend to oxidize very quickly and are therefore unstable. Since these metals are very important in the field of catalysis, several methods to stabilize them are sought after.
Successive reduction method
In the successive reduction method, the two precursors are added one after the other. This method generally leads to the formation of core-shell bimetallic nanoparticles. The precursor of the metal that has to form the core is added along with the stabilizing agent first. This is followed by the reducing agent. Once the complete reduction of the first metal is ensured, the second metal precursor is added. The second metal ion gets adsorbed on the nanoparticle surface and gets reduced. This results in the core-shell structure of the bimetallic nanoparticle.
Reduction of bimetallic complexes
A complex containing both the metals to be present in the bimetallic nanoparticle is taken as the precursor. The aqueous solution of these complexes in different concentrations is taken in a quartz vessel and reduced using a photoreactor. Polyvinylpyrrolidone can be used as a stabilizer. The size and composition of the nanoparticles vary with the concentration of the aqueous solution. The composition of the nanoparticles can be analyzed using EDX studies.
Electrochemical method
In chemical methods, the metal ions are reduced to their zerovalent states using a reducing agent. In the electrochemical process, bulk metal is converted into metal atoms. The size of the particle synthesized using this method can be controlled by manipulating the current density. There are two anodes made up of the constituent bulk metal and a platinum metal plate is used as the cathode. The stabilizing agent is mixed with the electrolyte. When current is passed ions of the metals are formed at the anode and are reduced by the electrons generated in the platinum electrode. The major attractions of this method are its cost-effectiveness, high yield, ease of isolation, and the ability to control the composition of metal simply through variation of current density.
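As a back-of-the-envelope illustration of why the current gives such direct control over the process, the rate at which metal ions are generated at a sacrificial anode follows Faraday's law of electrolysis. The current, electrode area and metal valences below are arbitrary example values.

```python
# Sketch: Faraday's law for a sacrificial anode, n_ions = I*t / (z*F),
# estimating how fast metal ions are supplied during electrochemical
# nanoparticle synthesis. The current and electrode area are example values.
F = 96485.0        # Faraday constant, C/mol

def ion_supply_rate(current_A, z):
    """Moles of metal ions generated per second at the anode."""
    return current_A / (z * F)

I = 0.010          # 10 mA total current (example)
area_cm2 = 2.0     # anode area (example), giving j = 5 mA/cm^2
for metal, z in [("Ag", 1), ("Cu", 2), ("Au", 3)]:
    rate = ion_supply_rate(I, z)
    print(f"{metal}(+{z}): {rate:.2e} mol/s at j = {I / area_cm2 * 1e3:.1f} mA/cm^2")
```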
Structures of Bimetallic nanoparticles
Crown jewel structure
In this type of arrangement, atoms of the more expensive or catalytically important metal are arranged individually over a comparatively cheaper, catalytically less active metal, so that each precious-metal atom is surrounded by atoms of the less expensive metal. Because they sit on the surface, they are highly accessible for catalytic reactions, and being surrounded by the less expensive metal also alters their electronic properties, which in turn improves their catalytic activity. Because the metal atoms must be fixed on the surface individually, the synthesis of the crown jewel structure is difficult. It can be achieved through chemical vapor deposition (CVD): the metal is atomized using an electron beam evaporator, with the whole process carried out in ultra-high vacuum, and the atoms diffuse and deposit at different points on the less expensive metal surface. Their distribution can be determined by controlling the metal flux in a reproducible manner. Another alternative is the solution-state method, although control of size and distribution is more complicated than with CVD.
Hollow structure
These structures have very high surface-to-volume ratios and porosity, and the material is multifunctional owing to its unique structure. The void can be used to encapsulate various multifunctional nanomaterials or even serve as a nanoreactor, and the shells can also be functionalized. These materials are better catalysts as they are cheaper and less dense, and material is saved. They can be synthesized by using already-prepared metal nanoparticles as sacrificial templates: in a galvanic replacement reaction, the template nanoparticle, on contact with ions of a metal of higher reduction potential, is oxidized and replaced. The diffusion process and the direction of the reaction can be controlled by changing the chemical environment.
Core-shell structure
As catalysis is carried out on the nanoparticle surface, the atoms at the center are wasted. This becomes more important when expensive metals are used as catalysts. To reduce the cost of the catalysts an inexpensive metal is made the core and the catalytically active metal is taken as the shell. This is achieved by first reducing the core metal followed by nucleation of the shell metal around it. The core metal also electronically modifies the shell and thereby improves catalytic activity. They can be synthesized by using a one-pot co-reduction method. Two metal precursors are added simultaneously. One of them will reduce first due to the difference in the reduction potentials of the different metal ions. This metal will form the core. The pre-formed nanoparticle acts as the seed required for the nucleation of the second metal around it. These structures can be characterized using TEM imaging. The shape and size can be manipulated by varying the different parameters. It is possible to synthesize a more complex multiwalled nanostructure, but it will require better control over the parameters.
Alloyed structure
The two different metals present are homogeneously arranged. Due to the variation in their standard reduction potentials, the metals tend to nucleate separately and form heterostructures or core-shell particles, so synthesizing alloyed bimetallic nanoparticles requires control over the reaction kinetics. Using a reducing agent strong enough to reduce both metal ions, such as sodium borohydride, is one option. Another option is the selection of an appropriate counter-ion or surfactant: the redox potentials of the metals are adjusted, through specific coordination or adsorption, so as to obtain simultaneous reduction. The addition of a metal ion that facilitates alloy formation is a third method. A gas-phase synthesis technique is also possible, in which the atoms are first brought to their atomic states, but this method requires complicated instrumentation.
References
See Also
Iron–platinum nanoparticle
Metals
Nanoparticles by composition | Bimetallic nanoparticle | [
"Chemistry"
] | 1,667 | [
"Metals"
] |
53,261,402 | https://en.wikipedia.org/wiki/NGC%201222 | NGC 1222 is an early-type lenticular galaxy located in the constellation of Eridanus. The galaxy was discovered on 5 December 1883 by the French astronomer Édouard Stephan. John Louis Emil Dreyer, the compiler of the New General Catalogue, described it as a "pretty faint, small, round nebula" and noted the presence of a "very faint star" superposed on the galaxy.
NGC 1222's morphological type of S0− would suggest that it should have a mostly smooth profile and a very dull appearance. However, the galaxy was imaged by the Hubble Space Telescope in 2016, and the image showed that there were several bright blue star forming regions, as well as dark reddish areas of interstellar dust. NGC 1222 is currently interacting with and swallowing two dwarf galaxies that are supplying the gas and dust needed to become a starburst galaxy.
One supernova has been observed in NGC 1222: SN 2024any (type Ia, mag. 17.59).
See also
NGC 1275, another starburst galaxy
References
External links
1222
011774
Lenticular galaxies
Starburst galaxies
Peculiar galaxies
Eridanus (constellation)
Discoveries by Édouard Stephan
Markarian galaxies | NGC 1222 | [
"Astronomy"
] | 246 | [
"Eridanus (constellation)",
"Constellations"
] |
53,263,934 | https://en.wikipedia.org/wiki/StatCrunch | StatCrunch is a web-based statistical software application from Pearson Education. StatCrunch was originally created for use in college statistics courses. As a full-featured statistics package, it is now also used for research and for other statistical analysis purposes.
History
American statistics professor Webster West created StatCrunch in 1997. Over the next 19 years West assisted by others added many more statistical procedures and graphing capabilities, and made user interface improvements.
In 2005, West received two awards for StatCrunch: the CAUSEweb Resource of the Year Award and the MERLOT Classics Award. In 2013, the StatCrunch Java code was rewritten in JavaScript in order to avoid Java browser security problems, and so that it would run on iOS and Android. In 2015, new ways of importing data were added, including importing multi-page data directly from Wikipedia tables and other Web sources, and also importing with drag-and-drop for various data formats.
In 2016, StatCrunch was acquired by Pearson Education, which had already been serving as the primary distributor of StatCrunch for several years.
Software
A StatCrunch license is included with many of Pearson's statistical textbooks. Because StatCrunch is a web application, it works on multiple platforms, including Windows, macOS, iOS, and Android.
Data in StatCrunch is represented in a "data table" view, which is similar to a spreadsheet view, but unlike spreadsheets, the cells in a data table can only contain numbers or text. Formulas cannot be stored in these cells. There are many ways to import data into StatCrunch. Data can be typed directly into cells in the data table. Entire blocks of data may be cut-and-pasted into the data table. Text files (.csv, .txt, etc.) and Microsoft Excel files (.xls and .xlsx) can be drag-and-dropped into the data table. Data can be pulled into StatCrunch directly from Wikipedia tables or other Web tables, including multi-page tables. Data can be loaded directly from Google Drive and Dropbox. Shared data sets saved by other StatCrunch community users can be searched for by title or keyword and opened in a data table.
Graphs, results, and reports created by StatCrunch can be shared with other users, in addition to the sharing of data sets.
StatCrunch has a library of data transformation functions. StatCrunch can also recode and reorganize data. All data is stored in memory, and all processing happens on the client, so response is fast, even with large data sets.
StatCrunch can interact with multiple graphs simultaneously. If a user selects a data point on one graph, then that same data point is highlighted on all other displayed graphs.
In addition to standard statistical and graphing procedures, StatCrunch has a collection of about forty "applets" which illustrate statistical concepts interactively.
See also
List of statistical packages
Comparison of statistical packages
References
Further reading
Glenn Ledder, Jenna P. Carpenter, Timothy D. Comar Undergraduate Mathematics for the Life Sciences: Models, Processes, and Directions The Mathematical Association of America (2013)
Jonathan Foster Collaborative Information Behavior: User Engagement and Communication Information Science Reference (2010)
Peter C. Bruce Introductory Statistics and Analytics: A Resampling Perspective Wiley (2015)
Bert Wachsmuth "Statistics in the Classroom on Touch-based Smart Phones" The Impact of Pen and Touch Technology on Education, Part of the Human–Computer Interaction Series pp 289–296, Springer (2015)
Webster West "Social Data Analysis with StatCrunch: Potential Benefits to Statistical Education" UCLA Department of Statistics (2009)
Nancy Leveille et al. "A survey of no (or low) cost statistical software packages for business statistics" University of Houston-Downtown (2011)
Renata Phelps, Kath Fisher, Allan H Ellis Organizing and Managing Your Research: A Practical Guide for Postgraduates, page 224, SAGE Publications Ltd (February 22, 2007)
Neil J. Salkind Statistics for People Who (Think They) Hate Statistics: The Excel Edition, page 331, SAGE Publications Inc. (July 21, 2006)
Megan Mocko, author. Dani Ben-Zvi, Katie Makar, editors The Teaching and Learning of Statistics: International Perspectives, pp. 219, 224. Springer International Publishing (2016)
Bert Wachsmuth, author. Edited by Tracy Hammond, Stephanie Valentine, Aaron Adler, Mark Payton "Statistics in the Classroom on Touch-based Smart Phones" (Chapter 30) in The Impact of Pen and Touch Technology on Education Springer International Publishing (2015)
External links
1997 software
Statistical software
Educational math software
Web applications
Pearson plc | StatCrunch | [
"Mathematics"
] | 980 | [
"Statistical software",
"Educational math software",
"Mathematical software"
] |
53,264,021 | https://en.wikipedia.org/wiki/NGC%205308 | NGC 5308 is an edge-on lenticular galaxy in the constellation of Ursa Major. It was discovered on 19 March 1790 by William Herschel. It was described by John Louis Emil Dreyer as "bright, pretty large" when he compiled the New General Catalogue. A small, irregular galaxy near NGC 5308 has been given the designation LEDA 2802348.
NGC 5308 was imaged by the Hubble Space Telescope in 2016. The galaxy appears to be a flat, smooth disk, typical of most lenticular galaxies. Many large globular clusters orbit the galaxy; these are visible as tiny dots surrounding the galaxy, and are mostly made of old, aging stars similar to the galaxy itself.
NGC 5322 Group
According to A.M. Garcia, the galaxy NGC 5308 is a member of the NGC 5322 group (also known as LGG 360), which contains at least 10 other galaxies, including NGC 5322, NGC 5342, NGC 5372, NGC 5376, NGC 5379, NGC 5389, UGC 8684, UGC 8714, and UGC 8716.
Supernova
One supernova has been observed in NGC 5308: SN 1996bk (type Ia, mag. 14.5) was discovered by Piero Mazza and Stefano Pesci on 12 October 1996, located 10.5" south and 17.9" west of center of the galaxy.
See also
List of NGC objects (5001–6000)
References
External links
5308
08722
048860
Lenticular galaxies
Ursa Major | NGC 5308 | [
"Astronomy"
] | 329 | [
"Ursa Major",
"Constellations"
] |
53,264,144 | https://en.wikipedia.org/wiki/Jenny%20Drumgoole | Jenny Drumgoole is a multimedia artist, based in Philadelphia, Pennsylvania, who works in both video and performance, as well as photography. Many of her videos involve her taking part in competitions such as "The Real Women of Philadelphia" or the Wing Bowl, as well as performing as one of her personas, Soxx Roxx, among many other heightened versions of herself. After spending a lot of time making photographs, she switched over to video and performance work. Her first video, Wing Bowl 13, started as a photo project in which she photographed competitive eater Sonya Thomas at the Wing Bowl in 2005; she found that the sounds a video can include heightened the experience. She also has a video series called "Happy Trash Day!" in which she goes around Philadelphia celebrating the city's sanitation workers. In 2017 Drumgoole was the Digital Artist in Residence at the Main Line Art Center in Haverford, Pennsylvania.
References
External links
http://jennydrumgoole.com/
Living people
21st-century American women artists
Artists from Philadelphia
Multimedia artists
Women multimedia artists
Year of birth missing (living people) | Jenny Drumgoole | [
"Technology"
] | 236 | [
"Multimedia",
"Multimedia artists"
] |
53,264,410 | https://en.wikipedia.org/wiki/Multipole%20density%20formalism | The Multipole Density Formalism (also referred to as Hansen-Coppens Formalism) is an X-ray crystallography method of electron density modelling proposed by Niels K. Hansen and Philip Coppens in 1978. Unlike the commonly used Independent Atom Model, the Hansen-Coppens Formalism presents an aspherical approach, allowing one to model the electron distribution around a nucleus separately in different directions and therefore describe numerous chemical features of a molecule inside the unit cell of an examined crystal in detail.
Theory
Independent Atom Model
The Independent Atom Model (abbreviated to IAM), upon which the Multipole Model is based, is a method of charge density modelling. It relies on an assumption that electron distribution around the atom is isotropic, and that therefore charge density is dependent only on the distance from a nucleus. The choice of the radial function used to describe this electron density is arbitrary, granted that its value at the origin is finite. In practice either Gaussian- or Slater-type 1s-orbital functions are used.
Due to its simplistic approach, this method provides a straightforward model that requires no additional parameters (other than positional and Debye–Waller factors) to be refined. This allows the IAM to perform satisfactorily when only a relatively small amount of data from the diffraction experiment is available. However, the fixed shape of the singular basis function prevents any detailed description of aspherical atomic features.
Kappa Formalism
In order to adjust some valence shell parameters, the Kappa formalism was proposed. It introduces two additional refinable parameters: an outer shell population (denoted as $P_v$) and its expansion/contraction ($\kappa$). Therefore, the electron density is formulated as:

$$\rho(r) = \rho_{\mathrm{core}}(r) + P_v\,\kappa^{3}\,\rho_{\mathrm{valence}}(\kappa r)$$

While $P_v$, being responsible for the charge flow part, is linearly coupled with the partial charge, the normalised parameter $\kappa$ scales the radial coordinate $r$. Therefore, lowering $\kappa$ results in expansion of the outer shell and, conversely, raising it results in contraction. Although the Kappa formalism is still, strictly speaking, a spherical method, it is an important step towards understanding modern approaches, as it allows one to distinguish chemically different atoms of the same element.
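The shell-scaling behaviour is easy to see numerically. The following sketch evaluates the Kappa-formalism density using simple Slater-type 1s functions; the exponents, the population, and the use of 1s-type shells for both core and valence are illustrative assumptions, not values from any refinement.

```python
import numpy as np

def slater_1s_density(r, zeta):
    """Spherical 1s Slater-type density, normalised to one electron."""
    return (zeta**3 / np.pi) * np.exp(-2.0 * zeta * r)

def kappa_density(r, p_v, kappa, zeta_core=5.7, zeta_val=1.6):
    """Kappa-formalism density: a frozen core plus a scalable valence shell.
    The kappa**3 prefactor keeps the valence shell normalised, so its
    integral remains exactly p_v electrons for any kappa."""
    core = slater_1s_density(r, zeta_core)                      # fixed inner shell
    valence = kappa**3 * slater_1s_density(kappa * r, zeta_val)
    return core + p_v * valence

r = np.linspace(0.1, 4.0, 5)
print(kappa_density(r, p_v=4.0, kappa=0.9))   # kappa < 1: expanded outer shell
print(kappa_density(r, p_v=4.0, kappa=1.1))   # kappa > 1: contracted outer shell
```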
Multipole description
In the multipole model description, the charge density around a nucleus is given by the following equation:

$$\rho(r) = P_c\,\rho_{\mathrm{core}}(r) + P_v\,\kappa^{3}\,\rho_{\mathrm{valence}}(\kappa r) + \sum_{l=0}^{l_{\max}} \kappa'^{3} R_l(\kappa' r) \sum_{m=0}^{l} P_{lm\pm}\,d_{lm\pm}(\theta,\varphi)$$

The spherical part remains almost indistinguishable from the Kappa formalism, the only difference being the parameter $P_c$, corresponding to the population of the inner shell. The real strength of the Hansen-Coppens formalism lies in the right, deformational part of the equation. Here $\kappa'$ fulfils a role similar to that of $\kappa$ in the Kappa formalism (expansion/contraction of the aspherical part), whereas the individual $R_l$ are fixed spherical radial functions, analogous to $\rho_{\mathrm{valence}}$. The spherical harmonics $d_{lm\pm}$ (each with its population parameter $P_{lm\pm}$) are, however, introduced to simulate the electrically anisotropic charge distribution.
In this approach, a fixed coordinate system for each atom needs to be applied. Although at first glance it may seem practical to tie the coordinates of all atoms indiscriminately to the unit cell, it is far more beneficial to assign each atom its own local coordinate system, which allows one to focus on hybridisation-specific interactions. While the single sigma bond of a hydrogen atom can be described well using z-parallel pseudoorbitals, xy-plane-oriented multipoles with a 3-fold rotational symmetry prove more beneficial for flat aromatic structures.
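As an illustration of the deformational part, the sketch below evaluates a single l = 1, m = 0 term of the sum, using a simple Slater-type radial function and the cos θ angular factor; the exponent, population, and normalisation constants are illustrative assumptions rather than database values.

```python
import numpy as np

def deformation_term(r, theta, p_10, kappa_prime, zeta=2.0):
    """One l = 1, m = 0 term of the Hansen-Coppens deformation sum:
    population * Slater-type radial function * real spherical harmonic.
    Density-normalised harmonics differ from cos(theta) only by a constant."""
    radial = kappa_prime**3 * (zeta**3 / np.pi) * np.exp(-zeta * kappa_prime * r)
    return p_10 * radial * np.cos(theta)

# A dipolar deformation along the local z axis: density is shifted from
# theta = pi towards theta = 0, as for a bond or lone pair on that axis.
print(deformation_term(r=0.5, theta=0.0,   p_10=0.2, kappa_prime=1.1))
print(deformation_term(r=0.5, theta=np.pi, p_10=0.2, kappa_prime=1.1))
```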
Applications
The primary advantage of the Hansen-Coppens formalism is its ability to free the model from spherical restraints and describe the surroundings of a nucleus far more accurately. In this way it becomes possible to examine some molecular features which would normally be only roughly approximated or completely ignored.
Hydrogen positioning
X-ray crystallography allows the researcher to determine the position of peak electron density precisely and to reason about the placement of nuclei from this information. This approach works without any problems for heavy (non-hydrogen) atoms, whose inner shell electrons contribute to the density function to a far greater degree than outer shell electrons.
However, hydrogen atoms possess a feature unique among the elements: exactly one electron, which is moreover located in their valence shell and therefore involved in creating strong covalent bonds with atoms of various other elements. While a bond is forming, the maximum of the electron density function moves significantly away from the nucleus and towards the other atom. This prevents any spherical approach from determining the hydrogen position correctly by itself. Therefore, the hydrogen position is usually estimated based on neutron crystallography data for similar molecules, or it is not modelled at all in the case of low-quality diffraction data.
It is possible (albeit disputable) to freely refine hydrogen atoms' positions using the Hansen-Coppens formalism, after releasing the bond lengths from any restraints derived from neutron measurements. The bonding orbital simulated with adequate multipoles describes the density distribution neatly while preserving believable bond lengths. It may be worth approximating hydrogen atoms' anisotropic displacement parameters, e.g. using SHADE, before introducing the formalism and, possibly, discarding bond distance constraints.
Bonding modelling
In order to analyse the length and strength of various interactions within the molecule, Richard Bader's "Atoms in molecules" theorem may be applied. Due to the complex description of the electron field provided by this aspherical model, it becomes possible to establish realistic bond paths between interacting atoms as well as to find and characterise their critical points. Deeper insight into this data yields useful information about bond strength, type, polarity or ellipticity, and when compared with other molecules brings greater understanding about the actual electron structure of the examined compound.
Charge flow
Because the population of each multipole of every atom is refined independently, individual atomic charges will rarely be integers. In real cases, electron density flows freely through the molecule and is not bound by any restrictions resulting from the outdated Bohr atom model and found in the IAM. Therefore, through e.g. an accurate Bader analysis, net atomic charges may be estimated, which again is beneficial for deepening the understanding of the systems under investigation.
Drawbacks and limitations
Although the Multipole Formalism is a simple and straightforward alternative means of structure refinement, it is definitely not flawless. While usually for each atom either three or nine parameters are to be refined, depending on whether an anisotropic displacement is being taken into account or not, a full multipole description of heavy atoms belonging to the fourth and subsequent periods (such as chlorine, iron or bromine) requires refinement of up to 37 parameters. This proves problematic for any crystals possessing large asymmetric units (especially macromolecular compounds) and renders a refinement using the Hansen-Coppens Formalism unachievable for low-quality data with an unsatisfactory ratio of independent reflections to refined parameters.
Caution should be taken while refining some of the parameters simultaneously (i.e. $\kappa$ or $\kappa'$, multipole populations and thermal parameters), as they may correlate strongly, resulting in an unstable refinement or unphysical parameter values. Applying additional constraints resulting from local symmetry for each atom in a molecule (which decreases the number of refined multipoles) or importing populational parameters from existing databases may also be necessary to achieve a passable model. On the other hand, the aforementioned approaches significantly reduce the amount of information required from experiments, while preserving some level of detail concerning aspherical charge distribution. Therefore, even macromolecular structures with satisfactory X-ray diffraction data can be modelled aspherically in a similar fashion.
Despite their similarity, individual multipoles do not correspond to atomic projections of the molecular orbitals of a wavefunction as obtained from quantum calculations. Nevertheless, as summarized by Stewart, "The structure of the model crystal density, as a superposition of pseudoatoms [...] does have quantitative features which are close to many results based on quantum chemical calculations". If the overlap between the atomic wavefunctions is small enough, as occurs for example in transition metal complexes, the atomic multipoles may be correlated with the atomic valence orbitals, and multipolar coefficients may be correlated with the populations of metal d-orbitals.
A stronger correlation between the X-ray measured diffracted intensities and quantum mechanical wavefunctions is possible using the wavefunction based methods of Quantum Crystallography, as for example the X-ray atomic orbital model, the so-called experimental wavefunction or the Hirshfeld Atom Refinement.
References
Theoretical chemistry
X-ray crystallography
Crystallography
Diffraction | Multipole density formalism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,774 | [
"Spectrum (physical sciences)",
"Materials science",
"Theoretical chemistry",
"Crystallography",
"Diffraction",
"Condensed matter physics",
"nan",
"X-ray crystallography",
"Spectroscopy"
] |
53,264,617 | https://en.wikipedia.org/wiki/Cerium%28III%29%20fluoride | Cerium(III) fluoride (or cerium trifluoride), CeF3, is an ionic compound of the rare earth metal cerium and fluorine.
It appears as a mineral in the form of fluocerite-(Ce) - a very rare mineral species related mainly to pegmatites and rarely to oxidation zones of some polymetallic ore deposits. CeF3 may be used as a Faraday rotator material in the visible, near-infrared and mid-infrared spectral range.
Structure
The crystal structure of cerium(III) fluoride is described as the LaF3 (tysonite) structure type. It contains 9-coordinate cerium ions that adopt an approximately tricapped trigonal prismatic coordination geometry, although they can be considered 11-coordinate if two more distant fluorides are counted as part of the cerium coordination environment. The three crystallographically independent fluoride ions are 3-coordinate and range in geometry from trigonal planar to pyramidal.
References
Cerium(III) compounds
Fluorides
Lanthanide halides | Cerium(III) fluoride | [
"Chemistry"
] | 222 | [
"Fluorides",
"Salts"
] |
53,264,770 | https://en.wikipedia.org/wiki/Aspergillus%20assulatus | Aspergillus assulatus (also named Neosartorya assulata) is a species of fungus in the genus Aspergillus. It is from the Fumigati section. The species was first described in 2014. It has been reported to produce indole alkaloids and apolar metabolites.
Growth and morphology
A. assulatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
Further reading
assulatus
Fungi described in 2014
Fungus species | Aspergillus assulatus | [
"Biology"
] | 136 | [
"Fungi",
"Fungus species"
] |
53,264,857 | https://en.wikipedia.org/wiki/Probability%20management | The discipline of probability management communicates and calculates uncertainties as data structures that obey both the laws of arithmetic and probability, while preserving statistical coherence. The simplest approach is to use vector arrays of simulated or historical realizations and metadata called Stochastic Information Packets (SIPs). A set of SIPs, which preserve statistical relationships between variables, is said to be coherent and is referred to as a Stochastic Library Unit with Relationships Preserved (SLURP). SIPs and SLURPs allow stochastic simulations to communicate with one another. For example, see Analytica (Wikipedia), Analytica (SIP page), Oracle Crystal Ball, Frontline Solvers, and Autobox.
The first large documented application of SIPs involved the exploration portfolio of Royal Dutch Shell in 2005, as reported by Savage, Scholtes, and Zweidler, who formalized the discipline of probability management in 2006. The topic has also been explored at length in the subsequent literature.
Vectors of simulated realizations of probability distributions have been used to drive stochastic optimization since at least 1991. Andrew Gelman described such arrays of realizations as Random Variable Objects in 2007.
A recent approach does not store the actual realizations, but delivers formulas known as Virtual SIPs that generate identical simulation trials in the host environment regardless of platform. This is accomplished through inverse transform sampling, also known as the F-Inverse method, coupled to a portable pseudo random number generator, which produces the same stream of uniform random numbers across platforms.
Quantile parameterized distributions (QPDs) are convenient for inverse transform sampling in this context. In particular, the Metalog distribution is a flexible continuous probability distribution that has simple closed-form equations and can be parameterized directly by data using only a handful of parameters.
An ideal pseudo random number generator for driving inverse transforms is the HDR generator developed by Douglas W. Hubbard. It is a counter-based generator with a four-dimensional seed plus an iteration index that runs in virtually all platforms including Microsoft Excel. This allows simulation results derived in R, Python, or other readily available platforms to be delivered identically, trial by trial to a wide audience in terms of a combination of a few parameters for a Metalog distribution accompanied by the five inputs to the HDR generator.
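The mechanics of a virtual SIP can be sketched in a few lines. The generator below is a hash-based stand-in for a portable counter-based generator (it is not the published HDR algorithm), and the exponential quantile function stands in for a closed-form quantile such as the metalog's; both choices are assumptions made for illustration.

```python
import hashlib
import math

def counter_uniform(seed: int, index: int) -> float:
    """Deterministic uniform draw in (0, 1) from (seed, index).
    A hash-based stand-in for a counter-based generator such as HDR:
    the same inputs give the same output on every platform."""
    digest = hashlib.sha256(f"{seed}:{index}".encode()).digest()
    n = int.from_bytes(digest[:8], "big")
    return (n + 0.5) / 2.0**64            # strictly inside (0, 1)

def exponential_quantile(p: float, rate: float = 1.0) -> float:
    """F-inverse of the exponential distribution, standing in here
    for a closed-form quantile function such as the metalog's."""
    return -math.log(1.0 - p) / rate

def virtual_sip(seed: int, trials: int, rate: float = 1.0) -> list:
    """Regenerate simulation trials on demand instead of storing them;
    any platform applying the same formulas reproduces the identical SIP."""
    return [exponential_quantile(counter_uniform(seed, i), rate)
            for i in range(trials)]

print(virtual_sip(seed=42, trials=5))
```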
In 2013, ProbabilityManagement.org was incorporated as a 501(c)(3) nonprofit that supports this approach through education, tools, and open standards. Executive Director Sam Savage is the author of The Flaw of Averages: Why we Underestimate Risk in the Face of Uncertainty and is an adjunct professor at Stanford University. Harry Markowitz, Nobel Laureate in Economics, was a co-founding board member. The nonprofit has received financial support from Chevron Corporation, General Electric, Highmark Health, Kaiser Permanente, Lockheed Martin, PG&E, and Wells Fargo Bank. The SIPmath 2.0 Standard supports XLSX, CSV, and XML Formats. The SIPmath 3.0 Standard uses JSON objects to convey virtual SIPs based on the Metalog Distribution and HDR Generator.
References
Stochastic simulation
Monte Carlo methods
Probability distributions
Risk analysis | Probability management | [
"Physics",
"Mathematics"
] | 640 | [
"Functions and mappings",
"Probability distributions",
"Monte Carlo methods",
"Mathematical objects",
"Computational physics",
"Mathematical relations"
] |
53,265,538 | https://en.wikipedia.org/wiki/G%C3%A9otechnique%20Lecture | The Géotechnique lecture is an biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association named after its major scientific journal Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
References
Civil engineering
Geotechnical engineering
Science lecture series
Soil mechanics
Biennial events
1989 establishments in the United Kingdom
Recurring events established in 1989
British lecture series | Géotechnique Lecture | [
"Physics",
"Engineering"
] | 112 | [
"Applied and interdisciplinary physics",
"Soil mechanics",
"Geotechnical engineering",
"Construction",
"Civil engineering"
] |
53,265,643 | https://en.wikipedia.org/wiki/Video%20game%20walkthrough | A video game walkthrough is a guide aimed towards improving a player's skill within a particular video game and often designed to assist players in completing either an entire video game or specific elements. Walkthroughs may alternatively be set up as a playthrough, where players record themselves playing through a game and upload or live-stream it to the internet. Walkthroughs may be considered guides on helping to enhance the experience of players, to assist towards unlocking game achievements or simply as a means to socialise with like-minded individuals as a distraction from everyday life.
Walkthroughs originated as text-based descriptive instructions in magazines for playing through a video game. With the growth in popularity of computers and the internet, video game walkthroughs expanded to digital and video formats; according to a study undertaken in Finland during 2015, the average age of watchers is 23 years old, and watchers are predominantly male. Some individuals and companies have been known to earn lucrative income by recording and publicly offering guides.
History
With the growth in popularity of video gaming in the early 1980s, a new genre of video game guide book emerged that anticipated walkthroughs. Written by and for gamers, books such as The Winners' Book of Video Games (1982) and How To Beat the Video Games (1982) focused on revealing underlying gameplay patterns and translating that knowledge into mastering games. The term walk-through was used to describe step-by-step video game solutions as early as 1984 in the game guide compilation Conquering Adventure Games; this usage of the term was established by 1988 and popularized with the publication of Quest for Clues, a collection of guides for adventure games and role-playing video games that referred to its solutions as "walkthroughs".
Video game walkthroughs were originally included in video game magazines or on text-bulletin boards, and compiled in guide book anthologies. In the late 1980s through to the mid 2000s, video game walkthroughs were also available through telephone 'hot-lines' in the United States. In the 1980s, walkthrough anthology books were popular and lucrative alternatives to single-game hint books published by game developers, such as Infocom's InvisiClues series. Despite the rise in popularity of internet-based guides, text-based walkthroughs are still present today in both print and digital formats. Examples of print publications include strategy guides published by Prima Games, whereas text-based digital guides are hosted on gaming websites such as IGN, GamesRadar, and GameFAQs, often in the form of wikis. Until its closure by parent company Future plc, Computer and Video Games (CVG) also created and hosted digital guides on their now defunct website.
Player created digital walkthroughs are typically designed to assist other players in accomplishing certain feats within video games and are similar to text-based or telephony-based walkthroughs, except they can also be solely for entertainment purposes. These digital walkthroughs are typically uploaded to video sharing websites such as YouTube or live-streamed playthroughs to media streaming sites such as Twitch. Let's Play videos are a special type of walkthrough generally more focused on entertaining rather than informing the viewer through humorous commentary given by the video's host as they complete the game.
Format
Given there is no standardized format for the creation of text-based walkthroughs, guides exist that contain extensive examples and step-by-step instructions on how to write text-based walkthrough content. Prima Games and Computer and Video Games have produced walkthroughs. Prima Games produces official, dedicated text-based video game walkthroughs and strategy guides for a variety of video games in both print and digital formats. Computer and Video Games (CVG) published both text and video-based walkthroughs of video games on their website and official YouTube channel until their closure by Future in February 2015 in asset consolidation between various Future brands. IGN also creates and publishes video game walkthroughs in both text and video formats.
When it comes to video walkthroughs of games, gameplay may be recorded in multiple ways, such as through the use of screencast software, built-in recording features in some emulators or via a video capture device connected to a console or another computer. Some video games also include built-in recording features, such as Grand Theft Auto V (2013), which included in-game recording and editing features in its PlayStation 4 and Xbox One re-releases, allowing players to record and edit gameplay to share with others. Video content is typically shared over the internet via streaming, using video sharing and media streaming websites such as YouTube and Twitch, where the content has a potential audience consisting of millions of people.
Motivations
In a study on the different motivations of walkthrough viewers conducted by Max Sjöblom and Juho Hamari from the University of Tampere in 2016, numerous viewer motivations were discussed. From the findings, the five most significant motivations were found to be improving player experience, confidence, knowledge about a particular game, socializing and creating an 'escape' or distraction from their everyday life. Walkthroughs may also guide players throughout an entire game or only certain sections and may be guides on finding rare collectables or unlocking achievements.
According to Barbara Ortutay of the Associated Press, players "not only see the live and recorded video sessions as a way to sharpen their abilities, but also as a way to interact with star players in chatrooms or simply be entertained." According to Business Insider and The Verge, viewers of this genre of video content and live streams use them not only for their entertainment value, but also to assist with a variety of things ranging from purchasing decisions to "get[ting] better at playing games." GameRadar+ has called the watching of video game playthroughs the "Netflix of video games" and CNN declared the watching of video games being played by other people via videos and live streams "must-see TV".
Some video game players have been able to make a viable business model out of playing video games as both a guide and for the entertainment of viewers; internet personalities such as TheRadBrad, DanTDM, Chuggaaconroy and Ali-A have been cited as examples of video game players who have been able to make money from creating video game walkthroughs. As a result of the influx of players uploading or streaming their content, multi-channel networks were formed in order to assist content creators in multiple areas, in exchange for a percentage of the advertisement revenue generated.
Demographics
In February 2015, a study of video game walkthrough viewers conducted by the University of Tampere in Finland recruited respondents through self-selection (over 93% reported having a Twitch account). Of 1,091 validated responses, the average age was approximately 23 years, and 92.3% of respondents were male. The majority earned less than ten thousand dollars a year and had a secondary level of education (52.19%), with all other education levels tending to watch less.
See also
Longplay (video games)
References
Notes
Further reading
PC Mag Twitch and Beyond: The Best Video Game Live Streaming Services on PC Magazine
The Business of Playing Video Games on Pacific Standard Magazine
Walkthrough
Video game terminology | Video game walkthrough | [
"Technology"
] | 1,551 | [
"Computing terminology",
"Video game terminology"
] |
53,265,936 | https://en.wikipedia.org/wiki/Sleep%20and%20emotions | Emotions play a key role in overall mental health, and sleep plays a crucial role in maintaining the optimal homeostasis of emotional functioning. Deficient sleep, both in the form of sleep deprivation and restriction, adversely impacts emotion generation, emotion regulation, and emotional expression.
Models of sleep loss and emotional reactivity
Scientists offer two explanations for the effects of sleep loss on emotions. One explanation is that sleep loss causes disinhibition of emotional brain regions, leading to an overall increase in emotional intensity (also referred to as Dysregulation Model). The other explanation describes how sleep loss causes an increase in fatigue and sleepiness, coupled with an overall decrease in energy and arousal, leading to an overall decrease in emotional intensity (also referred to as Fatigue Model).
The dysregulation model
The dysregulation model is supported by neuroanatomical, physiological, and subjective self-report studies. Emotional brain regions (e.g. the amygdala) have shown 60% greater reactivity to emotionally negative photographs following one night of sleep deprivation, as measured by functional magnetic resonance imaging. Five days of sleep restriction (a four-hour sleep opportunity per night) caused a decrease in connectivity with cortical brain regions involved in the regulation of the amygdala. Pupil diameter was shown to increase significantly in response to negative photographs following sleep deprivation. When exposed to positive stimuli, sleep deprived participants showed amplified emotional reactivity throughout various midbrain, striatal, limbic, and visual processing brain regions. One night of sleep deprivation caused participants to judge neutral images more negatively than non-sleep-deprived participants. One night of sleep loss also caused increased impulsivity to negative stimuli.
The fatigue model
The fatigue-model is supported by subjective self-report and physiological studies. Arousal, as measured by electroencephalograph (EEG), decreases as sleep loss increases, leading to a decrease in the desire to perform and exert effort. Short-term sleep loss is associated with blunting in the recognition of negative and positive facial expressions. Various forms of emotional expression, including facial and vocal expression, are adversely affected by sleep loss. Following one night of sleep deprivation, participants show decreased facial expressiveness in response to positive stimuli, as well as decreased vocal expression of positive emotion. Sleep deprivation slows the generation of facial reactions in response to emotional faces. One to two nights of sleep loss in healthy adults is associated with a decrease in the generated intensity of positive moods (i.e. happiness and activation), as well as an increase in the generated intensity of negative moods (i.e. anger, depression, fear, and fatigue). Long-term chronic exposure to insufficient sleep is associated with a decline in optimism and sociability, and an increase in subjective experiences of sleepiness and fatigue. Furthermore, sleep restricted to five hours a night over the course of a week causes significant increases in self-reports of subjective mood disturbance and sleepiness.
Sleep, emotions, and psychiatric ailments
Deficient sleep patterns are prominent in many psychiatric ailments. Insomnia increases the risk of a depressive episode, sleep deprivation influences the onset of hypomania, and sleep disturbance contributes to the maintenance of mood disorders. Amongst manic bipolar patients, sleep loss may act as a trigger in the onset of a manic episode.
Sleep patterns are affected by behavioral and emotional disorders, and aspects of emotional and cognitive well-being are influenced by sleep patterns. Scientists have examined the effects of deficient sleep patterns on emotion regulation in individuals diagnosed with mental disorders ( e.g. depression and anxiety), borderline personality disorder, bipolar disorder, and panic disorder. Methods typically include observational, subjective, behavioral, and physiological measures of emotional functioning.
Emotion regulation difficulties are associated with greater symptoms of depression, anxiety, and borderline personality, which worsen with poor sleep patterns. Heart rate variability (HRV) describes the variation in the time interval between heartbeats and is linked to emotion regulation capacity, with higher resting HRV associated with greater emotion regulation capacity and lower resting HRV associated with lower capacity. Physiological data suggest that HRV is negatively affected by sleep loss, as seen in panic disorder patients with poor sleep quality who display increased cognitive inhibition due to reduced HRV. Emotion dysregulation has also been shown to play a role in the maintenance of generalized anxiety disorder, panic disorder, obsessive-compulsive disorder, and posttraumatic stress disorder. Overall, deficient sleep plays a role in dampening emotions in clinical populations already susceptible to emotion dysregulation, as well as maintaining various psychiatric conditions by contributing to emotional dysfunction.
Children and emotional Development
Several important emotional characteristics that develop in childhood have been linked to sleep quality and duration, for example approachability, adaptability and attachment. Sleep disruption has been argued to play a role in crying frequency; crying has been interpreted as an early form of behavioral dysregulation and has therefore been linked to emotion regulation.
Dreaming as a Mood-Regulation System
It is hypothesized that dreaming might be a way of improving mood in non-clinical populations. The evidence for this phenomenon has been collected from home dream reports in psychotherapy and from laboratory dreams collected after waking a participant in a REM sleep phase. Adults often remember dreams which have a negative emotional component, whereby women recall more dreams than men and dream recall is associated with a higher level of anxiety and lighter sleep.
Dreams after Stress
A study of depressed and healthy adults showed that, in healthy subjects, dreaming was a way to positively influence mood and cope with stress at night. Dreams of depressed persons, however, might deteriorate their mood further. The generalizability of these results is limited by the small sample and the lack of reported dreams from depressed patients.
Emotions are more apparent during REM sleep than during other stages of sleep, and negative emotions diminish during REM sleep. In a study by Cartwright et al., people with depression reported feeling better after going through stages of REM sleep. Conversely, a theory proposed by Revonsuo states that when people experience negative emotions or negative events, REM sleep replays such events, a process known as rehearsal. During REM sleep, areas of the brain responsible for emotion, including the suborbital and cortical areas, are activated, while arousing emotions are suppressed. Scientists have also noticed a decrease in the hormone noradrenaline, which is released into the body after a highly stimulating event. As observed by Åkerstedt, people report trouble falling asleep or sleeping consistently through the night when a stressful event is happening in their life. REM sleep thus aids people experiencing negative emotion or high stress.
Circadian Rhythm and Emotions
The circadian rhythm provides a person with a signal for when to sleep and when to wake up. If the circadian rhythm and the sleep-wake cycle are misaligned, this might lead to negative affect and emotional instability. Emotions have been found to vary depending on the circadian rhythm and on how long one has been awake. Circadian sleep-rhythm disorders such as shift-work disorder or jet lag disorder have been found to contribute similarly to the dysregulation of affect, with symptoms like irritability, anxiety, apathy and dysphoria.
References
Emotion
Human homeostasis
Sleep physiology
Sleep disorders | Sleep and emotions | [
"Biology"
] | 1,531 | [
"Emotion",
"Behavior",
"Sleep physiology",
"Human homeostasis",
"Sleep disorders",
"Homeostasis",
"Sleep",
"Human behavior"
] |
53,266,177 | https://en.wikipedia.org/wiki/Targeted%20analysis%20sequencing | Targeted analysis sequencing (sometimes called target amplicon sequencing) (TAS) is a next-generation DNA sequencing technique focusing on amplicons and specific genes. It is useful in population genetics since it can target a large diversity of organisms. The TAS approach incorporates bioinformatics techniques to produce a large amount of data at a fraction of the cost involved in Sanger sequencing. TAS is also useful in DNA studies because it allows for amplification of the needed gene area via PCR, which is followed by next-gen sequencing platforms. Next-gen sequencing use shorter reads 50–400 base pairs which allow for quicker sequencing of multiple specimens. Thus TAS allows for a cheaper sequencing approach for that is easily scalable and offers both reliability and speed.
References
DNA sequencing | Targeted analysis sequencing | [
"Chemistry",
"Biology"
] | 160 | [
"Molecular biology techniques",
"DNA sequencing"
] |
53,267,005 | https://en.wikipedia.org/wiki/Design%20for%20verification | Design for verification (DfV) is a set of engineering guidelines to aid designers in ensuring right first time manufacturing and assembly of large-scale components. The guidelines were developed as a tool to inform and direct designers during early stage design phases to trade off estimated measurement uncertainty against tolerance, cost, assembly, measurability and product requirements.
Background
Increased competition in the aerospace market has placed additional demands on aerospace manufacturers to reduce costs, increase product flexibility and improve manufacturing efficiency. There is a knowledge gap within the sphere of digital to physical dimensional verification and on how to successfully achieve dimensional specifications within real-world assembly factories that are subject to varying environmental conditions.
The DfV framework is an engineering principle intended for low-rate, high-value, high-complexity manufacturing industries, to aid in achieving high productivity in assembly via the effective dimensional verification of large volume structures during final assembly. The DfV framework has been developed to enable engineers to design and plan the effective dimensional verification of large volume, complex structures in order to reduce failure rates and end-product costs, improve process integrity and efficiency, optimise metrology processes, decrease tooling redundancy and increase product quality and conformance to specification. The theoretical elements of the DfV methods were published in 2016, together with their testing using industrial case studies of representative complexity. The industrial tests, published in ScienceDirect, showed that using the new design for verification methods alongside the traditional 'design for X' toolbox yielded improved tolerance analysis and synthesis, optimized large volume metrology and assembly processes, and more cost-effective tool and jig design.
See also
Design for assembly
Design for inspection
Design for manufacturability
Design for X
References
Quality control | Design for verification | [
"Engineering"
] | 351 | [
"Design stubs",
"Design",
"Design for X"
] |
53,268,205 | https://en.wikipedia.org/wiki/Siva%20Brata%20Bhattacherjee | Siva Brata Bhattacherjee (1921–2003)—sometimes spelt Sibabrata Bhattacherjee—was a professor of physics at the University of Calcutta. He studied with the physicist, Satyendra Nath Bose, under whose supervision he completed his doctoral thesis in solid-state physics at the University College of Science (commonly known as Rajabazar Science College).
In 1945, he came from the University of Dhaka to join the Khaira Laboratory of Physics at the Science College, and specialised in the field of X-ray crystallography. Dr Bhattacherjee also served as a faculty member of the Department of Technology at the erstwhile University of Manchester Institute of Science and Technology.
He was married to Lilabati Bhattacharjee, Director (Mineral Physics) of the Geological Survey of India. Siva Brata is survived by their son Dr Subrata Bhattacherjee, and daughter Mrs Sonali Karmakar née Bhattacherjee.
References
1921 births
2003 deaths
X-ray crystallography
20th-century Indian physicists
Indian crystallographers
Academic staff of the University of Calcutta
University of Dhaka people
Bengali scientists
Academics of the University of Manchester Institute of Science and Technology
Indian expatriates in the United Kingdom
Scientists from West Bengal | Siva Brata Bhattacherjee | [
"Chemistry",
"Materials_science"
] | 269 | [
"X-ray crystallography",
"Crystallography"
] |
53,268,700 | https://en.wikipedia.org/wiki/PBS-1%20silencer | The PBS-1 is a silencer designed for the 7.62x39mm AKM variant of the Soviet AK-47 assault rifle in the Kalashnikov rifle family. It is in diameter and long.
History
The PBS-1 silencer, designed to reduce the noise of firing the AKM, was introduced in the 1960s and was used mostly by Spetsnaz forces and the KGB. Spetsnaz units used it in the Soviet–Afghan War in the 1980s; this required the use of the AKM (the modernized variant of the AK-47), because the newer AK-74 did not have a silencer available until a variant, the AKS-74UB, was adapted for use with the PBS-4 suppressor (used in combination with subsonic 5.45×39mm Russian ammunition).
The PBS-1 is a two-chambered silencer using baffles and a rubber wipe. It was designed for use in conjunction with subsonic rifle ammunition. The PBS-1 has been extensively tested by the United States Army Foreign Weapons Test Lab. The rubber wipe requires replacement after 20–25 rounds. With a rubber wipe in place, the PBS-1 reliably reduces the sound of an AKM discharge by 15 dB, which puts the noise at between 130 and 135 dB.
Gallery
See also
PBS-4 silencer
AK-104
References
Firearm components
Weapons and ammunition introduced in the 1960s
Kalashnikov derivatives
Cold War weapons of the Soviet Union | PBS-1 silencer | [
"Technology"
] | 309 | [
"Firearm components",
"Components"
] |
53,269,934 | https://en.wikipedia.org/wiki/Presence%20sensing%20device | A presence sensing device (PSD) is a safety device for press brakes and similar metal-bending machines. The device operator often holds the sheet metal work-piece in one place while another portion of the piece is being formed in the die. If a foreign object is detected, the PSD immediately retracts the die or stops the motion of the ram. PSDs protect the operator and other employees in the area.
Photoelectric sensors
One category of presence sensing devices is Photoelectric Sensors. Light Curtains also fall into this category. Light curtains use many infrared light beams to form a perimeter around machinery. When two or more consecutively adjacent beams are interrupted, a kill-switch stops the machine until the boundary is reset. Light curtains must be placed in front of the work area. This makes it difficult for press brake operators to work on small parts. One cannot help but disrupt the beam. The operator might "mute" the light curtain in order to get the job done. Certain parts of the beam can be muted. For example, muting the front and rear of the beam allows the middle to offer continued protection for the operator. Additionally, it may be necessary to use auxiliary light beams if the operator will reach between the main light beams and the edge of certain machines.
Electronic safety device
Electronic safety devices use lasers or cameras to sense a foreign object in the vicinity of the press brake. They are less obtrusive than other safety options, which means operators are less opposed to using them.
After some contention with the Occupational Safety and Health Administration (OSHA), an electronic safety device can fall under the PSD umbrella. One such device is the Laser Sentry press brake safety device designed by Glen Koedding in 2003. A competitor challenged the concept with OSHA almost immediately. OSHA responded by issuing a letter of disapproval stating that the Laser Sentry did not meet the "safe distance" rule, which states that a presence sensing protective device must be a minimum of 6 inches away from the nearest pinch point. However, after further observation, the Laser Sentry was deemed a PSD in 2004, when used in conjunction with hydraulic press brakes.
Cameras are another electronic safety device used for press brake safety. The camera can detect an intrusion between the upper and lower dies. If an intrusion is detected, a signal will stop the downward movement of the ram. A camera safety system uses a linear scale to calculate the upper beam's position, velocity, and the stopping distance.
Proper installation
A complete lack of machine guarding or improperly installed safety devices are the main causes of machining accidents. However, proper installation can greatly reduce this risk. Stop-time measurements can remove the guesswork from machine safety. The results of the test are applied to OSHA and American National Standards Institute (ANSI) formulas to ensure the proper installation distance of safety devices. Proper installation is a must and can be ensured by following the manuals provided by the manufacturer.
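As a rough illustration of how a stop-time measurement feeds the installation distance, the sketch below applies the commonly cited ANSI B11.19-style safety distance formula Ds = K × (Ts + Tc + Tr + Tbm) + Dpf; the numeric inputs are hypothetical examples, and any real installation must use measured stop times and the manufacturer's documentation.

```python
def ansi_safety_distance(ts, tc, tr, tbm, dpf, k=63.0):
    """Minimum safety distance in inches, Ds = K*(Ts + Tc + Tr + Tbm) + Dpf.
    k   : hand-speed constant (63 in/s in the common ANSI formulation)
    ts  : machine stop time (s), from a stop-time measurement
    tc  : control circuit response time (s)
    tr  : response time of the safety device (s)
    tbm : brake monitor allowance (s)
    dpf : depth penetration factor (in), from the device's object sensitivity
    """
    return k * (ts + tc + tr + tbm) + dpf

# Hypothetical values for illustration only.
print(ansi_safety_distance(ts=0.190, tc=0.020, tr=0.015, tbm=0.0, dpf=3.6))
```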
Safety standards
Any machine safety device should be designed and built to the highest safety standards defined for machinery safety, EN 13849-1 Category 4, and meet the control reliability requirements of ANSI B11.19 and OSHA 1910.217.
Original equipment manufacturers (OEM's) often consider point-of-operation safety to be the user's responsibility. The best safety equipment can only go so far in protecting an operator from injury. Proper training is also imperative to keeping the press brake operator safe. Certification as a press brake operator is available.
References
Industrial equipment
Safety equipment | Presence sensing device | [
"Engineering"
] | 718 | [
"nan"
] |
53,270,578 | https://en.wikipedia.org/wiki/European%20Study%20Groups%20with%20Industry | A European Study Group with Industry (ESGI) is usually a week-long meeting where applied mathematicians work on problems presented by industry and research centres. The aim of the meeting is to solve or at least make progress on the problems.
The study group concept originated in Oxford, in 1968 (initiated by Leslie Fox and Alan Tayler). Subsequently, the format was adopted in other European countries to form ESGIs. Currently, with a variety of names, they appear in the same or a similar format throughout the world. More specific topics have also formed the subject of focussed meetings, such as the environment, medicine and agriculture.
Problems successfully tackled at study groups are discussed in a number of textbooks as well as a collection of case studies, European Success Stories in Industrial Mathematics. A guide for organising and running study groups is provided by the European Consortium for Mathematics in Industry.
European Study Group with Industry
A European Study Group with Industry or ESGI is a type of workshop where mathematicians work on problems presented by industry representatives. The meetings typically last five days, from Monday to Friday. On the Monday morning the industry representatives present problems of current interest to an audience of applied mathematicians. Subsequently, the mathematicians split into working groups to investigate the suggested topics. On the Friday solutions and results are presented back to the industry representative. After the meeting a report is prepared for the company, detailing the progress made and usually with suggestions for further work or experiments.
History
The original Study Groups with Industry started in Oxford in 1968. The format provided a method for initiating interaction between universities and private industry which often led to further collaboration, student projects and new fields of research (many advances in the field of free or moving boundary problems are attributed to the industrial case studies of the 1970s). Study groups were later adopted in other countries, starting in Europe and then spreading throughout the world. The subject areas have also diversified, for example to the Mathematics in Medicine Study Groups, the Mathematics in the Plant Sciences Study Groups, the environment, uncertainty quantification and agriculture.
The academics work on the problems for free. The following have been given as motivation for this work:
Discovering new problems and research areas with practical applications.
The possibility of further projects and collaboration with industry.
The opportunity for future funding.
A number of reasons have also been quoted for companies to attend ESGIs:
The possibility of a quick solution to their problem, or at least guidance on a way forward.
Mathematicians can help to identify and correctly formulate a problem for further study.
Access to state-of-the-art techniques.
Building contacts with top researchers in a given field.
ESGIs are currently an activity of the European Consortium for Mathematics in Industry. Their ESGI webpage contains details of European meetings and contact details for prospective industry or academics participants. The current co-ordinator of the ESGIs is Prof. Tim Myers of the Centre de Recerca Matemática, Barcelona. Between 2015 and 2019 ESGIs are eligible for funding through the COST network MI-Net (Maths for Industry Network).
List of recent meetings
Past European meetings are listed on the European Consortium for Mathematics in Industry website. International meetings are covered by the Mathematics in Industry Information Service.
Recent ESGIs include:
ESGI 150, Basque Centre for Applied Mathematics, 21–25 October 2019
ESGI 144, Warsaw, 17 – 22 March 2019
ESGI 145, Cambridge, Apr. 8-12 2019
ESGI 147 Spain, Apr. 8-12 2019
ESGI 152, Palanga, Lithuania, 10–14 June 2019
ESGI 155, Polytechnic Institute of Leiria, Portugal, 1–5 July 2019.
ESGI 154, U. Southern Denmark, 19–23 August 2019
ESGI 148/SWI 2019 Netherlands, Wageningen, 28 Jan. – 1 Feb., 2019
ESGI 151 Estonia, Tartu 4-8 Feb. 2019
ESGI 149 Innsbruck, March 4–8, 2019
International study groups
As well as being held throughout Europe, annual study groups take place in Australia, Brazil, Canada, India, New Zealand, the United States, Russia, and South Africa. A site dedicated solely to Dutch study groups is available (Dutch ESGI). Information on past and upcoming meetings throughout the world may be found on the Mathematics in Industry Information Service website.
Literature
There are many books on mathematical modelling, a number of them containing problems arising from ESGIs or other study groups from around the world, examples include:
Practical Applied Mathematics Modelling, Analysis, Approximation
Topics in Industrial Mathematics: Case Studies and Related Mathematical Methods
Industrial Mathematics: A Course in Solving Real-World Problems
The book European Success Stories in Industrial Mathematics contains brief descriptions of a wide variety of industrial mathematics case studies. The Mathematics in Industry Information Service contains a large repository of past reports from study groups throughout the world.
A guide for organising and running study groups, the ESGI Handbook, has been developed by the Mathematics for Industry Network.
References
Applied mathematics
Mathematics education in the United Kingdom | European Study Groups with Industry | [
"Mathematics"
] | 998 | [
"Applied mathematics"
] |
53,271,460 | https://en.wikipedia.org/wiki/ISLRN | The ISLRN or International Standard Language Resource Number is Persistent Unique Identifier for Language Resources.
Context
On November 18, 2013, 12 major organisations (see list below) from the fields of Language Resources and Technologies, Computational Linguistics, and Digital Humanities held a cooperation meeting in Paris (France) and agreed to announce the establishment of the International Standard Language Resource Number (ISLRN), to be assigned to each Language Resource.
Among the 12 organisations, 4 institutions constitute the ISLRN Steering Committee (ST)
ADHO
ACL
Asian Federation of Natural Language Processing ST
COCOSDA, International Committee for the Coordination & Standardisation of Speech Databases and Assessment Techniques
ICCL (COLING)
European Data Forum
ELRA ST
IAMT, International Association for Machine Translation
ISCA
LDC ST
Oriental COCOSDA (www.cocosda.org) ST
RMA, Language Resource Management Agency
Size and Content
The Joint Research Centre (JRC), the European Commission's in-house science service, was the first organisation to adopt the ISLRN initiative and request identifiers for its resources.
2500 resources and tools have already been allocated an ISLRN. These resources include written data (Annotated corpus, Annotated text, List of misspelled words, Terminological database, Treebank, Wordnet, etc.) and speech corpora (Synthesised Speech, Transcripts and Audiovisual Recordings, Conversational Speech, Folk Sayings, etc.)
Objectives
Providing Language Resources with unique names and identifiers using a standardized nomenclature ensures the identification of each Language Resource and streamlines citation with proper references in activities within Human Language Technology, as well as in documents and scientific publications. Such a unique identifier also enhances reproducibility, an essential feature of scientific work.
References
External links
ISLRN Portal
Natural language processing | ISLRN | [
"Technology"
] | 372 | [
"Natural language processing",
"Natural language and computing"
] |
53,271,730 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20775 | Small nucleolar RNA 775 (snoR775) is a snoRNAs, belonging to the H/ACA class.
Location
SnoR775 was discovered in a promoter-based non-coding RNA identification study in Arabidopsis thaliana. Its name reflects its close proximity to the microRNA miR775; in fact, the snoR775 and miR775 precursors are encoded by a single gene (named sno-miR775). This arrangement might have interesting functional and evolutionary consequences.
See also
TeloSII ncRNAs
References
Small nuclear RNA | Small nucleolar RNA 775 | [
"Chemistry"
] | 122 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
53,271,963 | https://en.wikipedia.org/wiki/Macfarlane%20Burnet%20Medal%20and%20Lecture | The Macfarlane Burnet Medal and Lecture is a biennial award given by the Australian Academy of Science to recognise outstanding scientific research in the biological sciences.
It was established in 1971 and honours the memory of the Nobel laureate Sir Frank Macfarlane Burnet, OM KBE MD FAA FRS, the Australian virologist best known for his contributions to immunology. It is the academy's highest award for the biological sciences.
Prizewinners
Source: Australian Academy of Science
See also
List of biochemistry awards
List of biology awards
List of prizes named after people
References
Australian science and technology awards
Awards established in 1971
Australian Academy of Science Awards
Biochemistry awards
Biology awards | Macfarlane Burnet Medal and Lecture | [
"Chemistry",
"Technology",
"Biology"
] | 128 | [
"Science and technology awards",
"Biology awards",
"Biochemistry",
"Biochemistry awards"
] |
53,273,108 | https://en.wikipedia.org/wiki/Cyclopropenium%20ion | The cyclopropenium ion is the cation with the formula . It has attracted attention as the smallest example of an aromatic cation. Its salts have been isolated, and many derivatives have been characterized by X-ray crystallography. The cation and some simple derivatives have been identified in the atmosphere of the Saturnian moon Titan.
Bonding
With two π electrons, the cyclopropenium cation class obeys Hückel's rules of aromaticity for 4n + 2 π electrons since, in this case, n = 0. Consistent with this prediction, the C3H3 core is planar and the C–C bonds are equivalent. In the case of the cation in [C3(SiMe3)3]+, the ring C–C distances range from 1.374(2) to 1.392(2) Å.
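A quick way to check an electron count against the 4n + 2 rule is sketched below; the function and its name are purely illustrative.

```python
def is_huckel_aromatic(pi_electrons: int) -> bool:
    """True if the pi-electron count equals 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(is_huckel_aromatic(2))  # cyclopropenium cation: 2 = 4*0 + 2 -> True
print(is_huckel_aromatic(4))  # a 4 pi-electron ring fails the rule -> False
```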
Syntheses
Salts of many cyclopropenyl cations have been characterized. Their stability varies according to the steric and inductive effects of the substituents.
Salts of triphenylcyclopropenium were first reported by Ronald Breslow in 1957. The salt was prepared in two steps starting with the reaction of phenyldiazoacetonitrile with diphenylacetylene to yield 1,2,3-triphenyl-2-cyclopropene nitrile. Treatment of this with boron trifluoride yielded [C3Ph3]BF4.
The parent cation, [C3H3]+, was reported as its hexachloroantimonate () salt in 1970. It is indefinitely stable at −20 °C.
Trichlorocyclopropenium salts are generated by chloride abstraction from tetrachlorocyclopropene:
C3Cl4 + AlCl3 → [C3Cl3]+[AlCl4]−
Tetrachlorocyclopropene can be converted to tris(tert-butyldimethylsilyl)cyclopropene. Hydride abstraction with nitrosonium tetrafluoroborate yields the trisilyl-substituted cyclopropenium cation.
Amino-substituted cyclopropenium salts are particularly stable. Calicene is an unusual derivative featuring cyclopropenium linked to a cyclopentadienide.
Reactions
Organic chemistry
Chloride salts of cyclopropenium esters are intermediates in the use of dichlorocyclopropenes for the conversion of carboxylic acids to acid chlorides:
Related cyclopropenium cations are produced in the regeneration of the 1,1-dichlorocyclopropenes from the cyclopropenones.
The cyclopropenium chlorides have been applied to peptide bond formation. For example, reacting a Boc-protected amino acid with an unprotected amino acid in the presence of the cyclopropenium ion allows the formation of a peptide bond via acid chloride formation followed by nucleophilic substitution by the unprotected amino acid.
This method of mildly generating acid chlorides can also be useful for linking alpha-anomeric sugars. After using the cyclopropenium ion to form the chloride at the anomeric carbon, the compound is iodinated with tetrabutylammonium iodide. This iodine can thereafter be substituted by any ROH group to quickly undergo alpha-selective linkage of sugars.
Additionally, some synthetic routes make use of cyclopropenium ring openings yielding an allylcarbene cation. The linear degradation product provides both nucleophilic and electrophilic carbon centers.
Organometallic compounds
Many complexes are known with cyclopropenium ligands. Examples include [M(C3Ph3)(PPh3)2]+ (M = Ni, Pd, Pt) and Co(C3Ph3)(CO)3. Such compounds are prepared by reaction of cyclopropenium salts with low valent metal complexes.
As polyelectrolytes
Because many substituted derivatives are known, cyclopropenium salts have attracted attention as possible polyelectrolytes, relevant to technologies such as desalination and fuel cells. The tris(dialkylamino)cyclopropenium salts have been particularly evaluated because of their high stability.
See also
Phosphirenium ion
References
Non-benzenoid aromatic carbocycles
Cations
Cyclopropenes | Cyclopropenium ion | [
"Physics",
"Chemistry"
] | 967 | [
"Cations",
"Ions",
"Matter"
] |
53,274,321 | https://en.wikipedia.org/wiki/Alliance%20for%20Biosecurity | The Alliance for Biosecurity is a consortium of companies that develop products to respond to national security threats, including bioterrorism pathogens and emerging infectious diseases. It is headquartered in Washington DC.
Background
The United States faces risks to national security posed by the danger of bioterrorism or a destabilizing infectious disease pandemic. The vulnerability is considered severe because many of the vaccines and medicines that would be needed to protect people do not currently exist. The Alliance for Biosecurity is a group of pharmaceutical and biotechnology companies that work to create preventive measures and treatments for severe infectious diseases.
Within the U.S. federal government, the Biomedical Advanced Research and Development Authority (BARDA) and the Project BioShield Special Reserve Fund (SRF) provide funding to research, develop, and procure medicines to control epidemics.
History
The Alliance for Biosecurity was formed in 2005. Its purpose was to build a partnership between government and private sector biotechnology and pharmaceutical companies working in the biodefense space. The Center for Biosecurity, a nonprofit multidisciplinary organization of physicians public health professionals and scientists, was an organizer of the alliance and participates in it. Together, the two groups have provided congressional testimony and authored letters to Congress.
In April 2018, the alliance conducted a national poll about biosecurity. Seventy-three percent of the 1,612 Americans polled said they would support a congressional decision to increase funding to address biosecurity needs and capabilities. The poll was conducted, in part, to measure support for biosecurity funding because reauthorization of the Pandemic and All-Hazards Preparedness Act (PAHPA) is due by September 30, 2018. PAHPA is a law that improved the federal government's medical and public health preparedness for national security threats. Examples of threats include the spread of infectious diseases or chemical, biological, radiological or nuclear (CBRN) attacks.
In 2018, Congress passed the annual Labor, Health and Human Services, and Education appropriations bill before the end of the fiscal year for the first time in over 20 years. Congress also passed a Department of Defense appropriations bill before the end of the fiscal year for the first time in 10 years. The alliance supported passage of both bills. Key funding in the bills included:
Project BioShield Special Reserve Fund (SRF): The fund received a $25 million increase. The SRF was first funded in 2004 and has received annualized funding of around $510 million since then; the current funding level is $735 million. The program creates public-private partnerships to advance the development of over 50 million doses of drugs against anthrax, smallpox, botulinum toxin and radiological threats.
Project BioShield: This program creates incentives for companies to invest in R&D in products for biodefense, because no commercial markets for such projects exist.
Mission
The Alliance for Biosecurity is a coalition of biopharmaceutical companies and laboratory/academic partners that promotes a strong public-private partnership to ensure medical countermeasures are available to protect public health and enhance national health security. The Alliance advocates for public policies and funding to support the rapid development, production, stockpiling, and distribution of critically needed medical countermeasures.
Legislative support
The alliance has supported the following legislation:
21st Century Cures Act - legislation passed in the U.S. Senate that promotes innovation and efficiency in the development of new medical countermeasures
Medical Countermeasures Innovation Act of 2015 - legislation that would encourage the development of medical countermeasures, including drugs, devices and preventative treatments that could be used after a biological terrorist attack or global pandemic
Pandemic and All-Hazards Preparedness and Advancing Innovation Act of 2018 (H.R. 6378) - legislation that would reauthorize the Pandemic and All-Hazards Preparedness Act (PAHPA) before its expiration on September 30, 2018. The alliance sent a letter with the U.S. Chamber of Commerce to each member of the House of Representatives urging its passage.
Strengthening Public Health Emergency Response Act - legislation that creates new research incentives, improves transparency, and creates "predictable and flexible contracting"
The alliance also gives out awards to Congress. For example, in October 2017 it awarded eight Members of Congress, such as Maryland Congressman Dutch Ruppersberger, with its "Congressional Biosecurity Champion Award," which honors elected officials who work to improve how the U.S. can prevent and fight biosecurity threats. In 2019, it gave this award to Rep. Jaime Herrera Beutler (R-WA).
Organization
Membership
The Alliance for Biosecurity is made up of the following biotechnology companies and university research labs:
Law firm Squire Patton Boggs serves as secretariat for the alliance.
Partners and collaborations
The alliance is a participating member of the Virtual Biosecurity Center, an initiative of the Federation of American Scientists.
See also
9/11 Commission
Biological hazard
Biological warfare
Bioterrorism
Blue Ribbon Study Panel on Biodefense
Congressional Biodefense Caucus
Pandemic influenza
Terrorism
United States biological defense program
References
External links
Alliance for Biosecurity - Lobbying Spending Database
Bioterrorism
Biotechnology organizations
Medical and health organizations based in Washington, D.C. | Alliance for Biosecurity | [
"Engineering",
"Biology"
] | 1,099 | [
"Biotechnology organizations",
"Bioterrorism",
"Biological warfare"
] |
53,274,938 | https://en.wikipedia.org/wiki/Dickinson%20College%20Commentaries | Dickinson College Commentaries is a digital project of Dickinson College, which is located in Carlisle, near Harrisburg, in the U.S. state of Pennsylvania. The project assembles digital commentaries on texts in Latin and ancient Greek and publishes core vocabularies of the most common words in those languages. It is hosted by the department of Classical Studies.
History
In 2010 DCC launched a pilot site in MediaWiki that was dedicated to notes on the selections from Gallic Wars used in the American Advanced Placement Latin Exam. The site moved to Drupal in 2012.
The project director is Christopher Francese, the Asbury J. Clarke Professor of Classical Studies at Dickinson College.
Peer Review
Commentary proposals are reviewed and edited in a process similar to that used by traditional academic print publishers.
Copyright status
Dickinson College Commentaries supports open-source content, and publishes all content under a Creative Commons CC-BY-SA license.
External links
Dickinson College Commentaries
Dickinson Classics Online (Chinese sister project)
Anne Mahoney, “Latin Commentaries on the Web.” Teaching Classical Languages 5.2 (2014): 133–143.
References
Computing in classical studies
Discipline-oriented digital libraries
Educational projects
Digital humanities
Dickinson College | Dickinson College Commentaries | [
"Technology"
] | 240 | [
"Digital humanities",
"Computing and society",
"Computing in classical studies"
] |
53,275,359 | https://en.wikipedia.org/wiki/TRAPPIST-1f | TRAPPIST-1f, also designated as 2MASS J23062928-0502285 f, is an exoplanet, likely rocky, orbiting within the habitable zone of the ultracool dwarf star TRAPPIST-1, located about 40 light-years away from Earth in the constellation of Aquarius. The exoplanet was found by using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured.
It was one of four new exoplanets to be discovered orbiting the star in 2017 using observations from the Spitzer Space Telescope.
The planet is likely tidally locked, and has been depicted as an eyeball planet in artistic impressions by NASA.
Physical characteristics
Mass, radius, and temperature
TRAPPIST-1f is an Earth-sized exoplanet, meaning it has a radius close to that of Earth: about 1.045 times Earth's radius, with an equilibrium temperature of roughly 219 K (−54 °C). It was initially estimated to have a much lower mass, and thus a low density and a surface gravity around 62% of Earth's value. This suggested a large amount of volatiles, with a 2017 study suggesting that a water ocean may comprise as much as 20% of the planet's mass, raising the temperature at the bottom of such an ocean far above the surface value. However, refined density estimates show that TRAPPIST-1f, like other planets in the system, has a mass close to Earth's and is only slightly less dense than Earth, consistent with a rocky composition.
Atmosphere
According to simulations of magma ocean–atmosphere interaction, TRAPPIST-1f is likely to retain a fraction of its primordial steam atmosphere during the initial stages of evolution, and therefore today is likely to possess a thick ocean covered by an atmosphere rich in abiotic oxygen. Helium emission from TRAPPIST-1f (and planets b and e) has not been detected as of 2022.
Host star
The planet orbits an (M-type) ultracool dwarf star named TRAPPIST-1. The star has a mass of 0.08 solar masses and a radius of 0.11 solar radii. It has a surface temperature of 2550 K and is at least 7–8 billion years old. In comparison, the Sun is 4.6 billion years old and has a surface temperature of 5778 K. The star is metal-rich, with a metallicity ([Fe/H]) of 0.04, or 109% of the solar amount. This is particularly odd, as such low-mass stars near the boundary between brown dwarfs and hydrogen-fusing stars would be expected to have considerably less metal content than the Sun; on the other hand, metal-rich stars are also more likely to have planets than metal-poor ones. Its luminosity is 0.05% of that of the Sun.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 18.8. Therefore, it is too dim to be seen with the naked eye.
Orbit
TRAPPIST-1f orbits its host star with an orbital period of about 9.206 days and an orbital radius of about 0.037 AU, or 3.7% of the Earth–Sun distance (compared to the distance of Mercury from the Sun, which is about 0.38 AU).
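As a rough consistency check, the equilibrium temperature can be estimated from the values quoted in this article. The Python sketch below first recovers the stellar luminosity from the quoted radius and temperature via the Stefan–Boltzmann law, then applies the standard radiative-balance formula; treating the planet as a zero-albedo blackbody is an assumption made for the illustration:
import math

sigma = 5.670e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
L_sun = 3.828e26          # solar luminosity (W)
R_sun = 6.957e8           # solar radius (m)
AU = 1.496e11             # astronomical unit (m)

# Stellar luminosity from the article's values (0.11 R_sun, 2550 K)
R_star, T_star = 0.11 * R_sun, 2550.0
L = 4 * math.pi * R_star**2 * sigma * T_star**4
# L / L_sun is about 0.0005, matching the quoted 0.05% of solar luminosity

# Zero-albedo equilibrium temperature at the quoted 0.037 AU orbit
a = 0.037 * AU
T_eq = (L / (16 * math.pi * sigma * a**2)) ** 0.25
# roughly 210-215 K, consistent with the commonly quoted value near 219 K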
Habitability
The exoplanet was announced to be orbiting either within or slightly outside of the habitable zone of its parent star, the region where, with the correct conditions and atmospheric properties, liquid water may exist on the surface of the planet. On 31 August 2017, astronomers using the Hubble Space Telescope reported the first evidence of possible water content on the TRAPPIST-1 exoplanets.
TRAPPIST-1f has a radius about the same as Earth's, at around 1.045 Earth radii, but was initially thought to have only about two thirds of Earth's mass, at around 0.68 Earth masses. So, it was considered somewhat unlikely to be a fully rocky planet, and extremely unlikely to be an Earth-like one, that is, rocky with a large iron core but without a thick hydrogen–helium atmosphere enveloping the planet. Simulations in 2017 suggested the planet is approximately 20% water by composition, much higher than Earth. With such a massive water envelope, the pressure and temperature would be high enough to keep the water in a gaseous state, and any liquid water would only exist as clouds near the top of TRAPPIST-1f's atmosphere. Based on this study, TRAPPIST-1f would therefore likely be no more habitable than any other ice giant with water clouds in its atmosphere. However, refined estimates show that TRAPPIST-1f has about the same mass as Earth and, like other planets in the system, is only slightly less dense than Earth, consistent with a rocky composition.
Its host star is a red ultracool dwarf, with only about 8% of the mass of the Sun (close to the boundary between brown dwarfs and hydrogen-fusing stars). As a result, stars like TRAPPIST-1 have the ability to live up to 4–5 trillion years, 400–500 times longer than the Sun will live. Because of this ability to live for long periods of time, it is likely TRAPPIST-1 will be one of the last remaining stars when the Universe is much older than it is now, when the gas needed to form new stars will be exhausted, and the remaining ones begin to die off.
The planet is very likely tidally locked, with one hemisphere permanently facing towards the star while the opposite side is shrouded in eternal darkness. However, between these two extreme regions there would be a sliver of moderate temperature – called the terminator line – where temperatures may be suitable for liquid water to exist. Additionally, a much larger portion of the planet may be habitable if it supports an atmosphere thick enough to transfer heat to the side facing away from the star.
See also
List of extrasolar candidates for liquid water
List of potentially habitable exoplanets
List of transiting exoplanets
References
External links
NASA Briefing on the Discovery of TRAPPIST-1's 7 Planets
Exoplanets discovered in 2017
Near-Earth-sized exoplanets
Near-Earth-sized exoplanets in the habitable zone
Transiting exoplanets
TRAPPIST-1
Aquarius (constellation)
J23062928-0502285 f | TRAPPIST-1f | [
"Astronomy"
] | 1,325 | [
"Constellations",
"Aquarius (constellation)"
] |
53,276,317 | https://en.wikipedia.org/wiki/C7H5FO2 | {{DISPLAYTITLE:C7H5FO2}}
The molecular formula C7H5FO2 (molar mass: 140.11 g/mol) may refer to:
Fluorobenzoic acids
2-Fluorobenzoic acid
3-Fluorobenzoic acid
4-Fluorobenzoic acid | C7H5FO2 | [
"Chemistry"
] | 72 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
56,118,194 | https://en.wikipedia.org/wiki/Sarracenia%20%C3%97%20swaniana | Sarracenia × swaniana is a nothospecies of carnivorous plant from the genus Sarracenia in the family Sarraceniaceae described by hort. and Nichols. It is a hybrid between Sarracenia purpurea subsp. venosa and Sarracenia minor var. minor.
References
swaniana
Hybrid plants | Sarracenia × swaniana | [
"Biology"
] | 70 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
56,119,090 | https://en.wikipedia.org/wiki/Acidicapsa%20acidiphila | Acidicapsa acidiphila is a mesophilic and moderately acidophilic bacterium from the genus of Acidicapsa which has been isolated from acidic water in Cueva de la Mora (cave of the mulberry) in Spain.
References
External links
Type strain of Acidicapsa acidiphila at BacDive - the Bacterial Diversity Metadatabase
Acidobacteriota
Bacteria described in 2017
Biota of Spain | Acidicapsa acidiphila | [
"Biology"
] | 87 | [
"Biota by country",
"Biota of Spain"
] |
56,119,140 | https://en.wikipedia.org/wiki/Caldimicrobium%20rimae | Caldimicrobium rimae is an extremely thermophilic, strictly anaerobic and facultatively chemolithoautotrophic bacterium from the genus of Caldimicrobium which has been isolated from the Treshchinnyi Spring from Uzon Caldera in Russia.
Origins of taxonomical branch
Caldimicrobium rimae differs from the rest of its family, Thermodesulfobacteriaceae, in that it is not capable of oxidizing organic acids or alcohols and it uses sulfur as an electron acceptor.
References
External links
Type strain of Caldimicrobium rimae at BacDive – the Bacterial Diversity Metadatabase
Thermodesulfobacteriota
Bacteria described in 2009
Thermophiles | Caldimicrobium rimae | [
"Biology"
] | 156 | [
"Bacteria stubs",
"Bacteria"
] |
56,119,183 | https://en.wikipedia.org/wiki/Caldimicrobium%20thiodismutans | Caldimicrobium thiodismutans is a Gram-negative, thermophilic, rod-shaped, autotrophic and motile bacterium from the genus of Caldimicrobium which has been isolated from a hot spring in Nakabusa in Japan.
References
External links
Type strain of Caldimicrobium thiodismutans at BacDive – the Bacterial Diversity Metadatabase
Thermodesulfobacteriota
Bacteria described in 2016
Thermophiles | Caldimicrobium thiodismutans | [
"Biology"
] | 105 | [
"Bacteria stubs",
"Bacteria"
] |
56,119,486 | https://en.wikipedia.org/wiki/Elementary%20comparison%20testing | Elementary comparison testing (ECT) is a white-box, control-flow, test-design methodology used in software development. The purpose of ECT is to enable detailed testing of complex software. Software code or pseudocode is tested to assess the proper handling of all decision outcomes. As with multiple-condition coverage and basis path testing, coverage of all independent and isolated conditions is accomplished through modified condition/decision coverage (MC/DC). Isolated conditions are aggregated into connected situations creating formal test cases. The independence of a condition is shown by changing the condition value in isolation. Each relevant condition value is covered by test cases.
Test case
A test case consists of a logical path through one or many decisions from start to end of a process. Contradictory situations are deduced from the test case matrix and excluded. The MC/DC approach isolates every condition, neglecting all possible subpath combinations and path coverage.
$T = n + 1$
where
$T$ is the number of test cases per decision and
$n$ the number of conditions.
The decision $D$ consists of a combination of elementary conditions $c_1, c_2, \ldots, c_n$. The transition function determines, for each condition value, which condition is evaluated next (or which decision outcome is reached). Given a transition between two conditions, the isolated test path consists of the situations that exercise this transition while every other condition is held fixed.
Test case graph
A test case graph illustrates all the necessary independent paths (test cases) to cover all isolated conditions. Conditions are represented by nodes, and condition values (situations) by edges. An edge addresses all program situations. Each situation is connected to one preceding and successive condition. Test cases might overlap due to isolated conditions.
Inductive proof of a number of condition paths
The elementary comparison testing method can be used to determine the number of condition paths by inductive proof.
There are $2^n$ possible condition value combinations.
When each condition is isolated, the number of required test cases per decision is:
$T(n) = n + 1.$
In the test case graph there are edges from parent nodes to a condition $c_i$ and edges from $c_i$ to its child nodes.
Each individual condition connects to at least one path from start to end,
out of the maximal possible $2^n$ paths, that isolates it.
All predecessor conditions and their respective paths are already isolated. Therefore, when one node (condition) is added, the total number of paths, and required test cases, from start to finish increases by:
$T(n+1) - T(n) = 1.$
Q.E.D.
Test-case design steps
Identify decisions
Determine test situations per decision point (Modified Condition / Decision Coverage)
Create logical test-case matrix
Create physical test-case matrix
Example
This example shows ECT applied to a holiday booking system. The discount system offers reduced-price vacations: a 20% discount for long or expensive vacations or for members, a 10% discount for moderate vacations with workday departures, and no discount otherwise. The example shows the creation of logical and physical test cases for all isolated conditions.
Pseudocode
if days > 15 or price > 1000 or member then
return −0.2
else if (days > 8 and days ≤ 15 or price ≥ 500 and price ≤ 1000) and workday then
return −0.1
else
return 0.0
Factors
Number of days: 1–8; 9–15; more than 15
Price (euros): less than 500; 500–1000; more than 1000
Membership card: none; silver; gold; platinum
Departure date: workday; weekend; holiday
3 × 3 × 4 × 3 = 108 possible combinations (test cases).
Example in Python:
def discount(days, price, member, workday):
    # 20% discount for long or expensive vacations, or for members
    if days > 15 or price > 1000 or member:
        return -0.2
    # 10% discount for moderate vacations departing on a workday
    elif (8 < days <= 15 or 500 <= price <= 1000) and workday:
        return -0.1
    else:
        return 0.0
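A few sample calls to the function above, one per decision outcome (the argument values are chosen for illustration; the expected results follow directly from the pseudocode):
discount(days=20, price=400, member=False, workday=False)   # -0.2 (long vacation)
discount(days=10, price=700, member=False, workday=True)    # -0.1 (moderate vacation, workday departure)
discount(days=5, price=100, member=False, workday=False)    # 0.0 (no discount)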
Step 1: Decisions
Step 2: MC/DC Matrix
The highlighted diagonals in the MC/DC matrix describe the isolated conditions; all duplicate situations are regarded as proven and removed.
Step 3: Logical test-case matrix
Test cases are formed by tracing decision paths. For every decision, a succeeding and a preceding subpath are searched until every connected path has a start and an end:
Step 4: Physical test-case matrix
Physical test cases are created from logical test cases by filling in actual value representations and their respective results.
Test-case graph
In the example test case graph, all test cases and their isolated conditions are marked by colors, and the remaining paths are implicitly passed.
See also
Control-flow graph
Decision-to-decision path
References
Articles with example pseudocode
Articles with example Python (programming language) code
Software testing | Elementary comparison testing | [
"Engineering"
] | 825 | [
"Software engineering",
"Software testing"
] |
56,120,104 | https://en.wikipedia.org/wiki/Disrupted%20planet | In astronomy, a disrupted planet is a planet or exoplanet or, perhaps on a somewhat smaller scale, a planetary-mass object, planetesimal, moon, exomoon or asteroid that has been disrupted or destroyed by a nearby or passing astronomical body or object such as a star. Necroplanetology is the related study of such a process.
The result of such a disruption may be the production of excessive amounts of related gas, dust and debris, which may eventually surround the parent star in the form of a circumstellar disk or debris disk. As a consequence, the orbiting debris field may be an "uneven ring of dust", causing erratic light fluctuations in the apparent luminosity of the parent star, as may have been responsible for the oddly flickering light curves associated with the starlight observed from certain variable stars, such as that from Tabby's Star (KIC 8462852), RZ Piscium and WD 1145+017. Excessive amounts of infrared radiation may be detected from such stars, suggestive evidence in itself that dust and debris may be orbiting the stars.
Examples
Planets
Examples of planets, or their related remnants, considered to have been a disrupted planet, or part of such a planet, include: ‘Oumuamua and WD 1145+017 b, as well as asteroids, hot Jupiters and those that are hypothetical planets, like Fifth planet, Phaeton, Planet V and Theia. Planets can also be disrupted by black holes; one example involves a "Jupiter-like object" being subject to a tidal disruption event by the supermassive black hole IGR J12580+0134, at the center of the galaxy NGC 4845.
Stars
Examples of parent stars considered to have disrupted a planet include: EPIC 204278916, Tabby's Star (KIC 8462852), PDS 110, RZ Piscium, WD 1145+017 and 47 Ursae Majoris.
Tabby's Star light curve
Tabby's Star (KIC 8462852) is an F-type main-sequence star exhibiting unusual light fluctuations, including up to a 22% dimming in brightness. Several hypotheses have been proposed to explain these irregular changes, but none to date fully explain all aspects of the curve. One explanation is that an "uneven ring of dust" orbits Tabby's Star. However, in September 2019, astronomers reported that the observed dimmings of Tabby's Star may have been produced by fragments resulting from the disruption of an orphaned exomoon.
See also
Former dwarf planets
Asteroid belt
BD+20°307
Formation and evolution of the Solar System
Giant-impact hypothesis
Interstellar medium
List of stars that have unusual dimming periods
Nebular hypothesis
Planetesimal
Protoplanetary disk
Tidal force
WD 0145+234 (star disrupting an exoasteroid)
References
Further reading
External links
NASA – WD 1145+017 b at The Extrasolar Planets Encyclopaedia.
, a presentation by Tabetha S. Boyajian.
, a presentation by Issac Arthur.
, star with unusual light fluctuations (21 December 2017).
Circumstellar disks
Hypothetical astronomical objects
Planetary rings
Unsolved problems in astronomy | Disrupted planet | [
"Physics",
"Astronomy"
] | 686 | [
"Astronomical hypotheses",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Astronomical myths",
"Astronomical controversies",
"Hypothetical astronomical objects",
"Astronomical objects"
] |
56,121,084 | https://en.wikipedia.org/wiki/Pillai%20sequence | The Pillai sequence is the sequence of integers that have a record number of terms in their greedy representations as sums of prime numbers (and one).
It is named after Subbayya Sivasankaranarayana Pillai, who first defined it in 1930.
It would follow from Goldbach's conjecture that every integer greater than one can be represented as a sum of at most three prime numbers. However, finding such a representation could involve solving instances of the subset sum problem, which is computationally difficult. Instead, Pillai considered the following simpler greedy algorithm for finding a representation of a number n as a sum of primes: choose the first prime in the sum to be the largest prime p that is at most n, and then recursively construct a representation of the remainder n − p.
If this process reaches zero, it halts. If it reaches one instead of zero,
it must include one in the sum (even though it is not prime) and then halt.
For instance, this algorithm represents 122 as 113 + 7 + 2, even though the shorter representations 61 + 61 or 109 + 13 are also possible.
The th number in the Pillai sequence is the smallest number whose greedy representation as a sum of primes (and one) requires terms. These numbers are
0, 1, 4, 27, 1354, 401429925999155061, ...
Each number in the sequence is the sum of the previous number and a prime p, the smallest prime for which the gap to the following prime is larger than the previous number. For instance, the number 27 in the sequence is 4 + 23, where the first prime gap larger than 4 is the one between 23 and 29.
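The following Python sketch illustrates both the greedy representation and the gap rule for generating the next term; the trial-division primality test and the function names are illustrative choices, not part of Pillai's definition:
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def greedy_terms(n):
    # Pillai's greedy representation: repeatedly subtract the largest prime <= n
    terms = []
    while n > 1:
        p = n
        while not is_prime(p):
            p -= 1
        terms.append(p)
        n -= p
    if n == 1:
        terms.append(1)   # one is allowed as a final, non-prime term
    return terms

def next_pillai(a):
    # next term: a plus the smallest prime whose gap to the following prime exceeds a
    p = 2
    while True:
        q = p + 1
        while not is_prime(q):
            q += 1
        if q - p > a:
            return a + p
        p = q

greedy_terms(122)   # [113, 7, 2], the three-term representation from the text
next_pillai(4)      # 27, since the first prime gap larger than 4 follows 23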
Because the prime numbers become less dense as they become larger (as quantified by the prime number theorem), there is always a prime gap larger than any term in the Pillai sequence, so the sequence continues to an infinite number of terms. However, the terms in the sequence grow very rapidly. It has been estimated that expressing the next term in the sequence would require "hundreds of millions of digits".
References
Integer sequences
Prime numbers | Pillai sequence | [
"Mathematics"
] | 432 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Prime numbers",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
56,121,149 | https://en.wikipedia.org/wiki/Synchronous%20detector | In electronics, a synchronous detector is a device that recovers information from a modulated signal by mixing the signal with a replica of the unmodulated carrier. This can be locally generated at the receiver using a phase-locked loop or other techniques. Synchronous detection preserves any phase information originally present in the modulating signal. With the exception of SECAM receivers, synchronous detection is a necessary component of any analog color television receiver, where it allows recovery of the phase information that conveys hue. Synchronous detectors are also found in some shortwave radio receivers used for audio signals, where they provide better performance on signals that may be affected by fading.
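As a rough numerical illustration of the principle (a generic sketch, not a model of any particular receiver), the following Python fragment mixes a suppressed-carrier signal with a phase-coherent replica of the carrier and low-pass filters the product to recover the baseband; all signal parameters are made up for the demonstration:
import numpy as np

fs = 100_000                                      # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
fc, fm = 10_000, 200                              # carrier and message frequencies (Hz)
message = np.cos(2 * np.pi * fm * t)
received = message * np.cos(2 * np.pi * fc * t)   # suppressed-carrier modulation

# Mix with the locally generated, phase-coherent carrier replica:
# message * cos^2 = message/2 plus a component at twice the carrier frequency
mixed = received * np.cos(2 * np.pi * fc * t)

# A simple moving-average low-pass filter removes the 2*fc component
window = np.ones(100)                             # 1 ms window: 10 carrier periods
recovered = np.convolve(mixed, window / window.sum(), mode="same")
# recovered is proportional to message / 2 away from the array edges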
See also
Lock-in amplifier
References
Electronic engineering | Synchronous detector | [
"Technology",
"Engineering"
] | 152 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
56,122,968 | https://en.wikipedia.org/wiki/Biferrocene | Biferrocene is the organometallic compound with the formula [(C5H5)Fe(C5H4)]2. It is the product of the formal dehydrocoupling of ferrocene, analogous to the relationship between biphenyl and benzene. It is an orange, air-stable solid that is soluble in nonpolar organic solvents.
Biferrocene can be prepared by the Ullmann coupling of iodoferrocene. Its one-electron oxidized derivative [(C5H5)Fe(C5H4)]2+ attracted attention as a prototypical mixed-valence compound.
A related compound is biferrocenylene, [Fe(C5H4)2]2 wherein all cyclopentadienyl rings are coupled. Formally, biferrocene is derived from one fulvalene ligand, and biferrocenylene is derived from two.
Reactions
Biferrocene can easily be converted into a mixed-valence complex, which is called biferrocenium. This [Fe(II)-Fe(III)] cation is a class II type (0.707 > α > 0) mixed-valence complex according to the Robin-Day classification.
Derivatives
Aminophosphine ligands with biferroceno substituents have been prepared as catalysts for asymmetric allylic substitution and asymmetric hydrogenation of alkenes.
Related compounds
Bis(fulvalene)diiron
References
Ferrocenes
Sandwich compounds
Cyclopentadienyl complexes | Biferrocene | [
"Chemistry"
] | 335 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes",
"Sandwich compounds"
] |
56,124,204 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%205%CE%B1-reductase%20inhibitors | This article is about the discovery and development of 5α-reductase inhibitors (5-ARIs), also known as dihydrotestosterone (DHT) blockers.
Development of 5α-reductase inhibitors
There are two types of 5-ARIs, categorized as steroidal and nonsteroidal 5-ARIs.
Steroidal 5α-reductase inhibitors
Steroid 5α-reductase is a membrane-associated enzyme in an oxidoreductase family and has an important role in biological actions towards steroid metabolism. If the steroid 5α-reductase is overexpressed it causes overproduction of DHT that can lead to androgenic disorders in humans.
The 5α-reductase isozymes possess a similar steroidal catalytic site. The only available information about the 5α-reductase isozymes is their primary sequences, estimated from cDNAs, which constrains the design of novel inhibitors. The crystal structures of the 5α-reductase isozymes are not known because the enzyme is unstable during purification. The first 5-ARIs were designed by modifying the structure of natural substrates, including the substitution of one carbon atom of the steroid rings by a heteroatom such as nitrogen, thereby forming azasteroids. The receptor is known to consist of two hydrogen bond donors, where the C3 and 17β-side chain of the ligands connect, as well as three hydrophobic groups distributed over the steroidal structure. The best receptor inhibitors comply with these factors. Azasteroids are steroid derivatives in which a nitrogen atom replaces one of the carbon atoms at various positions in the steroid ring system. Two 4-azasteroids, finasteride and dutasteride, are marketed as 5-ARIs. Finasteride (Proscar or Propecia) was the first steroidal 5α-reductase inhibitor approved by the U.S. Food and Drug Administration (USFDA). It inhibits the function of two of the isoenzymes (types II and III). In humans it decreases the prostatic DHT level by 70–90% and reduces prostatic size.
Dutasteride (Avodart) was the second steroidal 5α-reductase inhibitor approved after finasteride. It is a competitive inhibitor of all three 5α-reductase isoenzymes and inhibits types 1 and 2 better than finasteride, leading to a further reduction in DHT, with >90% reduced DHT levels following 1 year of oral administration.
Epristeride is the third marketed steroidal 5-ARI. It is a noncompetitive, specific inhibitor. Its potency is not as significant as that of finasteride or dutasteride, and thus it is only marketed in China.
Nonsteroidal 5α-reductase inhibitors
Various pharmaceutical and academic groups have conducted the synthesis of nonsteroidal compounds that inhibit human 5α-reductases due to the unwanted hormonal side effects of steroidal compounds. Nonsteroidal inhibitors can be categorized due to their structure. Many have been obtained from azasteroid inhibitors by taking away one or more rings from the steroid structure.
Four main categories of nonsteroidal 5-ARIs have been described:
Benzo(c)quinolizinones
Benzo(f)quinolonones
Piperidones
Carboxylic acids
Nonsteroidal inhibitors are thought to act as competitive inhibitors on the 5α-reductase isozymes, except for epristeride analogues (carboxylic acids), which are noncompetitive inhibitors.
Bexlosteride falls into the category of benzo(f)quinolonones and is probably the derivative that has come closest to being marketed. It is a selective inhibitor of 5α-R1 that inhibits testosterone-stimulated LNCaP cell growth; without testosterone the compound shows no effect, and it was therefore never marketed.
Structure–activity relationships
4-Azasteroids
Many steroidal 5-ARIs have been researched but only three are marketed. Two of them are 4-azasteroids and are covered here; as mentioned above, the third, epristeride, is only marketed in China and is not covered. The basic SAR of 4-azasteroids is as follows. For competitive inhibition, two structural features are considered crucial: the 4-en-3-one function and a lipophilic 17β-side chain with one or more oxygen atoms. The main problem for 4-azasteroids is the rapid conversion into the inactive 4,5-dihydro form, which is carried out by the enzyme.
Finasteride is considered similar to the transition state of reduced testosterone and is thus a slow-offset, irreversible inhibitor. The similarity to the transition state arises from the formation of an enzyme–NADP–dihydrofinasteride adduct by rearrangement on the A-ring of the compound.
Finasteride mainly inhibits 5α-R2 (IC50 = 69 nM) and 5α-R3 (IC50 = 17.4 nM), with little inhibition of 5α-R1 (IC50 = 360 nM). As mentioned above, finasteride reduces prostatic DHT levels in the 70–90% range, with a detailed reduction of 70.8% overall and 85% for intraprostatic DHT.
Dutasteride, however, is a so-called dual inhibitor, inhibiting both 5α-R1 and 5α-R2. The IC50 for 5α-R1 is 7 nM and that for 5α-R2 is 6 nM. As mentioned above, it reduces DHT by >90% overall, or precisely 94.7%, and for intraprostatic DHT the reduction is 97–99%. Dutasteride has also been found to inhibit 5α-R3 in vitro, with IC50 = 0.33 nM. The 2,5-difluorophenyl side chain on the D-ring of the compound is significantly lipophilic, and since increased lipophilicity enhances binding at the pocket site, its potency is much greater than that of finasteride.
Finasteride is an unsaturated analogue of another 4-azasteroid, 4-MA. 4-MA is known to have dual inhibiting features, with good inhibition of 5α-R1 (IC50 = 1.7 nM) and 5α-R2 (IC50 = 1.9 nM). However, 4-MA was never marketed, as it showed hepatotoxicity. There are no detailed data about the cause of the hepatotoxicity of 4-MA in terms of SAR, but it may be concluded that the R2 group is the cause, as other 4-azasteroid compounds contain the same R1 group as 4-MA (CH3) without showing hepatotoxicity.
Nonsteroidal
The common theme in nonsteroidal 5-ARI discovery is that the first compounds were all selective inhibitors of 5α-reductase type 1 only, but were then developed to achieve dual inhibition of both types 1 and 2, since inhibition of the type 2 isozyme is the more important factor in treating BPH.
Benzo(c)quinolizinones are tricyclic derivatives of 10-azasteroids. The D-ring has been removed and the C-ring substituted for an aromatic one. The first compounds developed were selective 5-alpha reductase type 1 inhibitors, but the most potent one inhibits both type 1 and 2. The fluorine atom is an important part of the structure.
Benzo(f)quinolonones are also tricyclic compounds, but derivatives of the 4-azasteroid structure. The compounds that have been designed can be divided into two categories, hexahydro derivatives and octahydro derivatives. The octahydro derivatives have proven to be more potent. Compound LY 191704, later named bexlosteride, is the most potent octahydro derivative designed. It is a selective inhibitor of the type 1 isozyme, especially because of the chlorine atom and the amino-methyl group.
Piperidones are also 4-azasteroid derivatives, but with both the B- and D-rings removed. The original compounds designed were type 1 selective, especially the ones containing a chlorine atom connected to the aromatic ring. By inserting a styryl group into the piperidones, type 2 inhibitory activity increased.
Nonsteroidal carboxylic acids are tricyclic compounds designed to resemble steroidal carboxylic acids such as epristeride. As with the other nonsteroidal inhibitors, they have been designed by removing steroid ring systems. As with the piperidones, addition of a styryl group provides good dual inhibition of isozymes 1 and 2, but the nonsteroidal carboxylic acids are mostly type 1 selective.
Natural products
Saw palmetto extract
The European Medicines Agency (EMA) has concluded that the extract of the natural product Saw palmetto can be used to treat symptoms of benign prostatic hyperplasia (BPH) as research has shown its 5-ARI effects. An extract of Serenoa repens, also known as saw palmetto extract, is a 5-ARI that is sold as an over-the-counter dietary supplement. It is also used under the brand name Permixon in Europe as a pharmaceutical drug for the treatment of benign prostatic hyperplasia.
See also
Discovery and development of antiandrogens
List of 5α-reductase inhibitors
References
5α-Reductase inhibitors
Drug discovery | Discovery and development of 5α-reductase inhibitors | [
"Chemistry",
"Biology"
] | 2,078 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
56,125,056 | https://en.wikipedia.org/wiki/Spiral%20optimization%20algorithm | In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature.
The first SPO algorithm was proposed for two-dimensional unconstrained optimization
based on two-dimensional spiral models. This was extended to n-dimensional problems by generalizing the two-dimensional spiral model to an n-dimensional spiral model.
There are effective settings for the SPO algorithm: the periodic descent direction setting
and the convergence setting.
Metaphor
The motivation for focusing on spiral phenomena was due to the insight that the dynamics that generate logarithmic spirals share the diversification and intensification behavior. The diversification behavior can work for a global search (exploration) and the intensification behavior enables an intensive search around a current found good solution (exploitation).
Algorithm
The SPO algorithm is a multipoint search algorithm that has no objective function gradient, which uses multiple spiral models that can be described as deterministic dynamical systems. As search points follow logarithmic
spiral trajectories towards the common center, defined as the current best point, better solutions can be found and the common center can be updated.
The general SPO algorithm for a minimization problem under the maximum iteration number $k_{\max}$ (termination criterion) is as follows:
0) Set the number of search points $m \geq 2$ and the maximum iteration number $k_{\max}$.
1) Place the initial search points $x_i(0) \in \mathbb{R}^n$ $(i = 1, \ldots, m)$, determine the center $x^\star(0) = x_b(0)$ where $x_b(0) = \operatorname{argmin}_i f(x_i(0))$, and then set $k = 0$.
2) Decide the step rate $r(k)$ by a rule.
3) Update the search points: $x_i(k+1) = x^\star(k) + r(k)\,R(\theta)\,(x_i(k) - x^\star(k))$ for $i = 1, \ldots, m$.
4) Update the center: $x^\star(k+1) = x_b(k+1)$ if $f(x_b(k+1)) < f(x^\star(k))$, otherwise $x^\star(k+1) = x^\star(k)$, where $x_b(k+1) = \operatorname{argmin}_i f(x_i(k+1))$.
5) Set $k = k + 1$. If $k = k_{\max}$ is satisfied then terminate and output $x^\star(k)$. Otherwise, return to Step 2).
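A minimal Python sketch of the loop above, written for the two-dimensional case so that a single rotation matrix suffices; the objective function, point count, step rate, and rotation angle are illustrative assumptions rather than any of the published settings:
import numpy as np

def spo(f, m=30, k_max=200, r=0.95, theta=np.pi / 4, seed=0):
    rng = np.random.default_rng(seed)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # 2-D rotation matrix
    x = rng.uniform(-5.0, 5.0, size=(m, 2))            # Step 1: initial points
    center = x[np.argmin([f(p) for p in x])]           # current best point
    for _ in range(k_max):
        # Step 3: spiral every point towards the current center
        x = center + r * (x - center) @ R.T
        # Step 4: update the center if a better point was found
        best = x[np.argmin([f(p) for p in x])]
        if f(best) < f(center):
            center = best
    return center

# Example: minimize the sphere function f(x) = ||x||^2
spo(lambda p: float(np.sum(p ** 2)))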
Setting
The search performance depends on setting the composite rotation matrix $R(\theta)$, the step rate $r(k)$, and the initial points $x_i(0)$ $(i = 1, \ldots, m)$.
The following settings are new and effective.
Setting 1 (Periodic Descent Direction Setting)
This setting is effective for high-dimensional problems under the maximum iteration number $k_{\max}$. The conditions on $R(\theta)$ and the initial points $x_i(0)$ together ensure that the spiral models generate descent directions periodically. The condition on $r(k)$ works to utilize these periodic descent directions under the search termination at $k_{\max}$.
Set $R(\theta)$ as follows:
$$R(\theta) = \begin{bmatrix} 0_{n-1}^{\top} & -1 \\ I_{n-1} & 0_{n-1} \end{bmatrix}$$
where $I_{n-1}$ is the $(n-1)\times(n-1)$ identity matrix and $0_{n-1}$ is the $(n-1)\times 1$ zero vector.
Place the initial points $x_i(0)$ $(i = 1, \ldots, m)$ at random, so that they are not degenerate with respect to the generated descent directions. Note that this condition is almost surely satisfied by a random placement, and thus no explicit check is actually needed.
Set $r(k)$ at Step 2) as follows:
$$r(k) = r = \sqrt[k_{\max}]{\delta} \quad \text{(constant)}$$
where $\delta$ is a sufficiently small positive constant such as $\delta = 1/k_{\max}$ or $\delta = 10^{-3}$.
Setting 2 (Convergence Setting)
This setting ensures that the SPO algorithm converges to a stationary point under the maximum iteration number $k_{\max}$. The settings of $R(\theta)$ and the initial points $x_i(0)$ are the same as in Setting 1 above. The setting of $r(k)$ is as follows.
Set $r(k)$ at Step 2) so that the spiral contracts each time the center is improved: $r(k)$ is determined by $k^\star$, the iteration at which the center was most recently updated at Step 4), and a constant contraction rate $h \in (0, 1)$ such as $h = 0.5$. Thus we have to add the following rules about $k^\star$ to the Algorithm:
•(Step 1) Set $k^\star = 0$.
•(Step 4) If $f(x_b(k+1)) < f(x^\star(k))$, then set $k^\star = k + 1$.
Future works
The algorithms with the above settings are deterministic. Thus, incorporating some random operations makes this algorithm powerful for global optimization. Cruz-Duarte et al. demonstrated this by including stochastic disturbances in the spiral searching trajectories. However, the door remains open to further studies.
Finding an appropriate balance between diversification and intensification in the spirals, depending on the target problem class, is important for enhancing the performance.
Extended works
Many extended studies have been conducted on the SPO due to its simple structure and concept; these studies have helped improve its global search performance and proposed novel applications.
References
Collective intelligence
Multi-agent systems
Optimization algorithms and methods
optimization algorithm | Spiral optimization algorithm | [
"Engineering"
] | 719 | [
"Artificial intelligence engineering",
"Multi-agent systems"
] |
56,125,271 | https://en.wikipedia.org/wiki/Tucker%20Prize | The Tucker Prize for outstanding theses in the area of optimization is sponsored by the Mathematical Optimization Society (MOS). Up to three finalists are presented at each (triennial) International Symposium of the MOS. The winner will receive an award of $1000 and a certificate. The Albert W. Tucker Prize was approved by the Society in 1985, and was first awarded at the Thirteenth International Symposium on Mathematical Programming in 1988.
Winners and finalists
1988:
Andrew V. Goldberg for "Efficient graph algorithms for sequential and parallel computers".
1991:
Michel Goemans for "Analysis of Linear Programming Relaxations for a Class of Connectivity Problems".
Other Finalists: Leslie Hall and Mark Hartmann
1994:
David P. Williamson for "On the Design of Approximation Algorithms for a Class of Graph Problems".
Other Finalists: Dick Den Hertog and Jiming Liu
1997:
David Karger for "Random Sampling in Graph Optimization Problems".
Other Finalists: Jim Geelen and Luis Nunes Vicente
2000:
Bertrand Guenin for his PhD thesis.
Other Finalists: Kamal Jain and Fabian Chudak
2003:
Tim Roughgarden for "Selfish Routing".
Other Finalists: Pablo Parrilo and Jiming Peng
2006:
Uday V. Shanbhag for "Decomposition and Sampling Methods for Stochastic Equilibrium Problems".
Other Finalists: José Rafael Correa and Dion Gijswijt
2009:
Mohit Singh for "Iterative Methods in Combinatorial Optimization".
Other Finalists: Tobias Achterberg and Jiawang Nie
2012:
Oliver Friedmann for "Exponential Lower Bounds for Solving Infinitary Payoff Games and Linear Programs".
Other Finalists: Amitabh Basu and Guanghui Lan
2015:
Daniel Dadush for "Integer Programming, Lattice Algorithms, and Deterministic Volume Computation".
Other Finalists: Dmitriy Drusvyatskiy and Marika Karbstein
2018:
Yin Tat Lee for "Faster Algorithms for Convex and Combinatorial Optimization".
Other Finalists: Damek Davis and Adrien Taylor
2021:
Jakub Tarnawski for "New Graph Algorithms via Polyhedral Techniques".
Other Finalists: Georgina Hall and Yair Carmon
See also
List of computer science awards
References
External links
Official web page (MOS)
Computer science awards
Triennial events
Awards of the Mathematical Optimization Society
Awards established in 1988
1988 establishments in the United States | Tucker Prize | [
"Technology"
] | 482 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
56,125,679 | https://en.wikipedia.org/wiki/Journal%20of%20Commutative%20Algebra | The Journal of Commutative Algebra is a peer-reviewed academic journal of mathematical research that specializes in commutative algebra and closely related fields. It has been published by the Rocky Mountain Mathematics Consortium (RMMC) since its establishment in 2009. It is currently published four times per year.
Historically, the Journal of Commutative Algebra filled a niche for the Rocky Mountain Mathematics Consortium when the Canadian Applied Mathematics Quarterly, formerly published by the RMMC, was acquired by the Applied Mathematics Institute of the University of Alberta. Founding editors Jim Coykendall (currently at Clemson University) and Hal Schenck (currently at Auburn University) began the journal with the goal of creating a top-tier journal in commutative algebra.
Abstracting and indexing
The journal is abstracted and indexed in Current Contents/Physical, Chemical & Earth Sciences, Science Citation Index Expanded, Scopus, MathSciNet, and zbMATH.
References
External links
Academic journals established in 2009
Algebra journals
English-language journals
Quarterly journals
Delayed open access journals | Journal of Commutative Algebra | [
"Mathematics"
] | 209 | [
"Algebra journals",
"Algebra"
] |
56,125,982 | https://en.wikipedia.org/wiki/17%CE%B1-Allyl-19-nortestosterone | 17α-Allyl-19-nortestosterone, also known as 3-ketoallylestrenol or as 17α-allylestr-4-en-17β-ol-3-one, is a progestin which was never marketed. It is a combined derivative of the anabolic–androgenic steroid and progestogen nandrolone (19-nortestosterone) and the antiandrogen allyltestosterone (17α-allyltestosterone). The drug is a major active metabolite of allylestrenol, which is thought to be a prodrug of 17α-allyl-19-nortestosterone.
17α-Allyl-19-nortestosterone has 24% of the affinity of ORG-2058 and 186% of the affinity of progesterone for the progesterone receptor, 4.5% of the affinity of testosterone for the androgen receptor, 9.8% of the affinity of dexamethasone for the glucocorticoid receptor, 2.8% of the affinity of testosterone for sex hormone-binding globulin, and less than 0.2% of the affinity of estradiol for the estrogen receptor. The affinity of 17α-allyl-19-nortestosterone for the androgen receptor was less than that of norethisterone and medroxyprogesterone acetate and its affinity for sex hormone-binding globulin was much lower than that of norethisterone. These findings may help to explain the absence of teratogenic effects of allylestrenol on the external genitalia of female and male rat fetuses.
See also
Altrenogest
Ethinyltestosterone
Vinyltestosterone
References
Abandoned drugs
Tertiary alcohols
Allyl compounds
Estranes
Human drug metabolites
Enones
Progestogens | 17α-Allyl-19-nortestosterone | [
"Chemistry"
] | 408 | [
"Chemicals in medicine",
"Drug safety",
"Human drug metabolites",
"Abandoned drugs"
] |
56,127,040 | https://en.wikipedia.org/wiki/C2-Symmetric%20ligands | {{DISPLAYTITLE:C2-Symmetric ligands}}
In homogeneous catalysis, C2-symmetric ligands refer to ligands that lack mirror symmetry but have C2 symmetry (two-fold rotational symmetry). Such ligands are usually bidentate and are valuable in catalysis. The C2 symmetry of ligands limits the number of possible reaction pathways and thereby increases enantioselectivity, relative to asymmetrical analogues. C2-symmetric ligands are a subset of chiral ligands. Chiral ligands, including C2-symmetric ligands, combine with metals or other groups to form chiral catalysts. These catalysts engage in enantioselective chemical synthesis, in which chirality in the catalyst yields chirality in the reaction product.
Examples
An early C2-symmetric ligand, the diphosphine catalytic ligand DIPAMP, was developed in 1968 by William S. Knowles and coworkers of Monsanto Company, who shared the 2001 Nobel Prize in Chemistry. This ligand was used in the industrial production of L-DOPA.
Some classes of C2-symmetric ligands are called privileged ligands, which are ligands that are broadly applicable to multiple catalytic processes, not only a single reaction type.
Mechanistic concepts
While the presence of any symmetry element within a ligand intended for asymmetric induction might appear counterintuitive, asymmetric induction only requires that the ligand be chiral (i.e. have no improper rotation axis). Asymmetry (i.e. absence of any symmetry elements) is not required. C2 symmetry improves the enantioselectivity of the complex by reducing the number of unique geometries in the transition states. Steric and kinetic factors then usually favor the formation of a single product.
Chiral fence
Chiral ligands work by asymmetric induction somewhere along the reaction coordinate. The image to the right illustrates how a chiral ligand may induce an enantioselective reaction. The ligand (in green) has C2 symmetry with its nitrogen, oxygen or phosphorus atoms hugging a central metal atom (in red). In this particular ligand the right side sticks out and its left side points away. The substrate in this reduction is acetophenone and the reagent (in blue) a hydride ion. In the absence of the metal and the ligand, the Re face approach of the hydride ion gives the (S)-enantiomer and the Si face approach the (R)-enantiomer in equal amounts (a racemic mixture, as expected). The presence of the ligand and metal changes all that. The carbonyl group will coordinate with the metal, and due to the steric bulk of the phenyl group it will only be able to do so with its Si face exposed to the hydride ion, ideally with exclusive formation of the (R)-enantiomer; the Re face approach would simply hit the chiral fence. Note that when the ligand is replaced by its mirror image the other enantiomer will form, and that a racemic mixture of ligand will once again yield a racemic product. Also note that if the steric bulk of both carbonyl substituents is very similar, the strategy will fail.
Other C2-symmetric complexes
Many C2-symmetric complexes are known. Some arise not from C2-symmetric ligands, but from the orientation or disposition of high symmetry ligands within the coordination sphere of the metal. Notably, EDTA and triethylenetetraamine form complexes that are C2-symmetric by virtue of the way the ligands wrap around the metal centers. Two isomers are possible for (indenyl)2MX2, Cs- and C2-symmetric. The C2-symmetric complexes are optically stable.
Asymmetric ligands
Ligands containing atomic chirality centers, such as an asymmetric carbon, which usually do not have C2 symmetry, remain important in catalysis. Examples include cinchona alkaloids and certain phosphoramidites. P-chiral monophosphines have also been investigated.
See also
Chiral anion catalysis
Further reading
References
Coordination chemistry
Stereochemistry
Organometallic chemistry
Ligands | C2-Symmetric ligands | [
"Physics",
"Chemistry"
] | 863 | [
"Ligands",
"Stereochemistry",
"Coordination chemistry",
"Space",
"nan",
"Spacetime",
"Organometallic chemistry"
] |
56,127,644 | https://en.wikipedia.org/wiki/Starlight%20%28interstellar%20probe%29 | Project Starlight is a research project of the University of California, Santa Barbara to develop a fleet of laser beam-propelled interstellar probes and sending them to a star neighboring the Solar System, potentially Alpha Centauri. The project aims to send organisms on board the probe.
Overview
Starlight aims to accelerate the spacecraft with powerful lasers, a method the project refers to as DEEP-IN (Directed Energy Propulsion for Interstellar Exploration), thus allowing them to reach stars near the Solar System in a matter of years, in contrast to traditional propulsion methods, which would require thousands of years. Each spacecraft would be the size of a DVD disc and would be powered by plutonium. They would fly at one-fifth of the speed of light and, in the case of Alpha Centauri, would arrive after a journey of more than twenty years from Earth.
History
Starlight is a program of the Experimental Cosmology Group of University of California, Santa Barbara (UCSB), and has received funding from NASA. In 2015, the NASA Innovative Advanced Concepts (NIAC) selected DEEP-IN as a phase-1 project.
Terrestrial biomes in space
One goal of Starlight is to send terrestrial organisms along with the spacecraft, and observe how the interstellar environment and extreme acceleration affects them. This effort is known as Terrestrial Biomes in Space, and the lead candidate is Caenorhabditis elegans, a minuscule nematode. The organism will spend most of the voyage in a frozen state, and once the spacecraft approaches its target they will be thawed by heat from the onboard plutonium. Following their revival, the organisms will be monitored by various sensors, and the data they produce will be sent back to Earth. C. elegans have been used extensively in biological research as a model organism, as the worm is one of those having the fewest cells for an animal possessing a nervous system. A backup option for C. elegans are tardigrades, micro-animals that are known for their resilience to various conditions lethal to other animals, such as the vacuum environment of space and strong doses of ionizing radiation.
Planetary protection
NASA's funding does not cover the Terrestrial Biome in Space portion of Starlight, as the experiment may potentially contaminate exoplanets.
See also
, a similar initiative to Starlight
Interstellar probe
Interstellar travel
, proposed in 2016 for NASA
References
Interstellar travel
Proposed space probes
Alpha Centauri
University of California, Santa Barbara | Starlight (interstellar probe) | [
"Astronomy"
] | 509 | [
"Astronomical hypotheses",
"Interstellar travel"
] |
76,069,088 | https://en.wikipedia.org/wiki/Amauroderma%20calcitum | Amauroderma calcitum is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus found in Brazil.
References
calcitum
Fungus species
Fungi described in 2016 | Amauroderma calcitum | [
"Biology"
] | 42 | [
"Fungi",
"Fungus species"
] |
76,069,241 | https://en.wikipedia.org/wiki/Form%20%28architecture%29 | In architecture, form refers to a combination of external appearance, internal structure, and the unity of the design as a whole, an order created by the architect using space and mass.
External appearance
The external outline of a building includes its shape, size, color, and texture, as well as relational properties, like position, orientation, and visual inertia (appearance of concentration and stability).
Architects are primarily concerned with the shapes of the building itself (contours, silhouettes), its openings (doors and windows), and enclosing planes (floor, walls, ceiling).
Forms can have a regular shape (stable, usually with an axis or plane of symmetry, like a triangle or pyramid) or an irregular one; the latter can sometimes be constructed by combining multiple forms (additive forms, composition) or by removing one form from another (subtractive forms).
Multiple forms can be organized in different ways:
in a line or along a circle;
as a regular grid;
as an irregular cluster;
in a star-like radial pattern.
Internal structure
Historically, multiple approaches were suggested to address the reflection of the structure in the appearance of the architectural form. In 19th-century Germany, Karl Friedrich Schinkel suggested that the structural elements should remain visible in the forms to create a satisfying feeling of strength and security, while Karl Bötticher, as part of his "tectonics", suggested splitting the design into a structural "core-form" (Kernform) and a decorative "art-form" (Kunstform). The art-form was supposed to reflect the functionality of the core-form: for example, the rounding and tapering of a column should suggest its load-bearing function. In the tectonics as envisioned by Bötticher, the function (defined as requirements for internal space) drove the design: the size determined the roof technology to be used, which in turn dictated the support requirements, creating a structural outline of the building; architecture was the art of resolving the conflicts between functional needs and buildable architectural forms.
New materials frequently inspired new forms. For example, the arrival of construction iron essentially created a set of new core-forms, and many architects got busy inventing the matching art-forms. Similarly, the introduction of reinforced concrete, steel frames, and large plates of sheet glass in the 20th century caused the creation of radically new space and mass arrangements.
Space and mass
Space and mass (also Mass and volume) are the primary ingredients that an architect uses to compose an architectural form. The essence of a building is the separation between the finite indoor space fit for humans and the unrestricted natural environment outdoors. Unlike the physical objects manifesting the mass (for example, the floor, walls, and ceiling), the human experience of the void, air-filled indoor space is not obvious, yet the idea of architectural space is very old, going back at least to the Greek táxis ("order"), a subdivision of a building into parts.
The psychological effects of space are very common, as suggested by the English language: feeling of insecurity and compression in "confining circumstances" of inadequate space and powerful "elevated experience" of standing above a great expanse. Space and mass in architecture are not entirely separable: as was noted by George Berkeley in 1709, two-dimensional human vision cannot fully comprehend three-dimensional forms, so the perception of the space is a result of immediate visual sensation and the knowledge of textures pre-acquired through touching (this idea evolved in the 19th century into a theory of apperception).
By placing restrictions on the observer's movements, an architect can evoke a variety of emotions. For example, in Gothic architecture, an elongated nave suggests a forward movement towards the altar while the compressive effect of tall walls draws the gaze towards vaults and windows above, causing a feeling of release and "uplifting" experience. Renaissance architecture tries to guide the observer to a point where all the features appear to be in equilibrium, resolving the conflict between the compression and release, thus creating a feeling of being at rest. Neo-Palladianism in England paid attention to the architectural circulation, with the views unfolding as the visitor experiences the building.
The architectural use of space is not restricted to indoors, similar feelings can be recreated on a grand scale in the city landscape. For example, the colonnades of the St. Peter's Square in Rome suggest walking towards the entrance of the cathedral in a way similar to the navigation experiences indoors. At the same time, the facades of a standalone building usually do not create an architectural space, instead the outside of a building can be thought of as a kind of sculpture, with the masses arranged in a large void.
The balance between space and mass varied with the historical period and function of the building. For example, Egyptian pyramids and stupas in India have practically no internal space, are almost all mass, and thus manifest themselves in a sculptural fashion. Byzantine architecture, in contrast, offered in its churches an ascetic shell outside combined with sophisticated indoor spaces. Gothic cathedrals expressed the fusion between the secular and spiritual powers through an equilibrium between the worldly facade masses and mystic spaces inside. The relative importance of space and mass can change very quickly: in 1872, Viollet-le-Duc wrote his book, Entretiens sur l'architecture, completely avoiding the use of the word "space" in its modern meaning; just 20 years later August Schmarsow was declaring the primacy of Raumgestaltung, "forming the space".
Modern architecture, utilizing the steel frame, enabled space partitioning without any practical limits, transparent walls of architectural glass enable visual journeys into the boundless world behind them. At the same time modern materials reduced the contrast between the space and mass, primarily through the reduced mass of the walls.
Symbolism
The form can be considered to have a direct symbolic value used for communication between the architect and the customer. In particular, most art historians agree that the triangular pediment in Greco-Roman architecture is not just an imitation of an older roof construction, but a representation of the divine. This idea, first presented in modern times by the little-known (except for his theories) architect Jean-Louis Viel de Saint Maux in 1787, was hinted at by Cicero much earlier. Cicero also suggested that the utilitarian and symbolic meanings of the pediment are not necessarily contradictory: originally designed as part of the gabled roof to protect from the rain, the pediment gradually acquired a religious value, so that even if a building were designed for heaven, where rain does not fall, dignity would dictate adding a pediment on top of it.
The ability of architecture to represent the universe, and the common association of a sphere with the cosmos, led to extensive use of spherical shapes beginning with early Roman construction (Varro's Aviary, 1st century BC).
Theories
Multiple theories were suggested to explain the origination of forms. Gelernter considers them to be variations of five basic ideas:
A form is defined by its function ("form follows function"). For a building to be "good", it should fulfill the functional requirements imposed by external physical, social, and symbolic needs (for example, a theater should have an unobstructed view of the stage from the spectators' seats). Each set of functions corresponds to an ideal form (which can be latent, still waiting for a thoughtful architect to find it);
A form is a product of the designer's creativity. An architect's intuition suggests a new form that eventually blossoms; this explains similarities between buildings with disparate functions built by the same architect;
A form is dictated by the prevailing set of attitudes shared by the society, the Zeitgeist ("Spirit of Age"). While expressing his individuality, an architect still unconsciously reflects the artistic tastes and values that are "in the air" at the time;
A form is defined by socioeconomic factors. Unlike the Spirit of Age theory, the externalities are more physical (e.g., methods of production and distribution). Architects live in a society, and their works are influenced by the prevailing ideology (for example, Versailles represents societal hierarchy, while Prairie buildings reflect the power of the bourgeoisie);
Architectural forms are timeless; the good ones cross geographical, cultural, and temporal borders. For hundreds of years, these beliefs were embodied in "The Five Orders of Architecture". According to the theory of types, there are only a few basic building forms, like the basilica or atrium, with each generating multiple versions with stylistic differences (the basilica form can be traced in Roman court buildings, Romanesque and Gothic churches, all the way to the 20th-century Environmental Education Center in Liberty State Park, New Jersey).
Early theories of form
As the nomadic cultures began to settle and desired to provide homes for their deities as well, they faced a fundamental challenge: "how would mortals ... know the kind of built environment that would please the gods?" The first answer was obvious: claim the divine origin of the architectural form, passed to architects by kings and priests. Architects, not having access to the original source, worked out ways to scale buildings while keeping the order through the use of symmetry, multiples and fractions of the basic module, and proportions.
Plato discussed the ideal forms, the "Platonic solids" (cube, tetrahedron, octahedron, dodecahedron, icosahedron). Per Plato, these timeless Forms can be seen by the soul in the objects of the material world; architects of later times turned these into shapes more suitable for construction: the sphere, cylinder, cone, and square pyramid. The contemporaneous Greek architects, however, still assumed the divine origins of the forms of their buildings. Standard temple types with a predetermined number and location of columns eventually evolved into the orders, but the Greeks thought of these not as time-bound results of cultural evolution, but as timeless divine truths captured by mortals.
Vitruvius, in the only surviving treatise on architecture from classical antiquity (De architectura), acknowledges the evolutionary origination of forms by referring to the first shelters built by primitive men, who were emulating nature, emulating each other, and inventing. Through this process, they arrived at the immutable "truth of Nature". Thus, to achieve the triple goal of architecture, "firmness, commodity, and delight", an architect should select a timeless form and then adjust it for the site, use, and appearance (much later, the Positivist approach would take the near-perfect opposite view: environment and use create the form).
Medieval architects strove in their designs to follow the structure of the universe by starting with simple geometrical figures (circles, squares, equilateral triangles) and combining them into evolved forms used for both the plan and section views of the building, expecting better structural qualities and adherence to the perceived Divine intentions.
The Renaissance brought a wholesale return in architecture to the Classical ideals. While Giacomo da Vignola ("The Five Orders of Architecture", 1562) and Andrea Palladio ("I quattro libri dell'architettura", 1570) tweaked the proportions recorded by Vitruvius, their books declared the absolute, timeless principles of architectural design.
Rationalism and empiricism
At the end of the Renaissance, a view of the cosmos through an "organic analogy" (comparison to a living organism) evolved into a mechanical philosophy describing a world where everything is measurable. Gelernter notes that the first manifestations of the new approach occurred much later, in the Baroque style, at the time when both rationalism and empiricism gained prominence. Baroque architecture reflected this duality: early Baroque (mid-17th century) can be considered a Classicism revival with forms emphasizing logic and geometry (in opposition to Mannerism), while the Rococo style of the end of the 17th century is associated with the primacy of "sensory delights".
Architects believing in logic (like François Mansart and François Blondel) expected architectural form to follow the laws of nature and thus to be eternal. This theory stressed the importance of the architectural orders as unalterable. Gradually, a shift to empiricism occurred, most pronounced in the "quarrel of the Ancients and the Moderns", an almost 30-year-long debate in the French academies (1664–1694). The Ancients (or "Poussinists") and the Moderns (or "Rubenists") expressed rationalist and empiricist views respectively. When applied to architecture, the distinction lay in the use of Classical geometric forms by the Ancients and the sensual drama suppressing the geometrical orders in the works of the Moderns (Balthasar Neumann, Jakob Prandtauer). The Moderns (and Rococo) prevailed, but, taken to its logical conclusion, the pure sensory approach is based on individual perception, so effectively beauty in architecture was no longer objective and was declared to be rooted only in customs. Claude Perrault (of Louvre Palace facade fame) in his works freed the architectural form from both God and Nature and declared that it can be arbitrarily changed "without shocking either common sense or reason". However, asserting subjectivity caused a loss of academic rigor: art theory declined in the beginning of the 18th century, affecting art education to the point where between 1702 and 1722 nine of the highest student awards (the Grand Prix de Rome) had to be cancelled due to the absence of worthy recipients.
Positivism and Romanticism
During the era of Enlightenment, the idea of timeless and objective form was renewed as part of Neoclassicism. Two different approaches were proposed:
philosophy of positivism stated that architecture (like anything else) was determined by the outside factors;
Romantic rebellion declared the primacy of geniuses and their inner emotional resources.
The earliest application of positivist thinking to the idea of architectural form belongs to the monk Carlo Lodoli (1690–1761). Lodoli's student, Francesco Algarotti, published in 1757 his mentor's phrase, "in architecture only that shall show that has a definite function", a very early forerunner of the "form follows function" maxim underlying functionalism. The Romantics strove to bring back the organic unity of man and nature, even though the idea of nature creating the forms through an architect contradicted their cult of human genius. They latched onto the Medieval period, which they interpreted as a more natural age, with craftsmen building the cathedrals as individuals who voluntarily accepted the requirements of the large project. The Romantics began using Gothic forms a century before the flourishing of the Gothic Revival.
The Enlightenment also ushered in a new interpretation of history that declared each historical period to be a stage of growth for humanity with its own aesthetic criteria (cf. Johann Gottfried Herder's Volksgeist, which much later evolved into the Zeitgeist). No longer was architectural form considered timeless, or merely a whim of an architect's imagination: the new approach allowed the architecture of each age to be classified as an equally valid set of forms, a "style" (the use of the word in this sense became established by the mid-18th century).
Lodoli considered form one of the two scientific aims of architecture, the other being function (thought of primarily as structural efficiency), and stated that these goals should be unified. Form (including structural integrity, proportions, and utility) was declared to be a result of construction materials applied toward desired goals in ways agreeing with the laws of nature.
Neoclassicism
Neoclassicism declared three sources of architectural form to be valid, without an attempt to explain the contradictions:
the beauty is derived from observation of nature and man-made objects;
the beauty is inside the architect, who tries to impress it on the world;
the beautiful designs are the ones inspired by the Classical architecture.
In practice, neoclassicists took the third approach, which Sir Joshua Reynolds declared to be a shortcut avoiding the "painful" germination of ideals from sensory experience. Artists were expected to imitate, not copy, while also avoiding the Romantic notions of personal expression. One of their leaders, Étienne-Louis Boullée, was preoccupied with Platonic solids, while others revived the classicism of Palladio.
Eclecticism
The philosophers of the 19th century were discovering relativism and declaring the loss of rational principles in the world. Architects could have accommodated the new ideas by creating forms unique to each architect. Instead, they mostly chose eclecticism and worked in multiple styles, sometimes grafting one onto another, and fitting the new construction techniques, like the iron frame, into old forms. Few experimented with new forms; Karl Friedrich Schinkel discussed how an architect can create his own style, but the coherent application of the Nietzschean approach, form as a whim of its creator, would only appear a century later.
Schinkel declared that all architectural forms come from three sources: construction techniques, tradition or historical reminiscences, and nature (the latter are "meaningful by themselves"). Rudolf Wiegmann said that eclecticism with its multiplicity of transplanted forms turns the genuine art of architecture into fashion and proposed instead to concentrate on a national style (German Rundbogenstil).
Romanticism, Arts and Crafts
A new generation of Romantic architects continued the tradition of appreciation of the Middle Ages and Gothic into the 19th century. Augustus Pugin excelled in Gothic designs near-indistinguishable from the originals while insisting that form follows function: all features of the building should be dictated by convenience, construction, or propriety, while ornamentation's role is to highlight the construction elements. In his opinion, pointed architecture was essentially Christian art, and the old forms were perfect, just like the faith itself; architects were expected "to follow, not to lead". Schinkel and John Nash switched from Classical to Gothic Revival and back depending on the particular project.
At the end of the 19th century William Morris, inspired by Pugin and John Ruskin, redirected Romanticism towards Arts and Crafts. The focus shifted towards the forms of medieval vernacular architecture, with architect and builder being the same person. Following the idealism of Fichte, Schelling and Hegel, the designers of the Arts and Crafts movement saw their job as personal artistic expression unbounded by old traditions (cf. the "Free style" of Charles Rennie Mackintosh). New forms were inspired by the properties of construction materials and craftsmanship.
Relativism, Empiricism
The end of the 19th century and the beginning of the 20th saw discussions between the relativist philosophers and their positivist opponents, adherents of Phenomenology and Empiricism, who found it hard to accept the impossibility of firm knowledge and thus strove to keep the notion of objective truth. Architects preferring Classical designs with their timeless principles kept positivist views, while the Romantic ones enjoyed the phenomenological freedom of designs unbound by any pre-conceived rules. The long tradition of Classicism was eventually finished off by Modernism in the 1920s–1930s, with the last defender of the former, Julien Guadet, offering a sophisticated theory of form: the mind comes preconfigured with objective information about beauty (but this information requires discovery based on experience and practice), then modifies these innate designs according to the environment. The issue with this theory came in the early 20th century with new designs that were objectively beautiful yet retained seemingly no Classical principles, thus making the idea of a prewired brain doubtful.
References
Sources
Architectural theory | Form (architecture) | [
"Engineering"
] | 4,022 | [
"Architectural theory",
"Architecture"
] |
76,069,854 | https://en.wikipedia.org/wiki/Lutetium%20nitride | Lutetium nitride is a binary inorganic compound of lutetium and nitrogen with the chemical formula LuN.
Preparation
Lutetium nitride can be prepared by direct nitridation of lutetium metal at 1600 °C:
2 Lu + N2 → 2 LuN
Physical properties
Lutetium nitride crystallizes in the cubic crystal system with the space group Fm3m.
References
Nitrides
Lutetium compounds
Nitrogen compounds | Lutetium nitride | [
"Chemistry"
] | 82 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
76,070,989 | https://en.wikipedia.org/wiki/Iron%20tetrafluoride | Iron tetrafluoride is a binary inorganic compound with the chemical formula FeF4.
History
Iron tetrafluoride was initially observed in 2003 via mass spectrometry and Fourier-transform infrared spectroscopy.
Preparation
Iron tetrafluoride can be prepared by the reaction of iron atoms with elemental fluorine in excess neon and argon at 4 K:
Fe + 2 F2 → FeF4
Physical properties
Iron tetrafluoride is assumed to have a tetrahedral or square-planar structure. It has been calculated to be stable in the gas phase.
References
Iron compounds
Fluorides | Iron tetrafluoride | [
"Chemistry"
] | 110 | [
"Fluorides",
"Salts"
] |
76,071,220 | https://en.wikipedia.org/wiki/2002%20Diaz%20pipeline%20incident | The 2002 Diaz pipeline incident, also known simply as the Holley chemical spill, was a chemical leak at the Diaz Chemical Corporation site in Holley, New York. On January 5, 2002, at approximately 9:30 am, a faulty reactor vessel burst open, along with its pipeline carrying chemicals underground from inside the Diaz chemical plant, releasing approximately 80 gallons of thionyl chloride, chloroacetyl chloride, toluene, 2-chloro-6-fluorophenol and related chemicals, and chlorobenzene into the soil, the atmosphere, and local homes, and contaminating 3,100 tons of concrete. Citizens complained of nosebleeds, eye irritation, sore throats, headaches, and skin rashes. Others who resided in the area at the time of the incident reported the effects as "unbearable", causing them to flee to temporary housing in surrounding towns. Because some of the chemicals involved are carcinogenic or can cause other chronic health issues, eight families were evacuated and other houses in the area were left abandoned, temporarily leaving Holley a ghost town. As of 2021, the Environmental Protection Agency (EPA) believed that only 10% of all the chemicals released into the groundwater and soil had been cleaned up, but the Holley, NY town board claims otherwise, saying that its own tests of the soil and groundwater have shown different results.
Abandonment of chemical plant
Shortly after the town evacuation and the abandonment of several houses, the former residents of Holley launched a $60 million civil lawsuit against Diaz, which resulted in the company filing for bankruptcy and abandoning the former chemical plant, leaving behind reactor vessels, filled chemical drums, and 750 tons of contaminated scrap metal.
References
Industrial accidents and incidents in the United States | 2002 Diaz pipeline incident | [
"Chemistry"
] | 361 | [
"Chemical process engineering",
"Chemical plants"
] |
76,071,427 | https://en.wikipedia.org/wiki/Cobalt%20tetrafluoride | Cobalt tetrafluoride is a binary inorganic compound with the chemical formula CoF4.
Synthesis
Cobalt tetrafluoride was prepared in a gas-phase reaction by fluorination of cobalt trifluoride (CoF3), using terbium(IV) fluoride (TbF4) as the fluorinating agent.
Physical properties
Cobalt tetrafluoride is too unstable to exist as a solid or liquid, but it is stable in a dilute gas phase.
References
Cobalt compounds
Fluorides | Cobalt tetrafluoride | [
"Chemistry"
] | 84 | [
"Fluorides",
"Salts"
] |
76,073,805 | https://en.wikipedia.org/wiki/ArkUI | ArkUI is a declarative user interface framework developed by Huawei for building user interfaces in native HarmonyOS, OpenHarmony, and Oniro applications using the ArkTS and Cangjie programming languages.
Overview
ArkUI 3.0 introduced declarative development in eTS (extended TypeScript) with HarmonyOS 3.0, followed by the main ArkTS programming language in HarmonyOS 3.1, contrasting with the imperative syntax used in Java development in HarmonyOS 1.0 and 2.0. ArkUI allows for 2D as well as 3D drawing, animations, event handling, Service Card widgets, and data binding. ArkUI automatically synchronizes UI views with their underlying data.
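For illustration, the following is a minimal sketch of this data binding (not taken from the source; the Counter struct and count variable are hypothetical names): mutating a variable decorated with @State automatically re-renders the components that read it.
@Entry
@Component
struct Counter {
  @State count: number = 0  // @State: changing this value triggers a UI refresh

  build() {
    Column() {
      Text(`Count: ${this.count}`)  // re-rendered whenever count changes
        .fontSize(30)
      Button('Increment')
        .margin({ top: 12 })
        .onClick(() => {
          this.count += 1  // no manual refresh call is needed
        })
    }
    .width('100%')
  }
}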
ArkUI integrates with the DevEco Studio IDE to provide real-time previews during editing, along with support for debugging and other development features.
ArkJS is designed for web development with a Vue 2-like syntax, providing a familiar environment for web developers using JS and CSS. ArkJS incorporates the HarmonyOS Markup Language (HML), which allows attributes prefixed with @ for the MVVM architectural pattern.
History
During HDC 2021 on October 22, 2021, the HarmonyOS 3.0 developer preview introduced ArkUI 3.0 for the eTS and JS programming languages with ArkCompiler, in contrast to ArkUI 1.0 and 2.0, which used imperative development with Java in earlier versions of HarmonyOS.
During HDC 2022 in November 2022, with HarmonyOS 3.1, ArkUI evolved into fully declarative development, featuring declarative UI capabilities, improved layout ability, component capability improvements, and more. In April 2023, the HarmonyOS 3.1 Beta 1 build included declarative ArkUI 2D and 3D drawing capabilities. The upgrade also improved layout, component, and app state management capabilities.
During HDC 2023 in August 2023, Huawei announced HarmonyOS 4.0 improvements to ArkUI with ArkTS, alongside native HarmonyOS NEXT software development using the Ark Engine with ArkGraphics 2D and ArkGraphics 3D. The company also announced a cross-platform extension of ArkUI called ArkUI-X, which would allow developers to run applications across Android, iOS and HarmonyOS from one project using the DevEco Studio IDE and Visual Studio Code plugins. On January 18, 2024, during the HarmonyOS Ecology Conference, Huawei revealed the HarmonyOS NEXT software stack, which included the ArkUI/ArkUI-X programming framework with the Ark Compiler/BiSheng Compiler/Ark Runtime compiler and runtime, for both ArkTS and the incoming Cangjie programming language.
ArkUI-X
ArkUI-X is an open-source UI software development kit that extends ArkUI for building cross-platform applications, additionally targeting Android and iOS. Web platform support with ArkJS was released on December 8, 2023. ArkUI-X consists of both a UI language and a rendering engine.
Features
Components
System components are built-in components within the ArkUI framework, categorized into container components and basic components. For example, Row and Column are container components that can hold other components, while Text and Button are basic components.
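A minimal sketch of this distinction follows (hypothetical component and labels, not from the source): Column and Row act as containers nesting the basic Text and Button components.
@Component
struct StatusCard {
  build() {
    Column() {              // container component: lays out children vertically
      Row() {               // container component: lays out children horizontally
        Text('Status:')     // basic component
          .fontSize(20)
        Text('Ready')
          .fontSize(20)
          .margin({ left: 8 })
      }
      Button('Refresh')     // basic component
        .margin({ top: 12 })
    }
    .padding(16)
  }
}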
Examples
The following is an example of a simple Hello World program. It is standard practice in ArkUI to separate the application struct and views into different structs, with the main view named Index.
// Index.ets
import router from '@ohos.router';
@Entry
@Component
struct Index {
@State message: string = 'Hello World'
build() {
Row() {
Column() {
Text(this.message)
.fontSize(50)
.fontWeight(FontWeight.Bold)
// Add a button to respond to user clicks.
Button() {
Text('Next')
.fontSize(30)
.fontWeight(FontWeight.Bold)
}
.type(ButtonType.Capsule)
.margin({
top: 20
})
.backgroundColor('#0D9FFB')
.width('40%')
.height('5%')
// Bind the onClick event to the Next button so that clicking the button redirects the user to the second page.
.onClick(() => {
router.pushUrl({ url: 'pages/Second' })
})
}
.width('100%')
}
.height('100%')
}
}
The @ohos.router routing library implements page transitions; target pages must be declared in the main_pages.json file before they can be invoked.
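For completeness, a hypothetical companion page for the example above might look as follows. This is a sketch, not from the source, and it assumes that "pages/Second" has been declared alongside "pages/Index" in main_pages.json.
// Second.ets
import router from '@ohos.router';

@Entry
@Component
struct Second {
  @State message: string = 'Hi there'

  build() {
    Row() {
      Column() {
        Text(this.message)
          .fontSize(50)
          .fontWeight(FontWeight.Bold)
        Button() {
          Text('Back')
            .fontSize(30)
            .fontWeight(FontWeight.Bold)
        }
        .type(ButtonType.Capsule)
        .margin({ top: 20 })
        .backgroundColor('#0D9FFB')
        .width('40%')
        .height('5%')
        .onClick(() => {
          router.back()  // return to the previous page (Index)
        })
      }
      .width('100%')
    }
    .height('100%')
  }
}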
Reception
Taobao claims that the ArkUI version of its app achieves checkout page performance 1.5 times faster than the Android version.
See also
SwiftUI
Flutter
Xamarin
React Native
Qt (software)
Jetpack Compose
References
External links
ArkUI at HarmonyOS Developer and Huawei Developer
ArkUI Example
2021 software
Gesture recognition
HarmonyOS
Proprietary software
Huawei products
Mobile software development
Software development
Programming tools
Software frameworks | ArkUI | [
"Technology",
"Engineering"
] | 1,053 | [
"Software engineering",
"Computer occupations",
"Software development"
] |
76,081,369 | https://en.wikipedia.org/wiki/Evolutionary%20transition%20in%20individuality | Evolutionary transition in individuality is the process through which the descendants of independent organisms become lower-level units within a super-organism at a higher hierarchical level. Examples include cells assembling into a multicellular organism, or the endosymbiosis of cells into more complex cells.
See also
Multicellular organism#Evolutionary history
Superorganism
References
Evolutionary biology concepts | Evolutionary transition in individuality | [
"Biology"
] | 72 | [
"Evolutionary biology concepts"
] |
76,082,525 | https://en.wikipedia.org/wiki/Quantum%20Space%20%28company%29 | Quantum Space is a company founded in 2022 that plans to develop spacecraft that will operate in geosynchronous and cislunar space. Corporate leadership includes Dr. Kam Ghaffarian, Kerry Wisnosky and Ben Reed. Ghaffarian is also chairman of Axiom Space. It is based in Rockville, Maryland.
QS-1
On 26 October 2022 Quantum Space announced its first spacecraft mission. The spacecraft will collect space domain awareness data. The QS-1 spacecraft launch is scheduled for October 2024.
The Ranger spacecraft used for QS-1 will include processor and navigation electronics provided by Beyond Gravity, a subsidiary of RUAG.
References
External links
Transport companies established in 2022
Spacecraft manufacturers
2022 establishments in Maryland | Quantum Space (company) | [
"Astronomy"
] | 154 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
76,082,725 | https://en.wikipedia.org/wiki/Environmental%20history%20of%20the%20United%20States | The Environmental history of the United States covers the history of the environment over the centuries to the late 20th century, plus the political and expert debates on conservation and environmental issues. The term "conservation" appeared in 1908 and was gradually replaced by "environmentalism" in the 1970s as the focus shifted from managing and protecting natural resources to a broader concern for the environment as a whole and the negative impact of poor air or water on humans.
For recent history see Environmental policy of the United States.
Environmental trends
The Pre-Columbian Environment
According to Erin Stewart Mauldin, the geological history of the United States predates human settlement by millions of years. The landscape of the North American continent was shaped by plate tectonics, volcanic activity, and glaciation. The Appalachian Mountains resulted from plate collisions, the Rocky Mountains from the subduction of the Pacific Ocean floor, and the Pacific Northwest and New England from the accretion of microcontinents. Glaciation formed the Great Lakes and influenced soil composition across the country, with volcanic activity contributing to regions like the Columbia Plateau. Paleoindians from Siberia were the continent's first human inhabitants, starting around 30,000 BCE. They coexisted with megafauna like mammoths. The reasons for these species' extinction, possibly due to climate change or human hunting, remain debated. The absence of large domesticable animals in North America affected the development of societies, limiting hunting and herding and later giving European colonizers a biological edge. Native Americans developed diverse subsistence strategies, including agriculture, hunting, and fishing, with practices varying across regions. They also altered the landscape through land clearing and hunting practices, leading to environmental changes. The pre-Columbian landscape encountered by Europeans was significantly shaped by human activity, challenging the idea of an untouched wilderness.
New England to 1815
Before 1815 the New England farmers were largely self-sufficient. The forest provided wood to build homes and barns, and fueled the stove all winter long. Timber was sold for ship construction, and naval stores were sold for export to England. The remaining forest was the habitat for deer and other game that were easily hunted with muskets or traps. Once cleared, the land provided pasture for the sheep (raised for wool), the hogs, and the family cow, as well as space for the vegetable garden. The significance of the forest ranged from a threat to settlers to a place of Puritan religious significance, as well as a source of beauty and pride. As the population grew, the forest transitioned from perceived abundance to a dwindling asset. After 1815, when export markets reopened after the Napoleonic Wars and the War of 1812, farmers in the region increasingly focused on profitable commercial crops, especially sheep, cattle, hay, lumber, and wheat. As the nearby cities grew they sold more milk and cheese, eggs, apples, cranberries and maple syrup.
Industrial revolution, 1810s-1890s
In the late 18th and early 19th centuries, wastes from mining operations began to enter rivers and streams, and iron bloomeries and furnaces used water for cooling. In the early 19th century, the development of steam engines led to their use in the mining and manufacturing sectors (such as textiles). The expanded use of steam engines generated larger volumes of heated water (thermal pollution). The productivity gains, along with the introduction of railroads in the 1830s and 1840s—which increased the overall demand for coal and minerals—led to additional generation of wastes.
The Second Industrial Revolution led to development of manufacturing processes that generated new types of wastes, including air, water and land pollution (solid and hazardous waste). These new industrial sectors included:
iron and steel
oil and gas extraction
petroleum refineries
manufacturing (smelting) of non-ferrous metals
rubber manufacturing
fertilizers and chemicals
paper products manufacturing.
Hazards of textile mills
Cash wages brought rural workers into the new textile mills, first in New England after 1810 and then in the South after 1870. The mills were powered by waterwheels on local rivers and caused little harm to the external environment. The hazards came indoors as workers faced air and noise pollution. The men, women and children worked in family teams 10 hours a day in a tightly enclosed environment filled with dust and fiber. The machinery occasionally caused accidents, but the polluted environment was much more serious. Having large numbers of people laboring in close quarters day after day was the ideal setting for the rapid spread of diseases including the common cold, bronchitis, pneumonia and tuberculosis. Workers also suffered hearing loss and fatigue. Byssinosis, also known as "brown lung disease" or "Monday fever", was particularly prevalent among cotton textile workers, with symptoms including chest tightness and shortness of breath. There were sharp disagreements among workers, employers, and medical professionals regarding the impact of factory environments on health. The mills seldom employed medical care on site, but they did support community hospitals.
Hazards of underground mines
Underground mining was a very hazardous occupation. However, the coal, iron, lead and copper were essential for industrialization. Mines paid well and drew many skilled miners from Britain and Germany.
Paul Rakes examines coal mine fatalities in West Virginia in the first half of the 20th century. Besides the well-publicized mine disasters that killed a number of miners at a time, there were many smaller episodes in which one or two miners lost their lives. Mine accidents were considered inevitable, and mine safety did not appreciably improve the situation because of lax enforcement. West Virginia's mines were considered so unsafe that immigration officials would not recommend them as a source of employment for immigrants, but they came anyway for the high pay. When the United States Bureau of Mines was given more authority to regulate mine safety in the 1960s, safety awareness improved, and West Virginia coal mines became less dangerous.
The transition after 1920 from coal to hydro and oil, and later to nuclear, gas and solar power, dramatically lowered the hazards to energy workers, as did the transition from underground to surface mining.
Destruction of a fourth of the forests, 1780s to 1860s
According to geographer Michael Williams, by 1860 about 153 million acres of forest had been cleared for farms, and another 11 million acres cut down by industrial logging, mining, railroad construction, and urban expansion. A fourth of the original forest cover in the eastern states was gone. At the same time there was a major change in how Americans viewed forests. They were recognized as the foundation of industrialization, agricultural expansion, and material progress. Lumber was the nation's largest industry in 1850, and second in 1860 behind textiles. As Frederick Starr emphasized in 1865, forests were integral to the four key necessities for prosperity: "cheap bread, cheap houses, cheap fuel, and cheap transportation for passengers and freights." Lumbering was a very dangerous trade, with crippling accidents and death a common hazard. But it paid well because lumber was essential for construction and wood was the main fuel for homes, businesses, steamboats and railroads. Intellectuals began examining the complex relationships between forests and soil, climate, farming, railroading and the economy. They pondered the overall ecological balance. Was the nation's energy at risk as settlement expanded westward into the trans-Mississippi prairies where wood was scarce? Given the economic and cultural importance of the forests, some worried commentators, especially George Perkins Marsh and Increase Lapham, began questioning the widespread destruction. They saw the forests and backwoods pioneers as symbols of America, and their disappearance was concerning. Romantic writers such as Henry David Thoreau and Ralph Waldo Emerson helped Americans appreciate the aesthetic and recreational value of forests, beyond just their economic importance. The early conservation movement had its roots in these concerns.
Western frontier
The British government's attempt to restrict westward expansion with the ineffective Proclamation Line of 1763 was cancelled by the new United States government. The first major movement west of the Appalachian Mountains began in Pennsylvania, Virginia and North Carolina as soon as the Revolutionary War was effectively won in 1781. Pioneers housed themselves in a rough lean-to or at most a one-room log cabin. The main food supply at first came from hunting deer, turkeys, and other abundant small game.
Clad in typical frontier garb, leather breeches, moccasins, fur cap, and hunting shirt, and girded by a belt from which hung a hunting knife and a shot pouch – all homemade – the pioneer presented a unique appearance. In a short time he opened in the woods a patch, or clearing, on which he grew corn, wheat, flax, tobacco and other products, even fruit. In a few years the pioneer added hogs, sheep and cattle, and perhaps acquired a horse. Homespun clothing replaced the animal skins. The more restless pioneers grew dissatisfied with overly civilized life, and uprooted themselves again to move 50 or a hundred miles (80 or 160 km) further west.
In 1788, American pioneers to the Northwest Territory established Marietta, Ohio, as the first permanent American settlement in the region.
The Louisiana Purchase of 1803 doubled the size of the nation. It contained a few small European settlements and large numbers of Native Americans. The federal government had charge of Indian affairs, and one by one purchased Indian lands. Individuals who were willing to assimilate into American society were allowed to remain. Tribes that wanted to keep their self-government kept a small part of their land as an Indian reservation and sold the rest to the federal government for an annual subsidy from the Bureau of Indian Affairs. Tribes east of the Mississippi were usually relocated further west, primarily to Indian Territory (now the state of Oklahoma). See Indian removal.
By 1813 the western frontier had reached the Mississippi River. St. Louis, Missouri was the largest town on the frontier, the gateway for travel westward, and a principal trading center for Mississippi River traffic and inland commerce. There was wide agreement on the need to settle the new territories quickly, but the debate polarized over the price the government should charge. The conservatives and Whigs, typified by President John Quincy Adams, wanted a moderated pace that charged the newcomers enough to pay the costs of the federal government. The Democrats, however, tolerated a wild scramble for land at very low prices. The final resolution came in the Homestead Law of 1862, with a moderated pace that gave settlers 160 acres free after they had worked the land for five years.
From the 1770s to the 1830s, pioneers moved into the new lands that stretched from Kentucky to Alabama to Texas. Most had operated farms back east and now relocated in family groups. Historian Louis M. Hacker shows how wasteful the first generation of pioneers was; they were too ignorant to cultivate the land properly, and when the natural fertility of virgin land was used up, they sold out and moved west to try again. Hacker describes that in Kentucky about 1812:
Hacker adds that the second wave of settlers reclaimed the land, repaired the damage, and practiced a more sustainable agriculture.
Civil War
In the Civil War (1861–1865), more powerful long-range rifles and artillery caused high casualty rates of wounding and death. The Union forces had much better medical and hospital facilities, while the supply system failed so often in the Confederacy that for months at a time soldiers marched and fought barefoot, with little medicine available to their overworked doctors. The Union systematically devastated the railway system in the South, and ruined many cotton plantations. Combat operations killed thousands of horses and mules used to pull supplies, artillery and munitions. The South was overwhelmingly rural, with a priority on growing and exporting cotton to textile factories. Most of the food was imported from the North. The Union blockade shut down most of the cotton exports and nearly all of its food imports. The Union cut off many of its internal rail and river travel routes. The main Southern meat supply was pork, but the output was sharply reduced by disease and shortages of feed. Hunger and bad nutrition weakened the Confederacy and led to desertions as Confederate soldiers realized their families were at risk of starvation.
The war was largely fought in hot, wet regions hosting numerous pandemic diseases. As a result, sickness rates were high on both sides; for both armies, disease caused about twice as many deaths as combat. The main causes of fatality were diarrhea and dysentery, followed by typhoid fever, pneumonia, malaria and smallpox.
Malaria was widespread across the South. The Union army suffered over 1 million cases of malaria, resulting in about 10,000 deaths; it accounted for about 1 in 6 of the 6.5 million episodes of illness. More Southerners had immunity, but they had many cases and far worse medical treatment. Quinine was the only effective remedy; Union doctors had plenty, but the blockade cut off supplies to the South. Ticks, lice and fleas caused much illness in both armies, especially typhus and relapsing fever.
Of the 3 million horses and mules in military service during the war, about half died. The main causes were battle injuries, overwork, diseases like glanders and lack of proper food and care. In the South the Union army shot horses it did not need to keep them out of Confederate hands.
Great Plains 1870s
The population of the Great Plains states, including Minnesota, Dakota, Nebraska, and Kansas, grew from 1.0 million in 1870 to 2.4 million in 1880. The number of farms in the region tripled, increasing from 99,000 in 1870 to 302,000 in 1880. The improved acreage (land under cultivation) quintupled, rising from 5.0 million acres to 24.6 million acres during the same period. The new settlers mostly purchased land on generous terms from transcontinental railroads that had been given land grants by Washington. They focused on wheat and cattle. This rapid population influx and agricultural expansion was a hallmark of the settlement and development of the Great Plains in the late 19th century, as the region attracted waves of new settlers from Germany, Scandinavia, and Russia, as well as farmers who sold land in older states to move to larger farms.
Metropolitan industry 1870-1920
From 1870 to 1920, the center of industrialization expanded from New England and the Mid-Atlantic regions into the Midwest, to Chicago and St. Louis. According to Martin Melosi, this became the industrial base of the world's leading industrial power. Environmental degradation, ignored at first, became an increasing concern regarding sewage, garbage, drinking water, clean air and adequate medical care. Pollution was caused primarily by coal, the main energy source used to power factories and heat offices and apartments. Its use led to a sharp increase in carbon emissions and air pollution. The concentration of industries such as steel improved efficiency but increased resource waste and pollution of air and water, as urban rivers became dumping grounds for industrial waste. Residential overcrowding in tenements led to poor sanitation and more sickness.
Public lands controlled by federal government
Among the first pieces of legislation passed following independence was the Land Ordinance of 1785, which provided for the surveying and sale of lands in the area created by state cessions of western land to the national government. Later, the Northwest Ordinance provided for the political organization of the Northwest Territories (now the states of Michigan, Wisconsin, Ohio, Illinois, and Indiana).
To encourage settlement of western lands, Congress passed the first of several Homestead Acts in 1862, granting parcels in increments to homesteaders who would maintain a living on the land for five years, after which they would own it. Congress also made huge land grants to various railroads working to complete a transcontinental rail system. The railroad grants included mineral and timber-rich lands so that the railroads could get financing to build. Again, the plan was that the railroads would sell off the land to get money, and the new transportation network would not use taxpayer money.
It turned out that much western land was not suited for homesteading because of mountainous terrain, poor soils, lack of available water, and other problems. By the early 20th century, the federal government held significant portions of most western states that had simply not been claimed for any use. Conservationists prevailed upon President Theodore Roosevelt to set aside lands for forest conservation and for special scientific or natural history interest. Much land still remained unclaimed even after such reserves had been initially set up. The US Department of the Interior held millions of acres in the western states, with Arizona and New Mexico joining the union in 1912. US President Herbert Hoover proposed in 1932 to deed the surface rights to the unappropriated lands to the states, but the states complained that the lands had been overgrazed and would impose a burden on their cash-strapped budgets. The Bureau of Land Management was created to manage much of that land.
History of conservation and environmentalism
Michael Kraft examines the rise and evolution of conservation and environmental politics and policies. "Conservation" originated in the late 19th century as a movement built around the conservation of natural resources and an attempt to stave off air, water, and land pollution. By the 1970s environmentalism evolved into a much more sophisticated control regime, one that employed the Environmental Protection Agency to slow environmental degradation.
According to Chad Montrie, historians largely agree on the basic points of this account: The conservation of natural resources was a significant topic of debate in the early and mid-20th century, highlighted by a tension between the business sector's push for efficient resource utilization and the advocates for preserving wilderness and natural beauty. In the 1960s and 1970s the conservation movement morphed into modern environmentalism. The seminal moment that ignited the transition occurred in 1962 with the publication of Rachel Carson's groundbreaking book, "Silent Spring". Carson's urgent message, warning about the perils of harmful chemical pollutants, notably substances like DDT with immediate benefits but long-term detrimental impacts, resonated with an educated audience deeply concerned about quality-of-life issues. The environmental awakening spurred by Carson's work was further fueled by events like the 1969 televised oil spill off the California coast. It prompted many to join mainstream environmental organizations led by visionaries such as David Brower of the Sierra Club. The momentum was bolstered by the inaugural Earth Day in 1970. President Richard Nixon took proactive steps through executive actions and collaboration with Congress to enact pivotal legislation establishing regulatory frameworks that curbed air and water pollution and mitigated the adverse effects of corporate greed and rampant consumerism. More radical activism emerged in the late 1970s and early 1980s, exemplified by the chemical disaster at Love Canal in 1977 and a battle in 1982 against a PCB toxic waste dump in a Black community in North Carolina. The result was a confrontational grassroots environmentalism that marked the genesis of the "environmental justice" movement, focused on issues of toxic substances and concerns of "environmental racism". The collective efforts during this period laid a foundation for ongoing environmental advocacy and policy development aimed at safeguarding the environment for future generations.
Conservation Movement
The term "conservation" was coined by American forester Gifford Pinchot in 1907. He told his close friend President Theodore Roosevelt, who used it as the theme of a national Conference of Governors in 1908 that discussed priorities for conservation.
Origins
The American movement received its inspiration from 19th-century Romantic writings that exalted the inherent value of nature, quite apart from human usage. Author Henry David Thoreau (1817–1862) made key philosophical contributions that exalted nature. Thoreau was interested in peoples' relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argued that people should become intimately close with nature. British and German standards were also influential in designing American policies and training. Bernhard Fernow (1851–1923) emigrated from Germany in 1876 and became the third chief of the Department of Agriculture's Division of Forestry, serving from 1886 to 1898. He helped design what in 1905 became the Forest Service. Carl A. Schenck (1868–1955), another German expert, migrated to the United States in 1895 and helped shape the education of foresters.
Progressivism: Efficiency, Equity and Esthetics
According to historians Samuel P. Hays and Clayton Koppes, the conservation movement was launched into the national political arena in 1908 by President Theodore Roosevelt and his top advisor Gifford Pinchot. It represented the essence of the Progressive Era and therefore was driven by the primary values of efficiency, equity, and esthetics. Efficiency was to be achieved by full-time experts in the federal bureaucracy (headed by Pinchot) who would use the latest scientific results to manage the public domain to eliminate waste. These disinterested experts would prevent the corruption sought by selfish business interests. Equity meant that natural resources were the province of all the people and should not be plundered by special interests. Instead, resources should be apportioned broadly and equitably. However, "all the people" in practice meant white farm owners and ranchers who obtained a free water supply or access to free grazing land. The esthetic theme was an appeal to upscale white tourists who wanted a taste of wilderness. Wild and scenic lands should be set aside in national parks, not for their intrinsic value, but to provide free recreation, refresh the spirit weakened by urbanization, and even upgrade "sissies" into virile outdoorsmen. In the Great Depression of the 1930s the New Deal of President Franklin D. Roosevelt expanded the E-E-E tradition to include poor whites, with his key advisors being Harry Hopkins and Harold L. Ickes. The main programs reached two million poor unemployed young men through the Civilian Conservation Corps, while the Tennessee Valley Authority worked to modernize millions of traditional people trapped in an impoverished, isolated region.
Competing ideologies
Both conservationists and preservationists spoke out in political debates during the Progressive Era (the 1890s–early 1920s), with an opposition emerging in the 1920s. There were three main positions.
Laissez-faire: The laissez-faire position, first developed in 1776 by Adam Smith, argued that the owners of private property, including lumber, oil and mining companies, should be allowed to do anything they wished on their properties. Critics warned that this pro-business policy leads to lower prices, mass consumption, waste, and the exhaustion of natural resources.
Conservationists: The conservationists, led by Theodore Roosevelt and his close allies George Bird Grinnell and Gifford Pinchot, were motivated by the wanton waste that was taking place at the hands of market forces, including logging and hunting. This practice resulted in placing a large number of North American game species on the edge of extinction. Roosevelt recognized that the laissez-faire approach was too wasteful and inefficient. In any case, they noted, most of the natural resources in the western states were already owned by the federal government. The best course of action, they argued, was a long-term plan devised by national experts to maximize the long-term economic benefits of natural resources. To accomplish the mission, Roosevelt and Grinnell formed the Boone and Crockett Club, whose members were some of the best minds and most influential men of the day. Its contingent of conservationists, scientists, politicians, and intellectuals became Roosevelt's closest advisers during his march to preserve wildlife and habitat across North America.
Preservationists: Preservationists, led by John Muir (1838–1914), argued that the conservation policies were not strong enough to protect the interest of the natural world because they continued to focus on the natural world as a source of economic production.
The debate between conservation and preservation reached its peak in the public debates over the construction of California's Hetch Hetchy dam in Yosemite National Park, which supplies San Francisco's water. Muir, leading the Sierra Club, declared that the valley must be preserved for the sake of its beauty: "No holier temple has ever been consecrated by the heart of man."
President Roosevelt put conservationist issues high on the national agenda. He worked with all the major figures of the movement, especially his chief advisor on the matter, Gifford Pinchot and was deeply committed to efficiency in conserving natural resources. He encouraged the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms and placed 230 million acres (360,000 mi2 or 930,000 km2) under federal protection. Roosevelt set aside more federal land for national parks and nature preserves than all of his predecessors combined.
Roosevelt established the United States Forest Service, signed into law the creation of five national parks, and signed the 1906 Antiquities Act, under which he proclaimed 18 new national monuments. He also established the first 51 bird reserves, four game preserves, and 150 national forests, including Shoshone National Forest, the nation's first. The area of the United States that he placed under public protection totals approximately .
Gifford Pinchot had been appointed by McKinley as chief of the Division of Forestry in the Department of Agriculture. In 1905, his department gained control of the national forest reserves. Pinchot promoted private use (for a fee) under federal supervision. In 1907, Roosevelt designated 16 million acres (65,000 km2) of new national forests just minutes before a deadline.
In May 1908, Roosevelt sponsored the Conference of Governors held in the White House, with a focus on natural resources and their most efficient use. Roosevelt delivered the opening address: "Conservation as a National Duty".
In 1903 Roosevelt toured the Yosemite Valley with John Muir, who had a very different view of conservation, and tried to minimize commercial use of water resources and forests. Working through the Sierra Club he founded, Muir succeeded in 1905 in having Congress transfer the Mariposa Grove and Yosemite Valley to the federal government. While Muir wanted nature preserved for its own sake, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees."
Theodore Roosevelt's view of conservation remained dominant for decades. For example, the New Deal under Franklin D. Roosevelt authorized the building of many large-scale dams and water projects, as well as the expansion of the National Forest System to buy out sub-marginal farms. In 1937, the Pittman–Robertson Federal Aid in Wildlife Restoration Act was signed into law, providing funding for state agencies to carry out their conservation efforts.
Environmentalism
"Environmentalism" emerged on the national agenda in 1970, with Republican Richard Nixon playing a major role, especially with his creation of the Environmental Protection Agency. From 1962 to 1998, the grass roots movement founded 772 national organizations focused primarily on environmental protection or pollution abatement. Furthermore many other organizations adopted such goals in addition to their primary goal, such as the American Lung Association. Using a broad definition, Jason T. Carmichael, J. Craig Jenkins, and Robert J. Brulle identified over 6,000 national and regional organizations, plus another 20,000 or more at the local and state levels that were working on behalf of a multitude of environmental causes in the year 2000.
Fears about Agricultural Land Adequacy
According to historian Tim Lehman, concerns were first raised in the 20th century regarding the long-term adequacy of the nation's agricultural lands. At the federal level, studies were made and programs were proposed, and some launched, to preserve farmlands from conversion to other uses. An awareness of the need for agricultural conservation followed a history of agricultural abundance, as seen in the rapid settlement of western lands from the 1850s to the 1880s. The new theme emerged in the Progressive conservation movement, in Hugh Hammond Bennett's soil conservation crusade, and in the land utilization movement of the 1920s. The New Deal made land use planning a major national program. A land acquisition program, soil conservation districts, and county land use planning agreements all contained elements of federal agricultural land use planning, but none of these policies were entirely successful. Scarcity issues faded during the 1950s and 1960s as agricultural productivity soared. The publication of Rachel Carson's Silent Spring in 1962 energized the environmental movement and brought a new awareness of how industrialized agriculture misused the available land with dangerous chemicals. Decades of suburbanization, rapid national and global population growth, renewed worries about soil erosion, fears of oil and water shortages, and the sudden increase in farm exports beginning in 1972 all were worrisome threats to the long-term supply of good farmland. The Carter administration in the late 1970s supported initiatives like the National Agricultural Lands Survey, and liberals in Congress introduced legislation to control suburban sprawl. However, the Reagan administration and the Department of Agriculture were opposed to new regulations, and no major program was enacted.
Laissez-faire and the Sagebrush Rebellion
The success of Reagan in 1980 was facilitated by the rise of popular opposition to public lands reform and a return to laissez-faire ideology. For example, out west in the 1970s the Sagebrush Rebellion arose, demanding less environmental regulation. Conservatives drew on new organizational networks of think tanks such as The Heritage Foundation, as well as well-funded industry groups, the Republican Party state organizations, and new right-wing citizen-oriented grass-roots organizations. They deployed the traditional strategy based on the rights of owners to control their property; on the protection of mineral extraction rights; and on the right to hunt and recreate and to pursue happiness unencumbered by the federal government at the expense of resource conservation.
Reagan's top appointments in the environmental field were James G. Watt as Secretary of the Interior and Anne Gorsuch as head of the EPA. They tried to advance the Reagan agenda by slashing spending and lowering morale. Both proved incompetent at their jobs; they picked fights with friends and foes alike and soon made fools of themselves. Environmentalists seized on the opportunity and made Watt and Gorsuch the centerpiece of their campaigns of ridicule. Reagan realized his mistake and fired the two. He appointed a close friend and troubleshooter, William Clark, at Interior. Clark successfully turned off the spotlight and kept the peace. At EPA, Reagan appointed William Ruckelshaus, the EPA's first director and a committed environmentalist. He reversed Gorsuch's policies. Vice President George H. W. Bush typically kept close to Reagan on most issues, but in this area he announced that if elected he would be the nation's "environmental president." The long-run results of Reagan's two terms were to undermine laissez-faire rhetoric and to mobilize the membership, funding and momentum of the environmental movement.
Historiographical debates
William Cronon has criticized advocates for assuming that "wilderness" and "nature" have a reality beyond their creation in the human imagination. This has upset many environmentalists. Cronon writes, "wilderness serves as the unexamined foundation on which so many of the quasi-religious values of modern environmentalism rest." He argues that "to the extent that we live in an urban-industrial civilization but at the same time pretend to ourselves that our real home is in the wilderness, to just that extent we give ourselves permission to evade responsibility for the lives we actually lead."
Role of federal government
Interior Department
Most of the agencies dealing with conservation (before 1970) and environmentalism (since 1970) are based in the Interior Department, formed in 1849.
The National Park Service was created in 1916; it included Yellowstone National Park, which in 1872 had become the world's first national park. In 1956, the Fish and Wildlife Service became the manager of lands reserved for wildlife. The Grazing Service and the United States General Land Office were combined to create the Bureau of Land Management in 1946. In 1976 the Federal Land Policy and Management Act established the national policy of retaining public lands in federal ownership.
The department is responsible for the management and conservation of most federal lands and natural resources. It also administers programs relating to Native Americans, Alaska Natives, Native Hawaiians, territorial affairs, and insular areas of the United States, as well as programs related to historic preservation. As of mid-2004, the department managed 507 million acres (2,050,000 km2) of surface land, or about one-fifth of the land in the United States. It manages 476 dams and 348 reservoirs through the Bureau of Reclamation; national parks, monuments, and historical sites through the National Park Service; and 544 national wildlife refuges through the Fish and Wildlife Service.
Agriculture Department and Forestry
From the early 1900s to the present, there has been a fierce rivalry over control of forests between the Department of Agriculture and the Department of the Interior. From 1905 to the present the main forestry unit has been in the Department of Agriculture.
The concept of national forests was born from Theodore Roosevelt's conservation group, the Boone and Crockett Club, out of concerns about poaching in Yellowstone National Park beginning as early as 1875. In 1876, Congress formed the office of Special Agent in the Department of Agriculture to assess the quality and conditions of forests in the United States, and Franklin B. Hough was appointed its head. In 1881, the office was expanded into the newly formed Division of Forestry. The Forest Reserve Act of 1891 authorized withdrawing land from the public domain as forest reserves managed by the Department of the Interior. The Transfer Act of 1905, however, moved the management of forest reserves from the General Land Office of the Interior Department to the Bureau of Forestry in the Agriculture Department, renamed the United States Forest Service. Gifford Pinchot served as the first Chief Forester of the United States during the presidency of Theodore Roosevelt.
Significant federal legislation affecting the Forest Service includes the Weeks Act of 1911; the Taylor Grazing Act of 1934, P.L. 73-482; the Multiple Use – Sustained Yield Act of 1960, P.L. 86-517; the Wilderness Act, P.L. 88-577; the National Forest Management Act, P.L. 94-588; the National Environmental Policy Act, P.L. 91-190; the Cooperative Forestry Assistance Act, P.L. 95-313; and the Forest and Rangelands Renewable Resources Planning Act, P.L. 95-307.
Army Corps of Engineers and dam building
The Army Corps of Engineers is in charge of navigable waterways, and has built many of the major dams.
Tennessee Valley Authority
The Tennessee Valley Authority (TVA) is a federally owned electric utility covering all of Tennessee and portions of nearby states. The TVA was created in 1933 as a New Deal agency to build dams on the Tennessee River, providing flood control, electricity generation, fertilizer manufacturing, regional planning, and economic development to the Tennessee Valley. The region was a very poor part of Appalachia, out of contact with the modern industrial and agricultural economy. Unlike private utility companies, TVA was envisioned as a regional economic development agency that would work to help modernize the region's economy and society. Its chairman Arthur Morgan was a visionary who wanted a model for modernizing traditional society. Some New Dealers hoped it would be a model for other regions, but others strongly disagreed, and the president was undecided. Any hope of opening "Seven Little TVAs" across the country died when conservatives regained control of Congress in 1938 and ended liberal experimentation.
Environmental Protection Agency
The U.S. Environmental Protection Agency (EPA) is an independent federal agency created in 1970 by a reorganization plan put forward by President Nixon; it is part of the executive branch of the government, reports to the president, and was not created by an act of Congress. The primary mission of the EPA is to protect human health and safeguard the natural environment (air, water, and land) of the nation.
The EPA was established to combine into a single agency many of the existing federal government activities of research and development, monitoring, standard setting, compliance, and enforcement related to protection of the environment. Its most important role is to evaluate every environmental impact statement, which is required whenever a federal role is involved. The EPA thereby has the power to demand changes from most federal agencies to protect the environment according to its standards. In addition, the environmental impact statement allows a public role: private citizen watchdogs can and often do sue to tie up proposed non-government projects for years.
From 2000 to 2010 the budget held fairly steady at $7.6 to $8.4 billion (with no adjustment for inflation). In terms of objectives, 13% is budgeted for clean air and global climate change, 36% for clean and safe water, 24% for land preservation and restoration, 17% for healthy communities and ecosystems, and 11% for compliance and environmental stewardship. In 2008 it had a staff of about 18,000 people in headquarters and departmental or divisional offices, 10 regional offices, and over 25 laboratories located across the nation. More than half of the staff are engineers, scientists, and environmental protection specialists; the others include legal counsel, financial, public affairs, and computer specialists.
President Nixon, on July 9, 1970, told Congress of his plan to create the EPA by combining parts of three federal departments, three bureaus, three administrations, and many other offices into the new single, independent agency to be known as the Environmental Protection Agency. Congress had 60 days to reject the proposal, but opinion was favorable and the reorganization took place without legislation. On December 2, 1970, the EPA was officially established and began operation under director William Ruckelshaus. The EPA began by consolidating 6,550 employees from different agencies in several cabinet-level departments into a new agency with a $1.4 billion budget.
Kraft notes that despite its limited charter from 1970, over time the EPA has expanded its regulatory function and jousted with the forces of business and economic development. Kraft considers the next major transition in environmental policy to be the process of ensuring the "sustainability" of resources through a coalition of interests ranging from policymakers to business leaders, scholars, and individual citizens. At the turn of the 21st century, these often competing groups were wrestling with disparate environmental, economic, and social values.
Russell shows that from 1970 to 1993, the EPA devoted more of its resources to human health issues, notably cancer prevention, than to the protection of nonhuman species. The limited scope of environmental protection was due to a variety of reasons. An institutional culture favored human health issues because most employees were trained in this area. The emphasis on cancer came from the legal division's discovery that judges were more persuaded by arguments about the carcinogenicity of chemicals than by threats to nonhumans. The views of the agency leaders, who followed politically realistic courses, also played an important part in shaping the EPA's direction. Those supporting ecological issues acquired a new tool in the 1980s with the development of risk assessments so that advocates of ecological protection could use language framed by advocates of human health to protect the environment.
Complaints about federal management
Complaints about federal management of public lands constantly roil relations between public land users (ranchers, miners, researchers, off-road vehicle enthusiasts, hikers, campers, and conservation advocates) on the one hand and the agencies and environmental regulations on the other. Ranchers complain that grazing fees are too high and that grazing regulations are too onerous, while environmentalists complain that the opposite is true and that promised improvements to grazing on federal lands do not occur. Miners complain of restricted access to claims, or to lands to prospect. Researchers complain of the difficulty of getting research permits, only to encounter other obstacles in research, including uncooperative permit holders and, especially in archaeology, vandalized sites with key information destroyed. Off-road vehicle users want free access, but hikers, campers, and conservationists complain that grazing is not regulated enough, that some mineral leaseholders abuse other lands, and that off-road vehicles destroy the resource. Each complaint has a long history.
White House roles
Theodore Roosevelt presidency, 1901-1909
Conservation was a minor issue for most presidents, but Theodore Roosevelt carved out a leadership role that several successors followed.
Roosevelt was a prominent conservationist, putting the issue high on his national agenda. He changed the land by creating 50 wildlife refuges, 18 national monuments, and five national parks, and above all by publicizing conservation issues. Roosevelt's conservation efforts were aimed not just at environmental protection, but also at ensuring that society as a whole, rather than just select individuals or companies, benefited from the country's natural resources. His key adviser on conservation matters was Gifford Pinchot, the head of the Bureau of Forestry. Roosevelt increased Pinchot's power over environmental issues by transferring control over national forests from the Department of the Interior to the Bureau of Forestry, part of the Agriculture Department. Pinchot's agency was renamed the United States Forest Service, and Pinchot presided over the implementation of assertive conservationist policies in national forests. Under William Howard Taft, Pinchot had a heavily publicized dispute over environmental policy with Secretary of the Interior Richard A. Ballinger that led to Pinchot's dismissal and to Roosevelt's break with Taft in 1912.
Roosevelt relied on the Newlands Reclamation Act of 1902, which promoted federal construction of dams to irrigate small farms, and he placed 230 million acres (360,000 mi2 or 930,000 km2) under federal protection. In 1906, Congress passed the Antiquities Act, granting the president the power to create national monuments on federal lands. Roosevelt set aside more federal land, national parks, and nature preserves than all of his predecessors combined. Roosevelt established the Inland Waterways Commission to coordinate construction of water projects for both conservation and transportation purposes, and in 1908 he hosted the Conference of Governors. This was the first time the governors had ever met together, and the goal was to boost and coordinate support for conservation. Roosevelt then established the National Conservation Commission to take an inventory of the nation's natural resources.
Conference of Governors, 1908
To reach a broad national audience of state leaders, and to obtain heavy media coverage, President Roosevelt sponsored the first-ever Conference of Governors, held at the White House May 13–15, 1908. Pinchot, at that time Chief Forester of the U.S., was the primary mover of the conference, and a progressive conservationist who strongly believed in the scientific and efficient management of natural resources at the federal level. He was also a prime mover of the earlier Inland Waterways Commission, which had recommended such a meeting the previous October.
The focus of the conference was on natural resources and their proper use. Roosevelt delivered the opening address: "Conservation as a National Duty." Among those speaking were leading industrialists, such as James J. Hill, politicians, and resource experts. Andrew Carnegie, a leading philanthropist, was in attendance. The speeches emphasized both the nation's need to exploit renewable resources and the differing situations of the various states, which required different plans. This Conference was a seminal event in the history of conservationism; it brought the issue to public attention in a highly visible way. The next year saw two outgrowths of the Conference: the National Conservation Commission, which Roosevelt and Pinchot set up with representatives from the states and federal agencies, and the First National Conservation Commission, which Pinchot led as an assembly of private conservation interests.
Opposition
Roosevelt's policies faced opposition from both liberal environmental activists like John Muir and conservative proponents of laissez-faire like Senator Henry M. Teller of Colorado. While Muir, the founder of the Sierra Club, wanted nature preserved for the sake of pure beauty, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees." Teller and other opponents of conservation, meanwhile, believed that conservation would prevent the economic development of the West and feared the centralization of power in Washington. The backlash to Roosevelt's ambitious policies prevented further conservation efforts in the final years of Roosevelt's presidency and would later contribute to the Pinchot–Ballinger controversy during the Taft administration.
Franklin D. Roosevelt presidency, 1933-1945
Franklin D. Roosevelt had a lifelong interest in the environment and conservation, starting with his youthful interest in forestry on his family estate. Although he was never an outdoorsman or sportsman on the scale of his distant cousin Theodore Roosevelt, their presidential roles in conservation were comparable. As Governor of New York, Franklin ran the Temporary Emergency Relief Administration, a state-level system that became the model for his federal Civilian Conservation Corps, with 10,000 or more men building fire trails, combating soil erosion, and planting tree seedlings on marginal farmland in upstate New York. The governor worked closely with Harry Hopkins and in 1933 brought Hopkins to Washington to use the New York experience to shape the national programs of work relief.
Roosevelt's New Deal was active in expanding, funding, and promoting the National Park and National Forest systems. Their popularity soared, from three million visitors a year at the start of the decade to 15.5 million in 1939. Every state had its own state parks, and Roosevelt made sure that WPA and CCC projects were set up to upgrade them as well as the national systems.
From 1933 to 1942 the Civilian Conservation Corps (CCC) enrolled 3.4 million young men for six months' service. It built trails, planted two billion trees, and upgraded dirt roads. The CCC made permanent "improvements" on 118 million acres. A 1936 CCC press release claimed it "greatly increased the value of the forest and added to its usefulness to the public," while CCC Director Robert Fechner boasted in his 1939 annual report that the Corps had "constructively altered the landscape of the United States." Even more important to the New Deal's ambitions, it clothed, fed, and housed unemployed urban youth, and gave them medical, dental, and eye care, as well as vigorous outdoor exercise, that their poverty-stricken families could not provide. Furthermore, the parents received $25 a month while their sons were away. Likewise Arno B. Cammerer, the energetic head of the National Park system, realized that helping solve the unemployment crisis was Roosevelt's main goal. The conservation projects of the Park and Forest services were dramatically expanded.
According to Richard Lowitt, the New Deal Interior Department led by Secretary Harold L. Ickes, emphasized economic benefits from hydroelectric power. The Department sought to build "the foundations for a more stable economy in the West that would expand enormously and bring in its wake a rising standard of living, increased population, and a greater measure of equality with other sections of the country". The New Deal ignored the fears of the upper class purists who realized their single goal of preserving wilderness instead of "improving" it was being undermined.
Wartime and postwar: 1942-1953
When unemployment practically ended in 1942, many of the New Deal agencies closed down permanently, including the WPA and CCC. New conservation programs were put on hold unless they contributed to the war effort. The Army Corps of Engineers turned to military construction and took charge of building the atomic bomb. The TVA played a major role by supplying the electricity used at Oak Ridge to produce the enriched uranium for the bomb dropped on Hiroshima. Vice President Harry S. Truman took over when Roosevelt died in April 1945. Truman had never enjoyed his youth on the farm and had no interest in the outdoors, nor did his Interior Secretary Oscar L. Chapman. However, both Truman and Chapman were keenly aware of the patronage advantages to the Democratic Party in large dam projects. They sponsored a major expansion with no concern for negative environmental impact. After the war ended, the Corps of Engineers built 400 dams and 3,400 flood control projects, while TVA added 4 dams and the Bureau of Reclamation added 41.
Eisenhower presidency, 1953-1961
Water projects continued at a fast pace, with 11,000 new dams in the 1950s and 19,000 in the 1960s. The "Big Dam Era" was made possible by very expensive combinations of high dams, powerful turbines, and high-tension long distance transmission lines whereby electrical utilities brought power to customers hundreds of miles away. The era began in the 1930s and was practically over by 1970.
Meanwhile the environmental movement was starting to form. Aldo Leopold published a highly influential book in 1949, A Sand County Almanac, which helped define environmental ethics. It eventually sold more than two million copies.
In terms of ideology, liberals (and the Democratic Party) wanted national control of natural resources—the level at which organized ideological pressures were effective. Conservatives (and the Republican Party) wanted state or local control, whereby the financial benefit to local businesses and jobs could be decisive. In a debate going back to the early 20th century, preservationists wanted to protect the inherent natural beauty of the national parks, whereas economic maximizers wanted to build dams and divert water flows. Eisenhower articulated the conservative position in December 1953, declaring that conservation was not about "locking up and putting resources beyond the possibility of wastage or usage," but instead involved "the intelligent use of all the resources we have, for the welfare and benefit of all the American people." Liberals and environmentalists forced the resignation of Secretary of the Interior Douglas McKay in 1956. He was a businessman with little interest in the environment who allegedly promoted "giveaways" to mining companies regardless of environmental damage.
Eisenhower's personal activity on environmental issues came in foreign policy. He supported the UN convention of 1958 that provided a strong foundation for international accords governing the use of the world's high seas, especially regarding fishing interests. Eisenhower also promoted the peaceful use of atomic energy for the production of electricity, with strong controls against diversion into nuclear weapons. However, there was little attention to nuclear waste.
Kennedy and Johnson presidencies, 1961-1968
John F. Kennedy was a city boy like his constituents. He did not hunt or fish, hike or explore, nor seek out the wilderness. He did greatly enjoy the ocean and the seashore but otherwise the environment and environmentalism bored him.
The 1962 publication of Silent Spring by Rachel Carson brought new attention to environmentalism and the danger that pollution and pesticide poisoning (i.e., DDT) posed to public health.
When Vice President Lyndon B. Johnson succeeded the assassinated president in November 1963, he retained Kennedy's staunchly pro-environment Secretary of the Interior, Stewart Udall. Johnson helped pass a series of bills designed to protect the environment. He signed into law the Clean Air Act of 1963, which had been proposed by Kennedy. The Clean Air Act set emission standards for stationary emitters of air pollutants and directed federal funding to air quality research. In 1965, the act was amended by the Motor Vehicle Air Pollution Control Act, which directed the federal government to establish and enforce national standards for controlling the emission of pollutants from new motor vehicles and engines. In 1967, Johnson and Senator Edmund Muskie led passage of the Air Quality Act of 1967, which increased federal subsidies for state and local pollution control programs.
During his time as President, Johnson signed over 300 conservation measures into law, forming the legal basis of the modern environmental movement. In September 1964, he signed a law establishing the Land and Water Conservation Fund, which aids the purchase of land used for federal and state parks. That same month, Johnson signed the Wilderness Act, which established the National Wilderness Preservation System, saving 9.1 million acres of forestland from industrial development.
In 1965, Muskie led passage of the Water Quality Act of 1965, though conservatives stripped a provision of the act that would have given the federal government the authority to set clean water standards. The Endangered Species Preservation Act of 1966, the first piece of comprehensive endangered species legislation, authorized the Secretary of the Interior to list native species of fish and wildlife as endangered and to acquire endangered species habitat for inclusion in the National Wildlife Refuge System. The Wild and Scenic Rivers Act of 1968 established the National Wild and Scenic Rivers System; the system includes more than 220 rivers and covers more than 13,400 miles of rivers and streams. The National Trails System Act of 1968 created a nationwide system of scenic and recreational trails.
As First Lady and trusted presidential confidant, Lady Bird Johnson helped establish the public environmental movement in the 1960s. She worked to beautify Washington, D.C. by planting thousands of flowers, set up the White House Natural Beauty Conference, and lobbied Congress for the president's full range of environmental initiatives. In 1965, she took the lead in calling for passage of the Highway Beautification Act. The act called for control of outdoor advertising, including removal of certain types of signs, along the nation's growing Interstate Highway System and the existing federal-aid primary highway system. It also required certain junkyards along Interstate or primary highways to be removed or screened, and encouraged scenic enhancement and roadside development. According to Secretary of the Interior Stewart Udall, she single-handedly "influenced the president to demand, and support, more far-sighted conservation legislation."
Nixon presidency, 1969-1974
Time magazine called Barry Commoner the "Paul Revere of ecology" for his work on the threats to life from the environmental consequences of fallout from nuclear tests and other pollutants of the water, soil, and air. Time's cover on February 2, 1970, represented a "call to arms", to mobilize public opinion by appeals to fears of chemical pollution of food and water. On April 22, 1970, the first Earth Day took place, which saw 20 million Americans demonstrating peacefully in favor of environmental reform, accompanied by special events held at university campuses across the nation. The huge response to Earth Day convinced Richard Nixon that he could expand his political base by championing the new environmental movement. His instincts were right: there was especially strong popular support for the Environmental Protection Agency (EPA) and the Clean Air Act of 1970. Polls showed support was high among men and women of all ages, and among conservatives as well as liberals. The media led the stampede. A survey of 21,000 editorials in 5 major newspapers from October 1970 to September 1971 showed that environmental topics were the number one social issue. The top concerns were water quality, land use, air quality and waste disposal.
Nixon came late to the conservation movement. Environmental policy had not been a significant issue in the 1968 election, and the media rarely asked about the subject. Nixon broke the silence by highlighting the environment in his State of the Union speech in January 1970: "The great question of the seventies is: shall we surrender to our surroundings, or shall we make our peace with nature and begin to make reparations for the damage we have done to our air, to our land, and to our water? Restoring nature to its natural state is a cause beyond party and beyond factions. It has become a common cause of all the people of this country. It is a cause of particular concern to young Americans, because they more than we will reap the grim consequences of our failure to act on programs which are needed now if we are to prevent disaster later. Clean air, clean water, open spaces—these should once again be the birthright of every American. If we act now, they can be."
The president then introduced 36 environmental initiatives, and pushed most of them through. He strongly supported advisors who deeply believed in environmentalism, especially Russell E. Train, John Ehrlichman, William Ruckelshaus, and John C. Whitaker.
In June 1970 Nixon announced the formation of the Environmental Protection Agency (EPA), using a reorganization plan that did not require new legislation from Congress. Other breakthrough initiatives supported by Nixon included the Clean Air Act of 1970 and the Occupational Safety and Health Administration (OSHA). His National Environmental Policy Act required environmental impact statements for many federal projects. Furthermore, he put protection of the global environment on the international diplomatic agenda for the first time in world history. Then Nixon reversed himself: in 1972 he vetoed the Clean Water Act, objecting not to the policy goals of the legislation but to the amount of money to be spent on them, which he deemed excessive. After Congress overrode his veto, Nixon impounded the funds he deemed unjustifiable.
Nixon's achievements
Political scientists Byron W. Daynes and Glen Sussman identify six major achievements for which they give credit to Nixon.
He broadened the attention span of the Republican Party to include environmental issues, for the first time since the days of Theodore Roosevelt. He thereby "dislodged the Democratic Party from its position of dominance over the environment."
He used presidential powers, and promoted legislation in Congress to create a permanent political structure, most notably the Environmental Protection Agency, as well as the White House Council on Environmental Quality, the National Oceanic and Atmospheric Administration, and others.
He helped ensure that Congress build a permanent structure supportive of environmentalism, especially the National Environmental Policy Act of 1970, which enjoined all federal agencies to help protect the environment.
Nixon appointed a series of strong environmentalists in highly visible positions, most notably William Ruckelshaus, Russell Train, Russell W. Peterson, and John C. Whitaker (who was a senior White House aide for four years, becoming Undersecretary of the Interior in 1973).
Nixon initiated worldwide diplomatic attention to environmental issues, working especially with NATO.
Finally, they state: "Nixon did not have to be personally committed to the environment to become one of the most successful presidents in promoting environmental priorities."
Historians pose a strange paradox regarding Nixon. In 1970-1971 he unexpectedly emerged as a great environmentalist who deserves credit for several of the most important environmental laws in American history. By 1972, however, he suddenly moved far to the right, despising environmentalists as left-wing fanatics who would bankrupt the economy.
For subsequent presidents see Environmental policy of the United States.
Organizations
There are a multitude of environmental organizations—over 160 are covered at the List of environmental and conservation organizations in the United States. However the "Group of Ten" (or "Big Green") have been preeminent since the late 20th century: Sierra Club (founded 1892); Audubon (founded 1905); National Parks Conservation Association (1919); Izaak Walton League (1922); National Wildlife Federation (1936); The Wilderness Society (1937); Environmental Defense Fund (1967); Friends of the Earth (1969); Natural Resources Defense Council (1970); and Earthjustice (1971).
Stopping the Echo Park Dam
A critical transition took place after World War II that turned these groups into activist organizations working to save the wilderness. The clientele of the clubs had been an upper-class, conservative, Republican audience with close ties to big business. They enjoyed expensive and exotic vacations at uncrowded wilderness sites; mountain climbing was popular. The older leaders retired and were replaced by men with a mission, especially Howard Zahniser at the Wilderness Society in 1945 and David Brower at the Sierra Club in 1952. They were dismayed at the aggressive plans put forward by the "Iron Triangle" that controlled conservation policy: the informal backstage coalition of key members of Congress, leaders of the major federal agencies, and local businessmen keen on speeding up economic development by using natural resources. After a decade of depression and war, the nation was ready to move ahead. The Bureau of Reclamation took the lead with an elaborate plan to develop dams on the Colorado River for the benefit of the economies of Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming. The centerpiece would be a huge new Echo Park Dam inside Dinosaur National Monument. Zahniser and Brower, working with 30 other groups, launched recruiting drives to bring in middle-class members with idealistic goals to fight the destruction of the wilderness at Echo Park. They raised money for staff, mobilized local branches, flooded the market with glossy magazines featuring nature photography by the likes of Ansel Adams, and petitioned local, state, and national politicians. They convinced Congress to delete Echo Park Dam from the Colorado River Storage Project in 1955, but had to agree to an alternative dam site at Glen Canyon. They went on to oppose other grandiose projects. To make their goals permanent, Zahniser drafted an ambitious "Wilderness Act" designed to permanently protect 50 million acres of wilderness from commercial activities such as mining or hydroelectric power dams. In the end he achieved a Wilderness Act in 1964 that protected 9 million acres and set a national standard, while mobilizing grass-roots voters and setting a model of activism for other national and local organizations to emulate in challenging the Iron Triangle.
The Sierra Club
The Sierra Club is a major environmental organization. It was founded in May 1892 by preservationist John Muir (1838–1914), who became its first president, serving for 20 years. The Club did not at first engage in lobbying; instead it provided its upscale clientele with outdoor adventures, such as guided tours, wilderness camping, and mountain climbing. Reform-minded activists known as the "John Muir Sierrans" wanted a more aggressive role in protecting the environment. They brought in the hyperenergetic and controversial David Brower (1912–2000) as Executive Director from 1952 to 1969. The Club then became the first large-scale environmental preservation organization in the world, best known for systematic lobbying of politicians to promote environmentalist policies. Major activities include promoting sustainable energy and mitigating global warming, as well as opposition to the use of coal, hydropower, and nuclear power. The organization takes strong positions on issues that sometimes create controversy, criticism, or opposition, either internally or externally or both. The club is known for its political endorsements, generally supporting liberal and progressive candidates in elections.
Under Brower's leadership, the Club's membership grew rapidly, from 7,000 in 1952 to 70,000 in 1969, making it the largest and most prominent conservation organization. Building on the biennial Wilderness Conferences, which the Club launched in 1949 together with The Wilderness Society, Brower helped win passage of the Wilderness Act in 1964. Brower and the Sierra Club also led a major battle to stop the Bureau of Reclamation from building two dams that would flood portions of the Grand Canyon. Brower was keen on publicity and sponsored numerous heavily illustrated books to promote knowledge and admiration of the nation's wilderness. On the other hand, powerful members of Congress fought for new high dams to use water power to promote the local economy, regardless of the flooding they caused to wilderness areas. Their leader in Congress was Wayne N. Aspinall, the Democrat from western Colorado who dominated the House Committee on Interior and Insular Affairs as chairman from 1959 to 1973. Brower complained that the environmental movement had seen "dream after dream dashed on the stony continents of Wayne Aspinall." The congressman shot back that the environmentalists were "over-indulged zealots" and "aristocrats" to whom "balance means nothing."
The Wilderness Society
The Wilderness Society is a non-profit conservation organization founded in 1937 by Bob Marshall (1901–1939), who largely funded its startup. It is dedicated to protecting natural areas and federal public lands in the United States and advocates for the designation of federal wilderness areas and other protective designations, such as national monuments. It calls for balanced uses of public lands and advocates for federal politicians to enact various land conservation and balanced land use proposals. The Society specializes in issues involving lands under the management of federal agencies, including national parks, national forests, national wildlife refuges, and areas overseen by the Bureau of Land Management. In the early 21st century, the Society has been active in fighting political efforts to reduce protection for America's roadless and undeveloped lands and wildlife. It was instrumental in the passage of the 1964 Wilderness Act, whose primary drafter was Howard Zahniser (1906–1964), executive secretary of the Wilderness Society from 1945 until his death. The Wilderness Act led to the creation of the National Wilderness Preservation System, which protects 109 million acres of U.S. public wildlands.
Activism
Ecocentrics
According to Keith Makoto Woodhouse, the ecocentric movement is controversial and internally divided. It rejects the anthropocentric belief that humans are intrinsically superior to other forms of life, and have the right to rule over and manipulate nature.
The ecocentrics focus largely on wilderness preservation. They are highly controversial in their use of direct action and in their reluctance to engage in standard political activity. For example, Earth First! activists used tree spiking, driving long spikes into trees that could wreck sawmill blades and injure workers. "Ecotage" is the crime of sabotage on behalf of the environment.
Environmental justice
Environmental justice, or eco-justice, is a social movement to address environmental injustice, which occurs when poor or marginalized communities are harmed by hazardous waste, resource extraction, and other land uses from which they do not benefit.
The movement began in the United States in the 1980s. It was heavily influenced by the American civil rights movement and focused on environmental racism within rich countries. The movement was later expanded to consider gender, international environmental injustice, and inequalities within marginalized groups. As the movement achieved some success in rich countries, environmental burdens were shifted to the Global South (for example, through extractivism or the global waste trade). The movement for environmental justice has thus become more global, with some of its aims now being articulated by the United Nations. The movement overlaps with movements for Indigenous land rights and for the human right to a healthy environment.
The goal of the environmental justice movement is to achieve agency for marginalized communities in making environmental decisions that affect their lives. The global environmental justice movement arises from local environmental conflicts in which environmental defenders frequently confront multi-national corporations in resource extraction or other industries. Local outcomes of these conflicts are increasingly influenced by trans-national environmental justice networks. Environmental justice scholars have produced a large interdisciplinary body of social science literature that includes contributions to political ecology, environmental law, and theories of sustainability.
Environmentalist lawsuits blocking clean energy projects
An editorial in The Washington Post on April 6, 2024 discusses the challenges that lawsuits by environmental activists around the United States pose to clean energy projects. One example is the Cardinal-Hickory Creek high-voltage transmission line between Iowa and Wisconsin, which would connect over 160 renewable energy facilities producing 25 gigawatts of green power. It is facing a temporary halt due to a lawsuit by environmental groups condemning its impact on the Upper Mississippi River National Wildlife and Fish Refuge. The editorial argues this is just one example of the conflict between environmental protection and the need for new infrastructure to support the clean energy transition. Solar, wind, and carbon capture projects often face opposition from conservation groups. The permitting process, established by laws like the National Environmental Policy Act (NEPA), generally leans against developers and allows virtually anyone to challenge projects in court on environmental grounds. This leads to lengthy delays and increased costs for clean energy projects: researchers found that nearly two-thirds of solar energy projects, 31% of transmission lines, and 38% of wind energy projects that completed federal environmental impact studies between 2010 and 2018 were litigated. The editorial says that many environmental concerns are valid, but that the permitting process does not reasonably weigh the costs and benefits of building essential clean energy infrastructure; it needs to be streamlined to accelerate the clean power expansion required to meet emissions reduction goals. The editorial concludes that Congress should reform the permitting process and preempt state and local rules that make it harder to build high-priority clean energy projects.
Leadership
For articles on people in the movement, see Category:American environmentalists. Notable leaders include:
Ansel Adams
Bruce Babbitt
David Brower
Rachel Carson
Barry Commoner
William Cronon
William O. Douglas
Al Gore
Harold L. Ickes
Lady Bird Johnson
Aldo Leopold
George Perkins Marsh
Robert Marshall
John Muir
Gifford Pinchot
Franklin D. Roosevelt
Theodore Roosevelt
William Ruckelshaus
Dorceta Taylor
Henry David Thoreau
Russell E. Train
Stewart Udall
Howard Zahniser
See also
Conservation in the United States, history of activism before 1960
Forestry
List of national forests of the United States
History of the lumber industry in the United States
Timeline of history of environmentalism, on organized efforts
George Perkins Marsh Prize for best book in environmental history
Environmental movement in the United States, after 1960
Prairie restoration
Environmental policy of the United States
United States Bureau of Reclamation on water policy
United States Environmental Protection Agency
United States environmental law
Pittman–Robertson Federal Aid in Wildlife Restoration Act, of 1937
Sagebrush Rebellion, opposition in the Reagan Era
Environmental issues in the United States, current issues
List of environmental issues
Environmental justice
Environmental protection
Environmental science
Environmental education in the United States
Environmental racism in the United States
Grassroots environmental activism in the United States–Mexico borderlands
Green New Deal
Climate change in the United States
Climate change and agriculture in the United States
Holocene extinction#Americas, extinction of species caused by human action
Native American use of fire in ecosystems
Environmental history, global perspective
Rural American history
:Category:Environmental non-fiction books
Notes
Sources
Further reading
General
Allaby, Michael, and Chris Park, eds. A dictionary of environment and conservation (Oxford University Press, 2013), with a British emphasis.
Allitt, Patrick. A Climate of Crisis: America in the Age of Environmentalism (2014), wide-ranging scholarly history since 1950s blurb
Allosso, Dan. American Environmental History (2nd edition, parts one and two, 2017), a moralistic basic survey; well illustrated.
Andrews, Richard N.L., Managing the Environment, Managing Ourselves: A History of American Environmental Policy (Yale UP, 1999)
Bates, J. Leonard. "Fulfilling American Democracy: The Conservation Movement, 1907 to 1921", Mississippi Valley Historical Review (1957) 44#1 pp. 29–57. in JSTOR
Becher, Anne. American environmental leaders: From colonial times to the present (2 vol. ABC-CLIO, 2000) 320 brief biographies; vol 1 online
Black, Brian C., and Donna L. Lybecker. Great Debates in American Environmental History (2 vol. Greenwood, 2008), covers 150 topics in encyclopedic fashion with pro and con arguments. online book review
Block, Walter. "Environmentalism and economic freedom: the case for private property rights." Journal of Business Ethics (1998): pp. 1887-1899. argues for laissez-faire policies.
Browning, Judkin, and Timothy Silver. An Environmental History of the Civil War (U of North Carolina Press, 2020). online; see also online review of this book
Brulle, Robert J. "Politics and the Environment." Handbook of politics: State and society in global perspective (2010): 385-406. online
Burch, Jr., John R. Water Rights and the Environment in the United States (ABC-CLIO 2015), a comprehensive documentary and reference guide to historical water issues.
Carmichael, Jason T., J. Craig Jenkins, and Robert J. Brulle. "Building environmentalism: The founding of environmental movement organizations in the United States, 1900–2000." Sociological Quarterly 53.3 (2012): 422-453 online.
Cohen, Michael P. The History of the Sierra Club, 1892-1970 (1988) online
Cox, Thomas R., et al. This well-wooded land: Americans and their forests from colonial times to the present (1985) online
Cox, Thomas R. "Americans and their forests: Romanticism, progress, and science in the late nineteenth century." Journal of Forest History 29.4 (1985): 156-168. online
Dauvergne, Peter. The A to Z of Environmentalism (Scarecrow, 2009), worldwide coverage; online
Davis, Richard C. Encyclopedia of American forest and conservation history (1983), 871pp; vol 1 online and vol 2 online. See online review of this book
Decker, Jefferson. The Other Rights Revolution: Conservative Lawyers and the Remaking of American Government (Oxford UP, 2016), legal opponents of environmentalism; ch. 4 on Sagebrush Rebellion online
Dewey, Scott. "Working-Class Environmentalism in America" Oxford Research Encyclopedia (2019) online
Dewey, Scott. "Don't Breathe the Air": Air Pollution and U.S. Environmental Politics, 1945-1970 (Texas A&M UP, 2000).
Drake, Brian Allen, ed. The Blue, the Gray, and the Green: Toward an Environmental History of the Civil War (U of Georgia Press, 2015) online
Fiege, Mark. The Republic of Nature: An Environmental History of the United States (2022) online, scholar looks at environment's role in nine famous events, such as the Civil War, transcontinental railroad and segregation.
Golze, Alfred R. Reclamation in the United States (2nd ed. 1961) online
Hay, Peter, ed. Main currents in western environmental thought (Indiana UP, 2002). online
Hays, Samuel P. Conservation and the Gospel of Efficiency: The Progressive Conservation Movement 1890–1920 (Harvard UP, 1959), influential pioneer study online
Hays, Samuel P. Beauty, Health, and Permanence: Environmental Politics in the United States, 1955–1985 (1987), a standard scholarly history;online with new preface
Hays, Samuel P. A History of Environmental Politics since 1945 (2000), online a short survey
Johnson, Erik W., and Scott Frickel. "Ecological Threat and the Founding of U.S. National Environmental Movement Organizations, 1962–1998," Social Problems 58 (2011), 305–29. online
Krech III, Shepard. The ecological Indian : myth and history (1999) controversial among experts. online
Krech III, Shepard. "Reflections on conservation, sustainability, and environmentalism in indigenous North America." American anthropologist 107.1 (2005): 78-86.
Lehman, Tim. Public Values, Private Lands: American Farmland Preservation Policy, 1933-1985 (U of North Carolina Press, 1995)
McCright, Aaron M., Chenyang Xiao, and Riley E. Dunlap. "Political polarization on support for government spending on environmental protection in the USA, 1974–2012." Social science research 48 (2014): 251-260. online
McGurty, Eileen Maura. "Warren County, NC, and the emergence of the environmental justice movement: Unlikely coalitions and shared meanings in local collective action." Society & Natural Resources 13.4 (2000): 373-387. DOI:10.1080/089419200279027
Magoc, Chris J. Chronology of Americans and the Environment (2011)
Mauch, Christof, and Thomas Zeller, eds. Rivers in history: perspectives on waterways in Europe and North America (U of Pittsburgh Press, 2008).
Melosi, Martin V. Pollution & Reform in American Cities, 1870-1930 (1980).
Melosi, Martin V. Coping with Abundance: Energy and Environment in Industrial America (Temple UP, 1985)
Melosi, Martin V. Effluent America: Cities, Industry, Energy, and the Environment (2001) online
Melosi, Martin V. Garbage in the Cities: Refuse Reform and the Environment (U of Pittsburgh Press. 2004).
Melosi, Martin V. Precious Commodity : Providing Water for America's Cities (U of Pittsburgh Press, 2011)
Merchant, Carolyn. American environmental history: An introduction (Columbia UP, 2007), a slightly revised 2nd edition; the first edition was published as The Columbia guide to American environmental history (Columbia UP, 2002). online 2007 edition
Miller, Char. The Atlas of U.S. and Canadian Environmental History (2012)
Nash, Roderick. The Rights of Nature: A History of Environmental Ethics (U of Wisconsin Press, 1989)
Nash, Roderick. Wilderness and the American Mind, (4th ed. 2001), a standard intellectual history of the concept of wilderness
Paehlke, Robert, ed. Conservation and environmentalism: an encyclopedia (Garland, 1995). online
Pyne, Stephen. Fire in America: A Cultural History of Wildland and Rural Fire (Princeton UP, 1982). online
Rosier, Paul C. Environmental Justice in North America (Routledge, 2024) online book review
Rome, Adam. Bulldozer in the Countryside: Suburban Sprawl and the Rise of American Environmentalism (2001) online
Rothman, Hal K. The Greening of a Nation? Environmentalism in the United States since 1945 (Harcourt Brace, 1998).
Ryder, Andrew. "Liberal economics and the rise of laissez-faire ecology," Economic Affairs (January 2010) doi.org/10.1111/j.1468-0270.2010.02008.x
Sale, Kirkpatrick. The Green Revolution: The American Environmental Movement, 1962–1999 (Hill & Wang, 1993) online
Sandler, Ronald, and Phaedra C. Pezzullo, eds. Environmental Justice and Environmentalism: The Social Justice Challenge to the Environmental Movement (MIT Press, 2007)
Scheffer, Victor B. The Shaping of Environmentalism in America (1991).
Steinberg, Ted. Down to Earth: Nature's Role in American History (Oxford UP, 2002)
Stradling, David. Smokestacks and Progressives: Environmentalists, Engineers, and Air Quality in America, 1881-1951 (Johns Hopkins UP, 1999).
Strong, Douglas H. Dreamers & Defenders: American Conservationists. (1988) biographical studies of the major leaders
Taylor, Dorceta E. The Rise of the American Conservation Movement: Power, Privilege, and Environmental Protection (Duke UP, 2016) online
Turner, James Morton. "'The Specter of Environmentalism': Wilderness, Environmental Politics, and the Evolution of the New Right." Journal of American History 96.1 (2009): 123–47 online
Unger, Nancy C., Beyond Nature's Housekeepers: American Women in Environmental History. (Oxford UP, 2012)
Whitney, Gordon G. From Coastal Wilderness to Fruited Plain: A History of Environmental Change in Temperate North America from 1500 to the Present (1994)
Williams, Michael. Americans and Their Forests: A Historical Geography (Cambridge UP, 1989), a major scholarly study
Williams, Michael. "Clearing the United States forests: pivotal years 1810–1860," Journal of Historical Geography 8#1 (1982) 12–28. online
Woodhouse, Keith Makoto. The Ecocentrists: A History of Radical Environmentalism (2018)
Wyss, Robert. The Man Who Built the Sierra Club: A Life of David Brower (Columbia UP, 2016).
Presidential and federal government studies
Black, Megan. The Global Interior: Mineral Frontiers and American Power (Harvard UP, 2018).
Bryce, Emma. "America's Greenest Presidents," New York Times, Sept 20, 2012; a poll of scholars ranks Theodore Roosevelt as #1 followed by Nixon, Carter, Obama, Jefferson, Ford, FDR, and Clinton online
Blumm, Michael C. "The Nation's First Forester-in-Chief: The Overlooked Role of FDR and the Environment." Journal of Land Use & Environmental Law 33 (2017): 25–60. A review of Brinkley (2016). online
Bureau of Reclamation. "Bureau of Reclamation: A Very Brief History" (2024) online
Cannon, Jonathan, and Jonathan Riehl. "Presidential greenspeak: How presidents talk about the environment and what it means." Stanford Environmental Law Journal 23 (2004): 195–272. online
Cawley, R. M. Federal land, western anger: The Sagebrush Rebellion and environmental politics (UP Kansas, 1993). online
Clements, Kendrick A. "Herbert Hoover and conservation, 1921-33." American Historical Review 89.1 (1984): 67-88. online
Coodley, Gregg, and David Sarasohn. The Green Years, 1964–1976: When Democrats and Republicans United to Repair the Earth (UP of Kansas, 2021) online
Cutright, Paul Russell. Theodore Roosevelt the Naturalist (1956) online
Cutright, Paul Russell. Theodore Roosevelt: The Making of a Conservationist (U of Illinois Press, 1985) online
Engelbert, Ernest A. "Political Parties and Natural Resources Policies-An Historical Evaluation, 1790-1950." Natural Resources Journal 1 (1961): 226+ online
Flippen, J. Brooks. "Conservative Conservationist: Russell E. Train and the Emergence of American Environmentalism" (LSU Press, 2006)
Gates. Paul W. History of Public Land Law Development (1968) a major scholarly history online
Graham Jr., Otis L. Presidents and the American Environment (UP of Kansas, 2015) online
King, Judson. The Conservation Fight, From Theodore Roosevelt to the Tennessee Valley Authority (2009)
Klyza, Christopher McGrory. "Power, partisanship, and contingency: the president and US environmental policy." in Handbook of US Environmental Policy (Edward Elgar, 2020).
Klyza, Christopher McGrory, and David J. Sousa. American environmental policy (2nd ed. MIT Press, 2013). online
Koppes, Clayton R. "Environmental policy and American liberalism: the Department of the Interior, 1933–1953." Environmental Review 7.1 (1983): 17-53.
Kotlowski, Dean J. "Richard Nixon and the Origins of Affirmative Action." The Historian (1998) 60#3 pp. 523 ff.
Kotlowski, Dean J. "Deeds Versus Words: Richard Nixon and Civil Rights Policy." New England Journal of History 1999–2000 56(2–3): 122–144.
Kraft, Michael E. "U.S. Environmental Policy and Politics: From the 1960s to the 1990s" Journal of Policy History (2000) 12#1: 17-42. doi:10.1353/jph.2000.0006
Kraft, Michael E. U.S. Environmental Policy and Politics (6th ed. Pearson, 2015) excerpt
Landy, Marc K. et al. The Environmental Protection Agency: From Nixon to Clinton (2nd ed. Oxford UP, 1994)
Layzer, Judith A. Open for Business : Conservatives' Opposition to Environmental Regulation (2012) online
Lindstrom, Matthew J. ed. Encyclopedia of the U.S. Government and the Environment (2 vol ABC-CLIO, 2010), 950pp
Macekura, Stephen. "The limits of the global community: the Nixon administration and global environmental politics." Cold War History 11.4 (2011): 489–518.
Melosi, Martin V. "Environmental Policy" in A Companion to Lyndon B. Johnson, ed. by Mitchell B. Lerner. (Blackwell, 2012) pp. 187–209.
Melosi, Martin V. "Lyndon Johnson and Environmental Policy,' in Robert Divine, ed., The Johnson Years, Volume Two: Vietnam, The Environment and Science (U of Kansas Press, 1987), pp.113–149
Miller, Char. Gifford Pinchot and the Making of Modern Environmentalism (2001)
Peterson, Tarla Rai, ed. Green Talk in the White House: The Rhetorical Presidency Encounters Ecology (Texas A&M UP, 2004) excerpt
Phillips, Sarah T. This Land, This Nation: Conservation, Rural America, and the New Deal (2007)
Pinkett, Harold T. Gifford Pinchot: Private and Public Forester (U of Illinois Press, 1970).
Shallat, Todd. Structures in the stream: Water, science, and the rise of the US Army Corps of Engineers (University of Texas Press, 2010).
Short, C. Brant. Ronald Reagan and the Public Lands: America's Conservation Debate (1989).
Smith, Frank E. The Politics of Conservation (1966), focus on federal water issues and dams, especially TVA; see online review
Soden, Dennis, ed. The Environmental Presidency (SUNY, 1999) online
Steen, Harold K. The US forest service: A centennial history (U of Washington Press, 2013). online
Stine, Jeffrey K. "Natural Resources and Environmental Policy." in The Reagan Presidency: Pragmatic Conservatism and Its Legacies, ed. by W. Elliot Brownlee and Hugh David Graham (Kansas UP, 2003) pp. 233–256.
Sussman, Glen, and Byron W. Daynes. "Spanning the century: Theodore Roosevelt, Franklin Roosevelt, Richard Nixon, Bill Clinton, and the environment." White House Studies 4.3 (2004): 337-355. online
Swain, Donald C. National Conservation Policy: Federal Conservation Policy, 1921-1933 (U of California Press, 1963) online
Utley, Robert M. and Barry Mackintosh; The Department of Everything Else: Highlights of Interior History (Dept. of the Interior, 1989) online
Woolner, David, and H. Henderson, eds. FDR and the Environment (Springer, 2015) online.
Regions
Brosnan, Kathleen A. et al. eds. City of Lake and Prairie: Chicago's Environmental History (U of Pittsburgh Press, 2020) online
Castaneda, Christopher J., and Lee M. A. Simpson, eds. River city and valley life: an environmental history of the Sacramento region (U of Pittsburgh Press, 2013) in California; online
Cawley, R. McGreggor. Federal Land, Western Anger: The Sagebrush Rebellion and Environmental Politics (1993), on conservatives
Cowdrey, Albert E. This Land, This South: An Environmental History (UP of Kentucky, 1995). online
Cronon, William, Changes in the Land: Indians, Colonists and the Ecology of New England (Hill and Wang, 1983)
Cronon, William, Nature's Metropolis: Chicago and the Great West (W.W. Norton, 1991); influential classic see online commentary
Cumbler, John T. Reasonable use: The people, the environment, and the state. New England 1790-1930. (Oxford UP, 2001).
Cumbler, John T. Northeast and Midwest United States: An Environmental History (ABC-CLIO, 2005) online
Cunfer, Geoff, and Bill Waiser, eds. Bison and people on the North American Great Plains: A deep environmental history (Texas A&M UP, 2016) online.
Dant, Sara. Losing Eden: An Environmental History of the American West. (U of Nebraska Press, 2023). online, also see online book review
Davis, D. E., ed. Southern United States: An Environmental History (ABC-CLIO, 2006) online
Deslatte, Aaron. "Florida's growth management experience: From top-down direction to Laissez-Faire land use." in The Palgrave Handbook of Sustainability (2018): 739-755 online.
Flores, Dan. The natural west: Environmental history in the Great Plains and Rocky Mountains (U of Oklahoma Press, 2003) online.
Fradkin, Philip. A River No More: The Colorado River and the West (1981)
Frehner, Brian, and Kathleen A. Brosnan, eds. The Greater Plains: Rethinking a Region's Environmental Histories (U of Nebraska Press, 2021) online.
Harrison, Blake, et al. A Landscape History of New England (2011)
Harvey, Mark W. T. "Echo Park, Glen Canyon, and the postwar wilderness movement." Pacific Historical Review (1991): 43-67. online Colorado River region
Jacobs, Elizabeth T., Jefferey L. Burgess, and Mark B. Abbott. "The Donora smog revisited: 70 years after the event that inspired the clean air act." American Journal of Public Health 108.S2 (2018): S85-S88. online; on the fatal 1948 Donora smog in Pennsylvania.
Judd, Richard W. Second nature: An Environmental History of New England (2014)
Judd, Richard W. Common Lands and Common People, The Origins of Conservation in Northern New England (1997) online
Klyza, Christopher McGrory et al. The Story of Vermont : A Natural and Cultural History (2nd ed. 2015)
Mauldin, Erin Stewart. Unredeemed Land : An Environmental History of Civil War and Emancipation in the Cotton South (Oxford UP, 2018)
Melosi, Martin V., and Charles Reagan Wilson, eds. The New Encyclopedia of Southern Culture: Volume 8: Environment (2007); 320pp with 98 short essays by experts. blurb
Melosi, Martin V. Fresh Kills: A History of Consuming and Discarding in New York City (Columbia UP, 2020).
Melosi, Martin V., and Joseph A. Pratt. Energy Metropolis : An Environmental History of Houston and the Gulf Coast (U of Pittsburgh Press, 2007)
Reisner, Marc. Cadillac Desert: The American West and Its Disappearing Water (Penguin, 1993), which casts the federal Bureau of Reclamation as the villain; also see online copy.
Rice, James D. Nature and History in the Potomac Country: From Hunter-Gatherers to the Age of Jefferson (2009), near Washington DC
Sayen, Jamie. Children of the Northern Forest: Wild New England's History from Glaciers to Global Warming (Yale UP, 2023). the story of northern New England's undeveloped forests
Turk, Eleanor L. "Selling the Heartland: Agents, Agencies, Press, and Policies Promoting German Emigration to Kansas in the Nineteenth Century." Kansas History 12 (1989): 150-59.
Vogel, David. California greenin': How the Golden State became an environmental leader (Princeton UP, 2019).
Wexler, Alan, and Molly Braun, Atlas of westward expansion (1995) online.
Wild, Peter. Pioneer Conservationists of Western America (1979) online
Worster, Donald. Under Western Skies: Nature and History in the American West (Oxford UP, 1992) online
Zimring, Carl A., and Steven H. Corey, eds. Coastal Metropolis: Environmental Histories of Modern New York City (U of Pittsburgh Press, 2021) .
Historiography
Coates, Peter. "Emerging from the Wilderness (or, from Redwoods to Bananas): Recent Environmental History in the United States and the Rest of the Americas," Environment and History 10 (2004), pp. 407–38 online
Coulter, Kimberly, and Christof Mauch, eds. The Future of Environmental History: Needs and Opportunities ( Rachel Carson Center for Environment and Society, 2011).
Fleming, Donald. "Roots of the New Conservation Movement," Perspectives in American History 6 (1972): 7-91.
Hays, Samuel P. Explorations In Environmental History (U of Pittsburgh Press, 1998) essays by Hays online
Hendricks, Rickey L. "The Conservation Movement: A Critique of Historical Sources." History Teacher 16#1 (1982), pp. 77–104. online
Hersey, Mark D., and Ted Steinberg, eds. A Field on Fire: The Future of Environmental History (2019).
Lee, Lawrence B. "100 years of reclamation historiography." Pacific Historical Review 47.4 (1978): 507–564. online; covers 1) irrigation, 1878–1902, 2) reclamation service, 3) agricultural settlement, 1902–28, 4) engineering, 1887–1953, 5) Department of Agriculture, 1898–1938, 6) historians, 1898–1978, and 7) challenges to the Bureau
Lynch, Tom, et al. eds. The Bioregional Imagination: Literature, Ecology, and Place (U of Georgia Press, 2011), focus on literature; online
Sackman, Douglas Cazaux, ed. A Companion to American Environmental History (2010), 696pp; 33 essays by scholars that emphasize the historiography; online
Primary sources
Burch, Jr., John R. Water Rights and the Environment in the United States (ABC-CLIO 2015), a comprehensive documentary and reference guide to historical water issues.
Carson, Rachel, Silent Spring (Riverside Press, 1962), highly influential in shaping public opinion
Foss, Philip O. ed. Conservation in the United States A Documentary History : Recreation (1971) online 808pp covering parks, hunting, fishing, forests, lakes, highway beautification
McHenry, Robert and Charles Van Doren, eds. A documentary history of conservation in America (Praeger, 1972) online
McKibben, Bill, ed. American Earth: Environmental Writing Since Thoreau, (Library of America, 2008); 1080 pages of excerpts from 96 authors, plus 82 illustrations.
Magoc, Chris J. ed. Environmental issues in American history : a reference guide with primary documents (2006)
Merchant, Carolyn, ed., Major problems in American environmental history: documents and essays (1993).
Nash, Roderick. American Environmentalism: Readings in Conservation History (3rd ed. 1990)
Nicoll, Don. "Train, Russell oral history interview." (1999). online
Smith, Frank E. ed. Conservation in the United States: A Documentary History: Land and Water 1900-1970 (1971), 785pp
Stoll, Steven, ed. U.S. Environmentalism since 1945: A Brief History with Documents (Palgrave Macmillan, 2006)
Stradling, David, ed. Conservation in the Progressive Era: Classic Texts (U of Washington Press, 2004)
Wells, Christopher W. ed. Environmental Justice in Postwar America A Documentary Reader (2018)
External links
H-Environment web resource for students of environmental history
Syllabus for William Cronon course at U Wisconsin--Madison
American Society for Environmental History
Environmental History Now
Environmental History Resources
Environmental History Timeline
Environmental History on the Internet
"The Evolution of the Conservation Movement" historical documents and illustrations, 1850 to 1920, from the Library of Congress (these are no longer copyright)
Rachel Carson Center for Environment and Society and its Environment & Society Portal
Forest History Society
HistoricalClimatology.com Explores climate history, a form of environmental history.
Climate History Network Network of climate historians.
Top 23 Global Nonprofits Protecting the Environment
Journals
Environmental History, Co-published quarterly by the American Society for Environmental History and the (US) Forest History Society
JSTOR: All Volumes and Issues - Browse - Environmental History [1996–2007 (Volumes 1–12)]
JSTOR: All Volumes and Issues - Browse - Forest & Conservation History [1990–1995 (Volumes 34–39)]
JSTOR: All Volumes and Issues - Browse - Environmental Review: ER [1976–1989 (Volumes 1–13)]
JSTOR: All Volumes and Issues - Browse - Environmental History Review [1990–1995 (Volumes 14–19)]
JSTOR: All Volumes and Issues - Browse - Journal of Forest History [1974–1989 (Volumes 18–33)]
JSTOR: All Volumes and Issues - Browse - Forest History [1957–1974 (Volumes 1–17)]
Nixon on environment, an exhibit by the Nixon Foundation
History of environmentalism
Environmental history
Environmental social science
Environmentalism in the United States
Environmental policy in the United States
United States federal policy
Environmental history of Canada | Environmental history of the United States | [
"Environmental_science"
] | 19,388 | [
"Environmental social science"
] |
76,082,733 | https://en.wikipedia.org/wiki/Hieraves | Hieraves is a clade of telluravian birds named by Wu et al. (2024) that includes the orders Strigiformes (owls) and Accipitriformes (hawks and their relatives). The Cathartidae (New World vultures) are usually included in Accipitriformes, but some authors treat them as a third order, Cathartiformes, within Hieraves. In the past, either owls, New World vultures, and hawks were found to be basal outgroups with respect to Coraciimorphae inside Afroaves, or Accipitriformes and Cathartiformes were recovered as a basal clade with respect to the rest of Telluraves. Houde and Braun (2019) found support for Hieraves (then unnamed), but recovered it as the sister group to Coraciimorphae plus Australaves. The analysis of Wu et al. (2024) found Hieraves to be the sister clade to Australaves. Stiller et al. (2024) found Hieraves to be basal to Afroaves.
References
Neognathae
Birds | Hieraves | [
"Biology"
] | 244 | [
"Birds",
"Animals"
] |
76,084,000 | https://en.wikipedia.org/wiki/Amauroderma%20africana | Amauroderma africana is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus.
References
africana
Fungus species
Fungi described in 2004
Taxa named by Leif Ryvarden | Amauroderma africana | [
"Biology"
] | 45 | [
"Fungi",
"Fungus species"
] |
76,084,096 | https://en.wikipedia.org/wiki/Amauroderma%20deviatum | Amauroderma deviatum is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus.
References
deviatum
Fungus species
Fungi described in 2016 | Amauroderma deviatum | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
76,084,106 | https://en.wikipedia.org/wiki/Amauroderma%20elegantissimum | Amauroderma elegantissimum is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus.
References
elegantissimum
Fungus species
Fungi described in 2016 | Amauroderma elegantissimum | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
76,084,509 | https://en.wikipedia.org/wiki/Nickel%20tetrafluoride | Nickel tetrafluoride is an inorganic compound with the chemical formula NiF4.
Synthesis
Nickel tetrafluoride is claimed to result from the reaction of potassium hexafluoronickelate(IV) (K2NiF6) with arsenic pentafluoride (AsF5) and with boron trifluoride (BF3) in anhydrous hydrogen fluoride.
Chemical properties
Nickel tetrafluoride is an extremely strong oxidizer. Its oxidizing properties are enhanced in the presence of Lewis acids in anhydrous HF. In terms of oxidizing power, it is comparable to krypton difluoride. It can oxidize bromine pentafluoride to the hexafluorobromine(VII) cation and potassium hexafluoroplatinate(V) to platinum(VI) fluoride.
References
Nickel compounds
Fluorides | Nickel tetrafluoride | [
"Chemistry"
] | 144 | [
"Fluorides",
"Salts"
] |
76,086,405 | https://en.wikipedia.org/wiki/The%20Mixon | The Mixon (a reef, rocks or shoal) is a limestone outcrop in the English Channel off Selsey Bill, West Sussex. It was formed during the Eocene epoch.
At the east end of the reef is a deep gully known as the "Mixon Hole"; this feature makes up the north side of a drowned river gorge. The Mixon is part of a Marine Conservation Zone and supports diverse wildlife, including short-snouted seahorses, squat lobsters and crabs, along with red algae and kelp in shallower waters. The Mixon Hole is a popular destination for scuba divers. Rock from the Mixon has been quarried from at least Roman times until the 19th century and used in the local building industry.
The reef has been a major hazard to shipping over the centuries, with stories of wrecks dating from medieval times.
Name
The name Mixon is probably derived from the Old English mixen, meaning 'dunghill'. It is thought that dung from bullocks was stored in this area during the Anglo-Saxon period.
History
The exact configuration of the coastline in the early Holocene is not precisely known, but the Mixon and other reefs in the area that were formed within the sands and silts of the Bracklesham Group are thought to have significantly shaped the palaeogeographic landscape and protected against coastal erosion.
Archaeological evidence demonstrates that the Mixon would have been the shoreline during the Roman occupation, and was not breached by the sea until the 10th or 11th century.
The Mixon rocks have been a great hazard to shipping vessels over the centuries. The cartographer John Speed placed the Mixon (incorrectly) off the north-east coast of the Isle of Wight on his 1610 map.
Probably the earliest sailing directions for this area appear in the "Great Britains Coasting Pilot" of 1693, in which the author Greenvile Collins writes of the Mixon and Owers shoals.
To warn shipping vessels of the dangerous Mixon and Owers shoals, a light vessel was anchored off the Mixon in 1788 by Trinity House. From that date onward a series of vessels was used for the same purpose; between 1939 and 1973 the commonly used craft was lightship number 3. In 1973 the lightship was replaced with a beacon, and from 2015 a south cardinal mark was installed.
The Mixon rock has been quarried at least since the Roman occupation of the area and became an important building stone in the late Saxon period. There is evidence of its use mainly on the Manhood Peninsula but also within an area bounded by Westbourne, Westhampnett, Oving and South Bersted. Quarrying ceased after an Admiralty prohibition order in 1827. Some examples of structures where Mixon stone was used are the Fishbourne Roman Palace and the Hayling Island Bridge.
In the 19th century, the "Channel Pilot" recorded the presence of a deep hole at the eastern end of the reef. Known as the Mixon Hole, the depression is approximately 8 fathoms deep. More recently, the Mixon Hole has been described as the "most dramatic underwater cliff in the [English] Channel".
The great depth of the Mixon Hole, together with its relative proximity to the shore, has made it a popular dive site.
Marine life
The crevices and ledges within the Mixon Hole provide a habitat for a variety of marine species including short-snouted seahorses, squat lobsters and crabs, along with red algae.
The short-snouted seahorse is protected under the United Kingdom's Wildlife and Countryside Act 1981 and by CITES. The UK Government established Marine Conservation Zones (MCZs) to protect the populations and habitats of rare or threatened species. The Mixon lies within the Selsey Bill and the Hounds MCZ, which was designated on 31 May 2019.
Geology
Mixon Rock is a tough, coarse-grained, pale grey to honey-yellow bioclastic limestone or calcareous sandstone. This stone belongs to the Bracklesham Group, which was formed about 45 million years ago. The stone itself has been formally named by geologists as "Mixon Rock". The rock contains microfossils, such as Foraminifera, along with shell debris, sponge spicules and echinoid spines, some corals, bryozoans and shark teeth. It also contains scattered sand grains and glauconite. The Mixon is one of only three localities in the United Kingdom where an extinct genus of Foraminifera known as Alveolina can be found.
The north face of the Mixon Hole is a clay cliff that is vertical in its upper parts to between 5 and 20 metres below sea level. At the top of the cliff limestone overlies softer grey clay. The Mixon Hole forms the north side of a drowned river gorge which is kept open by the strong tidal currents through it.
At the base of the hole a mixture of boulders and cobbles of both clay and limestone has fallen from the cliff above. Away from the cliff on the seabed there is a preponderance of empty slipper limpet shells.
Folklore, myths and legends
There are many myths and legends associated with The Mixon. For example, the foundation story of Sussex, as recorded in the "Anglo-Saxon Chronicle", tells how the Anglo-Saxon king Ælle and his three sons landed at a place called Cymenshore in AD 477. The modern location of Cymenshore has been lost, although the written evidence suggests that it was located at The Mixon. However, most academics agree that although it is possible that Cymenshore existed, the foundation story itself is a myth.
Another more recent example is a custom according to which the dead were placed, in their coffins, on Selsey beach at night. In the morning the coffins would be gone, and it was said that the people of the sea had taken them to the Mixon Hole. A more plausible explanation is that abandoning coffins here was associated with smuggling. A coffin full of contraband would be deposited on the beach, ready to be picked up and distributed by a coastal cutter, and because it seemed to be a funeral rite, this practice would not attract the attention of the customs officials.
See also
Geography of Sussex
Beachy Head West
Pagham Harbour
History of Sussex
Notes
Citations
References
West Sussex
Marine biology
Geology
Sussex | The Mixon | [
"Biology"
] | 1,312 | [
"Marine biology"
] |
76,087,628 | https://en.wikipedia.org/wiki/Music%20cipher | In cryptography, a music cipher is an algorithm for the encryption of a plaintext into musical symbols or sounds. Music-based ciphers are related to, but not the same as musical cryptograms. The latter were systems used by composers to create musical themes or motifs to represent names based on similarities between letters of the alphabet and musical note names, such as the BACH motif, whereas music ciphers were systems typically used by cryptographers to hide or encode messages for reasons of secrecy or espionage.
Types
There are a variety of different types of music ciphers, distinguished by both the method of encryption and the musical symbols used. Regarding the former, most are simple substitution ciphers with a one-to-one correspondence between individual letters of the alphabet and specific musical notes. There are also historical music ciphers that utilize homophonic substitution (one-to-many), polyphonic substitution (many-to-one), compound cipher symbols, and/or cipher keys, all of which can make the enciphered message more difficult to break. Regarding the type of symbol used for substitution, most music ciphers utilize the pitch of a musical note as the primary cipher symbol. Since there are fewer notes in a standard musical scale (e.g., seven for diatonic scales and twelve for chromatic scales) than there are letters of the alphabet, cryptographers would often combine the note name with additional characteristics, such as octave register, rhythmic duration, or clef, to create a complete set of cipher symbols to match every letter. However, there are some music ciphers which rely exclusively on rhythm instead of pitch, or on relative scale degree names instead of absolute pitches.
Musical steganography
Music ciphers often have both cryptographic and steganographic elements. Simply put, encryption is scrambling a message so that it is unreadable; steganography is hiding a message so that no one knows it is even there. Most practitioners of music ciphers believed that encrypting text into musical symbols gave it added security because, if intercepted, most people would not even suspect that the sheet music contained a message. However, as Francesco Lana de Terzi notes, this is usually not because the resulting cipher melody appears to be a normal piece of music, but rather because so few people know enough about music to realize it is not ("ma gl'intelligenti di musica sono pochi"). A message can also be visually hidden within a page of music without actually being a music cipher. William F. Friedman embedded a secret message based on Francis Bacon's cipher into a sheet music arrangement of Stephen Foster's "My Old Kentucky Home" by visually altering the appearance of the note stems. Another steganographic strategy is to musically encrypt a plaintext, but hide the message-bearing notes within a larger musical score using some visual marker that distinguishes them from the meaningless null-symbol notes (e.g., the cipher melody is only in the tenor line, or only in the notes with stems pointing down).
Diatonic substitution ciphers
Diatonic music ciphers utilize only the seven basic note names of the diatonic scale: A, B, C, D, E, F, and G. While some systems reuse the same seven pitches for multiple letters (e.g., the pitch A can represent the letters A, H, O, or V), most algorithms combine these pitches with other musical attributes to achieve a one-to-one mapping. Perhaps the earliest documented music cipher is found in a manuscript from 1432 called "The Sermon Booklets of Friar Nicholas Philip." Philip's cipher uses only five pitches, but each note can appear with one of four different rhythmic durations, thus providing twenty distinct symbols. A similar cipher appears in a 15th-century British anonymous manuscript as well as in a much later treatise by Giambattista della Porta.
In editions of the same treatise (De Furtivis Literarum Notis), Porta also presents a simpler cipher which is much more well-known. Porta's music cipher maps the letters A through M (omitting J and K) onto a stepwise, ascending, octave-and-a-half scale of whole notes (semibreves), and the remainder of the alphabet (omitting V and W) onto a descending scale of half notes (minims). Since the alphabetic and scalar sequences move in such close step with each other, this is not a very strong method of encryption, nor are the melodies it produces very natural. Nevertheless, one finds slight variations of this same method employed throughout the 17th and 18th centuries by Daniel Schwenter (1602), John Wilkins (1641), Athanasius Kircher (1650), Kaspar Schott (1655), Philip Thicknesse (1772), and even the British Foreign Office (ca. 1750).
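To make the mechanics of such a sequential mapping concrete, the following Python sketch implements a Porta-style substitution. The exact pitch spellings, the starting note, and the treatment of unmapped letters are illustrative assumptions, not Porta's published table.

```python
# A simplified Porta-style music cipher: letters A-M (omitting J and K) map
# onto an ascending scale of whole notes; letters N-Z (omitting V and W) map
# onto a descending scale of half notes. Pitch spellings are assumptions.
ASCENDING = ["G2", "A2", "B2", "C3", "D3", "E3", "F3", "G3", "A3", "B3", "C4"]
DESCENDING = list(reversed(ASCENDING))

FIRST_HALF = "ABCDEFGHILM"    # A-M without J and K (11 letters)
SECOND_HALF = "NOPQRSTUXYZ"   # N-Z without V and W (11 letters)

TABLE = {letter: (pitch, "whole") for letter, pitch in zip(FIRST_HALF, ASCENDING)}
TABLE.update({letter: (pitch, "half") for letter, pitch in zip(SECOND_HALF, DESCENDING)})

def encipher(plaintext: str) -> list[tuple[str, str]]:
    """Return (pitch, duration) pairs; letters outside the table are skipped."""
    return [TABLE[c] for c in plaintext.upper() if c in TABLE]

print(encipher("fuga"))
# -> [('E3', 'whole'), ('C3', 'half'), ('F3', 'whole'), ('G2', 'whole')]
```

Because each letter's position in the alphabet tracks its position in the scale almost one-for-one, recovering the plaintext from an intercepted melody amounts to reading the notes back off the scale, which is why the method is weak.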
Chromatic substitution ciphers
Music ciphers based on the chromatic scale provide a larger pool of note names to match with letters of the alphabet. Applying sharps and flats to the seven diatonic pitches yields twenty-one unique cipher symbols. Since this is still fewer than the letters of a standard alphabet, chromatic ciphers also require either a reduced letter set or additional features (e.g., octave register or duration). Most chromatic ciphers were developed by composers in the 20th century, when fully chromatic music itself was more common. A notable exception is a cipher attributed to the composer Michael Haydn (brother of the more famous Joseph Haydn). Haydn's algorithm is one of the most comprehensive, with symbols for thirty-one letters of the German alphabet, punctuation (using rest signs), parentheses (using clefs), and word segmentation (using bar lines). However, because many of the pitches are enharmonic equivalents, this cipher can only be transmitted as visual steganography, not via musical sound. For example, the notes C-sharp and D-flat are spelled differently, but they sound the same on a piano. As such, if one were listening to an enciphered melody, it would not be possible to hear the difference between the letters K and L. Furthermore, the purpose of this cipher was clearly not to generate musical themes that could pass for normal music. The use of such an extreme chromatic scale produces wildly dissonant, atonal melodies that would have been obviously atypical for Haydn's time.
20th-century ciphers
Although chromatic ciphers did not seem to be favored by cryptographers, several 20th-century composers developed systems for use in their own music: Arthur Honegger, Maurice Duruflé, Norman Cazden, Olivier Messiaen, and Jacques Chailley. Similar to Haydn's cipher, most likewise map the alphabet sequentially onto a chromatic scale and rely on octave register to extend to twenty-six letters. Only Messiaen's appears to have been thoughtfully constructed to meet the composer's aesthetic goals. Although he also utilized different octave registers, the letters of the alphabet are not mapped in scalar order and also have distinct rhythmic values. Messiaen called his musical alphabet the langage communicable, and used it to embed extra-musical text throughout his organ work Méditations sur le Mystère de la Sainte Trinité.
Compound motivic ciphers
In a compound substitution cipher, each single plaintext letter is replaced by a block of multiple cipher symbols (e.g., 'a' = EN or 'b' = WJU). Similarly, there are compound music ciphers in which each letter is represented by a musical motive with two or more notes. In the case of the former, the compound symbols serve to make frequency analysis more difficult; in the latter, the goal is to make the output more musical.
For example, in 1804, Johann Bücking devised a compound cipher which generates musical compositions in the form of a minuet in the key of G Major. Each letter of the alphabet is replaced by a measure of music consisting of a stylistically typical motive with three to six notes. After the plaintext is enciphered, additional pre-composed measures are appended to the beginning and end to provide a suitable musical framing. A few years earlier, Wolfgang Amadeus Mozart appears to have employed a similar technique (with much more sophisticated musical motives), although more likely intended as a parlor game than an actual cipher. Since the compound symbols are musically meaningful motives, these ciphers could also be considered similar to codes.
Friedrich von Öttingen-Wallerstein proposed a different type of compound music cipher modeled after a Polybius square cipher. Öttingen-Wallerstein used a 5x5 grid containing the letters of the alphabet (hidden within the names of angels). Instead of indexing the rows and columns with coordinate numbers, he used the solfege syllables Ut, Re, Mi, Fa, and Sol (i.e., the first five degrees of a diatonic scale). Each letter, therefore, becomes a two-note melodic motive. This same cipher appears in treatises by Gustavus Selenus (1624) and Johann Balthasar Friderici (1665) (but without credit to Öttingen-Wallerstein's earlier version).
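The sketch below illustrates the underlying Polybius-square mechanism in Python. The plain alphabetical 5x5 layout (with I and J sharing a cell) is an illustrative assumption; in Öttingen-Wallerstein's manuscript the letters were hidden within angel names.

```python
# A Polybius-square music cipher indexed by solfege syllables instead of
# coordinate numbers: each letter becomes a two-note motive of
# (row syllable, column syllable). The 5x5 alphabetical layout is assumed.
SYLLABLES = ["Ut", "Re", "Mi", "Fa", "Sol"]
ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 25 letters, J folded into I

GRID = {ALPHABET[5 * row + col]: (SYLLABLES[row], SYLLABLES[col])
        for row in range(5) for col in range(5)}

def encipher(plaintext: str) -> list[tuple[str, str]]:
    """Each plaintext letter becomes a two-syllable (two-note) motive."""
    text = plaintext.upper().replace("J", "I")
    return [GRID[c] for c in text if c in GRID]

print(encipher("Ave"))
# -> [('Ut', 'Ut'), ('Sol', 'Ut'), ('Ut', 'Sol')]
```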
Music ciphers with keys
Because Öttingen-Wallerstein's cipher uses relative scale degrees, rather than fixed note names, it is effectively a polyalphabetic cipher. The same enciphered message could be transposed to a different musical key––with different note names––and still retain the same meaning. The musical key literally becomes a cipher key (or cryptovariable), because the recipient needs that additional information to correctly decipher the melody. Öttingen-Wallerstein inserted rests as cipherkey markers to indicate when a new musical key was needed to decrypt the message.
Francesco Lana de Terzi used a more conventional text-string cryptovariable to add security to a very straightforward 'Porta-style' music cipher (1670). Similar to a Vigenère cipher, a single-letter cipher key shifts the position of the plaintext alphabet in relation to the sequence of musical cipher symbols; a multi-letter key word shifts the musical scale for each letter of the text in a repeating cycle.
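A minimal Python sketch of this keyed arrangement follows; the 26-note symbol cycle and the key-handling details are assumptions made for illustration rather than a transcription of Lana de Terzi's table.

```python
# A Vigenere-style keyed music cipher in the spirit of Lana de Terzi's scheme:
# each letter of a repeating key word shifts the plaintext alphabet relative
# to a fixed cycle of cipher notes. Note names here are assumptions.
import string

# 26 cipher symbols: four octaves of naturals, truncated to alphabet length.
NOTES = [f"{name}{octave}" for octave in (2, 3, 4, 5) for name in "CDEFGAB"][:26]

def encipher(plaintext: str, key: str) -> list[str]:
    """Shift each plaintext letter by the corresponding (repeating) key letter."""
    clean = [c for c in plaintext.upper() if c in string.ascii_uppercase]
    enciphered = []
    for i, c in enumerate(clean):
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        enciphered.append(NOTES[(ord(c) - ord("A") + shift) % 26])
    return enciphered

print(encipher("attack", "sol"))
```

As with a textual Vigenère cipher, the repeating key means the same plaintext letter can map to different notes at different positions, defeating simple frequency analysis of the melody.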
A more elaborate cipherkey algorithm was found in an anonymous manuscript in Port-Lesney, France, most likely from the mid-18th century. The so-called 'Port-Lesney' music cipher uses a mechanical device known as an Alberti cipher disk. There are two rotating disks: the outer disk contains two concentric rings (one with time signatures and the other with letters of the alphabet); the inner disk has a ring of compound musical symbols and a small inner circle with three different clef signs. The disks are rotated to align the letters of the alphabet with compound musical symbols to encrypt the message. When the melody is written out on a music staff, the corresponding clef and time signature are added to the beginning to indicate the cipher key (which the recipient aligns on their disk to decipher the message). This particular music cipher was apparently very popular, with a dozen variations (in French, German, and English) appearing throughout the 18th and 19th centuries.
The more recent Solfa Cipher combines some of the above cryptovariable techniques. As the name suggests, Solfa Cipher uses relative solfege degrees (like Öttingen-Wallerstein) rather than fixed pitches, which allows the same encrypted message to be transposed to different musical keys. Since there are only seven scale degrees, these are combined with a rhythmic component to create enough unique cipher symbols. However, instead of the absolute note lengths (e.g., quarter note, half note, etc.) employed in most music ciphers, Solfa Cipher uses relative metric placement. This type of tonal-metric cipher makes the encrypted melody both harder to break and more musically natural (i.e., similar to common-practice tonal melodies). To decrypt a cipher melody, the recipient needs to know in which musical key and with what rhythmic unit the original message was encrypted, as well as the clef sign and metric location of the first note. The cipher key could also be transmitted as a date by using Solfalogy, a method of associating each unique date with a tone and modal scale. To further confound interceptors, the transcribed sheet music could be written with a decoy clef, key signature, and time signature. The musical output, however, is a relatively normal, simple, singable tune in comparison to the disjunct, atonal melodies produced by fixed-pitch substitution ciphers.
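A compact sketch of this tonal-metric idea is given below; the specific pairing of letters to (scale degree, metric position) slots is an illustrative assumption and not the published Solfa Cipher key table.

```python
# A Solfa-Cipher-like tonal-metric mapping: 7 relative scale degrees times
# 4 metric positions gives 28 slots, enough for the alphabet. Because the
# degrees are relative (not absolute pitches), the output is transposable
# to any musical key. The letter-to-slot layout below is an assumption.
DEGREES = ["do", "re", "mi", "fa", "sol", "la", "ti"]
BEATS = [1.0, 1.5, 2.0, 2.5]  # metric placements within a bar (assumption)

SLOTS = [(degree, beat) for beat in BEATS for degree in DEGREES]  # 28 slots
KEY_TABLE = {c: SLOTS[i] for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}

def encipher(plaintext: str) -> list[tuple[str, float]]:
    """Each letter becomes a (scale degree, metric position) pair."""
    return [KEY_TABLE[c] for c in plaintext.upper() if c in KEY_TABLE]

print(encipher("hi"))
# -> [('do', 1.5), ('re', 1.5)]
```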
References
Sources
Alberti, Leon Battista. 1467. De Cifris. Biblioteca Nazionale Marciana. Cod. Marc. Lat. XIV 32 (4702) f. 1r. (sec. XVI).
Arnold, George. 1862. The Magician's Own Book. New York: Dick & Fitzgerald.
Bacon, Francis. 1605. The Proficience and Advancement of Learning Divine and Humane. Oxford
Belloni, Gabriella. 1982. "Conoscenza magica e ricerca scientifica in G. B. Della Porta". Criptologia / Giovan Battista Della Porta. Rome: Centro internazionale di studi umanistici
Bernard, Francis. c.1400. Sloan MS 351, British Library.
Bertini, A. 1811. Stigmatographie ou l'art d'écrire avec des points. France: Martinet.
Boethius, Anicius Manlius Severinus. c.524. De Institutione Musica. Translated by Calvin Bower, 1989, Fundamentals of Music (ed. C. Palisca), Yale University Press.
Bücking, Johannn. J. 1804. Anweisung zur geheimen Correpondenz. Heinrich Georg Albreht.
Cazden, Norman. 1961a. "Staff Notation as a Non-Musical Communications Code," Perspectives of New Music, Vol. 5, No. 1, 113–128
Cazden, Norman. 1961b. "How to Compose Non-Music," Perspectives of New Music, Vol. 5, No. 2, 287–296
Chailley, Jacques. 1981. "Anagrammes Musicales Et "langages Communicables"." Revue De Musicologie 67, no. 1: 69-80. doi:10.2307/928141.
Champour, MM. de and François Malepeyre. 1856. Nouveau manuel complet de la fabrication des encres telles. A la Librairie encyclopédique de Roret.
Code, David Løberg. 2023. "Can musical encryption be both? A survey of music-based ciphers." Cryptologia Volume 47 - Issue 4, https://doi.org/10.1080/01611194.2021.2021565
Daverio, John. 2002. Crossing Paths: Schubert, Schumann, and Brahms. Oxford University Press.
Djossa, Christina Ayele. 2018. "With Music Cryptography, Composers Can Hide Messages in Their Melodies," Atlas Obscura. https://www.atlasobscura.com/articles/musical-cryptography-codes
Duruflé, Maurice. 1942. Prélude et fugue sur le nom d'Alain, Op. 7.
Écorcheville, Jules. 1909. "Homage à Joseph Haydn," Revue musicale S.I.M. Société internationale de musique. https://gallica.bnf.fr/ark:/12148/bpt6k5589273s/f51.item
Ernst, Thomas. 1996. “Schwarzweisse Magie. Der Schlussel zum dritten Buch der Steganographia des Trithemius.” Daphnis 25, Heft 1.
Friderici, Johann Balthasar. 1665. Cryptographia. https://www.digitale-sammlungen.de/de/view/bsb10897282?page=193
Gale, John. 1796. Gale's Cabinet of Knowledge. W. Kemmish.
Godwin, Francis. 1638. The Man in the Moone or A Discourse of a Voyage Thither by Domingo Gonsales. John Norton.
Guyot, Edme Gilles and Guillaume Germain Guyot. 1769. Recreations Sur Les Nombres. Gueffier.
Honegger, Arthur. 1928. Homage à Albert Roussel, H.69. Editions Salabert.
Hooper, William. 1794. Rational Recreations. B. Law and Son.
Jacob, Paul Lacroix. 1858. La cryptographie: ou, L'art d'écrire en chiffres. Adolphe Delahays.
Kircher, Athanasius. 1650. Musurgia Universalis. https://archive.org/details/bub_gb_97xCAAAAcAAJ/page/n389/mode/2up
Klüber, Johann Ludwig. 1808. Kryptographik. Cottaschen. https://archive.org/details/bub_gb_lqRAAAAAcAAJ/page/n543/mode/2up
Knowlson, James R. 1968. "A Note on Bishop Godwin's 'Man in the Moone:' The East Indies Trade Route and a 'Language' of Musical Notes." Modern Philology 65, no. 4: 357–61
Kojima, Kenji. 2013. "Algorithmic Composition 'CiberTune'". http://kenjikojima.com/cipherTune/
Lacombe, Jacques. 1792. Encyclopédie Méthodique: Des Amusemens Des Sciences Mathématiques Et Physiques. Chez Panckoucke
Langlais, Jean. 1976. Deuxième symphonie pour orgue 'Alla Webern'
McAvoy, Gary. 2021. The Vivaldi Cipher. Literati Editions.
Meister, Aloys. 1906. Die Geheimschrift im Dienste der päpstlichen Kurie von ihren Anfängen bis zum Ende des XVI Jarhhunderts. Schöningh.
Messiaen, Olivier. 1969. Méditations sur le mystère de la Sainte Trinité
Mozart, Wolfgang Amadeus. 1787. Music manuscript MS 253–01, Bibliothèque Nationale de France. https://catalogue.bnf.fr/ark:/12148/cb424728984
New York Public Library, Manuscripts and Archives Division. 1916. "My Old Kentucky Home, Good Night" The New York Public Library Digital Collections. https://digitalcollections.nypl.org/items/bd9b1e30-8607-0131-ac72-58d385a7b928
Noguchi, Hideo. 1990. "Mozart – Musical Game in C K.516f" Mitteilungen der Internationalen Stiftung Mozarteum 38, 89–101.
Öttingen-Wallerstein, Friedrich von. c.1600. Steganographia comitis. https://diglib.hab.de/wdb.php?dir=mss/56-aug-4f
Philip, Nicholas. 1436. "The Sermon Booklets of Friar Nicholas Philip" MS Lat. th.d.I, Bodleian Library, Oxford.
Porta, Giambattista della. 1602. De Furtivis Literarum Notis. https://books.google.com/books?id=UIZeAAAAcAAJ
Porta, Giambattista della. 1606. De Occultis Literarum Notis. https://warburg.sas.ac.uk/pdf/noh4260.o11b2715108.pdf
Prince, Jon and Mark Schmuckler. 2014. "The Tonal-Metric Hierarchy: A Corpus Analysis," Music Perception, 31(3), 254–270.
Rettensteiner, Werigand. 1808. Biographische Skizze von Michael Haydn.
Reuter, Christoph. 2013. "Namadeus – Play Your Name with Mozarts Game (KV 516f). Musicpsychologie, Bd.23, 154-159.
Sams, Eric. 1966. "The Schumann Ciphers" The Musical Times, May 1966: 392–399.
Schooling, John Holt. 1896. “Secrets in Cipher I-IV”, Pall Mall Magazine, viii, 119–29, 245–56, 452–62, 608–18
Schott, Kaspar. 1655. Schola Steganographica. https://books.google.com/books?id=XQNCAAAAcAAJ
Schwenter, Daniel. 1622. Steganologia & Steganographia Aucta. https://www.digitale-sammlungen.de/de/view/bsb11081558?page=325
Selenius, Gustavus. 1624. Cryptomenytices et Cryptographiae libri IX. https://books.google.com/books?id=gc9TAAAAcAAJ
Shenten, Andrew. 2008. Olivier Messiaen's System of Signs: Notes Towards Understanding His Music. Ashgate Publishing.
Sudre, François. 1866. Langue Musicale Universelle. http://www.ifost.org.au/~gregb/solresol/sudre-book.pdf
Terzi, Francesco Lana de. 1670. Prodromo, ouero, Saggio di alcune inuentione nuoue, premesso all'arte maestra https://archive.org/details/prodromoouerosag00lana/page/n265/mode/1up
Theun, Johann Christophe. 1772. Neue physikalische und mathematische Belustigungen, Bey Berhard Kletts
Thicknesse, Philip. 1772. A treatise on the art of decyphering, and of writing in cypher. With an harmonic alphabet. W. Brown. https://archive.org/details/atreatiseonartd00thicgoog/page/n125/mode/2up
Wilkins, John. 1641. Mercury, or The Secret and Swift Messenger. http://lcweb2.loc.gov/cgi-bin/ampage?collId=rbc3&fileName=rbc0001_2009fabyan19070page.db&recNum=168
External links
Music-based ciphers, online encoders, https://wmich.edu/mus-theo/ciphers
Elgar's Enigma Cipher, https://enigmathemeunmasked.blogspot.com/
Solfa Cipher, https://solfa-co.de
Music Sheet Cipher, https://www.dcode.fr/music-sheet-cipher
Cryptography
Ciphers
Music theory | Music cipher | [
"Mathematics",
"Engineering"
] | 4,938 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
76,088,934 | https://en.wikipedia.org/wiki/Solar%20reforming | Solar reforming is the sunlight-driven conversion of diverse carbon waste resources (including solid, liquid, and gaseous waste streams such as biomass, plastics, industrial by-products, atmospheric carbon dioxide, etc.) into sustainable fuels (or energy vectors) and value-added chemicals. It encompasses a set of technologies (and processes) operating under ambient and aqueous conditions, utilizing the solar spectrum to generate maximum value. Solar reforming offers an attractive and unifying solution to address the contemporary challenges of climate change and environmental pollution by creating a sustainable circular network of waste upcycling, clean fuel (and chemical) generation and the consequent mitigation of greenhouse emissions (in alignment with the United Nations Sustainable Development Goals).
Background
The earliest sunlight-driven reforming (now referred to as photoreforming or PC reforming, which forms a small sub-section of solar reforming; see Definition and classifications section) of waste-derived substrates involved the use of the TiO2 semiconductor photocatalyst (generally loaded with a hydrogen evolution co-catalyst such as Pt). Kawai and Sakata from the Institute for Molecular Science, Okazaki, Japan in the 1980s reported that the organics derived from different solid waste matter could be used as electron donors to drive the generation of hydrogen gas over TiO2 photocatalyst composites. In 2017, Wakerley, Kuehnel and Reisner at the University of Cambridge, UK, demonstrated the photocatalytic production of hydrogen using raw lignocellulosic biomass substrates in the presence of visible-light responsive CdS|CdOx quantum dots under alkaline conditions. This was followed by the utilization of less-toxic, carbon-based, visible-light absorbing photocatalyst composites (for example, carbon-nitride based systems) for biomass and plastics photoreforming to hydrogen and organics by Kasap, Uekert and Reisner. In addition to variations of carbon nitride, other photocatalyst composite systems based on graphene oxides, MXenes, co-ordination polymers and metal chalcogenides were reported during this period. A major limitation of PC reforming is the use of conventional harsh alkaline pre-treatment conditions (pH >13 and high temperatures) for polymeric substrates such as condensation plastics, accounting for more than 80% of the operation costs. This was circumvented with the introduction of a new chemoenzymatic reforming pathway in 2023 by Bhattacharjee, Guo, Reisner and Hollfelder, which employed near-neutral pH and moderate temperatures for pre-treating plastics and nanoplastics. In 2020, Jiao and Xie reported the photocatalytic conversion of addition plastics such as polyethylene and polypropylene to high-energy-density C2 fuels over a Nb2O5 catalyst under natural conditions.
The photocatalytic process (referred to as PC reforming; see Categorization and configurations section below) offers a simple, one-pot and facile deployment scope, but has several major limitations, making it challenging for commercial implementation. In 2021, sunlight-driven photoelectrochemical (PEC) systems/technologies operating with no external bias or voltage input were introduced by Bhattacharjee and Reisner at the University of Cambridge. These PEC reforming (see Categorization and configurations section) systems reformed diverse pre-treated waste streams (such as lignocellulose and PET plastics) to selective value-added chemicals with the simultaneous generation of green hydrogen, achieving areal production rates 100-10000 times higher than conventional photocatalytic processes. In 2023, Bhattacharjee, Rahaman and Reisner extended the PEC platform to a solar reactor which could reduce the greenhouse gas CO2 to different energy vectors (CO, syngas or formate, depending on the type of catalyst integrated) and convert waste PET plastics to glycolic acid at the same time. This further inspired the direct capture and conversion of CO2 to products from flue gas and air (direct air capture) in a PEC reforming process (with simultaneous plastic conversion). Choi and Ryu demonstrated a polyoxometallate-mediated PEC process to achieve biomass conversion with unassisted hydrogen production in 2022. Similarly, in 2023, Pan and Chu reported a PEC cell for renewable formate production from sunlight, CO2 and biomass-derived sugars. These developments have led solar reforming (and electroreforming, where renewable electricity drives redox processes; see Categorization and configurations section) to gradually emerge as an active area of exploration.
Concept and considerations
Definition and classifications
Solar reforming is the sunlight-driven transformation of waste substrates to valuable products (such as sustainable fuels and chemicals) as defined by scientists Subhajit Bhattacharjee, Stuart Linley and Erwin Reisner in their 2024 Nature Reviews Chemistry article, where they conceptualized and formalized the field by introducing its concepts, classification, configurations and metrics. It generally operates without external heating and pressure, and also introduces a thermodynamic advantage over traditional green hydrogen or CO2-reduction fuel-producing methods such as water splitting or CO2 splitting, respectively. Depending on solar spectrum utilization, solar reforming can be classified into two categories: "solar catalytic reforming" and "solar thermal reforming". Solar catalytic reforming refers to transformation processes primarily driven by ultraviolet (UV) or visible light. It also includes the subset of 'photoreforming', which encompasses the utilization of high-energy photons in the UV or near-UV region of the solar spectrum (for example, by semiconductor photocatalysts such as TiO2). Solar thermal reforming, on the other hand, exploits the infrared (IR) region for waste upcycling to generate products of high economic value. An important aspect of solar reforming is value creation, which means that the overall value creation from product formation must be greater than substrate value destruction. In terms of deployment architectures, solar catalytic reforming can be further categorized into: photocatalytic reforming (PC reforming), photoelectrochemical reforming (PEC reforming) and photovoltaic-electrochemical reforming (PV-EC reforming).
Advantages over conventional waste recycling and upcycling processes
Solar reforming offers several advantages over conventional methods of waste management and fuel/chemical production. It offers a less energy-intensive, low-carbon alternative to methods of waste reforming such as pyrolysis and gasification, which require high energy input. Solar reforming also provides several benefits over traditional green hydrogen production methods such as water splitting (H2O → H2 + O2, ΔG° = 237 kJ mol−1). It offers a thermodynamic advantage over water splitting by circumventing the energetically and kinetically demanding water oxidation half reaction (E0 = +1.23 V vs. reversible hydrogen electrode (RHE)) through the energetically neutral oxidation of waste-derived organics (CxHyOz + (2x−z)H2O → (2x−z+y/2)H2 + xCO2; ΔG° ~0 kJ mol−1). This results in better performance in terms of higher production rates, and also translates to other similar processes which depend on water oxidation as the counter reaction, such as CO2 splitting. Furthermore, the concentrated streams of hydrogen produced by solar reforming are safer than the explosive mixtures of oxygen and hydrogen (from traditional water splitting) that otherwise require additional separation costs. The added economic advantage of forming two different valuable products (for example, gaseous reductive fuels and liquid oxidative chemicals) simultaneously makes solar reforming suitable for commercial applications.
Solar reforming metrics
Solar reforming encompasses a range of technological processes and configurations, and suitable performance metrics are therefore needed to evaluate commercial viability. In artificial photosynthesis, the most common metric is the solar-to-fuel conversion efficiency (ηSTF) as shown below, where 'r' is the product formation rate, 'ΔG' is the Gibbs free energy change during the process, 'A' is the sunlight irradiation area and 'P' is the total light intensity flux. The ηSTF can be adopted as a metric for solar reforming, but with certain considerations. Since the ΔG values for solar reforming processes are very low (ΔG ~0 kJ mol‒1), the ηSTF is by definition close to zero, despite high production rates and quantum yields. However, replacing the ΔG for product formation (during solar reforming) with that of product utilisation (|ΔGuse|, such as combustion of the hydrogen fuel generated) can give a better representation of the process efficiency.
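The displayed formula did not survive in this text; based on the variable definitions given, the standard solar-to-fuel efficiency takes the form below, with the proposed modification replacing ΔG of product formation by |ΔG_use| of product utilisation (this is a reconstruction from the stated definitions and standard usage, not a verbatim source equation):

$$\eta_{\mathrm{STF}} = \frac{r \times \Delta G}{A \times P}, \qquad \eta_{\mathrm{STF}}' = \frac{r \times \left|\Delta G_{\mathrm{use}}\right|}{A \times P}$$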
Since solar reforming is highly dependent on the light harvester and its area of photon collection, a more technologically relevant metric is the areal production rate (rareal) as shown, where 'n' is the moles of product formed, 'A' is the sunlight irradiation area and 't' is the time.
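Written out from the definitions above (again a reconstruction, since the displayed equation is missing here):

$$r_{\mathrm{areal}} = \frac{n}{A \times t}$$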
Although rareal is a more consistent metric for solar reforming, it neglects some key parameters such as type of waste utilized, pre-treatment costs, product value, scaling, other process and separation costs, deployment variables, etc. Therefore, a more adaptable and robust metric is the solar-to-value creation rate (rSTV) which can encompass all these factors and provide a more holistic and practical picture from the economic or commercial point of view. The simplified equation for rSTV is shown below, where Ci and Ck are the costs of the product 'i' and substrate 'k', respectively. Cp is the pre-treatment cost for the waste substrate 'k', and ni and nk are amounts (in moles) of the product 'i' formed and substrate 'k' consumed during solar reforming, respectively. Note that the metric is adaptable and can be expanded to include other relevant parameters as applicable.
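A plausible reconstruction of the simplified expression, assembled from the stated definitions (product revenues minus substrate and pre-treatment costs, normalised per irradiated area and time; the exact sign convention and normalisation are assumptions):

$$r_{\mathrm{STV}} = \frac{\sum_i C_i\, n_i \;-\; \sum_k \left(C_k + C_p\right) n_k}{A \times t}$$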
Categorization and configurations
Solar reforming depends on the properties of the light absorber and the catalysts involved, and their selection, screening and integration to generate maximum value. The design and deployment of solar reforming technologies dictates the efficiency, scale and target substrates/products. In this context, solar reforming (more specifically, solar catalytic reforming) can be classified into three architectures:
Photocatalytic (PC) reforming - PC reforming is a one-pot process involving homogeneous or heterogeneous photocatalyst suspensions (or photocatalysts immobilized on sheets or floating materials for easy recovery), which, under sunlight irradiation, generate charge carriers (electron-hole pairs) to catalyze redox reactions (UV or near-UV based photoreforming systems generally also come under PC reforming). Despite the low cost and simplicity of PC reforming, major drawbacks of this approach include low product formation rates, poor selectivity of oxidation products or overoxidation to release CO2, challenging catalyst/process optimization, and harsh pre-treatment conditions.
Photoelectrochemical (PEC) reforming - PEC reforming involves the use of PEC systems/assemblies which consist of separated (photo)electrodes, generally connected by a wire and submerged in solution (electrolyte). A photoelectrode consists of a light absorber and additional charge transport and catalyst layers to facilitate the redox processes. While conventional PEC systems typically require a bias or voltage input in addition to the energy obtained from incident light irradiation, PEC reforming ideally operates with a single light absorber without any external bias or voltage (that is, completely driven by sunlight). PEC reforming can already produce clean fuels and valuable chemicals with high selectivity and achieve production rates which are 2-4 orders of magnitude higher than conventional PC processes. The spatial separation between the redox processes offered by PEC systems allows flexibility in the screening and integration of light absorbers and catalysts, as well as better product separation. PEC systems can also benefit from better spectral utilization, such as using solar concentrators or thermoelectric modules to harvest heat, thereby improving reaction kinetics and performance. These versatile, high-performing PEC arrangements therefore offer wide scope for further exploitation and research.
PV-EC reforming and extension to 'electroreforming' systems - PV-EC reforming refers to the use of electricity generated from photovoltaic panels (and therefore driven by sunlight) to drive electrochemical (electrolysis) reactions for waste reforming. The concept of PV-EC reforming can be further extended to 'electroreforming', where renewable electricity from sources other than the sun (for example, wind, hydro, nuclear, among others) is used to power the electrochemical reactions, achieving valuable fuel and chemical production from waste feedstocks. While traditionally most electrolysers, including commercial ones, focus on water splitting to produce hydrogen, new electrochemical systems, catalysts and concepts have emerged that explore waste substrates as sustainable feedstocks.
Introduction of 'Photon Economy'
An important concept introduced in the context of solar reforming is the 'photon economy', which, as defined by Bhattacharjee, Linley and Reisner, is the maximum utilization of all incident photons to maximize product formation and value creation. An ideal solar reforming process is one where the light absorber absorbs incident UV and visible photons with maximum quantum yield, generating a high charge-carrier concentration to drive the redox half-reactions at maximum rate. The residual, non-absorbed low-energy IR photons may in turn be used for boosting reaction kinetics, waste pre-treatment or other means of value creation (for example, desalination). Proper light and thermal management through various means (such as solar concentrators or thermoelectric modules) is therefore encouraged to achieve both an atom-economical and a photon-economical approach and extract maximum value from solar reforming processes.
Reception and media
The technological advancements in solar reforming have garnered widespread interest in recent years. The work of scientists at Cambridge on PC reforming of raw lignocellulosic biomass and pre-treated polyester plastics to produce hydrogen and organics attracted the attention of several stakeholders. The recent technological breakthrough leading to the development of high-performing solar-powered reactors (PEC reforming) for the simultaneous upcycling of the greenhouse gas CO2 and waste plastics to sustainable products received widespread acclaim and was highlighted in several prominent national and international media outlets. Solar reforming processes primarily developed in Cambridge were also selected as "one of the eleven great ideas from British universities that could change the world" by the Sunday Times (April 2020 edition) and featured in the UK Prime Minister's speech on Net Zero: "Or the researchers at Cambridge who pioneered a new way to turn sunlight into fuel" (a reference to solar reforming, which was a major subset of the broader research activities at Cambridge).
Outlook and future scope
Solar reforming is currently in the development phase, and the scalable deployment of a particular solar reforming technology (PC, PEC or PV-EC) would depend on a variety of factors. These include deployment location and sunlight variability/intermittency, characteristics of the chosen waste stream, viable pre-treatment methods, target products, the nature of the catalysts and their lifetime, fuel/chemical storage requirements, land use versus open water sources, capital and operational costs, production and solar-to-value creation rates, and governmental policies and incentives, among others. Solar reforming need not be limited to the conventional chemical pathways discussed, and may also include other relevant industrial processes such as light-driven organic transformations, flow photochemistry and integration with industrial electrolysis. The products of conventional solar reforming, such as green hydrogen or other platform chemicals, have a broad value chain. It is also now understood that sustainable fuel/chemical-producing technologies of the future will rely on biomass, plastics and CO2 as key carbon feedstocks to replace fossil fuels. Therefore, with sunlight being abundant and the cheapest source of energy, solar reforming is well-positioned to drive decarbonization and facilitate the transition from a linear to a circular economy in the coming decades.
See also
Artificial photosynthesis
Circular economy
Conference of the parties
Electrochemical reduction of carbon dioxide
Electrochemistry
Hydrogen economy
Net zero emissions
Photocatalysis
Photoelectrochemistry
Solar fuel
References
Sustainability
Sustainable energy
Energy
Engineering
Science and technology
University of Cambridge
Solar energy
Chemistry
Materials science
Chemical industry
Climate change mitigation
Green chemistry | Solar reforming | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,404 | [
"Green chemistry",
"Chemical engineering",
"Environmental chemistry",
"nan"
] |
76,089,082 | https://en.wikipedia.org/wiki/Khazan%20system | The Khazan is a traditional farming system of Goa, India. It comprises mainly rice-fish fields established on reclaimed coastal wetlands, salt marshes and mangrove forests. It involves construction of levees and sluice gates to prevent sea water from entering the fields.
The Bandora (Bandiwade) copper-plate inscription of Anirjita-varman (likely a Konkan Maurya king), dated to the 5th–6th century on palaeographical grounds, refers to the khazan system as khajjana. It records the grant of tax-exempt land in Dwadasa-desha (modern Bardez), including one hala (a unit of land) of khajjana land. The recipient of the grant was expected to convert this wetland into a cultivated field by constructing a bund to prevent salty sea water from entering the land.
Historically, an association of villagers (gaunkaris) maintained the local khazan fields and its associated levees. This system continued under the Portuguese rule, with communidades maintaining the khazan system through an association of farmers (bhous or bhaus).
References
Bibliography
Flood control in India
Coastal construction
Wetlands of India | Khazan system | [
"Engineering"
] | 250 | [
"Construction",
"Coastal construction"
] |
76,090,272 | https://en.wikipedia.org/wiki/Connie%20Roth | Connie Barbara Roth (born 1974) is a Canadian-American soft matter physicist and polymer scientist whose research concerns the glass transition and aging in polymer films. She is a professor of physics at Emory University.
Education and career
Roth became interested in physics as a teenager in Toronto through the MacGyver television show, and began her interest in polymer films through studying paper and toner in a summer internship at the Xerox Research Centre of Canada. She studied physics as an undergraduate at McMaster University in Ontario, graduating in 1997. She went to the University of Guelph, also in Ontario, for graduate study in physics, earning a master's degree in 1999 and completing her Ph.D. in 2004.
After postdoctoral research at Simon Fraser University in British Columbia and Northwestern University in Chicago, Roth joined the Emory University faculty in 2007. She was promoted to associate professor in 2013 and full professor in 2021.
Recognition
Roth was named as a Fellow of the American Physical Society (APS) in 2019, after a nomination from the APS Division of Polymer Physics, "for exceptional contributions to the understanding of glass transition and aging phenomena in polymer films and blends". She was the 2019 recipient of the Fellows Award of the North American Thermal Analysis Society.
References
External links
Home page
1974 births
Living people
Canadian physicists
Canadian women physicists
American physicists
American women physicists
Polymer scientists and engineers
McMaster University alumni
University of Guelph alumni
Emory University faculty
Fellows of the American Physical Society | Connie Roth | [
"Chemistry",
"Materials_science"
] | 299 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
76,090,350 | https://en.wikipedia.org/wiki/Rheology%20of%20peanut%20butter | Peanut butter is a viscoelastic food that exhibits both solid and fluid behaviors. It consists of ground up peanuts and may contain additional additives, such as stabilizers, sugars, or salt. Its characteristic soft, spreadable texture can be further defined through rheology the study of flow and deformation of matter, affecting texture, consistency, and mouthfeel. Specifically for peanut butter, rheology can be used to more accurately define characteristics, such as spreadability and grittiness.
Soft matter context
In a soft matter context, peanut butter can be considered a colloidal dispersion, in which solid, insoluble peanut particles are suspended in liquid oil. There are two types of peanut butter, which behave differently at room temperature. Non-stabilized peanut butter, also known as "natural" or "100%" peanut butter, consists only of ground peanuts and peanut oil and may contain seasonings such as salt. In natural peanut butter at room temperature, the insoluble peanut particles separate from the peanut oil, and the difference in density causes the peanut oil to float upwards. Stabilized peanut butter contains additional ingredients, such as vegetable oil, to prevent the ground peanuts and peanut oil from separating into two layers.
During the grinding process, the peanuts release oils, forming a peanut paste consisting of peanut oil and ground peanut solids. The grinding process also raises the overall product temperature, and at this point a stabilizer, such as hydrogenated vegetable oil, may be added. At this temperature the stabilizer melts, dispersing uniformly through the peanut paste. The stabilizer then crystallizes once the product returns to ambient temperature, and the crystalline lattices that form trap the oil and peanut particles within the paste. This prevents the final peanut butter from separating into two phases.
Without the stabilizer, the peanut oil alone is not enough, as it cannot crystallize at room temperature: peanut oil melts below room temperature, so the oils in natural peanut butter remain liquid, causing phase separation. In stabilized peanut butter, the microstructural features remain well dispersed in a matrix of stabilized oil thanks to crystallization, while in unstabilized peanut butter the features cannot retain the same uniformity.
Methods to characterize peanut butter rheology
For most viscous semi-liquid foods, rheological characteristics are determined in shear flow using a coaxial viscometer. However, peanut butter is not only highly viscous but also self-lubricating, meaning it releases oils under shear. If placed in a typical coaxial viscometer, the resulting flow pattern is a distorted shear flow or plug flow. For accurate data, rheometers typically require a no-slip condition, which the properties of peanut butter do not satisfy. This makes its rheology particularly difficult to study. A few methods have been devised to overcome this.
Squeezing flow viscosimetry
Squeezing flow viscosimetry uses two parallel plates to compress a fluid uniaxially. This method can be used to better understand the viscoelastic properties of peanut butter. Peanut butter samples are placed between two lubricated plates and subjected either to uniaxial deformation at various constant displacement rates or to uniaxial creep deformation under various constant loads. If a sample retains a cylindrical shape without bulging as the plates compress it, this is evidence of the absence of shear flow.
Using this method, peanut butter has been determined to be a power-law fluid with shear thinning properties. In other words, under high shear rates, there is a lower apparent viscosity. This is likely due to the size difference in peanut and oil particles. The larger peanut particles likely form loosely bound aggregates that break down as shear rate increases (e.g. mixing), which allow the oil to better disperse between peanut particles, resulting in a reduced viscosity.
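In symbols, the power-law (Ostwald–de Waele) model takes the standard form

\[ \tau = K\,\dot{\gamma}^{\,n}, \qquad \eta_{\mathrm{app}} = \frac{\tau}{\dot{\gamma}} = K\,\dot{\gamma}^{\,n-1}, \]

where \(\tau\) is the shear stress, \(\dot{\gamma}\) the shear rate, \(K\) the consistency index, and \(n\) the flow behavior index; shear thinning corresponds to \(n < 1\), so the apparent viscosity falls as the shear rate rises. (The form is standard; no fitted values of \(K\) or \(n\) from the studies above are implied.)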
Rough plates with parallel plate rheometers
Another way to overcome wall-slip effects is to roughen the contact surface of parallel-plate rheometers using a material such as sandpaper. To determine whether this method sufficiently reduces wall-slip effects, stress growth experiments can be conducted: if the stress over time is independent of gap size, then wall slip has been successfully reduced.
Rheological properties
Under the Bingham model, the apparent yield stress of the stabilized suspension (374 Pa) was significantly larger than that of the unstabilized sample (27 Pa). This is likely due to the stabilizing agent: during the grinding stage, the stabilizer disperses around the peanut particles, and at room temperature it crystallizes around them, creating a strong network of particles within the suspension that resists the onset of flow. In unstabilized peanut butter, the peanut oil remains in a liquid state; even when the peanut particles are mixed in homogeneously, the peanut butter remains more liquid-like.
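For reference, the Bingham model has the standard form

\[ \dot{\gamma} = 0 \ \ \text{for } \tau \le \tau_0, \qquad \tau = \tau_0 + \mu_p\,\dot{\gamma} \ \ \text{for } \tau > \tau_0, \]

where \(\tau_0\) is the yield stress (374 Pa and 27 Pa for the stabilized and unstabilized samples above) and \(\mu_p\) is the plastic viscosity: no flow occurs until the applied stress exceeds the yield stress.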
Creep (stress vs. strain) experiments have been conducted to determine the viscosity of peanut butter. In stabilized peanut butter, under stresses up to 250 Pa, the viscosity increases rapidly with increasing strain, exemplifying solid-like behavior; at stresses greater than 250 Pa, stabilized peanut butter displays liquid-like behavior. In an unstabilized sample, the same viscoelastic transition was found at 10 Pa.
Both stabilized and unstabilized peanut butter displayed highly non-linear behavior, and the storage (G′) and loss (G″) moduli were determined. For both types, G′ and G″ decrease until a critical strain amplitude is reached; beyond this critical point, both moduli start to increase. The initial decrease was likely due to structural breakdown under strain: as mentioned previously, increasing strain breaks up loosely aggregated peanut particles, allowing a more homogeneous oil–peanut mixture to form. The increase in moduli beyond the critical strain, however, implies that a less homogeneous structure is being formed, causing greater resistance to flow. This may mean that at some critical strain the particles start to behave in a shear-thickening manner. A possible reason is that the maximum volume packing fraction changes with strain amplitude, so that at a critical strain the flow causes the particles to adopt a less ordered structure, resulting in an increase in viscosity.
Complex viscosity is a measure of the total resistance to flow as a function of angular frequency. For peanut butter, the complex viscosity measured on the first sweep of increasing angular frequency was very high. However, if the angular frequency was decreased and then increased again, a different behavior emerged, and the peanut butter could not regain its initial complex viscosity. This shows that once the existing structure of the sample is broken, its thixotropic effects (the rheological properties dependent on flow history) are less pronounced.
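In oscillatory shear, the magnitude of the complex viscosity follows from the two moduli by the standard relation

\[ |\eta^{*}(\omega)| = \frac{\sqrt{G'(\omega)^{2} + G''(\omega)^{2}}}{\omega}, \]

so the structural breakdown described above shows up directly as a loss of complex viscosity on the second frequency sweep.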
Other factors
By varying the grinding time of the peanuts, the resulting rheology and texture of natural peanut butter (with no stabilizer) can be altered. More specifically, as grinding time increases, the apparent viscosity decreases. This is likely due to the additional peanut oil released by longer grinding, whose lubricating effect decreases viscosity.
Increasing the grinding time also produced peanut butter with a narrower particle size distribution and higher density. As smaller particles compact better, with less void space, than larger particles, density increases as grinding time increases. For shorter grinding times, there is a wider particle size distribution, meaning the overall peanut particle size is less uniform. This results in a wider linear viscoelastic region and allows unstabilized peanut butter to behave more like stabilized peanut butter, in which the peanuts' protein bodies and cell wall fragments are distributed more uniformly throughout the product rather than clumping. If the particle size is more widely distributed, it mimics the particle size distribution of stabilized peanut butter, resulting in a more stable natural peanut butter.
Applications
The rheology of peanut butter affects its texture, flavor, storage stability, and overall quality. This understanding can be applied when selecting better or alternative stabilizers for peanut butter, or improved grinding processes for unstabilized peanut butter, to prevent oil separation more effectively.
References
Peanut butter
Food science
Rheology | Rheology of peanut butter | [
"Chemistry"
] | 1,747 | [
"Rheology",
"Fluid dynamics"
] |
76,090,560 | https://en.wikipedia.org/wiki/Ro65-6570 | Ro65-6570 is an opioid drug. It has a potential use in preventing the addiction to other opioids.
Mechanism of action
Ro65-6570 is an opioid drug; it works by activating opioid receptors. However, instead of acting at the mu, kappa and delta receptors, it is an agonist at the nociceptin receptor.
Potential uses
Analgesic
Ro65-6570 has analgesic properties. In rats, it reduces cancer pain as well as pain caused by arthritis.
Prevention of opioid addiction
Although it is an opioid agonist, Ro65-6570 did not display addictive properties; instead, it reduced the addictive properties of other opioids without affecting their analgesic effect. This could make it useful in combination with more potent opioids: for example, oxycodone combined with Ro65-6570 would reduce pain but be less addictive than oxycodone alone. This effect was antagonized by the nociceptin receptor antagonist J-113,397, further suggesting that the action is linked to the NOP receptor.
References
Nociceptin receptor agonists
Imidazolidinones
Piperidines
Spiro compounds
Anilines | Ro65-6570 | [
"Chemistry"
] | 277 | [
"Organic compounds",
"Spiro compounds"
] |
76,090,672 | https://en.wikipedia.org/wiki/Fugro%20SpAARC | The Fugro SpAARC Space Automation, Artificial Intelligence and Robotics Control (SpAARC) facility is a mission control center created by a collaboration between the Australian Space Agency, the government of Western Australia (WA) and the Dutch company Fugro. SpAARC provides telerobotic control for both spaceflight and terrestrial vehicles. SpAARC, opened in November 2020, is Fugro's largest remote operations center. SpAARC is located in Perth, Australia.
Uses
Fugro SpAARC was selected as the contingency control center for the Intuitive Machines Nova-C IM-1 lunar landing mission.
See also
Australian Space Agency
References
External links
Fugro SpAARC
Spaceflight
Rooms
Spaceflight technology
Command and control
Technology companies of Australia
Robotics companies of Australia
Companies based in Perth, Western Australia | Fugro SpAARC | [
"Astronomy",
"Engineering"
] | 169 | [
"Outer space",
"Spacecraft stubs",
"Astronomy stubs",
"Rooms",
"Spaceflight",
"Architecture"
] |
65,930,624 | https://en.wikipedia.org/wiki/Disease%20package | In plant science, the disease package of a cultivar is the susceptibility/resistance of that cultivar, in vague overall terms. It is not precise in the absolute sense but is meant to be useful when comparing one cultivar to another, relatively.
References
Plant pathogens and diseases | Disease package | [
"Biology"
] | 61 | [
"Plant pathogens and diseases",
"Plants"
] |
65,930,921 | https://en.wikipedia.org/wiki/Rapidly%20Attachable%20Fluid%20Transfer%20Interface | Rapidly Attachable Fluid Transfer Interface (RAFTI) is a standard interface developed by Orbit Fab for transferring fluids, e.g., propellants, in space. It has been defined by a group of 30 companies.
The interface specification has high and low pressure variants, both for operation between -40 °C and 120 °C.
Low pressure: for fluids such as MMH, UDMH, water, H2O2, methanol, kerosene, green monopropellants, isopropyl alcohol, HFE, and N2O.
High pressure: for gases such as nitrogen, helium, xenon, and krypton.
History - timeline
2020: A first implementation is planned to be tested in space in 2021 as part of a prototype fuel depot.
2021: A free-flying orbital demo was launched in June to test transfers of high-test peroxide.
In 2024, Orbit Fab announced a production ramp-up and prices, and said that three US Space Force Tetra 5 satellites would be launched in 2025 with RAFTI interfaces for a refuelling demo in geostationary orbit.
See also
Robotic Refueling Mission NASA tests on the ISS (including cryogenic)
References
Rocket propellants | Rapidly Attachable Fluid Transfer Interface | [
"Astronomy"
] | 245 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
65,931,321 | https://en.wikipedia.org/wiki/Data%20Lords | Data Lords is a large-ensemble jazz album by the Maria Schneider Orchestra that was released in 2020.
Summary
The tracks of the album are thematically organized in two sections, which the liner notes call "a story of two worlds" and which function much like a two-disc release. The two sections are named "The Digital World" and "The Natural World".
Accolades
2021 - Finalist for the Pulitzer Prize for Music
2021 - Grammy Award for Best Large Jazz Ensemble Album
2021 - The track "Sputnik" won the Grammy Award for Best Instrumental Composition
2021 - Le Grand Prix de l’Académie du Jazz for Best Record of the Year
Track listing
Personnel
Greg Gisbert – trumpet, flügelhorn
Tony Kadleck – trumpet, flügelhorn
Nadje Noordhuis – trumpet, flügelhorn
Mike Rodriguez – trumpet, flügelhorn
Marshall Gilkes – trombone
Ryan Keberle – trombone
Keith O'Quinn – trombone
George Flynn – bass trombone
Dave Pietro – alto saxophone, clarinet, piccolo, flute
Steve Wilson – alto saxophone, soprano saxophone, clarinet, flute
Donny McCaslin – tenor saxophone, flute
Rich Perry – tenor saxophone
Scott Robinson – baritone, Bb, bass & contrabass clarinets, muson
Gary Versace – accordion
Frank Kimbrough – piano
Ben Monder – guitar
Jay Anderson – bass
Johnathan Blake – drums, percussion
Additional Credits
Producer: Brian Camelio, Maria Schneider, Ryan Truesdell
Associate Producer: Zachary Bornheimer
Engineering: Brian Montgomery, assisted by Charles Mueller and Edwin Huet
Trumpet electronics programming on "CQ CQ, Is Anybody There?": Michael Lenssen
Recording production assistance: Eunha So
Mixing: Brian Montgomery and Maria Schneider
Mastering: Gene Paul at G&J Audio, and Nate Wood
Package Design:
Illustration: Aaron Horkey
Graphic design: Cheri Dorr
Print production: Franklin Press, Inc.
Session photography: Briene Lermitte
Video documentation on ArtistShare: Marie Le Claire assisted by Erin Harper
References
External links
Data Lords page on ArtistShare.com
Trailer
Pre-concert interview
2020 albums
Big band albums
Grammy Award for Best Large Jazz Ensemble Album
Jazz albums by American artists
Maria Schneider (musician) albums
Works about the Internet | Data Lords | [
"Technology"
] | 447 | [
"Works about the Internet",
"Works about computing"
] |
65,931,909 | https://en.wikipedia.org/wiki/Manuka%20oil | Manuka oil is an essential oil obtained from the steam distillation of the leaves and small branches of the tree Leptospermum scoparium (commonly known as mānuka, or New Zealand tea tree).
Though it is used in a wide range of cosmetics, cosmeceuticals, and naturopathic and topical medications, manuka oil is a relatively new development; it was first identified during the 1970s, has been produced commercially since the 1980s, and has been investigated by global research teams since then.
Main constituents
The composition of manuka oil is dependent on its chemotype. Manuka oil from the East Cape region of New Zealand, described as a high triketone chemotype, is commercially important because of its antimicrobial properties (the ability to kill bacteria, viruses, yeasts and fungi).
The triketone chemotype of manuka oil from the East Cape contains over 20% triketones (often as high as 33%), comprising flavesone, leptospermone and iso-leptospermone. Manuka that grows in the Marlborough Sounds region of New Zealand also has relatively high levels of triketones, between 15 and 20%. In contrast, manuka that grows in Australia has a different essential oil profile that does not include triketones.
More than ten other chemotypes of New Zealand manuka have been described. These oils are rich in terpene compounds, particularly sesquiterpenes such as myrcene, humulene, caryophyllene, α-pinene, linalool, α-copaene, elemene, selinene, calamenene, cubebene and cadinene, among others.
Production
Until recently, most of New Zealand's manuka oil production came from wild-harvested manuka. Harvesters used brush cutters to gather fresh branches, leaving the bushes viable for regrowth in future years. In recent years, manuka plantations in the East Cape region of New Zealand have allowed the mechanical harvesting of manuka leaf to produce essential oil at a commercial scale. The oil is distilled from the leaves and small branches of the manuka bush using steam distillation, in which steam is passed through the leaf material; the steam is then condensed, and the oil floats on top of the condensed water, from where it is drawn off. Distillation processes vary from the superheated fast-extraction method to slower ambient-pressure distillation at lower temperatures. Each tonne of foliage produces 3–5 litres of manuka essential oil.
References
Essential oils | Manuka oil | [
"Chemistry"
] | 539 | [
"Essential oils",
"Natural products"
] |
65,933,012 | https://en.wikipedia.org/wiki/The%20Meaning%20of%20Relativity | The Meaning of Relativity: Four Lectures Delivered at Princeton University, May 1921 is a book published by Princeton University Press in 1922 that compiled the 1921 Stafford Little Lectures at Princeton University, given by Albert Einstein. The lectures were translated into English by Edwin Plimpton Adams. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity and is his only book that provides an accessible overview of the physics and mathematics of general relativity. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clearly". Among other reviews, the lectures were the subject of the 2017 book The Formative Years of Relativity: The History and Meaning of Einstein's Princeton Lectures by Hanoch Gutfreund and Jürgen Renn.
Background
The book contains four of Einstein's Stafford Little Lectures that were given at Princeton University in 1921. The lectures follow a series of 1915 publications by Einstein developing the theory of general relativity. During this time, there were still many controversial issues surrounding the theories and he was still defending several of his views. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity. It is also his only book that provides an overview of the physics and mathematics of general relativity in a comprehensive manner that was accessible to non-specialists. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clearly".
On December 27, 1949, The New York Times ran a story titled "New Einstein theory gives a master key to the universe" in reaction to the new appendix in the book's fifth edition in which Einstein expounded upon his latest unification efforts. Einstein had nothing to do with the article and subsequently refused to speak with any reporters on the matter; he reportedly used the message "[c]ome back and see me in twenty years" to brush off their inquiries.
Content
The book is made up of four lectures. The first is titled "Space and Time in Pre-Relativity Physics". The second lecture, titled "The Theory of Special Relativity", discusses the special theory of relativity. The third and fourth lectures cover the general theory of relativity in two parts. Einstein added an appendix to update the book for its second edition, which was published in 1945. A second appendix, discussing the nonsymmetric field, was added for the fifth edition in 1955; it contains Einstein's attempts at a unified field theory.
Reception
The book has received many reviews since its initial publication. The first edition of the book was reviewed by Nature in 1923. Other early versions of the book were reviewed by George Yuri Rainich in 1946, as well as Abraham H. Taub, Philip Morrison, and I. M. Levitt in 1950. Reviews for the book's fifth edition include a short announcement in 1955 that called the book "a well-known classic". A 1956 review of the fifth edition summarizes its publication history and contents and closes by stating "Einstein's little book then serves as an excellent tying-together of loose ends and as a broad survey of the subject."
Among other references to the book, a 2005 column of The Physics Teacher, included the work in a list of books "by and about Einstein that all physics teachers should have" and "should have immediate access to", while a 2019 review of another work opened by stating: "Every teacher of General Relativity depends heavily on two texts: one, the massive Gravitation by Misner, Thorne and Wheeler, the second the diminutive The Meaning of Relativity by Einstein." The Meaning of Relativity is the focus of a 2017 book, The Formative Years of Relativity by Hanoch Gutfreund and Jürgen Renn, which described The Meaning of Relativity as "Einstein's definitive exposition of his special and general theories of relativity".
Publication history
Original English editions
Notable reprints
German editions
See also
List of scientific publications by Albert Einstein
Annus Mirabilis papers
History of general relativity
History of special relativity
References
Further reading
External links
The Meaning of Relativity 5th edition at Princeton University Press
The Meaning of Relativity 5th edition at JSTOR
The Meaning of Relativity at Springer Link
An insightful tome recounts the heady early days of general relativity review by Andrew Robinson at sciencemag.org
1922 non-fiction books
Physics books
Theory of relativity
Works by Albert Einstein
Princeton University Press books | The Meaning of Relativity | [
"Physics"
] | 966 | [
"Theory of relativity"
] |
65,933,387 | https://en.wikipedia.org/wiki/Mycena%20amicta | Mycena amicta, commonly known as the coldfoot bonnet, is a species of mushroom in the family Mycenaceae. It was first described in 1821 by mycologist Elias Magnus Fries.
Description
Fresh specimens appear unmistakably blue; this fades to brownish hues with age.
The cap, initially conical to convex in shape, flattens out with age. The cap cuticle can be peeled. The gills are close, and the stipe is covered in powdery hairs.
The mushrooms appear in small groups on the trunks of broadleaved trees and, particularly in the Pacific Northwest, around rotted conifer wood.
References
amicta
Fungi described in 1821
Fungi of Europe
Fungus species | Mycena amicta | [
"Biology"
] | 150 | [
"Fungi",
"Fungus species"
] |
65,933,492 | https://en.wikipedia.org/wiki/Tracklib | Tracklib is a music service that allows producers to sample original music and clear the samples for official use. The platform was founded with the aim to solve legal and ethical issues surrounding sampling and music clearances. The platform has been previously used to sample and clear tracks for commercial releases by J. Cole, Lil Wayne, DJ Khaled, Mary J Blige, Brockhampton, A-Reece among others.
History
Tracklib is based in Stockholm, Sweden, and was originally founded in 2014. After an invite-only beta version in 2017, the music service officially launched to the public in April 2018. In May 2020, Tracklib changed its service to a subscription model.
Services
The catalog of Tracklib consists of original master recordings and stems. Each track belongs to one of three tiers (Category A, B, or C), each with its own purchase and clearance costs. Users can browse and hear all music before downloading it in WAV format for use in a digital audio workstation (DAW) such as Ableton, Reason, or FL Studio. In 2019, Tracklib developed and launched a technology for users to select and preview loops. Tracklib functions as an intermediary between record labels, publishers, copyright owners, and artists, allowing users to clear all music and purchase a license for official usage of the selected recording(s). The difference from other music services such as Splice and Loopmasters is that Tracklib only includes original master recordings and stems: all music has been previously released, and no royalty-free sounds or sample packs are available on Tracklib.
Catalog
Original master recordings on Tracklib include music from artists such as Bob James, Louis Armstrong, Billie Holiday, Sly and Robbie, Ray Charles, across genres such as jazz, R&B/soul, reggae, classical music, rock music, and hip hop. The catalog also includes previously unreleased recordings by Isaac Hayes.
Releases
J. Cole - "Middle Child" (6× platinum)
¥$ (Kanye West & Ty Dolla $ign) - “Burn”
DJ Khaled - "Holy Mountain"
Brockhampton - "Dearly Departed"
Lil Wayne - "Harden"
Fred Again - "Leavemealone"
Nas - "WTF SMH"
Drake - "Stories About My Brother"
Nicki Minaj - "Super Freaky Girl"
Mary J. Blige - "Know"
Phantogram - Ceremony
Vic Mensa - "Let U Know"
Other notable artists with songs containing Tracklib samples are Firebeatz, A-Trak, Young M.A, $NOT & Statik Selektah.
Advisory board
Tracklib's advisory board consists of producers Prince Paul, Erick Sermon, and Drumma Boy, later joined by producer Zaytoven in 2018 and Scott Storch in 2020. Former Spotify executives Petra Hansson and Niklas Ivarsson joined the advisory board in 2019.
See also
Loopmasters
Splice (platform)
Grooveshark
AccuRadio
References
Computing websites
Cross-platform software
Internet properties established in 2012
Project management software
Project hosting websites
Sampling (music)
Version control | Tracklib | [
"Technology",
"Engineering"
] | 635 | [
"Software engineering",
"Computing websites",
"Version control"
] |
65,935,430 | https://en.wikipedia.org/wiki/Joshua%20Zak | Joshua Zak (; 26 September 1929 – 14 March 2024) was an Israeli theoretical physicist and writer known for the Zak transform, Zak phase and the Magnetic Translation Group. He received the 2022 Israel Prize and 2014 Wigner medal.
Most cited publications
Zak J. Berry's phase for energy bands in solids. Physical Review Letters. 1989 Jun 5;62(23):2747.
Zak J. Magnetic translation group. Physical Review. 1964 Jun 15;134(6A):A1602.
Zak J, Moog ER, Liu C, Bader SD. A universal approach to magneto-optics. Journal of Magnetism and Magnetic Materials. 1990 Sep 1;89(1–2):107–23.
Zak J, Moog ER, Liu C, Bader SD. Magneto-optics of multilayers with arbitrary magnetization directions. Physical Review B. 1991 Mar 15;43(8):6423.
Zak J. Finite translations in solid-state physics. Physical Review Letters. 1967 Dec 11;19(24):1385.
Honors and awards
Wigner Medal (2014)
Israel Prize, for his achievements in physics (2022)
The Brown–Zak fermion and the Zak transform are named after him.
References
External links
Profile at the Technion – Israel Institute of Technology website
1929 births
2024 deaths
Israel Prize in physics recipients
Theoretical physicists
Scientists from Vilnius
Polish Jews in Israel
Israeli physicists
Jewish physicists | Joshua Zak | [
"Physics"
] | 310 | [
"Theoretical physics",
"Theoretical physicists"
] |
65,936,305 | https://en.wikipedia.org/wiki/Auke%20Ijspeert | Auke Jan Ijspeert (born 1971 in Geneva) is a Swiss-Dutch roboticist and neuroscientist. He is a professor of biorobotics in the Institute of Bioengineering at EPFL, École Polytechnique Fédérale de Lausanne, and the head of the Biorobotics Laboratory at the School of Engineering.
Career
He studied physics at EPFL and received the degree of "Ingénieur physicien" (equivalent to a master's degree) in 1995. He joined John Hallam and David Willshaw at the University of Edinburgh as a doctoral student, and in 1999 graduated with a PhD in artificial intelligence on the "Design of artificial neural oscillatory circuits for the control of lamprey- and salamander-like locomotion using evolutionary algorithms". He worked as a postdoctoral researcher with Michael A. Arbib and Stefan Schaal at the University of Southern California (USC), and then at EPFL with Jean-Daniel Nicoud and with Luca Maria Gambardella (Dalle Molle Institute for Artificial Intelligence Research - IDSIA).
In 2001, he became a research assistant professor at the Department of Computer Science of the University of Southern California, and an external collaborator at ATR (Advanced Telecommunications Research institute) in Japan. From 2003 to 2017, he was an adjunct faculty member in the Department of Computer Science of the University of Southern California. In 2002, he received a Swiss National Science Foundation assistant professorship at the School of Computer and Communication Sciences of EPFL. In 2009, he was named associate professor at EPFL's School of Engineering, and in 2016 he was promoted to full professor. He leads the Biorobotics Laboratory at the School of Engineering.
Research
The Ijspeert group's transdisciplinary research sits at the intersection of robotics, computational neuroscience, nonlinear dynamical systems, and applied machine learning. Employing numerical simulations and robots, they aim at a better understanding of animal locomotion and movement control, and, taking inspiration from nature, they design new types of robots and locomotion controllers. Their research has been featured in public presentations at TED Global and World.minds conferences.
Their research focuses on computational aspects of locomotion control, sensorimotor coordination, and learning in animals and robots. It also covers rehabilitation robotics, such as exoskeletons, and locomotion restoration. Their interests extend to research projects in areas such as neuromechanical simulations of locomotion and movement control; systems of coupled nonlinear oscillators for locomotion control (see the sketch below); design and control of amphibious, legged, and reconfigurable robots; and control of humanoid robots and of exoskeletons.
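As an illustration of such oscillator-based controllers, the following minimal sketch (illustrative code, not taken from the Biorobotics Laboratory; the function name and all parameter values are assumptions) simulates a chain of coupled phase oscillators of the kind used as central pattern generators for undulatory locomotion:

```python
import numpy as np

def cpg_step(phases, freqs, weights, biases, dt=0.01):
    """Advance a network of coupled phase oscillators by one Euler step.

    phases  : current phase of each oscillator (rad)
    freqs   : intrinsic frequency of each oscillator (Hz)
    weights : weights[i, j] = coupling strength of oscillator j onto i
    biases  : biases[i, j] = phase lag that oscillator i keeps behind j (rad)
    """
    # Each oscillator is pulled toward its neighbours' phases offset by the lag.
    coupling = np.sum(
        weights * np.sin(phases[None, :] - phases[:, None] - biases), axis=1
    )
    return phases + dt * (2 * np.pi * freqs + coupling)

# Four oscillators in a chain with nearest-neighbour coupling and a constant
# intersegment lag, producing a travelling wave along the body.
n = 4
phases = np.zeros(n)
freqs = np.full(n, 1.0)                 # 1 Hz intrinsic frequency (illustrative)
weights = np.zeros((n, n))
biases = np.zeros((n, n))
for i in range(n - 1):
    weights[i, i + 1] = weights[i + 1, i] = 4.0
    biases[i + 1, i] = 2 * np.pi / n    # segment i+1 lags segment i
    biases[i, i + 1] = -2 * np.pi / n
for _ in range(2000):
    phases = cpg_step(phases, freqs, weights, biases)
print(np.sin(phases))                   # oscillator outputs, e.g. joint setpoints
```

Once the network converges, each sin(phase) output can drive one joint, and changing a single drive signal (the intrinsic frequency) changes the speed of the whole gait, which is one reason such controllers are attractive for robots.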
The research of Ijspeert's group has been featured in news outlets such as IEEE Spectrum, New Scientist, Tech Crunch, Le Temps, Nature, SwissInfo, The Washington Post, Cosmos, CNN, SRF, Tages-Anzeiger, The Mirror, and Der Standard.
Distinctions
Ijspeert is an IEEE Fellow. He is a member of the board of reviewing editors of Science Magazine, and an associate editor of Soft Robotics, IEEE Transactions on Medical Robotics and Bionics, and for the International Journal of Humanoid Robotics. He has been an associate editor for the IEEE Transactions on Robotics (2009-2013) and a guest editor for the Proceedings of IEEE, IEEE Transactions on Biomedical Engineering, Autonomous Robots, IEEE Robotics and Automation Magazine, and Biological Cybernetics.
He is a recipient of the Young Professorship Award (2002), the Young Researcher Scholarship (1999), and the Young Researcher Scholarship (1995), all awarded by the Swiss National Science Foundation. He also received a Marie Curie Scholarship from the European Commission (1997).
Public involvements
Ijspeert is also involved in his local church, the parish of Ecublens-St-Sulpice, which is part of the Evangelical Reformed Church of the Canton of Vaud, and he has been a member of its parish council since 2019.
Selected works
References
External links
Website of the Biorobotics Laboratory
1971 births
Living people
École Polytechnique Fédérale de Lausanne alumni
Alumni of the University of Edinburgh
University of Southern California alumni
Academic staff of the École Polytechnique Fédérale de Lausanne
Swiss neuroscientists
Biological engineering
Dutch neuroscientists | Auke Ijspeert | [
"Engineering",
"Biology"
] | 899 | [
"Biological engineering"
] |
65,936,952 | https://en.wikipedia.org/wiki/Philosophy%20%26%20Technology | Philosophy & Technology is a quarterly peer-reviewed academic journal covering philosophy of technology. It is published by Springer Science+Business Media and the editor-in-chief is Luciano Floridi (University of Oxford). Besides regular issues, the journal publishes occasional special issues and topical collections on particular philosophical topics.
Abstracting and indexing
The journal is abstracted and indexed in EBSCO databases, PhilPapers, ProQuest databases, and Scopus.
References
External links
Ethics of science and technology
Philosophy of technology
Philosophy journals
Quarterly journals
English-language journals
Academic journals established in 1988 | Philosophy & Technology | [
"Technology"
] | 117 | [
"Philosophy of technology",
"Science and technology studies",
"Ethics of science and technology"
] |
65,937,478 | https://en.wikipedia.org/wiki/Principles%20of%20Optics | Principles of Optics, colloquially known as Born and Wolf, is an optics textbook written by Max Born and Emil Wolf that was initially published in 1959 by Pergamon Press. After going through six editions with Pergamon Press, the book was transferred to Cambridge University Press who issued an expanded seventh edition in 1999. A 60th anniversary edition was published in 2019 with a foreword by Sir Peter Knight. It is considered a classic science book and one of the most influential optics books of the twentieth century.
Background
In 1933, Springer published Max Born's book Optik, which dealt with all optical phenomena for which the methods of classical physics, and Maxwell's equations in particular, were applicable. In 1950, with encouragement from Sir Edward Appleton, the principal of Edinburgh University, Born decided to produce an updated version of Optik in English. He was partly motivated by the need to make money, as he had not been working long enough at Edinburgh to earn a decent pension, and at that time, was not entitled to any pension from his time working in Germany.
The first problem that Born had to tackle was that after the US joined the war in 1941, Optik had been reproduced and sold widely in the US, along with many other books and periodicals. This had been done under the aegis of the Office of Alien Property which was authorised to confiscate enemy property, so that neither the authors nor the publishers received any payment for these sales. When the war ended, the printing continued, still with no payment of royalties to authors or publishers. Born had been writing regularly to try and reclaim his book, pointing out that he was not an alien, as he had been a British citizen at the start of the war. He enlisted the support of various people and organisations, including the British Ambassador in Washington. In response, he got a letter saying that he would have to pay 2% of the retail price of any new book he wrote which was based on Optik. An article in the Manchester Guardian about how Jean Sibelius had been deprived of royalties in the same way, prompted him to write a letter describing his own situation. Eventually, his rights to the book were returned and he received backdated royalties.
He quickly realised that the important developments in optics which had occurred in the years since the original book was written would need to be covered. He approached Dennis Gabor, the inventor of holography, to collaborate with him in writing the book. Emil Wolf, a research assistant at Cambridge University, was invited to write a chapter in the book. Gabor subsequently dropped out because of time constraints. Born and Wolf were then the main authors, with specialist contributions from other authors. Wolf wrote several chapters and edited the other contributions; Born's input was a modified version of Optik, collaboration with Wolf in the planning of the book, and many discussions concerning disputed points, presentation and so on.
They hoped to complete the book by the end of 1951, but they were "much too optimistic". The book was actually first published in December 1959.
Problems with Pergamon Press and Robert Maxwell
Pergamon Press was a scientific publishing company which was set up in 1948 by Robert Maxwell and Paul Rosbaud. The latter had been a scientific advisor for Springer in Germany before and during the war and was one of the editors dealing with Optik. He was also an undercover agent for the Allies during the war. He persuaded the authors to place the book with Pergamon Press, a decision which they would later regret.
A detailed account is given by Gustav Born, Max's son. He explains how the libel laws in the UK prevented him from speaking about this until after Maxwell's death.
Maxwell tried to get the authors to agree to a much lower rate of royalties for US sales than was agreed in their contract, because the book was to be marketed by a different publisher, which would mean reduced profits for Pergamon. It was then actually marketed through the US branch of Pergamon, but the authors still received reduced royalties. They also found that the sales figures in their statements were lower than the true figures. A clause in the contract meant that they had to go to arbitration rather than go to court to resolve this. Gustav acted for his father in the matter, as Max Born was now living in Germany and was in his late seventies. The case was heard by Desmond Ackner (later Lord Ackner) in 1962. He found in favour of the authors on all counts. Nonetheless, they continued to be underpaid. Opening figures in one year's statement did not agree with closing figures from the previous year's statement. Some editions were reprinted several times but did not appear in the accounts at all. After Born's death, Wolf found that an international edition was being distributed in the Far East which he had not been told about. Pergamon sent him a small cheque when he raised the matter with them. When he threatened them with legal action, they sent another cheque for three times the amount. Wolf said that the book was reprinted seventeen times (not counting unauthorized editions and translations).
Rosbaud left Pergamon Press in 1956 "because he found Maxwell to be completely dishonest". Other authors told Gustav Born that they had had the same problems with Maxwell. They included Sir Henry Dale, who shared the Nobel Prize in Medicine in 1936, and Edward Appleton.
Contents
1st edition
The book aimed to cover only those optical phenomena which can be derived from Maxwell's electromagnetic theory, and is intended to give a complete picture of what was then known that could be derived from Maxwell's equations.
2nd edition
This was published in 1962. It contained corrections of errors and misprints.
Lasers had been developed since the 1st edition was published but were not covered because laser operation is outside the scope of classical optics. Some references to research which used lasers were included.
3rd edition
This was published in 1965. It again had correction of errors and misprints, and references to recent publications were added.
A new figure (8.54), donated by Leith and Upatnieks, showed the first three-dimensional holographic image. This related to the section in Chapter VIII describing Gabor's wavefront reconstruction technique (holography).
4th edition
This was published in 1968 and included corrections, improvements to the text, and additional references.
5th edition
This was published in 1974 and again included corrections, improvements to the text, and additional references.
Significant changes were made to Sections 13.1–13.3, which deal with the optical properties of metals. It is not possible to fully describe the interaction of an optical electromagnetic wave with a metal using classical optical theory. Nonetheless, some of the main features can be described, at least in qualitative terms, provided the frequency dependence of conductivity and the role of free and bound electrons are taken into account.
6th edition
This was published in 1980 and contained a small number of corrections.
7th edition
In 1997, publication of the book was transferred to Cambridge University Press, who were willing to reset the text, thus providing an opportunity to make substantial changes to the book.
The invention of the laser in 1960, a year after the first edition was published, had led to many new activities and entirely new fields in optics. A fully updated "Principles of Optics" would have required several new volumes, so Wolf decided to add only a few new topics, which would not require major revisions to the text.
A new section was added to Chapter IV, presenting the principles of computerised axial tomography (CAT), which has revolutionised diagnosis in medicine. There is also an account of the Radon transform, developed in 1917, which underlies the theory of CAT.
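For reference, in one common parametrization (given here for illustration, not quoted from the book), the Radon transform of a two-dimensional function \(f\) is

\[ (Rf)(s,\alpha) \;=\; \int_{-\infty}^{\infty} f\big(s\cos\alpha - t\sin\alpha,\; s\sin\alpha + t\cos\alpha\big)\, dt, \]

the integral of \(f\) along the line at signed distance \(s\) from the origin with normal direction \((\cos\alpha, \sin\alpha)\); CAT reconstruction amounts to inverting this transform from measured projections.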
An account of the Rayleigh–Sommerfeld diffraction theory was added to Chapter VIII, as it had become more popular. There is a debate as to whether it or the older Kirchhoff theory best describes diffraction effects.
A recently discovered phenomenon is presented, in which spectral analysis of the light distribution of superimposed broad-band light fields provides important physical information from which the coherence properties of the light can be deduced.
Chapter XIII was added, entitled "The theory of scattering of light by inhomogeneous media". The underlying theory was developed many years before in the analysis of quantum mechanical potential scattering, and had more recently been derived for optical scattering. Diffraction tomography is discussed; it is applied when the finite wavelength of the waves involved, e.g. optical and ultrasonic waves, cannot be ignored, as it can be in X-ray tomography.
Three new appendices were also added:
Proof of the inequality for the spectral degree of coherence
Evaluation of two integrals
Proof of Jones' lemma
Publication history
To date, there have been seven editions of the book.
The first six were published by Pergamon Press in 1959, 1962, 1965, 1968, 1974 and 1980. Cambridge University Press took over the book in 1997 and published an expanded seventh edition in 1999. A special Sixtieth Anniversary version was released in 2019, sixty years after the first edition.
Original editions
Reprints
In 1999, Wolf commented that there had been seventeen authorised reprints and an unknown number of unauthorised reprints.
The fifth edition was reprinted in 1975 and 1977. Between 1983 and 1993, the sixth edition of the book was reprinted seven times. Some of these reprints, including those in the years 1983 and 1986, included corrections.
Cambridge University Press produced a reprint of the 6th Edition in 1997. A reprint of the 7th Edition was produced in 2002 with corrections. Fifteen reprints were made before the 60th Anniversary edition was printed in 2019.
Translations
Reception
The first edition was very well received. A biography of Max Born said: "it presents a systematic treatment based on electromagnetic theory for all optical phenomena that can be described in terms of a continuous distribution of matter". Its timing was very opportune: the arrival of the laser shortly after its publication meant that the insights it provided into the description and analysis of light were directly applicable to the behaviour of laser light. It was extensively used by university teachers, and researchers used it as a source of rigorous information. Its excellent sales reflected its value to the world optics community.
Gabor said that the account of holography in the book was the first systematic description of the technique in an authoritative textbook. Gabor sent Wolf a copy of one of his papers with the inscription "Dear Emil, I consider you my chief prophet, Love, Dennis".
The seventh edition was reviewed by Peter W. Milonni, Eugene Hecht, and William Maxwell Steen. Previous editions of the book were reviewed by Léon Rosenfeld, Walter Thompson Welford, John D. Strong, and Edgar Adrian, among others.
Peter W. Milonni opened his review of the book by endorsing the book's dust jacket description, stating it is "one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past 40 years."
Eugene Hecht opened his review of the book by comparing the task to reviewing The Odyssey, in that it "cannot be approached without a certain awe and the foreknowledge that whatever you say is essentially irrelevant". Hecht then summarizes his own review, in order to help "anyone who hasn't the time to read the rest of this essay" by stating: "Principles of Optics is a great book, the seventh edition is a fine one, and if you work in the field you probably ought to own it." Hecht went on to state that the book "is a great, rigorous, ponderous, unwavering mathematical tract that deals with a wealth of topics in classical optics." He noted that the book can be hard to understand; he wrote: "This is a tour de force, never meant for easy reading." After analyzing some of the changes to the new edition, Hecht ended the review with the same summary as the introduction, emphasizing again that "if you work in the field you probably ought to own it".
See also
Bibliography of Max Born
List of textbooks in electromagnetism
References
Further reading
External links
1959 non-fiction books
1964 non-fiction books
1965 non-fiction books
1970 non-fiction books
1975 non-fiction books
1980 non-fiction books
1999 non-fiction books
2019 non-fiction books
Max Born
Optics
Physics education in the United Kingdom
Physics textbooks
Pergamon Press books | Principles of Optics | [
"Physics",
"Chemistry"
] | 2,533 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |